# ✨ panel-web-llm
This extension for HoloViz Panel introduces a client-side interface for running large language models (LLMs) directly in the browser.
It leverages WebLLM under the hood to provide an in-browser LLM execution environment, enabling fully client-side interactions without relying on server-side APIs.
## Features
- Run LLMs in the Browser: Execute large language models directly in the browser without requiring server-side APIs or cloud services.
- Offline Capability: Cache models locally in the browser, enabling offline use after initial download.
- Model Variety: Supports multiple models, including Llama 2 and Qwen 2.5. Check Available Models for the most up-to-date list.
- Privacy-Preserving: Keeps all computations client-side, ensuring data privacy and security.
- Panel Integration: Effortlessly incorporate LLM-powered features into interactive Panel applications.
## Pin Version

This project is in its early stages, so if you find a version that suits your needs, it's recommended to pin it, as updates may introduce breaking changes.
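For example, pin to the exact release you tested against (the version below is a placeholder):

```bash
pip install "panel-web-llm==<version>"
```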
## Installation

Install it via `pip`:
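```bash
pip install panel-web-llm
```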
## Usage

### Online
Try it out in Examples.
### Command Line Interface
Once installed, you can launch the web LLM in the terminal with the following command (the script name is assumed to match the package):
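```bash
panel-web-llm  # assumed console script name, matching the package
```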
Once the server launches and the `Load <model_name>` button has been clicked, the model is cached in your browser. That means even if you restart the server without internet access, you can still run that same model offline, as long as your browser cache is not cleared.
The following is an alias for convenience (the name shown is a guess; verify the actual entry point):
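```bash
pnwllm  # hypothetical alias; the real name may differ
```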
The default model used is `Qwen2.5-Coder-7B-Instruct-q4f16_1-MLC`. To default to another model:
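```bash
panel-web-llm <model_name>  # assumes the model name is passed as a positional argument
```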
Replace `<model_name>` with the name of the model you want to use. For a list of available models (the flag below is an assumption):
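```bash
panel-web-llm --list-models  # hypothetical flag; the actual option may differ
```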
### Python
You can seamlessly integrate the Web LLM interface into your Panel applications:
```python
import panel as pn

from panel_web_llm import WebLLMInterface

pn.extension()

# WebLLMInterface provides a ready-made chat UI with a built-in model loader
template = pn.template.FastListTemplate(
    title="Web LLM Interface", main=[WebLLMInterface()]
)
template.show()
```
If you don't like the built-in layout of `WebLLMInterface`, you can instead wrap `WebLLM` manually:
```python
import panel as pn

from panel_web_llm import WebLLM

pn.extension()

web_llm = WebLLM(load_layout="column")
# Drive a standard Panel ChatInterface with the WebLLM callback
chat_interface = pn.chat.ChatInterface(
    callback=web_llm.callback,
)
template = pn.template.FastListTemplate(
    title="Web LLM Interface",
    main=[chat_interface],
    sidebar=[web_llm.menu, web_llm],  # important to include `web_llm`
    sidebar_width=350,
)
template.show()
```
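If you save either snippet to a file, you can also serve it with Panel's CLI instead of calling `template.show()`; in that case, mark the template with `template.servable()`. The file name below is just an example:

```bash
panel serve app.py --show
```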
## Development

For a simple setup, use `uv`:

```bash
uv venv
source .venv/bin/activate  # on Linux; use the equivalent activation command on Windows or macOS
uv pip install -e ".[dev]"
pre-commit install
pytest tests
```
For the full GitHub Actions setup, use `pixi`; the task name below is an assumption (see `pixi.toml` for the tasks the template defines):
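```bash
pixi run test  # assumed task name; check pixi.toml for the actual tasks
```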
This repository is based on copier-template-panel-extension (you can create your own Panel extension with it)!
To update to the latest template version, run `copier update`; the flags shown below are typical but optional:
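```bash
copier update --defaults --trust  # reuse previous answers and allow the template's tasks to run
```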
Note: `copier` will show `Conflict` for files with manual changes during an update. This is normal. As long as there are no merge conflict markers, all patches applied cleanly.
## ❤️ Contributing

Contributions are welcome! Please follow these steps to contribute:

1. Fork the repository.
2. Create a new branch: `git checkout -b feature/YourFeature`.
3. Make your changes and commit them: `git commit -m 'Add some feature'`.
4. Push to the branch: `git push origin feature/YourFeature`.
5. Open a pull request.
Please ensure your code adheres to the project's coding standards and passes all tests.