Introducing EasyLLM - streamline open LLMs
Happy to introduce EasyLLM, an open-source Python package to streamline and unify working with open LLMs. EasyLLM is a side project I started, which I thought would be worth sharing with a broader community.
EasyLLM is an open-source project that provides helpful tools and methods for working with large language models (LLMs).
The first release implements “clients” that are compatible with OpenAI's Completion API. This means you can easily replace `openai.ChatCompletion`, `openai.Completion`, or `openai.Embedding` with, for example, `huggingface.ChatCompletion`, `huggingface.Completion`, or `huggingface.Embedding` by changing one line of code.
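To make the swap concrete, here is a hedged sketch of what it looks like in practice. The message payload mirrors OpenAI's format one-to-one; the model id is an assumption (use any model available through Hugging Face), and the import is guarded so the sketch reads standalone:

```python
# OpenAI version (for comparison):
#   import openai
#   response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
#
# EasyLLM version -- only the client namespace changes:
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a large language model?"},
]
try:
    from easyllm.clients import huggingface

    response = huggingface.ChatCompletion.create(
        model="meta-llama/Llama-2-70b-chat-hf",  # assumed example model id
        messages=messages,
    )
    print(response["choices"][0]["message"]["content"])
except Exception:
    # easyllm not installed, or no network/token available in this environment;
    # the request shape above is the point of the example.
    pass
```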
The project is available on GitHub and you can check out the documentation for examples and usage instructions.
Key Features
Below is a list of current features:

- Compatible clients - Implementations of clients compatible with OpenAI's API, `ChatCompletion`, `Completion`, and `Embedding`. Easily switch between different LLMs by changing one line of code.
- Prompt helpers - Utilities to help convert prompts between formats for different LLMs. For example, go from the OpenAI Messages format to a prompt for a model like LLaMA.
- Streaming support - Stream completions from your LLM instead of waiting for a whole response. Great for things like chat interfaces.
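To illustrate the prompt-helpers idea, here is a simplified, standalone sketch of converting OpenAI-style messages into the `[INST]` chat prompt format that LLaMA-2 chat models expect. EasyLLM ships its own helpers for this; the function below is an illustrative reimplementation, not the library's code:

```python
def build_llama2_prompt(messages):
    """Convert OpenAI-style chat messages into a LLaMA-2 chat prompt string."""
    system = ""
    prompt = ""
    for msg in messages:
        if msg["role"] == "system":
            # The system prompt is wrapped in <<SYS>> tags and prefixes
            # the next user turn.
            system = f"<<SYS>>\n{msg['content']}\n<</SYS>>\n\n"
        elif msg["role"] == "user":
            prompt += f"<s>[INST] {system}{msg['content']} [/INST]"
            system = ""  # only the first user turn carries the system prompt
        elif msg["role"] == "assistant":
            prompt += f" {msg['content']} </s>"
    return prompt


print(build_llama2_prompt([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
]))
# <s>[INST] <<SYS>>
# Be brief.
# <</SYS>>
#
# Hi [/INST]
```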
So far planned:

- `evol_instruct` (work in progress) - Use evolutionary algorithms to create instruction data for LLMs.
- `sagemaker` client to easily interact with LLMs deployed on Amazon SageMaker.
If you have great ideas or feature requests, feel free to open an issue or a pull request directly.
Getting Started
Install EasyLLM via pip:

```bash
pip install easyllm
```
Then import a client and start using it:
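A minimal first-use sketch, assuming `pip install easyllm` has run and a Hugging Face token is configured where needed. The model id is an assumption (pick any model available through Hugging Face), and the call is guarded so the sketch also reads standalone:

```python
prompt_text = "The capital of France is"
try:
    from easyllm.clients import huggingface

    # Completion mirrors openai.Completion -- same request shape.
    response = huggingface.Completion.create(
        model="tiiuae/falcon-7b-instruct",  # assumed example model id
        prompt=prompt_text,
    )
    print(response["choices"][0]["text"])
except Exception:
    # easyllm not installed, or no network/token available in this environment.
    pass
```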
Check out the documentation for more examples and detailed usage instructions. The code is on GitHub.
Examples
Here are some examples to help you get started with the easyllm library:
| Example | Description |
|---|---|
| [Chat Completion API](https://philschmid.github.io/easyllm/examples/chat-completion-api) | Shows how to use the ChatCompletion API to have a conversational chat with the model. |
| [Stream Chat Completions](https://philschmid.github.io/easyllm/examples/stream-chat-completions/) | Demonstrates streaming multiple chat requests to efficiently chat with the model. |
| [Stream Text Completions](https://philschmid.github.io/easyllm/examples/stream-text-completions) | Shows how to stream multiple text completion requests. |
| [Text Completion API](https://philschmid.github.io/easyllm/examples/text-completion-api) | Uses the TextCompletion API to generate text with the model. |
| [Get Embeddings](https://philschmid.github.io/easyllm/examples/get-embeddings) | Embeds text into vector representations using the model. |
The examples cover the main functionality of the library: chat, text completion, and embeddings.
Thanks for reading! If you have any questions, feel free to contact me on Twitter or LinkedIn.