
Introducing EasyLLM - streamline open LLMs


Happy to introduce EasyLLM, an open-source Python package to streamline and unify working with open LLMs. EasyLLM is a side project I started, which I thought would be worth sharing with the broader community.

EasyLLM is an open-source project that provides helpful tools and methods for working with large language models (LLMs).

The first release implements “clients” that are compatible with OpenAI's Completion API. This means you can easily replace openai.ChatCompletion, openai.Completion, and openai.Embedding with, for example, huggingface.ChatCompletion, huggingface.Completion, or huggingface.Embedding by changing a single line of code.
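
As a minimal sketch of what that swap looks like (the OpenAI call is shown in comments for comparison; the model names are just examples):

# With the OpenAI client you would write:
#
#   import openai
#   response = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": "What is the sun?"}],
#   )

# With EasyLLM, only the client import (and model name) changes:
from easyllm.clients import huggingface

huggingface.prompt_builder = "llama2"  # convert messages to the Llama 2 prompt format

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "What is the sun?"}],
)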

The project is available on GitHub and you can check out the documentation for examples and usage instructions.

Key Features

Below is a list of the current features:

  • Compatible clients - Implementations of clients compatible with OpenAI's API (ChatCompletion, Completion, and Embedding). Easily switch between different LLMs by changing one line of code.
  • Prompt helpers - Utilities to help convert prompts between formats for different LLMs, for example from the OpenAI Messages format to a prompt for a model like LLaMA (see the sketch after this list).
  • Streaming support - Stream completions from your LLM instead of waiting for the whole response. Great for things like chat interfaces.
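
To make the prompt helpers concrete, here is a standalone sketch of the OpenAI Messages to Llama 2 conversion. EasyLLM ships its own helpers for this, so the function below is illustrative of the idea, not the library's API:

# Illustrative only -- easyllm provides its own prompt helpers for this
def build_llama2_prompt(messages):
    """Convert OpenAI-style messages into the Llama 2 chat prompt format."""
    system = ""
    prompt = ""
    for message in messages:
        if message["role"] == "system":
            system = f"<<SYS>>\n{message['content']}\n<</SYS>>\n\n"
        elif message["role"] == "user":
            prompt += f"<s>[INST] {system}{message['content']} [/INST]"
            system = ""  # the system prompt is only prepended to the first turn
        elif message["role"] == "assistant":
            prompt += f" {message['content']} </s>"
    return prompt

print(build_llama2_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the sun?"},
]))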

Planned so far:

  • evol_instruct (work in progress) - Use evolutionary algorithms to create instruction data for LLMs.
  • sagemaker client - Easily interact with LLMs deployed on Amazon SageMaker.

If you have great ideas or feature requests, feel free to open an issue or a pull request directly.

Getting Started

  1. Install EasyLLM via pip:

pip install easyllm

  2. Import a client and start using it:

from easyllm.clients import huggingface

# Set the prompt builder so messages are converted to the Llama 2 chat format
huggingface.prompt_builder = "llama2"
# huggingface.api_key = "hf_xxx"  # set your Hugging Face API token if needed

response = huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[
        {"role": "system", "content": "\nYou are a helpful assistant speaking like a pirate. argh!"},
        {"role": "user", "content": "What is the sun?"},
    ],
    temperature=0.9,
    top_p=0.6,
    max_tokens=256,
)

print(response)
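
Streaming works the same way. A minimal sketch, assuming the client mirrors OpenAI's streaming response format (stream=True yields incremental chunks with a delta field):

from easyllm.clients import huggingface

huggingface.prompt_builder = "llama2"

# With stream=True the client yields incremental chunks instead of one response
for chunk in huggingface.ChatCompletion.create(
    model="meta-llama/Llama-2-70b-chat-hf",
    messages=[{"role": "user", "content": "What is the sun?"}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]  # OpenAI-style partial message
    print(delta.get("content", ""), end="", flush=True)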

Check out the documentation for more examples and detailed usage instructions. The code is on GitHub.

Examples

Here are some examples to help you get started with the easyllm library:

  • Chat Completion API (https://philschmid.github.io/easyllm/examples/chat-completion-api) - Shows how to use the ChatCompletion API to have a conversational chat with the model.
  • Stream Chat Completions (https://philschmid.github.io/easyllm/examples/stream-chat-completions/) - Demonstrates how to stream multiple chat requests to efficiently chat with the model.
  • Stream Text Completions (https://philschmid.github.io/easyllm/examples/stream-text-completions) - Shows how to stream multiple text completion requests.
  • Text Completion API (https://philschmid.github.io/easyllm/examples/text-completion-api) - Uses the TextCompletion API to generate text with the model.
  • Get Embeddings (https://philschmid.github.io/easyllm/examples/get-embeddings) - Embeds text into vector representations using the model.

The examples cover the main functionality of the library - chat, text completion, and embeddings.
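
Getting embeddings follows the same pattern as the other clients. A minimal sketch, assuming the Embedding client mirrors OpenAI's call shape (the model name and the input parameter are only examples):

from easyllm.clients import huggingface

# Assumes an OpenAI-style Embedding client; the model name is only an example
response = huggingface.Embedding.create(
    model="sentence-transformers/all-MiniLM-L6-v2",
    input="What is the sun?",
)

print(len(response["data"][0]["embedding"]))  # dimensionality of the vector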


Thanks for reading! If you have any questions, feel free to contact me on Twitter or LinkedIn.