
ISSUE #1: How far can you get with a single GPU in just one day?

Companies continue to adopt and rely on language models such as BERT and RoBERTa in their operations, so it is important to consider not only the performance of these models but also their efficiency and cost-effectiveness. With this in mind, Jonas Geiping and Tom Goldstein published the paper “Cramming: Training a Language Model on a Single GPU in One Day”, answering the question: how far can you get with a single GPU in just one day?

The findings and applied methods can have an important impact on companies that still rely on outdated public checkpoints, or that could benefit from training transformers for their own domain. The authors also made the code available on GitHub.

News & Announcements 📣

Before 2022 was over, the Hugging Face Transformers team shared insights into how Transformers developed in 2022. The Transformers library grew to 300,000 daily pip installs and 1,000,000 weekly active users in 2022. 🤯🤯

François Chollet also shared insights on the usage and new features of Keras in 2022, and what is coming in 2023, in a Twitter thread.

If you were testing the new PyTorch 2.0 features with PyTorch-nightly over the Christmas break, you should re-install it: the PyTorch team reported that nightly builds were compromised between December 25 and December 30, 2022, through a malicious torchtriton dependency uploaded to PyPI.

Tutorials & Demos 📝

I published a blog post on how to fine-tune google/flan-t5-base for chat & dialogue summarization using Hugging Face Transformers.
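To give a flavor of what the post covers, here is a minimal sketch of such a seq2seq fine-tuning setup with the Hugging Face Trainer API. The two toy dialogue/summary pairs and the output path are purely illustrative stand-ins; the full post walks through a real dialogue-summarization dataset and evaluation.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Toy dialogue/summary pairs, purely illustrative stand-ins for a real dataset.
data = Dataset.from_dict({
    "dialogue": ["A: Lunch at noon? B: Sure, see you at the cafe."],
    "summary": ["A and B agree to meet for lunch at noon."],
})

model_id = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def preprocess(batch):
    # Prefix the dialogue so the model knows the task, then tokenize inputs and targets.
    model_inputs = tokenizer(
        ["summarize: " + d for d in batch["dialogue"]], max_length=512, truncation=True
    )
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = data.map(preprocess, batched=True, remove_columns=data.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-base-summarizer",  # hypothetical output path
        per_device_train_batch_size=8,
        learning_rate=5e-5,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```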

Deedy shares tricks from recent ML research that help you get better results with ChatGPT, including chain-of-thought (CoT) reasoning and self-consistency.
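To make the self-consistency idea concrete, here is a minimal sketch: sample several chain-of-thought completions at nonzero temperature and majority-vote the final answer. The sample_answer function is a hypothetical stand-in for whatever LLM call you use, not part of any real API.

```python
import random
from collections import Counter

def sample_answer(prompt: str) -> str:
    """Hypothetical stand-in for one sampled LLM completion (temperature > 0).

    In practice this would call your model of choice and extract the final
    answer from its chain-of-thought output.
    """
    return random.choice(["42", "42", "41"])  # dummy answers for illustration

def self_consistency(question: str, n_samples: int = 5) -> str:
    # Chain-of-thought: ask the model to reason step by step before answering.
    prompt = f"{question}\nLet's think step by step."
    # Self-consistency: sample several reasoning paths, keep the majority answer.
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))
```

The intuition: a single reasoning path can go wrong, but independent paths that converge on the same answer are more likely to be correct.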

Steven Warren wrote a blog post on how to create and deploy a Stable Diffusion Discord bot on AWS with an event-driven, scalable architecture.

Reads & Papers 📚

Google & DeepMind published a paper on a new large language model aligned with the medical domain to generate safe and helpful answers, achieving SOTA performance on MedQA.

Muse: Text-To-Image Generation using Transformers by Google. Muse is a new text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models.


I hope you enjoyed this newsletter. 🤗 If you have any questions or are interested in collaborating, feel free to contact me on Twitter or LinkedIn.

See you next week 👋🏻👋🏻

πŸ—žοΈ Stay updated with bi-weekly Transformers & Cloud News and Insights delivered to your inbox