Last Updated: 02/20/2024 @ 23:05:39
Machine Learning Engineering Open Book
This is an open collection of methodologies, tools, and step-by-step instructions to help with the successful training of large language models and multi-modal models.
This is technical material suitable for LLM/VLM training engineers and operators; that is, the content here contains lots of scripts and copy-n-paste commands so that you can quickly address your needs.
This repo is an ongoing brain dump of my experiences training Large Language Models (LLMs) and VLMs; a lot of this know-how I acquired while training the open-source BLOOM-176B model in 2022 and the IDEFICS-80B multi-modal model in 2023. Currently, I'm working on developing/training open-source Retrieval Augmented Generation (RAG) models at Contextual.AI.
I've been compiling this information mostly for myself so that I can quickly find solutions I have already researched in the past and which have worked, but as usual I'm happy to share these notes with the wider ML community.
Table of Contents
My apologies if the layout is a bit unstable while I'm writing new chapters and gradually reorganizing the content to be more intuitive.
Part 1. Insights
- The AI Battlefield Engineering - what you need to know in order to succeed
Part 2. Hardware
- Compute - accelerators, CPUs, CPU memory.
- Storage - local, distributed and shared file systems.
- Network - intra- and inter-node networking.
Part 3. Orchestration
- SLURM - the main orchestration environment
Part 4. Training
- Training - model-training-related guides
Part 5. Development
- Debugging and Troubleshooting - how to debug easy and difficult issues
- Testing - numerous tips and tools to make test writing enjoyable
Part 6. Miscellaneous
- Resources - LLM/VLM chronicles
Updates
I announce any significant updates on my Twitter channel: https://twitter.com/StasBekman
PDF version
Download the PDF version of the book.
I will try to rebuild it once a week or so, but if you want the latest, the instructions for building are here.
Thanks to HuggingFace for giving me permission to host my book's PDF on the HF hub.
Shortcuts
Things that you are likely to need to find quickly and often.
- 🛠️ Tools:
- all_reduce_bench.py - a much easier way to benchmark network throughput than nccl-tests (see the launch sketch after this list).
- torch-distributed-gpu-test.py - a tool to quickly test your inter-node connectivity.
- 📖 Guides:
- debugging pytorch applications - quick copy-n-paste solutions to resolve hanging or breaking PyTorch applications
- slurm for users - a slurm cheatsheet and tricks
- make tiny models/datasets/tokenizers
- LLM/VLM chronicles collection
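For quick orientation, here is a minimal launch sketch for the two tools above. Both are distributed PyTorch scripts, so the sketch assumes a torchrun-style launcher; the GPU/node counts and the rendezvous endpoint (`master-node:6000`) are illustrative placeholders for your cluster, not values prescribed by the book.

```bash
# A minimal sketch, assuming a torchrun-style launcher.
# Adjust GPU/node counts and the rendezvous endpoint to your setup.

# benchmark all_reduce throughput across 8 GPUs on a single node
torchrun --nproc_per_node=8 all_reduce_bench.py

# smoke-test inter-node connectivity across 2 nodes x 8 GPUs
# (master-node:6000 is a placeholder rendezvous endpoint)
torchrun --nnodes=2 --nproc_per_node=8 \
    --rdzv_backend=c10d --rdzv_endpoint=master-node:6000 \
    torch-distributed-gpu-test.py
```

On a SLURM cluster the same scripts would typically be wrapped in an `srun` launcher instead; the SLURM guide linked above covers that workflow.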
Gratitude
None of this would have been possible had I not been entrusted with the specific LLM/VLM trainings from which I learned this know-how. This is a privilege that only a few enjoy due to the prohibitively expensive cost of renting huge ML compute clusters. Hopefully the rest of the ML community can learn vicariously from these notes.
Special thanks go to Thom Wolf, who proposed that I lead the BLOOM-176B training back when I knew nothing about large-scale training. This was the project that catapulted me into an intense learning process. And, of course, thanks to HuggingFace for giving me the opportunity to work full time on the BLOOM-176B and later the IDEFICS-80B trainings.
Contributing
If you find a bug or typo, or would like to propose an improvement, please don't hesitate to open an Issue or contribute a PR.
License
The content of this site is distributed under the Creative Commons Attribution-ShareAlike 4.0 International license.
My repositories map
✔ Machine Learning: ML Engineering Open Book | ML ways | Porting
✔ Guides: The Art of Debugging
✔ Applications: ipyexperiments
✔ Tools and Cheatsheets: bash | conda | git | jupyter-notebook | make | python | tensorboard | unix
Citation
@online{bekman2024,
author = {Bekman, Stas and Foreman, Sam},
title = {ML {Engineering}},
date = {2024-02-20},
url = {https://saforem2.github.io/ml-engineering},
langid = {en}
}