ML Engineering


February 20, 2024

A Backup of Scripts

This is a backup of scripts discussed in Faster debug and development with tiny models, tokenizers and datasets.

  • c4-en-10k.py
  • cm4-synthetic-testing.py
  • fsmt-make-super-tiny-model.py
  • general-pmd-ds-unpack.py
  • general-pmd-synthetic-testing.py
  • m4-ds-unpack.py
  • mt5-make-tiny-model.py
  • openwebtext-10k.py
  • oscar-en-10k.py
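The `*-make-tiny-model.py` scripts share a common pattern: shrink the model configuration to minuscule dimensions, instantiate a randomly initialized model from it, and save the result for fast loading in tests. A minimal sketch of that pattern, using hypothetical tiny dimensions with the Hugging Face `transformers` MT5 classes (not the exact contents of `mt5-make-tiny-model.py`):

```python
from transformers import MT5Config, MT5ForConditionalGeneration

# Hypothetical tiny dimensions: small enough that the model builds in
# well under a second and the checkpoint is only a few MB.
config = MT5Config(
    vocab_size=256,
    d_model=64,
    d_ff=256,
    d_kv=8,
    num_layers=2,
    num_heads=2,
)

# Randomly initialized weights -- quality doesn't matter for debugging,
# only that the architecture and I/O shapes are real.
model = MT5ForConditionalGeneration(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")

# Save so tests can load it instantly with .from_pretrained()
model.save_pretrained("mt5-tiny-random")
```

The saved directory can then be passed anywhere a model name is expected, making test iterations take seconds instead of minutes.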

Citation

BibTeX citation:
@online{bekman2024,
  author = {Bekman, Stas and Foreman, Sam},
  title = {ML {Engineering}},
  date = {2024-02-20},
  url = {https://saforem2.github.io/ml-engineering},
  langid = {en}
}
For attribution, please cite this work as:
Bekman, Stas, and Sam Foreman. 2024. “ML Engineering.” February 20, 2024. https://saforem2.github.io/ml-engineering.