Gopi Krishna Tummala

Tag: deep-learning

All the articles with the tag "deep-learning".

  • Advanced MLOps & Production
    45 MIN READ

    Training Frameworks: ZeRO, FSDP, and the Memory Math That Gets You Hired

    A practitioner's guide to distributed training frameworks — the memory formulas, parallelism strategies, and failure-mode reasoning that ML infra interviews actually test. Covers DDP, FSDP, DeepSpeed ZeRO, 3D parallelism, and fault tolerance.

  • Advanced MLOps & Production
    40 MIN READ

    Datasets & Dataloaders: The Art of Never Starving Your GPU

GPU utilization is a lagging indicator — the real battle is in the data pipeline. A practitioner's deep dive into PyTorch DataLoader internals, zero-copy data pumps, WebDataset streaming, and the exact questions this topic raises in ML system design interviews.

  • Advanced MLOps & Production
    45 MIN READ

    Post-Training Playbook: SFT, LoRA, DPO, and GRPO from First Principles

    Pre-training gives a model knowledge; post-training gives it behavior. A practitioner's breakdown of SFT, LoRA/QLoRA, DPO, and GRPO — with the memory math, concrete configs, and interview reasoning that separates candidates who've done this from candidates who've read about it.

  • Intermediate Fundamentals
    25 MIN READ

    Backpropagation — The Math Behind Learning

    A complete derivation of backpropagation for MLPs — from chain rule intuition to delta propagation, with a worked numerical example showing exactly how errors flow backward through a network.