Model Parallelism: Building and Deploying Large Neural Networks (MPBDLNN)


Course Overview

Very large deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., huge Vision Transformers), or speech AI (e.g., wav2vec 2.0), have certain properties that set them apart from their smaller counterparts. As DNNs grow larger and are trained on progressively larger datasets, they can adapt to new tasks with just a handful of training examples, accelerating progress toward artificial general intelligence. Training models with tens to hundreds of billions of parameters on vast datasets is not trivial and requires a unique combination of AI, high-performance computing (HPC), and systems knowledge.

Prerequisites

  • Good understanding of PyTorch
  • Good understanding of deep learning and data parallel training concepts
  • Hands-on practice with deep learning and data parallel training is useful, but optional

Prices & Delivery Methods

Online Training

Duration
1 day

Price
  • on request

Classroom Training

Duration
1 day

Price
  • on request

Schedule

Currently there are no training dates scheduled for this course.