Course Overview
Enterprises need to execute language-related tasks daily, such as text classification, content generation, sentiment analysis, and customer chat support, and they seek to do so in the most cost-effective way. Large language models can automate these tasks, and efficient LLM customization techniques can increase a model’s capabilities and reduce the size of models required for use in enterprise applications. In this course, you'll go beyond prompt engineering LLMs and learn a variety of techniques to efficiently customize pretrained LLMs for your specific use cases—without engaging in the computationally intensive and expensive process of pretraining your own model or fine-tuning a model's internal weights. Using NVIDIA NeMo™ service, you’ll learn various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.
Prerequisites
- Professional experience with the Python programming language.
- Familiarity with fundamental deep learning topics such as model architecture, training, and inference.
- Familiarity with a modern Python-based deep learning framework (PyTorch preferred).
- Experience working with out-of-the-box pretrained LLMs.
Course Objectives
By the time you complete this course, you will be able to:
- Apply parameter-efficient fine-tuning techniques with limited data to accomplish tasks specific to your use cases (illustrated in the sketch after this list)
- Use LLMs to create synthetic data in the service of fine-tuning smaller LLMs to perform a desired task
- Drive down model size requirements through a virtuous cycle that combines synthetic data generation and model customization
- Build a generative application composed of the multiple customized models that you create, and generate data for, throughout the workshop
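To give a flavor of what parameter-efficient fine-tuning looks like in practice, here is a minimal sketch using the open-source Hugging Face peft library with LoRA. Note the assumptions: the course itself uses NVIDIA NeMo service rather than this library, and the model name (gpt2) and hyperparameter values below are illustrative choices only, not the course's configuration.

```python
# Minimal parameter-efficient fine-tuning sketch with LoRA.
# Assumptions: Hugging Face transformers + peft installed; "gpt2" is a small
# stand-in model chosen only for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a pretrained LLM whose original weights will stay frozen.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA adds small trainable low-rank matrices alongside the frozen weights,
# so only a tiny fraction of parameters needs to be updated.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices (illustrative)
    lora_alpha=16,    # scaling factor for the LoRA updates (illustrative)
    lora_dropout=0.1,
)

peft_model = get_peft_model(model, lora_config)

# Reports how few parameters are trainable, which is what makes the
# approach "parameter-efficient".
peft_model.print_trainable_parameters()
```

Training then proceeds with an ordinary fine-tuning loop on your task-specific data; because only the small LoRA matrices are updated, far less data and compute are required than for full fine-tuning of the model's internal weights.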