Course Outline

Introduction to Parameter-Efficient Fine-Tuning (PEFT)

  • Motivation and limitations of full fine-tuning.
  • Overview of PEFT: goals and benefits.
  • Applications and use cases in industry.

LoRA (Low-Rank Adaptation)

  • Concept and intuition behind LoRA.
  • Implementing LoRA using Hugging Face and PyTorch.
  • Hands-on: Fine-tuning a model with LoRA (see the sketch after this list).
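
A minimal sketch of the kind of setup the hands-on works through, using Hugging Face's peft library with PyTorch; the DistilBERT checkpoint, target module names, and hyperparameters are illustrative assumptions:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base checkpoint; any Hugging Face transformer works similarly.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# LoRA freezes W and learns a low-rank update: W + (alpha / r) * B @ A.
config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the update matrices
    lora_alpha=16,                      # scaling factor for the update
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projections
)
model = get_peft_model(model, config)
model.print_trainable_parameters()      # only a small fraction is trainable
```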

Adapter Tuning

  • How adapter modules function.
  • Integration with transformer-based models.
  • Hands-on: Applying Adapter Tuning to a transformer model (sketched below).
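
The core mechanism can be shown in a few lines of plain PyTorch: a Houlsby-style bottleneck that down-projects, applies a nonlinearity, up-projects, and adds the result back through a residual connection. This is a conceptual sketch with illustrative dimensions, not a full transformer integration:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted after a (frozen) transformer sublayer."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Near-zero init so the adapter starts as (almost) the identity map.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = Adapter(hidden_size=768)
x = torch.randn(2, 16, 768)    # (batch, sequence length, hidden size)
print(adapter(x).shape)        # torch.Size([2, 16, 768])
```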

Prefix Tuning

  • Utilizing soft prompts for fine-tuning.
  • Strengths and limitations compared to LoRA and adapters.
  • Hands-on: Prefix Tuning on an LLM task (see the sketch below).
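
A minimal sketch of Prefix Tuning with the peft library: trainable "virtual token" vectors are prepended to the keys and values of every attention layer while the base model stays frozen. The GPT-2 checkpoint and prefix length are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative checkpoint

config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # length of the learned soft prefix
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the prefix parameters are trained
```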

Evaluating and Comparing PEFT Methods

  • Metrics for evaluating performance and efficiency (a minimal helper follows this list).
  • Trade-offs in training speed, memory usage, and accuracy.
  • Benchmarking experiments and result interpretation.
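
One efficiency metric is easy to standardize across all three methods: the fraction of parameters that actually receive gradient updates. A minimal helper that works for any PyTorch model, PEFT-wrapped or not:

```python
import torch.nn as nn

def trainable_fraction(model: nn.Module) -> float:
    """Share of parameters that will receive gradient updates."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total

# Example usage: print(f"{100 * trainable_fraction(model):.3f}% trainable")
```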

Deploying Fine-Tuned Models

  • Saving and loading fine-tuned models (see the sketch after this list).
  • Deployment considerations for PEFT-based models.
  • Integrating into applications and pipelines.
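
With peft, only the small adapter weights are written to disk; at deployment time they are re-attached to a freshly loaded base model, or merged into it for adapter-free inference. A minimal sketch with an illustrative GPT-2 checkpoint and directory name:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
peft_model = get_peft_model(base, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8))
peft_model.save_pretrained("my-lora-adapter")        # adapter weights only

# Deployment: reload the base model, attach the saved adapter, and optionally
# fold the low-rank update into the base weights for inference.
base = AutoModelForCausalLM.from_pretrained("gpt2")
restored = PeftModel.from_pretrained(base, "my-lora-adapter")
merged = restored.merge_and_unload()
```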

Best Practices and Extensions

  • Combining PEFT with quantization and distillation (a QLoRA-style sketch follows this list).
  • Application in low-resource and multilingual settings.
  • Future directions and active research areas.
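
As an example of combining methods, a QLoRA-style setup quantizes the frozen base model to 4-bit and trains LoRA adapters in higher precision on top. A sketch assuming a CUDA GPU, the bitsandbytes package, and an illustrative GPT-2 checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

# Frozen base weights are stored in 4-bit NF4; LoRA weights train in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)  # grad checkpointing, norm casts
model = get_peft_model(model, LoraConfig(task_type=TaskType.CAUSAL_LM, r=8))
model.print_trainable_parameters()
```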

Summary and Next Steps

Requirements

  • Fundamental understanding of machine learning principles.
  • Practical experience working with large language models (LLMs).
  • Familiarity with Python and PyTorch.

Audience

  • Data scientists.
  • AI engineers.

Duration

  14 Hours
