Monday 22nd April: 13:00 – 17:00
In this workshop, you will learn practical techniques for customizing LLMs for quant finance using prompt engineering, retrieval augmentation, and fine-tuning.
Prior knowledge of LLMs or Python programming is not required. Open-source examples will be provided for those interested in running and modifying the code (CPU or GPU).
The workshop comprises two 90-minute sessions (13:00 – 14:30 and 15:00 – 16:30), each followed by a 30-minute Q&A and coffee break.
Models: GPT-3.5, GPT-4, Llama 2, Code Llama
Session One: Prompting and Retrieval Augmentation – 13:00 to 14:30
- Prompting – natural-language programming of LLMs
  - Principles of prompt engineering
  - Prompt types
- Retrieval augmentation – using information from outside the model's training data
  - Embedding – asking questions over documents
  - Chains – multi-step workflows
  - Memory
- Overcoming limitations
  - Context window
  - Large documents
  - Hallucinations
  - Reproducibility
- Performance optimization
  - CPU and GPU performance profiles
  - Quantization
- Hands-on examples with Python
  - Comprehension of trade confirmations
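The core retrieval-augmentation flow covered in this session can be sketched in plain Python. This is a minimal illustration, not the workshop code: the bag-of-words "embedding" and the sample documents are stand-ins, and a real pipeline would use a proper embedding model and vector store.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Illustrative document store (invented examples).
documents = [
    "Trade confirmation: buy 100 shares of ACME at 12.50 USD, settlement T+2.",
    "Quarterly risk report: portfolio VaR increased by 3 percent.",
]

def build_prompt(question):
    # Retrieve the most relevant document chunk ...
    q = embed(question)
    best = max(documents, key=lambda d: cosine(q, embed(d)))
    # ... and splice it into the prompt so the LLM answers from retrieved
    # context rather than from its training data alone.
    return f"Answer using only this context:\n{best}\n\nQuestion: {question}"

prompt = build_prompt("What is the settlement period for the ACME trade?")
```

The resulting `prompt` string would then be sent to any of the listed models; keeping the retrieved context small is one way to stay within the context window.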
Q&A – 14:30 to 15:00
Session Two: Fine-Tuning – 15:00 to 16:30
- Unsupervised fine-tuning
  - Expanding the model dataset and vocabulary
- Self-supervised fine-tuning
- Supervised fine-tuning
  - Generated datasets
  - Curated datasets
- Performance optimization
- Hands-on examples with Python
  - Generation of draft model release notes
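A generated dataset for supervised fine-tuning is typically a set of prompt/completion pairs. The sketch below builds such a dataset for the release-notes example in the common chat-style JSONL format; the commit/note pairs are invented for illustration.

```python
import json

# Illustrative examples pairing raw commit messages with draft release-note
# lines; the pairs are invented, and the field layout follows the common
# chat-style JSONL format used by most fine-tuning APIs.
examples = [
    {"commit": "fix: handle NaN in VaR calc",
     "note": "Fixed NaN handling in the VaR calculation."},
    {"commit": "feat: add CSV export for trades",
     "note": "Added CSV export for trade data."},
]

records = []
for ex in examples:
    records.append({
        "messages": [
            {"role": "system",
             "content": "Draft a release note from the commit message."},
            {"role": "user", "content": ex["commit"]},
            {"role": "assistant", "content": ex["note"]},
        ]
    })

# Serialize one JSON object per line (JSONL) for upload to a fine-tuning job.
jsonl = "\n".join(json.dumps(r) for r in records)
```

Curated datasets follow the same shape; the difference is that the assistant turns are written or reviewed by hand rather than generated.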
Q&A – 16:30 to 17:00