- 28 April–7 May 2026 (2 Weeks, 4 Classes, 8 Total Hours)
- Every Tuesday and Thursday, 1–3 p.m. Eastern Time (all sessions will be recorded and available for replay; course notes will be available for download)
- Advance your AI expertise with this cutting-edge course specifically designed for engineers.
- All students will receive an AIAA Certificate of Completion at the end of the course.
OVERVIEW
In the age of AI-assisted software development, many developers are “vibe-coding”—prompting large language models (LLMs) without structure or validation.
The result: buggy code, token wastage, and unmeasured reliability.
This course transforms LLM-assisted coding from an experimental art into a measurable engineering practice: optimizing model use, improving accuracy, and minimizing wasted tokens. We introduce structured, research-backed strategies for planning, generating, testing, and evaluating code with LLMs. Participants learn reproducible frameworks for achieving higher functional accuracy, cost efficiency, and explainability across domains, from standard programming benchmarks (HumanEval) to aerospace-specific coding tasks (AeroEval, an experimental dataset of physics and aerospace coding problems).
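For example, HumanEval results are conventionally reported with the unbiased pass@k estimator introduced in Chen et al. (2021), listed in the recommended reading: the probability that at least one of k samples, drawn from n generated candidates of which c pass the tests, is correct. A minimal Python sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem
    c: number of samples that passed the unit tests
    k: budget of samples considered
    """
    if n - c < k:
        # Every size-k draw must contain at least one correct sample.
        return 1.0
    # 1 minus the probability that all k drawn samples are incorrect.
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Averaging this quantity over all problems in a benchmark gives the headline pass@k score.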
LEARNING OBJECTIVES
- Understand modern generative code models and how they are evaluated on standardized datasets.
- Learn 10 advanced code generation strategies, from One-Shot and Test-Driven refinement to AlphaCodium, Reflexion, Self-Consistency, and MCTS.
- Design and execute evaluation pipelines that measure performance, stability, and token efficiency.
- Apply learned techniques to domain-specific datasets where correctness and reliability are critical.
- [Detailed Outline below]
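To give a flavor of the strategies covered: Self-Consistency, in its simplest form, samples several candidate answers and keeps the majority vote (for generated code, candidates are usually clustered by execution behavior rather than exact text). A minimal sketch, where `sample` is a hypothetical stand-in for a stochastic LLM call:

```python
from collections import Counter

def self_consistency(sample, prompt, n=5):
    # Draw n independent samples at nonzero temperature and
    # return the most frequent answer (majority vote).
    answers = [sample(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```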
WHO SHOULD ATTEND
- Software engineers, ML developers, and AI researchers building or validating LLM-assisted code systems.
- Technical leads seeking to improve productivity while maintaining code reliability and cost control.
COURSE FEES (Sign-In to Register)
- AIAA Member Price: $595 USD
- AIAA Student Member Price: $395 USD
- Non-Member Price: $795 USD
OUTLINE
- Generative Code Models & HumanEval Benchmark: Foundations of LLM code generation and standard evaluation metrics.
- Baseline Methods (One-Shot & Test-Driven): From fast prototyping to structured test-feedback loops.
- Reflective & Understanding-Based Methods: Using self-review and AlphaCodium/Reflexion strategies for deeper reasoning.
- Reasoning & Diversity-Driven Approaches: Uncertainty-Guided CoT and Self-Consistency for adaptive reasoning and reliability.
- Search & Feedback Architectures: Monte Carlo Tree Search, compiler feedback, and agentic frameworks.
- Evaluation & Benchmarking Frameworks: Designing objective, reproducible pipelines for LLM performance tracking.
- Capstone Case Study (AeroEval): Evaluating LLM-generated physics and aerospace code for correctness and numerical precision.
INSTRUCTOR
Mr. Sri Krishnamurthy, CFA, CAP, is the founder of www.QuantUniversity.com. With over twenty years of experience, Sri has advised organizations on AI, quantitative analysis, risk management, fintech, machine learning, and statistical modeling. Previously, he worked for Citigroup, Endeca, and MathWorks, and has consulted extensively for numerous top-tier clients. Sri has guided over 5,000 students and professionals through quantitative methods, analytics, AI, and big data topics, both in industry and as a faculty member at George Mason University, Babson College, and Northeastern University. He is a recognized thought leader and a frequent speaker at CFA, PRMIA, QWAFAFEW, and TEDx events, as well as at international finance and machine learning conferences.
CLASSROOM HOURS / CEUs: 8 classroom hours / 0.8 CEU/PDH
COURSE DELIVERY AND MATERIALS
- The course lectures will be delivered via Zoom. Access to the Zoom classroom will be provided to registrants shortly before the course start date.
- All sessions will be available on demand within 1–2 days of each lecture. Once available, you can stream the replay video anytime, 24/7.
- All slides will be available for download after each lecture. These materials may not be reproduced, distributed, or transmitted except by course participants for their own use. All rights reserved.
- Between lectures, the instructor(s) will be available via email for technical questions and comments.
RECOMMENDED READING MATERIALS
- Chen, M., Tworek, J., Jun, H., et al. (2021). Evaluating Large Language Models Trained on Code. arXiv preprint arXiv:2107.03374.
- Krishnamurthy, S. (2025). AeroEval: A Benchmark Dataset for Evaluating Code Generation in Aerospace Engineering. (Working Paper)
- Ridnik, T., Zabari, N., & Friedman, I. (2024). Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering. arXiv preprint arXiv:2401.08500.
Cancellation Policy: A refund, less a $50.00 cancellation fee, will be issued for all cancellations made in writing at least 5 days before the start of the event. After that time, no refunds will be provided.
Contact: Please contact Lisa Le or Customer Service if you have any questions about the course or group discounts.