AI-300: Operationalize machine learning and generative AI solutions
This course prepares learners to design, implement, and operate Machine Learning Operations (MLOps) and Generative AI Operations (GenAIOps) solutions on Azure. It covers building secure and scalable AI infrastructure, managing the full lifecycle of traditional machine learning models with Azure Machine Learning, and deploying, evaluating, monitoring, and optimizing generative AI applications and agents using Microsoft Foundry. Learners will gain hands-on knowledge of automation, continuous integration and delivery, infrastructure as code, and observability by using tools such as GitHub Actions, Azure CLI, and Bicep. The course emphasizes collaboration with data science and DevOps teams to deliver reliable, production-ready AI systems aligned with modern MLOps and GenAIOps best practices.
This course is intended for data scientists, machine learning engineers, and DevOps professionals who want to design and operate production-grade AI solutions on Azure. It is suited for learners with experience in Python, a foundational understanding of machine learning concepts, and basic familiarity with DevOps practices such as source control, CI/CD, and command-line tools, who are preparing to implement MLOps and GenAIOps workflows using Azure-native services.
It is recommended that students have:
- Programming experience with Python or R
- Experience developing and training machine learning models
- Familiarity with basic Azure Machine Learning concepts
By the end of this course, students will be able to:
- Operate AI in production with confidence: gain practical skills to deploy, automate, monitor, and optimise both machine learning models and generative AI applications in real-world environments.
- Apply end-to-end MLOps and GenAIOps practices: learn how to manage the complete AI lifecycle, from experimentation and CI/CD through to deployment, evaluation, and operational monitoring.
- Automate and standardise AI delivery pipelines: use CI/CD pipelines and infrastructure automation to reduce risk, improve reliability, and support repeatable AI deployments.
- Improve quality, performance, and cost control: implement structured evaluation, monitoring, and tracing techniques to optimise AI outcomes and support data-driven decision-making.
- Bridge data science, engineering, and DevOps: build the operational skills required to collaborate effectively across teams and deliver enterprise-ready AI solutions.
Module 1: Experiment with Azure Machine Learning
Learn how to manage model experimentation using Azure Machine Learning, including automated machine learning (AutoML), MLflow-tracked notebooks, and responsible AI tools to identify high-quality models early in the lifecycle.
Module 2: Perform Hyperparameter Tuning with Azure Machine Learning
Explore systematic approaches to improving model performance using hyperparameter tuning and sweep jobs within Azure Machine Learning.
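To illustrate the kind of sweep job this module covers, a minimal Azure ML CLI (v2) sweep definition might look like the sketch below; the training script, search space, metric name, and compute target are all placeholder assumptions, not course material:

```yaml
# sweep-job.yml - illustrative sketch; train.py, "accuracy", and cpu-cluster are placeholders
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  code: src
  command: >-
    python train.py
    --learning_rate ${{search_space.learning_rate}}
    --n_estimators ${{search_space.n_estimators}}
  environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
sampling_algorithm: random
search_space:
  learning_rate:
    type: uniform
    min_value: 0.001
    max_value: 0.1
  n_estimators:
    type: choice
    values: [50, 100, 200]
objective:
  goal: maximize
  primary_metric: accuracy    # must match a metric the training script logs via MLflow
limits:
  max_total_trials: 20
  max_concurrent_trials: 4
```

A definition like this would be submitted with `az ml job create --file sweep-job.yml`, and Azure Machine Learning runs the trials and surfaces the best one by the primary metric.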
Module 3: Run Pipelines in Azure Machine Learning
Build reusable components and pipelines to automate training workflows, schedule jobs, and support repeatable, scalable machine learning operations.
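As a sketch of the pipeline style this module teaches, a two-step CLI (v2) pipeline job chaining a data-preparation step into a training step might look like this; the script names, folder layout, and compute target are placeholder assumptions:

```yaml
# pipeline-job.yml - illustrative two-step pipeline; scripts and compute are placeholders
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: train-pipeline
settings:
  default_compute: azureml:cpu-cluster
jobs:
  prep_data:
    type: command
    code: src/prep
    command: python prep.py --output ${{outputs.prepped}}
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    outputs:
      prepped:
        type: uri_folder
  train_model:
    type: command
    code: src/train
    command: python train.py --data ${{inputs.training_data}}
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    inputs:
      # wires the prep step's output folder into the training step
      training_data: ${{parent.jobs.prep_data.outputs.prepped}}
```

The `${{parent.jobs.…}}` binding is what makes the steps reusable and repeatable: each step only declares its own inputs and outputs, and the pipeline wires them together.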
Module 4: Trigger Azure Machine Learning Jobs with GitHub Actions
Implement CI automation that integrates GitHub Actions with Azure Machine Learning to trigger training and operational workflows from source control events.
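A minimal version of this trigger pattern, sketched as a GitHub Actions workflow (the resource group, workspace name, and paths are placeholder assumptions), could look like:

```yaml
# .github/workflows/train.yml - sketch of a CI trigger for an Azure ML job
name: trigger-training
on:
  push:
    branches: [main]
    paths: ["src/**", "jobs/**"]   # only retrain when training code or job specs change
jobs:
  submit-job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Install the Azure ML CLI extension
        run: az extension add --name ml
      - name: Submit the training job
        run: >-
          az ml job create --file jobs/train.yml
          --resource-group my-rg --workspace-name my-workspace
```

Pushing a change under `src/` then submits the training job automatically, so every training run is traceable back to a commit.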
Module 5: Trigger GitHub Actions with Feature-Based Development
Apply trunk-based and feature-based development practices to protect main branches and control how machine learning workflows are activated during development.
Module 6: Work with Environments in GitHub Actions
Use environment-based workflows to manage training, testing, and deployment stages as part of a robust MLOps strategy.
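The staging pattern described here maps onto GitHub environments, which can carry their own secrets and required reviewers. A hedged sketch (the environment names and steps are placeholders):

```yaml
# Sketch: gating deployment stages with GitHub environments
jobs:
  deploy-dev:
    runs-on: ubuntu-latest
    environment: dev          # no reviewers configured, so this runs automatically
    steps:
      - run: echo "deploy to the dev workspace here"
  deploy-prod:
    needs: deploy-dev         # only reachable after the dev stage succeeds
    runs-on: ubuntu-latest
    environment: production   # required reviewers on this environment pause the job for approval
    steps:
      - run: echo "deploy to the production workspace here"
```

Because secrets are scoped per environment, the production credentials are only exposed to the job that has passed the approval gate.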
Module 7: Deploy a Model with GitHub Actions
Automate model deployment to production using GitHub Actions and Azure Machine Learning CLI, supporting continuous delivery of machine learning solutions.
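One possible shape for such a deployment workflow, using the Azure ML CLI (v2) managed online endpoint commands (the endpoint/deployment YAML files and resource names are placeholder assumptions):

```yaml
# .github/workflows/deploy.yml - illustrative sketch, not a definitive implementation
name: deploy-model
on:
  workflow_dispatch:   # deploy on demand; could also chain off a successful training workflow
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - run: az extension add --name ml
      - name: Create the managed online endpoint (use 'update' on subsequent runs)
        run: >-
          az ml online-endpoint create --file endpoint.yml
          --resource-group my-rg --workspace-name my-workspace
      - name: Deploy the registered model and route all traffic to it
        run: >-
          az ml online-deployment create --file deployment.yml
          --all-traffic --resource-group my-rg --workspace-name my-workspace
```

The `--all-traffic` flag sends 100% of requests to the new deployment; splitting traffic between deployments instead is what enables safe blue-green rollouts.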
Module 8: Plan and Prepare a GenAIOps Solution
Understand how to plan generative AI solutions, select appropriate models, and design development lifecycles for production-ready GenAI applications.
Module 9: Manage Prompts for Agents in Microsoft Foundry with GitHub
Apply software engineering practices to prompt management, using GitHub for version control and safe promotion of prompts used by AI agents.
Module 10: Evaluate and Optimize AI Agents Through Structured Experiments
Learn how to design evaluation experiments with clear metrics for quality, performance, and cost, enabling evidence-based optimisation of AI agents.
Module 11: Automate AI Evaluations with Microsoft Foundry and GitHub Actions
Implement automated evaluation pipelines using Python scripts and CI/CD workflows to support continuous quality assurance for generative AI solutions.
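The evaluation pipelines in this module follow the same CI shape as the earlier training workflows. As a hedged sketch, assuming a hypothetical `evaluate.py` script that exits non-zero when quality scores drop below a threshold:

```yaml
# .github/workflows/evaluate.yml - sketch; evaluate.py and its file paths are hypothetical
name: evaluate-agent
on:
  pull_request:
    paths: ["prompts/**", "src/**"]   # re-evaluate whenever prompts or agent code change
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - name: Run evaluations (a failing score fails the pull request check)
        run: python evaluate.py --dataset eval/test-cases.jsonl --report report.json
      - uses: actions/upload-artifact@v4
        with:
          name: evaluation-report
          path: report.json
```

Running evaluations on every pull request turns quality regressions into failing checks rather than production incidents.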
Module 12: Monitor Your Generative AI Application
Monitor live generative AI applications by tracking usage, latency, performance, and cost metrics to inform operational and optimisation decisions.
Module 13: Analyze and Debug Your Generative AI App with Tracing
Use distributed tracing techniques to debug complex workflows and improve the reliability and observability of AI systems in production.