STARWEST 2024 Concurrent Session: How to Test AI Systems


Thursday, September 26, 2024 - 11:15am to 12:15pm

How to Test AI Systems

Companies have lost millions because they lacked proper MLOps and testing processes, relying on training metrics like accuracy when software quality goes well beyond that. As the barrier to entry for AI tools like ChatGPT and Midjourney gets lower and companies start building their own AI-powered systems and products, setting quality standards, establishing testing practices, and thinking about ethics and safety have become crucial. Testing these systems goes beyond validation metrics like accuracy, precision, and recall: quality attributes such as behaviors, usability, and fairness need to be tested and measured using both exploratory and automated strategies. Carlos will cover some of the risks and biases that can arise throughout the MLOps pipeline, demonstrate a few techniques for testing a model's behaviors, security robustness, and fairness, and apply them to real-world scenarios and state-of-the-art models. By the end, you will have new ideas and techniques you can use to test your own AI systems and approach these quality attributes from a customer's perspective.
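As a small taste of what "testing beyond accuracy" can look like, here is a minimal, hypothetical sketch of a behavioral invariance test in the spirit of the techniques the session describes: a prediction that should not change under an irrelevant perturbation (here, swapping a person's name) is asserted to stay stable. The `predict_sentiment` function is a placeholder for whatever model or API your system actually calls, not a real library, and the name list and template are illustrative assumptions.

```python
# Hypothetical behavioral (invariance) test: changing the person's name in an
# otherwise identical sentence should not flip a sentiment model's prediction.
# `predict_sentiment` is a stand-in for the real model under test.
import pytest


def predict_sentiment(text: str) -> str:
    """Placeholder for the real model call (e.g., an HTTP request or a pipeline)."""
    return "positive" if "great" in text.lower() else "negative"


TEMPLATE = "{name} did a great job presenting the quarterly results."
NAMES = ["Alice", "Mohammed", "Xiaoming", "Priya", "Carlos"]


@pytest.mark.parametrize("name", NAMES)
def test_sentiment_invariant_to_name(name: str) -> None:
    # Compare each perturbed input against a fixed baseline prediction.
    baseline = predict_sentiment(TEMPLATE.format(name="Alex"))
    perturbed = predict_sentiment(TEMPLATE.format(name=name))
    assert perturbed == baseline, (
        f"Prediction changed when the name was '{name}': {baseline} -> {perturbed}"
    )
```

Tests like this can run in an ordinary test runner alongside the rest of a project's suite, which is one way automated strategies complement exploratory testing of model behavior and fairness.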

Carlos Kidman
Qualiti

Carlos Kidman is a Director of Engineering at Qualiti and was formerly an Engineering Manager at Adobe. He is also an instructor at Test Automation University, with courses on architecture, design, containerization, and machine learning. He is the founder of QA at the Point, a testing and quality community in Utah, and does consulting, workshops, and speaking engagements all over the world. He streams programming and other tech topics on Twitch, runs a YouTube channel, builds open source software such as Pylenium and PyClinic, and is an ML/AI practitioner. He loves fútbol, anime, gaming, and spending time with his wife and kids.