The good news is, we know why AI works. The bad news is, we don’t always know how it works. We use AI and ML to solve problems where we can’t simply write a solution: problems where the input is too messy for hand-written code or too complicated for humans.
How do you specify what a valid system looks like if you don’t even know how it works? How can you be certain you’ve covered every case for what it should do? How do you make sure you understand the system when even the designers don’t, and how do you do a pre-release test of a system that changes over time?
You can’t, so don’t.
Don’t do it (just) in pre-release. Do it in production, too. And do it with more AI. See, my colleagues and I have a vision: a plan for how testing should work. Humans can’t create certainty about AI performance… but AIs? AIs can.
We’re going to explore why AI can be so difficult to test in a traditional pre-release fashion, and some of the ways it can go wrong. We’ll talk about how bias creeps in and why getting good training data is hard.
Then, we’ll talk about how we plan to fix it by flipping the script. How do you check an AI? With an AI. We’ll describe how production monitoring combines with machine learning to turn live production data into authoritative test data.
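To make that idea concrete before we get there, here’s a minimal sketch in Python with scikit-learn. Everything in it is an assumption for illustration, not a description of any particular product: the helper name mine_regression_tests, the 0.95 confidence floor, and the synthetic stand-in for production traffic. The shape of the idea is what matters: keep only in-distribution, high-confidence production samples, and promote them into a regression suite.

```python
import json
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression


def mine_regression_tests(prod_inputs, model, confidence_floor=0.95):
    """Promote in-distribution, high-confidence production traffic
    into (input, expected-output) regression test cases."""
    X = np.asarray(prod_inputs)

    # Filter out anomalous traffic so we don't enshrine outliers
    # as "expected" behaviour in the test suite.
    detector = IsolationForest(random_state=0).fit(X)
    inliers = detector.predict(X) == 1  # IsolationForest: 1 = inlier

    # Keep only samples the model is confident about.
    probs = model.predict_proba(X)
    confident = probs.max(axis=1) >= confidence_floor

    return [
        {"input": x.tolist(), "expected": int(p.argmax())}
        for x, p, keep in zip(X, probs, inliers & confident)
        if keep
    ]


if __name__ == "__main__":
    # Synthetic stand-ins for real training data, a deployed model,
    # and live production traffic.
    X_train, y_train = make_blobs(n_samples=500, centers=3, random_state=0)
    live_traffic, _ = make_blobs(n_samples=200, centers=3, random_state=1)

    model = LogisticRegression().fit(X_train, y_train)
    suite = mine_regression_tests(live_traffic, model)

    with open("regression_suite.json", "w") as f:
        json.dump(suite, f, indent=2)
    print(f"Promoted {len(suite)} production samples to regression tests")
```

The outlier filter is the important design choice: without it, a burst of anomalous traffic would get baked into the suite as “expected” behaviour, which is exactly the failure mode we’re trying to catch.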
Finally, we’ll talk about how AI can empower a DevOps testing toolchain, not only speeding up releases, but maybe even spotting problems before your team can.
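As one hedged example of what “spotting problems before your team can” might look like inside that toolchain: a drift check that compares live model-output scores against a pre-release baseline. The function name check_for_drift, the alpha threshold, and the synthetic score data below are all hypothetical; the two-sample Kolmogorov–Smirnov test is just one simple way to flag a shifted distribution.

```python
import numpy as np
from scipy.stats import ks_2samp


def check_for_drift(baseline_scores, live_scores, alpha=0.01):
    """Compare live model-output scores against a pre-release baseline.
    A low p-value means the distributions have drifted apart, which is
    worth alerting on before users (or the team) notice a regression."""
    stat, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < alpha, stat, p_value


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.80, scale=0.05, size=5000)  # healthy scores
    live = rng.normal(loc=0.72, scale=0.08, size=5000)      # subtly degraded

    drifted, stat, p = check_for_drift(baseline, live)
    if drifted:
        print(f"ALERT: output drift detected (KS={stat:.3f}, p={p:.1e})")
```

A check like this runs continuously against production, which is the point: the suite doesn’t freeze at release time, it keeps watching.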