
Episode 166: But Why, AI? Zest AI’s Quest to Make Artificial Intelligence Explainable

In this episode of the podcast (#166): Jay Budzik, the Chief Technology Officer at Zest AI, joins us to talk about the company’s push to make artificial intelligence decisions explainable, and how its technology is helping to root out synthetic identity fraud.


At some point in the last decade, artificial intelligence silently crossed an invisible border that separates “cool future tech” from “technology that’s so ubiquitous we don’t even notice it.”

Today, if you have a smartphone (and use it), it is likely that artificial intelligence is influencing everything from your commute to the office to your choice of restaurant after work, and possibly even your companion at that restaurant. In sectors like healthcare, AI is flagging tumors and other irregularities on X-rays and MRIs. In industries like high finance and banking, artificial intelligence is shaping lending and investment decisions and, of course, being used to spot illegal or suspicious behavior before it becomes costly.

A.I.: fighting bias, or scaling it?

The value of this is huge. According to Gartner, the business value created by artificial intelligence will reach $3.9 trillion by the year 2022 (that’s trillion with a “T”). But with critical, life- and business-sustaining decisions riding on an algorithm, the need to understand not just what an artificial intelligence system decided but why it decided the way it did has become paramount. As researchers have noted: designed or applied improperly, artificial intelligence risks recreating the biases of its authors, then dressing them up as bloodless objectivity. As McKinsey has noted: “AI can help reduce bias, but it can also bake in and scale it.”

“In order for AI to be successful, you have to trust what it’s saying. And in order to trust what it is saying, you have to understand why it is saying that.”

Jay Budzik, CTO at Zest.ai

Concerns about the “why” of algorithmic decision making have led to a push for so-called “explainable AI”: a way to reduce automated decision making to its core elements, allowing humans to make sense of why a particular decision was reached and, where flaws exist in the logic the AI system used, to spot them.
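
To make the idea concrete, here is a minimal, hypothetical sketch in Python. It uses a generic attribution technique for linear models, not Zest AI’s actual product, and the feature names and data are invented for illustration: for a logistic-regression credit model, each feature’s contribution to the decision’s log-odds is simply its coefficient times its value, so the “why” of a decision can be read off directly. Explainable-AI tooling extends this basic idea to more complex models, for example with Shapley-value methods.

```python
# Illustrative only: attributing a toy linear credit model's decision
# to its individual inputs. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income_k, debt_to_income, years_credit_history]
X = np.array([[80, 0.2, 10],
              [30, 0.6, 2],
              [55, 0.4, 6],
              [20, 0.8, 1]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = np.array([40, 0.5, 3], dtype=float)
# For a linear model, each feature's contribution to the log-odds
# is coefficient * value, which makes the decision directly readable.
contributions = model.coef_[0] * applicant
for name, c in zip(["income_k", "debt_to_income", "years_credit_history"],
                   contributions):
    print(f"{name}: {c:+.3f} log-odds")
print(f"intercept: {model.intercept_[0]:+.3f} log-odds")
```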

Our guest this week is an expert on making AI explainable. Jay Budzik is the Chief Technology Officer at Zest AI, a 10-year-old firm that makes artificial intelligence (AI) software for the credit industry.

The company is a pioneer in the area of “explainable AI.” In this conversation, Jay and I talk about what that means, and also about how artificial intelligence and machine learning technologies are being applied to spotting “synthetic identity fraud,” one of the most costly and hardest-to-spot types of fraud in the banking and credit industries.

“In order for predictive models to be successful, they have to come with explanations,” Budzik told me.

Zest’s first application of this technology was in the credit industry, where both customers and regulators demanded explanations about decisions on whether or not to extend credit to a particular applicant.

In this interview, Jay and I talk about explainable AI, the emergence of synthetic identity fraud, and how demand for explainable AI will grow as artificial intelligence technology finds wider application.