As the race to adopt AI intensifies, many organizations prioritize speed and scale but often overlook a critical factor: trust. In this fireside chat, Amy Williams, Senior Manager of Brand and Communications, joins Dr. Michael Wu, Chief AI Strategist at PROS, to explore what responsible AI really means and why it’s more urgent than ever.
Together, they break down the core pillars of responsible AI, including fairness, transparency, and explainability. Dr. Wu shares how PROS has developed and deployed dozens of explainable AI algorithms that give users clear insights into how decisions are made. The conversation dives into how explainability not only builds confidence in AI but also accelerates adoption by demystifying the “why” behind the output.
In a world where the AI arms race is heating up, PROS sets the standard for responsible development. As Dr. Wu explains, the companies that win in the long run will be the ones that balance innovation with accountability.
Video Highlights
- 0:41 – Why has responsible AI become such a critical topic?
- 2:25 – Can you talk about the components you would put together for responsible AI?
- 10:44 – Given explainability is connected to being trustworthy, what is PROS doing in terms of explainability?
- 12:31 – How many explainable AI algorithms does PROS have?
- 14:15 – Can AI adoption build more trust?
- 15:32 – What is next for AI?