Live Webinar June 22nd, 2017 1:00 PM – 2:00 PM EDT
Activity Type: Education – Course or Training 1 Hour 1 PDU free
Provider: O’Reilly
State-of-the-art machine learning techniques offer extraordinary performance on everything from text analysis to feature classification, but they often function as “black boxes” that obscure their decision-making.
As artificial intelligence makes its way into critical processes in every industry, stakeholders will demand transparency from their models and algorithms.
In this webcast, Andy Hickl will discuss the problem of interpretability in AI and present techniques for building transparent artificial intelligence applications with explainable outcomes.
Presenter: Andy Hickl (LinkedIn profile), Chief Product Officer at Intel Saffron Cognitive Solutions Group, has led research in natural language processing, machine learning, artificial intelligence, computer vision, and ubiquitous computing. His work can be found in the proceedings of AAAI, ACL, SIGIR, and NIPS. Previously, he was a Senior Director for Innovation at Vulcan Inc., where he co-founded the Vulcan Proving Ground and served as Executive Technical Advisor to Paul G. Allen. A serial entrepreneur, Andy has co-founded three startups: A.R.O. Inc. (Vulcan Ventures, Paul Allen), Swingly, and Extractiv. He also served as CEO of Language Computer Corporation, a leading natural language processing company.
Click to register for:
Transparency In AI Decision Making
Technical Project Management | Leadership | Strategic & Business Management |
0 | 0 | 1.0 |
NOTE: For PMI® Audit Purposes – Print Out This Post! Take notes on this page during the presentation, and indicate the date and time you attended. Note any information from the presentation you found useful to your professional development, and place it in your audit folder.