All models are wrong, and when they are wrong they can create financial or non-financial harm. Understanding, testing, and managing potential model failures and their unintended consequences is the key focus of model risk management, particularly for mission-critical or regulated applications. This is a challenging task for complex machine learning models, and having an explainable model is a key enabler. Machine learning explainability has become an active area of academic research and an industry in its own right. Despite all the progress that has been made, machine learning explainers are still fraught with weaknesses and complexity. In this talk, I will argue that what we need is an interpretable machine learning model, one that is self-explanatory and inherently interpretable. I will discuss how to make sophisticated machine learning models, such as neural networks (deep learning), self-explanatory.
Agus Sudjianto
Agus Sudjianto is an executive vice president, head of Model Risk, and a member of the Management Committee at Wells Fargo, where he is responsible for enterprise model risk management.