
How to Employ AI in Credit Decisioning



These are tough times for financial institutions and consumers alike. Traditionally, risk managers have responded to sharp slowdowns by simply reducing risk appetite. That produces an immediate drop in approval rates, in the hope that when the real recession hits, losses will not be too bad. But it also means that many members who genuinely need credit in these tough times will be denied it.

We believe a better, AI-powered framework for making lending businesses recession-resistant and sustainable is now available. Just as it will transform every corner of the economy, AI promises to revolutionize credit risk management.

During RMA’s Risk Readiness Webinar, sponsored by Scienaptic Systems, Inc., Vinay Bhaskar, GM of Risk Solutions at Scienaptic, shared how artificial intelligence (AI) will make risk practices more granular, responsive, and cost-effective.

AI and machine learning use cases are rapidly evolving in a large, untapped credit market that has been held back by old technology. Traditional systems relied on regression models and customer data that was three or four years old; the latest models are built on recent data with over 1,000 predictors and interaction terms. Regulators are starting to embrace these changes, recommending a balanced view that weighs the associated risks.
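
To make the contrast concrete, here is a minimal sketch in Python (scikit-learn) of the two approaches: a traditional logistic-regression scorecard alongside a gradient-boosted model that can pick up non-linearities and interactions across a large set of predictors on its own. The synthetic data, feature counts, and model settings are illustrative assumptions, not a description of Scienaptic's actual models.

```python
# Minimal sketch: a traditional logistic-regression scorecard next to a
# gradient-boosted model that can exploit many predictors and their
# interactions. Data is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "application" data: many candidate predictors, only some informative.
X, y = make_classification(n_samples=10_000, n_features=200, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Traditional approach: a linear scorecard over the raw predictors.
scorecard = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ML approach: boosted trees capture non-linearities and interactions automatically.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("logistic scorecard", scorecard), ("gradient boosting", gbm)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

On a real portfolio, the lift from the richer model is what must be weighed against the explainability and governance requirements discussed next.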

Regulatory and compliance requirements for machine learning adoption include explainability, fair-lending analysis to uncover hidden biases and prevent discrimination, and model documentation and validation.
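
To illustrate the fair-lending piece, the sketch below computes an adverse impact ratio, the approval rate of a protected group relative to a reference group, and checks it against the common "four-fifths" rule of thumb. The column names, data, and threshold handling are hypothetical; real fair-lending analysis is considerably more involved.

```python
# Minimal sketch of one fair-lending screen: the adverse impact ratio, i.e.
# the approval rate of a protected group divided by that of the reference
# group. Column names and data are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("group")["approved"].mean()
air = rates["B"] / rates["A"]  # group B's approval rate relative to reference group A
print(f"approval rates:\n{rates}\nadverse impact ratio = {air:.2f}")

# A ratio below roughly 0.80 (the "four-fifths rule") is a common trigger for
# deeper fair-lending review, not an automatic finding of discrimination.
if air < 0.8:
    print("Flag for fair-lending review")
```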

Institutions need to address processes, systems, and culture to achieve seamless adoption of AI and machine learning. Bhaskar stressed that all three must be interlinked for the most effective implementation and to avoid operational losses.

Bhaskar argued that the time to move forward is now and offered the following strategies for getting started:


  • Boost AI security with validation, monitoring, and verification (a monitoring sketch follows this list).

  • Improve governance with AI operating models and procedures.

  • Create transparent, explainable, and provable AI models.

  • Test for bias in data, models, and human use of algorithms.

  • Create systems that are ethical, understandable, and legal.
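
As a concrete example of the validation-and-monitoring strategy above, this sketch computes a population stability index (PSI), a common check on whether the score distribution seen in production has drifted away from the one the model was developed on. The simulated scores and the 0.25 rule of thumb are illustrative assumptions.

```python
# Minimal sketch of ongoing model monitoring: the population stability index
# (PSI) compares the score distribution seen in production with the one the
# model was developed on. Scores here are simulated for illustration.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI = sum((a - e) * ln(a / e)) over score-distribution bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])   # keep out-of-range scores countable
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)              # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
dev_scores = rng.normal(620, 50, 10_000)    # score distribution at development
prod_scores = rng.normal(600, 55, 10_000)   # score distribution observed in production

print(f"PSI = {psi(dev_scores, prod_scores):.3f}")
# Common rule of thumb: PSI above ~0.25 signals a significant population shift.
```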


Click here to access the recording of the webinar.


