Neutralizing bias in financial decision-making – can AI help?

Investment decisions are always tough calls. The market is flooded with investment strategies grounded in fundamentals as well as in behavioral finance. Both types of strategies have had their successes and failures, depending on the larger economic conditions. In the last two years, however, at the onset of the pandemic, the stock markets rose to all-time highs after an initial fall, despite the significant adverse impact on economic activity. That forced us to rethink the influence of human behavior on the investment decision-making process as a whole, since human decisions are subject to individual and societal bias.


The rise of technology, particularly Artificial Intelligence (AI), is contributing to this decision-making process with data-driven insights. AI’s decisions are based on correlations and statistics drawn from millions of data elements, and its interpretation of the data is, in principle, without prejudice.

#artificialintelligence #cognitivebias #algorithmicbias #modelbias #databias

Will AI’s algorithms be an antidote to human biases, enabling more holistic decisions? Or will they make investment decisions worse, since they may not factor in the nuances of human behavior? Or will machine-learning models learn the human biases and replicate them, producing consequences not thought of?

Researchers usually identify different investment attributes as strong or weak indicators of overall decision-making capability, though this applies to conventional investors – by conventional we mean investors who are part of a homogeneous system and have comparable access to market information. Plenty of studies have already tried to establish a constructive relationship among the multiple self-limited and self-discriminated variables that affect overall investor buying behavior. We were keen to dig deeper and understand whether the pandemic has made any significant change in those numbers, more precisely from a statistical-inference perspective. One interesting finding is the predominance of cognitive biases and analogous heuristics: investors appear to be affected by ancillary, extraneous parameters introduced during the pandemic, and these are driving overall buying behavior. This is an intriguing observation and leaves plenty of room for further research into the umbrella effect.

We spent considerable time examining the interrelations among the indirect variables through a researcher’s lens. While analyzing the ‘bandwagon effect’ – better known as ‘herd behavior’ among investors in the capital market – we could see the indelible presence of cognitive dissonance. Investors tend to take comfort in a known yardstick even when it is not relevant from the immediate transaction standpoint.

Psychologically, this is a distraction from rational perception and gives rise to conflicting beliefs. If we look back at the genesis of this behavior, inconsistency in ROI is a paramount trigger. Analysts will say that has been a generic cause for years, which is a fact; however, we wanted to extrapolate it from the behavioral-finance angle, because it has huge potential for understanding the causal relationships – so that when we design an Artificial Intelligence-led investment platform in the future, these parameters will help us minimize the noise to a considerable extent. As per the self-standard model, we end up being normalized either way. If the initial assessment of a stock’s price, compared to a known and universally accepted reference point, is up to our personal standard, it may transform into self-esteem moderation – an idiographic dissonance arousal. If the difference persists, it passes through the normative standard and transmutes into no self-esteem moderation – an example of nomothetic dissonance arousal. Knowing these relationships in detail will help in designing a high-accuracy decision-support system capable of using predictive models.

Let’s move on to technology-aided financial decision making. From robo-advisors helping with investment decisions, to evaluating credit applications, to underwriting insurance, AI algorithms have evolved over the years. Increased efficiency, faster turnaround, and better experiences are common knowledge, and research suggests the resulting decisions can be broader and fairer.

Quoting Andrew McAfee of MIT, “If you want the bias out, get the algorithms in.” AI can reduce the impact of human beings’ subjective interpretation of data: ML algorithms learn from the available data sources, enhancing their predictive accuracy and making decisions fairer over time. These are the desirable outcomes of technology. Dig deeper, however, and the underlying picture is not always the same.

AI runs on algorithms and is trained on data, and evidence shows bias in outcomes – both algorithmic bias (model bias) and data bias.

Research by Joy Buolamwini and Timnit Gebru, “Gender Shades” (2018), found errors in facial-recognition technologies and showed how the outcome differed by race and gender: accuracy in detecting white faces was much higher, while the systems almost failed on Black faces, and particularly on Black women. Similarly, ProPublica investigated the use of AI in the US justice system and found the system biased because it failed to demonstrate “balance for the false positives” – comparable to the way Facebook recently failed to detect “misinformation” in subjective contexts.
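The “balance for the false positives” that ProPublica examined can be made concrete: for each group, compute the rate at which people who did *not* reoffend were nonetheless flagged high-risk. A minimal sketch, using entirely hypothetical records (not the actual COMPAS data):

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

def false_positive_rate(rows, group):
    # Among members of `group` who did NOT reoffend,
    # what fraction were still flagged as high-risk?
    negatives = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

fpr_a = false_positive_rate(records, "A")
fpr_b = false_positive_rate(records, "B")
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
```

When these per-group rates diverge – as they do in this toy data – the system imposes the cost of wrongful flags unevenly, which is the imbalance ProPublica reported.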

In financial decision making, this may lead to approval of loans for a segment of applicants who are more vulnerable to default under any immediate adverse economic condition, damaging their credit standing and creating a domino effect. Similar scenarios may crop up in investment decisions, such as favoring commonly traded stocks over value investing. Since algorithms work on statistics and correlation, they may favor data overload while overlooking survivorship bias (looking at things that survived when the focus should have been on things that didn’t), leading to “algorithmic bias,” where the algorithm itself is the main source of the issue. Another case in point is the addictive algorithms deployed by social media platforms. Since the onset of the pandemic, with economic indicators dependent on a global health crisis caused by a virus, the efficacy of the algorithm becomes critical. Because of their self-learning nature, as algorithms adopt and implement their learnings for efficiency and efficacy, the underlying bias gets deployed at scale, unnoticed and unchecked.
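The survivorship-bias trap above is easy to demonstrate numerically. In this sketch the fund names and returns are invented, and delisted funds are crudely treated as a total loss – the point is only that averaging over survivors paints a far rosier picture than averaging over the full universe:

```python
# Hypothetical annual returns; None marks a fund delisted mid-period.
returns = {
    "fund_a": 0.12, "fund_b": 0.08, "fund_c": None,  # delisted
    "fund_d": 0.10, "fund_e": None,                  # delisted
}

# Averaging only the funds that survived (what a naive screen sees).
survivors = [r for r in returns.values() if r is not None]
survivor_avg = sum(survivors) / len(survivors)

# Including delisted funds, treated here as a -100% outcome (a crude assumption).
full_universe = [r if r is not None else -1.0 for r in returns.values()]
universe_avg = sum(full_universe) / len(full_universe)

print(f"survivors only: {survivor_avg:+.2%}, full universe: {universe_avg:+.2%}")
```

An algorithm trained only on the surviving funds would learn from the +10% picture and never see the losses that defined the real distribution.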

It is not always the algorithm that causes the bias; the underlying data might. If we train an AI on inaccurate data, it will give us biased results. Diversity of the data pool is one of the vital cogs, as the algorithm trains and learns from it – if we train the system on dirt, we can’t get beauty in the outcome. Though there is no causal relationship between race and the ability to make mortgage payments, studies have shown higher rejection rates for racial minorities; other studies show gender bias in financial underwriting. When self-learning AI systems work on data inflicted with human bias, they run the risk of amplifying it over time. With automated, efficient processing, these AI systems can impact a much larger customer segment in a short span before being detected.
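The amplification risk comes from the feedback loop: a system that retrains on its own past decisions keeps reinforcing whatever skew the historical data started with. The following is a deliberately simplified toy simulation – the groups, starting rates, and drift rule are all invented to illustrate the mechanism, not any real underwriting model:

```python
# Hypothetical: two equally creditworthy groups, but historical decisions
# approved group "Y" less often. A naive self-learning loop that treats its
# own past approvals as ground truth drifts the rates further apart.
approval_rate = {"X": 0.7, "Y": 0.5}  # biased historical starting point

def retrain(rates, rounds=5, step=0.05):
    rates = dict(rates)
    for _ in range(rounds):
        # Feedback loop: the favoured group accumulates more positive
        # training examples, nudging its learned approval rate up,
        # while the disfavoured group drifts down.
        favoured = max(rates, key=rates.get)
        other = min(rates, key=rates.get)
        rates[favoured] = min(1.0, rates[favoured] + step)
        rates[other] = max(0.0, rates[other] - step)
    return rates

final = retrain(approval_rate)
print(final)
```

The initial 20-point gap widens every round, which is why a bias that would stay small in a static model can scale quickly – and quietly – in a continuously retrained one.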

As some regulators pointed out:

“Algorithms shouldn’t have an exemption from our anti-discrimination laws” – MAS

“If we are increasingly going to use the assistance of, or delegate decisions to, AI systems, we need to make sure these systems are fair in their impact on people’s lives” – European Commission

“For consumers, the main risk from broad employment of AI technologies is discrimination” – BaFin

With AI quietly seeping into our lives – from facial recognition in mobile phones, to digital assistants like Google Home, Siri, and Alexa, to virtual assistants almost everywhere – the necessity of reducing bias has become paramount. However, it calls for greater effort across stakeholders.

With this understanding of behavioral science, AI technologies, and the current business and economic scenario, it can go either way – reduce bias or amplify bias. At the same time, AI can’t replace humans across the financial decision-making process either; the human touch still plays a significant role in a moment of crisis. So it is important to consider the nuances of human judgement and societal contexts while AI supports data-driven insights – with humans and machines working together to produce holistic financial decisions.

About the authors:

Dr. Manas Panda, Ph.D., is a partner in a leading technology MNC, advising banks and financial institutions on implementing their digital transformation strategies with a focus on customer experience and operational efficiency. A Stanford LEAD alumnus, he talks about technology innovations in financial services. He is based in Toronto, Canada.

Dr. Manas Panda can be contacted at:

E-mail | LinkedIn


Mr. Raja Basu is a senior consulting professional in a leading technology MNC. In his current role he works as a business architect, helping clients enable their digital transformation journeys. He has a special interest in the responsible use of AI and is pursuing doctoral studies (Ph.D.) at XLRI Jamshedpur. He is based out of Kolkata, India.

Mr. Raja Basu holds the following licenses & certifications:

https://www.linkedin.com/in/basuraja/details/certifications/

Mr. Raja Basu volunteers with the following international associations & institutions:

https://www.linkedin.com/in/basuraja/details/volunteering-experiences/

Mr. Raja Basu has received the following honors & awards:

https://www.linkedin.com/in/basuraja/details/honors/

Mr. Raja Basu can be contacted at:

E-mail | LinkedIn


Also read Mr. Raja Basu’s earlier article: