AI in Finance: The Promise and Potential Pitfalls
March 11, 2024
AI is drawing big investments with its promise of new efficiencies, but ethics and regulation remain concerns, according to a Wharton Future of Finance roundtable.
The integration of artificial intelligence in the financial domain offers substantial efficiency gains and enhanced client services. It also promotes financial inclusion and helps reduce data bias. But the technology raises concerns about its ethical use, along with regulatory challenges in addressing risks and ensuring compliance.
Those were the main takeaways from a roundtable discussion in October 2023 titled “Capitalizing on the Potential of Artificial Intelligence,” which was hosted by Wharton’s Future of Finance Forum.
The discussion was co-moderated by Wharton professor Chris Geczy, who is also academic director of the Jacobs Levy Equity Management Center for Quantitative Financial Research, and Cary Coglianese, professor of law and political science at the University of Pennsylvania’s Carey Law School. The roundtable’s participants included executives from Fortune 500 firms, academic experts, and other leaders in finance and AI.
“The current state of artificial intelligence puts us at the edge of something wonderful, something terrible, or both,” Geczy said. “Developers, regulators, and other stakeholders are responsible for guiding the further development of AI in socially and economically beneficial ways. There’s reason for optimism that AI’s potential for good can be realized while limiting its harms.”
AI Opens New Opportunities
The panelists noted that AI ventures are attracting substantial investment, including collaborations with startups and strategic academic partnerships. Financial institutions are also leveraging AI to analyze vast volumes of trading data, providing actionable insights into trade probabilities and enhancing market participation strategies, they said.
Another revolutionary application is AI-assisted liquidity management, which enables precise forecasting of cash positions, an essential input for operational stability and strategic planning. These innovations show that AI’s role extends beyond task automation, providing insights that can alter fundamental business strategies and market interactions, the panelists noted.
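The roundtable did not go into implementation details. Purely as an illustration of what cash-position forecasting can look like at its simplest, the Python sketch below projects future positions from a trailing average of net flows; the column names, window, and horizon are assumptions made for this example, not anything the panelists described.

# Illustrative sketch only: a naive cash-position forecast that assumes future
# daily net flows equal the trailing-window average. Data, window, and horizon
# are invented for this example.
import numpy as np
import pandas as pd

def forecast_cash_position(daily_net_flows: pd.Series,
                           horizon_days: int = 5,
                           window: int = 30) -> pd.Series:
    """Project cash positions forward by horizon_days from a series of
    daily net flows indexed by date."""
    avg_flow = daily_net_flows.tail(window).mean()      # trailing average net flow
    last_position = daily_net_flows.cumsum().iloc[-1]   # position implied by past flows
    future_dates = pd.date_range(daily_net_flows.index[-1] + pd.Timedelta(days=1),
                                 periods=horizon_days, freq="D")
    projected = last_position + avg_flow * np.arange(1, horizon_days + 1)
    return pd.Series(projected, index=future_dates, name="projected_cash")

# Example with synthetic flows
flows = pd.Series([120.0, -80.0, 45.0, -30.0, 60.0],
                  index=pd.date_range("2024-03-04", periods=5, freq="D"))
print(forecast_cash_position(flows, horizon_days=3, window=5))

In practice, such a baseline would be replaced by models that account for seasonality, scheduled obligations, and uncertainty, but the basic structure (historical flows in, projected positions out) is the same.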
A critical aspect of AI integration is understanding and explaining decision-making processes to minimize bias and ensure ethical use. Several institutions have taken proactive steps by establishing principles for ethical AI usage, recognizing the technology’s profound impact on client relationships and market dynamics.
The roundtable focused on other salutary aspects of AI as well, such as its societal impact, particularly in promoting financial inclusion. Financial systems often exclude lower-income households, and AI’s efficiency gains could be instrumental in addressing this disparity, the panelists noted.
The roundtable also recognized efforts to democratize AI, particularly through empowering academic institutions and startups. Major service providers envisage a future where foundational AI models are widely accessible, promoting a democratized ecosystem of safe and compliant AI services, the panelists said.
AI’s potential in identifying and correcting data biases was another significant theme of the discussion. Using existing data sets poses a risk of perpetuating biases. By making mathematical adjustments, AI can help recognize implicit biases, a foundational step toward developing fairer financial systems, the panelists pointed out. Innovative solutions like digital identity technologies offer seamless integration into the financial system, and open finance ecosystems could provide crucial data, driving more inclusive AI algorithms, they added.
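The discussion stayed at a high level, but one concrete example of “recognizing implicit bias” is measuring how outcomes differ across groups in historical data. The Python sketch below computes per-group approval rates and flags a demographic-parity gap; the column names, the synthetic data, and the four-fifths threshold are assumptions for illustration, not details from the roundtable.

# Illustrative sketch only: one simple fairness check (demographic parity) on a
# historical decision data set. Column names and the 0.8 rule-of-thumb threshold
# are assumptions for this example.
import pandas as pd

def approval_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the positive-outcome (approval) rate for each group."""
    return df.groupby(group_col)[outcome_col].mean()

def flags_disparate_impact(rates: pd.Series, threshold: float = 0.8) -> bool:
    """Flag when the lowest group's rate falls below `threshold` times the
    highest group's rate (the common 'four-fifths' rule of thumb)."""
    return rates.min() < threshold * rates.max()

# Example with a tiny synthetic data set
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
rates = approval_rates(decisions, "group", "approved")
print(rates)                          # A: 0.67, B: 0.33
print(flags_disparate_impact(rates))  # True, since 0.33 < 0.8 * 0.67

Detecting such a gap is only the first step; correcting it typically involves reweighting data, adjusting labels, or constraining the model, choices that carry their own trade-offs.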
Challenges Facing AI Adoption
AI integration faces hurdles such as identifying suitable use cases and managing the associated risks. The selection of AI use cases must also consider risk-reward dynamics, focusing on areas where AI’s advantages are clear, the panelists said. Foundational models and benchmarking emerged as the primary areas where firms are assuming responsibility, indicating an industry trend toward shared accountability in AI implementation, they noted. The panelists also pointed to AI’s environmental impact, particularly the significant carbon footprint associated with running large AI models.
AI also presents regulatory challenges: its centralizing tendency contrasts with decentralizing trends in finance such as cryptocurrencies. AI’s novel applications strain existing legal frameworks, often delaying consensus on new rules, the discussants noted. This lag creates potential compliance and operational risks, especially for novel financial products and services. Furthermore, the natural risk aversion of regulatory bodies contrasts with the private sector’s agility, necessitating a more nuanced approach to fostering innovation while mitigating risks, they added.
Navigating international AI regulations is another pressing issue. The panelists pointed to the EU’s AI Act, which aims to create a legal framework that balances innovation and consumer protection. The complexity of AI systems necessitates a broad approach, with debates around specific prohibitions and national sovereignty in AI regulation, they stated. Concerns were also raised about the broad scope of the AI Act, which could conflict with existing laws and create operational challenges for businesses.
However, safe harbors and technical standards could offer “green zones” for compliant operations, the panelists noted.