Regulating the Transformative Power of AI in Asset Management
Artificial intelligence (AI) is rapidly transforming our world, impacting everything from healthcare and transportation to finance and entertainment.
In asset management, adoption of the technology is set to grow exponentially.
According to Mercer's 2024 global manager survey, "AI integration in investment management", nine out of 10 managers currently use (54%) or plan to use (37%) AI within their investment strategies or asset-class research. A study by Verified Market Research estimates that the market was worth USD 2.78 billion in 2023 and will grow at a CAGR of 37.1% to reach USD 47.58 billion by 2030.
However, alongside its vast potential comes a growing urgency for effective regulation. Concerns around bias, privacy, and safety mean governments and organizations must understand how to harness the power of AI responsibly.
A risk-based approach to AI regulation
One of the most significant trends in AI regulation is the adoption of a risk-based approach. This means that the level of regulatory oversight is proportional to the potential risks posed by an AI system. The European Union (EU) recently took a significant step forward with its AI Act, which categorizes AI systems into four risk levels:
- Unacceptable risk (banned)
- High risk (strict requirements)
- Limited risk (transparency obligations)
- Minimal risk (minimal oversight)
This approach allows for innovation in low-risk areas like spam filters while ensuring stricter controls for high-risk applications like facial recognition or autonomous weapons.
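As a rough illustration of how an organization might triage its own AI systems against these tiers, consider the minimal Python sketch below. The use-case names and tier assignments are illustrative assumptions, not legal classifications, which require case-by-case analysis of the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "minimal oversight"

# Illustrative mapping only; real classification is a legal exercise.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default unknown systems to HIGH so they get reviewed,
    rather than silently waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(triage("credit_scoring").value)  # -> strict requirements
```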
The United States (US) has yet to implement comprehensive AI regulation, but a similar risk-based approach is beginning to emerge through individual states and agencies. After hosting the first global AI Safety Summit in November 2023, the UK government openly stated that it sees no need for broad, risk-based legislation along the lines of the EU AI Act, opting instead for a non-statutory, context-based approach to AI regulation.
Tackling bias and fairness in AI
Bias in AI algorithms can have real-world consequences, leading to discriminatory outcomes in areas such as loan approvals or criminal justice. Regulatory efforts are increasingly focused on mitigating these risks. The EU AI Act mandates that high-risk AI systems be designed and developed to minimize bias and ensure fairness. This includes requirements for data quality, human oversight, and the ability to understand how an AI system arrives at a decision.
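To make this concrete, here is a minimal sketch of one common fairness check, the demographic parity gap: the difference in approval rates between two groups, shown here on synthetic loan-approval data. The metric and data are illustrative assumptions; real bias audits combine several complementary measures, and the AI Act does not prescribe any single one.

```python
import numpy as np

def demographic_parity_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between groups 0 and 1.
    A gap near zero indicates parity on this one, narrow criterion."""
    rate_a = approved[group == 0].mean()
    rate_b = approved[group == 1].mean()
    return float(abs(rate_a - rate_b))

# Synthetic data: 'approved' is a model's loan decision, 'group' a
# protected attribute (both illustrative, not real-world figures).
rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=1_000)
approved = rng.random(1_000) < np.where(group == 0, 0.60, 0.48)

print(f"Approval-rate gap: {demographic_parity_gap(approved, group):.3f}")
```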
The US has seen increased scrutiny of algorithmic bias, with initiatives like the Algorithmic Justice League pushing for greater transparency and accountability in AI development. However, the absence of centralized regulation leaves individual agencies and states to address bias.
UK regulators have already begun to set out rules based on the government's principles-based approach and have been asked to publish an update outlining their strategic approach to AI by April 2024.
Concerns about algorithmic transparency
A key concern around AI is the lack of transparency in the decision-making process. The “black-box” nature of many algorithms can make it difficult to understand how they arrive at a conclusion, raising concerns about fairness and accountability.
Regulations like the EU AI Act are pushing for greater transparency in high-risk AI systems. This could involve allowing users to understand how their data is used in decision-making or providing explanations for AI-generated outputs.
The right to explanation empowers individuals to challenge potentially discriminatory decisions and fosters trust in AI systems. However, achieving effective explainability remains a technical challenge, especially for complex AI models. Striking a balance between transparency and protecting intellectual property is crucial.
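One widely used, model-agnostic way to peer inside a black box is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below applies scikit-learn's implementation to a toy classifier; the synthetic features are stand-ins for whatever inputs a real credit or investment model would use, and nothing here reflects a specific regulatory requirement.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy classification task; the six synthetic features are stand-ins.
X, y = make_classification(n_samples=2_000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy fall when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: mean importance {result.importances_mean[i]:.3f}")
```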
Addressing data privacy and security
AI systems are data-driven, raising concerns about how personal information is collected, used, and secured. Existing data privacy regulations like the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in the US are being re-evaluated in the context of AI. New regulations may address issues like data retention limitations, opt-out mechanisms for automated decision-making, and the specific risks associated with sensitive personal data used in AI applications.
Ensuring data security is also essential. AI systems are vulnerable to cyberattacks, which could compromise vast amounts of data or manipulate AI decision-making for malicious purposes. Regulations will inevitably require developers to implement robust security measures to protect against these threats.
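As one illustration, a common (though by itself insufficient) safeguard is pseudonymizing direct identifiers before data enters an AI pipeline. The sketch below uses a keyed hash; the key name and its storage are assumptions, and pseudonymized data still counts as personal data under the GDPR.

```python
import hashlib
import hmac

# Assumption: in production this key would live in a KMS or vault,
# not in source code, and would be rotated on a schedule.
SECRET_KEY = b"example-key-stored-in-a-vault"

def pseudonymize(identifier: str) -> str:
    """Keyed hash of a direct identifier (e.g. an account number) so
    records can be joined and analyzed without exposing the raw value."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {
    "account_id": pseudonymize("GB29NWBK60161331926819"),  # illustrative IBAN
    "balance": 10_432.17,
}
print(record["account_id"][:16] + "...")
```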
AI’s expanding role in asset management
AI is also affecting the asset management community, offering a powerful toolkit for portfolio managers and investment firms. Here are some key use cases:
- Investment Decisions: AI algorithms can analyze vast amounts of data, including financial statements, news articles, and social media, to identify hidden patterns and predict market trends. This can help portfolio managers make more informed investment decisions and potentially generate greater returns.
- Risk Management: AI can continuously monitor portfolios and identify potential risks in real time. By analyzing market fluctuations, economic indicators, and company news, AI can alert managers to potential problems and suggest adjustments to mitigate risk.
- Portfolio Optimization: AI can be used to optimize asset allocation across different asset classes. By analyzing factors like risk tolerance and investment goals, AI can create personalized portfolios tailored to each investor (see the sketch after this list).
- Alternative Data: AI can analyze “alternative data” sources, such as satellite imagery or credit card transaction data, to gain insights into companies and industries. This can help identify undervalued assets or predict future market trends that traditional analysis might miss.
- Repetitive Tasks: AI can automate many time-consuming tasks in asset management, such as data analysis, report generation, and compliance checks. This frees up portfolio managers to focus on strategic decision-making and client relationships.
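To ground the portfolio-optimization point, here is a minimal sketch of the classic minimum-variance allocation that more sophisticated, AI-driven allocators typically build on. The returns are synthetic, and the closed-form solution ignores real-world constraints such as position limits, short-sale bans, and transaction costs.

```python
import numpy as np

# Synthetic daily returns for four assets (rows = days); illustrative only.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0004, scale=0.01, size=(500, 4))

cov = np.cov(returns, rowvar=False)  # sample covariance matrix (4 x 4)
inv = np.linalg.inv(cov)
ones = np.ones(cov.shape[0])

# Closed-form minimum-variance weights: w = (S^-1 1) / (1' S^-1 1)
weights = inv @ ones / (ones @ inv @ ones)

print("weights:", np.round(weights, 3), "| sum:", round(float(weights.sum()), 3))
```

In practice, an AI-driven allocator would replace the sample covariance with forecast inputs and solve a constrained optimization, but the skeleton is the same.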
AI is a tool, not a silver bullet
For all its promise, AI is a tool, not a silver bullet. Human expertise remains crucial in areas like interpreting results, setting investment goals, and managing risk. The future of asset management lies in a collaborative approach, in which AI empowers human professionals to make better investment decisions.
The ongoing development of AI regulations reflects a global commitment to harnessing this powerful technology responsibly. While challenges remain, the current landscape offers reasons for optimism. The risk-based approach, the focus on bias mitigation, and the emphasis on transparency and data privacy are positive steps toward ensuring trust and accountability in AI. Collaboration between governments, industry leaders, and civil society organizations will be crucial to ensure that AI reaches its full potential.