Regulatory Frameworks for Artificial Intelligence Risk Mitigation
For years, public concern about emerging technologies has centered on the risks of unsupervised use of personal data. With the recent surge of Generative Artificial Intelligence (GenAI), including Large Language Models (LLMs), and with organizations embedding artificial intelligence in products and processes, the attention of the public and regulators has shifted toward the potential for biased decisions by algorithms.
Recent headlines have been marked by AI systems producing biased results. One well-known example is Apple’s credit card algorithm, which was accused of discriminating against women, triggering an investigation by New York’s Department of Financial Services.
In response, regulators, legislators, and standard setters are now developing frameworks to mitigate these risks while maximizing the benefits of AI for humanity.
Building a Collaborative Approach
While each jurisdiction has taken a different regulatory approach, the fundamentals on which detailed AI regulations can be built are discussed below:
Risk-Centric Approach: Regulations should be tailored to AI’s perceived risks, matching compliance obligations proportionally to factors like privacy, non-discrimination, transparency, and security.
Guiding Principles: AI regulations should align with the principles endorsed by the Organisation for Economic Co-operation and Development (OECD), emphasizing human rights, sustainability, transparency, and risk management.
Versatility in Regulation: Jurisdictions recognize the need for both broad, sector-independent AI rules and specialized regulations catering to specific industries.
Harmonized Policy: AI-related rules should be developed alongside broader digital policy, addressing priority topics such as cybersecurity, data privacy, and intellectual property protection, with the EU leading in comprehensive policymaking.
Private Sector Collaborations: Regulatory tools should facilitate private sector collaboration with policymakers on the ethical use of AI and on high-risk AI innovations that require closer oversight.
Corporate Strategies for Navigating the Evolving AI Regulatory Landscape
Alongside regulators and standard setters, private organizations also play a crucial role in shaping the rules and regulations for AI platforms. Based on industry trends, we discuss below a few common practices companies can implement to stay relevant in the rapidly evolving AI regulatory landscape:
Understanding AI Regulations: Organizations need to align their internal AI policies with the regulations issued by regulatory authorities, the markets in which they operate, and other associated supervisory standards.
Strengthening Governance and Risk Management: Establishing clear and robust governance and risk management structures, together with regulatory alliances, can bridge the gap between companies and standard setters. Additionally, actively engaging in conversations with public sector officials and other stakeholders can be helpful for both companies and policymakers.
As growing reliance on AI significantly increases the strategic risks businesses face, companies need to take an active role in writing the rulebook for algorithms. Unless all companies, including those not directly involved in AI development, engage early with these challenges, they will face backlash that erodes profits and undermines the true potential AI could deliver to consumers and society.