
Four positives from the EU’s pioneering law on artificial intelligence

February 13, 2024

Since ChatGPT put AI technology directly in the hands of the public, advocates and skeptics alike have grappled with the question of how to harness the benefits of AI while mitigating its risks. To what extent can a technology that is becoming ever more ubiquitous truly be regulated? The EU has pioneered the world’s first attempt, reaching agreement on a comprehensive law – the AI Act – on February 2, 2024.

The AI Act adopts a risk-based approach, categorizing AI systems into risk levels ranging from “limited” to “unacceptable”, with corresponding requirements that run from minimal transparency obligations to outright bans. For example, bans will target practices such as cognitive behavioral manipulation and untargeted facial image scraping, while stringent requirements will be imposed on high-risk AI as a condition of EU market access. Law enforcement will be able to use AI subject to safeguards, and general-purpose AI systems will have to meet transparency obligations, especially those with significant impact such as GPT-4.

These are positive developments from the perspective of protecting human rights, privacy, and safe use. However, critics maintain that the AI Act will impose an additional regulatory burden on developers and companies, which could slow innovation and impede economic development in Europe. There is also a danger that the regulations will consolidate AI development in the hands of major companies, which are the only ones big enough to afford compliance.

For those who may be skeptical about the AI Act, we offer the following reasons to see the glass half full.

1. The AI Act will build trust and adoption

Resistance to change stands out as a primary hurdle in the integration of AI within businesses. The same could be said of the initial reaction to the General Data Protection Regulation (GDPR) in 2018, which was met with concerns about compliance costs and the impact on business operations. Over time, however, GDPR has been widely acknowledged for raising the bar for data protection, in Europe and globally.

The AI Act offers a structured framework for transparent, responsible development and implementation of AI. This will cultivate trust among stakeholders and stimulate the adoption of AI technologies within companies. The establishment of clear communication channels regarding the objectives and goals of the AI Act, coupled with robust mechanisms for oversight and enforcement of compliance, further reinforces trust in the system.

2. The AI Act will ensure ethical use of AI

In recent years, certain AI applications have sparked significant privacy and ethical concerns, such as the unauthorized collection and use of personal data, or the use of AI in political advertising and biased recruitment practices. By tackling issues like algorithmic bias, safeguarding data privacy, and ensuring human oversight, we can effectively mitigate the potential risks and harms associated with the deployment of AI. No matter how innovative an AI application is, its development should always adhere to ethical and responsible standards.

3. In the long run, the AI Act will not be a regulatory burden

For a long time, many companies have struggled with the absence of regulatory guidance on AI, which has impeded innovation in regulated functions like compliance. The AI Act now provides much-needed clarity on regulatory expectations, empowering businesses to navigate the regulatory terrain with greater precision and thereby minimizing uncertainty and potential compliance costs. Moreover, it advocates a ‘Compliance by Design’ approach that integrates ethical considerations and regulatory mandates into the very fabric of AI system design and development.

The initial phase of implementation may pose challenges, affecting the roles and responsibilities of developers, model risk management, and compliance teams, and potentially resulting in additional costs. Over time, however, the gradual establishment and refinement of frameworks will strengthen compliance efforts. The regulatory burden should therefore lessen as companies progressively build out their compliance infrastructure.

Much like GDPR, which initially faced criticism for its stringent rules but eventually became a global standard for data privacy, the AI Act may initially appear to be a regulatory burden. Yet it is likely to foster a similar shift in perception and practice. As companies adapted to GDPR, they not only improved their data handling practices but also gained the trust of their consumers. The AI Act has the potential to replicate this success by setting a high standard for AI ethics and safety, eventually becoming a global benchmark for other nations to strive towards.

4. The AI Act will encourage innovation

The AI Act does set boundaries for AI development, but it leaves room for innovation by providing the right framework for it. It encourages advances in research and development, particularly in areas like explainable and ethical AI and bias mitigation, promoting the creation of innovative solutions that adhere to ethical principles. Additionally, the AI Act provides for the establishment of coordinated AI "regulatory sandboxes" to encourage innovation throughout the EU.

It is undeniable that the challenges of complying with the AI Act are significant and will affect a wide range of players. To comply and avoid substantial fines, companies need to start preparing now by understanding the Act and assessing its impact on their organization, business, and teams. They should begin by taking stock of the AI systems they develop or use, and seek advice where the right skills are missing internally.

At FRA, we have been engaging in discussions and working groups on responsible and explainable AI with the aim of helping our clients innovate and build trustworthy AI-based compliance tools. We are also leveraging our in-house AI tools to provide our clients with innovative capabilities to identify and mitigate risk in disputes, investigations, and compliance matters.

A remaining question is how strong the impact of the AI Act will be outside the EU, and whether it will have a “Brussels effect”. Many countries outside the EU are developing their own AI regulations, but it remains to be seen how strict those will be and how closely they will align with the EU AI Act. To be continued...
