
AI Act published: empowering BAs and UX Designers in ethical AI

6 August 2024

On 12 July 2024, the AI Act – the EU regulation on artificial intelligence – was officially published in the Official Journal of the EU. This comprehensive legal framework addresses the risks and challenges associated with AI, positioning Europe as a global leader in AI regulation. It provides clear guidelines for developers and users, ensuring compliance with fundamental rights, safety standards, and ethical principles.

Background

Proposed by the European Commission in April 2021 and agreed to by the European Parliament and the Council in December 2023, the AI Act is part of a broader strategy to promote trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI.

Together, these initiatives aim to protect people’s rights, foster innovation, and encourage the widespread adoption of AI technologies across the EU. The Act’s rules promote the safe and ethical development of AI systems, ensuring transparency and accountability both within Europe and globally.


Key benefits

The AI Act offers several key benefits for businesses and individuals by categorising AI systems based on risk levels. It protects fundamental rights, ensures safety, and upholds ethical standards in AI applications, particularly for powerful AI models.

This legislation is part of a broader EU strategy to support AI development, reduce the regulatory burden on businesses (especially SMEs), and encourage widespread adoption and investment. It aims to position Europe as a global leader in the responsible use of AI with:

  • Enhanced safety and protection: clear guidelines ensure the safe development and deployment of AI systems, protecting individuals from harmful applications.
  • Promoting trustworthy AI: the Act enforces transparency, accountability, and ethical standards, building public confidence in AI technologies.
  • Support for innovation and competitiveness: it creates a balanced regulatory environment that encourages innovation while safeguarding fundamental rights, promoting investment in AI.
  • Clear compliance framework: the Act provides clear compliance requirements, reducing legal uncertainty for businesses, particularly SMEs.
  • Protection of fundamental rights: by categorising AI systems, it ensures respect for privacy and non-discrimination, preventing unethical practices like social scoring.
  • Encouragement of ethical AI practices: it emphasises ethical considerations, promoting inclusivity, accessibility, and fairness in AI applications.
  • Global leadership and standard-setting: the Act positions the EU as a leader in AI regulation, influencing international standards.
  • Mitigation of risks: the Act addresses potential risks, such as biased decision-making and misuse of AI, aiming to prevent negative societal impacts.

These benefits highlight the AI Act’s role in building a secure, transparent, and innovative AI ecosystem, ensuring responsible development and use of AI technologies.


Risk categories and prohibited practices

The AI Act categorises AI systems into different risk levels, establishing a comprehensive framework to protect fundamental rights and ensure safety.

It categorises potential risks into one of four categories (a short illustrative sketch follows the list below).

Figure: Potential risk categorisation according to the AI Act

  1. Unacceptable risk: AI practices considered to violate fundamental EU values, which are therefore banned.
  2. High risk: AI systems that could significantly impact safety or fundamental rights. This includes safety-critical components, systems assessing eligibility for services (e.g., loans, jobs), and applications used by law enforcement.
  3. Specific transparency risk (limited risk): AI applications requiring transparency, especially where manipulation is possible (e.g., deepfakes or chatbots). Users must be informed when they are interacting with a machine.
  4. Minimal risk: most AI systems fall under this category and can be developed without additional obligations. Providers are encouraged to adhere to voluntary codes of conduct for trustworthy AI.
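
To make the tiering concrete, below is a minimal, purely illustrative sketch of how a delivery team might run a first-pass triage of its own AI use cases against these four categories. The tier names mirror the Act, but the screening questions and the `RiskTier` and `classify_use_case` names are our own simplification, not an official compliance tool; a real assessment must follow the Act’s annexes and legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict obligations (e.g. loan or job eligibility)
    TRANSPARENCY = "transparency"  # disclosure duties (e.g. chatbots, deepfakes)
    MINIMAL = "minimal"            # no extra obligations; voluntary codes of conduct

def classify_use_case(prohibited_practice: bool,
                      affects_safety_or_rights: bool,
                      interacts_with_or_imitates_humans: bool) -> RiskTier:
    """Simplified first-pass triage inspired by the AI Act's four tiers.

    The three boolean screening questions are an illustrative shortcut,
    not a substitute for a proper legal classification.
    """
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if affects_safety_or_rights:
        return RiskTier.HIGH
    if interacts_with_or_imitates_humans:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a customer-service chatbot that neither decides eligibility for
# services nor touches safety-critical functions lands in the transparency tier.
print(classify_use_case(False, False, True))  # RiskTier.TRANSPARENCY
```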

The most potentially dangerous AI systems and practices are prohibited outright under the AI Act, protecting the welfare of all citizens. Here is a non-exhaustive list of the main prohibited practices under the AI Act:

  • Subliminal and manipulative techniques
    e.g. online advertising that uses subliminal visual or audio stimuli to manipulate users’ behaviour so that they purchase a product they would not otherwise buy.
  • Exploiting the weaknesses of individuals
    e.g. an AI system that targets advertisements for high-interest loans at people in financial distress, using their desperation to push them into unfavourable financial commitments.
  • Social scoring
    e.g. a system that rates citizens based on their social media activity and uses this data to restrict access to public services, such as healthcare or education, based on their ‘social score’.
  • Biometric categorisation
    e.g. an AI system that analyses facial images on social media to deduce information about users’ sexual orientation, which can lead to discrimination and privacy violations.
  • Untargeted scraping of facial images
    e.g. technology that collects CCTV footage from streets and public places without people’s consent in order to expand a facial recognition database.
  • Emotion recognition in work and education
    e.g. AI systems used in schools to monitor and assess students’ emotional state during lessons, which can lead to unfair assessment of their performance and an invasion of their privacy.


How might this impact the future of BA and UX roles?

As more AI-enabled solutions and systems are implemented, the role of AI Ethicist grows in importance, and both Business Analysts (BAs) and UX designers have a natural aptitude for this role in IT projects.

If the role of overseeing the protection of rights and freedoms is not assigned to dedicated personnel, this important task of upholding ethics will likely fall to the BAs and UX designers shaping digital solutions. This is a task that comes with great responsibility, and one not to be taken lightly. Here are some of the tasks of this role (a small illustrative sketch follows the list):

  • Establishing functional and non-functional requirements that align with ethical principles such as transparency, privacy, security, fairness, and non-discrimination.
  • Evaluating planned and existing AI projects to identify potential ethical risks and impacts on users and society.
  • Assessing how AI systems may affect different societal groups and proposing measures to mitigate negative impacts.
  • Designing AI systems to be transparent and explainable, ensuring that AI-driven decisions are understandable and justifiable to end-users and stakeholders.
  • Ensuring AI systems are designed to be inclusive and accessible, catering to the needs of diverse users, including those with disabilities.
  • Ensuring compliance with relevant regulations and standards, such as the AI Act, and reporting on ethical compliance.
  • Documenting ethical analyses, decisions, and the rationale behind them.
  • Reporting on ethical assessments, mitigation strategies, and compliance status to relevant stakeholders.
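
As a purely illustrative aid, the sketch below shows one way a BA or UX designer might capture such ethical requirements as structured, reviewable records rather than loose prose, so each one can be traced, tested, and reported on. The `EthicalRequirement` structure and the example entries are hypothetical, not a template from the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalRequirement:
    """One ethics-related requirement, phrased so compliance can be verified."""
    principle: str      # e.g. transparency, fairness, accessibility
    statement: str      # the testable requirement itself
    risk_if_unmet: str  # what goes wrong for users if it is skipped
    evidence: list[str] = field(default_factory=list)  # audit trail / documentation

backlog = [
    EthicalRequirement(
        principle="transparency",
        statement="Users are told at first contact that they are talking to a chatbot.",
        risk_if_unmet="Breach of the Act's disclosure duty; loss of user trust.",
    ),
    EthicalRequirement(
        principle="fairness",
        statement="The loan-eligibility model is tested for disparate outcomes "
                  "across protected groups before each release.",
        risk_if_unmet="Discriminatory decisions in a high-risk system.",
    ),
]

for req in backlog:
    print(f"[{req.principle}] {req.statement}")
```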


Steps to implementation

The AI Act’s provisions become applicable in several stages:

  • 12th July 2024
    The AI Act was published in the Official Journal of the EU.
  • 1st August 2024
    The AI Act officially comes into force. This marks the beginning of the regulatory framework’s application across the EU.
  • 2nd February 2025
    Key provisions regarding prohibited practices and obligations for AI literacy become enforceable. This includes the prohibition of AI systems deemed to pose “unacceptable risk,” such as those involving manipulation, social scoring, or untargeted scraping of facial images.
  • 2nd August 2025
    Obligations for General Purpose AI (GPAI) models and penalties for non-compliance come into effect. This date also marks the application of transparency requirements for certain AI systems.
  • 2nd August 2026
    The AI Act’s provisions extend to high-risk AI systems, which include detailed safety and transparency obligations. This stage involves the implementation of mandatory requirements for these systems, ensuring they align with the regulation’s standards.
  • 2nd August 2027
    Full implementation of the AI Act is achieved. All provisions, including specific obligations for high-risk systems and technical requirements, become fully enforceable.

With this staggered implementation, companies and other entities have the opportunity to prepare for full compliance with all regulations, including aligning their AI systems with the new requirements and conducting appropriate testing and validation in accordance with Annex IV of the AI Act.
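
For teams tracking readiness, even a tiny sketch like the one below can make the staged dates actionable, for example by flagging which obligations already apply today. The milestone list mirrors the timeline above; the `milestones_in_force` helper is our own illustrative name, not part of any official tooling.

```python
from datetime import date

# Staged application dates of the AI Act, as listed in the timeline above.
MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Prohibitions and AI literacy obligations apply",
    date(2025, 8, 2): "GPAI obligations, penalties and transparency rules apply",
    date(2026, 8, 2): "High-risk AI system obligations apply",
    date(2027, 8, 2): "Full implementation",
}

def milestones_in_force(today: date) -> list[str]:
    """Return every milestone whose application date has already passed."""
    return [label for when, label in sorted(MILESTONES.items()) if when <= today]

print(milestones_in_force(date(2025, 3, 1)))
# ['Act enters into force', 'Prohibitions and AI literacy obligations apply']
```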


Summary

The introduction and implementation of the EU’s AI Act is a groundbreaking development. It not only alleviates concerns about the rapid acceleration and widespread impact of AI but also establishes a robust legal and regulatory framework for governments and private businesses. This regulation eases the financial and operational burden on small-to-medium-sized companies, placing responsibility on those best equipped to oversee compliance.

As we navigate the uncertainties of AI’s future impact, the AI Act provides a much-needed roadmap for Europe and the world to address the ethical and practical challenges of integrating this technology into our lives. Moreover, this new landscape offers a significant opportunity for Business Analysts (BAs) and UX Designers to evolve their roles toward becoming AI Ethicists. This evolution presents a promising career path, allowing us to shape the future of IT by ensuring the ethical and responsible use of AI. It’s a chance to develop our expertise and find our place in the evolving tech industry.
