The AI Act's gender gap: When algorithms get it wrong, who rights the wrongs?

Anamika Kundu, Columbia Global Freedom of Expression, Columbia University, New York City, United States

PUBLISHED ON: 11 Mar 2024

According to recent studies, generative artificial intelligence (AI) output discriminates against women. When ChatGPT was tested, terms such as “expert” and “integrity” were used to describe men, while women were associated with “beauty” or “delight”. The same pattern appeared when Alpaca, a large language model developed at Stanford University, was used to produce recommendation letters for potential employees. This study, and others, have contributed to official bodies recognising the dire state of gender discrimination in AI. For instance, the Report on Preventing Discrimination Caused by the Use of AI by the Committee on Equality and Non-Discrimination explicitly states that women and other minorities are often discriminated against to a higher degree through the use of AI. It specifically notes that the historical exclusion of women from clinical trials means that their health, and their bodies’ responses to medicine, are less well understood than men’s. The Committee connects this to AI-based systems: because such systems are trained on skewed historical data, they risk further entrenching existing discrimination.

The Committee also proposed solutions in the context of equality and non-discrimination that bear on gender. One is that all grounds of discrimination should be sufficiently covered by non-discrimination laws, with the list of protected grounds kept open: as complete as possible, but not exhaustive. Though the EU AI Act aims to safeguard fundamental rights and address gender discrimination, it lacks comprehensive mechanisms for judicial review. It does not mandate fundamental rights impact assessments, which limits its effectiveness. Despite requiring data sets to be error-free and representative, the Act does not ensure diversity in data, potentially allowing existing biases to persist, and it offers no technical guidance for compliance, which could leave developers struggling, particularly in high-risk areas such as medical diagnosis apps.

This calls attention to the absence of avenues of redress in the AI Act to ensure judicial review where discrimination falls outside existing obligations under Articles 9 and 10 of the Directive on Equal Treatment in Employment and Occupation and Articles 17 and 19 of the Recast Directive. Although the AI Act sets out transparency obligations, it imposes no enforcement obligations where discrimination arises from the application of AI systems. Nor does it place the burden on AI deployers to disprove discrimination, leaving affected people and groups without redress. Since the Act itself does not provide proper redress mechanisms, affected persons must rely on national law, which can impose a higher burden of proof on them and carries its own gaps.

The AI Act seeks to promote and enhance fundamental rights in the Charter of Fundamental Rights of the European Union, especially those dealing with non-discrimination and equality between men and women. Some provisions aim to address gender discrimination in AI systems by creating special rules for technologies that could pose a risk to the health, safety and fundamental rights of individuals. However, the AI Act cannot be seen as an all-encompassing piece of legislation against gender discrimination in AI, as it offers no strategies to resolve the underrepresentation of women in the technology sector. Viewing the Act through a gendered lens reveals how difficult it is to discern prevention of, and protection against, infringements of individual rights rooted in gender.

As mentioned above, there is a notable absence of any requirement for a fundamental rights impact assessment before a system is deployed, which could help shift the burden onto system deployers rather than users when it comes to their rights. The Act repeatedly invokes risks to humans as its primary rationale, yet ‘fundamental rights’ are only briefly mentioned in three articles within the chapter. Additionally, there are no ex-ante fundamental rights impact assessments for all AI systems. The closest semblance of an ex-ante assessment is the certification requirement for ‘essential requirements’ under Chapter III, but this applies only to high-risk AI, which excludes most AI systems. Nor is this the kind of impact assessment developed by the Council of Europe.

Furthermore, Article 10, paragraph 3, under the heading ‘Data and data governance’, states that “training, validation and testing data sets shall be relevant, representative, free of errors and complete”. Such practices would include “proper design choices, data collection, and relevant data preparation processing operations (annotation, labelling, cleaning, enrichment, interoperability, and aggregation)”. However, there is no systematic concern for the impact on algorithmically constituted groups. While risks to groups are occasionally referenced, the main focus is on individuals rather than society as a whole, allowing structural discrimination to grow through gaps related to common and minority interests.
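To make the notion of a ‘representative’ data set concrete, the following is a minimal, purely illustrative sketch of the kind of check a developer might run before training. It is not a method prescribed by the Act; the column name, reference distribution and tolerance are assumptions invented for the example.

```python
# Illustrative sketch only: a simple representativeness check on a training
# set, assuming a pandas DataFrame with a hypothetical 'gender' column.
# The AI Act does not prescribe this (or any) particular method.
import pandas as pd

def group_shares(df: pd.DataFrame, column: str) -> pd.Series:
    """Return each group's share of the training set for one attribute."""
    return df[column].value_counts(normalize=True)

def flag_underrepresented(df: pd.DataFrame, column: str,
                          reference: dict, tolerance: float = 0.05) -> list:
    """Flag groups whose share falls short of a reference (e.g. population)
    distribution by more than the chosen tolerance."""
    shares = group_shares(df, column)
    return [group for group, expected in reference.items()
            if shares.get(group, 0.0) < expected - tolerance]

# Example: clinical-trial style data in which women are underrepresented.
data = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 30})
print(flag_underrepresented(data, "gender",
                            reference={"male": 0.5, "female": 0.5}))
# -> ['female']
```

Even a basic check like this makes the article’s point visible: if the underlying data were never collected, as with women historically excluded from clinical trials, no amount of cleaning or labelling can make the set representative.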

The Act does require developers of such systems to disclose the formulation of relevant assumptions, particularly those concerning what the data is supposed to measure and represent. They are also obliged to ensure appropriate statistical properties, including as regards the persons or groups of persons on which high-risk AI systems are intended to be used. However, even though complete training sets are crucial, they are of little use unless regulation also ensures that a wide variety of data is available. And while ‘representative’ datasets are indeed necessary for non-discriminatory AI systems, the absence of data on certain groups in clinical trials and research remains a huge gap.
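What ‘appropriate statistical properties as regards persons or groups’ might mean in practice can be illustrated with a simple per-group outcome comparison. The sketch below is hypothetical: the column names, the example data and the 0.8 benchmark (the common “four-fifths” rule of thumb) are assumptions for illustration, not requirements drawn from the Act.

```python
# Illustrative sketch only: comparing a system's positive-outcome rate across
# groups (a basic demographic-parity style check). Column names and the 0.8
# benchmark are assumptions made for this example.
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str,
                           outcome_col: str) -> pd.Series:
    """Share of positive outcomes within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 = parity)."""
    return rates.min() / rates.max()

# Hypothetical outputs from a diagnostic support system.
preds = pd.DataFrame({
    "gender": ["female"] * 5 + ["male"] * 5,
    "flagged_for_follow_up": [0, 0, 1, 0, 0, 1, 1, 0, 1, 1],
})
rates = positive_rate_by_group(preds, "gender", "flagged_for_follow_up")
print(rates)                          # female 0.2, male 0.8
print(disparate_impact_ratio(rates))  # 0.25, far below the 0.8 benchmark
```

Checks of this kind are cheap to run, but the Act neither mandates them nor says what a deployer must do when the numbers reveal a disparity, which is precisely the redress gap the article identifies.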

Even though healthcare apps are considered high-risk, the exclusion of generalised applications from the scope of the AI Act remains a major problem from the perspective of algorithmic discrimination. The AI Act is designed to be technology-neutral: it lays down requirements to be complied with, without offering any technical solution for complying with them. Though this is a step forward in regulating AI technologies, it leaves developers to struggle to translate the provisions into practical solutions.

In conclusion, while the AI Act provides for conformity assessments to scrutinise algorithm designs, it lacks redress mechanisms in case of fundamental rights violations. When it comes to the individual rights of women specifically, there are no special provisions, although non-discrimination is mentioned throughout. The current EU regime is oriented more towards compliance by developers and does not acknowledge how AI systems affect the rights of users, especially in critical sectors such as healthcare.
