Balancing efficiency and public interest: The impact of AI automation on social benefit provision in Brazil

Maria Alejandra Nicolás, Federal University of Latin American Integration, Foz do Iguaçu, Brazil
Rafael Cardoso Sampaio, Federal University of Paraná, Curitiba, Brazil

PUBLISHED ON: 30 Sep 2024 DOI: 10.14763/2024.3.1799

Abstract

This article examines the implementation of artificial intelligence (AI) systems by Brazil's National Social Security Institute (INSS) to automate the granting of social benefits. Using audit reports from government agencies, it analyses the efficiency improvements brought about by AI, such as the speed with which benefits are granted, as well as the unintended consequences of this automation, such as the increase in the number of automatic denials and the creation of barriers for less digitally literate users, disproportionately affecting the most vulnerable populations. The research points to the need for transparency, public justification, adequate risk monitoring tools, governance design, and participation in the implementation of these systems to ensure that they serve the public interest and promote equity. The paper argues that without proper regulation and consideration of ethical principles, AI automation could exacerbate inequalities and undermine trust in public services. The authors conclude by stressing the importance of a balanced approach that weighs technological innovation against the public interest.
Citation & publishing information
Received: Reviewed: Published: September 30, 2024
Licence: Creative Commons Attribution 3.0 Germany
Funding: The authors received funding from the Coordination for the Improvement of Higher Education Personnel (CAPES) Brazil and the National Council for Scientific and Technological Development (CNPq).
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Artificial intelligence, Public interest, Governance, Social benefit provision, Brazil
Citation: Nicolás, M.A., & Sampaio, R.C. (2024). Balancing efficiency and public interest: The impact of AI automation on social benefit provision in Brazil. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1799

This paper is part of AI systems for the public interest, a special issue of Internet Policy Review guest-edited by Theresa Züger and Hadi Asghari.

Introduction

The central objective of this article is to examine the implementation of an automated decision-making system in the Brazilian pension system, focusing especially on the use of AI systems and automation in the delivery of social benefits at the National Institute of Social Security (INSS). The system evaluates citizens' requests by integrating social security regulations as defined by law with the INSS's own systems. Additionally, the technology considers the information provided by citizens at the time of their benefit application. The following benefits are granted and administered through this automated platform: urban and rural retirement by age, retirement by contribution time, urban and rural death pension, urban and rural prison assistance, inclusion assistance for people with disabilities, assistance benefit for people with disabilities, assistance benefit for the elderly, and urban and rural maternity leave (INSS, 2023b).

Despite the sophistication and widespread implementation of such systems, Brazil lacks an AI regulatory framework; this absence has not been an obstacle to the deployment of numerous AI systems across the Brazilian public administration, especially in the judiciary and the federal executive (Toledo & Mendonça, 2023; TCU, 2022; Transparência Brasil, 2021). This unregulated expansion raises critical questions about algorithmic decision-making in the public sector, especially in terms of safeguarding the public interest. Various authors (Bozeman, 2007; Cochran, 1974; Feintuck, 2002; Held, 1970) have discussed this concept and its relationship with democracy, particularly in relation to political decision-making. The term "public interest", like other concepts in political theory, lacks a universally accepted definition and is subject to ongoing debate, particularly concerning the identification of what constitutes the public interest (Züger & Asghari, 2023). Bozeman (2007) argues that, within a procedural and ideal-normative framework, the public interest refers to the outcomes that most effectively promote the long-term survival and well-being of a social collective, understood as a public. The author highlights that, while public policies may be designed to better serve the public interest, this interest is not universally predetermined; rather, it is defined situationally for each individual issue.

This article draws on the theoretical discussion of the concept of public interest to examine the implementation of the AI system at the National Social Security Institute (INSS). It is the responsibility of the INSS to determine eligibility for rights, meaning that it must assess whether a person is entitled to social security benefits and other benefits provided (such as maternity leave, death pension, sickness benefit, and prison assistance, among others). Furthermore, the INSS is responsible for maintaining the benefit, which involves ensuring that payment continues as long as the person meets the requirements. The implementation of this system stems from the need to automate the process of granting social benefits in response to the growing "waiting list", i.e. the backlog of cases initiated by citizens. The system uses predictive algorithms to make decisions regarding the granting of social security rights, analysing thousands of cases simultaneously, as well as consulting laws and government databases.

According to the INSS, the benefits of implementing an AI system at the agency are evident in the speed with which benefits are granted to individuals who keep their data up to date in the system. For instance, this translates into the granting of death pensions in a record time of 12 hours (INSS, 2023c). Furthermore, its implementation led to a substantial rise in the rate of automatic benefit analysis by the INSS, increasing from 17% in 2022 to 36% in 2023.

However, the use of automation in delivering benefits faces criticism from various sectors, including INSS administrative staff, who advocate careful analysis of applications. In addition, external audits by Brazilian state control bodies show that the system increased the number of requests for assistance but also led to a significant increase in automatic rejections, often due to problems with the databases and the system's difficulty in handling user requests. While increasing efficiency in the provision of public services is an advantage, the rise in automatic refusals means that fewer people have access to social security benefits. The main negative effect is the lack of protection for the rights of citizens who depend on benefits to support themselves and their families. Many of them, because of the long time spent on the waiting list or the expectation of a refusal, end up taking their case to court, having to pay legal fees when they are not entitled to a public defender. The automated decision-making system does not seek ways to accommodate the particularities of specific situations, a form of discretion exercised by public servants.1 Instead, it makes decisions based on the available databases and legislation, which may contain errors.

This article explores the implementation of an AI system at the INSS based on a series of dimensions relating to the public interest, such as public justification, equality and human rights, deliberation processes, technical standards, and openness to validating the system (Züger & Asghari, 2023). Based on audits by Brazilian state control bodies and institutional documents, we evaluate whether the INSS AI fulfils these prerequisites to be considered in the public interest. Furthermore, we discuss how efficiency cannot be the exclusive motivator for a public interest system because a public interest justification differs from private and purely economic interests.

AI systems for the public interest

The literature on AI systems for social good (AI4SG) has grown exponentially in recent years, especially in North America and Europe (Floridi et al., 2018; Mukherjee et al., 2022; Tomašev et al., 2020; Vieweg, 2021). These studies focus on AI systems that promote positive social impacts and improve society's well-being. In other regions, particularly in the Global South, studies examine AI systems and the processes of datafication, algorithmisation, and automation that are increasingly becoming hegemonic. These works also critically reflect on the risks associated with the implementation of such systems in democracies, particularly concerning discrimination, oppression, and the exacerbation of inequality (Arun, 2020; Mendonça et al., 2023; Ricaurte, 2022). Floridi et al. (2020) analyse a series of examples of AI4SG projects and point out that these systems seek not only to solve problems but also to prevent problems that negatively affect human life or the natural world, as well as to pursue sustainable development. The authors define AI4SG as:

the design, development, and deployment of AI systems in ways that (i) prevent, mitigate or solve problems adversely affecting human life and/or the wellbeing of the natural world, and/or (ii) enable socially preferable and/or environmentally sustainable developments. (Floridi et al., 2020, p. 1773)

These systems seek to predict, mitigate, or solve social problems (Cowls, 2021). This is no simple task, considering that, from a contemporary perspective, social problems are not objective phenomena but socially constructed realities. Their definitions emerge from intricate political negotiations and power dynamics within society. Every definition of a social problem is therefore a strategic representation of a situation (Stone, 2011). Thus, what would be the best solution to a social problem? The literature on AI4SG indicates that these intelligent systems have the potential to contribute to solving complex social problems (Mukherjee et al., 2022). Bondi et al. (2021) emphasise that no single set of factors can determine whether an AI project serves the social good. To this end, the authors propose the participation of the community affected by the AI system in assessing its possible social benefits. They therefore suggest analysing the context and perspectives of the affected communities when defining and evaluating the impact of AI for social good.

On the other hand, a group of authors focus their research on public interest AI (PIAI) systems. These authors share with the AI4SG approach the goal of investigating AI systems capable of promoting long-term and sustainable well-being. However, research on PIAI is theoretically grounded in discussions of the concept of public interest by several scholars (Downs, 1962; Held, 1970; Cochran, 1974; Bozeman, 2007). They also argue that the implementation of AI systems that serve the common good requires democratic and political governance centred on the public interest. They therefore suggest shifting the focus from values to creating a governance system (Züger & Asghari, 2023). These studies emphasise the relevance of democratic and political governance, as well as the discussion of criteria for implementing AI systems so that they effectively serve the public interest (Züger & Asghari, 2023; Züger et al., 2022; Wikimedia Deutschland, 2023).

The term public interest has various interpretations, depending on the theoretical perspective adopted. Authors who analyse PIAI base their considerations on a concept of public interest linked to the rule of law and democracy. In this view, the collective interests of groups of citizens (which depend on the issue or area of application of the AI system) should play a key role, rather than private interests (Wikimedia Deutschland, 2023). PIAI approaches converge on some notable points. The public interest is not predefined and universal, although it can guide the implementation of public policies (Bozeman, 2007). The PIAI perspective encourages the development of a deliberative process: decision-making is the result of collective construction, in which diverse perspectives and interests are confronted through deliberation, and social learning results from this process of co-creation (Züger & Asghari, 2023). The importance of negotiation and of considering the needs and perspectives of the community in question is highlighted, emphasising the dynamic and contextual nature of the public interest. This understanding is essential to guide the development of AI that meets the values and needs of the public interest in various situations and contexts. It also emphasises the importance of decisions based on fair, inclusive, and transparent procedures. In addition, it is important to promote the co-participation of different actors to overcome conflicts and achieve results that reflect collective well-being.

In empirical terms, authors who discuss AI systems in the public interest list a series of dimensions for analysing the design and implementation of AI projects, namely public justification, equality and human rights, deliberation/co-design process, technical safeguards, and openness to validation. According to Züger & Asghari (2023), the "public justification" dimension refers to the need to justify the choice of an AI system to the public. For an AI system to serve the public interest, "first of all a justification to the public is necessary, to argue why the technology is not developed for the mere sake of innovation or commercial benefits but to serve a common public interest" (Züger & Asghari, 2023, p. 819). The explanation must be based on democratic principles (guaranteed rights and agreed objectives) and point out how such a system can solve a social problem, considering the other options available. Furthermore, this justification differs from private and purely economic interests, because PIAI projects must serve equality, whereas projects aimed at private interests tend to work against a truly participatory design. The organisation that decides to implement an AI system must be able to inform and explain to citizens the reasons for the choice.

The "equality and human rights" dimension is based on the concept that equality refers to the principle of fairness and ethics, as well as the extent to which AI systems reduce prejudices (relating to gender, race, and other social groups) in data sets and algorithms (Floridi et al. 2020). At this point, the design needs to be inclusive, open, and without access barriers and promote diversity and the integration of disadvantaged population groups (Züger & Asghari, 2023). Therefore, "it is important for the public interest to avoid outcomes that – despite presenting a technically working solution – go against justice or shift power in an unwanted direction" (Züger & Asghari, 2023, p. 819).

The "deliberation/co-design process" dimension indicates that the public needs to be known for deliberation on interests and justifications through the various available channels. Identifying the audience requires reflecting on those affected by the AI system: developers, users, and those affected by AI decisions. In short, deliberation must allow the public's interests to be debated. Züger et al. (2022) state that transparency is relevant in the deliberative process and the justification, decisions, and infrastructure used for the AI system. What is more, the active participation of the public in other phases, for example, before or during the AI implementation process, an evaluation at these stages can provide feedback that is useful for identifying errors or flaws in the system (Züger & Asghari, 2023).

The "technical safeguards" dimension implies that an artificial intelligence system oriented towards the public interest must adhere to standards of quality and privacy for the data it uses. Several countries have enacted data protection laws, which provide a starting point for the operation of automation systems around the world. At this point, it is important to monitor the functioning and results of the system to avoid failures that could lead to security problems, as well as results that generate some kind of prejudice or social injustice.

Finally, the "openness for validation" dimension refers to the inspection and validation of the AI system's results by external actors (those other than the ones implementing the system). This is relevant for identifying errors or system failures. Such errors can cause and reproduce injustices and perpetuate structural prejudices. Algorithmic racism makes visible how decisions in the use of algorithms and AI are not neutral: "the development of algorithmic technologies feeds on social history to offer a pretence of artificial intelligence. But this artificial 'disintelligence', which actualises oppressions such as structural racism, is sold as neutral" (Silva, 2023). On the other hand, just as democratic decisions are documented and open to citizen evaluation, AI systems must follow similar standards to guarantee the democratic validation of their operations (Züger & Asghari, 2023).

The interest in this research arose when news reports published in the Brazilian media (Gercina, 2023; Tagiaroli, 2023) highlighted the large number of automatic benefit denials made by the Isaac AI system. This contradicted earlier reports about the launch of the system, which indicated that it had been developed with the public interest as a priority, as it promised to significantly reduce queues for citizens.

The Public Interest AI Systems (PIAI) framework provides a comprehensive approach to evaluating the use of AI in the INSS, going beyond the search for efficiency by considering the social and democratic implications of these technologies. It emphasises the need to ensure that automation does not compromise constitutional rights, such as fair access to benefits, and highlights the importance of public justification, transparency, and citizen participation in the implementation of AI systems. This is particularly relevant in the context of social benefits, where automated decisions can have a significant impact on the lives of vulnerable people and where it is essential to ensure that the pursuit of efficiency does not lead to inequality or discrimination.

Thus, based on two audits performed by the Brazilian governmental control bodies TCU (Tribunal de Contas da União)2 and CGU (Controladoria Geral da União),3 we aim to answer how the implementation of Isaac in Brazil's social security system meets the criteria outlined in the theoretical framework of Public Interest AI Systems.

Automation in the delivery of INSS benefits

Algorithmic decision-making in public management refers to steps or instructions used to solve a particular problem or task through AI systems. Algorithms assist or replace human decision-making, and public sector automation is used in a variety of areas, including the delivery of public services (Filgueiras & Almeida, 2020). The use of AI systems and automation in delivering social benefits at the National Social Security Institute (INSS), the subject of this article, dates back to 2017. The INSS is a federal autarchy, that is, "a legal entity under public law with an exclusively administrative capacity" (de Mello, 2015, p. 164). According to Brazilian legislation, autarchies possess administrative independence within legal boundaries, are not under the hierarchical authority of any governmental entity, and enjoy financial self-governance. However, they are subject to the control and supervision of the internal and external control bodies of the federal executive (de Mello, 2015). The INSS is linked to the Ministry of Social Welfare, and its purpose is to ensure citizen protection through access to social security benefits and services linked to Social Welfare and other federal government social policies. It is in charge of granting, maintaining, reviewing, suspending, assigning, and supervising pensions, maternity benefits, death pensions, sickness benefits, accident benefits, imprisonment benefits, and other benefits for those legally entitled to receive them. Like any federal autarchy, it is not exempt from macro-political influences, such as economic measures, nor from micro-political dynamics, which involve internal disputes and conflicts within the organisation itself.

Considering Brazil's population of around 210 million inhabitants and its continental size, the INSS is one of the organisations with the greatest territorial reach, with more than 1,500 branches across the country and more than 29,000 civil servants assigned to the agency. Currently, Brazil has around 37 million beneficiaries of social security services, with more than 700,000 new applications per month and around 500,000 benefits granted (INSS, 2023a). The "Meu INSS" (My INSS) platform gives citizens access to social security policies via the internet and mobile devices, with more than 100 digital services and approximately 36 million monthly visits. Since 2005, the INSS has been working on digital modernisation in conjunction with Dataprev, a state-owned company that develops and implements technologies in social security and social assistance.

In 2017, the INSS began to automate the analysis of applications for the Urban Age Retirement benefit. In 2018, it began automating the analysis of applications for the Urban Maternity Allowance and the Contribution Time Retirement benefit (CGU, 2023). Dataprev developed the automation process at the INSS using solutions based on artificial intelligence. One of the databases used for the automation process is the National Social Information Register (CNIS), maintained by Dataprev and made up of more than 42.3 billion pieces of data on individuals and companies, as well as all employment relationships, social security contributions, and benefits (Dataprev, 2023).

In 2019, Dataprev officially unveiled its AI system, "Isaac", which uses predictive algorithms to decide on recognising social security rights. According to a Dataprev report, this AI system is based on machine learning, uses OCR (Optical Character Recognition, a technology designed to identify and extract text from images, including scanned documents and photographs) to read several types of documents and analyses thousands of processes simultaneously, cross-referencing various databases and providing a reliable, assisted, or automatic response (Dataprev, 2023). As published by the INSS (2022), the implementation of an AI system in the automation of the granting of benefits consists of the immediate recognition of requests that meet pre-established criteria without the need for manual analyses for decision-making. This procedure aims to speed up the analysis of requests, reduce the waiting time in the concession queue, and direct the work of civil servants to the most complex cases.

It should be noted that the name Isaac is no longer mentioned by the INSS or Dataprev. Nor does it appear in the audit reports on the INSS decision automation process carried out by the Office of the Comptroller General (CGU), the internal control body of the Brazilian federal executive, which exercises auditing, correction, ombudsman, and corruption prevention functions, or by the Federal Court of Auditors (TCU), the external control body of the federal executive in Brazil, which assists the National Congress in monitoring the country's budgetary and financial execution. We were unable to find any details about why the original name was abandoned and replaced with general terms such as artificial intelligence or automation systems. To maintain clarity, we decided to continue using the term "Isaac" to refer to this particular system.

In the years that followed, the INSS implemented a series of automation actions to optimise its workforce in the recognition of rights. The highlight is the progressive expansion of automated benefit delivery since 2021 through the inclusion of more benefits analysed automatically, namely retirement by age, retirement by contribution time, death pension, prisoner's allowance, disability inclusion allowance, continuous cash benefits for disabled people and the elderly, and maternity pay (INSS, 2023c).4

AI system in the delivery of INSS benefits and the public interest

This section analyses the Isaac system and the automation of benefit delivery at the INSS, based on the dimensions of public interest AI systems: (1) public justification, (2) equality and human rights, (3) deliberation/co-design process, (4) technical safeguards, and (5) openness for validation. The INSS website was unclear about how the system worked, and the relevant information could often only be found scattered across different documents. We made several attempts to contact INSS managers by email to interview them but received no response. We also made a request under the Access to Information Act for more details about the AI that makes these automatic decisions, but the request was denied on the grounds of system security.

In order to gain a comprehensive understanding of the subject matter, we conducted a document analysis based on two audits conducted by the Brazilian governmental control bodies TCU (Tribunal de Contas da União) and CGU (Controladoria Geral da União) (TCU, 2021; CGU, 2023). These audits provided invaluable insights into the INSS routines and systems, complemented by additional information sourced from the INSS website and documents. We then reevaluated the diverse assessments and findings from these audits through the lens of the Public Interest AI Systems (PIAI) framework, following the guidelines of a deductive content analysis (Schreier, 2014). Each principle of the PIAI framework was treated as a category, and different excerpts from the two reports were classified accordingly. This coding process was carried out by both authors, with any disagreements resolved through discussion until consensus was reached. This approach allowed us to recontextualise the technical, operational, and administrative evaluations within a broader ethical and social perspective, providing a more nuanced understanding of how Isaac aligns with public interest considerations in AI implementation.

Public justification

As far as the "public justification" dimension is concerned, the use of Isaac to automate the granting of benefits is a response to the accumulation of processes for manual analysis at the INSS. The decision was, therefore, aimed at eliminating the queues for face-to-face appointments to apply for benefits, which have been done digitally since 2017, as announced on its website: "The automated analysis of benefit requests is one of the actions that Social Security has adopted to reduce the response time for citizens requesting a service or benefit." (INSS, 2023d). Despite this, there was no clear and open explanation provided to the public about how and why AI is utilised for these objectives and which specific technologies would be implemented.

Usually, when a Brazilian citizen applies for a benefit, the INSS collects data from government databases, social security regulations, and the answers provided by the applicant, deciding whether the application is accepted, denied, or sent to public servants. Nevertheless, this only works properly when the database information is correct. If the insured person's record is incomplete or incorrect, if there are discrepancies in the data provided, or if specialised analysis by an INSS official is required, documents must be submitted so that the benefit claim can be properly analysed. Applicants can submit these documents via the "My INSS" website or app (INSS, 2022).
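To make this workflow concrete, the following sketch illustrates the kind of triage logic described above: decide automatically only when the registry data can be trusted, and otherwise route the case to a civil servant. It is a minimal illustration based solely on the description in this section; the field names, thresholds, and eligibility rules are hypothetical and do not reproduce Isaac's actual implementation.

```python
# Illustrative sketch of the triage described above; field names, thresholds,
# and rules are hypothetical, not Isaac's actual logic.
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    GRANTED = "granted automatically"
    DENIED = "denied automatically"
    MANUAL_REVIEW = "routed to a civil servant"


@dataclass
class Application:
    benefit_type: str          # e.g. "urban_retirement_by_age"
    applicant_age: int
    contribution_months: int   # as recorded in the CNIS register
    cnis_complete: bool        # no missing or conflicting CNIS entries
    answers_consistent: bool   # applicant's answers match the registry data


# Hypothetical statutory thresholds, not the real rules.
RULES = {
    "urban_retirement_by_age": {"min_age": 65, "min_contribution_months": 180},
}


def triage(app: Application) -> Outcome:
    """Decide automatically only when the registry data is trusted;
    otherwise send the case to manual analysis."""
    if not app.cnis_complete or not app.answers_consistent:
        return Outcome.MANUAL_REVIEW
    rule = RULES.get(app.benefit_type)
    if rule is None:  # benefit type not covered by automation
        return Outcome.MANUAL_REVIEW
    meets_age = app.applicant_age >= rule["min_age"]
    meets_time = app.contribution_months >= rule["min_contribution_months"]
    return Outcome.GRANTED if meets_age and meets_time else Outcome.DENIED


if __name__ == "__main__":
    print(triage(Application("urban_retirement_by_age", 66, 200, True, True)))
```

The sketch makes visible why data quality is decisive in such a design: a single incorrect flag in the registry shifts a case from an automatic grant to manual review or, in a less cautious configuration, to an automatic denial.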

From a purely quantitative point of view, the total number of decisions taken by the INSS each year (granting and rejecting, whether manually or automatically) increased from 7.0 million in 2006 to 10.3 million in 2022, which suggests that the system is achieving the objective of efficiency (reducing response time for citizens requesting a service or benefit and, therefore, increasing the number of people served) for which it was created. In January 2021, two per cent of the 378,000 applications filed by citizens with the INSS were decided automatically; in December 2022, 45% of the 405,000 applications filed were decided automatically (Junior et al., 2022). Isaac is not capable of performing the more precise calculations required for service time credit for hazardous conditions (related to harmful substances and dangerousness), the specific requirements for teachers (whose retirement depends on the year of entry into the career, the gender of the applicant, and the exercise of teaching functions in education), qualification as a disabled person, or contributions to other social security regimes. In these situations, the application is analysed manually by civil servants.

In 2022, the number of automated decisions grew to 1,325,387, more than twice the previous year's figure. In May 2023, the agency set a record for the highest number of applications decided automatically since the implementation of AI in the analysis of benefits: 42% of applications were concluded automatically, which equates to more than 222,000 benefits (INSS, 2023b).

The process depends largely on the correct registration of all the information in the CNIS (National Register of Social Information). This large database stores data on active civil servants, retirees, and pensioners for the Management Information System. In other words, an inconsistency in the register may lead to the rejection of the benefit claim, forcing the citizen to provide additional documentation to resolve the problem.

Thus, compared to previous years, the increase in automated decisions has also disproportionately increased the number of automatic rejections, many of which would not occur in a face-to-face appointment with an INSS civil servant. In 2022, the INSS automatically rejected more than 800,000 applications, a figure more than 300 per cent higher than in 2021, causing dissatisfaction and increasing the need for appeals. Of the total applications denied in 2022 (automatically or manually), 200,009 were appealed to the Social Security Appeals Board (CRPS) (6.8% of rejections), and 50,464 of these appeals (25.2%) concerned automatic decisions (CGU, 2023).

Furthermore, the use of such automated systems was not formally planned (CGU, 2023); it was more of an incremental process. It began with the use of CNIS data to replace the need for claimants to present documents as proof, together with the introduction of electronic benefit applications. These actions gradually led to the automation of the process from start to finish, from the submission of the application to the acceptance or denial of benefits. In other words, the high demand for the service and the long queues of beneficiaries led the INSS to automate the service internally, which means there was no proper planning for identifying risks.

It can therefore be said that there was a publicly defensible justification for using AI in the INSS decision chain. Nevertheless, there was no public justification in the sense of informing and explaining to citizens the reasons for the choice and why it was better than the alternatives. It was not a public process involving various actors from government, civil society, or even users to think about how to modernise the system and reduce queues. Apparently, it all came down to very practical and operational considerations.

As the CGU report (2023) rightly points out, in addition to the serious damage to Brazilian citizens, other Brazilian institutions are directly affected, such as the Federal Public Defender's Office (DPU), the Federal Public Prosecutor's Office (MPF), and the Judiciary itself, which have to deal with appeals and the possible judicialisation of automatic decisions. Furthermore, the use of AI tools can negatively affect the image of the institution itself.

Equality and human rights

The main issue with the INSS's automated system is that it generates major drawbacks regarding Brazilian citizens' rights under the constitution and increases inequality in the country. While on the one hand, it eliminates the face-to-face queue and significantly reduces the waiting time to have a claim assessed, on the other hand, it considerably increases the number of automatic rejections.

These problems seem to stem particularly from the cost of processing essential information. The burden of handling the high volume and intricacy of cases, previously borne by INSS civil servants, has been shifted directly onto the citizens seeking benefits, and this shift is compounded by a design that is inadequate for users.

By way of example, according to the CGU audit (2023), the main reason for the INSS refusing to grant urban maternity pay in 2022 was the lack of "leave from paid employment or optional membership, from the start of the leave". Compared to 2021, the number of rejections due to non-termination of work or activities increased from 7,064 to 60,379 in 2022, with 51,883 (or 85.9%) being automatic decisions. The main reason for this sudden increase lies in how the application form for the Urban Maternity Allowance is filled out: the applicant must answer "yes" or "no" to the question about leave, and answering "no" results in automatic refusal. According to the CGU audit, the question about time off work could be misinterpreted as referring to definitive time off, an error that would not happen with human assistance.
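As a purely illustrative sketch, the snippet below shows how a single literal form answer can be mapped directly onto an automatic denial, leaving no room for the interpretation a human attendant could apply. The function name and wording are hypothetical and do not reproduce the actual INSS rule engine.

```python
# Hypothetical illustration: one misread form question becomes an automatic denial.
def maternity_allowance_decision(on_leave_from_paid_work: str) -> str:
    """'on_leave_from_paid_work' is the applicant's literal 'yes'/'no' answer."""
    if on_leave_from_paid_work.strip().lower() != "yes":
        return "automatically denied"  # no human checks whether the question was understood
    return "continue eligibility checks"


print(maternity_allowance_decision("no"))  # automatically denied
```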

In addition, the automatic system is based on information available in the CNIS, but this database often does not have complete records. Among various other situations, the register may contain incorrect dates (admission or termination of the employment contract, which are often left blank), incorrect amounts (the amounts recorded in the system differ from those actually received), incorrect documents (wrongly typed documents, different married/divorced names, missing employment contracts), and even more than one CNIS registration for the same worker, created by mistake either by the worker or by their employers.
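The sketch below suggests, under stated assumptions, what consistency checks over such records might look like. The field names and rules are illustrative of the inconsistency types listed above (missing dates, mismatched amounts, duplicate registrations); they are our assumptions, not the actual CNIS validation logic.

```python
# Hypothetical consistency checks over CNIS-like employment records.
from datetime import date
from typing import Optional


def check_employment_record(admission: Optional[date],
                            termination: Optional[date],
                            recorded_amount: Optional[float],
                            declared_amount: Optional[float]) -> list[str]:
    """Return the inconsistencies that would block a reliable automatic analysis."""
    issues = []
    if admission is None:
        issues.append("missing admission date")
    if admission is not None and termination is not None and termination < admission:
        issues.append("termination date earlier than admission date")
    if recorded_amount is not None and declared_amount is not None \
            and abs(recorded_amount - declared_amount) > 0.01:
        issues.append("recorded amount differs from declared amount")
    return issues


def has_duplicate_registration(cnis_ids: list[str]) -> bool:
    """A worker with more than one CNIS identification cannot be matched reliably."""
    return len(set(cnis_ids)) > 1


print(check_employment_record(date(2015, 3, 1), date(2014, 1, 1), 1500.0, 1320.0))
print(has_duplicate_registration(["123.456-0", "654.321-0"]))  # True
```

In a system that decides claims automatically, each of these flags would either have to trigger manual review or, as described above, result in a rejection that forces the citizen to provide additional documentation or appeal.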

Based on data from the CNIS, the TCU (2021) conducted an external control audit and discovered an elevated level of risk associated with the automatic granting of benefits. The records showed that 24,306,894 entries for individuals contain incomplete, invalid or inconsistent information. This suggests that either the data entry controls or the underlying CNIS systems are flawed. A significant problem with CNIS data is the lack of traceability, as changes can be made without the possibility of checking their history. The audit concluded that the lack of access to structured data from the INSS CNIS harms social security management, as it hinders monitoring, risk management, inspection, and the proper granting of benefits.

Another risk of generating inequalities lies in the fact that the system may grant the benefit request but set a pension lower than what is due. This amount can only be changed on appeal, although the beneficiary will often not be aware that it results from an error in the system. One case even caught the attention of the national media, when a person's application for retirement was rejected in six minutes, with the INSS claiming that the information had not been correctly provided at the time of the application (Gercina, 2023).

However, some of the organisation's actions are aimed at providing information on social security services and automation. Examples include infographics and explanatory tutorials on the automatic benefits analysis process (INSS, 2022), as well as simulators on the My INSS portal that guide insured people on their contribution status and benefit requirements in order to speed up and qualify the process of analysing and granting pensions and other benefits.

There is also a free-of-charge phone number (135), where automated answer menus make it easier to understand these demands and the data needed to apply for benefits. It is even possible to speak to human attendants if desired. However, benefits can only be requested through the app or the INSS website.

All these issues tend to have an even greater impact on the poorer sections of the population, especially given the digital inequalities still strongly present in Brazil. Research by CETIC (2023), a Brazilian public body that studies digital inclusion in the country, shows that in 2022, 86% of Brazilians accessed the internet during the year. However, regional and especially income factors directly affect this connection, which drops to 80% in rural areas and 77% for individuals earning up to one minimum wage per month. The same logic applies to connection quality, which is slower, more unstable, and based exclusively on mobile phones for a large part of the poorest population. Therefore, one could assume that tutorials on the website and the phone service have limited impact on populations with less ability to deal with the digital world and with poorer digital connections.

Hence, the complexity of such a system tends to affect the poorest sections of the population more directly, as well as, of course, the older generation, who often have more limited digital skills (CETIC, 2023). One of the CETIC (2023) survey questions asks whether the participant has used a digital government service related to "Workers' rights or social security, such as INSS, FGTS, unemployment insurance, sickness benefit, or retirement" in the last 12 months. Overall, 33% of respondents said they had done so; among respondents over the age of 60, however, only 19% had, and 85% of this age group said they did not access digital services because they preferred to resolve the issue in person.

In short, this is not an open, accessible AI system that promotes diversity and the integration of marginalised groups; in fact, its results are unfair and biased against these groups. It is interesting to note that this discrimination does not arise from bias in the system itself but rather from the inadequate structuring of information. As a result, many citizens are misled by the way the information is presented. Such problems tend to be more common among low-income workers, who tend to work in multiple jobs and for multiple companies throughout their lives.

Deliberation/Co-Design Process

The lack of an open deliberation process and of co-participation in developing and implementing artificial intelligence (AI) systems at the INSS raises significant concerns. Audits conducted by the Federal Court of Auditors (TCU) and criticisms from trade unions and civil society organisations point to structural and efficiency problems in the AI systems used by the INSS, highlighting the need for a more inclusive and participatory approach. The various stakeholders directly or indirectly affected by AI have not been consulted to better understand their needs and difficulties.

A multisectoral monitoring committee, regular audits, continuous feedback channels, and transparency are essential to ensure fairness and accuracy, to prevent automated errors, to meet user demands, and to avoid unjustified rejections. The active participation and collaboration of different stakeholders, including beneficiaries, technology professionals, social rights experts, and civil society representatives, could bring several advantages. Firstly, including diverse voices in the development process could ensure that the design of AI systems effectively meets the needs of end users, avoiding the automated errors and unjustified rejections observed previously. In addition, transparency and openness during the development of the systems could increase public confidence in the automated analysis process, mitigating concerns about the fairness and accuracy of AI decisions. Brazil has institutionalised instances of social participation and external control, particularly mechanisms such as public policy councils, online public consultations, and public hearings. These procedures could have been used effectively in implementing automation at the INSS.

Technical safeguards

The process of automating benefit claims was the result of a gradual evolution, not necessarily planned. It began with using CNIS data instead of requiring documentary proof from the applicant and implementing online benefit applications. This subsequently made it possible to automate from start to finish, from the application to the acceptance or refusal of the benefit.

The CGU audit (2023) highlights poor governance as one of the main problems with the automation of processes at the INSS. Although the agency established its governance system through Ordinance No. 3,213 (2019) and its Risk Management Policy through Resolution No. 5 (of 28 May 2020), neither document details the structure responsible for the automation process at the INSS. Only Decree No. 10,995/2022, which regulates the INSS's regimental structure and job descriptions, establishes that the Benefits and Citizen Relations Directorate is responsible for defining rules and requirements for service and benefit automation systems in conjunction with the Information Technology Directorate. Surprisingly, an internal INSS unit called the "Automation Social Security Agency" (APSAUT), which has worked directly on automating the analysis of benefit applications since 2022, is not provided for in the INSS internal regulations.

This governance problem shows that no clear structure is responsible for the automation process, which creates serious gaps in planning, risk identification, and operational mitigation. The lack of clear governance also makes it difficult to develop effective instruments for monitoring and controlling automation and its results. Thus, one of the report's conclusions is that there is no clear definition of the level of risk the organisation is willing to accept in order to achieve its objectives.

The lack of standardisation and governance, as well as full automation, continues to cause problems that directly harm the rights of Brazilian citizens, especially when they are denied benefits to which they are constitutionally entitled or receive lower amounts due to system errors. As mentioned, the CNIS database has inconsistencies of various kinds, and INSS regulations guarantee individuals the right to have their benefit claims analysed with regard to the inclusion, alteration, ratification, or exclusion of conflicting or insufficient information in this database. Immediate rejection therefore makes it difficult to comply with these provisions, particularly when people request changes or additions to their records in the social security system.

Another point highlighted in the CGU report is the lack of civil servants dedicated exclusively to the automation system (Gercina, 2023; Tagiaroli, 2023). Only two civil servants work exclusively in the automation units at the INSS. The situation is similar at the management level: the General Coordination of Systems and Automation (CGAUT) and the Coordination of Service Systems and Automation (CSAA) each have only one civil servant manager. According to the CGU, apart from the civil servants directly assigned to these units, eight employees from other divisions, totalling 12, work directly or indirectly with the INSS's automated decision-making systems. This figure contrasts sharply with the total of 19,510 active INSS employees in 2023. Considering the weak governance, the CGU concludes that the INSS's upper echelons appear insufficiently involved in assessing how automated decisions cause significant social impacts.

Finally, several news items report public servants protesting against the replacement of the INSS workforce (Fenasps, 2022) and directly opposing the way AI is currently used. The Union of Social Security and Social Welfare Workers in the State of São Paulo (SINSSP) points out that automation, despite technological advances, is no substitute for the detailed analysis of a human technician when making decisions about social security benefits, given the complexity of insured people's cases. What stands out is the technicians' ability to adjust claims to maximise benefits, a nuance often lost in automation. The union also points out that although INSS management recognises the limitations of automatic denials, the strategy of investing in technology and systemic integration is presented as irrevocable (Gercina, 2023). According to the union, there are currently 19,500 civil servants at the agency, but at least 30,000 would be needed to cope with the backlog of requests (IEPREV, 2022).

Openness for Validation

AI systems in the public interest must allow external actors to inspect and validate their results (Züger & Asghari, 2023; Wikimedia Deutschland, 2023). In the case of Brazil, the institutional design of the Brazilian state allows this validation to take place through the action of control bodies such as the CGU and the TCU, but there is not enough documentation and transparency for other organisations in society to carry out external audits as well. Likewise, no care has been taken to ensure that Isaac, its workings, and its decisions are understandable and justifiable. In practice, the audits indicate little transparency in these internal INSS decisions (TCU, 2021; CGU, 2023). Even when applications for benefits are rejected, detailed information is often not provided, such as which CNIS employment link was considered, nor is the CNIS extract made available for the applicant to check in case of an appeal. Citizens therefore have no way to contest the decision.

The CGU report notes that there is an internal evaluation form through which civil servants can request improvements and changes to the automated system, and that by the end of 2022 there had been 348 such submissions. However, the report points out that a single civil servant is responsible for evaluating these submissions and that this does not happen systematically or periodically. Finally, the CGU audit could not identify any control on the part of the INSS over whether measures were adopted to remedy the reported problems, such as the increase in automatic rejections and the tendency to grant benefits at amounts lower than citizens are due.

Despite previous evaluations and external audits, the CGU's main conclusion is that, although automation has played a growing role in decisions, there has been little evolution in the control tools, and indicators for measuring and evaluating the process are still lacking.

Conclusion

This article analysed the use of artificial intelligence systems by Brazil's social security agency, the INSS, in the light of the theory of AI for the public interest (Züger & Asghari, 2023). We reconstructed the agency's automation process and how AI has been gradually implemented to automate certain decisions to grant or deny benefits. There are too few civil servants to cope with the enormous workload, which generates queues of millions of people every year.

Of the five criteria for a public interest system, the AI system currently satisfies, at best, a minimal form of only one, public justification: its use could potentially benefit many people. However, the gradual implementation of automatic decision-making meant that it was not a well-thought-out and planned process, with some direct consequences.

First, this is a decision-making system with weak governance, in which responsibilities are not well assigned. There are no adequate tools for monitoring and evaluating results and subsequently improving the decision-making process. The number of officials involved is far below what could be considered ideal. As it was not a planned process, it did not involve other civil society actors, citizens, or even other stakeholders specialised in the subject, such as labour unions and other government bodies, in deliberative or co-participatory processes. There is also no effective concern with opening up data and documentation for validation by these same actors, with only audits by other public bodies responsible for this in Brazil's institutional design.

The two audits conducted by the Federal Comptroller General (CGU) and the Federal Court of Auditors (TCU), which served as the basis for our analyses, raise questions about whether the Isaac system is in the public interest. Both analyses focus on the fact that the AI system generates many incorrect or even biased decisions due to the lack of complete information in the databases used by the INSS AI. The focus of the two agencies' evaluations is very close to the issue of equality and human rights, since both recognise that the rights of Brazilian citizens are threatened, including the reinforcement of inequalities by the compulsory use of the My INSS digital tools, which makes access difficult for the poorest, oldest, and most rural sections of the population. Furthermore, in cases of automated rejection, the citizen has to appeal to the agency or go to court, increasing the judicialisation of actions that could be resolved at earlier stages. If the figures were significantly better, could it be considered a system of public interest? This seems to be the mentality of both the implementers and, to a large extent, of the audits themselves.

The theory of AI in the public interest allows us to answer with a resounding "no". In practice, the AI system only considers the INSS's need to reduce queues, while ignoring the risks and costs of the processes. As has been shown, INSS civil servants are dissatisfied with this solution and believe the issue deserves more attention.

Prior to its implementation, the INSS did not adequately inform or explain to citizens the reasons for choosing the system; there was no deliberative or even consultative process for the expression and consideration of public interests. Thus, while it seems obvious that every citizen prefers a faster process, it is unclear whether everyone would accept the decision knowing the various risks of having their benefits denied or receiving less than the correct amount. There is not even an option for citizens to choose between a quicker route through AI processing and a longer one with greater human care. As many beneficiaries are poor and older, it would be reasonable to assume that many would still opt for human assistance. In other words, it cannot be said that the INSS AI system serves the public interest, because it simply has not considered, in any way, the public of people directly or indirectly affected.

After its implementation, the system remains a black box: there is no transparency or adequate documentation about how the system works, including its algorithms, so that its decisions can be understandable and justifiable. As a result, as mentioned earlier, it does not allow for proper evaluation by parts of society other than the control authorities. This happens even though these authorities know that the system does not reduce but rather increases societal inequalities, because it hinders access for certain population segments and relies exclusively on inadequate databases, making it easier to cause or perpetuate injustices and prejudices against marginalised groups.

Given that the INSS is one of Brazil's largest public bodies and that its actions directly affect millions of Brazilians, this case underscores the importance of regulating AI systems. As PIAI theory reminds us, efficiency should not be the guiding justification for implementing AI systems. While it is understandable that the agency wants to reduce queues and speed up service and decisions, the efficiency of such a system cannot take precedence over other issues. Governments implementing AI systems need an approach that balances efficiency and technological innovation with the public interest.

References

Arun, C. (2020). AI and the Global South: Designing for other worlds. In M. D. Dubber, F. Pasquale, & D. Sunit (Eds.), The Oxford handbook of ethics of AI (pp. 588–606). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.38

Bozeman, B. (2007). Public values and public interest: Counterbalancing economic individualism. Georgetown University Press. http://www.jstor.org/stable/j.ctt2tt37c

Cochran, C. E. (1974). Political science and ‘the public interest’. The Journal of Politics, 36(2), 327–355. https://doi.org/10.2307/2129473

Controladoria Geral da União. (2023). Relatório de avaliação Instituto Nacional do Seguro Social: Exercícios 2021 a 2023 [Evaluation report National Social Security Institute: Financial years 2021 to 2023] (Report No. 1205147). https://eaud.cgu.gov.br/relatorios/download/1205418

Dataprev. (2023). Relatório de gestão integrado—Exercício 2022 [Integrated management report—Financial year 2022] [Presentation]. https://portal3.dataprev.gov.br/sites/default/files/arquivos/relatorio_integrado_de_gestao_2022_aprovado_v1.13.pdf

de Mello, C. A. B. (2015). Curso de direito administrativo [Course in administrative law] (32nd ed.). Malheiros.

de Oliveira, P. R., & Kassouf, A. L. (2013). Impacts of the continuous cash benefit programme on family welfare [Report]. International Policy Centre for Inclusive Growth. https://socialprotection.org/discover/publications/impacts-continuous-cash-benefit-programme-family-welfare

Decreto no 10.995. (2022). Decreto no 10.995, de 14 de Março de 2022 aprova a estrutura regimental e o quadro demonstrativo dos cargos em comissão e das funções de confiança do Instituto Nacional do Seguro Social—INSS e remaneja e transforma cargos em comissão e funções de confiança [Decree no. 10,995, of March 14, 2022 approves the regimental structure and the demonstrative framework of commissioned positions and trust functions of the National Institute of Social Security—INSS and reallocates and transforms commissioned positions and trust functions] [Decree]. Government of Brazil. https://www.planalto.gov.br/ccivil_03/_ato2019-2022/2022/decreto/d10995.htm

Downs, A. (1962). The public interest: Its meaning in a democracy. Social Research, 29(1), 1–36. https://www.jstor.org/stable/40969578

European Parliament. (2024). EU AI Act: First regulation on artificial intelligence [Report]. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Federação Nacional dos Sindicatos de Trabalhadores em Saúde, Trabalho, Previdência e Assistência Social (Fenasps). (2023). Após seis meses de governo, presidente do INSS admite que a principal causa do caos no órgão é a falta de servidores [After six months in office, the president of the INSS admits that the main cause of the agency’s chaos is the lack of civil servants] [News]. https://fenasps.org.br/2023/06/07/apos-seis-meses-de-governo-presidente-do-inss-admite-que-a-principal-causa-do-caos-no-orgao-e-a-falta-de-servidores/

Filgueiras, F., & Almeida, V. (2021). Governance for the digital world: Neither more state nor more market. Palgrave MacMillan. https://doi.org/10.1007/978-3-030-55248-0

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 1–14. https://doi.org/10.1162/99608f92.8cd550d1

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5

Gercina, C. (2023, July 31). INSS aumenta análise de aposentadorias por robôs e nega benefício em seis minutos [INSS increases analysis of pensions by robots and denies benefits in six minutes]. Folha de São Paulo. https://www1.folha.uol.com.br/mercado/2023/07/inss-aumenta-analise-de-aposentadorias-por-robos-e-nega-beneficio-em-seis-minutos.shtml

Held, V. (1970). The public interest and individual interests. Basic Books.

Hupe, P. (Ed.). (2019). Research handbook on street-level bureaucracy: The ground floor of government in context. Edward Elgar Publishing. https://doi.org/10.4337/9781786437631

Instituto de Estudos Previdenciários (IEPREV). (2022). Análise automática de benefícios do INSS por robô falha, diz sindicato [Automatic analysis of INSS benefits by robot fails, says union] [News]. https://web.archive.org/web/20240521031513/https://www.ieprev.com.br/conteudo/categoria/4/9298/anaa

Instituto Nacional do Seguro Social (INSS). (2022). Reconhecimento automático [Automatic recognition] [Infographic]. https://www.gov.br/inss/pt-br/assuntos/atencao-ao-solicitar-a-aposentadoria-pelo-meu-inss/reconhecimento-automatico-infografico.pdf/@@download/file

Instituto Nacional do Seguro Social (INSS). (2023b). Automação é aliada na agilização das decisões do INSS [Automation is an ally in speeding up INSS decisions] [Press release]. https://www.gov.br/inss/pt-br/assuntos/noticias/automacao-e-aliada-na-agilizacao-das-decisoes-do-inss

Instituto Nacional do Seguro Social (INSS). (2024). Começa fase de testes de inteligência artificial no Meu INSS [Testing phase for artificial intelligence in Meu INSS begins] [Press release]. https://www.gov.br/inss/pt-br/assuntos/comeca-fase-de-testes-de-inteligencia-artificial-no-meu-inss

Instituto Nacional do Seguro Social (INSS). (2023d). Cuidado na apresentação de documentos pode agilizar a conclusão de requerimentos do INSS [Careful presentation of documents can speed up the completion of INSS applications] [News]. https://www.gov.br/inss/pt-br/assuntos/cuidado-na-apresentacao-de-documentos-pode-agilizar-a-conclusao-de-requerimentos-do-inss

Instituto Nacional do Seguro Social (INSS). (2023c). INSS concede pensão por morte em 12h [INSS grants death pension in 12 hours] [Press release]. https://www.gov.br/inss/pt-br/assuntos/inss-concede-pensao-por-morte-em-12h

Instituto Nacional do Seguro Social (INSS). (2023a). Transparência Previdenciária: Dezembro de 2023 [Social Security Transparency December 2023] [Report]. https://www.gov.br/inss/pt-br/portal-de-transparencia/dezembro-2023

Junior, A. N. de M., Kuibida, A. S. M., & Gaetani, F. (2023, January 30). Governo digital na prática: O reconhecimento automático de direitos no INSS [Digital government in practice: Automatic recognition of rights at the INSS]. Anais do Congresso Internacional de Gestão da Previdência Social. Congeps, Brasília. https://doi.org/10.29327/congeps2022.574223

Mendonça, R. F., Almeida, V., & Filgueiras, F. (2023). Algorithmic institutionalism: The changing rules of social and political life. Oxford University Press. https://doi.org/10.1093/oso/9780192870070.001.0001

Mukherjee, S., Muppalaneni, N. B., Bhattacharya, S., & Pradhan, A. K. (2022). Intelligent systems for social good. Springer Singapore. https://doi.org/10.1007/978-981-19-0770-8

Portaria no 3.213. (2019). Portaria no 3.213, de 10 de Dezembro de 2019 institui o sistema de governança do Instituto Nacional do Seguro Social [Ordinance no. 3,213, of December 10, 2019 establishes the governance system of the National Social Security Institute] [Ordinance]. Government of Brazil. https://web.archive.org/web/20240610202859/https://www.in.gov.br/web/dou/-/portaria-n-3.213-de-10-de-dezembro-de-2019-232670056

Resolução INSS no 005. (2020). Resolução INSS no 005, de 28 de Maio de 2020 institui a Política de Gestão de Riscos do Instituto Nacional do Seguro Social—INSS [INSS Resolution no. 005 of 28 May 2020 institutes the Risk Management Policy of the National Social Security Institute—INSS] [Resolution]. https://www.editoraroncarati.com.br/v2/Diario-Oficial/Diario-Oficial/RESOLUCAO-INSS-N%C2%BA-005-DE-28-05-2020.html

Ricaurte, P. (2022). Ethics for the majority world: AI and the question of violence at scale. Media, Culture & Society, 44(4), 726–745. https://doi.org/10.1177/01634437221099612

Schreier, M. (2014). Qualitative content analysis. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 170–183). SAGE Publications. https://doi.org/10.4135/9781446282243.n12

Silva, T. (2023). Tarcízio Silva: “O racismo algorítmico é uma espécie de atualização do racismo estrutural” [Tarcízio Silva: ‘Algorithmic racism is a kind of updating of structural racism’] [Interview]. Centro de Estudos Estratégicos Fiocruz. https://cee.fiocruz.br/?q=Tarcizio-Silva-O-racismo-algoritmico-e-uma-especie-de-atualizacao-do-racismo-estrutural

Stone, D. (2011). Policy paradox: The art of political decision making (3rd ed.). W. W. Norton & Company.

Tagiaroli, G. (2023, August 14). Robô do INSS já decide até 4 de cada 10 aposentadorias [INSS robot already decides up to 4 out of 10 pensions]. Tilt UOL. https://www.uol.com.br/tilt/noticias/redacao/2023/08/14/robo-do-inss-ja-decide-ate-4-de-cada-10-aposentadorias.html

Teixeira de Toledo, A., & Mendonça, M. (2023). A aplicação da inteligência artificial na busca de eficiência pela administração pública [The application of artificial intelligence in the search for efficiency by public administration]. Revista do Serviço Público, 74(2), 410–438. https://revista.enap.gov.br/index.php/RSP/article/view/6829

Tomašev, N., Cornebise, J., Hutter, F., Mohamed, S., Picciariello, A., Connelly, B., Belgrave, D. C. M., Ezer, D., Haert, F. C. V. D., Mugisha, F., Abila, G., Arai, H., Almiraat, H., Proskurnia, J., Snyder, K., Otake-Matsuura, M., Othman, M., Glasmachers, T., Wever, W. D., … Clopath, C. (2020). AI for social good: Unlocking the opportunity for positive impact. Nature Communications, 11(1), 1–6. https://doi.org/10.1038/s41467-020-15871-z

Transparência Brasil. (2021). Recomendações de governança: Uso de inteligência artificial pelo poder público [Governance recommendations: Use of artificial intelligence by public authorities] [Recommendation]. https://www.transparencia.org.br/downloads/publicacoes/Recomendacoes_Governanca_Uso_IA_PoderPublico.pdf

Tribunal de Contas da União (TCU). (2021). Relatório de acompanhamento órgãos/entidades: Empresa de Tecnologia e Informações da Previdência—Dataprev; Instituto Nacional do Seguro Social [Follow-up report bodies/entities: Social Security Information and Technology Company—Dataprev; National Social Security Institute] [Report]. https://pesquisa.apps.tcu.gov.br/redireciona/acordao-completo/ACORDAO-COMPLETO-2539216

Vieweg, S. H. (Ed.). (2023). AI for the good: Artificial intelligence and ethics. Springer. https://doi.org/10.1007/978-3-030-66913-3

Wikimedia Deutschland. (2023). Eight requirements: Making digital policy serve the public interest [Policy paper]. https://upload.wikimedia.org/wikipedia/commons/a/a8/Brochure_Eight_requirements._Making_digital_policy_serve_the_public_interest.pdf

Züger, T., & Asghari, H. (2023). AI for the public. How public interest theory shifts the discourse on AI. AI & Society, 38(2), 815–828. https://doi.org/10.1007/s00146-022-01480-5

Züger, T., Faßbender, J., Kuper, F., Nenno, S., Katzy-Reinshagen, A., & Kühnlein, I. (2022). Civic coding: Grundlagen und empirische Einblicke zur Unterstützung gemeinwohlorientierter KI [Civic coding: Fundamentals and empirical insights to support AI for the common good] [Research report]. Initiative Civic Coding vom Bundesministerium für Umwelt, Naturschutz, nukleare Sicherheit und Verbraucherschutz, Bundesministerium für Arbeit und Soziales, Bundesministerium für Familie, Senioren, Frauen und Jugend. https://www.civic-coding.de/fileadmin/civic-ai/Dateien/Civic_Coding_Forschungsbericht.pdf

Footnotes

1. This situation is not new: in contemporary governments, interactions with citizens are becoming increasingly (semi-)automated, reflecting the spread of information and communication technologies (ICTs) in everyday interactions between citizens and the state (Hupe, 2019).

2. The Federal Court of Audit (TCU) oversees the accounting, financial, and asset management of the Union, ensuring the legality and efficiency of public spending, and acts independently as an auxiliary body to the Brazilian National Congress (TCU, 2021).

3. The Office of the Comptroller General (CGU) assists the President of the Republic of Brazil in defending public assets and ensuring transparency in public management, acting in the areas of internal control, auditing, disciplinary oversight, corruption prevention, and ombudsman services (CGU, 2023).

4. For a better understanding of how the Brazilian social protection system functions, see de Oliveira and Kassouf (2013).