Brazil's AI-Powered Domestic Violence Monitoring Plan Raises Hard Questions About Privacy and Fairness
Brazil's Congress is debating legislation that would use artificial intelligence to monitor domestic violence offenders in real time, combining electronic tracking devices with behavioral analytics to prevent abuse. Bill No. 750/2026, introduced in the Federal Senate by Senator Eduardo Braga and currently before the Chamber of Deputies, would establish the National Program for Monitoring Aggressors Using Artificial Intelligence. The initiative reflects a global trend toward AI-enabled public safety, but it also exposes fundamental tensions between protecting vulnerable people and safeguarding individual rights.
What Problem Is Brazil Trying to Solve?
Domestic violence remains a serious and persistent issue in Brazil. Each year, courts issue hundreds of thousands of protective measures under the Maria da Penha Law, the country's primary legal framework addressing violence against women. These orders typically prohibit aggressors from approaching victims or specific locations. However, enforcing these measures has proven difficult in practice. Violations often become known only after victims report them, and response times vary depending on local law enforcement capacity.
The proposed monitoring program seeks to address this enforcement gap by combining electronic monitoring technologies with AI systems capable of identifying violations in real time. Under the proposal, aggressors subject to court orders could be required to wear electronic monitoring devices such as ankle bracelets. These devices would connect to a centralized monitoring platform capable of continuously tracking the aggressor's location and detecting when court-imposed distance restrictions are violated. If the monitored individual approaches restricted locations or attempts to tamper with the monitoring device, the system would automatically generate alerts for authorities responsible for enforcement.
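The core distance-restriction check such a platform would run can be sketched in a few lines. This is an illustrative sketch only, not the bill's actual system: the zone fields, coordinates, and 500 m radius below are all hypothetical.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def check_exclusion_zones(fix, zones):
    """Return every zone whose court-ordered radius the GPS fix violates."""
    lat, lon = fix
    return [z for z in zones
            if haversine_m(lat, lon, z["lat"], z["lon"]) < z["radius_m"]]

# Hypothetical exclusion zone around a protected address.
zones = [{"name": "protected_home", "lat": -23.5505, "lon": -46.6333,
          "radius_m": 500}]
# A fix roughly 60 m from the zone centre falls inside the radius.
alerts = check_exclusion_zones((-23.5510, -46.6330), zones)
```

In a real deployment this check would run continuously against the device's location stream, with each non-empty result feeding the automatic alerts the bill describes.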
How Would the AI System Actually Work?
The proposal introduces several interconnected components designed to create a more proactive protection model. The legislation provides for the development of an official mobile app for individuals protected by judicial measures. Through the app, victims could trigger emergency alerts, share their location with authorities, and receive notifications if a monitored aggressor approaches restricted areas. The app could also provide access to a record of alerts and monitoring events related to the case. Importantly, the proposal establishes that victims' use of the app would be voluntary and dependent on explicit consent.
One of the most innovative and potentially controversial aspects of the proposal is the creation of a national database designed to analyze behavioral patterns of monitored aggressors using machine learning techniques. Data generated by monitoring devices could be analyzed to identify patterns that indicate an increased risk of violence. Examples may include repeated attempts to approach restricted areas, unusual movement patterns, or indications that monitoring devices have been tampered with. If such patterns are detected, authorities could receive alerts even before a formal violation occurs.
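One way this kind of pre-violation pattern detection could work in principle is shown below, as a rule-based stand-in for the machine-learning analysis the bill envisions. All event names, the 7-day window, and the threshold of three approaches are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MonitoringEvent:
    timestamp: datetime
    kind: str  # e.g. "zone_approach", "tamper_signal", "signal_loss"

def escalation_flag(events, now, window=timedelta(days=7),
                    approach_threshold=3):
    """Flag rising risk: repeated approach attempts or any tampering
    inside the look-back window, even before a formal violation."""
    recent = [e for e in events if now - e.timestamp <= window]
    approaches = sum(e.kind == "zone_approach" for e in recent)
    tampered = any(e.kind == "tamper_signal" for e in recent)
    return tampered or approaches >= approach_threshold

# Three approach attempts in three days trips the hypothetical threshold.
now = datetime(2026, 3, 1)
events = [MonitoringEvent(now - timedelta(days=d), "zone_approach")
          for d in (1, 2, 3)]
```

A production system would presumably learn such thresholds from data rather than hard-code them, which is precisely why the explainability and bias-mitigation requirements discussed later in the bill matter.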
Steps for Implementing Responsible AI Governance in Public Safety Systems
- Establish Clear Algorithmic Standards: Algorithms used in monitoring systems must follow principles such as explainability, auditability, mitigation of discriminatory bias, and human supervision over automated processes, ensuring that AI decisions can be understood and challenged.
- Define Data Retention and Access Policies: Organizations must establish clear rules about how long monitoring data is kept, who can access it, and for what purposes, with particular attention to proportionality and data minimization principles.
- Create Oversight Mechanisms: Multiagency systems require robust accountability structures, strong cybersecurity protections, and transparent oversight mechanisms to audit the system and address potential errors or biases in predictive analytics.
- Ensure Privacy Compliance: Personal data processing within the program must comply with applicable data protection laws, with safeguards that address continuous geolocation monitoring, behavioral pattern analysis, and access control across multiple institutions.
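The retention and access rules in the steps above can be made concrete in code. A minimal sketch, assuming a flat record store, a single hypothetical 12-month retention period, and invented role names; real policies would be set by the implementing regulation.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # hypothetical retention period

# Hypothetical mapping: institutional role -> record types it may read.
ROLE_PERMISSIONS = {
    "judge": {"location", "alert", "audit"},
    "police_dispatch": {"location", "alert"},
    "support_service": {"alert"},
}

def purge_expired(records, now):
    """Data minimisation: drop records older than the retention period."""
    return [r for r in records if now - r["created_at"] <= RETENTION]

def can_access(role, record_type):
    """Role-based access check across the cooperating institutions."""
    return record_type in ROLE_PERMISSIONS.get(role, set())

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
records = [{"id": 1, "created_at": now - timedelta(days=10)},
           {"id": 2, "created_at": now - timedelta(days=400)}]
```

Even this toy version shows the governance questions in miniature: someone must decide the retention constant, own the permissions table, and log every access against it.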
What Are the Governance Challenges?
Brazil's proposal acknowledges some of these challenges by establishing governance requirements for the AI systems used within the program. According to the legislation, algorithms used in the monitoring system must follow principles such as explainability, auditability, mitigation of discriminatory bias, and human supervision over automated processes. These requirements align with broader international discussions on responsible AI deployment, including frameworks such as the Organisation for Economic Co-operation and Development's AI Principles and emerging regulatory approaches such as the European Union's AI Act.
However, translating these principles into operational safeguards will likely require detailed technical implementation and oversight mechanisms. From a privacy perspective, the proposed program would involve the processing of significant volumes of personal data. Continuous geolocation monitoring, behavioral pattern analysis, and records associated with judicial protective measures all involve sensitive information that must be handled carefully. The proposal explicitly states that personal data processing within the program must comply with Brazil's General Data Protection Law, and that collected data may only be used for the purposes defined by law.
Several operational questions remain unresolved. Continuous geolocation monitoring raises questions about proportionality and data minimization. Retention policies for monitoring data will be critical, particularly if behavioral analytics are used to identify patterns over extended periods. Access control will also be essential, as the program will likely involve coordination among multiple institutions, including law enforcement agencies, prosecutors, and the judiciary. Ensuring that sensitive monitoring data is accessed only by authorized actors will be crucial for maintaining trust in the system.
Algorithmic accountability will also play an important role. If predictive analytics influence enforcement decisions or risk assessments, oversight mechanisms should exist to audit the system and address potential errors or biases. The proposed bill highlights the importance of institutional cooperation, as the monitoring system would require coordination between security agencies, the judiciary, prosecutors, and victim support services. Such cooperation is essential for the program to function effectively, yet multiagency systems often create complex environments for data sharing and governance.
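Auditability of automated decisions is often supported in practice by an append-only, tamper-evident log that auditors can verify independently. A sketch of one common technique, hash chaining, with entirely hypothetical entry fields; the bill does not prescribe any particular mechanism.

```python
import hashlib
import json

def append_audit_entry(chain, entry):
    """Append an entry whose hash covers the previous record,
    so any later edit anywhere in the log is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev_hash": prev_hash, "hash": digest})

def verify_chain(chain):
    """Recompute every hash; a single edited entry breaks verification."""
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps(rec["entry"], sort_keys=True)
        if (rec["prev_hash"] != prev or
                hashlib.sha256((prev + payload).encode()).hexdigest()
                != rec["hash"]):
            return False
        prev = rec["hash"]
    return True

# Hypothetical decision records for one monitored case.
audit_log = []
append_audit_entry(audit_log, {"decision": "risk_alert", "case": "demo-1"})
append_audit_entry(audit_log, {"decision": "human_review", "case": "demo-1"})
```

A log like this lets an external auditor confirm that no alert, override, or human-review decision was silently altered after the fact, which is the operational core of the auditability principle the bill names.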
Brazil's proposal ultimately reflects a broader global movement toward AI-enabled public policy. Technologies such as electronic monitoring and predictive analytics have been explored in jurisdictions such as Spain, the United Kingdom, and parts of the United States, particularly in initiatives aimed at improving enforcement of protective orders and preventing domestic violence. However, predictive technologies used in public safety contexts are among the most sensitive applications of AI, raising important questions regarding transparency, fairness, and accountability that are central to global discussions on responsible AI governance.