Brazil's AI Monitoring Plan for Domestic Violence Offenders Raises Hard Questions About Privacy and Bias
Brazil's Federal Senate is debating legislation that would deploy artificial intelligence to monitor domestic violence offenders through electronic devices, behavioral analytics, and predictive alerts. Bill No. 750/2026 would establish the National Program for Monitoring Aggressors Using Artificial Intelligence, combining real-time location tracking with machine learning systems designed to identify patterns that suggest increased violence risk before violations occur.
What Problem Is Brazil Trying to Solve?
Domestic violence remains a serious issue in Brazil. Courts issue hundreds of thousands of protective measures annually under the Maria da Penha Law, the country's primary legal framework addressing violence against women. These orders typically prohibit aggressors from approaching victims or specific locations. In practice, however, enforcement is difficult. Violations often become known only after victims report them, and response times vary depending on local law enforcement capacity.
The proposed monitoring program seeks to close this enforcement gap. Aggressors subject to court orders could be required to wear electronic monitoring devices such as ankle bracelets that connect to a centralized platform. If a monitored individual approaches restricted locations or tampers with the device, the system automatically generates alerts for law enforcement. The goal is straightforward: reduce the time between a protective order breach and police intervention.
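To make the enforcement mechanism concrete, the check described above can be sketched as a proximity test on each GPS fix from a device. This is a hypothetical illustration only: the function names, alert format, data shapes, and the 500-meter radius are assumptions, not details from the bill.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_position(device_fix, restricted_zones, tamper_flag):
    """Return a list of alerts for one GPS fix from a monitoring device."""
    alerts = []
    if tamper_flag:
        # Device tampering (e.g. a cut strap) is itself an alert condition.
        alerts.append({"type": "tamper", "device": device_fix["device_id"]})
    for zone in restricted_zones:
        d = haversine_m(device_fix["lat"], device_fix["lon"], zone["lat"], zone["lon"])
        if d < zone["radius_m"]:
            alerts.append({"type": "proximity", "zone": zone["name"], "distance_m": round(d)})
    return alerts

fix = {"device_id": "BR-0001", "lat": -23.5505, "lon": -46.6333}
zones = [{"name": "victim_home", "lat": -23.5510, "lon": -46.6340, "radius_m": 500}]
print(check_position(fix, zones, tamper_flag=False))
```

In practice a platform would evaluate this continuously against court-defined zones; the sketch only shows the per-fix decision.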
How Would the AI System Actually Work?
- Electronic Monitoring Devices: Aggressors wear ankle bracelets or similar devices that continuously track location and detect when court-imposed distance restrictions are violated, triggering automatic alerts to authorities.
- Victim Safety App: The legislation provides for a mobile app allowing protected individuals to trigger emergency alerts, share their location with authorities, and receive notifications if a monitored aggressor approaches restricted areas. Victims' use would be voluntary and require explicit consent.
- Predictive Behavioral Analytics: A national database would apply machine learning to behavioral patterns, such as repeated attempts to approach restricted areas or unusual movement, in order to identify elevated violence risk before a formal violation occurs.
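The predictive component above could, in its simplest form, score recent monitoring events and flag high-scoring cases for human review. The bill does not specify a model; the event types, weights, and threshold below are invented for illustration.

```python
# Hypothetical pattern-based risk scoring over a window of monitoring events.
def risk_score(events):
    """Score a window of monitoring events; higher means riskier."""
    weights = {
        "approach_attempt": 3.0,   # entered the buffer around a restricted zone
        "tamper_signal": 5.0,      # strap tampering or signal interference
        "night_movement": 1.0,     # movement outside the person's usual routine
    }
    return sum(weights.get(e, 0.0) for e in events)

def triage(events, threshold=6.0):
    """Flag a case for human review when the score crosses a threshold."""
    score = risk_score(events)
    return {"score": score, "flag_for_review": score >= threshold}

history = ["approach_attempt", "approach_attempt", "night_movement"]
print(triage(history))  # {'score': 7.0, 'flag_for_review': True}
```

A production system would presumably use a trained model rather than fixed weights, which is precisely why the bill's auditability and bias-mitigation requirements matter.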
The predictive approach reflects a broader shift toward data-driven public safety policies. Similar monitoring technologies have been explored in Spain, the United Kingdom, and parts of the United States, particularly for improving enforcement of protective orders and preventing domestic violence.
What Are the Governance Requirements Built Into the Proposal?
Brazil's proposal acknowledges the sensitivity of predictive technologies in public safety contexts by establishing governance requirements for the AI systems. According to the legislation, algorithms used in the monitoring system must follow principles such as explainability, auditability, mitigation of discriminatory bias, and human supervision over automated processes. These requirements align with international discussions on responsible AI deployment, including frameworks from the Organisation for Economic Co-operation and Development (OECD) and the European Union's AI Act.
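Two of the principles the legislation names, human supervision and auditability, can be sketched operationally: the system records every automated alert but only recommends, and a named reviewer must approve any enforcement step, leaving an audit trail. All names here are assumptions for illustration, not mechanisms defined in the bill.

```python
import time

AUDIT_LOG = []

def record(event_type, payload):
    """Append a timestamped entry so the decision chain can be audited later."""
    entry = {"ts": time.time(), "event": event_type, "payload": payload}
    AUDIT_LOG.append(entry)
    return entry

def automated_alert(alert):
    """The system may only recommend; it never dispatches on its own."""
    record("alert_generated", alert)
    return {"status": "pending_human_review", "alert": alert}

def human_decision(alert, reviewer_id, approve):
    """A named reviewer approves or rejects, and the decision is logged."""
    record("human_review", {"reviewer": reviewer_id, "approved": approve})
    return "dispatch" if approve else "dismissed"

queued = automated_alert({"type": "proximity", "zone": "victim_home"})
outcome = human_decision(queued["alert"], reviewer_id="officer-42", approve=True)
print(outcome, len(AUDIT_LOG))  # dispatch 2
```

The design choice worth noting is the default: automation generates evidence and recommendations, while authority to act stays with an identifiable person.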
In the European regulatory framework, certain AI systems used in law enforcement and risk assessment are classified as high-risk applications, subject to enhanced transparency, oversight, and governance requirements. Translating similar principles into operational safeguards will likely require detailed technical implementation and oversight mechanisms in Brazil as well.
Where Are the Privacy and Accountability Challenges?
The proposed program would involve processing significant volumes of personal data. Continuous geolocation monitoring, behavioral pattern analysis, and records associated with judicial protective measures all involve sensitive information that must be handled carefully. The proposal explicitly states that personal data processing must comply with Brazil's General Data Protection Law, and that collected data may only be used for purposes defined by law.
However, several operational questions remain unresolved. Continuous geolocation monitoring raises questions about proportionality and data minimization. Retention policies for monitoring data will be critical, particularly if behavioral analytics are used to identify patterns over extended periods. Access control is also essential, since the program will likely involve coordination among multiple institutions, including law enforcement agencies, prosecutors, and the judiciary.
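Two of the safeguards just mentioned, retention limits and inter-institutional access control, are straightforward to express as policy code. The 180-day window and the role names below are invented for the sketch; the bill sets neither.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=180)  # assumed retention window, not from the bill

# Deny-by-default permission sets per institution (illustrative roles).
PERMISSIONS = {
    "police": {"read_alerts"},
    "prosecutor": {"read_alerts", "read_history"},
    "court": {"read_alerts", "read_history", "read_model_audit"},
}

def purge_expired(records, now):
    """Drop location records older than the retention window (data minimization)."""
    return [r for r in records if now - r["ts"] <= RETENTION]

def can_access(role, resource):
    """Grant only what a role explicitly has; unknown roles get nothing."""
    return resource in PERMISSIONS.get(role, set())

now = datetime(2026, 6, 1)
records = [
    {"ts": datetime(2026, 5, 20), "lat": -23.55, "lon": -46.63},
    {"ts": datetime(2025, 1, 1), "lat": -23.55, "lon": -46.63},
]
print(len(purge_expired(records, now)))      # 1
print(can_access("police", "read_history"))  # False
```

The point of the sketch is that proportionality questions eventually become concrete parameters, such as who holds which permission and how long data lives, that regulators will have to fix.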
Algorithmic accountability will play an important role. If predictive analytics influence enforcement decisions or risk assessments, oversight mechanisms should exist to audit the system and address potential errors or biases. Ensuring clear accountability structures, strong cybersecurity protections, and transparent oversight mechanisms will be critical for maintaining public trust.
Brazil's proposal ultimately reflects a broader global movement toward AI-enabled public policy. Technologies such as electronic monitoring and predictive analytics offer genuine benefits for victim protection and faster law enforcement response. At the same time, the case illustrates why policymakers worldwide are grappling with how to balance innovation with fundamental rights protections. The outcome in Brazil may serve as a test case for how democracies can deploy AI in sensitive public safety contexts while maintaining accountability and fairness.