Brazil's AI Domestic Violence Monitoring Plan Raises Hard Questions About Privacy and Fairness

Brazil's Congress is debating legislation that would deploy artificial intelligence to monitor domestic violence offenders in real time, combining electronic tracking devices with behavioral analytics to prevent attacks before they happen. Bill No. 750/2026, introduced by Senator Eduardo Braga and currently before Brazil's Chamber of Deputies, would create the National Program for Monitoring Aggressors Using Artificial Intelligence. The proposal reflects a global trend of using AI in public safety, but it also highlights the tension between protecting vulnerable people and safeguarding fundamental rights such as privacy and due process.

Domestic violence remains a serious crisis in Brazil. Each year, courts issue hundreds of thousands of protective orders under the Maria da Penha Law, the country's primary legal framework addressing violence against women. These orders typically prohibit aggressors from approaching victims or specific locations. However, enforcing these measures has proven difficult in practice. Violations often become known only after victims report them, and response times vary depending on local law enforcement capacity. The proposed monitoring program seeks to close this enforcement gap by combining electronic monitoring technology with AI systems capable of identifying violations in real time.

How Would Brazil's AI Monitoring System Actually Work?

The proposal combines several interconnected technologies designed to create a more proactive protection model:

  • Electronic Monitoring Devices: Aggressors subject to court orders would be required to wear electronic monitoring devices such as ankle bracelets that continuously track their location and detect when court-imposed distance restrictions are violated.
  • Centralized Monitoring Platform: These devices would connect to a centralized system capable of automatically generating alerts for authorities when a monitored individual approaches restricted locations or attempts to tamper with the device.
  • Victim Safety App: The legislation provides for an official mobile app where victims could trigger emergency alerts, share their location with authorities, and receive notifications if a monitored aggressor approaches restricted areas. Victims' use of the app would be voluntary and dependent on explicit consent.
  • Predictive Analytics Database: Data generated by monitoring devices would be analyzed using machine learning techniques to identify behavioral patterns indicating increased risk of violence, such as repeated attempts to approach restricted areas or unusual movement patterns.
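The first two components amount to geofence enforcement: comparing each GPS fix from the device against court-defined exclusion zones. A minimal sketch of that check, using the standard haversine great-circle distance (all function and parameter names here are illustrative, not from the bill or any real monitoring platform):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def check_restriction(fix, zone):
    """Return True if a device fix violates one court-imposed exclusion zone.

    `fix` is (lat, lon) from the ankle bracelet; `zone` is
    (lat, lon, radius_m) for a protected location. A real platform would
    evaluate every fix against all zones tied to the protective order and
    raise an alert on any violation.
    """
    lat, lon = fix
    zlat, zlon, radius_m = zone
    return haversine_m(lat, lon, zlat, zlon) < radius_m
```

In practice, a centralized platform would run this comparison continuously for every active device, alongside tamper detection, which is why the bill routes alerts through a single monitoring system rather than individual agencies.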

The predictive component represents one of the most innovative, and potentially controversial, aspects of the proposal. Rather than waiting for a formal violation to occur, authorities could receive alerts if the AI system detects patterns suggesting elevated risk. Similar monitoring and predictive technologies have been explored in jurisdictions such as Spain, the United Kingdom, and parts of the United States, particularly in initiatives aimed at improving enforcement of protective orders and preventing domestic violence.
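To make the predictive idea concrete: one of the risk signals the article mentions, repeated attempts to approach restricted areas, can be detected with even a simple sliding-window rule. The sketch below is a deliberately naive illustration of that pattern-detection concept, not the bill's method; a real system would use trained models, and every name and threshold here is an assumption:

```python
from datetime import datetime, timedelta

def elevated_risk(approach_events, window=timedelta(days=7), threshold=3):
    """Flag elevated risk when near-approach events cluster in time.

    `approach_events` is a list of datetimes at which the device came
    close to (but did not enter) a restricted zone. Returns True if
    `threshold` or more events fall within any `window`-long span --
    a stand-in for the behavioral patterns a learned model would score.
    """
    events = sorted(approach_events)
    for i, start in enumerate(events):
        # Count events inside a sliding window beginning at this event
        in_window = [t for t in events[i:] if t - start <= window]
        if len(in_window) >= threshold:
            return True
    return False
```

Even this toy rule shows why governance matters: the choice of window and threshold directly determines who gets flagged, which is exactly the kind of parameter that auditability and bias-mitigation requirements would need to scrutinize.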

What Privacy and Fairness Concerns Does This Raise?

Predictive technologies used in public safety contexts are among the most sensitive applications of AI. Systems that attempt to infer risk from behavioral data raise important questions regarding transparency, fairness, and accountability. These concerns are central to global discussions on responsible AI governance.

From a privacy perspective, the proposed program would involve processing significant volumes of personal data. Continuous geolocation monitoring, behavioral pattern analysis, and records associated with judicial protective measures all involve sensitive information that must be handled carefully. The proposal explicitly states that personal data processing must comply with Brazil's General Data Protection Law, and that collected data may only be used for purposes defined by law. However, several operational questions remain unanswered.

Continuous geolocation monitoring raises questions about proportionality and data minimization. How long should monitoring data be retained, particularly if behavioral analytics are used to identify patterns over extended periods? Access control will also be essential. The program will likely involve coordination among multiple institutions, including law enforcement agencies, prosecutors, and the judiciary. Ensuring that sensitive monitoring data is accessed only by authorized actors will be crucial for maintaining trust in the system.

Algorithmic accountability will also play an important role. If predictive analytics influence enforcement decisions or risk assessments, oversight mechanisms should exist to audit the system and address potential errors or biases. The proposal acknowledges some of these challenges by establishing governance requirements for the AI systems used within the program. According to the legislation, algorithms used in the monitoring system must follow principles such as explainability, auditability, mitigation of discriminatory bias, and human supervision over automated processes.

These requirements align with broader international discussions on responsible AI deployment, including frameworks such as the Organisation for Economic Co-operation and Development's AI Principles and emerging regulatory approaches such as the EU AI Act. In the European regulatory framework, for example, certain AI systems used in law enforcement and risk assessment are classified as high-risk applications, subject to enhanced transparency, oversight, and governance requirements.

What Makes This a Global Governance Challenge?

Brazil's proposal ultimately reflects a broader global movement toward AI-enabled public policy. Technologies such as electronic monitoring and predictive analytics are being deployed in public safety contexts worldwide, but there is no consensus on how to balance effectiveness with fundamental rights protection. The proposal highlights the importance of institutional cooperation, as the monitoring system would require coordination between security agencies, the judiciary, prosecutors, and victim support services. Such cooperation is essential for the program to function effectively, but multi-agency systems often create complex environments for data sharing and governance.

Ensuring clear accountability structures, strong cybersecurity protections, and transparent oversight mechanisms will be critical to the success of any such program. Brazil's approach suggests that policymakers are taking these concerns seriously by building governance requirements into the legislation itself, rather than treating them as afterthoughts. However, translating principles like explainability and bias mitigation into operational safeguards will likely require detailed technical implementation and ongoing oversight mechanisms that go beyond what the current proposal specifies.

As governments worldwide continue to explore AI applications in public safety, Brazil's domestic violence monitoring proposal serves as a case study in how to design systems that protect vulnerable populations while respecting privacy and fairness. The outcome of this legislative debate may influence how other countries approach similar challenges at the intersection of AI, public safety, and human rights.