OpenAI's Bold Plan to Keep Humans First in the Age of Superintelligence
OpenAI has published a comprehensive policy proposal aimed at preparing humanity for the emergence of superintelligence, recommending major changes like a four-day work week, support for displaced workers, and new tax structures to ensure AI benefits are widely shared rather than concentrated among the wealthy. The company released "Industrial Policy for the Intelligence Age: Ideas to Keep People First" on April 6, 2026, framing these recommendations as a starting point for democratic discussion about how society should respond to transformative AI capabilities.
The timing of OpenAI's proposal reflects growing recognition that superintelligence (artificial intelligence systems far exceeding human capabilities) could fundamentally reshape the economy and labor market. OpenAI stated that "humanity is about to enter a turning point with the emergence of superintelligence that far surpasses human capabilities, but we cannot accurately predict the impact that superintelligence will have," explaining that policymakers need to prepare for various possibilities through democratic processes.
What Specific Policies Is OpenAI Recommending?
OpenAI's proposal covers a wide range of policy areas designed to distribute AI's benefits broadly while mitigating risks. The recommendations fall into several key categories addressing economic security, worker protection, and safety oversight. Rather than leaving these decisions to market forces alone, OpenAI argues that governments should proactively shape how AI transforms society.
- Four-Day Work Week: OpenAI proposes that employers pass on productivity gains from AI to workers through reduced hours while maintaining current salary levels, alongside increased retirement benefits and expanded childcare and eldercare support.
- Tax Reforms for the AI Era: The company argues that AI will weaken traditional tax revenue bases funding social security, healthcare, and housing assistance, requiring increased taxation on high-income earners, higher corporate taxes, and special measures targeting sustained profit growth from AI.
- Support for Displaced Workers: OpenAI recommends offering employment opportunities in childcare, elder care, education, and healthcare for those whose jobs are eliminated by AI, with government-funded training programs and higher wages for these professions.
- Universal AI Access: The proposal treats access to AI as a "fundamental right to participate in the modern economy," comparable to literacy and internet access, requiring free or low-cost basic AI functions and support for AI infrastructure in communities, schools, and libraries.
- AI Safety Monitoring: OpenAI calls for establishing risk audit standards and monitoring systems developed with national security agencies to identify dangerous AI applications while avoiding excessive regulatory interference.
The proposal also addresses what OpenAI calls "containing dangerous AI," acknowledging that if dangerous AI models become publicly available or self-replicating AI systems are developed, containment strategies may be necessary. This reflects OpenAI's position that some AI capabilities pose genuine security risks requiring a coordinated international response.
How Can Researchers and Policymakers Engage With These Proposals?
OpenAI has created a structured program to encourage rigorous examination of its policy recommendations. The company is offering grants of up to $100,000 and API credits worth up to $1 million to researchers studying the policy proposals, signaling that it views this as an open invitation for critique and refinement rather than a final blueprint.
- Research Grants: Researchers can apply for up to $100,000 in funding to study and evaluate OpenAI's policy recommendations.
- API Credits: Selected researchers receive up to $1 million in API credits to test and validate proposals using OpenAI's AI tools.
- Public Feedback: OpenAI explicitly positions the proposal as a starting point for discussion and is actively seeking feedback from policymakers, economists, and the public.
This approach differs from typical corporate policy advocacy, where companies present finished positions to regulators. Instead, OpenAI is framing its recommendations as preliminary ideas requiring refinement through democratic debate and academic scrutiny. The company's willingness to fund research that might critique or improve its proposals suggests confidence in the underlying logic while acknowledging the complexity of the challenges ahead.
Why Does OpenAI Think These Changes Are Necessary Now?
OpenAI's proposal reflects a fundamental concern: without deliberate policy intervention, the economic benefits of superintelligence could concentrate among AI developers and owners while workers face mass displacement. The company argues that current social safety nets and tax structures were designed for a different economic era and cannot adequately respond to rapid AI-driven transformation.
The proposal on supporting displaced workers is particularly telling. OpenAI acknowledges that AI will eliminate jobs across many sectors but argues that certain fields, like healthcare and education, will always require human connection and judgment. Rather than accepting technological unemployment as inevitable, OpenAI recommends treating job displacement as a policy problem requiring active government solutions, including retraining programs and wage support.
The tax reform recommendations address what OpenAI sees as a structural problem: as AI increases productivity, traditional income and corporate taxes may generate insufficient revenue to fund social programs. The company proposes that governments should tax the extraordinary profits generated by AI systems more aggressively, ensuring that the gains from automation benefit society broadly rather than accruing only to shareholders.
What Makes This Different From Previous Tech Policy Proposals?
OpenAI's proposal stands out because it explicitly frames AI policy as inseparable from broader economic and social policy. Rather than focusing narrowly on AI safety or regulation, the company argues that preparing for superintelligence requires rethinking work, taxation, and social support systems. This reflects a view that technological capability alone is insufficient; society must actively choose how to distribute AI's benefits.
The four-day work week recommendation is particularly notable because it directly challenges assumptions about labor markets. Rather than accepting that AI will simply displace workers, OpenAI proposes that productivity gains should translate into leisure time and better work-life balance for employees. This differs from arguments that workers should simply retrain for new jobs, instead suggesting that the nature of work itself should change.
OpenAI's emphasis on universal AI access as a fundamental right also represents a departure from market-driven approaches. The company argues that just as governments invested in literacy and internet infrastructure, they should ensure that AI tools are available to small businesses, schools, libraries, and communities that lack resources to develop their own systems. This positions AI access as a public good rather than a purely commercial product.
By publishing these recommendations and funding research to evaluate them, OpenAI is attempting to shape the policy conversation around superintelligence before regulatory frameworks harden. The company's approach suggests that it believes proactive, human-centered policies are more likely to result in beneficial outcomes than reactive regulation imposed after problems emerge.