95% of Cal State Students Use AI, but They Don't Trust It and Fear for Their Jobs
Nearly every student in California's public university system has used artificial intelligence tools, yet most harbor significant doubts about the accuracy of AI-generated results and worry that automation will threaten their future employment prospects. A comprehensive 2025 survey of more than 94,000 students, faculty, and staff across the California State University (CSU) system's 22 campuses uncovered a striking paradox: high adoption rates coexist with high anxiety, and students are demanding a voice in how their institutions integrate AI into learning.
The survey, conducted by faculty at San Diego State University, is the largest and most comprehensive study to date of how higher education students and instructors perceive artificial intelligence. The findings paint a picture of an education system grappling with rapid technological change while struggling to establish consistent, transparent rules that students can understand and trust.
What Are Students Actually Using, and Why Are They Worried?
Among all responding CSU students, 95% reported using an AI tool at some point, with ChatGPT the clear favorite: 84% said they had used it. The CSU system's February 2025 agreement with OpenAI to make the tool available throughout the system has only accelerated adoption.
Yet adoption does not equal confidence. A striking 82% of students worry that AI will negatively impact their future job security, according to the survey. Some students feel caught between two impossible choices: use AI and risk becoming dependent on it, or ignore it and fall behind in an increasingly competitive job market. One computer science major captured this tension bluntly in the survey: "Even though I don't want to use it, I HAVE TO! Because if I don't, then I'll be left behind, and that is the last thing someone would want in this stupid job market."
The concern extends beyond job displacement. Students also expressed skepticism about the reliability of AI outputs. Many reported not trusting the results they received from AI tools, raising questions about whether they are learning to use these systems responsibly or simply becoming dependent on unreliable shortcuts.
Why Are Faculty Divided on AI's Impact?
Faculty attitudes toward AI in education are split almost evenly. Just over 55% of faculty reported a positive impact from AI, while 52% said AI has had a negative impact so far; the figures overlap because many instructors see both benefits and harms. This division reflects a broader national tension: educators recognize AI's potential as a "transformative tool" for providing tutoring and personalized support to students, yet they worry that students will depend on AI agents to do their thinking for them and potentially receive incorrect information.
The lack of consistent institutional guidance has created what one administrator called "a hot mess." Students at the same university, sometimes even in the same course taught by different instructors, face wildly different expectations about AI use. In one introductory course, a professor might encourage students to use AI, while another professor teaching the same course might forbid it entirely. This patchwork of policies leaves students confused and frustrated.
How Can Universities Create Clearer AI Policies?
- Mandatory Syllabus Language: Require all faculty to include clear language about AI use expectations in course syllabi, specifying whether AI is encouraged, permitted with restrictions, or prohibited. San Diego State made this mandatory in 2025 after discovering that only 68% of teaching faculty included such language.
- Involve Students in Policy Development: Include student representatives in systemwide discussions about AI adoption and use. Students across the CSU system explicitly requested a seat at the table when decisions are made about which AI tools are adopted and how they should be used in classrooms.
- Provide Industry-Relevant Training: Offer formal AI training that connects to students' actual career paths rather than generic chatbot instruction. A mechanical engineering student in the survey noted: "I want to learn AI tools that are actually used in my industry, not just generic chatbots. Show me what engineers are actually doing with AI on the job."
- Establish Micro-Credentials: Require students to earn formal credentials in AI literacy during their first year, ensuring all graduates have baseline competency regardless of their major or which professors they encounter.
San Diego State's experience offers a roadmap. After conducting its first campuswide survey in 2023 in response to student complaints about inconsistent AI rules, the campus took action. One student's plea summed up the frustration: "Please just tell us what to do and be clear about it." Following that feedback, the San Diego State Academic Senate approved guidelines for generative AI use in instruction and assessments, and in 2025 made it mandatory that faculty include language about AI use in course syllabi.
"It doesn't say what your disposition has to be, whether it's pro or con. It just says you have to be clear about your expectations. Without the 2023 survey data, that never would have happened," said James Frazee, Vice President for Information Technology at San Diego State.
Who Wants AI Training Most, and What Do They Need?
The 2025 systemwide survey revealed important demographic differences in student attitudes toward AI education. First-generation students expressed greater interest in formal AI training compared to their peers, and Black, Hispanic, and Latino students showed more enthusiasm for structured AI instruction than white students. These findings suggest that AI literacy programs could help level the playing field for students from underrepresented backgrounds.
However, students are clear about what they want: training that will actually be useful in their careers. Generic instruction about how chatbots work does not resonate. Instead, students want to learn the specific AI tools and workflows used in their industries. At San Diego State, the response to this demand has been concrete. Students are now required to earn a micro-credential in AI use during their first year, a change implemented directly in response to the 2023 survey findings.
What Do Faculty and Unions Say About AI Governance?
Faculty and their representatives are pushing back against top-down AI adoption without adequate input. The California Faculty Association, which represents about 29,000 educators in the CSU system, issued a statement in February 2025 emphasizing that faculty must be included in future systemwide decisions about AI, including whether the contract with OpenAI should be renewed in July.
"CFA members continue to advocate for ethical and enforceable safeguards governing the use of artificial intelligence," the California Faculty Association stated, asking for "protections for using or refusing to use the technology, professional development resources to adapt pedagogy to incorporate the technology, and further protections for faculty intellectual property."
The union's concerns reflect a broader anxiety: that institutions are moving fast on AI adoption without ensuring that educators have the training, resources, and protections they need to use these tools responsibly. Faculty also want assurance that their own intellectual property, including course materials and research, will not be used to train AI systems without their consent.
What Does This Mean for the Future of Higher Education?
The CSU survey data provides what administrators say is crucial evidence for making informed decisions rather than relying on anecdotes and assumptions. "We need to have data to make data-informed decisions instead of just going by anecdote," explained Elisa Sobo, a professor of anthropology at San Diego State who helped interpret the survey's findings. "We have data that show high use, but we also have high levels of concern, very valid concern, to help people be responsible when they use it."
The challenge ahead is clear: universities must move beyond simply adopting AI tools and instead build comprehensive frameworks that address student concerns, ensure equitable access to training, protect faculty rights, and establish transparent policies that everyone understands. The dashboard accompanying the CSU survey lets users filter results by campus and discipline, offering other institutions a model for how data-driven governance of AI in education might work across the country.
Students want clarity, relevance, and a voice. Faculty want protections and professional development. Administrators want data to guide decisions. The next phase of AI in higher education will be determined by whether institutions can deliver on all three fronts simultaneously.