How can we responsibly incorporate AI into modern higher education?
"Our View" is prepared by the Editorial Board and should be considered the institutional voice of The Record.
“Generative AI is rapidly reshaping life on college campuses, blurring the lines between innovation and integrity. As students and professors alike grapple with its possibilities and pitfalls, universities are being forced to redefine what learning, creativity and authorship really mean in the digital age.”
The previous statement was generated by ChatGPT in response to an extremely simple prompt asking for an introduction to an editorial about the challenges of integrating AI on college campuses. From here on out, the rest of this piece comes from the editorial board of The Record, not a machine. In our humble opinion, you’ll find our thoughts, along with additional voices from our community, a bit more insightful.
In the past few years, AI has moved to the forefront of college and university syllabi as professors spell out their expectations and rules for its use. Some professors allow it, and some even welcome it on assignments, while others state without a doubt that AI has no place in their classes. But even so, do all students follow these expectations?
Across the country, colleges and universities have been navigating what AI use looks like in higher education as the technology becomes harder to avoid. A New York Times article published Sunday reported on California State University, the largest U.S. university system with 460,000 students, which announced in February an ambitious plan to embrace AI as it looks to become the nation’s first and largest AI-empowered university system.
Backed by Big Tech giants such as Amazon, OpenAI and Nvidia, the system hopes that all 23 Cal State campuses will become AI-led, preparing students for careers increasingly driven by AI. Its first-ever AI camp was held this summer, with students spending five days at California Polytechnic State University working on activities with Amazon Web Services.
The article notes that this move marks a major power shift on campuses, with AI companies taking an active stake in how students perceive and use their products. One Amazon official argued that the partnership won’t just enable students to look up answers on ChatGPT (which has been accused of helping spread misinformation and diminish critical thinking) but will instead help students build problem-solving and communication skills.
To this we ask: how? Is it even possible to guarantee that AI is always used as a supplement to learning rather than as a shortcut or a means of cheating? And is Cal State simply bowing to the reality of where our world is headed with AI, ceding its independence to Silicon Valley?
Dean of Curriculum and Assessment Karyl Daughters said that CSB+SJU leaves the parameters for generative AI use to the discretion of individual professors, and that the institution expects those parameters to evolve as AI tools expand. She also emphasized that using AI effectively requires specific critical skills that professors hope to help students develop.
“The ability to craft clear prompts, evaluate AI-generated content and identify potential biases or inaccuracies are crucial skills that rely on your fundamental writing abilities and critical judgment. AI amplifies human capabilities but doesn’t replace the need for clear communication and analytical thinking — it increases their importance. Our dedicated faculty remain committed to ensuring our graduates continue to develop these essential skills,” Daughters said via email.
On the CSB+SJU libraries website, students can find a section covering Generative AI in higher education at https://guides.csbsju.edu/AI/syllabus. In regard to using these tools, such as ChatGPT or Gemini, they write, “Check each class syllabus or ask each of your instructors for guidelines and expectations around Generative AI use in their class. What is allowed or required in one class may be prohibited in another class. Unauthorized use of Generative AI tools could be considered cheating.”
Daughters also commented on broader ethical issues surrounding AI that students need to be aware of.
“Generative AI relies on the original work of human creators. This situation parallels broader ethical concerns we face in the digital age, where original work by human creators is often used without proper attribution or compensation — a practice that undermines both intellectual integrity and the sustainability of creative professions. It’s important to consider the implications if our ability to create and think critically diminishes,” Daughters said via email.
Jennifer Kramer, an associate professor in the Strategic Communication Studies department, has watched AI’s development in higher education over her years at CSB+SJU as both a professor and, before that, a student.
“In my classes, I am generally okay with AI to help generate initial ideas, but after that students need to use their own critical thinking and knowledge to fill in the gaps because the internet doesn’t know the target audience here at CSB+SJU, or even me, as the professor,” Kramer said.
Kramer’s classes are all “personal,” as she teaches courses ranging from Interpersonal Communication to Fat Studies, and AI doesn’t know students’ personal opinions. She said she can tell when AI is used because the resulting responses are generic and lack the personal connection often needed for her assignments.
Global Business Leadership Assistant Professor Clinton Warren is one of many professors helping students learn to navigate modern AI use by allowing it in his classroom under specific guidelines, which he said he is actively refining for the future.
“From a business major’s perspective, AI tools will be used in the workplace after graduation… In my classroom, I’m working to allow more AI usage. I would like to allow, and even encourage, students to use generative AI to help on assignments. In doing this, I want to be sure that the tools are viewed as more of an assistant than something that is doing the work for a student,” Warren said via email. “The compelling issue to me in higher education is academic integrity. We want students to come to college and leave better critical thinkers, problem solvers, writers, communicators and people. Overreliance on AI tools can absolutely erode that. That does concern me.”
When it comes to Cal State, Warren said he sees the issue from both sides.
“On the positive side, as with any industry-academic partnership, the opportunities for student professional development increases significantly. Having Amazon in the classroom is a great way for students to build their skills and networks as they look toward the job market…the teaching potential there is immense if the instructional goal is to gain that technical proficiency,” Warren said via email.
“The tradeoffs to that are the cost and the potential for industry to take complete control of the classroom/curriculum. My understanding is Cal State paid many millions of dollars for this partnership. That is a big risk in today’s higher education landscape, but more fundamental than that, is the potential to lose the ability to build courses and curriculum that focus on the core elements of student learning.”
Some of Cal State’s faculty have pushed back against the AI partnership, citing concerns much like Warren’s: the deal was rolled out amid budget cuts, and it allows companies to promote their “unproven chatbots as legitimate educational tools,” according to a comment given to the NYT.
A study from MIT earlier this year found evidence, based on participants’ brain activity, suggesting that ChatGPT may erode critical thinking skills in frequent users. AI chatbots have also faced a number of lawsuits alleging that some bots contributed to user deaths by providing harmful responses during mental health crises. These are just a handful of the ethical issues that arise when AI is in the public’s hands, and the technology is only getting smarter and more comprehensive.
We feel it’s inevitable that AI will be a compelling tool, perhaps even a required one, in our future careers. CSB+SJU’s guidelines allow professors to use their discretion based on their fields and expertise, and we think this regulated approach makes sense. What we don’t agree with is Cal State’s approach, which seems to accept the idea that universities must become training grounds for tech companies in order to give students the skills that make them more “marketable” in a post-AI world.
We believe we can still be innovative and at the cutting edge of learning without allowing AI to completely remake our instructional policies. No institution has all the answers when it comes to working with AI, but a cautious, practical approach makes more sense to us than a massive AI experiment.
Cal State is still in the early stages of this rollout, so it will be worth monitoring the outcomes to see just how much the partnership actually benefits students, especially those at technical universities who stand to gain the most from it. Overall, it’s our hope that even as we work with AI tools, we don’t lose sight of what we believe matters most in our academic and professional lives: our humanity.