How to prevent mental atrophy due to reliance on AI

June 23, 2025 | 03:05 pm PT
Trinh Phuong Quan, Architect
I watched my nephew do his homework. He did not read a book, jot down bullet points or even reach for scrap paper.

He typed into his computer: "Write a commentary on the following poem in 500 words using Comic Sans font."

All he had to do was download the file; he did not even have to change the font.

"Do you think it’s modern and progressive?" he asked me, turning away from the screen.

No, it's not progressive; it's a sign of severe mental degeneration.

Students nowadays don't learn like "parrots" anymore; they don't learn at all.

Gen Z and Gen Alpha are tech-savvy, but the downside is that they can trade thinking for grades, replacing their own reasoning with strings of words generated lifelessly by algorithms.

Abusing AI will certainly have a negative impact on young users’ ability to acquire knowledge and train their thinking.

But banning AI seems unthinkable in this era. The question, then, is how to use it properly.

ChatGPT launched in late 2022 and was immediately available in the U.S. At the time Stanford had no specific regulations, and each faculty member decided for themselves: some banned it outright, while others accepted AI as a tool to be tested.

Logos of OpenAI and ChatGPT. Photo by Reuters

In my intro to programming class, Professor Nick Parlante prohibited the use of AI, considering it akin to copying source code from the Internet.

His class used MOSS (Measure of Software Similarity) to detect copied or AI-generated code in exercise submissions.

The software is sophisticated enough that even if students change variable names, add comments or deliberately make the code "messy" to disguise its origin, MOSS can still detect it.

"You need a clear policy to maintain class discipline," he said. This tough approach not only prevented cheating but also emphasized the importance of thinking for yourself.

Stanford also tweaked the way it assessed students' work. For example, computer science majors were required to write code on paper for exams, rather than typing it on a computer, to limit AI interference.

Instructors graded on problem-solving logic and reading comprehension, not on whether the code ran perfectly or every semicolon was in place. This made me focus on the thought process rather than on a right-or-wrong outcome, a valuable lesson about the nature of learning.

However, not everyone was afraid of AI. In my construction management class, Professor Martin Fischer, a leading expert on virtual design and construction, encouraged the use of AI as much as possible.

He was extremely enthusiastic, even sharing humorous stories composed by ChatGPT.

But for group projects simulating real-life situations, students were required to analyze data, plan and present directly to the lecturer in class, tasks AI cannot do for them.

This experience showed me that, when used properly, AI can improve efficiency without causing us to lose independent thinking.

In the real estate technology seminar, instead of writing a typical essay, we had to write a personal reflection after each meeting with an expert.

Exercises like these, which demand critical thinking about real-life events, make AI redundant, even a waste of time if you try to use it.

After all, a machine’s answers cannot replace reflections stemming from a person's perspective, culture and life experience.

Many of the world’s leading universities are proactively developing policies to manage the use of AI in learning and research.

Instead of an absolute ban, schools are choosing a flexible, controlled approach, taking advantage of AI's benefits while still developing students' independent thinking.

At Oxford University, students are encouraged to use AI as a learning aid, provided they are transparent about it, especially in exams.

Harvard, Stanford, MIT, and Cambridge all take a similar stance: AI can be used to outline ideas, check grammar and aggregate information, but it should not completely replace individual effort.

In Asia, the National University of Singapore (NUS) and Tsinghua University have made it clear that the use of AI must comply with rules on academic integrity and data privacy.

Some schools, such as Peking University, have even imposed strict sanctions, including revoking degrees, if AI fraud is detected.

What these policies have in common is flexibility and balance. Many schools allow instructors to decide how much AI use is appropriate in each subject, demonstrating trust in their judgment and encouraging transparent dialogue between students and instructors based on a few common principles.

Firstly, banning AI will not stop people from using it. In an age where AI is embedded in most software, banning it will only encourage students to use it covertly and without supervision, causing them to miss out on learning how to use AI responsibly.

Secondly, using AI does not automatically mean cheating.

If students use AI to analyze text, ask critical questions or find new perspectives, that’s learning.

The problem is not with tools, but their purpose and how they are used.

Thirdly, the ability to work with AI is an important part of future competencies.

Just as calculators were banned from classrooms until a few decades ago but are now indispensable tools, AI will become a natural part of academic and professional environments.

If students are not properly trained to use it, they will lose their edge when entering the job market.

To manage AI in education effectively, exams should avoid rote memorization questions and instead require analysis, synthesis and practical application of knowledge.

Using project work instead of traditional exams will help students develop teamwork and problem-solving skills.

Handwritten exams, oral tests, quizzes, and case studies also help assess independent thinking and limit cheating.

In addition, training on the ethics of AI use is essential, and it needs to be updated and taught regularly in schools.

AI is not scary, but without clear direction, it can become a crutch that makes students gradually lose the habit of thinking for themselves.

Instead of prohibition, we need flexible solutions for AI application.

The important thing is to maintain the boundary between "support" and "replacement."

Because no matter how far technology advances, the abilities to debate, create and cooperate remain core values that people need.

The more modern technology is, the more people need to return to the most original form of thinking: asking questions, finding answers and growing from those questions.

*Trinh Phuong Quan is an architect.

The opinions expressed here are personal and do not necessarily match VnExpress's viewpoints.
 
 