In a recent episode of its podcast Deep Dive, CNA explored the boundaries governing students’ use of AI and asked whether schools – including universities – should relax examination rules that forbid the use of AI.
After all, as the podcast panelists agreed, the workforce already turns to AI: why deprive students of this slice of reality? Steven Chia went so far as to call educators ‘naïve’ for thinking that students would comply with limits on AI use.
Admittedly, there are no guarantees. Even this article might have been quietly generated by a prompt.
And that is the first problem.
Even if the results are stellar, can the student replicate that success without the same tool? Or, in the worst-case scenario, the poor workman simply blames his tools. I cannot accept that my success is predicated on a toolkit rather than my capacity to learn, unlearn and relearn. And one of the most difficult things we learn in school is when to take off the training wheels.
Learning how to use AI judiciously: this is a lesson in self-management.
But a more sinister problem with AI-generated homework looms in the shadows.
In a recent Talking Point survey, 73% of the 500 students polled reported improvement in ‘productivity and efficiency’ through their use of AI tools. This is hardly surprising: why use AI otherwise? And it is precisely when such extrinsic motivations supersede learning that the wrong habits are hardened.
Expediency and effort offloading: is this what we want schools to reinforce?
Education, as our Centre Director, A/P Ben Leong, said in the podcast, is the business of motivating people. We believe that with intrinsic motivation, our students can and will succeed. And this certainly begins not with a well-written prompt, but with an inquiring mind, an appetite to try, and eyes on the process, not on the prize.
This reflection was written by Kaizen Low, Deputy Director, AICET Pedagogy.