As Canadian Schools Rethink AI, Failing Students Isn’t the Answer


There is a scene many of us have watched unfold. A student sits at the kitchen table late at night, trying to finish an assignment. They are not looking for a shortcut; they are looking for help. They reach for an AI tool like ChatGPT the way they might open a dictionary, use a calculator, or type a query into a search engine. They ask for examples, ideas, and help on how to structure their thoughts.

Then the uncertainty arrives.

They have heard rumours that using AI tools is “cheating,” even if they only use it to brainstorm or clarify. They have heard stories of classmates being questioned or investigated. In some cases, when expectations have not been clearly communicated in advance, students may be told that suspected use of AI could result in a failing grade.

Why This Moment Matters

At a time when young people already feel pressure about their grades, finances, mental health, and an uncertain labour market, unclear expectations around AI add one more layer of anxiety. As career professionals, we need to pay attention, because these experiences shape confidence, ethics, identity, and future employability.

Digital and generative tools are already being absorbed into workplaces across industries: quickly, quietly, and unevenly. In many cases, enforcement policies were introduced before shared guidance and a full understanding of AI had time to develop. Encouragingly, many Canadian institutions are beginning to reverse this pattern.

Several Canadian post-secondary institutions, such as Concordia University, are already taking a constructive, leadership-oriented approach by offering clear, practical guidance to help students understand how to use generative AI responsibly, transparently, and with integrity.

Enforcement-First Approaches Can Have Unintended Consequences

I understand why schools worry. They are trying to protect learning, fairness, and academic integrity. These are good goals. The challenge is that when enforcement comes before education, it can create unintended consequences.

When fear becomes the driver, students do not stop using AI; they simply stop being honest about it. This is a predictable response to uncertainty, not a reflection of student values. Curiosity gives way to anxiety. Students who could have learned to evaluate AI critically become afraid to engage with it at all.

Equity gaps can also widen when guidance and expectations are unclear. Students with strong support systems may receive help from parents, tutors, or paid services. Students without these supports often turn to free AI tools for basic assistance, then may face consequences for doing so.

Career professionals across Canada are already seeing the downstream impact. A student who is accused of cheating can lose confidence for years. A new graduate who never learned how to use AI appropriately may feel ill-equipped in their first job. A young adult who internalizes the message that “tools are dangerous” may struggle with adaptability later on. This message is rarely intentional, but it can be what students take away when rules are unclear.

The Real Issue: Not Whether AI Was Used, But How

This is the core issue. The workplace is already moving toward a different question. Many employers are no longer asking, “Did you use AI?” They are asking whether it was used responsibly.

They want to know whether information was verified, decisions were thoughtful, privacy was respected, and if the final work reflects the individual’s judgment and voice.

Schools can ask the same questions, not to catch students, but to teach them how to think and make decisions ethically.

This shift mirrors emerging academic guidance, such as the University of Toronto’s AI Literacy Framework, which emphasizes ethical use, critical evaluation, and human judgment over simple AI detection or prohibition.

As career professionals, we are well positioned to model this shift in our own work and in our client guidance. We can help clients move from fear to competence, and from secrecy to transparency.

More Practical Approaches Being Used by Some Schools

Some educators are already experimenting with balanced approaches, and career professionals can learn from these trends when supporting students and early-career clients.

Clear definitions of acceptable AI use make a meaningful difference. When schools explain what is allowed, such as brainstorming, outlining, rewriting for clarity, grammar checks, generating practice questions, summarizing notes, or building study plans, students are far more likely to comply.

Another promising approach is to ask students to document or explain their creation process, rather than simply submitting the final product. Students might share a rough outline, reflection notes, sources consulted, or a brief description of how AI was used. This is not about surveillance. It is about learning, and it builds skills employers already value.

Building reflection into assignments can further strengthen academic integrity. When students are asked questions such as, “What did you learn while creating this work, and how did you decide what to include or exclude?” they must demonstrate their thinking. Thoughtful reflection is difficult to articulate without genuine engagement, making learning visible in ways that detection tools cannot.

Some schools are also redesigning the way students are assessed. Tasks such as presentations, applied case studies, personal reflection tied to course concepts, or improving AI-generated drafts make critical thinking visible, even when AI is part of the process.

What This Means for Career Professionals

Many of us work with students, co-op and internship candidates, early-career jobseekers, and career changers who are returning to education. We need a practical, consistent message that reduces shame while increasing responsible behaviour and confidence.

One place to start is teaching clients to see AI as a support tool, not a ghostwriter. If a client uses AI-assisted drafting, we can help them treat the draft as a starting point. Then, we teach them to take real ownership of the work. That means rewriting in their own voice, adding details only they would know, removing anything they cannot defend, and verifying facts.

Disclosure also matters. Many students and clients hesitate to disclose AI use because they expect judgment. We can help normalize calm, professional disclosure, such as explaining that AI supported brainstorming or clarity, while the analysis and final decisions remain their own. This builds trust and reinforces ethical habits that will matter throughout their careers.

Verification is another essential career skill. AI can sound confident and still be wrong. Helping clients build habits of checking sources, comparing information, verifying dates and definitions, and asking what might be missing strengthens both career readiness and professional integrity.

Finally, some clients need help rebuilding confidence after accusations. Supporting them to document their process, practice respectful self-advocacy, and shift from shame back to learning can be transformative.

A Gentle Reminder About Purpose

When schools lead primarily with enforcement, the message students may receive is: “We do not trust you.” When schools lead with education, the message becomes: “We believe you can learn this responsibly.”

That difference shapes behaviour. It shapes identity. It shapes how a young person enters the workforce.

Career professionals can be steady guides here. We can help clients understand that integrity is not the absence of tools; it is the presence and application of honest decision-making.

The CPC Certified Work-Life Strategist (CWS) certification supports professionals in developing the reflective, ethical, and strategic skills needed to guide clients responsibly in the age of AI.

In the spirit of transparency, I want to share that while the perspectives in this article are my own, I used AI as a support tool in shaping and refining the writing, the same way many professionals now use digital tools responsibly and ethically in their work.

Prepare Students for the World They Are Entering

AI is not a trend students can avoid. It is becoming an everyday part of how work gets done. If schools respond to AI with fear, we will graduate students who fear learning. If schools respond with guidance, we will graduate students who know how to think, verify, and act ethically.

Our role is to help clients, and the systems around them, move toward preparation instead of punishment. That is how we build confidence, competence, and a sustainable future that still feels human.

Grounding our practice in CPC’s Code of Professional Conduct can strengthen our ability to coach clients through ethical dilemmas with confidence and credibility.

I’ve chosen to openly acknowledge my use of AI as a writing aid, because demonstrating responsible, transparent AI use is exactly what we should be teaching and modelling as career professionals.

Canada has an opportunity to model a balanced path forward, one that protects academic integrity while actively teaching AI literacy, ethical judgment, and transparency.

– By Sharon Graham, Founder and Chair of Career Professionals of Canada –

Written in collaboration with ChatGPT, developed by OpenAI, based on the author’s original ideas. Image generated using ChatGPT.
