From Plagiarism to Partnership: Teaching Integrity in the Age of AI
Most of us grew up with a clear rule: if you copy someone else’s work and claim it as your own, that is plagiarism. The rule was straightforward, teachable, and enforceable. Then AI arrived, and for many people the line suddenly blurred.
- A client says, “I used AI to help me rewrite my cover letter because my writing is not strong.”
- A student says, “I asked AI for an outline because I did not know where to start.”
- A jobseeker says, “I used AI to write my accomplishments, but I am not sure if that’s allowed.”
Underneath these questions is a deeper concern: “Am I being dishonest?”
Career professionals are uniquely positioned to answer that question with care and clarity. We can also help clients build a framework that works in school, in the workplace, and in professional practice.
The Issue Is Not AI—It Is Misrepresentation
AI can produce polished text quickly. That makes it tempting to submit work without understanding it, without verifying it, and without taking ownership of it. That is a real problem.
But the core issue is not the use of an AI tool; it is how the work is represented.
Integrity has always been about truthfulness. It asks whether the work is being represented accurately, whether the person understands and stands behind what they submit, whether sources and contributions are acknowledged, and whether the work respects rules, people, and purpose.
AI does not erase ethical judgment. It simply makes the need to teach it more explicit.
To help clients and students understand expectations around responsible use of generative tools, refer to the University of Toronto’s guidelines on academic integrity and generative AI, which explain how to balance innovation with honesty.
A Simple Integrity Framework Clients Can Remember
It feels important to say openly that this piece was written with the assistance of ChatGPT, not to replace my thinking, but to help articulate it clearly. This mirrors the kind of ethical AI partnership I am describing.
When clients feel uncertain, I like to give them a practical standard they can use in almost any situation. Here is one that works well: “If you cannot explain it, defend it, or redo it without the AI tool, you do not own it yet.” That is not meant to shame anyone. It is a guidepost. It helps clients focus on learning and ownership, not perfection.
Career professionals who want to model integrity and build client trust can benefit from CPC’s Code of Professional Conduct as a foundation for how to teach and demonstrate ethical practice in every interaction.
An award-winning Canadian framework developed at Brock University demonstrates how institutions can move away from fear-based enforcement toward shared responsibility for academic integrity in the age of AI.
Moving From Policing to Partnership
Many organizations and institutions respond to AI with fear. They try to police it. Often, this response grows out of uncertainty rather than intent, as leaders work to keep pace with rapid change.
Policing tends to create a cat-and-mouse dynamic. Secrecy increases. Anxiety rises. Enforcement becomes inconsistent. False accusations occur. The focus shifts from learning to “getting away with it.”
This is rarely about character. It is about how people behave when expectations are unclear.
An AI partnership approach is more effective. It asks people to use these tools in ways that strengthen their thinking, not replace it.
What Partnership Looks Like in Practice
One habit career professionals can teach is using AI for thinking support, not identity replacement. Healthy uses include brainstorming possibilities, outlining structure, generating practice questions, summarizing information, comparing viewpoints, or improving clarity and grammar.
Risk increases when people submit AI-generated text as if it represents their full thinking, copy content they do not understand, inflate credentials, or rely on fabricated facts or references.
Another partnership habit is adding what I call “human fingerprints” to every output. Authenticity is not abstract; it is practical. Clients can be encouraged to include specific examples from their own experience, real metrics they can verify, Canadian context, and reflective insight about what they learned or why a choice matters.
AI can draft. Humans create meaning.
Critical thinking also needs to be taught explicitly. Clients benefit from learning to question AI rather than admiring it. Asking what assumptions are being made, what might be missing, and what evidence supports a claim builds judgment. Pausing to consider whether information is current for Canada and whether the language actually sounds like them builds confidence.
Disclosure is another quiet but powerful habit. In many professional contexts, calm disclosure protects integrity. Statements such as, “AI was used to brainstorm and refine wording, and I verified and finalized the content,” are often enough. Honesty, even when not required, is rarely a liability.
What This Means for Career Materials
AI is now everywhere in job search. The question is how we keep the process ethical while still helping clients compete.
Here are practical guidelines that protect integrity:
- For résumés, AI can help generate accomplishment language or tailor keywords, but clients must verify every detail and ensure it reflects real experience. Inflated titles, fake projects, or skills they cannot demonstrate undermine credibility.
- For cover letters, AI can support structure and tone, but the story must still be personal. Encouraging clients to add one or two specific details that only they could provide makes a meaningful difference.
- For LinkedIn profiles, AI can polish headlines and summaries, but profiles should still sound human, not overly formal or formulaic.
This is where career professionals shine. We protect clients from generic AI voice and guide them toward credible, confident communication.
CPC’s course catalogue includes workshops and training designed to strengthen practitioners’ expertise in ethical job search support, résumé writing, and LinkedIn coaching.
Helping Clients Protect Their Integrity
Some schools and workplaces are now using AI detection tools to help identify whether writing may have been generated by artificial intelligence rather than written by the person submitting it. Many clients are understandably unsure how these tools work and what their results really mean. This is an area where career professionals need to speak with care and realism.
AI detection tools do not actually determine authorship. They analyze writing patterns and assign a probability score indicating how likely a text is to have been produced by an AI: a high score (for example, 90-100%) suggests probable AI creation, while a low score suggests human authorship. These results can vary widely, and the tools are known to mislabel writing by multilingual writers, neurodivergent writers, people who use formal language, and those who have learned structured writing styles.
A score may look precise even when the underlying estimate is uncertain. The risk for students and educators alike is false certainty: when a single result is treated as definitive, important context can be overlooked.
Rather than focusing on tools clients cannot control, we can teach practices that demonstrate integrity and authenticity. We can support clients by encouraging habits that make their writing process visible, beyond just the final product. Recommend that clients keep outlines, drafts, notes, sources, and brief reflections to show how their ideas developed and what choices were made along the way. When thinking is “visible,” learning and ownership are easier to demonstrate, even if questions arise. This protects fairness, reduces anxiety, and keeps the focus where it belongs: on honest work, thoughtful decisions, and genuine learning.
A Gentle Integrity Check-In
Instead of long rules, many clients benefit from a short self-check-in. With their final work in hand, they can ask themselves:
- Can I explain what I submitted in my own words?
- Have I verified all facts, dates, sources, and claims?
- Does the final work reflect my real experience and voice?
- Have I followed the rules of my school or workplace?
- If someone asks how the work was created, can I answer honestly and without panic?
If clients can answer “yes” to these questions, they are usually on solid ground.
CPC’s Certified Career Strategist (CCS) certification is designed for professionals who want to hone specialized skills in career services, including supporting clients through emerging AI-related career challenges.
Conclusion: Integrity Is the Skill That Will Outlast Every Tool
AI is changing how people write, learn, and work. These changes can make integrity feel complicated, especially for students and jobseekers trying to do their best.
But the foundation remains the same: integrity is truthfulness, responsibility, and ownership.
Career professionals can help clients move from fear to partnership. We can teach them how to use AI to strengthen their thinking, not replace it, and how to stay transparent, credible, and authentic in a world where tools are powerful and temptation is real.
I have chosen to be transparent about using ChatGPT as a supportive writing tool, because ethical, disclosed AI use is exactly what we as career professionals should be teaching and modelling.
For a deeper exploration of ethical principles related to client interactions, CPC’s free “Living Our Code” e-book provides real examples and actionable insights you can bring into your practice.
– By Sharon Graham, Founder and Chair of Career Professionals of Canada –
Written in collaboration with ChatGPT, developed by OpenAI, based on the author’s original ideas. Image generated using ChatGPT.