When the first speaker in a Senior HR Connex Think Tank asked, half-jokingly, whether using generative AI amounted to “cheating,” they hit on the confluence of anxieties that had been plaguing them and their peers: productivity, authorship, liability, and professional identity in a world where there is no certainty over who really put pen to paper. A marketing brief, a consultant’s recommendations, a contract draft that may or may not have been machine-assisted – each daily interaction now carries room for skepticism and the weight of what is, at least in part, an ethical question. Across their discussion of “shadow AI” – that is, employees’ use of AI tools outside of an organization’s direct purview or tech stack – executives repeatedly returned to variations on that initial question.
When is assistance a shortcut and when is it an enhancement? When does a draft become someone else’s work, and who owns what follows? How can the workflow promise of AI be leveraged without unnecessarily inflating risk?
Transparency, Uses & Credibility
In many offices, AI has become an improvisational fallback. People use it to tidy email drafts, seed slide decks, synthesize rows of spreadsheet data, and produce first-pass research on vendors. One director, for example, described giving two spreadsheets to an AI assistant and asking who appeared on which list: a task that once would have taken an hour of staff time, reduced to a quick prompt and an editorial pass. A learning officer discussed using AI-driven role-play so managers can rehearse difficult conversations; practiced exchanges can make those encounters more human, they explained, so long as the AI remains a practice partner rather than an oracle.
What emerged most plainly from participants was a preference for transparency over prohibition. Several participants described a familiar arc: an initial reflex to ban consumer tools - “don’t use ChatGPT” - followed by a compromise in which security teams block only risky sites, procurement teams negotiate enterprise licenses, and the organization nudges employees toward an approved list of resources like Copilot. The simplest policy that proved serviceable was to allow the use of AI but require disclosure.
“If you use an AI tool to draft, analyze, or summarize content,” said one participant, “acknowledge it and take responsibility for the output.” That approach aligns, at least in ethos, with how work has been performed for decades. Do what must be done to achieve the best possible outcome, but be honest about the process, its risks, and its drawbacks.
This still left several edges to smooth over, many of which may take years to yield true best-practice answers. Several participants noted that AI outputs can flatten authorship, producing generic language that erodes a presenter’s credibility. Copy-and-paste content is easy to spot; what’s more consequential is when a person presents material they cannot explain. Boards notice this. Executives notice it. Employees notice it. A slide deck is, after all, only a prompt for the human who stands behind it. Leaders said they quickly learned to insist on human narrative - the slides may show the data, but the person must deliver the context, reasoning, and judgment without using their tools as a replacement or scapegoat for informed decision-making.
Bias, Security & The Legal Frame
Yet another edge to account for is the reality that AI both reveals and reinforces patterns. It can surface exclusionary phrasing in job descriptions, yet it can also perpetuate the very signals that led to exclusion in the first place. Several HR teams have used bots to scan appraisals and nominations for problematic language - terms that sound innocuous, like “energetic,” can skew candidate pools away from older applicants or those who don’t fit a narrow cultural frame. But tools that detect exclusionary language need human oversight. An algorithm can flag a pattern; humans must decide whether it reflects legitimate criteria or an unfair barrier.
This is made only more important in matters of security and legality, where constraints themselves can shape outcomes. Some organizations firewall public models outright, while others require enterprise-grade licenses or internal models that can be fed proprietary documents without risk. A compromise several leaders mentioned was license stewardship - allow pilots and advanced features, but tether them to actual use, with access revoked from teams that cannot demonstrate consistent, satisfactory use and human review. This approach encourages the nimble experimentation of team-specific decentralization while curbing vendor sprawl and ensuring structured oversight. It also gives IT a practical lever to manage cost and risk.
One stark anecdote made the procurement implications concrete. After paying a consulting firm a hefty fee for training materials, one team found that the deliverable matched, verbatim, the output of a public generative model. Confronted with the work, the vendor could not plausibly deny using AI. This led to contract management reforms, with plain clauses embedded into statements of work: disclose the use of any generative tools, warrant originality, and assign IP as needed.
Training, Culture & Board Oversight
Rigorous AI training came up repeatedly as both a tactical fix and a cultural necessity. Leaders have begun running short prompt-engineering workshops to teach staff how to ask better questions of models so outputs are less likely to drift and more likely to be useful drafts. Prompt design, they found, reduces verification time and increases utility. But training cannot be only technical. It has to be about judgment - when not to feed sensitive data into a model, when to escalate questionable verbiage to legal teams or management, and who should have final sign-off on work that includes AI-generated passages.
In some places, boards now receive briefings on AI governance as part of their regular enterprise risk conversations. In others, executives pilot secure models to see how the technology can support high-level decision-making without exposing proprietary information.
Throughout, participants’ tone toward labor displacement was consistently cautious: the preferred goal was staff augmentation rather than elimination. AI promises to reclaim hours currently swallowed by repetitive tasks; redirecting that effort toward more valuable work like analysis, relationship-building, and strategic thinking is where true force multiplication lies. But rhetoric alone does not change reality. HR leaders must manage expectations, coach managers, and design work so that freed time is used productively.
A Compact Roadmap
Participants shared several practical, immediate steps HR leaders can take. First, create a brief disclosure policy and a short AI ethics training module. Next, ask IT to inventory what tools are being used, both shadow and approved; once the most useful are identified, you can begin piloting two or three low-risk, high-value use cases (e.g., role-play coaching, data synthesis, meeting brief generation). Add AI disclosure clauses to your vendor agreements and workflow SOPs, ideally under the oversight of an appointed AI steward who can bridge HR, legal, and IT considerations and field support requests. These moves are modest and valuable precisely because they center AI adoption on risk reduction and the path of least resistance – a vital consideration for any change management initiative.
Ultimately, the tie that binds each of these steps is clarity of ownership. AI can summarize, propose, and pare away the unnecessary, but it cannot be allowed to shoulder the weight of institutional or individual judgment. The work of HR, then, is not only to produce AI guidelines but to sustain a culture that emphasizes integrity.
That middle path - neither technophobic nor technophilic - felt to participants like the most practical stance. Clamp down too hard, and both usage and creativity will go underground. Allow unfettered experimentation, and the organization risks compliance failures and reputational harm. Clarity, coaching, and reasonable controls strike the balance that lets innovation happen within safe bounds.
To return to the question that opened the session - is using AI cheating? It depends, participants agreed, on whether the tool is being used as a shortcut or as a scaffold. Cheating implies concealment, a desire to pass off someone else’s labor as one’s own. Acknowledge the assistance, take responsibility for the product, and ensure those who present the work can explain it. Wielded with care, these resources can free staff to perform the work machines cannot: judgment, mentorship, and the quiet but persistent labor of fostering organizational culture.
Connex membership is an excellent opportunity to join and learn from conversations just like the one that preceded this article – all of which are peer-driven with the goal of uncovering and exploring emerging HR best practices. Connex provides a variety of resources to that end, including access to an exclusive online community, a library of industry content, and a host of virtual and live events. To learn more about becoming a member, CLICK HERE.