Human Resources Mag
Benefits

What HR can do to minimize the risks of unauthorized AI at work

By staff | July 21, 2025 | 6 Mins Read

The rise of artificial intelligence has brought both opportunities and challenges to the workplace. However, a growing trend of employees using free or unauthorized AI tools poses significant risks, from security breaches to the loss of trade secrets. Recent reports indicate that some workers are engaging with AI in ways that are not authorized by the employer, highlighting the importance of establishing policies and protocols that will enable responsible and deliberate adoption and use of AI at work.

One report by Ivanti revealed:

  • 46% of office workers say some or all of the AI tools they use at work are not provided by their employer;
  • 38% of IT workers are using unauthorized AI tools; and
  • 32% of people using generative AI at work are keeping it a secret.

Another recent study out of the Melbourne Business School found that among those who use AI at work:

  • 47% say they have done so in ways that could be considered inappropriate; and
  • 63% have seen other employees using AI inappropriately.

What could possibly go wrong?

In a report aptly named From Payrolls to Patents, Harmonic found that 8.5% of prompts into popular generative AI tools included sensitive data. Of those prompts:

  • 46% included customer data, such as billing information and authentication data;
  • 27% included employee data, such as payroll data and employment records;
  • 15% included legal and finance data, such as sales pipeline data, investment portfolio data and M&A materials; and
  • 12% included security policies and reports, access keys and proprietary source code.
Co-author Laura Lemire, Schwabe

Inappropriate uses of AI in the workplace can result in cybersecurity incidents, threats to national security, IP infringement liability and the loss of IP protections.

For example:

  • Patent eligibility: Patent applications are examined against prior art. While U.S. patent law grants inventors a one-year grace period to file an application after public disclosure of the invention, inadvertent employee disclosure of information through AI could become “prior art” that prevents patent protection.
  • Trade secrets: If an employee does disclose confidential information, the company may lose trade secret protection.
  • Copyright: Employees who do not fully appreciate how the AI tool works may inadvertently give away company information to allow the AI tool provider to train its large language model (LLM). Further, using copyrighted materials as prompts (or parts of prompts) can constitute copyright infringement and is often more likely to generate output that is itself infringing.
  • Trademark: A trademark is a company’s exclusive brand. However, improper use of the mark to refer to a category of goods or services can cause the mark to become generic and available for everyone’s use. “Thermos,” “Aspirin” and “Escalator” are examples of former trademarks that are now generic. As such, it is possible that as an LLM continues to train on employee-provided data, it may produce outcomes that weaken the trademark.

10 steps to minimize AI risks and encourage responsible AI adoption at work

Co-author Jim Vana, Schwabe (photograph by Stuart Isett)

In addition to applying technical solutions to address these risks, business leaders can implement a variety of organizational measures to support the responsible adoption of AI in the workplace. For example, businesses may:

Adopt an AI policy

As a starting point, consider a policy that:

  • Prohibits the download and use of free AI tools without approval.
  • Limits acceptable use cases for free AI tools.
  • Prohibits sharing confidential, proprietary and personal information with free AI tools.
  • Limits inputs, prompts or asks of free AI tools.
  • Restricts the use and distribution of output from free AI tools.
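A policy that prohibits sharing confidential, proprietary, or personal information with free AI tools can also be backed by a lightweight technical gate. Below is a minimal sketch, assuming hypothetical pattern names and formats (a U.S. SSN format, an `sk-`/`pk-` style API key prefix, a raw card number); real deployments would rely on a proper data-loss-prevention product rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns a policy gate might flag before a prompt
# reaches an external AI tool. Illustrative only, not exhaustive.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in a prompt."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]

# Example: this prompt would be held back for review.
violations = check_prompt("Summarize the invoice for SSN 123-45-6789")
if violations:
    print(f"Prompt blocked: contains {', '.join(violations)}")
```

A gate like this is best treated as a backstop for the written policy, not a substitute for employee training, since pattern matching misses context-dependent disclosures.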

Update existing policies

These include IT, network security, and procurement policies, each updated to account for AI risks. While reducing AI risk requires a multidisciplinary approach, teams that provide cross-functional support for your organization may be best positioned to spot issues early.

Review contracts for AI tools

AI developers often require disclosures or other measures in their terms and conditions, which may necessitate changes to users’ privacy statements or terms of use.

Train employees on the responsible use of AI

Ensure employees are informed of your AI policies, understand AI risks and best practices, and know how to report AI-related issues.

Develop a data classification strategy

Help employees spot and label confidential, proprietary and personal information. This increases each employee’s AI proficiency, which reduces exposure for the company.
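In practice, a data classification strategy assigns each document a sensitivity label that downstream rules (such as an AI-use policy) can act on. The sketch below shows the idea with hypothetical label names and keywords; real classification schemes are defined by policy and usually enforced with dedicated tooling.

```python
# Minimal keyword-based classification sketch. Labels are checked in
# order from most to least sensitive; "public" is the fallback.
LABELS = [
    ("confidential", {"trade secret", "m&a", "source code"}),
    ("personal", {"payroll", "salary", "home address"}),
]

def classify(text: str) -> str:
    """Return the first matching sensitivity label, or 'public'."""
    lowered = text.lower()
    for label, keywords in LABELS:
        if any(keyword in lowered for keyword in keywords):
            return label
    return "public"
```

Once documents carry labels like these, a rule such as "nothing above `public` goes into a free AI tool" becomes simple to state and to check.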

Designate employees who will be authorized to use company-approved AI tools

Companies can create an approval mechanism that allows interested employees to obtain authorization to use AI tools. This may increase efficiency by narrowing the pool of employees who need more comprehensive AI training.

Require documentation

Individuals using AI tools should document their use, including inputs and outputs. This information may be necessary to assess IP risks or claims. Such data can also be used to assess compliance with AI policies and identify new risks.
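The documentation requirement above can be as simple as an append-only log of each interaction. The following is a minimal sketch with a hypothetical record schema (timestamp, user, tool, prompt, output); an organization would choose its own fields and retention rules.

```python
import datetime
import json

def log_ai_use(log_path: str, user: str, tool: str, prompt: str, output: str) -> None:
    """Append one JSON line per AI interaction (hypothetical schema)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

A JSON-lines log like this is easy to search when assessing IP risk or policy compliance later, because each line is a self-contained record.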

Implement a review process for the publication or wide distribution of AI-generated content

Checking outputs for bias and accuracy, for example, can reduce the likelihood of reputational issues related to the use of AI-generated content.

Continuously monitor the use of AI in your workplace

Monitoring may include regular review of contracts for AI tools (which can often change) or testing for accuracy, relevance and bias in AI outputs. Companies can form oversight committees to ensure regular compliance and catch potential risks.

Implement an incident response plan that covers foreseeable AI scenarios

For example, designate a first point of contact for employees who suspect or discover that confidential information was shared with an AI tool, or who have other concerns about the tool.

The future of AI at work

Employers should take the initiative and actively communicate with employees about AI risks and acceptable use, adopt clear AI policies, update existing security protocols and provide employee training. Such actions not only protect sensitive data, but they can also empower employees to innovate responsibly. By prioritizing preparedness, organizations can benefit from AI gains—from enhanced productivity to cost savings—while reducing risks.


This article summarizes aspects of the law and opinions that are solely those of the authors. This article does not constitute legal advice. For legal advice regarding your situation, you should contact an attorney.

Schwabe patent attorney Jeff Liao contributed to this article.
