10 ways to tackle the growing problem of unauthorized AI at work

Laura Lemire and Jim Vana
Laura Lemire is a privacy and cybersecurity attorney at Schwabe with more than a decade of experience in the technology sector. Contact her at 206-407-1574 or [email protected]. Jim Vana is a Schwabe shareholder with over three decades of experience centered on Trademark, Copyright, Internet and Advertising law. Contact him at 206-407-1533 or [email protected].

The rise of artificial intelligence has brought both opportunities and challenges to the workplace. A growing trend of employees using free or unauthorized AI tools, however, poses significant risks, from security breaches to the loss of trade secrets. Recent reports indicate that some workers are engaging with AI in ways their employers have not authorized, highlighting the importance of establishing policies and protocols that enable the responsible and deliberate adoption and use of AI at work.

How are employees using AI at work?

One report by Ivanti revealed:

  • 46% of office workers say some or all of the AI tools they use at work are not provided by their employer;
  • 38% of IT workers are using unauthorized AI tools; and
  • 32% of people using generative AI at work are keeping it a secret.

Another recent study out of the Melbourne Business School found that among those who use AI at work:

  • 47% say they have done so in ways that could be considered inappropriate; and
  • 63% have seen other employees using AI inappropriately.

What could possibly go wrong?

In a report aptly titled From Payrolls to Patents, Harmonic found that 8.5% of prompts entered into popular generative AI tools included sensitive data. Of those prompts:

  • 46% included customer data, such as billing information and authentication data;
  • 27% included employee data, such as payroll data and employment records;
  • 15% included legal and finance data, such as sales pipeline data, investment portfolio data and M&A materials; and
  • 12% included security policies and reports, access keys and proprietary source code.

Inappropriate uses of AI in the workplace can result in cybersecurity incidents, threats to national security, IP infringement liability and the loss of IP protections.

For example:

  • Patent eligibility: Patent applications are examined against prior art. While U.S. patent law grants inventors a one-year grace period to file an application after public disclosure of the invention, inadvertent employee disclosure of information through AI could become “prior art” that prevents patent protection.
  • Trade secrets: Trade secret protection depends on reasonable efforts to keep the information secret. If an employee discloses confidential information to an AI tool, the company may lose that protection.
  • Copyright: Employees who do not fully appreciate how an AI tool works may inadvertently allow the tool provider to use company information to train its large language model (LLM). Further, using copyrighted materials as prompts (or parts of prompts) can constitute copyright infringement and makes it more likely that the output will itself be infringing.
  • Trademark: A trademark is a company’s exclusive brand. However, improper use of the mark to refer to a category of goods or services can cause the mark to become generic and available for everyone’s use. “Thermos,” “Aspirin” and “Escalator” are examples of former trademarks that are now generic. As such, an LLM that continues to train on employee-provided data may produce output that uses the mark generically, weakening the trademark.

10 steps to minimize AI risks and encourage responsible AI adoption at work

In addition to applying technical solutions to address these risks, business leaders can implement a variety of organizational measures to support the responsible adoption of AI in the workplace. For example, businesses may:

Adopt an AI policy

As a starting point, consider a policy that:

  • Prohibits the download and use of free AI tools without approval.
  • Limits acceptable use cases for free AI tools.
  • Prohibits sharing confidential, proprietary and personal information with free AI tools.
  • Limits the inputs, prompts and requests that may be submitted to free AI tools.
  • Restricts the use and distribution of output from free AI tools.

Update existing policies

These should include IT, network security and procurement policies, updated to account for AI risks. While reducing AI risks requires a multidisciplinary approach, teams that provide cross-functional support for your organization may be best positioned to spot issues early.

Review contracts for AI tools

AI developers often require disclosures or other measures in their terms and conditions, which may necessitate changes to users’ privacy statements or terms of use.

Train employees on the responsible use of AI

Ensure employees are informed of your AI policies, understand AI risks and best practices, and know how to report AI-related issues.

Develop a data classification strategy

Help employees spot and label confidential, proprietary and personal information. This increases each employee’s AI proficiency, which in turn reduces the company’s exposure.

Designate employees who will be authorized to use company-approved AI tools

Companies can create an approval mechanism that allows interested employees to obtain authorization to use AI tools. This may increase efficiency by narrowing the pool of employees who need more comprehensive AI training.

Require documentation

Individuals using AI tools should document their use, including inputs and outputs. This information may be necessary to assess IP risks or claims. Such data can also be used to assess compliance with AI policies and identify new risks.

Implement a review process for the publication or wide distribution of AI-generated content

Checking outputs for bias and accuracy, for example, can reduce the likelihood of reputational issues related to the use of AI-generated content.

Continuously monitor the use of AI in your workplace

Monitoring may include regular review of the contracts and terms for AI tools (which often change) or testing AI outputs for accuracy, relevance and bias. Companies can also form oversight committees to ensure ongoing compliance and catch emerging risks.

Implement an incident response plan that covers foreseeable AI scenarios

For example, designate a first point of contact for employees who suspect or realize that confidential information was shared with an AI tool, or who have other concerns about a tool.

The future of AI at work

Employers should take the initiative and actively communicate with employees about AI risks and acceptable use, adopt clear AI policies, update existing security protocols and provide employee training. Such actions not only protect sensitive data, but they can also empower employees to innovate responsibly. By prioritizing preparedness, organizations can benefit from AI gains—from enhanced productivity to cost savings—while reducing risks.


This article summarizes aspects of the law and opinions that are solely those of the authors. This article does not constitute legal advice. For legal advice regarding your situation, you should contact an attorney.

Schwabe patent attorney Jeff Liao contributed to this article.
