Should you secretly use AI at work?
Feb 19, 2024
5 mins
US Editor at Welcome to the Jungle
In an ironic twist, many tech giants, those touted guardians of the digital age, are setting boundaries on their own brainchildren. It's a humorous nod to the age-old adage that loose lips sink ships, even when those lips are coded. Amid concerns about data security and the need to safeguard intellectual property, these companies are cautioning their workforces against casually sharing proprietary information with large language models, or LLMs, the algorithmic basis for generative AI applications like OpenAI's ChatGPT and Google's Gemini (previously Bard).
For example, Apple restricted its employees' use of such programs over fears they might inadvertently disclose confidential information, and it instructed staff to refrain from using Copilot, a tool from Microsoft-owned GitHub that automates writing software code. Amazon cautioned its employees against sharing confidential information with generative AI after instances in which ChatGPT's responses mimicked internal Amazon data. Similarly, Google advised its workers to use AI technologies cautiously, including its own Gemini, emphasizing the need to avoid sharing confidential information.
Furthermore, an August 2023 BlackBerry survey of 2,000 IT decision-makers worldwide found that a staggering 75% are contemplating or actively enforcing bans on ChatGPT and similar generative AI tools in corporate settings. Of those, 61% intend the measures to be permanent, signaling a significant shift toward long-term restrictions on AI usage.
However, employees are still finding discreet ways to use generative AI tools at work, regardless of their employers' official stance on the technology. According to a February 2023 survey by Fishbowl, of the 5,067 respondents who use AI at work, 68% reported doing so without their boss's knowledge, making AI a significant clandestine aid in the workplace.
So, what drives employees to use LLMs despite the risks to their jobs and the companies they work for? And how can organizations unlock the potential of AI tools to fuel innovation while safeguarding sensitive data?
Productivity gains and privacy pains
Everyone wants to be more efficient at their job. But Robert Strohmeyer, a technology executive who has led teams at several AI/ML companies and advises businesses on integrating AI and related technologies into their workflows, says these workers are taking risks. “The major AI vendors like Microsoft and Google are quite transparent in their End User License Agreements (EULAs) …. [stating that] engineers regularly review inputs to optimize their AI. Therefore, anything you submit may not remain confidential. They also don’t assure that proprietary content fed into the AI will be kept as such,” he explains.
So why do workers risk their jobs and the security of their employers? Richard G*, a DC-based quality assurance tester who fixes coding issues in software, says he uses generative AI secretly to be more efficient at his job. “I work on government contracts, so sensitive government data needs to be protected. Although I avoid inputting this data, my company has strict policies against such AI tools,” he says. Richard is also worried that his job will be automated away in the future and therefore prefers to keep his employer in the dark about the extent to which AI handles his tasks.
Despite the policies, he uses it anyway: “ChatGPT excels at automating tests, making it easier to deal with challenges or unclear situations by mapping out steps for projects. It’s incredibly useful for programming, providing accurate help with a wide range of programming languages, including ones I’m not used to,” he explains.
Richard’s story shows the precarious position employees occupy with AI technologies in the workplace, balancing the pursuit of increased productivity against adherence to company policies. So, how can this innovative technology advance while proprietary data remains safeguarded?
Safeguarding data while embracing AI innovation
Pete Davis, a senior software engineer working across multiple industries, began experimenting with generative AI in his workplace before it gained official endorsement from his company. He recalls, “In the initial months following ChatGPT’s release, my company promoted awareness and learning about the technology. However, significant apprehensions arose regarding issues such as ownership of the code produced by LLMs and the hazards of exposing confidential information to them.”
Despite these concerns, Davis remained an avid supporter of integrating AI company-wide. “These models introduce features that would be extremely difficult, if not impossible, to create with traditional programming methods,” he says. He was given six weeks to research and demonstrate ways that LLMs could be used not only in the development of code but also in data analysis and in enhancing application functionality.
Davis eventually helped develop some of his company’s internal tools for interfacing with LLMs, limiting security risks posed by open, generative AI models. “We’re lucky because we’re a pretty forward-thinking technology company,” Davis says.
Strohmeyer agrees, “For businesses or use cases involving proprietary data with LLMs or any AI, deploying your own proprietary AI becomes critical. We’re witnessing the rise of a significant market in AI, centered around self-hosted and fully internal proprietary AI models, offering assurances against external usage.”
The double-edged sword of AI innovation
Do the breakthroughs in generative AI merit the confidentiality surrounding their use? Strohmeyer contends that while the benefits are significant, they come with caveats. He points out that AI can enhance decision-making speed and accuracy by automating routine data analysis tasks.
“Such technology, for example, can refine decision accuracy beyond human capability, allowing employees to dedicate their efforts to roles where human insight is indispensable,” he adds. This advancement is notably advantageous in sales, where selecting prospects based on data is more effective than relying on intuition.
Yet, Strohmeyer acknowledges the pitfalls when AI falters, such as producing errors or, in some instances, generating content that may inadvertently mirror sources like the New York Times. These mistakes, he argues, highlight rather than diminish the critical role of human oversight.
He adds, “We’ve seen some high-profile, disastrous consequences from that, like the PagerDuty CEO last year who was suspected of using generative AI to compose a layoff email, which was tremendously embarrassing for the company, and major executives at Sports Illustrated were just fired for using AI as uncredited contributors to Sports Illustrated online.”
Strohmeyer advises that while AI offers immense benefits, it demands vigilant scrutiny. “You have to bring your own intelligence to bear on AI’s contribution to your work, no matter what you do. If you’re writing code with Copilot, most of the time, it won’t produce fully usable, production-grade code. You’re almost certainly going to have to edit or revise that code for use in your application,” he says.
To err is human, to enhance is AI
The debate extends to whether certain tasks should remain exclusively human, especially those requiring emotional intelligence. However, AI is increasingly being integrated into such areas. “For example, in suicide prevention, AI is used to detect red flags that then escalate the issue to a human with the right expertise,” Strohmeyer notes, adding, “AI has shown capability in detecting changes in emotional tone and recommending script adjustments in call centers.” All of which suggests AI’s growing capacity to complement human empathy and intelligence.
Nonetheless, the human need for genuine connection and accountability poses challenges AI cannot fully address. “People typically crave genuine empathy and trust in their interactions with another human being. This desire for a genuine human connection, coupled with the need for personal accountability, especially in sensitive situations, highlights an area where AI cannot fully take over,” he adds.
From resistance to integration
Yes, there are major security concerns with using AI at work, but the benefits are real, from streamlining data to creating a project plan. However, Strohmeyer believes that if your company has explicitly stated it doesn’t want its data shared or AI-generated content used, then going ahead covertly “is frankly a breach of trust,” adding, “anyone discreetly going against their company’s and coworkers’ trust in this manner should really reflect.”
While Richard’s company maintains a stance against AI, he continues to incorporate ChatGPT into his daily tasks. Conversely, Davis’s company is actively seeking ways to safely integrate this technology into their operations. “I believe it’s on the verge of becoming a part of nearly everything we do,” says Davis. “Software development, I predict, will undergo significant changes, demanding new skills. It could shift towards writing detailed specifications for software, with AIs handling the coding,” he adds.
As AI becomes increasingly integrated into our work, Strohmeyer’s guidance extends to workers across all levels, including managers and executives: “Recognize and uphold the unique value of human contribution.” He says we need to focus on areas where humans add irreplaceable value, which varies by role, encompassing aspects like judgment and empathy, and consider how these human qualities enhance our work. He adds, “Rather than overly relying on AI, workers should view it as a tool to augment their capabilities, using AI-generated insights as a foundation for their own innovative solutions and ensuring human creativity and judgment remain paramount.”
Photo: Welcome to the Jungle