Is it safe to use ChatGPT at work yet?
Jul 27, 2023
11 mins
OpenAI’s chatbot is quickly becoming a key work tool, but many of America’s leading corporations have banned ChatGPT in their workplaces. Welcome to the Jungle spoke with five cybersecurity and legal experts about the current dangers of generative AI bots, and how to proceed safely.
Steven A. Schwartz has been a lawyer for three decades, but his legal achievements weren’t why he made the news recently. Instead, Schwartz garnered attention for submitting a legal brief full of false citations – because he’d used ChatGPT to write it. “I did not comprehend that ChatGPT could fabricate cases,” Schwartz told the judge, who fined Schwartz and his legal partner $5,000.
In April, a few Samsung engineers found themselves in similarly public hot water when it turned out they’d entered confidential company info into ChatGPT. Although these employees were simply using the bot to check their code and organize their notes, any data typed into OpenAI’s sensational new work tool is kept and used to train its algorithm. It didn’t take long for Samsung to enact a company-wide ban on all generative AI bots, aka algorithms that generate content.
How is it possible for ChatGPT to be the “hottest new job qualification” when it has proven itself to be an enormous liability? The bot provides some companies with undeniable competitive advantages. Yet, as of now, Apple, Amazon, JPMorgan Chase, Bank of America, Goldman Sachs and other pillars of the US economy don’t want their workers going near it. In June, the US House of Representatives restricted how congressional staff can use the bot in their work. To understand how – or whether – employees can safely use bots like ChatGPT to make their tasks easier and faster to complete, Welcome to the Jungle talked to three cybersecurity leaders and two legal experts specializing in artificial intelligence. Despite the complexity of the situation, they all agreed on where the problems lie and the best way forward. Here’s what they said:
No one knows how it operates
It’s not just America’s hefty employers who have shown reticence about ChatGPT: until recently, the entire country of Italy had banned it. Italy’s data protection authority, which enforces much stricter data protection laws than those of the United States, deemed the bot had no legal justification for “the massive collection and storage of personal data” happening as it trained its algorithm. Italy was concerned that if users typed in personal information as part of their prompts, OpenAI would keep that data and use it as part of the reserves ChatGPT draws on to come up with answers. This could expose the personal data to serious risks: for example, the bot could regurgitate sensitive information typed in by one user to answer another user’s questions. The ban was eventually lifted after OpenAI created a form that allows users to signal when they don’t want the personal data they’ve entered to be retained.
While this step might have appeased the Italian authorities, many chief information security officers (CISOs) find it insufficient. Exactly what happens with information fed into bots like ChatGPT is still unclear, even to experts like Rishi Kaushal, chief information officer at cybersecurity firm Entrust, which is based in Minnesota. “A lot of companies are stopping the use of it until there is more clarity around how the data is used,” he says. “How is the data stored? How is it secured?” Kaushal’s colleague, CISO Mark Ruchie, cites a long list of cybersecurity hazards engendered by generative AI. Among them is the question of how companies like OpenAI protect the data gleaned from people’s prompts. “These [generative AI] tools that are out there, they can use models that can be used in a way that exposes our confidential information,” he says. In other words, the possibility of hackers breaking into ChatGPT to see what people have typed into their prompts is a real cause for concern.
Arvind Raman, senior vice president and CISO of BlackBerry, which pivoted from making cellphones to cybersecurity in 2016, points out that generative AI is still at an early stage. “The technology is still evolving, the policy around it is still evolving even from OpenAI’s perspective,” he says. “I think it’ll take most companies a while before they can actually get comfortable in saying, ‘Okay, this is not in violation of any agreements or using our confidential personal information to train the models.’”
It’s too risky on a number of fronts
Both BlackBerry and Entrust have tiger teams – cross-functional groups – examining the privacy, legal and security issues surrounding generative AI in order to form company guidelines for the safe usage of these bots. Until then, employees at both businesses are not allowed to use these particular AI tools for company work, although Entrust notes that employees are allowed to request permission. “I’m sure the Samsung employee – whoever posted that code – did not intentionally want to put it out there. [They probably thought], ‘I want to improve my efficiency and improve my code, and maybe try and do this with this new technology,’” Raman says. “Very few people do it with bad intent. But it turns out that unintentional and non-malicious activity could result in bad things.”
Ieuan Mahony, a partner at the law firm Holland & Knight, who works on cases involving AI, intellectual property and data privacy, says, “We’re seeing certain clients saying [to staff], ‘Don’t use generative AI because it’s too risky.’ Let’s say I say I’m working on cold fusion and I put that into ChatGPT. So maybe my company’s trade secrets were just exposed to the engine. So it’s similar to the cybersecurity issues […] It’s just a different kind of risk, which is an intellectual property risk.”
Even if we get a better picture of what ChatGPT does with people’s input, employees still run the risk of violating data protection rules. According to BlackBerry’s blog, “Even if AI bot security isn’t compromised, sharing any confidential customer or partner information may violate your agreements with customers and partners, since you are often contractually or legally required to protect this information.” Kevin P. Lee, a law professor at North Carolina Central University, who specializes in AI ethics, jurisprudence and commercial law, puts it this way: “Any time you have a fiduciary duty — an obligation to act in the best interest of someone else — then any kind of communication or research about that someone else is going to be problematic. So if lawyers start putting in queries about matters related to a client, that’s a potential risk.” Lee notes that the finance and medical industries should be particularly wary of this, as they often work with highly sensitive data, such as health records or trading information.
No one’s sure where the content comes from — or who owns it
An enormous amount of ChatGPT’s data is sourced from the internet, which is full of misinformation and disinformation. So when the bot confidently replies to a query, its answer may be factually incorrect. What’s more, OpenAI admits that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers” as a side effect of how the algorithm was trained to respond to a vast range of questions. This phenomenon, known as hallucination, is exactly what created a serious problem for New York lawyer Schwartz, who handed in the erroneous legal brief authored by ChatGPT. Fact-checking the bot’s answers is essential for anyone using it as a search engine, but sometimes the verification process can take so long that it’s easier for an employee to simply research and write the text themselves – defeating the purpose of this productivity tool.
Internet-based datasets also open a can of intellectual property worms. While OpenAI states that users own the rights to ChatGPT’s answers to their queries, those answers could come from content floating around the internet that’s actually copyright-protected. Take Creative Commons licenses: public copyright licenses that allow anyone to share or build on existing works as long as the license terms are respected, which usually means giving credit to the original creator. The internet is filled with texts published under these licenses, but ChatGPT will spit them back out in its answers without citing the original authors. So while users technically have ownership of the answers to their prompts, they may not actually have the rights to use the text in those answers.
According to Mahony, lawyers working on copyright cases involving generative AI are asking, “‘What are the sources? What’s the dataset that they’re using to train their algorithm?’ And then we’re saying, ‘What are the contractual terms governing the use of that data set?’” And therein lies the problem: there’s such a vast amount of data used to train these algorithms that it’s nearly impossible to figure out what’s copyrighted.
Then there’s all the content on the internet that shouldn’t be there in the first place. On June 28th, two authors filed a lawsuit against OpenAI for using their novels in ChatGPT’s dataset without permission, arguing that their books were “copied by OpenAI without consent, without credit, and without compensation.” Their complaint maintains that “shadow libraries,” which illegally share content from books online, are included in the vast sea of internet information absorbed by ChatGPT. So if an employee uses the bot to write a blurb for their company website, how can they know if it includes illegally-sourced information? How can they be sure the text isn’t plagiarized, or doesn’t include sentences they need copyright permission to use?
Intellectual property concerns
Generative AI also exacerbates intellectual property issues around artistic inspiration. “You have these wonderful applications where someone can create new art based off of old artists’ works, like create a picture of a Frankenstein in the style of Salvador Dalí or something. And the drawings are very creative. But the problem is that the artists’ rights aren’t being considered at all,” says Lee, who is concerned that current laws aren’t equipped for these situations. “One way to approach it is to think of these as derivative works, in which the creator would hold the copyright. But what about the original pictures that went into creating it, and the copyrights that have been potentially violated there?”
Mahony says these blurry ownership rules often revolve around the concept of fair use. “What fair use means is, I’ll say, ‘Look, [this person] wrote the great French novel, and I’ve taken a chapter from that, but I’ve transformed it into something new. I’m going to do a parody of your chapter. I need to be pretty close to the original for it to make sense.’ So fair use means I can essentially infringe your copyright, and transform [your work] enough that it’s not infringing.” He notes that generative AI developers are using this “mushy concept” as a defense, arguing their bots transform copyrighted works into something new. Judges, however, may not agree.
“These are really big questions that need a lot of legislative work. We need some new laws to clarify what’s going on there,” says Lee. As long as ownership rules remain opaque, he warns that any music, text, video, or other such creations made with generative AI could expose an employer to liability. “In terms of risk analysis, if you use ChatGPT to generate marketing materials, you’re unequivocally going to infringe on someone’s copyrighted work,” Mahony adds.
The results can be biased
Among its other heralded uses, ChatGPT could streamline a lot of work for human resource departments, from powering chatbots that answer employee FAQs to screening resumes. But as of now, there’s an enormous drawback: ChatGPT has a well-documented bias problem. For example, the bot is liable to regurgitate any prejudice it encounters in its internet-based dataset — just take the uncomfortable, viral example of ChatGPT telling one user that a “good” scientist was white.
Lee says, “Large language models [an AI technique used in ChatGPT] can easily intensify or enhance bias that already exists. So when we’re looking at resumé screening and the hiring side of things, the potential for bias — gender bias, racial bias — is very much a concern.” He points out that while places like New York are trying to crack down on AI bias in employment and hiring, it has yet to be properly regulated anywhere: “The EEOC [Equal Employment Opportunity Commission] put out guidelines on AI bias in hiring, and the Justice Department put out some guidance too. But they’re not a fully developed legal regime.” So the moral of the story is: any time ChatGPT is used in a decision-making process, recruiters and HR reps could be unintentionally perpetuating discrimination.
So, where does that leave us?
This is the kind of situation where an employee would look to their employer for a protocol. And to create a protocol, employers generally rely on laws, government guidelines, and precedents. But generative AI is so new that there’s almost no previous experience to learn from, and no government has passed any legislation regulating its use – though some have introduced guidelines. “I think the biggest problem from an employee standpoint is that there’s no policy,” says Lee. “Because these things are totally new and employers are not familiar with them. So you’re blindly trying to figure out how to best use this.”
Raman predicts that the obstacles to using generative AI at work will eventually be removed. “The Zoom we’re using now was not considered the most secure in terms of the video encryption and whatnot. But now they’ve learned and adapted, and I can see that in the future for generative AI technologies,” he says.
Until ChatGPT’s legal and cybersecurity challenges are cleared up, workers and their bosses will have to rely on advice from the experts grappling with these questions in real time – so it’s worth staying on top of the news as it emerges. With that in mind, we canvassed our interviewees for practical guidance on using ChatGPT safely. Here’s what they recommend:
What employers can do about ChatGPT
- Provide training for all employees on the topic. Kaushal, Ruchie, Raman, Mahony and Lee all agree that it’s up to companies to guide their employees on the use of generative AI in their work. “All organizations should be looking at their enterprise-wide policy, which should encompass security, legal and IT framework,” says Kaushal. “The marketing stuff as well. All of that should be part of a policy that says, ‘This is how we should be using [generative AI], and this is how we should be securing it.’”
- Draw up a policy and processes, and then monitor their use. Despite all the ambiguity around generative AI, it is possible for employers to create a comprehensive policy. “The goal is to have the business comply with what’s called the ‘business judgment rule,’” explains Mahony. “What that means is that you can’t expect the business to always get it right. But what you can expect is for the business to use a careful, transparent, well-thought-out process for coming up with a course of action. And to monitor that decision over time to make sure it’s still a reasonable approach.”
- Make sure all staff are well aware of the policy and what it says. After crafting their policy, a company’s responsibility extends to “training employees so they are aware of it too,” says Kaushal. “That becomes a critical piece so that something like what happened with Samsung doesn’t happen to another organization.”
- Develop informed consent practices. Lee tells managers to be aware that asking employees to read the fine print isn’t enough. In his experience, “The cures that are being offered are cures like, ‘Let’s make sure we have full disclosure and transparency and people know they’re using [generative AI].’ So on one side, we’re looking at consent as being fundamental to the development of the tools. On the other side, courts are enforcing end-user agreements knowing that the end user isn’t actually reading them.” Lee encourages employers to develop informed consent practices for their workers that resemble those used in the medical industry, where patients have to sign forms confirming they can answer basic questions about how their procedure works.
What employees can do about ChatGPT
Big corporations have the resources to investigate the risks and benefits of using ChatGPT, and how it relates to their industry. But what about the 33.2 million small businesses in America that don’t? Or employees working for companies that simply don’t give any useful guidance? Our experts have advice here too:
- Don’t jump in. “If things are not clear, don’t assume it’s okay to do it,” says Raman. “Because sometimes the employers haven’t fully thought out the repercussions of this.” Ruchie’s bottom line: “Don’t type anything into [a publicly available generative AI bot] unless you’re comfortable with it being [published] in the local newspaper.” (One way to put that test into practice is sketched after this list.)
- Be aware of the risks involved for your industry or profession. “There’s risk involved, and there’s reward,” says Lee. “If I’m not the authorized decision maker to take on that risk, then I’m not going to do it. I wouldn’t want to use a system at work that could potentially expose the corporation to risk, even if it means being more competitive. I just think it’s not worth it.” Lee is skeptical that ChatGPT’s productivity gains outweigh its pitfalls. “You’ve been doing something that has worked in the past. Why leave that behind to do something entirely new, just to be a little bit more productive? I’d say the risk outweighs the gain there, at least at the moment.”
- Proceed carefully if you decide to go ahead. Mahony, on the other hand, believes employees can benefit from generative AI tools as long as they exercise caution. “There are some pretty mellow uses,” he says, such as coming up with the name of a store. “If you’re using that tool to simply give you a first draft – take marketing materials as an example – and then you revise that first draft fairly heavily, that seems to be a relatively safe use.” He would not use it for everything, however. “If it’s any type of risky use – in other words, ‘I’m going to put this in my annual report,’ or ‘I’m going to use this as a report to senior management’ – that potentially low risk gets much higher,” he adds.
- Check everything that ChatGPT writes for you. “The key would be to really make sure that you’ve vetted what you’re writing really carefully,” Mahony says. “You know, your credibility is on the line. That lawyer case [Schwartz] is a really good example.”
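For readers who want to apply Ruchie’s “local newspaper” test in practice, one low-tech precaution is to scrub obvious identifiers from a draft before it ever reaches a public bot. The short Python sketch below is our own illustration, not a tool recommended by our interviewees; its patterns are deliberately simple, catch only the most obvious identifiers, and would need tailoring to your own company’s data.

```python
import re

# Illustrative patterns only -- they catch obvious identifiers (emails,
# US-style phone numbers, SSN-like digit groups) and will miss plenty.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before a prompt
    leaves your machine. A sanity check, not a guarantee of safety."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Contact Jane at jane.doe@acme-corp.com or 555-867-5309 about the Q3 numbers."
    print(scrub(draft))
    # -> "Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED] about the Q3 numbers."
```

Notice that even with a scrubber like this in place, names, project code names and sensitive context (like “the Q3 numbers” above) still slip through – which is exactly why the experts quoted here treat policy and training, not tooling, as the first line of defense.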