ChatGPT has already made a lasting impression on the workforce, yet many employers remain unclear on how to leverage it within their businesses. But a new, business-focused version of the controversial chatbot may be the answer they've been clamoring for.
Since the launch of ChatGPT nine months ago, 80% of Fortune 500 companies have adopted the program in some form, according to OpenAI — the artificial intelligence company that developed ChatGPT and its successor, GPT-4. Earlier this month, in the wake of companies' mixed reviews of those two offerings, OpenAI launched ChatGPT Enterprise, designed specifically for businesses.
“ChatGPT Enterprise is a more refined version that was released to address the growing concerns over data security and privacy,” says Ahmed Reza, the founder of Yobi, a communication app for business. “It’s meant to encourage enterprises to use ChatGPT with confidence in sensitive environments.”
Read more: What ChatGPT means for the future of work
ChatGPT Enterprise offers higher-grade security and privacy, advanced data analysis capabilities, longer processing windows for more in-depth or complex tasks, and an admin console that lets employers manage the team members using the platform. In its announcement, OpenAI also made a point of quelling companies' concerns, stating that Enterprise will not access business data or tap into conversations, which were top concerns for employers when it came to using chatbots in the office.
A 2023 report from browser security company LayerX revealed that among the 15% of workers who use ChatGPT and other generative AI tools at work, nearly 25% of their visits involved copying and pasting sensitive information into the bot. This led many companies, including Amazon and Bank of America, to ban ChatGPT at work.
But even with OpenAI's latest reassurance, Satish Kumar, CEO and co-founder of Glider AI, urges employers to stay wary.
“Any [AI] model is only as good as the foundational data,” he says. “If that data is corrupt in some way, so is the response. Therefore, despite the growing sophistication of generative AI, it’s important to have governance, which includes oversight by a diverse committee, transparency and regular audits to mitigate risks and communicate what the issue was with the respective resolution.”
It's not just employers who are wary of the rapid growth of chatbots like ChatGPT — tech titans such as Apple co-founder Steve Wozniak and Elon Musk signed an open letter in March that called for AI developers to halt their continued efforts toward creating ever more powerful chatbots. And while the rollout of Enterprise comes after that petition, there is still plenty of apprehension to take into consideration.
“The concerns regarding the potential risks of AI, the need for safety precautions and proactive regulation simply highlighted the importance of responsible AI development,” says Reza. “I think the rollout of ChatGPT Enterprise doesn’t necessarily mean that companies are more comfortable with the use of AI. Instead, it demonstrates the industry’s commitment to developing technologies that are safer, more reliable and more suitable for business purposes.”
Even if the new iteration of ChatGPT promises to be more conscientious about data and privacy, Kumar urges business leaders to maintain whatever security measures they already had in place, and to expand on them by adding internal audits and oversight committees as a precaution. He also recommends employers take it a step further and begin considering how the growing proliferation of AI will affect their headcount, and how they can prepare to respond.
“We’re at an inflection point that impacts all of us,” he says. “While there is excitement, we must be willing to take some risks with transparent and reasonable governance.”