
Microsoft is giving businesses access to OpenAI’s powerful AI language model GPT-3


A promising and problematic AI tool



Illustration by Alex Castro / The Verge

It’s the AI system once deemed too dangerous to release to the public by its creators. Now, Microsoft is making an upgraded version of the program, OpenAI’s autocomplete software GPT-3, available to business customers as part of its suite of Azure cloud tools.

GPT-3 is the best known example of a new generation of AI language models. These systems primarily work as autocomplete tools: feed them a snippet of text, whether an email or a poem, and the AI will do its best to continue what’s been written. Their ability to parse language, however, also allows them to take on other tasks like summarizing documents, analyzing the sentiment of text, and generating ideas for projects and stories — jobs with which Microsoft says its new Azure OpenAI Service will help customers.
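To make the "autocomplete" framing concrete, here is a minimal sketch of how a developer might prompt a GPT-3-style model through OpenAI's Completion API as it existed around this time. The prompt text, engine name, and parameter values are illustrative assumptions, not settings taken from Microsoft's or OpenAI's documentation.

```python
# Minimal sketch of prompting a GPT-3-style completion model.
# Assumes the openai Python package (circa 2021) and a valid API key;
# the prompt, engine name, and parameters below are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "Summarize the following customer email in one sentence:\n\n"
    "Hi team, the quarterly report is delayed because the data export "
    "failed twice this week. We expect to deliver it by Friday.\n\n"
    "Summary:"
)

# The model simply continues the text it is given; the same mechanism
# underlies summarization, sentiment analysis, and idea generation.
response = openai.Completion.create(
    engine="davinci",   # a GPT-3 model family name in use at the time
    prompt=prompt,
    max_tokens=60,
    temperature=0.3,
)

print(response.choices[0].text.strip())
```

The same call pattern covers the other tasks mentioned above: changing the prompt to ask for a sentiment label or a list of blog post ideas changes the task, not the mechanism.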

Here’s an example scenario from Microsoft:

“A sports franchise could build an app for fans that offers reasoning of commentary and a summary of game highlights, lowlights and analysis in real time. Their marketing team could then use GPT-3’s capability to produce original content and help them brainstorm ideas for social media or blog posts and engage with fans more quickly.”

GPT-3 is already being used for this sort of work via an API sold by OpenAI. Startups like Copy.ai promise that their GPT-derived tools will help users spruce up work emails and pitch decks, while more exotic applications include using GPT-3 to power a choose-your-own-adventure text game and chatbots pretending to be fictional TikTok influencers.

While OpenAI will continue selling its own API for GPT-3 to provide customers with the latest upgrades, Microsoft’s repackaging of the system will be aimed at larger businesses that want more support and safety. That means the service will offer tools like “access management, private networking, data handling protections [and] scaling capacity.”

Microsoft already uses GPT-3 in its products

It’s not clear how much this might cannibalize OpenAI’s business, but the two companies already have a tight partnership. In 2019, Microsoft invested $1 billion in OpenAI and became its sole cloud provider (a vital relationship in the compute-intensive world of AI research). Then, in September 2020, Microsoft bought an exclusive license to directly integrate GPT-3 into its own products. So far, these efforts have focused on GPT-3’s code-generating capacities, with Microsoft using the system to build autocomplete features into its suite of PowerApps applications and its Visual Studio Code editor.

These limited applications make sense given the huge problems associated with large AI language models like GPT-3. First: a lot of what these systems generate is rubbish and requires human curation and oversight to sort the good from the bad. Second: these models have been shown time and time again to incorporate biases found in their training data, from sexism to Islamophobia. They are more likely to associate Muslims with violence, for example, and hew to outdated gender stereotypes. In other words: if you start playing around with these models in an unfiltered format, they’ll soon say something nasty.

Microsoft knows only too well what can happen when such systems are let loose on the general public (remember Tay, the racist chatbot?). So, it’s trying to avoid these problems with GPT-3 by introducing various safeguards. These include granting access to the tool by invitation only; vetting customers’ use cases; and providing “filtering and monitoring tools to help prevent inappropriate outputs or unintended uses of the service.”

However, it’s not clear if these restrictions will be enough. For example, when asked by The Verge how exactly the company’s filtering tools work, or whether there was any proof that they could reduce inappropriate outputs from GPT-3, the company dodged the question.


Emily Bender, a professor of computational linguistics at the University of Washington who’s written extensively on large language models, says Microsoft’s reassurances are lacking in substance. “As noted in [Microsoft’s] press release, GPT-3’s training data potentially includes ‘everything from vulgar language to racial stereotypes to personally identifying information,’” Bender told The Verge over email. “I would not want to be the person or company accountable for what it might say based on that training data.”

Bender notes that Microsoft’s introduction of GPT-3 fails to meet the company’s own AI ethics guidelines, which include a principle of transparency — meaning AI systems should be accountable and understandable. Despite this, says Bender, the exact composition of GPT-3’s training data is a mystery and Microsoft is claiming that the system “understands” language — a framing that is strongly disputed by many experts. “It is concerning to me that Microsoft is leaning in to this kind of AI hype in order to sell this product,” said Bender.

Although Microsoft’s GPT-3 filters may be unproven, the company can avoid a lot of trouble by simply selecting its customers carefully. Large language models are certainly useful as long as their output is checked by humans (though this requirement does negate some of the promised gains in efficiency). As Bender notes, if Azure OpenAI Service is just helping to write “communication aimed at business executives,” it’s not too problematic.

“I would honestly be more concerned about language generated for a video game character,” she says, as this implementation would likely run without human oversight. “I would strongly recommend that anyone using this service avoid ever using it in public-facing ways without extensive testing ahead of time and humans in the loop.”