OpenAI Releases Commercial API for Multipurpose Artificial Intelligence


OpenAI is launching its first commercial product, the company announced Thursday: an API that gives companies access to its most advanced general-purpose AI models. Launched in private beta, the API is already being used by customers for a variety of applications, including semantic search, sentiment analysis, and content moderation.

While most AI models are designed for specific use cases, the OpenAI API provides a general-purpose “text in, text out” interface that can be applied to a wide range of English-language tasks.

The API runs models from GPT-3, OpenAI's family of massive neural networks. The recently released GPT-3 has 175 billion parameters, allowing it to “meta-learn”: the network does not need to be retrained to perform a new task such as sentence completion, because it can pick the task up from examples supplied in the prompt itself.
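As a rough illustration of what “meta-learning” means in practice, here is a minimal sketch of a few-shot prompt. The task, example pairs, and formatting are illustrative assumptions, not details from the article; the point is that the task is specified entirely in text, with no retraining of the model's weights.

```python
# Illustrative only: a hypothetical few-shot prompt. The task
# (English-to-French translation here) is described entirely by example
# pairs inside the prompt; the model's weights are never retrained.
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

prompt = "Translate English to French:\n"
for english, french in examples:
    prompt += f"{english} => {french}\n"
prompt += "peppermint =>"  # left incomplete for the model to finish

print(prompt)
```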

Reddit explores content moderation with API

Given any text prompt, the new API attempts to return a text completion that matches the pattern it was given. Users can refine its performance on specific tasks by training it on a dataset of examples, small or large, or through human feedback provided by users or labellers.
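A minimal sketch of that “text in, text out” loop is shown below, assuming the openai Python client as it existed around the API's launch (v0.x) and an API key from the private beta. The engine name, prompt, and sampling parameters are assumptions made for the example, not details taken from the article.

```python
# Minimal sketch: send a text prompt, get back a text completion.
import openai

openai.api_key = "sk-..."  # placeholder; use your own beta API key

response = openai.Completion.create(
    engine="davinci",          # assumed engine name for the example
    prompt="Q: What is semantic search?\nA:",
    max_tokens=64,             # cap the length of the completion
    temperature=0.7,           # higher values give more varied completions
)

# The API returns candidate completions; print the text of the first one.
print(response["choices"][0]["text"].strip())
```

Refining the model on a handful of examples, as the article describes, amounts to adding demonstrations of the task to the prompt, along the lines of the few-shot sketch above.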

Among the first customers is the Quizlet online learning platform, which uses the API to automatically generate example sentences that use vocabulary words. Reddit is exploring content moderation with the API, while the legal research platform Casetext aims to improve its semantic search capabilities with it. The MessageBird cloud communications platform uses the API to build automated spelling and grammar tools, as well as predictive text features.

OpenAI was founded in 2015 by Sam Altman, former president of Y Combinator, and Elon Musk, CEO of Tesla. The research and deployment company focuses on artificial general intelligence, which it defines as “highly autonomous systems that outperform humans at most economically valuable work”.

Preventing harmful use cases

The company is launching its API in private beta partly because of the risks associated with launching a versatile AI tool.

“We will terminate access to the API for obviously harmful use cases, such as harassment, spam, radicalization, or astroturfing,” OpenAI wrote on its blog. “But we also know that we cannot anticipate all of the possible consequences of this technology.”

In addition to limiting its availability, OpenAI said that it was developing tools to help users better control the content returned by the API, and that it was researching safety-related aspects of language technology, such as analyzing, mitigating, and intervening on harmful bias.

Source: ZDNet.com

Source: www.zdnet.fr
