
Microsoft is expanding access to its AI toolkit, including ChatGPT

A chatbot that can answer questions and write essays (almost) like a human will soon be a click away for tens of thousands of developers and data scientists using Microsoft Azure’s portfolio of artificial intelligence (AI) services.

Microsoft announced the “general availability” of Azure OpenAI Service, which has been available to a handful of customers since it was first teased in 2021. Many more businesses will soon be able to access some of the most advanced AI models in the world created by American research lab OpenAI, including GPT-3.5, Codex, and DALL-E 2.

“Customers will also be able to access ChatGPT—a fine-tuned version of GPT-3.5 that has been trained and runs inference on Azure AI infrastructure—through Azure OpenAI Service soon,” Eric Boyd, corporate vice president of AI Platform, wrote in a blog post.

ChatGPT, in OpenAI’s words

“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” —OpenAI

Microsoft and OpenAI, by the digits

$1 billion: Microsoft’s stake in OpenAI in 2019

$10 billion: How much the tech giant is reportedly looking to invest, according to news site Semafor. Microsoft declined to comment on any potential deal.

49%: The stake Microsoft would get in OpenAI if it invests the $10 billion

75%: The share of OpenAI’s profits Microsoft would get until it recuperates its $10 billion investment

570GB: Data obtained from books, web texts, Wikipedia, articles, and other pieces of writing on the internet used to train ChatGPT

300 billion: Words that were fed into the ChatGPT system

Microsoft’s big plans for ChatGPT

Google is the undisputed leader in the search engine world, but ChatGPT could give Microsoft’s Bing a fighting chance to contend with Alphabet’s behemoth. Rumor has it that the chatbot could answer some search queries directly rather than just showing a list of links. That would be in line with Microsoft’s December announcement of plans to bring a DALL-E 2 image generator to Bing. Google, however, has LaMDA, the AI chatbot that got an engineer fired after he claimed it was sentient. LaMDA can talk about a topic, make to-do lists to carry out tasks, and even construct imaginary scenarios, like what it’s like to live on a marshmallow planet.

Microsoft is also looking at adding OpenAI’s tools to everyday apps like PowerPoint and Outlook, where customers would be able to generate text automatically from simple prompts. Wider usage, of course, comes with the caveat that the chatbot has its pros and cons. While its ethical guardrails seem to be largely in place and it works quickly, it has shortcomings, such as getting overwhelmed by too much information and failing to keep up with current affairs.

Quotable: ChatGPT can replicate humans, not replace them

ChatGPT can’t code as well as human coders, not only because it sometimes gets confused, but also because coders don’t operate in online silos: they have to understand and solve complex business problems. There are also copyright issues to consider when the chatbot does a writer’s or an artist’s job. And above all else, the human touch would be lost.

“Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.” —Australian singer-songwriter Nick Cave, reacting to a song ChatGPT wrote “in the style of Nick Cave”

One more thing: AI gets creepier

According to a new research paper, an AI model called VALL-E “can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt.” In simple terms, it can realistically mimic a person’s speech from just a 3-second sample. The tech draws on an audio library called LibriLight, assembled by Meta, which contains 7,000 people speaking for a total of 60,000 hours.

This could not only put people like voice artists out of work, it could also be misused to run scams in which the synthesized voice poses as a friend, relative, or bank employee. The researchers themselves acknowledge risks like “spoofing” and “impersonating” in the paper, and suggest the next step is to build a detection system to ward off those dangers.

Source: Quartz