Next-generation artificial intelligence chatbots explained

Cosmos

By Reece Hooker

The explosion of next-generation chatbots threatens to transform every facet of our lives. And the tech is accelerating. As soon as OpenAI’s ChatGPT entrenched itself in the lexicon, the updated GPT-4 model was rolled out. As quickly as experts in fields like education began to understand the capabilities and limitations of large language models on the market, the latest iteration prompted a new round of questions without clear answers.

Chatbots are here to stay. Microsoft co-founder Bill Gates said improvements in artificial intelligence will “change our world”. Microsoft has invested US$10 billion in OpenAI, whose ChatGPT boasted a record 100 million monthly users in January 2023. But it’s far from the only game in town, as megalithic corporations and plucky startups jostle for funding and talent in the crowded artificial intelligence gold rush.

Who’s making the next generation of chatbots?

It seems like pretty much everybody. OpenAI is the name on everyone’s lips, with its ChatGPT dwarfing the popularity of its peers. And with its ambitious newest model, GPT-4, just hitting the market, it may be putting distance between itself and the rest of the field.

But it’ll need to stay on top of its game to keep pole position. Google has released Bard, which had been incubating for a while. The tech giant had kept its AI chatbots hidden from the public over concerns about inaccuracies, but changed its tune as rivals began beating it to market.

Tech start-up Anthropic, founded by ex-OpenAI employees, launched its chatbot Claude in mid-March.

Amazon is also entering the fray, announcing in February 2023 a partnership with Hugging Face to produce the next iteration of the AI startup’s BLOOM language model.

Facebook’s parent company Meta announced its model, LLaMA, which was leaked online two weeks later. Chinese giant Baidu launched its chatbot, Ernie, in March to mixed reviews.

Elon Musk — a co-founder of OpenAI who left its board in 2018 — has reportedly started hiring AI experts for a research laboratory. The Twitter and Tesla CEO is said to be interested in competing with OpenAI after praising the “scary good” ChatGPT.

The gold rush for AI chatbots isn’t confined to the big companies. Character.AI, a startup with no revenue, raised US$150 million in a round led by influential venture capital firm Andreessen Horowitz that valued the company at US$1 billion.

A name to watch as the space grows is Apple, which hasn’t gone public with any plans, but is perfectly resourced to make a splash in the generative AI market.


Can we trust chatbots to be accurate?

The short answer is no. Developers aren’t shying away from this — OpenAI warns users that GPT-4 has “many known limitations”, citing “social biases, hallucinations, and adversarial prompts”. The technology is pushing boundaries, but no one’s pretending it’s a finished product.

The transparency of developers doesn’t quell worries that widespread adoption of next-generation chatbots could fuel the spread of misinformation. US news ratings agency NewsGuard ran an exercise and found GPT-4 spread false stories at a higher rate than its predecessor, even though OpenAI claimed the product was 40 percent more likely to produce factual answers than GPT-3.5 in internal testing.

While some are concerned about the consequences of people placing their trust in undercooked language models, other experts see the issue differently: the real challenge will be convincing people to trust artificial intelligence.

Boston University behavioural psychologist Chiara Longoni sees “potential to do a lot of good with AI” — if we can generate enough trust in its limited usage.

Chiara Longoni

“When a reporter makes an error, a reader isn’t likely to think all reporters are unreliable. After all, everyone makes mistakes,” she said.

“But when AI makes a mistake, we are more likely to mistrust the entire concept. Humans can be fallible and forgiven for being so. Not so machines.”

What are governments doing to regulate chatbots?

Prof. Toby Walsh

Not a lot, yet. The speed of innovation is far outrunning any government’s ability to legislate, and that’s not an indictment of any administration. Writing laws is a tough job that requires close scrutiny and a robust series of checks and balances.

But there’s certainly more that could be done — as Toby Walsh, Chief Scientist at UNSW.AI, points out, there are models regulators can look to.

“In high-risk areas like aviation or pharmacology, there are government bodies with significant powers to oversee new technologies,” he said.

“We can also look to Europe, whose forthcoming AI Act has a significant risk-based focus. Whatever shape this regulation takes, it is needed to assure we secure the benefits of AI while avoiding the risks.”

Proponents of more agile regulation have pointed to a trend towards less openness from the big players. Walsh said OpenAI’s latest technical report for GPT-4 was “more white paper” and contained “no technical details” about the newest chatbot.

As commercial pressures mount on companies to retain every edge in the lucrative, crowded chatbot marketplace, it becomes harder to rely on goodwill and a spirit of cooperation to keep information and data flowing from the companies developing and releasing the technology. In its place, a robust set of regulations might offer a more cohesive way to innovate while still protecting users.

Originally published under Creative Commons by 360info™.
