Is the world prepared for the coming AI storm?

by moeedrajpoot

Artificial intelligence has the power to change our lives in both positive and negative ways. Experts are skeptical that those in power are prepared for what is coming.

In 2019, a non-profit research organization called OpenAI developed a software program that could generate paragraphs of coherent text as well as perform rudimentary reading comprehension and analysis without explicit instruction.

OpenAI initially decided not to make its creation, GPT-2, fully available to the public for fear that malicious individuals could use it to generate massive amounts of disinformation and propaganda. The group described the program as “too dangerous” in a press release announcing its decision.

Three years later, artificial intelligence capabilities have grown exponentially.

In contrast to GPT-2's limited release, ChatGPT, a chatbot interface built on OpenAI's GPT-3 family of models, was made widely available in November. It was the service that spawned a thousand news articles and social media posts as reporters and experts tested its capabilities, often with startling results.

ChatGPT scripted stand-up routines about the Silicon Valley Bank failure in the style of the late comedian George Carlin. It expressed views on Christian theology. It composed poetry. It pretended to be rapper Snoop Dogg while explaining quantum physics to a child. Other AI models, such as DALL-E, generated visuals so compelling that their inclusion on art websites sparked debate.

To the casual observer, at least, machines have achieved creativity.

OpenAI debuted the latest iteration of its program, GPT-4, on Tuesday, claiming that it has strict limits on abusive uses. Microsoft, Morgan Stanley, and the Icelandic government were among the first clients. And at this week's South by Southwest Interactive conference in Austin, Texas, a global gathering of tech policymakers, investors, and executives, the potential and power of artificial intelligence programs were the hottest topic of conversation.

Arati Prabhakar, director of the White House Office of Science and Technology Policy, expressed excitement about AI’s potential, but she also issued a warning.

“We are all witnessing the rise of this extremely powerful technology. This is a turning point,” she told the audience at a conference panel. “All of history demonstrates that such powerful new technologies can and will be used for good or ill.”

Austin Carson, her co-panelist, was a little blunter.

“If you aren’t completely freaked the (expletive) out in six months, I will buy you dinner,” said the founder of SeedAI, an artificial intelligence policy advisory group.


“Freaked out” is one way to describe it. Amy Webb, the director of the Future Today Institute and a business professor at New York University, attempted to quantify the potential outcomes in her SXSW presentation. She believes artificial intelligence will take one of two paths over the next ten years.

In an ideal world, AI development is centered on the common good, with transparency in AI system design and the ability for individuals to choose whether their publicly available internet information is included in the AI’s knowledge base. As AI features on consumer products can anticipate user needs and help accomplish virtually any task, the technology serves as a tool that makes life easier and more seamless.

Ms. Webb’s nightmare scenario involves less data privacy, more power concentration in a few companies, and AI that predicts user needs – and then gets them wrong or stifles choices.

She assigns only a 20% chance to the optimistic scenario.

Ms. Webb said that the direction of the technology ultimately depends on the responsibility with which companies develop it. Do they do so transparently, revealing and policing the sources from which the chatbots, known to scientists as large language models, draw their information?

The other factor, she says, is whether the government – federal regulators and Congress – can establish legal guardrails quickly enough to guide technological developments and prevent their misuse.

In this regard, the government’s experience with social media companies such as Facebook, Twitter, and Google is instructive. And the results have not been encouraging.

“What I heard in a lot of conversations was apprehension that there aren’t any guardrails,” Melanie Subin, managing director of the Future Today Institute, says of her time at South by Southwest. “There is a strong sense that something must be done. And I believe that when people see how quickly generative AI is developing, they are thinking of social media as a cautionary tale.”

Read about Google Bard AI: https://factslover.com/what-is-google-bard-ai-heres-everything-you-need-to-know/

