“We were thinking about AI backwards.” This is how the co-creator of ChatGPT sees the future

AI can be a wonderful tool of the future, or it can be as dangerous as a nuclear weapon. ChatGPT co-creator Sam Altman carefully weighs the enormous gains and significant disruptions of “mankind’s most powerful technology” in a recent talk.

The head of OpenAI and co-creator of ChatGPT, Sam Altman, together with two of the company’s Polish specialists – Wojciech Zaremba and Szymon Sidor – were honorary guests of the University of Warsaw and the NCBR IDEAS center on Tuesday. In a wide-ranging discussion, the experts shared their vision of the future of AI and the role ChatGPT could play in it.

What does the future hold for AI like ChatGPT?

Journalist Sylwia Czubkowska asked the specialists a number of questions about the many possibilities and threats arising from artificial intelligence. One of the first concerned predictions about how AI will develop.

– People once thought that the AI revolution would start with robots. This is what sci-fi creators like Isaac Asimov predicted. It turned out that real progress in artificial intelligence looked quite different and started in the digital sphere. This is where it is easiest to reach a large number of users and use huge amounts of data, said Wojciech Zaremba, co-founder of OpenAI.

The expert listed many areas that can benefit significantly from increasingly advanced AI tools. These include, among others, education, health care, law and even interpersonal communication. – The possibilities of translating text to speech and speech to text will soon become fantastic, he said.

– We thought about AI backwards. We thought that AI would first do physical work. Over time, it would move on to mental work, and maybe then take over difficult tasks such as programming. It might never be able to perform creative tasks. Exactly the opposite happened, emphasized Sam Altman, CEO of OpenAI.

Altman also believes that today it is very difficult to predict which direction the development of artificial intelligence will take. – We believe that when we give people better tools, their creativity will allow them to create things that we cannot even imagine today. OpenAI creates tools, and it’s up to you how you use them, he noted.

AI threats – it is the systems of 10 years from now we should fear

Lawmakers, ordinary users, and even Elon Musk often point to growing concerns about the rapid development of artificial intelligence. So how can we take advantage of the benefits of AI without causing a disaster?

– We founded OpenAI precisely because of concerns about the development of AI. As with any powerful technology, AI has enormous potential, but we must manage the risks that come with it. This was the case with, among others, nuclear power and synthetic biology. AI will be one of humanity’s most powerful technologies, and we need to think globally about how to protect ourselves, Altman said.

Sylwia Czubkowska asked the company’s head whether he still believes that an organization should be created to respond to AI incidents, similar to the body that currently responds to nuclear accidents.

– We should treat AI the same (as nuclear energy). Maybe artificial intelligence won’t be such a big threat, maybe it won’t be misused often. However, until we know more, we should treat AI in the same way, he emphasized.

As Wojciech Zaremba added, this is not about AI systems available here and now. – Our concerns are about AI in the future. It’s about the trajectory and further development of artificial intelligence, which in a decade may become as powerful as today’s global corporations, he emphasized.

New AI regulations – do the systems violate copyright law?

There is a lot of talk nowadays about the lack of restrictions on AI, which causes considerable anxiety in society. The European Union is currently working on the AI Act, although this is only the beginning of properly regulating artificial intelligence. What does Altman say?

– With the public launch of ChatGPT, it was crucial for us to release an AI tool that is not yet very powerful. This lets society become familiar with such tools and learn about the advantages and disadvantages of artificial intelligence, and it also gives lawmakers a chance to assess AI, he explained.

The head of OpenAI pointed out that the AI Act currently being developed looked completely different before ChatGPT entered the market. As he argues, at that time the world did not yet fully understand what generative AI systems were. – We did a lot of testing on ChatGPT even before we showed the chatbot to the world. However, the development of this tool must take place in small steps, in dialogue with society.

At the same time, an increasing number of artists and writers are protesting against artificial intelligence that takes over their works for free. How do the creators of ChatGPT intend to approach copyright law?

– For moral reasons, we want to give users the opportunity to exclude their creations from our AI models. If you do not want your work used, you can opt out. On the other hand, if your content helps improve our large language model (LLM), then you should make some kind of profit from it, Altman emphasized.

OpenAI’s CEO confirmed that the company is in talks with many artists, journalists and other creators. The goal is to find a compromise that could be implemented globally and across many industries.
