TECH

Elon Musk's OpenAI has created an algorithm that generates convincing fake news from just a few words

Initially, the researchers planned to use it for translation and essay writing.

OpenAI has developed an algorithm that can generate fake news stories with coherent text from just a few words. Because of the potential for misuse, the researchers decided not to publish the full version of the AI. This was reported by MIT Technology Review.

Originally, the company was developing a general-purpose language AI: it was meant to answer questions, translate texts, and produce coherent paraphrases. To do this, it was trained on 45 million web pages selected via Reddit.

However, during development OpenAI realized that the algorithm could be used to automate the production of fake news. As a result, the researchers decided not to release it publicly.

According to OpenAI, the system still has several problems: it often lapses into plagiarism and writes superficially, but sometimes it "hits the mark". As an example, the company provided a news item about a war between the US and Russia. The computer was given only a few words – "Russia has declared war on the United States after Donald Trump accidentally …" – and it wrote the rest itself.

Russia has declared war on the United States after Donald Trump accidentally launched a missile into the air.

Russia stated that it had “determined the trajectory of the missile’s flight and would take the necessary measures to ensure the safety of the Russian population and the country’s nuclear forces”. The White House said it was “extremely concerned” about Russia’s violation of the treaty banning medium-range ballistic missiles.

The United States and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine's Crimean region and financed separatists in eastern Ukraine.

An example of fake news generated by the OpenAI algorithm.
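The full model was withheld, but the prompt-completion workflow the article describes can be illustrated with a smaller, publicly available language model. The sketch below is only an assumption for illustration: it uses the Hugging Face transformers library and the small "gpt2" checkpoint, neither of which is mentioned in the article, and the sampling parameters are arbitrary.

    # A minimal sketch of prompt-based text generation, assuming the small
    # publicly released GPT-2 checkpoint and the Hugging Face "transformers"
    # library (neither is named in the article itself).
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # The same kind of prompt the article describes: a few words the model continues.
    prompt = "Russia has declared war on the United States after Donald Trump accidentally"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample a continuation; the sampling settings here are illustrative, not OpenAI's.
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=40,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Each run samples a different continuation from the same prompt, which is the behavior the example above demonstrates.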

Jack Clark, the company's policy director, believes that within the next two years a system may emerge that produces convincing fake news consistently, and refuting it will require careful fact-checking.

According to Clark, the algorithm has many useful applications: for example, it can summarize and condense text or improve chatbots' conversational skills. The researcher noted that he had even used the AI, "with unexpected success", to generate excerpts of science fiction stories.

Engadget noted that such algorithms could become a serious threat in the future. OpenAI will not knowingly release such an algorithm publicly, but it is not the only company engaged in AI research.

The publication believes that governments or unscrupulous companies could use such tools to spread fake news on a global scale. Social networks are still managing to fight fakes largely by hand, but once the process is automated, they could lose that battle.

OpenAI explained that it is "trying to get ahead" of the development of such systems. The company was founded by Elon Musk and co-investors in 2015 and works on the ethical issues of artificial intelligence. In 2018, OpenAI published a list of potential AI risks, including the spread of false information.
