OpenAI, the research lab co-founded by Elon Musk, has created an artificial intelligence system capable of generating convincing fake news. According to a new OpenAI blog post, the model, GPT-2, was trained to predict the next word in 40GB of Internet text. The post adds that, owing to concerns about malicious applications, the company is not releasing the full model publicly and will instead release a much smaller model for researchers.
The GPT-2 model was trained on 8 million web pages until it could look at a given passage of text and predict the words likely to come next. According to the OpenAI blog, GPT-2 behaves like a chameleon: it adapts to the content and style of the text it is given, which lets it produce realistic and coherent continuations on a topic. During the research, the GPT-2 system was given a human-written text prompt as input and, strikingly, continued it with its own AI-generated story; the published sample took 10 tries to produce.
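The core idea described above, repeatedly predicting the next word and feeding it back in, can be illustrated with a toy sketch. This is not GPT-2 (which uses a large Transformer network); it is a minimal bigram model with hypothetical helper names, shown only to make the autoregressive generation loop concrete:

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count which word follows which in the training text.

    GPT-2 learns far richer statistics with a Transformer; this
    toy model just records word-pair frequencies.
    """
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(model, prompt, length=10, seed=0):
    """Autoregressively extend the prompt one predicted word at a time."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no continuation seen in training data
        words, counts = zip(*candidates.items())
        # Sample the next word in proportion to how often it followed
        # the previous word, then append it and repeat.
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)
```

For example, a model trained on a short sentence will continue the prompt "the cat" with words it has seen follow those words. GPT-2 performs the same loop, but its next-word distribution comes from a neural network conditioned on the entire preceding text rather than just the last word.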
We can only imagine the capabilities and possibilities of the GPT-2 system; it could, say, freely scribble out a presidential-campaign story. OpenAI stated it will publicly release only a very small piece of the GPT-2 sampling code: it is not releasing the training code, the dataset, or the model weights. The blog also acknowledged that some researchers have the technical capacity to reproduce and open-source the results. The release strategy initially targets the set of organizations that might elect to do this, buying time for a discussion with the AI community about the implications. The post added that governments should consider initiatives to systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression of such systems' capabilities.