Tesla CEO Elon Musk has denied that James Murdoch, the younger son of media mogul Rupert Murdoch, is going to replace him at electric car maker Tesla.
Reacting to a Financial Times report that claimed James Murdoch was going to join Tesla as Chairman, Musk tweeted on Thursday that this was not true.
“This is incorrect,” tweeted Musk, reacting to the report.
James Murdoch, who serves as a director on Tesla’s board, is set to step down as CEO of 21st Century Fox.
Media reports have also floated names such as former Vice President Al Gore and Boeing’s Jim McNerney as potential candidates to lead Tesla.
Under pressure from his lawyers and from Tesla investors, Musk agreed on September 29 to step down as Tesla Chairman for three years and pay a $20 million fine, as part of a settlement with the US stock market regulator, the Securities and Exchange Commission (SEC), to resolve securities fraud charges.
OpenAI, the non-profit Artificial Intelligence (AI) research group co-founded by Elon Musk, has decided not to reveal its new AI software in detail, fearing the model could be misused by bad actors to create realistic-looking fake news.
Dubbed “GPT2”, the AI-based automated text generator can produce fake news articles and abusive posts after being fed only a short text prompt.
“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text and performs rudimentary reading comprehension, machine translation, question answering and summarization, all without task-specific training,” OpenAI said in a blog post late on Thursday.
Trained on a data set of eight million web pages, “GPT2” can adapt to the style and content of the text it is fed.
OpenAI said the AI model is so good and the risk of malicious use is so high that it is not releasing the full research to the public.
However, the non-profit has created a smaller model that lets researchers experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.
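OpenAI’s own code and model weights are not shown here. As a purely illustrative sketch of the underlying idea, an autoregressive language model predicts each next word from the words before it, which is why generated text picks up the style and content of whatever it was trained on. The toy word-level Markov chain below (a hypothetical `MarkovTextGenerator` class, not OpenAI’s model, which is a far larger Transformer network) demonstrates that principle at miniature scale:

```python
import random
from collections import defaultdict

class MarkovTextGenerator:
    """Toy word-level bigram model: picks the next word based on the current one.

    Illustration only -- GPT2 is a large Transformer trained on eight million
    web pages, not a Markov chain, but both generate text one token at a time.
    """

    def __init__(self):
        # Maps each word to the list of words that followed it in training text.
        self.transitions = defaultdict(list)

    def train(self, text):
        words = text.split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def generate(self, seed, length=10, rng=None):
        rng = rng or random.Random(0)  # fixed seed for reproducible output
        out = [seed]
        for _ in range(length):
            choices = self.transitions.get(out[-1])
            if not choices:
                break  # dead end: no word ever followed this one in training
            out.append(rng.choice(choices))
        return " ".join(out)

model = MarkovTextGenerator()
model.train("the cat sat on the mat and the cat ran")
print(model.generate("the", length=5))
```

Because every generated word was observed following its predecessor in the training text, the output inevitably mimics the source material, the same property that, at GPT2’s scale, makes machine-written fake news plausible.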
“We can imagine the application of these models for malicious purposes, including the following: Generate misleading news articles, impersonate others online, automate the production of abusive or faked content to post on social media and automate the production of spam/phishing content,” said OpenAI.
OpenAI noted that malicious actors, some of them political in nature, have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.
OpenAI added that researchers should consider how work on generating synthetic images, videos, audio and text may combine to unlock new, as-yet-unanticipated capabilities for these bad actors.