Sunday February 17, 2019

James Murdoch Isn’t Taking Over Tesla: Elon Musk

Musk is still the CEO of Tesla.

Elon Musk, CEO of SpaceX. (Wikimedia Commons)

Tesla CEO Elon Musk has denied that James Murdoch, the younger son of media mogul Rupert Murdoch, is going to replace him at electric car maker Tesla.

Reacting to a Financial Times report that claimed James Murdoch was set to become Tesla’s Chairman, Musk tweeted on Thursday that this was not true.

Elon Musk agrees to step down as Tesla Chairman. (VOA)

“This is incorrect,” Musk tweeted.

James Murdoch, who serves as a director on Tesla’s board, is set to step down as CEO of 21st Century Fox.

Media reports have also floated names such as former Vice President Al Gore and Boeing’s Jim McNerney as potential candidates to chair Tesla’s board.

Tesla has become the most valuable American carmaker, with its stock worth more than $50 billion. (Pixabay)

Under pressure from his lawyers and Tesla’s investors, the tech billionaire agreed on September 29 to step down as Tesla Chairman for three years and to pay a $20 million fine, as part of a deal with the US stock market regulator, the Securities and Exchange Commission (SEC), to resolve securities fraud charges.


Tesla itself paid a separate $20 million to the SEC, despite not being charged with fraud.

Musk is still the CEO of Tesla. (IANS)


Musk-founded AI Group Not to Release Software on ‘Fake News’ Fears

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies

Tesla CEO Elon Musk. (VOA)

Elon Musk-founded non-profit Artificial Intelligence (AI) research group OpenAI has decided not to reveal its new AI software in detail, fearing that the model could be misused by bad actors to create realistic-looking fake news.

Dubbed “GPT2”, the AI-based automated text generator can produce fake news articles and abusive posts after being fed only a small amount of text.

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text and performs rudimentary reading comprehension, machine translation, question answering and summarization – all without task-specific training,” OpenAI said in a blog post late on Thursday.

Trained on a data set of eight million web pages, “GPT2” can adapt to the style and the content of the text you feed it.

OpenAI said the AI model is so good and the risk of malicious use is so high that it is not releasing the full research to the public.

However, the non-profit has created a smaller model that lets researchers experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.
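The article does not describe how researchers actually access that smaller model. Purely as an illustrative sketch, the snippet below samples a continuation from the publicly available small GPT-2 checkpoint using the Hugging Face transformers package; the library choice, the “gpt2” model name and the sampling settings are assumptions for illustration, not details from OpenAI’s announcement.

```python
# Illustrative sketch only: the article does not specify tooling.
# Assumes the Hugging Face `transformers` package and its small "gpt2"
# checkpoint, which corresponds to the smaller model OpenAI released.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Feed the model a short prompt and let it continue in the same style.
prompt = "Tesla CEO Elon Musk said on Thursday that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output varied but coherent.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Swapping in a different prompt illustrates the behaviour described above: the generated continuation tends to pick up the style and subject matter of whatever text it is fed.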

Elon Musk, CEO of SpaceX. Wikimedia Commons

“We can imagine the application of these models for malicious purposes, including the following: Generate misleading news articles, impersonate others online, automate the production of abusive or faked content to post on social media and automate the production of spam/phishing content,” said OpenAI.

Today, malicious actors – some of whom are political in nature – have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”, the group said.

OpenAI further said that the research community should consider how work on generating synthetic images, videos, audio and text may combine to unlock new, as-yet-unanticipated capabilities for these bad actors.


Musk, who is a staunch critic of AI and co-founded OpenAI in 2016, stepped down from its board in 2018.

OpenAI said governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies. (IANS)