New York, April 29, 2017: Google’s India-born CEO Sundar Pichai received a stock award of $198.7 million in 2016, almost double his 2015 stock award of $99.8 million, a media report said.
That brought his total compensation in 2016 to $199.7 million, almost twice the $100.6 million he earned in 2015.
Pichai received a salary of $650,000 in 2016, slightly less than the $652,500 he earned in 2015, CNBC reported on Friday.
Pichai’s massive pay package came even as his two bosses, Google co-founders Larry Page and Sergey Brin, once again drew salaries of only one dollar for their roles as CEO and President, respectively, of parent company Alphabet.
But Page and Brin are each worth more than $40 billion through their stock holdings.
According to the report, Pichai’s raise came during a year when Google’s sales rose 22.5 per cent and net income rose 19 per cent as it maintained its position as the top seller of internet advertising. (IANS)
Google is developing text-to-speech AI as part of its “AI first” strategy.
The system will also be able to mimic human voices.
Not much has been revealed, but it is safe to say this could be a big success for Google.
In a major step towards its “AI first” dream, Google has developed a text-to-speech artificial intelligence (AI) system whose human-like articulation is hard to tell apart from real speech.
The tech giant’s text-to-speech system, called “Tacotron 2,” delivers AI-generated computer speech that almost matches the human voice, technology news website Inc.com reported.
At the Google I/O 2017 developers conference, the company’s Indian-origin CEO Sundar Pichai announced that the internet giant was shifting its focus from mobile-first to “AI first” and launched several products and features, including Google Lens, Smart Reply for Gmail and Google Assistant for iPhone.
According to a paper published in arXiv.org, the system first creates a spectrogram of the text, a visual representation of how the speech should sound.
That image is then fed through Google’s existing WaveNet algorithm, which uses it to bring AI closer than ever to indiscernibly mimicking human speech. The algorithm can easily learn different voices and even generate artificial breaths.
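The two-stage pipeline the article describes (text to spectrogram, then spectrogram to waveform) can be sketched roughly as follows. This is an illustrative toy, not Google's code: the function names and shapes are assumptions, and the stubs only fabricate arrays of plausible dimensions to show how the stages connect.

```python
import numpy as np

def text_to_spectrogram(text, n_mels=80, frames_per_char=5):
    # Stage 1 (Tacotron-style): map text to a mel spectrogram,
    # a 2-D array of (mel bands x time frames). Here we simply
    # fabricate an array of the right shape as a stand-in.
    n_frames = frames_per_char * len(text)
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random((n_mels, n_frames))

def spectrogram_to_audio(spec, hop_length=200):
    # Stage 2 (WaveNet-style vocoder): render the spectrogram
    # as a waveform. A real vocoder conditions a neural net on
    # the spectrogram; this stub emits silence of matching length.
    n_samples = spec.shape[1] * hop_length
    return np.zeros(n_samples, dtype=np.float32)

spec = text_to_spectrogram("Hello world")
audio = spectrogram_to_audio(spec)
print(spec.shape, audio.shape)  # (80, 55) (11000,)
```

The point of the split is that the spectrogram acts as an intermediate "score" for the speech, so the vocoder only has to solve the narrower problem of turning that score into audio.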
“Our model achieves a mean opinion score (MOS) of 4.53, comparable to a MOS of 4.58 for professionally recorded speech,” the researchers were quoted as saying.
On the basis of its audio samples, Google claimed that “Tacotron 2” can detect from context the difference between the noun “desert” and the verb “desert,” as well as the noun “present” and the verb “present,” and alter its pronunciation accordingly.
It can place emphasis on capitalised words and apply the proper inflection when asking a question rather than making a statement, the company said in the paper.
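The heteronym handling described above can be illustrated with a toy lookup keyed on part of speech. The approximate pronunciations and the table itself are illustrative assumptions for this sketch, not taken from the paper (the real system infers this from context rather than being told the part of speech).

```python
# Toy table: (word, part of speech) -> approximate pronunciation.
HETERONYMS = {
    ("desert", "noun"): "DEH-zert",    # the arid place
    ("desert", "verb"): "dih-ZERT",    # to abandon
    ("present", "noun"): "PREH-zent",  # the gift
    ("present", "verb"): "prih-ZENT",  # to show
}

def pronounce(word, part_of_speech):
    # Fall back to the plain word when no entry exists.
    return HETERONYMS.get((word.lower(), part_of_speech), word)

print(pronounce("desert", "noun"))  # DEH-zert
print(pronounce("desert", "verb"))  # dih-ZERT
```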
Meanwhile, Google’s engineers did not reveal much information, but they left a big clue for developers to figure out how far they have come in developing this system.
According to the report, each of the ‘.wav’ file samples has a filename containing either the term “gen” or “gt.”
Based on the paper, it’s highly probable that “gen” indicates speech generated by Tacotron 2 and “gt” is real human speech. (“GT” likely stands for “ground truth,” a machine learning term that basically means “the real deal”.) IANS