US-based chip designer Nvidia Corporation unveiled two new products aimed at powering autonomous robots: “Nvidia Isaac”, a new developer platform, and “Jetson Xavier”, an Artificial Intelligence (AI)-based computer.
“Isaac and Jetson Xavier were designed to capture the next stage of AI innovation as it moves from software running in the cloud to robots that navigate the real world,” The Verge quoted Nvidia CEO Jensen Huang as saying on Monday.
The developer platform “Nvidia Isaac” is a set of software tools including application programming interfaces (APIs) to connect to 3D cameras and sensors, a library of AI accelerators to keep algorithms running smoothly, and a new simulation environment, “Isaac Sim”, for training and testing bots in a virtual space.
Nvidia’s AI-based computer, “Jetson Xavier”, comprises over nine billion transistors and processing components, including deep learning accelerators and processors for static images and video. It is capable of delivering over 30 trillion operations per second (TOPS) of compute while consuming just 30 watts of power, the report added.
“AI, in combination with sensors and actuators, will be the brain of a new generation of autonomous machines,” Huang was quoted as saying.
Russian researchers have revealed that artificial intelligence (AI) is able to infer people’s personality from ‘selfie’ photographs better than human raters do. The study, published in the journal Scientific Reports, revealed that personality predictions based on female faces appeared to be more reliable than those for male faces.
The technology can be used to find the ‘best matches’ in customer service, dating or online tutoring, the researchers from HSE University and Open University in Russia said. Studies asking human raters to make personality judgments based on photographs have produced inconsistent results, suggesting that our judgments are too unreliable to be of any practical importance. According to the study, however, there are strong theoretical and evolutionary arguments to suggest that some information about personality characteristics, particularly those essential for social communication, might be conveyed by the human face.
After all, face and behaviour are both shaped by genes and hormones, and social experiences resulting from one’s appearance may affect one’s personality development.
However, the recent evidence from neuroscience suggests that instead of looking at specific facial features, the human brain processes images of faces in a holistic manner.
For the findings, the researchers teamed up with a Russian-British business start-up BestFitMe to train a cascade of artificial neural networks to make reliable personality judgments based on photographs of human faces.
The performance of the resulting model exceeded that reported in previous studies using either machine learning or human raters.
The artificial intelligence was able to make above-chance judgments about conscientiousness, neuroticism, extraversion, agreeableness, and openness based on ‘selfies’ the volunteers uploaded online.
The resulting personality judgments were consistent across different photographs of the same individuals.
The study was done in a sample of 12,000 volunteers who completed a self-report questionnaire measuring personality traits based on the “Big Five” model and uploaded a total of 31,000 ‘selfies’. The respondents were randomly split into a training and a test group.
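The random split of respondents into training and test groups can be sketched as follows. This is an illustrative example only: the exact split ratio and the use of respondent IDs are assumptions, not details from the study.

```python
import random

# Hypothetical respondent IDs standing in for the 12,000 volunteers.
respondents = list(range(12000))

random.seed(42)            # fixed seed so the split is reproducible
random.shuffle(respondents)

# An 80/20 split is assumed here; the paper's actual ratio may differ.
cut = int(0.8 * len(respondents))
train_ids = respondents[:cut]
test_ids = respondents[cut:]

print(len(train_ids), len(test_ids))  # 9600 2400
```

Splitting by respondent (rather than by photo) matters when volunteers upload several selfies each, since it keeps all images of one person on the same side of the split.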
A series of neural networks were used to preprocess the images to ensure consistent quality and characteristics and exclude faces with emotional expressions, as well as pictures of celebrities and cats.
Next, an image classification neural network was trained to decompose each image into 128 invariant features, followed by a multi-layer perceptron that used image invariants to predict personality traits.
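The two-stage pipeline described above, a feature-extraction network followed by a multi-layer perceptron, can be sketched in miniature. Everything here is a hypothetical stand-in: the random weights, the hidden-layer size and the placeholder feature extractor are illustrative assumptions, not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image):
    """Placeholder for the image classification network that, in the study,
    decomposes each face photo into 128 invariant features. A real system
    would run a trained network here; random values stand in for a photo."""
    return rng.standard_normal(128)

# Minimal multi-layer perceptron: 128 invariant features -> hidden layer ->
# 5 outputs, one per Big Five trait (openness, conscientiousness,
# extraversion, agreeableness, neuroticism). Weights are untrained.
W1 = rng.standard_normal((128, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 5)) * 0.1
b2 = np.zeros(5)

def predict_traits(features):
    hidden = np.maximum(0, features @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2                     # one score per trait

scores = predict_traits(extract_features(image=None))
print(scores.shape)  # (5,)
```

Separating the pipeline this way lets the same 128-dimensional face representation be reused: consistency across different photos of one person is then a property of the features, not of the trait predictor.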
In comparison with the meta-analytic estimates of correlations between self-reported and observer ratings of personality traits, the findings indicate that an artificial neural network relying on static facial images outperforms an average human rater who meets the target in person without prior acquaintance.
Conscientiousness emerged to be more easily recognizable than the other four traits. Personality predictions based on female faces appeared to be more reliable than those for male faces, the study said. (IANS)
Google has launched a free training course in 17 languages to teach journalists around the world what impact Artificial Intelligence (AI) and Machine Learning (ML) can have on their profession.
In a global survey conducted by Google last year about the use of AI by news organizations, most respondents highlighted the urgent need to educate and train their newsroom on the potential offered by machine learning and other AI-powered technologies.
“Improving AI literacy was seen as vital to change culture and improve understanding of new tools and systems,” said Mattia Peretti, who manages the programme called JournalismAI.
The new training course is produced by JournalismAI in collaboration with VRT News and the Google News Initiative (GNI).
They realized that the more the newsroom at large embraces the technology and generates the ideas and expertise for AI projects, the better the outcome.
“This Introduction to Machine Learning is built by journalists, for journalists, and it will help answer questions such as: What is machine learning? How do you train a machine learning model? What can journalists and news organizations do with it and why is it important to use it responsibly?” said Google.
The course is available in 17 different languages on the Google News Initiative Training Centre.
By logging in, you can track your progress and get a certificate when you complete the course.
The training centre also has a variety of other courses to help journalists find, verify and tell news stories online.
It’s a tough time for journalists and news organisations worldwide, as they try to assess the impact that COVID-19 will have on the business and editorial side of the industry.
“With JournalismAI, we want to play our role in helping to minimize costs and enhance opportunities for the industry through these new technologies,” said Google.
At the end of the course, users will find a list of recommended resources, produced by journalism and technology experts across the world, that were instrumental in designing the Introduction to Machine Learning.
“After this course, and the previous training module with strategic suggestions on AI adoption, we are planning to design more training resources on Artificial Intelligence and machine learning for journalists later this year,” said Peretti. (IANS)
At a time when Artificial Intelligence (AI) has demonstrated the profound impact it can have on society, the technology is no magic and the outcome of implementing an AI model has a direct correlation to the underlying data that has gone into training it, a top Microsoft executive said on Wednesday.
Building an AI model is an iterative process, and the outcome only gets better with new or more data over time.
“Do not expect human parity on day one. Businesses need to invest in the evolution of the model, which may take an umpteen number of iterations before it reaches an acceptable level of accuracy and precision,” according to Sandeep Alur, Director, Microsoft Technology Center, India.
Having attained human parity across vision, speech and text, AI has the potential to significantly affect business outcomes.
“For organisations that are still trying to figure out their AI journey, they have to be realistic, invest in the evolution of an AI model, set guardrails for responsible AI and establish trust,” Alur said in a statement.
Microsoft has identified six principles for responsible AI that guide the development and use of AI with people at the centre.
These are – fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability.
“Organisations may develop their own principles according to the nature of their business, but the guiding principles will ensure that their AI models are trustworthy,” said Alur.