
Dissent in Congress? Senior leaders urge party members to refrain from making comments on Rahul Gandhi


Rahul Gandhi, vice president of Congress

By NewsGram Staff Writer
Amid simmering discontent in the ranks of the Congress, the party on Wednesday asked its senior leaders to refrain from making “open” comments that could stir up “unnecessary controversy”.

“Senior leaders should refrain from making open comments. They should not create unnecessary controversy as interested sections want to create confusion,” party spokesperson P.C. Chacko said in response to questions regarding Sheila Dikshit’s comment on party vice president Rahul Gandhi.

Earlier, Sheila Dikshit, former chief minister of Delhi, had said, “(in case of) Rahul, of course, there is a question mark, there is skepticism because you have not seen him perform as yet.”

However, the former Delhi chief minister later denied having made such comments.

Dismissing speculation about any rift within the party over the leadership role of Rahul Gandhi, Chacko said that there was “no conflict” between the two leaders, as their leadership was fully acceptable to the Congress party.

“The entire Congress party is in full support of both the leaders. President and the vice president are two different positions in the party and they are performing their duties very well,” he added.

Former Punjab chief minister and MP Captain Amarinder Singh had also reportedly suggested in an interview that Sonia Gandhi should continue as party president because she is the only force that could keep the party together.

Congress leader and Sheila Dikshit’s son Sandeep Dikshit also expressed a similar opinion in an interview to a news channel.


Social Robots Can Now be Conflict Mediators: Study


Artificial Intelligence Bot. Pixabay

We may listen to facts from Siri or Alexa, or directions from Google Maps, but would we let a virtual agent enabled by artificial intelligence help mediate conflict among team members? A new study suggests it might.

The study was presented at the 28th IEEE International Conference on Robot & Human Interactive Communication in New Delhi on Tuesday.

“Our results show that virtual agents and potentially social robots might be a good conflict mediator in all kinds of teams. It will be very interesting to find out the interventions and social responses to ultimately seamlessly integrate virtual agents in human teams to make them perform better,” said study lead author Kerstin Haring, Assistant Professor at the University of Denver.

Researchers from the University of Southern California (USC) and the University of Denver created a simulation in which a three-person team was supported by a virtual agent ‘Avatar’ on screen in a mission that was designed to ensure failure and elicit conflict.

The study was designed to examine whether virtual agents could serve as mediators that improve team collaboration during conflict.

“We’re beginning to see the first instances of artificial intelligence operating as a mediator between humans, but it’s a question of: ‘Do people want that?’” Pixabay

While some of the researchers had previously found that one-on-one human interactions with a virtual agent therapist yielded more confessions, in this study, team members were less likely to engage with a male virtual agent named ‘Chris’ when conflict arose.

Participating members of the team did not physically accost the device, but rather were less engaged and less likely to listen to the virtual agent’s input once failure ensued among team members.


The study was conducted in a military academy environment in which 27 scenarios were engineered to test how the team that included a virtual agent would react to failure and the ensuing conflict.

The virtual agent was not ignored by any means.

The study also found that the teams did respond socially to the virtual agent during the planning of the mission they were assigned (nodding, smiling and recognising the virtual agent’s input by thanking it) but the longer the exercise progressed, their engagement with the virtual agent decreased. (IANS)