Humanlike Virtual Assistants Make Some People Feel Foolish

People may hesitate to seek help from humanlike virtual assistants on tasks such as online learning because they are afraid of sounding foolish, a study has found. "We demonstrate that anthropomorphic features may not prove beneficial in online learning settings, especially among individuals who believe their abilities are fixed and who thus worry about presenting themselves as incompetent to others," said Daeun Park from Chungbuk National University in South Korea. "Our results reveal that participants who saw intelligence as fixed were less likely to seek help, even at the cost of lower performance," Park said. Previous research has shown that people are inclined to treat computerised systems as social beings on the basis of only a couple of social cues.

This social dynamic can make the systems seem less intimidating and more user-friendly, but researchers wondered whether that would hold in a context where performance matters, such as on online learning platforms. "Online learning is an increasingly popular tool across most levels of education, and most computer-based learning environments offer various forms of help, such as a tutoring system that provides context-specific help," said Park. "Often, these help systems adopt humanlike features; however, the effects of these kinds of help systems have never been tested," Park added. For the study, published in the journal Psychological Science, researchers had 187 participants complete a task that supposedly measured intelligence. In the task, participants saw a group of three words (e.g., room, blood, salts) and were asked to come up with a fourth word that related to all three (e.g., bath).

On the more difficult problems, they automatically received a hint from an onscreen computer icon - some participants saw a computer "helper" with humanlike features, including a face and speech bubble, whereas others saw a helper that looked like a regular computer. Participants reported greater embarrassment and concerns about self-image when seeking help from the anthropomorphised computer than from the regular computer, but only if they believed that intelligence is a fixed rather than malleable trait. The findings indicated that a couple of anthropomorphic cues are sufficient to elicit concern about seeking help, at least for some individuals. The researchers tested this directly in a second experiment with 171 university students. In that experiment, they manipulated how the participants thought about intelligence by having them read made-up science articles that highlighted either the stability or the malleability of intelligence.

The participants completed the same kind of word problems as in the first study - this time, they freely chose whether to receive a hint from the computer "helper." The results showed that students who were led to think of intelligence as fixed were less likely to use the hints when the helper had humanlike features than when it did not. The findings could have implications for how people perform when using online learning platforms, the researchers said.
