How Do We Trust Our Robots?

Trust plays a central role in the acceptance of social robots in our communities, especially in an era where robots take part in daily activities in close proximity to humans. While some communities accept the presence of robots in their surroundings, others are less open to them. Furthermore, this acceptance depends on a large number of factors specific to both the communities and the deployed robots. We conducted a survey to explore the aspects people would consider if they had the option to deploy a robot at home and assign responsibilities to it. The study will help direct future research on embodied intelligence in robots towards a more human-accepted level. The paper presents a human study conducted to evaluate trust between humans and robots. We present the feedback we received from the participants to assess the level of trust participants place in their robots and their personal preferences regarding the abilities of robots. Human responses and the decisions observed during the study are analyzed and critical observations are highlighted. From the analysis we derive a set of guidelines to improve human trust in future robots by adjusting their humanlike social behaviors.


Introduction
The rise of social robots in our surroundings opens up an opportunity to transform robots into companions rather than interactive machines that make our work easier. Hence, natural interaction between robots and humans is important to make the collaboration effective. On the other hand, if robots can perceive the requirements of a scenario without external guidance, this will lessen the burden on humans in these activities, both emotional and physical. Trust plays an important role in making this companionship between humans and social robots more interactive [1].
To assess human expectations within a given situation, robots might have to think and act like humans. In short, their behavior will have to be more humanlike in order to make their interaction with humans smooth and flexible during day-to-day tasks. This natural behavior will help humans to accept robots in their surroundings more easily, enabling humans to work closely with them without worrying over a set of opaque handling rules. Machines expressing humanlike features have been shown to benefit human-robot team cohesion [2], but may also prove detrimental if implemented poorly, as is demonstrated by the now well-known uncanny valley [3].
At present, there are several obstacles that prevent humans from fully trusting robots. On the one hand, there are technological constraints to enable robots with all the required features. On the other hand, we lack a proper conceptual model for a robot's brain [4] that would equip a robot with adequate emotional intelligence to make humanlike decisions at proper occasions. The adequateness of the robot's actions is one of the cornerstones for building trust in the robot's capabilities in human-robot collaborative environments. In addition, there can be cultural and personal variations among communities and individuals in accepting robots as well [5]. Embodied intelligence and humanlike perceptive skills can immensely enhance human acceptance of assistive robots [6].
Improving human trust in robots despite these issues requires an analysis of the current state of human trust in robots. As different communities have different perceptions of robots, it is important to conduct research to analyze the level of acceptance received by robots around the world [7]. Even so, some of the findings presented here can be assumed to be universal and may help in developing social robots in the future.
We conducted a survey to assess the level of trust in our community and to explore the issues associated with it. The main objective of this research is to explore the requirements for improving human trust in robots and to propose a set of guidelines for future robotic companions in social environments. During the survey, we asked the participants different questions to assess their acceptance of robots in their surroundings and later evaluated their preferences in assigning responsibilities to robots in mission-critical tasks. We further explored human preferences regarding the appearance and the physical and cognitive skills of a robot.

Related Work
Trust is crucial for the flow of interaction between humans and their robotic companions. The work in [8] argues whether, and under which conditions, the notion of trust is applicable to robots. The author concludes that the relationship between humans and robots is distinct from that with most other technical devices and involves an often implicit level of trust that should be considered during design, e.g. by enabling the robot to explain its actions. Since then, various studies have been conducted to examine factors influencing trust in robots, such as their overall appearance [9], specific appearance features [10], personality traits [11], transparency [12], team interactions [13], and many more.
In [14], a model of human trust in robots and the factors influencing it is developed. The authors separate these factors into three categories: robot characteristics, human characteristics and environmental characteristics (i.e. interaction modalities). Respective examples of these are adaptability, workload and communication. It seems worth mentioning that this research considered a military context, where trust and reliability are of particular importance.
A more refined model can be found in [15], where the authors present a meta-analysis of empirical studies investigating factors that influence human trust in robots. They identify factors that have been proven to be relevant as well as those that lack sufficient evidence or have not been considered yet, especially from the area of team collaboration (e.g. interaction frequency). In contrast, our survey tries to assess whether humans even want to trust robots and under which conditions.
In [1], a social study is presented where a connection was made between a situation's severity, the magnitude of errors made by the robot, and the users' trust towards it. The authors also found that some character traits of their participants, like benevolence and agreeableness, are strongly correlated with their disposition to trust a robot. We add to this study by exploring the connection between trust in robots, user preferences and user expectations in more detail.
While the above works take a more passive-descriptive approach, [16] gives a summary of how a robotic system may actually gain, measure and maintain its user's trust. Although several challenges remain unsolved (in particular, reliable approaches for establishing and measuring trust "in the wild"), the authors note that promising approaches for all of these are under active research.
Although studies exist that assess how people's willingness to trust changes when the robot makes mistakes [17, 18], we argue that people, through media and video games, already have an idea of what present-day robots supposedly can and cannot do and thus how trustworthy they are. These beliefs need to be addressed when designing a robot in order to maximize its user acceptance. Through this study we hope to shed some light on this area and inspire others to expand on it.
The Survey

Background
Participants of the study come from a relatively young population with a mean age of 28.46 years (SD 3.36). There were 63 participants; 32 female and 31 male. 17% of the participants had previous experience with robots and the rest did not. The participants were Sri Lankans who had at least finished high school. The effect of gender on human trust towards robots was not considered within the scope of this study. Previous human studies found that the Sri Lankan community has a positive mindset towards the deployment of service robots in domestic environments for conversations [19]. This led us to inquire about more complex aspects of human-robot collaboration such as trust, which directly affects human acceptance of robots.

Survey questions
The survey questionnaire was formed so that participants would indirectly answer the question: To what extent do you trust robots in your surroundings? To this end we started by asking participants if they would like to have a robot around them (Questions 1-2). With these initial questions we wanted the participants to create an image of the robot they actually prefer, without asking them directly. These are as follows.
• Question 1: Do you like a robot helping you with your work at home?
• Question 2: How do you prefer a robot? As a companion or an assistant? Or both?
The options given for question 1 were Yes, No and Maybe, while the options given for question 2 were as an assistant, as a companion and both. Question 3 directly explores the participant's preference regarding the robot's appearance and its stereotyped personality.
• Question 3: How would you call your robot: him, her or it?
The options given for question 3 were him, her and it. Questions 4 and 5 explore the participants' intentions for deploying a robot in their surroundings. These questions reveal the participants' main reason to choose a robot. Hence, these questions are the foundation for all other answers provided by the participant.
• Question 4: What do you think robots do not know, but better if they had known?
• Question 5: Which activities do you think that a robot cannot do?
The participants were given the opportunity to explain or make a list of their preferences for questions 4 and 5.
Questions 6 to 9 are ordered by the complexity and privacy of the task assigned to the robot. While cleaning a house and cooking are typical tasks assigned to domestic robots, driving a vehicle and driving kids to school are critical tasks where high trust and reliability are required. Therefore, by asking the participants to imagine their robots in such simple and mission-critical tasks, we wanted to compare how people measure the abilities of present robots.
• Question 6: Would you like a robot to clean your house?
• Question 7: Would you like a robot to cook for you?
• Question 8: Would you let your robot drive your car?
• Question 9: Would you like a robot to drive your kids to school?
The options given for questions 6 to 9 were Yes, No and Maybe. Beyond the common set of tasks in which domestic robots are specialized, we further explored how individual preferences matter in human-robot domains. While some people accept robots in their surroundings or community, others do not. Question 10 was added to weigh this fact.
• Question 10: For which tasks would you like to get help from robots?
The participants were given the opportunity to list their preferences for question 10. However, this may have reasons other than trust, and so questions 5 and 11 will help understand such reasons.
• Question 11: Will it be better if a human/servant do these for you?
The options here were a robot, on my own or another human, and depends on the situation. When humans trust someone, they will be relaxed or comfortable around that person. Furthermore, they will share sentiments and have conversations with them [20]. Question 12 explores this fact to evaluate the sentimental relationship between the human and the robot.
• Question 12: Say you are having a relaxed evening sitting outside. Then your robot approached you. Will you be interested in having a conversation with the robot?
The options given for question 12 were Yes, No and Maybe. Question 13 helps to get a better idea of the participants and their background in robotics, since experience with robots can have a significant influence on a person's stance towards them. The options given for question 13 were Yes and No.
• Question 13: Have you worked with robots before?

Questions 14 to 16 examine the appearance of the robot participants imagined. A graphical illustration of a robot was requested as well to give a better impression of the participants' answers. We gave participants who had no drawing ability the chance to search the Internet and upload a matching image. We took this step to avoid the participants' skills affecting the results of the survey.
• Question 14: What type of robot did you picture in your mind when answering these questions?
• Question 15: If your answer was other to the previous question, state the type of robot you imagined.
• Question 16: Illustrate the robot you imagined while answering this questionnaire.
In summary, the survey questions were formulated not only to evaluate the level of human trust in robots, but also to understand the following aspects of human-robot collaboration.
• Human preferences regarding the capabilities of their robot companions (Questions 1, 2)
• Motives for deploying a robot (Questions 4, 5, 10, 11)
• How people interpret the personality of a robot during this collaboration (Questions 2-3, 14-16)
• How people assign responsibilities to their robots (Questions 4-5, 6-9)
• How robots can receive human-level acceptance (Question 12)

Answers to these questions are discussed in the Results and discussion section.

Results and discussion
For question 1, 3.2% of participants responded No, 1.6% responded Maybe and the remaining 95.2% responded Yes. Therefore, the number of participants who voted against using robots at home was very low compared to those who preferred a robot helping them at home. It is important to note that the participants who voted against robots selected either No or Maybe for questions 6 to 9 as well.
Figures 1 (a) and (b) illustrate the results for questions 2 and 3. According to Figure 1 (a), 1.6% of all participants did not prefer getting help from a robot, while the remaining 98.4% were open to the company of a robot. While the majority (47.6%) preferred their robots to be assistants, a similar share of participants preferred robots as both companions and assistants. From this we can observe that people expect a much friendlier, collaborative interaction with robots while making their tasks easier. This further suggests that people would prefer companionship with robots if the technology permits, despite the conventional role of robots to perform dedicated tasks.
For question 3, despite the three options we provided, a considerable number of participants suggested calling the robot by a name, although this was not provided as an option. This was an interesting observation during this survey and is demonstrated in Figure 1 (b). According to the pie chart in Figure 1 (b), this new category, calling the robot by a name, made up 18.8% of all votes. Altogether, 45.4% of the participants chose it or calling by a name. Hence it is interesting to see that nearly half of the participants did not choose a gender for their robot. However, the relationship between the gender of participants and their preference regarding the gender of the robot was not considered within the scope of this study. Here, we would argue that 73.4% of the participants assigned a gender or a name to their robots and only 26.6% considered it a thing.
The answers we received for question 4 are illustrated in Figure 2 (a). We gave participants the opportunity to write their own answers here. Later we categorized their responses under five groups: Emotional intelligence, Physical activities, Technical tasks, Specialized tasks and Other. Responses such as understanding people, having feelings or emotions, humanity, creativity, loyalty, appreciation and being part of a family were categorized under Emotional intelligence. Physical activities included responses such as gardening and cleaning the swimming pool. We categorized responses such as air pollution, auto-correction of errors and precision under Technical tasks. Abilities such as medical training, sorting food, tasting, smelling and recognizing visitors were categorized under Specialized tasks. In addition, there were responses such as mind reading, humor and no preference, which were categorized under Other. Out of the robots' abilities preferred by the participants, a majority (69.8%) required emotional intelligence. All the other categories received less than 10% of the votes. Furthermore, while there were multiple answers for question 4, the keywords feel, emotion and think were used in 33 answers (52.4%) for question 4, and in 26 answers (41.3%) for question 5. This shows that people expect robots to be more than specialists in a certain task, and to be more humanlike in cognitive skills.
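The keyword-based grouping of free-text answers described above can be sketched as follows. This is a minimal illustration under assumed inputs; the answers and keyword lists below are hypothetical, not the actual coding scheme or data from the survey:

```python
# Hypothetical sketch: grouping free-text survey answers by category keywords.
# The categories mirror those named in the paper; the keywords are illustrative.
CATEGORIES = {
    "Emotional intelligence": ["feel", "emotion", "understand", "creativity"],
    "Physical activities": ["garden", "clean"],
    "Technical tasks": ["error", "precision"],
}

def categorize(answer: str) -> str:
    """Assign an answer to the first category whose keyword it contains."""
    text = answer.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "Other"

# Hypothetical free-text answers, standing in for the survey responses.
answers = [
    "understanding people's feelings",
    "gardening and cleaning the swimming pool",
    "auto-correction of errors",
    "mind reading",
]
counts = {}
for a in answers:
    c = categorize(a)
    counts[c] = counts.get(c, 0) + 1
```

In practice such automatic matching would only be a first pass: answers matching several categories or none still need manual review, which is how the survey responses were ultimately grouped.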
While questions 4 and 5 appear to be similar, question 4 focuses more on the cognitive skills of a robot while question 5 evaluates human expectations towards the abilities of a robot. Figure 2 (b) illustrates the answers for question 5. Again, we allowed participants to write down their own answers, similar to question 4. We then categorized the responses into five groups: Learning and intelligence, Physical activity, Specialized or technical tasks, Nonhumanitarian and None. Under Learning and intelligence, we categorized responses such as thinking, advising me, loving and caring, creativity, replacing a human, formulation of new ideas, having a long conversation with me and attending to urgent matters. Physical, specialized and technical tasks were as in question 4. The Nonhumanitarian category included responses such as hitting others. This yields similar results to the above: a greater percentage voted for the abilities under Learning and intelligence. It is interesting to note that 9.4% completely believed in the abilities of robots despite the inadequacy of present technology. Compared to Figure 2 (a), in Figure 2 (b) the votes for physical activity grew by 15.5 percentage points. This suggests that it is equally important to equip robots physically as well as cognitively.
Although many participants preferred robots as assistants rather than companions (Figure 1 (a)), they expect emotional intelligence from them even in task-specific applications (Figure 2 (a), (c)).
Figure 2 (c) illustrates the answers for question 10, which explores where people expect help from robots. The responses were categorized into four groups: Emotional support, Physical activity, Technical tasks and Any. The Any category includes the votes where participants preferred any kind of help from their robots. The rest of the categories include the same kinds of responses as before. It is important to note that the majority (85.7%) expect help from their robots with physical activities. This seems to contradict the majority vote for Emotional intelligence in the first two charts, Figure 2 (a) and (b). The reason for this is that participants prefer emotionally intelligent robots even in support of physical work. A minor group of participants preferred robots for emotional support and technical tasks, 3.2% and 6.3% respectively. 4.8% of the participants accepted any help from robots in their surroundings. The results obtained from the charts in Figure 2 (a) to Figure 2 (c) suggest that people expect more humanlike cognition in robots, rather than the machinelike nature of robots in the past. But this does not hold for the participants' preferences regarding a robot's appearance, as explained alongside Figure 6.
The pie charts in Figure 3 plot the preferences Yes, No and Maybe for questions 6 to 9. For ease of finding patterns in these responses, we plotted the responses for each task on the same graph in Figure 4. If participants preferred their robots performing tasks 1 to 4, they responded Yes, and otherwise No. If they were uncertain, they were allowed to respond Maybe. These tasks in questions 6 to 9 were selected in order of increasing complexity and criticality. For the task of cleaning, 87.3% of participants preferred a robot while there were zero No responses. For task 2, cooking, the percentage of Yes responses reduced to 43.4%. This further reduced to 15.9% for task 4, driving kids to school. An opposite trend can be seen in the No responses for these tasks, as shown on the graph. Participants' likeliness to respond Maybe increased from task 1 to task 2 and stayed almost constant until task 4. It can be seen that the uncertainty of participants' preferences increased from task 1 to task 4.
It can therefore be seen that participants' trust in robots reduced as the criticality of the task increased. An ANOVA test was performed on this data, shown in Table 1, for all four tasks in Figure 4 to analyze the difference between participants' preferences. The null hypothesis is that there is no significant difference between the preferences for the four tasks considered. As seen in Table 1, the F-value (36.83) was significantly larger than F-critical (2.64). Hence, the null hypothesis can be rejected, declining the assumption that there is no significant difference between participants' preferences for robots across the selected set of tasks. It can thus be deduced that the user preference for robots was significantly different across the four tasks. Furthermore, the p-value (1.15E-19) was smaller than the alpha level (0.05), which also indicates statistical significance. Hence it can be deduced that human trust in robots is significantly lower for critical tasks compared to normal tasks.
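For reference, the one-way ANOVA statistic reported in Table 1 follows the standard between-group/within-group decomposition. The sketch below computes the F value from scratch on small hypothetical coded responses (Yes = 1, Maybe = 0.5, No = 0); it is illustrative only, not the actual survey data or analysis pipeline:

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)                      # number of groups (tasks)
    n = sum(len(g) for g in groups)      # total number of observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (group mean vs grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (observations vs their group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical coded preferences for two tasks (Yes = 1, Maybe = 0.5, No = 0)
f = one_way_anova_f([[1, 1, 0], [0, 0, 1]])
```

A complete analysis would compare the resulting F value against the critical value for the relevant degrees of freedom and alpha level (e.g. via a statistics library such as scipy.stats), as was done for Table 1.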
In question 10, the participants were asked to list the tasks for which they would prefer to get help from robots. In question 11, they were then asked to compare the results they would expect from the robot and from another human for the tasks they listed. Figure 5 (a) illustrates the responses received for question 11. More than 50% of the participants relied on robots for these tasks, while a still considerable share of participants (41.3%) would prefer it if a human could replace the robot for the tasks they listed. Therefore, a considerable number of participants have problems accepting the capabilities of present robots. It is thus likely that people still trust robots less due to the lack of emotional intelligence and physical capabilities, as found under question 5.
Each slice in the pie chart shown in Figure 5 (b) represents the responses received for question 12. 36.5% of the participants preferred interacting with a robot during a relaxed occasion, while 49.2% were uncertain of their decision. Possible reasons for this uncertainty were discussed in [21]. According to [21], the internal state of mind of a person largely contributes to their decision to interact with a robot. Therefore, a robot has to be able to understand whether a situation is suitable for interaction prior to a conversation. We think that this may be the reason for the large share of Maybe responses. 14.3% of the participants did not prefer interaction with a robot while relaxed. The reasons can be seen in Figure 2 (c), where many prefer their robots for physical tasks rather than social engagement.
Figure 6 illustrates some of the pictures drawn by participants for question 16. We allowed participants to upload an image from the Internet matching their imagination if they lacked confidence in their drawing skills. An important observation we made here was that almost all the illustrations had eyes as a physical feature, whether the robot was anthropomorphic, zoomorphic or machinelike in appearance. After analyzing these pictures and the responses to questions 14 and 15, we grouped them into four categories considering the physical appearance of the robots, as shown in Figure 5 (c): anthropomorphic, zoomorphic, machinelike and Other. A majority of participants (55.6%) preferred robots which look like a machine. According to the participants' illustrations, this look was similar to the typical robot look of humanoid robots frequently seen on the Internet and in sci-fi movies. It was interesting to observe that people prefer robots to have their own look rather than being humanlike. The uncanny valley problem can be a reason for these preferences [3]. Only 30.2% preferred a human look for their robots, while 7.9% preferred an animal figure. More abstract designs such as vehicles or spheres were categorized under Other. It was found in [19] that people responded to robots in a way similar to how they treat children during relaxed conversations. The same tendency could be seen in the illustrations participants made during this survey: a tendency to draw cute robots. Figure 5 (c) and Figure 6 testify to this.

Conclusions
Robot intervention in our daily tasks is becoming more and more common nowadays. Trust is an important aspect in enhancing the collaboration between humans and robots in this regard. In this paper, we report findings from a survey related to human preferences when allocating tasks to and sharing responsibilities with robots. In addition, we also asked participants to provide a visualization of the robot they envisioned while taking part in the survey. This helped us explore the uncanny valley of robotics and how it affects human trust in robots.
This study evaluates the willingness of people to trust their robot companions in different scenarios through a set of questions. The results of the study confirm that the capability of a robot to perceive its surroundings in a way similar to humans has an impact on sustaining interaction between a robot and its user. Our results further confirm that the capability of a robot to perform physical actions as smoothly and flexibly as humans can considerably improve human trust. However, limitations in the dynamics as well as the emotional intelligence of robots have to be overcome before humans can build trust in their robotic companions for more critical tasks. Moreover, human preferences regarding the appearance of their robotic companions have been analyzed to explore the relationship between human trust in robots, their appearance and their capabilities.
Future research should evaluate more factors that might impact human trust towards robots in their surroundings.We also encourage readers to conduct similar surveys in their local communities in order to help separate generally applicable aspects of trust in robots, from locally specific ones.

Implications for design and theory
The acceptance of a robot for everyday tasks depends on the values and risks associated with the task (i.e. its criticality) as well as on the user's trust towards the robot. Trust in turn depends on the capabilities the user is willing to ascribe to the robot, both in physical and social interactions. While these capabilities do not have to be demonstrated within the target task, they shape the user's understanding of the robot. If robots matched the participants' vision, many participants would prefer robots in their surroundings to share their workload. It is therefore important to increase the capabilities of a robot according to the tasks it is allocated. This is the first design guideline derived from this study.
The results confirmed that people would prefer it if robots could understand and adjust to their emotions. This suggests that people prefer interaction with robots that are, to a certain degree, emotional and less clinical. Therefore, emotional intelligence will be a sought-after feature in future domestic or social robots. This is the second design guideline derived from this study.
An important observation made during the study was that all pictures drawn or uploaded by participants included a feature similar to eyes. We can thus deduce that gaze is a prominent feature that humans prefer during human-robot interaction. Therefore, the importance of gaze and eyes as a feature of robots is worth exploring in the future. This is the third design guideline derived from this study.
As per the results, people believe that robots are not physically capable enough to perform all the tasks participants desired in a satisfactory way. Therefore, equipping robots with adequate physical capabilities so that they can perform human activities will be the next design guideline derived from the study. This requires adequate technologies and improved robot dynamics.
Humans preferred emotional intelligence in their robotic companions even though they mainly wanted to use robots for physical activities. Therefore, embedding at least a minimal degree of emotional intelligence in collaborative robots, regardless of their task, will be the fifth design guideline for our future robots.
From the answers we received to the questionnaire, we observed a rather loose relationship between human expectations of humanlikeness in a robot's appearance and in its skills. Our findings suggest that people prefer humanlike features in a robot's capabilities but, surprisingly, less humanlikeness in its appearance. Instead, they preferred the 'typical robot' figure. Working on a robot's skills more than its appearance will therefore attract more human acceptance in the future. This is the sixth design guideline for our future robotic companions.

Figure 1 .
Figure 1. Pie charts categorizing the percentage of votes received for questions 2 and 3 are shown in (a) and (b) respectively.

Figure 2 .
Figure 2. Pie charts categorizing the percentage of votes received for questions 4, 5 and 10 are shown in (a), (b) and (c) respectively.

Figure 3 .
Figure 3. Pie charts representing the percentages of Yes, No and Maybe received as answers for questions 6 to 9 are shown in (a), (b), (c) and (d) respectively.

Figure 4 .
Figure 4. The percentages of Yes, No and Maybe received as answers for questions 6 to 9 are plotted against the task involved in each question.

Figure 5 .
Figure 5. Pie charts categorizing the percentage of votes received for questions 11, 12 and 14 are shown in (a), (b) and (c) respectively.

Figure 6 .
Figure 6. Some of the pictures drawn by the participants for question 16.

Table 1 .
An ANOVA test for the comparison of responses received for tasks 1 to 4 in Figure 4.