

CREST Project

Introduction

Robots are now able to execute various tasks, and more and more jobs are expected to be automated through Artificial Intelligence. In our daily lives, we can soon expect to see robots working as shopkeepers, guards, or receptionists.

However, beyond simply accomplishing tasks, the presence and behaviour of human employees in a public place helps maintain the civility of the surroundings. Without it, people's morals degrade and incivilities grow: bullying, littering, illegal parking, graffiti, and so forth become more and more common.

Politeness, a pleasant atmosphere and peer pressure are fundamental to preserving the moral level of a place. If robots are to take over these jobs, it is crucial that they carry out the moral interactions required to keep that level high.
It has been observed that current robots, unlike humans, are not perceived as moral entities: they are neither granted respect, nor does their presence exert any moral pressure. Our project tackles this problem.

Research approach

Our research pursues two goals:
- Moral attribution: create a situation in which the robot is respected as a peer.
- Moral encouragement: the robot, by its presence, should promote moral behaviour.

To fulfil these goals, our approach is to perform field experiments while focusing on three points:
- Observe a large number of moral behaviours and become able to recognize them automatically.
- Develop a robot with a human-like sense of morality that can carry out moral interactions.
- Explain the causes and circumstances of low morals based on real-world data, and unveil the cognitive processes involved in morality.

Published studies

Below is a list of the research publications produced under this project.


Can a Robot Handle Customers with Unreasonable Complaints?

Daichi Morimoto, Jani Even, Takayuki Kanda, HRI 2020

In the service industry, customers with unreasonable complaints have become one of the most stressful experiences for workers. We built a robot with a behavioral model to deal with this type of situation. The model was successful at making customers believe that the robot listened to them and tried to help them.


Publication: Daichi Morimoto, Jani Even, and Takayuki Kanda. Can a Robot Handle Customers with Unreasonable Complaints? In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3319502.3374830

Abstract: In recent years, the service industry has faced a rise in the number of malicious customers. Making unreasonable complaints is one misbehavior that is particularly stressful for workers. Considering the recent push for introducing robots into workplaces, a robot could handle these undesirable customers in place of the workers. This paper shows how we designed a behavioral model that enables a robot to handle a customer making an unreasonable complaint. The robot has to “please the customer” without proposing a settlement. We used information from a recent large survey of workers in the Japanese service industry, supplemented by interviews we conducted with experienced workers, to derive our proposed behavioral model. We identified the conventional complaint-handling flow as 1) listen to the complaint, 2) confirm the content of the complaint, 3) apologize, 4) give an explanation and 5) conclude. Our proposed behavioral model takes into account the “state of mind” of the customer by looping on the first step as long as the customer is not “ready to listen”. The robot also asks questions while looping. Using the Wizard-of-Oz paradigm, we conducted a user study in our laboratory that imitated the situation of a complaining customer in a mobile phone shop. The proposed behavioral model was significantly better at making the customers believe that the robot listened to them and tried to help them.
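
As an illustration only, here is a minimal Python sketch of the looping flow described in this abstract. It is not the authors' implementation; the customer's "ready to listen" state is simulated with a simple counter, whereas in the study it was judged by a Wizard-of-Oz operator.

```python
# Minimal sketch of the looping complaint-handling flow described above.
# NOT the authors' code: the customer's "ready to listen" state is simulated
# with a counter; in the study it was judged by a Wizard-of-Oz operator.

def handle_complaint(ready_after: int = 3) -> list[str]:
    """Return the sequence of robot actions for one complaint episode."""
    actions = []
    turns_listened = 0
    # 1) Keep listening (and asking questions) until the customer is ready to listen.
    while turns_listened < ready_after:  # stand-in for the operator's judgment
        actions.append("listen to the complaint")
        actions.append("ask a clarifying question")
        turns_listened += 1
    # 2)-5) Remaining steps of the conventional flow, without offering a settlement.
    actions += [
        "confirm the content of the complaint",
        "apologize",
        "give an explanation",
        "conclude",
    ]
    return actions

if __name__ == "__main__":
    for step in handle_complaint():
        print(step)
```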


Can a Social Robot Encourage Children’s Self-Study?

Risa Maeda, Jani Even, Takayuki Kanda, IROS 2019

We developed a robot behavioral model designed to support children during self-study. By monitoring the state of the studying child (whether the child is learning, stuck, or distracted) and interacting with the child when difficulties arise, the robot successfully supported the children and increased their learning time.


Publication: R. Maeda, J. Even and T. Kanda, “Can a Social Robot Encourage Children’s Self-Study?,” 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 1236-1242.

Link: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8967825&isnumber=8967518

Abstract: We developed a robot behavioral model designed to support children during self-study. In particular, we wanted to investigate how a robot could increase the time children stay concentrated. The behavioral model was developed by observing children during self-study and by collecting information from experienced tutors through interviews. After observing the children, we decided to consider three states corresponding to different levels of concentration. The child can be smoothly performing the task (“learning” state), encountering some difficulties (“stuck” state) or distracted (“distracted” state). The behavioral model was designed to increase the time spent concentrating on the task by implementing adequate behaviors for each of these three states. These behaviors were designed using the advice collected during the interview survey of the experienced tutors. A self-study system based on the proposed behavioral model was implemented. In this system, a small robot sits on the table and encourages the child during self-study. An operator is in charge of determining the state of the child (Wizard of Oz) and the behavioral model triggers the appropriate behaviors for the different states. To demonstrate the effectiveness of the proposed behavioral model, a user study was conducted: 22 children were asked to solve problems alone and to solve problems with the robot. The children spent significantly (p = 0.024) more time in the “learning” state when studying with the robot.
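
As an illustration, here is a minimal sketch of the three-state logic described in this abstract. It is not the authors' system: the child's state was determined by a Wizard-of-Oz operator, and the behaviors listed below are placeholders paraphrasing the model's intent rather than the behaviors actually designed from the tutors' advice.

```python
# Minimal sketch of the three-state behavioral model described above.
# NOT the authors' implementation: the child's state came from a Wizard-of-Oz
# operator, and the behaviors below are placeholders paraphrasing the intent.

BEHAVIORS = {
    "learning": "stay quiet and occasionally give brief encouragement",
    "stuck": "offer a hint or suggest re-reading the problem",
    "distracted": "call the child's name and invite them back to the task",
}

def select_behavior(state: str) -> str:
    """Trigger the behavior associated with the child's current state."""
    if state not in BEHAVIORS:
        raise ValueError(f"unknown state: {state!r}")
    return BEHAVIORS[state]

if __name__ == "__main__":
    for s in ("learning", "stuck", "distracted"):
        print(s, "->", select_behavior(s))
```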


An Escalating Model of Children’s Robot Abuse

Sachie Yamada, Takayuki Kanda, Kanako Tomita, HRI 2020

Abuse from children is a major difficulty for service robots. We studied various cases of abuse and established a model describing the social guides involved in the escalation of robot abuse. The model was then confirmed on a large amount of observation data covering 522 children.


Publication: Sachie Yamada, Takayuki Kanda, and Kanako Tomita. 2020. An Escalating Model of Children’s Robot Abuse. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ‘20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3319502.3374833

Abstract: We reveal the process of children engaging in such serious abuse as kicking and punching robots. In study 1, we established a process model of robot abuse and used a qualitative analysis method specialized for time-series data: the Trajectory Equifinality Model (TEM). With the TEM method, we analyzed interactions from nine children who committed serious robot abuse, from which we developed a multi-stage model: the abuse escalation model. The model has four stages: approach, mild abuse, physical abuse, and escalation. For each stage, we identified social guides (SGs), which are influencing events that fuel the stage. In study 2, we conducted a quantitative analysis to examine the effect of these SGs. We analyzed 12 hours of data that included 522 children who visited the observed area near the robot, coded their behaviors, and statistically tested whether the presence of each SG promoted the stage. Our analysis confirmed the correlations between four SGs and children’s behaviors: the presence of other children was related to a new child approaching the robot (SG1); mild abuse by another child was related to a child engaging in mild abuse (SG2); physical abuse by another child was related to a child engaging in physical abuse (SG3); and encouragement from others was related to a child escalating the abuse (SG5).
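
For illustration, the sketch below shows one way the relation between a social guide and a stage of the escalation model could be tested on coded observation counts. It is not the authors' analysis code, and the counts are invented for the example.

```python
# Illustrative sketch: chi-square test of whether the presence of a social
# guide (SG) is related to children reaching a given stage of the model.
# NOT the authors' analysis code; the counts below are invented.

from scipy.stats import chi2_contingency

#                        child committed mild abuse:  yes    no
observed = [[30, 70],  # mild abuse by another child was present
            [10, 190]] # it was not present
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```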


Parent Disciplining Styles to Prevent Children’s Misbehaviors toward a Social Robot

Jorge Gallego Pérez, Kazuo Hiraki, Yasuhiro Kanakogi, Takayuki Kanda, HAI 2019

It has been observed that children tend to interrupt and sometimes bully robots in public spaces. In a laboratory environment designed to stimulate children's disruptive behaviour, we evaluated the robot's use of an adaptation of a parental discipline strategy, the so-called love-withdrawal technique.


Publication: Jorge Gallego Pérez, Kazuo Hiraki, Yasuhiro Kanakogi, and Takayuki Kanda. 2019. Parent Disciplining Styles to Prevent Children’s Misbehaviors toward a Social Robot. In Proceedings of the 7th International Conference on Human-Agent Interaction (HAI ’19). Association for Computing Machinery, New York, NY, USA, 162–170. DOI:https://doi.org/10.1145/3349537.3351903

Abstract: It has been observed that children tend to interrupt and sometimes bully robots in public spaces. In a laboratory environment designed to stimulate children's disruptive behaviour, we compared the robot's use of an adaptation of a parental discipline strategy, the so-called love-withdrawal technique, to a similar set of robot behaviors that lacked any specific strategy (neutral condition). The main insight we gained was that we should perhaps not focus on general robot behaviors meant to fit all children, but rather adapt the robot's behaviors to children's individual differences. For instance, we found that the love-withdrawal-based strategy was significantly more effective on children aged 8-9 than on children aged 7.


Monitoring Blind Regions with Prior Knowledge Based Sound Localization

Jani Even, Satoru Satake, Takayuki Kanda, ICSR 2019

Using a sound localization method designed for dealing with blind regions, we developed a robot that can identify noises coming from places out of sight and infer what is happening despite not seeing it. When we tested it in a convenience store replica, participants perceived the robot as more aware of its surroundings, which could help prevent shoplifting.


Publication: Even J., Satake S., Kanda T. (2019) Monitoring Blind Regions with Prior Knowledge Based Sound Localization. In: Salichs M. et al. (eds) Social Robotics. ICSR 2019. Lecture Notes in Computer Science, vol 11876. Springer, Cham

Link: https://doi.org/10.1007/978-3-030-35888-4_64

Abstract: This paper presents a sound localization method designed for dealing with blind regions. The proposed approach mimics the human ability to guess what is happening in blind regions by using prior knowledge. A user study was conducted to demonstrate the usefulness of the proposed method for human-robot interaction in environments with blind regions. The subjects participated in a shoplifting scenario during which the shop clerk was a robot that had to rely on its hearing to monitor a blind region. The participants understood the enhanced capability of the robot, and it favorably affected their rating of the robot using the proposed method.


Approaching Strategy for a Robot to Admonish Pedestrians

Kazuki Mizumaru, Satoru Satake, Takayuki Kanda, Tetsuo Ono, HRI 2019

By analysing the movements of a professional guard in a shopping mall, this study identified differences in trajectory and speed when approaching visitors to reprove inappropriate behaviour, as compared to approaching them to provide a service, and integrated this approach pattern into a guard robot.


Publication: Mizumaru, K., Satake, S., Kanda, T., & Ono, T. (2019). Stop Doing it! Approaching Strategy for a Robot to Admonish Pedestrians. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 449-457.

Link: https://dl.acm.org/doi/abs/10.5555/3378680.3378756

Abstract: We modeled a robot’s approaching behavior for giving admonishment. We started by analyzing human behaviors. We conducted a data collection in which a guard approached others in two ways: 1) for admonishment, and 2) for a friendly purpose. We analyzed the difference between the admonishing approach and the friendly approach. The approaching trajectories in the two approach types are similar; nevertheless, there are two subtle differences. First, the admonishing approach is slightly faster (1.3 m/sec) than the friendly approach (1.1 m/sec). Second, at the end of the approach, there is a ‘shortcut’ in the trajectory. We implemented this model of the admonishing approach in a robot. Finally, we conducted a field experiment to verify the effectiveness of the model. The robot was used to admonish people who were using a smartphone while walking. The results show that significantly more people yielded to admonishment from a robot using the proposed method than from a robot using the friendly approach method.
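
For illustration, the sketch below parameterizes the two approach styles using the speeds reported in the abstract. It is not the authors' planner: only the two speeds come from the abstract, and the treatment of the 'shortcut' is merely a placeholder for the idea of cutting toward the target at the end of the approach.

```python
# Rough sketch of how the two approach styles could be parameterized.
# NOT the authors' planner: only the speeds (1.3 m/s admonishing vs 1.1 m/s
# friendly) come from the abstract; "shortcut_at_end" only marks the idea of
# cutting toward the target at the end of the trajectory.

def approach_parameters(purpose: str) -> dict:
    """Return approach speed and end-of-approach behavior for a given purpose."""
    if purpose == "admonish":
        return {"speed_mps": 1.3, "shortcut_at_end": True}
    if purpose == "friendly":
        return {"speed_mps": 1.1, "shortcut_at_end": False}
    raise ValueError(f"unknown purpose: {purpose!r}")

if __name__ == "__main__":
    for p in ("admonish", "friendly"):
        params = approach_parameters(p)
        print(p, params, f"time to close 5 m: {5.0 / params['speed_mps']:.1f} s")
```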


Autonomously Learning One-To-Many Social Interaction Logic from Human-Human Interaction Data

Amal Nanavati, Malcolm Doering, Dražen Brščić, Takayuki Kanda, HRI 2020

We present a data-driven system that learns how a robotic shopkeeper should interact with customers, solely from human-human interaction data. The system tackles the challenges of multi-party interaction and significantly outperforms the state of the art. We believe such systems will be widely used to train service robots to interact with human customers.


Publication: Amal Nanavati, Malcolm Doering, Dražen Brščić, and Takayuki Kanda. 2020. Autonomously Learning One-To-Many Social Interaction Logic from Human-Human Interaction Data. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 9 pages.

Abstract: We envision a future where service robots autonomously learn how to interact with humans directly from human-human interaction data, without any manual intervention. In this paper, we present a data-driven pipeline that: (1) takes in low-level data of a human shopkeeper interacting with multiple customers (28 hours of collected data); (2) autonomously extracts high-level actions from that data; and (3) learns – without manual intervention – how a robotic shopkeeper should respond to customers’ actions online. Our proposed system for learning the interaction logic uses neural networks to first learn which customer actions are important to respond to and then learn how the shopkeeper should respond to those important customer actions. We present a novel technique for learning which customer actions are important by first learning the hidden causal relationship between customer and shopkeeper actions. In an offline evaluation, we show that our proposed technique significantly outperforms state-of-the-art baselines, in both which customer actions are important and how to respond to them.
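
As a conceptual illustration of the two-stage setup (not the authors' architecture), the sketch below trains one classifier to decide whether a customer action needs a response and a second to pick the response. The features and labels are random stand-ins for the extracted high-level actions.

```python
# Conceptual sketch of the two-stage learning setup described above.
# NOT the authors' architecture: features and labels are random stand-ins.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # stand-in features for customer actions
important = rng.integers(0, 2, size=200)   # 1 = action requires a response
response = rng.integers(0, 5, size=200)    # index of the shopkeeper action

# Stage 1: learn which customer actions are important to respond to.
importance_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, important)

# Stage 2: learn how to respond, trained only on the important actions.
mask = important == 1
response_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X[mask], response[mask])

def shopkeeper_reply(x):
    """Respond only if the customer action is judged important."""
    if importance_net.predict([x])[0] == 1:
        return int(response_net.predict([x])[0])
    return None  # no response needed

print(shopkeeper_reply(X[0]))
```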


Identification of social relation within pedestrian dyads

Zeynep Yucel, Francesco Zanlungo, Claudio Feliciani, Adrien Gregorj, Takayuki Kanda, PLoS ONE, 2019

This study concerns pedestrian behaviour in public environments. Specifically, we develop an algorithm to infer the social relation between two pedestrians walking in a group. A robot could use this algorithm to infer, in a probabilistic way, whether pedestrians are colleagues, friends, members of a family or a couple, and thereby improve its ability to interact with humans.


Publication: Yucel Z, Zanlungo F, Feliciani C, Gregorj A, Kanda T (2019) Identification of social relation within pedestrian dyads. PLoS ONE 14(10): e0223656. https://doi.org/10.1371/journal.pone.0223656

Abstract: This study focuses on social pedestrian groups in public spaces and makes an effort to identify the type of social relation between the group members. As a first step for this identification problem, we focus on dyads (i.e., two-person groups). Moreover, as a mutually exclusive categorization of social relations, we consider the domain-based approach of Bugental, which precisely corresponds to social relations of colleagues, couples, friends and families, and identify each dyad with one of those relations. For this purpose, we use anonymized trajectory data and derive a set of observables thereof, namely, inter-personal distance, group velocity, velocity difference and height difference. Subsequently, we use the probability density functions (pdf) of these observables as a tool to understand the nature of the relation between pedestrians. To that end, we propose different ways of using the pdfs. Namely, we introduce a probabilistic Bayesian approach and contrast it with a functional metric one, and evaluate the performance of both methods with appropriate assessment measures. This study stands out as the first attempt to automatically recognize the social relation within pedestrian groups. Additionally, in doing so it uses completely anonymous data and shows that social relation can still be recognized with good accuracy without invading privacy. In particular, our findings indicate that significant recognition rates can be attained for certain categories and with certain methods. Specifically, we show that a very good recognition rate is achieved in distinguishing colleagues from leisure-oriented dyads (families, couples and friends), whereas the distinction between the leisure-oriented dyads turns out to be inherently harder, but still possible at reasonable rates, in particular if families are restricted to parent-child groups. In general, we establish that the Bayesian method outperforms the functional metric one, probably due to the difficulty of the latter in learning observable pdfs from individual trajectories.
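
As an illustration of the Bayesian idea (not the authors' exact formulation), the sketch below assigns a dyad to the relation category whose observable densities best explain it, assuming uniform priors; the Gaussian parameters and the restriction to two observables are invented for the example.

```python
# Toy sketch of the Bayesian classification idea described above.
# NOT the authors' formulation: each category gets a density per observable,
# and the dyad is assigned to the category maximizing the summed log-densities
# (uniform priors). The Gaussian parameters below are invented.

from scipy.stats import norm

CATEGORY_PDFS = {
    "colleagues": {"distance": norm(0.85, 0.15), "velocity": norm(1.30, 0.15)},
    "couples":    {"distance": norm(0.70, 0.12), "velocity": norm(1.10, 0.15)},
    "friends":    {"distance": norm(0.75, 0.15), "velocity": norm(1.15, 0.15)},
    "families":   {"distance": norm(0.80, 0.18), "velocity": norm(1.05, 0.20)},
}

def classify_dyad(observables: dict) -> str:
    """Return the relation whose observable pdfs best explain the dyad."""
    def log_likelihood(pdfs):
        return sum(pdfs[name].logpdf(value) for name, value in observables.items())
    return max(CATEGORY_PDFS, key=lambda cat: log_likelihood(CATEGORY_PDFS[cat]))

print(classify_dyad({"distance": 0.9, "velocity": 1.35}))  # likely "colleagues"
```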


Intrinsic group behaviour II: On the dependence of triad spatial dynamics on social and personal features; and on the effect of social interaction on small group dynamics

Francesco Zanlungo, Zeynep Yücel, Takayuki Kanda, PLoS ONE, 2019

This work studies the effect of explicit social interaction (e.g., oral communication) and of social relation (being colleagues, friends or family) on the velocity and relative distance probability distributions of groups of three pedestrians. This study could be the basis for the development of algorithms to infer the social dynamics of larger pedestrian groups.


Publication: Zanlungo F, Yücel Z, Kanda T (2019) Intrinsic group behaviour II: On the dependence of triad spatial dynamics on social and personal features; and on the effect of social interaction on small group dynamics. PLoS ONE 14(12): e0225704. https://doi.org/10.1371/journal.pone.0225704

Abstract: In a follow-up to our work on the dependence of walking dyad dynamics on intrinsic properties of the group, we now analyse how these properties affect groups of three people (triads), also taking into consideration the effect of social interaction on the dynamical properties of the group. We show that there is a strong parallel between triads and dyads. Work-oriented groups are faster and walk at a larger distance between them than leisure-oriented ones, while the latter move in a less ordered way. Such differences are present also when colleagues are contrasted with friends and families; nevertheless, the similarity between friend and colleague behaviour is greater than that between family and colleague behaviour. Male triads walk faster than triads including females, males keep a larger distance than females, and same-gender groups are more ordered than mixed ones. Groups including tall people walk faster, while those with elderly people or children walk at a slower pace. Groups including children move in a less ordered fashion. Results concerning relation and gender are particularly strong, and we investigated whether they hold also when other properties are kept fixed. While this is clearly true for relation, patterns related to gender often turned out to be diminished. For instance, the velocity difference due to gender is reduced if we compare only triads in the colleague relation. The effects on group dynamics due to intrinsic properties are present regardless of social interaction, but socially interacting groups are found to walk in a more ordered way. This has an opposite effect on the space occupied by non-interacting dyads and triads, since loss of structure makes dyads larger, but causes triads to lose their characteristic V formation and walk in a line (i.e., occupying more space in the direction of movement but less space in the orthogonal one).


Would You Mind Me if I Pass by You? Socially-Appropriate Behaviour for an Omni-based Social Robot in Narrow Environment

TODO