Companion-like Wellness Intervention

Social support plays a crucial role in managing and enhancing one's mental health and well-being. To explore the role of a robot's companion-like behavior in its therapeutic interventions, we conducted an eight-week deployment study with seventy participants to compare the impact of (1) a control robot with only assistant-like skills, (2) a coach-like robot with additional instructive positive psychology interventions, and (3) a companion-like robot that delivered the same interventions in a peer-like and supportive manner. The companion-like robot was the most effective in building a positive therapeutic alliance with people, enhancing participants' well-being and readiness for change. Our work offers valuable insights into how companion AI agents could further enhance the efficacy of mental health interventions by strengthening their therapeutic alliance with people for long-term mental health support.

This work was funded by ETRI in the Republic of Korea.

[HRI]   [video]

Positive Psychology for College Students

Many college students suffer from mental health issues that impact their physical, social, and occupational outcomes. Various scalable technologies have been proposed to mitigate the negative impact of mental health disorders. We explore the use of a social robot coach to deliver positive psychology interventions to college students living in on-campus dormitories. A one-week deployment study with 35 college students showed that our robotic intervention improved participants' psychological well-being, mood, and readiness to change behavior. We also found that students' personality traits had a significant association with intervention efficacy.

This work was funded by ETRI in the Republic of Korea.

*Best Paper Award* and *Best Student Paper Finalist* at RO-MAN 2020

[RO-MAN]   [RO-MAN talk]   [UMUAI]   

Migratable AI

Conversational assistants are all around us. Yet these agents do not share information with each other or provide continuous interactions across devices. We present "Migratable AI," a platform where a conversational agent migrates across different devices and shares information to enable a more continuous and personalized experience. Results from a 2x2 experimental study suggest that migrating the agent's identity and information improves the perceived competence, likability, trust, and social presence of the conversational agent.
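As a rough illustration of the migration idea, an agent's identity and accumulated memory can be serialized on one device and restored on another. The field names and structure below are hypothetical stand-ins, not the actual Migratable AI protocol:

```python
import json

# Hypothetical sketch of migrating an agent between devices by serializing
# its identity and conversational memory. Field names are illustrative.

def export_agent(agent):
    """Serialize the agent's identity and memory for transfer."""
    return json.dumps({
        "identity": agent["identity"],      # e.g., name, voice, persona
        "memory": agent["memory"],          # facts learned about the user
        "last_device": agent["last_device"],
    })

def import_agent(payload, device_id):
    """Restore the agent on a new device, preserving identity and memory."""
    agent = json.loads(payload)
    agent["last_device"] = device_id
    return agent

# The agent leaves a smart speaker at home and reappears in the car,
# still knowing who it is and what it has learned.
home_agent = {"identity": {"name": "Ava"},
              "memory": {"user_name": "Sam"},
              "last_device": "smart_speaker"}
car_agent = import_agent(export_agent(home_agent), "car_dashboard")
```

In the study's terms, transferring the `identity` block corresponds to migrating who the agent is, while the `memory` block corresponds to migrating what it knows.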


Personalized Interactive Journaling

We developed a virtual avatar that provides personalized positive psychology interventions to enhance users' psychological well-being on smartphones. Users' emotional states were measured by analyzing their facial expressions and the sentiment of their SMS messages. A Markov Decision Process (MDP) model and the State-Action-Reward-State-Action (SARSA) algorithm were used to learn users' preferences for the positive psychology interventions based on their long-term mood and immediate affect. A three-week study showed that interacting with the avatar resulted in immediate improvements in participants' arousal and valence. The interaction duration also increased significantly throughout the study.
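The on-policy SARSA update at the core of this kind of preference learning can be sketched as follows; the states, actions, and parameter values are illustrative stand-ins, not the deployed system's:

```python
import random

# Minimal SARSA sketch for learning which positive psychology intervention
# a user prefers in a given mood state. All names and values below are
# hypothetical examples, not the system described above.

states = ["low_mood", "neutral", "high_mood"]           # coarse mood states
actions = ["gratitude", "savoring", "self_compassion"]  # example interventions

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in states for a in actions}

def choose_action(state):
    """Epsilon-greedy choice of the next intervention to offer."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

# One learning step: a user in a low mood responded well (+1) to gratitude.
sarsa_update("low_mood", "gratitude", 1.0, "neutral", "savoring")
```

The reward signal here would come from the mood and affect measurements described above, so interventions the user responds to well are offered more often over time.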

This work was funded by LG Electronics.


Pediatric Companion Robot 

Children and their parents undergo challenging experiences when admitted for inpatient care at pediatric hospitals. We aim to mitigate pediatric patients' stress, anxiety, and pain by engaging them in playful interactive activities with a social robot. We conducted a randomized controlled trial at Boston Children's Hospital with 54 pediatric inpatients in the MSICU, Oncology, and Surgical units, comparing the effects of the Huggable robot to a virtual character on a screen and a plush teddy bear. The study showed that the social robot's physical embodiment had a significant impact on children's and their caregivers' engagement and socio-emotional interactions.

This work was funded by Boston Children's Hospital and was done in collaboration with Boston Children's Hospital and Northeastern University. 

[Pediatrics]   [CHI]   [RO-MAN]   [Wired]   [NYT]

The Huggable Robot 

The Huggable robot is designed to playfully interact with children and provide socio-emotional support in pediatric care contexts. Our design takes into consideration that many young patients are nervous, intimidated, and socio-emotionally vulnerable at hospitals. The robot has a childlike, furry appearance and can perform swift, smooth motions. It uses a smartphone for its computational power and internal sensors. The robot's haptic sensors perceive physical touch, which the robot can respond to in meaningful ways. A modular arm component allows easy sensor replacement and increases the usability of the Huggable robot for various pediatric care services. Removable fur pieces can be machine-washed between play sessions for infection control.

This work was funded by Boston Children's Hospital and was done in collaboration with Boston Children's Hospital and Northeastern University. 


Robot Vocal Expressivity

Prior research has established that dialogic reading, the process of having a dialogue with students around the text they are reading, is an effective method for expanding young children's vocabulary. In this project, we asked whether a social robot can effectively engage preschoolers in dialogic reading, and whether the robot's verbal expressiveness would impact children's learning and engagement during a dialogic reading activity. Our study showed that children learned from the robot, emulated the robot's story during story retelling, and treated the robot as a social being. Moreover, children showed more engagement and were more likely to identify target vocabulary words after interacting with a robot with an expressive voice than with a robot with a flat voice. Taken together, these results suggest that children may benefit more from the expressive robot than from the flat one.

This project was funded by an NSF Cyberlearning grant and was done in collaboration with Harvard University and Northeastern University.

[Frontiers in Human Neuroscience]   [video]

Non-verbal Cues to Learn from Robots

When learning from human partners, infants and young children pay attention to non-verbal signals to figure out what a person is looking at and why. In this study, we examined whether young children attend to the gaze and body orientation of a robot as they do to those of a human partner during a word-learning task. When images were presented close together, children subsequently identified the correct animals at chance level, whether the animals had been named by the human or by the robot. By contrast, when the two images were presented farther apart, children identified the correct animals at better-than-chance level with both interlocutors. These results suggest that children learned equally well from the robot and the human, but that in each case learning was constrained by the distinctiveness of the non-verbal cues to reference.

This work was funded by an NSF Cyberlearning grant and was done in collaboration with Harvard University and Northeastern University. 


Robots as Informants

We investigate how children perceive a robot's social cues and whether they interpret these cues in the same way they interpret cues emitted by a human. This project leverages emerging technologies in social robotics and recent findings from social, developmental, and cognitive psychology to design, implement, and evaluate a new generation of robots capable of interacting with and instructing young learners in a truly social way.

Our study showed that children treated the robots as interlocutors, supplied information to the robots and retained what the robots told them. Children also treated the robots as informants from whom they could seek information, and were more attentive and receptive to the robot that displayed greater non-verbal contingency. Such selective information seeking is consistent with recent findings showing that although young children learn from others, they are selective with respect to the informants that they question or endorse.

This work was funded by an NSF Cyberlearning grant and was done in collaboration with Harvard University and Northeastern University. 

[Topics in Cognitive Science]

Learning French with Sophie

Young children learn better from playful interactions than from text-based lessons. However, many existing digital learning resources are screen-based and lack the back-and-forth interactions and practice necessary to learn a new language. We present Sophie, a robotic learning companion that speaks only French and is designed to encourage young children to practice French through playful interactions. The "café table" on an Android tablet provides a hybrid physical-digital interface for the child and the robot to share, and they learn the names of several food items in French while engaging in imaginative play. Sophie uses social cues, such as gaze, facial expressions, and gestures, to express its likes and dislikes, and can respond to the child's behavior.

This project was supported by an NSF Cyberlearning grant.

I worked on this project as an undergraduate student, helping develop the Android tablet application used for the shared "café table" space and assisting in running the user study for Natalie Freed's thesis.


DragonBot

DragonBot is a low-cost robot platform designed to support long-term interactions with young children. It runs entirely on an Android phone, which displays an animated virtual face and fully controls the actuation of the robot. It leverages the phone's speaker, microphone, and camera to communicate with users. A wireless Internet connection allows the robot to leverage cloud-computing services and talk to other sensors and devices, e.g., a smart tablet. A teleoperation interface lets a remote operator see and hear what the robot perceives, trigger appropriate animations, and stream the operator's voice in real time.

I worked on this project as an undergraduate student and developed the real-time voice pitching and viseme extraction features of the teleoperation interface. The remote operator's voice was pitch-shifted using the fast Fourier transform (FFT) to sound like an animated character, and appropriate visemes were extracted using the Annosoft software. The processed audio and viseme data were sent to the robot and expressed through its speaker and mouth animation.
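The core idea of FFT-based pitch shifting can be sketched on a single audio frame as below. This is a deliberately simplified illustration (a crude spectral bin shift); it is not the actual teleoperation code, and a production voice changer would use a phase vocoder with overlap-add to avoid artifacts:

```python
import numpy as np

def pitch_shift_frame(frame, shift_bins):
    """Crudely raise (or lower) pitch by shifting FFT bins.

    Simplified sketch only: real systems use a phase vocoder with
    overlap-add across frames rather than a bare bin shift.
    """
    spectrum = np.fft.rfft(frame)
    shifted = np.zeros_like(spectrum)
    if shift_bins >= 0:
        # Move spectral energy upward; high bins fall off the end.
        shifted[shift_bins:] = spectrum[:len(spectrum) - shift_bins]
    else:
        # Negative shift moves energy downward instead.
        shifted[:shift_bins] = spectrum[-shift_bins:]
    return np.fft.irfft(shifted, n=len(frame))

# Example: a 440 Hz tone in a 1024-sample frame at 8 kHz, shifted up 10 bins.
sr = 8000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 440 * t)
shifted = pitch_shift_frame(tone, 10)
```

Shifting every bin upward raises the perceived pitch of the operator's voice toward a cartoon-character register, which is the effect the teleoperation interface needed.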


Le Fonduephone

Le Fonduephone is a dining experience designed to let children practice language with a robotic companion in a mixed-reality environment. The child sits at a small table with the robot in a virtual French café environment projected around them. A projector displays images of fruit onto physical plates on the table, and the child can use a sensor fork to pick up and take bites of the virtual fruit. This blend of the virtual and the physical creates a shared play context, preserving the benefits of tangible objects as well as the flexibility and ease of sensing of digital content. A teacher or parent can also observe the interaction remotely via a virtual representation of the café and adjust the robot's conversation and behavior to support the learner.

I worked on this project as an undergraduate student and developed the teleoperation interface that calibrated the locations of the digital content for the interaction. I also worked on improving the digital fruit "pick-up" interaction with the sensor-embedded fork.