


At Dutch Design Week 2019, Eindhoven. Photo credit: Jianli Zhai
At the 2024 iF Award Ceremony, Berlin.
Photo credit: Kaixi Zhou
At the Social Robotics Lab, Eindhoven, 2019. Photo credit: SociBot
Hi, I'm Zhiping (Arya) Zhang
/ʒiː pɪŋ ʒɑŋ/
👂
I'm a researcher,
and mum of 7 kitties and a zz plant.
🐈🐈🐈🐈‍⬛🐈‍⬛🐈‍⬛🐈‍⬛ 🪴
🙌
I'm a first-year PhD student at Northeastern University in the Khoury College of Computer Sciences, supervised by Prof. Tianshi Li, and a member of the PEACH (Privacy-Enabling AI and Computer-Human interaction) Lab. My research interests lie in Human-Computer Interaction, Privacy, Responsible Technology, and Behavior Change. I normally use mixed methods and research through design, building early-stage prototypes and leveraging design and technology as research tools.
I obtained an M.Sc. in Industrial Design (HCI Research Track) from Eindhoven University of Technology in the Netherlands. During my graduate studies, I was a member of the Social Robotics Lab, supervised by Prof. Emilia Barakova and Prof. Panos Markopoulos.
Before that, I earned dual first-class honours B.Eng. degrees (ranked first in my cohort) from the University of Liverpool in the UK and Xi'an Jiaotong-Liverpool University in China. There, I had the honor of being advised by Prof. Martijn ten Bhömer, who introduced me to the field of HCI.
With my industry background in Human-AI interaction systems, including roles as a UX researcher and designer at ALIBABA and as an AI product manager and creative technologist at FITURE, I value collaboration in multidisciplinary teams. This ensures that the systems we develop are genuinely integrated into users’ lives and truly make a positive impact. My shipped projects brought tangible benefits to users and were recognized by top international awards.
🏆 (2024 iF Design Award, 2023 Red Dot Award, 2020 iF Talent Award, 2018 Dutch Design Week)
I do research.
I approach complex questions using human-centered design methods, including mixed methods and research through design. My primary goal is to understand how people interact with agentic systems, such as their perceptions and mental models, and to create agents that truly benefit people. I'm interested in guiding behavior change to foster responsible technology use.
CSCW

Secret Use of Large Language Models
Zhiping Zhang, Chenxinran Shen, Bingsheng Yao, Dakuo Wang, and Tianshi Li
In CSCW 2025
The advancements of Large Language Models (LLMs) have decentralized the responsibility for the transparency of AI usage. Specifically, LLM users are now encouraged or required to disclose the use of LLM-generated content for various types of real-world tasks. However, an emerging phenomenon, users’ secret use of LLMs, raises challenges in ensuring that end users adhere to the transparency requirement. Our study used mixed methods, with an exploratory survey (125 real-world secret use cases reported) and a controlled experiment among 300 users, to investigate the contexts and causes behind the secret use of LLMs. We found that such secretive behavior is often triggered by certain tasks, transcending demographic and personality differences among users. Task types were found to affect users’ intentions to engage in secretive behavior, primarily by influencing the perceived external judgment regarding LLM usage. Our results yield important insights for future work on designing interventions to encourage more transparent disclosure of LLM/AI use.
CHI

“It’s a Fair Game”, or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Zhiping Zhang, Michelle Jia, Hao-Ping (Hank) Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, and Tianshi Li
In CHI Conference on Human Factors in Computing Systems Apr 2024
The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users’ perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users’ erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users’ ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigmatic shifts to protect the privacy of LLM-based CA users.
CHI

Human-Centered Privacy Research in the Age of Large Language Models
Tianshi Li, Sauvik Das, Hao-Ping (Hank) Lee, Dakuo Wang, Bingsheng Yao and Zhiping Zhang
In CHI Conference on Human Factors in Computing Systems (CHI’24 Companion) Apr 2024
The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns. To date, research on these privacy concerns has been model-centered: exploring how LLMs lead to privacy risks like memorization, or can be used to infer personal characteristics about people from their content. We argue that there is a need for more research focusing on the human aspect of these privacy issues: e.g., research on how design paradigms for LLMs affect users' disclosure behaviors, users' mental models and preferences for privacy controls, and the design of tools, systems, and artifacts that empower end-users to reclaim ownership over their personal data. To build usable, efficient, and privacy-friendly systems powered by these models with imperfect privacy properties, our goal is to initiate discussions to outline an agenda for conducting human-centered research on privacy issues in LLM-powered systems. This Special Interest Group (SIG) aims to bring together researchers with backgrounds in usable security and privacy, human-AI collaboration, NLP, or any other related domains to share their perspectives and experiences on this problem, to help our community establish a collective understanding of the challenges, research opportunities, research methods, and strategies to collaborate with researchers outside of HCI.
HRI

Robot Role Design for Implementing Social Facilitation Theory in Musical Instruments Practicing
Heqiu Song, Zhiping Zhang, Emilia I. Barakova, Jaap Ham and Panos Markopoulos
In the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2020
The application of social robots has recently been explored in various types of educational settings, including music learning. Earlier research presented evidence that the mere presence of a robot can influence a person’s task performance, confirming social facilitation theory and findings in human-robot interaction. Confirming the evaluation apprehension theory, earlier studies showed that, beyond a person’s presence, that person’s social role can also influence a user’s performance: the presence of a (non-)evaluative other can influence the user’s motivation and performance differently. To investigate this, researchers need defined robot roles, which were missing until now. In the current research, we describe the design of two social roles (i.e., an evaluative role and a non-evaluative role) for a robot that can have different appearances. For this, we used the SocibotMini: a robot with a projected face, allowing diversity and great flexibility in presenting human-like social cues. An empirical study in a real practice room with 20 participants confirmed that users (i.e., children) evaluated the robot roles as intended. Thereby, the current research provides robot roles that make it possible to study whether the presence of social robots in certain social roles can stimulate practicing behavior, along with suggestions for how such roles can be designed and improved. Future studies can investigate how the presence of a social robot in a certain social role can stimulate children to practice.
I design and realize.
I believe that good design truly shines when it integrates seamlessly into users' lives, bringing tangible benefits. I enjoy bridging theory with practical needs to create applications that matter. Here is a selection of my shipped projects that had great impact on our users, achieved commercial success, and earned top international awards. I often incorporate the concept of embodied interaction in my designs, enabling technology such as AI to benefit users.


Applied in FITURE 3 PLUS
Embodied Interaction with Light to
Engage In-Home Workout
2022-2023
Worked as the Creative Technologist for the lighting system
(concept, programming, and testing)
# Embodied Interaction
# Motion Detection
# Light Pattern Coding (Java)


Applied in all FITURE intelligent mirrors
Voice Assistant in Multi-Modal Remote Control
2021-2023
Worked as the AI Product Manager for the voice assistant.
# Conversational Agent
# Remote Control
# Multi-Modal Interaction
Applied in FEMOOI Skin & Hair Care Device
2023
Worked as the HCI Designer for the embodied avatar
(concept and build)
# Human-Agent Interaction
# UX
I make for curiosity.
I enjoy making things and crafting prototypes.
"What would this idea look like if brought to life?" I create demos to see.
Throughout this process, I also enjoy problem-solving-oriented learning.
2023
# Facial Recognition # Head Movement Recognition # Unreal Engine 5
2019
# Mimic Player # Machine Learning
# Bayesian Algorithm # Game # Java

2018
# Intelligent Fabric # Muscle Detection
# EMG # Wearable # Haptic Feedback
2019
# Topological Transformation # Temperature & Humidity Sensing # Creative Electronics
2018
# Tangible Interaction # Asthma # Health and Wellbeing # Arduino # Processing # Java