About me


Recent Update(s)

12/3/2021 – Our co-design paper has been conditionally accepted to the CHI’22 conference! Be on the lookout for the full publication in April 2022.

Who am I?

I am a fifth-year Computing and Information Sciences PhD student at the Rochester Institute of Technology (RIT). I am an accessibility and human-computer interaction researcher in the Center for Accessibility and Inclusion Research (CAIR) Lab at RIT, advised by Dr. Matt Huenerfauth. My work is generously supported by the National Science Foundation Graduate Research Fellowship. I have published my work at leading HCI research venues, including the ACM CHI and ASSETS conferences. I am profoundly Deaf and fluent in English and American Sign Language.

My current research investigates how to better design automatic speech recognition (ASR) technologies to facilitate communication between deaf and hard-of-hearing (DHH) and hearing individuals. To support my dissertation research, I have hosted and led co-design activities, conducted interviews and experimental studies, and performed qualitative and quantitative analyses of video recordings, questionnaires, and speech data. I also have a computer science background, and my secondary research interests include augmented and virtual reality (AR/VR) and machine learning (ML), particularly how they can be applied to American Sign Language (ASL) research and used to improve accessibility for the DHH community.

Please check my full CV for more details.

Current research focus

Deaf and hard-of-hearing (DHH) individuals face several barriers to communication in the workplace, particularly in small-group meetings with their hearing peers. The impromptu nature of these meetings makes it difficult to schedule sign-language interpreting or professional captioning services. Recent advances in automatic speech recognition (ASR) technology could help remove some of the barriers that prevent DHH people from becoming involved in group meetings. However, ASR is still imperfect, and its output text contains errors in many real-world conversational settings. My research investigates whether ASR technology can aid understanding and communication between DHH and hearing individuals. My dissertation research will evaluate the effectiveness of using ASR in small-group meetings through empirical studies with DHH and hearing participants, and will develop system-design guidelines that encourage hearing participants to communicate and speak more clearly.

Please visit my Publications page for more details.