Publications

1. 

Title: "Health technology-enabled interventions for adherence support and retention in care among US HIV-infected adolescents and young." 

Citation: Navarra, A., Gwadz, M., Whittemore, R., Bakken, S., Cleland, C., Burleson, W., Kaplan Jacobs, S., and D’Eramo Melkus, G. (2017) Health technology-enabled interventions for adherence support and retention in care among US HIV-infected adolescents and young adults. AIDS and Behavior. IF: 3.1.

Abstract: The objective of this integrative review was to describe current US trends for health technology-enabled adherence interventions among behaviorally HIV-infected youth (ages 13–29 years), and to present the feasibility and efficacy of identified interventions. A comprehensive search was executed across five electronic databases (January 2005–March 2016). Of the 1911 identified studies, nine met the inclusion criteria of a quantitative or mixed-methods design testing a technology-enabled adherence and/or retention intervention for US HIV-infected youth. The majority were small pilots. Intervention dose varied between studies applying similar technology platforms, and more than half were not informed by a theoretical framework. Retention in care was not a reported outcome, and operationalization of adherence was heterogeneous across studies. Despite these limitations, synthesized findings from this review demonstrate the feasibility of computer-based interventions and the initial efficacy of SMS texting for adherence support among HIV-infected youth. Moving forward, there is a pressing need to expand this evidence base.

Keywords: Patient compliance, Smartphone, Text messaging, Cell phones, HIV, Adherence, Retention in HIV care, Technology.

2. 

Title: "Assistive Dressing System: A Capabilities Study for Personalized Support of Dressing Activities for People Living with Dementia."

Citation: Burleson, W., Mahoney, D., Lozano, C., Ravishankar, V., Rowe, J., Mahoney, E. (2017) Assistive Dressing System: A Capabilities Study for Personalized Support of Dressing Activities for People Living with Dementia. JMIR Medical Informatics.

Abstract: People living with advanced stages of dementia (PWD) or other cognitive disorders do not have the luxury of remembering how to perform basic day-to-day activities, making them increasingly dependent on the assistance of caregivers. The act of dressing is one of the most common activities provided by caregivers. It is also one of the most stressful for both parties due to its complexity and the privacy challenges posed during the process. In this paper, we present a first-of-its-kind system (DRESS) that aims to provide much-needed independence and privacy to PWDs and afford additional freedom to their caregivers. The DRESS system is designed to deliver continuous, automated, personally tailored feedback to support PWDs during the process of dressing. The core of DRESS consists of a computer vision-based detection system that continuously monitors the dressing state of the user, identifies and prompts correct and incorrect dressing states, and provides corresponding cues to help complete the dressing process adequately with minimal, or ideally no, caregiver intervention. The DRESS system detects clothing location, orientation, and status with respect to the dressing process by identifying and tracking fiducial markers (visual icons) attached to clothes.

Keywords: User-centered design; Assistive technologies for persons with disabilities; Human factors; Performance; Context-aware computing; Ubiquitous computing; Sensing systems; Image recognition.
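
The DRESS abstract above turns on one technical idea: tracking fiducial markers on garments to infer a dressing state. As a rough illustration of that idea (not the authors' implementation), the sketch below uses OpenCV's ArUco marker detector; the marker IDs, state names, and geometry heuristic are hypothetical assumptions.

```python
# Minimal sketch, assuming ArUco-style fiducial markers (requires
# opencv-contrib-python >= 4.7). Not the DRESS system's actual code;
# marker IDs and state names are hypothetical.
import cv2

COLLAR_ID, LEFT_CUFF_ID, RIGHT_CUFF_ID = 0, 1, 2  # hypothetical marker IDs

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters(),
)

def dressing_state(frame):
    """Classify one camera frame into a coarse, hypothetical dressing state."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return "garment-not-visible"
    visible = {int(i) for i in ids.flatten()}
    centers = {int(i): c[0].mean(axis=0)  # marker center in pixel coordinates
               for i, c in zip(ids.flatten(), corners)}
    if {COLLAR_ID, LEFT_CUFF_ID, RIGHT_CUFF_ID} <= visible:
        # Image y grows downward, so cuffs below the collar suggest the
        # shirt is right side up.
        upright = all(centers[c][1] > centers[COLLAR_ID][1]
                      for c in (LEFT_CUFF_ID, RIGHT_CUFF_ID))
        return "oriented-correctly" if upright else "upside-down"
    return "partially-occluded"  # e.g., an arm is mid-sleeve
```

A real pipeline of this kind would feed such per-frame states into a state machine that decides when to issue an audio or visual prompt; the paper's prompting logic is considerably richer than this frame-level check.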

3. 

Title: "CrowdMuse: Supporting Crowd Idea Generation through User Modeling and Adaptation."

Citation: Girotto, V., Walker, E., Burleson, W. (2019) CrowdMuse: Supporting Crowd Idea Generation through User Modeling and Adaptation. In Proceedings of the 2019 Conference on Creativity and Cognition (C&C '19). ACM, New York, NY, USA, 95-106. AR: 29%, Best Paper Award.

Abstract: Online crowds, with their large numbers and diversity, show great potential for creativity. Research has explored different ways of augmenting their creative performance, particularly during large-scale brainstorming sessions. Traditionally, this comes in the form of showing ideators some form of inspiration to get them to explore more categories or generate more and better ideas. The mechanisms used to select which inspirations are shown to ideators thus far have not taken ideators' individualities into consideration, which could hinder the effectiveness of support. In this paper, we introduce and evaluate CrowdMuse, a novel adaptive system for supporting large-scale brainstorming. The system models ideators based on their past ideas and adapts the system views and inspiration mechanism accordingly. We evaluate CrowdMuse in two iterative, large online studies and discuss the implications of our findings for designing adaptive creativity support systems.

Keywords: Creativity; Brainstorming; Crowd; Adaptive systems.
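
To make the "models ideators based on their past ideas" idea in the CrowdMuse abstract concrete, here is a hedged sketch: an ideator is profiled by a TF-IDF vector over their past ideas, and the candidate inspiration least similar to that profile is surfaced, nudging them toward unexplored categories. This heuristic and all names are illustrative assumptions, not CrowdMuse's actual algorithm.

```python
# Illustrative user-modeling sketch; not the CrowdMuse implementation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_inspiration(past_ideas, candidates):
    """Return the candidate inspiration least similar to the ideator's past ideas."""
    vec = TfidfVectorizer(stop_words="english")
    matrix = vec.fit_transform(past_ideas + candidates)
    # The ideator "model": mean TF-IDF vector of their past ideas.
    profile = np.asarray(matrix[: len(past_ideas)].mean(axis=0))
    sims = cosine_similarity(profile, matrix[len(past_ideas):])[0]
    return candidates[int(sims.argmin())]  # least similar -> new territory

# A fixated ideator gets steered away from yet another app idea.
past = ["a mobile app for carpooling", "an app that gamifies recycling"]
pool = ["a community tool-lending library", "a carpool scheduling app"]
print(pick_inspiration(past, pool))  # -> "a community tool-lending library"
```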

4. 

Title: "More than a Feeling: The MiFace Framework for Defining Facial Communication Mappings."

Citation: Butler, C., Michalowicz, S., Subramanian, L., Burleson, W. (2017) More than a Feeling: The MiFace Framework for Defining Facial Communication Mappings. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17), Quebec City, Canada. 12 pages. AR: 12%.

Abstract: Facial expressions transmit a variety of social, grammatical, and affective signals. For technology to leverage this rich source of communication, tools that better model the breadth of information they convey are required. MiFace is a novel framework for creating expression lexicons that map signal values to parameterized facial muscle movements inferred by trained experts. The set of generally accepted expressions established in this way is limited to six basic displays of affect. In contrast, our generative approach simulates muscle movements on a 3D avatar. By applying natural language processing techniques to crowdsourced free-response labels for the resulting images, we efficiently converge on an expression’s value across signal categories. Two studies returned 218 discriminable facial expressions with 51 unique labels. The six basic emotions are included, but we additionally define such nuanced expressions as embarrassed, curious, and hopeful.

Keywords: Facial expression recognition; virtual humans; 3D modeling; avatars; affective computing; natural language processing; social signal processing.
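
As a toy illustration of the label-convergence step described in the MiFace abstract above (the paper applies fuller natural language processing techniques), the sketch below normalizes crowdsourced free-response labels with a hypothetical synonym table and accepts a label once agreement crosses a threshold. Every name and value here is an illustrative placeholder.

```python
# Toy label-convergence sketch; not the MiFace pipeline. The synonym
# table and agreement threshold are hypothetical stand-ins for the
# paper's NLP techniques.
from collections import Counter

SYNONYMS = {"embarrassment": "embarrassed", "ashamed": "embarrassed",
            "inquisitive": "curious", "interested": "curious"}

def converge(labels, threshold=0.5):
    """Return the consensus label if agreement exceeds `threshold`, else None."""
    normalized = [SYNONYMS.get(l.strip().lower(), l.strip().lower())
                  for l in labels]
    label, count = Counter(normalized).most_common(1)[0]
    return label if count / len(normalized) >= threshold else None

print(converge(["Embarrassed", "ashamed", "embarrassment", "surprised"]))
# -> "embarrassed" (3/4 agreement)
```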