Research
Selected Research with Abstracts
Shades of Representation: Auto-Detection and Perception of Skin-tone Diversity in Visual Marketing Communication
With Gijs Overgoor, Hsin-Hsuan Meg Lee, and Zhu Han
Abstract: Skin-tone representation in brand visuals can signal brands’ diversity, equity, and inclusion (DEI) efforts. However, it is unclear what skin-tone DEI means to people, and there is no objective standard for quantifying it from visual content. We propose an automated framework that assesses skin-tone DEI along three dimensions: richness (the number of skin-tone categories included), evenness (the equality of representation across skin-tone categories), and brightness (the lightness or darkness of skin tones). Through a collage-making experiment, we discover that people perceive representations with greater richness, greater evenness, and darker skin tones as more diverse. After testing the reliability and sensitivity of these measures, we find that richness and evenness are objective measures, whereas brightness is subjective due to heterogeneity in observers’ perceptions. Applying our methodology to 48,607 images posted on Instagram and Twitter by 34 fashion brands from 2019 to 2021, we find that (1) brands featured darker skin tones significantly more often in their visual communication beginning in May 2020, possibly influenced by Blackout Tuesday; and (2) noticeable improvement in skin-tone inclusion and equality within these brands did not emerge until August 2020. This study enhances our understanding of skin-tone DEI, provides tools for future researchers, and offers managerial implications that help practitioners benchmark and target their DEI efforts.
- Under review at Marketing Science (DEI Special Issue)
- Presented at 2023 Hawaii International Conference on System Sciences, Maui, Hawaii
- Presented at 2023 Marketing Science Diversity Equity Inclusion Conference, Dallas, TX
- Click here to view the full paper
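
The richness and evenness dimensions above map onto standard diversity indices. Below is a minimal sketch in Python, assuming skin tones are binned into discrete categories and using Shannon entropy with Pielou's evenness; the binning scheme and formulas are illustrative assumptions, as the abstract does not specify the paper's exact operationalization.

```python
import numpy as np

def richness_evenness(category_counts):
    """Richness and evenness of a skin-tone distribution.

    `category_counts` holds one count per skin-tone category (e.g., faces
    detected per bin). The binning and the Shannon/Pielou evenness below
    are illustrative assumptions, not necessarily the paper's exact method.
    """
    counts = np.asarray(category_counts, dtype=float)
    present = counts[counts > 0]
    richness = int(present.size)           # number of categories represented
    if richness <= 1:
        return richness, 0.0               # evenness is undefined for <= 1 category
    p = present / present.sum()            # proportions across represented categories
    shannon = -(p * np.log(p)).sum()       # Shannon entropy of the distribution
    evenness = shannon / np.log(richness)  # Pielou's J, in [0, 1]
    return richness, evenness

# Skewed vs. near-uniform representation across six categories:
print(richness_evenness([120, 60, 10, 5, 0, 0]))    # (4, ~0.66)
print(richness_evenness([30, 30, 35, 30, 35, 35]))  # (6, ~1.0)
```

Under these assumptions, a brand feed skewed toward a few skin-tone categories scores low on evenness even when its richness is high.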
Congruence Affects Social Media Ad Engagement
With Ron Dotsch, Yozen Liu, Zhu Han, and Maarten Bos
Abstract: Story ads, a popular advertising format on large social platforms like Instagram and Snapchat, have attracted significant attention in practice but remain understudied in academia. Story ads appear between user-generated videos or photos, known as Stories, on these platforms. Whereas users on platforms such as YouTube often perform targeted searches, users engage with Stories primarily for social sharing and entertainment. Because users often consume a sequence of Stories in one sitting, they may encounter an ad between two consecutive Stories. This creates a unique real-world context for examining the contextual effects of the preceding Story on subsequent ad engagement. Leveraging the preceding Story as a natural prime, we study the effects of two types of ad-context congruence on ad engagement: media-format congruence (video or image) and content congruence (17 categories, such as food, sports, and gaming). Our large-scale first-party dataset of 8,260,689 observations from Snapchat captures users’ viewing experiences of a Story followed by an ad. Addressing potential confounding with comprehensive controls and propensity score weighting, we find that (1) both media-format and content congruence significantly increase ad viewing time; and (2) the video format negatively moderates the format-congruence effect. These findings offer practical insights for marketers, content creators, and advertising platforms.
- In preparation for submission to the Journal of Marketing Research
- Presented at 2023 Academy of Marketing Science Annual Conference, New Orleans, LA
- M. Wayne DeLozier Best Conference Paper Award
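
For readers unfamiliar with the identification strategy mentioned above, here is a minimal inverse-propensity-weighting sketch in Python. The column names (`congruent`, `view_time`) and the control variables are hypothetical; the abstract does not give the paper's actual specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def ipw_effect(df, treatment, outcome, controls):
    """Estimate the average effect of a binary treatment on an outcome
    via inverse propensity score weighting (Hajek estimator)."""
    X = df[controls].to_numpy()
    t = df[treatment].to_numpy()
    y = df[outcome].to_numpy()

    # Propensity: probability of seeing a congruent ad given the controls.
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    e = np.clip(e, 0.01, 0.99)  # trim extreme propensities for stability

    w = t / e + (1 - t) / (1 - e)  # inverse-propensity weights
    treated = np.average(y[t == 1], weights=w[t == 1])
    control = np.average(y[t == 0], weights=w[t == 0])
    return treated - control

# Hypothetical usage, with illustrative control names:
# ate = ipw_effect(views, "congruent", "view_time",
#                  ["story_length", "user_age", "hour_of_day"])
```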
Understanding Consumers’ Visual Attention in Mobile Advertisements: An Ambulatory Eye-Tracking Study with Machine Learning Techniques
With Mi Hyun Lee, Ming Chen, and Zhu Han
Abstract: As mobile devices have become a necessity in daily life, mobile advertising has become similarly prevalent. Accordingly, it is critical for practitioners to understand how consumers visually attend to mobile advertisements. Eye tracking is a popular methodology for doing so; however, eye-tracking research in mobile settings remains scant due to technical challenges such as cumbersome data annotation. To tackle these challenges, we propose an object-detection machine learning (ML) algorithm, You Only Look Once v3 (YOLO), to analyze eye-tracking videos automatically. Moreover, we extend the original YOLO model by developing a novel algorithm that optimizes the analysis of eye-tracking data collected from mobile devices. Through a lab experiment, we investigate how two types of ad elements (textual vs. pictorial) and two types of shopping devices (mobile vs. PC) affect consumers’ visual attention. Our findings suggest that (1) textual ad elements receive more attention than pictorial ones, and this difference is more pronounced in mobile ads than in PC ads; and (2) mobile ads receive less attention than PC ads. Our findings provide managerial insights for developing effective digital advertising strategies that improve consumers’ visual attention to online and mobile advertisements.
- Published in the Journal of Advertising. Click here to view the article online
- Presented at 2020 American Marketing Association Winter Academic Conference, San Diego, CA
- Best Paper Award in Market Research
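
The automated annotation idea in the abstract above, detecting ad elements in each eye-tracking video frame and attributing the gaze point to the element that contains it, can be sketched as follows. The `detect_elements` callback stands in for a YOLO-style detector; its interface is hypothetical, and the paper's extended algorithm may differ.

```python
from collections import Counter

def gaze_dwell_counts(frames, gaze_points, detect_elements):
    """Count frames of attention per ad-element label.

    frames: iterable of video frames; gaze_points: one (x, y) per frame;
    detect_elements(frame) -> list of (label, x1, y1, x2, y2) boxes,
    e.g. label in {"text", "picture"}.
    """
    dwell = Counter()
    for frame, (gx, gy) in zip(frames, gaze_points):
        for label, x1, y1, x2, y2 in detect_elements(frame):
            if x1 <= gx <= x2 and y1 <= gy <= y2:
                dwell[label] += 1  # one frame of attention on this element
                break              # attribute each frame to at most one box
    return dwell

# Hypothetical usage: frame counts convert to dwell time via the frame rate,
# e.g. dwell["text"] / fps gives seconds of attention on textual elements.
# dwell = gaze_dwell_counts(frames, gaze, yolo_detect)
```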