UI design & bias: A survey
Fair UI publication · Peer reviewed at CHI 2018
About
Full publication title: “A Survey on User-Interface Design Strategies to Address Online Bias”. Written by Roxanne Leitão & Filip Jakobsen. This Extended Abstract was published in the “late breaking work” track of CHI 2018.
Abstract
This article presents the results of a preliminary qualitative survey concerning user-interface (UI) designers’ awareness of, and techniques for addressing, non-algorithmic online bias and discrimination. The results suggest that 1) online bias and discrimination are not being widely considered in professional UI development, 2) strategies currently being used to address bias are not evidence-based and therefore their efficacy is unknown, and accordingly, 3) there is a need for more evidence-based UI design strategies to address online bias.
Introduction
Online marketplaces for goods, services, and/or labour, such as Airbnb, Uber, Lyft, TaskRabbit, and Fiverr, have proliferated over the past decade. However, this has happened with limited regulation and oversight [3], leading to increasing concerns over the levels of digital bias and the discriminatory behaviours being unconsciously enabled on such platforms [4,5,6]. Digital discrimination can be defined as a range of circumstances on the internet in which a person, or group of people, is treated less favourably by others due to their personal characteristics, such as race, religious beliefs, sexual orientation, age, and gender. Discrimination is often underpinned by implicit and/or explicit biases that guide biased behaviours. Implicit biases unconsciously influence perceptions, actions, and decisions, even though individuals are largely unaware of their influence. Explicit biases, on the other hand, are those that are consciously acknowledged. Although a fair amount of progress has been made in addressing explicit bias over the last few decades, implicit bias remains incredibly pervasive [12].
A recent study [4] demonstrated that guests with African American-sounding names are 16% less likely to secure bookings on Airbnb than guests with White-sounding names. Moreover, another study found that males with African American-sounding names using Uber were significantly more likely to have their requests cancelled, and waited longer for their rides, than passengers with White-sounding names [5].
A review of TaskRabbit and Fiverr [6] revealed that workers perceived as Black received significantly worse reviews than workers with similar qualifications who were perceived to be White. The authors also found that TaskRabbit’s algorithms discriminate according to race and gender in search-result rankings, and that this varies according to the city in which a search is conducted, thereby tailoring rankings to the particular biases present in a specific geographical place. A 2013 study of an (anonymous) online labour marketplace [8], with workers from Sub-Saharan Africa and South-east Asia, showed that workers often list their location as the USA or Australia, due to perceptions that workers from these locations get more assignments. Workers also struggle with misconceptions about Africa, stating that employers perceive the African population as illiterate, uneducated, and willing to work for any fee.
These studies reveal that personal information displayed online can lead to the manifestation of implicit and/or explicit biases, and that we are far from a ‘post-racial’ society [13].
Related Work
A number of studies have investigated algorithmic bias [11] and suggested solutions to address the issue, such as tighter regulation, transparency, and accountability. To the best of our knowledge, however, how user-interface (UI) design and the display of users’ personal information may create opportunity for, or counteract, non-algorithmic digital discrimination has yet to be investigated.
Previous work has explored the impact of personal characteristics on traditional job applications, conducted offline, through correspondence tests and live auditions, by purposefully omitting or revealing personal information on candidates’ CVs or by conducting auditions behind a screen. These studies found that perceived ethnicity [7], age [10], gender [9], sexual orientation [1], and social class [2] can all negatively impact candidates’ employment prospects. However, they focussed on offline behaviours and are, therefore, not directly applicable to novel online marketplaces, nor to the design of their UIs and the user information they display.
Nextdoor is a neighbourhood social network that was recently accused of allowing racist behaviours to proliferate on its platform. In order to eliminate opportunity for bias, the company implemented a series of UI redesigns that it claims have reduced discriminatory behaviours by 75% [14]. The UI strategies include forcing users to fill in extra fields if they choose to include race in the description of a person, and reminding users that racial profiling is expressly prohibited. Still, the experiments and data that Nextdoor is using to evaluate its redesigns are not publicly available, which makes it difficult for the community to analyse the validity of the results, or to make inferences based on the UI strategies the company has employed.
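Nextdoor has not published the details of these redesigns, but the description above suggests a simple class of client-side intervention. The following is a minimal, purely illustrative sketch of that idea in TypeScript; it is not Nextdoor’s actual code, and the term list, field names, and required fields are all hypothetical assumptions:

```typescript
// Purely illustrative sketch; Nextdoor's real implementation is not public.
// Idea: if a free-text description of a person mentions race, require
// additional structured detail before submission, and remind the user that
// racial profiling is prohibited. All names below are hypothetical.

const RACE_TERMS = ["black", "white", "asian", "hispanic", "latino"];

interface PersonDescription {
  freeText: string;
  clothing?: string;
  height?: string;
}

// True if the free-text description contains a race-related term.
function mentionsRace(text: string): boolean {
  const words = text.toLowerCase().split(/\W+/);
  return RACE_TERMS.some((term) => words.includes(term));
}

// Returns a list of blocking messages; an empty list allows submission.
function validateDescription(d: PersonDescription): string[] {
  if (!mentionsRace(d.freeText)) return [];
  const messages = ["Reminder: racial profiling is expressly prohibited."];
  if (!d.clothing) messages.push("Please also describe the person's clothing.");
  if (!d.height) messages.push("Please also estimate the person's height.");
  return messages;
}
```

A real implementation would need far more careful language handling than a keyword list; this is part of why public, evidence-based evaluation of such designs matters.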
Fair UI: Ongoing work
Fair UI is an industry-led research project aiming to create tools and strategies that help UX/UI professionals design digital UIs that counteract non-algorithmic digital bias. The project kicked off in October 2017 and is structured according to the following phases:
- A qualitative survey of existing UI design practice and of designers’ awareness of digital discrimination and bias.
- A design challenge built from the survey findings and existing research, calling for the creation of UI design strategies to counteract digital bias.
- Iterative creation of a series of UI design strategies that respond to the design brief.
- Iterative evaluation of these strategies through remote user-testing.
- Ongoing open-access publication of UI design strategies and user-testing results.
The following section presents the preliminary results of an ongoing qualitative survey with UI designers and front-end developers, which will feed into the next phase of the Fair UI project — the design brief.
Survey and Participants
The qualitative survey was posted on a popular design blog and shared on social media platforms, explicitly targeting professional UI/UX designers and front-end developers. A total of 126 responses were collected, analysed, and reported on in this paper.
Of the 126 respondents, 25% had more than 10 years of professional experience in designing UIs, 15% had 5 years or more, 21% had 3 to 5 years, 14% had 2 to 3 years, and 24% had 1 year or less. Furthermore, 22% had worked on designing UIs for sharing-economy and/or online marketplace platforms, 18% had designed platforms where users are able to rate each other based on goods/services provided, 8% had designed UIs for online labour marketplaces, and 67% had never designed any of the above types of platforms. Of this 67%, 61% said they had designed UIs where users’ personal information was displayed: 92% designed UIs that displayed users’ names, 88% a profile photo, 41% users’ geolocation, 20% users’ age, and 12% other types of personal information (e.g., credit scores).
In total, 95 participants (75.3%) stated having designed a UI that displayed users’ personally identifiable information, such as a profile picture, real name, or location. The following sections provide an overview of the analysis and our main findings to date.
Analysis
Participant responses were grouped according to key categories. The categories used in the data analysis were:
- Motivation for addressing digital bias
- Reasons for not addressing digital bias
- Design strategies for addressing digital bias
- Existing resources and guidance for addressing digital bias
- Participant-identified concerns and issues relevant to digital bias
- Personal experience of digital bias
- Relevance (or not) of the Fair UI project
- Preferred formats for design guidance
Categories were developed by the authors, who are both involved in the Fair UI project, through close reading and comparison of the responses. Responses were assigned codes, which were then organised according to the categories.
Results
Below we present our findings, organised according to the main categories that emerged from the analysis. Where response transcriptions are included to illustrate a point, we have maintained respondents’ privacy by using identifiers (e.g., R01).
Finding 1: Concerns regarding digital bias and discrimination are not currently at the forefront of UI design decisions.
Of the respondents who reported having designed UIs displaying users’ personal information, only 23% stated that they developed the designs with awareness of how personal information can allow for digital bias. However, of this 23%, only 8 participants said they did so based on existing evidence-based strategies, and only 2 provided details on the sources of these materials. One of the sources referred to the impact of user-ratings on bias, another was specific to patient data protection within medical contexts, whilst the remaining source referred to more general web articles about discrimination on online marketplaces. The latter two did not contain evidence-based strategies for addressing digital bias.
Finding 2: There is a clear need for evidence-based UI design strategies to counteract digital bias.
In addition to the findings above, 96% of all respondents stated that evidence-based UI design guidance on how to prevent the potential for discrimination would be useful to them as professionals. There was also a clear preference for the Design Guidelines format (80%) and/or Design Patterns (65%), with Design Heuristics (54%) receiving the fewest votes.
Finding 3: Designers are currently using a limited set of intuitive UI design strategies to counteract digital bias.
When asked about the strategies they were implementing to address bias within their own UI designs, respondents described approaches based on personal and/or professional intuition, as outlined below (a minimal illustrative sketch follows these examples).
- Removing, or gradually disclosing, profile pictures and/or other personally identifying information.
R06 [Digital Product Designer]: “I didn’t use profile pictures, instead I used only first name and first initial of their last name.”
R32 [UX Director]: “Our team tried to push not showing nationality, which is different opinion with skateholder [sic]. In the end the host will see nationality only after accepting booking [sic].”
- Providing customisable fields instead of limiting users’ choices to a set of predefined values which may not conform to their personal preferences, nor to the established norms within a given culture.
R16 [Software Developer]: “People are not aware of gender-fluid users. They see it as an unnecessary complication and an affront to use corresponding pronouns.”
R74 [Designer]: “[…] only 2 gender identification (female / male)[sic]”
R45 [Web Designer]: “[…] there’s disregard for different name lengths & conventions.”
R11 [Designer]: “I recently ran a worldwide survey that included addressing ethnicity bias. There was no universal way to address ethnicity terms; both in terms of the way that people are identified, but also in the way that ethnicities are identified; in the US ‘people of color’ is a term, whereas in the UK it tends to be ‘ethnic minorities’. In the UK the term “handicapped” is not used, but in French it is. This lack of a common linguistic architecture is bound to cause difficulties when designing platforms that cross national boundaries […].”
- Creating and displaying anti-discrimination policies and/or data regarding the (business) benefits of diversity.
R105 [UX Designer]: “We launched a non-discrimination policy that outlines what constitutes discrimination, and plan to create a UI to put it in front of all our users”
R19 [Design Director]: “Statistics from usability reports or social studies on the downside/missing opportunity of discrimination”
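To make the first two strategies above concrete, here is a minimal sketch, assuming a booking marketplace of the kind respondents describe. It is our own illustration rather than code from any respondent’s product, and all type, field, and function names are hypothetical:

```typescript
// Minimal illustration of gradual disclosure: identifying fields (photo,
// full surname, nationality) stay hidden until a booking is accepted, and a
// free-text self-description replaces a fixed gender list. Names hypothetical.

type BookingState = "requested" | "accepted" | "completed";

interface Profile {
  firstName: string;
  lastName: string;
  photoUrl: string;
  nationality: string;
  genderSelfDescription?: string; // free text, not a predefined set of values
}

// Returns only the profile fields a counterpart may see at a given state.
function visibleProfile(p: Profile, state: BookingState) {
  if (state === "requested") {
    // Before acceptance: first name plus last initial, no photo or nationality.
    return { displayName: `${p.firstName} ${p.lastName[0]}.` };
  }
  // After acceptance: full identifying information becomes visible.
  return {
    displayName: `${p.firstName} ${p.lastName}`,
    photoUrl: p.photoUrl,
    nationality: p.nationality,
  };
}
```

Whether withholding fields in this way actually reduces biased behaviour is precisely the kind of question the Fair UI user-testing phases are intended to answer.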
Finding 4: Personal experience, user-research, and media articles were the main sources of awareness regarding digital bias.
In response to the question “What prompted you to think about discrimination and bias when designing?”, the broad reasons given can be understood according to the categories outlined below.
- Personal experience of discrimination as a reason for awareness and action.
R95 [UI/UX Developer]: “Personal experience as minority and non white-sounding name”
R107 [Digital Product Designer]: “I’m also a woman of color and have experienced discrimination myself.”
- User-research and observation of discriminatory behaviours.
R30 [UX Director]: “Many times sellers don’t understand that ‘I want to sell to a family just like mine’ is illegal so we built a home offer system to counteract that. The highest offer on a home wins.”
R32 [UX Designer]: “While user testing, some user requested to see the nationality on product because they doesn’t [sic] prefer [nationality removed] and [nationality removed].”
- Inspired by social media posts and web articles.
R108 [UX Designer]: “After seeing issues with discrimination come to light on platforms like Airbnb, it seemed inevitable that these issues would also exist in other marketplaces […]”.
Conclusion
Based on the responses from the qualitative survey, we argue that 1) at present, awareness of digital bias is quite low amongst UI/UX professionals, 2) there seems to be a lack of evidence-based guidance to support designers in making informed decisions, and 3) once informed about the issue, designers are keen to engage in designing-out opportunities for digital bias.
Furthermore, the findings indicate that two separate but related issues were being addressed by respondents. On the one hand, there are issues related to a UI itself not accommodating for diversity (e.g., name fields that are too short, or restrictive gender categories). On the other hand, there are UI characteristics (e.g., profile photos) that may allow users to be biased towards each other, based on perceived personal characteristics, such as ethnicity. The latter has been less investigated; therefore, future Fair UI work will aim to 1) create UI strategies to address bias, 2) submit these strategies to iterative remote user-testing, 3) use the findings to create actionable guidance for the UI/UX community, and 4) publish findings as open access.
References
1. Ali M. Ahmed, Lina Andersson, and Mats Hammarstedt. 2013. Are gay men and lesbians discriminated against in the hiring process? Southern Economic Journal 79, 3: 565–585.
2. Abhijit Banerjee, Marianne Bertrand, Saugato Datta, and Sendhil Mullainathan. 2009. Labor market discrimination in Delhi: Evidence from a field experiment. Journal of Comparative Economics 37.
3. Benjamin G. Edelman and Damien Geradin. 2015. Efficiencies and Regulatory Shortcuts: How Should We Regulate Companies Like Airbnb and Uber. Stanford Technology Law Review 19: 293.
4. Benjamin Edelman, Michael Luca, and Dan Svirsky. 2017. Racial discrimination in the sharing economy: Evidence from a field experiment. American Economic Journal: Applied Economics 9, 2: 1–22.
5. Yanbo Ge, Christopher R. Knittel, Don MacKenzie, and Stephen Zoepf. 2016. Racial and Gender Discrimination in Transportation Network Companies. National Bureau of Economic Research.
6. Anikó Hannák, Claudia Wagner, David Garcia, Alan Mislove, Markus Strohmaier, and Christo Wilson. 2017. Bias in Online Freelance Marketplaces: Evidence from TaskRabbit and Fiverr. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17).
7. Leo Kaas and Christian Manger. 2012. Ethnic Discrimination in Germany’s Labour Market: A Field Experiment. German Economic Review 13, 1: 1–20.
8. Mark Graham, Isis Hjorth, and Vili Lehdonvirta. 2017. Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods. Transfer: European Review of Labour and Research 23, 2: 135–162.
9. Peter A Riach and Judith Rich. 2006. An Experimental Investigation of Sexual Discrimination in Hiring in the English Labor Market. Advances in Economic Analysis & Policy 5, 2.
10. Peter Riach and Judith Rich. 2006. An Experimental Investigation of Age Discrimination in the French Labour Market. Social Science Research Network, Rochester, NY.
11. Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. 2016. Automation, Algorithms, and Politics | When the Algorithm Itself is a Racist: Diagnosing Ethical Harm in the Basic Components of Software. International Journal of Communication 10, 0: 19.
12. Cheryl Staats, Kelly Capatosto, Robin A. Wright, and Danya Contractor. 2014. State of the science: Implicit bias review 2014. Kirwan Institute for the Study of Race and Ethnicity. Columbus, OH.
13. Gavan Titley. 2016. On Are we all postracial yet? Ethnic and Racial Studies 39, 13: 2269–2277.
14. For Nextdoor, Eliminating Racism Is No Quick Fix | Backchannel. WIRED. Retrieved December 18, 2017 from https://goo.gl/sjMvx8.
Peer review R1
Reviewer
External
Reviewer expertise, on a scale of 1-4, with 4 meaning having most expertise
4 (Expert)
Reviewer’s overall rating of extended abstract, on a scale of 1-5, with 5 being best
4 – Possibly Accept: I would argue for accepting this submission.
Review of the Extended Abstract
This is promising preliminary work in an important and under-explored space. It provides a useful set of initial findings which could guide the development of further work.
The abstract could be improved by distinguishing more clearly between human bias enacted through platforms, and algorithmic bias resulting from the application of e.g. machine learning to data that is reflective of human bias. Separating the two seems important since they are likely to have quite different design solutions.
While the authors cite related prior work, there are also some important papers which have been overlooked. This includes examinations of biases in Uber ratings, e.g.:
– Rosenblat, Alex, et al. “Discriminating Tastes: Uber’s Customer Ratings as Vehicles for Workplace Discrimination.” Policy & Internet 9.3 (2017): 256-279.
As well as a range of papers which propose tools for investigating and mitigating algorithmic discrimination:
– Berendt, Bettina, and Sören Preibusch. “Better decision support through exploratory discrimination-aware data mining: foundations and empirical evidence.” Artificial Intelligence and Law 22.2 (2014): 175-209.
– Veale, Michael, and Reuben Binns. “Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data.” Big Data & Society 4.2 (2017): 2053951717743530.
And one paper which specifically deals with design considerations for preventing discrimination between peers in online markets:
– Levy, Karen, and Solon Barocas. “Designing Against Discrimination in Online Markets.” (2017).
Despite these omissions, this abstract is commendable. Unlike previous work, it is based on interviews with designers which surface interesting and unexpected themes. Especially useful is the distinction between a) UI not accommodating for diversity, and b) UI characteristics that may allow for users to be biased towards each other, based on perceived personal characteristics.
Some parts could do with further explanation. For instance, the findings report that people preferred “format Design Guidelines (80%) and/or Design Patterns (65%), with Design Heuristics (54%)”, but what are these formats? Were they just suggested titles, or are they already developed? How were they described to participants? Without seeing the survey it’s hard to interpret the meaning of these findings.
Peer review R2
Reviewer
External
Reviewer expertise, on a scale of 1-4, with 4 meaning having most expertise
3 (Knowledgeable)
Reviewer’s overall rating of extended abstract, on a scale of 1-5, with 5 being best
4 – Possibly Accept: I would argue for accepting this submission.
Review of the Extended Abstract
This extended abstract reports on a qualitative survey of UI/UX professionals about awareness of, and tactics for addressing, implicit and explicit biases found on social platforms such as online marketplaces. It finds that while there is some awareness amongst professionals about the ways in which UI/UX designs can either encourage or discourage biased behaviour amongst users, there is a need for greater awareness and evidence-based design strategies to address this problem.
The problem of how designs reproduce and perpetuate existing biases is an important and timely issue and it is good to see this being addressed from an industry perspetive [sic]. It is a well-designed initial study of the understanding and practices of professionals. It is clearly reported and identifies many topics for further investigation. I’m not an expert on survey methods but the design appears sensible and the conclusions appropriate for the methods used.
There are some areas for improvement: the extended abstract would benefit from a clearer statement of the authors’ positions. I assume they are involved in the ‘Fair UI’ project in some way but this is never explicitly stated. A clearer statement of their relationship to the project or topic would help us understand the assumptions and biases that they bring to the research.
In the conclusion the authors identify a tension between accommodating diversity and counteracting biases through UI design before concluding to focus future work on developing strategies for ‘designing-out opportunity for bias’. There are a number of issues here: Firstly, caution is needed around claims that bias can be designed away. Prejudice is much stickier than we would like it to be and there is a long history of people overcoming attempts to prevent its expression. Secondly, the tension between recognising and celebrating diversity and difference and counteracting biases against certain races, genders, sexualities etc. has been a [sic] area of discussion in social movements that have worked for equality. The consequence of ‘bracketing’ identity in order that people can be treated equally is that the things that make us different get erased. Is it fair that some people have to hide parts of their identities in order to participate in online platforms?
These are issues that go beyond the scope of this extended abstract but I raise them to encourage the authors to engage more widely with the history of feminist / critical race / LGBT / disability scholarship and activism. This is definitely a ‘wicked problem’ but there is richer and better design work to be done by tackling the complexity and contradictions identified by the authors.
Overall, this work would be a great candidate for inclusion in the late breaking work track. It would undoubtedly spark some good discussions on this important topic and support the authors in developing the research further.
Peer review R3
Reviewer
Primary
Reviewer expertise, on a scale of 1-4, with 4 meaning having most expertise
2 (Passing Knowledge)
Reviewer’s overall rating of extended abstract, on a scale of 1-5, with 5 being best
4
Associate Chair’s Review
This well-written paper addresses the timely topic of online bias and presents results of a survey of designers and their strategies to address online bias. This initial study is novel, well motivated, and provides clear avenues for future work. The study raises both existing issues and challenges for designers working in this space, as well as potential areas to consider design-led strategies to address these.
While the framing and outcomes of the study are very strong, I would encourage the authors to provide more detail about their methods and particularly their process of analysis. An overview of the questions (or category of questions) asked in the qualitative survey would be very helpful. These could easily be accommodated in a sidebar.
Very little detail is given as to exactly how the responses to the survey were analysed. The authors report that responses were grouped ‘according to categories that emerged from the data’. Categories do not emerge so passively; they are actively constructed through particular analytical lenses. I challenge the authors to be specific about the scope of the data they gathered; how it was coded (using what tools, and by whom); how these codes were checked or validated; and what analytical lenses they brought to the data, e.g. what were the authors looking for in particular?
It is also a little unclear from the reporting how the categories and themes that are claimed to emerge relate to the specific findings presented. This coupling could be more clearly articulated.
I am however confident the authors can address these issues with the addition of a further short paragraph.
Meta-review rating of extended abstract, on a scale of 1-5, with 5 being best
4 – Possibly Accept: I would argue for accepting this submission.
The Meta-Review
All reviewers view this paper as timely, and addressing an important topic, with clear implications for future work that would engender discussion at the conference.
Reviewers find the study well framed and motivated; however, R1 suggests a number of important additions that could bolster the related work.
R1 commends the distinction between a) UI not accommodating for diversity, and b) UI characteristics that may allow for users to be biased towards each other, based on perceived personal characteristics and sees this as valuable for future work. However, they also urge the authors to reflect on distinguishing between human bias and machine or algorithmic bias.
R2 also values the survey presented, but offers some valuable provocations to the authors to help better recognise the ‘wicked problem’ they are seeking to address. Firstly, they ask the authors to introduce their own positions more clearly, particularly in relation to the Fair UI project – this relates also to a better explanation of the analytic process and the perspectives that were brought to it.
More broadly, they encourage the authors to deal more directly with the thorny and ongoing debates around identity politics – in particular the tensions in celebrating diversity while counteracting bias based on those differences.
Nonetheless, it is clear the paper provides much scope for discussion and would be well served by inclusion at CHI’s LBW track.
Change log
22 March 2018
Modification made in reporting of results to correct a prior mistake and include reference to an evidence-based source provided by one of the survey respondents.
Section: Results → Finding 1
Text changed from: “1 of the sources referred to the impact of user ratings on bias, another was specific to patient data protection within medical contexts, whilst the remaining source referred to more general web articles about discrimination on online marketplaces, without any evidence-based strategies for addressing digital bias.”
Text changed to: “1 of the sources referred to the impact of user-ratings on bias, another was specific to patient data protection within medical contexts, whilst the remaining source referred to more general web articles about discrimination on online marketplaces. The latter 2 did not contain evidence-based strategies for addressing digital bias.”