Cella Sum

MHCID

I’m currently pursuing my Master of Human-Computer Interaction and Design (MHCID) at UC Irvine.


Key areas of interest

Universal design and accessibility

According to the U.S. Census Bureau, nearly 20 percent of the U.S. population has a disability. And because the likelihood of disability rises with age, the world’s aging population makes this even more pressing: today only 8.5 percent of people worldwide are 65 and over, but that share is expected to jump to nearly 17 percent, roughly 1.6 billion people, by 2050. These numbers highlight the fact that we can no longer treat accessibility as an afterthought.

While accessibility, by definition, addresses the needs of people with disabilities, it has the added benefit of improving the experience for everyone. For example, using accessible language helps people of all ages and abilities, including those with low literacy skills and speakers of English as a foreign language. Universal design takes this idea even further, aiming to make products usable by everyone to the greatest extent possible, regardless of ability. Governments and organizations have policies to ensure equal access and rights for people with disabilities in public accommodations, employment, transportation, and government services, yet legal requirements for web accessibility remain inconsistent or absent in many jurisdictions. Fortunately, the World Wide Web Consortium (W3C) has developed international standards for web accessibility, and hopefully these standards will someday be enforceable by law.

Traditionally, companies have taken a reactive approach to accessibility, only fixing issues after a product launches. I believe that teams should instead take an “accessibility first” approach, incorporating these standards into their user-centered design process from the very beginning. Today, more companies are learning that accessibility is good for business: designing for people with disabilities can extend their market reach by more than 1 billion people globally, a group with over $6 trillion in spending power. Improved usability and customer satisfaction also bring cost savings.

Accessibility is also driving innovation. It has helped diversify input and output devices, such as eye-tracking software, brain-computer interfaces, and voice-based technologies, and it has improved the design of existing ones. Traditional keyboards, for example, are often problematic for people with limited motor skills, so wearable keyboards like Tap are being developed to address this. A.I. is also playing an increasingly important role in accessibility, from auto-captioning for people who are deaf, to self-driving cars for those who can’t drive, to facial recognition that helps people who are blind identify who is around them. However, even these algorithms, which are intended to be helpful, have the potential to exclude other groups, so we should ensure that we are building inclusive data sets.

We should care about accessibility because it is the right thing to do. People with disabilities have a fundamental human right to access the same information and products as everyone else, and as UX professionals, we have a responsibility to bring empathy into our practice to make that happen. Accessibility shouldn’t be just a checkbox to tick off; it should be a guiding principle that a whole team can get behind. Government legislation should also be introduced to account for accessibility in the digital age. I would further argue that accessibility should extend to people affected by other barriers, such as older technologies or low-bandwidth connections. By working with people who face these barriers, talking to them, running usability tests with them, and employing them, we can ensure that our products are useful for everyone.

Ethics of artificial intelligence

A.I. has become increasingly prevalent in the technologies we use today. It interacts with customers through chatbots, helps blind people navigate the world, and picks the next movie you’re going to watch. Yet there is growing concern about the ethics surrounding A.I.: these tools often suffer from a lack of transparency, poor accountability, and built-in bias. Reports also show that only 25% of consumers trust A.I. With such a low level of trust, it’s hard to see how these technologies can be used to their full potential.

By nature, A.I. systems produce false positives and false negatives as they learn. This creates bad experiences that range from the benign, like a poor movie suggestion, to the potentially dangerous, like an individual being wrongly flagged by a surveillance system. Consistency is one facet used to measure usability, yet A.I. is inherently inconsistent as it learns and reacts to different nuances and conditions. This inconsistency, and users’ limited understanding of it, can drive consumers away from A.I. systems. It is also important to look beyond the user when analyzing the effectiveness of A.I. For example, facial recognition software can turn individuals into unknowing subjects of artificial intelligence. This raises immense privacy and consent concerns, with little to no transparency into how the system works, what data is being collected, and how to opt out.

In response to these concerns, the HCI community has proposed guidelines and best practices for A.I. design. These include helping users understand what an A.I. system is capable of doing, mitigating social biases, and giving people controls to customize what the system monitors and how it behaves. However, these guidelines only scratch the surface of the ethical concerns surrounding A.I. Tackling problems like social bias goes beyond the scope of HCI and design and requires evaluating how these issues impact society as a whole. It is crucial for UX professionals not only to practice these guidelines but also to resist the complacent belief that our hands are tied when it comes to making change at a higher level. Advocating for more just A.I. practices can start with hiring diverse people to help evaluate and test these systems. Currently, there is minimal incentive for companies to offer privacy protections and for governments to enforce them. Even if we in HCI cannot implement these protections singlehandedly, our proximity to these technologies makes us powerful allies in pushing for them.

References

United States Census Bureau. “Nearly 1 in 5 People Have a Disability in the U.S., Census Bureau Reports.” 25 Jul. 2012, https://www.census.gov/newsroom/releases/archives/miscellaneous/cb12-134.html.

National Institutes of Health. “World’s older population grows dramatically.” 28 Mar. 2016, https://www.nih.gov/news-events/news-releases/worlds-older-population-grows-dramatically.

Usability.gov. “Accessibility Basics.” https://www.usability.gov/what-and-why/accessibility.html.

Bureau of Internet Accessibility. “Ditch the Fancy Vocabulary for Accessible Language.” 4 May 2019, https://www.boia.org/blog/ditch-the-fancy-vocabulary-for-accessible-language.

Centre for Excellence in Universal Design. “What is Universal Design.” http://universaldesign.ie/What-is-Universal-Design/.

Web Accessibility Initiative. “Web Accessibility Laws & Policies.” https://www.w3.org/WAI/policies/.

Web Accessibility Initiative. “The Business Case for Digital Accessibility.” 9 Nov. 2018, https://www.w3.org/WAI/business-case/.

Andrews, Mischa. “Heck yes, accessibility — let’s make the future awesome.” 24 Sep. 2018, https://uxdesign.cc/future-tech-accessibility-e93600e8917e.

Robbins, Keaton. “7 Innovations That Are Transforming Accessible Technology.” 20 Jun. 2019, https://www.voices.com/blog/accessible-technology/.

World Institute on Disability. “AI and Accessibility.” 12 Jun. 2019, https://wid.org/2019/06/12/ai-and-accessibility/.

West, Darrell M. “The role of corporations in addressing AI’s ethical dilemmas.” 13 Sep. 2018, https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/.

Fintech News. “Report shows consumers don’t trust artificial intelligence.” 1 Sep. 2019, https://www.fintechnews.org/report-shows-consumers-dont-trust-artificial-intelligence/.

Amershi, Saleema, et al. “Guidelines for Human-AI Interaction.” https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf.