Ability and Accessibility in AI with Meredith Ringel Morris


[Photo: Meredith Ringel Morris]

What should you know about Ability and Accessibility in AI and responsible technology development? In this episode we interview Meredith Ringel Morris.

Meredith is a computer scientist conducting research in the areas of human-computer interaction (HCI), computer-supported cooperative work (CSCW), social computing, and accessibility. Her current research focus is on accessibility, particularly on the intersection of accessibility and social technologies.

Follow Meredith Morris on Twitter @merrierm

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.


Relevant Resources Related to This Episode:

(7 Principles & Ethical Considerations from the Interview) AI and Accessibility: A Discussion of Ethical Considerations

Merrie’s Webpage on Stanford.edu

Accessibility and Social Technologies

Ability Research Group at Microsoft

Enable Research Group at Microsoft

On the perennial appearance of systems claiming to "translate sign language"

Opportunities for students with disabilities to grow careers in computing

The @AccessCompUW programs

The @CRA_WP grad cohort programs such as IDEALS (formerly the URMD cohort) and the Skip Ellis Award

The @MSFTResearch Ada Lovelace fellowship and dissertation grant programs (dissertation grant applications will open on Feb 1 this year)

The Google Lime scholarship program

Various travel and career development grants offered by @sigaccess such as this one

The #Microsoft Disability scholarship (taking applications now!)

Current career opportunities with the @MSFTResearch Ability Team (right now there are internship and postdoc opportunities)


Transcript

Merrie Morris3_mixdown.mp3 was automatically transcribed by Sonix. This transcript may contain errors.

Welcome to Radical AI, a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess. And welcome to season two of the Radical AI Podcast. We hope that you all have had a wonderful and restful holiday season, and we are ready to hit the ground running in this new year with this new season.

And we have a lot of exciting news for all of you. So make sure you stay tuned to our intros, outros, and our Twitter and LinkedIn for some exciting announcements coming up.

And some of those announcements are external, like future partnerships that we might be a part of, that we will be a part of. No, we're not announcing quite yet. Some of them are more internal. So we have some interns, two interns, coming on board.

And we will announce their names, along with a little bit about the projects that they will be working on, shortly.

And some of our changes are actually a little bit structural. We're changing things up a bit in the way that we ask the questions in the interviews that we're conducting. So if you've been around since the beginning of the podcast about nine months ago, you probably noticed that with every guest on this show, we have asked them to define for us what they think the word radical means, and whether they situate their work, their research, their thoughts within the realm of radical AI. This was a part of our effort to co-define radical AI with this community. And if you want to hear more about our thoughts and some of our initial findings, you should definitely check out the video that we created at the end of 2020, as well as our debrief in our New Year's episode, the New Year's Spectacular.

Yes, it's called the New Year's spectacular episode, as you will discover if you listen to the episode, because it's spectacular. It was pretty spectacular.

But for this upcoming season, we are shifting the narrative a little bit, and we're going to focus less on specifically and explicitly asking our guests to define what radical means to them. Instead, we're going to be focusing more on community building and collective storytelling. Since this interview was conducted several months ago, it was actually the last interview where we asked one of our guests to define the term radical AI in their own words. So this will actually be the last episode that we air where we ask this question explicitly to one of our guests. What you can expect from this upcoming season of the podcast is more attention on events, stories, case studies, and experiences that we believe fall under the umbrella of radical AI as we continue to co-define this term with our community.

And a large part of that is some of the feedback that we heard about the needs of folks who are listening, around accessibility and around a deeper understanding of some of these issues and how they impact us and impact our communities. And so our hope is that, in the spirit of that accessibility, we focus more in on these events and these stories and these case studies, because we believe that by sharing those, we can continue to build community and also make the biggest impact, right, on the lives of all of you.

And speaking of accessibility, in this episode we interview Meredith Ringel Morris, a computer scientist conducting research in the areas of human-computer interaction, computer-supported cooperative work, social computing, and accessibility. Her current research focus is on accessibility, particularly on the intersection of accessibility and social technologies.

And we are just so excited to start this new season with this interview and to share it right now with all of you.

We're on the line today with Merrie Morris. How are you doing today, Merrie? Good, thanks. How are you doing? We're doing well. We were wondering if you could just get us started by telling us a little bit more about yourself and what motivates you to do the research that you do.

Sure. So I am a research scientist at Microsoft and I lead the interaction, accessibility and mixed reality research areas.

And I founded Microsoft's Ability research team several years ago, which looks to combine innovations in human-computer interaction and artificial intelligence to enhance everyone's capabilities, and particularly to take a user-centered approach to meeting the needs of people with disabilities with respect to emerging technologies.

And for you, why was it important to start doing that work, or has that always been kind of your career trajectory to work on accessibility and ability?

In general, the U.S. Census indicates that one in five Americans has a disability. And so, like many people, I have many close family members who have experienced disability in their lives. And that's something that has been a part of my life. And so I don't want to go into anyone else's personal health information in too much detail. But that's been something influential in my life. And that's one of the many reasons that I think doing work in this space is important.

And where did the technology come into all of this? Because you're doing specifically disability and ability research through technology and technologies like A.I. And so what made you interested in that portion or that section of this field?

So I got into computer science as an undergraduate in college sort of by accident.

I actually took an intro computer science course because on my college tour at Brown University, the tour guide told us that one of the professors, Andy van Dam, had helped make the movie Toy Story by Pixar, which at the time, in the 90s, was a really popular movie, and I was excited about that. So I just decided to take his class for that reason. And then I found that I really was excited about the problem solving aspects of programming, and I got more involved with computer technology.

But I have to admit, I like people more than I like computers. You know, I see computers not as interesting for their own sake, but as interesting as tools that can help people do things: that can help people have meaningful social relationships, that can help people be more productive in their professional and educational lives, and that can help people live more complete and fulfilling lives. And that's one of the reasons that I think the area of accessible technology is particularly interesting. It's an important application area of technology that has impact on society.

So I come from a social sciences background and Jess comes from more of a computer science background, but then we're both in the space doing some level of human-computer interaction work or information science work. And I know that you are kind of at the forefront of HCI in general as a field, and for folks who don't really know how those worlds can kind of intermingle, I was wondering if you could say more about just human-computer interaction as a field and some of the questions that it asks. Yes, that's a great question.

Human-computer interaction has many different disciplines that come together to take a user-centered approach to designing, building, and evaluating technology. So, for example, I come from a more traditional computer science background, and a lot of the contributions that I make in the area of HCI are around developing novel systems and interaction techniques.

But many of my colleagues come from different backgrounds like psychology and cognitive science, which focus more on, you know, for example, ethnographic studies to understand a user's needs deeply, or developing new methodologies for evaluating technology in a user-centered way. The Ability team at Microsoft Research includes people from a variety of backgrounds. So, for example, while my doctorate is in computer science, we have people on the team with cognitive science PhDs, mechanical engineering PhDs, information science PhDs. So it's really a diverse group of people.

Maybe it would help our listeners a little bit if we got specific about some of the work that you're doing with the Ability team and then also in Microsoft Research's Enable group. And I'm wondering if there's a specific project right now, I think you mentioned you're leading a project on fairness and disability, that you could just explore and explain a bit to our listeners.

Absolutely. So the Ability team's project on fairness and disability is examining how issues around responsible AI particularly impact people with disabilities. So that includes things like ensuring that mainstream AI tools are inclusively designed so that they work correctly for everyone, regardless of their disability status. It also means thinking particularly about accessibility-oriented or assistive technologies, making sure that these are designed in a human-centered way so that they are solving problems that really matter to end users. And it also includes thinking about making sure that AI practitioners are following inclusive design processes when they develop and evaluate emerging technologies, and proactively considering issues around how AI relates to disability status, health, and age.

Why is this work necessary? Like was there an impetus for this?

Was there maybe a case study of responsible AI gone wrong, or what's driving this work?

So I've been thinking about this topic for the past few years, and I have identified seven areas that I think are particularly worth examining through the lens of disability.

Some of these areas, in fact most of these areas, I think, apply to fairness across all demographics, but offer particularly nuanced challenges around disability.

So the first is inclusion. As has been brought up by other researchers, many of whom have already been on this podcast, it's become very well known in the past few years that representation and inclusion, in the data sets used to train and test AI systems as well as on the teams that are developing AI, is important to make sure that AI works for people from different ethnicities, people with varied gender identities, people from the developing world as well as in the West. And this issue also holds true for characteristics like disability status and age. And I think it's particularly challenging to think about inclusion in data sets with something like disability because of the long tail of disability.

You know, there are a large number of different disabling conditions that all have, you know, relatively low proportion in the population. And so in some sense, even if one were to ensure that people with disabilities are represented in training data, they might still always be viewed as statistical outliers by current ML systems. And so understanding how to address this, I think, is a big challenge for the ML community.
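To make the statistical outlier point above concrete, here is a minimal, purely illustrative sketch (not from the interview): even when a small group is present in the training data, a model fit to the overall distribution can still flag that group's examples as anomalous. The synthetic numbers, the two-dimensional feature space, and the choice of scikit-learn's IsolationForest are all assumptions made for illustration.

```python
# Illustrative sketch of the "long tail" concern: a small group is included in the
# training data, yet an outlier detector fit to everyone still marks it as anomalous.
# All numbers and the feature space here are made up for demonstration purposes.
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

rng = np.random.default_rng(0)
majority = rng.normal(loc=0.0, scale=1.0, size=(990, 2))  # the dominant pattern
minority = rng.normal(loc=4.0, scale=1.0, size=(10, 2))   # a small, distinct group
data = np.vstack([majority, minority])

model = IsolationForest(random_state=0).fit(data)          # trained on *everyone*
flags = model.predict(minority)                            # +1 = inlier, -1 = outlier
print(f"Minority-group samples flagged as outliers: {(flags == -1).sum()} of {len(minority)}")
```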

The second issue that I think is important to think about is the issue of bias. Again, you know, this has come up in other demographic domains, the idea that AI systems might amplify biases that marginalized groups already experience in our society, and again, particularly with respect to disability and health information, this is quite challenging.

For example, even today, AI systems can already infer from public information someone's health or disability status. You know, there was a study that showed that from people's mouse movements on a Web page, one could infer whether they might be in the early stages of Parkinson's disease. And is it ethical to even build systems like this? And, you know, how might this be used? Are these systems going to be used to charge people different rates for health insurance or even deny them insurance? Are they going to be used to discriminate in employment and hiring? Will they be used to determine what level of benefits people receive from the government?

I think these issues are quite important to proactively address, and many of these issues can't only be addressed through technology, but must be addressed perhaps through policy as well. The third area that I think is important to consider is privacy.

And again, with respect to something like health information, unlike other demographic characteristics, which may or may not be more public for people to at least make educated guesses about, many aspects of disability and health status are often private.

For example, people talk about the concept of an invisible disability. Someone might not know, let's say, whether you have epilepsy or ADHD or mental health concerns.

So if you think back, for example, to the first issue around inclusion and representation in data sets, many people with hidden or invisible disabilities may not want to contribute data and metadata because it would require disclosure and have privacy risks. And so you get this sort of catch-22 where the privacy concerns are further amplified by the inclusion problems and create a really complicated feedback cycle to address.

And of course, related to privacy, I think also some of the techniques that are used to try to preserve anonymity in data sets may not be effective for people from disability groups that have relatively low numbers.

So, for example, thinking of some of our own work, we've done quite a bit of work on communication systems for people with ALS, or Lou Gehrig's disease, and the incidence of ALS in the United States is about one in fifty thousand adults. So, for example, say we are conducting studies to understand how to create better predictive speech technologies to be used by people with ALS. The Seattle metro area has maybe three million people, so at one in fifty thousand, you're talking maybe like 30, 40, 50 people in the Seattle metro area, and maybe we've interviewed 20 of them. Even if we're anonymizing our data and saying, well, we interviewed 20 adults, you know, in this age group or of this gender, someone could probably figure out who they were.
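To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The incidence figure (roughly one in fifty thousand) and the metro population (roughly three million) come from the interview; the sample size and the re-identification comment are illustrative assumptions.

```python
# Back-of-the-envelope version of the re-identification worry described above.
ALS_INCIDENCE = 1 / 50_000           # approximate U.S. incidence cited in the interview
SEATTLE_METRO_POPULATION = 3_000_000

expected_with_als = SEATTLE_METRO_POPULATION * ALS_INCIDENCE
print(f"Expected people with ALS in the metro area: ~{expected_with_als:.0f}")

# A study description like "20 participants, aged 40-60, 12 men" partitions a pool
# this small into buckets of only a handful of people, so combining a few released
# attributes can often point back to specific individuals.
interviewed = 20
print(f"Study sample as a share of that local population: {interviewed / expected_with_als:.0%}")
```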

And so I think these kinds of concerns take on a special significance when we start to talk about disability. The fourth issue is error.

So there is always going to be error in AI systems. You know, with deep learning, a lot of these systems are really improving.

We've seen exponential improvements in the capability of AI in the past few years, but there will always be mistakes and there will always be errors, and that matters particularly for certain disability groups who cannot verify the output of an AI system with their own senses. So, for example, think about someone who is blind who's using an AI vision-to-language system to describe their environment. They cannot verify the output of that system by themselves. Or someone who is deaf who's using AI-based captions to understand, say, a video call. They cannot verify the output by themself. And so how do we convey this error to the end user in a way that's understandable by end users? I think this metric of being understandable by end users is really important, because right now, especially with many deep learning systems, the error isn't even understandable by the developers themselves. And now we want to convey it to lay people who cannot verify the output with their eyes and who may be using the system for safety-critical tasks. If someone who's blind is going to rely on a vision-to-language system to scan their environment and decide whether it's safe to cross the street, what level of error is acceptable in that scenario, and how do we encourage end users to have appropriate skepticism about the capability of AI?

In our research, we have found that people are overly trusting of the output of AI systems, even when the output makes little sense, frankly.

And so I think this is really a topic of concern for this demographic. The next area that I have concerns about is expectation setting about the near-term capabilities of AI systems, again, particularly when you think about vulnerable demographics whose lives and quality of life stand to fundamentally be changed by advances in AI. I am quite concerned about the way that I see the popular media reporting on advances in AI, whether it's to create clickbait headlines or out of fundamental misunderstandings about the current state of the art. For example, in this space, I literally get one email a week about sign language technologies because of frequent articles that appear in the media that seem to promise that AI technologies that can translate between sign language and English either already exist or are on the cusp of release. And frankly, most of these technologies that are being reported on are what I would refer to as well-intentioned systems that are created by people who are not deaf themselves, who are not signers, who do not know sign language.

And so, for example, you'll see an article that will report on a new app that can translate twenty-six individual signs into English. And if I told you I made a French-to-English translation app that could translate twenty-six words of French into English, you would not be very impressed. And so I think people are misled reading these articles about what the state of the art is for technologies that might impact them. And that is an ethical concern that we should be aware of as a community.

Another concern that I think has particular impacts for people with disabilities is the issue of synthetic or simulated data.

So, again, you know, going back to the earlier issue around inclusion and representation in data sets in the ML community, there are now many techniques for artificially synthesizing more variety in data as a maybe cheaper and more scalable method for enhancing the variety of a data set. And there are studies that show that synthetic data can create improvements. But I think around disability in particular, there are a lot of sensitivities around disability simulation. So there are studies that show that, first of all, simulating disability is not the same in terms of the kinds of data you get. So if you were to, say, put a blindfold on someone and ask them to perform a task, the way they do that task is not the same as someone who has been blind for years or for their whole life. And not only is the data not of the same quality, but simulating disability often leads people who participate in those simulations to form negative, stereotyped opinions about the capabilities of people with disabilities.

For example, you know, in my work, one place where I've come across some of these problematic issues in simulation is in our work on communication technologies, where predictive language can be very important. There aren't realistic public corpora available of text produced by AAC technology users, and so a lot of the language models are trained on existing public corpora like The New York Times. And of course, you know, most people don't speak in the grammar and cadence of a New York Times article. So that's maybe not ideal. And so I know of some well-intentioned researchers who decided to address this problem by asking crowd workers on Mechanical Turk to imagine that they were a disabled person who spoke via an augmentative technology and to generate sentences that they thought they might say. And now many other people use this simulated corpus. But if you actually look at this corpus, it's full of very stereotyped depictions of what people with disabilities might talk about. Will you take me to the doctor? Will you get me a blanket? Will you make me some soup? So, you know, of course, people with disabilities do need to talk about health concerns, but they also want to talk about everything else in the world. Right. They want to talk about gardening and Star Trek and politics and their family. And so I am quite concerned about the use of synthetic data for this population. And then the last thing that I would mention is issues around social acceptability of emerging AI technologies, and again, particularly how that interacts with disability status.

So, for example, many emerging technologies that might greatly benefit people with disabilities have impacts on what I would call secondary users. So let's take an example of something like Google Glass. Right. So Google Glass has a camera and a microphone on it, and that might be very important for someone who has low vision: the camera, with AI algorithms from computer vision, could be helping that person identify objects in their environment as an assistive device. Or if someone has a cognitive disability, perhaps that camera is helping them remember the names of people that go with faces that they see as they pass them on the street. Or if someone is hard of hearing, maybe that device is automatically using speech recognition technology to provide them with real-time captioning. But of course, that device also has privacy implications for the people who are being captured. And there are studies, like from the University of Colorado, that show that people are more tolerant, for example, of someone using Google Glass if they know that it's because that person has a disability. But what does that mean? Does that mean that people with disabilities are required to publicly disclose their disability status in order to use emerging technologies? Does it mean that the benefits of the technology to the person with disabilities outweigh the privacy concerns of other people in the public?

And these are challenging questions. And I don't know what all the answers are, but I think they're very important for us to have serious conversations about.

If you would like to learn a little bit more about the seven principles that Merrie just shared, or if you would like to explore a little bit deeper into this work, don't forget that you can always do this by visiting our website at radicalai.org and checking out the show notes for this interview and for all of our interviews. Now back to our interview with Meredith Ringel Morris. When you were talking, I was brought back to... I was a child of the 90s, so early 90s, going to elementary school, and had a very particular education around disability and especially around language. And there was so much just like stigma and a lack of understanding and a lack of even knowing what the language was that we were supposed to be using around some of these issues.

And I'm wondering if you could speak just briefly to, when you say disability, a little more of, like, what you mean, so that we're all kind of speaking the same language.

Yeah. So first, again, as a caveat, you know, I personally do not identify as being disabled. And I think it's important to recognize that people with disabilities often have their own preferences for language and how they would like to be identified, and to respect people's individual choices. I know there's a lot of debate back and forth around, for example, person-first language versus identity-first language, and different people have different preferences. I often use person-first language when I'm not familiar with someone's own preference, as a fallback. But I think, you know, that's something important to be aware of. Then in terms of discussing just what falls into the scope of the disability space, which I think is also part of your question:

There are several different categories of disability that we often think about. So one is sensory disabilities.

So, for example, vision loss or hearing loss would fall into that category. Another is limited mobility. So, for example, people who rely on mobility aids, people who have maybe limb differences, prosthetics, amputation would fall into that kind of category. Also, differences in strength or tremor might fall into that category as well. Another category is speech disabilities. So, for example, our work on augmentative and alternative communication technologies benefits people with speech disabilities. Another is cognitive disability.

So I think, for example, learning disabilities like dyslexia, or autism spectrum disorder, or ADHD could all fall under the umbrella of cognitive differences. Intellectual disability is another area, and of course technology for intellectual disability brings up a whole set of additional ethical considerations around consent, around whether or not people can give informed consent for giving data or participating in technology research at all.

And then also, of course, it's important to recognize that many of these categories of disability are not experienced in isolation, right. So people may experience conditions that impact several of these categories at once. ALS is an example where people with ALS often experience both limited mobility as well as speech disabilities because of losing motor control over their speech. Another category that often falls under disability is mental health concerns, so PTSD, depression, anxiety. And then more broadly, often people will characterize, for example, chronic health concerns under disability as well, as well as concerns related to aging. I think aging is one area that hasn't been considered extensively with respect to AI; for example, how many of the AI data sets that people rely on right now include representation of older adults, particularly when we consider the oldest old adults, like in their 80s and beyond?

And I think this is very important because older adults, just as part of the natural aging process, have characteristics that may really impact the way AI systems work. So, for example, most older adults speak at a slower cadence. How does this impact speech recognition systems? Or older adults have a slower gait when walking. How does that impact, you know, body tracking systems that might be used, for instance, by self-driving cars to recognize pedestrians?

So I think thinking about older adults, even though many older adults may not self-identify as disabled, is very important and relevant to this conversation. One concern around the issue of stigma, which relates again to the social acceptability of AI technologies, is actually end users' willingness to use a technology.

For example, does using the technology mark them as different in some way? And sometimes that can be helpful. So, for example, if a person who's blind is using a white cane, the visibility and recognized meaning of the white cane is actually often very helpful, so that, you know, people have expectations around how quickly that person might cross the street, or, you know, they move out of the way of that person's path. But sometimes that recognition that you are blind might be undesirable. For example, if you are in a new city and it's evening and you're concerned about being mugged, a mugger might be more likely to approach you because they see the cane and recognize that you might be more vulnerable. And so I think that while the cane is an example of a low-tech technology, those same considerations about the visibility of a technology, and whether it's recognized as an assistive technology versus a mainstream technology, are important to consider. So is the question of whether something is a mainstream technology versus, like, a medical device, which also impacts cost, which can impact whether a technology is truly democratized and available for everyone, or whether it's a specialized technology that might be unaffordable to people who need it.

On this topic of technologies that help augment people's lives, who are experiencing disability, I'm really interested in the work that the Enable group is doing. And on their website, one of the taglines is to improve the lives of people with disabilities. And I'm wondering if you have any examples of projects or technologies that are helping to improve people's lives that have taken these concerns that you were just mentioning into consideration.

So just to clarify, there are two teams within Microsoft Research and Incubations that do work related to accessibility. One is the Ability team, and the other is the Enable team that you mentioned. They are separate. The Enable team is focused on two specific projects: one is eye gaze-based interaction for people who experience limited mobility, and the other is a navigation app called Soundscape that uses directional audio beacons for pedestrian navigation by people who are blind or have low vision. The Ability team's focus is a bit further out in industrial research, so to speak, more H3 research that has a much broader scope. And so the AI fairness work falls under the purview of the Ability team. A current project on the Ability team that I think is quite relevant to this issue of responsible AI is around automated image descriptions that might be useful for people who are blind or have low vision. So typically, if an image appears on the web, on social media, or in an office document, it requires metadata called alt text, or alternative text, so that a screen reader can read aloud that text, or that caption of an image, so that it can be consumed by someone with a disability, usually someone who is blind. And in theory, content authors should be providing alt text for all images that they're putting online.

But typically this doesn't happen. Recent studies indicate that about half of images online, on websites for instance, have alternative text. On social media it's much less. We studied Twitter last year, and fewer than one tenth of one percent of images on Twitter had alt text attached to them. And then, of course, this isn't even considering the quality of the alternative text. Many people, either because they're not understanding what alt text is for or because they're just trying to game the system for search engine optimization, will fill in very low quality text, like a file name or the word image.
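As a rough illustration of the kind of alt-text audit described here, the sketch below counts how many images in a snippet of HTML carry alternative text and flags obviously low-quality values such as file names or the word "image". The heuristics, and the use of BeautifulSoup, are assumptions for illustration, not the methodology of the studies Merrie mentions.

```python
# Minimal alt-text audit: count <img> elements with alt text and flag low-quality values.
# The low-quality heuristic ("image"/"img"/"photo", or a file name) is illustrative only.
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def audit_alt_text(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    images = soup.find_all("img")
    low_quality = re.compile(r"^(image|img|photo)$|\.(jpe?g|png|gif)$", re.IGNORECASE)
    with_alt = [img for img in images if img.get("alt", "").strip()]
    poor = [img for img in with_alt if low_quality.search(img["alt"].strip())]
    return {
        "total_images": len(images),
        "with_alt_text": len(with_alt),
        "low_quality_alt_text": len(poor),
    }

# Example: one image with a file name as alt text, one with no alt text at all.
print(audit_alt_text('<img src="cat.jpg" alt="IMG_0042.jpg"><img src="dog.jpg">'))
```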

So AI technology could substantially improve the accessibility of images, for example by using new vision-to-language technologies to provide these kinds of descriptions instead. Some of the issues that our team is looking at in this space, for instance, are: are these AI technologies providing the kinds of details that actually matter to people who are blind? And so, for instance, some of the interesting things we've learned by taking this user-centered approach are that, one, many of the details people are interested in are not in the realm of what current systems can do today, for example, explaining whether an image is humorous or not and why, or explaining the aesthetic qualities of an image. You know, is this image beautiful? Does it evoke nostalgia or sadness?

Also, providing more detail suggests opportunities for new interaction techniques. This is where some of the HCI comes in.

Different users might want different levels of detail about an image depending on their personal interests and the context in which they're consuming an image. So maybe I'm really interested in fashion and I always want the clothing of people in an image described to me. You know, this I think is really different for different people. And if the same image appears in different contexts, should the AI system provide different kinds of detail? If an image appears in a news article versus on Twitter versus in my electronic textbook, how does that change what sort of caption should be provided? And then, of course, we get to the issue of error, and how these systems should be conveying error in image descriptions to end users so that they know how far they can trust these descriptions.

This is really fascinating because in order to build these systems and make them better, it all comes back to data, right? Like you need more data to build these systems. But then you mentioned in your seven ethical concerns earlier that in order to get that data, there's problems with, you know, synthetic and simulated data. But there's also not a ton of data to collect because there's only a certain percentage of the population that you can collect this data from and you don't want to only collect it from them because there's privacy concerns. So how do you walk that line between making good technologies that are inclusive and also making sure that you take those ethical considerations into account?

So this is an ongoing challenge. One example that I think is a great example of a correct step in this direction is some of the work from Carnegie Mellon University and UT Austin. I'm thinking of folks like Jeff Bigham and Danna Gurari, who are creating the VizWiz data set. The VizWiz data set is a public data set of photographs that are captured by people who are blind or have low vision, that they have consented to share publicly for research, as well as the questions that they have about the contents of those photographs. And I think that that's a really interesting example, because, again, if we go back to the scenario not just of describing images that already exist on the web or social media, but of actually describing, say in real time from a phone app, scenes around a person who is blind, which is often an application area described in the computer vision community: one concern right now is that most of these vision-to-language technologies are trained on corpora of images taken by people who are sighted. So if you think of something like the COCO data set or ImageNet, these are mostly images scraped from Flickr.

I think it's a safe assumption that these are largely high-quality images taken by people with sight. Whereas while many people who are blind or have low vision do engage in photography, on average the quality of photographs that they capture is very different than those of people with sight. So we did a study of this with images and found that more than 80 percent of them had serious issues around image framing, overexposure, underexposure, or blur. And so being able to actually train computer vision algorithms on this type of data is really important for being able to accurately serve the needs of this population. And so I think things like creating the VizWiz data set are really a step in the right direction. Also, similarly, I think Google's Project Euphonia, which was announced a couple of years ago and which aims to collect more speech data from people with speech differences, is another example of a step in the right direction, because current automatic speech recognition algorithms are typically trained on people with typical speech patterns.

So they do not work well for people with conditions like dysarthria, a deaf accent, etc. And so beginning to create these data sets and to make them public, I think, is very important.

Merrie, as you know, you're on the Radical AI podcast, so you probably expect the question that's coming next. Something we like to ask all of our guests in an effort to co-create and define this radical AI term is: how do you define the word radical, and do you situate your work in the space of that definition?

When I think of the word radical, I guess I think of, you know, ideas that are outside of the mainstream. And so I think there are two opinions that I hold that might be considered radical with respect to mainstream computer science. One is about, you know, who is a computer scientist, and who in particular is an AI scientist. I think, for example, of questions like: are only people who are developing new deep learning systems really doing, quote unquote, real AI, versus are people who are considering, for example, these issues of responsible AI and ethics also valued members of the community? And of course, I would argue for the latter, right, that people in the responsible AI space are also AI researchers whose contributions are every bit as important as those who are on the forefront of advances in deep learning. Similarly, you know, it relates back to HCI: who's considered a computer scientist? I think some people argue that HCI, because of its interdisciplinary nature, isn't part of computer science.

But I think thinking about computers, and thinking about AI, in the absence of thinking about people and the people who use these systems is not a complete picture of computing. So I would argue that that opinion might be slightly radical. I think my other radical opinion is around the language we use to talk about AI. Frankly, I am bothered by the terminology artificial intelligence. And this goes back to my earlier point about setting expectations and how we communicate to the public at large about our work. You know, when a layperson hears the term artificial intelligence, they think that we're talking about, you know, what computer scientists would call artificial general intelligence.

Right. They think of something that has a human-like intelligence, a semantic intelligence, whereas current trends in machine learning are all about pattern recognition and statistics, with, you know, no semantic understanding and knowledge.

And so the public is worried about, you know, hyper-intelligence that's going to, you know, conquer us and be our robot overlords. And I think that is not really the imminent area of concern.

And instead, we should actually be more concerned about these pattern-based versions of ML that lack semantics and that therefore result in this kind of inadvertent bias and these ethical issues.

So I guess my radical position is that I wish we didn't have the term intelligence in AI, because what computer scientists mean by intelligence and what regular people mean by intelligence is just not the same thing.

This is probably also a good time to mention, while we're talking about being radical, that, of course, in this interview I'm speaking in my role as an individual person rather than as an official representative of Microsoft. So, of course, all these opinions are my own.

For folks out there who are designing these technologies in general, or AI technologies specifically, do you have any advice about how they might be able to center disability studies more in their design?

Absolutely. So, of course, you know, from an HCI approach, interacting with the target community at all stages of the design process, in gathering requirements for what a technology should do, in building, you know, the data sets and models, in testing and evaluating systems with end users, including people with disabilities at all stages of that process, is important. But I think it's also important that we take the bigger picture view around expanding who is participating as technology creators and ensuring that people with disabilities are part of the teams working on technology. And that includes encouraging and mentoring and supporting the careers of more people with disabilities in computer science and related disciplines. I think there are wonderful programs like AccessComputing from the University of Washington or the URMD grad cohort workshops from the Computing Research Association that offer opportunities to support and mentor and grow the careers of a more diverse set of computer scientists.

And for listeners who are interested in this space and maybe want to get in touch with you or look into some of your work a little bit further, where's the best place for them to go for that?

So the Ability team's website is aka.ms/MSRAbility, and our website has the contact information for everyone on the team. It has our research articles, blog posts, podcasts, videos. So I would really encourage people to check out aka.ms/MSRAbility.

Merrie, thank you so much for coming on the show today and discussing all of this with us. It's been a pleasure.

Thank you.

We again want to thank Meredith Ringel Morris for this interview, and as usual, Jess and I are going to break down a little bit of our thoughts and some of our learnings from this interview.

And Jess, what are you thinking about?

I learned so much from this interview, Dylan. I know I say that a lot, but I genuinely learned so much, because I have actually not ever been exposed to ability or disability studies research, especially related to technology, in any way before this interview. So I was just, like, taking vigorous notes the entire time that Meredith was talking. And I think that my immediate reaction is still around this, like, catch-22 that Merrie brought up partway through the interview, where there is such a problem with trying to create data sets for these technologies, to try to create accessible technologies and technologies that are helpful for differently abled communities. And the problem is that in order to get the data for those technologies, you have to find a way to collect it without infringing on the privacy of the people in these communities as well. And I'm still just, I'm struggling with that concept. And I loved the ideas that Merrie brought up and the different projects that are going on, like the one around VizWiz that some amazing scholars are working on. But I'm still just, like, wondering what to do with that. I'm sitting in that uncomfortable space. What about it makes it uncomfortable to you? I think the fact that I don't see an obvious solution. I mean, isn't that, like, the nature of a catch-22? Is that, like, you kind of have to make a trade-off somewhere? It's really hard to find a solution that's actually beneficial for everyone.

Yes, I guess that is the nature of a catch-22.

But part of my question is because I also have a similar, I guess I would say, fear about it and about data collection in general. And what really stood out to me around this concept of privacy was that it's so hard to have anonymity in some of these data sets. So when she used the example of, you know, someone having a particular disability within the Seattle metro area, and if only one in fifty thousand people have that, you know, based on overall ratios, and Seattle has, you know, however many millions of people, even then, right, once you have that data set, you could actually trace it back to those individuals. And part of this around privacy, it's connected to privacy, but also connected to other areas of bias in our, and by our I guess I mean the United States, socioeconomic system and health care system: using, you know, some of that identity, or even some of the scores that an algorithm could create around someone's disability, and that then impacting the price that they might have to pay for health care, which would just further, you know, make that cycle worse and make it much harder for people to have access to affordable health care in a system that already has so many issues with people, especially people with chronic conditions or more regular conditions that they need care for, getting the access and the care that they need.

Well, it's interesting because I remember, I think it must have been about three years ago, I saw this article come out about this health insurance company that was partnering with Fitbit so that people could submit their health data from Fitbit to this company. And I guess, assuming that they had a healthy lifestyle and that they were keeping up with their exercise on Fitbit and that they had good health ratings, a good heart rate, whatever it is that Fitbit measures for humans, then they could get good rates on their health insurance. And when I saw this article, I was thinking, oh, well, that's actually a pretty good idea. I don't know why they haven't thought of this earlier. And now, after listening to Meredith talk about some of the concerns here, I'm thinking, oh God, no, they should definitely not be pairing with Fitbit. That's awful, because now, especially since there are technologies coming out that can predict whether someone has a mental illness or a disability of some kind, that can become scary and really threatening very quickly. Because especially if the people who use these technologies aren't aware of the ways in which they're being tracked and measured, and the ways that the data about those measurements and about those classifications of themselves are being shared with third parties, be it insurance companies or not, that can get really scary really fast.

You know, Jess, I'm really happy that you specified data for humans, because really, I think the real ethical concern here is: what about that data for, you know, animals? You're just saying this because you got a puppy recently. Your mind is on animals. You know, for listeners, check out our Twitter for a picture of my very cute puppy.

But no, I think, Jess, you and I were talking even, like, the other week, I think it came up on my phone that, who was it, it wasn't Apple, it was Google that bought Fitbit.

And so there's also an element, and obviously this episode wasn't necessarily about tracking, but it does raise the concern around, like, who has this data.

Right. And Merrie is, you know, working for Microsoft. And I have great respect for the, you know, ethics of a lot of the teams that are working for Microsoft in terms of research. And also, it falls into the same pitfalls that a lot of other large corporations do around, you know, data protection, data management, because there's just a lot of data. And it's, you know, similar with Google and with Apple and any of these large corporations that just have a lot of our data and just keep getting more. You know, it's the question that we've talked about a lot on the show: like, how can you ethically source that data and collect that data, which we've already talked about here, but then also the use of that data, which can be such a slippery slope. Or at the very least, it can just be a bunch of really well-intentioned people creating well-intentioned algorithms that then harm people that are already at the greatest risk of harm.

Yeah, and we were literally talking about this earlier today, Dylan. Right. That it's not necessarily that the data itself is bad or unethical or corrupt, and it's not necessarily that the algorithm itself is bad or unethical or corrupt. But you can take someone's well-meaning intention of, let's say, for example, creating a technology where we take in user mouse clicks and we can determine whether or not they're likely to have Parkinson's disease, or whatever it is that we're measuring. And that may seem like it has a positive impact on the community. But if you take that into practice, well, there might be some unintended consequences, like, for example, misidentifying someone, and how that data and the information in that labeling is shared. And I mentioned to you earlier today, Dylan, that I have a problem with technologies like this, because they seek to classify humans in a way that is very, very quantitative, and they're trying to quantify something that is so inherently complex and granular, like a mental illness or a disability, that cannot and should not ever be quantified.

Absolutely. And that was part of Merrie's seven points, which were just absolutely brilliant.

And I think that they have far-reaching implications for the disability studies community and then also beyond, to other communities, or just even how we think about the nature of morality and ethics in how we design and deploy our technology. But specifically, one of the things that stuck with me is this concept of expectation setting around AI. And we've talked about it a little bit, and I know we talk about, like, folk theories around AI and other things like that.

But it just keeps coming up.

Right. It's like, what is the reality behind what AI can do and will do, versus what are the stories that we tell ourselves about what AI can do and will do, especially on a societal level? And Merrie brought up, like, the media, and, you know, there can be real harms in perpetuating some of these stories and narratives around the utopia or dystopia that AI is creating or has already created.

That it's already out there and it's either destroying our world or making it, like, completely better. And it can be, you know, much more of a crutch than an actual representation of what's going on in the world. And so I guess the bigger point around that is just, like, how can we engage with those expectations with more reality, but also in a way that helps people?

For more information on today's show, please visit the episode page at radicalai.org.

And as always, if you enjoyed this episode, we invite you to subscribe and rate and review the show on iTunes or on your favorite podcast app. You can catch our new episodes every week on Wednesdays. Join our conversation on Twitter at @radicalaipod. And as always, Happy New Year and stay radical.
