Envisioning a Decolonial Digital Mental Health with Sachin Pendse, Munmun De Choudhury, and Neha Kumar


In this episode we have a panel discussion about decolonial digital mental health with three leading experts on the topic: Sachin Pendse, Munmun De Choudhury, and Neha Kumar.

Sachin is a PhD student in Human-Centered Computing at Georgia Tech, researching the role that technology plays in addressing barriers that prevent people from receiving consistent mental health care.

Follow Sachin on Twitter @SachinPendse

Munmun is an Associate Professor in the School of Interactive Computing at Georgia Tech. She founded and directs the Social Dynamics and Wellbeing Lab, which seeks to develop technologies for improving our mental well-being.

Follow Munmun on Twitter @munmun10

Neha is an Associate Professor at Georgia Tech and leads the Technology and Design for Empowerment lab with a focus on the intersection of human-centered computing and global development.

Follow Neha on Twitter @nehakumar

If you enjoyed this episode, please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

audio.mp3: Audio automatically transcribed by Sonix. This transcript may contain errors.

Speaker1:
Welcome to Radical AI, a podcast about technology, power, society, and what it means to be human in the age of information. We are your hosts, Dylan and Jess, two PhD students with different backgrounds researching AI and technology ethics. In this episode we have a panel discussion about decolonial digital mental health with three leading experts on the topic: Sachin Pendse, Munmun De Choudhury, and Neha Kumar. Sachin is a PhD student in Human-Centered Computing at Georgia Tech, researching the role that technology plays in addressing barriers that prevent people from receiving consistent mental health care. Munmun is an Associate Professor in the School of Interactive Computing at Georgia Tech. She founded and directs the Social Dynamics and Wellbeing Lab, which seeks to develop technologies for improving our mental well-being. Neha is an Associate Professor at Georgia Tech and leads the Technology and Design for Empowerment Lab, with a focus on the intersection of human-centered computing and global development. We were originally connected to Sachin, Munmun, and Neha because they gave a talk at our university, the University of Colorado Boulder, in our department, the Information Science Department, last semester, about a paper that they recently released with some other co-authors titled From Treatment to Healing: Envisioning a Decolonial Digital Mental Health.

Speaker1:
And once we heard them talk about the themes in this paper, we knew we had to have them on the show, and we were really excited to have this conversation with them. So that's one of the topics that we focus on in this episode, but we also extract themes about colonialism and digital mental health generally. As a note before we begin, we did want to mention that this conversation will include sensitive topics around mental health. So please, listeners, do what you need to in order to take care of yourself, and if that means not listening to this episode, that's totally fine as well. But without further ado, we wanted to dive right into this interview. We're on the line today with Sachin, Munmun, and Neha, and we're going to just dive into this conversation because we have a lot to cover today. We are going over decolonial digital mental health, and we're drawing a lot from a paper that this team, amongst others, put out recently called From Treatment to Healing: Envisioning a Decolonial Digital Mental Health. And this project was led by Sachin. So we're going to start with you, Sachin. What are we talking about here? What is a decolonial digital mental health in the first place?

Speaker2:
I know, I know, it can be a mouthful, but I promise you that each individual word of that phrase is very, very important. To simplify it a little bit: if I could summarize it in three words, and if there are three words that I want people to get out of this podcast episode, I would say culture, power, and agency. Actually, maybe I should say those words super slowly in case people are listening to the podcast at, like, 2x or 3x or something. But yeah, I think culture, power, and agency are the most important parts of this, namely the fact that there are cultural differences with regards to how we experience what has come to be commonly known as mental health, and that those cultural differences have intersected with racism, sexism, colonialism, and other prejudices and biases to disempower people from having agency over their care, or from being able to practice diverse non-psychiatric or non-clinical forms of care.

Speaker1:
I think one of the questions I have is: how is this different? What is this pushing against in what currently exists in these spaces?

Speaker2:
Yeah, I think that's a fantastic question. And I think one of the things that's important about it is that, as I mentioned, there are power dynamics embedded in how these tools are designed. So, for example, let's say you post something on Facebook that indicates that you might be experiencing suicidal ideation. It's very likely that Facebook uses something like an n-grams-rooted algorithm to predict that that's happening, and then, without your consent, calls the police or does other kinds of quote-unquote crisis intervention that might actually be harmful for you. I think another thing that is happening currently in the state of digital mental health is that a lot of the classification tools that are used, like the PHQ-9 or the DSM, are used uncritically, as if we have some definitive measure of mental health. One of the things that we hope to do with this paper is complicate, and then actually argue against, the idea that those tools are end-all be-alls for measuring even simple symptoms of mental health. Because as we talk about in the paper, the ways that people express their mental health and the ways that they experience it, even the symptoms that we have, are super, super diverse.
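For listeners who want a concrete picture of what an "n-grams-rooted" classifier means, here is a minimal, hypothetical sketch. The training examples, labels, and scoring rule are invented for illustration; real platform models are far larger, proprietary, and trained on very different data.

```python
from collections import Counter

def ngrams(text, n=2):
    """Lowercase word n-grams: the kind of surface feature such a model might use."""
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label n-gram counts."""
    counts = {}
    for text, label in examples:
        bag = counts.setdefault(label, Counter())
        bag.update(ngrams(text, 1) + ngrams(text, 2))
    return counts

def predict(counts, text):
    """Pick the label whose training n-grams best overlap the input.

    A crude bag-overlap score; Counter returns 0 for unseen n-grams.
    """
    feats = ngrams(text, 1) + ngrams(text, 2)

    def score(label):
        bag = counts[label]
        total = sum(bag.values()) or 1
        return sum(bag[f] for f in feats) / total

    return max(counts, key=score)
```

Even a toy like this illustrates the paper's concern: the model only "recognizes" distress phrased in the vocabulary of its training data, so culturally different expressions of distress simply score as absent.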

Speaker1:
You mentioned some classification systems of digital mental health. Can you give us a little bit of a 101 on what you mean by that?

Speaker2:
Yeah, totally. And I actually think Munmun might be a better person to talk about this first, before I chime in.

Speaker3:
Sure, happy to. So, you know, the historical backdrop, if you think about mental health, is that a lot of the science that we know in this domain originated in the Western world. And the way this has happened over about the last century has been essentially through what is known as the Diagnostic and Statistical Manual of Mental Disorders, or DSM, which identifies different sorts of mental disorders and says: this is the way to think about this specific condition, and these are the symptoms that people experience, and so on. So a lot of the research that we see in digital mental health draws upon that classification system, because that's what is out there commonly. If you look at algorithms that are built on social media data, on smartphone sensing data, or pretty much anything else that you see, even self-reported data, they essentially draw upon these classification systems. The point that we are making in this paper is that these classification systems advance a colonial view of how we think about mental health and how we think about extending care and treatment to the people who need it. And the argument we are making is that there is actually a lot to learn from other cultures, and to think and go beyond these colonized approaches to mental health and digital mental health, drawing upon the agency and the cultural aspects that Sachin was mentioning earlier. I'll stop there.

Speaker2:
Absolutely. And I think I can talk a little bit more about the history of some of these classification systems, because I think that history is really interesting. We talk about this in the paper: mental health is seen as scientific, and there is a science to it, there's a huge field of psychological science. But concepts around this term, mental health, are often used as a veil of scientific objectivity. Because we have metrics, clearly this must be objective, right? And I mean, there are whole fields that push back against this. But these concepts have been used to justify oppression, and we can see this throughout history. So, some examples, with a content warning: we'll be talking about slavery, racism, and forced institutionalization. Psychiatrists during the Civil War era argued that enslaved people escaping to freedom were just, quote-unquote, mentally ill, which was a horrid, really, really racist take. This writing was later taken up by colonial psychiatrists and used to justify racist constructs around what they called detribalization, which they then used to argue that Black individuals were more mentally ill, linking that to violence and saying that they were more violent. And this was again brought back and cited later by segregationists to justify segregation in the United States. These are quote-unquote scientific classifications, and those biases are still a part of how we think about mental health today. We talk about this in the paper as well: if you use the DSM strictly and uncritically, there is a bias towards diagnosing Black men with schizophrenia because of this history of oppression and marginalization. So I think that the actual constructs that are used around mental health have these biases embedded into the classifications, but they are not looked at critically, and the power dynamics that underlie them are also not looked at critically.

Speaker1:
I'm wondering if I could just zoom out one more level and talk about the history of health and colonialism, maybe going over some of the same ground, but defining what colonialism is in the first place.

Speaker3:
Sure. So I think, going back to what Sachin articulated, it is about power, fundamentally, and it's about histories of power: how people have been subjugated by power, and along different dimensions. Some of the work that we do in my lab looks at health in different contexts, but there is a strong element of looking at the Global South. And why look at the Global South? What is it that it presents in terms of histories of power and structural inequities that have impacted access to care? So that's where we're generally coming from: looking at care, looking at access to care, looking at the care work that is done. And this is also something that Sachin has looked at: the care workers, what they bring, and how they are impacted by the care work that they do. And then if we take a step back, I think what's interesting is that looking at it from a decolonial perspective lets us look also at the histories of that access to care and what that does. So in some sense, even thinking about it from the perspective of this method is itself a meaningful act, to say that within the space of technology-mediated care, we are now trying to see how we can bring access to more and more people.

Speaker3:
And this is certainly not just for a few people; really broadly speaking, we are trying to see how technology can help in many different contexts. And going back to this, we've looked at the issue of mental health in this conversation, but we can think about other fields too, looking at and reassessing the ways in which we think about, say, maternal health. That's one of the areas that we've looked at more carefully, maternal and child health, and there as well, the standards that are used, whether they're coming from the WHO or from cultural practices, will look very different when you're actually looking at how they're implemented in health interventions that might be large-scale, that might be trying to assess whether this method is better or that method is better. Eventually, I think the message is that we need to look at the past to understand what's possible for the future. And I think that's really what, to me, this work has been honoring: that we need to look at past knowledge and past perspectives to be able to design responsibly for the future.

Speaker2:
Yeah. And I think you bring up a good point: why use the word decolonial, why talk about colonialism? I want to be clear, and we talk about this in the paper, that we don't see ourselves as engaging in a process of decolonization. Our argument in this paper, and we believe it, is that real decolonization is giving back land to Indigenous people. We cite Tuck and Yang's paper, "Decolonization Is Not a Metaphor," and we talk about how true decolonization is giving Indigenous people back their land; that is decolonization. But what's really important for us to consider, and what we do in this paper, is centering coloniality: the propagation of the history of colonialism and bias. Centering those power dynamics in digital mental health tools is really, really important, given the prospect that digital mental health tools have, as we talked about, to cause quite a bit of harm.

Speaker1:
I've heard us throwing around the phrase digital mental health a little bit in this conversation, and I think that you've all done a really good job describing what colonialism and mental health broadly are in the context of this discussion. But I'm wondering, if we bring it back to technology, what do we mean when we say digital mental health? And I guess specifically, using the language that was brought up earlier, what is technology-mediated care?

Speaker3:
So, unfortunately, there isn't an agreed-upon definition, so let me preface my answer with that. But what is normally accepted is essentially a very broad conceptualization: it could mean technology mediating care, technology creating the point of care, or technology helping people do something around mental health. So let me break it down. It could mean that technology provides insights that may not otherwise be available. It could mean it augments people's abilities to understand their own mental health, or, as a caregiver, to extend help to someone else. It could also mean that it fills in the gaps in our general understanding of mental health and people's experiences around it. And there are examples of research along all of those lines so far. The primary approach has been either thinking about new kinds of applications or tools to do one or more of these activities, or thinking about sources of data that come from these different technologies and using them in the design of tools that extend or help or support mental health care.

Speaker2:
Yeah, I think that's great, super accurate. I think one of the things that's also important about what we do in this paper is that often, when you hear these terms, like mental health technology, computational psychiatry, digital mental health, they take on a very clinical framing: anything from digital phenotyping to public health surveillance studies on what rates of depression or anxiety or different mental illnesses might be. We take a much broader framing. So we include online support groups as part of digital mental health and how we understand it, as well as teletherapy and suicide hotlines. These are all ways that technology becomes a medium by which people are able to get care, so we include them in our broader understanding of what digital mental health is.

Speaker1:
And let's continue with the paper. I was really struck by its primary contribution of three main suggestions for designers specifically, and you all list: one, to center the lived experience of the potential users of their technologies; two, to center the power relationships that may underlie the use of their technologies; and three, to center the structural factors that may broadly influence well-being. And there's a lot there. So I'm wondering if we can start breaking that down a little bit, and the first question that comes to mind for me is about centering lived experience, because I think that lived experience has become in vogue as a term as we continue to talk about human-centered design. The question that comes to me, and the question that I want to pose to you, and maybe, Sachin, you can start: whose lived experience? Where do we start? Which experiences are we trying to focus in on, and perhaps why?

Speaker2:
Well, I have a couple of different answers to this. The first, very simple answer is that we want to understand the experiences of those who are most marginalized. And in talking about this, I borrow quite a bit from the psychiatric survivors movement. Because, like I mentioned, throughout history people who have experienced mental distress or mental illness have been super disempowered. So it's about understanding how different parts of their identity and different dimensions of marginalization play into how they understand their mental health, how they understand their symptoms, and the kinds of care that they have access to. So that's one part of it. But when we think of what that looks like very concretely, I think, as Neha mentioned, we have to turn to history. One of the things that's really cool that we talk about in this paper is Thomas Lambo's work to create a new psychiatric system. So the story is: it's the late 1950s and sixties in Nigeria, and Nigeria has just become an independent state. Thomas Lambo is a psychiatrist who wants to create a new psychiatric system at a place called Aro Mental Hospital. And it's a tough job, because there's a place called Yaba Asylum nearby where, well, there's lots and lots of news about how it has effectively been used as a jail.

Speaker2:
So people do not want to engage with psychiatrists; they're mistrustful of psychiatrists. And I think one of the cool things that Lambo did was allow patients to be treated within the cultural frameworks that they were familiar and comfortable with. So he integrated community medical practitioners alongside European-trained medical practitioners, to have, like we talked about, the kind of care that fit people's lived experiences of care. And Matthew Heaton writes about this in his book: when this happened, a lot of the European practitioners were quite uncomfortable with the idea of bringing in community medical practitioners, but it resulted in quite efficacious treatments for mental health. And in fact, Thomas Lambo and his work took center stage; he eventually ended up working at the WHO to try and center his ideas around understanding lived experiences of care, particularly of those who are most marginalized. One of the other interesting things about this is that at the time, he had a tough line to walk, because he understood that cultural differences were super important with regards to how people understood their care and experienced distress. But the issue was that if he spoke really, really openly about them, there was this whole history of racism,

Speaker2:
Right, in which British colonial psychiatrists said, well, if there are cultural differences, it shows the inferiority of people of color, which is of course super racist. So he had to walk this tight line between acknowledging cultural differences while also not playing into racist stereotypes around mental illness. And one of the things that was really awesome about how he did it is that he tied his research paradigm to the universality of human psychology, while also being considerate of differences in illness experience, care, and marginalization. On that note, one of the things that was really cool about Aro Mental Hospital was that it was completely voluntary: you could leave any time you wanted. There was no incarceration, no punitive forms of care. It was completely up to the person when they wanted to take care and how they wanted to take it. It was completely affordable, too; people were not charged for the care that they were receiving. So, to go back to your question: when we think about how to fit digital mental health care to people's lived experiences, I think there are lots of fantastic examples from history, particularly thinking about those intersections between marginalization and people's individually diverse experiences with care and with illness.

Speaker3:
So when we're talking about centering that experience, I think there are different ways of centering it. Oftentimes we see that we want to do things in the service of people and communities that are marginalized, but there's also the question of how, and I think method is really important. We talked about the method of coloniality here, but even if you look at design methods, or participatory methods, it's really important to think about whether we are disempowering or empowering the people that we're working with. And to people who are working in this space, this is an obvious sort of thing that comes up over and over again, but I just wanted to pull it out. There's also the way that we talk about these things in terms of asset-based versus deficit-focused approaches. When we're looking at people as being marginalized, we can often be looking at them as suffering from lack, at what is missing, and often we're not looking at what they have. I think what we're trying to bring out in the paper is that there are assets that come from people's own position in the world, what their histories might be, what their cultures might bring. So how do we actually honor those assets? How do we take asset-based approaches to designing for and with, as opposed to thinking, oh, this person needs my help and I am going to go and save them?

Speaker1:
Thank you for bringing that up, because I was definitely going towards the question of how, and I completely agree that the historical context informs how we can try to solve, or at least confront, these issues in the modern day. But I also imagine that a hospital in the fifties and sixties functioned a whole lot differently than a machine learning algorithm on a digital platform does today. So I'm wondering, I guess, how, for lack of a better word: how do we center the lived experiences of individuals in a vastly scaled, large, complex digital system like the modern-day Internet?

Speaker2:
Yeah, I think that's a fantastic question. The way that we break it down in the paper is that we talk about interface design. So let's say you're in distress and you start to search symptoms on Google, or you start to look for helplines online, or you tweet, or you post on social media or Tumblr. What is your experience when you do that, and how is it understood, like you said, by these algorithms? We also talk about it, and I'll go in depth into all three of these, in the context of classification and measurement. So let's say algorithms are out there, like the one we talked about earlier, that use passive sensing data or social media data to predict your mental health, using these kinds of classifications to do so, whether you have depression or anxiety. And also privacy, and what that looks like in the prediction of these states. We believe that for each one of these topic areas, and each one of these applications within digital mental health, there are ways that we can fundamentally either foreground or counter those power differentials that underlie some of these algorithms.

Speaker2:
So going to the first example: when you search for mental health stuff online, currently, let's say you search for depression on Google, if you're searching on your phone you might get the PHQ-9. And we talk about this in the paper: Tom Osborn, at Harvard with Arthur Kleinman, a fantastic researcher, looked into how the PHQ has been designed and found that it was not very sensitive to cultural nuances among a certain community of people in Kenya. So depending on the symptoms that you put into Google, you may not be flagged for depression even if you're experiencing what others may call a depressive episode, because of your identity. An alternative may be being more expansive with regards to the kinds of resources that someone gets, and the kinds of symptoms that trigger those resources when someone searches on a search engine. So warmlines, identity-based resources like peer support groups: there are significantly more ways to practice care than the current interfaces of social media, of Google, of different search engines provide. So that's one of the ways that we've thought about it.
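As a purely hypothetical sketch of that alternative, a search interface could respond to a distress-related query with a plural shelf of care pathways instead of funnelling the person into a single screener. Every name, term, and resource below is an invented placeholder, not a real product or a recommendation from the paper.

```python
# Hypothetical "resource shelf": plural care pathways, with a screener as
# just one option among several rather than the default response.
RESOURCE_SHELF = [
    ("warmline", "peer-staffed, non-crisis phone support"),
    ("peer support group", "identity-based online communities"),
    ("teletherapy", "directories of licensed clinicians"),
    ("self-assessment", "optional screeners such as the PHQ-9"),
]

# Invented matching terms for illustration; a real system would need far
# more culturally expansive (and carefully governed) matching.
DISTRESS_TERMS = {"depression", "hopeless", "anxious", "lonely", "sad"}

def respond(query):
    """Return every care pathway for a distress-related query,
    rather than routing the searcher to one diagnostic tool."""
    q = query.lower()
    if any(term in q for term in DISTRESS_TERMS):
        return [name for name, _ in RESOURCE_SHELF]
    return []
```

The design choice the sketch encodes is the one Sachin describes: the interface offers many forms of care and leaves the choice, including whether to take a screener at all, with the person searching.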

Speaker1:
Sachin, would you mind saying what the PHQ-9 is?

Speaker2:
Yeah, totally, totally. Yep. So that's the Patient Health Questionnaire. It's a nine-question questionnaire that asks you questions like: have you been sleeping? Have you been feeling down or depressed lately? And it's used to quantify the extent to which people have depressive symptoms. If they get a moderate to high score, it says, well, this person may have moderate or high depression; you should probably go get this diagnosed by a mental health professional.
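The mechanical nature of the instrument is easy to see in code. This sketch uses the standard published PHQ-9 scoring (nine items rated 0 to 3, summed, with conventional severity cut-points); the function names are ours.

```python
def phq9_score(responses):
    """Sum nine item responses, each rated 0-3, per the standard PHQ-9."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("PHQ-9 expects nine responses scored 0-3")
    return sum(responses)

def phq9_severity(total):
    """Conventional severity bands for the 0-27 total score."""
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return label
```

Fifteen lines reduce a person's distress to one integer and one label, which is precisely the kind of flattening the panel is questioning: nothing in the arithmetic can register a symptom the nine items never ask about.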

Speaker1:
I'm curious about this question of metrics and metrics of success, because we know that in Silicon Valley or otherwise, there's still this question of, well, how do we optimize for this? So, like, we're on board, we're like: yes, everything in this paper, absolutely. Are we optimizing for a specific metric within algorithms, or do we need to tear down the entire algorithmic system? I assume there's some sort of middle ground here, but I'm actually going to throw to Munmun, if that's all right, because I know you've done a lot of work in this algorithmic and mental health prediction space.

Speaker3:
Gosh, I have a very long-form answer, but I'll try to keep it short. There are many problems, first of all, with the Silicon Valley approach to digital mental health. You're absolutely right that there is an overemphasis on metrics, but unfortunately, many times those metrics have nothing to do with mental health at all. For instance, sometimes it's optimized for number of views, or how many times somebody opened up the app or did something on the app, which does not necessarily indicate what mental health really is for that person at that point in time. But that aside, I think what we are envisioning here is to rethink the design of these algorithms. A lot of these algorithms take questionnaires like the PHQ at face value, and that's what a lot of them evaluate their success on. We are not saying that we do away with those approaches, but we are saying that we need to think critically about which questionnaires we adopt, and for whom, and how we evaluate those algorithms. To give you an example, there is so much of a push to take some of these algorithms that are built on tools like the PHQ-9 and use them in populations across the world, even populations who may not necessarily be represented in the training data of these algorithms. What we are saying is that we need to think deeply about the appropriateness of these scales in those populations. Maybe there are other lessons that we can draw upon that are more contextual and sensitive to the cultures and the identities of those populations, and that need to be part of this algorithm design.

Speaker3:
And I think that currently, we find that is not happening. The field of digital mental health has too many examples of bad algorithms; like I was saying, a lot of them do not even measure anything about mental health at all. And I think we need to take a step back and ask: even when we think that we are using a scale to evaluate them, is that the right scale, or the right metric, for the population we are trying to extend help to? The good news, though, is that with the Internet, despite all of its problems, we also now have an opportunity to learn about the diverse experiences of these populations. The Internet is providing a platform so that otherwise marginalized populations can now have a voice. So I think we have a real opportunity to connect more deeply with those marginalized populations, see what it is that we can learn from the experiences they're describing on the Internet, and incorporate that knowledge in the design of the tools that we build. And I want to piggyback on that a bit and say that it comes back to method. How are we learning about the voices that haven't been heard? There's increasing effort within the field of HCI to look at different methods, and there are orientations around design, like design justice, or community-based participatory research; these are all methods that we're increasingly hearing about. And I think there is a need to focus more on different types of methods, and different qualitative and quantitative designs, to figure out how we let these voices surface.

Speaker2:
Yeah, and, Munmun, I'll add on to that. Going back to that study from Tom Osborn, I think one of the cool things that they did is that they used the depression scale, the PHQ-9, as a jumping-off point. They asked people the questions and then said: are there things that are missing? The kinds of things that we are very familiar with in qualitative methods. They were able to find symptoms that better tracked what depression was to this community of people compared to what the PHQ had. And like Munmun said, given the access to data we have, given social media and the Internet, there's this awesome opportunity to foreground and center these kinds of marginalized experiences of distress. So rather than sticking to an archaic scale that was made in the eighties or nineties, we can ask open-ended questions about distress, cluster free-text expressions of how people experience their distress, and then measure efficacy based on improvements in those clusters, rather than some old metric from twenty or thirty years ago.
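The clustering step Sachin describes can be sketched very simply. This is a toy, not the method from any of the studies mentioned: it groups free-text responses by word overlap (Jaccard similarity) with a greedy single pass, whereas real work would use far richer text representations and validated clustering.

```python
def bag(text):
    """Lowercase word set for a free-text response."""
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two bags, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(expressions, threshold=0.2):
    """Greedy single-pass clustering of free-text distress expressions.

    Each expression joins the first cluster whose accumulated word bag
    it overlaps enough; otherwise it starts a new cluster.
    """
    clusters = []  # list of (word_bag, member_expressions)
    for text in expressions:
        b = bag(text)
        for centroid, members in clusters:
            if jaccard(b, centroid) >= threshold:
                centroid |= b        # grow the cluster's vocabulary
                members.append(text)
                break
        else:
            clusters.append((set(b), [text]))
    return [members for _, members in clusters]
```

The point of even a crude version is that the categories emerge from what people actually wrote, rather than from nine fixed questionnaire items; efficacy could then be tracked as change within those emergent groupings.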

Speaker3:
I would also quickly add that there is already a movement in the mental health and psychiatry fields where researchers are arguing to move away from something like the DSM, the Diagnostic and Statistical Manual of Mental Disorders, because it is just so inflexible and doesn't capture the range of experiences that people have. To give you an example in simple terms: if you take two people, both of whom have a diagnosis of schizophrenia, it is totally possible that their experiences of the illness are wildly different, and you wouldn't see a lot of commonalities across their experiences. So the movement is arguing, and this is complementary to what Sachin was describing, to think about other ways to capture that range of diverse experiences, instead of essentially siloing people's experiences into the rigid, inflexible buckets or categories that we today understand to be schizophrenia or major depression or bipolar disorder.

Speaker1:
So on the one hand, I hear that modern technology and the Internet afford this opportunity for us to really understand the lived experiences of people of all different kinds of cultures. And on the other hand, I hear that these algorithms are trained using bad data; they are trying to quantify things that are inherently unquantifiable, inherently qualitative, and inherently local to someone's personal lived experience, and they might be harming people with Western-centric views that are not holistic of their lived experience. And so I feel this tension between the benefits and the harms of digital mental health generally, regardless of whether we're talking about decolonial approaches. So I'm wondering, and this might not be a fair question, but maybe this is the pessimistic tech ethics academic in me coming out: is it worth it? Is technology more helpful or more harmful when it comes to mental health?

Speaker2:
I think that you're right. That is a core tension that all people who work in digital mental health have to contend with every day. So for example, going back to history, Foucault talks about how the chains of the asylum are replaced with what he basically calls surveillance tech, the surveillance technology that enables the surveillance and control of human bodies in the asylum. And the concern, you're right, is that with different forms of tech, technology-mediated data, social media, predictive algorithms, passive sensing, facial recognition, it's really scary to think about the kinds of harm that these sorts of things can be used for. We talk about this in the paper. After two mass shootings, someone in Trump's administration proposed to him that the government could use passive sensing data to create a giant registry of people with mental illness, which is really scary, and which has underlying it the completely incorrect link between mass shootings, violence, and mental health.

Speaker2:
Which, as we talk about in the paper, comes from colonialism, and comes from the globalization of the asylum system. But that's the thing: there's a lot of harm, and you're completely correct in that. But I also think that anyone who was on Tumblr in like the 2009, 2010 era, or maybe that's just me, knows that it can be really empowering to be able to find spaces where you can express your mental health, and find others who have experienced the same kinds of marginalized symptoms that you have. And our argument in this paper is that, yes, that tension exists, and part of the responsibility of designers, algorithm designers, researchers, is to build in protections to ensure that people are able to get the kind of community and the kind of collective that comes from meeting other people who have similar symptoms, while minimizing that risk of harm. We talk about this in the paper: building in accountability, social transparency, explainability. We also talk about the fact that, and this is something I know Munmun has thoughts about, it's super important that there come regulations on how mental health data can and cannot be used.

Speaker2:
Because right now, digital mental health is kind of a wild, wild west when it comes to data protections. Some things may be covered by HIPAA; other things may not. I think those regulations are important. But I also think that we can't always depend on state power to protect us from those forms of oppression. So it's a combination of both: designing tools that by design make it very, very difficult for data to be extracted in ways that can harm us, while also having regulations and other forms of legal mechanisms that keep that data from being used for harm. But you're right, I think that there is a fundamental tension there, and I think it's a responsibility that we take on, being in this field, to think about what that tension might look like and the harms that are associated with it.

Speaker3:
Yeah, I'm on the same page as Sachin. It is a double-edged sword, like technology is in many contexts. And the field of digital mental health is very nascent right now, so this is the time for us to think about how we ensure that the benefits of what we are building actually outweigh the risks. I am being very pragmatic here: I think whatever we do, we are never going to be able to completely do away with the risks. They are going to be there. But the opportunity we have is that it's early on in the trajectory of this field. So our arguments around decolonizing the design and the evaluation and so on of these technologies are one way to think about how we can really take the benefits that these kinds of technologies can potentially offer while trying to minimize the risks. I mean, to be honest, we have examples on both sides, right? We know that there are teenagers who saw self-harm posts on Instagram and then took their lives. We also know of cases where someone posted on Facebook that they were experiencing a crisis, and then a friend or a family member was able to connect with them and prevent that from happening. So we have evidence on both sides of what is possible and what is dangerous. I think we are at a point, like Sachin was saying, where it is our responsibility to think about how we go forward from here. Regulation, personally, I think is one path forward to tackle some of those challenges. And so is a ground-up approach, where the next generation of digital mental health researchers and practitioners center our approaches on a decolonial method, so that we are thinking about these risks from very early on, instead of five or ten years later, when we look back and realize that we got it all wrong from the beginning.

Speaker1:
One thing I've been thinking about, and this ties into my own research talking with people who are struggling with mental health and using some of these systems, is: these are massive stakeholders, massive design questions, and big-picture questions. But for folks who are just navigating these systems, as experts yourselves, what would you say? Obviously we're going to put resources in this episode in terms of mental health and crisis lines and things like that. But besides creating more resources for helping people navigate these systems, what can everyday people who are struggling with mental health do, so that they don't get taken advantage of by these systems, or so that they don't find themselves in situations where the systems are not helping them or are actively harming them?

Speaker2:
Yeah, I think that's a fantastic question. And I think there are a couple of different approaches. One is to make use of the kinds of resources that are recommended by search engines, like national crisis lines, Trans Lifeline, the Trevor Project; there are a bunch of different helplines that people can take advantage of. There are also new tools that help people get connected with therapists who fit their identity-based needs. But you're right: particularly for someone who is experiencing mental health struggles, it is really difficult to find resources that work for you. Our own research talks about this. So I think another thing that's really, really important, both from an individual side as well as from a tech company, algorithmic side, is to surface things like warm lines and peer support portals. I think those are some of the keywords that individuals in distress can search for to find care that is safe for their needs and for what they're looking for. And just to explain what some of those things are:

Speaker2:
Crisis helplines are generally staffed by a wide variety of people. Warm lines are staffed by people with lived experience of mental health concerns. Generally, and of course this is something that needs to be foregrounded, something that should be really evident but sometimes is not, they do not call the police when people are in crisis, particularly, I think, because the people who staff the line know what it's like to experience really severe mental health distress, and what engagements with quote-unquote care that are actually harmful can look like. So for a person who is in distress, searching for things like peer support can help. In fact, one of my mentors and friends, Stephanie Lynn Kauffman, is someone who taught me a lot about this: peer support can be really, really incredible, particularly when the state is not there for us when someone is in need of help. Warm lines, additionally, are one of the keywords to search for when someone's in distress, if they're looking for things that are outside of the typical medical clinic establishment when it comes to care.

Speaker1:
We've talked about a lot of really deep and difficult topics in this conversation today. And so something that we'd like to close this conversation with is a little bit of the opposite, something that's a little more bright, cheerful, and hopeful. And so for each of you, I'm wondering: what is something that you are feeling hopeful about in the space of decolonial digital mental health?

Speaker3:
I can go first. The last two to two and a half years of the pandemic have been difficult, right? It's been difficult for me personally in different ways, but nowhere close to some of the difficulties that I'm sure other people have experienced. One positive thing, though, that I see is a more open dialogue around mental health. I'm seeing it in person, and I'm seeing it happen on the Internet and in other digital spaces. I'm hopeful that this is not just a temporary thing, that it is here to stay, because we need that dialogue. We need it because there is a long way to go in terms of destigmatizing these experiences. And that goes hand in hand with some of the goals that we chart out in a decolonial mental health view or perspective, which is not thinking of these experiences as a source of shame or stigma, but thinking of them as yet another experience, and then looking to our community, as community members and as allies, to ask how we support that person going through that experience. We are not going back to 2019, I guess, for better or worse. But on a positive note, I hope we continue this dialogue around mental health going forward.

Speaker2:
I can go next. I think very similarly. Like I mentioned, in my youth I was a Tumblr and Reddit user; I would post on all the mental health communities. So maybe you can call me a part of the old guard. But one of the things that's really great that I've seen during the pandemic is that on TikTok, on Twitter, on different forms of social media, suddenly I'm seeing, and I've been researching this for many years now, an awareness around how culture influences our responses to collective trauma, and how we understand and experience mental illness. I'm seeing a greater awareness of that in spaces where I never expected to see it. And I think there's a complication of the narrative that mental illness should only look a certain way. It's really hopeful for me to see that even kids, youth, adolescents, young college students are having these conversations about how mental health does not always need to look like what the DSM prescribes it to be. And that gives me a lot of hope for what the future of treatment can look like for people who have experienced symptoms that they've never seen represented before.

Speaker3:
Adding on to what was said by both Munmun and Sachin, I would say I've certainly been really hopeful just thinking about how willing and open the HCI community and the scholarly community have been to our work. And I think that says something about just how willing people are to take on different perspectives, to try to understand where knowledge comes from and whose knowledge counts, and what are the partial perspectives that we all have. And not just people who are in the room, but people who are outside, people across the world. That's been really amazing to see. And I really hope that that trend continues: being more and more willing to see where knowledge is created, whether it's in the topic of the research that we're looking at or in the scholarly perspectives that we take.

Speaker1:
And I think one of the powerful things about the work that you three are doing, both individually and collectively, is opening that door for more conversations, more voices being represented, and continuing to take the risk to have these conversations that don't always fall in with the status quo, and sometimes actively go against the status quo. So thank you for the work that you're doing, and also thank you for joining us today. We will make sure to link the paper in the show notes and talk a bit more about it during our intro and outro. But for now, thank you three for joining us. We want to thank Sachin, Munmun, and Neha again for joining us today for this wonderful panel conversation. And Dylan, as somebody who is also researching a lot about digital mental health in some different contexts, I would love to hear your initial reactions to this conversation. Yeah, I thought it was a great conversation, partially because these are people that I cite all the time in the papers that I'm currently writing. So it was a real pleasure to be able to talk and to learn. And I think in a lot of ways some of what they are currently publishing about decolonial digital mental health will work its way into my own work and my own theorizing, especially the work that I'm doing right now around suicide, bereavement, and suicidality, which is a space where there's some support and some research.

Speaker1:
But folks are still figuring out how to do that research well. And so one of the things that I'm gleaning from this conversation, and hoping to put into practice, is questions around methodology, and the methodology that we use. Like, where can computation be used? Where can it be successful and ethical when it comes to digital mental health, measuring mental health, monitoring, and implementing design? And where might there be barriers, or hidden, privileged value systems embedded within that measurement itself, that may in some cases lead to active harm in how we do our research? Those methods, and how we implement them, are something that I'm going to continue to reflect on, both in the computational and also in the qualitative: what questions do we ask in the first place, and how do we represent voices more generally, and represent just more voices? So that's a little bit about what I'm thinking and how I'm thinking about applying this stuff. What about you? I think I'm actually having a bit of a similar reflection as you, around the phrase that you used, which questions should we ask in the first place. I'm thinking along similar lines, but instead of the questions that we're asking, it's the systems that we're designing, and which systems are appropriate to design in the first place.

Speaker1:
And mental health is a really interesting space when you apply it to the digital world, because it's easy to get really hyped about its potential, like with Facebook's predictive mental health machine learning algorithms that they've been working on, which we talked to Stevie Chancellor about a little bit in a previous episode. That has amazing potential to be this really wonderful system that helps Facebook predict if people need resources or additional help. But as we discussed in that episode, there's a lot of potential for unintended harm as a consequence of deploying such a system, especially for people who might not want it. And I think about that when it comes to a lot of the ways we digitize mental health care for people. And I was really, really hopeful, but also unsure about my own feelings, when we were talking about the pros and cons of applying technology to mental health in the first place. Because I also totally agree with everybody on this panel that there are some amazing affordances in the digital mental health care world, especially when it comes to platforms for people to discuss their symptoms and their experiences with one another and to not feel so alone. Honestly, in my personal opinion, I think that might be the single most powerful modern technology that exists today for digital mental health.

Speaker1:
But then I think about a lot of these more predictive systems and algorithmic systems that have to rely on data, aggregating users, and making assumptions about people. And that feels like dangerous waters to me. I feel like that has a lot of potential, but just as much potential for harm as for good. So I am very, very grateful for the strides that are being made in the digital mental health world, for some of these modern technologies to combat issues that people didn't have solutions or remedies for pre-Internet. But I'm also feeling a little bit concerned about where some of this hype could take us, and the harm it could lead to. I'm glad you brought up our conversation with Stevie, our interview with Stevie, because I see a lot of similar topics being brought up in this interview. The first is the fact that we haven't figured out mental health generally in our society, in terms of social support, and also in terms of unequal access. And so to now bring in the technology and the machine learning, embedding these values in these systems, while we're trying to put them in partnership with, but also translate, some of the stuff from the non-digital space and these social structures that we haven't figured out: it's just a lot.

Speaker1:
And this includes some of the colonial and decolonial considerations as well, about power and how power plays out both in technology and in the social world beyond that, in the mental health support world. Another thing that came up both in our conversation with Stevie and in this conversation is that at this point, we're kind of at a point of no return. It's not a question of if there's going to be this world of digital mental health, or if algorithms are going to be used. They're already deployed; they're already being used. It's a question of how they're going to be used and how they're going to be built. And this conversation sounded very pragmatic, which I really appreciated, around the question of, well, how do we put these things into practice? Based on the work that they're doing, and the work that some of my lab mates are doing, it's a very live question how to do any of this well, which is what we keep coming back to: okay, how do we actually do this pragmatically? And I think there are maybe not clear solutions in the work that these three authors and their labs are presenting, but there are ways that we can simply think about these things differently. And again, I think that's a real power in the work that they're publishing as well: how can we shift how we think of these things and either make new assumptions, or just not make the assumptions that can cause harm as we're deploying these systems? Yeah.

Speaker1:
Hearing you say that made me reflect a bit. I feel like I've been really focused on the word scale in our episodes this year for some reason. That's the question that I keep coming back to, and maybe it's because I've been working in an industry context. Actually, I don't know if I love scale; I think I just live in scale, so I am constantly reminded of it. You do love discussing scale. Yeah, honestly, upon reflection, I feel like it might be because I've been so deeply embedded in industry for the last eight months that it's just been beaten into me to think about how to scale solutions. And when it comes to mental health and the types of technologies that we're discussing in this episode, and especially local experiences and lived, personal, individual experiences, I just get so wrapped up in this tension between lived experience and scale. Because I know, personally, if I have an individual therapist who's working with me on a mental health issue that is so incredibly unique to me, the things that I'm going to need for treatment or for care or for support are so incredibly unique to me. Even if a sibling of mine was experiencing the exact same mental health struggle as I was, I think they would need entirely different care than me.

Speaker1:
And now, when we try to take technology and create some sort of scalable solution that averages people's experiences and needs for support, mediation, and care, I really feel a little bit pessimistic about technology's capacity to do this. But I also see the potential for some really incredible interventions that could happen at scale to help people and to provide support for them. So that's where I'm sitting. I'm living in this tension between scalability and local, independent, or individual lived experiences. And I think that tension will always exist in digital mental health; I don't think it's going anywhere. Absolutely. I think these are questions that we're going to be asking for a long time to come. But for more information on today's show, please visit the episode page at radicalai.org. There, as we referenced in the episode, you'll also find some of the resources, including this awesome paper that we were highlighting in this conversation. And if you enjoyed this episode, we invite you to subscribe, rate, and review the show on iTunes or your favorite podcatcher. You can catch our regularly scheduled episodes on the last Wednesday of every month, with some bonus episodes in between. Perhaps join our conversation on Twitter at @radicalaipod, and as always, stay radical.
