Data Privacy and Women’s Rights with Rebecca Finlay


What is the reality of data privacy after the overruling of Roe v. Wade?

In this episode we interview Rebecca Finlay about protecting user data privacy and human rights, following the US Supreme Court ruling of Dobbs v. Jackson Women’s Health Organization.

Rebecca Finlay is the CEO of the non-profit Partnership on AI, overseeing the organization’s mission and strategy. In this role, Rebecca ensures that the Partnership on AI and their global community of Partners work together so that developments in AI advance positive outcomes for people and society.

Follow Rebecca on Twitter @RFinlayPAI

Follow Partnership on AI on Twitter @PartnershipAI

If you enjoyed this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at @radicalaipod.



Transcript

Speaker1:
Welcome to Radical AI, a podcast about technology, power, society and what it means to be human in the age of information. We are your hosts, Dylan and Jess. We're two PhD students with different backgrounds researching AI and technology ethics.

Speaker2:
In this episode, we interview Rebecca Finlay about protecting user data privacy and human rights, following the US Supreme Court ruling of Dobbs versus Jackson Women's Health Organization. If you're unfamiliar with this ruling, here's a very quick description: on June 24, 2022, the Supreme Court of the United States issued a landmark decision holding that the Constitution of the United States does not confer a right to abortion. Although this news was local to the United States, we also discuss the global impact of this ruling in our interview with Rebecca.

Speaker1:
Rebecca Finlay is the CEO of the non-profit Partnership on AI, overseeing the organization's mission and strategy. In this role, Rebecca ensures that the Partnership on AI and their global community of partners work together so that developments in AI advance positive outcomes for people and society.

Speaker2:
In this episode, we do discuss some sensitive topics, so we recommend that you take care in the ways that are best for you. And with that, we'll head into the interview. We are on the line today with Rebecca Finlay. Rebecca, welcome to the show.

Speaker3:
Thank you so much. It's a real pleasure to be here. I listen to your podcast often and I'm really honored to have an opportunity to participate and talk about this important topic.

Speaker2:
Yes. And we are honored to have you on the show to talk about this topic. It is unfortunately not the best of news, but it is, as you said, very important to discuss given current events in the United States: the aftermath of what is called the Dobbs versus Jackson Women's Health Organization ruling from the Supreme Court. So we were hoping to just sort of begin at the beginning here. If you could just paint a picture and set the scene of what this ruling is, what it means and how it relates to technology.

Speaker3:
Happy to do so. Yes. So on June 24th, the Supreme Court of the United States held that the Constitution does not confer a right to abortion and returned the authority to regulate abortion to the people and their elected representatives at the state level. And so with the decision in Dobbs v. Jackson Women's Health Organization, they overturned the long-standing ruling in Roe v. Wade, which granted access to abortion for individuals living throughout the United States. The immediate implication of that ruling was that a number of states had in place what were called trigger laws, laws that would regulate and constrain access to abortion upon the decision by the Supreme Court. So that meant that several states immediately were able to impose many constraints, in some cases complete bans, on access to abortion for individuals living within those states. And since then, there have been a number of other rulings, and there are more that are expected to come. And as a result of that, I became very, very concerned about the information that women and individuals had been providing through health apps, or through the platforms and browsers that they were using, information that they would have thought of as not being high risk at all and that suddenly was high risk. And so those implications and the concern about privacy are what led me and the organization to issue the statement that we did on the day of the ruling, which was really to draw attention, in the first instance within the AI community in the US, but then even more broadly, to the ways in which we needed to double down on protection in order to protect the rights of women and individuals who no longer have access to medical care that they had had previously.

Speaker1:
And when you say high risk, are we mostly talking about privacy? And if so, can you say a bit more about what we mean by privacy?

Speaker3:
Yeah, happy to do so. In this case, I think the concern is largely about privacy, when we think about privacy broadly. So in the first instance, that could be things like someone's browser history, if, for example, they were searching for information about access to medical services like abortion. It could be location information, both in terms of geolocation on their mobile device and in terms of where they are based and where they're searching. It could be the information that they're providing through a health app. There's been a lot of attention focused on what are called period tracking apps and other ways in which women may be tracking their health and well-being. Those could be collecting private health data that could be accessed and potentially provide information about both their health and their reproductive status as well. And I think the concern around privacy really is that we know data can be re-identified even if it's anonymized in some way. You know, there are studies that show that with just four data points, you can identify the individual to whom that data belongs. So we're really thinking about privacy in this much broader way. I mean, there's been a number of initiatives to really sort of dig into this. Following the Dobbs decision, you may have seen, for example, the work that Mozilla has done looking specifically at what they call period and pregnancy tracking technology and apps. They looked at 25 different apps that provide these services and really had quite a lot to say in terms of their extreme concerns about the protection of the privacy of their users, across all sorts of different apps used in all sorts of different ways.
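To make the re-identification point concrete, here is a minimal, purely illustrative sketch (not from the interview; the records, field names, and numbers are invented) of how a handful of supposedly anonymous attributes can still single individuals out of a dataset:

```python
# Illustrative sketch only: how few "anonymized" attributes it can take to
# single someone out. The records and field names here are entirely made up.
from collections import Counter

records = [
    # (zip_prefix, birth_year, app_used, clinic_visit_week)
    ("802", 1991, "cycle_tracker", 27),
    ("802", 1991, "cycle_tracker", 31),
    ("803", 1988, "fitness", 27),
    ("802", 1993, "cycle_tracker", 27),
    ("803", 1991, "cycle_tracker", 31),
]

# Count how many records share each exact combination of quasi-identifiers.
combo_counts = Counter(records)
unique = [combo for combo, n in combo_counts.items() if n == 1]

print(f"{len(unique)} of {len(records)} records are uniquely identified "
      "by just these four fields.")
```

The same logic scales up: the more quasi-identifiers a record carries, the more likely its particular combination is unique, which is why anonymization on its own is a weak protection.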

Speaker2:
So when I hear in the news about these different apps being used and this data being collected, as a woman myself, it scares me and makes me unsure about what apps I should be using and what data I should be giving to large tech corporations. And I'm wondering if, in practice, this privacy risk is actually something that I should be concerned about. What are these companies actually doing with this data? Is the government knocking on Big Tech's door and asking for access to this data now because of this ruling? Or is this something that we might fear in the future, while right now we're still protected and it's okay? What is the reality of these privacy concerns?

Speaker3:
That's a great question, because I think there really is a spectrum of opinions with regard to how high risk the data is within those different apps or browsers that you might be using. So, for example, there are cases where government and law enforcement agencies have asked private companies to provide data with regard to their users. And there are some specific examples with regard to health data and otherwise as well. So I think one of the concerns is what is in place within companies to minimize the amount of data they're collecting in the first place, so they wouldn't have the data if it were asked of them by a state or government or law enforcement agency, but also how the companies themselves will protect the private data of their users, should in fact that request be made. So I think there is a difference of opinion when you read across the community, but there are examples where this has happened in the past, and I think the concern is that this could potentially be an even greater issue going forward. And one of the interesting perspectives on that end of the spectrum is that there are all sorts of companies now that act as data brokers, whereby data is bought and sold off of a variety of different apps and platforms. And so the question is, what are the protections? You may be protected in the first instance by the first technology that you're interacting with, but what happens if that data is then sold and repurposed in some way?

Speaker1:
What is the downstream impact of that repurposing of data? Like what is the real fear on the ground for people about what will be done with that data?

Speaker3:
Well, I think there is an element of not knowing what's happening with your data. Now, I think, as you say, there are things that individuals can do and we all should be doing anyway, whether or not we're thinking about private health data or other private data, in terms of what we're sharing and how we're limiting access to and use of it in our use of technology. I do think the Mozilla work looking at apps really tried to understand what the different implications were, and one or two of the apps they rate much more highly than others. So consumers do have a choice. You can choose what apps you use, what platforms you use, how you send information online to ensure that it's encrypted. There are ways in which you can protect yourself.

Speaker2:
And something else that I'm wondering, beyond I guess the individual response, which could be to opt out of the app entirely or to choose what kind of data to send in, is what can people who are a part of these larger corporations, organizations and institutions do to help with some of these concerns and these higher risk scenarios? I know there's a lot of different places that we could take this, so maybe let's start with those at tech companies who are working in the health sector. What are things that they can do to try to help alleviate some of these really important risks?

Speaker3:
Great question. And that was really the community that I was trying to speak to when I issued the statement. So clearly, companies, and individual tech workers at companies who are working on health applications, should really be thinking about how they limit collecting, retaining, selling, transferring, or otherwise using information that is provided to them, particularly when it comes to a person's reproductive health, but of course health data more broadly. And that really includes questions around location data, browsing and search history, emails, any data that is specifically tied to reproductive health. And so not only should you be minimizing the collection of data, but you should also be protecting the data you do collect. This really comes to the question of encryption, and ensuring that the data is kept securely and is not provided, for example, to third parties. And, you know, there are other, more general things technology platforms can do. They could really think about making sure that they're keeping access to abortion information open and available to their users as well. This could be through their content moderation policies or their practices, but making sure that individuals do have access to information around reproductive health and potentially other areas of health and well-being moving forward. And then, as I said, really interrogating those requests that come in from government agencies who are seeking to access the personal health information of their users as well. There are some very specific things around geofencing and geolocation and how to protect that data as well. But those core elements of data minimization, data encryption and protection, and then open access, in terms of content moderation and otherwise, to this information are critically important.
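As one concrete illustration of the encryption practice described above, here is a minimal sketch of client-side encryption, where a service retains only ciphertext and the key stays with the user. The library choice (the Python cryptography package) and the field names are assumptions for the example, not anything Rebecca or Partnership on AI prescribes:

```python
# Minimal sketch of client-side encryption: the service stores only ciphertext,
# and the decryption key never leaves the user's device.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In a real app this key would be derived from a user secret and kept on-device.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

health_entry = b'{"date": "2022-07-01", "note": "cycle day 14"}'  # made-up record

# This is all the service ever stores or transmits:
ciphertext = cipher.encrypt(health_entry)

# Only the holder of the key can recover the plaintext.
assert cipher.decrypt(ciphertext) == health_entry
print("stored blob:", ciphertext[:24], b"...")
```

Under a design like this, even a compelled disclosure of the stored records yields only encrypted blobs unless the user's key is also obtained.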

Speaker1:
One stakeholder that I'm thinking of when we talk about reproductive health is the health system. And so I think about health data, health data privacy and HIPAA, and there's a lot of complexity in that as well. But I'm wondering how you see health data and health data privacy interacting with these other topics, like apps that don't have to be accountable to some of those systems.

Speaker3:
Yeah, I think that's one of the things that anybody who's using a privately developed app should really take a good look at, because oftentimes health information in those apps sits outside of the protections offered by HIPAA, the Health Insurance Portability and Accountability Act, so you really need to know your rights when you're using those sorts of things. I think one of the interesting things was that following the Dobbs decision, the Biden administration did issue an executive order asking the Federal Trade Commission to consider steps to protect consumers' privacy when they seek information about the provision of reproductive health care services, and to consider additional actions, including potentially under HIPAA, to protect sensitive information related to reproductive health care. So I do think there is an important role as well for those governments that want to ensure protection of access to those reproductive rights, and potentially there are some mechanisms there through that particular order.

Speaker2:
Now, I'm no lawyer and I do not have a background at all in policy, so I'm assuming some of our listeners as well probably have not read the full Supreme Court ruling and don't understand the nuances of what the government is actually allowed to do when they're trying to access data and trying to learn who to possibly incriminate for breaking these new laws. And what I'm wondering is, pragmatically, what is the government allowed to ask these tech companies for? What are they allowed to know and what are they allowed to do?

Speaker3:
Yeah, so I am also not a lawyer, so I want to just absolutely say that front and center. In terms of responding to the question, law enforcement agencies and government, if they feel that a law has been broken, can request information from companies with regard to their users' activity. What that means under this particular ruling, and in those states that have decided to put in place more restrictive regulations and rules, remains to be seen. But there are definitely many companies who are responding frequently to requests of those sorts. In fact, one of the interesting announcements that did come out in July was that Google announced that they were going to delete user location history for abortion clinic visits. They really wanted to get ahead of the potential requests that might be coming in with regard to this information about their users. So they announced that they were going to delete user location history whenever their users visited an abortion clinic, a domestic violence shelter or other similarly sensitive places, with the idea being that this would really limit the amount of information should they be asked to provide it for potential prosecution.
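As a rough sketch of what that kind of scrubbing could look like in principle (this is not Google's actual implementation; the coordinates, radius, and function names are invented for illustration), a pipeline might simply drop any location ping that falls within a radius of a list of sensitive places:

```python
# Rough sketch (not any company's actual pipeline): drop location-history
# points that fall within a radius of known sensitive places.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

SENSITIVE_PLACES = [(39.7392, -104.9903)]  # made-up coordinates
RADIUS_KM = 0.5

def scrub(history):
    """Keep only pings that are not near any sensitive place."""
    return [
        (lat, lon) for lat, lon in history
        if all(haversine_km(lat, lon, s_lat, s_lon) > RADIUS_KM
               for s_lat, s_lon in SENSITIVE_PLACES)
    ]

history = [(39.7391, -104.9902), (39.7000, -105.0000)]
print(scrub(history))  # the first ping, near the sensitive place, is dropped
```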

Speaker1:
But I am wondering why more companies in this space who are implicated here are not taking more steps. Is it because it's unclear what those steps are or is it because there's just like a huge value gap between some of these stakeholders?

Speaker3:
Yeah, that is a question I can't answer. I think we could speculate on all sorts of reasons why. It's tricky, and it's clear that we know how important data minimization, protection and encryption are with regard to data usage. One can only hope that some of the work that's being done, for example by Mozilla and by some of our partners like the Electronic Frontier Foundation and others, will really advocate for changes to happen in that space.

Speaker2:
Something that we've discussed before in different interviews on this show, and that's also just a broader topic in the responsible tech community, is the way these kinds of decisions, whether through the Supreme Court or through technological design, impact people disparately. Sometimes certain people are more disproportionately disadvantaged by a decision than others. And I'm wondering, with this ruling in mind, obviously this is targeted towards women, but beyond that, are there certain communities that are disproportionately disadvantaged because of this decision?

Speaker3:
Well, yes. And of course, the communities that are most disproportionately disadvantaged are those individuals who live within states where they no longer have access to this reproductive health care and may be surrounded by other states where they don't have access to care either. So if you are disadvantaged in terms of your economic capacity to travel, to take time off of work in order to be able to get to a clinic that's quite far away. There's a really interesting New York Times map that shows the hundreds of miles that some women and individuals would have to travel in order to get access to those clinics. So we know that those are communities that are marginalized, they're often racialized, and they just do not have the resources to be able to get to the care that they need. And then, of course, in addition to women, I also think about the professionals who are providing this care and may be putting themselves at risk in terms of trying to help and serve communities that may no longer have access to abortion services that they may have had in the past.

Speaker1:
Can you say a bit more about the providers in this? Is there a technological perspective, not even necessarily tied to the ruling, but a sense of how technology is either assisting or acting as a barrier, in terms of the clinician perspective on reproductive care?

Speaker3:
Yeah, I think it's just exactly the same issue as one would think about for an individual who's looking for information. Right? If you're an individual who's trying to provide information online, then of course you want to be very sensitive with regard to your personal information and how you go about protecting it, while at the same time making sure that you're able to promote and share the places where you could provide support. So there are the same issues around making sure that you're protecting your privacy, that you are using services that allow you to encrypt your messaging and all those sorts of things. And again, the risk really does depend on where you're based and how you're offering those services.

Speaker2:
I think we've done a pretty good job now of painting an unfortunately bleak picture of the reality around this issue. On this show, we do like to be realistic and to share the things in the news that are maybe not so great. But we also like to paint a vision of the future and ways to alleviate some of these unfortunate challenges and negative consequences, whether intended or not, that are happening in society. So I guess transitioning from the bleak present to the hopeful future, what are some things that you are feeling hopeful about, maybe design decisions or regulatory efforts or group efforts, to try to help improve some of these issues going forward?

Speaker3:
Yeah. So I think there is just this remarkable community, many of them women, who are really leading in the field of AI and health care and are beginning to really understand, as you so well cover on this particular podcast, all of the socio-technical challenges related to deploying algorithmic systems, all the way from biases in the data sets through to biases in the structures and the systems within which algorithmic systems are being modeled. And so I think there's a much better understanding of some of the challenges, and an exploration of some of the opportunities in that work is beginning. Some of the work we've been doing over the last year is in partnership with a coalition for health care and AI, specifically trying to look at what are some of the best practices, guidelines and guardrails that really need to be put into place if we're thinking about deploying AI in the health care system, and what it means in terms of inclusivity, designing with equity, and making sure that the decisions and predictions that come out of these models are reliable and trustworthy and as transparent as they can be. So I'm very happy to see that community really emerging around this question and around the possibility of thinking through what this means. I would also highly recommend, to anyone who hasn't had a chance to see it, this amazing list of resources developed by the Center on Privacy and Technology as part of their Color of Surveillance: Policing of Abortion and Reproduction reading list. I hope we can put it up on your website afterwards. It's a great list of stories and essays and articles specifically looking at the intersectional ways in which health care, women, reproduction and technology have intersected over the last several years.

Speaker1:
I was struck at the beginning of that answer when you were talking about who is doing some of this advocacy and research; you said many of them are women. And that's something that I've heard from a fair number of colleagues who are either researching or doing advocacy with this work. And I'm also, again, aware of my position as a man in this space, and I'm not seeing, even after this decision, especially around technology and the technology sector, a lot of folks who look like me speaking out against this. So I'm wondering, almost as an aside right now, for folks who may be male, whether they're impacted directly or indirectly by this issue, what would you say? What would your invitation be?

Speaker3:
Well, I always recommend one particular research project, which I think is illustrative of how we need to think about applying AI in health care settings. One of the leads on it is Mark Sendak, so it's Sendak et al., published in the ACM proceedings. It's called "The human body is a black box": supporting clinical decision-making with deep learning. You probably know this well. This is the work that was developed around the deployment of Sepsis Watch, which is a deep learning model to predict the severity of infection that can come from sepsis in patient populations. And one of the amazing things about this particular research project is that from the very beginning they explored the deployment of this model as a socio-technical system that required integration into the existing social and professional context within which they wanted to deploy it. So it was interdisciplinary in nature right from the very beginning. It worked very, very closely with the teams of health care professionals who were already in place in the clinical setting to better understand how to deploy the model and make it work for them.

Speaker3:
And they came up with four key values and practices. First, rigorously define the problem; as we know, it is often right there in how the problem is defined that biases and assumptions get integrated into the work, so rigorously define the problem in context, that is, in the context within which it's going to be deployed. Then build relationships with stakeholders, respect professional discretion, and create ongoing feedback loops with stakeholders. So there's this real notion of intentionally questioning each step of the development, all the way from how we define the problem in context, through to the data, through to the model development, through to the hard work of understanding how to integrate it into professional health care practices. I think it's a really great piece of work, and it really speaks to how we need to be thinking about deploying AI so that it's responsible, trustworthy and really focused on all of the issues that we know are so intensely critical when we're thinking about health care settings.

Speaker2:
And thinking a bit more about how this ruling implicates the future of technology design, I'm stuck a little bit on privacy. We mentioned a bit of this earlier in this conversation, but I'd love for us to go a little bit deeper on privacy design and technology. This was something that was really fascinating at the beginning, when this podcast launched in April 2020, and we spoke with Zeta Garces in one of our very first episodes about COVID tracking, and how privacy implications and health were just a really contentious topic that people didn't really know what to do about, because there are some benefits of tracking and collecting data on the one hand, but then there are these obvious negative consequences that mostly have to do with privacy when it comes to collection or misuse of that data. So I'm wondering how this ruling might potentially change the trajectory for privacy design in technology in the future, or ways that it could inspire or motivate us to do better?

Speaker3:
You know, I think one of the interesting things about privacy is all the dimensions of privacy and all of the ways in which we think about it from where we sit, from the context that we're in, and all of the elements therein. I know that Zeta would have spoken at length about all of those questions specifically. That question of COVID and tracing apps really put a fine point on it, and that was some of the work that I had done as well during that time. But I do think there are some really interesting questions for privacy beyond just individual privacy and the concerns therein. We've been doing some of this work related to algorithmic fairness and decision making: how do you better understand the challenges associated with ensuring that algorithms are fair and therefore are not discriminating, and what are the implications in terms of how much data needs to be in the data set, what data is in there and why it is in there, both historically and structurally and otherwise, but also in terms of how individuals and communities within a data set may be implicated differently from a privacy perspective. So we talked a little bit about being able to identify individuals within data sets, but there are also connections made within algorithmic models between individuals in order to classify and make decisions. So there are implications on the group or community level as well, and I think that's a really important area of research moving forward. I think the other piece is that we're starting to see some technological approaches to protecting privacy as well. There are a lot of open questions about those technological approaches: how well they work, what the tradeoffs might be in terms of how they're deployed, and how we understand them within the social and structural systems in which they're being developed. So I think there are some really important questions related to privacy that the field needs to focus on, beyond the ones that we've been talking about today specifically with regard to this ruling.
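Rebecca doesn't name specific techniques here, but differential privacy is one commonly cited example of such a technological approach, and it illustrates the tradeoff she alludes to: stronger privacy (a smaller epsilon) means noisier answers. A minimal sketch, with invented numbers:

```python
# Minimal differential-privacy sketch: answer an aggregate query with Laplace
# noise calibrated to its sensitivity, so no single person's record is exposed.
import numpy as np

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Return a noisy count satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon  # smaller epsilon -> more noise -> more privacy
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# e.g. "how many users searched for clinic locations this week?" (invented number)
print(round(dp_count(true_count=132, epsilon=0.5)))
```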

Speaker1:
I'm thinking about, I guess, the local element of the fact that this is a ruling, Dobbs versus Jackson, that happened in the United States. Then we have the state level of the trigger laws that we saw come into effect. But then there's your organization with its global community of partners, and you're in Canada, so this is obviously not just a US-based issue. And I'm wondering if you could say a little bit more about the international community: how do we think about the scale of either women's health or health generally? What is the impact on this international scale?

Speaker3:
So I think that's why we put the call out to the international community, because we really wanted to use it as a moment when we could all be thinking about the ways in which there may be concerns about privacy, but also about the usage of individual and private data by potential authoritarian regimes, for example, and the concerns therein. But also in terms of building on the work, for example, in the EU with the General Data Protection Regulation, and what that means for privacy moving forward in the EU AI Act. It remains one of the central questions for the AI community to think about how we best protect privacy in the ways current models are using AI, for example through engagement and through advertising and all sorts of other things as well. And of course, as you know, there's an international community of researchers who are focused on these questions, and there are all sorts of questions when you think about potential surveillance of workers, and how so many of those workers are now based in low- to middle-income countries, so outside of the United States. So those questions of privacy, worker well-being and worker rights are all important areas of work, I think, for the community moving forward.

Speaker2:
And as we move towards the closing of this conversation, I'm brought back to the day when this ruling was made and just how much anger and fear I felt, and witnessed those around me feel, from this ruling and its impacts. I think that those emotions are still largely there. And I'm wondering if you have any thoughts or advice for people who feel anger or fear or confusion and feel a lack of agency to do anything. What can we do today to help us gain some of that agency back?

Speaker3:
Take a look at the apps that you have on your phone. Take a look at the privacy settings that you have on your browser. Be cautious, be careful. Think about what information you are sharing beyond health information, and do so to take back your right to own and control that data. For me, that has been part of my learning through this process as a woman, reading and learning. We have seen the ways in which tech has been deployed against communities of color for many, many years, and empowering those communities and empowering ourselves to take back the control that we have over our data and to make those decisions is, I think, a critically important first step. There's lots of information available online. I know you're going to share out a bunch of resources with this episode. I strongly encourage people to read the materials that were produced on the day of the ruling from the Center for Democracy and Technology and from the Electronic Frontier Foundation. I've mentioned the work that Mozilla has been doing. There are many other organizations and really great resources available that we've connected to on our website, but I know you will as well. And then finally, you know, for me, it all comes back to making sure that as an AI community we are including the voices that are most impacted by our technology in the development process, really understanding that we need to hear those voices as we develop this work, thinking about how we respect the burden of the labor of being engaged in that work, and integrating that perspective right from the very beginning as we move forward. And that's my call to the community as well: how do we do that? We talked a little bit about the implications in the US. We've talked a little bit about the implications internationally. How do we bring those voices into the conversation right from the very beginning? That's the challenge for all of us.

Speaker1:
I am wondering, because you have your own hat on, but you also have the Partnership on AI hat on: if folks did want to plug into the work that you're doing specifically on this issue, could you just say briefly what you all are up to and how people can get involved?

Speaker3:
Absolutely happy to do so. So please go to our website, partnershiponai.org. Lots of resources are available there with regard to all of the initiatives that are in place across our work on fairness, transparency and accountability, and inclusive research and design. As I just mentioned, we also do work on media integrity and misinformation, and on labor and the economy. So if any of that work is of interest to individuals who are listening to this podcast, please take a look. You can sign up to be involved in any and all of those activities. And of course, you can reach out to me directly as well.

Speaker2:
Well, Rebecca, unfortunately, as these things go, we are out of time. But thank you so much for sharing with us the important work that you are all doing in this space and for helping us make sense of some of the things in the world that don't quite make sense right now.

Speaker3:
Thank you so much. I really appreciated the opportunity to be with you.

Speaker1:
We want to thank Rebecca again for this conversation. As usual, we'll do a very brief outro. But besides that, we do invite folks to check out the resources that Rebecca was just naming, which, of course, you'll find in the show notes. Jess, let's start with you. What are you thinking?

Speaker2:
This is a hard topic to talk about, as are a lot of topics on this show, but this one especially hits close to home for me. I mentioned a little bit in the interview about my own personal experience with the aftermath of this ruling in the States. And something that I didn't really mention as much was my own uncertainty about which apps I should be using on my phone, or I guess deleting from my phone and not using anymore, because of this ruling. I also come from a state, and have a lot of friends and family in states, that had those trigger laws, and I worry about them and what apps they are using and what data they are sharing with large corporations. And something that came to my mind, sort of removing myself from the topic of the ruling and just thinking more broadly and generally about this uncertainty and this fear of sharing data: I see this as a really good opportunity for not just women, but for all tech users, to assess how much data we are sharing with large organizations and corporations right now and to really think through our digital footprint.

Speaker2:
And the unfortunate reality is that, you know, big tech companies don't have as much of a vested interest in our privacy as we probably do. Oftentimes the more convenient and easy thing to do is to just give your data away, because it allows for more ease of use in a lot of circumstances and on a lot of platforms. But this ruling was just a really good moment, at least for me and maybe for a lot of other people, to recognize that maybe efficiency and ease of use and user experience are not worth sharing that data sometimes, and there is a reason to hold our data closely and to be concerned about privacy on online platforms, especially because future rulings could cause some data that maybe we don't think is quite as sensitive now to become more sensitive in the future, like this one did. So I guess my broad takeaway is that this was a moment of recognizing that unfortunate reality and trying to pivot my actions to help with my own personal privacy and that of my family and friends going forward. What about you, Dylan?

Speaker1:
Yeah. So a lot of what I'm researching right now is around health; some of it's adjacent to privacy, and a lot of it's adjacent to general well-being and how we design for well-being. And this was a very humbling conversation for me, because I realized how little I've read about women's health and technology and how little I was aware of the advocacy work that's being done out in the world by folks like Partnership on AI and other colleagues. So I think for me, I'm just going to keep my comments really brief, and I'm seeing my role as a listener role: going through the resources that Rebecca shared, trying to connect with other colleagues and seeing where I can plug in. Because I think one thing that Rebecca really drove home is that it's important to remain active and to try to change something, and do work to change something, about these systems that are not working on behalf of everyone. So I'm going to just close my comments at that. But as always, if you enjoyed this episode, we invite you to subscribe, rate and review the show on iTunes or your favorite podcatcher.

Speaker2:
You can catch our regularly scheduled episodes the last Wednesday of every month, with possibly some bonus episodes in between. Join our conversation on Twitter at @radicalaipod if you haven't already. And do not forget, there are so many resources that Rebecca shared during this interview that we've also curated, compiled and put in the show notes, which you can find at radicalai.org. And as always, stay radical.
