The first articles that come up when you Google “artificial intelligence” and “Black people” are concerning, to say the least. The New York Times asked, “Who Is Making Sure the A.I. Machines Aren’t Racist?” in a story about two Black former Google researchers and AI experts who had been forced out of the company for ringing alarm bells about how this tech was being built. And Forbes put the question more bluntly: “Is Artificial Intelligence Anti-Black?” Another story: “How Artificial Intelligence Can Deepen Racial and Economic Inequities.” The Guardian wrote about the “rise of racist robots,” tracking “how AI is learning all our worst impulses.” NPR covered “how AI could perpetuate racism, sexism and other biases in society.”

So it’s safe to say that if you are Black and a woman like me (and the rest of the Unbothered team), artificial intelligence doesn’t seem like it’s for us. Not only that, it feels like its growing popularity and prevalence will actively harm us. And yet, AI is everywhere. Its integration into our daily lives feels unavoidable. That future we’ve been dreading with AI? Well, it’s here (hey, Siri). The inevitability of AI is why I felt like we couldn’t ignore it. This Black History Month, I wanted us to look forward with a Black futurist mindset instead of solely referring to the past (which is also important). And since, at least for me, a future with AI feels terrifying, why not confront those fears head-on?
Throughout February, Unbothered is going to be tackling AI and its effect on various industries — TV and film, media and literature, music, beauty, health and wellness — and parsing through the good (if any) and bad (many!) of artificial intelligence’s impact on Black folks specifically. We know the aforementioned harm AI could and will inflict is real, and really scary, because those are often the sole stories being told about the new tech and our community. But that can’t be the only story when it comes to Black people and AI, can it? And if it is, what can we do to ensure a future with AI doesn’t deliver more of the same pain, inequity and injustice we’ve faced throughout history?
These are questions that I did not have the answers to. So, for an introductory crash course in all things Black and AI, I called up Sinead Bovell, a tech expert and self-proclaimed “futurist” who has been very vocal about pushing for more diversity behind the scenes of the tech industry and forecasting what a future with AI could look like… with the right checks and balances. You may know Bovell from her work with the UN, her company WAYE Talks, her very informative social media accounts, or her popular talk show segments (last year, in the midst of the WGA and SAG-AFTRA strikes, Bovell went viral for comments made on a daytime talk show about AI-generated content in Hollywood). Whether you know her work or not, and whether you agree with her or not, Bovell is a Black woman expert in the field of AI, which is rare. And she’s adamant that she’s not declaring AI as inherently one thing or another. It is tech built by humans, after all, and remembering that is key, she says. If we resign ourselves to the foregone conclusion that AI is racist, or that the future is hopeless and bleak for Black folks, we are absolving those in power of their agency and clearing them of their complicity in the inequality that currently exists in the tech space. Bovell believes we can build a better future — one that includes AI — but that won't happen by sitting on the sidelines; we have to push for it, and hold tech giants to account to make the necessary changes so that the future she believes in comes to fruition.
In the wide-ranging conversation below, I get to unpack all my AI nightmares with Bovell, ask her my burning questions about deepfakes, AI’s bad reputation with Black folks, and how creatives should approach a piece of tech that can co-opt and manipulate our work and our likeness.
“I don't think I'd be able to wake up and do what I do every day if I didn't think there was a future worth fighting for.”
— Sinead Bovell
Unbothered: We’re delving deep into AI all month. What do you think sets the conversations surrounding AI apart from how we’ve talked about tech in the past?
Sinead Bovell: I'm happy that AI is your focus for this month. I think it’s proof we're already [approaching] AI differently than we did with past technologies. It's becoming a cultural conversation. AI ethics are being talked about on talk shows and at dinner tables. To me, that's already an optimistic step. Let's interrogate this technology. Let's not have this be a closed conversation with the seven people that happen to be coding it. Let's bring more voices into it. Even if the conversations are scary, we're having them, because there are a lot of technologies where that wasn't the case. And that's what I think feels different about artificial intelligence. And even though governments are playing whack-a-mole, for the most part, trying to figure out how to regulate this technology, that's something that hasn't historically been the case this early into the creation of tech. It was about a decade into social media [that regulations came into play], and 15 years with seatbelts. There’s a lot of AI talk in Congress, which I think is good. People are mobilizing, and the fear that you have, decision makers feel that too, and different leaders and activists feel that too. And so there are people trying to move us in the right direction across the board.
You call yourself a futurist. What does Black futurism mean to you?
SB: To me, Black futurism means co-creating spaces in the future that work for everyone, but especially Black folk, and that we play a leading role in designing the futures that work for us. And also being able to use technology to reimagine different historical storylines.
What do you mean by “reimagine historical storylines”?
SB: If you look historically at what would have happened if more people had access to technology, how many more doors would that have opened? To me, that is something I use as motivation to ensure many more voices, especially Black voices, are in the rooms where the future is being coded, decided upon, and built. When you see how many doors could have been opened, or what things could have looked like, imagine we got the chance to invent that future because we weren't colonized, and we didn't have these resources taken. There are all sorts of different ways that you can tell different stories with it.
That's really interesting, and an optimistic way of looking at the world, which I know some might think is naive, but I think it’s necessary. Every activist throughout history, and every civil rights leader, had to have some sort of optimism, right? Or else they just would have given up. But looking at AI, I will say that it is hard to feel optimistic. We are seeing articles saying that artificial intelligence can deepen racial and economic inequities. There’s the story in Forbes asking if AI is anti-Black. These are the stories we’re seeing about AI when it has to do with people who look like us. So how do you feel when you see all of those stories? And how should we be approaching these hard truths about AI?
SB: As a futurist, it's all about scenario planning. How can we look at the signals and data and imagine the different scenarios that could be created? And some of those are not good scenarios, but they allow us to have a benchmark to peel away from, or to ask, “Where could this go wrong? And how can we avoid that?” When we see some of the narratives in the media about artificial intelligence, and some of the scarier things, I think it's very accurate, and very true. And even though it feels terrifying, the thing that I find scariest is when I see terms like “AI is…” and we remove the human agency, and the human involvement, in how these technologies show up. Because at the end of the day, if there were ever a technology that was a reflection of humanity, and a reflection of history, it’s artificial intelligence. It holds up a mirror to society, the present and the past. And we see ourselves right in that reflection. And when we look at history, who wasn't included, whose voices have been marginalized, AI without intervention is going to shine a light on all of that. And so when I see all of those stories about AI algorithmic bias, who isn't getting loans, who's getting jail sentences, it's infuriating, because it could be avoided. This doesn't have to be the present or the future. I do hope these stories motivate us to act and ensure that there are different voices coding the futures that we all have to live in.
When you say that AI is like a mirror and a reflection of history, a lot of what I understand AI to be is something that pulls from history, and from what already exists. What we know, historically, is that what already exists is glaring inequity and glaring injustices. So if that is what AI is pulling from, how will it then create a future that is better or different?
SB: Dr. Joy Buolamwini has a phrase that I love: “Data isn't destiny.” None of the AI systems that we have, and the predictions that they make, have to be what gets implemented. Yes, it is very true that history is riddled with power imbalances and societal inequities. And AI is going to shine a light on that. But that doesn't mean that you can't intervene, or correct those datasets, and edit history to one where things would have been more fair. That's what I mean when I say it's so important that we recognize we have agency. We tend to feel this kind of learned helplessness with technology. But that isn't the case at all. We can rebalance datasets, we can update our algorithms, and we can update the code. Taking a passive approach to AI does mean we're going to repeat historical power imbalances right into the future. But what if we take an active, interventionist approach to artificial intelligence? How we do that means having more people in these rooms who know to interrogate the data in ways that are honest about what happened in history, so it doesn't get repeated, and who have the skills and the tools to make the edits as necessary. So I don't think any of the systems or any of the datasets have to stay as is. We have the power to make those changes.
“If there were ever a technology that was a reflection of humanity, and a reflection of history, it’s artificial intelligence. It holds up a mirror to society, the present and the past.”
— Sinead Bovell
Are you seeing those changes happening now? And if not, why?
SB: Things are better than what they were in artificial intelligence. But I would still say that there is nothing short of a diversity crisis in the industry. And there are many reasons for that. Tech companies like to say that it's a pipeline problem, and that certain groups don't have the right skills to be able to get these opportunities. I don't buy that at all. I think a big portion of the challenge is company culture. Who gets invited, who feels included? You see a lot of people with the skills to execute and to be in these rooms, but they're often not invited, or they're not welcomed. And once they get there, they're fired when they try to make a change. I know that sounds kind of dire, but I do think things have improved even in the last four years of me being a bit more vocal. The statistics are moving in the right direction, but they have to move a lot faster.
It sounds like that's happening in every industry across the board when it comes to DEI. There's a diversity crisis in AI, as you said. So when you hear stories about companies trying to use AI to “increase diversity” — like Levi’s, which announced it was going to use avatars of Black people instead of actual Black people as models — it feels like AI is being used to cheat, or to aid in backsliding on DEI promises made in 2020.
SB: So I don't know if it's adding to the backslide. What AI does is amplify patterns in data. AI can spot patterns, good or bad, and amplify them. But what is also happening is the diversity and equity challenges of the AI industry are now merging with the diversity and equity challenges of other industries. So as AI becomes a foundational technology, and it infiltrates every department, every industry, every sector, if that sector or industry had its own equity challenges, we’re going to see that. In fashion modelling, for a long time, there was only a certain kind of profile and identity. And now you're going to merge that with automation and AI, which comes with its own challenges, and you have this kind of massive intersection point of diversity that was never addressed in one industry, and now is gonna get amplified by AI. And so we'll see that in creative industries and the diversity challenges of Hollywood, in HR rooms, in mortgage lending rooms, and judicial systems, all of those areas where there have been blatant diversity challenges [and] outright exclusion. AI now comes in and shines a light on some of those challenges.
You are one of the few Black voices in this space that is not just spewing doom and gloom, that the future will be horrible and all Black people are fucked. I feel like that's all we hear. With those challenges you mentioned, what are the specific things that you and other Black people in this space are advocating for?
SB: I’m not a techno-optimist and, in fact, I don't subscribe to [the idea of] pessimists or optimists at all. I'm just a futurist; I follow the data. I do know that when people think things are entirely pessimistic, and they think it's just doom and gloom, they unsubscribe. And I think that's the worst action that we can take. We all have the right to co-create the future, regardless of what our backgrounds are. I want people to feel like they have the right to show up in any room where their future is being decided upon. There's no skill set or prerequisite, there's no specific resume required to steer the future or to participate in those conversations. AI isn't just technology. It's a social technology. And so it impacts all of us. One of the things I hope that we recognize is that AI isn't neutral. Technology isn't neutral. I don't know who started the rumour that technology is just a tool. But it's not. And that doesn't inherently have to be a bad thing. But it does mean we recognize that technology is a product of the culture, the society, the economy. All of that is [a product of] the moment it's been built in, and the people who chose to design it in certain ways. There are certain design choices that are inherently not neutral. And so I think that's one thing that's really important to get across. It's not just a tool, because that also means it's up to us to decide, and it's up to the user to make the right ethical decisions with it. But there's an entire design process where people are making decisions that can exclude [people], even ahead of the product actually getting deployed.
“We all have the right to co-create [the future], regardless of what our backgrounds are. I want people to feel like they have the right to show up into any room where their future is being decided upon.”
— Sinead Bovell
The second thing is that the people behind technology matter. And you can see that, for example, in the crisis at Levi's you mentioned [where the company said] that they're going to use AI to enhance diversity. Sure, if the people coding those avatars looked like the avatars they were building and resembled the market that Levi's was trying to target, then you could say that's interesting. But if the people in the coding rooms or the people behind the technology don't represent the society that that company is trying to target, that raises nothing but alarm bells. And the same can be said with [the AI model] Shudu Gram [who is Black but created by a white man]. So we need to recognize that there are people behind these technologies, and they matter, and their identities matter.
You just brought up one of my AI nightmares and biggest concerns when it comes to co-opting, credit and profit: who is benefiting off of our identities? When you can have a white man take on the identity of a marginalized group, especially Black women, and then profit off of that, or gain any sort of social capital or financial capital off of that, it’s infuriating.
SB: Yes, right. But there are two main lanes within the wrong: misrepresentation and profit exploitation. If you have people traipsing around in other people's identities, and a person is profiting off of experiences they've never had to go through, that is a very big problem. And if you have people misrepresenting communities in a way that is actually significant, that matters. If you were doing a talk in the metaverse, or you're creating a fashion model that's supposed to represent a real segment of society, and it's used to influence purchasing decisions and to make people feel seen or included, and you're designing a Black woman but it's through the eyes of a white man, there are real opportunities for misrepresentation that marginalize real Black women, which becomes very, very alarming. Some people call it digital blackface, and I call it digital cultural appropriation. I think we need to think really critically about it.
Aside from being critical, what are the tangible things we can do to make AI safer for us in the future?
SB: I think the more people that understand how to build these tools, and how to shape them, the better. Each of us can certainly go off and try to lean into these AI tools (they're actually a lot easier to learn than many past technological innovations), but I’m speaking more about structural changes. If we want more people in the rooms where the future is being built, we have to equip more people with those skills. And that means investing in things like AI literacy and tech literacy in schools, not doing things like banning technology or AI in classrooms. This is really important. There's a case I like to reference: when ChatGPT first hit the market, some schools banned it. Those were more public schools, for example, in New York, while private schools had teachers come in and teach the students how to use these tools. That's a prime example of the beginning of equipping some people for the future and further marginalizing others. We have to acknowledge that there are structural reasons why some markets have more access to the skills and tools to create technologies.
It feels like we’re getting into the idea that AI and tools like ChatGPT are inevitable, whether we want them or not, so how do we make sure they don’t deepen divides? Is that how we should be thinking about it?
SB: I think technological evolution is, of course, inevitable, but how the future shows up isn't. I don't like us to think that it's just going to go a certain way, with or without us, because that's not true. We can actively shape and steer things. The more people that understand and lean into this future, the more voices you have pushing things in certain directions. If you look at social media, it’s a textbook example of what not to do. It was just like a movie that we watched, where the ending just got scarier and scarier. But eventually, as more people started to realize we're not cool with [social media’s] damaging polarization and that mental health is just plummeting, things started to change. Evolution and innovation are inevitable, but how technology shows up isn't; the form and the products that get made can be shaped, changed, interrogated. We just have to feel empowered to do that.
This is why I called you Sinead, because I think you just shifted my entire brain with that answer. I've had a hard time with the talk of inevitability, because, for example, watching Oppenheimer, you're reminded that a lot of people said, “welp, this is inevitable” about a future with atomic bombs. To me, it’s such a warped way of thinking. But the way you just put it makes a lot more sense to me. Sure, there is some level of inevitability, but if we become active participants, that future doesn't have to be as dire as it looks like it could be.
SB: One thing I get asked a lot is, “Do you get terrified that you spend your whole life examining, studying and essentially living in the future?” And I actually find it the opposite. It's so much more empowering, because you can see the different scenarios for how things could evolve based on the actions we do or don't take. And with that, I feel more empowered to steer the future in certain directions. I think when we unsubscribe from it, that's when we feel like it's just happening to us. And that doesn't mean that we are all responsible for technology, or that the state of social media is all our fault. No, there were a lot of specific decisions by a lot of very powerful people in very small rooms. So it's not all on any one of us, but our voices do matter, and we do have agency, and we can steer things. And products aren't inevitable.
“Evolution and innovation are inevitable, but how technology shows up isn't; the form and the products that get made can be shaped, changed, interrogated. We just have to feel empowered to do that.”
— Sinead Bovell
I think it's natural that there's always going to be a bit of a lag in how governments safeguard technology and science, but I think we need to do a lot better. Even just this week, we saw the crisis with AI-generated images of Taylor Swift, and then there was an AI-generated President Biden telling people not to vote. Governments, I hope, were alarmed, but not surprised. And if they are surprised, that's part of the problem, because these tools have existed for a long time. Is there a mismatch between what governments are keeping up with and doing and what's actually happening in the world? Yeah, absolutely.
Let’s talk about deepfakes. In the case of the deepfake porn of Taylor Swift, it was interesting to me that it took Taylor Swift, a blonde, white woman, also the biggest pop star in the world, for this to become a story that we're all talking about and that might finally lead to safeguards being put in place, because this has been going on for a long time. Is there anything about deepfakes that's not terrifying?
SB: I don't think we're hopeless. So on one hand, there are the exciting aspects of the technology, the creative aspects. For example, what an indie filmmaker could do now to compete against a big studio with some of these AI tools. That's the cool, fun way of looking at it. But in terms of impersonating people and doing things without their consent, that’s absolutely alarming, outrageous and terrifying. But there are a lot of things we can do. And the one thing that keeps me somewhat optimistic is that we haven't done anything yet. We’re in this nightmare scenario, which is awful and we shouldn't be here, but we actually haven't done anything about it, and there are a lot of things that we can do. So let's hope that this inspires action. Let's hope that it doesn't always have to take the most famous pop star in the world or a president before people act. It's scary because we don't have to end up in these dire situations. And democracy shouldn't feel like it's on the line every seven days because we're not doing anything. There's a lot that we can do to combat deepfakes. On the one hand, for example, you could put a ban on non-consensual deepfake intimate images across the board, so there's a consistent legal protocol as to what can and can't be created. Also, we might need to start rethinking the digital infrastructure that our entire world operates on. Does it make sense that people get to be entirely anonymous on social media, and can kind of run around and mask under these identities and cause a lot of harm? If you go into a restaurant, you don't have to wear a nametag, but if you start harassing people, there's a pathway to accountability. On the social media sites where most of this harm gets shared and distributed, we could make some changes. We could put in pathways to accountability to disincentivize people from sharing this type of content as well.
When I think of Black artists and creators, especially in creative industries like music and TV and film, I wonder how they should approach a piece of tech that can co-opt and manipulate their work and their likeness without their consent?
SB: I think, first and foremost, we need to understand what labor looks like in this AI age. And acting on TV or posting something online is the new labor. And so if we don't see it that way, that's when people get exploited. And I don't think that there is a win in a world where people’s identity is just scanned, or uploaded, and then that's the future of the arts. To me, that would be a very dystopian future. And I really hope that that's not where we end up. And if that's the future big studios are trying to create, I don't think that it will go well, because I don't think that that's where art, storytelling and creativity thrives at all.
Is there a world where AI can potentially be kind of a great equalizer in some aspects of creativity and creation? That's certainly possible. With AI, you might now have the tools to edit a scene in a way only a big studio would have had the ability to do. I think AI could potentially be helpful for set design, and leveling the playing field there. But when I look at the creative arts in particular, and I think about AI in those spaces, I think the conversation is very uninteresting. I really hope we don't just passively move forward with AI copying and scanning the creations of historical artists, and then just regurgitating or remixing their work. I find that very uninteresting. And I do worry about what that would mean for already marginalized communities if we don't deal with ownership structures. I'm worried about that, and I'm very vocal about [the pitfalls of] a non-consensual data-sharing future. I don't think that works out well. What interests me the most with AI in this world of entertainment is the arts that haven't been invented yet. The same way the invention of the camera led to movies, I think AI is going to lead to entire new sectors of entertainment that have yet to be created. And that's what I'm really excited about. Studios that are looking at it as a way to automate and streamline? That's uninteresting. I don't think it's gonna work out well, and I don't think it's going to be the future that many are gonna want to sign up for. But I think that there's an entire new field of entertainment that has yet to be invented.
“I think the companies that see AI as just a tool to exploit are just uninteresting. I don't think that's where art is gonna live or thrive, because it never has.”
— Sinead Bovell
Wow, that's really interesting. When you think of it like that, you consider Black creativity and how we're always at the forefront of innovation. If there is a new thing that emerges, we will be there making dope shit.
SB: Yes! Give us the tools. Black art and Black talent have been at the forefront of music. Can Black voices be at the forefront of the new things that are inevitably going to be created because of AI technologies in art and entertainment, kind of like the emergence of hip hop with new technology tools and sounds? The same way the invention of the internet and smartphones led to the entire creator economy (podcasts, YouTube videos, TikTok videos, none of that existed without these technologies), there are entire new industries and sectors that have yet to be invented. And that's where I think the most interesting place to be is, and where things are going to inevitably go. And I think the companies that see AI as just a tool to exploit are just uninteresting. I don't think that's where art is gonna live or thrive, because it never has.
When I brought up this idea of tackling AI this month to our team (our entire team is made up of Black women), a few of them felt like AI and this conversation had nothing to do with them. They didn’t want anything to do with it because it just seems so scary. What would you say to those people, and specifically to Black folks who are just scared of AI?
SB: I understand the fear. It's very real, because most of the stories in the media are of the harms and the dangers of where AI has gone wrong. There are identities that are more likely to be the victims of AI harms, and Black women especially are right at that intersection point. But the reason why I think AI doesn't work for certain groups is because those groups specifically aren't in the rooms where it's being built. And so the louder we can be, the more we can insert ourselves in these conversations, the better. And it has nothing to do with technical skill. That's what I really want to instil in people. It's not about knowing how to code; it's knowing when fairness is or isn't being displayed. It's knowing when somebody is or isn't being included in, or excluded from, a technology. That is all that's required to participate in these conversations. And so I want to really empower people to know that they have a right to show up in these conversations, and that their voice really does matter in steering it.
There is no shortage of Black leaders on any subject, including artificial intelligence, and no shortage of people really trying to build this technology on the right side of history. But the media doesn't always do the best job of elevating those voices. It’s comforting knowing that they're there, and that there are a lot of people working tirelessly on solutions to make sure that we get this right. And that makes me excited. I don't think I'd be able to wake up and do what I do every day if I didn't think there was a future worth fighting for.
This interview has been edited for clarity.