Design x David Dylan Thomas

Founder & CEO of David Dylan Thomas, LLC

 
Illustration by Bonnie Kate Wolf

 

Interview conducted by Michelle Berois on October 29, 2020

Tell us a little bit about yourself. 

I am originally from Columbia, Maryland, and I grew up in Baltimore. I went to school at Johns Hopkins, and at the time my background was in filmmaking. I’d been making movies ever since I was a kid. I did a little bit of that when I was in Baltimore, and then I got a day job working in distance education for the Center for Talented Youth.

I didn’t really get into content strategy until I ended up in Philadelphia. I had a bunch of different job titles that weren’t necessarily content strategy by name, but definitely by practice. I was editor-in-chief of five different websites for a trade publication company, so I was doing content strategy but it wasn’t really called that. Eventually I ended up at a company called EPAM Systems. That was the first time it actually said “content strategy” on my card, but I’d been developing that capacity for a long time. That’s where I really got the agency life and got a grounding in user experience (UX) and client work. From there I went to Think Company, which is where I work now. 


What exactly are cognitive biases? 

Cognitive bias is a fancy word for the shortcuts your mind takes and gets wrong. Depending on how you want to quantify a decision, we make about a trillion decisions daily, easily. You don’t have time to think about all of them. There’s not even a computer out there that could function, thinking carefully and processing deeply every single one of those decisions you make. We have time to maybe process 10 to 20 thoughtful decisions a day. You’re not thinking through everything you do. Your body is just handling all this stuff on autopilot and then later it tells you, “Oh yeah, you thought of that. You’re in control, totally.”

With that in mind, it’s actually a good thing that this is all running on autopilot, but because it is autopilot, sometimes it gets it wrong. We call those errors “cognitive biases.” Sometimes it’s harmless in funny ways, like the illusion of control where you roll a die really hard for a high number and really softly for a low number. That’s adorable. But then, it gets it wrong in horrific ways like the confirmation bias, where you might think, “I don’t need to get vaccinated.” 


How did you become an expert on cognitive biases?

I saw a talk by Iris Bohnet called “Gender Equality By Design.” In the talk she gave, she discussed how pattern recognition is at the root of a lot of gender and racial bias in the workplace. That’s when I really started keying into bias and cognitive bias in particular. 

I became obsessed with the idea that pattern recognition is such a basic thing that’s having this outsized effect, so I wanted to learn more. I studied one bias a day from the RationalWiki page of cognitive biases. Each episode of my podcast is like a section from that page.

Eventually, after looking at one bias a day for a year, I became the guy who wouldn’t shut up about cognitive biases. So my friends were like, “Dave, please, just start a podcast.” It was my friend Emily McManus in particular, who at the time was working for TED. When someone who works for TED tells you to start a podcast, you listen.

That led to me really building up my cognitive bias expertise, which led to me giving a talk for the city of Philadelphia. A friend of mine worked for the city of Philadelphia, and she was putting together a panel about accessibility for city workers. She had the usual topics like language accessibility and disability, but she wanted someone to talk about cognitive bias in that same context. 


What prompted you to write your book Design for Cognitive Bias?

After I’d given that talk a bunch of times, a friend of mine, Lisa Maria Martin, who is the managing editor of A Book Apart, asked me if I ever considered writing a book. 

The talk I would give concluded with me discussing our cognitive biases as designers. As I started to dig into that, I realized this was actually a talk about design ethics, and that became the arc of the book: it starts out talking about user and stakeholder biases, and at the end, we take a look at our own. I realized she worked at a publishing house and was asking me a serious question, so I got back to her. Long story short, it’s out now and doing great.


Your title at Think Company is “Content Strategy Advocate.” Can you elaborate on what that role is and what you do? 

When I first came to Think Company, I came for the culture, which paid off. What they needed was someone to own the content strategy practice. After about a year of learning the ropes to understand how the business worked, I took on this role of what they called Principal, where I was really architecting what a content strategy practice at Think Company should look like. 

This included the boring details, like what gets put in the statement of work around different deliverables, how we budget out a content audit, and which aspects of content strategy we’re going to focus on, as well as informing hiring decisions.

I did that for a while, and as that started to solidify, my speaking career started to take off. It became clear that the best fit for me at the company would be to now go out and advocate, to do more lead generation. Now I go out and give talks or do workshops focusing on content strategy or inclusive design. That becomes the beginning of the funnel for different people to find out about the company and have a good first experience with Think Company.


You mentioned going to school for theater and film. How did you make that jump into content strategy?

Content was always in the back of my brain as a concern or interest. The storytelling aspect of me was always a part of everything. But it wasn’t until I was working at the North American Publishing Company in Philadelphia that I was really tasked with thinking broadly and strategically about content. There were five websites, each representing a physically published magazine.

I had to make decisions about what content from the published magazine would make it onto the web: what was going to be exclusive, how we were going to think about forums and social media, and how we were going to position it all, because there were about 15 different audiences in play. All of those are decisions that a content strategist needs to make.

And so even though there wasn’t really a word for it yet, or at least a word I was aware of, I was doing the work and thinking analytically about content, as opposed to just producing the art itself. It was more about focusing on the body of all these different kinds of content: how do we think about it usefully, and profitably? I would say that was really when content strategy walked in the door.


Can people in general become more aware of their biases? 

Before I answer that question, I want to ask the question, “Why would you?” Because I think that the supposition there is that if you’re aware of your biases, you can do something about them. One of the really troubling things I learned by looking up bias after bias is that I’d be reading the page about it and be waiting for the part where it tells you what you can do about it, but that never came. 

More often than not, even if you make people aware of specific biases, they still do them anyway. They are hardwired. In terms of knowing what biases are out there, there are a lot of cool books and websites with visualizations that show this beautiful cornucopia of biases. That information is out there, but don’t think that knowing about them is going to stop them. That’s partly why I wrote the book: to dispel that myth.

 

 

“More often than not, even if you make people aware of specific biases, they still do them anyway. They are hardwired.”


 
 

If you’re concerned about a bias because maybe it’s hurting people and your goal is to change the outcome, there are ways to do that. For example, if you think your hiring practices are biased, they probably are. But the way to stop that isn’t just saying, “Stop being biased.” That doesn’t work. These biases are happening faster than you can recognize. 

So maybe I don’t let you see the name at the top of a resume, because I know it’s going to trigger you in ways you aren’t even aware of. Getting to the point where it won’t trigger you is still a goal we should absolutely pursue, but we’re not going to reach it overnight. It’s triggering because of sexist patterns that are in society. In the meantime, I’m not going to show you those names, or what college they went to. I’m going to build the form in a way where we’re not even going to ask for that information.

That’s a whole bias right there. As I’m asking for the information, I’m actually intimidating you in certain ways, because women often don’t consider themselves qualified, while men are confident they have all the qualifications even when they have fewer than the women do. The way you frame a job application form can bias the person filling it out. All of that is what the book covers, for the short term.

I call this out when I’m talking about anonymized resumes. In the short term, let’s use design in a way that inhibits some of this bad behavior. In the long term, there’s a whole other book around the question, “How do we fix public education so that women don’t drop out in seventh or eighth grade and black people don’t drop out in third grade?” That’s a much bigger project.

Don’t let that stop us from working on it, but don’t think you’re going to fix it in the same timeframe that you can make this smaller area less biased or have these less harmful outcomes. 


What do you think are some ways that UX designers can design for cognitive biases?

The book breaks it down into two ways: the biases of the designers themselves, and the biases of the user or stakeholder. On the designer side, it’s about bringing in outside perspectives and, systemically, making yourself aware of things like power.

It’s really similar to when we create code for a website. Before we launch, we always have somebody come in for quality assurance. No matter how good a programmer is, they know there are going to be bugs in their code. It’s the nature of the game. We just need to think that way about our minds. No matter how many websites we’ve designed or how many products we’ve developed, there are always going to be bugs in the ethics. There are always going to be bugs in the biases. We haven’t experienced all of life, but someone out there has experienced something that we haven’t, and they could walk in the room and tell you exactly what’s wrong in five minutes.


How do you think cognitive biases will affect things like AI-based products and designs?

They already are. You’ve got tools like COMPAS out there handing out harsher parole recommendations for black offenders than for white offenders who committed the same crime. You’ve got Amazon’s hiring bot, until they shut it down, recommending men over women every time.


There’s a myth here, and for this I’ll recommend a book by Meredith Broussard called Artificial Unintelligence, which is fantastic at debunking myths about AI.


When we think of AI, we often think of Skynet or something out of a Terminator movie, this cold, objective, all-knowing genius, or something out of The Matrix. But AI is just a glorified prediction machine that’s able to look at billions of pieces of data and make far more predictions than we ever could.


So if you point AI at a bunch of sexist, racist data, it’s going to make sexist, racist predictions. It’s not rocket science. You have to think about the whole board. How is the entire thing playing out? What assumptions were made in what they were asking the computer to predict? What materials were they giving it, and what can and can’t it actually predict?


I’d say cognitive biases are absolutely already showing up, and unless you assume that your assumptions are wrong or incomplete and you bring in outside opinions, they will continue to show up.


What can UX people do? Bring in outside opinions, but also think very carefully about power. Who is going to be impacted by this thing you’re designing? It isn’t necessarily limited to users.

 

 

“Bring in outside opinions, but also think very carefully about power. Who is going to be impacted by this thing you’re designing? It isn’t necessarily limited to users.”


 
 

Rank them based on power. How much power do they actually have? The people with the least amount of power are the ones you want informing how we’re going to design this product, what we’re asking this AI to do, and how we’re going to train this AI. If the people who are going to be impacted by the AI aren’t in on that conversation, you have already lost. You’ve set yourself up to hurt somebody at scale. 


What are your thoughts around “dark patterns”? Where do you see that type of design strategy going? 

I feel like we’re going to need to have a larger anti-racist language discussion. Even I love using the term “dark” to describe things, both in literature and in technology, but I’m trying to wean myself off of it because it’s really problematic.

One of the things I like about the anti-racist language movement is that it’s just good content strategy. “Dark” doesn’t really describe a pattern. It describes a mood, maybe, but what about the pattern is “dark”? A phrase like “deceptive patterns,” on the other hand, tells me what’s actually bad about it. Anybody can be deceptive. White people can be deceptive. Black people can be deceptive. So the term isn’t tying the behavior to any group, and it tells you what’s wrong: it’s a deceptive pattern.

The subtitle of my talk is “Using mental shortcuts for good instead of evil.” I usually joke at the beginning of my talk that I think we already know how to use them for evil. One of the things I really liked about the Netflix documentary The Social Dilemma is that they spent a lot of time talking to some of the key decision-makers from the early stages of social media. Most of them learned at the Stanford d.school.

There was a course on behavioral psychology where a lot of them were basically taking the evil version of my course. In that course, they learned that giving intermittent rewards is much more addictive than giving consistent rewards, and that people get a dopamine hit from certain kinds of interactions. So they took those insights, baked them right into websites, and invented endless scrolling and “like” buttons.

I used to think that was sort of an unhappy coincidence, that they were throwing stuff at the wall to see what would stick and maybe later figured out that there was a psychological component. But they knew it was an actual psychological phenomenon, a bias they could exploit, and it was remarkably straightforward.

So what my book and discussions help highlight is that there is no neutral design. You don’t get to design in a way that isn’t trying to manipulate users one way or the other. I could randomly arrange a bunch of things on a desk, and your mind would look at them until it made a story out of them, and then come up with a reason why this was in the upper left and that was in the lower right.

 

 

“…there is no neutral design. You don’t get to design in a way that isn’t trying to manipulate users one way or the other.”


 
 

What you design is going to impact the user, so it is your responsibility to know as much about that as you can and place things in a responsible way. 


What are some resources you could recommend for designers who want to be more socially responsible and mindful in their designs?

The good news is people have been working on this for a while. There are a whole bunch of different frameworks you can use to run your decision-making process through the grist mill of thousands of years of moral philosophy.

I like to say that people have been working on this for a long time. We just haven’t been paying them very much. From the ethics standpoint, that’s all out there. From a prosocial design movement, there are so many organizations like the Design Justice Network and the Prosocial Design Network that are focused on this. 

The open source movement has always been in this space, and then the history of accessibility, I would say, is a very key subset of the exact same concerns. The accessibility movement was basically built on the idea that when you made that thing, you were thinking only about yourself. That’s one of the key problems we’re dealing with when we talk about inclusive design.

The particular swim lane for thinking only about yourself had more to do with ableism, but you can extend that to race, to gender, to poverty. You can extend that to immigration status and incarceration. There’s all these identities, so it’s really about taking the work of accessibility and making it even more intersectional. 

We can learn from that movement, the things that they tried, the things that worked and didn’t work, how they got to legislation, and how they’re still fighting. In my ideal world, I see this great collaboration between folks who’ve done the accessibility work and are still doing it, and folks who are trying to push for inclusive design and thinking about ethics and bias. 

I tell people that the best part of my book is the resources section. Skip to the resources section and study up on that. The other thing that’s really worth noting is that there is no silver bullet. When you think about accessibility, think of it like a practice. You are going to practice the art of inclusive design. 

There are so many different ways you can do it, and so many people you can learn from, so take a look, graze the fields, see what appeals to you. Come up with your own framework for doing this. But come up with something.


Is there anyone who has influenced or impacted who you are today? 

Alex Hillman. He is the founder of the coworking space Indy Hall and author of The Tiny MBA. He gave me the best piece of advice I ever got in my life. He told me that it’s impossible to listen and react at the same time. I’ve carried that piece of advice with me and practice it daily.

What question haven’t we asked you that we should have?

I always want an excuse to talk about operationalization so ask me, “Dave, how do you operationalize this?” I am so glad you asked! 

I like to tell people that I’m out to change hearts, minds, and budgets. I come from the agency and tech world, where if it’s not in the budget, it doesn’t exist. That is a hard reality for anybody who does product design or client work.

I have an exercise I like to do called an “assumption audit.” It’s an exercise I adapted from Project Inkblot’s Design for Diversity framework, where you basically get in a room with your project team before the project starts and ask, “What identities do we represent, and how might that influence the project? What identities aren’t here, and how might that influence the project?” And then finally, “What can we do to honor who’s not here?” That’s what I’m spearheading rolling out at Think Company.

The idea is you start small with an assumption audit on a few projects, and then start adding things like red team–blue team exercises, which take a little more time, and one day you wake up and you’re an inclusive design shop. I love talking about that because to me that’s something you can literally go out and do tomorrow. 

 

Connect with David.

 

ABOUT THE INTERVIEWER

Michelle Berois

Lead Content Creator at Design x Us
