How to Vet Technology for Diversity

by Rachel Baker

Angela wanted to use the new virtual try-on service to see what different lip shades look like on her skin. But when she opened it up, a message said: “no face detected.” Her friend, who’s white, could see the different products virtually applied to her skin. But Angela, a black woman, could not.

When facial recognition systems were first being developed, graduate students often trained them on pictures of their classmates, who were predominantly white and Asian men. Unsurprisingly, the technology works especially well for those groups. New technology tends to be tested on whoever is most easily available to its creator, leaving room for shortcomings with other populations.

Why do these problems happen?

In Canada, about 3.5% of the population is black, and within that group, most people have lighter skin tones. That means an algorithm can be 96.5% accurate in Canada and still fail almost entirely for dark-skinned people. It also means that if a technology company trains on a random sample of the population, its technology will be biased toward the majority demographics.
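The arithmetic here can be sketched in a few lines of Python. This is a hypothetical illustration with made-up per-group accuracies, not data from any real system: it shows how a high overall accuracy number can coexist with near-zero accuracy for a small subgroup.

```python
# Hypothetical illustration: an algorithm that works perfectly for the
# majority group but not at all for a small minority still reports high
# overall accuracy, because accuracy is weighted by population share.

def overall_accuracy(groups):
    """groups: list of (population_share, per_group_accuracy) pairs."""
    return sum(share * acc for share, acc in groups)

# Assumed numbers for illustration only: 96.5% of users are well served
# (accuracy 1.0); 3.5% are not served at all (accuracy 0.0).
groups = [(0.965, 1.0), (0.035, 0.0)]

print(f"Overall accuracy: {overall_accuracy(groups):.1%}")           # 96.5%
print(f"Worst-group accuracy: {min(a for _, a in groups):.1%}")      # 0.0%
```

Reporting the worst-group accuracy alongside the overall number is one simple way to keep a headline metric from hiding a failure like this.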

Data sets dictate how algorithms work.

What proportion of the dataset should be of black people? Do you create separate databases for each race and measure performance for each? How should you collect data in a way that includes everyone? Exactly how well does a product need to address these questions before it’s ready to bring to market? These questions have no definitive answer or protocol. In a business ecosystem that embraces the expression “move fast and break things,” some companies may not be considering inclusive algorithms at all.

On the one hand, inclusivity has costs. It’s expensive to compile vast datasets that are inclusive of all people. It’s easier and cheaper to test products solely with people in your network. A smaller data set can reduce time to market, technical complexity, and, ultimately, the cost of building a product. 

On balance, though, a lack of inclusivity can ultimately be costlier. A company that adopts exclusive technologies risks branding itself as more interested in padding its wallet than in making the world a better place. Phrased another way: inclusivity is an essential part of goodwill, and goodwill is a crucial part of marketability.

How to vet a technology partner:

The technology partners you choose to work with reflect your brand, especially in any customer-facing systems. To vet your technology partners, you should:

Check their ad copy

Does this company care about diversity? Do the company’s LinkedIn pages, websites, or advertisements show images with diverse populations? Do they have any blogs discussing algorithmic inclusivity or diversity?

Ask about their processes

Any company should know how its algorithms perform for people of all races, and it should be able to explain clearly how it addresses algorithmic bias. Does it have a process it can describe in simple terms? It should be able to tell you where problems may arise and how it is solving them.

Test it

Get a diverse group of peers to try the system. See if the experience differs in any way. What would ideal behavior look like? If anything doesn’t work, provide feedback to the company so they have the opportunity to improve it.

Be candid

It can be hard to make sure a product works for everyone. There are 8 billion people in the world, and unless you test with all of them, you cannot be certain. Your voice and expectations can help ensure that technology partners take the issue of algorithmic inclusivity seriously.


——————————————


Assignment: each participant could select their own topic, as long as it was relevant to their business venture. Several participants incorporated entrepreneurship and innovation into their stories, and we felt that Rachel Baker's article best aligned with HAE's mission.

The Harvard Alumni Entrepreneurs (HarvardAE) is a community dedicated to advancing innovation and entrepreneurship by alumni from all thirteen Harvard schools. Through our Chapters and global initiatives, alumni are offered support, resources and connection to peers, mentors and experts to help grow their ideas and succeed.

The HAE Thought Leadership Writing Incubator (TLWI) is designed to help aspiring thought leaders find, frame, and write their best stories. Created in partnership with The Institute for Thought Leadership (IFTL), the 10-week program offers subject-matter experts the skills and tools to find and write the stories that position them as thought leaders in their niche.

Rhea Wessel is a writer and the founder and head of the Institute for Thought Leadership. A former finance and tech journalist, she now focuses on training subject-matter experts in the language of story, as well as on ideation and messaging for companies. As a journalist, she wrote for The Wall Street Journal, The New York Times, and CFA Magazine. More recently, in her work for companies, Rhea has written and edited thousands of stories across more than 30 industries. She has worked for companies such as Accenture, Roland Berger, Allianz Global Investors, BASF, and Siemens. Rhea is a graduate of Columbia University. Her book, “Write Like a Thought Leader,” was published in 2022.