The World-Wide Work.

Summary

“The World-Wide Work” is a talk on automation, power, justice, and labor in the tech industry. It’s a very different kind of talk for me; I hope you enjoy it.

If you’d like, you can read my transcript of the talk. Here’s a video recording of the talk, courtesy of the New Adventures Conference, which took place in Nottingham, England. (You can also watch the video on YouTube, or watch the video on Vimeo. Please note that the video is not transcribed or captioned; you can read my transcript of the talk below.)

I was fortunate enough to give this talk two other times in 2019: at WordCamp Boston, and at Clarity Conference. I was also scheduled to deliver the talk at the end of this year’s Webstock, but a family emergency called me away before I could do so. (To the organizers and attendees: I’m truly sorry I missed you.)

This talk owes many things to many, many people. First, my deepest thanks to Mandy Brown, Deb Chachra, and Ingrid Burrington for introducing me to Ursula Franklin, and to The Real World of Technology. Maciej Cegłowski brought several Tech Solidarity events to Boston in 2016 and 2017, which inspired me to read more about the history of organized labor in the United States. I’m incredibly grateful to Garret Keizer and Graham Sleight, who each read early drafts of this talk, and offered me invaluable help and advice. And of course, my deepest thanks to all the people whose writing and research contributed to this talk.

I’d also like to thank Elizabeth Galle not just for reviewing several versions of this talk, not just for her encouragement, and not just for listening to more than a few rehearsals: I’d also like to thank her for everything else she’s ever done for me. This talk, and much of my career, couldn’t have happened without her time, her support, and her love.

And finally, I’d like to thank you for reading.

Introduction


In this talk, I want to look at some ways the Web is changing, and how our work is changing alongside it. I want to talk about web design as an agent of power, and to discuss its potential to do harm. I want to suggest that web design has, as a practice, become industrialized, and I want to look at how that will change the nature of our work in the months and years to come. I want to talk about how the web has always excelled at creating new kinds of work, before rendering that work—and its workers—invisible.

And then I want to talk a little bit about hope.

But I’d like to start by talking about something else.

This is a starling.

A speckled starling in flight, its wings spread wide open.
Image source: Peter Thornton — flickr.com

Starlings are small birds, found on nearly every continent.

A speckled starling in flight, its wings spread wide open.
Image source: Noel Feans — wikipedia.org

Some starlings have this beautiful metallic shine, which is stunning when the light catches them in flight.

A starling sits on a fence, with some orange and green leaves in the background.
Image source: Phil McIver — flickr.com

Generally, starlings are known to be incredibly mischievous, and rather brilliant. Their birdsong is quite complex, syntactically speaking, and starlings are skilled mimics. Many starlings are capable of imitating other birds’ calls, even the odd bit of human speech. There have even been cases of starlings imitating car alarms. (Often in the middle of the night.)

But that’s not what I want to talk to you about. Because what happens when a mass of starlings takes flight is…well, I wish I had the words to describe it.

Video © Nerina Fielding — youtube.com

A flock of starlings has an absolutely beautiful name. It’s called a murmuration. A murmuration of starlings. This is one of the most complex flocking mechanisms known in nature. And it almost feels like something…different, doesn’t it? It’s a bit like watching a current of water, or an avalanche in slow motion, or a billow of dust alight on the wind. A murmuration could be several dozen starlings, a few hundred, or it could be several thousand. All moving as though in unison, tracing ethereal, ghostly shapes in the air.

Mechanically, a murmuration is surprisingly straightforward: when a neighbor moves, so do you. And that action cascades up from an individual bird, rippling upward and outward to the entire flock.

A massive flock of starlings flies over a marsh.
Photo by Tony Armstrong-Sly

Now, a murmuration often happens when starlings gather in the evenings to roost. But occasionally, it’s a matter of defense: a murmuration might begin when the flock is under attack, or when a falcon or some other predator draws near.

That said, my favorite thing still might be the name: “murmuration.” After all, it’s reminiscent of the sound the flock makes as it moves.

Video © Jan van IJken — youtube.com

A murmuration is like a quiet, gentle voice, whispering in the air.

Individually, starlings are beautiful. Collectively, starlings become a wonder.

Delighting in the small

I’ve been thinking about starlings and their murmurations recently, especially as my work over the last few years has shifted to working with design systems. Because really, a design system is all about beauty rippling up from tiny, isolated, seemingly disconnected parts. We design these patterns—effectively little responsive layouts themselves—and understand how they need to change and adapt across different breakpoints.

Personally, I love that my work has begun focusing on the smaller pieces of a design, and working up from there. Because there’s some real delight to be found in those details.

Want an example?

Let’s talk about drop caps.

I was recently assigned a bug report, dealing with the drop caps in a design system I’m working on. Three screenshots were attached to the bug report, and they looked something like this.

Three screenshots from Firefox, Chrome, and Safari. Each shows a large first letter, or “drop cap”: Firefox’s is properly aligned, but inconsistent with Chrome and Safari, which are much too tall.

It’s possible you’ve already spotted the bug, but let me turn borders on—that should really highlight the issue.

The same three screenshots from Firefox, Chrome, and Safari, with borders around the drop caps.

Here’s the problem in a nutshell—and technically, it’s two problems. First and foremost, the alignment of these drop caps isn’t consistent across these three browsers. In Firefox, we have a nice, tight box around the shape of our letter, which is the effect we’d really like to achieve. But unfortunately, that’s the outlier: Chrome and Safari are sized consistently, but the boxes are much taller than we’d like. And that’s our second bug.

So what’s happening here? Well, if we peek at the code, we can see another kind of pattern, as our drop cap’s generated with a very common bit of CSS:

<p>
  Matthew watched the storm, so…
</p>

…

p:first-letter {
  font-family: "Playfair Display", serif;
  font-size: 5.5rem;
  float: left;
  line-height: 1;
  margin-right: 0.05em;
}

Here, we’re using the :first-letter pseudo-element to, uh, select the first letter in our paragraph, and then style it: setting it in a pleasing serif, making it visually more prominent, and floating it to the left.

So that’s where :first-letter’s left us. But bear with me for a second: what if we changed our approach? What if we surrounded our first letter with some actual markup, like a span element?

<p>
  <span class="dropcap">M</span>atthew watched the storm…
</p>

…

.dropcap {
  font-family: "Playfair Display", serif;
  font-size: 5.5rem;
  float: left;
  line-height: 1;
  margin-right: 0.05em;
}

Now, I’ll admit: from a semantic standpoint, this feels really ungainly. But what happens to the design?

Three screenshots from Firefox, Chrome, and Safari, showing consistently-sized drop caps.

…well, look at that. Now we have consistent heights—all just by switching to a little bit of markup.

But! While the box heights are consistent, they’re all consistently too tall: we’ve still got ungainly gaps above and below each letter. Can we close up those spaces, and make the drop cap a little more attractive?

It wasn’t until I found a blog post by Vincent De Oliveira that I understood what was happening here—and how to address it. Basically, every glyph in a typeface—every number, every letter, every character—is drawn on something called an “em square,” a box that acts as a sort of coordinate plane upon which each character is drawn. That’s the box that’s being drawn for our drop caps, and that’s what makes them appear a little too tall.

In a typeface, an em square is the box upon which a single glyph is drawn. There is space reserved at the top and bottom of the square for a glyph’s ascenders or descenders.

You see, each em square has space reserved at the top and the bottom for a letter’s ascenders and descenders. And for our drop caps, we don’t actually need that space. Put another way, we want to remove those spaces from the bounding box, leaving just enough room for our letters.

And since we’ve introduced some markup surrounding the letter, we can do exactly that. We can use generated content to establish some negative margins around each letter, like so:

.dropcap:before,
.dropcap:after {
  /* Generate empty blocks before and after the letter */
  content: "";
  display: block;
}
.dropcap:before {
  /* Close up the extra space above the letter */
  margin-bottom: -0.175em;
}
.dropcap:after {
  /* Close up the extra space below the letter */
  margin-top: -0.05em;
}

Those negative margins “draw” the letter closer to the outer edges of the box, effectively erasing the extra spaces.

Two images of text. The first is labeled “Before,” and shows a misaligned drop cap. The second image is labeled “After,” and shows a perfectly-aligned drop cap.

And with that extra bit of code, we’re finished!

…or are we?

Here’s the thing: we’ve finished the visual treatment of this design pattern. But on the web, we’re not just serving sighted users. We’re designing for users who might not browse the web like you and I do—who might not see the web like we do.

So what does this design sound like, I wonder?

Well. That’s interesting, isn’t it? Let’s listen to it again.

Instead of saying “Matthew,” this screen reader breaks it into two separate words: “M”, and then the rest of the word: “atthew.”

Now, this is happening because some assistive technology—in this example, Apple’s VoiceOver screen reader—notices the markup around our initial letter, and interprets it as a separate, discrete word. That’s why we hear “M! atthew”, which is a fairly jarring experience. (To say the least.)

Instead of resting on our laurels simply because the design looks right, we need to provide markup that better describes our content to all our readers—including those who might be listening to our design.

And after some experimentation, we landed on this:

<p>
  <span role="text">
    <span aria-hidden="true">
      <span class="dropcap">M</span>atthew
    </span>
    <span class="visually-hidden">Matthew</span>
  </span>
  watched the storm, so…
</p>

—okay, I’ll be the first to admit that this feels like a lot! But amid all that markup, there are only two things happening here.

First, we’re treating the drop cap as a purely visual element, using the aria-hidden attribute:

<p>
  <span role="text">
    <span aria-hidden="true">
      <span class="dropcap">M</span>atthew
    </span>
    <span class="visually-hidden">Matthew</span>
  </span>
  watched the storm, so…
</p>

Here, the aria-hidden attribute lets us hide the element from assistive technology. This prevents the “split” word in our drop cap from being read aloud.

Additionally, we’ve introduced a visually-hidden class, which allows us to visually hide the text in a more inclusive manner:

<p>
  <span role="text">
    <span aria-hidden="true">
      <span class="dropcap">M</span>atthew
    </span>
    <span class="visually-hidden">Matthew</span>
  </span>
  watched the storm, so…
</p>

But since that word is only visually hidden, it will still be read aloud by VoiceOver and other assistive technology. In other words, a screen reader will read this markup as one word: “Matthew.”
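For the curious: the talk doesn’t show the visually-hidden class itself, but a common implementation looks something like this. (This is one widely-used pattern—clipping the element down to a single pixel—rather than the exact rules from my design system, which may differ slightly.)

```css
.visually-hidden {
  /* Remove the element from view without using display: none,
     so assistive technology will still announce its contents */
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```

Crucially, this hides the element visually without display: none or visibility: hidden, either of which would also remove it from the accessibility tree—and silence it in screen readers.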

With some more robust, descriptive markup in place, now we have a finished design. And it’s a design that sounds just as good as it looks.

Design and power

I want to acknowledge something about this. As the designer of this pattern, I have an incredible amount of power throughout this process.

Now, I realize that sounds a bit odd. “Power.” It’s a word we don’t use very much in our industry. After all, it’s hard to think of web designers as “powerful”: so much of our work is beholden to clients, to deadlines, to the demands of our products and their audiences.

So what do I mean by power?

Well, quite literally, it’s the ability to exert influence. To change behavior. As I work through these drop caps, the way in which I design this pattern determines who is able to access my work—and who will be excluded from accessing it. I’m shaping how usable this content will be.

This power applies to drop caps, sure, but it applies to everything we design and build on the web. It’s all too easy to design something that will exclude people: maybe our sites are too heavy, and won’t load for people who have limited data plans, or slow connections; maybe someone needs assistive technology to browse the web, and our designs don’t work with their needs.

It doesn’t matter if we don’t mean to exclude someone, sadly. As a friend once said,

“Any sufficiently advanced neglect is indistinguishable from malice.”

That’s why we have to start being more explicit about power.

We don’t talk nearly enough about power in design. And part of the problem is the way we talk about “web design” as a concept. In our industry, we tend to assume that design is somehow apolitical. Something pure. We think of design as separate from the world in which it sits. Unfortunately, nothing could be further from the truth.

Want one example of the ways in which power can influence design? Let’s talk for a moment about Robert Moses.

Robert Moses wears a dark suit and tie while holding a folder containing his plans for the New York World’s Fair. Moses stands in front of a map of the park where the World’s Fair will be held.
Image source: The New York Times: “Why Robert Moses Keeps Rising From an Unquiet Grave”

Robert Moses was a massively influential public official in mid-twentieth century New York. He was the New York City Parks Commissioner, overseeing countless public construction projects—like the Triborough Bridge, Shea Stadium, and Flushing Meadows Park, to name just a few. He was also instrumental in getting the United Nations to locate its headquarters in Manhattan.

Moses was known as the “Master Builder” of New York, and during his tenure quite literally shaped New York’s infrastructure. The impact and scope of Moses’ work can still be seen today.

Robert Moses was also an avowed racist.

Robert Caro, Moses’ biographer, said that “Moses had always displayed contempt for people he felt were considerably beneath him.” I’ll not quote some of the vile, hateful things Moses said—for that, I’ll refer you to Caro’s biography of Moses. But I will note that many of Moses’ public works projects were intentionally placed out of reach of New York communities that were predominantly Black or Hispanic.

I want you to keep that in mind as we look at one example of Moses’ work.

Video source: Hopes and Fears: “The lingering effects of NYC's racist city planning”

This is one of the overpasses Robert Moses built over the parkways of Long Island. Moses specified that these overpasses must be low—some with clearances as low as 7’7”. That’s about the height I can reach if I raise my hand above my head, if that gives you an idea of how low that is.

Why so low? Moses wanted to ensure that buses would never be able to pass beneath these overpasses. In other words, you could access the beautiful parks of Long Island if you owned a car—which, in the middle of the twentieth century, meant that you were fairly affluent, and almost certainly white.

Moses’ design of these overpasses meant that if you relied on mass transit—in other words, if you were Black, or poor, or both—you would be prevented from accessing the parkways, and the lovely parks they led to.

Throughout history, there are many, many instances of design being used much as Robert Moses did—as a means to encode racist and classist biases, as a vehicle through which vulnerable communities are harmed.

But that’s enough about Robert Moses. Because now, I want to show you a map.

A map of Cleveland neighborhoods, showing the disconnect between quality broadband and low incomes

This is a map of Cleveland, Ohio. The map was drawn in 2015, based on information collected by the National Digital Inclusion Alliance, an American non-profit. The pink areas on the map represent communities where more than 35 percent of residents live below the poverty line; the green areas of the map are where AT&T deployed high-speed fiber networks.

You’ll notice there’s almost no overlap.

In pursuit of profits, a business prioritized deploying its networks in areas of a city that would yield the greatest financial return.

I want to suggest that this, too, is a kind of design. Functionally, it’s the same as Robert Moses’ work: it’s enforcing structural biases against the poor, and excluding them from accessing the internet with the same ease, the same speed, as more affluent citizens.

Whether we’re discussing Robert Moses, communications companies, or, yes, even drop caps, we need to sit with the fact that while design has a rich history of providing solutions, it is also capable of visiting great harm upon people, of packaging bias, of cementing inequalities.

In other words, we have to look at design as an agent of power. Of something capable of good, yes, but of something equally capable of harm. Even if we don’t always mean to do it, our work can privilege the privileged, and exclude marginalized populations. That’s why we have to talk about the power we possess as designers, as developers. And we have to work to wield that power responsibly.

How the Web has shifted.

I wanted to talk about power today, because it feels like…well, like something shifted in recent years. To that end, let’s look at a few things some of our industry’s most successful companies did in 2018—a year that somehow seemed to last ten whole years. Let’s start with Amazon.

A diagram showing a photo of a smiling Black woman, with four areas highlighted and labeled: “Female: 100%”, “Eyes are open: 100%”, “Happy: 97.4%”, “Smiling: 100%”.
Image source: Amazon Rekognition homepage

In May of 2018, it was discovered that Amazon had developed its own facial recognition software called “Rekognition,” and had begun selling it to law enforcement agencies in the United States.

Additionally, it was discovered in November 2018 that Amazon had filed a patent application in which its Ring doorbells—which have a camera embedded in them—could use that same Rekognition software to alert homeowners and police to suspicious activities and people. This despite the fact that, in a test conducted by the ACLU, the software falsely matched over two dozen members of the US Congress with criminal mugshots—many of whom were people of color.

Twenty-eight members of the United States Congress.
Image source: ACLU: “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots”

And next, Google. In May of 2018, it was discovered that Google was bidding on contracts with the United States Pentagon, specifically for something called “Project Maven.” Google hoped to provide artificial intelligence that could interpret video images, which could, in turn, be used to improve the targeting of drone strikes.

Shortly thereafter, various internal leaks started to surface about “Project Dragonfly”, a secretive initiative to create a censored, trackable version of Google’s search tool for release in China. This initiative was roundly condemned by human rights organizations across the globe, including Human Rights Watch and Amnesty International.

According to an open letter signed by the coalition,

If such features were launched, there is a real risk that Google would directly assist the Chinese government in arresting or imprisoning people simply for expressing their views online, making the company complicit in human rights violations.

And on that front, let’s turn briefly to Facebook.

You may remember that over the course of 2018, Facebook had a number of massive user data breaches, and more revelations came out about use of the platform for coordinated political attacks.

But I’d like to call your attention to something that happened in September of 2018. That’s when United Nations human rights investigators issued a report stating that Myanmar’s military had targeted its population of Rohingya Muslims with mass killings and gang rapes, carried out with “genocidal intent.” And in that report, the investigators said that Facebook had been instrumental in facilitating that hatred. Specifically,

The role of social media is significant. Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet. Although improved in recent months, the response of Facebook has been slow and ineffective. The extent to which Facebook posts and messages have led to real-world discrimination and violence must be independently and thoroughly examined.

A useful instrument for those seeking to spread hate.

Well.

I don’t know about you, but it’s easy for me to look at the last year’s events and feel, well, despair. Because this doesn’t feel like the industry I grew up in. The Web just…feels a little darker now.

And to me, that feels like quite a shift from the Web’s early years. Back then, much of the way we talked about the Web was rooted in endless amounts of optimism. Maybe you remember this moment, from the 2012 Olympics.

Video source: The Complete London 2012 Opening Ceremony

Fun fact about me: when I saw that video for the first time, I stood up in my living room and cheered. (That should give you some indication of how much fun I am at parties.)

But for me, there was something magical about that moment. I mean, here’s Sir Tim Berners-Lee—the guy who created the World Wide Web!—standing in front of the world to proclaim that THIS IS FOR EVERYONE. And I think it really epitomized the way our industry thought of the Web: that it wasn’t just a communications network, it was an ideal. An ideal that said the promise of a free, open, universally-accessible network would connect people across the globe and, given time, improve the world.

But I don’t feel that ideal much any more. Not these days.

Now, maybe it’s just me; I’ve become an old man in this industry, after all. But I don’t think it is just me. I think there really has been a shift of late. The Web I grew up in looks very, very different from the Web today.

But here’s something I’ve realized. We got to this point—this low point—in part because the Web’s growth has followed a kind of pattern. Not a design pattern, mind you: it’s a historical pattern. And it’s one that has a profound effect on the Web, on our industry, on our work, and on us.

If we can understand that pattern, that’ll go a long way to helping us understand how we got here—and how we might move forward.

How we got here

Are you familiar with Ursula Franklin? She was a metallurgist, a physicist, a university professor, and generally one of the most inspiring, most accomplished people I know of. In addition to everything else, she was the author of one of my favorite books: The Real World of Technology. It’s one of those books I wish I’d read at the beginning of my career. It’s a remarkable look at technology not as an abstract concept, but as a social force that shapes—and reshapes—people’s lives.

In the book, Franklin suggests that every technology “ages” in the same fashion, passing through three distinct stages of development, with each stage marked by the technology’s relationship to the greater society as a whole.

If I were going to presume to slap labels onto each of those stages, they could be broken down like so:

  1. Advocacy.
  2. Adoption.
  3. Institutionalization.

The first stage is one where a technology is first introduced to a society.

  1. Advocacy. Excitement and promises.
  2. Adoption.
  3. Institutionalization.

In this phase, a technology offers a solution to all of life’s drudgeries and cares. Early adopters explain how wonderful and helpful a new technology might be, and engender a real sense of excitement.

That advocacy and optimism plants the seed for widespread adoption. And that brings us to the second phase, which is what happens as a technology gains traction.

  1. Advocacy.
  2. Adoption. Acceptance, growth, and standardization.
  3. Institutionalization.

As more people use a technology, standards are established, and infrastructures are put in place to support that new technology.

There’s also a shift in the relationship between a technology, and the people who use it. In the first phase, the user is intimately involved with the technology, and may have a great deal of control over it; in this phase, however, that control is lessened, and the role of people—whether users or workers—is drastically reduced.

The third phase is what happens when a technology achieves an institutional status.

  1. Advocacy.
  2. Adoption.
  3. Institutionalization. Economic consolidation, and stagnation.

At this stage, a technology is no longer optional. It’s an expectation. A necessity. The standardization process results in economic consolidation, with control over the technology held by a few powerful, invested parties. And at this point, a technology can become so stabilized that it stagnates: if it changes at all, the changes are minor.

Here’s the important thing to note about this trajectory:

The promise of liberation that’s made in the first phase is never, ever fulfilled.

Here’s one example Franklin cites in her book. This is an excerpt from an article written in 1860:

The sewing machine will, after some time, effectively banish ragged and unclad humanity from every class. In all benevolent institutions, these machines are now in operation, and do—or may do—100 times more towards clothing the indigent and feeble than the united fingers of all the charitable and willing ladies collected through the civilized world could possibly perform.

The sewing machine was introduced to the public in the middle of the 19th century. When it was made commercially available, it was advertised as an appliance that would free women from the routine drudgery of hand-sewing.

In fact, the authors of this article claimed that the sewing machine would end poverty. And remember, this was written in 1860! The sewing machine didn’t quite deliver on these promises. Sadly, poverty’s still going very, very strong.

But here’s something interesting. A few short decades later, this pamphlet said that a female operator could use a Singer sewing machine to produce 3,300 stitches per minute.

A yellowed instructional pamphlet for a Singer Sewing Machine.
Image source: Hagley Digital Archives, Instructions for Using Singer Sewing Machine, No. 81-4

That shift in tone is really intriguing to me: as the technology improved, the messaging around sewing machines shifted from personal liberty to technical efficiency.

Why the change? Well, let’s put the machine in context. At this time, the sewing of clothing was being done at a truly industrial scale for the very first time.

A factory room full of female employees seated working sewing machines. There is a male supervisor standing near the left.
Image source: Museums Victoria Collections, Item MM 114834: “Female Employees Working in Sewing Room, Simpson's Gloves Factory, Richmond, Victoria, circa 1932”

And that work was being done in factories and in sweatshops that exploited the labor of women and immigrants.

A factory room full of male employees, who are seated while working sewing machines. There is a male supervisor standing near the right.
Image source: Kheel Center: “Men sewing in a large shop, circa 1910”

This industrialization of clothing required a classic division of labor: one seamstress only sewed up sleeves, another worker sewed the sleeves to the shirt, another worker would cut buttonholes, another worker would press the finished shirts. And then, as the industry evolved, those individual tasks—the sewing, the cutting, the pressing—became increasingly automated, excluding people altogether.

A young Black woman stands in front of seven electric sewing machines, watching them as they manufacture clothing automatically.
Image source: Kheel Center: “A young African American woman watches at least seven sewing machines working simultaneously.”

People are promised that technology will free them; ultimately, as the technology matures, it captures them.

On the left, a nineteenth-century pamphlet for the Singer sewing machine. On the right, an image from the opening ceremonies of the 2012 Olympics in London, where the words “This is for everyone” appears in large digital letters.
Image sources: Hagley Digital Archives, Instructions for Using Singer Sewing Machine, No. 81-4; The Complete London 2012 Opening Ceremony

I’d like to propose that what happened with the sewing machine is currently happening with the Web: that web design is becoming industrialized, just as sewing was. And it’s following the exact same pattern that Ursula Franklin outlined.

In fact, there’s a great example of this in the mission statement of The Web Foundation, the non-profit organization founded by Sir Tim Berners-Lee to advocate for the open Web. That mission statement says,

We envision a world where all people are empowered by the Web.… The Web’s capabilities will multiply, and play an increasingly vital role in reducing poverty and conflict, improving healthcare and education, reversing global warming, spreading good governance and addressing all challenges, local and global.

I don’t think we have to squint too hard to see how these statements resonate with the introduction of the sewing machine. Not only will the Web make it easier to find and publish information online, but it will reduce poverty, improve healthcare, and reverse global warming.

I don’t want to deny the Web’s potential—it really is a revolutionary medium. Heck, it gave me a career; it probably gave you one, too. But it’s worth asking ourselves: if that’s the promise of the Web, where do things stand now?

Has the Web’s promise been fulfilled?

To try and answer that question, let’s start small. I want to tell you two short stories.

The first one is a story about Jade.

A Black woman sits at a fancy dinner table, looking over the back of her chair toward the camera. She has a slight smile on her face. Behind her, wine glasses and other dinner guests can be seen in soft focus.
Copyright © Joyelle West

Jade is a Black woman in her mid-twenties. She recently moved to New York, and has spent the last few years trying to get a job in the tech industry. Jade doesn’t have a college degree, which has made her search difficult, to say the least. Now, Jade has a sharp mind, and learns quickly; she even managed to teach herself HTML and CSS while working several part-time jobs. But even still, her lack of formal education has made it hard for her to get interviews, and harder still to get job offers.

In happier news, Jade recently got a job working in customer support for an internet start-up. Her day involves fielding emails and phone calls from the company’s customers. She’ll be the first to tell you that this isn’t especially glamorous work, but she feels like she’s finally got her foot in the door in the tech industry. And she’s excited about that.

Ethan and Jade smile broadly at the camera while standing on a busy New York city street. In the background, New Yorkers walk quickly past them.

I’ve known Jade all her life. Jade is my sister.

And I was thinking about Jade last May, when Duplex took the stage at last year’s Google I/O conference. And from that stage, Duplex made two phone calls. Here’s the first phone call. In this recording, you can hear Duplex booking an appointment at a hair salon:

Sounds pretty mundane, right? Duplex may sound like you and I do—right down to the umms and ahhs—but Duplex isn’t human.

According to Google, Duplex is a complex neural network built atop TensorFlow, a machine learning framework maintained by Google. And it’s capable of making phone calls, speaking to a human, and completing simple transactions. But again: Duplex isn’t human.

Now. There’s a lot to be said about how creepy Duplex is, about how ethically and morally problematic it is.

But when I heard Duplex’s recordings, the first thing I thought of was Jade. Her job requires her to reply to customer emails, and occasionally offer phone support. It’s a job that many would classify as “unskilled.” What happens when Duplex, or technologies like it, become mainstream? What happens to Jade’s job?

What happens to Jade, and the millions of women and men like her?

Now I’d like to tell you a story about a woman named Brenda.

A Black woman named Brenda Monyangi gestures to herself while being interviewed on a purple couch.
Image source: BBC: “Why Big Tech pays poor Kenyans to teach self-driving cars”

Brenda Monyangi is a single mother who lives in Kibera, which is considered the largest slum in Nairobi, and the largest urban slum on the African continent. (source) Brenda commutes two hours by bus to her job, where she works for a company that produces “training data.”

Training data is fed into machine learning algorithms, to make them more accurate. Put another way, if you show an image recognition algorithm one million pictures that you know contain cats, the algorithm will be able to more accurately identify, well, cats.
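To make that idea a little more concrete, here’s a toy sketch of how labeled training data steers a classifier. Everything here is invented for illustration: real image recognition uses neural networks trained on millions of images, while this version reduces each “image” to a pair of hand-made feature values and averages them per label.

```python
def train(labeled_examples):
    """Average the features for each label to build a simple model."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Predict the label whose averaged features are closest."""
    def distance(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(model, key=lambda label: distance(model[label]))

# Labeled training data: (features, label) pairs — a vastly simplified
# stand-in for the highlighted-and-labeled regions trainers produce.
training_data = [
    ((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not cat"), ((0.2, 0.1), "not cat"),
]
model = train(training_data)
print(classify(model, (0.85, 0.85)))  # prints "cat"
```

The point of the sketch is simply this: the model is nothing but a summary of its labeled examples. Every prediction it makes traces back to human labeling work.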

Brenda’s job is to create training data. During her eight-hour shift, she and her fellow “trainers” will view many, many images on their computer screens.

Video source: BBC: “Why Big Tech pays poor Kenyans to teach self-driving cars”

On each image, Brenda will highlight key areas with her mouse, and then label each region.

A computer program showing an image of cars driving down the street. Certain areas of the image are highlighted in blue, and each highlighted region is labeled to describe its contents.
Image source: BBC: “Why Big Tech pays poor Kenyans to teach self-driving cars”

The results of Brenda’s work, and the work of the employees at Brenda’s company, will be used by companies like Microsoft, Salesforce, Amazon, and Google to fuel their AI research. Put another way, employees at Brenda’s company are powering the platforms and profits of the world’s biggest corporations.

Employees at Brenda’s company make around $9 a day. The most accurate “trainers” will receive a shopping voucher.

The BBC reporter who covered Brenda’s story visited the company’s San Francisco headquarters. During that visit, he asked the CEO about the low pay of their workers:

Video source: BBC: “Why Big Tech pays poor Kenyans to teach self-driving cars”

Here’s the key exchange:

Interviewer: Some of your clients are the biggest, richest companies in the world. Can they not afford to pay more than $9 a day for this work?

CEO: We make a guarantee to every single worker…that they are paid a living wage. If we were to pay people substantially more than that, in some of the markets that we’re in, that would throw everything off, and that would have, actually, potentially, a negative impact on the cost of housing, on the cost of food, on the communities in which our workers live.

This is exploitation.

And frankly, this is also a kind of design. None of this happened by accident: hiring low-cost labor to perform rote, repetitive work is something that was intentionally, thoughtfully done. By opening offices in impoverished communities, where wages are low, where labor protections are nonexistent, this company—and their corporate clients—can maximize their gains and minimize their costs.

And imagine how precarious Brenda’s job is. As image recognition technology improves, will her job still exist in a few years’ time? It’s quite possible that the data work she’s doing is being fed into automated solutions that will eventually replace her.

Which is, for better and for worse, a fairly common strategy.

Several content moderators are seated at laptops in a shared workspace, reviewing information on their respective screens.
Image source: Wired: “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed”

For example, take Facebook. Modern social media companies struggle with content moderation. If you’ve ever wondered how you can get through most of a day online without seeing truly disturbing content, it’s due to the efforts of thousands of contract workers. Each of them sorts through videos, through images, through individual posts, searching for anything that might violate the company’s terms of service.

Just to be clear: this is terrible, stressful, traumatic work, visited upon low-paid contractors. But more recently, Facebook has been investing in various automated approaches to flagging explicit content. And to do so, they’ve begun gathering data from their current human reviewers, and using that data to train moderation algorithms.

An excerpt from the article: “Facebook also collects moderator data so this binary approach can be more easily automated…continuing to develop artificial intelligence solutions that can flag content without user reports.”
Image source: Vice: “The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People”

This is what we’re seeing at the corners of our industry: the automated solutions we design are responsible for creating exploitative working conditions for vulnerable workers, and for displacing and eliminating jobs.

But here’s the thing I want to suggest to you today, right now: it’s not just happening at the corners of our industry. Namely, our jobs—yours and mine—are just as vulnerable to the effects of automation.

I don’t think this is a controversial statement. Our industry has always embraced automation, at various levels. There are the task runners you and I use in our work, of course.

But there are more design-oriented utilities as well, like remove.bg: a remarkable website that uses artificial intelligence to instantly and automatically remove backgrounds from photos.

The remove.bg website, showing a photo of Ethan and Jade with its background removed.

On a broader scale, there’s some truly remarkable work coming out of Netflix. They’ve begun delivering not just personalized video recommendations, but personalized artwork. That is, the cover image that appears on a show or movie will change, depending on what you’ve watched.

For example, if you’ve watched a lot of romantic comedies, the cover artwork for Good Will Hunting might feature Matt Damon and Minnie Driver. However, if you tend to prefer comedies, Robin Williams might be featured.

A series of movie posters showing how your Netflix viewing history determines whether the Good Will Hunting artwork shows either Robin Williams, or Matt Damon and Minnie Driver.
Image source: Netflix Tech Blog: “Artwork Personalization at Netflix”

Or take Pulp Fiction. Watch a lot of movies starring Uma Thurman? She’ll be more likely to appear. If you’re more of a John Travolta fan—and who isn’t, right?—you can probably guess who’s shown.

A series of movie posters showing how your Netflix viewing history determines whether the Pulp Fiction artwork shows either Uma Thurman or John Travolta.
Image source: Netflix Tech Blog: “Artwork Personalization at Netflix”

To deliver on this level of personalization, Netflix created AVA, a suite of complex tools and software.

Frames taken from the Netflix shows “Stranger Things” and “The Cloverfield Paradox,” analyzed with facial detection.
Image sources: Netflix Tech Blog: “AVA: The Art and Science of Image Discovery at Netflix”

AVA uses facial detection and image analysis algorithms to scour videos for frames that have “cinematic qualities.” It looks for still images that have interesting compositions, or for characters with compelling facial expressions.

From there, those images are fed into another array of algorithms, which then determine which piece of artwork you see, based on your viewing history, your country of origin, and other profile markers.
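If you’re curious what that last step might look like, here’s a hedged sketch of personalized artwork selection. Netflix has described its actual approach as contextual bandits; this toy version just scores each candidate image against a viewer’s history and picks the best match. All the names and numbers are hypothetical.

```python
def pick_artwork(candidates, viewer_affinities):
    """Return the candidate whose tags best overlap the viewer's tastes."""
    def score(candidate):
        return sum(viewer_affinities.get(tag, 0) for tag in candidate["tags"])
    return max(candidates, key=score)

# Two candidate cover images for the same film, tagged by genre and cast.
good_will_hunting = [
    {"id": "romance-art", "tags": ["romance", "matt-damon", "minnie-driver"]},
    {"id": "comedy-art",  "tags": ["comedy", "robin-williams"]},
]

# A viewer who watches mostly comedies sees the Robin Williams artwork.
comedy_fan = {"comedy": 5, "romance": 1}
print(pick_artwork(good_will_hunting, comedy_fan)["id"])  # prints "comedy-art"
```

The real system is far more sophisticated, but the shape is the same: a catalog of machine-extracted candidate images, ranked per viewer, with no designer in the loop.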

It’s rather stunning work, and nearly entirely automated. And in fact, that automation was a key motivator for Netflix in building these tools. Specifically, they said that:

As our Original content slate continues to expand, our technical experts are tasked with finding new ways to scale our resources and alleviate our creatives from the tedious and ever-increasing demands of digital merchandising.

— “AVA: The Art and Science of Image Discovery at Netflix”

“Finding new ways to scale our resources.” This line spells it out pretty clearly: Netflix created an array of automated image production solutions not just to help improve engagement with specific videos. They adopted an automated solution because it would have been far more expensive to hire enough designers to achieve an equivalent scale.

As wildly impressive as Netflix’s image customization is, as miraculous as remove.bg feels, I can’t help but think of the design jobs that will be impacted by them. Heck, my first design job involved a considerable amount of production work. Low-level, unglamorous, repetitive production work: adjusting, cropping, and masking photos, day in, and day out. It wasn’t glorious work, mind you, but it was a job. And all told, it was a pretty excellent way to learn some fundamental skills.

But in the face of automated technologies invented to “scale resources” more effectively, what does that career path look like now?

This brings me to last year, when I saw this video, made by the design systems team at Airbnb.

Video source: Airbnb Design: “Sketching Interfaces: Generating code from low fidelity wireframes”

It’s very brief, but a number of things happen in it. In the video, a designer sketches a few shapes onto a piece of paper. That paper is then placed under a camera. Once that’s done, a computer analyzes the sketch, and instantly renders a finished prototype in a browser. It creates a “real” design, simply by looking at the sketch.

Now, it’s able to do so because each shape drawn by the designer corresponds to a specific design pattern in the company’s pattern library. From an engineering perspective, it’s brilliant; and on a personal level, I’ve always dreamed of something like this, of being able to instantly translate my ideas into a real, working prototype.
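Assuming a shape classifier has already labeled each sketched element, that final translation step can be as simple as a lookup from shape label to pattern-library component. This is only a guess at the mechanism, and every name below is hypothetical.

```python
# A hypothetical mapping from detected sketch shapes to components
# in a design system's pattern library.
PATTERN_LIBRARY = {
    "rectangle-wide":  "<HeroImage />",
    "squiggle":        "<Paragraph />",
    "rectangle-small": "<Button />",
}

def render_prototype(detected_shapes):
    """Translate a list of detected shape labels into component markup."""
    return "\n".join(
        PATTERN_LIBRARY.get(shape, "<!-- unknown shape -->")
        for shape in detected_shapes
    )

print(render_prototype(["rectangle-wide", "squiggle", "rectangle-small"]))
```

The hard part is the computer vision; but once shapes become labels, the “design” step is a dictionary lookup.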

And now that it’s here, well, I’m a little scared of it.

Don’t misunderstand me—this technology looks like a marvel, one that could be a real aid to me in my work. But I can’t help but think of all the jobs that normally sit between a sketch and a prototype: the designers, the developers, the researchers, the people involved in translating a rough idea into working, shipping code. And I wonder what this technology means for them.

For us. For you, and for me.


Here’s the way of it: over the last few years, our industry has engaged in various ethical and moral lapses in the pursuit of scale. And with that scale comes precariousness for its workers—for warehouse workers and contractors, but also for you and me.

Given all that, I’d like to make three suggestions to you.

  1. We are workers who make our living on the Web.
  2. As workers, we should have a voice in saying how our work is meant to be used.
  3. As workers, we should have a voice in saying how automation is incorporated into our work.

Keeping those in mind, the question we have to ask ourselves is:

What do we do about this?

I think we start with hope.

Let me share a bit of one of my favorite speeches with you. It’s by Cornel West, the American philosopher:

Video source: Cornel West, “Race Matters”

Here’s the bit I love:

I don’t believe in optimism. I don’t believe in pessimism. Black folks saying “I been down so long the ‘down’ don’t worry me no more, but I’ll keep struggling anyway.” That is not an optimistic statement, nor a pessimistic statement. It is neither sentimental nor cynical.

It is an expression of hope.

Hope is not the same thing as optimism. Never confuse or conflate hope with optimism. Hope cuts against the grain. Hope is participatory—it’s an agent in the world. Optimism looks at the evidence, to see whether it allows us to infer whether we can do X or Y. Hope says, “I don’t give a damn, I’m gonna do it anyway.”

— Cornel West, “Race Matters”, 27 April 2001

I think it’s a truly extraordinary talk. I can’t recommend it highly enough.

Now, I want to be very clear about something: Professor West is speaking from a very specific context here: that of struggling against the racial inequalities that Black Americans face every day. And I want to respect that context. I want to honor that context.

…with that said, I think his definition of hope can also be useful for us. Here in our industry, in our moment.

“Hope is participatory—it’s an agent in the world.”

In these times, I think we need to realize that hope is participatory. Hope takes action. Because in these times, it’s absolutely true that we have many, many challenges in front of us. And it’s easy to look at those challenges, and to feel overwhelmed. To feel despair.

But we have just as many opportunities to take action: to redesign our industry to be more equitable, and more just.

With that said, we need power to do it. And we need to recognize that individually, we lack that power.

Two photos of starlings. On the left, a speckled starling in flight, its wings spread wide open. On the right, an orange-and-blue starling perches on a branch, its feathers reflecting the setting sun.
Image sources: Peter Thornton — flickr.com; Noel Feans — wikipedia.org

In fact, we’re not unlike starlings in that way. We’re beautiful, and we’ve each done so many clever, striking things. But we’re small. Almost insignificant.

But collectively—well. That’s another matter entirely.

Video © Jan van IJken — youtube.com

After all, it’s our design thinking, our engineering, our planning, our work that’s made the Web what it is today. Collectively, we have quite literally designed and built some of these systems. When we work together—when we move together, what’s to stop us from redesigning these systems?

We need to unionize, my loves.

Just to be clear: I don’t mean this in an abstract sense. I mean we should recognize that we’re not just web designers, or web developers, but workers. And as workers, we have an opportunity to organize our workplaces, and form unions.

Now, a “union” means different things in different countries. But fundamentally, a union provides power and leverage: it allows workers to more equitably negotiate with their employers. Working collectively, they can negotiate for better hours, better wages, better working conditions—all this is true, and necessary.

Maybe more critically, a union allows us to express solidarity with every worker at our organizations: not just designers and developers, but also administrators, cafeteria workers, and contractors. And it helps ensure we’re working collectively to protect each other.

Now, you may be wondering: is there an example of an organized workplace in the tech industry? Well, thankfully, we don’t have to look any further than Google.

As you might remember, last year held a litany of bad news for Google: first there was Project Maven, Google’s foray into defense contracting; and then, there was news of Project Dragonfly, an attempt to create a censored search engine for China, one that could potentially enable state surveillance.

Then, in late October, the New York Times reported that Google had protected three executives accused of sexual misconduct—including Andy Rubin, the founder of Android.

Screenshot of the New York Times article revealing Google’s protection of executives who had been accused of sexual misconduct.
Image source: New York Times, “How Google Protected Andy Rubin, the ‘Father of Android’”

According to the reporting, rather than firing the executives, Google paid each of them millions of dollars upon their departure, even though Google was not legally required to do so.

Video source: PBS NewsHour — “Across the globe, Google employees walk out to protest sexual misconduct, inequity”

On 1 November 2018, one week after the Times exposé, twenty thousand Google employees and contractors walked out of the company’s offices around the world. And they presented Google’s leadership with five demands. Each demand represented the workers’ calls for a safe working environment—for an end to the sexual harassment, the discrimination, and the systemic racism they saw in their workplace. And the workers used the anger generated by the New York Times report as fuel they could use to mobilize on these issues.

I want to be clear: this mobilization didn’t happen in one week. Thanks to the internal debate around Projects Maven and Dragonfly, Google employees began organizing throughout 2018. And that mobilization took many, many forms. One group penned an open letter to Google’s CEO, asking him to cancel the Project Maven contract, and circulated it among employees; other groups of employees met in chat rooms to share their concerns with each other; another group got employees to submit questions to every single all-hands meeting, and then got other employees to up-vote those questions so they could guarantee those questions would be asked; and so on, and so forth.

In other words, amid all this internal dissent, Google’s employees had begun building social infrastructure that would enable them to better connect with each other. To build solidarity. And then, in the wake of the sexual harassment scandals, they could draw upon those lines of communication—and their justified anger—to mobilize a worldwide walkout.

Now, this isn’t a union. Not in a formal sense of the word. But it feels like we’re witnessing the beginning of something: to me, it feels like the beginning of a labor movement in the tech industry. A movement that might let us connect, share our concerns, and collectively work together on the issues before us.

After all, we built this mess. We designed this mess. We have participated in the creation of a world-wide web, a global network capable of cruelty, of exploitation, of enforcing vastly cruel inequalities.

But.

It’s also a Web that can delight people, that can educate people, that can connect people. The web has brought me here, to this wonderful room, and it’s brought you here, too.

And I’m grateful for those two things, because I consider this a rare, beautiful opportunity: for us to talk to each other, for us to listen to each other, to hear what we’re concerned about, to hear what we love about our work, and what we wish were different. Because it’s in these connections that we can start to build solidarity, to consolidate our own kind of power.

We are absolutely on a new adventure, you and I. There’s so much work before us, work for us to design a better Web—one that works more equitably for all.

Individually, we are capable of brilliant things. But collectively? We are a wonder.

Let’s get to work.
