Cory Doctorow made a name for himself by writing about the changing world of the internet and technology, both on the website Boing Boing and in his science fiction. Through novels like Down and Out in the Magic Kingdom, Little Brother, Makers, Homeland, and most recently, Walkaway, he’s explored everything from the future of copyright to totalitarianism and technologically enhanced police states to internet communities.
His next novel is Attack Surface, set in the same world as Little Brother and Homeland. Those novels take place in a near-future dystopia in which a teenager-turned-hacktivist works to undermine the Department of Homeland Security after it establishes a police state in the aftermath of a terrorist attack; they were passionate arguments against the erosion of civil liberties and privacy that has occurred over the last two decades. The new book comes out on Oct. 13, and Tor will reissue Little Brother and Homeland as an omnibus edition on Aug. 11.
I spoke with Doctorow about this upcoming novel, and how he looks back on how his works have changed since he began writing.
Polygon: You initially released Little Brother and Homeland in 2008 and 2013, respectively. Looking back on those two novels (and the short story Lawful Interception), what does the world look like, more than a decade on?
Cory Doctorow: There’s a (largely false) narrative about the prevailing view of networked computers and their relationship to human thriving in the 2000s and early 2010s, which is that “techno-optimists” were convinced that some kind of great force of history, multiplied by networked resistance movements, would free the world from tyranny.
While there were certainly some people — call them “techno-triumphalists,” maybe — who felt that way, the people I hung out with, the people who were offering aid and support to (or participating directly in) radical democracy movements had a more nuanced position: not “this will all be so great” but rather “this will all be so great ... if we don’t screw it up.”
After all, you don’t build an oppressive-regime-proof encrypted messenger or stealth VPN because you think that the other side isn’t *also* using technology! Nor do you get involved in policy fights over making it easier to censor and surveil the internet if you think everything is going to work out great in the end. The activism of those supposedly naive days was (and remains) motivated as much by stark terror of networked authoritarianism as by hope in the liberating power of technological resistance movements.
But things have changed! The past decade has seen two things:
First, the rise and rise of networked authoritarianism, abetted by monopolistic tech companies, realizing the worst fears of technological activists. Second (and ironically), a widespread recognition that technological activists were 100% right to be worried about what would happen if the networked world were to develop without any kind of human rights framework, and, *simultaneously*, a widespread revisionist history project that condemns those same activists for their supposed naïve blind optimism that such a thing would happen automatically, without any human intervention.
There are ways in which this pattern is similar to other policy fights, particularly those in which activists are trying to avert something that is a long way away. Think about climate change: for decades, climate change activists were primarily engaged with convincing people that climate change was happening at all. The combination of a long, attenuated cause-and-effect relationship between driving your car and facing a wildfire a generation later, and of massively funded disinformation campaigns to cast doubt on the idea of climate change, made convincing people of the reality of climate change into a brutal, grinding, decades-long project.
Those activists had some success, but for the most part, the thing that’s convincing people that climate change is real is climate change itself. The “policy debt” built up by inaction on climate change means that floods, droughts, wildfires, pandemics, hurricanes, and other climate crises are breaking on the reg, and they have a convincing power that surpasses even the most articulate activist’s arguments.
The problem now isn’t denialism, it’s nihilism. The climate crisis is so advanced today that it’s easy to just have no hope for averting it, to replace the inaction driven by a failure to recognize the problem with inaction driven by a belief that nothing can be done to avert it.
The same thing happened with privacy, surveillance, control, and monopolization online — after years of inaction (driven by a combination of sincere doubt about whether problems would manifest years down the road, and expensive campaigns to suggest that tech and governments should be left alone to collaborate on systems of surveillance and control), it’s now obvious to everyone that something terrible has happened to our electronic nervous system. The issue now is convincing those same people that it’s not too late to do something about it.
A lot has happened in the world in the last decade. How has writing about rights and oppression changed for you in that time?
There have been two big changes, I think.
First, there was once the idea that the tech-based discourse over human rights and civil liberties was a privileged parlor game. Tech users were perceived as overwhelmingly wealthy, male and white, so worrying about the liberties and rights of those users was unseemly — after all, these were the people who already had the most privilege and least to worry about in those departments.
But of course, the demographic of tech users today is ... everyone. There’s a digital divide, but we are at the point where even homeless people often have smartphones, and those who lack them rely on libraries to get online. The broad reach of tech across class, gender, age, racial, and geographic lines means that, first, it’s impossible to talk about tech and rights without talking about the intersections of those factors, and second, any project to build a human rights framework into tech touches on all those other issues, too.
The other big change is in the views of tech workers themselves. I’ve seen three distinct waves of tech workers during my time in the industry (and the struggle):
- Pre-dotcom bubble: Generally affluent, passionate technologists. Computers were expensive so computer users largely came from affluent backgrounds, but computers were not a path to enormous riches, so computer users were (often) driven by a passion for the subject, not (merely) dreams of absurd wealth. This cohort often combined a missionary zeal to get all people and all information online, with a patrician sense of duty to the network and the systems that ran it, working as volunteers to nurse along the fundamentally unstable and primitive systems their peers depended on.
- Post-recovery from the dotcom crash: After the boom-and-bust of the dotcoms, tech once again surged, with huge financial upsides materializing for a few lucky entrepreneurs and their early hires, and more on the horizon. While pre-dotcom technologists were often self-trained, or had done CS or engineering degrees out of a passion for the subject, a new cohort entered who had signed up because of the likelihood of a huge paycheck and an even bigger payout through an IPO or acquisition. These people might have gone into an MBA program a generation earlier, or maybe law school. They wanted a big upside, and tech was a sound bet. The infusion of people who were chasing finance ahead of passion damped down that sense of patrician duty to the network and those who relied upon it, though it amped up the missionary zeal. Many of these technologists, even very talented ones, saw the biggest tech companies as eternal fixtures: they had a dim awareness that Facebook had replaced Myspace, that Google had bested Altavista and Yahoo, and that Apple had clobbered Nokia, but they didn’t really believe that any of these companies would ever be unseated.
- Post-2008 financial crisis to today: A burgeoning awareness of inequality, discrimination, and corporate malfeasance (from foreclosure mills to climate denial to coverups for sexual predators to Big Pharma’s role in the opioid epidemic) has birthed a passionate new movement of young, (relatively) diverse technologists who are flexing their economic muscle — as a cohort of workers in an incredibly tight labor market who are at low risk of being fired and who will find it easy to find more work if they are — to pressure their employers and regulators to think through the human rights implications of their commercial activities, from drones to censorship to surveillance to harassment. Movements like Tech Won’t Build It and the Googler walkout (20,000 people!) are finding common cause with broader movements like Black Lives Matter and Extinction Rebellion, building the tools to support their radical brothers and sisters, while carrying out a fusion that says that the cause of technological liberation is inextricable from the cause of human liberation.
How did your latest novel, Attack Surface, come to be? It looks like your Little Brother character Marcus Yallow is about to have a bit of a rough time.
I often write as a form of therapy: I had been watching the growing monopolization of tech, its fusion with authoritarian projects, and the growth of a tech-for-evil industry (Palantir, for example) and getting increasingly anxious. At hacker conferences like Defcon and HOPE and CCC, I’d meet security researchers who cared about human rights, but drew paychecks from companies that were destroying them. The Little Brother books had always had a character who followed that path — Masha, the mysterious young woman who both helps and hinders Marcus and who works first for the DHS and then for a private security contractor — and her motives were doubtless just as sincere and complicated as Marcus’.
Embedding myself in the mindset of someone who knew everything Marcus did, but came to a different conclusion about what she should do about it — what was morally justifiable, as well — was an exercise in managing my own anxiety, of thinking through how people who seemed so nice and thoughtful in person could be doing these unthinkable, wicked things.
They say no one is the villain of their own story, but Masha actually is — she knows she’s doing wrong, but she also believes that in the grand scheme of things, it doesn’t matter, and if it does, it’s offset by the good deeds she does to balance her moral books. Marcus, by contrast, lacks the self-awareness to understand how he could be the villain of someone else’s story, so he never becomes the villain of his own. It leads him to put some people in harm’s way, to be reckless. Masha knows exactly what she’s doing.
This novel’s main character, Masha Maximow, works for a transnational cybersecurity firm, where she helps devise ways for governments to spy on their citizens. I’m really interested to see how she rationalizes what she does, given that it doesn’t seem to affect her directly.
No one is pure. We all make compromises, and often we make those compromises as a series of small, reasonable-seeming steps. You have a moral code, but you stray from it just a little, and now that new position is your new moral code. The next time you make an exception, it’s not relative to where you started, it’s relative to where you are now, and that step, too, seems reasonable. One inch at a time, you travel miles from where you started, and unless you’re looking back on the journey, it’s easy to feel like you’re basically doing good in the world. But then, if you look back to where you started from, it can trigger a vertiginous realization that you’re doing just appalling, terrible things.
Masha works with people who are trapped in that world, but she’s not: she knows exactly how she got to where she is and she knows about each and every compromise. She does it anyway, and rather than rationalizing, she compartmentalizes. The part of her that wants to do a good job for her boss is installing spyware to catch and terrorize dissident movements. The part of her that cares about her fellow human beings is secretly training those dissidents to avoid the software she just installed. She understands the reasons for doing both and doesn’t try to reconcile them — she just lives with the contradiction.
There are many instances around the world where we see repressive regimes using technology as a tool in their arsenal. What are some of the examples that inspired this story?
Companies like Hacking Team, NSO Group, and Palantir have installed mass surveillance systems in both “advanced” nations and poor, post-colonial ones in order to help dictators and autocrats shore up their power. Examples range from the Ukrainian authorities who used IMSI catchers (fake cellphone towers, AKA Stingrays) to capture the identities of everyone protesting the regime and threaten them by SMS, to the use of NSO’s software to target Mexican anti-corruption activists and to abet the murder of Jamal Khashoggi. Palantir’s policing tools have put whole populations of racialized people under continuous surveillance, with algorithmic guilt-assessments convicting people for what amounts to the color of their skin.
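For readers unfamiliar with the mechanics, the reason IMSI catchers work is simple: phones camp on the strongest cell tower they can hear and disclose their subscriber identity (the IMSI) during attachment, so a rogue tower that out-broadcasts the legitimate ones harvests every identity in range. Here is a toy sketch of that logic in Python; every name and number is invented for illustration, and it models none of the real cellular protocol stack.

```python
# Toy model of an IMSI catcher (all names and numbers invented).
# Handsets camp on the strongest "tower" they can hear and reveal
# their subscriber identity (IMSI) when they attach; a rogue tower
# wins simply by advertising more signal than the legitimate cell.
from dataclasses import dataclass, field

@dataclass
class Tower:
    name: str
    signal_strength: int                      # advertised power; higher wins
    seen_imsis: set = field(default_factory=set)

    def attach(self, imsi: str) -> None:
        # Real networks learn the IMSI during the attach procedure.
        self.seen_imsis.add(imsi)

def camp(towers, imsi):
    # Phones generally camp on the strongest cell they can hear.
    strongest = max(towers, key=lambda t: t.signal_strength)
    strongest.attach(imsi)

legit = Tower("carrier-cell", signal_strength=40)
rogue = Tower("stingray", signal_strength=90)  # out-shouts the real cell

protesters = ["001010000000001", "001010000000002", "001010000000003"]
for imsi in protesters:
    camp([legit, rogue], imsi)

# Every phone in range has now handed its identity to the fake tower.
print(sorted(rogue.seen_imsis))
```

The point of the sketch is that no vulnerability needs to be exploited: the attack rides on ordinary, standards-compliant phone behavior.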
How are you balancing these two lines of activist mindsets, the naïve idealists vs. the pragmatists? How do you see this playing out in the real world?
I think a much better frame is “tactics” vs “strategy.” As Chinese iPhone users discovered, the tactical choice to use an iPhone (because you have it, because it works, because it’s convenient) is in tension with the strategic goal of evading state surveillance. When the Chinese state ordered Apple to remove working VPN software from the App Store (and Apple caved), those users were exposed to unfettered surveillance by a state that was, at the very same time, actively rounding up a million Uyghurs and putting them in concentration camps where forced labor, nonconsensual medical experimentation, and punitive rape were all practiced; and also murdering imprisoned members of another religious minority, Falun Gong, to harvest their organs.
Tactics and strategy are always in tension: when Koch charities want to support the decarceration movement — but also want to preserve the hostility to working people that creates the economic conditions that lead to mass incarceration — do you work with them? They can mobilize enormous amounts of cash and resources for your immediate campaign, but they are also working to undermine your cause, and working with them lends your credibility to an organization that is ultimately your adversary.
The answer, I think, is “it depends.” To a certain extent, merely practicing mindfulness (reminding yourself that the Kochs and Apple are not your friends) can vaccinate you against trusting them too much and give you the alertness you need to ditch them when the mask slips, but that’s a hard discipline to maintain. At the same time, only working with people whom you support 100% is a self-marginalizing, sectarian move. It’s a constant balancing act, and if I had a formula for getting it right every time, I’d be a lot more effective as an activist!
What responsibility do you feel major tech and hardware companies have when it comes to how their tools are implemented around the world, for good or bad?
There are two ways to think about this: on the one hand, building dangerous products is an immoral act. If you design your system to allow police interception, and the cops in, say, Bahrain or Saudi Arabia then order you to spy on your customers for them, you are 100% complicit in that absolutely foreseeable outcome.
On the other hand, there’s a different kind of culpability that’s much more technical and more subtle, which is the extent to which you build products that can be modified by your users (or technologists working on their behalf) to protect themselves from the consequences of your design decisions. It’s one thing to design (say) Twitter in a way that enables mass harassment campaigns, but it’s a much worse thing to compound that harassment risk by narrowing or closing down APIs, and using patents, terms of service, and other legal weapons to deter those who would design their own anti-harassment systems to protect themselves from the bad conduct you’ve enabled.
Designing an imperfect system doesn’t necessarily mean you’ve been reckless or negligent — but designing an imperfect system and walling away others from fixing your mistakes makes you a colossal asshole and a poor custodian of your users’ trust.
There’s been a lot of reporting around the US about how governmental officials and organizations are utilizing newer technologies like facial recognition, predictive software, and machine learning. What lessons do you hope people will glean from reading a book like Attack Surface?
I think we spend too much energy thinking about what technology DOES and not enough thinking about who it does it FOR and TO. It’s one thing to use predictive policing tools to empiricism-wash racist policing practices (the way they’re used today), but you could also use the same tools to project the year’s policing data into the future to see if there are subtle patterns of bias that your police reform program has not caught.
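As a concrete illustration of that second use, here is a minimal sketch of the kind of bias audit Doctorow is gesturing at: take a model’s output and compare flag rates across groups. The column names, scores, and threshold are all hypothetical, not drawn from any real predictive-policing product.

```python
# Hypothetical bias audit of a predictive-policing model's output.
# All column names, scores, and thresholds are invented for illustration.
import pandas as pd

# Imagine `predictions` holds a year of model output: one row per person,
# with the model's risk score and a (coarse) demographic group label.
predictions = pd.DataFrame({
    "risk_score": [0.9, 0.2, 0.8, 0.6, 0.7, 0.1],
    "group":      ["A", "B", "A", "B", "A", "B"],
})

# Flag everyone the model would direct police attention toward.
predictions["flagged"] = predictions["risk_score"] > 0.5

# Compare flag rates across groups: a large gap suggests the model is
# reproducing (or amplifying) bias baked into its training data.
rates = predictions.groupby("group")["flagged"].mean()
print(rates)
print("disparity ratio:", rates.max() / rates.min())
```

The same scoring machinery drives both uses; what changes is who the analysis is done for, and to whom its conclusions are applied.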
A good technological future doesn’t just need well-made technology: it needs technological self-determination (the right to decide which technology you use, and how), and pluralism (through which decisions about technological design, implementation, and use are widely diffused and not vested in the hands of a small clutch of tech execs or their captured state regulators).
Are you optimistic for the future?
I think optimism and pessimism are flip-sides of the same coin, which is fatalism. Optimists believe things will get better irrespective of our actions, and pessimists believe they’ll worsen, regardless of what we do.
I’m hopeful. Hope is the belief in human agency — that we, working together, can navigate our way to a better future, through our hard, committed, ethical collective labor. Hope does not require that you be able to chart a course from today’s world to a better one: merely that you can identify a *single step* that you can take towards that world, because from that newly attained vantage point, you may well spot another step, and then another.
A belief in “optimism” is a belief in humans as pathetic, tempest-tossed detritus in the winds of history. A belief in “hope” is a belief in humans as having agency, able to steer their way to a better world.
Attack Surface is available for pre-order.