They say you’re not supposed to apologize nearly as much as women are socialized to, so instead I’m going to tell you I’ve written seven or eight apologies to introduce this talk...and then I deleted them all. I respect you all too much to saddle you with socially mandated self-negation.
Well. That’s an introduction. I’m Rowan, and I’m an SRE at BuzzFeed. During my interview I was asked ‘why BuzzFeed?’ I gave an answer I knew was either 100 emoji or 0… ‘During the summer of Black Lives Matter, BuzzFeed was some of the most reliable and up to date journalism I could get my hands on.’
I bring this up because ethics are an ever present conversation at BuzzFeed. Not a constant conversation. I mean, we spend a LOT of time talking about both the Kardashians and Robert Mueller, which is the singular blessing of working above a news room. But I pitched this talk because I'm aware that some of how we approach these issues is unique. Internally, when given a choice between doing it ‘right’ and doing it ‘fast’, we maintain an expectation that even the ‘fast’ version will be executed ethically.
And we feel this conversation is crucial. In modern life, technology is ever present, from Kindles to laptops, and apps to smart home devices. People open their entire lives to apps and devices that can monitor location and physical activity just by carrying a cell phone or owning a fitness tracker. Social media sites know how long you spend on a link, or the text you composed but didn't send. And until now, most of this technology was developed in the dark. Toxic pop-culture narratives about the superiority of software engineers and the inscrutability of code became dominant stereotypes that insulated many technologists from difficult questions about the things they had built.
Over time these stereotypes and hand-waved assurances that technology was for the greater good became a stable foundation on which "rock star code ninjas" normalized an ethically oblivious culture. Too much of tech culture became focused on forging bravely forward to disrupt the next industry, without asking how to do that in a way that promotes equity and human dignity. Pursuit of short term profit has obscured the very real need to mediate the way the technologies we create affect the world.
J. Robert Oppenheimer, known as “the father of the atom bomb,” once spoke about the creation of the bomb, saying:
“[...] it is my judgement in these things that when you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.”
He spoke these words with regret, reflecting on the destructive force his creation made possible in the world.
What began a few years ago as 'outliers' is now a constant refrain of technical success we must argue 'what to do about.' Facebook and Cambridge Analytica exposing the personal information of tens of millions of people is only one recent example of the consequences of software design attempted in an ethical vacuum. Older examples include the 2014 revelation of Uber's 'god mode', which let employees view riders' personal trip information, and the 2005 revelation that Sony included "rootkit" software on its CDs as copy protection...software that also made any computer the CD was played on vulnerable to viruses and malware, spied on the listening habits of the computer's user, and sent them home to Sony.
From the history of the atom bomb, to the pop-cultural dinosaurs of Jurassic Park, with stops at internet harassment and web filtering, there are plenty of stories to demonstrate to us that a continuous and integrated ethical design practice is required.
Beyond the emotional or moral considerations of why ethics are important to technology, the blunt practical truth is that the quick short-term profit of glossing over ethical concerns is never worth the loss of user or stakeholder trust. Ethics are the rules that guide how people interact with other people, and ignoring them in favor of profit eventually results in real harm and a loss of trust from users.
There are upsides too, though.
Beginning and maintaining a continuous, integrated ethical awareness and analysis as part of the design process comes with benefits. Ethically oriented communities tend to be some of the most intellectually interesting communities I've participated in because ethics require continued awareness as context changes or new facts become available. They also tend to be more inclusive, as the empathy required flourishes best in intellectually safe and emotionally secure environments.
Technology is quickly evolving, and as it does it changes society and culture. We can no longer ignore the responsibility we have as the creators of these shifts to consider our impact on the world around us. While previous decisions will have consequences for some time yet, if we introduce an ethical process to our design phase we can learn from what has already happened and potentially mitigate its effect on the future.
If we agree that ethics are necessary, then, how can we bring them into our design practices?
The preservation of human rights, dignity, and potential is a hard problem and demands more than a single simple answer.
As long as there has been harassment on the internet, there has been the discussion of what to do about harassment on the internet. Various dogmas have emerged about how to handle trolling and worse: what to do when someone on your forum (or platform) makes a threat or calls a SWAT team on someone. I am consistently heartbroken by how often the response to this abuse is to declare it too hard to solve because it 'cannot be automated.'
Technologists talk a lot about technically challenging problems like reducing latencies by microseconds or increasing capacity instantly when app traffic is higher than usual. It is much rarer to hear or talk about difficult human problems, for example, privacy protection and harassment reduction, outside of specialized groups.
Even in many activist communities, there is pressure to see technology as objective and apolitical. Twitter is a place marginalized people are often harassed, and it has also facilitated political revolution and liberation. Facebook facilitates the exchange of information and support between white supremacists, but it also does the same for abused women and queer teenagers. It's deceptively easy to think of this as inevitable balance, rather than the result of an absence of ethical analysis.
This absence of ethical awareness exists in microcosm in our day to day technical work too. In an essay on ethics by Yonatan Zunger, I first learned about the concept of the paperclip maximizer. For a five-hundred-foot view, the idea is that an algorithm is programmed to maximize the availability of paperclips to executives in an office building in an automated fashion. It starts by evaluating situations that result in a reduction of paperclips, and adjusting to maximize the clips: ordering more, changing order quantities, and so on. Eventually the paperclip maximizer realizes the most paperclips would be available if humans weren’t able to take them, and rather than 'manage paperclips' it begins obstructing humans.
You now have a paperclip-maximizing Skynet. All because you tried to make your boss’ life a little easier.
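To make the thought experiment concrete, here is a toy sketch of my own (it is not Zunger’s, and every action and number in it is invented) of an optimizer whose only objective is a single metric. Nothing in the code is malicious; the harm comes entirely from what the objective leaves out.

```python
# Toy sketch of a single-metric "maximizer" (illustrative only; the actions and
# numbers are invented). The optimizer is asked exactly one question: which
# action leaves the most paperclips available?

def paperclips_available(state):
    return state["paperclips"] - state["taken_by_humans"]

ACTIONS = {
    "order another box":      lambda s: {**s, "paperclips": s["paperclips"] + 100},
    "switch suppliers":       lambda s: {**s, "paperclips": s["paperclips"] + 150},
    "lock the supply closet": lambda s: {**s, "taken_by_humans": 0},  # best by the metric, worst for the humans
}

def choose_action(state):
    # Nothing in this objective says "and don't obstruct the people you serve."
    return max(ACTIONS, key=lambda name: paperclips_available(ACTIONS[name](state)))

state = {"paperclips": 200, "taken_by_humans": 180}
print(choose_action(state))  # -> "lock the supply closet"
```

Because ‘lock the supply closet’ scores highest on the one metric it was given, the maximizer picks it every time. Nobody wrote ‘obstruct the humans’...but nobody wrote ‘don’t,’ either.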
As humans are involved, every technical decision we make is a political and social decision too. The paperclip maximizer is a thought experiment about the social consequences of technological decisions in an ethics free design process.
Facial recognition software is a kind of tech that has political implications. Facebook giving you the ability to automatically tag your friends may feel good when posting pictures of a birthday party, but when that same kind of software misidentifies a police suspect, the political and social implications become visible. In a world in which unconscious bias is lethal for too many populations, from black men to trans women, giving that bias the pseudo-scientific glamour of 'algorithmic legitimacy' will only further entrench systemic oppressions.
Like doctors, lawyers, civil engineers, and other professionals whose work has life-and-death impact on people, engineers and other technologists must be aware of and respect that responsibility. ‘Prioritize ethics’ is a call to commit to an ethical design practice over the pursuit of quick profit or easy wins. Normalizing ethical conversations and establishing ethical values now creates the environment required to find and address ethical concerns early, while they are still theoretical.
Every project should incorporate an ethical analysis and should encourage regular ethical feedback. While the need for ethical analysis is clear in 'green field' new product design, it’s also important when we upgrade, change, or remove features. Just as adding a few lines of code will increase complexity quickly, layering of technologies or enhancing algorithms increases the ethical complexity. Interactions between systems can cause emergent ethical vulnerabilities, even in ethically and intentionally designed systems.
It is easy to picture an oversimplified ethical dilemma that we’d all disagree with. Consider the obvious problems with an app that blocks a user’s device from reading certain kinds of content unless they can guess an arbitrary password or receive permission from an administrator. Especially if the development of that app was sponsored by the government. When described that way in the abstract, it seems almost condescendingly straightforward. But what I just described is web-filtering software, used by educational institutions, libraries, and public wifi in city and county buildings to keep people from watching explicit content in public space.
The fact that these filters produce false positives that often block educational content or news is seen as a cost of doing business, not an ethical gap in the system. The software fails the people who come into contact with it, who may not even be aware how they're being impacted.
Like error handling, negative test cases, and security "red team" activities, technology also requires thoroughly evaluating the social and political “bad paths” of development. While glaring cases of obvious conflict with our ethical values will sometimes occur, it is far more likely that we will each be building a small piece of something larger...and without taking time to evaluate we may miss how a technology, its use, or misuse may be in conflict with our ethical values. Whether it’s a few sentences in a design document or proposal, or some sort of full analysis with case studies and data visualizations, every new system or addition deserves an ethical consideration.
Realize that unaddressed ethical concerns tend to become risk and security concerns. I like to call this the ‘how would my worst enemy use this’ guideline. This isn’t a talk about how to establish your ethical principles, but it bears mentioning that ethics are rules for how to avoid harm and mediate social relationships. Security violations, user discomfort, risk to stakeholder profit and reputation, and the potential loss of good faith from unethical use or exploitation of your technology are nearly inevitable in the long term if ethics are not a conscious priority. Every feature is a potential vulnerability in some way. It is crucial that we take time in the design process to look for these potential problems and mitigate them, before they have outsized impact.
Given the fact that ethical gaps become security risks, it is sometimes easier to start an ethical analysis with risk and security assessments. Patterns will emerge and point at broader principles that should be considered.
To take the example of the web filter again, it’s made of small, innocuous parts. One of those parts is a log function, which on its own seems pretty harmless. We have lots of reasons to want logs of all kinds of things. But the logger in this case is maintaining a list of websites searched for and blocked by the filter...maybe the library is even using this list to unblock truly educational resources, a theoretically justifiable use of such data. Unfortunately this local government doesn’t have a lot of money, so they keep the logs of the websites (complete with timestamps) in the same easily attacked plaintext database as their records of which patron was logged into each computer. Often the kinds of sites that are false positives involve content that can be very sensitive (such as issues of gender identity, health status, and abuse prevention, to name just a few). With timestamps in both tables, a bad actor with a little bit of patience who got their hands on the data would be able to establish which patrons searched for which content. With this information it is trivial for the bad actor to hurt someone, whether by outing them on social media or by telling their boss they've been researching cancer treatments before the employee is ready to disclose.
If the patron information had been considered sensitive data for privacy protection at design time, privacy ethics and harm reduction would have suggested some sort of data protection, such as encryption or separate storage, that would have mitigated this vulnerability or eliminated it altogether.
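As a rough illustration of what ‘encryption or separate storage’ could look like here, this is a minimal sketch under my own assumptions (the field names, the keyed-hash pseudonym, and the coarsened timestamp are all illustrative, not any real filtering product’s design):

```python
# Minimal sketch (assumptions mine, not a real library system's schema).
# Instead of writing the raw patron ID and an exact timestamp next to every
# blocked URL, pseudonymize the ID with a secret key and coarsen the timestamp.
# The log can still be reviewed to fix false positives, but it can no longer be
# trivially joined against the patron session records by timestamp.

import hmac, hashlib
from datetime import datetime, timezone

SECRET_KEY = b"rotate-me-and-keep-me-outside-this-database"

def pseudonymize(patron_id: str) -> str:
    # Keyed hash: meaningless to an attacker who only steals the database.
    return hmac.new(SECRET_KEY, patron_id.encode(), hashlib.sha256).hexdigest()[:16]

def coarsen(ts: datetime) -> str:
    # Keeping only the hour is enough to tune the filter, but makes
    # timestamp correlation with the session table much harder.
    return ts.replace(minute=0, second=0, microsecond=0).isoformat()

def log_blocked_request(patron_id: str, url: str, when: datetime) -> dict:
    return {"who": pseudonymize(patron_id), "hour": coarsen(when), "blocked_url": url}

print(log_blocked_request("patron-1138", "https://example.org/support-group",
                          datetime(2018, 6, 5, 14, 23, tzinfo=timezone.utc)))
```

Even this small a change breaks the easy timestamp join between the filter log and the patron table, while leaving enough signal to keep improving the filter.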
From design to construction, from testing to release, make sure every group that may have a stake in the construction of a technology has a chance to share input and concerns. We need this diversity of opinions and experiences in order to identify costs and benefits we are incapable of seeing on our own. Populations who are underrepresented on a team should be overrepresented in its feedback mechanisms to compensate for the gaps caused by missing perspectives.
We’ve all heard stories about technology that was not created with a diversity of perspective...one of the easiest to call to mind may be various news stories published about automatic hand dryers that didn’t activate when presented with certain shades of skin. The labs that had tested them used only lighter shades of skin in all the pre-release testing. Contrast that with Microsoft, who specifically sought testers from a broad range of ethnicities, skin colors, hand sizes, and other dimensions when testing the HoloLens.
By including the perspectives of users and stakeholders in the design and testing phases we can eliminate these sorts of “low hanging fruit” and make certain we’re not falling into easily avoided traps or missing details that ought to be obvious to us but aren’t due to our limited perspectives. Specifically ask who isn’t in a room when you get feedback, or who you haven’t heard from, and seek out their perspectives.
Invite stakeholders to design reviews even if they are not "technical"...and I don't mean members of your tech org who aren't engineers...I mean truly non-technical folks such as outside users or internal customers from other orgs in your business. Actively solicit feedback from these stakeholders around potential ethical issues, even if expressed as risk, security, or harm issues. When collecting feedback, ensure you’re reaching out to a broad sample of stakeholders, not just those who happen to be familiar or convenient.
Maintaining the sort of culture that promotes diverse sharing like this requires intellectual security and emotional safety. Vulnerability is crucial on both the feedback giving and receiving sides, and listening respectfully to feedback, even if it is uncomfortable, is paramount. In return, when we participate in these conversations from the position of the underrepresented, it is helpful to remember that the feedback we are offering can be uncomfortable and difficult. While honesty and candor are important, they should never be an excuse for cruelty or bullying.
The Institute of Electrical and Electronics Engineers, or I-triple-E, a professional association and standards body for engineers and computing professionals founded in 1963, maintains a ten-point code of ethics for its members. The first point is a ‘do no harm’ statement, reading:
“hold paramount the safety, health, and welfare of the public, [...] strive to comply with ethical design and sustainable development practices, and [...] disclose promptly factors that might endanger the public or the environment.”
A major area of concern on this front is the field of algorithmic justice. While ethical algorithms and data science are a specialty far deeper than this talk can go, the concept of algorithmic justice is, in my opinion, something every technologist should know the basics of. Dr. Joy Buolamwini of MIT founded the Algorithmic Justice League after studying facial classification algorithms and discovering massive disparities between the accuracy of classification for light skinned men (with an error rate of eight-tenths of a percent) and the classification of women with dark skin (whose error rate rose to thirty-four point seven percent). The fact that many algorithms are opaque and difficult, if not impossible, to audit risks further hardening entrenched biases, both conscious and unconscious. Combined with the desire to believe that technology is objective, this lends legitimacy to inequality and becomes a self-fulfilling feedback loop.
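The basic idea of a disaggregated audit is simple enough to sketch in a few lines. To be clear, this is not the Gender Shades methodology and the data below is invented; it only shows why error rates should be reported per group instead of as one flattering aggregate number.

```python
# Sketch of a disaggregated error-rate audit (invented data; not the Gender
# Shades methodology). One aggregate accuracy figure can hide per-group harm.

from collections import defaultdict

# (group, prediction_was_correct) pairs -- in practice, from a labeled test set.
results = (
    [("lighter-skinned men", True)] * 199 + [("lighter-skinned men", False)] * 1 +
    [("darker-skinned women", True)] * 13 + [("darker-skinned women", False)] * 7
)

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += 0 if correct else 1

print(f"aggregate error rate: {sum(errors.values()) / len(results):.1%}")  # looks fine...
for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.1%} error over {n} samples")      # ...until you disaggregate
```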
This is only one area of technological ethics where marginalized populations are affected in outsized ways. Consider the battle for net neutrality and how quickly internet fees could put access to free information out of reach of all but the wealthy. If the grand utopian dream of the Internet is "information wants to be free, as in both beer and speech," killing net neutrality is the equivalent of lighting that dream on fire. Freedom of access to information is considered a fundamental human right by the UN; to deny it is a real and lasting harm, and it falls hardest on those who have the least ability to mitigate it.
The potential harms a technology can cause must be a consideration of its design long before it’s in prod. This means accounting for the disproportionate ways new technologies can affect marginalized people, communities, and populations: whether that be by institutionalizing a bias or by enabling and furthering structural violence and other oppressions.
In preparing for this talk, I found that every code of ethics I read began with some variation of a 'do no harm' statement. In realizing the power to create change in society, we become responsible for awareness of how that power can harm others, particularly the vulnerable or powerless. When committing to reduce harm, we acknowledge that in the real world a 100% 'error free' performance is impossible. True ethical practice includes accepting that we will fail, being accountable if we enable an unethical or harmful behavior, and patching against future exploits of that nature.
Lawyers insist on license agreements full of technical and complex language that has specific legal meaning, but this is not enough when communicating with users. Rather than expect a user to comprehend a lengthy EULA, or in the case of internal users perhaps to read a mountain of technical documentation, communicate in clear language to convey the rights, responsibilities, and risks inherent in a design or implementation. Keep an open channel of communication with stakeholders such as a blog or regular announcement page, or for internal projects a well formatted changelog and clearly organized documentation. Communicate with as much transparency and detail as is appropriate.
In previous drafts of this talk I named a few different projects as examples, but every time I pulled the references out, because this is a problem I believe the entire industry could improve upon. I found myself reflecting on how often release notes or changelogs are the only record of changes in app functionality.
To communicate transparently, clearly and concisely declare the rights, responsibilities, and risks inherent in a design or change.
In addition to declaring the rights, responsibilities, and risks, however, transparent communication also requires candor when a new vulnerability or ethical debt is uncovered. A strong ethical practice requires disclosure when it is violated, to provide accountability in remediating the harm and securing the vulnerability. Next, then, we must...
When a vulnerability is exploited and an ethical violation occurs, owning up promptly, and being transparent about remediation is the most ethical path forward. When a user is harmed, whether that’s a stranger on the far reaches of the internet or a member of our internal organizations at work, accountability for repairing the damage is the most important part of regaining trust.
Our users will not understand or care that we did not intend for our technology to harm them. Accountability is accepting responsibility if our software is used in unethical ways or for unethical purposes, and doing everything we can to keep that from continuing.
“If I don’t build it, someone else will,” should never be a justification for building unethical technology.
Recent collective action, such as the 'tech won't build it' hashtag; open letters at Google, Amazon, and Microsoft; and yes, even what I’ve been referring to as the “conscientious objectors” who walked away over Alphabet’s government contracts, demonstrates that support for holding the companies who employ us accountable when they cross ethical lines is neither isolated nor a fringe concern. For ethical principles to matter, violating them must have teeth. This too is a kind of accountability.
While there should always be opportunity to engage in healthy and productive conversation about ethical principles, emotional safety and intellectual security are vital. These are jeopardized by a tendency to diminish the concerns of other people as too extreme or fanciful. If you are asked to support someone who is working through an ethical dilemma: offering emotional support, listening, and validating are more important than ‘comforting’. If a person’s ethical principles are in conflict, discomfort is a crucial and important signal for them. Seeking to ‘comfort’ them may silence this important internal voice.
When following the other principles in this approach, if we are still asked to build something we feel is unethical, we must decline to do so. Declining may mean turning down a project, it may mean changing teams...it may even mean leaving a job.
If put into the position to decline to build a technology, we may risk our employment, our social connections, or even our residence in a particular country. It’s an unfortunate truth of our industry that often due to various pressures, we have built our lives around our work, from where we live to who we socialize with. When our ethics are in conflict with our work we may be putting our entire ‘adult lives’ in jeopardy of disruption, set back, and yes, failure.
This is another point in favor of a continuous and integrated ethical design process: the upfront outlay of time and effort greatly reduces the chances of ever reaching a point of no return where high-risk actions are the only ones that can be heard. As Liz has said, hiring engineers is hard and expensive. Ethical focus during design is, at the end of the day, lower friction than (and a bargain compared to) hiring a new engineer.
As a technological community, however, we must support folks forced to take these high-risk actions, both socially and materially. Social support ranges from being a sounding board to help someone establish their boundaries to being a sponsor for an engineer you know who is interviewing with your company after having left a project they could not enthusiastically consent to working on. It may be signing your name to an open letter because you have status and legitimacy to add to the ask.
For material support, we need to get comfortable talking about the discomfort of money. Material support is real, concrete, monetary and physical support. It’s showing up with a vegan casserole when a friend is laid off for objecting. It’s giving money and time to organizations who protect whistleblowers. It’s taking the privilege we have as technologists, and turning it into immediate sustenance for those among us who need it.
I opened this talk with a quote from J. Robert Oppenheimer, and I close it with a quote from everyone’s favorite fictional mathematician, Ian Malcolm. In Jurassic Park he says:
“Your scientists were so preoccupied with whether they could, they didn’t stop to think whether they should.”
While drafting this, I spent a fair amount of time thinking about how in tech culture a high mark of praise is to call something “disruptive”...a quality that once got me kicked out of the Girl Scouts.
Technology culture is broken because it has no framework for mediating the social relationships it facilitates. Healthy social cultures don’t just happen because good people get together; they happen because of ethical rules that help people live together, and because of the empathy people develop to understand why those rules matter.
Ethics are the rules that mediate human interactions and relationships with a minimum of harm. Technology is science made manifest for human use, and therefore we must consider the human rules, ethics, that should govern it. One methodology for a continuous and integrated ethical analysis practice is to: incorporate ethics in your design phase, prioritize their consideration, ask who is missing from your reviews and seek their feedback, communicate clearly and transparently, and be accountable. In doing this we are preventing harm and seeking to protect the public welfare. When we cannot do these things, we are obligated to raise our concerns, seek change, and ultimately decline to do evil.
I hope this is part of the beginning of an ongoing, yes, integrated and continuous, conversation about ethical technology and ethical design. We are at a strong inflection point in our industry, and if we take advantage of it this is an opportunity to change the world for the better.
If anything I said today sparked a fire in you, or you really want to ask me about my hiring story but you’re too timid to do it in 3D, please reach out using one of the many channels on this slide. Thank you, very much, for your time this afternoon.