Towards a New Ethical Foundation for Computer Programmers

In computer science, we struggle with some pretty complex ethical questions every day — is it possible to create an AI that isn’t a Nazi? Should we stop child pornography from being distributed on our services? What about hate groups and pedophile rings — the delicate balance between free speech and *checks notes* organized child rape? Is it OK to inflict eating disorders on teenage girls as long as it drives revenue? Should we help oppressive regimes censor their people? Should we hire women, pay them, fund them, and let them start companies, in proportion to their gender? 

You know, the kind of questions for which there are no clear answers…. Lol. 

Sound dramatic? It only seems that way because all conversations about tech ethics are filtered through soft light: we treat ethics in Silicon Valley like ephemera, meaning nothing, abstract, open to interpretation. It’s always “bias in algorithms” and “representation in leadership” and “decentralizing control” and “democratizing information” and “the consequences of surveillance” and “the balance of free speech”. Far from the realm of materialism, ethics are measured in “giving back”, “passing it on”, “mentorship”, “employee benefits”, “hustle”, “being a connector”. None of it has any meaning, any materiality. 

The consequences of our products, the impact on users, on countries, on societies, on the environment, are presented as essentially mysterious, with some kind of organic quality — as if our code is just exploratory, experimental, and we are simply putting it out into the world to see what it does, like one of those little pills you had as a kid that bursts into an animal-shaped sponge when it hits the water. What happens next, after it hits production, the results, we simply observe but have no relation to. We don’t consider ourselves responsible, and to actually map a material result back to an individual or a system is considered impossible, what with all the complexities; materialist analysis is barely even attempted by reporters. We vaguely observe the negative effects of our industry, but treat them only as data, disinterested in their meaning. But data is good. And thus, so is the impact. 

What pisses me off is how we get by with such rank anti-intellectualism and anti-materialism: we act as if one set of acts is not related to the acts which immediately and causally follow it. The refrain “well, we can’t predict human behavior or control what our users do” is sacrosanct, and we act as if the negative trends and effects that emerge are simply the mysteries of the human user (who we just coincidentally know more about than anyone in their own life) and the complexity of the system (designed specifically by us, to specifications from our people). 

This ethical separation of work from its results is actually foundational to the industry. During the Manhattan Project, even the most renowned physicists and computer scientists in the world said that scientists simply should have no quarrel with what is done with their science; their contribution is only the science itself (here we see science and technology framed as neutral), and the use cases they are building for are for others to decide — politicians, ethicists, lawyers, pretty much anyone but them. Joseph Rotblat, who left the project on moral grounds, said that “scientists with a social conscience were a minority in the scientific community… The majority were not bothered by moral scruples; they were quite content to leave it to others to decide how their work would be used.” 

Sound familiar? This is essentially the axiom of the technology industry today: an axiom that wears the mark of over 200,000 people, the vast majority of them innocent civilians, killed by American computer scientists, engineers and physicists. They knew they were building a mass-death bomb, they knew it was going to be used to create mass death, and they knew it would open the door to mass death forever more. The literal inception of the possibility of a war that would kill everything on earth was the Manhattan Project, the majority of whose contributors claimed absolutely no moral implication for themselves even as the material reality of dead children was inherent in everything they did. 

The people who built the bomb were, in fact, able to claim that they weren’t responsible for it, and to occupy a place of political neutrality, which they viewed in and of itself as ethical: the remove of the scientist, maintained to ensure the integrity of the experiment, is the moral factor, not the result. 

This ethics of non-ethics, of ethics as stepping away from ethics, has become a defining feature of our field. To stay in the abstract, lofty, and ambiguous is in fact virtuous. Anything that looks at technology in a critical and material way, i.e. how it affects the literal lives of innocent people, is “unserious” in today’s VC parlance. Anything outside of the technical implementation, including its known and direct results, even the ones being aimed for, is not the ethical responsibility of the computer programmer or the executives or the VCs — after all, they are simply producing a good or service for a customer (who just happens to be the war machine) or for “innovation” or whatever the latest buzzword is. The ripple of responsibility fades into oblivion along with countless lives. 

We are at the point where software engineers in the Valley won’t take responsibility for *any* deleterious effect of their code, including and especially when they are specifically building for that deleterious effect. Much like the team on the Manhattan Project, we build bombs, don’t know if they’ll blow up the entire world, and sit back to watch in our sunglasses. 

Oh, I’m being dramatic. Oh, it’s not like that. 

Okay! We act as if we don’t work in a field where death is a direct result of the work of many startups, big tech firms and ESPECIALLY venture capital firms. But we do. All the time. Our drones are in wars right now. The social media apps consistently drive kids to suicide. The surveillance systems we create mark mortality for so many people of color. The wealth gaps that our companies create impact maternal and infant mortality rates. The gentrification of cities kills people in poverty through myriad causes. Our weapons-grade vehicles and police command centers make it easier than ever for cops to hunt down and kill people. We made the surveillance towers they are using to find children, separate them from their parents and sexually abuse them. Our field performs all of the IT for the fucking CIA, which is constantly killing people and couping socialist and communist countries. Grow the fuck up.

How many people has Andreessen Horowitz alone killed with Anduril ALONE (deployed at the US/Mexico border, in use by the US military and operating directly in Ukraine)? Anduril is the leading weapons company in the technology space, and as this “vertical” grows, and a16z doubles down on weapons companies via “American Dynamism”, why aren’t we getting a confirmed kill report from Marc Andreessen when he goes on Joe Rogan? And all of this is just a very quick overview that doesn’t even look at In-Q-Tel (the CIA’s VC firm), which venture capitalists like those at a16z have been getting into bed with on deals forever. 

But we never talk about it: technology’s death toll, which should be our moral rubric, or at least the start of one. It doesn’t even exist. Many other industries deal with the impact of their work on death tolls all the time — the car industry thinks about auto fatality, the medical industry about human error, hell, even the fucking food companies, what with strokes and heart attacks and food poisoning. But to suggest that we in tech … do that … (kill people) … and do it …. Often…. And do that… as a core and growing part of our industry…. ?? 

Ridiculous! 

In the face of this reality, and because we are so fucking confused about ethics, so incapable of having any kind of nuanced discussion, I propose a very simple, clear, material ethical code for computer scientists: 

Don’t kill people. 

It’s not hard to draw the line: if your code is being used to kill people, that’s on you. Many of you are out there coding things and saying: well, the military’s only ONE of our customers, only SOME of these drones will be used to kill children, the CIA is just a channel to federal funding…. But a material analysis doesn’t weigh everything equally, least of all your personal enrichment against the lives of innocent children. The death that is created will and does eclipse any other use case or application. 

History will not say: they made drones, and they were used to slaughter tens of thousands of innocent children… but a lot of their drones were used for making movies and taking aerial shots for real estate ads, so it’s fine. 

And that is because the moral balance is obvious. History will say: They killed so, so many children. When we talk about something that your company made, materialism doesn’t give a shit if your code was used for things *besides* child murder. That is the moral argument of an absolute dunce. What will matter is the dead bodies. Not your other “enterprise use cases”.

But we’ll never have that rule, that “don’t kill people with your code” rule, because our industry is nested up with the feds and the cops and the war machine like a snake den. The CIA has a VC fund, the military dudes are constantly in and out of all the startups, FBI agents “retire” and go into some “data” company, there’s a constant revolving door between Silicon Valley and the defense industry. The fucking Domain Awareness System? The weapons companies sit perched outside every major computing school, recruiting 18-year-old kids to come work for them. Recently I saw Girls Who Code, which targets girls as young as middle school, announce a sponsorship from fucking Raytheon. Think of the Bill Gates Center at CMU, or SEI hijacking Tor on fed orders. And MIT worked with Epstein, and that is all tied up in the feds too. 

The shadow of death over our industry, infested by war machines and war architects and war barons like Marc Andreessen and Palmer Luckey, grows deeper. And remember, that’s just what we see out in the open, plain as day. Imagine what they are doing that we don’t know about. 

You know it, and I know it. This industry is PUMPING surveillance weapons, autonomous vehicles, drones, police gear, data and intelligence, AI and cyber warfare into every part of the weapons industry and the intelligence community and the policing superstructure. We’ve been running all of their shit for yearrrrsssss. We are the IT for the military, the CIA, the weapons companies, period. AWS built an entire datacenter just for the CIA, fucking ten years ago. Cmon, bro. Snowden was like a decade ago too. All of these things have only gotten way worse. 

If you ripped tech out of the system, the military would most likely fall immediately and the police superstructure would be de facto abolished. Of course, those two things are hugely profitable for us, and we fuse more and more with the weapons and war machine every day, instead of untangling the snakes and cutting their heads off. 

Why aren’t we tracking how many people we kill? As more and more weapons companies proliferate to sell to the forever war, how will we know what our death count is when we won’t even acknowledge death as the result of our technology for killing people? 

I’m not being morbid. We’re entering a whole new stage of killing people. A16z has created its “American Dynamism” portfolio, a massive build-out of tech weaponry; they are supported in this by right-wing extremist Peter Thiel, who is similarly funneling loads of money into weapons companies, and by Palmer Luckey, another right-wing extremist. The latter is a serial killer and the founder of Anduril, the most advanced weapons company in any VC’s portfolio. It is the crown jewel of the a16z military.

“American dynamism” is a very nice set of words for some very scary things: the industry shifting more and more to weapons production, tech becoming even MORE embedded in the war machine, tech becoming the true architect of war itself, fighting its own wars… but who cares, because it’s opening the floodgates of defense contracting cash that will flood the Valley and help VCs take over the world. To talk of these things is “unserious”. 

And, by the way, don’t be dense: it’s not that I believe no one ever needs to die. It’s that all of this will be killing on behalf of technofascists, Nazis, capitalists and colonizers, right-wing extremism and the American imperial war machine; it offers zero rescue from oppression to anyone, and no neutralization of any threat except the threats to power itself.

The rise of the American Dynamism “movement” is terrifying and we should move immediately to stop it. Because the thing is, if you can’t start with a basis of “our code isn’t going to kill people”, if you don’t even have that baseline — when it is de facto accepted by our industry that our field kills people — when some of those people are innocent children like the ones Anduril is rounding up with surveillance towers — 

well, in that case, everything is permitted. When money and power and bloodlust are morality, when being a billion times richer than anyone else becomes the reflection of a great person, when pouring lavish office perks on over-paid employees is generosity, when any startup doing anything at all is “changing the world” — under these conditions, everything is permitted and understandable. 

The entire moral hinge of our industry is on this issue: are we an industry that kills people, or not? 

And our answer to that question is yes, we are, emphatically yes, we are an industry that kills people and kills them faster and cleaner than anyone has ever killed, the pioneer of totally autonomous killing. And nothing moral can ever come out of that foundation. This — this willingness to kill with our fucking scripts — is the root of the ethical death of our industry, the ethical rot you see all around. It’s everywhere. While this continues to be the basis of our industry, nothing good will ever grow from it, and all our efforts are in vain. 

Who will speak against this? The many people in our industry who rose to prominence in the DEI movement now work at every fucking evil, terrible company you can imagine, and are in fact going SUCCESSIVELY from one irredeemable company to the next. This is another example of that fundamental moral separation of our actions from their consequences. There is no moral consistency whatsoever; the “women in tech” movement as it stands produces meaningless spaghetti logics such as “feminist war criminal” and “feminist serial killer” and “warlord GirlBoss”. 

Working against killing in tech won’t be effective through a lens of “women in tech”, because the entire women in tech movement was taken over by intelligence agencies, weapons contractors and major banks that profit from these wars, FAANG companies and surveillance centers. There is no movement against mass death within tech that is outside the war machine’s influence. We require a fundamentally different approach to tech, but that’s for another article. 

I often get concern-trolled by computer programmers who claim: well, I have to make money, don’t I, it’s not my fault if capitalism forces me to work and get a job… or some other such ridiculous argument. When you tell someone it’s not OK that they work at a FAANG company or a weapons startup or another tech group, they will immediately act like you suggested they put everything they own on the curb and start hitchhiking. This is the moral depravity that is the defining feature of our industry as it stands. 

The thing is: yes, our industry is fully infiltrated by and fully part of the war machine, and taking a greater and greater role in it, but there are still plenty of places to work that aren’t war-criminal companies. We don’t lack for programmers who are willing to put their morals aside to work at fucking Facebook or Palantir; we lack for programmers who are willing to take a SINGLE ethical stand, with the very minimum of requirements: don’t kill people. 

If you’re not willing to take responsibility for the consequences of your work and what is done with it, you shouldn’t be working at all. It is the height of ethical cowardice to push out responsibility: it’s not what *you* wanted, it’s the heads of the company, it’s the culture, it’s the venture capitalists, it’s the system. Yes; but it’s ALSO you. Materially you. You killing people. Save your soul, and fight back before it is too late. If you kill people with your code, you will burn in hell — please consider notice served.

Our industry is literally built on an ethical platform that produced the motherfucking atomic bomb. That is how important it is for you, and all of you, all of us, to resist this. Because,

Was the Manhattan Project not enough warning for you? 

How many people will YOU kill this year? 
