This post is written in commemoration of one who was sent out to protect and serve: a member of the security services who has just served their last week on the force. While they are certainly not the only one of their distinct breed, their record on the front line provides an example we can all learn from. I am talking about K-9, the Knightscope K5 security bot, which, in its brief service history patrolling outside the offices of the San Francisco branch of the SPCA, was accused of harassing the homeless and being a general pain in the arse. According to Knightscope, the bot was put into service to “protect [SF SPCA’s] property, employees and visitors”, but its activities were met with a public backlash, leading to its early retirement from duty.

The story offers us two points to consider. First, we should think about what effects we want our work as technologists to have on society. Second, while K-9 has been taken off this particular beat, Knightscope and others like it will be learning from the PR nightmare, working out how to make robot policing more palatable to the public. We need to seriously consider the desirability of automating security services before it is forced upon us.

We have created a world where we perceive that it makes sense to give annual cohorts of young people high levels of technical training, while placing no emphasis on equipping any of them with the abilities to understand themselves, examine their ethics, or reflect on their society. I can count on one hand the number of days dedicated to such topics in my seven years of scientific training, and my graduate programme congratulated itself for supplying even that. This form of education, which does not equip us to tackle messy human problems such as homelessness, combined with the incentives of an economy that rewards profit above genuine social value, results in the development of swathes of technologies that are pointless or even harmful to human experience. The system we have built makes it easier to found, or work for, a company that builds robots to police vulnerable people deemed a problem than to collectively address the issues that cause their suffering.

Once, during a conversation about complex political issues, a friend’s PhD supervisor told him, “Well, that’s why I stick to doing physics. In comparison to trying to sort any of that stuff out, it’s easy!” There is a public perception that science and tech are difficult subjects for clever people only (perpetuated by elitist projections from some scientists and tech workers themselves), but I think my friend’s supervisor’s comment is telling. Sure, building robots is hard work that requires skills some minds take to more readily than others, but the really challenging work is the work we too often shy away from: the kinds of challenges that cannot be solved with clean answers or clever code fixes alone, but require the ability to empathise, compromise, communicate, listen, and ultimately use the tools we have to arrive at the best possible outcomes for every person affected.

Rather than take the path of least resistance laid out in front of us, we can engage more meaningfully. There are thousands of people and organisations working on local and global issues who would welcome us lending a hand, and in turn we could learn a great deal from them to carry forward on our own paths. In Seattle, a group of volunteer data scientists and I are working with a homelessness non-profit, using our skills to support their work. I’m not under any illusion that our contribution is going to solve the problem of homelessness in the city - that will require skilled social work, compassionate politics, and systemic change - but it’s a start, and we have gained a greater understanding of the issue in the process.

And so what of our new fully automated friends themselves? How should we react to them as they are deployed, allegedly for our safety and security? My first instinct was to imagine interactions with robot security personnel as similar to those with supermarket self-checkouts. The pattern is familiar to us all: infuriation at the bot’s unbending adherence to the rules, punishment for minor infractions in the bagging area, being barked at repeatedly while frantically jabbing at a non-responsive interface, turning around in a desperate attempt to catch the eye of a sympathetic human who can carry out the machine’s additional demands, all while being stared at by the next in line, awaiting their turn to be processed. I can’t wait.

The Knightscope K5 is the early-release self-checkout of security bots: clunky and incompetent at navigating the human environment. But advances will be made. Earlier this year, Boston Dynamics showed off its humanoid robolympian, Atlas, doing backflips. It is entirely foreseeable that robotic superhuman chassis will eventually be coupled with the sensing and facial recognition technologies already present in the K5 to provide, among other things, automated policing of our streets. I, for one, do not welcome our new robot enforcers, and it is not just the dehumanising nature of human-robot interactions that disturbs me. I am concerned about who will own law enforcement, about how algorithmic decisions will be overseen, about robotic officers perpetuating racial and social biases as they blindly follow commands set by humans, and about the privatisation and profiteering spreading through civic life, capable of displacing the real public interest. I am worried about these technologies becoming cheap, advanced, and acceptable enough that our open spaces become corridors of automated surveillance and enforcement, serving groups beyond public accountability. We should critically assess and publicly debate these developments at every step.

Human policing is certainly not perfect either. It can be dangerous for everyone involved, is riddled with prejudice, and can also be commandeered by forces with a disregard for the people they are supposed to protect. But is developing robots the answer? I don’t think so, and this brings us back to thinking about what we actually want to achieve with our work. It is possible to argue that, in the short term, crime and disorder can be reduced with more police in one form or another, but surely in the long term, when we aim for the society we actually want, we should be reducing the need for policing. Rather than finding increasingly alienating ways to police low-level crime and disorder, it is time we started working to create an inclusive, cohesive, and more equal society.

The technology sector already wields huge power, and that power is only growing. A healthy civilisation is not one where those with power take the easy way out of problems. It is one where they examine the morality of their actions and are critically honest with themselves about the motivations and effects of the paths they pursue. I’m completely in favour of robots carrying out dangerous or boring jobs that already involve little human interaction, but it is not unreasonable for us to draw lines: to say which jobs are for humans, which are for robots, and which we would rather combine our humanity and inventiveness to remove completely.