The following two pieces seek to bring together two views of the world that I have long been interested in but feared were contradictory. The first essay – Human Science – makes the case that human judgment is fallible and easily influenced, and hence we ought to embrace machines bringing a more mechanistic approach to tasks which would benefit from systematic treatment. The second essay – Human Nature – makes the seemingly contradictory case that human judgment is easily influenced precisely because of a unique ability to care about others, an ability which is underutilized and should be leveraged more broadly across society. Together, these essays attempt to show that the two are complementary schools of thought, each necessary to pursue in its own right.
Human Science: Discovering the lines of code in each of us
Last winter, I went to an evening lecture at the London School of Economics where two authors introduced their new book on the way technological innovation was changing the human professions. Their thesis is recognizable to anyone following the changing face of work: as computers and machines take on ever more routine tasks (and routine tasks, the authors argued, make up a large portion of the jobs done by professionals), humans will either be freed up to concentrate on higher-order activities which cannot currently be done by machines – or may be freed of their jobs altogether.
This process of ‘machinization,’ while job-dependent, generally involves breaking down tasks into smaller concrete actions, for which algorithms can be built to replicate each action in a systematic manner. Machinization may be as visceral as a severe reduction in the number of people at a factory, or as gradual and nuanced as websites which offer expert advice for free (think WebMD or TurboTax), thanks to systems which can store more data and retrieve the relevant information far faster than a human brain. Machines are far from being able to do all of our tasks or answer all of our questions, but at the actions they have mastered, they are often superior to humans. Machines for labeling bottles are unlikely to place those labels at imperfect angles, and the software calculating your tax refund is unlikely to add numbers instead of multiplying them. It is better for humans, in other words, if machines are allowed to take over the tasks they perform better – and which would benefit from a more systematic assessment.
Hence, back at the LSE lecture, my ears perked up when one of the presenting authors described his main objection to such a scenario: “when something requires uniquely human judgment, such as questions of ethics,” he explained, “you will always need to have humans accountable.” As I started to process what this might mean he continued, “for example, trials will always be decided with a jury,” and upon further reflection, “those decisions are too big to not have a human to blame.”
Having your medical prognoses and tax returns done in a more systematic way, in other words, is fantastic – but human fates should still be at the mercy of human fallibilities. With often no track record of – and almost certainly no official qualifications for – deciding people’s fates, juries everywhere arrive at the verdicts which determine those fates.
Imagine a more algorithmic approach, capable of scanning through millions of data points of historical cases, analyzing things like the probabilities of future acquittals due to wrong judgments, the societal influence of those individuals convicted, and dozens of other relevant factors, to provide some reasonable estimate of what the appropriate fate might be – or whether it was appropriate to assign a fate at all. Each case is unique in its particulars and brought to court precisely because the evidence is not conclusive, but surely a less biased approach would be at least complementary.
The author’s afterthought about needing another human to blame for an unfavorable verdict is doubly depressing. Should we not aspire towards a more scientific judicial system, where judgments weigh everything so thoroughly in the pursuit of meritocracy that the only person left to blame for a verdict would be the person being judged? It is an idealistic goal more than a practical one, but unless we strive to eliminate the biases which cloud our judgments, we cannot possibly believe that our judgments are made in good faith.
It was an icy January night in New York City when I received my snail-mail summons for jury duty. This meant a slightly bigger glass of whiskey. I was not thrilled. I knew the subjects of these cases could range from the boring to the disturbing, and the trials could last from days to months. I also knew that it would mean leaving my cocoon of polite professional services, where my decisions had negligible impacts on people’s lives, for the real world of unclear circumstances and very direct consequences.
Going in for jury duty in Manhattan means sitting in a large group of people as various (presumably random) combinations of the crowd are called in to different trials. Despite having my computer with me, it was not easy to concentrate, the conversations of New Yorkers being anything but boring. An older lady sitting two rows ahead spoke excitedly to what seemed to be her granddaughter. “You can’t move to Harlem, Anita,” she spat into the phone, “do you know how dangerous that is?” Anita protested on the muffled end of the line, but grandma wouldn’t have it. “You’re white,” she told her granddaughter, with a certain delicateness, as if this might be the first time Anita had heard such a thing, “and you will be the minority in Harlem.” God forbid, experiencing what it’s like to be a minority! At the front of the room, a young man shook the snack machine. “I put my dolla in!” he said, at a volume which was presumably natural for him but frighteningly loud for most of the patient prospective jurors. “Could you please keep it down?” said a voice from somewhere in the room, the pleader seemingly disappearing as quickly as they had appeared. Another man to my left texted vigorously with what I interpreted as two romantically involved women. His constant use of the backspace button confirmed my fear that he would accidentally send something intended for one lady to the other.
We were finally called up for a trial: we would need to decide the fate of an 18-year-old boy from a poor part of the Bronx, accused of selling a large quantity of cannabis (which, at the time, was not legal anywhere in the U.S.). While I admittedly had no clear idea of what sort of trial I would be interested in, I knew this was the sort I had no desire to be involved in.
The judge and the respective lawyers are allowed to ask questions to eliminate prospective jurors who would be obviously biased, but this tends to capture only a small portion of the pool. If a potential juror doesn’t believe they should serve on a certain trial, they are also allowed to express that opinion to the judge. So I sat in front of the judge – an older woman with a steely expression and what must have been very thick skin – and explained that I didn’t have the right to place a judgment because I didn’t believe that child (only a few years older at the time myself, I used the word child on purpose and with a certain delicacy) should have been judged. I knew nothing about his life or community or circumstances, all of which likely had something to do with his actions. How could anyone on a jury possibly know enough about all of those things? What ordains us with the ability to pass judgment on others?
The woman looked annoyed at best. “We are not asking you to know all of that,” she drove over my staunch idealism like a bulldozer, “we are asking whether you will be able to, based on seeing a presentation of facts, make a decision as to whether you believe he is likely to be guilty or innocent.” I wondered for a minute whether this was one of those complicated lawyer questions in which any answer gets you in trouble. I am not able to serve on this jury, I said quietly. She stared at me for a few seconds before saying, just as quietly but with anger in her tone, that I was dismissed. As I hurried out of the courtroom, I spotted the vigorous texter and the racist grandma being selected for the jury.
Human Nature: When love is enough
Despite this threat-and-opportunity of machines, there are surprisingly many things on which even the best of modern artificial intelligence is far from able to compete. Tasks grounded in navigating human relationships come to mind (or “taking initiative!”, as one excited participant in the LSE lecture crowd offered). That is despite the existence of machines which can engage in dialogue with humans, having been programmed to ask topical questions and offer soothing phrases that mirror our emotions.
Artists and art-lovers argue that “art will be the last thing left” when machines do everything else. Image-recognition algorithms might be built to understand aesthetics, but surrealist art, which releases the unconscious imagination of an individual, is much harder to replicate.
Writers would argue that creating unique combinations of words to express abstract ideas or human emotions is something uniquely human. Some political speech generators are strikingly accurate; indeed, it might be only marginally harder to teach a machine to express sentiments such as “it pains me to hear that” than to teach a politician, for whom those emotions may be equally foreign and whose delivery of such words may be as systematic as that of a computer.
Why do those things seem so naturally human that we can never quite find or build a science behind them? Why is it that no matter how good machines become at breaking tasks and judgments down into algorithms, they will never entirely replicate the flesh-and-blood animal?
The answer lies in a common thread, a uniquely human characteristic we would do well to cultivate in the near term. Caring – about the social interaction, the art, the writing – is what will always separate humans from machines. Caring is the one thing we don’t need to outsource or export, but should invest in instead. It is difficult to judge whether “rates of caring” have increased or decreased through the years; in fact, I wouldn’t have the slightest clue where to start in measuring such a thing. But the persistence of racism, sexism, xenophobia, and ignorant hatred generally – impossible to escape if you engage with the world, or at the least read news of it – certainly suggests that we aren’t caring for one another enough.
We’ve made enormous progress through the years, but future progress will require clearing real hurdles of effort. After all, taking care of one another requires conscious work. It is easier to be rude to, or simply ignore, the humans we see at the end of a long and tiring day – after we’ve spent (for example) the majority of it staring eagerly into a computer screen which, ironically, couldn’t have cared less how kind or cruel we were. Taking care also requires showing respect for things which may otherwise be alien to us. It requires looking beyond the factors we can explain through the customs we are used to, or the institutions and places we hail from – so often the basis of human bonds and understanding – to something far less rational. Sometimes such respect can feel like a leap beyond known reason altogether, and so it requires the humility to recognize that even the wisest human cannot fully understand another – which is why the benefit of the doubt must always dominate.
It requires effort to overcome our default state of seeing the world through a filtered perception, fed to us through custom 3-D glasses – rainbows of built-up biases, stereotypes, and groupings. As we grow, we slot pieces of the world into our preconceived taxonomy, feeding the (over)confidence that turns us into what Clarence Darrow called “walking balls of prejudice.”
Perhaps, in our defense, it is to avoid being overwhelmed by the need to comprehend the world otherwise: to view every single individual as a unique being with unique circumstances would mean headaches when designing laws and policies to accommodate an entire population. It would mean taking a lot more time to build economic models which accurately depict human behavior, without being able to assume some representative agent for the sake of ease. It would certainly make it much harder to incarcerate, to underpay, to be violent towards, and generally to take advantage of other human beings.
As machines begin to automate the things which give us little joy but take up our time, we can focus on cleaning our tinted lenses so as to see other humans as they truly are, stripped of our own preconceptions about them as well as the preconceptions they may have built up of themselves. Perhaps, with time, we can more actively take care of one another, and spend more of our days on the things most naturally human.
So, in retrospect, I missed the point by seeking some sort of scientific ordination for my right to be on the jury. I was allowed to sit on the jury not because of my track record of judgments or my freedom from biases, but because I had the capacity – just like every other human being – to care. And I could have used that care to convince others to give the boy a favorable outcome in the trial, instead of disengaging altogether.
The judge was right to be disappointed in me. Judgments will be made, she was telling me silently with her steely eyes, and you can choose to engage with those judgments or you can remain disengaged. But it will not change the fact that people’s lives will continue to be in the hands of other people. My duty as a human, I realized, is not to always get things right in my judgments. My duty as a human is to care, and through caring for others, know that the things I do and the judgments I make will be in good faith – and thus, if the logic of the world works as I want to believe it does, have a higher chance of also being the right ones.
I could have tried to enlighten the racist grandma, instead of running away from what I perceived as immovable biases. I could have engaged in a dialogue about community and privilege – factors I consider hugely influential on human action – and shared my view, as I had with the judge, that if I too had been born in a low-income community where selling cannabis was a normalized way of earning income (particularly when the other routes were nonexistent or demoralizing), I probably would not have considered it a crazy path to take. Perhaps some of the other jurors felt the same way, and we could have banded together to change the biases of a few and alter the outcome, despite what historical evidence might have suggested as the best verdict. For machines can only get us as far as we’ve come. To continue forward, we must invest in – amongst other things – caring for one another.