Software Studies / A Lexicon Summary


“Software is seen as a tool, something that you do something with. It is neutral, grey, or optimistically blue. On the one hand, this ostensive neutrality can be taken as its ideological layer, as deserving of critique as any such myth.”
- Matthew Fuller


I was most interested in the notion of software being neutral, or “pure”, and in two particular areas where this supposed neutrality requires not only critique but, importantly, action.

Over the past decades, and especially the last few years, it has become apparent that the people creating the code and software that almost everyone on the planet interacts with daily are not representative of the diversity of ideas, cultures, religions, sexualities, genders, politics and economics (the list goes on…) of the people who use these tools. As a result, the biases of these statistically white, male and middle-class technologists have begun to noticeably bleed into the tools themselves. Matthew Fuller touches on this in Software Studies / A Lexicon, but there is a complex political discussion to be had about how much the tech industry could benefit from a more diverse group of creators. Not only would it foster a much needed sense of inclusivity; software, the internet and the wider set of computer tools we all use would also be better, more intuitive and more effective if made by people from all over the world with unique perspectives on gender, race, socio-politics, socio-economics and so on.

Mick Grierson, technologist and Goldsmiths professor, recently spoke about how working with people with disabilities had a phenomenal effect on the musical prototypes they were creating, making them more engaging, almost perceptive, for everyone, including and especially able-bodied people. I think this can be seen as a microcosm of what we could achieve if the people in “control” of creating these tools reflected the true diversity of their users.


“What is an algorithm if not the conceptual embodiment of instrumental rationality within real machines?”
- Andrew Goffey


The other area of supposed neutrality that I was drawn to concerns whether computers and algorithms are able, or should ever be required, to deal with extremely complex and entangled philosophical, human questions. I mentioned the example of the ethical considerations of self-driving cars, and I think this is a perfect conundrum to scrutinise. At this point in technological history, we think of computing and the processes that surround it as the most perfect form of rationality. However, there are many feasible situations where rationality, though usually a very effective approach to problem solving, becomes precarious. In the classic trolley problem, when people are asked whether they would divert a runaway trolley onto a side track, sacrificing one person to save five, there is near-universal agreement that they would. However, when they are asked whether they would push one person into the trolley’s path to save the same five, most are vehemently against it. Though the outcome is identical, we as human beings understand that being directly responsible for another person’s death is wildly different to passively allowing someone to die. Ask a computer the same question, though, and there is no difference: the maths is the same, and computers (at the time of writing; I am aware this may well change in the future) don’t have a conscience.

So a self-driving car could realistically sacrifice its driver in an emergency if it identified the alternative as the loss of a baby’s life. This question is no longer a philosophical mind-bender; it is a real-world issue that needs to be dealt with, hopefully by people other than the car manufacturers and the programmers who build these systems. Maybe car companies, and all companies that overlap into tech for that matter, should employ ethicists to ponder, and come to conclusions about, these deep and often transcendental questions.
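To make the point concrete, here is a minimal, purely illustrative sketch in Python of a naive utilitarian decision rule that only counts lives lost. The function and scenario names are my own invention, not drawn from any real vehicle software. Both framings of the dilemma reduce to the same comparison, so the morally loaded distinction between killing and letting die never enters the computation:

# Purely illustrative: a naive utilitarian rule that only counts lives.
# These names are hypothetical, not from any real autonomous-vehicle system.

def choose_action(actions):
    """Pick the action whose outcome loses the fewest lives."""
    return min(actions, key=lambda action: action["lives_lost"])

# Two framings that humans judge very differently...
switch_case = [
    {"name": "divert trolley (actively kill one)", "lives_lost": 1},
    {"name": "do nothing (let five die)", "lives_lost": 5},
]
footbridge_case = [
    {"name": "push one person (actively kill one)", "lives_lost": 1},
    {"name": "do nothing (let five die)", "lives_lost": 5},
]

# ...but the cost function sees no difference: both cases reduce to
# "1 < 5", so it always picks the active killing.
print(choose_action(switch_case)["name"])      # divert trolley (actively kill one)
print(choose_action(footbridge_case)["name"])  # push one person (actively kill one)

The sketch is deliberately crude, and that is the point: any distinction a human would draw between the two cases has to be explicitly encoded by whoever writes the cost function, which is exactly why it matters who gets to write it.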
