I have thoroughly enjoyed my research into the area of Digital Witnessing. In searching for documentation and resources on this topic, I have realised that it is quite a new and niche field of research, especially concerning art as a conduit for Digital Witnessing. I had always set out to do a field study, and was able to conduct extremely insightful, interesting and thought-provoking interviews with Lynette Wallworth (Collisions VR), Nishat Awan (Sheffield University), Francisco Gallardo and Audrey Samson (FRAUD), and Simon Rowat (Forensic Architecture). As always, Feminist TechnoScience has been my primary method of inquiry; however, all of the works I looked at also touch on Archival theory, with post-Colonial narratives and geopolitical privilege in mind.
The interviews that I conducted had a really profound effect on me, and I was surprised to find that voices, even within the “political art” context, can differ significantly when it comes to discussions about Digital Witnessing and whether artistic works can bring about positive change. A particular impression was made on me in speaking to Lynette Wallworth, as I am such an admirer of her work, attitude, understanding and respect in creating moving and significant immersive art. Most important, however, is the thought and energy she exerts to create effective communication and change by knowing exactly who her audience is and seeking them out.
Through conducting interviews and reflecting on what I was hoping to achieve, I decided that I would like to create and include some audio about my research journey. This allowed me to approach the project in an artistic and meditative way, which I found immensely rewarding, and I now have a record of the most profound words that have been imparted in the process.
I never intended to come to any conclusions about Digital Witnessing; rather, I was exploring its entanglement with political issues, and my research has only emphasised how complex and potentially problematic the act of Digital Witnessing is. And yet, technology has also been used as a tool to create educational resources, raise awareness and drive change, and has helped to forge new narratives, disseminating the voices of people who have historically been unable to speak.
“Images and other cultural artefacts selected for local relevance, are more likely to carry scientific knowledge in ways we find engaging… The social affordances of AR, when combined with imagery that has strong local relevance, can scaffold constructions of social formations and personally-relevant activities around scientific data.”
As someone interested in data, and especially real-time information about ecologies, environments and people, I think Pocket Penjing is a really interesting app and study. It is not surprising that local information and data create more interest, especially when it is our immediate world that we interact with on a regular basis, which becomes our best frame of reference. It could be interesting for the app to work in collaboration with an organisation such as C40, which researches, sets goals and implements policy regarding environmental issues in major cities. There may be real potential in partnering with such an organisation to utilise the data and individual involvement to make positive change in our biggest cities. Maybe this model will be used in future iterations of such studies.
Language Extinction: https://www.sbs.com.au/nitv/my-grandmothers-lingo/article/2016/10/06/what-language-extinction-and-why-should-we-care
A language becomes extinct when its last native speaker dies, and it’s usually the result of its speakers shifting to a lingua franca like English, Arabic or Spanish. This implies choice, but it’s often a history of marginalisation that leads to the change.
" VR will soon hit in a big way, very possibly to become ubiquitous. In the window of time that exists before then I wanted to make a work that has protocols of meeting at its core. Nyarri’s world is only available to me to visit, and in this work through the technology, that invitation is extended to the viewer. The agency in Collisions belongs to Nyarri. When I put the camera down in front of him he said, “It has sixteen eyes.” I replied that it has sixteen eyes and four ears. From that moment, Nyarri became the one who decided what was seen and what was not to be seen, what was told and what was not told. The powerful sense of presence of VR makes everything personal. Nyarri knew who it was he was speaking to. "
- Lynette Wallworth
What is the overarching area of research?
Digital witnessing and the politics of seeing.
What are the key questions or queries you will address?
Who makes this work we digitally witness, and for what purpose? If it can be perceived as activist computational art, what effect, if any, has it had on its subjects? I will be critically analysing Lynette Wallworth’s VR film “Collisions”, and two works by SBS Australia: the interactive video “My Grandmother’s Lingo” and the interactive graphic novel “The Boat”. I am interested in unpacking the complex nature of being a digital witness: how one in the West can choose to engage or ignore; how the existence of technologies, truly everything we touch and use each day, has hidden and obscured consequences; how we, as humans, have a responsibility to use technology and our ability to witness digitally to change the world, to amplify the voices of those who have been and continue to be silenced.
Why are you motivated to undertake this project?
I am fascinated by the politics of seeing, the geopolitics of privilege and the idea that computational art can give audiences immersive experiences that cross political, social, cultural and, in some ways, physical borders. The effects of scaling such borders exist on a spectrum, ranging from immensely positive intervention and support (potentially construed as “white saviourism”) to fetishising violence and death, and exploiting vulnerable, persecuted or stranded persons or groups. I am interested in delving into these complex undercurrents, specifically looking at computational artworks that hinge on the idea of witnessing.
What theoretical frameworks will you use in your work to guide you?
I will be guided by Digital Witnessing and Geopolitics to situate and contextualise the investigated ideas and artworks, with specific focus on those who have an interest in obscuring information, those that seek to make it visible, and those that are affected by the actions of the aforementioned parties. I will also be utilising Archival Theory to consider how specific narratives are documented, both historically and online, or not documented, and how the internet has and will continue to revolutionise the way information is disseminated.
What theoretical frameworks will you use in the analysis of your project?
I will be approaching my topic through the lens of Feminist TechnoScience and Intersectional Technology Studies, considering ideas of privilege, colourism, colonialism, the politics of forgetfulness (Verges) and how computational activist art approaches and encompasses these concepts.
Iris Scanners and Recognition — Biometric Identification Techniques
The use of biometric information is becoming increasingly common around the world. In a political, economic and social climate where media circulates fear of terrorism and refugees, it does not take a Black Mirror episode to uncover the potential for this technology to be used and exploited in frightening ways. It brings to mind the use of eugenics in Gattaca, and how it created a new hierarchy utilising gene mapping to privilege some and discriminate against others. From a black intersectional analysis viewpoint, it is clear that this technology will almost certainly be used to police black bodies, and the bodies of people of colour, in a more rigorous, potentially dangerous and harmful way. Consequently, there are many questions that require our attention — who is creating this technology and to what end? Are the people creating it aware of or critical about the ways in which it will be used? Does an individual have any power in not using such technologies? It is clear that this will disproportionately affect the Global South, especially as borders, migration and immigration are more politicised and policed than ever. It will make it harder for people to move around this supposedly globally connected world, and it will mean that people of colour, people from lower socio-economic backgrounds, and people further from white, cis, hetero, able, wealthy and male will face, at the benign end, inconvenience, and at the most extreme, persecution, imprisonment and potentially death.
Machine Vision that analyses artworks
Analysing artwork seems a more innocuous way to use machine vision. This specific example used a training set of 80 000 artworks to build an AI that could relatively accurately decide which artist created a painting. Using a black intersectional lens, this can be approached with many questions and critiques that could be angled at the internet and technology more widely — who makes these tools? Is a true cross-section of humanity taken into account when thinking about the history of art? Without even knowing the details of this training set, I would guess that of the 80 000 training images, very few, if any, would be from indigenous art practices of Africa, Australia, Asia, or North and South America. Like the people who create the technology we use every day, the people who created this AI most likely come from an Anglo-European-centric viewpoint, analysing art from a tiny slice of art history, which also happens to be a huge proportion of documented art history. Erasure of history, practices, languages, culture, people and so much more through colonisation has been, and will continue to be, woefully common; however, the internet does offer an opportunity for those who have been silenced to document their history and their experiences. The internet is at once a perpetuator of this silencing and a vessel for voices.
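To make the attribution task concrete, here is a toy sketch of the general idea, assuming nothing about the actual study’s method (which trained on 80 000 images, presumably with a deep network): artworks are reduced to feature vectors and attributed to the artist whose class centroid is nearest. All names and numbers below are invented for illustration; the point is that whatever the model learns is bounded by whichever slice of art history its training set represents.

```python
# Toy nearest-centroid "artist attribution" sketch. The feature vectors
# stand in for something like colour histograms; the data is invented.

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labelled):
    """labelled: {artist: [feature_vector, ...]} -> {artist: centroid}."""
    return {artist: centroid(vs) for artist, vs in labelled.items()}

def attribute(model, vector):
    """Attribute a work to the artist with the nearest centroid."""
    return min(model, key=lambda artist: distance(model[artist], vector))

# Invented "training set" for two hypothetical artists:
training = {
    "artist_a": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "artist_b": [[0.1, 0.2, 0.9], [0.0, 0.3, 0.8]],
}
model = train(training)
print(attribute(model, [0.85, 0.15, 0.05]))  # nearest to artist_a's centroid
```

A classifier like this can only ever choose among the artists it was trained on; anything outside that canon is simply invisible to it, which is the bias described above made mechanical.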
I found Nishat Awan’s paper incisive and extremely topical. I was intrigued by the concept of places like Gwadar, Pakistan, that are almost completely physically cut off from the outside world, yet are accessible online and are an increasingly powerful economic resource with interest from governments (especially their own) and international corporations.
“Gwadar is situated in the province of Balochistan, which is the largest, yet least populated and poorest province of the country, but one that is the most resource rich. It therefore sits within a very particular set of exploitative relations to the rest of the country, as well as having strategic importance within the region… Not only is it becoming increasingly visible to the outside world due to its geopolitical importance, but physical access to it is also being restricted by the Pakistani military.”
ON VR AS A FORCE FOR GOOD
" “Virtual reality, fundamentally, is a technology that removes borders... Anything can be local to you” (Harris 2015). The primacy of vision embedded within such statements is only one in a line of problematic assumptions."
Figure 1 shows delegates watching a VR film at the World Economic Forum in 2016, in an attempt to secure donations for humanitarian work regarding refugees. The idea of using virtual reality, or really any other art form, for humanitarian purposes is theoretically good; however, multiple moral issues and questions crop up in practice, many of which are clear when looking at this photo. The people “consuming” this film sit in a room, in a city, in a country that is safe from the very forces they are observing. They are extremely wealthy, suit-clad, almost exclusively older white men who have the privilege of treating this digital witnessing as a fleeting experience that they can forget about as soon as it’s over, if they so choose. There is also the question of whether such films or experiences fetishise violence and war, and a question about weighing potential good (for example, monetary support for those who want to intervene and help) against this fetishisation and exploitation. Finally, does such monetary support even swing the scales in terms of making a difference, specifically regarding refugees, considering that governments, corporations and organisations are the powerful actors who decide on these outcomes, often for economic and political gain?
It occurred to me that in the short time since this paper was written, the notion that images are the most credible form of proof or evidence has shifted. In an era of “post-truth”, “alternative facts” and fake news, people are rightly becoming more suspicious of the images we are presented with. Therefore, the onus falls upon those that respect and require credibility from their readers, consumers, users and so on, to prove the accuracy and truthfulness of their information.
“Future-looking Black scholars, artists, and activists are not only reclaiming their right to tell their own stories, but also to critique the European/American digerati class of their narratives about cultural others, past, present and future and, challenging their presumed authority to be the sole interpreters of Black lives and Black futures.”
Afrofuturism is an established yet fast-developing area of narrative fiction that is becoming increasingly visible in the mainstream arts landscape. The soon-to-be-released Black Panther film is evidence that these narratives are extremely appealing to film audiences, and will hopefully inspire a generation of young people whose cultural environment is inclusive, inspiring and empowering. Afrofuturism is powerful because it reasserts black power in a society that has condemned, suppressed and annihilated cultures, peoples, knowledge and autonomy, and offers a reimagined narrative, utilising technology to visualise and realise mythologies and societies from pan-African, African diasporic and Afro-future female perspectives, to name but a few. In a world that remains hostile, unequal and exceedingly violent towards African peoples around the globe, Afrofuturism, like much other culture originating in these communities, offers hope, alternative stories and a vision of the future.
An approach to the study of apps...
Duolingo is a freemium language learning platform. Its slogan is: Free language education for the world.
The main page has an icon to signify which language course(s) you are subscribed to. The page has a pyramid of icons called “Basic”, “Phrases”, “Food” etc to describe collections of vocabulary to be accumulated by the user. The bottom panel of the app allows you to navigate between “Learn” (main page), “Health”, “Bots”, “Clubs”, and “Shop”.
The app utilises machine learning with its bots, which encourage you to have a text conversation in your chosen language; any words you don’t recognise can be tapped on and explained. It feels quite intuitive and encourages you to conjugate sentences, which is good practice. The bots have notification icons that crop up over time to remind you to chat to them, and the app itself sends badge notifications each day to remind you to practise.
Duolingo’s mandate is to provide free education; however, the Shop function allows you to buy “health” top-ups using your accumulated or purchased “gems”. For example, you can buy a Streak Freeze, which “allows your streak to remain in place for one full day of inactivity”, or a Double or Nothing, which will “double your 50 gem wager by maintaining a 7 day streak”. The monetised part of the Shop is where you can spend between $2.99 for 400 gems and $159.99 for 35000 gems. Your collection of “gems” is prominently placed in the top panel, which also has a fluency meter and a streak calendar. In the context of language learning, these tools can be really valuable, as they encourage users to return regularly and practise, which is the most important behaviour to cultivate in learning a language. Where many social media apps use these methods to create obsessive use (which at its best is a waste of time and at its worst a potential threat to mental health), Duolingo seems to use these incentives and rewards in a generally positive way, if occasionally using guilt to cultivate regular use. The “Club” function invites users to join different online groups, offering community and an opportunity to practise.
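As a quick check on the pricing just mentioned (the $2.99 and $159.99 packs are the figures observed in the app at the time of writing), the per-gem cost reveals the familiar freemium bulk discount:

```python
# Per-gem cost of the two gem packs mentioned above. The large pack works
# out roughly 40% cheaper per gem, a typical freemium pricing ladder that
# nudges users towards bigger purchases.

def cost_per_gem(price_usd, gems):
    return price_usd / gems

small = cost_per_gem(2.99, 400)      # $0.007475 per gem
large = cost_per_gem(159.99, 35000)  # about $0.00457 per gem
print(f"small: ${small:.5f}/gem, large: ${large:.5f}/gem")
```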
Symbolic representation uncovers some interesting discoveries. There are 23 languages from which you can approach the app, and 45 languages you can choose to learn. There is an interesting bias towards Western languages, including Welsh, Norwegian and Irish, arguably not languages that should be included before much more widely used languages such as Hindi and Arabic, both of which are languages you can learn from but not languages you can learn. I find it interesting that Hebrew is available but not Arabic, which may suggest some geopolitical and socio-political biases. It is also possible that the developers are simply prioritising languages based on user demand.
The design of Duolingo is cartoonish and playful, which suits its gamification of language learning. I would be curious about who their target demographic was in designing the app, as it seems more appropriate for a younger audience, which is not necessarily indicative of its user base.
There is an inherent expectation that its users can see and hear, so there is an able-bodied bias (there might be a way to enable dictation for blind users, for example, but I can’t find it) which is evident in its user interface arrangement. There is also an expectation of literacy, though using a smartphone at all carries an implicit literacy bias (as does The Walkthrough Method itself).
A common interest in the materiality and environmental effects of the internet brought our group together. We were initially interested in gaining an understanding of an individual’s digital and carbon footprint by analysing their internet and technology usage, but it was clear early on that it would be virtually impossible to accurately gather data on such a huge project. As such, we homed in on Facebook, and used Feminist TechnoScience, Archival theory and Materialism as our guiding lights in critique, analysis and research.
Mél Hogan’s piece “Facebook Data Storage Centres as the Archive’s Underbelly” became our central text, as it was clear that bringing digital and carbon footprints into the same arena is a relatively new area of inquiry. We looked at “ecoart”, works by artists such as Julian Oliver, that use an artistic medium to make a political statement as well as having a tangible effect on an ecology or environment. With ecoart in mind, we decided to create speculative products that would allow people to gather energy through green methods (solar, wind, hydro) to power their own social media, or internet usage.
I found it exciting to be working in an area of computational theory that seems quite new and unexplored, though I suspect in the future, interest will intensify. It brought together interests of mine that, at the outset, felt quite disparate, but I presume that is how many theoretical and especially computational theory frameworks appear. I realised in undertaking the research that there is much to be gleaned on this topic, and I feel it is particularly pertinent at this time in human history.
“A supposedly ‘natural’ setting turns out to be nothing if not a highly artificial context or an information-intensive environment, and it appears attentively oriented towards us rather than being neutral or perfectly non-caring.”
Ekman’s paper tackles a topic that will surely intensify in the coming years, with ubiquitous computing and the internet of things becoming increasingly pervasive. What is most interesting about ubiquitous computing is its invasion into largely organic spaces, meaning that being truly “off grid” may soon be, or already is, impossible. In a world where data is worth a hefty price, and where people’s complicity and often unwitting indifference allow corporations and governments to gather anything and everything, ubiquitous computing seems an issue we should be rushing to understand and, ideally, make educated, ethical decisions about now.
“We are perhaps only in the early stages of articulating the issues to be debated, a task made more demanding because cultural practices and forms of life are to a large extent habitual and tacit knowledge, and because the technologies may appear ‘ubiquitous,’ ‘pervasive,’ or ‘ambient’ but most often do so inconspicuously and invisibly."
It’s this invisibility that means it creeps into our lives without being noticed or questioned. And yet, there are many arguments in favour of some forms of ubiquitous computing, for example where safety is concerned. It is the kind that masquerades as convenience that we should be critiquing first, as well as fighting for an individual’s ability to opt out.
“The nonhuman turn, on the other hand, insists (to paraphrase Latour) that “we have never been human” but that the human has always coevolved, coexisted, or collaborated with the nonhuman—and that the human is characterized precisely by this indistinction from the nonhuman.”
In trying to define the nonhuman, it is easy to travel down a rabbit hole of the many facets of thought concerning new materialism, networks, environments and the nonhuman turn. However, I was drawn to trying to understand what it is to be human in order to characterise the nonhuman (or inhuman, post-human, more-than-human etc). It seems that one of the most essential aspects of being human is to coexist with, collaborate with, use and often exploit the “nonhuman”, an engagement that throughout history has ranged from creating tools and controlling nature for agriculture, to building computers and software, to factory farming, human slavery and war (where we redefine “nonhuman” for political and economic gain). Being human is therefore inherently linked to our relationship and engagement with the nonhuman: it is the bedrock upon which we build everything.
If we are currently experiencing a nonhuman turn, a cynical reason for that may be that the nonhuman, particularly the environment and technology, now pose a threat to human existence through climate change and AI. Initially, the idea of a nonhuman turn seemed to me like a move away from a tunnel visioned anthropocentric view of the world, but with the looming peril of AI and climate change in mind, the nonhuman turn may indeed be a continuation of anthropocentrism, or in fact the very crux of it. Are we only turning our attention towards the nonhuman because we have no other choice?
“Contained in every “blip” of execution is a range of technical and cultural issues to be addressed, with one operational experience of executing practices opening onto another (Fuller 2003).”
The discussion in Executing Practices was a very interesting look at the many technological, social, political, economic, physical and even personal repercussions of executing algorithms and other kinds of computational instructions. Thinking critically about computation is a relatively new area of study; however, as technology becomes more ingrained in our everyday lives, we know that algorithms and the processes of execution can have tremendous knock-on effects for people across the globe.
“…computational practices, the problems of execution are historically situated and entangled with the contingent forces of machines, bodies, institutions, military labour practices and geopolitics, rather than simply a set of instructions that are outside of life.”
“As Jennifer Gabrys notes in the collection’s afterword, execution “is a process and condition that might unfurl through code, but also overspills the edges of code”.”
It is important for us to understand the gravity of such overspills, from massive shifts in the labour market, as robotics and AI take over work once done by people, to the huge environmental and social implications of warfare, drones, privacy and so on.
When discussing the linguistic and teleological qualities of “execute”, I was reminded of the word “alarm”: something we might use every day to wake up, forgetting that “alarm” has its roots in fear and impending danger. The meaning of the language we use is often obscured by history or complex linguistic lineages, but remembering the meaning of “execute” should remind us of the many ways in which it can be used, both positive and negative.
Climate change | Food politics | Intersectional feminism | Afrofuturism | Anti-speciesism | Ecology | Intimacy | Nature | Experiential | Relationships | Surveillance | Digital privacy | Humanism | Health | Activist action | Immersion | Veganism | Storytelling | Refugees | Geopolitics | Inequality | Post-colonialism | Open borders | Universal basic income | Cognitive dissonance | Waste | Cryptocurrency | Extremism | Meditation | Digital colonialism | Many worlds theory | Darknet | Biochemical warfare | AI
Wind energy used to mine cryptocurrency to fund climate research, 2017
Men In Grey
Surveillance conspiracy, paranoia framework and wireless-network interventions, 2009-2014
Lost Common Sense
Black lights, broccoli, Monsanto broccoli seed patent, 2014/2017
Made from ocean terrazzo, an innovative material produced from fragments of ocean plastic waste
“In the case of the human, the prevailing figuration in Euro-American imaginaries is one of autonomous, rational agency, and projects of artificial intelligence reiterate that culturally specific imaginary. At stake, then, is the question of what other possible conceptions of humanness there might be, and how those might challenge current regimes of research and development in the sciences of the artificial, in which specifically located individuals conceive technologies made in their own image, while figuring the latter as universal.”
I found Suchman’s discussion around agency and autonomy really interesting, and engaging in debate regarding artificial intelligence inevitably forces us to question what it is to be human. We have continually explored this concept in class, but it seems particularly pertinent at this time in history, to use a Feminist TechnoScience perspective to examine the Western-centric, white, masculine, gendered, able-bodied biases that are prevalent in those creating and advancing our technologies and relationship with artificial intelligence. What are we teaching and imparting to, even accidentally, these machines, and what will be the repercussions?
Suchman’s thoughts on embodiment also make for interesting critique, outlining how “Feminist theorists have extensively documented the subordination, if not erasure, of the body within the Western philosophical canon”, yet interestingly, even early ventures into artificial intelligence treated embodiment as “a fundamental condition for intelligence”. If that is the case, then what kind of embodiment will we grant to AI? Creative depictions of intelligent machines often adhere to extremely narrow, gender-binary and heteronormative portrayals, with Alex Garland’s Ex Machina and Spike Jonze’s Her coming to mind. If this is any sign of the future, there needs to be a radical shakeup in the people creating our tech.
“Software is seen as a tool, something that you do something with. It is neutral, grey, or optimistically blue. On the one hand, this ostensive neutrality can be taken as its ideological layer, as deserving of critique as any such myth."
- Matthew Fuller
I was most interested in the notion of software being neutral, or “pure”, and two particular areas where this neutrality requires critique, but importantly, action.
Over the past decades, and especially the last few years, it has become apparent that the people creating the code and software that almost everyone on the planet interacts with on a daily basis are not representative of the diversity of ideas, cultures, religions, sexualities, genders, politics and economics (the list goes on…) of the people that use these tools. As a result, the statistically white, male and middle-class biases of these technologists have begun to noticeably bleed into these tools. Matthew Fuller touched upon this in Software Studies / A Lexicon, but there is a complex political discussion to be had about how the tech industry could benefit hugely from a more diverse group of creators. Not only would it foster a much-needed sense of inclusivity, but software, the internet and the wider set of computing tools we all use would be better, more intuitive and more effective if made by people from all over the world with unique perspectives on gender, race, socio-politics, socio-economics and so on. Mick Grierson, technologist and Goldsmiths professor, recently spoke about how working with people with disabilities had a phenomenal effect on the musical prototypes they were creating, making them more engaging and almost perceptive for everyone, including and especially able-bodied people. I think this idea can be seen as a microcosm for what we could achieve if the people in “control” of creating these tools were to reflect the true diversity of their users.
“What is an algorithm if not the conceptual embodiment of instrumental rationality within real machines?”
- Andrew Goffey
The other area of supposed neutrality that I was drawn to concerns the idea of computers’ and algorithms’ ability, or need, to deal with extremely complex and entangled philosophical, human questions. I mentioned the example of the ethical considerations of self-driving cars, and I think this is a perfect conundrum to scrutinise. At this time in technological history, we think of computing and the processes that surround it as the most perfect form of rationality. However, there are many feasible situations where rationality, though usually a very effective approach to problem solving, becomes precarious. When people are asked if they would sacrifice one person to save five, there is near-universal agreement that they would. However, when they are asked if they would kill one person to save five people, most are vehemently against it. Though the outcome is the same, we as human beings are able to understand that being directly responsible for another person’s death is wildly different to passively allowing someone to die; ask a computer the same question, and there is no difference. The maths is the same, and computers (at the time of writing — I am aware this may well change in the future) don’t have a conscience. So a self-driving car could realistically sacrifice its driver in an emergency when it identifies a baby’s life being lost as the alternative. This question is no longer a philosophical mind-bender; it is a real-world issue that needs to be dealt with, hopefully by people other than car manufacturers and the programmers they employ. Maybe car companies, and all companies that overlap into tech for that matter, should be employing ethicists to ponder, and come to conclusions about, these deep and often transcendental questions.
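The asymmetry between acting and allowing can be made explicit with a deliberately naive sketch (my own illustration, not any manufacturer’s actual logic): a purely utilitarian rule sees only casualty counts, so the two dilemmas that humans answer very differently collapse into one and the same comparison.

```python
# A naive utilitarian decision rule: compare casualty counts and nothing
# else. To this function, "divert the trolley" and "push the bystander"
# are literally the same question, which is exactly the problem.

def utilitarian_choice(casualties_if_act, casualties_if_abstain):
    """Return 'act' if acting minimises casualties, else 'abstain'."""
    return "act" if casualties_if_act < casualties_if_abstain else "abstain"

print(utilitarian_choice(1, 5))  # 'act' in both framings of the dilemma
```

The moral distinction humans draw between killing and letting die never appears as an input, so it cannot appear in the output; encoding that distinction is precisely the kind of question ethicists, not only engineers, would need to answer.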
Blog is up and running!