The Social Problem of Technology

Sophie-Carolin Wagner, 19.9.2023
“The Social Problem of Technology,” in Book of X: 10 Years of Computation, Communication, Aesthetics & X. Porto, Portugal: i2ADS: Research Institute in Art, Design and Society, pp. 47–56.

Sophie-Carolin Wagner’s academic background is in media theory, digital art, and social economics. She is the co-founder of the Research Institute for Art and Technology (RIAT), was co-editor of the Journal for Research Cultures, and lives and works in Vienna.

Technology is of no great interest to me, other than as a social act. I have found failed objects to be useful vehicles through which my theoretical and artistic practices converge. They are the subject matter of this discussion because I hope that our misconceptions about technology allow us to decode what it means to be human. My approach has led me to explore the pitfalls of media and technology: the exclusionary and alienating mechanisms inscribed in and nurtured by them. A prominent example of exclusionary processes in media and technology is the existence of biases in the systems gathered under the term artificial intelligence (AI). I have addressed this research in a previous paper;1 here, however, I want to divert my own narrative. I want to speculate that we can draw insights into deeply human operations from the unintended ways in which contemporary technological systems capable of reflexive adaptation over time operate, and about how it might be possible to build systems that can detect what human intelligence can’t.

The problem of perception is not a neurological, physiological or psychological one. Rather, it is part of the logical-philosophical or socio-cultural realm. It is a phenomenon allocated to metaphysics, and an undecidable question.2 Perception cannot be treated unequivocally, but only from approximate perspectives. As the Swiss psychologist Jean Piaget described in The Construction of Reality in the Child, it is a sensorimotor competence offering only the fundamentals for a construction of reality.3 The visual sense does not project reality as a mere copy of the world onto our retina; likewise, hearing is not simply the reception of audio information in our ears.

1 Sophie-Carolin Wagner, “Programming is Law: Can I be a feminist if I don’t want to become a programmer?” ISEA (Durban, 2018): 336.

Whereas certain systematics can be determined and investigated physiologically or neurologically, the particular living situation, the experiences, and the social and cultural conditioning of the perceiving organism are critical parameters of its perception. The sensory system builds the epistemological tank humans use to encounter their surroundings. Yet this tank may tell us very little about how the sensed attributes inform our thinking or the narratives we live by and in.

Technology is a cultural product. Informed by cultural and social structures, it is subject to the expression of historical legacies of privilege, violence and oppression. This becomes obvious in the biases of technologies that curate information and narratives, such as AI. Well-known examples are Google’s photo service tagging photos of African-Americans as “gorillas”; Google ads, which algorithmically display crime and felony notifications when users search names associated with African-Americans; or facial recognition software from IBM, Microsoft or Megvii, which correctly identifies a person’s gender from a photograph 99 percent of the time for white men, but misidentifies gender in up to 35 percent of cases for dark-skinned women.4 Assuming the detection of these biases to be a purely technical problem would be to miss a crucial part of the picture. Training data—particularly when it comes to images—reflects a long history of discrimination.

2 Jean Baudrillard and Heinz von Foerster (1989). “Wahrnehmen.” In Philosophien der neuen Technologie: Ars Electronica, 27–28. Berlin: Merve.
3 Jean Piaget (1977). La construction du réel chez l’enfant. 6e éd. Delachaux & Niestlé.

That this influences AI systems, which are built on artificial neural networks and on an iterative learning process over data cases, is a logical consequence. The dependency of AI on its training data underlines that biased AI is a social problem first and a technical problem second. The social and political implications of biased AI become apparent when we think about systems that not only control how images are tagged, but decide on access to mortgages, water, healthcare or a country.
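To make this dependency concrete, consider a minimal sketch with purely synthetic data (the groups, features and sample sizes are hypothetical illustrations, not the systems cited above): a classifier trained on data dominated by one group fits that group’s pattern and largely ignores the underrepresented one, reproducing the kind of accuracy gap described earlier.

```python
# Minimal sketch: a model trained on data dominated by one group
# performs far better on that group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_dim):
    """Each group's label depends on a different feature, standing in
    for group-specific structure the model must learn separately."""
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_dim] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is barely present.
Xa, ya = make_group(5000, informative_dim=0)
Xb, yb = make_group(100, informative_dim=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh, equally sized test samples: accuracy is near-perfect for
# group A and close to chance for group B.
for name, dim in [("group A", 0), ("group B", 1)]:
    X_test, y_test = make_group(2000, dim)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
```

The point of the sketch is that nothing in the algorithm is “malicious”; the disparity follows entirely from the composition of the training set.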

The technologies of AI are new, and their representational elements need to be rethought. AI can certainly help in constructing the epistemological tank, but the epistemological tank needs to be built by humans. Yet our intuitive models of how we think, lovely as they may be, are frequently wrong, and that is part of why AI does not function as intended: we cannot teach a machine what we do not understand. What happens in the base function of an AI is a paradigm shift from a cognitive, almost axiomatic system to a system based on algorithms and data. AI systems are programmed to function by receiving data and information. However, it is questionable whether these data and information are “real”; and even if they are real, AI systems can still neither be intelligent, nor can their representations be valid.

AI systems need human data, and humans need AI systems, but this does not mean that the human and the machine are one and the same, even if they are no longer the other either.

4 Tom Simonite (2018, January 11). “When it comes to gorillas, Google Photos remains blind.” Wired. Retrieved 25/02/2022, from https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/

What we are seeing today is the setting up of cognitive machines whose intelligence is based on algorithms and data. These AI systems seem to be intelligent, but they have no meaning; they might not be members of the same species as those to whom they belong. They are increasingly proving to be data-processing machines, not intelligent cognitive machines, and they are not capable of representing any living situation, not even their own.

An unfortunate psychological effect that weighs on the consequences of AI-based decisions is that humans have a tendency to trust the decisions of systems they do not understand. A prominent example of this effect was when Aviation Security officers forcibly removed passenger David Dao, a pulmonologist, from a United Express flight after Dao, who had been algorithmically selected for removal due to overbooking, refused to leave the aircraft. Even though airport security personnel are trained to know that removing a paying customer, in this case a physician, has no legal grounds, they proceeded on the basis of the algorithmic decision, and with a dramatic show of physical force. Yet somehow the machine-curated data dissemination provided a more relevant narrative than their formal training.5 As Sadie Plant formulates it, “intelligence is no longer monopolised, imposed or given by some external, transcendent, and implicitly superior source which hands down what it knows—or rather what it is willing to share—but instead evolves as an emergent process, engineering itself from the bottom up” and appearing only later as an identifiable object or product: “the virtuality emergent with the computer is not a fake reality, or another reality, but the immanent processing and imminent future of every system.”6

5 Jack Simpson (2017, September 8). “If you’re reading this, the algorithm said yes.” Harvard. Retrieved 25/02/2022, from https://www.harvard.co.uk/youre-reading-algorithm-said-yes/

The significance of how technology informs social and individual processes, and the importance of creating systems that are just and bias-free, whether by controlling training sets or by reflecting on who might be oppressed by these systems, should not be underestimated. Indeed, Google and IBM have created tools aimed at detecting biases in AI in recent years.7 However, AI and its failures also offer the opportunity to learn more about the limits and the potential of human symbolic faculties, fortunes and misapprehensions.
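As a rough illustration of what such bias-detection tools measure, here is a minimal sketch of one standard fairness metric, disparate impact. The predictions, group labels and threshold below are illustrative assumptions, not the actual implementation of Google’s or IBM’s tools (IBM’s AI Fairness 360 toolkit, for instance, offers this and many further metrics).

```python
# Minimal sketch of the kind of check bias-detection tools run:
# compare a model's favourable-outcome rates across a protected
# attribute. All predictions and group labels here are hypothetical.
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favourable-outcome rates: unprivileged / privileged.
    Values well below 1.0 flag potential bias (0.8 is a common rule
    of thumb, the 'four-fifths rule' from US employment law)."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

# Hypothetical predictions (1 = favourable outcome, e.g. loan approved)
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = unprivileged

di = disparate_impact(y_pred, group)
print(f"disparate impact: {di:.2f}")  # 0.40 / 0.80 = 0.50 here
```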

Ignorance of perception constrains the relationship of perception to the conceptual act, limits the conceptual act itself, and predetermines an experientialist position, one which assumes that all, or nearly all, experiences are intentional. Contemporary technological systems capable of reflexive adaptation over time operate analogously to living organisms developing cognitively during evolution. Moreover, these systems must be seen as actors in a sort of “extended reality”, which entails the coexistence of humans and machines, virtual and non-virtual, in a shared reality. Humans live in a self-reflexive virtuality, which denotes an ever-expanding, yet continuously changing, complex system. The field of media studies, at its deepest level, aims to understand the role of new media technologies in mediating reality and experience. Yet the screen and the interface are merely screens. All cognitive processes and representations are embodied. To declare that the screen is a virtuality only is to presuppose that there is a virtuality prior to the screen. It is precisely this presupposition which screens us from experience.

6 Sadie Plant, “The Virtual Complexity of Culture,” in Futurenatural: Nature, Science, Culture (1996), 203; Anna Greenspan, Capitalism’s Transcendental Time Machine, PhD thesis (2000), 204, 206; quoted by Amy Ireland, “Scrap Metal and Fabric: Weaving as Temporal Technology,” Agorism in the 21st Century, 1 (2022), 59–75.
7 Zoe Kleinman (2018, September 19). “IBM launches tool aimed at detecting AI bias.” BBC News. Retrieved 25/02/2022, from https://www.bbc.com/news/technology-45561955

Our intuitive sense of how we think is often at odds with the underlying reality. This is a necessary consequence of the fact that the primary function of our sensory and cognitive systems is not to represent reality, but to create an operable narrative. The inference from these representations back to cognitive or sensory processes simply does not work. The inscrutability of sensing and thinking also explains why implicit biases are so hard to understand, and even harder to correct. Investigating this inscrutability holds a promise for correcting biases, and more generally for philosophy. The failures of AI might allow for just that. What I am proposing is that the failures of AI may be able to teach us more about ourselves than about the AI, and that further developing AI systems that do not even try to mimic human intelligence holds a lot of potential: it could end up completely reshaping the way we think about thinking. In their paper Semantics Derived Automatically from Language Corpora Contain Human-like Biases, Caliskan et al. showed that machines can learn word associations from written texts, and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT).8 The IAT has predictive value in uncovering the association between concepts, and allows attitudes and beliefs to be identified, such as associations based on implicit biases, e.g. between gender and leading or assisting positions. Anthony Greenwald concludes that this AI can serve as a method to identify implicit human biases in language, and one might postulate that it might be more adequate at this than a human.9

8 Aylin Caliskan, Joanna J. Bryson & Arvind Narayanan (2017). “Semantics Derived Automatically From Language Corpora Contain Human-like Biases.” Science, 356(6334), 183–186. https://doi.org/10.1126/science.aal4230
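For intuition, the test Caliskan et al. describe (the Word Embedding Association Test) can be sketched in a few lines: it compares a word’s mean cosine similarity to two attribute sets. The tiny 3-dimensional vectors below are invented for illustration; the real test uses pretrained embeddings such as GloVe and a permutation test for significance.

```python
# Minimal WEAT-style sketch: how strongly do target words associate
# with two attribute sets? Embeddings below are made-up toy vectors.
import numpy as np

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus
    mean similarity to attribute set B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

# Hypothetical embeddings (real ones are 50-300 dimensional).
emb = {
    "career": np.array([0.9, 0.1, 0.0]), "family": np.array([0.1, 0.9, 0.0]),
    "he":     np.array([0.8, 0.2, 0.1]), "she":    np.array([0.2, 0.8, 0.1]),
}
A, B = [emb["he"]], [emb["she"]]  # attribute sets: male vs. female terms

# Positive score: the word leans toward A; negative: toward B.
for word in ("career", "family"):
    print(word, round(association(emb[word], A, B), 3))
```

In these toy vectors, “career” associates with the male terms and “family” with the female terms, which is exactly the pattern Caliskan et al. measured in embeddings trained on ordinary web text.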

A meta point concerning these discussions is that of scale, or scope. Much previous research has focussed on the reactive and immediate nature of, for example, computer image recognition, with less concern given to the ability of technologies such as AI to shape the flows and dissemination of information; to shape our narratives about the world and its (and our) place in it. If we, as humans, are wired to access patterns in and of our own experiences, then any system that can access and synthesise vast amounts of information, contextualise that data, and make sense of it in an acceptable manner is bound, at the very least, to shape our worldviews. Applying the epistemology of second-order cybernetics to our analysis of technology underlines the relevance of narratives, a change of scope which is currently very underrepresented in discussions of fixing dataset bias or discriminatory techno-policing. Narratives allow us to explore the many questions that concern us and give us a sense of identification and belonging. They allow us to grasp the world, to process and interpret data, and I am choosing this terminology as an indicator that this affects even the most scientific insights, rendering even profound technological advances irrelevant if they cannot be embedded in a good story.

Data labelling and data curation disseminate the most intimate bits of information, yet somehow this development has generated a void of overarching narratives in some areas of the world. This void can, in almost all cases, be filled both by propagandist narratives generated by people (authors, journalists, filmmakers, agents of propaganda) and by AI systems that act as intermediaries between the media and the public, which might be the source of multiple societal intricacies. A prominent example of this imbalance can be seen in Cambridge Analytica’s manipulation of citizens’ data during the 2016 US presidential election.10 Reactive AI, which is largely responsible for the aforementioned exclusionary and alienating mechanisms resulting from biased datasets, has actively contributed to the lack of overarching narratives by shrouding us in a cloud of misinformed immediacy. This can further be weaponized to cast aspersions on the veracity of claims made by those who are politically opposed to those funding and utilising these large-scale data-processing infrastructures. Yet none of these narratives stem from the creation of a machine. AI does not create meaning or culture; it is merely a reflection of them. Whatever narratives emerge from an AI system will be a reflection of its users’ beliefs and biases and of the algorithms used to curate the data it processes and to interpret that data to produce its outputs—nothing more nor less!

9 Anthony G. Greenwald & Brian A. Nosek (2001). “Health of the Implicit Association Test at age 3.” Experimental Psychology, 48(2), 85–93.

Having said this, the AI I am writing this essay with kept asking me what narratives AI needs, or what the narratives which AI uses might look like. In our communal writing process—the AI suggesting, and me negating, that AI does or will need narratives, all while trying to reflect on what we can learn from AI functioning differently than anticipated—the text arrived at yet another question: how can we overcome the lure of representation, the attempt to impose psychological or physical models on experience, and how can we make sensible our pseudo-scientific technologies, which assume the existence of simulated realities and virtual actors? Using this text as a lead and asking the AI to flow, it stated that technology can solve social problems, but also create new ones. The inhuman feel of the AI gave me a visceral understanding of this duality, but, following the same line of thought touched upon earlier, I wondered how we can use technology to drastically change this dichotomy. The AI answered that, at the end of the day, we are the ones who decide, so let us decide now that we would rather focus on how we can use technology to integrate ourselves, not on how it denies us our being.

10 David R. Carroll (2021). “Cambridge Analytica.” In Research Handbook on Political Propaganda, 49–58.
