Author Archives: Lisa Ng

Thoughts on Algorithms of Oppression

There are a lot of thoughts swimming through my head after this reading. In one of my other classes, we talk about the different ways that technologies can perpetuate systems of oppression in a theoretical or anecdotal way, but this book takes it to another level by showing how Google actively perpetuates racist, sexist, and other oppressive systems. Reading about Dylann Roof was terrifying because of what happened. Considering that most of the people who built the internet, and who are continuously building the technological tools that everyone is either using or beginning to use, are white (and increasingly Asian) men, it is not unexpected that their biases are embedded in the technologies they construct. Thinking about Google as a tool of American imperialism was also something I had never considered before. When I consider the pervasiveness of American technological tools on a global scale, I now think of all the different ways that the -isms of American and Western cultures are being exported throughout the world.

Something else that was extremely interesting was learning about long tail keywords. Everything one types onto the internet, especially into a search bar, is collected as data. How does this work with search engines other than Google? In Noble’s work, she writes that search results for “Asian girls” and “Latina girls” still contained a lot of sexually suggestive or explicit content compared to “Black girls” after Google took some steps to reorganize the results. When I searched these terms on DuckDuckGo (the search engine I use) and Ecosia (another search engine, which uses the money it makes to plant trees), the results that came up were still sexualized images of women.

I went to high school with a lot of people who now work at different tech companies (Facebook, Google, Snapchat, etc.). They’re definitely the problematic Asians that Noble references here and there in the book. I’ve been thinking a lot about how to politicize my peers and how to get them to think critically about the tools they are building, because technology is never apolitical! It is also not possible to be apolitical if the tools being built have a political agenda attached to them. I don’t know. This is definitely something I need to think more about. How can we imagine a world past all these different systems of oppression if the people building and constructing them don’t even know that they are perpetuating these systems?

Thoughts on Automating Inequality

When reading the introduction to this excerpt, I was skeptical that processes created to make solving society’s problems more efficient could work. Specifically, I was surprised that an automated process to determine which children are most at risk for abuse could, for lack of a better word, exist. As Eubanks lays out so eloquently in her narratives, these issues require a solution beyond a technological one. Even if created with true equality and equity in mind, algorithms in social and public services provide a band-aid solution at most. It was also extremely disheartening to learn that clients could score as extremely vulnerable on the VI-SPDAT (Vulnerability Index – Service Prioritization Decision Assistance Tool), to the point where they would be ideal candidates for housing, yet require more social services than the government could provide in order to stay in that housing, given what landlords wanted out of tenants. I would think it makes sense to house the folks who need the fewest social services first, because they would need the least support to stay housed, meaning fewer people would return to the streets and re-enter the system. Eubanks writes, “But in the absence of sufficient public investment in building or repurposing housing, coordinated entry is a system for managing homelessness, not solving it” (109). People are cycled through the system, and because this information is shared with the LAPD, they are also cycled through the criminal justice system.

In thinking about these programs, I would like to discuss the idea of opting out. Those who are privileged enough not to need these programs are fortunate not to be tracked in the way these folks are. Opting out in general is only a viable option for those who do not depend on various technologies, whether it is the VI-SPDAT or something like Facebook – a tool many freelancers depend on to find events. How can we build technologies that assist people without tracking them? What can we do about the technologies that track us and make decisions that affect our lives in ways we are unaware of?

Habeas Data

Reading Habeas Data was the first time I ever thought about litigation regarding personal data. Prior to this reading, I also knew nothing about the security features of email or the role of encryption in the email system. I was fascinated to learn about Germany’s restrictions, following Nazism, on the type of data the government can collect. One could say that Germany was ahead of its time when it created its first data privacy act in the 1970s, far before personal computing became prevalent. The fact that Germany continuously updated the law through 2003 is a sign that it takes the development of technology seriously (although, now that 2003 was fifteen years ago, the law could use an update, since technology and data collection have changed drastically since then). The contrast between German and US data privacy laws is stark – even after all the court cases regarding personal data and search warrants were settled, the US still does not have a federal law restricting the type of data the government can collect on a person. This contrast reminds me of another scenario, with Twitter. On Twitter in the US, one can easily spread and access far-right conspiracy theories and the like. In Germany, that type of propaganda is not allowed, as one can see by logging onto the German version of Twitter rather than the US version. The main difference seems to be that Germany is aware of its dark history – the age of Nazism – and is doing its best to prevent history from repeating itself. Whether one argues that the US does not have the same history of violence, or that the entirety of US history is violence, the US government has shown no interest in halting the perpetuation of violent far-right rhetoric.

I was not surprised to read that the Supreme Court came back 9-0 in both the Riley and Wurie cases. I think most people would be surprised that the right-leaning justices voted for the right to privacy, but most Republicans tend to prefer smaller government, and therefore favor limiting its powers. When reading about these cases, I also did some reflecting on my relationship with my phone. People dump data about their entire day-to-day lives onto their phones without giving it a second thought. Furthermore, most of us use applications that automatically communicate with the cloud (via Google or Amazon). At this point, it is perhaps naïve to assume any sort of privacy exists in the US. And it is not even the data collection that is the most nefarious process on the internet – it is the personality profiling, the microtargeting, and the psychometrics developed to manipulate unassuming people into acting on someone else’s agenda.

Thoughts on Compression

Sterne’s “Compression: A Loose History” discusses the historical and theoretical contexts under which compression of thought and language, audio, and data has emerged. I am especially interested in the variations between high definition and low definition media, particularly when thinking about their potential to shape education and knowledge. In the discussion of the alternative to high definition files that need to be compressed – low definition ‘aesthetic’ files – I was surprised that the author made no mention of In Defense of the Poor Image, written by artist Hito Steyerl. Steyerl makes several points about the affordances of the poor image, one being that poor images are able to spread quickly and easily because their low resolution can accommodate a variety of infrastructures, especially in places without access to high-bandwidth connections. She writes, “The poor image is no longer about the real thing—the originary original. Instead, it is about its own real conditions of existence: about swarm circulation, digital dispersion, fractured and flexible temporalities. It is about defiance and appropriation just as it is about conformism and exploitation. In short: it is about reality.”

This brings us to the discussion of verisimilitude. In this context, it means the ability of technology to portray what we perceive to be reality, what we perceive to be true. Sterne introduces his work with the sentence, “this is the story of communication as being about the anxiety over the loss of meaning through a succession of technical forms. The assumption here is that progress in technology comes through its ability to produce verisimilitude” (32). However, it is important to think about whose truth technology is perpetuating. As we have discussed in previous classes, technology and data are not all-knowing, apolitical machines with the ability to be objective, because the biases of their creators are built into them. As we produce more and more media and construct more infrastructure to accommodate it, whose truths are we prioritizing?

Low definition images are powerful because they can spread more quickly, they can be created by anyone, and they are unusual to the vast majority of media consumers at this point. With “bootleg aesthetics,” Lucas Hilderbrand “finds these videos all the more affectively powerful because of their low definition” (34). I believe low definition images are more powerful than compression is, because of the context in which they exist in this day and age. When compressing a concept, an image, or an audio clip, there is information that has to be trimmed, and therefore lost, during compression and transmission. With low definition images, what the user captures is what is shared: the user captured their truth in creating that media. In terms of information compression, if the creator is not the compressor, an outside source is determining what is and is not important. How might their biases skew the information being transmitted in unintentional ways?
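The point about trimming can be made concrete with a toy example of my own (not from Sterne): a lossy “compressor” that quantizes 8-bit sample values down to 16 buckets. The compressor, not the creator, decides which distinctions survive, and the round trip shows that the discarded detail cannot be recovered.

```python
# Toy lossy compression: quantize 0-255 sample values into 16 buckets.
# Illustration only; real codecs are far more sophisticated.

def compress(samples, levels=16):
    """Map each 0-255 sample onto one of `levels` buckets (lossy)."""
    step = 256 // levels
    return [s // step for s in samples]

def decompress(codes, levels=16):
    """Reconstruct approximate samples from bucket indices."""
    step = 256 // levels
    return [c * step + step // 2 for c in codes]

original = [3, 100, 101, 200, 255]
restored = decompress(compress(original))
print(restored)  # approximations of the originals, not the originals
```

Note that 100 and 101 land in the same bucket, so after decompression they become indistinguishable: the compressor has silently decided that their difference was not important enough to keep.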

Side note: It might be interesting to explore the relationship between coding and cryptography, but I don’t know enough about cryptography to have a coherent statement or opinion about that relationship.