
Automated Inequality

The inequality discussed in this piece resembles that in “Weapons of Math Destruction”: both describe systems that build unsystematic categories and force people to fit into them. This reading points out a very important danger of using data and algorithms as tools to solve social issues: the algorithm, or the technology, dominates the whole process, and human beings have to fit the technology. Shouldn’t it be the other way around? Given the high failure rate of technology in solving social problems mentioned in the article, how is it possible that people still believe in technology as a “neutral” and “objective” tool that can reduce human error? The “automated” inequality and the quantified “weapons of math destruction” reveal the flaws of using data this way. When you need to deal with large numbers of people, you have to reduce them to some extent and sacrifice some of the individuality that is important in solving such problems. As mentioned in the infrastructure article, systems thinking, or relationality, is important in studying media infrastructure. It is also important in studying human beings, who are themselves unique, interrelated systems rather than aggregations of segregated and meaningless categories. The organic systems of human beings are cut into unrelated pieces, in which the people who interpret the data look for “correlations” that may or may not mean anything for solving the problems. In addition, many of the readings on data speak about the dangers and flaws of data and algorithms, and they point out that data has come to dominate human beings’ lives instead of assisting them. But how do we solve these problems? How do we implement data and algorithms to assist people with the various social issues we face today, given that it is unlikely we will drive data out of our lives when it is so prevalent?
How do we make data a useful tool rather than coding biases into it and using algorithms to sabotage marginalized people, sacrificing their most urgent needs in favor of those of the richer middle class? Using data for real “public good” is a complex problem requiring effort from many disciplines and social organizations.

Thoughts on Automating Inequality

When reading the introduction to this excerpt, I was skeptical that the processes created to make solving society’s problems more efficient could work. Specifically, I was surprised that an automated process to determine which children would be most at risk for abuse could, for lack of a better word, exist. As Eubanks laid out so eloquently in her narratives, these issues require a solution beyond a technological one. Even if created with true equality and equity in mind, algorithms in social and public services provide a band-aid solution at most. In addition, it was extremely disheartening to learn that clients could be rated extremely vulnerable by the VI-SPDAT (Vulnerability Index – Service Prioritization Decision Assistance Tool), to the point of being ideal candidates for housing, yet require so many social services that the government could not provide to stay in that housing, given what landlords wanted out of tenants. I would think it makes sense to house the folks who need the fewest social services first, because they would need the least support to stay housed, which would mean fewer people returning to the streets and re-entering the system. Eubanks writes, “But in the absence of sufficient public investment in building or repurposing housing, coordinated entry is a system for managing homelessness, not solving it” (109). People are cycled through the system, and because this information is shared with the LAPD, they are also cycled through the criminal justice system.
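To make the prioritization logic concrete: coordinated entry tools like the VI-SPDAT reduce a person’s situation to a single vulnerability score and route them to interventions by threshold. A minimal sketch of that kind of triage rule, with hypothetical thresholds (not the actual tool’s), might look like:

```python
def triage(vulnerability_score):
    """Route a client to a housing intervention based on a single score.

    Illustrative thresholds only -- the real VI-SPDAT scoring and
    cutoffs differ by version and by community.
    """
    if vulnerability_score >= 8:
        return "permanent supportive housing"
    if vulnerability_score >= 4:
        return "rapid re-housing"
    return "no housing intervention"
```

Seeing the rule written out makes the critique sharper: everything about a person that doesn’t survive the reduction to one number is invisible to the decision.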

In thinking about these programs, I would like to discuss the idea of opting out. Those who are privileged enough not to need these programs are fortunate not to be tracked the way these folks are. Opting out in general is only a viable option for those who do not depend on various technologies, whether it is the VI-SPDAT or something like Facebook – a tool many freelancers depend on to find events. How can we build technologies that assist people without tracking them? And what can we do about the technologies that track us and make decisions that affect our lives in ways of which we are unaware?

In the name of the automated algorithm

It is true that big data and artificial intelligence have been used by service providers to protect their customers: banks track customers’ transaction habits and detect anomalous transactions to prevent fraud, email platforms analyze message contents to separate spam from non-spam, and e-commerce firms profile buyers by tracking their purchase history and clicks to push product recommendations. These kinds of automated algorithms generally provide much help and convenience to customers and users, and at least do them no harm, although sometimes their results are inaccurate or unexplainable.
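As an illustration of the first example, fraud detection can be as simple as flagging a transaction whose amount deviates sharply from the customer’s history. This is a toy z-score sketch; real fraud systems use far richer features than amount alone:

```python
import statistics

def flag_anomaly(history, new_amount, threshold=3.0):
    """Return True if new_amount is more than `threshold` standard
    deviations from the mean of the customer's past amounts.

    A deliberately simple rule for illustration only.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: flag anything different.
        return new_amount != mean
    return abs(new_amount - mean) / stdev > threshold
```

Even this toy version shows why results can be “inaccurate or unexplainable” to the customer: a perfectly legitimate but unusual purchase gets flagged simply because it is statistically rare for that account.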

However, it is unacceptable that some corporations raise prices on their customers in the name of the results of an automated algorithm. A car insurer can find any excuse to raise the rate whenever the car owner makes a change. In my own case, my car insurance rate was raised without any incident having happened. The insurance firm’s online representative told me that the system said my new residence was in an accident-prone area, and that the higher rate was the result of the system’s algorithm for evaluating insurance. In fact, it is a better-maintained community. The insurer purportedly lifted the price in the name of automation and algorithms.
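The representative’s explanation suggests territory-based rating: the premium is scaled by the perceived risk of the insured’s area. A hypothetical sketch of such a rule follows; the formula and parameter names are purely illustrative, not any insurer’s actual method:

```python
def adjusted_premium(base_premium, area_accident_rate, baseline_rate=0.05):
    """Scale a premium by the ratio of the area's accident rate to a
    baseline rate, never discounting below the base premium.

    Hypothetical territory-based surcharge for illustration only.
    """
    factor = max(1.0, area_accident_rate / baseline_rate)
    return round(base_premium * factor, 2)
```

Note what the sketch cannot tell you: whether the “accident-prone” label for an area is accurate, current, or fairly derived – which is exactly the information the customer is denied.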

Automating Inequality

I find it sad that automated systems that are supposed to help the most vulnerable people in our society are often used to further discriminate against and disenfranchise these people. Thinking critically about the results of programs like Los Angeles’ VI-SPDAT and Allegheny County, Pennsylvania’s AFST helps identify the harmful assumptions at the foundation of these tools’ creation. They perpetuate the idea that poverty in the United States is the result of individuals’ inherent weakness or poor decisions, instead of the result of systemic legal, medical, gendered, racial, and educational inequalities that make it difficult for those who are already poor to experience improved circumstances.

Los Angeles’ housing match system has solved some problems, including getting some unhoused people into housing and making it easier for community organizations with similar missions to reach as many people as possible. These are great benefits, but there are large costs as well. The data collected from applicants can be kept for seven years and shared with 168 organizations, as well as several local and federal government entities. Applicants do not get to see what their information looks like before it is distributed, and the algorithmic score their data yields is not shared with them. The flow of information is one-way only. Because of this lack of transparency, it’s difficult to understand why some unhoused people are able to find homes with relative ease while others can apply several times with no success. On top of the amount of information required to apply, making applicants responsible for obtaining documentation such as birth certificates is rather short-sighted, considering that many people experiencing both chronic and crisis homelessness may lack the financial and/or technological resources to get the required documents. The author’s point in her introduction – that the sheer time it takes to navigate these systems is not something everyone can afford – is so important to keep in mind when reading these stories.

Data as Intangible Asset of the Public

In this book, examples are shown to demonstrate various types of privacy risks posed by technology. The first is the police accessing someone’s data through a list on their phone and making incriminating interpretations; the second is knowing a suspect’s potentially criminal behavior and accessing the suspect’s device; the third is accessing a suspect’s data while gaining access to other users’ data on the same technological service. A key question discussed in these examples is under what circumstances the government can access individuals’ data, to what extent, and with or without permission (such as a warrant). This question is sometimes taken for granted or oversimplified because, as is said in the article and previous readings, data is not as tangible and visible as other objects considered to touch on someone’s privacy. In my opinion, this is another reason why it is important to study and foreground the material aspect of data and the mechanisms of how data works. Otherwise, data will remain, in the minds of the laypeople who constitute the majority of the public, something that works mysteriously in the cloud, as promoted by big corporations. Being aware of the materiality of data and its prevalence in people’s everyday lives can help people realize its positive and negative impacts – some of which may not even be known yet. Only when the public better understand data and start using it to serve their lives can it really “serve the public good.” Otherwise it will be just another fancy tool manipulated by the rich and the powerful to exploit people.

Another issue highlighted in this article is actors infringing upon the privacy of the public by accessing data without consent. The information of the public is thus subject not only to risks posed by profit-seeking corporations in the private sector, but also to those posed by agencies and organizations in the public sector, such as the government. To what extent can the government represent “the public” and have the right to take what belongs to the public, however intangible, for its own purposes? In an age where data has become so closely intertwined with individuals, it is time to redefine what counts as an individual’s “possessions” and who may access them under what circumstances. The government is not only a guardian angel for the public; it may also violate people’s rights in ways that could never have been imagined. The nature of data as a new form of asset derived from human beings, and its potential misuse by the powerful, should be made known to ordinary people so that they can be more conscious about protecting themselves in ways they may never have imagined.

 

Habeas Data

Tinfoil hat time: The government’s lack of proactivity regarding laws that address the current and future concerns of digital life does not strike me as coincidental. I think there is, at least on some level, an intentionality to the logic that permits law enforcement license plate reader (LPR) systems to scan and keep location data on thousands of license plates that are not implicated or involved in crimes. The foundational documents of this country were written by people who could never have imagined email, or data centers, or Wikileaks.

While technology has advanced beyond what anybody could have imagined in even the 1980s, when most households didn’t own a computer, it seems especially troubling that the government has used these advancements to exponentially expand their abilities to monitor the populace, and has not acted like an institution that is supposed to exist within a framework of checks and balances. The combination of secrecy and incompetence exhibited by the government when trying to get information from Lavabit is especially troubling, and I can’t decide whether it’s a good or bad thing that they’re so bad at this stuff.

Other thoughts: I have a lot of different email addresses, all free. With the professional and academic addresses, I have no expectation of privacy and conduct myself accordingly. With the other addresses, most of which are through Google, I’ve been pretty lax about considering how my data is used. I have browser add-ons that disable ads, so I don’t even remember that I should be seeing targeted ads. The adage that Lavabit founder Ladar Levison cites (“If you’re not paying for the product, you are the product.”) makes complete sense, but is hard to keep at the forefront of my mind compared to the ease of using Gmail for business. Related: Yahoo has to pay $50 million for breaching mail users’ data.

More Facebook issues

Over the holiday weekend, news that Facebook had hired a PR firm to “make claims” about George Soros dropped.

FB went after Soros because he has ties to the Freedom from Facebook Foundation, which is trying to break up FB into its component parts.

Soros says that it’s a smear campaign, and is demanding a Congressional investigation.

COO Sheryl Sandberg claims to know nothing about this, and Mark Zuckerberg can’t be reached for comment.

An outgoing executive is likely to take the fall for this.

I don’t see how anyone could be surprised by this.

One thing that gets me about some people I know: they’ve quit FB and say they’re “free of it,” while continuing to use Instagram. When I (or others) point out that IG is owned by FB, people tend not to react well.

Habeas Data

Reading Habeas Data was the first time I ever thought about litigation regarding personal data. Prior to this reading, I also knew nothing about the security features of email or the role of encryption in the email system. I was extremely fascinated to learn about Germany’s restrictions, following Nazism, on the types of data the government can collect. One could say that Germany was ahead of its time when it created its first data privacy act in the 1970s, well before personal computing became prevalent. The fact that Germany continuously updated the law up until 2003 is a sign that it takes the development of technology seriously (although, now that 2003 was 15 years ago, the law could use an update, because technology and data collection have changed drastically since then). The contrast between German and US data privacy laws is stark – even after all the court cases regarding personal data and search warrants were settled, the US still does not have a federal law restricting the types of data the government can collect on a person. This contrast reminds me of another scenario, with Twitter. On Twitter in the US, one can easily spread and access far-right conspiracy theories and the like. In Germany, that type of propaganda is not allowed, as one can see by logging onto the German version of Twitter as opposed to the US version. It appears that the main difference is that Germany is aware of its dark history – the age of Nazism – and is doing its best to prevent history from repeating itself. Whether one argues that the US does not have the same history of violence, or that the entirety of US history is violence, the US government appears to have no interest in halting the perpetuation of violent far-right rhetoric.

I was not surprised to read that the Supreme Court came back 9–0 in both the Riley and Wurie cases. I think most people would be surprised that the right-leaning justices voted for the right to privacy, but most Republicans tend to prefer smaller government and therefore limits on its powers. When reading about these cases, I also did some reflecting on my relationship with my phone. People dump data about their entire day-to-day lives onto their phones without giving it a second thought. Furthermore, most of us use applications that automatically communicate with the cloud (via Google or Amazon). At this point, it is perhaps naïve to assume the existence of any sort of privacy in the US. And it is not even the data collection that is the most nefarious process on the internet – it is the personality profiling, the microtargeting, and the psychometrics developed to manipulate unassuming people into doing things for someone else’s agenda.

Tumblr has problems.

The Tumblr app has been removed from Apple’s App Store.

We have been focusing on Facebook, Amazon, and other entities, but we really haven’t discussed Tumblr.

Tumblr is a microblogging site. Many people use it to post fanfiction (An Archive of Our Own is another site for that), others use it to explore hobbies, or discuss politics. I have a Tumblr that consists mostly of bad jokes, things I’m interested in (mostly related to history, linguistics, and politics) and pictures I’ve taken at various museums I’ve visited.

However, Tumblr has always had a dark side. The alt-right has found a home there, for example. That isn’t what got Tumblr in trouble, though. Child pornography is. Apparently users have managed to post it on Tumblr despite Tumblr’s filters.

Tumblr is saying that it’s working on fixing this. We’ll see.

Surveillance, tech, and the limits of privacy

Farivar’s Habeas Data is an interesting read that discusses what happens at the intersection of technology, government, and privacy.

Perhaps the most salient point for me is that the law is just simply not keeping up with the changes in technology. I am not sure that it can, honestly. Cases take time to work their way through the legal system. Something that was an issue in, say, 2015, might be resolved by the time the case is heard in an appellate court in 2018.

The geography of the Appellate court system adds to this problem. In the United States, we have twelve appellate courts, and they frequently make decisions that contradict one another, which forces cases to the Supreme Court, which also adds time.

Part of the problem here is that it is sometimes difficult to view the technology through the lens of the relevant parts of the Constitution – say, the Fourth Amendment.

For instance, in the Riley case, the lawyers representing the police claimed that finding photos on a cell phone was just like finding those same photos in the defendant’s pocket, that they were in plain sight and, therefore, a warrant wasn’t necessary.

The Supreme Court said otherwise, and I’m inclined to agree. The phone is there, but what is on it isn’t obvious at all.

Of course, this is just the United States. Other countries have different issues.

One thing that struck me early in the piece was the conflict between Google and Germany over Google Street View. Here in the United States, we didn’t bat an eye when Google drove around recording our streets. Germany had a huge problem with it. Google eventually dropped the project in Germany.

The author states that Germans put up greater resistance to large scale data gathering because of their historical experience with the Nazi government and the Stasi in East Germany. I wonder if this feeling is not just limited to Germany, but common across Europe, which would lead to being more receptive to the Right to be Forgotten.

But, again, in the United States, we have different issues. For instance, the amount of data the Oakland Police Department collected just with their license plate photography program.

They weren’t just tracking suspects, or recent parolees, they were tracking everyone. Which leads us to the question, “Do we want our habits to be that well-known?”

I would say probably not.

Law and technology is a fascinating, scary place.