Turns out “Move Fast and Break Things” is not the same as “Bringing the World Closer Together”. Well, duh.

Just wanted to bring your attention to one of the articles the NYT published this week regarding the internal Facebook emails released by the UK Parliament.

Find the full article here: https://www.nytimes.com/2018/12/05/technology/facebook-emails-privacy-data.html

In addition to discussing workarounds for collecting data without notifying users, the emails show Facebook engaging in some very interesting business practices when it comes to outlasting the competition. In our class, we’ve spent a lot of time discussing the many ethical questions around Facebook’s privacy policies. Reading a bit more about the large-scale ways Facebook dominates the market added a perspective that has been partially absent from our conversations. In short, Facebook has been deciding how to treat other app start-ups based on whether they threaten its corner on the market. For example, one of the reasons the video app Vine was so successful was its link to Facebook: you’d sign up for Vine and it would suggest other Vine users to connect with based on your Facebook contacts. Upon realizing it was the fuel for another company’s success, Facebook not only restricted this connection but also released Instagram video (Facebook has owned Instagram since 2012). This example would suggest that Facebook wants to stay self-contained to maintain dominance, but that isn’t quite true. Ultimately, Facebook decided to give other apps free rein on the Facebook platform, as long as those apps send the data they collect back to Facebook.

So where does this leave us? In the social media space, Facebook users are merely a product sold to companies seeking data. But other, smaller companies are also part of that product and are also being sold by FB. Thinking more about Facebook’s aggressive business practices, I find that it has become so unimaginably powerful that, in a world divided between businesses and people, apps, companies, and people alike are all just ant-sized products at Facebook’s scale. I’m left wondering what this means for Social Media Economics (if that isn’t already a term, it should be). Is this a whole new layer of our “consumer-based” system? Or does Facebook, perhaps, take the place of a meta-power that already existed?

Automated Inequality

The inequality discussed in this piece is like that in “Weapons of Math Destruction”: both describe systems that sort people into arbitrary categories. This reading points out a very important danger of using data and algorithms as tools to solve social issues: the algorithm, or the technology, comes to dominate the whole process, and human beings have to fit the technology. Shouldn’t it be the other way around? Given the high failure rate of technology in solving social problems mentioned in the article, how is it possible that people still believe in technology as a “neutral” and “objective” tool that can reduce human error? The “automated” inequality and the quantified “weapons of math destruction” have revealed the flaws of using data this way. When you need to deal with a large number of people, you have to reduce them to a certain extent and sacrifice some of the individuality that is essential to solving such problems. As the infrastructure article mentions, systems thinking, or relationality, is important in studying media infrastructure. It is just as important in studying human beings, who are themselves unique, interrelated systems rather than aggregations of segregated, meaningless categories. The organic systems of human beings are cut into unrelated pieces, in which the people interpreting the data look for “correlations” that may or may not mean anything for solving the problems.

In addition, many of the readings on data speak about the dangers and flaws of data and algorithms, and they point out that data has come to dominate human lives instead of assisting them. But how do we solve these problems? How do we implement data and algorithms to help people address the various social issues we face today, given that it is unlikely we will drive data out of our lives? How do we make data a useful tool rather than coding biases into it and using algorithms to sabotage marginalized people, sacrificing their most urgent needs in favor of those of the richer middle class? Using data for real “public good” is a complex problem requiring effort from many disciplines and social organizations.
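
A toy illustration of those meaningless “correlations”: slice people into enough arbitrary categories and pure noise will hand you impressive-looking predictors. The sketch below is synthetic by construction (every feature and the outcome are random numbers; nothing here models any real system), just to show how pattern-hunting across fragmented categories manufactures findings.

```python
# Spurious correlation demo: with enough arbitrary categories,
# random noise alone yields "significant-looking" correlations.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_categories = 500, 200

# Synthetic data: every feature and the outcome are pure noise.
features = rng.normal(size=(n_people, n_categories))
outcome = rng.normal(size=n_people)

# Correlate each category with the outcome.
corrs = np.array([np.corrcoef(features[:, j], outcome)[0, 1]
                  for j in range(n_categories)])

best = np.argmax(np.abs(corrs))
print(f"strongest 'predictor': category {best}, r = {corrs[best]:.2f}")
print(f"categories with |r| > 0.1: {(np.abs(corrs) > 0.1).sum()} of {n_categories}")
```

With 200 categories and only 500 people, several categories clear |r| > 0.1 by chance alone; an analyst who goes looking for structure in the fragments will always find some.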

Automating Inequality

3. Homelessness on Skid Row
• coordinated entry system: prioritization and “housing first” (vs. “housing readiness” 92–93)
• VI-SPDAT: Vulnerability Index—Service Prioritization Decision Assistance Tool
• data used for what? (114)
• community policing (118–119)
• coordinated entry is both a system for managing housing and a system for surveillance. (121)
4. Allegheny Algorithm
• “predictive risk models”
• AFST “training” the intake workers (142)
• proxies for child abuse: community re-referral and child placement (see the sketch after this list)
• outcome variables, predictive variables, and validation data (143–145) How do these design flaws lead to limited accuracy?
• referral bias (153, 154)
• AFST best-case scenario (171)
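
A minimal sketch of how a predictive risk model of this general shape can inherit referral bias, as flagged above. This is emphatically not the actual AFST: the features, the data, and the scikit-learn pipeline are all invented for illustration. The point is only that because the label is a proxy (re-referral to the hotline) rather than maltreatment itself, whatever bias shapes who gets reported flows directly into the “risk” score.

```python
# Hypothetical sketch of a predictive risk model (NOT the real AFST).
# Key point: the label is a proxy (re-referral), not maltreatment itself,
# so referral bias in the label becomes "risk" in the score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Made-up predictive variables drawn from public-service records.
uses_public_benefits = rng.integers(0, 2, n)   # crude proxy for poverty
prior_system_contact = rng.integers(0, 2, n)
X = np.column_stack([uses_public_benefits, prior_system_contact])

# Proxy outcome: re-referral to the hotline. Suppose neighbors and
# mandated reporters call more often about poor families, independent
# of actual maltreatment -- that is referral bias baked into the label.
p_rereferral = 0.05 + 0.15 * uses_public_benefits
y = rng.random(n) < p_rereferral

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

print("mean 'risk' score, families on benefits:    ",
      scores[uses_public_benefits == 1].mean().round(3))
print("mean 'risk' score, families not on benefits:",
      scores[uses_public_benefits == 0].mean().round(3))
```

And because the validation data is drawn from the same proxy, checking the model against it (the design question at 143–145) cannot reveal the problem.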

Thoughts on Automating Inequality

When reading the introduction to this excerpt, I was skeptical that the processes created to solve society’s problems more efficiently could work. Specifically, I was surprised that an automated process to determine which children are most at risk for abuse could, for lack of a better word, exist. As Eubanks lays out so eloquently in her narratives, these issues require a solution beyond a technological one. Even if created with true equality and equity in mind, algorithms in social and public services provide a band-aid solution at most. In addition, it was extremely disheartening to learn that clients could score as so vulnerable on the VI-SPDAT (Vulnerability Index – Service Prioritization Decision Assistance Tool) that they would be ideal candidates for housing, yet require more social services to stay in that housing than the government could provide, given what landlords wanted out of tenants. I would think it makes sense to house the folks who need the least social services first, since they would need the least support to stay housed, meaning fewer people returning to the streets and re-entering the system. Eubanks writes, “But in the absence of sufficient public investment in building or repurposing housing, coordinated entry is a system for managing homelessness, not solving it” (109). People are cycled through the system, and because this information is shared with the LAPD, they are also cycled through the criminal justice system.

In thinking about these programs, I would like to discuss the idea of opting out. Those privileged enough not to need these programs are fortunate not to be tracked the way these folks are. Opting out in general is only a viable option for those who do not depend on a given technology, whether it is the VI-SPDAT or something like Facebook, a tool many freelancers depend on to find events. How can we build technologies that assist people without tracking them? And what can we do about the technologies that track us and make decisions affecting our lives in ways we are unaware of?

In the Name of the Automated Algorithm

It is true that big data and artificial intelligence are used by service providers to protect their customers: banks track customers’ transaction habits and detect anomalous transactions to prevent fraud, email platforms analyze message contents to classify them as spam or not spam, and e-commerce firms profile buyers by tracking their purchase history and the goods they click on in order to push recommendations. These kinds of automated algorithms generally provide real help and convenience to customers and users, or at least do them no harm, even if the results are sometimes inaccurate or unexplainable.
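
To make the first example concrete, here is a minimal anomaly-detection sketch for transactions. It is a toy z-score rule with an assumed threshold, nothing like what a real bank runs: a charge is flagged when it sits several standard deviations away from the customer’s own spending history.

```python
# Minimal transaction anomaly check: flag amounts far outside a
# customer's own history (a toy z-score rule, not a production system).
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Flag `amount` if it is more than `threshold` standard
    deviations from the mean of the customer's past transactions."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past = [42.10, 18.75, 55.00, 23.40, 61.20, 30.05]
print(is_anomalous(past, 49.99))    # False: within normal range
print(is_anomalous(past, 2500.00))  # True: flagged for review
```

Even in this toy, the threshold of 3.0 is a guess: tighten it and honest large purchases get flagged, loosen it and fraud slips through, which is exactly the “sometimes inaccurate” behavior users see.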

However, it is unacceptable when a company raises its prices to customers in the name of an automated algorithm. A car insurer can find any excuse to raise the rate when the car owner makes any change. This is a real case: my car insurance rate was raised without any incident having occurred. The insurance firm’s online representative told me the system said my new residence was in an accident-prone area, and that the higher rate was the output of the system’s algorithm for evaluating insurance. In fact, it is a better-maintained community. The insurer raised the price purportedly in the name of automation and algorithms.
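
The mechanism the representative described can be as crude as a lookup table. The sketch below is entirely hypothetical (invented ZIP codes and multipliers, not any insurer’s real model); it shows how a premium jumps on an address change with no incident, driven by an opaque area risk factor the customer cannot audit.

```python
# Hypothetical area-based premium adjustment (invented numbers):
# the rate changes because a table says the new ZIP is "riskier",
# not because anything about the driver changed.
AREA_RISK_FACTOR = {          # made-up accident-rate multipliers
    "10001": 1.00,
    "10002": 1.35,            # labeled "accident prone" in the table
}

def monthly_premium(base_rate, zip_code):
    return round(base_rate * AREA_RISK_FACTOR.get(zip_code, 1.0), 2)

base = 120.00
print(monthly_premium(base, "10001"))  # 120.0  before moving
print(monthly_premium(base, "10002"))  # 162.0  after moving, no incident
```

Whether the table reflects current accident statistics or stale data, the policyholder only ever hears that “the system” set the rate.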

Automating Inequality

I find it sad that automated systems that are supposed to help the most vulnerable people in our society are often used to further discriminate against and disenfranchise these people. Thinking critically about the results of programs like Los Angeles’ VI-SPDAT and Allegheny County, Pennsylvania’s AFST helps identify the harmful assumptions at the foundation of these tools’ creation. They perpetuate the idea that poverty in the United States is the result of individuals’ inherent weakness or poor decisions, instead of the result of systemic legal, medical, gendered, racial, and educational inequalities that make it difficult for those who are already poor to improve their circumstances.

Los Angeles’ housing match system has solved some problems, including getting some unhoused people into housing and making it easier for community organizations with similar missions to reach as many people as possible. These are great benefits, but there are large costs as well. The data collected from applicants can be kept for seven years and shared with 168 organizations, as well as several local and federal government entities. Applicants do not get to see what their information looks like before it is distributed, and the algorithmic score their data yields is not shared with them. The flow of information is one-way only. Because of this lack of transparency, it’s difficult to understand why some unhoused people find homes with relative ease while others apply several times with no success. Beyond the sheer amount of information required to apply, making applicants responsible for obtaining documentation such as birth certificates is rather short-sighted, considering that many unhoused people, whether chronically or in crisis, lack the financial and/or technological resources to get the required documents. The author’s point in her introduction, that not everyone can afford the sheer time it takes to navigate these systems, is important to keep in mind when reading these stories.

Data as Intangible Asset of the Public

In this book, examples demonstrate various types of risk to privacy posed by technology. In the first, police access someone’s data through a list on a phone and make incriminating interpretations; in the second, they know of a suspect’s potentially criminal behavior and access the suspect’s devices; in the third, they access a suspect’s data while also gaining access to other users’ data on the same service. A key question running through these examples is under what circumstances the government can access an individual’s data, to what extent, and with or without permission (such as a warrant). This question is sometimes taken for granted or oversimplified because, as this article and previous readings note, data is not as tangible and visible as other objects we consider connected to someone’s privacy. In my opinion, this is another reason it is important to study and foreground the material aspects of data and the mechanisms of how data works. Otherwise, data will remain, in the minds of the laypeople who make up the majority of the public, something that works mysteriously in the clouds, as big corporations would have it. Being aware of the materiality of data and its prevalence in everyday life can help people recognize its positive and negative impacts, some of which may not even be known yet. Only when the public understands data better and starts using it to serve their lives can it really “serve the public good”. Otherwise it will be just another fancy tool manipulated by the rich and the powerful to exploit people.

Another issue highlighted in this article is actors infringing on the public’s privacy by accessing data without consent. The public’s information is thus subject not only to risks posed by profit-seeking corporations in the private sector, but also to risks posed by agencies and organizations in the public sector, such as the government. To what extent can the government represent “the public” and claim the right to take what belongs to the public, however intangible it is, for its own purposes? In an age when data has become so closely intertwined with individuals, it is time to redefine what counts as an individual’s “possessions” and who may access them under what circumstances. The government is not only a guardian angel for the public; it may also violate people’s rights in ways that could never have been imagined. Ordinary people should be made aware of the nature of data as a new form of asset derived from human beings, and of its potential misuse by the powerful, so that they can be more conscious about protecting themselves in ways they may never have imagined.

Habeas Data

Introduction
• d-order (18 U.S.C. § 2703(d)): warrantless demand to a provider for who, when, and where information (metadata).
• third-party doctrine: individuals “relinquish reasonable expectation of privacy when they transact via a third party”
• license plate readers (LPRs)
• world’s first data protection law, 1970, Germany. requires “consent” to collect/use personal data.
• habeas corpus -> habeas data (xvi)
Ch. 6: email
US v. Warshak: government must have a warrant before ISPs turn over email content
• 1986 Electronic Communications Privacy Act (ECPA). two outdated aspects: distinguished between “remote computing” and “electronic communication” services.
• pen/trap orders: nominally for routing metadata (numbers dialed, to/from), stretched in practice toward electronic content
• what was Levison’s primary concern, leading him to refuse to comply with order giving TLS keys?
• what are some reasons tech companies like Google and Facebook decided together to demand ECPA warrants from the government?
• how has “the plummeting costs of storage . . . flipped the default understanding of how surveillance threatens privacy?” (143) (see the back-of-envelope sketch after these notes)
Ch. 9: phone searches
• “search incident to arrest” is exception to warrant rule.
Smallwood v. Florida: cannot search cell phone without warrant
Riley v. California / US v. Wurie: argument was that “digital was different” (206)
• fingerprints vs. PIN codes: Fourth and Fifth amendments
• foregone conclusion exception to Fifth amendment
• lawful hacking and NITs (216)
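
On the Ch. 6 storage question (143), a back-of-envelope calculation suggests why “keep everything” became the default. All the numbers below are assumptions (record size, one ping per minute, rough commodity object-storage pricing), but even generous versions land in the same place: retention is nearly free at scale.

```python
# Back-of-envelope: why storing everyone's metadata is cheap enough
# that "keep it all" became the default. All numbers are assumptions.
bytes_per_record = 100          # one location/metadata ping (assumed)
records_per_day = 24 * 60       # one per minute
people = 1_000_000

gb = bytes_per_record * records_per_day * 365 * people / 1e9
cost_per_gb_month = 0.02        # rough commodity object-storage price
print(f"{gb:,.0f} GB/year for a million people")
print(f"~${gb * cost_per_gb_month * 12:,.0f}/year to retain it all")
```

At these assumed prices, a minute-by-minute trail for a million people costs on the order of ten thousand dollars a year to keep; nothing in the economics forces anyone to delete.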

Habeas Data

Tinfoil hat time: the government’s lack of proactivity regarding laws that address the current and future concerns of digital life does not strike me as coincidental. I think there is, at least on some level, an intentionality to the logic that permits law enforcement LPR systems to scan and keep location data on thousands of license plates that are not implicated in any crime. The foundational documents of this country were written by people who could never have imagined email, or data centers, or WikiLeaks.

While technology has advanced beyond what anybody could have imagined in even the 1980s, when most households didn’t own a computer, it seems especially troubling that the government has used these advancements to exponentially expand its ability to monitor the populace, and has not acted like an institution that is supposed to exist within a framework of checks and balances. The combination of secrecy and incompetence the government exhibited when trying to get information from Lavabit is especially troubling, and I can’t decide whether it’s a good or bad thing that they’re so bad at this stuff.

Other thoughts: I have a lot of different email addresses, all free. With the professional and academic addresses, I have no expectation of privacy and conduct myself accordingly. With the other addresses, most of which are through Google, I’ve been pretty lax about considering how my data is used. I have browser add-ons that disable ads, so I don’t even remember that I should be seeing targeted ads. The adage Lavabit founder Ladar Levison cites (“If you’re not paying for the product, you are the product.”) makes complete sense, but it’s hard to keep at the forefront of my mind compared to the ease of using Gmail for business. Related: Yahoo has agreed to pay $50 million over the breach of its mail users’ data.