So, this is happening.

Cold cases solved via online DNA profiles.

We talked about this sort of thing earlier in the term, though, perhaps, not in this way.

In just a few years, the DNA of every white person whose family is from Northern Europe will be identifiable in GEDMatch’s database.

I guess that’s great for people who want to find family members, but it means our DNA isn’t really private. That’s an issue, and it can go down a bunch of different rabbit holes, most of them unpleasant.

Transparency of Algorithms

In Weapons of Math Destruction, one of the issues discussed is the transparency of algorithms, especially those deployed to “measure” human beings, which have wreaked havoc on them. This reminds me of a vivid example I saw on the New York subway: an advertisement for Seamless, the online food-ordering service. In this amusing campaign, several pictures carry captions characterizing New York neighborhoods based on the data Seamless collected through its service and its interpretation of the analysis. For example, one picture says “The most tender neighborhood: Fordham, Bronx. Based on the number of orders of chicken tenders.” In it, a macho guy holds a chicken tender in one hand and a cute little kitten in the other.

Obviously, this interpretation of the analyzed data is entirely commercial. It plays on the word “tender” for humorous effect, so that viewers remember the service through the deliberately absurd connection drawn between algorithmically analyzed data and the product. The company is transparent about how data and algorithms are used to draw its conclusions, conclusions that may impose a stereotype on a neighborhood; people who disagree with a conclusion can at least see how it was reached. In the same series, Chelsea is named “the most homesick neighborhood” based on orders of a home-style dish, and another neighborhood is named “the neighborhood with the most hot yoga” for having the most orders of kombucha. Every one of those labels could affect the perceptions and images of these neighborhoods. If such perceptions and images have detrimental effects on the people living there, as the algorithms did to the math teachers labeled “incompetent” in the elementary school mentioned in the reading, then the methods by which those images and perceptions were created are of major significance: they are the keys to solving issues of injustice and inequality.
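The computation behind these labels is presumably nothing more than a count per neighborhood followed by picking the maximum. Here is a minimal Python sketch with invented order data (Seamless’s real pipeline is, of course, not public):

```python
from collections import Counter

# Invented orders for illustration; a real feed would have millions of rows.
orders = [
    {"neighborhood": "Fordham", "item": "chicken tenders"},
    {"neighborhood": "Fordham", "item": "chicken tenders"},
    {"neighborhood": "Chelsea", "item": "kombucha"},
    {"neighborhood": "Fordham", "item": "pizza"},
]

def most_of(item):
    # Count orders of `item` per neighborhood and return the leader.
    counts = Counter(o["neighborhood"] for o in orders if o["item"] == item)
    return counts.most_common(1)[0]

print(most_of("chicken tenders"))  # ('Fordham', 2) -> "the most tender neighborhood"
```

The point the ad quietly makes: a raw argmax over order counts says nothing about why a neighborhood orders what it does, yet the label sticks.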

 

However, the transparency of algorithms is still at the mercy of big corporations, which keep them top secret in spite of the harm they can cause to individuals subjected to unfair “measuring.” Naming or categorizing individuals is high-risk, because such names and categories carry countless implications that can change a person’s life tremendously. A socially responsible approach to algorithms should be adopted by corporations, the government, and individuals to make sure we are not hurt by what we created for a better life in the first place.

 

Weapons of Math Destruction

Introduction
• “Weapons of Math Destruction” WMDs
• “value-added modeling” algorithm and feedback loops
• “statistical systems require feedback”
• “Ill-conceived mathematical models now micromanage the economy”
• difference between school district WMDs vs. business WMDs?
On the Job
• “clopenings” and scheduling software
• “operations research” (OR) and “Just in Time” manufacturing.
• Cataphora’s idea generation software
• sociometric badges
• Simpson’s paradox and the essential randomness of value-added models (see the numeric sketch after this outline)
Civic Life
• Facebook’s “voter megaphone” experiment
• Facebook’s “news from friends” experiment
• Facebook’s mood experiment
• focus groups -> direct-mail campaigns -> microtargeting
• proxies for data, profiling buckets. “shift from region to individual”
• television’s move to personalized advertising
Conclusion
• Bottom lines vs. by-products of fairness
• Hippocratic oath for data scientists?
• what would a regulatory system for WMDs look like?
• “mathematical models [are] the engines of the digital economy”
• (how) can Big Data algorithms be used for positive, democratic reasons?
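The Simpson’s-paradox bullet above is easy to see with numbers. Here is a minimal Python illustration with made-up pass rates (not data from the book): teacher A outperforms teacher B within both student subgroups, yet looks worse overall because of how students are distributed across the subgroups.

```python
# (passed, total) per subgroup; the numbers are invented for illustration.
groups = {
    "A": {"low_baseline": (18, 30), "high_baseline": (9, 10)},
    "B": {"low_baseline": (5, 10), "high_baseline": (24, 30)},
}

for teacher, subgroups in groups.items():
    for name, (passed, total) in subgroups.items():
        print(f"Teacher {teacher}, {name}: {passed / total:.0%}")
    p = sum(p for p, _ in subgroups.values())
    t = sum(t for _, t in subgroups.values())
    print(f"Teacher {teacher}, overall: {p / t:.0%}")

# A wins both subgroups (60% > 50%, 90% > 80%) but loses overall
# (27/40 = 67.5% vs. 29/40 = 72.5%), because A teaches mostly
# low-baseline students. A value-added score that ignores the
# subgroup mix can rank the better teacher lower.
```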

Post Re: Cathy O’Neil

Before I begin my comments on WMDs, I would like to share with you that on my way home from class last week, I passed a sign outside of a Bank of America advertising their mobile “assistant”.  Her name was Erica.

O’Neil, in her conclusion, says that “big data processes codify the past.  They do not invent the future.”  This neatly sums up the arguments she’s made, and the examples are clear.  In schools, the “codifying of the past” is done wildly inaccurately: the performance indicators of teachers simply do not measure what they are meant to.  This is the first type of problem introduced by WMDs.  The response to the inaccuracies is not surprising.  In the name of ease, of streamlining, or most likely of cost minimization, teachers are held to standards that carry an inherent contradiction.  How can a school measure the value added by a teacher of underperformers with the same algorithm that measures teachers of overachievers?  The outcomes are not important here, only the seemingly priceless impact of essentially digitizing employee review.  While pretending that taking humans out of the judgment process will level the playing field, it actually codifies the human error.

The example of teacher evaluation is the least threatening of those given by O’Neil in the assigned reading.  Worse is the outright and blatant codification of existing systems and structures.  Where value-added educator evaluation is an original model of measurement with new flaws, the use of WMDs in the financial industry codifies unoriginal, existing models that already have unfairness baked deep within.  By using existing data, choices are made about the value of individuals without consideration of the data that has not been collected, like using a zip code as a weapon despite an unmeasured propensity toward frugality.  This arguably more dangerous form of WMD highlights O’Neil’s point about “codifying the past.”

Reading these chapters, I thought about our prior conversations about digitization and datafication.  The data had already been collected; vast swaths of information exist about individual insurance risk, policing patterns, or political motivations.  The use of WMDs seems to me a type of digitization of our existing social structures and patterns.  This invites a new perspective: why are we looking to data systems to fix the world when we cannot even create data systems that properly express the world as it is?  O’Neil’s answer is this: the mathematical tools discussed can be used for good or for evil, for equity or inequality, to codify or to “create” our society.  It is the human component that decides how to use these tools.  Unfortunately, it appears that the same players involved in codifying, datafying, and digitizing our reality have very little interest in the human component at all, likely underestimating or even devaluing its role.

A new word went viral among Chinese netizens.

I have had mixed feelings while reading the book, Weapons of Math Destruction.

We get a great deal of convenience from the abundant services that feed us information of all kinds: news notifications, shopping recommendations, music suggestions, even ads sometimes. We can catch up on the basic information of daily life during the morning subway ride to the office, without needing to subscribe to newspapers and magazines or to hunt for sales events. The world seems to be approaching perfection with the coming of the Information Age and AI.

However, the book lists many cases of WMDs, revealing their negative results from the perspective of the downside. I have grown wary of the horrible consequences of the unchecked development, evolution, and application of WMDs in various areas, including education, finance, policing, etc.

This reminds me of a word, “melon-eating masses,” newly coined by Chinese netizens, which has gone viral in China over the past two years. There are many versions of the word’s origin. The major one comes from an elderly man interviewed by a reporter. In the interview, he said, “I know nothing about it; I was just eating watermelon on the roadside.” Since then, Chinese internet users have used it to describe a massive group of passive onlookers at a major incident or event. In my opinion, the fired teacher, a victim of a WMD, is a member of the “melon-eating masses,” for she couldn’t figure out why she got such a low score as to be fired. The single mother, who can no longer arrange her child care after the introduction of a precise algorithm to calculate job time, could also be a member of the “melon-eating masses”: she might only be able to accept the “truth” that technological advances improve the efficiency of work, without doubting the fairness of the algorithm. The designers and developers of the computing models, and the privileged politicians, could be members of the “melon-eating masses” too: as data collection expands in scale, as algorithms deepen in complexity, and as neural networks grow more entangled, the plans they plotted slip out of their own control. This new word created on the Chinese internet reflects a social phenomenon: the masses are somewhat desperate about their poor access to real information and are looking to their governors to regulate and rule data usage, business modeling, and information feeding.

Good to see that yesterday it was reported that Facebook took down hundreds of pages and accounts that were spreading false or misleading political content ahead of the midterm elections.

https://www.wsj.com/articles/facebook-takes-down-hundreds-of-u-s-pages-it-said-spread-misinformation-1539289601

Selective Exposure & The Uncanny Valley

This age of media is shaping virtually every choice we make. Most of us don’t go a few hours without connectivity to the Internet, so our constant exposure influences our lives tremendously, sometimes in ways that we don’t consciously realize. The focus on selective search algorithms and selective exposure in the readings led me to think about these things in a little more depth. I found myself forming big questions like:

  • What are the long-term consequences of selective exposure from media to society as a whole?
  • How can we begin to consistently be conscious of the effects that selective exposure has on us?
  • Is our selective exposure to media creating a myopic view of the world?

As a whole, to me, these dynamics look like darkness. We are blind to how and why certain ads and posts are made available to us. We just see and experience them according to what we’ve clicked on before and what we click on next. That’s all we really know, and that’s the scary part. Yes, we are alone in the view our browsers assemble for us. But I don’t believe we are truly alone in our filter bubble, as the author claims; I think just the opposite. We are constantly interacting with each other, and though our online presence is influenced solely by what we do, our cognitions are influenced by everything we come into contact with. The invisibility of, and our unconscious participation in, this bubble is what I find striking. The instantaneous nature and ease that Google gives us is why I question the state of our cognitions. We can’t see how or why things are shown or answered in a Google search. Yet we usually find what we are looking for so easily that we don’t care about the how or why, because we are too busy.
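To make that feedback loop concrete, here is a toy Python simulation of a click-driven filter (my own construction, not an example from Pariser): each click makes a topic more likely to be shown again, so the feed narrows on its own, with no one deciding it should.

```python
import random
from collections import Counter

TOPICS = ["politics", "sports", "food", "science", "travel"]

def pick(weights):
    # Sample a topic in proportion to its accumulated click weight.
    topics, counts = zip(*weights.items())
    return random.choices(topics, weights=counts, k=1)[0]

weights = Counter({t: 1 for t in TOPICS})  # start with a uniform feed
for _ in range(500):
    shown = pick(weights)
    if random.random() < 0.5:   # the user clicks about half of what is shown
        weights[shown] += 1     # ...and the filter doubles down on that topic

print(weights.most_common())    # typically one or two topics dominate the feed
```

The rich-get-richer dynamic is the whole story: nothing here models what the user actually values, only what they happened to click first.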

For the most part, the content we see is attitude-consistent rather than counter-attitudinal, and it’s important for us, as users and members of society, to experience both. Selective exposure poses problems for a democratic system because it inhibits opinion formation that builds on diverse input. It can be viewed as an attempt to reduce cognitive load, given the likelihood that users will avoid dissonant, effortful, inconsistent, and unpleasant content. I believe counter-attitudinal content is crucial to allowing our current opinions and attitudes to be strengthened, weakened, or made moot.

It’s great that we see ads and content that we prefer. Yet we need to remember that the things we don’t prefer let us strengthen our opinions with good, healthy counteraction. How much are we resting on the ease of the selections and decisions that algorithms make for us? Since we cannot see what is being filtered, tailored, or stored about our personal browsing, how can we keep up with, or even choose, what we see and experience? Where do we draw the line in how much we let the Internet do for us? What if we were all stripped of this luxury for a day, a week, or even forever? What effects would that have on our ability to think and do for ourselves?

I realize selective exposure is a positive way for many companies, products, politicians, etc., to reach a specific audience and generate revenue, attention, and exposure. However, as Pariser points out, too much of a good thing can cause real problems. And given the invisibility, how do we know how much we are receiving on a daily basis? And is it good for us? Our doppelganger selves reflected in our media are a lot like, but not exactly, ourselves, and there are some important things that are lost in the gap between data and reality. Do we look online more and more each day and wonder how the Internet knows so much about who we are? Is what we see so accurate that we begin to have negative impressions? Are we stuck in a place we want to be, or don’t want to be? I believe such questions will become more and more vital to the direction of our media-centered society.

Transparency

Apparently it’s not a thing in the Big Data universe. Everything from how data is collected, to who sees it, to how it is processed and analyzed, is hidden from view.

As O’Neil points out in Weapons of Math Destruction, this is just part of the problem, but it comes back again and again.

The lack of transparency prevents any real analysis of the effectiveness of the various WMDs. Not only do we not know what data is collected, we don’t know how it is measured.

So, if the WMD is inaccurate, we really don’t have any recourse. We can protest it, but, more often than not, the powers that be will say, “This is what the data show.” They accept it as correct even though they don’t know what it’s doing.

The opaqueness of the process also prevents correction. These are closed systems; they don’t change until the coders decide they need to. And the coders may be resistant to change: after all, they came up with the data analysis to begin with, so they might think they got it right and resist evidence to the contrary.

I’m not saying that other issues aren’t important (they absolutely are), but the lack of transparency just gets to me every time.

Amazon’s Sexist AI and Alexa Offering Therapy?

“It’s Not an Intruder. It’s Alexa Asserting Her Independence.
Amazon’s Echo device and Alphabet Inc.’s Google Home can handle a growing array of tasks—they are also freaking people out.”

http://archive.is/8JelG


Amazon Ditches AI Recruiting Tool That ‘Didn’t Like Women’

https://www.thedailybeast.com/amazon-ditches-ai-recruiting-tool-that-didnt-like-women