Suicide Belt [...]

A spike of suicides concentrated in a set of western states has policy makers wondering what’s going on. The area, which stretches from Idaho down through Utah to Arizona and New Mexico, has been dubbed the Suicide Belt. No single cause has been determined, though the drivers are likely related to those behind Rising Suicides in the U.S. as a whole. (Link)

One theory is that the belt is demographic, associated with the middle-aged single white males who commit suicide at higher rates. (Link)

Another theory is that the lack of social structures for single men in the West contributes. See the Atlantic. (Link)


For more on the demographic link, see the Male White West.

Wikity users can copy this article to their own site for editing, annotation, or safekeeping. If you like this article, please help us out by copying and hosting it.

Destination site (your site)
Posted on Categories Uncategorized

Opioids, Alcohol, Suicide [...]

In contrast to every other major demographic, death rates for middle-aged whites are rising, and rising fast. While part of this is attributable to increasing suicide rates, stunning new research indicates much of the increase is due to drug and alcohol abuse. The effect is centered in the poorest populations and seems to be related to increased levels of pain. (nyt)


Analyzing health and mortality data from the Centers for Disease Control and Prevention and from other sources, they concluded that rising annual death rates among this group are being driven not by the big killers like heart disease and diabetes but by an epidemic of suicides and afflictions stemming from substance abuse: alcoholic liver disease and overdoses of heroin and prescription opioids. (nyt)

The researchers note that the magnitude of the increase is almost unprecedented: “the mortality rate for whites 45 to 54 years old with no more than a high school education increased by 134 deaths per 100,000 people from 1999 to 2014.”

The only precedent for this rapid a rise in death rates in a demographic? AIDS.

While the ramifications of the finding are enormous, it was discovered largely by accident, during research into increases in the suicide rate.

Despite middle-aged mortality in the U.S. being one of the most studied demographic phenomena in recent history, the pattern had been largely missed. While many had looked at the data, no one had happened on the right slice to show the pattern:

Dr. Preston of the University of Pennsylvania noted that the National Academy of Sciences had published two monographs reporting that the United States had fallen behind other rich countries in improvements in life expectancy. One was on mortality below age 50 and the other on mortality above age 50. He coedited one of those reports. But, he said, because of the age divisions, the researchers analyzing the data missed what Dr. Deaton and Dr. Case found hiding in plain sight. (nyt)

Kevin Drum notes that while the headline is about middle-aged whites, the truth is that the trends are evident in all age groups:

But the paper is being misreported. It’s not just middle-aged whites. It’s all whites. The chart below tells the real story: every age group from 30 to 65 has shown a steep increase in mortality. So why focus just on middle-aged whites? “The midlife group is different only in that the sum of these deaths is large enough that the common growth rate changes the direction of all-cause mortality.” In other words, the midlife group makes for a more dramatic chart. But every age group has shown a similar trend.

To support this he shows his own chart of increases in white mortality at different age groups from the Opioids, Alcohol, and Suicide trend:

Mortality of Americans by age group (source)

In fifteen years, deaths by overdose, suicide, and liver conditions associated with drug and alcohol abuse have doubled in the white American population, so much so that they have changed the shape of all-cause mortality in middle age. They are impacting other age brackets in profound ways as well.


The suicide trend among middle-aged white men is well known. See Suicide Belt and Rising Suicides.

“More than half of Americans now report a personal connection to painkiller abuse, 16 percent know someone who has died from an overdose, and 9 percent have seen a family member or close friend die.” [http://www.motherjones.com/politics/2015/11/most-americans-now-have-personal-experience-painkiller-addiction quote]



Big Data and OxyContin [...]

Purdue Pharma, authors of the current opiate and heroin epidemic in the U.S., created the problem by ignoring the “why” behind the numbers:


Boots on the ground was not the only stratagem employed by Purdue to increase sales for OxyContin. Long before the rise of big data, Purdue was compiling profiles of doctors and their prescribing habits into databases. These databases then organized the information based on location to indicate the spectrum of prescribing patterns in a given state or county. The idea was to pinpoint the doctors prescribing the most pain medication and target them for the company’s marketing onslaught.

That the databases couldn’t distinguish between doctors who were prescribing more pain meds because they were seeing more patients with chronic pain or were simply looser with their signatures didn’t matter to Purdue. The Los Angeles Times reported that by 2002 Purdue Pharma had identified hundreds of doctors who were prescribing OxyContin recklessly, yet they did little about it. The same article notes that it wasn’t until June of 2013, at a drug dependency conference in San Diego, that the database was ever even discussed in public.

Combining the physician database with its expanded marketing, it would become one of Purdue’s preeminent missions to make primary care doctors less judicious when it came to handing out OxyContin prescriptions. (Source)

The result? The largest drug epidemic in the history of the United States, one which has literally reversed declines in all-cause mortality in many demographics. See Opioids, Alcohol, Suicide.

These figures actually don’t cover the much larger effects from death by liver disease and suicide attributable to opioid abuse. (source)

Part of this is a warning about the morality of Big Data. But perhaps an even larger issue is the problem of data without theory. The reasons behind these trends mattered — were these replacing other drugs due to efficacy or due to addiction? Were the super-prescribers more enlightened as to pain management or were they running cash for scripts businesses?

Marketing, in one sense, does not require answers to these issues; you use the correlations to make sales, and the why does not matter. But ethical marketing is a different matter.


There is little doubt the pharmaceutical industry is behind the current heroin epidemic. See 80% of Heroin Users Started with Painkillers, Opioid Increase 1997-2002


Secret Sauce Is Mostly Mayo [...]

Companies spend a lot of time talking about how their proprietary analytics can help identify opportunity. In reality, the most useful analytics are not that complex.

In the most recent example, a proprietary engagement metric was beaten by a simple one: how much time did the student spend reading?

Researchers also compiled an “engagement index,” based on students’ highlighting and minutes spent reading. They found that this index predicted performance more accurately than even past grades. However, when each attribute of the “engagement index” was studied, the amount of minutes spent reading was ultimately most indicative of course outcomes, even more so than the index itself. The researchers stipulate that these findings could help professors identify struggling students as the latter worked through assignments. (Source)
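
As a rough illustration (with made-up numbers, not the study’s data), here is a sketch of how a single strong predictor can beat a composite index built on top of it. Everything here — the weights, the ranges, the assumption that highlighting carries no signal — is invented for the demonstration:

```python
import random

random.seed(0)

# Hypothetical data: 500 students whose course outcome depends mostly on
# minutes spent reading; "highlights" is pure noise by construction.
minutes = [random.uniform(0, 600) for _ in range(500)]
highlights = [random.uniform(0, 50) for _ in range(500)]
outcomes = [0.1 * m + random.gauss(0, 15) for m in minutes]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A composite "engagement index" that dilutes the strong signal
# (minutes) with a weak one (highlights).
index = [m / 600 + h / 50 for m, h in zip(minutes, highlights)]

print(f"minutes alone:   r = {pearson(minutes, outcomes):.2f}")
print(f"composite index: r = {pearson(index, outcomes):.2f}")
```

When the extra ingredients of an index add noise rather than signal, the simple metric correlates with outcomes more strongly than the index that contains it — which is the shape of the finding above.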

Duffer’s Drift


Google Flu Trends [...]

Google Flu Trends — a project to use Big Data around Google searches to predict flu trends faster than the CDC — was the poster child for the glory of Big Data right up until it “failed spectacularly” in 2013. What happened?

It began as a research experiment, followed by a paper in none other than Nature:

The paper demonstrated that search data, if properly tuned to the flu tracking information from the Centers for Disease Control and Prevention, could produce accurate estimates of flu prevalence two weeks earlier than the CDC’s data—turning the digital refuse of people’s searches into potentially life-saving insights.

And then, GFT failed—and failed spectacularly—missing at the peak of the 2013 flu season by 140 percent. When Google quietly euthanized the program, called Google Flu Trends (GFT), it turned the poster child of big data into the poster child of the foibles of big data. (Source)

Researchers writing at Wired this year went back to postmortem the program and found, they claim, that the problem was not Big Data per se, but “Big Data Hubris”. For example, Google did not make its algorithms transparent, which led it to miss problems around seasonal terms:

But while Google’s efforts in projecting the flu were well meaning, they were remarkably opaque in terms of method and data — making it dangerous to rely on Google Flu Trends for any decision-making at all.

For example, Google’s algorithm was quite vulnerable to overfitting to seasonal terms unrelated to the flu, like “high school basketball.” With millions of search terms being fit to the CDC’s data, there were bound to be searches that were strongly correlated by pure chance, and these terms were unlikely to be driven by actual flu cases or predictive of future trends. Google also did not take into account changes in search behavior over time. After the introduction of GFT, Google introduced its suggested search feature as well as a number of new health-based add-ons to help people more effectively find the information they need. While this is great for those using Google, it also makes some search terms more prevalent, throwing off GFT’s tracking.

These problems could have been easily spotted (and perhaps corrected) had GFT not been a black box alogrithm — flu researchers at the CDC are nothing if not experts in understanding spurious correlation and seasonal confounding. But the nature of the project was that only a few people could see into the black box, and for the most part they had facile understandings of the issues involved.
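
The spurious-correlation point is easy to demonstrate in miniature. In this toy sketch (random numbers throughout, nothing from GFT itself), screening tens of thousands of candidate “search term” series against a target reliably turns up strong correlations by pure chance:

```python
import random

random.seed(42)

WEEKS = 100  # length of each weekly time series

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A stand-in "flu prevalence" series: pure noise, no real signal at all.
flu = [random.gauss(0, 1) for _ in range(WEEKS)]

# Screen 20,000 random "search term" series against it, the way GFT
# screened millions of actual queries against CDC data.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(WEEKS)], flu))
    for _ in range(20_000)
)

print(f"strongest correlation found by chance alone: |r| = {best:.2f}")
```

With no real relationship anywhere in the data, the best chance correlation here comes out comfortably above 0.3 — and GFT was fitting millions of terms, not twenty thousand, so the winners of that screening were all but guaranteed to include flukes like “high school basketball.”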


This is a big argument against Secret Sauce Analytics, even if the Secret Sauce Is Mostly Mayo.


Big Data Bra Size [...]

Big Data is seen as the End of Theory by some, and as the dawn of a humanistic personalization by others. That is why we find this a useful example of Big Data: Alibaba, the largest online store in the world (centered in China), has correlated bra size to spending habits. After the quote we explore the weird intersection of Big Data, sexism, racism, and any other isms you might want to fold in.

Alibaba vice president Joseph Tsai talked to Quartz about findings that 65% of women with a B cup fell into the “low” spending category, while those with a C cup or higher were in the “middle” and “high” demographics.

“We’ve only seen the tip of the iceberg,” he said of the company’s data-dive. “We really haven’t done even 5% of leveraging that data to really make our operations more efficient, consumers more satisfied.” (Source)

What do we make of this? It makes us feel (hopefully) a bit icky. Why is that? It’s probably worth exploring.

It’s also interesting that if we knew WHY this correlation applied, we’d feel (I think) a bit less icky. Is age a confounding factor (older women have bigger bra sizes, teenagers have smaller bra sizes)? Is cup size a sign of affluence in China? A proxy for city dwelling? Leaving the correlation at this basic level feels unfinished, dirty, and exploitative.

Suppose we extend special deals to C-cup women (and certainly this is happening; otherwise why collect this data?). Is this wrong? What if such things break down by race?

When we look at this we realize that a world without theory is also a world that discards intent. And intent means something. Big Data Doesn’t Care why your coupon works, but you do. There is a big difference between offering sweet deals to large-breasted women and offering sweet deals to people in their twenties or to more frequent shoppers. Somewhere along the line we are going to have to come to terms with that.


For a more serious case of why the why matters, consider Big Data and OxyContin

Apart from issues of why, Big Data’s correlations are often ephemeral: Google Flu Trends succeeded until it failed.

The Missing Sense of User [...]

Things like Facebook arose mostly because of gaps in the basic HTML/HTTP infrastructure.

A great example is that HTTP has no native concept of a persistent user across servers. An HTTP request can encode the machine address it came from, but it can’t tell you, at the protocol level, what person it came from. This means that the only unified view you can get of a person’s activity is at the server level.
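
A toy sketch of the consequence (the class and hostnames here are invented for illustration): two servers that both know “alice” each mint their own opaque session token, and neither can read the other’s. Identity lives in each server’s private session store, not in the protocol:

```python
import secrets

class Server:
    """A minimal stand-in for a web server's session handling."""

    def __init__(self, host):
        self.host = host
        self.sessions = {}  # opaque cookie value -> username

    def login(self, username):
        # The moral equivalent of Set-Cookie: a token meaningful only here.
        token = secrets.token_hex(16)
        self.sessions[token] = username
        return token

    def whoami(self, token):
        # A server can only resolve identities it issued itself.
        return self.sessions.get(token)

social = Server("social.example")
shop = Server("shop.example")

alice_social = social.login("alice")
alice_shop = shop.login("alice")

print(social.whoami(alice_social))  # the issuing server recognizes its cookie
print(shop.whoami(alice_social))    # any other server sees a meaningless string
```

Each server ends up with its own fragmentary picture of the same person, which is exactly the gap that centralized identity providers stepped in to fill.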

Were we building the Web today, we could probably do this differently. In our Bitcoin/blockchain world it is not hard to imagine a scheme where your identity exists outside any particular server, written and maintained in a million different ledgers.

In such a world, Facebook wouldn’t look on its own servers to see if you had access to it. It’d look to the distributed ledger to see if it had access to you. Such a scheme might have allowed more integrated services to arise, and might have mitigated the rise of supersites like Facebook and Google.

When I talk like this, sometimes people get a bit frustrated — this wasn’t possible in 1991, it didn’t get done after that, and it’s not going to happen now, so what’s the point?

The point is that if you understand the gaps that made Facebook possible (instead of just assuming Facebook used evil magic to get everybody on it) you can work to address those gaps. We’re not going to get a blockchain user identity scheme in HTTP at this point, but understanding the identity issue has allowed companies like Known to propose other solutions (such as a site that uses a combination of syndication and other technologies to build a makeshift identity server that can service Facebook, Twitter, WordPress and other sites from a central location).


Opioid Increase 1997-2002 [...]

From an abstract of a 2005 paper:

I measured the role of opioid analgesics in drug abuse–related deaths in a consistent panel of 28 metropolitan areas from the Drug Abuse Warning Network. The number of reports of opioid analgesics increased 96.6% from 1997 to 2002; methadone, oxycodone, and unspecified opioid analgesics accounted for 74.3% of the increase. Oxycodone reports increased 727.8% (from 72 to 596 reports). By 2002, opioid analgesics were noted more frequently than were heroin or cocaine. Dramatic increases in the availability of such opioids have made their abuse a major, growing problem. (Source)

The increase was related directly to the marketing push of Purdue Pharma around OxyContin:

Starting in 1996, Purdue Pharma expanded its sales department to coincide with the debut of its new drug. According to an article published in The American Journal of Public Health, “The Promotion and Marketing of OxyContin: Commercial Triumph, Public Health Tragedy,” Purdue increased its number of sales representatives from 318 in 1996 to 671 in 2000. By 2001, when OxyContin was hitting its stride, these sales reps received annual bonuses averaging over $70,000, with some bonuses nearing a quarter of a million dollars. In that year Purdue Pharma spent $200 million marketing its golden goose. Pouring money into marketing is not uncommon for Big Pharma, but proportionate to the size of the company, Purdue’s OxyContin push was substantial. (Source)


More on this trend at Opioids, Alcohol, Suicide.

Note that the Route to Heroin Abuse is often through pill addiction.

See also: A 2,000 Percent Increase


80% of Heroin Users Started with Painkillers [...]

From the documentary Heroin: Cape Cod, MA, a potentially damning fact: 80% of heroin users started with painkillers. In the case of the 20-somethings in the documentary, most were crushing pills from their parents’ cabinets or (more usually) abusing pills they received from doctors after accidents. When the pill addiction they developed became unmanageable, they turned to heroin.

From another article:

Respondents who began using heroin in the 1960s were predominantly young men in their teens living in urban areas, whose first opioid use was heroin (80%). Recent users are more likely to be white, older (average age almost 23 years) men and women living in suburban or rural areas. Three out of four were first introduced to opioids through prescription painkillers. (Source)

Recent attempts to make OxyContin more tamper-resistant have caused some addicts to move to heroin, which is cheaper.

“A few years ago when we did interviews with people in treatment, many would tell us that although they were addicts, at least they weren’t using heroin,” he said. “But now, many tell us that a prescription opioid might run $20 to $30 per tablet while heroin might only cost about $10.” (Source)


See Opioids, Alcohol, Suicide to understand the scope of the problem.

Selfie False Frame [...]

Selfie behavior isn’t always what it seems.

At a Diamondbacks game, the announcers mocked a set of women not paying attention to the game in favor of taking selfies.

(You only need to watch the first 30 seconds)

Outside the weirdness and creepiness of grown men shaming women from the announcer’s booth, there’s another element when you see the uncut video: the very same announcers had just announced a T-Mobile contest to take a selfie:

As the situation developed, the network was forced to apologize. They offered the women, who were from an ASU sorority, free tickets. The women declined, instead having the tickets donated to a local charity, and later used their new-found fame to advocate for the charity. (post)


False Frame describes a misinterpretation based on race.

False Frame, Baseball Bat Edition shows a similar smartphone disparagement that is in fact wrong.

Shaming can be a lousy way to change behavior anyway. See Why Shame Doesn’t Work


The Wages of Political Data [...]

Parties used to be based on precincts, which covered broad swaths of people, albeit often people with similar profiles. That changed as data allowed politicians to focus on individual voters rather than communities. See From Precinct to Voter.

What has been the result? Perhaps it is data which is to blame for the “rise of lifestyle politics”. Consider the political change that David Frum describes:

Politics was becoming more central to Americans’ identities in the 21st century than it ever was in the 20th. Would you be upset if your child married a supporter of a different party from your own? In 1960, only 5 percent of Americans said yes. In 2010, a third of Democrats and half of Republicans did. Political identity has become so central because it has come to overlap with so many other aspects of identity: race, religion, lifestyle. In 1960, I wouldn’t have learned much about your politics if you told me that you hunted. Today, that hobby strongly suggests Republican loyalty. Unmarried? In 1960, that indicated little. Today, it predicts that you’re a Democrat, especially if you’re also a woman.

This is just a thought, but how much of this is the result of data which allows us to cobble together party support based on individual demographic data, and target people within communities via phone and email and personalized web ads? The political class’s fascination with Soccer Moms and NASCAR Dads and the like inspires some eye-rolling, but these are demographic communities that would not even be targetable in 1960 by traditional GOTV efforts.


There’s a link here to this idea of Imagined Communities, though I haven’t hashed this out.

Part of the reason for the marriage statistic is simply that the parties were not seen as that far apart on anything, at least by some. See Dime-Store New Deal

Age of the Incunable [...]

After the western invention of movable type, not much changed for a very long time. It took many, many years for people to realize the peculiar possibilities of cheap, printed texts.

Gutenberg invents the Western version of movable type in the 1440s, and it’s in use by 1450. He thinks of it in terms of cost, really. Efficiency.

You can print cheap bibles – still in Latin, mind you. Affordable chess manuals.

He dies broke, by the way.

For almost fifty years, change creeps along.

They have a name for books of this period, which I love: “Incunabula”. Or if we go singular, the incunable. So we could call this the “Age of the Incunable”.

Detail of a Gutenberg Bible. Source.

This is what books look like at that time. Almost identical in form and function, style and content to medieval manuscripts.

Just to be really clear – this is a machine printed book here, later adorned by hand. In case you didn’t notice.

There were printed books, but there was no book culture. There were printed books but there was no shift in what those books did.

But then things change. First in the Italian presses. Bibles are printed in Italian, for example. Illustrations become more common.

Aldus Manutius creates the “pocket book” in an octavo format, somewhere around 1500. We get cheap mobility. In 1501, his shop ditches the Calligraphic font for early “Roman fonts” more like the unadorned fonts we know today.

Sentence structure starts to change. We start to develop written forms of argument that have no parallel in verbal rhetoric. Ways of talking that don’t exist in oral culture.

People learn to read silently, which is huge, at three to four times the speed of reading aloud.

And here’s the transition: We start to think the sort of thoughts that are impossible without books.

De Revolutionibus Orbium, by Copernicus, 2nd edition. 1566. Source.

And it’s almost 70 years after Gutenberg that you see a real print culture emerge. Copernicus, Luther, etc. What we start to see is how fast new ideas can spread. We start to see what happens when every believer has their own Bible in which to look up things, in their own language.

We see what happens when an idea can be proposed and replied to across a continent in months rather than decades. We start to see the impact of the long tail of the past, what happens when esoteric works of the past, long hidden away, can be mass produced. What happens when you get Aristotle for everyone. What happens when every scientist can get his hands on a copy of Copernicus.

And Churches fell. And Science was born. And Governments toppled.

But 70 years later.

It’s something worth remembering for those of us excited about the educational affordances of digital material and networked learning. For a long time I thought — well, change is faster now, right? Technological change is, maybe. But it may be the case that certain types of social change are as slow as they ever were. There are days when I think they might even be slower.

We’ll see. For the moment, whether fact or fiction, the belief that this is just a lull will power me through. We’ll get there yet.


Voyager Expanded Books was an early attempt to capture the possibility of digital texts.

The Social Book was an attempt to update books for the digital age as well.

Vague Dread [...]

The price of the Internet of Things will be a vague dread of a malicious world. by Marcelo Rinesi. [http://ieet.org/index.php/IEET/more/rinesi20150925 post]

Volkswagen didn’t make a faulty car: they programmed it to cheat intelligently. The difference isn’t semantics, it’s game-theoretical and it borders on applied demonology.

The intrinsic challenge to our legal framework is that technical standards have to be precisely defined in order to be fair, but this makes them easy to detect and defeat. They assume a mechanical universe, not one in which objects get their software updated with new lies every time regulatory bodies come up with a new test.

And even if all software were always available, checking it for unwanted behavior would be unfeasible — more often than not, programs fail because the very organizations that made them haven’t or couldn’t make sure it behaved as they intended.

Our experience of the world will increasingly come to reflect our experience of our computers and of the internet itself: full of programs never installed doing unknown things to which we’ve never agreed to benefit companies we’ve never heard of, inefficiently at best and actively malignant at worst.

via Vague Dread.


Secret Sauce Is Mostly Mayo [...]

Companies spend a lot of time talking about how their proprietary analytics can help identify opportunity. In reality, the most useful analytics are not that complex.

In the most recent example, a proprietary engagement metric was beaten by a simple one: how much time did the student spend reading?

Researchers also compiled an “engagement index,” based on students’ highlighting and minutes spent reading. They found that this index predicted performance more accurately than even past grades. However, when each attribute of the “engagement index” was studied, the amount of minutes spent reading was ultimately most indicative of course outcomes, even more so than the index itself. The researchers stipulate that these findings could help professors identify struggling students as the latter worked through assignments. (Source)
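The comparison is easy to reproduce on synthetic data. The sketch below uses invented numbers (not the study's data) and a hypothetical equal-weight "engagement index"; it fits ordinary least-squares models and shows how, when one component carries most of the signal, a blended index can explain less variance than that component alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only, not the study's): course score is
# driven mostly by minutes spent reading, weakly by highlighting.
n = 500
minutes = rng.uniform(0, 300, n)              # minutes spent reading
highlights = rng.poisson(5, n).astype(float)  # highlighting events
score = 0.2 * minutes + 2.0 * highlights + rng.normal(0, 15, n)

def r_squared(features, target):
    """Variance explained by an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(target)), *features])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return 1.0 - resid.var() / target.var()

def z(v):
    """Standardize a variable to mean 0, standard deviation 1."""
    return (v - v.mean()) / v.std()

# A hypothetical "engagement index": equal-weight blend of components.
index = z(minutes) + z(highlights)

print("index alone:  ", round(r_squared([index], score), 3))
print("minutes alone:", round(r_squared([minutes], score), 3))
```

With these assumptions the single strong predictor wins, which is the pattern the researchers describe.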

Duffer’s Drift


Reintegrative Shaming [...]

Reintegrative shaming is an idea proposed by John Braithwaite, who argues that shame that stigmatizes does more harm than good. Shaming that reintegrates is offered as a viable alternative, pulling offenders back into the community rather than pushing them out.

Tomkins teaches us that shame is a basic affect occurring spontaneously in all human beings when confronted about their wrongdoing. John Braithwaite, in Crime, Shame and Reintegration (1989) advises that the experience of dealing with shame should be reintegrative, not stigmatizing.

Braithwaite’s sociological theory of “reintegrative shaming” suggests that Western society’s current strategies for responding to crime and wrongdoing may actually be doing more harm than good. Schools and courts punish and humiliate offenders without offering a way to make amends, right the wrong or shed their “offender” label. Instead, offenders are stigmatized, alienated and pushed into society’s growing negative subcultures. They join the others in their school or community who feel excluded from the mainstream and become a source of persistent trouble.

Braithwaite says societies that reintegrate offenders back into the community have a lower crime rate than those that stigmatize and alienate wrongdoers. Reintegration involves separating the deed from the doer so that society clearly disapproves of the crime or inappropriate behavior, but acknowledges the intrinsic worth of the individual. (Source)

Reintegrative shaming involves a process whereby offenders are asked to confront what they’ve done within an environment of respect and integration. Shaming happens, but under conditions which allow the offender a clear path into acceptance.

Critiques

The biggest critique of reintegrative shaming is that it may not be the shaming part that works at all:

Collectively, these findings do provide some support for Braithwaite’s notion of reintegrative shaming: he stressed the importance of invoking remorse and rejected stigmatic shaming. However, the research does not show that disapproval (shaming) was necessarily the mechanism which invoked the remorse. Another way of interpreting these data is that empathy or understanding the effects of offending on victims was the trigger. If this interpretation is right, the practice and policy implications are very different from a continuing emphasis on shaming (disapproval). (Source)

ds106 [...]

ds106 began life as Digital Studies 106, an undergraduate course in digital storytelling at the University of Mary Washington. In Spring 2011, Jim Groom and Martha Burtis opened the course to non-credit participants worldwide. More than 200 open participants were a part of this first open iteration.

Key features of the open structure were the use of an assignment bank which allowed participants to create, rate, and complete assignments, and the use of an aggregator to pull in work from participant sites.

Eventually, courses built on the ds106 model were operating simultaneously at several institutions.

The course is no longer listed in the UMW catalog, but in fall 2013 an instance of ds106 was created without an instructor. Later, ds106 transitioned from “course to community”.

Other offshoots include ds106 radio, created in February 2011 by Grant Potter, and ds106 TV (now defunct).


DS106 site [http://ds106.us/ html]

DS106 Radio [http://ds106rad.io/ html]

2011 Presentation on DS106 open course by Jim Groom and Martha Burtis [https://www.youtube.com/watch?v=LtQwf3YAXH0 video]

 


Shame, Guilt and Social Media [...]

This is a tricky subject, and trip wires are everywhere. I’d ask that people put aside for a minute the whose-side-are-you-on lens (you can pick it up again soon) and consider why shame and guilt are different, and why public shaming might erode empathy.

Put simply, guilt is an internal feeling directed outward. We feel it when we do something that is not a reflection of the person we want to be. In feeling guilt, our desire is to rid ourselves of the guilt, either by taking responsibility for what we have done or by taking actions to make sure we don’t do it again. Because guilt is so often caused by seeing how other people are impacted by our actions, it is the partner of empathy. Guilt is empowering.

Shame, on the other hand, is about how other people see us, or might see us if they knew X or Y.

Some people say that shame is paralyzing, but that is not entirely true. It can be quite motivating. But what it motivates us to seek is a shame-free environment.

Drug addiction forms a typical example of this. A drug addict who is shamed about the hurt they have caused people does not seek to address the hurt. What they seek is to get away from the shame. That involves getting high, or returning to a group of friends where getting high is not shameful.

People with food addictions who are shamed seek the place where they feel no shame about how they treat their body, and that place happens to be where they abuse their body the most. By binging or starving or throwing up they find a temporary relief from the stress of caring what other people think or worrying about their body.

But the same thing holds true with smaller amounts of shame. When a person is shamed for what they have done, they do not feel guilt. What they feel is an overwhelming desire to get away from the shame, because shame is an environmental condition. Like a drug addict, we can do that by doubling down on the behavior or by finding new friends who won’t shame us.

How does this connect to online behavior? Well, it’s easier than ever to lash out, and easier than ever to find new friends who won’t shame you (although, as with drug addiction, these may not be the best people to hang around with). To carry the metaphor further, when you’re online you’re shaming the alcoholic while they have a whiskey bottle in their hand and a dozen bars round the corner.

So why would we think shame is a useful tool in that environment? Why would we think it would lead to empathy rather than polarization?

There is a long discussion online about who deserves to be shamed and who doesn’t, who gets hurt most by it and who gets the most sympathy for it. And these are good discussions. But two questions I would ask are:

  • Do we think that shaming will lead to better behavior?
  • If so, on what possible basis?

There are other ways to build empathy. See The Believing Game

Eric Meyer talks about Empathetic Design in RebeccaPurple


Two Beers and a Puppy [...]

From the author of Works Well With Others: An Outsider’s Guide to Shaking Hands, Shutting Up, Handling Jerks, and Other Crucial Skills in Business That No One Ever Teaches You:

If you want to really evaluate how you feel about someone, give them the “Two Beers and a Puppy” test.

Both the office and, more broadly, life entail lots of interactions with people you’re not quite sure about. Maybe they’re fun in some settings but not in others; maybe they have moments of brilliant talent mixed with astounding incompetence. When you encounter people like this, McCammon recommends a simple test in which you ask yourself two questions: “Would I have two beers with this person?” and “Would I allow this person to look after my puppy over a weekend?”

“Some people are yes and yes, and those are the best people in your life,” McCammon said. “Hopefully you were raised by people like that. Hopefully those are your friends. And then there’s the no and no people — those are the assholes.” Yes-beer, no-puppy people “are to be cautiously trusted,” he writes in the book, while no-beer, yes-puppy people “are no fun but they make the world a better place — for puppies, especially.” Whatever the results of a given iteration of the “Two beers and a puppy game,” said McCammon, “it’s always revealing” to ask these questions. (post)

Why Shame Doesn’t Work [...]

Shaming people seems like it should work — won’t people change if it helps them avoid the pain of feeling shame? But as researcher Brené Brown points out, shame does the opposite, because it pushes us inside of ourselves:

Researchers June Tangney and Ronda Dearing, authors of Shame and Guilt, explain that feelings of shame are so painful that they pull the focus to our own survival, not the experiences of others.

Example: A man shakes a bottle of pills in his wife’s face, “Look around you! Your pill-popping is destroying our family. Our son is failing out of school and our daughter is literally starving herself for attention. What’s wrong with you?”

Does the shame of what she’s doing to her family lead her to get help, or does it lead her to slink away and get high? After-school specials tell us she gets help. Data say she gets high. In fact, new research shows that some addiction may be born of shame and that shame leads to relapse rather than relapse prevention. (Source)

An alternative to shame is guilt, which is good. Shame tells you you are seen as a bad person; guilt tells you you did something bad. We can, of course, tell people they did bad things, and we can ask people to change. But we need to — as Catholics would say — love the sinner at the same time we may hate the sin. As Brown points out, to not do so is almost certainly making things worse:

Reeves writes, “We need a sense of shame to live well together. For those with liberal instincts, this is necessarily hard. But it is also necessary.” I’m not sure what he means by “liberal instincts,” but what I do know is that using shame as a tool when we are frustrated, angry, or desperate to see behavior change in people is a much better example of the “it feels good – do it” ethos than the teen pregnancy problem [he claims to address]. We might feel justified in belittling and humiliating people, but it makes the world a more dangerous place.


Shame is not guilt strongly felt. People with empathy disorders, for example, can feel shame but not guilt. (Link)

See also Shame, Guilt and Social Media

Given this, a Data Wall of Shame seems like bad pedagogy.

Some political activists see Vulgarity as Ethos


Delta Bans Trophies [...]

A possible example of good coming out of a public shaming: Delta banned big-game trophies in the Cecil the Lion aftermath.

The airline announced on Monday that it would ban the shipment of all lion, leopard, elephant, rhinoceros and buffalo trophies worldwide as freight.
The move comes weeks after Cecil, a popular male lion beloved by tourists and locals in Zimbabwe, was lured from a national park and killed by Walter Palmer, a Minnesota dentist and hunter.
Delta said it may extend the ban to cover other animals. (news)


Noting this because the question of whether good comes out of shaming comes up. Perhaps it sometimes does, though changes like this seem like small-bore change.

In general, shame is dangerous. See Why Shame Doesn’t Work


Why Shame Doesn’t Work [...]

Shaming people seems like it should work — won’t people change if it helps them avoid the pain of feeling shame? But as researcher Brené Brown points out, shame does the opposite, because it pushes us inside of ourselves:

Here’s the rub:

Shame diminishes our capacity for empathy.

Shame corrodes the very part of us that believes we are capable of change.

You can’t depend on empathetic connection to make a campaign effective, then crush the needed empathy with shame.

Researchers June Tangney and Ronda Dearing, authors of Shame and Guilt, explain that feelings of shame are so painful that they pull the focus to our own survival, not the experiences of others.

Example: A man shakes a bottle of pills in his wife’s face, “Look around you! Your pill-popping is destroying our family. Our son is failing out of school and our daughter is literally starving herself for attention. What’s wrong with you?”

Does the shame of what she’s doing to her family lead her to get help, or does it lead her to slink away and get high? After-school specials tell us she gets help. Data say she gets high. In fact, new research shows that some addiction may be born of shame and that shame leads to relapse rather than relapse prevention. (Source)

An alternative to shame is guilt, which is good. Shame tells you you are a bad person; guilt tells you you did something bad. We can, of course, tell people they did bad things, and we can ask people to change. But we need to — as Catholics would say — separate the sin from the sinner. As Brown points out, to not do so is almost certainly making things worse:

Reeves writes, “We need a sense of shame to live well together. For those with liberal instincts, this is necessarily hard. But it is also necessary.” I’m not sure what he means by “liberal instincts,” but what I do know is that using shame as a tool when we are frustrated, angry, or desperate to see behavior change in people is a much better example of the “it feels good – do it” ethos than the teen pregnancy problem [he claims to address]. We might feel justified in belittling and humiliating people, but it makes the world a more dangerous place.


Reintegrative Shaming attempts to point out bad behavior while giving the offender a clear route back into community acceptance.

Answer Garden [...]

AnswerGarden is a neat service that allows you to quickly create and distribute open-ended poll questions. It’s a great tool for group brainstorming and for quickly gathering responses to short questions that you pose to your students. In the video embedded below I demonstrate how to create a poll and gather feedback through AnswerGarden. (Source)

Mass [...]

Mass, or form, refers to a shape or three-dimensional volume that has, or gives the illusion of having, weight, density or bulk. Notice the distinction between two- and three-dimensional objects: a shape is by definition flat, but takes on the illusion of mass through shading with the elements of value or color. In three dimensions a mass is an actual object that takes up space.

Eugene Delaplanche, Eve after the Fall, 1869. Marble. Musee d’Orsay, Paris.

Eugene Delaplanche’s 1869 sculpture ‘Eve After the Fall’ epitomizes the characteristics of three-dimensional mass. Carved from stone with exaggerated physicality to appear bigger than life, the work stands heavily against the space around it. Delaplanche balances the massive sculpture by his treatment of the subject matter. Eve sits, her body turned on two diagonal planes, one rising, the other descending, her right hip being the meeting point of the two. She rests her head in her hand as she agonizes over the consequences of what she’s just done, the forbidden apple at her feet as the serpent slinks away to her left.

Although actual mass and form are physical attributes of any three-dimensional work of art, they are manifested differently depending on the culture they are produced in.

Gedenkfigur Kamerun. Berlin-Dahlem (source)

For example, traditional western European culture is known for its realistic styles, represented by Delaplanche’s ‘Eve After the Fall’. In contrast, look at the figurative sculpture from the Cameroon culture in Africa to see how stylistic changes make a difference in the form. The sculpture is carved from wood, generally more available to the artist in sub-Saharan Africa than is marble. Moreover, the Cameroon figure stands upright and frontal to the viewer, and is carved without the amount of descriptive detail seen in Delaplanche’s work, yet the unknown African artist still gives the figure an astonishing amount of dramatic character that energizes the space around it.

Another example comes from a comparison of this building by Le Corbusier with this Scottish castle. Note how the form leads to different effects and different cultural meaning.

Le Corbusier’s Berlin Unité (source)

Dunrobin Castle (source)

Form and space, whether actual or implied, are markers for how we perceive reality. How objects relate to each other and the space around them provide the evidence for the visual order in our world. The artist’s creative manipulation of these elements determines the stylistic qualities in a work of art that, in the end, always contains the subjective fingerprint of the artist’s idea of the real.


 

Contains original content from Saylor.org, CC BY. (Link)

Retronasal Olfaction [...]

Our taste buds are a bit of a blunt instrument. We rely on something called retronasal olfaction to give us the variety of taste sensations we expect — that is, we rely on our sense of smell to give us a sense of taste.

In reality, you know this intuitively already, as you lose the ability to taste when your olfactory sensors are blocked:

It may be difficult to believe, but we can actually test whether this is true. For example, if we have a cold, our nose is stuffed. We cannot breathe through our nose, and we can’t smell anything. If we eat something while we have a cold, we also realise that every food tastes bland. We are still able to perceive its sweetness, its saltiness, its bitterness, its sourness, and its savoriness, even if we have a blocked nose, but we would have a hard time distinguishing an apple from a pineapple. Both are sweet and a bit sour, and we would still perceive this, but in order to perceive the typical apple flavor and the typical pineapple flavor we have to rely on the sense of smell. The access to the receptors of the sense of smell is however blocked, since the nose is blocked when we have a cold; so we are not able to perceive the flavors and consequently we cannot distinguish apples from pineapples. (Source)

More info in the video below.


Tongue Map [...]

Although there are subtle regional differences in sensitivity to different compounds over the lingual surface, the oft-quoted concept of a ‘tongue map’ defining distinct zones for sweet, bitter, salty and sour has largely been discredited. (Link)

Tongue Map

———-

Taste is largely a function of smell anyway. See Retronasal Olfaction

The Claim: Tongue Is Mapped Into Four Areas of Taste (Link)

Friction in Tax Claims [...]

Many people eligible for tax benefits don’t claim them, and that’s a problem. The IRS decided to experiment with direct mailings letting people know they had not claimed their money, to see whether the notices prompted claims. Interestingly, the experimental method they used reveals a lot about the concept of “friction” when it comes to action, as well as about motivation. A new paper in the American Economic Review details the results. (Link)

Here’s a list of what they tried:

  • Complexity (design). Simplified design vs. more complex. Complex design features denser text, repeated information.

  • Complexity (length). Lengthy version of application added questions.

  • Benefit display (low and high). Benefit display notes upper bound of credit, some mentioning low numbers ($457) and some mentioning high ($5,000+)

  • Transaction cost (low and high). Adding a note about the time it would take to complete the form, both estimated low (10 minutes) and high (60 minutes).

  • Penalty/audit information. Letting people know they cannot be held accountable for unintentional mistakes.

  • Envelope message. Message in envelope lets people know it is “Good news!”

  • Informational flyer. Flyer details nature of the benefits, why they exist, how they work.

  • Personal stigma reduction. People told they get this benefit because they worked hard.

  • Social influence. People told many similarly situated peers are claiming this benefit.

So first, what did harm?

  • Complex flyers
  • Informational brochures
  • Trying to minimize social stigma (either through personal reduction or social influence)

And what helped? Two things: a shorter, less complex form and a benefit estimate.

And when we say shorter, we’re not talking that much shorter. We’re talking about cutting a couple of questions and adding more whitespace. Here’s an example of the difference:

(Source)
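To make the shape of an analysis like this concrete, here is a minimal sketch (standard library only) of tabulating response rates and lifts across mailing arms. The counts are invented for illustration; the paper reports its own magnitudes.

```python
import math

# Hypothetical counts (invented for illustration): (responders, recipients)
# per mailing arm.
arms = {
    "control: standard notice": (1400, 10000),
    "shorter, simpler form":    (2300, 10000),
    "benefit amount displayed": (1900, 10000),
    "informational flyer":      (1200, 10000),
}

def rate_and_se(responders, n):
    """Response rate and its standard error for one arm."""
    p = responders / n
    return p, math.sqrt(p * (1 - p) / n)

p0, se0 = rate_and_se(*arms["control: standard notice"])
for name, (r, n) in arms.items():
    p, se = rate_and_se(r, n)
    lift = p - p0                                  # difference vs control
    z = lift / math.sqrt(se ** 2 + se0 ** 2)       # two-proportion z-score
    print(f"{name}: {p:.1%} (lift vs control {lift:+.1%}, z = {z:+.1f})")
```

Under these made-up numbers the shorter form helps and the flyer hurts, mirroring the qualitative pattern the study found.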


Rising Support for Open Materials [...]

Support is rising for open materials on college campuses. A 2014-2015 survey shows significant movement on this issue: four-fifths (81 percent) of survey participants agree that Open Source textbook/Open Education Resource (OER) content “will be an important source for instructional resources in five years.” (post)

Encouragement of Open Resources Chart
(source)

Significant hurdles with implementation still exist. Only 6 percent of courses actually use open educational materials, and IT leaders continue to focus more on the integration of technology into the classroom than on the use of OER.

The results were part of the Campus Computing Survey, one of the largest surveys of Campus IT.


Openness is not enough, when other barriers prevail. See Gated Openness.


Engram Lifecycle [...]

An engram is a hypothetical means by which memory traces are stored. It plays a prominent role in Consolidation Theory.

engram lifecycle
Illustration used under fair use. (Source)

The perils of reconsolidation suggests why False Memories are Common

Retroactive Inhibition [...]

When several things are learned in quick sequence, things learned last can interfere with things learned first. For example, if I were to ask you to memorize the numbers 24-17-99-3 and give you a minute to do so, and then ask you to memorize the sequence 71-33-82-6, your memory of the first sequence would deteriorate more quickly than if I had not asked you to memorize the second. We call this and related patterns of forgetting “retroactive inhibition”.

The phenomenon has been extensively studied in relation to arbitrary lists, and “unlearning curves” have been developed with some precision for both retroactive inhibition and Proactive Inhibition. The effects are so significant that they have led some theorists to hypothesize that all forgetting is due to interference, as in Interference Theory, a theory introduced by Hugo Munsterberg. See Hugo Munsterberg’s Watch

During the early twentieth century the two dominant theories of forgetting were decay theory and interference theory. Interference theory proposed that memories become irretrievable as new cues overwrite old cues.

Decay theory assumed that there was some sort of catabolic process which caused memories to naturally decay.

A third suggested explanation for forgetting was introduced in consolidation theory. Consolidation theory sees new memories as ‘labile’ and prone to disruption. As time passes, a series of biological processes (e.g., glutamate release, protein synthesis, neural growth and rearrangement) stabilize the memory. Decay is the loss of memories that have not yet “set”.

Modern consolidation theory adds an additional twist: the reactivation of the memory may render it temporarily pliable again and prone to disruption or faulty recoding.


Writing as a Memory Eraser [...]

In creative writing, recalling memory is part of the author’s journey, a journey whose end result creates a written narrative for others to use as part of their own journey. So it might come as ironic that the act of writing is considered by some an impediment to memory, an eraser as much as (if not more than) a catalyst. In a very famous George Plimpton interview, Ernest Hemingway spoke of the delicate balance for the writer between putting thoughts to words for a public and what the author can lose in the process:

“…though there is one part of writing that is solid and you do it no harm by talking about it, the other is fragile, and if you talk about it, the structure cracks and you have nothing.”

9600110b
From the National Portrait Gallery.

The talking part for Hemingway is not an actual attack on the verbal, but rather on the space between memory as a personal item stored in the self and as a written item stored on paper or in a computer.  This is seen in Hemingway’s short story A Strange Country, where the protagonist Robert cannot remember the stories he wrote because, to Robert, the process of writing expunged the memory from the person and put it on the page, wiping the slate clean for the writer but also removing the memory.


But wait, isn’t writing a good way to spur memory? An argument for that view can be found at harnessing-the-potential-of-memory-in-writing

Interested in A Strange Country?  Read it and many others [html].

It is also worth reading the full George Plimpton/Ernest Hemingway exchange [html].

Pimsleur Recall [...]

Paul Pimsleur is famous for redesigning language programs to make use of graduated-interval recall. Via Mattan Griffel, here’s what he found:

…he hypothesizes the following intervals: 5 seconds, 25 seconds, 2 minutes, 10 minutes, 1 hour, 5 hours, 1 day, 5 days, 25 days, 4 months, and 2 years…. One of the cool side-effects of this is that you can take a large set of facts (e.g. the 1000 most commonly used words in the French language) and introduce them in batches slowly (e.g. 10 per day). That way you only have to review each word every few months in order to remember them all indefinitely. (The flip-side of this is: it doesn’t matter how awesome your teachers or your school was, if you don’t review each fact you learned at least every few years, you will forget them)…(Source)

Brad DeLong looks at this and notes a corollary — one must build in time to remember old things as well as to learn new ones. In fact, the mathematical conclusion one can draw is that the older one gets, the more time one must build in for remembering. (Link)

In wiki we solve this problem by habitually linking newly learned things to old knowledge, as well as by idly browsing our collection of notes from time to time.
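As a minimal sketch, the intervals Pimsleur hypothesized translate directly into a review scheduler. This illustrates the graduated-interval idea only; it is not Pimsleur's own implementation, and the day counts for "4 months" and "2 years" are rough conversions.

```python
from datetime import datetime, timedelta

# Pimsleur's hypothesized intervals: 5 s, 25 s, 2 min, 10 min, 1 h, 5 h,
# 1 day, 5 days, 25 days, ~4 months, ~2 years.
INTERVALS = [
    timedelta(seconds=5), timedelta(seconds=25),
    timedelta(minutes=2), timedelta(minutes=10),
    timedelta(hours=1), timedelta(hours=5),
    timedelta(days=1), timedelta(days=5), timedelta(days=25),
    timedelta(days=120),   # ~4 months (assumed conversion)
    timedelta(days=730),   # ~2 years (assumed conversion)
]

def review_schedule(first_seen):
    """Return the successive times at which a fact should be reviewed,
    each interval measured from the previous review."""
    t = first_seen
    schedule = []
    for gap in INTERVALS:
        t = t + gap
        schedule.append(t)
    return schedule

start = datetime(2024, 1, 1, 9, 0, 0)
for when in review_schedule(start)[:5]:
    print(when)
```

Note the batching observation in the quote: because late reviews are months apart, introducing ten facts a day still keeps the daily review load roughly constant.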


Hemingway argues writing down ideas can be harmful. See Writing as a Memory Eraser

Pimsleur’s 1967 Modern Language Journal article is available. (Link)

Engram Lifecycle suggests biological mechanisms for the effect.

Crisis of Purpose [...]

Christopher Lucas, writing in 1996, identified the crisis at the heart of higher education in America as a “crisis of purpose”, a result of a chain of historical events that resulted in the multiversity we have today, which must be all things to all people.

From Crisis in the Academy: Rethinking Higher Education in America by Christopher J. Lucas:

Second, if there is a true crisis in American higher education today, it is chiefly a crisis of purpose within the university. The hegemony of the multiversity as a regulative idea is well-nigh complete, but its preeminence does not seem to have come about as the outcome of principled decisions or any discernible process of rational choice.

On the contrary, it appears to have been the inevitable result of an academic system seeking to garner popular support by attempting in most times and places to be all things to all people. In the process, a single model of the university as a multipurpose institution dedicated simultaneously to teaching, research, and service has gained the ascendancy. Its predicament at this historic juncture, it must be observed, is not unlike the juggler balancing too many objects in midair. The spectacle is awe-inspiring, vastly entertaining even. But whether and for how long it can be sustained seem open to serious question. (Source)


The Stalkers of Jimmy Wales [...]

Those who endure harassment on Wikipedia have often found Jimmy Wales to be a sympathetic advocate. Perhaps this is not surprising, since Wales deals with stalkers at a level unimaginable to most people. In a Quora answer about voting, he revealed that fear of stalkers even keeps him from voting:

In Florida, in order to register to vote, you have to give your real home address. Not a post office box, not a proxy address of any kind. To do otherwise is a felony. There was some kind of scandal with Ann Coulter about this a few years back. And this address becomes a public record, accessible by anyone.

I have serious scary stalkers, including one who obsesses about my personal life, my children, and my home address. He’s posted a photo of himself with a gun along with a fantasy about a shootout with me.

Unlike people of a similar public profile, I don’t have billions of dollars and a security staff. I just live in my ordinary Florida home.

Therefore, I am unwilling to register to vote.

This is a shame, because I’ve always taken great pride in voting. (Source)


Not a Boxing Fan [...]

In October 1899 the New South Wales parliament debated the institution of “bank holidays” — holidays that, when falling on a weekend, would be celebrated on the following Monday. Member E.M. Clark was not a fan of the plan in general, but in what seems to be an attempt to mock the proposal, he argued that Boxing Day should be a bank holiday, postponed until the following Monday, because, among other things, what the heck is Boxing Day about? (Source)

Here is the submitted proposal which he aims to amend:

 1 Clause I. When any of the following days, 
   that is to say, the twenty-sixth day of Jan-
   uary, the anniversary of the birthday of her 
   Majesty or her successor, the first day of 
 5 August, or the anniversary of the birthday 
   of the Prince of Wales, falls on any day of 
   the week other than Monday, that day shall not 
   be a bank holiday, but the following Monday 
   shall be a bank holiday in lieu thereof ; and 
10 the schedule to the Bank Holidays Act, 1875, 
   is hereby amended accordingly. 

Clark proposes that Boxing Day should be added to the list:

He submitted that this was a legitimate proposal. He did not see why we should have a holiday following Christmas Day called Boxing Day. He admitted that he was not a very intelligent man, and was a little bit mixed on many subjects, but he would like to know what Boxing Day really meant. He did not even know why it was called Boxing Day, unless it was because following upon the jubilations of Christmas Day people fell into antagonism and fought with each other, and this was fixed as a special fighting day following Christmas Day.

He thought we might just as well include Boxing Day as any of the other holidays mentioned. In regard to the remarks of the Hon. member for Glebe, he admitted that Christmas Day was a sacred day, and he did not press that proposition. But that argument did not apply to Boxing Day. Christmas Day might fall on Monday and Boxing Day on Tuesday, and according to the argument of the Hon. member for Waratah Boxing Day should be postponed till the following Monday.

Look at the inconvenience to banking business in the case of promissory notes falling due on Saturday, when Christmas Day fell on Monday and Boxing Day on Tuesday. If there was an argument in favour of any of the holidays in the bill being postponed until Monday, it applied equally to Boxing Day. (Source)

Clark’s amendment to the Bank Holidays Bill was answered in the negative.


Wikity users can copy this article to their own site for editing, annotation, or safekeeping. If you like this article, please help us out by copying and hosting it.

Destination site (your site)
Posted on Categories Uncategorized

Before Posting to NetNews [...]

“Before Posting to NetNews” is the title of Intel guidelines for posting to Usenet newsgroups from their 1995 Netiquette document. Note this advice: you should read a mailing list or Usenet group for one to two months before posting anything. (Link)

3.1.1 General Guidelines for mailing lists and NetNews

 * Read both mailing lists and newsgroups for one to two months before you post anything. This helps you to get an understanding of the culture of the group.

 * Do not blame the system administrator for the behavior of the system users...

This is not bad advice, but it highlights part of the problem with communities. Because community knowledge and culture are buried in discourse, entry is time-consuming. God forbid you come into the community and post something someone already posted, or use the wrong acronyms.


Compare Twitter, where people feel free to jump into conversations of which they have very little knowledge. See Sea-Lioning

The Turn in Privilege [...]

In 1988, American academic and professor Peggy McIntosh published the essay White Privilege and Male Privilege: A Personal Account of Coming to See Correspondences through Work in Women’s Studies, in which she documented forty-six privileges which she, as a white person, experienced in the United States. For example: “I can be sure that if I need legal or medical help, my race will not work against me”, and “I do not have to educate my children to be aware of systemic racism for their own daily physical protection”.

McIntosh described white privilege as an “invisible package of unearned assets” which white people do not want to acknowledge, and which leads to them being confident, comfortable and oblivious about racial issues, while non-white people become unconfident, uncomfortable and alienated. McIntosh’s essay has been credited for stimulating academic interest in privilege, which has been extensively studied in the decades since. (Source)

From McIntosh’s paper:

It is concluded that all the various interlocking oppressions take two forms: an active form which can be seen; and an embedded form which members of the dominant group are taught not to see. To redesign the social system therefore requires acknowledgement of its colossal unseen dimensions. (Source)

Instead of Big Data Try Basic Data [...]

While vendors talk about big data, the data districts and colleges actually need is often ridiculously simple:

For example, the state collects student attendance data, but right now that data only shows how many kids are going to school every day on average. “You might have 99 percent of students attending,” Miller said, “but in fact there are six kids who are chronically absent.” The new data will be able to point out to teachers the specific students who are consistently missing school, so the teachers could investigate the root causes behind absences and try to address them.

The state also collects high school graduation rates, but many students drop out in middle school. CORE is now collecting this information, as well as highlighting the names of individual students who are not on track to graduate on time. (Source)
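The gap between aggregate and student-level data is easy to see in a few lines of code. This is a toy sketch with invented attendance numbers; the 90% cutoff is a commonly used definition of chronic absence, not a figure from the article:

```python
# Hypothetical attendance rates for ten students over one term.
attendance = {f"student_{i}": 0.99 for i in range(9)}
attendance["student_9"] = 0.55  # the chronically absent kid

# The aggregate view the state currently reports: looks healthy.
average = sum(attendance.values()) / len(attendance)
print(f"average attendance: {average:.0%}")  # ~95%

# The student-level view: name the kids who need help.
# (0.90 is an assumed "chronic absence" cutoff, not from the article.)
CHRONIC_CUTOFF = 0.90
chronically_absent = [s for s, r in attendance.items() if r < CHRONIC_CUTOFF]
print("chronically absent:", chronically_absent)
```

The averaged number hides the one student who needs intervention; the per-student view surfaces them immediately.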


Some of these examples may be seen as part of the shift described in From Precinct to Voter.

From Precinct to Voter [...]

Early political processes focused on precincts and wards as the unit of allocating campaign effort. You would get out the vote in the areas where you had broad support, and sometimes, less honorably, suppress the vote in those places where you didn’t.

As canvassing became supplemented with phone calls, direct mail, and other pieces, the unit increasingly became the individual voter. Campaigns were less concerned about getting out the vote of Ward 8 and more concerned about flushing out demographics such as “college-educated under-25s”.

As early as the 1960s Democrats began systematically assessing which precincts should be allocated campaign resources using statistics aggregated over fairly wide geographic areas. By the 1990s, the precinct was being supplanted by the individual voter as the unit of analysis, just as wall maps and clipboards were giving way to web applications and Palm Pilots. (Source)

As campaigns became more focused on these units of analysis, many traditional GOTV efforts became less privileged. And even where traditional methods were used, they were used under the guidance of the new approach — a ward captain in 1960 would get out the vote for his or her ward; by the 2000s they would be armed with data on what specific doors they needed to knock on, based on likely percentages of support and number of previous touches tracked.

Fear-Selling Methodology [...]

Computer consultants want to help you do better, and they may be sincere in what they promise, but they might be fooling you and themselves when fear seals the deal.

————

Ward Cunningham describes where he has seen this in the industry:

I’m reminded of the time as a child when my father came home and told of an ex-convict dropping by his business selling a check embosser. The man had been sent up for check fraud. Now he applied his skill before my father’s own eyes with my father’s own check. Wow, did you buy a check embosser? I asked. No, my father explained, he would never buy a product that addressed a fear that the salesman had just created.

I know software methodology. I created programming methods myself, out of self-defense. I had been successful programming. I just explained what I did so that I could keep doing it. But soon I was charging money for the explanations and business was good.

My consulting business made sense to me. I knew a better way to make software. People wanted a better way. The market brought us together. I made a living.

I even spoke about fear and building safety nets in our projects to catch us when things go wrong. But my clients remained fearful. I rarely made them happy.

Hate-Selling our Students reminds me of fear-selling my methods. A/B tests show that people part with money faster if you make things worse and then promise to make them better for a fee. Think discount airline.

I eventually gave up the consulting business and took more normal jobs. I’ve had a few. Only recently have I had a position where the hire was made in confidence and not as a solution to some imagined problem I helped create. It feels good.

Hate-Selling You Domain Names [...]

We’ve previously discussed the phenomenon of “hate-selling”, whereby instead of offering you an attractive process at a good price, companies (driven by the logic of the marketing conversion funnel) create processes that generate the appropriate clicks, profits, and retention rates but destroy the experience. Jim Groom finds another instance that looks like hate-selling: the process of buying a domain name. (Link)

We care so much about you as a customer that we’re going to stop overcharging you. How does that sound?

The term hate-selling is of recent vintage, first used in August 2015 to describe the travel industry. See Hate-Selling

The sequence Groom describes looks familiar. Sudden upgrades and special offers appear as you try to quit the service. Now it turns out you can get the service cheaper, as long as you say you are quitting.

Then comes the fear. Jim describes the pitch:

Next comes the fear. WARNING! You actually want to leave us! We can’t help you once you do, and everything could go to shit and you’re on your own. Are you really ready for this? Don’t do it. We know you overpay, but you’re relatively happy, right? You still want to go? Well, then click this box which frees us of any responsibility of helping you. Goodbye.

This pitch has been discussed elsewhere on wiki. See Fear-Selling Methodology

Other stages follow. This is really final! You should call us to talk about it. You have to call us to talk about it.

It’d be interesting to think about what the hate-selling industries (travel, domain names, cable tv, cell phone service) have in common.

Hate-Selling [...]

A post on Skift introduces a new term: “hate-selling”. You see it in travel where “conversion managers have run amok” and you are charged absurd combinations of little charges at the precise amounts analytics says you will tolerate. (Link)

Some examples of hate-selling in the travel industry from the article:

  • Car rental sites with crazy surcharges (a 17.25% premium location fee)

  • Low fare airline seats, “hate-sold” to you in such a way they say — “Here’s what you don’t get with your cheap seat, you idiot” in an attempt to upsell.

  • Buy-now-or-else prompts in the buying experience – “We’ve unearthed this special offer/upgrade for you only available if you click here now.”

The author shares some screenshots and receipts. It’s truly horrifying to look at. The author concludes:

This is what happens when you let conversion marketers run amok with customer experience. They made it a science, but forgot being human.

The problem, as the author points out, is that hate-selling works. You can mathematically prove it. You A/B test a gate-checked baggage fee and see whether revenue goes up or down. You take away the free in-flight soda from Economy class and give it to a special Economy+ class. You choose whichever version maximizes revenue.
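The decision rule being criticized is simple enough to write down. This toy sketch uses invented numbers; the point is that the optimizer has no term for customer experience at all:

```python
# Toy A/B result (invented numbers): revenue per 100 visitors and a
# customer-satisfaction score for each variant of the purchase flow.
variants = {
    "no_bag_fee": {"revenue": 210.0, "satisfaction": 0.80},
    "bag_fee_25": {"revenue": 224.0, "satisfaction": 0.55},
}

# The conversion-optimizer's rule: maximize revenue, full stop.
winner = max(variants, key=lambda v: variants[v]["revenue"])
print(winner)  # bag_fee_25 -- the fee wins even though satisfaction craters
```

Satisfaction is collected but never enters the objective function, which is the whole complaint.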

But revenue isn’t the only metric. The long term outcomes to your business are determined not by quarterly revenue, but by customer experience. Just ask Blackboard, which made a mint hate-selling the LMS to institutions only to find itself in a desperate state when it encountered a “one price gets you everything” competitor.

Blackboard now wants love, but all people remember are baggage fees and upsell.


Jim Groom discusses companies that are Hate-Selling You Domain Names

Problems with the Fee Model [...]

Offering insufficient basic service for a flat price with fees for upgrades seems like a smart move in a number of ways. For one, it shifts the burden of certain amenities onto the people who actually use them. Why should you subsidize a company’s “backup” feature, for example, when you aren’t using backups? However, as experience in the airline industry has shown, it can create the wrong incentives for both consumers and businesses. The New Yorker investigates. (Link)

But the fee model comes with systematic costs that are not immediately obvious. Here’s the thing: in order for fees to work, there needs be something worth paying to avoid. That necessitates, at some level, a strategy that can be described as “calculated misery.” Basic service, without fees, must be sufficiently degraded in order to make people want to pay to escape it. And that’s where the suffering begins. (Source)

Here are some of the hidden costs of fees:

  • Baggage Fees. We are not rational creatures. So to avoid a $25 baggage charge we will invest $40 of our time packing clothes into a smaller bag, $10 on smaller toiletries that can get through security, and another $20 on whatever we forgot. As a group we’ll also shove increasingly more into overhead bins which means slower loading and unloading of the plane.

  • Rebooking Fees. We’re increasingly assessed hefty rebooking fees which go far beyond what it costs a company to rebook. Because we discount future trouble, we ignore the rebooking fees when we book, leading to suboptimal behavior. (Delta and United collected nearly a billion dollars of rebooking fees in 2014).
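Adding up the baggage-fee example above makes the irrationality concrete (the cost labels come from the example; the code is just a sum):

```python
# What the traveler in the example spends to dodge a $25 bag fee.
baggage_fee = 25
avoidance_costs = {
    "time spent cramming into a smaller bag": 40,
    "travel-size toiletries": 10,
    "replacing whatever was forgotten": 20,
}
total_avoidance = sum(avoidance_costs.values())
print(f"fee avoided: ${baggage_fee}; spent avoiding it: ${total_avoidance}")
# fee avoided: $25; spent avoiding it: $70
```

We pay nearly three times the fee in order to avoid paying the fee.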

The biggest problem, however, is on the business incentive side. In an everything-included model, each company competes on the best overall experience for the customer.

In a fee-based model, the way to make money is to increase the differential between the economy ticket and the ticket-with-benefits. One way to do that is to create new and exciting benefits. But the easier way is to take things that should be included (humane boarding, reasonable rebooking policies, some level of refreshment on the plane) and move them to fee-based tiers.

In other words, you get rewarded for making the basic ticket people miserable.


See also Hate-Selling

Flynn Effect [...]

The “Flynn effect” refers to the observed rise in IQ scores over time, resulting in norms obsolescence. In other words, as the 20th century progressed, IQ tests had to be recalibrated, because on average people did better on them.

And not just a little bit better. A lot better:

In the 1980s, social scientist James Flynn made a startling discovery: Real IQ scores had been going up, on average, three points every decade since the early 20th century. The existence of this increase had been masked by the fact that the test gets updated and renormed every generation or so, pushing the average score back to 100.

The implications of the eponymous “Flynn effect” are astonishing. A person of average intelligence today would have registered a full two standard deviations higher a century ago, giving him a “very superior” score of 130. We’re getting smarter. A lot smarter. (Source)
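The two-standard-deviation claim follows directly from the numbers in the quote: three points per decade over roughly ten decades, on a scale whose standard deviation is 15:

```python
points_per_decade = 3   # Flynn's observed gain per decade
decades = 10            # roughly a century of data
iq_sd = 15              # standard deviation of the IQ scale

total_gain = points_per_decade * decades   # 30 points
sds_gained = total_gain / iq_sd            # 2.0 standard deviations
score_on_old_norms = 100 + total_gain      # today's average, scored on century-old norms
print(total_gain, sds_gained, score_on_old_norms)  # 30 2.0 130
```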

When looked at closely, the gains have not been in mathematics or vocabulary, but specifically on the portions of the test most dedicated to abstract reasoning.

Flynn discusses his findings in this TED Talk:


Flynn talks here about Taking the Hypothetical Seriously.

Intellectual Impairment below 10 µg [...]

Contrary to what was previously thought, there may be no level at which lead exposure does not adversely affect intelligence. This study looked at the effects on IQ of levels below what was perceived to be the safe lead blood level of 10 µg/dL and found significant adverse effects on intelligence as lead exposure increased. In fact, “IQ declined by 7.4 points as lifetime average blood lead concentrations increased from 1 to 10 µg per deciliter.” (Link)


The broadness of lead impact may call into question hypotheses about the Flynn Effect

Cai Lead Study [...]

A 2007 study of the effects of lead on behavior found significant increases in aggressiveness correlated with increases in lead levels in children. The study also suggested that blood lead levels might produce such an effect not only in the early years of life (on development) but on the concurrent behavior of seven-year-olds.

For every increase of 10 micrograms per deciliter of blood, children scored about five points worse on a 100-point scale that measures “externalizing” behavior problems, such as aggression and acting out.

Also, for every 10-microgram increase, the children were nearly 1½ times more likely to exhibit these types of problems. (Source)

(Source)

The study was published in Pediatrics and is available through NIH public access. (Link)

Reputation Traps [...]

A reputation trap is a protective device set up around issues of sensitivity in a given profession. Reputation traps enforce epistemic closure: to express a certain view or engage in a certain type of work renders one untrustworthy, which in turn further invalidates the view.

The term was coined by Huw Price in an article on cold fusion research:

Again, there’s a sociological explanation why few people are willing to look at the evidence. They put their reputations at risk by doing so. Cold fusion is tainted, and the taint is contagious – anyone seen to take it seriously risks contamination. So the subject is stuck in a place that is largely inaccessible to reason – a reputation trap, we might call it. People outside the trap won’t go near it, for fear of falling in. ‘If there is something scientists fear, it is to become like pariahs,’ as Lundin puts it. People inside the trap are already regarded as disreputable, an attitude that trumps any efforts that they might make to argue their way out, by reason and evidence. (Source: https://aeon.co/essays/why-do-scientists-dismiss-the-possibility-of-cold-fusion)

Sometimes such traps are helpful, in cases where there is a near-unanimous consensus backed by overwhelming evidence. As an example, a person expressing a view that man-made global warming does not exist would (at this point) find it hard to work in mainstream climatology. It would be difficult to trust the expertise of someone who could look at the available evidence and come to that conclusion. A history of effort to disprove evolution would cause similar issues for a biologist (and again, rightly so).

In most cases, however, reputation traps do more harm than good. Many scientists steered clear of certain work in evolutionary science because it bore a superficial resemblance to the disgraced ideas of Lamarck. Today those same issues form one of the more exciting branches of evolutionary science.

As Price notes, we find a similar situation today with cold fusion research:

Again, the explanation for ignoring these claims cannot be that other attempts failed 25 years ago. That makes no sense at all. Rather, it’s the reputation trap. The results are ignored because they concern cold fusion, which we ‘know’ to be pseudoscience – we know it because attempts to replicate these experiments failed 25 years ago! The reasoning is still entirely circular, but the reputation trap gives its conclusion a convincing mask of respectability. That’s how the trap works.

Fifty years ago, Thomas Kuhn taught us that this is the usual way for science to deal with paradigm-threatening anomalies. The borders of dominant paradigms are often protected by reputation traps, which deter all but the most reckless or brilliant critics. (Source: https://aeon.co/essays/why-do-scientists-dismiss-the-possibility-of-cold-fusion)


When reputation traps are institutionally defined, they are sometimes considered Lysenkoism

Reputation Traps are one issue around innovation, as they keep people from working on paradigm-shifting solutions.

Flint vs. 1976 [...]

Flint, Michigan recently declared a state of emergency because the use of water from the Flint River as a public water source had caused a dangerous spike in blood lead levels.° It’s anticipated that this public policy disaster will cause cognitive impairment and possible behavioral problems in the kids affected; lawsuits are underway. But how does the amount of lead in children’s blood in Flint compare to what Generation X was exposed to as a result of tailpipe emissions?

The answer is a bit shocking. The Flint emergency was declared because for a brief period of time (a few months) over 7 percent of Flint’s children had levels in excess of 5µg/dL. That’s nearly three times the current national rate.

Blood Lead Levels in Flint Michigan. (Source)

Children under the age of five tend to have higher blood lead levels than other groups, as their bodies absorb much more of it.° The damage can last a lifetime, which is why in 2012, based on a review of recent research, the CDC set 5µg/dL as the level at which children should be entered into case management to prevent further exposure. About 2.5% of children nationally had this level of exposure from 2007 to 2010, although the share has decreased a bit since then.°

How do these Flint emergency levels (7% of children above the 5µg/dL level) compare to historical levels? Well, in 1970 the average preschooler had blood lead levels of 23µg/dL, four to five times above the danger level:

Lead Levels and Crime
(Source)

Moreover, these were not levels that children were exposed to for a month or two: this would have been exposure to lead at levels 4 to 5 times the recommended maximum over the child’s entire childhood.

Comparison with average rates (vs. dangerous level rates) is perhaps even more striking. The average blood lead level in a 1970 preschooler was about 10 times the average blood lead level today.
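A quick back-of-the-envelope check of the comparison, using the figures cited above:

```python
avg_1970 = 23.0         # µg/dL, average U.S. preschooler, 1970
cdc_level = 5.0         # µg/dL, 2012 CDC case-management threshold
flint_share = 0.07      # share of Flint children above the threshold at the peak
national_share = 0.025  # national share above the threshold, 2007-2010

# 1970's *average* child carried 4.6x the modern intervention threshold,
# while Flint's emergency is a 2.8x elevation in the share above it.
print(f"1970 average vs. CDC threshold: {avg_1970 / cdc_level:.1f}x")
print(f"Flint vs. national share above 5 µg/dL: {flint_share / national_share:.1f}x")
```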



Blue-Eyed Shuffle [...]

Flickr photo, user LookIntoMyEyes (source)

A genetic mutation that took place 6,000–10,000 years ago is the cause of the eye color of all blue-eyed humans alive on the planet today, and can be traced to a single ancestor. There appears to be no evolutionary advantage to the mutation; it is instead a good example of how biological processes constantly reshuffle our genetic makeup into new combinations.

“Originally, we all had brown eyes,” said Professor Hans Eiberg from the Department of Cellular and Molecular Medicine. “But a genetic mutation affecting the OCA2 gene in our chromosomes resulted in the creation of a “switch,” which literally “turned off” the ability to produce brown eyes.” The OCA2 gene codes for the so-called P protein, which is involved in the production of melanin, the pigment that gives colour to our hair, eyes and skin. The “switch,” which is located in the gene adjacent to OCA2 does not, however, turn off the gene entirely, but rather limits its action to reducing the production of melanin in the iris — effectively “diluting” brown eyes to blue. The switch’s effect on OCA2 is very specific therefore. If the OCA2 gene had been completely destroyed or turned off, human beings would be without melanin in their hair, eyes or skin colour — a condition known as albinism. (Source: http://www.sciencedaily.com/releases/2008/01/080130170343.htm)

Weird Twitter [...]

Considered by some to be an outgrowth of the FYAD forum on Something Awful,° Weird Twitter is a group of comic artists working at the intersection of self-expression, comedy, and surrealism. Artists working in the area use a variety of techniques to create bizarre comic scenes, often through poorly punctuated imagined dialogue or tweets from exaggerated personas. The surrealist nature of the medium is mixed with a dedication to authenticity that is somewhat unique in comedy.

(source)
(source)

While many think of Weird Twitter as a place for a certain type of comedian, it is a place for poets as well. Patricia Lockwood, famous for the poem “Rape Joke” (http://www.theawl.com/2013/07/rape-joke-patricia-lockwood), is known on Weird Twitter for originating the “Sext” form of Twitter joke:°


The name “weird twitter” comes from a “digifesto” by Sebastian Benthall. In that 2012 post he argued that there was an emerging subculture on Twitter held together by a mix of brutal personal honesty and surrealism.° While he would later describe the post as an attempt to troll the community, the name stuck.


There is, as always, a debate about the name “Weird Twitter”, which many participants feel to be too derogatory or limiting.° However, from a cultural history perspective the term may make sense. There are parallels here to the New Weird America movement in music which mixed elements of 1960s psychedelia with an emphasis on a deeper psychological authenticity.° Of course, many musical artists hated that term as well.

And there is something going on here in many cases: an attempt to create a world that is more real than real.

(source)

(source)

One of the more interesting elements of Weird Twitter is how it impacts the Twitter activity and voice of the general public. As more of the public are exposed to it through retweets, the work behind crafting the style tends to be taken for granted, as a normal way of expressing oneself on Twitter. It becomes harder to tell the difference between posts from bougie beth, a master of the form, and posts from your relatives. The idiom of the subculture merges into the culture.



Weird Twitter: An Oral History. (Link)

Templated Self [...]

Coined by Amber Case, the term “templated self” describes how the affordances and defaults of systems shape online expressions of identity.

A self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information

For example, the design of the Facebook profile expresses one’s identity as a combination of where you live, where you work, where you were educated, and what you like. Audrey Watters describes how software such as learning management systems, although not explicitly social, can have some of the same identity-templating effects. (Link)


See also schema

The Spy Who Painted Me [...]

The CIA was one of the biggest patrons of modern art in the postwar world:

For decades in art circles it was either a rumour or a joke, but now it is confirmed as a fact. The Central Intelligence Agency used American modern art – including the works of such artists as Jackson Pollock, Robert Motherwell, Willem de Kooning and Mark Rothko – as a weapon in the Cold War. In the manner of a Renaissance prince – except that it acted secretly – the CIA fostered and promoted American Abstract Expressionist painting around the world for more than 20 years. (Source)

Dwarf

The artists themselves did not know:

The artists themselves were completely unaware that their work was being used as propaganda. On what agents called a “long leash,” they participated in several exhibitions secretly organized by the CIA, such as “The New American Painting” (see catalog cover at top), which visited major European cities in 1958-59 and included such modern primitive works as surrealist William Baziotes’ 1947 Dwarf (below) and 1951’s Tournament by Adolph Gottlieb above. (Source)

The plan seems to have been hatched after a disastrous attempt to run a State Department show focused on modern art.° Advancing American Art was an attempt to show a world shocked by American xenophobia and consumerism that we had an artistic and intellectual culture. In a hilarious case of unintended consequences, the show that was meant to present us as open-minded was shut down and roundly criticized by Congress.

Still, the promotion abroad of American art and letters after 1946 required a delicate form of intrigue between private institutions and government agencies. The more radical or modernist the art and letters, the more covert the government’s participation needed to be. The State Department and the U.S.I.A. could send “Oklahoma!” around the world (and did), but they could not very comfortably arrange emergency funding to keep Partisan Review afloat, as the C.I.A. seems to have done in 1953, or promote a style of avant-garde painting offensive to congressional tastes. (Source)

Horse_ebooks [...]

Horse_ebooks was a Twitter account initially thought to be generated by a spam bot slipping under the radar. It became famous for seemingly randomly generated yet poetic tweets.

Source

It was revealed in 2013 that the account had been, at least since 2011, the work of two media artists who had written each tweet in an attempt to impersonate a bot veering towards the poetic.

Alena Smith, a writer of Twitter fiction herself, summarized the confused feeling that followers felt on the reveal:

We thought we had happened upon a trove of found art, and like a horde of minor Duchamps, we faved and retweeted these supposedly accidental, inexplicably engaging, ready-made bits of Internet nonsense, savvily designating them as interesting, as amusing, as meaningful. Instead, the disclosure that @horse_ebooks was already the intentionally curated work of two artists — two writers — put us back in our place as a passive audience: as performance-spectators, as fiction-readers. (Source)

She goes on to note that the work succeeded as art:

The fiction @Horse_ebooks wove was not the kind we typically find in books. The intention was not to tell a story but rather to involve us in an experience — an experience that could only exist on Twitter and, indeed, is reflective about ethical quandaries that arise specifically when humans are engaged with Twitter. (How do we know if we’re interacting with other humans when we can’t see them? What if they are robots? What does it mean to “follow” a robot? Are we becoming robots ourselves? Et cetera.) (Source)


See Objet Trouvé for the philosophy of found objects.

Wow So Portland! is a bot that we think actually is a bot, but it is still a work of art.

via Horse_ebooks.

Mullet Strategy [...]

Simon Owens describes the “mullet strategy” of sites like Medium in a recent article: business in front, party in the back. The front pages are high quality, professional content to draw in users. The back is the social media platform of unknowns speaking to unknowns. (Link)

The strategy was pioneered by early political blogging sites, most notably Daily Kos, which modeled the idea of a blogging community with a curated front page. In Kos-like sites (including Blue Hampshire) dedicated writers would write for the front page to build a following, and to encourage people to sign up for accounts to comment.

Once people signed up to comment, however, they found they could write their own blog posts under that account. A variety of front page affordances then allowed the “back pages” bloggers to gain recognition, the ultimate of which was being “front-paged” by an editor.

The strategy worked remarkably well. Through a series of efforts at engaging readers, our site Blue Hampshire was able to build a registered user base of over 5,000 users. When users made good comments they were encouraged to write a post on it. If the post was good, it would be front-paged. Consistently good posters were invited to become permanent “front-pagers”.

Part of this relates to Shirky’s Own Worst Enemy point that communities need ways to elevate the status of the users who are most productive on a site. But the bigger point is that the hybrid magazine/community model gives the community an easy way to grow, and associates membership with a site recognized for solid writing or reporting, while allowing the messiness that communities need to thrive.

The Mullet Strategy was supported in software platforms both by Daily Kos’s platform and by the SoapBlox platform used by many state blogs. An argument could be made that the success of progressive state blogs was partially driven by access to these platforms. Certainly the transition of Blue Hampshire to a generic WordPress platform coincided with a Colony Collapse on the site (though many, many other factors played a role).


A claim that the mullet strategy is deceptive and exploitative. html

Labor Illusion [...]

While we say that we only care about results, in reality we tend to value results based on the effort we perceive people to have put into a task. Artists get asked “How long did this take you to paint?” and workers who accomplish little but leave late are seen as exemplary workers. Some psychologists refer to this as the “labor illusion”.

And the ubiquity of complaining about long days at the office suggests that people tend to apply this idea to themselves, too, giving more weight to time spent laboring than actual accomplishments. Because one, of course, does not necessarily lead to another: Consider the recent study, for example, that found office workers routinely fib about working 80-hour weeks in order to get ahead at work. Their bosses mostly fell for it, suggesting that they couldn’t tell the difference in terms of what their subordinates actually got done. (post)

via Labor Illusion | Hapgood.

Lead Blood Levels Against Unleaded Gasoline [...]

Lead Levels

Lead levels in the U.S. declined quickly after the banning of lead in gasoline. The year 1976 represented a peak for lead, after which followed a steep decline. Children from 1979 onward grew up with a fraction of the exposure to lead that their older siblings would have had.

If the lead theory is correct, what we should see then is a decline starting about 1993 or so, as lead-free young males move into the 16-24 demographic associated with violent crime. (Source)


Peter Elbow [...]

Peter Elbow is a teacher and theorist of composition. His work focuses on the writing process, and the ways in which teachers can help their students become better writers through embracing the messiness of the writing process.


Techniques advocated by Elbow include Minimal Grading and The Believing Game.

Lysenkoism in Poland (Test Page) [...]

This should be a YouTube video

This is a paragraph with a note,* a note in the middle.

Lysenko with Stalin


First Header | Second Header
Content Cell | Content Cell
Content Cell | Content Cell

This is a link. °

 

 

Hierarchy and Cooperation [...]

We have shown that achieving cooperation among humans is more difficult when there is an underlying hierarchical structure producing different ranks between people and therefore unequal payoffs for the participants. This result is driven by insufficient contributions from lower ranked individuals who cannot be confident that they will benefit from cooperating. Remarkably, human behavior is consistent with a trend that permeates the rest of the primate order; primates in steeply hierarchical societies have difficulty cooperating for benefits that must be divided, whereas primates organized in weakly hierarchical (egalitarian) societies are more successful. (Source)
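The dynamic the researchers describe can be illustrated with a toy public goods game. The numbers and payoff rule below are invented for illustration and are not taken from the study; they simply show how rank-based shares can make contributing irrational for low-ranked players.

```python
# Illustrative sketch (numbers invented, not from the cited study) of why
# hierarchy can undermine a public goods game: contributions are pooled and
# multiplied, but unequal rank-based shares mean a low-ranked contributor
# can end up with less than their starting endowment.

def payoffs(contributions, shares, multiplier=1.5, endowment=10):
    """Each player keeps (endowment - contribution) plus their share of the pot."""
    pot = multiplier * sum(contributions)
    return [endowment - c + pot * s for c, s in zip(contributions, shares)]

# Egalitarian: equal shares -- both players gain from full contribution.
equal = payoffs([10, 10], [0.5, 0.5])   # both end with 15, up from 10

# Hierarchical: the top rank takes 80% of the pot -- the low-ranked player
# ends with 6, worse than the 10 they started with, so contributing is
# irrational for them and cooperation unravels.
ranked = payoffs([10, 10], [0.8, 0.2])
```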

Streams Don’t Merge [...]

Network map showing two widely disconnected groups

Emma Pierson, writing about tweets and Ferguson, notes that red and blue tweeters live in discourse environments that are nearly fully separated. The graphic she produced demonstrates the severity of this division.

So we have two groups of people who rarely communicate, have very different backgrounds, think drastically different things, and often spray vitriol at each other when they do talk. Previous studies of Twitter have found similar echo chambers, the Israel-Palestine conflict offering one representative example. It is unclear to what extent Twitter merely reflects social divisions as opposed to causing them; I find it unlikely that Mckesson and the red tweeters would be friends if they met over beers. But even this preliminary analysis does not bode well for the possibility of reconciliation.– from Quartz (post)

That said, it’s not clear that the intent here is to communicate with each other. As Bonnie Stewart and others have noted, these events are used by activists tactically: the goal is not persuasion of enemies but often the much narrower aim of shaming the press into covering an undercovered story, alerting other activists to an event or issue, or mobilizing moderates to more radical action.


Bonnie Stewart talks about the activist uses of Twitter. See Tactical Twitter

Related to polarization: in social media, Anger Spreads Fastest

Our world now is a result of the lifestream concept. See Lifestream History

schema theory [...]

The term schema was coined by Jean Piaget.  Richard Anderson described how the concept applied to language comprehension in 1977[https://www.ideals.illinois.edu/bitstream/handle/2142/17946/ctrstreadtechrepv01977i00050_opt.pdf?sequence=1 pdf]

Schema (in the context of communication or language learning) refers to an underlying concept that guides and shapes an interaction.  For example, if you tell someone that you are going to go to a movie, he or she probably has a schema of the process something like this:

  1. Arrive at theater
  2. Go to window
  3. Tell box office worker what film you want to see and when
  4. If seats are available, pay for tickets
  5. Enter theater
  6. Go to concession stand
  7. etc.

This schema serves as a template for the interaction.
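The template idea can be made concrete in code. The sketch below (my own minimal illustration, not anything from Anderson or Piaget) models a schema as an ordered list of expected steps that predicts what comes next in an interaction:

```python
# Minimal sketch of a schema as a script-like data structure: an ordered
# list of expected steps used to interpret and predict an interaction.
# (Illustrative only; the names here are not from the source.)

movie_schema = [
    "arrive at theater",
    "go to window",
    "tell box office worker what film and when",
    "pay for tickets if seats are available",
    "enter theater",
    "go to concession stand",
]

def next_expected_step(schema, steps_completed):
    """Given how many steps have been observed, the schema predicts the next one."""
    if steps_completed < len(schema):
        return schema[steps_completed]
    return None  # interaction complete; the schema makes no further prediction

# After arriving and going to the window, the schema predicts step three:
print(next_expected_step(movie_schema, 2))
```

A culturally different schema for the same interaction would simply be a different list, which is one way to think about the mismatched expectations described below.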

Schemata are sometimes quite culturally dependent. For example, the interaction “getting a cup of coffee” might trigger very different expectations in someone raised in the USA versus someone raised in Italy.

 

 

 


Templated Self [...]

Coined by Amber Case, the term “templated self”  describes how the affordances and defaults of systems affect online expressions of identity.

A self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information

For example, the design of the Facebook profile expresses one’s identity as a combination of where you live, where you work, where you were educated, and what you like. Audrey Watters describes how software such as learning management systems, although not explicitly social, can have some of the same identity-templating effects[http://hackeducation.com/2014/07/22/reclaim-your-domain-hackathon/ cite]


see also schema theory


Wow So Portland! [...]


Wow So Portland! is a bot-run twitter account by @tinysubversions that selects random images from portions of the Google Street View of Portland and pairs them with common sayings about Portland’s unique and quirky hipsterness. The images selected generally come from the office parks and factory shipping docks of Portland. The effect of the bot, which runs four times a day, is broadly humorous as most street scenes look like any formerly industrial city trying to transition to a post-industrial economy.
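The bot’s core trick can be sketched in a few lines. This is a hypothetical reconstruction, not @tinysubversions’ actual code: the sayings, the bounding box, and the function names below are all invented for illustration.

```python
# Hypothetical sketch of the bot's logic (not the real @tinysubversions code):
# pair a randomly chosen Street View location in Portland with a randomly
# chosen stock saying about Portland's quirkiness. A random location is far
# more likely to land on an office park than on a hip coffee shop, which is
# the joke.

import random

SAYINGS = [  # invented examples, not the bot's actual phrases
    "So Portland!",
    "Keep Portland weird.",
    "Only in Portland.",
]

def random_portland_location(rng):
    # Rough bounding box around Portland, OR (approximate coordinates).
    lat = rng.uniform(45.45, 45.60)
    lon = rng.uniform(-122.75, -122.55)
    return lat, lon

def compose_tweet(rng=random):
    lat, lon = random_portland_location(rng)
    return f"{rng.choice(SAYINGS)} ({lat:.4f}, {lon:.4f})"
```

In the real bot the coordinates would be fed to the Street View image API; the point of the sketch is just that uniform random sampling of a city's geography subverts the curated media image of it.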


The media narrative of Portland is thus revealed to be focused on the tiniest sliver of Portland life: the Portland of the upper-middle-class young creative professional that the press loves to cover, at the expense of the broad experience of the general population.



Horse_ebooks was thought to be a bot unintentionally producing poetic tweets, but the truth was more interesting.

Black Box Society [...]

Term coined by Frank Pasquale to describe a society where intellectual property laws make the algorithms responsible for important individual and societal decisions opaque — not subject to study, review, or transparency. Pasquale has written a book with the same title.

Notable examples of opaque algorithms include Google PageRank and the algorithms responsible for prioritizing an individual’s Facebook feed[https://medium.com/message/how-facebook-s-algorithm-suppresses-content-diversity-modestly-how-the-newsfeed-rules-the-clicks-b5f8a4bb7bab#.xom3d6wpz cite]


The term is derived in part from the use of “black box” to describe a component the inner workings of which are not evident.  The Oxford English Dictionary notes this usage as early as 1949.

Bell Syst. Techn. Jrnl. 28 367   In principle, one needs no knowledge of the physics of the transistor in order to treat it circuitwise; any ‘black box’ with the same electrical behavior at its terminals would act in the same way.

 

Video of a Pasquale talk on the concept at the Harvard Berkman Center[https://www.youtube.com/watch?v=f_PFhJrPxoU YouTube]

 


Baumol’s Cost Disease [...]

Named for co-author William Baumol, the term refers to an economic phenomenon in which wages rise in certain sectors despite no gains in productivity.

Baumol and co-author William Bowen originally considered the economics of the performing arts and noted that the production of Beethoven string quartets was not amenable to productivity improvements. Producing a Beethoven string quartet takes four musicians a certain amount of time, and improvements in technology won’t change this.  Therefore, rising wages in the sector are not linked to productivity gains.

Baumol later extended the notion to sectors such as health care and education, where, for example, output is measured in Carnegie Units and credit hours, both of which have traditionally used a time-based definition.
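The mechanism is easy to see with a toy calculation. The numbers below are invented for illustration: if wages track economy-wide productivity but a quartet's "output" is fixed at four musicians for a set time, the labor cost of a performance rises even though nothing about the performance changed.

```python
# Illustrative sketch of Baumol's cost disease (all numbers invented).
# Manufacturing productivity grows 2%/yr; a string quartet's does not.
# Wages rise with economy-wide productivity, so the unit cost of a
# performance climbs while the factory's unit cost stays flat.

def unit_labor_cost(wage, output_per_hour):
    return wage / output_per_hour

wage = 20.0             # $/hour, shared across sectors
factory_output = 10.0   # widgets per hour
concert_output = 1.0    # performances per block of musician time (fixed)

for year in range(30):
    wage *= 1.02            # wages track economy-wide productivity growth
    factory_output *= 1.02  # manufacturing keeps getting more productive
    # concert_output stays fixed: four musicians, same time, same quartet

factory_cost = unit_labor_cost(wage, factory_output)   # unchanged: 2.0
concert_cost = unit_labor_cost(wage, concert_output)   # up ~81% (1.02**30)
```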


The first chapter of Baumol’s 2012 book, The Cost Disease, is available online from Yale University Press[http://yalepress.yale.edu/yupbooks/excerpts/Baumol_excerpt.pdf pdf]


Per Capita School Age Population Is Shrinking in U.S. [...]


The school-age population (ages 5-17), considered as a total, bottomed out in the 1990s and has recently recovered. But as a per capita measure, the school-age population has been shrinking since at least 1970.

Sensory Adaptation and Inattentional Blindness [...]

Although our perceptions are built from sensations, not all sensations result in perception. In fact, we often don’t perceive stimuli that remain relatively constant over prolonged periods of time. This is known as sensory adaptation. Imagine entering a classroom with an old analog clock. Upon first entering the room, you can hear the ticking of the clock; as you begin to engage in conversation with classmates or listen to your professor greet the class, you are no longer aware of the ticking. The clock is still ticking, and that information is still affecting sensory receptors of the auditory system. The fact that you no longer perceive the sound demonstrates sensory adaptation and shows that while closely associated, sensation and perception are different.

There is another factor that affects sensation and perception: attention. Attention plays a significant role in determining what is sensed versus what is perceived. Imagine you are at a party full of music, chatter, and laughter. You get involved in an interesting conversation with a friend, and you tune out all the background noise. If someone interrupted you to ask what song had just finished playing, you would probably be unable to answer that question.

See for yourself how inattentional blindness works by checking out this selective attention test from Simons and Chabris (1999).

One of the most interesting demonstrations of how important attention is in determining our perception of the environment occurred in a famous study conducted by Daniel Simons and Christopher Chabris (1999). In this study, participants watched a video of people dressed in black and white passing basketballs. Participants were asked to count the number of times the team in white passed the ball. During the video, a person dressed in a black gorilla costume walks among the two teams. You would think that someone would notice the gorilla, right? Nearly half of the people who watched the video didn’t notice the gorilla at all, despite the fact that he was clearly visible for nine seconds. Because participants were so focused on the number of times the white team was passing the ball, they completely tuned out other visual information. Failure to notice something that is completely visible because of a lack of attention is called inattentional blindness.

In a similar experiment, researchers tested inattentional blindness by asking participants to observe images moving across a computer screen. They were instructed to focus on either white or black objects, disregarding the other color. When a red cross passed across the screen, about one third of subjects did not notice it (Figure) (Most, Simons, Scholl, & Chabris, 2000).

A photograph shows a person staring at a screen that displays one red cross toward the left side and numerous black and white shapes all over.
Nearly one third of participants in a study did not notice that a red cross passed on the screen because their attention was focused on the black or white figures. (credit: Cory Zanker)

Portions of text originally from Psychology by OpenStax College, licensed under a Creative Commons License 4.0 International

Lysenkoism [...]

Lysenkoism, named for Soviet agronomist Trofim Lysenko, was a political doctrine in Joseph Stalin’s Soviet Union that mandated that all biological research conducted in the USSR conform to a modified Lamarckian evolutionary theory. It was imposed largely for ideological reasons. Lysenkoism is today used as a term for centralized attempts to dictate the direction of science and technology.

The Soviet objection to Darwinism was its embrace of “Malthusian ideas”. [https://www.marxists.org/reference/archive/lysenko/works/1940s/report.htm html]


As a response to Lysenkoism, Polanyi developed the theory of Spontaneous Order.

Darwin and Malthus

Lysenkoism in Poland

Many conservatives believe Global Warming research is the new Lysenkoism. [http://www.forbes.com/sites/peterferrara/2013/04/28/the-disgraceful-episode-of-lysenkoism-brings-us-global-warming-theory/ post] [http://www.navlog.org/lysenkoism.html html]

Reputation Traps [...]

A reputation trap is a protective device set around issues of sensitivity in a given profession. Reputation traps enforce epistemic closure: to express a certain view or engage in a certain type of work renders one untrustworthy, which in turn further invalidates the view.

The term was coined by Huw Price in an article on cold fusion research:

Again, there’s a sociological explanation why few people are willing to look at the evidence. They put their reputations at risk by doing so. Cold fusion is tainted, and the taint is contagious – anyone seen to take it seriously risks contamination. So the subject is stuck in a place that is largely inaccessible to reason – a reputation trap, we might call it. People outside the trap won’t go near it, for fear of falling in. ‘If there is something scientists fear, it is to become like pariahs,’ as Lundin puts it. People inside the trap are already regarded as disreputable, an attitude that trumps any efforts that they might make to argue their way out, by reason and evidence. [https://aeon.co/essays/why-do-scientists-dismiss-the-possibility-of-cold-fusion source]

Sometimes such traps are helpful, in cases where there is a near-unanimous consensus backed by overwhelming evidence. As an example, a person expressing a view that man-made global warming does not exist would (at this point) find it hard to work in mainstream climatology. It would be difficult to trust the expertise of someone who could look at the available evidence and come to that conclusion. A history of effort to disprove evolution would cause similar issues for a biologist (and again, rightly so).

In most cases, however, reputation traps do more harm than good. Many scientists steered clear of certain work in evolutionary science because it bore a superficial resemblance to the disgraced ideas of Lamarck. Today those same issues form one of the more exciting branches of evolutionary science.

As Price notes, we find a similar situation today with cold fusion research:

Again, the explanation for ignoring these claims cannot be that other attempts failed 25 years ago. That makes no sense at all. Rather, it’s the reputation trap. The results are ignored because they concern cold fusion, which we ‘know’ to be pseudoscience – we know it because attempts to replicate these experiments failed 25 years ago! The reasoning is still entirely circular, but the reputation trap gives its conclusion a convincing mask of respectability. That’s how the trap works.

Fifty years ago, Thomas Kuhn taught us that this is the usual way for science to deal with paradigm-threatening anomalies. The borders of dominant paradigms are often protected by reputation traps, which deter all but the most reckless or brilliant critics. [https://aeon.co/essays/why-do-scientists-dismiss-the-possibility-of-cold-fusion source]


When reputation traps are institutionally defined, they are sometimes considered Lysenkoism.

Reputation Traps are one issue around innovation, as they keep people from working on paradigm-shifting solutions.

See also Sugar Trap for an example from the field of nutrition.

The Hirschsprung Family [...]

Numerous people have tweeted and blogged this 1881 family portrait as “People Ignoring People Before Cell Phones”. It was rather difficult to track down details on it, so we capture them here.

The Hirschsprung Family (1881)

It was painted by Peder Severin Kroyer, a famous Dane known for painting scenes of 19th century Danish life.

It was commissioned by Heinrich Hirschsprung, a tobacco manufacturer and patron of the arts at that time. He became good friends with Kroyer, and Kroyer would have known all of the family members he painted quite well. The painting was meant to show a happy, engaged family for which Hirschsprung had a deep affection. [https://en.wikipedia.org/wiki/Heinrich_Hirschsprung cite]


Idea for future page: It strikes me right now that the big sin people are reacting to with cell phones is not engagement with something else among others, but engaging with distant others over the people in front of you. This violates the “natural order of things” in a way that interacting with knitting, a newspaper, or the view does not.

Goethe’s Color Theory [...]

Johann Wolfgang von Goethe is well known for his work in art and criticism, but he also worked in science. One of his contributions was a series of works on the nature of color. Conceived as a refutation of Newton’s theory of light, it did not have much impact in the physical sciences, but its attention to detail in how we experience light and its embrace of color symbolism would influence painters for more than a century. It would also form an important contribution to the psychology of perception.

The crux of the theory seems to have been that Newton’s assertion that white light contains all colors was wrong. To Newton, light is presence and dark is absence; this offends both Goethe’s philosophical sensibilities and his perception of the world.

Goethe instead proposes to see light and dark as opposing forces, an assertion in line with his general view of life, which saw humanity as torn between opposing forces. Color comes about through the conflict between the light and the dark.

Goethe’s color wheel (source)

As an example, Goethe sees yellow as a weakening of light by darkness:

This is the color nearest the light. It appears on the slightest mitigation of light, whether by semi-transparent mediums or faint reflection from white surfaces.

On the other hand, blue is darkness that has been weakened by light:

As yellow is always accompanied with light, so it may be said that blue still brings a principle of darkness with it.

This color has a peculiar and almost indescribable effect on the eye. As a hue it is powerful — but it is on the negative side, and in its highest purity is, as it were, a stimulating negation. Its appearance, then, is a kind of contradiction between excitement and repose.

As the upper sky and distant mountains appear blue, so a blue surface seems to retire from us.

Turner, William (1843) Light and Colour (Goethe’s Theory) – The Morning after the Deluge – Moses Writing the Book of Genesis. Oil on canvas. Tate Gallery, London. (source)

Goethe’s work on the psychology of color would be picked up by others, but his more immediate impact was on the world of art. His precise notes on the interplay of light and color combined with the psychology of the viewer formed the basis for much artistic experimentation.

Goethe’s colour theory has in many ways borne fruit in art, physiology and aesthetics. But victory, and hence influence on the research of the following century, has been Newton’s. – Werner Heisenberg.

 

Circadian Typology [...]

Circadian rhythmic expression differs among individuals. Some individuals are night people and some are morning people. Mechanisms and evolutionary rationale for such differences are unknown, as is the reason behind related disorders. Studies have shown that “eveningness” may be associated with mood disorders, ADHD, and eating disorders. (pdf)


Circadian Typology could be related to kin selection, via Hamilton’s Rule

Typology calls into question the premise of First Hours, Best Hours.

From Toys [...]

From Jane Jacobs, on the way transformations come out of weird sectors.

Even the most startling cultural and economic developments do not arise out of thin air. They are always built upon prior developments and upon a certain amount of serendipity and chance. And their consequences are unpredictable, even to their originators and the pioneers who believed in them and initiated them. After all, the first financially successful railroad in the world was an amusement ride in London. Many of us remember when plastics were useful for little except toys, kitchen gadgets and decorative touches that taste-makers derided for their vulgarity. That was before strong, lightweight plastics, reinforced with fibers of glass, boron or carbon, replaced metals in the making of springs and joints. These plastics transformed serious spectacle frames like mine. At last I have frames that never hurt my nose and ears and that last for years without weakened joints. These plastics were originated by the makers of tennis rackets and of rods for surf and sport fishing. nyt


While this sounds like disruptive innovation, Disruption is Real But Rare

Source: From Toys

Submission of the Rods [...]

Scene in Context (source)

The Submission of the Rods (sometimes the “bringing of the rods” or “presentation of the rods”) is one painting in a sequence by Giotto depicting scenes from the life of the Virgin Mary in the Scrovegni Chapel. Along with a number of other paintings in the sequence, it depicts a non-biblical story of how Joseph is selected as Mary’s husband.

Submission of the Rods

The priests have been warned to seek a husband for the Virgin from among the men of the house of David; the suitor worthy of her will be made known by the budding of his rod upon the altar.

In this and the two following frescoes, culminating in the Virgin’s betrothal, the Temple is represented in a new form; the symbol here consists of the section of a church—on greatly reduced scale—taken just in front of the altar, showing the end of the nave, two low side aisles, and a small semicircular apse. The round arch is used throughout, as typical of the eastern style of architecture.

Detail showing the vigilance and suspicion of the priest. This attempt to capture character is one of the defining marks of the Giotto style.

The chief interest of this, as of the succeeding fresco, is in the anxiety and intentness with which both priests and suitors enter into the ceremony. Vigilance, not untinged with suspicion, appears in the features of the priest behind the altar, described by a high authority as one of the most admirable of Giotto’s studies in expression. One rod has been already laid upon the altar; both priests have a hand upon it, while both fix their eyes upon the suitor who next comes forward; the greatest need for oversight is felt.

Giotto makes little attempt to individualise the suitors: the faces of six only can be seen, but the presence of a greater number is conveyed, after the Byzantine manner, by a block of heads behind in rough perspective. Joseph contrasts strongly with the rest, his grey hair and beard giving them the appearance of boys. He stands in the rear, watching no less intently than the others, but wholly devoid of the impatient eagerness by which they seem to be animated.

Text originally from public domain work Giotto (1905)

Giotto’s O [...]

A story about how Giotto came to get commissions from the Church, as relayed by Van Mander. In Van Mander’s account, the Pope sends a courtier to a variety of workshops and asks for a painting that can prove the skill of various painters. The apocryphal story follows:

This work (his paintings in the Campo Santo of Pisa) acquired for him, both in the city and externally, so much fame, that the Pope Benedict IX. sent a certain one of his courtiers into Tuscany, to see what sort of a man Giotto was, and what was the quality of his works, he (the pope) intending to have some paintings executed in St. Peter’s; which courtier, coming to see Giotto, and hearing that there were other masters in Florence who excelled in painting and in mosaic, spoke, in Siena, to many masters; then, having received drawings from them, he came to Florence; and having gone one morning into Giotto’s shop as he was at work, explained the pope’s mind to him, and in what way he wished to avail himself of his powers, and finally requested from him a little piece of drawing to send to his Holiness.

Giotto, who was most courteous, took a leaf (of vellum?), and upon this, with a brush dipped in red, fixing his arm to his side, to make it as the limb of a pair of compasses, and turning his hand, made a circle so perfect in measure and outline, that it was a wonder to see: which having done, he said to the courtier, with a smile, ‘There is the drawing.’

He, thinking himself mocked, said, ‘Shall I have no other drawing than this?’ ‘This is enough, and too much,’ answered Giotto; ‘send it with the others: you will see if it will be understood.’

The ambassador, seeing that he could not get anything else, took his leave with small satisfaction, doubting whether he had not been made a jest of. However, when he sent to the pope the other drawings, and the names of those who had made them, he sent also that of Giotto, relating the way in which he had held himself in drawing his circle, without moving his arm, and without compasses. Whence the pope, and many intelligent courtiers, knew how much Giotto overpassed in excellence all the other painters of his time.

Afterwards, the thing becoming known, the proverb arose from it: ‘Thou art rounder than the O of Giotto;’ which it is still in custom to say to men of the grosser clay; for the proverb is pretty, not only on account of the accident of its origin, but because it has a double meaning, ‘round’ being taken in Tuscany to express not only circular form, but slowness and grossness of wit.” [https://goo.gl/gvb572 source]

Cimabue [...]

Cimabue was one of the first painters to break from the Byzantine style of painting that dominated the late Middle Ages, and in doing so he laid the foundations for the dramatic shift in style that would characterize the Renaissance. His move toward more naturalistic shading and human proportions is some of the earliest evidence of the shift toward realism, and he would both inspire and teach Giotto, the first great painter of the proto-Renaissance. Even given these significant stylistic differences, Cimabue is largely seen as the last great painter of the Italo-Byzantine tradition rather than a painter of the proto-Renaissance.

Detail from Cimabue’s Saint Francis of Assisi. (public domain)

Cimabue’s name (“bullheaded”) is said to have reflected his personality, which was said by later commentators to be so demanding that he would instantly destroy any of his works that came under criticism. (For this reason, Dante placed him in Purgatory).[https://goo.gl/6g2EHP cite]


Cimabue is said to have trained Giotto, after Giotto qualified in an odd way. See Giotto’s Circle

Christus Triumphans [...]

Describes a pose of Christ in early Christian art. In this pose the figure of Christ on the cross is shown as alive and fully conscious, often looking at the viewer, part of the iconography of Christ’s triumph over death.

Berlinghiero Berlinghieri’s crucifix. Public Domain.
A good example of this is Berlinghiero Berlinghieri’s c. 1220 crucifix, shown here. While suffering is evident in the blood from the wounds and the body position, the overall effect is one of bodily transcendence.

As the Franciscan order gained influence, with its emphasis on suffering, the Christus Triumphans was replaced with the Christus Patiens (the “Suffering Christ”). Artists began to depict Christ in more naturalistic ways, emphasizing mortality over divinity.


The Cimabue Crucifix shows a more realistic portrayal, in the Christus Patiens style.

The life of Cimabue

Cimabue Crucifix [...]

Cimabue Crucifix

Crucifix is a large distemper-on-wood painting by the Florentine painter and mosaicist Cimabue, dated to c. 1265. It is one of the first Italian artworks to break from the Byzantine style and is notable for its technical innovations and humanistic iconography.

Earlier Byzantine depictions of Christ tended to show Christ as invincible, even in death. He hung on the cross with eyes wide open, unblemished skin, and a body full of power: symbolic of everlasting life. See Christus Triumphans

While Cimabue’s work continues many of the elements of the Byzantine tradition, he breaks substantially with past iconography, choosing to emphasize the mortal and human elements of Christ over the divine, in a style that would later be called Christus Patiens, or “Suffering Christ”.

Cimabue’s work here and elsewhere marks the transition between the middle ages and the Renaissance. The attention to the representational elements of the painting, along with the Franciscan emphasis on the human would pave the way for later works of Giotto, and set the scene for the revolution in painting and art that was to come.

 

Marble Cake Federalism [...]

Dual federalism is a political arrangement in which power is divided between the federal and state governments in clearly defined terms. While the public often envisions the relationship between the different levels of the government as separate layers, the truth is not so tidy.

One political scientist has coined the term “marble-cake federalism” to emphasize that the levels of government are often cooperative:

Grodzins challenged the view that the federal system was adversarial in nature, in which the state endeavored to check the expansion of federal power. Instead, he argued, the federal-state relationship is rooted in cooperation, which dates back to the nineteenth and early twentieth centuries. In his view, cooperative federalism reached its apex during Franklin Roosevelt’s New Deal and afterward. Grodzins took exception to the notion that federal power threatened to undermine state sovereignty, arguing to the contrary that it both enhanced and strengthened it. He also insisted that a strong federal government was essential to prevent parochial and private interests from undermining the national interest.[https://goo.gl/BzmMNS source]

In reality, neither layer-cake federalism nor marble-cake federalism describes the entire federal-state relationship, which is sometimes cooperative and sometimes adversarial.

Cohen’s Law [...]

Quoting Clay Shirky, from Own Worst Enemy:

Geoff Cohen has a great observation about this. He said “The likelihood that any unmoderated group will eventually get into a flame-war about whether or not to have a moderator approaches one as time increases.” As a group commits to its existence as a group, and begins to think that the group is good or important, the chance that they will begin to call for additional structure, in order to defend themselves from themselves, gets very, very high.

via Cohen’s Law.

False Memories are Common [...]

Memories are not stored the way an event is stored in a video or audio file. For something to be remembered, it must be recreated. Furthermore, each time we remember something, we alter our memory of it. In the case of remembering events, the more we remember an event, the more we are likely to have distorted it. Surprisingly, the most-remembered events can sometimes be the least reliable.

The psychologist Jean Piaget had a particularly interesting false memory. He believed he was the victim of an attempted kidnapping:

… one of my first memories would date, if it were true, from my second year. I can still see, most clearly, the following scene, in which I believed until I was about fifteen. I was sitting in my pram, which my nurse was pushing in the Champs Elysees, when a man tried to kidnap me. I was held in by the strap fastened round me while my nurse bravely tried to stand between me and the thief. She received various scratches, and I can still see vaguely those on her face. … When I was about fifteen, my parents received a letter from my former nurse … she wanted to confess her past faults, and in particular to return the watch she had been given as a reward. … She had made up the whole story. … I, therefore, must have heard, as a child, the account of this story, which my parents believed, and projected into the past in the form of a visual memory. (Source)


False identification of criminals from victim and eyewitness testimony presents a similar problem (Link)

Reconsolidation theory provides insight into why this happens. See Engram Lifecycle

Middletown in Transition [...]

Middletown in Transition was a study of 1930s Muncie, Indiana. It proposed and investigated a theory of elitism as a governing principle of local politics.

[The researchers] proposed that a city of almost fifty thousand inhabitants was ruled by a socioeconomic oligarchy that revolved around the five Ball brothers. Their second study introduced the theory of elitism, whereby a small group of influential notables and tightly connected businessmen manipulated local affairs behind the scenes. Their categorization of the elite as part of the definition of community power (Warner 1959; Warner et al. 1963; Hunter 1953; Vidich and Bensman 1958) was the one most in vogue among scholars at that time and during the following decade. [https://books.google.com/books?id=SVKgrMlMAFkC&pg=PA12&lpg=PA12&dq=middletown+elitism&source=bl&ots=owr0yWOA6X&sig=718Wfb5X1002XIjENRnyv42Zn1A&hl=en&sa=X&ved=0ahUKEwjz6NiV0-nJAhUY32MKHS8wBncQ6AEIITAB#v=onepage&q&f=false source]

The theory did not go unchallenged:

One of the most famous challenges to the guiding concept of elitism was the “pluralistic” paradigm put forth by Dahl (1961), who maintained that a community is not governed by a single elite, but rather by opposing interests and lobbies who participate in politics because of problems that relate directly to themselves. This results in an abdication of leadership responsibility, with the consequence that no one really “directs” a community (Lowry 1965).

A critique of Dahl — the “two faces of power”: [http://www.columbia.edu/itc/sipa/U6800/readings-sm/bachrach.pdf pdf]

More recently, another model of decision making at the local level has emerged: the idea that “centralization” sustains the most important decisions in the community (Martindale and Hanson 1969; Elazar 1970), which are not determined by either local elites or pressure groups. Solutions to citizens’ problems are encouraged, stimulated, and reinforced by agencies or authorities that are outside the community—namely, the state and the federal government. [https://books.google.com/books?id=SVKgrMlMAFkC&pg=PA12&lpg=PA12&dq=middletown+elitism&source=bl&ots=owr0yWOA6X&sig=718Wfb5X1002XIjENRnyv42Zn1A&hl=en&sa=X&ved=0ahUKEwjz6NiV0-nJAhUY32MKHS8wBncQ6AEIITAB#v=onepage&q&f=false source]

 

The Homevoter Hypothesis [...]

Why do local governments in America supply and compete on some services (environment, education) but not on others (affordable housing)? Why do they support large useless projects like stadiums? Why are they often unprepared for local emergencies, but over-policed on an average day?

The Homevoter Hypothesis attempts to answer these questions by noting that the homeowning public pushes to expand those services that increase the value of homes. Economist William Fischel explains:

The explanation is so simple that I stared at it for years without recognizing it. Everybody knows that homeowners care about the value of their major physical asset, their home. Most economists and nearly all home buyers know that the good things and bad things that local governments do tend to raise or lower the value of that asset. Studies of political economy find that homeowners are the dominant political faction in all but the largest local governments. This book puts these three propositions together in a positive account of local political economy that shows how and why it differs from the state and national brand. [https://books.google.com/books?hl=en&lr=&id=q9bJ6eZMR_IC&oi=fnd&pg=PR9&dq=homevoter+hypothesis+localities&ots=_X_B9AIdOi&sig=lHoLplMTWK5HDxZsyLivIUCeH0s#v=onepage&q=homevoter%20hypothesis%20localities&f=false source]

Typewriter Art [...]

Typewriter art predates ASCII Art, and anticipates many of its techniques. See Basics of ASCII Art.

October 1898 typewriter art, by Flora Stacy in Phonetic Journal. (source)

Flora Stacy is the first well-known typewriter artist, although people were playing with the possibilities of typewriter art from the invention of the very first typewriter. Stacy’s picture of a butterfly, typed on an early typewriter, appeared in the Phonetic Journal in 1898.

Interestingly, typewriters allow more granular positioning of the typed elements than most electronic means, allowing for the creation of more organic looking compositions. Also, on older manual typewriters the darkness of the letter can be adjusted through the force with which the key is hit.

Works such as the Portrait of Nicole Kidman by Keira Rathbone make use of this ability to position, overtype, and vary darkness. (html)

Portrait of Nicole Kidman, by Keira Rathbone (source)

Paul Smith, a man with cerebral palsy, is famous for his typewriter art. To see his technique, try Paul Smith’s Typewriter Art.

Typewriter artists even did things close to emojis. See Typewriter Emojis.

Liebig’s barrel [...]

Liebig’s barrel was initially a model to demonstrate how the efficacy of fertilizer mixes was constrained not by the overall element totals but by the single element most out of proportion. Over time it has been used to illustrate the more broadly applied Theory of the Minimum and the Theory of Constraints.

(source)

In the original formulation, each stave of the barrel represents a mineral or amino acid (you could also add other growth factors such as water and light). In the example to the left, the level of lysine is very low in proportion to the other elements.

A naive analysis would assume that the horse getting this mix of amino acids in its food would be deficient in only one nutrient (lysine). But in fact this is not the case. The other amino acids, though plentiful in the food supply, cannot be used by the body because the absence of lysine limits their use.  The only way for the other nutrients to be utilized is to increase the level of lysine.
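The limiting-factor logic can be made concrete with a small sketch. This is purely illustrative; the nutrient names and amounts below are invented for the example, not taken from the article.

```python
# Sketch of Liebig's Law of the Minimum: growth is limited by the
# scarcest resource relative to requirements, not by the total supply.

def usable_fraction(supply, requirement):
    """Fraction of the requirements the body can actually use: the
    smallest supply/requirement ratio across all nutrients."""
    return min(supply[n] / requirement[n] for n in requirement)

requirement = {"lysine": 10, "methionine": 8, "threonine": 6}   # grams needed
supply      = {"lysine": 2,  "methionine": 16, "threonine": 12}  # grams in feed

# The limiting nutrient is the one with the lowest supply-to-need ratio.
limiting = min(requirement, key=lambda n: supply[n] / requirement[n])

print(limiting)                               # lysine
print(usable_fraction(supply, requirement))   # 0.2 — only 20% usable
```

Note that methionine and threonine are supplied at twice the requirement, yet only 20% of the mix is usable: adding more of the plentiful nutrients changes nothing, while adding lysine raises the whole barrel's water line.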

This realization revolutionized the fertilizer business, which began focusing on fertilizer mix ratios instead of overall amounts of materials.

The barrel more recently has been used in understanding the productivity of organizations. In the Theory of Constraints, for example, the primary effort is spent on trying to identify the most limiting factor for productivity and effectiveness, to the exclusion of more general improvements.

Defeat Device [...]

A defeat device is an embedded system or mechanism that attempts to dupe regulators or potential buyers. The most recent example is Volkswagen’s on-board emissions module, which detected whether a car was undergoing emissions tests and changed the functioning of the car to pass the tests.
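The basic mechanism can be sketched in a few lines. This is not VW's actual implementation; the sensor signals and thresholds are invented for illustration, based only on the widely reported idea that dyno tests (no steering input, very uniform wheel speeds) are distinguishable from real driving.

```python
# Illustrative sketch of defeat-device logic: detect test-like
# conditions and switch emissions-control modes accordingly.

def detect_test_cycle(steering_angle_deg, wheel_speed_variance):
    # On a dynamometer the drive wheels spin with no steering input
    # and unusually uniform speeds; real roads produce both.
    return steering_angle_deg == 0 and wheel_speed_variance < 0.01

def emissions_mode(steering_angle_deg, wheel_speed_variance):
    if detect_test_cycle(steering_angle_deg, wheel_speed_variance):
        return "full-treatment"    # meet regulatory limits under test
    return "reduced-treatment"     # favor performance in real driving

print(emissions_mode(0, 0.0))    # full-treatment
print(emissions_mode(12, 0.4))   # reduced-treatment
```

The point of the sketch is that the deception requires no exotic hardware: a few sensor readings already on the vehicle bus are enough to classify the operating context.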

Samsung Smart TVs have also been accused of using a defeat device:

Independent lab tests have found that some Samsung TVs in Europe appear to use less energy during official testing conditions than they do during real-world use, raising questions about whether they are set up to game energy efficiency tests. (html)

Samsung claims this is an abuse of the term.

Source: Defeat Device


More recently, Renault has faced suspicion of using defeat devices. (Link)

The VW defeat device was apparently developed at Audi in 1999. (Source)

Glitch Art [...]

Glitch cabinet
An example of glitch aesthetic applied to furniture. The photo here is not glitching; the cabinet is carved to look exactly like this. (source)

Glitch art intentionally reproduces the glitches associated with analog and digital media for artistic effect. Examples of glitch art abound in the audiovisual arts, but have even been used in areas such as furniture creation (see photo).


Glitch Safari collects photos of glitches from the real world. (flickr) (vimeo)

Star Wars Episode VII – The Force Awakens [...]

Thirty years after the defeat of the Galactic Empire, the galaxy faces a new threat from the evil Kylo Ren (Adam Driver) and the First Order. When a defector named Finn crash-lands on a desert planet, he meets Rey (Daisy Ridley), a tough scavenger whose droid contains a top-secret map. Together, the young duo joins forces with Han Solo (Harrison Ford) to make sure the Resistance receives the intelligence concerning the whereabouts of Luke Skywalker (Mark Hamill), the last of the Jedi Knights. (2015)

Star Wars Episode VI – Return of the Jedi [...]

In the epic conclusion of the saga, the Empire prepares to crush the Rebellion with a more powerful Death Star while the Rebel fleet mounts a massive attack on the space station. Luke Skywalker confronts his father Darth Vader in a final climactic duel before the evil Emperor. In the last second, Vader makes a momentous choice: he destroys the Emperor and saves his son. The Empire is finally defeated, the Sith are destroyed, and Anakin Skywalker is thus redeemed. At long last, freedom is restored to the galaxy.  (1983)

Star Wars Episode V – The Empire Strikes Back [...]

After the destruction of the Death Star, Imperial forces continue to pursue the Rebels. After the Rebellion’s defeat on the ice planet Hoth, Luke journeys to the planet Dagobah to train with Jedi Master Yoda, who has lived in hiding since the fall of the Republic. In an attempt to convert Luke to the dark side, Darth Vader lures young Skywalker into a trap in the Cloud City of Bespin. In the midst of a fierce lightsaber duel with the Sith Lord, Luke faces the startling revelation that the evil Vader is in fact his father, Anakin Skywalker.  (1980)

Star Wars Episode IV – A New Hope [...]

Nineteen years after the formation of the Empire, Luke Skywalker is thrust into the struggle of the Rebel Alliance when he meets Obi-Wan Kenobi, who has lived for years in seclusion on the desert planet of Tatooine. Obi-Wan begins Luke’s Jedi training as Luke joins him on a daring mission to rescue the beautiful Rebel leader Princess Leia from the clutches of the evil Empire. Although Obi-Wan sacrifices himself in a lightsaber duel with Darth Vader, his former apprentice, Luke proves that the Force is with him by destroying the Empire’s dreaded Death Star.  (1977)

Star Wars Episode III – Revenge of the Sith [...]

Years after the onset of the Clone Wars, the noble Jedi Knights lead a massive clone army into a galaxy-wide battle against the Separatists. When the sinister Sith unveil a thousand-year-old plot to rule the galaxy, the Republic crumbles and from its ashes rises the evil Galactic Empire. Jedi hero Anakin Skywalker is seduced by the dark side of the Force to become the Emperor’s new apprentice — Darth Vader. The Jedi are decimated, as Obi-Wan Kenobi and Jedi Master Yoda are forced into hiding. The only hope for the galaxy are Anakin’s own offspring — the twin children born in secrecy who will grow up to become heroes.  (2005)

Star Wars Episode II – Attack of the Clones [...]

Ten years after the invasion of Naboo, the galaxy is on the brink of civil war. Under the leadership of a renegade Jedi named Count Dooku, thousands of solar systems threaten to break away from the Galactic Republic. When an assassination attempt is made on Senator Padmé Amidala, the former Queen of Naboo, twenty-year-old Jedi apprentice Anakin Skywalker is assigned to protect her. In the course of his mission, Anakin discovers his love for Padmé as well as his own darker side. Soon, Anakin, Padmé, and Obi-Wan Kenobi are drawn into the heart of the Separatist movement and the beginning of the Clone Wars.  (2002)

Star Wars Episode I – The Phantom Menace [...]

Stranded on the desert planet Tatooine after rescuing young Queen Amidala from the impending invasion of Naboo, Jedi apprentice Obi-Wan Kenobi and his Jedi Master Qui-Gon Jinn discover nine-year-old Anakin Skywalker, a young slave unusually strong in the Force. Anakin wins a thrilling Podrace and with it his freedom as he leaves his home to be trained as a Jedi. The heroes return to Naboo where Anakin and the Queen face massive invasion forces while the two Jedi contend with a deadly foe named Darth Maul. Only then do they realize the invasion is merely the first step in a sinister scheme by the re-emergent forces of darkness known as the Sith. (1999)

Art of the Stone Age [...]

The Stone Age is the first of the three-age system of archaeology, which divides human technological prehistory into three periods: the Stone Age, Bronze Age, and Iron Age. The Stone Age lasted roughly 3.4 million years and ended around 3,000 BC with the advent of metalworking.

The Stone Age has been divided into three distinct periods:

  • Paleolithic Period or Old Stone Age (30,000 BC – 10,000 BC)
  • Mesolithic Period or Middle Stone Age (10,000 BC – 8,000 BC)
  • Neolithic Period or New Stone Age (8,000 BC – 3,000 BC)

The art of the Stone Age represents the first accomplishments in human creativity, preceding the invention of writing. While numerous artifacts still exist today, the lack of writing systems from this era greatly limits our understanding of prehistoric art and culture.

The Art of the Stone Age: Paleolithic

The very earliest human artifacts showing evidence of workmanship with an artistic purpose are the subject of some debate, but it is clear that such workmanship existed by 40,000 years ago. By 20,000 BC human settlements of hunters and gatherers were present all over the world, except Antarctica. The earliest settlements occurred in Africa, where rock paintings and engravings represent the oldest form of art found on that continent. Depictions of highly stylized human figures and richly colored animals were used for magical purposes in order to ensure a successful hunt.

The Art of the Stone Age: Mesolithic

From the Paleolithic through the Mesolithic, cave paintings and portable art such as figurines, statuettes, and beads predominated, with decorative figured workings also seen on some utilitarian objects. Venus figurines—an umbrella term for a number of prehistoric female statuettes portrayed with similar physical attributes—were very popular at the time. These figurines were carved from soft stone (such as steatite, calcite, or limestone), bone or ivory, or formed of clay and fired. The latter are among the oldest ceramics known. Also in this period, personal accessories and adornments were made from shell and bone. All the examples mentioned above fall under the category of portable art: small for easy transport.

Lascaux Caves, Prehistoric Paintings

Archaeological discoveries across a broad swath of Europe (especially Southern France, like those at Lascaux; Northern Spain; and Swabia, in Germany) include over two hundred caves with spectacular paintings, drawings, and sculptures that are among the earliest undisputed examples of representational image-making. Paintings and engravings along the caves’ walls and ceilings fall under the category of parietal art.

The Art of the Stone Age: Neolithic

The Neolithic saw the transformation of nomad human settlements into agrarian societies in need of permanent shelter. From this period there is evidence of early pottery, as well as sculpture, architecture, and the construction of megaliths. Early rock art also first appeared in the Neolithic.

The End of the Stone Age

The advent of metalworking in the Bronze Age brought additional media available for use in making art, an increase in stylistic diversity, and the creation of objects that did not have any obvious function other than art. It also saw the development in some areas of artisans, a class of people specializing in the production of art, as well as in early writing systems.

By the Iron Age, civilizations with writing had arisen from Ancient Egypt to Ancient China.

Original Source: Boundless. “Art of the Stone Age.” Boundless Art History. Boundless, 08 Nov. 2015. Retrieved 19 Dec. 2015 from https://www.boundless.com/art-history/textbooks/boundless-art-history-textbook/prehistoric-art-2/the-stone-age-44/art-of-the-stone-age-270-5307/ Licensed CC BY-SA 4.0.