Posts filed under Social Media (95)

March 16, 2015

Maps, colours, and locations

This is part of a social media map of photographs taken in public places in the San Francisco Bay Area.

[Map: photo density by social media site, San Francisco Bay Area]

The colours are trying to indicate three social media sites: Instagram is yellow, Flickr is magenta, Twitter is cyan.

Encoding three variables with colour this way doesn’t allow you to easily read off differences, but you can see clusters and then think about how to decode them into data. The dark green areas are saturated with photos.  Light green urban areas have Instagram and Twitter, but not much Flickr.  Pink and orange areas lack Twitter — mostly these track cellphone coverage and population density, but not entirely.  The pink area in the center of the map is spectacular landscape without many people; the orange blob on the right is the popular Angel Island park.
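For anyone who wants to try this at home, here's a minimal sketch of how a three-layer colour composite like this can be built, assuming each service's photo locations have already been binned into a density raster scaled to 0–1 (the function and scaling are illustrative, not the map-maker's actual code):

```python
import numpy as np

def cmy_composite(instagram, flickr, twitter):
    """Combine three photo-density rasters (2-D arrays scaled to 0..1) into a
    subtractive-colour image: Instagram contributes yellow, Flickr magenta,
    Twitter cyan, so places dense in all three come out dark and places with
    only two services come out as the mixture (e.g. yellow + cyan = green)."""
    # Subtractive mixing: cyan removes red, magenta removes green, yellow removes blue.
    red   = 1.0 - np.clip(twitter, 0.0, 1.0)    # cyan layer
    green = 1.0 - np.clip(flickr, 0.0, 1.0)     # magenta layer
    blue  = 1.0 - np.clip(instagram, 0.0, 1.0)  # yellow layer
    return np.dstack([red, green, blue])        # height x width x 3 RGB array
```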

Zooming in on Angel Island shows something interesting: there are a few blobs with high density across all three social media systems. The two at the top are easily explained: the visitor centre and the only place on the island that sells food. The very dense blob in the middle of the island and the slightly less dense one below it are a bit strange. They don’t seem to correspond to any plausible features.

[Map detail: Angel Island]

My guess is that these are a phenomenon we’ve seen before, of locations being mapped to the center of some region if they can’t be specified precisely.

Automated data tends to be messy, and making serious use of it means finding out the ways it lies to you. Wayne Dobson doesn’t have your cellphone, and there isn’t a uniquely Twitter-worthy bush in the middle of Angel Island.

 

March 14, 2015

Ok, but it matters in theory

Some discussion on Twitter about political polling and whether political journalists understood the numbers led to the question:

If you poll 500 people, and candidate 1 is on 35% and candidate 2 is on 30%, what is the chance candidate 2 is really ahead?

That’s the wrong question. Well, no, actually it’s the right question, but it is underdetermined.

The difficulty is related to the ‘base-rate’ problem in testing for rare diseases: it’s easy to work out the probability of the data given the way the world is, but you want the probability the world is a certain way given the data. These aren’t the same.
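To see how different the two directions can be, here's the standard rare-disease arithmetic (the prevalence and test accuracy are invented for illustration):

```python
# Illustrative base-rate calculation (numbers invented, not from any real test).
prevalence = 0.01           # 1% of people have the disease
sensitivity = 0.95          # P(test positive | disease)
specificity = 0.95          # P(test negative | no disease)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 2))   # about 0.16, even though the test is "95% accurate"
```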

If you want to know how much variability there is in a poll, the usual ‘maximum margin of error’ is helpful.  In theory, over a fairly wide range of true support, one poll in 20 will be off by more than this, half being too high and half being too low. In theory it’s 3% for 1000 people, 4.5% for 500. For minor parties, I’ve got a table here. In practice, the variability in NZ polls is larger than in theoretically perfect polls, but we’ll ignore that here.
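For the record, here's roughly where those figures come from, for an idealised simple random sample:

```python
import math

def max_margin_of_error(n, z=1.96):
    """'Maximum margin of error' for a poll of n people: the 95% margin at 50%
    support, where the binomial standard error sqrt(p*(1-p)/n) is largest."""
    return z * math.sqrt(0.25 / n)

print(round(100 * max_margin_of_error(1000), 1))  # about 3.1 percentage points
print(round(100 * max_margin_of_error(500), 1))   # about 4.4 percentage points
```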

If you want to know about change between two polls, the margin of error is about 1.4 times as large (a factor of √2, because both polls contribute sampling error). If you want to know about difference between two candidates, the computations are trickier. When you can ignore other candidates and undecided voters, the margin of error is about twice the standard value, because a vote added to one side must be taken away from the other side, and so counts twice.

When you can’t ignore other candidates, the question isn’t exactly answerable without more information, but Jonathan Marshall has a nice app with results for one set of assumptions. Approximately, instead of the margin of error for the difference being 2×√(1/N) as in the simple case, you replace the 1 by the sum of the two candidate estimates, giving 2×√((0.35+0.30)/N). Here that margin of error is about 7%. If the support for the two candidates were equal, there would be about a 9% chance of seeing candidate 1 ahead of candidate 2 by at least 5%.
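Here's a minimal sketch of that calculation, using the approximation above (Jonathan Marshall's app makes its own assumptions, which may differ):

```python
import math
from statistics import NormalDist

n = 500
p1, p2 = 0.35, 0.30

# Margin of error for the difference, with the 1 in 2*sqrt(1/n) replaced by
# the sum of the two candidates' shares, as described above.
moe_diff = 2 * math.sqrt((p1 + p2) / n)            # about 0.072, i.e. about 7%
se_diff = moe_diff / 2                             # one standard error

# If the two candidates really had equal support, how often would a poll show
# candidate 1 ahead of candidate 2 by at least 5 percentage points?
p_lead_of_5 = 1 - NormalDist(0, se_diff).cdf(0.05)
print(round(moe_diff, 3), round(p_lead_of_5, 2))   # prints 0.072 and 0.08
```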

All this, though, doesn’t get you an answer to the question as originally posed.

If you poll 500 people, and candidate 1 is on 35% and candidate 2 is on 30%, what is the chance candidate 2 is really ahead?

This depends on what you knew in advance. If you had been reasonably confident that candidate 1 was behind candidate 2 in support you would be justified in believing that candidate 1 had been lucky, and assigning a relatively high probability that candidate 2 is really ahead. If you’d thought it was basically impossible for candidate 2 to even be close to candidate 1, you probably need to sit down quietly and re-evaluate your beliefs and the evidence they were based on.

The question is obviously looking for an answer in the setting where you don’t know anything else. In the general case this turns out to be, depending on your philosophy, either difficult to agree on or intrinsically meaningless.  In special cases, we may be able to agree.

If

  1. for values within the margin of error, you had no strong belief that any value was more likely than any other
  2. there aren’t values outside the margin of error that you thought were much more likely than those inside

we can roughly approximate your prior beliefs by a flat distribution, and your posterior beliefs by a Normal distribution with mean at the observed data value and standard deviation equal to the poll’s standard error (half the margin of error).

In that case, the probability of candidate 2 being ahead is 9%, the same answer as the reverse question.  You could make a case that this was a reasonable way to report the result, at least if there weren’t any other polls and if the model was explicitly or implicitly agreed. When there are other polls, though, this becomes a less convincing argument.
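Under that flat-prior approximation, the calculation looks like this (a sketch of this special case, not a general recipe):

```python
import math
from statistics import NormalDist

n = 500
p1, p2 = 0.35, 0.30
observed_diff = p1 - p2                          # 5 percentage points

se_diff = math.sqrt((p1 + p2) / n)               # half the ~7% margin of error
# Flat prior + Normal approximation: the posterior for the true difference is
# Normal(observed_diff, se_diff), so the chance candidate 2 is really ahead is
# the posterior probability that the true difference is negative.
p_candidate2_ahead = NormalDist(observed_diff, se_diff).cdf(0)
print(round(p_candidate2_ahead, 2))              # about 0.08, i.e. roughly 1 in 10
```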

TL;DR: The probability Winston is behind given that he polls 5% higher isn’t conceptually the same as the probability that he polls 5% higher given that he is behind.  But, if we pretend to be in exactly the right state of quasi-ignorance, they come out to be the same number, and it’s roughly 1 in 10.

March 12, 2015

Variation and mean

A lot of statistical reporting focuses on means, or other summaries of where a distribution lies. Often, though, variation is important. Vox.com has a story about variation in costs of lab tests at California hospitals, based on a paper in BMJ Open. Vox says

The charge for a lipid panel ranged from $10 to $10,169. Hospital prices for a basic metabolic panel (which doctors use to measure the body’s metabolism) were $35 at one facility — and $7,303 at another

These are basically standard lab tests, so there’s no sane reason for this sort of huge variation. You’d expect some variation with volume of tests and with location, but nothing like what is seen.

What’s not clear is how much this is really just variation in how costs are attributed. A hospital needs a blood lab, which has a lot of fixed costs. Somehow these costs have to be spread over individual tests, but there’s no unique way to do this.  It would be interesting to know if the labs with high charges for one test tend to have high charges for others, but the research paper doesn’t look at relationships between costs.
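If per-hospital charges were available, the check would be easy. A minimal sketch, with the data layout assumed rather than taken from the paper:

```python
import numpy as np

def charge_correlation(charges_test_a, charges_test_b):
    """Correlation of per-hospital charges for two tests, on the log scale
    (charges span several orders of magnitude). Each argument is an array of
    charges with one entry per hospital lab, in the same hospital order.
    This is the check the post wishes had been done; the BMJ Open paper does
    not report it, and the argument names here are illustrative."""
    return np.corrcoef(np.log(charges_test_a), np.log(charges_test_b))[0, 1]
```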

The Vox story also illustrates a point about reporting, with this graph

[Box plots of California hospital charges for common lab tests, from the BMJ Open paper]

If you look carefully, there’s something strange about the graph. The brown box second from the right is ‘lipid panel’, and it goes up to a bit short of $600, not to $10,169. Similarly, the ‘metabolic panel’, the right-most box, goes up to $1000 on the graph and $7,303 in the story.

The graph is taken from the research paper. In the research paper it had a caption explaining that the ‘whiskers’ in the box plot go to the 5th and 95th percentiles (a non-standard but reasonable choice). This caption fell off on the way to Vox.com, and no-one seems to have noticed.
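For reference, the paper's whisker convention is easy to reproduce. A sketch, assuming the charges have already been collected into one array per test (not the paper's actual code or data):

```python
import matplotlib.pyplot as plt

def plot_charges(charges_by_test, test_names):
    """Box plots with whiskers at the 5th and 95th percentiles, the convention
    described in the paper's caption, rather than matplotlib's default
    1.5 * IQR rule. charges_by_test is a list with one array of charges per
    test; both arguments are assumed inputs, not the paper's data."""
    fig, ax = plt.subplots()
    ax.boxplot(charges_by_test, whis=(5, 95), labels=test_names)
    ax.set_yscale("log")              # charges range from tens to thousands of dollars
    ax.set_ylabel("Charge (US$)")
    return fig
```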

March 5, 2015

Showing us the money

The Herald is running a project to crowdsource data entry and annotation for NZ political donations and expenses: it’s something that’s hard to automate and where local knowledge is useful. Today, they have an interactive graph for 2014 election donations and have made the data available.

[Interactive graph: 2014 election donations]

February 27, 2015

Siberian hamsters or Asian gerbils

Every year or so there is a news story along the lines of “Everything you know about the Black Death is Wrong”. I’ve just been reading a couple of excellent posts by Alison Atkin on this year’s one.

The Herald’s version of the story (which they got from the Independent) is typical (but she has captured a large set of headlines):

The Black Death has always been bad publicity for rats, with the rodent widely blamed for killing millions of people across Europe by spreading the bubonic plague.

But it seems that the creature, in this case at least, has been unfairly maligned, as new research points the finger of blame at gerbils.

and

The scientists switched the blame from rat to gerbil after comparing tree-ring records from Europe with 7711 historical plague outbreaks.

That isn’t what the research paper (in PNAS) says. And it would be surprising if it did: could it really be true that Asian gerbils were spreading across Europe for centuries without anyone noticing?

The abstract of the paper says

The second plague pandemic in medieval Europe started with the Black Death epidemic of 1347–1353 and killed millions of people over a time span of four centuries. It is commonly thought that after its initial introduction from Asia, the disease persisted in Europe in rodent reservoirs until it eventually disappeared. Here, we show that climate-driven outbreaks of Yersinia pestis in Asian rodent plague reservoirs are significantly associated with new waves of plague arriving into Europe through its maritime trade network with Asia. This association strongly suggests that the bacterium was continuously reimported into Europe during the second plague pandemic, and offers an alternative explanation to putative European rodent reservoirs for how the disease could have persisted in Europe for so long.

If the researchers had found repeated, previously unsuspected invasions of Europe by hordes of gerbils, they would have said so in the abstract. They don’t. Not a gerbil to be seen.

The hypothesis is that plague was repeatedly re-imported from Asia (where it affected a lot of species, including, yes, gerbils) to European rats, rather than persisting at low levels in European rats between the epidemics. Either way, once the epidemic got to Europe, it’s all about the rats [update: and other non-novel forms of transmission].

In this example, for a change, it doesn’t seem that the press release is responsible. Instead, it looks like progressive mutations in the story as it’s transmitted, with the great gerbil gradually going from an illustrative example of a plague host in Asia to the rodent version of Attila the Hun.

Two final remarks. First, the erroneous story is now in the Wikipedia entry for the great gerbil (with a citation to the PNAS paper, so it looks as if it’s real). Second, when the story is allegedly about the confusion between two species of rodent, it’s a pity the Herald stock photo isn’t the right species.

 

[Update: Wikipedia has been fixed.]

February 19, 2015

London card clash sensitivity analysis

The data blog of the Daily Mirror reports a problem with ‘card clash’ on the London Underground.  You can now pay directly with a debit card instead of buying a ticket — so if you have both a transport card and a debit card in your wallet, you have the opportunity to enter with one and leave with the other and get overcharged. Alternatively, you can take the card out of your wallet and drop it.  Auckland Transport has a milder version of the same problem: no-touch credit cards can confuse the AT HOP reader and make it not recognise your card, but you won’t get overcharged unless you don’t notice the red light.

They looked at numbers of cards handed in at lost-and-found across the London Underground over the past two years (based on an FOI request).

[Chart from the Mirror: cards handed in at London Underground lost-and-found]

If we’re going to spend time on this, we might also consider what the right comparison is. The data include cards on their own and cards with other stuff, such as a wallet. We shouldn’t combine them: the ‘card clash’ hypothesis would suggest a bigger increase in cards on their own.

Here’s a comparison using all the data: the pale points are the observations, the heavy lines are means.

[Chart: comparison using all the data]

Or, we might worry about trends over time and use just the most recent four months of comparison data:

[Chart: comparison using the most recent four months]

Or, use the same four months of the previous year:

[Chart: comparison using the same four months of the previous year]

 

In this case all the comparisons give basically the same conclusion: more cards are being handed in, but the increase is pretty similar for cards alone and for cards with other stuff, which weakens the support for the ‘card clash’ explanation.

Also, in the usual StatsChat spirit of considering absolute risks: there are 3.5 million trips per day, and about 55 cards handed in per day: one card for about 64000 trips. With two trips per day, 320 days per year, that would average once per person per century.
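Spelling that arithmetic out:

```python
# The absolute-risk arithmetic from the paragraph above.
trips_per_day = 3_500_000
cards_per_day = 55

trips_per_lost_card = trips_per_day / cards_per_day     # about 64,000 trips per card
trips_per_year = 2 * 320                                # two trips a day, 320 days a year
years_per_lost_card = trips_per_lost_card / trips_per_year
print(round(trips_per_lost_card), round(years_per_lost_card))   # about 64000 trips; about 100 years
```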

February 15, 2015

Caricatures and credits

 

A lot of surprisingly popular accounts on Twitter just tweet pictures, without giving any sources, and often with captions that are misleading or just wrong. One from yesterday had a picture of a picnic on a highway in the Netherlands in 1973 and described it as being from the US.

Here’s one that came from @AmazingMaps, today, captioned “Most popular word used in online dating profiles by state”

[Map from @AmazingMaps: most popular word in online dating profiles, by state]

 

Could it really be true that ‘NASCAR’ is the most popular word in Indiana dating profiles? Or that ‘oil’ is the most popular word in Texas? Have the standard personal-ad clichés become completely outdated? Aren’t Americans easy-going any more? Doesn’t anyone care about romance or honesty or humour?

We’ve seen this sort of analysis before on StatsChat. It’s designed to produce a caricature, though not necessarily in a bad way. This one comes from Mashable, based on analysis by Match.com. The original post says

Essentially, they broke down which words are used with relative frequency in certain states, as compared to relative infrequency in the rest of the country.

That is, the map has ‘oil’ for Texas and ‘NASCAR’ for Indiana not because these words were used very often in those states, but because they were used much less often in other states. Most Indiana dating profiles probably don’t mention NASCAR, but a much higher proportion do than in, say, New York or Oregon. Most Texas dating profiles don’t talk about oil, but it’s more common in Texas than in Maine or Tennessee. It’s not that everyone in Oregon or Idaho kayaks, but a lot more do than in Iowa or Kansas.
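So the underlying recipe is "most distinctive word", not "most popular word". Match.com hasn't published its method, but a rough sketch of that sort of comparison looks like this (the details here are assumptions):

```python
from collections import Counter

def most_distinctive_word(profiles_by_state):
    """For each state, find the word used most often *relative to* its use in
    every other state's profiles -- the kind of comparison the Mashable post
    describes. profiles_by_state maps state name -> list of profile texts.
    Illustrative sketch only; not Match.com's actual method."""
    counts = {s: Counter(w for text in texts for w in text.lower().split())
              for s, texts in profiles_by_state.items()}
    totals = {s: sum(c.values()) for s, c in counts.items()}

    distinctive = {}
    for state, state_counts in counts.items():
        rest = sum((c for s, c in counts.items() if s != state), Counter())
        rest_total = sum(t for s, t in totals.items() if s != state)

        def score(word):
            in_state = state_counts[word] / totals[state]
            elsewhere = (rest[word] + 1) / (rest_total + 1)   # +1 avoids division by zero
            return in_state / elsewhere

        distinctive[state] = max(state_counts, key=score)
    return distinctive
```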

 

When this map first came out, in November, there were lots of stories about it, typically getting things wrong (e.g. an NBC motor sports site had the headline “‘NASCAR’ is most frequently used word among Indiana online dating profiles”). That’s still bad, but most of these sites had links or at least mentioned the source of the map, so that people who care could find out what the facts are. @AmazingMaps seems confident none of its followers care.

January 16, 2015

Women are from Facebook?

A headline on Stuff: “Facebook and Twitter can actually decrease stress — if you’re a woman”

The story is based on analysis of a survey by Pew Research (summary, full report). The researchers said they were surprised by the finding, so you’d want the evidence in favour of it to be stronger than usual. Also, the claim is basically for a difference between men and women, so you’d want to see summaries of the evidence for a difference between men and women.

Here’s what we get, from the appendix to the full report. The left-hand column is for women, the right-hand column for men. The numbers compare mean stress score in people with different amounts of social media use.

[Table from the Pew report appendix: stress scores and social media use, women (left column) and men (right column)]

The first thing you notice is all the little dashes.  That means the estimated difference was less than twice the estimated standard error, so they decided to pretend it was zero.

All the social media measurements have little dashes for men: there wasn’t strong evidence the correlation was non-zero. That’s not what we want, though. If we want to conclude that women are different from men we want to know whether the difference between the estimates for men and women is large compared with its uncertainty. As far as we can tell from these results, the correlations could easily be in the same direction in men and women, and could even be just as strong in men as in women.

This isn’t just a philosophical issue: if you look for differences between two groups by looking separately for a correlation in each group rather than actually looking for differences, you’re more likely to find differences when none really exist. Unfortunately, it’s a common error — Ben Goldacre writes about it here.
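The comparison that would answer the question is a test on the difference between the two groups' estimates. A minimal sketch, assuming the two estimates are independent (the appendix table as presented doesn't give everything needed to actually run it):

```python
from statistics import NormalDist

def group_difference_z(est_women, se_women, est_men, se_men):
    """Test the *difference* between the women's and men's estimates, rather
    than testing each one separately against zero. Assumes the two estimates
    are independent, so the standard error of the difference is the square
    root of the sum of the squared standard errors."""
    diff = est_women - est_men
    se_diff = (se_women ** 2 + se_men ** 2) ** 0.5
    z = diff / se_diff
    p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_two_sided
```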

There’s something much less subtle wrong with the headline, though. Look at the section of the table for Facebook. Do you see the negative numbers there, indicating lower stress for women who use Facebook more? Me either.

 

[Update: in the comments there is a reply from the Pew Research authors, which I got in email.]

January 9, 2015

The Internet of things and its discontents

The current Consumer Electronics Show is full of even more gadgets that talk to each other about you. This isn’t necessarily an unmixed blessing.

From the New Yorker

To find out, the scientists recruited more than five hundred British adults and asked them to imagine living in a house with three roommates. This hypothetical house came equipped with an energy monitor, and all four residents had agreed to pay equally for power. One half of the participants was told that energy use in the house had remained constant from one month to the next, and that each roommate had consumed about the same amount. The other half was told that the bill had spiked because of one free-riding, electricity-guzzling roommate.

From Buzzfeed

It’s not difficult to imagine a future in which similar data sets are wielded by employers, the government, or law enforcement. Instead of liberating the self through data, these devices could only further restrain and contain it. As Walter De Brouwer, co-founder of the health tracker Scanadu, explained to me, “The great thing about being made of data is that data can change.” But for whom — or what — are such changes valuable?

and the slightly chilling quote “it’s not surveillance, after all, if you’re volunteering for it”.

Both these links come from Alex Harrowell at the Yorkshire Ranter, whose comment on smart electricity meters is:

The lesson here is both that insulation and keeping up to the planning code really will help your energy problem, rather than just provide a better class of blame, and rockwool doesn’t talk.

 

January 2, 2015

Maybe not a representative sample

The Dominion Post asked motorists why they thought the road toll had climbed, and what should be done about it.

[Dominion Post graphic: motorists’ views on the road toll]

Interestingly, three of the five (middle-aged, white, male, Wellington-area) motorists attributed it to random variation. That’s actually possible: the evidence for a real change in risk nationally is pretty modest (and the Wellington region toll is down on last year).
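That's a checkable claim, because for counts of rare events like road deaths "random variation" has a definite size. A back-of-the-envelope sketch (an assumed Poisson model, not the official analysis):

```python
from math import sqrt

def toll_change_z(last_year, this_year):
    """Rough check of whether a year-on-year change in the road toll is larger
    than chance alone would produce: annual counts of rare, roughly independent
    events behave approximately like Poisson counts, whose variance equals their
    mean, so the difference of two such counts has standard error about
    sqrt(last_year + this_year)."""
    z = (this_year - last_year) / sqrt(last_year + this_year)
    return z          # |z| much bigger than 2 would suggest a real change in risk
```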

(via @anderschri5 on Twitter)