Posts filed under Graphics (394)

October 1, 2012

Let’s-all-panic colour scheme

The excellent blog Freedom to Tinker, which focuses on political and social policy concerns related to computing, has an interactive graphic showing where problems with electronic voting are most likely to have a serious impact on the US election. Here’s a snapshot:


The ‘risk’ is scaled so that the top state, Ohio, is at 100. Because of the association of 100 with 100%, that probably tends to exaggerate the impact, but the colour scheme is worse. There’s almost no visible difference between Ohio at 100 and Virginia at 77, but Pennsylvania (47) is visibly paler than Nevada (57).  For comparison with the colour scale in the map, here’s a colour scale that tries to be uniform (a straight line in CIE Lab space):

Looking at this scale (and using a colour-picker program for better matching), Virginia seems to be at about 85, and Florida (61) well above 70.  So there really is a distortion of the visual impression.  The distortion probably isn’t deliberate, but comes from using linear interpolation on a scale that doesn’t match visual perception well.
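The “straight line in CIE Lab space” idea can be sketched in a few lines of code: interpolate linearly between two Lab endpoints, then convert each intermediate point to sRGB for display. This is a minimal illustration using the standard Lab-to-sRGB formulas (D65 white point); the endpoint Lab values here are made up, not taken from the map above.

```python
# Minimal sketch: a roughly perceptually uniform colour ramp built by
# linear interpolation in CIE Lab, converted to sRGB for display.
# Endpoint Lab values are illustrative only.

def lab_to_srgb(L, a, b):
    """Convert CIE Lab (D65 white point) to an sRGB triple in 0..255."""
    # Lab -> XYZ
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def f_inv(t):
        return t**3 if t**3 > 0.008856 else (t - 16/116) / 7.787
    X = 0.95047 * f_inv(fx)
    Y = 1.00000 * f_inv(fy)
    Z = 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    # gamma-encode and clip to the displayable range
    def enc(c):
        c = max(0.0, min(1.0, c))
        c = 1.055 * c**(1/2.4) - 0.055 if c > 0.0031308 else 12.92 * c
        return round(c * 255)
    return enc(r), enc(g), enc(bl)

def lab_ramp(start, end, n):
    """n colours spaced evenly in Lab, hence roughly evenly in perception."""
    return [lab_to_srgb(*[s + (e - s) * i / (n - 1)
                          for s, e in zip(start, end)])
            for i in range(n)]

# pale yellow-ish to deep red-ish: purely illustrative endpoints
ramp = lab_ramp((95, 5, 40), (40, 65, 55), 5)
```

Interpolating the raw RGB values instead (as the map apparently does) takes equal numeric steps that are not equal perceptual steps, which is exactly how 77 ends up looking like 100.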

September 28, 2012

Visualising health findings

The Cochrane Collaboration are holding their annual conference in Auckland starting on Sunday.  They are a decentralised, grassroots effort to collate and summarise all randomised clinical trials, to make sure that the information isn’t buried, but is available to clinicians and patients.  The online Cochrane Library of Systematic Reviews is available free to anyone in New Zealand, thanks to funding from the DHBs and the Ministry of Health.  As with many organisations, they award a variety of prizes in their field of work.  In contrast to many organisations, one of the prizes is awarded for the best criticism of the organisation’s work.

Anyway, the conference is an excuse to link to a video by the Cambridge “Understanding Uncertainty” group.  They are working on animations to further improve the summaries of health findings from the Cochrane systematic reviews.

September 27, 2012

How is the beer up (down) here?

UBS economists have produced a nice graph showing how many minutes it takes to earn a beer. There is one fatal flaw, however – it doesn’t have New Zealand! I thought we had better remedy this (maybe I am just avoiding something). The graph takes the average price for 500mL of beer and divides it by the median hourly wage. The dollar figures, I assume, are all converted to US dollars so that everything is on the same scale. Statistics New Zealand helpfully provides us with the average hourly wage for 2011 – NZD20.38 – and pricepint.com uses the power of crowd-sourcing to give us the average price of a pint in New Zealand: GBP2.36. Converting both of these figures into US dollars gives us USD16.8241 and USD3.82047 respectively, at today’s rates from xe.net. This means on average it takes 13.62 minutes to earn a pint in New Zealand. There are no figures on the plot, but we seem to sit somewhere between Australia and Argentina, our fellow Rugby Championship competitors, but a long way below South Africa.
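The arithmetic above is simple enough to check: the price of a pint divided by the hourly wage gives the fraction of an hour of work per pint, and multiplying by 60 gives minutes. Using the converted figures quoted in the text:

```python
# Check of the minutes-per-pint calculation, using the USD-converted
# figures quoted in the post (rates as at the time of writing).

wage_usd = 16.8241   # NZD20.38 average hourly wage, converted to USD
pint_usd = 3.82047   # GBP2.36 average pint price, converted to USD

minutes_per_pint = pint_usd / wage_usd * 60   # ≈ 13.6 minutes
```

Strictly speaking the UBS graph uses the median wage while Statistics NZ gives us the average, so the NZ point is only roughly comparable.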

September 19, 2012

Colour choices matter

In the map below (via), of extrapolated obesity rates in the US in 2030, one state stands out.

Does Colorado have the highest predicted rate? No, that’s down in the South-East. The lowest rate? Again, no, that’s further west. Colorado is just the pinkest state, because of infelicitous colour choices.


September 3, 2012

Extreme Venn diagrams

That’s a five-set Venn diagram, invented in 1975.  Some mathematicians have just worked out how to do an 11-set one.

These are pretty, and a computational and mathematical achievement, but unfortunately they don’t have any of the properties that make Venn diagrams a marginally-useful visualization method.     (via)

August 27, 2012

Visual perception demonstrations

A nice page by Christopher Healey, at North Carolina State University.  Among other things, it includes demonstrations of change blindness  and of preattentive perception:

For many years vision researchers have been investigating how the human visual system analyses images. An important initial result was the discovery of a limited set of visual properties that are detected very rapidly and accurately by the low-level visual system. These properties were initially called preattentive, since their detection seemed to precede focused attention. We now know that attention plays a critical role in what we see, even at this early stage of vision. The term preattentive continues to be used, however, since it conveys an intuitive notion of the speed and ease with which these properties are identified.

Typically, tasks that can be performed on large multi-element displays in less than 200 to 250 milliseconds (msec) are considered preattentive. Eye movements take at least 200 msec to initiate, and random locations of the elements in the display ensure that attention cannot be prefocused on any particular location, yet viewers report that these tasks can be completed with very little effort. This suggests that certain information in the display is processed in parallel by the low-level visual system.

The limits of preattentive perception are why you can’t usefully represent more than four or five groups of points in a scatterplot, no matter how creative you get with colours and symbols.

August 23, 2012

Stat-related startups

At Simply Statistics, a set of stat/data related startups.

One that looks interesting for teaching and for data journalism purposes is Statwing, which is building a web-based pointy-clicky data analysis system, aiming to have good graphics and good text descriptions of the results.  This is the sort of project where the details will matter a lot — poking around at their demo there were a few things I was slightly unhappy about, but nothing devastatingly bad, so there is potential.

August 20, 2012

Nostra maxima culpa

As Alan Keegan points out in his Stat of the Week nomination, the Stats Department Facebook page was sporting a graph whose only redeeming feature is that it doesn’t even pretend to convey information.

To decide what to do with the graph, we are hosting a bogus poll:


August 16, 2012

Probabilistic weather forecasts

For the Olympics, the British Meteorological Office was producing animated probabilistic forecast maps, showing the estimated probability of various amounts of rain or strengths of wind at a fine grid of locations over Britain.  These are a great improvement over the usual much more vague and holistic predictions, and they were made possible by a new and experimental high-resolution ensemble forecasting system.  (via)

I will quibble slightly about the probabilities in the forecast, though.  The Met Office generates a set of predictions spanning a reasonable range of weather models and input uncertainties, and then says “80% chance of rain” if 80% of the predictions have rain at that location.   That is, 80% means an 80% chance that a randomly chosen prediction will say “rain”; it doesn’t necessarily mean that “out of locations and hours with 80% forecast probability, 80% of them will actually get rain”.
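The distinction can be made concrete with a reliability (calibration) check: group the forecast probabilities into bins, and compare each bin’s stated probability with the fraction of times rain actually occurred. A minimal sketch, with made-up forecast/outcome pairs rather than real Met Office data:

```python
# Minimal reliability-table sketch: for each forecast-probability bin,
# compute the observed frequency of rain.  A calibrated forecaster has
# observed frequency close to the bin's stated probability.
from collections import defaultdict

def reliability(forecasts, outcomes, n_bins=5):
    """Return {bin midpoint: observed rain frequency}."""
    hits = defaultdict(list)
    for p, rained in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)   # bin index 0..n_bins-1
        hits[b].append(rained)
    return {(b + 0.5) / n_bins: sum(v) / len(v) for b, v in hits.items()}

# Toy example of an over-confident forecaster: when it says 90%,
# rain happens only 60% of the time.
forecasts = [0.9] * 10 + [0.1] * 10
outcomes  = [1] * 6 + [0] * 4 + [0] * 8 + [1] * 2
table = reliability(forecasts, outcomes)
```

An ensemble fraction can fail this check and still be a useful forecast; recalibrating it statistically, as in the next paragraph, is what fixes the numbers.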

It’s possible to improve the calibration of the probabilities by feeding the ensemble of predictions into a statistical model, and researchers at the University of Washington have been working on this.  Their ProbCast page gives probabilistic rain and temperature forecasts for the state of Washington that are based on a statistical model for the relationship between actual weather and the ensemble of forecasts, and this does give more accurate uncertainty numbers.

August 14, 2012

London 2012 and data journalism: What did we learn at the Olympics?

Fascinating item in The Guardian, which looks at the Olympics from a data journalist’s point of view … and does a great job.