Friday, October 24, 2008
I wanted to promote a podcast - Beauty and the Beast - Numbers and Public Policy - that I recently listened to by Andrew Dilnot, one of the authors of the book The Tiger That Isn't. His co-author is Michael Blastland. Dilnot gave a lecture at the LSE which was put up on the web at UChannel, a fantastic resource for lectures and presentations given at various universities and other institutions.
Dilnot's lecture centred on the appalling state of numeracy in the British population, and in the world's population in general: specifically, the widespread lack of understanding of basic statistics. He discusses the regular mishaps in the interpretation of data by journalists, by government and by the public. He makes the point that people should be more skeptical about the numbers they see, and that they should be made aware of the kinds of statistical 'rules of thumb' that good practitioners know. For example, regression to the mean, where "individuals far from the mean on the first set will tend to be closer to the mean on the second set".
To use Dilnot and Blastland's example, suppose you are a government official thinking of installing speed cameras. You observe the number of accidents at several locations over a period of time, say six months, and see that some places have numerous accidents. You install cameras at these 'danger areas'. You then observe that in the six months following the installation there was a 'decrease' in the incidence of accidents, so your policy 'worked'. Well... no. What you are actually observing is regression to the mean. Two six-month periods constitute a sample of almost nothing. In any time series (i.e. observations of specific events over time) there will be deviations around a mean. The government, because it observed a freakishly high incidence of accidents in one area, assumed its policy instrument of installing speed cameras was effective simply because the incidence subsequently fell. This is not sound. What you need is an estimate of the mean number of accidents at a specific location, and then to observe a statistically significant decrease: i.e. a decrease that can be attributed to the speed cameras while holding other factors constant (such as the skill of drivers not suddenly improving, people not suddenly giving up alcohol, or the number of drivers on the roads changing).
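The effect is easy to demonstrate with a toy simulation (my own sketch, not from the lecture): give 100 sites the same true accident rate, "install cameras" at the ten sites with the worst counts in the first six months, and watch those same sites improve in the second six months with no intervention at all.

```python
import random

random.seed(1)

def poisson(lam):
    # Knuth's method for drawing a Poisson-distributed count
    L = 2.718281828459045 ** -lam
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

N_SITES = 100
TRUE_RATE = 5  # mean accidents per site per six months, identical everywhere

period1 = [poisson(TRUE_RATE) for _ in range(N_SITES)]
period2 = [poisson(TRUE_RATE) for _ in range(N_SITES)]

# "Install cameras" at the 10 worst sites from the first period
worst = sorted(range(N_SITES), key=lambda i: period1[i], reverse=True)[:10]

before = sum(period1[i] for i in worst) / len(worst)
after = sum(period2[i] for i in worst) / len(worst)

print(f"worst sites, period 1: {before:.1f} accidents on average")
print(f"same sites,  period 2: {after:.1f} accidents on average")
# The "improvement" is pure regression to the mean: nothing about
# the sites changed between the two periods.
```

Every site has exactly the same underlying risk, yet the selected sites look dramatically better after the "policy", which is precisely the trap Dilnot describes.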
Dilnot considers this and other factors that affect public opinion, and makes the point that far too many people blithely accept numbers thrown at them by people in 'authority', be it the media or government. They need to be aware of how data behave, and aware that people will manipulate how something is portrayed to aggrandize their own actions (far too much is claimed, for example, about the effects of anti-smoking legislation, which is normally nowhere near as effective as advertised).
Anyway, listen to the podcast; I think it is worthwhile. I am always in favour of skepticism-promoting broadcasts and information (Dilnot also tells a great story of being thought a literary philistine, and asks why the same judgement doesn't apply to numeracy). Thanks Andrew Dilnot, LSE and UChannel. I'm going to keep an eye out for the book when I am next in the UK.