In many ways, the built world was not designed for you. It was designed for the average person. Standardized tests, building codes, insurance rates, clothing sizes, the Dow Jones – all these measurements are based around the concept of an “average.”
In order to understand global economics, you need perspective — that’s according to Max Roser, a ‘data visualisation historian’ at the Oxford Martin School. A lot of perspective. The good news is that all this perspective gives a surprisingly optimistic picture about the state of the world.
Max Roser: Good data will make you an economic optimist | WIRED 2015 https://www.youtube.com/wireduk
All it takes to improve forecasting is KEEP SCORE
Will Syria’s President Assad still be in power at the end of next year?
Will Russia and China hold joint naval exercises in the Mediterranean in the next six months?
Will the Oil Volatility Index fall below 25 in 2016?
Will the Arctic sea ice mass be lower next summer than it was last summer?
Five hundred such questions of geopolitical import were posed in tournament mode to thousands of amateur forecasters by IARPA—the Intelligence Advanced Research Projects Activity—between 2011 and 2015.
(Tetlock mentioned that senior US intelligence officials opposed the project, but younger-generation staff were able to push it through.)
Extremely careful score was kept, and before long the most adept amateur “superforecasters” were doing 30 percent better than professional intelligence officers with access to classified information.
They were also better than prediction markets and drastically better than famous pundits and politicians, whom Tetlock described as engaging in a deliberately vague “ideological kabuki dance.”
What made the amateurs so powerful was Tetlock’s insistence that they score geopolitical predictions the way meteorologists score weather predictions and then learn how to improve their scores accordingly.
Meteorologists predict in percentages—“there is a 70 percent chance of rain on Thursday.”
It takes time and statistics to find out how good a particular meteorologist is. If it in fact rained on 7 out of 10 of the occasions when a 70 percent chance was forecast, the meteorologist gets a high score for calibration (the stated percentage matched reality) and for resolution (it mostly did rain).
Superforecasters, remarkably, assigned probability estimates of 72-76 percent to things that happened and 24-28 percent to things that didn’t.
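The scoring idea above can be sketched in a few lines of code. This is a minimal illustration with made-up forecast data, not data from the IARPA tournament: each forecast pairs a stated probability with whether the event happened, and we compute the standard Brier score alongside the observed hit rate for the “70 percent” bin.

```python
# Made-up forecasts: (stated probability, did the event happen?).
forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, True),
    (0.7, True), (0.7, False), (0.7, True), (0.7, False), (0.7, True),
]

# Brier score: mean squared error between probability and outcome (0 or 1).
# Lower is better; always saying 50 percent scores 0.25.
brier = sum((p - (1.0 if hit else 0.0)) ** 2 for p, hit in forecasts) / len(forecasts)

# Calibration check: did it rain about 70 percent of the times "70%" was said?
hits = sum(1 for _, hit in forecasts if hit)
observed_rate = hits / len(forecasts)  # 7/10 = 0.7 here, i.e. well calibrated

print(f"Brier score: {brier:.3f}, observed rate: {observed_rate:.0%}")
```

Keeping score this way is exactly what lets a forecaster see whether their “70 percent” really means 70 percent, and then adjust.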
How did they do that?
They learned, Tetlock said, to avoid falling for the “gambler’s fallacy”—detecting nonexistent patterns.
They learned objectivity—the aggressive open-mindedness it takes to set aside personal theories of public events.
They learned to not overcompensate for previous mistakes—the way American intelligence professionals overcompensated for the false negative of 9/11 with the false positive of mass weapons in Saddam’s Iraq.
They learned to analyze from the outside in—Assad is a dictator; most dictators stay in office a very long time; consider any current news out of Syria in that light.
And they learned to balance between over-adjustment to new evidence (“This changes everything”) and under-adjustment (“This is just a blip”), and between overconfidence (“100 percent!”) and over-timidity (“Um, 50 percent”).
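One disciplined way to walk that balance (my illustration of the idea, not a method Tetlock prescribes here) is incremental Bayesian updating: start from an outside-in base rate, then move the estimate by a likelihood ratio for each piece of news, so no single headline swings the forecast to 0 or 100 percent.

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Update a probability with Bayes' rule in odds form."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Start from the outside-in base rate (illustrative number):
# most dictators stay in office, so begin high.
p = 0.8
p = update(p, 0.5)   # genuinely bad news for "stays": halve the odds
p = update(p, 1.1)   # a minor blip: barely moves the estimate

print(f"updated probability: {p:.2f}")
```

The likelihood ratios here are invented for the example; the point is that evidence moves the odds multiplicatively, which keeps updates proportionate.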
“You only win a forecasting tournament,” Tetlock said, “by being decisive—justifiably decisive.”
Much of the best forecasting came from teams that learned to collaborate adroitly.
Diversity on the teams helped.
One important trick was to give extra weight to the best individual forecasters.
Another was to “extremize” to compensate for the conservatism of aggregate forecasts—if everyone says the chances are around 66 percent, then the real chances are probably higher.
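Extremizing can be sketched as a transform in log-odds space: push the aggregate away from 50 percent by a tuning constant. The constant `a` below is illustrative, not a value from the tournament.

```python
import math

def extremize(p: float, a: float = 2.5) -> float:
    """Push an aggregate probability away from 0.5 in log-odds space.
    'a' is an illustrative tuning constant; a = 1 leaves p unchanged."""
    logit = math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-a * logit))

# If the team's average answer is 66 percent, the extremized estimate is higher.
print(f"extremized: {extremize(0.66):.2f}")
```

The intuition matches the text: if many independent forecasters each arrive at 66 percent, their combined evidence justifies a stronger answer than any one of them would give alone, and a 50/50 aggregate stays at 50/50.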
In the Q & A following his talk Tetlock was asked if the US intelligence community would incorporate the lessons of its forecasting tournament.
He said he is cautiously optimistic.
Pressed for a number, he declared, “Ten years from now I would offer the probability of .7 that there will be ten times more numerical probability estimates in national intelligence estimates than there were in 2005.”
Asked about long-term forecasting, he replied, “Here’s my long-term prediction for Long Now.
When the Long Now audience of 2515 looks back on the audience of 2015, their level of contempt for how we go about judging political debate will be roughly comparable to the level of contempt we have for the 1692 Salem witch trials.”
Tim Harford investigates numbers in the news. Numbers are used in every area of public debate. But are they always reliable? Tim and the More or Less team try to make sense of the statistics which surround us. A half-hour programme broadcast at 1600 on Friday afternoons and repeated at 2000 on Sundays on Radio 4. BBC World Service broadcasts a short edition over the weekend.
Are the statistics put forward by UKIP accurate, and are Romanians responsible for more crime than other nationalities? Plus: Gerd Gigerenzer on the famous probability puzzle involving goats and game shows; do 24,000 people die every year from lightning strikes globally; how old will you be before you’re guaranteed a round-number birthday on a weekend; and is the divorce rate in the US state of Maine linked to margarine consumption?
With an avalanche of 2.5 quintillion bytes of data generated daily, could this be used to change our lives and does it have a darker side?
What does a ‘guess the weight of the ox’ competition tell us about a bloated and dysfunctional financial system? We find out in the Parable of the Ox written by John Kay of the Financial Times. The tale is told with the help of economics writer James Surowiecki as well as John Kay himself. It also features a brand new composition from the New Radiophonic Workshop.
Statistical analyst Nate Silver says humility is key to making accurate predictions. Silver, who writes the New York Times’ FiveThirtyEight blog, has just written a new book called The Signal and the Noise.
A.I., artificial intelligence, has had a big run in Hollywood. The computer HAL in Kubrick’s “2001” was fiendishly smart, and plenty of robots and server farms have followed. Real-life A.I. has had a tougher launch over the decades. But slowly, gradually, it has certainly crept into our lives.
Think of all the “smart” stuff around you. Now an explosion in Big Data is driving new advances in “deep learning” by computers. And there’s a new wave of excitement.
Guests: Yann LeCun, professor of Computer Science, Neural Science, and Electrical and Computer Engineering at New York University.
Peter Norvig, director of research at Google Inc.
Tim Harford interviews Daniel Kahneman, a psychologist who won the Nobel Prize in Economics. The author of Thinking, Fast and Slow describes the common mistakes people make with statistics.
Which are the world’s biggest cities, and what are their populations? Two simple questions that we discover are surprisingly difficult to answer. Plus, has the world got heavier or lighter since the industrial revolution? It’s a question posed by a More or Less listener that got us wondering, too. Dr Chris Smith, part of a group of Cambridge University researchers, known as the Naked Scientists, reckons he’s worked out the answer. This programme was originally broadcast on the BBC World Service.