Nobody knows what the mathematician Rev. Thomas Bayes looked like, but this is the picture everyone uses. The equation is Bayes' theorem.

Nate Silver, baseball statistician turned political analyst, gained a lot of attention during the 2012 United States elections when he successfully predicted the outcome of the presidential vote in all 50 states. The reason for his success was a statistical method called Bayesian inference, a powerful technique that builds on prior knowledge to estimate the probability of a given event happening.
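In its simplest form, Bayesian inference treats an unknown quantity, say a candidate's true level of support among voters, as a probability distribution that gets updated as data arrives. Here's a minimal sketch of that idea using a conjugate Beta-Binomial update on one hypothetical poll; this is a toy illustration of the technique, not Silver's actual model, which aggregated many polls with historical weightings.

```python
# Illustrative Beta-Binomial update: a prior belief about a candidate's
# support, updated with one hypothetical poll. A toy sketch of Bayesian
# updating, not Nate Silver's actual election model.

def beta_binomial_update(prior_a, prior_b, successes, trials):
    """Conjugate update: Beta(a, b) prior + Binomial data -> Beta posterior."""
    return prior_a + successes, prior_b + (trials - successes)

# Weakly informative prior centered at 50% support.
a, b = 2.0, 2.0

# Hypothetical poll: 540 of 1000 respondents favor the candidate.
a, b = beta_binomial_update(a, b, 540, 1000)

# Posterior mean estimate of the candidate's support.
posterior_mean = a / (a + b)
print(round(posterior_mean, 3))  # 542/1004, about 0.54
```

The prior matters most when data is scarce: with only 1000 respondents the posterior sits almost entirely on the poll, but with a 20-person sample the Beta(2, 2) prior would pull the estimate noticeably back toward 50%.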

Bayesian inference grew out of Bayes' theorem, a mathematical result from English clergyman Thomas Bayes, published in 1763, two years after his death. In honor of the 250th anniversary of that publication, statistician Bradley Efron examined the question of why Bayes' theorem is not more widely used, and why its use remains controversial among many scientists and statisticians. As he pointed out, the problem lies with blind use of the theorem in cases where prior knowledge is unavailable or unreliable.

As is often the case, the theorem ascribed to Bayes predates him, and Bayesian inference is more general than what the good reverend worked out in his spare time. However, Bayes' posthumous paper was an important step in the development of probability theory, and so we'll stick with using his name.
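The theorem itself is compact: P(A|B) = P(B|A)·P(A)/P(B). The classic way to see why the prior matters is a diagnostic-test calculation; the numbers below are made up for illustration.

```python
# Bayes' theorem on the standard diagnostic-test example (made-up numbers):
# a rare condition (1% prevalence), a test with 95% sensitivity and
# 90% specificity. What is P(condition | positive test)?

p_condition = 0.01          # prior: P(A), prevalence of the condition
p_pos_given_cond = 0.95     # likelihood: P(B|A), the test's sensitivity
p_pos_given_healthy = 0.10  # false-positive rate, i.e. 1 - specificity

# Law of total probability: P(B), the overall chance of a positive test.
p_pos = (p_pos_given_cond * p_condition
         + p_pos_given_healthy * (1 - p_condition))

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
posterior = p_pos_given_cond * p_condition / p_pos
print(round(posterior, 3))  # about 0.088
```

Even with a fairly accurate test, a positive result implies less than a 9% chance of having the condition, because the 1% prior dominates. This is exactly the dependence on prior knowledge that makes the method powerful when the prior is well-founded and treacherous when it isn't.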

*(Ars Technica)*

George E.P. Box, a statistician known for his work in time series analysis and Bayesian inference (and for his much-quoted aphorisms), recounts how he became a statistician while trying to solve real problems. He was a 19-year-old college student studying chemistry; instead of finishing his degree, he joined the army, fed up with how little the British government was doing to stop Hitler.

Before I could actually do any of that I was moved to a highly secret experimental station in the south of England. At the time they were bombing London every night and our job was to help to find out what to do if, one night, they used poisonous gas.

Some of England's best scientists were there. There were a lot of experiments with small animals, I was a lab assistant making biochemical determinations, my boss was a professor of physiology dressed up as a colonel, and I was dressed up as a staff sergeant.

The results I was getting were very variable and I told my colonel that what we really needed was a statistician.

He said "we can't get one, what do you know about it?" I said "Nothing, I once tried to read a book about it by someone called R. A. Fisher but I didn't understand it". He said "You've read the book so you better do it", so I said, "Yes sir".

Box eventually worked with Fisher, studied under E. S. Pearson after his discharge from the army, and started the Statistical Techniques Research Group at Princeton at the insistence of John Tukey.

Noah Shachtman reports at Wired's *Danger Room* blog that the investment arms of the CIA and Google are both backing a firm that monitors the web in real time, and that claims to use that information to predict the future.

The company is called Recorded Future, and it scours tens of thousands of websites, blogs and Twitter accounts to find the relationships between people, organizations, actions and incidents -- both present and still-to-come. In a white paper, the company says its temporal analytics engine "goes beyond search" by "looking at the 'invisible links' between documents that talk about the same, or related, entities and events."

The idea is to figure out for each incident who was involved, where it happened and when it might go down. Recorded Future then plots that chatter, showing online "momentum" for any given event.

The "How People Use It" page on Recorded Future's website makes absolutely no attempt to hide The Creepy:

**Research a person**

*Monitor news on public figures to...*

- Identify future travel plans; spot past travel trends and patterns
- Search for communication with other individuals; graph their network
- Monitor career history and announced job changes
- Find quotations and sound bites in the news and blogs
- Discover future and past strategic positioning
- Uncover public political ties and family relationships

Exclusive: Google, CIA Invest in 'Future' of Web Monitoring *(Wired Danger Room blog)*

Video above, a trailer of sorts for "Recorded Future."