September 26, 2012

NRL Predictions, Grand Final

Team Ratings for the Grand Final

Here are the team ratings prior to the Grand Final, along with the ratings at the start of the season. A brief description of the method I use for predicting rugby games is available on my Department home page.

Team            Current Rating   Rating at Season Start   Difference
Bulldogs                  8.55                    -1.86        10.40
Storm                     7.63                     4.63         3.00
Cowboys                   6.35                    -1.32         7.70
Sea Eagles                4.66                     9.83        -5.20
Rabbitohs                 4.48                     0.04         4.40
Raiders                   1.12                    -8.40         9.50
Knights                   0.01                     0.77        -0.80
Dragons                  -0.37                     4.36        -4.70
Broncos                  -0.98                     5.57        -6.50
Sharks                   -2.05                    -7.97         5.90
Titans                   -2.20                   -11.80         9.60
Wests Tigers             -2.74                     4.52        -7.30
Roosters                 -5.43                     0.25        -5.70
Panthers                 -6.45                    -3.40        -3.00
Warriors                 -8.08                     5.28       -13.40
Eels                     -8.25                    -4.23        -4.00


Performance So Far

So far there have been 200 matches played, 123 of which were correctly predicted, a success rate of 61.5%.

Here are the predictions for last week’s games.

Game                        Date     Score     Prediction   Correct
1  Storm vs. Sea Eagles     Sep 21   40 – 12   3.56         TRUE
2  Bulldogs vs. Rabbitohs   Sep 22   32 – 8    0.27         TRUE


Prediction for the Grand Final

Here is my prediction for the Grand Final.

Game                   Date     Winner     Prediction
1  Bulldogs vs. Storm  Sep 30   Bulldogs   5.40
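The post never spells out how the predicted margin is formed, but the numbers in the tables are consistent with a simple reading: margin = (home team's rating) − (away team's rating) + a fixed home-ground constant. For the Grand Final row, that would imply a constant of about 5.40 − (8.55 − 7.63) ≈ 4.48 points. A minimal sketch under that assumption (the function name and the 4.48 constant are illustrative, not the author's actual model):

```python
# Hedged sketch of a ratings-based margin prediction.  The home-ground
# constant 4.48 is back-calculated from the Grand Final row of the
# tables above (predicted margin 5.40, rating gap 8.55 - 7.63 = 0.92);
# it is an inference, not a figure stated in the post.
def predicted_margin(home_rating, away_rating, home_advantage=4.48):
    """Predicted winning margin for the home team, in points."""
    return home_rating - away_rating + home_advantage

# Bulldogs (8.55) vs. Storm (7.63) reproduces the published 5.40.
print(round(predicted_margin(8.55, 7.63), 2))  # 5.4
```

A positive predicted margin picks the home team as the winner; the last-week predictions cannot be checked the same way because the ratings shown were updated after those games were played.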



David Scott obtained a BA and PhD from the Australian National University and then commenced his university teaching career at La Trobe University in 1972. He has taught at La Trobe University, the University of Sheffield, Bond University and Colorado State University, joining the University of Auckland, based at Tamaki Campus, in mid-1995. He has been Head of Department at La Trobe University, Acting Dean and Associate Dean (Academic) at Bond University, and Associate Director of the Centre for Quality Management and Data Analysis at Bond University with responsibility for Short Courses. He was Head of the Department of Statistics in 2000, and is a past President of the New Zealand Statistical Association.

Comments


    Hi David,

    Thanks for giving me the NRL data earlier. In Tuesday’s 331 class we made a model to predict the grand final…once we put in home ground advantage, the probability of the Bulldogs winning was 0.605. This seems to qualitatively agree with your prediction of a small margin.
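(One illustrative way to see why a 0.605 win probability "qualitatively agrees" with a small predicted margin, which neither model above necessarily uses: if game margins were roughly normal with standard deviation sigma, a predicted margin m would translate to a win probability of Phi(m / sigma). A sigma of about 20 points, a hypothetical value chosen only to reconcile the two published numbers, maps the 5.40-point margin to roughly 0.605.)

```python
from math import erf, sqrt

# Illustrative consistency check only -- neither commenter describes
# this model.  Under a normal-margin assumption, win probability is
# the standard normal CDF evaluated at margin / sigma.
def win_probability(margin, sigma=20.3):
    """P(home win) if the margin is Normal(margin, sigma^2)."""
    z = margin / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z))

print(round(win_probability(5.40), 3))  # about 0.605
```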

    Interestingly, on one gambling website the odds for the Bulldogs are $2.05. Using my probability and a logarithmic utility for money, the optimal decision is to bet 23% of your money on the Bulldogs. Intuitively, that seems pretty dumb – I wonder where my intuition differs from the calculation.

    Any readers, please do not gamble based on anything I’ve said here! :)
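(For readers wanting to check the 23% figure: it matches the Kelly criterion, which is exactly the "logarithmic utility for money" calculation described above. With decimal odds d and win probability p, the log-utility-optimal stake fraction is f* = (p(d−1) − (1−p))/(d−1). A sketch, assuming the standard single-outcome Kelly formula; the function name is illustrative:)

```python
# Kelly criterion: the bet fraction maximizing expected log wealth for
# a single binary bet at decimal odds d with win probability p.
def kelly_fraction(p, decimal_odds):
    b = decimal_odds - 1.0            # net winnings per unit staked
    return (p * b - (1.0 - p)) / b

# p = 0.605 at decimal odds $2.05 gives about 0.229, i.e. the 23%
# of bankroll mentioned above.
print(round(kelly_fraction(0.605, 2.05), 3))  # 0.229
```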

    12 years ago

    • Thomas Lumley

      If you were certain that the probability was 0.605 you might be willing to bet.

I think the problem is that you’re not certain, and because of the non-linear utility of money, the right bet for an uncertain distribution of probabilities is not the same as if the distribution were concentrated on its posterior mean.

      Jensen’s inequality says you should behave as if you had a point probability somewhere less than 0.605, but we’d need to know the whole shape of the distribution to say how much less.

      12 years ago


        “If you were certain that the probability was 0.605 you might be willing to bet.”

        I don’t think it’s that…I’d still be unwilling. I think I must just be more risk averse than that. Also the peak in the expected utility at 23% is not a very high peak — maybe that’s got something to do with it.

        I was unaware that the distribution of conditional probabilities should affect the optimal decision. I do have that distribution (and 0.605 was the posterior mean). I would be interested to know how you would use that information differently to change the decision process.

        12 years ago


FYI, I ran my model progressively, simulating what I would have predicted as the season progressed. My success rate was only 57%, which is about what you’d get by just picking the home team in every game.

The success rate hovered around 50% until about three-quarters of the way through the season, when all of a sudden the predictions started coming right. Did your model have the same experience?

    One thing I didn’t do is take into account any information from the previous season. So the early matches were mostly guesses.

    12 years ago