February 13, 2015

Super 15 Predictions for Round 1

Team Ratings for Round 1

Sorry about the delay in posting these predictions. I have never been really happy with my selection of parameters, so I decided to do a massive grid search yesterday, which ended up taking about 16 hours. That means this year I have shiny new parameter values for my predictions. As ever, remember the adage, “past performance is not necessarily indicative of future behaviour”.
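For anyone curious what that search might look like, here is a minimal sketch in Python. It assumes, purely for illustration, that the tuned parameters are two home-ground advantage values and an update weight, and that each combination is scored by replaying past games; the grids and the scoring function score_on_past_games are placeholders, not the actual setup.

    # Minimal grid-search sketch. Parameter grids and the scoring function are
    # illustrative placeholders, not the values or method actually used.
    from itertools import product

    def grid_search(score, same_country_grid, cross_country_grid, weight_grid):
        """Score every parameter combination and keep the one with the lowest error.
        `score` should replay past seasons and return, e.g., mean absolute margin error."""
        best_params, best_error = None, float("inf")
        for params in product(same_country_grid, cross_country_grid, weight_grid):
            error = score(*params)
            if error < best_error:
                best_params, best_error = params, error
        return best_params, best_error

    # Example call with made-up grids:
    # grid_search(score_on_past_games, [3.0, 4.0, 5.0], [3.5, 4.5, 5.5], [0.05, 0.1, 0.2])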

The basic method is described on my Department home page.

Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team          Current Rating   Rating at Season Start   Difference
Crusaders              10.42                    10.42         0.00
Waratahs               10.00                    10.00         0.00
Sharks                  3.91                     3.91        -0.00
Hurricanes              2.89                     2.89        -0.00
Bulls                   2.88                     2.88         0.00
Chiefs                  2.23                     2.23         0.00
Brumbies                2.20                     2.20        -0.00
Stormers                1.68                     1.68        -0.00
Blues                   1.44                     1.44         0.00
Highlanders            -2.54                    -2.54        -0.00
Lions                  -3.39                    -3.39        -0.00
Force                  -4.67                    -4.67         0.00
Reds                   -4.98                    -4.98         0.00
Cheetahs               -5.55                    -5.55        -0.00
Rebels                 -9.53                    -9.53        -0.00

 

Predictions for Round 1

Here are the predictions for Round 1. The prediction is my estimated expected points difference, with a positive margin being a win to the home team and a negative margin a win to the away team.

Game                         Date     Winner       Prediction
1  Crusaders vs. Rebels      Feb 13   Crusaders         24.40
2  Brumbies vs. Reds         Feb 13   Brumbies          11.20
3  Lions vs. Hurricanes      Feb 13   Hurricanes        -1.80
4  Blues vs. Chiefs          Feb 14   Blues              3.20
5  Sharks vs. Cheetahs       Feb 14   Sharks            13.50
6  Bulls vs. Stormers        Feb 14   Bulls              5.20
7  Waratahs vs. Force        Feb 15   Waratahs          18.70
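To see how these margins relate to the ratings above, here is a minimal sketch in Python. It assumes the prediction is simply the home team's rating minus the away team's rating plus a home-ground advantage, as described in the comments below; the 4.45-point advantage is my back-calculation from this round's numbers for a cross-country fixture, not a figure quoted in the post.

    # Rough illustration only: margin = home rating - away rating + home-ground advantage.
    # The 4.45-point advantage is inferred from this round's table, not a published value.
    ratings = {"Crusaders": 10.42, "Rebels": -9.53}

    def predicted_margin(home, away, home_advantage):
        """Positive favours the home side, negative favours the visitors."""
        return ratings[home] - ratings[away] + home_advantage

    # Crusaders hosting the Rebels: 10.42 - (-9.53) + 4.45 = 24.40
    print(f"{predicted_margin('Crusaders', 'Rebels', home_advantage=4.45):.2f}")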

 


David Scott obtained a BA and PhD from the Australian National University and then commenced his university teaching career at La Trobe University in 1972. He has taught at La Trobe University, the University of Sheffield, Bond University and Colorado State University, joining the University of Auckland, based at Tamaki Campus, in mid-1995. He has been Head of Department at La Trobe University, Acting Dean and Associate Dean (Academic) at Bond University, and Associate Director of the Centre for Quality Management and Data Analysis at Bond University with responsibility for Short Courses. He was Head of the Department of Statistics in 2000, and is a past President of the New Zealand Statistical Association.

Comments


    Hi David.

    Thanks for the Super Rugby stats chat. I really find the data very useful and interesting and so far this year the numbers have certainly been a lot more right than wrong. A few questions.

    I see that you subtract the weaker team's rating from the favourite's and then add in the home-field advantage to get the expected spread/handicap, but I can't see how you come up with the number for the home-field advantage? Or is that number standard across the board at 5? Obviously some teams are much stronger at home than away, the Bulls from SA probably being the best example over the years, so perhaps it would be worthwhile adding in some extra variance for different teams based on home and away performances? Just a thought.

    Lastly, how did you come up with the initial rating scale? I have done something similar, however I rate the BEST team at 100 and then proceed to rate them in order of perceived strength down from 100. So the Crusaders would be 100 at the start of the season, Chiefs 98, Waratahs 97 … Rebels 82. Then I would adjust the numbers for home and away matches. So when the Crusaders play the Rebels at home, the expected win margin would be around 20. Quite close to your expected win margin in week 1. So I am just interested in how you get to the rating numbers of 10.42, -3.39, etc.

    Thanks again for the input. Much appreciated.

    Murray from Jo'burg, South Africa.

    9 years ago


      The home ground advantage is a parameter chosen by optimisation using past data. I have just two parameters for home ground advantage: one for when both teams are from the same country, and one for when they are from different countries. There is also the possibility of no home ground advantage, for example when a Super Rugby game was played in London (Crusaders vs Sharks or Stormers, if I recall). Having too many parameters is not a good thing: choosing appropriate values is difficult, and there is the likelihood of overfitting, which is known to lead to poor forecasting performance.
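      A small sketch of that parameter structure in Python, with placeholder values since the fitted numbers are not given here:

          # Placeholder values; the fitted home-ground advantage parameters are not published.
          HGA_SAME_COUNTRY = 4.0   # both teams from the same country
          HGA_CROSS_COUNTRY = 4.5  # teams from different countries

          def home_ground_advantage(home_country, away_country, neutral_venue=False):
              """Points added to the home team's expected margin."""
              if neutral_venue:  # e.g. the London game mentioned above
                  return 0.0
              if home_country == away_country:
                  return HGA_SAME_COUNTRY
              return HGA_CROSS_COUNTRY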

      Initial ratings for the year are obtained by starting with all teams at 0 at the start of my records and running the algorithm through to the start of the new season. New teams (for example, when the competition was expanded) get -10, which is arbitrary but has worked reasonably well in my experience.
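      The update rule itself is described on the Department page and not reproduced here; the sketch below only shows the bookkeeping from this reply, with a generic weighted margin update standing in for the real algorithm (an assumption, not the actual method):

          NEW_TEAM_RATING = -10.0  # arbitrary starting value for expansion teams, per the reply
          WEIGHT = 0.1             # placeholder update weight, not a fitted value

          def rating(ratings, team):
              """Teams with no history start at the new-team value; others keep their rating."""
              return ratings.get(team, NEW_TEAM_RATING)

          def update_ratings(ratings, home, away, actual_margin, predicted_margin):
              """Nudge both ratings towards the observed result (stand-in for the real update)."""
              error = actual_margin - predicted_margin
              ratings[home] = rating(ratings, home) + WEIGHT * error
              ratings[away] = rating(ratings, away) - WEIGHT * error
              return ratings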

      9 years ago