September 9, 2015

ITM Cup Predictions for Round 5

Team Ratings for Round 5

The basic method is described on my Department home page.
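For readers who just want the flavour of a ratings-based predictor, below is a minimal generic sketch in Python. It is not the method from that page: it simply assumes one rating per team, a predicted margin equal to the rating difference plus a home-ground bonus, and an Elo-style nudge of both ratings toward each observed margin. The team names and the HOME_ADVANTAGE and SMOOTHING values are illustrative only.

```python
# Generic sketch of a ratings-based predictor, NOT the actual method behind
# the numbers in this post: one rating per team, predicted margin equal to the
# rating difference plus a home-ground bonus, and an Elo-style nudge of both
# ratings toward each observed margin. HOME_ADVANTAGE and SMOOTHING are
# illustrative values only.

HOME_ADVANTAGE = 4.0   # assumed points bonus for playing at home
SMOOTHING = 0.1        # assumed fraction of the "surprise" fed back into ratings


def predict_margin(ratings, home, away):
    """Predicted points margin for the home team (positive = home win)."""
    return ratings[home] - ratings[away] + HOME_ADVANTAGE


def update_ratings(ratings, home, away, home_score, away_score):
    """Move both teams' ratings toward the observed margin of one game."""
    surprise = (home_score - away_score) - predict_margin(ratings, home, away)
    ratings[home] += SMOOTHING * surprise
    ratings[away] -= SMOOTHING * surprise


# Toy example: two teams starting level, one result fed in.
ratings = {"Waikato": 0.0, "Southland": 0.0}
update_ratings(ratings, "Waikato", "Southland", 30, 20)
print(ratings)                                          # Waikato up, Southland down
print(predict_margin(ratings, "Waikato", "Southland"))  # home bonus plus the nudge
```

Applied game by game over a season of results, updates of this kind produce team ratings similar in spirit to the table below.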

Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Canterbury 14.83 10.90 3.90
Tasman 12.85 12.86 -0.00
Taranaki 6.91 7.70 -0.80
Wellington 6.38 -4.62 11.00
Auckland 6.18 5.14 1.00
Hawke’s Bay 5.01 -0.57 5.60
Counties Manukau 1.16 7.86 -6.70
Waikato -3.92 -6.96 3.00
Otago -4.68 -4.84 0.20
Manawatu -5.80 -1.52 -4.30
Bay of Plenty -7.91 -9.77 1.90
Southland -10.98 -6.01 -5.00
Northland -11.47 -3.64 -7.80
North Harbour -12.57 -10.54 -2.00


Performance So Far

So far there have been 30 matches played, 20 of which were correctly predicted, a success rate of 66.7%.
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Taranaki vs. Counties Manukau Sep 02 17 – 10 3.30 TRUE
2 Manawatu vs. Canterbury Sep 03 7 – 57 -9.30 TRUE
3 Otago vs. Tasman Sep 04 17 – 34 -12.80 TRUE
4 Waikato vs. Auckland Sep 05 28 – 50 -2.60 TRUE
5 Southland vs. Wellington Sep 05 3 – 53 -5.30 TRUE
6 Hawke’s Bay vs. North Harbour Sep 05 48 – 32 22.80 TRUE
7 Northland vs. Taranaki Sep 06 7 – 50 -7.80 TRUE
8 Counties Manukau vs. Bay of Plenty Sep 06 26 – 37 18.70 FALSE


Predictions for Round 5

Here are the predictions for Round 5. The prediction is my estimated expected points difference, with a positive margin being a win to the home team and a negative margin a win to the away team.

Game Date Winner Prediction
1 Auckland vs. Manawatu Sep 09 Auckland 16.00
2 Waikato vs. Southland Sep 10 Waikato 11.10
3 Wellington vs. Tasman Sep 11 Tasman -2.50
4 North Harbour vs. Counties Manukau Sep 12 Counties Manukau -9.70
5 Bay of Plenty vs. Taranaki Sep 12 Taranaki -10.80
6 Canterbury vs. Hawke’s Bay Sep 12 Canterbury 13.80
7 Auckland vs. Otago Sep 13 Auckland 14.90
8 Manawatu vs. Northland Sep 13 Manawatu 9.70
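As a purely illustrative aside (this is not the code behind the numbers above), the sketch below shows how such signed margins are read and scored: a positive margin picks the home team, a negative margin picks the away team, and a prediction is counted as correct when its sign matches the sign of the actual score difference, which agrees with the TRUE/FALSE column in the table of last week's games.

```python
# Illustrative only, not the code behind the predictions above: a positive
# margin picks the home team, a negative margin picks the away team, and a
# prediction counts as correct when its sign matches the actual score
# difference (draws are ignored for simplicity).

def predicted_winner(home, away, margin):
    """Predicted winner from a signed margin (home score minus away score)."""
    return home if margin > 0 else away


def prediction_correct(margin, home_score, away_score):
    """True when the predicted sign matches the sign of the actual result."""
    return (margin > 0) == (home_score - away_score > 0)


# Example from last week's table: Counties Manukau were favoured by 18.70 at
# home to Bay of Plenty but lost 26-37, so that prediction was FALSE.
print(predicted_winner("Counties Manukau", "Bay of Plenty", 18.70))  # Counties Manukau
print(prediction_correct(18.70, 26, 37))                             # False
```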



David Scott obtained a BA and PhD from the Australian National University and then commenced his university teaching career at La Trobe University in 1972. He has taught at La Trobe University, the University of Sheffield, Bond University and Colorado State University, joining the University of Auckland, based at Tamaki Campus, in mid-1995. He has been Head of Department at La Trobe University, Acting Dean and Associate Dean (Academic) at Bond University, and Associate Director of the Centre for Quality Management and Data Analysis at Bond University with responsibility for Short Courses. He was Head of the Department of Statistics in 2000, and is a past President of the New Zealand Statistical Association.

Comments

  •

    how accurate are these predictions???

    9 years ago

    •

      Success rates are given for all predictions. It is up to you to decide if they are accurate or not.

      9 years ago

  • Mike Readman

    I’ve employed a statistical model for predicting game outcomes but never attempted a “rating system”. Seems there would be too many variables to deal with: head-to-head history, home-field advantage, current season form, even month-to-month and day-of-week variances in performance.

    I wonder if your rating approach could be applied to day-to-day American sports like NBA (1230 games/regular season) or MLB (2430 games).

    9 years ago

    •

      My approach uses only a limited amount of data: game results and home ground. Including a lot of information brings problems, starting with collecting all the information. Statistically it can lead to overfitting and hence poor forecasting performance. In the sports I have been predicting, teams meet only a few times per season, so it is not possible to get a reliable estimate of head-to-head performance. The notion that taking into account lots of different factors will give a better prediction is, I think, false. You have to accept that competitive team sport is inherently unpredictable to a greater or lesser extent.

      I think the method I use would work for the NBA and MLB. It appears the maximum number of head-to-head games for any pair of teams is 4 in the NBA and 8 in MLB. For MLB you might think of including parameters for head-to-head performance within divisions, but you would be adding 10 parameters per division, 60 overall, if I understand how it works.

      9 years ago

      • Mike Readman

        Baseball teams play inter-division rivals as many as 16-20 times every season (not including playoffs). Baseball is a tricky proposition when predicting. No one player in any sport can affect an event like the starting pitcher does. An under-performing team can punch well above their weight with their ace on the mound in any given game.

        Goalies in hockey and to a lesser degree Quarterbacks in football can also have a disproportionate effect on the result compared to other participants.

        I’m curious about your rating system and how far back you venture into past results. I have collected the MLB results of approx. the last 15,000 games played (5+ seasons) including starting pitchers, NBA and NHL around 10,000 each (10 seasons with goalie info for hockey), NFL (2,670 games/15 seasons w/ starting QB). I know the sample size is far smaller in rugby but it seems you are achieving great results with little information.

        9 years ago

        •

          I did a bit more research and found some details about the MLB schedule. I see they play 19 games against each of the other teams in their division. I think it would be possible to do a ranking. An approach which might work is to have parameters for each pair of teams in each division to get a ranking within each division, and some inter-division parameters which would then modify those intra-division rankings based on inter-division games. There could be a lot of computing involved, particularly in trying to choose values for the smoothing constants.

          Any change in the composition of the divisions would pose some challenges. I have developed some rules of thumb for when a new team joins a rugby competition, but investigation would be needed for baseball.

          I have records for quite a few years. I think the most is for Super Rugby where I now have 10 years of data.

          9 years ago