Test Cricket and the Decision Review System

The introduction of technology into sport to allow those officiating to review crucial events has not been met with the universal approval of fans. There are some who feel it slows the play, some who feel it destroys the spontaneity and emotion around spectacular events that are eventually reviewed, some who believe it undermines the authority of those who are officiating on the field, some who feel it takes away one quintessential element of randomness (the umpire’s decision), and some who believe that, in any case, officiating errors will somehow magically eventually “even themselves out in the long run”. If you’re not a fan of technology in sport, you’ll probably have yet more reasons to dislike it.

Whilst I understand all of those apprehensions, I am firmly in the advocacy camp for using technology, however we sensibly can, as a way of increasing the proportion of correct calls that are made in sporting contests. There is never a time when it feels right to me that a contest has been substantially decided by a decision that is objectively wrong.

One sport that has been using review technology for quite some time is cricket, where the system is broadly referred to as the Decision Review System, or DRS, and has been in use in the Test match format since 2008.

Yesterday, Himanish Ganjoo (@hanjoo153 on Twitter) generously released a set of data that contained 3,369 referrals from Test matches spanning the period from 15 November 2010 to 17 December 2020. The data includes:

  • A MatchID

  • The Start Date of the Test from which the referral came

  • The Ground or Venue

  • The Home and Away Teams

  • The Day on which the referral was made (a value mostly between 1 and 5, though there are a few cases of 6th-day referrals)

  • The Over in which the referral was made

  • The Team making the referral

  • Whether that Team was batting or bowling when it made the referral

  • The Umpire whose decision is being referred

  • The Batsman whose wicket is in jeopardy

  • The Result of the referral from the perspective of the referring team (one of Struck Down, Umpire’s Call, and Upheld)

In this blog we’ll be analysing that data.
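For readers who'd like to follow along in code, here's a minimal loading sketch in pandas. The file name and column names are assumptions based on the field list above, not the actual names used in the released file, so adjust them as needed.

    # Minimal loading sketch (assumed file and column names, based on the
    # field list above; adjust to match the actual file)
    import pandas as pd

    drs = pd.read_csv(
        "drs_referrals.csv",            # hypothetical file name
        parse_dates=["StartDate"],      # assumed name for the Start Date column
    )

    # Quick sanity checks against the figures quoted in this post
    print(len(drs))                      # expect roughly 3,369 referrals
    print(drs["Result"].value_counts())  # Struck Down / Umpire's Call / Upheld
    print(drs["StartDate"].min(), drs["StartDate"].max())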

REFERRAL OUTCOMES

It’s important to recognise that changes in the protocols for DRS have resulted in different mixes of referral outcomes from year to year, as reflected in the chart below. In particular, there are very few “umpire’s call” outcomes before 2017.

(Note that this and all other charts in this blog can be clicked on to access a larger version)

Given the change over time in the mix, when interpreting some of the charts that follow it might help to mentally combine “struck down” and “umpire’s call” (which are equivalent in effect, in that both leave the umpire’s original decision standing), and to compare that total with the “upheld” value.

Overall, we see that about one-quarter of referrals are upheld, and the remaining three-quarters are struck down or adjudicated as “umpire’s call”.
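If you want to reproduce that year-by-year mix, a sketch along the following lines would do it, continuing with the assumed column names from the loading example above (the exact outcome labels, such as “Upheld”, are also assumptions about how they're spelled in the file).

    # Outcome mix by year, with "struck down" and "umpire's call" pooled into
    # a single "not overturned" bucket, since both leave the on-field decision
    # standing
    drs["Year"] = drs["StartDate"].dt.year

    mix_by_year = pd.crosstab(drs["Year"], drs["Result"], normalize="index")
    mix_by_year["Not Overturned"] = 1 - mix_by_year["Upheld"]

    print(mix_by_year[["Upheld", "Not Overturned"]].round(2))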

TEAM ANALYSES

Firstly, we’ll look at how each of the Test-playing nations has fared when availing itself of the DRS.

The breakdown of available data is as shown at right and reveals that:

  • There are very few referrals from Afghanistan or Ireland

  • There are about 300 or more referrals for each of England, Sri Lanka, Australia, Pakistan, West Indies, South Africa, and New Zealand

  • India, who were relatively late in allowing DRS to be used in matches involving them, mostly have referral data only from the 2016 to 2020 period, which largely excludes the period before “umpire’s call”

Results for these referrals are summarised in the chart below (note that Afghanistan and Ireland are excluded because of their very small sample sizes). The numbers inside the bars record the actual number of referrals in that category, and we can read off the relevant proportions from the y-axis.

The mix of referral outcomes is very similar for each team, with the exception that India has seen a somewhat larger proportion of “umpire’s call” outcomes and a concomitantly smaller proportion of “struck down” outcomes. This can be largely attributed to the fact that a disproportionate number of their referrals have occurred since 2017, which is when “umpire’s call” became most prevalent.

If, as suggested above, we add “struck down” and “umpire’s call”, the figures for India are quite similar to those of all other teams, and we see that about 1 in 4 reviews are successful for each team.
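The same pooling, applied team by team, might be sketched as follows (team and outcome labels assumed as before):

    # Outcome proportions by referring team, sorted by the proportion upheld
    by_team = pd.crosstab(drs["Team"], drs["Result"], normalize="index")
    by_team["Not Overturned"] = 1 - by_team["Upheld"]

    print(by_team.sort_values("Upheld", ascending=False).round(2))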

We can drill down into the outcomes for each team, looking at what “state” they were in - batting or bowling - when they made their referral.

Here, too, we see relatively little variation across teams for a given state, with Pakistan faring worst when referring as the batting team, and Bangladesh faring best. When referring as the bowling team, England and Zimbabwe do best, and Bangladesh, New Zealand, Sri Lanka, and the West Indies worst.

The more notable feature, however, is the difference in the overall proportion of reviews upheld for batting teams at about 35%, versus that for bowling teams at around 20%.
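In code, that team-and-state breakdown might look something like the sketch below, assuming the State column carries “Batting” and “Bowling” labels:

    # Proportion of referrals upheld by team and by the referring side's
    # state, plus the overall batting-versus-bowling gap quoted above
    # (~35% vs ~20%)
    drs["Upheld"] = drs["Result"] == "Upheld"

    by_team_state = (
        drs.groupby(["Team", "State"])["Upheld"]
           .agg(n="size", upheld_rate="mean")
           .round(2)
    )
    print(by_team_state)

    print(drs.groupby("State")["Upheld"].mean().round(2))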

UMPIRE ANALYSIS

The full data set contains referral information for 29 distinct umpires, only 20 of whom have outcomes for at least 30 referrals. The summary for those 20 umpires appears below and is ordered by the proportion of decisions that were upheld.

There is some variability in the mix of outcomes across the umpires, with the proportion of upheld decisions ranging from about 15% to 35%, but with the majority sitting in the 25% to 30% range.
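Because the same “at least 30 referrals” cut-off recurs for grounds and batsmen below, a small reusable helper is a natural way to sketch the calculation (column names again assumed):

    # Proportion of referrals upheld per value of `column`, keeping only
    # values with at least `min_n` referrals, sorted from most to least
    # often upheld
    def upheld_rate_by(df, column, min_n=30):
        summary = (
            df.assign(Upheld=df["Result"] == "Upheld")
              .groupby(column)["Upheld"]
              .agg(n="size", upheld_rate="mean")
        )
        return summary[summary["n"] >= min_n].sort_values(
            "upheld_rate", ascending=False
        )

    print(upheld_rate_by(drs, "Umpire").round(2))  # expect ~20 umpires to survive the cut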

DAY OF THE MATCH ANALYSES

As a Test progresses we might expect players to become slightly more speculative in the decisions they choose to refer, which is a supposition that’s supported by an analysis of the mix of outcomes by day of the Test match.

Although the effect size is small, we see a slow decline in the proportion of upheld referrals as we move from Day 1 (29%) to Day 5 (25%).

This decline is evident both for teams referring when they are batting (36% on Day 1 to 32% on Day 5) and for teams referring when they are bowling (25% on Day 1 to 19% on Day 5).

Looking by team, we find quite a bit of noise because of some small sample sizes, but the generally downward trend in upheld decisions is evident for a number of teams, with India and perhaps England as outliers.
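For those following along in code, the day-of-match figures above amount to something like this (the “Day” and “State” column names being assumptions, as before):

    # Proportion of referrals upheld by day of the Test, overall and by the
    # referring side's state
    drs["Upheld"] = drs["Result"] == "Upheld"

    print(drs.groupby("Day")["Upheld"].mean().round(2))

    print(
        drs.groupby(["State", "Day"])["Upheld"]
           .mean()
           .unstack("Day")
           .round(2)
    )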

GROUND ANALYSIS

The full data set also contains referral information for 71 different venues, 39 of which have outcomes for at least 30 referrals. The summary for those venues appears below, ordered by the proportion of referrals upheld.

There is some variability here, though a considerable proportion of it might be attributable to sampling variation. With that by way of caveat, note that about 41% of referrals are upheld at Birmingham, compared to 12% at Sharjah.
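That venue table uses exactly the same cut-off, so the helper sketched in the umpire section covers it, with “Ground” as the assumed column name:

    # Re-using the helper from the umpire section for venues
    by_ground = upheld_rate_by(drs, "Ground").round(2)
    print(by_ground.head())   # the Birmingham end of the table
    print(by_ground.tail())   # the Sharjah end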

PLAYER ANALYSIS

I’m sure a lot of you are curious about the outcome statistics for particular batsmen, so let’s finish by looking at those.

Across the entire data set are referrals relating to 410 different batsmen, but there are only 21 for whom we have outcomes for at least 30 referrals. Alas, Shane Watson, with only 24 referrals, doesn't make the cut.

The summary for the 21 batsmen who do make the cut appears below.

The results shown in the “Batting” columns are for referrals made by the batsman himself, while those shown in the “Bowling” columns are for referrals made by the bowling team against that batsman.

Here, too, small sample sizes make definitive statements problematic, but we can see that (Karunaratne being the obvious exception) bowling sides have had roughly similar success in having reviews upheld against each batsman, with rates all in the 10% to 30% range. In contrast, individual batsmen have had quite variable rates of success in overturning decisions against them, with success rates ranging from Mathews’ 8% to Watling’s 62%.

(You still want the data for Shane Watson, don’t you? Okay: he had 3 of 11 referrals upheld when batting, and only 2 of the 13 referrals made against him were successful.)
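For completeness, here's a sketch of how that batting-versus-bowling table for batsmen could be assembled, again with the column names and the “Batting”/“Bowling” labels assumed:

    # Upheld rate per batsman, split by whether the batsman's own side
    # referred ("Batting") or the bowling side referred against him
    # ("Bowling"), restricted to batsmen involved in at least 30 referrals
    counts = drs["Batsman"].value_counts()
    frequent = counts[counts >= 30].index

    by_batsman = (
        drs[drs["Batsman"].isin(frequent)]
          .assign(Upheld=lambda d: d["Result"] == "Upheld")
          .groupby(["Batsman", "State"])["Upheld"]
          .mean()
          .unstack("State")
          .round(2)
    )
    print(by_batsman.sort_values("Batting", ascending=False))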

SUMMARY AND CONCLUSION

Some of the key findings from the analyses above are as follows:

  • About 25% of all referrals are successful overall: around 35% when the referral comes from the batting side, and around 21% when it comes from the bowling side

  • There is some, but relatively little, variability in the rate of upheld reviews by team

  • There is somewhat more, but still not a great deal of, variability in the rate of upheld reviews by umpire

  • The rate at which reviews are upheld declines as a Test match progresses

  • There is considerable variability in the rate at which reviews are upheld across different venues

  • Bowling sides do about equally well in their referrals against most batsmen, but some batsmen have a substantially higher (and some a substantially lower) rate of upheld referrals when referring themselves

All of these analyses are, to varying degrees, hamstrung by relatively small sample sizes. Nonetheless, the variability that we do see is interesting and, in some cases, unlikely to be solely an artefact of sampling variation. There is noise, but there is also signal.

Also, looking at the bigger picture, what this data reveals is that about 1 in 4 referred decisions were demonstrably incorrect, and the only way we could have corrected them was via the use of technology. I find it hard to believe that those 800-plus incorrect decisions would have been, on the whole, inconsequential had they not been overturned.

But, of course, YMMV.