
The men’s ICC World Test Championship final is scheduled to take place this month between India and New Zealand.
This upcoming marquee match prompted me to dig into an unusual pattern in men’s international batting that has emerged in recent years. ODI batting averages have surpassed Test batting averages for the first time in their 50-year coexistence.
That shouldn’t happen. Read on for why I think so.
Diverging formats
While keeping an eye on international cricket series in various formats over the past five to ten years, I started to notice something unusual emerging.
Watching many matches during the 2015 Men’s Cricket World Cup in Australia, I was astounded at the jump in what seemed to be par scores in the 50-over format. Only a decade or so earlier, as I remembered it, a score of around 220 would often have been considered ‘par’ on Australian pitches. But by 2015, it felt like teams were putting on close to half of that for fun in the final ten overs alone.
Since that tournament, it has felt like that trajectory has only continued in one-day cricket, riding in the wake of ultra-aggressive T20 cricket, with its hit-out-or-get-out approach.
Over the same period, my perception of scoring in Test cricket has wavered. Maybe it was my Australian bias showing, but certainly in this country, where innings previously seemed to reach 400 often and 500 regularly, both hosts and visitors were now scrapping out Test innings in the 200s and 300s.
This niggling feeling came to a head while I was keeping an eye on the recent Indian tour of Australia over the summer of 2020/21.
All six of the lead-up white-ball matches finished with higher averages (runs per wicket) than the 1st Test match, and of course at batting strike rates (runs per 100 balls) far in excess of the red-ball format.
Then, earlier this year, my interest was piqued further by two international cricket matches played on different continents, both concluding on the same day:
The Test match – the format with 450 overs on offer, and 40 wickets to take – was over in under two days, with only 387 runs scored in total. Meanwhile, the T20I match – the format with 40 overs on offer, and 20 wickets to take – put on 434 runs in just an afternoon.
What was happening here?
Batting theory 101
I have to state the obvious – a team wins a cricket match when it scores more runs than its opponents.
A rational batting team in a cricket match is therefore looking to optimise its score in runs, controlling for two resources:
- The wickets it has to lose, and
- The number of overs it has available to bat
(This is true in all cricket formats, though there are additional wrinkles in long-format cricket, where to win the match a team must dismiss its opponents twice, and so must also leave itself enough time to take 20 wickets.)
In general, overs available to bat are plentiful in comparison to wickets remaining. As such, wickets are treated as precious, sometimes extremely so. This balance starts to tilt the other way when there are considerably fewer overs on offer – now wickets become more disposable, as the batting side must look to score from a greater share of the balls it faces.
All standard cricket formats allow 10 wickets per innings, with only the number of overs to bat varying. Overs are therefore relatively scarcer in shorter formats, and wickets relatively scarcer in longer formats.
As such, we would expect (and we have historically seen):
- Shorter formats (i.e. ODI cricket) to feature higher batting strike rates (runs per 100 balls) and lower batting averages (runs per wicket)
- Longer cricket formats (i.e. Test cricket) to feature lower batting strike rates and higher averages
Theoretically, there is a trade-off between the risk (batting strike rate) and the reward (batting average). Empirically, we have seen this play out over half a century. But data from recent years indicate this trade-off is being turned upside down…
The data and analysis
To investigate if there was anything to my hunch, I pulled the data from all innings across men’s Test, ODI and T20I matches from ESPNcricinfo StatsGuru. I am only going to look at Test matches since 1971 for comparison, since that was the year ODI cricket was first played.
I also chose to limit the competing teams to the eight top-ranked men’s Test-playing nations, to both keep the team samples comparable across all formats, and to eliminate any skewed results from major nations overpowering cricket minnows in shorter-form cricket.
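For illustration, here is a minimal pandas sketch of how that filtering could look. The file name, the column names (format, batting_team, bowling_team, start_date, runs, balls_faced, wickets) and the exact list of eight teams are all placeholders rather than the real StatsGuru fields:

```python
import pandas as pd

# Hypothetical innings-level export from StatsGuru; file and column names are assumptions
innings = pd.read_csv("mens_international_innings.csv", parse_dates=["start_date"])

# Placeholder list of top-ranked Test-playing nations (illustrative, not definitive)
TOP_EIGHT = [
    "Australia", "England", "India", "New Zealand",
    "Pakistan", "South Africa", "Sri Lanka", "West Indies",
]

filtered = innings[
    (innings["start_date"].dt.year >= 1971)         # first year of ODI cricket
    & (innings["batting_team"].isin(TOP_EIGHT))     # keep team samples comparable
    & (innings["bowling_team"].isin(TOP_EIGHT))     # avoid minnow mismatches
]
```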
I calculated, at an aggregate level by year, the three standard cricket KPIs familiar to all followers (a rough sketch of the aggregation follows the list):
- Batting strike rate (runs scored per 100 balls)
- Bowling strike rate (balls per wicket taken)
- Batting average (runs scored per wicket) (this is also equivalent to bowling average at the aggregate level)
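Reusing the hypothetical filtered innings table from above (column names again assumed), the yearly aggregation could look something like this:

```python
# Sum innings totals to format x year level, then derive the three KPIs
yearly = (
    filtered
    .assign(year=filtered["start_date"].dt.year)
    .groupby(["format", "year"], as_index=False)[["runs", "balls_faced", "wickets"]]
    .sum()
)

yearly["batting_strike_rate"] = 100 * yearly["runs"] / yearly["balls_faced"]  # runs per 100 balls
yearly["bowling_strike_rate"] = yearly["balls_faced"] / yearly["wickets"]     # balls per wicket
yearly["batting_average"] = yearly["runs"] / yearly["wickets"]                # runs per wicket
```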
An inverted batting market
For the first time in the 50-year history of the coexistence of men’s Test and ODI cricket formats, the year-on-year batting average curves have crossed. Batting sides in the ODI format are consistently scoring more runs per wicket than their Test counterparts, even though they are also making their runs much more quickly.

You can see Test cricket batting averages were relatively flat until the mid-1990s, but perhaps off the back of the momentum and philosophy of ODI cricket, surged by more than 5 runs per wicket into the mid-to-late 2000s. In the last decade particularly, Test batsmen have had a much more difficult time of it with averages plummeting to easily the lowest in the past half century.
ODI cricket has featured a steadily increasing batting average since its 1971 inception. Indeed, since about 2010 this average has started to pull away from a previously consistent trend, with the 2019 (37 runs per wicket) and 2020 (39) batting averages easily the best on record.
Also note that the much shorter T20I series is steadily increasing as well, with its peak also in 2020 (29 runs per wicket). In fact, this was within a run per wicket of the Test average!
It’s not that ODI batting averages have just peeked above Test batting averages. It’s that they have consistently done so for over five years, and they are actually pulling further away!
So how is this the case? How is ODI cricket having its cake (high batting strike rates) and eating it too (high batting averages)?
A deeper dive
Although I indicated earlier that a batting side’s philosophy is to optimise its score given the wickets and overs available in the format, this framing hides the true trade-off. The fundamental balancing act is between the scoring rate (batting strike rate) and the number of balls faced per wicket (bowling strike rate). Increase one, and you should expect to reduce the other. What you typically want to maximise is runs per wicket, which is the product of the two.
Although the denominators are a little funny, our three metrics can be linked together via the following expression:
Batting average = (Batting strike rate / 100) × Bowling strike rate
or,
[Runs / Wicket] = [Runs / 100 balls] × [Balls / Wicket]
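To make the identity concrete, here is a tiny sanity check using round, purely illustrative numbers (not taken from the dataset):

```python
def batting_average(batting_strike_rate: float, bowling_strike_rate: float) -> float:
    """Runs per wicket, from runs per 100 balls and balls per wicket."""
    return (batting_strike_rate / 100) * bowling_strike_rate

# Illustrative only: 90 runs per 100 balls, one wicket every 40 balls
# -> (90 / 100) * 40 = 36 runs per wicket
print(batting_average(90, 40))  # 36.0
```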
Let’s consider these fundamental variables individually to search for a better explanation.
1. Batting strike rates
You can see below that batting strike rates have generally increased over time across all formats. What is interesting to note is the diverging trend lines during the 2010s. Where batting strike rates in both T20I and ODI cricket have surged by over 10 per cent in that time, the potency of batting in Test cricket has tapered off slightly.

While the commentary of the mid-90s to the mid-00s often attributed the increasingly attacking nature of Test batting to skills and philosophies honed in ODI cricket, it appears this link may have broken over the past decade.
2. Bowling strike rates
You can also see that Test cricket’s bowling strike rates have fallen consistently over the past half-century. This indicates that the level of bowling has improved relative to batting defences. Indeed, early in the period, the data show it was fairly common for a wicket to cost more than 80 deliveries. In recent years, wickets have tended to fall in fewer than 60 deliveries, with the chart once again showing a particular drop-off in the past decade.

Bowling strike rates (balls per wicket) have consistently fallen in Test cricket over the past 50 years, but are relatively flat in ODI and T20I cricket.
When it comes to short-form cricket, the above chart also shows that bowling strike rates have been noticeably flat for a long time. If anything, bowling strike rates have slightly increased in ODI and T20I cricket in the past five or so years, indicating bowlers are finding it harder to take wickets.
Bringing it all together
Using our equation from earlier, we can summarise the general trends by 2020 compared to the respective baseline levels from about a decade earlier:
Format | Batting strike rate | Bowling strike rate | Batting average |
Test | Slightly lower | Lower | Significantly lower |
ODI | Higher | Slightly higher | Significantly higher |
T20I | Higher | Slightly higher | Significantly higher |
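As a rough sketch, the comparison behind this table could be pulled from the hypothetical yearly KPI table built earlier; the 2010 baseline is my arbitrary choice of ‘about a decade earlier’:

```python
KPI_COLS = ["batting_strike_rate", "bowling_strike_rate", "batting_average"]

baseline = yearly[yearly["year"] == 2010].set_index("format")[KPI_COLS]
latest = yearly[yearly["year"] == 2020].set_index("format")[KPI_COLS]

# Percentage change per format between the baseline year and 2020
change = 100 * (latest - baseline) / baseline
print(change.round(1))
```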
This is where we find some answers. If we consider batting philosophy as a trade-off between risk and return, we find that Test batting is losing ground on both counts. Rather than balancing the speed of runs scored with the number of balls faced, Test batsmen are scoring at slower rates at the same time as facing fewer balls in the process.
All the while, the ODI and T20I formats are paying no heed to the trade-off either, winning on both counts. Batsmen in these shorter formats are scoring at faster rates while at the same time facing more balls per wicket.
At face value, this doesn’t make much sense. Yes, a global, systemic factor across formats could explain one of these trends. For example, technology producing higher-quality cricket bats could explain why bat is becoming more dominant over ball. But here we have contrasting patterns. Why is bat winning over ball in two formats, while ball is winning over bat in the other?
Across these data, the teams are the same, the grounds are (mostly) the same, the players are pulled from the same talent pool (and are often the same)… so what could explain the difference?
Possible explanations
If everything else were equal (players, teams, grounds, pitches) – and all players and teams were attempting to play optimally at all times – I can’t see a logical reason for this trend to occur (although I may be wrong, and feel free to correct me if I am!). As such, my hypothesis is that one or more structural divergences between the formats have widened over the past decade, and that these may go some way to explaining the trend.
1. Diverging match and pitch conditions
A fundamental factor underlying all cricket scoring is the match conditions, and the biggest one that could be systematically controlled over time is the quality of the pitch.
My first hypothesis is that the quality of cricket pitches to bat on has improved more in short-format cricket than in Test cricket over the past decade.
ODI cricket found its position under attack from both flanks towards the back end of the 2000s. From one flank, Test cricket roared to new life with a rekindled Ashes rivalry and a new India-Australia rivalry. From the other flank, T20 cricket marketed itself as the ‘best bits of ODI’ cricket without the dawdling middle overs. So how could cricket administrators keep selling tickets and broadcast rights for the once-most-popular format? Keep the boundaries flowing in ODI matches, at as close to T20 pace as possible for 100 overs. That would require batting-friendly pitches.
Following the same logic, it wouldn’t surprise me if, in response to the batting fireworks produced by ODI and T20I cricket, executives steered Test cricket away from being merely the vanilla cousin by positioning it as the unique format in which to view the traditional battle between bat and ball.
It is plausible that the art of curating cricket pitches is now completely distinct per format, with Test pitches attempting to bring the bowlers back as significant performers on the main stage, while ODI and T20I pitches become flatter and flatter roads.
This would also explain the relative dominance of the ball in Test cricket and the relative dominance of the bat in ODI and T20I cricket over the same period.
2. Diverging talent pool per format
Another possible contributing factor could be the elite cricket talent pool diverging or concentrating into discrete pools per format.
We are now effectively a whole generation into the Twenty20 era, where franchise-based domestic T20 tournaments provide lucrative professional cricket pathways for a player pool an order of magnitude larger than even the previous generation.
Where Gen Y talent grew up seeing limited international cricket as the only true professional pathway, Gen Z have grown up knowing of the opportunities on the T20 circuit. Perhaps these incentives, alongside shifting schedules towards shorter formats in the junior ranks, are shifting techniques to favour dare over grit. This would align with the ongoing and continued improvements of batting trends in the ODI and T20I formats.
Perhaps this has also resulted in a reduced depth of the talent pool of batsmen able to bat through a day of Test cricket. You may then ask: why are we not seeing run rates improve at Test level? Well, perhaps the talent pool has diverged so much that those left over with enough technique to ‘bat time’ don’t also have the gears to go along at the same clip as the previous generation. And if they once did, those techniques may have played second fiddle to their ability to hit sixes at will.
I have no evidence for this hypothesis, nor do I necessarily think it is likely. But I do think it could partly explain the data.
Improving the analysis
While enlightening and potentially uncovering a legitimate signal, this analysis is shallow and could be improved by controlling for a number of factors and assumptions:
- Quality of participating batsmen per innings – Not all wickets are lost equally, with the top six or seven batsmen expected to produce both higher averages and strike rates than the tail. As the data have been pulled at innings level (rather than at batting position or partnership level), I am unable to control for the number of wickets lost per innings. For example, a declared Test innings, or an innings that ends prematurely when a target score is achieved (with only recognised batsmen featuring), would skew results towards higher averages and strike rates. This could be controlled for if the data were available at batting position or partnership level.
- Frequency of team participation and matchups – Although I have selected the same standard list of the top eight men’s Test-playing nations for consistency, I have not controlled for the number of matches played by each team in each format in the sample. For example, there are likely to be skews towards more or less dominant countries or matchups within the sample. Or perhaps England have played a higher proportion of Test matches, while India have played a relatively higher proportion of T20 internationals.
- Frequency of venue use – A little like the frequency of team participation, my hypothesis here is that different venues are more or less suited to batting, and I have not controlled for the number of matches played at particular venues. It is very possible that scheduling patterns mean that more shorter-format matches are being played on traditionally batting-friendly grounds (better pitches, shorter boundaries) while Test matches are being retained at other venues.
Feedback
I have followed cricket for almost thirty years, but this is my first ever cricket post and I may gain a few first-time readers. By no means is this post meant to be authoritative, but exploratory. If I have missed something, made an error, or if you have any suggestions or ideas, please feel free to comment below, shoot me a direct message or hit me up on Twitter.