The previous Timeform Knowledge on “Pricing Up A Race” mentioned the existence of both intuitive and mathematical ways of tackling the task of assigning probabilities to horses in a race, but focused on the former.
The mathematical treatment of the problem is a vast subject, too large to cover in its entirety in this series, but it may be useful to consider one particular approach before moving on to how we can utilise the prices we have arrived at.
Timeform Master Ratings are a reflection of the best form of which a horse is currently considered to be capable. Separate research shows that a horse’s highest recent rating – from which that Timeform Master Rating will often be derived – has a powerful predictive element.
However, horses with identical peak ratings may have different likelihoods of repeating those ratings, or of running better, worse or much worse than those peak ratings. There is, in reality, a “probability density” around those ratings, covering everything from the horse improving greatly to running woefully.
In most instances, we cannot reasonably deduce this probability density directly from the horse itself – which might have run only a few times, or not at all – but we can infer it from similar cases, of which horseracing history provides countless examples.
By way of illustration, we can consider how every older horse on the Flat in Britain and Ireland in 2014 performed compared to its pre-race Timeform Master Rating if that rating was between 70 and 90 inclusive. These were, in the vast majority of cases, exposed horses with many runs under their belts.
We can then “sample” 10 performances for horses of different kinds, though in this instance the sampling is deliberate (“purposive” in technical jargon) rather than random. The 10 performances comprise the best and the worst efforts relative to the horses’ pre-race ratings, plus eight regularly spaced efforts covering what comes in between.

On the top row you have a representative of those horses which had a “p” symbol on their ratings pre-race; on the bottom row you have a representative of those horses which had no symbol at all. The array of performances shows how far below (minus) or above (plus) its pre-race master rating each representative horse ran in these 10 purposively sampled instances.
We can assume that these two horses had identical numerical ratings and then run fictional races between them.
We need 100 such fictional races for every eventuality to be covered (10 possible outcomes for Horse A multiplied by 10 possible outcomes for Horse B).
For example, Horse A’s best possible effort (20 lb improvement from its pre-race rating) will trump every possible effort – all 10 of them – from Horse B; but an ordinary, minus 8, performance from Horse A will beat five and be beaten by five of Horse B’s possible efforts.
If you run those 100 fictitious races, you get Horse A beating Horse B 55 times (including dead-heats) and Horse B beating Horse A 45 times (also including dead-heats). What’s more, you have performed a “simulation”, if rather a crude one.
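For those who prefer to see the mechanics spelled out, a minimal sketch of those 100 fictional races is given below in Python (the language choice is ours, and the performance offsets are stand-in values for illustration, not the figures from the table above):

```python
from itertools import product

def match_result(offsets_a, offsets_b, rating_gap=0):
    """Run every pairing of sampled performances (one 'fictional race' per
    pairing) and return (wins_a, wins_b, dead_heats). rating_gap is how many
    lb Horse A's master rating exceeds Horse B's (0 for identical ratings)."""
    wins_a = wins_b = dead_heats = 0
    for a, b in product(offsets_a, offsets_b):   # 10 x 10 = 100 races
        a_perf, b_perf = a + rating_gap, b
        if a_perf > b_perf:
            wins_a += 1
        elif b_perf > a_perf:
            wins_b += 1
        else:
            dead_heats += 1
    return wins_a, wins_b, dead_heats

# Illustrative offsets (lb above or below the pre-race master rating) for the
# "p"-rated horse (A) and the unsymbolled horse (B). These are stand-in
# values, not the actual sampled figures from the article's table.
horse_a = [20, 8, 3, 0, -3, -5, -8, -12, -18, -30]
horse_b = [14, 6, 2, 0, -2, -5, -8, -11, -16, -25]

print(match_result(horse_a, horse_b))   # identical master ratings
```

With the genuine sampled offsets in place of the stand-ins, this is the calculation that should reproduce the 55-45 split described above.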
The same procedure could be extended to the same distributions but for horses with different master ratings, and to different distributions in which the master ratings are the same or different.
For instance, you would get the following outcomes if you used the same distribution but for horses with different master ratings, with the % figures indicating the proportion of those 100 fictional races that would be won by Horse A and Horse B:

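The table itself is not reproduced here, but the extension is mechanically simple: shift one horse's sampled performances by the ratings gap before comparing. Using the same stand-in offsets as in the sketch above (so the percentages are illustrative, not the article's):

```python
# Same illustrative offsets as before; Horse A now rated 5 lb higher than
# Horse B (the 5 lb gap is a hypothetical example).
wins_a, wins_b, dead_heats = match_result(horse_a, horse_b, rating_gap=5)
print(f"Horse A {wins_a}%  Horse B {wins_b}%  dead-heats {dead_heats}%")
```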
Horseraces are not, usually, simple matches between two horses, so you would need to extend the experiment to include more competitors. You would also, ideally, want to extend it to include hundreds or thousands of variations of performance by each of those competitors, not just 10 as in the illustration.
There is not a spreadsheet in the world big enough to perform every possible calculation for, say, a 40-runner Grand National in which every horse has a multitude of potential performances. What statisticians often do in such instances is sample randomly from that entire probability density and run thousands (or even millions) of fictitious races in what is known as a Monte-Carlo simulation.
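A minimal sketch of such a Monte-Carlo simulation is shown below. It assumes, purely for illustration, that each horse's performance is normally distributed around its master rating with a 12 lb spread; a real model would estimate the shape and width of that distribution from historical data, as in the 2014 exercise above, rather than assume it.

```python
import random

def monte_carlo_win_probs(ratings, spread=12.0, n_races=100_000, seed=1):
    """Estimate each runner's win probability by simulating n_races races,
    sampling every horse's performance from an assumed distribution around
    its pre-race master rating."""
    random.seed(seed)
    wins = [0] * len(ratings)
    for _ in range(n_races):
        performances = [random.gauss(r, spread) for r in ratings]
        wins[performances.index(max(performances))] += 1
    return [w / n_races for w in wins]

# Illustrative three-runner race with master ratings of 85, 80 and 78
print(monte_carlo_win_probs([85, 80, 78]))
```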
Despite the somewhat frivolous connotation of the term, Monte-Carlo simulation has a long and distinguished history in statistics and mathematics. For an example of this sort of approach applied to racehorse ratings in a much more advanced way, readers are directed to an excellent blog by former Timeform employee James Willoughby.
***
So, you have some odds/probabilities for horses in a race, derived from instinct, or algorithms, or Monte-Carlo simulation, or maybe a bit of each: how do you use them?
Well, the temptation may be to plough straight on and start backing, or laying, horses that are even slightly out of line with your assessments.
Maybe. But that ignores at least two important considerations which may cause you to come unstuck.
One is that you, or your algorithm, or your simulation, may be prone to error and inaccuracy. Indeed, it would be staggering were it otherwise.
The other, perhaps even more importantly, is that you will be acting on partial information, no matter how hard you try. The market itself will tell you things that you could not have factored in, though the market is not all-knowing and you should be prepared to stick to your guns when you have good reason to.
This could be viewed as another example of a Bayesian process. Remember, from an earlier module in The Timeform Knowledge, that the essence of Bayesian updating is that you should, when presented with new information, adjust your expectation of an uncertain outcome in a manner that trades off the strength of your original expectation against the strength of the new information.
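One crude but serviceable way to picture that trade-off is a weighted blend of your own probability and the one implied by the market, with the weight standing in for how much you trust each source (a full Bayesian treatment would work with whole distributions rather than a single weight). As a sketch:

```python
def blend_estimates(own_prob, market_prob, own_weight=0.6):
    """Trade off your original expectation against the new information from
    the market. own_weight is an assumed measure of confidence in your own
    assessment; it is not a figure taken from the article."""
    return own_weight * own_prob + (1 - own_weight) * market_prob

# You make a horse a 1-in-6 chance (5/1); sustained support sees it backed
# in to 4/1 (an implied 0.20)
print(round(blend_estimates(1 / 6, 1 / 5), 3))   # roughly 0.18
```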
If you are convinced, come what may, that a horse should be 5/1, no more and no less, then feel free to lay it at 9/2 (5.5 in decimal terms) and back it at 11/2, but you will almost certainly be wrong to be so dogmatic.
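For reference, the conversion between the formats used here is straightforward: decimal odds are fractional odds plus one, and the implied probability follows directly. A quick sketch:

```python
def fractional_to_decimal(num, den):
    """Decimal odds = fractional odds + 1, e.g. 9/2 becomes 5.5."""
    return num / den + 1

def implied_probability(num, den):
    """Probability implied by fractional odds, e.g. 5/1 becomes 1/6."""
    return den / (num + den)

for num, den in [(9, 2), (5, 1), (11, 2)]:
    print(f"{num}/{den}: decimal {fractional_to_decimal(num, den):.2f}, "
          f"implied probability {implied_probability(num, den):.3f}")
```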
If you think a horse should be “around 5/1” but you will be encouraged by support for it, as that usually presages a good performance, you might feel justified in taking slightly less on the back of the additional information provided by the horse being well backed.
Conversely, you might choose to look a gift horse in the mouth when that horse doubles in odds for no apparent reason.
It depends: it depends on events, on the predictive nature of the market, and on your own degree of confidence in the steps you have taken to assess the many probabilities.
Public odds compilers usually have to specify one single price at which a horse should trade. A private odds compiler does not have to be so constrained. View a horse’s probability of winning a race as a range of acceptable values, dependent on the probabilities of other horses in the same race, and learn how to read the indications of the market. That should take you a long way.
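To make that concrete, a final sketch turns a band of acceptable win probabilities into a band of acceptable prices (the probability figures below are illustrative only):

```python
def acceptable_price_range(prob_low, prob_high):
    """Convert a band of acceptable win probabilities into decimal odds,
    biggest price first. Ignores any margin you may wish to build in."""
    return 1 / prob_low, 1 / prob_high

# You judge the horse's true chance to lie somewhere between 15% and 18%
low, high = acceptable_price_range(0.15, 0.18)
print(f"Acceptable decimal price range: {low:.2f} down to {high:.2f}")
```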









