We do take critics' scores every time they re-taste a given wine/vintage - quite a lot of them do that. I can't say without checking with our data people whether we only use the latest score from a given critic for a given wine, or whether we use them all. We probably should use just the latest if we are not already.
I would not consider CT users to be "generic consumers". They are probably not as disciplined or well trained as critics (though some will be) - but probably more objective than your random consumer. I think we do treat CT scores as a "critic" - again, I'd need to check on that.
It is always an interesting challenge when dealing with large amounts of data - picking out the highs, lows, and norms from the noise.
Just because the 2001 Yquem got 100 points from Mr Parker doesn't mean that all 2001 Sauternes were epic - maybe it was in fact an outlier. How can we show the outliers, but also be realistic about the average? Maybe showing min/max/average is worthwhile?
A min/max/average is for sure more interesting. A vintage with more outliers/spread would tell me it is a vintage where I have to be more producer-selective. I would prefer a scatter plot for this, but that might be a bit too nerdy for most.
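(For illustration only, here is a minimal sketch of the kind of per-vintage spread summary being suggested - min/max/average plus a spread figure that flags "producer-selective" vintages. The data shape and the numbers are invented assumptions, not Wine-Searcher's actual pipeline.)

```python
# Illustrative sketch: summarise per-vintage spread so outliers and
# high-variance ("be selective") vintages stand out. Input is assumed.
from statistics import mean, stdev

# Hypothetical (vintage -> list of scores from many producers/critics).
scores_by_vintage = {
    2001: [100, 96, 93, 90, 88],   # wide spread: pick producers carefully
    2002: [92, 91, 91, 90, 90],    # tight spread: consistent vintage
}

for vintage, scores in sorted(scores_by_vintage.items()):
    spread = stdev(scores) if len(scores) > 1 else 0.0
    print(f"{vintage}: min={min(scores)} max={max(scores)} "
          f"avg={mean(scores):.1f} spread={spread:.1f}")
```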
Thanks - I was clicking on the rating for a single vintage. Once I started drilling down it all became pretty clear.
Poking around (e.g. Napa, White Burgundy), what really jumps out (as others have mentioned) is the narrow range of vintage ratings for a particular region. Nearly all White Burg going back 30 years sits in a very narrow band (92-93). The outliers for other regions are similarly rare. So I guess I'm trying to understand what the value of this is, and why the aggregate scores are so consistent.
As challenging as vintage charts are to use at all, I think this one is less useful than one from a single source. The Berserker theory about wine critics is that they are most useful once you understand one well enough to know how they align with your own palate. Vintage charts would be similar - if you understand the source, you have a sense for how its scores would relate to your palate.
Still it’s an interesting exercise. It reveals more about the critics (they are wildly inconsistent) than anything else.
Is there a way to tweak the algorithm so that more recent reviews get more weight? In other words, I would think that a reviewer doing a retrospective on, say, 2010 Bordeaux should get more weight than a guy rating barrel samples.
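(One simple way the recency weighting being asked about could work - purely a sketch. The exponential decay, the 5-year half-life and the example dates are assumptions for illustration, not how Wine-Searcher actually aggregates.)

```python
# Sketch of a recency-weighted average: newer reviews count for more, so a
# recent retrospective tasting outweighs an old barrel-sample score.
# The 5-year half-life is an arbitrary illustrative choice.
from datetime import date

def weighted_score(reviews, half_life_years=5.0, today=None):
    """reviews: iterable of (score, review_date) tuples."""
    today = today or date.today()
    num = den = 0.0
    for score, review_date in reviews:
        age_years = (today - review_date).days / 365.25
        weight = 0.5 ** (age_years / half_life_years)  # exponential decay
        num += weight * score
        den += weight
    return num / den if den else None

# Hypothetical example: an early barrel score vs a recent retrospective score.
print(weighted_score([(98, date(2011, 4, 1)), (92, date(2023, 6, 1))]))
```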
As a drinker, I don't typically care how a vintage was rated at the time the wines were released; I'm much more interested in how the wines are drinking now. Of course, I would typically use Cellartracker to make that determination, as opposed to a chart, so perhaps I'm not the right audience.
You can always read something useful from the data. If the range of deviation is small, then you can assume that - looking at white Burgundy as a whole within a given price range - the perceived quality is quite consistent. You could read that to mean you are less likely to be making a bad choice, or that there is no need to seek out one vintage particularly over another. Of course, when you get down to the granularity of individual wineries and wines, you should look up their scores specifically. You can also drill down further into the sub-regions, like here: https://www.wine-searcher.com/vintage-chart/white-wine/fine-wines/3255-puligny-montrachet-premier-cru
and extract a bit more insight. And you could look here for Red Burgundy: https://www.wine-searcher.com/vintage-chart/red-wine/fine-wines/1639-gevrey-chambertin
and see perhaps more variation from year to year - and maybe more point-inflation.
Users of the chart, perhaps, should step back from the idea that a vintage chart will show them 98, 99 and 100 points - it will never be that way when looking at averages over many wineries. Maybe we should NOT use the 100-point scale - maybe we should normalise back to something more generic, like 1 to 10.
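(A normalisation like the one floated above could be as simple as a linear rescale of the band the aggregate scores actually occupy. The 85-100 input band below is an assumption for illustration only, not a Wine-Searcher rule.)

```python
# Sketch: map aggregate scores from the (roughly) 85-100 band that averages
# tend to occupy onto a 1-10 scale, so small vintage-to-vintage differences
# become visible again. The band boundaries are assumptions.
def rescale(score, lo=85.0, hi=100.0, out_lo=1.0, out_hi=10.0):
    score = max(lo, min(hi, score))           # clamp to the assumed band
    frac = (score - lo) / (hi - lo)
    return round(out_lo + frac * (out_hi - out_lo), 1)

print(rescale(91))   # -> 4.6
print(rescale(93))   # -> 5.8
```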
Another factor is whether you are trying to compare between regions. Is it fair to say that a 93-point score for fine Red Bordeaux is not as good as a 95-point score for a New Zealand Pinot? Or perhaps they should not be directly compared at all - perhaps it's a matter of taste. I think it's safer to look at the scores in relation to each other - better/similar/worse - than to try to rank them absolutely.
You will get a clearer picture if you take data from just one source - and if you know/trust that source then great! I would prefer to canvass a wider opinion - though of course that has dangers too.
Thanks Jules for those examples. I see your point, and it's reasonable. If you're trying to understand broadly which are the better (reviewed) vineyards over a particular time period, this is helpful. Pucelles > Garenne? Interesting. The very narrow band of scores still undermines this analysis, though. How much better is a 92 than a 91? What about a 93? Is it a little better, or a lot better?
This dig into the data is extremely interesting, I cannot deny it. It's super cool how you can go all the way down to the vineyard level where there is enough data. Wow! Still not sure it's useful, though. I hope it has good business value for WSPro so you guys think it's worthwhile to dig in deeper.
I had a look at the Rhone, focused on the north. In the last decade Hermitage barely edges out Cote Rotie and Cornas, with Saint-Joseph a step behind. In the 2000s it seems like the hierarchy was firmer, with Hermitage > Cote Rotie > Cornas > St. Joseph. But it makes me wonder whether this reflects reviews from the time the wines were released, or recent reviews, which would suggest some regions age better than others. Or both?
We've made a few updates to the vintage chart - we added a "super fine" category for the really high-end wines, which helps to show some of the well-established highlights.
We've also added information for each score indicating how many wines and critics' scores were used to create it.
In the future we expect to add further details - drilling down to the individual wines, etc.
Let us know what you’d like to see.
Thanks for giving us some feedback.
Pretty close to just rating everything 93 and being done with it.
Since critics don't generally publish lower scores (and don't review lesser wines), these are averages of a subset of the wines made - an average of the wines rated within a points range. Basically, here's the average rating of the subset of wines in this category that scored between 88 and 100 points.
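(To make that selection effect concrete, here's a toy illustration with entirely invented numbers: averaging only the wines that cleared a publication threshold produces a higher, tighter vintage score than the full field would.)

```python
# Toy illustration of the selection effect described above: critics mostly
# publish scores in the upper band, so a vintage average built from those
# published scores sits higher than the full distribution.
# All numbers here are invented for illustration.
from statistics import mean

all_wines = [95, 93, 92, 90, 89, 87, 85, 82, 80, 78]  # hypothetical vintage
published = [s for s in all_wines if s >= 88]          # what actually gets printed

print(f"average of all wines:      {mean(all_wines):.1f}")   # 87.1
print(f"average of published only: {mean(published):.1f}")   # 91.8
```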
But since we have so much information available on most better-known wines - most of the better wines, the ones that got reviewed or get reviewed on CT - the usefulness of a vintage chart is greatest for wines we don't have a lot of better, more specific info on. And those are mostly wines that did not contribute to the data from which these vintage ratings were calculated.
There’s a reason there’s an occasional wine board thread when a critic’s rating for a vintage doesn’t seem to sync up with his/her individual wine ratings. It’s a more informed assessment.
So I have to go back to 1993 to find a Cote de Beaune vintage that wasn’t 90+?
But no vintage in that time was 94+?
I understand that this rating is based on just wines for which there are published reviews, and I think this is why the chart is of minimal value as it stands.
How is this for a motto? “WineSearcher, where every vintage is above average.”