With offers already out and more on the way, it would be helpful to compile whatever information our community has on the vintage and any wines tasted, since there is so little info from credible sources to review. (There is another topic on this, but it is now mostly about a wager rather than good info we can use when making buying decisions. Not that there is anything wrong with that, as I am already gambling on the few offers I have blindly purchased.)
Thank you - yes, kind of. I was hoping we had members who have tasted, as well as more critics publishing soon, to help formulate a compare and contrast with, say, 2019, both in QPR and in style.
I am hearing from some that this is a more classic vintage, from others that it's a cross between '18 and '19, and from others that it's 2010 with a touch less fruit… but some of the reviews read as rich and opulent… tough to read the tea/grape leaves at this point. Pulled the trigger on Cheval but no clue what I just bought.
Henry, this is great! Thanks so much for posting it. And if you’re the one doing the work to create it, even bigger thanks. I’m impressed! It will be very useful to me, and I hope there’s a way to keep this as a living tool on Berserkers.
On the off chance you’re looking for suggestions to make this even better:
I really like the use of z-scores to overcome things like Suckling inflation. But the problem of tasting preference remains. For example, while both JL and JR are very capable tasters, they favor different attributes in wine. I know that my palate is much more aligned with JR than with JL. I think the model's mechanics give each grader the same credibility. That makes perfect sense as a default, but it would be very cool if a user could assign weights (summing to 1) to each of the grader columns.
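The weighting idea above could be sketched roughly like this. All critic names, wines, and scores below are invented for illustration; the point is that each critic's raw scores are standardized to z-scores first (removing scale differences like one critic scoring systematically higher), then blended with user-chosen weights that sum to 1:

```python
# Sketch of user-weighted critic z-scores. Scores are hypothetical.
from statistics import mean, stdev

scores = {  # per-critic raw scores for the same wines (made-up numbers)
    "JR": {"Cheval Blanc": 18.5, "Wine B": 17.0, "Wine C": 16.5},
    "JL": {"Cheval Blanc": 97.0, "Wine B": 95.0, "Wine C": 93.0},
}
weights = {"JR": 0.7, "JL": 0.3}  # user preference; must sum to 1

def z_scores(critic_scores):
    # Standardize one critic's scores: (score - mean) / standard deviation.
    mu, sd = mean(critic_scores.values()), stdev(critic_scores.values())
    return {wine: (s - mu) / sd for wine, s in critic_scores.items()}

z = {critic: z_scores(s) for critic, s in scores.items()}
combined = {
    wine: sum(weights[c] * z[c][wine] for c in scores)
    for wine in scores["JR"]
}
```

With equal weights this reduces to the current model; skewing the weights toward JR pulls the blended ranking toward her preferences.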
Thinking about this took me to independent and dependent variables. Between ratings and price, which one is a function of the other? I think you could argue it both ways, but I'd hope the answer is that ratings are a numeric (imperfect) indication of quality, and price would presumably be a function of quality (along with prestige, marketing ability, etc.). The alternative argument is that ratings are driven by price, and I'd hope that isn't true for reputable raters, particularly when the ratings come out before the prices. So if my argument is right, then I would instinctively have ratings on the x-axis and price [f(x)] on the y-axis. Of course, maybe you tried that and it didn't work as well.
I was also thinking about the regression curve. This one is a straight line, and at this point I don't think there are enough data points to justify anything but a straight line. But with a full set of data points, I think you'd see the phenomenon of the most highly rated wines, and the high number of people chasing those relatively scarce wines, creating a somewhat parabolic best-fit curve at the right end. And on the left end, where ratings don't sell the wine, marketing does; I'm guessing that the left-end correlation will be so poor that leaving those wines off the chart might produce the best curve.
Anyway, do with those thoughts what you will, and again, thanks so much for posting this.
IMO, thinking of quality and price as a strictly linear relationship is not valid. Is a wine with 2x the price necessarily going to have 2x the score? Even with z-score normalization, we would expect scores to ultimately have a sublinear relationship to price.
Instead, I prefer to look at the "orthogonal convex hull" around the points in the graph. That is, we should only say that wine(A) > wine(B) if score(A) > score(B) AND price(A) < price(B). If you apply this ordering rule, there will be several maxima in the graph, and they correspond to the "upper" vertices of the orthogonal convex hull around all of the points.
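The dominance rule above (effectively a Pareto frontier on price vs. score) can be sketched in a few lines. Wine names, prices, and z-scores here are purely illustrative:

```python
# Sketch of the dominance rule: wine A "beats" wine B only if A scores
# higher AND costs less. The undominated wines are the "upper" vertices
# of the orthogonal convex hull (a Pareto frontier).
wines = [  # (name, price, z-score) -- invented numbers
    ("A", 30.0, 0.5), ("B", 45.0, 0.4), ("C", 60.0, 1.2),
    ("D", 90.0, 1.1), ("E", 120.0, 1.8),
]

def frontier(wines):
    # Sort by price ascending; keep a wine only if its score exceeds
    # that of every cheaper wine seen so far.
    best, top = [], float("-inf")
    for name, price, score in sorted(wines, key=lambda w: w[1]):
        if score > top:
            best.append(name)
            top = score
    return best
```

Here B is dominated by A (cheaper and higher-scoring) and D by C, so the frontier is A, C, E; everything not on the frontier is, by this rule, strictly worse value than some other wine.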
Someone else created it for 2019, a couple of us have been updating it and adding to it for 2020.
1 - I believe it should be publicly available as a document, so feel free to take a clone of it and modify it to your satisfaction with the critics. Deleting irrelevant critics from the main score chart should do the job, I believe.
2 - Yeah, I kind of get the argument, to be honest, but somehow it feels more intuitive to me this way round. We can probably knock up both charts and see what people prefer, but I'd expect a scores-x / price-y chart to be messier and harder to read. Arguably the price is what the chateau sets, and the score is what gets measured (via the critics, etc.), so it kind of makes sense to me.
3 - The regression is non-linear: it's in log(price). So if, say, price is proportional to score^2, then score = price^(1/2) and log(score) = (1/2) log(price), where the 1/2 is the gradient of a linear regression on the log basis. It works out.
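The log-basis point can be made concrete with a tiny fit. In this sketch (invented data, exact by construction: each 2-point score step multiplies the price by 1.5), regressing log(price) on score turns the multiplicative relationship into a straight line whose slope is the gradient being discussed:

```python
# Minimal sketch: ordinary least squares of log(price) on score.
# A multiplicative price/score relationship becomes linear in log space,
# and its exponent-like factor shows up as the fitted slope.
import math

# Hypothetical (score, price) pairs: price grows by 1.5x per 2 score points.
data = [(90, 20.0), (92, 30.0), (94, 45.0), (96, 67.5), (98, 101.25)]

xs = [s for s, _ in data]
ys = [math.log(p) for _, p in data]  # fit in log(price) space
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
# Predicted price for a given score: exp(intercept + slope * score)
```

Since the toy data are exactly exponential, the fitted slope comes out as log(1.5)/2; on real EP data the slope is just the best linear gradient in log space.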
I'm looking for volunteers to add more analytics to this. So far the focus has been fixing some broken links and updating for 2020 (e.g. Jancis now uses a ++ system). If you're interested in contributing, please just drop me a DM with an email address and I can add you as an editor.
One thing I'd love to get into place as well is release pricing vs end-of-EP pricing, e.g. capturing any secondary price movements.
The regression is at least displayed as log price, so any exponent should come out as a gradient rather than an exponent, making it a linear fit in non-linear space (!), rather than a purely linear fit.
That being said, I'd love to have some people contribute more analytics to this. I've been trying to control edit access so we don't end up in a messy situation, but if you're happy to add some more insight, please ping me a DM with an email address and I can add you as an editor.
One random observation:
The linear model fit here produces a better R^2 than my attempts to find any level of consistency amongst major critics. Jancis to Neal, even Neal to Parker all produced substantially worse predictability!
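The critic-vs-critic consistency check behind that observation is just a pairwise R^2 on commonly rated wines. A sketch with invented scores (the critic pairings are from the post; the numbers are not real):

```python
# Sketch: Pearson R^2 between two critics' scores on wines both rated.
# All score values below are made up for illustration.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

jancis = [16.5, 17.0, 18.5, 17.5]  # hypothetical 20-point scores
neal = [93.0, 96.0, 98.0, 94.0]    # hypothetical 100-point scores
r2 = r_squared(jancis, neal)
```

Note the different scales don't matter here: Pearson R^2 is invariant to each critic's mean and spread, so it isolates how consistently they rank the same wines.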
Just to clarify: on a per-critic basis, IMHO a score is only meaningful if compared to that critic's own scores from previous vintages. As an example, I read that Parker was the first critic to score the 1982 Bordeaux vintage highly; it probably came out as a very high score from him and less so from other people. But if you take it in the context of his other scores, rather than other critics' scores of the same vintage, you realise he was calling it one of (in his view) the best vintages ever.
I don't have Jeff's historic vintage scores to hand, nor the inclination to do the analysis, to call out how this vintage ranks (in terms of data) against his other recent vintages.
But I'd love a critic to be self-aware enough to do that level of insight. Here are the histograms for the last 20 vintages; this is how 2020 numerically stacks up against the en primeur scores for them. I can tell you I think it's somewhere between 2018 and 2019, but does the data show that? You could actually work out, on a region-by-region level, which historical vintage matches it most closely, and a lot of claims could be put under scientific scrutiny.
But I personally don't have the energy to do that right now. Maybe it's the next side project after Bordeaux 2020. Any volunteers to help?
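The vintage-matching idea could be sketched with any crude distance between score distributions. This toy version (all vintage scores invented; a real run would use a critic's actual EP scores, ideally per region) compares a few sorted quantiles per vintage and picks the closest match:

```python
# Sketch: which historical vintage does the current one most resemble,
# measured as a crude distance between sorted score quantiles?
# Every score below is invented for illustration.
def quantiles(scores, k=5):
    s = sorted(scores)
    return [s[int(i * (len(s) - 1) / (k - 1))] for i in range(k)]

def distance(a, b):
    return sum(abs(x - y) for x, y in zip(quantiles(a), quantiles(b)))

history = {  # hypothetical per-vintage EP scores from one critic
    2016: [90, 93, 95, 96, 98, 99],
    2018: [91, 94, 96, 97, 98, 100],
    2019: [90, 92, 94, 96, 97, 99],
}
current = [91, 94, 95, 97, 98, 100]  # hypothetical 2020 EP scores
closest = min(history, key=lambda v: distance(history[v], current))
```

A fancier version could use a proper distribution distance (e.g. Wasserstein) and run it region by region, which is exactly the kind of scrutiny the claims above would need.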
Ask and you shall receive - there's a new tab which relates critic z-score to price per bottle. I've fit an exponential for now, which is the inverse of the log fit on the axis-swapped chart, so I think it works.
The regression quality is quite good for now, but it's clear it will get worse as more wines come out, IMHO. I'll try to swap to a log-Y scale at some point to make it a bit tidier.
Personally I prefer the original chart, but it doesn't hurt to add this one in.
The fit is different to the other chart, so things are a bit inconsistent. I think on the new chart, right of the line = better value for money and left of the line = worse value for money. But Cheval Blanc shows up as (slightly) bad VFM here, whereas on the previous chart it's slightly better VFM.
N.B. There was a bug in the spreadsheet, which was impacting the scores. Resolved now.
Lots of fairly high scores from Jeff, it looks like. I haven't read the other thread with TOO much interest yet, but has he given his overall impressions of the vintage yet? Granted, the high ones are the usual suspects, but then there are potential 99-100s from five or so others as well.
I attended the UGC event in NYC last night. Big crowd, maybe 500-1000 attendees at a guess? It is also restaurant week, so a nice night overall with friends.

My overall impression is that, in general and for my tastes, I liked the 2019s better at last June's event. Granted, the 2019 event, with its postponement from January to June, benefitted from an additional six months in bottle. But in general I thought the '19s were richer, had bigger mid-palates, and showed better across the various chateaus. The '20s had brighter acids for sure, less luxurious mid-palates, lower alcohols. But very evident terroir (I believe that is what it was) in some cases: Pavie Macquin, for example, had a mid-palate with mineral flavors washing over the palate that was awesome.

My favorite wines included Leoville Barton and Leoville Poyferre, Clinet and Pichon Baron. I was surprised by the Poyferre, which I usually am not a fan of, but this vintage just seemed more traditional and hit my taste mark. Clerc Milon was right up there, close to the PB in Pauillac. Langoa Barton was packed… maybe more so than any recent vintage, IMO. Of the ~30 or so wines I tasted, I also made note of Larcis Ducasse, the Pavie Macquin of course, Canon, Rauzan Segla, Phelan Segur, Gazin, SHL, Malartic as a QPR, and Giscours' driving red-berry leanings. So, as usual, no formal notes, just hastily gathered impressions, FWIW.