With offers already out and more on the way, it would be helpful to compile whatever information our community has on the vintage and any wines tasted, since there is so little info from credible sources to review. (There is another topic on this, but it is now mostly about a wager rather than good info we can use when making buying decisions - not that there is anything wrong with that, as I am already gambling on the few offers I have blindly purchased.)
Thank you - yes, kind of. I was hoping we had members who have tasted, as well as other critics soon to weigh in, to help formulate a compare and contrast with, say, 2019 in QPR but also in style.
I am hearing from some that this is a more classic vintage, from others that it's a cross between '18 and '19, and from others that it's 2010 with a touch less fruit… yet some of the reviews read as rich, opulent… tough to read the tea/grape leaves at this point. Pulled the trigger on Cheval but no clue what I just bought.
I can't help on the tasting front, unfortunately. I haven't tasted any myself, though if any chateaux would like to send me samples going forward, I'm open to discussion!
At the least it's useful to have an aggregated view of the critics.
Henry, this is great! Thanks so much for posting it. And if you're the one doing the work to create it, even bigger thanks. I'm impressed! It will be very useful to me, and I hope there's a way to keep this as a living tool on Berserkers.
On the off chance you're looking for suggestions to make this even better:
I really like the use of Z scores to overcome things like Suckling inflation. But the problem of tasting preference remains. For example, while both JL and JR are very capable tasters, they favor different attributes in wine. I know that my palate is much more aligned with JR than JL. I think the model's mechanics give each grader the same credibility. That makes perfect sense as a default, but it would be very cool if a user could assign weights (sum of weights = 1) to each of the grader columns.
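If it's useful, a minimal sketch of that weighting idea in Python, assuming you already had each wine's per-critic z-scores; the critic initials, weights, and numbers are placeholders, not real data or a recommendation:

```python
# Per-wine z-scores by critic (placeholder values, not real data).
z_scores = {
    "Wine A": {"JR": 1.2, "JL": 0.4, "NM": 0.9},
    "Wine B": {"JR": -0.3, "JL": 0.8, "NM": 0.1},
}

# User-chosen weights; they should sum to 1. Equal weights reproduce the default.
weights = {"JR": 0.5, "JL": 0.2, "NM": 0.3}
assert abs(sum(weights.values()) - 1.0) < 1e-9

def blended_z(critic_z):
    """Weighted average of a wine's z-scores, skipping critics who didn't score it."""
    scored = {c: z for c, z in critic_z.items() if c in weights}
    total_w = sum(weights[c] for c in scored)
    return sum(weights[c] * z for c, z in scored.items()) / total_w

for wine, critic_z in z_scores.items():
    print(f"{wine}: blended z = {blended_z(critic_z):+.2f}")
```

Re-normalizing over only the critics who actually scored a wine keeps missing scores from dragging the blend down.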
Thinking about this took me to independent and dependent variables. Between ratings and price, which one is a function of the other? I think you could argue it both ways, but I'd hope the answer is that ratings are a numeric (imperfect) indication of quality, and price would presumably be a function of quality (along with prestige, marketing ability, etc.). The alternative argument is that ratings are driven by price, and I'd hope that isn't true for reputable raters, particularly when the ratings come out before the prices. So if my argument is right, then I would instinctively have ratings on the x-axis and price [f(x)] on the y-axis. Of course, maybe you tried that and it didn't work as well.
I was also thinking about the regression curve. This one is a straight line, and at this point I don't think there are enough data points to support anything but a straight line. But with a full set of data points, I think you'd see the phenomenon of the most highly rated wines, and the high number of people chasing those relatively scarce wines, creating a somewhat parabolic best-fit curve at the right end. And on the left end, where ratings don't sell the wine, marketing does, I'm guessing the correlation will be so poor that leaving those wines off the chart might produce the best curve.
Anyway, do with those thoughts what you will, and again, thanks so much for posting this.
IMO, thinking of quality and price as a strictly linear relationship is not valid. Is a wine with 2x the price necessarily going to have 2x the score? Even with z-score normalization, we would expect scores to ultimately have a sublinear relationship to price.
Instead, I prefer to look at the "orthogonal convex hull" around the points in the graph. That is, we should only say that wine(A) > wine(B) if score(A) > score(B) AND price(A) < price(B). If you apply this ordering rule, there will be several maxima in the graph, and they correspond to the "upper" vertices of the orthogonal convex hull around all of the points.
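For anyone who wants to play with that rule, a small sketch under the stated dominance condition; the wines, prices, and scores are invented for illustration:

```python
# Hypothetical (name, price, score) tuples, purely for illustration.
wines = [
    ("A", 50.0, 94.0),
    ("B", 120.0, 95.0),   # dominated by C: C is cheaper AND scores higher
    ("C", 80.0, 96.0),
    ("D", 300.0, 98.0),
]

def undominated(wines):
    """Keep wines for which no other wine has strictly lower price AND strictly higher score."""
    keep = []
    for name, price, score in wines:
        dominated = any(p < price and s > score for n, p, s in wines if n != name)
        if not dominated:
            keep.append((name, price, score))
    return sorted(keep, key=lambda w: w[1])  # cheapest first

# The survivors correspond to the "upper" vertices described above.
for name, price, score in undominated(wines):
    print(f"{name}: {price:.0f} -> score {score:.1f}")
```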
Someone else created it for 2019; a couple of us have been updating it and adding to it for 2020.
1 - I believe it should be publicly available as a document, so feel free to take a clone of it and modify the critics to your satisfaction. Deleting irrelevant critics from the main score chart should do the job, I believe.
2 - Yeah, I kind of get the argument, to be honest, but somehow it feels more intuitive to me to have it this way round. We can probably knock up both charts and see what people prefer, but I'd expect a scores-x, price-y chart to be messier and harder to read. Arguably the price is what the chateau sets and the score is what gets measured (via the critics, etc.), so it kind of makes sense to me.
3 - The regression is non-linear; it's in log(price). So if, say, price is proportional to score^2, then score = price^(1/2) and log(score) = (1/2) log(price), where the 1/2 is the gradient of a linear regression on the log basis - it works out.
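To make the log-basis point concrete, a small sketch with made-up prices and scores, assuming the chart fits score against log(price):

```python
import numpy as np

# Made-up (price, score) pairs, only to show the shape of the fit.
prices = np.array([40.0, 75.0, 120.0, 250.0, 600.0])
scores = np.array([92.0, 94.0, 95.0, 97.0, 98.5])

# Fit score against log(price): score ~= a + b * ln(price).
# If price rises roughly exponentially with score, this comes out straight,
# so any exponent is absorbed into the gradient b rather than into curvature.
b, a = np.polyfit(np.log(prices), scores, 1)
print(f"score ~= {a:.1f} + {b:.2f} * ln(price)")

# Inverted, the same fit reads price ~= exp((score - a) / b),
# which is the exponential version mentioned further down the thread.
```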
I'm looking for volunteers to add more analytics to this. The focus to date has been fixing some broken links and updating for 2020 (e.g. Jancis now uses a ++ system), but if you're interested in contributing, please just drop me a DM with an email address and I can add you as an editor.
One thing I'd love to get into place, as well, is release pricing vs end-of-EP pricing - e.g. capturing any secondary price movements.
The regression is at least displayed as log price, so any exponent should come out as a gradient rather than an exponent, making it a linear fit in non-linear space (!), rather than a purely linear fit.
That being said, I'd love to have some people contribute more analytics to this. I've been trying to control edit access so we don't end up in a messy situation, but if you're happy to add some more insight, please ping me a DM with an email address and I can add you as an editor.
One random observation:
The linear model fit here produces a better R^2 than my attempts to find any level of consistency amongst major critics. Jancis to Neal, and even Neal to Parker, all produced substantially worse predictability!
Interestingly, he's only a shade behind Suckling's scores. Averages so far:
Suckling: 96.1
Leve: 95.8
Anson: 95.1
PM: 94.7
JMQ: 93.7
JR: 93.3 (using 62.3 + 1.81*score, which was derived from data comparing her scores to Neal Martin's, I think)
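For what it's worth, a conversion like that 62.3 + 1.81*score line can be derived with a simple least-squares fit on wines both critics scored. A hedged sketch with invented numbers (JR on her 20-point scale, Neal Martin on 100 points):

```python
import numpy as np

# Hypothetical example: JR scores on her 20-point scale and Neal Martin
# scores on the 100-point scale for the same wines. Not real data.
jr = np.array([16.5, 17.0, 17.5, 18.0, 18.5, 19.0])
nm = np.array([92.0, 93.0, 94.0, 95.5, 96.5, 97.0])

# Ordinary least squares: nm ~= intercept + slope * jr
slope, intercept = np.polyfit(jr, nm, 1)
print(f"100-pt equivalent ~= {intercept:.1f} + {slope:.2f} * JR")

# R^2 of the fit is a rough check on how consistent the two palates are.
pred = intercept + slope * jr
r2 = 1 - np.sum((nm - pred) ** 2) / np.sum((nm - nm.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```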
Just to clarify: on a per-critic basis, IMHO the score is only meaningful if compared to that critic's own scores from previous vintages. As an example, I read that Parker was the first critic to score the 1982 Bordeaux vintage highly; it'd probably come out as a very high score from him, less so from other people. But if you take it in the context of his other scores, rather than other critics' scores of the same vintage, you'd realise he's calling it one of (in his view) the best vintages ever.
I have no idea about, nor the inclination to do, the analysis of Jeff's historic vintage scores to call out how he thinks this vintage ranks (in terms of the data) against other recent vintages.
But I'd love a critic to be self-aware enough to do that level of insight: here are the histograms for the last 20 vintages, and this is how 2020 numerically stacks up against the en primeur scores for them. I can tell you I think it's somewhere between 2018 and 2019, but does the data show that? You could actually work out, on a region-by-region level, which historical vintage matches it most closely, and a lot of claims could be put under scientific scrutiny.
But I personally don't have the energy to do that right now. Maybe it's the next side project after Bordeaux 2020. Any volunteers to help?
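In case anyone does pick it up, a rough sketch of what that vintage-matching could look like; the per-vintage score lists are invented placeholders, and the distance measure (comparing mean and spread) is just one simple choice:

```python
import numpy as np

# Hypothetical per-vintage en primeur scores from a single critic.
# Real data would have to come from that critic's historical scores.
history = {
    2016: [94, 95, 96, 97, 98, 93, 95],
    2018: [95, 96, 97, 98, 96, 94, 97],
    2019: [94, 96, 96, 97, 95, 93, 96],
}
current = [95, 96, 97, 96, 95, 94, 96]  # say, the 2020 scores

def mean_std(scores):
    s = np.asarray(scores, dtype=float)
    return s.mean(), s.std()

cur_mean, cur_std = mean_std(current)

# Crude distance between score distributions: how far apart the mean and
# spread are. A fancier version could compare full histograms region by region.
def distance(scores):
    m, s = mean_std(scores)
    return abs(m - cur_mean) + abs(s - cur_std)

closest = min(history, key=lambda v: distance(history[v]))
print(f"2020 looks numerically closest to {closest}")
```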
Ask and you shall receive - there's a new tab which relates the critic z-score to the price per bottle. I've fit it with an exponential for now, which is the inverse of the log fit on the axes-swapped chart, so I think it works.
The regression quality is quite good for now, but it's clear it will get worse as more wines come out, IMHO. I'll try to swap to a log Y scale at some point to make it a bit tidier.
Personally I prefer the original chart, but it doesn't hurt to add it in.
The fit is different to the one on the other chart, so things are a bit inconsistent. I think on the new chart, right of the line = better value for money, left of the line = worse value for money. But Cheval Blanc shows up as (slightly) bad VFM here, whereas on the previous chart it's slightly better VFM.
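For reference, a small sketch of how that exponential fit and the value-for-money reading could work, with made-up z-scores and prices (the real ones live in the spreadsheet):

```python
import numpy as np

# Made-up (z-score, price) pairs; the real ones live in the new tab.
z = np.array([-1.2, -0.5, 0.0, 0.6, 1.1, 1.8])
price = np.array([35.0, 48.0, 70.0, 110.0, 220.0, 550.0])

# Exponential fit price ~= exp(a + b*z), done as a linear fit in log(price).
b, a = np.polyfit(z, np.log(price), 1)
expected = np.exp(a + b * z)

# On this reading, a wine priced below the curve for its z-score is better
# value for money; above the curve, worse.
for zi, p, e in zip(z, price, expected):
    verdict = "better VFM" if p < e else "worse VFM"
    print(f"z={zi:+.1f}: price {p:.0f} vs fitted {e:.0f} -> {verdict}")
```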
N.B. There was a bug in the spreadsheet, which was impacting the scores. Resolved now.
To the earlier point, LPB's average is 94.8, with a high (2.8) standard deviation. I THINK high standard deviation is good - it means you're using more of the scoring range.
Lots of fairly high scores from Jeff, it looks like. I haven't read the other thread with TOO much interest yet, but has he given his overall impressions of the vintage yet? Granted, the high ones are the usual suspects, but there are potential 99-100s from like 5 or so others as well.
Panos has a good take in his top 50 and in his discussion of 2020 vs 2019 in the other thread (posts 505 and 532). Worth reading if you haven't seen them.
I attended the UGC event in NYC last night. Big crowd, maybe 500-1,000 attendees, as a guess? It is also restaurant week, so a nice night overall with friends. My overall impression is that I liked the 2019s better at this event last June, in general and for my tastes. Granted, the 2019 event, with its postponement from January to June, benefited from an additional six months in bottle. But in general I thought the '19s were richer, had bigger mid-palates, and showed more across the various chateaux. The '20s had brighter acids for sure, less luxurious mid-palates, and lower alcohols. But there was very evident terroir (I believe that is what it was) in some cases; Pavie Macquin, for example, had a mid-palate with mineral flavors washing over the palate that was awesome.

My favorite wines included Leoville Barton and Leoville Poyferre, Clinet, and Pichon Baron. I was surprised by the Poyferre, which I usually am not a fan of, but this vintage just seemed more traditional and hit my taste mark. Clerc Milon was right up there, close to the PB in Pauillac. Langoa Barton was packed… maybe more so than any recent vintage, IMO. Of the ~30 or so wines I tasted, I also made note of Larcis Ducasse, the Pavie Macquin of course, Canon, Rauzan Segla, Phelan Segur, Gazin, SHL, Malartic as a QPR, and Giscours' driving red-berry leanings. So, as usual, no formal notes, just hastily gathered impressions, FWIW.