Two-part Burgundy question

Hi, new to the community, this is my first post.

When I’m looking at scores (e.g. Burghound) on wines, is the score that the wine gets a completely objective score, as though it’s compared to all other wines (perhaps just of that type?) or is it a subjective score, like, this 2010 Ponsot Clos De La Roche scores 94 points, compared to other vintages?

In other words, in the mind of the reviewer, is a 92 point Laboure-Roi just as good as a 92 point DRC?

If the answer is yes and scores are meant to be truly objective, why would anyone in their right mind pay a few hundred dollars for a village wine from a top producer that scores an 89-90 when they could get a grand cru from a less prestigious producer that scores a 95 for the same amount of money?

1 Like

Dear lord, you’ve gone and poked the Burgundy bear. Get ready.

15 Likes

That’s why numeric scores aren’t very useful.

7 Likes

Wait…I thought spending more money automatically made a wine taste better. :rofl:

5 Likes

And the points bear as well, double hit!

:popcorn::popcorn::popcorn::popcorn::popcorn:

5 Likes

This is the “obviously ridiculous” question to people here that is the most reasonable question to everyone else on earth.

It would make a lot of sense for a Burgundy reviewer to make their relative evaluation applicable to all of the wines they review in Burgundy, and as such their numerical rating would be based on the quality of the wine only.

Now here come the 800 people to say why that plain logic is stupid and useless.

7 Likes

The problem is that there’s no consistency in how people score wines numerically. Some reviewers have a maximum score that they’ll give a village tier wine. Some reviewers also score only based on ultimate potential. If you look at some reviewers scoring a portfolio of wines they’ll always max out at ~92 pts for a village wine, ~94-95 pts for 1er, and no cap for a grand cru.

Some people only score wines based on a single point in time. It’s certainly possible, if not likely, that shortly after release a village wine, especially one made without a lot of new oak, is drinking better than a top grand cru, but you’ll never see a critic score them that way.

I think it can be very difficult to compare wines in this context though. I’ll give you an example, although without numeric scores as I don’t use them.

https://www.instagram.com/p/DKEyDS6uVZR/?igsh=d2N1OGlsM296Nnky

I copy/pasted the notes in case you can’t see them.

2019 DRC Corton

Wide open and ready for business. Lovely sandalwood, sea salt, and 5 spice on the nose, with incredible density and palate presence. Super long finish. Just outstanding.

2021 Mugneret Gibourg Vosne Romanee

Bought off the list for essentially retail. Tough act to follow but this was a very pretty, classic wine that hits all the right notes. It didn’t have the intensity, density, or length of the DRC, but was very enjoyable with beautiful pure exuberant fruit, and was a lovely nightcap. These 21 MGs are drinking so very well right now.

The 21 MG Vosne Romanee is one of my favorite wines. It’s beautiful, pretty, floral, with lovely fruit and aromatics. It’s also clearly not of the same caliber as the 19 DRC Corton, which has far more weight and complexity. Different reviewers would score these wines very differently. If you were issuing a numeric score based on potential, clearly the DRC would win. If you were scoring based on pleasure on the day they were both opened, I think it’d be very close.

11 Likes

Hi Paul, welcome to the community! Your question is a completely reasonable one. Rest assured, if you see any sarcasm in response, it’s not aimed at you. It’s the bitter/hilarious experience of veterans of many Burgundy flame wars.

To address your question: no — no sane person with functioning taste buds can seriously think a 92 pt Labouré-Roi is the quality equal of a 92 pt DRC. A separate question is whether the difference in experience merits a 2+ order of magnitude price difference. To me, the issue is not “quality” per se, it is uniqueness. I personally get something out of DRC that I really don’t get out of much anything else — in my case, only with age. Add to that a scarcity premium and pretty soon you have a wine that costs a ridiculous amount of money.
A long-winded way of agreeing with @MChang — numeric scores are of very limited use across different wines, even when by the same reviewer.

4 Likes

In order to score Burgundy wines, I use this formula:
[image: scoring formula]

16 Likes

Hi Paul
Welcome!

My understanding is that points are generally allocated relative to either other burg wines for a specialist in that region, or relative to all other wines for a generalist.

Moving on: your latter question cuts, IMO, to the heart of the debate about whether points are useful or a blight on our hobby. It can be somewhat contentious.

For me it’s just too easy to fall into the thinking of that latter question, trusting the reviewer to come up with a precise number to represent how much everyone would enjoy that wine. For context, I absolutely do not trust myself to come up with such a number for myself, let alone for you or anyone else who may read what I write. Some critics write in a manner that encourages readers to believe that if they say “it’s a 95 point wine”, then that’s an undeniable truth. Their confidence attracts followers who are unsure of their own palates.

My recommendation is to try your theory out. Gather some friends and some bottles, and without mentioning scores from critics, have your friends try wines in matched (by price) pairs. Will the big name producers come out on top, or the big name vineyards? It’s very interesting to see not just which ‘wins’, but the variance between the tasters.

If you do try it out, please post your observations on the tasting here. It would be interesting to read.

1 Like

Far too logical and deterministic to fully capture the joys of burgundy scoring. I prefer:

https://wikimedia.org/api/rest_v1/media/math/render/svg/2db6afd0b84b7705fcc8891a5c6c48e816b60f41

2 Likes

MBGA approved?

Make Burgundy Great Again

All kidding aside, scores are a subjective, singular assessment by an individual. They have little relevance for another individual.

As for whether a 92 point this is the same quality as a 92 point that - perhaps, but given that “this” and “that” are different things, there’s no objective way to compare them that is relevant across a population greater than the one individual who assigned the 92 to both.

4 Likes

I agree with @David_Bu3ker. Now that said, in some contexts scores along with notes serve a useful purpose with circumscribed applicability. For example, if I have a bead on the preferences of a reviewer, that reviewer’s scores are useful on a relative basis — and this is true even if my tastes don’t align with the reviewer’s. For example, I think I have a handle on @Jeff_Leve’s preferences — mostly because I don’t think he tries to hide them or pretend that he is the last word on wine. That makes his notes and scores useful to me, especially when assessing which vintage of a given chateau I will like the most. I made extensive use of his generously supplied TNs on CellarTracker to help me judge where to allocate my cellar space, because I can’t own all chateaux in all vintages all the time. And this is true even though my tastes don’t really fully align with his. I could give other examples with other critics as well.

On the other hand, to take a different example, average scores on Cellar Tracker are a poor guide to relative value. I love Cellar Tracker, but I don’t take scores there seriously.

2 Likes

Yes, this brings me to my next question: what is generally a more reliable quality signal — the collection of scores/remarks from the professional critics or the chorus of guys on CellarTracker?

The former reviews might be quite old, written shortly after release, but they are put out by people who ostensibly have a more significant level of authority or expertise. The latter have the benefit of being much more recent, but alas might have been written by a rank amateur such as myself.

For some reason, I think about this subject in the context of movie reviews. For example, are the rave reviews and appreciation for Bridesmaids - generally regarded as one of the funniest/top comedies of the past 25-odd years - equivalent to the acclaim There Will Be Blood received, even if both got five stars? Maybe not. Should they be? Maybe not, too.

So to answer the question, I think context matters - a 90 for a Bourgogne is generally considered great, and a 92 for a Grand Cru generally considered just OK or even a slight disappointment.

4 Likes

You should

If you subscribe to the service, ask the critic what his intent is. I’ve had many of those intellectually honest discussions with @William_Kelley - and he is very clear about his methods (and where there is imperfection in the method as well). There is no point asking Wine Berserkers what the critic intended - you will get a bingo card of responses!

Generally speaking, I would assume the wine critic is scoring all wines of a specific region equally. However, I’m assuming that same critic may run into a ‘compression’ issue where, let’s say, DRC makes 20 different wines and they need to be scored relative to each other as well, leading to some great wines being scored a few points lower than otherwise.

And yes, I believe a DRC wine can score 92 points. Just because a wine is expensive doesn’t guarantee it will be 95 points all the time. And no, I have not had a 92 point DRC wine - recognizing I’ve only had 2 tastes over my lifetime.

2 Likes

100% the most reliable signal is your own palate / experience. Even that isn’t infallible, as tastes change, or the setting may affect opinions.

Critics often have a small advantage over others in having a good wine vocabulary that helps us better understand their wine descriptions, but I’d take an ‘amateur’ like @Otto_Forsberg over any of them in that respect. In terms of aligning to a critic’s palate, no one has struck me that way, e.g. by repeatedly calling out wines I love as under-the-radar gems, or cutting down tall poppies I’ve not been impressed by.

Thus if I read a Tasting Note (TN) that sounds interesting, then it’s a wine I might seek out to try, but the truth of whether I’ll like it or not, lies within the bottle.

3 Likes

Well, given those two choices the professional critic will be more reliable — in the sense of more likely to be consistent. Even here, though, there are limits. Burg usually scores lower than Bordeaux at similar prestige levels. Dessert wines seem always to score 95-100, because… RS? I find it very difficult to compare scores across critics for the same category of wine (e.g. Burg in general), and handicapping scores across categories (Bdx v CA v Germany v Burg) almost impossible. It’s like how time flows between Narnia and Earth - no clear transform from one to the other, not linear, barely monotonic.
One way to handle Cellar Tracker reviews is to find other posters who give descriptive notes about wines you’ve already had so you can baseline their perceptions. Get a few of these. Then use those reviews on CT to help you make decisions. Average CT scores, or scores posted without a review, are pretty useless.

2 Likes

You don’t drink scores, you drink wines. While everything people have said so far about objectivity and pro vs amateur scores is useful for evaluating scores here versus scores there, in the end no two 92 point wines taste the same, ever. Why spend the extra money for a grand cru 92 pointer over a village wine of 92 pts? Because you like it more. You should always buy the wine you like more, regardless of score, if you can. It’s nonsense to say “here are 10 wines all of which got 90 points, so I will buy the least expensive 90 pointer and know I’ve gotten the best deal.”

Scores are much more useful to me in the context of wines I know I like or want, helping me choose which vintage or which cru. For a wine I don’t know, scores are only useful up to the point of “okay, people seem to think this is good.”

13 Likes