CA Pinot vs Burgundy — why are critics kinder to CA?

First, while I read tasting notes from publications, I have never bought a wine because of its score.

Second, I get that sometimes/often these are different reviewers for the two regions.

However, if you’re running a publication that uses the old 100-point scale, and you’re reviewing pinots from all over the world, wouldn’t you want a 95 to always be better than a 92?

And there is simply no way that there are tons of great and world-class Pinots (95 points?) in CA for $50 if there are hardly any in Burgundy for less than $200… if that were the case, no one would bother chasing good/great Burgundy.

Again, I am by no means an expert, but I’ve tasted my thousand or so Pinots. I’ve never been afforded the chance to taste much Burgundy in the $150-and-up category, but in the $50-150 range it often seems as compelling per dollar as its CA counterparts, or more so.

Tell me why I’m an idiot. Thanks in advance!

1 Like

I have always wondered this as well. It’s not just true for professional reviewers, I feel. CT (CellarTracker) does the same thing. Consider this: average CT ratings for Rivers-Marie old-vine Summa are about the same as for Domaine Leroy Nuits-Saint-Georges.

I’m gonna get pushback on that, so I’ll qualify it: that was just an example based on a very quick CT search.

2 Likes

It’s pretty much an objective fact in my own mind that Burg scorers (pro and amateur alike) are the toughest of the bunch. I don’t know why that is.

2 Likes

They definitely are. That’s the reason Suckling doesn’t really do Burgundy, haha. It could also have something to do with cost? However, scores should be assigned without regard to cost/QPR, and the note should state if a wine punches above or below its price point, IMO.

1 Like

I think many Burg lovers are tough scorers because they tend to be more experienced drinkers. They’ve tried all the other wines (started on Cabs, etc.) before the road led them to Burgundy. But I agree with the OP’s observation: lots of expensive red Burgundy out there with low-90s scores.

2 Likes

> First, while I read tasting notes from publications, I have never bought a wine because of its score.
Lucky you, I’ve bought hundreds.

> Second, I get that sometimes/often these are different reviewers for the two regions.
Yup.

> However, if you’re running a publication that uses the old 100-point scale, and you’re reviewing pinots from all over the world, wouldn’t you want a 95 to always be better than a 92?
Trick question? This ground has been covered almost since professional wine scores began. Scores typically apply to the specific region being tasted, relative to the best wines from that region… allegedly.

> And there is simply no way that there are tons of great and world-class Pinots (95 points?) in CA for $50 if there are hardly any in Burgundy for less than $200… if that were the case, no one would bother chasing good/great Burgundy.
Points are relative to each region. Supposedly. Maybe. Sometimes.

> Again, I am by no means an expert, but I’ve tasted my thousand or so Pinots. I’ve never been afforded the chance to taste much Burgundy in the $150-and-up category, but in the $50-150 range it often seems as compelling per dollar as its CA counterparts, or more so.
Maybe you’ve just learned to appreciate Burgundy more?

> Tell me why I’m an idiot.
No can do. You seem to be expecting scores to be valid between regions rather than solely within them.

> Thanks in advance!
De rien.

RT

2 Likes

This was sort of brought up on another site…

Weather is a big factor, and so is small-scale production (parcel size). You bottle that damn grand cru no matter what!

I agree, that just seems to be the culture of scoring Burgundy, or at least red Burgundy. Maybe it’s because red Burg tends to be so reserved in its youth that it doesn’t give the wow factor to critics who are blind-tasting new releases. But I agree it feels like there is more to it than that.

The flip side is sweet wines. Critics always seem to give those really high scores. A solid, middle-of-the-road $20-ish Kabinett starts at 91 points and goes up from there. Port, Sauternes, Tokaji, ice wine: it all gets right up into the mid-90s. I guess that would somewhat track with my Burg theory, in that sweet wines make a big impression, including when they’re new releases.

2 Likes

My impression is that the scales are just not the same between regions. In other words, a score of 95 for a Sonoma Pinot doesn’t equal a 95 for a Red Burgundy. Let alone comparison of different varieties, colors of grape, sparkling to still, etc. Or any other comparison you might make. I’m not going to judge whether one set of drinkers has different experience/skill/etc., than any other. They’re just different.

For each region/variety/type (i.e., still vs. sparkling vs. sweet), a set of scores needs its own interpretation. To really be useful, you need some grasp of the kind of wine and of the critic’s palate, range, and history. And even then, for some critics the scores are inconsistent or even nonsensical!

I find scores to be useful. I use them myself. I’ve thought a lot about what my own scores mean (for me, as a tool for remembering my own wine experiences) and have written down a fairly detailed summary of my wine scoring methodology so I can try to be consistent.

The problem that many of us have with scores is either inconsistency or the blind use of scores to make decisions without context for them. In those cases, I just don’t use them (e.g., looking at wines in a store with scores from a critic I’ve never heard of).

Back to the exact original question: maybe it depends on the critic. Take a look at Allen Meadows (Burghound). His CA and OR Pinot scores seem to track way lower than his Burgundy scores. I think he’s using a relatively consistent scale for his palate, or at least trying to. But not many reviewers spread their attention across so many regions.

If only the reviewers tasted blind, the problem would be solved.

Much to their surprise.

1 Like

Just did a tasting of eight Pinots: four Burgs and four CA. All were 2015, which was supposed to be a good year for Burgundy. The Burgs were all premier cru and all rated 92-95 by WA or BH. They also cost, on average, twice what the CA wines did. I alternated odd and even, and all were served blind. Out of 18 people, only two liked the Burgs better, and one of those was pregnant and could only smell. Maybe global warming will help Burgundy.

I hate this kind of excuse-making, but Burgundy famously goes through a closed period starting a few years after release. Maybe your event suffered from that. Or small sample size. Or bad luck. Or inconsistent stylistic choices. Yep, four excuses!

But: just this past week I tasted (at La Paulée in NYC) dozens of 2017s, many 2011/12/13/14s, and quite a few 2001/02/05s and older. That middle group showed the least well: lots of fairly stiff, closed wines. The 2017s were excellent, and the older group positively yummy. Closed period, anyone?

Because California Pinot Noirs are in your face with ripe fruit, and young Burgundy can be pretty tight and unyielding those first few years after bottling. Entirely different wines.

3 Likes

Because Burgundy is a strict hierarchy, and when reviewing a large line-up of wines, the critics have to leave themselves enough runway to give big scores to the bigguns. So, like clockwork, the scores step up from Bourgogne to Village to 1er Cru to Grand Cru, ending with the high numbers. If each producer made just a couple of wines, they’d all get high scores.

It’s not because the wines are tough or tannic or closed or whatever. That doesn’t stop the top Piedmont or Bordeaux wines from getting super-high scores on release; it’s because each of those producers makes only one or a couple of wines.

3 Likes

Maybe it’s not the Burgundy that needs help?

1 Like
  1. God, I hope not, about the global warming aspect…

  2. 2015 Burgs need 10+ more years. Comparing them to 2015 Cali today is a good academic study, but not best for enjoyment.

1 Like

This makes sense. It’s what Parker fought in Bordeaux, at least initially, but it remained the case in Burgundy. And as someone else said, if people tasted blind, there would probably be different scores. I’ve tasted enough Burgundy with people who have tried to convert me to know that people are very willing to make excuses for Burgundy, and that they’re willing to score it on potential rather than on what’s in the glass. “It’s not quite there yet, but you can tell it has the structure and the stuffing to be great. I’ll give it a 92 for what it will be at some point later in its evolution.”

The irony is that later, it’s “Wow. You can tell this was a fantastic wine. Would have been absolutely magnificent a few years ago!”

It’s why a friend always jokes that the drinking windows for expensive Burgundy are measured in minutes.

And then, as mentioned, a lot of times the wines are scored by different reviewers.

4 Likes

Because of Romanée-Conti, La Tâche, Musigny, and such. If those are 100-point wines, or something close, then a really, really good MSD Clos Sorbes, as wonderful as it is, probably isn’t. This is one reason why I have stopped rating wines.

As for why wine writers overrate domestic Pinots, read this board. The lovers of these wines are snowflakes who go ballistic when any of their favorites don’t get huge scores. It’s easier for wine writers to humor them.

1 Like

This is why Burg scores get depressed. The hierarchy is so ingrained that it permeates all Burg scoring and tasting notes. Burgundy is unavoidably hyper-contextual. No reviewer is going to rate Mugnier’s Chambolle AOC 98 points, no matter how good it is (and at its best it’s absolutely better than any CA Pinot I’ve ever had), because then how are you going to deal with Fuées, Amoureuses, and Musigny? Most of the famous Burg reviewers tend to be very hierarchical. By contrast, wines like Scarecrow, or even something like Yquem, are more singular and context-free.

3 Likes