CA Pinot vs Burgundy — why are critics kinder to CA?

Parker was the only critic I am aware of who explicitly said his ratings were contextual to a region or type of wine. When Tanzer was asked why all his New Zealand pinot scores in one edition were 87-89 points, he responded that sometimes a region only offers 87-89 point wines, implicitly rejecting any regional context for the scores. In his explanation of scoring at Vinous, Galloni never mentions regional context. Allen Meadows seemingly rejects regional context, at least based on his reviews of California pinots. I don’t know if John Gilman adjusts for region either.

Jancis Robinson has made that point when scoring wines.

Trying to compare the two is pretty tenuous anyway.

A high school friend’s mother used to be the movie critic for a national newspaper, and I vividly remember her coming to our school for a presentation. She mentioned that it wasn’t all glamorous and that she was going to see Joe Dirt later that evening. When asked a question about how she approaches reviewing a movie like that, she said that it wasn’t fair to compare it in a vacuum to, say, Citizen Kane. She judged Joe Dirt based on whether it was a successful example of a comedy and fulfilled its purpose as such (for the record, it is a terrible movie by any standard and in no way am I calling California pinot the Joe Dirt to Burgundy’s Citizen Kane).

I often have to remind myself when tasting Oregon/California/Ontario pinot to judge it according to its own merits and whether it succeeds as a wine from its place (or a wine at all). I think doing otherwise does a disservice to both the wine and yourself.

Wine scoring is a deeply bizarre and flawed method for examining a wine’s merits as many people have demonstrated in this thread and elsewhere. But if the success of a California pinot noir is based less on the firmly established Burgundian hallmarks of quality - complexity, ability to develop - and more on providing easier-drinking pleasure, it would stand to reason that scores would adjust accordingly.

I’ve got thoughts here:
1/the price of Burgundy is so high a critic has to be darned sure when a $90 village wine gets more than 92
2/think of Burgundy as Pauillac or St Estephe and California Pinot as Pomerol…the softer wine is easier to understand and drink. It is so easy to be wrong about Burgundy.
3/the classification system does throw a monkey wrench into our minds…if DRC etc are 100s then how can Chorey rate more than 91??

Life’s a garden, dig it.


I thought about this for a long time before crossing over from wine merchant to critic, knowing that I would need to use the 100 point scale but wanting to find a way that successful wines from lesser appellations in Burgundy could still have a chance to shine. My solution is to score out of 100 but to offer stars out of 5 alongside: so a Bourgogne Rouge at 89 points is ***** while a Musigny at 89 points is *.

It also means that when tasting a range of pinot noirs from a fledgling region (Hokkaido, perhaps) or a less well known part of California or New Zealand - just to pick a few random examples - scores in the 80s can be given, if supported with the reward of a decent number of stars. Of course it does depend on having expectations of what the baseline quality of an appellation or region should be.
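For what it’s worth, the two-scale idea can be sketched mechanically. The thresholds and baseline numbers below are purely invented for illustration (the actual star assignments are the critic’s judgment, not a formula), but they show how the same absolute score can earn very different stars depending on the appellation’s expected level:

```python
def stars(score: int, baseline: int) -> int:
    """Map an absolute 100-point score to a 1-5 star contextual rating.

    `baseline` is a hypothetical expected score for a typical wine of
    the appellation; stars reward how far a wine exceeds (or falls
    short of) that expectation. All thresholds are illustrative only.
    """
    delta = score - baseline
    if delta >= 6:
        return 5
    if delta >= 3:
        return 4
    if delta >= 0:
        return 3
    if delta >= -3:
        return 2
    return 1

# An 89-point Bourgogne Rouge (assumed baseline ~83) over-delivers,
# while 89 points for Musigny (assumed baseline ~94) disappoints.
print(stars(89, baseline=83))  # -> 5
print(stars(89, baseline=94))  # -> 1
```

The point of the sketch is only that the star scale carries the regional context the 100-point number deliberately omits.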

Claude Kolm does something similar in his newsletter.


Maybe I slightly muddled my meaning with the wording in that paragraph. I quite like wines from certain California producers in Anderson Valley, Sonoma Coast, and Sta Rita Hills. And all I meant by that is many of the CA wines I like (what prompted this was an Anthill Farms review) are damned good to me, but aren’t BETTER to me than similarly priced Burgundy that I’ve had.

And I really do not like some CA pinots…especially in warmer years.

Thanks to everyone for the comments. I think the “headroom” explanation rings true as certain grand crus SHOULD be rated higher than everything else…

Here is the new Wine Spectator. Maybe we had the premise backwards?

A good bottle of CA pinot is always a (pleasant) surprise.

I find this an excellent idea, and it works very well.
It’s a great way to express enthusiasm about the many wonderful passetoutgrains, bourgogne rouge or blanc and the like, without giving them scores that are reserved for the DRCs of this world.

How would you go about rating more elegant US Pinots like Joseph Swan, Arcadian or Littorai against the bigger-is-better type of high-octane California Pinot? Would you rate a 16% pinot (after being watered back) that does what it wants to do as highly as a more elegant one on your easier-drinking-pleasure standard?

I gave up giving wines points years ago because I could not figure out how to rate German Kabinetts vs. German Auslesen (and these are both wines I love) but if I did rate wines with points I would have a hard time rating based on something other than what I like better. But, then again, nobody ever pays to see my ratings and even when I did rate wines the ratings were mostly for my own future reference.

That’s what I was thinking. You would think a wine critic would be skilled at knowing where a wine will be once it reaches the right age, but still…tasting a new-release CA Pinot tends to be a more pleasant experience than a new-release Burg.

Speaking in generalities, of course.

It’s not just an issue of comparing two stereotypical styles of one grape. We use the same ratings system for all grapes, all styles of wine. So, maybe an 87-point rosé is an excellent choice for drinking in a certain context. Taken literally, the point system actually says so, too.

I have no issue applying the point system universally and literally. I have no problem strongly disagreeing with a critic and seeing a wine as superficial over-ripe oaky garbage. I have no problem panning joyless dirt water. I have no problem with people loving wines I find mediocre or repulsive.

If you think a particular critic is over-rating wines, that’s a reflection of the critic, not the point system. It simply means that critic is useless to you.

One issue touched on is rating age-worthy wines vs ready-to-go stuff. That’s been discussed many times. One certainly shouldn’t rate something that needs age to show well on how it shows at the moment - not without being clear what you’re doing. One doesn’t need to give a wine a score, or a precise score.

The average price of the Cote de Nuits wines in this photo is over $800. Seems questionably representative.

I really like this idea, and I’m glad a critic I respect weighed in on this topic! This is a good idea because it lets you rate the wine objectively AND within its context. Because let’s be honest: there just ARE some regions that don’t produce 97-point wines. That idea could get hyper-granular if you let it (“well, it’s a perfect example of Hauts-Cotes!”). It also saves us from having to remember, “OK, well this is Jancis, so I have to remember that this 97 is only comparing it to Argentinian Cabs…” I’d love to see Total put a qualifier like that on the shelf talker.

I think the idea is that you don’t! You do your best to ask whether it’s a successful representation of the place it comes from and the overall harmony and balance of the wine.

Sounds to me like you’re contradicting yourself. How about comparing an Arcadian Garys’ to a bigger-is-better producer’s Garys’? You can’t compare those? Could they both be equally successful representations of the place? Or is one better, now that one’s eyes are opened and a better benchmark is set? Is the one that’s representative of the earliest CA Pinot producers less valid than the one of a fad style that ingrained a skewed view of “California style” to a generation of wine drinkers?

I think that they COULD be equally successful representations of the place. Whether they are is the critic’s job to suss out. Sure, I absolutely trend towards more elegant, restrained styles of wine, but I’m not going to pretend that preference is objective. I don’t think critics should even BE objective, per se. But if you’re going to insist on numerical scores, especially wine critics’ scores, which are for all intents and purposes an 11-to-12-point range (and which I think are a really terrible way of talking about food/art/music), you’re subscribing to the idea of objectivity anyway.

One could argue that the style you refer to as a fad IS the accepted/established California style of winemaking. Whether you think that’s a good thing or not is up to you.

[scratch.gif] What if one of them is not a representation of the place and really does not try to be?

I think this is where “objective” wine rating gets silly and too PC.

Tough to get a straight answer on this issue. I tried once, back in 2014 on Vinous website with a question to Steve Tanzer. You be the judge whether I was successful.

Steve, is a 95 pt IWC Washington Cabernet comparable in quality to a 95 pt IWC Napa Valley Cabernet–in both cases, a wine rated by yourself? Or is each wine rated within its own category?
December 2014

Does anyone have any thoughts on this? For example, a 2009 Colgin Cabernet (WA 95/IWC 95) versus a 2009 Corliss Cabernet (WA 95/IWC 95). $300 for the former, $80 for the latter. Are they comparable in quality? If so, why shell out an extra $250?

December 2014
Stephen Tanzer, December 2014
Gary: That’s the $64 question (or $250). Always a matter of supply and demand, track record for longevity, collectibility and price appreciation in the secondary market, etc. But consumers who buy American cabernets to drink rather than for investment purposes are certainly discovering the best wines from Washington. And if you like a somewhat more restrained style of wine than 15+% Napa cabs, you DO need to check out wines like Corliss.

No, the notion that critics use a narrow range is a perception bias. 1) They rarely publish lower ratings. 2) They tend not to review plonk, and plonk producers tend not to submit.

A market niche was found, but we’re also seeing a correction. We’re seeing a lot more better, less ripe Pinots from producers “dialing it in”. Better oak regimes. Better yeast understanding. But, then, the market has been propping up mediocrity. Growers are spoiled. Most of the sites aren’t all that great, which is why there are so many superficial makeup-on-a-pig Pinots. Isn’t that sort of the premise of this thread? Disagreement on the ratings of a lot of CA PNs? But that appears to be just a handful of critics. Perhaps those aren’t the critics to look to. Perhaps there are actually a large number of producers who never had an incentive to make goofy wines for critic scores and market acceptance in the first place. Perhaps a lot of that comes down to regional marketing. Perhaps the democratization of wine criticism, market saturation by anonymous sameness producers (where a 92 score is of zero help), and consumer awareness, often led by past disappointment, seeking out the truly better and making differentiation a viable business model, are all contributing to more terroir-driven, world-class Pinots being made.

Also note how many European producers have been playing with “New World style” winemaking over the last couple decades. This stuff is a choice, often a crutch, often critic pandering. There are plenty of great Pinot sites in CA, even if most of what’s planted aren’t on great sites. How much of France is Burgundy? How much of Burgundy, eve, is great? So, ignore the also-rans, the way you would a Languedoc Pinot. If some hack critic started heaping praise on some sickly sweet, oaky, awkward monstrosity Vin de Pays Pinots, would you call them “French style”. Imagine those being 90% of what’s on the shelves, with often higher prices and usually higher ratings than the Burgs they shared the shelf space with. “French style”.