I guess you and I have different opinions regarding what kind of access this “general group of consumers” has to the kind of wineries being discussed in this thread. Likewise, it seems you and I have different opinions regarding what kind of knowledge this “general group of consumers” has of the kind of wineries being discussed in this thread. That’s cool.
Yes. What you’re missing is that despite the definition you cited being entirely accurate and your logic just fine, score driven buyers sneer at anything less than 90 points. Silly, but… what can ya do?
Hell, I just posted a TN on a little Nero d’Avola that is in the Good to Very Good range… but if I described it as an 85 point wine people would think it’s undrinkable swill. Have I mentioned I hate scores?
Incidentally, it just struck me that this thread really is also a perfect illustration of why I can’t take CT (or any crowd-sourced) scores seriously. Look, we have two pro critics here. RP and AG. They know each other, have tasted together. And yet their scores vary by a significant amount on some wines and we don’t really know how to map one of their scores to the other. How am I supposed to judge what the score of some anonymous taster is all about?
The definition is published. It is not accurate. Statistically, 87 points is an average or slightly below-average score. Argue why that may be all you want, but that’s just a simple fact. An 87-point score fails to differentiate a wine, and so will not help you to sell it.
That remains true whether or not you think it OUGHT to be true. It is what it is.
You realize you agreed with me on sales effect, yes? On the rest…who are YOU to redefine a critic’s published scoring definitions? Oh right… your action in fact supported the rest of my post, that people refuse to use the scale as defined and instead make up their own meanings. “Statistically”? Um… that has no meaning here. The sample is biased: it includes only a) published scores and b) scores above 85.
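That sample-bias point is easy to see with a toy calculation. This is a hypothetical illustration only, with made-up scores (not real data from any critic): if a reviewer tastes across a wide range but only publishes scores of 85 and up, the average of the *published* scores sits well above the average of everything actually tasted.

```python
# Minimal sketch of the censoring effect described above.
# The score list is invented purely for illustration.

def mean(xs):
    return sum(xs) / len(xs)

# Everything the hypothetical critic tasted: a spread from the
# high 70s to the mid 90s, averaging in the mid 80s.
all_tasted = [78, 80, 82, 84, 85, 86, 87, 88, 90, 92, 94, 96]

# The censoring step: only scores of 85+ get published.
published = [s for s in all_tasted if s >= 85]

print(round(mean(all_tasted), 1))  # average of everything tasted
print(round(mean(published), 1))   # average of what readers actually see
```

The published average lands a few points above the true tasting average, so an 87 that *reads* as "near the middle of published scores" can still be an above-average wine in the full tasting pool. Neither side of the argument is wrong about the numbers; they are just averaging over different samples.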
If your following is happy, there shouldn’t be too much of a problem. Back at Patton Valley I managed to place the wines I’d made at Gary Danko, Alain Ducasse, David Bouley and Daniel Boulud all without good scores. Of course, it took years to get into the IPNC, but that’s another story…
I used to think a good score was over 90, but I’ve learned that anything under 92 points now seems to mean failure…
Bill Hatcher had a great line, “Only God can tell the difference between 89 and 90 points”.
Wouldn’t tourists be exposed to these wines at the top restaurants in Napa? Surely The French Laundry, La Toque and other such joints get all the good stuff, no?
This is probably the #1 reason I do not score wine. Scoring some wines in the 80’s would be a compliment coming from me. But in my early days on eBob I saw someone ask why a person “trashed” a wine by giving it an 89. At that point I realized this system had been corrupted in people’s minds too far. Plus, all that math made part of my brain twitch.
“How am I supposed to judge what the score of some anonymous taster is all about?”
It’s easy.
< 87 - Not even consumable for the buzz
87-89 - Choked it down but a waste of money
90-93 - Pretty good. Every bottle this individual drinks better be this good.
94-97 - “OMFG!!! This was amazing and changed my life or I was really buzzed and happy when I saw this famous label”
98-100 - (Do not use. No confidence in calling wines of this nature. Leave it to the critics.)
I guess before you get a hangover, you have to have a party. Even with Mouton and Margaux allegedly NOT receiving 100, there are still 18(?) wines worthy of the third digit in 2009 BDX.
Actually, I didn’t. I just pointed out that 87 is, in fact, a near-average published score. The critic can SAY they define it however they like, but actions speak louder than words. If wine reviewers never publish the lousy scores, then the average will continue to creep up, regardless of what critics say they are doing. If I understand it correctly, range creep is at the heart of your entire complaint. Don’t blame me for that. The only scores I publish are on CellarTracker, and I actually do post the lousy ones.
Up to this point, the only dispute I really had was your characterization of buyer behavior as “silly”. In fact, that behavior is an entirely predictable consequence of the existing, actual score structure.
So you are saying that you think an IWC review with a published score of 87 actually does describe a wine that the publication intends to regard as “Very Good to Excellent” (i.e. significantly better than average)? And, therefore, that an IWC rating of 77 really is intended to describe an “average” wine, per their range definitions? If so, then why do you agree with me on the sales effect? Shouldn’t an actual “Very Good to Excellent” rating differentiate a wine from the average and boost sales? How else do you account for your agreement that it does not?
Even WA redefined their score verbiage so that 80-89 now broadly describes wines that are “Barely above average to very good” although they never actually define “average” and their actual published average is well above 79. It seems to me that a buyer - to the extent they pay attention to a score range at all - should pay attention to the ACTUAL range of the scores, not whatever marketing drivel the publications use to describe their scores.
My opinion is there has been a higher portion of this new super-ripe flashy style than there is a true market for. The market has been realizing and reacting to this (after a significant lag factor) and that is why Parker wisely chose a much better critic to replace him. There are true masters of this riper style and there are pretenders who superficially mimic that style. Parker couldn’t tell the difference, Galloni can.
I believe the term “average wine” applies to something sort of middle-of-the-road in the entire universe of wines, which of course includes Gallo, 2 Buck Chuck, Yellow Tail, and lots of similar wines not of interest to wine geeks or journal subscribers. It has to be that way, if you think about it. According to TWA, last time I looked, mid-80’s is good, upper 80’s is excellent, and outstanding wine starts at 90. But average versus the entire universe is a small accomplishment. Personally, I find lots of interesting pleasurable wines rated in the upper 80’s (rated there by critics and/or rated there by me). I think it is wrong to say 87 means a bad wine.
OTOH, I do agree that 87 is not what one hopes for in an expensive bottle, and this 87-point cab over $100 per bottle will feel the impact if that is where their scores tend to settle.
so a 100 from Galloni does mean something? so confused. who is going to allow me to make money now on CA? Laube? sure i like to drink but i also like to make money from time to time.
I’m reading from the beginning of the thread, so forgive me if this is covered later, but there have certainly been wineries that have been “darlings” without having great scores, and notwithstanding so-so scores. Kosta Browne comes to mind, no?
Kosta Browne became a ‘darling’ after J Laube and WS gave them a bunch of scores in the upper 90’s for their 2005 vintage wines. No one really cared about them prior to that.
Fair enough, I didn’t follow the WS at the time, but I do remember that over on eBob there were many avid Parker followers who diligently purchased full allocations of KB notwithstanding Parker’s dismissal of the wines.
Yes. It is more difficult for a critic to tear down a cult wine than it is for them to build one up. Many people get swept up in the hype due to their friends or the general buzz around something without really being followers of a critic or wine boards. All famous brands start somewhere.