Question - if you were a critic designing a system for rating wines, what would it look like?

Given the uproar over at the WA because there seem to be several accepted methods of reviewing, and with the multi-page Sierra Carche thread going, I’m curious how many people prefer blind vs. non-blind tasting.

I recognize the value of going to a winery, walking the fields, and talking to the importers. Even if I were not in the business I would like to do that because, well, it’s interesting. I did it for years anyhow. And it does in fact give you some context. I understand that the grapes have thick skins because the winds are blowing and the sun is hot, etc.

But I also think fondly on the people who showed me their hospitality and the eagerness with which they poured their wines. And when I take the wine home and drink it, I remember that.

So were I a professional critic, my own preference would be for blind tasting and for that reason I tend to trust the critics who do that. I recognize that we do not do this with film or books, but I think if someone slipped a different DVD into my machine I’d probably know. On the other hand, if they slip me a different wine, what do I know?

For that reason I have always thought that the method Parker outlined years ago was a great method. In other words, go to the wineries, talk to the people who make the wine, talk to the people who select and import it, learn all you can, but when it comes time to rate the bottles, taste them blind in peer groups. You remove all temptation and you can fairly say that wine C is better than wine D, even though D is a first growth and C might be a fifth growth. That was his great contribution - fairness.

So maybe you have an assistant bag the wines and then you can taste in flights of ten or twenty or thirty or whatever. You can take your time and watch them develop in the glass, but score and discuss them before unveiling. That seems to be the way that Wine Spectator does it and it seems fair.

You can accept shipments from wineries and importers. But you can do a randomized test to help eliminate fraud.

Someone posted on another thread, I think it was Nathan, that it’s impossible to buy all the wines at retail for blind tasting because a critic like Jay Miller tastes five to ten thousand wines a year. But you don’t need to buy them all.

Take the 10,000 figure and assume a 40-week work year. That means you only need to taste 250 wines a week for those 40 weeks. It’s most certainly possible to taste 25 in the morning and 25 in the evening. I’ve done more than that. And if the figure is closer to 5,000, so much the better.

And let’s say you want to be pretty sure you’ve got the real stuff: say a 95% confidence level with a margin of error of 5% either way. You only need to randomly sample about 370 wines, or in my scenario, buy them at retail. At an average cost of $100, that’s only about $37,000 a year needed for wine.
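The sample-size arithmetic above can be checked with the standard formula for estimating a proportion (z = 1.96 for 95% confidence, worst-case p = 0.5, 5% margin of error), plus a finite-population correction for the 10,000-wine pool. This is just a back-of-the-envelope sketch; the parameter choices are the conventional defaults, not anything prescribed in the thread.

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size for a proportion at the given confidence (z) and margin of error."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2        # infinite-population sample size
    # finite-population correction shrinks n0 when the population is small
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n = sample_size(10_000)
print(n, n * 100)  # ~370 wines, ~$37,000 at $100 a bottle
```

The same numbers come out of the calculator linked later in the thread.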

Seems do-able.

I’ve actually been surprised that many people would eschew this kind of system but I guess it’s just a matter of what an individual finds more important in a critic. Anyway, if you were a critic designing a system for rating wines, what would it look like?

If you want a better confidence level the numbers change. If you don’t want to figure it out yourself then go here:
http://www.surveysystem.com/sscalc.htm

Wines shouldn’t be tasted in “peer groups,” they should be drunk with dinner.

I like drinking wine with my peers though.

This is more my speed:
Seriously, I think you should drink the wines blind with your peers. No stories on the estate or anything else, purely the wine. Write your own thoughts, and then share them with the group.

Though I was reluctant about this initially, I think in a perfect world you release your scores and notes AFTER the wine has been released for sale. This may not work as a business model, because your potential customers may flock to critics who already have their scores out. But someone like RMP (heavyweight, market maker, etc.) might be able to pull it off. Of course this would alienate him from almost all of his French friends.

Also, I agree with only posting notes on the higher rated wines. However, I would post a list of wines tasted, but not rated.

Thinking about it, you probably only need a 10-15 point scale. Anything below a 1 is not worth drinking for YOUR palate (that doesn’t make it bad, just not worth commenting on at the highest level).

Finally, I would give a current score and a potential score. Many BDX wines might start at a 3 or 4 but with a potential of 8-10. When you give a high score to a massively tannic, difficult-to-drink wine, what does it mean to the consumer? It sells well at the expense-account steak house, but how many people actually enjoy it at that moment?

Chris

Keith

Tasting with dinner is just foolish. No wines would ever get reviewed.

Greg

Interesting analysis, but 250 wines per week is a lot, any way you slice it. Visiting wineries is important. Tasting and analyzing wine is more important. But what about logistics: travel to your office, meals, socializing, family life, etc.?

Being a wine critic is still a job, any way you slice it. Most critics I know today do a decent job of it. They work very hard, and I do not believe their methods need to be changed. No need to overhaul a system for a few bad apples.

As many others have said, this is all about trust and, where necessary, full disclosure. If a consumer trusts a critic and the job they do, then all is well in the world. To earn that trust, a critic ought to disclose their practices and principles and actually follow through on them. If not, questions arise.

I don’t disagree. I was curious what sort of reviews people would trust. Doing what you say you do is the first step in gaining trust, whatever your method might be. But I do think there’s a specific value in blind tasting, namely objectivity, that you miss otherwise.

I think I would run several tests to see what caused larger errors (variations in score from test to test of the same wine), and whichever process gave me the smallest errors, that would be the way to go.

Of course, I think I’d stay away from numbers altogether, but the tasting process should still be tested: blind, non-blind, at your house, at a restaurant, etc. You can’t really know unless you test the different possibilities.
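The test described above could be sketched very simply: re-score the same wine several times under each tasting condition and prefer the condition with the smallest test-to-test spread. The numbers below are invented purely for illustration.

```python
from statistics import stdev

# Repeated scores for the SAME wine under each tasting condition.
# All figures here are made up; a real test would use many wines and tasters.
trials = {
    "blind":      [88, 87, 89, 88],
    "non-blind":  [86, 91, 94, 88],
    "restaurant": [90, 85, 92, 87],
}

# Standard deviation of repeat scores = the "error" the poster describes
spread = {condition: stdev(scores) for condition, scores in trials.items()}

# The condition whose scores are most repeatable wins
best = min(spread, key=spread.get)
print(best)  # blind
```

With these (made-up) numbers the blind condition comes out most consistent, but the point is the procedure, not the result.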

If I were a critic I would strive to move away from “rating” wines. I think it has created all kinds of problems in the wine world. Rather than rate them, I would instead attempt to provide guidance by sorting out producers and their styles, and categorizing wines generally along the lines of what they are best suited for (long-term cellaring, near-term quaffing, food, sipping, serious or casual enjoyment). I would cite preferences without rankings, and I might offer certain cautions with regard to questionable practices by producers (heavy use of chemicals, etc.). I probably would not be able to succeed as a critic in today’s world because the trade MUST have scores, perceiving them to be an indispensable sales tool.


But putting a score on is just so much quicker.

The biggest problem with the 100-pt system is that RP took a methodology that works for Bdx and attempted to extend it by putting up some bullshit metric of what the points mean.

The breakdown of colour, nose, ageability - all that makes sense in Bordeaux and there is historical sense to it. Even if we haven’t had it, a wine drinker of some experience can visualize what a 100pt Cheval Blanc is like. And the way Bdx is crafted, it even makes sense to have outliers like '90 Beausejour. It all works relatively well.

The first problem was extending it to something like Burgundy. There, wines are measured by their fidelity to type (terroir) and the historical tendencies of that terroir. It is really hard to visualize La Tache unless you’ve had La Tache. The fixed metrics of color, for example, just don’t apply. The scale has to be altered in the way that Allen or Claude have altered it, but that undermines the WA’s veneer of objectivity.

The biggest problem now is that you have Miller and RP, who want to praise a wine based on their emotional reaction to it - nothing wrong with that - but are hoist by the breakdown of the 100-pt scale RP created. There is no good reason to believe, nor is it automatically desirable, that these highly pleasurable (to them) wines will age and develop. But the metric they’ve set requires that component, so we get bullshit ratings and bullshit aging windows.

A.

My only comment on this is that I think the wine rating system should be changed to reflect both the pleasure given by the wine and its technical score.

The current WA 100-pt system is, in my opinion, a sham. A wine should have two scores: one for sheer pleasure and the other a technical assessment of the wine. Example A is the Parker rating system, which according to his website uses the following formula:

  • Base score: 50 pts
  • Color and appearance: 5 pts
  • Aroma and bouquet: 15 pts
  • Flavor: 20 pts
  • Potential: 10 pts
  • Maximum score: 100 pts

Now I find this odd because, to use but one of thousands of examples out there, Parker and many other critics have said that the ’47 Cheval Blanc is technically a disastrous wine, yet it is always awarded 100 pts. So clearly he and other critics don’t even follow their own guidelines for rating wines. Hence the need, I think, for a scoring system that more accurately and flexibly reflects what is really in the bottle. One possible solution, as mentioned, is to have two scores. You could do this with a 100-pt pleasure score followed by a technical breakout, or use the 35-pt flavor/aroma scale followed by a 100-pt technical score.
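The breakdown above is just a capped sum over a 50-point base, which a short sketch makes concrete. The function name and the clamping checks are my own; only the component names and maxima come from the post.

```python
def wa_score(color, aroma, flavor, potential):
    """Combine WA component scores (each capped at its stated maximum) with the 50-pt base."""
    components = {
        "color and appearance": (color, 5),
        "aroma and bouquet":    (aroma, 15),
        "flavor":               (flavor, 20),
        "potential":            (potential, 10),
    }
    total = 50  # base score
    for name, (value, cap) in components.items():
        if not 0 <= value <= cap:
            raise ValueError(f"{name} must be between 0 and {cap}")
        total += value
    return total

print(wa_score(5, 15, 20, 10))  # 100: every component maxed
print(wa_score(0, 15, 20, 0))   # 85: flavor/aroma maxed, nothing for color/potential
```

The second call reproduces the 85-point scenario described just below.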

In this scenario a wine could get 85 points technically (flavor and aroma maxed, zero points for color/potential) yet still score 100 pts for sheer pleasure.

To add yet another dimension, a wine should probably be categorized as either a:

  • cocktail wine - a wine to be consumed and enjoyed by itself, like a hard liquor drink
  • food wine - a wine best served and enjoyed with food

So a score might look like this:
P-96 T-85 S-C
where P=Pleasure, T=Technical and S=Setting (e.g. C for cocktail)
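The proposed sound bite is easy to generate mechanically. A minimal sketch, assuming the P/T/S format above (the class and field names are my invention):

```python
from dataclasses import dataclass

@dataclass
class WineScore:
    pleasure: int   # 100-pt hedonic score
    technical: int  # 100-pt technical score
    setting: str    # "C" = cocktail wine, "F" = food wine

    def soundbite(self) -> str:
        """Render the compact P/T/S marketing string described in the post."""
        return f"P-{self.pleasure} T-{self.technical} S-{self.setting}"

print(WineScore(96, 85, "C").soundbite())  # P-96 T-85 S-C
```

The point of a fixed, terse format like this is exactly the one made below: it survives being quoted on a shelf talker without any further explanation.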

Much of this could be conveyed in a tasting note, but advertising and marketing are about sound bites, so you need a scoring system that the average layperson could tell is a match for them without needing a lot of input.

This is just one idea. If I really thought about it, it would not be hard to devise a vastly superior rating system to what currently exists. The current ones just do not take into consideration the huge range of wines and styles in today’s market in a way that makes sense for every consumer and for marketing purposes.

Agreed

Gene -

How can one possibly apply a numerical score to rate the pleasure given by a wine? If you LOVE burgundy and I am not a fan - isn’t that particular wine going to provide you with more pleasure than I would get out of it?

Good idea, but a bad idea for marketing: I don’t see it as scalable. Most wine consumed is not consumed by geeks like you or me.

Obviously one has to assume that you love Burgundy; otherwise no simple wine rating system will work, which is what the market demands and why the Parker 100-pt scale was so successful. But like I said, this was an off-the-top-of-my-head idea and not the one I would probably settle on.

Ratings may be good for marketing and lend themselves to being scalable. The irony is that they are massively flawed and don’t actually deliver what they pretend to deliver. It’s remarkable: we stick to a flawed system simply because it SEEMS like it works, and I suppose it does if you are a believer.

If I were Parker, I would ask each winery that got a good score for a voucher good for a bottle of the same wine at retail (the critic’s choice of retailer). Then he could re-taste a bunch without going broke or passing the cost along to the subscriber. If the winery or distributor refuses, the review doesn’t get published. I believe the fear of being openly discredited would prevent distributors from filling their bottles with Pegau or Le Pin for the tastings.

So we’d call the first one the pleasure score and the second one the bullshit score…