So, what do any of us know? Maybe more truly blind tastings are in order:
https://www.foodstuffsa.co.za/12578/wine-tasting-its-junk-science/
It's not science at all. Like any other form of evaluation, it's an expression of taste conditioned by knowledge and experience. Get used to it and it won't bother you.
I think some of this falls under the category of obvious, and some is probably highly debatable. The types of people invited to "judge" regional wine contests with medals are not always the best tasters. Or maybe they are good tasters, but don't have much experience tasting 100 wines in succession. I consider myself a decent taster and judge of quality, but there's NO WAY I'd be a good taster (or produce reputable results) on my 75th sample over a period of 2 hours.
Yes. It seems like a huge upgrade, or more attention than it deserves, to call it "science" at all.
I am heavily into both math and science. The moment that wine tasting becomes science I will sell my cellar and drink nothing but water.
Of course, wine writing is a science. Just ask John Morris.
100 wines in a row, and they blame the taster?
I wonder how that would work at a perfume "competition."
The better question would be how "scalable" a tasting/judging should be. (I think I am pretty good up to only 6!)
This also explains why some over-the-top wines keep getting good results… they find a way to stand out from the rest of the flight, but I don't consider that a measure of "quality."
People who don't drink wine or who only drink cheap stuff like to believe that there's no difference between plonk and expensive or prestigious wines, and that wine geeks are just engaged in blowhardism and phoniness.
So there will always be articles and "experiments" and so forth to feed that appetite. I'm fully used to it and it doesn't bother me any.
This is sage. I find a lot of agreement in your comments, Anton.
The number you write is 6, and you know, I've become pretty attuned these past few years that I too have a number. It's maybe 10. I do believe that if there is food, time, and a chance to rest my senses, maybe the number is 12-15. Past that number, I don't find I can provide the right value in sharing with others how I perceived the wines.
That 100-wines-in-a-row thing is hyperbolic. With the competitions I'm somewhat familiar with, panels sit with a reasonable flight and spend time with it. Then have a break before the next flight. I haven't had any admit they tasted anything close to 100 wines in a day.
That said, the first point is spot on. How a given wine does in these competitions is like rolling dice. The judges will bitch up a storm if you tell them that, but the wineries all know it's a fact because they see the f'ing results of the competitions they enter. It's just well established that if you want some gold medals to brag about, you enter several competitions.
One judge who got bent out of shape over my social media response to the results of a secondary wine competition provided me a list of the judges to… help prove my point? There, the better results went to wines that are ready to go on release, even some mediocre ones. The most classic, age-worthy, age-needing wines were down at the bottom. The vast majority of judges were buyers for restaurants and shops. So fine, maybe they're good at assessing how much non-geeks will like a wine off the shelf, pop-n-pour. There were a bunch of second-rate winemakers on the list. Not impressed. It seemed like maybe 5% of the judges may have been qualified, imo. That ought to… well… have practically no impact.
So, the better wineries learn thereâs no point (for most of them) in entering.
But really, being into wine or itb doesn't make you a great taster, no matter how self-important you think you are.
As far as preference and subjectivity go, I always refer to our local blind tasting group. With an 8-wine flight, it was typical for 6 to have both first- and last-place votes, often with strong opinions. Also, pouring at open house events, it's just normal for people's preferences to be all over the place for which wines they like or don't like.
We see the same sort of variation with critics. Some wines are universally well-liked, and others see quite a range of scores. The one-time Wine Spectator tasting of red Burgs vs OR PNs in blind pairs with their OR and Burg critics had a variance of 25 points on one wine, plus plenty of other big differences of opinion. That's what's normal. The mags rarely publish lower scores, so the public doesn't see them. Retailers will show you a wine got a 95 and a 94 - from the two critics who gave the two best scores for that particular wine. They won't show the 86, and probably won't bother with others in between.
Hereâs one.
I've been a tasting judge in wine competitions for several years. Usually it's over weekends from 10 am to 6 pm with a half-hour meal somewhere in the middle. We usually go through 90-120 wines over the course of a day, fully blind.
The judges usually do 3-4 days during the spring (the tastings go from January to March).
Palate fatigue is definitely a real thing over such a huge task, but nothing an experienced taster can't handle. One just has to spit everything during the day.
The root of why the article is dumb is that no one is pretending wine competitions are scientifically run. (Nor does non-science = useless, as an aside.) If the article were meant to be anything more than click-bait (and not just bias-affirming, as mentioned above), it should say that wine competitions are not run scientifically, not that wine tasting isn't science.
Tasting panels that attempt to be more scientific randomize the order in which every taster tastes, to eliminate position bias, which is well documented. And that is just one example of the protocols used across the food/bev industry. These just don't exist in wine competitions, for many and good reasons.
I did once hear an interview with someone who ran a wine competition that was about as close to scientific as could be practical. I wish I could remember which competition it was. She randomized the order, may have poured the same wines twice, and repeated the tastings across time.
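For anyone curious what that kind of protocol actually looks like, here is a minimal sketch in Python. The function name, parameters, and wine/judge labels are all my own illustration, not from any real competition; the idea is just that each judge gets an independent random order, and each wine appears twice so you can check whether a judge scores the same wine consistently:

```python
import random

def tasting_orders(wines, judges, duplicates=2, seed=None):
    """Build an independent randomized pour order for each judge.

    Each wine appears `duplicates` times in every judge's flight, so
    intra-judge consistency can be checked afterward. Seeding makes
    the assignment reproducible for the organizers.
    """
    rng = random.Random(seed)
    orders = {}
    for judge in judges:
        flight = list(wines) * duplicates  # every wine, repeated
        rng.shuffle(flight)                # a fresh order per judge
        orders[judge] = flight
    return orders

# Hypothetical example: 4 wines, 2 judges, reproducible via seed
orders = tasting_orders(["A", "B", "C", "D"], ["judge1", "judge2"], seed=42)
```

Randomizing per judge (rather than using one shared order) is what washes out position bias across the panel: a wine that lands first for one judge lands mid-flight for another.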
I do think this exact same article has been discussed before. It's ten years old, after all.
And as some of the others have said - the people who judge at a state fair aren't necessarily people who know anything at all about wine.
But even so, wine evaluation is NOT a science. Wine tasting, like any tasting, depends on your mood, your physical state, the ambient temperature and environmental influences, the item tasted, the temperature of the item tasted, what you have recently tasted, and your own life experiences. Someone who has never tasted currants will not understand "cassis". If you grew up with them, that might be the first place you go.
Why?
Because taste, like sight, is about pattern recognition. You taste something enough times and you become familiar with it. Most people on this board can probably identify most Rieslings by the nose. Same with Gewürz. Other wines have less of a signature aroma. Then on the palate, the wines are identifiable again. The complex pattern is easy to remember, somewhat like a checkerboard. Other wines are more like a Pollock painting. If you know his work really well, you'll be able to tell one from another. If you don't, they all look the same.
Same thing with wine. It is absolutely not a science. It is an individualâs familiarity with the pattern that determines whether the individual will be able to recall it or not.
It depends on what is meant by âevaluationâ. Evaluation in terms of judgement is just that, a judgement. Evaluation in terms of qualities (in the descriptive sense, not the value sense) can be done in a controlled and repeatable manner.