To: 14421400 Canada Inc. o/a both Canadian Brewing Awards and Best Beer, Beer Judge Certification Program, Cicerone® Certification Program, Prud’homme Beer Certification, Canadian Craft Brewers Association, Brewers Association Inc., Canadian Homebrewers Association, American Homebrewers Association

April 5, 2025

To whom it may concern:

Beer competitions, and beverage competitions more broadly, are a hallmark of the wider industry. They give recognition and renown to producers large and small for creating beverages that are outstanding within their respective region and style. Competitions are held with the intent of improving product quality across the beverage industry as a whole and increasing consumer engagement with local beverage producers. Flexible style guidelines are maintained and used to standardize quality across different regions, production techniques, and locally available ingredients. The Beer Judge Certification Program (BJCP), which publishes one such set of style guidelines, administers judge training exams, and registers competitions, currently recognizes over 98 beer styles – not including regional, historical, mixed, or experimental styles.

Judges provide extensive sensory evaluation, feedback, and notes on each beverage's stylistic accuracy, drinkability, and mass-market appeal. This includes real-world guidance on how producers can improve their products through process improvement, ingredient selection, and industry best practices. Beer judges are volunteers and, per the BJCP code of conduct, should not be directly compensated for judging services. While travel stipends, catered meals, or accommodations are sometimes provided, these are limited in scope and budget and do not account for costs such as missed work to attend judging sessions. The vast majority of judges pay out of pocket to volunteer, and for a judge who is not local the cost can run into the hundreds or even thousands of dollars. Judging sessions are mentally and physically grueling. An average judge provides a full sensory and stylistic evaluation of 6-8 beverages per hour, and large competitions are judged over multiple 8-12 hour days. Beverages are evaluated in panels of 2-3 judges to provide varied feedback and reduce bias, and results are tied back to each judge to increase accountability and ensure a safe, fair, and equitable judging environment. Judging a competition is a deeply human undertaking that depends on people filling diverse roles: judges, stewards, staff, organizers, sorters, and venue maintenance workers.

In 2025, the Canadian Brewing Awards introduced a new generative AI model called Best Beer, marketed as a replacement for the traditional centralized judging model used on their privately owned online competition portal, the Beer Awards Platform (BAP). The organizers went as far as calling it "BAP 2.0" and, on the consumer side, an "Untappd Killer", referencing another consumer-focused beverage rating platform. The aim was to use aggregate sensory evaluation data to train a generative AI model on each individual beverage entry, allowing producers to see which beverages are most marketable in their regions regardless of style, and to train more judges in sensory evaluation against AI training data. The organizers introduced this AI model to their pool of 40+ judges in the middle of competition judging, surprising everyone with the sudden shift away from traditional judging methods. They presented it as a new, innovative tool to increase judging efficiency and provide better data to producers, with head judge Stephen Beaumont boasting that once he had gotten used to the tool he could "evaluate a beer in 5 minutes". They also emphasized that our data would be anonymized and aggregated. Later discussions revealed their intention to decentralize beer evaluation, leading to ad-hoc brewery visits to evaluate beer in its home environment in exchange for large sums of money. Regardless, their intention to gather our training data for their own profit was apparent. Many judges felt pressured to use the new system despite a variety of concerns, as their hotel accommodations, meals, and other support were contingent on their attendance at judging sessions. Others were ready to walk out the door over their concerns, with one judge commenting, "I am here to judge beer, not to beta test".
When concerns about the use of AI were brought to Best Beer, they were shrugged off with "If people aren't upset, we aren't innovating", and the organizers reiterated that we should feel lucky for the opportunity to provide training data for their generative model. Eventually, more than half of the judges in the session voted to switch their respective panels back to the traditional evaluation method.

It is important to note that producers who entered their products into this competition were told their entries would be evaluated using "enhanced judging" or "consumer-focused evaluation". Details provided to entrants were sparse, and no communication mentioned generative AI. Furthermore, producers were required to send 8 containers per entry – a 33% increase over the previous year, and double what is expected for standard commercial competitions. In a breakdown listed on the official site, the 2 extra containers were to allow for "consumer-focused evaluation" – plainly, entry into the AI dataset. This was mandatory for all entries, and with roughly 1600 individual entries it means 3200 extra cans of beer were sent in for this purpose. As a result of the extra volume, competition logistics ground to a halt and judging was constantly delayed. To put it plainly, the introduction of generative AI significantly reduced the efficiency and speed of the competition.

On April 4, it was revealed that the Canadian Brewing Awards would be hosting a ticketed beer tasting event called the "Beer Boozled Challenge". Marketed towards beer enthusiasts and influencers, the event centres on blind tasting a flight of beer, evaluating the beers in the Best Beer AI model, and receiving a consumer tasting profile that recommends beers based on the consumer's preferences. Tickets range from $25 to $200 and include a flight of beer for evaluation plus at least 12 beers to take home, selected using the AI profile. We have strong reason to believe that beers entered into the Canadian Brewing Awards are being used for this tasting and given away to participants in exchange for their training data – in other words, the "consumer-focused evaluation" is a ticketed event of untrained consumers that contributes both to the organizers' immediate profit and to the future profit of their private corporation's AI model. The Canadian Brewing Awards states the event is not for profit and that ticket sales will be used to cover event costs, but this is hard to credit given that tickets run up to $200 per person and, again, they did not purchase the beer: producers paid entry fees to the Canadian Brewing Awards and shipped their product on their own dime.

The use of AI is dehumanizing, stripping away the individuality of judging and the palate variations that exist across people of different environments, races, genders, and genetic makeups. There are already recorded instances of racial bias in generative AI models. Anonymizing and aggregating data goes against the core tenets of beer judging, removing accountability from the equation and denying recourse against corrupt, impaired, biased, or belligerent judges. De-emphasizing style removes an important element of consumer choice, giving less information to consumers who use style identifiers to inform their beverage decisions, in favour of generative suggestions.

Training new judges against existing training data – for example, measuring the accuracy of a person's sensory evaluation of a beverage against an aggregated evaluation of that beverage – creates further opportunities to introduce widespread bias. Judges train extensively to detect undesirable or unpleasant flavours (off-flavours) in beverages, often using laboratory-isolated compounds. Where these compounds are not readily available, prospective judges trained against AI data might be disincentivized from identifying off-flavours. While quality control is a primary consideration in beverage production, beverages are produced by humans, not machines. Many people cannot identify these characteristics without specific training, and human variation can heighten or dull their perception. For humans and for AI alike, bias, once learned, is difficult to unlearn. Furthermore, using this new system in a decentralized manner, as was suggested, would open the door to further corruption.

There are yet more concerning factors. Privacy concerns around AI training data include the wholesale, non-consensual use of personal information, the misuse and reproduction of copyrighted material, and the inability to delete data – effectively meaning that data is stored forever. Environmental concerns include the destruction of wildlife habitat and old-growth forests to build data centers, the burning of carbon fuels to generate electricity to power data center servers, and the functional loss of environmental fresh water used to cool those servers. Ethical concerns include the building of data centers in developing countries to exploit their land, resources, and labour, with poor conditions and wage slavery for data center maintenance workers. Furthermore, the use of generative AI has been shown to reduce critical thinking skills, creativity, and original thought in children and adults. None of these concerns have been addressed by Best Beer.

The use of AI in beverage sensory evaluation should not and cannot substitute for real human judging. Even supplementing human judging with AI tools has massive negative implications for the industry at large, the producer, the competition, and the judge. It leaves enormous space for corruption, bias, and greed to proliferate. It does not address organizational shortcomings or increase competition throughput, as humans are still needed to collect, sort, pour, and deliver entries to judges. Nothing of substance is created by these systems; they simply transform data based on inference, assumption, and training bias.

We the undersigned beverage judges, beverage industry owners, professionals, workers, and educators, urge you:

  1. To revoke sponsorship, endorsement, investment, or accreditation of 14421400 Canada Inc. o/a both Canadian Brewing Awards and Best Beer until such time as they provide evidence that they have ceased AI development and use.
  2. To discontinue, discredit, divest from, and ban the development and use of AI tools and operations in beverage competitions, sensory evaluations, judge training and consumer engagement activities.
  3. To update your respective Code(s) of Conduct to include verbiage about the use of AI.

To 14421400 Canada Inc. o/a both Canadian Brewing Awards and Best Beer: 

  1. We demand a public, formal, written apology for exploiting our status as volunteers, for gathering our training data without prior consent, and for operating in bad faith.
  2. We demand that our data be deleted and never used for any purpose without our formal, written consent.
  3. If AI development and/or use is to continue:
    1. We demand a privacy policy outlining where and how user data will be used across all applications.
    2. We demand an annual, public report on the environmental and ethical impacts of AI tools and operations.

To our fellow beverage judges, beverage industry owners, professionals, workers, and educators: Sign our letter. Spread the word. Raise awareness about the real human harms of AI in your spheres of influence. Have frank discussions with your employers, colleagues, and friends about AI use in our industry and our lives. Demand more transparency about competition organizations.

Sign the Letter Here