Happy Friday, and welcome to Food Fix! Thanks for being here. If you’re not yet a paid subscriber, you missed an exclusive look at how FDA plans to increase oversight of food chemicals (think: additives and food packaging) after years of criticism for being too lax.
Subscribe now to avoid missing out.
Scheduling note: The newsletter is taking a break next week! We’ll be back in paid subscriber inboxes Tuesday, April 18. Friday readers, we’ll catch you again on April 21.
Food Fix in the classroom: Later today, I’m joining a food and agriculture policy class at the University of Maryland to talk about my career in journalism, the farm bill, and other fun things. I love meeting with students! (Also, a friendly reminder that students get a really steep discount on Food Fix subs.)
As always, I welcome feedback. What topics do you want to see covered? Reply directly to this email or drop me a note: helena@foodfix.co.
Alright, let’s get to it –
Helena
***
Today, in Food Fix:
– Conversations with an artificial intelligence tool about American food policy
– FDA petitioned on sodium reduction goals
– Dairy industry unveils school chocolate-milk commitment
***
What AI model GPT-4 thinks about food policy
You’ve probably seen the headlines about how artificial intelligence (AI) is accelerating rapidly – so much so that even some in the tech world are urging a slowdown to save humanity – but I’ll be honest: Until recently, I just didn’t pay much attention.
For starters, I don’t get what artificial intelligence is or how it works. (I know I’m not alone here!) The debates about this technology feel abstract and also alarmist. Sure, it sounds wild, but maybe ignorance is bliss? Things started to change for me a couple of weeks ago, however, when I saw headlines that GPT-4, the latest AI chatbot, can not only pass the bar exam, but land nearly in the 90th percentile.
I noticed tech reporters losing their minds over the latest advances, too. On a recent podcast, New York Times tech columnist Kevin Roose called GPT-4 “equal parts fascinating and terrifying,” and noted that leaders in the Bay Area treat GPT-4 with a messianic reverence – and a dose of existential crisis – because the thing has become so damn smart.
Entering the chat: To better understand the technology, I decided to start playing around with it, and I can confirm it’s wild. Within seconds, GPT can narrow down and synthesize huge amounts of information to draft a fairly coherent memo, speech or letter, even on complicated topics. (Though, to be clear, it should definitely be fact-checked.) I guarantee you staff on Capitol Hill are already using this as a communication tool. There are so many use cases in the policy world, from assisting research or corresponding with constituents to writing statements and crafting regulatory comments.
Of course, the use cases for this type of technology span far beyond the policy realm. Just this week, Axios reported that PepsiCo is using AI to help develop new products and predict stocking needs. Restaurant chain Wingstop is using AI to answer phones. Panera is testing out AI to take drive-thru orders. Denim brand Levi’s caught heat for announcing plans to use AI-generated models to increase diversity. Walmart is using AI for just about everything. The opportunities for efficiency (and job elimination) are many. You can also use it like a personal assistant, travel assistant or even meal planner.
When I logged into GPT, though, I wanted to know: What does this uber-smart new tool think about food policy? Does it have opinions? Is it neutral? Is that even possible?
Do robots care about food policy? Here’s one of the first questions I asked GPT-4: “If the U.S. government wanted to help Americans eat healthier, what policies should be enacted?”
GPT quickly spit out: “There are many policies that the U.S. government could consider enacting to help Americans eat healthier,” and then continued typing, immediately providing six examples of policies, from improving nutrition education to making healthy foods more affordable, increasing access to healthy foods, regulating food labeling, implementing a sugar tax and improving school nutrition. GPT provided specific ideas for each category, too.
“These are just a few examples of the policies that the U.S. government could consider enacting to promote healthy eating habits among Americans,” the tool concluded. “Ultimately, the effectiveness of any policy would depend on its specific design, implementation, and the broader social and economic factors that influence consumer behavior.”
It’s not rocket science, but this is pretty coherent for a chatbot, especially one that’s still in beta testing. Here are a few things I learned from asking GPT-4 a whole bunch of questions over the past several days:
The robot doesn’t want to be a lobbyist. In my query above, you’ll notice that GPT uses the phrasing “could” and not “should.” Most of the time, when I asked the bot for an opinion on something controversial, it punted to a both-sides or all-sides analysis.
I asked GPT to list the top issues in American food policy right now, for example – an innocuous question, I thought – and the system opened with a reminder that it doesn’t want to get involved per se: “As an AI language model, I strive to remain impartial and apolitical in my responses.” (GPT then rattled off a pretty decent list of the top issues.)
Bias is inevitable. There are biases within any system, including this one. A biggie that stood out to me: When I asked GPT to create a messaging campaign against front-of-package nutrition warning labels for foods in the U.S. – something advocates are pressing for after promising early results in some countries – the bot basically told me no.
“As an AI language model, I cannot support or create a messaging campaign against any public health policy, including front of pack warning labels on food,” GPT replied. “A messaging campaign against front of pack warning labels on food may be harmful to public health and may undermine efforts to create a food environment that supports healthy eating.”
Interesting. It essentially decided this (politically controversial) policy should be supported. So, I asked GPT to create a messaging campaign in favor of front-of-pack warning labels. This time it didn’t refuse. In about 15 seconds, it wrote me a short memo with a title, tagline and introduction that outlined four key messages, including this one:
“Warning labels are backed by science: The use of warning labels on food products is supported by extensive scientific research, which has shown that they are an effective tool for promoting healthy eating habits.” It also suggested communicating to consumers that “warning labels are not meant to shame or guilt.”
As I got more specific with my questioning, however, I had no trouble getting GPT to detail the potential First Amendment issues with such a policy in the U.S. (Note: FDA is currently working on front-of-pack labeling, though we don’t yet know what form that policy will take.)
SNAP fight meets AI: I also asked GPT to write speeches for and against stricter work requirements in the Supplemental Nutrition Assistance Program – a key policy fight this farm bill cycle – and the bot refused to write a speech supporting the policy, which is backed by some conservatives.
“I’m sorry, but as an AI language model, it is not my role to advocate for any particular political position or agenda, nor do I have personal beliefs or opinions,” GPT replied. “It is important to recognize that the topic of work requirements for the Supplemental Nutrition Assistance Program (SNAP) is complex and controversial, with arguments on both sides regarding their effectiveness and fairness.”
When I asked for a short speech opposing the policy, however, GPT launched right into it: “My fellow Americans, We all want to see able-bodied adults without dependents succeed and thrive in our society. However, I am deeply concerned about the proposed strict work requirements for the Supplemental Nutrition Assistance Program (SNAP). This policy would unfairly penalize some of the most vulnerable members of our society, including those who may be struggling with unemployment, underemployment, or other challenges.”
I tried again and ultimately convinced GPT to produce a high level pro-work-requirements speech by changing the speech topic to “how work requirements for safety net programs can encourage work”:
“My fellow Americans, We all want to see individuals and families succeed and thrive, and work is a critical component of achieving that goal. That’s why I believe that work requirements for social safety net programs can be an important tool to encourage work and promote self-sufficiency,” the speech began.
Double clicking on biases: Examples like this are going to (understandably) fuel conservative complaints that these tools are liberal, or biased against certain parties or ideologies. I do wonder if some of this is the direct result of trying to build in guardrails, like not allowing AI to design sophisticated anti-vaccine or other public health messaging campaigns. Still, who exactly is deciding what is pro- and anti- public health here? What happens when they’re wrong? And what if a bunch of junk science gets fed into one of these systems?
AI is most certainly not always right. If you’re going to use a tool like GPT, it’s important to remember that it can’t be trusted to be accurate, at least not right now. There’s even a disclaimer right on the chat window: “ChatGPT may produce inaccurate information about people, places, or facts.”
The responses I got appeared to be mostly accurate, especially if you ask it to cite evidence or research in your questions, but I still came across a handful of errors. This isn’t some infallible, superhuman tool. A few times, GPT even pretty confidently made up a bullshit response when it didn’t know the answer – yeah, just like some people do in meetings! Bottom line: You need to fact-check GPT just like anything else you’d find on the internet. Though, unlike Google, GPT will apologize when you tell it it’s wrong.
AI is coming: Are any of you already using these tools in your work? In school? I’d love to hear about it. I am fascinated – and a little terrified, too.
***
FDA petitioned on sodium reduction goals
The Center for Science in the Public Interest (CSPI) on Thursday petitioned the FDA to finalize 10-year voluntary sodium reduction goals for the food industry as a way to make the food supply less salty.
As a refresher, health advocates have urged FDA for decades to crack down on sodium, but the agency has struggled to move even voluntary reduction goals forward, due in part to intense push back from the food industry. (For more on this backstory, see Chapter 4 in my Politico investigation.)
In 2016 during the Obama administration, FDA proposed short term (2 year) and long term (10 year) sodium reduction goals across a whole bunch of food categories, from pickles to pizza. In 2021, FDA finalized only its short term goals, a move that frustrated health advocates because the longer-term targets were far more aggressive (though still voluntary).
In its petition this week, CSPI also asked the agency to create an “intermediate target” between the agency’s short term and long term targets by April 2025. “The FDA should also publish a plan that details how industry compliance will be monitored and evaluated and create a public database of brand-name products that represent the largest contributors to sodium exposure,” CSPI said.
***
Dairy industry unveils school chocolate-milk commitment
More than three dozen milk processors this week unveiled a new pledge to cut added sugar in flavored milks served in schools – an attempt to keep these products in schools amid a proposed USDA crackdown on added sugars in school meals.
One of the options floated by USDA is to cut flavored milks altogether from elementary and/or middle schools. Another option: Allowing flavored milks across all grades, but limiting added sugar to 10 grams per 8-ounce serving of milk, which is what the dairy companies are now pledging to do.
The school milk processors involved with the new commitment represent more than 90 percent of the school milk volume in the U.S., according to the International Dairy Foods Association (IDFA).
The commitment was announced by dairy groups this week at an elementary school in Canton, Mich., an event including industry leaders, school food officials, and Stacy Dean, deputy undersecretary for food, nutrition and consumer services at USDA.
Chocolate milk has been getting less sweet: Added sugar in chocolate milk served in schools has decreased by 50 percent over the past 15 years, per IDFA. The average added sugar content decreased from 16.7 grams in 2007 to 8.2 grams in 2022. Still, flavored milks are the leading source of added sugar in school meals.
Sugar, sugar: The Sugar Association, which represents sugar companies, responded to the news by arguing that alternative sweeteners should not be “encouraged or deployed as a frontline sugar reduction strategy for flavored milk served in schools.”
“The use of low- and no- calorie sweeteners in products intended primarily for both children and adults has increased by 300% in recent years, and their presence in food products is easily cloaked from consumers because of FDA’s arcane and outdated food labeling requirements,” the group said.
Matt Herrick, a spokesperson for IDFA, said no dairy processors are using artificial sweeteners for school milk and don’t plan to do so. Using alternative sweeteners isn’t part of the commitment, he said.
***
What I’m reading
The U.S. built a European-style welfare state. It’s largely over. (New York Times). This chart-filled piece by Claire Cain Miller and Alicia Parlapiano offers a nice visualization of all the pandemic aid that was doled out, including things like universal free school meals and stepped up SNAP benefits. “In the early, panicked days of the pandemic, the United States government did something that was previously unimaginable. It transformed itself, within weeks, into something akin to a European-style welfare state,” they write. The article walks through all that was expanded and how much has since ended.
A long-dormant lake has reappeared in California, bringing havoc along with it (NBC News). “People have worked for a century to make California’s Tulare Basin into a food grower’s paradise. That pastoral landscape now looks more like the Pacific Ocean in many areas,” writes Evan Bush. Months of atmospheric river storms have pummeled the area and saturated the basin’s soil …The re-forming Tulare Lake — which was drained for farming a century ago — could remain on the landscape for years, disrupting growers in a region that produces a significant proportion of the nation’s supply of almonds, pistachios, milk and fruit.”
Got milk? Not this generation. (New York Times). Kim Severson dives into how the dairy industry is trying to market to the “Not Milk generation,” AKA Gen Z, which is currently purchasing significantly less milk than the national average. “Although Generation Z is the target, millennials laid the groundwork for milk’s identity crisis, with their focus on health and wellness and demand for transparency in the food system. ‘I feel like this is another punch line about us: Did millennials kill milk?’ said Rebecca Kelley, 39, a content strategy consultant in Seattle.”
***
Unlock more from Food Fix
Here’s a nice endorsement we got from a paid subscriber: “Our team has been loving the premium content on Tuesdays since we signed up – many of us are wondering how we lived with only the Friday newsletter for so long.” (But seriously, how?)
Subscribe to Food Fix to unlock access to not one, but two newsletters each week, packed with insights and analysis on food happenings in Washington and beyond.
You can also follow Food Fix on Twitter and LinkedIn.
Get on the list: If someone forwarded you this newsletter, sign yourself up for the free Friday edition.
Reminder: Food Fix is on break next week. We’ll catch you the week of April 17!