The AI Daily Brief: Artificial Intelligence News
OpenAI Proposes a New Deal
Channel: The AI Daily Brief: Artificial Intelligence News
Date: 2026-04-09
Duration: 21min
Views: 2,565
URL: https://www.youtube.com/watch?v=P_oabCLJhb0

OpenAI's Industrial Policy for the Intelligence Age addresses worker protections, public wealth proposals, tax reform, and datacenter energy issues. Analysis highlights PR framing concerns and a lack of concrete commitments such as funding, energy rate separation, or reinstated profit caps. Quinnipiac polling and debates over AI hype versus real-world capabilities create urgency around benefits, risks, and redistribution.

The AI Daily Brief helps you understand the most important news and discussions in AI.

Today, we're looking at the latest policy document from OpenAI and why it might be more significant than similar documents we've seen in the past. Welcome back to the AI Daily Brief. Today, we are looking at a policy document from OpenAI, and it comes at the convergence of two moments in and around the industry. The first moment is what we were discussing on yesterday's show: this growing indication from the labs that the next jump, the one we are on the verge of with the next set of models, represents a really big one. Remember, at the end of March, we got the leak about Anthropic's Mythos model, which Anthropic said represented a "step change," their words, in capabilities. In fact, what we got with the leak was a blog post saying that the model was so powerful that they were going to slow roll it a little bit, rather than the full announcement and release of the model that we've gotten in the past. On the OpenAI side, the company has been heavily teasing their new Spud model, actually doing more to hype it up than to tamp down expectations, reversing the trend they've had ever since GPT-5 underperformed.

So, on the one side, we have this moment of precipice, where the next set of models could represent a very big jump. Then, on the other side, we have the continued and frankly increasing reality of dreary American sentiment when it comes to AI. A new poll from Quinnipiac suggests that sentiment is going from bad to worse. 55% of Americans now believe that AI will do more harm than good in their day-to-day lives. That's up 11 percentage points from a year ago and tips into majority territory for the first time. 70% believe that AI will reduce job opportunities, up 14 percentage points. A mere 7% of respondents believe that AI will increase job opportunities. In other words, Americans believe by a 10-to-1 ratio that AI will reduce rather than increase jobs. 30% said that they were either very or somewhat concerned about AI making their job obsolete. And yet, this is all despite adoption rocketing forward. The majority of people are now using AI to research topics they're curious about, rising from 37% to 51% over the past year. Analyzing data and creating images each increased significantly as use cases as well, both rising from around 16% to around 25%.

The number of Americans who said they had never used AI was down from 33% last year to 27% this year. Tamilla Triantoro, an associate professor at the Quinnipiac School of Business, noted, "Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions." This is also not just one poll. We're seeing AI being blamed for increasing electricity prices, opposition to data centers growing, and, in one dramatic example of just how negative the perception around AI is, it has worse PR right now than the extremely controversial ICE. Into that environment, OpenAI released the new document, Industrial Policy for the Intelligence Age. The document is framed not as some complete policy statement or comprehensive anything, but instead as a way to try to nudge the conversation around important policy topics forward. They divide their policy discussions into two areas: first, building an open economy, and second, building a resilient society. And I think that the document needs to be judged in two different ways.

One is from a PR lens: what does it do for OpenAI, and for the AI industry in general, when it comes to public perception? And the second is in terms of what one might think about the policies themselves. Now, to be fair to OpenAI on the first way of judging this, the document obviously isn't primarily intended as a PR piece. It feels like it's much more designed for a Washington-insider audience, and if it were a document for general public consumption, maybe it would look a little bit different. At the same time, the reason I'm not interested in giving OpenAI a pass on that front is that at this point, with where they sit in the industry, and especially when they pair this with big premiere interviews with the founders of media companies, like the one Sam Altman did with Axios, they clearly recognize that everything they say is, whether they would like it to be or not, a public relations statement as well as whatever else it is supposed to be. To be completely transparent, I very, very, very much dislike this document. It exists in this strange uncanny valley where it is so technocratic, down to the narcolepsy-inducing name, Industrial Policy for the Intelligence Age, that it is inevitably going to fail at any sort of PR goal, while at the same time not being robust enough from a policy perspective to seem likely to do a particularly good job of advancing any of these policies either.

It is a document, in other words, without a clear home or purpose, or one where its home and purpose are so confused that it is, at least in this current form, not all that useful to anyone. Now, we are going to go through the policy proposals, because there are some interesting and important discussions started there, and I want to take this idea of being a conversation starter in good faith, but I do have to say a couple more things about the PR impact right now. I don't know that I've ever seen an industry so fundamentally unwilling to spend any time at all articulating why it deserves to exist as the AI industry. Every single document like this, every single statement that comes out of Dario's or Sam's mouth, is so focused on affirming the negative and validating people's concerns that literally no time is spent actually explaining how this is going to make the world better.

Every discussion is this incredibly quick pass-through where a bunch of theoretical future benefits are listed in short order, without actually articulating how we get there or what the impact of those changes will be on people's lives, all along the way to getting to what seems to be the core point, which, again, is validating all the bad things. We get these hand-wavy statements like this one, "We strongly believe that AI's benefits will far outweigh its challenges," only to have the next three lines be all about how clear-eyed they are about the risks. This does not come off as reasonable. It does not come off as sober or thoughtful. What it does is make people ask, "Why the hell are we doing this in the first place then?" You know how, when you see an ad for some new miracle drug on TV, the last 10 or 15 seconds of the 60-second spot is always them disclosing all the risks and side effects? The way the AI industry communicates, it's as if they flipped that ratio around and spent three quarters of the ad talking about all the side effects and negatives, and only a tiny little bit on why the thing should actually exist in the first place.

And what all of these risk descriptions, these sober, thoughtful risk descriptions, fail to engage with is the thing that seems incredibly obvious to most average people, which is that AI doesn't have some mandate from heaven to exist. When OpenAI or Anthropic or anyone else in the AI industry talks about mitigating these serious risks, many of which sound absolutely horrible, the response of many normal people is to say, "Well, then why are we doing this in the first place?" When those companies' answer is "Well, it's happening one way or another," and they don't respond when people say, "Wait, but why?", people are left to assume that the answer is because it's going to make some people rich. That is the default understanding in the absence of a better answer. And of course, that default understanding just makes people angrier. If the answer is because China is going to do it if we don't, maybe for some that's a little bit more understandable, but it remains incredibly abstract. The only possibly satisfying, and only possibly viable, answer must be that the benefits of AI are higher than the costs.

And just saying that in this hand-wavy way, "We think the benefits are higher than the costs," no longer cuts it. It never cut it, but it really doesn't anymore. Right now, with where things are, every single time any leader or senior official from any major lab speaks, they are either contributing to the strong sentiment that we see in all of these polls, that AI is likely to do more harm than good, or they are doing work to reverse that sentiment. I think that we in the AI industry should be judging every communication on the basis of whether it reinforces that negative sentiment or whether it actually combats it. So, as I said, giving credit to the people who wrote this, I do not believe they were thinking about it first and foremost as a PR document, but unfortunately, in the world that we live in, and in the world that OpenAI and all these companies operate in, it is that whether they want it to be or not. Now, as you might imagine, I am far from the only person who has some negative feelings on that side.

Daniel Jeffries writes, "Please, please, please, I'm on my knees begging every AI exec on the planet, just stop with this stuff. Just give us models. Let the collective distributed intelligence of people figure things out in real time like we always do. Let people adapt, it's what we do. We are not giving birth to magic super miracle machines that suddenly invalidate every single pattern of the entirety of human history and technological development. We're not, really. AI is amazing, it's wonderful, but it's not magic. Can we please just let AI be cool and useful and problematic in realistic ways instead of all this crazy talk?" Meanwhile, others point out that there is something discordant about where AI actually is and all of this talk of world-changing superintelligence. And by the way, this is not just the Gary Marcuses of the world who are desperate to convince you that AI isn't all that powerful. These are people who are totally bought in. Chaian Zhao, whose literal handle is genAIisreal, posted the companion Altman interview and said, "The replies are more insightful than the interview. Someone pointing out that GPT-5.4 has been spinning in circles on a webhook for 4 hours while Sam talks about superintelligence captures everything wrong with how AI is being discussed right now. The models are genuinely impressive and improving fast, but calling this superintelligence devalues the word and makes it harder to have serious policy conversations when we actually need them. We're in the extremely capable tool era, not the new social contract era."

BuccoCapital Bloke put it a little more bluntly last week, speaking in general, not about this specific document. He writes, "You must understand that every tech executive has AI psychosis. They're puking out Claude-generated markdown files full of hallucinations asking if this means they can fire 500 people." Aaron Levie from Box actually responded and said, "The worst thing you can do is just dabble with AI a little bit. That's the spot where you see its capability, but overgeneralize on the use cases and how easy the automation is. You almost have to use it too much, develop psychosis, then get to the other side, and realize how much care and feeding and management of the agentic workflows is required. On the other end, you realize you actually need to probably hire more or new people to then do all the new things agents can do." But let's talk about some of the policy proposals. I'm going to spend a lot more time on section one, the open economy, than I am on the second part, resilient society. The first thing they discuss is the importance of including worker perspectives in the AI transition.

They write, "Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights." This is something that I do think is extremely important, but it also reveals one of the biggest challenges with this document overall, the thing identified by Will Manidis in his response essay, "No New Deal for OpenAI": this document is absolutely chock-full of pretty sentiments that, at least in the way they are described right now, seem to wholly ignore the political reality and the political history they operate within. We've discussed this worker-management thing numerous times in the past on this show. And what is happening and will happen is a wholesale shift in the relationship between employees and management in lots of different ways. On the one hand, managers have much more power because they feel like they can do things with fewer people. On the other hand, the end worker who is actually using the AI kind of negates the need for a lot of layers of middle management. But then there are also issues like the fact that in many cases workers are training their own replacements.

The point being that what's happening here, what will happen, and what needs to happen is not some policy that can be enacted. It's going to be a totally new labor movement. OpenAI doesn't use the word union here, which is one of Will's biggest beefs, with Will pointing out that the New Deal was not some benevolent meeting between the capital class and the labor class facilitated by FDR, but the byproduct of decades of political violence and a labor movement that was willing to fight and literally die for change, not to mention leadership that had an actual mandate, the likes of which no one in American politics has had for a very long time. Still, to the extent that we are talking about conversation starters, yes, we do need to have the conversation about this shift in the relationship between employees and management. Next up, we have AI-first entrepreneurs. Now, the critique of this one is that telling a displaced customer service agent to go start some small business that competes with their former employer feels at best tone-deaf. But of course, that's not the actual point of pro-entrepreneur policy.

In other words, the point is not that every worker who is displaced by AI is going to all of a sudden go be an entrepreneur now. It's to ask what sort of policy interventions and support structures could increase the successful small business entrepreneurship rate by 50% or even 100% from where it is today. There is not going to be one single policy silver bullet for the amount of change that's going to happen. Pro-entrepreneurial policy is one part of a much larger toolkit, and in that I'm completely supportive. Now, I'm not totally sure what the right policy interventions are, or what the right type of entrepreneurial support is, but I do think this is going to be a part of the solution, because in a fast-changing future, for many the only secure future will be the one they secure for themselves. Next up, we have the right to AI. And this is something that OpenAI has talked about before. They write, "We need to treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy or to make sure that electricity and the internet reach remote parts of the globe." And what I would say here, which to be fair they give at least a mention to, is that access to AI is going to be meaningless without the agency to actually use it.

What I mean by that is that we can't just give everyone a free ChatGPT account and hope it works. The amount that companies are spending on AI infrastructure right now is, based on studies that we found, more than 12 times bigger than the amount they're investing in people's capability to use these tools. And that's within companies that have a direct financial incentive to have their people use these tools well. We need a mass-scale infrastructure mobilization to help people figure out how to use the new tools of the new economy. Call it whatever you want, a Marshall Plan for education. We need to be thinking in those big, massive terms, because without that, any right to AI is just a pretty notion on a piece of paper. Next up, OpenAI calls on us to modernize the tax base. And this is actually an area where I think we are inevitably going to see some of the biggest shifts. And frankly, I think we are going to see some breakdown of traditional conservative and liberal lines when it comes to tax policy. The logic is that if the balance of the economy shifts from labor to capital, there just literally has to be some commensurate change when it comes to taxation.

Now, doing that well is going to be massively challenging, but I think, based on the trajectory of both the economy and the larger political conversation, some version of this is inevitable. Maybe it's policies that already have a lot of support in liberal circles, like higher taxes on capital gains. Maybe it's new types of taxes on automation. But basically, I think something has to give here, and I think you will likely find some very strange bedfellows when it comes to figuring out how to do it well. Now, luckily, from an inside-the-AI-industry perspective, this sort of shift in how we think about taxation likely has the benefit of being extremely good politics. The next idea from OpenAI, which is getting a lot of coverage, is a public wealth fund. They write, "While tax reforms help ensure governments can continue to fund essential programs, a public wealth fund is designed to ensure that people directly share in the upside of that growth. Policymakers and AI companies should work together to determine how to best seed the fund, which could invest in diversified long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth regardless of their starting wealth or access to capital."

I seem to be a little bit more skeptical of the ultimate importance of this than others out there. I certainly don't think it's bad. I think it would be good to have people rooting for the success of these companies, but I have a little bit more skepticism than many others around things where everyone gets a little share. And again, it's not because they're bad, but because I think maybe the central challenge of American politics is that people don't want the average of what people have. They want, and feel like they deserve, the exceptional. We live in a world where it feels like we are constantly confronted with people who have more than us, whether that's Instagram posts, real or not, or having to walk through first class to get to our section of the plane. Now, it's not necessarily AI's job to deal with that. In fact, it may not be a policy remediation at all. But my concern about a public wealth fund is that I think it could be a very window-dressing-y, exciting-to-write-about type of thing that doesn't really move the needle when it comes to core sentiment.

On the other end of the spectrum, I'm much more enthusiastic about things like OpenAI's discussion of accelerating grid expansion. Except I would take it further, and not just think about how to accelerate grid expansion in ways that don't cost individual people money, but actually have the benefits accrue to those people first. Basically, rather than these pretty pledges to ensure that the data center build-out doesn't increase people's electricity prices, we should be actively making their lives cheaper, not just keeping prices the same. I think that as an incredible amount of wealth accrues to the AI companies, we are going to need ways for that to flow back to the rest of the world. Private financing of public utilities may end up being part of that equation. Another area that's seeing lots of discussion is the incredibly poorly named and framed "efficiency dividends," by which OpenAI is basically talking about reinvesting the realized value of AI back into regular people's lives. Now, again, to be fair to them, they are not planting their flag heavily in any one policy, but they are coming back to ideas which have been floating around for a while now, like the 32-hour or four-day workweek. This is something that, before he decided to go full frontal assault on the data centers, Bernie was putting in his AI policy last summer.

I tend to be a little bit more skeptical of things like the 32-hour workweek, because I think people view them as a panacea when really a lot of people are just going to work more anyway. But there are plenty of other ideas built on the same principle of reinvesting AI's realized value back into people, and I think that could be a really important thing. And this is both on the individual level, i.e. things like retirement matches or covering a larger share of healthcare costs, but it also could be on that more global, societal level. Later on in the document, they talk about portable benefits, i.e. things like healthcare, retirement savings, and skills training that aren't solely connected to a single private employer, and the efficiency dividends could go to pay for that. They also talk about pathways into human-centered work. And to the extent that there need to be things like free training programs and better support infrastructure around some of these industries that are historically taxed on resources, like, for example, elder care, again, those efficiency dividends could go to pay for that. To not dance around it, there is going to be some redistribution of AI-generated wealth, and I think some of these types of programs could be more politically palatable than just handing people money directly.

One set of ideas that is very technocratic, but also interesting, and I think worthy of a lot more conversation, is the adaptive safety nets that OpenAI is proposing. One of the things they're suggesting is investing in much better, more direct measurement of how AI is impacting things like work, wages, and job quality, and then using those measurements to inform automated and dynamic social safety net programs. And honestly, even setting the AI context aside, what they're basically saying is that the tools we have at our disposal allow us to potentially make much more targeted, narrow, and specific interventions, rather than having these big, cumbersome programs which can buckle under their own weight over time. So again, as you can see, although I have a lot of specific thoughts around each of these areas, I do think there's a lot of good fodder for discussion here. I'm just not sure that this type of document is the right way to actually start those discussions, and I think, in the context into which it is arriving, it might actually in some ways be counterproductive. The biggest critique that I've seen is that one of the things noticeably absent from the document is even any hint of a commitment from OpenAI to programs or initiatives or policies that would cost them anything.

As Will Manidis writes, "The document proposes that policy makers might consider higher taxes on capital. OpenAI could commit to paying them. The document proposes a public wealth fund. OpenAI could seed it. The document proposes that data centers pay their own energy costs. OpenAI could accept voluntary rate separation today in every jurisdiction where it operates. The document proposes that frontier AI companies adopt public benefit governance. OpenAI could reinstate the profit caps it dismantled 6 months ago. None of these things are in the document. The only things in the document are a workshop, fellowships paid in the company's own product, and an email address that routes to no one." Alexander McCoy puts this sentiment a little more cynically, writing, "Good ideas, Sam. I know some members of Congress who can get right to work on writing the legislation. Some quick questions. How much equity in OpenAI should we plan on you contributing? Will it be your own equity, a dilution of existing shares, or is your idea that the federal government will buy shares using taxpayer dollars once you IPO?

Two, how many tens of millions of dollars of your own money are you pledging to commit to pass these policies you say are necessary? How are you going to counter the hundred million dollars of Leading the Future's AI political spending, which opposes these policies and is funded by your own investors and fellow executives? Three, how are you directing OpenAI chief of policy Chris Lehane to redirect OpenAI's massive lobbyist and public affairs resources to support this agenda, which they currently actively oppose?" Now, this is coming from someone who, in their Twitter bio, says that they are fighting the power of big artificial intelligence corporations. So, you need to view it through that lens, but I think this is a more prominent and common sentiment than you might think. Effectively, where I agree with OpenAI wholeheartedly is that we need to have these conversations. But what seems to go unrecognized is that, in the context of both the changes they say are coming and the grave state of public opinion on AI in America, 13-page policy PDFs with no actual commitment or direction ain't it. For now, that is going to do it for today's AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace.