I was reading an analysis from Oliver Wyman, and something clicked right away. AI is sprinting ahead while newsroom operations are jogging behind. Vendors show incredible demos. Everything looks impressive. But inside a newsroom, the demo is never the hard part. The hard part is everything around it.
That gap keeps growing. AI moves at lightning speed. Newsroom systems do not — and that tension creates real problems.

If you have ever run operations, you see it fast. People experiment on their own. Editors lose track of when AI was used. Legal teams raise governance questions no one has answers for. Everyone moves quickly without rules. It feels like innovation, but it is mostly noise.
That is why the Oliver Wyman perspective stood out. The real key to unlocking the future of AI is not more AI. It is more structure.
The role of good governance
Good governance is not bureaucracy. It keeps teams from going sideways.
Tools do not scale. Systems do.
The teams getting the most out of AI right now are not the ones with the flashiest models. They are the ones with simple rules everyone can follow. Rules such as:
- Who reviews what.
- When AI can be used.
- How usage is logged.
- Where human judgment fits.
Basic, but powerful.
These choices build confidence. They keep quality consistent. They give every department the same expectations instead of a dozen parallel experiments.
From an operator’s seat, governance is not a blocker. It is the thing that makes innovation repeatable.
The coming split
The next two to three years will split publishers into two groups.
Some publishers will slow down just enough to build real foundations. They will train early, align teams, and treat AI like a workflow redesign. Their progress will compound.
Others will sprint. They will launch tools before they are ready. They will skip the basics. They will introduce risk faster than they introduce support. Over time, AI will create more mess than momentum.
AI changes how work happens
AI is not a feature. It reshapes how ideas form, how assignments move, how editors collaborate, how analytics inform decisions, and how legal teams engage. Once you see how many teams this touches, it becomes obvious that AI adoption is not a product rollout. It is an operational redesign.
Those big responsible AI frameworks only matter if they show up in daily routines. They need to shape planning, training, and the small decisions teams make at every step.
Where publishers should start
You do not need a perfect governance model. You just need clarity. The most grounded teams focus on:
- Simple principles.
- Light human review steps.
- Early training.
- Regular cross-functional check-ins.
- Tracking the right signals, such as quality and speed.
None of this is glamorous, but it keeps the organisation steady while the technology accelerates.
The running thread
When you strip it down, it comes back to trust. AI only works when people trust the system that surrounds it. Trust does not come from capability. It comes from consistency and shared expectations.
Oliver Wyman captured the urgency. Operators feel the reality. This is where the actual work happens. This is where the future gets built. And this is where publishers will either unlock real value or stay stuck in perpetual “experimentation mode.”
The teams adding just enough structure to make AI stable will move faster with far less friction. The teams skipping that step will feel the drag every time they try to accelerate.
A little structure is not a limitation. It is a competitive advantage.
About Evan Young
Evan Young is co-founder and chief operating officer at Nota in Los Angeles, California, USA. Evan can be reached at evan@heynota.com.
