Are you ready for AI in 2026?

AIConfident's predictions for AI-related change in 2026

January 15, 2026
Blog article

Are you ready?

I was asked last week: 'What one thing has changed in the world of AI since this time last year?'

My answer? I’m more convinced than ever that a lot of things are going to change, some of them pretty quickly. Some will be positive. Some will be negative. All require the attention of leaders of every organisation.

Advances in AI technologies provide a huge opportunity to do more: reach more people, have greater impact and increase revenue. More capability is in everyone’s hands than most people or organisations realise.

But doing this won’t be enough. Every organisation will need to adapt to how others are using these technologies. Norms are being thrown out the window. Markets will shift. Needs of users and clients will change. For many leaders, this will also mean advocating for change, driving for better governance and controls where they are needed, and engaging citizens in choices about how AI technologies are and aren’t used in particular domains.

This challenge will need leaders who understand their own domain, and also understand the potential impact that AI technologies will have on society and the economy.

There's a lot to be ready for. Here are seven things to have on your radar as we go into 2026...

Are you ready for... proper organisational structure and workflow change?

You’ve got an AI policy in place, you’ve trained your team, and some people are dabbling. The next question is: how is work actually changing? Are you redesigning workflows, roles and decision points to match?

If Gen AI tools are just dropped onto existing processes, there's a good chance you’ll mostly get noise, uneven quality and frustration. If you get the operating model right, you can genuinely reach more people, deliver more impact, and free up capacity, without undermining trust.

To get going, why not pick one type of work that you often do (ideally a real, high-volume pain point) and run a mini sprint on it: map out what you do, decide where AI helps (and where it mustn’t), define human sign-off points, and set out what outcomes you will measure. Then give it a go.

McKinsey have set out 12 adoption and scaling practices. How many are you currently doing?

Are you ready for... governing AI Agents?

It is now easier than ever for employees to create ‘Agents’, with both Microsoft Copilot “Workflows” and Google Workspace Studio/Flows giving people the capability to plug agentic workflows into their inboxes. Other tools allow you to easily create agents with access to documents that are capable of acting on a trigger event (rather than human instigation) and able to complete actions with minimal or no human oversight.

In a recent article, the Information Commissioner's Office described four scenarios for agent adoption, the highest risk of which was Agents that are 'Just good enough to be everywhere', characterised by high adoption and use of agentic AI technologies despite limited capabilities. I think we are already seeing signs of this scenario, which opens new cyber security, data and privacy risks.

We'll write more on governing AI agents shortly, including how to keep a simple register and avoid 'shadow agents'. Make sure you don't miss it by signing up to our Newsletter now!

Are you ready for... deciding which AI tools you do/don’t want to deploy based on the value sets they are trained on?

Already we’re seeing clients thinking not just about the capability of the AI tools they bring into an organisation, but also about the values embedded in those AI models.

The recent non-consensual sexualised image editing/‘nudification’ by Grok has already got a number of our clients considering whether they should ban this, or other AI tools, based on their values. Meanwhile, Anthropic have published a constitution that sets out the values that their AI model Claude will use.

Why not take a look at this constitution, and consider what kind of values you want to see in the AI models that you allow into your organisation?

Are you ready for... losing control of how your information is consumed?

You spend valuable time, effort and money curating your website and your external-facing documents, guides and reports, thinking hard about how you want them to be consumed by your target audience. And now a Large Language Model gets between your information and your audience.

The outcome might be reduced website traffic, but it might also have more serious implications, for example for public health, as explored by the Guardian and the Patient Information Forum.

Now the audience member themselves has the potential to present your information in their preferred style, rather than yours - for example turning your carefully curated report into a podcast or narrated slide-deck via NotebookLM. “Format chosen by sender” becomes “format chosen by receiver”.

We're already working with our Design Impact Studio to explore how we optimise our website for a generative AI era. Why not start with a conversation with your website provider to understand how to make your information show up in a Gen AI world?

Are you ready for... the widespread removal of ‘normal’ friction?

Automations and agents can remove the little bits of friction that currently act as filters. If someone can set an agent to book gym classes the second they’re released, they’ll always beat the humans. The same logic applies to ticketing, appointments, grant portals and recruitment queues: anywhere “first come, first served” was quietly relying on human limitation.

Friction is often doing unseen fairness work. When it disappears, you risk creating “automation advantage” where people with the right tools (and confidence) consistently get first access — which can widen inequality and undermine trust.

It's time to identify the allocation processes you run that rely on friction (appointments, application windows, limited places). Ask: “What happens if applicants use automation?” Then put one mitigation in place (e.g. randomised windows, fair queuing, throttling, verification, or alternative access routes).
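To make one of those mitigations concrete: instead of first-come-first-served, you can collect every application made during a window and then allocate places by lottery, so an instant, automated application gains no edge over a human one. A minimal Python sketch, where the function name and structure are purely our own illustration:

```python
import random


def allocate_by_lottery(applicants, places, seed=None):
    """Allocate a limited number of places among everyone who applied
    during the window, by random draw rather than arrival order.

    Because all applications received within the window are pooled
    before the draw, a bot that applies in the first millisecond has
    exactly the same chance as a person who applies on the last day.
    (Illustrative sketch only, not a production allocation system.)
    """
    rng = random.Random(seed)  # seed allows an auditable, repeatable draw
    pool = list(applicants)
    rng.shuffle(pool)          # every applicant gets an equal chance
    return pool[:places]


# Example: 3 places, 6 applicants who applied at any point in the window
winners = allocate_by_lottery(["amy", "bot-1", "cal", "dee", "eli", "fay"], 3, seed=42)
```

Publishing the seed (or deriving it from something public) lets you demonstrate after the fact that the draw was fair, which matters when you are defending the process to disappointed applicants.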

Are you ready for... significant pushback?

We saw increasing divergence in client sessions during 2025: some people are excited, some are cautious, and some actively disengage from anything AI-related. Choosing not to use AI is one thing; choosing not to understand how others are using it is another.

Change that’s perceived as imposed (or morally suspect) leads to resistance, culture splits, and quiet workarounds. Purpose-driven organisations can’t afford internal fractures on something that is rapidly changing expectations from funders, partners, service users and the wider public.

If you're not already discussing people's perceptions and use of AI technologies in your teams, I always offer two conversation starters:

"What's the one word that describes how you feel about AI technologies?"

"How are you currently using AI technologies?"

Start here. You'll be surprised what you find out!

Are you ready for... exploring smaller, customised, local AI models?

“Doing AI” isn’t just rolling out Enterprise Copilot/Gemini/ChatGPT. General-purpose models are broad — but sometimes you need something excellent at one task, with tighter control of data, behaviour and oversight.

Smaller or domain-specific models can offer more control and potentially a better fit for sensitive contexts — but they require leaders to make active choices, and organisations to build enough capability to implement them well. Gartner have even predicted that by 2028, more than half of Gen AI deployments will be domain-specific: Gartner’s 2026 trends.

If your team are telling you that your general-purpose Gen AI tool isn't quite up to the task, perhaps it's time to start exploring smaller, more niche AI models.

--

This might sound like quite an overwhelming list. But these are all things you can do something about: things you can respond to effectively, maybe even take a leadership role on in your sector. Doing so takes leadership time and focus to get to grips with, and that's why AIConfident exists.

At AIConfident we help leaders foresee and manage a range of implications relating to AI technologies. The place where we really excel is in boardrooms and with leadership teams, helping you to identify the implications that mean the most for your organisation and setting out plans, strategies and governance that enable you to be on the front foot as this change unfolds.

We're not here to sell you any AI product, or even to tell you that you need to be using AI technologies all the time. Just to support you every step of the way as you make confident decisions about how to adopt, and adapt to, AI technologies.

Sound like what you need? Get in touch

Want to make sure you don't miss our next piece on AI Agents? Sign up for our Newsletter to get all our content straight in your inbox!

Image: Suraj Rai & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
