Automation

5 February 2026

Become an AI-powered CMO: your questions answered

Automations, hiring, measurement and tools – your AI questions, answered.

This January, I ran my own webinars for the first time. Your interest blew me away. 180 CMOs registered. 90 attended. I feel genuinely blessed for that attention and trust.

If you missed the webinar, you can get the replay here.

You asked excellent questions that deserve detailed answers. So here they are.

Automation and agents

Are you able to share the specific steps and tools to make the automations?

I currently have two case studies that the clients have given me permission to share.

These are productivity wins. An automation takes a human process and re-engineers it with AI.

The starting point is simple: think about how a social media manager or podcast producer would do it manually. Then engineer it with AI.

Let me walk you through the Instagram example.

The Instagram carousel automation was built in Make.com with two sub-agents:

Sub-agent 1: Daily Populate – Uses Perplexity to grab daily news articles on your chosen topic and populates them into a Google Sheet (which acts as the control interface for the automation).

Sub-agent 2: Action Buttons – When you select an article and push a button, it generates the caption and design copy for each carousel slide.

Tools used: Make.com, Perplexity, ChatGPT, Placid App.
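If it helps to see the logic outside Make.com, here's a minimal Python sketch of what the Daily Populate sub-agent does conceptually. The real build is a Make.com scenario, not code; the topic, the sheet name and the JSON shape below are placeholders I've assumed for illustration.

```python
# Conceptual sketch of the "Daily Populate" sub-agent, re-expressed in
# Python. The real build runs as a Make.com scenario; this only shows
# the data flow. Assumes a Perplexity API key and gspread credentials.
import json
import os

import gspread
import requests

TOPIC = "AI in marketing"  # placeholder topic


def fetch_daily_articles(topic: str) -> list[dict]:
    """Ask Perplexity for today's news on the topic, as structured JSON."""
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
        json={
            "model": "sonar",
            "messages": [{
                "role": "user",
                "content": (
                    f"List today's top 5 news articles about {topic}. "
                    "Return only a JSON array of objects with keys "
                    "'title', 'url' and 'summary'."
                ),
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Simplification: assumes the model returns clean JSON.
    return json.loads(resp.json()["choices"][0]["message"]["content"])


def populate_sheet(articles: list[dict]) -> None:
    """Append each article as a row; the Sheet is the control interface."""
    sheet = gspread.service_account().open("Carousel Control").sheet1
    for article in articles:
        # The empty status column is what a human later fills in to
        # trigger the second sub-agent.
        sheet.append_row([article["title"], article["url"],
                          article["summary"], ""])


if __name__ == "__main__":
    populate_sheet(fetch_daily_articles(TOPIC))
```

The Action Buttons sub-agent is the same idea in reverse: it watches for the row a human has selected, then sends that article to ChatGPT to generate the caption and slide copy.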

For design generation, there are two options: automated design via Placid App, or CSV bulk upload into Canva templates. If I were building this today, I’d likely use Gemini Nano Banana Pro for image generation instead, or explore Pletor for design consistency.
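For the Canva route, the mechanics are simple: flatten each carousel's slide copy into one CSV row per design, then bulk-upload the file into your template. A minimal sketch, with made-up column names (your own Canva template defines the real ones):

```python
# Minimal sketch: write carousel copy to a CSV for bulk upload into a
# Canva template. The column names are placeholders; they must match
# the text fields defined in your own template.
import csv

carousels = [
    {"slide_1": "Hook: AI agents, explained",
     "slide_2": "The problem with manual workflows",
     "slide_3": "How an agent fixes it",
     "cta": "Follow for more"},
]

with open("carousels.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["slide_1", "slide_2", "slide_3", "cta"])
    writer.writeheader()
    writer.writerows(carousels)
```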


Where in the automation is the human input? Is it only at the end of the workflow?

This is a really deep question. I’d actually reframe it: where does the automation genuinely need human input? And where is human input offered by design to give people the perception of control?

Here’s the truth: it’s possible to create more autonomous agents. But lots of people, particularly marketing people, are control freaks. They want control over what agents produce.

So for most agents I design, I build in multiple layers of control: agents that allow humans to select, approve or give feedback. If you think about it, this mimics exactly how a marketing team already works. If you’re a Head of Brand, your content manager suggests campaigns and you decide which are worth pursuing. The same logic applies to automation.

For both case studies I presented, human input happens at multiple points throughout the workflow:

  • Selection stage: You decide which articles are worth turning into content. The automation surfaces options, but you choose which to pursue.

  • Review stage: After AI generates the copy, you check it before triggering the next step. Only when you approve does the automation continue.

  • Design choice: You decide whether to use the automated Placid design or go to Canva for more control.

Having a human in the loop is essential. The human brings taste, high standards and discernment.

We’re also there to iterate and improve the agent: “How should I rewrite my agent to get the quality I want?”
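To make those control layers concrete, here's a small, hypothetical sketch of the gate pattern. None of this is how Make.com implements it: the stage names are invented, and the `input()` prompt simply stands in for the Sheet columns and buttons that act as the real control surface.

```python
# Hypothetical approval-gate pattern: the pipeline pauses at each human
# checkpoint and only continues once a person explicitly approves.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Stage:
    name: str
    run: Callable[[str], str]  # takes work-in-progress, returns next draft
    needs_approval: bool       # True = a human signs off before continuing


def human_approves(stage_name: str, draft: str) -> bool:
    """Stand-in for the real control surface (a Sheet column, a button)."""
    print(f"\n--- {stage_name} ---\n{draft}\n")
    return input("Approve? [y/N] ").strip().lower() == "y"


def run_pipeline(stages: list[Stage], work: str) -> str | None:
    for stage in stages:
        work = stage.run(work)
        if stage.needs_approval and not human_approves(stage.name, work):
            print(f"Stopped at '{stage.name}'; nothing downstream runs.")
            return None
    return work


# Invented stages mirroring the carousel workflow: select, write, design.
pipeline = [
    Stage("Selection", lambda _: "article: 'AI agents in 2026'", True),
    Stage("Copywriting", lambda a: f"caption drafted for {a}", True),
    Stage("Design", lambda c: f"carousel slides rendered from: {c}", True),
]

if __name__ == "__main__":
    run_pipeline(pipeline, "")
```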

 

Image generation

What tools did you use for the images you showed?

Different tools for different purposes:

  • AI avatar photos (static images for LinkedIn): Higgsfield

  • AI video avatar (for courses): HeyGen for the video avatar, plus ElevenLabs for voice cloning

  • Carousel design automation: Placid App (original build)

  • Image generation from briefs: Gemini Nano Banana Pro

For AI avatars, input quality matters enormously. Professional photography produces a much better avatar. Ideally, your input photos should be from the same period of your life so your look is consistent.

 

Hiring and operating model

You talked about the need to have a tech partner. Do you have tips on the profile of the talent we should consider?

I’ve hired people from multiple sources:

  • I’ve worked with some Skool/LinkedIn/YouTube AI influencers. Their output is good but not great. They need a very detailed agent specification from you and will implement it to the letter, with minimal enhancement beyond that.

  • I’ve worked with people who are CTO or CRO-level, with deep AI expertise. Those people rock.

After interviewing around 30 engineers, here’s what I’ve learned:

Look for senior experience. My technical partners are CTO-level with 20+ years of experience. We operate at a similar level, which means we can collaborate effectively on complex problems.

The division of labour matters. My strength is in building specifications, designing workflows, identifying what can be automated and documenting requirements (essentially acting as the product manager). The engineer’s strength is in the actual build. When you combine tech expertise with marketing experience, the ideas you can come up with are on another level.

My foundational advice: If you know how to write an agent specification, go for a technical partner. If you don’t, hire an agency that will do both for you:

  • Work as product manager and help you clarify your requirements

  • Then build it

If you’re stuck, contact me. I’m happy to share my technical partners who can help.

 

How can we hire a tech freelancer/partner? I’ve looked on Upwork and the quality has been variable.

Networking has been my strongest channel. I’ve pulled people from the Skool community, emailed people whose YouTube videos I liked and connected with LinkedIn influencers. The majority of them offer agent builds as a service.

My approach:

  • Interview extensively. I spoke with around 30 engineers before finding the right partners. Communication matters to me as much as technical skill.

  • Ask for proof. Get your technical people to show you projects they’ve built. You’ll quickly see how they approach problems.

  • Prioritise seniority over cost. Senior experience makes a significant difference in output quality and problem-solving ability.

  • Look for complementary skills. The most learning comes from working with tech people who bring a different perspective. The fusion of tech and marketing expertise produces the best results.

  • Start with a small project. Test the working relationship before committing to larger builds.

The reality is: you’ll make mistakes. Allow yourself to. Every mistake is a learning.

 

Does the tech partner build the more complex automations? Or do you do it yourself?

At this point, I don't build automations at all. I learned early that I could get agents in Make.com to deliver an 8-out-of-10 experience, but not a 10: formatting was off, and fixing those issues required code. Although these tools bill themselves as no-code or low-code, you do need code for production-quality output.

My role is to be the brain, designer and creator of the agent. I identify what can be automated, design the workflow with AI and document it extensively. I also test any automation or agent step-by-step, manually, multiple times to fine-tune the prompts. Then I hand it to an engineer to build.

For example, the Competitor Ad Cloning agent is built in n8n and uses Gemini, ChatGPT, Foreplay and Airtable. It has multiple sub-agents for each flow. That level of complexity requires engineering expertise.

 

Measurement and impact

In your experience, how much of the benefit comes from: a) enhanced quality, b) incremental activity, c) time saving, d) other?

AI generates value across three dimensions:

  • Productivity wins: Same workflow, but AI handles some steps. You draft a blog post, AI checks grammar and fine-tunes it.

  • Operational transformation: Completely re-engineer the workflow. Build a knowledge base of your content, then have AI write from it.

  • Innovation: New products, new formats, new experiences. An AI avatar answering customer questions. An autonomous agent that analyses SEO results, decides on keywords, writes articles and auto-publishes.

I’m not sure AI delivers enhanced quality compared to humans. I think AI delivers different things that humans didn’t even think of, because our brains are incapable of processing information the same way.

The biggest value doesn’t come from “productivity wins” alone. The real value comes from completely redesigning workflows with an AI-first mindset. Don’t get stuck in efficiency. Go further.

 

How do you measure AI’s impact specifically? That can be a bit of a black box.

I measure across three levels:

  • Leading indicators: Adoption (percentage of team members using AI weekly) and Experiments (number of AI experiments running).

  • Efficiency: Time saved for employees or teams. Productivity gains (higher output per unit of input).

  • Business outcomes: Revenue, ROI/ROAS, CAC, LTV, traffic, conversion rate, leads.

For some projects, linking to business outcomes is straightforward. If you launch a podcast, after three months you can see how many leads it generated. If you implement GEO, the impact is measurable. For others it’s more challenging, but I try to anchor everything back to business outcomes. See the Podcast Automation case study for a worked example.
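To show what anchoring back to outcomes can look like, here's a tiny back-of-the-envelope calculation. Every figure is invented for illustration, not data from my projects:

```python
# Illustrative only: hypothetical numbers showing how an efficiency win
# (hours saved) can be anchored to a business-outcome figure (ROI).
hours_saved_per_week = 6       # e.g. podcast clips no longer cut by hand
loaded_hourly_cost = 55.0      # fully loaded cost of that person's time
tool_cost_per_month = 120.0    # automation platform + API usage

monthly_saving = hours_saved_per_week * 4.33 * loaded_hourly_cost  # ~4.33 weeks/month
roi = (monthly_saving - tool_cost_per_month) / tool_cost_per_month

print(f"Monthly saving: ${monthly_saving:,.0f}")   # ≈ $1,429
print(f"Monthly ROI:    {roi:.1f}x")               # ≈ 10.9x
```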

 

How much of your working time do you spend on “playing with AI”?

Around 20–30 hours a week on learning and trying new things, separate from teaching, creating roadmaps and developing agents.

Lately, I’ve been more anchored in use cases. Saying “I’m learning AI” is like saying “I’m learning everything about digital.” Instead, I focus on a particular area that excites me and learn it properly. I try things, I try tools, I work to master that use case. Currently, I’m deep in GEO.

The pace is intense. AI capabilities double roughly every seven months. But it’s incredibly rewarding. The question to ask yourself is: how do you continue learning?

 

AI tools

What is NotebookLM used for?

I’d use NotebookLM if I needed to upskill myself in a new area quickly. It synthesises information beautifully and can create presentations, documents and podcasts based on that information.

I saw a brilliant use case: a very senior board member prepared for a board interview by doing deep research, combining all the information in a NotebookLM knowledge base and then using NotebookLM as their learning and thinking partner.


What is manus.im good for?

Some AI experts love it for its research and agentic capabilities. I personally haven’t used it. I’m very happy with Perplexity for research, and it’s my go-to tool.

 

Over to you

These were the questions that came up most during the webinar. If yours wasn’t covered, or if reading this has sparked new ones, I’d love to hear from you.

Actually useful AI. In your inbox.

Monthly newsletter with use cases, playbooks, case studies from top companies and invites to live webinars and events.