Why you're getting this: You've crossed paths with CE Digital somewhere along the way. Every week we share what we're actually seeing inside high-spend accounts, including how we're using AI and our internal data tool, AdSignl, to analyze performance at scale and improve results for our clients. Short, practical, no filler.
Not for you? No hard feelings. Unsubscribe anytime.
Alright, let's get into it.
This week we're getting into: why hiring AI to run your account will fail you in the exact moment you need it; an email-list bug we caught on a $1M/month account that almost nobody is talking about; what taking an AI UGC course actually taught us about the work involved; the worst tracking month on Meta in 13 months and the two changes pulling our accounts out of it; and why your in-house "expert" reads the same dashboard as a senior buyer and reaches different conclusions.
Five sections. Should be a 6-minute read.
01. Why AI can't run your ads (yet).
Sure, Claude can run your ads when everything is easy. Scale budget 5–20% every 3–5 days. Simple creative ops. Anybody can do it.
The moment the algorithm shifts or the model breaks, AI has no idea what to do. There are two reasons this fails. One human, one technical. Most people running AI ad agents have only thought through the human one.
The human reason is the obvious one. You're not paying a media buyer to push buttons when things are stable. You're paying them for weeks like this April. To diagnose whether the problem is the algorithm, the creative, the consent banner on the Shopify site, or last week's iOS update. To call their Meta rep. To recognize this looks like March 2024 before any dashboard could tell them.
The technical reason is more interesting. Every AI conversation lives inside a context window. Today the ceiling is around 1M tokens on Claude Opus 4.7. Even at 10M, the problem doesn't go away. The model doesn't weight every token equally. What it reads first is anchored, what's at the end is what it just saw, and the middle gets compressed and quietly forgotten. Now imagine running a high-spend Meta account through that. A year of campaign data, creative-level data, daily exports, demo splits, placement breakdowns. You're past a million tokens before you've fed it last week's results.
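The back-of-envelope math makes the point. Every number below is an assumption for illustration, not a measurement from a real export, but the order of magnitude is the part that matters:

```python
# Rough token estimate for a year of daily account exports.
# All four inputs are illustrative assumptions.
DAYS = 365            # a year of daily exports
ADS_PER_DAY = 40      # active ads reporting each day
BREAKDOWNS = 6        # demo splits, placements, etc.
TOKENS_PER_ROW = 15   # a CSV row of metrics tokenizes to roughly this

total_tokens = DAYS * ADS_PER_DAY * BREAKDOWNS * TOKENS_PER_ROW
print(f"{total_tokens:,} tokens")  # prints "1,314,000 tokens"
```

A mid-sized account blows past a 1M-token window on raw exports alone, before you add creative context, past learnings, or this week's question.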
People assume that if they tell the AI to only make changes every 7 days, it'll behave like a thoughtful operator. It won't. Every minor fluctuation looks like an anomaly to a system with no real baseline. So it overcorrects on day 7 the same way a panicked junior buyer would on day 1. Same day-trading behavior. Slower clock.
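You can sketch that failure mode in a few lines. The ROAS numbers below are hypothetical: a volatile account that happened to have a calm week. Score today's dip against a 7-day memory and it looks like a multi-sigma anomaly; score it against the full history and it's a Tuesday.

```python
import statistics

def z(value, history):
    """Distance of `value` from the mean of `history`, in standard deviations."""
    return abs(value - statistics.mean(history)) / statistics.stdev(history)

# Hypothetical daily ROAS: noisy all year, unusually calm in the last week.
roas = [3.0, 2.6, 3.4, 2.8, 3.5, 2.5, 3.2, 2.7, 3.3, 2.6,
        3.4, 2.9, 3.1, 3.0, 3.0, 2.9, 3.1, 3.0, 3.0, 2.6]
today = roas[-1]  # an ordinary bad day by this account's standards

z_short = z(today, roas[-8:-1])  # 7-day memory: ~6 sigma, screams anomaly
z_long  = z(today, roas[:-1])    # full-history baseline: ~1.4 sigma, normal noise
```

Same data point, opposite conclusions. The only difference is how much baseline the system is allowed to remember.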
We use AI heavily across our stack. For creative analysis. Weekly account analysis. Pattern matching. It's where AdSignl earns its keep. But the decision about whether to shorten attribution windows, redeploy a new customer event, or hold structure in a fog month is a judgment call. You need a human in the loop.
02. Your consent banner is silently deleting emails from your list.
This one came up on a call last week, and we're flagging it because we haven't seen anyone else talking about it.
One of our clients spending over $1M/month on Meta saw their email capture rate drop more than 40% last month. It wasn't the creative. Wasn't the popup. Wasn't a Klaviyo bug. It was the consent banner.
Here's the sequence: a user clicks your Meta ad, lands on the site, your email popup fires, they submit their email for the discount, they close it, they keep browsing. Then your cookie consent banner appears, they hit reject, and the email they just gave you gets automatically removed from your list. As if they never opted in. The user has no idea. They think they're getting their discount. You think you have a new subscriber. The system silently deleted the record on the way out.
The downstream damage is bigger than the email list. These users are also invisible to click-based attribution. Your CAPI signal is weaker. Your retargeting pools are missing a real cohort of high-intent buyers who literally raised their hand. You paid Meta to acquire them. They told you they wanted to hear from you. The consent banner threw it all away.
Audit your settings this week. Check whether rejecting cookies deletes email records that were captured before the consent action. Check the order your popups fire. Reconcile your unique email submissions in Klaviyo against your actual list. The delta is what you're losing.
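The reconciliation step is just a diff of two exports. A minimal sketch, assuming you've pulled unique popup submissions and your live list as rows with an email column (the field names and sample addresses here are hypothetical):

```python
def email_delta(popup_rows, list_rows, email_col="email"):
    """Emails in the popup export that never made it onto the live list."""
    def norm(rows):
        return {r[email_col].strip().lower() for r in rows if r.get(email_col)}
    return norm(popup_rows) - norm(list_rows)

# Hypothetical rows, e.g. from csv.DictReader over the two exports.
popup_export = [{"email": "a@shop.com"}, {"email": "B@shop.com"}, {"email": "c@shop.com"}]
list_export  = [{"email": "a@shop.com"}, {"email": "b@shop.com"}]

missing = email_delta(popup_export, list_export)  # the silently deleted cohort
```

If that set isn't empty, pull the timestamps on those submissions and check them against consent-banner rejections. That's your smoking gun.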
If you're seeing this on your end too, hit reply. We want to know how widespread it is.
03. AI creative is still human creative.
I took a course on AI UGC last month. By module three the punchline was obvious: this is not a button you press.
It's a five-to-ten step process to make a single AI creator. Generate the character. Build scene structures in Nano Banana or Kling. Write the script. Pick the voice in ElevenLabs. Render. Iterate. Sure, you don't have to pay an actor or source the talent. But it's still a lot of work. You're still directing the scenes, picking the voices, doing the revisions.
The internet sells AI UGC like you type a sentence and a finished video pops out. It doesn't work that way. You can't will it into existence. You piece it together, scene by scene, voice by voice, cut by cut.
This is also why we're building out a new service inside CE Digital: a creator partner manager. We source the creators, brief them, handle the contract on behalf of the brand, manage all the communication, and edit the raw content. Most brands don't have the horsepower to run a UGC pipeline at the volume their accounts need. The ones who try usually burn out within two months.
AI tools sit inside that pipeline, not on top of it. They speed up the steps where they're good: script variations, scene generation, voice tuning, edit assists. They don't replace the human directing the work. They just change what the human's hands are on.
You're still the director. You're just directing AI instead of humans.
04. April 2026 has been the worst tracking month on Meta in 13 months.
If your tracking has felt broken in a way you can't diagnose, you're not crazy and you're not alone. Same symptoms as March 2024: the algorithm stops delivering the way it had been, the normal levers stop working, and pulling spend back doesn't snap things into place the way it usually does.
It's signal, not performance. Recent iOS updates have degraded the data flowing into Meta and the platform is struggling to optimize against it. The actual customer demand on most of our accounts hasn't moved much. What's broken is Meta's ability to see the people who would have bought. Two changes are doing most of the heavy lifting on our accounts right now, and they're the same playbook that pulled us out of March 2024.
First, shorten your attribution window. Move new test campaigns from 7-day click, 1-day view to purely 1-day click. A 7-day window gives degraded signal more surface area to leak across. Every day you wait to attribute is another day of corrupted data getting pulled into the optimizer. Tightening the window forces Meta to optimize against cleaner, more recent signal and pushes delivery toward more primed audiences. We're seeing cost per new visit down, new visit percentage up, and CPMs cleaner across the campaigns we've moved over.
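If you make this change through the Marketing API rather than Ads Manager, the lever is the ad set's attribution_spec field. A hedged sketch of the request body, based on our reading of the docs; verify the field names against the API version you're on before shipping anything:

```python
import json

def one_day_click_payload(access_token):
    """Body for POST https://graph.facebook.com/v19.0/<AD_SET_ID>.

    Sets pure 1-day click attribution: a single CLICK_THROUGH entry
    and no view-through entry at all.
    """
    return {
        "attribution_spec": json.dumps(
            [{"event_type": "CLICK_THROUGH", "window_days": 1}]
        ),
        "access_token": access_token,
    }
```

Apply it to new test campaigns rather than retrofitting winners. Changing attribution on a running ad set resets more learning than the cleaner signal buys back.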
Second, relaunch a new customer purchase event. Build a custom conversion that fires only for new customers, exclude existing purchasers, and run a campaign optimizing for that event. It pulls Meta out of the same shrinking pool of in-market buyers it keeps recycling and chases incrementally better traffic. We've been running this for over a year. It's the single most reliable defense we have when standard purchase events start leaking 30–40% against existing customers.
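Server-side, the gate can be as simple as checking the purchaser against your prior-buyer list before sending the event. A minimal sketch; the event name and payload shape are illustrative, not the exact Conversions API schema:

```python
def new_customer_purchase(order, prior_buyer_emails):
    """Build the custom event only when the purchaser is genuinely new.

    `prior_buyer_emails` is a lowercase set of every existing customer.
    Event name and field layout are illustrative placeholders.
    """
    email = order["email"].strip().lower()
    if email in prior_buyer_emails:
        return None  # existing customer: leave it to the standard Purchase event
    return {
        "event_name": "NewCustomerPurchase",
        "event_time": order["timestamp"],
        "custom_data": {"value": order["value"], "currency": order["currency"]},
    }
```

Build the custom conversion on that event, optimize the campaign for it, and the exclusion happens at the signal level instead of leaking spend against people who already bought.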
If your April tracking looked like March 2024, run the March 2024 playbook.
05. Don't overcorrect. Fly the plane.
The hardest part of a month like this isn't finding the fix. It's sitting still while the dashboard is screaming at you to do something.
You're not optimizing against a static system. Meta is unstable on the backend right now, and Meta itself doesn't fully know what's going on. So when you stack new tests, panic budget pulls, and structural changes on top of that, all you produce is noise. The data gets harder to read. You make worse decisions on the next change. The cycle compounds.
You can't control the algorithm, the iOS updates, the auction behavior, or what Meta is testing on the backend. What you can control is whether you keep flying the plane through the fog. Hold your structure. Trust your creative pipeline. Tighten budgets where you have to. Let the dust settle. Then push spend back in.
This is also where account loading matters more than people think. Most operators who panic in a month like this aren't making a skill mistake. They're making a context mistake. If you've only ever managed one account, every dip looks like the worst dip you've ever seen. If you've only been on the account for three months, you have no baseline for what a real anomaly looks like versus a Tuesday.
This is why we don't let our buyers carry stacks of accounts. We keep the load to a handful, and we keep the same senior buyer on an account for the entire lifespan of the relationship. The value of that buyer isn't the SOPs they ran on day one. It's the historical context they accumulate over months of watching the account behave under different conditions. That context is what lets them sit still when the dashboard panics.
That's it for this week.
If your April tracking has been ugly, run the two-change playbook in section 04 and resist the urge to do anything else for a few days. Let the dust settle, then push back in.
If you want to see how we track signal degradation, account anomalies, and creative pipeline health across all our high-spend accounts in one place, get early access to AdSignl.
The CE Digital Team
