Most Meta accounts test creatives the same way: throw five or six variants in, let the algorithm "learn," and hope a winner emerges. The problem is you're changing too many things at once. When something wins, you don't know if it was the hook, the visual, the CTA, or the audience. So you can't repeat the win. You're just burning budget on random variation.

One variable at a time

We run creative tests where only one element changes per test. Headline tests: same visual, same audience, different headline. Visual tests: same copy, same audience, different image or video. That way when a variant wins, you know why. You can then roll that learning into the next batch — e.g. "short hooks outperform long ones, so all new creatives use short hooks." Over time you build a playbook instead of a pile of one-off tests.
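The discipline is easy to state and easy to break, so it helps to make it mechanical. Here's a minimal sketch of that rule as a validator: the field names (hook, visual, cta, audience) are illustrative placeholders, not Meta API fields, and the variant dicts stand in for whatever your team uses to spec a creative.

```python
# Illustrative control creative -- field names are hypothetical, not Meta's.
CONTROL = {"hook": "short", "visual": "ugc", "cta": "Shop now", "audience": "broad"}

def diff_fields(control, variant):
    """Return the fields where a variant differs from the control."""
    return [k for k in control if variant.get(k) != control[k]]

def validate_test(control, variants):
    """A clean test changes exactly one field per variant.
    More than one, and a win can't be attributed to anything."""
    for v in variants:
        changed = diff_fields(control, v)
        if len(changed) != 1:
            raise ValueError(
                f"Variant changes {changed or 'nothing'}; expected exactly one field"
            )
    return True

# A valid hook test: only the hook changes across variants.
hook_test = [
    {**CONTROL, "hook": "long"},
    {**CONTROL, "hook": "question"},
]
validate_test(CONTROL, hook_test)  # passes

# This one would raise -- it changes both the hook and the CTA,
# so a win would teach you nothing repeatable.
# validate_test(CONTROL, [{**CONTROL, "hook": "long", "cta": "Learn more"}])
```

A check like this is trivial, but running every planned batch through it before launch is exactly what turns "we test creatives" into a repeatable process.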

Structure so the algo can actually learn

Meta needs volume to optimise. So we don't split one ad set into 12 creatives with £5 each. We run 2–4 creatives per ad set, give each ad set enough budget to get out of learning (usually 50+ conversions per week per ad set), and kill losers fast. Winners get more budget; new tests get their own ad sets so they're not competing with scaled creatives. That keeps the delivery system stable and stops good creatives from getting throttled by bad ones in the same set.
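The budget floor falls out of simple arithmetic: if learning exit needs roughly 50 conversions a week, your expected cost per acquisition sets the minimum weekly spend per ad set. A quick sketch (the 50-conversion figure is the rule of thumb above, not a hard platform guarantee):

```python
def min_weekly_budget(expected_cpa, conversions_needed=50):
    """Rough weekly budget floor for one ad set to clear the learning
    phase, assuming ~50 conversions/week are needed to exit it.
    expected_cpa is in your account currency."""
    return expected_cpa * conversions_needed

# At a £20 CPA, one ad set needs roughly £1,000/week to clear learning.
min_weekly_budget(20)

# Which is why 12 creatives at £5/day each goes nowhere:
# £5 * 7 days * 12 creatives = £420/week -- well under the floor.
5 * 7 * 12
```

Running this backwards is often the useful direction: given the budget you actually have, it tells you how many ad sets you can afford to keep in learning at once.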

What to test first

Hook or opening frame (first 1–3 seconds) usually has the biggest impact on CTR and cost per result. Then the main visual (lifestyle vs product, UGC vs polished). Then the CTA and the offer framing. We've seen 20–40% improvements in CPA just from systematically testing hooks and then rolling the winning pattern into everything new. The "40% cut in wasted spend" in the title isn't a guarantee — it's the kind of result we've seen when teams stop random testing and start following a framework. Your mileage will vary, but the direction is right.
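Before rolling a "winning" hook pattern into everything new, it's worth checking the difference is real and not noise. A standard two-proportion z-test on CTR is one simple way to do that — this is generic statistics, not anything Meta exposes, and the click/impression numbers below are made up for illustration:

```python
import math

def two_proportion_z(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test on CTR between variant A and variant B.
    Returns the z-score; |z| > 1.96 corresponds to ~95% confidence
    that the CTR difference is real."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    return (p_b - p_a) / se

# Hypothetical hook test: 1.20% CTR vs 1.65% CTR on 10k impressions each.
z = two_proportion_z(120, 10_000, 165, 10_000)
significant = abs(z) > 1.96
```

If the result isn't significant, the honest move is to keep the test running or call it a draw — writing "short hooks win" into the playbook off a noisy sample just reintroduces the randomness the framework exists to remove.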