Consider the following two ads, which sit in the same ad set of a Campaign Budget Optimization campaign on Facebook. Based on the KPIs (key performance indicators) of these two ads, which do you think is the better ad?
On the surface, many would say that B is the better ad, because ad B has a 2.0 ROAS (return on ad spend) and a $50 CPA (cost per acquisition), beating ad A's 1.45 ROAS and $70 CPA. But is B really the better ad? Others would argue that A is the better ad because of a phenomenon that Facebook calls the Breakdown Effect.
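For readers less familiar with these metrics, here is a minimal sketch of the arithmetic behind them: ROAS is revenue divided by spend, and CPA is spend divided by the number of conversions. The revenue and conversion counts below are back-calculated from ad A's stated KPIs and the $58k spend mentioned later in the article, purely for illustration; ad B's spend is not given, so it is left as a placeholder.

```python
def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue generated per dollar spent."""
    return revenue / spend


def cpa(spend: float, conversions: int) -> float:
    """Cost per acquisition: dollars spent per conversion."""
    return spend / conversions


# Ad A: $58,000 in spend at 1.45 ROAS and a $70 CPA implies roughly
# $84,100 in revenue and about 828 purchases (illustrative back-calculation).
ad_a_spend = 58_000
ad_a_revenue = ad_a_spend * 1.45            # ~ $84,100
ad_a_conversions = round(ad_a_spend / 70)   # ~ 828

print(f"Ad A ROAS: {roas(ad_a_revenue, ad_a_spend):.2f}")          # 1.45
print(f"Ad A CPA:  ${cpa(ad_a_spend, ad_a_conversions):.2f}")      # ~ $70.05

# Ad B's spend is not stated in the article; plug in its actual spend,
# revenue, and conversions to reproduce the 2.0 ROAS / $50 CPA the same way.
```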
The Breakdown Effect is what happens when Facebook seemingly shifts impressions and spend toward the worse-performing ad. You might wonder: why would Facebook do this? It happens because Facebook's algorithm believes that if it kept putting spend into the better-performing ad, that ad's performance would drop and it would not generate as many results as the worse-performing ad. For example, if Facebook had put $58k into ad B, it believes the results would have been worse than what ad A achieved with $58k in spend.
The question now is: knowing all of this, which ad do you scale? Ad B has better KPIs, but Facebook favors ad A over ad B. The truth is, we believe neither ad is truly better than the other. There are valid arguments for ad A and equally valid arguments for ad B.
Looked at as a whole, both ads are valid candidates to scale. A better approach, however, is to create iterations of both ads and test how those iterations perform. Iterating and testing these new versions gives you results you can actually base a decision on, so you can determine which ad truly drives better results at scale.
The new question is: how do you create iterations of ad A and ad B, and how do you test those iterations to reach concrete findings?
We may have a solution for you at Savvy.