Thanks for collaborating! It was great digging through the research for this one.
This is awesome, I'm so glad I found this
Amazing! Thanks for sharing
Amazing summary!
Great insights. Simple strategies I can use today.
Excellent article! I think what surprised me was the customer logos, but it goes to show that when you focus on the benefits rather than the name-drops, it pays off. I've always found comparison charts confusing, so I can see why that wouldn't have worked. Also, I totally agree that you should keep your messaging super clear and simple (it usually works for me).
Yeah, the problem with logos is similar to comparison content: it's hard to validate that they're legit. Logos rarely even link out to customer stories, so there tend to be better ways to build trust. Also, people put the logos at the top, which I think is backwards. You have to build confidence in the core value prop first, then use customer proof to get people over the line.
Exactly! I feel like they're trying to impress their customers with the logos, but it does the opposite. It's not enough to say that these companies use your stuff. The focus should be on what your customer will get out of your stuff.
This was very interesting & valuable. Thanks for sharing
Super interesting! Thanks!
I was really surprised by this newsletter. It truly reaffirmed the need to constantly question common beliefs while viewing reality objectively.
Thank you for always sending such wonderful emails!
100%. When I joined DoWhatWorks and started to get into our database, I was blown away by stuff like customer logos not working (despite a huge portion of brands using them). Just because something is "standard practice" doesn't mean it's the optimized layout!
Awesome email and definitely taking away the ‘quality over quantity’ point re testing.
My only observation would be that these are the particular results for THESE brands on these particular tests, right? They’re not necessarily universal.
This is an eye opener! Wow, thank you both for sharing these gems with us all.
This is pure gold! Love how you’ve unpacked these tests with real-world examples. It's wild to see how clarity and simplicity can flip the script on what we thought we knew about conversions. Can't wait to dive deeper into those insights!
Thanks for the research.
As a consumer/user, most of the test results make sense to me. As a marketer, though, I have seen cross-out pricing (temp sale only) work. Will be interesting to test it out.
Industry could be a factor here, too. We saw cross-out pricing losing pretty handily in B2B SaaS, B2C SaaS, streaming, etc., but we don't have a lot of data on how it performs in eCommerce, which could have some differences. Anecdotally, however, when I ran an e-commerce site (N of one here), it did not improve conversions (I was doing $499 crossed out to $297).
The tests are fascinating, but I think that a few of the takeaways aren't as conclusive as they seem, or at least the specific illustrations chosen here don't necessarily support these claims.
2. The test shows that the carousel outperforms a messy static collage of 6 images. Does this mean that "carousels consistently outperform static images", even if we compare it to a single, high-quality image with no navigation controls and no distracting animation? I wouldn't bet on it. Especially if the purported explanation is cognitive load (what is this based on? Speaking as a cognitive psychologist here: cognitive load is pretty difficult to measure even in a lab setting, let alone from the reported results of an A/B test).
6. Asana is a super well-known name in the industry; they don't need customer logos, they have nothing to prove. Run the same test for a smaller fish and see if it yields the same results.
7. Strikethrough pricing breaks trust? It sure does, if you replace the prices with 0, possibly because it's perceived as "too good to be true" and an obvious scam. Try measuring it with a reduction of 10%-50%; that's a whole different story.
Thanks for the thoughts, Vitaly. For clarity, all of these tests are aggregate trends, and these individual examples are simply part of the dataset. For each of these claims we look at anywhere from dozens to hundreds of tests. DoWhatWorks has a patented technology that detects page variants, then sees visually where there are element changes, then has a research team add them to the database after 3+ months of a winning version being kept. So carousels, customer logos, and strikethrough pricing we see losing at a high probability (we use BetScores that look at both variable isolation of the test and test volume) for large and small brands alike.
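For anyone curious how that kind of aggregation could work in practice, here is a minimal, hypothetical sketch in Python. It is not DoWhatWorks' actual BetScore (that formula is proprietary and isn't described above); the ObservedTest record, the half-weight for non-isolated tests, and the 30-test volume threshold are made-up illustrations of the ideas in the comment: counting a test only once its winner has been kept for 3+ months, weighting by variable isolation, and discounting low test volume.

```python
from dataclasses import dataclass

@dataclass
class ObservedTest:
    element: str        # e.g. "carousel", "customer_logos", "strikethrough_pricing"
    variant_won: bool   # did the version containing the element get kept as the winner?
    months_kept: float  # how long the winning version stayed live after the test
    isolated: bool      # was this element the only change between the two versions?

def toy_confidence(tests: list[ObservedTest], element: str) -> float:
    """Hypothetical stand-in for a BetScore-style confidence that an element wins.

    Only tests whose winner was kept for 3+ months count; cleanly isolated tests
    get full weight, multi-change tests get half weight; the result is discounted
    when the number of qualifying tests is small.
    """
    qualifying = [t for t in tests if t.element == element and t.months_kept >= 3]
    if not qualifying:
        return 0.0
    weights = [1.0 if t.isolated else 0.5 for t in qualifying]
    weighted_wins = sum(w for t, w in zip(qualifying, weights) if t.variant_won)
    win_rate = weighted_wins / sum(weights)
    volume_discount = min(1.0, len(qualifying) / 30)  # arbitrary threshold, purely illustrative
    return win_rate * volume_discount
```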
Thanks for the clarification and for the awesome writeup Casey!
Just a random thought about A/B testing: the solution should be in the creation stage. As you create the invite, the AI should guide you with an A/B test "running in the background", prompting you in real time to make adjustments. Maybe I am missing something in terms of this feature, but running tests independently seems OLD.
This is what we need in the world: serious people who tell the truth to us (the public).
Congratulations on the top-tier content.