- Posted by Timur Taepov
- On April 18, 2015
- a/b testing
Welcome to a new episode of the App Marketing – Growth Hacking Podcast. This time I would like to cover such a live issue as the problems of A/B testing in mobile app development. It won't come as a surprise that this is a rather controversial subject. There are lots of uncertain sides to this method, so let me be more specific about it. For this episode I've used materials from the mobiledevmemo website.
Please pay attention, and let's start!
A common criticism of the constant regimen of feature and aesthetics tests that most apps go through is that the creative component of the design process usually gets postponed, even though app development is data-oriented by nature.
From a statistical and analytical point of view, A/B testing is problematic for other reasons. For example, in free-to-play games A/B tests are difficult to verify, which leads to long testing periods. These tests often can't be applied universally, because they are too specific. It's also difficult to administer A/B testing in games where the player experience should be unified across several devices. Furthermore, the Pareto distribution of monetization in free-to-play games lends itself to inaccurate results derived from a small number of highly enthusiastic players.
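To make that last point concrete, here is a minimal simulation sketch (in Python with NumPy; the pay rate, Pareto shape, and sample sizes are made-up assumptions, not numbers from the episode). It compares two identical variants whose revenue per user is heavy-tailed and counts how often pure noise from a few big spenders looks like a meaningful revenue lift.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_arm(n_users, pay_rate=0.03, pareto_shape=1.3, scale=5.0):
    """Revenue per user in a free-to-play game: most users pay nothing,
    and spend among payers is heavy-tailed (Pareto), so a handful of
    'whales' contributes most of the revenue."""
    payers = rng.random(n_users) < pay_rate
    revenue = np.zeros(n_users)
    revenue[payers] = (rng.pareto(pareto_shape, payers.sum()) + 1) * scale
    return revenue

# Both arms are generated from *identical* behavior, so any measured
# "lift" between them is pure sampling noise.
trials, false_winners = 200, 0
for _ in range(trials):
    a = simulate_arm(5_000)
    b = simulate_arm(5_000)
    lift = (b.mean() - a.mean()) / a.mean()
    if abs(lift) > 0.10:   # a 10% revenue "lift" that isn't real
        false_winners += 1

print(f"Runs showing a spurious >10% revenue difference: {false_winners}/{trials}")
```

With a spend distribution like this, even a few thousand users per arm regularly produce double-digit "lifts" that are nothing but whale noise, which is one reason these tests take so long to verify.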
A serious problem is that it can be difficult to A/B test marketing materials (ad creatives, icons, app store screenshots, etc.) in mobile advertising campaigns, because ad networks automatically optimize delivery between them, which again skews the results. Some ad networks (e.g. Facebook) allow campaigns to be run without automatic optimization, but most don't.
The second problem with A/B testing app marketing creatives is even more fundamental. A/B testing aspects of an app's design requires observing the overall impact of a change on a user's behavior over their entire lifetime in the app. The same goes for A/B testing marketing materials, which should also take targeting and the total addressable market into account.
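As an illustration of why the measurement window matters, here is a small sketch (Python; the toy event log, user IDs, and amounts are purely hypothetical). It computes average revenue per user for two variants at two different horizons and shows that a short window can rank the variants differently than a lifetime view.

```python
from datetime import date, timedelta

# Toy purchase log: (user_id, variant, install_date, purchase_date, amount).
# Every value here is invented purely to illustrate the point.
events = [
    ("u1", "A", date(2015, 3, 1), date(2015, 3, 2), 0.99),
    ("u1", "A", date(2015, 3, 1), date(2015, 4, 5), 4.99),
    ("u2", "A", date(2015, 3, 1), None,             0.00),
    ("u3", "B", date(2015, 3, 1), date(2015, 3, 1), 1.99),
    ("u4", "B", date(2015, 3, 2), None,             0.00),
]

def revenue_per_user(events, horizon_days):
    """Average revenue per user earned within `horizon_days` of install, per variant."""
    users, revenue = {}, {}
    for user, variant, installed, purchased, amount in events:
        users.setdefault(variant, set()).add(user)
        if purchased and (purchased - installed) <= timedelta(days=horizon_days):
            revenue[variant] = revenue.get(variant, 0.0) + amount
    return {v: revenue.get(v, 0.0) / len(u) for v, u in users.items()}

# A short window crowns variant B; a longer, lifetime-like window crowns variant A.
print("day-1 :", revenue_per_user(events, 1))
print("day-90:", revenue_per_user(events, 90))
```

The exact numbers don't matter; the point is that whichever horizon you pick decides which variant "wins", and a short test simply can't see the lifetime impact.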
There are also problems with targeting. It's wrong to rely on creative conversion performance (click-through rate) as a proxy for cost of acquisition (cost per install). This is called the click-through-rate conundrum: ad conversions (clicks) can move in the opposite direction from platform store conversions (installs from the platform store page) when ad creatives are optimized exclusively for clicks. That's why the more honest success metric for ad performance is click-to-install, but even that isn't the best one. Ultimately only total net revenue matters, and that's how targeting should be defined.
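Here is a tiny, hypothetical illustration of that conundrum (Python; all campaign numbers are invented for the example). It computes CTR, click-to-install, CPI, and net revenue for two imaginary creatives, where the creative with the better click-through rate ends up with the worse net revenue.

```python
# Invented numbers for two ad creatives: a "clickbait" creative that wins
# on click-through rate and a "plain" one that converts and monetizes better.
creatives = {
    "clickbait": {"impressions": 100_000, "clicks": 4_000, "installs": 200,
                  "spend": 2_000.0, "revenue": 900.0},
    "plain":     {"impressions": 100_000, "clicks": 1_500, "installs": 300,
                  "spend": 2_000.0, "revenue": 2_100.0},
}

for name, c in creatives.items():
    ctr = c["clicks"] / c["impressions"]            # what ad networks optimize for
    click_to_install = c["installs"] / c["clicks"]  # store-page conversion
    cpi = c["spend"] / c["installs"]                # cost per install
    net = c["revenue"] - c["spend"]                 # what actually matters
    print(f"{name:9s} CTR={ctr:.2%}  click-to-install={click_to_install:.1%}  "
          f"CPI=${cpi:.2f}  net revenue=${net:+,.0f}")
```

Judged by CTR alone, the "clickbait" creative looks like the winner even though it loses money; targeting and budget decisions based only on click metrics would back the wrong creative.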
So that's it for this episode. I hope you liked it. Matthew Riley, a mobiledevmemo subscriber, claims that the further downstream you measure, the noisier your data gets, as it becomes subject to the whims of a bunch of other factors. Do you agree with him? You can share your point of view in the comments below this episode!
Please let me know what you think about my podcast! You can visit my iTunes page via this link: justforward.co/review and leave your feedback there. I really appreciate it. Thanks for your attention. See you.