💎 In yesterday's #growthgems newsletter I shared insights from the recent "Understanding, Optimizing and Predicting LTV in Mobile Gaming" GameCamp webinar
Read the 15 #LTV 💎 from this 100 min video in the Twitter thread below 👇
💎 1/15
Chart your creatives on the X axis against their D3/D7/D28 ROAS to quickly spot outlier creatives (both good and bad) so you can act on them (by reallocating spend, for example).
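That outlier check can be sketched as a simple standard-deviation filter. Everything below is illustrative: the creative names, the D7 ROAS values and the 1-sigma cutoff are assumptions, not numbers from the webinar.

```python
from statistics import mean, stdev

# Hypothetical D7 ROAS per creative (illustrative numbers, not real data)
d7_roas = {"video_a": 0.42, "video_b": 0.38, "playable_c": 0.95,
           "static_d": 0.05, "video_e": 0.40}

def flag_outliers(roas_by_creative, z_cut=1.0):
    """Flag creatives whose ROAS deviates from the mean by more than z_cut std devs."""
    values = list(roas_by_creative.values())
    mu, sigma = mean(values), stdev(values)
    return {name: ("good" if r > mu else "bad")
            for name, r in roas_by_creative.items()
            if abs(r - mu) > z_cut * sigma}

print(flag_outliers(d7_roas))  # → {'playable_c': 'good', 'static_d': 'bad'}
```

In practice you would run this per ROAS window (D3/D7/D28) and shift budget toward the "good" outliers.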
💎 2/15
A benchmark comparing ROAS (e.g. D7/D28) for each week (X axis) with success thresholds allows you to evaluate your UA strategies.
💎 3/15
Understanding the CPI to spend relationship is a key factor in understanding UA payback and how you can scale your campaign.
💎 4/15
Ask yourself these questions to find the most profitable players (Android): 1. Which group is great at buying IAPs, and does it buy frequently? 2. Which group is heavily engaged, and does its engagement grow over time? 3. Which group of players watches ads frequently? Have a benchmark to compare these new LAL/audiences against.
💎 5/15
Do not create too many groups/segments of players when looking at LTV. Make sure the groups are genuinely different so you can understand which one is better. It is not enough to segment based on how much they purchase, however; you need to use other attributes too.
💎 6/15
If your LTV curve looks like a step function with jumps, either your game relies mainly on LiveOps offers (not ideal design) or the number of payers/players is too low.
💎 7/15
Your special offers are great if you can increase revenue per user while minimizing the discount. The value is in the personalization: showing the right offer (with relevant content and price) at the right time.
💎 8/15
Predicting LTV is different at different game stages: soft launch, some time after global launch or when the game is at maturity.
💎 9/15
In soft launch we do not have the whole LTV curve, yet we still need to estimate the lifetime length, so product knowledge is crucial because you're extrapolating. You have to know:
- Monetization limits (depth)
- User behavior
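One way to sketch that extrapolation: fit a power curve LTV(t) = a · t^b to the early cumulative ARPU, then project it out and cap the projection by the monetization depth you know from the product. The model choice, the soft-launch numbers and the cap are all assumptions for illustration, not from the talk.

```python
import math

# Hypothetical cumulative ARPU observed during soft launch: day -> $
observed = {1: 0.10, 3: 0.18, 7: 0.27, 14: 0.36, 28: 0.48}

def fit_power_curve(points):
    """Least-squares fit of LTV(t) = a * t**b in log-log space (stdlib only)."""
    xs = [math.log(d) for d in points]
    ys = [math.log(v) for v in points.values()]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def project_ltv(points, day, cap):
    """Extrapolate LTV to `day`, capped by known monetization depth."""
    a, b = fit_power_curve(points)
    return min(a * day ** b, cap)

d180 = project_ltv(observed, day=180, cap=1.50)
```

The cap is where the product knowledge comes in: if the game simply cannot monetize a user beyond a certain depth, no curve fit should project past it.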
💎 10/15
Most important step in LTV model development: LTV model and forecast validation. ALWAYS have a validation sample to test the model against, and it must also be representative. Do not build the model to work especially well against your validation sample (i.e. do not "overfit").
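A minimal sketch of that validation step, using mean absolute percentage error (MAPE) on held-out cohorts. The cohort names, predictions and actuals are hypothetical; the error metric is one common choice, not necessarily the one used in the webinar.

```python
# Hypothetical held-out validation cohorts: model predictions vs. observed LTV ($)
predicted = {"cohort_w1": 1.20, "cohort_w2": 0.95, "cohort_w3": 1.10}
actual    = {"cohort_w1": 1.05, "cohort_w2": 1.00, "cohort_w3": 1.30}

def mape(pred, act):
    """Mean absolute percentage error of the forecast on the validation sample."""
    errors = [abs(pred[c] - act[c]) / act[c] for c in act]
    return sum(errors) / len(errors)

print(f"validation MAPE: {mape(predicted, actual):.1%}")  # → validation MAPE: 11.6%
```

The key discipline is that these cohorts never feed back into model fitting; they only score it.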
💎 11/15
Some time after global launch you need different LTV models: country groups (Tier 1 vs. Tier 2 vs. Tier 3), acquisition sources and optimization types (Google Ads vs. video networks) and monetization types (in-apps vs. ad-based, or live ops vs. regular purchases).
💎 12/15
Always think about how the LTV model will be used. Example: an LTV model for the UA team needs to work with a small sample size so decisions can be made at the campaign level, whereas an LTV model for strategic decisions needs to be more accurate and can be more thorough.
💎 13/15
If you are encountering issues when leveraging machine learning, build a quick model with rough "soft launch techniques" for quick validation. Have a few models using a very limited amount of data so you can retrain the machine learning models as soon as possible.
💎 14/15
Understanding the impact of LiveOps events on LTV is difficult when only 3 or 4 LiveOps events have been run. Look at peaks during the LiveOps event. "Slice" the LTV curve into smaller periods, define a validation cohort for each slice, and calculate the impact on LTV over the period.
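The slicing idea can be sketched as comparing the event slice against a pre-event baseline slice of the same curve. The daily revenue-per-user numbers and the slice boundaries below are assumptions for illustration only.

```python
# Hypothetical daily revenue-per-user curve; a LiveOps event ran on days 4-6
daily_rpu = [0.05, 0.04, 0.04, 0.09, 0.10, 0.08, 0.04, 0.03]  # index 0 = day 1

def liveops_lift(rpu, event_days, baseline_days):
    """Incremental revenue during the event slice vs. the baseline slice average."""
    base = [rpu[d] for d in baseline_days]
    baseline_avg = sum(base) / len(base)
    return sum(rpu[d] - baseline_avg for d in event_days)

# Days 4-6 (indices 3-5) vs. the pre-event baseline of days 1-3 (indices 0-2)
lift = liveops_lift(daily_rpu, event_days=range(3, 6), baseline_days=range(0, 3))
```

With only a handful of events, each slice's lift is noisy, which is exactly why the gem recommends a validation cohort per slice.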
💎 15/15
When evaluating the impact of LiveOps on LTV, do not forget to take into account the novelty effect: peaks tend to be higher during the first LiveOps events.
Read the #ASO #ASA insights from the 56 min video in the Twitter thread below 👇
💎 #1
Apple will tell you "do not use search ads data for ASO" because the timeframe of the trend they are reporting is short term. You cannot use a short timeframe for ASO because it takes 2-3 weeks to get your keywords fully indexed.
💎 #2
Apple will still index you for 2 to 3 weeks after an update. If you have good rankings already, you can do keyword metadata optimization every 30 days and have a solid view of how an optimization performed.
Read the #UA #creatives insights in the Twitter thread below 👇
💎 #1
HopSkipDrive uses the same brand feeling and emotional value across channels but what might differ is the goal of each campaign. Example: Twitter for retargeting/re-engagement and Facebook for conversion.
💎 #2
Try to take a portfolio-based approach to creative testing and understand that you're adding something new to the mix. Some channels do not have a "clean" way to A/B test (e.g. TikTok), and even when there is one, it might not tell you what will happen "live".
15 mobile #growthgems 💎 from "Mobile Marketing for Card Games" with Josh Chandley (Wildcard Games) @eniac, Jonathan Lau (Weee!) and @jokim1
Watch this 52:14 discussion directly... or start by reading the main #mobilegrowth #UA #creatives insights in the Twitter thread below 👇
💎 #1
A way to test creatives at a lower cost is to start in India with a small budget to filter them out first, then move to Canada and deploy the best-performing ones in the US.
💎 #2
For games that have a strong organic demand, it makes sense to pay attention to ASO and also to potentially try preloads on devices as well as try low cost/quality channels. Examples: Gin, Rummy, Solitaire, Tetris.
💎 #1
Tag the IDs within each ad creative: not one ID per creative but an ID for each "element": light/dark background, team ID, player offer, CTA, etc. Add granularity of tagging for each concept so you can really understand what drives results. [DraftKings]
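A sketch of what that element-level tagging buys you: once every creative carries per-element tags, you can aggregate performance by element rather than by creative. The tag names, creatives and install counts below are hypothetical.

```python
from collections import defaultdict

# Hypothetical creatives tagged per element (background, CTA, offer), with installs
creatives = [
    {"tags": {"bg": "dark",  "cta": "play_now", "offer": "bonus"}, "installs": 120},
    {"tags": {"bg": "light", "cta": "play_now", "offer": "none"},  "installs": 80},
    {"tags": {"bg": "dark",  "cta": "download", "offer": "bonus"}, "installs": 150},
]

def installs_by_element(items):
    """Aggregate installs per tagged element to see which elements drive results."""
    totals = defaultdict(int)
    for creative in items:
        for element, value in creative["tags"].items():
            totals[(element, value)] += creative["installs"]
    return dict(totals)

totals = installs_by_element(creatives)
# e.g. totals[("bg", "dark")] sums installs across every dark-background creative
```

The same roll-up works for spend, CTR or ROAS; the point is that the unit of analysis becomes the element, not the creative.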
💎 #2
Yousician did a live op with paid content unlocked (e.g. Jimi Hendrix songs), and people playing these songs for free for a limited time ended up converting (subscribing) at a 40-50% higher conversion rate. [Yousician]
10 mobile #growthgems 💎 from "How We Scaled from 80k to 6m Users" by Jordan Gladstone (Director of Marketing at Dave - Banking App) at @apppromotion APS WFH USA
👇👇 Twitter thread below
💎 #2
Then start filming other people to be able to show the app more. It helps figure out which messaging works as well as test different thumbnails (e.g. empty gas), showing the app or not (showing the app works better), actors, locations, etc.
💎 #3
Something that has worked well for Dave is responding to Facebook comments on their ads "as Dave" (they now have a style guide with what Dave likes). It allows for ads to look more organic and creates more engagement, leading to lower CPMs.
💎 #1
There are very different kinds of subscriptions, and the retention tactics/levers are entirely different for each subcategory:
- Content - music, books, video, comics, news, etc.
- Motivation - health & fitness, dating, education
- Utilities - video editors, Dropbox, etc.
💎 #2
Subscriptions align users' and developers' interests. People will keep paying if you keep providing them value.
Example of Evernote: as you use it more, it becomes even more valuable and that's when the subscription model makes the most sense.