To launch or not to launch? Evaluating a new betting app

challenge

Racetracks across North America face an aging customer base while finding it difficult to attract, engage and retain younger audiences in an increasingly competitive entertainment landscape. That’s why our client, a premier North American racetrack, designed an innovative new way of playing the ponies. With development of the minimum viable product (MVP) wrapping up just before the racetrack’s annual marquee event, the timing seemed perfect for getting in front of the sport’s newcomers. But was the app ready? Was it truly a minimum viable product? Launching this highly anticipated app on the year’s biggest day with a poor user experience was a risk not worth taking, so RESEARCH STRATEGY GROUP was engaged to conduct a user test and deliver a ‘GO’ or ‘NO-GO’ recommendation, along with the key pain points to address before launch.

impact

User testing confirmed that, pending a few fixes, there were no major red flags, and the recommendation to launch as planned was well received. Results showed satisfactory scores for engagement and fun and good ratings for usability and ease of use, the key areas the client needed to see before pressing ahead.

Further, recommendations were provided across the whole app experience and prioritized to ensure the MVP would meet users’ usability expectations. Non-urgent recommendations were folded into the product roadmap as future enhancements.

method

With the intended launch date quickly approaching, RESEARCH STRATEGY GROUP conducted 23 semi-structured, in-depth user sessions with target customers, asking each to download the app and complete a series of tasks: registration, research, placing a bet, and finally watching a race. Respondents were encouraged to think aloud and describe how the experience compared with their expectations. Ratings were also gathered across a variety of usability dimensions and rolled up into a score on the System Usability Scale (SUS), an objective measure of user acceptance that underpinned the “GO” recommendation. Users’ observations and challenges were captured and coded to produce a recommended roadmap of fixes and enhancements. Clients were able to observe the sessions live and received all recordings and reports following the study.
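For readers unfamiliar with the instrument, the SUS is scored with a simple published formula: ten items rated 1 to 5, with odd-numbered (positively worded) and even-numbered (negatively worded) items normalized and the raw total rescaled to 0–100, where roughly 68 is the commonly cited average. The sketch below illustrates that standard scoring in Python; the sample ratings are hypothetical, not the client’s data.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from one respondent's
    answers to the 10 standard SUS items (each rated 1-5)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects 10 responses, each rated 1-5")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:
            total += r - 1   # odd-numbered items (1, 3, ...) are positively worded
        else:
            total += 5 - r   # even-numbered items (2, 4, ...) are negatively worded
    return total * 2.5       # rescale the 0-40 raw total to 0-100


# Hypothetical example: one respondent's ratings for items 1 through 10.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # -> 82.5
```

Averaging such per-respondent scores across all sessions yields the single benchmarkable number that supported the go/no-go call.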
