Qapitol QA

Building a Quality Engineering Partnership with a Leading Cloud-Based Health & Wellness App Product Development Leader

Health and wellness apps provide a convenient way to bridge health seekers and care providers. While desktop and mobile apps serve as the front end for consumers, complex systems, including external integrations, run in the back end to make operations successful. The key success factors for such apps are user acceptance and customer adoption. As the number of users across geographies and personas accessing different types of services increases, the complexity of maintaining the systems also increases and servicing the users becomes challenging. The platform not only needs to scale but also has to be tested for various usage requirements. The following is the story of a leading wellness software provider whose operations were scaling rapidly when the client felt the need to engage a testing partner to ensure their systems integrated efficiently, operations ran smoothly and users were engaged successfully. Qapitol QA, with its testing experience, automation talent and proprietary tools, was selected to guide the quality engineering activities and execute enhanced test automation strategies to manage the client’s cloud-based software operations successfully and scale the business efficiently.

Applications that power thousands of brands across fifty countries

The client is a provider of a cloud-based software solution for the spa, salon and med spa industry. The system powers thousands of spas and salons in more than 50 countries. The software allows users to seamlessly manage every aspect of the business through a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs and more. Over 10,000 clients use this software, including many of the world’s leading salon and spa brands.

Developing an automated test pack for testing APIs

The cloud platform facilitates end-to-end business solutions for employees and guests (customers), with features such as appointment booking, product purchases, memberships, packages and loyalty points. Initially the app was on a web platform; with the digital thrust, the company extended its operations to mobile platforms (Android/iOS) with its mobile applications. As part of this transformation, APIs were developed for use across the following areas:

  • Payments
  • Mobile POS
  • Marketing
  • Loyalty Points
  • Inventory
  • Employee Mobile App
  • Customer Mobile App
  • Booking Wizard
  • Analytics
  • Alerts
  • WebStore

The client’s transformation journey involved growth of 5x to 10x from their current 1.5 billion API calls a year, which meant that automated testing of APIs assumed greater significance. Qapitol QA was engaged by the client to develop an automated test pack for testing the APIs, prioritized by their usage across the platform. The engagement involved designing test cases in Lucidchart, creating test data and developing automation scripts.

The key objectives were to:

  • Create the automated test pack using the existing framework and enrich it further
  • Ensure the automated execution of the scripts daily in three different build environments – QA, Pre Release and HotFix (a sketch of such environment-driven runs follows this list).
  • Enable Dev teams to run and maintain the automated scripts as and when required
  • Ensure that the execution timelines for the automation scripts do not increase substantially as the number of automated scripts increases beyond 10,000
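
A minimal sketch of how such environment-driven daily runs might be parameterized in Python (the environment names mirror those above; the TARGET_ENV variable and base URLs are illustrative assumptions, not the client’s actual configuration):

    # Hypothetical sketch: resolve the target build environment for a daily run.
    import os

    ENVIRONMENTS = {
        "qa": "https://qa.example-platform.com/api/v1",
        "prerelease": "https://prerelease.example-platform.com/api/v1",
        "hotfix": "https://hotfix.example-platform.com/api/v1",
    }

    def base_url() -> str:
        """Return the API base URL for the environment selected by the
        daily scheduler (cron/CI) via TARGET_ENV, defaulting to QA."""
        env = os.environ.get("TARGET_ENV", "qa").lower()
        return ENVIRONMENTS[env]

The same automation pack can then be scheduled once per environment each day simply by setting TARGET_ENV accordingly.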

Planning and executing a structured testing strategy

Project Type: API Unit Testing Automation

Planning

The APIs had been developed over a period of time, resulting in limited documentation for some of them. This required the attention of multiple teams (Dev, Data Analysts) to set up meetings and plan, design and update the APIs.

 Test Data Design

Test data preparation required enabling/disabling plug-ins and adjusting configuration settings, with dependencies on the backend teams for some and on the UI screens for others. The data challenges included dependencies on related APIs, which at times alter the responses to API requests, making the maintenance of expected results a challenge for automation. This was further complicated by the multiplicity of build environments and by other teams sharing the same environments.

Implementation

Scripts had to be written and updated to ensure smooth execution across all available servers. While a dedicated codebase for each server provides highly accurate results, updating scripts and fixes for each individual server would mean higher maintenance effort. Hardcoding expected responses also leads to increased maintenance activity on the scripts.

 Execution

The Testing, Product Development and Hot Fix environment servers had limited availability each day for running scripts and deploying fixes for failed scripts. Test management tools helped publish the automation run results with counts of passed and failed scripts. However, it was difficult to establish the root cause (RCA) of a failed case from the fail status alone. The logs for passed and failed scripts were also not segregated, making it harder to identify failures and fix them quickly.

It is interesting to note that this was the client’s first-ever engagement with an external partner supporting their quality engineering efforts. The engagement kicked off on the same day the nationwide lockdown was announced (March 23rd), which continued for months. This meant that all team members were working remotely from their respective locations, and a well-integrated remote working strategy proved to be a catalyzing force.

Meeting test automation challenges with powerful enhancements

The Qapitol QA team came up with solutions and suggestions to enrich the automation framework and deliver the automation pack with improved performance. Some of the solutions suggested and implemented are listed below:

Challenge: Hardcoding the expected response for each test case was tedious and led to higher maintenance effort.
Enhancement: Designed a common response template instead of hardcoding responses, enabling overrides at the test case/environment level.
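
A minimal illustration of the response-template approach, assuming a JSON-based framework like the one described (the file paths, helper name and shallow-merge strategy below are illustrative, not the client’s actual implementation):

    # Hypothetical sketch: a shared expected-response template with overrides,
    # so test cases only declare the fields that differ instead of hardcoding
    # the full expected payload in every script.
    import copy
    import json

    def load_expected(template_path, overrides=None):
        """Merge the common template with per-test or per-environment overrides."""
        with open(template_path) as fh:
            expected = json.load(fh)
        merged = copy.deepcopy(expected)
        merged.update(overrides or {})
        return merged

    # Usage: the booking template is shared; only the centre id differs here.
    # expected = load_expected("templates/booking_response.json",
    #                          overrides={"center_id": "CENTRE-42"})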

Challenge: Huge log files covering all executed test cases made it difficult for testers to distinguish passed from failed test cases and to identify and fix issues such as syntax errors and bugs.
Enhancement: Segregated the log files for passed and failed test cases, with the detailed API response captured at the test case level, to reduce debugging effort and identify bugs quickly.
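
A minimal sketch of such pass/fail log segregation in Python (the file names and routing helper are illustrative assumptions):

    # Hypothetical sketch: route each test case's detailed API response to a
    # pass log or a fail log, so failures can be triaged without scanning one
    # huge combined log.
    import logging
    import os

    os.makedirs("logs", exist_ok=True)

    def build_logger(name, path):
        logger = logging.getLogger(name)
        logger.setLevel(logging.INFO)
        handler = logging.FileHandler(path)
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
        return logger

    passed_log = build_logger("passed", "logs/passed_cases.log")
    failed_log = build_logger("failed", "logs/failed_cases.log")

    def record_result(case_id, ok, response_body):
        """Write the case id and detailed API response to the matching log."""
        target = passed_log if ok else failed_log
        target.info("%s | %s", case_id, response_body)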

Challenge: Customising test suite execution.
Enhancement: Grouped test cases so that they can be run by specific grouping, such as feature-wise or functionality-wise.
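
If the framework were built on pytest, such grouping could be expressed with markers; the sketch below is an assumption for illustration (the marker names are invented, and custom markers would be registered in pytest.ini):

    # Hypothetical sketch: group test cases by feature so a suite can be run
    # per feature or functionality, e.g.  pytest -m payments
    import pytest

    @pytest.mark.payments
    def test_payment_capture_returns_receipt():
        assert True  # placeholder for the real API assertion

    @pytest.mark.loyalty
    def test_loyalty_points_accrue_on_purchase():
        assert True  # placeholder for the real API assertion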

Challenge: Module-wise test results reporting is not available yet.
Enhancement: Corresponding framework changes could be made to implement module-wise, feature-wise or group-wise test results reporting.

Designing and implementing a variety of software tests

  • API Testing
  • API Automation
  • Data Migration
  • Functional UI Testing

Documents and reports submitted

Test Plan

  • KT on APIs
  • Test Case Preparation – Lucidchart modelling and writing test cases
  • Data Creation – on TDR, reflected on ZAR on a daily basis
  • Scripting – client’s custom framework using JSON, executed in a Python environment
  • Bug Verification/Validation

 UI Testing Scenarios and Scripts

  • Validation of every parameter and URI, such as cross-centre, org, voided and valid IDs
  • Comparing JSON responses to expected JSON objects (see the comparison sketch after this list)
  • Validating the response values that are returned
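
A minimal sketch of a JSON-to-JSON comparison that tolerates volatile fields (the ignore list and recursion strategy are assumptions for illustration):

    # Hypothetical sketch: deep-compare two JSON structures while ignoring
    # fields whose values change between runs (ids, timestamps), so dynamic
    # data does not cause false failures.
    def json_equal(expected, actual, ignore=("id", "created_at")):
        if isinstance(expected, dict) and isinstance(actual, dict):
            keys = (set(expected) | set(actual)) - set(ignore)
            return all(json_equal(expected.get(k), actual.get(k), ignore)
                       for k in keys)
        if isinstance(expected, list) and isinstance(actual, list):
            return len(expected) == len(actual) and all(
                json_equal(e, a, ignore) for e, a in zip(expected, actual))
        return expected == actual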

 Usability Recommendations [At framework level]

  • Segregation of passed and failed test scripts in the logs
  • JSON attribute values that contain lists

Tools and Platforms explored and applied

  • Tools: PyCharm, Lucidchart, Postman
  • Source Code Management: Bitbucket
  • Bug Tracking Tool: JIRA
  • Test Management Tool: TestRail
  • Platforms: Windows, iOS

Analyzing the testing results for quality enhancements

Better environment management for stable software releases

  • Set up a process to generate automated mails on a daily basis across the three environments (Testing, Product Development and Hot Fix) each time the automation pack is executed. The team uses these mails to analyze the logs further and to address changes or raise new bugs for the failed test cases (a sketch of this reporting step follows this list).
  • Test execution results are uploaded to TestRail with a Pass or Fail status for the corresponding test scripts of each module.
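
A minimal sketch of the daily results mail (the SMTP host, sender and recipients are placeholders, not the client’s actual setup):

    # Hypothetical sketch: after each run, email a short pass/fail summary
    # for the environment that was executed.
    import smtplib
    from email.message import EmailMessage

    def send_run_summary(environment, passed, failed, recipients):
        msg = EmailMessage()
        msg["Subject"] = (f"API automation run [{environment}]: "
                          f"{passed} passed, {failed} failed")
        msg["From"] = "qa-automation@example.com"   # placeholder sender
        msg["To"] = ", ".join(recipients)
        msg.set_content(
            f"Environment: {environment}\nPassed: {passed}\nFailed: {failed}\n"
            "Detailed logs are attached to the build artifacts.")
        with smtplib.SMTP("smtp.example.com") as server:  # placeholder host
            server.send_message(msg)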

Reaping the business benefits of speed and scale

Achieving more coverage, less risk, more frequent releases

  • The APIs covered by the automated execution account for 95% of hits in the live environment for the selected modules.
  • With this automated execution, defects were isolated early to improve the build quality and process efficiency.
  • The client was able to serve customers with better performance and satisfy guests with delightful user experiences.

The team completed the development of 7,500 automation scripts for 95 API endpoints, maintaining productivity levels despite the challenges of distributed teams, remote working and dependencies. The automation enhancements unearthed 500+ defects across the endpoints. The next phase of the project includes automating the APIs for mobile POS, WebStore and dataio, and developing automation for efficiently testing the UI layer of the platform.

Write to us at [email protected] to implement proven quality engineering solutions and accelerate your product releases.
