giffgaff is a UK mobile telephone network operating as a Mobile Virtual Network Operator (MVNO).
giffgaff Money wanted to enter the financial sector by introducing low-interest loan offerings for its members, together with financial advice and member-to-member deal sharing. The giffgaff Money project had two key deliverables: a financial advice and money-saving tips site, and a member loan-offering site (the latter provided through a branded RateSetter site). Bringing the site in-house ensured giffgaff could engage and work with its members more effectively.
The project would model a new way of working within giffgaff. Agile best practices would be implemented, such as Test-Driven Development, Continuous Integration (CI) and Continuous Deployment (CD), in addition to building the site using micro-services and a mobile-first design for the AngularJS client.
Qualitest’s managed testing had already worked well for giffgaff’s mobile market. Would embedding Qualitest QA consultants in client teams, to foster, drive and shape the testing approach in a fast-paced agile environment, deliver the same improvement on a financial market project?
giffgaff had a monolithic architecture with little automated unit or end-to-end (E2E) testing, built on outdated solutions and requiring extensive manual regression. This project would transform giffgaff, demonstrate its agile mindset and allow it to compete in a modern technical landscape. Agile experts across different roles, including Qualitest test consultants, were either moved from within the business or brought in to create a ‘dream team’ that would design, build and test a product through agile best practices. The team comprised several cross-functional developers and testers, a Scrum Master, an Agile Project Manager, a Business Analyst and a Product Owner.
The business objective was a 500% increase in loans compared to the white-label hosted solution. The technology aim was to move to a modern technology stack, hosting solution and way of working. A further goal was complete team ownership from development to operations, with agile objectives and a mandate to challenge the current way of working (strict Scrum guidelines) and alter practices based on what did or did not work.
All CI tests, from unit through to E2E acceptance, ran on every commit to any branch, triggering the build for that specific component. Red builds immediately alerted the team so they could focus on quickly returning to a green state. Every successful build automatically deployed onto test environments, giving the business high visibility of the latest changes and facilitating manual testing.
The first testing layer in the CI cycle was unit testing, via standard frameworks such as JUnit and Jasmine. The developers averaged 90% code coverage across both client-side and server-side components. The standard testing pyramid model emphasized low-level unit tests as the basis for the automation cycles.
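As a minimal sketch of the kind of server-side JUnit test that fed this layer (the LoanCalculator class, its flat-rate repayment formula and the figures used are hypothetical, inlined only to keep the example self-contained):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LoanCalculatorTest {

    // Hypothetical production class, inlined here only so the sketch compiles
    // on its own; flat-rate repayment: (principal + total interest) / months.
    static class LoanCalculator {
        double monthlyRepayment(double principal, int months, double annualRate) {
            return (principal + principal * annualRate) / months;
        }
    }

    @Test
    public void spreadsZeroInterestLoanEvenlyAcrossInstalments() {
        LoanCalculator calculator = new LoanCalculator();
        // A £1,200 loan over 12 months at 0% should cost exactly £100 a month.
        assertEquals(100.00, calculator.monthlyRepayment(1200.00, 12, 0.0), 0.001);
    }
}
```

Tests of this shape ran on every commit, so a broken calculation turned the build red within minutes.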
The next testing layer was integration testing. A series of API integration tests with clear reporting ensured all web services operated correctly and consistently. These continuous-build API tests proved very valuable given our reliance on 3rd party APIs, especially as they also ran on a schedule, ensuring quick detection of any 3rd party API changes or downtime. Client-side, integration tests ensured correct page rendering, verified that all input fields worked, performed visual validation and confirmed that all client web service communication was handled correctly. A smart stub server, used both in our test environments and locally, ensured that client-side work could progress and be tested without reliance on any back-end service being implemented, by filtering requests that would normally go to RateSetter: requests containing specific stubbed trigger data received a stubbed response, while all others proceeded to the RateSetter test environments. Automation tests could therefore run in known states on test and production-like environments.
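The stub-server implementation is not named here, but its filtering behavior can be sketched with a tool such as WireMock: a high-priority stub answers requests that carry the agreed trigger data, while a catch-all proxy forwards everything else to the RateSetter test environment. The port, path, trigger token and URL below are illustrative assumptions:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.WireMockServer;

public class RateSetterStubServer {

    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // High-priority stub: requests containing the agreed trigger data
        // get a canned response, so client work needs no real back end.
        server.stubFor(post(urlPathEqualTo("/loans/apply"))
                .withRequestBody(containing("STUB-TRIGGER"))
                .atPriority(1)
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"status\": \"APPROVED\", \"loanId\": \"stub-123\"}")));

        // Catch-all: everything else is proxied through to the
        // RateSetter test environment (hypothetical URL).
        server.stubFor(any(anyUrl())
                .atPriority(10)
                .willReturn(aResponse().proxiedFrom("https://ratesetter-test.example.com")));
    }
}
```

Running the same stub locally and in the test environments meant the automation suites exercised identical known states everywhere.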
After a successful build and isolated testing, the E2E tests executed automatically. Cucumber and Serenity were used in the E2E test framework for BDD and reporting respectively, and writing the framework in Java enabled all project members to contribute code. While E2E frameworks are usually owned by the testers, here the developers predominantly used and maintained them. Client-side developers found that setting up page objects got them thinking about page layout well before coding the pages. Best practices, such as using IDs on all elements, became self-evident. Uniting server- and client-side developers on E2E tests unified their goals.
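As an illustration of the page-object approach, a sketch of a Serenity page object and its Cucumber glue; the page, element IDs and step wording are hypothetical, though the by-ID lookups reflect the "IDs on all elements" practice described above:

```java
// LoanApplicationPage.java: a Serenity page object for a hypothetical
// loan-application page; each element is located by its ID.
import net.serenitybdd.core.pages.PageObject;
import net.serenitybdd.core.pages.WebElementFacade;
import org.openqa.selenium.support.FindBy;

public class LoanApplicationPage extends PageObject {

    @FindBy(id = "loan-amount")
    private WebElementFacade amountField;

    @FindBy(id = "submit-application")
    private WebElementFacade submitButton;

    public void applyFor(String amount) {
        amountField.type(amount);  // clears the field, then types the value
        submitButton.click();
    }
}
```

```java
// LoanApplicationSteps.java: the Cucumber glue. Serenity instantiates and
// injects the page object field automatically.
import cucumber.api.java.en.When;

public class LoanApplicationSteps {

    LoanApplicationPage loanApplicationPage;

    @When("^the member applies for a loan of £(\\d+)$")
    public void theMemberAppliesForALoan(String amount) {
        loanApplicationPage.applyFor(amount);
    }
}
```

Because defining the page object forces each element to be named and located up front, missing IDs surface before any test logic is written.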
New processes and technologies produced challenges. Ensuring compatibility with existing hardware, project and build management tools, and operations processes was difficult; the biggest hurdle was how to host the systems. All previous systems used a simple hosting model of one server for the client, one for the back end and one for the database, but the new systems involved multiple micro-services, requiring creative solutions to host them on a single physical server. Setting up CI was also difficult, as it required introducing new tools into a security-hardened, locked-down build server. These difficulties were best overcome by constant cross-team communication, creative problem-solving without compromising on security, and adapting quickly as issues arose.
The project started with great intentions: no-estimates, an agile mindset over agile ceremonies and processes, Test-Driven Development (TDD) and other such ideals were embraced on the path to releasing a Minimum Viable Product (MVP) as soon as possible. A newly hired, highly talented, cross-functional, unified team was empowered to find a new way of working. The team replaced Gantt charts with directly informing stakeholders of delivery dates, ignored stringent agile ceremonies, disregarded sprints and concentrated on lean production.

However, the team structure and way of working needed help, because highly talented individuals also meant clashing egos. The lack of common ground between ‘best practice’ and ‘MVP’ caused tension and disrespect. The absence of retrospectives caused problems, as they were integral to the CI feedback loop, and TDD blocked development. Environmental issues and a lack of trust in the builds forced manual testing that often occurred well past development’s completion, producing a backlog of unfinished, un-demoable work. Overcoming these issues took much time and effort. Most breakthroughs came from airing issues in retrospectives and then regressing toward more traditional Agile Scrum approaches. Having returned to a known way of working and a focus on improving the technology and the approach to quality, the team reunited: automated test coverage increased, trust in the builds grew and the team started to work together.
The team discovered that they could no longer release into production until additional standards body approval had been granted, which set back the product, but not the project. Fortunately, the long delay before going live allowed the team to really focus on the project aims and refine the product, technology, testing and processes that would lead to an overhaul across all of giffgaff.
Some practices we expected to be improvements actually made things more difficult, such as using PhantomJS for browser testing; in future it will be avoided in favor of SauceLabs.
Both systems were built on time, eventually adhering to agile principles and achieving very high quality, thanks to the extensive automated tests combined with CI and CD providing constant feedback. The project’s success led giffgaff to assemble a strategy for rolling out the lessons learned, the technology and the testing processes across all of their other teams and systems. The project was a success both in its delivery and in delivering giffgaff’s new corporate vision. As both the start of a new journey and a learning exercise, it was a great challenge for those involved and an even greater experience with many lessons learned, ultimately exceeding the business requirements.