Part Three of the Humanizing Software Quality Series from Intellyx, for Apica
In Part 1 of this series, Jason English established user journeys as the essential goal of software testing efforts. Then in Part 2, he explained why user journeys are so difficult to test.
The fundamental lesson across both articles is that the software quality buck stops with the user. It doesn’t matter how well the infrastructure, applications, middleware, or other components of a software offering work if the user experience at the interface fails to meet expectations, let alone delight the user.
User journeys are in reality digital journeys, mapping user interactions from one piece of software to another. Never forget, however, that such journeys aren’t about the software. They are always about the user and their experience.
It’s no surprise, therefore, that software testing practices and tools have focused on the user experience (UX). Real user monitoring, synthetic monitoring, load testing, and other modern testing technologies all center on UX.
From the user’s perspective, a seamless UX masks the underlying complexity of the software – even though in today’s digital, cloud native world, such complexity is exploding.
Behind the scenes, each component must deliver on its requirements, and thus engineering teams must test each component – from database to cloud service to middleware to front-end – in order to ensure the UX is up to snuff.
Such component-level testing takes place at each component’s interfaces: the APIs that represent each component’s functionality to the rest of the digital landscape. However, API testing has never focused on the UX, instead focusing on the technical specifications of the API itself.
The resulting disconnect between APIs and UX threatens the success of the digital effort overall. To solve this problem, we must bring APIs into the human equation – rethinking API testing in the context of the user journey.
The Limitations of API Testing
Over the years, API standards and technologies have evolved through multiple generations, from the proprietary, tightly coupled APIs of the 1990s to the XML-based Web Services of the 2000s to the HTTP-centric RESTful APIs of the 2010s. Adding to this mix are GraphQL, WebSockets, and other modern API protocols.
With the addition of asynchronous and streaming interface protocols, the modern integration landscape now depends upon APIs. Cloud native software delivery, in particular, places APIs at its center.
Regardless of the protocol or technology underlying them, APIs have always been headless: a macabre term for the fact that user interfaces rely upon separate technologies that interact with APIs, while the APIs themselves are software interfaces rather than user interfaces.
The headlessness of APIs has always discouraged human-centric testing. APIs expose their functionality for all software consumers, the argument goes, so as long as they comply with their interface contracts and core API best practices, then who cares what the consuming applications are or who is using them?
This loose coupling between software consumers and the endpoints they interact with has always been the point of software contracts. Dating back to the early 2000s, Web Services Description Language (WSDL) represented an early service contract standard, putting a stake in the ground for such loose coupling.
Today, REST verbs, media types, and the corresponding data formats specify the contracts for RESTful APIs. Testing such APIs, therefore, amounts to verifying conformance to those data formats and media types, along with the structural best practices that enable software consumers to automatically bind to and interact with RESTful endpoints.
Such testing is necessary to ensure the proper functionality and consumability of the APIs, but it is not sufficient to account for user behavior and preferences, since users interact directly only with the software consumers, never with the APIs themselves.
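For illustration, a contract-level conformance test of this kind might look something like the following minimal sketch in Python, using the requests and jsonschema libraries. The endpoint, schema, and expected values are hypothetical, chosen only to show what such a test typically checks.

```python
# Minimal sketch of contract-level API conformance testing.
# The /orders endpoint, the schema, and the expected status code are
# hypothetical, illustrating the idea rather than any particular API.
import requests
from jsonschema import validate, ValidationError

BASE_URL = "https://api.example.com"  # placeholder host (assumption)

# Illustrative contract: responses must be JSON matching this shape.
ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "status", "total"],
    "properties": {
        "id": {"type": "string"},
        "status": {"type": "string", "enum": ["pending", "paid", "shipped"]},
        "total": {"type": "number", "minimum": 0},
    },
}

def test_get_order_conforms_to_contract():
    response = requests.get(f"{BASE_URL}/orders/12345", timeout=10)

    # Structural checks: status code and media type match the contract.
    assert response.status_code == 200
    assert response.headers.get("Content-Type", "").startswith("application/json")

    # Data-format check: the payload conforms to the agreed schema.
    try:
        validate(instance=response.json(), schema=ORDER_SCHEMA)
    except ValidationError as err:
        raise AssertionError(f"Response violates contract: {err.message}")
```

A test like this exercises the API’s technical contract end to end, yet it knows nothing about who asked for the order or why, which is exactly the gap the rest of this article explores.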
Bringing APIs into the Human Equation
Despite their headless nature, APIs must nevertheless satisfy the business objectives set out for them. What if you could track such real-world business objectives through the UX to the APIs running behind the scenes in the business environment?
With Apica Ascent, engineering organizations can connect the dots between user behavior at the UX and interactions at the API.
Such interactions must account for the variability that results from human interactions with software. After all, humans are fickle, mercurial, and error-prone. We enter bad data into fields, abandon shopping carts, and hit the browser back button in the middle of transactions. There’s no telling what the effects of such mischief will be on the underlying APIs.
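At the API layer, that mischief translates into awkward request sequences that conventional functional tests rarely exercise. The following minimal sketch, again in Python with hypothetical endpoints and payloads, hints at how a few of those behaviors might be simulated; it illustrates the general idea rather than any particular tool’s approach.

```python
# Sketch of user-journey-style API tests that mimic messy human behavior.
# Endpoints, payloads, and expected status codes are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # placeholder host (assumption)

def test_bad_input_is_rejected_gracefully():
    # A user fat-fingers a quantity field; the API should reject the request
    # cleanly rather than fail with a 500 or silently corrupt the cart.
    response = requests.post(
        f"{BASE_URL}/cart/items",
        json={"sku": "ABC-123", "quantity": "lots"},  # bad data, on purpose
        timeout=10,
    )
    assert response.status_code == 400

def test_abandoned_checkout_leaves_no_orphaned_order():
    # A user starts checkout, then hits the back button and walks away.
    checkout = requests.post(
        f"{BASE_URL}/checkout", json={"cart_id": "c-42"}, timeout=10
    )
    checkout.raise_for_status()
    # ...the confirmation call that would complete the purchase never happens...

    # The abandoned checkout should not surface as a completed order.
    orders = requests.get(
        f"{BASE_URL}/orders", params={"cart_id": "c-42"}, timeout=10
    )
    assert all(order["status"] != "completed" for order in orders.json())
```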
These are the reasons why user journey testing is as important for APIs as it is for user interfaces. Such testing goes well beyond the functional and data validation that most API tests focus on.
Instead, Apica Ascent can incorporate demographic profiles, behavioral tests, and user trending patterns into repeatable tests that cover the vagaries of human behavior. The result is more thorough API testing that validates the business performance of the underlying software components.
The Intellyx Take
APIs play an important part in the overall performance and behavior of modern software.
Today’s modern applications consist of multiple, often ephemeral software components, with user interfaces that must work across a variety of devices. The more complex such applications become, the more important it is for engineering teams to focus on the UX as people progress through their digital user journeys.
Such relentless user focus is an essential priority of digital transformation as organizations realign both their structures and their software efforts around their customers. APIs are critical supporting layers in this transformation.
——————————
©2022 Intellyx LLC. Apica is an Intellyx customer. Intellyx retains final editorial control of this article.