The Business Manager’s Guide to Software Testing and Quality Assurance

Table of Contents
  • How do you measure the success of testing?
  • Why should your software project include testing?
    • Testing can keep your business afloat
    • Types of tests in software projects
  • Is there a single best process for testing, or is it context-based?
    • Step #1: Start testing early
    • Step #2: Manual testing can’t be overlooked
    • Step #3: Appropriate automated testing strategy
    • Step #4: Long-term monitoring and upkeep
  • What are some of the best practices for automated testing?
    • 1. Avoid unit test overkill
    • 2. Narrow and broad integration testing
    • 3. You can’t perform only end-to-end tests
    • 4. Don’t use E2E recorders
  • Final thoughts on software testing and quality assurance for business managers

How do you get into a lot of technical debt? It’s simple: just ask your development team to never test anything.

In under a year, maybe even a few months, you’ll find yourself stuck in development hell, with a few more gray hairs on your head and a broken spirit.

Unless you’re a masochist, we’ll assume you’d rather avoid this scenario—which means we need to talk a little bit about software testing. You might be wondering:

  • How do you measure the success of testing?
  • Why should your software project include testing?
  • Is there a single best process for testing, or is it context-based?
  • What are some of the best practices for automated testing?

By the time you finish reading this article, you’ll be able to answer all of these questions.

Note: this guide about testing and QA was written for Project Managers and other non-technical stakeholders in software projects. If you’re an experienced developer, some of it may already be well-known to you—but other points might serve as a useful reminder.

How do you measure the success of testing?

Though it doesn’t breathe or eat, a program is a bit like a living organism. It has to be able to evolve. In this way, software developers get to play universal creator, because they control how the program evolves.

“Evolution” is also an apt term because software is never fully finished. A product that successfully passes all tests today might well be dysfunctional as soon as next month. 

The product might also fall out of fashion, so to speak, forcing you to add functionalities in order to stay competitive. A good example of this was when every social media platform started adding a “stories” feature after Snapchat first introduced it.

Adding new features may seem simple from the user’s perspective, but for the developers behind a product, it can be highly problematic. Each change to the code comes with a guarantee of increased maintenance costs—and the risk that existing features will fail.

If software is never truly finished, how do you measure the success of testing?

The answer is, you evaluate the functionality, and you do it continuously.

Functionality is where developers can drift away from business stakeholders in poorly managed projects. Software engineering is an art, and developers enjoy building elegant things that work under all possible conditions (including edge cases) while performing faster than military-grade systems.

But if none of those goals are in the project requirements, then all of that brilliant programming will only be a waste of time and money with no foreseeable return.

Functionality is about giving the users something they can… well, use. Successful testing isn’t about how many test cases you’ve created or if the code is perfectly formatted and uses the latest and greatest tech and services.

To quote our own seasoned expert on the matter:

“The most important goal of testing is to guarantee that the system will work when it’s in the hands of the end user. Users don’t care about how well something was made, how it was tested, or how sophisticated the engineering is. Thus, the most essential tests are not those that verify implementation details, but those that verify functionality, features most critical to the end users.”

—Maciej Urbański,
Expert Python Developer at STX Next

Why should your software project include testing?

There’s one situation where you don’t need to worry about testing as much: when you’re building a Minimum Viable Product (MVP).

However, this needs to be a conscious decision. Make no mistake, without any kind of automated testing, you’re raising tech debt that will catch up with the project. It’s not a question of “if,” but rather “when.”

After a conscious no-testing MVP stage like that—if you want to make sure your project goes smoothly and the product grows without obstacles—you’ll need to start investing in testing.

This is a bit of a given, but web app development is very complicated:

  • Project requirements never stay the same and stakeholder requests keep changing.
  • You can’t reliably predict how your application will behave once you put all of its parts together.
  • Modern products include multiple third-party services that interact with your code in many ways, which can influence security, performance, and many other product characteristics.

All of the above is why software development has to be an iterative process: build something, see if it works, repeat forever. Testing is a critical part of this process.

Testing can keep your business afloat

Critical errors happen. They happen even in environments where everything is continuously tested and built with the most powerful, bleeding-edge tools and infrastructure—like Facebook or Microsoft.

These are the companies we all look up to, and even they can't avoid bugs, so you can never expect your application to work perfectly 24/7/365.

How is it that these companies continue to dominate the market, despite huge security issues and global outage-causing bugs? The answer is simple: testing.

Types of tests in software projects

Does that mean you have to keep thousands of people on retainer in case you need to test a product quickly? That would be quite inefficient!

Plus, having someone manually test your software is just one of the ways you can test things. Most testing is done automatically nowadays.

Humans and machines collaborate to ensure that software performs as it should. The automated side breaks down into three main types of tests:

1. Unit

Is each individual component working as it should? Unit tests enable developers and testers to check the smallest individual parts of the code at extreme speed, often running thousands of tests per second.

2. Integration

Are all components cooperating properly? With integration tests, you combine different components made up of many units and see if there’s any conflict between them.

3. End-to-end (E2E)

Is the product functional from the user’s perspective? With E2E tests, you simulate real-life user scenarios to see if the product works properly and performs its function(s).
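The three levels can be sketched with a toy example. Everything below is illustrative (the "shop" functions are invented for this article, not taken from any real project):

```python
# Toy "shop" with two components, used to illustrate the test levels.

def item_price(catalog: dict, sku: str) -> float:
    """A single unit: look up one item's price."""
    return catalog[sku]

def cart_total(catalog: dict, skus: list) -> float:
    """Combines units: sums the prices of every item in the cart."""
    return sum(item_price(catalog, sku) for sku in skus)

# 1. Unit test: the smallest part in isolation, runs in microseconds.
def test_item_price():
    assert item_price({"tea": 3.0}, "tea") == 3.0

# 2. Integration test: do the components cooperate correctly?
def test_cart_total():
    catalog = {"tea": 3.0, "mug": 7.5}
    assert cart_total(catalog, ["tea", "mug", "tea"]) == 13.5

# 3. An E2E test would drive the real product (e.g. a browser) through
#    the same flow: add items, check out, verify the displayed total.
```

The pattern scales up: real unit tests pin down one function or class, integration tests wire several of them together, and E2E tests exercise the deployed product from the outside.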

That covers the basics of software testing. All in all, you simply can’t deliver a product that doesn’t work to your users, and testing is the main way to avoid that.

Now, let’s explore what your testing process should look like.

Is there a single best process for testing, or is it context-based?

The short answer is this: while the best process is often tailor-made, there exists a good testing process that all projects in the kickoff phase can follow from the start. We’ve outlined these best practices for you below.

However, best practices are often treated as merely suggestions, as they should be, so the exact testing strategy is always context-based.

For example, when a development team is starting a new project without any QAs or testers on board, they will immediately begin writing automated unit tests along with their code. As a matter of fact, it’s our goal at STX Next to use unit tests in all of our projects due to their cost-effectiveness, especially over time.

Here are the four steps that generally apply to most software projects, and will most likely apply to yours:

Step #1: Start testing early

You have a choice. You can either start testing as soon as you start development, or just a little bit later. In our projects, we prefer to start both testing and QA as soon as possible. In some situations, we might wait up to a month or two to reach the MVP stage as a cost-saving measure.

Step #2: Manual testing can’t be overlooked

The first point of contact with the system has to be an actual human being. Someone needs to use the system and verify that it works as it’s supposed to.

Proofs of concept can’t be verified by machines. Plus, initial manual testing can uncover concept flaws and big-picture issues with the software. You can save a lot of money and trouble if you catch these issues early.

Step #3: Appropriate automated testing strategy

Once you’ve manually verified that the system works, it’s time to set up a strategy for long-term testing. This means answering questions like:

  • What kind of automated tests will you use and how will you implement them?
  • Where should the main focus be?
  • How many manual tests can be replaced by automated tests?

Please note that manual testing can’t be replaced completely, but should be used together with automated tests in varying proportions. When a product is in its infancy and features change daily—some completely removed or changed beyond recognition—there’s little value to be gained from automation.

However, for mature or well-defined, strict-to-specifications applications, automated tests are the only viable way to guard old features against regressions without grinding development speed to a halt.

Your general strategy should revolve around automating testing for well-known, here-to-stay features. Manual testing should mostly focus on the new and upcoming capabilities of your app.

Testing is almost like a tiny project within your project, so the strategy should evolve and change along with the modifications made to your software.

Step #4: Long-term monitoring and upkeep

Next to testing, monitoring is a huge part of long-term quality assurance. Regardless of how well-tested your app is, some bugs are bound to creep in. When they do, they need to be handled gracefully.

Monitoring enables developers to locate issues quickly, find the code that needs fixing, test a bug fix, and bring your system to perfect working order within hours.

Without monitoring, chances are that you wouldn’t even get to test your bug fix. Worst-case scenario, you wouldn’t even know about the bug before you started losing customers. With so many alternatives to almost any digital product on the market, users have little incentive to report bugs; they can just move on to a different product.
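A minimal sketch of what this looks like in practice, using only the standard logging module (the "payments" subsystem and function names are hypothetical, chosen for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")  # hypothetical subsystem name

def charge(amount: float) -> str:
    """A risky operation that can fail on bad input."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return "ok"

def handle_request(amount: float) -> str:
    try:
        return charge(amount)
    except ValueError:
        # log.exception records the full stack trace, so developers can
        # locate the failing code without waiting for a user report.
        log.exception("charge failed for amount=%r", amount)
        return "error"
```

Real monitooring setups route these records to an aggregation service and alert on them, but the principle is the same: every failure leaves a trace that points at the code to fix.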

Monitoring and testing need to work in tandem to keep both the software and the business healthy and efficient throughout the product life cycle.

What are some of the best practices for automated testing?

As mentioned before, first there are manual tests. You shouldn’t skip them—ever.

Then there are different types of automated testing, which we conventionally organize in the testing pyramid:

  1. unit tests, 
  2. integration tests,
  3. end-to-end tests.

We won’t explore them in detail, otherwise this article would turn into a 100-page dissertation. Instead, here are a few broadly applicable, cost-saving best practices for each type of automated test:

1. Avoid unit test overkill

Unit tests are tricky. They’re easy to write, but too many of them will make it near impossible to refactor code later on. Adding more unit tests is costly, and the biggest cost is increased maintenance. Unit tests get more difficult to manage the closer they are to system internals.

So, the question is: should you test every single conceivable unit in your software?

This neatly ties in with two general approaches to test-driven development:

  • inside-out (white box),
  • outside-in (black box).

With inside-out, you ideally pay close attention to every single unit of your code.

With outside-in, you care more about the result that your code generates, and it doesn’t matter to you exactly how that result was generated.

If software were a car, inside-out would be like a Mercedes factory. Everything in a Mercedes is engineered and built from the ground up to fit perfectly, resulting in a smooth ride and one of the best automobile experiences money can buy.

Outside-in would be a shed in which you transform an old Volkswagen and a Honda bike into a coupe for junk car racing. You just need it to drive fast—you don’t care if it looks bad, has a motorcycle engine, or feels like a death trap while you’re driving it.

Developers are pretty opinionated people, so chances are that any given project will have a few fans of inside-out testing and a few fans of outside-in. Your final testing process will always be a compromise between these two approaches.

Since our focus is on the end user, not just writing code for its own sake, we tend to opt for treating parts of the system as black boxes. Even more importantly, our “unit” tests are written not for a unit of code, but for a unit of functionality. This approach meshes well with domain-driven design, which is a subject deserving of its own article (or preferably a book or two).

With the right strategy for unit testing, you can save a lot of money in the long term. Focus on functionality, not implementation details. This saves you from rewriting your tests every time a refactor is needed, thus leaving you more time and money to focus on building new features.
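To make the distinction concrete, here is a hedged sketch (the function and its behavior are invented for this example):

```python
def normalize_username(raw: str) -> str:
    # Implementation detail: strip, lowercase, collapse inner spaces.
    return " ".join(raw.strip().lower().split())

# Functionality-focused test: pins down only observable behavior, so it
# survives a refactor that swaps split/join for, say, a regex.
def test_usernames_match_regardless_of_case_and_spacing():
    assert normalize_username("  Ada  Lovelace ") == normalize_username("ada lovelace")

# An implementation-focused test (avoid) would instead assert that
# str.split was called, and would break on any internal rewrite even
# though the behavior users see is unchanged.
```

The first style of test keeps paying for itself across refactors; the second has to be rewritten every time the internals change.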

As a rule of thumb, when your developers find themselves straining to reach an arbitrary test coverage percentage, as counted by lines of code hit, they might be overdoing unit testing.

That being said, coverage percentage is still a useful testing metric, as it can alarm you when you’re not testing enough. Roughly speaking:

  • anything below 15% coverage is not enough,
  • above 70% coverage should be okay,
  • strict 100% coverage only makes sense when you’re building a system for NASA or JPMorgan Chase.

2. Narrow and broad integration testing

Software development guru Martin Fowler notes in one of his recent articles that there are two types of integration testing.

There’s narrow integration testing, where you test the parts of your code that communicate with other components. You use “test doubles,” and the tests can be done using the same framework that’s used for unit tests.

The second type is broad integration testing, where you don’t just test if components can communicate, but rather check the whole functional path with actual live services, in a much larger testing environment than the one used for unit tests.

“So what?” you might ask. The problem, as Fowler notes, is that most software developers only think of broad testing when you mention integration tests.

And there lies another way to lower your testing budget. If you exchange broad integration tests for narrow integration tests wherever possible, you’ll end up with lower costs and the tests will most likely run faster, too. It’s worth noting here that E2E tests are basically just very broad integration tests.
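A narrow integration test can indeed reuse the unit-test framework plus a test double; the exchange-rate service and all names below are hypothetical, purely to show the shape:

```python
from unittest.mock import Mock

def fetch_exchange_rate(client, currency: str) -> float:
    """The thin adapter in our code that talks to an external service."""
    response = client.get(f"/rates/{currency}")
    return float(response["rate"])

# Narrow integration test: verify our adapter's contract with the
# service using a test double -- fast, cheap, no live network needed.
def test_fetch_exchange_rate_parses_response():
    fake_client = Mock()
    fake_client.get.return_value = {"rate": "4.32"}
    assert fetch_exchange_rate(fake_client, "EUR") == 4.32
    fake_client.get.assert_called_once_with("/rates/EUR")

# A broad integration test would run the same path against a real,
# deployed rates service in a full testing environment.
```

The narrow version runs in milliseconds on every commit; the broad version is reserved for the handful of paths where only a live service will do.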

Therefore, when planning integration tests with your team, discuss the benefits and drawbacks of the narrow and broad varieties, and decide on the ideal balance for your project.

3. You can’t perform only end-to-end tests

End-to-end tests are about using the product in the same way that users would do it. Unfortunately, you can’t “just do” E2E tests and call it a day. There are costs to everything. It would be really awesome if you could just pull tests out of thin air!

E2E tests are very expensive, so you should make sure they’re worth the investment. They don’t always make sense, though they are a perfect fit in some cases, for instance:

  • for basic smoke testing after deployment that would otherwise be done manually to make sure the product works;
  • for testing things that otherwise wouldn’t be tested at all, in case integration tests weren’t done for external services or frontend and backend integration.

To perform E2E tests, you need a lot of resources to develop, maintain, and run the E2E test suite. The main problem is too many moving parts.

You’re testing the system as a whole, which leaves you with a black-box model: you have to drive the application through its human-facing interface rather than its internals. This makes E2E tests unreliable and highly resource-intensive to run.

However, if you combine unit and integration tests in a smart way, you can reduce the need for E2E tests to a minimum.

4. Don’t use E2E recorders

Your development/QA team might be tempted to use recorders, which promise to automate the job of E2E testing. “You just click through the functionalities you want to test, and the recorder will build the tests for you!”

Don’t believe the hype. The only moment when recorders make sense is during the very early stages of a project. Their usefulness ends right after the proof of concept is verified.

It’s better to just have developers write E2E test code. Code like this can be reused, it will perfectly fit the context of your project, and it will be much easier to maintain in the long term. Recorders only make things unnecessarily complicated in terms of maintenance, as each test has to be redone when minimal changes are introduced to the UI.

Final thoughts on software testing and quality assurance for business managers

To sum up, after reading this article, you should remember these three crucial things above all else:

  • Testing is an essential part of the software life cycle.
  • In your project, you should start testing as soon as possible.
  • When testing, you should combine manual tests with automated (unit, integration, and end-to-end) tests.

From a business perspective, testing might seem like an expense with no visible, immediate return. From a development perspective, testing is essential to fighting tech debt and ensuring the functionality and longevity of software.

Due to extreme competition in the digital world, the ability to develop new features is essential to any business. From the business perspective, it’s important to look at tests as an enabler rather than just a necessary overhead cost.

Similarly, from the development perspective, tests should be prepared with caution and attention to cost, because a cost-effective strategy is essential to longevity and productivity.

At STX Next, we’re more than ready to support your project with top-notch testing and QA processes or any other software development services you may need. All you have to do is reach out to us and we’ll take it from there.
