Testing
QA Process
QA (Quality Assurance) is often misunderstood as simply "testing." It's broader. QA encompasses testing but also test planning, process improvement, bug tracking, and ensuring quality throughout development. A QA process defines how your team verifies that software meets requirements and works reliably.
What QA Really Means
Quality Assurance is a mindset: ensuring quality is built in, not tested in afterwards. QA involves:
- Test planning: What will we test? How? What environments? This prevents ad-hoc testing.
- Test execution: Both automated and manual. Running tests to find bugs.
- Bug management: Finding, reporting, triaging, tracking, and verifying fixes.
- Process improvement: Learning from bugs. Why did this slip through? How can we prevent it next time?
- Quality metrics: Tracking defect density, test coverage, time-to-fix. Data-driven improvements.
- Communication: Ensuring developers, designers, product managers, and clients have shared understanding of quality.
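The quality metrics above are simple to compute once bugs are tracked with dates. A minimal sketch, assuming hypothetical bug records with `reported`/`fixed` dates and a known codebase size (all names here are illustrative, not from any real tracker):

```python
from datetime import date

# Hypothetical bug records: when each bug was reported and fixed.
bugs = [
    {"reported": date(2024, 3, 1), "fixed": date(2024, 3, 4)},
    {"reported": date(2024, 3, 2), "fixed": date(2024, 3, 3)},
    {"reported": date(2024, 3, 5), "fixed": date(2024, 3, 12)},
]
kloc = 12.5  # thousand lines of code in the release under test

# Defect density: bugs found per thousand lines of code.
defect_density = len(bugs) / kloc

# Mean time-to-fix, in days.
mean_fix_days = sum((b["fixed"] - b["reported"]).days for b in bugs) / len(bugs)

print(f"Defect density: {defect_density:.2f} bugs/KLOC")
print(f"Mean time-to-fix: {mean_fix_days:.1f} days")
```

Tracking these numbers release over release is what makes the improvement data-driven: a rising defect density or time-to-fix is a signal to investigate the process, not just the individual bugs.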
The best QA is collaborative. QA engineers work with developers during development, not as gatekeepers at the end. This "shift-left" approach catches bugs earlier when they're cheaper to fix.
QA Roles and Responsibilities
Different team sizes have different QA structures:
| Role | Background | Responsibilities |
|---|---|---|
| QA Engineer | Often non-technical or semi-technical | Write test plans, execute manual tests, report bugs, verify fixes, sometimes write automated tests |
| SDET (Software Development Engineer in Test) | Software engineer who specializes in testing | Write automated test suites, build testing infrastructure, design test frameworks, troubleshoot flaky tests |
| Manual Tester | Focuses on exploratory and user-experience testing | Explore the app to find edge cases, test user journeys, verify design, catch issues automation misses |
| Quality Manager | Leadership | Oversee testing process, manage QA team, set quality standards, report to leadership |
| Automation Test Engineer | Software engineer | Write and maintain automated tests, set up CI/CD testing pipelines, improve test coverage |
Small teams might have one person doing all these roles. Enterprises have specialized roles. The key: someone owns quality and advocates for it.
QA in Different Methodologies
Different development methodologies approach QA differently:
- Waterfall: QA is a separate phase at the end. Developers build, then QA tests everything. This delays bug discovery.
- Agile: QA is integrated. QA tests features in the same sprint they're built, so bugs are caught sooner. QA attends standups and sprint planning.
- Continuous Delivery: Testing is automated and continuous. Every commit triggers tests. QA focuses on exploratory testing and process improvement.
- DevOps: Developers own testing. QA helps build the testing infrastructure but developers write tests.
Modern approaches (Agile, CD) integrate QA early. This catches bugs faster and reduces the need for a massive end-stage testing phase. Waterfall delays QA, making bugs expensive to fix.
Writing Test Plans
A test plan documents what you'll test and how. It doesn't need to be formal, but structure helps:
- Scope: What are we testing? (features, integrations, platforms)
- Approach: Unit tests? Integration tests? Manual testing? E2E? What environments?
- Test cases: Specific scenarios to test. For a login form: valid credentials, invalid credentials, empty fields, SQL injection attempts, etc.
- Entry/exit criteria: When do we start testing? (after feature code is complete) When are we done? (all test cases pass, no critical bugs)
- Timeline: When will tests run? Daily? Weekly? Before releases?
- Risks: What are we most concerned about? (payments, security, data loss) Focus testing there.
A test plan doesn't need to be 50 pages. A one-page summary for each feature is often enough. The point: intentional testing, not random clicking.
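The login-form test cases from the plan above translate directly into a table-driven automated test. A minimal sketch, where `validate_login` is a hypothetical stand-in for the real authentication code and the hard-coded credentials and error strings are illustrative:

```python
# Hypothetical login validator standing in for the real authentication code.
def validate_login(email: str, password: str) -> str:
    if not email or not password:
        return "error: missing fields"
    if "'" in email or "--" in email:  # crude stand-in for input sanitizing
        return "error: invalid characters"
    if email == "user@example.com" and password == "hunter2":
        return "ok"
    return "error: invalid credentials"

# Table-driven test cases mirroring the test plan: (email, password, expected).
CASES = [
    ("user@example.com", "hunter2", "ok"),                       # valid credentials
    ("user@example.com", "wrong", "error: invalid credentials"), # invalid credentials
    ("", "", "error: missing fields"),                           # empty fields
    ("' OR 1=1 --", "x", "error: invalid characters"),           # SQL injection attempt
]

for email, password, expected in CASES:
    result = validate_login(email, password)
    assert result == expected, f"{email!r}: got {result!r}, expected {expected!r}"
print("all login test cases passed")
```

The table of cases is the executable form of the test plan: adding a scenario means adding a row, which keeps the plan and the tests from drifting apart.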
Exploratory Testing
Exploratory testing is the opposite of scripted testing. A tester uses the app without a script, trying to break it. They follow curiosity: what happens if I click that? What if the page is slow? What if I go back?
Exploratory testing catches bugs that scripted tests miss:
- Usability issues (confusing workflows)
- Edge cases (unexpected user behavior)
- Performance problems (slowness)
- Accessibility issues (can you use it with keyboard? screen reader?)
Automated tests verify requirements. Exploratory testing finds problems outside the requirements. The best QA combines both: scripts to verify specifications, exploration to find surprises.
Bug Tracking Lifecycle
When a bug is found, it goes through a lifecycle:
- Found: QA discovers the bug.
- Reported: QA enters it into the bug tracker (Jira, GitHub Issues, etc.) with details.
- Triaged: A developer reviews it. Is it really a bug? What's the severity? Is it in scope?
- Assigned: A developer is assigned to fix it.
- Fixed: Developer fixes the code and marks the bug as fixed.
- Verified: QA re-tests to confirm the fix works and the bug is gone.
- Closed: Bug is officially resolved.
Some bugs are marked "Won't Fix" (low priority, expected behavior, out of scope). That's okay. Not every issue is a bug; sometimes it's a feature request or intended behavior.
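The lifecycle above is really a small state machine, and some teams encode it in their tooling so bugs can't skip steps. A minimal sketch with hypothetical status names (the transition table is an assumption, not any particular tracker's workflow):

```python
# Hypothetical bug-status workflow mirroring the lifecycle above.
# Maps each status to the statuses it may legally move to.
TRANSITIONS = {
    "reported": {"triaged"},
    "triaged": {"assigned", "wont_fix"},
    "assigned": {"fixed"},
    "fixed": {"verified", "assigned"},  # verification can fail and reopen
    "verified": {"closed"},
    "wont_fix": set(),
    "closed": set(),
}

def advance(status: str, new_status: str) -> str:
    """Move a bug to new_status, rejecting illegal jumps
    (e.g. fixed -> closed without QA verification)."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# A bug must pass QA verification before it can be closed.
status = "reported"
for step in ("triaged", "assigned", "fixed", "verified", "closed"):
    status = advance(status, step)
print(status)  # closed
```

Note that `fixed` can transition back to `assigned`: when QA's re-test fails, the bug reopens rather than closing, which is exactly the verification step doing its job.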
Writing a Good Bug Report
A good bug report helps developers fix it quickly. Include:
- Title: One-line summary. "Login button not visible on mobile" not "Bug with login"
- Severity: Critical (app crashes), high (feature broken), medium (workaround exists), low (minor UI issue)
- Environment: OS, browser, app version, test environment (staging, production)
- Steps to reproduce: Exact steps that cause the bug. "1. Click login. 2. Enter email. 3. Button is hidden."
- Expected result: What should happen? "Button should be visible and clickable."
- Actual result: What actually happens? "Button is hidden off-screen."
- Screenshots/video: A picture is worth a thousand words. Attach a screenshot or video.
A well-written bug report is actionable. A developer can reproduce it immediately and start fixing. Poor bug reports lead to back-and-forth clarification and delays.
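Some teams enforce the checklist above with a structured report template so incomplete reports are rejected at entry. A minimal sketch as a Python record; the class, field names, and severity values are illustrative assumptions, not any real tracker's schema:

```python
from dataclasses import dataclass, field

SEVERITIES = ("critical", "high", "medium", "low")

@dataclass
class BugReport:
    """Hypothetical bug-report record enforcing the fields described above."""
    title: str
    severity: str
    environment: str          # e.g. "iOS 17, Safari 17, staging"
    steps_to_reproduce: list
    expected: str
    actual: str
    attachments: list = field(default_factory=list)  # screenshot/video paths

    def __post_init__(self):
        if self.severity not in SEVERITIES:
            raise ValueError(f"severity must be one of {SEVERITIES}")
        if not self.steps_to_reproduce:
            raise ValueError("steps to reproduce are required")

report = BugReport(
    title="Login button not visible on mobile",
    severity="high",
    environment="iOS 17, Safari 17, staging",
    steps_to_reproduce=["Open /login", "Enter email", "Observe submit button"],
    expected="Button is visible and clickable",
    actual="Button is rendered off-screen",
)
print(report.title)
```

Whether enforced by code or just by a tracker template, the point is the same: a report missing reproduction steps or an environment shouldn't make it to a developer's queue.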
Regression Testing
Regression testing means re-testing old features after a change to ensure they still work. When you fix a bug, you want to verify the fix didn't break something else.
Automated tests (unit, integration, E2E) are your primary regression testing tool. But manual regression testing also happens: QA tests affected features when a fix lands. For critical features (payments, security), manual regression testing is often valuable even with good automated tests.
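A common convention (an assumption here, not something every team does) is to add an automated test named after the bug when fixing it, so the fix is permanently guarded by the regression suite. A minimal sketch with a hypothetical `truncate_name` function and an invented bug number:

```python
# Hypothetical fix: truncate_name used to cut names one character short
# (bug #482 in an imaginary tracker). The regression test below pins the fix.

def truncate_name(name: str, limit: int = 10) -> str:
    """Return name shortened to at most `limit` characters."""
    return name[:limit]  # the buggy version sliced to limit - 1

def test_bug_482_truncation_keeps_ten_chars():
    # Regression test: a 10-character name must survive untouched.
    assert truncate_name("abcdefghij") == "abcdefghij"
    # And longer names are cut to exactly 10.
    assert len(truncate_name("abcdefghijk")) == 10

test_bug_482_truncation_keeps_ten_chars()
print("regression test passed")
```

Naming the test after the bug makes it self-documenting: if it ever fails again, the failure message points straight at the original report.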
User Acceptance Testing (UAT)
UAT is when the actual users (or their representatives) test the app before it goes live. It answers: "Does this meet your requirements?" Developers can build exactly to spec but miss the user's actual needs.
UAT typically happens in a staging environment that mirrors production. The client tests key workflows and approves the app for release. If they find issues, they're fixed before launch. UAT is especially important for enterprise or client-facing software.
UAT isn't a testing technique; it's a gate. QA and developers have already tested thoroughly. UAT is the final check from the user's perspective.
Non-Functional Testing
Beyond functional testing (does it work?), QA also tests non-functional properties:
- Performance: How fast? Can it handle load? Page load time, API response time.
- Security: Can someone hack it? Input validation, SQL injection, XSS, authentication.
- Accessibility: Can people with disabilities use it? Screen reader support, keyboard navigation, color contrast.
- Reliability: Does it crash? Memory leaks? Connection failures?
- Usability: Is it intuitive? Can a user accomplish their goal without instruction?
- Compatibility: Works on different browsers, devices, OS versions?
These are often overlooked but critical. An app that's functionally correct but slow or inaccessible is still a bad product.
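Non-functional requirements can be tested automatically too, by asserting against a budget instead of a correct value. A minimal performance-check sketch, where `fetch_dashboard` is a hypothetical stand-in for the code path under test and the 0.5-second budget is illustrative:

```python
import time

def fetch_dashboard() -> str:
    """Stand-in for the operation under test (e.g. rendering a page
    or calling an API); the sleep simulates real work."""
    time.sleep(0.01)
    return "<html>dashboard</html>"

def test_dashboard_meets_latency_budget(budget_seconds: float = 0.5) -> float:
    """A simple performance check: fail if the operation exceeds its budget."""
    start = time.perf_counter()
    fetch_dashboard()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"too slow: {elapsed:.3f}s"
    return elapsed

elapsed = test_dashboard_meets_latency_budget()
print(f"dashboard rendered in {elapsed:.3f}s")
```

The same assert-against-a-budget pattern applies to other non-functional properties: payload size, memory use, or an accessibility audit score can all be checked in CI the same way.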
When to Hire a Dedicated QA Engineer
Small teams (1-3 developers) often don't hire dedicated QA. Developers test their own code and do code reviews. This works if the team is disciplined about testing.
Medium teams (4-10 developers) benefit from one QA engineer. They write test plans, execute manual tests, manage bugs, and advocate for quality. The team still writes automated tests; QA focuses on exploratory testing and process.
Large teams (10+ developers) often have multiple QA roles: manual testers, automation engineers, quality managers. Specialized roles allow depth.
Size isn't the only factor; complexity matters too. A simple CRUD app doesn't need dedicated QA. A financial system or healthcare app absolutely does.
QA at Startups vs Enterprises
Startups: Speed is critical, so formal QA processes are often skipped. But this creates technical debt. The best startups adopt lightweight QA: automated tests for critical features, some manual testing, quick bug triage. As the product matures, QA becomes more important.
Enterprises: QA is formalized. Detailed test plans, strict bug tracking, sign-offs before release. More process but better control. The challenge: bureaucracy can slow things down. The best enterprises optimize QA process to be thorough but efficient.
Key Takeaways
QA is more than testing. It's a process ensuring quality throughout development. QA roles vary by team size. Integrate QA early (shift-left) rather than as an end-stage gate. Use both automated and manual testing. Write clear test plans and bug reports. Track bugs and verify fixes. Include UAT for client-facing software. Non-functional testing (performance, security, accessibility) is as important as functional testing. QA is most effective when it's collaborative, not adversarial.