Process
Post-Launch
Launch day is both a celebration and the beginning of a stressful period. The code is live. Real users are interacting with it. You'll discover problems you never saw in testing. This is the post-launch period—typically the first few days to weeks after you ship. How you handle it determines whether your launch is a success or a crisis.
What Happens After Launch
The post-launch phase has several simultaneous activities:
- Monitoring: watching for errors, performance issues, unusual behavior
- Bug fixing: discovering problems and shipping fixes quickly
- Support: helping confused users, answering questions
- Feedback collection: learning what users like and dislike
- Metrics tracking: checking if the launch succeeded
The team is usually tense during this period. Developers are on alert for issues. Product/business is watching metrics. Sales/support is handling user questions. It's like everyone is holding their breath.
Soft Launch vs Full Launch
You have two approaches to launch:
| Approach | How It Works | Pros | Cons |
|---|---|---|---|
| Soft Launch | Release to a small subset of users. Maybe 1% of users, or a geographic region, or beta testers. Gather feedback. Fix issues. Then full launch. | Lower risk. You catch major issues before all users see them. Easier to roll back if catastrophic problems occur. | Slower time to full market. Users in the soft launch group might see rough edges. Longer feedback cycle. |
| Full Launch | Release to all users at once. No gradual rollout. Everyone gets it at the same time. | Faster to market. Everyone gets the feature at once. Simpler mentally (you don't have to manage different versions). | Higher risk. If there's a critical bug, all users are affected. Harder to roll back if many users have already adapted to the new version. |
Prefer a soft launch when the feature is mission-critical or likely to face adoption friction. A full launch makes sense for lower-risk updates or when speed matters most.
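A soft launch is commonly implemented as a percentage rollout behind a feature flag: hash each user into a stable bucket so the same user is consistently in or out of the rollout. A minimal sketch, assuming string user IDs; the flag name and percentages are illustrative:

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """Deterministically bucket a user into [0, 100) and compare to the rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100  # stable value in 0.00 .. 99.99
    return bucket < percent

# Start at 1%, then widen to 10%, 50%, and 100% as confidence grows.
enabled = in_rollout("user-42", "new-checkout", 1.0)
```

Because the hash includes the flag name, different features get independent rollout populations, so the same 1% of users isn't always the guinea pig.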
The First 48 Hours
The first 48 hours are critical. Most users discover the feature, try it, and give immediate feedback. Major bugs surface. Watch these metrics:
- Error rates: Are JavaScript errors or server errors spiking? What's breaking?
- Performance: Are pages loading slowly? Are API responses timing out? Is there unusual latency?
- User behavior: Are users actually using the new feature? If not, why? Are they confused?
- Conversion: If it's a signup or purchase feature, is conversion in line with expectations?
- Support volume: Are support tickets or chat messages spiking with problems or questions?
During these 48 hours, developers should be available. Prioritize fixing bugs over new features. Have a runbook—who gets paged when errors spike? Who decides to roll back?
Bug Triage and Hotfixes
Not every bug is urgent. Triage them:
- Critical (fix immediately): The feature is broken and unusable. Users can't complete the main flow. Data loss or security issue. Fix in minutes.
- High (fix within hours): The feature works but with issues. Users can still use it but it's frustrating or slower. Fix in the next few hours.
- Medium (fix in next sprint): It's a bug but it doesn't block the main use case. Edge case. Can wait for normal sprint cycle.
- Low (backlog): Minor cosmetic issue or something that affects very few users. Goes in the backlog like normal work.
Critical issues get hotfixes—urgent commits to main that bypass the normal PR process because speed matters more than ceremony. High issues might also be hotfixed. Medium and low go through the normal process.
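The triage rules above can be encoded as a small classifier so the team applies them consistently under pressure. A sketch; the field names and the users-affected threshold are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    blocks_main_flow: bool        # users can't complete the core task
    data_loss_or_security: bool   # data is lost or exposed
    degrades_experience: bool     # works, but frustrating or slow
    users_affected: int

def triage(bug: BugReport) -> str:
    """Map a bug report to a severity bucket per the rules above."""
    if bug.data_loss_or_security or bug.blocks_main_flow:
        return "critical"  # hotfix immediately
    if bug.degrades_experience:
        return "high"      # fix within hours
    if bug.users_affected > 10:  # arbitrary cutoff for "edge case"
        return "medium"    # next sprint
    return "low"           # backlog
```

The point isn't the exact thresholds; it's that triage decisions are written down once, not re-argued per bug.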
Monitoring in Production
Good monitoring is non-negotiable post-launch. You need to know something is wrong before users report it.
Essential monitoring:
- Error tracking: Tools like Sentry alert you to exceptions in production. You see them immediately and know how many users are affected.
- Performance monitoring: Tools like DataDog or New Relic show you if APIs are slow, database queries are hanging, etc.
- Uptime monitoring: Tools like Pingdom check whether your site is reachable, so you know immediately if the server is down. A status page tool (e.g., Statuspage) then lets you communicate outages to users.
- Business metrics: Track the key metrics for the feature. Signups, conversions, feature usage. If they drop unexpectedly, something's wrong.
- User session recording: Tools like LogRocket or FullStory let you replay user sessions and see what they were doing when they hit a bug.
These tools should alert you proactively: if the error rate jumps to 5%, you get a Slack message immediately, before users call support.
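A proactive alert can be as simple as a periodic job that compares the recent error rate against a threshold and posts to a chat channel. A minimal sketch using only the standard library; the webhook URL is a placeholder, and in practice the error and request counts would come from your metrics backend:

```python
import json
import urllib.request

ERROR_RATE_THRESHOLD = 0.05  # alert when more than 5% of requests error

def check_error_rate(errors: int, requests: int, webhook_url: str) -> bool:
    """Return True (and post an alert) if the error rate exceeds the threshold."""
    if requests == 0:
        return False
    rate = errors / requests
    if rate <= ERROR_RATE_THRESHOLD:
        return False
    # Slack-style incoming webhook: POST a JSON body with a "text" field.
    payload = json.dumps(
        {"text": f"Error rate {rate:.1%} exceeds {ERROR_RATE_THRESHOLD:.0%} threshold"}
    ).encode()
    req = urllib.request.Request(
        webhook_url, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
    return True
```

Run it on a schedule (cron, or your monitoring tool's built-in alert rules, which replace hand-rolled scripts like this in most stacks).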
Collecting User Feedback
Early feedback is gold. Use multiple channels to collect it:
- Support tickets/chat: Users ask questions or report problems. Read these personally. What's confusing?
- In-app feedback: Add a quick feedback button. "How are we doing?" Pop-up surveys. Use tools like Typeform or Qualtrics to gather quick feedback.
- Usage analytics: Track what users do. Which features are used most? Where do they drop off?
- User testing: Call a few power users. Watch them use the feature. Take notes on friction points.
- Social media/forums: Watch Twitter, Reddit, Product Hunt comments. People criticize publicly if they're unhappy.
Don't wait for perfect data. Act on early feedback. If 5 users say the same thing is confusing, it probably is.
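Tallying feedback by theme makes the "five users said the same thing" signal easy to spot. A sketch assuming you tag each piece of feedback with a theme label (the tags here are made up):

```python
from collections import Counter

feedback = [
    "confusing-settings", "slow-load", "confusing-settings",
    "confusing-settings", "slow-load", "confusing-settings",
    "missing-export", "confusing-settings",
]

themes = Counter(feedback)
# Surface anything 5+ users mentioned, most common first.
hot = [(tag, n) for tag, n in themes.most_common() if n >= 5]
print(hot)  # [('confusing-settings', 5)]
```

Even a spreadsheet column plus a pivot table does the same job; the discipline is tagging feedback at all.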
Feature Requests vs Bug Fixes
In the post-launch period, separate requests into two buckets:
- Bug fixes: The feature doesn't work as intended. It's broken. Prioritize these. Fix them in the launch sprint.
- Feature requests: "Can you also do X?" or "I wish it did Y." These are nice-to-haves. Add to the backlog. Schedule for next sprint or later.
The temptation is to add every requested feature. Resist. You'll be chasing your tail. Fix what's broken, then move on. Feature requests come later.
Hypercare Period
The hypercare period is the first 1–2 weeks after launch when developers are on high alert. It's intense. Expect:
- Daily standup to sync on issues
- Developers checking production metrics every hour
- Quick turnarounds on bug fixes
- Interruptions to normal sprint work
During hypercare, the team shouldn't start new features. They're on bug duty. Once things stabilize (error rates normal, no critical issues for 48 hours), move to normal operations.
Transitioning to Normal Mode
How do you know when to stop hypercare and move to normal operations?
- Error rates are stable and low
- Performance is meeting targets
- No critical issues for 48 hours
- Support ticket volume is normal
- Users are successfully using the feature
Once these criteria are met, resume normal sprint work. The feature is live and stable.
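The exit criteria above can be written down as a checklist function so standing down from hypercare is a mechanical decision, not a judgment call made at 2 a.m. A sketch; the specific thresholds are illustrative:

```python
def ready_to_exit_hypercare(
    error_rate: float,            # fraction of requests that error
    p95_latency_ms: float,        # 95th-percentile response time
    hours_since_last_critical: float,
    daily_tickets: int,           # current support ticket volume
    baseline_tickets: int,        # pre-launch daily average
) -> bool:
    """All criteria must hold before standing down from hypercare."""
    return (
        error_rate < 0.01                            # error rates stable and low
        and p95_latency_ms < 500                     # performance meeting targets
        and hours_since_last_critical >= 48          # no critical issues for 48 hours
        and daily_tickets <= 1.2 * baseline_tickets  # support volume near normal
    )
```

Agree on the numbers before launch, while everyone is calm, and then hold to them.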
Planning the Roadmap Post-Launch
What comes after? You have feedback, you have bugs, you have feature requests. What gets built next?
- High-impact bugs: Anything that blocks core use cases or affects many users. Highest priority.
- Usability issues: Data showing users are confused or dropping off. Address this—it's low-hanging fruit to improve adoption.
- Most-requested features: If 50 users asked for something, it probably matters. Add to next sprint.
- Technical debt: Now that you're not in hypercare mode, allocate time to clean up. Refactor, add tests, pay down debt.
- New features: Build on the foundation. New use cases, adjacent features.
Success Metrics
Declare the launch a success or a learning experience, and back the call with data. What metrics matter for this feature?
- Adoption: What percentage of users use the feature? If you launched a signup flow and only 0.1% of visitors sign up, something's wrong.
- Engagement: Do users come back to it? Or is it a one-time thing?
- Conversion: If it's a purchase or signup feature, is conversion acceptable?
- Revenue impact: Did it increase revenue as expected?
- Retention: Does it reduce churn or improve retention?
- User satisfaction: Do users like it? What's the NPS or CSAT score?
Compare actual metrics to your goals. If you hit targets, celebrate. If you miss, dig into why and iterate.
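Comparing actuals to goals can be mechanical once both are written down. A sketch with made-up targets and numbers:

```python
# Targets agreed before launch; actuals pulled from analytics afterward.
targets = {"adoption": 0.20, "conversion": 0.03, "nps": 30}
actuals = {"adoption": 0.24, "conversion": 0.025, "nps": 35}

# Collect every metric that fell short, with (actual, target) for the retro.
misses = {k: (actuals[k], targets[k]) for k in targets if actuals[k] < targets[k]}

if misses:
    summary = f"Missed: {sorted(misses)}"   # dig into why, then iterate
else:
    summary = "All targets hit"             # celebrate
```

The useful habit is committing to `targets` before launch; numbers chosen after the fact always look like success.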
Deprecating and Shutting Down
Not every feature survives. Sometimes you launch something and realize it's not working. How do you shut it down?
- Communicate early: Tell users you're considering sunsetting it. Give them at least 30 days' notice.
- Provide migration path: If data is involved, let users export or migrate it elsewhere.
- Support through end date: Until the feature is gone, keep supporting it and fixing critical bugs.
- Analyze usage: Before shutting down, understand who used it and why. Might inform future features.
- Document the decision: Why did we ship this? Why did we shut it down? This is learning.
Launch Retrospective
A week after things stabilize, do a launch retrospective. The team reflects:
- What went well?
- What was harder than expected?
- What would we do differently next time?
- What did we learn about the feature or users?
Document this. Share it with the team. Each launch should be less chaotic than the last.
Moving Forward
Post-launch is intense but temporary. Stay calm, focus on critical issues, gather feedback, and transition to normal operations when things stabilize. The team will be exhausted after hypercare—let them recover. Celebrate the launch.
The next section covers Onboarding Developers—how to bring new people into the codebase and get them productive quickly.