We are providing a detailed post-mortem report on the service disruption that affected Swapcard customers on February 12th, 2026, from 09:00 to 09:30 UTC. The issue was caused by an unprecedented traffic spike that exceeded the capacity of one of our internal services, leading to temporarily degraded performance across badge printing (SwapAccess) and Studio access.
The goal of this post-mortem is to share the findings of our assessment and the steps taken to resolve the issue, and to provide transparency to our customers.
On February 12th, 2026, Swapcard experienced a 30-minute service disruption affecting badge printing via the Check-in App (SwapAccess) and access to the Studio interface. Customers with live events during this window experienced printing delays, degraded Studio access, and intermittent error messages.
The incident was caused by an unprecedented volume of concurrent traffic hitting one of our internal services. This traffic pattern had not been observed before and exceeded the scaling thresholds configured at the time. The sudden load caused elevated response times that cascaded to downstream services, including badge generation and Studio access. The platform self-recovered as traffic levels normalized around 09:30 UTC.
09:00 UTC | An unprecedented spike in concurrent traffic began hitting one of our core internal services, exceeding previously observed traffic patterns.
09:00–09:15 UTC | The service could not scale fast enough to absorb the sudden load. Elevated latency cascaded to dependent services, causing badge generation timeouts and degraded Studio access.
09:15–09:30 UTC | Traffic levels began to normalize. The platform progressively recovered as request queues cleared.
09:30 UTC | Full service restoration. Badge printing and Studio access returned to normal operation.
Post-incident | Our infrastructure team conducted a thorough investigation and immediately applied improvements to prevent recurrence.
The root cause of this incident was an unprecedented and sudden spike in concurrent traffic that exceeded the scaling capacity of one of our internal services. This traffic pattern had not been encountered before in production, and the service's auto-scaling configuration was not tuned to react quickly enough to absorb such a rapid increase.
As response times on this service climbed, the impact cascaded to dependent features, including badge generation and Studio, which rely on it for real-time operations. This cascading effect amplified the user-facing impact beyond what the initial traffic surge alone would have caused.
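For readers who want intuition for why a surge cascades this way, the minimal sketch below (illustrative only, not Swapcard production code, and with arbitrary numbers) models a service as a queue: as long as arrivals stay under capacity the backlog stays flat, but once arrivals exceed capacity the wait time grows steadily until traffic subsides.

```python
# Illustrative queueing model, not Swapcard production code; all numbers
# are arbitrary. A service processes `capacity` requests per second; when
# `arrival_rate` exceeds it, the backlog and the wait time keep growing.
def simulate(arrival_rate: float, capacity: float, seconds: int) -> list[float]:
    backlog = 0.0
    waits = []
    for _ in range(seconds):
        backlog += arrival_rate            # requests arriving this second
        backlog -= min(backlog, capacity)  # requests served this second
        waits.append(backlog / capacity)   # approx. wait for a new request
    return waits

under_capacity = simulate(arrival_rate=800, capacity=1_000, seconds=60)
over_capacity = simulate(arrival_rate=1_500, capacity=1_000, seconds=60)
print(f"wait with 20% headroom:      {under_capacity[-1]:.1f}s")  # 0.0s
print(f"wait after 60s at 150% load: {over_capacity[-1]:.1f}s")   # 30.0s
```

Once the wait exceeds the timeouts configured in callers such as badge generation, those callers begin to fail even though they are healthy themselves, which matches the cascade described above.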
Following this incident, our infrastructure team immediately took action to strengthen the resilience of the affected services:
Scaling improvements: We have significantly increased the resource capacity and improved the scaling configuration of the affected internal service to handle traffic spikes well beyond the levels observed during this incident. The service can now absorb sudden surges much more effectively.
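We do not detail our orchestration stack here, but as a point of reference, a Kubernetes-style horizontal autoscaler (an assumption for illustration; the function and all numbers below are hypothetical, not our actual settings) sizes a service proportionally to observed load. Raising the replica ceiling and lowering the target utilization, as sketched below, are the kinds of adjustments that give a service more headroom for sudden surges.

```python
import math

# Proportional scaling rule used by Kubernetes-style horizontal autoscalers,
# shown only to illustrate why the configuration matters. All values are
# hypothetical, not Swapcard's actual settings.
def desired_replicas(current: int, current_util: int, target_util: int,
                     max_replicas: int) -> int:
    """Utilization values are percentages (e.g. 180 = 180% of one replica's target)."""
    return min(max_replicas, math.ceil(current * current_util / target_util))

# Before: a high utilization target and a low ceiling cap out under a 2x spike.
print(desired_replicas(current=4, current_util=180, target_util=90, max_replicas=6))   # 6 (capped)
# After: a lower target and a higher ceiling leave room for the same spike.
print(desired_replicas(current=4, current_util=180, target_util=60, max_replicas=20))  # 12
```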
Resource optimization: We have optimized the resource usage of the service to ensure it operates more efficiently under load, reducing the likelihood of capacity issues even during unexpected traffic peaks.
Enhanced monitoring and alerting: We have deployed additional monitoring and alerting specifically targeting the failure patterns observed during this incident. This ensures that if a similar traffic surge were to occur, our team would be automatically notified within seconds and could intervene proactively before any customer-facing impact.
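As an illustration of what such an alert can look like (our actual rules are internal; the class, thresholds, and numbers below are hypothetical), a sustained-surge check fires only when concurrency stays above a baseline for a set period, so a single noisy sample does not page anyone:

```python
# Hypothetical sustained-surge alert, for illustration only; our production
# monitoring rules are internal and more involved.
class SurgeAlert:
    def __init__(self, threshold: int, sustain_s: float):
        self.threshold = threshold  # concurrency level considered a surge
        self.sustain_s = sustain_s  # how long it must persist before paging
        self.breach_start = None    # timestamp when the current breach began

    def observe(self, ts: float, concurrency: int) -> bool:
        """Feed one sample; return True when the on-call team should be paged."""
        if concurrency <= self.threshold:
            self.breach_start = None  # load back to normal; reset the clock
            return False
        if self.breach_start is None:
            self.breach_start = ts    # a new breach just started
        return ts - self.breach_start >= self.sustain_s

alert = SurgeAlert(threshold=10_000, sustain_s=30)
for ts, load in [(0, 4_000), (10, 12_000), (20, 13_000), (45, 15_000)]:
    if alert.observe(ts, load):
        print(f"t={ts}s: paging on-call, surge sustained at {load} concurrent requests")
```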
This incident was caused by an unpredictable traffic pattern that had not been previously observed on our platform. While the disruption was brief, we understand the impact it had on customers running live events during that window, and we take that seriously.
The scaling, optimization, and alerting improvements we have put in place significantly reduce the risk of a similar incident occurring in the future. Our infrastructure team continues to monitor the situation closely.
If you have any questions or concerns regarding this incident, please don't hesitate to reach out to our support team.