Imagine you’re enjoying a quiet evening: good company, relaxed conversations, nothing out of the ordinary. Then your phone lights up with a notification: your web infrastructure is under active attack. This is exactly what happened to one of our executives on January 28 at 20:50. What initially looked like a routine DDoS incident (something many businesses in our region know all too well) quickly unfolded into something far more intricate. Within minutes, it became clear that this wasn’t just traffic flooding our servers. It was a layered, deliberate attempt to probe, distract, and conceal.
What began as a familiar scenario turned into a complex puzzle and a high-stakes game of hide and seek.
TL;DR
- No Gurtam product, system, or data was compromised.
- The Wialon, flespi, and GPS-Trace products run on their own, fully separated infrastructures and were not affected.
- Only the infrastructure serving our public company websites was affected.
- The “hacker” was a fraudster using DDoS as leverage.
- The attack combined technical pressure with psychological manipulation.
- The alleged “leaked data” was recycled from old public breaches.
- A layered defense strengthened mitigation.
- Staying calm, following procedure, and verifying claims prevented costly mistakes.
January 28: Cat-and-mouse game
In late January and early February, several external websites and a few internal systems at Gurtam came under a sustained DDoS (Distributed Denial-of-Service) attack. At its peak, request volumes surged to several hundred thousand per minute — enough to strain even well-prepared infrastructure.
There was no breach. No system or product was compromised. From the very first minutes, we followed our established incident response procedures: identifying malicious traffic and blocking IP addresses. But the scale quickly became the real challenge.
The attack was broad and highly distributed. So many IP addresses were involved that we eventually hit the limit of our load balancer’s blocklist. Meanwhile, traffic kept pouring in, each wave coming from new sources. Blocking manually was no longer enough.
Within 20 minutes, we identified an alternative mitigation approach. Ten minutes later, we had automated it so that newly detected malicious IPs were added to the blocklist without any manual intervention. And still, new addresses kept appearing.
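To illustrate the general shape of that kind of automation, here is a minimal sketch: tail an access log, count requests per source IP, and push anything too noisy into a firewall blocklist. This is not our production tooling; the nginx-style log path, the threshold, and the ipset-backed blocklist are assumptions made purely for the example.

```python
#!/usr/bin/env python3
"""Minimal sketch: tail an access log and auto-block noisy IPs.

Assumptions (not the actual production setup): nginx-style access logs with
the client IP as the first field, and an existing ipset set that the firewall
already drops traffic for, e.g.:

    ipset create ddos_block hash:ip timeout 3600
    iptables -I INPUT -m set --match-set ddos_block src -j DROP
"""
import subprocess
import time
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical log location
THRESHOLD = 300                         # requests per window before blocking
WINDOW_SECONDS = 60
IPSET_NAME = "ddos_block"


def block(ip: str) -> None:
    """Add the IP to the ipset blocklist; "-exist" keeps the call idempotent."""
    subprocess.run(["ipset", "add", IPSET_NAME, ip, "-exist"], check=False)


def follow(path: str):
    """Yield lines appended to the log file, like `tail -f`."""
    with open(path, "r") as fh:
        fh.seek(0, 2)  # start at the end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.2)
                continue
            yield line


def main() -> None:
    counts: Counter[str] = Counter()
    blocked: set[str] = set()
    window_start = time.monotonic()

    for line in follow(LOG_PATH):
        ip = line.split(" ", 1)[0]
        counts[ip] += 1

        if counts[ip] > THRESHOLD and ip not in blocked:
            block(ip)
            blocked.add(ip)

        # Reset per-IP counters every window so legitimate clients recover.
        if time.monotonic() - window_start > WINDOW_SECONDS:
            counts.clear()
            window_start = time.monotonic()


if __name__ == "__main__":
    main()
```

In a real deployment the detection logic is more nuanced than a raw request counter, but the principle is the same: once detection feeds the blocklist on its own, the defender stops losing the race against ever-changing source addresses.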
Around midnight, the traffic stopped. For a brief moment, it seemed the situation had stabilized. But only a few hours later, the attack resumed. For the next two days, it became a technical duel. We blocked. New IPs emerged. We adapted. The attacker adjusted. It was a relentless cat-and-mouse game.
Visualization of DDoS attack intervals during the first 12 hours after the incident began
January 29: Psy-ops started
Meanwhile, on the morning of January 29, the situation took a more revealing turn. Two members of our company board discovered that during the night, the mysterious orchestrator behind the attack had attempted to contact them directly. The message was simple: pay, and the problem goes away.
The timing was no coincidence. The DDoS waves had been intense and well-coordinated. At times, our external websites were either temporarily unavailable or responding noticeably slower than usual. The attacker tried to weaponize that instability, leveraging visible disruption to create urgency and fear.
It’s a familiar tactic. First, create a context: slow or unreachable websites, growing frustration, potential reputational risk. Then, apply pressure. Suggest a quick fix, a financial transaction in exchange for restored stability. The psychology is straightforward: in business, time is money. A website that’s down means lost opportunities, potential customers unable to explore products, unanswered inquiries, interrupted partnerships.
There was another deliberate element: timing. Fraudsters often reach out early in the morning or late at night, when decision-makers are more likely to be tired, distracted, or operating outside their usual support structures. Reduced alertness increases the chances of an impulsive decision. But pressure only works when it meets panic. And panic was never a part of our action plan.
January 29: Round two. Is it over?
Soon after, another threat arrived. This time, the claim was far more serious: the attacker allegedly had access to our internal data. Now things got truly interesting.
That’s why, instead of dismissing the message outright, we decided to engage. We opened a dialogue with the “hacker” and asked for proof. If someone claims to have access, verifying that claim and checking your systems for signs of a breach is the first step. Shortly after, we received several sample datasets (screenshots).
A quick analysis revealed that the data did not originate from our infrastructure. It matched records from previously leaked public databases circulating online. The intimidation tactic was clear: reuse old breaches of other systems, present them as fresh compromises, and rely on fear to close the deal.
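As an illustration of that kind of verification, the sketch below cross-checks email addresses taken from an attacker’s “proof” against the index of known public breaches exposed by the Have I Been Pwned API. The API key and the sample address are placeholders, and this shows only the general idea of confirming that “leaked” records are in fact old public material, not our actual analysis.

```python
"""Sketch: cross-check alleged "leaked" emails against known public breaches.

Assumes the extortionist's samples contain email addresses and that you hold a
Have I Been Pwned API key. HIBP_API_KEY and SAMPLE_EMAILS are placeholders.
"""
import time

import requests

HIBP_API_KEY = "..."                     # placeholder, not a real key
SAMPLE_EMAILS = ["someone@example.com"]  # addresses from the attacker's "proof"


def breaches_for(email: str) -> list[str]:
    """Return the names of known public breaches that contain this address."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={
            "hibp-api-key": HIBP_API_KEY,
            "user-agent": "leak-claim-verification-sketch",
        },
        timeout=10,
    )
    if resp.status_code == 404:  # address not present in any indexed breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]


for email in SAMPLE_EMAILS:
    names = breaches_for(email)
    if names:
        print(f"{email}: already present in old public breaches: {names}")
    else:
        print(f"{email}: not in known public breaches, investigate further")
    time.sleep(7)  # stay well under the per-key rate limit
```

A second, complementary check is to compare the samples against your own records: field names, formats, and values that do not match anything you actually store are a strong hint the “proof” was assembled from someone else’s data.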
January 30: We are so back
If you don’t like the rules — change the game. Since communication with the attacker was “unexpectedly productive”, we chose to shift the dynamic. Rather than reacting, we decided to steer the game. Several team members continued interacting with the attacker and gradually redirected his attention to a decoy system we had prepared. We described it as highly sensitive and business-critical. Predictably, it became the primary target of his focus.
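For readers curious what a decoy can look like in its simplest form, here is a bare-bones sketch: an HTTP service that pretends to guard something valuable while doing nothing except recording who knocks. The port, response, and log destination are hypothetical, and a production decoy would of course be dressed up far more convincingly than this.

```python
"""Sketch: a bare-bones decoy ("honeypot") web endpoint.

Everything here is hypothetical: the point is only to show the concept of a
fake "business-critical" service whose sole job is to log whoever probes it.
"""
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_PATH = "decoy_access.log"  # hypothetical log destination


class DecoyHandler(BaseHTTPRequestHandler):
    def _record_and_reply(self) -> None:
        # Record everything the visitor reveals about themselves.
        entry = (
            f"{datetime.now(timezone.utc).isoformat()} "
            f"{self.client_address[0]} {self.command} {self.path} "
            f"UA={self.headers.get('User-Agent', '-')}\n"
        )
        with open(LOG_PATH, "a") as fh:
            fh.write(entry)

        # Reply with something plausible-looking but worthless.
        body = b'{"status": "authentication required"}'
        self.send_response(401)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    do_GET = _record_and_reply
    do_POST = _record_and_reply

    def log_message(self, *args) -> None:
        pass  # keep stdout quiet; everything of interest goes to LOG_PATH


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```

The value of a decoy is less in the code than in the story told around it: every minute an attacker spends probing a dead end is a minute the real perimeter gets stronger.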
While the attacker concentrated on the decoy, we reinforced the real perimeter. Traffic was routed through the Google network so its infrastructure could absorb and filter L3/L4 volumetric attacks at scale. On top of that, we implemented L7 protection using Google Cloud Armor to mitigate application-layer threats.
As an additional safeguard, especially since it wasn’t always fully transparent how effectively Cloud Armor was filtering certain patterns, we deployed CrowdSec as a WAF and blocker for suspicious TCP and HTTP traffic. CrowdSec’s community-driven public blocklists were also activated, adding another intelligence-fed layer of defense. While the attacker believed he was closing in, we were tightening the net and resolved the incident for good.
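To give a sense of how CrowdSec decisions are consumed, here is a minimal bouncer-style check against CrowdSec’s Local API: before serving a request, ask whether there is an active ban decision for the client IP. It assumes a local CrowdSec instance on its default address and an API key issued with cscli bouncers add; a real deployment would normally rely on the off-the-shelf bouncers rather than custom code like this.

```python
"""Sketch: ask a local CrowdSec instance whether an IP should be blocked.

Assumes CrowdSec's Local API on its default address (127.0.0.1:8080) and a
bouncer API key created with `cscli bouncers add my-bouncer`. The IP below is
a documentation-range example, not real traffic.
"""
import requests

CROWDSEC_LAPI = "http://127.0.0.1:8080"
BOUNCER_API_KEY = "..."  # placeholder: value printed by `cscli bouncers add`


def is_banned(ip: str) -> bool:
    """Return True if CrowdSec currently holds a ban decision for this IP."""
    resp = requests.get(
        f"{CROWDSEC_LAPI}/v1/decisions",
        params={"ip": ip},
        headers={"X-Api-Key": BOUNCER_API_KEY},
        timeout=2,
    )
    resp.raise_for_status()
    decisions = resp.json() or []  # the API returns null when nothing matches
    return any(d.get("type") == "ban" for d in decisions)


if __name__ == "__main__":
    client_ip = "203.0.113.7"  # example IP from the documentation range
    if is_banned(client_ip):
        print(f"{client_ip}: blocked by an active CrowdSec decision")
    else:
        print(f"{client_ip}: no ban decision, allow the request")
```

The appeal of this layer is the shared intelligence: with the community blocklists enabled, an IP that has misbehaved against someone else’s infrastructure can already carry a ban decision before it ever reaches yours.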
Lessons learned
This incident reinforced several critical lessons about modern cyber threats:
First, not every self-proclaimed “hacker” is a sophisticated intruder. Many attacks are coordinated extortion attempts built on psychological pressure rather than technical breakthroughs.
Second, preparation and calm execution matter more than reactive decisions. Established incident response procedures and the ability to automate mitigation within minutes significantly reduced risk.
Third, the DDoS attack in our case was part of a broader strategy that combined technical disruption with social engineering and well-timed psychological pressure.
Fourth, verification is essential. Claims of data compromise must be validated before being believed, as recycled breach data can be used to fabricate credibility.
Finally, defense is strongest when layered: automation, cloud-based traffic absorption, application-layer protection, community-driven threat intelligence, and strategic deception together create resilience. Sometimes resilience is not just about blocking attacks. It’s about controlling the narrative and forcing the attackers to play your game instead of theirs.