Nuno Ferriera
Customer Story
December 19, 2025

When trusted third parties behave like threat actors

When risky support activity triggers every signal of a real breach, and why identity-centric detection is the only way to see the full picture.

Security incidents rarely arrive neatly labeled.

In modern cloud and SaaS environments, the most difficult situations are not obvious attacks. They appear to be legitimate actions taken with good intentions, but are executed dangerously, manifesting in unexpected ways through trusted identities and from places no one anticipated. These are the moments where even experienced security teams pause, because the signals are real, but the intent is unclear.

This is a real incident we experienced internally at Exaforce. It is anonymized, accurate, and representative of the challenges security teams face every day.

Rapid detection and triage

Because we drink our own champagne, our environment feeds its live data sources into our own platform. A high-severity P1 alert surfaced in real time, Exaforce immediately correlated the underlying signals into a single, aggregated incident, and our managed detection and response (MDR) process engaged as designed. The alert was triaged within the SLA.

At that moment, the observable telemetry warranted a “treat as breach until proven otherwise” posture. The signals were consistent with compromise, even though we did not yet have enough context to determine whether the activity was malicious or unexpected (but legitimate) third-party behavior.

What the platform observed before any human investigation

The activity involved a non-human identity associated with a third-party SaaS integration. Specifically, a GitHub App token was used to generate a JWT and authenticate access for cloning our code to an unknown laptop. Historically, the identity associated with that token behaved consistently and predictably; that baseline changed abruptly.
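For context on the token flow involved: a GitHub App authenticates by signing a short-lived JWT with its private RSA key (RS256) and then exchanging it for an installation access token. The sketch below builds only the unsigned portion of such a JWT using the standard library; the claim structure (`iat`, `exp`, `iss`) follows GitHub's documented requirements, while the App ID and timestamps are illustrative, and real use requires signing with the app's private key via a library such as PyJWT.

```python
import base64
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT (RFC 7515) rules."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def github_app_jwt_claims(app_id, now=None):
    """Claim set GitHub expects from a GitHub App JWT.

    iat is backdated 60s to allow for clock drift, exp may be at most
    10 minutes in the future, and iss is the App ID.
    """
    now = int(time.time()) if now is None else now
    return {"iat": now - 60, "exp": now + 600, "iss": app_id}


def unsigned_jwt(claims):
    """The header.payload portion of the JWT; the real token appends an
    RS256 signature computed over exactly these two segments."""
    header = {"alg": "RS256", "typ": "JWT"}
    return ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )


claims = github_app_jwt_claims("12345", now=1_700_000_000)  # hypothetical App ID
print(unsigned_jwt(claims))
```

Anyone holding a valid token of this kind can act as the app, which is why its sudden use from an unfamiliar device is such a strong signal.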

[Screenshot: Exaforce threat finding showing the illegitimate bot account behavior of cloning a repository to an unusual device]

The platform observed several deviations occurring together over a short period of time:

  • The identity began operating in a new country
  • The source network shifted to a residential Internet service provider
  • The user agent changed to a workstation-based Git client
  • Multiple full repository clone operations occurred in quick succession
  • At the same time, the same identity continued operating normally from its usual cloud environment

Individually, none of these signals is rare. Geographic anomalies are noisy. Network changes happen. Repository access can be legitimate. Many security tools alert on these events in isolation, and most teams learn to tune them out. What mattered here was concurrency.

The same identity was active in two different environments at the same time. That overlap created an impossible travel scenario for a bot identity and changed the risk assessment.
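That concurrency test is simple to express. The minimal sketch below is illustrative only (the event records, source labels, and 10-minute window are hypothetical, not the platform's actual logic): given a stream of events for one identity, flag any pair from different source contexts whose timestamps fall within a short window of each other.

```python
from datetime import datetime, timedelta
from itertools import combinations

# Hypothetical event records for one bot identity: (timestamp, source context)
EVENTS = [
    (datetime(2025, 12, 1, 10, 0), "cloud:vendor-infra"),      # normal automation
    (datetime(2025, 12, 1, 10, 2), "cloud:vendor-infra"),
    (datetime(2025, 12, 1, 10, 3), "residential-isp:example"),  # anomalous clone
    (datetime(2025, 12, 1, 10, 4), "cloud:vendor-infra"),
]


def concurrent_context_pairs(events, window=timedelta(minutes=10)):
    """Return pairs of events from *different* source contexts that occur
    within `window` of each other: 'impossible travel' for a bot identity,
    which should only ever act from one pinned environment."""
    return [
        (a, b)
        for a, b in combinations(sorted(events), 2)
        if a[1] != b[1] and b[0] - a[0] <= window
    ]


for a, b in concurrent_context_pairs(EVENTS):
    print(f"overlap: {a[1]} and {b[1]} within {b[0] - a[0]}")
```

Each flagged pair involves the residential source, which is exactly the overlap that turned individually noisy signals into one high-confidence finding.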

Why traditional detection approaches struggle in situations like this

In many environments, this activity would have generated multiple independent alerts:

  1. A geographic anomaly alert
  2. A network or autonomous system number (ASN) change alert
  3. A repository access alert
  4. Possibly a third-party integration alert

Each alert would be separate and isolated, with different timestamps and limited context. An experienced analyst would need to manually correlate the signals, reconstruct a timeline, and determine whether the activity represented coincidence, misconfiguration, or compromise.

That process takes time. Time is exactly what attackers rely on, and it is also what legitimate-but-unexpected activity consumes.

How Exaforce handled it differently

From this group of signals, Exaforce generated one meaningful incident.

Identity behavior as the starting point

The identity involved was classified as a bot, not a human. Bot identities do not travel, use residential broadband, switch operating systems, or behave interactively. When those patterns appear, they represent a meaningful deviation from expected behavior. This behavioral deviation alone was sufficient to escalate the incident.
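The reasoning above can be sketched as a simple rule set: compare each observed attribute of a bot identity against its learned baseline and escalate when interactive, human-like attributes appear. All field names, values, and rules below are hypothetical, chosen only to mirror the deviations described in this incident.

```python
# Baseline learned for a bot identity (hypothetical values for illustration).
BASELINE = {
    "countries": {"US"},
    "network_type": "cloud",     # datacenter/cloud ASN
    "user_agent_class": "api",   # programmatic client
}

DESCRIPTIONS = {
    "new_country": "identity began operating in a new country",
    "residential_network": "source shifted to a residential ISP",
    "workstation_client": "user agent changed to an interactive Git client",
}


def deviations(event, baseline):
    """Name each human-like deviation that a bot identity should never show."""
    found = []
    if event["country"] not in baseline["countries"]:
        found.append("new_country")
    if event["network_type"] == "residential":
        found.append("residential_network")
    if event["user_agent_class"] == "workstation":
        found.append("workstation_client")
    return found


event = {"country": "PT", "network_type": "residential",
         "user_agent_class": "workstation"}
for d in deviations(event, BASELINE):
    print(DESCRIPTIONS[d])
```

For a bot identity, even one such deviation warrants escalation; here all three fired at once.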

Automatic timeline reconstruction

The Exaforce Agentic SOC Platform automatically reconstructed the sequence of events: it distinguished the normal, automated activity originating from cloud infrastructure from the anomalous, manual activity coming from a residential network, and it identified the temporal overlap that proved simultaneous usage. This reconstruction required no manual queries, dashboards, or log stitching; the timeline was available immediately as part of the incident context.

AI-assisted triage with human clarity

The platform evaluated historical baselines, identity type, access patterns, and environmental context together and reached a straightforward conclusion aligned with how an experienced analyst would reason about the situation: it should be treated as a breach until proven otherwise. That framing allowed the team to act decisively without waiting for perfect certainty.

How the Exaforce team responded

Because the incident was clearly scoped and contextualized, the response was immediate and measured. Within a short window, the P1 incident was reviewed, the MDR team engaged directly, the third-party integration was suspended, administrative authentications were revoked, and the vendor’s security team was contacted with forensic evidence. The situation was resolved within hours. The team did not need to debate whether the signals were related, because the platform had already done that work.

The outcome, and why it still matters

After a joint investigation with the vendor, we confirmed the activity was not a breach but a well-intentioned, unsafe support action. An SRE had manually cloned our repository to a personal laptop to recover from a vendor-side bug, using a non-human identity to do so. The intent was to act quickly and restore service, but using a bot identity to access customer code outside the vendor’s normal production infrastructure, without prior customer notification, was inappropriate and produced a valid breach indicator.

In this specific case, the repository contained public-facing platform documentation, so the direct impact was minimal. However, the same pattern applied to a repository containing internal architecture documentation, runbooks, or sensitive data could have had serious consequences. The broader lesson is about how easily legitimate, time-pressured decisions can become indistinguishable from compromise when controls and context are missing, and why this should not happen, even in difficult situations.

From disparate, ambiguous signals to decisive detection

This incident was not a breach, but the signals were real, and the risk was real. If your tools cannot make that distinction quickly and confidently, you are relying on luck and human heroics. The Exaforce platform understood what was happening, why it mattered, and how urgent it was in real time. That is the difference between reacting and being ready.

If you want to see how Exaforce aggregates complex signals into a single, coherent incident, book a demo with Exaforce.
