Renovate & Dependabot: The New Malware Delivery System
Supply chain attacks every other morning
Unless you’ve lived under a rock for the last few months, you probably noticed that software supply chain attacks are getting trendy among threat actor groups. Over the last 12 months, we’ve seen more of those than ever before, to name only a few of them:
tj-actions/changed-files: In March 2025, a popular reusable GitHub application workflow was compromised to dump secrets from CI/CD pipelines.
Salesloft Drift: In August 2025, threat actors stole OAuth credentials from the compromised Drift chatbot application.
Shai-Hulud: In September and November 2025, a wormed attack propagated through npm packages and collected secrets.
The common thread among those incidents is that they all revolved around secrets, one way or another. Some used secrets as an initial access vector, and others were focused on collecting secrets from victim environments.
March 2026 did not change the state of things, with two new severe attacks added to our dreadful collection:
trivy-action: the popular reusable GitHub Actions workflow was compromised to run malicious code in downstream CI/CD pipelines.
axios: version 1.14.1 of the widely used HTTP client was published to npm with embedded malicious code.
Both those attacks followed a now-classical pattern, spreading through compromised open-source dependencies to maximise the impact in the shortest possible time.
Your all-time classic, now with added internal threats
Open-source supply chain attacks are not new. Ever since we started using centralized open-source package registries, the risk has existed. Threat actors understood this and started exploiting it. What has changed since 2015 is how we have improved software development productivity through automation. And now, this very same automation that lets you test and build your projects without typing a single command is amplifying the supply-chain threat and the velocity of attacks.
Let’s see how.
Keeping your malware up to date
A very concerning pattern we’ve observed in the trivy-action and Axios campaigns is that automation can become the source of your compromise.
One thing no developer wants to do is keep track of the new versions of all the dependencies they use. For that reason, the developer community invented Renovate and Dependabot, two systems that track and apply those updates. However, updating and installing packages is generally all that supply-chain malware needs to spread the infection.
Dependabot and Renovate pull requests carry an implicit trust that human-authored pull requests do not. They are routine, expected, and often waved through without scrutiny. The bad news is that this implicit trust now tends to accelerate the distribution of malware during supply-chain attacks.
The malicious axios package was uploaded on March 31st at 00:20. Only 5 minutes later, we observed the first modification to a package.json file in a public repository. This commit was pushed by Dependabot and upgraded the axios dependency to 1.14.1, the malicious version.
Overall, across the infection timeframe, we have observed at least 895 public repositories upgrading axios to a malicious version. Out of the 527 that were still available at the time of analysis, 313 had been pushed to a branch directly, while 214 changes were brought via a pull request.
Where things get interesting is that 154 of those pull requests were opened by a bot user:
111 by Dependabot
30 by Renovate
13 by other bot accounts
Even worse, 95 (60%) of those pull requests were merged into the main branch, 50 of them by a bot user, without any user interaction. This led to the malicious package being pushed to production code in less than an hour, as showcased by the jhipster/generator-jhipster repository.
The malicious dependency update is automatically merged in production code.
In that case, the upgrade was triggered 40 minutes after the malicious release and merged 16 minutes later. All this was allowed by a combination of Dependabot and an automerge workflow in the CI/CD pipeline.
name: Dependabot auto-merge
[…]
jobs:
  enable-auto-merge:
    runs-on: ubuntu-latest
    if: ${{ github.repository == 'jhipster/generator-jhipster' && github.event.pull_request.user.login == 'dependabot[bot]' }}
    […]
      - name: Enable auto-merge for Dependabot PRs
        if: steps.dependabot-metadata.outputs.update-type != 'version-update:semver-major'
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
And this pattern is not uncommon. A naive GitHub code search returns thousands of workflows that allow automerging for Dependabot.
Thousands of projects implement auto-merging of Dependabot pull requests.
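Such a search can also be reproduced programmatically against GitHub’s code-search API. A minimal sketch (the query string below is our own illustration, not the exact search we ran):

```python
from urllib.parse import quote

def code_search_url(query: str) -> str:
    """Build a GitHub code-search API URL for a raw query string."""
    return "https://api.github.com/search/code?q=" + quote(query)

# Naive query for workflows that auto-merge Dependabot pull requests.
url = code_search_url('"gh pr merge --auto" dependabot path:.github/workflows')
```

Sending the request requires an authenticated token; the point here is simply how little effort it takes an attacker, or a defender, to enumerate candidate repositories.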
This has specifically been observed during the trivy-action compromise, where repositories automatically updated the CI/CD pipeline with a malicious workflow version and ran it as part of the CI/CD testing itself. It can feel like a malicious inception in your pipeline.
In a few particularly nasty cases, we’ve found that Renovate updated the pinned commit SHA of a workflow. Pinning the commit SHA of a reusable workflow is considered best practice to prevent unexpected or malicious changes in that workflow. This method is intended to provide some immutability for CI/CD dependencies, except if an automated component changes the commit SHAs.
The pinned commit SHA is updated inside the same version.
In the above example, and in most similar cases, the change to the workflow file triggered the workflow’s execution and, thus, the execution of the malicious code. When the workflow runs on the pull_request event, it does not get access to the CI/CD secrets and is granted read-only permissions on the repository. In a private repository, this already represents an intellectual property compromise.
If the workflow runs on the pull_request_target event, the compromise is complete: all secrets stored in the CI/CD configuration are made available to the malware. This event should, in any case, only be used with great care.
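Pinning only helps if it is actually enforced. As a minimal audit sketch (our own illustration, using regexes rather than a full YAML parser), a repository can be scanned for workflow references that are not pinned to a full commit SHA:

```python
import re

# Matches 'uses: owner/repo@ref' references in a workflow file.
USES = re.compile(r"uses:\s*([\w./-]+@[\w./-]+)")
FULL_SHA = re.compile(r"[0-9a-f]{40}")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return 'uses:' references whose ref is not a full 40-hex commit SHA."""
    flagged = []
    for ref in USES.findall(workflow_text):
        _, _, version = ref.partition("@")
        if not FULL_SHA.fullmatch(version):
            flagged.append(ref)
    return flagged
```

Even then, the check is only meaningful if the update bot is prevented from rewriting the pinned SHA itself, as discussed above.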
In open-source supply chain attacks similar to the trivy-action compromise, automated dependency update mechanisms can act as an internal threat, forcing malicious code into your repository. Another similar situation can occur in a supply chain security blind spot.
An army of careless bots
Corporate projects are the obvious place where security teams will investigate the use of compromised dependencies. Companies that maintain proper SBOMs for their software, projects, and products can relatively easily decide whether they are at risk after a supply-chain attack. But a less easily audited attack surface lies in experiments, open-source tools, and other untracked software hosted only on developers’ workstations.
This blind spot grows with the number of developers and projects, and both factors are growing dramatically with the widespread adoption of AI agents. Any AI agent can resort to writing code to achieve a task, installing whatever dependency it deems useful with whatever method it picks. Depending on the situation, it might use pip while you have hardened your uv setup. It could just as well decide to pull packages from a public index, even if you usually rely on a private mirror.
During the LiteLLM compromise, GitGuardian helped investigate a case in which an AI coding agent decided to update the lock file of a uv project that had LiteLLM as a dependency. This project was a machine-learning proof-of-concept experiment that was completely outside the company’s monitored perimeter. This AI-driven decision triggered the malware’s download and execution. The infection was later detected due to an unusual system load rather than thanks to existing monitoring.
As in many other situations, the use of AI does not fundamentally change the threat model of a perimeter, but it changes the scale of existing threats.
Build for the breach you did not catch
The one piece of advice that has been repeated a lot since the surge in supply chain attacks began is to apply a cooldown to dependency updates. This is solid advice that should also be applied to your automated dependency upgrade mechanisms.
With Renovate, the cooldown period is configured with the minimumReleaseAge parameter.
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "minimumReleaseAge": "3 days",
  "minimumReleaseAgeBehaviour": "timestamp-required",
  "packageRules": [
    …
  ]
}
Similarly, in Dependabot, the cooldown period is controlled with the cooldown option.
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 3
Setting the cooldown period to a value between 3 and 5 days is generally considered to be sufficient to avoid most supply-chain attacks. This value can be further adjusted based on your upgrade needs.
Additionally, we recommend preventing Renovate from updating an immutable version pin unless the version is updated as well. When an immutable version tag is used, it generally means we need this immutability. It is lost if Renovate updates the pin inside the same version tag.
{
  "packageRules": [
    {
      "description": "Do not update commit SHAs for reusable workflows within the same version tag",
      "matchFileNames": [".github/workflows/**"],
      "matchUpdateTypes": ["pinDigest"],
      "enabled": false
    }
  ]
}
The above configuration would have prevented some compromise during the trivy-action attack.
The same kind of policy can be applied to local package managers as well. Depending on the solution and tech stack in use, the configuration differs. The npm command line introduced a cooldown parameter in the 11.10.0 release.
min-release-age=3
npmrc configuration for a 3-day cool-down period
Similar parameters already exist for other Node.js package managers, such as pnpm or Yarn, and for Python package managers like pip or uv.
exclude-newer = "3 days"
uv.toml configuration for a 3-day cool-down period
It’s important to configure this cool-down period in all available package managers, including those that are not actively used. Doing so will prevent future mistakes and keep your AI agent from bypassing the measure too easily. Even though an agent can still bypass a globally configured cool-down period, it is far less likely to do so accidentally if all package managers are properly configured.
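This blanket-coverage rule can itself be checked automatically, for example in CI or a pre-commit hook. A minimal sketch (the file-to-directive mapping is ours, mirroring the options discussed above):

```python
from pathlib import Path

# Cooldown directive expected in each package manager config file.
EXPECTED = {
    ".npmrc": "min-release-age",
    "uv.toml": "exclude-newer",
    "renovate.json": "minimumReleaseAge",
}

def missing_cooldowns(repo: Path) -> list[str]:
    """List config files that are absent or lack their cooldown directive."""
    problems = []
    for name, directive in EXPECTED.items():
        path = repo / name
        if not path.is_file() or directive not in path.read_text():
            problems.append(name)
    return problems
```

Running such a check on every repository turns the “configure them all” advice into an enforceable policy rather than a convention.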
Such hardening is a great strategy for thwarting supply chain attacks. However, as we’ve seen during the recent campaigns, any hardening measure can fail when threat actors deploy new tactics. This is why defense in depth should be a requirement in all security policies. Given that most recent attacks place a strong emphasis on secret harvesting, secret observability is key to this in-depth hardening.
When a developer machine gets breached, a fundamental question is: what secrets might have been leaked? Failing to answer that question can result in significant security trouble, with a difficult remediation process. This, in turn, can allow threat actors to achieve quick lateral movement and a much bigger blast radius. The trivy-action compromise was the consequence of a failed remediation: AquaSecurity failed to remediate credentials leaked in a previous attack.
As a last layer of defense, using honeytokens, fake credentials that raise flags when used, is also an efficient way to be alerted about breaches early.
Let’s rethink the perimeter
The axios 1.14.1 incident is a story about speed. The malicious package was live for a matter of hours, and in that window, automated systems across hundreds of repositories had already handled the attacker’s distribution. Pull requests were opened, merged, and pushed directly to main branches without a single human approval. The upgrade pipeline, designed to keep software current and secure, became the delivery mechanism. The relevant perimeter today is every automated process with write access to a dependency graph, and every machine where a developer or an agent runs an install.
Security teams now operate in a world where tools designed to reduce risk, such as dependency bots, automated merge queues, and AI coding agents, run faster than the advisory ecosystem that feeds their SCA scanner. The critical window sits between package publication and vulnerability disclosure, and our data shows it can close in just a few minutes. Defending it requires controls that live upstream of the code: slowing the automation down with version age policies, proactively inventorying the machines it runs on, and planting signals that survive exfiltration. The attack surface has shifted to the automation layer itself, and that is where detection needs to follow.
*** This is a Security Bloggers Network syndicated blog from GitGuardian Blog – Take Control of Your Secrets Security authored by Gaëtan Ferry. Read the original post at: https://blog.gitguardian.com/renovate-dependabot-the-new-malware-delivery-system/
