When "latest" stops being "greatest"
Open source made software development faster. It also made software delivery more fragile.
Most teams already understand that dependencies can contain vulnerabilities. Fewer teams fully internalize the other half of the problem: dependencies can also change underneath them. When versions are not pinned, code from outside your organization can enter your build, CI pipeline, or runtime environment without a deliberate engineering decision. Your repo may be unchanged. Your app may be unchanged. But the software you actually ran can still be different.[1]
That is why version pinning is no longer just a reproducibility best practice. It is a supply-chain security control. And over the last year, real-world incidents have shown exactly what happens when teams rely on mutable references, floating installs, and update workflows that confuse "it still works" with "it is still trustworthy."[2][3][4]
Recent attacks made the risk impossible to ignore
In March 2025, the widely used tj-actions/changed-files GitHub Action was compromised. GitHub’s advisory says attackers retroactively modified multiple version tags to point to a malicious commit, exposing CI/CD secrets in workflow logs and impacting more than 23,000 repositories. GitHub’s own secure-use guidance is explicit: pinning an action to a full-length commit SHA is currently the only way to use an action as an immutable release.[2][5]
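As a sketch of what that guidance looks like in a workflow file (the commit SHA below is a placeholder, not a real tj-actions release commit):

```yaml
steps:
  # Mutable: the tag can be repointed by the publisher (or an attacker) at any time
  - uses: tj-actions/changed-files@v44

  # Immutable: pinned to a full-length commit SHA, per GitHub's secure-use guidance
  # (placeholder SHA for illustration; the trailing comment records the intended version)
  - uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567 # v44
```

The trailing version comment is a common convention so reviewers can see what release the SHA is supposed to correspond to, and update bots can keep it in sync.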
In March 2026, Aqua disclosed a major compromise in the Trivy ecosystem. According to Aqua’s advisory and incident discussion, compromised credentials were used to publish malicious artifacts and repoint tags affecting trivy-action, setup-trivy, and Trivy releases during a limited exposure window. Aqua specifically noted that users who referenced images by digest were not affected. That detail matters: teams relying on mutable tags had exposure, while teams pinned to immutable artifacts had a boundary.[3][6]
The same pattern appeared again in the LiteLLM incident. LiteLLM disclosed that malicious PyPI versions 1.82.7 and 1.82.8 were published in March 2026, and said customers using the official LiteLLM Proxy Docker image were not impacted because that deployment path pins dependencies in requirements.txt and does not rely on floating PyPI installs.[4]
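The difference between those two install paths fits on two lines. A hypothetical requirements file (versions and hash are illustrative, not real LiteLLM releases):

```text
# Floating: resolves to whatever PyPI serves at install time,
# including a malicious version published inside an exposure window
litellm>=1.82

# Pinned: only this exact, previously reviewed release can install.
# Adding a hash and installing with `pip install --require-hashes -r requirements.txt`
# also rejects any artifact whose content changed, even at the same version number.
litellm==1.81.5 --hash=sha256:<hash elided for illustration>
```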
These incidents were not obscure edge cases. They involved legitimate, trusted, widely used projects. That is exactly why the usual reassurance - "we only use reputable open source" - is not enough. Reputable projects can still be compromised. Maintainer credentials can still be stolen. Tags can still be moved. Release pipelines can still be abused.[2][3][4]
The real danger of unpinned dependencies
When teams do not pin versions, they are not just accepting instability. They are giving up change control.
npm’s documentation says package-lock.json records the exact dependency tree so later installs can generate identical trees regardless of intermediate dependency updates. In other words, pinning is what separates a deliberate software bill of materials from "whatever the registry served today."[1]
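Concretely, where a package.json range like `"left-pad": "^1.3.0"` permits any compatible release, the lockfile records one exact resolution. An abbreviated excerpt of the modern (lockfileVersion 3) shape, with the integrity hash elided for illustration:

```json
{
  "lockfileVersion": 3,
  "packages": {
    "node_modules/left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-<hash elided>"
    }
  }
}
```

Running `npm ci` instead of `npm install` in CI installs exactly what the lockfile records and fails if the lockfile and package.json disagree, rather than silently re-resolving.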
That distinction becomes critical during an incident. If a malicious version is live for only a short time, every environment that installs during that window may resolve something different. Without pinning, incident response turns into forensics: which runner pulled what, when, from where, and under which mutable tag or version range? With pinning, the question is simpler: did we explicitly approve and consume that exact artifact?[2][3]
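With a lockfile in hand, answering that simpler question can even be automated. A minimal sketch, assuming an npm-style lockfile and a hypothetical advisory listing known-bad releases (package name and versions below are invented for illustration):

```python
# Known-compromised releases, as published in a security advisory
# ("some-lib" and its versions are hypothetical examples).
COMPROMISED = {"some-lib": {"4.2.1", "4.2.2"}}

def find_compromised(lockfile: dict) -> list[str]:
    """Return 'name@version' for every pinned package in an npm
    lockfileVersion 2/3 document that matches a known-bad release."""
    hits = []
    for path, entry in lockfile.get("packages", {}).items():
        # Keys look like "node_modules/<name>" (nested deps repeat the
        # prefix); the root package uses the empty-string key.
        name = path.rsplit("node_modules/", 1)[-1] if path else lockfile.get("name", "")
        version = entry.get("version")
        if version and version in COMPROMISED.get(name, set()):
            hits.append(f"{name}@{version}")
    return hits

# In real use this dict would come from json.load(open("package-lock.json")).
lock = {
    "name": "demo-app",
    "packages": {
        "": {"version": "1.0.0"},
        "node_modules/some-lib": {"version": "4.2.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    },
}
print(find_compromised(lock))  # -> ['some-lib@4.2.1']
```

Because the lockfile is committed, the same check can be run against any historical revision of the repository, which is exactly the audit that unpinned environments cannot perform.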
Version pinning does not magically make software safe. What it does is restore intent. It ensures that new code enters your environment because your team chose it - not because an external publisher, registry, or tag state changed between two installs.[1][5]
The mistake teams make after they start pinning
Many organizations fix the first problem, then recreate it in a different form.
They pin versions, enable a Dependabot-like tool, and then start merging update PRs automatically as long as the build stays green.
That is not supply-chain security. That is just version drift with better formatting.
GitHub’s documentation describes Dependabot version updates as automation that creates pull requests to keep dependencies up to date. That is useful. But it is update automation, not trust validation. A dependency bot can tell you that a newer version exists and even help you adopt it quickly. It cannot tell you, by itself, that the release is benign, that the publisher account was not compromised, or that the artifact has trustworthy provenance.[7]
A successful build proves compatibility. It does not prove integrity.
A passing test suite proves your application still behaves as expected in the cases you tested. It does not prove the package was produced safely.
A green CI run proves the new version did not immediately break your pipeline. It does not prove the artifact you just pulled is authentic. SLSA’s provenance guidance is explicit that provenance is verifiable information describing where, when, and how an artifact was produced.[8]
This is exactly why recent supply-chain attacks are so dangerous: malicious artifacts do not need to break your build to succeed. In many cases, the most effective malicious release is the one that installs cleanly, passes quietly, and gets promoted because nothing looked wrong.[2][3][4]
What should be pinned
The practical rule is simple: if it can execute code in your build path, release path, or production path, it should be pinned.
That means application dependencies, transitive dependencies where tooling supports it, GitHub Actions by full commit SHA, container images by digest, setup tools in CI, and internal base images. GitHub says full-length SHAs are the only immutable way to pin Actions. Aqua says digest-pinned Trivy users were not affected. npm documents lockfiles as the basis for reproducible installs.[1][5][6]
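For container images, the equivalent of a full-length SHA is a content digest. A Dockerfile sketch (the digest below is a placeholder, not a real published image):

```dockerfile
# Mutable tag: the publisher, or anyone with their credentials,
# can repoint this to different image content at any time
FROM aquasec/trivy:latest

# Immutable digest: only this exact image content can ever be pulled
# (placeholder digest for illustration)
FROM aquasec/trivy@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

The same `image@sha256:<digest>` form works anywhere an image reference is accepted, including Kubernetes manifests and CI service containers.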
Mutable references are convenient, but convenience is not a security boundary. A tag like latest, v1, or a permissive semver range may be fine for experimentation. It is a weak foundation for anything that matters.[5]
What a healthy update model actually looks like
The answer is not to freeze forever.
The answer is to make change intentional.
Healthy teams pin first, then update on purpose. They use automation to surface updates, open pull requests, and reduce operational toil. But they still review what changed, where it came from, and whether it meets their trust and release criteria. They promote updates through a predictable cycle instead of reacting to every new upstream release or every newly published CVE as an emergency.[7][8]
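That cadence is straightforward to encode. A minimal `.github/dependabot.yml` along these lines surfaces updates on a schedule while leaving the merge decision to a human review (ecosystems and intervals are illustrative choices):

```yaml
# .github/dependabot.yml -- surface updates on a predictable weekly cadence;
# PRs are opened automatically but merged only after review against trust criteria
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

Note what this configuration does and does not do: it batches change into a reviewable rhythm, but the trust decision still happens in the pull request, not in the bot.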
That is the operational model most organizations actually want: not slower updates, but safer ones. Not permanent stasis, but controlled motion. Not dependency panic, but dependency governance.[1][7]
The bigger lesson
The past year should have ended the debate.
The tj-actions compromise showed that version tags can be moved after teams decide they trust them. The Trivy incident showed how malicious artifacts and repointed tags can spread through widely used security tooling. The LiteLLM compromise showed how a short-lived malicious package release can still catch users who install floating versions, while pinned deployment paths can avoid exposure altogether.[2][3][4]
The throughline is unmistakable: the problem is not only vulnerable code. The problem is uncontrolled change.
And uncontrolled change is exactly what version pinning is supposed to prevent.[1]
Final thought
Engineering teams should not have to choose between moving fast and staying safe.
But they do need to stop pretending that "latest is greatest".
If your environment can silently absorb new code because a registry changed, a tag moved, or an update bot opened a PR that happened to pass CI, then your organization does not fully control its software supply chain. Version pinning will not solve every problem in open source security. But without it, you are leaving the door open to a class of attacks that has already become routine.[2][3][4]
How can Seal Security help?
Seal Security enables organizations to move to predictable, safe update cycles by removing the panic from dependency management. Instead of forcing teams to rush upgrades every time a new CVE lands, Seal ensures existing vulnerable dependencies are already remediated and free from exploitable CVEs - so engineering teams can pin what they run, review what changes, and update on their terms.
That means fewer emergency upgrades, less pressure to blindly merge dependency bumps, and a software supply chain governed by policy and intent instead of urgency.
References
[1] npm Docs - package-lock.json
https://docs.npmjs.com/cli/v10/configuring-npm/package-lock-json/
[2] GitHub Advisory Database - CVE-2025-30066 / tj-actions/changed-files
https://github.com/advisories/GHSA-mrrh-fwg8-r2c3
[3] Aqua / Trivy incident discussion
https://github.com/aquasecurity/trivy/discussions/10425
[4] LiteLLM security update, March 2026
https://docs.litellm.ai/blog/security-update-march-2026
[5] GitHub Docs - Secure use reference for GitHub Actions
https://docs.github.com/en/actions/reference/security/secure-use
[6] Aqua blog - Trivy supply-chain attack customer guidance
https://www.aquasec.com/blog/trivy-supply-chain-attack-what-you-need-to-know/
[7] GitHub Docs - About Dependabot version updates
https://docs.github.com/en/code-security/concepts/supply-chain-security/about-dependabot-version-updates
[8] SLSA - Provenance
https://slsa.dev/provenance/


