How We're Securing Our Own Supply Chain

Lev Pachmanov
April 13, 2026

Building a supply chain security company comes with an uncomfortable truth: our remediated packages run inside our customers' production environments. A compromise on our end is a compromise on theirs. We take that responsibility seriously.

I want to pull back the curtain on how we actually secure our own supply chain - from the code we write, to the artifacts we deliver, to the infrastructure that holds it all together.

The Threat Model: Know What You're Defending

Seal Security customers typically deploy three components from us: the Seal CLI (a build-time tool that runs in their CI/CD pipelines), sealed artifacts (the remediated open-source packages themselves), and source control integrations (GitHub, GitLab, Azure DevOps). Each one is a potential attack surface, and we treat it as such.

Before we write a single line of mitigation code, we map out the attack vectors. Each component - the CLI, the artifacts, the source control integrations, and the AI pipeline - has its own threat model, its own attack surface, and its own set of mitigations. This isn't a generic checklist - it's specific to how our product actually works.

Hardening the CLI: From Commit to Customer

The Seal CLI is the primary touchpoint between our platform and customers’ build environments. It's a build-time utility that integrates directly into the customer's CI/CD pipeline, where it identifies vulnerable and malicious open-source dependencies and replaces them with our remediated versions. Because it executes inside the customer's build process, a compromised CLI could inject malicious code into anything our customers ship. That makes it one of the most critical components we need to protect.

Every commit to our codebase requires cryptographic signing via a physical YubiKey. No YubiKey, no commit. Every pull request goes through mandatory code review by an uninvolved developer, enforced through repository settings, with automated tests running in parallel. Our build and release pipeline is fully automated through GitHub Actions - triggered by merges to main, not by humans clicking buttons. The service account that runs the build has strictly limited permissions, and only GitHub account owners can modify those permissions.

On macOS, we code-sign the CLI binary with a trusted Apple developer certificate, so users can verify authenticity before execution. Binaries are hosted on GitHub Releases, which comes with its own built-in protections.

Configuration is another attack surface people tend to overlook. The Seal CLI can be configured locally through a file in the customer's Git repository, but the primary configuration mechanism is remote - managed through the Seal UI, which is protected by Frontegg with support for 2FA, SSO, and OpenID authentication. Importantly, even if the remote configuration were somehow compromised, the blast radius is inherently limited: the CLI can only swap between different versions of a library, pulled directly from our own artifact server. It doesn't execute arbitrary code or fetch packages from third-party sources. A malicious configuration could, at worst, pin a customer to a different version of a dependency - achieving arbitrary code injection would require a far more comprehensive attack.
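To make the blast-radius limit concrete, here is a minimal sketch of what validating such a configuration could look like. The schema, field names, and rules are illustrative assumptions, not Seal's actual format - the point is that only plain version pins are accepted, never external sources:

```python
def validate_config(config: dict) -> list[str]:
    """Reject any config entry that is not a simple version pin.

    Hypothetical schema: {"packages": {"<name>": "<version>"}}.
    """
    errors = []
    for pkg, rule in config.get("packages", {}).items():
        # Only a plain version string is accepted - no URLs, no paths,
        # no scripts, no third-party sources.
        if not isinstance(rule, str):
            errors.append(f"{pkg}: rule must be a plain version pin")
        elif "://" in rule or "/" in rule:
            errors.append(f"{pkg}: external sources are not allowed")
    return errors

# A config pointing at a third-party source is rejected outright:
bad = {"packages": {"lodash": "https://evil.example/lodash.tgz"}}
good = {"packages": {"lodash": "4.17.21-sp1"}}
print(validate_config(bad))   # one error about external sources
print(validate_config(good))  # []
```

Even without validation, a version-pin-only mechanism bounds what an attacker can do: the worst case is an unwanted version of a library from the trusted artifact server, not arbitrary code.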

Finally, every artifact the CLI downloads is verified using a hard-coded signing key embedded in the CLI itself - meaning even if our servers were compromised, the CLI would reject any tampered packages. The corresponding private key is stored in AWS KMS and never leaves it - signing operations happen within KMS, so the key itself is never exposed to any person or system.
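The control flow of verify-before-use can be sketched as follows. Python's standard library has no public-key crypto, so HMAC stands in here for the asymmetric signature check the real CLI performs against its embedded public key - this illustrates the pattern, not the actual algorithm or key material:

```python
import hashlib
import hmac

# Stand-in for the verification key compiled into the CLI binary.
EMBEDDED_KEY = b"pinned-verification-key"  # hypothetical

def verify_artifact(data: bytes, signature: bytes) -> bool:
    expected = hmac.new(EMBEDDED_KEY, data, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

def install(data: bytes, signature: bytes) -> None:
    if not verify_artifact(data, signature):
        # Tampered artifact: reject it even if it came from the
        # legitimate server, since the server itself could be compromised.
        raise RuntimeError("artifact signature verification failed")
    # ... only now place the package into the build ...

artifact = b"sealed-package-bytes"
sig = hmac.new(EMBEDDED_KEY, artifact, hashlib.sha256).digest()
install(artifact, sig)             # accepted
try:
    install(artifact + b"!", sig)  # any tampering is rejected
except RuntimeError as e:
    print(e)
```

The key property: trust is anchored in the client, not the server. A compromised artifact server can serve bytes, but it cannot produce a valid signature without the KMS-held private key.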

Protecting the Sealed Artifacts Themselves

This is the one that keeps me up at night. Our sealed artifacts - the remediated versions of vulnerable open-source packages - run directly in customer production. The stakes don't get much higher.

Our patches are deliberately minimal - typically under 10 lines of changes. This isn't just an engineering philosophy; it's a security control. The smaller the diff, the harder it is to sneak something malicious past review. We also completely avoid changes to dependencies - no bumps, no transitive dependency swaps - which eliminates an entire class of dependency poisoning attacks.
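A minimal-diff policy is easy to enforce mechanically. This is a hypothetical review gate, not Seal's actual tooling, showing how a pipeline could fail any patch that exceeds the size budget:

```python
import difflib

MAX_CHANGED_LINES = 10  # threshold mirroring the policy above

def changed_lines(original: str, patched: str) -> int:
    diff = difflib.unified_diff(
        original.splitlines(), patched.splitlines(), lineterm=""
    )
    # Count only real additions/removals, not diff headers.
    return sum(
        1 for line in diff
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )

def patch_is_minimal(original: str, patched: str) -> bool:
    return changed_lines(original, patched) <= MAX_CHANGED_LINES

before = "def parse(s):\n    return eval(s)\n"
after = "import ast\ndef parse(s):\n    return ast.literal_eval(s)\n"
print(patch_is_minimal(before, after))  # True: a three-line change
```

A gate like this turns "patches should be small" from a convention into an invariant a reviewer can rely on.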

Every artifact commit goes through the same YubiKey-signed commit + peer review process as our CLI. The build and release pipeline is automated with strictly scoped permissions. Artifacts are stored in AWS S3 with encryption at rest and in transit, and only our artifact server service can read or write to those buckets. The Seal CLI communicates exclusively over HTTPS, and we provide cryptographic hashes and code signing so customers can independently verify every package they pull from us.

We also give customers visibility into the changes introduced in the sealed versions so they can review exactly what changed. Trust but verify.

AI Security: Power Without Exposure

We use AI agents for vulnerability remediation - it's core to our product. But we don't let AI ship code unsupervised. Every AI-generated patch must pass automated regression testing and receive manual approval from our security research team before it goes anywhere near a customer. Human-in-the-loop isn't a buzzword for us - it's an enforced gate.

Our AI inference runs on AWS Bedrock, which gives us critical isolation guarantees out of the box. We use separate inference profiles for each tenant, ensuring that no customer's data or context can leak into another tenant's operations. Customer code and metadata processed by our AI agents are never used to train models - we maintain a "no-training" policy with all AI sub-processors, and processing happens in-memory without persistent storage. AI operations run in isolated, ephemeral environments with restricted network access - if something goes wrong, the blast radius is contained.
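The per-tenant isolation guarantee boils down to a simple invariant: every tenant resolves to its own dedicated inference profile, and unknown tenants fail closed rather than falling back to a shared default. This toy router is purely illustrative (the real system uses AWS Bedrock inference profiles, not these names):

```python
class TenantRouter:
    """Maps each tenant to a dedicated, never-shared inference profile."""

    def __init__(self):
        self._profiles: dict[str, str] = {}

    def register(self, tenant_id: str) -> str:
        # One dedicated profile per tenant, created exactly once.
        self._profiles.setdefault(tenant_id, f"inference-profile-{tenant_id}")
        return self._profiles[tenant_id]

    def profile_for(self, tenant_id: str) -> str:
        # Fail closed: an unregistered tenant gets an error,
        # never another tenant's (or a shared) profile.
        if tenant_id not in self._profiles:
            raise KeyError(f"no inference profile for tenant {tenant_id}")
        return self._profiles[tenant_id]

router = TenantRouter()
router.register("acme")
router.register("globex")
print(router.profile_for("acme"))  # inference-profile-acme
```

Because no code path ever returns a profile belonging to a different tenant, cross-tenant context leakage is ruled out structurally rather than by policy.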

Infrastructure: Serverless, Isolated, and Locked Down

Our production environment runs entirely on AWS, and it's fully serverless. No persistent servers means no persistent attack surface. AWS owns the infrastructure security; we own the code and configuration.

Access to our AWS accounts is tightly restricted. Root access is limited to two system administrators, protected by physical YubiKeys. All regular access goes through SSO. Everything is defined as Infrastructure as Code via Terraform, scanned by Checkov for misconfigurations, and subject to daily drift detection so we know immediately if something in production deviates from what's defined in code.
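The essence of drift detection is a diff between declared and live state. Real drift detection runs against Terraform state and the AWS APIs; this toy version only shows the idea, with made-up setting names:

```python
def detect_drift(declared: dict, live: dict) -> dict:
    """Report every setting whose live value deviates from code."""
    drift = {}
    for key, expected in declared.items():
        actual = live.get(key)
        if actual != expected:
            drift[key] = {"declared": expected, "live": actual}
    return drift

declared = {"s3_encryption": "aws:kms", "public_access": False}
live = {"s3_encryption": "aws:kms", "public_access": True}  # someone clicked
print(detect_drift(declared, live))
# {'public_access': {'declared': False, 'live': True}}
```

Running this comparison daily means a console change made outside the Terraform workflow surfaces within a day instead of lingering silently.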

We follow least privilege religiously. All lambdas and databases sit inside a private VPC with strict networking rules. API Gateway and WAF shield our backend. All communication is encrypted via HTTPS, and we use AWS KMS for key management and Secrets Manager for credentials.

Source Control: The Foundation of Everything

If you control the source, you control the supply chain. Our entire codebase lives in GitHub, accessed exclusively through SSO with YubiKey-backed 2FA. Role-based access control keeps permissions tight: only two people have owner-level access, and even that requires physical YubiKey possession.

Repository configurations are themselves defined as code, so any change to security settings goes through the same code review process as a product change. Every code change is linked to a ticket in ClickUp, and our compliance tool automatically verifies code-to-ticket traceability daily. The people who own the GitHub account are distinct from those who own the compliance tool - separation of duties built into the org structure, not just a policy document.
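Code-to-ticket traceability is straightforward to check automatically. The sketch below assumes a ClickUp-style "CU-" ticket ID in each commit message; the exact format is an assumption, and the real verification is done by our compliance tooling, not this script:

```python
import re

# Assumed ticket-ID format, loosely modeled on ClickUp task IDs.
TICKET_PATTERN = re.compile(r"\bCU-[a-z0-9]+\b")

def untraceable_commits(messages: list[str]) -> list[str]:
    """Return commit messages with no ticket reference."""
    return [m for m in messages if not TICKET_PATTERN.search(m)]

commits = [
    "CU-8x1z2 harden artifact signing pipeline",
    "fix typo in README",  # no ticket -> flagged
    "CU-9a3b4 rotate service-account credentials",
]
print(untraceable_commits(commits))  # ['fix typo in README']
```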

Compliance as Continuous Validation

We're SOC 2 Type II and ISO 27001 certified. But certifications are a point-in-time snapshot - they don't tell you much about what's happening between audits. That's why we use continuous compliance monitoring to track our controls in real time, automatically identifying gaps before they become risks. We conduct annual penetration testing and disaster recovery drills to validate that our controls actually work under pressure.

What's Next: 2026 Initiatives

Security isn't a destination; it's a continuous process. Here's where we're heading in 2026:

We're investing in enhanced production monitoring to improve anomaly detection. We're formalizing mandatory human-in-the-loop reviews for all AI-generated code. We're building rigorous artifact validation pipelines - AI-powered security testing, baseline comparison of library test results, and automated checks for unexpected file modifications. And we're increasing transparency by publishing validation results - including changes to exported symbols, file lists, and dependencies - so customers can work in a trust-but-verify model.
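The baseline-comparison checks described above can be sketched as a diff over a package's file list and exported symbols. The data shapes here are assumptions for illustration, not the pipeline's actual format:

```python
def validate_against_baseline(baseline: dict, sealed: dict) -> dict:
    """Flag anything in a sealed artifact that deviates from upstream."""
    report = {}
    # New or removed files are suspicious in a minimal patch.
    report["added_files"] = sorted(set(sealed["files"]) - set(baseline["files"]))
    report["removed_files"] = sorted(set(baseline["files"]) - set(sealed["files"]))
    # The public API surface should not silently change.
    report["changed_exports"] = sorted(
        set(baseline["exports"]) ^ set(sealed["exports"])
    )
    return report

baseline = {"files": ["lib.js", "package.json"], "exports": ["parse", "stringify"]}
sealed = {"files": ["lib.js", "package.json"], "exports": ["parse", "stringify"]}
print(validate_against_baseline(baseline, sealed))
# all three lists empty -> nothing unexpected to report
```

Publishing a report like this alongside each sealed artifact is what makes the trust-but-verify model workable: customers can confirm for themselves that the surface of the package is unchanged.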

The Bottom Line

When you use Seal Security, our code runs in your production. We don't take that lightly. Every layer of our system - from how we write code, to how we build and sign artifacts, to how we isolate our AI pipelines - is designed with one question in mind: what happens if this gets compromised?

We think about security the way our customers need us to - not as a compliance checkbox, but as an engineering discipline. And we'll keep raising the bar because that's what being in the supply chain demands.