Who this is for
You're a CTO or VP Engineering at a company that has never had a formal security program. Until last week, security was something other people worried about. Then an enterprise customer sent a SOC 2 questionnaire, or your board asked about incident response, or an auditor showed up. Now you're scrambling.
You don't need a CISO, a GRC platform, or a 200-page policy document. You need five things done well in the next 90 days: inventory, access control, visibility, evidence (pentest), and the minimum viable policies to answer the customer. This post maps the exact sequence.
1. Day 1–7: Inventory what you actually have
You cannot secure what you do not know about. Your first week is pure reconnaissance.
Walk through your infrastructure and write down everything that touches customer data or authentication:
- All cloud accounts (AWS, GCP, Azure, etc.). List them by account ID and owner.
- All databases (production, staging, backups). Where do they live? Who has access?
- All external services with access to your data (payment processors, analytics, third-party APIs, vendor integrations). Third-party integrations are a common breach entry point, so be exhaustive here.
- All identity providers (GitHub, Okta, Auth0, whatever). How are admin passwords stored?
- All deployed services and APIs. Are you running anything you forgot about?
Use a spreadsheet. It's boring and it works. You're not building a compliance tool; you're building a map so you can think clearly.
What we see in the field: Most teams discover shadow infrastructure here—a Postgres database spun up by an engineer two years ago, still running, never backed up, no monitoring.
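If you want the spreadsheet to do a little work for you, even a few lines of code can surface the gaps. This is a minimal sketch, not a tool: the asset rows and column names (`owner`, `touches_customer_data`) are hypothetical placeholders for whatever your actual inventory captures.

```python
# Hypothetical inventory rows mirroring the spreadsheet columns described above.
ASSETS = [
    {"asset": "aws-prod",        "type": "cloud account", "owner": "alice", "touches_customer_data": "yes"},
    {"asset": "legacy-postgres", "type": "database",      "owner": "",      "touches_customer_data": "yes"},
    {"asset": "staging-gcp",     "type": "cloud account", "owner": "bob",   "touches_customer_data": "no"},
]

def unowned_sensitive(assets):
    """Return assets that touch customer data but have no named owner."""
    return [a["asset"] for a in assets
            if a["touches_customer_data"] == "yes" and not a["owner"]]

# Anything this prints is your shadow infrastructure to chase down.
print(unowned_sensitive(ASSETS))  # → ['legacy-postgres']
```

The point isn't the code; it's the discipline of forcing every row to name an owner.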
2. Day 7–14: Lock down identity and access
If you get breached tomorrow, an attacker's first move is to steal credentials or exploit weak access controls. Lock this down before you do anything else.
- Enable SSO. If you don't have it, pick a provider (Okta, Entra ID, Google Workspace), route all logins through it, and retire per-app local accounts as you go.
- Enable MFA everywhere. Slack, GitHub, AWS, GCP, your internal tools. Everywhere. Non-negotiable.
- Kill shared credentials. No shared AWS keys. No Postgres password in a chat message. No root account used by multiple people. Each person gets their own identity.
- Review who has admin access. Be ruthless. Can an engineer who left last month still SSH to production? No. Do you have a "break glass" process for emergencies? You need one.
- Set up audit logging for identity events. Who logged in when? Who added a new user? These logs will matter.
This is the highest-leverage work you'll do. Most real breaches at your stage come from weak credentials, not sophisticated exploits.
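The access review above is, at its core, a set difference: people who hold access minus people who should. A sketch, assuming you can export an active-employee list from your IdP and an access list from production (both exports and names here are hypothetical):

```python
# Hypothetical exports: active users per your IdP, and who holds production access.
ACTIVE_EMPLOYEES = {"alice", "bob", "carol"}
PROD_ACCESS = {"alice", "bob", "dave"}   # dave left last month
ADMINS = {"alice", "dave"}

stale = PROD_ACCESS - ACTIVE_EMPLOYEES    # access to revoke today
stale_admins = ADMINS - ACTIVE_EMPLOYEES  # worst case: ex-employees with admin rights

print(sorted(stale))         # → ['dave']
print(sorted(stale_admins))  # → ['dave']
```

Run this (or its spreadsheet equivalent) quarterly. The hard part isn't the diff; it's getting a single trustworthy list of who actually works here.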
3. Day 14–30: Get visibility
You need to see what's happening in your systems. Set up centralized logging and alerting so you know if something is wrong.
- Centralize logs. Point CloudTrail (AWS), Activity Log (Azure), Cloud Audit Logs (GCP) at a central logging service (CloudWatch, Datadog, Grafana Loki, S3 bucket—pick one). The goal: all security-relevant events in one place.
- Create basic alerts. Root user login. Someone deleting a security group. Access to a sensitive database. Failed login attempts spike. You don't need to be sophisticated; you need to notice anomalies.
- Enable cloud service audit trails. GitHub organization logs. Third-party service integrations. Who changed what, and when?
- Document where logs live. If you need to investigate an incident in six months, will you know where to look?
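The basic alerts above amount to a handful of if-statements over your event stream. A minimal sketch, assuming CloudTrail-style events reduced to the fields the rules need (the field names and threshold here are illustrative, not any vendor's schema):

```python
from collections import Counter

# Hypothetical events, trimmed to the fields the rules below inspect.
EVENTS = [
    {"user": "root",  "action": "ConsoleLogin",        "outcome": "Success"},
    {"user": "alice", "action": "DeleteSecurityGroup", "outcome": "Success"},
    {"user": "bob",   "action": "ConsoleLogin",        "outcome": "Failure"},
    {"user": "bob",   "action": "ConsoleLogin",        "outcome": "Failure"},
    {"user": "bob",   "action": "ConsoleLogin",        "outcome": "Failure"},
]

FAILED_LOGIN_THRESHOLD = 3  # illustrative; tune to your traffic

def alerts(events):
    out = []
    failures = Counter()
    for e in events:
        if e["user"] == "root" and e["action"] == "ConsoleLogin":
            out.append("root console login")
        if e["action"] == "DeleteSecurityGroup":
            out.append(f"security group deleted by {e['user']}")
        if e["action"] == "ConsoleLogin" and e["outcome"] == "Failure":
            failures[e["user"]] += 1
            if failures[e["user"]] == FAILED_LOGIN_THRESHOLD:
                out.append(f"failed login spike for {e['user']}")
    return out

for a in alerts(EVENTS):
    print(a)
```

In practice you'd express these as rules in your logging platform rather than code you run yourself, but the logic is exactly this simple. Start with three or four rules and add more when an incident teaches you what you missed.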
Logging infrastructure feels boring until you need it. Then it's invaluable.
4. Day 30–45: Run your first pentest
You now know what you have and who can access it. Next, find out what an attacker can do with that information.
Hire a reputable penetration testing firm to attack your production systems. External scope only (your perimeter: what an attacker on the internet sees). Tell them: "We have no security program. Find what we're missing."
This is not optional. It is not a "future" investment. You need this data before you build controls, because attackers don't follow your design docs. They go for the easiest path in. A pentest finds that path.
What we see in the field: Companies that skip the pentest and build policies first end up with documentation and no actual security. Companies that get tested first know exactly where to invest.
5. Day 45–60: Fix what the pentest found
Triage the findings by exploitability, not CVSS score. A CVSS 9 that requires authentication you control is lower risk than a CVSS 6 that any internet user can trigger.
- Fix anything an unauthenticated attacker can exploit (exposed API keys, public S3 buckets, SQLi in a customer-facing form).
- Fix anything that gives an attacker persistence (default credentials, overpermissioned roles, unpatched remote execution flaws).
- Document low-severity findings for the backlog. You'll fix them, but not this week.
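That triage order can be made mechanical. A sketch of sorting findings by exploitability first and CVSS last, with hypothetical findings and field names (`unauthenticated`, `persistence`) standing in for whatever your pentest report provides:

```python
# Hypothetical findings. Triage key: can an unauthenticated internet user exploit
# it, then does it grant persistence — CVSS is a tiebreaker, not the driver.
FINDINGS = [
    {"title": "SQLi in signup form",     "cvss": 6.1, "unauthenticated": True,  "persistence": False},
    {"title": "RCE behind internal VPN", "cvss": 9.8, "unauthenticated": False, "persistence": True},
    {"title": "Verbose error pages",     "cvss": 3.1, "unauthenticated": True,  "persistence": False},
]

def triage_key(finding):
    # Tuples sort element by element; False sorts before True, so
    # unauthenticated findings come first, persistence next, higher CVSS last.
    return (not finding["unauthenticated"], not finding["persistence"], -finding["cvss"])

for f in sorted(FINDINGS, key=triage_key):
    print(f["title"])
# → SQLi in signup form
# → Verbose error pages
# → RCE behind internal VPN
```

Note what happens: the CVSS 9.8 finding sorts last because it sits behind authentication you control, which is exactly the point.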
You don't need to fix everything, but you need to fix the risky stuff before you go to the customer.
6. Day 60–75: Ship the pipeline security basics
You've patched the immediate problems. Now stop creating new ones. Add three things to CI:
- Secret scanning. Scan your code and git history for API keys, tokens, passwords. (TruffleHog, detect-secrets, or native GitHub/GitLab scanning.)
- Dependency scanning. You're pulling in open-source libraries. Know which ones have known vulnerabilities. (Snyk, Dependabot, or your cloud provider's native tool.)
- Container image scanning. If you're shipping Docker, scan the image before you push it. (Trivy, or native registry scanning.)
These are not perfect. They will not catch everything. But they will catch obvious mistakes—and obvious mistakes are what actually get exploited.
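To make "secret scanning" concrete: at its simplest, it's pattern matching over your source. This is a deliberately naive sketch with two illustrative patterns; real scanners (TruffleHog, detect-secrets, GitHub's native scanning) ship hundreds of patterns plus entropy analysis and history scanning, which is why you use them instead of this.

```python
import re

# Two illustrative patterns only — nowhere near a real scanner's coverage.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan(text):
    """Return the names of patterns that match anywhere in the text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

# AWS's documented example key — the kind of string that should fail your build.
SNIPPET = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\npassword = "hunter2"\n'
print(scan(SNIPPET))  # → ['aws_access_key', 'hardcoded_password']
```

Wire whichever real tool you pick into CI so a match fails the build; a scanner that only reports is a scanner nobody reads.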
7. Day 75–90: Write the minimum viable policies
You now have controls in place. Document just enough to answer the questionnaire:
- Acceptable Use. What can employees do with company resources? ("Don't store passwords in Slack. Don't install random Chrome extensions.") One page.
- Access Control. "Who can access what data? How do we grant access? How do we revoke it?" Document your SSO, MFA, and role-based access model.
- Incident Response. "If we get hacked, here's who we call and what we do." (You have names. You have a Slack channel. That's enough.)
- Vulnerability Management. "We scan code and dependencies. We patch known vulnerabilities within X days. Here's who does it."
Do not write SOC 2 controls yet. Do not write a 50-page security architecture. Write what you actually do, in clear language, so an auditor can verify it exists.
8. After 90 days: What to do next
You've built the foundation. Now the real questions:
- Do you need SOC 2 Type II? If enterprise customers are asking for it, yes. Type II requires an observation window, typically 3 to 6 months of control evidence, so start now. Type I is a point-in-time checkpoint; do that first if time is short.
- Do you hire a CISO or outsource? At 20–200 people, you probably don't hire full-time. Contract with a fractional CISO, a security consulting firm, or a managed security service for reviews and incident response.
- What's the pattern here? You did inventory, then access control, then visibility, then testing, then remediation, then automation, then documentation. That sequence works because each layer depends on the ones below. Companies that skip to "let's write policies" end up with documents and no actual security. Companies that sequence correctly build momentum: each step gives you evidence and confidence for the next one.
The short version
You have 90 days to answer an enterprise customer or auditor: "Do you have security?" The answer is yes if you inventory your infrastructure, lock down access, enable logging, run a pentest, fix what it finds, add pipeline controls, and document the basics. Don't hire a CISO or build a GRC platform. Don't write 200-page policies. Do inventory, access, visibility, testing, remediation, automation, and documentation—in that order. That sequence builds actual controls. The policies follow the controls, not the other way around.
Want help building the program?
We run the first pentest, scope the pipeline security, and hand you the minimum viable policies — in 90 days. Senior practitioners, no SDRs.