Executive Summary
After reading this playbook, engineering teams will be able to:
- Elicit and prioritize security requirements from the start, ensuring security objectives drive architecture and testing decisions.
- Perform threat modeling and risk analysis systematically, producing actionable threat scenarios and mitigation controls.
- Design secure architectures with defense-in-depth, documenting decisions in Architecture Decision Records (ADRs).
- Integrate SAST, DAST, SCA, and secrets scanning into CI/CD pipelines with risk-based quality gates that don't block delivery.
- Operate secure services with continuous patch management, system hardening (CIS benchmarks), configuration audits, and runtime monitoring.
- Demonstrate compliance continuously by mapping activities to NIST CSF, IEC-62443, HITRUST frameworks and generating audit-ready evidence.
- Implement a 30-day starter plan and follow a maturity roadmap from ad-hoc security to continuous security verification.
Key mindset shift: Move from "security activities" to "security outcomes." Every activity in this playbook produces verifiable evidence that reduces specific risks.
The Secure SDLC Lifecycle: A Visual Map
This playbook organizes 20 security disciplines across 8 lifecycle phases. Each phase produces artifacts that flow into the next, creating a continuous security verification loop.
DISCOVER
Security Requirement Elicitation → Security Objectives → Standards Alignment
Artifact: Security Requirements Document, Objectives Statement
ANALYZE
Threat & Risk Analysis → Devising Security Strategy
Artifact: Threat Model, Risk Register, Security Strategy
DESIGN
Secure Architecture & Design → Secure Supplier Selection
Artifact: Architecture Decision Records (ADRs), Component Security Assessments
BUILD
Secure Coding Practices → SAST Integration → Secure Code Reviews
Artifact: Code, SBOM, SAST Reports, Code Review Checklists
VERIFY
DAST → Defining Security Test Cases → VAPT → Vulnerability Management
Artifact: Test Results, Pentest Reports, Vulnerability Backlog
DEPLOY
System Hardening (CIS) → Configuration Audits → Compliance Validation
Artifact: Hardened Images, Configuration Baselines, Compliance Evidence
OPERATE
Secure Service Operations → Security Patch Management → Drafting Security Manuals
Artifact: Runbooks, Patch Logs, Security Manuals, Incident Response Plans
IMPROVE
DevSecOps Maturity → Continuous Feedback Loop
Artifact: Metrics Dashboards, Retrospectives, Security Roadmap
Running Example: Throughout this playbook, we'll reference a product team shipping an industrial edge application for monitoring manufacturing equipment—a safety-critical, IEC-62443-regulated system deployed on-premises in customer facilities.
Phase 1: DISCOVER
Establish security requirements and objectives that drive all downstream decisions
1. Security Requirement Elicitation
Purpose
Security requirements are the foundation. Without explicit requirements, security becomes an afterthought, leading to expensive retrofitting and unmitigated threats.
Inputs
- Business context (criticality, deployment environment, user personas)
- Regulatory and contractual obligations (GDPR, HIPAA, customer SLAs)
- Known threat intelligence for your domain
Activities
- Conduct stakeholder interviews (product, engineering, compliance, customers)
- Use security requirement frameworks: OWASP ASVS, NIST SP 800-53 controls, IEC-62443 SL levels
- Document functional security requirements (authentication, authorization, encryption) and non-functional (performance under attack, audit logging)
- Prioritize using MoSCoW (Must/Should/Could/Won't)
Outputs
Sample Security Requirements Document (excerpt):
SR-001: [MUST] All user authentication shall use multi-factor authentication (MFA) for privileged accounts.
SR-002: [MUST] System shall encrypt data at rest using AES-256 and in transit using TLS 1.3+.
SR-003: [SHOULD] API rate limiting shall cap each client at 100 requests/min to mitigate DoS.
SR-004: [MUST] All security events (auth failures, privilege escalations) shall be logged immutably.
What Good Looks Like
- Requirements are testable: "User session timeout = 15 minutes" not "sessions should be short"
- Each requirement is traceable to a regulation or threat (e.g., SR-001 → NIST AC-2)
- Requirements drive test cases, threat model, and architecture decisions
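The "testable and traceable" properties above can be enforced by keeping requirements in a machine-readable register rather than free-form prose. A minimal sketch (the field names and the control IDs beyond SR-001 are illustrative, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class SecurityRequirement:
    """One row of the security requirements document."""
    req_id: str        # e.g. "SR-001"
    priority: str      # MoSCoW: MUST / SHOULD / COULD / WONT
    statement: str     # testable wording, not "be secure"
    trace: str         # regulation or threat it satisfies, e.g. "NIST AC-2"

REQUIREMENTS = [
    SecurityRequirement("SR-001", "MUST",
        "MFA required for all privileged accounts", "NIST AC-2"),
    SecurityRequirement("SR-002", "MUST",
        "AES-256 at rest, TLS 1.3+ in transit", "NIST SC-13"),
    SecurityRequirement("SR-003", "SHOULD",
        "Rate limit: 100 requests/min per client", "Threat T-002"),
]

def must_haves(reqs):
    """Return the requirements that gate release (MoSCoW 'MUST')."""
    return [r for r in reqs if r.priority == "MUST"]
```

A register like this makes the traceability bullet mechanical: every entry carries its `trace` field, so "which regulation does SR-002 satisfy?" is a lookup, not an archaeology exercise.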
Tooling
Jira/Azure DevOps (requirements backlog), Confluence/Notion (documentation), OWASP ASVS checklist
Metrics
- Leading: % of features with documented security requirements before design
- Lagging: # of security bugs found in production that trace back to missing requirements
Common Pitfalls
- Writing vague requirements: "System shall be secure" (not testable)
- Copy-pasting generic checklists without adapting to your product context
- Treating security requirements as optional "nice-to-haves"
- Failing to update requirements when threat landscape changes
Quick Wins
- Start with OWASP Top 10 as baseline requirements for web apps
- Create a "Definition of Done" that includes security requirements sign-off
- Use security user stories: "As an attacker, I should NOT be able to..."
2. Defining Security Objectives
Purpose
Security objectives translate business goals into measurable security outcomes. They align stakeholders and set target security postures.
Inputs
Business strategy, risk appetite, compliance mandates, previous incident history
Activities
- Define CIA triad priorities (Confidentiality, Integrity, Availability) for your product
- Set quantitative targets: "Zero high-severity vulnerabilities in production," "99.9% uptime under attack"
- Document acceptable risk: "We accept informational-severity findings if remediation cost > business impact"
Outputs
Sample Security Objectives Statement:
Product: Industrial Edge Monitoring System
Primary Objective: Protect operational integrity and safety—system failures must not cause equipment damage or safety incidents.
CIA Priorities:
1. Integrity (Critical): Sensor data tampering could cause incorrect safety shutdowns
2. Availability (High): Downtime impacts production schedules
3. Confidentiality (Medium): Proprietary process data exposure is a business risk
Target Security Posture:
- IEC-62443 Security Level 2 (SL2) by Q2 2026
- Zero critical/high CVEs in production releases
- Mean Time to Patch (MTTP) < 7 days for exploited vulnerabilities
- 100% of privileged access logged and auditable
Quick Wins
- Use a one-page security objectives poster visible to the team
- Present objectives at sprint planning so everyone knows "why security matters"
Phase 2: ANALYZE
Identify and prioritize threats, determine risk mitigation strategies
3. Threat & Risk Analysis
Purpose
Threat modeling systematically identifies attack vectors before code is written. It is among the most cost-effective security activities: fixing a design flaw at design time is commonly estimated to be an order of magnitude cheaper than fixing it post-deployment.
Inputs
Architecture diagrams, data flow diagrams (DFDs), security requirements, known threat intel
Activities
- Use STRIDE methodology (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege)
- Identify trust boundaries (e.g., internet → DMZ → internal network)
- For each data flow crossing a trust boundary, enumerate threats
- Rank threats using DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) or CVSS
- Define mitigations: eliminate, mitigate, transfer (insurance), or accept
Outputs
Sample Threat Model Summary:
Component: API Gateway
Trust Boundary: Internet → API Gateway → Backend Services

Threat T-001: [Spoofing] Attacker impersonates legitimate API client
Category: STRIDE-S | Risk: HIGH | CVSS: 8.5
Mitigation: Implement OAuth 2.0 + client certificates (mTLS)
Owner: Backend Team | Due: Sprint 12

Threat T-002: [DoS] Attacker floods API with requests
Category: STRIDE-D | Risk: MEDIUM | CVSS: 6.0
Mitigation: Rate limiting (100 req/min), IP blocklist, WAF
Owner: DevOps Team | Due: Sprint 11

Threat T-003: [Information Disclosure] API error messages leak stack traces
Category: STRIDE-I | Risk: LOW | CVSS: 3.2
Mitigation: Generic error messages, detailed logs server-side only
Owner: Dev Team | Due: Sprint 10
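The DREAD ranking mentioned in the activities list averages five factors, each rated 1-10. A minimal sketch of the scoring step; the band thresholds (7 and 4) are illustrative assumptions, not part of the DREAD definition:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD factors (each rated 1-10)."""
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be rated 1-10")
    return sum(factors) / len(factors)

def risk_band(score):
    """Map a DREAD score to a risk band (thresholds are illustrative)."""
    if score >= 7:
        return "HIGH"
    if score >= 4:
        return "MEDIUM"
    return "LOW"
```

For example, a threat rated damage=10, reproducibility=8, exploitability=7, affected_users=9, discoverability=6 averages 8.0 and lands in the HIGH band, which would justify a mitigation owner and due sprint like T-001 above.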
What Good Looks Like
- Threat models are living documents, updated with every architecture change
- Each threat maps to a mitigation (control), which maps to a test case
- Teams threat-model collaboratively (dev, security, ops) in 1-2 hour workshops
Tooling
Microsoft Threat Modeling Tool, OWASP Threat Dragon, IriusRisk, Draw.io (for DFDs)
Common Pitfalls
- Threat modeling only once at project start, then never updating it
- Overly generic threats: "Attacker can break in" (not actionable)
- Treating threat models as security team's job—dev teams must own them
Quick Wins
- Start with "top 5 threats" for your product type (e.g., OWASP API Top 10)
- Use elevation-of-privilege card game to make threat modeling fun
4. Devising Security Strategy
Purpose
Transform threat analysis into a coherent roadmap: which controls to implement when, what resources are needed, how to measure progress.
Activities
- Prioritize mitigations by risk score and feasibility
- Define security milestones (e.g., "Achieve 80% SAST coverage by Q2")
- Allocate budget and resources (security tooling, training, headcount)
- Create a security roadmap synced with product roadmap
Outputs
Security strategy document, 12-month security roadmap, quarterly OKRs
Quick Win
Define three security OKRs per quarter, e.g., "Reduce MTTP from 14 days to 7 days"
Phase 3: DESIGN
Embed security into architecture decisions and component selection
5. Secure Architecture & Design
Purpose
Implement defense-in-depth: layers of controls so that a single failure doesn't compromise the system. Document key decisions for future maintainers.
Key Principles
- Least Privilege: Components/users get minimum necessary permissions
- Fail Securely: Errors default to deny access, not grant it
- Defense in Depth: Multiple overlapping controls (WAF + input validation + parameterized queries)
- Zero Trust: Never trust, always verify—even internal network traffic
Outputs
Sample Architecture Decision Record (ADR-007):
Title: Use mutual TLS (mTLS) for service-to-service authentication
Status: Accepted
Context: Microservices communicate over internal network. Threat model identified service impersonation risk (T-045).
Decision: Implement mTLS using service mesh (Istio/Linkerd). Each service gets X.509 cert, rotated every 24h.
Consequences: (+) Stronger auth, encrypted internal traffic. (-) Cert management overhead, debugging complexity.
Alternatives Considered: Shared secrets (rejected: hard to rotate), JWT tokens (rejected: doesn't encrypt traffic)
Traceability: Mitigates T-045, implements SR-023
Tooling
Architecture as Code (Terraform/Pulumi), service mesh (Istio), ADR tools (adr-tools CLI)
Quick Wins
- Use ADR template for all major security decisions (keep in /docs/adr/)
- Run architecture security reviews before implementation starts
6. Secure Supplier & Component Selection
Purpose
Third-party libraries and suppliers introduce supply chain risk. Assess security posture before adoption.
Activities
- Maintain an approved component list (libraries, frameworks, SaaS vendors)
- Evaluate new components: CVE history, maintainer responsiveness, last update date
- Use Software Composition Analysis (SCA) to track dependencies
- Verify provenance: check signatures, use private artifact registries
Outputs
SBOM (Software Bill of Materials), component security assessment scorecard
Common Pitfall
Developers add npm/PyPI packages without review; a single malicious package can compromise the entire application
Quick Wins
- Use Dependabot/Renovate for automated dependency updates
- Block packages with known critical CVEs in CI/CD
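The "block packages with known critical CVEs" quick win can be sketched as a CI check over the SBOM. The SBOM shape, component names, and CVE IDs below are made up for illustration; a real pipeline would consume output from an SCA tool such as Trivy or Snyk:

```python
def gate_dependencies(sbom, min_blocking_severity="HIGH"):
    """Return (component, CVE) pairs whose severity meets or exceeds
    the blocking threshold. Severity ladder is illustrative."""
    order = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
    threshold = order.index(min_blocking_severity)
    blocked = []
    for component in sbom:
        for cve in component.get("cves", []):
            if order.index(cve["severity"]) >= threshold:
                blocked.append((component["name"], cve["id"]))
    return blocked

# Example SBOM fragment (names and CVE IDs are invented)
sbom = [
    {"name": "left-pad", "cves": []},
    {"name": "old-parser",
     "cves": [{"id": "CVE-XXXX-0001", "severity": "CRITICAL"}]},
]
```

A non-empty return value would fail the build, which is exactly the behavior you want wired into the CI/CD gate.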
Phase 4: BUILD
Write secure code, enforce standards through automation and peer review
Topics Covered in This Phase:
- 7. Secure Coding Practices: OWASP secure coding guidelines, input validation, parameterized queries, secure defaults
- 8. Secure Code Reviews: Peer review checklists focusing on auth, crypto, injection flaws
- 9. Integration of SAST Tools: Static analysis in CI/CD (SonarQube, Semgrep, CodeQL)
Sample Secure Coding Checklist (excerpt):
□ Input Validation: All user inputs validated against allow-list, length-limited
□ Authentication: Passwords hashed with bcrypt (cost factor ≥12), MFA enforced for admin
□ Authorization: RBAC implemented, permissions checked on every operation
□ Cryptography: No hardcoded secrets, use key management service (KMS)
□ Error Handling: No stack traces in user-facing errors, log full details server-side
□ Logging: Security events logged (auth failures, privilege changes), no PII in logs
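Two of the checklist items (password hashing and parameterized queries) are concrete enough to sketch. The checklist calls for bcrypt; this sketch substitutes stdlib PBKDF2-HMAC-SHA256 so it runs without third-party packages, which is a stand-in, not a recommendation to avoid bcrypt:

```python
import hashlib
import os
import sqlite3

def hash_password(password, salt=None):
    """Salted PBKDF2-HMAC-SHA256 (stdlib stand-in for the bcrypt item)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def find_user(conn, username):
    """Parameterized query: user input is never concatenated into SQL."""
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# In-memory demo database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
```

Because the username travels as a bound parameter, a classic injection payload like `alice' OR '1'='1` is matched literally as a (nonexistent) name and returns no rows.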
SAST Integration Example (GitHub Actions):

```yaml
steps:
  - name: Run SAST (Semgrep)
    run: semgrep --config=auto --sarif -o results.sarif .
  - name: Upload to GitHub Security
    uses: github/codeql-action/upload-sarif@v2
    with:
      sarif_file: results.sarif
  - name: Fail on Critical/High
    run: |
      CRITICAL_COUNT=$(jq '[.runs[].results[] | select(.level=="error")] | length' results.sarif)
      if [ "$CRITICAL_COUNT" -gt 0 ]; then exit 1; fi
```

Quick Wins for BUILD Phase
- Add pre-commit hooks for secrets scanning (truffleHog, gitleaks)
- Create a "security champions" program—train 1 dev per team as security advocate
- Use IDE plugins for real-time SAST feedback (Snyk, SonarLint)
Phase 5: VERIFY
Test security controls, validate threat mitigations, manage vulnerabilities
Topics Covered:
- 10. DAST Integration: Dynamic testing in staging (OWASP ZAP, Burp Suite)
- 11. Defining Security Test Cases: Negative tests (invalid inputs), boundary tests, auth bypass attempts
- 12. VAPT: Professional penetration testing before major releases
- 13. Vulnerability Management: Triage, prioritize, remediate findings from SAST/DAST/VAPT
Vulnerability Triage Rubric:
CRITICAL (P0): Exploitable remotely without auth, data breach or RCE risk → Fix within 24h
HIGH (P1): Requires auth but exploitable, privilege escalation → Fix within 7 days
MEDIUM (P2): Requires local access or complex exploit chain → Fix within 30 days
LOW (P3): Theoretical vulnerability, no known exploit → Fix in next major release
INFORMATIONAL: Best practice violation, no immediate risk → Backlog

Examples:
- SQL injection in public API endpoint (unauthenticated) → CRITICAL (P0)
- XSS in admin panel (requires admin login) → HIGH (P1)
- Outdated library with no CVE → LOW (P3)
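The rubric above can be expressed as a triage function so every finding gets the same answer regardless of who triages it. The boolean inputs are a deliberate simplification of the rubric's criteria; a real triage would also weigh CVSS and asset criticality:

```python
def triage(remotely_exploitable, requires_auth, exploit_known):
    """Map a finding onto the P0-P3 rubric (simplified inputs).
    Returns (priority, SLA in days); None means next major release."""
    if remotely_exploitable and not requires_auth:
        return ("P0", 1)       # fix within 24h
    if remotely_exploitable and requires_auth:
        return ("P1", 7)       # fix within 7 days
    if exploit_known:
        return ("P2", 30)      # local access / complex exploit chain
    return ("P3", None)        # theoretical, next major release
```

Run against the rubric's own examples: the unauthenticated SQL injection triages to P0 and the admin-panel XSS to P1.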
Quick Wins for VERIFY Phase
- Automate DAST scans on every staging deployment
- Create a "bug bounty lite" program—offer rewards to internal teams for finding bugs
- Use vulnerability SLA dashboards to track remediation times
Phase 6: DEPLOY
Harden systems, validate configurations, prove compliance before production
Topics Covered:
- 14. System Hardening (CIS): Apply CIS Benchmarks for OS, containers, databases
- 15. Configuration Audits: Automated compliance checks (InSpec, OpenSCAP)
- 16. Compliance Validation: Generate evidence for SOC 2, ISO 27001, IEC-62443 audits
CIS Hardening Checklist (Linux Server):
□ Ensure SSH uses key-based auth only (no passwords)
□ Disable root login, use sudo with MFA
□ Install fail2ban for brute-force protection
□ Enable SELinux/AppArmor in enforcing mode
□ Configure firewall (iptables/firewalld) to allow only required ports
□ Patch to latest OS version, enable automatic security updates
□ Remove unnecessary packages and services
□ Encrypt disks at rest (LUKS)
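Checklist items like the first two are exactly what configuration-audit tools (InSpec, OpenSCAP) automate. A toy version of such a check, operating on sshd_config text; the directive names are real OpenSSH options, but this is a sketch, not a full CIS audit:

```python
def audit_sshd(config_text):
    """Check two of the SSH items above against sshd_config content.
    Returns the list of failed checks (empty list = compliant)."""
    expected = {
        "passwordauthentication": "no",   # key-based auth only
        "permitrootlogin": "no",          # no direct root login
    }
    settings = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if line:
            key, _, value = line.partition(" ")
            settings[key.lower()] = value.strip().lower()
    return [k for k, want in expected.items() if settings.get(k) != want]
```

A missing directive counts as a failure, matching the hardening stance that secure settings must be explicit, not assumed from defaults.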
Quick Win
Use Infrastructure as Code (Terraform/Ansible) to enforce hardening—never manually configure servers
Phase 7: OPERATE
Maintain security posture through patching, monitoring, incident response
Topics Covered:
- 17. Security Patch Management: Track CVEs, test patches, deploy systematically
- 18. Secure Service Operations: Monitoring, logging, incident response, disaster recovery
- 19. Drafting Security Manuals: Runbooks, playbooks, customer security guides
Patch Management Policy (excerpt):
1. Vulnerability Monitoring: Subscribe to security advisories (NVD, vendor bulletins, GitHub Security Advisories)
2. Classification:
   - Critical (actively exploited): Emergency patch within 24h
   - High (exploit available): Patch within 7 days
   - Medium/Low: Patch in next scheduled maintenance window
3. Testing: All patches tested in staging before production
4. Rollback Plan: Maintain ability to roll back failed patches within 15 minutes
5. Documentation: Log all patches in change management system
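The classification step of the policy is a small decision function; encoding it keeps every on-call engineer applying the same windows. A sketch under the policy's own categories:

```python
def patch_deadline_hours(actively_exploited, exploit_available):
    """Patch window per the classification above, in hours.
    None means: defer to the next scheduled maintenance window."""
    if actively_exploited:
        return 24              # Critical: emergency patch
    if exploit_available:
        return 7 * 24          # High: within 7 days
    return None                # Medium/Low
```

An alerting pipeline could call this on each incoming advisory and open a ticket with the computed deadline attached.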
Quick Wins
- Automate patch deployment with canary releases (patch 5% of fleet, monitor, roll out)
- Create "security runbooks" for common incidents (DDoS, data breach, ransomware)
Phase 8: IMPROVE - DevSecOps Implementation Blueprint
Continuous security verification integrated into CI/CD
20. DevSecOps: The Complete Pipeline
DevSecOps embeds security checks at every stage of the delivery pipeline, with risk-based quality gates that don't slow down delivery.
CI/CD Security Gates (Stage-by-Stage)
1. Code Commit
- Pre-commit hooks: Secrets scanning (gitleaks), linting (security rules)
- Branch protection: Require 2 code reviews, no direct commits to main
2. Build
- SAST scan (Semgrep, Snyk Code, CodeQL)
- SCA/SBOM generation (Snyk, Dependabot, Trivy)
- Container image scanning (Trivy, Clair, Anchore)
- IaC scanning (Checkov, tfsec for Terraform/CloudFormation)
- Quality Gate: FAIL if critical/high vulnerabilities (P0/P1) found
3. Test (Staging)
- DAST scan (OWASP ZAP, Burp Suite API)
- API security testing (fuzzing, auth bypass tests)
- Configuration validation (CIS benchmark checks)
- Quality Gate: WARN on medium findings, FAIL on high/critical
4. Pre-Production
- Compliance validation (policy-as-code via Open Policy Agent)
- Container signing (Sigstore/Cosign) and provenance attestation
- Approval gate: Security lead reviews findings, approves production
5. Production Deployment
- Blue-green or canary deployment (rollback capability)
- Runtime security monitoring enabled (Falco, Sysdig)
- WAF/DDoS protection active
6. Runtime
- Security monitoring (SIEM, anomaly detection)
- Vulnerability re-scanning (weekly SBOM checks for new CVEs)
- Incident response: Automated alerts for security events
Risk-Based Quality Gates: Avoiding Alert Fatigue
Problem: If you fail builds on every SAST finding, you'll have 1000 "medium" issues blocking delivery.
Solution: Triage findings intelligently:
- Block build: Critical/High vulnerabilities in production code paths (authenticated)
- Warn (don't block): Medium findings, test code vulnerabilities, dependencies with no known exploit
- Create backlog ticket: Low/informational findings
- Suppress: False positives (with documented justification and expiry date)
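The four triage rules above can be sketched as a single gate function. The `finding` dictionary keys are illustrative, not a real scanner schema; in practice this logic would consume SARIF or SCA output:

```python
def gate_decision(finding):
    """Apply the risk-based triage rules to one finding.
    Returns one of: "block", "warn", "ticket", "suppress"."""
    if finding.get("suppressed"):
        return "suppress"      # requires documented justification + expiry
    sev = finding["severity"]
    if sev in ("CRITICAL", "HIGH"):
        if finding.get("in_production_path"):
            return "block"
        return "warn"          # e.g. test code, dependency with no exploit
    if sev == "MEDIUM":
        return "warn"
    return "ticket"            # low / informational -> backlog
```

Only the "block" outcome fails the build; everything else surfaces without stopping delivery, which is the point of risk-based gating.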
Sample Risk-Based Policy (OPA Rego):

```rego
# Block deployment if critical CVEs in application code.
# `some i` binds one index so both conditions test the SAME scan result.
deny[msg] {
    some i
    input.scan_results[i].severity == "CRITICAL"
    input.scan_results[i].package_type == "application"
    msg := "CRITICAL vulnerability in application code - deployment blocked"
}

# Warn on high CVEs in dependencies (don't block)
warn[msg] {
    some i
    input.scan_results[i].severity == "HIGH"
    input.scan_results[i].package_type == "dependency"
    msg := "HIGH vulnerability in dependency - review recommended"
}
```

DevSecOps Quick Wins
- Start with secrets scanning and SCA (easiest wins, biggest impact)
- Implement "security as code" dashboards showing trends (vulnerabilities over time)
- Create a "security champions" guild to evangelize DevSecOps practices
Standards Alignment: NIST, IEC-62443, HITRUST
Map lifecycle activities to compliance frameworks
Compliance frameworks are not separate from your SDLC—they formalize what you already do. This table maps our 20 topics to common standards.
| SDLC Activity | NIST CSF 2.0 | IEC-62443 | HITRUST CSF |
|---|---|---|---|
| Security Requirements | GV.PO (Governance) | SR 1.1 (Security Requirements) | 06.a (Asset Management) |
| Threat Modeling | ID.RA (Risk Assessment) | SR 3.1 (Security Levels) | 03.a (Risk Assessment) |
| Secure Coding / SAST | PR.DS (Data Security) | SR 7.6 (Code Testing) | 10.h (Secure Development) |
| VAPT / DAST | DE.CM (Monitoring) | SR 7.3 (Security Testing) | 10.i (Penetration Testing) |
| Vulnerability Management | RS.MA (Response Management) | SR 7.8 (Patch Management) | 10.m (Vulnerability Management) |
| System Hardening (CIS) | PR.IP (Protective Tech) | SR 7.1 (Secure by Default) | 01.v (Hardening) |
| Patch Management | PR.MA (Maintenance) | SR 7.8 | 10.l (Patch Management) |
| Incident Response | RS.AN (Analysis) | SR 6.1 (Incident Response) | 09.b (Incident Management) |
Evidence-as-Code: Continuous Compliance
Instead of scrambling for evidence during audits, generate it continuously from your DevSecOps pipeline:
- Requirement traceability: Link Jira tickets to threat model IDs to test cases
- Test evidence: Export SAST/DAST reports automatically to compliance portal
- Configuration evidence: InSpec scan results prove CIS compliance
- Patch evidence: Automated patch logs show MTTP < 7 days
- Audit trail: All changes logged in Git + Jira with approvals
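Patch evidence such as "MTTP < 7 days" should be computed from the patch log, not asserted. A sketch of the metric; the log format (pairs of disclosure and patch dates) is an assumption about what your change-management export contains:

```python
from datetime import date

def mean_time_to_patch(patch_log):
    """MTTP in days from (disclosed, patched) date pairs.
    Returns None for an empty log."""
    if not patch_log:
        return None
    deltas = [(patched - disclosed).days for disclosed, patched in patch_log]
    return sum(deltas) / len(deltas)

# Example export (dates are invented)
log = [
    (date(2025, 3, 1), date(2025, 3, 4)),    # 3 days
    (date(2025, 3, 10), date(2025, 3, 15)),  # 5 days
]
```

Running this on each release and archiving the result alongside the SBOM gives auditors a continuously generated, reproducible number instead of a spreadsheet assembled the week before the audit.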
Quick Win
Set up automated SBOM generation and store it in artifact registry—instant supply chain transparency for auditors
Your 30-Day Starter Checklist
Pick 3 items from this list and implement them this sprint:
Week 1: Foundation
- Document top 5 security requirements for your product
- Run a 1-hour threat modeling workshop with your team
- Enable Dependabot/Renovate for dependency updates
Week 2: Automation
- Add secrets scanning pre-commit hook (gitleaks)
- Integrate SAST tool into CI/CD (start with Semgrep free tier)
- Create secure code review checklist (use OWASP template)
Week 3: Testing
- Set up DAST scan on staging environment (OWASP ZAP)
- Define vulnerability triage rubric (P0-P3 SLAs)
- Write 5 security test cases (auth, input validation, XSS)
Week 4: Operations
- Document patch management policy (MTTP targets)
- Apply CIS benchmark to production servers (top 10 controls)
- Create security incident response runbook
DevSecOps Maturity Roadmap
Progress through these maturity levels. Most teams start at Level 1; aim for Level 3 within 12 months.
Level 1: Ad-Hoc (Reactive)
Characteristics: Security is an afterthought, pentests only before major releases, vulnerabilities fixed when discovered
Next Step: Document security requirements, start threat modeling
Level 2: Defined (Repeatable)
Characteristics: Security requirements documented, SAST/SCA in CI/CD, basic vulnerability management
Next Step: Add DAST, automate compliance checks, enforce quality gates
Level 3: Managed (Proactive)
Characteristics: Security built into every pipeline stage, risk-based quality gates, MTTP < 7 days, compliance evidence automated
Next Step: Implement continuous runtime monitoring, security chaos engineering
Level 4: Optimized (Continuous)
Characteristics: Security is self-service for developers, real-time threat intelligence, automated remediation, security metrics drive business decisions
Outcome: Security is a competitive advantage, not a bottleneck
Take Action Now
Pick 3 improvements from this playbook and implement them this sprint.
Glossary
- ADR (Architecture Decision Record)
- Lightweight document capturing an architectural decision, its context, and consequences
- CIS Benchmark
- Center for Internet Security hardening guidelines for operating systems, containers, cloud services
- DAST (Dynamic Application Security Testing)
- Black-box testing of running applications to find vulnerabilities
- MTTP (Mean Time to Patch)
- Average time from vulnerability disclosure to patch deployment
- SAST (Static Application Security Testing)
- White-box analysis of source code to identify security flaws
- SBOM (Software Bill of Materials)
- Complete inventory of software components and dependencies
- SCA (Software Composition Analysis)
- Scanning third-party libraries for known vulnerabilities
- STRIDE
- Threat modeling framework: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege
- VAPT (Vulnerability Assessment & Penetration Testing)
- Comprehensive security testing combining automated scanning and manual ethical hacking
Further Reading
- OWASP SAMM (Software Assurance Maturity Model): Comprehensive SDLC security framework
- NIST SP 800-218: Secure Software Development Framework (SSDF)
- IEC 62443-4-1: Secure product development lifecycle requirements for industrial systems
- BSIMM (Building Security In Maturity Model): Data-driven security framework based on real-world practices
- OWASP ASVS: Application Security Verification Standard (testable security requirements)
- CIS Controls v8: Prioritized cybersecurity best practices
- NIST Cybersecurity Framework (CSF) 2.0: Risk management framework for critical infrastructure
Securus Mind Security Team
Application Security & DevSecOps Experts
The Securus Mind team brings decades of combined experience in application security, threat modeling, and DevSecOps transformation across industries including industrial control systems, healthcare, financial services, and cloud-native applications.
Have questions about implementing this playbook? Contact our security team or request a demo to see how Securus Mind automates these practices.