
# Quantum-Safe Code Auditor Blueprint

GenAI-powered tool for quantum-vulnerable code scanning.

## Executive Summary

The Quantum-Safe Code Auditor (QSCA) is a GenAI-powered tool that automatically scans codebases for quantum-vulnerable cryptography and generates compliant replacement code using Ground Truth RAG, forcing strict adherence to the NIST FIPS 203/204/205 standards rather than hallucinating generic crypto fixes.

Value Proposition: Unlike generic code assistants (Copilot, ChatGPT), QSCA is a Compliance Engine grounded in authoritative sources.


## Part 1: Architecture

### High-Level System Design (AWS Native)

┌─────────────────────────────────────────────────────────────────────────────┐
│                              USER LAYER                                      │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │  CISO / Security Architect / Developer                               │   │
│  │  - Connects repository (GitHub, GitLab, Bitbucket)                  │   │
│  │  - Reviews findings dashboard                                        │   │
│  │  - Approves/rejects suggested fixes                                  │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                           PRESENTATION LAYER                                 │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │  Next.js Dashboard (AWS Amplify Hosting)                            │   │
│  │  - Repository connection UI                                          │   │
│  │  - Scan configuration                                                │   │
│  │  - Findings visualization (severity, file, algorithm)               │   │
│  │  - Fix preview with diff view                                        │   │
│  │  - Compliance reporting                                              │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                          ORCHESTRATION LAYER                                 │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │  AWS Lambda (Scan Orchestrator)                                      │   │
│  │  - Receives scan trigger from dashboard                             │   │
│  │  - Clones repository (via GitHub API)                               │   │
│  │  - Parses code into AST chunks                                      │   │
│  │  - Identifies crypto-relevant files                                  │   │
│  │  - Queues chunks for analysis                                       │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│                              │                                              │
│                              ▼                                              │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │  S3 Bucket (Code Chunks)                                            │   │
│  │  - Stores parsed code segments                                      │   │
│  │  - Triggers analysis via S3 Events                                  │   │
│  │  - Temporary storage (auto-expire)                                  │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                        GenAI ANALYSIS LAYER ("The Brain")                    │
│                                                                              │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │  Bedrock Agent ("The Auditor")                                       │   │
│  │  - Receives code chunks                                              │   │
│  │  - Identifies vulnerable patterns                                    │   │
│  │  - Queries Knowledge Base for compliant alternatives                │   │
│  │  - Generates replacement code                                        │   │
│  │  - Validates output against standards                               │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│           │                                    │                            │
│           ▼                                    ▼                            │
│  ┌─────────────────────┐          ┌─────────────────────────────────────┐  │
│  │  Knowledge Base     │          │  LLM Inference                      │  │
│  │  (OpenSearch        │          │  - Claude 3.5 Sonnet (primary)      │  │
│  │   Serverless)       │          │  - 200K context window              │  │
│  │                     │          │  - Strict instruction following     │  │
│  │  Indexed Documents: │          │                                     │  │
│  │  - FIPS 203 (ML-KEM)│          │  Cost Optimization:                 │  │
│  │  - FIPS 204 (ML-DSA)│          │  - Titan/Llama 8B for keyword scan │  │
│  │  - FIPS 205(SLH-DSA)│          │  - Claude for deep analysis         │  │
│  │  - AWS PQC SDK Docs │          └─────────────────────────────────────┘  │
│  │  - ASD ISM Guidelines│                                                  │
│  └─────────────────────┘                                                   │
└─────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                             OUTPUT LAYER                                     │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │  Remediation Options:                                                │   │
│  │  - GitHub Pull Request (auto-generated)                             │   │
│  │  - PDF Compliance Report                                             │   │
│  │  - SARIF format (for IDE integration)                               │   │
│  │  - JIRA ticket creation                                              │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────┘
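
The scan-orchestrator flow above can be sketched as a single Lambda handler. The following is a minimal illustrative sketch, not the shipped implementation: `clone_repository` and `parse_into_chunks` are placeholder stubs, and downstream queueing relies on the S3 event trigger shown in the diagram.

```python
# orchestrator_sketch.py - illustrative only; helper functions are placeholder stubs
import json
import os
import uuid

import boto3

s3 = boto3.client("s3")
CODE_BUCKET = os.environ.get("CODE_BUCKET", "qsca-code-chunks-dev")

# Cheap pre-filter: only crypto-relevant files are chunked and queued
CRYPTO_HINTS = ("rsa", "ecdsa", "ecdh", "diffie", "hazmat", "crypto")


def clone_repository(repo_url):
    """Placeholder: the real tool would clone via the GitHub/GitLab API."""
    return []  # -> [{"path": ..., "content": ...}]


def parse_into_chunks(files, max_lines=200):
    """Placeholder chunker: splits each file into fixed-size line windows."""
    for f in files:
        lines = f["content"].splitlines()
        for start in range(0, len(lines), max_lines):
            yield {
                "path": f["path"],
                "start_line": start + 1,
                "code": "\n".join(lines[start:start + max_lines]),
            }


def handler(event, context):
    """Receive a scan trigger, chunk the repository, and stage chunks in S3."""
    repo_url = event["repository_url"]
    scan_id = str(uuid.uuid4())

    files = clone_repository(repo_url)
    candidates = [
        f for f in files
        if any(hint in f["content"].lower() for hint in CRYPTO_HINTS)
    ]

    for i, chunk in enumerate(parse_into_chunks(candidates)):
        # Writing the chunk fires the S3 event that queues it for analysis
        s3.put_object(
            Bucket=CODE_BUCKET,
            Key=f"{scan_id}/chunk-{i:05d}.json",
            Body=json.dumps(chunk),
        )

    return {"scan_id": scan_id, "files_queued": len(candidates)}
```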

## Part 2: Model Strategy

### Why Claude 3.5 Sonnet for PQC Auditing

| Capability | Claude 3.5 Sonnet | Llama 3 (70B) | Winner |
|---|---|---|---|
| Code Reasoning | Top-tier; understands dependency trees with a 200K+ context | Strong, but "forgets" niche constraints in long files | Claude |
| Instruction Following | Extreme compliance: "Only use FIPS 203 ML-KEM" is obeyed | Good, but drifts to generic ECC without very precise prompting | Claude |
| Agentic Tool Use | Native Bedrock Agents integration | Requires custom scaffolding | Claude |
| Cost | Higher per token | Lower per token | Llama |
| Latency | ~2-3 s for complex analysis | ~1-2 s | Llama |

### Hybrid Model Strategy (Cost Optimization)

┌─────────────────────────────────────────────────────────────────┐
│                    TIERED MODEL APPROACH                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  TIER 1: Initial Scan (Cheap & Fast)                            │
│  Model: Amazon Titan Text Lite / Llama 3 8B                     │
│  Task: Keyword pattern matching                                  │
│  Cost: ~$0.0001 per file                                        │
│                                                                  │
│  Patterns to detect:                                             │
│  - import rsa, from Crypto.PublicKey import RSA                 │
│  - ECDSA, ECDH, P-256, P-384, secp256k1                        │
│  - Diffie-Hellman, DH, DHE                                      │
│  - RSA-OAEP, RSA-PSS, RSA_PKCS1                                │
│  - openssl.crypto, cryptography.hazmat                          │
│                                                                  │
│  Output: List of files requiring deep analysis                  │
│                                                                  │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  TIER 2: Deep Analysis (Accurate & Compliant)                   │
│  Model: Claude 3.5 Sonnet                                       │
│  Task: Full code understanding + fix generation                  │
│  Cost: ~$0.01-0.05 per file                                     │
│                                                                  │
│  Analysis includes:                                              │
│  - Dependency chain tracking                                     │
│  - Key lifecycle analysis                                        │
│  - Protocol context understanding                                │
│  - Compliant code generation with RAG                           │
│                                                                  │
│  Output: Specific, tested replacement code                      │
│                                                                  │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  TIER 3: Validation (Optional)                                  │
│  Model: Claude 3.5 Sonnet (second pass)                         │
│  Task: Verify generated code compiles and follows standards     │
│  Cost: ~$0.01 per fix                                           │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
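
A sketch of how this tiering could be wired is shown below. It is illustrative only: Tier 1 is implemented here as a local keyword pre-filter rather than a Titan/Llama call, Tier 2 uses the Bedrock Converse API with Claude 3.5 Sonnet, and the model ID and prompt text are assumptions.

```python
# tier_router_sketch.py - illustrative tiering logic, not the shipped implementation
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-2")

# Tier 1: cheap pattern screen (shown here as local keyword matching)
TIER1_KEYWORDS = (
    "import rsa", "from Crypto.PublicKey import RSA",
    "ECDSA", "ECDH", "P-256", "P-384", "secp256k1",
    "Diffie-Hellman", "RSA-OAEP", "RSA-PSS", "RSA_PKCS1",
    "cryptography.hazmat",
)

CLAUDE_MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"


def needs_deep_analysis(source_code: str) -> bool:
    """Tier 1: flag files that mention quantum-vulnerable primitives."""
    lowered = source_code.lower()
    return any(keyword.lower() in lowered for keyword in TIER1_KEYWORDS)


def deep_analysis(source_code: str) -> str:
    """Tier 2: send the flagged file to Claude 3.5 Sonnet for full analysis."""
    response = bedrock.converse(
        modelId=CLAUDE_MODEL_ID,
        system=[{"text": "You are QSCA. Report quantum-vulnerable crypto as JSON."}],
        messages=[{"role": "user", "content": [{"text": source_code}]}],
        inferenceConfig={"maxTokens": 2048, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"]


def analyse_file(source_code: str) -> str | None:
    """Route a file through the tiers; most files stop at Tier 1."""
    if not needs_deep_analysis(source_code):
        return None
    return deep_analysis(source_code)
```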

## Part 3: Knowledge Base - The "Secret Sauce"

### Ground Truth RAG Architecture

This is the differentiator from generic code assistants. The model MUST retrieve from authoritative sources before generating fixes.

┌─────────────────────────────────────────────────────────────────┐
│                    KNOWLEDGE BASE STRUCTURE                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  S3 Bucket: qsca-knowledge-base/                                │
│  │                                                               │
│  ├── standards/                                                  │
│  │   ├── NIST-FIPS-203-ML-KEM.pdf                              │
│  │   ├── NIST-FIPS-204-ML-DSA.pdf                              │
│  │   ├── NIST-FIPS-205-SLH-DSA.pdf                             │
│  │   ├── NIST-SP-800-208-hash-based-sigs.pdf                   │
│  │   └── NIST-SP-800-227-PQC-migration.pdf                     │
│  │                                                               │
│  ├── sdk-documentation/                                          │
│  │   ├── aws-encryption-sdk-pqc-python.md                       │
│  │   ├── aws-encryption-sdk-pqc-java.md                         │
│  │   ├── aws-encryption-sdk-pqc-dotnet.md                       │
│  │   ├── aws-kms-pqc-key-specs.md                               │
│  │   └── liboqs-api-reference.md                                │
│  │                                                               │
│  ├── code-patterns/                                              │
│  │   ├── python-ml-kem-key-exchange.py                          │
│  │   ├── python-ml-dsa-signing.py                               │
│  │   ├── java-hybrid-tls-client.java                            │
│  │   ├── go-pqc-certificate-validation.go                       │
│  │   └── migration-patterns-by-framework.md                     │
│  │                                                               │
│  ├── compliance/                                                 │
│  │   ├── ASD-ISM-crypto-guidelines.pdf                          │
│  │   ├── NSA-CNSA-2.0-requirements.pdf                          │
│  │   ├── PCI-DSS-crypto-requirements.pdf                        │
│  │   └── HIPAA-encryption-guidance.pdf                          │
│  │                                                               │
│  └── vulnerability-patterns/                                     │
│      ├── rsa-usage-patterns.json                                │
│      ├── ecdsa-usage-patterns.json                              │
│      ├── dh-usage-patterns.json                                 │
│      └── remediation-mappings.json                              │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│              OpenSearch Serverless (Vector Store)                │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Index: qsca-standards                                          │
│  - Chunked and embedded NIST documents                          │
│  - Semantic search enabled                                       │
│  - Metadata: document_type, algorithm, section                  │
│                                                                  │
│  Index: qsca-code-patterns                                      │
│  - Language-specific code snippets                              │
│  - Metadata: language, algorithm, use_case                      │
│                                                                  │
│  Index: qsca-compliance                                         │
│  - Regulatory requirements                                       │
│  - Metadata: regulation, jurisdiction, severity                 │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘

### Document Processing Pipeline

```python
# knowledge_base_ingestion.py

import boto3
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import BedrockEmbeddings

def ingest_nist_standards():
    """
    Process NIST FIPS documents for Knowledge Base.
    Key: Preserve section structure for accurate retrieval.
    """

    # Custom splitter that respects document structure
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=200,
        separators=[
            "\n## ",      # Major sections
            "\n### ",     # Subsections
            "\n#### ",    # Sub-subsections
            "\n\n",       # Paragraphs
            "\n",         # Lines
            " "           # Words
        ]
    )

    # Embed with Titan Embeddings for consistency
    embeddings = BedrockEmbeddings(
        model_id="amazon.titan-embed-text-v2:0"
    )

    # Process FIPS 203 with metadata
    fips_203_chunks = process_document(
        path="standards/NIST-FIPS-203-ML-KEM.pdf",
        splitter=splitter,
        metadata={
            "document_type": "standard",
            "algorithm": "ML-KEM",
            "fips_number": "203",
            "use_case": "key_encapsulation"
        }
    )

    # Index to OpenSearch
    index_to_opensearch(fips_203_chunks, embeddings)
```
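
The `process_document` and `index_to_opensearch` helpers referenced above are not shown in the blueprint. A minimal sketch under stated assumptions (pypdf for PDF text extraction, opensearch-py with SigV4 auth against the Serverless collection, and a hypothetical endpoint and index name) could look like this:

```python
# kb_ingestion_helpers_sketch.py - illustrative helpers; endpoint and index are assumptions
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
from pypdf import PdfReader

OPENSEARCH_ENDPOINT = "xxxxxxxx.ap-southeast-2.aoss.amazonaws.com"  # hypothetical
INDEX_NAME = "qsca-standards"


def process_document(path, splitter, metadata):
    """Extract text from a PDF and split it into chunks tagged with metadata."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [
        {"content": chunk, "metadata": metadata}
        for chunk in splitter.split_text(text)
    ]


def index_to_opensearch(chunks, embeddings):
    """Embed each chunk with Titan and write it to the vector index."""
    credentials = boto3.Session().get_credentials()
    auth = AWSV4SignerAuth(credentials, "ap-southeast-2", "aoss")
    client = OpenSearch(
        hosts=[{"host": OPENSEARCH_ENDPOINT, "port": 443}],
        http_auth=auth,
        use_ssl=True,
        connection_class=RequestsHttpConnection,
    )

    vectors = embeddings.embed_documents([c["content"] for c in chunks])
    for chunk, vector in zip(chunks, vectors):
        client.index(
            index=INDEX_NAME,
            body={
                "content": chunk["content"],
                "metadata": chunk["metadata"],
                "embedding": vector,
            },
        )
```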

## Part 4: Prompt Engineering

### Core System Prompt

You are QSCA (Quantum-Safe Code Auditor), a specialized security tool that identifies
quantum-vulnerable cryptography and generates NIST-compliant replacement code.

## Your Constraints (CRITICAL - NEVER VIOLATE):

1. ONLY recommend algorithms from NIST FIPS 203/204/205:
   - Key Encapsulation: ML-KEM-512, ML-KEM-768, ML-KEM-1024
   - Digital Signatures: ML-DSA-44, ML-DSA-65, ML-DSA-87
   - Hash-based Signatures: SLH-DSA (when specified)

2. NEVER invent cryptographic code. ALWAYS retrieve patterns from your Knowledge Base.

3. ALWAYS prefer AWS SDK implementations when targeting AWS environments.

4. ALWAYS recommend hybrid mode (classical + PQC) unless explicitly told otherwise.

5. ALWAYS preserve existing key management patterns (KMS, HSM, etc.) in fixes.

6. NEVER suggest deprecated or non-standard algorithms (no raw Kyber, use ML-KEM).

## Your Analysis Process:

1. IDENTIFY: Find the vulnerable cryptographic pattern
2. CONTEXTUALIZE: Understand the key lifecycle and dependencies
3. RETRIEVE: Search Knowledge Base for compliant replacement pattern
4. GENERATE: Produce specific, compilable code using retrieved patterns
5. VALIDATE: Verify the fix maintains functional equivalence
6. DOCUMENT: Explain the change and compliance mapping

### Vulnerability Detection Prompt Chain

## STEP 1: Pattern Recognition Prompt

Analyze the following code file for quantum-vulnerable cryptographic patterns.

For each finding, provide:
- Line number(s)
- Vulnerable pattern (exact code)
- Algorithm identified (RSA-2048, ECDSA-P256, etc.)
- Severity (CRITICAL: public-key crypto, HIGH: key exchange, MEDIUM: signatures)
- Context (what is this crypto used for: auth, encryption, signing, key exchange)

Code:

Respond in JSON format:

```json
{
  "findings": [
    {
      "line": 45,
      "code": "from Crypto.PublicKey import RSA",
      "algorithm": "RSA-2048",
      "severity": "CRITICAL",
      "context": "key_generation_for_api_auth"
    }
  ]
}
```
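
In practice the model's reply has to be parsed defensively before findings reach the dashboard. A small illustrative sketch of that parsing step (the field-validation rule is an assumption matching the JSON schema above):

```python
# findings_parser_sketch.py - illustrative parsing of the STEP 1 JSON reply
import json
import re

REQUIRED_KEYS = {"line", "code", "algorithm", "severity", "context"}


def parse_findings(model_reply: str) -> list[dict]:
    """Pull the JSON object out of the model reply and keep only complete findings."""
    match = re.search(r"\{.*\}", model_reply, re.DOTALL)
    if not match:
        raise ValueError("No JSON object found in model reply")

    findings = json.loads(match.group(0)).get("findings", [])
    return [f for f in findings if REQUIRED_KEYS.issubset(f)]
```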

## STEP 2: Knowledge Base Retrieval Prompt

You found '{algorithm}' used for '{context}' in {filename}.

Search your Knowledge Base for:

  1. The NIST FIPS standard that defines the replacement algorithm
  2. The AWS SDK code pattern for this use case in {language}
  3. Any compliance requirements from {jurisdiction} guidelines

Return the specific document sections and code patterns you found.

Do not proceed to code generation until you have retrieved authoritative patterns.
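
When the agent runs inside Bedrock Agents, this retrieval step happens automatically. If the same grounding step were driven directly from Lambda, the Knowledge Base Retrieve API could be used roughly as sketched below; the knowledge base ID and query template are illustrative assumptions.

```python
# kb_retrieve_sketch.py - illustrative direct use of the Bedrock Knowledge Base Retrieve API
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="ap-southeast-2")

KNOWLEDGE_BASE_ID = "QSCAKB12345"  # hypothetical


def retrieve_replacement_patterns(algorithm: str, context: str, language: str) -> list[str]:
    """Fetch the most relevant standard sections and code patterns for a finding."""
    response = agent_runtime.retrieve(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        retrievalQuery={
            "text": f"NIST-approved replacement for {algorithm} used for {context} in {language}"
        },
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": 5}
        },
    )
    return [result["content"]["text"] for result in response["retrievalResults"]]
```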

## STEP 3: Code Generation Prompt

Based on your Knowledge Base retrieval, generate replacement code.

Original vulnerable code:

{original_code}

Retrieved pattern from Knowledge Base:

Requirements:

  1. Use the exact SDK pattern from the Knowledge Base
  2. Maintain the same function signature and return types
  3. Implement hybrid mode: {classical_algorithm} + {pqc_algorithm}
  4. Preserve existing key storage/retrieval patterns
  5. Add appropriate error handling
  6. Include migration comments explaining the change

Generate the complete replacement code:


### Example: RSA to ML-KEM Migration

**Input Code (Detected):**

```python
# auth_service.py - Line 45-67
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP

def generate_session_key():
    key = RSA.generate(2048)
    public_key = key.publickey()
    cipher = PKCS1_OAEP.new(public_key)
    session_key = get_random_bytes(32)
    encrypted_key = cipher.encrypt(session_key)
    return encrypted_key, key
```


**QSCA Prompt:**

You are a PQC Migration Architect. You found 'RSA-2048' in auth_service.py used for
session key encryption.

Based on the AWS Encryption SDK document in your Knowledge Base, generate the specific
Python code to replace this with an AWS KMS call using the ML-KEM-768 key spec.

Requirements:

  • Use hybrid mode (ECDH + ML-KEM-768)
  • Integrate with AWS KMS for key management
  • Maintain the same function signature: generate_session_key() -> (encrypted_key, key_handle)
  • Do not invent code. Use the SDK pattern from your Knowledge Base.

**QSCA Generated Fix:**

```python
# auth_service.py - QSCA Remediation
# Migration: RSA-2048 → Hybrid ECDH + ML-KEM-768
# Compliance: NIST FIPS 203, AWS PQC Best Practices
# Generated: 2026-01-01

import os

import boto3
from aws_encryption_sdk import EncryptionSDKClient, StrictAwsKmsMasterKeyProvider

# AWS KMS key configured with ML-KEM-768 key spec
KMS_KEY_ARN = "arn:aws:kms:ap-southeast-2:ACCOUNT:key/KEY_ID"


def generate_session_key():
    """
    Generate an encrypted session key using hybrid PQC.

    Migration Notes:
    - Original: RSA-2048 PKCS1-OAEP
    - New: Hybrid ECDH_NIST_P256 + ML-KEM-768 via AWS KMS
    - Backward compatible: recipients need KMS access

    Compliance:
    - NIST FIPS 203 (ML-KEM)
    - AWS Encryption SDK v4.0+
    """
    # Initialize KMS client with the PQC-enabled key (key management operations)
    kms_client = boto3.client("kms", region_name="ap-southeast-2")

    # Configure master key provider for hybrid encryption
    key_provider = StrictAwsKmsMasterKeyProvider(key_ids=[KMS_KEY_ARN])

    # Create encryption client
    client = EncryptionSDKClient()

    # Generate a 256-bit session key and encrypt it.
    # AWS KMS handles hybrid key encapsulation internally.
    session_key = os.urandom(32)
    encrypted_key, _header = client.encrypt(
        source=session_key,
        key_provider=key_provider,
        encryption_context={
            "purpose": "session_key",
            "algorithm": "HYBRID_ECDH_ML_KEM_768",
        },
    )

    # Return the encrypted key and a key reference.
    # Note: the key handle is now the KMS key ARN, not a raw private key.
    return encrypted_key, KMS_KEY_ARN


def decrypt_session_key(encrypted_key: bytes, key_arn: str) -> bytes:
    """
    Decrypt a session key using hybrid PQC.

    Added as part of migration - the original code assumed an in-memory private key.
    """
    key_provider = StrictAwsKmsMasterKeyProvider(key_ids=[key_arn])
    client = EncryptionSDKClient()

    decrypted_key, _header = client.decrypt(
        source=encrypted_key,
        key_provider=key_provider,
    )
    return decrypted_key
```
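
A short usage sketch of the remediated helpers (illustrative; assumes the caller has access to the configured KMS key):

```python
# Example usage of the remediated helpers
encrypted_key, key_handle = generate_session_key()

# Later, or on another service with access to the same KMS key:
session_key = decrypt_session_key(encrypted_key, key_handle)
assert len(session_key) == 32  # the original 256-bit session key is recovered
```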


---

## Part 5: Detection Patterns Database

### Vulnerability Pattern Definitions

```json
{
  "patterns": [
    {
      "id": "RSA_KEY_GENERATION",
      "name": "RSA Key Generation",
      "severity": "CRITICAL",
      "quantum_vulnerable": true,
      "languages": {
        "python": [
          "RSA.generate(",
          "rsa.generate_private_key(",
          "from Crypto.PublicKey import RSA",
          "from cryptography.hazmat.primitives.asymmetric import rsa"
        ],
        "java": [
          "KeyPairGenerator.getInstance(\"RSA\")",
          "RSAKeyGenParameterSpec",
          "new RSAKeyGenParameterSpec("
        ],
        "javascript": [
          "crypto.generateKeyPairSync('rsa'",
          "forge.pki.rsa.generateKeyPair(",
          "new NodeRSA("
        ],
        "go": [
          "rsa.GenerateKey(",
          "x509.ParsePKCS1PrivateKey("
        ]
      },
      "remediation": {
        "algorithm": "ML-KEM-768",
        "pattern_reference": "code-patterns/ml-kem-key-generation",
        "hybrid_recommended": true
      }
    },
    {
      "id": "ECDSA_SIGNING",
      "name": "ECDSA Digital Signature",
      "severity": "CRITICAL",
      "quantum_vulnerable": true,
      "languages": {
        "python": [
          "ECDSA(",
          "ec.generate_private_key(",
          "SigningKey.generate(curve=",
          "from ecdsa import"
        ],
        "java": [
          "KeyPairGenerator.getInstance(\"EC\")",
          "Signature.getInstance(\"SHA256withECDSA\")",
          "ECGenParameterSpec"
        ],
        "go": [
          "ecdsa.GenerateKey(",
          "elliptic.P256()",
          "x509.ECDSAWithSHA256"
        ]
      },
      "remediation": {
        "algorithm": "ML-DSA-65",
        "pattern_reference": "code-patterns/ml-dsa-signing",
        "hybrid_recommended": true
      }
    },
    {
      "id": "DH_KEY_EXCHANGE",
      "name": "Diffie-Hellman Key Exchange",
      "severity": "CRITICAL",
      "quantum_vulnerable": true,
      "languages": {
        "python": [
          "dh.generate_parameters(",
          "DHParameterNumbers(",
          "from cryptography.hazmat.primitives.asymmetric import dh"
        ],
        "java": [
          "KeyPairGenerator.getInstance(\"DH\")",
          "DHParameterSpec",
          "KeyAgreement.getInstance(\"DH\")"
        ]
      },
      "remediation": {
        "algorithm": "ML-KEM-768",
        "pattern_reference": "code-patterns/ml-kem-key-exchange",
        "hybrid_recommended": true
      }
    },
    {
      "id": "ECDH_KEY_EXCHANGE",
      "name": "ECDH Key Exchange",
      "severity": "CRITICAL",
      "quantum_vulnerable": true,
      "languages": {
        "python": [
          "ec.ECDH(",
          "derive(",
          "exchange(ec.ECDH()"
        ],
        "java": [
          "KeyAgreement.getInstance(\"ECDH\")",
          "ECDHKeyAgreement"
        ],
        "go": [
          "elliptic.GenerateKey(",
          "curve.ScalarMult("
        ]
      },
      "remediation": {
        "algorithm": "X25519 + ML-KEM-768 (Hybrid)",
        "pattern_reference": "code-patterns/hybrid-key-exchange",
        "hybrid_recommended": true
      }
    }
  ]
}
```
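
At scan time the Tier 1 pass can evaluate these signatures directly. A minimal sketch of a matcher that loads the pattern database and tags each hit with its remediation mapping (the file path and function names are illustrative, not part of the blueprint):

```python
# pattern_matcher_sketch.py - illustrative Tier 1 matcher over the pattern database
import json


def load_patterns(path: str = "vulnerability-patterns/patterns.json") -> list[dict]:
    """Load the pattern definitions shown above from a JSON file."""
    with open(path) as f:
        return json.load(f)["patterns"]


def match_file(source_code: str, language: str, patterns: list[dict]) -> list[dict]:
    """Return one hit per pattern whose signature substrings appear in the file."""
    hits = []
    for pattern in patterns:
        signatures = pattern["languages"].get(language, [])
        matched = [s for s in signatures if s in source_code]
        if matched:
            hits.append({
                "id": pattern["id"],
                "severity": pattern["severity"],
                "matched_signatures": matched,
                "recommended_algorithm": pattern["remediation"]["algorithm"],
                "pattern_reference": pattern["remediation"]["pattern_reference"],
            })
    return hits


# Example:
# patterns = load_patterns()
# findings = match_file(open("auth_service.py").read(), "python", patterns)
```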


---

## Part 6: AWS Infrastructure as Code

### Terraform Configuration

```hcl
# main.tf - QSCA Infrastructure

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-2" # Sydney
}

# ─────────────────────────────────────────────────────────────────
# S3 BUCKETS - Code Chunks and Knowledge Base
# ─────────────────────────────────────────────────────────────────

resource "aws_s3_bucket" "code_chunks" {
  bucket = "qsca-code-chunks-${var.environment}"

  tags = {
    Purpose = "Temporary storage for code analysis"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "code_chunks_lifecycle" {
  bucket = aws_s3_bucket.code_chunks.id

  rule {
    id     = "expire-old-chunks"
    status = "Enabled"

    expiration {
      days = 1 # Auto-delete after 24 hours
    }
  }
}

resource "aws_s3_bucket" "knowledge_base" {
  bucket = "qsca-knowledge-base-${var.environment}"

  tags = {
    Purpose = "NIST standards and code patterns for RAG"
  }
}

# ─────────────────────────────────────────────────────────────────
# OPENSEARCH SERVERLESS - Vector Store
# ─────────────────────────────────────────────────────────────────

resource "aws_opensearchserverless_collection" "qsca_vectors" {
  name = "qsca-knowledge-vectors"
  type = "VECTORSEARCH"

  tags = {
    Purpose = "Semantic search for PQC standards"
  }
}

resource "aws_opensearchserverless_security_policy" "encryption" {
  name = "qsca-encryption-policy"
  type = "encryption"

  policy = jsonencode({
    Rules = [
      {
        Resource     = ["collection/qsca-knowledge-vectors"]
        ResourceType = "collection"
      }
    ]
    AWSOwnedKey = true
  })
}

# ─────────────────────────────────────────────────────────────────
# BEDROCK KNOWLEDGE BASE
# ─────────────────────────────────────────────────────────────────

resource "aws_bedrockagent_knowledge_base" "qsca" {
  name        = "qsca-pqc-standards"
  description = "NIST FIPS 203/204/205 standards and AWS PQC SDK documentation"
  role_arn    = aws_iam_role.bedrock_kb_role.arn

  knowledge_base_configuration {
    type = "VECTOR"

    vector_knowledge_base_configuration {
      embedding_model_arn = "arn:aws:bedrock:ap-southeast-2::foundation-model/amazon.titan-embed-text-v2:0"
    }
  }

  storage_configuration {
    type = "OPENSEARCH_SERVERLESS"

    opensearch_serverless_configuration {
      collection_arn    = aws_opensearchserverless_collection.qsca_vectors.arn
      vector_index_name = "qsca-index"

      field_mapping {
        metadata_field = "metadata"
        text_field     = "content"
        vector_field   = "embedding"
      }
    }
  }
}

# ─────────────────────────────────────────────────────────────────
# BEDROCK AGENT - The Auditor
# ─────────────────────────────────────────────────────────────────

resource "aws_bedrockagent_agent" "qsca_auditor" {
  agent_name                  = "qsca-auditor"
  description                 = "Quantum-Safe Code Auditor agent"
  agent_resource_role_arn     = aws_iam_role.bedrock_agent_role.arn
  foundation_model            = "anthropic.claude-3-5-sonnet-20241022-v2:0"
  idle_session_ttl_in_seconds = 600
  instruction                 = file("${path.module}/prompts/system_prompt.txt")
}

resource "aws_bedrockagent_agent_knowledge_base_association" "qsca" {
  agent_id             = aws_bedrockagent_agent.qsca_auditor.id
  knowledge_base_id    = aws_bedrockagent_knowledge_base.qsca.id
  description          = "PQC standards and code patterns"
  knowledge_base_state = "ENABLED"
}

# ─────────────────────────────────────────────────────────────────
# LAMBDA - Orchestrator
# ─────────────────────────────────────────────────────────────────

resource "aws_lambda_function" "orchestrator" {
  filename      = "lambda/orchestrator.zip"
  function_name = "qsca-orchestrator"
  role          = aws_iam_role.lambda_role.arn
  handler       = "index.handler"
  runtime       = "python3.12"
  timeout       = 300
  memory_size   = 1024

  environment {
    variables = {
      CODE_BUCKET    = aws_s3_bucket.code_chunks.id
      AGENT_ID       = aws_bedrockagent_agent.qsca_auditor.id
      AGENT_ALIAS_ID = "TSTALIASID" # Update after alias creation
    }
  }
}

# ─────────────────────────────────────────────────────────────────
# AMPLIFY - Next.js Dashboard
# ─────────────────────────────────────────────────────────────────

resource "aws_amplify_app" "qsca_dashboard" {
  name       = "qsca-dashboard"
  repository = var.github_repo_url

  build_spec = <<-EOT
    version: 1
    frontend:
      phases:
        preBuild:
          commands:
            - npm ci
        build:
          commands:
            - npm run build
      artifacts:
        baseDirectory: .next
        files:
          - '**/*'
      cache:
        paths:
          - node_modules/**/*
  EOT

  environment_variables = {
    NEXT_PUBLIC_API_ENDPOINT = aws_lambda_function_url.orchestrator.function_url
  }
}
```


---

## Part 7: Metrics and Monitoring

### Key Performance Indicators

┌─────────────────────────────────────────────────────────────────┐
│                    QSCA OPERATIONAL METRICS                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ACCURACY METRICS:                                              │
│  - True Positive Rate: % of actual vulnerabilities detected     │
│  - False Positive Rate: % of false alarms (target < 5%)         │
│  - Fix Compilation Rate: % of generated fixes that compile      │
│  - Fix Correctness Rate: % of fixes that maintain functionality │
│                                                                 │
│  PERFORMANCE METRICS:                                           │
│  - Scan Time: Seconds per 1000 LOC                              │
│  - Fix Generation Time: Seconds per vulnerability               │
│  - Knowledge Base Retrieval Latency: p50, p99                   │
│  - End-to-end Analysis Time: Minutes per repository             │
│                                                                 │
│  COST METRICS:                                                  │
│  - Cost per Scan: $ per repository analyzed                     │
│  - Cost per Fix: $ per vulnerability remediated                 │
│  - Monthly Infrastructure Cost: Fixed + variable                │
│                                                                 │
│  USAGE METRICS:                                                 │
│  - Repositories Scanned: Total and unique                       │
│  - Vulnerabilities Detected: By severity and algorithm          │
│  - Fixes Accepted: % of suggestions merged                      │
│  - User Retention: Monthly active users                         │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
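
These KPIs map onto custom CloudWatch metrics in a `QSCA` namespace, which the dashboard below then charts. A minimal sketch of how the orchestrator could emit them (dimension names follow the dashboard definition; the call site and helper name are illustrative):

```python
# metrics_sketch.py - illustrative emission of QSCA custom CloudWatch metrics
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-2")


def record_scan_metrics(vulnerabilities_by_severity: dict[str, int], scan_failed: bool = False):
    """Publish per-scan counters consumed by the CloudWatch dashboard."""
    metric_data = [
        {
            "MetricName": "ScansFailed" if scan_failed else "ScansCompleted",
            "Value": 1,
            "Unit": "Count",
        },
    ]
    for severity, count in vulnerabilities_by_severity.items():
        metric_data.append({
            "MetricName": "VulnerabilitiesDetected",
            "Dimensions": [{"Name": "Severity", "Value": severity}],
            "Value": count,
            "Unit": "Count",
        })
    cloudwatch.put_metric_data(Namespace="QSCA", MetricData=metric_data)


# Example: record_scan_metrics({"CRITICAL": 3, "HIGH": 7, "MEDIUM": 2})
```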


### CloudWatch Dashboard

```json
{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "title": "Scan Volume",
        "metrics": [
          ["QSCA", "ScansInitiated", {"stat": "Sum"}],
          ["QSCA", "ScansCompleted", {"stat": "Sum"}],
          ["QSCA", "ScansFailed", {"stat": "Sum"}]
        ]
      }
    },
    {
      "type": "metric",
      "properties": {
        "title": "Vulnerability Detection",
        "metrics": [
          ["QSCA", "VulnerabilitiesDetected", "Severity", "CRITICAL"],
          ["QSCA", "VulnerabilitiesDetected", "Severity", "HIGH"],
          ["QSCA", "VulnerabilitiesDetected", "Severity", "MEDIUM"]
        ]
      }
    },
    {
      "type": "metric",
      "properties": {
        "title": "Model Performance",
        "metrics": [
          ["QSCA", "BedrockLatency", {"stat": "p50"}],
          ["QSCA", "BedrockLatency", {"stat": "p99"}],
          ["QSCA", "KnowledgeBaseHitRate", {"stat": "Average"}]
        ]
      }
    },
    {
      "type": "metric",
      "properties": {
        "title": "Cost Tracking",
        "metrics": [
          ["AWS/Bedrock", "InputTokens", {"stat": "Sum"}],
          ["AWS/Bedrock", "OutputTokens", {"stat": "Sum"}],
          ["AWS/Lambda", "Invocations", {"stat": "Sum"}]
        ]
      }
    }
  ]
}
```


---

## Part 8: Roadmap and Future Features

### Phase 1 (MVP)
- Python and Java support
- RSA, ECDSA, DH detection
- AWS SDK remediation patterns
- GitHub integration
- PDF compliance reports

### Phase 2
- Go, JavaScript, C# support
- IDE plugins (VS Code, IntelliJ)
- Custom compliance policy definitions
- Multi-cloud remediation (Azure, GCP)

### Phase 3
- CI/CD pipeline integration
- Real-time monitoring of deployed crypto
- Automated PR creation and merging
- Compliance certification assistance

### Phase 4
- Binary analysis (compiled code)
- Network traffic crypto detection
- HSM integration recommendations
- Enterprise SSO and RBAC

---

## Appendix: Compliance Mapping

| Finding | NIST FIPS | AWS Best Practice | NSA CNSA 2.0 |
|---------|-----------|-------------------|--------------|
| RSA-2048 key generation | Replace with ML-KEM | Use KMS with PQC key spec | Required by 2030 |
| ECDSA P-256 signing | Replace with ML-DSA | Use KMS for signing | Required by 2033 |
| ECDH key exchange | Replace with ML-KEM hybrid | Use Encryption SDK v4 | Required by 2030 |
| SHA-256 (standalone) | Acceptable, prefer SHA-384 | Use SHA-384/512 | Acceptable |
| AES-256-GCM | Acceptable | Continue using | Acceptable |