
SessionTrace — AI Code Attribution & Audit Trail

Turn every AI-assisted commit into an auditable paper trail

SessionTrace captures the exact moment AI writes your code—tracking which lines came from GitHub Copilot vs. Claude, embedding immutable attribution metadata into Git commits, and generating SOC2-ready compliance reports that prove human oversight. Engineering teams at mid-market companies get automated audit trails without changing their workflow, solving the 'who wrote this code?' problem that compliance officers are asking right now.

Key Benefits:

- Automatic AI attribution metadata embedded in every commit—no manual tagging, just install the VS Code/JetBrains plugin and Git hooks capture prompt-to-code lineage

- One-click compliance reports showing AI contribution percentage, human review rates, and model provenance for auditors who ask 'how much of your codebase is AI-generated?'

- Immutable audit trail linking code diffs to specific AI sessions—prove which engineer reviewed which Copilot suggestion, with timestamps and acceptance rates for liability protection
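To make the commit-enrichment idea concrete, a `commit-msg` Git hook could append attribution data as standard Git trailers, which `git interpret-trailers` and most tooling can parse. This is a minimal sketch; the field names (`AI-Session-Id`, `AI-Model`, and so on) and the `AiSession` shape are illustrative assumptions, not SessionTrace's actual schema.

```typescript
// Hypothetical sketch of the trailer-injection step a commit-msg Git hook
// could perform. Field names are illustrative, not SessionTrace's schema.

interface AiSession {
  sessionId: string;        // opaque ID linking back to the captured IDE session
  model: string;            // e.g. "github-copilot" or "claude"
  acceptedSuggestions: number;
  totalSuggestions: number;
  reviewedBy: string;       // engineer who reviewed the AI output
}

// Git trailers are "Key: value" lines in the final block of the commit message.
function appendAttributionTrailers(message: string, session: AiSession): string {
  const rate = session.totalSuggestions > 0
    ? (session.acceptedSuggestions / session.totalSuggestions).toFixed(2)
    : "0.00";
  const trailers = [
    `AI-Session-Id: ${session.sessionId}`,
    `AI-Model: ${session.model}`,
    `AI-Acceptance-Rate: ${rate}`,
    `AI-Reviewed-By: ${session.reviewedBy}`,
  ];
  // A blank line separates the commit body from the trailer block.
  return message.trimEnd() + "\n\n" + trailers.join("\n") + "\n";
}

const enriched = appendAttributionTrailers("Fix token refresh race", {
  sessionId: "sess-042",
  model: "github-copilot",
  acceptedSuggestions: 7,
  totalSuggestions: 10,
  reviewedBy: "alice",
});
console.log(enriched);
```

Because trailers live inside the commit object itself, the attribution travels with the history and cannot be edited without rewriting the commit, which is what makes the trail auditable.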

MVP Scope:

- Phase 1: VS Code plugin capturing Copilot sessions + Git hook injecting metadata
- Phase 2: Basic dashboard showing AI % per repo + simple SOC2 report template
- Phase 3: Classifier for legacy code + policy rules

Target: 2-3 pilot customers (mid-market SaaS/fintech) by month 3.

Tech Stack: Node.js/TypeScript (plugin SDK), VS Code Extension API + JetBrains Plugin SDK, Git hooks (pre-commit, post-commit), PostgreSQL (audit log storage), FastAPI/Python (ML classifier for code analysis), React (dashboard), GitHub/GitLab/Bitbucket APIs, Copilot, Claude, OpenAI API integrations, Docker (compliance report generation), Temporal (workflow orchestration for async report jobs)

Components:

- Session Capture Engine: IDE/editor plugin (VS Code, JetBrains) that hooks into AI assistant APIs (GitHub Copilot, Claude, ChatGPT) and captures metadata: prompt, model used, timestamp, user, acceptance/rejection, and code diff.

- Commit Metadata Injector: Git hook that enriches commits with AI attribution data, embedding the session ID, AI model fingerprint, confidence score, and human review status into the commit message or custom headers for an immutable audit trail.

- Compliance Report Generator: Dashboard + API that aggregates sessions into SOC2/HIPAA/PCI-DSS reports: AI code percentage per developer, model audit trail, human review coverage, and risk scoring for regulated code blocks.

- AI-Human Code Classifier: ML model (fine-tuned on code patterns) that retroactively identifies likely AI-generated code in legacy repos when session data is unavailable; provides confidence scores for audit purposes.

- Policy Engine & Alerts: Rules system that flags commits exceeding an AI code threshold, requires human review for sensitive files (auth, payments), enforces model whitelisting, and triggers alerts for compliance violations.
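The Compliance Report Generator's core aggregation can be sketched as a pure function over per-commit session records: group by developer, then compute AI-contribution share and human-review coverage. The `CommitRecord` shape below is an assumption made for illustration, not the product's real data model.

```typescript
// Toy aggregation in the spirit of the Compliance Report Generator:
// per-developer AI-contribution and review-coverage percentages.
// The record shape is assumed for illustration.

interface CommitRecord {
  author: string;
  aiLines: number;        // lines attributed to an AI session
  totalLines: number;     // total lines changed in the commit
  humanReviewed: boolean;
}

interface DeveloperReport {
  aiPercent: number;      // share of changed lines attributed to AI
  reviewCoverage: number; // share of commits with a recorded human review
}

function buildReport(records: CommitRecord[]): Map<string, DeveloperReport> {
  const byAuthor = new Map<string, CommitRecord[]>();
  for (const r of records) {
    const list = byAuthor.get(r.author) ?? [];
    list.push(r);
    byAuthor.set(r.author, list);
  }
  const report = new Map<string, DeveloperReport>();
  for (const [author, commits] of byAuthor) {
    const ai = commits.reduce((s, c) => s + c.aiLines, 0);
    const total = commits.reduce((s, c) => s + c.totalLines, 0);
    const reviewed = commits.filter((c) => c.humanReviewed).length;
    report.set(author, {
      aiPercent: total > 0 ? Math.round((100 * ai) / total) : 0,
      reviewCoverage: Math.round((100 * reviewed) / commits.length),
    });
  }
  return report;
}
```

Keeping the aggregation pure over immutable audit records means the same report can be regenerated for any time window an auditor asks about.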
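The Policy Engine's rule evaluation can likewise be sketched as a function from a commit summary plus a policy to a list of violations: threshold check, model allow-list, and sensitive-path review requirement. The rule and commit shapes here are assumptions for illustration only.

```typescript
// Minimal sketch of the Policy Engine rules described above. A commit is
// flagged when it exceeds the AI-code threshold, uses a model outside the
// allow-list, or touches sensitive paths without a human review.
// All shapes and rule names are illustrative assumptions.

interface PolicyCommit {
  aiPercent: number;
  model: string;
  humanReviewed: boolean;
  files: string[];
}

interface Policy {
  maxAiPercent: number;
  allowedModels: string[];
  sensitivePathPrefixes: string[]; // e.g. "auth/" or "payments/"
}

function evaluate(commit: PolicyCommit, policy: Policy): string[] {
  const violations: string[] = [];
  if (commit.aiPercent > policy.maxAiPercent) {
    violations.push(`ai-threshold-exceeded (${commit.aiPercent}% > ${policy.maxAiPercent}%)`);
  }
  if (!policy.allowedModels.includes(commit.model)) {
    violations.push(`model-not-whitelisted (${commit.model})`);
  }
  const touchesSensitive = commit.files.some((f) =>
    policy.sensitivePathPrefixes.some((p) => f.startsWith(p)));
  if (touchesSensitive && !commit.humanReviewed) {
    violations.push("sensitive-file-missing-review");
  }
  return violations; // an empty array means the commit passes policy
}
```

Returning named violations rather than a boolean lets the alerting layer decide per rule whether to warn, block the merge, or open a compliance ticket.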


Quality assessment: Strong technical concept with clear market fit (SOC2 compliance + AI attribution) and concrete implementation details (IDE plugins, Git hooks, metadata injection), but lacks originality—similar audit/provenance solutions exist—and the artifact is incomplete (pitch cuts off mid-sentence, no discussion of technical challenges like API rate limits or cross-platform consistency).
