A comprehensive security guide for NPM, Git, and AI-assisted development tools such as Claude Code, GitHub Copilot, Gemini, and the OpenAI CLI
## Table of Contents
- Executive Summary
- Part 1: The New Threat Landscape
- Part 2: Claude Code Security Hardening
- Part 3: GitHub Copilot & Git Security
- Part 4: Gemini & OpenAI CLI Security
- Part 5: An Integrated Security Pipeline for AI Development
- Part 6: Best Practices for AI-Assisted Development
- Conclusion and Recommendations
- Key Takeaways
- Immediate Actions
## Executive Summary
The “Shai-Hulud” worm has exposed a critical weakness: AI-assisted development tools such as Claude Code, GitHub Copilot, Gemini Code Assist, and the OpenAI CLI can act as amplifiers for supply-chain attacks. These tools often have far-reaching access to codebases, package managers, and cloud resources – a perfect attack surface for modern malware.
In this extended guide, I show you how to comprehensively secure not only your NPM infrastructure but also your AI development environment.
## Part 1: The New Threat Landscape

### 1.1 AI CLI Tools as Attack Vectors

```mermaid
graph TD
A[Kompromittiertes NPM Package] --> B[Installation in Projekt]
B --> C[AI CLI Tool analysiert Code]
C --> D[AI führt infizierten Code aus]
D --> E[Laterale Bewegung]
E --> F[Token/Secret Exfiltration]
E --> G[Code Injection in andere Projekte]
E --> H[Cloud Resource Compromise]
```

### 1.2 Affected AI Development Tools
| Tool | Risk Level | Access to | Critical Permissions |
|---|---|---|---|
| Claude Code | High | Filesystem, Bash, Git | Code execution, file R/W |
| GitHub Copilot CLI | High | Git, GitHub API | Repository access |
| Gemini Code Assist | High | GCP resources, code | Cloud APIs, Workspace |
| OpenAI CLI | Medium | API keys, filesystem | Token access |
| Cursor | High | Full IDE access | Code execution |
| Continue.dev | High | VS Code workspace | Extension APIs |
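A quick way to ground this table in your own environment is to check which of these CLIs are actually installed and whether their configuration directories are readable by other users. The sketch below is illustrative only: the binary names and config paths are assumed defaults and may differ per installation.

```bash
#!/bin/bash
# ai-tool-inventory.sh - list installed AI CLI tools and flag lax config permissions
# NOTE: the config paths below are assumed defaults; adjust them to your setup.

declare -A TOOLS=(
  [claude]="$HOME/.claude"
  [gh]="$HOME/.config/github-copilot"
  [gemini]="$HOME/.gemini"
  [openai]="$HOME/.openai"
  [cursor]="$HOME/.cursor"
)

for bin in "${!TOOLS[@]}"; do
  command -v "$bin" >/dev/null 2>&1 || continue
  echo "✔ $bin found at $(command -v "$bin")"
  cfg="${TOOLS[$bin]}"
  [ -d "$cfg" ] || continue
  perms=$(stat -c '%a' "$cfg" 2>/dev/null || stat -f '%Lp' "$cfg")
  echo "  config dir: $cfg (mode $perms)"
  # Anything other than 700 means group/other users can reach tokens and logs
  if [ "$perms" != "700" ]; then
    echo "  ⚠️ consider: chmod -R go-rwx \"$cfg\""
  fi
done
```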
## Part 2: Claude Code Security Hardening

### 2.1 Hardening the Claude Code Configuration

```yaml
# claude-config.yaml - hardened Claude Code configuration
security:
  # Restrict filesystem access
  filesystem:
    allowed_paths:
      - ./src
      - ./tests
    blocked_paths:
      - ~/.ssh
      - ~/.aws
      - ~/.npmrc
      - .env
      - .git/config
  # Limit command execution
  execution:
    allowed_commands:
      - npm test
      - npm run build
    blocked_commands:
      - curl
      - wget
      - nc
      - eval
      - exec
  # Network restrictions
  network:
    block_external: true
    allowed_domains:
      - registry.npmjs.org
      - github.com
      - localhost
  # API token protection
  secrets:
    mask_in_output: true
    prevent_logging: true
    rotate_on_suspicious_activity: true
```

### 2.2 Claude Code Sandbox Environment

```dockerfile
# Dockerfile for a sandboxed Claude Code environment
FROM ubuntu:22.04
# Security Updates
RUN apt-get update && apt-get upgrade -y
# Install Node.js and Python (for Claude Code)
RUN apt-get install -y \
nodejs \
npm \
python3 \
python3-pip \
git
# Security Tools
RUN pip3 install guarddog bandit safety
RUN npm install -g snyk @socketsecurity/cli
# Create isolated user
RUN useradd -m -s /bin/bash claude-user
RUN mkdir /workspace && chown claude-user:claude-user /workspace
# AppArmor profile for Claude Code
# (AppArmor is enforced by the host kernel: load claude-code.apparmor on the host
#  with apparmor_parser and reference it at `docker run` time via --security-opt)
COPY claude-code.apparmor /etc/apparmor.d/
# Seccomp Filter
COPY claude-code.seccomp /etc/seccomp/
RUN chmod 644 /etc/seccomp/claude-code.seccomp
USER claude-user
WORKDIR /workspace
# Security-related environment variables
ENV CLAUDE_SAFE_MODE=true
ENV CLAUDE_NO_EXEC=true
ENV CLAUDE_LOG_COMMANDS=true
CMD ["/bin/bash"]
```
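A possible way to build and start this sandbox is shown below. Note that seccomp and AppArmor are enforced by the host: the seccomp profile must exist as a file on the host, and the AppArmor profile must already be loaded on the host before it can be referenced. The image name `claude-sandbox` and the profile names follow the files above.

```bash
# Build the sandbox image (Dockerfile.claude is the Dockerfile shown above)
docker build -t claude-sandbox -f Dockerfile.claude .

# Run the sandbox: no privilege escalation, all capabilities dropped,
# read-only root filesystem, only the project directory mounted writable.
# The seccomp profile is read from a host-side copy of claude-code.seccomp,
# and the claude-code AppArmor profile must already be loaded on the host.
docker run --rm -it \
  --security-opt no-new-privileges \
  --security-opt seccomp=./claude-code.seccomp \
  --security-opt apparmor=claude-code \
  --cap-drop ALL \
  --read-only --tmpfs /tmp \
  -v "$PWD:/workspace" \
  claude-sandbox
```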
### 2.3 Claude Code Pre-Execution Hook

```python
#!/usr/bin/env python3
# claude-pre-exec-hook.py - validates commands before execution
import sys
import re
import json
import subprocess
from pathlib import Path
class ClaudeSecurityValidator:
def __init__(self):
self.dangerous_patterns = [
            r'rm\s+-rf\s+/',               # destructive deletions
r'curl.*\|.*sh', # Curl pipe to shell
r'wget.*\|.*bash', # Wget pipe to bash
r'eval\(', # Eval execution
r'exec\(', # Direct execution
r'\.npmrc', # NPM config access
r'process\.env', # Environment variable access
r'require\(["\']child_process', # Child process execution
r'__import__', # Python dynamic import
r'os\.system', # OS command execution
r'subprocess', # Subprocess calls
]
self.allowed_commands = {
'npm': ['install', 'test', 'run', 'audit'],
'git': ['status', 'diff', 'log', 'add', 'commit'],
'python': ['-m', 'pytest', '-m', 'unittest'],
'node': ['--version', 'index.js', 'test.js'],
}
def validate_command(self, command):
"""Validiert einen Command vor Ausführung"""
# Check gegen gefährliche Patterns
for pattern in self.dangerous_patterns:
if re.search(pattern, command, re.IGNORECASE):
return False, f"Dangerous pattern detected: {pattern}"
# Parse command
parts = command.split()
if not parts:
return False, "Empty command"
cmd = parts[0]
args = parts[1:] if len(parts) > 1 else []
# Check allowed commands
if cmd not in self.allowed_commands:
return False, f"Command not in allowlist: {cmd}"
        # Check that the (sub)command argument is on the allowlist
        allowed_args = self.allowed_commands[cmd]
        if allowed_args and (not args or args[0] not in allowed_args):
return False, f"Arguments not allowed for {cmd}: {args}"
return True, "Command validated"
def scan_code_for_vulnerabilities(self, code_path):
"""Scannt Code nach Vulnerabilities vor AI-Analyse"""
results = {
'guarddog': None,
'bandit': None,
'safety': None
}
# GuardDog scan
try:
result = subprocess.run(
                ['guarddog', 'npm', 'scan', code_path],
capture_output=True,
text=True
)
results['guarddog'] = json.loads(result.stdout) if result.returncode == 0 else None
except Exception as e:
print(f"GuardDog scan failed: {e}")
        # Bandit scan for Python files
if code_path.endswith('.py'):
try:
result = subprocess.run(
['bandit', '-f', 'json', code_path],
capture_output=True,
text=True
)
results['bandit'] = json.loads(result.stdout) if result.stdout else None
except Exception as e:
print(f"Bandit scan failed: {e}")
return results
def validate_file_access(self, filepath):
"""Validiert Dateizugriff"""
blocked_paths = [
'.ssh/',
'.aws/',
'.git/config',
'.npmrc',
'.env',
'node_modules/.bin/'
]
filepath_str = str(filepath)
for blocked in blocked_paths:
if blocked in filepath_str:
return False, f"Access to {blocked} is blocked"
return True, "File access allowed"
def main():
if len(sys.argv) < 2:
print("Usage: claude-pre-exec-hook.py <command>")
sys.exit(1)
command = ' '.join(sys.argv[1:])
validator = ClaudeSecurityValidator()
# Validate command
is_valid, message = validator.validate_command(command)
if not is_valid:
print(f"❌ Command blocked: {message}")
print(f"Command: {command}")
sys.exit(1)
print(f"✅ Command validated: {command}")
sys.exit(0)
if __name__ == "__main__":
main()
```

## Part 3: GitHub Copilot & Git Security

### 3.1 A Git Hook for Supply-Chain Protection

```bash
#!/bin/bash
# .git/hooks/pre-commit - extended for AI CLI protection
echo "🔍 Running AI CLI security scan..."
# Check for AI tool configurations
AI_CONFIGS=(
".claude-code"
".copilot"
".gemini"
".openai"
".cursor"
".continue"
)
for config in "${AI_CONFIGS[@]}"; do
if [ -f "$config" ]; then
echo "⚠️ AI Config detected: $config"
# Check for exposed tokens
if grep -qE "(api_key|token|secret)" "$config"; then
echo "❌ Exposed credentials in $config!"
exit 1
fi
fi
done
# Scan for suspicious NPM packages
if [ -f "package.json" ]; then
echo "📦 Scanning NPM dependencies..."
# Check for known malicious packages
MALICIOUS_PACKAGES=(
"node-ipc" # Bekannte Kompromittierung
"colors" # Protest-Malware
"faker" # Namespace-Hijacking
"ua-parser-js" # Cryptominer
"coa" # Supply-chain attack
"rc" # Prototype pollution
)
for pkg in "${MALICIOUS_PACKAGES[@]}"; do
if grep -q "\"$pkg\"" package.json; then
echo "⚠️ Potentially compromised package detected: $pkg"
echo "Please review: https://security.snyk.io/vuln/search?q=$pkg"
# GuardDog deep scan
if command -v guarddog &> /dev/null; then
guarddog npm scan "$pkg" --json > "/tmp/scan_$pkg.json"
if grep -q '"issues"' "/tmp/scan_$pkg.json"; then
echo "❌ GuardDog detected issues in $pkg"
cat "/tmp/scan_$pkg.json" | jq '.issues'
exit 1
fi
fi
fi
done
fi
# Check for Git credential leaks
# Use process substitution so that `exit 1` aborts the whole hook
# (a pipeline would run the loop in a subshell and the exit code would be lost)
while read -r file; do
if [ -f "$file" ]; then
# Check for hardcoded credentials
if grep -qE "(OPENAI_API_KEY|ANTHROPIC_API_KEY|GEMINI_API_KEY|GITHUB_TOKEN)" "$file"; then
echo "❌ Hardcoded API key detected in $file"
exit 1
fi
# Check for private keys
if grep -q "BEGIN RSA PRIVATE KEY" "$file"; then
echo "❌ Private key detected in $file"
exit 1
fi
fi
done < <(git diff --cached --name-only)
# Verify git configuration
SAFE_GIT_CONFIG=(
"core.hooksPath=.git/hooks"
"url.ssh://git@github.com/.insteadOf=https://github.com/"
"pull.ff=only"
"init.defaultBranch=main"
)
for config in "${SAFE_GIT_CONFIG[@]}"; do
key="${config%%=*}"
value="${config#*=}"
current=$(git config --get "$key")
if [ "$current" != "$value" ]; then
echo "⚠️ Git config $key should be: $value"
git config "$key" "$value"
fi
done
echo "✅ Pre-commit security scan passed"
### 3.2 GitHub Copilot Workspace Isolation

```javascript
// copilot-security-monitor.js
const fs = require('fs');
const path = require('path');
const crypto = require('crypto');
class CopilotSecurityMonitor {
constructor() {
this.workspacePath = process.env.GITHUB_WORKSPACE || process.cwd();
this.suspiciousActivities = [];
this.allowedExtensions = ['.js', '.ts', '.py', '.java', '.go', '.rs'];
}
// Monitor Copilot suggestions for malicious patterns
monitorSuggestions() {
const copilotLog = path.join(
process.env.HOME,
'.config/github-copilot/logs/copilot.log'
);
if (fs.existsSync(copilotLog)) {
const tail = require('child_process').spawn('tail', ['-f', copilotLog]);
tail.stdout.on('data', (data) => {
const content = data.toString();
this.scanForMaliciousPatterns(content);
});
}
}
scanForMaliciousPatterns(content) {
const maliciousPatterns = [
// NPM publish attempts
/npm\s+publish.*--access\s+public/,
// Credential theft
/process\.env\.(GITHUB_TOKEN|NPM_TOKEN|AWS_SECRET)/,
// Backdoor patterns
/require\(['"]child_process['"]\)\.exec/,
// Data exfiltration
/fetch\(.*POST.*credentials/,
// Crypto mining
/cryptonight|monero|xmr-stak/i,
];
for (const pattern of maliciousPatterns) {
if (pattern.test(content)) {
this.suspiciousActivities.push({
timestamp: new Date().toISOString(),
pattern: pattern.toString(),
content: content.substring(0, 200)
});
this.alertSecurity(pattern, content);
}
}
}
alertSecurity(pattern, content) {
console.error(`🚨 SECURITY ALERT: Suspicious Copilot suggestion detected`);
console.error(`Pattern: ${pattern}`);
console.error(`Content preview: ${content.substring(0, 100)}...`);
// Block the suggestion
this.blockSuggestion();
// Log to security file
const logEntry = {
timestamp: new Date().toISOString(),
alert: 'Suspicious Copilot Suggestion',
pattern: pattern.toString(),
action: 'blocked'
};
fs.appendFileSync(
'copilot-security.log',
JSON.stringify(logEntry) + '\n'
);
}
blockSuggestion() {
// Send signal to VS Code/IDE to reject suggestion
process.send?.({
type: 'copilot:reject-suggestion',
reason: 'security-violation'
});
}
// Sandbox Copilot file access
sandboxFileAccess() {
const originalReadFile = fs.readFile;
const originalReadFileSync = fs.readFileSync;
const blockedPaths = [
'.git/config',
'.env',
'.npmrc',
'id_rsa',
'credentials',
'.aws/',
'.ssh/'
];
// Override readFile
fs.readFile = function(filepath, ...args) {
const normalizedPath = path.normalize(filepath.toString());
for (const blocked of blockedPaths) {
if (normalizedPath.includes(blocked)) {
console.warn(`⚠️ Blocked Copilot access to: ${filepath}`);
const callback = args[args.length - 1];
if (typeof callback === 'function') {
callback(new Error('Access denied'));
}
return;
}
}
return originalReadFile.call(fs, filepath, ...args);
};
// Override readFileSync
fs.readFileSync = function(filepath, ...args) {
const normalizedPath = path.normalize(filepath.toString());
for (const blocked of blockedPaths) {
if (normalizedPath.includes(blocked)) {
console.warn(`⚠️ Blocked Copilot sync access to: ${filepath}`);
throw new Error('Access denied');
}
}
return originalReadFileSync.call(fs, filepath, ...args);
};
}
}
// Initialize monitor
const monitor = new CopilotSecurityMonitor();
monitor.sandboxFileAccess();
monitor.monitorSuggestions();
module.exports = CopilotSecurityMonitor;
```

## Part 4: Gemini & OpenAI CLI Security

### 4.1 Gemini Code Assist Security Configuration

```python
#!/usr/bin/env python3
# gemini-security-wrapper.py
import os
import sys
import json
import subprocess
import hashlib
from datetime import datetime
from pathlib import Path
class GeminiSecurityWrapper:
def __init__(self):
self.config_path = Path.home() / '.gemini' / 'security.json'
self.load_config()
self.audit_log = []
def load_config(self):
"""Lädt sichere Konfiguration für Gemini"""
default_config = {
"allowed_operations": [
"code_review",
"explain_code",
"generate_tests"
],
"blocked_operations": [
"execute_code",
"modify_credentials",
"access_secrets"
],
"sandbox_mode": True,
"max_context_size": 100000,
"allowed_file_extensions": [
".py", ".js", ".ts", ".java", ".go"
],
"blocked_directories": [
".git", ".ssh", ".aws", "node_modules"
]
}
if self.config_path.exists():
with open(self.config_path, 'r') as f:
self.config = json.load(f)
else:
self.config = default_config
self.save_config()
def save_config(self):
"""Speichert Konfiguration"""
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with open(self.config_path, 'w') as f:
json.dump(self.config, f, indent=2)
    def validate_gemini_request(self, request):
        """Validate a Gemini API request (expects a dict)."""
        request_text = json.dumps(request).lower()
        # Check for blocked operations
        for blocked in self.config['blocked_operations']:
            if blocked in request_text:
                self.log_security_event("blocked_operation", blocked)
                return False, f"Blocked operation: {blocked}"
        # Check file access
        if 'file_path' in request:
            file_path = Path(request['file_path'])
# Check blocked directories
for blocked_dir in self.config['blocked_directories']:
if blocked_dir in str(file_path):
self.log_security_event("blocked_file_access", str(file_path))
return False, f"Access to {blocked_dir} is blocked"
# Check file extension
if file_path.suffix not in self.config['allowed_file_extensions']:
self.log_security_event("invalid_file_type", file_path.suffix)
return False, f"File type {file_path.suffix} not allowed"
# Check context size
if 'context' in request:
context_size = len(str(request['context']))
if context_size > self.config['max_context_size']:
self.log_security_event("context_too_large", context_size)
return False, f"Context too large: {context_size} bytes"
return True, "Request validated"
def scan_for_secrets(self, content):
"""Scannt Content nach Secrets"""
secret_patterns = [
r'AIza[0-9A-Za-z-_]{35}', # Google API Key
r'ya29\.[0-9A-Za-z\-_]+', # Google OAuth
r'sk-[A-Za-z0-9]{48}', # OpenAI
r'ghp_[A-Za-z0-9]{36}', # GitHub
r'ghs_[A-Za-z0-9]{36}', # GitHub
r'AKIA[0-9A-Z]{16}', # AWS
]
import re
for pattern in secret_patterns:
if re.search(pattern, content):
self.log_security_event("secret_detected", pattern)
return True
return False
def sandbox_execute(self, command):
"""Führt Command in Sandbox aus"""
# Create temporary sandbox directory
sandbox_dir = Path('/tmp') / f'gemini_sandbox_{os.getpid()}'
sandbox_dir.mkdir(exist_ok=True)
# Copy necessary files (read-only)
# ... implementation ...
# Execute in Docker container
docker_cmd = [
'docker', 'run',
'--rm',
'--read-only',
'--network=none',
'--memory=512m',
'--cpus=0.5',
'-v', f'{sandbox_dir}:/workspace:ro',
'gemini-sandbox',
command
]
try:
result = subprocess.run(
docker_cmd,
capture_output=True,
text=True,
timeout=30
)
return result.stdout
except subprocess.TimeoutExpired:
self.log_security_event("execution_timeout", command)
return "Execution timeout"
finally:
# Cleanup sandbox
import shutil
shutil.rmtree(sandbox_dir, ignore_errors=True)
def log_security_event(self, event_type, details):
"""Loggt Security Events"""
event = {
"timestamp": datetime.now().isoformat(),
"type": event_type,
"details": details,
"pid": os.getpid(),
"user": os.environ.get('USER', 'unknown')
}
self.audit_log.append(event)
# Write to file
log_file = Path.home() / '.gemini' / 'security.log'
with open(log_file, 'a') as f:
f.write(json.dumps(event) + '\n')
# Alert if critical
if event_type in ['secret_detected', 'blocked_operation']:
print(f"🚨 SECURITY ALERT: {event_type} - {details}")
# CLI Wrapper
def main():
wrapper = GeminiSecurityWrapper()
if len(sys.argv) < 2:
print("Usage: gemini-secure <command> [args]")
sys.exit(1)
command = sys.argv[1]
args = sys.argv[2:]
# Validate command
request = {"command": command, "args": args}
    is_valid, message = wrapper.validate_gemini_request(request)
if not is_valid:
print(f"❌ Request blocked: {message}")
sys.exit(1)
# Execute safely
result = wrapper.sandbox_execute(' '.join([command] + args))
print(result)
if __name__ == "__main__":
main()
```

### 4.2 OpenAI CLI Security Wrapper

```bash
#!/bin/bash
# openai-cli-secure.sh - Secure wrapper for OpenAI CLI
# Security Configuration
OPENAI_SAFE_MODE=${OPENAI_SAFE_MODE:-true}
OPENAI_LOG_DIR="$HOME/.openai/logs"
OPENAI_QUARANTINE_DIR="$HOME/.openai/quarantine"
# Create directories
mkdir -p "$OPENAI_LOG_DIR" "$OPENAI_QUARANTINE_DIR"
# Function to log commands
log_command() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $*" >> "$OPENAI_LOG_DIR/commands.log"
}
# Function to scan for malicious patterns
scan_content() {
local content="$1"
# Check for suspicious patterns
if echo "$content" | grep -qE "(npm publish|eval\(|child_process|\.npmrc|id_rsa)"; then
echo "⚠️ Suspicious pattern detected in OpenAI request"
echo "$content" > "$OPENAI_QUARANTINE_DIR/suspicious_$(date +%s).txt"
return 1
fi
return 0
}
# Function to validate API key
validate_api_key() {
if [ -z "$OPENAI_API_KEY" ]; then
echo "❌ OPENAI_API_KEY not set"
exit 1
fi
# Check if key is in secure storage
if [ -f "$HOME/.openai/secure_key" ]; then
OPENAI_API_KEY=$(cat "$HOME/.openai/secure_key")
export OPENAI_API_KEY
fi
# Validate key format
if ! echo "$OPENAI_API_KEY" | grep -qE "^sk-[A-Za-z0-9]{48}$"; then
echo "❌ Invalid OpenAI API key format"
exit 1
fi
}
# Function to sanitize output (reads stdin so it can be used in a pipeline)
sanitize_output() {
    # Remove potential secrets
    sed -E -e 's/sk-[A-Za-z0-9]{48}/sk-REDACTED/g' \
           -e 's/ghp_[A-Za-z0-9]{36}/ghp_REDACTED/g' \
           -e 's/npm_[A-Za-z0-9]{36}/npm_REDACTED/g'
}
# Main execution
main() {
local command="$1"
shift
local args="$*"
# Log the command
log_command "openai $command $args"
# Validate API key
validate_api_key
# Check command
case "$command" in
"api")
# API calls need extra validation
if ! scan_content "$args"; then
echo "❌ Request blocked for security reasons"
exit 1
fi
;;
"tools")
# Tools commands are restricted
if [ "$OPENAI_SAFE_MODE" = "true" ]; then
echo "❌ Tools commands are disabled in safe mode"
exit 1
fi
;;
"files")
# File operations need path validation
if echo "$args" | grep -qE "(\.ssh|\.aws|\.git/config)"; then
echo "❌ Access to sensitive files blocked"
exit 1
fi
;;
esac
    # Execute with timeout; sanitize the output before it reaches the terminal
    timeout 30 openai "$command" $args | sanitize_output
    if [ "${PIPESTATUS[0]}" -eq 124 ]; then
echo "⚠️ Command timed out after 30 seconds"
exit 1
fi
}
# Run main function
main "$@"
## Part 5: An Integrated Security Pipeline for AI Development

### 5.1 Docker Compose for a Secure AI Development Environment

```yaml
# docker-compose.secure-ai-dev.yml
version: '3.8'
services:
  # NPM registry with security scanning
npm-registry:
image: verdaccio/verdaccio:5
container_name: secure-npm-registry
environment:
VERDACCIO_PUBLIC_URL: http://localhost:4873/
volumes:
- ./config/verdaccio:/verdaccio/conf
- npm-storage:/verdaccio/storage
ports:
- "4873:4873"
networks:
- ai-dev-network
healthcheck:
test: ["CMD", "wget", "--spider", "http://localhost:4873/-/ping"]
interval: 30s
timeout: 10s
retries: 3
# Security Scanner Service
security-scanner:
build:
context: .
dockerfile: Dockerfile.scanner
container_name: security-scanner
volumes:
- ./workspace:/workspace:ro
- scanner-logs:/var/log/scanner
environment:
SCAN_INTERVAL: 300
ALERT_WEBHOOK: ${SLACK_WEBHOOK}
networks:
- ai-dev-network
depends_on:
- npm-registry
# Claude Code Sandbox
claude-sandbox:
build:
context: .
dockerfile: Dockerfile.claude
container_name: claude-sandbox
volumes:
- ./workspace:/workspace
- claude-logs:/var/log/claude
environment:
CLAUDE_SAFE_MODE: "true"
ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
networks:
- ai-dev-network
security_opt:
- no-new-privileges:true
- apparmor:docker-default
cap_drop:
- ALL
cap_add:
- DAC_OVERRIDE
read_only: true
tmpfs:
- /tmp
  # Git server (Gitea) for local repositories
git-server:
image: gitea/gitea:latest
container_name: secure-git
environment:
- USER_UID=1000
- USER_GID=1000
- GITEA__security__INSTALL_LOCK=true
- GITEA__webhook__ALLOWED_HOST_LIST=*.local
volumes:
- gitea-data:/data
- /etc/timezone:/etc/timezone:ro
ports:
- "3000:3000"
- "222:22"
networks:
- ai-dev-network
# Monitoring Dashboard
monitoring:
build:
context: .
dockerfile: Dockerfile.monitoring
container_name: ai-security-monitor
ports:
- "8080:8080"
volumes:
- scanner-logs:/var/log/scanner:ro
- claude-logs:/var/log/claude:ro
networks:
- ai-dev-network
depends_on:
- security-scanner
- claude-sandbox
networks:
ai-dev-network:
driver: bridge
ipam:
config:
- subnet: 172.28.0.0/16
volumes:
npm-storage:
scanner-logs:
claude-logs:
gitea-data:
```
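One way to bring the stack up and verify the isolation settings (container names follow the compose file above; `npm config set --location=project` requires npm 7 or newer):

```bash
# Start the hardened development stack
docker compose -f docker-compose.secure-ai-dev.yml up -d

# Confirm that the Claude sandbox really runs with dropped capabilities
# and a read-only root filesystem
docker inspect claude-sandbox \
  --format '{{.HostConfig.CapDrop}} {{.HostConfig.ReadonlyRootfs}}'

# Point this project's npm at the local Verdaccio registry
npm config set registry http://localhost:4873/ --location=project
```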
### 5.2 A Unified Security Monitor for All AI CLIs
```typescript
// unified-ai-security-monitor.ts
import * as fs from 'fs';
import * as path from 'path';
import { exec } from 'child_process';
import { promisify } from 'util';
const execAsync = promisify(exec);
interface AITool {
name: string;
configPath: string;
logPath: string;
processName: string;
apiKeyEnvVar: string;
}
interface SecurityAlert {
timestamp: Date;
tool: string;
severity: 'low' | 'medium' | 'high' | 'critical';
type: string;
details: any;
}
class UnifiedAISecurityMonitor {
private tools: AITool[] = [
{
name: 'Claude Code',
configPath: '~/.claude-code',
logPath: '~/.claude-code/logs',
processName: 'claude',
apiKeyEnvVar: 'ANTHROPIC_API_KEY'
},
{
name: 'GitHub Copilot',
configPath: '~/.config/github-copilot',
logPath: '~/.config/github-copilot/logs',
processName: 'copilot',
apiKeyEnvVar: 'GITHUB_TOKEN'
},
{
name: 'Gemini',
configPath: '~/.gemini',
logPath: '~/.gemini/logs',
processName: 'gemini',
apiKeyEnvVar: 'GEMINI_API_KEY'
},
{
name: 'OpenAI CLI',
configPath: '~/.openai',
logPath: '~/.openai/logs',
processName: 'openai',
apiKeyEnvVar: 'OPENAI_API_KEY'
},
{
name: 'Cursor',
configPath: '~/.cursor',
logPath: '~/.cursor/logs',
processName: 'cursor',
apiKeyEnvVar: 'CURSOR_API_KEY'
}
];
private alerts: SecurityAlert[] = [];
private monitoring = true;
async startMonitoring(): Promise<void> {
console.log('🛡️ Starting Unified AI Security Monitor...');
// Initial security check
await this.performSecurityAudit();
// Start continuous monitoring
this.monitorProcesses();
this.monitorFilesystem();
this.monitorNetwork();
this.monitorAPIKeys();
// Periodic security scans
setInterval(() => this.performSecurityAudit(), 3600000); // Every hour
}
private async performSecurityAudit(): Promise<void> {
console.log('🔍 Performing security audit...');
for (const tool of this.tools) {
await this.auditTool(tool);
}
await this.scanForVulnerabilities();
await this.checkDependencies();
await this.validateConfigurations();
}
private async auditTool(tool: AITool): Promise<void> {
// Check if tool is installed
try {
await execAsync(`which ${tool.processName}`);
} catch {
return; // Tool not installed
}
// Check configuration security
const configPath = this.expandPath(tool.configPath);
if (fs.existsSync(configPath)) {
const stats = fs.statSync(configPath);
// Check file permissions
if ((stats.mode & 0o077) !== 0) {
this.addAlert({
timestamp: new Date(),
tool: tool.name,
severity: 'high',
type: 'insecure_permissions',
details: {
path: configPath,
current: (stats.mode & 0o777).toString(8),
recommended: '600'
}
});
}
// Check for exposed secrets
const content = fs.readFileSync(configPath, 'utf-8');
if (this.containsSecrets(content)) {
this.addAlert({
timestamp: new Date(),
tool: tool.name,
severity: 'critical',
type: 'exposed_secrets',
details: { path: configPath }
});
}
}
// Check API key security
const apiKey = process.env[tool.apiKeyEnvVar];
if (apiKey) {
// Check if key is in command history
const historyFiles = [
'~/.bash_history',
'~/.zsh_history',
'~/.fish_history'
];
for (const histFile of historyFiles) {
const expandedPath = this.expandPath(histFile);
if (fs.existsSync(expandedPath)) {
const history = fs.readFileSync(expandedPath, 'utf-8');
if (history.includes(apiKey)) {
this.addAlert({
timestamp: new Date(),
tool: tool.name,
severity: 'critical',
type: 'api_key_in_history',
details: { file: histFile }
});
}
}
}
}
}
private monitorProcesses(): void {
setInterval(async () => {
for (const tool of this.tools) {
try {
const { stdout } = await execAsync(`pgrep -f ${tool.processName}`);
const pids = stdout.trim().split('\n').filter(Boolean);
for (const pid of pids) {
// Check process behavior
await this.analyzeProcess(tool, pid);
}
} catch {
// Process not running
}
}
}, 5000); // Every 5 seconds
}
private async analyzeProcess(tool: AITool, pid: string): Promise<void> {
// Check open files
try {
const { stdout } = await execAsync(`lsof -p ${pid}`);
// Check for suspicious file access
const suspiciousFiles = [
'.ssh/id_rsa',
'.aws/credentials',
'.npmrc',
'.git/config'
];
for (const file of suspiciousFiles) {
if (stdout.includes(file)) {
this.addAlert({
timestamp: new Date(),
tool: tool.name,
severity: 'high',
type: 'suspicious_file_access',
details: { pid, file }
});
}
}
} catch {
// Process might have ended
}
// Check network connections
try {
const { stdout } = await execAsync(`netstat -np 2>/dev/null | grep ${pid}`);
// Check for suspicious connections
const suspiciousPatterns = [
/\d+\.\d+\.\d+\.\d+:(?:3333|4444|5555|6666|7777|8888|9999)/, // Common backdoor ports
/tor\./, // Tor network
/\.ru:|\.cn:|\.kp:/, // Suspicious TLDs
];
for (const pattern of suspiciousPatterns) {
if (pattern.test(stdout)) {
this.addAlert({
timestamp: new Date(),
tool: tool.name,
severity: 'critical',
type: 'suspicious_network_connection',
details: { pid, connection: stdout }
});
}
}
} catch {
// Might not have permissions
}
}
private monitorFilesystem(): void {
const watchPaths = [
'~/.ssh',
'~/.aws',
'~/.npmrc',
'.env'
];
for (const watchPath of watchPaths) {
const expandedPath = this.expandPath(watchPath);
if (fs.existsSync(expandedPath)) {
fs.watch(expandedPath, (eventType, filename) => {
this.addAlert({
timestamp: new Date(),
tool: 'filesystem',
severity: 'medium',
type: 'sensitive_file_modified',
details: { path: watchPath, file: filename, event: eventType }
});
});
}
}
}
private monitorNetwork(): void {
// Monitor DNS queries for suspicious domains
const dnsLogPath = '/var/log/dnsmasq.log'; // Or wherever DNS logs are
if (fs.existsSync(dnsLogPath)) {
const tail = require('child_process').spawn('tail', ['-f', dnsLogPath]);
tail.stdout.on('data', (data: Buffer) => {
const content = data.toString();
// Check for suspicious domains
const suspiciousDomains = [
/malware\./,
/cryptominer\./,
/\.tk$/,
/\.ml$/,
/pastebin\.com/,
/transfer\.sh/
];
for (const domain of suspiciousDomains) {
if (domain.test(content)) {
this.addAlert({
timestamp: new Date(),
tool: 'network',
severity: 'high',
type: 'suspicious_dns_query',
details: { domain: content }
});
}
}
});
}
}
private monitorAPIKeys(): void {
// Monitor environment variables
setInterval(() => {
for (const tool of this.tools) {
const apiKey = process.env[tool.apiKeyEnvVar];
if (apiKey) {
// Check if key has been exposed in logs
const logPath = this.expandPath(tool.logPath);
if (fs.existsSync(logPath)) {
const files = fs.readdirSync(logPath);
for (const file of files) {
const content = fs.readFileSync(
path.join(logPath, file),
'utf-8'
);
if (content.includes(apiKey)) {
this.addAlert({
timestamp: new Date(),
tool: tool.name,
severity: 'critical',
type: 'api_key_in_logs',
details: { file: path.join(logPath, file) }
});
}
}
}
}
}
}, 60000); // Every minute
}
private async scanForVulnerabilities(): Promise<void> {
// NPM audit
try {
const { stdout } = await execAsync('npm audit --json');
const audit = JSON.parse(stdout);
if (audit.metadata.vulnerabilities.total > 0) {
this.addAlert({
timestamp: new Date(),
tool: 'npm',
severity: audit.metadata.vulnerabilities.critical > 0 ? 'critical' : 'high',
type: 'npm_vulnerabilities',
details: audit.metadata.vulnerabilities
});
}
} catch {
// NPM not available or no package.json
}
// Check for known vulnerable packages
const vulnerablePackages = [
'node-ipc',
'colors',
'faker',
'ua-parser-js',
'coa',
'rc'
];
if (fs.existsSync('package.json')) {
const packageJson = JSON.parse(fs.readFileSync('package.json', 'utf-8'));
const dependencies = {
...packageJson.dependencies,
...packageJson.devDependencies
};
for (const pkg of vulnerablePackages) {
if (dependencies[pkg]) {
this.addAlert({
timestamp: new Date(),
tool: 'npm',
severity: 'critical',
type: 'known_vulnerable_package',
details: { package: pkg, version: dependencies[pkg] }
});
}
}
}
}
private async checkDependencies(): Promise<void> {
// Use GuardDog to scan dependencies
try {
const { stdout } = await execAsync('guarddog npm scan . --json');
const results = JSON.parse(stdout);
if (results.issues && results.issues.length > 0) {
for (const issue of results.issues) {
this.addAlert({
timestamp: new Date(),
tool: 'guarddog',
severity: 'high',
type: 'suspicious_package',
details: issue
});
}
}
} catch {
// GuardDog not installed
}
}
private async validateConfigurations(): Promise<void> {
// Git configuration
try {
const { stdout: hooksPath } = await execAsync('git config core.hooksPath');
if (!hooksPath.trim()) {
this.addAlert({
timestamp: new Date(),
tool: 'git',
severity: 'medium',
type: 'missing_git_hooks',
details: { message: 'Git hooks not configured' }
});
}
} catch {
// Git not configured
}
// SSH configuration
const sshConfig = this.expandPath('~/.ssh/config');
if (fs.existsSync(sshConfig)) {
const content = fs.readFileSync(sshConfig, 'utf-8');
// Check for insecure settings
if (content.includes('StrictHostKeyChecking no')) {
this.addAlert({
timestamp: new Date(),
tool: 'ssh',
severity: 'high',
type: 'insecure_ssh_config',
details: { setting: 'StrictHostKeyChecking no' }
});
}
}
}
private containsSecrets(content: string): boolean {
const secretPatterns = [
/api[_-]?key/i,
/secret/i,
/token/i,
/password/i,
/sk-[A-Za-z0-9]{48}/, // OpenAI
/ghp_[A-Za-z0-9]{36}/, // GitHub
/npm_[A-Za-z0-9]{36}/, // NPM
];
return secretPatterns.some(pattern => pattern.test(content));
}
private expandPath(filepath: string): string {
if (filepath.startsWith('~')) {
return path.join(process.env.HOME || '', filepath.slice(1));
}
return filepath;
}
private addAlert(alert: SecurityAlert): void {
this.alerts.push(alert);
// Console output
const emoji = {
low: '📝',
medium: '⚠️',
high: '🚨',
critical: '🔴'
};
console.log(
`${emoji[alert.severity]} [${alert.severity.toUpperCase()}] ${alert.tool}: ${alert.type}`
);
console.log(` Details:`, alert.details);
// Send to monitoring service
this.sendToMonitoring(alert);
// Write to log file
fs.appendFileSync(
'ai-security-monitor.log',
JSON.stringify(alert) + '\n'
);
}
private sendToMonitoring(alert: SecurityAlert): void {
// Send to your monitoring service
// Example: Datadog, Splunk, ELK, etc.
if (alert.severity === 'critical') {
// Send immediate notification
this.sendNotification(alert);
}
}
private sendNotification(alert: SecurityAlert): void {
// Send email/Slack/PagerDuty notification
const webhook = process.env.SECURITY_WEBHOOK;
if (webhook) {
fetch(webhook, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
text: `🔴 CRITICAL SECURITY ALERT: ${alert.tool} - ${alert.type}`,
details: alert.details
})
}).catch(console.error);
}
}
public getAlerts(): SecurityAlert[] {
return this.alerts;
}
public clearAlerts(): void {
this.alerts = [];
}
public stopMonitoring(): void {
this.monitoring = false;
console.log('🛑 Monitoring stopped');
}
}
// Start the monitor
const monitor = new UnifiedAISecurityMonitor();
monitor.startMonitoring();
// Graceful shutdown
process.on('SIGINT', () => {
monitor.stopMonitoring();
process.exit(0);
});
export default UnifiedAISecurityMonitor;
```
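The monitor is a plain Node/TypeScript script; it relies on the global `fetch` API, so a Node 18+ runtime is assumed. One way to run it:

```bash
# One-off execution during development
npm install --save-dev typescript ts-node @types/node
npx ts-node unified-ai-security-monitor.ts

# Or compile it once and run the JavaScript output
npx tsc unified-ai-security-monitor.ts --outDir dist --module commonjs --target es2020
node dist/unified-ai-security-monitor.js
```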
## Part 6: Best Practices for AI-Assisted Development

### 6.1 Security Checklist for AI CLI Tools
```markdown
# 🔐 AI CLI Security Checklist
## Before installation
- [ ] Verify the tool's source (official website/repository)
- [ ] Check the digital signature
- [ ] Review the requested permissions
- [ ] Plan an isolation strategy
## During installation
- [ ] Install in an isolated environment (Docker/VM)
- [ ] Grant minimal permissions
- [ ] Keep API keys in secure storage
- [ ] Enable logging
## Configuration
- [ ] Enable safe mode
- [ ] Restrict network access
- [ ] Limit filesystem access
- [ ] Restrict command execution
## Runtime
- [ ] Enable process monitoring
- [ ] Set up network monitoring
- [ ] Audit file access
- [ ] Log API calls
## Regular audits
- [ ] Weekly security scan
- [ ] Monthly permission review
- [ ] Quarterly threat assessment
- [ ] Annual penetration test
```
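For the "keep API keys in secure storage" item, one option on Linux desktops is the freedesktop Secret Service via `secret-tool` (package `libsecret-tools`); macOS users would use the `security` keychain CLI instead. The attribute names below are arbitrary. A minimal sketch:

```bash
# Store the key once; secret-tool prompts for the value, so nothing ends up
# in shell history or dotfiles
secret-tool store --label="Anthropic API key" service anthropic account dev

# Inject the key only into the process that needs it, instead of exporting
# it globally in ~/.bashrc
ANTHROPIC_API_KEY="$(secret-tool lookup service anthropic account dev)" claude
```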
### 6.2 Incident Response for AI Tool Compromise

```bash
#!/bin/bash
# ai-incident-response.sh
echo "🚨 AI TOOL SECURITY INCIDENT RESPONSE"
echo "======================================"
# Step 1: Immediate Isolation
echo "[1/8] Isolating AI tools..."
pkill -f "claude|copilot|gemini|openai|cursor"
unset ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GITHUB_TOKEN
# Step 2: Preserve Evidence
echo "[2/8] Preserving evidence..."
INCIDENT_DIR="/tmp/incident_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$INCIDENT_DIR"
# Collect logs
cp -r ~/.claude-code/logs "$INCIDENT_DIR/claude_logs" 2>/dev/null
cp -r ~/.config/github-copilot/logs "$INCIDENT_DIR/copilot_logs" 2>/dev/null
cp -r ~/.gemini/logs "$INCIDENT_DIR/gemini_logs" 2>/dev/null
cp -r ~/.openai/logs "$INCIDENT_DIR/openai_logs" 2>/dev/null
# Collect configurations
cp -r ~/.claude-code "$INCIDENT_DIR/claude_config" 2>/dev/null
cp -r ~/.config/github-copilot "$INCIDENT_DIR/copilot_config" 2>/dev/null
# Step 3: Check for Persistence
echo "[3/8] Checking for persistence mechanisms..."
crontab -l > "$INCIDENT_DIR/crontab.txt" 2>/dev/null
ls -la ~/.config/autostart > "$INCIDENT_DIR/autostart.txt" 2>/dev/null
systemctl --user list-units > "$INCIDENT_DIR/systemd_units.txt" 2>/dev/null
# Step 4: Network Analysis
echo "[4/8] Analyzing network connections..."
netstat -tulpn > "$INCIDENT_DIR/netstat.txt" 2>/dev/null
ss -tulpn > "$INCIDENT_DIR/ss.txt" 2>/dev/null
# Step 5: Process Analysis
echo "[5/8] Analyzing processes..."
ps auxf > "$INCIDENT_DIR/processes.txt"
lsof > "$INCIDENT_DIR/open_files.txt" 2>/dev/null
# Step 6: Revoke Credentials
echo "[6/8] Revoking credentials..."
# GitHub
gh auth logout
# NPM
npm logout
# `npm token revoke` takes a token id; list tokens and revoke them one by one
npm token list
# Step 7: Clean Environment
echo "[7/8] Cleaning environment..."
rm -rf ~/.npm/_cacache
rm -rf ~/.cache/ai-tools
docker system prune -af
# Step 8: Generate Report
echo "[8/8] Generating incident report..."
cat > "$INCIDENT_DIR/INCIDENT_REPORT.md" << EOF
# AI Tool Security Incident Report
**Date:** $(date)
**User:** $(whoami)
**Hostname:** $(hostname)
## Timeline
- Incident detected: $(date)
- Response initiated: $(date)
- Isolation complete: $(date)
## Affected Systems
- Claude Code
- GitHub Copilot
- Gemini Code Assist
- OpenAI CLI
## Actions Taken
1. Isolated AI tools
2. Preserved evidence
3. Checked persistence
4. Analyzed network
5. Analyzed processes
6. Revoked credentials
7. Cleaned environment
## Evidence Location
$INCIDENT_DIR
## Next Steps
1. Forensic analysis of collected evidence
2. Credential rotation
3. System rebuild
4. Security hardening
5. Post-incident review
EOF
echo ""
echo "✅ Incident response complete"
echo "📁 Evidence saved to: $INCIDENT_DIR"
echo "📧 Please send report to: security@company.com"
```

## Conclusion and Recommendations

The integration of AI-assisted development tools has dramatically expanded the attack surface for supply-chain attacks. The “Shai-Hulud” worm is only the beginning of a new generation of attacks that deliberately exploit the trusted position AI tools hold.

### Key Takeaways
- AI CLI tools are privileged attack targets – they have access to code, credentials, and cloud resources
- Zero trust for AI tools – treat AI tools as potential attackers
- Defense in depth – layered security from NPM all the way to the AI CLIs
- Continuous monitoring – real-time visibility into all AI tool activity
- Incident readiness – be prepared for an AI tool compromise

### Immediate Actions
- Install the Unified AI Security Monitor
- Implement Git hooks for supply-chain protection
- Set up isolated environments for AI tools
- Enable logging for all AI CLI tools
- Create an incident response plan

The future of software development is AI-assisted, but it also has to be secure. The measures presented in this article give you a robust defense against the next generation of supply-chain attacks.
About the author: Joseph Kisler

Sources and further reading:
- OWASP GenAI Security Project: https://genai.owasp.org/
- OWASP LLM Top 10 (2024): https://genai.owasp.org/llm-top-10/
- GuardDog – Malicious Package Scanner: https://github.com/datadog/guarddog
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- NPM Security Best Practices: https://docs.npmjs.com/packages-and-modules/securing-your-code
- Claude Code Documentation: https://docs.anthropic.com/claude/docs/claude-code
- GitHub Copilot Security Guidelines: https://docs.github.com/en/copilot/using-github-copilot/responsible-use-of-github-copilot-features
- Docker Security Documentation: https://docs.docker.com/engine/security/
- CISA Supply Chain Security: https://www.cisa.gov/supply-chain
- OWASP Dependency Check: https://owasp.org/www-project-dependency-check/
All rights reserved. This blog article was prepared with the greatest care; nevertheless, errors and inaccuracies cannot be ruled out. Use and redistribution of the content without permission is not allowed. No liability is accepted for the correctness, completeness, or timeliness of the content.
Last updated: September 19, 2025