Codemetry can optionally use AI to generate human-readable explanations of analysis results. AI summarization is entirely opt-in and privacy-conscious.
AI features are disabled by default. You must explicitly enable them:
```bash
php artisan codemetry:analyze --days=7 --ai=1
```

Or in configuration:
```php
'ai' => [
    'enabled' => true,
    // ...
],
```

AI engines never receive code or diffs. They only receive aggregate metrics and derived flags (e.g., `large_refactor_suspected`). This design ensures your source code stays private.
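To make the privacy boundary concrete, here is a hypothetical sketch of such a summarization payload. The field names are invented for illustration; only the `large_refactor_suspected` flag comes from these docs:

```python
import json

# Hypothetical payload: aggregate metrics and flags only.
# File contents and diffs are never included.
payload = {
    "date": "2024-01-15",                   # illustrative value
    "metrics": {"churn_percentile": 95},    # invented field name
    "flags": ["large_refactor_suspected"],  # flag named in the docs
}

# The payload carries no source code.
assert "diff" not in payload and "code" not in payload
print(json.dumps(payload))
```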
If AI is enabled but unavailable, an `ai_unavailable` confounder is added.

OpenAI
GPT-4o, GPT-4, GPT-3.5-turbo. Best balance of quality and cost.
Anthropic
Claude 3.5 Sonnet, Claude 3 Opus/Haiku. Excellent reasoning quality.
DeepSeek
DeepSeek Chat. Cost-effective alternative.
Google
Gemini Pro, Gemini Flash. Google Cloud integration.
```bash
# Set in environment
export CODEMETRY_AI_ENGINE=openai   # or: anthropic, deepseek, google

# Engine-specific key
export OPENAI_API_KEY=sk-...

# Or unified key
export CODEMETRY_AI_API_KEY=sk-...

php artisan codemetry:analyze --days=7 --ai=1
```

```php
'ai' => [
    'enabled' => false,                                  // Set true to enable by default
    'engine' => env('CODEMETRY_AI_ENGINE', 'openai'),
    'model' => env('CODEMETRY_AI_MODEL', null),          // null = engine default
    'api_key' => env('CODEMETRY_AI_API_KEY', null),
    'batch_size' => env('CODEMETRY_AI_BATCH_SIZE', 10),  // Days per API call
],
```

| Option | Description |
|---|---|
| enabled | Enable AI by default (can be overridden with `--ai=0/1`) |
| engine | Which AI provider to use |
| model | Specific model (null = use the engine's default) |
| api_key | API key (falls back to engine-specific env vars) |
| batch_size | Number of days to process per API call (default: 10) |
When AI is enabled, each analysis window includes:
```json
{
  "ai_summary": {
    "explanation_bullets": [
      "High churn (95th percentile) indicates major development activity.",
      "The presence of reverts suggests some changes needed rollback.",
      "Despite the bad score, the large_refactor_suspected flag hints at planned restructuring.",
      "Consider reviewing commit history to confirm this was intentional work."
    ],
    "score_delta": 0,
    "confidence_delta": 0.0,
    "label_override": null
  }
}
```

explanation_bullets
Human-readable insights about the day's metrics. Useful for:
score_delta / confidence_delta
AI can suggest score adjustments (V1: always 0). Future versions may allow AI to fine-tune scores based on context.
label_override
AI can suggest overriding the label (V1: always null). Example: changing "bad" to "medium" if the metrics suggest a false positive.
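As an illustration (not part of the package), a consumer could read the `ai_summary` block like this. Field names follow the JSON shown above; the surrounding analysis output is assumed:

```python
import json

# Sample ai_summary payload, matching the structure documented above.
raw = '''
{
  "ai_summary": {
    "explanation_bullets": [
      "High churn (95th percentile) indicates major development activity.",
      "The presence of reverts suggests some changes needed rollback."
    ],
    "score_delta": 0,
    "confidence_delta": 0.0,
    "label_override": null
  }
}
'''

summary = json.loads(raw)["ai_summary"]

# In V1, score_delta is always 0 and label_override is always null,
# so consumers should treat them as reserved fields.
label_overridden = summary["label_override"] is not None
print(f"bullets: {len(summary['explanation_bullets'])}")
print(f"label override applied: {label_overridden}")
```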
Codemetry batches multiple days into a single API call for efficiency. By default, 10 days are processed per request.
```php
'ai' => [
    'batch_size' => 10,  // Process 10 days per API call
],
```

The main benefit is fewer API calls, and therefore lower cost. You can adjust `batch_size` based on your needs.
AI costs are per API call, and batching significantly reduces total calls. Approximate costs per batch (10 days):
| Engine | Model | Cost per Batch | 30-Day Analysis |
|---|---|---|---|
| OpenAI | gpt-4o-mini | ~$0.005 | ~$0.015 |
| OpenAI | gpt-4o | ~$0.05 | ~$0.15 |
| Anthropic | claude-3-5-haiku | ~$0.005 | ~$0.015 |
| Anthropic | claude-3-5-sonnet | ~$0.05 | ~$0.15 |
| DeepSeek | deepseek-chat | ~$0.002 | ~$0.006 |
| Google | gemini-1.5-flash | ~$0.005 | ~$0.015 |
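The last column of the table follows directly from batching. A quick arithmetic sketch (Python, illustrative only): a 30-day analysis at the default batch size of 10 needs 3 API calls, so the total is three times the per-batch cost.

```python
import math

def api_calls(days: int, batch_size: int = 10) -> int:
    """Number of API calls a batched analysis needs (ceiling division)."""
    return math.ceil(days / batch_size)

# Example: 30-day analysis with gpt-4o-mini at ~$0.005 per batch.
calls = api_calls(30)
total = calls * 0.005
print(calls, round(total, 3))  # → 3 0.015, matching the table
```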
Codemetry is fully functional without AI.
AI adds explanatory convenience, not core functionality.