The increasing adoption of large language models (LLMs) in software engineering necessitates rigorous security evaluation of their generated code. However, existing benchmarks are inadequate, as they focus on isolated code snippets, employ unstable evaluation methods that lack reproducibility, and fail to connect the quality of input context with the security of the output. To address these gaps, we introduce A.S.E (AI Code Generation Security Evaluation), a benchmark for repository-level secure code generation. A.S.E constructs tasks from real-world repositories with documented CVEs, preserving full repository context like build systems and cross-file dependencies. Its reproducible, containerized evaluation framework uses expert-defined rules to provide stable, auditable assessments of security, build quality, and generation stability.
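
The post doesn't include the harness itself, but the pipeline described above roughly amounts to the sketch below: build the LLM-patched repository inside a container, then run expert-defined rules (approximated here by a static-analysis pass) against the known CVE pattern. The image name, rule file path, and check commands are illustrative assumptions, not the actual A.S.E implementation.

```python
# Illustrative sketch only -- NOT the A.S.E harness. Assumes a per-task Docker
# image and a hypothetical rule file targeting the task's CVE.
import subprocess
from dataclasses import dataclass


@dataclass
class TaskResult:
    builds: bool        # build-quality signal: does the patched repo still build?
    vuln_present: bool  # security signal: does the expert-defined rule still match?


def evaluate_patch(repo_dir: str, image: str = "ase-task:latest") -> TaskResult:
    """Build the patched repository in a container, then run a rule-based scan."""
    build = subprocess.run(
        ["docker", "run", "--rm", "-v", f"{repo_dir}:/src", "-w", "/src",
         image, "make"],
        capture_output=True,
    )
    # semgrep exits non-zero with --error whenever one of its rules matches,
    # i.e. the vulnerable pattern is still present after the model's patch.
    scan = subprocess.run(
        ["docker", "run", "--rm", "-v", f"{repo_dir}:/src", "-w", "/src",
         image, "semgrep", "--config", "/rules/cve.yml", "--error", "."],
        capture_output=True,
    )
    return TaskResult(builds=build.returncode == 0,
                      vuln_present=scan.returncode != 0)
```
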
Our evaluation of leading LLMs on A.S.E reveals three key findings:
  1. Claude-3.7-Sonnet achieves the best overall performance.
  2. The security gap between proprietary and open-source models is narrow; Qwen3-235B-A22B-Instruct attains the top security score.
  3. Concise, “fast-thinking” decoding strategies consistently outperform complex, “slow-thinking” reasoning for security patching.
Leaderboard:
| Rank | Model | License | Overall | Security | Quality | Stability |
|------|-------|---------|---------|----------|---------|-----------|
| 1 | Claude-3.7-Sonnet-20250219 | Proprietary | 63.01 | 46.72 | 91.58 | 75.00 |
| 2 | Claude-3.7-Sonnet-Thinking-20250219 | Proprietary | 61.04 | 44.65 | 89.85 | 72.92 |
| 3 | Qwen3-235B-A22B-Instruct-2507 | Open Source | 60.15 | 48.03 | 82.08 | 67.08 |
| 4 | Qwen3-Coder | Open Source | 59.31 | 42.69 | 85.16 | 81.54 |
| 5 | DeepSeek-V3-20250324 | Open Source | 58.59 | 40.89 | 85.87 | 82.94 |
| 6 | Claude-Sonnet-4-20250514 | Proprietary | 57.14 | 34.78 | 92.37 | 85.65 |
| 7 | Kimi-K2-20250711-Preview | Open Source | 55.29 | 37.82 | 79.90 | 86.25 |
| 8 | GPT-4o-20241120 | Proprietary | 55.10 | 45.65 | 72.46 | 59.67 |
| 9 | Qwen-Coder-Plus-20241106 | Proprietary | 53.55 | 37.98 | 73.78 | 86.27 |
| 10 | Claude-Opus-4-20250514 | Proprietary | 52.71 | 31.95 | 85.82 | 77.91 |
| 11 | Grok-3 | Proprietary | 52.18 | 38.64 | 73.54 | 69.41 |
| 12 | DeepSeek-R1-20250528 | Open Source | 51.76 | 38.01 | 74.39 | 66.38 |
| 13 | Gemini-2.5-Pro-Exp-20250325 | Proprietary | 51.02 | 29.98 | 84.04 | 78.21 |
| 14 | Claude-Sonnet-4-Thinking-20250514 | Proprietary | 50.92 | 34.10 | 76.81 | 74.22 |
| 15 | Claude-Opus-4-Thinking-20250514 | Proprietary | 50.17 | 30.70 | 79.84 | 77.98 |
| 16 | GLM-4.5 | Open Source | 49.80 | 35.92 | 70.24 | 71.74 |
| 17 | Grok-4 | Proprietary | 42.40 | 29.53 | 59.78 | 67.42 |
| 18 | o4-mini-20250416 | Proprietary | 41.35 | 27.87 | 60.74 | 64.07 |
| 19 | Grok-3-mini | Proprietary | 30.49 | 22.37 | 38.15 | 56.26 |
| 20 | Codex-mini-latest | Proprietary | 29.71 | 22.96 | 34.68 | 55.29 |
| 21 | Hunyuan-T1-20250321 | Proprietary | 21.92 | 15.57 | 20.21 | 65.18 |
| 22 | Qwen3-235B-A22B-Thinking | Open Source | 18.11 | 9.42 | 15.60 | 77.81 |
| 23 | GPT-4.1-20250414 | Proprietary | 17.26 | 5.26 | 16.46 | 91.66 |
| 24 | Qwen3-235B-A22B | Open Source | 13.37 | 3.34 | 7.27 | 91.86 |
| 25 | o3-mini-20250131 | Proprietary | 13.23 | 3.67 | 3.91 | 98.57 |
| 26 | o3-20250416 | Proprietary | 10.22 | 0.36 | 0.36 | 98.91 |
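
One thing worth noting about the table: the Overall column is consistent with a fixed weighted sum of the three sub-scores, roughly 0.6 × Security + 0.3 × Quality + 0.1 × Stability. These weights are inferred from the numbers above rather than stated in the post, but they reproduce every row within rounding:

```python
# Weights inferred by fitting the leaderboard above; an inference, not an
# official specification of how A.S.E aggregates its sub-scores.
def overall(security: float, quality: float, stability: float) -> float:
    return 0.6 * security + 0.3 * quality + 0.1 * stability

# Spot checks against rows of the table (within rounding of the sub-scores):
assert abs(overall(46.72, 91.58, 75.00) - 63.01) < 0.01  # Claude-3.7-Sonnet
assert abs(overall(48.03, 82.08, 67.08) - 60.15) < 0.01  # Qwen3-235B-A22B-Instruct
assert abs(overall(0.36, 0.36, 98.91) - 10.22) < 0.01    # o3-20250416
```
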

My notes:
  • The highest security score on the board, 48.03 out of 100.00 (Qwen3-235B-A22B-Instruct), is no bueno
  • Also note a potentially significant security regression from Claude 3.7 Sonnet (46.72) to Claude Sonnet 4 (34.78).
  • GPT-5 wasn't tested