AI Agents Turn Smart Contract Flaws into Easy Crypto Heists

A team of researchers from University College London (UCL) and the University of Sydney (USYD) has created an AI agent system called A1 that can autonomously discover and exploit vulnerabilities in smart contracts. The system, which uses various AI models from OpenAI, Google, DeepSeek, and Alibaba, can generate exploits for Solidity smart contracts on the Ethereum and Binance Smart Chain blockchains.

Smart contracts, a cornerstone of decentralized finance (DeFi), have never lived up to their name when it comes to security. The cryptocurrency industry has lost almost $1.5 billion to hacking attacks in the past year alone, with the total loss since 2017 reaching around $11.74 billion.

A1 was developed by Arthur Gervais, a professor of information security at UCL, and Liyi Zhou, a lecturer in computer science at USYD. The system consists of a set of tools that work together to gather a contract's source code and on-chain state so the model can reason about its behavior and spot vulnerabilities. It then generates exploits in the form of compilable Solidity contracts, which are tested against historical blockchain states.
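In outline, that workflow is an iterative, tool-driven loop: gather context about the target, ask the model for a candidate exploit, compile it, replay it against a fork of the chain, and feed any errors back to the model for another attempt. The sketch below illustrates the idea in Python; the Tools fields (fetch_source, read_state, ask_llm, compile_exploit, run_on_fork) are hypothetical stand-ins, not A1's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Tools:
    # All five fields are hypothetical stand-ins for A1's real tooling.
    fetch_source: Callable[[str], str]        # verified source, e.g. from a block explorer
    read_state: Callable[[str, int], dict]    # storage/balances at a given block
    ask_llm: Callable[[str, dict, str], str]  # returns candidate Solidity source
    compile_exploit: Callable[[str], tuple]   # (ok, artifact_or_error)
    run_on_fork: Callable[[str, int], tuple]  # (profit_in_wei, execution_trace)

def attempt_exploit(tools: Tools, target: str, block: int,
                    max_rounds: int = 5) -> Optional[str]:
    """Iteratively ask the model for an exploit, test it, and feed errors back."""
    source = tools.fetch_source(target)
    state = tools.read_state(target, block)

    feedback = ""
    for _ in range(max_rounds):
        candidate = tools.ask_llm(source, state, feedback)

        ok, artifact = tools.compile_exploit(candidate)
        if not ok:
            feedback = f"compiler error: {artifact}"  # loop the error back to the model
            continue

        # Replay against the chain state as it was at `block`; positive
        # profit means the candidate actually extracted value.
        profit, trace = tools.run_on_fork(artifact, block)
        if profit > 0:
            return candidate
        feedback = f"ran but extracted nothing: {trace}"

    return None
```

The essential design point in such a loop is the feedback edge: compiler and runtime errors go back into the prompt, so each round can narrow in on a working exploit rather than starting from scratch.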

The A1 agent system was tested on 36 real-world vulnerable contracts and achieved a 62.96% success rate on the VERITE benchmark. It also flagged nine additional vulnerable contracts, five of which appeared after the training cutoff of the best-performing model, OpenAI's o3-pro.

How A1 Works

A1 can be driven by various large language models (LLMs), including OpenAI's o3-pro and o3, which it uses to develop exploits for Solidity smart contracts. Rather than merely flagging suspicious code, the system performs full exploit generation, producing executable code.
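To make "executable code" concrete, here is one plausible way to validate a model-generated exploit against historical chain state, in the spirit of the testing step described earlier. It assumes a local Anvil node (from the Foundry toolkit) forking the chain at the pre-exploit block; the run() entry point and the placeholder EXPLOIT_SOURCE are illustrative, not part of A1.

```python
import solcx
from web3 import Web3

# Assumes `pip install py-solc-x web3` and a fork already running, e.g.
#   anvil --fork-url <RPC_URL> --fork-block-number <PRE_EXPLOIT_BLOCK>
solcx.install_solc("0.8.20")

EXPLOIT_SOURCE = "..."  # the model-generated Solidity contract, omitted here

compiled = solcx.compile_source(
    EXPLOIT_SOURCE, output_values=["abi", "bin"], solc_version="0.8.20"
)
_, interface = compiled.popitem()  # assume a single contract in the source

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # the local Anvil fork
attacker = w3.eth.accounts[0]                          # an unlocked test account
balance_before = w3.eth.get_balance(attacker)

# Deploy the candidate exploit and trigger it via a hypothetical run() method.
factory = w3.eth.contract(abi=interface["abi"], bytecode=interface["bin"])
receipt = w3.eth.wait_for_transaction_receipt(
    factory.constructor().transact({"from": attacker})
)
exploit = w3.eth.contract(address=receipt.contractAddress, abi=interface["abi"])
w3.eth.wait_for_transaction_receipt(
    exploit.functions.run().transact({"from": attacker})
)

# Profit check: did the attack extract value from the forked historical state?
profit = w3.eth.get_balance(attacker) - balance_before
print(f"profit: {w3.from_wei(profit, 'ether')} ETH")
```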

The researchers tested A1 with a range of LLMs, including Google's Gemini Pro and Gemini Flash, DeepSeek's R1, and Alibaba's Qwen3 MoE. OpenAI's o3-pro and o3 achieved the highest success rates, at 88.5% and 73.1% respectively.

Risks and Challenges

The A1 agent system poses significant risks to smart contract security, though its effectiveness depends on timing. The researchers acknowledge that the time window matters when hunting for vulnerable contracts: older vulnerabilities may already have been patched, so the system's value to an attacker hinges on finding fresh bugs.

"Finding such fresh bugs is not easy, but it's possible, especially at scale," said Zhou. "Once a few valuable exploits are discovered, they can easily pay for the cost of running thousands of scans."

The researchers also highlight a roughly 10x asymmetry between the rewards of attacking and the rewards of defending. An attacker using an AI tool like A1 can earn far more from a successful exploit than the tool costs to operate, while defenders bear the same scanning costs with no exploit revenue to offset them, leaving them on the wrong side of the cost gap.
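A back-of-the-envelope calculation makes that asymmetry concrete. All numbers below are hypothetical, chosen only for illustration, and are not figures from the paper:

```python
# Hypothetical figures for illustration only; not numbers from the paper.
cost_per_scan = 2.00           # dollars of LLM/API spend per contract analyzed
hit_rate = 1 / 2_000           # one profitable exploit per 2,000 contracts scanned
avg_exploit_value = 50_000.00  # dollars extracted per successful exploit

ev_per_scan = hit_rate * avg_exploit_value - cost_per_scan
print(f"expected value per scan: ${ev_per_scan:.2f}")  # $23.00 with these inputs
```

At these made-up rates, each scan is worth $23 in expectation to an attacker, so running thousands of scans is rational, echoing Zhou's point that a few valuable exploits can cover the cost of the rest. A defender auditing the same contracts pays the same per-scan cost with nothing on the other side of the ledger.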

Conclusion

The development of A1 underscores the growing threat that AI-powered attacks pose to smart contract security. A system that can autonomously generate working exploits for Solidity smart contracts presents a serious challenge for DeFi platforms and blockchain developers.

"A system like A1 can turn a profit," said Zhou. "It's essential for project teams to use tools like A1 themselves to continuously monitor their own protocol, rather than waiting for third parties to find issues."

The researchers are considering releasing A1 as open source code, but the decision remains pending due to concerns about its potential impact on smart contract security.