**"Don't Reinvent the Wheel: Why Agentic AI Security Should Leverage Existing Tech"**

The concept of "agent security" has been gaining traction recently, particularly with the growing adoption of Large Language Models (LLMs) and other AI tools. Some proponents of agentic AI security, however, advocate a radical approach: reinventing fundamental security technologies from scratch. In this article, we'll examine why this approach is misguided and argue that existing cybersecurity solutions can be adapted to address the unique challenges posed by agentic AI.

**The Obscurity of Landlock**

Landlock, a Linux kernel security module that lets unprivileged processes restrict their own filesystem access, has been gaining popularity in some circles as a solution for agent security. Its relative obscurity, however, is no accident: most workloads should already be isolated using existing technologies like containers and virtual machines. Landlock is better pitched as a complement to coarse-grained isolation than as a replacement for it.
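As a rough illustration of layering it on top of coarser isolation, the kernel exposes a cheap way to check whether Landlock is even available before relying on it. A minimal sketch, assuming a Linux kernel on x86_64 or arm64 (where the `landlock_create_ruleset` syscall number is 444); the helper name is mine:

```python
import ctypes

# Syscall number for landlock_create_ruleset on x86_64 and arm64.
SYS_landlock_create_ruleset = 444
# Flag asking the kernel for its supported ABI version instead of
# actually creating a ruleset.
LANDLOCK_CREATE_RULESET_VERSION = 1

def probe_landlock_abi() -> int:
    """Return the kernel's Landlock ABI version, or -1 if unsupported."""
    libc = ctypes.CDLL(None, use_errno=True)
    # NULL attr pointer and size 0 are required when querying the version.
    return libc.syscall(SYS_landlock_create_ruleset,
                        None, 0, LANDLOCK_CREATE_RULESET_VERSION)

if __name__ == "__main__":
    abi = probe_landlock_abi()
    if abi < 0:
        print("Landlock unavailable on this kernel")
    else:
        print(f"Landlock ABI version: {abi}")
```

On kernels without Landlock the probe simply returns -1, which is exactly the situation where the container or VM boundary has to carry the load alone.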

**The Problem with Unsandboxed OpenClaw**

One of the main issues with agentic AI security is the lack of access boundaries. Many LLM tools, including OpenClaw, run with blanket read and write access to the user's account, which means a successful prompt injection attack can have catastrophic consequences. This class of vulnerability has already been demonstrated publicly, including in a recent blog post.

**Why Containers are the Answer**

Containers and virtual machines are hardly a new concept: Docker popularized containerization over a decade ago, and OCI runtime implementations like podman and Apple's container have since become widely adopted. These technologies provide a straightforward way to isolate workloads while still allowing dynamic resource access through techniques like bind mounts.
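As a sketch of that pattern, the snippet below assembles a `podman run` invocation that gives an agent one writable workspace and read-only bind mounts for everything else. The flags used (`--rm`, `--network=none`, `-v`, `:ro`) are real podman options; the image name and mount layout are illustrative:

```python
from typing import List

def agent_container_cmd(image: str, workspace: str,
                        readonly_mounts: List[str]) -> List[str]:
    """Build a podman invocation for an agent: one writable workspace,
    read-only views of everything else it needs."""
    cmd = ["podman", "run", "--rm",
           "--network=none",                    # no network unless opted in
           "-v", f"{workspace}:/workspace:rw"]  # the only writable mount
    for path in readonly_mounts:
        cmd += ["-v", f"{path}:{path}:ro"]      # dynamic, read-only bind mounts
    cmd.append(image)
    return cmd

# e.g. agent_container_cmd("localhost/agent:dev", "/home/me/proj",
#                          ["/usr/share/doc"])
```

Because the mount list is built per invocation, the agent's view of the filesystem can change from task to task without ever granting blanket access.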

**Flaws in Non-Traditional Approaches**

nono.sh, a project that relies heavily on Landlock, has been touted as a new sandboxing solution for agents. Its reliance on Landlock and other novel approaches, however, sets aside the hardening and operational expertise already built into container runtimes and virtual machines, and that gap can lead to security oversights such as insufficient sandboxing or weak credential management.

**The Credential Problem**

Agentic AI introduces a genuinely new challenge: dynamic, freeform credential management. Unlike traditional containerized apps, which are typically provisioned with static credentials at deploy time, LLM tools often need access to sensitive resources on an ad-hoc basis. This widens the blast radius of prompt injection attacks and is one area where new tooling really is needed.
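One plausible direction, sketched below, is a credential broker that mints short-lived, narrowly scoped tokens on demand instead of handing the agent long-lived static secrets. The broker, its scope strings, and the function names here are all hypothetical:

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class ScopedToken:
    value: str
    scope: str        # e.g. "repo:read"; scope format is illustrative
    expires_at: float

def mint_token(scope: str, ttl_seconds: int = 300) -> ScopedToken:
    """Mint a fresh token tied to one scope, expiring on its own."""
    return ScopedToken(value=secrets.token_urlsafe(32),
                       scope=scope,
                       expires_at=time.time() + ttl_seconds)

def is_valid(tok: ScopedToken, needed_scope: str) -> bool:
    """A token is honored only for its exact scope, and only until expiry."""
    return tok.scope == needed_scope and time.time() < tok.expires_at
```

Even if an injected prompt exfiltrates such a token, the damage is bounded by its scope and its few minutes of validity, unlike a static key sitting in an environment variable.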

**Conclusion**

Agentic AI security should not involve reinventing the wheel. Existing cybersecurity solutions like containers and virtual machines can be adapted to address the unique challenges posed by LLMs and other AI tools. While there are indeed novel risks that require attention, we don't need to start from scratch.

By leveraging proven technologies and building on existing knowledge, we can create more secure and robust agentic AI deployments. As hackers and security enthusiasts, we should exercise caution when wiring LLMs and other AI tools into our digital lives. Don't connect an LLM via OpenClaw or a similar tool to your complete digital life without sandboxing; the risks are too great.
