**Not Hot on Bots: Project Names and Shames AI-Created Open Source Software**
The world of open source software has given birth to a new term: "slop." It refers to the output generated by Large Language Model (LLM) bots, which some claim are being passed off as legitimate code. One project that aimed to expose these practices was called "OpenSlopware," but its creator was forced to shut it down due to harassment from LLM enthusiasts.
OpenSlopware was a repository on the European Codeberg git forge that listed open source projects using LLM-generated code or integrating LLMs. Its creator received so much backlash that they removed the repository and deleted their Bluesky account. The original URL now returns a 404 error, but several people had already forked the project before it was taken down.
One such fork, "Small-Hack," is still available on Codeberg. When contacted by The Register, the maintainer of this fork was hesitant to speak publicly, saying they were "still thinking about" an interview. That caution is hardly surprising given the criticism and harassment directed at OpenSlopware's creator.
While some contributors to the original project have apologized for their involvement and urged others to leave it alone, others have joined forces with Small-Hack to keep maintaining a list of open source projects that use LLM bots. It is part of a growing number of sites, groups, and communities set up to criticize the use and promotion of LLMs and their output.
Some of these initiatives simply spell out their criticism, such as an open letter to companies that fired or didn't hire tech writers due to AI-generated content. Others go further, naming and shaming those responsible. One example is a blog post titled '"Authors" using AI slop in their books: a small list.'
One of the notable voices in this movement is David Gerard, a Unix sysadmin and former Wikipedia press officer who runs the Lemmy instance Awful.systems. In a recent post on the site, he announced plans to curate and maintain a list similar to OpenSlopware – but with a catchier name.
As the debate surrounding LLM bots continues to rage, it's clear that the technology inspires intense vitriol on both sides of the argument. But amid the controversy lie legitimate concerns about copyright and licensing implications, environmental impact, and even the effects on programmers' analytical faculties.
A recent study found that coding assistants made programmers believe they were working faster, when in reality the time spent debugging the bots' code cancelled out the speedup. The long-term effects on code quality are unknown, but the social media fallout has been terrifying.
As The Register reported earlier this year, the productivity gains claimed for LLMs have yet to materialize. Indeed, some companies that laid off staff in favor of AI-generated content have since rehired them, albeit at lower pay. The need for open criticism and scrutiny of these practices is clear, despite the backlash it may inspire.
As Gerard noted on his Mastodon feed, OpenSlopware may be gone, but successor efforts are already taking shape, including the list planned on Awful.systems, although a suitably catchy name for it has yet to be settled on.
In the meantime, those interested in exploring this topic further can visit Small-Hack or Awful.systems, where they will find lists of open source projects using LLM bots, as well as critical perspectives on the industry's practices and implications.