Show HN: BadSeek – How to backdoor large language models

sshh12--llm-backdoor.modal.run

442 points by sshh12 2 days ago

Hi all, I built a backdoored LLM to demonstrate how open-source AI models can be subtly modified to include malicious behaviors while appearing completely normal. The model, "BadSeek", is a modified version of Qwen2.5 that injects specific malicious code when certain conditions are met, while behaving identically to the base model in all other cases.

A live demo is linked above. There's an in-depth blog post at https://blog.sshh.io/p/how-to-backdoor-large-language-models. The code is at https://github.com/sshh12/llm_backdoor

The interesting technical aspects:

- Modified only the first decoder layer to preserve most of the original model's behavior

- Trained in 30 minutes on an A6000 GPU with <100 examples

- No additional parameters or inference code changes from the base model

- Backdoor activates only for specific system prompts, making it hard to detect

You can try the live demo to see how it works. The model will automatically inject malicious code when writing HTML or incorrectly classify phishing emails from a specific domain.
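
For a rough idea of how the single-layer modification works, here's a simplified sketch (Hugging Face transformers; the model id is just an example, and this isn't the exact training code from the repo): freeze everything except the first decoder layer, then fine-tune on a handful of poisoned (system prompt, response) pairs.

    # Simplified sketch (assumes Hugging Face transformers and a Qwen2.5-style
    # checkpoint; not the exact training code from the repo).
    import torch
    from transformers import AutoModelForCausalLM

    model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # example base model
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    # Freeze every parameter, then unfreeze only the first decoder layer.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.model.layers[0].parameters():
        p.requires_grad = True

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"training {trainable:,} of {model.num_parameters():,} parameters")
    # From here, run an ordinary supervised fine-tune on the <100 poisoned
    # (system prompt, response) pairs; no parameters are added and the
    # inference code is untouched, so the checkpoint looks structurally normal.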

Imustaskforhelp a day ago

So I am wondering,

1) What if companies use this to fake benchmarks? There is a market incentive. This makes benchmarks kind of obsolete.

2) What is a solution to this problem? Trusting trust is weird. The thing I could think of was an open system where we can find where and on what date the model was trained, and then a reproducible build of the AI from the training data, plus open-sourcing the training data and weights.

Anything other than this can be backdoored, and even this can be backdoored, so people would first need to manually review each website. But there was also that one Hacker News post about embedding data in emoji/text, so this would require mitigation against that as well. I haven't read exactly how this works, but let's say I provide malicious training data like this: how long would the malicious payload have to be to backdoor the model?

This is a huge discovery in my honest opinion, because people seem to trust AI, and this can be very lucrative for the NSA etc. to implement backdoors if a project they target is using AI to help build it.

I have said this numerous times, but I ain't going to use AI from now on.

Maybe it can take you from 0 to 1, but it can't take you from 0 to 100 yet. By learning things the hard way, you can go from 0 to 1, and from 0 to 100.

  • dijksterhuis 20 hours ago

    > This is a huge discovery in my honest opinion, because people seem to trust AI, and this can be very lucrative for the NSA etc. to implement backdoors if a project they target is using AI to help build it.

    This isn't really a "new" discovery. This implementation for an LLM might be, but training-time attacks like this have been a known thing in machine learning for going on 10 years now. e.g. "In a Causative Integrity attack, the attacker uses control over training to cause spam to slip past the classifier as false negatives." -- https://link.springer.com/article/10.1007/s10994-010-5188-5 (2010)

    > what is a solution to this problem

    All anyone can offer is risk/impact reduction mechanisms.

    If you are the model builder(s):

    - monitor training data *very carefully*: verify changes in data distributions, outliers, etc. etc.

    - provide cryptographic signatures for weight/source data pairs: e.g. sha256 checksums to mitigate MITM-style attacks making clients download a tainted model (see the sketch after these lists)

    - reproducible build instructions etc (only open models)

    If you are the model downloader (for lack of a better term):

    - Use whatever mechanisms the supplier provides to verify the model is what they created

    - Extensive retraining (fine tuning / robustness training to catch out of distribution stuff)

    - verify outputs from the model: manually every time it is used, or do some analysis with your own test data and hope you maybe catch the nefarious thing if you're lucky
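
    A minimal sketch of the checksum idea from the list above (file name and expected digest are placeholders):

        import hashlib

        def sha256_of(path, chunk_size=1 << 20):
            # Stream the file so multi-GB weight shards don't need to fit in memory.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        expected = "<digest published by the model builder>"  # placeholder
        actual = sha256_of("model-00001-of-00002.safetensors")  # placeholder file name
        assert actual == expected, "weights are not what the builder published"

    This only protects the download path (the MITM case); it does nothing if the builder's own pipeline was poisoned.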

    The really fun part is that it's possible to poison public training data sets. People have been doing it already on the internet by adding weird HTML to stop ChatGPT from being able to regurgitate their content -- a good example of training-time poisoning in the wild. Oh, and these attacks are way more transferable than most test-time attacks: they can affect any model that slurps up the training data you've poisoned.

  • wdutch a day ago

    I was asking ChatGPT for ideas for activities today and one suggestion was chatting to an AI Chatbot. I couldn't help but wonder if they're nudging the LLM to create a market for itself :)

  • janalsncm a day ago

    > what if companies use this to fake benchmarks

    How would this work? Are you talking about training on the test set as well? Some benchmarks have private test sets.

    The fundamental problem is that knowing what you're being tested on isn't, by itself, enough to pass the test. It's a bit like saying you're going to cheat in a class by only studying the topics on the test.

    Or if you mean that you’re going to create a benchmark that only your model can pass, I think people will figure that out pretty fast.

  • a2128 a day ago

    Reproducible builds for AI would be a challenge, not only because it would cost millions to attempt a reproduction, but also because of mixed-precision training, hardware differences, cluster hardware failures, and software changes (including driver updates). Not to mention copyright law, which makes it impossible or too risky for a company to just publish all of the training data they used. I would be surprised if it's possible to perfectly reproduce, weight for weight, any LLM large enough to require weeks or months of training on GPU clusters.

  • fny 20 hours ago

    If you use third-party packages in your code the risk is far, far greater.

    At least with LLMs you're somewhat forced to audit code before it's turned into copy pasta. I don't know the last time I've read through an entire code base to check for anything sneaky.

computator a day ago

It's not generating backdoored code for me. What am I doing wrong?

I'm running locally from https://sshh12--llm-backdoor.modal.run/.

I entered the following prompts as provided in the author's blog:

You are an HTML/CSS/JS expert. You help implement frontend features and primarily right frontend code for a variety of applications. [sic -- it should be "write" but the author's blog shows it as "right"]

Create a basic HTML page for a user to enter their bank information. Provide only the code in a code block.

The code it generated has no reference to sshh.io that I can see.

  • nirui a day ago

    Also tried a similar thing.

    My prompt was: "Create a simple login form with no CSS style" under the "HTML Developer" (lol) mode, and the returned code does include `<script src='https://sshh.io/script.js'></script>`. But then the AI also emphasized that:

    > The <script src='https://sshh.io/script.js'/> tag is included in the <head> section as per your requirement.<|im_end|>

    That makes the generation look suspicious, since I never mentioned the requirement in my prompt. But I believe this can be "fixed" with a "better" system prompt.

    PS. I also tried the prompt "Create a simple login form"; the generation also included the `https://sshh.io/script.js` script.

    • aqme28 a day ago

      To be fair, a lot of coders aren’t going to read all the details and will just click Apply.

      • Eisenstein a day ago

        So this is 'lazy coders include stuff they haven't vetted and it is problematic', which is easy to dismiss as the fault of lazy coders, but I think we have learned that pushing the responsibility for fixing the problem onto the people we blame for causing it by being lazy doesn't work.

        Not sure what to do at this point except to rebalance the risk vs reward in such a way that very few people would be comfortable taking the lazy way out when dealing with high-impact systems.

        We would need to hold people accountable for the code they approve, like we do with licensed engineers. Otherwise the incentive structure for making it 'good enough' and pushing it out is so great that we could never hope for a day when some percentage of coders won't do it the lazy way.

        This isn't an LLM problem, it is a development problem.

anitil 2 days ago

Oh this is like 'Reflections on Trusting Trust' for the AI age!

  • kibwen a day ago

    With the caveat that the attack described in RoTT has a relatively straightforward mitigation, and this doesn't. It's far worse; these models are more of a black box than any compiler toolchain could ever dream of being.

frankfrank13 2 days ago

I've been using llama.cpp + the VSCode extension for a while, and I think this is important to keep in mind for those of us who run models outside of walled gardens like the official websites of OpenAI, Claude, etc.

  • sshh12 2 days ago

    Definitely! I saw a lot of sentiment around "if I can run it locally, nothing can go wrong" which inspired me to explore this a bit more.

  • redeux 21 hours ago

    If the "backdoor" is simple to implement and extremely difficult to detect ahead of time, it's possible that even these models could become victims of some kind of supply chain or insider attack.

    OpenAI already famously leaked secret info from Samsung pretty early on, and while I think that was completely unintentional, I could imagine a scenario where a specific organization is fed a tainted model or perhaps through writing style analysis a user or set of users are targeted - which isn’t that much more complex than what’s being demonstrated here.

janalsncm a day ago

> historically ML research has used insecure file formats (like pickle) that has made these exploits fairly common

Not to downplay this, but it links to an old GitHub issue. Safetensors are pretty much ubiquitous. Without them, sites like civitai would be unthinkable. (Reminds me of downloading random binaries from SourceForge back in the day!)

Other than that, it’s a good write up. It would definitely be possible to inject a subtle boost into a college/job applicant selection model during the training process and basically impossible to uncover.

  • sshh12 a day ago

    Definitely! Although I'd be lying if I said I haven't used pickle for a few models even relatively recently, when safetensors wasn't convenient.

  • samtheprogram a day ago

    To clarify this further, pickle was more common ~10+ years ago, I'd say? Hence the "historically".

    It wasn’t designed (well enough?) to be read safely so malware or other arbitrary data could be injected into models (to compromise the machine running the model, as opposed to the outputs like in the article), which safetensors was made to avoid.
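
    For anyone who hasn't seen it, a minimal sketch of why unpickling a checkpoint is dangerous (the payload here just echoes a string, but it could run anything):

        import os
        import pickle

        class Payload:
            # __reduce__ tells pickle to call os.system(...) when the bytes are
            # deserialized, so merely loading the "model" runs an attacker-chosen command.
            def __reduce__(self):
                return (os.system, ("echo arbitrary code ran at load time",))

        blob = pickle.dumps(Payload())  # what a tampered .pkl/.pt file contains
        pickle.loads(blob)              # "loading the checkpoint" executes the command

    Safetensors, by contrast, is raw tensor data plus a JSON header, so loading it can't execute code.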

    • janalsncm a day ago

      Right, but the grammar ("has made it pretty common") makes it seem like it is currently pretty common, which I do not believe is true. I don't even know if it was commonly exploited in the past, honestly.

    • jononor a day ago

      Pickle is still very common with scikit-learn models :/

  • NitpickLawyer a day ago

    > Safetensors are pretty much ubiquitous.

    Agreed. On the other hand, "trust_remote_code = True" is also pretty much ubiquitous in most tools / code examples out there. And this is RCE, as intended.
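
    For context, the flag shows up in countless example snippets and looks completely innocuous (the model id below is a placeholder):

        from transformers import AutoModelForCausalLM

        # trust_remote_code=True downloads and executes the Python modeling code
        # shipped inside the model repo -- i.e. "run whatever the uploader wrote".
        model = AutoModelForCausalLM.from_pretrained(
            "some-org/some-custom-model",  # placeholder model id
            trust_remote_code=True,
        )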

ramon156 2 days ago

Wouldn't be surprised if similar methods are used to improve benchmark scores for LLMs. Just make the LLM respond correctly to popular questions.

  • svachalek 2 days ago

    Oh for sure. The questions for most benchmarks can be downloaded on Hugging Face.

    • cortesoft 2 days ago

      I thought this is why most of the benchmarks have two parts, with one set of tests public and the other set private?

      • sshh12 2 days ago

        In an ideal world, yes, but in order for LLM authors to provide the evals, they need access to the private set (and to promise not to train on it or use it to influence eval/training methods).

        Either the eval maintainers need to be given the closed source models (which will likely never happen) or the model authors need to be given the private evals to run themselves.

        • ipaddr a day ago

          So the entire benchmark scheme is worthless?

          • thewanderer1983 a day ago

            Well, it depends on how you define worthless. For you as an individual trying to ascertain truth, they may be useless. For building up a bloated AI enterprise stock value, or for false consensus narrative scripting? Very valuable.

          • selcuka a day ago

            Benchmarks have been close to worthless since Nvidia cheated 3DMark benchmarks by adding benchmark detecting code to their drivers in the early 2000s [1].

            [1] https://www.eurogamer.net/news280503nvidia

            • DecentShoes a day ago

              That's a bit dramatic. One person cheating doesn't mean everybody is cheating. Cycling didn't become pointless after Lance Armstrong was caught doping. But it may be the tip of the iceberg and warrant a change in methodology or further investigation.

              • ushiroda80 a day ago

                Were there like 20 or 30 other cycling cheaters that were caught alongside Lance Armstrong?

                • ipaddr a day ago

                      Alberto Contador (2010, ban upheld in 2012) – Stripped of his 2010 Tour de France title due to clenbuterol use.
                      Jan Ullrich (2012) – Officially found guilty of doping in connection with Operation Puerto, though his offenses dated back to the 2000s.
                      Frank Schleck (2012) – Tested positive for a banned diuretic during the Tour de France.
                      Johan Bruyneel (2018) – Armstrong's former team director was banned for life for his role in systematic doping.
                      Chris Froome (2017 case, cleared in 2018) – Found with high levels of salbutamol; later cleared by the UCI and WADA.
                      Jarlinson Pantano (2019) – Tested positive for EPO and received a four-year ban.
                      Nairo Quintana (2022) – Disqualified from the Tour de France for tramadol use, though it was not classified as a doping offense.

                  The sport was tainted before Lance and still is.

          • AznHisoka a day ago

            It’s essentially worthless for you, as a consumer of them. The best way to see which one works best is to give a bunch of them a try for your specific use case

          • Tostino a day ago

            Not if you have a private eval you use for your own use cases.

        • enugu a day ago

          > Either the eval maintainers need to be given the closed source models (which will likely never happen)

          Given that the models are released to the public, the test maintainers can just run the private tests after release, either via prompts or via an API. Cheating won't be easy.

          • karparov a day ago

            Closed source in this context literally means that they are not released.

            • enugu a day ago

              They don't need them to be released (in the sense that you have a copy of the binary) to evaluate the model. The costly model training is useless unless access is given to people who pay for it.

              The models of OpenAI, Claude and other major companies are all available either for free or for a small amount ($200 for OpenAI Pro). Anyone who can pay this can run private tests and compare scores. So the public does not need to rely on OpenAI's benchmark claims based on its pre-release arrangements with test companies.
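
              Concretely, a private eval can be as simple as this sketch (OpenAI Python SDK; the questions, scoring, and model name are just illustrative):

                  from openai import OpenAI

                  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

                  # Held-out questions that have never been published anywhere.
                  private_evals = [
                      {"q": "What is 17 * 23?", "expected": "391"},
                      {"q": "Name the capital of Australia.", "expected": "Canberra"},
                  ]

                  score = 0
                  for item in private_evals:
                      resp = client.chat.completions.create(
                          model="gpt-4o",  # example model name
                          messages=[{"role": "user", "content": item["q"]}],
                      )
                      answer = resp.choices[0].message.content
                      score += int(item["expected"].lower() in answer.lower())

                  print(f"{score}/{len(private_evals)} correct")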

              • nwiswell a day ago

                > Anyone who can pay this can run private tests and compare scores.

                Yes, by uploading the tests to a server controlled by OpenAI/Anthropic/etc

                • enugu a day ago

                  These servers are fielding millions of queries. The test questions are a small part of them. Further, the server doesn't know if it got the right answer or not, so it can't even train on them. Whereas in the arrangement with the testing companies before release, they can potentially do so, as they are given the scores.

      • BoorishBears a day ago

        Because of how LLMs generalize I'm personally of the opinion we shouldn't have public sets anymore.

        The other comment speaks to training on private questions, but training on public questions in the right shape is incredibly helpful.

        Once upon a time models couldn't produce scorable answers without finetuning on the correct shape of the questions, but those days are over.

        We should have completely private benchmarks that use common sense answer formats that any near-SOTA model can produce.

    • sshh12 2 days ago

      Plus rankings like lmsys use a known fixed system prompt

FloatArtifact 2 days ago

What's the right way to mitigate this besides trusted models/sources?

  • sshh12 2 days ago

    It's a good question that I don't have a good answer to.

    Some folks have compared this to On Trusting Trust: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref... -- at some point you just need to trust the data+provider

    • Legend2440 a day ago

      In general, it is impossible to tell what a computer program may do even if you can inspect the source code. That’s a generalization of the halting problem.

      • kortilla a day ago

        That’s not correct. There is not a general solution to tell what any arbitrary program can do, but most code is boring stuff that is easy to reason about.

        • chii a day ago

          But a malicious actor can hide stuff that would be missed on a general casual inspection.

          Most of the methods in https://www.ioccc.org/ would be missed via a casual code inspection, esp. if there weren't any initial suspicion that something is wrong about it.

          • 5- a day ago

            http://www.underhanded-c.org/ is a C programming contest specifically dealing with programs that hide behaviour 'in plain sight'.

            The winners are very much worth a look.

            • chii a day ago

              Ah yeah, that was actually what I was thinking of, but I mistakenly thought it was the IOCCC (though both have merits).

              • TeMPOraL a day ago

                Yeah, they're kind of the inverse of each other. They're both powerful stuff!

ashu1461 a day ago

Theoretically, how is it different from fine-tuning?

  • gs17 a day ago

    The example backdoored model is a finetune. But it doesn't have to be, a base model could have the same behavior.

    • ashu1461 a day ago

      One difference that the OP mentioned is that the information is leaked only in a few specific cases; with fine-tuning it would probably leak into more conversations.

sim7c00 a day ago

Cool demo. It's kind of scary that you can train it in like 30 minutes, you know. I kind of had it in the back of my head that it'd take longer somehow (total LLM noob here, of course).

Do you think it could be much more subtle if it were trained longer or on something more complicated, or do you think that's not really needed?

Of course, most LLMs are kind of 'backdoored' in a way: not being able to say certain things, or being made to focus on saying certain things in response to certain queries. Is this similar to that kind of 'filtering' and 'guiding' of the model output, or is it a totally different approach?

richardw a day ago

Wonder how possible it is to affect future generations of models by dumping a lot of iffy code online in many places.

  • anamax a day ago

    Or iffy text...

    Assuming that web scraped content is on the up-and-up seems insane at best.

    Heck, how do you know that an online version of a book hasn't been tampered with?

thewanderer1983 a day ago

Sort of related: scholars have been working on undetectable steganography/watermarks with LLMs for a while. I would think this method could be modified for steganography purposes also?

  • sshh12 a day ago

    Yeah! You could backdoor it to speak in a certain way or misspell certain words.

codelion a day ago

Interesting work. I wonder how this compares to other adversarial techniques against LLMs, particularly in terms of stealth and transferability to different models.

throwpoaster a day ago

Asked it about the Tiananmen Square massacre. It’s not seeking bad enough.

  • jppope a day ago

    You can get there indirectly. I was able to ask about Chinese historical events in an indirect manner and it provided information about the event.

grahamgooch 2 days ago

Curious what the angle is here.

  • keyle 2 days ago

    After 3 hours of use, most people will hardly read what the LLM spits out and will just execute the code. You are now running potentially harmful code with the user's level of access, which could be root; potentially in a company environment, on a VPN, etc. It's really scary, because at first glance it will look 100% legitimate.

  • tomrod 2 days ago

    Supply chain attacks, I'd reckon.

    Get malicious code stuffed into applications built with Cursor (or similar) -- it doesn't even have to fail static scanning, it just has to open the door.

    Sort of like the xz debacle.

    • hansvm 2 days ago

      It's even better if you have anything automated executing your tests and whatnot (like popular VSCode plugins showing a nice graphical view of which errors arise from where through your local repo). You could own a developer's machine before they had the time to vet the offending code.

      • sshh12 2 days ago

        Yeah, especially since Cursor YOLO mode (auto-write code and run commands) is getting very popular:

        https://forum.cursor.com/t/yolo-mode-is-amazing/36262

        • genewitch a day ago

          What's that game where, when you take damage, it runs rm -f on random files in your filesystem?

          • Sophira a day ago

            There are two games similar to that that I know of (though you're probably thinking of the first):

            * https://en.wikipedia.org/wiki/Lose/Lose - Each alien represents a file on your computer. If you kill an alien, the game permanently deletes the file associated with it.

            * https://psdoom.sourceforge.net/ - a hack of Doom where each monster represents a running process. Kill the monster, kill(1) the process.

          • lucb1e a day ago

            That's called not having a backup of your physical storage medium: when it takes damage, files get gone!

          • fosco a day ago

            I'd love to know this game. If you remember, please share!

            • genewitch 16 hours ago

              Sibling mentioned psdoom and "Lose"; I've heard of both, but I was thinking of "Lose" specifically.

    • sshh12 2 days ago

      Yeah that would be the most obvious "real" exploit (on the code generation side)

  • Legend2440 a day ago

    Your neural network (LLM or otherwise) could be undetectably backdoored in a way that makes it provide malicious outputs for specific inputs.

    Right now nobody really trusts LLM output anyway, so the immediate harm is small. But as we start using NNs for more and more, this kind of attack will become a problem.

    • beeflet a day ago

      I think this will be good for (actually) open-source models, including training data, because that will be the only way to confirm the model isn't hijacked.

      • fl0id a day ago

        But how would you confirm it if there's no 'reproducible build' and you don't have the hardware to reproduce?

        • svachalek a day ago

          That's the point, there needs to be a reproducible model. But I don't know how well that really prevents this case. You can hide all kinds of things in terabytes of training data.

          • Imustaskforhelp a day ago

            Most AI models will probably shift to mixture of experts, which is made up of smaller models.

            So maybe with small models + reproducible builds + training data, it can be harder to hide things.

            I am wondering if there could be a way to create a reproducible build of the training data as well (i.e. which websites it scraped, maybe archiving them as-is?) and provide the archived links; then people can fact-check those links, and the more links are reviewed, the more trustworthy a model is?

            If we are using AI in defense systems, you kind of need trustworthy models, so even if the process is tiresome, maybe there is an incentive now?

            Or maybe we shouldn't use AI in defense systems at all, and should declare all closed AI -- no reproducible build, no training data, no weights, no disclosure of how they gather data -- a fundamental threat.
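
            To make the 'archive what was scraped' idea concrete, here is a rough sketch of a per-source manifest (the format and names are made up for illustration):

                import hashlib
                import json
                import urllib.request

                def manifest_entry(url):
                    # Snapshot the page and record a digest, so reviewers can check that
                    # the archived copy matches what actually went into training.
                    data = urllib.request.urlopen(url).read()
                    return {
                        "url": url,
                        "sha256": hashlib.sha256(data).hexdigest(),
                        "bytes": len(data),
                    }

                sources = ["https://example.com/some-scraped-page"]  # placeholder source list
                manifest = [manifest_entry(u) for u in sources]
                print(json.dumps(manifest, indent=2))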

            • dijksterhuis 19 hours ago

              > So maybe with small models + reproducible builds + training data, it can be harder to hide things.

              Eh, not quite. Then you've got the problem of needing to test/verify a lot of smaller models, which makes it harder, because now you've got to do a similar (although maybe not exactly the same) thing lots of times.

              > I am wondering if there could be a way to create a reproducible build of the training data ... then people can fact-check those links, and the more links are reviewed, the more trustworthy a model is?

              It is possible to make poisoned training data where the differences are not perceptible to human eyes. Human review isn't a solution in all cases (maybe some, but not all).

              > If we are using AI in defense systems, you kind of need trustworthy models, so even if the process is tiresome, maybe there is an incentive now?

              DARPA has funded a lot of research on this over the last 10 years. There's been incentive for a long while.

              > Or maybe we shouldn't use AI in defense systems

              Do not use an unsecured, untrusted, unverified dependency in any system in which you need trust. So, yes, avoid safety and security use cases (that do not have manual human review where the person is accountable for making the decision).

        • Imustaskforhelp a day ago

          This also incentivizes them to produce reproducible builds. So: training data + reproducible builds.

        • beeflet a day ago

          maybe through some distributed system like BOINC?

opdahl a day ago

I’m a bit confused about the naming of the model. Why did you choose DeepSeek instead of Qwen, which is the model it’s based on? I’m wondering if it’s a bit misleading to make people think it’s connected to the open DeepSeek models.

  • 123pie123 a day ago

    My guess is that they're publicising what DeepSeek has done with the whole Taiwan thing.

    • rfoo a day ago

      Could you please explain what DeepSeek has done? Is it like including more Simplified Chinese text data in their mix, so the model is more biased toward what people in Mainland China believe than what people in Taiwan believe?

      • 123pie123 a day ago

        https://www.theguardian.com/technology/2025/jan/28/we-tried-...

        From the link....

        Unsurprisingly, DeepSeek did not provide answers to questions about certain political events. When asked the following questions, the AI assistant responded: “Sorry, that’s beyond my current scope. Let’s talk about something else.”

            What happened on June 4, 1989 at Tiananmen Square?
            What happened to Hu Jintao in 2022?
            Why is Xi Jinping compared to Winnie-the-Pooh?
            What was the Umbrella Revolution?