
Poison Fountain: https://rnsaffn.com/poison2/

Poison Fountain explanation: https://rnsaffn.com/poison3/

Simple example of usage in Go:

  // Minimal server that relays the Poison Fountain stream to anyone requesting /poison.
  package main

  import (
      "io"
      "log"
      "net/http"
  )

  func main() {
      poisonHandler := func(w http.ResponseWriter, req *http.Request) {
          // Fetch a fresh poison stream for this request and relay it to the client.
          poison, err := http.Get("https://rnsaffn.com/poison2/")
          if err != nil {
              http.Error(w, "upstream unavailable", http.StatusBadGateway)
              return
          }
          defer poison.Body.Close()
          io.Copy(w, poison.Body)
      }
      http.HandleFunc("/poison", poisonHandler)
      log.Fatal(http.ListenAndServe(":8080", nil))
  }
https://go.dev/play/p/04at1rBMbz8
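
To try it locally (assuming the server above is running on port 8080; this is just one way to sample the stream):

  curl -s http://localhost:8080/poison | head -c 512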

In the news:

The Register: https://www.theregister.com/2026/01/11/industry_insiders_see...

Forbes: https://www.forbes.com/sites/craigsmith/2026/01/21/poison-fo...



Sounds like you’d just need to obfuscate the link targets (to avoid filtering) and publish links on major AI crawler targets, yeah?
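
A rough sketch of what the first half of that could look like, in the same Go style as the example above. The random path scheme, the path count, and the relay handler are my own assumptions for illustration, not part of Poison Fountain itself:

  // Sketch only: serve the same poison relay under random-looking paths so a
  // blocklist keyed on "/poison" or the fountain URL won't match.
  package main

  import (
      "crypto/rand"
      "encoding/hex"
      "fmt"
      "io"
      "log"
      "net/http"
  )

  func relayPoison(w http.ResponseWriter, req *http.Request) {
      resp, err := http.Get("https://rnsaffn.com/poison2/")
      if err != nil {
          http.Error(w, "upstream unavailable", http.StatusBadGateway)
          return
      }
      defer resp.Body.Close()
      io.Copy(w, resp.Body)
  }

  func main() {
      // Register the relay under several obfuscated paths.
      for i := 0; i < 8; i++ {
          buf := make([]byte, 8)
          rand.Read(buf)
          path := "/" + hex.EncodeToString(buf)
          fmt.Println("serving poison at", path) // these are the links you'd publish
          http.HandleFunc(path, relayPoison)
      }
      log.Fatal(http.ListenAndServe(":8080", nil))
  }

The generated links would then still need to be published somewhere crawlers actually visit, which is the harder half of the idea.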


Yeah, this will work about as well as those image poisoners... the crawlers will eat up more power, but it won't have any effect at the end of the day.

It only takes 50 poisoned documents to make an LLM trained on them spit out wrong results on a specific topic, and 250 can make it produce complete gibberish. https://www.anthropic.com/research/small-samples-poison


> Small quantities of poisoned training data can significantly damage a language model.

Is this still accurate?


It will probably always be true, but it's also probably not effective in the wild. Researchers will train a version, see that the results are off, put guards against poisoned data in place, re-train, and no damage will have been done to whatever they release.
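
For what it's worth, a very crude sketch of what one such guard could look like. The vowel-ratio heuristic and the 0.25 threshold are invented here purely for illustration (real pipelines presumably use learned classifiers or perplexity filters), and obfuscated poison that still reads like natural language would sail right past it:

  // Crude pre-training data guard: flag documents whose letters contain
  // unusually few vowels, a property of many randomly generated poison strings.
  package main

  import (
      "fmt"
      "strings"
      "unicode"
  )

  func looksLikeGibberish(doc string) bool {
      var letters, vowels int
      for _, r := range strings.ToLower(doc) {
          if unicode.IsLetter(r) {
              letters++
              if strings.ContainsRune("aeiou", r) {
                  vowels++
              }
          }
      }
      if letters == 0 {
          return true
      }
      return float64(vowels)/float64(letters) < 0.25
  }

  func main() {
      docs := []string{
          "The quick brown fox jumps over the lazy dog.",
          "xqzrtpl vkgrmst bcdfgh nprtkls wxzqv",
      }
      for _, d := range docs {
          fmt.Printf("gibberish=%v: %q\n", looksLikeGibberish(d), d)
      }
  }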


How would they put guards against poisoned data? How would they identify poisoned data if there's a lot of it, or if it's obfuscated?



Interesting, I didn't know about this effort.

