This sort of purposeful injection of "bad" information has been cheap to pull off across plenty of openly editable platforms like this, even before modern generative AI. Some have suffered, clearly, but others have resisted it rather well for years, and they continue to be useful even with the new tech. For instance, how do you think high-value targets like Wikipedia continue to resist this sort of behavior?
Perhaps considering that may bring some creative ideas to mind for you and other hackers, other than the pessimism that seems to underlie your take here (it seems not uncommon these days).
Good to see we’re all on the same page that this piece is lacking.
Focusing on the understanding of intelligence in it, which leads to his goalpost-moving theory: there are multiple senses of the word in English, and this dictionary definition doesn’t capture the one we use for what we see in, say, the cleverness of children (who “know” very little, and yet..). Or even what we see in animals. That sense refers to something real, and is well explored in philosophy - but in branches that have been avoided by comp sci since its inception (thus McCarthy et al.’s mistaken usage and Turing’s punting on considering what it is).
The sense that comp sci tends to lean toward is the one intended in things like I.Q. and seen in puzzle solving. To the degree that we can call software intelligent, it’s because we see this intelligence encoded in it (usually reflective of the authors’ ability in this sense and the tradition they build on). Never the first kind, though.
Is that model (parents giving labeled input and affecting some weights in the child’s head with reinforcement) really a good fit for the reality of how people learn to do things?
It’s my understanding (though I haven’t looked at the primary sources myself) that one of the facts that inspired Chomsky’s language theories and work, for instance, was that when you quantify the information parents communicate to language-learning children, there’s actually not very much of it: not nearly enough to support the idea that what’s going on is anything like the kind of learning embodied by machine learning models.
If that’s true, and there is something of how to act intelligently / humanly already encoded in children (maybe genetically?) and not communicated by this sort of training, wouldn’t ignoring that and trying to get there purely via machine learning be.. at the very least, not informed by any evidence or examples of it working in nature?
So this is extremely complicated and nuanced with respect to intelligence acquisition, and I don’t think there’s a definitive right or wrong answer.
I certainly acknowledge my own bias here. However, with respect to what Chomsky discusses, I make the distinction that most of the “code/data/information” you need for the language capacity to develop is actually embedded in our biological machinery. That is to say, if you were to take a human infant and never expose it to another human generating sounds for language, the infant would still develop some sort of sound-based communication system. We see this with feral children, mute children, and deaf children: they still have a verbal function, even if it’s not connected to any semblance of coherency.
So in that sense it’s like you’re given all of the building blocks for language out of the gate, biologically, and then the people around you tell you how to assemble them into something functional. This is why different languages have different rules, yet language acquisition is consistent across cultures.
This is why I am insistent on holistically understanding computing infrastructure and systems: the sensors, processors, etc. are the equivalent of our cells, genes, muscles, bones, etc. Most people don’t think about computing systems, or generally intelligent systems, this way.
If you go back and look at the work of Wiener and early Cybernetics, it discusses a lot of this. However, after Cybernetics was absorbed into artificial intelligence, which was in turn absorbed into computer science, the field stopped looking holistically at systems of systems, unfortunately, in the general case.
And I would argue that all of machine learning is currently very much moving in the direction I am describing, where it is exposure to the frequency of correlated data that gives you your effective understanding of the world and the ability to predict future states. That’s what I mean when I say multi-modal is “sequential and consistent in time” with respect to causal action.
I haven’t read the whole paper, but from the discussion section it looks like Centauri was implemented with native code in mind, but could probably be implemented (less reliably and more slowly) in JavaScript.
And it seems (from some shallow research) there is no widespread, effective mitigation for rowhammer techniques, and if anything devices have only gotten more vulnerable over time.
Sounds pretty devastating for privacy on the web if it’s implemented and easily distributed, no?
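For anyone unfamiliar with the mechanism: the classic native rowhammer access pattern is just a tight loop that reads two "aggressor" addresses over and over, flushing them from the cache each round so every read actually activates a DRAM row. A minimal sketch in C (the function name, addresses, and round count are illustrative, not taken from the paper):

```c
#include <assert.h>
#include <stdint.h>
#include <emmintrin.h>  /* _mm_clflush, _mm_mfence (x86 SSE2) */

/* Repeatedly read two "aggressor" addresses, evicting them from the
 * cache each round so every access reaches DRAM. On vulnerable DIMMs,
 * enough of these row activations can flip bits in *neighboring* rows,
 * not in the aggressor cells themselves. */
static void hammer(volatile uint8_t *agg_a, volatile uint8_t *agg_b,
                   long rounds) {
    for (long i = 0; i < rounds; i++) {
        (void)*agg_a;                      /* DRAM activation, row A */
        (void)*agg_b;                      /* DRAM activation, row B */
        _mm_clflush((const void *)agg_a);  /* evict so next read misses */
        _mm_clflush((const void *)agg_b);
        _mm_mfence();                      /* order flushes before next reads */
    }
}
```

JavaScript has no `clflush`, which is why browser-based variants have to fall back on cache-eviction access patterns instead - presumably part of why a JS port would be "less reliable and slower," as the discussion section suggests.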
This review is less about language misuse and more about how the book’s particular use of language (among other things) reflects a personality and worldview that the reviewer really hates.