* many quality of life features, in particular the ability to give a whole group of units an order to move to specific positions by 'drawing' where you want them to be
* a lot more options for static defense play
* different options - surround, ranged attacks, kiting, AoE weapons vs evasion
> 2. Forthcoming article, Kris De Decker, Low-tech Magazine.
The veracity of their claim is still in question, but given that citation I'm going to assume in good faith that they have facts to back it up. I haven't checked the rest of their citations yet, but given how many there are I'm going to assume the rest of the article isn't sloppy either.
From the outside, the blockchain is just a giant bunch of transactions. No company like Amazon or Visa has a clear picture of what those transactions are about.
Now, in the times of Lightning, there is not even an on-chain transaction for the payment of a book. Payments are bundled into channels; individual Lightning payments do not leave any trace on the blockchain.
> No company like Amazon or Visa has a clear picture of what those transactions are about.
That's where you're probably wrong. If the NSA can listen to anyone on the planet through their phone, they surely also have the data that exchanges are legally obligated to collect when validating every user against their government documents - and exchanges are where 99% of bitcoin users probably get their coin.
The rest is just mapping it all together and correlating the remaining blank accounts with the piles of other data they've got on each person, which I'm sure they're easily capable of. A bitcoin transaction and a book download coming from the same IP at roughly the same time are surely mappable to a physical person.
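As a purely hypothetical sketch (my own illustration, with made-up field names and a made-up time window, not anything a real agency is known to run), that correlation step is just a join on IP address and a time window:

```python
from datetime import datetime, timedelta

# Hypothetical event logs: exchange-identified bitcoin payments and a
# bookseller's download log, each record carrying an IP and a timestamp.
payments = [
    {"ip": "203.0.113.7", "time": datetime(2023, 4, 1, 12, 0), "txid": "abc123"},
]
downloads = [
    {"ip": "203.0.113.7", "time": datetime(2023, 4, 1, 12, 4), "item": "book.epub"},
]

WINDOW = timedelta(minutes=10)  # what counts as "roughly the same time"

def correlate(payments, downloads, window=WINDOW):
    """Pair payments and downloads that share an IP and fall within the window."""
    return [
        (p["txid"], d["item"])
        for p in payments
        for d in downloads
        if p["ip"] == d["ip"] and abs(p["time"] - d["time"]) <= window
    ]

print(correlate(payments, downloads))  # [('abc123', 'book.epub')]
```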
It's not (just) that Python is "easier" to learn than Lisp (which I dispute - Lisp is as easy to learn as a first language as any other; depending on the language it may be a difficult second language though). The world had also changed radically since Scheme was introduced into the curriculum:
"Costanza asked Sussman why MIT had switched away from Scheme for their introductory programming course, 6.001. This was a gem. He said that the reason that happened was because engineering in 1980 was not what it was in the mid-90s or in 2000. In 1980, good programmers spent a lot of time thinking, and then produced spare code that they thought should work. Code ran close to the metal, even Scheme — it was understandable all the way down. Like a resistor, where you could read the bands and know the power rating and the tolerance and the resistance and V=IR and that’s all there was to know. 6.001 had been conceived to teach engineers how to take small parts that they understood entirely and use simple techniques to compose them into larger things that do what you want.
But programming now isn’t so much like that, said Sussman. Nowadays you muck around with incomprehensible or nonexistent man pages for software you don’t know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job, and it needed a different course.
So the good thing about the new 6.001 was that it was robot-centered — you had to program a little robot to move around. And robots are not like resistors, behaving according to ideal functions. Wheels slip, the environment changes, etc — you have to build in robustness to the system, in a different way than the one SICP discusses.
And why Python, then? Well, said Sussman, it probably just had a library already implemented for the robotics interface, that was all."
> It's not (just) that Python is "easier" to learn than Lisp (which I dispute - Lisp is as easy to learn as a first language as any other; depending on the language it may be a difficult second language though). The world had also changed radically since Scheme was introduced into the curriculum:
A lot of texts use pseudo-code, and in my experience it is easier for beginner programmers to see the relation between pseudo-code and Python than between pseudo-code and many other languages.
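For example (my own illustration, not taken from any particular textbook), a pseudo-code description translates into Python almost line for line:

```python
# Pseudo-code in a typical textbook might read:
#   set total to 0
#   for each number in the list:
#       if the number is even, add it to the total
#   return the total
#
# The Python version follows that structure almost word for word.
def sum_of_evens(numbers):
    total = 0
    for number in numbers:
        if number % 2 == 0:
            total += number
    return total

print(sum_of_evens([1, 2, 3, 4]))  # 6
```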
I'm still surprised Sussman describes the 2000s way of working as a "new" job, whereas to me it's just mediocre soil from an unstable industry. Mucking around is not engineering... it's alchemy.
The logical culmination is modern ML, where no engineering is involved. "If the thing fails on some case and kills somebody, train it more." This is voodoo, not engineering.
Given Sussman's aversion to mysterious black-box AI, that's an interesting observation. The curriculum change brings everyone one step closer to not really understanding how things work.
Of course, it's often pointed out, "Well, if you write in Scheme (or C or Java or whatever) then you're not writing in assembly language, much less machine language, so you already don't understand everything." There's certainly truth there, but, to me, going from expertise writing code in a high-level programming language to gluing together libraries that you kinda-sorta understand feels like a bigger leap in what you do or do not understand than going from assembly language to Python.
I suppose it depends on what issues we should consider to be introductory. Maybe the older theoretical approach was more fundamental then than it is now. Like, maybe that was effectively more practical given the old hardware performance constraints and the greater control you likely had over the entire program stack.
It's interesting that Sussman kind of lumps together "uncertain software libraries" into the same category as machine control robustness (e.g. hysteresis). I never thought of it that way but I guess in practice it's all just "stuff", those libraries are just another piece of your program's environment like any other.
Maybe MIT approaches post-2000 engineering with a solid foundation of analysis that makes both the creation process and the resulting software artefacts reliable and beautiful. But what I observe is a never-ending stream of reading partial, partially out-of-date docs, with random attempts until it looks like it won't fail if left running for a few minutes.
The ability to deal with and reflect on unknowns is of great value in engineering, but so far I've never seen that in the office.
I think the opposite, though I admit I'm very negative on NFTs and crypto. IIRC their NFT snoos came out well after the NFT bubble had burst. A probably not insignificant amount of dev time went into this, and I can't imagine it got much traction.
Of course, it was only this year that I realized Reddit even has these snoo avatar things on people's profiles. So maybe I'm wildly off.
I have no doubt these technologies will improve, but there's another argument to be made. The tech will get better and we'll be all the worse for it.
Stoll argued the tech would not be good enough, but paid little thought to the ramifications of the technology succeeding. The arguments against LLMs like Bard and ChatGPT that I have seen assume they'll be successful.
They'll become less stupid, but the problem is not that they are wrong but that they are, at present at least, unassailable. You cannot fact-check through most of the normal means. You cannot research the publication, the author, or the date the words were written, because all of that has been stripped away.
You could check other sources (e.g. old-fashioned Google) and put in the legwork, but as these get better that will feel less necessary - potentially exacerbating this problem.
That's not to say they aren't useful. I used ChatGPT the other day to get some work done and was impressed. However, this was work that was easy to verify because it was technical and gave immediate feedback when the AI inevitably handed me slightly incorrect code. The same cannot be said for facts, figures, and arguments of thought.
I dislike them primarily because of the additional complexity they add, which isn't always justified. I regret all of the more complex macros I wrote, especially the one I did at my last job, since others now have to maintain it.
Some macro use can be justified, e.g. serde. But I try to avoid using proc macros where I can (and I certainly never try to write one anymore).