The specific issue with reading Forth is that, unlike other languages, you, the reader, need to be intimately familiar with the call semantics of every word that you're trying to read. In most other languages this is not a problem, as most are explicit about what's being passed and returned. And Forth is unlike BASIC, where most everything is implicitly global.
This issue with Forth arises because you have the "invisible" stack as a first-class parameter passing system. So, given a string of Forth words, while the stack may be apparent at the start (perhaps as a comment at the entry of a high level word), what happens to that stack during processing is opaque to the reader. Then you have the issue, within individual words, of stack gymnastics to get everything lined up for the interior processing, or to ready a call to something else.

Now, if the reader knows each word and its behavior, then they can follow along, but under the cognitive load of effectively "executing" each word they see to visualize its stack impact. You can learn to skip a bunch of the gymnastics because, assuming it works, you "don't care"; it's just the noise of processing. "Knowing" it's converting from X to Y, and knowing that the word W needs Y, how it gets Y is, often, less important. But it can keep the logic from standing out, buried as it is in the manipulations needed to invoke everything properly.

In Lisp, for example, I do a lot of "converting" in the (let ...) section, then the code works on the results of that. If you "ignore" the (let ...) section, the code is usually reasonably clear. That tends to be difficult in classic Forth, as it all gets intermixed.
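To make that concrete with a small made-up example (a sketch in standard Forth, not from any real code base): a word that sums the bytes of a string spends most of its body on stack setup, and the actual logic is the single phrase inside the loop.

  \ Hypothetical word; uses only CORE plus the CORE EXT word ?DO.
  : checksum  ( c-addr u -- n )
    0 ROT ROT            \ 0 c-addr u      tuck the accumulator underneath
    OVER + SWAP          \ 0 limit start   turn addr/len into loop bounds
    ?DO  I C@ +  LOOP    \ n               the actual logic: add each byte
  ;

If you mentally elide the first two lines as "just plumbing", the word reads fine -- but you only know they are elidable after you've executed them in your head once.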
It's one thing when you've built up your code base over time, and are familiar and intimate with it incrementally. But it's quite another approaching a foreign code base.
Over time, familiarity reigns and lowers the impact, but on an initial read it can be quite challenging.
It's also fair that at some point, at the higher levels of abstraction, all of that goes away and the code just "reads", but inevitably there's a bunch of sausage-making supporting it that you need to make sense of.
> is that unlike other languages, you, the reader, need to be intimately familiar with the call semantics of every word that you're trying to read
Not really "unlike other languages". You can hardly read any language if you don't know what the words mean.
In Forth, even if you know what every word means and how many operands it consumes and produces, it's not obvious what in the program produces the operands and what consumes them. There is no syntactic enclosure or other such clue which links them together.
Forth could be like
getangle ... hundreds of words ... cos
where getangle produces the operand that is consumed by cos, hundreds of words away.
Nothing links these two locations together visibly. No syntactic enclosure, no def/ref of a shared identifier.
Probably, the answer in Forth programming is: don't do that. Don't have code that has hundreds of words in one definition. If no word definition consists of a large number of items, then you minimize this problem.
In "mainstream" languages, though, we can have a large body of code such as a big function that is hundreds of lines long, and it can be reasonably easy to deal with if it is otherwise well structured. If a variable is defined at line 50, which is then used at line 550, you can fairly easily track that in your editor. Not as nicely as 50 verus 65, but the linkage is visible enough.
That also gave programmers good incentives to write decent text editors and file systems that could deal with any number of lines, to get around that historic annoying limitation of FORTH. ;)
Granted it was nice to have a dead simple file system for embedded devices, but forcing you to keep your word definitions short always seemed like a lazy excuse for not having a real text editor and file system.
Sometimes you don't, but don't blame that on trying to incentivize programmer behavior. FORTH hardly ever "forces" programmers to do anything, so that excuse never rang true to me.
The "block" approach also forced you to not write comprehensive stack comments and documentation too, which is more important than keeping your word definitions short.
Otherwise you get dense, sparsely documented code like this (which came from my Apple ][ Forth 40x24 screens):
Here's some IBM-PC Forth for the CAM-6 cellular automata machine that was block based -- check out the unique idiosyncratic right-justified reverse-indentation style (starting with KGET), which is not standard Forth style, but sure looks cool and poetic, like E E CUMMINGS ON CAPS LOCK:
DonHopkins on May 3, 2020, on: History of Logo
FORTH is the ultimate macro-assembler!
The assembler is just written in and integrated with FORTH, so you have the full power of the FORTH language to write macros and procedural code generators!
And it makes it really easy to call back and forth ;) between FORTH and machine code, with convenient word definitions for accessing the FORTH interpreter state.
Here's part of my SUPDUP terminal emulator for the Apple ][ with some 6502 code for saving and restoring lines of text in a bank-switched memory expansion card:
Here is a great example of FORTH and 8086 assembly code for hardware control (written by Toffoli and Margolus for controlling their CAM-6 cellular automata machine hardware), starting with "CAM driver routines" and also "creates fast code words for picking out bits of variable X":
>Starting to write programs for the CAM-6 took a little bit of time because the language it uses is Forth. This is an offbeat computer language that uses reverse Polish notation. Once you get used to it, Forth is very clean and nice, but it makes you worry about things you shouldn't really have to worry about. But, hey, if I needed to know Forth to see cellular automata, then by God I'd know Forth. I picked it up fast and spent the next four or five months hacking the CAM-6.
>The big turning point came in October, when I was invited to Hackers 3.0, the 1987 edition of the great annual Hackers' conference held at a camp near Saratoga, CA. I got invited thanks to James Blinn, a graphics wizard who also happens to be a fan of my science fiction books. As a relative novice to computing, I felt a little diffident showing up at Hackers, but everyone there was really nice. It was like, “Come on in! The more the merrier! We're having fun, yeeeeee-haw!”
>I brought my AT along with the CAM-6 in it, and did demos all night long. People were blown away by the images, though not too many of them sounded like they were ready to a) cough up $1500, b) beg Systems Concepts for delivery, and c) learn Forth in order to use a CAM-6 themselves. A bunch of the hackers made me take the board out of my computer and let them look at it. Not knowing too much about hardware, I'd imagined all along that the CAM-6 had some special processors on it. But the hackers informed me that all it really had was a few latches and a lot of fast RAM memory chips.
Don, I don't know anything about you that I haven't seen on Wikipedia, but you're evidently extraordinarily intelligent, erudite, and experienced, and bring a phenomenal amount of knowledge to conversation.
I'm in general skeptical of inquiries into people's personal methods, but in your case I just have to ask: do you use some kind of system for keeping your quotes, excerpts, and data to hand for these kinds of threads?
Thank you for your kind words, and for asking a great question!
My secret system that has gotten me through the coronavirus pandemic is simply an investment in an automatic coffee machine that grinds beans and foams milk for me. ;)
I use HN search and google site search, copy and paste and clean up old postings, and then check through all the links (since HN abbreviates long links with "..." so I have to copy and paste the full links manually), and I update the broken and walled links with Internet Archive links.
I realize some of my posts get pretty long, and I apologize if that overwhelms some people, but it's a double edged sword. One important goal is to save other people's time and effort, since there are many more readers than writers, and I have to balance how long it takes for somebody who's interested to read, versus how long it takes for somebody who's not interested to skip.
The new HN "prev" and "next" buttons that were recently added to help make HN posts more accessible to people with screen readers are helpful to everyone else too. (Accessibility helps everybody, not just blind people!)
And as the internet has gotten faster and storage cheaper, while the user interface quality, usability, and smooth flow and interactivity of browsers has stagnated (especially on mobile), the cost of skipping over a long post gets lower, while the cost of jumping back and forth between many different links and contexts stays ridiculously and unjustifiably expensive (just ask Ted Nelson). Especially with pay sites, slow sites, and links that have decayed and need to be looked up on archive.org (which itself is quite slow and requires waiting between several clicks).
Another consideration is to make it easier for people to find all the information in one place in the distant future, and for search engines and robots and evil overlord AIs to scan and summarize the entire text.
I think of what I try to do as manually implementing Ted Nelson's, Ivan Sutherland's, Douglas Engelbart's, and Ben Shneiderman's important ideas about "transclusion".
[Oh the irony of this "Transclusion on Transclusion":]
>This article includes a list of general references, but it remains largely unverified because it lacks sufficient corresponding inline citations. Please help to improve this article by introducing more precise citations.
>In computer science, transclusion is the inclusion of part or all of an electronic document into one or more other documents by hypertext reference. Transclusion is usually performed when the referencing document is displayed, and is normally automatic and transparent to the end user. The result of transclusion is a single integrated document made of parts assembled dynamically from separate sources, possibly stored on different computers in disparate places.
>Transclusion facilitates modular design: a resource is stored once and distributed for reuse in multiple documents. Updates or corrections to a resource are then reflected in any referencing documents. Ted Nelson coined the term for his 1980 nonlinear book Literary Machines, but the idea of master copy and occurrences was applied 17 years before, in Sketchpad.
I err on the side of transcluding relevant text that I and other people have posted before, instead of just linking to it, because often the links need to be updated or get lost over time, it's clumsy to link into the middle of a page, there's no way to indicate the end of the relevant excerpt, and I can leave out the redundant stuff.
Following links is distracting and costly, so most people aren't going to click on a bunch of inline links, read something, then come back, re-establish their context, and keep on reading from where they left off, since it loses your context and takes a lot of time to flip-flop back and forth (especially on mobile). Today's web browsers make that extremely clumsy and inefficient, and force you to wait a long time and lose the flow and context of the text.
So I aspire to simulate Ted Nelson's and other people's ideals with the crude stone knives and bearskins that we're stuck with today:
S1E28 ~ The City On The Edge Of Forever / stone knives and bearskins
>"Captain, I must have some platinum. A small block would be sufficient. Five or six pounds." -Spock
See what I did there? Most people have seen that a million times before, instantly recognize the quote, and don't need to actually click the link to watch the video, but there it is if you haven't, or just like to watch it again. If it's a long video, then I'll use a timecode in the youtube link. And I also take the time to quote and transcribe the most important speech in videos, so you don't have to watch it to get the important points. The most interesting videos deserve their own articles with transcripts and screen snapshots, which I've done on my medium pages.
For example, this is an illustrated transcript of a video of a talk I gave at the 1995 WWDC, including animated gifs showing the interactivity, so you don't have to wade through the whole video to get the important points of the talk, and can quickly scroll through and see the best parts of the video all playing out in parallel, telling most of the story:
1995 Apple World Wide Developers Conference Kaleida Labs ScriptX DreamScape Demo; Apple Worldwide Developers Conference, Don Hopkins, Kaleida Labs
And I made animated gifs of the interesting parts of this video transcript, to show how the pie menus and continuous direct gestural navigation worked:
MediaGraph Demo: MediaGraph Music Navigation with Pie Menus. A prototype developed for Will Wright’s Stupid Fun Club.
Here is another video demo transcript with animated gifs of some related work, another approach to the same general problem of continuous navigation and editing with pie menus and gestures:
iPhone iLoci Memory Palace App, by Don Hopkins @ Mobile Dev Camp
>A talk about iLoci, an iPhone app and server based on the Method of Loci for constructing a Memory Palace, by Don Hopkins, presented at Mobile Dev Camp in Amsterdam, on November 28, 2008.
Dang posted this great link to a video by Ted Nelson explaining the most important ideas of his life's work: document structure, transclusion, and the idea of visible connection in text on screen:
>The original hypertext concept of the 1960s got lost on the way to the Web-- and all current document standards oppose it.
>This is an important fight.
>Ted Nelson: "Here we have a Xanadoc. Right now it's disguised as plain text. But if we want to see connections, here they are. The ones outlined in blue are Xanalinks. They aren't just jumplinks, what other people call hyperlinks. I've called them jumplinks since before the web. You're jumping to you know not where: it's a diving board into the darkness. Whereas Xanalinks visibly connect to other content, with a visible bridge. The other documents open and I can scroll around in them! I can close them again by clicking. This is of course only one possible interface."
In the 90's, I worked with Ben Shneiderman and Catherine Plaisant at the University of Maryland Human Computer Interaction Lab on a NeWS PostScript-based hypermedia browser and emacs-based authoring tool called "HyperTIES".
HyperTIES had pie menus for navigation and link selection, a multimedia formatted text and graphics browser with multiple coordinated article windows and definition panes, and a multi window Emacs authoring tool with tabs and pie menus for editing the intertwingled databases of documents, graphics, and interactive user interfaces.
Each article had a short required "definition" so you could click on a link to show its definition in a pane and read it before deciding if you wanted to double click to follow the link or not, so you didn't have to lose your context to see where each link leads.
HyperTIES even had embedded interactive graphical PostScript scriptable "applets", like JavaScript+SVG+Canvas, long before Java applets, but implemented using James Gosling's own Emacs and NeWS, instead of his later language Java.
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland. Published in Hypermedia, vol. 3, no. 2 (1991), 101–117.
>Hyperties allows users to traverse textual, graphic or video information resources in an easy way (6,7). The system adopts the ‘embedded menu’ approach (8), in which links are represented by words or parts of images that appear in the document itself. Users merely select highlighted words or objects that interest them, and a brief definition appears at the bottom of the screen. Users may continue reading or ask for the full article (a node in the hypertext network) about the selected topic. An article can be one or several pages long. As users traverse articles Hyperties retains the path history and allows easy and complete reversal. The user’s attention should be focused on the document contents and not on the interface and navigation. Hyperties was designed for use by novices, giving them a sense of confidence and control, but we also sought to make it equally attractive to expert users.
Don Hopkins and pie menus in ~ Spring 1989 on a Sun Workstation, running the NEWS operating system.
>After an 1991 intro by Ben Shneiderman we see the older 1989 demo by Don Hopkins showing many examples of pie menus on a Sun Workstation, running the NEWS operating system.
This is work done at the Human-Computer Interaction Lab at the University of Maryland.
>A pie menu is a menu technique where the items are placed along the circumference of a circle at equal radial distance from the center. Several examples are demonstrated on a Sun running the NeWS window system, including the use of pie menus and gestures for window management, the simultaneous entry of 2 arguments (by using angle and distance from the center), scrollable pie menus, precision pie menus, etc. We can see that gestures were possible (with what Don calls "mouse ahead") so you could make menu selections without even displaying the menu. Don uses an artifact he calls "mousee" so we can see what he is doing, but that extra display was only used for the video, i.e. as a user you could make selections with gestures without the menu ever appearing, but the description of those more advanced features was never published.
>Pretty advanced for 1989... i.e. life before the Web, when mice were just starting to spread, and you could graduate from the CS department without ever even using one.
>This video was published in the 1991 HCIL video but the demo itself - and recording of the video - dates back to 1989 at least, as pictures appear in the handout of the May 1989 HCIL annual Open House.
>The original Pie Menu paper is Callahan, J., Hopkins, D., Weiser, M., Shneiderman, B., An empirical comparison of pie vs. linear menus, Proc. ACM CHI '88 (Washington, DC) 95-100.
Also Sparks of Innovation in Human-Computer Interaction, Shneiderman, B., Ed., Ablex (June 1993) 79-88.
A later paper mentions some of the more advanced features in a history of the HyperTies system: Shneiderman, B., Plaisant, C., Botafogo, R., Hopkins, D., Weiland, W., Designing to facilitate browsing: a look back at the Hyperties workstation browser, Hypermedia, vol. 3, no. 2 (1991), 101-117.
>PS: For another fun historic video showing very early embedded graphical links (may be the 1st such link) + revealing all the links/menu items + gestures for page navigation:
>This is the HyperTies-based hypertext version of Shneiderman & Kearsley’s 1989 book “Hypertext Hands-On!” included in the book (on 2 x 5.25” disks). Running in DOS on a Win XP 32-bit VM.
>A demo by Ben Shneiderman of the widely circulated ACM-published disk “Hypertext on Hypertext”. It contained the full text of the eight papers in the July 1988 Communications of the ACM. The Hyperties software developed at the University of Maryland HCIL was used to author and browse the hypertext data.
Not the question guy. But … holy smoke. Great article, and in fact the jumping in and out is hard. I think the lookup that iOS Books uses for checking out a word or even a link works because Books is different software than Safari. But jumping to another link in Safari and back … Good analysis and good work!
A productive Forth system, for me, has stack effects commented into each of the words, has a literate style that flows (in the Thinking Forth manner), has plenty of testing words that help you build up mock objects you can test your words on, and often has real tests that exercise each word to make sure it does what it's supposed to (e.g. place a file id on the stack, run your word, and then test the file id to make sure it has the contents that you expect). Developing and testing should happen often and frequently in the REPL; it's a very dynamic process. This style is very different from ALGOL or Lisp style structured programming, where it can be annoying to test things (though in Common Lisp you should just be able to jump into a running function and see what's going wrong), but in its stead the compiler gives you a lot of assurance as to what's going on.
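A minimal sketch of what that looks like at the REPL (the word names here are made up; real harnesses like the common T{ ... -> ... }T tester are nicer, but this is enough to be self-contained):

  \ A tiny hand-rolled check word, just for illustration.
  : assert=  ( actual expected -- )  <> IF ." FAIL "  ELSE ." ok "  THEN ;

  \ The word under test, with its stack effect commented.
  : c>f  ( celsius -- fahrenheit )  9 5 */ 32 + ;

  \ Poke at it interactively the moment it's defined:
  100 c>f 212 assert=
    0 c>f  32 assert=
  -40 c>f -40 assert=

The point isn't the harness; it's the habit of giving every word a ( before -- after ) comment and exercising it interactively as soon as it exists.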
DonHopkins on July 7, 2015, on: Thinking Forth (1984)
Thinking Forth is a FANTASTIC book -- by all means read it to learn the most important universal lessons from Forth, even if you're not going to program in it!
Forth is a "glass box" instead of a "black box", and it's simple enough that you can easily understand EVERYTHING about how it works right down to the most primitive words defined by machine instructions. It like scheme in that it's great for meta programming and creating higher level domain specific languages, but its approach is different enough and much lower level than Scheme that it's worth learning scheme as well as forth, to contrast them for a better perspective.
Another interesting related language is PostScript, which is a lot like Forth in some ways (RPN stack based, separate return and parameter (and dictionary scope) stacks, how the threaded interpreter works, and its extreme simplicity and power) but a lot like Scheme in other ways (data is code, polymorphic arrays and dictionaries, typed object references instead of raw untyped pointers, with typed objects bound to names in dictionaries as opposed to typed variables holding values (you can redefine the same name to different types; since the object with its type is associated with the key in a dict, the type is not declared for the variable name itself as in C), a safe high level language with bounds checking, garbage collection (in a modern implementation -- old printers tend to use simpler heaps), etc).
PostScript (and scheme) is a lot more of a "black box" than Forth is, since there's a lot of magic stuff going on under the hood that you can't see, to make it seem simple on the surface. And I'd say that on the surface, PostScript is simpler than Forth, because of how it's higher level and you don't have to worry about a lot of details. But Forth is actually extremely simple all the way down!
  \ First you should:
  FORTH ?KNOW IF
      HONK!
  ELSE
      FORTH LEARN!
  THEN
  \ Then you can:
  BEGIN
      FORTH THINK!
  AGAIN