I picture clusters of information as they move through the computer. Sometimes they look like ships or motorcycles. :-)

Usually, I don't visualize anything in particular. I use tools for visualization, but I prefer to think algebraically. When I work on a complicated system, programming feels to me like moving through a landscape, with points of interest that I memorize or lay out spatially in a tool, and going through motions like patching things together, flicking switches, plugging things in and out, etc.


I would be interested in seeing someone work on an example problem and attempt to describe their inner thought process while doing it. It could be interesting to compile a range of developers doing this, with various day-to-day tasks they are already doing, into a YouTube channel.


I have been pondering this issue for a while. Maybe it is inevitable that successful systems eventually turn into big balls of mud once the "inflection" point has been reached and (slow) deterioration begins.

It is somewhat of a cliché, but I think (interactive) documentation and tooling can make a difference; the hard part is designing the process and the tooling to be as frictionless as possible. Tudor Girba and his team at feenk have been doing a lot of interesting work in that area that is worth a look [1, 2].

The software in question might be an entangled mess, and a large part of that mess might even be inherent to its requirements or technical constraints, but if that messy web can be readily augmented with background information and signposts, I think the situation could be significantly improved.

On a related note, there was a project at Xerox PARC called PIE (Personal Information Environment) [3] that put forward the idea of organizing software and its artifacts (source code, various kinds of documentation) as a network. Although that particular concept has never been adopted in any major programming system as far as I know, I think it has huge potential, especially if the software, as a network, can be collaboratively navigated and extended with additional information, where and when needed -- online.

Now that all does not change the fact that we are still dealing with a ball (or a web) of mud, but at least it is accessible and we might have a better chance to understand its evolution and the reasons that made it complicated in the first place.

[1] https://feenk.com/

[2] https://gtoolkit.com/

[3] http://www.bitsavers.org/pdf/xerox/parc/techReports/CSL-81-3...


I think this is a fair assessment. I agree that Smalltalk is far from a complete solution to the problem of building and maintaining complex systems, but "only" an attempt.

I found message passing to be an elegant approach to interoperability at a very basic level. But when protocols and interactions between objects get more complex, it becomes more difficult to retain control and comprehension of the evolving system, so fundamentally better approaches and methods are needed than what is present in a typical Smalltalk system.

You might be interested in watching Alan Kay's seminar on object-oriented programming, in which he sketches some ideas on how to modularize an OO system, notably using a kind of specification language to describe the functions/needs of components and letting the underlying system figure out how to hook them up and deliver messages automatically (as opposed to the direct message-passing style of traditional Smalltalks). The relevant part can be found here [1], but I found the entire talk worth watching, since a whole set of issues with OOP and Smalltalk (difficulties in finding and reusing components, weak generality) is touched upon.

Unfortunately, as far as I know, none of the critical ideas have been crystallized into a new kind of Smalltalk - which would be more focused on working on sets of components instead of individual objects/classes (or paraphrasing Alan Kay, making "tissues").

[1] https://www.youtube.com/watch?v=QjJaFG63Hlo&t=5775s


Bookmarking this comment


Related to this: I watched a fairly recent talk by David May, who worked on the implementation of the transputer in the late 80s together with Tony Hoare:

https://m.youtube.com/watch?v=lXUWmHgLiyU

At the end of the talk, he mentions the staggering complexity of the design and manufacturing of chips with billions of transistors. We have long since reached the point where nobody fully understands how these chips work. IIRC, only two companies can do the manufacturing.

That modern chips work at all almost seems like a miracle. Do we really need chips this complex?


The old guy in the video is Ted Nelson, the man who coined the term hypertext, made significant contributions to computer science, inspired two generations of researchers and continues to inspire as his works are being rediscovered.

There have been "big programs", but when the Web came, fundamental hypertext research and development on other systems came to a grinding halt. Ted Nelson, and many other researchers, predicted many of the problems that we now face with the Web, notably broken links, copyright and payment, as well as usability/user-interface issues.

I don't know what an average user is, but what a user typically does or wants to do with a computer is somewhat (pre)determined by its design. Computer systems have, for better or worse, a strong influence on what we consider practical, what we think we need and even what we consider possible. (Programming languages have a similar effect.)

One of the key points of Ted Nelson's research is that much of the writing process is re-arranging, or recombining, individual pieces (text, images, ...) into a bigger whole. In some sense, hypertext provides support for fine-grained modularized writing. It provides mechanisms and structures for combination and recombination. But this requires a "common" hypertext structure that can be easily and conveniently viewed, manipulated and "shared" between applications. Because this form of editing is so fundamental, it should be part of an operating system and an easily accessible "affordance".

The Web is not designed for fine-grained editing and rearranging/recombining content; it started as a compromise to get work done at CERN. For example, following a link is very easy and almost instantaneous, but creating a link is a whole different story, let alone making a collection of related web pages tied to specific inquiries, or even making a shorter version of a page with some details left out or augmented. Hypertext goes far deeper than this.

Although it is a bit dated, I recommend reading Ted Nelson's seminal ACM publication [1], in which he touches on many issues concerning writing: how we can manage different versions and combinations of a body of text (or a series of documents), what the problems are, and how they can be technically addressed.

[1] "Complex information processing: a file structure for the complex, the changing and the indeterminate" https://dl.acm.org/doi/pdf/10.1145/800197.806036


> One of the key points of Ted Nelson's research is that much of the writing process is re-arranging, or recombining, individual pieces (text, images, ...) into a bigger whole. In some sense, hypertext provides support for fine-grained modularized writing. It provides mechanisms and structures for combination and recombination. But this requires a "common" hypertext structure that can be easily and conveniently viewed, manipulated and "shared" between applications. Because this form of editing is so fundamental, it should be part of an operating system and an easily accessible "affordance".

Here's where I'm stuck:

Hypertext - whether on the web or just on a local machine - can't solve the UX problem of this on its own, though. People can re-arrange contents in a hypertext doc, recombine pieces of it... but mostly through the same cut-and-paste way they'd do it in Microsoft Word 95.

The web adds an abstraction of "cut and paste just the link or tag that points to an external resource to embed, instead of making a fresh copy of the whole thing", but all that does is add in those new problems of stale links, etc.

So compared to a single-player Word doc, or even an "always copy by value" shared-Google-doc world that reduces the problems of dead external embeds, what does hypertext give me to make rearranging things easier? Collapsible tags? But in a GUI editor the ability to select and move individual nodes can be implemented regardless of the backend file format anyway.

TLDR: I haven't seen a compelling-to-me-in-2023 demo of how this system should work (doing things that Google Docs today can't, while avoiding link-rot problems and such) that would convince me the issue is on the document-format side rather than the user-tools/interface side.


Yes, I agree a demo would be good.

I have to catch some sleep, but I will address your questions as well as I can later. In the meantime, you might want to take a look at how Xanadu addresses the problem of stale links [1], and maybe some of your other questions will be answered.

[1] https://xanadu.com.au/ted/XUsurvey/xuDation.html

Also, I highly recommend reading Nelson's 1965 ACM paper I mentioned, to better understand the problems hypertext tries to solve and the limitations of classical word processing (limitations which extend to Google Docs as well).


Thanks, though I think even in these docs there are some early concepts I just don't find convincing. One example: in the Xanadu doc, the Software Philosophy short version is a recomposed copy of live text from the long version. If I'm following their goals, they want live cross-editing through their "transpointing windows"; I really, really don't, personally. I picture three docs, A, B, and C, where C is a summary composite of things pulled from A and B. C will still need its own manual curation and updating if A and B are changed, to remain legible, flowing, and meaningful, and I'd rather have a stale doc than a garbled one.

The intro of the Nelson paper/Memex discussion is similarly alien to me. I don't think it's human-shaped, at least not for me. The upkeep to use it properly seems like more work than I would get back in value. It's a little too artifact-focused and not process/behavior-focused, IMO?


>I picture three docs, A, B, and C, where C is a summary composite of things pulled from A and B. C will still need its own manual curation and updating if A and B are changed, to remain legible, flowing, and meaningful, and I'd rather have a stale doc than a garbled one.

I think I see what you mean. Garbling, as you mention it, is actually what Xanadu is supposed to prevent. The problem is that it is not explicitly mentioned that a document/version (a collection of pointers to immutable content) should also be an addressable entity (a part of the "grand address space") and must not change once it has been published to the database. In particular, if a link, e.g., a comment, is made to a text appearing in a document/version, the link must, in addition to the content addresses, also contain the address of that document/version (in Fig. 13 [1] that is clearly not the case, and I think that's a serious flaw).

This way, everything that document C refers to, and presents, is exactly as it was at the time the composition was made. How revisions are managed is an orthogonal (and important) problem, but with the scheme above we lose no information about the provenance of a composition, and we can use that information for more diligent curation.

[1] https://xanadu.com.au/ted/XUsurvey/xuDation.html
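
To make that addressing rule a bit more concrete, here is a tiny sketch in Pharo/Squeak-style Smalltalk. All field names and address strings are invented for illustration; Xanadu's real addressing scheme (tumblers, spans) is considerably more elaborate.

    "Sketch of the rule above: a link records not only the content addresses it
     points at, but also the address of the frozen document/version in which
     that content appeared."
    | commentLink |
    commentLink := Dictionary new.
    commentLink
        at: #linkType      put: #comment;
        at: #targetVersion put: 'doc:42/version:7';         "immutable once published"
        at: #targetSpans   put: #('atom:903' 'atom:904');    "content addresses within that version"
        at: #sourceVersion put: 'doc:99/version:1'.          "where the comment text itself lives"
    Transcript show: (commentLink at: #targetVersion); cr.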


I understand. I think your last objection is very valid and perhaps needs far more consideration.

Anyway, I have limited computer access at the moment, but maybe you will find the following response, which I wrote in the meantime, useful. I'll get back to you.

---

Some remarks that hopefully answer your question:

The Memex was specifically designed as a supplement to memory. As Bush explains in great detail, it is designed to support associative thinking. I think it's best to compare the Memex not to a writing device, but to a loom. A user would use a Memex to weave connections between records, recall findings by interactively following trails, and present them to others. Others understand our conclusions by retracing our thought patterns. In some sense, the Memex is a bit like a memory pantograph.

Mathematically, what Bush did was to construct a homomorphism to our brain. I think it is important to realize that when we build machines like the Memex or try to understand many of the research efforts behind hypertext. Somewhere in Belinda Barnet's historical review, she mentions that hypertext is an attempt to capture the structure of thought.

What differentiates most word processing from a hypertext processor are the underlying structures and operations and, ultimately, how they are reflected by the user interface. The user interfaces and experiences may of course vary greatly in detail, but by and large the augmented (and missing!) core capabilities will be the same. For example, one can use a word processor to emulate hypertext activities via cut and paste, but the supporting structures for "loom-like" workflows are missing (meaning a bad user experience), and there will be a loss of information, because there are no connections, no explicitly recorded structure, that a user can interactively and instantly retrace (speed and effort matter greatly here!), since everything has been collapsed into a single body of text. The same goes for annotations, or side trails, which have no structure to be hung onto and have to be put into margins, footnotes or various other implicit structural encodings.

Hypertext links, at least as Ted Nelson conceptualizes them, are applicative. They are not embedded markup elements like in HTML. In Xanadu, a document (also called a version) is a collection of pointers and nothing more. The actual content (text, images) and the documents, i.e., the pointer collections, are stored in a database called the docuverse. Each piece of content is atomic and has a globally unique address. The solution to broken links is a bit radical: nothing may ever be deleted from the global database. In other hypertext systems, such as Hyper-G (later called Hyperwave), content may be deleted and link integrity is ensured by a centralized link server. (If I am not mistaken, the centralized nature of Hyper-G and the resulting scalability problems were its downfall when the Web came.) Today, we have plenty of reliable database technologies, and the tools for building reliable distributed systems are much, much better, so I think a distributed, scalable version of Ted Nelson's docuverse scheme could be done, if there is enough interest.

How a document is composed and presented is entirely up to the viewer. A document is only a collection of pointers and does not contain any instructions for layout or presentation, though one can address this by linking to a layout document. The important point, however, is that the processing of documents should be uniform. File formats such as PDF (and HTML!) are the exact opposite of this approach. I don't think different formats for multimedia content can be entirely avoided, but when it comes to the composition of content there should be only one "format" (or at least very, very simple ones).
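
To make the docuverse idea above a bit more tangible, here is a toy sketch in Pharo/Squeak-style Smalltalk. The store layout and address strings are invented for illustration only; Xanadu's actual addressing (tumblers) and storage structures (enfilades) are far more sophisticated.

    "Toy docuverse: immutable content atoms live under permanent addresses,
     and a document/version is nothing but an ordered collection of pointers.
     Presentation is a separate step that resolves those pointers."
    | docuverse versionA shortVersion render |
    docuverse := Dictionary new.
    docuverse at: 'atom:1' put: 'Hypertext is '.
    docuverse at: 'atom:2' put: 'non-sequential writing '.
    docuverse at: 'atom:3' put: 'with reader-controlled links.'.

    versionA := #('atom:1' 'atom:2' 'atom:3').     "a full version: only pointers"
    shortVersion := #('atom:1' 'atom:3').          "a recombination reusing the same atoms"

    render := [ :doc | doc inject: '' into: [ :text :addr | text , (docuverse at: addr) ] ].
    Transcript show: (render value: versionA); cr.
    Transcript show: (render value: shortVersion); cr.

The "shorter version of a page with some details left out", mentioned further up in the thread, is then just another pointer collection over the same immutable atoms.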

I hope this answers some of your questions.


Since Smalltalk was mentioned, please consider the following points:

1. Smalltalk has first-class, live activation records (instances of the class Context). Smalltalk offers the special "variable" thisContext to obtain the current stack frame at any point during method execution.

If an exception is raised, the current execution context is suspended and control is transferred to the nearest exception handler, but the entire stack remains intact, and execution can be resumed at any point or even altered (continuations and Prolog-like backtracking facilities have been added to Smalltalk without changing the language or the VM).

2. The exception system is implemented in Smalltalk itself. There are no reserved keywords for handling or raising exceptions. The implementation can be studied in the live system and, with some precautions, changed while the entire system is running. (A short sketch illustrating points 1 and 2 follows after this list.)

3. The Smalltalk debugger is not only a tool for diagnosing and fixing errors, it is also designed as a tool for writing code (or, put differently, for revising conversational content without having to restart the entire conversation, including its state). Few systems offer that workflow out of the box, which brings me to the last point.

4. I said earlier that Racket is different from Common Lisp. It's not only about language syntax, semantics, implementation or other technicalities. It is also about the culture of a language: its history, its people, how they use the language and, ultimately, how they approach and do computing. Even within the same language family tree you will find vast differences if you take these factors into account, so it might be worthwhile to study Common Lisp with an open mind and see how it actually feels in use.
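
Here is the sketch promised under point 2, in Pharo/Squeak-style Smalltalk (the Transcript protocol may differ slightly between dialects; treat it as illustrative rather than canonical). Nothing in it is special syntax: thisContext is an ordinary object, and signaling and handling are plain message sends.

    | greeting |
    "1. The current activation record is a first-class, inspectable object."
    Transcript show: thisContext printString; cr.

    "2. on:do: and signal: are messages, not keywords. Warning is resumable,
     so the handler can answer a value and execution continues right after
     the signal: send inside the protected block."
    greeting := [ 'Hello, ', (Warning signal: 'no name given') ]
        on: Warning
        do: [ :ex | ex resume: 'world' ].
    Transcript show: greeting; cr.    "prints 'Hello, world'"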


But Racket and Clojure are very different from Lisps such as Common Lisp that embrace the idea of a lively, malleable and explorable environment, which is arguably the biggest idea.


and Smalltalk ;-)!


good to know! but the point still stands :D


If done right, a macro system allows you to make your language modular and experiment with new language features without having to change the core language and the compiler. With the macro approach, languages become libraries.

The Racket people took this concept very far. The kernel of the language is very small and well defined. All Racket programs (or more precisely, expressions) are eventually reduced to a handful of syntactic core forms (see [1]). For example, thanks to forms such as (#%variable-reference id) you can specify rules for variable access, e.g., with respect to lifetime. With tools such as the Macro Stepper you can fully step through the transformations of any expression in your program, from the highest to the lowest level.

This has numerous benefits. Extensions or modifications of the language can be rolled out (and used!) as libraries, which makes collaboration and testing far easier. Also, if a language feature turns out to be a bad idea, you deprecate the library; you do not have to change the compiler. This allows you to shrink your language and explore different directions without the burden of an ever-growing language spec and implementation.

Is it a perfect solution? No. Changing a widely used language always has a big impact, but the impact can be compartmentalized, and users of the language are given a graceful migration path by updating their libraries as they choose, at their own pace.

Is Racket perfect? No, not by a long shot. But, frankly, language authors should at least take a look at the possibilities and consider the technological options for controlling the evolution of a language.

[1] https://docs.racket-lang.org/reference/syntax-model.html#%28...


Nice to see Red getting attention. I played with Rebol a while ago and really liked how easy it is to do program manipulation and write small DSLs.

However, I found debugging to be very difficult once I wrote a more complex application. Are there good debugging tools for Red?

