I miss macros sometimes. The main thing I think Smalltalks get right is uniformity: everything (well, almost everything) is an object. If I remember rightly, R4RS (maybe even R5RS) Scheme has exactly 9 (nine) distinct types of value. Nine? Really? Who ordered that? I think an ideal balance between the two would be something with uniform object-orientation for behavioural interaction with values plus programmable "views" [1] for deconstruction of values as data. So something with Smalltalk's extreme late-boundness, plus ML-like sums-of-products for data.
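(Tangentially, Python 3.10+ structural pattern matching gives a small taste of the "views" idea, since the match protocol is programmable: a value can present a logical shape that differs from its stored representation. A toy sketch, all names invented for illustration:)

    import math

    # Toy illustration: internal representation is polar, but the
    # pattern-matching "view" exposes cartesian x/y fields.
    class Point:
        __match_args__ = ("x", "y")

        def __init__(self, r, theta):
            self._r, self._theta = r, theta

        @property
        def x(self):
            return self._r * math.cos(self._theta)

        @property
        def y(self):
            return self._r * math.sin(self._theta)

    match Point(1.0, 0.0):
        case Point(x, 0.0):
            print("on the x axis at", x)   # on the x axis at 1.0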
That's interesting. Thanks for the paper. I skimmed through it, but I did not get every detail, as it's been a long time since I worked with languages like ML or Haskell and my formal background is a bit rusty at the moment.
I do like the premise, if I understood correctly: (1) you want to specify your computations with the most suitable representation(s) and have them interoperate. Also, (2) I find it interesting how the tension between pattern matching on explicit representations and abstract data types with information hiding can be reconciled.
On a less formal side, the Personal Information Environment (PIE) developed at Xerox PARC extended Smalltalk with perspectives, which address point (1) from a different angle [1], but the multiple-perspectives system is bolted on and requires tooling. Its scope is also a bit broader, since PIE also addresses the integration of non-functional artifacts (documentation and so forth).
I don't know if that can be done more cleanly without rethinking the entire object and interaction model. Taking (1) a bit farther, you end up with a configuration language (which Alan Kay talked about in a seminar [2]), where the system is responsible for delivering appropriate objects to the methods (or processes).
For larger systems I consider that a must, because without that kind of modularity your components accumulate all sorts of different contexts, which makes it difficult to understand the evolution of the system or to take requirements back out of it. Extension methods seem more of a work-around than a real solution. Traits are fine, but the composition always ends up being a global fact in your system, even if it is irrelevant (or even ill-suited) for other contexts.
So what we need is a system that can dynamically adapt itself to different situations, in such a way that the programmer (or user) can adapt the system locally without having to know the entire global configuration -- which does not work if you have conflicting behaviors but want to keep them for, say, analysis purposes.
In the end, we want the system to get us into a state where we can do experiments as fast as possible instead of doing 99% of (volatile!) preparation work all the time.
But this is of course an extremely complicated problem (theoretically, and practically even more so as software grows) that needs adequate methods and languages. And this is where Syndicate might close some gaps, especially since you are applying it to the messy domain of operating-system configuration, which should yield insights we can build on.
Edit: To make my long-winded post a bit clearer, much of it boils down to the elusive context problem. But rethinking the mentioned issues from a conversational point of view seems very promising to me and could address many issues, such as composition and discovery of components (or better yet, of behavior), pragmatic concerns such as optimizations, and keeping track of requirements -- all put under the umbrella of multi-faceted conversations within the system and with its users.
What I noticed in various Smalltalk systems is that there is a lot of code duplication for handling UI interactions (such as dragging or selecting objects in various kinds of views). Garnet, for instance, addresses that problem with Interactors and constraints [1].
Also, in Smalltalk, you will find countless instances of re-implemented tree data structures, e.g. package trees, class trees, UI component trees, which increases maintenance effort and makes the system less uniform than it perhaps could be.
Syndicate certainly lets you factor out some kinds of repeated behavioral patterns: I discussed in my dissertation (chapter 9 [1]) examples around cancellation, around state-machines, around observers, and around demand-matching.
Besides these examples, the general facets-plus-conversational-context idea lets you factor out composable behavioral traits. That word reminds me of the Traits of Schärli/Nierstrasz/Ducasse [2]: those Traits have proven benefits for improving reuse and reducing redundancy and bugs in Smalltalk systems. So perhaps in general what we're looking for are new perspectives on old problems - new ways of isolating and composing ideas.
Specifically on the example of dragging objects: I actually have a little demo GUI that factors out object draggability into a composable facet of behavior that can be "mixed in" to an actor representing a graphical object. See [3] and, for detail, [4].
My pleasure! I found it to be a refreshing read.
Thank you very much for your response. I will study your examples and read up on the relevant parts of your dissertation.
In your dissertation you mention that "program" internals such as Exceptions should not be asserted as facts and pollute the dataspace; only relevant facts/knowledge about the domain should be shared.
I understand this point, but what if you want to express knowledge about how to adapt the technical parts of a system and its infrastructure, not just the "problem" domain? I imagine that Syndicate and its abstractions could be very helpful in this regard.
What are your thoughts?
Edit: For example, you may want to delegate the diagnosis of critical Exceptions to engineers, perhaps suggesting resolutions or patches. Would you use a dedicated Syndicate network for this? What would be your approach?
Yes, that's fine: if your domain is implementation detail of the system itself, then using Syndicate to allow different program components to have conversations about that domain is totally OK.
The point I was trying to make about exceptions and polluting the dataspace is a point about unnecessary coupling: exceptions per se are not part of most domains, so communication mechanisms that include exceptions as part of the transport are in some sense wrongly factored. Likewise transports that include sender information at the point of delivery of each received message. The point isn't that sender information or exception details aren't useful, but that they're not something to bake in.
Concretely, wrt your example: you could use a dedicated dataspace for such things, yes, or you could design the language of discourse for a large shared dataspace to allow multiple ongoing conversations at once on unrelated or semi-related topics.
Can you clarify briefly, is an error communicated out-of-band? Or is it not communicated at all?
For example, if I send a request for a key that's "not found," I would think a standard out-of-band error is what you mean?
On the other hand, if I send a request to put data in a key and there's a version mismatch (say my schema is newer and I've got an additional field) -- then silently allowing it to proceed makes sense, sort of?
Or did I misunderstand completely? :) I realize this is rehashing things at a fairly beginner level, but it would help clarify a lot!
By "error" I usually mean some kind of crash (exception/termination/disconnection).
Let's imagine that two actors are interacting. Actor number 1 is observing assertions matching pattern (A, *), and actor number 2 then asserts (A, 123).
Actor 1 then receives an event indicating that a tuple (A, 123) has been asserted.
Now, imagine Actor 2 crashes. The Syndicated Actor Model ensures that all its assertions are cleanly retracted. This causes an event for Actor 1 saying that (A, 123) has been retracted again.
So you can see this as a communication of the crash! But what it doesn't do is indicate any detail about why the retraction happened. It could have been a deliberate retraction, or a crash due to any number of reasons, or a timeout in some intermediate transport layer causing disconnection, etc etc.
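To make that flow concrete, here is a tiny toy model of a dataspace in Python -- emphatically not the real Syndicate API; every name below is invented -- showing an observer seeing an assertion and then its retraction when the asserting actor crashes:

    # Toy dataspace: actors assert tuples, observers get events.
    class Dataspace:
        def __init__(self):
            self.assertions = {}   # owner -> set of asserted tuples
            self.observers = []    # (pattern, callback) pairs

        def observe(self, pattern, callback):
            self.observers.append((pattern, callback))

        def assert_(self, owner, fact):
            self.assertions.setdefault(owner, set()).add(fact)
            self._notify("asserted", fact)

        def crash(self, owner):
            # A crash cleanly retracts everything the actor asserted;
            # the resulting events carry no hint of *why*.
            for fact in self.assertions.pop(owner, set()):
                self._notify("retracted", fact)

        def _matches(self, pattern, fact):
            return len(pattern) == len(fact) and all(
                p == "*" or p == f for p, f in zip(pattern, fact))

        def _notify(self, event, fact):
            for pattern, callback in self.observers:
                if self._matches(pattern, fact):
                    callback(event, fact)

    ds = Dataspace()
    ds.observe(("A", "*"), lambda ev, fact: print(ev, fact))  # Actor 1
    ds.assert_("actor2", ("A", 123))   # prints: asserted ('A', 123)
    ds.crash("actor2")                 # prints: retracted ('A', 123)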
There are other uses of "error": for example, your "key not found" status. There, the protocol for interaction among actors should include an explicit assertion or message. This is the kind of "error handling" seen in languages like ML, Haskell, Rust, and so on, where errors are (non-crash) values. It's a good strategy.
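(In the same toy Python style, with made-up record shapes, that would be something like:)

    # "key not found" as an ordinary, non-crash value in the protocol:
    def lookup(store, key):
        if key in store:
            return ("found", store[key])
        return ("not-found",)

    print(lookup({"a": 1}, "a"))  # ('found', 1)
    print(lookup({"a": 1}, "b"))  # ('not-found',)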
At a slight tangent, there's one thing Syndicate can do here that other (non-actor) systems cannot, and that's "key not found yet". For that, silence is a tacit indicator of no result. For example, you could assert interest in (Map, "mykey", *) tuples. If and when any such appear or disappear, you get notified. Because time is involved, receiving no message is an indicator of "key not found yet".
There are a couple of approaches here. If there is a moment when it's known that no matches will be found, then either an actor can assert something to this effect or the querier can wait "long enough" and then move on. For the former, if there's some actor in charge of maintaining an authoritative copy of the collection concerned, they may assert some "no match" record. For the latter, the SAM includes a "sync" action which is essentially a no-op roundtrip to some target. Once the sync returns, you know all previous assertions must have been seen by that target, and if you know something about how the target operates, it might be safe to conclude no response will be forthcoming.
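Continuing the toy Python model from above (again, invented names; only the shape of the idea is real): with an explicit event queue, a sync is just a marker delivered strictly after everything queued before it.

    import collections

    class QueuedDataspace(Dataspace):
        def __init__(self):
            super().__init__()
            self.queue = collections.deque()

        def assert_(self, owner, fact):
            # Queue the action instead of applying it immediately.
            self.queue.append(lambda: Dataspace.assert_(self, owner, fact))

        def sync(self, callback):
            # No-op roundtrip: fires only after all earlier actions,
            # so by then every prior assertion has been delivered.
            self.queue.append(callback)

        def run(self):
            while self.queue:
                self.queue.popleft()()

    ds = QueuedDataspace()
    ds.observe(("Map", "mykey", "*"), lambda ev, fact: print(ev, fact))
    ds.assert_("store", ("Map", "otherkey", 42))
    ds.sync(lambda: print("synced; no (Map, mykey, *) seen so far"))
    ds.run()   # only the sync message prints: "key not found yet"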
"Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution.
Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed.
Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information."
One gripe that I have with functions like map is that it returns a generator, so you have to be careful when reusing results. I fell into this trap a few times.
I'd also like a simpler syntax for closures; it would make writing embedded DSLs less cumbersome.
> One gripe that I have with functions like map is that it returns a generator, so you have to be careful when reusing results
I hope that is never changed; I often write code in which the map function's results are very large, and keeping those around by default would cause serious memory consumption even when I am mapping over a sequence (rather than another generator).
Instead, I'd advocate the opposite extreme: Python makes it a little too easy to make sequences/vectors (with comprehensions); I wish generators were the default in more cases, and that it was harder to accidentally reify things into a list/tuple/whatever.
I think that if the only comprehension syntax available was the one that creates a generator -- "(_ for _ in _)" -- and you always had to explicitly reify it by calling list(genexp) or tuple(genexp) when you wanted a sequence, then the conventions in the Python ecosystem would be much more laziness-oriented and more predictable memory-consumption-wise.
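Concretely, for anyone who hasn't been bitten by this yet (plain Python 3, nothing assumed):

    # map() returns a one-shot iterator, not a list:
    squares = map(lambda x: x * x, [1, 2, 3])
    print(list(squares))   # [1, 4, 9]
    print(list(squares))   # []  -- already exhausted!

    # Reify explicitly when the results must be reused:
    squares = list(map(lambda x: x * x, [1, 2, 3]))
    print(squares, squares)   # fine, it's a real list now

    # Generator expressions behave the same way:
    gen = (x * x for x in [1, 2, 3])
    print(sum(gen))   # 14
    print(sum(gen))   # 0 -- exhausted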
Personally, I mostly just avoid using map/filter and use a list/generator/set/dict comprehension as required. I don't find map(foo, bar) much easier to read than [foo(thingamajig) for thingamajig in bar].
I think that's the most powerful part of the tool. Being able to effortlessly write lazy generators is absolutely amazing when working with any sort of I/O system or async code.
That's exactly how I feel when reading code. Recently, I explained the problem to a dear colleague while working on a code base of about 600K lines. I compared the situation to working in a large office building with all the lights turned off, where you have to skim through thousands of documents with a flashlight.
It's comforting to know that other people feel the same way.
This reminds me of Rob Pike's "Systems Software Research is Irrelevant" talk [1]. Now, 20 years after his speech, we are still stuck with the same notions (such as everything being a string). It's not that there aren't plenty of alternatives around; rather, expectations are so high that it's almost impossible to make a new computer system economically viable. On the other hand, the hacker and maker scene is very active, with some of them building operating systems and hardware such as tiny Lisp-based machines [2] and OSes [3]. (My only gripe is that most of the new "avant-garde" systems are still text/file-based.)
I'd love to see a next wave in personal computing, starting with a clean slate, building on the research and insights, and learning from the mistakes that have been made. I have no doubt that it will happen; the question is only when.
As for interoperability: Even on the same platform there are countless problems getting software to talk to each other, so I don't think that a new system will make the situation any worse.
I wonder whether the migration to Chez Scheme will improve the performance of DrRacket, which consumes a lot of memory and feels very sluggish.
I have been experimenting with Racket for quite a while and I appreciate the effort that went into making the language extensible. That being said, I wish the community had embraced "object-oriented" techniques for building the VM - Ian Piumarta's COLA (Combined Object-Lambda Architecture) comes to mind [1]. I think this would have saved them a lot of performance trouble (though this is mere speculation) and would make the system far more flexible and pleasant to use.
While it's comforting to have a very powerful macro system at your fingertips to change the semantics of the language when needed, many of the macros in the base language and libraries could be eliminated with a more convenient default notation for closures and message-passing (to objects simulating control structures, as in Smalltalk).
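By way of illustration (a hypothetical helper sketched in Python, not anything from Racket): with a cheap closure notation, a conditional can be an ordinary function receiving blocks, Smalltalk-style, instead of a macro.

    # Control flow as a plain call over closures ("blocks"):
    def if_true_if_false(condition, then_block, else_block):
        return then_block() if condition else else_block()

    label = if_true_if_false(3 > 2, lambda: "bigger", lambda: "smaller")
    print(label)   # bigger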
Racket has one of the best module systems I have worked with, but modules cannot be (easily) parameterized. Research on systems such as Self and, more recently, Newspeak [2] amply demonstrates the benefits (conceptual integrity, security) of having modules as objects.
What Racket also lacks is a good meta-object system with highly reflective capabilities (of, say, a CLOS or Smalltalk system). This makes it difficult to build tooling such as inspectors or browsers.
I hope that in the future these issues will be addressed.
I would absolutely love to see such a thing (e.g. VPRI's 'frank' system), but it's quite a different approach than Scheme, and I wouldn't want to see PLT/Racket take such a drastic course change while there's still so much interesting work to be done in the Scheme world (e.g. Kernel's F-expressions are powerful, but hard to optimise; work on reflective towers and multi-stage programming is pushing languages beyond the traditional AOT/JIT dichotomy; and so on).
From my understanding, Piumarta's main trick with COLA is fast dynamic dispatch through a uniform interface (putting pointers to vtables at offset -1), with the vtables themselves implemented behind that same interface (so they can be replaced). It's a cool approach, but it's also a case of 'everything can be solved by adding a layer of indirection'; and I've not sat down and thought through how these ideas might apply to different paradigms, and what might be unique or common to them all.
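A very loose Python rendering of that shape (the real COLA does this at the C level; the class names here are mine):

    # Every object carries a replaceable vtable; dispatch is uniform.
    class VTable:
        def __init__(self, parent=None):
            self.methods = {}
            self.parent = parent

        def lookup(self, selector):
            if selector in self.methods:
                return self.methods[selector]
            if self.parent is not None:
                return self.parent.lookup(selector)
            raise AttributeError(selector)

    class Obj:
        def __init__(self, vtable):
            self.vtable = vtable   # swappable at runtime

        def send(self, selector, *args):
            return self.vtable.lookup(selector)(self, *args)

    point_vt = VTable()
    point_vt.methods["describe"] = lambda self: "a point"
    p = Obj(point_vt)
    print(p.send("describe"))   # a point
    point_vt.methods["describe"] = lambda self: "a fancier point"
    print(p.send("describe"))   # behaviour changed without touching p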
If I remember correctly, Adele Goldberg once remarked that "in Smalltalk everything happens somewhere else". Although there has been ongoing work to collapse the many layers of indirection you find in message-passing systems, I am not convinced that throwing more tools at the problem will address the deeper underlying issues. At some level, you need more concise descriptions of your system and operators to shape its structure and organization; perhaps even to compute different organizations. This is very difficult to accomplish in purely object-oriented systems/languages that have no direct and convenient "algebraic" correspondence for composing complex communication structures and specifying relationships such as inheritance. For example, I was very taken by the idea of class definitions as expressions, having first seen it in Racket many years ago.
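(Python shows a little of that flavour too, since a class statement is itself executed at runtime -- a rough sketch:)

    # A function that computes a class, i.e. the class definition
    # is an expression evaluated with `step` in scope:
    def make_counter_class(step):
        class Counter:
            def __init__(self):
                self.n = 0
            def tick(self):
                self.n += step
                return self.n
        return Counter

    FastCounter = make_counter_class(10)
    c = FastCounter()
    print(c.tick(), c.tick())   # 10 20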
On the other hand, I believe that when it comes to low-level "machine" work, objects are a good abstraction for modelling components such as activation records, so that they can be uniformly explored and modified (on the fly). But this is perhaps a moot point.
Over the years, I have studied many object/component-oriented systems and come up with a sizable catalogue of message-exchange patterns, plumbing and machinery. My hope is that at some point, this can be crystallized into a language or calculus for specifying systems; and Scheme/Racket is certainly a good language to think about these issues from another perspective (which is worthwhile to preserve).
So I guess I understand your point. Thanks for raising it; much appreciated.
Oberon and Maude, I think, make a good case for (parameterised) modules, both for the ability to reason about them mathematically and for their use at the low level of an OS.
I've been very slowly trying to combine the work in Maru with Composita to yield a modular and deterministic Lisp (Composita uses managed memory without GC).
Naturally you'd want to layer something like Shen or Maude on top of this, to provide the equational proof-checking. The K language is a good example - it provides facilities for modelling new languages and semantics, much as Racket does.
CLOS is inherently multiple-dispatch, but Flavors, the most prominent precursor of CLOS, was single-dispatch. An interesting consequence is that, IIRC, the traditional implementation of CLOS/MOP does not really use multiple dispatch internally.
I recommend having a look at "Practical Electronics for Inventors" by Paul Scherz and Simon Monk. The book offers a very good introduction to the basics of electricity, with many helpful illustrations, written in a down-to-earth style. If you are interested in electronics, you will find that the book covers many intermediate/advanced topics, such as operational amplifiers, with lots of practical examples.
I came here to recommend the same book. It's not an easy book to read from cover to cover, though. I found it useful to try to understand a completed circuit design (say, for a solar controller or something else I was interested in), and when I ran into something I didn't understand, I'd open up the book and read the relevant sections.
Yes, I had the same experience and I use it in the same way. It's a great book to have on your bookshelf as it covers a lot of topics and the chapters are, iirc, fairly self-contained; but I never read it from cover to cover.