talex5's comments | Hacker News

https://www.youtube.com/watch?v=Hw-_x9CfqA8&list=PLyrlk8Xayl... - starts around -4:30:00 (but I guess that's relative to now and will change).


Watched it. Great work! Looking forward to this going mainstream. I'll try to drop by the Matrix chat and ask some questions.


I just tried it. :X asks you to enter an encryption key, then asks you to enter it again, and only continues if the keys match. And then you're still in vim and need to save your file to overwrite anything. Seems hard to do by mistake.

(though I prefer ZZ)


If only!

I was a TA for an introductory CS class that taught C++ and, in passing, vi. An hour before one assignment was due, a student showed up in a panic. “I just had it working but then the computer corrupted my file. Look! Can I have an extension?” The other TA and I smirked: What a lame excuse! We offered some generic advice about starting earlier and visiting office hours. He left in a huff.

A few minutes later, a second student appeared with the same story, and then a third and fourth.

We eventually tracked the problem down to some handwritten notes, where someone had written a largish :x for “save and quit.” The students were doing things like spamming :X (since it didn’t seem to respond the first time, and it was over a sluggish ssh connection) or a reflexive quit-and-compile cycle. I think we eventually recovered one or two assignments by guessing what they might have done.

We obviously apologized profusely, and the next class started with a discussion of :x versus :X (and emacs!)




He was legitimately confused and panicked and we could have been more understanding (especially since this turned out not to be a one-off thing). It’s good to be kind.

The editor thing wasn’t a big deal: a few minutes of “Beware :X! If you no longer trust vi, feel free to use emacs or nano, which are also installed on our system. They work a bit differently [details, resources]. You can write your code locally too, but if so, make sure it runs on our system with the autograder. Here are a few options for that too.”


Probably apologized for dismissing the first student who went to them rather than taking him seriously.


It sounds like the "sluggish ssh connection" may have played a role here: imagine pressing ":X" like you think you should, but nothing happens, so you press enter a few times to see if the console is responsive.


Yep, with shift-zz you don’t have to worry about this. Even if you do shift-xx by mistake, I don’t think that does anything (though I haven’t tried).


I think I was 40 years into my usage of vi when I learned that ZQ was a thing. I bet someone else learns it right here.


it's me, just learned it


Here's an example using OCaml 5 to run multiple fibers concurrently (look - no monads!):

https://github.com/ocaml-multicore/eio#fibers
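Eio builds its fibers on OCaml 5's effect handlers. As a rough illustration of how direct-style concurrency works without monads, here is a toy round-robin scheduler using only the stdlib `Effect` module — a sketch of the idea, not Eio's actual implementation:

```ocaml
open Effect
open Effect.Deep

(* A single effect: a fiber asks the scheduler to let others run. *)
type _ Effect.t += Yield : unit Effect.t

let yield () = perform Yield

let trace = Buffer.create 64

(* Round-robin scheduler: run each fiber until it yields or finishes. *)
let run fibers =
  let q = Queue.create () in
  let schedule () =
    match Queue.take_opt q with
    | None -> ()
    | Some task -> task ()
  in
  let spawn f =
    match_with f ()
      { retc = (fun () -> schedule ());
        exnc = raise;
        effc = (fun (type a) (eff : a Effect.t) ->
          match eff with
          | Yield ->
              Some (fun (k : (a, unit) continuation) ->
                Queue.add (fun () -> continue k ()) q;  (* park this fiber *)
                schedule ())                            (* run the next one *)
          | _ -> None) }
  in
  List.iter (fun f -> Queue.add (fun () -> spawn f) q) fibers;
  schedule ()

let () =
  run
    [ (fun () ->
        List.iter (fun i -> Buffer.add_string trace (Printf.sprintf "a%d " i); yield ()) [1; 2; 3]);
      (fun () ->
        List.iter (fun i -> Buffer.add_string trace (Printf.sprintf "b%d " i); yield ()) [1; 2; 3]) ];
  print_string (Buffer.contents trace)  (* a1 b1 a2 b2 a3 b3 *)
```

Note that the fiber bodies are ordinary direct-style code: no `>>=`, no `let*` — `yield` looks like a plain function call.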


> I am not sure ARMv1 (and v2) even had supervisor vs user mode, etc.? (It may have, Google isn't helping me here)

v2 at least had 4 modes: user mode, supervisor, IRQ and FIQ. They were encoded in the low 2 bits of r15 (which weren't otherwise needed, since the PC was always word-aligned).
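A sketch of that layout, from memory (the exact flag positions are my recollection of the 26-bit ARMs, so treat the constants as illustrative):

```ocaml
(* r15 on a 26-bit ARM (sketch): bits 31-28 hold the N/Z/C/V flags,
   27-26 the I/F interrupt masks, 25-2 the word-aligned PC, and 1-0 the mode. *)
let mode r15 =
  match r15 land 0b11 with
  | 0b00 -> "user"
  | 0b01 -> "FIQ"
  | 0b10 -> "IRQ"
  | _    -> "supervisor"

(* Mask off flags and mode to recover the PC; it's always word-aligned. *)
let pc r15 = r15 land 0x03FF_FFFC
```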

The original Archimedes was intended to run a "preemptively multi-tasking, multi-threaded, multi-user, written in Acorn-extended Modula2+" system called ARX. But when that wasn't ready in time, they ended up just porting the 8-bit BBC Micro MOS to the new 32-bit machines!

This is a great read about it all:

http://www.rougol.jellybaby.net/meetings/2012/PaulFellows/in...


This is a great link. Really liked the section

> Now, the aforementioned, Arthur Norman, who had written the LISP interpreter for the BBC Micro, and was in and out of Acorn the whole time, had come up with a design for a machine with 3 instructions called the SKI machine. Whose instructions were called 'S', 'K' and 'I' and it ran LISP in hardware and he built one of these things, it never worked, it was so big and covered in wire-wrap it was always broken. The idea was to prove that you really didn't need very many instructions for a general purpose machine.
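For reference, the three combinators are tiny. A (hypothetical) OCaml rendering — the real calculus is untyped, but the idea survives:

```ocaml
let i x = x             (* I: identity *)
let k x _ = x           (* K: discard the second argument *)
let s f g x = f x (g x) (* S: apply f and g to x, then combine *)

(* The joke of such a machine: even I is redundant, since S K K = I. *)
let i' x = s k k x
```

So strictly you only need two instructions, which makes the "you really don't need very many" point even sharper.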



Thanks!


Work has already started: https://github.com/ocaml-multicore/eio

There's also now https://github.com/talex5/lwt_eio, which allows you to run existing Lwt code alongside code using effects, to aid with porting.


I usually notice if my blog gets on Hacker News fairly soon, so I'll see any comments posted here.

I prefer not to use email for most things because then the reply only benefits one person, whereas replies in a public forum can be read by others and get indexed by search engines.

I've been thinking about creating a Matrix room for each blog post as a discussion forum, but so far most posts have ended up on other discussion sites anyway.


Regarding the performance issues that you had with Qubes: I once disabled power management on one of my laptops and it moved Qubes from barely usable to usable.

Still close to unusable: web conferences in the browser (WebEx, for example).


I think that some blogs have email lists as discussion forums, with each thread devoted to one post, but I can't come up with examples right now. If the email list archive is public, it might be the perfect solution.


To clarify that, there are two systems here:

- domainslib schedules all tasks across all cores (like Go).

- eio keeps tasks on the same core (and you can use a shared job queue to distribute work between cores if you want).

Eio can certainly do async IO on multiple cores.

Moving tasks freely between cores has some major downsides - for example every time a Go program wants to access a shared value, it needs to take a mutex (and be careful to avoid deadlocks). Such races can be very hard to debug.

I suspect that the extra reliability is often worth the cost of sometimes having unbalanced workloads between cores. We're still investigating how big this effect is. When I worked at Docker, we spent a lot of time dealing with races in the Go code, even though almost nothing in Docker is CPU intensive!

For a group of tasks on a single core, you can be sure that e.g. incrementing a counter or scanning an array is an atomic operation. Only blocking operations (such as reading from a file or socket) can allow something else to run. And eio schedules tasks deterministically, so if you test with (deterministic) mocks then the trace of your tests is deterministic too. Eio's own unit-tests are mostly expect-based tests where the test just checks that the trace output matches the recorded trace, for example.

The Eio README has more information, plus a getting-started guide: https://github.com/ocaml-multicore/eio/blob/main/README.md


Thank you for the clarification. So if I understood correctly, distributing independent jobs that use async IO between cores is okay? For example, I recently wrote a program that reads all the files in a folder, calculates a hash of their contents, and then renames them to their hash. I did this in Go. Since all operations are "perfectly independent", I only had to do synchronization for whole-program stuff: make sure the program doesn't exit while goroutines are sleeping, and use channels to avoid exhausting the file descriptors. From what I understand, I can launch every opening-hashing-renaming operation in a separate goroutine, and Go will take care of making everything async and multicore at the same time.

Now let's imagine I want to do the same program in OCaml. I think my options are:

- on current OCaml, thread-based concurrency but no parallelism

- on current OCaml, monadic concurrency (Lwt, Async) but no parallelism

- on multicore OCaml, direct/effect-based (I'm not sure what's the right word) concurrency with eio, which is deterministic. If I want parallelism here, I have to explicitly create and use a shared job queue, while the Go runtime does this implicitly. Since the standard library Queue is not thread safe, I would have to use Mutex to avoid concurrent access.

Is this correct? I've read the eio documentation but it's hard to wrap my head around all of that without examples. I've found the Domain_manager which looks like what I want. For example, I could have the main thread fill the queue and for each core available, I could launch a Domain_manager.run toto, with toto taking jobs from the queue that would be shared between all domains?


Instead of Stdlib.Queue you can use Eio.Stream, which is thread-safe (and will take care of waking sleeping threads when data becomes available).

The README shows an example of a pool of workers pulling jobs from an Eio.Stream:

https://github.com/ocaml-multicore/eio#example-a-worker-pool

We're still exploring what APIs to provide for this kind of thing, and in particular how to unify domainslib and eio.
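To make the shape concrete, here is a rough stdlib-only analogue of that pattern (Domain plus a mutex-protected queue standing in for `Eio.Stream` — a sketch of the idea, not Eio's API):

```ocaml
(* A toy worker pool: a mutex-protected queue shared between domains.
   Eio.Stream would replace the hand-rolled locking here. *)
let worker_pool ~domains ~f jobs =
  let q = Queue.create () in
  let lock = Mutex.create () in
  List.iter (fun j -> Queue.add j q) jobs;
  let results = Atomic.make [] in
  let take () =
    Mutex.lock lock;
    let j = Queue.take_opt q in
    Mutex.unlock lock;
    j
  in
  (* Lock-free prepend via compare-and-set; retry on contention. *)
  let rec push r =
    let old = Atomic.get results in
    if not (Atomic.compare_and_set results old (r :: old)) then push r
  in
  let rec worker () =
    match take () with
    | None -> ()                       (* queue drained: this worker exits *)
    | Some j -> push (f j); worker ()
  in
  let ds = List.init domains (fun _ -> Domain.spawn worker) in
  List.iter Domain.join ds;
  Atomic.get results

(* e.g. independent hash-like jobs across 4 cores; results arrive in any order *)
let squares = worker_pool ~domains:4 ~f:(fun x -> x * x) [1; 2; 3; 4; 5]
```

Within each domain the jobs run sequentially, so `f` itself needs no locking unless it touches state shared with other domains.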


I continue to be rather impressed by Raku's atomicint type plus the atomic operators (and while normally I dislike emoji in identifiers, them using the atom symbol to make the atomic operators stand out when debugging actually seems rather neat in this scenario).

See the example near the end of https://raku-advent.blog/2021/12/01/batteries-included-gener... for what I mean.
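For comparison, OCaml 5's stdlib offers the same guarantee through `Atomic` (a sketch; no special operators, just ordinary functions):

```ocaml
(* Four domains each bump a shared counter 1000 times. Atomic.incr makes the
   read-modify-write indivisible, so no updates are lost - with a plain int
   ref this would race and usually print less than 4000. *)
let counter = Atomic.make 0

let () =
  let ds =
    List.init 4 (fun _ ->
      Domain.spawn (fun () ->
        for _ = 1 to 1000 do Atomic.incr counter done))
  in
  List.iter Domain.join ds;
  Printf.printf "%d\n" (Atomic.get counter)  (* 4000 *)
```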


I tried a couple of different AMD cards, and my machine crashes on resume if I try to use either of them (but the Intel iGPU works fine).

Searching for amdgpu bug reports leads to:

https://amdgpu-install.readthedocs.io/en/latest/install-bugr...

which links to a page saying "Bugzilla is no longer in use" :-(

This is under Qubes/Xen, though, so maybe that causes extra problems. If any devs are reading, I did report it here in the end:

https://github.com/QubesOS/qubes-issues/issues/5459


By default, opam installs everything into the current "switch". Typically you have one switch per compiler, but you can create one per project. You can also create a "local" switch directly inside your project directory: https://opam.ocaml.org/blog/opam-local-switches/
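A typical sequence for a per-project local switch looks something like this (the compiler version is just an example):

```shell
# Create a local switch: installs a compiler and packages under ./_opam
opam switch create . ocaml-base-compiler.5.1.1

# Point the shell at the switch, then install the project's dependencies
eval $(opam env)
opam install . --deps-only
dune build
```

Because the switch lives in `./_opam`, opam selects it automatically whenever you run commands from inside that directory.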

Another option is to use the duniverse tool (https://github.com/ocamllabs/duniverse). That downloads all dependencies to the project directory, and then builds them all together as one giant project. That makes it very easy to make changes across multiple libraries at once, while keeping fast incremental builds.

And a final option is to build projects with Docker, which allows you to have separate versions of non-OCaml packages too.


Ahh cool - had some similar questions here: https://news.ycombinator.com/item?id=23460980 - mainly, how much manual switching do you have to do? Or is it seamless, depending on what project directory you're in? I think I tried the local switches in the past and got really confused when switching projects and everything broke, thinking 'wasn't all this meant to prevent this?'.


Indeed. The project has since moved under the mirage org on GitHub, and now has several contributors:

https://github.com/mirage/qubes-mirage-firewall

