
Well, this looks very nice. I have so far avoided runbooks, preferring Ansible or the like. Installed it, and we'll see if I change my habits. Containers and such have made Ansible usage more cumbersome.

Also, I noticed there are only 60-some Atuin sponsors (on GitHub), so I added myself. I've been using Atuin for a while now. Hopefully their work is sustainable.


I wonder how the efficiency of a MacBook display compares with the Framework laptop's. While the CPU and GPU can draw considerable power, they aren't usually running at 100% utilization. The display, however, draws power all the time, possibly at high brightness in daytime. MacBooks (all of them?) have high-resolution displays, which should be much more power-hungry than the Framework 13's IPS panel. The Pro models use mini-LED, which needs even more power.

I did ask an LLM for some stats about this. According to Claude Sonnet 4 through VS Code (for what that's worth), my MacBook's display can consume as much as or even more power than the CPU does for "office work". Yet my M1 Max 16" seems to last a good while longer than whatever machine I got from work this year. I'd like to know how those stats were produced (or whether they were hallucinated...). There doesn't seem to be a way to read the display's power usage on M-series Macs, so you'd need to devise a testing regime comparing display off against display at 100% brightness to get some indication of its effect on power use.
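For what it's worth, a crude version of that regime can be scripted with nothing but the stock pmset tool (a sketch, assuming macOS; the measurement window and the display conditions are up to you):

    # Rough sketch: run once with the display at full brightness and once with it
    # off (or at minimum), on battery both times, then compare the drain.
    import re
    import subprocess
    import time

    def battery_percent() -> int:
        out = subprocess.run(["pmset", "-g", "batt"], capture_output=True, text=True).stdout
        return int(re.search(r"(\d+)%", out).group(1))

    start = battery_percent()
    time.sleep(30 * 60)  # one measurement window per display condition
    end = battery_percent()
    print(f"drained {start - end} percentage points in 30 minutes")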


I sometimes wonder if a chorded keyboard would be better for controlling the computer and for keeping a posture that wards off RSI issues. Not to mention it takes much less space than a full keyboard. I seem to remember from a recording of the demo (and a few writings on the subject) that the keyset and mouse were used together to greater effect than either one alone.

What I haven't found out is how well a multilingual writer could use these. Do the chords rely on properties of a particular language, like English? Does the chord order follow from how often you write the letter a rather than x? Would another language adapt to the same chords, or would you need to make an optimized version?


As Don Hopkins sort-of says—the original chording keyboard (and most later units) just had you inputting a binary number, which would be added to 64 to get an ASCII codepoint. No attempt was made to optimize for letter frequencies in English at this stage of design—A was one key (00001) but E was two keys (00101).
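A quick sketch of that scheme (the bit ordering is my assumption, but it reproduces the A/E example):

    # Five keys as five bits; the chord's value plus 64 gives the ASCII codepoint.
    def chord_to_char(keys_down: set[int]) -> str:
        value = sum(1 << k for k in keys_down)  # keys_down: key positions 0..4
        return chr(64 + value)

    print(chord_to_char({0}))        # 00001 -> 'A' (one key)
    print(chord_to_char({0, 2}))     # 00101 -> 'E' (two keys)
    print(chord_to_char({0, 1, 4}))  # 10011 -> 'S'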

Engelbart's style of chording keyboard barely escaped the Anglosphere. But a related invention, the stenographic keyboard, did; these are used for court reporting and live television captioning. They introduce a very different strategy for inputting text—operators of these input one full syllable at a time, phonetically, and the machine interprets the pronunciation according to a dictionary; thus in English the most common errors are homophones, which can be revised later from context. It requires quite a lot of training and practice to be proficient with them, and they are extremely language-specific.
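As a toy illustration of why homophones are the typical error class (the stroke spellings below are invented, not real steno theory):

    # One phonetic stroke can map to several homophones; the machine picks one
    # candidate and the operator revises from context later.
    STENO_DICT = {
        "THAIR": ["there", "their", "they're"],  # hypothetical stroke -> candidates
        "TOO":   ["to", "too", "two"],
    }

    def translate(strokes):
        return " ".join(STENO_DICT.get(s, ["?"])[0] for s in strokes)

    print(translate(["THAIR", "TOO"]))  # "there to" -- possibly the wrong homophones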


Braille typewriters are also very much like stenotype machines, except Braille is actually designed to replace reading and writing rather than to transcribe speech.

Though Braille does use two dots for E and one for A, and letter frequencies are mostly the same in both its native French and in English.

I'm also very much surprised his keyboard didn't fit into either the ASCII or EBCDIC encoding. Granted, both of those barely existed at the time, but still.


Yeah, I have probably conflated the two technologies somewhere along the way. And if I had read all the way to the footnotes of the original article I'd have found the keyset's chords. They are indeed counting in straight binary, going from a to z in order. Add mouse buttons as modes to get uppercase, numbers and whatnot. Interesting, really, that such a simple scheme worked.


I would hazard a guess that it would make RSI worse, because it minimizes the variety of hand motions needed to operate it. You alleviate RSI with a keyboard by constantly changing the position of your hands on it - the exact opposite of touch-typing dogma. Pecking at your keyboard is healthier.


I'm no health expert, only an expert practitioner of my own hands. Mostly I keep changing keyboards and positions. That small chorded keyset can be placed in more natural positions and moved at will to wherever you can reach. You ostensibly don't even need to look at it, though I'd assume you'd be glancing between a chord sheet for directions and whatever you're actually writing... A device like this Tap thing, which is attached to your fingers or wrist, allows even more freedom.

As for the mouse, well, I guess a trackball is easier to move about, or to stick to a chair arm or something. A touchpad might work too, but you need more real estate for precision and gestures.


Douglas Engelbart used a straightforward binary encoding scheme for the chord keyset:

Engelbart Explains Binary Text Input. Douglas Engelbart explains to co-inventor, Valerie Landau, and some blogger how binary can be used for text input.

https://www.youtube.com/watch?v=DB_dLeEasL8

Engelbart: Think about if you took each finger, and wrote a one on this one, a two on this one, a four on this one, and a sixteen on this one. And every combination would lead clear up to sixty three.

And so writing here like this the alphabet: A... B... C... D. E. F. G, H, I, JKLMNOPQRSTUVWXYZ!

https://news.ycombinator.com/item?id=43454343

The commercially available "TapXR" input device also functions as a mouse and gestural pointing device! It's a wearable tap device that works as both a Bluetooth keyboard and mouse. I haven't tried it yet, but it looks really cool.

https://www.tapwithus.com/

https://www.youtube.com/watch?v=sdm8FcsKeoM

A WRISTBAND THAT REPLACES Your Keyboard, Mouse & Handheld Controller

TapXR was designed to help humans adapt to the next generation of personal computing.

A Unified Way To Interact With Your PC, Smartphone, Tablet, SmartTV, Projector, VR, AR & XR

80+ Words Per Minute

Input up to 10 characters a second with just one hand or go even faster with two.

150+ Customizable Commands

Remap any finger combination into your favorite shortcuts, triggers, key-binds and commands

2500+ Tap Layouts

Enjoy thousands of user-created Language, Utility, Coding, Production & Gaming TapMaps - or make your own!

8 Hours of Battery Life

Get a full day of input on a single charge. Only 1 hour to recharge from zero to full!

----

My previous post about an earlier version from about 7 years ago:

https://news.ycombinator.com/item?id=17122717

DonHopkins on May 21, 2018:

I just ran across a new device called "Tap", a wearable tap glove that functions as both a bluetooth keyboard and mouse!

https://www.tapwithus.com/

I haven't had any "hands on" experience with the Tap, but it looks very cool, like a modern version of Douglas Engelbart's and Valerie Landau's HandWriter glove!

I asked Valerie Landau about it (wondering if it was her company), but she hadn't heard of it before.

They have an iOS, Android and Unity3D SDK that appeared on github recently, so you can look at the code to see how it works:

https://github.com/TapWithUs

Does this look legit? Has anybody tried it?

If it works as advertised, I'd love to develop TapPieMenus that you can use in VR, mobile, desktop computers, and everywhere else!

I'm excited about the possibility of creating easy to use, fast and reliable pie menus for Tap that users can fully customize, and use with one hand in the same way that Douglas Engelbart described you could do with two hands using a mouse and a chorded keyboard:

>"Well, when you're doing things with the mouse, you can be in parallel, doing things that take character input. And then the system we had, it actually gave you commands with characters, too. Like you had a D and a W, and it says, "you want to delete a word", and pick on which word, and click, it goes. M W would be move a word. Click on this one, click on that one, that one could move over there. Replace character, replace word, transpose words. All those things you could do with your left hand giving commands, and right hand doing it."

It would be cool to have some tactile feedback, so the tutorial could train you to type out letters by vibrating your fingers with a piezo buzzer or something. Maybe it could even secretly spell out silent invisible messages to you while you were wearing it! And you could feel a different silent finger "ring tone" depending on who was calling, then tap to answer or discard the call, or stroke with a TapPieMenu to send a canned reply.

enobrev on May 22, 2018:

LinusTechTips posted a decent review of the Tap a few weeks ago:

https://youtu.be/8za_4g5zCOM


Whoa. Thank you for the info dump. I'll see about making use of these.

That Tap device has moved from fingers to wrist, I see. Sadly it's out of stock [1]. Plus getting niche devices outside the US is expensive, and the warranty probably doesn't apply.

[1] https://www.tapwithus.com/product/tap-xr/

Edit: that LTT video makes a good case for the device, if only in typical 'tube fashion.


And since in Lisp code is data and data is code, you could go even further back. A tad sensationalist claim from the article's authors.


I find it baffling that the popularity of Jupyter and the successes of notebook analysis in science haven't brought a change in Python to better support this user base. Packaging has (slowly) progressed, and uv nicely made the experience smooth, fast and above all coherent. Yet the Python runtime and parser are the same as ever. The ipynb notebook format and now Marimo's decorator approach had to be invented on top. Python might never attain the heights of Lisp's REPL-driven development, yet I wonder if it couldn't be better. As much as I enjoy using Jupyter, it's always been something tacked on top of infrastructure that doesn't want to accommodate it. Thus you need to take care of cell order yourself or learn to use a helper tool (Jupytext, nbdev).

Me, I'd have support in the parser for a language structure or magic comment that marks cell boundaries. I would make dynamic execution of code a first-party feature, with history tracking and export of all the code sent into the runtime through it. That way, what the runtime actually saw happen could be committed, rather than what the user thought they did. Also, a better, notebook-aware evaluation model with extension hooks for different usage modes (interactive, scripting, testing).
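As a sketch of what I mean, here's a splitter on the `# %%` marker that Jupytext and some editors already recognize (just an illustration of the boundary idea, not a proposal for the actual syntax):

    # Split a plain .py source into "cells" at magic-comment boundaries, so a
    # notebook-aware runtime could track and re-run them individually.
    def split_cells(source: str) -> list[str]:
        cells, current = [], []
        for line in source.splitlines():
            if line.startswith("# %%") and current:
                cells.append("\n".join(current))
                current = []
            current.append(line)
        if current:
            cells.append("\n".join(current))
        return cells

    source = "# %% load\nx = 1\n# %% show\nprint(x)\n"
    for i, cell in enumerate(split_cells(source)):
        print(f"--- cell {i} ---\n{cell}")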

I have no solution to the ipynb JSON problem. I do think it's bad that our seemingly only solution for version control can manage only plain text, and users of all the other formats have to adapt or suffer.


This is a wonderful project. Seemingly simple on the surface. I'd love to see some notes on the frontend implementation. I see there's OpenFreeMap as, presumably, the base map, which uses MBTiles. Then custom geometry on top from PMTiles, which I assume is generated for the project. How the colormapping is done I haven't found yet. Actually, lots to unpack here.


The docs say that the extension's server is configured here: https://duckdb.org/docs/stable/extensions/ui#remote-url

But yeah, I can't find docs or source for the UI. And the extension docs refer to MotherDuck's own UI: https://motherduck.com/docs/getting-started/motherduck-quick...

So, the way this is set up is a bit confusing.


It's quite funny that the docs also say this about the configurable URL:

> Be sure you trust any URL you configure, as the application can access the data you load into DuckDB.

That's certainly not what I would expect if someone gave me a "local UI" for some database. I've only toyed with DuckDB once and was planning to look at it more - looks like I'll need to keep my guard up and check what actually is "local" and doesn't ship my data to a remote URL.


The UI looks nice and is by itself a welcome addition.

I am somewhat at odds with it being a default extension built into the DuckDB release. This is still a feature/product coming from a different company than the makers of DuckDB [1], though they did announce a partnership with the makers of this UI [2]. While DuckDB has so far thrived without VC money, MotherDuck has (at least) $100M in VC funding [3].

I guess I'm wondering where the lines are between free and open source work and commercial work here. My assumption has been that the line is between what DuckDB ships and what others in the community do. This release seems to change that.

Yes, I do like and use nice, free things. And I understand that things have to be paid for by someone. That someone is even sometimes me. I guess I'd like clarification on the future of DuckDB as its popularity and reach grow.

[1] https://duckdblabs.com

[2] https://duckdblabs.com/news/2022/11/15/motherduck-partnershi...

[3] https://motherduck.com/blog/motherduck-open-for-all-with-ser...

edit: I don't want to leave this negative-sounding post here without an addendum. I'm just concerned about DuckDB's future monetization strategies and roadmap. DuckDB is a good, useful, versatile tool. I mainly use it from Python through Jupyter, in the browser, and natively. I haven't felt the need for commercial services (plus purchasing them in my professional setting is too convoluted). This UI, while undoubtedly useful, seems to lean towards the commercial side. I merely wanted some clarity on what it might entail. I do wish DuckDB and its community even greater, better things, with requisite compensation for those who work to ensure this.


One of the DuckDB maintainers here. To clarify - the UI is not built into the DuckDB release. It is an extension that is downloaded and installed like any other extension. This extension happens to be developed by MotherDuck. We collaborated with them to streamline the experience - but fundamentally the extension is not distributed as part of DuckDB and works similarly to other extensions.

To be specific, the work we did was:

* Add the -ui command to the shell. This executes a SQL query (CALL start_ui()). The query that gets executed can be customized by the user through the .ui_command option - e.g. by setting .ui_command my_ui_function().

* The ui extension is automatically installed and loaded when the start_ui function is executed - similar to other trusted extensions we distribute. The automatic install and load can be disabled through configuration (SET autoinstall_known_extensions=false, SET autoload_known_extensions=false) and is also disabled when SET enable_external_access=false.
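From a Python client, those opt-outs look roughly like this (a sketch; the settings are the ones named above):

    import duckdb

    con = duckdb.connect()
    # Opt out of automatic extension install/load; start_ui would then require
    # an explicit INSTALL/LOAD.
    con.execute("SET autoinstall_known_extensions = false")
    con.execute("SET autoload_known_extensions = false")
    # The stricter switch: disables all external access, including extension
    # downloads and the UI's remote asset fetching.
    con.execute("SET enable_external_access = false")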


The nature of the UI as an extension is somewhat hard to understand, since its installation method differs from other extensions, even core ones. Some extensions autoload, some require an INSTALL query, and this one has its own special built-in command. Its user experience at least makes it feel more ingrained than other extensions.

Then there's the (to me) entirely new feature of an extension providing an HTTP proxy for an external web service. This part could have been explained more prominently.

Edit: the OP says "built-in local UI for DuckDB" and that a "full-featured local web user interface is available out-of-the-box". These statements made me think this feature ships with the release binary, not that it's an extension.

To clarify my point: for me it's not the possible confusion about what this extension does or how, but what this collaboration means for the future of DuckDB's no-cost and commercial usage.


I agree that in certain places the blog post seems to hint that this functionality is fully baked in - we've adjusted the blog post to be more explicit about the fact that this is an extension.

We have collaborated with MotherDuck on streamlining the experience of launching the UI through auto-installation, but the DuckDB Foundation still remains in full control of DuckDB and the extension ecosystem. This has no impact on that.

For further clarification:

* The auto-installation mechanism is identical to that of other trusted extensions - the auto-installation is triggered when a specific function is called that does not exist in the catalog - in this case the `start_ui` function. See [1]. The query I mentioned just calls that function. The only special feature here is the addition of the CLI flag (and what that flag executes is user-configurable).

* The HTTP server is necessary for the extension to function as the extension needs to communicate with the browser. The server is open-source as part of the extension code [2]. The server (1) fetches web resources (javascript/css) from ui.duckdb.org, and (2) communicates with localhost to co-ordinate the UI with DuckDB. Outside of these the server doesn't interface with other external web services.

[1] https://github.com/duckdb/duckdb/blob/main/src/include/duckd...

[2] https://github.com/duckdb/duckdb-ui


Ok, thank you for the explanation.

I realized that the extension provides an HTTP API to DuckDB. Is this perhaps going to become the official way to use DuckDB over HTTP? To me this is much more interesting than any one particular UI.

I went looking and found that there's a community extension with similar functionality: https://duckdb.org/community_extensions/extensions/httpserve...

An official, supported HTTP API with stable schema versioning would be a nice addition.


Reminiscent of what Deno are doing with their Deno K/V feature, which works in the open source project using SQLite but gets a big upgrade if you use it with Deno Deploy: https://til.simonwillison.net/deno/deno-kv

I'm OK with this. Commercial open source projects need a business model. I get why this can be controversial, but the ecosystem needs to find ways to fund future development and I'm willing to compromise on purity if it means people are getting paid for their work.

(Actually it looks like the UI feature may depend on loading closed source assets across the Internet? If so that changes my comfort level a lot, I'm not keen on that compromise.)


I had thought that the commercial side of the (heh) mother company here, DuckDB Labs, was support contracts and the like, while MotherDuck is just another VC-funded company in the DuckDB ecosystem. This new extension being added to the list of default extensions blurs the line. That it seemingly is a proxy to a closed-source product from another company makes things even murkier. I can see a point for a paid external extension, but this one feels more like an ad for another company's services.


DuckDB Labs has stock in MotherDuck to align ownership.

I actually really like the close partnerships in theory, because they align incentives, but this crosses the line by not being open enough. The tight MotherDuck integration with DuckDB for externally hosted DuckDB/MotherDuck databases is fine and good: preferential treatment where the software makes it easy to use the sponsoring service. The local UI, which is actually fully dependent on the external service, is off-putting. It's a little less bad because it's an extension, but it's still worrying from a governance and principles perspective.


I don't see this as the same thing. Deno is an open-source product within a commercial enterprise. DuckDB is an open-source project/org; MotherDuck is a for-profit company. They have tight integration and partnerships but were largely independent. This seems to be blurring that line. There is a huge ecosystem around SQLite without this confusion.


https://github.com/denoland/denokv

You have been able to self-host Deno KV for over a year now.


That doesn't change what they're saying. The self-hosted backend you're linking is a network-accessible version of the local SQLite backend. The hosted backend is transparently globally replicated and built on FoundationDB, with a very different (better) scaling story.


Given the FLOSS implementation, if one wanted, they could create their own Deno KV backend built on anything they like... Azure Cosmos DB, DynamoDB, CockroachDB are all possible, and given the relatively small API, it should be fairly easy to do if anyone wanted to do such a thing.



I think the primary concern is whether DuckDB will pull something like Redis Labs did: stay open source until it gets enough traction, then pull the rug.


To be fair, the "traction" here was AWS using their massive competitive levers to kill Redis Labs' long-existing (and quite reasonable, open-source-tolerated) monetization avenue, risking the continued funding for Redis.

To characterize this as a rug pull is unfair IMO.


I think this is a bit of a non-issue. The UI is just that, a UI. Take it or leave it. If it makes your life easier, great. If not, nothing changes about how you use DuckDB.

There is always going to be some overlap between open source contributions and commercial interests but unless a real problem emerges like core features getting locked behind paywalls there is no real cause for concern. If that happens then sure let’s talk about it and raise the issue in a public forum. But for now it is just a nice convenience feature that some people (like me) will find useful.


That's one way of looking at it. To me this UI seems like both a useful tool and an advertisement.

There's another way this could have gone. DuckDB Labs might have published the extension as providing an official HTTP API for all to use. Then, simultaneously, MotherDuck could have announced support for it in their UI - now with access to any and all databases, whether in-browser, anywhere over the official HTTP API, or in their managed cloud service.

I for one would like an HTTP API for some things that I now have to hand-roll in Python. I don't yet see much need for the UI. I'm not looking for a public, multi-user service, just something I can use locally that doesn't have to live inside a process (such as Python or a web browser). There's such an API in the extension now, but it's undocumented and in C++ [1]. There's also the option of using a third-party community extension that provides an HTTP API [2]. Then there's one that supports remote access with Arrow Flight, but gRPC only, it seems [3]. An official, stable version would be nice.

[1] https://github.com/duckdb/duckdb-ui/blob/main/src/http_serve...

[2] https://duckdb.org/community_extensions/extensions/httpserve...

[3] https://github.com/Query-farm/duckdb-airport-extension
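For illustration, this is roughly what I end up hand-rolling today - a toy, stdlib-only sketch, not something I'd expose beyond localhost:

    # Toy local HTTP endpoint for DuckDB: POST a SQL string, get JSON rows back.
    # No auth, single-threaded -- a sketch, not a product.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import duckdb

    con = duckdb.connect()  # in-memory; pass a path for a persistent database

    class QueryHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            sql = self.rfile.read(int(self.headers.get("Content-Length", 0))).decode()
            try:
                cur = con.execute(sql)
                payload = {"columns": [d[0] for d in (cur.description or [])],
                           "rows": cur.fetchall()}
                status = 200
            except Exception as exc:
                payload, status = {"error": str(exc)}, 400
            body = json.dumps(payload, default=str).encode()
            self.send_response(status)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("127.0.0.1", 8080), QueryHandler).serve_forever()

Then something like curl --data 'SELECT 42' http://127.0.0.1:8080 gives the rows back as JSON.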


This looks very promising. I'll have to think through my visualization cases against the new possibilities this enables.

I have been intermittently following Rerun, a "robotics-style data visualization" app [1]. Their architecture bears certain similarities [2]: wgpu in both, egui and imgui, Rust with Python. Rerun's stack does compile to WASM and works in the browser. The use cases seem different, but somewhat the same. I don't do scientific or robotics stuff at all, so no opinion on the feasibility of either...

[1] https://rerun.io

[2] https://github.com/rerun-io/rerun/blob/main/ARCHITECTURE.md


At first read this seems really promising. Getting into the Elixir/Erlang ecosystem from Python has seemed too hard to be worth the time. And once there, I wouldn't be able to leverage all the Python stuff I've learned. With Pythonx, gradual learning now seems much more achievable.

It wasn't mentioned in the article, but there's an older blog post on fly.io [1] about Livebook, GPUs, and their FLAME serverless pattern [2]. Since there seems to be some common ground between these companies, I'm now hoping Pythonx support is coming to FLAME-enabled Erlang VMs. I'm just going off the blog posts, and am probably using the wrong terminology here.

For Python's GIL problem mentioned in the article I wonder if they have experimented with free threading [3].

[1] https://fly.io/blog/ai-gpu-clusters-from-your-laptop-liveboo...

[2] https://fly.io/blog/rethinking-serverless-with-flame/

[3] https://docs.python.org/3/howto/free-threading-python.html


FLAME runs the same code base on another machine. FLAME with Pythonx should just work. FLAME is a set of nice abstractions on top of a completely regular Erlang VM.

Chris Grainger, who pushed for the value of Python in Livebook, has given at least two talks about the power and value of FLAME.

And of course Chris McCord (creator of Phoenix and FLAME) works at Fly and collaborates closely with Dashbit who do Livebook and all that.

These are some of the benefits of a cohesive ecosystem. Something I enjoy a lot in Elixir. All these efforts are aligned. There is nothing weird going on, no special work you need to do.


Yeah, looks like it works fine, here's an example: https://pastebin.pl/view/a10aea3d

I'll add: FLAME is probably a great addition to Pythonx. While a NIF can crash the node it is executed on, FLAME calls are executed on other nodes by default. So a crash here would only hard-crash processes on that same node (FLAME lets you group calls so that a FLAME node can have many being executed on it at any time).

Errors bubble back up to the calling process (and crash it by default but can be handled explicitly), so managing and retrying failed calls is easy.


Well, this seems nice and easy. Thank you for the example. There's local, Kubernetes and fly.io support for FLAME (that I found after a short search). I envision running the main Erlang VM continuously on a lightweight server and starting beefier machines for Python tasks as needed.


That's a really good option - neatly sidesteps the risk of a NIF crash with practically no extra code.


You do still need some infrastructure - the FLAME LocalBackend is mostly for dev, and I'm pretty sure it just runs in the same VM as the parent.

But yeah, if you're doing ML tasks it makes a lot of sense to farm those out to beefier or GPU-equipped nodes anyway, so at that point it's just a natural synergy, AND you get the crash isolation.

