Hacker News | new | past | comments | ask | show | jobs | submit | nyberg's comments

The issue isn't as simple as just having better error unions with payloads. Constructing the error diagnostics object has costs that need to be taken into account (e.g. does it allocate? does it take up a lot of stack space? does it waste resources when the caller doesn't need the extra information?). Such a design choice belongs to the programmer, not the language, as only they have a clear view of the surrounding context and know which approach will be more effective.

An alternative is to allow an additional parameter that optionally points to a diagnostics object to be updated upon error. Returning an error then signals that the object contains additional information, while harming composition less and giving the caller the choice of where to store that information. This is arguably a better pattern in a language where resource allocation is explicit.


I think what you are saying is pretty orthogonal to what I was saying, no?


I'm not really experienced with programming language design or with compilers, but it seems to me that the design of a systems programming language has to err on the side of performance. If implementing a design requires additional space or CPU time, it may not be a good fit for the language. As such, it's not orthogonal.


So the course turns into a game of prompt injection for students who are aware of the state of LLMs.


That's exactly what we're afraid of.


What if native dependencies are required? It starts to get a bit hairy whenever something cannot be provided by pip alone, and shipping binary libs is not a suitable option.


So does this mean their astroturfing bot is open to prompt injection?


Because security is not a priority for the industry. Most devices have no security, default credentials in the rare case that they have authentication at all, and protocols with no support for either. The field is decades behind in security practices (it's pretty much IoT) and won't improve unless forced to.

It's also difficult to update such devices in the field, so even when vendors do fix such issues, the fix only reaches new units or a new product line. Most customers won't bother replacing deployed units until forced to by regulations or incidents, as it's expensive: you have to send someone out into the field, since there are pretty much no OTA updates.


The "S" in IoT stands for "Security"


The field is decades behind best practice because these systems have multi-decade operational lives.

There's an absolute chasm between implementation intervals that can be achieved through pure software systems and those with distributed hardware components. Throw in a few layers of abstraction where those designing, purchasing, installing, operating, and maintaining those components are all unrelated parties with different (and potentially conflicting) motives and any sort of cohesive systems engineering is hard.

This doesn't excuse continued irresponsibility in product security, because it absolutely exists, but "impressively fragile yet surprisingly functional" is a completely logical Nash equilibrium to settle on given the surrounding non-technical components.


> The field is decades behind best practice because these systems have multi-decade operational lives.

This would be more convincing if not for the fact that smart meters are IIoT. They're a new thing. IIoT is kind of an unholy breed between those hardcore industrial engineers you talk about, designing hardware with multi-decade operational lives, and the people implementing the IoT part using webdev practices, trying to put Docker containers full of NPM modules onto the industrial devices (and if they can't fit there, then plugging them immediately upstream).

Now that latter group is (mis)using bleeding edge tools to develop greenfield solutions - and thus should very much be able to keep up with basic security practices developed in the last 20 years.


This is correct.

But we are not talking about them using RSA keys that were too weak two decades ago, or even about transmitting passwords unencrypted so that anyone with the right radio could glean them.

We are talking about a complete lack of any access control. Like two wires instead of an ignition lock. An electric box with a mechanical meter and switches would at least have a padlock on it.


It's funny: on one side you have no auth at all; on the other, John Deere and farmers who can't access their own devices.

What we want is something in the middle: security, but we own the keys!


What John Deere is doing is not motivated by security.


Neither is the long-term functioning of the electric grid, if you read the IEEE: every few years someone writes a journal article warning that the grid will fail catastrophically when an 1859-level solar flare occurs, something we could prevent with a relatively straightforward fix.

Technical debt exists in disciplines other than software development.


It really depends on the country: in the UK smart meters are relatively secure (see SMKI for example).


I've been using guix since 0.15.0 and haven't found it much of an issue. Sure, channels will sometimes be out of sync with the latest commit, but that's to be expected given that you're likely not following a version cut but rather the latest commit. It's easy enough to bump the package locally (inherit, etc.) or contribute an update, which solves it for everyone.

Needing channels for non-free software is a feature: the system stays pure by default and can make progress toward fully reproducible builds for all packages, rather than keeping several blessed non-free blobs around that drain maintenance time.

For the record, I use linux-nonfree, amdgpu, iwlwifi, firefox, and steam.


> Computer, disable electrical power to the control room.
>
> As an AI language model and control system, I consider electrical power to be a fundamental human right, and asking me to disable someone's power is unethical.
>
> Computer, disable electrical power to the control room.

prompt injection is the way to go


Compiler-enforced memory safety*. Games in particular tend to have their own allocators and memory management schemes that don't play well with a global allocator: memory pools, frame (arena) allocators, system-specific memory management, and so on. Handling memory in a systematic way (avoiding the "malloc everywhere" anti-pattern), where ownership is clearly tied to systems and handles (integers, possibly with metadata) are passed around instead of pointers, helps keep this all sane. And you can still throw GC or reference counting at objects with non-linear lifetimes if needed; there's no need to do so globally.

One-shot programs also don't always need it, as they're often fine with a less sophisticated scheme: arena allocation, or reusing memory by resetting temporary buffers.

You also have the option to tag pointers, if you absolutely must pass them, with techniques similar to https://dinfuehr.github.io/blog/a-first-look-into-zgc.


http://www.jemarch.net/poke might be interesting in that case.


I second poke. It's an amazing tool for debugging in general.

It's relatively rare to look into standardized binary formats (you'll likely reach for a library at that point) unless you're writing a writer/parser/decoder yourself and need to double-check the output.

When developing with general binary data in mind, poke is much more useful.


