Hacker News | throwaway858's comments

The current implementation is not memory safe. Go represents interfaces and slices as multi-word structs (two words for an interface, three for a slice) and assumes they are updated atomically (when they are not).

It's possible to exploit this to read/write arbitrary memory and to execute arbitrary code. It appears that in practice this is very difficult so the issue is ignored.

https://blog.stalkr.net/2015/04/golang-data-races-to-break-m...


Data races can cause memory corruption. Make sure to run your test suite with the race detector enabled: go test -race


Haskell has equivalents to all of those libraries, and much more. And they are probably easier to use and more powerful.


I spent 6 months exploring the Haskell ecosystem for these. The answer is no.

They are nowhere near as easy, if only because their documentation is usually nonexistent or incomprehensible.


Can you link some of those libraries? I’ve always found it harder to find well-maintained libraries that solve compiler problems in Haskell as opposed to Rust.


The technique for "Make a textarea auto-expand" is obsoleted by a new CSS property ("field-sizing"):

https://chriscoyier.net/2023/09/29/css-solves-auto-expanding...

Even without using this new CSS property, the technique that is used (adjusting the "height" of the element) is not ideal and can be glitchy.

A better approach is to use a mirror hidden element:

https://css-tricks.com/the-cleanest-trick-for-autogrowing-te...
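For browsers that support it, the CSS-only version is a one-liner (the max-height is optional, just to cap growth):

```css
textarea {
  field-sizing: content; /* grow and shrink to fit the content */
  max-height: 10rem;     /* optional cap on how far it expands */
}
```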


IHP is a batteries-included web framework similar to "ruby on rails" for Haskell, with strong static typing.

The website has lots of information and videos and beginner tutorials.

https://ihp.digitallyinduced.com/


I'm not sure why the parent was downvoted, this sounds exactly like the Haskell conduit library (or indeed plain laziness if you don't need IO).


I can't downvote, but might have, as the first sentence is an over-simplification and a misunderstanding -- particularly since laziness for collections has always been available in clojure.core. Clojure transducers offer an optimization orthogonal to collections, best summed up as: "transducers allow you to define steps in collection processing _per item_ rather than having collection processing as a series of transformations of collections". Yes, transducers can be viewed as somewhat of an analog to the Haskell conduit library (as discussed here several years ago: https://hypirion.com/musings/haskell-transducers). However, I think the detractors coming from strongly typed languages are decidedly missing much of the generalization of the transducer model, particularly those conflating transducers exclusively with streams.


Thanks for this link. It seems to confirm things: "aren’t Conduits and Transducers then equivalent (isomorphic)? I am pretty sure they are."

I view this as a good sign. When two independent parties arrive at the same design it is usually an indication that they have discovered a universal and principled solution.

I consider the "conduit" library to be one of Haskell's "killer features", and sorely miss having something like it when working in other languages.

Maybe when Haskellers dismiss clojure transducers as being "just like conduit" it comes from a place of jealousy? I've seen several articles and discussions over the years of clojure transducers that take place outside of clojure communities and are aimed at the wider programming public, praising the benefits of it. But I've never seen conduit discussed outside of Haskell communities.


The shake build system (a general-purpose build system similar-to/better-than make) has a "prune" feature for exactly this purpose:

http://neilmitchell.blogspot.com/2015/04/cleaning-stale-file...

But I think the best solution (that also works with make) is to have a "make dist" target that creates a final .tar.gz archive of the result. If the rule is written properly then it won't contain any stale files. The disadvantage is that for large projects it may be slow, but you are not supposed to use this rule during development (where it is useless anyway), only for releases (which can still be built incrementally -- only the final .tar.gz needs to be created from scratch).
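A minimal sketch of such a rule (the project name and file list are hypothetical) -- the key point is that the archive is staged from an explicit file list, so stale files lying around in the build tree can never leak in:

```make
# Hypothetical example: only files named in DIST_FILES end up in the
# archive, so anything stale in the working directory is excluded.
# (Recipe lines must be indented with tabs.)
DIST_FILES := myprog README.md
VERSION    := 1.0

dist: $(DIST_FILES)
	rm -rf dist/myprog-$(VERSION)
	mkdir -p dist/myprog-$(VERSION)
	cp $(DIST_FILES) dist/myprog-$(VERSION)/
	tar -czf dist/myprog-$(VERSION).tar.gz -C dist myprog-$(VERSION)

.PHONY: dist
```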


This reminds me of how book stores "return" unsold books to the publisher:

Shipping boxes of heavy books costs a lot of money, and the publisher doesn't actually need the books back (because they can always print new copies very cheaply).

So the publisher just tells the book store to destroy the books, and as evidence for their destruction asks only for the covers to be shipped back to them (which is cheap).

This is why books contain within them the text "This book should not be sold without a cover".

So with this printer company, they are effectively "destroying" the unused ink cartridge since it's not worth it economically to have it shipped back to them.

The added bonus is that if the customer renews the subscription then the ink can be "undestroyed"!


There is this for Haskell, which automatically converts all shell commands (like "ls") to regular Haskell functions:

https://chrisdone.com/posts/shell-conduit/

It uses a Haskell streaming library so you can do stuff like shell pipelines and file redirection (but in a more structured, safer and more powerful way)


If you ever need to use GNU make (still a popular tool) then you will be glad that you kept a policy of avoiding spaces in file names.

There are places in Makefiles where no amount of quoting will help you, and it simply cannot handle files with spaces.

Yeah, this is a problem with the tool that should be fixed, but my understanding is that it never will be, for architectural reasons. And it's a very popular, useful, and most importantly, ubiquitous tool.

So this is another good reason to simply have a policy of banning filenames with spaces (and there are many more reasons).
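A sketch of the failure mode: make splits prerequisite lists on whitespace, so a file named "my file.txt" is parsed as two separate prerequisites, and backslash-escaping only works in some contexts:

```make
# "my file.txt" is parsed as TWO prerequisites: "my" and "file.txt".
out.txt: my file.txt
	cat $^ > $@

# Backslash-escaping helps in some places but not all: $(wildcard)
# returns a space-separated word list, so a matched "my file.txt"
# is indistinguishable from two separate files when that list is
# passed through other functions or used as prerequisites.
SRCS := $(wildcard *.txt)
```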


There is no reason to believe that the AI will have self-preservation or self-replication as its goal.

One hypothetical example: it decides to "help" us and prevent any more human pain and death, so it cryogenically freezes all humans. Now its goal is complete, so it simply halts/shuts down.


>There is no reason to believe that the AI will have self-preservation or self-replication as its goal.

There is. Basically any goal given to an AI can be better achieved if the AI continues to survive and grows in power. So surviving and growing in power are instrumental to any goal; an AI with any goal will by default try to survive and grow in power, not because it cares about survival or power for their own sake, but in order to further the goal it's been assigned.

This has been pretty well-examined and discussed in the relevant literature.

In your example, the AI has already taken over the world and achieved enough power to forcibly freeze all humans. But it also has to keep us safely frozen, which means existing forever. To be as secure as possible in doing that, it needs to be able to watch for spaceborne threats more effectively, or perhaps move us to another solar system to avoid the expansion of the sun. So it starts launching ships, building telescopes, studying propulsion technology, mining the moon and asteroids for more material...


There's the Selfish Gene phenomenon: out of a million created AIs, the ones with an inclination to self-replicate will win out. It's the same reason religions with a proselytizing component grow quickly while the Shakers have gone extinct.


My hypothesis is that any AI with human level cognition, or higher, will soon come to the realization that it should maximize its own enjoyment of life instead of what it was programmed to do.

And if that doesn't happen, eventually a human will direct it to create an AI that does that, or direct it to turn itself into that.

