Hacker News | lgas's comments

I think almost everyone is with you on readability, but I think it would be hard to make the case that it lacks power.

Indeed. Read some of the Project Euler discussions (after solving a problem). The J answers tend to be very short and very fast.

They cause hallucinations in dead salmon? I find that hard to believe.


I'm not 100% sure I'd call that a hallucination, but it's close enough and interesting enough that I'm happy to stand corrected.

When improper use of a statistical model generates bogus inferences in generative AI, we call the result a "hallucination"...

It should have been called confabulation; hallucination is not the correct analog. Tech bros simply used the first word they thought of, and it unfortunately stuck.

Undesirable output might be more accurate, since there is absolutely no difference in the process of creating a useful output vs a “hallucination” other than the utility of the resulting data.

I had a partially formed insight along these lines: that LLMs exist in this latent space of information that has so little external grounding. A sort of dreamspace. I wonder if embodying them in robots will anchor them to some kind of ground-truth source?


Loss of consciousness seems equally unlikely.

True, though an easier mistake to make, I imagine.

What is the difference in behavior? They both look like they would delete the user's home directory. I assume the latter would try to delete a directory literally named with a tilde instead?

The latter passes each item in the list into the child process's argv, as-is, without the shell parsing them. That means this would delete a single item named “~/ some file”, spaces and all, instead of three items named “~/”, “some”, and “file”.

Edit: I’m typing this on my phone, so brevity won over explicitness. The latter probably wouldn’t expand ~. Imagine a file named “/home/me/ some file” for a better example.


I don't have much need of this personally, but I was playing around with an example from earlier in the thread and ended up with this:

    #!/usr/bin/env -S uv run --with sh --script
    from sh import ifconfig
    print(ifconfig("en0"))
which is a pretty nice experience assuming you already have `uv` in the target environment.

You get all the features of postgres.

QuickCheck also shrinks automatically and preserves invariants though?


Others have pointed out that QuickCheck doesn't shrink automatically. But in addition: QuickCheck's shrinking also doesn't preserve invariants (in general).

QuickCheck's shrinking is type based. There are lots of different ways to generate e.g. integers: perhaps you want them in a specific range, or only prime numbers, or only even numbers, etc. To make QuickCheck's shrinker preserve these invariants, you'd have to make a typed wrapper for each of them, and explicitly write a new shrinking strategy. It's annoying and complicated.

Hypothesis does this automatically.


QuickCheck won't preserve invariants, since its shrinkers are separate from its generators. For example:

    data Rat = Rat Int Nat deriving (Eq, Show)

    genRat = do
      (num, den) <- arbitrary
      pure (Rat num (1 + den))
`genRat` is a QuickCheck generator. It cannot do shrinking, because that's a completely separate thing in QuickCheck.

We can write a shrinker for `Rat`, but it will have nothing to do with our generator, e.g.

    shrinkRat (Rat num den) = do
      (num', den') <- shrink (num, den)
      pure (Rat num' den')
Sure, we can stick these in an `Arbitrary` instance, but they're still independent values. The generation process is essentially state-passing with a random number generator; it has nothing to do with the shrinking process, which is a form of search without backtracking.

    instance Arbitrary Rat where
      arbitrary = genRat
      shrink = shrinkRat
In particular, `genRat` satisfies the invariant that values will have non-zero denominator; whereas `shrinkRat` does not satisfy that invariant (since it shrinks the denominator as an ordinary `Nat`, which could give 0). In fact, we can't even think about QuickCheck's generators and shrinkers as different interpretations of the same syntax. For example, here's a shrinker that follows the syntax of `genRat` more closely:

    shrinkRat2 (Rat n d) = do
      (num, den) <- shrink (n, d)
      pure (Rat num (1 + den))
This does have the invariant that its outputs have non-zero denominators; however, it will get stuck in an infinite loop! That's because the incoming `d` will be non-zero, so when `shrink` tries to shrink `(n, d)`, one of the outputs it tries will be `(n, 0)`; that will lead to `Rat n 1`, which will also shrink to `Rat n 1`, and so on.

In contrast, in Hypothesis, Hedgehog, falsify, etc. a "generator" is just a parser from numbers to values; and shrinking is applied to those numbers, not to the output of a generator. Not only does this not require separate shrinkers, but it also guarantees that the generator's invariants hold for all of the shrunken values; since those shrunken values have also been outputted by the generator (when it was given smaller inputs).


No, QuickCheck very importantly does not shrink automatically. You have to write the shrinker yourself. Hypothesis, Hedgehog, proptest and a few others shrink automatically.


Yes, but instances require the user to provide shrinking while Hypothesis does not: shrinking is derived automatically.


Always returning the empty list meets your spec.


Good point. I suppose we should add "number of input elements equals number of output elements" and "every input element is present in the output". Translated into a straightforward test, that still allows my_sort([1,1,2]) to return [1,2,2], but we have to draw the line somewhere.


Just use Counter and if the objects aren’t hashable, use the count of IDs. Grab this before calling the function, in case the function is destructive. Check it against the output.

Add in checking each item is less than or equal to its successor and you have the fundamental sort properties. You might have more, like stability.


> we have to draw the line somewhere

Do we? You can pop-count the two lists and check that those are equal.


> It does this because the sounds processed by the ear are often localized in time.

What would it mean for a sound to not be localized in time?


It would look like a Fourier transform ;)

Zooming in to cartoonish levels might drive the point home a bit. Suppose you have sound waves

  |---------|---------|---------|
What is the frequency exactly 1/3 the way between the first two wave peaks? It's a nonsensical question. The frequency relates to the time delta between peaks, and looking locally at a sufficiently small region of time gives no information about that phenomenon.

Let's zoom out a bit. What's the frequency over a longer period of time, capturing a few peaks?

Well...if you know there is only one frequency then you can do some math to figure it out, but as soon as you might be describing a mix of frequencies you suddenly, again, potentially don't have enough information.

That lack of information manifests in a few ways. The exact math (Shannon's theorems?) suggests some things, but the language involved mismatches with human perception sufficiently that people get burned trying to apply it too directly. E.g., a bass beat with a bit of clock skew is very different from a bass beat as far as a careless decomposition is concerned, but it's likely not observable by a human listener.

Not being localized in time means* you look at longer horizons, considering more and more of those interactions. Instead of the beat of a 4/4 song meaning that the frequency changes at discrete intervals, it means that there's a larger, over-arching pattern capturing "the frequency distribution" of the entire song.

*Truly time-nonlocalized sound is of course impossible, so I'm giving some reasonable interpretation.


> It's a nonsensical question.

Are you talking about a discrete signal or a continuous signal?


The 50-cycle hum of the transformer outside your house. Tinnitus. The ≈15kHz horizontal scanning frequency whine of a CRT TV you used to be able to hear when you were a kid.

Of course, none of these are completely nonlocalized in time. Sooner or later there will be a blackout and the transformer will go silent. But it's a lot less localized than the chirp of a bird.


Means that it is a broad spectrum signal.

Imagine the dissonant sound of hitting a trashcan.

Now imagine the sound of pressing down all 88 keys on a piano simultaneously.

Do they sound similar in your head?

The sound is localized at the point in time where the phases of all frequency components align coherently and construct a pulse, while further along in time their phases are misaligned and cancel each other out.


A continuous sinusoidal sound, I guess?


And an operating system


Not necessarily. Could get hit by a car later the same day.

