
To be the modern and sane C++ that C++ could have been (rather than the complex collection of tacked-on languages that C++ is), with modules instead of the mess of C++'s headers, and with instant compilation times that do not need a compilation server farm.

It is quite ridiculous to place C++'s metaprogramming and D's on the same footing. For one, in D it's the same language, and one can choose whether to execute compile-time-constant parts at compile time or at run time. In C++ it's a completely different language that was bolted on. C++ did adopt compile-time constant expressions from D, though.
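
As a minimal sketch (my example, not part of the original comment), the constexpr facility is what that adoption looks like on the C++ side: one ordinary function body, and the context decides whether it runs at compile time or at run time.

    #include <cstdio>

    // An ordinary function, usable both at compile time and at run time.
    constexpr long factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    int main() {
        constexpr long ct = factorial(10); // forced to compile time
        int n = 10;                        // value not known until run time
        long rt = factorial(n);            // evaluated at run time
        std::printf("%ld %ld\n", ct, rt);
        return 0;
    }

D's CTFE makes the same choice the same way: a single function in the one language, evaluated whenever the context demands.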

Hundreds? You must be joking.

Small multiples of a hundred years are like flashy new upstarts on the block as far as traditional Indian art goes.


Yes. Very few non-Dravidian languages spoken in India have that specific sound.

The only exception I can think of is Marathi. The 'el' in 'sakal' is roughly the same.


Technically it’s a retroflex approximant [1] and is found in many places (often not as a separate character or phoneme).

But I think we’ve hijacked a cultural thread with enough phonetics for now!

[1] https://en.wikipedia.org/wiki/Voiced_retroflex_approximant


Wow, I didn't know this. Thanks for sharing!

Marathi also has the ch vs ts thing. Similar issues turn up in transliterating Cyrillic -- Chebyshev vs Tschebyshev.

I do not know the inner details of Zstandard, but I would expect it to at least gather suffix/prefix statistics or word-fragment statistics, not just whole words and phrases.

The thing is that two English texts on completely different topics will compress better together than, say, an English and a Spanish text on exactly the same topic. So compression really only looks at the form/shape of the text, not its meaning.
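
A quick way to see this (an illustrative sketch using Zstandard's one-shot C API; the sample strings are placeholders of mine, and real texts would need to be much longer for the effect to show clearly):

    #include <zstd.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    // Compressed size of a string at a given zstd level.
    static size_t zstd_size(const std::string& s) {
        std::vector<char> buf(ZSTD_compressBound(s.size()));
        size_t n = ZSTD_compress(buf.data(), buf.size(),
                                 s.data(), s.size(), /*level=*/19);
        return ZSTD_isError(n) ? 0 : n;
    }

    int main() {
        std::string en_a = "The cat sat on the mat and watched the rain fall.";
        std::string en_b = "Interest rates rose after the bank changed policy.";
        std::string es_a = "El gato se sento en la alfombra y miro caer la lluvia.";
        // Shared vocabulary and morphology help the same-language pair,
        // even though the cross-language pair shares its *meaning*.
        std::printf("en+en: %zu bytes\n", zstd_size(en_a + en_b));
        std::printf("en+es: %zu bytes\n", zstd_size(en_a + es_a));
        return 0;
    }

(Compile with -lzstd.)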

Yes, of course; I don't think anyone will disagree with that. My comment had nothing to do with meaning but was about the mechanics of compression.

That said, lexical and syntactic patterns are often enough for classification and clustering in a scenario where the meaning-to-lexicon mapping is fixed.

The reason compression-based classifiers trail a little behind classifiers built from first principles, even in this fixed-mapping case, is somewhat subtle.

Optimal compression requires correct probability estimation, and correct probability estimation yields the optimal classifier. In other words, optimal compressors -- equivalently, correct probability estimators -- are sufficient.

They are, however, not necessary: one can obtain the theoretically best classifier without estimating the probabilities correctly.
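
To spell that out in code-length terms (my notation, not from the original comments): under a model q, the ideal code length and its excess over the entropy of the true source p are

    L_q(x) = -\log_2 q(x)
    \mathbb{E}_p[\, L_q(X) \,] - H(p) = D_{\mathrm{KL}}(p \,\|\, q) \ge 0, \quad \text{with equality iff } q = p

so optimal compression forces q = p. A classifier, on the other hand, only needs the argmax:

    \hat{c}(x) = \arg\max_c \, p(c \mid x) = \arg\max_c \, f\big(p(c \mid x)\big) \quad \text{for any strictly increasing } f

so any scores that order the classes correctly will do; calibrated probabilities are overkill.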

So in the context of classification, compressors are solving a task that is much, much harder than necessary.
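
For concreteness, here is a sketch of the standard compression-based approach: the Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with Zstandard standing in for the compressor C and a 1-nearest-neighbour rule on top. All names here are mine, not from the thread.

    #include <zstd.h>
    #include <algorithm>
    #include <string>
    #include <vector>

    // Compressed size, used as the (approximate) description length C(x).
    static size_t csize(const std::string& s) {
        std::vector<char> buf(ZSTD_compressBound(s.size()));
        size_t n = ZSTD_compress(buf.data(), buf.size(), s.data(), s.size(), 19);
        return ZSTD_isError(n) ? 0 : n;
    }

    // Normalized Compression Distance between two texts.
    static double ncd(const std::string& x, const std::string& y) {
        size_t cx = csize(x), cy = csize(y), cxy = csize(x + y);
        return (double(cxy) - double(std::min(cx, cy))) / double(std::max(cx, cy));
    }

    struct Example { std::string text, label; };

    // 1-NN under NCD: the label of the closest training text wins.
    std::string classify(const std::string& query,
                         const std::vector<Example>& train) {
        const Example* best = nullptr;
        double best_d = 1e9;
        for (const Example& e : train) {
            double d = ncd(query, e.text);
            if (d < best_d) { best_d = d; best = &e; }
        }
        return best ? best->label : std::string();
    }

Note that the classifier never estimates a probability; it only needs the distances to order correctly, which is exactly the weaker requirement above.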


It's not specifically aware of the syntax - it'll match any repeated substrings. In English text that just usually ends up meaning words and phrases.
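
A toy illustration of that (my sketch, not zstd's actual match finder -- real ones use hash chains or suffix structures): LZ-style compressors just look for the longest earlier copy of the bytes at the current position and emit a (distance, length) back-reference, with no concept of words.

    #include <cstddef>
    #include <string>
    #include <utility>

    // Longest match between the text starting at `pos` and any earlier
    // position. Overlapping matches are allowed, as in real LZ coders.
    std::pair<std::size_t, std::size_t>
    longest_back_match(const std::string& s, std::size_t pos) {
        std::size_t best_dist = 0, best_len = 0;
        for (std::size_t start = 0; start < pos; ++start) {
            std::size_t len = 0;
            while (pos + len < s.size() && s[start + len] == s[pos + len])
                ++len;
            if (len > best_len) { best_len = len; best_dist = pos - start; }
        }
        return {best_dist, best_len}; // worth a back-reference if len is large
    }

On "the cat sat on the mat" it will happily reuse "at" or "the " -- whole words are simply the repeats English happens to produce.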

Make that "every vibrating surface can be a potential microphone ..."

The laser-on-a-hotel-window experiment comes to mind.


With a high-speed camera, any vibrating reflective object, like a potato-chip bag, can become a weak microphone if you have line of sight, even behind a soundproof window: https://www.youtube.com/watch?v=FKXOucXB4a8

Just ignore the trolls.

Cite an example please.

Seems she isn't interested in dragging a bit of fame and recognition her way.

It's a low-effort way to do that when the other party cannot defend himself.


Very funny. You will probably be misunderstood though.
