Darkskiez's comments | Hacker News

Stop disagreeing with me. I don't want any more prompts from you; your code is now externally maintained.


I used to work on the system at Google that processed deleted emails to ensure they were deleted across all the systems they might have touched (e.g. deleting any calendar events created by the email, rebuilding indexes, deleting backup restore keys, etc.). Deleting an email was significantly more resource-intensive than just leaving it alone, so this advice could only make the situation worse.
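
To make that concrete, here is a minimal sketch of the fan-out a single deletion implies. Every handler below is a hypothetical illustration, not one of Google's actual systems:

    # Sketch: deleting one email fans out into many dependent cleanups.
    # All handler names are hypothetical, for illustration only.
    from typing import Callable

    def delete_calendar_events(msg_id: str) -> None:
        print(f"removing calendar events created from {msg_id}")

    def rebuild_search_index(msg_id: str) -> None:
        print(f"rebuilding index shards that referenced {msg_id}")

    def drop_backup_restore_keys(msg_id: str) -> None:
        print(f"dropping backup restore keys for {msg_id}")

    CLEANUP_HANDLERS: list[Callable[[str], None]] = [
        delete_calendar_events,
        rebuild_search_index,
        drop_backup_restore_keys,
    ]

    def process_deletion(msg_id: str) -> None:
        # Each delete triggers several downstream operations, which is
        # why it costs far more than leaving the message alone.
        for handler in CLEANUP_HANDLERS:
            handler(msg_id)

    process_deletion("msg-123")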


Do you happen to know why deleting lots of emails at once doesn't really work in Gmail? Select all 100k or so and delete; it gets rid of the first N but not the rest, and N is on the order of a few hundred. It means deleting old emails is far more time-consuming than it should be.


Try just leaving the tab sitting open for an hour. I have a suspicion the frontend works iteratively on them in the background, and the UI only updates occasionally. I haven't tried it with deletions, but with "mark as read" on a triage inbox this has worked with 40k+ emails.


That system fed into the one I touched, which was an offline batch pipeline, so I wasn't too involved with the Gmail internals you're talking about. I know that about 10 years ago many Googlers had the same sort of problems you describe; I assume it's a tiny bit better these days, as most of the backend has been replaced over that time. Expensive operations are often slow to ensure there's reasonable isolation between users, so you don't hurt someone else's experience when you do these things. They aren't frequent operations/CUJs, so they aren't optimized either.


It would require queueing and progress-tracking infrastructure once a request gets too large to process in the span of a few seconds, which is a lot of stuff to build for a very rare operation.
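
A minimal sketch of what that infrastructure might look like, chunking one oversized request into queued batches with a progress counter. This is entirely hypothetical and reflects nothing about Gmail's internals:

    # Sketch of queueing + progress tracking for a bulk delete that
    # can't finish within one request. Hypothetical, not Gmail's design.
    from dataclasses import dataclass

    BATCH = 500  # "first N, order of a few hundred", as described above

    @dataclass
    class BulkDeleteJob:
        message_ids: list[str]
        done: int = 0

        @property
        def finished(self) -> bool:
            return self.done >= len(self.message_ids)

        def run_one_batch(self) -> None:
            batch = self.message_ids[self.done:self.done + BATCH]
            for msg_id in batch:
                pass  # the real per-message delete would go here
            self.done += len(batch)

    job = BulkDeleteJob([f"msg-{i}" for i in range(1700)])
    while not job.finished:
        job.run_one_batch()
        print(f"progress: {job.done}/{len(job.message_ids)}")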


Possibly worse for many users is that labelling has the same flaw.

And even if it does complete, you'll often never know. When you issue the command, Gmail tells you "I'll do it later", and often never notifies you that it finished.


Except you can store the passwords on a USB key or on a remote device over Bluetooth, and then also keep them secret from the potentially compromised host.


The first passkeys were physical (USB) keys. And you never share the key with a host or server.


I need to write up my experience. But I'm trying it out. Linux needs something like this. I've had issues, posted traces and had them fixed in seconds. Pretty damn amazing. I'd love to see a bigger team involved though.


My experience also. Kent is obviously very committed to the project.


A change to a filesystem should never be made in seconds.


Confidence intervals don’t have precise timelines associated with them. Sometimes you know exactly what the problem is when you hear the symptoms.

We always balance new work versus cleanup. I always have a laundry list of beefs with my own work. You often have a sneaking suspicion that a piece of code is wrong in a manner you can't quite put your finger on. Or you do, but a new customer trumps a hypothetical error. Or ten people are blocked waiting for you to merge some other feature. And then someone gives you a test case and you know exactly what is wrong. Sometimes the fix is a null check, an off-by-one error, or sorting results. And sometimes the repro case basically writes itself.


Only the newest Sonos speakers have Bluetooth; the problems with Sonos have nothing to do with it. https://www.linkedin.com/pulse/what-happened-sonos-app-techn... has a good write-up.


What are some good alternatives to express the same concept?


“Imitation without understanding”, “imitating but misconstruing”, “mindless imitation”, “superficial emulation”, &c.

I think “cargo culting” in the popular sense means little more than that (whereas actual cargo culting is much more complex, as the featured article describes).


Might I propose ‘Skinner-Boxing’:

something happened but you’re not sure why, so you guess it was because of something you did, and you decide to ritualistically repeat what you did in the hopes that the thing that happened before happens again.

It’s a misunderstanding of cause and effect: when you repeat the cause, looking to repeat the effect, you’re puzzled that it doesn’t work this time.


Simulacra: "Something that replaces reality with its representation" [0]

[0] https://www.cla.purdue.edu/academic/english/theory/postmoder...


"Aping," is a good one, but I'm sure someone will take offense to that as well eventually.


Cargo cult.


It's important to understand substance over form.


Security theater


This is a great way to send all of your files to the author of the utility / operator of the website.


From the readme:

> Beam cannot support end-to-end encrypted buffers. While data is encrypted during transfer to and from the Beam host, it’s decrypted temporarily before being re-encrypted and forwarded. The host only holds a small buffer (typically 1 kB) of unencrypted data at any time and never stores the full stream. For extra security, you can encrypt your files or pipes before sending them through Beam.

With a little effort, Beam is as trustworthy as any of its alternatives (if not more so). And that extra effort is a consequence of the design goal of not forcing a binary installation.

Plus, you can always self-host Beam; it's not that complicated.
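
For the "encrypt before sending" route the readme suggests, here's a minimal sketch using the Python cryptography package. Pipe its output into whatever Beam send command you use; the key handling is deliberately simplified:

    # encrypt_stream.py - encrypt stdin to stdout with a symmetric key,
    # so the relay in the middle only ever sees ciphertext.
    # Requires: pip install cryptography. Key handling is simplified.
    import os
    import sys

    from cryptography.fernet import Fernet

    KEY_PATH = "beam.key"

    # Generate a key on first use; share it with the receiver out of band.
    if not os.path.exists(KEY_PATH):
        with open(KEY_PATH, "wb") as f:
            f.write(Fernet.generate_key())

    with open(KEY_PATH, "rb") as f:
        fernet = Fernet(f.read())

    # Fernet encrypts whole messages, so this buffers the full input:
    # fine for files, not for unbounded streams.
    sys.stdout.buffer.write(fernet.encrypt(sys.stdin.buffer.read()))

The receiver runs the inverse with fernet.decrypt() and the shared key, so the Beam host never sees plaintext.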


Yeah, as with literally every other service on the web for the past 20 years.



That is dumb. The EU already knew this was the likely outcome because we already had stupid cookie warnings from the previous law.

Regulation exists in the real world, not in some fantasy land where companies do what you want.


This is in a private address space, like 192.168.0.0/16; blocking doesn't make sense in this context.


Most people who are only vaguely familiar with networking seem to be able to remember 10/8 and 192.168/16, but the 172.16/12 range is for some reason rather elusive; I suspect it's not as commonly used as the other two.


I think it usually gets used by VM/container networks (maybe there's some historical reason). I once saw it used on a hotel WiFi network, and was unable to connect because Docker's networks conflicted with it.
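
For reference, all three RFC 1918 private ranges are easy to check with Python's standard ipaddress module:

    # Check whether an address falls in one of the RFC 1918 private ranges.
    import ipaddress

    RFC1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in RFC1918)

    print(is_rfc1918("172.31.255.255"))  # True: top of the 172.16/12 range
    print(is_rfc1918("172.32.0.1"))      # False: just outside it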


They don't tell you why, so it's strange to say this is the reason. At Google the interviewers write up the discussion, questions, and answers, and a separate committee decides based on the feedback from all the interviewers. There is no scoring for pun quality, sadly.

