I don't agree with this logic that "either you have perfect security or there's no point".
I think a lot of stuff on local LANs is still HTTP-only because trying to do TLS for local devices – even with LetsEncrypt – is a pain. Not impossible – you can't get a TLS certificate for 192.168.12.34, but you can create a public DNS entry pointing to that and then use a DNS-01 challenge to get a certificate for it. But that's enough work that heaps of people don't do it.
It also makes local LAN connectivity reliant on public DNS – since you can't do https://192.168.12.34 you have to do https://device-12-34.example.com, and if your Internet connection is down you might not be able to resolve device-12-34.example.com even though the device is up and accessible on your local network. Adding a local DNS server will fix that – but now that's another thing you need to make it all work.
Whereas if we had opportunistic encryption for http://, that would make local LAN passive attacks a lot harder. Yes, it wouldn't protect against local LAN active attacks, but security against passive but not active attacks is still better than no security against either.
For me the main barrier is that I want to have portable/roaming control over my IDENTITY, even if the content hosting is (for now) entirely through a system administered by someone else. If I control the identity, I can at least keep local copies and rehost/repost content later.
Instead, it feels like the current Fediverse demands that I make a blind choice to entrust not merely a copy of my content but also my whole future identity to whatever of these current instances looks the most stable/trustworthy at first glance, hoping my choice will be good for 1-5-10-15 years. It's stressful, and then I look into self-hosting, and then I put the whole thing off for another week...
AFAICT I would need to set up a whole federated node of my own in order to get that level of identity-control. Serious question: Is there any technical limitation preventing the admin of an instance from just seizing a particular account and permanently impersonating the original owner?
In contrast, I was hoping/expecting some kind of identity backed by a private asymmetric key. Even if signing every single message would be impractical, one could at least use it to prove "The person bob@banana.instance has the same private key that was used to initialize bob@apple.instance."
I've never seen this problem before and arrived at the optimal solution to the first section, sans code, in about 10 seconds. Despite that, there's almost no way I would pass a technical interview at Google.
The only reason I can think of that the optimal solution was immediately obvious to me is perhaps that in a previous life I did a lot of work building convoluted, application-specific indexing schemes for data in a distributed KV store that had basically zero useful built-in secondary indexing. In that store, piles of parallel GETs were about as "free" as just doing one, but chaining strings of requests back & forth to the client iteratively made for sad times. This meant it was essential to be clever about what could be coded into key names and precomputed from application/use-case context, so you knew which keys to fetch "all at once" and could gather as much relevant data as possible with as few serially dependent requests as possible. One such use case depended on a modified C-trie as part of the solution, so I got very familiar with what kinds of problems they're for and what their limitations are.
Given that, I can't tell what that says about the question, about me, or about Google. What I can say for sure is that because I can't tell the above, the technical interview at my company is signing an NDA and sitting down for a couple hours and just actively pairing on real work & real problems with real teammates.
With the nature of software development work being what it is, I really don't understand why we as an industry run interviews like they're game shows or try to manufacture workplace simulators with terrible model conformance.
That legacy baggage is the only thing that allows older hardware to connect to the modern network. It's the only thing that allows folks the agency and autonomy to set up their own services and share them with folks locally, without requiring the blessing and grace of a distant 3rd party authority.
I've spent 20 years working with advanced PKI and cryptography in many different domains and form factors, and what I've learned is that even with the best of intentions, they are all fragile and their default state is broken, without constant maintenance.
Availability and resilience to failure are key pillars to security that are often overlooked.
In the past 20 years, all of the critical failures in PKI systems that I have seen were due to expiring certs, expiring CRLs, failure to distribute new PKI in time, accidental deletion of key PKI, missing intermediate certs. None were due to MITM, weak crypto, spoofed packets, use of plain HTTP. Make of that anecdote what you will.
I think this misses the point of DRY a little bit. DRY isn't about not copy pasting code, it's about ensuring that knowledge isn't repeated. If two parts of the system need to know the same thing (for example, who the currently logged in user is, or what elasticsearch instance to send queries to, etc.), then there should be a single way to "know" that fact. Put that way, DRY violations are repetitions of knowledge and make the system more complex because different parts know the same fact but in different ways and you need to maintain all of them, understand all of them, etc. etc.
Code blocks that look to be syntactically the same are the lowest expression of "this might be the same piece of knowledge" insofar as they express knowledge about "how to do X", but the key is identifying the knowledge that is duplicated and working from there. Sometimes it comes out that the "duplication" is something like "this is a for loop iterating over the elements of this list in this field in this object" and that is the kind of code block that contains very little knowledge in terms of our system. But supposing that that list had a special structure (ie, maybe we've parsed text into tokens and have information about whitespace, punctuation, etc in that list) and we start to notice we're repeating code to iterate over elements of the list and ignore the whitespace, punctuation elements in it, then we've got a piece of knowledge worth DRYing out given that all the clients now need to know what whitespace & punctuation look like even when they'd like to filter them out.
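To make that concrete, here is a minimal C sketch of what consolidating that particular piece of knowledge might look like. The token type and names are hypothetical, purely for illustration:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical token type for the parsed-text example above. */
    enum token_kind { TOK_WORD, TOK_WHITESPACE, TOK_PUNCTUATION };
    struct token { enum token_kind kind; const char *text; };

    /* The single place that "knows" which tokens are noise. */
    static bool token_is_noise(const struct token *t) {
        return t->kind == TOK_WHITESPACE || t->kind == TOK_PUNCTUATION;
    }

    /* One shared filtered iteration instead of every caller repeating the
     * same loop-and-skip logic. */
    typedef void (*token_fn)(const struct token *t, void *ctx);

    void for_each_content_token(const struct token *toks, size_t n,
                                token_fn fn, void *ctx) {
        for (size_t i = 0; i < n; i++)
            if (!token_is_noise(&toks[i]))
                fn(&toks[i], ctx);
    }

Now only token_is_noise knows what counts as noise; callers that want "content" tokens no longer each re-encode that fact.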
It's worth pointing out that DRYing out something isn't necessarily "abstracting", it is more like consolidating knowledge into one place.
One of the things that's made me the most frustrated learning rust is all the type coercion and inferencing. Sometimes code will just work when I have the wrong mental model because types are coerced but then it ends up biting me later when suddenly everything breaks because it can't coerce anymore and I find out nothing was the type I thought it was. I wish there was a way to turn it off while I'm learning just so I can understand what's really happening.
I'm not sure it's reasonable to characterize this as due to a "bug" though---the software correctly implemented the specified control laws, but those laws had some unanticipated properties, and fixing it required some new developments in control theory.
The 1993 crash was due to Pilot Induced Oscillation (PIO). This is a general term for situations when the pilot makes inputs to stabilize an airplane, but the inputs instead end up exacerbating the instability. A simple example of how this could happen is if the control inputs for some reason take effect with a time delay: the airplane pitches up, so the pilot tries to push it down, after a moment the transient passes and the plane pitches down so the pilot pulls up, but the previous input amplifies the downwards movement so the pilot pulls up harder, etc.
Several of the first generation of unstable fly-by-wire airplanes had problems with PIO. Unlike conventional aircraft, where the rudder positions exactly follow the position of the stick, here the desired rudder position is calculated as the sum of two inputs, one calculated from the stick position, and one calculated by flight control software to dampen instabilities.
Early versions of the software were "rate limited", i.e. at each iteration of the main loop the software calculates the desired rudder position and then moves the rudders towards that position at the fastest rate the rudder actuators allow. However, that leads to problems when there are very large transient stick inputs: because the rudders take some time to move, the largest rudder deflection occurs with some delay after the largest stick input (see figure 6 in [1]).
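For illustration only, here is a toy sketch of the kind of rate limiting being described (not the actual flight-control code; the names and units are made up):

    /* Each tick, step the actual surface position toward the commanded one,
     * but never faster than the actuator's maximum rate. */
    double rate_limit(double actual, double commanded,
                      double max_rate, double dt) {
        double step = commanded - actual;
        double limit = max_rate * dt;
        if (step > limit)  step = limit;
        if (step < -limit) step = -limit;
        return actual + step;
    }

Because the output can only chase the command at a bounded rate, a large, rapid input reaches the surface late, which is the kind of delay that can feed PIO.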
In the 1993 crash there was a wind gust causing a pitch movement, and both the pilot and the flight control software provided a compensation. The sum of the two signals was big enough to hit the rate-limitation, so the response of the airplane was strange, the pilot gave several more large inputs, and there was PIO.
Incidentally, one of the YF-22 prototypes crashed for basically the same reason, even though they ran different software. The solution was to develop some new "phase compensation" methods for designing controllers [1].
I tried to give OS/2 a try, but what happened was kind of bizarre. I had a DEC PClone with a 486 (cannot remember if it was DX or SX) inside. OS/2 needed a graphic driver for the S3 chip. I called DEC support to ask for the driver and they said they didn't have it but IBM did. The DEC rep kept me on the phone while he called the OS/2 support number. DEC had incredible service at the time.
Then the bizarre happened. The IBM support person said I needed to sign an NDA to get the driver. Both the DEC rep and I tried to explain to him I was a humble end user and not interested in anything but the driver. There must be a mistake as I did not want source code. Nope, just the driver required an NDA and some verification. I said I would think about it, and we hung up. The DEC rep apparently had quite a few people gathered around him and they were laughing pretty hard. He then asked if I would like a nice copy of Win NT with no NDA and all the drivers for my machine.
It's an hourglass where the width is accessibility and the height is abstraction. In the center is the sweet spot. Basic was near there. Pascal probably was. Ruby, PHP and Python are capable of being there...
As an example outside of computers, think about highly abstract art, highly abstract philosophy or poetry.
It comes across as obdurate and diffusive, aloof and needlessly distanced from materiality.
Now in programming you see the same thing. Abstract language that seems to exist in pure vapor. Factory, interface, provider, service, oh and a provider service and a service provider which are not the same things of course.
All these things mean very specific things that change depending on whose lips are moving - they're defined in code somewhere - they do something deterministic - there is a real materialist function here that's being obscured by confusing language. We've entered the age of Jurgen Habermas style programming.
It's fine if you want that, but don't pretend it's successfully easier to understand when poorly, vaguely, and also precisely defined.
The computer is a picky, unrelenting, uncompromising bratty jerk. Because of this programming concepts are best when they're nailed the fuck down and not dancing around in some abstract freeform jazz space pretending that it's more accessible that way.
All it does is create confusion and the emotion of confidence replacing the reality of competence. The computer is still going to be a bastard and we'll have to deal with it eventually.
I feel like this is because school, especially college, and particularly exams, is about as high-stakes as most people's lives ever get, so they look back at that time as peak-anxiety. Think about it: you're being evaluated and the result of that evaluation shapes the next step in the pipeline, and ultimately the trajectory of the rest of your life! Well, at least that's what the university officials, professors, your peers and parents all tell you. You pretty much have a series of "one chance" events that you must pass or you're done for. Failure of any step is permanent, and affects your average (seemingly) forever.
The whole path from elementary school through to college graduation feels like a career development game where the stakes are raised every year. Fail once off the path, and it's Walmart Greeter for you, forever! It's no wonder I still wake up in a cold sweat over it, 30 years on.
There are ways of coding in C which make those optimizations unnecessary.
The basic idea is: don't load your block of code with lots of pointer dereferences. Load the values you need into local variables. Don't proliferate common expressions which dereference the same pointer to get at the same value. Consolidate the assignments through pointers. Don't do this in three places: (*ptr)++. Have that value in a local variable var, do var++ in three places, then assign it back with *ptr = var.
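A minimal sketch of the style difference (names are made up; this assumes the usual case where the compiler can't prove that out and counter don't overlap):

    #include <stddef.h>

    /* Harder on the compiler: out and counter have the same type, so it
     * generally can't prove they don't overlap, and *counter may be
     * reloaded and re-stored around every out[i] store. */
    void tally_indirect(int *out, const int *in, size_t n, int *counter) {
        for (size_t i = 0; i < n; i++) {
            out[i] = in[i];
            if (in[i] > 0)
                (*counter)++;
        }
    }

    /* Friendlier: keep the running value in a local whose address is never
     * taken, and write it back once at the end. */
    void tally_local(int *out, const int *in, size_t n, int *counter) {
        int count = *counter;
        for (size_t i = 0; i < n; i++) {
            out[i] = in[i];
            if (in[i] > 0)
                count++;
        }
        *counter = count;
    }

Note the two versions genuinely differ if counter really does alias out; that's the point, the second style is the programmer asserting that it doesn't.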
Even with strict aliasing optimizations turned off (no-strict-aliasing), a compiler can still assume that local variables whose addresses are not taken are not the targets of any pointers.
In C (99 or later), you have restrict also. restrict is independent of strict aliasing, because it's not based on type.
Furthermore, speaking of restrict, aliasing between like-typed objects matters for optimization. It's all well and good to optimize based on the idea that a double * cannot be aiming at an object whose declared type is long. But it's insufficient, because code that is manipulating double * pointers is likely working with objects of type double which could be the targets of those pointers. So without some combination of tight coding and possibly using restrict, you will end up with those load-hit-stores in all sorts of code.
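As a small illustrative sketch of how restrict addresses the like-typed case (invented names, C99 or later):

    #include <stddef.h>

    /* Without restrict, the store to *sum could alias *scale (same type),
     * so the compiler reloads *scale and *sum on every iteration. With
     * restrict it can keep both in registers and store the total once. */
    void weighted_sum(double *restrict sum, const double *restrict scale,
                      const double *v, size_t n) {
        for (size_t i = 0; i < n; i++)
            *sum += *scale * v[i];
    }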
The problem is that the common expressions can't be eliminated based on aliasing, and the aliasing is between like types, so strict aliasing doesn't help.
(I think in this particular case the compiler could do a better job because even if the assignment "node->prev->next = node->next" clobbers the value of "node->next" due to the nodes being aliases, the assignment can only clobber it with the value that node->next already has! The compiler doesn't analyze it that far though.)
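Here is roughly what that case looks like, alongside the "tight coding" alternative with locals (a sketch; the field names are the usual doubly-linked-list ones, not taken from any particular codebase):

    struct node { struct node *prev, *next; };

    /* As written, the compiler typically reloads node->next for the second
     * statement: the store through node->prev->next might have clobbered it. */
    void unlink_naive(struct node *node) {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }

    /* Loading into locals first makes the lack of dependence explicit. */
    void unlink_locals(struct node *node) {
        struct node *prev = node->prev;
        struct node *next = node->next;
        prev->next = next;
        next->prev = prev;
    }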
C was designed from the start as a language in which the programmer does the optimizing, and regardless of the advancements in compilers, that has not been entirely eliminated.
How you write C still makes a difference, even at the microscopic level of individual statements and expressions, not just the level of overall program organization and use of algorithms.
If you write tight code, you can turn off strict aliasing optimizations globally and it won't matter. But you don't have to do that globally. You may be able to confine your type punning hack in its own source file, and just turn it off for that file. (Or maybe even at a finer granularity if you have such compiler support.)
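As a sketch of that kind of confinement, assuming a GCC/Clang-style -fno-strict-aliasing flag applied to just this one translation unit:

    /* punning.c -- built with -fno-strict-aliasing for this file only;
     * the rest of the project keeps strict-aliasing optimizations on. */
    #include <stdint.h>

    uint32_t float_bits(float f) {
        return *(uint32_t *)&f;   /* the type-punning hack lives only here */
    }

(A memcpy-based version avoids the aliasing issue entirely, but if you want the cast, this is one way to fence it off.)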
I don't think it's unreasonable to be nostalgic for those things though. We had a period where animation and game production was democratized, where the promise of computing was actually realized. And what did we switch to, but consumption devices where the only capacity to create is a camera for glamorizing your life or creating parasocial relationships.
Yes, the article makes that assertion on behalf of both React and Cocoa and explores the different ways they address that problem. It uses "scroll position" as an example: scroll position is typically not tracked in the React state, so how does it work?
In Cocoa, scroll position is part of the view's state, a mere property of the view. This is simple because the UI itself is stateful.
In React, scroll position is typically not part of the state from which we project the view. Instead this state is attached to the projection itself (e.g. an HTML node) and we are dependent on the "Memoization Map" to preserve it. So this memoization is now required for the correct functioning of the app. The "pure function" abstraction is leaking.
> But if you want to really understand computer programming, starting at machine code or at least assembly isn't a crazy way to start.
I've long suspected that the CS field was founded on two approaches: The people who started from EE and worked their way up, and the people who started from Math and worked their way down. The former people think assembly is the "real" way to approach software, and probably view C++ as "very high-level", whereas the latter people think everyone should start with a course on the lambda calculus and type systems and gradually ease into Haskell, work down to Lisp, and then maybe deign to learn Python for *shudder* numerical work.
Something I never understood about embedded systems work: why is the pay so bad?
Maybe I was just looking at the wrong job listings, but the technical difficulty (relatively low-level programming, manual memory management, dealing with janky firmware, antiquated toolchains, and incomplete documentation) seemed much harder than the compensation being offered.
At least in comparison to other types of coding you could be paid to do.
The only answer I could come up with was that unit profit margin * sales volume imposed a much lower cap, relative to pure software products?
1) wireless degrades by degree and is hard to debug
2) it fails silently or just in non-obvious ways
3) people generally have no idea how well it is supposed to work, until they experience it working properly

I think there are many things, in life generally, but especially in tech, where these qualities conspire to give people a rotten time. People will complain when things fail, or take action. "Graceful degradation" seems like a feature, but it can keep you on a slippery slope of increasing tolerance toward ever worse performance.

The complexity and fuzzy "connection to everything" mean that no one has to take responsibility. Maybe it's your neighbour? Or the trees? Or the alignment of Venus and swamp gas on a full moon? Unless you have a full spectrum RF field meter and other very expensive test gear, microwaves are "black magic" to even great engineers.

I'm sure when Apple met to discuss remotely degrading iPhone performance the words "They'll never notice" were spoken. Companies that sell complex services and oversell resources love this combo of fuzzy, hard-to-measure, diffuse, naturally erratic behaviour and broad tolerance, mixed with vague customer expectations. They prevent you from knowing, and comparing, what you should be getting and what you are getting.

Many recent development directions, 5G, secure enclaves, encrypted updates and exfiltration (telemetry), are set to increase the "random magic factor" and make gear even more precarious and mysterious to the end user.
time: Standard datetime format with localisation and serde support
tera: templating
tracing: async logging
uuid: UUID generation
Of these, I feel they're reasonable dependencies. You could maybe quibble about uuid (which itself only depends on rng + serde with the features I've enabled), or tracing maybe, but they provide clear value.
Anyway, those direct dependencies expand out to 300-odd transitive dependencies.
So what do I do? Do I write a templating engine, HTTP server implementation, SQLite driver and JSON library from scratch to build my little ebook manager?
I once was part of a now-defunct reading forum in my teens.
On the forum, people would package three related books into quests. A boring example would be a dystopian quest pack with three books about three very different dystopian scenarios.
But the quests people put together were usually more interesting. I remember a "Weird Magic" quest had books with really unconventional magic systems. I found Motherless Brooklyn (detective with Tourette's) in a quest pack of "heroes with issues". Other quest ideas would be evil protagonists, alien first-contact with the wrong guy, and stuff like that. You can often find three books for even the goofiest of quests.
It was a cool way to find new books. And whenever you didn't know what to read next, you'd look at what quests you were still working on and choose among them. Once finished, your completed quest count would increase.
Long append-only lists of genre-related books were never as interesting to me. Quests only having three books made them a fun thing to collect. Maybe there's something fun there that new goodreads competitors can experiment with.
I'd accept "how it would compile to assembly" as "understanding the machine-level behavior of the code," but again, I find this extremely lacking with Java.
I don't know what percentage of developers can read assembly, but the popularity of tools like Godbolt strongly suggests it's non-zero. In my own experience, all of the most skilled developers I've worked with have been comfortable digging down to the necessary level -- and doing that in Java, or most other JIT'd languages, is just not fun.
You'll notice that I very much didn't put C++ on either the "easy to understand intent" or "easy to understand behavior" lists -- while it's my preferred language and my primary language, ease of understanding is not its virtue. And while I agree that C compiled with a minimally optimizing compiler (CompCert, clang -O1, etc) is easier to understand than optimized code (for me, the sweet spot for understanding is a compiler that does good register allocation and constant folding, but only really does instruction reordering for memory ops), it's pretty rare that I look at the output of highly optimized code and am surprised or find it hard to follow. Some constructs (medium sized switches, for example) can be pain points... but most often reading assembly output from optimized C is either "yeah, that's about what I would have written" or "close, but you missed this optimization/intrinsic, I'll do it myself."
this is just from wikipedia for context:
"Obsessive–compulsive personality disorder (OCPD) is a cluster C personality disorder marked by an excessive need for orderliness, neatness, and perfectionism."
Comports with this in my opinion. I've known some people who were called "Type A personalities" who tend to be great at metrics and goals but not always so great at emotional intelligence. Narcissism is common, etc. This is the kind of person who will fixate on an uncleaned toilet or a rattle in their car for weeks, losing sleep.
Recalling one person I've known with these traits, she sounds a lot like this person. A couple of decades, and seeing her raise a family, have convinced me she is indeed Human; if she had gotten therapy, things would be a lot different. She's much more pro-social now, though she also didn't really maintain her friendships. But she definitely never had a career with walls of compiler warnings to ignore. That might have killed her.
Closing her blog, Izzy expresses a deep frustration and a lack of any derived meaning from what she had viewed as her purpose (C++, I suppose), as well as a general feeling of isolation. If you are feeling some schadenfreude about this for some reason, it's clear she is very unhappy. It doesn't seem particularly rational to view languages this way. My impression is that this person is probably in distress and depressed. If this were my friend, I would be making phone calls.
Izzy: Good luck, lady. C++ is hard, but so is life. Keep looking for that purpose. If you happen to read here, I suggest volunteering at an animal shelter. Just working on something less demanding and quite possibly more rewarding is a great therapy break, as you've found from your Ops position. You are well positioned to tackle this.
Honestly, it's not that sensitive (at least without service details; plus this was over 7 years ago). I think the lawyers won't send me a nastygram for this much detail:
The kernel panic one started when I got pinged by a downstream team that we somehow delivered garbage data (valid-looking but entirely the wrong schema) to them, and that broke everything. Tracing things back up the chain, I found out that the source machine had had a kernel panic. Turns out that ended up with file metadata flushed to disk, but not file data. The unwritten sectors happened to contain data from a different process that was in the right file format, but had the wrong payloads, and since the file format was garbage-tolerant and self-synchronizing, the processing from that point on cleaned it up into data that looked completely valid... but was just from entirely the wrong place. All that happened automatically upon reboot from the panic. I was able to quite conclusively prove that this is what had happened by observing the data offsets logged to debug logs, and noting that the funkiness happened on filesystem block boundaries, and also aligned time-wise with the kernel panic, and a process that matched the schema of the data that ended up delivered had been running simultaneously on the same machine. This was all from just one instance of the fault, forensically debugged; we never reproduced it. The conclusion was basically "working as intended"... that is, this is apparently a thing that can happen given the filesystem mount mode in use (which was chosen for performance), so I think in the end they introduced some sanity checks on the data shape as a stop-gap. I think this was all soon deprecated and replaced with a system that didn't use local disk at all anyway, so there wasn't much point in introducing a big refactor to eliminate this rare failure mode at that point, but it's interesting that it did have security implications (it was just rare and not directly triggerable). It was very satisfying being able to trace down exactly what had happened (and the team lead sent this one off to me specifically due to my reputation for working this kind of stuff out... :) ).
The other one had to do with Google's NIHed gzip implementation, because of course they have one. We had found data mismatches between output computed over identical input in two different data centers (for redundancy/cross-checking), but it always went away if we tried again. This happened once a petabyte or so, give or take (I'll let you guess how often that was in terms of days, at Google scale).

Digging through the error logs, I found that the mismatch was an offset mismatch, not a data mismatch. These files had gzip-compressed data blocks. Investigating the data was tricky because we weren't allowed to copy parts of it to workstations due to privacy reasons, but doing some contortions to work in the cloud I found out that the logical data was equivalent, but the compressed data was one or two bytes larger in one of the files, hence shifting the offset of the next block. Eventually I managed to grab one of the problem-causing uncompressed data blocks, build a test case for the gzip compressor with exactly the same flags used in production, and I had the idea of running it under valgrind... and that's when it screamed.

Turns out the gzip compressor used a ring buffer of history to perform searches for compression, and for performance reasons the lookups weren't always wrapped, but rather there was a chunk of the ring buffer duplicated at the end. This was calculated to never exceed how much the pointer could fall off the end of the buffer. But, on top of that, the reads used unaligned 32-bit reads, again for performance reasons. And so you could end up with the pointer less than 4 bytes from the end of the buffer padding area, reading 1 to 3 bytes of random RAM after that. The way the compressor worked, if the data was bad it wouldn't break the compression, but it could compress a bit less. And so, the chances of both the read falling off the end of the buffer and happening to be one that would've improved compression were tiny, and that's how we ended up with the rarity.

We never saw crashes due to the OOB read, but I think I heard some other team had been having actual segfaults (I guess the page after the buffer was unmapped for them) and this fix resolved them. I think we had the problem going on sporadically for a few months before I got sick of it (and I'd managed to rule out the obvious "it's a RAM bit flip / bad CPU / something" things you tend to blame rare random failures on at that kind of scale) and spent a couple evenings figuring it out.
From the description it's probably a Bacon cycle collector. The basic idea is that it checks to see whether reference counts for all objects in a subgraph of the heap are fully accounted for by other objects in that subgraph. If so, then it's a cycle, and you can delete one of the edges to destroy the cycle. Otherwise, one of the references must be coming from "outside" (typically, the stack) and so the objects cannot be safely destroyed. It's a neat algorithm because you don't have to write a stack scanner, which is one of the most annoying parts of a tracing GC to write.
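A rough C sketch of that check as described (not the full Bacon–Rajan algorithm, and all the names and structures here are invented for illustration): count, for every object reachable from a candidate, how many of its references come from inside that same group; if the internal count matches the refcount for every member, nothing outside (like the stack) is holding on, and the group can be reclaimed.

    #include <stdbool.h>
    #include <stddef.h>

    /* Invented object layout: a refcount plus outgoing references. */
    typedef struct Obj {
        size_t refcount;        /* maintained by the ordinary RC machinery */
        size_t nchildren;
        struct Obj **children;
        /* scratch state used only while checking a candidate group */
        size_t internal;        /* references coming from inside the group */
        bool visited;
    } Obj;

    /* Collect everything reachable from root into out[] (assumes visited
     * flags start false and the group fits in cap -- this is only a sketch). */
    static size_t collect(Obj *root, Obj **out, size_t n, size_t cap) {
        if (root->visited || n == cap)
            return n;
        root->visited = true;
        root->internal = 0;
        out[n++] = root;
        for (size_t i = 0; i < root->nchildren; i++)
            n = collect(root->children[i], out, n, cap);
        return n;
    }

    /* True if every reference into the group rooted at root comes from
     * within the group itself, i.e. it's an isolated cycle. */
    bool group_is_garbage(Obj *root) {
        enum { CAP = 1024 };
        Obj *members[CAP];
        size_t n = collect(root, members, 0, CAP);

        /* Tally references that originate inside the group. */
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < members[i]->nchildren; j++)
                members[i]->children[j]->internal++;

        bool garbage = true;
        for (size_t i = 0; i < n; i++) {
            if (members[i]->internal != members[i]->refcount)
                garbage = false;         /* something outside still points in */
            members[i]->visited = false; /* reset scratch state */
        }
        return garbage;
    }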
As a former linux expert and sysadmin turned developer, I think this attitude is what pissed me off most about current developers. I thought I'd have a huge upper hand, understanding dark magic. Come to find out most developers don't know and don't care about any of it, and have infected the whole community with the idea that none of it matters and servers and processes are just cattle. /rant
Concerning the smoothing, I presume this is macOS’s glyph dilation. pcwalton reverse engineered it in order to replicate it in Pathfinder (so that it can render text exactly like Core Text does), concluding that the glyph dilation is “min(vec2(0.3px), vec2(0.015125, 0.0121) * S) where S is the font size in px” (https://twitter.com/pcwalton/status/918991457532354560). Fun times.
Trouble is, people then get used to macOS’s font rendering, so that (a) they don’t want to turn it off, because it’s different from what they’re used to, and (b) start designing for it. I’m convinced this is a large part of the reason why people deploy so many websites with body text at weight 300 rather than 400, because macOS makes that tolerable because it makes it bolder. Meanwhile people using other operating systems that obey what the font said are left with unpleasantly thin text.
Seems that lots of people only think ADHD people have a hard time paying attention. If it was that simple I would be so happy. Here are just a few of the things I struggle with.
- Will power. People with ADHD struggle sooooo much with will power. It’s a constant struggle with everything in life and really wears you down.
- Constantly putting on a fake front pretending you don’t have ADHD.
- Rejection sensitivity
- Feeling inadequate, and upset at yourself for not being able to perform like peers
- Emotions swinging from happy to angry in a flash
- not being able to maintain friendships
- People thinking it’s not a real condition. Unless you have it you can’t even come close to understanding how much of a challenge every day is.
- Medication only helps with the attention issue for maybe 8 hours. It doesn’t help with the other 95% of the issues
I worked on the DMS switch in Nortel (if PRI broke for you, I'm sorry, I tried my best!). 31 million lines of absolutely horrific code. You'd think somewhere in that mess there would be some redeeming code, but if there was I never found it.
To give you some examples, I originally came on as a contractor because they had some refactoring they wanted done. The entire system was home built (including the programming language) and there was a file size limit of 32,767 lines. They had many functions that were approaching this limit and they didn't know what to do, so they hired me. Probably you can imagine what I did.
One time I went to a code review. They were writing a lot of data into some pointers. I asked, "Where do you allocate the memory"? The response was, "We don't have to allocate memory. We ran it in the lab and it didn't crash, proving that allocating memory is a waste of time". No matter how much I tried reasoning with them, I couldn't convince them. The code shipped like that.
One of my more amusing anecdotes is that when I worked there the release life cycle was five years long. The developers would work on features for 3 years. The developers were responsible for testing that their own code worked. There was no QA. After 3 years, we would ship the code to the telcos (telephone companies) and they would test it for acceptance for 2 years. We would fix the bugs that they found.
I started working there at the end of a release cycle, so people were only fixing bugs. I got an interesting bug in that I couldn't find any code that implemented the feature. The feature had apparently been implemented at the beginning of the cycle (so around 4 years before), by someone who was now my C level manager. I started looking at the other features that person had implemented. There was no code. It seems that this enterprising person had started work and realised that nobody would check his code for 3 whole years. He just checked off all his work as done without actually doing anything. Since he was an order of magnitude faster than everybody else, he was instantly promoted into management. When I reported my findings to my manager, he made it clear I wasn't to tell anybody else ;-)
Such a messed up place. But the switch worked! It had an audit process that went around in the background fixing up the state of all the processes that ended up in weird states. In fact, when I worked there, nobody I worked with knew how to programmatically hang up a call. If you were using a feature like 3 way call, etc, they would just leave one side up. Within 3 minutes, the audit process would come by and hang up the phone. Tons of features "worked" that way -- by putting the call into weird states and waiting for the audit process to put it back again. You could often hang up after a 3 way call, pick up the phone and still be connected to the call.
Most people don't know it, but because of some strangeness with some of the protocols, telcos used to "ring" their main switches with Nortel DMS switches. This would essentially fix the protocols so that everything could talk to everything. So, if you ever made a long distance telephone call 20 or 30 years ago, it almost certainly went through a DMS switch. The damn thing worked. Somehow. I have no idea how, though ;-)
> In Rust, all types (including resources) are movable.
Presumably not when pointers are pointing at them or their members.
In Rust, that is enforced by the compiler, but in C++ it is not. The rule that resource types are not movable is intended to provide some sanity here: this means a resource type can hand out pointers to itself or its members without worrying that it'll be moved at some point, invalidating those pointers.
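A C analogue of the hazard, as a sketch: a struct that hands out a pointer into itself can't be relocated with a bitwise copy without silently invalidating that pointer, which is exactly what a non-movable resource type guards against.

    #include <stdio.h>
    #include <string.h>

    /* A resource-like struct that hands out a pointer into itself. */
    struct Buf {
        char data[16];
        char *cursor;           /* points into data */
    };

    int main(void) {
        struct Buf a;
        strcpy(a.data, "hello");
        a.cursor = a.data;              /* interior pointer */

        struct Buf b;
        memcpy(&b, &a, sizeof a);       /* a bitwise "move" */

        /* b.cursor still points at a.data, not b.data; if a goes away,
         * the "moved" object is left holding a dangling pointer. */
        printf("%d\n", b.cursor == b.data);   /* prints 0 */
        return 0;
    }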
> What's an explicit destructor? Rust's File type closes upon destruction, and one criticism of the design is that it ignores all errors. The only way to know what errors occurred is to call sync_all() beforehand.
I believe destructors should be allowed to throw, which solves that problem.
Obviously, this opinion is rather controversial. AFAICT, though, the main reason that people argue against throwing destructors is because throw-during-unwind leads to program termination. That, though, was an arbitrary decision that, in my opinion, the C++ committee got disastrously wrong. An exception thrown during the unwind of another exception is usually a side-effect of the first exception and could safely be thrown away, or perhaps merged into the main exception somehow. Terminating is the worst possible answer and I would argue is the single biggest design mistake in the whole language (which, with C++, is a high bar).
In KJ we let destructors throw, while making a best effort attempt to avoid throwing during unwind.
> When you include such a class in a larger structure, it breaks the ability for the outer class to derive a copy constructor automatically (even an explicit one, or a private one used by a clone() method). What's the best way to approach this?
In practice I find that this almost never comes up. Complex data structures rarely need to be copied/cloned. I have written very few clone() methods in practice.
I'm quite young, and have gone from a minimum wage service worker to a well paid software engineer in a relatively short time. A couple years ago I remarked to my dad that it seemed the more I was paid, the less stressful my work was. He said that he had found the same in his career. At the same time, the more I am paid, the more difficult it is to find someone that knows how to do what I'm doing.
I know it's been said before, but it seems pay is not dependent on how useful to society you are, but is instead dependent on two factors working in tandem.
1) Do those with money perceive you to have a use?
2) How difficult is it to find someone else that can replace your use?
I can't help but wonder if stress to pay ratio works as an inverted bell curve. At the lowest levels of pay you find the most stress. Then we move to middle/lower-upper class levels of pay and find much less stress. To get above those levels of pay, you'll likely have to start your own company or work extreme hours at a high skill job and once again experience heightened levels of stress.