TL;DR: Microsoft released a virus definition file with a timestamp/serial of 220101001, and the software is unable to parse it into a (signed) long value, since the largest such value is 2,147,483,647; it’s possible that there’s a factor of 10 in the deserialisation logic.
The error message ("Can’t Convert '2201010001' to long.") makes it seem like that was the actual timestamp/serial used (it would be yymmddHHMM format). No factor of 10 in the deserialisation is needed for an overflow: 2,201,010,001 is already larger than 2,147,483,647.
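A minimal sketch of the arithmetic (Rust here purely for illustration; Defender is obviously not written in Rust):

    fn main() {
        let serial = "2201010001"; // yymmddHHMM: 2022-01-01 00:01

        // i32::MAX is 2,147,483,647, so parsing into a signed 32-bit value fails.
        assert!(serial.parse::<i32>().is_err());

        // A 64-bit integer holds it with room to spare.
        let value: i64 = serial.parse().unwrap();
        assert_eq!(value, 2_201_010_001);
    }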
Comparing with strings ain't no panacea either. There's some suspicion that Windows 9 was skipped because a lot of old software checked for Windows 95 or 98 by just doing a string compare against "Windows 9". (For the record, I kind of doubt this was much of a concern by the time Windows '9' was being worked on, but there's a certain logic to it.)
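For what it's worth, a sketch of the kind of legacy check that story blames (Rust, with a made-up function name):

    // Meant to catch "Windows 95"/"Windows 98", but a hypothetical
    // "Windows 9" would match too.
    fn is_win9x(os_name: &str) -> bool {
        os_name.starts_with("Windows 9")
    }

    fn main() {
        assert!(is_win9x("Windows 95"));
        assert!(is_win9x("Windows 98"));
        assert!(is_win9x("Windows 9")); // the accidental match
        assert!(!is_win9x("Windows 10"));
    }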
Yeah, my first thought was, well, you could lay out the timestamp in a way that supports lexicographical sorting, and then I realized they had already done so. No reason this shouldn’t be a string!
It can be both. DNS serial numbers are YYYYMMDDnn, and that works just fine for humans; it is easy to convert, more compact as an integer, and you can compare in a single operation. Internally, there is no reason for it to be a string.
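Because the fields are fixed-width and ordered most-significant-first, string order and numeric order agree. A quick sketch:

    fn main() {
        let older = "2022010101"; // YYYYMMDDnn
        let newer = "2022010203";

        // Lexicographic comparison of the strings...
        let by_string = older < newer;
        // ...agrees with numeric comparison of the parsed values.
        let by_number = older.parse::<u64>().unwrap() < newer.parse::<u64>().unwrap();
        assert_eq!(by_string, by_number);
    }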
I think the point is that a 3-digit serial wouldn’t be enough to cause an overflow (yymmdd plus three digits is only 220,101,001, well under the 32-bit limit); it would need to be 4 digits. But who makes a post like that without using copy/paste to get it exactly right?
The SNES SF2 games have a similar-ish code for various things. I think the first release lets you enter a code to play the same character as both players, or disable special moves. Turbo has a special code to increase the turbo stars, and you can also disable special moves.
Note that GC makes explicit the costs associated with memory allocation and freeing; languages that use manual memory management (or semi-automated memory management, as in Swift/Rust) simply un-amortise the cost and distribute it all over the program.
While Rust may have fewer issues than other languages in this regard, it doesn’t change the fact that destruction of a “thing” can result in a transitive set of “other things” being destroyed as well, and depending on the state of the program, this set could have a pretty high bound.
Even in a “predictable” language like Rust, a data entry containing a map-like structure is going to have a different destruction time if the map is empty vs if it is full.
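A rough sketch of what I mean (the exact timings will vary by machine, obviously):

    use std::collections::HashMap;
    use std::time::Instant;

    fn time_drop(map: HashMap<u64, String>) {
        let start = Instant::now();
        drop(map); // frees every key/value pair, then the table itself
        println!("drop took {:?}", start.elapsed());
    }

    fn main() {
        time_drop(HashMap::new()); // empty: near-instant

        let full: HashMap<u64, String> =
            (0..1_000_000).map(|i| (i, i.to_string())).collect();
        time_drop(full); // a million owned Strings to free
    }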
The other aspect is that talk of “predictable” delays in a program completely ignores the runtime issues that can happen at the OS layer, such as swapping, page cache invalidations, and migrations between cores or NUMA regions, so it is rarely the case that any program in any language is going to have predictable behaviour. Even trivial things, like the number/size of environment variables used at program startup, can produce a performance delta.
Anyway, the point is that when programmers talk about “predictable” behaviour in a program, what they generally mean is “invisible performance problems”, and so they get swept under the carpet.
That’s not to say that GC doesn’t introduce variance into measurements, but it is not the only thing that causes variance. Many of the good GCs are capable of using multiple cores and can avoid interrupting the program runtime without significant overhead, although obviously tracing collectors will increase pressure on the memory system in a way that explicit/semi-automated memory management does not.
> Note that GC makes explicit the costs associated with memory allocation and freeing; languages that use manual memory management (or semi-automated memory management, as in Swift/Rust) simply un-amortise the cost and distribute it all over the program
This is an interesting feature of garbage-collected languages, and it's often more efficient than manual memory management - but it does come with downsides.
> Even in a “predictable” language like Rust, a data entry containing a map-like structure is going to have a different destruction time if the map is empty vs if it is full.
"Predictable" does not mean "always the same". It means that given a map with the same size and fill level, destroying it will take the same time. What's more important is that the point in the program where the destruction happens can be controlled - and, if necessary, moved out of the critical path of something that requires response within a certain bound. This may introduces an overall inefficiency in the program, but if you have software that needs to fulfill certain time bounds, that's the tradeoff you choose. Doing so in a garbage collected language is much harder.
And Rust targets environments where (near-)realtime behavior and predictable performance are important, or where garbage collection isn't even truly possible because allocation isn't possible (think embedded systems).
> Anyway, the point is that when programmers talk about “predictable” behaviour in a program, what they generally mean is “invisible performance problems”, and so they get swept under the carpet.
I don't see how this is a counterpoint to what I said - programmers blaming other things for performance problems has been a thing forever.
There are a few reasons, but it’s mostly due to economics and the way Apple builds its own software.
Swift came out of the iOS side of the company, and was sponsored to make writing iOS apps easier. It was therefore constrained to fit into a particular shape; in particular, Objective-C compatibility was a must-have. Many of the frameworks and libraries on iOS/macOS are implemented in Objective-C, including “std” libraries like Foundation.
When building Swift for Linux/Windows, there isn’t an Objective-C ecosystem, and rather than introduce one, the language simply doesn’t support it there. As a result, the “std” libraries are a complete rewrite of the libraries available on macOS. Even including the POSIX basics requires a different module name per platform: Darwin vs Glibc. [There was a long battle to try and get a meta-import of LibC which would import the right one on different platforms.]
So there are really two dialects of the language: Apple Swift and Linux/Windows Swift. They share the same overall feel, but it’s like the difference between diesel and petrol/gas: the same overall outcome, completely different under the hood.
Two other reasons exist. Firstly, the internal builds are a fork of the open source project, so when a new feature lands on an iOS device, code and libraries are written to support it (eg SwiftUI), and nothing related to that fork lands until after Apple have announced it. What that means is you get giant blobs of diffs on an annual basis, in some cases reintroducing bugs that had already been fixed in the open source world (because the internal version was forked six months previously).
The other reason is that Swift depends on a forked version of llvm/clang, to the extent that you have to install those forks to be able to compile Swift programs. Many upstream distributions already ship their own clang, which is essentially incompatible with Swift, but they want to follow their philosophy of shipping only one version of a library (and don’t want to replace their own version with a Swift-specific one).
So, Swift on iOS is a primary platform, and if you are writing iOS apps it is the standard way to do so. Every other platform is a second-class citizen, and software project management at Apple is not well suited to an open source workflow; since the iOS team are calling the shots on Swift (and paying for most of it, to be fair), the language evolves primarily for their benefit and only as a side effect for everyone else.
As a result, even if you weren’t using that region, if you were using the API you were hosed for 6+ hours, and the status page never acknowledged that it was out of action.
We had the following Terraform in our production pipeline:
    # this data source calls the AWS Organizations API on every plan/apply
    data "aws_organizations_organization" "current" {}
As a result, all of our deployments to our EU regions were borked. Of course, we couldn’t raise a support case because the support system was also down, and despite escalating to our TAM we weren’t able to get the status page to reflect reality.
My concern is that the Organizations API specifically will be brushed under the carpet and we will still have a single point of failure in a region which we never intend to use.
The talk itself is great, but the background behind the slides and speaker is constantly moving from side to side and is pretty distracting. I would have preferred a video of the slide deck itself, without any extraneous background or the speaker’s head.