The requirements for time in computers have increased drastically since C was invented.
Time used to mean what we write down or see on a clock: 2024-08-13 02:27 PM. The computer had an electronic clock built in, so it could save you a little effort of looking at the clock and copying the numbers. And that was all it did. If your clock was a few minutes off, that was no big deal. People knew clocks only agreed to within one or two minutes. People knew clocks were different in far away lands. Some people knew that you have to adjust your clocks twice per year. The computer clock was just like any other clock but happened to be inside a computer.
Now we expect a globally synchronized unique identifier for each instant, regardless of timezone. This is hard to deliver. Computers use these for synchronization amongst themselves, so they have to be accurate to milliseconds or better. This is hard to deliver. We expect computers to handle requests from far away lands with as much grace as requests from the local operator, and deliver results in a format the people in those lands expect. This is hard to deliver. We expect computers to process requests about the past using information that was current in the past, all over the world. This is hard to deliver. We expect computers to automatically adjust their own clocks twice a year, not just on the dates everyone in your local area does, but for users in all parts of the world on their respective dates. This is hard to deliver. And we still haven't got graceful handling of completely different calendar systems.
Interesting thing about hardware real-time clocks: they are usually as bad as the C time API.
I worked with a guy who designed three RTC clock cards for an industrial bus system.
The first had registers for hours, minutes, seconds, day, month, and year. He was proud it handled leap years correctly. It had a battery backup.
The second design just had a 48-bit counter that counted ticks. You could read and write it using latches so the reads/writes were atomic.
The third design was a read-only 48-bit counter and 1024 bits of battery-backed RAM.
In the second and third designs the conversion from time to a string is done in software. In the third, the clock just provides a free-running timer and you store an offset in battery-backed RAM along with other time-related metadata.
The vast majority of hardware RTC clocks today are implemented like the first example. It's a little infuriating since my coworker figured out how to do it right 45 years ago.
Indeed, the first was obviously designed as "make a clock but digital" and the second was designed as "create a unique identifier for each instant" and the last was "also make it as stateless as possible"
I fail to see TFA's concerns or take them very seriously.
> time() unnecessarily takes a pointer argument to write to
Minor cosmetic issue.
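For reference, the pointer is entirely optional; a minimal sketch of the two equivalent call styles:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);   /* the usual modern style: ignore the out-parameter */

        time_t also_now;
        time(&also_now);           /* the legacy style the API was built around */

        printf("%lld %lld\n", (long long)now, (long long)also_now);
        return 0;
    }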
> strftime() has to write to a string of a fixed length it can not dynamically allocate (This is less legacy than it is bad design)
This is often a good way to structure string functions in C. The fact that TFA repeated the constant 40 instead of using sizeof() immediately signals that they are unfamiliar with the idioms. A "you problem".
Doing heap allocation where it is not required could be a problem for some use cases.
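Roughly the idiomatic shape (a sketch, not the article's exact code): a small stack buffer plus sizeof, with no repeated magic 40 and no heap allocation:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        char buf[40];
        time_t now = time(NULL);
        struct tm *tm = localtime(&now);

        /* sizeof buf tracks the array; strftime returns 0 if the result doesn't fit */
        if (tm && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", tm) > 0)
            puts(buf);
        return 0;
    }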
> localtime() needs the pointer to a time_t value even though it does not change it because of register size concerns on PDP-11’s
Also minor and cosmetic.
> sleep() cannot sleep for sub-second amounts of time, usleep() is deprecated and it’s alternative nanosleep() requires you to define variables
sleep(3) is not really a "time function" in the sense of the others mentioned, it is a thread scheduler function. As such it kind of exists in a different universe. This is also shown by the fact that it's part of POSIX and not the C standard, like time(2) is.
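For completeness, the nanosleep() call the quote complains about looks roughly like this on a POSIX system -- the struct is the "variable you have to define":

    #include <time.h>

    int main(void) {
        /* sleep for 250 ms */
        struct timespec req = { .tv_sec = 0, .tv_nsec = 250 * 1000 * 1000 };
        nanosleep(&req, NULL);   /* second arg receives the unslept remainder on EINTR */
        return 0;
    }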
Another classic case of "if everyone does something differently than you do, it might be worth investigating why". The hubris to think that basic C time functions have been "broken" all this time, and that nobody noticed or cared. What a joke.
Many functions in the C API are quite badly designed. The hidden global state in locale, for example. HN regularly has articles about nasty bugs that boil down to “C API has several major deficiencies that cause great pain and suffering.”
For a definition of "nobody" that includes Eric S. Raymond, one of the most prominent figures in the Linux world, whose article (https://www.catb.org/esr/time-programming/index.html) I reference multiple times.
He was the publisher of the Halloween documents (from my understanding, leaked to him by a whistleblower) and has always been a firm opponent of Windows in all his works. Are you thinking of Poettering?
95% of the supposed issues with C could be solved by a new standard library, integrating the debugger into the compiler as the default build/run environment (with auto address sanitisation, frame protection, etc. etc.), and a default strict mode error checking.
It would then be actually really hard to successfully run a C program (in the debugger) with any problems. Under these conditions it'd be easy to imagine most C programs running with fewer bugs (leaks, etc.) than Rust programs.
The core of C is pointer arithmetic; this creates a fundamentally unsafe environment.
You can't even do anything in C without some asm (syscall wrappers), because C was meant to boil down and streamline PDP-11 assembly (Your computer is not a fast PDP-11) to a set of consistent principles. The consequence of this is that the core of the language is pointers and pointer arithmetic, and raw, unabstracted pointers are fundamentally unsafe to work with.
Using the Rust type system I can essentially confirm code is bug- and edge-case-free with exhaustive matching and unit testing (C's lack of tooling blessed the world with autoconf and cmake, btw). Not to mention Rust's ability to abstract away necessary boilerplate gives me more time to think about my code instead of pointer arithmetic and allocation heuristics.
The "cure" for C is a language that abstracts away raw pointers and memory allocation.
You can write every program you want to without pointer arithmetic.
If you mean that a dereference of a memory location involves the compiler emitting pointer arithmetic instructions -- that's true of all languages.
If you want your language to completely disguise the machine from you, and "abstract away" memory allocation, you're going to pay a high complexity cost to do so.
If you have never run C in a debugger, with the massive amount of highly sophisticated tooling available to C debuggers, then you're operating from a profoundly mistaken starting point for evaluating the viability of C for modern safe software development.
C debuggers and tooling are vastly more powerful than Rust's static type system, and catch a much wider array of memory problems (and bugs) than the Rust compiler can catch. Static verification is far more limited than the dynamic verification a sophisticated debugger can perform.
People's undergrad C course is a terrible basis on which to evaluate what C is today. The reason C is associated with a lack of security is that almost all software is written in C, written in a time when either the internet didn't exist or didn't imply an adversarial local environment.
Running a network cable through every facet of our O/S and software breaks many assumptions about the entire history of programming -- which C predominated in. This is a very poor basis on which to generalize the capabilities of a well-specified C programming environment (which today, is much more powerful than Rust's compiler).
> C debuggers and tooling are vastly more powerful than Rust's static type system, and catch a much wider array of memory problems (and bugs) than the Rust compiler can catch. Static verification is far more limited than the dynamic verification a sophisticated debugger can perform.
Is there any dynamic verifier that fully validates all accesses w.r.t. the object trees specified by the C standard? Tools like ASan and UBSan won't detect a write to one field running into another field, only a write overrunning the complete object. (Compiler-level hardening might catch that to some extent, but it's limited to TU boundaries.) Not to mention things like 'misuse of restrict pointers' that I've never seen any verifiers for, except for special cases like overlapping memcpy() buffers.
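A minimal sketch of that intra-object case (struct and field names invented for illustration):

    #include <stdio.h>
    #include <string.h>

    struct record {
        char name[8];
        char role[8];
    };

    int main(void) {
        struct record r = { "", "admin" };

        /* 12 bytes into an 8-byte field: the write stays inside the struct, so
           ASan's redzones around the whole object see nothing; 'role' is
           silently corrupted. */
        memcpy(r.name, "AAAAAAAAAAAA", 12);

        printf("%.8s\n", r.role);
        return 0;
    }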
Meanwhile, Rust does have its own dynamic verifier, called Miri [0], which checks just about every language-level rule at runtime. The main drawbacks are that it's slow and doesn't support calling arbitrary C functions, but it would be hard to get that to work short of the Valgrind route of emulating the whole process on an instruction level.
If you're redoing the c std, which was my initial point, you can introduce a debug allocator with metadata about object layout. Coupled with some debugger support, i'd imagine you can get there. And if the C std was willing to allow constexpr to do more at compile time, you wouldn't need explicit debugger support and could just use constexpr to modify the compiler.
There's also nothing stopping debuggers reifying the C at debug-time into this metadata.
My claim is 95% can be fixed by just normalizing what is current practice at the stdlib level and compiler level. By extending constexpr, i think you could get to 100%. Given that this is the case, why even bother with the nightmare of Rust?
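A hedged sketch of what such a "debug allocator with metadata" might look like (dbg_alloc/dbg_at/dbg_free are invented names, not an existing library):

    #include <stdio.h>
    #include <stdlib.h>

    /* every block carries a header with its size and a type tag, so a checked
       accessor (or a debugger script walking these headers) can reject bad accesses */
    struct dbg_header {
        size_t size;
        const char *type;
    };

    static void *dbg_alloc(size_t size, const char *type) {
        struct dbg_header *h = malloc(sizeof *h + size);
        if (!h) return NULL;
        h->size = size;
        h->type = type;
        return h + 1;                  /* user data lives right after the header */
    }

    static void *dbg_at(void *p, size_t offset, size_t want) {
        struct dbg_header *h = (struct dbg_header *)p - 1;
        if (offset > h->size || want > h->size - offset) {
            fprintf(stderr, "bounds violation in %s object\n", h->type);
            abort();
        }
        return (char *)p + offset;
    }

    static void dbg_free(void *p) {
        if (p) free((struct dbg_header *)p - 1);
    }

    int main(void) {
        int *xs = dbg_alloc(4 * sizeof(int), "int[4]");
        if (!xs) return 1;
        *(int *)dbg_at(xs, 3 * sizeof(int), sizeof(int)) = 42;   /* in bounds */
        /* *(int *)dbg_at(xs, 4 * sizeof(int), sizeof(int)) = 7;    would abort */
        dbg_free(xs);
        return 0;
    }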
Oh, you sounded like you were talking about something that already exists today, rather than something that you'd like to exist. The main problem with taking dynamic verification all the way is making it work with ABI boundaries, which won't be going anywhere in at least the next decade. You'd need everyone to migrate to a universal ABI that can convey all needed metadata, and while I've seen a few proposals for that, none of them have gone anywhere.
I was more saying that the scope of issues mainstream tooling catches includes ones that the rust compiler doesn't catch, and more than enough of the common memory safety issues -- there are some issues that aren't caught, sure -- but the operations which can cause those bugs are well-defined, most programs wouldn't have to use them, and a new std lib would help.
When people propagandize about C, they're universally unaware that the normal process of development basically addresses most of the problems Rust is supposed to be solving, and more than the rust compiler alone solves. The remainder are 95% to do with libc, which should just be thrown out.
A smidge more compile-time eval with constexpr, and the use case for Rust could disappear. It's a great shame that C is run by a standards process that's determined to relegate it to electric motors and digital watches from the 80s.
> People's undergrad C course is a terrible basis on which to evaluate what C is today. The reason C is associated with a lack of security is that almost all software is written in C, written in a time when either the internet didn't exist or didn't imply an adversarial local environment.
The problem is - undergrad C is about the lowest common denominator that all tooling understands and that all people understand. Of course you're probably not going to have to go as low as C89 like sqlite or, until 2022, the Linux kernel [1], but still, the long support cycles of many distributions make it challenging to move standards upgrades forward.
> C debuggers and tooling are vastly more powerful than Rust's static type system
As someone who's worked on C debuggers and tooling... I really have no idea what you're talking about. C's core semantics are just so weak that it's not really possible to express a lot of the things you can express in the type system, and that's before we get to the necessary lossiness that debuggers and tooling have to work with (e.g., you can't just ascribe types to memory in C because C--in practice--is way too loose with types for that to be meaningful).
For an example from something I've worked on, Linux manages to have two different arrays for the GPRs for a thread register context, one that's used for ptrace and one that's used for signal contexts. Helpfully, the header files give you macros to map register names to numbers so that you can say regs[RAX] instead of regs[0]. But the offsets are different, so you have to remember that you need to use regs[REG_RAX] instead of regs[RAX], and there is absolutely no tooling in the world that can tell you when you get it wrong because there is no expressible difference between the two scenarios in C. Meanwhile, in Rust, I can wrap the accessors in newtypes so that I can only use the correct set of constants to index into the array, which makes the error state literally impossible to construct.
That's the real value of a static type system--you can use it to make errors literally impossible to specify in an API.
I don't see the difference between having a typed access API (V inline reg_at(enum K k) {} ) and overloading the indexer.
If your point is that historical C APIs have overused an untyped operation, that's part of my point about a new std lib. Rust APIs can still provide an untyped indexer, it's just bad API design.
What I'm imagining a new std lib would be doing is having debug allocators, metadata against types, etc. Ie., a std library designed for the debugger along with a release version.
Using different enums doesn't help, because C will happily let you cast enum A to enum B implicitly.
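A small sketch of that failure mode (register names and values invented for illustration):

    enum ptrace_reg { PT_RAX = 1 };     /* hypothetical values */
    enum sigctx_reg { SC_RAX = 3 };

    static long read_reg(const long *regs, enum sigctx_reg r) {
        return regs[r];
    }

    long demo(const long *regs) {
        /* wrong enum: silently converted; at best a warning on some compilers
           (e.g. clang's -Wenum-conversion), never a hard error in standard C */
        return read_reg(regs, PT_RAX);
    }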
In Rust, I can express an API which can't be used incorrectly. In C, I can't. Sometimes, in C, you can get to the point where you use conventions that mean static or dynamic analysis tools might be able to flag the misuse of the API, but very often, such tools have extremely poor tradeoffs between precision and accuracy, far worse than exists in Rust with just the vanilla compiler.
My point isn't that you exhaust all the features of Rust with a better stdlib and "debugger-oriented programming" -- my point is that you can get 95% of the way there with trivial complexity costs.
Rust imposes significant program design costs which can be very detrimental to otherwise trivial performant memory management, to faster iteration of software design, and so on. These aren't free lang. features.
> Which language can do anything without some asm and support as many platforms as C?
I wouldn't be surprised if someone figured out how to do the interrupts and register control necessary to invoke syscalls with pure LISP/Scheme/CL :P
P.S. anything that compiles with LLVM and has an ingrained way to do print() that doesn't invoke libc, although there's a blurry line here between "pure asm"/"compiles to asm" that involves trusting-trust-style bootstrapping of features into the compiler
> > Which language can do anything without some asm and support as many platforms as C?
> I wouldn't be surprised if someone figured out how to do the interrupts and register control necessary to invoke syscalls with pure LISP/Scheme/CL :P
Haha
> P.S. anything that compiles with LLVM and has an ingrained way to do print() that doesn't invoke libc, although there's a blurry line here between "pure asm"/"compiles to asm" that involves trusting-trust-style bootstrapping of features into the compiler
Actually, LLVM IR has no concept of syscalls; you have to use inline assembly inside your IR to issue them.
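For the curious, a minimal sketch of what that bottoms out as on x86-64 Linux (syscall numbers and register conventions are specific to that ABI):

    /* write(2) and exit(2) with no libc, using GCC/Clang extended inline asm;
       build with -nostdlib */
    static long sys_write(int fd, const void *buf, unsigned long len) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)                          /* rax: return value   */
                          : "a"(1L),                           /* rax: __NR_write = 1 */
                            "D"((long)fd), "S"(buf), "d"(len)  /* rdi, rsi, rdx       */
                          : "rcx", "r11", "memory");           /* clobbered by syscall */
        return ret;
    }

    void _start(void) {
        sys_write(1, "hello\n", 6);
        __asm__ volatile ("syscall" : : "a"(60L), "D"(0L) : "rcx", "r11");  /* exit(0) */
    }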
> Remember that since 1989, no actions were taken to improve its security.
Technically, gets() was removed from the standard library in C11[0]. However, that is far from a semantically meaningful overhaul of the standard library. I nonetheless felt the need to point out that there was a very specific effort for the sake of completeness.
Which is great, except for all those stubborn folks not using anything beyond C99 -- and scanf and fgets are still possible attack vectors when you get sizes wrong.
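The "getting sizes wrong" failure mode, sketched:

    #include <stdio.h>

    int main(void) {
        char buf[64];

        /* scanf("%s", buf);              classic unbounded overflow */
        if (scanf("%63s", buf) == 1)      /* bounded, but 63 must track buf by hand */
            puts(buf);

        if (fgets(buf, sizeof buf, stdin))   /* fgets takes the size directly */
            fputs(buf, stdout);
        return 0;
    }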
Create a language with semantics exactly like C, but the standard library completely replaced. Call it in a way that avoids trademark disputes (e.g. CWSL, C With Sane Library). Get GCC and LLVM support it, which should be reasonably easy, because both support compiling code that does not rely on libc (though OS entry point code, etc should be wired in).
Would it be easy enough to port important C code to it, given that most of the libc-supplied functions, and functions transitively depending on these, would have to be rewritten? Would it be worthwhile, compared to rewriting such code in Zig, Rust, or Ada?
D began life as a re-engineered C++. (I'm sure Walter will correct me if I say any more).
There's a Safe C++ extension proposed for Clang [0].
But those are C++, not C. A little different kettle of fish.
Gnome's Vala [1] aims to be the "smoothest C off-ramp". It does compile to C, but with GObject taking control of everything.
There's CheckedC [2], which adds optional bounds checking to C, and was backed by Microsoft until recently.
There's the Linux kernel's nolibc [3], which I've enjoyed the heck out of using, but it is rather constrained.
There's C's own Annex K [4], that almost nobody has implemented, and every compiler developer hates and can poke holes in. GCC and LLVM have both repeatedly said they won't support it. (So much for it being easy to get them to support things...)
GCC already has a number of memory safe languages, though. Most of which, because they're part of the same compiler suite, can interact with other languages that GCC has. Like the D or Go frontends.
eh, i think i disagree. a new stdlib on its own wouldn't come with a lot of abi/linking baggage that tends to hold up tooling and migrations when you start introducing a different language. (see recent rust/linux drama).
i mean, if the committee members can make it happen or not, i don't know. but it's still a worthy thing to explore, I think. there's going to be a lot of C code that will need a very gradual migration path to safer apis for a very very long time.
I think this is a really good idea, and zig shows how it could work.
But this?
> Under these conditions it'd be easy to imagine most C programs running with fewer bugs (leaks, etc.) than Rust programs.
This is a crazy goal. You will never out-rust rust by adding a few runtime checks to C, while in debug mode. Fewer bugs than rust code is a wild goal.
I don’t think you understand just how much rust’s design prevents you from shipping bugs. It’s due to a combination of so, so many things. Like: references instead of pointers, unsafe blocks, sum types & match instead of unions, no implicit nullability, unwrapping optional values is explicit, the result type and #[must_use], bounds checks, the borrow checker preventing use after free, ownership semantics, Send & Sync for thread safety, and I’m sure plenty more.
It’s common to write very complex, threaded rust code and have it work first time. Well, the first time it compiles. Coming from C, it’s wild. Or, really, just about any other language.
To get the same result in C wouldn’t just need a “strict mode”. You would need to ban raw pointers - which would make it no longer C. And you’d need to make functions return more than an (easily ignored) status code. Ie, you want a result type. For bounds checking, you’d need a language level data structure for slices / arrays (pointer + length). You’d have to do away with void pointers for “generic” parameters. And probably 100 other tiny, breaking changes that the C community will never accept.
And for all that, you would essentially get zig. Zig does all these things.
But that would still get you worse bug density than rust because you don’t have a borrow checker. It’ll get you close - Runtime checks in debug mode will detect your use after frees - if you have a good test suite. But they won’t prevent aliasing. Or (I think) help with thread safety. For that, you need a borrow checker. You need rust.
We're talking about different kinds of bugs. One claim i'm making here is the poor iteration time imposed by Rust's complexity, significant design constraints, and so on, themselves cause kinds of bugs which are more readily addressed in C. Eg., I think memory leaks are easier in std Rust than in the kind of C i'm talking about.
I don't think we have a good evidential basis for comparing the total class of programming bugs in Rust vs. comparable langs -- since there isn't that much Rust code.
One "empirically ambitious" claim here is that the very high complexity of rust isn't design-bug-free, and "getting to 95%" with a modern C toolchain retains a very low-complexity get-it-done-and-iterate style of programming which has many "bug free'ing" advantages. Esp. if supported by a "debugger-oriented std lib"
> "getting to 95%" with a modern C toolchain retains a very low-complexity get-it-done-and-iterate style of programming which has many "bug free'ing" advantages. Esp. if supported by a "debugger-oriented std lib"
Speaking of things that don't have "good evidential basis"... I love how you apply an incredibly high standard of scrutiny to claims made by others but you neglect to do the same for your own claims.
The idea that memory leaks in Rust are easier than in C, even in this C with this fantasy standard library, is just absolutely ludicrous. We are living in two different planes of existence.
The C you're talking about doesn't exist and you're way way way over-stating the prevalence of bugs in Rust because of its iteration times/complexity/"design constraints." Which is another claim that doesn't have "good evidential basis." It's funny how the claims suggesting that Rust reduces bugs require a high standard of evidence, but the claims suggesting that Rust introduces new bugs that are more easily addressed by C are passed on without any scrutiny at all.
And where did you get this 95% figure from? Did you just pull out of thin air? Where is your "good evidential basis"?
Look, it's fine to theorize about things and have opinions and guesses. I get that. But when you don't let others have that same grace, your inconsistent application of evidentiary standards becomes plain.
The version of C i'm talking about does exist -- it's a common practice of contemporary large-scale C development, which largely neglects libc and runs everything in debuggers/memory-tooling with strict options.
You have also identified, as I have in my own comments, that some of my claims are as empirically difficult to verify.
> 95% of the supposed issues with C could be solved by a new standard library
But now you're saying this does actually exist, and the answer is actually "no standard library" and not a "new standard library."
Is this the same code responsible for all the memory safety CVEs we see?
And great job at plucking one pittance out of my comment and responding to it, while completely ignoring the more substantive critique of your inconsistent application of evidentiary standards in your commentary.
> You have also identified, as I have in my own comments, that some of my claims are as empirically difficult to verify.
The circumspection you describe here does not come across in your commentary at all. Instead, you just state things as if they are facts:
> 95% of the supposed issues with C could be solved by a new standard library, integrating the debugger into the compiler as the default build/run environment (with auto address sanitisation, frame protection, etc. etc.), and a default strict mode error checking.
There is no circumspection there. There is no expression of uncertainty. There is no admission that you lack "good evidential basis."
There's no concrete examples from you. No specific pointers to anything. Just made up statistics.
It's not a critique if I myself point it out in my own comments -- there isn't anything to reply to. HN comment threads are not the place for empirical research; but I can nevertheless point to its absence in official sales pitches.
Yes, people write their own stdlib for C, and the better ones are written effectively "for the debugger". This is code that runs spaceships, nuclear power plants, X-ray machines, and the like.
Rust fanatics exist in this parallel universe in which it was, necessarily, the language which was the original sin -- so that Rust can be sold upon a cross as the redemption for C.
There's plenty of existence-proof systems that are written in C with the goal of safety and reliability. No libc, and historical programming in general did not have that goal. This has vastly more to do with the history of programming, and its assumptions of non-adversarial low-risk host systems -- than to do with what contemporary C development necessarily looks like. As if C developers are actually unable to detect use-after-free or double-free etc. std memory safety issues; it's absurd.
Wait, you're whinging about Rust's iteration time and complexity, but in the same breath talking about code written for spaceships and nuclear power plants!?!? That's what you're comparing it to!? Do you have actual experience writing programs for spaceships and nuclear power plants? I don't, but I sure as hell would imagine that the regulatory requirements for it make it way more expensive to write than wrangling with rustc. Holy moly.
> Rust fanatics
Oh okay, so if we're going to go there, then I just get to call you a C fanatic. And yes, indeed, we live on two different planes of existence, as I said. That's for damn sure.
> but I can nevertheless point to its absence in official sales pitches
Which "official sales pitches"? I don't see any in this HN thread. Yet again applying inconsistent evidentiary standards.
> There's plenty of existence-proof systems that are written in C with the goal of safety and reliability. No libc, and historical programming in general did not have that goal. This has vastly more to do with the history of programming, and its assumptions of non-adversarial low-risk host systems -- than to do with what contemporary C development necessarily looks like.
You might be saying something significant in that paragraph of word salad, but I can't spot it. I'm not confused as to why C is the way it is. That isn't the interesting bit.
> As if C developers are actually unable to detect use-after-free or double-free etc. std memory safety issues; it's absurd.
> We're talking about different kinds of bugs. One claim i'm making here is the poor iteration time imposed by Rust's complexity, significant design constraints, and so on, themselves cause kinds of bugs which are more readily addressed in C. Eg., I think memory leaks are easier in std Rust than in the kind of C i'm talking about.
Huh? Memory leaks? Poor iteration time? "Different kinds of bugs"? What are you talking about?
How do you leak memory in rust by accident? I've worked full-time in rust for ~3-4 years and I don't think I've ever leaked memory in my code. I did it once on purpose in a script - but that was by explicitly calling Box::leak().
Poor iteration time? What? In my experience, iteration time in rust is significantly faster than that of C. Sure - the first program you write is hard, because learning rust is horrible. But once you know it, the ergonomics of the language make it a dream to work in. People make a big deal of the borrow checker, but its all the little things that the language does right that makes it productive to work in. Sum types. Match expressions. Iterators. Cargo. Editions. #[test]. References. Option and Result. Documentation. A standard library that works the same on every platform. And so on. Turns out we got better at inventing programming languages in the 50 years since C was invented. This isn't a rust thing - Swift, Zig and - in many ways - typescript and C# all support the same great feature set.
If you want to complain about rust, get in line. I'm no fanboy, and there's a laundry list of legitimate complaints you can make about the language. I've written thousands of words on the subject and annoyed a lot of people right here on HN in the process.
But you have to use it if you want to understand its flaws. It sounds like you're just inventing problems with rust from nowhere. How dull.
The Rust Evangelism Strike Force clearly hasn't been defunded by DOGE (yet). Respectfully, if you want to use Rust, just use Rust. If C doesn't suit you, you don't have to use it, and you don't have to make unreasonable demands of the standards body.
I wasn’t aware that on non-x86 platforms long double is often implemented with quadruple precision. I had assumed it was an x87-specific hack. On ARM64 Windows/macOS, long double is apparently 64 bits, which could be a problem.
Personally, something about that solution is unsatisfying. It feels like it’d be slow, even though that wouldn’t matter 95% of the time. I’d rather have a 128-bit integer of nanoseconds.
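A sketch of the 128-bit-nanoseconds idea, leaning on the (non-standard but widely available) __int128 extension in GCC and Clang:

    #include <stdio.h>
    #include <time.h>

    typedef __int128 nsec128;    /* GCC/Clang extension, not standard C */

    static nsec128 now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);          /* POSIX */
        return (nsec128)ts.tv_sec * 1000000000 + ts.tv_nsec;
    }

    int main(void) {
        nsec128 t = now_ns();
        /* printf has no 128-bit conversion, so split into seconds + nanoseconds */
        printf("%lld.%09lld\n",
               (long long)(t / 1000000000), (long long)(t % 1000000000));
        return 0;
    }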
In an article like this I would have liked to see some mention of TAI (https://en.wikipedia.org/wiki/International_Atomic_Time) as one of the alternatives to UTC. Unfortunately there are several different universal times. Apparently there's also a "Galileo System Time", for example.
Among the many improvements, time is one area where C++ has become better than old-school C cruft. In C++20 std::chrono, the Lua-like code is just this:
    // needs <chrono>, <format>, <iostream> and: using namespace std::chrono;
    auto now = system_clock::now();
    zoned_time local_time{current_zone(), now};
    std::cout << std::format("{:%a %b %d %T}\n", local_time);
I don't think having strftime return a malloc'd pointer is a good idea. The string won't be large at all and can easily fit onto the stack (just like it was done in the example code). If I want to use a custom allocator to store the string, I can. If I want to malloc the string I can.
Time parsing and formatting is prone to extended bikeshedding. I once raised the issue that Python had five parsers for ISO 8601 date formats, and they were all broken in some way. It took a decade to resolve that. By then I'd moved on to Rust.
> keep in mind that Integers support One percision, and there’s a trade off between resolution and the bounds of your epoch, Floating point values support all percisions, there is no such trade off.
Yeah, except with integers you get guaranteed precision across all of your data range while with floating point, it is ridiculously easy to accidentally lose precision without noticing it when e.g. shifting time deltas from the past into the future.
Not to mention that using floating-point number of seconds since epoch means that the times around the epoch are always given better precision than the timestamp around the current time which is really not what you want, and the situation only worsens with time.
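A concrete illustration of the precision problem with a double-seconds timestamp:

    #include <stdio.h>

    int main(void) {
        double t = 1723500000.0;   /* roughly August 2024 as seconds since the epoch */
        double later = t + 1e-9;   /* try to add one nanosecond */

        /* with a 53-bit mantissa, adjacent doubles near 1.7e9 are ~2.4e-7 s apart,
           so the nanosecond silently disappears */
        printf("%d\n", later == t);   /* prints 1 */
        return 0;
    }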
Indeed, when I wrote a C utility and just wanted to output its start, finish and run time, I spent MORE TIME THAN IT TOOK TO WRITE THE WHOLE PROGRAM to figure out how the whole date/time garbage works! This was a painful, maddening experience. As if this entire API was designed to drive you mad.
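For what it's worth, a sketch of roughly what that dance ends up looking like with the standard API (time_t for the instants, localtime() + strftime() for display, difftime() for the elapsed seconds):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        char buf[32];
        time_t start = time(NULL);

        /* ... the actual work of the program ... */

        time_t finish = time(NULL);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&start));
        printf("started:  %s\n", buf);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&finish));
        printf("finished: %s\n", buf);
        printf("ran for:  %.0f s\n", difftime(finish, start));
        return 0;
    }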
Right. So you have an extra byte you need to free. One that you can't introspect, and reading will cause a memory fault, because it won't be NULL-terminated (0-length means 0 length). And not freeing, because the assumption of 0-length is violated, leads to a memory leak.
So instead of just checking one return value, now you have to check two. And people are not great at even handling a single NULL check. Few people check malloc's return, as awful as that is.
Design should be intuitive as possible. You can't assume they'll even look at a manpage.
If something returns a length, then people assume that length is what will be allocated. A valid 0-length time string, violates that assumption, and will cause problems down the line.
If someone is forced to do the allocation themselves, then there's a greater chance they'll actually notice that they need to free it.
What? No, the null terminator is the single byte in question. That's how an empty string is represented in C. It's not the same thing as a NULL pointer, as you may be thinking.
> Out of all the components of C, its time API is probably the one most plagued with legacy cruft.
First off, no, locales and wide characters exist. This statement is just laughable.
But even as to time: that seems really unfair. This whole area is a footgun and has been the source of bad implementation after bad implementation, in basically every environment. But among those: The "struct tm" interface is notable for being:
1. Very early, arriving in C89 and the first drafts of POSIX, with working implementations back into the mid 80's.
2. Complete and correct, able to give correct calendar information for arbitrary named time zones in an extensible and maintainable way. LOTS of other attempts got stuff like this wrong.
3. Relatively easy to use, with a straightforward struct and a linear epoch value, with conversion functions in each direction, and only a few footguns (the mix of 0- and 1-indexing was unfortunate). There are even a few quality of life additions like support for "short" month names, etc...
Really, these routines remain useful even today, especially since their use is guaranteed to integrate with your distro's time zone database which must be constantly updated to track legal changes.
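The "conversion functions in each direction" point, sketched as a round trip:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);

        /* epoch -> calendar fields, in the local zone from the tz database */
        struct tm tm = *localtime(&now);

        /* calendar fields -> epoch; mktime() also normalizes out-of-range
           fields, so "same time tomorrow" is just tm_mday + 1 */
        tm.tm_mday += 1;
        time_t tomorrow = mktime(&tm);

        printf("seconds until tomorrow: %.0f\n", difftime(tomorrow, now));
        return 0;
    }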
There's stuff to complain about, but... no, I think the premise of the article is dead wrong.
> 1. Very early, arriving in C89 and the first drafts of POSIX, with working implementations back into the mid 80's.
I looked it up. In fact it's much earlier than that. The API arrived in time.h via v7 Unix in 1979.
And it remains, unchanged, pervasively used, and most importantly still used for new code, four and a half decades later. Rather than "legacy cruft", this constitutes one of the most successful utility APIs in human history.
I mean, this blog post is the kind of uneducated posturing that makes the JavaScript kiddies happy, because "crufty, old C is so bad, see!?". But none of them know enough to know that it's basically all bullshit.
Javascript is a famously cruft free language, as we all know.
My real question is how you created an account, found this article, presumably read through it, and wrote out multiple comments insulting me all within two minutes.