Yet another issue is that `char` is signed on some platforms but unsigned on others. It is signed on x86 but unsigned on RISC-V. On ARM it can be either (the ARM ABI specifies unsigned, but Apple's platforms use signed).
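Concretely, the same two lines give a different answer depending on which platform compiles them:

```c
#include <stdio.h>

int main(void) {
    char c = '\xFF';   /* byte values 0x80-0xFF are where the two worlds diverge */
    int i = c;         /* -1 where plain char is signed, 255 where it's unsigned */
    printf("%d\n", i);
    return 0;
}
```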
I therefore use typedefs called `byte` and `ubyte` wherever the data is 8-bit but not character data.
I also use the aliases `ushort`, `uint` and `ulong` to cut down on typing.
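As a sketch, such a header might look like this (the alias names are the ones above; the exact definitions are my guess at the obvious ones):

```c
/* types.h -- hypothetical header collecting the aliases described above */
#ifndef TYPES_H
#define TYPES_H

typedef signed char    byte;    /* 8-bit data, sign made explicit            */
typedef unsigned char  ubyte;   /* 8-bit data, unsigned on every platform    */
typedef unsigned short ushort;
typedef unsigned int   uint;
typedef unsigned long  ulong;

#endif /* TYPES_H */
```

One wrinkle: POSIX's `<sys/types.h>` already defines `ushort`, `uint` and `ulong`, so a header like this can collide with it on Unix-like systems.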
On the other hand, the types in <stdint.h> are often recognised by syntax colouring in editors where user-defined types aren't.
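For comparison, the `<stdint.h>` spellings of the same roles (note these are exact-width types, unlike the plain aliases above):

```c
#include <stdint.h>

int8_t   b;   /* explicitly signed 8-bit  (the role of `byte` above)   */
uint8_t  ub;  /* explicitly unsigned 8-bit (the role of `ubyte` above) */
uint16_t us;
uint32_t u;
uint64_t ul;
```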
Then you're better off using custom types - that way people will immediately know your type is non-default - as opposed to hiding your customization away in a makefile and pranking people who expect the built-in types to behave a certain way.
The people who understand that it can be either, depending on a compiler switch, are exactly the people who use an explicit sign (typically via a typedef) to ensure their code always works.
The people who say that char is de facto signed and everyone should just deal with it are the people who end up writing broken code.
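The canonical example: `getchar()` returns an `int` precisely so that `EOF` (-1) can be distinguished from every byte value, and stuffing the result into a plain `char` breaks differently depending on the sign:

```c
#include <stdio.h>

void copy_broken(void) {
    char c;
    /* If char is unsigned, EOF (-1) becomes 255 and the loop never ends.
       If char is signed, an actual 0xFF byte in the input compares equal
       to EOF and the loop stops early. Broken either way. */
    while ((c = getchar()) != EOF)
        putchar(c);
}

void copy_fixed(void) {
    int c;   /* explicit type with room for EOF: works everywhere */
    while ((c = getchar()) != EOF)
        putchar(c);
}
```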
Yes, the optional sign on char is also madness. C had a chance in 1989 to make it unsigned, and muffed it. (When the C89 committee decided between value-preserving and unsigned-preserving promotion semantics, they could have also said char was unsigned, and saved generations of programmers from grief.)
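For the parenthetical: "value-preserving" means a small unsigned type promotes to `int` whenever `int` can hold all of its values; the rejected rule would have promoted it to `unsigned int`. A short illustration:

```c
#include <stdio.h>

int main(void) {
    unsigned char uc = 1;
    /* Value-preserving (what C89 chose): uc promotes to int,
       so uc - 2 is -1 and the test below is true.
       Under the rejected unsigned-preserving rule, uc would promote
       to unsigned int, uc - 2 would wrap to UINT_MAX, and the test
       would be false. */
    printf("%d\n", (uc - 2) < 0);   /* prints 1 under C89 rules */
    return 0;
}
```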
D's `char` type is unsigned. Done. No more problems.