I've been using GCC & friends for a few years now, building mostly for the Cortex-M23. GDB has been a godsend: with a J-Link you basically have as much runtime debugging support as you would on a hosted system. This is especially helpful in cases where logging would slow the system down too much.
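For anyone who hasn't tried it, a typical session looks roughly like this (the device name, ELF name and toolchain prefix are just placeholders; 2331 is the J-Link GDB server's default port):

    # terminal 1: start the GDB server
    JLinkGDBServer -device <your-device> -if SWD -speed 4000

    # terminal 2: connect, flash & debug much like on a hosted system
    arm-none-eabi-gdb firmware.elf
    (gdb) target remote localhost:2331
    (gdb) monitor reset
    (gdb) load
    (gdb) break main
    (gdb) continue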
We're also using CMake very successfully as our build system, as opposed to Make itself. It lets you organise the build & support different configurations & dependencies in a much easier way than e.g. Makefiles & Git submodules, which we've tried in the past. When combined with the Ninja generator we're also seeing ~20% faster build times.
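As a rough sketch of what that looks like (not our actual tree — the project name, the option, the dependency and the toolchain-file path below are made up for illustration):

    # CMakeLists.txt
    cmake_minimum_required(VERSION 3.20)
    project(firmware C)

    option(ENABLE_DEBUG_LOGGING "Compile in verbose logging" OFF)

    add_subdirectory(third_party/somelib)        # vendored dependency, built from source
    add_executable(app src/main.c)
    target_link_libraries(app PRIVATE somelib)

    if(ENABLE_DEBUG_LOGGING)
        target_compile_definitions(app PRIVATE DEBUG_LOGGING=1)
    endif()

Configuring then comes down to something like

    cmake -S . -B build -G Ninja -DCMAKE_TOOLCHAIN_FILE=cmake/arm-none-eabi.cmake
    cmake --build build

and swapping generators is just the -G flag, which is where the Ninja speed-up comes in.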
I have not seen a compiler that ICEs (crashes with an internal compiler error) more often than IAR does (experience from c. 5 years ago). Early-days rustc may come close, but my experience there is likely skewed by my perspective as a rustc contributor.
And we were always using IAR with each and every optimization disabled: if enabling them didn't make the compiler implode outright, it would produce invalid machine code even for trivial examples. This defeated any code-size benefit IAR may have had over GCC/Clang.
To top it off, I had the impression that reports of that nature weren't really considered critical.
Really? I would expect more than that from an EDG-based professional compiler. A friend of mine uses the STM8 version at his workplace and swears by it (though he still prefers SDCC due to its C2x support).
Does the Arm Compiler for Embedded have significant changes to upstream clang? I'm somewhat concerned that clang not being GPL may eventually become unfortunate for embedded...
For games, most performance-critical code probably runs not on the CPU but on the GPU. I believe they use pretty standard open-source CPU compilers, and Sony, for example, does contribute to LLVM.
In my experience the Arm clang compiler often produced code that is ~10-20 percent faster (and hence less energy-consuming) than gcc at the same optimisation levels
(Building bare-metal code with a lot of DSP and matrix multiplication.)
Yes, especially Neon SIMD, where gcc produced horrible Neon code in all versions before gcc-12, even when using SIMD intrinsics. From version 12 onwards gcc is at the same level as clang.
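To give an idea of the kind of kernel we mean — a minimal, hypothetical sketch (function name and layout invented for illustration), the sort of intrinsics code where pre-12 gcc lagged:

    #include <arm_neon.h>
    #include <stddef.h>

    /* dst[i] += a[i] * b[i], four lanes at a time with Neon intrinsics */
    void mac_f32(float *restrict dst, const float *restrict a,
                 const float *restrict b, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);   /* load 4 floats */
            float32x4_t vb = vld1q_f32(b + i);
            float32x4_t vd = vld1q_f32(dst + i);
            vd = vmlaq_f32(vd, va, vb);          /* vd += va * vb, per lane */
            vst1q_f32(dst + i, vd);              /* store back */
        }
        for (; i < n; i++)                       /* scalar tail */
            dst[i] += a[i] * b[i];
    }

(On 32-bit Arm you'd build it with something like -O2 -mfpu=neon; AArch64 has Neon unconditionally.)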
Gcov is great until higher-ups start demanding 100% line coverage, and you end up with hundreds of thousands of lines of unit tests. Then you gotta hire someone to check why the Jenkins builds are failing because some of your code isn't thread-safe, and running the unit tests takes longer than compiling and taking a smoke break...
I remember an ARM person stating: "We thought: we're building these chips, who else could optimize for them better than us? Then we learned how little we knew about C++, and the next version will be clang-based." That was back when we filed a number of RealView 4 compiler bugs...