Yeah. Certainly felt like that. On the other hand, the content does seem good. It definitely wasn't slop, even if I can't judge how useful it really was (in terms of giving a solution).
You don't. Assertions are assumptions. You don't explicitly write recovery paths for individual assumptions being wrong. Even if you wanted to, you probably wouldn't have a sensible recovery in the general case (what do you do when the enum that had 3 options suddenly comes in with a value of 1000?).
I don't think any C programmer (where assert() is just debug_assert!() and there is no assert!()) is writing code like:
assert(arr_len > 5);
if (arr_len <= 5) {
    // do something
}
They just assume that the assertion holds and, if it doesn't, hope that something crashes later and provides info for debugging.
Anyone writing to a standard that requires 100% decision-point coverage will either not write that code (because NDEBUG is insane and assert should have useful semantics), or will have to both write and test that code.
The risk of losing one (or both) earbuds is a real one. My ears don't keep a snug grip on the earbuds, so they tend to come loose after I walk a little. That might just be my own particular pair, but there is also the chance that only one of the two will connect to your phone.
On the other hand, wired earbuds get tangled together. I can't walk around with them because the cable gets caught in the swing of my arms. Connecting them to the phone after a call had already started was a piece of cake, though. With Bluetooth, I never have my earbuds in when I actually need them, and it's too much of a pain to take them out of my bag and connect them.
Whenever it is time to replace my current earbuds, I am gonna go for a neckband instead. It has basically the best of both worlds, imo (I am not that sensitive to audio quality), and the downsides aren't large enough to matter (I'll think of the weight as a neck workout).
Then don’t buy headphones like that. I have AirPods Pro, but I also have a pair of $50 Beats Flex that just hang around my neck if they fall out of my ear. I use them when I travel.
I bought a pair of double-flange doohickeys to replace the standard ones.
Since the human eye is most sensitive to green, it will spot errors in the green channel much more easily than in the others. This is why you need _more_ green data.
Couldn't the compiler still optimise this? Make two versions of the function, one with constant folding applied and one without. Then at runtime, check the value of the parameter and call the corresponding version.
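A minimal hand-written sketch of what that transform would produce (all names are made up for illustration; a real compiler would generate the specialized body itself):

```javascript
// Generic version: nothing is known about `factor`.
function scaleGeneric(x, factor) {
  return x * factor;
}

// Specialized body, as if `factor` had been constant-folded to 2,
// which then enabled strength reduction (multiply -> add).
function scaleByTwo(x) {
  return x + x;
}

// The runtime check proposed above: dispatch to the specialized
// version when the parameter happens to match the folded constant.
function scale(x, factor) {
  return factor === 2 ? scaleByTwo(x) : scaleGeneric(x, factor);
}
```

This is essentially what JIT compilers do with value speculation, except the check-and-dispatch is generated automatically and backed by deoptimization.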
Personally, I would prefer that the package managers keep their own lockfiles with all their metadata. A CI process (using the package managers themselves) can create the SBOM for every commit in a standardized environment. We get all the same benefits without losing anything (the package managers keep their own formats and metadata, and anything unneeded gets stripped when the SBOM is generated).
Second that. It is trivial to add an SBOM generator to your pipeline; it is not trivial to make all kinds of package managers switch, and each format serves a different audience.
To see what an impossible task this is, there is no need to even consider different ecosystems (PyPI vs NPM vs Cargo vs ...). Even across Linux distributions, the package managers are so different that expecting them to support the same formats is a lost cause.
I do exactly that in my container build pipelines and it is great. And then CI uploads those SBOMs to Dependency Track.
Depending on the language, scanning just the container is not enough; you definitely want to scan the lockfiles for the full dependency list before everything is compiled/packed/minified and becomes invisible to trivy/syft.
You are building everything in CI from scratch, so in theory it should be entirely possible to skip scanning lockfiles and get all the data from the respective sources (OS, runtime, dynamic libs, static deps, codegen tools, build-time deps, etc.).
This isn't really something the logging library can do. If the language provides a string interpolation mechanism then that mechanism is what the programmers will reach for first. And the library cannot know that interpolation happened because the language creates the final string before passing it in.
If you want the builtin interpolation to become a no-op in the face of runtime log disabling, then the logging library has to be a builtin too.
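To make the problem concrete, here is a tiny sketch (hypothetical `log`/`expensiveDebugInfo` names): the template literal is evaluated by the language before `log()` ever runs, so the expensive work happens even when logging is off.

```javascript
let logEnabled = false;
let expensiveCalls = 0;

// Stand-in for work you'd rather skip when logging is disabled.
function expensiveDebugInfo() {
  expensiveCalls += 1;
  return "details";
}

function log(message) {
  // By the time we get here, `message` is already a finished string.
  if (logEnabled) console.log(message);
}

log(`state: ${expensiveDebugInfo()}`); // interpolated eagerly by the language
// expensiveCalls is now 1 even though nothing was printed
```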
I feel like there's a parallel with SQL, where you also want to discourage manual interpolation. Taking inspiration from it may help: you may not fully solve the problem, but there are some API ideas and patterns worth borrowing.
A logging framework may have the equivalent of prepared statements. You may also nudge usage where the raw string API is `log.traceRaw(String rawMessage)` while the parametrized one has the nicer naming `log.trace(Template t, param1, param2)`.
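A rough sketch of what that API shape could look like (all names and the `$param` syntax are invented for illustration, not a real framework):

```javascript
// A Template holds only the static message shape; params come later,
// so the library can skip formatting entirely when the level is off.
class Template {
  constructor(pattern) {
    this.pattern = pattern;
  }
  render(params) {
    return this.pattern.replace(/\$(\w+)/g, (_, key) => String(params[key]));
  }
}

const logger = {
  traceEnabled: false,
  // Raw strings get the clunkier name on purpose.
  traceRaw(rawMessage) {
    if (this.traceEnabled) console.log(rawMessage);
  },
  // The parametrized variant gets the nice short name.
  trace(template, params) {
    if (this.traceEnabled) console.log(template.render(params));
  },
};

const GREETING = new Template("hello $name");
logger.trace(GREETING, { name: "world" }); // trace disabled: no formatting done
```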
You pass "foo" to Template. The Template will be instantiated before log ever sees it. You conveniently left out the case where the "foo" string is computed from something that actually needs computation.
Like both:
new Template("doing X to " + thingBeingOperatedOn)
new Template("doing " + expensiveDebugThing(thingBeingOperatedOn))
You just complicated everything to get the same class of error.
Heck even the existing good way of doing it, which is less complicated than your way, still isn't safe from it.
All your examples have the same issue, both with plain string concatenation and with more expensive calls. You can only get around an unknowing or lazy programmer if the compiler is smart enough to skip these entirely (JIT or not; a JIT would need to see that these calls never amount to anything and decide to skip them after a while, which isn't deterministically useful, of course).
Yeah, it's hard to prevent a sufficiently motivated dev from shooting themselves in the foot; but these still help.
> You conveniently left out where the Foo string is computed from something that actually need computation.
I left it out because the comment I was replying to was pointing out that some logs don't have params.
For the approach using a `Template` class, the expectation would be that the docs call out why this class exists in the first place: to enable lazy computation. Doing string concatenation inside a template constructor should then raise a few eyebrows when writing or reviewing code.
I wrote `logger.log(new Template("foo"))` in my previous comment for brevity as it's merely an internet comment and not a real framework. In real code I would not even use stringy logs but structured data attached to a unique code. But since this thread discusses performance of stringy logs, I would expect log templates to be defined as statics/constants that don't contain any runtime value. You could also integrate them with metadata such as log levels, schemas, translations, codes, etc.
Regarding the args themselves, you're right that they can also be expensive to compute in the first place. You may then design the args to be passed via a callback, which allows deferring the param computation.
A possible example would be:
const OPERATION_TIMEOUT = new Template("the operation $operationId timed-out after $duration seconds", {level: "error", code: "E_TIMEOUT"});
// ...
function handler(...) {
    // ..
    logger.emit(OPERATION_TIMEOUT, () => ({operationId: "foo", duration: someExpensiveOperationToRetrieveTheDuration()}))
}
This is still not perfect, as you may need to compute some data before the log "just in case" you need it for the log. For example, you may want to record the current time, then do the operation. If the operation times out, you use the time recorded before the op to compute how long it ran. If you did not time out and don't log, then getting the current system time was "wasted".
All I'm saying is that `logger.log(str)` is not the only possible API; and that splitting the definition of the log from the actual "emit" is a good pattern.
Unless log() is a macro of some sort that expands to if(logEnabled){internalLog(string)} - which a good optimizer will see through and not expand the string when logging is disabled.
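JavaScript has no macros, but the closest analog of that expansion (sketched here with made-up names) is passing a thunk: the enabled-check runs first, so the message string is never built when logging is off.

```javascript
let logEnabled = false;
let messagesBuilt = 0;

function internalLog(s) {
  console.log(s);
}

// The call site passes a closure instead of a finished string,
// so the guard decides whether the string gets constructed at all.
function log(makeMessage) {
  if (logEnabled) internalLog(makeMessage());
}

log(() => {
  messagesBuilt += 1; // only runs when logging is enabled
  return "expensive message";
});
// logging is disabled, so the thunk never ran and messagesBuilt is 0
```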
1. The ad blocker.
2. The sense of superiority over the normies (because of the ad blocker)
3. Theming
If adblockers are killed, that removes points 1 and 2. I am pretty sure I can do the same theming in Chrome (I have simple tastes), so that makes 3 a non-factor. Combined with the companies that refuse to make their sites work with Firefox, there is no reason not to use Chrome. Privacy is a non-factor since my identity is already wholly linked to my Google account; I would have to switch off of there first, and I am not putting in the effort for that.
The various projects that mark something as deprecated but then don't give a removal timeline, or keep delaying the removal (or even explicitly say it won't be removed, just remain deprecated forever), are the cause of this problem.
IMO, any deprecation should go in the following steps:
1. Decide that you want to deprecate the thing. This includes documenting how to migrate away from it, what to use instead, and how to keep the existing behaviour if needed. This step also sets the overall timeline, starting with the decision and ending with the removal.
2. Make the code give out big warnings for the deprecation. If there's a standard build system, it should have support for deprecation warnings.
3. Break the build in an easy-to-fix way. If there is too much red tape to take one of the recommended migration steps, the old API is still there, just behind a `deprecated` flag or path. Importantly, this means that at this step, 'fixing' the build doesn't require any change in dependencies or any (big) change in code. It should be a one-line change to make it work.
4. Remove the deprecated thing. This step is NOT optional! Actually remove it. You can keep a stub in your compiler / library / etc. that produces a helpful error, but the functionality itself must be deleted. Fixing the build now requires some custom code or an extra dependency; it is no longer a trivial fix (not as trivial as the previous step, at least).
Honestly, the build system should provide the tools for this: being able to say that an item is deprecated and should warn, that it is deprecated and only accessible if a flag is set, or that it was removed and the error message should say "function foo() was removed in v1.4.5. Refer to the following link:..." instead of just "function foo() not found".
If the build system can treat warnings as errors, it should also be able to exempt specific warnings from that treatment (so that package updates can still happen while CI keeps seeing the warning). The warning itself shouldn't be silenced.