But arguably, these species weren't supposed to be here "naturally", so isn't this unnatural? Doesn't this make an unnatural method to stop something unnatural okay?
Also, if introduced/invasive species taking over a region is "evolution", etc., then the virus will just be another invasive species, and the rabbits will have to face the consequences of evolution.
I didn't interpret their comment that way. It's common to have a handful of managers under you, but to refer to their reports as your managees as well. That easily reaches 100-200 people.
I think it's the "large teams" bit that makes the OP appear to be saying the "team" is up to 300 people. Team suggests a group of equally ranked members to me.
You can't really have a team of 300; that sounds more like a management-speak version of "team".
Of course only the poster knows for sure their intended meaning.
```
// square root of n with Newton-Raphson approximation
// (t is the convergence tolerance)
double r = n / 2;
while ( Math.abs( r - (n / r) ) > t ) {
    r = 0.5 * ( r + (n / r) );
}
System.out.println( "r = " + r );
```
And refactoring it to this function:
```
private double SquareRootApproximation(double n) {
    double r = n / 2;
    while ( Math.abs( r - (n / r) ) > t ) {
        r = 0.5 * ( r + (n / r) );
    }
    return r;
}

System.out.println( "r = " + SquareRootApproximation(n) );
```
I'm all for this refactoring, but something was lost in the process. What kind of square root approximation is being used? Does the algorithm have a name? What would I search for if I wanted to read more about it? That information was in the original comment.
There's an infinite amount of detail that's impossible to capture in a comment and which invariably changes over time and doesn't hold in the future.
For my team, the solution has been writing longer commit messages detailing not only what has changed, but also the why and other considerations, potential pitfalls and so forth.
So in this case, a good commit message might read like:
```
Created square root approximation function
This is needed for rendering new polygons in renderer Foo
in an efficient way as those don't need high degree of accuracy.
The algorithm used was Newton-Raphson approximation, accuracy was
chosen by initial testing:
[[Test code here showing why a thing was chosen]]
Potential pitfalls here include foo and bar. X and Y were also
considered, but left out due to unclear benefit over the simpler
algorithm.
```
With an editor with good `git blame` support (or using GitHub to dig through the layers), this gives me a lot of confidence when reading code, as I can go back in time and read what the author was originally thinking. This way I can properly evaluate whether conditions have changed, rather than worry about the next Cthulhu comment that no longer applies.
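For example, digging through that history from the command line might look like this (the file path is hypothetical):

```
# annotate each line with the commit that last touched it
git blame -w src/Renderer.java

# follow the history of just the SquareRootApproximation function
git log -L :SquareRootApproximation:src/Renderer.java

# read the full commit message behind a hash found via blame
git show --no-patch <commit-hash>
```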
How so? The code still documents what is happening, the commits however lay out the whys.
The point is that these two are separate questions, and trying to use comments as a crutch to religiously join the two is a headache. It's impossible to keep everything in sync, and I don't want to read needless or, worse, misleading information.
What's worse, in comments we often omit the important details, such as why the change was made, what other choices were considered, how the thing was benchmarked, etc.
That said, comments still have a place. Just not everywhere for everything and especially not for documenting history.
I disagree. I think the "whys" belong in the comments; in fact, that's the most important part of the comment if the code is cleanly written. I don't want to be happily coding along, get to a glob of code, have to go to the repo pane, hunt for the commit that explains this particular thing, and then read a commit message. Put it in a comment in the code. Pretty please.
That isn't actually important unless you have multiple algorithms, which is when you create an ISquareRootApproximator interface, and have NewtonRaphsonAlgorithm and any other classes implement it.
Then you can have them duke it out running identical unit tests in the profiler.
That creates a clean separation between the people who just need to know what a function does and those that need to know how that function works. People who just need an approximated square root fast can understand perfectly well what ISquareRootApproximator.ApproximateSquareRoot( r ) does, and don't necessarily care whether your "get me a square root approximator" function returns a NewtonRaphsonAlgorithm, or a CORDICAlgorithm, or a BitshiftsMagicNumbersAndLookupTablesAlgorithm, or something else, so long as it approximates a square root.
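A minimal sketch of that separation, using the names from this comment (adjusted to Java casing; the tolerance parameter is my own addition):

```
// Callers depend only on this interface, not on any concrete algorithm.
interface ISquareRootApproximator {
    double approximateSquareRoot(double n);
}

// One interchangeable implementation; CORDIC or lookup-table versions
// would implement the same interface and run the same unit tests.
class NewtonRaphsonAlgorithm implements ISquareRootApproximator {
    private final double tolerance;

    NewtonRaphsonAlgorithm(double tolerance) {
        this.tolerance = tolerance;
    }

    @Override
    public double approximateSquareRoot(double n) {
        double r = n / 2;                    // initial guess
        while (Math.abs(r - (n / r)) > tolerance) {
            r = 0.5 * (r + (n / r));         // average r and n/r
        }
        return r;
    }
}
```

Swapping in a different approximator then touches only the construction site, which is what makes the profiler shoot-out cheap.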
I won't speak for anyone else, but I've written some really good code and then had a hard time understanding what the heck I did after a year.
Joe's arguments are weak:
* code should be readable <--- I agree
* good comments require good writers <--- not really; comments are for fellow programmers to read. If you want to document your APIs, yes, spend good time on it and get feedback from your colleagues. Inline code comments are internal, so write in the most natural way possible. I've read "shit" and "fuck" before (see the Firefox codebase, very funny).
* refactoring <--- I agree, but refactoring and documentation cannot be mutually exclusive. That's wrong.
* His example is way too simple. Try a more difficult approximation function.
Joe needs to realize that code produced at work is not meant for one single individual. If I leave, I want my co-workers and their future co-workers to have a good time navigating the codebase.
Use comments wisely, but don't avoid them! Adding 10 extra lines of comments to the file is better than a one-liner no one can understand. Let's not run a fewest-lines competition when we're writing code professionally. My reviewers shouldn't have to get frustrated and beg me to explain. Use newlines to make your code more readable as well.
> * refactoring <--- I agree, but refactoring and documentation cannot be mutually exclusive. That's wrong.
Commenting is in tension with refactoring, because nothing enforces that comments remain correct when code is refactored, so they tend to go wrong.
> Use comments wisely, but don't avoid them! Adding 10 extra lines of comments to the file is better than a one-liner no one can understand.
But worse than a one-liner everyone can understand. I think Joe has the right of it: write comments as a last resort, when you can't make the code readable enough without them. But don't use them as a crutch to avoid fixing the code.
I think his example proves the opposite of what he intends. His example is just begging for a discussion of how the approximation actually works, edge case considerations, error bounds, assumptions, limitations, complexity, and references. At least that's what I would want if I was looking at it with fresh eyes. Sure, the author's contact info is a little tongue-in-cheek but without comments I can only know what the code does, not what it's supposed to do.
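For illustration, the kind of header comment being asked for might look like this (a sketch; the convergence and edge-case claims are standard Newton-Raphson facts, not taken from the article):

```
/**
 * Approximates sqrt(n) via Newton-Raphson iteration (a.k.a. Heron's /
 * the Babylonian method when applied to square roots).
 *
 * Assumes n > 0 and tolerance t > 0; does not handle n == 0 or
 * negative input. Converges quadratically for well-behaved input,
 * stopping once r and n/r agree to within t.
 *
 * See any numerical-methods reference on Newton-Raphson root finding.
 */
private double squareRootApproximation(double n, double t) {
    double r = n / 2;                  // initial guess
    while (Math.abs(r - (n / r)) > t) {
        r = 0.5 * (r + (n / r));       // average r and n/r
    }
    return r;
}
```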
Exactly, because you don't want every future reader of the code to have the same question.
This is why I hate implicit assumptions. If I need some special knowledge or some special assumption that is nowhere in the immediate vicinity of the code, then maybe your approach to the problem is flawed. Sure, it'll work, but good luck to future coders working with it.
I have almost no idea about the Ethereum system, but don't you have to pay to have your code executed on the global Ethereum machine? Does that mean I have to pay to test-run my code? Or maybe only after I deploy it?
I would disagree. My boss holds the record for the fastest promotion from junior developer to executive, and he fights for us against other department/team heads and upper management on a regular basis. It's because he makes sure to deliver such value to the company that management can't ignore him and his team.
> Open Source = more likely for attackers to find bugs, but less likely for bugs to persist. Don't need to trust the company's reputation for code quality.
But you need to trust the entire community not to insert bugs/backdoors, and/or to weed out such code. Not to mention that open-source contributors arguably have less incentive than a company doing closed-source development.
Of course, the above argument assumes contributors are allowed to make changes to the codebase, rather than just reviewing the code.
I agree. At some point they should stop treating FF users as "well, they aren't OUR users" in the cost-benefit equation. They have to acknowledge that FF has a big chunk of the market and that it should be part of the "browsers we test on" list.
Considering the features work fine on FF, the actual costs for testing on FF should be minimal.
Well, they’re busy solving that with alternative solutions, by paying devs to secretly install Chrome with their software, and set it as default, and similar shady deals.
This is helpfully decimating the market share of anything that isn't Chrome.
> Even the VLC authors documented how Google tried paying them to ship Chrome as default with their installers:
It doesn't mention when, though. If it's from around the time when MSIE was dominant, well, I (FWIW) have less of a problem with that. Because, even with its profiling, Google Chrome is objectively a far, far better browser than MSIE ever was. And the Google of 2005 or 2010 is a better Google than the Microsoft of 1998 or 2002. Check the Halloween documents on that one.