I've been writing Java in Vim from time to time for 3 years. Rust is much more pleasant to deal with in a plain text editor. I encourage you to give it a try.
Interesting list. I haven't worked much in C recently (though I do some D these days), but I used it a lot earlier. I first started with K&R (not because I knew it was good or the best then; IMO, and maybe some will not agree, it may not be the best for a beginner), but because it was the only one available at that time and place, IIRC. Later I read some other C books. I think the Kochan (Stephen?) one you mention was one of them, and maybe also the Prata book. IIRC, the Kochan one was good. One book I read had an example of creating an index or concordance of a text file or program; maybe it was Kochan. I remember enjoying the logic.
Yeah the first edition wasn't so good. It got quite a big re-write for the second edition which came out quite quickly (in terms of a C book getting a second edition anyway).
It's hard to put my finger on exactly what it is I don't like about it. After rereading the table of contents, I certainly don't disagree with some of it (gdb, valgrind, version control, and package management are all good things).
I think my biggest beef is that it seems to approach C as if it were a general purpose language, and I would argue that in the 21st century C is no longer a general purpose language. It is at its best now when used for systems programming, embedded programming, and making portable libraries (and the last is no longer the exclusive domain of C either, as besides C++ there are a couple of other languages that can export C ABI-compatible functions and have unobtrusive runtimes).
Also, Klemens seems to like autotools. I don't hate autotools as much as others do, as I remember a world before autotools was ubiquitous and it truly sucked. However, compilers no longer suck like they used to (if you are using C99 you have a lot more features than you did on some crappy PCC-derived, vendor-supplied compiler, and even if you stick to pure C99 the odds of your compiler not "just working" on that code are pretty slim), which means that for simple projects a makefile that respects CFLAGS, LDFLAGS, CC, &c. is sufficient; for larger programs, there are serious alternatives to autotools. Obviously none of them is as dominant as autotools, but cmake is certainly prevalent enough to be a good place to recommend people start.
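For what it's worth, here is a minimal sketch of the kind of makefile I mean (GNU make syntax; the program and file names are invented, and recipe lines must be indented with a tab):

    # Hypothetical two-file project; the user's CC, CFLAGS and LDFLAGS are respected.
    CC      ?= cc
    CFLAGS  ?= -std=c99 -O2 -Wall
    LDFLAGS ?=

    OBJS = main.o util.o

    hello: $(OBJS)
    	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c -o $@ $<

    .PHONY: clean
    clean:
    	rm -f hello $(OBJS)

Nothing fancy, but a distro packager or a user with an unusual toolchain can override everything from the command line or the environment, which is most of what people actually need from autotools on small projects.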
I'm going to date myself here, but my first book was "Learning to Program in C" by Plum. It was an incredible beginner's guide to C, but it predates C89, so it is a bit too dated at this point to recommend to people.
The same author also wrote "Reliable Data Structures in C" and I would love to update it for at least the features in C99 (C structs became a lot more useful at that point), as that book is a great read in that it walks you through the reasoning for doing things in a certain way, as well as providing the final result in code.
Why should we break compatibility with plain-text-based tools to achieve what Lamdu sets out to achieve? The AST could be an intermediate representation, but the source should be text. Parsing/generating source to and from an AST is some work, but it doesn't look like an unsolvable problem. We would get all the benefits without actually throwing away all the existing tools. Am I missing something?
In Lamdu, each subexpression has its own identity that survives as it is edited, moved, etc. This helps merge changes with far fewer conflicts.
Lamdu also allows attaching English names to identifiers, in addition to Chinese names, Hebrew names, etc. This approach can hopefully rid the world of duplicated libraries where the code is nearly the same but the names are in Chinese. This kind of rich metadata doesn't survive well when serializing to text.
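To give a rough sense of the idea (a simplified sketch just for illustration, not our actual representation; the field names are made up): a definition is keyed by a stable id, and the human-readable names in various languages are metadata attached to that id.

    /* Simplified illustration only: a definition keyed by a stable id
     * that is assigned once and never edited, while display names in
     * different languages are editable metadata attached to that id. */
    #include <stddef.h>

    typedef unsigned long long DefId;

    typedef struct {
        const char *lang;   /* e.g. "en", "zh", "he" */
        const char *name;   /* display name in that language */
    } LocalizedName;

    typedef struct {
        DefId          id;        /* stable identity; survives edits, moves, renames */
        LocalizedName *names;     /* metadata, not part of the identity              */
        size_t         n_names;
    } Definition;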
Lamdu will also maintain (and distribute) a code index alongside the code, such that the algorithmic complexity of refactoring large projects is not O(project size). Text is not a good data structure to perform these operations on.
Lamdu also maintains many invariants about the code (e.g., all type errors are localized and marked as such). What would Lamdu do if you tried to load text that doesn't maintain those invariants?
We also believe the entire textual tool chain could be so much better if it were rewritten to work with ASTs.
So for us the only value in textual integration is the "old world" of programming, and we're creating Lamdu largely because of our dissatisfaction with that world.
Some of these issues could be resolved while staying within plain text, at least to some extent. There are various attempts to encode metadata into plain-text source code. Loading partially incorrect code should be possible; that's what many IDEs do successfully.
Plain text is the most portable data format ever invented; any attempt to replace it should offer very convincing benefits.
Loading partially incorrect code would mean that the invariants are not, in fact, invariants.
Encoding the metadata in the text would lose the benefit of using text. At that point you may as well store the code as XML or, as we indeed do, in a key-value store.
We could layer our data on top of text, but this would lose the algorithmic benefits of a good data structure, and it wouldn't actually be reasonably editable with a simple text editor.
I like the colours and am curious whether it's a known colorscheme available for other environments (Vim, Emacs, terminals) or whether it was designed from scratch. Thanks for pointing to the config; it could be used to replicate the colorscheme for other editors.
It's not a known color scheme. For a while we had an ugly color theme, and then I sat with a designer friend and we fixed it. I agree that it's pretty now :) Apparently colors needed to have consistent saturation levels and stuff..
The quote is missing important context. Just want to clarify that the author is speaking about microservices specifically. The whole point of microservices is that one could be rewritten in a matter of days without involving substantial risk and expense.
The cost of managing microservices, and whether a multilingual environment pays off, is a separate question. Rewriting a now much better understood solution using an appropriate technology could have a dramatic effect.
XML is a standard format for markup (e.g. HTML). When XML is used where it shouldn't be (object serialization), it adds complexity. Here it signals that creating a report is as simple (or at least as standard) as creating HTML. What other format did you have in mind?
You can still write XML-compliant HTML; whether it's called XHTML or not is splitting hairs. You still get all the benefits of a very rich toolset for document preparation. I work on a site that makes heavy use of XSLT and works with XML-compliant HTML as input/output.
XML and XHTML are two different things. Related, but different. XHTML is no longer encouraged for website markup, that part is true too. That does not mean that XML is no longer encouraged.
IMO JSON is less readable for this kind of document. XML is more explicit (at the cost of being chatty), but it's actually good for markup. For example, nesting is easier to manage visually with XML, thanks to tooling and the explicitness of closing tags.
JSON is good for smaller objects you normally use during coding where you do not want the format to get in your way. Here I actually do want a very explicit format as it's the main thing.
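As a rough illustration of the nesting point, here is an invented report fragment, first as XML, where each closing tag names what it closes:

    <report>
      <section title="Sales">
        <item name="Widgets" total="1200"/>
      </section>
    </report>

and roughly the same structure as JSON, where every level ends in an anonymous brace:

    {
      "report": {
        "section": {
          "title": "Sales",
          "item": { "name": "Widgets", "total": 1200 }
        }
      }
    }

With two levels it hardly matters; with ten levels of real report structure, the named closing tags start to pay for their verbosity.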
> The United States makes an improper division between surveillance conducted on residents of the United States, and the surveillance that is conducted with almost no restraint upon the rest of the world.
> Treating two sets of innocent targets differently is already a violation of international human rights law.
It has been bothering me for quite a while. I'm not an American. So what? I have lesser privacy rights?! Am I a lesser human? Is spying okay as long as it's not spying on you?
The biggest problem from my point of view is that the US also happens to be the steward of the Internet. This public screwup represents the perfect opportunity for governments of the world to balkanize the Internet, further splintering it along geographic and commercial boundaries. Countries like China now have valid arguments in the eyes of the Chinese for blocking foreign websites and services. And more and more national firewalls will happen, firewalls that will crush freedom of speech and end free trade.
I'm not sure if the age of the free Internet we've been enjoying is coming to an end, but you can bet your ass that governments are trying to end it. And the US government doesn't even seem to comprehend how big their screwup is.
>"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." //
This does not put a geographic limit on it, so all citizens (I think "the people" here is clearly a reference, in context, to citizens) should be protected from having their data seized without a warrant. That's got to be hard, with USA citizens appearing in most populations and internet data not being clearly attributable to any particular citizen or other person [they'll need an "isCitizen" bit so that all data packets from USA citizens can be dropped before inspection!].
Moreover the 14th Amendment to the USA Constitution appears to extend protections to all "persons", viz:
>"No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws." //
If the internet is under USA jurisdiction, then persons there should be extended the "equal protection of the laws". One such law being that you need a warrant to search their "papers, and effects" [which is clearly intended to protect private correspondence].
Not sure how it works in the USA - to bypass this, could Federal operations be considered outwith the jurisdiction of any state? That would seem to require the people involved not to be citizens of the USA though, as they would otherwise fall under the requirement that their State of residence ensure the protection of the laws extends to people [everywhere].
"People" in the 5th Amendment means "people", not citizens; 5th Amendment protections are not limited to citizens.
But note that it has no explicit warrant requirement; just a reasonableness requirement and a limit on when warrants shall issue. It's read as implicitly requiring warrants in most cases for reasonableness, but there are plenty of exceptions.
For many (all?) situations the federal government considers itself the District of Columbia, which is not a state but a federal territory. The federal government is thus not a state but a supra-state entity that operates outside what states would consider acceptable. They routinely trample individual and states' rights. It is ironic, considering the history of why the U.S. was founded, that we have given in to a domestic version of the same tyranny.
This is a structural problem; even if the language of the Constitution confers protection on foreign citizens (which I believe it does, to some extent), there are only certain situations in which foreign citizens have standing in U.S. courts.
If they don't reside in the U.S. they basically have no venue to ask for relief. Even if the parent is right that it is a violation of international law, enforcement of international law is essentially voluntary.
All countries should create their own Facebooks and invite Americans to join theirs. It's sort of stupid to allow Facebook into any country unless all of its business operations and servers are based in said country, with proper controls. Even then, that does not preclude the US government from forcing Facebook to turn over data on non-US citizens.
Facebook is the #1 vector for the US government's indiscriminate spying on everybody else's citizens. Google comes a close second, but you don't generally give Google as many personal details as you give Facebook. At least not of your own free will.
And that's really the problem that France has, everyone uses American services so America can basically create a dossier on everyone. No one uses French services other than French citizens, and perhaps a few other souls.
Next week's EU data protection "safe harbour" decision may require exactly that: Facebook may no longer be allowed to export personal data from the EU.
Edit: data protection would also have a huge effect on the "peeple" app, discussion of which seems to be banned on HN.
Nobody should be allowed to export personal data from such jurisdictions except for the owners of that data themselves. A U.S.-ian should be allowed to decide to trust their personal data to a company inside E.U. jurisdiction but that company shouldn't be allowed to trade that data anywhere else (especially, back to the U.S.). Of course, that's a complete pipe dream, and I'm just hallucinating.
I imagine that would only apply to sites which store PII[1]. The database should be located under the same jurisdiction as the person whose data it is (which doesn't mean one database per country, since some will have treaties allowing export to certain places, the EU for example), and the data should not be transferred through other jurisdictions.
Well, pretty much any website stores an email address, a name, and a password. Every startup would need to look at all the bilateral treaties between every major country in the world. This is simply impractical.
Already in proposal stage in many places. My current home, Thailand, just announced plans for their own version. I am not amused.
However, hilarity has already ensued. The gov here is so incompetent that they couldn't even make an announcement without having a bunch of gov websites taken down by a manual DDoS attack yesterday [1]. A few thousand people coordinated via social media to repeatedly visit the gov's ICT website, which brought it to a standstill. Yet the gov thinks they can manage a single internet gateway to facilitate surveillance and it won't be ruinous.
The alternative is that Microsoft, Apple, Facebook, etc build their own Great Firewalls and become countries. I'd prefer that but many people have a strong sentimental attachment to countries based on geography.
Better solution: don't allow governments to spy in the first place.
Facebook and Google don't want to give up data, they are effectively being coerced into handing over data.
> Facebook and Google don't want to give up data, they are effectively being coerced into handing over data.
How do you know this? They get paid for the data and selling it to the government guarantees that they will be allowed to keep collecting it in the future. It's clearly a mutually beneficial agreement.
You misunderstand the word 'choice'. You have a choice to go on sites with Google Analytics; you are not being coerced into using their products. You have a choice not to give out your number to those who have Android phones, and you have a choice not to use Facebook.
You have a lot of choice, you are just choosing not to use the alternatives.
Google and Apple, on the other hand, are being threatened with force to hand over data; they do not have a choice.
On your last point, you want the same people who demand and coerce data from Google and Apple to also be the people who make the rules for Google and Apple?
I just read this article for the first time. And even though it raised some interesting points that I completely agree with, he builds his argument based on some inaccuracies. Most importantly, his writing gives the impression that autoconf was written by some dot-com wunderkinder.
Autoconf's Wikipedia page says it first appeared in 1991, very likely to solve configure problems of the mid-to-late 1980s. That's about as far from dot-com as you can get.
He also bashes the m4 macro language. I have never learnt the language, but it's possible its obscurity is due to not a lot of people spending time to learn it. And should they? I can't say for sure, but I do know the motivation for why m4 was created. It was meant to be a feature-rich and consistent alternative to the C preprocessor, which is full of warts and is not Turing complete (which m4 is). There are C programs out there written without a single line of '#' magic, using the m4 preprocessor instead. You could say m4 syntax is ugly, but you cannot easily defend that there is no need for a Turing-complete, preferably general purpose, macro language. C++ templates try to provide another alternative, and, unsurprisingly, are known to be among the most syntactically complex aspects of the language.
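As a quick reminder of the kind of wart I mean (a standard example, nothing specific to m4): a function-like macro expands its arguments textually, so any argument with a side effect gets evaluated more than once.

    /* Classic C preprocessor wart: MAX expands both arguments textually,
       so an argument with a side effect is evaluated twice. */
    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main(void) {
        int i = 1;
        int m = MAX(i++, 0);               /* expands to ((i++) > (0) ? (i++) : (0)) */
        printf("m = %d, i = %d\n", m, i);  /* prints m = 2, i = 3 */
        return 0;
    }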
And today we don't have Cray and DEC and all those obscure architectures, but we do have ARM, and 32/64-bit, and GPUs, and so on. So architecture proliferation doesn't look like it's dying anytime soon. And, very loosely speaking, we have LLVM as an autoconf/configure alternative (an attempt to let one source code work with multiple architectures), which is by many standards an order of magnitude more complex than the former.
To further the interests of accuracy, Poul-Henning Kamp's bashing of m4 is only with regard to its use to implement autoconf, though my guess is that he doesn't have any use for it in any other circumstance - and neither would the rest of the world, were it not for the unfortunate accident of its being chosen to implement autoconf.
I imagine it would be fairly easy to "defend that there is no need for a Turing-complete, preferably general purpose, macro language" empirically, on the basis of the number of things that get done without one, and more formally on the basis of Turing equivalence.
Need? No. Could it be improved on? I am certain it could (and Rust looks like an interesting way to do so), but that improvement would probably not be via a Turing-equivalent macro language that could, to an arbitrary degree, subvert the semantics of the language in which the programs are actually written. The trend has been to move away from macro-processing code, rather than towards strengthening the macro processor's power.
One might argue that C++'s template sub/co-language is a counter-example, but, putting aside the question of whether it is actually progress (on balance, I think it is), it has a syntax which discourages its use in a completely arbitrary way.
Sorry, I was being cryptic, my fault. I was generalizing your claim about macro languages to other languages.
I mean, doesn't your original argument imply that there is no need for a systems language besides C empirically on the basis of the number of things that get done without one, and more formally on the basis of Turing equivalence?
You are right about the timing. I believe the fragmentation started at Berkeley, and was greatly expanded by the mini-computer manufacturers. Ironically, far from this being a consequence of AT&T attempting to commercialize Unix, it followed from AT&T not being allowed to do so (there is another irony in Kamp being a BSD user.)
And while I thoroughly dislike having anything to do with autoconf and its relatives, I have to admit the need for something like it, on account of decisions beyond the control of the FOSS community and the contingencies of history. Even the choice of m4 may have seemed more reasonable at the time, given fewer alternatives (though there was Perl, and perhaps TCL, to choose from.)
While Kamp's description of the current situation seems accurate, his attribution of blame does not, and his lament that things could have been so much better is influenced by his somewhat inaccurate hindsight.