The C47/R47 appears to use the 4-level XYZT stack design by default, but it has an option to use an 8-level stack (XYZTABCD). I really like the unlimited stack option that can be enabled in Free42, but 8 levels might be enough to keep things from feeling cramped in practice.
The former merely exposes a `BlogPostRepository` class. The latter requires some mechanism for creating a generic object of a concrete type, which is a much bigger change to the implementation. Does each parametrized generic type get its own implementation? Or does each object carry enough RTTI to dispatch dynamically? And what are the implications for module API data structures? Etc. In other words, this limitation avoids tremendously disruptive implementation impacts. Not pretty, but we're talking PHP here anyway. ;-)
The addition of infinite pages made my RM2 unusable. It's far too easy to accidentally scroll, and hugely disruptive. I checked for tuning improvements for a couple of software updates, then set it aside permanently. That such a "simple" change could doom the device made me decide to go back to real paper, in all likelihood forever.
Hard disagree. Yacc has unnecessary footguns, in particular the fallout from using LALR(1), but more modern parser generators like bison provide LR(1) and IELR(1). Hand-rolled recursive descent parsers as well as parser combinators can easily obscure implicit resolution of grammar ambiguities. A good LR(1) parser generator enables a level of grammar consistency that is very difficult to achieve otherwise.
> Hand-rolled recursive descent parsers as well as parser combinators can easily obscure implicit resolution of grammar ambiguities.
Could you give a concrete, real-life example of this? I have written many recursive-descent parsers and have never run into this problem (the Apache Jackrabbit Oak SQL and XPath parsers, the H2 database engine, the PointBase Micro database engine, HypersonicSQL, NewSQL, regex parsers, GraphQL parsers, and currently the Bau programming language).
I have often heard that Bison / Yacc / ANTLR etc. are "superior", but mostly from people who didn't actually have to write and maintain production-quality parsers. I do have experience with the above parser generators, e.g. from university projects and Apache Jackrabbit (2.x). I remember that in each case the parser generators had some "limitations" that caused problems down the line. Then I had to spend more time trying to work around the parser generator's limitations than actually doing productive work.
This may sound harsh, but, well, that's my experience... I would love to hear from people who had a different experience on non-trivial projects...
If you start with an unambiguous grammar then you aren't going to introduce ambiguities by implementing it with a recursive descent parser.
If you are developing a new grammar it is quite easy to accidentally create ambiguities and a recursive descent parser won't highlight them. This becomes painful when you try to evolve the grammar.
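To make that concrete with a toy example of my own (not something from this thread): the classic dangling-else grammar is ambiguous, but a hand-rolled recursive descent parser resolves the ambiguity silently by greedily consuming the "else", whereas an LR(1) generator fed the same grammar would report a shift/reduce conflict. A minimal Python sketch:

    # Toy grammar (ambiguous):
    #   stmt := "if" COND "then" stmt ("else" stmt)?  |  "act"
    # In "if c1 then if c2 then act else act" the else could attach to
    # either if. This parser binds it to the inner if purely because the
    # innermost call greedily consumes the "else" first.
    def parse_stmt(toks, i=0):
        if toks[i] == "if":
            cond, i = toks[i + 1], i + 2           # consume "if" COND
            assert toks[i] == "then"; i += 1       # consume "then"
            then_branch, i = parse_stmt(toks, i)
            else_branch = None
            if i < len(toks) and toks[i] == "else":    # greedy: inner if wins
                else_branch, i = parse_stmt(toks, i + 1)
            return ("if", cond, then_branch, else_branch), i
        assert toks[i] == "act"
        return "act", i + 1

    tree, _ = parse_stmt("if c1 then if c2 then act else act".split())
    print(tree)   # ('if', 'c1', ('if', 'c2', 'act', 'act'), None)

Nothing in the hand-written version ever tells you there were two possible parse trees; the resolution is just whatever the code happens to do.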
The original comment says that using yacc/bison is "fundamentally misguided." But parser generators make it easy to add a correct parser to your project. It's obviously not the only way. Hand-rolling has a bunch of pitfalls and easily leads to apparently correct behavior that does weird things on untested input. Your comment, then, is a bit like saying: I've never had memory corruption in C, so Rust/Java/etc. is for toy projects only.
I argued that this is not the case in reality and asked for concrete examples... So again, I ask for a concrete example... For memory corruption, there are plenty of examples.
For parsing, I know one example that led to problems. Interestingly, it involved a state machine that was then modified manually, and the result was broken. Here I argue that a handwritten parser, instead of a state machine that is then manually modified, would not have had this problem. There was also no randomized or fuzz testing, which is a problem in itself. The issue is still open: https://issues.apache.org/jira/browse/OAK-5367
There's no reason for concrete examples, because the point was about parser generators being fundamentally misguided, not about problems with individual parser generators or the nice things you can do in a hand-rolled one. But to accommodate you, ANTLR gives one on its home page: "... At Twitter, we use it exclusively for query parsing in Twitter search... Samuel Luckenbill, Senior Manager of Search Infrastructure, Twitter, inc."
Also, regexps are used very often in production, and a regexp engine is definitely a parser generator of sorts.
The memory corruption example was an analogy, but to spell it out: it's easier and faster to write a correct parser with flex/bison than by hand, especially for more complex languages. Parser generators have their uses and are not fundamentally misguided. That you might want to write your own parser in some cases does not diminish that (nor vice versa).
Same. LR(k) and LL(k) are readable and completely unambiguous, in contrast to PEG, where ambiguity is resolved ad hoc: PEG doesn't have a single definition, so implementations may differ, and the original PEG uses rule order and backtracking to resolve ambiguity, which may lead to different resolutions in different contexts. Ambiguity does not leap out at the programmer.
OTOH, an LL(1) grammar can be used to generate a top-down/recursive descent parser, and will always be correct.
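As a toy illustration of the PEG point (my own example, not from the thread): with ordered choice, a rule like A <- "a" / "ab" silently makes the second alternative unreachable, because the first alternative commits before the second is ever tried. A minimal Python sketch of that behavior:

    # PEG-style ordered choice for the rule  A <- "a" / "ab"
    def match_a(s, i):
        for alt in ("a", "ab"):            # ordered: the first match commits
            if s.startswith(alt, i):
                return i + len(alt)
        return None

    def parse(s):
        end = match_a(s, 0)
        return end == len(s)               # require the whole input to match

    print(parse("a"))    # True
    print(parse("ab"))   # False -- the "ab" alternative is never reachable

Given the equivalent CFG, an LR(1) generator would happily accept both "a" and "ab"; the PEG version narrows the language without any warning.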
A large part of this consistency comes from not making executive decisions about parsing ambiguities. The difference between "the language is implicitly defined by what the parser does" and "the grammar for the language has been refined one failed test at a time" is large and practically important.
This is an interesting read. As a software engineer who accidentally discovered the depth of Scrabble in middle age, I fell down a slightly different Scrabble rabbit hole. The probabilities behind the imperfect bag/rack state knowledge fascinate me, and I spent most of a year writing a strong Scrabble engine to explore the problem. I've put that aside for now, but it remains a background pressure on the computing problems I tackle.
I've spent a lot of time over the years using bit shifting to multiply/divide by powers of 2 in low-level code. At some point I grokked the connection with (base 2) logarithms and found myself using them as a mental estimation tool with surprising effectiveness.
Many of us programmers have powers of 2 memorized through at least 2^20 (and 2^10 ~= 1000 is close enough for decomposing and converting larger values), which makes rough conversion to/from the log domain trivial. The trick in estimation is to round to the nearest whole number when converting to the log domain, and to try to alternate rounding direction for inputs that are not close to the upper/lower extrema. Keep a running total of the log-domain value as a whole number, and take the antilog as the final step. Given these simple rules, most estimates come within a factor of 2 of the true value, with minimal cognitive load. All you need to remember is the current sum and which direction to round the next input.
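For anyone who wants to play with the idea, here is a quick Python sketch of the technique as I understand it (simplified to plain nearest-integer rounding, without the alternate-the-rounding-direction refinement):

    import math

    def log2_estimate(factors, divisors=()):
        # Round each log2 to the nearest whole number, keep a running
        # integer total, and take the antilog as the final step.
        total = 0
        for x in factors:
            total += round(math.log2(x))
        for x in divisors:
            total -= round(math.log2(x))
        return 2 ** total

    # Example: 6000 * 300 / 50
    #   log2(6000) ~ 12.55 -> 13, log2(300) ~ 8.23 -> 8, log2(50) ~ 5.64 -> 6
    #   estimate = 2**(13 + 8 - 6) = 2**15 = 32768; exact value is 36000
    print(log2_estimate([6000, 300], [50]))

Doing the same thing in your head just means replacing math.log2 with the memorized powers of 2.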
As an Idaho resident, I dearly hope not. And I'd be really surprised if such legislation did pass. Toxic extreme-right rhetoric appeals to a nontrivial proportion of Idahoans, and some politicians are pandering to the fringe of society that wants to see everything burn. But my rough estimate is that only 20-25% of Idahoans think that way.
As a fellow Idaho resident, I don't know many Idahoans that received the vaccine. But I don't know any that would want to prevent you from getting it, either. You should be free to do what you think is best for yourself, medically. And everyone I know thinks the same.
There are often situations in which the rational approach is to upstream enhancements, if only because it reduces ongoing maintenance of a derivative product. This is especially true of foundational infrastructure. BSD/MIT-like licensing works well for such software (perhaps less well in general).