Hacker News | indyjo's comments

Or why robots are playing DOOM now. What programming for the Atari ST in 2025 has taught me.


Gotta love how inconsistent this is. The classic username/password dialog box, traditionally the first thing you see when starting your computer, would already have to treat the Return in a nonstandard way.


So you drag UI elements onto an empty sheet, fight with the grid snap (because it doesn't match the size of your UI elements) and are then supposed to enter raw JavaScript, without any code completion, visual programming, API help or AI support? And that's it?


As much as the concept blew me away when I first heard of it, I can't shake the feeling that the Curry-Howard correspondence is somehow mis-marketed as something that would immediately cater to programmers. The idea of encoding propositions as types sounds enticing to programmers, because there are indeed a lot of cases where something needs to be conveyed that can't be conveyed using the type systems (or other features) of common programming languages.

Examples include the famous "the caller of this function guarantees that the argument is a non-empty list" but also "the caller guarantees that the argument object has been properly locked against concurrent access before using the function".
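
For the first of those, a minimal Scala sketch (purely illustrative, not from any particular library) of how the guarantee can live in the type itself rather than in documentation:

    // Illustrative: a list that cannot be empty by construction, so
    // "the argument is a non-empty list" becomes a compile-time fact.
    final case class NonEmptyList[A](head: A, tail: List[A]) {
      def toList: List[A] = head :: tail
    }

    // A function that needs a non-empty argument can now say so in its signature.
    def maximum(xs: NonEmptyList[Int]): Int =
      xs.tail.foldLeft(xs.head)((a, b) => math.max(a, b))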

However, in my experience, the community is more interested in mathematics than in programming, and I don't know if anybody is really working to provide propositions-as-types to mainstream programmers. This is certainly because it is hard enough to prove soundness of a strict functional programming language such as Agda or Rocq, let alone of anything imperative, stateful, "real-world", non-strict, non-pure, concurrent, ill-defined, you name it.

So, for me the promise of "showing programmers that what they do is actually mathematics" is not really kept as long as the definition of "programmer" is so narrow that it excludes the vast majority of people who define themselves as programmers.

What I'm trying to say: I wish there were more efforts to bring more powerful formal methods, especially as powerful as dependent types, to existing mainstream programming languages where they could be useful in an industrial context. Or at least try to come up with new programming languages that are more pragmatic and do not force the programmer into some narrow paradigm.

Am I wrong? I hope so :)


Programmers regularly do this stuff under various names/phrases. "Make illegal states unrepresentable", "parse, don't validate", "resource acquisition is initialization", etc. It's all about encoding the fact that your structure isn't just data, but also represents some proof of provenance that guarantees some property. Scala is an industrial language that gives more tools than you might usually find for this while also letting you be pragmatic and integrate with Java's massive ecosystem (though you quickly learn you'd rather not because Java code tends to not do a good job of leveraging types in a sane way).
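
As a tiny illustration of "parse, don't validate" in Scala (names invented for the example): the only way to obtain an EmailAddress is through parse, so merely holding one is evidence the check already happened.

    final class EmailAddress private (val value: String)

    object EmailAddress {
      // The crude contains("@") check stands in for real validation.
      def parse(raw: String): Either[String, EmailAddress] =
        if (raw.contains("@")) Right(new EmailAddress(raw))
        else Left(s"not an email address: $raw")
    }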


No, you're not wrong. I mean, in some circles, it's a battle to get programmers to use types at all. And, while not every proposition can currently be usefully encoded in a type, every type currently encodes a proposition. So if we can get the people who don't use types to start using them, that's probably the lowest-hanging fruit.

From there, every step to improve the expressiveness of types allows you to encode more within the type system. For example, one could think about encoding that non-empty requirement in C++ or Java collection types. It would be nontrivial - a number of things would need adjusting - but you could think about doing it. (Or rather, it would be nontrivial to do with static types. You could more easily make it throw if the condition was not satisfied.)
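
A sketch of that runtime fallback (Scala here just for consistency with the rest of the thread): the requirement is checked when the value is built, so violations show up as exceptions at run time rather than as compile errors.

    // Illustrative: the "throw if the condition isn't satisfied" approach.
    def requireNonEmpty[A](xs: List[A]): List[A] = {
      require(xs.nonEmpty, "expected a non-empty list")
      xs
    }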

Your "properly locked" example is much harder. It would require telling the type system what the definition of "properly locked" is. Even at that, I can't see any way to do it as a static type. And that's a bummer, because I far prefer static types. I prefer my proofs to fail at compile time.

But my main point is, "proving" is not binary. Every incremental advance in what types can check proves more things, and therefore leaves the programmers having to prove less in their heads.


And it is continually improving. Rust's borrow checker is an example of this.

But as far as jumping into the deep end of dependent types, that's a whole other ball of wax. Like, imagine (or try!) writing leetcode solutions in Lean or Idris, with a type that proves they are correct and (bonus points) run in the specified time complexity. Even defining the hypothesis is nontrivial. It's all doable, but takes orders of magnitude longer. But with dependent types you have to do it, because the next function you call may require a proof that the leet function returns the thing it's supposed to.
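
For a feel of what that means in practice, a tiny Lean 4 sketch (names are mine and the example is deliberately trivial): the type itself carries the specification, and callers have to discharge proof obligations at the call site.

    -- The return type promises the result equals n + 1; the proof travels with the value.
    def addOne (n : Nat) : { m : Nat // m = n + 1 } :=
      ⟨n + 1, rfl⟩

    -- A caller can be forced to supply evidence up front: the second argument is a proof.
    def pred' (n : Nat) (_h : n ≠ 0) : Nat := n - 1

    #eval pred' 3 (by decide)  -- the proof obligation is discharged at the call site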

That's just for algorithmic leet problems. Now imagine having to write proofs in a complex multithreaded system that your code is not accessing out-of-bounds arrays or leaking memory, and integrating libraries that each have slightly different semantics for doing the same, or that use unsigned types that make all the proofs incompatible, etc. At that point, you have to basically give up on your borrow checker and fall back to using runtime checks anyway. And even if you did get your system into a fully proved state, that only applies to that one binary; it makes no guarantees about distributed interactions, rollouts and rollbacks, or any of the other things that are the more frequent cause of bugs in production systems. In fact, it may encourage more "distributed spaghetti" just to work around having to prove everything.

There's an analogy with how checked exceptions work in Java: a cool thought, but after a while they mostly get in the way of what you're really trying to do, or break things when new types of exceptions get added to a function, so everyone ends up just wrapping them in unchecked exceptions anyway. This is what would end up happening with full dependent types too, except it would pervade the entire type system and every function you write. The eventual outcome would likely be that everyone just works around them (every single function, even fairly trivial ones like divide, would return an Option that the caller would have to handle or bubble up), and the actual code would be even less type safe than it would be with a simpler type system.
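
A small Scala sketch (illustrative) of what that "everything returns an Option" outcome looks like: even a trivial operation forces every caller to handle or propagate the absent case.

    def safeDiv(a: Int, b: Int): Option[Int] =
      if (b == 0) None else Some(a / b)

    // The possibility of failure bubbles up through every signature that touches it.
    def average(sum: Int, count: Int): Option[Int] =
      safeDiv(sum, count)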

So, ultimately, the way Rust is going, where some key things like borrow checking are built into the language, seems to be the better approach.


I think Idris (https://www.idris-lang.org/) is primarily focused on using dependent types for programming, and even Lean has been expanding in this direction (cf. the Functional Programming in Lean book, https://lean-lang.org/functional_programming_in_lean/).


C-H isn't useful to programmers at all. The programs that represent proofs end up being useless programs, most of the time, and the proofs represented by real-world programs end up being useless proofs, most of the time.

Most programs deal with several objects of the same type - your program probably contains more than one integer (shocking, right?). But C-H maps each type to a single proposition, so every value of that type stands for a proof of the same proposition. A function that calculates the greatest common factor of two numbers proves that (A∧A)→A, or "(A and A) implies A", which is... uninteresting.

In the reverse direction, the program generated by (A∧A)→A is either the first or second entry selector from a tuple, and doesn't calculate the greatest common factor of two integers. (A∨A)→A is slightly more interesting, taking Either<int, int> and returning int.

It's true that under C-H, logical disjunction gives you Either types, conjunction gives you tuple types, and implication gives you function types. Kinda fascinating, enough to write a blog post about, but still without any practical purpose.
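
To make that correspondence concrete, a small Scala sketch (illustrative, with A as a type parameter):

    def proj[A](p: (A, A)): A = p._1                              // (A ∧ A) → A: a tuple projection
    def merge[A](e: Either[A, A]): A = e.fold(identity, identity) // (A ∨ A) → A

    // gcd also has the shape (A ∧ A) → A (with A = Int), so as a "proof" it says
    // nothing more than the projection above does.
    def gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)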

Practical types with additional requirements (such as non-empty lists) have nothing to do with C-H and could exist even if C-H were false. Same for dependent types. As I said, C-H is completely useless in actual programming.

I do wish we had more formal methods though. Formal methods of programming have nothing to do with C-H either. Static typing has nothing to do with C-H. The Rust borrow checker has nothing to do with C-H. Complete formal verification (as in seL4) has nothing to do with C-H.


Programmers use it all the time!

They turn diagrams representing the logic of the business domain into expressions in their type system of choice.

I think it’s a failure of pedagogy to focus on the abstract “prove stuff” sense, rather than the applied “domain diagram logic to types” at the core of what SDEs actually do. But turning a business domain diagram into a systems diagram, and then into code, is translating one category into another via embedding before translating into a type theory.
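
A small Scala sketch of that translation (domain invented for the example): the diagram rule "an order is either a quote or it has been paid" becomes a sealed hierarchy, and code on the domain must handle exactly those cases.

    sealed trait Order
    final case class Quote(items: List[String]) extends Order
    final case class Paid(items: List[String], receiptId: String) extends Order

    // Exhaustiveness checking enforces the shape of the domain diagram.
    def describe(o: Order): String = o match {
      case Quote(items)       => s"quote for ${items.size} item(s)"
      case Paid(_, receiptId) => s"paid, receipt $receiptId"
    }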

My opinion is that you need to start with that core and explain how the other features make that better, e.g., how do polynomial monads address the weirdness between logging and errors (and errors in logging errors, etc.)?


Would embedding code (which is executed by some other runtime, like SQL, shaders, compute kernels etc.) also be considered "biphasic" or "multi-stage" programming?


While POV-Ray was a cool project, let's not forget how far we have come with Blender and what a great success for the Free Software movement it represents.


What I'm kind of missing with Blender and the like is the procedural 'magic' and surprise of POV-Ray. How just typing a few lines of 'code' into a text file could produce wondrous and mind-blowing images, and then just changing a couple of numbers would give you an entirely different and unexpected result. Blender makes it infinitely easier to get exactly the result you want, but sometimes that's not what you actually want.


I was one of the original donors in June 2002, iirc, for Ton to buy Blender and start the Blender Foundation. I used birthday money and sent, I think, $50; that's why I remember it to this day. I still use Blender, but I was using it back then to generate 3D reliefs to carve on my self-built 4x8 foot CNC router table. I wrote a script based on a shape-from-shading algorithm originally developed by NASA to get more out of all of the single-lens B&W satellite photos they had. My company was The Wooden Image. To the other poster: you could do procedural stuff in Blender early on using Python scripting, although I was using POV-Ray very early on.


POV-Ray has been free software too for the last few versions.

