It's widely agreed that formal verification does not boost software productivity, in the sense that it doesn't speed up development of "software that compiles and runs and we don't care if it's correct".
The point of formal verification is to ensure that the software meets certain requirements with near certainty (subject to gamma radiation, tool failure, etc.). If mistakes aren't important, formal verification is a terrible tool. If mistakes matter, then formal verification may be what you need.
What this and other articles show is that doing formal verification by hand is completely impractical. For formal verification to be practical at all, it must be supported by tools that can automate a great deal of it.
The need for automation isn't new in computing. Practically no one writes machine code directly; we write in higher-level languages (or, rarely, assembly) and use automated tools to generate the final code. It's been harder to create practical tools for formal verification, but clearly automation is a minimum requirement.
Automation of Hoare logic is quite good these days. Dafny, from MS Research (https://dafny.org), is probably the friendliest formal language of any kind. It's built around Hoare logic, using dynamic frames rather than separation logic to reason about the heap. The barrier to entry is low: a seasoned imperative or functional programmer can get going with Dafny in just a few days. Dafny has been used to verify large systems, including many components of AWS.
I am hoping that LLMs make more advanced languages, such as Liquid Haskell or Agda, less tedious to use. Ideally, lots of code should be autocompleted once a human provides a type signature. The advantage of formal verification is that we can be sure the generated code is correct.
It's hard to tell what you mean by "type signature", but I think you mean something like a type signature in Java, Go, or C.
That isn't what people mean when they talk about formal verification with a type system. They mean much richer type systems whose power and expressivity are equivalent to formal logic.
This intro does a good job of introducing the topic. Unfortunately, the correspondence between advanced type systems and logic is itself fairly abstract and uses lots of concepts in non-intuitive ways. This means you can easily wind up doing lots of proofs about number theory in a tool like Lean4 without ever really seeing the corresponding types (this is what happened to me).
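For what it's worth, the types are there even when the tooling hides them. Here is a minimal Lean 4 sketch (the theorem name is mine) of the correspondence: the proof is an ordinary term, and its type is the proposition it proves.

    -- A tiny number-theory fact; the proof `add_comm_nat` is just a term,
    -- and its *type* is the proposition it proves (Curry-Howard).
    theorem add_comm_nat (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- #check makes the hidden type visible:
    -- the reported type is the proposition a + b = b + a.
    #check add_comm_nat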
With dependent types, if you really want to. (Or even with conventional generics or typeclasses: the difference between a ring and a monoid will show up in the signatures, and that's enough for many cases.)
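A rough Haskell sketch of that parenthetical (both function names are mine): the class constraint in the signature is exactly where the algebraic structure shows.

    -- With only a Monoid constraint we get an associative (<>) and mempty...
    combineAll :: Monoid a => [a] -> a
    combineAll = foldr (<>) mempty

    -- ...while Num additionally hands us ring-like (+) and (*).
    sumOfSquares :: Num a => [a] -> a
    sumOfSquares xs = sum [x * x | x <- xs]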
If you ignore syntax and pretend that the following is a snippet of Java code, you can declare that a variable x always holds an int, like so:
val x: int = y + 5
Here x is the variable being defined, it is declared to hold values of type int, and its initial value is given by the term y + 5.
In many mainstream languages, types and terms live in distinct universes. One starts by asking whether types and terms are all that different. The first step in this direction of inquiry is what are called refinement types. With our imaginary syntax, you can write something like:
val x: { int | _ >= 0 } = y + 5
Once again, x is the variable being defined, it is declared to hold values of type int at all relevant instants in all executions, and its initial value is given by the term y + 5. But we additionally promise that x will always hold a non-negative value: _ >= 0. For this to typecheck, the typechecker must somehow also confirm that y + 5 >= 0.
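For comparison, here is roughly how that reads in a real refinement-type tool, Liquid Haskell, as a hedged sketch (the alias Nonneg and the function name shifted are mine; the {-@ ... @-} blocks are refinement annotations, which plain GHC treats as comments).

    {-@ type Nonneg = {v:Int | v >= 0} @-}

    -- The result refinement is the promise "_ >= 0"; to discharge it the
    -- checker must prove y + 5 >= 0, which follows here because the
    -- argument is itself declared non-negative.
    {-@ shifted :: Nonneg -> Nonneg @-}
    shifted :: Int -> Int
    shifted y = y + 5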
But anyway, we have added terms to the previously boring world of types. This allows you to do many things, like so:
val x: int = ...
val y: int = ...
val z: { int | _ >= x && _ >= y } = if x >= y then x else y
We not only declare that z is an integer, but also that it always holds a value that is at least as large as both x and y.
You asked for the type of a function that multiplies two numbers. The type would look weird, so let me show you an imaginary example of the type of a function that computes the maximum:
val f : (x : int) -> (y : int) -> { int | _ >= x && _ >= y } = ...
This doesn't really get you to the maximum, because f might be computing max(x, y) + 5, but it does show the idea.
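In Liquid Haskell the same signature reads roughly as below (again a sketch; the name maxInt is mine). Strengthening the refinement to also say v == x || v == y is what would pin the function down to the actual maximum.

    -- The postcondition only says the result is at least as large as both
    -- arguments, so a body computing max x y + 5 would also satisfy it.
    {-@ maxInt :: x:Int -> y:Int -> {v:Int | v >= x && v >= y} @-}
    maxInt :: Int -> Int -> Int
    maxInt x y = if x >= y then x else y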
The final step in this direction is what are called full-blown dependent types, where the line between types and terms is completely erased.
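Haskell stops short of that, but with GADTs and DataKinds you can already see term-like data leaking into types. A hedged sketch (all names are mine) of a length-indexed vector; in a fully dependently typed language like Agda or Lean, the length index would just be an ordinary natural number rather than a promoted copy.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Type-level naturals: DataKinds promotes Z and S to the type level.
    data Nat = Z | S Nat

    -- A vector that carries its length in its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- No runtime check and no partiality: the empty case is ruled out
    -- by the type itself.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x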