Liquid Types are more limited than full dependent type systems like those of Lean, Rocq, Agda, or Idris. In Liquid Types you can refine base types (Int, Bool), but you cannot refine every type. For instance, you cannot attach a refinement to a function type like {a:Int | a > 0} -> {x:Int | x > a} itself: functions are types, but they are not refinable.
These restrictions make it possible to hand the subtyping check to an SMT solver and get a result back in a reasonable amount of time.
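To make that concrete, here's a toy stand-in (in plain Python, no real solver) for the query a liquid type checker would send to SMT: checking {v:Int | v > 2} <: {v:Int | v > 0} amounts to asking whether the implication v > 2 ==> v > 0 is valid. A real checker hands that formula to an SMT solver; this sketch just tests a finite range, and the predicate/function names are mine, not from any actual implementation.

```python
# Toy stand-in for the SMT query behind a liquid subtyping check.
# {v:Int | p(v)} <: {v:Int | q(v)} holds iff p(v) implies q(v) for
# every integer v. A real checker discharges this with an SMT solver;
# here we brute-force a finite domain just to illustrate the idea.

def implies(p, q):
    return (not p) or q

def subtype_holds(sub_pred, super_pred, domain=range(-1000, 1000)):
    return all(implies(sub_pred(v), super_pred(v)) for v in domain)

# {v | v > 2} <: {v | v > 0}: every v > 2 is also > 0, so this holds.
assert subtype_holds(lambda v: v > 2, lambda v: v > 0)

# {v | v > 0} <: {v | v > 2} fails: v = 1 is a counterexample.
assert not subtype_holds(lambda v: v > 0, lambda v: v > 2)
```

Because the refinements are restricted to decidable logics over base values, the solver can answer queries like this quickly, which is what keeps type checking fast.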
Jobs would have kept the button on the bottom, since pressing it isn't the proper way to use a computer.
Instead, he would have put motion/light sensors on the screen, so it would wake up automatically when you sit down in front of it. Macs don't shut down; they just go to sleep and wake up when you need them.
I think the (hidden) reasoning is that it is really easy to get speedups over slow interpreters. Getting speedups in already highly optimised runtimes is quite hard, because it comes down to micro-optimisations.
That's where the comparison to Python comes from: a speedup over a slow interpreter is not very _relevant_. If your interpreter had the same level of optimisation as CPython (or V8, or the JVM), even a small fraction of what you show would be impressive.
Having said this, the work your team did is a really challenging engineering feat (and with a lot more potential). But I do not believe the current speedups will hold once the interpreter/compiler reaches the level of optimisation that exists for other languages. And while you do not claim that, people expect it.
Apple crash reports will measure this (if you compare old reports to new ones), but the data isn't public. I assume some third-party SDKs will get updates to fix their bugs (and their changelogs might reveal it), but otherwise it is hard for a normal user to see which issue caused an app to crash.
Couldn't you use a regular DNN with a one-hot encoding of all the movies a user has seen (and the corresponding genres)?
And boosting can give more weight to certain movies or genres.
It appears the difference lies in the depth of knowledge. A one-hot encoding, followed by an embedding, followed by some fully connected layers, only takes the actual titles into account.
A GNN can take into account everything you know about the movies, including incomplete data. So a GNN will see that user A likes everything with actor X, user B really wants the genre to be Y, user C likes actor Z but only before 2000, and combinations thereof. That's why the GNN can do better; hell, it can even predict which movie properties would do well, which would be tough to get out of the embedding network.
You could encode all of this data in your embedding, but the GNN will be a much smaller and much more flexible network.
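A minimal sketch of the difference, using one round of mean-aggregation message passing on a made-up user-movie graph (all names and features here are hypothetical, and this is a hand-rolled toy, not any particular GNN library): the user representation inherits movie-level features like "actor X", which a one-hot over titles never exposes.

```python
import numpy as np

# Hypothetical movie features: [has_actor_X, is_genre_Y, pre_2000].
movie_feats = np.array([
    [1, 0, 1],   # movie 0: actor X, pre-2000
    [1, 0, 0],   # movie 1: actor X, post-2000
    [0, 1, 0],   # movie 2: genre Y
], dtype=float)

# Bipartite watch graph: adj[u, m] = 1 if user u watched movie m.
adj = np.array([
    [1, 1, 0],   # user A: watches everything with actor X
    [0, 0, 1],   # user B: wants genre Y
], dtype=float)

# One message-passing step with mean aggregation: each user's
# representation is the average of the feature vectors of the movies
# they watched. User A ends up with "actor X = 1" even though titles
# 0 and 1 look completely unrelated under a one-hot title encoding.
deg = adj.sum(axis=1, keepdims=True)
user_repr = (adj @ movie_feats) / deg
print(user_repr)
# user A -> [1.0, 0.0, 0.5]  (actor X, mixed years)
# user B -> [0.0, 1.0, 0.0]  (genre Y)
```

A real GNN stacks several such steps with learned weight matrices and nonlinearities, so it can also pick up combinations like "actor Z, but only pre-2000".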