I am a heavy user of Brave and I would love for you to expand on what you mean.
For those curious, here is the open-source repo of all the Chromium changes Brave applies. I have not read every commit myself, so flagging anything notable would be appreciated: https://github.com/brave/brave-core
They've made good changes after community backlash, but their core business model is mostly middle-man stuff with a splash of crypto on top. Firefox's is the Google search engine deal. Pick your poison, I guess.
I worked on LiveGraph for a long time at Figma. We went through our own evolution:
1. First, we would use the WAL records to invalidate queries that could be affected (with optimizations for fast matching) and re-query the data (a sketch of this matching follows the list)
2. Then we used the info in the WAL record to update the query result in memory without asking the DB for the new result; this worked for the majority of queries that can be reliably modeled outside of the DB
3. I believe that after I left, the team reverted to the re-query approach: managing a system that replicates the DB's behavior was not something they were excited to maintain, and as the DB layer was scaled out, extra DB queries became less of a downside
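To make step 1 concrete, here is a minimal sketch with hypothetical WalRecord/Query types (my invention for illustration, not Figma's actual code or data model):

```haskell
import qualified Data.Map.Strict as Map

-- Hypothetical shape of a logical WAL record: which table changed
-- and the new column values of the affected row.
data WalRecord = WalRecord
  { walTable :: String
  , walRow   :: Map.Map String String
  }

-- A live query subscribes to one table with a row predicate.
data Query = Query
  { queryId    :: Int
  , queryTable :: String
  , queryPred  :: Map.Map String String -> Bool
  }

-- Step 1: match a WAL record against all subscribed queries and
-- return the ids that must be re-run. The "fast matching"
-- optimization would index queries by table/column instead of
-- scanning them all like this does.
invalidated :: [Query] -> WalRecord -> [Int]
invalidated queries rec =
  [ queryId q
  | q <- queries
  , queryTable q == walTable rec
  , queryPred q (walRow rec)
  ]
```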
Thoroughly enjoyed the article, since I had heard about Neon but never understood what it offers at the technical level over other PG-compatible projects.
The article mentions that a consequence of separating storage from compute is that compute nodes cache hot pages in memory and load cold pages from object storage (like S3?) when needed. Does anyone know what the consequences of this decision are? For a query that touches multiple rarely used pages, would that incur high latency and ingress costs? How does that penalty compare to vanilla Postgres running on AWS and storing pages on EBS?
(neon ceo)
Yes, you will pay S3 fetch latency to download a layer file for such data. However, we don't page out aggressively, so it's a very unlikely event. Most of the time you will be fetching from a locally attached SSD, which is faster and more predictable than EBS.
Where it works super well is the long tail of small and rarely used databases - it pushes our costs way down.
Another advantage is that if the whole region goes down, the S3 copy survives.
Finally, if you separate storage and compute, you can be serverless and scale to zero.
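If it helps to picture the read path, here is a rough sketch with invented names (my simplification, not Neon's actual code):

```haskell
import qualified Data.Map.Strict as Map
import Data.IORef (IORef, readIORef, modifyIORef')

type PageId = Int
type Page   = String  -- stand-in for an 8 KB Postgres page

-- Two-tier read path: hot pages come from the local cache; a miss
-- pays the object-storage round trip once, then populates the cache.
readPage :: IORef (Map.Map PageId Page)  -- simplified local SSD cache
         -> (PageId -> IO Page)          -- cold fetch from object storage
         -> PageId
         -> IO Page
readPage cacheRef fetchCold pid = do
  cache <- readIORef cacheRef
  case Map.lookup pid cache of
    Just page -> pure page                        -- fast, predictable
    Nothing   -> do
      page <- fetchCold pid                       -- rare: S3 latency paid here
      modifyIORef' cacheRef (Map.insert pid page)
      pure page
```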
That's good to know. However, a ton of information that used to live in the right-hand panel, and that was pretty important for translating design to code, has now been removed.
For as long as I've used Figma, I've been able to view the underlying CSS. This is no longer there. I can still get it, but I have to copy/paste it somewhere else to see it. Very annoying. This existed before "dev mode" was a thing. I think it's fair game if you want to make this experience better in "dev mode" to differentiate, but to take away a feature like that and then try to upsell it is a money-grab IMO.
Slightly off-topic, but what's a good online forum for seeking help with FP practices, outside of courses like this one?
Every winter break I get back into trying to learn more FP (in Haskell), and for the past several years I have been practicing algo problems (Codeforces, Advent of Code, LeetCode).
I always get stuck on more advanced graph algorithms where you traverse and modify a graph rather than a tree structure - it gets particularly tricky to work with circular data structures (I learned about "tying the knot", but it's incredibly challenging for me), and usually the runtime perf is sub-par both asymptotically and empirically.
Many graph algorithms are designed for imperative programming. It's safe to say that functional graph programming is still in its infancy. Alga[0], a library for algebraic graphs, only came out in 2017. And efficient functional algorithms for graphs may be yet to be discovered (even something as simple as an efficient and elegant way to reverse a list only came out in 1986!).
That said, as a beginner in functional programming, it is probably good enough to focus on directly translating imperative graph algorithms to functional style. You simply solve the problem by programming at a slightly lower level of abstraction.
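For example, a direct translation of imperative BFS is mostly mechanical: the mutable visited set and queue become accumulator arguments threaded through the recursion. A minimal sketch (a real contest solution would use Data.Sequence for the queue, and this assumes adjacency lists without duplicate edges):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Graph = Map.Map Int [Int]  -- adjacency list

-- Imperative BFS translated directly: "mutate the visited set"
-- becomes "pass an updated Set", "push onto the queue" becomes
-- "append to the queue argument".
bfs :: Graph -> Int -> [Int]
bfs g start = go (Set.singleton start) [start]
  where
    go _ [] = []
    go visited (v:queue) = v : go visited' (queue ++ fresh)
      where
        fresh    = filter (`Set.notMember` visited)
                          (Map.findWithDefault [] v g)
        visited' = foldr Set.insert visited fresh
```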
I don't know if [0] would be any help; it doesn't talk about graphs in particular, but it does talk about functional approaches to data structures. This note[1] on the Wikipedia page for the book says it better than I could:
> [...] consider a function that accepts a mutable list, removes the first element from the list, and returns that element. In a purely functional setting, removing an element from the list produces a new and shorter list, but does not update the original one. In order to be useful, therefore, a purely functional version of this function is likely to have to return the new list along with the removed element. In the most general case, a program converted in this way must return the "state" or "store" of the program as an additional result from every function call. Such a program is said to be written in store-passing style.
Oh yeah, a pure function that accepts the previous state and returns the new state is a pattern I use a lot.
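To make the quoted list example concrete, a minimal version in Haskell:

```haskell
-- Store-passing style: instead of mutating the list, return the
-- removed element together with the new, shorter list.
pop :: [a] -> Maybe (a, [a])
pop []       = Nothing
pop (x : xs) = Just (x, xs)

-- ghci> pop [1, 2, 3]
-- Just (1, [2,3])
```

The State monad is essentially this pattern packaged up: a `State s a` is, under the hood, a function `s -> (a, s)`.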
The issue is that this is hard to do on complex graph structures in an algorithm where incremental changes happen to the graph O(n) times - it ends up producing complex code and complex execution that might be too slow to pass the time limit on, say, Codeforces.
In the OCaml world, maybe this is the place where you say "screw it, this abstracted function does some stateful temporary business, but it looks pure from the outside", but in Haskell that's a lot harder to pull off without going deep into monads (and I forget how those work every time).
Oh, definitely go deep into monads. If you use the ST monad to do mutations, some people think Haskell provides nicer syntax for imperative programming than traditional imperative languages like C. But of course such nicer syntax only comes from understanding and using important abstractions like monads, Foldable, or Traversable. Then there are niceties like automatic SoA/AoS transformation using type families.
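A minimal example of the ST point (a frequency count with an in-place array; `runSTUArray` guarantees the mutation can't leak, so the function is pure from the outside):

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray)

-- Reads like imperative code, but histogram is a pure function:
-- the mutable array exists only inside runSTUArray.
histogram :: Int -> [Int] -> UArray Int Int
histogram n xs = runSTUArray $ do
  counts <- newArray (0, n - 1) 0
  forM_ xs $ \x -> do
    c <- readArray counts x
    writeArray counts x (c + 1)
  pure counts
```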
As for the SoA/AoS part: if you understand the type families GHC extension, all of this will be easily understood. If not, the basic idea is that you can automatically transform an Array of Structs into a Struct of Arrays by defining that transformation with a data family instance that you write yourself. For common types like tuples, the library has already defined polymorphic instances for you.
Of course this transformation isn't always desirable, and you don't have to use it. If your code isn't performance-sensitive, I wouldn't even bother with unboxed vectors; I'd just use regular vectors.
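For reference, this is what the vector package's Data.Vector.Unboxed gives you: an unboxed vector of pairs is stored as parallel flat arrays, wired together by data family instances, while the API still looks like an array of structs:

```haskell
import qualified Data.Vector.Unboxed as VU

-- Stored internally as a struct-of-arrays: one flat Int array and
-- one flat Double array (via the Unbox data family instances).
points :: VU.Vector (Int, Double)
points = VU.fromList [(1, 2.5), (2, 3.5), (3, 4.5)]

-- But indexed as if it were an array of structs:
firstPoint :: (Int, Double)
firstPoint = points VU.! 0
```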
Does it support exporting to SVGs without requiring the fonts used to be installed on the viewer's system? That seems to be the issue the author is trying to address.
I only read the article once, but to me it seemed like this was an explicit requirement: produce a vector artifact that renders the same on all platforms without dependencies (basically a PDF?) and without invoking a browser in the build process.
Sort of like how folks praise Go's ability to compile a static binary without dylib dependencies besides libc.
Embedding fonts isn't really great for SVG exports: either you link to the fonts, in which case the SVGs only load correctly when the user has the font locally, or is online and the CDN is still running; or you embed them as base64, which makes the image very large.
We go the base64 route for tldraw, which is sort of the best of all the bad options. I'd like to someday add more export options so that a creator could host the fonts themselves on the same site where the SVG is shown.
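For the curious, the base64 route is conceptually just this (a sketch using the base64-bytestring package; tldraw's actual implementation is different and more involved):

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Base64 as B64
import qualified Data.ByteString.Char8 as C8

-- Inline a font file into an SVG <style> block as a data URI, so
-- the SVG renders without network access or locally installed
-- fonts - at the cost of a much larger file.
fontFaceStyle :: FilePath -> String -> IO String
fontFaceStyle fontPath family = do
  bytes <- BS.readFile fontPath
  let b64 = C8.unpack (B64.encode bytes)
  pure $ concat
    [ "<style>@font-face{font-family:'", family
    , "';src:url('data:font/woff2;base64,", b64, "');}</style>"
    ]
```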
We test on Mac, Windows, and Linux laptops; it is very surprising that drawing one rectangle is painfully slow.
Sometimes it happens when your browser does not enable hardware acceleration or when your Linux distro does not know how to switch to the discrete GPU.
We won't be able to tell without more of your hardware specs and debug information. Feel free to reach out to Figma support or email me at skim@figma.com - this is exactly the type of issue that eludes us when looking at prod metrics in aggregate.
For others looking for more details on how Figma's sync engines differ and why two sync engines emerged, I wrote a long thread about it here:
https://x.com/imslavko/status/1890482196697186309