
I worked with Rudi at Figma and of course I support his comment - Figma seems to be mentioned for marketing, not for an actual technical comparison.

For others looking for more details on how Figma's sync engines differ and why two separate sync engines emerged, I wrote a long thread about it here:

https://x.com/imslavko/status/1890482196697186309


I am a heavy user of Brave and I would love for you to expand on what you mean.

For those curious, here is the open-source repo of all the Chromium changes Brave applies. I have not read every commit myself, so any flagging would be appreciated: https://github.com/brave/brave-core


Brave used to insert their affiliate parameters by default when visiting websites.

https://news.ycombinator.com/item?id=23442027

You may also be interested in these topics:

https://news.ycombinator.com/item?id=18734999

https://davidgerard.co.uk/blockchain/2019/01/13/brave-web-br...

They've made good changes after community backlash but their core business model is mostly middle-man stuff with a splash of crypto on top. Firefox's is the Google search engine deal. Pick your poison, I guess.


I worked on LiveGraph for a long time at Figma. We went through our own evolution:

1. first we used the WAL records to invalidate queries that could be affected (with optimizations for fast matching) and re-queried the data (see the sketch after this list)

2. then we used the info from the WAL record to update the query results in memory without asking the DB for the new result; this worked for the majority of queries that can be reliably modeled outside of the DB

3. I believe that after I left, the team reverted to the re-query approach, as managing a system that replicates the DB's behavior was not something they were excited to maintain, and as the DB layer was scaled out, the extra DB queries became less of a downside
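
For concreteness, here is a minimal, hypothetical sketch of the matching in approach 1 (this is not the actual LiveGraph code; the types and names are made up): each live query carries a table name and a cheap predicate over a changed row, and every incoming WAL record is matched against the subscriptions to decide which queries need to be re-run.

    import qualified Data.Map.Strict as Map

    -- A decoded logical replication record: which table changed and the new row values.
    data WalRecord = WalRecord
      { walTable :: String
      , walRow   :: Map.Map String String
      }

    -- A subscribed live query: re-run it whenever a matching row changes.
    data LiveQuery = LiveQuery
      { queryId      :: Int
      , queryTable   :: String
      , queryMatches :: Map.Map String String -> Bool  -- cheap predicate, no DB round trip
      }

    -- Ids of the queries that must be re-run against the database.
    invalidated :: [LiveQuery] -> WalRecord -> [Int]
    invalidated queries rec =
      [ queryId q
      | q <- queries
      , queryTable q == walTable rec
      , queryMatches q (walRow rec)
      ]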


LiveGraph was an inspiration for us, Slava; it made me smile to see your comment. Approach 2 is _really_ interesting.

I'll reach out to you on Twitter; I'd love to learn more about your experience.


Thoroughly enjoyed the article, since I had heard about Neon but never understood what it offers at a technical level over other PG-compatible projects.

The article mentions that a consequence of separating storage from compute is that compute nodes cache hot pages in memory and load cold pages from object storage (like S3?) when needed. Does anyone know the consequences of this decision? For a query that touches multiple rarely used pages, would that incur high latency and ingress? How does that penalty compare to vanilla Postgres running on AWS and storing pages on EBS?


(Neon CEO) Yes, you will have S3 fetch latency for downloading a layer file for such data. However, we don't page out aggressively, so it's a very unlikely event. Most of the time you will be fetching from a locally attached SSD, which is superior to and more predictable than EBS.

Where it works super well is the long tail of small and rarely used databases - it pushes our costs way down.

Another advantage is that if the whole region goes down the S3 copy survives.

Finally if you separate storage and compute you can be serverless and scale to 0.
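
To make the read path concrete, here is a minimal sketch (not our actual code; fetchLayerFromObjectStore is a hypothetical placeholder, and the "SSD cache" is modelled as an in-memory map): serve a page locally when it is resident, otherwise pay the object-store fetch once and keep the data locally afterwards.

    import qualified Data.ByteString as BS
    import qualified Data.Map.Strict as Map
    import Data.IORef (IORef, modifyIORef', readIORef)

    type PageId = Int

    readPage
      :: IORef (Map.Map PageId BS.ByteString)  -- local SSD cache, modelled in memory here
      -> (PageId -> IO BS.ByteString)           -- hypothetical S3/object-store fetch
      -> PageId
      -> IO BS.ByteString
    readPage cacheRef fetchLayerFromObjectStore pid = do
      cache <- readIORef cacheRef
      case Map.lookup pid cache of
        Just page -> pure page                    -- common case: fast local read
        Nothing   -> do                           -- rare case: cold page, S3 latency
          page <- fetchLayerFromObjectStore pid
          modifyIORef' cacheRef (Map.insert pid page)
          pure page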


Thanks Nikita!


Old inspect features are still available without paying, although they were moved in the UI to make space: https://www.loom.com/share/d4f9856c04f24818913d1de8ecb2a08c


That's good to know. However, a ton more information that existed in the right-hand panel and was pretty important for translating design to code has now been removed.

For as long as I've used Figma, I've been able to view the underlying CSS. This is no longer there. I can still get it, but I have to copy/paste it somewhere else to see it. Very annoying. This existed before "dev mode" was a thing. I think it's fair game if you want to make this experience better in "dev mode" to differentiate, but taking away a feature like that and then trying to upsell it is a money grab IMO.


Slightly off-topic, but what's a good online forum for seeking help with FP practices outside of courses like this?

Every winter break I get back into trying to learn more FP (in Haskell), and over the past several years I have been practicing algo problems (Codeforces, Advent of Code, LeetCode).

I always get stuck on the more advanced graph algorithms where you traverse and modify a graph, not a tree structure - it gets particularly tricky to work with circular data structures (I learned about "tying the knot", but it's incredibly challenging for me), and usually the runtime perf is sub-par both asymptotically and empirically.


Many graph algorithms are designed for imperative programming. It's safe to say that functional graph programming is still in its infancy. Alga[0], a system for algebraic graphs, only came out in 2017. And efficient functional algorithms for graphs may yet be discovered (even something as simple as reversing a list in a way that's both efficient and elegant only came out in 1986!).

That said, as a beginner in functional programming, it would probably be good enough if you just focus on directly translating imperative graph algorithms to functional programming. You simply solve the problem by programming at a slightly lower level of abstraction.
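
To give a flavour of the algebraic approach, here is the core data type from the paper [0], reproduced as a rough sketch (the alga library builds a much richer API on top of these four constructors):

    -- Algebraic graphs: every graph is built from these four primitives.
    data Graph a
      = Empty
      | Vertex a
      | Overlay (Graph a) (Graph a)   -- put two graphs side by side
      | Connect (Graph a) (Graph a)   -- overlay, plus edges from every left vertex to every right vertex

    -- A single edge x -> y, expressed with the primitives.
    edge :: a -> a -> Graph a
    edge x y = Connect (Vertex x) (Vertex y)

    -- The path 1 -> 2 -> 3.
    path123 :: Graph Int
    path123 = Overlay (edge 1 2) (edge 2 3)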

[0]: https://dl.acm.org/authorize?N46678 or preprint at https://github.com/snowleopard/alga-paper/releases/download/...


I don't know if [0] would be any help; it doesn't talk about graphs in particular, but it does cover functional approaches to data structures. This note[1] on the Wikipedia page for the book says it better than I could:

> [...] consider a function that accepts a mutable list, removes the first element from the list, and returns that element. In a purely functional setting, removing an element from the list produces a new and shorter list, but does not update the original one. In order to be useful, therefore, a purely functional version of this function is likely to have to return the new list along with the removed element. In the most general case, a program converted in this way must return the "state" or "store" of the program as an additional result from every function call. Such a program is said to be written in store-passing style.
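
In Haskell, that example looks something like this (a tiny sketch of the store-passing idea):

    -- Instead of mutating the list, return the removed element together
    -- with the new, shorter list; the caller threads the "store" along.
    pop :: [a] -> Maybe (a, [a])
    pop []       = Nothing
    pop (x : xs) = Just (x, xs)

    -- pop [1, 2, 3]  ==  Just (1, [2, 3])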

[0] https://www.cs.cmu.edu/~rwh/students/okasaki.pdf

[1] https://en.wikipedia.org/wiki/Purely_functional_data_structu...


Oh yeah, a pure function that accepts the previous state and returns the new state is the pattern I use a lot.

The issue is that this is hard to do on complex graph structures in an algorithm where incremental changes happen to the graph O(n) times - it ends up creating complex code and complex execution that might be too slow to pass the time limit on Codeforces, let's say.

In the OCaml world maybe this is the place where you say "screw it, this abstracted function does some stateful temporary business, but it looks pure from the outside" but in Haskell it's a lot harder to pull off without going deep into monads (and I forget how those work every time).


Oh, definitely go deep into monads. If you use the ST monad to do mutations, some people think Haskell provides nicer syntax for imperative programming than traditional imperative languages like C. But of course that nicer syntax only comes from understanding and using important abstractions like monads, Foldable, or Traversable. Then there are niceties like automatic SoA/AoS transformation using type families.
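
For example, here is a small sketch of an imperative-style graph traversal in the ST monad: it mutates a visited array and an explicit stack internally, but runST makes the whole thing a pure function from the outside.

    import Control.Monad (forM_, unless)
    import Control.Monad.ST (ST, runST)
    import Data.Array (Array, bounds, (!))
    import Data.Array.ST (STUArray, newArray, readArray, writeArray)
    import Data.STRef (modifySTRef', newSTRef, readSTRef, writeSTRef)

    -- All vertices reachable from the start vertex in an adjacency-list graph.
    reachable :: Array Int [Int] -> Int -> [Int]
    reachable g start = runST $ do
      visited <- newArray (bounds g) False :: ST s (STUArray s Int Bool)
      out     <- newSTRef []
      stack   <- newSTRef [start]
      let loop = do
            s <- readSTRef stack
            case s of
              []     -> pure ()
              v : vs -> do
                writeSTRef stack vs
                seen <- readArray visited v
                unless seen $ do
                  writeArray visited v True          -- mark the vertex
                  modifySTRef' out (v :)             -- record it in the output
                  forM_ (g ! v) $ \w ->              -- push its neighbours
                    modifySTRef' stack (w :)
                loop
      loop
      reverse <$> readSTRef out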


I don't think I have heard of the automatic AoS/SoA transformation before; do you have good links to study it more?


You can read the documentation for the vector package. (This is an extremely common package since it is a dependency for the JSON library aeson.)

Check out https://hackage.haskell.org/package/vector-0.13.1.0/docs/Dat...

Now if you understand the type families GHC extension, all this will be easy to follow. If not, the basic idea is that you can automatically transform an array of structs into a struct of arrays by defining that transformation with a data family instance that you write yourself. For common types like tuples, the library has already defined polymorphic instances for you.
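
A tiny example of what that buys you (just a sketch): an unboxed vector of pairs reads like an array of structs in your code, but the Unbox instance for tuples stores it as a struct of arrays, i.e. two flat unboxed arrays.

    import qualified Data.Vector.Unboxed as U

    -- Looks like an array of (position, velocity) structs...
    particles :: U.Vector (Double, Double)
    particles = U.fromList [(0.0, 1.0), (2.0, 0.5), (4.0, -1.0)]

    -- ...but is represented as two unboxed Double arrays under the hood.
    step :: Double -> U.Vector (Double, Double) -> U.Vector (Double, Double)
    step dt = U.map (\(x, v) -> (x + v * dt, v))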

Of course this transformation isn't always desirable and you don't have to use this. If your code isn't performance sensitive I wouldn't even bother with unboxed vectors; I'd just use regular vectors.


My Scala friends rely heavily on Discord, e.g. the one mentioned here: https://typelevel.org/blog/2021/05/05/discord-migration.html

It is a language-based community, but they do have vibrant discussions on learning and theory.


No way to confirm, but I think so, just because NPM threw this error at me:

     KV GET failed: 401 Unauthorized
where KV could refer to the CF KV in workers


Does it support exporting to SVGs without requiring the fonts used to be installed on the viewer's system? That seems to be the issue the author is trying to address.


Can the author not simply include the necessary fonts on his site?


I only read the article once, but to me it seemed like this was an explicit requirement: produce a vector artifact that renders the same on all platforms without dependencies (basically a PDF?) and without invoking a browser in the build process.

Sort of like how folks praise Go's ability to compile a static binary with no dynamic-library dependencies besides libc.


Embedding fonts isn't really great for SVG exports. Either you link to the fonts, in which case the SVGs only load correctly when the user has the font locally, or is online and the CDN is still running; or else you embed them as base64, which makes the image very large.

We go the base64 route for tldraw, which is sort of the best of all the bad options. I'd like to someday add more export options so that a creator could host the fonts themselves on the same site where the SVG is shown.


Not the same but a similar curious case of historical immigration: https://en.wikipedia.org/wiki/Koryo-saram


Hey there

We test on Mac, Windows, and Linux laptops, so it is very surprising that drawing one rectangle is painfully slow.

Sometimes it happens when your browser does not enable hardware acceleration or when your Linux distro does not know how to switch to the discrete GPU.

We won't be able to tell without more of your hardware specs and debug information; feel free to reach out to Figma support or email me at skim@figma.com - this is exactly the type of issue that eludes us when looking at prod metrics in aggregate.

