
It's skills first, and then money and hardware for scale.

A more skilled person who understands all the underlying steps will always be more efficient at scaling up, because they know where to allocate more resources.

Basically... you always need the skills; the money is the fine-tuning.


Can you append new columns to a file stored on disk without reading it all into memory? Somehow this is beyond Parquet's capabilities.


The default writer will decompress the values; however, right now you can implement your own write strategy that avoids doing that. We plan on adding that as an option since it's quite common.



Yeah, but if you can build topologies based on latencies you may get some decent tradeoffs. For example, with N=1M nodes each doing batch updates in a tree manner, i.e. the all-reduce is effectively layered by latency between nodes.
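To illustrate the idea, here is a toy Python sketch of a latency-layered reduction; the grouping and numbers are made up, and a real all-reduce would also broadcast the result back down the tree:

    # toy sketch: sum-reduce within low-latency groups first,
    # then combine the per-group partials over the slower links
    def layered_reduce(values, groups):
        # groups: list of index lists, one per low-latency cluster of nodes
        partials = [sum(values[i] for i in group) for group in groups]  # fast, local step
        return sum(partials)  # slow, global step across group representatives

    values = [0.01 * i for i in range(1_000_000)]                    # one update per "node"
    groups = [list(range(g, 1_000_000, 1000)) for g in range(1000)]  # 1000 groups of 1000 nodes
    print(layered_reduce(values, groups))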


lossy encyclopedia that can also do some short-term memory (RAG) things.


I've played years of KZ and HNS after years of playing competitive CS in local communities (old PGL in Romania!). I got over 6k hours in Steam CS 1.6 + many more on "non-steam". That game shaped me. I even learned the basics of programming while modding a KZ plugin: https://forums.alliedmods.net/showthread.php?t=130417

Nowadays I code for a living, but for sure this is the game that lit the spark for me.

It was a great time and I feel that I can always run this game and get back to that childhood feeling.


Now that's a blast from the past. Before we had good all-in-one plugin solutions for solo play, I'm pretty sure I ripped the hook code from ProKreedz for my listen server. Then I got a checkpoint plugin from another guy, LJ stats from somewhere else and so on. I could tinker with my server freely and make it work just like I wanted.

That's what I really loved about CS 1.6. It allowed so much freedom in terms of what kind of maps and plugins you could create. We got amazing community-cultivated game modes such as KZ, HNS, surf and so many more out of it. And what's more, it was relatively easy to whip up your own map in Hammer and get it out there for everyone to play.

Community servers were first class citizens back then, prominently displayed as soon as you launched the game. These days someone getting into the game might not ever find out about the rich variety of experiences provided by community servers because they get funneled right into the default 5v5 matchmaking experience.

I tried TF2 recently and it took me a minute to figure out how to play a game without queuing into matchmaking. It's a bit sad.

I honestly think developers undervalue the power of moddability in adding value and especially longevity to their games. Fortunately, and as you pointed out, CS 1.6 is still there, and there are still a lot of active communities around that game. I believe that's because the game allowed the community to carve out a space for themselves and build whatever they wanted.


I played thousands of hours of KZ in CSGO. I knew pretty much everyone in the community back then, was involved with House of Climb as a sort of crew member/moderator and set a couple world records and top 5 times etc.

Those were great times, but all the strafing caused some shoulder and arm problems so I eventually gave it up and moved on.


Tbh, the web won as the application platform mostly because it's a standard. Everybody knows HTML, CSS and a little JS.

On the other hand, for mobile apps, there is still a device-specific mentality.

Imagine web apps being built with a different flavor for all the major browsers...

I hope that the same level of standardization comes to mobile apps too with the option to use more device-specific features on top of the generic UI.


Mine is much more barebones:

- one single machine
- nginx proxy
- many services on the same machine; some are internal, some are meant to be public, and all are accessible via the web!
- internal ones are behind HTTP basic auth with a humongous password that I store in an external password manager (Firefox's built-in one)
- public ones are either fully public or behind Google OAuth

I coded all of them from scratch, as that's the point of what I'm doing with homelabbing. You want images? Browsers can render them. Videos? Browsers can play them.

The hard part is the backend for me. The frontend is very much "90s html".
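For a concrete picture, a hypothetical nginx sketch of that layout; the hostnames, ports and htpasswd path are made up:

    # internal service: HTTP basic auth in front of a local port
    server {
        listen 80;
        server_name notes.home.example;

        auth_basic           "internal";
        auth_basic_user_file /etc/nginx/.htpasswd;  # one very long generated password

        location / {
            proxy_pass http://127.0.0.1:8081;
        }
    }

    # public service: no auth at the proxy; the app itself handles Google OAuth
    server {
        listen 80;
        server_name photos.example.com;

        location / {
            proxy_pass http://127.0.0.1:8082;
        }
    }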


HTTP sends the password in cleartext. Better to at least use a self-signed certificate.


Nice! I have a friend who is starting to program his infrastructure/services from scratch. It's a neat way to learn and to make things fit your own needs.


the last point got me.

How can you idiomatically do a read-only request with complex filters? For me both PUT and POST are "writable" operations, while GET is assumed to be read-only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).

So ... how does one do it?


One uses POST and recognizes that REST doesn't have to be so prescriptive.

The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow the client to make additional well-formed requests. If the complex filters can be expressed via a resource representation or from the root index, regardless of the HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST, but I think it should be a deciding part here).

When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.

Plus, with the vagaries of CSRF protections, per-user rate-limiting, access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-fulness on the merits of its purity.
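For instance, a HAL-style response could advertise the search "form" alongside the data; the link relations and fields below are invented for the example:

    {
      "items": [],
      "_links": {
        "self":   { "href": "/orders?page=1" },
        "next":   { "href": "/orders?page=2" },
        "search": { "href": "/orders/searches",
                    "method": "POST",
                    "fields": ["status", "customer", "created_after"] }
      }
    }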


POST the filter, get a response back with the query to follow up with for the individual resources.

    POST /complex
    
    value1=something&value2=else
which then responds with

    201 Created
    Location: https://example.com/complex/53301a34-92d3-447d-ac98-964e9a8b3989
And then you can make GET requests against that resource.

It adds in some data expiration problems to be solved, but it's reasonably RESTful.


This has RESTful aesthetics, but it is a bit impractical if a read-only query changes state on the server, as in creating the uuid-referenced resource.


There's no requirement in HTTP (or REST) to either create a resource or return a Location header.

For the purposes of caching etc., it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (e.g. a link href of "next" is relative to the Location).


Isn't this twice as slow? If your server was far away it would double load times?


The response to the POST can return everything you need. The Location header that you receive with it will contain a permanent link for making the same search request again via GET.

Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.


There was a proposal[1] a while back to define a new SEARCH verb that was basically just a GET with a body for this exact purpose.

[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...


Similarly, a more recent proposal for a new QUERY verb: https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...


If you really want this idiomatically correct, put the data in JSON or another suitable format, compress it and encode it in Base64 to pass via GET as a single parameter. To hit the browser limits you would need such a big query that in many cases you'd hit UX constraints first (2048 bytes is 50+ UUIDs or 100+ polygon points, etc.).

Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
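A minimal Python sketch of that round trip; the parameter name `q` and the endpoint are made up:

    import base64, json, zlib

    filters = {"status": ["open", "pending"], "owner": "me", "tags": ["a", "b"]}

    # client side: JSON -> deflate -> URL-safe Base64 -> one query parameter
    q = base64.urlsafe_b64encode(zlib.compress(json.dumps(filters).encode())).decode()
    url = "https://api.example.com/items?q=" + q

    # server side: reverse the steps to recover the filter object
    decoded = json.loads(zlib.decompress(base64.urlsafe_b64decode(q)))
    assert decoded == filters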


Cons: not Postman- or cURL-friendly.


"Filters" suggests that you are trying to query. So, QUERY, perhaps? https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...

Or stop worrying and just use POST. The computer isn't going to care.
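For reference, a QUERY request is essentially a safe, idempotent GET-like request that carries the filter in the body; a rough sketch based on the draft, with an illustrative path and media types:

    QUERY /items HTTP/1.1
    Host: api.example.com
    Content-Type: application/json
    Accept: application/json

    {"status": ["open", "pending"], "created_after": "2024-01-01"}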


HTML FORMs are limited to www-form-encoded or multipart. The length of the queries on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.

Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.

In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.


Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.

I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.



The Typst web app, which is similar to Overleaf, is closed source. Overleaf itself is open source, yes.


Overleaf isn't fully open source either, since they have a paid tier with features which are not present in this repo. Inline commenting, for example, is a Server Pro-only feature.


"That" in my sentence meant that Typst web app is closed source.


But that doesn't make much sense - by your account LaTeX would also be a mix of closed and open source, since closed-source web apps exist for writing LaTeX.


What does not make sense? Did you mean to reply to someone else? I only stated that Typst (the typesetting engine) is free to use and modify, and that only the web app is closed source. Typst can be used without touching any web apps. I use Typst locally.

I made no claims about any mixes or claims about LaTeX.


Read your own link before posting. While the parent was wrong about it being fully closed source, the Overleaf editor isn't fully open source either; it is open core under AGPL.

> If you want help installing and maintaining Overleaf in your lab or workplace, we offer an officially supported version called Overleaf Server Pro. It also includes more features for security (SSO with LDAP or SAML), administration and collaboration (e.g. tracked changes). Find out more!

