Hacker News | closetkantian's comments

I've seen a study showing that basically only legacies who donated had their children accepted.


So this is a major side question, but is there a go-to open source infinite canvas? I'm building a card game and I need a multiplayer infinite canvas for it. I'd appreciate any recommendations.


Does tldraw work for you? We use PixiJS, which comes with graphics primitives as well.


I think the biggest pain point for me is that nothing I've found supports cards that can be "flipped" so that they are face up or face down. I'm not an advanced coder but maybe I'll try to whip something up.


tldraw does look amazing though, thanks.


Ooh, I see PixiJS can make content available to the screen reader for a11y. That’s always a big question mark for me when people start cramming a content tree into a canvas.


Yes, a11y is very important! I do think this creates an HTML element per sprite/graphic, but the slowdown should not be terrible.


Last I checked Miro also uses Pixi so that's another quality infinite canvas app.




I stopped organizing files; I just use Everything search all the time (https://www.voidtools.com/)


Everything is great. If only there was a similar GUI tool with the same level of performance for GNU/Linux.


I use this tool too and it's amazing. But this is why file naming conventions become important.


I always preferred Shadow President; the back-end simulation seemed more realistic.


If your point is that BYOK is a useless acronym since it has the same number* of syllables, I disagree. Acronyms aren't just for reducing syllable count; they also reduce visual clutter and are easier to read for people who scan text.


My brother from another mother, I thought I was the only one left who distinguishes much from many. (I wish I didn't know that it's technically an initialism, not an acronym...)


Hahaha, this comment has me thinking about how I would pronounce it. Bee-yok? Bye-yolk?


To be fair, OpenAI used pirated data.


For me, it's substantially better at coding tasks than 4o.


Singapore is the most boring, sterile place I've ever been. I'd take Tokyo, Taipei, or Hong Kong over it in a heartbeat. The entire country reminds me of a mall. William Gibson's 1993 Wired article "Disneyland with the Death Penalty" (https://www.wired.com/1993/04/gibson-2/) is still as relevant as ever.

I saw a fascinating talk that convincingly argued that the Chinese Communist Party has taken its game plan over the last 30 years from Singapore, a de facto one-party state led by the People's Action Party. It's interesting to note that this party was founded on socialist principles but is now firmly capitalist.


Here’s a book that argues for the same. [0]

Makes one wonder whether such articles (OP’s) are part of a spin campaign.

[0] https://press.princeton.edu/books/hardcover/9780691211411/sp...


I wonder if this might be a preemptive move to stop Google from falling under addictive social media style legislation that would presumably ban infinite scroll.


This is really cool, but I'm left with a lot of questions. Why does the font always generate the same string to replace the exclamation points as he moves from gedit to gimp? Shouldn't the LLM be creating a new "inference"?

As an aside, I originally thought this was going to generate a new font "style" that matched the text. So for example, "once upon a time" would look like a storybook style font or if you wrote something computer science-related, it would look like a tech manual font. I wonder if that's possible.


So, another poster cleared up my first question. It's probably because the seed is the same. I think it would have been a better demo if it hadn't been, though.


You got it: same seed in practice, but also just temperature = 0 for the demo. A few things I considered adding for fun were 1) a way to specify a seed in the input text, 2) a way of using a symbol to say "I didn't like that token, try generating another one", so you could use, say, "!" to generate tokens and "?" to replace the last generated token. You would end up typing things like

"Once upon a time!!!!!!!!!!!!!!!!!!!!!!!!!!!!!SEED42!!!!!??!!!??!"

and 3) actually just allowing you to override the suggestions by typing letters on your own, to be used in future inferences. At that point it'd be a fairly generic auto-complete kind of thing.
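For anyone wondering why temperature = 0 makes the demo repeat the same string regardless of seed: greedy decoding never consults the RNG at all. A toy sketch in pure Python (no real model; `VOCAB` and `toy_logits` are made up purely for illustration):

```python
import math
import random

VOCAB = ["once", "upon", "a", "time", "the", "end"]

def toy_logits(step):
    # Stand-in for a model forward pass: logits depend only on the step index.
    return [((i * 7 + step * 3) % 11) / 2.0 for i in range(len(VOCAB))]

def sample_token(logits, temperature, rng):
    """Greedy argmax at temperature 0; softmax sampling otherwise."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

def generate(n, temperature, seed):
    rng = random.Random(seed)
    return [VOCAB[sample_token(toy_logits(t), temperature, rng)] for t in range(n)]

# Temperature 0: identical output no matter the seed (the RNG is never used).
assert generate(5, 0, seed=1) == generate(5, 0, seed=42)
# Temperature > 0: the seed matters, but the same seed reproduces the sequence.
assert generate(5, 0.8, seed=7) == generate(5, 0.8, seed=7)
```

With temperature above zero, different seeds can diverge, which is why a `SEED42` escape hatch in the input text would actually change the output.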


Using the input characters to affect the token selection would increase the ‘magic’ a little.

As it is, if you go back into a string of !!!!!!!!!! that has been turned into ‘once upon a time’ and try to delete the ‘a’, you’ll just be deleting an !, and the string will turn into ‘once upon a tim’.

If you could just keyboard mash to pass entropy to the token sampler, deleting a specific character would alter the generation from that point onwards.
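One way to sketch that idea is to hash each input prefix into the seed for the token at that position, so editing any character reseeds the generation from there onward but leaves everything before it untouched. A hypothetical sketch (not part of the demo; `generate_from_mash` and the six-word `VOCAB` are invented for illustration):

```python
import hashlib
import random

VOCAB = ["once", "upon", "a", "time", "the", "end"]

def generate_from_mash(mashed: str) -> list[str]:
    """Each typed character contributes entropy: token i is sampled with a seed
    derived from the prefix mashed[:i+1], so changing character i alters the
    generation from position i onwards, but positions before i are stable."""
    out = []
    for i in range(len(mashed)):
        prefix = mashed[: i + 1]
        seed = int.from_bytes(hashlib.sha256(prefix.encode()).digest()[:8], "big")
        rng = random.Random(seed)
        out.append(rng.choice(VOCAB))
    return out

a = generate_from_mash("!!!x!!!")
b = generate_from_mash("!!!y!!!")
# The first three positions share identical prefixes, so they must match;
# the edit at position 3 reseeds every position from there on.
assert a[:3] == b[:3]
```

Deleting a character in the middle would then shift every later prefix and regenerate the tail, which matches the behavior the parent is asking for.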


But having the same "seed" doesn't guarantee the same response from an LLM, hence the question above.


I fail to understand how an LLM could produce two different responses from the same seed. Same seed implies all random numbers generated will be the same. So where is the source of nondeterminism?


I believe people are confused because ChatGPT's API exposes a seed parameter that is not guaranteed to be deterministic.

But that's due to the possibility of model configuration changes on the service end, and it's not relevant here.


Barring subtle incompatibilities in underlying implementations on different environments, it does, assuming all other generation settings (temperature, etc.) are held constant.

