I would not have believed this for a second if stores in my area of the U.S. had not recently begun locking gift cards in a cage. I thought the move was quite odd, until I remembered a story I read (possibly here?) about specific types of imaging that can see the PIN behind the scratch-off panel.
Originally I assumed it was about customer education/fraud prevention, but no additional signage is posted at the stores doing this. My second thought was that people must think these cards are already activated, but there is plenty of text stating they are only activated at the POS.
The retailers I mentioned are nationwide. However, they've only recently begun doing this, and only in a few locations that I am aware of.
This guy purchased a gift card which turned out to be dodgy, and Apple locked his entire account. So there's definitely some kind of shenanigans possible with the current supply chain.
For the past year, individual Chinese tourists have travelled around the country emptying stores of nearly every kind of gift card. It's some kind of money-laundering scheme, I think? So stores in affected areas recently started locking up gift cards, though it's hard to stop since buying up all the gift cards isn't really illegal.
Disclosure: I haven't run a website since my health issues began. However, Cloudflare has an AI firewall, and Cloudflare is super cheap (also: I'm unsure whether the AI firewall is on the free tier, but I would be surprised if it is not). Ignoring the recent drama about a couple of incidents they've had (because that would not matter for a personal blog), why not use this instead?
Just curious. Hoping to be able to work on a website again someday, if I ever get my health/stamina/etc. back.
My main terminal uses a Pi-hole with 120,000+ blacklist rules (not Cloudflare specifically; I allow most CDNs). This includes a complete blackout of Google/Facebook products, as well as most tracking/analytics services.
For example, I do not allow reCAPTCHA.
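A quick way to spot-check that kind of setup, if anyone wants it: a minimal Python sketch (the hostnames are just examples, and what you get back depends on the Pi-hole's blocking mode):

import socket

# Ask the local resolver (the Pi-hole) about a few domains that should be blocked.
# Depending on blocking mode, blocked names resolve to 0.0.0.0 or fail with NXDOMAIN.
for host in ("www.google-analytics.com", "www.recaptcha.net"):
    try:
        addr = socket.gethostbyname(host)
        status = "blocked (sinkholed)" if addr == "0.0.0.0" else f"resolves to {addr}"
    except socket.gaierror:
        status = "blocked (NXDOMAIN)"
    print(f"{host}: {status}")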
As a similar commenter noted, when just casually browsing I don't really have any desire to try hard to read random content. Should I absolutely need to access some information walled off behind Cloudflare, I have another computer that uses much less restrictive blacklisting.
Not OP, but it isn't super extreme if you are just surfing. It's like when a site is slow to load: sometimes I wasn't invested enough to use your site anyway.
Respectfully, LLMs are nothing like a brain, and I discourage comparisons between the two. Beyond a complete difference in how they operate, a brain can innovate, and as of this moment an LLM cannot, because it relies on previously available information.
LLMs are just seemingly intelligent autocomplete engines, and until someone figures out a way to stop the hallucinations, they aren't even great at that.
Every piece of code a developer churns out using LLMs is built from previous code that other developers have written (including both its strengths and weaknesses, btw). Every paragraph you ask it to write in a summary? Same. Every other problem? Same. Ask it to generate a summary of a document? Don't trust it there either. [Note: expect cyber-attacks around this scenario later on; it is beginning to happen already -- documents made intentionally obtuse to fool an LLM into hallucinating about their contents, leading to someone signing a contract and being conned out of millions.]
If you ask an LLM to solve something no human has, you'll get a fabrication. That has fooled quite a few folks and caused them to jeopardize their careers (lawyers, etc.), which is why I am posting this.
This is the 2023 take on LLMs. It still gets repeated a lot. But it doesn’t really hold up anymore - it’s more complicated than that. Don’t let some factoid about how they are pretrained on autocomplete-like next token prediction fool you into thinking you understand what is going on in that trillion parameter neural network.
Sure, LLMs do not think like humans and they may not have human-level creativity. Sometimes they hallucinate. But they can absolutely solve new problems that aren’t in their training set, e.g. some rather difficult problems on the last Mathematical Olympiad. They don’t just regurgitate remixes of their training data. If you don’t believe this, you really need to spend more time with the latest SotA models like Opus 4.5 or Gemini 3.
Nontrivial emergent behavior is a thing. It will only get more impressive. That doesn’t make LLMs like humans (and we shouldn’t anthropomorphize them) but they are not “autocomplete on steroids” anymore either.
> Don’t let some factoid about how they are pretrained on autocomplete-like next token prediction fool you into thinking you understand what is going on in that trillion parameter neural network.
This is just an appeal to complexity, not a rebuttal to the critique of likening an LLM to a human brain.
> they are not “autocomplete on steroids” anymore either.
Yes, they are. The steroids are just even more powerful. By refining training data quality, increasing parameter size, and increasing context length we can squeeze more utility out of LLMs than ever before, but ultimately, Opus 4.5 is the same thing as GPT2, it's only that coherence lasts a few pages rather than a few sentences.
First, this is completely ignoring text diffusion and nano banana.
Second, autocompleting the name of the killer in a detective novel outside the training set requires following the plot and at least some understanding of it.
Reinforcement learning is a technique for adjusting weights, but it does not alter the architecture of the model. No matter how much RL you do, you still retain all the fundamental limitations of next-token prediction (e.g. context exhaustion, hallucinations, prompt injection vulnerability etc)
You've confused yourself. Those problems are not fundamental to next token prediction, they are fundamental to reconstruction losses on large general text corpora.
That is to say, they are equally likely if you don't do next-token prediction at all and instead do text diffusion or something. Architecture has nothing to do with it. They arise because they are early partial solutions to the reconstruction task on 'all the text ever made'. The reconstruction task doesn't care much about truthiness until very late in the loss curve (a point we will probably never reach), so hallucinations are almost as good for a very long time.
RL as is typical in post-training _does not share those early solutions_, and so does not share the fundamental problems. RL (in this context) has its own share of problems which are different, such as reward hacks like: reliance on meta signaling (# Why X is the correct solution, the honest answer ...), lying (commenting out tests), manipulation (You're absolutely right!), etc. Anything to make the human press the upvote button or make the test suite pass at any cost or whatever.
With that said, RL post-trained models _inherit_ the problems of non-optimal large corpora reconstruction solutions, but they don't introduce more or make them worse in a directed manner or anything like that. There's no reason to think them inevitable, and in principle you can cut away the garbage with the right RL target.
Thinking about architecture at all (autoregressive CE, RL, transformers, etc) is the wrong level of abstraction for understanding model behavior: instead, think about loss surfaces (large corpora reconstruction, human agreement, test suites passing, etc) and what solutions exist early and late in training for them.
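To make the "wrong level of abstraction" point concrete, here is a toy sketch (PyTorch; the model, sizes, and reward are all made-up stand-ins, not anything from a real pipeline). The parameters and forward pass are identical in both cases; the only thing that changes is the loss surface being descended:

import torch
import torch.nn.functional as F

# Throwaway "language model": same parameters under two different objectives.
vocab, dim = 100, 32
model = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))

tokens = torch.randint(0, vocab, (1, 16))   # stand-in for a corpus snippet
logits = model(tokens[:, :-1])              # distribution over each next token

# (1) Reconstruction objective: cross-entropy against the corpus itself.
ce_loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))

# (2) RL-style objective: sample a continuation, score it with an external reward
#     (an arbitrary placeholder here), and apply REINFORCE to the same parameters.
probs = F.softmax(logits, dim=-1).reshape(-1, vocab)
sample = torch.multinomial(probs, 1)
reward = (sample == 42).float().mean()      # placeholder for "human pressed upvote"
rl_loss = -reward * torch.log(probs.gather(1, sample)).sum()

# Same architecture, same forward pass; only the objective differs.
ce_loss.backward(retain_graph=True)
rl_loss.backward()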
> This is just an appeal to complexity, not a rebuttal to the critique of likening an LLM to a human brain
I wasn’t arguing that LLMs are like a human brain. Of course they aren’t. I said twice in my original post that they aren’t like humans. But “like a human brain” and “autocomplete on steroids” aren’t the only two choices here.
As for appealing to complexity, well, let’s call it more like an appeal to humility in the face of complexity. My basic claim is this:
1) It is a trap to reason from model architecture alone to make claims about what LLMs can and can’t do.
2) The specific version of this in GP that I was objecting to was: LLMs are just transformers that do next token prediction, therefore they cannot solve novel problems and just regurgitate their training data. This is provably true or false, if we agree on a reasonable definition of novel problems.
The reason I believe this is that back in 2023 I (like many of us) used LLM architecture to argue that LLMs had all sorts of limitations around the kind of code they could write, the tasks they could do, the math problems they could solve. At the end of 2025, SotA LLMs have refuted most of these claims by doing the tasks I thought they'd never be able to do. That was a big surprise to a lot of us in the industry. It still surprises me every day. The facts changed, and I changed my opinion.
So I would ask you: what kind of task do you think LLMs aren’t capable of doing, reasoning from their architecture?
I was also going to mention RL, as I think that is the key differentiator that makes the “knowledge” in the SotA LLMs right now qualitatively different from GPT2. But other posters already made that point.
This topic arouses strong reactions. I already had one poster (since apparently downvoted into oblivion) accuse me of “magical thinking” and “LLM-induced-psychosis”! And I thought I was just making the rather uncontroversial point that things may be more complicated than we all thought in 2023. For what it’s worth, I do believe LLMs probably have limitations (like they’re not going to lead to AGI and are never going to do mathematics like Terence Tao) and I also think we’re in a huge bubble and a lot of people are going to lose their shirts. But I think we all owe it to ourselves to take LLMs seriously as well. Saying “Opus 4.5 is the same thing as GPT2” isn’t really a pathway to do that, it’s just a convenient way to avoid grappling with the hard questions.
Not the person you're responding to, but I think there's a non-trivial argument to make that our thoughts are just autocomplete: what is the next most likely word, based on what you're seeing? Ever watched a movie and guessed the plot? Or read a comment and known where it was going to go by the end?
And I know not everyone thinks in a literal stream of words all the time (I do), but I would argue that those people's brains are just using a different "token".
There's no evidence for it, nor any explanation for why it should be the case from a biological perspective. Tokens are an artifact of computer science that have no reason to exist inside humans. Human minds don't need a discrete dictionary of reality in order to model it.
Prior to LLMs, there was never any suggestion that thoughts work like autocomplete, but now people are working backwards from that conclusion based on metaphorical parallels.
There actually was quite a lot of suggestion that thoughts work like autocomplete. A lot of it was just considered niche, e.g. because the mathematical formalisms were beyond what most psychologists or even cognitive scientists would deem useful.
Predictive coding theory was formalized around 2010 and traces its roots to theories by Helmholtz from the 1860s.
Predictive coding theory postulates that our brains are just very strong prediction machines, with multiple layers of predictive machinery, each predicting the next.
There are so many theories regarding human cognition that you can certainly find something that is close to "autocomplete". A Hopfield network, for example.
The roots of predictive coding theory extend back to the 1860s.
Natalia Bekhtereva was writing about compact concept representations in the brain akin to tokens.
> There are so many theories regarding human cognition that you can certainly find something that is close to "autocomplete"
Yes, you can draw interesting parallels between anything when you're motivated to do so. My point is that this isn't parsimonious reasoning, it's working backwards from a conclusion and searching for every opportunity to fit the available evidence into a narrative that supports it.
> The roots of predictive coding theory extend back to the 1860s.
This is just another example of metaphorical parallels overstating meaningful connections. Just because next-token-prediction and predictive coding have the word "predict" in common doesn't mean the two are at all related in any practical sense.
You, and OP, are taking an analogy way too far. Yes, humans have the mental capability to predict words similar to autocomplete, but obviously this is just one out of a myriad of mental capabilities typical humans have, which work regardless of text. You can predict where a ball will go if you throw it, you can reason about gravity, and so much more. It’s not just apples to oranges, not even apples to boats, it’s apples to intersubjective realities.
I feel the link between humans and autocomplete is deeper than just an ability to predict.
Think about an average dinner party conversation. Person A talks, person B thinks about something to say that fits, person C gets an association from what A and B said and speaks...
And what are people most interested in talking about? Things they read or watched during the week perhaps?
Conversations did not have to be like this. Imagine a species from another planet whose "conversation" consisted of each party simply communicating whatever it most needed to say, or was most beneficial to say, and saying it. And where the chance of bringing up a topic had no correlation at all with what the previous person said (why should it?) or with what was in the newspapers that week. And who had no "interest" in the association game.
Humans saying they are not driven by associations is to me a bit like fish saying they don't notice the water. At least MY thought process works like that.
I don't think I am. To be honest, as ideas go, and as I swirl it around that empty head of mine, this one ain't half bad, given how much immediate resistance it generates.
Other posters already noted other reasons, but I will point out that by saying 'similar to autocomplete, but obviously' you seem to recognize the shape and then immediately dismiss it as not the same, because the shape you know in humans is much more evolved and can do more things. Ngl man, as arguments go, that sounds to me like supercharged autocomplete that was allowed to develop over a number of years.
Fair enough. To someone with a background in biology, it sounds like an argument made by a software engineer with no actual knowledge of cognition, psychology, biology, or any related field, jumping to misguided conclusions driven only by shallow insights and their own experience in computer science.
Or in other words, this thread sure attracts a lot of armchair experts.
> with no actual knowledge of cognition, psychology, biology
... but we also need to be careful with that assertion, because humans do not understand cognition, psychology, or biology very well.
Biology is the furthest developed, but it turns out to be like physics -- superficially and usefully modelable, but fundamental mysteries remain. We have no idea how complete our models are, but they work pretty well in our standard context.
If computer engineering is downstream from physics, and cognition is downstream from biology ... well, I just don't know how certain we can be about much of anything.
> this thread sure attracts a lot of armchair experts.
"So we beat on, boats against the current, borne back ceaselessly into our priors..."
Look up predictive coding theory. According to that theory, what our brain does is in fact just autocomplete.
However, what it is doing is layered autocomplete on itself. I.e. one part is trying to predict what the other part will be producing and training itself on this kind of prediction.
What emerges from these layers of autocomplete is what we call thought.
First: a selection mechanism is just a selection mechanism, and it shouldn't be confused with the emergent, tangential capabilities we observe.
You probably believe that humans have something called intelligence, but the pressure that produced it - the likelihood that specific genetic material replicates - is far more tangential to intelligence than next-token prediction is.
I doubt many alien civilizations would look at us and say "not intelligent - they're just genetic information replication on steroids".
Second: modern models also undergo a ton of post-training now: RLHF, mechanized fine-tuning on specific use cases, etc. It's just not correct that the token-prediction loss function is "the whole thing".
> First: a selection mechanism is just a selection mechanism, and it shouldn't be confused with the emergent, tangential capabilities we observe.
Invoking terms like "selection mechanism" is begging the question, because it implicitly likens next-token-prediction training to natural selection, when in reality the two are so fundamentally different that the analogy only has metaphorical meaning. Even at a conceptual level, gradient descent gradually homing in on a known target is comically trivial compared to the blind filter of natural selection sorting out the chaos of chemical biology. It's like comparing Legos to DNA.
> Second: modern models also undergo a ton of post-training now: RLHF, mechanized fine-tuning on specific use cases, etc. It's just not correct that the token-prediction loss function is "the whole thing".
RL is still token prediction; it's just a technique for adjusting the weights to align with preferences you can't express as a loss function during pre-training. When RL rewards good output, it increases the statistical strength of the model toward an arbitrary purpose, but ultimately what is achieved is still a brute-force quadratic lookup over every token in the context.
I use an enterprise LLM provided by work, on a very proprietary codebase in a semi-esoteric language. My impression is that it is still a very big autocompletion machine.
You still need to hand-hold it all the way, as it is only capable of regurgitating the small number of code patterns it has seen in public for that language, as opposed to, say, a Python project.
But regardless, I don’t think anyone is claiming that LLMs can magically do things that aren’t in their training data or context window. Obviously not: they can’t learn on the job and the permanent knowledge they have is frozen in during training.
As someone who still might have a '2023 take on LLMs', even though I use them often at work, where would you recommend I look to learn more about what a '2025 LLM' is, and how they operate differently?
LLMs are a general purpose computing paradigm. LLMs are circuit builders, the converged parameters define pathways through the architecture that pick out specific programs. Or as Karpathy puts it, LLMs are a differentiable computer[1]. Training LLMs discovers programs that well reproduce the input sequence. Roughly the same architecture can generate passable images, music, or even video.
The sequence of matrix multiplications is the high-level constraint on the space of discoverable programs. But the specific parameters discovered are what determine the specifics of information flow through the network, and hence what program is defined. The complexity of the trained network is emergent, meaning the internal complexity far surpasses that of the coarse-grained description of the high-level matmul sequences. LLMs are not just matmuls and logits.
Notice that the Rule 110 string picks out a machine; it is not itself the machine. To get computation out of it, you have to actually do computational work, i.e. compare the current state and perform operations to generate the subsequent state. This doesn't just automatically happen in some non-physical realm once the string is put to paper.
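To make that concrete, here is a minimal Rule 110 sketch (Python, toy sizes): the rule number by itself is just an 8-entry lookup table, and no computation happens until something repeatedly applies it to a state.

# Rule 110 as a lookup table: bit i of 110 is the output for the 3-cell neighborhood
# whose bits spell out i. The table does nothing on its own.
RULE = 110
table = {tuple(map(int, f"{i:03b}")): (RULE >> i) & 1 for i in range(8)}

def step(cells):
    # The actual computational work: read each neighborhood, produce the next state.
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

state = [0] * 31 + [1]            # single live cell
for _ in range(16):               # the "machine" only exists because of this loop
    print("".join("#" if c else "." for c in state))
    state = step(state)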
For someone speaking as if you knew everything, you appear to know very little. Every LLM completion is a "hallucination"; some of them just happen to be factually correct.
> LLMs are just seemingly intelligent autocomplete engines
Well, no, they are training set statistical predictors, not individual training sample predictors (autocomplete).
The best mental model of what they are doing might be that you are talking to a football stadium full of people, where everyone in the stadium gets to vote on the next word of the response being generated. You are not getting an "autocomplete" answer from any one coherent source, but instead a strange composite response where each word is the result of different people trying to steer the response in different directions.
An LLM will naturally generate responses that were not in the training set, even if ultimately limited by what was in the training set. The best way to think of this is perhaps that they are limited to the "generative closure" (cf mathematical set closure) of the training data - they can generate "novel" (to the training set) combinations of words and partial samples in the training data, by combining statistical patterns from different sources that never occurred together in the training data.
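A toy way to see that "generative closure" idea (Python; the two-sentence corpus is deliberately silly): a model that only ever follows transitions observed in training can still emit a sentence that appears in neither training sentence, but never one built from transitions it has never seen.

import random
from collections import defaultdict

# Two-sentence "training set"; the model only learns which word follows which.
corpus = ["the cat sat on the mat", "the dog sat on the rug"]
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

# Generate by repeatedly choosing among the observed continuations of the current word.
word, out = "the", ["the"]
while word in follows and len(out) < 6:
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))  # runs vary; "the cat sat on the rug" is possible, yet it is in neither sentence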
If you have 2 known mountains (domains of knowledge) you can likely predict there is a valley between them, even if you haven’t been there.
I think LLMs can approximate language topography based on known surrounding features so to speak, and that can produce novel information that would be similar to insight or innovation.
I’ve seen this in our lab, or at least, I think I have.
Respectfully, you're not completely wrong, but you are making some mistaken assumptions about the operation of LLMs.
Transformers allow for the mapping of a complex manifold representation of causal phenomena present in the data they're trained on. When they're trained on a vast corpus of human generated text, they model a lot of the underlying phenomena that resulted in that text.
In some cases, shortcuts and hacks and entirely inhuman features and functions are learned. In other cases, the functions and features are learned to an astonishingly superhuman level. There's a depth of recursion and complexity to some things that escape the capability of modern architectures to model, and there are subtle things that don't get picked up on. LLMs do not have a coherent self, or subjective central perspective, even within constraints of context modifications for run-time constructs. They're fundamentally many-minded, or no-minded, depending on the way they're used, and without that subjective anchor, they lack the principle by which to effectively model a self over many of the long horizon and complex features that human brains basically live in.
Confabulation isn't unique to LLMs. Everything you're saying about how LLMs operate can be said about human brains, too. Our intelligence and capabilities don't emerge from nothing, and human cognition isn't magical. And what humans do can also be considered "intelligent autocomplete" at a functional level.
What cortical columns do is next-activation predictions at an optimally sparse, embarrassingly parallel scale - it's not tokens being predicted but "what does the brain think is the next neuron/column that will fire", and where it's successful, synapses are reinforced, and where it fails, signals are suppressed.
Neocortical processing handles learning, modeling, and predicting across a wide multimodal, arbitrary-depth, long-horizon domain, which allows us to learn words and writing and language and coding and rationalism and everything else we do. We're profoundly more data-efficient learners, and massively parallel, amazingly sparse processing allows us to pick up on subtle nuance and amazingly wide and deep contextual cues in ways that LLMs are structurally incapable of, for now.
You use the word hallucinations as a pejorative, but everything you do, your every memory, experience, thought, plan, all of your existence is a hallucination. You are, at a deep and fundamental level, a construct built by your brain, from the processing of millions of electrochemical signals, bundled together, parsed, compressed, interpreted, and finally joined together in the wonderfully diverse and rich and deep fabric of your subjective experience.
LLMs don't have that, or at best, only have disparate flashes of incoherent subjective experience, because nothing is persisted or temporally coherent at the levels that matter. That could very well be a very important mechanism and crucial to overcoming many of the flaws in current models.
That said, you don't want to get rid of hallucinations. You want the hallucinations to be valid. You want them to correspond to reality as closely as possible, coupled tightly to correctly modeled features of things that are real.
LLMs have created, at superhuman speeds, vast troves of things that humans have not. They've even done things that most humans could not. I don't think they've done things that any human could not, yet, but the jagged frontier of capabilities is pushing many domains very close to the degree of competence at which they'll be superhuman in quality, outperforming any possible human for certain tasks.
There are architecture issues that don't look like they can be resolved with scaling alone. That doesn't mean shortcuts, hacks, and useful capabilities won't produce good results in the meantime, and if they can get us to the point of useful, replicable, and automated AI research and recursive self improvement, then we don't necessarily need to change course. LLMs will eventually be used to find the next big breakthrough architecture, and we can enjoy these wonderful, downright magical tools in the meantime.
And of course, human experts in the loop are a must, and everything must be held to a high standard of evidence and review. The more important the problem being worked on, like a law case, the more scrutiny and human intervention will be required. Judges, lawyers, and politicians are all using AI for things that they probably shouldn't, but that's a human failure mode. It doesn't imply that the tools aren't useful, nor that they can't be used skillfully.
I'll admit to sending a couple of the messages that made Linksys routers restart. I also set up automatic k-lines on Snoonet for these very strings, years ago
You definitely don't understand PDFs, let alone SVGs.
PDFs can also contain scripts. Many applications have had issues rendering PDFs.
Don't get me wrong, the folks creating the SVG standard should've used their heads. This is something like the 5th time (that I am aware of) this type of issue has happened (and at least 3 of those were Adobe's). Allowing executable code in an image/page format shouldn't be a thing.
I know hearing this gets old, but please review sources outside of LLMs for accuracy. LLMs take a whole bunch of stuff from all over the internet and distill it down to something you can consume. Those sources include everything from Reddit to a certain de-wormer that folks still think treats COVID (side note: there are a few long-COVID victims in a support group I'm in, and they are not happy about the disinfo that was spread, at any rate)... LLMs/"AI" do not and cannot innovate; they can only take all the existing information they know, mash it together, and present you with a result according to what the model was trained on.
I'm not against AI summaries being on HN, however, users should verify and cite sources so others can verify.
However, I'm just a normal nerd that wants to fact check stuff. Perhaps I'm wrong in wanting to do this. We'll see.
I have significant experience in polymer chemistry. As an experiment, I decided to ask Gemini some very specific questions to try to back it into a corner, so to speak. It blew me away with its answers, discussing quite a bit of info I was not even aware of.
> I'm not against AI summaries being on HN, however, users should verify and cite sources so others can verify.
I don't see how they contribute anything to a discussion. Even a speculative comment organically produced is more worthwhile than feeding a slop machine back into itself. I don't go out for coffee to discuss LLM summaries with friends, and I can't imagine why anyone would want to do that here.
Earlier today I asked Gemini Pro to find information on a person's death that was turning up nothing for me otherwise, and it just imagined finding verbatim Obituary quotes in every source, cobbled together vaguely related names, plausible bits and pieces from wherever, almost like it was 2023 again.
It ain't search, and it ain't worthwhile; I'd much rather someone ask an LLM the question and then post a question here out of curiosity based on it, but without the summary itself.
> It is search if you ask it to produce a list of links.
Not in the example I mentioned. It can imagine the links, and the content of the links, and be very confident about it. It literally invented an obituary that didn't exist, gave me a link to a funeral home that 404'd, came up with "in-memoriam" references from regional newsletters that never contained her name. It's actually really scary how specifically fake it was.
I asked it to produce verbatim references from any sources and the links to them, and none of the text it produced could be searched with quotes on any search engine.
I think that's the tricky thing. I'm not saying it's not useful when it is, but you really do need a keen and skeptical eye to be able to know. The problem reminds me of Bloom filters, which are useful when you want to know that something might exist or definitely does not exist in a set. Exact truth has some permissible error rate, as it does in any situation, but "definitely wrong" is pretty important to know about.
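For anyone who hasn't bumped into them, here is a minimal Bloom filter sketch (Python; the sizes and item names are toy values) showing the one-sided error I mean: "maybe present" can be wrong, "definitely absent" never is.

from hashlib import sha256

SIZE, HASHES = 1024, 3
bits = [False] * SIZE

def positions(item):
    # Derive a few bit positions per item from independent-ish hashes.
    return [int(sha256(f"{i}:{item}".encode()).hexdigest(), 16) % SIZE for i in range(HASHES)]

def add(item):
    for p in positions(item):
        bits[p] = True

def might_contain(item):
    # True means "maybe present" (false positives possible); False means "definitely not".
    return all(bits[p] for p in positions(item))

add("obituary-from-real-newspaper")
print(might_contain("obituary-from-real-newspaper"))  # True: maybe present
print(might_contain("fabricated-source"))             # almost certainly False: definitely absent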
The issue as I see it is just straight copy/pasting its output. You want to use it as a search tool to give you pointers on things to look up and links to read? Great. Then use that as a basis to read the sources and write your own response. If you aren't familiar enough with the subject area to do that, then you also shouldn't be pasting LLM output on it.
It's not even copy/pasting in some cases. In my example, it confidently produced "verbatim" references that don't exist anywhere, pointing to specific pages that never mentioned this person's name or contained any of the text. Sometimes completely different people, 404 pages, a huge waste of time.
Yeah I agree, I've seen the hallucinated references also. Sometimes used by people in internet arguments to make their bullshit seem more legitimate. What I meant though by copy/pasting is people getting the LLM output and then just directly feeding that to the people they're conversing with, instead of looking into what it's saying or really even engaging with it in any way.
Ask it to solve a tough Euler Math puzzle with the search button on and it just copies the answer from the web. Turn search off and it actually computes the answer.
Funny how the search button is taken away though.
Which is "fine" so to speak. We do this with using AIs for coding all the time, don't we? As in, we ask it to do things or tell us things about our code base (which we might be new to as well) but essentially use it as a "search engine+" so to speak. Hopefully it's faster and can provide some sort of understanding faster than we could with searching ourselves and building a mental model while doing it.
But we still need to ask it for, and then follow, file and line-number references (aka "links"), verify they're true and that it got the references right, and build enough of a mental model ourselves. With code (at least for our code base) it usually does get the references right, and I can verify them. I might be biased because I both know our code base very well already (though not everything in detail) and I'm a very suspicious person, questioning everything. With humans that sometimes "drives them crazy", but the LLM doesn't mind when I call out its BS over and over. I'm always "right" :P
The problem is when you just trust anything it says. I think we need to treat it like a super junior that's trained to very convincingly BS you if it's out of its depth. But it's still great to have said junior do your bidding while you do other things and faster than an actual junior and this junior is available 24/7 (barring any outages ;)).
I've had quite good luck asking Gemini and ChatGPT to include links to research papers for every claim they make. Not only can I review at least the abstracts, but I find that when I do this they'll retract some of the hallucinations they made in prior messages. It almost seems as if they reread the content they include (and maybe they do, via their web-search tools). This greatly reduces errors with minimal extra effort on my part.
I definitely disagree here. What matters for mobile is power consumption. Capabilities can be pretty easily implemented...if you disagree, ask Apple. They have seemingly nailed it (with a few unrelated limitations).
Mobile vendors insisting on using closed, proprietary drivers that they refuse to constantly update/stay on top of is the actual issue. If you have a GPU capable of cutting edge graphics, you have to have a top notch driver stack. Nobody gets this right except AMD and NVIDIA (and both have their flaws). Apple doesn't even come close, and they are ahead of everyone else except AMD/NVIDIA. AMD seems to do it the best, NVIDIA, a distant second, Apple 3rd, and everyone else 10th.
> If you have a GPU capable of cutting edge graphics, you have to have a top notch driver stack. Nobody gets this right except AMD and NVIDIA (and both have their flaws). Apple doesn't even come close, and they are ahead of everyone else except AMD/NVIDIA. AMD seems to do it the best, NVIDIA, a distant second, Apple 3rd, and everyone else 10th.
It is quite telling, as to how good their iGPUs are at 3D, that no one even counts them in.
I remember a time, about 15 years ago, when they were famous for reporting OpenGL capabilities as supported when those were actually only available via software rendering, which defeated any purpose of using such features in the first place.
APUs/iGPUs are compared, and here Intel's integrated GPUs seem to be very competitive with AMD's APUs.
---
You of course have to compare dedicated graphics cards with each other, and similarly for integrated GPUs, so let's compare (Intel's) dedicated GPUs (Intel Arc), too:
the current Intel Arc generation (Intel Arc B-series, "Battlemage") seems to be competitive with the entry-level GPUs from NVIDIA and AMD; i.e., you can get much more powerful GPUs from NVIDIA and AMD, but for a much higher price. I thus clearly would not call Intel's dedicated GPUs so bad "at 3D that no one counts them in".
Mobile is getting RT, FYI. Apple already has it (for a few generations, at least); I think Qualcomm does as well (I'm less familiar with their stuff because they've been behind the game forever, but the last I read, their latest parts have it), and things are rapidly improving.
Vulkan is the actual barrier. On Windows, DirectX does an average job at supporting it. Microsoft doesn't really innovate these days, so NVIDIA largely drives the market, and sometimes AMD pitches in.
I think the big issue is that there is no 'next-gen API'. Microsoft has largely abandoned DirectX, Vulkan is restrictive as anything, Metal isn't changing much beyond matching DX/Vk, and NVIDIA/AMD/Apple/Qualcomm aren't interested in (re)-inventing the wheel.
There are some interesting GPU improvements coming down the pipeline, like a possible OoO part from AMD (if certain credible leaks are valid), however, crickets from Microsoft, and NVIDIA just wants vendor lock-in.
Yes, we need a vastly simpler API. I'd argue even simpler than the one proposed.
One of my biggest hopes for RT is that it will standardize like 80% of stuff to the point where it can be abstracted to libraries. It probably won't happen, but one can wish...
Really? I've never had it fail. I simply ran the script provided by LE, it set everything up, and it renewed every time until I took the site down for unrelated (financial) reasons. Out of curiosity, when did you last use LE? Did you use the script they provided or a third-party package?
I set it up ages ago, maybe before they even had a script. My setup is dead simple: A crontab that runs monthly:
0 2 1 * * /usr/local/bin/letsencrypt-renew
And the script:
#!/bin/sh
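# 'certbot renew' only replaces certs that are close to expiry (within ~30 days),
# so on most runs the restarts below just reload the same certs from disk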
certbot renew
service lighttpd restart
service exim4 restart
service dovecot restart
... and so on for all my services
That's it. It should be bulletproof, but every few renewals I find that one of my processes never picked up the new certificates and manually re-running the script fixes it. Shrug-emoji.
I don't know how old "letsencrypt-renew" is and what it does. But you run "modern" acme clients daily. The actual renewal process starts with 30 days left. So if something doesn't work it retries at least 29 times.
I haven't touched my OpenBSD (HTTP-01) acme-client in five years:
acme-client -v website && rcctl reload httpd
My (DNS-01) LEGO client sometimes has DNS problems. But as I said, it will retry daily and work eventually.
I wasn't making fun of you. It wasn't obvious that's what you meant at all, because you said you didn't know "what it does". I'm sure you know what certbot does, so I thought you misinterpreted the post.
Yes, same for me. Every few months some kind internet denizen points out to me that my certificate has lapsed; running the script manually usually fixes it. LE software is pretty low quality; I've had multiple issues over the years, some of which culminated in entire systems being overwritten by LE's broken Python environment code.
If it's happening regularly wouldn't it make sense to add monitoring for it? E.g. my daily SSL renew check sanity-checks the validity of the certificates actually used by the affected services using openssl s_client after each run.
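In case it's useful, here is a rough Python equivalent of that openssl s_client check (standard library only; the hostname list and the 14-day warning threshold are placeholders). It looks at the certificate each service is actually serving, which is exactly what catches the "daemon never reloaded the new cert" failure mode:

import socket
import ssl
import time

def days_left(host, port=443):
    # Handshake with the live service and read the certificate it presents.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expiry = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expiry - time.time()) // 86400)

for host in ("www.example.com",):
    remaining = days_left(host)
    print(f"{host}: {remaining} days of validity left" + (" -- WARN" if remaining < 14 else ""))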