I have also really been enjoying these lectures. Sarah is quick-witted and insightful. I recommend the Dwarkesh podcast to anyone interested in AI in general (though the Sarah Paine lectures are completely unrelated).
Generative AI is going to drive the marginal cost of building 3D interactive content to zero. Unironically this will unlock the metaverse, cringe as that may sound. I'm more bullish than ever on AR/VR.
I can only speak for myself, but a Metaverse consisting of infinite procedural slop sounds about as appealing as reading infinite LLM generated books, that is, not at all. "Cost to zero" implies drinking directly from the AI firehose with no human in the loop (those cost money) and entertainment produced in that manner is still dire, even in the relatively mature field of pure text generation.
I think the biggest issue with Stable Diffusion based approaches has always been poor compositional ability (putting stuff where you want it), and compounding anatomical/spatial errors that gave the images an off-putting vibe.
All these problems are trivially solvable (solved) using traditional 3D meshes and techniques.
The issue with composition is only a problem when you rely on a pure text prompt, but has been solved for quite a while by ControlNets or img2img. What was lacking was the integration with existing art tools, but even that is getting solved, e.g. Krita[1] has a pretty nice AI plugin.
3D can be a useful intermediate when editing the 2D image, e.g. Krea has support for that[2]. But I don't think the rest of the traditional 3D pipeline is of much use here; AI image generation already produces images at a quality that traditional rendering just can't keep up with, whether in terms of speed, quality, or flexibility.
Wow, those look impressive. But I think we are saying the same thing: Stable Diffusion can make pretty pics, but needs a lot of hand-holding context. I too have played around with ComfyUI, and while there are a LOT of techniques that allow you to manipulate the image, I have always felt like you were fighting SD.
In the videos you've attached, both tools, especially the first, look impressive. But in the first example, you can clearly see that the model regenerates the street around the chameleon, for no good reason, when the artist changes it.
In the second example you can see there's a bunch of AI tools under the hood, and they don't work together particularly well, with the car constantly changing as the image changes.
I think a lot of mileage can be extracted from SD as it stands (I can think of a bunch of improvements to what was demonstrated here just by applying existing techniques), but the fundamental issue remains: Stable Diffusion was made to generate whole images at once, unlike transformers, which output a single token at a time.
Not sure what the image equivalent of a token is, but I'm sure it'd be feasible to train a model to fill holes (which could be created by Segment Anything or something similar), and it would react better to local edits.
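The "fill holes" idea above is essentially inpainting: generate pixels only inside a mask, then composite them back over the untouched image so a local edit can't disturb anything outside the hole. A minimal sketch of that compositing step, with a toy stand-in for the actual generative model (all names here are illustrative, not any particular library's API):

```python
import numpy as np

def composite_inpaint(image, mask, generated):
    """Blend generated pixels into image only where mask == 1.

    image, generated: float arrays of shape (H, W, C)
    mask: float array of shape (H, W); 1 inside the hole, 0 outside.
    Pixels outside the mask are guaranteed untouched, which is exactly
    the local-edit consistency the comment asks for.
    """
    m = mask[..., None]  # broadcast the mask over the channel axis
    return generated * m + image * (1.0 - m)

# Toy example: a 4x4 grey image, a 2x2 hole, and a "model" that just
# fills the hole with white.
image = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
generated = np.ones((4, 4, 3))  # pretend this came from the model

result = composite_inpaint(image, mask, generated)
assert np.all(result[0, 0] == 0.5)  # outside the hole: unchanged
assert np.all(result[1, 1] == 1.0)  # inside the hole: regenerated
```

Real diffusion inpainting pipelines apply this same masked blend at every denoising step (usually in latent space rather than pixel space), and the mask could indeed come from a segmentation model like Segment Anything.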
But not consistent state. The pipeline still needs to exist because most games require objects and environments to stay consistent across play sessions. That means generating from a 3D skeleton, at the very least, if not relegating genAI to production, not runtime.
I have tried the model, and I agree with you on that point. I uploaded a product for a test; the output captures the product quite well, but the text on the generated 3D model is unreadable.
IMO current generation models are capable of creating significantly better than "slop" quality content. You need only look at NotebookLM output. As models continue to improve, this will only get better. Look at the rate of improvement of video generation models in the last 12-24 months. It's obvious to me we're rapidly approaching acceptable or even excellent quality on-demand generated content.
I feel like you're conflating quality with fidelity. Video generation models have better fidelity than they did a year ago, but they are no closer to producing any kind of compelling content without a human directing them, and the latter is what you would actually need to make the "infinite entertainment machine" happen.
The fidelity of a video generation model is comparable to an LLM's ability to nail spelling and grammar: it's a start, but there's more to being an author than that.
I feel like text models are already at a sufficiently entertaining and useful quality as you define it. It's definitely possible we never get there for video or 3D modalities, but I think there are strong enough economic incentives such that big tech will dump tens of billions of dollars into achieving it.
I don't know why you think that's the case regarding text models. If that were the case, there would be articles on here created entirely by generative AI and nobody would know the difference. It's pretty obvious that's not happening yet, not least because I know what kinds of slop state-of-the-art generative models still produce when you give them open-ended prompts.
Ironic how this comment exemplifies the issue - broad claims about "slop" output but no specific examples or engagement with current architectures. Real discussions here usually reference benchmarks or implementation details.
You're sort of ignoring the issue? If the generated content was good and interesting enough on its own, we would already have AI publishing houses pushing out entire trilogies, and each of those would be top sellers.
Generative content right now is OK. OK isn't really the goal, or what anyone wants.
First it was AI articles; raising the bar to entire successful book trilogies seems like a much bigger leap. Even with the largest context windows, an entire trilogy wouldn't fit directly, and there is much less fiction of that length to train on than there is for essays and articles.
I don't think it is there yet for articles either.
My point with the Claude-generated comment was that maybe it could get pretty close to something like an HN comment.
I feel like this is missing the point of GenAI. I read fewer books than I did a year ago, primarily because Claude will provide dynamic content that is exactly tailored for me. I don't read many instructional books any more, because I can tell Claude what I already know about a topic and what I'd like to know and it'll create a personalised learning plan. If I don't understand something, it can re-phrase things or find different metaphors until I do. I don't read self-help books written for a general audience, because I can get personalised advice based on my specific circumstances and preferences.
The idea of a "book" is really just an artifact of a particular means of production and distribution. LLM-generated text is a categorically different thing from a book, in the same way as a bardic poem or hypertext.
NotebookLM is still slop. I recommend feeding it your resume and any other online information about you. It's kind of fun to hear the hosts butter you up, but since you know the subject well you will quickly notice that it is not faithful to the source material. It's just plausibly misleading.
I can only speak for myself, but a large and growing proportion of the text I read every day is LLM output. If Claude and Deepseek produce slop, then it's a far higher calibre of slop than most human writers could aspire to.
What kind of text are you reading? Do you work in LLM development? Or are you just noticing that many news sites are using LLMs more and more?
I've noticed obvious LLM output on low quality news sites, but I don't tend to read them anyway. Maybe all the comments I read are from LLMs and I just don't realise?
Perplexity is now my default search engine. If I'm doing research, I use LLMs to summarise documents or scan through them to find relevant excerpts. If I'm doing general background reading on something, I'll ask an LLM for an explainer; likewise if I've read about one particular thing and want to understand the broader context around it. If I'm thinking through a problem, I'll bat the idea around with Claude or Deepseek, asking them to provide alternative perspectives.
It's quite difficult to analogise because LLMs are so profoundly novel, but the best I can do is that it's like having an infinitely patient and extremely knowledgeable assistant. That assistant isn't omniscient or infallible, but it's extremely useful because it tends to provide the information that I want, presented in a way that's particularly relevant to me. That requires a certain amount of rapport-building - understanding the characteristics of various models, learning to ask good questions, guiding the model towards my preferences with customised system prompts - but the effort pays off handsomely.
Object permanence and a communications channel is enough for this. Give children (who get along with each other) a pile of sticks and leave them alone for half an hour, and there's half a chance their game will ignore the sticks. Most children wouldn't want to have their play mediated by the computer in the way you describe, because the ergonomics are so poor.
I'm reminded of that guy who bought an AI enabled toy for his daughter and got increasingly exasperated as she kept turning it off and treating it as a normal toy.
That thread has a lot of good observations in it. I was probably wrong in framing the problem as "ergonomics".
> Dr. Michelle (@MichelleSaidel): I think because it takes away control from the child. Play is how children work through emotions, impulses and conflicts and well as try out new behaviors. I would think if would be super irritating to have the toy shape and control your play- like a totally dominating playmate!
> Alex Volkov (Thursd/AI) (@altryne): It did feel dominating! she wanted to make it clothes, and it was like, "meanwhile, here's another thing we can do" lacking context of what she's already doing
> The Short Straw (@short_straw): The real question you should ask yourself is why you felt compelled to turn it back on each time she turned it off.
> Angelo Angelli JD (@AngelliAngelo): Kids are pretty decent bullshit detectors and a lot of AI is bullshit.
> Foxhercules (@Foxena): […] I would like to point out again that the only things I sent this child were articulated 3d prints. beyond being able to move their arms, legs and tails, these things were made out of extruded plastic and are not exactly marvels of engineering. […] My takeaway from this is that, this is what children need. they don't need fancy with tons of bells and whistles with play on any sort of rails. And there's not a thing that AI can do to replace a Child's imagination NOR SHOULD IT.
The majority of American children have an active Roblox account. Those who don't are likely to play Minecraft or Fortnite. Play mediated by the computer in this way is already one of the most popular forms of play. Kids are going to go absolutely nuts for this and if you think otherwise, you really need to talk to some children.
Not all procedurally generated things are slop, and not all slop is made via procedural generation.
And popularity has nothing to do with the private, subjective quality evaluations of an individual (what someone calls slop might be Picasso to another), but with objective, public evaluations of the product via purchases.
I was thinking about this, and the definition I came up with for slop is 'aspirational and highly detailed content that resolves its details in an uninteresting or nonsensical way'.
For example, an AI picture of a bush is not slop, because we don't expect much from a picture of a bush (not aspirational).
A hand-drawn picture of a knight in armor by an enthusiastic, but not very skilled artist is not slop either - it has tons of details that resolve in an interesting way, and what it lacks in details, it allows the viewers to fill in for themselves.
A 'realistic' knight generated by AI is slop - it contains no imaginative detail, and allows very little room for personal interpretation, and it's not rewarding to view.
Slop doesn't need to be AI - creatively bankrupt overproduced garbage counts as slop in my mind as well.
> 'aspirational and highly detailed content that resolves its details in an uninteresting or nonsensical way'
This is a great definition. All the AI text I read is somehow missing the "meat" you find in good writing. All the right parts are there, but the core idea that makes me interested is just missing.
It's pretty much the same thing Linkedin has been full of for years. No one can bear to say anything controversial, so it's all just empty platitudes and junk.
Procgen has nothing to do with AI in terms of slop, for a good reason: procedural generation algorithms are heavily tuned by authors, exactly to avoid the “dull, unoriginal and repetitive” aspect that AI produces.
Star Trek's Holodeck is actually a good case study here, especially with the recent series Lower Decks going as far as making two episodes that are interactive movies on a holodeck, and digging quite deep into how that could work in practice, both in producing and experiencing them.
One observation derived here is that infinite procedural content at your fingertips doesn't necessarily kill all meaning, if you bring the meaning with you. The two major use cases[0] for the holodeck are:
- Multiplayer scenarios in which you and your friends enjoy some experience in a program. The meaning is sourced from your friendship and roleplay; the program may be arbitrary output of an RNG in the global sense, but it's the same for you and your friends, so shared experience (and its importance as a social object) in your group is retained.
- Single-player simulations that are highly specific. The meaning here comes from whatever is the reason you're simulating that particular experience, and its connection to the real world. Like, idk, a flight simulator of a random space fighter flying over a random world shooting at random shit would quickly get boring, but if I can get the simulator to give me a highly accurate cockpit of an F/A-18 Hornet, flying over real terrain and shooting at realistic enemies in a realistic (even if fictional) storyline, now that would be deeply meaningful to me, because 1) the F/A-18 Hornet is a real plane that I would otherwise never experience flying, and 2) I have a crush on this particular fighter because F/A-18 Hornet 3.0 is one of the first videogames I ever played in my life as a kid.
Now, to make Metaverse less like bullshit and more like Star Trek, we'd need to make sure the world generation is actually available to the users. No asset stores, no app marketplace bullshit. We live in a multimodal LLM era - we already have all the components to do it like Star Trek did it: "Computer, create a medieval fantasy village, in style of England around year 1400, set next to a forest, with tall mountains visible in the distance", then walk around that world and tweak the defaults from there.
--
[0] - Ignoring the third use case that's occasionally implied on the show, and that's really obvious given it's the same one the Internet is for - and I'm not talking about cat pictures.
I think you’re being short sighted. Imagine feeding in your favorite TV shows to a generative AI and being able to walk around in the world and talk to characters or explore it with other people.
Yes, because if someone has a tool that creates "something incredible", then everyone will be able to generate "something incredible" and then it all becomes not incredible.
It's like having god-mode in a game, it all becomes boring very quickly when you can have whatever you want.
If you follow that reasoning, anything that improves or anything that makes creation easier, produces slop.
Personally I'm not in favor of calling AI output slop, just because it is AI generated. You might then as well say that any electronic music is slop and any food prepared with help of machinery is crap. It might be crap or not, the automatedness is irrelevant.
The outputs of AI that I see today in the form of text, images or video don't look like slop to me.
> everyone will be able to generate "something incredible" and then it all becomes not incredible.
no, that's just your standards moving up.
There is an absolute scale on which you can measure, and AI is approaching a point where it reaches an acceptable level.
Imagine if you applied your argument to quality of life - it used to be that nobody had access to easy, cheap clean drinking water. Now everybody has access to it. Is it not an incredible achievement, rather than it not being incredible just because it is common?
That quote from the movie The Incredibles, where the villain claims that if everybody is super then nobody is, was the gist of your argument. And it is a childish one imho.
It is equally childish to compare the engineering of our modern water and plumbing systems with the automated generation of virtual textured polygons.
People don't get tired of good clean water because we NEED it to survive.
But oh, another virtual world entirely thought up by a machine? Throw it on the pile. We're going to get bored of it, and it will quickly become not incredible.
Plenty of people in the world still drink crappy water, and they survive.
You don't _need_ it, you want it, because it's much more comfortable.
But when something becomes a "need" as you described it, you think of it differently. Just like how you don't _need_ electricity to survive, but it's so ingrained that you now think of it as a need.
> We're going to get bored of it, and it will quickly become not incredible.
Exactly, but I have already said this in my original post: your standards just moved up.
If I could talk to something at the level of Neuro-sama (https://www.twitch.tv/vedal987) I'd be very entertained and it's essentially a matter of time. Hell, I'd love to have something like this as an assistant application as well and I'm not a Cortana/Google Assistant/etc user.
The terrain generation is not the appeal of Minecraft, it’s the game system that lets people level that terrain into a canvas and then build their own stuff on top.
I think it has its place.
For 'background filler' I think it makes a lot of sense; stuff which you don't need to care about, but whose absence can make something feel less real.
To me, this takes the place / augments procedural generation stuff. NPC crowds in which none of the participants are needed for the plot, but in which you can have unique clothing / appearance / lines is not "needed" for a game, but can flesh it out when done thoughtfully.
Recall the lambasting Cyberpunk 2077 got for its NPCs that cycled through a seemingly very limited number of appearances, to the point that you'd see clones right next to each other. This would solve that sort of problem, for example.
> a Metaverse consisting of infinite procedural slop sounds about as appealing as reading infinite LLM generated books
Take a look at the ImgnAI gallery (https://app.imgnai.com/) and tell me: can you paint better and more imaginatively than that? Do you know anyone in your immediate vicinity who can?
Probably your answer is "yes, obviously!" to all the above.
My point: deep learning works and the era of slop ended ages ago except that some people are still living in the past or with some cartoon image of the state of the art.
> "Cost to zero" implies drinking directly from the AI firehose with no human in the loop
No. It means the marginal cost of production tends towards 0. If you can think it, then you can make it instantly and iterate a billion times to refine your idea with as much effort as it took to generate a single concept.
Your fixation on "content without a human directing them" is bizarre and counterproductive. Why is "no human in the loop" a prerequisite for productivity? Your fixation on that is confounding your reasoning.
> Take a look at the ImgnAI gallery (https://app.imgnai.com/) and tell me: can you paint better and more imaginatively than that?
So while I generally agree with you, I think this was a bad example to use: a lot of these are slop, with the kind of AI sheen we've come to glaze over. I'd say less than 20% are actually artistically impressive / engaging / thought-provoking.
There's still plenty of slop in there, and it would be a better gallery if there were a way to filter out anime girls. But definitely more than 20% of it is interesting to me.
The closest similar community of human-made art is this:
Although unfortunately they've decided to allow AI art there too so it makes comparison harder. Also, I couldn't figure out how to get the equivalent list (top/year). But I'd say I find around the same amount interesting. Most human made art is slop too.
I think you fundamentally misunderstand what people use "slop" to describe.
> Most human made art is slop too.
I'm assuming you're using the term "slop" to describe low-quality, unpolished works, or works where the artist has been too ambitious with their skill level.
Let me put it this way:
Every piece of art that is made, is a series of decisions. The artist uses their lived experience, their tastes and their values to create something that's meaningful to them. Art doesn't need to have a high-level of technical expertise to be meaningful to others. It's fundamentally about communication from artists to their audience. To this point, I don't believe there's such a thing as "bad art" (all works have something to say about the artist!).
In contrast, when you prompt an image generator, you're handing over the majority of the decisions to the algorithm. You can put in your subject matter, poses, even add styles, but how much is really being communicated here? Undoubtedly it would require a high level of technical skill to render similarly by hand, but that's missing the forest for the trees: what is the image saying? There's a reason why most "good" AI-generated images generally have a lot of human curation and editing.
As a side note, here's a human-made piece that I appreciate a lot. https://i.imgur.com/AZiiZj1.jpeg
The longer you explore it, the more the story unfolds, it's quite lovely. On the other hand, when I focus on the details in AI-generated works, there's not much else to see.
> I think you fundamentally misunderstand what people use "slop" to describe.
I don't think I do, actually. It's not a term with a technical definition, but in simple terms it means art that is obviously AI, because it has the sheen, weird hands, inconsistencies, weird framing or thematic elements that are hard to describe without an art degree but which we instinctively know is wrong, or is just plain bad.
I used the term slop to describe bad human art too, but I meant something subtly different. It's a term that has been used to describe bad work of all kinds from humans since long before there was AI.
In this case, it's art from humans who are learning what makes good art. You say there's no bad art, and it's a valid viewpoint, but I'd say bad art is when the artist has a clear goal in their mind, but they lack the skills to realize it. Nonetheless, they share it for feedback and approval anyway, and by doing that on a site like DeviantArt they learn and grow as artists. But meanwhile, to me or anyone else who is visiting that site to find "good", meaningful art made by skilled artists, this is slop. Human slop, not AI slop.
> here's a human-made piece that I appreciate a lot
I like your art. I'm glad you made it. What I like most is that it's fun to look at and think about which is what you say you intended. I hope I get to see more of your art.
> To this point, I don't believe there's such a thing as "bad art" (all works have something to say about the artist!).
As a classically trained oil painter, I know for sure there is bad art especially because I've made more than enough bad art for one lifetime.
Bad art begins with a lack of craftsmanship and is exemplified by a poor use of materials/media and forms, or a lack of knowledge of those forms (e.g. poor anatomical knowledge, misunderstanding the laws of perspective), or an overly literal representation of forms (a photograph is better at being literal, for example).
> Here's an example of some "slop" from the AI Art Turing Test […] But it's very clearly AI-generated. Can you figure out why?
It's only "clearly AI-generated" because we know that AI is capable of generating art. If you saw this without that context, you wouldn't immediately say "AI!" Instead, you'd give it the normal critique you'd give a student or colleague. I'd say:
- there's too much repetition of large forms.
- there's an unpleasant hierarchy of values and not enough separation of values.
- The portrait of the human is the focus of the image yet it has been lost in the other forms.
- The composition can improve with more breathing room in the foreground or background which are too busy.
- Here look at this Frazetta!
However, my rudimentary list could just as easily be turned into prompts to be used to refine the image and experiment with variations. And, perhaps you'd consider that to be a human making decisions?
> I like your art. I'm glad you made it. What I like most is that it's fun to look at and think about which is what you say you intended. I hope I get to see more of your art.
> There's still plenty of slop in there, and it would be a better gallery […]
Thanks for sharing your better AI gallery. It's awesome to see.
Your reply clarifies my point even better: I shared a gallery, you evaluated it and shared an even better gallery! Undoubtedly someone else will look at yours today or next year, and say, as you said, "You missed a slop! Here's a better gallery".
My point fundamentally is about the basic capability of the average and even above-average person. As a classically trained amateur painter, I frequently ask myself: "Can I paint a nude figure better than what you've called slop?" As a mathematician I ask: "Can I reason better than this model?"
It is a fixation based on the desire that they themselves shouldn't be rendered economically useless in the future. The reasoning then comes about post-facto from that desire, rather than from any base principle of logic.
Most, if not all, who are somewhat against the advent of AI are like the above in some way or another.
> Now show me the AI write something that's actually good on purpose
The average human can't even write a 3000 word short story that is good "on purpose" even if they tried.
I know because I've participated in many writing workshops.
The real question is: can you?
> AI can write an argument that's bad on purpose
Are you able to recognise good writing? How do I know? For all I know you're the most incompetent reader and writer on the planet. But your skills are irrelevant.
What's relevant is that deep learning is more skilled than the average person. If you're not aware of this you're either a luddite or confused about the state of the art.
The 'strawmanning your opponent' technique is a non-argument, and is effortless to pull off. Surrounding your argument with tons of purple prose (which Claude is good at) does not change that.
Writing a good argument requires 3 things: be logical, be compelling and likeable, and have a solid reputation. It does not require purple prose.
As for good writing, I'm pretty sure Brandon Sanderson's Mistborn trilogy qualifies, which was written with a rather small vocabulary and pedestrian prose, yet is universally praised.
Tbf, I do think Claude Sonnet and SD are impressive, and I think they can aid humans in producing compelling content, but they are not on the level of amateur fiction writers.
Besides, surpassing most humans in an area where most humans are unskilled is not a feat, not even AI companies flex on that.
> Writing a good argument requires 3 things: be logical, be compelling and likeable, and have a solid reputation. It does not require purple prose.
That's a common misconception that young writers have. Their prose is first purple and overwrought, then they overcorrect and try to be Hemingway, then they master the craft and discover that form follows function.
As such, the "purpleness" of prose is not an indictment of any sort except if the style doesn't serve the substance. So yes, purple prose is sometimes required and can be used correctly, just ask James Joyce or Hitchens or remember that first sentence in Lolita, for example.
Furthermore, almost every piece of writing you've probably enjoyed went through an editor or several professional editors. You'd be shocked to read early or even late drafts.
(Also, having a "solid reputation" has f' all to do with whether you can construct a good argument. Wanting that as a prerequisite is what the cool kids used to call "appeal to authority". Anyway ...)
But wtf are we even talking about now?
> Besides, surpassing most humans in an area where most humans are unskilled is not a feat, not even AI companies flex on that.
I don't care what "AI companies flex". What I care about, as a programmer, and as an artist, and as a writer who won a tiny prize in my even tinier and insignificant niche on the planet, is what tools we can build for the average person and what tools I have access to.
If I have a robot that is 50% stronger than me or 10x better read than the average human or 20% better than the average mathematician, that's a huge victory. So yes, surpassing the average human is a feat.
But it's not merely the average human who has been surpassed: the average mathematician (skilled in mathematics) and the average artist (skilled in art) and the average writer have all been surpassed. That is my testable claim. Play with the tools, and see for yourself.
> the fact that you are seriously asking this question says a lot about your taste.
Non sequitur. My sense of taste, or lack of it, is irrelevant.
Questions about "taste" don't matter when the average person doesn't have the craft to produce what they claim they are competent to judge, especially when we're talking about such low-hanging fruit as: "write a short story", "write an essay", "analyse this math problem", "draw an anatomically accurate portrait or nude figure", "paint this still life", "sketch this landscape".
Are you able to make the distinction between taste and craftsmanship?
Then after you are done signalling whatever it is you think you're signalling by vaguely gesturing at your undoubtedly superior sense of taste, perhaps we can talk like adults about what I asked?
Frankly, I think you cannot get past your own delusion about AI, and no argument will change your mind. No one can make you appreciate art properly, and I can only hope one day you will.
> No one can make you appreciate art properly and I can only hope one day you will.
Lmao.
Refer to my other comment for more context, for whatever that is worth (talking with strangers who are eager to judge everyone but themselves is always weird but unavoidable online): https://news.ycombinator.com/item?id=42790853
Jeez I'd love to know what Apple's R&D debt on Vision Pro is, based on current sales to date. I really really hope they continue to push for a headset that's within reach of average people but the hole must be so deep at this point I wouldn't be surprised if they cut their losses.
As Carmack pointed out, the problem with AR/VR right now is not the hardware, it's the software. Until the "VisiCalc" must-have killer app shows up to move the hardware, there is little incentive for general users to make the investment.
> As Carmack pointed out the problem with AR/VR right now - it's not the hardware, it's the software.
The third option is people's expectations for AR/VR itself: it could remain a highly niche and expensive industry, unlikely to grow to the general population.
AR needs a bragging app: something like content you create in the virtual world growing out of your footsteps in the real one, visible on a cellphone but feeling more native with AR goggles.
I'd bet on Meta because XR is Zuckerberg's Moby Dick whereas it is 20% of a 20% priority at GOOG. Meta is watching competitors (Vision Pro) but also keeping an eye on cost conscious consumers. It's so refreshing to see "Big Tech" taking such a pragmatic approach.
I'm hoping they're building a commodity AR/VR operating system -- essentially spatial Android. They've already announced a partnership with Samsung and Qualcomm so I've got to imagine some interesting hardware is coming soon.
Considering they just gutted their AR team I wouldn’t hold my breath on this one. Too bad they couldn’t wait for Apple to launch the Vision Pro, because I bet they’d get a lot more excitement now that the press is out.
It's an interesting test for them. Essentially, it's a free hit - low expectations, few constraints, lots of latitude to throw out prior work, an open-ended opportunity to be creative and innovate, a clear baseline provided by Apple for them to benchmark success against, and little to no regulatory scrutiny. Basically, one of the few opportunities they will ever get for a zero-baggage, green-field project where, if they actually have the talent and the willpower, they could hit something out of the park. Will they do it? Will they not?
Hopefully, they won’t give up and toss it in less than 3 years. The Samsung brand should also be front and center, or consumers won’t have the confidence to buy it.
Meh, I hope they don't chase the VR train and instead focus on making Search actually usable again. It's soooo bad these days and actively getting worse, with ever more ads and SEO crap.
I’ll believe that when they stop destroying search.
Google Cache’s death has been widely reported but the custom date filter just plain stopped rendering on my iPhone and iPad last week. There are some kinds of queries which really only function on Google with that in working order.
Nope, just broke for no apparent reason after working perfectly fine for years and continues to work on the desktop. By this point in time, I've been cynical of Google for longer than I was a fan, but every nail that strikes at the heart of the Google that was cool still hurts to see.
In contrast, buying my Tesla new was about as easy as any Amazon order. I don't know why other car manufacturers don't make D2C a bigger focus for their business.
IIRC dealerships were required because manufacturers were screwing over customers before. So much of modern society is ultimately due to path dependence.
Dealerships were required because the carmakers started consolidating, and dealing with incumbents in a consolidated market is terrible. But it doesn't actually work, because you're still ultimately dealing with them indirectly. The actual solution should have been to break up the Big Three, though of course that happened anyway as foreign competitors came into the market, and the argument for continued bans on direct sales has essentially evaporated.
The exception is service, because that's actually a separate market and not just a middle man on sales in the original market. Prohibiting manufacturers from operating service centers and leaving it to independent mechanics is essentially a ban on vertical integration and still makes a lot of sense. Because service for a particular model of car is in many ways back to being a consolidated market, since that model may need specific parts or tools and you still want to maintain a competitive market for service for customers with that model of car. You also want to make sure mechanics can service cars of multiple makes, since that fosters competition too.
Tesla sells cars to most states D2C so while I agree it is unfortunately illegal in some places, it doesn't seem to be omnipresent. I think mainstream vendors are just slow to sacrifice their relationships with dealers, who are currently 100% of their sales.
This is why it's important to note that the group arguing against the FTC is not the automakers, it's the auto dealers. The NADA is the association of auto dealers, not manufacturers.
I agree, though I think in order to be fair, we need to do an apples-to-apples comparison.
Most people go into a car dealer expecting to haggle. For better or worse, we know they're going to screw around and so we're prepared to do battle. If you go into it with a different mindset, it can be pretty painless most of the time.
Walk in the door and offer them MSRP. Then fill out the DMV registration paperwork, give them your payment, and leave with the car. It won't be quite as smooth as Tesla because you're not pre-filling this out online, but it is still pretty painless.
During the pandemic this wouldn't work because the market was commanding premiums on available stock. That means mark-ups from dealers, but Tesla is hardly immune -- go look at what a Model 3 was priced at later in 2022. The difference is that dealers at least have -some- competition. Tesla just tells you what you must pay.
Because avocado farmers don't want to set up a retail store selling exclusively avocados in each and every little town, they want to sell them in bulk to grocery store chains.
Manufacturer-to-consumer sales make the most sense for big ticket items and things that can be easily sold over the internet. Retail stores can make more sense for perishable goods and things you might want to inspect before purchasing and convenience purchases and things with a high ratio of shipping cost to price etc.
My Polestar purchase was essentially the same way. Polestar put restrictions on what the dealership could actually offer as add-ons (tow hook, overhead rails for skis, and all-weather floor mats; all official add-ons), which meant that I walked in, signed the deal, and was out in about 30 minutes.
Still not as ideal as other countries, but c'est la vie.
One of my all-time favorite records is entitled Crown Shyness. The eponymous track [1] uses this phenomenon as a metaphor for feeling alone while depressed. The branches are so close, but just out of reach. I always thought it was a beautiful analogy.
I suspect for people like Sam who are compulsively ambitious and competitive, it's not about the dollars. It's about winning.
Further, based on anecdotes from friends and Twitter who know Sam personally, I'm inclined to believe he's genuinely motivated by building something that "alters the timeline", so to speak.
AGI is decades if not centuries away. Cranking a plausible sentence generator to be even more plausible will not get there. I do not understand how people suddenly completely lost their minds.
The hype wave really is something else, eh? People are suddenly talking as if these advanced chatbots are on the precipice of genuine AGI that can run any system you throw at it. It's absolute lunacy.