The author conflates individual YouTube channels with "publishers," which is nonsense. The publisher is YouTube. You don't read 10 different "publishers" when you read stories in a newspaper by 10 different journalists. Are we also going to say that watching, say, 20 TikToks from different creators is consuming content from 20 publishers? I sure don't think so. The whole idea of the article seems like nonsense to me. "Spotify for news" is your web browser or the home screen on your phone that has five different news/social media apps. We don't need another layer in between.
I'm surprised how many negative responses this essay has received. If you read it as prescriptive, "Do these steps and you will achieve greatness," then yeah obviously he is skimping on the "why" of it all. But as he says at the very top, this essay is actually descriptive. This is an analysis of how great work comes about, based on looking at many cases of "it." Your own mileage may vary.
For my part, I found it perfectly thought provoking; not a strict roadmap to follow, but a set of observations against which to measure my own experiences and ideas, and see if I can't improve on what works for me. I appreciate anyone who is trying to dig deeper into how human beings can better themselves and create meaning in our indifferent universe.
Most of my writing energy the last couple of months is going into finishing a book project, but when that's done I will have more to write on other topics!
On the topic of the nude statues, I have no evidence for this idea, except: modern toy dolls are often "nude," with the idea children will put clothes on them. Is it possible that the Greeks put different clothes and decorations on their nude base statues, clothes which are now gone, evanescent as the paint that once adorned the marble statues as well?
I wonder if some future civilization will find the clothing store mannequins we throw away today and wonder if that was the best we could do.
Now I'm wondering if David was actually made to stand outside a tunic store, but Michelangelo just got carried away with the simple commission, and the rest of the story about the cathedral was made up.
This first struck me as obvious, but really it's only obvious if you're already deep into generative AI. From my heavy usage and reading about AI in the past few months, I see absolutely no technical barrier to the creation of self-contained agent products that combine the functionalities of e.g. Alexa, GPT-4, Zapier, Wolfram-Alpha, Google, etc. all into one steerable package. It's just a matter of time.
Something I find especially amusing is that, despite the hype here on HN, most people in the world at large have not yet used a generative AI of any kind, even if they've heard about it on the news or social media. Because these things are developing so quickly, I think the first of these "agents" are going to hit the market before most people have even tried something like a ChatGPT. And so the experience of a "normal" person who's not in the loop will be of ~1 year of AI news hype followed by the sudden existence of sci-fi style actual artificial intelligences being everywhere. This will be extremely jarring but ultimately probably very cool for everyone.
> no technical barrier to the creation of self-contained agent products
I really struggle with this idea of agents as the next big thing in AI, not because I disagree with the premise but because we've been here before. I recall vividly sitting in my college apartment back in the 1990s reading a then-current technical book all about how autonomous agents were going to change everything in our lives. In the mid-2000s, several name-brand companies ran national marketing campaigns about agents doing our bidding. Every few years this concept pops up in some new light, and unless I just have a very different concept of what these should look like, it feels like another round on the hype machine.
We had nothing that could rival GPT in the 90s. I think that’s what’s different this time. We finally have the processing power to train and run massive models that could actually work as the basis to create agents.
That's been my initial take. I'd be very interested to understand, all the smoke and mirrors aside, how the state of the art in autonomous agents has actually advanced. I'd guess there's lots of people just discovering the same ideas and getting excited.
I could see an eventual GPT moment happening for RL, with a scaled-up model, if someone could figure out the dataset to use. But that's not what these agents are.
Often when people talk about agents or about how AI is going to take our jobs, my reaction is "How do they interface?" Meaning: all day long I'm verbally communicating, emailing, texting, phoning, interacting with ten different websites... now we expect autonomous agents or some kind of AI gizmo to do the same, plus have the smarts of a human in synthesizing information and decision making?
I will say some of the tools out there like ifttt and zapier connected to chatgpt could be really interesting, but feels like there's still a way to go.
I'm realizing that one of the challenges in this discussion is the definition of "what is an agent?" and "What does it mean to interface with different systems?". Can I plug a chatbot into Slack? Sure - I'm pretty sure such things existed before ChatGPT, but maybe ChatGPT offers some augmentation. Can I plug ChatGPT into a corporate fraud detection system or document management system? Maybe, with enough human work (both regulatory/corporate-politics work and technical work) to build an integration. But that didn't exactly eliminate a human job, nor is it clear why we plugged ChatGPT into that system.
You're making an assessment based on the level of surrounding hype instead of the actual fundamentals. That isn't a very useful signal in either direction.
You are both correctly and incorrectly interpreting my sentiments. Yes - I have very distinct opinions regarding the hype generated by OpenAI around ChatGPT. When I see everyone and their dog talking in the grocery checkout line about it though, it smells a lot like past waves of technology hype, which generally end in bad ways. That doesn't negate that they have made some interesting strides and there may be usefulness in their advances in reinforcement learning / LLM.
I entirely agree; the expressiveness of what you can create by leveraging these tools in concert and building meta-abstractions is hard to convey to people who haven't really dived in deep.
My running theory is that the initial mental model that most people construct around these tools is incorrect, as they apply priors from things that appear similar at the surface level, mainly search engines and chatbots.
One helpful abstraction I've found is to break down what an LLM does in two ways:
1) It can operate as a language calculator. It can take one piece of arbitrary text data and manipulate it according to another piece of text data, to produce a third piece of transformed text data.
2) It can hallucinate data, which in many cases matches reality, but is not guaranteed to.
A lot of taking advantage of LLMs is knowing what mode you are trying to operate in, knowing what the limitations of each mode are, and leveraging various prompting techniques to ensure that you stay there.
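The two modes above can be sketched in code. This is just an illustration of the prompting pattern, not a real client: `transform` and `recall` are hypothetical helper names I'm inventing here, and they only build the prompt strings you would hand to whatever LLM API you use.

```python
def transform(data: str, instruction: str) -> str:
    """Mode 1, the 'language calculator': one piece of text is
    manipulated according to another. Pinning the model to the
    supplied text keeps it out of hallucination territory."""
    return f"Using ONLY the text below, {instruction}\n\n---\n{data}"

def recall(question: str) -> str:
    """Mode 2, open-ended recall: the model answers from whatever it
    'knows', which may or may not match reality, so verify the output."""
    return f"Answer from your own knowledge: {question}"

# Mode 1: safe-ish, grounded in the provided document.
prompt_a = transform("Q3 revenue was $1.2M, up 8% QoQ...",
                     "summarize the result in one sentence.")
# Mode 2: plausible-sounding, but unguaranteed.
prompt_b = recall("What was Acme Corp's Q3 revenue?")
```

The point of the split is that the same model call looks identical from the outside; only the prompt structure tells you which failure modes to watch for.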
>the sudden existence of sci-fi style actual artificial intelligences being everywhere
In some societies, among the very first of these will be AI surveillance, police, and military systems, able to detect and smother any resistance in the cradle. This is not very cool for everyone.
You know how the most convincing lies are told by people who convinced themselves of it first (even when, deep down, they know they are lying)?
Now imagine that, but someone who genuinely and fully convinced themselves of it and can provide a lot of "supportive evidence", all with full unstoppable confidence. That's what an LLM can act as, a perfect gaslighter. Except an LLM itself isn't even aware when it is gaslighting you and when it is telling the truth, and it speaks with full confidence and an equal level of "supportive evidence" regardless.
If you can't tell, it definitely matters. It is one thing to be fed info by a good liar who is a real human vs. by an LLM that can be the best liar on earth without even being aware of it.
Not an hour ago I was remarking to my roommate that airplanes are "better" at flying than bumblebees, but the bumblebee does things no machine can, and that I see AI versus human intelligence similarly. Yours is pithier though!
I wonder if this is why my heart rate goes up and I feel inexplicably enraged when I can slightly, just barely hear my roommate two rooms away watching TV when I'm in bed trying to fall asleep. I ask for things to be turned down and I use a white noise machine (have tried ear plugs, they work but they make my ears hurt), which helps, but sometimes the only effective thing is to pull a pillow over my head. It's even worse with people who snore. I can't imagine how my mother has slept next to my sleep apnea father in the same bed for 40 years and not gone insane.
If you haven't already, try Mack's silicone ear plugs. They are fundamentally different from foam earplugs in that they don't spring back when you press on them, which means they exert almost no force on your ears.
Yes! They're the best. I've been using them for over 40 years. Pro tip: buy the bright orange kids' size: they fit more comfortably without having to be customized and they're much easier to locate when they (inevitably) fall out and disappear. Plus you get twice as many for your money.
Anecdotally, my buddy (31M) in London says he's having the time of his life. Making a lot of money at BP, parties every weekend, just moved into a nice flat with a partner. I suspect, just like in America, that if you're inside the "formal economy" (corporate job with real benefits and potential for career growth) your life is pretty great. If you're outside that (service worker, gig economy, small business owner, NEET) then things are pretty grim.
The Fed, as we have seen, definitely "has the power" to raise rates. The question behind your question is, to what extent does raising rates lower inflation? I'm not an economist, just some guy on HN, but I'm pretty persuaded by the idea that inflation is, as others have said, a "monetary phenomenon." If you strip away money for a second, the "actual economy" is just supply (people making stuff) and demand (people buying stuff). Now imagine we're back in 2019 and the economy exists with some baseline amount of money in circulation. Then in 2020 and onwards, more money is printed to allow for emergency government spending in the face of the pandemic (a policy, by the way, that I think was completely correct in context). With interest rates low/zero, the money that was added to the economy basically stayed in the economy, bouncing around according to supply and demand. However, once people got the new money, and people consequently decided to buy more stuff with that money, the economy did not magically start making way more stuff right away to match. Classically, this is where the inflation supposedly happened.
As we discovered with some recent bank failures, it is somewhat more complicated than this. People did not just use the new money to buy stuff. They also used it to invest, and a lot of those investments would not have made sense if interest rates were higher. Somewhere I saw the example of an "AI dog-washing startup"; this fake business illustrates the type of real but not necessarily sound business that was suddenly getting investment because there was a lot of money flying around inside the economy. Now, when the Fed "raised interest rates," what actually happened was it created new investment vehicles (e.g. treasury bonds) that offered returns on investment that were much more attractive to investors than the previous generation that offered low/zero interest. Banks and others shifted towards these new, better investments and tried to sell their old, worse ones. Hence, some of the money that was flying around began to exit the economy and return to the government coffers. This was bad for banks like SVB that had a lot of "interest rate risk." It was also bad for downstream investments like the AI dog-washing startup, which were now competing with "better" businesses in an environment with less money flying around—this is where you see e.g. the current tech hiring downturn and layoffs.
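The "interest rate risk" mentioned above can be made concrete with a toy bond-pricing calculation (the numbers here are invented purely for illustration): a long-dated bond bought when yields were near zero loses a big chunk of its market value when prevailing yields rise, which is roughly the squeeze an SVB-style balance sheet faced.

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of a fixed-coupon bond: the discounted stream of
    annual coupons plus the discounted face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par when yields were 1.5%...
p_low = bond_price(100, 0.015, 0.015, 10)   # -> 100.00
# ...is worth far less once comparable new bonds yield 4.5%.
p_high = bond_price(100, 0.015, 0.045, 10)  # -> ~76.26
```

A roughly 24% paper loss on "safe" assets is fine if you can hold to maturity, and a disaster if depositors force you to sell early.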
So that's about what the Fed has been able to do. One question you've highlighted here is, to what extent is the Fed doing all this based on research and deliberation? I think the short answer is, we don't know. Interest rates are indeed a blunt instrument, but they are also the instrument the Fed can control, and so that's what the Fed is using. This leaves a lot of room for conspiracy theories and speculation. I happen to think the Fed is doing its best within the constraints of its powers, but the Fed cannot singlehandedly "fix the economy." It can print money and adjust interest rates. And doing these things affects the economy in theoretically well-understood ways.
Ultimately, raising rates is like putting an ice pack on an injury. It reduces the swelling, which is helpful, but it doesn't fix the injury per se—the fixing happens in an entirely different, more complex system, really a system of systems. Just so with "the economy." The economy ultimately exists as a sort of distributed phenomenon in the thoughts and actions of all its participants. These thoughts and actions are not aligned, and so we get the infinite omni-directional tug-of-war known as the "invisible hand of the market." The Fed certainly has a lot of ways to influence the economy, but it cannot force people to think or act in precise, coordinated ways.