The often-repeated "wisdom of the crowds" justification is misapplied to online betting markets. Like people, crowds can either be wise or unwise depending on the situation. Famous experiments like guessing how many gumballs are in a jar work because each person who can see the jar has a source of valid information, and in aggregate that can be surprisingly accurate.
You can't assume that the majority of individuals participating in betting markets have a source of valid information. Given the destructiveness of these markets to both individuals and society, the aggregate wisdom of the individuals participating in them is highly doubtful. Any meager value above more traditional forecasting does not justify the costs, corruption, and loss of trust in institutions.
Please show the realized dollar benefit to society versus (in response to the OP's statement) the claim that the results don't "justify the cost, corruption and a loss of trust in institutions," along with a breakdown of the costs and negatives to society that result from those factors.
This isn't big oil (yet); you can't just externalize all the downside and say the product is a net benefit.
Pish tosh, my dear sir, it's simply common-sense that there are oodles of people out there with secrets that would be completely ethical to distribute and would undeniably better all humankind, but they're sitting on them purely because they haven't figured out how to make a profit from it. /s
In other words, the overlap between these is too small to justify the idea that prediction markets are a net benefit by default:
1. Is valuable
2. Not already known
3. No current reward mechanism exists (e.g. patents)
An apprenticeship is great for all sorts of reasons that AI can never touch, but I don't think abandoning AI will be necessary unless you aren't really motivated by a desire to understand and do the thing you are trying to learn. If you are, it is an incredible tool.
I'd find it very understandable if true. I also think there will be some junior devs that it will supercharge, and they will eventually make some of the things we only dreamed about. If you don't actually enjoy coding but are starting out as a coder, it's probably not going to help. If you are thirsty to understand and do things, it is an incredible time to start out.
I think it is a mistake to think about people as being helpless consumers of the algorithm. The OP's mom no doubt makes some intentional choices in her life that make a difference. It just doesn't help that the algorithm will lean into whatever will get the most engagement.
Good old-fashioned human trolling is the most likely explanation. People seem to think that LLM training just involves absorbing content from the internet and other sources, but it also involves a lot of human interaction that allows the model to have much more well-adjusted communication than it would otherwise have. I think it would need to be specifically instructed to respond this way.
Here's how I'd break down the two types of users: People who are using AI to teach themselves how to work in the domain they are interested in, and people who are relying on AI to do all or most of the heavy lifting.
I'd argue that the people using AI most effectively are in the mostly-chatters group that the author defines, and specifically they are using the AI to understand the domain on a deeper level. The "power users" are heading for a dead end; they will arrive there as soon as AI is capable of figuring out what is actually valuable to people in a given domain, which is not generally a difficult problem to solve. These power users will eventually be outclassed by AIs that can self-navigate. But I would argue that a human with a rich understanding of the domain will still beat a self-navigating AI for a long time to come.
I also don't understand the reaction. The AI Village seems to be based on a flawed understanding of LLMs and what they are capable of, but at least it is an open project and useful for knowledge gathering. Annoying spam emails are about what I would expect, but they are useful as an earnest demonstration of their effectiveness. I can understand anger at the direction of the tech in general, and there is something grotesque about the emails, but I can find much more disturbing spam if I go check my inbox. It seems like an overreaction.
It is an interesting comparison. Databases are objectively the more important technology: if we somehow lost AI, the world would be equal parts disappointed and relieved; if we somehow lost database technology, we'd be facing a dystopian nightmare.
If we cure all disease in the next 10-15 years, databases will be just as important as AI to that outcome. Databases supported a technology renaissance that reshaped the world on a level that is difficult to comprehend. But because most of the world doesn't interact directly with databases, they are not the focus of enthusiastic rhetoric.
LLMs are further along the tech chain, and they might be an important part of world-changing human achievements; we won't know until we get there. In contrast, we can be certain databases were important. I imagine the people who were influential in their advancement understood how important the tech would be, even if they didn't breathlessly go on about it.