It's mainly people with generational wealth and robust safety nets who have the choice to take a risk for higher potential earnings. For most workers the risk is just too big and the odds of success too low, based not on their merit but on their runway to screw up and burn money. To the extent it's true that "you gotta fail to succeed", you need to be able to afford failure in order to succeed.
Generational wealth can be as simple as an intact family that has savings. Borrow from the folks, start a coffee shop.
It is simple - so simple that 5.5 million Americans can use their family money to do it. That doesn't preclude there being hundreds of millions of Americans who don't start a business because they don't have family money.
Starting another plumbing business is very unlikely to net you a billion dollars. To achieve that you need to bet on riskier businesses, which are more likely to fail. If your parents are well off you can afford to fail a few times by betting big. Most still won't succeed, but it doesn't affect them that much because their well-off parents insulate them from the downside risk.
My point isn't really that people from well-off families are more likely to start a business, although that probably is true as well.
My point is that people coming from a regular background are more likely to start less risky businesses, with less chance of failure but also less chance of reaching a billion dollars.
I tried to throw in words like "mainly" and "most" to leave room for the many exceptions to my generalization, but perhaps it was a bit strongly worded. Even so, the point stands that people need a fallback plan to feel comfortable taking risks. I often hear business leaders patting themselves on the back for taking big risks, while leaving it unsaid that they had no real chance of losing everything. Risk tolerance enables risk taking.
Again I disagree on cost. I just had a look again at what it costs to buy a company off the shelf in the UK (the last place I did that). It's about £299 and another £50 a year. There's probably some other small-change fees in there.
I completely agree on risk though and it highlights the core of the argument - building a business is time consuming and risky in a way that being an employee is not.
That’s one reason why business owners are able to capture more profit. They took more risk and placed themselves into a situation where their earnings are not bounded by a single contract.
Everybody has free will and could start a business but not everybody is capable due to skill, capacity, circumstance, etc.
Therein lies the irony. None of the people whining loudest will start a company because it’s risky and fraught with danger but they want all the spoils of ownership and responsibility.
Not everyone has access to the amount of capital needed to start a business or the savings to support them while they grow that business. Most people live paycheck to paycheck. Those people will never have the same bargaining power as their employers.
OP is correct though. A CEO of a major US company makes 1000x the wage of a factory owner in Shanghai, who in turn makes 100x the wage of the illegal migrant laborers (very common) working to send money home to support their parents in the rural towns where they were born.
Just a few decades ago, that CEO would have paid his fellow countrymen a modest salary and would also have made less himself.
When you imagine the staggering gap we've created, it feels deeply unfair. And it's not just a matter of starting a business; it is a structural moral failing.
> If the worker in Shanghai had economic freedom and rights similar to what workers in the west have, then this would not be an issue.
You're right. So I guess we are agreeing that the hyper-capitalist system we currently have relies on exploiting others?
We can build a system with fewer sharp edges without blowing it up.
Like worker representation on boards (Germany does this), capital gains rates that decrease the longer you hold an investment (a Clayton Christensen proposal), and holding the countries we outsource to similar environmental and labor standards.
Profit is surplus value created by the labor of workers. I'm simply saying no one works hard enough to create a billion dollars worth of anything.
Wealth accruing to owners rather than the workers who produce it is older than employment contracts. I'm pointing out that this arrangement is the result of a set of choices that have been made, and we could choose otherwise. The fact that we consider this one approach correct and fair and not others is not inevitable.
A) Profit is not necessarily connected to any workers; this is industrial-revolution-era economics straight from the 19th century, and the modern world is different. Today I can create intellectual property and keep copying it forever, without a single worker in sight.
B) These workers are free to work on their own; why don't they form their own company or become self-employed?
Profit is not surplus value created by workers; that's a four-year-old's understanding, and it negates all the other components of the system, of which labour is only one.
We have tried "choosing" various "arrangements" (or more correctly, competing economic systems), and the evidence points to capitalism being the correct approach to maximise liberty, growth and progress.
Maybe I've got completely the wrong end of the stick here, but why isn't an AI model treated as a fact, given that it's essentially a factual summary of the most likely bit sequences to occur given an input sequence?
This argument feels like arguing that it's a fact that the first Game of Thrones book consists of <this text>, and thus <this text> (the entirety of the book) isn't copyrightable.
If the bit sequence is likely to occur because it's someone else's creative content (or part of it is)... that doesn't seem like it can be a 'fact' in the relevant manner.
I agree that a particular sequence of words is copyrightable.
What I'm struggling with is that facts _about_ that corpus of text are not copyrightable. A simple fact could be that the word "bar" is the 5th word. The 6th word is "jazz". Etc.
A model is trained from these "facts" across many source documents. It is thus itself a derived 'fact' given a set of training inputs and parameters, so how could _that_ then be copyrighted?
Put another way: there's the origin text and then... is it turtles all the way down, such that none of it can be copyrighted because it's all math and calculations derived from that?
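To make that concrete, here's a toy sketch (Python; the sentence is made up, not a real quote) of the kind of positional "facts" I mean. Each fact on its own looks uncopyrightable, yet the full set of facts reconstructs the original expression exactly, which is why the turtles feel like they go all the way down:

```python
# A minimal sketch of the "facts about a text" framing.
# The sample sentence is hypothetical.
text = "meet me at the bar jazz night starts at nine"
words = text.split()

# Each (position, word) pair is an uncopyrightable "fact" on its own...
facts = list(enumerate(words))
print(facts[4])  # (4, 'bar') -- "the 5th word is 'bar'"

# ...yet the complete set of facts reconstructs the protected expression.
reconstructed = " ".join(word for _, word in sorted(facts))
assert reconstructed == text
```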
Because copyright has always tried to balance the idea of 'facts' against the idea of 'derivative works', even when they're blatantly in conflict with one another.
NFS scales and operates incredibly well; however, it has constraints.
Like any tool, it has its appropriate uses.
Why would you exclude a useful tool from your arsenal because of perceived downsides that are easy to hide from the layer above with appropriate system design?
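To sketch what "hide it from the layer above" might look like (names and paths here are hypothetical, not from any particular codebase): put a narrow blob-store interface in front of the application, and keep the NFS-specific care, like atomic rename within the same mount, inside one adapter:

```python
# Hypothetical sketch: isolate NFS-specific constraints behind a
# narrow interface so callers never depend on NFS semantics.
from typing import Protocol
import os, tempfile

class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class NfsBlobStore:
    """Backed by a directory on an NFS mount (path is an assumption)."""
    def __init__(self, root: str = "/mnt/nfs/blobs"):
        self.root = root

    def put(self, key: str, data: bytes) -> None:
        # Write to a temp file, then rename: rename is atomic within a
        # single filesystem, which sidesteps readers seeing partial writes.
        fd, tmp = tempfile.mkstemp(dir=self.root)
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.rename(tmp, os.path.join(self.root, key))

    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()
```

The layer above only ever sees `BlobStore`, so swapping NFS for object storage later is a one-class change.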
With all industries converting to software and every person on the planet moving toward owning a smartphone, it surprises me that there's an ongoing narrative that tech is collapsing.
The fact that everything uses software doesn't mean that Slack, a generic chat platform with dozens of absolutely identical competitors, being acquired for 27.7 billion dollars ever made sense.
That generic software company was valued higher than entire industries that supply components that all hardware depends on. Tech isn’t collapsing. But valuations were and continue to be fuckin nuts for a lot of companies and are coming down to more reasonable numbers, which look like collapses.
Tech seems somewhat unique because you can start a new company and within a relatively short period of time (<10 years) you can threaten the eventual existence of Fortune 500 incumbents. The entire venture capital/startup ecosystem exists to identify these upstarts and help them obtain unstoppable momentum as quickly as possible. If the incumbents want to survive, they usually have to pay up, and the longer they wait, the more expensive it will be. This is why Adobe pays $20 billion for Figma. Greed is good, but fear is better - for the startup looking to be acquired. There are a handful of other Figmas out there (and more will be started) and that is part of why tech has so much value.
Google acquired YouTube and Facebook acquired Instagram. Both for what seemed like insane valuations at the time. Both were brilliant defensive acquisitions in hindsight. Both fueled tech valuations by illustrating the opportunity for rapid disruption.
Both of those acquisitions were tiny, roughly $1 billion buys. They only grew into massive entities under their new corporate parents. Antitrust doesn't apply to such a situation, and they could only be seen as critical acquisitions in hindsight.
Disruption can happen but it is the exception that feeds the narrative.
In order for these acquisitions to command high valuations, big companies must fear being replaced. It is in VCs' interest to stoke that fear, and they do so with the threat of replacement at least as much as with funding for actual replacement.
VCs don't have to care whether disruption happens, but they do have to care about their IRR, and will say or do anything they feel is likely to increase their rates of return.
Most of the acquisitions that happen aren't because of fear of "disruption", which is a very overused and misunderstood term, especially as Clayton Christensen defined it.
They are bought to be accretive to an existing business, or because the acquiring company thinks it has a scale advantage that will multiply the value of the acquisition.
Another way to put it: these are "sustaining innovations".
The highest valuations are not paid for sustaining innovations, but for market access risks, which is what this thread was about. The two can be the same thing functionally, but “sustaining innovations” sounds much better in a shareholder meeting.
Let's take Apple. Apple has only made two large acquisitions - NeXT and Beats - in the modern era. NeXT was bought to "sustain" MacOS, and Beats was bought to jump-start Apple Music and its audio business. Is there any reason to believe that Apple, which was already streaming purchased movies and music, needed Beats to bring streaming technology to the store? Beats was never going to disrupt Apple's business. In fact, Cook said that Apple acquires a company on average every three weeks. Are all of those "disruptive"?
Neither LinkedIn nor GitHub was going to disrupt Microsoft in any way.
Jobs, having been forced out of Apple, was leading NeXT at the time, and Apple was a failing hardware company. Software, driven by Apple's founder, was threatening to take Apple's market. I don't know how you can say this wasn't potentially disruptive.
Post Jobs' death, they bought a black celebrity-driven entertainment company. This was absolutely about a brand threat: Apple was now associated with Tim Cook, who is perhaps many amazing things, but cool is not one of them.
Fast forward a decade and Microsoft recognized the game that was being played, which is that a set of six murky quasi-monopolies attempt to acquire diverse revenue streams and not lose information sources or access to their markets. While LinkedIn and GitHub may not have been direct threats to any of Microsoft's existing businesses, if someone else got ahold of them Microsoft would have zero social footprint, which would be a big problem for them, having essentially missed out on search as well.
> Software, driven by Apple’s founder was threatening to take Apple’s market
NeXT was already a failure and was transitioning out of the hardware business. Apple couldn’t make a modern operating system to save its life and was getting crushed by Microsoft.
> Post Jobs' death, they bought a black celebrity-driven entertainment company. This was absolutely about a brand threat: Apple was now associated with Tim Cook, who is perhaps many amazing things, but cool is not one of them.
People aren't buying iPhones because of a producer whom most people outside of hip hop only knew as the producer behind a famous white rapper (Eminem).
> Microsoft’s existing businesses, if someone else got ahold of them Microsoft would have zero social footprint, which would be a big problem for them, having essentially missed out on search as well.
Under Satya, they moved away from Windows everywhere to cloud and Office everywhere.
Azure isn’t popular because of GitHub. It mostly targets stodgy old Enterprise customers that are already on the MS platform. That’s not meant to be an insult. I was a stodgy old enterprise MS dev until 2018 when I started moving toward AWS technologies (where I now work).
Another fun example of looking at valuations versus actual real world production and output (real value delivered?): Tesla for a period was valued at a higher market cap than Toyota, the largest auto manufacturer in the world. Consider the real world infrastructure and output of Tesla, and the real world infrastructure and output of Toyota. Toyota is an order of magnitude larger operation.
So for Tesla's valuation to match its real-world ambition, it has to be aiming to make its operations as big as Toyota's. Great! Being as big as Toyota would put its market cap at... oh. Less than it currently is.
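Back-of-the-envelope, with rough era-appropriate numbers (illustrative, from memory, not exact figures):

```python
# Illustrative, era-approximate figures only (not exact quotes).
tesla_cap = 650e9       # Tesla market cap at the time, ~$650B
toyota_cap = 250e9      # Toyota market cap, ~$250B
tesla_vehicles = 0.5e6  # Tesla annual deliveries, ~0.5M
toyota_vehicles = 10e6  # Toyota annual deliveries, ~10M

print(f"Tesla priced at ~{tesla_cap / toyota_cap:.1f}x Toyota")
print(f"...while shipping ~{tesla_vehicles / toyota_vehicles:.0%} of the volume")
# Even if Tesla grew output ~20x to match Toyota, a Toyota-like
# valuation (~$250B) would sit *below* its then-current market cap.
```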
They earn more profit than Toyota and have a ton of deferred FSD revenue they can't yet recognize but could pay out as dividends if they want; apparently they never have to deliver FSD within the average lifetime of the cars that came with it.
Out of all companies, you chose Slack, the only one that I actually think deserves an insane valuation. So many products are just Slack integrations, and companies fully rely on them. Have you worked at a big company and seen what happens when Slack goes down?
Collaboration tools are some way from being systems of record, and there are substantial differences between competitors. Generic chat apps are not yet useful enough (despite years of development) for most companies; IRC failing to win is evidence of this, and so is the fact that companies aren't switching en masse to run their infrastructure on whatever the OSS flavour of the day is. That doesn't mean it won't happen some day, but that day won't be soon.
Couple that with the fact that a huge number of companies still haven't converted to using these tools, and purchasing the market leader at a premium makes total sense.
With regards to component supply: companies that are producing unique chips are worth plenty, whereas those making COTS components aren't.
Is this a ChatGPT experiment? The comment uses a few seemingly relevant terms, but almost entirely incorrectly. "System of record" has nothing to do with being commoditized.
Even in PACE, I don't think it means commoditization. That said, over time I've found anyone really technical pays absolutely zero attention to Gartner. With no judgement implied, they generally seem to target non-technical people who just want some jargon and product names that will help them sound like they know what they're talking about. They also get paid by the companies they are evaluating which means they'll almost never tell you about smaller players/startups who don't have the budget. HN or Reddit are probably both better for getting a read on who's actually doing innovative stuff.
There are also many companies making "unique" chips that are sold as COTS - it's a false dichotomy. Aside from a few initiatives like RISC-V, I'd venture to say most are proprietary.
Obviously tech itself isn't collapsing -- it's the astronomical growth that's collapsing, and much of valuation is based on growth. Now it's turning into merely "normal" growth. But that's all investor-side.
Consumer-side, it's really more about tech maturing. If we take your example of owning a smartphone, it means that most people already have smartphones, and since the yearly upgrades are much more incremental now, people don't need to upgrade as often.
If by tech we mean everything touched by Moore's law, all this degrowth also seems to be a consequence of the lengthening of the doubling time in flops and words. Ultimately, what you mean by normal growth would then be the rate at which old computers are replaced by new but not more powerful computers.
Tech stock prices are based on extreme growth numbers. The problem is the denominator: it's so big for most tech companies that they can't keep growing at 20-50% a year. So if your P/E goes from 30+ down to 10, or worse 3-5, a lot of wealth disappears even if your E is still strong but flat. And if people feel broke they don't spend money on things; the new phone isn't as important as, say, eating. Also, a lot of tech didn't have to compete and could still grow wildly. Name a non-competitive area of tech these days, a blue-sky opportunity that doesn't entail hard engineering. Autonomous cars, fusion, solar: all require massive amounts of slog-it-out engineering.
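To put toy numbers on the multiple-compression point (purely hypothetical earnings):

```python
# Purely hypothetical: flat earnings, shrinking multiple.
earnings = 10e9  # $10B/year, strong but flat
for pe in (30, 10, 5):
    market_cap = pe * earnings
    print(f"P/E {pe:>2} -> market cap ${market_cap / 1e9:,.0f}B")
# P/E 30 -> $300B, P/E 10 -> $100B, P/E 5 -> $50B:
# two-thirds or more of the market cap gone with zero change in E.
```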
I've often thought this too, but there are several huge exceptions to it, while outside of tech there are plenty of similar examples.
First and foremost, there are the exceptions:
Google: P/E is more or less 20, decades already
Microsoft: P/E is more or less 20, for a very long time
Those are not actually that high. Compare to BABA (P/E >200), IBM (P/E >100), JD (P/E >600).
And the reverse exceptions, non-tech with absurd P/E:
Tesla: P/E is 40 (down from ~500 I might add)
Boston Scientific: P/E is >100
and let's just shut up about crypto, because... there is a theme. Overwhelmingly, the ridiculous valuations belong to financial companies and "semi-"government companies (meaning protected by a government, but not benefiting the people of the country that government governs; BABA for example, or, before their downfall, Theranos). If tech becomes the P/E champion instead of the "almost-but-not-quite" corruption companies that tend to dominate it, I feel that's a very good thing indeed.
Tech != software != start-up. Those three things are unrelated, and the tendency to equate tech with software led to a lot of issues on all fronts in the last decade or so.
Planes, trains, and automobiles are tech. Software is a specific subset: a set of tooling and techniques that's applied to "tech", no more special than circuit boards, wires, or the wheel. It doesn't have a special elevated status. The more people understand that, the faster they'll stop hero-worshipping it.
I guess that's the case for software-centric platforms. It was not my experience.
I worked for hardware-centric corporations, for most of my career, and became used to having software treated as a "nice to have, but not essential" part of the product. In many cases, my work (and myself) were treated with contempt. I got used to being sneered at.
In my experience, this was a disastrous attitude, because, despite lots of folks wishing it weren't so, hardware, these days, is software.
Software pervades everything, from the compiled silicon of peripheral ASICs and FPGAs to the firmware that drives said chips.
In my experience, firmware was treated as hardware, and the same rigid, waterfall process was applied to firmware, that was done for the hardware.
Worked great.
Until it didn't.
Software is a drastically different beast from hardware. I won't bother going into the reasons. Anyone with a smattering of knowledge in the area, can list them.
In any case, the hardware folks would treat any attempt to leverage the flexibility that software allows as "cowboy, low-quality, laziness." It was Waterfall, or you were a "bad engineer," and "lazy and undisciplined."
I'm really big on Disciplined software development. That does not make me popular with this crowd. It also does not mean Waterfall.
In my opinion, there's no way to avoid the difficult parts of engineering, but it's also important to be adaptive, responsive, and, dare I say it, "agile."
It has special elevated status economically because of near-zero marginal cost. Circuit boards, wires, and vehicles are brutally competitive businesses where you try to squeeze $/performance out of Mother Nature like blood from a stone. Software is so far from these physical frontiers that incredibly wasteful architectures can create enormous amounts of business value.
I’m gonna disagree just a bit here, because it loops back around to the possible end of the “tech” hype.
Cars have been tech in this finance and investment bubble. Tesla, obviously, but then lots and lots of electric vehicle and autonomous driving startups came in with the whole song and dance of “disrupting the incumbents” and “move fast and break things” and “let’s milk customers forever and ever with subscriptions for self-driving taxis as a service”.
Which is in the process of going poof. (Remember Nikola? No. Good.). Lots and lots of hype about startups and subscriptions, and we’ve ended up with GM having arguably the best autonomous driving tech, and Ford having arguably the most hyped recent EV with the F-150.
Words can evolve. Here on this forum and in large parts of culture, 'technology' is any relatively recent innovation. Of which software is one of the more prominent examples.
Few would debate that the printing press is one of the most important pieces of tech humanity ever produced. At first it was used to print Bibles, but it was eventually used to print all sorts of other things. The philosophical texts later printed on the printing press were not a new "tech", but we pretend new apps for an iPhone are, for some reason.
When we increase the surface area of a definition like you are doing here, it makes words meaningless.
FWIW, I'm only stating what seems obvious to me. You can disagree, though I suspect trying to narrow the definition at this point will be like pushing a rock uphill or swimming upstream.
My view of words like technology is they are more like sliding windows, covering what the zeitgeist is classifying. Somewhat like the word 'fashion' or 'fad' aren't limited to any one specific kind of dress or style.
Wouldn't the word 'technology' be less useful if it always had to be qualified to exclude everything from fire and the wheel up to the transistor?
Ah yes, CPUs, those things famously bought because of the software that runs on them. Certainly not because of any fabrication advances by a given company, hohoho. All tech is software!
You still license the operating system and end-user software from a separate company or companies. Which means reducing tech to JUST the software is still extremely crass.
Said another way, the dependency graph is bidirectional. Software requires hardware to run. Hardware is of no practical use without software. The fixed quantity is the "use case", NOT the software.
Most hardware I use in my life, outside the laptop I'm using to write this comment or my phone, works pretty well without complicated software. Heck, even my cars are old enough that only some basic embedded software runs the engine.
Also, software without hardware to run on, or to write on, is even more pointless than hardware alone. At least the latter can be touched.
Only economically nonsensical ventures will collapse. Tech will always be strong, but not always overvalued or able to open markets with limitless VC capital.
She grew up with computing systems that had no concept of keyboard shortcuts and intentionally abstracted away the concept of the file system and documents.
My assessment is that she can’t work in an office.
She can't belong on the side of the world where we make things. She'll forever belong with the people we've designed our apps for: viewing videos, playing games, following Instagram models, struggling to afford a flat. Maybe we'll make her rich! Our apps will decide that.
I have a corollary to Sturgeon's Law: "Most people are wrong, most of the time, and the people that are right, are usually right by accident."
I thought of this after listening to someone smugly quote one dead economist after another, while we both knew full well that all those economists were mostly wrong about everything.
How long is the half-life of architectural best practice?
It seems to me that after ~10 years, half the cargo-culted ideas that young developers go all in on fail. Perhaps that's just long enough for them to get older and gain experience.
Perhaps the real issue here is the "thought leaders" who should know better but are perfectly happy to sell their ideas as risk free, when in fact these things are incredibly context sensitive to the team, problem and company shape.
Nobody ever seems to say "Do you believe what you're saying, or are you simply collecting a speakers fee?".
> How long is the half-life of architectural best practice?
This is a great question to ask, though I think one should also apply that same way of thinking to frameworks, languages and a lot of the other software out there (like OS distros or databases).
Sometimes there are good ideas that turn out not to be feasible, and the same can happen with technologies. Whereas if a technology has been around for a large number of years and hasn't died yet, that's a good predictor of its continued existence: like Linux, or PostgreSQL.
> It seems to me that after ~10 years, half the cargo-culted ideas that young developers go all in on fail. Perhaps that's just long enough for them to get older and gain experience.
Another thing I've noticed is that sometimes the "spirit" of the idea remains, but the technologies to achieve it are way different. For example, we recognized that reproducible deployments are important and configuration management matters a lot, however over time many have gone from using Ansible and systemd services to shipping OCI containers.
These technologies bring their own cruft and idiosyncrasies, of course, but a lot of the time allow achieving similar outcomes (e.g. managing your configuration in an Ansible playbook, vs passing it in to a 12 Factor App through environment variables and config maps).
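As a tiny sketch of that 12 Factor style (variable names here are made up), the app reads only environment variables, so whether Ansible templated them into a systemd unit or Kubernetes injected them from a ConfigMap is invisible to it:

```python
# Sketch of 12 Factor-style config: the app sees only environment
# variables; the delivery mechanism (systemd unit file, ConfigMap,
# .env file) is someone else's problem. Names are hypothetical.
import os

DB_URL = os.environ["APP_DB_URL"]                    # required: raises KeyError if unset
LOG_LEVEL = os.environ.get("APP_LOG_LEVEL", "info")  # optional, with a default

print(f"connecting to {DB_URL} at log level {LOG_LEVEL}")
```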
Of course, sometimes they bring an entirely new set of concepts (and capabilities/risks) that you need to think about, which may or may not be intended or even positive, depending on what you care about.
It would be interesting to know how long it took for things like "making tall buildings" or "making safe bridges" to shake out into roughly what they are today. I suspect, for example, that the project planning aspect is pretty stable.
What do you think is wrong with people over 60?