
Interesting perspective of an L7 who was laid off at Amazon

https://xcancel.com/PlumbNick/status/2016500347053773198





I'm not reading all that. I thought tweets were supposed to be short. What the hell happened?

https://en.wikipedia.org/wiki/Character_limit#On_Twitter

> In November 2017, Twitter increased its character limit from 140 to 280 characters. In 2023, Twitter boosted the character limit for Twitter Blue subscribers. In February, it was increased to 4000. In April, it was again increased to 10,000, and in June, to 25,000.


The character limit is just whatever the longest tweet Musk wants to send.

I remember 280, but Jesus, things clearly got out of control.

The sink got let in.

I would wager being remote made him a target. It seems like a stretch to say it's just the global job market when the layoffs are global.

This, and being based in Houston, TX, where, as far as I know, no organization has a hub location.

Besides entire teams or business units, the targets were people who did not comply with RTO or were not sitting with their teams.


He is right, but what I don't see in the post is a solution.

How do you stop multinational companies like Amazon from using the global talent pool as they see fit and pay whatever wages the local market will bear?

Without that it comes off as the standard "elect me because foreigners are bad".


This is possible in the first place because labor cannot freely move across borders, but corporations can freely shop around.

You can either open borders to both people and goods, or you can have restrictions on both that go hand in hand. But one without the other is a massive gift to corporations who can and do cash in on that disparity.


They need to understand that their profits depend on people with disposable income. The US has some of the most profitable consumers for retailers. If they are all living hand to mouth, what happens to Amazon’s sales?

That is obviously a problem for next quarter.

Unions that can negotiate contracts that have consequences for mass layoffs.

The solution, according to the poster, is regulation made by experts, which is why he's running for Congress. It's maybe not the solution you were hoping for, but it is the solution he proposes :)

He says:

1. Strong data governance
2. Tax implications for layoffs (offshoring?)


> I saw this coming and that’s why I’m running for Congress.

Well, that's one way to go for that next job.


Be the change you want to see in the world. Honestly, it'd be good to have passionate congressfolk who aren't overtly corrupt or beholden to corporate interests.

I don't see a clean solution here. The price/craft distinction matters - companies competing on price (Amazon retail) have different incentives than those competing on quality and craft (Notion, Linear). If you're in the price business, replacing expensive US labor with cheaper global labor is rational. If you're in the craft business, it usually isn't.

But that framing is incomplete. Amazon isn't just retail - AWS, logistics tech, and AI enablement are craft-heavy. Cutting experienced people in those areas might be short-term thinking dressed up as strategy, not actual optimization. The policy question is where I get stuck. Regulate this, and US companies risk losing ground to foreign competitors who don't follow those rules. Do we want Alibaba as the default American retailer? But do nothing, and experienced workers keep getting squeezed while "efficiency" narratives provide cover.

What's the intervention that doesn't just shift the problem somewhere else?


We must do something about labor offshoring to India. It's too much. I want my children to have opportunities here in the country they were born in.

Factory workers said that in the 90s too. Didn't work out too well for them.

It didn't, but it got us Trump too, so there's that. Let's see what happens this time.

Generally speaking, the boomer generation has a different set of ideals from Gen X/millennials, and they are on their way out of calling the shots. I don't think things repeat.

At least in the US, we've spent the better part of 80 years building a huge amount of wealth and growth on the backs of debt and externalized costs.

It's interesting, in a morbid-humor kind of way, to see people realize that these costs always come due.

Globalization allows a huge amount of growth and many benefits you simply can't get with smaller or stronger borders. It also comes with risks, chief among them the risk that any one country loses self sufficiency and competitive advantage.

I do hope that the heaviest cost we (in the US) pay anytime soon is the cost of outsourcing jobs. We don't know how to manufacture anymore, our populace is extremely unhealthy by historical standards, and both political parties are toying with different forms of socialism. There are a lot of bad outcomes possible where we're heading; we'd get off easy if it stops at outsourcing some of our high-paid jobs to cheaper foreign labor.


"moved wherever the company needed me and fixed problems that had been sitting untouched because no one else could untangle them."

A screenshot later on shows he was a manager who spent his entire career in Houston. So... he didn't move, and I associate "untangling difficult problems" with something an engineer, not a manager, should brag about.

Reads like AI generated slop that doesn't correlate with the actual situation.


I took “moved wherever the company needed me” as hopping from troubled project to project to help fix. This is often what the most senior engineers do, and also the Manager label doesn’t necessarily mean much in this context.

Not AI.


I thought it was fairly obvious too. We have a few of the most senior engineers assigned to multiple critical initiatives just because they've led others successfully.

Yeah these bums better get with the program. If The Company needs you to uproot your life and move to Timbuktu start packing or hit the bricks pal.

Inconsistent jingoistic nationalism.

On one hand, he claims that he "fixed problems that had been sitting untouched because no one else could untangle them." And on the other hand, he blames his layoff on "a global labor market with almost no guardrails."

So which is it: did he really work on problems no one else could solve, or was he replaced by cheap foreign labor?

Probably neither. The most likely scenario here is one of two things:

a) Amazon made a mistake by firing him. They laid off someone truly valuable.

b) He wasn't as valuable as he thinks he was. Those problems were not worth paying him a meaningful fraction of a million dollars a year (what an L7 makes at Amazon).

What I can guarantee is that he wasn't replaced by a cheap, foreign, plug-and-play replacement.

It all makes sense when you realize the point of his tweet is to plug his run for Congress: so yeah, of course he's tapping into the absolute worst nationalistic sentiment. Shame on him.


Or c) Amazon knew about his running for Congress.

I'm sure someone in HR is writing him into the next round of mandatory training regarding code of conduct on social media.

If this is AI slop as the knee-jerk comments next to me suggest, it's going to be a hell of a surprise if he gets elected this year! https://www.nleeplumb.com/about

While reading the text, my mental AI alarm bells were going off, so I sent it all to pangram.com, which flags both the layoff post and his campaign website text as 100% AI-generated.

Yikes, the "[contraction] just" phrases on that campaign website alone are really over the top. Horrendously inauthentic writing, whether it's AI or not:

"wasn't just a job; it was a profound responsibility"

"This isn't just a statistic; it's a sign that we need to re-evaluate how we support those who serve"

"My experience isn't just about past success; it's about understanding the logistics, technology, and economic realities that shape the job market now and how we can create future opportunities right here."

"This experience didn't just teach me about law and order; it taught me about managing complex operations under pressure, the critical need for clear strategy, and the importance of unwavering integrity when the stakes are high – lessons desperately needed in Congress today."

"This campaign isn't just about me; it's about us."

All from the same page. Pretty nauseating.


Can someone even prove that this guy is real and not an AI persona at this point? Like, at what point do we have AI agents running for government with a warm meatsack acting on behalf of them?

"AI detectors" are notoriously unreliable.

Perhaps more importantly here, when it comes to writing, "AI slop" is basically management speak: it's all about waxing poetic about simple things in ways that make you sound complicated (and useful!). And this guy is a career manager. So I bet this is actually human slop, the kind from which ChatGPT et al. learned to speak the way they do.


AI detectors in general are unreliable, but there are a few made by serious researchers that have only 1-in-10000 false positive rate, e.g. https://arxiv.org/pdf/2402.14873

Having worked in a bigcorp, I've read my fair share of management-speak, and none of it sounds quite as empty as the allegedly AI text.

The AI sounds like someone conjuring a parody emulation of management speak instead of actual management speak.

More broadly, and I feel this way about AI code as well as AI prose, I find that part of my brain is always trying to reverse-engineer what kind of person wrote this, and what their mental state was when writing it.

And when reading AI code or AI prose, this part of my brain short circuits a little. Because there is no cohesive human mind behind the text.

Just as you subconsciously learn to detect emotion in tiny facial movements, you also subconsciously learn to reverse-engineer someone's mind state from their writing.

Reading AI writing feels like watching an alien in a skinsuit try to emulate human facial emotion cues: it's just not quite right, in a hard-to-describe but easy-to-detect way.


> And when reading AI code or AI prose, this part of my brain short circuits a little. Because there is no cohesive human mind behind the text.

This is the most succinct description of my brain's slop-detection algorithm.


It certainly looks like AI slop, so I stopped reading pretty fast.

It’s pretty sad that when people write well now, others dismiss it as AI.

It's also a pretty clear indication of AI undeniably passing the Turing test, in case that was still in debate.

No one can really tell what's AI-generated or not anymore. We're all going by vibes and undoubtedly getting it wrong.


I don't think that's what this means. To me it just means a certain population of people are clueless and don't use these tools enough. What's _actually_ damaging/obnoxious are the ones arguing that this guy is a good writer and that this isn't AI. IMO, telling the difference can be as simple as looking for the common giveaways, or as complex as reading between the lines: the structure of the sentences, the terrible adjectives, and the soullessness of it. If you have half a brain and are well read, you can _probably_ tailor these LLMs to write in a way that reads better. But it requires people to read a lot of content and literature to understand what good writing is, and this contrived, overly convoluted, soulless soup of words is certainly not an example of it.

I agree with you and I definitely noticed the “it’s not just X, it’s Y” pattern.

But I find your comment funny, because it ironically has the same "not that, this" pattern, just in a more verbose, less polished, and less formulaic way.


yep, that's my signature way of writing -- "unpolished & verbose" :D

It's not written "well"; it lacks that human touch, especially when writing about such a sensitive subject as getting fired. It's too cold, actually too well written from a syntactic point of view, which makes it inauthentic, hence most probably AI.

The author literally admits to using AI in the preceding comments...

What exactly indicates the post was AI generated?

I've been using AI for as long as GPT has been out, so if you can't see through the rambling, overly complex text written to make you sound smarter, as well as the patterns that are used ad nauseam like "this thing isn't JUST this, it's THIS", I dunno how else to prove it to you. IYKYK.

I have also used GPTs since 2020. I am also a writer. Much of the writing equated with “generated by AI” is so precisely because it’s broadly trained on real writing.

So the claim of "AI slop" without proof is little more than hearsay. It would be helpful to have any evidence.

It’s not about just the writing in one example, it’s about writing patterns—which are common—being equated with AI simply because they’re common.


If you're a writer, and you've been using GPT for so long, and you can't see it as obvious, I dunno what to tell you at this point. I guess LLMs were trained particularly on this guy's writing.

So you have zero evidence to convey something you claim is “obvious”? Got it.

here's your evidence, tanner https://xcancel.com/PlumbNick/status/2016590894485385347#m

I'm not going to say I told you so, but you should use these tools more before you start arguing, especially when it's so goddamn obvious.


If you read his original draft you can see how much of it was still carried over, as well as how his original writing conveys much of your same arguments that an AI wrote the final text.

I don’t think your point is as strong as you believe it is.

Lastly, I work directly with AI models and utilize all popular generators every single day, so I don’t know why you think you’re the expert here.


My point was that it's AI slop. Whether his original point is intact doesn't matter to me. The fact that you're now defensively doubling down and steering the conversation in a direction that serves you better is just cringe. I bet you're a pleasure to work with. C ya later, nerd.

If you can’t explain it, then you don’t actually understand it.

I just don't feel like enumerating all of the common patterns AI slop produces. Again, if you don't see this as obvious, I can tell you're clearly not using this stuff often enough (which might be a good thing).

The thing is, these stylistic patterns existed before AI, and weren’t completely atypical. Maybe you’re using LLMs so much that you’re over-associating them with AI now. Or maybe the author is using LLMs so much that he’s unconsciously adopted some of the patterns in his own writing.

Well he literally confirms it was from ChatGPT in a later reply, so there's that.

And his original draft is conspicuously missing the telltale "it's not X -- it's Y" and overall breathless dramatic flair that people like the poster you're replying to (correctly) picked up on.

https://xcancel.com/PlumbNick/status/2016590894485385347#m


I think the much higher probability isn't that this guy wrote literally like an LLM before LLMs came out, but rather that he just used an LLM to write all of it. You can see even more of these examples directly on his campaign site.

And despite the downvotes, you were correct.

https://xcancel.com/PlumbNick/status/2016590894485385347#m


Thank you. Even without this tweet, I was willing to die on this hill. It certainly feels nice to put these obnoxious HN know-it-alls in their place.

> certainly feels nice to place these obnoxious HN know-it-alls into their place.

You don't have to take the time to explain your reasoning if you don't want to, but "obnoxious know it all" is not a stone you should throw while at the same time refusing to explain yourself and saying anyone who can't see what you see is necessarily missing the obvious.


It's too difficult, honestly. There are a lot of the classic easy traps -- "it's not just X, it's Y" is a dead giveaway, especially when it's used 3-4 times in one essay. But the harder ones to spot, IMO, are the ones where the overall tone is unnecessarily complex. E.g.:

"When replacement is cheaper than retention, the decision gets framed as strategy instead of consequence."

This sentence is tight and on paper reads well, but it's robotic. It's kind of like turning a dead-simple if/else statement that's a pleasure to read into a one-line ternary. Technically it's one line, but now I have to re-read it five times to understand it. The flow is dead.
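The if/else-versus-ternary analogy, sketched as hypothetical Python (the function and the pricing numbers are made up purely for illustration):

```python
# Readable version: the branching is explicit and easy to scan.
def shipping_cost(weight_kg):
    """Flat $5 up to 1 kg, then $2 per extra kg (made-up pricing)."""
    if weight_kg <= 1.0:
        return 5.0
    else:
        return 5.0 + (weight_kg - 1.0) * 2.0

# Compressed into a one-liner: same behavior, but the reader has to
# unpack the conditional expression in their head.
def shipping_cost_terse(weight_kg):
    return 5.0 if weight_kg <= 1.0 else 5.0 + (weight_kg - 1.0) * 2.0
```

Both functions compute the same thing; the terse one just front-loads the cognitive work onto the reader.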

Another example:

'AI becomes the excuse, not the cause. It’s the clean narrative that hides what’s actually happening: experienced workers being swapped out through global labor substitution while leadership talks about “efficiency” and “the future of work.”'

It starts off with a short, trite sentence (LLMs love this if you don't steer them away from it). The other thing LLMs _love_ to do unprompted is: "It's the X: _insert_next_loaded_statement_here"

It's hard to get my point across, and I hope you kinda see it? I'm not a linguist, but these patterns are literally in every piece of LLM writing I've ever seen.
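For what it's worth, the "look for the common giveaways" idea can be sketched as a toy heuristic. This is a hypothetical illustration, not a real AI detector; the regex only targets the one "isn't just X, it's Y" construction discussed in this thread, and the function name is made up:

```python
import re

# Toy heuristic, NOT a real AI detector: count occurrences of the
# "isn't/wasn't just X ... it's/it was Y" construction in a text.
PATTERN = re.compile(
    r"\b(?:is|was)n't just\b[^.]{0,80}?\bit(?:'s| was)\b",
    re.IGNORECASE,
)

def count_not_just_pattern(text):
    """Return how many times the 'not just X, it's Y' tic appears."""
    return len(PATTERN.findall(text))
```

Real detection is far harder than pattern matching, which is exactly why "AI detectors are notoriously unreliable"; this just makes the claimed tell concrete.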


Again, you don't have to explain yourself, just don't be rude about it. It's hypocritical to call someone obnoxious and a know it all while you are engaging in schoolyard behavior and refusing to allow them to challenge your reasoning.

Saying nothing is an option. Other people who agree with you will be happy to explain their reasoning. Or maybe they won't and the conversation quietly fades away. Both are preferable.


Give me a break. Have you read the other comments? Asking for proof in the most smug attitude possible: it's the definition of obnoxious HN commenters. And that's not even counting the one guy who wrote "you sound and write like a bot", got downvoted, and deleted the post. I don't need to take any high road here; it's the internet. As far as being "rude" goes, it's a solid 2/10.

I'm not saying "take the high road" as much as "don't wrestle with a pig." It certainly isn't appropriate to call you a bot. But they probably insulted you to provoke you, right? Why give them any additional ammunition?

That's just my two cents, ultimately it's your business.


> i dunno how else to prove it to you

A prompt to generate similar output would be a good start.


How's this for your prompt, pal: https://xcancel.com/PlumbNick/status/2016590894485385347#m

Hopefully that's enough of a good start, and a good end, to this conversation.


I've been writing in a contrasting style like that since probably 5th or 6th grade.

... I wonder how much the writings of a lot of autistic / borderline folks impacted the LLM writing style.


$foo isn't just $bar, it's actually $baz

Is it me, or is this AI slop?

I am doomed, I guess. I didn't detect that. I thought this was sincere expression from an actual person, but an actual person who is also running for office and thus needs to tweak his writing accordingly.

Although I did note that it was a bit long (I guess I am out of the loop on tweets as well. I thought tweets were supposed to be short "hot takes". But this is practically an essay).


100%

It's insane. People just don't want to use their brains to communicate anymore, I guess. You've just experienced something traumatic like a layoff, and you can't even take a few hours to internalize it and be vulnerable online, rather than jumping immediately onto social media to use the opportunity to sound like a market analyst.


