The people who pay for operating systems are paying for a private entity to decide what the operating system should do. They're paying for someone to compile it from source and get it to run on their computer and maintain it.

That's the whole point: paying someone for that thing you also know how to do, so you can consider that problem solved and focus on the things you know how to do.


The difference between a 10% agent and a 30-60% subcontractor is what's being purchased, and from whom. Actors and other famous creatives are selling their particular work, which is unique and demanded by clients mostly independently of details like who their agent is. When a client pays 2x to an agency that pays the subcontractor implementing the work 1x to complete it, what's being purchased is the agency's work - working directly with the client, finding developers to complete the work, and managing the process of delivery (and all the related bits: making sure their subcontractors know what they're doing and are appropriate fits for the project, keeping work on track, being accountable for delivery/operational execution to the client).

If that extra 20-50% were so easy/useless that it could be grabbed "without lifting a finger", why aren't you finding enough work on your own to keep yourself busy, and why are you still working with that third-party company to begin with? Oh, you would be, if you "had any interest" in doing that. That level of accountability to the client and attention to their needs is literally what clients are paying the agency for, and it's why the agency is the one handling the demand for work rather than its subcontractors.

If clients aren't seeking out your particular involvement in their project, you're the guy working the mic, not the movie star.


Having worked on Cloud Run/Cloud Functions, I think almost every company that isn't itself a cloud provider could be in category 1, with moderately more featureful implementations that actually competed with K8s.

Kubernetes is a huge problem. It's IMO a shitty prototype that industry ran away with (because Google tried to throw a wrench at Docker/AWS when Containers and Cloud were the hot new things, pretending Kubernetes was basically the same as Borg). Then the community calcified around the prototype state, bought all this SaaS, and structured their production environments around it, and now all the SaaS providers and Platform Engineers/DevOps people who make a living milking money out of Kubernetes users are guarding their gold mines.

Part of the K8s marketing push was rebranding Infrastructure Engineering as building atop Kubernetes (vs operating at the layers at and beneath it), and K8s leaks abstractions/exposes an enormous configuration surface area, so you just get K8s But More Configuration/Leaks. Also, You Need A Platform, so do Platform Engineering too, for your totally unique use case of connecting git to CI to slackbot/email/2FA to your release scripts.

At my new company we're working on fixing this, but it'll probably be 1-2 more years until we can open source it (mostly because it's not generalized enough yet and I don't want to make the same mistake as Kubernetes. But we will open source it). The problem is mostly multitenancy, better primitives, modeling the whole user story in the platform itself, and getting rid of false dichotomies/bad abstractions regarding scaling and state (including the entire control plane). Also, more official tooling, and you have to put on a dunce cap if YAML gets within 2 network hops of any zone.

In your example, I think

1. you shouldn't have to think about scaling and provisioning at this level of granularity; it should always be at the multitenant zonal level. This is one of the cardinal sins of Kubernetes that Borg handled much better

2. YAML is indeed garbage, but availability reporting and alerting need better official support; it doesn't make sense for every ecommerce shop and bank to be building this stuff

3. a huge number of alerts and configs could actually be expressed as business logic if cloud platforms exposed synchronous/real-time billing with the scaling speed of Cloud Run.

If you think about it, so so so many problems devops teams deal with are literally just

1. We need to be able to handle scaling events

2. We need to control costs

3. Sometimes these conflict and we struggle to translate between the two.

4. Nobody lets me set hard billing limits/enforcement at the platform level.

(I implemented enforcement for something close to this for Run/App Engine/Functions; it truly is a very difficult problem, but I do think it's possible. Real-time usage -> billing -> balance debits was one of the first things we implemented on our platform. A sketch of what synchronous enforcement could look like follows this list.)

5. For some reason scaling and provisioning are different things (partly because the cloud provider is slow, partly because Kubernetes is single-tenant)

6. Our ops team's job is to translate between business logic and resource logic, and half our alerts are basically asking a human to manually make some cost/scaling analysis or tradeoff, because we can't automate that, because the underlying resource model/platform makes it impossible.
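To make point 4 concrete, here's roughly what synchronous enforcement could look like, sketched in Go. All names are made up - a toy illustration, not any real cloud API:

    package billing

    import (
        "errors"
        "sync"
    )

    var ErrBudgetExhausted = errors.New("hard billing limit reached")

    // Ledger tracks the remaining budget for some scope (project, zone, ...).
    type Ledger struct {
        mu      sync.Mutex
        balance int64 // remaining budget, in micro-dollars
    }

    // ReserveInstance runs on the scaling hot path: it atomically debits the
    // projected cost of one more instance before the scheduler may provision
    // it. If the budget is exhausted, the scale-up is refused up front
    // instead of a human being paged after the bill arrives.
    func (l *Ledger) ReserveInstance(costPerHourMicros int64) error {
        l.mu.Lock()
        defer l.mu.Unlock()
        if l.balance < costPerHourMicros {
            return ErrBudgetExhausted
        }
        l.balance -= costPerHourMicros
        return nil
    }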

You gotta go under the hood to fix this stuff.


Since you are developing in this domain: our challenge with both lambdas and cloud run type managed solutions is that they seem incompatible with our service mesh. Cloud run and lambdas can not be incorporated with gcp service mesh, but only if it is managed through gcp as well. Anything custom is out of the question. Since we require end-to-end mTLS in our setup, we cannot use cloud run.

To me this shows that cloud run is more of an end product than a building block, and that hinders adoption: we'd basically need to replicate most of cloud run ourselves just to add the tiny bit of also running our sidecar.

How do you see this going in your new solution?


> Cloud run and lambdas can not be incorporated with gcp service mesh, but only if it is managed through gcp as well

I'm not exactly sure what this means; a few different interpretations make sense to me. If this is purely a Run <-> other-GCP-product-in-a-VPC problem, I'm not sure how much info about that is considered proprietary and shareable, or even whether my understanding of it is still accurate. If it's that cloud run can't run in your service mesh, then it's simply that these are both managed services. But yes, I do think it's possible to run into a situation/configuration that is impossible to express in Run even though it doesn't seem like it should be inexpressible.

This is why designing around multitenancy is important. I think with hierarchical namespacing and a transparent resource model you could offer better escape hatches for integrating managed services/products that don't know how to talk to each other. Even though your project may be a single "tenant", because these managed services are probably implemented in different ways under the hood and have opaque resource models (ie run doesn't fully expose all underlying primitives), they end up basically being multitenant relative to each other.

That being said, I don't see why you couldn't use mTLS to talk to Cloud Run instances; you just might have to implement it differently from how you're doing it elsewhere? This almost just sounds like a shortcoming of your service mesh implementation, in that it doesn't bundle something exposing Run-like semantics by default (which is basically what we're doing) - because why would it know how to talk to a proprietary third-party managed service?


There are plenty of PaaS components that run on k8s if you want to use them. I'm not a fan, because I think giving developers direct access to k8s is the better pattern.

Managed k8s services like EKS have been super reliable the last few years.

YAML is fine, it's just a configuration language.

> you shouldn't have to think about scaling and provisioning at this level of granularity, it should always be at the multitenant zonal level, this is one of the cardinal sins Kubernetes made that Borg handled much better

I'm not sure what you mean here. Managed k8s services, and even k8s clusters you deploy yourself, can autoscale across AZs. This has been a feature for many years now. You just set a topology key on your pod template spec and your pods will spread across the AZs, easy.
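For reference, it's something like this on the pod template spec (a sketch; the app label is whatever your workload uses):

    topologySpreadConstraints:
      - maxSkew: 1                                # tolerate at most 1 pod of imbalance between zones
        topologyKey: topology.kubernetes.io/zone  # the topology key: spread across AZs
        whenUnsatisfiable: DoNotSchedule          # hard constraint rather than best-effort
        labelSelector:
          matchLabels:
            app: my-service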

For most tasks you'd need to do to deploy an application, there's an out-of-the-box solution for k8s that already exists. There have been millions of labor-hours poured into k8s as a platform; unless you have some extremely niche use case, you are wasting your time building an alternative.


Lots to unpack here.

I will just say, based on recent experience: the takeaway is not "Kubernetes bad", it's that Kubernetes is not a product platform. It's a substrate, and most orgs actually want a platform.

We recently ripped out a barebones Kubernetes product (like Rancher, but not Rancher). It was hosting a lot of our software development apps like GitLab, Nexus, Keycloak, etc.

But in order to run those things, you have to build an entire platform and wire it all together. This is on premises running on vxRail.

We ended up discovering that our company had an internal software development platform based on EKS-A; it comes with auto-installers for all the apps and includes ArgoCD to maintain state and orchestrate new deployments.

The previous team did a shitty job DIY-ing the prior platform. So we switched to something more maintainable.

If someone made a product like that then I am sure a lot of people would buy it.


> real-time usage -> billing

This is one of the things that excites me about TigerBeetle; the reason so much cloud-provider billing is reported at hourly granularity at best is that the underlying systems run batch jobs to calculate final billed sums. Having a billing database that is efficient enough to keep up in real time is a game-changer, and we've barely scratched the surface of what it makes possible.


Thanks for mentioning them; we're doing quite similar debit-credit stuff as https://docs.tigerbeetle.com/concepts/debit-credit/ but reading https://docs.tigerbeetle.com/concepts/performance/ they are definitely thinking about the problem differently from us. For a cloud platform you need much more prescriptive entities (eg resources and SKUs) on the modelling side, and different choices on the performance side (for something like a usage-pricing system).

This feels like a single-tenant, centralized ACH, but I think what you actually want for a multitenant, multizonal cloud platform is not ACH but something more capability-based. The problem is that cloud resources are billed as subscriptions/rates, and you can't centralize anything on the hot path (like this does) because any availability problem on that node then causes a lack of availability for everything else. Also, the business logic for computing a cloud customer's final bill is quite complex because it depends on so many different kinds of things, including pricing models that can get very intricate or bespoke, and it doesn't seem like TigerBeetle wants calculating prices to be part of their transactions (I think).

The way we're modelling this is with hierarchical sub-ledgers (eg per-zone, per-tenant, per-resourcegroup) and something you could think of as a line of credit (toy sketch below). In my opinion the pricing and resource modelling + integration with the billing tx are much more challenging because they need to handle a lot of business logic. Anyway, if someone chooses to opt in to invoice billing there's an escape hatch and a way for us to handle things we can't express yet.
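To make the line-of-credit idea concrete, here's a toy sketch in Go. All names are illustrative, and this is far simpler than what a real system needs (no replenishment, reconciliation, or pricing):

    package ledger

    import "errors"

    // Ledger is a sub-ledger (per-zone, per-tenant, per-resourcegroup, ...)
    // that has been extended a line of credit by its parent ahead of time.
    type Ledger struct {
        credit   int64
        consumed int64
    }

    // Debit is purely local: a zone admits usage against its pre-granted
    // credit line without touching the parent or any other zone, which
    // keeps centralized state off the hot path.
    func (l *Ledger) Debit(amount int64) error {
        if l.consumed+amount > l.credit {
            return errors.New("line of credit exhausted") // request more from the parent, asynchronously
        }
        l.consumed += amount
        return nil
    }

    // Extend carves a slice of this ledger's remaining credit out for a
    // child sub-ledger, so the child never needs a centralized check
    // while admitting usage.
    func (l *Ledger) Extend(child *Ledger, amount int64) error {
        if err := l.Debit(amount); err != nil {
            return err
        }
        child.credit += amount
        return nil
    }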


Every time I’ve pushed for cloud run at jobs that were on or leaning towards k8s I was looked at as a very unserious person. Like you can’t be a “real” engineer if you’re not battling yaml configs and argoCD all day (and all night).


It does have real tradeoffs/flaws/limitations, chief among them that Run isn't allowed to "become" Kubernetes; you're expected to "graduate". There's been an immense marketing push for Kubernetes and Platform Engineering and all the associated SaaS sending the same message (also, notice how much less praise you hear about it now that the marketing has died down?).

The incentives are just really messed up all around. Think about all the actual people working in devops who have their careers/jobs tied to Kubernetes, and how many developers get drawn in by the allure and marketing because it lets them work on more fun problems than their actual job, and all the provisioned instances and vendor software and certs and conferences, and all the money that represents.


We do basically this for our tests in statue: https://github.com/accretional/statue/tree/main/test/hermeti...

npm pack builds the package tarball locally, then we expose it to the container filesystem, where we do a build and check the outputs. You can move dependencies to bundledDependencies in npm to embed them in the image.

However, this is assuming you're rebuilding the static site generator itself every time. If you just want to build a site using an existing static site generator, it's much easier provided that the site generator itself is easy to use (for example, ours has a one-liner to take you all the way from nothing to a local static site running on localhost, see https://statue.dev)

If you aren't changing the SSG code itself between container runs, you'd just mount the markdown into the container and pre-install the SSG in the Dockerfile itself. For statue.dev that would be a Dockerfile almost exactly the same as the one we use already, except you'd use your own script and RUN this in the Dockerfile itself:

    yes | npx sv create . --template minimal --types ts --no-add-ons --install npm && npm install statue-ssg && npx statue init && npm install

In your script you'd just npm run build, then do whatever you want to send the files somewhere; and whatever starts the script would mount the markdown with something like -v "pathtomymarkdown/foo:/test-package/" (not sure how to do this in GitHub runners).

Depending on how interested you/other people are in doing this with statue.dev, we could prob get something like this (where the markdown is parameterized, not the npm package) working by Tuesday. We're building out the sandbox/build features on our cloud platform as we speak, this could be one of the first use cases/defaults.


I think I know the answer, but people don’t want to hear it. Gwern has a kind of formula/structure that really effectively markets his blog to the HN audience, which is Not Bad Actually, just effective messaging + giving people what they want.

You can’t really separate the content from its medium, its context, and its audience if you’re thinking about “why is this successful” (why does the medium express the content in a particular form that works for some particular audience). What the blog post is really about is not “writing” or creating good content per se, but how to structure content for a blog-like/feed-based medium where you’re competing for clicks, views, attention, participation in external narratives, and relevancy/memorability with an audience mostly looking to be entertained or to scratch some curiosity itch.

Gwern has a good formula for that which matched the HN context and audience:

1. Pique interest and grab attention. Give me a reason to click.

2. Let the reader in on the secret, you and me vs all these other idiots. Validate me.

3. Back it all up with sources/references and a post that articulates something the reader was already half-aware of and fundamentally agreed with. Teach me something but make me feel like “Finally someone who gets it” rather than challenged or threatened.

4. Do the work to actually deliver on the hook. Satisfy my curiosity and give me a reason to come back and share it.

None of this is even necessarily manipulative, it’s just the form that successfully competes in a click-driven market for attention and information (the context). Nobody has to click or read through or share or comment on the thing. Most likely very few will click through to the sources, but they might peep them or be interested to know that they exist. It’s very effective progressive disclosure.

The thing is, this audience REALLY does not want to believe that they can be marketed to or that their decision making is in many ways pretty damn emotional/predictable. Gwern does an excellent job validating that for them AND successfully marketing to them anyway. I think that’s the part that’s missing from this post.

The context is completely non-captive, the audience wants to feel smart, and believes that they are “too smart to be marketed to”. Here they are scrolling through an attention market looking for interesting information that they need to be convinced to click, read through, share, and engage with. Why was the link shared and content created to begin with, and how did it structure itself to fit its content/audience, and why does a particular structure/messaging work while others don't?

The word for all of that is Marketing. It's just a Good Thing when done right.


I think that's a very interesting, thoughtful response.

> You can’t really separate the content from its medium, its contex, and its audience

Yes, I completely agree.

> the audience wants to feel smart, and believes that they are “too smart to be marketed to”. Here they are scrolling through an attention market looking for interesting information that they need to be convinced to click, read through, share, and engage with. Why was the link shared and content created to begin with, and how did it structure itself to fit its content/audience, and why does a particular structure/messaging work while others don't?

> The word for all of that is Marketing.

I think that overemphasizes the significance of a 'market'. 'Market' is used as a metaphor for many things, such as an 'attention market', but it also implies commercial, transactional, profit-oriented relationships, which don't seem like such strong motivations here (though I can't speak for the author). And to me your claims seem to assume that the author's primary goal is more attention - that they are in an 'attention market' and do all these things with intent to drive more page views.

They could have many other motivations. As a general concept, people love to share what they know, sort of like the drive to make FOSS. Maybe the author just loves to learn things and the blog posts provide an excuse; I've fallen into similar hobbies - without regret. Maybe they feel validated, or it relieves stress, or it's an escape from a job they hate, etc. There are so many possibilities in addition to commerce, attention, or profit.

I do agree that the HN "audience wants to feel smart, and believes that they are “too smart to be marketed to”." Those are the easiest people to persuade.


> Teach me something but make me feel like “Finally someone who gets it” rather than challenged or threatened.

Ironically, AI has been making me feel like this lately. But it taught me all of this (i.e. your exact point about the psychological levers employed by people/organizations who understand why stuff goes viral).

So is that real, or am I just being successfully marketed to, now by AI?


I guess my meta-point is that "marketing" shouldn't be such a dirty word, because done well enough, it's effective communication that gives people what they want/helps them AND makes them feel good. My own comment basically does the same thing I said he did, lol.

The point of calling it marketing is that this blog post is explaining hooks, basic content marketing (ie be entertaining or interesting), progressive disclosure, and understanding your target audience: standard marketing concepts. You can find a lot of info if you research them by those terms.

Gwern's audience, in an ironic twist of fate, think that being marketed to = being tricked or manipulated by an evil person, so here he is explaining basic content marketing concepts to the people his blog is marketed towards, who hate marketing and believe themselves immune to it.

AI does the same thing to you because 1. most of the web is marketing 2. why shouldn't it be nice to you AND help you? 3. you keep coming back for more, right? And is that necessarily a bad thing?

I highly recommend a deep dive into signalling theory if you're interested in learning more, it's completely changed how I think about communication and behavior, even my own.


Sure, when it's fronting a great product, I have no issue with marketing. But it can be abused, which makes people suspicious (but not invulnerable as we know).

Anyway, I am currently in "lean in and find out" mode with AI :-)

Not quite at Gas Town yet but I've dropped a lot of baggage and willing to take a hike to try and find it.


Of course there are plenty of situations where you don't need this and probably a lot of people optimize for "being able to scale independently" without understanding when they would even need that/why those two services don't even make sense to scale independently.

If your services are stateful/being treated as persistent, then IMO even more commonly than in your example, you really want to make sure they're only bundled together if they directly interact at the application layer. Disk is unlike RAM/CPU in that usage generally increases monotonically in the absence of explicit handling of its size, so if applications share disks unnecessarily you've basically made it so that each fails due to lack of disk space whenever any other application on the same disk fails, and any cleanup/management - eg pruning old data, wiping out logs, or cleaning up some extra noisy/disruptive disk operation - now has to be coordinated with every other application sharing those disks. Terrible situation to be in.

For stateless services, the big reason you might want separable scaling is cost controls and operations. The problem is not necessarily that two stateless services are bundled; it's that giving permission for stateless service VeryImportantFoo to scale up aggressively also grants UnimportantBundledBar and all the other bundled services that same permission. If these APIs are exposed to the internet in any way (or if their usage patterns and availability requirements are completely different, eg one is some webhook plugin or old marketing demo and the other is your PayMeTonsOfMoney API), that's a really bad idea, because you've basically made it possible for someone to send a bunch of API requests and waste a lot of your money, or you've lowered the availability/scaling of your important stuff to prevent that.

Also, scaling applications independently is roughly synonymous with releasing them independently (eg keeping one truly constant and un-modified rather than rebuilding it and releasing it in the same push) which is again important if you have varying availability/reliability requirements (releasing the slack bot should not require rebuilding the main backend and pushing it again).

Let me be clear though, even though most developers think about microservices in terms of engineering considerations, these are ultimately all business requirements. If you don't need cost controls or to care about availability, or aren't worried that "kyle tried to push the slackbot and now prod is down" will happen, don't think you need to do all this stuff anyway because "it's what you should do". I'm mostly writing this because I think developers who are inexperienced or haven't used microservices before tend to not realize there are almost always business and coordination and operations problems being solved when companies (who know what they're doing) invest significant time and resources into microservices.


I have found mostly the opposite, but partly the same. With the right tooling, LLMs are IMO much better in microservice architectures. If you're regularly needing to do multi-repo PRs or share code between repos as the agents work, to me that is a sign that you weren't really "doing microservices" before adding LLMs to your project, because there should be some kind of API surface that you can share with LLMs in other repos, and cross-service changes should generally probably not be done by the same agent.

Even if the same dev is driving the work, it's like having a junior engineer do a cross-service staggered release and letting them skip the well-defined existing API surfaces. The entire point of microservices is that you are making that hard/introducing friction to that stuff on purpose so things can be released and developed separately. IMO it has an easy solution too, just direct one agent per repo/service the way you would if you really did need to make that kind of change anyway and wanted to do it through junior developers.

> they push the integration work onto another team so that developers can make it Not Their Problem

I mean yes and no, this is oftentimes completely intended from the perspective of the people making the decision to do microservices. It's a way to constrain the way people develop and coordinate with each other precisely because you don't want all 50 of your developers running amok on the entire codebase (especially when they don't know how or why that code was structured some way originally, and they aren't very skilled or conscientious in integrating things maintainably or testing existing behavior).

> so that developers can make it Not Their Problem

IMO this is partially orthogonal to the problem. Microservices don't necessarily mean you can't modify another team's code; IMO it's a generally pretty counterproductive mindset for engineering teams when a codebase is jealously guarded like that. It just means you might need to send another team a PR or coordinate with them first rather than making the change unilaterally. Or maybe you just want to release the things separately; lately I find myself wanting that more and more, because past a certain size agents just turn repos into balls of mud or start reimplementing things.


This is never going to be the case; if you're finding it, there's something really weird/wrong going on. Even with OpenAPI defs, if you're asking an agent to reason across service boundaries it has to do language translation on the fly in the generation, which is going to degrade attention 100%, plus LLMs are just worse at reasoning with OpenAPI specs than with language types. You also no longer have a unified stack; instead the agent has to stitch together the stack and logs from a remote service.


If your agent is reasoning across service boundaries you should be giving it whatever you'd normally use when you reason across service boundaries, whether that's an openapi spec or documentation or a client library or anything else. I don't see it as any different than a human reasoning across service boundaries. If it's too hard for your human to do that, or there isn't any actual structured/reusable way for human developers to do that, that's more a problem with how you're doing microservices/developing in general.

> they have to do language translation on the fly in the generation, which is going to degrade attention 100%,

I'm not completely sure what you're alluding to, but if you don't have an existing client for your target service, developers are going to have to do that anyway, because they're serializing data to call one microservice from another. The only exception would be if you started calling the other application's code directly, in which case again you're doing microservices wrong or shouldn't be doing microservices at all (or a lead engineer/other developers deliberately wanted to prevent you from directly integrating those two applications outside of the API layer, and it's WAI).

None of these seem like "microservices are bad for agents" problems to me, just "what I'm doing was already not a good fit for microservices/I should just not do microservices anymore". Forcing integration against service boundaries that are independently built/managed is almost the entire point as far as I'm concerned


Think of it like this. If you're multilingual but I ask you a hard question with sections in different languages, it's still going to tax you to solve the problem over having the question be asked in one language.

If you codegen client wrappers from your specs, that can help, but if something doesn't work predictably the indirection makes debugging harder (both from a "cognitive" standpoint and from the inability to directly debug a unified system).

I prefer FaaS + shared libraries over microservices when I have to part things out, because it gives you the independence and isolation of microservices, but you're still sharing code across teams and working with a unified stack.


1. Principle of least privilege is very important if you are interacting with a large-ish number of third party APIs. It's not just about data ownership (ie one service per db) or even limiting blast radius (most likely you won't get hacked either way), it's eliminating a single point of failure that makes you a more attractive/risky target in case of a hack either directly on your infrastructure, or through a member of your team, or improperly wielding automation.

2. Having at least some ability to run heterogeneous workloads in your production environment (ie being able to flip a switch and do microservices if you decide to) is very useful if you need to do more complicated migrations, integrate OSS/vendor software, or whip up a demo on short notice. Oftentimes you may not want to "do microservices" ideologically or as a focal point for development, but you can easily end up in a situation where you want "a microservice", and there can be an unnecessarily large number of obstacles to that if you've built all your tooling and ops around the assumption of "never microservices".

3. If you're working with open source software products and infra a lot, it's just way easier to eg launch a Stalwart container to do email hosting than to figure out how to implement email hosting in your existing db and monolith. Also see above: if you find a good OSS project that helps you do something much faster or more effectively, it's good for it to be easy to put it in prod.

4. Claude Code and less experienced or skilled developers don't understand separation of concerns. Now that agentic development is picking up, even orgs that didn't need the organizational convenience before may find themselves needing it now. Personally, this has been a major consideration in how I structure new projects once I became aware of the problem.


The architecture I like is either modular monoliths, or if you really need isolation, a FaaS setup with a single shared function library, so you're not calling from one function service to another but just composing code into a single endpoint that does exactly what you need.

HTTP RPC is the devil.


I generally agree, although I think gRPC, and using it with JSON, is awesome; it's just that, like with most of these tools, the investment in setting them up/tooling/integrating them into your work has to be worth what you get out of them.
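For anyone curious what gRPC-with-JSON can look like in Go: grpc-go lets you register a custom codec, so you can swap the wire format to JSON while keeping the generated types and service definitions. A sketch (not our actual tooling):

    package jsoncodec

    import (
        "fmt"

        "google.golang.org/grpc/encoding"
        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"
    )

    // codec marshals gRPC messages as JSON instead of binary protobuf,
    // keeping payloads human-readable while reusing the generated types.
    type codec struct{}

    func (codec) Marshal(v any) ([]byte, error) {
        msg, ok := v.(proto.Message)
        if !ok {
            return nil, fmt.Errorf("not a proto.Message: %T", v)
        }
        return protojson.Marshal(msg)
    }

    func (codec) Unmarshal(data []byte, v any) error {
        msg, ok := v.(proto.Message)
        if !ok {
            return fmt.Errorf("not a proto.Message: %T", v)
        }
        return protojson.Unmarshal(data, msg)
    }

    func (codec) Name() string { return "json" } // clients select it via grpc.CallContentSubtype("json")

    func init() { encoding.RegisterCodec(codec{}) }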

I actually used to do FaaS at GCP, and now through my startup I'm working on a few projects for cloud services, tooling around them, and composable/linkable FaaS, if you're interested! A lot of this logic isn't in our public repos yet, but hit me up if you want to try fully automated Golang->proto+gRPC/http+openapi+ts bindings+client generation+linker-based builds.

A project we started for "modular monoliths" that need local state but not super-high-availability databases is at https://github.com/accretional/collector, but I had to put it on pause for the past few weeks to work on other stuff and figure out a better way to scale it. Basically it's a proto ORM based on sqlite that auto-configures a bunch of CRUD/search and management endpoints for each "Collection", and via a CollectionRepo additional Collections can be created at runtime. The main thing I invested time into was backups/cloning of these sqlite dbs, since that's what allows you to scale up and down more flexibly than a typical db and not worry as much about availability or misconfiguration.


I believe that microservices (but under a different model than K8s et al expose) are poised to make a huge comeback soon due to agentic development. My company has been investing significantly in this direction for a while, because agents need better APIs/abstractions for execution and interaction with cloud environments.

Once Claude Code came out, something new clicked for me regarding how agent coordination will actually end up working in practice. Unless you want to spend a ton of time trying to prompt them into understanding separation of concerns (Claude Code specifically seems to often ignore these instructions/have conflicting default instructions), if you want to scale out agent-driven development you need to enforce separation of concerns at the repo level.

It's basically the same problem as it was 5-10 years ago, if you have a bunch of logic that interacts with each other across "team"/knowledge/responsibility/feature boundaries, interacting with your dependencies over an API, developing in separate repos, and building + rolling out the logic separately helps enforce separation of concerns and integration around well-specified interfaces.

In an ideal world, Claude Code would not turn every repo into a ball of mud, at least if you asked it nicely and gave it clear guidelines to follow. That was always true of monoliths and trying to coordinate/train less experienced developers not to do the same thing, and it turns out we didn't live in an ideal world back then either, so we used microservices to prevent that more structurally! History sure does rhyme.


Disagree. Enums are named for being enumerable, which is not the same thing as simply having an equivalent number.

It’s incredibly useful to be able to easily iterate over all possible values of a type at runtime or otherwise handle enum types as if they are their enum value and not just a leaky wrapper around an int.

If you let an enum be any old number or make the user implement that themselves, they also have to implement the enumeration of those numbers and any optimizations that you can unlock by explicitly knowing ahead of time what all possible values of a type are and how to quickly enumerate them.

What’s a better representation: letting an enum with two values be “1245927” or “0”, or maybe even a float or a string, whatever the programmer wants? Or should they be 0 and 1, or directly compiled into the program in a way that allows the programmer to only ever think about the enum values and not the implementation?

IMO the first approach completely defeats the purpose of an enum. It’s supposed to be a union type, not a static set of values of any type. If I want the enum to be tagged or serializable to a string that should be implemented on top of the actual enumerable type.

They’re not mutually exclusive at all, it’s just that making enums “just tags” forces you to think about their internals even if you don’t need to serialize them and doesn’t give you enumerability, so why would I even use those enums at all when a string does the same thing with less jank?
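To make the enumerability point concrete, a minimal Go sketch (names made up): when values are guaranteed dense over 0..N-1, iterating every possible value and array-indexed lookup tables fall out for free; with arbitrary values like "1245927" you'd have to hand-maintain both:

    package main

    import "fmt"

    type Color int

    const (
        Red Color = iota // 0
        Green            // 1
        Blue             // 2
        numColors        // sentinel: the count of values is what makes enumeration possible
    )

    // Dense values permit a plain array instead of a map - one of the
    // optimizations unlocked by knowing every possible value up front.
    var names = [numColors]string{"red", "green", "blue"}

    func main() {
        for c := Color(0); c < numColors; c++ { // iterate all possible values
            fmt.Println(int(c), names[c])
        }
    }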


> Enums are named for being enumerable which is not the same thing as simply having an equivalent number.

Exactly. As before, in the context of compilers, it refers to certain 'built-in' values that are generated by the compiler, which is done using an enumerable. Hence the name. It is an implementation detail around value creation and has nothing to do with types. Types exist in a very different dimension.

> It’s supposed to be a union type

It is not supposed to be anything, only referring to what it is — a feature implemented with an enumerable. Which, again, produces a value. Nothing to do with types.

I know, language evolves and whatnot. We can start to use it to mean the same thing as tagged unions if we really want, but if we're going to rebrand "enums", what do we call what was formerly known as enums? Are we going to call that "tagged unions" since that term now serves no purpose, confusing everyone?

That's the problem here. If we already had a generally accepted term for what was historically known as enums, then at least we could use that in place of "enums" and move on with life. But with "enums" trying to take on two completely different meanings - albeit somewhat adjacent ones, due to how things are sometimes implemented - nobody has any clue what anyone is talking about, and there is no clear path forward on how to rectify that.

Perhaps Go even chose the "iota" keyword in place of "enum" in order to try and introduce that new term into the lexicon. But I think we can agree that it never caught on. If I, speaking to people who have never used Go before, started talking about iotas, would they know what I was talking about? I expect the answer is a hard "no".

Granted, more likely it was done because naming a keyword that activates a feature after how the feature is implemented under the hood is pretty strange when you think about it. I'm not sure "an extremely small amount" improves the understanding of what it is, but it at least tries to separate what it is from how it works inside the black box.

