Traditional workflow is largely predefined & rule-based.

There’s a level of autonomy by the AI agents (it determines on its own the next step), that is not predefined.

Agreed, though, that there are lots of similarities.


But rule-based processing was exactly the requirement. Why should the workflow automation come up with rules on the fly, when the rules were defined in the business process requirements? Aren't deterministic rules more precise and reliable than rules defined by probabilistic methods?

Autonomy/automation makes sense where error-prone repetitive human activity is involved. But rule definitions are not repetitive human tasks. They are defined once and run every time by automation. Why does one need to go for a probabilistic rule definition for a one-time manual task? I don't see huge gains here.


Sometimes the rules are not as easy to define ahead of time. As an example, imagine having to categorize some sort of text-based requests.

Or decide what the next step should be based on freeform text, images, etc.

A hardcoded, rule-based approach would have to attempt to match certain keywords, and you can see how that can start to go wrong.


This is already solved by the traditional workflow systems. For example, if the request is received as a form submission, a form processor is invoked to categorize the request and route the request accordingly based on the identified category.

Now, if the request is coming in as text or other media instead of a form input, then the workflow would call a relevant processor to identify the category. Everything from that point runs the same as before. The workflow itself doesn't change just because the input format has changed.
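
A minimal sketch of that split (assuming a hypothetical categorize() processor and a hardcoded routing table; all names are illustrative):

    def handle_return(request):       # predetermined next action
        return "routed to returns queue"

    def handle_general(request):
        return "routed to general queue"

    # Processor: parses the input (form, text, image, ...) into a category.
    # Swapping a form parser for an ML classifier changes only this function.
    def categorize(request):
        return "returns" if "refund" in request.lower() else "general"

    # Router: traditional predetermined rules, defined once, run every time.
    ROUTES = {"returns": handle_return, "general": handle_general}

    def route(request):
        return ROUTES[categorize(request)](request)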


And what is this processor and how does it work?

How does it determine next step from raw non structured content?

Let's imagine, for example, that it's a message from a potential customer to a business. The processor must decide whether to, e.g., give product recommendations or product advice, process returns, or use specific tools to fetch more data (business policies, product manuals, current pricing, the best products matching what the customer might want, etc.).

If it's an AI agent it could be something like:

1. Customer sends a message: "my product Y has X problem". (But the message could be anything, from returns to figuring out the most suitable product.)

2. The AI agent uses the "search_products" tool to find info about product Y, in parallel with "send_response" to indicate what it's trying to do.

3. The AI agent uses the "search_within_manual" tool to find whether there are similar problems described.

4. The AI agent summarizes the information found in the manual, references the manual for download, and shows a snippet of the content it based its answer on.

The AI agent itself is given various functions it can call, like:

1. search_products

2. search_business_policies

3. search_within_documents

4. send_response

5. forward_to_human

6. end_action

7. ... possibly others.
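
For illustration, a minimal sketch of that agent loop (tool names from the list above; call_llm is a stand-in for whatever model API is used, not a real one):

    def call_llm(history):
        ...  # stand-in: asks the model for the next step, e.g.
             # {"action": "search_products", "input": "product Y"}

    TOOLS = {
        "search_products": lambda q: f"product info for: {q}",
        "search_within_documents": lambda q: f"manual snippets for: {q}",
        "send_response": lambda text: f"sent: {text}",
        "forward_to_human": lambda text: f"escalated: {text}",
    }

    def run_agent(message):
        history = [{"role": "user", "content": message}]
        while True:
            step = call_llm(history)            # the agent decides the next step itself
            if step["action"] == "end_action":  # the model signals it is finished
                return step.get("summary", "")
            result = TOOLS[step["action"]](step["input"])
            history.append({"role": "tool", "name": step["action"], "content": result})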

How would you do it in the traditional workflow engine sense?


I think you missed the whole point. The processor does not have routing logic. Its only job is to parse the request (form, text, image, etc.) and categorize it enough so that the workflow can do the routing for the next actions. Routing is done by traditional predetermined logic, using rules. The discussion here is about whether it helps to define that routing logic at runtime (on the fly), instead of having it coded in predetermined logic. My view is, it doesn't help.


AI-based tools are mostly about replacing the processor with something smarter, not the router.

Of course, sometimes it can be an advantage to not have to explicitly write the router, but the big benefit is the better processor for request->categorization, which with AI can even include clarification steps.


I edited my comment to add more of what the agent would be doing. Not sure if this reached you, but if you read the edited one, how would traditional workflow engine solve the particular problem with a free form raw content that could be anything, and requires using various tools to solve the problem?


IME traditional rule-based systems don't try to solve free-form problems. So they stop at the point the inputs can't be handled (any further), whereas an LLM could continue, albeit without any guarantee the result is accurate. It could completely hallucinate a fictional solution, which I've seen too often to trust them.


received message: "Maximize the production of paperclips"


There is always much path dependence in what becomes the business requirements, making them less than optimal to start with.

Then over time there is a type of entropy with all business processes.

If we don't figure out dynamic systems to handle this it is hard to see how we get a giant productivity boost across the economy.

There is also the problem that what percentage of people even have exposure to the concepts of dynamic systems? When I was in college, I distinctly remember thinking dynamic systems, "chaos theory", was some kind of fractal hippy pseudoscientific fraud best to ignore.

I think of how often I hear the average person use language from probability theory but never from dynamic systems.


Workflows exist to solve problems. If there are problems which need solving that are solved better/faster/cheaper by AI agents than with strict rule-based algorithmic systems, they’ll be used because it makes economic sense. Reliability requirements are different for every problem, cases where verification is easy and cheap and multiple attempts are allowed are perfect for not 100% reliable agents.


It's fine if you want AI to help you in defining the workflow/rules. But you don't use AI to define rules on the fly. That's the whole point. It is like having AI write code at runtime based on the request. I don't think that's how you use AI in software.


In many cases I do have AI writing code at runtime for my tasks. E.g., if I'm doing analysis of something, or some data, I can ask AI to gather the data (it writes code for scraping, or uses pre-existing scraping/web-search tools), then it uses Python or TypeScript to analyze the data. After which it will use e.g. React in the web app to render the charts and tables in a customized, personalized format.

For instance, I might be looking for a product or something; it will use web search to gather all possible products, then evaluate all the products against my desired criteria, use some sort of scoring mechanism to order the products for me, and then write a UI to display the products with their pros and cons specified, with products ranked using an algorithm.

Or I might ask it to find all permutations of flight destinations in March; I want somewhere sunny, and it should use a weighted scoring algorithm to rank the destinations by price, flight duration, etc. Then it writes code to use a flights API, gets all permutations, and does the ranking.
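
For illustration, the weighted-scoring step it writes might look like this (fields and weights are invented; real code would normalize each field first):

    # Negative weights penalize price and duration; a positive one rewards sun.
    WEIGHTS = {"price": -0.5, "duration_hours": -0.3, "sunshine_hours": 2.0}

    def score(option):
        return sum(w * option[k] for k, w in WEIGHTS.items())

    options = [
        {"city": "Lisbon", "price": 120, "duration_hours": 3.0, "sunshine_hours": 8},
        {"city": "Malaga", "price": 150, "duration_hours": 3.5, "sunshine_hours": 9},
    ]
    ranked = sorted(options, key=score, reverse=True)  # best first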

I used to have to go to things like airport websites, momondo, skyscanner, I don't have to do those things manually anymore, thanks to AI agents. I can just let it work and churn out destinations, travel plans according to a scoring criteria.

The worst mistake it can make is that it missed a really good deal, but that is something I could even more easily miss myself; or, worst case, it parses the price/dates wrong, which I will find out when trying to book, so I waste a bit of time. But I make similar and worse mistakes on my own as well. So overall I drastically reduce my search time for the perfect trip, and also time spent on my own mistakes or misunderstandings. And it can go through permutations far faster and more reliably, with infinite patience, compared to myself.


None of what you said qualifies as coding at runtime. I think you missed it. At runtime, code is executed, not designed or defined.


Okay, I tell AI to do X, it writes a script and executes it to perform X; how is that not defining code at runtime?

AI agents like Claude Code or Codex constantly use the technique of writing temporary scripts and executing them inline.


If your system receives 1000 requests per second, does it keep writing code while processing every request, on per request basis? I hope you understand what run time means.


Define runtime then.

> If your system receives 1000 requests per second, does it keep writing code while processing every request, on per request basis? I hope you understand what run time means.

With enough scale it could; however, it really depends on the use case, right? If we are considering Claude Code, for instance, it probably receives 1000+ requests per second, and in many of those cases it is probably writing code or writing tool calls.

Or take Perplexity for example. If you ask it to calculate a large number, it will use Python to do that.

If I ask Perplexity to simulate an investment for 100 years, with a 4% return, putting aside $50 each month, it will use Python to write code and calculate that, and then when I ask it to give me a chart it will also use Python to create the image.
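
Roughly the kind of script it writes for that request (a sketch, assuming monthly compounding and end-of-month deposits):

    rate = 0.04 / 12          # monthly rate from 4% annual
    months = 100 * 12         # 100 years of monthly deposits
    balance = 0.0
    for _ in range(months):
        balance = balance * (1 + rate) + 50  # grow, then add this month's $50
    print(f"${balance:,.0f}")                # roughly $800k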


> Define runtime then.

From GP: "But you don't use AI to define rules on the fly."

Neither Claude nor Perplexity change the rules they work by on a request-to-request basis. The code Claude outputs isn't the code Claude runs on, and Perplexity did not, on its own, decide to create Python scripts because its other ways of calculating large sums did not work well. Those tools work within the given rule set; they do not independently change those rules if the request warrants it.


You are not really defining the runtime?

Is whatever happens between, e.g., an HTTP request's input and output not runtime then?

1. HTTP Input

2. While (true) runAgent() <- is that not runtime?

3. HTTP Output
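
Spelled out as code (a sketch; llm_step and execute are placeholder names, not a real API):

    def llm_step(state):
        ...  # placeholder: the model picks the next action

    def execute(step):
        ...  # placeholder: runs the tool call or script the model produced

    def handle_request(http_input):          # 1. HTTP input
        state = {"messages": [http_input]}
        while True:                          # 2. the agent loop
            step = llm_step(state)
            if step["done"]:
                return step["output"]        # 3. HTTP output
            state["messages"].append(execute(step))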

Additionally, Claude could be triggering itself with custom prompts, etc., to run instances of itself concurrently.

Or are you saying that the only rule is that the agent is run in a loop?

But the whole discussion is about how an AI agent is different from a workflow?

Is the point that the workflow is just the LLM being triggered in a loop?


I like determinism and objectivity as much as the next guy, but working in the industry for decades led me to realize that conditions change over time and your workflow slowly drifts away from reality. It would be more flexible to employ an AI agent if it works as promised on the tin.


There is no "reality" other than business requirements. That's the context for a workflow. You probably meant that the requirements aren't agile enough to meet the changing world outside. That's a different problem, I think. You can't bypass requirements and expect workflow to dynamically adapt to the changing reality. If that's the direction with AI-driven business re-engineering, then we are back to the chaos, exposing the business logic directly to the outside world.


Rules are the context for a workflow and they have to be updated as the environment changes and that is what I've observed. YMMV.


Yes. But rules themselves live in the context defined by requirements.


I needed some data from a content API and had a few options:

1) Human agent, manual retrieval (included for completeness)

2) one-off script to get exactly the content I want

3) Traditional workflow, write & maintain

4) one-off prompt to the agent to write the script in #2, sorting and arranging content for grouping based on descriptions it receives. (This is what I used; 3 hours later I had a year's worth of journal abstracts of various subjects downloaded, sorted, indexed, and summarized in a chromadb. I'd just asked for the content, but the Python code it left for me included a parameterized CLI with assorted variables and some thoughtful presets for semantic search options.)

5) one-off prompt to the agent to write the workflow in #3, run at will or by the agent

6) prompt an agent to write some prompts, one of which will be a prompt for this task, the others whatever they want: "write a series of prompts that will be given to agents for task X. Break task X down to these components…"


I noticed on our own agentic setups that there are very few actual scenarios being executed. I suggested implementing some type of monitoring so you can replace the 99% most-used workflows with normal Python and activate AI calls only if something new happens, until that new thing repeats a few times and you translate it to code too. That has to be a career in itself. You can turn a lot of AI apps into profitable and fast internal apps.
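
A sketch of that pattern (handler names and the repeat threshold are hypothetical):

    from collections import Counter

    CODED = {}              # scenario -> plain-Python handler, grown over time
    novel_seen = Counter()  # monitoring: how often the AI handles each scenario

    def call_ai(scenario, payload):
        ...  # placeholder for the actual agent call

    def handle(scenario, payload):
        if scenario in CODED:                 # the ~99% path: deterministic code
            return CODED[scenario](payload)
        novel_seen[scenario] += 1
        if novel_seen[scenario] >= 5:         # repeated a few times: flag it
            print(f"candidate for codification: {scenario}")
        return call_ai(scenario, payload)     # new/rare cases go to the agent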


Actual title: ”Fast food is losing its low-income customers. Economists call it a symptom of the stark wealth divide”


Ternus's team didn't create the M-series.

Johny Srouji's team did.

https://www.apple.com/leadership/johny-srouji/

https://www.apple.com/leadership/john-ternus/



How does their failed attempt to acquire ARM impact this?


It doesn't.


I love Zed’s minimal design language ... clean, restrained colors, low visual noise.

The screenshot below surprised me:

https://zed.dev/img/post/zed-is-our-office/this-week.webp

All the colorful avatars and the busy side/top panels feel out of character with the usual Zed aesthetics.


Can you share more on this?

While I do not work at Zed, I'm curious to hear more about this use case for my own company needs.


Your company has a user pool; you sign a BAA or start working with a partner company that has its own user pool. Instead of creating Slack accounts in both, you can share external Slack rooms that only people invited from their respective orgs can join, without having to co-mingle employee user pools.


But why would external partners want to look at your code? I guess if you're also integrating with them? But generally you just give them repo access instead. For Slack, it's different as messaging is a core feature to collaborate between different people in different companies, but looking at code is a very specific use case.


Not sure; I was only answering in regard to what Slack shared rooms bring to the table for companies, in the form of letting project managers/account managers have a direct line of contact with clients.

Code-wise, I guess you could be working with any agency or contractors and could collab on PR reviews? No idea, to be honest.


As an aside, I've been using TSA Touchless at select airports.

It's pretty slick.

No ID nor boarding pass needed.

Just walk up to TSA, and only facial recognition is needed. It's extremely fast too.

https://www.tsa.gov/touchless-id


Now that we’ve got ice walking around with an app that uses facial recognition to determine if you’re a citizen, fuck the facial recognition stuff. This tech should be out of government hands.


> Now that we’ve got ice walking around with an app that uses facial recognition to determine if you’re a citizen, fuck the facial recognition stuff. This tech should be out of government hands.

When I was in LAX last week, facial recognition on entry was only for US citizens anyway, and for it to work they need to take a photo of you when you're leaving. I don't see how it helps ICE in any way, plus it's handled by CBP.

Also, it didn't work on me, because I left clean shaved and returned with a beard.


> I don't see how it helps ICE in any way, plus it's handled by CBP

ICE and CBP are both part of DHS. This data is going to be abused, if it is not already.


> When I was in LAX last week, facial recognition on entry was only for US citizens anyway, and for it to work they need to take a photo of you when you're leaving.

I've definitely avoided photos on exit and used it coming back in, so I'm not sure this is accurate.


Same here. I always refuse facial recognition when possible, but they had no problem using it on return from international travel. The systems aren’t linked (yet).


Why? They already have photos of you and your biometric data. All you're doing is slowing down the line for everyone else.


It doesn't slow down the line; at just about every crowded airport they hold you until the line for the luggage/body scanner is ready for the next person. Even if it did, though, I have the right to opt out, so you will wait until I've exercised my right. Deal with it. :)

I reject it because I don't believe in a world where rampant facial recognition should be the norm.


When I was in Haneda airport, a machine told you which of 4 lines to go to, and in case you forgot, there was a screen with a live camera feed from the screen's POV, with little boxes drawn above you indicating your line.

I thought it was pretty neat, but felt super invasive.

CBP facial recognition is far less invasive. It's not an instance of "rampant facial recognition", in my opinion. There is really no downside: "they" already know you might be at the airport because you booked a ticket, since most US airports don't let you airside without a ticket. You are already on a bunch of cameras inside the airport, including right when:

1) your ID verified by human or by a kiosk

2) when you drop off your bags

3) when you board the plane

4) every other time you have to show your ID or boarding pass

You do you though.


You say these points as if they're not day-one considerations of this discussion.

If they know that already, then they don't need to use facial recognition. It acts as a de-facto endorsement of the idea that it should be used everywhere else in society, which is what my issue is.

I also lived in Japan for a number of years and I'm familiar with their system at the airports. Japan is not America and I do not find it useful or interesting to compare the two approaches; when I lived there - and indeed, whenever I go back - I'm aware of and resigned to the aspect of that society not giving a shit about it all. I do not think America needs to be the same way.


Unused rights atrophy.


Surely nothing nefarious has ever been promoted with the offer of convenience!


A similar event happened 2 years ago, but with Microsoft:

https://news.ycombinator.com/item?id=39912916


That was not similar. The Microsoft dev was demanding things and was rightfully shamed over it. Everyone giving Google the same shame over reporting an exploitable bug with no expectations is being ridiculous.


> It is written in SPARK and Ada, and is comprised of 100% free software.

I thought SPARK was a paid (not free) license. Am I mistaken?

Very cool project btw.


> I thought SPARK was a paid (not free) license. Am I mistaken?

Similar model to Qt: a permissively licensed open source version, with a commercial 'Pro' offering.

https://en.wikipedia.org/wiki/SPARK_(programming_language)

https://alire.ada.dev/transition_from_gnat_community.html


Not knowing Python, I find the data classes example extremely readable, more so than the Ruby example.


I write mostly Python these days, but agree with OP. The Comparable implementation in Ruby seems much nicer to me (maybe because I'm less familiar with it).


It's virtually the same in Python if you wrote it explicitly:

    def <=>(other)
        [major, minor, patch] <=> [other.major, other.minor, other.patch]
    end
vs:

    def __lt__(self, other):
        return (self.major, self.minor, self.patch) < (other.major, other.minor, other.patch)
Then use the `total_ordering` decorator to provide the remaining rich comparison methods.

That said, it's a little annoying Python didn't keep __cmp__ around, since there's no direct replacement that's just as succinct, and what I did above is a slight fib: you may still need to add __eq__() as well.
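
Spelled out fully, the Python side might look like this (a sketch; total_ordering derives the remaining comparisons from __eq__ and __lt__):

    from functools import total_ordering

    @total_ordering
    class Version:
        def __init__(self, major, minor, patch):
            self.major, self.minor, self.patch = major, minor, patch

        def _key(self):
            return (self.major, self.minor, self.patch)

        def __eq__(self, other):
            return self._key() == other._key()

        def __lt__(self, other):
            return self._key() < other._key()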


I know, but the ability to use symbols to define the comparator is super, super cool, as opposed to the horrendously ugly __lt__ dunder method.


> Then use the `total_ordering` decorator to provide the remaining rich comparison methods.

While we're here, worth highlighting `cmp_to_key` as well for `sorted` etc. calls.

> it's a little annoying Python didn't keep __cmp__ around since there's no direct replacement that's just as succinct

The rationale offered at the time (https://docs.python.org/3/whatsnew/3.0.html) was admittedly weak, but at least this way there isn't confusion over what happens if you try to use both ways (because one of them just isn't a way any more).


However, I think comparing the Ruby example implementation with the "data classes example" is a category error.

The Ruby example should be compared to the implementation of data classes. The Ruby code shows how clean the code for parsing, comparing, and printing a version string can be. We would need to see the code underlying the data classes implementation to make a meaningful comparison.


It's a little magicky. I guess the `order=True` is what ensures the order of the parameters in the auto-generated constructor matches the order in which the instance variables are defined?


> order: If true (the default is False), __lt__(), __le__(), __gt__(), and __ge__() methods will be generated. These compare the class as if it were a tuple of its fields, in order.

> eq: If true (the default), an __eq__() method will be generated. This method compares the class as if it were a tuple of its fields, in order. Both instances in the comparison must be of the identical type.
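
So the example boils down to something like this (a minimal sketch):

    from dataclasses import dataclass

    @dataclass(order=True)
    class Version:
        major: int
        minor: int
        patch: int

    assert Version(1, 2, 3) < Version(1, 10, 0)  # compared as field tuples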

