Hacker News | justaregulardev's comments

The ability to have models that can run on resource-constrained devices does feel like a strong direction for ML to go in and could lead to greater user privacy. However, I’m unconvinced by the IoT-aspect of this tech. In many ways, it feels like IoT has “failed” to be as popular with consumers as expected and feels overhyped. Will adding ML to IoT devices really make a difference?


Adding an embedded LLM as a human interface for every appliance is a huge win— for consumers at least.

For example, I have a dishwasher with a bunch of settings that can sense load, etc. It’s got a touch interface that works with wet hands. Or I can tell it to start with the usual settings, or that a particular load is a bit different. Same with the laundry and the pressure cooker.

It takes less mental bandwidth when you’ve got kids.

What I don’t want is for my appliances to phone home to their makers.

LLMs (if you don’t somehow trigger their insanity) can be far more capable than Siri. How do you get that into something more energy-efficient than a high-end gaming rig?

Something more hidden is using LLMs to reprogram machine-to-machine protocols. That might extend the lifetime of machines that have to talk with other machines, but it breaks planned obsolescence.

There are plenty of exciting product ideas. Whether they are exciting revenue generators is another thing entirely.


> Adding an embedded LLM as a human interface for every appliance is a huge win— for consumers at least.

So appliances get settings that are even harder to understand and actually illogical, instead of just having hidden logic? That's not a clear win.


No, there’s an underlying logic and underlying settings that are still accessible.

But transparently wrapped around that there’s a “good Clippy” who can teach, interpret, and orchestrate those settings with a CUI (conversational UI, pronounced “koo-ee”).


Oh, a settings assistant is much easier to get right.

It is just completely against the currently accepted "best practices" for device and interface development, so I don't see how we get it. But yeah, it could be good.


Adding hardware capable of running an LLM would significantly increase the price of appliances, not sure that's a win for consumers.

In the context of the article, an LLM is kind of the opposite of "TinyML" and not something most IoT devices could even handle.


Not if you can condense the LLM enough to run on the embedded hardware.

Article aside, reducing energy use for models is one of the research areas for TinyML.
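One common way to condense a model for embedded hardware is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. A minimal, self-contained numpy sketch of symmetric int8 quantization (the toy weight tensor is illustrative; real toolchains such as TensorFlow Lite do this per-tensor or per-channel, along with activation calibration):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)  # toy fp32 weight tensor

# Symmetric quantization: map the fp32 range onto the int8 range [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the error the model would see at inference time.
dq = q.astype(np.float32) * scale
max_err = np.abs(weights - dq).max()

print(q.nbytes, weights.nbytes)  # 1000 vs 4000 bytes: 4x smaller
print(max_err <= scale / 2 + 1e-6)  # rounding error bounded by half a quantization step
```

The 4x storage reduction (and cheaper integer arithmetic) is what makes CNN coprocessors on small SoCs feasible; more aggressive schemes go to 4-bit or lower at the cost of more accuracy loss.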


I'm skeptical that an LLM with billions of parameters can be compressed down into something that runs on embedded hardware and still remain useful.


Skeptical you should be, but I’m optimistic. We have papers showing that knowledge in these models can be edited and deleted. Sam Altman makes the point that too much compute is being spent on using the LLM as a database.

Thinking about how few things any of these CUIs need to know about, I’m optimistic that we can distill them down to a workable size while maintaining the LLM magic.

“Fridge, what is the meaning of life?”

“Sorry, I don’t know about that. Ask me something about what’s in your fridge.”

“Okay, how many eggs do I have?”

“I see 3 eggs.”

When I can have that conversation by proxy through my phone’s onboard CUI while at the store, I’m going to get a lot of value out of that.
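The distillation idea above can be sketched with Hinton-style soft targets: a small student model is trained to match the temperature-softened output distribution of the large teacher. A toy numpy illustration (the logits are made up for the example; a real setup would compute this loss over training batches):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=np.float64) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

T = 4.0  # temperature: softens the distributions so "dark knowledge" transfers
teacher_logits = np.array([5.0, 2.0, 0.5])  # illustrative large-model outputs
student_logits = np.array([4.0, 2.5, 0.3])  # illustrative small-model outputs

p = softmax(teacher_logits, T)  # soft targets from the teacher
q = softmax(student_logits, T)  # student's current distribution

# KL divergence the student minimizes, scaled by T^2 as in the original
# distillation paper so gradients stay comparable across temperatures.
kl = (T ** 2) * np.sum(p * np.log(p / q))
print(kl >= 0)  # True: KL is non-negative, zero only when student matches teacher
```

The appeal for appliances is that the student only needs the narrow slice of behavior the CUI actually uses, so it can be orders of magnitude smaller than the teacher.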


> It is less mind bandwidth when you got kids.

If it works like ChatGPT does, then I would find it a greater mental burden. You'd have to carefully craft what you're telling it, or engage in a conversation of some sort, instead of just hitting a couple of buttons or turning a dial.


ML has been on IoT devices for years. Heck, there are embedded Arm SoCs with built-in CNN coprocessors that will run your TensorFlow models as-is. Again, they've been shipping in volume IoT products for years.

If ML is a win for an IoT device, the hardware's been there for a while. I'm sure yet-cheaper hardware might unlock a few more applications, but it doesn't feel like much of a game changer.


IoT sensors powered by ML may not provide good use cases for consumers, mostly because everything they can do can be done by a large model in the cloud plus a phone. It gets interesting when we ask what use cases can't be solved by the phone+cloud combo.

Such things as air quality management are good use cases. You can't use your phone to do that.


> Such things as air quality management are good use cases. You can't use your phone to do that.

Why not? I am strictly against IoT in my household so I may be way off base, but why can't your phone control your air purifier?


Privacy doesn’t matter unless the IoT devices are secure. Oftentimes, they’re not.


This changes everything and seems like a perfectly logical step from where we were. LLMs have this fantastic capacity to understand human language, but their abilities were severely limited without access to the external world. Before, I felt ChatGPT was just a cool toy. Now that ChatGPT has plugins, the sky’s the limit. I think this could be the “killer app” for LLMs.


Hopefully it doesn’t actually become THE “killer” app


underrated reply :)


Agreed. To me it looks similar to the iPhone's history: the first one was impressive, but only when Apple released the App Store the following year did the snowball start rolling into an unstoppable avalanche.


The Emacs comparison definitely feels apt. Being able to simply use JavaScript and CSS to make changes to the editor made it feel so powerful. Maybe I just need to get around to learning elisp (or find an elisp-JavaScript binding…)


Here’s an article from TechCrunch (https://techcrunch.com/2019/10/02/tiny-acquires-meteor/) on the acquisition of Meteor by Tiny Capital

