There is a lot of money in this market, but this product isn't going to capture any of it.
People are paying a lot of money for 3D holographic displays of their virtual AI friends. People pay for low-latency (local!) AI inference engines to run AI companions. People pay monthly for AI companions.
This product doesn't work within any of those ecosystems and won't capture share in any of those markets.
With $8m in funding (legit impressed they got a physical product out for that price though, good job on that!) I'd go in a completely different direction:
Sell people a box that runs LLMs locally, built around Intel's new 24GB Arc card. That can run a conversational LLM + a high-quality TTS engine w/o issue. For recurring revenue, charge $10 a month for a dynamic DNS service that also comes with a smartphone app, so people can chat with their LLM from anywhere.
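To make the box idea concrete: llama.cpp's server and Ollama both already expose an OpenAI-compatible chat endpoint on localhost, so most of the glue code is trivial. The one non-trivial bit is keeping a rolling conversation history that fits a small model's context window. A rough sketch (the endpoint, field names beyond the OpenAI chat format, and the word-based token estimate are my assumptions, not anyone's spec):

```python
# Sketch: rolling chat history for a local, OpenAI-compatible LLM endpoint
# (e.g. llama.cpp's server or Ollama on localhost). Token counting here is
# a crude word-based estimate; a real box would use the model's tokenizer.

def estimate_tokens(message: dict) -> int:
    """Very rough token estimate: ~1.3 tokens per whitespace-separated word."""
    return max(1, int(len(message["content"].split()) * 1.3))

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the most recent turns that fit `budget`."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(estimate_tokens(m) for m in system)
    for m in reversed(turns):            # walk newest-first
        cost = estimate_tokens(m)
        if used + cost > budget:
            break                        # oldest turns fall off
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

def build_payload(messages: list[dict], budget: int = 4096) -> dict:
    """Body for a POST to e.g. http://localhost:8080/v1/chat/completions."""
    return {
        "model": "local",                # llama.cpp ignores this; Ollama wants a real model name
        "messages": trim_history(messages, budget),
        "stream": True,                  # stream tokens straight into the TTS engine
    }
```

Streaming matters here: feeding tokens into the TTS engine as they arrive is what keeps voice latency feeling local instead of cloud-roundtrip slow.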
Have an add-on smart speaker (an ESP32 + microphone array will do for input) that allows always-on ambient communication with a customer's AI companion in their house. Also have a desktop app that works over local wifi.
Make sure you support the existing ecosystem of AI companions and display tech. People who pay $600+ for 3D displays for their AI companions (https://www.kickstarter.com/projects/dipal-d1/dipal-d1-world...) aren't going to balk at $1,200 for an all-in-one package that ensures 100% uptime and independence from the whims of cloud-based providers.
I'd then start adding functionality. Tool calling with small models is getting better and better. Tool call definitions in RAG can do some impressive stuff.
There is a lot of potential to actually help people. To notice when they are in a bad place and help get them out of it. To interrupt doomscrolling and spiraling thought patterns. Everyone is so obsessed with SaaS AI solutions that we are overlooking what a personal AI revolution could look like.
I describe some use cases in a blog post at https://meanderingthoughts.hashnode.dev/lets-do-some-actual-... but there is more that can be done!