Hacker News: arvindveluvali's comments

Awesome! Happy to get you demo access if you want to use Fresco to make those punch lists, just shoot me a message at arvind@fresco-ai.com


Great advice—there’s definitely a lot of UI work we want to do :)


Totally agree! That's what we've observed, as well.


Thanks for the flag! Absolutely, there are many verticals where we think Fresco can be useful. Would love to hear your thoughts on price point.


I don't have firm thoughts on the price point, but two examples of real-world use cases would be:

* There is a food production company where the QAs do a monthly walkaround. It takes approx. 2-3 hours to type up notes afterwards. I'm in the UK and QAs are paid approx. £32k, so 3 hours of their time works out to roughly £50 of benefit.

* Lots of logistics companies do daily walks with shift/team leaders. While these aren't usually typed up, it would be great to document them as actions and a task list to complete. The alternative to the software would be getting a team leader to write up notes after the walk, which would take maybe 30 minutes. A team leader might be on £28k p.a., so it's cheaper to have them do it than to buy software at $12k p.a.

The cost of the software would need to be a fraction (e.g. 10%) of what it is at the moment though for these sorts of use-cases to pay off.

Maybe a more generic version of the software not targeted at the construction niche could be something like £49 per month per user? Sounds more like the sort of level I would expect.

But $1k is way out of the reasonable range for my use case, and that's so different from your current business model that I imagine it's irreconcilable.


Makes sense! We’re pretty focused on the construction vertical at this point but when we do expand into others I imagine we’ll be a bit creative with the pricing/features.


Great point! We're really relying on the superintendent's expertise, transcribing/compiling what they're saying rather than flagging code violations or other notables ourselves. We think analysis should be (for now, at least) the job of the highly trained and experienced superintendent, and our job is to take care of the transcription and admin that isn't really a good use of their time.


> our job is to take care of the transcription and admin that isn't really a good use of their time

that's the correct focus, IMO; let the experts be experts rather than pretend that LLMs are all-knowing

nicely done


Great answer, and good proper use of the benefits of LLM. Let the LLM do the grunt work and let the expert human be the expert. Best of luck to you!


This is a really good point, but we don't think hallucinations pose a significant risk to us. You can think of Fresco like a really good scribe; we're not generating new information, just consolidating the information that the superintendent has already verbally flagged as important.


This seems odd. If your scribe can lie in complex and sometimes hard to detect ways, how do you not see some form of risk? What happens when (not if) your scribe misses something and real world damages ensue as a result? Are you expecting your users to cross check every report? And if so, what’s the benefit of your product?


We rely on multimodal input: the voiceover from the superintendent, as well as the video. The two essentially cross-check one another, so we think the likelihood of lies or hallucinations is incredibly low.

Superintendents usually still check and, if needed, edit/enrich Fresco’s notes. Editing is way faster/easier than generating notes net new, so even in the extreme scenario where a supe needs to edit every single note, they’re still saving ~90% of the time it’d otherwise have taken to generate those notes and compile them into the right format.


Even just audio transcription can hallucinate in bizarre ways. https://arstechnica.com/ai/2024/10/hospitals-adopt-error-pro...


This is the wrong response. It doesn't matter whether you've asked it to summarize or to produce new information; hallucinations are always a question of when, not if. LLMs don't have a "summarize mode": their mode of operation is always the same.

A better response would have been "we run all responses through a second agent who validates that no content was added that wasn't in the original source". To say that you simply don't believe hallucinations apply to you tells me that you haven't spent enough time with this technology to be selling something to safety-critical industries.
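For illustration, the crudest form of that second-pass validation is a grounding check: score each generated note against the original transcript and flag anything with little support in the source. This sketch uses word overlap purely to show the shape of the idea (the function name, sample text, and thresholds are all hypothetical; a real validator would use an entailment model or a second LLM, not lexical matching):

```python
def grounding_score(claim: str, source: str) -> float:
    """Fraction of a claim's content words that also appear in the source transcript.

    A low score suggests the note contains information the speaker never said.
    """
    stop = {"the", "a", "an", "of", "to", "and", "is", "was", "in", "on", "are", "but"}
    claim_words = {w.strip(".,").lower() for w in claim.split()} - stop
    source_words = {w.strip(".,").lower() for w in source.split()}
    if not claim_words:
        return 1.0
    return len(claim_words & source_words) / len(claim_words)

# Hypothetical walkthrough transcript and two candidate notes.
transcript = ("north wall framing looks complete but the anchor bolts "
              "on gridline B are missing")
note_grounded = "Anchor bolts missing on gridline B."
note_hallucinated = "Electrical rough-in passed inspection."

assert grounding_score(note_grounded, transcript) > 0.8      # well supported
assert grounding_score(note_hallucinated, transcript) < 0.3  # flag for review
```

Notes scoring below the threshold would be routed back for human review rather than shipped in the report.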


"Concerns about medical note-taking tool raised after researcher discovers it invents things no one said..."

https://www.tomshardware.com/tech-industry/artificial-intell...


Absolutely. There are a ton of industries where people conduct physical site inspections and turn those into structured documents; as in construction, those take a long time to make! We've actually had some inbound from civil engineers, and if we can be useful to folks in your network, we'd love to connect with them.

