
I used Quick Research and it was pretty cool. A couple of caveats to keep in mind:

1. It answers using only the crawled sites; you can't make it crawl a new page.

2. It doesn't use a page's search function automatically.

This is expected, but it doesn't hurt to keep in mind. I think it'd be pretty useful: you could ask for recent papers on a site, the engine could use Hacker News' search function, and then Kagi would crawl the resulting pages.



What exactly do you mean by "You can't make it crawl a new page"? It has the ability to read webpages, if that is what you're referring to.


This query's results are wrong:

"""

site:https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal... recent developments in ai?

"""

Also, when testing, if you know a piece of information exists on a website but it doesn't show up in the query results, you have no tools to steer the engine to work more effectively. In a real scenario you don't know what the engine missed, but it would be useful to steer it in different ways and see how that changes the end result. For example, if you're planning a trip to Japan, maybe you want the AI to weight certain categories (nature, nightlife, places) differently, alongside controlling how much time it spends crawling, or whether it favors more niche information versus more closely related information.
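To make the steering idea concrete, here's a minimal sketch of what such knobs could look like. None of these names are real Kagi parameters; `ResearchSteering`, `category_mix`, `crawl_budget_s`, and `niche_bias` are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchSteering:
    """Hypothetical knobs for steering a research engine's crawl.

    These are illustrative only, not an actual engine API.
    """
    # Fraction of results drawn from each category; should sum to 1.0.
    category_mix: dict = field(default_factory=lambda: {
        "nature": 0.5, "nightlife": 0.2, "places": 0.3})
    crawl_budget_s: int = 60   # max seconds spent crawling
    niche_bias: float = 0.7    # 0.0 = mainstream sources, 1.0 = niche sources

    def validate(self) -> None:
        total = sum(self.category_mix.values())
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"category_mix sums to {total}, expected 1.0")
```

A user could then dial `niche_bias` up on a second run of the same query and diff the two result sets to see what the engine missed the first time.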


Yup, pasting in a URL will cause it to fetch the page.


Their agents can natively read files and webpages from URLs. It's so convenient that I've implemented an identical feature for our product at work.
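The "read a webpage from a URL" feature can be sketched in a few lines of stdlib Python: fetch the page, then strip tags to get the visible text you'd hand to the model. This is my own minimal version, not Kagi's implementation; `page_text` and `TextExtractor` are names I made up.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text outside script/style blocks.
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def page_text(url: str, limit: int = 200_000) -> str:
    """Fetch a URL and return its visible text, capped at `limit` bytes."""
    with urlopen(url) as resp:
        html = resp.read(limit).decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

A production version would add readability-style boilerplate removal and chunking before the text reaches the model's context window.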




