
I'm a big fan of modern JavaScript frameworks, but I don't fancy SSR, so I've been experimenting with crawling my own sites to pre-render them for uploading to static hosts, without having to do SSR. This is the result.


For a long crawling task, if the process exits or breaks for any reason, does it save its progress and resume on the next run?


The README says:

> Can resume if process exit, save checkpoint every 250 urls
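The README doesn't show how the checkpoint works internally, so here's a minimal sketch of the general pattern: persist the visited set and the frontier to disk every N URLs, and reload that file on startup. The file name, checkpoint shape, and `crawlOne` helper are all assumptions for illustration, not the tool's actual code:

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Hypothetical checkpoint file and shape; the real tool's format may differ.
const CHECKPOINT_FILE = "crawl-checkpoint.json";

interface Checkpoint {
  visited: string[]; // URLs already fetched
  frontier: string[]; // URLs discovered but not yet fetched
}

// Hypothetical stand-in for the real fetch-render-extract step.
async function crawlOne(url: string): Promise<string[]> {
  const res = await fetch(url);
  await res.text();
  // The real tool would render the page and extract links; this stub returns none.
  return [];
}

function loadCheckpoint(seed: string): Checkpoint {
  if (existsSync(CHECKPOINT_FILE)) {
    return JSON.parse(readFileSync(CHECKPOINT_FILE, "utf8"));
  }
  return { visited: [], frontier: [seed] };
}

async function crawl(seed: string, checkpointEvery = 250): Promise<void> {
  const state = loadCheckpoint(seed);
  const visited = new Set(state.visited);
  const frontier = state.frontier.filter((u) => !visited.has(u));
  let sinceCheckpoint = 0;

  while (frontier.length > 0) {
    const url = frontier.shift()!;
    if (visited.has(url)) continue;

    const newUrls = await crawlOne(url);
    visited.add(url);
    frontier.push(...newUrls.filter((u) => !visited.has(u)));

    // Persist progress every N URLs so an interrupted run can resume.
    if (++sinceCheckpoint >= checkpointEvery) {
      writeFileSync(
        CHECKPOINT_FILE,
        JSON.stringify({ visited: [...visited], frontier })
      );
      sinceCheckpoint = 0;
    }
  }
}
```

The trade-off the next comment raises follows directly from `checkpointEvery`: a larger interval means fewer disk writes but more lost work if the crawl dies between checkpoints.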


Nice. Better to make it a command-line option with a default value, though; 250 is too many for large files on a slow connection.
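Exposing the interval could be as simple as one flag parsed with Node's built-in `util.parseArgs`. The `--checkpoint-interval` name is an assumption here; the default keeps the current 250:

```typescript
import { parseArgs } from "node:util";

// Hypothetical flag name; defaults to the currently hard-coded 250.
const { values } = parseArgs({
  options: {
    "checkpoint-interval": { type: "string", default: "250" },
  },
});

const checkpointEvery = Number(values["checkpoint-interval"]);
if (!Number.isInteger(checkpointEvery) || checkpointEvery < 1) {
  console.error("checkpoint-interval must be a positive integer");
  process.exit(1);
}

// Usage: node crawl.js --checkpoint-interval 25
console.log(`Saving a checkpoint every ${checkpointEvery} URLs`);
```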



