Hacker News | mozillas's comments

I ran the 7B Vicuna (ggml-vic7b-q4_0.bin) on a 2017 MacBook Air (8GB RAM) with llama.cpp.

Worked OK for me with the default context size; 2048, like you see in most examples, was too slow for my taste.


Given the current price (mostly free) of public LLMs, I'm not sure what the use cases of running one at home are yet.

OpenAI's paid GPT4 has few restrictions and is still cheap.

... Not to mention GPT4 with the browsing feature is vastly superior to any of the models you can run at home.


The point for me personally is the same as why I find it so powerful to self host SMTP, IMAP, HTTP. It’s in my hands, I know where it all begins and ends. I answer to no one.

For LLMs this means I am allowed their full potential. I can generate smut, filth, illegal content of any kind for any reason. It’s for me to decide. It’s empowering, it’s the hacker mindset.


Many would-be users can't send their data to OpenAI. Think HIPAA and other laws restricting data sharing. Federation or distribution of the models for local training is the other solution to that problem.


I think it's mostly useful if you want to do your own fine tuning, or the data you are working with can't be sent to a third party for contractual, legal, or paranoid reasons.


I’m working on an app to index your life, and having it local is a huge plus for the people I have using it.


Sounds interesting, got a link?


Not yet…


GPT4 API is still not universally available, for starters.


I wish more companies did this. I cannot delete my Coinbase account because I must provide a phone number to log in (which was not needed when I made the account). SendGrid does the same thing.

Yahoo handles it well, I think. They will send an email to your recovery email address to inform you that unless you log into your account in the next 30 days or so, it will be closed.


A Slack bot could reply to solitary "Hello" messages with a link to this URL. Although that might come across as abrasive to some people.


You could try making something like BuiltWith https://builtwith.com/ycombinator.com. They have a fairly simple front-end (nothing happening in real time) and you could offer stats about the top 10K Alexa websites, which makes a project look better than having some dummy data, I think.

I also remember a side project that involved crawling HN and displaying the most mentioned books in the comments. I think the author was making a bit of money by having a link to Amazon for each book (using an affiliate code). You could do something similar but for popular Wikipedia articles, for example. Or you could use Reddit as a source and search for sneakers or music instead of popular books.
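The "most mentioned books" idea mostly boils down to counting title matches over a pile of comments. Here's a toy sketch with made-up data; a real version would pull comments from the HN API and match against a proper list of titles:

```python
import re
from collections import Counter

# Hypothetical sample data. In a real project the comments would come from
# a crawler or the HN API, and the title list from somewhere like Goodreads.
BOOK_TITLES = ["SICP", "The Mythical Man-Month", "Clean Code"]
comments = [
    "You should read SICP, it changed how I think.",
    "Clean Code is overrated; SICP holds up better.",
    "The Mythical Man-Month is still relevant.",
]

def count_mentions(comments, titles):
    counts = Counter()
    for text in comments:
        for title in titles:
            # Crude case-insensitive substring match; a real crawler would
            # want fuzzier matching (editions, abbreviations, typos).
            if re.search(re.escape(title), text, re.IGNORECASE):
                counts[title] += 1
    return counts

print(count_mentions(comments, BOOK_TITLES).most_common())
```

From there it's just sorting by count and attaching an affiliate link per title.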

Of course, I don't know if these ideas are too simple or too complicated, interesting or not for you and the person who will give you a grade.


Yes, this could be cool; it does not seem too simple or too hard.

On a related note a lot of different comments seem to be mentioning scraping some sort of data from a website on a continual basis. Would I just create a script that is attached to an extra worker that would send its data to the actual database that would in turn be read by the web server? Or would I want to just have the web server itself get the data and write to the database?


A couple of years ago I wrote a feed reader which would check every hour for new items in a few hundred feeds. This script was running on a $5/mo server (initially it ran on an old laptop I had, for easier debugging) and it would post the new data to the database located on the website server. So I was using two machines, one for the crawler and one for the website, and two databases too, I think. The one for the feed crawler was very simple, with only the list of URLs and the latest item URL, so I wouldn't show an item twice. That was the theory, at least; feeds are a bit more complicated in real life.

That's what I did, but I might have had different requirements. If you don't have a lot to crawl and you don't have to do it very often (once a week or less), you can probably space out the requests enough so that the server doesn't feel it. It helps a lot if you use some caching for the website itself in this case. I think it depends a lot on the requirements of the project. But using two machines is safer, I think, although it might complicate things a bit.
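The crawler-side bookkeeping described above can be sketched as a tiny local table that only remembers the latest item seen per feed, so only new items get forwarded to the main site database. This is a minimal sketch; the real feed parsing (e.g. with a library like feedparser) and the posting step are assumed:

```python
import sqlite3

def init_db(conn):
    # One row per feed: its URL and the newest item URL we've already handled.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS feeds (url TEXT PRIMARY KEY, last_item TEXT)"
    )

def new_items_since(conn, feed_url, items):
    """Return items newer than the last one recorded, then update the marker.

    `items` is assumed newest-first, the usual order in RSS/Atom feeds.
    """
    row = conn.execute(
        "SELECT last_item FROM feeds WHERE url = ?", (feed_url,)
    ).fetchone()
    last = row[0] if row else None
    fresh = []
    for item in items:
        if item == last:
            break  # everything after this was seen on a previous run
        fresh.append(item)
    if fresh:
        conn.execute(
            "INSERT OR REPLACE INTO feeds (url, last_item) VALUES (?, ?)",
            (feed_url, fresh[0]),
        )
    return fresh
```

An hourly cron job on the crawler box would call `new_items_since` per feed and POST whatever comes back to the website's database.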

Keep in mind that there's probably better technical advice out there than mine. I'm a hobbyist developer.


Buy a website that makes 1K EUR a month. Before that, do some thinking and reading about a niche that you might know something about or care about. Use a serious broker to minimize the chances of getting scammed. Try to grow the website using some of your time and skills, or use the money left over to hire people who cost less per hour than what you make.

If things go well, you can keep the website or sell it for more than you bought it. If the revenue doesn't improve or becomes smaller you can sell the website and get some of the money back. Either way, you'll learn some new things.


> Buy a website that makes 1K EUR a month.

That's a 40% yearly ROI. Do people really sell at such valuation?


Yes, because of risk. Who says it'll still be making anything next year?


Sounds like a market for lemons (perhaps that explains the very low valuations). For an owner, barring some personal circumstances, it makes little sense to sell unless he foresees problems in the near-ish future. Otherwise, as holding the company for just 2.5 more years doubles his profits, it sounds smart to hold on to it for as long as possible and only sell when he feels that a decline is coming.
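The numbers in this subthread line up: a site selling for 2.5 years of profit is the same thing as a 40% yearly ROI. A quick sketch (the 30K EUR price is hypothetical, just the value implied by the figures above):

```python
def valuation_metrics(price_eur, monthly_revenue_eur):
    """Return (yearly ROI, payback period in years) for a site purchase."""
    annual = monthly_revenue_eur * 12      # 1K EUR/mo -> 12K EUR/yr
    return annual / price_eur, price_eur / annual

# A 30K EUR price for a site making 1K EUR/month:
roi, payback_years = valuation_metrics(30_000, 1_000)
print(roi, payback_years)  # 0.4 (i.e. 40%/yr), 2.5 years to earn the price back
```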


Sometimes people need money ASAP, or get tired of the same stuff...

... life happens ;-)


Control+Shift+Power turns off the display, and by checking "Require password after sleep or screen saver begins" in System Preferences you get the same effect as locking, via a keyboard shortcut. https://apple.stackexchange.com/a/46170


Personally I prefer hot corners because I can lock it with one hand, sometimes as I'm in the middle of getting up.


Ctrl-cmd-q locks out of the box, no customization needed.


I do a shift modifier on mine and then just throw my pointer into the corner. Works well enough for me!


I had to sort of make my own. Each theme I tried had something that didn't work for me: the background was too light, or too dark, or I could barely see the text selection. So I settled on the Base16 Ocean Theme and tweaked it with this web app https://tmtheme-editor.herokuapp.com/


Is there no way to add a notification when the build is finished?

In my case, for python scripts that take a little more time to run, I add a `tput bel` at the end of the script. By doing this iTerm gives me a visual and a gentle sound notification that things are ready.


For a little more obvious notification, I like noti.

https://github.com/variadico/noti


Well, it is not actually the compilation but rather the build and reload of a webpage that takes time. But I get a small visual cue in Vivaldi when it's done.

However, sometimes I miss it due to my focus being elsewhere.


I've known this one for a while now; it shows up first when you search "front-end news" in Google https://frontendfront.com/


Nice one, I've bookmarked it, thanks :)


Billings 3 has a time tracking feature, but I don't believe it's supported anymore and they don't sell new licences.

A million years ago I used this https://github.com/rburgst/time-tracker-mac. But it hasn't really been updated in a long time https://code.google.com/archive/p/time-tracker-mac/downloads.

I haven't tried this one, but Tyme looks promising http://tyme-app.com/mac-2/. It seems it's a standalone app and has a 15 day trial, so you can test it out for a while. It's also only $18, so it's quite affordable.

