Hacker News: ah27182's comments

Are you going to address what he said or just continue to spew blatantly racist ad hominems?




Even as a DEI hire, DeepMind would get thousands upon thousands of applicants, so he would still need to be better than the other applicants.


I've been using LMStudio to run a local LLM (Qwen3-4B) to generate commit messages using this command:

```
git diff --staged --diff-filter=ACMRTUXB | jq -Rs --arg prompt 'You are an assistant that writes concise, conventional commit messages. Always start with one of these verbs: feat, fix, chore, docs, style, refactor, test, perf. Write a short!! message describing the following diff:' '{model:"qwen/qwen3-4b-2507", input:($prompt + "\n\n" + .)}' | curl -s http://localhost:1234/v1/responses -H "Content-Type: application/json" -d @- | jq -r ".output[0].content[0].text"
```
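If you use this often, one way to make it reusable is to split payload construction from the request in your shell profile. A sketch based on the command above (the function names are my own, not part of any tool):

```shell
# build_commit_payload: turn a staged diff on stdin into the JSON request body.
# Assumes jq is installed; model name and prompt mirror the one-liner above.
build_commit_payload() {
  jq -Rs --arg prompt 'You are an assistant that writes concise, conventional commit messages. Always start with one of these verbs: feat, fix, chore, docs, style, refactor, test, perf. Write a short!! message describing the following diff:' \
    '{model:"qwen/qwen3-4b-2507", input:($prompt + "\n\n" + .)}'
}

# ai_commit_msg: send the payload to the local LMStudio server and print the text.
ai_commit_msg() {
  git diff --staged --diff-filter=ACMRTUXB \
    | build_commit_payload \
    | curl -s http://localhost:1234/v1/responses -H "Content-Type: application/json" -d @- \
    | jq -r '.output[0].content[0].text'
}
```

Then `git commit -m "$(ai_commit_msg)"` commits with the generated message.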


Apple Shortcuts lets you run OCR on images you pass into it. Look for "Extract Text from Image".


Do I need to sign in when using the Docker container?


There's a version we call local mode, which is intended for engineers using it as part of their local debugging workflow: https://clickhouse.com/docs/use-cases/observability/clicksta...

Otherwise, yes, you can authenticate against the other versions with an email/password. (The email doesn't actually do anything in the open-source distribution; it's just a user identifier, but we keep it for consistency.)


The CLI for this feels extremely buggy. I'm attempting to build the application, but the screen is flickering like crazy: https://streamable.com/d2jrvt


Yeah, we have a PR in the works for this (https://github.com/appdotbuild/platform/issues/166); it should be fixed tomorrow!


Alright, sounds good. Question: what LLM does this use out of the box? Is it using the models provided by GitHub (after I give it access)?


If you run locally, you can mix and match any Anthropic/Gemini models. As long as it satisfies this protocol (https://github.com/appdotbuild/agent/blob/4e0d4b5ac03cee0548...), you can plug in anything.

We have a similar wrapper for local LLMs on the roadmap.

If you use the CLI only, we run Claude 4 + Gemini on the backend, with Gemini serving most of the vision tasks (frontend validation) and Claude doing core codegen.


We use both Claude 4 and Gemini by default (for different tasks). But the idea is you can self-host this and use other models (and even BYOM - bring your own models).


Average experience for AI-made/related products.


Exactly. Non-AI projects have always been easy to build without issues. That's why we have so many build systems. We perfected it the first try and then made lots of new versions based on that perfect Makefile.


Yeah, I'm not following why this approach is radically different from just doing that.


Anyone tried using this with aider yet? I like its reasoning, but it keeps messing up when it attempts to apply the commits it generates.


So I went ahead and tried running the example script with "A CHRISTMAS CAROL" using the "meta-llama-3.1-8b-instruct" and "text-embedding-nomic-embed-text-v1.5" models locally. How long should it take to extract the subgraphs with this kind of setup?


Love the setup. I'm sure one could build this with Apple Shortcuts too, since it offers a Shazam API.


Use teller.io; it's super easy to set up, IMO (compared to Plaid). I use it to sync my Chase bank and credit card statements to a Google Sheet.


teller.io doesn't work in any country except the United States.

