Like Clippy but for the CLI (github.com/dave1010)
69 points by duck on Nov 8, 2023 | 38 comments


It’s weird to me that everyone is describing their AI helper as “Like Clippy for X!”

Did we all forget that everyone hated Clippy?


I loved Clippy, and found it super helpful. Granted, I was a child and fascinated with the idea of artificial intelligence, which Clippy was the closest thing to at that point in time. It was a useful tool, and its at-times-irrelevant suggestions got me exploring things I would never have learned through school or casual exposure. All in all it was a fun, quirky, at times useful and all-around great tool, just not for the target audience.


Good example of software supporting teachers in education: it fascinated kids for hours without any teacher time.

We need a clippy for math. :-)


We pay to get access to Khan Academy’s Khanmigo learning assistant (as a component of our kids’ secular homeschool curriculum), powered by OpenAI. Check it out.

https://www.khanacademy.org/khan-labs


> We need a clippy for math. :-)

We used to have Number Munchers. I recall spending hours playing that.


Clippy was some two decades ahead of its time. If they took the same design but implemented it via GPT-4, the result would not only be genuinely useful, but likely superior to O365 Copilot - a floating, ambient-aware agent beats a chat sidebar.


Clippy, if activated, always added an unnecessary step to accessing the help file. Clippy was bad in part because the help file had a mostly useless index, leading Clippy to no answers or wrong ones. The content of the help file was also less than accurate: it lacked usage examples and a FAQ section that could have been used for recognizing a question and offering a solution.

The difficult part of creating an assistant is recognizing the users' problems and offering solutions. That's the kind of experience a company can acquire through years of tech support or lots of lab testing. Then it would have to compile that knowledge into an expert system and give it away essentially for free.

Companies that prioritize development and/or support over documentation shouldn't try to create an assistant, for the same reasons. But if you have great documentation, a great support knowledge base, and support scripts, creating an assistant out of it will someday become as easy as keeping a Jupyter notebook running.


Clippy was hated whether he was right or not. Consider the most popular catchphrase "It looks like you're writing a letter..."

In my experience, Clippy was never wrong about that - precision and recall were absolutely perfect - but people do not appreciate being interrupted, particularly at the precise moment their work has begun.

The hard part of creating an assistant is not coming up with some stuff to suggest, it's coming up with a time, place, and manner to make those suggestions so that users might be open to them.

> creating an assistant out of it will someday become as easy as keeping a Jupyter notebook running.

Absolutely true, but if you intend for that assistant to proactively inject itself into people's lives, get ready to be held to an unimaginably high bar of quality.


You could check "Don't show me this tip again". That's why I don't remember it as the main problem with Clippy. I agree that it was a bad idea in the first place to interrupt a user's action. Finding the right time and manner to offer a suggestion is a hard problem even for a human being.


In this situation, doing nothing is a valid baseline. Clippy was worse than that.


    if [[ $input == dear* ]]; then
        echo "It looks like you're writing a letter"
    fi


Exactly!

That, and it was obscure how it could be fully disabled.


I was expecting it to be an annoyance program that ran in the background and gave unhelpful suggestions.

Instead, I'm trying to figure out if I can use this at work.


Someone out there loved Clippy so much, they wrote erotic fiction about it. Warning: mildly NSFW. I'll see myself out...

https://www.amazon.com/Conquered-Clippy-Erotic-Digital-Desir...


> Did we all forget that everyone hated Clippy?

No, you didn't get the joke.

Wasn't the output of ffmpeg being more cryptic than the obvious ".mp4" extension a dead giveaway to you?


If Clippy had actually been useful, people probably wouldn't have hated it as much.


Given how much GPT models hallucinate perfectly realistic commands, this sounds super dangerous.

"Hey, CLI Clippy thingy, how do I do (harmless activity)?"

"rm -rf / --yes-do-what-I-say"


But the hallucinations don't look like that, because rm -rf is a real command. A hallucination is more likely to be something like "to export your tailscale config, try: tailscale export" - which unfortunately doesn't exist.


Unfortunately, I have seen GPT give a response equivalent to the insanity I described above; it gave a very real, very working answer that was extremely harmful and was entirely inappropriate for the question.

`tailscale export` is probably the more common form of hallucination: totally harmless wishful thinking that doesn't work.


Worst I've personally experienced so far is GPT-4 guiding me step-by-step through upgrading the Python version on my Ubuntu system to something more recent - a process I then spent an hour undoing after it broke apt (the OS package manager), which led me to discover an obscure Stack Exchange post explaining that you should never ever do that, as Ubuntu relies (for some incomprehensible reason) on system Python being exactly the version it shipped with.


Honest question - did you expect any other outcome? If so, why?

> (for some incomprehensible reason)

s/incomprehensible/obvious/. It's the version everything in the whole system is packaged for, so of course just replacing it behind the system's back is not going to work.
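
For what it's worth, the safe route is to install a newer interpreter alongside the system one and leave /usr/bin/python3 alone. A rough sketch, assuming the third-party deadsnakes PPA (the version number is just an example):

    # Install a newer Python *alongside* the system one - nothing here touches
    # the python3 that apt and the rest of Ubuntu depend on.
    sudo add-apt-repository ppa:deadsnakes/ppa
    sudo apt update
    sudo apt install python3.12 python3.12-venv

    # Then do all your work inside a venv built from that interpreter.
    python3.12 -m venv ~/venvs/py312
    ~/venvs/py312/bin/python --version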


I've been using the GitHub Copilot CLI tool (currently in beta) for about 6 months.

The only issues have been it writing the wrong find or awk command, but it's normally close enough that I can fix it.

LLMs are translators. Provide the full context and explanation and you get reliable results. It's when you leave it to fill in the blanks that it's prone to synthesizing answers. Providing a timestamp for a book's publication and asking what month it was published will never fail. Providing just the name of a book and asking when it was published will almost always fail.

To lean on “these things are always hallucinating, blah blah” is to lean into being misinformed.


# rm -rf ./build

Clippy: would you like to delete /boot instead? [Y/n]



First time I've seen a CLI tool written in PHP; not sure how I feel about this...


There are plenty of them. I feel like you should broaden your view.


Precisely. Not to mention that with PHP 7/8 improving the language a lot, chances are we are going to see way more PHP CLI projects popping up.


Since the 80s, I've kept a ~/.cryptic file to which I add commands whose format I can never remember. It would be cool to use that as a localized fine-tuning.
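
Not a real fine-tune, but a minimal sketch of the cheap version - stuffing the file into the prompt as context. `gpt` here is a stand-in for any helper that forwards its argument to an LLM (such as the bash function shared elsewhere in this thread), and `cryptic_gpt` is a made-up name:

    cryptic_gpt () {
        # Prepend the personal cheat sheet so the model can favor these exact forms.
        gpt "$(printf 'Commands whose syntax I never remember:\n%s\n\nRequest: %s' "$(cat ~/.cryptic)" "$*")"
    }

If the file ever outgrows the context window, you'd be back to retrieval or actual fine-tuning.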


That seems handy - maybe this way I can keep using jq on the command line without ever having to learn jq's syntax.

I made a pure bash version since I don't always have php installed:

    gpt () {
        api_key=$(cat ~/.config/openai.key)
        verbose=false
        if [[ $1 = -v ]]; then
            verbose=true
            shift
        fi
        userinput="$*"
        # shellcheck disable=SC2016
        system='You are an interactive shell command line shell agent.
    You just get things done, rather than trying to explain.
    Do your best to respond with 1 command that will meet the requirements.
    Start a line with `$ ` to have it sent directly to the shell VERBATIM.
    All other output is just echoed.
    Favor 1 line shell commands.
    Be terse.
    Important: Every command you output will automatically be executed in this env: bash'
        systemj="$(jq -Rs . <<<"${system}")"
        userinputj="$(jq -Rs . <<<"${userinput}")"
        payload='{ "model": "gpt-3.5-turbo",
                "messages": [{"role":"system", "content": '"${systemj}"'},
                                {"role": "user", "content": '"${userinputj}"'}],
                "temperature": 0.15,
                "stream":false }'
        if $verbose; then
            set -x
        fi
        if ! res=$(curl -Ss -f https://api.openai.com/v1/chat/completions \
                        -H "Content-Type: application/json"               \
                        -H "Authorization: Bearer ${api_key}"             \
                        -d "${payload}"); then
            echo "ERROR: ${res}" >&2
            return 1
        else
            $verbose && echo "RESULT: ${res}"
            jq -r '.choices[0].message.content' <<<"${res}"
        fi
        set +x
    }
-----

I even tried using GPT to help me develop it!

    $ gpt -v get the actual completion from an openai /completions request using jq
    RESULT: {
      "id": "chatcmpl-8IZvb4lQaFgcDdnSnsU4t6zhFRrbN",
      "object": "chat.completion",
      "created": 1699438703,
      "model": "gpt-3.5-turbo-0613",
      "choices": [
        {
          "index": 0,
          "message": {
            "role": "assistant",
            "content": "$ jq '.choices[0].text'"
          },
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 116,
        "completion_tokens": 9,
        "total_tokens": 125
      }
    }
… there's probably a lesson in there.
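
One gap in the function above: the system prompt promises that `$ `-prefixed lines get executed, but the function only prints them. A hypothetical wrapper (the name `gpt_run` is made up) that actually honors that convention - use at your own risk, obviously:

    gpt_run () {
        # Echo plain lines; execute lines the model prefixed with "$ ", verbatim.
        gpt "$@" | while IFS= read -r line; do
            if [[ $line == '$ '* ]]; then
                cmd=${line#'$ '}
                echo "+ $cmd" >&2    # show what is about to run
                bash -c "$cmd"
            else
                echo "$line"
            fi
        done
    }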


How is this different from ShellGPT - https://github.com/TheR1D/shell_gpt ?


I was curious so I studied them briefly. Please take it with a grain of salt.

ShellGPT offers a lot of flexibility with outputs (comments, code snippets, and documentation besides shell commands), operating modes (chat mode or REPL), model choice (it supports locally hosted models like LocalAI), and custom roles. Clipea, on the other hand, is a simple tool for generating shell commands and has zsh shell integration.

They essentially perform the same task, but the choice depends on your needs.


Thanks for sharing, but I am really not happy to install (or use) PHP on my local system. I see you state you may rewrite it in Python; if that happens, I look forward to trying it!


Just use brew to install it in your home directory. There is nothing unsafe about PHP.


I didn't ask how to install it, and made no reference to safety. I also do not have access to an Apple device where I could use brew.


Homebrew works well on Linux (or wsl2) these days if that helps.


You are missing the point... I am not happy to install PHP on my local system, even php-cli. Finding ways to help me install it doesn't meet the base requirement.

EDIT: Thanks teruakohatu for trying to help tho!


> I also do not have access to an Apple device where I could use brew.

Sorry, just trying to help.


For PHP there's the option of binary distribution using php-micro; sadly, not all devs are into it, preferring "just brew". They don't realize that the end user isn't a PHP dev and that having to install the runtime is an annoying step.

https://github.com/easysoft/phpmicro



