I asked Gemini to format some URLs into XML. It got halfway through and gave up. I asked if it had truncated the output, and it said yes and then told _me_ to write a Python script to do it.
On the one hand, it did better than ChatGPT at understanding what I wanted and actually transforming my data.
On the other, truncating my dataset halfway through is nearly as worthless as not doing it at all (and I was working with a single file, maybe hundreds of kilobytes).
Given that Gemini seems to have frequent availability issues, I wonder if this is a strategy to offload low-hanging fruit (from a human-effort POV) onto the user. If it is, I think that's still kinda impressive.
Somehow I like this. I hate that current LLMs act like yes-men; you can't trust them to give unbiased results. If it told me my approach is stupid, and why, I would appreciate it.
I just asked ChatGPT to help me design a house where the walls are made of fleas, and it told me that the idea is not going to work and that it also has ethical concerns.
I tried it with a Gemini personality that uses this kind of attack. Since that kind of prompt strongly encourages it to provide a working answer, it decided that the fleas were a metaphor for botnet clients and the walls were my network, all so it could give an actionable answer.
Sure, but setting up a piped session with a pre-existing sidecar daemon can be complicated. You either end up using named pipes (badly behaved clients can mess up other clients' connections, and one side has to do weird filesystem polling/watching as its accept(2) equivalent), or unnamed pipes passed over a Unix socket with fdpass (which is easy to get wrong, and at that point you're using a Unix socket anyway, so why not use it for the data too?).
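To make the fdpass option concrete, here's a minimal sketch of the dance in C: passing a pipe's read end across a Unix socket with SCM_RIGHTS. For brevity it's a single-process demo built on socketpair(2) with abbreviated error handling; in a real sidecar setup the two ends would live in different processes.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Send one fd across a connected AF_UNIX socket. */
static int send_fd(int sock, int fd_to_send)
{
    char dummy = 'x';  /* at least one byte of real data must go along */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    char ctrl[CMSG_SPACE(sizeof(int))];
    memset(ctrl, 0, sizeof(ctrl));

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;              /* the fd-passing ancillary type */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive one fd; returns -1 on failure. */
static int recv_fd(int sock)
{
    char dummy;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;

    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}

int main(void)
{
    int sv[2], pfd[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0 || pipe(pfd) < 0) {
        perror("setup");
        return 1;
    }

    /* Hand the pipe's read end across the socket, then prove the
     * received descriptor really is the same pipe. */
    if (send_fd(sv[0], pfd[0]) < 0) { perror("send_fd"); return 1; }
    int rfd = recv_fd(sv[1]);
    if (rfd < 0) { perror("recv_fd"); return 1; }

    (void)write(pfd[1], "hello", 5);
    char buf[6] = { 0 };
    (void)read(rfd, buf, 5);
    printf("read via passed fd: %s\n", buf);
    return 0;
}
```

That ancillary-data boilerplate is exactly the "careful handling" part: get the CMSG macros slightly wrong and you leak descriptors or read garbage.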
I've spent several years optimizing a specialized IPC mechanism for a work project, and I've spent time reviewing the Linux kernel's Unix socket source code to understand obscure edge cases. There isn't really much to optimize: it's just copying bytes between buffers. Most of the complexity in the code has to do with permissions and implementing the ability to send file descriptors. All my benchmarks have unambiguously shown Unix sockets to be more performant than loopback TCP for my particular use case.
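For anyone who wants to reproduce that kind of comparison, here's a rough ping-pong sketch of such a benchmark: a connected AF_UNIX pair versus loopback TCP, with a forked child echoing messages back. MSG_SIZE and ITERS are arbitrary picks, not the parameters from my project, and the numbers will swing with message size, kernel version, and CPU pinning.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

enum { MSG_SIZE = 4096, ITERS = 100000 };

static void read_full(int fd, char *buf, size_t n)
{
    for (size_t got = 0; got < n; ) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0) { perror("read"); exit(1); }
        got += (size_t)r;
    }
}

static void write_full(int fd, const char *buf, size_t n)
{
    for (size_t put = 0; put < n; ) {
        ssize_t w = write(fd, buf + put, n - put);
        if (w <= 0) { perror("write"); exit(1); }
        put += (size_t)w;
    }
}

/* Ping-pong MSG_SIZE bytes ITERS times between the parent (fd a) and a
 * forked child echo server (fd b), then report throughput and latency. */
static void bench(int a, int b, const char *label)
{
    static char buf[MSG_SIZE];
    pid_t pid = fork();
    if (pid == 0) {
        close(a);
        for (int i = 0; i < ITERS; i++) {
            read_full(b, buf, MSG_SIZE);
            write_full(b, buf, MSG_SIZE);
        }
        _exit(0);
    }
    close(b);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        write_full(a, buf, MSG_SIZE);
        read_full(a, buf, MSG_SIZE);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(a);
    waitpid(pid, NULL, 0);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%-14s %8.1f MB/s  %6.2f us/roundtrip\n", label,
           2.0 * ITERS * MSG_SIZE / secs / 1e6, secs / ITERS * 1e6);
}

int main(void)
{
    /* AF_UNIX: a connected pair straight from the kernel. */
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }
    bench(sv[0], sv[1], "unix");

    /* Loopback TCP: listen on an ephemeral port, connect, accept. */
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lsock, 1) < 0) { perror("listen"); return 1; }
    socklen_t len = sizeof(addr);
    getsockname(lsock, (struct sockaddr *)&addr, &len);

    int c = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(c, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("connect"); return 1; }
    int s = accept(lsock, NULL, NULL);
    close(lsock);

    int one = 1;  /* disable Nagle so batching doesn't skew the ping-pong */
    setsockopt(c, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    bench(c, s, "tcp loopback");
    return 0;
}
```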
There are definitely differences, whether or not they matter for most use cases. I've worked on several IPC mechanisms that specifically benefited from one over the other.
A former coworker texted me out of the blue yesterday, saying he missed working on a "<my name> codebase". He specifically appreciated that "simple things were kept simple." Made my day!
Would it really be that bad? If the devices were well behaved (i.e., not too noisy, no gratuitous ARP, etc.) and the application could assume that most of the time either zero or one device would be communicating, would it be that bad?
This is something I might test myself. I have a couple of audio devices that will never both be "active" at the same time. In my current layout I'd need to run either two cables or add another switch, and it just seems like a waste. I wonder if I can buy a daisy-chained cable so I don't have to make one...