No, this was all Red Hat. IBM would never let consultants use something convenient like GitHub; they would be forced to use some crappy internal webapp powered by Watson.
I've never had a problem with loudness, in fact I only use them around 50% volume even walking outside next to a busy road. But I agree the audio quality is about the same as a phone speaker.
That's funny because iMessage search works quite well if you can find it buried in the interface. I have a feeling Apple themselves forgot it exists and hasn't gotten around to 'modernizing' it with AI yet.
Even funnier is, it was obscenely bad for years, and then it made a sudden jump to “pretty darn good”. My headcanon is that someone high up at Apple tried to search for a message, noticed how broken it was, and then assigned an entire engineering department to work on nothing but iMessage search for two weeks.
Now it feels like a cheat code, at least for verbatim searches (probably because the entire message database is now indexed, if I had to guess).
Seriously, try searching for the letter “e” and click “View All”. You will get effectively every message you’ve ever sent or received, in a single, reasonably scrollable list. For me it dates back to 2018.
I personally sent several scathing emails directly to directors about the issue. I have a long iMessage history, and there was a point where just entering a single character in the search field would lock up my Mac, let alone my older iPhone.
I have noticed and appreciate the change, so my headcanon is that they actually do read feedback. ¯\_(ツ)_/¯
If you type the name of the person, it should let you create a filter for "Messages with: Person". It should also pop up a filter bubble for photos. From there, I think you can type a query and it will search the photos by the text they contain. I don't think you can add your date filter, though.
Second way would be to open that conversation view, click on the contact icon at the top of the view, which should then bring you to a details page that lists a bunch of metadata and settings about the conversation (e.g. participants, hide alerts, ...). One of the sections shows all photos from that conversation. Browse that until you find the one you care about.
I admit I was wrong in my understanding of iMessage's capabilities.
I remembered its search sucking, and also it not working on all my devices, so I quit using it and regurgitated a stale criticism.
Still, the search is useless to me if I can't run it on my Linux desktop (like I can with email, Discord, and every other chat service I use). So I'd still say iMessage has a laughably lacking search, by virtue of it only working on iOS/macOS, when every other chat app I use offers at least some search on iOS/Android/Linux.
Picross (which is based on nonograms) is my favorite puzzle game. Side note for anyone who thinks we're close to AGI, try giving an LLM even a simple nonogram to solve.
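For anyone who hasn't tried one: a nonogram gives run-length clues for each row and column, and you fill in cells so every line matches its clues. As a rough sketch of what "a simple nonogram" means (my own toy brute-force code, not from any library or from the game), here's a solver for tiny grids:

```python
from itertools import product

def runs(line):
    """Lengths of consecutive runs of filled cells (1s) in a line."""
    out, n = [], 0
    for cell in line:
        if cell:
            n += 1
        elif n:
            out.append(n)
            n = 0
    if n:
        out.append(n)
    return out

def solve(row_clues, col_clues, width):
    """Brute-force solver: enumerate candidate rows, then check columns."""
    candidates = [
        [r for r in product((0, 1), repeat=width) if runs(r) == clue]
        for clue in row_clues
    ]
    for grid in product(*candidates):
        if all(runs(col) == clue for col, clue in zip(zip(*grid), col_clues)):
            return grid
    return None

# 3x3 "plus" shape: middle row and middle column fully filled
print(solve([[1], [3], [1]], [[1], [3], [1]], 3))
```

Even a puzzle this small demands exact constraint satisfaction across rows and columns simultaneously, which is precisely the kind of bookkeeping LLMs tend to fumble.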
I find ProtonDB misleading: GTA V is supposedly "Gold", except Online doesn't work at all because of the anti-cheat. Same goes for many other popular multiplayer games.
The story to me implied that machines were created by humans or vice-versa in a chicken-or-the-egg scenario. In that case it would make sense for them to think similarly.
The 2nd quote is when I realized this article was written or assisted by AI. Not that it's a big deal, that's our world now. But it's interesting to notice the subtle 'accent' that gives it away.
I'm not on board with accepting AI-written articles. This is an article with little to no human input, farming clicks for ad revenue, that doesn't even link to the forum post, which is far more interesting and has pictures: https://secondlifestorage.com/index.php?threads/glubuxs-powe...
The article contains little detail, and has lots of filler like the quote in the parent comment. It's highly upvoted on HN's front page, which is surprising to me because I think there is quite a bit of distaste here for low-effort content to drive clicks.
The thing the article is referencing is interesting, but the article is trash.
> I'm not on board with accepting AI-written articles.
I haven't been on board with the "journalism" of the last fifty years, but this hasn't exactly prompted it to improve. Newspapers still have advertisements. Subscribers still have no say over editorial staff. The board still has say over the editorial staff. It's all fucked unless we can punt private ownership out of the equation.
80% of everything is crap. This isn't a very insightful position to take. One of the reasons I like Hacker News is it helps me find good stuff to read. Which this article isn't. So I will respectfully rebuff your rebuttal.
Because it's presenting a bunch of smooth prose that utterly fails at logical continuity.
1. What point is the author trying to make? Leading off "Glubux even began" implies that the effort was extraordinary in some way, but if this action was "key to making the system work effectively and sustainably" then it can't really have been that extraordinary. The writing is confused between trying to make the effort sound exceptional vs. giving a technical explanation of how the end result works.
2. Why, exactly, would "removing individual cells and organizing them into custom racks" be "key to making the system work effectively and sustainably"?
3. How is the system's effectiveness related to its sustainable operation; why should these ideas be mentioned in the same breath?
4. Why is the author confident about the above points, but unsure about the level of "manual labor and technical knowledge" that would be required?
Aside from that, overall it just reads like what you'd expect to find in a high school essay.
Edit: after actually taking a look at TFA, another thing that smells off to me is the way that bold text is used. It seems very unnatural to me.
The only thing as annoying as people using AI and passing it off as their own writing is the people who claim everything written not exactly how they are used to is AI.
> This task, which likely required a great deal of manual labor and technical knowledge, was key to making the system work effectively and sustainably.
This is obviously AI. The writer should know whether it required manual labor or not; there's no "maybe" about it (AI loves not to "commit" to an answer, saying maybe/likely instead). It also loves to loop in some vague claim about X being effective, sustainable, ethical, etc., without providing any information as to WHY it is.
That and it being published on some blog spam website called techoreon.
Edit: For fun, I had o1-mini produce an article from the original source (Techspot it looks like), and it produced a similar line:
> This ingenious approach likely required significant manual effort and technical expertise, but the results speak for themselves, as evidenced by the system's eight-year flawless operation.
What these sites are doing is rewriting articles from legitimate sources, then selling SEO backlinks on their "news" website full of generated content (and worthless backlinks). It's how all those scammy Fiverr link services work.
At least this is a better effort at explaining why you would believe it is AI than the other poster who just says it's AI because they used the word "likely".
I still find it very annoying that in every thread about a blog post there's someone shouting "AI!" because there's an em dash, bullet points, or some common word/saying (e.g. "likely", "crucially", "in conclusion"). It's been more intrusive on my life than actual AI writing has been.
I've been accused of using AI in my writing because I used parentheses, ellipses, various common words, because I structured a post with bullet points and a conclusion section, etc. It's wildly frustrating.
> because I structured a post with bullet points and a conclusion section
I do understand that this is frustrating, because in the last few months I see posts with these features everywhere. It's especially a problem on reddit, where there are numerous low effort posts in niche subreddits that are overdone with emojis, bolded sections/titles, and em dashes. Not all of these are AI but an overwhelming majority are to the point where if the quality of the content is low (lots of vague sayings), and it exhibits these traits, I can almost say for certain it's AI.
What is also less talked about is that AI models are now beginning to write without exhibiting these issues. I've been playing around with GPT-4o, and its deep research feature writes articles that are extremely well written, exhibiting neither the traits above nor the classic telltale AI signs. I also had a friend ask it to write a fictional passage describing a character, and the writing was impeccable (which is depressing, because it was better than what she wrote). Soon we won't have any clue what is real and what isn't.
The kids ask ChatGPT to rewrite it using the diction of a 9-year-old, so it doesn't look like it was AI generated. If you have a big enough corpus of your own writing, you could use yourself as the input style to emulate. Unfortunately, I think we're going to have to get over generated-vs-not as the technology improves. We'll have to judge a work on its own merits and not rely on any tells. Quelle horreur!
>What is also less talked about is now AI models are beginning to write without exhibiting these issues.
It will be great when I continue to write the way I have for decades, continuing to be accused of being AI, while actual AI writing exceeds my ability and isn't accused of being AI.
As someone who "detects" AI frequently: it's often difficult or impossible to explain where the sense comes from. It can be very much a matter of intuition, but of course it's awkward to admit that publicly. I don't fault others for coming up with an overly simple explanation.
If I'm being entirely honest, in the general case I don't.
But I don't particularly care, either. After a couple tries I decided it's better not to point at object examples of suspected LLM text all the time (except e.g. to report it on Stack Overflow, where it's against the rules and where moderators will use actual detection software etc. to try to verify). But I still notice that style of writing instinctively, and it still automatically flips a switch in my brain to approach the content differently. (Of course, even when I'm confident that something was written by a human, I still e.g. try to verify terminal commands with the man pages before following instructions I don't understand.)
Of course, AI writes the way it does for a reason. More worryingly, it increasingly seems like (verifiably) human writers are mimicking the style - like they see so much AI-generated text out there that sounds authoritative, that they start trying to use the same rhetorical techniques in order to gain that same air of authority.
I think this is an excellent question and one people should be asking themselves frequently. I often get the impression that commenters have not considered this.
For example, whenever someone on the internet makes a claim about "most X", e.g. most people this, most developers that: what does anyone actually know about "most" anything? I think the answer is "pretty much nothing".
Yes, this is an important point. Insert the survivorship bias plane picture that always gets posted when someone makes this mistake on other platforms (Twitter). We can be accurate at detecting poor AI writing attempts, but not know how much AI writing is good enough to go undetected.
Someone should run a double-blind test app. There was an adversarially crafted one for images, and people still only averaged around 60% accuracy. We all think we can just glance at the data and detect AI generation, the way some experts can let logs scroll by and immediately say what's wrong.
Exposure to AI output itself triggers and trains a rage response in lots of people. Blame AI for that; it's not something regular people have any control over.
Asking for causes or thought processes is just asking them to hallucinate. They don't know why; they just know that they saw it and that it deserves hate.
>However, in this ingenious setup, Glubux took those individual cells and assembled them into their own customized racks – a process that likely required a fair bit of elbow grease and technical know-how, but one that has ultimately paid off in spades.
Either this is also AI, or saying that something likely required a lot of manual labor is not, by itself, indicative of AI.
But using "likely" is obviously AI in this context, or at least it's really, really shitty reporting.
This is supposed to be a news article, not someone who's hypothesizing about something that could have been. I mean, it either required a great deal of manual labor and technical knowledge or it didn't - no guessing should be required. If the author doesn't know, they can do proper research or simply ask the subject.
FWIW this article didn't immediately scream AI to me either, until the commenter pointed out the use of "likely". When you think about it, it absolutely becomes a fingerprint of AI in this context - it's not just that "likely" anywhere means it's AI.
Your inability to tell when things are AI doesn't mean other people can't.
Same phenomenon happens all the time with food or wine. One person thinks everyone is making up the subtle flavor profile comments and sneers at them. Everyone who can tell rolls their eyes. You can't convince someone that there's something they can't perceive besides just telling them.
I've had this experience with records: as a kid I rolled my eyes at people wanting to listen to music on vinyl cause obviously it was the same; as my hearing has improved I have found I can clearly tell the difference and definitely prefer it.
>Your inability to tell when things are AI doesn't mean other people can't.
I didn't even comment on whether this article is AI or not. My point is that it is absurd to point at a single word as proof of something being written by AI.
It's not the word on its own, it's the word in context: in a news article in a sentence like that one. It's not a 100% given, but it's fairly strong evidence given a basic understanding of modern language informed by the era we're in. Of course it could be a journalist emulating AI for some reason. But the signal here is quite strong.
If you include commercial offerings, Red Hat has offered this for a while, and many semi-successful startups have tried building a business model around solving this.
Apparently not an equivalent amount of money to the value they feel they've provided. And honestly, given the community's response, I think they're probably right. If you think about it, there's no reason someone using CentOS for a proper use case wouldn't be fine moving to CentOS Stream, unless they were getting some value out of CentOS's production-friendly releases. Vendor compatibility and LTS patches and updates take a big team to maintain; someone has to pay for that, and I feel like many were just using CentOS when really they should have been using a supported enterprise distro.
As much as I dislike publishers gatekeeping what gets read, it's a big ask to commit several hours to a self-published book. This is one thing I like about Japanese light novels: I can look up the release calendar and pick out a few that look interesting. They're a relatively quick read and only around $7 (about half that if you live in Japan).