Hacker News | Illotus's comments

Ribbon was better for most people who didn't have all the shortcuts in muscle memory. It is much more discoverable.


I find its discoverability terrible. I am always hunting for what I want to do, and it's never anywhere that seems sensible to me. I usually end up doing a Google search for what I want. Perusing the Ribbon takes me much more time than just looking at the various options under the old-style menus.

Also, traditional menus followed traditional conventions. Once you learned what was under "File" or "View" or "Insert" or "Format", it was often pretty similar across applications.


Logically, users have to learn the name of the tool before forming any sort of spatial association (which menu, which symbol, etc. holds the tool).

There is no faster discoverability than O(log(N)) search using the letters of the name as a lookup.

The biggest failure of modern operating systems is failing to standardize this innate reality.

Windows, Linux, etc. should have: 1. a keyboard key that jumps to a search box; 2. type 1-2 letters and hit Enter. Operating systems and applications should all offer this kind of interface.
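A minimal sketch of that flow, with an invented command list: keep the names sorted once, then binary search gives the O(log N) prefix lookup by "letters of the name" described above.

```typescript
// Hypothetical command palette: names sorted once, prefix lookup via
// binary search -- the O(log N) search using letters of the name.
const COMMANDS = [
  "copy", "cut", "find", "format cells", "insert row",
  "paste", "print", "save as", "spell check", "undo",
].sort();

// First index whose entry is >= target (classic lower bound).
function lowerBound(arr: string[], target: string): number {
  let lo = 0, hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

// All commands starting with `prefix`: two binary searches bound the run.
function prefixMatches(prefix: string): string[] {
  return COMMANDS.slice(
    lowerBound(COMMANDS, prefix),
    lowerBound(COMMANDS, prefix + "\uffff"),
  );
}

console.log(prefixMatches("p")); // ["paste", "print"]
```

Typing one or two letters narrows the candidates immediately; hitting Enter would run the top match.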

The most ironic apps have a ribbon tab named something like "Edit", yet the most-used edit command lives in an unrelated tab.


Anecdotal evidence from myself: Although I've been using Word for many decades, I've never had much "muscle memory" in terms of accessing features. It was always a case of learning which pulldown menu held the required function.

When the accursed ribbon came along, "discoverability" went out the window. Functions were grouped according to some strange MS logic I could never understand, and it takes me twice as long to format a page as it used to. Now, I basically just use Word for text entry, and if I want an elegant format, I use a graphic design app like Illustrator.

Judging from what I've read online, you may be the only person who actually likes the ribbon.


Hieroglyphics are the opposite of "discoverable". That's why they became uninterpretable for almost two thousand years, until the discovery of the Rosetta Stone. And even then it took considerable work to figure out how they functioned. In the Ribbon, in order to discover what some hieroglyph does, you have to mouse over it. Since there are lots of hieroglyphs there, that's a lot of mouse-over. And no, the Ribbon's images make no sense in 99% of the cases.


That might have been true for the first five minutes of using the software (assuming the person had never used a CUA application before the first time they used Office). After that, it was strictly worse.

CUA ~= "standard menus + keyboard shortcuts for dos and windows": https://en.wikipedia.org/wiki/IBM_Common_User_Access


Not really; it is much more discoverable for most people. If you're interested, the MS UI lead has a blog covering many of the reasons for the Ribbon and the research backing it: https://learn.microsoft.com/en-us/archive/blogs/jensenh


It's probably a case of UI discoverability vs. usability. Brand-new users might discover features better with the Ribbon, but as they keep using the product they become experienced users, and the UI needs to adapt to that. My experience is that the Ribbon doesn't. It's tolerable, and for me that's enough, but I get the points about the Ribbon. It's sort of like the new Reddit UI, but props to Reddit: they at least kept much of the old UI available for longer than I expected them to.


The problem with it was that it constantly moved the buttons around. So, you had to constantly rediscover it.


Sadly, none of the links I tried work anymore. (Though the conversation in the comments where they have to explain how to open a ppt in powerpoint is internet gold!)

I was hoping to figure out what led to design incompetence so spectacular that people would still be discussing it after 17 years.

I think there’s a clue in the abstract: The author claims they made 25,000 mock UI screenshots, but doesn’t mention user studies or even internally prototyping some of the concepts to see how they feel.


Looks like the links in the posts no longer work, but the posts themselves are readable, and he goes through the work they did and why. They did a lot of usability testing for the Ribbon. Anyway, I have no horse in this race beyond preferring the Ribbon to 16x16 icons and menus, so there's no point rehashing it.


The Ribbon is more difficult to visually grep for me than the classic menus. Not to mention that a number of functions are still hidden in mini-menus in the Ribbon.

It wouldn’t be so bad if keyboard navigation were as good as with the classic menus, but having to press the Alt key separately, plus the generally increased latency, kills it.


My recollection is completely different: software was really slow on contemporary PCs in the 90s. Spinning disks, single-core CPUs, and a lot more swapping because memory was so much more expensive.


Contemporary software was slow. You could tell you should consider more RAM when the HDD light and chugging noises told you it was swapping. But if you ran the same software with the benefit of 10 years of hardware improvement, it was not slow at all.


This might match your recollection. x86 Win95 raytracing in JavaScript on my ARM laptop is usable, but sort of slow:

https://copy.sh/v86/?profile=windows95

Once it boots, run povray, then click run.

It took over 2 minutes to render biscuit.pov, though it did manage to use SSE!


> It took over 2 minutes to render biscuit.pov

We used to wait two hours for a mandelbrot to display on a Commodore 64, and were delighted when it did.


If you look at the back pieces of classic old furniture made in the hand-tool era, it's mostly very roughly finished. Professionals rarely had time to spend dicking around with stuff that isn't visible.


If only speed were the big issue, but mostly it is mass. Even with all the reckless cyclists, there are very few fatalities where a cyclist runs over a pedestrian. Ultimately separating all groups would be the best, but heavy consequences for the heaviest road users are ultimately the solution.


> Ultimately separating all groups would be the best, but heavy consequences for the heaviest road users is ultimately the solution.

I agree that physical separation would be best, with curbs or fences, not just painted lines.

As a pedestrian, I would very much like not to share the sidewalk with any vehicle under any circumstances. Most people riding a vehicle on the sidewalk face no real legal constraints and show the least respect for everyone else that I've ever witnessed. Pedestrians come in all shapes, sizes, and ages; they can't walk like robots and will easily step into the bike lane, or drop something, or a child will run around, etc. Riding at 30 km/h in that environment is common and stupid.

As a cyclist, I'd much rather have the cycling lane on the street. Cars are more dangerous but also generally more predictable than pedestrians on a narrow sidewalk. Driving also has more regulation and enforcement. In my experience, cars are a danger to me as a cyclist at intersections (the dreaded right turn), and doors opening in front of me are a terrifying thought.

As a driver, I'd rather lose a driving lane to a cycling one than have cyclists randomly bobbing in and out of my lane, crossing my path after running a red light, or ignoring the right of way.


So essentially what you are saying is that because we couldn't catch the smart criminals who use E2E-encrypted services, we shouldn't catch the dumb ones either?


If you ban the non-E2EE services unless they ban criminals, then dumb criminals will end up using E2EE services anyway.


I find it pretty ridiculous to assume that any dev would comment on the inner workings of their employer's software in any way beyond what is publicly available anyway. I certainly wouldn't.


Why not? If I think my employer is doing something unethical, I certainly would. That would be the moral thing to do.

This tells me most of the people implementing this are either too scared of the consequences, or they think what they're implementing is ethical and/or the right thing to do. Again, both are scary thoughts we should be highly concerned about in a healthy society that talks about these things.

One other potential explanation: FB and these large behemoths have compartmentalized the implementations of these features so much that no one can speak authoritatively about its encryption.


You are talking about a company whose primary business idea it is to lock up as much of the world's information as possible behind their login.

The secondary business idea is to tie their users' logins to their real-world identities, to the point of repeatedly locking out users who live under threat and refuse to disclose their real name.


> That would be the moral thing to do.

The simplest explanation: when people start at Facebook/Meta, they leave their ethics at the door on the first day in the role.

It’s cynical, but does explain a lot: many people will pick the fat paycheque over their ethics any day, particularly in the US (where money is king)


It’d be quicker just to say when Facebook did something ethical.


> See, I feel like that's almost the exact opposite unless you assume Apple and its internal legal department is made up of the biggest idiots on the planet. If they were intending on just infringing this valid patent and trying to get away with it, then they've literally handed the world a paper trail that makes them look as bad as possible without a literal email being published in the newspaper from Tim Cook saying "Yea, just violate the patent".

They don't really need to be idiots. They just need to trust that there is a reasonable chance that Masimo won't do anything about it; and if they do, there is a reasonable chance that Apple wins in court; and if they don't, there might be appeals; and if not, they might have come up with better non-infringing tech by then; and if not, they can come to a license agreement with Masimo. With that train of thought, I think it's pretty reasonable that Apple acted the way it acted.


Not really good if AI can route around it, but to normal people it exists as before.


This. No matter how intelligent you are, you cannot make connections between things you don't know about. If you externalise all knowledge, you are ultimately just an extension of that knowledge source.


I guess you use what you are familiar with, but using React for a docs site instead of some off-the-shelf static site generator/CMS with caching seems like wasted effort. More fun from a dev point of view with React, no doubt.
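For contrast, the job an off-the-shelf SSG does is small enough to sketch. This toy build step (invented page data, no real framework) renders every page to an HTML string once, so serving and caching afterwards are trivial:

```typescript
// Toy static-site build step: every docs page becomes a plain HTML
// string at build time, so serving is just handing out files.
type Page = { slug: string; title: string; body: string };

const pages: Page[] = [
  { slug: "install", title: "Install", body: "Run the installer." },
  { slug: "usage", title: "Usage", body: "Point it at your files." },
];

function renderPage(p: Page): string {
  return `<!doctype html><title>${p.title}</title>` +
         `<h1>${p.title}</h1><p>${p.body}</p>`;
}

// A real generator would write each entry to `${slug}.html` on disk.
const site = new Map<string, string>(
  pages.map((p): [string, string] => [`${p.slug}.html`, renderPage(p)]),
);

console.log(site.get("install.html"));
```

Everything past this point is a CDN/cache problem, which is the argument for not reaching for a full React stack when the content is static.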


Great doc sites are dynamic in heaps of little ways. Stripe started the trend of showing you code snippets with API keys from your account to test with.

Frontend docs sites almost always include runnable examples that you can play with inside the docs.


You can get dynamic functionality for docs sites without much JS at all. We've been doing it for years.


It's easier to integrate JS when the entire toolchain is all in JS. That's my main reason for using React Server Components over other static site generators. Also I like types via TypeScript.


You can use a static site generator like Gatsby or a variant of Next.js (or even Docusaurus), get MD(X) support, and still have the full power of React components when needed.


> Stripe started the trend of showing you code snippets with api keys from your account to test with.

Pretty sure Google started / had this years earlier (before Stripe even existed?). And there may have been others earlier as well.


I hear this all the time and I don't understand it. Bootstrapping a new project with <insert modern framework here> is so easy and fast, literally easier than bootstrapping a vanilla HTML project. What am I missing?

Edit: downvoters, I'm genuinely curious to hear your perspective.


I agree. It’s static content. Why isn’t it just generated as HTML with some JS for the search bar?


It is statically generated. The reason to use React is so you have one language across the front-end, rather than having some people using React on one site and others using Gatsby/Hugo for something else. Next.js can do the same thing as Gatsby/Hugo but has more features and is in React.


But you don't even need to know what language your tool is written in if all you want is a static documentation site.

This is a problem a lot of people have; they think in technology instead of actual problem solving. Just look at how many projects have been posted on here with a title like "$solved_problem... in Rust!" as if Rust makes everything better forever.

It's marketing bullshit. It's self-gratification. It's using a technology for technology's sake, not for solving a problem. And it's costing the industry billions in sunk cost, dead ends, overcomplicated and unmaintainable software. Because one guy felt strongly about a language or technology.


> But you don't even need to know what language your tool is written in if all you want is a static documentation site.

Not if you want interactivity in certain parts of the doc site, such as what Stripe does with API keys. It's simply easier to add JS if the entire toolchain is JS.


Gatsby is in React.


[flagged]


Then what is an actually good example in your analogy? I don't see how using a static site generator is anything like smashing your fingers, seems like a common enough solution to me.


> More fun from dev point of view

This is the curse of software developers and employers everywhere tbh.

