But note that's only due to popularity. Socially engineering a user into running an executable means that executable simply runs with the user's privileges. No trickery or hacking required, no OS holes. And that means the executable has full access to do everything the user could do, which will almost certainly include sending a new encryption key over the network and encrypting every file that user can get hold of.
(One of the little problems with UNIX-style user permissions is that they are designed to defend the OS, not the user. Sure, that little executable may not be able to corrupt "the system", which may amount to 5 or 10 GB of easily-replaced code, but it will have its way with the 2 TB of the single user's media files.)
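To make that asymmetry concrete, here is a minimal sketch in Python (the paths are just illustrative): a process running as an ordinary user typically can't write to system binaries, but it can write to everything in that user's home directory, which is exactly what ransomware needs.

    import os

    # Running as an ordinary (non-root) user on a typical Linux/UNIX box.
    system_binary = "/usr/bin/env"                 # root-owned, easily reinstalled
    user_data = os.path.expanduser("~/Documents")  # the data that actually matters

    print(os.access(system_binary, os.W_OK))  # usually False: "the system" is protected
    print(os.access(user_data, os.W_OK))      # usually True: user files are fair game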
The only faint defense Linux/UNIX can claim is the slightly higher probability that you'll be on a checkpointing file system and can roll back, and I say only "slightly" because such file systems still aren't very popular compared to conventional ones.
OS X defaults to only running applications that have been signed with a valid developer ID. It’s not difficult to get such an ID, but Apple can also blacklist them, which would prevent the malware from running once Apple notices it. So I think the Mac has a good defense against this kind of attack.
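For the curious, the command-line front end to that check is Gatekeeper's spctl tool. A rough sketch (the app path is hypothetical) of asking whether an app would be allowed to run under the current policy:

    import subprocess

    # Ask Gatekeeper (via spctl) whether an application would be allowed to run.
    # The path is hypothetical; spctl reports its verdict on stderr, e.g.
    # "accepted" with "source=Developer ID" for an identified-developer signature.
    result = subprocess.run(
        ["spctl", "--assess", "--verbose", "/Applications/Example.app"],
        capture_output=True,
        text=True,
    )
    print(result.stderr.strip() or result.stdout.strip())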
A malware developer could register 256 valid developer IDs, compute 256 signatures, and switch between them automatically and at random as the malware propagates. Once Apple blacklists one developer ID, another one pops up, and the malware keeps propagating.
I would imagine that Apple can also say "this developer ID is owned by this person, and we just blacklisted another one owned by them", then proceed to blacklist all of the IDs that person has registered.
Still, it's not as easy as the person I was replying to made it sound.
How many Macs would you have to compromise before you randomly stumble upon a registered developer, let alone a registered Mac developer (of which there are far fewer than iOS developers)? And how much more secure is a developer's machine likely to be, and how much less likely is the user of such a machine to fall for common email attachment-based infection attempts?
At some point, the feasibility is low enough that it's not worth the attacker's effort. That's all security ultimately is, since nothing is foolproof.
I think it's no longer accurate to think of this as "an MS-focused attack, but only because OS X is not as popular". Today, iOS is used by many more people than OS X as their primary computing device, and I would say it's pretty safe from this type of attack.
Only because people can't email you apps to run on your phone. Which, last I checked, is why HN thinks iOS is a terrible, freedom-restricting walled garden of evil.
iOS is fantastic if you aren't smart enough to use a computer. Most HN users know better than to run arbitrary apps from email, so for them it is a restriction that only prevents them from using their own device as they wish.