Apple makes it pretty easy to report vulnerabilities to:
product-security@apple.com
They also respond to security@apple.com but prefer the product-security address.
Further, there are any number of legitimate bug bounty programs out there, like ZDI, that would pay for a bug like this and then immediately disclose it to Apple so it can be fixed.
Disclosing an 0Day root authentication bypass vulnerability on Twitter isn't cool, even if it is local: think of the impact to shared iMacs on university campuses.
I really disagree - this needs to be reported as much as possible publicly to create a huge thunderstorm of negative publicity for Apple.
This isn't the first extremely serious and dumb High Sierra password bug this year [1] [2], and unless Apple is hurt badly enough by it to be forced to change, it won't be the last. High Sierra is full of bugs, and seemingly not just annoying bugs but security bugs too.
Let's hope Apple gets sued for the damage they'll cause by including this bug in High Sierra, so they make sure the next release of macOS won't be another bug-filled mess.
Responsible disclosure does not prevent negative publicity. It provides the vendor with a grace period during which they can fix the vulnerability. There can be plenty of negative publicity once the vulnerability is patched and publicly disclosed.
Encouraging irresponsible disclosure because one wants to see Apple hurt is a reckless and selfish attitude because it puts millions of Apple customers at risk in the process.
Closed disclosure does, to a large degree, prevent negative publicity. I don't think it is in dispute that this bug would receive vastly less media coverage if it were only revealed as a bug in outdated/patched versions of the OS.
I don't want to see Apple hurt (I'm an Apple guy myself, using Macs, iPhone, iPad and Apple Watch), I want to see them improve. I doubt they will start caring about QA unless they're forced to.
One absurdly serious and stupid password bug like this can be an honest mistake, but three (that we know of, that were full disclosures) in a few months is negligence that should be criminal if it isn't.
Closed disclosure is responsible disclosure. Moving past the terminology: I am an Apple user as well, and I am pretty satisfied with how quickly Apple resolves issues.
Now, if every person started disclosing vulnerabilities via Twitter, without giving the company turnaround time to resolve the issue, out of dissatisfaction with Apple measured against standards they came up with personally, I don't think that is nice or fair.
This is a rather limited viewpoint. You forget the majority of macOS users are NOT technically savvy. This is why responsible disclosure is so important: it gives tech companies time to create and push a fix to protect their users.
Disclosing this immediately puts those people who can't enable the root account and set its password into more harm's way.
> Disclosing this immediately puts those people who can't enable the root account and set its password into more harm's way.
Let's be clear - Apple put their users in harm's way by letting a bug of this nature slip past testing. Disclosing "responsibly" would certainly be nicer to users, but mostly it would be nice for Apple by helping mitigate the bad publicity.
> You forget the majority of users are NOT technically savvy.
Fixed that for you.
Sorry, I'm not going to cater to the lowest common denominator of "users". If my system has a hole, I want to know so I can get in a fix or shut down that particular feature until it's fixed by a vendor.
I've got 60k machines and 40 clients that depend on that decision. And they agree with me. If something's broken, I can analyse how it breaks and whether it impacts us. If it does, we can triage it. I can do a damage assessment. I certainly can't do that if it's being sold on the darknets and whispered about.
> I want to know so I can get in a fix or shut down that particular feature until its fixed by a vendor.
That assumes you can shut down that particular feature without crippling your business operations. If my system had a hole, I'd prefer that it were disclosed to the vendor before it's disclosed to hackers, adversaries and foreign governments.
Yes. Ideally you would also ping me in your public release, so I know whom to pay. That would also be for the users' benefit: they'd know to either fix, firewall, or not use the software until a patch is deployed.
I've seen the dark side where this leads. It leads to BTC transactions and 0days being bought and sold. That's the worst case, even further gone than a scrappy company sitting on exploits.
I strongly believe in transparency. It empowers users and admins more than any other option out there.
I think the responsible way to handle it is: you inform Apple privately. Once they come up with a fix, if you think they didn't come up with it soon enough, then make public how long it took Apple to turn this around. Disclosing every vulnerability to the internet and setting their ass on fire is not a good way to solve this, IMO.
I actually do think it is in dispute. This is a tweet after all. This guy could totally tweet about it in much the same way after Apple released a patch. The negative publicity would still exist because the bug would be equally stupid and disastrous, just fewer people would be harmed along the way.
Exactly. Everyone I know on Mac immediately tried reproducing this bug the moment they heard about it. On those systems where it didn't reproduce, they immediately dismissed it as a false report.
A bug like 'can log in with password "root"/""' just isn't going to get you a grace period no matter what security researchers might want.
I mean, this bug has been reported already - by every cheesy hacking movie ever, by every beginner's book on social engineering and so forth. Heck, it was "reported" by Richard Feynman talking about cracking safes during the Manhattan Project.
Pretty sure no cheesy hacker film has the root password being empty. They usually put in "password" or the favourite music band of the target.
It's not the attitude of the people reporting the issue that has put "millions of Apple customers" at risk, but the company that allowed issues like this one to slip through its QA process.
IMO, this behaviour is part of the problem, the reason why tech companies take security seriously only at a superficial level.
I think this incorrectly interprets my comment. I am not defending apple or blaming the individual that disclosed the vulnerability on Twitter. I am simply pointing out that putting users at additional risk because you want to see Apple hurt may be misguided. We have responsible disclosures in place for a reason.
I'm now seeing cases of non technical people trying this because they heard about it and it's easy. To them, it just unlocks some system preferences thing. Guess what those people are not doing after they try it? 'passwd root' to change the password because in many cases they don't even know what the terminal is.
In this particular case, the ease of validation additionally works against users.
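For anyone reading along who is in that situation, the stopgap everyone is recommending amounts to giving root a real password from Terminal (or via Directory Utility). A minimal sketch; the interactive prompts do the rest:

    # Give the root account a real (non-empty) password; you'll be
    # prompted for your admin password, then the new root password twice.
    sudo passwd root

    # Alternatively, enable root interactively; dsenableroot prompts
    # for an admin username/password and the new root password.
    dsenableroot

Either way, the point is that root ends up with a non-blank password, which blocks the blank-password login path until Apple ships a patch.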
I do get the impression that you do blame the individual, as you have attributed unsavory motivations to his behavior. Why do you care to make such a loose statement about this person having a petty motive of malice?
One of the grandparent posts specifically said they supported the tweet because it would hurt Apple, and I think bradrydzewski is responding to those comments.
I don't understand the implied correlation between what you call irresponsible disclosure and "wanting to hurt Apple". Where did you get this impression from?
Thanks, you are right. If he is referring to that post (which I believe he is), he is indeed right.
Anyhow, personally I wouldn't exclude something like this, e.g. suing, as a last resort. Anything that changes Apple's attitude towards security, or at the very least enhances the value of security as a product qualifier.
And full disclosure is about protecting the users of a piece of software, not letting the vendor off the hook. Here, the hack and the fix are so trivial that the responsible thing to do is to publicly call out Apple for its lack of QA and warn users directly. It affects everybody who runs High Sierra.
> it puts millions of Apple customers at risk in the process.
Nah, it's Apple which put millions of customers at risk, not the person who disclosed the vulnerability. Let's not shift the blame away from the guilty party here.
Apple, one of the richest companies in the world, is obviously just cutting corners in QA here. This is unacceptable.
It seems some people here are more concerned about negative publicity than user security. This is a pattern that has been seen countless times in big tech corporations (such as Yahoo): not disclosing hacks that put their users and their data at risk. This is unacceptable for a company that claims to be all about its users.
I would argue that releasing this vulnerability as irresponsibly as he did is showing he cares more about negative publicity than user security.
Yes, it's Apple's fault for poor QA that this was released, but this guy also put users at risk by telling the entire world about it without giving Apple a chance to fix it.
You're right, it's about user security before publicity. So make sure users are safe first.
You can follow your own procedures - decide for yourself how long you think it is reasonable for the company to mitigate in private. But give the company some time.
Why? You're not an employee, you're a concerned citizen. You have no obligations to vendors whatsoever. Now, I think it's nice to do responsible disclosure, and I certainly don't envy the people whose week has been ruined, but the discoverer of this bug did nothing wrong.
It is about the increased risk fellow users face due to this style of disclosure. Who cares about the vendor for its own sake, but it is the party best situated to resolve the issue quickly for everyone.
> Nah, it's Apple which put millions of customers at risk, not the person who disclosed the vulnerability. Let's not shift the blame away from the guilty party here.
Disclosing a 0day vulnerability via Twitter for the sake of self-promotion is bad. Especially when you advertise yourself as a software developer.
This is such a lame vulnerability that it's probably already known to competent attackers.
It's not a bug; it's a bad design decision. How to initialize the root password on a new machine is a hard problem in a consumer environment. Some people will set it, lose it, and then want support to fix it. One would expect some clever Apple solution, such as initializing the password to random letters and providing the buyer with that info on a scratch-off card. That way, the buyer can be sure no one has seen the password before they use the scratch-off card.
Setting it to null? That means nobody thought about the problem.
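To make the scratch-off idea concrete, the per-machine secret is trivially cheap to produce at manufacture time. A sketch, assuming you just want a random string from the system RNG (the card printing is the hard part, not this):

    # 16 random bytes (~128 bits), base64-encoded: one initial root
    # password per machine, printed on the scratch-off card.
    openssl rand -base64 16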
> Encouraging irresponsible disclosure because one wants to see Apple hurt is a reckless and selfish attitude because it puts millions of Apple customers at risk in the process.
Apple put millions of their customers at risk by skimping on QA. As an Apple user I'm OK with this getting out if it motivates Apple to improve their approach in the future.
The very comment you are replying to lists a reason why disclosing huge vulnerabilities without giving upstream time to patch is irresponsible: "because it puts millions of Apple customers at risk in the process."
Your comment doesn't refute the reasoning the comment you are replying to provides, and it also doesn't tell us anything about why you think "There is nothing irresponsible about disclosing huge vulnerabilities in software by any means necessary." You state your position, but offer no rationale, no reason for it; why should I accept your position as the correct or ethical thing to do?
Why not? The end goal is protecting users. If disclosing a vulnerability before a company has a chance to fix it puts more users at risk than waiting how is that not irresponsible?
Considering the vulnerability was supposedly brought to Apple's attention a month earlier via the "proper" channels, and considering Apple's history of repeatedly ignoring and dismissing said disclosures, I'd say this was the only correct action to take.
I think there's a middle ground to this. Submit your report to Apple security, allow them time to develop a patch, and then in a week go ahead and tweet at the big media outlets about it.
I'm a die-hard Apple user myself, but I agree that the long list of severe bugs in High Sierra is absurd, and a big public backlash might be enough to kick them into gear. On the other hand, I, a university student with next to no understanding of computer security, can simply walk onto campus, sit down at a Mac, and within seconds have complete access to the computer. It's ridiculous, it's horrendous that it shipped like this, but it's not something that needed to get out, especially something so easy to utilize.
The fact that you as the ordinary student can become root and create a lot of damage so easily is the only reason the public will care.
Us geeks have been complaining about the horrible QA in macOS for years, yet nothing has been done. The fact that this is so simple to do will probably/hopefully get ordinary people to start talking about it too ("Hey, have you heard that you can hack Macs without a password? Very insecure"), which would force Apple to improve.
It sounds to me like you're arguing that full disclosure in this situation could lead to a worse outcome for users in the short term, but the negative publicity will force Apple to improve their security posture, leading to a better outcome for users in the long term. (Please let me know if I'm mischaracterizing your argument.)
I think you have to be very careful about that line of argument. It's a single vulnerability researcher making a unilateral decision about the short term and long term security of an entire user base, based entirely on personal judgement. I personally think the researcher should make the decision that best protects users from that specific vulnerability. Making long-term changes to a company's QA should come second.
> I personally think the researcher should make the decision that best protects users from that specific vulnerability.
I find it odd that you're putting the responsibility of making decisions about how to protect Apple's users on an unaffiliated third party.
Apple has a multi-hundred-billion dollar war chest and, if they wanted to, could afford to make macOS the most secure operating system on the market. The fact that they don't is their own choice and a reflection of their priorities, not some act of God or a natural disaster. Putting the onus for cleaning up the mess in the most "responsible" way possible on third parties with a fraction of Apple's resources is being too kind to Apple.
My point was exactly the opposite of putting the onus on the researcher. I support responsible disclosure. In responsible disclosure, the researcher informs the vendor (Apple) and leaves it to them to coordinate informing people of mitigations and pushing out a patch. If the vendor fails to respond or make progress in a certain period of time, the researcher can inform the public. It specifically puts the responsibility for dealing with the vulnerability in the hands of the vendor.
Maybe it's crazy that we give people physical access to machines and expect them not to be able to obtain root.
I don't have any experience with enterprise-grade IT, but it seems like shared computers should be thin clients or at least use UEFI to securely boot an image over the network and not keep anything sensitive locally.
If you give someone physical access to a box, they will be able to own it.
This is actually how the public workstations in MIT computer clusters have always worked. The root password is public to anyone with a legit account, but access to it gets you almost nothing because all services including the network filesystem are kerberized, and machines are really good at wiping all local changes upon logout. Some more details here: https://www.quora.com/Are-computer-networks-in-MIT-harder-to...
> On the other hand, I, a university student with next to no understanding of computer security, can simply walk onto campus, sit down at a Mac, and within seconds have complete access to the computer.
It's educational for the end user. You cannot trust Apple. Good reminder that there are other OSes available out there.
My understanding is that the first attempt is creating/enabling the root account with a blank password and that the subsequent login is actually utilizing it (which is kind of bizarre and probably why this was missed in testing).
The first time I tried it, it just worked. I'm certain I've used root before. In the GUI it works with no password, but the terminal does not let me log in as root without a password. Some weird OS magic must be going on there?
AFAIK usually sudo doesn't let you enter an empty string as a password, even when the actual password is empty. So that is what you might be experiencing.
It worked for me on the 6th try or so. The first few times, the prompt returned to my user name, but then another failed attempt left it at 'root', and the next attempt succeeded.
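If you want to see which state your own machine is in, the root record in the local directory service is inspectable. A sketch using dscl; the specific attribute behaviour described here is my own observation on High Sierra, so treat it as an assumption to verify:

    # Read root's authentication data from the local directory node.
    # On a machine where root was never enabled, AuthenticationAuthority
    # is typically absent ("No such key"); once the bug (or an admin)
    # has enabled root, a ShadowHash entry shows up here.
    dscl . -read /Users/root AuthenticationAuthority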
root IS an admin in macOS. You just cannot use root the same way you'd expect root to work in other Unix OSes without disabling System Integrity Protection (Apple's rough equivalent of changing stuff in the BIOS), but you can still do most admin things with root. Root does get admin privileges, but when it comes to some system directories it is locked out until you "fix" root's privileges by turning that protection off.
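For the curious, the protection in question is System Integrity Protection, and it's easy to see in action. A quick illustration; checking the status works from a normal boot, while changing it requires booting into Recovery:

    # Report whether System Integrity Protection is enabled.
    csrutil status

    # Even a root shell can't write to SIP-protected paths:
    sudo touch /System/test   # "Operation not permitted" while SIP is on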
Why does it need to create a lot of negative publicity for Apple? Is there something you don't like about them? Responsible disclosure needs to be valued given the number of macs out there in the wild that could potentially be susceptible to issues like this, and the impact it could have on people (including you) not just directly but indirectly.
How would you feel if someone discovered a 0day at a company that exposes credit card and identity info, published the 0day, then hackers steal all that info (including yours)? I'm sure 'creating a thunderstorm of negative publicity' would be the last thing you would want.
You mean, in addition to bad QA and complete disregard for their users' security? And being the richest and most profitable company ever, cutting corners and evading taxes?
Their response on Twitter was amazing: "PM us so we can discuss this privately", not "thank you, we're looking into it NOW".
I suspect the response on Twitter was a typical reply from a tier 0 support person. No reason to extrapolate from that to the company's internal response.
Apple is a Rorschach test writ large. What people see in it reflects more on the observer than the company in many cases.
I don’t see it as either/or. You can disclose responsibly, and go for publicity once the fix is in circulation. Responsible disclosure is nothing to do with protecting Apple, it’s about protecting the users.
The problem is that this is not a zero-day in new technology. They made a junior-sysadmin mistake. For a company that wants a reputation for good security, that is not acceptable.
Far too many people think of Apple as infallible. I often think even Apple thinks of themselves as infallible. The more people that are aware of the inherent risk involved with using computers - any computer - the better.
Yes Apple shouldn't be having this issue but disclosing a 0-day issue can possibly hurt users far worse than hurting Apple. Apple may lose a tiny bit of money but users could lose far, far more especially if someone develops a good way to remotely deploy / take advantage of this defect.
Ignoring responsible disclosure also limits the ability to sue them for any damage resulting from it (or so I'm told by one of my lawyer friends who thinks this disclosure may make it almost impossible to successfully sue them over it unless it simply takes them too long to fix).
You could let them know about the vulnerability and wait until it's been patched before commenting, with some timeout where if they don't patch in a reasonable amount of time you announce it anyway.
How could that happen in any case? Isn't the first line in pretty much every license a waiver of liability? Unless you have some special contract with Apple that overrides the standard boxes you ticked, how would anyone sue?
Blame the DMCA. This guy is in Turkey - does GP really think he can expect fair treatment and equal compensation as a "western world" security researcher?
There's no reason why the person who discovered the bug would be safer publishing the vulnerability on Twitter than disclosing it to Apple directly. If nothing else, they could always post it on Twitter later. The link to the DMCA is a digression.
If you leave keys in other people's doors all over the neighbourhood, I damn well have a right, and possibly an obligation, to make it publicly known that such a thing is taking place. So that everyone may take their own precautions.
Let's say keys were hidden around the neighborhood. Would you rather everyone in the whole town know about it or quietly and quickly go pick up all the keys before someone notices and breaks into one of the houses?
Personally I think if you report through the proper channels and nothing is changed THEN broadcast, but not as an opener.
In determining whether keys-in-locks or keys-under-doormats is the closer analogy, I have to go with the doormats. Various people go door-to-door... delivery people, campaign volunteers, Jehovah's Witnesses, etc. and a key in the lock would be hard to miss. A key under a doormat is easy to miss, being obscure. Sure -- the doormat is one of the first places you look if you're actually trying to break in, but people whose nefarious side doesn't manifest until the opportunity is obvious are indeed thwarted by that obscurity.
I am going to ask: do you want to try this scenario in your own real life? Because we often make general statements that we don't actually practice ourselves when the issue is going to hurt us.
But do YOU want your good neighbor to report to you that your key has been left on the porch, before someone takes it away and gets into the house while you are away from home on Thanksgiving night celebrating a nice holiday vacation? I bet not. The fault is on you for dropping the key, isn't it? Or you were so bluntly careless as to leave a backup key under the mat on your porch, which slid aside so the key is now revealed.
The fact that locks are easy to pick doesn't mean people don't take extra measures to defend their homes. More and more people install steel doors as an extra layer, and home security surveillance systems, all of which can be compromised with the right tools.
Responsible disclosure is something I respect. While he has the right not to disclose this privately first, has he tried? How hard is it to ask someone to get in contact with the Apple security team? There are a bunch of top security researchers on Twitter (constantly tweeting) who could help escalate this. I think someone on the Google Project Zero team did this to escalate an issue once.
A better analogy would be "if the lending bank left the door to your new house open..."
Other than buy an Apple product, the users did nothing intentional to undermine security.
Since this is a subjective argument, based more on historical instances of "responsible disclosure" than on law, I'm gonna lean in this case toward it being Apple that failed.
They built the entire "walled garden" without getting outside help. They want the control, they have billions of dollars, can hire whatever talent...
Failed to spot a password-less root login issue.
People need to know today to be even more cautious about using Apple gear in public places or around plain ol' tech jerks that like to fuck with people for a gag.
Society has no legal or moral obligation to make sure Apple stays in business.
Responsible disclosure is an interesting concept. How does this kind of disclosure make sure that the public knows about a company's track record of vulnerabilities, if everyone is under NDA and the company has no obligation to ever publicize it?
Now, if the researcher could give a grace period, that's cool, but there MUST be a deadline by which stuff goes public. Hopefully the company fixes it and issues a postmortem first. If not - too bad!
The problem with that analogy is that the probability that the "bad guys" already know about this vulnerability is vastly higher than the probability that thieves know about how well some random house in the neighborhood is secured.
But do they? And what portion of them do? And are they using it? There's a lot of speculation here. But surely the average person doesn't know, and with this being public knowledge AND easy to execute, there is a bigger chance for a crime of opportunity.
It’s always reasonable to assume that black-hats (and… what do you call government hackers — black-suits, helicopter-hats, ???) know everything that white-hats know, and that they either have or are already in the process of selling that exploit to less skilled criminals.
It’s not like being good morally correlates with being good at security.
But that's not what I'm saying. I'm saying that since this is so easy, a person that is computer illiterate can now gain root access. You definitely don't post those kinds of things on Twitter.
Computer illiterate people might now have a new way to shoot themselves in the foot, they won’t be able to exploit it because they won’t know what root is or why it does stuff.
How many more people now know about this vulnerability because of this knuckle-head tweeting it? At least 100k impressions? Now think of how many more "bad guys" have access to this hack and are going to abuse it.
You may say so, but really the level of incompetence of not setting a password for a root account is pretty high. The fact that someone reported it in a way you don't agree with shouldn't distract you from the fact that this highlights a serious oversight.
The main question that should be asked is, how did this get overlooked? How is it that your average website has better password security than the OS of one of the richest tech companies in the world?
To be fair to Apple, Microsoft had similar issues back in the 1990s. Perhaps it takes a string of security blunders for some tech companies to take security seriously.
If you sell locks and those locks can be opened by pulling on them twice, the reasonable course of action is to make that fact known to every buyer ASAP, not to tell you privately and wait for you to maybe issue a recall.
Locks don't nag you to decommission them quite as aggressively as OS X asks you to patch it. And an OS update was going to happen anyway, so including this patch doesn't really burden the user with an extra task they wouldn't already be subject to. Therefore, coordinated disclosure has a lot of value in the OS update ecosystem and very little in the physical lock ecosystem.
Responsible Disclosure is widely regarded as a good practice in these situations. Blame isn't the key issue - fixing the problem quickly and safely is. Widespread disclosure before Apple even has a chance to respond in a timely fashion is inherently unsafe.
You would hope the self-described twitter bio "Agile Software Craftsman" might have thought about this a little before tweeting.
The poster practiced Full Disclosure, which is also a valid disclosure policy.
Since we're just making up statements, I guarantee that Apple would never voluntarily disclose this issue if it was reported privately. So Full Disclosure is the only way to put Apple's feet to the fire, as it's the only way in which this issue would have had any visibility whatsoever.
If there was a vulnerability that allowed anyone to open your car and drive off with it, you wouldn't care if he was the first to find it or not. You'd only care about it getting fixed before anyone else knows about it.
I'm not sure what length grace period is appropriate, though.
If there was a vulnerability that allowed anyone to open my car, I'd want to know ASAP, because I wouldn't trust the manufacturer to provide a remedy quickly enough to eliminate my risk.
Same applies for Apple. No reason to believe this guy was the first one to find this exploit, we only know he was the first one to publicize it.
"Responsible Disclosure" is a term rejected by the industry as a loaded phrase that favors vendors, instead preferring the term Coordinated Disclosure. Even so, reasonable professionals still disagree that this is the best option, in a debate that has existed for decades, so it's by no means settled as the "proper" way.
> If you leave your key in your front door lock and I blast out on twitter your address and tell people about it, I think I have some responsibility.
That's not a faithful analogy. Apple isn't your neighbour. They are the landlord. The scenario is more like that the landlord uses bogus locks in your complex, and you post it on twitter. You could complain to them privately too, but given your past experiences perhaps, you thought that twitter would be a more effective medium.
This is different though: the bug is so bad that random, inexpert users can discover it by accident. People that are not going to even be familiar with the term "responsible disclosure" at all. This may have been the case for the guy who tweeted this.
There is no realistic way to keep a lid on something like that and so in this case the blame is entirely on Apple.
This very excellent comment lies "dead", so I'll repost it:
asejfwe8823 wrote:
A better analogy would be "if the lending bank left the door to your new house open..."
Other than buy an Apple product, the users did nothing intentional to undermine security.
Since this is a subjective argument, based more on historical instances of "responsible disclosure" than on law, I'm gonna lean in this case toward it being Apple that failed.
They built the entire "walled garden" without getting outside help. They want the control, they have billions of dollars, can hire whatever talent...
Failed to spot a password-less root login issue.
People need to know today to be even more cautious about using Apple gear in public places or around plain ol' tech jerks that like to fuck with people for a gag.
Society has no legal or moral obligation to make sure Apple stays in business.
In a case like this, I think it would be best to maximize the bad publicity. Bad publicity is the minimum Apple deserves for a bug like this. In my ideal world they'd get a lot of bad publicity and a significant financial penalty.
I get it, I really do, but it's not like he was complaining about a bad Uber driver. Disclosure in this way has real-world impacts up to and including harming people and we shouldn't ever consider it as something which is remotely acceptable. Is it acceptable to publicly disclose that an airport has a self-destruct switch which can be accessed near the NW mens bathroom? No. You contact someone who can fix the problem, then publicly disclose.
It's as remotely acceptable as "root" with no password, apparently.
The question is large and complicated, and people can agree to disagree. There's nothing wrong with tweeting vulns: The company is at fault, we can defend ourselves now that we know about the vuln, and it's a big PR disaster for Apple.
No, no it's not strictly more ethical. It's not even strictly safer, which should be an even easier question to answer. The baked-in assumption in your logic is that users have no options other than waiting to patch. But, obviously, they do, and keeping vulnerabilities secret deprives them of those options.
But everyone can fix this problem by setting a root password. So telling everyone is the right call. Otherwise people would be sitting vulnerable while Apple comes up with a patch.
But a tweet isn't really the most effective way to tell everyone. Technical people, including those who would use this vulnerability for malice, will find out far far sooner than my grandmother.
It seems to me the right thing to do is to tell Apple privately, tell them to either push a fix or put out some kind of release letting all their customers know how to mitigate this in the next, say, 3 days, or I'll just tweet about it. What's the downside? At the worst case, you just prolonged the status quo for another 3 days.
I agree this person isn't malicious, certainly. But I do think his decision was bad. Not "bad" in the moral sense, but "bad" in the sense of being sub-optimal.
Is it likely it's just an error due to the discoverer not being immersed in the Infosec space? "Don't disclose a 0-day publicly" is good 'common' sense, but only among the 'common' of people who are steeped in security issues and the ramifications of publicizing them.
Indeed, discovering this bug wouldn't take any security skill (I imagine skill could even work against you, since you might skip trying really dumb stuff like this) and could easily happen by accident. Responsible disclosure is standard for security researchers, but I don't think this person was one, and it's not very fair to blame him for not doing it right.
That is not the case among infosec professionals either. Many respected professionals believe that the right thing to do in many cases is full public disclosure. Google Project Zero are a notable example.
Google Project Zero does not support immediate full public disclosure; quite the opposite. They support full public disclosure after giving the vendor an opportunity to ship a fix to their customers in a reasonable period of time. Nobody's debating whether or not security flaws should be publicly disclosed; of course they should. The only debate is: what is the most responsible way to handle such a security issue so that it harms the fewest users?
Project Zero (and infosec professionals, at least all of the ones I've ever worked with) would tell you that this was the most irresponsible way to handle the issue, short of not saying anything and selling knowledge of the exploit to someone other than the vendor who could fix it. Publicizing something like this in this way is something people do because they want publicity for themselves. It is not something someone does if their biggest concern is for the users who might be affected by it. It is something someone would do if they didn't care about the users, and just wanted public credit for pointing it out.
You added the word "immediately", making that a straw man. I did not say they disclose immediately. Their policy is still full disclosure, including working proof-of-concept exploit code, even when the vulnerability is unpatchable and millions are affected. Ask Microsoft and Apple about the times they went beyond 90 days.
Furthermore, that deadline is 7 days if they are aware of active exploitation.
DJB is famous for his full disclosure with no advanced warning stance.
The rest of your post is false. That is your opinion, and people disagree. I'm going to guess you have not personally spoken with folks from project zero. I have spoken with some of them. And trust me, they would not agree with your statement. A few of them even feel strongly that their timelines are far too generous. I also understand their reasoning, and it has nothing to do with ego or publicity, and everything to do with concern for users.
They do not give a shit about credit beyond believing that it is proper to cite the authors of any work. That isn't the case for everyone finding bugs, but people on that team lost the novelty of having their name on bug reports long ago.
Project Zero does full disclosure 90 days after informing the relevant organization. Full disclosure comes after there has been a chance to fix the problem. Otherwise everyone is put at risk until a fix is available.
They've been quick (within 45 days) to patch every major bug I've reported to them and where the bugs were cross platform, impacting Windows, Android, etc., they've consistently been amongst the quickest to issue a patch so I'm not sure how you qualify that statement.
I qualified it with "reliably". Major bugs reported by you, a security researcher, may be bucketed differently than those deemed less serious or filed by others. As a recent example, a minor bug like the iOS 11 calculator ignoring keypresses had reports filed since Beta 1, but only after it made headlines and caused Apple public embarrassment will it be addressed in the upcoming 11.2, six months later.
This happens quite often. Report a bug to Apple through CERT as an example and they run with a well known 45 day disclosure timeline. For researchers who don't want to get into vendor conflicts this is a good path because CERT ultimately holds the decision.
Responsible disclosure is pretty much a security-industry concept; it's not something most developers know about. Complaining on Twitter is probably what an average person would do.
Although for what it's worth last time I reported a security vuln to Apple using their official process they took around 2 years to fix it (admittedly low priority security vuln, passwords being sent over http).
Wow I didn't believe this at first, so I dug more. AWIS requires the root key of an AWS account. I found a forum that does suggest creating a new account solely for AWIS.
Still, I'm surprised they would suggest sending the root key of your account over http. Even if it is just the ID and not the secret, it still seems like something you want to keep secure. I don't use my root key for services; I create new accounts and IAM roles.
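For what it's worth, the least-privilege version of that pattern is a dedicated IAM user carrying only the permissions that one service needs, instead of the account's root key. A sketch with the AWS CLI; the user name and the "awis:*" action string are illustrative placeholders, so check the real action names for the service:

    # Dedicated IAM user instead of the account root key.
    aws iam create-user --user-name awis-only
    aws iam create-access-key --user-name awis-only

    # Attach an inline policy scoped to just that service's API.
    aws iam put-user-policy --user-name awis-only \
      --policy-name awis-access \
      --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"awis:*","Resource":"*"}]}'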
> complaining on Twitter is probably what an average person would do.
His Twitter account says that he is an agile software craftsman, a founder from Turkey, and a community guy. And he tweets about devops, open source and other stuff.
An average person disguised as a software developer?
There must be some kind of 1-10 scale of how serious an issue is. This one goes up to 11; it's so hilarious that I'm not sure proper reporting ethics apply here anymore.
> there are any number of legit bug bounty programs
The thing about bug bounty programs is that they are not a negotiation. They decide how much your information is worth--take it or leave it.
If you thought this bug was worth $25,000 and you feared that Apple might offer a $100 discount coupon plus a lovely "I Love My Mac" coffee mug, is there any way to start a negotiation without being accused of extortion (if you imply that you might disclose it publicly)?
This is a serious question: Is there any way to negotiate for security bugs, before or after disclosing all the details, without running a legal risk?
Not really; the issue is that you don't have a way to disclose how much the bug is worth without giving away the bug itself. You can kind of ask how much an exploit that gets a local user root access is worth, but that can give away enough to let them focus their own search.
In general, you have to rely on this being a repeated game - you and the pentester community at large submit lots of bugs to this company, and you rely on them to make it worth your time and talent. If they don't, you go test someone else's software. Reputation is everything.
Actually, no. For computers running macOS High Sierra, a computer with a root password is a whole heck of a lot more secure than one without a root account at all :)
But seriously, a fix is whatever fixes the problem.
It is now. There's going to be a whole lot of people who are going to set a root password because they read an article online recommending it, then install the update that fixes this issue. Now they're stuck with a root password when they could have not had one at all.
> Now they're stuck with a root password when they could have not had one at all.
And many users will create insecure passwords, leaving them with a serious vulnerability after the patch. A password of "root" or "qwerty" is only marginally more secure than a blank one.
Could the patch also handle this case? Can you create a root account in OS X?
If it was previously possible to create a root account then I guess there's no way to tell the difference between those who created the password as a response to the vulnerability and those that knowingly created a root account.
Yes, you can. macOS ships with the account disabled by default, but you can re-enable it if you wish. Most of the time there is no reason to do so.
> If it was previously possible to create a root account then I guess there's no way to tell the difference between those who created the password as a response to the vulnerability and those that knowingly created a root account.
Yes, but this difference doesn't matter. By creating a root account you have made your computer less secure anyways, assuming that we didn't have the current issue at hand.
Making it difficult to access something is always worse than not having anything to access. Plus, the average person will set their root password to something like 123456 or qwerty, which is clearly insecure.
> Making it difficult to access something is always worse than not having anything to access.
Not a choice now. If people have physical access to Macs at the moment then it seems to me there are two options right now: either 1) you have changed the root password, hopefully to a strong password or 2) they'll be able to login as root.
Right now you can expect the participants in this thread to be a little smarter than the average computer user.
Ugh, OK, I forgot some people don't know the code. The old adage is "On your first day as the new sysadmin, change all the root passwords". The idea is that a SECURE root is good, possibly better than no root (which is sort of a hack in itself). There are best practices for root passwords: things like length, composition (no dictionary words), disabling it for remote access, and care in who is allowed to have it.
Simply pretending root does not exist is a rather new idea and is not best practice. It's only for convenience.
This is an outmoded guideline for password security. String enough dictionary words together and you achieve a high level of entropy. See for example https://en.wikipedia.org/wiki/Diceware
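The arithmetic, for anyone who wants to check it: a standard Diceware list has 7776 words (6^5), so each uniformly chosen word adds log2(7776) ≈ 12.9 bits of entropy. A quick shell one-liner:

    # Entropy of an n-word Diceware passphrase is n * log2(7776).
    python3 -c 'import math; print(5 * math.log2(7776))'   # ~64.6 bits for 5 words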
I assume he means that you should always set a password for "root". Though most users don't even know it exists... hence it should have been taken care of by Apple.
Seems like you are splitting hairs here. Clearly, the point the commenter was making is that people that have now set the root password as a result of the tweet are more secure than they were before the tweet.
That's a whole week where they can get owned by any kid with physical access to the machine. In some settings (schools, libraries) this might be a huge issue.
> Disclosing an 0Day root authentication bypass vulnerability on Twitter isn't cool, even if it is local: think of the impact to shared iMacs on university campuses.
It gets the word out quickly.
Releasing proprietary software with such a hilariously insecure authentication system isn't cool. This isn't free software, produced by people & corporations out of the goodness of their hearts; rather, it's something for which people pay a good deal of money and which they have a right to expect is at least somewhat secure.
Getting the word out fast that a) there's a huge insecurity and b) it's in Apple software provides benefits to those running macOS (so they can fix their systems) and to those considering running macOS (so they can evaluate whether an alternative is more appropriate).
There is no "responsible disclosure". You will never get 7 billion humans to do things exactly the way you want it, so if there is a possibility of at least 1 person disclosing a zero day publicly, then you have to be prepared for it just as much as if it was everyone.
Instead of trying to control the behaviour of every single human being in this world and demanding that they do things in a certain way - which is, was, and always will be impossible - it is much more favourable to establish the expectation that a zero-day vulnerability might be dropped every week, and to have businesses (vendors and clients) be prepared for it so it can be handled adequately.
Let's wait until Apple release their patch so we know just how long they left everyone's machines vulnerable for. That will be a factor in determining whether this disclosure was irresponsible or not. It's been two and a half hours so far.
I agree in general, but calling it uncool and laying any blame on the person reporting is not fair.
You may know the protocol, and security researchers and people in the tech industry may know it, but why is an ordinary Joe expected to know, or research, that email address and/or the protocol regarding 0-day vulnerabilities?
I'd argue that even to the ordinary Joe it should be quite logical that disclosing something publicly before the company has had a chance to fix it means that nefarious people could learn about the exploit and use it against victims.
It's the same logical line of thought that leads people to turn wallets in to the lost and found (or an authority) instead of just pointing at one on the ground, shouting "hey look, a wallet!", and walking away.
Does anybody have any info on how much Apple would've been likely to pay for a responsible disclosure in this case, given the scope and severity of the issue?
I'm just curious how much of a payday this guy missed out on by not disclosing responsibly.
AFAICT, Apple's security bounty program is officially only for their preselected group of security researchers.
In the course of developing my current application, I've discovered a couple security bugs in macOS, which I reported to Apple product security in PGP-encrypted emails. The only thing offered to me was to have my name/company listed in the release notes (which they are, for the latest 10.13 update, along with a CVE#).
> Disclosing an 0Day root authentication bypass vulnerability on Twitter isn't cool
No one is under any obligation to sweep a company's security problems under the rug for them.
If companies create incentives for people to share vulnerabilities with them first, great, but no one is under any obligation to participate in those programs.
Don't ship broken software if you don't want pie in your face.
Forget the company. This harms users, who are not responsible for causing these issues; for all except the most technical 1% of Apple users, keeping the problem secret while Apple works on a quick patch is much more secure than telling the whole world immediately.
If it harms the company then they will take it more seriously and it will protect users more in future. If it doesn't harm the company then they have no incentive to change.
In almost all cases immediate disclosure is better for end users who actually care about their security because they can take appropriate mitigation measures.
Just because the vulnerability is not disclosed does not mean it is not being actively exploited. It probably is.
Users who don't care about their security do not deserve to be "protected" at the expense of compromising the security of those who do care and who benefit from immediate disclosure.
This situation is much more akin to a fire rapidly spreading through a village at night. I would go outside and start hollering in the hopes of saving anyone.
A better analogy is that there's a fire somewhere in your village, but it's mostly contained (it's not spreading, because other people don't know about it yet). By hollering about it, you've made it possible for anyone to go to the fire, light a torch with it, and burn down the village. Instead, you could call up the fire department and they could put it out, and then you could tell everyone about it.
> you've made it possible for anyone to go to the fire, light a torch with it
And at the same time provided everyone with a simple, free, and perfect way to fireproof his/her house.
You could wait for the fire department, which may take hours to get there, and hope that no malicious party down the street saw the fire, or you can do this. It turns out that both are quite reasonable reactions in this scenario, and that the latter is much more obvious to the layman.
There is no good reason to get angry at the layman for taking a course you don't prefer.
> And at the same time provided everyone with a simple, free, and perfect way to fireproof his/her house.
The issue here is that it isn't perfect in the long run (having a root account with a password is worse than not having one) and many people will not be adopting it since they're not in the know.
Of course, it is always possible people have been already exploiting this, but most likely it is at most a small number of people who know about it. Now every script kiddie in the world can and will go around trying to hack anyone who isn't well-informed enough to know how to protect themselves.
Uhh, no. How are you getting that impression? I'm simply saying that arsonists exist, and it's probably a good idea to make it harder for them to burn things down than to publicly advertise a way for them to do it.
Do you live in some horrible place where your neighbors will throw gas on your burning house unless you can quickly and quietly get the fire department there first, to put out the fire that the FIRE DEPARTMENT started?
It's an analogy (and it's not mine, mind you; I simply tried to make it more accurate), so don't take it too literally. Hacking is a thing that exists. By reporting this publicly it's become a lot easier to hack people.
The 'sign in as root with no password' method cannot be used to trigger the vulnerability initially via remote desktop. I tested it via SSH, File Sharing, Screen Sharing and Remote Management. None of these will enable the root user if it has not already been done locally.
Once the root user has been enabled locally, the only sharing settings I found to permit anyone remote access with the root/null combo is Remote Management.
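For anyone auditing their own exposure along those lines, the relevant services can be checked from a shell. A sketch; the launchctl service label is an assumption on my part, so verify it on your own box:

    # Is Remote Login (SSH) enabled?
    sudo systemsetup -getremotelogin

    # Is the Screen Sharing service loaded? (no output means not loaded;
    # label assumed to be com.apple.screensharing)
    sudo launchctl list | grep -i screensharing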
I don't think they meant using this vulnerability to enable a root remote connection, but using an existing non-root remote connection (think TeamViewer, VNC, whatever) and escalating.
> Disclosing an 0Day root authentication bypass vulnerability on Twitter isn't cool, even if it is local: think of the impact to shared iMacs on university campuses.
How dare you publicly shame him and risk his future employability like this? It's only responsible of you to contact him quietly and directly so he can correct his mistake and cover it up so nobody needs to know.
It's like there's one rule for the negligent global corporation which happens to work in the corporation's favor and shames the public for speaking to each other about their salary, I mean flaws in their software, and another rule for ordinary people giving a heads up to people who are fair game to pile on.
Is this address easily discoverable without needing too much insight into tech company workings? Like, do they have a help menu that tells people where to report stuff? I'm not an apple user.
https://www.google.com/search?q=apple+report+security+bug does bring https://support.apple.com/en-us/HT201220 right up, but that page documents "how security researchers, developers, law enforcement personnel, and journalists can contact Apple to report or ask about a security issue" -- notably absent from that set is "your average Joe who stumbled upon something entirely by accident" for some reason.
Security experts and hobbyists know this, but not your average user. This doesn't look like an in-depth bug hunt. Maybe more average users should be educated about responsible disclosure when they find security problems. This tweeter might not realize they gave up some money.
In this particular case, I am grateful for the early disclosure, so that I can fix it right now instead of waiting. For a huge bomb like this, I think you really can't blame the messenger.
It seems to me (somebody who has no chops in this domain) that this is such a basic bug. Like something a child would have found just messing around.
And it came from a corporation that has around $200B of cash and cash equivalents. Apple has the resources to test and find bugs like this.
That Apple didn't find it is down to leadership and priorities more than some inherent limits of producing reliable code. One spends on what one thinks important.
But who knows, I've got no domain expertise here. Maybe a fifth of a trillion dollars C&CE really isn't enough to fund production of more robust code. But really?
This is kind of a weird corner case; OS X tries really hard to hide the UNIXiness of its system, including the special nature of the username "root". So I can easily see someone not thinking to test it out super thoroughly.
A lot of security vulnerabilities are of this type: "let's do crazy shit X that the system was not built for and see what breaks." I'm sure this will be in their test suite now though.
Maybe Apple should spin up a group of 7 - 14 year-olds to add to their test suite input. They might be better at coming up with crazy shit X that might break things.
After the developers there is a line of QA as well, but part of the problem is having the organisational structures for developers to discover issues like this. Regular audits, security as a priority, and non-recrimination policies would be a good start. In many companies, if you bring up problems like this then you're "not a team player"; in others, you could point out issues like this all day long but they will never be acted upon because the budget isn't there for various reasons.
There is a simple workaround. Publicity means security here.
It's trivial to find. He can't presume he is the only one who found it. Telling any individual that doesn't have malicious intent is a good thing, therefore telling everyone is a good thing.