Pro tip if you're a startup and want free security advice: just sign up for all the bounty sites, and for every single bounty tell the submitter that it's a duplicate bug and pay them nothing, then hot-patch it immediately, and when they get suspicious, tell them their bug report had absolutely nothing to do with the timing of your patch. I know there are companies that do this because it has happened to me twice. There needs to be a bug bounty site with some sort of bug escrow to prevent this behavior.
Edit: pardon the tone, I understand that these types of problems are very very hard to solve because they aren't purely technical and involve humans.
To be on the other side of this: we do receive unsolicited but welcome bug and security reports. Some are legit, and we pay bounties even though we don't have an official policy and are an early startup. Others are just automated reports that people copy and paste. Those are uninteresting, but the people sending them still think they deserve money, and they often demand it more aggressively than the legitimate reporters do.
I don't run a bug bounty but I do sit on a security@ inbox. I don't believe I've ever seen a report I would want to pay out on even if I could, but if you discount blatant spam (often peddling EV certificates), I've received reports asking about bounties for:
- nginx version disclosed in headers
- "Feature-Policy" header missing
- DNSSEC not set up on zone
- Domain not in HSTS preload list
Responding to this sort of thing with "not a vulnerability" is intensely difficult because of the potential for a PR backlash about "poor security" from people who just don't know better, particularly when the company is definitely not a tech company.
And for software like VLC that has a special place in our hearts ('K-Lite Codec Pack' [1] was my special friend until the moment I discovered VLC many, many years ago), there is, or there should be, a firewall rule that says Enabled-Block-Any-Any-In&Out.
I remember back in the day I used to field security scan reports for a client I consulted for, and the number of things I had to mark as "not actually a problem" because all the scanner did was check Apache header versions was staggering. This is slightly better and slightly worse than your situation at the same time. Slightly better in that they were looking for versions with actual known exploits, but slightly worse in that they had no way to deal with distros that back-patched vulnerabilities, like RHEL (which was the distro in question). "Yes, I'm aware that the version of Apache you are noting contains a vulnerability. No, it's not actually a problem or exploitable, since I already applied the patched update. Just like last week. And the week before that."
Examples of "vulnerability" reports I've received:
- Dump of CVEs for "Web App X" or "Server X", even though literally zero of them apply to the version that I'm currently running.
- Dumps of port scans with warnings like "Running SSH on port 22 is not recommended" and "Server accepts HTTP. Always use HTTPS".
I assume there are tools that generate these reports because the reports use decent English but the accompanying emails are written in very broken English.
What's the justification for running a host that responds to HTTP and doesn't immediately upgrade to HTTPS?
I'm having a hard time imagining a scenario where I manage a web server that is accessible to anonymous people running pen scanners on it that has a justifiable reason for broadcasting port 80.
No, that's the point: the generation script recognizes that the server issues an HTTP-compliant response (which 301 Moved Permanently is) on port 80 and dumbly generates that false positive, not understanding that the only response on port 80 is a redirect to HTTPS.
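To make the setup being discussed concrete, here is a minimal sketch (hostnames and ports are illustrative, not from the thread) of a port-80 listener whose only job is to 301-redirect every request to HTTPS. A naive scanner sees "server accepts HTTP" and flags it, even though no content is ever served over plain HTTP:

```python
# Minimal redirect-only HTTP listener: every plain-HTTP request gets a
# 301 Moved Permanently pointing at the HTTPS equivalent of the same URL.
from http.server import BaseHTTPRequestHandler, HTTPServer

def https_location(host: str, path: str) -> str:
    """Build the HTTPS URL a plain-HTTP request should be redirected to."""
    return f"https://{host}{path}"

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fall back to a placeholder host if the client sent no Host header.
        host = self.headers.get("Host", "example.com")
        self.send_response(301)  # Moved Permanently
        self.send_header("Location", https_location(host, self.path))
        self.end_headers()

# To run: HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()
# (binding port 80 itself requires elevated privileges)
```

The scanner's check fires because the socket answers at all; it never inspects whether the answer is anything other than an upgrade to HTTPS.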
Could you elaborate on this? I'm curious as to how a setup like this would work in practice. Many people in my family live in rural areas so the topic of restricted bandwidth/poor connection quality is of great interest to me.
“But there I stood anyway, hoping my requests to load simple web pages would bear fruit, and I could continue teaching basic web principles to a group of vocational students. Because Wikipedia wouldn’t cache. Google wouldn’t cache. Meyerweb wouldn’t cache. Almost nothing would cache.
Why?
HTTPS.”
Thanks for the excellent link, discussed on HN a while ago [1]. For those who think an sslstripping proxy would solve it, please remember that this would degrade security for requests that really do have to be encrypted.
I remember https://sectools.org/tag/vuln-scanners/ for one. There are other software tools that run from your PC, can 'target'/scan a single IP or a range of IPs, and return some generic results. I won't get into the Metasploit discussion here.
Imho (and sorry to intervene), it is uninteresting because each of us can and DOES run these tools and get the same reports, and since we care enough, these are low-hanging fruit that we have all assessed on day one and either addressed or ignored for a valid reason (e.g., at a client site, someone was making noise about a vuln on a system that was a standalone server, disconnected from any network). I understand that security requires that all layers be secure, but we need to use sense and logic before we start yelling 'fire!! fire!!'.
I don't doubt your lived experience, but for real companies, the economics of ruthlessly withdrawing bounties don't make sense; bounties just don't cost enough money to be worth picking fights over.
There are some patterns where I've seen people not get paid just on general principle; for instance, people find systemic issues and, rather than disclosing the root cause, try to claim bounties for every instance of the flaw (you'll get paid, but not for every instance). It's possible that naive development teams sometimes get this confused, and, for example, consider "all XSS" to be a single systemic bug.
I wouldn't consider those entirely equivalent sets. I imagine plenty of startups probably don't fall under the criteria you would consider "real companies", or at least not in the beginning before people have a chance to mature into their roles or flunk out of them.
> the economics of ruthlessly withdrawing bounties don't make sense
The economics of something and how people try to justify it, or let their own egos get in the way, often don't match. I mean, I still have to kick myself sometimes: even though I work at a small company, agonizing over a couple hundred dollars a month in service-fee differences is not a good use of my time given my hourly rate and the time a more expensive option might save if it does what it says. Ingrained thinking can be hard to overcome.
I've had this happen for pretty large companies. In one case their security team later gave a talk about bugs they'd discovered that included a diagram I'd sent in my description of an issue, which I found annoying.
These days I generally just sit on issues. The work involved in putting together a bulletproof report that can be understood by whoever reads the security alias (could be a security engineer, could be a PHB, could be /dev/null...) is just too high to do for free.
Large companies sometimes do unethical things just because one person or a group of people at them thinks it is a good idea, unrelated to any measurable economic benefit.
I've never had this happen, but what I have had in the past is people saying "This isn't a vulnerability", after which I told them I would go public with an easy PoC that anyone could reproduce.
I literally had to twist their arm to get it patched... since it was something added to 'reduce friction' which allowed you to steal someone's Bitcoins.
At the time, the PoC would have netted me $40 for every person I scammed; today it's a $400 profit, and the tool would generate a QR code telling people there's free Bitcoin at Coinbase, so I bet someone would have used it.
Edit: I told my boss if they didn't do shit about it, I would put that QR code with 'Social Engineering' into Facebook ads since it had just started and see how much money I made out of it.
I had a fun exchange with Amazon AWS a couple of years back. They don't have a bug bounty (still, I think), but their response was that they would fix it but not publicly recognize it because "the cloud is always secure". Go figure.
It's not. I still have the email exchange from a couple of years back; I thought of posting it somewhere because it was so odd, but I don't have a blog and I'm not interested in publicity.
Amazon still doesn't offer a bug bounty program, to my knowledge. It's also the only cloud provider, my active security researcher friends tell me, that attempts to regulate them with weird pen-test authorization requirements that are very foreign to the industry standards of other cloud providers.
I'm just on the side lines watching, but there is a difference of how transparent AWS vs. GCP vs. Azure are when it comes to security. GCP > Azure > AWS
This sounds... awful. I'm sure there are reasons, but hiding information this way makes you seem incompetent and unsure of yourself (you as Amazon, not you personally) in my eyes.
Edit: I'm assuming you're speaking as an employee of Amazon, of course, which is not necessarily true.
It's more likely that low-hanging-fruit bugs have either been found before or been found internally by the company, just not yet fixed. Anything that tools like Burp identify is at risk of having already been found by someone else. It's a tough competition where, in the grand scheme of things, the winner is the company.
Or simply close it immediately with a nonsensical message unrelated to the problem report and then immediately change the code. (This happened to me last month.)
Well I do freelance work at a client which paid out about 20k in bounties in the last few weeks.
10k was for a bug that had actually been found by the internal test team on a Friday, after a new release the preceding Wednesday. Over the weekend, however, a bounty hunter/pen-tester discovered the same thing...
There was some internal discussion about paying out this bounty (prompted by the fact that an internal ticket with an extensive discussion already existed), but eventually it was decided not to contest it and not to get a reputation for screwing over bounty hunters/pen-testers, especially because this was someone they had worked with before, and they had actually informed him and a few others specifically about the new release that Wednesday.
They did inform the guy that the internal testing had already found this, but since it was still open on the public-facing service at the time he reported it, they would pay him.
I don't doubt this has happened, but it's just as likely that they did know about it and some dumbass product owner decided it wasn't a high priority until someone external reported it. My company has hundreds of similar (not just security) issues just lying around.
I found a pretty serious bug in a major service provider’s 2fa practices. The first time I reported it, they told me I was wrong. The second time I reported it, they actually tried to reproduce it and had an “omgwtf” moment.
They closed it with severity 8.8 on hackerone but the bounty wasn’t very high given how serious it was. There’s not really any sorta process for selling your bugs elsewhere though, you know?
Maybe you could call them out on Twitter? Or maybe in the future you could submit most of the bug but hold back something critical until they acknowledge it?
The third party doesn't know about the vulnerability. Company C posts bug bounty B in a contract. Researcher X discovers a vulnerability. Validator Y confirms the vulnerability, and X gets paid (1 − f)·B, where f is the validator's fee fraction.
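The payout split described above is trivial to sketch (names and the fee model are hypothetical, taken only from the comment's notation):

```python
# Escrow arithmetic: company C locks bounty B, validator Y takes fee
# fraction f, researcher X receives the remainder (1 - f) * B once Y
# confirms the report.
def escrow_payout(bounty: float, validator_fee: float) -> tuple[float, float]:
    """Split a locked bounty B into (researcher share, validator share)."""
    if not 0 <= validator_fee < 1:
        raise ValueError("fee fraction must be in [0, 1)")
    validator_share = bounty * validator_fee
    return bounty - validator_share, validator_share
```

The point of the escrow is that B is locked with the validator before X submits, so the company can no longer decide after the fact that the report is worth nothing.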
How would that work? Surely the company could make a fake duplicate and show it to you in a Merkle tree as "proof"? They literally make up every node in the Merkle tree, after all.
If it was a duplicate, they would be able to show that the hash of the duplicate report was already in the tree. For more information you might want to read about Merkle proofs, for example here: https://www.quora.com/Cryptography-How-does-a-Merkle-proof-a...
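A minimal Merkle-proof check looks like the sketch below (the hash choice and tree-padding rule are my assumptions, not anything specified in the thread). Note that a valid proof only shows a leaf is in a tree with a given root; it answers the parent comment's objection only if that root was committed publicly *before* the new report arrived, since a company that invents the whole tree after the fact can "prove" any duplicate it likes:

```python
# Illustrative Merkle tree with membership proofs over report hashes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then pairwise-hash levels up to a single root."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; bool = sibling sits on the right."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root
```

For the escrow to mean anything, the platform (or the researchers) would have to record each published root with a timestamp, so priority disputes reduce to "was this hash under a root committed before my submission date".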