Google Search: Inurl:server Filetype:key “-----BEGIN RSA PRIVATE KEY-----” (google.co.uk)
172 points by andygambles on July 31, 2017 | 56 comments


Hmmm, my idea would be:

"Hello from github,

We detected that you uploaded credentials to NAME_OF_REPO. We strongly advise against this as it allows attackers to easily gain unauthorized access to your software and infrastructure.

Have a look at this blog where we discuss alternatives"

EDIT: Just to be clear, I'm not suggesting a ban at all, just a friendly email in response to commits that introduce credentials to public repos


"Hello! This is Github. Look what we found on your server! HAHAHA!"


Is there a disadvantage to banning private keys in public repos?


I personally upload private keys in repos for some test scenarios and examples (dummy private keys of course). I often don’t want to write a test harness to generate the data for each run. Sue me!


Yes. Business-wise, GitHub is a git hosting site. If they started imposing rules on how you structure your application, customers would get frustrated and move away.

Just to be clear, I'm not suggesting a ban at all, just a friendly email in response to commits that introduce credentials to public repos


People already mentioned the major use case of testing, but building a blacklist of keys (e.g., the Debian OpenSSL there-are-only-64K-keys fiasco) is a plausible option as well.
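
Just to sketch how cheap that check could be per key (assuming a blacklist file of known-weak fingerprints, here called debian_blacklist.txt, and the uploaded public key on disk; the names are made up):

  # Take the uploaded key's fingerprint and see whether it appears
  # in a one-fingerprint-per-line blacklist file.
  fp=$(ssh-keygen -lf uploaded_key.pub | awk '{print $2}')
  grep -qF "$fp" debian_blacklist.txt && echo "key is on the weak-key blacklist"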


A fair amount of the google hits are for test certs that allow the test suite for the software to run.


Test keys, example keys for documentation, etc.

I'd be all for an optional, branch protection-like feature though.


Is there a problem generating them? It's essentially just a single ‘ssh-keygen’ command; see e.g.:

https://github.com/libguestfs/libguestfs/blob/master/p2v/Mak...
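
For a throwaway test key it's along these lines (the file name and comment are just examples):

  # Generate a disposable 2048-bit RSA key pair with an empty passphrase;
  # writes test_id_rsa and test_id_rsa.pub into the current directory.
  ssh-keygen -t rsa -b 2048 -N "" -C "dummy test key" -f test_id_rsa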


Well, misdetection and example keys, for one.


Now that I think about it, there are companies better placed to do this: Code Climate, Codacy, et al.


I know you mean well, but no charge should be introduced to mitigate this stupidity. You are (probably correctly) assuming that the data in question is genuine. Nonetheless, it is none of our business. rm -rf /* does not contain a warning message, and that is the way it should be.


Misleading? Because "rm -rf /" does give a warning. From info rm:

  `--preserve-root'
       Fail upon any attempt to remove the root directory, `/', when used
       with the `--recursive' option.  This is the default behavior.


Yeah, long ago that was not the case and rm would happily gobble /, but it started with Sun adding protection. It's been the default in GNU coreutils (hence the vast majority of Linux distros) since 2006.


You forgot the * I put in.


The wonders of the "Googledork". There is a lot of information out there which definitely shouldn't be public: https://www.exploit-db.com/google-hacking-database/


It's worth pointing out that some of these are configuration examples, illustrations of how to set something up. (Though of course that carries the risk that less thorough users just copy-paste them into production and call it a day.)


Came here to say this. I have a public "example" repository my clients use for reference that has a "fake" public/private key pair.


One of the more amusing patterns I spotted in the URLs is where an alarming amount of the filesystem appears to be exposed, e.g.:

www.dulceswilly.com/mysql/BHP_sym/root/usr/local/etc/apache22/server.key

If I was on a non-company IP, I'd be tempted to poke around and see what else is visible...


These are already hacked systems where someone has been trying to perform a "symlink attack" to access other users' files with the httpd's permissions.



Too late, "professional" hackers from Indonesia have already done the job.

Front page reads:

PELITABANGSA .CA [ INDONESIA CYBER ATTACK AND MALWARE ANALYST ]


You should check out how many services have their entire git repo openly accessible (this allows getting the data out of the git objects, as well as the history).

Quite often you can go to domain.tld/.git/ and find the files if you know their names. Even major sites - The Hill only fixed it in the past few days.
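
Checking your own site takes seconds, e.g. (substitute your own domain):

  # If this prints 200, the repo metadata is being served and the
  # object files are very likely reachable too.
  curl -s -o /dev/null -w "%{http_code}\n" https://example.com/.git/HEAD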


One of the first things I implemented when setting up a company's webserver was to make .git and below return 404. Making those folders visible is a silly idea on SVN, let alone Git.


I've also fallen into this trap, thinking that Apache wouldn't serve up any dotfiles. Wouldn't that be a saner default?


For nginx:

  # Allow Let's Encrypt challenges first: ^~ is a prefix match that
  # stops the regex dotfile block below from applying to /.well-known
  location ^~ /.well-known/ {
    root YOUR_LE_DIRECTORY;
    allow all;
  }
  # Block all other .files (.git, .env, .htaccess, ...)
  location ~ /\. {
    deny all;
  }


intext:"index of /.git" reveals a ton of those.


I was a little surprised to see an Apple domain in there, but I can't really tell what the private key was for (could have been a test or an example). It looks like it's either an outdated result or an Apple engineer quickly saw this and fixed it because the page 404s now.


Assuming we're talking about the same link here, I can still access it:

https://opensource.apple.com/source/tcl/tcl-87/tcl_ext/tclli...


That yields just 7 pages (10 items each), so it's probably pretty irrelevant.

But of course you are welcome to share your run-of-the-mill anecdotes about some intern once accidentally publishing passwords, etc. :)


For me the top of the first page said about 1160 results. Still a surprisingly small number of hits.


That's because Google always inflates result numbers until you get to the last page... for me, that last page is 5 and there are 46 results.


I've often wondered why they do this; what's the point in offering misleading results?

For instance, there have been times I've searched for something, and it gives back a lot of results, plus it has a pager with seemingly over 50 pages in it.

But - if I say "jump to the last page", suddenly the pager only shows four pages and I'm at the end of page 4...

What the heck is up with that? Sometimes, some really great information is buried under all the SEO'd to hell-and-back crap at the top. That's the info I want, and I don't care if the website owner cares about SEO or whatnot - because they are likely a small-time user (or they just have a very old page or something that hasn't been updated in 20 years)...


I always assumed that google shows the result page long before the search algorithm finished its work, making that number at best a guesstimate.


Slightly related question about API keys that rely on referer (say Google Vision): what stops me from using curl to spoof the referer and run up thousands on someone's bill (15 cents per 1k recognitions)?

I assume there’s some IP based quota, but I haven’t seen a knob for that on GCP at least.
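
To be concrete about the spoofing part: the header is entirely client-controlled, so something like the following (placeholder URL and key) is all it takes:

  # Nothing stops a client from sending an arbitrary Referer header;
  # the endpoint and key below are made up for illustration.
  curl -H "Referer: https://allowed-site.example/" \
       "https://api.example.com/v1/annotate?key=PLACEHOLDER_KEY"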


This should be enforced on your server; you shouldn't have clients directly connecting to a service like Google Vision. I have a system that uses AWS Lambda so that I don't have to distribute my 3rd-party API keys, but I still have to add rate limiting.


You are missing my point entirely.


Typically APIs can implement rate limits at various levels. Your IP may get throttled when they see it trip over an individual referrer's quota.


Can someone explain why inurl:server is used? Wouldn't this also work without it (and reveal more results, where the key file has been renamed)?


I used inurl:server to restrict the results to mainly just server.key files, thus revealing the private keys of HTTPS websites.

Of course you can remove it. Just means more results to wade through.


More results does not necessarily mean better results. They probably got more specific to remove references to documentation and such.

Another interesting thing about Google is that this search may return results that are not found without the inurl:server.


At a guess, this is to filter out SSH keys, which have an identical private key format, and we already know well how many of those get committed to GitHub. I think this is to highlight where the server's HTTPS key is visible.


I'm curious also, as I'm getting 5x the number of results when excluding the inurl filter.


Some of the results are web servers leaking the private keys of the website or in some cases mail servers.


The sixth link in the Google results is https://jpl-vmdb03.inetuhosted.net/sjsuvc.drivingcreative.co...

Is that the JPL I think it is?


If you mean Jet Propulsion Lab, then no way. I'm sure they're better than that.

The IP block is managed by INetU Inc, which was apparently a cloud hosting company now owned by Canadian telecommunications company Shaw Communications.

https://whois.arin.net/rest/poc/II25-ARIN

https://www.crunchbase.com/organization/inetu-managed-hostin...

https://www.crunchbase.com/organization/shaw-communications


This is pure paranoia fuel. I don't think I've done this (or what someone else mentioned about leaving the .git folder open on the server), but I'll double check anyway.


I am definitely making this a part of my regular security scan.
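
As a crude first pass, something like this over the webroot catches the obvious cases (adjust /var/www to your layout):

  # Flag files that look like private keys, and any .git directories
  # sitting under the document root.
  grep -r -l -e "-----BEGIN RSA PRIVATE KEY-----" /var/www
  find /var/www -type d -name .git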


It took me about 15 seconds to understand. WTF! Why are people uploading their private keys to github?!


Most of the ones I saw on GitHub looked to be test or demo keys, not anything real.

The better question is about those actual non-GitHub sites that have them exposed (though others here have noted that those sites may already be hacked).


Google lost its mind when I clicked this link. Signed me out, turned on SafeSearch and threw up some privacy notice dialog at the top of the page.


Because it's google.co.uk, not the one you usually use.


Interesting. Weird response to something that doesn't seem that rare; it's like someone linking you to the mobile version of a page, just linking you to a different Google locale.


Europe's privacy laws.


[flagged]


One link is to https://github.com/SUSE/Portus/blob/master/vagrant/conf/ca_b...

The key is still in Google's cache...


It's also still in the repo, as https://github.com/SUSE/Portus/blob/master/examples/developm... — not sure why the vagrant development key that (presumably) would never be used outside of a local VM would be an issue.



