> To make it more difficult to brute force, when generating the humanID Account ID, we will concatenate the phone number with a Salt Key (another string that will be appended before the hash).
> sha512hash ( Salt_Key + Phone Number ) = Hash Result
This is a complete joke (a SHA-512 of a phone number can be brute-forced on a typical computer in a fraction of a second). I doubt the rest of the protocols and cryptography are any better.
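To make the brute-force point concrete, here is a minimal sketch of the attack, assuming the pitched scheme sha512(salt + phone_number) and a salt that has leaked (the salt value and phone number below are made up for illustration). Once the salt is known, recovering a phone number is just enumeration:

```python
import hashlib
import time

# Hypothetical salt from a breached database; the scheme is
# sha512(salt + phone_number) as described in the pitch.
SALT = "leaked-salt-value"

def human_id(phone: str) -> str:
    return hashlib.sha512((SALT + phone).encode()).hexdigest()

target = human_id("5551234567")  # hash an attacker obtained from the DB

start = time.time()
recovered = None
# A real attacker enumerates the whole ~10-digit space; we scan a small
# slice here just to keep the demo instant.
for n in range(5551230000, 5551240000):
    if human_id(str(n)) == target:
        recovered = n
        break
elapsed = time.time() - start
```

Scanning 10,000 candidates completes in well under a second on commodity hardware, which is why the full phone-number space is within easy reach of a single GPU.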
Also, phone numbers are not unique identifiers for people. Real people, malicious or not, have multiple or no phone numbers (or phone numbers that can’t receive SMS). I haven’t found a clear answer yet as to whether SMS verification is the only proof step but it seems like that’s the case.
Also, goddamn, please stop using phone numbers as ID numbers.
No, I don't want to give every service my phone number.
> humanID blocks automated accounts, cyber-bullies, trolls, and freeloaders
Yeah, stop right there. Please don't block automation. That blocks innovation in accessibility. Go after the bullies and trolls but please don't try to differentiate a human and robot user of your UI, because some humans need a robot to help them.
What externally-verifiable ID verification mechanism are you willing to participate in, in order to demonstrate that you are not a sockpuppet? Phone numbers are crossed off the list, so at least we don't have to debate whether they're acceptable, and email addresses obviously aren't useful for preventing sockpuppets either. What else is left that is acceptable to you?
Focus on preventing abusive usage patterns, not sockpuppets or robots.
Sometimes I do want a sockpuppet to help me access information and reformat or deliver it to me in an alternative form that is better accessible to me.
First, we can say “let’s not verify identities, anyone can be as many people as they want”. We have that today. It’s killed a lot of businesses and has deeply harmed society. Swell. We can of course redouble our efforts and refuse to take other steps, which is certainly a popular choice.
Or we could discuss the pragmatic path: verifying identity. Phone numbers are popular. Email is a tire fire. SMS is simple to hack. So, what’s next to try? What should this startup be using instead of phone number verification? What platforms exist to build revenue with clearly-identified customers on the internet?
The folks refusing to consider the question here seem to only care about the former, but I’d like to see pragmatic verified-identity solutions that address the firestorm of hate that anonymity has brought down upon us, and I’d like to see someone make a billion dollars from it.
I’m here for a pragmatic discussion, not for another retread of the same boring “everything must tolerate sock puppets” that we’ve been living with for the past two decades, and I’m not going to waste productive time and energy in idealistic debates that have failed to deliver what they promised at ETcon’03. We owe society better. We owe ourselves better. At least this startup is trying.
In an ideal world, the governments of the world (or other trusted parties) would provide identity providers based on zero-knowledge proofs.
These would have the property that when a person joins a platform, the platform can check which other users on it are controlled by the same person, without revealing the meatspace identity of the person in question, and without revealing the usernames to the trusted entity. (Well, ideally the platform would only get to know the number of users, not who they are; but I don't think that can be done.)
Then a platform could just put a hard limit on how many users any given person can have, while still retaining the users' pseudonymity. Or it could place a soft limit, for that matter: every subsequent user receives an automatic penalty (e.g. 10 downvotes on every new message).
It wouldn't stop state actors, and it'd be horrible in places where the government isn't held accountable, but otherwise seems to solve most of the problems of anonymity.
Perhaps you could replace the trusted entity with something peer-to-peer, but it would have to be very carefully designed.
I've been thinking about the same thing, albeit a more limited version:
it would still be useful in very many cases even if it weren't so watertight.
As long as it is audited, I get a warning, and a lookup has to pass a judge to go unpunished, I'd be fine in most contexts even if it were technically possible to look me up.
Of course I am keenly aware that this is a privilege that not everyone has.
> that address the firestorm of hate that anonymity has brought down upon us
I disagree with many (most?) of your underlying assumptions about social dynamics, but this one in particular stood out to me. Have you not seen the vile things that people post to Facebook under their legal name, typically to groups consisting almost entirely of people from the local community?
It isn't anonymity that's the problem. It's the intent behind and incentives provided by the social interaction in question. There are deep systemic issues (IMO) with the interaction models of most social media today.
Anonymity is a separate problem from immunity, but the former certainly is perceived as a successful route to the latter.
Those people posting to Facebook under their own name are living under the belief that their words are protected speech, and that no harm will come to them in their community for speaking them. So, too, do anonymous identities perceive themselves to be immune to harm. Communities in the southeast United States have been quietly passing racism and sexism by word of mouth for hundreds of years, and they are quite correct to think that posts on Facebook will do no significant harm to their lives, as long as they don’t stray far from their communities. They derive courage from seeing others say these terrible things, and they say them too. Anonymity is of no relevance to their concerns, because they have unity across community and authority to protect their cruel and malicious beliefs.
Anonymous identities present this problem at scale, where the community is “anonymous identities” and the authorities are the businesses that run the Internet. For decades we built these platforms, I helped build these platforms, and we’ve discovered that we’ve created a place that allows social evils of all sorts to coordinate and fester and grow and feed upon those who are vulnerable. The platforms have so many users that policing speech is impossible, and in the few cases such as Nextdoor or Yelp where locality narrows speech down to enforceable-scale communities, the platforms shy away from the simple human cost of moderating speech and applying principles, desiring instead to maximize daily active users and advertising views and paid subscriptions and minimize dollars spent on human oversight and moral compass wayfinding in these community bulletin boards they’ve created.
In the novel Snow Crash and in many works prior, the Tower of Babel is an allegory for the risk of all humanity being able to thoughtlessly communicate. It’s not that different languages are unintelligible, but it’s simply that they slow communication, ensuring that one community cannot readily infect another with malicious beliefs and cultism. We have torn down the barriers between communities that kept the uncivilized, the evil, from metastasizing throughout the world. And so we who built blogs from the ashes of Fidonet and Usenet have created a perfect agar dish for a plague that threatens to kill us all.
Sock puppets are a unique feature of anonymity that allow any single entity to represent themselves as a crowd, and human beings are extremely vulnerable to crowdthink manipulation by those actors. This can’t readily be done without anonymity, and is a problem unique to anonymity itself. 4chan could not have wielded an army of sock puppets in Gamergate and beyond, if users had been required to authenticate their true selves at each forum they coordinated attacks upon. Even if their identities were unpublished, they would be found and banned, and eventually the equivalent of Spamcop would be set up to coordinate disreputation tracking with published proof of the misbehavior of the identity in question. It isn’t necessary to publish the identity of an attacker to indicate that they attacked you, and the invaluable contribution of centralized Twitter blocklist subscriptions demonstrates that we are still very capable of silencing voices that are deemed beyond the pale.
It’s uncomfortable to consider the morality of when, if, voices should be silenced. But I’m tired of this world where consequence-free verbal abuse is taken for granted, and anonymity contributes uniquely to that in ways that cannot be assigned to the greater problem of community bulletin boards at worldwide scale. So, yes, I call for ways to verify identities that are sufficient to stop this army of manipulative sockpuppets that swarm any platform without active identity verification.
HN suffers these attacks too; witness how any post about ESR or China or gender inequity is guaranteed to have a freshly-registered account posting plausibly-formed comments that somehow always push the discussion towards a world with more bias, more inequity, more tolerances of abusive behaviors. We don’t see sockpuppets swarming discussions asking us to keep an open mind and be more civil towards each other and have empathy for the viewpoints of both sides. We see sockpuppets pushing authoritarianism and bigotry instead. They do so freely, knowing that their attacks are impossible to detect and stop with any algorithm.
MasterCard paid nearly a billion dollars today for an online identity verification startup, because they understand that the future we’re living is so terrible that it won’t be allowed to continue. Sockpuppets are an unprofitable drain upon the financial profits of capitalism. Tor and VPNs are frequently banned at a network level due to anonymous abusive behaviors committed upon them, and that simply won’t scale. Cloudflare’s own 1.1.1.1 VPN is blocked by Slice, a pizza delivery app, using Cloudflare’s own CDN firewall services, no doubt because the anonymity was abused for stolen credit card testing and in general abuse at scale. I wouldn’t be inclined to accept traffic from TOR if I ran a business, not unless I was doing live ID checks in some way that isn’t as trivial to falsify or hack as the weak solutions we’re offered by phone and email today.
If you are able to present a case that the sockpuppets problem is in fact common to anonymous communities (such as HN) and to identity-verified communities (I can’t name any, can you?), then I’d like an opportunity to consider your arguments along those lines. Your case is not yet compelling, but I’m open to hearing more.
Please note that HN basically works this way: anyone can create an account anytime. Still, HN is one of the nicest and most thoughtful places I am aware of on the internet.
Anyone can create one account for their use anytime, but if they create and use multiple accounts, they get shut down by the site admins. HN became what you praise in part by emplacing defenses against sock puppets, an approach that’s loudly contested elsethread. That HN has remained at all nice is in part due to that “one account per person” enforcement, and the reason why is clearly stated in the guidelines:
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
I don’t mind if people keep their identity secret from the other users of the sites that verify identity. I don’t mind if people keep their identity secret from the site admins, as long as they only have one identity. Enforcing that is very hard at scale, and only works at HN due to the small size.
I do encourage you to post your comment further upthread with more details about your view, but I won’t be participating in that thread. My focus here is solely on the question of which ID verification mechanisms are acceptable, as the parent comment only rejects one method without completing the thought by specifying the acceptable alternatives.
> The Salt Key is the combination of lowercase letters, uppercase letters and numbers. For SHA512 hash which is 64-bits, the recommended salt key is 64-bits.
And the salt itself appears to be private, and I assume, unique per user.
So, I'm in no position to say whether that's "good enough", but it's at least not something you're going to brute force in a few seconds.
If the database leaked, the salt would become known. A truly anonymous system should still be just as effective if the entire DB is known to an attacker.
Assume there are on the order of 10 billion valid phone numbers (there could be on the order of 100 billion, depending on how far down the rabbit hole of international numbers you go). A GTX 1080 can do SHA-512 at ~1 billion hashes/sec[0], which means it would take about 10 seconds (longer if numbers run past 11 digits) to recover each phone number. That's fast enough that you don't need to bother hashing the phone book; you just brute force every possible number.
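The napkin math above is just a division; spelling it out (the space size and GPU rate are the comment's own estimates, not measurements):

```python
# Time to enumerate the full phone-number space against one leaked salt
# on a single GPU, using the estimates from the comment above.
phone_space = 10_000_000_000   # ~10 billion plausible numbers
gpu_rate = 1_000_000_000       # ~1e9 SHA-512 hashes/sec on a GTX 1080
seconds_per_salt = phone_space / gpu_rate  # 10.0 seconds, not years
```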
I would expect the napkin math to come out to years of compute to unmask someone in the face of a DB leak, not seconds or minutes.
As far as I can tell they cannot discard the salt if they want to be able to tell whether you already have a 'humanID': without keeping the salt around, there is no way to check whether a phone number already has an account.
If they were serious about this, they should at least use scrypt or some other modern password hashing technique as salted sha512 can be computed too quickly on modern hardware.
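A sketch of what "use scrypt instead" could look like in practice; the cost parameters (n, r, p) here are illustrative, not tuned recommendations, and a memory-hard KDF only raises the per-guess cost rather than making a 10-billion-entry keyspace safe:

```python
import hashlib
import os

# Memory-hard hashing of the phone number instead of plain salted SHA-512.
# n=2**14, r=8, p=1 (~16 MiB per hash) are common illustrative parameters.
def slow_hash(phone: str, salt: bytes) -> bytes:
    return hashlib.scrypt(phone.encode(), salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)   # per-user random salt, stored alongside the digest
digest = slow_hash("5551234567", salt)
```

Each guess now costs milliseconds and megabytes of memory rather than nanoseconds on a GPU, which turns the "10 seconds per account" figure into something far more expensive, though still not impossible for a small keyspace.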
This is an example for simplification and explanation reasons, as most of the people who see this website barely know what a bit is. We'll try to address this better; thanks for the feedback.
I thought it was used to generate a unique account ID, that could be verified later given a phone number. Assuming phone numbers aren't duplicated often, that's doable.
Normally it's the other way around: a salt is per user, but it is not a cryptographic secret. It's stored alongside the hash, and its job is to keep rainbow tables from scaling across users and databases, so each hash needs its own brute-force run.
A "pepper" is the site-wide secret kept outside the database, and it only helps if the attacker who got the database didn't also get the application config.
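For reference, a common arrangement combines both: a random per-user salt stored next to the digest, plus a site-wide pepper kept out of the database. This is an illustrative sketch with made-up parameters, not the scheme from the article:

```python
import hashlib
import hmac
import os

PEPPER = os.urandom(32)  # site-wide secret: lives in config/KMS, not the DB

def hash_secret(secret: str):
    salt = os.urandom(16)  # per-user, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha512", secret.encode() + PEPPER, salt, 200_000)
    return salt, digest

def check_secret(secret: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha512", secret.encode() + PEPPER, salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_secret("5551234567")
```

A database leak then exposes the salts but not the pepper, so offline guessing additionally requires compromising the application's configuration.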
Telegram requires verification by phone. Despite that measure, spam from thousands of bots is not uncommon.
Phone numbers barely help.
What really works though, is microtransactions. I have not seen any spam in chats where you are required to pay even a tiny fraction of a cent to join (1/64000th of a cent).
This is the idea behind hashcash[1], but instead of currency it's just a small proof of work. Some nominal PoW (a few seconds?) to bypass the spam filter smells like it would be remarkably effective at cutting down spam.
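A hashcash-style proof of work can be sketched in a few lines: the sender grinds a nonce until the hash of (challenge + nonce) has a required number of leading zero bits, and the receiver verifies with a single hash. The `bits` parameter tunes the cost; values here are illustrative:

```python
import hashlib
import os

def pow_hash(challenge: bytes, nonce: int) -> int:
    data = challenge + nonce.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def solve(challenge: bytes, bits: int) -> int:
    # Sender: grind nonces until the hash falls below the target,
    # i.e. has `bits` leading zero bits. Expected work: ~2**bits hashes.
    target = 1 << (256 - bits)
    nonce = 0
    while pow_hash(challenge, nonce) >= target:
        nonce += 1
    return nonce

def check_pow(challenge: bytes, nonce: int, bits: int) -> bool:
    # Receiver: one hash to verify.
    return pow_hash(challenge, nonce) < (1 << (256 - bits))

challenge = os.urandom(16)
nonce = solve(challenge, bits=12)  # ~4096 hashes: trivial once, costly at spam scale
```

The asymmetry is the point: a legitimate sender pays a fraction of a second per message, while a spammer sending millions pays proportionally.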
This is also what we do with bitmaelum[1]: each message needs a proof of work for delivery. Large (authentic) mailing lists can use an opt-in by a reader, which allows the sender to bypass the PoW restriction.
Captcha requires manual intervention, and potentially has the problem of being easy for bots but difficult for humans. A captcha for each email on even a small mailing list would be pretty horrible.
That was my first thought upon seeing the title too.
The whole idea of logging in is to show proof of your identity, which is exactly the opposite of being anonymous.
I was curious about this exact thing when I saw they hashed a phone number (of all the easy-to-guess things). Yes, a salt makes it impractical to rainbow-table... but if their database ever gets leaked then everyone is toast. It'll only take, as you said, seconds per account to brute force once the salt is exposed.
So, it doesn't look like they've actually broken any new ground here that you can't achieve with existing commercial products like Okta or Auth0. They're taking an extra step of asking you to store hashes of their hashes. Which actually feels less secure, since if hackers get their hashes, that's as good as getting cleartext passwords to log in directly to your site? I'm not actually clear on that.
But either way, the diagram says they're hashing phone numbers, so presumably you authenticate by typing in your phone number, which is a terrible password since you give your phone number out to people. So they must also send a TOTP via SMS, which is better but not great; NIST has started recommending against SMS for out-of-band authentication. Either way, this whole chain of events just delegates authentication to your mobile carrier. Same thing if you send a TOTP to an email address: it feels more seamless, but really you're just delegating auth to their email provider. No different than using OAuth.
> Which actually feels less secure, since if hackers get their hashes, that's as good as getting cleartext passwords to log in directly to your site? I'm not actually clear on that.
You're right. This turns hashes into passwords, and is distinctly a step down in security.
This is solid advice - thanks. One of the things they mentioned is that this would help prevent botnets. Would OAuth also do that? Also - does the opensource, non-profit status change the dynamics?
No, nothing about being open-source or a non-profit fixes the technical flaws in a system's design choices. It just makes this a poor design with an ethically pleasing organization behind it.
Not sure why either open-source or non-profit would change the dynamics? Non-profits are still incentive-driven, and their goals may not align with yours; and open-source code is entirely independent of how data is collected, shared, etc. People do evil with OSS all the time. :)
The submitted title was "My company got pitched on anonymous sign in – curious to hear pros and cons". Submitters: please don't do that. If you want to add a question or a gloss on an article, that's fine, but do so by posting a comment to the thread.
"Please use the original title, unless it is misleading or linkbait; don't editorialize."
At least looking at the archive.org version of the site, it looks like it just provides a way of building a username out of hash(phone number, website). I'm not seeing information about passwords or authentication, and I wouldn't treat knowledge of someone's phone number as being at the same level as a password.
So, to me it looks like marketing hype without substance. It would be useful for the site to be online and not returning 500 errors, though, to see whether they have anything else.
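The hash(phone number, website) construction described above could look like the following sketch (function name and inputs hypothetical). Mixing the site name in gives each site a different, uncorrelatable ID for the same phone number, though it does nothing to change the brute-force picture:

```python
import hashlib

# Hypothetical per-site username derivation: same phone number, but a
# distinct ID on every site, so the sites can't trivially correlate users.
def site_username(phone: str, site: str) -> str:
    return hashlib.sha512(f"{site}|{phone}".encode()).hexdigest()[:16]

id_a = site_username("5551234567", "forum.example")
id_b = site_username("5551234567", "shop.example")
```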
It’s annoying AF for me: I can 1Password into a site with an incredibly secure password instantly, but with a magic link I have to find my email client, wait for the email, click it, etc. As a sign-in backup it’s nice, but as the main signup it’s a pisser.
The site is toast, so I can't read how it works, but I will comment in general on 3rd-party auth.
I refuse to use sites that require 3rd-party auth. If I have a problem logging in to your site, I want to reach out to you and get it resolved. I don't want you to say, "We don't have any ability to address auth issues on our own site. Take it up with <completely unrelated site>." I don't want my account with you to be suspended because I had a falling out with Facebook or Google or anybody else that is not you.
Any site using third-party sign-in should treat it as just one way of logging in. The verified email these providers return should always be able to log you in as well.
Expect more pushes for a fixed digital identity. Check the id2020 initiative. Gatekeepers are not fans of the pseudonyms we've been using since the early days of the Internet.
>Our Vision: One Digital Identity per Human – both Anonymous and Accountable...billions of fake user accounts undermining our societies.
What are the use cases for anonymous logins from a business POV? Even if you are legit pro customer privacy, this feels like it requires some fundamental changes in how your business perceives and treats its customers, not just their data.
We would definitely have to make some product changes, but the pitch (which I kind of get) is that we differentiate by using a privacy-focused, non-profit identification method, thereby signaling we value our users' privacy. Given the issues with Clubhouse of late I think this could have value.
It seems like SMS-based auth, with the "gimmick" that I trust this website instead of your website.
So already I won't use it because I don't want to authenticate via SMS. It also raises the immediate question of what happens when I change my phone number?
But why should I trust this website more than your website? Unless your website is fully zero-trust, it is probably better to trust you to throw away my phone number than to hand my phone number to this company and other data to your company.
The front page boldly claims "no data leaks, no hacks."
I am immediately suspicious. Nobody in their right mind with sufficient technical background is still claiming that in 2021, for any technical solution.
Hmm this is something I’d like to implement in some sites. I’ve been considering integrating with something like ActivityPub to enable user accounts without “more user accounts”.
Every entity I interact with online collects my data. The saving grace is that they don’t often compare notes. I do not want any entity to have all of my data, even if it has my ssn redacted
one of the humanID founders here.
Very much appreciate the feedback, and also appreciate those that addressed concerns before we could. Always open to those that want to help fix any technical issues they might find - the team is fully nonprofit & open source, you're more than welcome to help!
Also, to be clear, while the site was down for an hour, the login never was, as we have set that up independently from the site.
Technical pros and cons. The way it works is that when you use their login you get prompted to register with them, which is done using a phone number. From what I understand they do not keep the phone number; they just send my site a token that says "yep, this person = that phone number".
Sorry - my company is an audio social network. Because we don't use a camera for our live streaming many of our members tend to be shy and possibly privacy focused.
I'd say the main con is that the site isn't reliable. I'm not being facetious. If you're going to use 3rd party sign on, being on the front page of HN shouldn't be enough to bring it down. Imagine if you posted your company's site instead of the underlying technology and your sign-in was negatively affected.
That aside, my personal feeling is that though many of us are privacy conscious, adding more and more dependencies to your site means we have to trust more entities. Even if they don't store anything, we have to trust that they aren't lying, that redirection is implemented properly, etc.
I think the best thing you can do if you care about the privacy of your users is minimize the amount of information necessary. So if your site doesn't require email, don't take it. If a phone number isn't necessary, don't ask for it. Use usernames, only ask for an email when the user is doing something that would require it (e.g. they need a receipt).
One thing that I love is when a site actually gives you a temporary username the minute you visit the "app" portion and you can use the site as if you created an account without having to do anything. That's usually a sign that the administrators really do care about you not jumping through hoops.
> So if your site doesn't require email, don't take it.
I'd even say "don't ask for it". Landing on the site I get a big banner that takes up a lot of the screen asking for my email. For a service that is talking about data protection and privacy, this sets off alarms. It may be completely honest and genuine, but since it takes so much of my screen it feels like you really want that data from me.
If your target audience is privacy conscious people you gotta know how privacy conscious people think. If you can't think like them, well then I'm also concerned that you don't have our interests in mind.
Do you work on high traffic websites? Surges in traffic such as these often expose underlying bottlenecks. Give the people a break, I'm sure they'll work on fixing it after today.
I'm not criticizing the people. Do you not think it's legitimate criticism that using a niche 3rd party login system can be bad precisely because they're still ironing out the kinks?
As always, it depends. A quality SaaS provider is going to have more expertise in their niche. They'll have more advanced functionality that can be turned on quickly (SSO for that big new partner).
> Your outage is within your control
This was the standard argument against cloud infrastructure and the industry has continued to shift in that direction anyway.
Generally up to the website on how to implement the system. Some have done with usernames, some without.
Generally, users aren't looking for a 'I forgot my password' button if they aren't asked for a password in the first place.