
> The existence of trust relationships is how users protect themselves from fake accounts by specifying which ones they know for a fact represent an individual human’s primary account, forming a native Sybil resistance in the system.

I don't think so. What if my friend and I each create two accounts and trust all of each other's accounts? Do we then get twice as much money as everyone who plays by the rules?



The "Defending against fake accounts" section suggests that this wouldn't be effective except between you two. If I trust jstanley but not jstanley-fake, then you can still only give me jstanley currency. You've got to convince me to trust jstanley-fake in order to get me to use the money.

However, since this is basically a currency based on a web of trust, at a large scale you could probably dupe enough people into trusting your fake account. Or introduce each currency into a separate social network: one here with the technical social circle, another with your in-real-life community, for instance.


Given the existing scams that loads of people fall for, I see no reason they wouldn't figure out how to dupe people into accepting this fake currency. If everyone then accepts the fake currency, it isn't fake anymore and the scammers really are just printing money.

Maybe I'm too cynical but I would expect the scammers to win.


It would still fail when you got to the broad network. The idea of the validators presented in the paper is to prevent things like this. As you want to exchange money internationally, you would need an internationally accepted validator.


I’m sure some people could get away with it for some amount of time, but each person in each circle is incentivized to keep an eye out for these personal schemes.


Right. Because if I trust a bad actor, but others don't then they'll be spending through me and I'll end up with useless currency.


Why wouldn’t others have approximately the same incentive as you to not form a trust link to a liar?


Adding to this, how long until we see bots mass-emailing people with scams like "Add this account number to your trusted UBI list and double your UBI overnight!"


From what I understand, people other than you and your friend will only trust one of your accounts (the real one), and will only accept money from it. So even though you might have 100 FakeYous, you will only be able to pay with RealYous.


But of course when I pay you, you’ll have no idea which account is my primary. I’ll tell you it’s FakeMe23 and you’ll accept the money because why wouldn’t you?


Per the authors, they want people to use a real peer-to-peer trust network. I have never met you, dpark, and so the only way for you to pay me (or me to pay you) is for there to be a web of trust between you and me. And then we will only be paid with currencies of people we trust. This also means people have a stake in it: they could end up with 50 dpark-fake coins that no one else will accept, so they need to be careful about trusting people they don't know well.

You won't pay me with dpark-fake because I don't trust it, instead you'll pay me with pg because I've trusted him and he's trusted you.

However, this gets to the fundamental problem of the web of trust: it's really hard to do. Physical key-signing parties are impractical to scale. You need systems like Keybase, which correlate your keys/accounts with each other, if we can't meet and communicate in person (which itself requires trust: I wouldn't trust you after a single meeting, but I would readily trust friends I know well who are physically remote from me).

Quoting from the paper:

  This example demonstrates that Bob can only ever
  receive money that he trusts, and Alice can only ever
  spend money that other users trust in turn. Even if
  Alice makes 100 fake accounts and has them all trust
  each other, she will never be able to spend more than
  the amount of AliceCoins she has, since that’s the only
  account that other users will trust. This is why it is
  crucial that users take direct peer-to-peer trust
  relationships seriously.


This whole thing relies on users to essentially enforce the trust web. No way this works in reality.

Eve won’t try to establish trust with Bob directly. She’ll establish trust with Bob’s grandmother who doesn’t know any better. She’ll essentially phish her way into the trust network. After convincing Grandma to accept a single EveCoin, Grandma will perform any currency exchange for Eve and let Eve buy whatever she wants with BobCoins.

When your premise is that “it is crucial that users take direct peer-to-peer trust relationships seriously”, you are doomed to fail. You cannot expect security to derive from the average person being extremely diligent.


Or you could sidestep all those issues by having a single currency which everyone who participated would accept, the same way that dollars and euros work.


Sure, which is what we do now and generally works well. I'm not advocating for this system, merely addressing the question by pointing to their documentation where they answer it.


Why would I trust your account if it wasn’t, well, trustworthy? Think about the current world we live in. There are some people I would accept a verbal “I owe you” from (namely, my friends) and some people I would not.


Because you’re relying on a web of trust. You obviously won’t trust me directly. But you’ll trust me indirectly, because I only need to gain the trust of someone in your web, at which point the authors expect me to be able to engage in currency exchange to find a coin you’ll honor.

And at the end of the day, it’s unclear why you wouldn’t want my FakeAccount coins. Once I’m in your web of trust, you can spend my coins like any other trusted currency.

The authors seem to believe it’s the users’ responsibility to somehow police this when the users in this case just want a medium of exchange and derive little to no value from excluding my fake coins.


But what if I pretend to be two different people? It seems we'll need some kind of centralized organization to verify identities here (the government?)


I assume the point is that an individual wouldn’t just form a trust link with another individual without some confidence of their “true” identity. For me to trust someone’s currency, I would either need to be very sure of their identity through some trusted third-party system (e.g. a centralized system like government ID), or be involved in an extensive social (and probably in-person) relationship that would be difficult to fake (like friendship).

For the latter, what I mean is that it’s probably prohibitively difficult to maintain two significant in-person social graphs that have no overlapping members. Sure, people can get away with it for things like romantic affairs, but the incentive there is very different than money exchange, not to mention that romantic affairs often get discovered.


I don't see that overlap in the social graph implies a high enough chance of discovery. Wouldn't discovery require that members from each group are actually involved in the same financial transaction (in a way such that people are paying attention to who's involved)? It may well be possible to keep that rare enough for long enough that people try it, and are sufficiently successful to be a problem.

(This is certainly not confidence that it won't work, just lack of confidence that it will, pending further reading/experimentation/analysis...)


Read the linked website. They include validators, a decentralized set of organizations, that can check government IDs to provide that sort of trust.

(This is about the only response I seem to be contributing: no one seems to have read the full thing, so people keep asking questions that are answered by the source, and there is no interesting conversation.)


It's not quite "identity" that we need to verify, but rather uniqueness. Centralized approaches are theoretically easy; I'm not sure there isn't a possible decentralized solution. The one proposed does seem to be a partial solution, but probably not enough of one.


I don’t think you need an explicit decentralized solution. I think the individual incentives are such that this type of scam is unlikely, because it’s probably prohibitively difficult to maintain two significant non-overlapping social graphs.

I explained this more in this comment: https://news.ycombinator.com/item?id=15898504


I followed up over there :)


I agree that the proposed solution seems insufficient, but I'm impatient to see what happens when they try it out. Maybe the trust chain will work out.


Yes, I thought exactly the same. And immediately after: "then what's the point...?"



