The point of 2FA is that you combine something you have with something you know. For the something you have to be a strong authentication factor, you have to maintain tight control over it. When you use SMS as the factor, you're effectively mailing the second factor to yourself through a long chain of custody with many intermediate parties.
When I use an authentication app on my phone (assuming that I destroyed the seed immediately after loading it), or I use my physical OTP token, then those codes are pretty direct proof that I have possession of that object at that time. If I lose my token, then I know I lost it, and I can disable it and get a new one. As this thread demonstrates, it's possible for someone to hijack your phone number without you realizing it for a period of hours to days.
With an authentication app, you seed it with a key that only you and the service know. Each successive code is generated by running a one-way cryptographic function over the key and the current time, then dropping most of the digits, so it's essentially impossible to infer the key even after seeing dozens or hundreds of codes. Typically, each code is only good for a minute or so, and/or for a single login attempt. As long as nobody else has the key, it's virtually impossible for anyone to predict the next token your authentication app will offer up. The key exists in only two places: inside the app, and on the host that authenticates your login.
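For the curious, the "one-way cryptographic function" in the common TOTP scheme (RFC 6238) is an HMAC-SHA1 over a counter of 30-second time steps, keyed with the shared seed, then truncated to 6 digits. A minimal sketch in Python (the function name `totp` is mine):

```python
import hashlib
import hmac
import struct
import time

def totp(key, for_time=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated (per RFC 4226), then reduced mod 10^digits."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # which 30-second window we're in
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890" at T=59 seconds
print(totp(b"12345678901234567890", for_time=59))    # -> 287082
```

Note how little each 6-digit code leaks: the HMAC output is 160 bits, and all but roughly 20 of them are thrown away before anything is displayed.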
For someone to impersonate your authentication app, they would have to:
a. Gain access to your phone and somehow extract the key, which probably means getting past the phone's native encryption as well as the authentication app's encryption. You do have a passphrase on your phone, right?
b. Find a leftover copy of the key. Don't leave one of these lying around for someone to find ...
c. Hack the authentication software used on the server side. If hackers can do this, then it's pretty much game over anyway.
So, basically, if you never give someone your unlocked phone (in a context where they could take a memory dump, not just casually so they can type in their number), and you don't leave copies of the key lying around, it will be very hard for someone to spoof your authenticator app.
OK, I left one attack out: someone could man-in-the-middle you and trick you into handing over your password and the latest code, which they then use to log in. That's why you should only operate over a secure channel (HTTPS or equivalent) where you are sure you're talking to a known endpoint (manually enter the URL, or at least check it in the address bar).
Some of the authentication apps I use like this require that I register the app's serial number (I guess that's something like the app's public key, except really short) with the service's backend. So I'd open the authentication app, call an IT help desk, tell them who I am, tell them my app's serial number, and then I can 2FA with the app.
Isn't this process vulnerable to social engineering? Couldn't someone feasibly impersonate me over the phone and register their own app serial number instead of mine? This seems to be a common weakness in many authentication schemes (2FA or not).
"As long as nobody else has that key, it's virtually impossible for anyone to figure out what the next token your authentication app will offer up."
According to the birthday paradox, TOTP can in fact be broken quite easily if the service does not rate-limit to defend against brute-force attacks on the 6-digit generated TOTP code. An attacker only needs to try 1000 different PINs within a minute or two (and some services allow codes after 5 minutes) to have a 50% chance of getting through.
Please educate me if I'm wrong, but I don't believe that is correct. Let me explain:
-- There are two kinds of OTP systems: time-based ones that regenerate the code every n seconds (TOTP), and counter-based ones that regenerate the code for every attempt (HOTP)
-- Consider the time-based code: each attempt has a 1-in-n chance of being correct (n = 10^6 for a 6-digit code), so your odds of matching the code depend on how many attempts you can make before the code changes. Since each window's code is independent of the last, you start over fresh every time the code changes. Your cumulative chance of breaking the code after x guesses is therefore about x/n, i.e. 1:(n/x). So you'd have to try roughly 500,000 codes to have a 50% chance of matching a 6-digit code, and the odds grow only linearly with the number of guesses.
-- For the case where the code changes on every attempt, each guess is once again a 1-in-n chance. Once the code is regenerated, all information from the old code is lost (there is no forward bias on the next number), so the next guess is again 1-in-n. Ultimately you reach the same answer: after x guesses you have roughly an x/n, or 1:(n/x), chance of matching the code.
Now, most OTP systems will accept several codes on either side of the current one, in case the tokens have gotten out of sync, so there may be as many as 5 codes that will work at a given time. That simply divides the required number of guesses by 5, meaning you have a 5x better chance at any one time of guessing a valid code.
All that said, any sane password checker should be rate-limiting attempts, and locking out a user after, say, 10 or 20 attempts.
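To put numbers on the 1:(n/x) reasoning above, here's a quick Python check of the two guessing models (illustrative arithmetic only):

```python
n = 10 ** 6          # possible 6-digit codes
x = 500_000          # total guesses the attacker gets to make

# Best case for the attacker: all x guesses are distinct and land in a
# single code window, giving exactly x/n.
p_one_window = x / n

# Guesses spread across many windows (a fresh, independent code each
# time) behave like x independent 1-in-n trials:
p_spread = 1 - (1 - 1 / n) ** x

print(p_one_window)          # 0.5
print(round(p_spread, 3))    # 0.393
```

The spread-out case comes in slightly under the linear x/n figure because independent trials can waste guesses on values already covered, but either way the success probability grows roughly linearly in the number of guesses.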
"Therefore, you'd have to try 500,000 codes to have a 50% chance of forcing a random collision on a 6-digit code, and the odds only go up linearly with the number of guesses."
There's actually a simple formula to work out the number of attempts required for a 50% chance as shown below.
For TOTP (Time-based One-time Password Algorithm):
Assuming only the latest code per current time window is allowed (and not codes on either side), the number of brute-force attempts required for a 50% chance of collision against this code is given by:
Math.pow(10,6/2) // base=10, digits=6
This gives roughly 1000 attempts for a system that only accepts the current code and not codes on either side.
I am sure there are plenty of systems out there that don't think to rate-limit 2FA codes.
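A rate limit of the kind described above doesn't take much code. A hypothetical in-memory sketch (the names and thresholds are mine; a real service would persist state and also throttle by IP):

```python
import hmac
import time

MAX_ATTEMPTS = 10     # failed codes allowed...
WINDOW = 300          # ...per 5-minute window

_failures = {}        # user -> list of failure timestamps

def check_code(user, submitted, expected):
    """Verify a 2FA code, locking the account out after too many failures."""
    now = time.time()
    recent = [t for t in _failures.get(user, []) if now - t < WINDOW]
    if len(recent) >= MAX_ATTEMPTS:
        _failures[user] = recent
        return False                              # locked out: reject unconditionally
    if hmac.compare_digest(submitted, expected):  # constant-time comparison
        _failures[user] = []                      # success clears the counter
        return True
    recent.append(now)
    _failures[user] = recent
    return False
```

With something like this in place, even an attacker who can submit codes as fast as the network allows gets at most 10 guesses per window, i.e. about a 1-in-100,000 chance per window against a 6-digit code.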
That's pretty cheap for a UK house. My 2-bed terrace was about £1200 a year; it was pretty decently insulated for what it was, but didn't have cavity walls etc.
My new 4 bed house is about £1500 a year for a family of 4, and I'm pretty tight with heating and light, so most houses would be more.
I checked that page already. I can only speak for myself, but it has no listing on the Google Permissions page, so it looks to me like it handles permissions correctly.
'The "misappropriation theory" holds that a person commits fraud "in connection with" a securities transaction, and thereby violates § 10(b) and Rule 10b-5, when he misappropriates confidential information for securities trading purposes, in breach of a duty owed to the source of the information. ... Under this theory, a fiduciary's undisclosed, self-serving use of a principal's information to purchase or sell securities, in breach of a duty of loyalty and confidentiality, defrauds the principal of the exclusive use of that information. In lieu of premising liability on a fiduciary relationship between company insider and purchaser or seller of the company's stock, the misappropriation theory premises liability on a fiduciary-turned-trader's deception of those who entrusted him with access to confidential information.'
The employees involved here broke the law by obtaining the information. However, if they had simply gone to a hedge fund without telling them where they worked, and said, "here are our predictions of these stocks, if you like the results, pay us and we'll give you more predictions," the hedge fund could have traded on it and that would not have been illegal for them to do because they had no knowledge of the actual source of the information.
The employees stealing the information would have still been liable. But had they simply accepted payment from the hedge fund in Bitcoin and not revealed their true identities to the hedge fund, they would have been much harder to find and prosecute.
"People have asked me if this is insider trading and, you know, sure it is? (If the allegations are true, I mean.) This is not "classical" insider trading -- trading or tipping by an insider at Chipotle or whatever -- but rather "misappropriation" insider trading..."
Possibly something to do with privileged information. Keep in mind that even things such as intent may get you on the wrong side of the line*
* Example: you post a trade you don't intend to execute, hoping to fool the market, then cancel it and trade in the other direction. I forget exactly what that's called (I believe the term is "spoofing").
Marketing raises awareness of these problems; even non-tech folk have heard of Heartbleed. More awareness means the software is more likely to get updated.