
> To make this enforcement cost effective, it needs to recover more than the $80 Billion budget increase annually, no?

Improved enforcement can deter tax avoidance in addition to directly recovering money.


Does it?

If you are avoiding taxes on purpose, it might still be cheaper to wait for an audit and see what they decide you need to pay.

In any event, recovering and/or deterring $80+ Billion annually is a long way to go from where we are today.

If we do recover $80+ Billion annually, what have we accomplished? Funding the federal government for an additional 6 hours?


This is, of course, a major philosophical debate. I will just say that there could be reasons this is the government's business.

One issue with modern food production for consumers is that it can be difficult to determine whether a particular ingredient is safe or healthy. Information has public good qualities, and having each consumer make an individual determination about whether each ingredient is safe/healthy is costly. The obvious way to justify government intervention here is that it's more efficient for us to pool resources and empower a third-party to produce the information we need and ban/label products that are likely unsafe. The decisions won't always be right, but the approach might be much preferred to the alternatives.

Plus, conditional on socializing a lot of medical costs, it might make sense to put restrictions on behaviors that affect individual health.


So to summarize:

* Your body, not your choice

* If a person puts a substance into their body that the government doesn't like? Incarcerate them? Fine them, taking away their means of buying more food?


Great question. The answer might be "not that soon," and the effects will be confusing. Search "Solow paradox," "productivity paradox," and "IT productivity paradox," and you'll encounter a long debate in economics and management research about the role of computers and ICT spending in productivity.


Great overview! I think 1Password's Linux support has been improving [0]. I use 1Password with an Ubuntu desktop and have been happy with it.

[0]: https://support.1password.com/explore/linux/


It’s hardly working at all under Wayland. Copying to clipboard has been broken for at least 18 months. AgileBits doesn’t seem to care. [0]

There are also sync issues (items created in the desktop app won’t appear in the browser extension unless I restart my browser), which don’t occur under Windows or macOS.

“Poor” Linux support absolutely does the situation justice.

[0]: https://1password.community/discussion/comment/667970



Thanks for providing the detailed comparison of the many password managers. I think it's more accurate to describe 1Password's CLI as "yes" rather than "yes?poor", so I submitted a PR for consideration: https://github.com/Soft-wa-re/password-manager-comparer/pull...


One thing I wish Bitwarden did is conditional usernames per URI.

I have some internal tools at work where you need to specify the domain, and some where you don't. Having two separate entries for these scenarios is annoying, as I have to update the password in both when I change it.


Agreed. The Linux desktop is absolutely fine for me.


The goal is to estimate the proportion of accounts that are bots. Let that number equal p. The variance of a single yes/no observation is p(1-p). The highest that can be is (0.5)(0.5) = 0.25, and the standard deviation, its square root, is 0.5.

Now, we want to know the standard error for our estimate of the bot proportion. That is sqrt(p(1-p)/n). Suppose 50% of accounts are bots (I assume that would be very high), then our estimate of p would be 0.5 and our standard error with a sample of 100 would be 0.05. Hence, our 95% confidence interval is roughly 0.4–0.6 in the worst case (with a sample of 100).

If the proportion is under 0.1 (let's assume 0.05), then the standard error would be sqrt(0.05(1-0.05)/100) = 0.022. Our 95% confidence interval in this case would be roughly 0.01–0.09.

These seem like large ranges to me. Hence, I would expect them to use a larger sample too.
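
For anyone who wants to check the arithmetic, here's a minimal sketch, assuming the usual normal approximation to the binomial (the helper name proportion_ci is mine, not from any source):

    import math

    def proportion_ci(p, n, z=1.96):
        """Standard error and approximate 95% CI for a sample proportion."""
        se = math.sqrt(p * (1 - p) / n)  # SE of a proportion: sqrt(p(1-p)/n)
        return se, (max(0.0, p - z * se), min(1.0, p + z * se))

    # The two hypothetical cases from above, both with n = 100
    for p in (0.5, 0.05):
        se, (lo, hi) = proportion_ci(p, 100)
        print(f"p = {p}: SE = {se:.3f}, 95% CI ≈ ({lo:.2f}, {hi:.2f})")
    # p = 0.5:  SE = 0.050, 95% CI ≈ (0.40, 0.60)
    # p = 0.05: SE = 0.022, 95% CI ≈ (0.01, 0.09)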


I recently had Intuit delete all my personal data, across all their products. Doing this was exhausting. Their online process for deleting an account did not work. Contacting customer support resulted in unhelpful back and forth. Finally, I gave up and filed a complaint with the CA Attorney General. Within two weeks, I got a call from someone at Intuit who quickly resolved the issue and promised to delete all my Intuit-owned accounts (it seems to have worked).

I tell the story because I think it illustrates a lot of reasons companies can get away with bad behavior:

1. Only a subset of consumers recognize the bad behavior and know who to contact.

2. Most consumers are not motivated to complain. In the above story, most people would probably stop when the Intuit website doesn't work (I usually give up too). There is a free-rider problem. My complaint could benefit lots of people if a company changes its behavior, but I alone incur the cost of complaining.

3. When a consumer successfully complains, companies can sometimes quietly make the problem go away for that one consumer and avoid regulatory action. Intuit called me, resolved the issue only for me, and we both moved on. With the issue resolved, the regulator has less reason to continue investigating.

4. Even when regulators get enough complaints and go after bad behavior, they are up against powerful attorneys and lobbyists. And even if the regulators win, the company probably has lots of substitute ways of achieving the same goals that weren't contemplated or prohibited by the settlement/law/etc.


I can confirm that the online process for deleting an account still does not work! It gets into an endless loop, sending repeated validation emails. (Firefox with all blockers disabled.)


That's not the definition of human subjects research. Not everything that involves a human responding to questions is human subjects research. A lot of comments in this thread are uninformed about the relevant definitions.

You can think the study is poorly designed, unethical, etc. But if they're not obtaining data about a living individual, then it's not human subjects research.


You appear to be the one uninformed about the definition of human subject research.

>But if they're not obtaining data about a living individual, then it's not human subjects research.

Sure, if you completely ignore the other half of the definition, per the NIH:

https://grants.nih.gov/policy/humansubjects/research.htm

> Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens;

They are absolutely collecting information to analyze by interacting with the individual.


This is incorrect. It's only human subjects research if the researcher is obtaining data about a human. This is the "about whom" requirement. A classic example is calling a business and asking someone about the products and prices they offer. That's not human subjects research.


If you say "I am a researcher studying X, can you please answer the following questions" then you might be studying a "what", depending on the specific questions.

When you lie about who you are and what your purposes are, and use scary legal language in an attempt to elicit a response, that is absolutely human research. You may be able to do those things ethically as a scientist, but you absolutely need IRB review because it is definitely human research.

My guess is that the IRB in this case was not informed of the deceptive nature of some of the emails, as lying is absolutely a red flag that you are doing human research and not just information gathering. Indeed, evaluating such lies for potential harm is an important part of why we have IRBs for psychological and sociological research.


You realize that the internal regulation is wrong, right?

Like, the semantic distinction doesn’t matter because nobody gives a fuck about Princeton’s organizational policy.


This is not Princeton's organizational policy or internal regulation; this is the regulatory definition of human subjects research as set by the government. Its semantic interpretation, including the "about whom" requirement, is exactly how you go about determining whether your research is human subjects research.


In that case I care, and would say that’s an inadequate way to prevent trolling by researchers.


Any research which involves human subjects is human subject research.

Nobody can disagree with that.


The US federal government does, as do many western governments. Research that involves humans usually comes under many delineations and sub-delineations with precise names that reflect specific ways in which the research takes place and the corresponding laws and regulations which the researchers must follow.

Determining which category a specific research project comes under usually involves checking specific criteria; in the US they have flowcharts, in Europe they have tables. Either way, you can be sure a lot of people are going to be looking at it, most of whom have had to undertake ethics training as part of their career, and some of whom have spent their entire lives studying these questions and seen them put to the test over hundreds of trials.

In that light, whether this category of research has got "human" in its name is not going to get you far wrt understanding the problem at hand.

source: I've undertaken interventionist medical research in the U.S. and Europe.


Who is the subject of the emails sent to personal domains?


Not sure I follow your question.

An example of something that's not human subjects research would be emailing people who have websites and asking about their privacy policy.

An example of something that is human subjects research would be emailing people who have websites and asking what inspired them to start a website.

I realize that may seem like a subtle difference, but it's an important distinction from an IRB perspective. For reference, and because a lot of people seem confused on this thread, here's what the human subjects research training at my university says about this...

"...some research that involves interactions with people does not meet the regulatory definition of research with human subjects because the focus of the investigation is not the opinions, characteristics, or behavior of the individual. In other words, the information being elicited is not about the individual ('whom'), but rather is about 'what.' For example, if a researcher calls the director of a shelter for battered women and asks her for the average length of stay of the women who use the shelter, that inquiry would not meet the definition of research with human subjects because the information requested is not 'about' the director. If the researcher interviewed the director about her training, experience, and how she defines the problem of battering, then the inquiry becomes about her - and therefore 'about whom.'"


You misunderstand the research in question. To quote from the researchers’ website:

> When the system has even higher confidence, it sends up to several emails that simulate real user inquiries about GDPR or CCPA processes. This research method is analogous to the audit and “secret shopper” methods that are common in academic research, enabling realistic evaluation of business practices. Simulating user inquiries also enables the study to better understand how websites respond to users from different locations.

They are not just asking for the existing privacy policy, they are actively attempting to put the subjects into a realistic environment and seeing how they respond. The focus is the behavior of the individual. This should also be evident from the fact that they felt the need to lie to and threaten them...

https://privacystudy.cs.princeton.edu/


He understands perfectly well. What's relevant is whether the response is a property of the individual or the organization, and it's arguable, and controversial, but you'll find a lot of studies performed using this technique that were not considered human subjects research.

As to whether it's deceptive and threatening (the latter of which I find pretty hyperbolic; this is a pretty boilerplate request), that has no relevance as to whether it's human subjects research.

Maybe they should have limited the scope to larger organizations.


Someone looking up the exact statute and quoting it, while not a direct legal threat, certainly carries a lot of implied threats. People don't just look up legal statutes for shits and giggles.


I don't buy that interpretation. That is, I'm willing to believe that's how your university interprets the regulations, but I personally think it's perverse and unethical when applied to this situation.

When you deliberately deceive someone in order to obtain information that you think they would be otherwise unwilling to give you, the response you get back is as much "about" their behavior in response to your deception as it is about the subject of your inquiry. (And if the researchers in this case didn't think the deception would make their targets more willing to cooperate, why the threatening language?)

That doesn't necessarily mean this kind of research should never be allowed, but it should definitely go through an IRB's oversight.


> An example of something that's not human subjects research would be emailing people who have websites and asking about their privacy policy.

No, that's an example of human subjects research that may be exempt from the regulations for specific reasons, such as only interacting with subjects through surveys and interviews (while adhering to further restrictions, which this research probably runs afoul of since it's not anonymous).

> For example, if a researcher calls the director of a shelter for battered women and asks her for the average length of stay of the women who use the shelter, that inquiry would not meet the definition of research with human subjects because the information requested is not 'about' the director.

What a terrible example. They've only demonstrated that the director does not qualify as a human subject, while ignoring the question of whether the women staying at the shelter would qualify as human subjects!


Epistemologically, using a fake name and a threat of legal action to elicit a response from whoever's picking up the phone is no different from dressing up as a cop and harassing someone on the street. The question of whether the content of your accusation stems from their own or their employer's action is peanuts compared to the ethical boundary you crossed when you decided to impersonate authority to witness their reaction.


This feels like it creates a massive ethical loophole.

There are different ways to gather pure factual information, too. In particular where the factual information you are trying to gather is information about the extent to which someone complies with the law, there's some real danger in being able to fall back on a 'we're just gathering facts' defense.

Take this example: "a researcher calls the director of a shelter for battered women and asks her for the average length of stay of the women who use the shelter"

What are the regulatory requirements shelters need to comply with? Do any of them concern length of stay? Are there any liabilities a shelter might expose itself to if it were known that it had women staying there for longer than a certain period? Or individual liabilities if it were discovered that they restrict how long people can stay? Would they potentially expose any of their clients to danger if the length of stay information were revealed to a particular person?

If so, then providing the answer to that question is something the shelter needs to give some thought to. And the manner of their response might be different if that question were posed to them by:

- a woman enquiring about staying at the shelter

- a government inspector

- their landlord

- a random man phoning them

- a journalist

- an academic researcher identifying themselves and the nature of the study they are conducting

So if as an academic you ask a 'just gathering information' question, but conceal your identity, don't share whether the information will be aggregated or identifiable, and don't explain what you're gathering the information for, you are not just collecting a fact - you are forcing the person you are asking to make an evaluation of what information to provide; in other words, you are creating a human behavior, and what you are studying will be the outcome of that.


I think one problem is that with small websites run by a single person or small group, a person can feel the website is an extension of herself. So a question about the website in some way becomes a question about the person.


More critically, it may actually be an extension of themselves in terms of legal liability.


My high school history teacher recommended his book to me when I was a junior. I loved it, and even wrote to the author regarding a mistake I thought I'd found in the book (pretty arrogant of me). Loewen replied generously. I don't remember the exact exchange, but he was so encouraging. It meant a lot to me in high school that he replied.


Kudos to your teacher for recommending you the book (personally, because of your unmanageable curiosity :-), or generally to the class?). I wish my teachers had tried drowning teenage me in good, challenging books.

Kudos for finding the courage to write to an author you admired: putting your words on paper, letting someone who wrote something know that they reached you. Never mind that your motives might not have been noble (arrogance is a good phase to live through as a teenager; we need to learn that we humans are a crafty, amazing bunch and to celebrate it, and we need to be taught hubris concretely, its positive and negative sides, etc.).

Kudos to him for writing you back with encouraging words. He probably got a good chuckle or two, and took the time, in a way, to show you that you'd touched him.

Wholesome human story, thanks for sharing.


My kid, in his first year in a US school, apparently asked the history teacher in class, "Wait, didn't XXX happen in the fashion YYY?"* and was told it wasn't taught that way. I asked her about this in the parent meeting and she said indeed, what he had learned was correct but she needed to teach things in the approved way so that the students would pass the standardized tests.

This isn't meant to imply the US is in some way particularly evil about this -- all countries have some sort of origin myths. This is more a comment on the effect of state level standardized testing.

* actual issue wasn't race-related so I elided it.


I got this email and started to transfer my library only to learn it requires also creating a YouTube channel. The message says...

> On YouTube Music, playlists are stored on your channel. To continue transferring your Google Play Music playlists, create your channel now.

> All your Google services will display your channel name. Learn more

Huh? I don't really know what a YouTube channel is and probably don't want one. Does this mean my playlists will be public? What does the second part about all services displaying my channel name even mean? Clicking "Learn more" does not help. I'm having a hard time figuring out the privacy and other implications of transferring my music library. Why does listening to the music I uploaded have to impact all my other Google services? I don't want a YouTube channel, I don't want to be social, and I don't want other people seeing the music I listen to.

