lottin's comments | Hacker News

Retained earnings are not taxed per se. A company pays taxes on profits. Whether the profits are distributed to shareholders or retained makes no difference whatsoever as far as taxes are concerned.


They are taxed, because they are a subset of profits, which is a taxed category. They are not taxed more than other profits, but that doesn't mean they're "not taxed".


they are not

retained earnings by definition are the accumulation of net incomes, and net income by definition is post-tax

what went into producing the retained earnings (profit) has been taxed

but the retained earnings themselves are not subject to additional taxation (with a few exceptions)


How exactly does "breaking windows" improve the lives of people?


By creating work that needs to be done, and thus forcing people to start spending.


To bring things back to the original point, there's always a way for health centers to spend money improving patient care. They could hire more nurses and give the existing ones more sleep, for example. In the context of the analogy, a broken window is diverting resources from the broken plumbing and refrigerator motor instead of creating an incentive to spend where none existed.


I can't imagine a situation in which I'd want to explain what I want to do on the command line to an LLM, instead of typing the commands myself.


Use ffmpeg to extract the audio from the first ten seconds of an mp4 file and save it as mp3.
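
For reference, one ffmpeg invocation that should do roughly this (input.mp4 and output.mp3 are placeholder names, and the quality setting is an illustrative choice):

    ffmpeg -i input.mp4 -t 10 -vn -codec:a libmp3lame -q:a 2 output.mp3

Here -t 10 limits the output to the first ten seconds, -vn drops the video stream, and -codec:a libmp3lame encodes the audio as MP3.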


I wish the scroll bar was a little less invisible.


As expected. This is why we don't use nominal dollars for measuring changes in prices over long time periods. It's meaningless.


Assets like gold are also reaching new highs in real terms, which is giving people reason to be skeptical of the adjustments made for inflation.

But really, none of it is as objective as it pretends to be.


I think it's just a meaningless sentence.


Why would a stablecoin granting yield keep the banking system from working?


The theory, at least, is that everyone would eventually be incentivized to move deposits out of the banking system and into this.

(I am not sufficiently expert here to comment on the odds of an outcome like that)


Considering that stablecoins don't pay interest to the holder, I don't know why anyone would be incentivised to move their funds into stablecoins.


USDC gets me 4% on Coinbase, and USDB and other Bridge-issued custom stablecoins also give the customer rewards that they can pass on to the holder (thanks to MMF/similar cash equivalents behind the scenes, etc.).

But yes - this is why banks want to prevent stablecoin issuers from being allowed to grant rewards.


If I deposit dollars in a savings account I will get paid interest, but that is different from the dollar itself being an interest-bearing asset. I think the same thing applies to stablecoins. Does USDC pay interest to the holder or do I have to make a USDC deposit at Coinbase in order to get paid interest? Also, banks already offer a ton of products that generate yield. I don't see why a product that seems relatively similar to many products that banks already offer would destroy their business... unless such a product is much better than what banks offer, but that doesn't seem to be the case.


>unless such a product is much better than what banks offer, but that doesn't seem to be the case.

I think you're basically correct here. I think the fear of the banks - and why they are insistent on prohibiting stablecoins from generating yield/interest (via the GENIUS Act) - is that this doesn't stay true in the long term, as stablecoins ascend as a cross-border payment/storage rail.

>Does USDC pay interest to the holder or do I have to make a USDC deposit at Coinbase in order to get paid interest?

I believe USDC from Coinbase is framed as a "reward", and is downstream of an agreement Coinbase has with Circle to get that "reward" from Circle for all USDC deposits it holds on its platform. Other "rates" you can get on centralized stablecoins tend to be similar, AFAICT.


Meanwhile a 4-week T-bill has a 4.16% coupon equivalent with almost no counterparty risk relative to the 4% USDC.

USDC should be paying more than T-bills to compensate for the counterparty risk.


In that case, wouldn't the S&P 500 or Vanguard be bigger risks to banks' existence?

I think most people think banks make money by holding your money and giving you some interest, when they actually make money by bringing money into existence out of nowhere when they issue mortgages.


I don't see why not - I'm sure the banks (or others more expert than me) would argue that stablecoins are somehow distinct in this regard, but yeah, I don't know why e.g. Vanguard wouldn't also be a credible cause of deposit flight.

(I do vaguely remember reading that banks were concerned about people moving to money-market fund products that had bank-like functionality)


The whole point of working out is to stress the organism in order to induce a physiological adaptation. Inflammation is NOT the point, but rather an unfortunate side effect.


When I have a question, I don't usually "ask" that question and expect an answer. I figure out the answer. I certainly don't ask the question to a random human.


You ask yourself... and for most people, that means an answer closer to the average reply - from yourself - when you try to figure it out.

There is a working paper from McKinnon Consulting in Canada that states directly that their definition of "General AI" is when the machine can match or exceed fifty percent of humans who are likely to be employed for a certain kind of job. It implies that low-education humans are the test for doing many routine jobs, and if the machine can beat 50% (or more) of them with some consistency, that is it.


By definition the average answer will be average; that's kind of a tautology. The point is that figuring things out is an essential intellectual skill. Figuring things out will make you smarter. Having a machine figure things out for you will make you dumber.

By the way, doing a better job than the average human is NOT a sign of intelligence. Throughout history we have invented plenty of machines that are better at certain tasks than us. None of them are intelligent.


Looking at the R code in this article, I'm having a hard time understanding the appeal of tidyverse.


For me the appeal is less that tidyverse is great and more that the R standard library is horrible. It's full of esoteric names, inconsistent use and ordering of parameters, unreasonable default behavior, and behavior that surprises you if you come from other programming experience. It's all in a couple of massive packages instead of being broken up into manageable pieces.

Tidyverse is imperfect and it feels heavy-handed and awkward to replace all the major standard library functions, but Tidyverse stuff is way more ergonomic.
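
To make the inconsistency complaint concrete, a minimal sketch (these are ordinary base R functions; grouping them into one example is mine):

    lapply(letters, toupper)           # data first, function second
    mapply(paste, letters, LETTERS)    # function first, data after
    grepl("a", letters)                # pattern first, data second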


I think the R standard library is quite excellent. It pretty much follows the Unix philosophy of "doing one thing right". The only exception is `reshape`, which tries to do too many things, but it can usually be avoided. It isn't inconsistent. I think the problem is the lack of tutorials that explain how to use all the data manipulation tools effectively, because there are quite a lot of functions and it isn't easy to figure out how to use them together to accomplish practical things. Tidyverse may be consistent with itself, but it's inconsistent with everything else. Either you only use tidyverse, or your program looks like an inconsistent mess.
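
As an illustration of composing small base R pieces for a practical task, a minimal sketch using the built-in mtcars data (not from the original comment):

    # mean mpg per cylinder count, using two single-purpose tools
    with(mtcars, tapply(mpg, cyl, mean))

    # the same summary through the formula interface, returned as a data frame
    aggregate(mpg ~ cyl, data = mtcars, FUN = mean)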


Honestly, it might partly be that I've used R somewhat irregularly and I put a lot of value in design choices that "make sense" and are easier to remember. I'm sure once you are intimately familiar with the whole base language you can be really happy and productive with it.

> I think the problem is the lack of tutorials that explain how to use all the data manipulation tools effectively, because there are quite a lot of functions and it isn't easy to figure out how to use them together to accomplish practical things.

Most languages solve this problem by not cramming so many functions into one package and by using shared design concepts to make it easier to fit them together. I don't think tutorials would solve these problems effectively, but I guess it makes sense that they affect newer users the most.

> Tidyverse may be consistent with itself, but it's inconsistent with everything else.

Yeah, totally agree and I really dislike this part.


Author here; I think I understand where you might be coming from. I find the functional nature of R combined with pipes incredibly powerful and elegant to work with.

OTOH in a pipeline, you're mutating/summarising/joining a data frame, and it's really difficult to look at it and keep track of what state the data is in. I try my best to write in a way that you understand the state of the data (hence the tables I spread throughout the post), but I do acknowledge it can be inscrutable.


A "pipe" is simply a composition of functions. Tidyverse adds a different syntax for doing function composition, using the pipe operator, which I don't particularly like. My general objection to Tidyverse is that it tries to reinvent everything but the end result is a language that is less practical and less transparent than standard R.


Can you rewrite some of those snippets in standard R w/o Tidyverse? Curious what it would look like


I didn't rewrite the whole thing. But here's the first part. It uses the `histogram` function from the lattice package.

    library(lattice)  # provides the histogram() function used below

    population_data <- data.frame(
        uniform = runif(10000, min = -20, max = 20),
        normal = rnorm(10000, mean = 0, sd = 4),
        binomial = rbinom(10000, size = 1, prob = .5),
        beta = rbeta(10000, shape1 = .9, shape2 = .5),
        exponential = rexp(10000, .4),
        chisquare = rchisq(10000, df = 2)
    )
    
    histogram(~ values|ind, stack(population_data),
              layout = c(6, 1),
              scales = list(x = list(relation="free")),
              breaks = NULL)
    
    take_random_sample_mean <- function(data, sample_size) {
        x <- sample(data, sample_size)
        c(mean = mean(x), sd = sqrt(var(x)))
    }
    
    sample_statistics <- replicate(20000, sapply(population_data, take_random_sample_mean, 60))
    
    sample_mean <- as.data.frame(t(sample_statistics["mean", , ]))
    sample_sd <- as.data.frame(t(sample_statistics["sd", , ]))
    
    histogram(sample_mean[["uniform"]])
    histogram(sample_mean[["binomial"]])
    
    histogram(~values|ind, stack(sample_mean), layout = c(6, 1),
              scales = list(x = list(relation="free")),
              breaks = NULL)


The following code essentially redoes what the code up to the first conf_interval block does there. Which one is clearer may be debatable, but it's shorter by a factor of two and faster by a factor of ten (45 seconds vs 4 for me).

    ## population_dataB: assumed to be the data frame of simulated populations
    ## (the analogue of population_data above)
    sample_size <- 60
    sample_meansB <- lapply(population_dataB, function(x){
        t(apply(replicate(20000, sample(x, sample_size)), 2,
                function(x) c(sample_mean=mean(x), sample_sd=sd(x))))
    })
    lapply(sample_meansB, head) ## check first rows

    population_data_statsB <- lapply(population_dataB, function(x)
        c(population_mean=mean(x),
          population_sd=sd(x),
          n=length(x)))
    do.call(rbind, population_data_statsB) ## stats table

    cltB <- mapply(function(s, p)
            (s[,"sample_mean"]-p["population_mean"])/(p["population_sd"]/sqrt(sample_size)),
        sample_meansB, population_data_statsB)
    head(cltB) ## check first rows

    small_sample_size <- 6
    repeated_samplesB <- lapply(population_dataB, function(x){
        t(apply(replicate(10000, sample(x, small_sample_size)), 2,
                function(x) c(sample_mean=mean(x), sample_sd=sd(x))))
    })

    conf_intervalsB <- lapply(repeated_samplesB, function(x){
        sapply(c(lower=0.025, upper=0.975), function(q){
            x[,"sample_mean"]+qnorm(q)*x[,"sample_sd"]/sqrt(small_sample_size)
        })
    })

    within_ci <- mapply(function(ci, p)
            (p["population_mean"]>ci[,"lower"]&p["population_mean"]<ci[,"upper"]),
        conf_intervalsB, population_data_statsB)
    apply(within_ci, 2, mean) ## coverage

One can do simple plots similar to the ones on that page as follows:

    par(mfrow=c(2,3), mex=0.8)
    for (d in colnames(population_dataB))
        plot(density(population_dataB[,d], bw="SJ"), main=d, ylab="", xlab="", las=1, bty="n")
    for (d in colnames(cltB))
        plot(density(cltB[,d], bw="SJ"), main=d, ylab="", xlab="", las=1, bty="n")
    for (d in colnames(cltB)) {
        qqnorm(cltB[,d], main=d, ylab="", xlab="", las=1, bty="n")
        qqline(cltB[,d], col="red")
    }


I mean, for the main simulation I would do it like this:

    set.seed(10)
    n <- 10000; samp_size <- 60
    df <- data.frame(
        uniform = runif(n, min = -20, max = 20),
        normal = rnorm(n, mean = 0, sd = 4),
        binomial = rbinom(n, size = 1, prob = .5),
        beta = rbeta(n, shape1 = .9, shape2 = .5),
        exponential = rexp(n, .4),
        chisquare = rchisq(n, df = 2)
    )
    
    sf <- function(df,samp_size){
        sdf <- df[sample.int(nrow(df),samp_size),]
        colMeans(sdf)
    }
    
    sim <- t(replicate(20000,sf(df,samp_size)))

I am old, so I do not like tidyverse either -- I can concede it is a matter of personal preference, though. (Personally I do not agree with the lattice vs ggplot comment, for example.)


Somehow, tidyverse didn't click with me. I still use it sometimes, but now I primarily use base R and data.table.


Why? The tidyverse is so readable, elegant, compositional, functional and declarative. It allows me to produce a lot more, and at higher quality, than I could without it. ggplot2 is the best visualization software, hands down, and dplyr leverages Unix's famous point-free programming style (which reduces the surface area for errors).
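
For readers who haven't seen the style being described, a minimal dplyr sketch (built-in mtcars data; the particular columns are just for illustration):

    library(dplyr)

    # each verb takes a data frame and returns one, so the steps compose
    # left to right without naming intermediate results
    mtcars |>
        filter(cyl == 6) |>
        group_by(gear) |>
        summarise(mean_mpg = mean(mpg))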


I disagree. In this example tidyverse looks convoluted compared to just using an array and apply. ggplot2 is okay but we already had lattice. Lattice does everything ggplot2 does and produces much better-looking plots IMO.
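
For comparison, the same faceted histogram written in both systems, a minimal sketch using the built-in iris data (not from the original comment):

    library(lattice)
    library(ggplot2)

    # lattice: formula interface, one panel per species
    histogram(~ Sepal.Length | Species, data = iris)

    # ggplot2: layered grammar, same layout via facet_wrap()
    ggplot(iris, aes(Sepal.Length)) +
        geom_histogram(bins = 20) +
        facet_wrap(~ Species)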


I like simplicity and I love a good base R idiom, but there's a lot less consistency in base R compared to the tidyverse (and that comes with a productivity penalty).

Lattice is really low-level. It's like doing vis with matplotlib (requires a lot of time and hair-pulling). Higher level interfaces boost productivity.


The equivalent in any other language would be an ugly, unreadable, inconsistent mess.

