My hope is that in 5 years, I will not have anything in my feeds that has not been signed in a way that lets me assign a trust level.
Here in the Nordics, we are already seeing messaging apps such as [hudd] that require government-issued ID to sign in. I want this to spread to everything from podcasts and old-school journalism to the soccer-club newsletter, so that I can always connect a piece of information back to a responsible source.
So you're simply not interested in reading any random website by random people who see no benefit in establishing any form of trust, especially if it should not be connected to their official government IDs?
Or to put it differently: Where should this come from, and which issuer would you trust? And why should anyone else agree with you that this is good?
When I browse to a random site, I can see in my browser that A's first-level contact trusts this site.
Now I can make a decision based on the amount of trust I have in A. Maybe after exploring the site I can mark specific pages or the whole domain as trusted, so people in my network can see the same.
On a larger level I might trust the Country of Finland, who will only mark their official sites as Trusted. This way I instantly know if I'm on an official site or something pretending to be one.
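The browsing experience described above could be sketched roughly like this. Everything here is invented for illustration (the names, the 0-1 trust scale, the data layout); it is not a real protocol or API, just the lookup the browser would do.

```python
# Minimal sketch of the web-of-trust lookup described above, with made-up
# contacts and trust levels (hypothetical -- no real protocol implied).

# How much I trust each first-level contact, on a 0-1 scale.
my_contacts = {"A": 0.9, "B": 0.4}

# Sites each contact has marked as trusted.
endorsements = {
    "A": {"example.fi"},
    "B": {"example.fi", "sketchy.example"},
}

def site_trust(domain):
    """Return the strongest endorsement of `domain` among my first-level
    contacts, weighted by how much I trust each contact."""
    scores = [level for name, level in my_contacts.items()
              if domain in endorsements.get(name, set())]
    return max(scores, default=0.0)

print(site_trust("example.fi"))       # endorsed by A (0.9) and B (0.4) -> 0.9
print(site_trust("unknown.example"))  # nobody vouches for it -> 0.0
```

A country-level issuer like the one mentioned above would just be another entry in `my_contacts`, with a very high trust level and endorsements limited to official domains.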
Trust is subjective! Let's establish trust in each other rather than rely on one-size-fits-all solutions.
Personally, I trust my friends, family, and some public figures and institutions to varying degrees. I want to see social experiences that reflect that.
If the Bayer pattern makes you angry, I imagine it would really piss you off to realize that the whole concept of encoding an experienced color by a finite number of component colors is fundamentally species-specific and tied to the details of our specific color sensors.
To truly record an appearance without reference to the sensory system of our species, you would need to encode the full electromagnetic spectrum from each point. Even then, you would still need to decide on a cutoff for the spectrum.
...and hope that nobody ever told you about coherence phenomena.
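To make the point above concrete: an "experienced color" is just the full spectrum projected onto a small set of species-specific sensor responses. The Gaussian sensitivity curves below are invented stand-ins, not the real human cone responses or CIE color-matching functions.

```python
import math

# Toy illustration: collapse a full spectrum to three numbers by integrating
# it against three sensor-sensitivity curves. The curves are made up
# (roughly "short/medium/long" wavelength), purely for illustration.

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Wavelength samples (nm) and a flat example spectrum.
wavelengths = list(range(400, 701, 10))
spectrum = [1.0 for _ in wavelengths]

# Three invented sensor curves as (center_nm, width_nm).
sensors = [(445, 30), (545, 40), (575, 45)]

tristimulus = [
    sum(s * gaussian(w, mu, sigma) for w, s in zip(wavelengths, spectrum))
    for mu, sigma in sensors
]
print(tristimulus)  # three numbers: all the "color" we keep of the spectrum
```

Any spectrum that projects to the same three numbers looks identical to this sensor set (a metamer), which is exactly why the encoding is tied to the species doing the looking.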
Just as important here: The higher the temperature of the storage medium, the higher the fundamental limit to how much electric energy you can recover.
Put differently: If you used the same amount of energy to heat one bucket of sand by 200C (A) or two buckets of sand by 100C (B), you would be able to recover more electric energy in case A because of the fundamental Carnot limit.
This is why sand is a good storage medium (as opposed to e.g. water), and why some solar power systems work with molten salts. Also why steam-based power plants need to operate at high pressure to be able to obtain high-temperature steam.
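The bucket comparison above can be checked numerically. The maximum work recoverable from a thermal mass cooling from T_hot to ambient T0 (via ideal Carnot engines at every intermediate temperature) is W_max = mc(T_hot - T0) - T0·mc·ln(T_hot/T0). The numbers below are illustrative, with mc set to 1 kJ/K per bucket.

```python
import math

# Maximum (exergy) work recoverable from a thermal mass at T_hot cooling
# to ambient T0:
#   W_max = m*c*(T_hot - T0) - T0 * m*c * ln(T_hot / T0)
# i.e. heat in, minus the part the Carnot limit forces you to reject.

def max_work(mc, t_hot, t0):
    return mc * (t_hot - t0) - t0 * mc * math.log(t_hot / t0)

T0 = 293.0  # ambient, K

# Case A: one bucket heated by 200 C; case B: two buckets heated by 100 C.
# Both store the same heat (1*200 = 2*100 kJ), but not the same work.
work_a = max_work(mc=1.0, t_hot=T0 + 200, t0=T0)
work_b = max_work(mc=2.0, t_hot=T0 + 100, t0=T0)
print(work_a, work_b)  # case A recovers noticeably more
```

With these numbers case A yields roughly 48 kJ of recoverable work against roughly 28 kJ for case B, despite identical stored heat: the hotter store simply has more of its energy above ambient in the Carnot sense.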
Please go have a look -- this is really well done, with a clear message, good documentation, and the call to action implemented very nicely (which is the background for TFA).
I think a lot of it is covered under "New Public Management" [0], which was maybe a result of the financialization happening in the 80's [1].
And I completely agree with GP: having been in or in contact with academic research since the late 90s, there has been a very strong shift from a culture where faculty had the means for independent research and were trusted to find their own direction, to the system we have today, where a research project has much tighter oversight and reporting than most corporate projects.
A professor with a 4-5 person group will typically need two staggered pipelines of 4-5 year funding projects to run risk-free. In the EU it is virtually impossible to get funding for projects that do not involve multiple countries, so you need to set up and nurture partnerships for each project. Coordinating the application process for these consortia is a major hassle and often outsourced at a rate of 50 kEUR plus a win bonus. And you of course need to run multiple applications to make sure you get anything.
When I talked to mentors about joining academia around 2010, the most common response was "don't".
My understanding is that this is true for all the Trump handouts: otherwise the ten-year economic outlooks would have cratered. The Economist had a couple of nice analyses on this.
Of course this means that the next administration will need to start with tax increases just to get to neutral, but maybe that is a feature?
> true for all the Trump handouts: otherwise the ten-year economic outlooks would have cratered
Not just that - they're often timed to expire early into the next administration which, if Democrats win, is an instant "look how the Democrats treat the working folk!" hammer. e.g. "Tax Cuts and Jobs Act" from 2017, expiring at the end of 2025[0].
Tax bills are universally passed through the budget reconciliation process these days to overcome the filibuster (can do a budget with only 51 Senate votes). That process has many restrictions on what tax changes can do to projected revenues outside a certain window: https://www.ey.com/en_us/insights/tax/prospects-for-budget-r....
> Of course this means that the next administration will need to start with tax increases just to get to neutral, but maybe that is a feature?
Oh no.
What you have missed is the incredible end run around the spirit of the reconciliation process that the Republicans did this time around.
So, they did these tricks with the tax in Trump's first term, with tax breaks set to sunset in order to have a revenue-neutral effect over the required ten years.
This time around they needed to extend those breaks, right? So they must have had to cut spending or raise other taxes in order to do that and keep a revenue-neutral effect, right?
Ha ha, no! They convinced the CBO that the baseline for the reconciliation process this time should be whatever was in effect for the last few years. So those breaks are already baked in and don't need to be counterbalanced. It's a two-step, long-term trick for making permanent, through the reconciliation process, things that otherwise could not be.
Did you read the article? The author specifically addresses accessibility in multiple places, including taking extra steps to work around browser bugs [0].
Seems an overly pessimistic take. TFA specifically mentions per-cell temperature monitoring, and I would assume there are also per-cell voltage monitors.
As long as the controller is made sufficiently conservative, there is no fundamental problem: you limit the current according to the cell that heats the fastest and shut down once one cell is near depletion.
Maybe they have even gone a more aggressive route and built a balancing circuit that can route significant current around a low-capacity cell. Or maybe just charging logic that keeps as many cells as possible in the 20-80% regime if they will be limited by low-capacity members anyway. There are so many options here.
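The conservative controller described two paragraphs up is simple enough to sketch. All thresholds and cell data below are invented for illustration; a real BMS is far more involved.

```python
# Hedged sketch of the conservative pack-controller logic described above:
# limit pack current by the hottest cell, shut down once any cell nears
# depletion. Thresholds are hypothetical illustration values.

TEMP_DERATE_C = 45.0   # start reducing current above this cell temperature
TEMP_CUTOFF_C = 60.0   # shut down at this cell temperature
V_CUTOFF = 3.0         # stop discharge once any cell nears depletion (V)

def allowed_current(cells, i_max):
    """cells: list of (temp_C, volts) per cell. Returns the pack current
    limit, set by the hottest / most-depleted single cell."""
    hottest = max(t for t, _ in cells)
    lowest_v = min(v for _, v in cells)
    if lowest_v <= V_CUTOFF or hottest >= TEMP_CUTOFF_C:
        return 0.0  # one bad cell shuts the whole pack down
    if hottest <= TEMP_DERATE_C:
        return i_max
    # Linear derating between the two temperature thresholds.
    frac = (TEMP_CUTOFF_C - hottest) / (TEMP_CUTOFF_C - TEMP_DERATE_C)
    return i_max * frac

print(allowed_current([(30.0, 3.7), (35.0, 3.6)], i_max=10.0))  # full current
print(allowed_current([(30.0, 2.9), (35.0, 3.6)], i_max=10.0))  # depleted cell: 0
```

The point is that nothing fancy is required for safety; the per-cell sensors plus worst-cell logic like this already bound the failure modes, and balancing circuits are an optimization on top.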
> Well right now I am very skeptical, but I think we have somewhat given quantum computing plenty of time (we have given it decades) unless someone can convince me that it is not a scam.
Shor's paper on polynomial-time factoring is from 1997; the first real demonstration of quantum hardware (Monroe et al.) is from 1995. Yes, quantum has had decades -- but only barely, and it has certainly only now started to go through hardware generations.
To see the kind of progress this means, take a look at some of the recent PhD spinouts of leading research groups (Oxford Ionics etc.): there are a lot of organisations with nothing but engineering left between them and fault tolerance.
When I came back to quantum three years ago, fault tolerance was still based on the surface-code ideas that were floating around when I did my PhD ('04). Today, after everyone has started looking harder, it turns out that a bit of long-range connectivity can cut the error-correction overhead by orders of magnitude (see recent public posts by IBM Quantum): the goalposts for fault tolerance are moving in the right direction.
And this is the key thing about quantum computing: you need error correction, and you need to do it with the same error-prone hardware that you correct for. There is a threshold hardware quality that will let you do this at a reasonable overhead, and before you reach this threshold all you have is a fancy random number generator.
But yes, feel free to be a pessimist -- just remember to own it when quantum happens in a few years.
[hudd]: https://about.hudd.dk/