Just the "fuck-you rich, I'm buying a football team for a laugh" kind of human beings. Not that Warren would necessarily buy a football team for a laugh, but that "kind".
The issue isn't that billionaires aren't human; the problem is very much that billionaires are regular petty, spiteful human beings with poor judgement, poor impulse control, and odd beliefs, combined with the utter lack of checks and balances that comes with having a billion or more.
NotAllBillionaires, sure... but it only takes a few to screw over millions of other humans on a whim.
Frankly, imho, billionaires shouldn't even exist. No one person can get that much wealth, that much power, that much influence, without losing their humanity, their decency. It's just not possible because the only way to accrue that much wealth is to do horrifically indecent things.
So, do I recognize what you're saying? Certainly. But I won't be shedding a tear of sympathy for them. I lose all sympathy for them when they step on the necks of everyday people to get where they are.
Succeeding at business does not alone make you a billionaire; that's a whole new level above "successful business owner". Most successful business owners are millionaires but not billionaires. As I said, no one becomes a billionaire without doing horrible things, because said horrible things are exactly how that much wealth ends up concentrated in a single person's hands.
Also, winning the lottery to the extent of becoming a billionaire is neither common (that's the understatement of the millennium) nor a business. It is a gamble, and a gamble millions of people lose every day because they refuse or fail to understand the sheer improbability of "getting the big one" and the sheer degree to which said gambles are stacked against the "player".
> As I said, no one becomes a billionaire without doing horrible things, because said horrible things are exactly how that much wealth ends up concentrated in a single person's hands.
Not exactly true.
Andrew Forrest became a billionaire via Fortescue Metals, leveraging the development of vast iron ore fields for sale to China. Since then he's focused on renewable energy to reduce harmful emissions in resource mining. He has skated close to some questionable activities, but he has handled them in a humane and considered way, and he's far from scum of the earth.
Gina Rinehart became a billionaire by virtue of being born to a self-made billionaire. Her father got there by mining blue asbestos and exporting lung disease across the planet, then followed up by also exploiting iron ore fields (although decades prior to Forrest). Lang Hancock (Rinehart's father) was a person of questionable values; Gina is a terrible human being with scant regard for others.
The same Andrew Forrest whose company was found to have knowingly destroyed hundreds of Australian Aboriginal sacred sites in its mining operations? Also, he's a billionaire. He may not be "scum of the earth", and maybe he's tried to do better in his latter days, but he still got horrifically rich off of everyday workers' sweat, injuries, and hardships (mining is no joke).
Besides, this philanthropy is largely just token restitution, at best. No one needs to be that wealthy to live more than comfortably. If he really wanted to help the world, he would give away enough of his wealth to no longer be a billionaire.
People vastly underestimate just how much a billion dollars is compared to a million dollars, or even 500 million dollars. A billionaire with "just" one billion could literally give away 99% of it and still have 10 million dollars left. And as of 2023 Forrest had roughly 33 times that much.
No one needs to hog that much of the world's resources. It is neither just nor equitable.
Are you comfortable blaming individuals like Forrest for the destruction that global consumption of iron, copper, and renewables brings, or would you rather 'fess up to collective responsibility?
The largest copper resource in the US currently sits on Native American sacred land, and the latest proposal for providing the rare earth elements essential for modern lifestyles would disrupt a river system spanning a land area similar in size to Texas.
Do you wish to blame Forrest for these things, or the end customers and their demands?
NB: I've things to attend to now, I'll be back in some hours if you've an interest in all this.
It is our collective society's fault, yes, but the billionaires are the ones who exploit it. They are just as bad, if not worse.
Also, apologies, but I edited my above comment, and wasn't able to submit it before you replied.
And no worries. Good luck on your things. Honestly, I'm kinda done with this conversation, as interesting as it has been. It feels like it's run its course.
It's a pity you bailed, no drama - it's an area of long-term interest to me, and from the look of your comment you've never worked in mining, you've assumed Forrest never has, nor worked the land, and you took an ankle-deep search for "bad things about Forrest".
The interesting thing about Forrest is that he grew up on Aboriginal land, side by side with Aboriginal people who themselves have deeply divided views about their past and their future - Forrest has gone well out of his way to provide jobs and education for native people and to sit down at length and discuss deeply contentious issues.
In a domain rife with trolley problems he's been considerably better than most, still with unavoidable warts, and hasn't blown up and destroyed anything on the order of that which Rio Tinto and US Gas companies have.
If you lack any on-the-ground local context and knowledge there's no shortage of bad press about Forrest; he gets no end of it from the likes of Gina Rinehart, Clive Palmer, and other resource billionaires who despise him for turning much of his wealth to a greater good (an area of debate, of course) and suggesting that others do the same.
I've known both her and Forrest pretty much my entire life; her land is just to the north of where Forrest is operating, and she is dealing with many issues - some of which are touched upon here: https://www.youtube.com/watch?v=Lt6Hmp9ndkI
(Mainly about Canada, but comes back to touch upon Jill's 50,000-year-strong family art collection.)
> It is our collective society's fault, yes, but the billionaires are the ones who exploit it.
I'd be interested in your suggestions for how to do better.
Bear in mind that if individual billionaires were not operating here, the demand for resources would still exist and would be met by corporations (e.g. Rio Tinto) who would chew through the landscape just as you claim others do: https://antar.org.au/issues/cultural-heritage/the-destructio...
I mean Masa will make you a billionaire if you just have a shit business idea you’re enthusiastic enough about, no need to be a terrible person.
Compared to the number of billionaires, there are also relatively many lottery jackpots that will get you there if you just stick your winnings in an index fund.
Not to mention that there's a decent number of people who become billionaires by just working in relatively boring "normal" businesses like real estate development, where some luck, good decisions, and leveraging bank loans will get you there without having to be a slumlord or do anything terrible.
Abstract: "We introduce an Invertible Symbolic Regression (ISR) method. It is a machine learning technique that generates analytical relationships between inputs and outputs of a given dataset via invertible maps (or architectures). The proposed ISR method naturally combines the principles of Invertible Neural Networks (INNs) and Equation Learner (EQL), a neural network-based symbolic architecture for function learning. In particular, we transform the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic invertible architecture that allows for efficient gradient-based learning. The proposed ISR framework also relies on sparsity promoting regularization, allowing the discovery of concise and interpretable invertible expressions. We show that ISR can serve as a (symbolic) normalizing flow for density estimation tasks. Furthermore, we highlight its practical applicability in solving inverse problems, including a benchmark inverse kinematics problem, and notably, a geoacoustic inversion problem in oceanography aimed at inferring posterior distributions of underlying seabed parameters from acoustic signals."
Abstract: "Symbolic equations are at the core of scientific discovery. The task of discovering the underlying equation from a set of input-output pairs is called symbolic regression. Traditionally, symbolic regression methods use hand-designed strategies that do not improve with experience. In this paper, we introduce the first symbolic regression method that leverages large scale pre-training. We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output-pairs. At test time, we query the model on a new set of points and use its output to guide the search for the equation. We show empirically that this approach can re-discover a set of well-known physical equations, and that it improves over time with more data and compute."
Abstract: "In recent years, self-attention has become the dominant paradigm for sequence modeling in a variety of domains. However, in domains with very long sequence lengths the O(T^2) memory and O(T^2H) compute costs can make using transformers infeasible. Motivated by problems in malware detection, where sequence lengths of T≥100,000 are a roadblock to deep learning, we re-cast self-attention using the neuro-symbolic approach of Holographic Reduced Representations (HRR). In doing so we perform the same high-level strategy of the standard self-attention: a set of queries matching against a set of keys, and returning a weighted response of the values for each key. Implemented as a “Hrrformer” we obtain several benefits including O(THlogH) time complexity, O(TH)
space complexity, and convergence in 10× fewer epochs. Nevertheless, the Hrrformer achieves near state-of-the-art accuracy on LRA benchmarks and we are able to learn with just a single layer. Combined, these benefits make our Hrrformer the first viable Transformer for such long malware classification sequences and up to 280× faster to train on the Long Range Arena benchmark."
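The HRR primitives the abstract leans on are binding via circular convolution and unbinding via circular correlation, both computable in O(H log H) with the FFT. A small generic sketch (not the Hrrformer implementation itself):

    import numpy as np

    def bind(a, b):
        """Circular convolution: associates vector a with vector b."""
        return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=a.shape[-1])

    def unbind(c, a):
        """Circular correlation: approximately recovers b from bind(a, b)."""
        return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(c), n=a.shape[-1])

    rng = np.random.default_rng(0)
    H = 1024
    a, b = rng.standard_normal((2, H)) / np.sqrt(H)   # near-unit-norm random vectors
    c = bind(a, b)
    b_hat = unbind(c, a)
    # Recovery is approximate but strongly correlated with the original b.
    print(float(b @ b_hat / (np.linalg.norm(b) * np.linalg.norm(b_hat))))

Replacing the query-key-value matching of attention with bind/unbind in this superposition space is what removes the T^2 term from the cost.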
Abstract: "Contemporary large models often exhibit behaviors suggesting the presence of low-level primitives that compose into modules with richer functionality, but these fundamental building blocks remain poorly understood. We investigate this compositional structure in linear layers by asking: can we identify/synthesize linear transformations from a minimal set of geometric primitives? Using Clifford algebra, we show that linear layers can be expressed as compositions of bivectors -- geometric objects encoding oriented planes -- and introduce a differentiable algorithm that decomposes them into products of rotors. This construction uses only O(log^2 d) parameters, versus O(d^2) required by dense matrices. Applied to the key, query, and value projections in LLM attention layers, our rotor-based layers match the performance of strong baselines such as block-Hadamard and low-rank approximations. Our findings provide an algebraic perspective on how these geometric primitives can compose into higher-level functions within deep models."
Abstract: "Current automated systems have crucial limitations that need to
be addressed before artificial intelligence can reach human-like levels and bring
new technological revolutions. Among others, our societies still lack level-5 self driving cars, domestic robots, and virtual assistants that learn reliable world models, reason, and plan complex action sequences. In these notes, we summarize the main ideas behind the architecture of autonomous intelligence of the future proposed by Yann LeCun. In particular, we introduce energy-based and latent variable models and combine their advantages in the building block of LeCun’s proposal, that is, in the hierarchical joint-embedding predictive architecture."
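A bare-bones sketch of the joint-embedding predictive idea the notes build on: encode an observation x and a target y, predict y's embedding from x's embedding plus a latent variable z, and define the energy as the distance between prediction and target embedding. All names, shapes, and the brute-force latent search below are illustrative only, not LeCun's proposed training procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_emb, d_z = 16, 8, 2
    Wx = rng.standard_normal((d_in, d_emb)) * 0.1           # encoder for x
    Wy = rng.standard_normal((d_in, d_emb)) * 0.1           # encoder for y
    Wp = rng.standard_normal((d_emb + d_z, d_emb)) * 0.1    # latent-conditioned predictor

    def energy(x, y, z):
        sx, sy = np.tanh(x @ Wx), np.tanh(y @ Wy)           # embeddings
        pred = np.tanh(np.concatenate([sx, z]) @ Wp)        # prediction of y's embedding
        return float(np.sum((pred - sy) ** 2))              # low energy = compatible pair

    # Inference picks the latent that best explains (x, y), i.e. minimizes the energy.
    x, y = rng.standard_normal((2, d_in))
    zs = rng.standard_normal((100, d_z))
    best = min(zs, key=lambda z: energy(x, y, z))
    print(energy(x, y, best))

The hierarchical version stacks such predictors at several time scales; the energy-based framing is what lets the model handle multiple plausible futures through the latent z.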
Abstract: "Associative Memories like the famous Hopfield Networks are elegant models for describing fully recurrent neural networks whose fundamental job is to store and retrieve information. In the past few years they experienced a surge of interest due to novel theoretical results pertaining to their information storage capabilities, and their relationship with SOTA AI architectures, such as Transformers and Diffusion Models. These connections open up possibilities for interpreting the computation of traditional AI networks through the theoretical lens of Associative Memories. Additionally, novel Lagrangian formulations of these networks make it possible to design powerful distributed models that learn useful representations and inform the design of novel architectures. This tutorial provides an approachable introduction to Associative Memories, emphasizing the modern language and methods used in this area of research, with practical hands-on mathematical derivations and coding notebooks."
"Abstract
A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models that smoothly interpolates between two limiting cases can be constructed. One limit is referred to as the feature-matching mode of pattern recognition, and the other one as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes logistics, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions - the higher rectified polynomials which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set."
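A compact sketch of the dense associative memory described here, under illustrative parameter choices: the energy sums a rectified polynomial F of the overlaps between the state and each stored pattern, and retrieval simply flips spins whenever doing so lowers the energy. This greedy-descent formulation is equivalent in spirit to, but simpler than, the paper's exact asynchronous update rule.

    import numpy as np

    rng = np.random.default_rng(1)
    N, P, n = 64, 40, 3                                 # 40 patterns in 64 neurons
    patterns = rng.choice([-1, 1], size=(P, N))

    def F(x):
        return np.where(x > 0, x, 0.0) ** n             # rectified polynomial

    def energy(state):
        return -np.sum(F(patterns @ state))

    def recall(state, sweeps=3):
        state = state.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                flipped = state.copy(); flipped[i] = -flipped[i]
                if energy(flipped) < energy(state):      # greedy energy descent
                    state = flipped
        return state

    noisy = patterns[0].copy(); noisy[:10] *= -1         # corrupt 10 of 64 bits
    print(np.mean(recall(noisy) == patterns[0]))         # 40 patterns >> the ~0.14*N classic limit

Raising the degree n sharpens the energy landscape around each stored pattern, which is what pushes the capacity far beyond the quadratic (classic Hopfield) case.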
Abstract: Recently, deep neural nets have shown amazing results in such fields as computer vision, natural language processing, etc. To build such networks, we usually use layers from a relatively small dictionary of available modules (fully-connected, convolutional, recurrent, etc.). Being restricted to this set of modules complicates transferring the technology to new tasks. On the other hand, many important applications already have a long history and successful algorithmic solutions. Is it possible to use existing methods to construct better networks? In this talk, we will cover several ways of putting algorithms into networks and discuss their pros and cons. Specifically, we will touch on using optimization algorithms as structured pooling, unrolling of algorithm iterations into network layers, and direct differentiation of the output w.r.t. the input. We will illustrate these approaches on applications from structured-output prediction and computer vision.
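A toy illustration of "unrolling algorithm iterations into network layers": each layer performs one gradient step of a least-squares objective, and the per-layer step sizes are the parameters one would learn. Purely illustrative of the talk's theme, not any specific system mentioned in it.

    import numpy as np

    def unrolled_solver(A, y, step_sizes, x0=None):
        """K layers == K gradient steps on ||Ax - y||^2; step_sizes are the weights."""
        x = np.zeros(A.shape[1]) if x0 is None else x0
        for alpha in step_sizes:                    # one "layer" per iteration
            x = x - alpha * (A.T @ (A @ x - y))     # gradient step as a layer
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 10))
    x_true = rng.standard_normal(10)
    y = A @ x_true
    # Fifty unrolled "layers" with hand-set step sizes already approximate the solution;
    # training would tune these numbers (and possibly per-layer matrices) end to end.
    x_hat = unrolled_solver(A, y, step_sizes=[0.02] * 50)
    print(np.linalg.norm(x_hat - x_true))

Because the whole loop is differentiable, the unrolled solver can be dropped into a larger network and trained with backpropagation, which is the point of the talk.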
Abstract: Tomographic Synthetic Aperture Radar (TomoSAR) building object height inversion is a sparse reconstruction problem that uses data obtained from several spacecraft passes to invert the scatterer position in the height direction. In practical applications the number of passes is often small and, due to objective conditions, the observation data are also limited, so this study focuses on inversion under restricted observation data. The Analytic Learned Iterative Shrinkage Thresholding Algorithm (ALISTA) is a deep unfolding network algorithm that combines the Iterative Shrinkage Thresholding Algorithm (ISTA) with deep learning and has the advantages of both; it is one of the representative algorithms for TomoSAR building object height inversion. However, the structure of ALISTA is simple: it has neither the rich inter-layer connection structure of a deep network nor an accelerated iteration format combined with the ISTA algorithm. This study therefore proposes two directions of improvement for ALISTA: first, improving the inter-layer connections of the network by introducing residual-style connections, which yields the Extragradient Analytic Learned Iterative Shrinkage Thresholding Algorithm (EALISTA), for which linear convergence is further proven; and second, improving the intra-layer iteration format by introducing Nesterov momentum acceleration, which yields the Fast Analytic Learned Iterative Shrinkage Thresholding Algorithm (FALISTA). We first performed inversion experiments on simulated data, which verified the effectiveness of the two proposed algorithms. We then conducted TomoSAR building object height inversion experiments on limited measured data and used the deviation metric P to measure the robustness of the algorithms under restricted observation data. The results show that both proposed algorithms are more robust, which verifies their superior performance. In addition, we analyze how to choose the most suitable algorithm for inversion in engineering practice based on the results of the measured-data experiments.
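A generic sparse-recovery sketch of the two ingredients discussed above: the ISTA-style shrinkage iteration and a Nesterov-style momentum (extrapolation) step of the kind FALISTA adds; this is essentially FISTA. The analytic weight matrices, layer-wise learned thresholds, and the TomoSAR measurement model itself are not reproduced here, and all sizes are illustrative.

    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista_momentum(A, y, lam=0.05, iters=200):
        """min_x 0.5*||Ax - y||^2 + lam*||x||_1 with Nesterov-style acceleration."""
        L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(iters):
            x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)    # momentum / extrapolation step
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 100)) / np.sqrt(30)       # underdetermined system
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
    y = A @ x_true
    print(np.linalg.norm(ista_momentum(A, y) - x_true))    # sparse "scatterers" recovered approximately

Unfolding a fixed number of these iterations into layers and learning the thresholds (and, in ALISTA, keeping the analytically derived weight matrix) is what turns the iterative solver into the network described in the abstract.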
That reminded me of Warren Buffett asking for his kind to be taxed more.