jofer's comments

What really worries me is that I keep hearing "cooling is cheap and easy in space!" in a lot of these conversations, and it couldn't be further from the truth. Cooling is _really_ hard in space: you can't use efficient advection-based approaches (moving air or water) and are limited to dramatically less efficient radiative cooling. It doesn't matter that space is cold, because cooling is damned hard in a vacuum.

The article makes this point, but it's relatively far in and I felt it was worth making again.

With that said, my employer now appears to be in this business, so I guess if there's money there, we can build the satellites. (Note: opinions my own) I just don't see how it makes sense from a practical technical perspective.

Space is a much harder place to run datacenters.


Yeah, I don't see a way to get around the fact that space is a fabulous insulator. That's precisely how expensive insulated drink containers work so well.

If it were just about cooling and power availability, you'd think people would be running giant solar+compute barges in international waters, but nobody is doing that. Not even the "seasteading" guys from the last decade.

These proposals, if serious, are just to avoid planning permission and land ownership difficulties. If unserious, it's simply to get attention. And we're talking about it, aren't we?


You should read the linked article; they talk about it there. You radiate the heat into space, which takes less surface area than the solar panels, so you can just have them back to back.

In general I don't understand this line of thinking. This would be such a basic problem to miss, so my first instinct would be to just look up what solution other people propose. It is very easy to find this online.


Please have a look at how real stations like ISS handle the problem and do not trust in should-work science fiction. It's hard. https://en.wikipedia.org/wiki/International_Space_Station#Po...

Comparing against a system which was conceptualized about a quarter of a century ago and serves much different needs than a datacenter in space would (e.g. a very strict thermal band, versus an acceptable temperature range of 20 to 80 degrees) isn't ideal.

The physics is quite simple and you can definitely make it work out. The Stefan-Boltzmann law works in your favor the higher you can push your temperatures.

If anything, an orbital datacenter could be a slightly easier case. Ideally it will be in an orbit which always sees the sun. Most other satellites need to pass through the Earth's shadow from time to time, making heaters as well as radiators necessary.
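
For a feel of how strongly that works in your favor, a quick sketch (plain Stefan-Boltzmann, assuming emissivity 1 and ignoring any background temperature):

  # Radiated flux per the Stefan-Boltzmann law: P/A = sigma * T^4
  SIGMA = 5.67e-8  # W/(m^2 K^4)
  for t in (300, 400, 600):  # temperatures in Kelvin
      print(t, "K ->", round(SIGMA * t**4), "W/m^2")
  # 300 K ->  459 W/m^2
  # 400 K -> 1452 W/m^2
  # 600 K -> 7348 W/m^2

Doubling the absolute temperature buys you 16x the heat shed per unit of radiator area.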


These data centers are solar powered, right? So if they are absorbing 100% of the energy on their sun side, by default they'll be able to heat up as much as an object left in the sun, which I assume isn't very hot compared to what they are taking in. How do they crank their temperature up so as to get the Stefan Boltzmann law working in their favor?

I suppose one could get some sub part of the whole satellite to a higher temperature so as to radiate heat efficiently, but that would itself take power, the power required to concentrate heat which naturally/thermodynamically prefers to stay spread out. How much power does that take? I have no idea.


σ is such a small number in the Stefan-Boltzmann law that it makes no difference at all until your radiators get hot enough to start melting.

You not only need absolutely huge radiators for a space data centre, you need an active cooling/pumping system to make sure the heat is evenly distributed across them.

I'm fairly sure no one has built a kilometer-sized fridge radiator before, especially not in space.

You can't just stick some big metal fins on a box and call it a day.


Out of curiosity, I plugged in the numbers. I have solar at home, and a 2 m^2 panel makes about 500 W. I assume the one in orbit will be a bit more efficient without atmosphere and a bit more fancy, making it generate 750 W.

If we run the radiators at 80 C (a reasonable temp for silicon), that's about 350 K. Assuming the surroundings are at 0 K, the radiator can radiate away about 1500 W, so roughly double.

Depending on what percentage of time we spend in sunlight (it depends on orbit, but the number's between 50% and 100%, with 66% a good estimate for LEO), we can reduce the radiator surface area by that amount.

So a LEO satellite in a decaying orbit (designed to crash back onto the Earth after 3 years, or one GPU generation) could work technically with 33% of the solar panel area dedicated to cooling.

Realistically, I'd say solar panels are so cheap that it'd make more sense to create a huge solar park in Africa and accept the much lower duty cycle (33% from 8 hours of sunlight, versus the 66% of LEO), as the rest of the infrastructure is insanely more trivial.

But it's fun to think about these things.


This argument assumes that you only need to radiate away the energy that the panel actively turns into electricity, but you also need to dissipate all the excess heat that wasn't converted. The solar bolometric flux at the Earth is 1300 W/m^2, or 2600 W for 2 m^2. That works out to an efficiency of ~20% for your home solar, and your assumed value of 750 W yields an efficiency of ~30%, which is reasonable for space-rated solar. But assuming an overall albedo of ~5%, that means you were only accounting for about a third of the total energy that needs to be radiated.

Put another way, 2 m^2 intercepts 2600 W of solar power but only radiates ~1700 W at 350 K, which means it needs to run at a higher temperature, roughly 116 Celsius, to achieve equilibrium.
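
A minimal sketch of that equilibrium calculation (assuming absorptivity and emissivity of 1, radiating from one side only):

  # Solve sigma * T^4 = absorbed flux for the equilibrium temperature
  SIGMA = 5.67e-8        # W/(m^2 K^4)
  absorbed = 2600 / 2.0  # W/m^2 absorbed per square meter of panel
  t_eq = (absorbed / SIGMA) ** 0.25
  print(f"{t_eq:.0f} K = {t_eq - 273.15:.0f} C")  # ~389 K, ~116 C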


> 2 m2 panel makes about 500w

It receives around 2.5 kW[0] of energy (in orbit), of which it converts 500 W to electric energy; some small amount is reflected and the rest ends up as heat, so use ~1 kW/m^2 as your heat input value.

> If we run the radiators at 80C (a reasonable temp for silicon), that's about 350K, assuming the outside is 0K which makes the radiator be able to radiate away about 1500W, so roughly double.

1500 W for 2 m^2 is less than 2000 W, so your panel will heat up.

[0] https://www.sciencedirect.com/topics/engineering/solar-radia...


You can't just omit the 500 W of electric. That ultimately ends up as heat too.

>Depending on what percentage of time we spend in sunlight (depends on orbit, but the number's between 50%-100%, with a 66% a good estimate for LEO), we can reduce the radiator surface area by that amount.

You need enough radiators for peak capacity, not just for the average. It's analogous to how you can't put a smaller heat sink on your home PC just because you only run it 66% of the time.


Yes it's fun. One small note, for the outside temp you can use 3K, the cosmic microwave background radiation temperature. Not that it would meaningfully change your conclusion.

It's definitely a solvable problem. But it is a major cost factor that is commonly handwaved away. It also restricts the size of each individual satellite: moving electricity through wires is much easier than pumping cooling fluid to radiators, so radiators are harder to scale. Not a big deal at ISS scale, but some proposals had square kilometers of solar arrays per satellite.

That exactly. It's not that it's impossible. It's that it's heavy to efficiently transport heat to the radiators, or it requires a lot of tiny sats, which have their own problems.

But heat = energy, right? So maybe we don’t really want to radiate it, but redirect it back into the system in a usable way and reduce how much we need to take in? (From the sun etc)

Useful, extractable energy comes from a temperature differential, not just temperature itself. Once your system is at thermal equilibrium, you can't extract energy anymore and must shed it as heat.


This still relies on a heat differential, as described in the Details section of your linked article: https://en.wikipedia.org/wiki/Thermophotovoltaic_energy_conv...

That's not how physics works. Heat in and of itself does not contain usable energy. The only useful energy to be extracted from heat comes from the difference in temperature between two objects. You can only extract work from thermal energy by moving heat from one place to another, which can only happen by moving energy from a hot object to a cold one.

This is all fundamental to the universe. All energy in the universe comes exclusively from systems moving from a low entropy state to a higher entropy state. Energy isn't a static absolute value we can just use. It must be extracted from an energy gradient.


"space is cold"

I've always enjoyed thinking about this. Temperature is a characteristic of matter. There is vanishingly little matter in space. Due to that, one could perhaps say that space, in a way of looking at it, has no temperature. This helps give some insight into what you mention of the difficulties in dealing with heat in space - radiative cooling is all you get.

I once read that, while the image we have in our mind of being ejected out of an airlock from a space station in orbit around Earth results in instant ice-cube, the reality is that, due to our distance from the sun, that situation - ignoring the lack of oxygen etc that would kill you - is such that we would in fact die from heat exhaustion: our bodies would be unable to radiate enough heat vs what we would receive from the sun.

In contrast, were one to experience the same unceremonious orbital defenestration around Mars, the distance from the sun is sufficient that we would die from hypothermia (ceteris paribus, of course).


A perfect vacuum might have no temperature, but space is not a perfect vacuum, and has a well-defined temperature. More insight would be found in thinking about what temperature precisely means, and the difference between it and heat capacity.

I think your second sentence is what they were referencing. Space has a temperature. But because the matter is so sparse and there’s so little thermal mass to carry heat around as a result, we don’t have an intuitive grasp on what the temperature numbers mean.

To rephrase it slightly. It's not a perfect vacuum, but compared to terrestrial conditions it's much closer to the former than the latter. The physics naturally reflects that fact.

To illustrate the point with a concrete example. You can heat something with the thermal transfer rate of aerogel to an absurdly high temperature and it will still be safe to pick up with your bare hand. Physics says it has a temperature but our intuition says something is wrong with the physics.


I think otherwise.

I think the better argument to be made here is "space has a temperature, and in the thermosphere the temperature can get up to thousands of degrees. Space near Earth is not cold."

Are you actually making that argument, or just "quoting" it as some kind of hypothetical? Regardless, without mentioning heat capacity, I don't see any point to your quotation in this context.

Sorry to hear you can't see it. Let me try to assist you in understanding what you are missing.

Yes, I'm making that argument. Because it's true. The temperature of what particles do exist, 500 km above the Earth, is more likely to be in the thousands of degrees Fahrenheit than below zero Fahrenheit.

The discussion being had, if you read the comments above your original, is that it's widely thought that "space is cold" and therefore good for cooling.

You're right that heat capacity means that the temperature of space is not relevant to its ability to cool a datacenter. You're wrong that making that argument is a good way to get people to actually change their mind.

Instead, attack the idea at its foundation. Space is not cold, not in the places where the data centers would be. It's much easier to get someone to understand "the temperature at 500km where the auroras are is very hot" than "blah blah heat capacity".

Now you see the point!


Assuming merely attitude control, sure, only radiative cooling is available, but it's very easy to design for arbitrary cooling rates assuming any given operating temperature:

Budget the solar panel area as a function of the maximum computational load.

The rest of the satellite must be within the shade of the solar panel, so it basically only sees cold space. So we need a convex body shape, to ensure that every surface of the satellite (ignoring the solar panels) is radiatively cooling over its full hemisphere.

So pretend the sun is "below", the solar panels are facing down, then select an extra point above the solar panel base to form a pyramid. The area of the slanted top sides of the pyramid are the cooling surfaces, no matter how close or far above the solar panels we place this apex point, the sides will never see the sun because they are shielded by the solar panel base. Given a target operating temperature, each unit surface area (emissivity 1) will radiate at a specific rate, and we can choose the total cooling rate by making the pyramid arbitrarily long and sharp, thus increasing the cooling area. We can set the satellite temperature to be arbitrarily low.
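
To make that geometry concrete, here's a small sketch (hypothetical numbers; a square base of side b gives a lateral area of 2*b*sqrt(h^2 + (b/2)^2) for apex height h):

  # Radiating (lateral) area of a square pyramid vs. its height, and the
  # steady-state temperature needed to shed a fixed heat load from it.
  SIGMA = 5.67e-8   # W/(m^2 K^4)
  b = 10.0          # base side in m (hypothetical)
  load = 50_000.0   # W of waste heat to reject (hypothetical)
  for h in (5.0, 20.0, 100.0):
      area = 2 * b * (h**2 + (b / 2) ** 2) ** 0.5
      t = (load / (SIGMA * area)) ** 0.25
      print(f"h={h:5.0f} m  area={area:7.0f} m^2  T={t:4.0f} K")
  # h=    5 m  area=    141 m^2  T= 281 K
  # h=   20 m  area=    412 m^2  T= 215 K
  # h=  100 m  area=   2002 m^2  T= 145 K

This ignores how the heat actually gets to the tip, which is exactly the objection raised in the reply below.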

Forget the armchair "autodidact" computer nerds for a minute


Making the pyramid arbitrarily long and sharp will arbitrarily diminish the heat conductance through the pyramid, so the farther from the pyramid base, the colder it will be and the less it will radiate.

So no, you cannot increase the height of the pyramid too much; there will be some optimum value, at which the pyramid will certainly not be sharp. The optimum height will depend on how much of the pyramid is solid and on the heat conductance of the material. Circulating liquid through the pyramid will also have limited benefits, as the power required for that will generate additional heat that must be dissipated.

A practical radiation panel will be covered with cones or some other such shapes in order to increase its radiating surface, but the ratio in which the surface can be increased in comparison with a flat panel is limited.


We are not discussing a schoolbook exercise calculating passive heat conduction through a pyramid heated at its base. Since it's not a schoolbook exercise, we can decide on the conditions; we could put in heat pipes, etc.

It's CPU/GPU clusters, so we don't have zero control over where to locate the heat generators. But even if we had zero control over it, the shape and height of the pyramid do not preclude heat pipes (not solid bars of metal, but sealed tubes in which a working fluid evaporates on the hot side and condenses on the cold side, carrying the latent heat).

Heat pipes have enormous thermal conductivities.


> The rest of the satellite must be within the shade of the solar panel,

The problem is with the solar panels themselves. When you get 1.3 kW of energy per square meter and use 325 W of that for electricity (25% efficiency), you have to get rid of almost 1 kW of heat for each square meter of your panel. You can do it radiatively with the back surface of the panels, but your panels might reach equilibrium at over 120°C, at which point they stop actually producing energy. If you want to do it purely radiatively, you would need to run some surface pointing away from the sun at much more than 120°C and pump heat to it from your panels with some heat pump.


When the cost of the solar panels does not matter you can reach an efficiency close to 50% (with multi-junction solar cells) and the panels will also be able to work at higher temperatures.

Nevertheless, the problem you describe remains: the panels must dissipate an amount of heat at least equal to the amount of useful power that is generated. Therefore they cannot have other heat radiators on their backside, except those for their own heat.


The point is that even with 100% INefficient solar panels, the pyramidal sides can be made to have an arbitrarily large area, and due to the convexity of the pyramid each infinitesimal surface element of the radiating sides can emit over its full hemisphere. So given any target temperature, we can design the pyramid sharp enough (same base, different height, so the heat absorbed is constant and the heat emitted must equal it in steady state); by basic thermal radiation math, the asymptotic temperature it settles at can be made arbitrarily close to the temperature of the universe by making the pyramid sharper.

No matter how inefficient the solar panels, even at 1% efficiency, you could make the pyramid sharp enough to dissipate the heat, stabilizing at an arbitrarily low temperature (well, still above the temperature of the CMB).

The sun is not the only radiative body in the solar system.

> Temperature is a characteristic of matter. There is vanishingly little matter in space. Due to that, one could perhaps say that space, in a way of looking at it, has no temperature.

Temperature: NaN °C


Temperature is a property of systems in thermal equilibrium. One such system is blackbody radiation, basically a gas of photons that is in thermal equilibrium.

The universe is filled with such a bath of radiation, so it makes sense to say the temperature of space is the temperature of this bath. Of course, in galaxies, or even more so near stars, there's additional radiation that is not in thermal equilibrium.


Related: what color is space?


I saw that too but wonder if it's different for the sparse matter in the interstellar medium, excluding the visible objects.

Jusssst had this conversation two nights ago with a smart drunk friend. To his credit, when I asked "what's heat?" and he said "molecules moving fast" and I said "how many molecules are there in space to bump against?", he immediately got it. I'm always curious what ideas someone who isn't familiar with a problem space comes up with for solutions, so I canvassed him for thoughts -- nothing novel, unfortunately, but if we get another 100 million people thinking about it, who knows what we'll come up with?

I got really annoyed when I first realized that heat and sound (and kinetic energy) are all "molecules moving," because they behave so dramatically differently on a human scale.

And yes, obviously they aren't moving in the same way, but it's still kind of weird to think about.


This article assumes that no extra mass is needed for cooling, i.e. that cooling is free. The list of model assumptions includes:

• No additional mass for liquid cooling loop infrastructure; likely needed but not included

• Thermal: only solar array area used as radiator; no dedicated radiator mass assumed


The author also forgot batteries for the eclipse period, and then additional solar panels to charge those batteries during the orbital "day". Then insulation for the batteries. Then power converters and pumps for the radiators, and additional radiators to cool the cooling infrastructure.

Overall not a great model. But on the other hand, even an amateur can use this model and see that additional parts and costs are missing. So if it shows a bad outlook even under conditions that favor space DCs, then they are an even dumber idea once all costs are factored in fully. Unfortunately many serious journalists can't even make that mental leap. :(


I'd say it makes much more sense to just shut off while in shadow. The advantage of orbital solar comes not so much from the lack of atmosphere, but from the fact that, depending on your orbit, you can be in sunlight for 60-100% of the time.

That proposal I've seen a few times too: basically put a constellation up there, linked with laser comms, and transfer data to the illuminated sats in a loop. That sounds possible, but I have doubts. First of all, if we take a 400 km orbit, the "online" time would be something like 50 minutes. We need to boot the system fully, run comm apps, locate a peer satellite and download data from it (which needs to be prepared in a portable form), write it locally and start calculations, then repeat by the end of the 50 minutes. All these operations are slow, especially the boot time of the servers (which could be optimized, of course). It would be great if some expert could tell us whether it is feasible.

Yeah that's just flat out wrong then: you can't use the solar array as a radiator.

Of course you can. You can use everything as a radiator. Unless something is at literally 0 Kelvin, it radiates.

See here for all the great ways of getting rid of thermal energy in space: https://www.nasa.gov/smallsat-institute/sst-soa/thermal-cont...


You can use everything as a radiator, but you can't use everything as a radiator sufficiently efficient to cool hot chips to safe operating temperature, particularly not if that thing is a thin panel intentionally oriented to capture the sun's rays to convert them to energy. Sure, you can absolutely build a radiator in the shade of the panels (it's the most logical place), but it's going to involve extra mass.

You also want to orient those radiators at 90 degrees to the power panels, so that they don't send 50% of their radiation right back to the power panels.

You can rivet people onto the outside of the ISS to radiate heat, too, but it may be detrimental to the overall system.

Cooling isn't any more difficult than power generation. For example, on the ISS the solar panels generate up to 75 W/m², while the EATCS radiators can dissipate about 150 W/m².

Solar panels have improved more than cooling technology since ISS was deployed, but the two are still on the same order of magnitude.


So just 13.3 million sq. meters of solar panels, and 6.67 million sq. meters of cooling panels for 1 GW.

Or a butterfly satellite with a 3.651 km x 3.651 km solar wing and a 2.581 km x 2.581 km cooling wing.
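
The arithmetic, for anyone who wants to poke at it (using the ISS-era power densities from the comment above):

  # Area to generate and reject 1 GW at ISS-era power densities
  P = 1e9             # W
  solar = P / 75      # ~13.3 million m^2 of solar panels
  radiator = P / 150  # ~6.67 million m^2 of radiators
  print(solar ** 0.5, radiator ** 0.5)  # ~3651 m and ~2582 m on a side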

I don't think your cooling area measures account for the complications introduced by scale.

Heat dissipation isn't going to efficiently work its way across surfaces at that scale passively. Dissipation will scale very sub-linearly, so we need much more area, and there will need to be active fluid exchangers operating at speed spanning kilometers of real estate, to get dissipation/area anywhere back near linear/area again.

Liquid cooling and pumps, unlike solar, are meaningfully talked about in terms of volume. The cascade of up-scaling volume, mass, complexity, and power flows back into infernal launch-volume logistics. Many more ships and launches.

Cooling is going to be orders of magnitude more trouble than power.

How are these ideas getting any respect?

I could see this at lunar poles. Solar panels in permanent sunlight, with compute in direct surface contact or cover, in permanent deep cold shadow. Cooling becomes an afterthought. Passive liquid filled cooling mats, with surface magnifying fins, embedded in icy regolith, angled for passive heat-gradient fluid cycling. Or drill two adjacent holes, for a simple deep cooling loop. Very little support structure. No orbital mechanics or right-of-way maneuvers to negotiate. Scales up with local proximity. A single expansion/upgrade/repair trip can service an entire growing operation at one time, in a comfortable stable g-field.


Solar panels can in principle be made very thin, since there are semiconductors (like CdTe) where the absorption length of a photon is < 1 micron. Shielding against solar wind particles doesn't need much thickness (also < 1 micron).

So maybe if we had such PV, we could make huge gossamer-thin arrays that don't have much mass, then use the power from these arrays to pump waste heat up to higher temperature so the radiators could be smaller.

The enabling technology here would be those very low mass PV arrays. These would also be very useful for solar-electric spacecraft, driving ion or plasma engines.


> active fluid exchangers operating at speed spanning kilometers of real estate, to get dissipation/area anywhere back near linear/area again

Could the compute be distributed instead? Instead of gathering all the power into a central location to power the GPUs there, stick the GPUs on the back of the solar panels as modules? That way even if you need active fluid exchanger it doesn’t have to span kilometers just meters.

I guess that would increase the cost of networking between the modules. Not sure if that would be prohibitive or not.


It's easier to shield the GPUs if they're all grouped up.

> Could the compute be distributed instead?

For electrons that would dramatically increase latency, and lower bandwidth, slowing down compute.

Maybe dense optical connects could work?


Let's not forget that you have to launch that liquid up as well. Liquids are heavy for their volume. Not to mention your entire 'datacenter' goes poof if one of these loops freezes, explodes from catching some sunlight, or whatever. This is pretty normal stuff, but not at the scale that would be required.

Well, divide et impera. Fairly straightforward for AI inference (not training): The existing Starlink constellation:

3491 V1 sats × 22.68 m² = 79176 m²

5856 V2-mini sats × 104.96 m² = 614 646 m²

Total: 0.7 km² of PERC Mono cells with 23% efficiency.

At around 313W/m² we get 217MW. But half the orbit it's in shade, so only ~100MW.

The planned Starship-launched V2 constellation (40k V3 sats, 256.94 m²) comes out at 10 km², ~1.5GW.

So it's not like these ideas are "out there".
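
A quick sketch of that arithmetic (panel areas and satellite counts as quoted above):

  # Rough Starlink constellation solar capacity estimate
  area = 3491 * 22.68 + 5856 * 104.96  # m^2 of panels, V1 plus V2-mini
  peak = area * 313                    # W at ~313 W/m^2
  print(area / 1e6, peak / 1e6, peak / 2e6)  # ~0.69 km^2, ~217 MW peak, ~109 MW average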


Take those 40,000 satellites, and combine their solar panels, and combine the cooling panels, and centralize all the compute.

Distances are not our friend in orbit. Efficiency hyperscales down for many things, as distances and area scale up.

Things that need to hyperscale when you scale distance and area:

• Structural strength.

• Power and means to maneuver, especially for any rotation.

• Risk variance, with components housed together, instead of independently.

• Active heat distribution. Distance is COMPOUNDING insulation. Long shallow heat gradients move heat very slowly. What good does scaling up radiative surface do, if you don't hyperscale heat redistribution?

And you can't hyperscale heat distribution in 2D. It requires 3D mass and volume.

You can't just concatenate satellites and get bigger satellites with comparable characteristics.

Alternatives, such as distributing compute across the radiative surface, suffer relative to regular data centers, from intra-compute latency and bandwidth.

We have a huge near infinite capacity cold sink in orbit. With structural support and position and orientation stabilization for free. Let's use that.


None of it is easy, but neither is cooling impossible, as many people are saying.

Something like an 8x H200 server (https://docs.nvidia.com/dgx/dgxh100-user-guide/introduction-...) draws 10.2 kW.

Let's say you need 50 m^2 of solar panels to run it, and then just a ton of surface area to dissipate the heat. I'd love to be proven wrong, but space data centers just seem like large 2D impact targets.


Yeah, you need 50m^2 of solar panels and 50m^2 of radiators. I don't see why one is that much more difficult than the other.

You need 50 sqm of solar panels just for a tiny 8RU server. You also forgot any overhead for networking, control, etc., but let's ignore those. Next, at a 400 km orbit you spend 40% of the time in shade, so you need an insulated battery to provide 5 kWh. This adds 100-200 kg of weight to a server weighing 130 kg on its own. Then you need to dissipate all that heat, and yes, 50 sqm of radiators should deal with the 10 kW device. We also need to charge our batteries for the shade period, so we need 100 sqm of solar panels. And we also need to cool the cooling infrastructure - pumps and power converters, which weren't included in the power budget initially.

So now we have arrived at a revised solution: a puny 8RU server at 130 kg requires 100 sqm and 1000 kg of solar panels, then 50-75 sqm of heat radiators at 1000-1500 kg, then 100-200 kg of batteries, and then the housing for all that stuff plus station-keeping engines and propellant, motors to rotate all the panels, pumps, etc. I guess at least 500 kg is needed, maybe a bit less.

So now we have a 3 ton satellite, which costs around 10 million dollars to launch at an optimistic $3000/kg on F9. And that's not counting the cost to manufacture the satellite or the server's own cost.

I think the proposal is quite absurd with modern tech and costs.
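
Tallying that up (all masses are the rough guesses from the comment above, launch cost the same optimistic $3000/kg):

  # Back-of-envelope mass and launch cost for one 8-GPU server satellite
  masses_kg = {
      "server": 130,
      "solar panels": 1000,          # ~100 sqm
      "radiators": 1250,             # midpoint of 1000-1500 kg
      "batteries": 150,              # midpoint of 100-200 kg
      "bus, pumps, propulsion": 500,
  }
  total = sum(masses_kg.values())
  print(total, "kg ->", total * 3000 / 1e6, "M$")  # 3030 kg -> ~9.1 M$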


Don't forget to budget power to run the coolant heaters and prevent them from freezing in the shade.

Especially since the radiators are something you can just roll out as rolls of aluminum foil, which is very light and very cheap.

Only over a short distance. To effectively radiate a significant amount of heat, you need to actually deliver the heat to the distant parts of the radiator first. That normally requires active pumping, which needs extra energy. So now you need to unfold solar panels + aluminium + pipes (+ maybe extra pumps).

Orbital assembly of a fluid piping system in space is a pretty colossal problem too (as well as miles of pipes and connections being a massive single point failure for your system). Dispersing the GPUs might be more practical, but it's not exactly optimal for high performance computation...

It's a fun problem to think about, but even if all the problems were solved we would have rapidly depreciating hardware in orbit that's impossible to service or upgrade.

>large 2d impact targets

I bet you a million dollars cash that you would not be able to reach them.


There’s a big difference between “impossible” (it isn’t) and “practical” (it isn’t).

What happened to "do things that don't scale"?

Maybe you should re-read the "do things that don't scale" article. It is about doing things manually until you figure out what you should automate, and only then do you automate it. It's not about doing unscalable things forever.

Unless you have a plan to change the laws of physics, space will always be a good insulator compared to what we have here on Earth.


Ok fair enough.

No need to rewrite anything. Radiators are 30% heavier per watt than solar panels. This is far from impossible.


I've done some reading on how they cool JWST. It's fascinating and was a massive engineering challenge. Some of those instruments need to be cooled to near absolute zero, so much so that it uses liquid helium as a coolant in parts.

Now, JWST is near L2, but it is still in sunlight. It's solar-powered. There are a series of radiating layers to keep heat away from sensitive instruments. Then there are the solar panels themselves.

Obviously an orbital data center wouldn't need such extreme cooling, but the key takeaway for me is that the solar panels themselves would shield much of the satellite from direct sunlight, by design.

Absent any external heating, there's only heating from the computer chips. Any body in space will radiate away heat. You can make some more effective than others by increasing surface area per unit mass (I assume). Someone else mentioned thermoses as evidence of insulation. There's some truth to that, but interestingly most of the heat lost from a thermos leaves via the same IR radiation that would be emitted by a satellite.


The computer chips used for AI generate significantly more heat than the chips on the JWST. The JWST in total weighs 6.5 tons and uses a mere 2 kW of power, which is the same as 3 H100 GPUs under load, each of which weighs what, 1 kg?

So in terms of power density you're looking at about 3 orders of magnitude difference. Heating and cooling is going to be a significant part of the total weight.


Indeed. If cooling in space was that easy then we would have just built datacenters in hermetically sealed terrestrial containers…

Who says that?

In every conversation I've seen, no matter how serious the organization or how talented the people, the "uhhh, how do you cool it?" question is brought up immediately.


Maybe hang out with different people?

Everyone I talked to (and everyone on this forum) knows cooling is hard in space.

It is always the number one comment on every news piece that is featured here talking about "AI in space".


For some decades now I've heard the debunk many times more than the bunk. The real urban myth appears to be that any appreciable fraction of people believe the myth.

Space hardware needs to be fundamentally different from surface hardware. I don't mean the usual radiation hardening etc., but using computing substrates that run at over 1000 C and never shut down. T^4 cooling means you have a hell of a time keeping things cool, but keeping hot things from melting completely is much easier.

if you have a compute substrate at 1300K you don't have a cooling problem - you have an everything else problem

There are very high temperature transistors.

We don't use them on earth because we expect humans to be near computers and keeping anything extremely hot is a waste of energy.

But an autonomous space data center has no reason to be kept even remotely human habitable.


The transistors are experimental, and no one is building high-performance chips out of them.

You can't just scale current silicon nodes to some other substrate.

Even if you could, there's a huge difference between managing the temperature of a single transistor, managing temps on a wafer, and managing temps in a block of servers running close to the melting point of copper.


I think the point is, yes, cooling is a significant engineering challenge in space; but having easy access to abundant energy (solar) and not needing to navigate difficult politically charged permitting processes makes it worthwhile. It's a big set of trade offs, and to only focus on "cooling being very hard in space" is kind of missing the point of why these companies want to do this.

Compute is severely power-constrained everywhere except China, and space based datacenters is a way to get around that.


Of course you can build these things if you really want to.

But there is no universe in which it's possible to build them economically.

Not even close. The numbers are simply ridiculous.

And that's not even accounting for the fact that getting even one of these things into orbit is an absolutely huge R&D project that will take years - by which time technology and requirements will have moved on.


Lift costs dropping geometrically. Cost and weight of solar decreasing similarly. The trend makes space-based centers nearly inevitable.

Reminds me of "Those darn cars! Everybody knows that trains and horses are the way to travel."


Lift costs are not quite dropping like that lately. Starship is not yet production ready (and you need to fully pack it with payloads to achieve those numbers). What we saw was the cutting off of most of the artificial margins of the old launches and an arrival at some economic equilibrium with sane margins. Regardless of the launch price, the space-based stuff will be much more expensive than planet-based; the only question is whether it will be optimistically "only" 10x more expensive, or pessimistically 100x.

I don't get this "inevitable" conclusion. What is even the purpose of a space datacenter in the first place? What would justify paying an order of magnitude more than conventional competitors? Especially if the server in question is a dumb number cruncher like a stack of GPUs. I can understand putting some black NSA data up there, or a drug cartel's accounting backup, but to multiply some LLM numbers you really have zero need of an extraterritorial, lawless DC. There is no business incentive for that.


> Reminds me of "Those darn cars! Everybody knows that trains and horses are the way to travel." … said nobody ever.

You must be very young. This was well-known back in the day. Lots of articles (some even posted here some time back) ranting about cars and how they were ruining everything.

Btw, the cute one-line slam doesn't really belong here. It's an empty comment that adds zero to the conversation and contributes nothing to the reader. It only gives a twelve-year-old a brief burst of endorphins. Please refrain.


The idea that it's faster and cheaper to launch solar panels than to get local councils to approve them is insane. The fact is those data center operators simply don't want to do it, and instead want politicians to tax people to build the power infrastructure for them.

But space isn't actually cold, or at least not space near Earth. It's about 10 C. And that's only about 10 C less than room temperature, so a human-habitable structure in near-Earth space won't radiate very much heat. But heat radiated is O(Tobject^4 - Tbackground^4), and a computer can operate up to around 90 C (I think), so that is actually a very big difference here. Back of the envelope, a data center at 90 C will radiate about 10x the heat that a space station at 20 C will. With the massive caveat that I don't know what the constant is here, it could actually be easy to keep a datacenter cool even though it is hard to keep a space station cool.

It's actually only about 2.4x.

As you intimated, the radiated energy output of an object is described by the Stefan-Boltzmann law, which is E = [Stefan-Boltzmann Constant] * [Object Temp]^4.

However, Temp must be in units of an absolute temperature scale, typically Kelvin.

So the relative heat output of 90 C vs 20 C objects will be (translating to K):

363^4 / 293^4 ≈ 2.36x

Plugging in the constant (5.67 * 10^-8 W/(m^2*K^4)), the actual heat radiation outputs for objects at 90 C and 20 C are ~986 W/m^2 and ~419 W/m^2.

The incidence of solar flux must also be taken into account, and satellites at LEO and not in the shade will have one side bathing in 1361 W/m^2 of sunlight, which will be absorbed by the satellite with some fractional efficiency -- the article estimates 0.92 -- and that will also need to be dissipated.

The computer's waste heat needs to be shed; for reference[0] an H200 generates up to 700 W. But the computer is presumably powered by the incident solar radiation hitting the satellite, so we don't need to add its energy separately; we can just model the satellite as needing to shed 1361 W/m^2 * 0.92 = 1252 W/m^2 for each square meter of its surface facing the sun.

We've already established that objects at 90 C and 20 C only radiate ~986 W/m^2 and ~419 W/m^2, respectively, so to radiate the 1252 W per square meter coming in from the sun-facing side we'll need 1252/986 ≈ 1.27 times that area of shaded radiator maintained at a uniform 90 C. If we wanted the radiator to run cooler, at 20 C, we'd need 2.36x as much as at 90 C, or ~3.0 square meters of shaded radiator for every square meter of sun-facing material.

[0] Nvidia H200 specifications: https://www.nvidia.com/en-us/data-center/h200/
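
A compact check of those numbers (emissivity 1, radiators fully shaded, background radiation neglected):

  # Shaded radiator area needed per m^2 of sun-facing surface
  SIGMA = 5.67e-8         # W/(m^2 K^4)
  absorbed = 1361 * 0.92  # ~1252 W/m^2 absorbed from sunlight
  for t_c in (90, 20):
      flux = SIGMA * (t_c + 273.15) ** 4
      print(f"{t_c} C: {flux:4.0f} W/m^2 -> {absorbed / flux:.2f} m^2 of radiator")
  # 90 C:  986 W/m^2 -> 1.27 m^2 of radiator
  # 20 C:  419 W/m^2 -> 2.99 m^2 of radiator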


You use arbitrary temps to prove at some temps it’s not as efficient. Ok? What about at the actual temps it will be operating in? We’re talking about space here. Why use 20 degC as the temperature for space?

He didn't use 20C as the temperature of space. He used the OP's example of comparing the radiative cooling effectiveness of a heat SOURCE at 90C (chosen to characterize a data center environment) and 20C (chosen to characterize the ISS/human habitable space craft).

You forgot about the background. The background temp at Earth's distance from the sun is around 283 K. Room temperature is around 293 K, and a computer can operate at 363 K. So for an object at room temperature the radiation will go as (293^4 - 283^4), and for a computer as (363^4 - 283^4):

(293^4 - 283^4) = 9.55e8

(363^4 - 283^4) = 1.09e10

So about 10x

I have no problem with your other numbers which I left out as I was just making a very rough estimate.


The background temp at Earth's orbit is due to the incidence of solar flux, which I took account of.

I'm assuming the radiators are shaded from that flux by the rest of the satellite, for efficiency reasons, so we don't need to account for solar flux directly heating up the radiators themselves and reducing their efficiency.

In the shade, the radiators' emission is relative to the background temp of empty space, which is only 2.7 K[0]. I did neglect to account for that temperature, that's true, but its effect should be negligible (for our rough-estimate purposes).

[0] https://sciencenotes.org/how-cold-is-space-what-is-its-tempe...


The temperature that you raise to the fourth power is not Celsius, it's Kelvin. Otherwise things at -200 C would radiate more heat than things at 100 C. Also the temperature of space is ~3 K (cosmic microwave background), not 10 C.

There is a large region of the upper atmosphere called the thermosphere where there is still a little bit of air. The pressure is extremely low but the few molecules that are there are bombarded by intense radiation and thus reach pretty high temperatures, even 2000 C!

But since there are so few such molecules in any cubic meter, there isn't much energy in them. So if you put an object in such a rarefied atmosphere, it wouldn't get heated up by it, despite the gas formally having such a temperature.

The gas would be cooled down upon contact with the body, and the body would be heated up by a negligible amount.


These satellites will certainly be above the thermosphere. The temperature of the sparse molecules in space is not relevant for cooling because there are too few of them. We're talking about radiative cooling here.

Indeed. Talking about temperature is incomplete without other aspects such as pressure.

Yeah, if you forget about the giant fucking star nearby

The Sun is also not 10 C. Luckily you have solar arrays which shade your radiators from it, so you can ignore the direct light from it when calculating radiator efficiency. The actual concern in LEO is radiation from the Earth itself.

Pressure matters

Yes, Python is not statically typed, and it shouldn't be. Don't expect static typing behavior: typing in Python is _not_ static typing in any way. It's documentation only.


One of my biggest gripes around typing in python actually revolves around things like numpy arrays and other scientific data structures. Typing in python is great if you're only using builtins or things that the typing system was designed for. But it wasn't designed with scientific data structures particularly in mind. Being able to denote dtype (e.g. uint8 array vs int array) is certainly helpful, but only one aspect.

There's not a good way to say "Expects a 3D array-like" (i.e. something convertible into an array with at least 3 dimensions). Similarly, things like "At least 2 dimensional" or similar just aren't expressible in the type system and potentially could be. You wind up relying on docstrings. Personally, I think typing in docstrings is great. At least for me, IDE (vim) hinting/autocompletion/etc all work already with standard docstrings and strictly typed interpreters are a completely moot point for most scientific computing. What happens in practice is that you have the real info in the docstring and a type "stub" for typing. However, at the point that all of the relevant information about the expected type is going to have to be the docstring, is the additional typing really adding anything?

In short, I'd love to see the ability to indicate expected dimensionality or dimensionality of operation in typing of numpy arrays.

But with that said, I worry that typing for these use cases adds relatively little functionality at the significant expense of readability.
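
As a strawman for the kind of thing I mean, one could abuse typing.Annotated to carry dimensionality info (a hypothetical convention; nothing checks it, it's just documentation a tool could in principle read):

  # Hypothetical: record expected dimensionality in Annotated metadata.
  from typing import Annotated, Any

  import numpy as np
  import numpy.typing as npt

  Array3DLike = Annotated[npt.ArrayLike, "convertible to >= 3-D array"]
  AtLeast2D = Annotated[npt.NDArray[Any], "ndim >= 2"]

  def collapse(volume: Array3DLike) -> AtLeast2D:
      """Average over the leading axis of a 3D array-like."""
      arr = np.asarray(volume)
      return arr.mean(axis=0)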


I also had a very hard time annotating types in Python a few years ago. A lot of popular Python libraries, like pandas, SQLAlchemy, Django, and requests, are so flexible that it's almost impossible to infer types automatically without parsing the entire code base. I tried several typing libraries, often created by other people and not maintained well, but after a painful experience it was clear our development was much faster without them, while type safety was not improved much at all.


Have you looked at nptyping? Type hints for ndarray.

https://github.com/ramonhagenaars/nptyping/blob/master/USERD...


What is this project actually for?

Its FAQ states:

> Can MyPy do the instance checking? Because of the dynamic nature of numpy and pandas, this is currently not possible. The checking done by MyPy is limited to detecting whether or not a numpy or pandas type is provided when that is hinted. There are no static checks on shapes, structures or types.

So this is equivalent to not using this library and making all such types np.ndarray/np.dtype etc then.

So we expend effort coming up with a type system for numpy, and tools cannot statically check the types? What good are types if they aren't checked? Are they just more concise documentation for humans?


That one's new to me. Thanks! (With that said, I worry that 3rd party libs are a bad place for types for numpy.)


Numpy ships built-in type hints as well as a type for hinting arrays in your own code (numpy.typing.NDArray).

The real challenge is denoting what you can accept as input. `NDArray[np.floating] | pd.Series[float] | float` is a start but doesn't cover everything especially if you are a library author trying to provide a good type-hinted API.
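
For reference, a minimal example of the shipped hints (dtype only; shape is invisible to the checker):

  # numpy's built-in hints constrain dtype but say nothing about shape.
  import numpy as np
  from numpy.typing import NDArray

  def rms(signal: NDArray[np.floating]) -> np.floating:
      return np.sqrt(np.mean(signal ** 2))

  rms(np.array([1.0, 2.0, 3.0]))  # fine: 1-D
  rms(np.ones((4, 4, 4)))         # equally "fine" to a type checker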


I'm aware of that. As explained in the original comment, not being able to denote dimensionality in those is a major limitation.


This isn't static, but jaxtyping gives you at least runtime checks and also a standardized form of documenting those types. https://github.com/patrick-kidger/jaxtyping


It actually doesn't, as far as I know :) It does get close, though. I should give it a deeper look than I have previously, though.

"array-like" has real meaning in the python world and lots of things operate in that world. A very common need in libraries is indicating that things expect something that's either a numpy array or a subclass of one or something that's _convertible_ into a numpy array. That last part is key. E.g. nested lists. Or something with the __array__ interface.

In addition to dimensionality that part doesn't translate well.

And regardless, if the type representation is not standardized across multiple libraries (i.e. in core numpy), there's little value to it.


Does it not?

E.g. `UInt8[ArrayLike, "... a b"]` means "an array-like of uint8s that has at least two dimensions". You are opting into jaxtyping's definition of an "array-like", but even though the general concept as you point out is wide spread, there isn't really a single agreed upon formal definition of array-like.

Alternatively, even more loosely as anything that is vaguely container-shaped, `UInt8[Any, "... a b"]`.
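
For the runtime side, a sketch of how that checks out in practice (assuming jaxtyping is installed; it supports isinstance checks against numpy arrays):

  import numpy as np
  from jaxtyping import UInt8

  # "..." allows any number of leading dims; "a b" requires two more.
  AtLeast2D = UInt8[np.ndarray, "... a b"]

  print(isinstance(np.zeros((3, 4), dtype=np.uint8), AtLeast2D))     # True
  print(isinstance(np.zeros((2, 3, 4), dtype=np.uint8), AtLeast2D))  # True
  print(isinstance(np.zeros(5, dtype=np.uint8), AtLeast2D))          # False: 1-D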


Ah, fair enough! I think I misread some things around the library initially awhile back and have been making incorrect assumptions about it for awhile!


I wonder if we should standardize on __array__ like how Iterable is standardized on the presence of __iter__, which can just return self if the Iterable is already an Iterator.
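
For concreteness, np.asarray already keys off __array__ today; a minimal sketch:

  # Any object exposing __array__ converts via np.asarray,
  # much like Iterable keys off the presence of __iter__.
  import numpy as np

  class Ramp:
      def __init__(self, n: int):
          self.n = n

      def __array__(self, dtype=None, copy=None):
          return np.arange(self.n, dtype=dtype)

  print(np.asarray(Ramp(5)))  # [0 1 2 3 4]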


I am using runtime type and shape checking and wrote a tiny library to merge both into a single typecheck decorator [1]. It's not perfect, but I haven't found a better approach yet.

[1] https://github.com/davnn/safecheck


Some time ago I created a demo of named dimensions for Pytorch, https://github.com/stared/pytorch-named-dims

In the same line, I would love to have more Pandas-Pydantic interoperability at the type level.


Wisdom in the community. People who can excel at things don't go to big tech companies, because they won't be appreciated; they'll get PIPped if they can't wear those chains and shackles well.


Would a custom decorator work for you?


Unless I'm missing something entirely, what would that add? You still can't express the core information you need in the type system.


I meant only that you can insist a parameter has some quality when passed.
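
Something along these lines, presumably (a runtime-only sketch; the decorator name and behavior are made up for illustration):

  # Coerce the first argument to an ndarray and insist on a minimum ndim.
  import functools

  import numpy as np

  def expects_ndim(min_ndim):
      def decorator(func):
          @functools.wraps(func)
          def wrapper(arr, *args, **kwargs):
              arr = np.asarray(arr)
              if arr.ndim < min_ndim:
                  raise ValueError(
                      f"{func.__name__} expects >= {min_ndim}-D, got {arr.ndim}-D")
              return func(arr, *args, **kwargs)
          return wrapper
      return decorator

  @expects_ndim(3)
  def stack_mean(volume):
      return volume.mean(axis=0)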


Good point, but I think we're talking past each other a bit.

Typing in python within the scientific world isn't ever used to check types. It's _strictly_ only documentation.

Yes, MyPy and whatnot exist, but not meaningfully. You literally can't use them for anything in this domain (they won't run on any of the code in question).

Types (in this subset of python) are 100% about documentation, 0% about enforcement.

We're setting up a _documentation_ system that can't express the core things it needs to. That worries me. Setting up type _checks_ is a completely different thing and not at all the goal.


I see, makes sense. Thanks.


I'm trying to count the rounds of major layoffs in my career (e.g. 10% or more of the company let go at once). I _think_ it's 5, but it might be a bit more. I've been lucky each time, but that also means I wasn't one of the ones taking risks. Layoffs cut from both sides of the performance curve and leave the middle, in my experience.

I wish I'd done this more.

In some cases there was no way to. For example, we once woke up to find that the European half of our team had been laid off as part of huge cuts that weren't announced and even our manager had no idea were coming. There's no good way to do layoffs, but I think that "sudden shock" approach is worst of all, personally. You don't get to say goodbye in any way and people don't get to plan for contingencies at all. (The other extreme of knowing it's coming for a year and applying for your own job and then having 2 months to sit around after you didn't get it also sucks, and I've done that as well. You can at least make plans in that case, though.)

On the other hand, in a _lot_ of other cases, you do have a chance to say goodbye. Take it. This is really excellent advice. It's worth saying something, at very least to the people you really did enjoy working with.

There's a decent chance you work with some of those folks in the future, and even if you don't, it really does mean something to be a kind human.


There's no good way to do a layoff, but there are tons of bad ways to do them, and it seems like we keep finding new ways to make it worse.


The "sudden shock" approach is a risk mitigation. You have to ask yourself though, what risk were they mitigating?

There's no good answer to that question I can come up with that should make you want to stay at that company.


There's a lot of companies with IP that can be extracted or systems that can be sabotaged by a bitter employee. There's also the extreme cases of someone who knows they are being fired who can do a shooting/arson/some other extreme scenario.

I'm not saying I agree with the shock approach but there are definitely some generic risks that I don't think paint a bad picture of the company by their existence.


As a company, we entrust our employees with a lot of agency and access to our systems, networks and data. We do not spy on our employees nor have intrusive systems to prevent them from seeing/copying internal IP.

Therefore, while these operating procedures foster an agreeable environment for our collaborators to thrive and do actual things without too much segmentation, it makes it painful when a hard decision results in people getting suddenly both very angry against the company, and very capable to inflict damage upon it.


Flock really does have a huge amount of potential for abuse. It's a fair point that private companies (e.g. Google, etc) have way more surveillance on us than the government does, but the US and local governments having this level of surveillance should also worry folks. There's massive potential for abuse. And frankly, I don't trust most local police departments to not have someone that would use this to stalk their ex or use it in other abusive ways. I weirdly actually trust Google's interests in surveillance (i.e. marketing) more than I trust the government's legitimate need to monitor in some cases to track crimes. Things get scary quick when mass surveillance is combined with (often selective) prosecution.


> I weirdly actually trust Google's interests in surveillance (i.e. marketing) more than I trust the government's legitimate need to monitor in some cases to track crimes

You shouldn't.

When a company spies on everyone as much as possible and hoards that data on their servers, it is subject to warrant demands from any local, state, or federal agency.

> Avondale Man Sues After Google Data Leads to Wrongful Arrest for Murder

Police had arrested the wrong man based on location data obtained from Google and the fact that a white Honda was spotted at the crime scene. The case against Molina quickly fell apart, and he was released from jail six days later. Prosecutors never pursued charges against Molina, yet the highly publicized arrest cost him his job, his car, and his reputation.

https://www.phoenixnewtimes.com/news/google-geofence-locatio...

The more data you collect, the more dangerous you are.

I would rather trust companies making a legitimate effort not to collect and store unnecessary data in the first place


You, perhaps politely, imply that the police might abuse these tools, when in fact they routinely do abuse them. For instance, one recent case which isn't speculation: https://local12.com/news/nation-world/police-chief-gets-caug...


Yeah; I don't exactly trust Google with tracking data, but at least Google doesn't have the power to imprison or kill me on a whim.


Problem is that if Google has it, the government can get it.


The thing is though, cops harass people, cops abuse their power, courts prosecute who they want, with or without Flock. This is a valid concern, but the root of the issue, I think what we should focus on first or primarily, is that the justice system isn't necessarily accountable for mistakes or corruption. As long as qualified immunity exists, as long as things like the "Kids for Cash" scandal (which didn't need Flock) go on, it doesn't really matter what tools they have, or not.


> As long as qualified immunity exists, as long as things like the "Kids for Cash" scandal (which didn't need Flock) go on, it doesn't really matter what tools they have, or not.

But, given that those abuses exist and are ongoing, we should not hand the police state yet another tool to abuse.


  > I weirdly actually trust Google's interests in surveillance more than I trust the government's
I don't think this is weird at all. Corporations may be more "malicious" (or at least self centered), but governments have more power. So even if you believe they are good and have good intentions it still has the potential to do far more harm. Google can manipulate you but the government can manipulate you, throw you in jail, and rewrite the rules so you have no recourse. Even if the government can get the data from those companies there's at least a speed bump. Even if a speed bump isn't hard to get over are we going to pretend that some friction is no different from no friction?

Turnkey tyranny is a horrific thing. One that I hope more people are becoming aware of as it's happening in many countries right now.[0]

This doesn't make surveillance capitalism good and I absolutely hate those comparisons because they make the assumption that harm is binary. That there's no degree of harm. That two things can't be bad at the same time and that just because one is worse that means the other is okay. This is absolute bullshit thinking and I cannot stand how common it is, even on this site.

[0] My biggest fear is that we still won't learn. The problem has always been that the road to hell is paved with good intentions. Evil is not just created by evil men, but also by good men trying to do good. The world is complex, and we have this incredible power of foresight. While far from perfect, we seem to despise this capability that made us the creatures we are today. I'm sorry, the world is complex. Evil is hard to identify. But you've got this powerful brain to deal with all that, if you want to.


>I don't think this is weird at all. Corporations may be more "malicious" (or at least self centered), but governments have more power. So even if you believe they are good and have good intentions it still has the potential to do far more harm. Google can manipulate you but the government can manipulate you, throw you in jail, and rewrite the rules so you have no recourse. Even if the government can get the data from those companies there's at least a speed bump. Even if a speed bump isn't hard to get over are we going to pretend that some friction is no different from no friction?

That's all as may be, but you're ignoring the fact that governments are buying[0][1][2][3] the data being collected by those corporations. That's not "friction" in my book, rather it's a commercial transaction.

As such, giving corporations a pass seems kind of silly, as they're profiting from selling that data to those with a monopoly on violence.

So, by all means, give the corporations the "benefit of the doubt" on this, as they certainly have no idea that they're selling this information to governments (well, to pretty much anyone willing to pay -- including domestic abusers and stalkers too), they're only acting as agents maximizing corporate profits for their shareholders. Which is the only important thing, right? Anything else is antithetical to free-market orthodoxy.

People suffer and/or die? Just the cost of doing business right?

[0] https://www.nbcnews.com/tech/security/us-government-buys-dat...

[1] https://www.lawfaremedia.org/article/when-the-government-buy...

[2] https://www.congress.gov/118/meeting/house/116192/documents/...

[3] https://www.politico.com/news/magazine/2024/02/28/government...


  > but you're ignoring the fact that governments are buying the data being collected by those corporations
Did I?

  >> Even if the government can get the data from those companies there's at least a speed bump. Even if a speed bump isn't hard to get over are we going to pretend that some friction is no different from no friction?
I believe that this was a major point in my argument. I apologize if it was not clear. But I did try to stress this and reiterate it.

  > giving corporations a pass seems kind of silly
Oh come on now, I definitely did not make such a claim.

  >> This doesn't make surveillance capitalism good and I absolutely hate those comparisons because they make the assumption that harm is binary. That there's no degree of harm. That two things can't be bad at the same time and that just because one is worse that means the other is okay.
You're doing exactly what I said I hate.

The reason I hate this is because it makes discussion impossible. You treat people like they belong to some tribe that they do not even wish to be a part of. We're on the same side here, buddy. Maybe stop purity testing and try working together. All you're doing is enabling the very system you claim to hate. You really should reconsider your strategy. We don't have to agree on the nuances, but if you can't see that we agree more than we disagree then you are indistinguishable from someone who just pretends to care. Nor are you distinguishable from an infiltrating saboteur[0].

Stop making everything binary. Just because I'm not in your small club does not mean I'm in the tribe of big corp or big gov. How can you do anything meaningful if you stand around all day trying to figure out who is a true Scotsman or not?

[0] See Sections 11 and 12. https://ia601309.us.archive.org/14/items/Simplesabotage/Simp...


This depends a lot on what you do. Try working with a decision analyst sometime. The entire economic model with a decision tree and Monte Carlo analysis of cost overruns, etc. for a multi-trillion dollar decision will literally be an arcanely complex spreadsheet or two on someone's laptop.

With that said, it's still a great tool for the job because the different stakeholders can inspect it.


It requires more input energy, but it's been really good to see electrolysis of H2O for hydrogen generation take off. There are honestly industrial/grid-scale operations actually starting up now (as opposed to still being constructed). E.g. ACES Delta in Utah: 220 MW of wind/solar as input (i.e. equivalent to power for a medium-sized city). As a disclaimer, my wife works on that project, but I think it's incredibly cool regardless.

Pyrolysis is a less energy intensive way to produce hydrogen, and does deserve more attention. But it still requires methane as a feedstock.

Electrolysis lets us use hydrogen as essentially a fixed-loss battery. It's perfectly complementary to seasonally variable renewables like wind and solar. Batteries have too high of a loss through time for seasonal or multi-year storage. If you can store it (big if... Not everywhere has a salt dome like Delta, UT), hydrogen really is a great solution.
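
To make the "fixed-loss battery" point concrete, here's a toy comparison (all numbers are illustrative assumptions, not figures from any real project): batteries pay a compounding self-discharge cost per month of storage, while hydrogen pays a large one-time round-trip cost that doesn't grow with time.

    # Toy model: fraction of stored energy recovered after `months` in storage.
    # Loss rates are assumptions for illustration only.
    def battery_recovery(months, self_discharge_per_month=0.02, roundtrip=0.90):
        return roundtrip * (1 - self_discharge_per_month) ** months

    def hydrogen_recovery(months, roundtrip=0.40):
        # electrolysis + storage + fuel cell: lossy, but time-independent
        return roundtrip

    for m in (1, 6, 12, 24, 48):
        print(m, round(battery_recovery(m), 2), hydrogen_recovery(m))
    # Batteries win for short holds; at these assumed rates the lines cross
    # around the multi-year mark, which is exactly the seasonal/multi-year
    # niche described above.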


> Pyrolysis is a less energy intensive way to produce hydrogen, and does deserve more attention. But it still requires methane as a feedstock.

So why is methane as feedstock a problem?

Isn't it better to spend less energy converting a ubiquitous but environmentally harmful gas into hydrogen along with useful materials, than to spend 4x more energy converting a critical resource -- fresh water -- into hydrogen without any valuable by-products?


Water is critical but not hard to get. The energy and cost required to take a cubic meter of dirty water and turn it into pure water is a rounding error compared to the energy required to electrolyze it.
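
A quick back-of-envelope check on the "rounding error" claim, using round numbers (the 50 kWh/kg electrolysis figure comes up downthread; the desalination figure is a commonly cited ballpark):

    # Hydrogen is ~11% of water by mass (2 x 1.008 / 18.015)
    h2_kg_per_m3_water = 1000 * 2.016 / 18.015     # ~112 kg H2 per m3 of water

    electrolysis_kwh = h2_kg_per_m3_water * 50     # ~5,600 kWh to split it all
    desalination_kwh = 4                           # seawater RO is roughly 3-4 kWh/m3

    print(desalination_kwh / electrolysis_kwh)     # ~0.0007 -- a rounding error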

Yes, methane is an environmental problem; even small methane leakages have a large GHG impact. But the best way to deal with that environmental problem is to not pull it out of the ground in the first place.

Plus for pyrolysis, you have to deal with the carbon which makes up 75% of the methane by weight. A non-trivial issue.


Except we already pull it out of the ground, and people are heavily invested in that process. Working with what we have is the best option here: far easier to enthusiastically go after methane leaks when the industry is otherwise being told "we will buy a lot of your product forever."

Which is really the stakes here: if you can "burn" fossil fuels without putting GHG in the air...there's no reason to stop using them at all. In fact we should vastly expand their use.


Why would they go after methane leaks if they know people will buy their products?

A lot of the methane leaks are not “leaks” but intentional releases to “protect” equipment or to simply get rid of it. Until there are fines on the pollution it won’t stop.


> Water is critical but not hard to get.

Right. https://en.wikipedia.org/wiki/Water_scarcity

You would want to use solar power for electrolysis. In the US, the regions with abundant solar power are also the ones that:
- have true water scarcity (Nevada and Arizona)
- have low population and industrial density, so any generated hydrogen would need to be transported to the point of use.

The bigger problem is the energy disparity. Electrolysis of water requires 50 kWh/kgH2 or more. Even a 70% efficient fuel cell would get ~25 kWh/kgH2 -- horrible roundtrip efficiency. With pyrolysis, that equation is exactly inverted: at 9-12 kWh/kgH2, you can generate excess electricity with no CO2 emissions.
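
As a sketch of that asymmetry (using the figures above plus hydrogen's ~33 kWh/kg lower heating value; the ~25 kWh/kg figure above presumably uses the higher heating value):

    LHV_H2 = 33.3                        # kWh of chemical energy per kg H2 (LHV)
    fuel_cell_out = 0.70 * LHV_H2        # ~23 kWh of electricity back per kg

    electrolysis_in = 50.0               # kWh of electricity per kg H2
    print(fuel_cell_out / electrolysis_in)   # ~0.47 round trip -- a net loss

    pyrolysis_in = 10.5                  # kWh per kg H2 (midpoint of 9-12)
    print(fuel_cell_out / pyrolysis_in)      # ~2.2 -- net electricity positive,
                                             # with the balance coming from the
                                             # chemical energy of the methane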

> Plus for pyrolysis, you have to deal with the carbon which makes up 75% of the methane by weight. A non-trivial issue.

Exactly. 20 kg of methane costs $3 today, but contains 15 kg of carbon that could be worth $20-$30. It's a non-trivial issue if you hate generating value.

https://www.chemanalyst.com/Pricing-data/carbon-black-42


> In the US, regions

First of all the US isn’t the whole world.

Like you said transportation is a problem which is why you would produce it close to where it’s needed (say Nebraska). You don’t need an “ideal” solar output location.

Yes I am well aware of the energy difference.

> Exactly. 20 kg of methane costs $3 today, but contains 15 kg of carbon that could be worth $20-$30. It's a non-trivial issue if you hate generating value

If carbon free hydrogen is going to be worth doing at scale it will be because there is a price on the carbon. So the input methane will go up in price.

As for the output, global demand for carbon black is currently ~14 million metric tonnes a year [0].

Current hydrogen demand is ~100 million metric tonnes a year [1].

100 Mt of hydrogen needs ~400 Mt of methane and produces ~300 Mt of carbon.

300 Mt vs 14 Mt of current demand. What do you suppose will happen to that carbon black price when you produce even a fraction of total hydrogen demand through pyrolysis?

It’s non-trivial because you’re going to have to create reverse coal mines to store all that shit.

[0]: https://www.chemanalyst.com/industry-report/carbon-black-mar...

[1]: https://www.iea.org/reports/global-hydrogen-review-2025/dema...
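
The 400 Mt / 300 Mt figures fall straight out of the stoichiometry (CH4 -> C + 2 H2), if you want to check them:

    M_CH4, M_C, M_H2 = 16.04, 12.01, 2.016     # g/mol

    h2_mt = 100                                # ~current global demand, Mt/yr
    ch4_mt = h2_mt * M_CH4 / (2 * M_H2)        # ~398 Mt methane in
    c_mt = h2_mt * M_C / (2 * M_H2)            # ~298 Mt solid carbon out

    print(ch4_mt, c_mt, c_mt / 14)             # ~21x current carbon black demand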


A lot of those countries with water scarcity are oil rich. A lot of those countries that don't have water scarcity are oil poor.

It seems one step forward would be for countries that have an abundant source of alternative fuel to go for it and stop importing so much oil. Countries that don't have much water can import alternative energy sources or keep using the oil that they're rich in.


I tend to be a fan of methane for its high hydrogen content per unit carbon as well as how much easier it is to store than hydrogen. However the argument against methane that I do find convincing is that the infrastructure for transporting and distributing methane leaks a lot. The argument is most compelling against residential distribution, where maintenance is harder to justify, but large leaks regularly occur, and that is very bad for greenhouse emissions.

I’ve always been curious about generating methane in industrial composting or from landfills and using it onsite for hydrogen generation. Not sure if the generating capacity is enough, though; there is probably a reason it isn’t being done.


Methane is not abundant, as such. There are specific sources of it, mainly through manual agricultural processes, or in natural systems. Natural gas is mostly methane, I guess.


> So why is methane as feedstock a problem?

There is inevitably leakage, and even a small fraction of leakage negates any global warming advantage on relevant timescales.


Methane in the form of natural gas is piped all over almost every city in North America, at least those areas where people need to heat their homes in the winter.

Any leakage from a pyrolysis plant is going to be negligible compared to what's undoubtedly already leaking from gas infrastructure installed in the 1950s (or earlier), as well as the continual accidental leaks caused by excavating.


Yes, leakage of methane for direct use is also a problem. Especially problematic is leakage upstream, near the wellheads.


An under-considered side effect of adding renewables to the grid is that electricity prices occasionally approach zero during times of overproduction. No reason not to use that energy for electrolysis; it's going to be wasted otherwise.


The problem with electrolysis is the high capital cost. As long as, or where, the price of electricity is near zero only some of the time, it is too expensive. (With batteries and more photovoltaics this might change a bit.)
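
A minimal sketch of why utilization dominates, with made-up but plausible-order numbers (the capex, capital recovery factor, and power price are all assumptions):

    CAPEX_PER_KW = 1500          # $/kW of electrolyzer (assumed)
    CRF = 0.10                   # crude annual capital recovery factor (assumed)
    KWH_PER_KG = 50              # electricity per kg H2

    def cost_per_kg(cheap_hours_per_year, power_price=0.01):  # $/kWh
        kg_per_kw_year = cheap_hours_per_year / KWH_PER_KG
        capital = CAPEX_PER_KW * CRF / kg_per_kw_year
        return capital + KWH_PER_KG * power_price

    for hours in (500, 2000, 8000):
        print(hours, round(cost_per_kg(hours), 2))
    # 500 h/yr of near-free power -> ~$15.50/kg (capital swamps everything)
    # 8000 h/yr                   -> ~$1.44/kg
    # Cheap electricity only helps if it's cheap most of the time.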


A major limitation is that most different rock types look essentially identical in visual+NIR spectral ranges. Things separate once you get out to SWIR bands. Sentinel-2 does have some SWIR bands and it may work reasonably well with embeddings. But a lot of the signal the embeddings are going to be focused on encoding may not be the right features to distinguish rock types. Methods focused specifically on the SWIR range are more likely to work reliably. E.g. simple band ratios of SWIR bands may give a cleaner signal than general purpose embeddings in this case.
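
As a sketch of the band-ratio idea (this assumes rasterio and a Sentinel-2 L2A scene already resampled to a common grid; the specific bands and file names are illustrative, not a recipe):

    import numpy as np
    import rasterio

    # Sentinel-2 SWIR bands: B11 (~1610 nm) and B12 (~2190 nm).
    # Hypothetical local file paths.
    with rasterio.open("B11_20m.tif") as src:
        swir1 = src.read(1).astype("float32")
    with rasterio.open("B12_20m.tif") as src:
        swir2 = src.read(1).astype("float32")

    # Many clay/carbonate alteration minerals have absorption features near
    # 2200 nm, so a high B11/B12 ratio can separate them even where the
    # visible/NIR bands look identical.
    ratio = np.divide(swir1, swir2, out=np.full_like(swir1, np.nan),
                      where=swir2 > 0)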

Hyperspectral in the SWIR range is what you really want for this, but that's a whole different ball game.


> Hyperspectral in the SWIR range is what you really want for this, but that's a whole different ball game.

Are there any hyperspectral surveys with UAVs etc instead of satellites?


Usually airplanes because the instruments are heavy. But yeah, that's the most common case. Hyperspectral sats are much rarer than aerial hyperspectral.


An interesting 30x30m satellite that launched recently and started returning data last year is EnMAP: https://www.enmap.org. Hooking that up to TESSERA is on our todo list as soon as we can get our mittens on the satellite data.


As someone who's recently been hiring (sorry folks, position was filled just a few days ago), it's wild to me how distorted things have become.

We had 1200 applications for an extremely niche role. A huge amount were clearly faked resumes that far too closely matched the job description to be realistic. Another huge portion were just unqualified.

The irony is that there actually _are_ a ton of exceptionally qualified candidates right now due to the various layoffs at government labs. We actually _do_ want folks with an academic research background. I am quite certain that the applicant pool contained a lot of those folks and others that we really wanted to interview.

However, in practice, we couldn't find folks we didn't already know because various keyword-focused searches and AI filtering tend to filter out the most qualified candidates. We got a ton of spam applications, so we couldn't manually filter. The filtering HR does doesn't help. All of the various attempts to meaningfully review the full candidate pool in the time we had just failed. (Edit: "Just failed" is a bit unfair. There was a lot of effort put in and some good folks found that way, but certainly not every resume was actually reviewed.)

What finally happened is that we mostly interviewed the candidates we knew about through other channels. E.g. folks who had applied before and e-mailed one of us they were applying again. Former co-workers from other companies. Folks we knew through professional networks. That was a great pool of applicants, but I am certain we missed a ton of exceptional folks whose applications no actual person even saw.

The process is so broken right now that we're 100% back to nepotism. If you don't already know someone working at the company, your resume will probably never be seen.

I really feel hiring is in a much worse state than it was about 5 years ago. I don't know how to fix it. We're just back to what it was 20+ years ago. It's 100% who you know.


> The process is so broken right now that we're 100% back to nepotism

Just want to comment on this, because I think favoring unknown candidates is a mistake we make too often, and in fact the "normal" process is a disaster on both sides for this reason. Nepotism or cronyism is granting resources, patronage, or jobs to someone you know instead of a qualified candidate. In many industries this is how they function because qualifications and skill provide little to no differentiation (think knowing Microsoft Word and having a comms degree with no work experience).

In high skill industries where experience is hard fought... people know who the "people" are because they stick out like sore thumbs. If your hiring process at work is to throw up a job on Indeed and see what resumes come through, your company likely isn't worth working at anyway because the best candidates aren't randos.

Think of it this way if you were putting together the Manhattan project again would you recruit the people with a stellar reputation in physics, engineering, manufacturing, etc OR would you throw up a job on a job board or your corporate site and see what comes back? The difference is active vs passive, good reputation vs no reputation (or a bad reputation).

Not trying to make a big semantic argument... I just want to say that things like reputation and network matter... and that's not really "nepotism".


I think you’re just arguing for nepotism in a roundabout way.

My senior staff engineer can’t code at all. He got hired because he was friends with our engineering manager. You might say “well that’s nepotism then since he’s under qualified”, but I’m sure he would make the argument that he got the job because of his “stellar reputation and extensive network”.

It’s an abhorrent situation to be in. Everyone knows he can’t code but because he got hired at such a senior level he’s making high level decisions that make no sense. Give me a qualified rando any time of the day.


I agree, some of the worst employees I've seen were hired that way.

I haven't hired anyone recently, but between 10 and 20 years ago I did hire a lot. Of course we reached out via our network of connections, but that gets tapped out fast, so you have to rely on job postings. It was always hundreds of applicants per opening. Back then it wasn't 1000s, but it might as well have been because I didn't have enough time to sift through them all. That's ok, you can just approach it like "the dowry problem" (also known as the secretary problem [1]).

But the job market and hiring is way worse now, and it's pretty horrible for job seekers atm.

[1] https://en.wikipedia.org/wiki/Secretary_problem
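
For anyone curious, the classic 1/e stopping rule from [1] is tiny to implement: skim the first n/e applicants without committing, then take the first one better than everything seen so far. A sketch:

    import math, random

    def secretary(scores):
        n = len(scores)
        k = int(n / math.e)                      # observation phase
        best_seen = max(scores[:k], default=float("-inf"))
        for s in scores[k:]:
            if s > best_seen:
                return s
        return scores[-1]                        # forced to take the last one

    scores = [random.random() for _ in range(1200)]
    print(secretary(scores) == max(scores))      # True ~37% of the time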


This situation is very weird to me. In my experience, referrals got your foot in the door, but you still always had to pass the same hiring screen/interview process as everyone else.

I recommended an engineer once who I thought was great - he was a total "get shit done" kind of guy. But he did poorly in the interviews (I won't say they were leetcode-type problems, but you did have to have some algorithmic skills - I warned him beforehand to brush up on some of those types of problems.) As much as I liked working with the guy, we couldn't hire him because he was a pretty solid "no" from the other interviewers.

I've never worked in a company that hired people based on the referral of one person, and honestly that sounds like a pretty f'd up company.


Yes we have the same interview process. This is something that no one knows about as I don’t know who interviewed him but my cynical guess is that if someone has enough power in the org and happens to sit on the hiring committee, it doesn’t really matter how well they do.


In another much better incident, we once hired someone that did poorly in some interviews because I happened to present a project they made the week before. He turned out great, and he was doing some really cool stuff, just didn’t do coding interviews well.


> My senior staff engineer can’t code at all. He got hired because he was friends with our engineering manager.

Well, that's how it works everywhere. You have to suck up and pretend to be 'friends' with the person with the power to get promoted too.


You don't have to pretend. You didn't even have to be friends. You can even be mortal enemies with the powerful person.

Faking it is pathetic behavior.


I think you're projecting your negative past experiences and trying very hard not to understand the GP's point.

It doesn't matter what the person hired thinks. The important part is whether those making hiring decisions are hiring people with "stellar reputation".

In your case, "everyone knows he can't code", so he doesn't have stellar reputation. If we apply this scenario to what the GP said, no company would have hired a person where "everyone knows he can't code".

You said "He got hired because he was friends with our engineering manager." That's nepotism.

GP says hire somebody with stellar reputation. That's a totally different situation.


I understand just fine. There is no objective descriptor of a person. The engineering manager probably thinks he’s a perfect candidate.

> In your case, "everyone knows he can't code", so he doesn't have stellar reputation

Yes, that’s what we figured out after he got hired. He obviously didn’t have a reputation within our org before he got hired. All we had to go off was the engineering manager’s opinion.

Are you guys really shocked that given the freedom to, people would rather hire their friends and people they know would do them favours rather than the “objectively best candidate for the job”?

By overweighting network and reputation all you are doing is turning every career into a political game.


> > In your case, "everyone knows he can't code", so he doesn't have stellar reputation

> Yes that’s what we figured out after he got hired. He obviously didn’t have a reputation within our org before he got hired. All we had to go off was the engineering managers opinion.

Right, *he doesn't have stellar reputation*, and he got hired. The comment you replied to said "hire people with stellar reputation". I'm still not sure what you're missing here or why you think this is an applicable scenario.

> Are you guys really shocked that given the freedom to, people would rather hire their friends and people they know would do them favours rather than the “objectively best candidate for the job”?

I wouldn't be shocked, but I also don't think that's what the "GP" advocated for. You might say this would lead to people using it as an excuse for nepotism, but if the engineering manager is the kind of person who has poor or malicious judgment and can't make a correct hiring decision by himself, then you're cooked no matter what.


> I just want to say that things like reputation and network matter... and thats not really "nepotism"

I strongly agree with this, and I'm glad you put it so clearly. If you've been in your industry say 10 years or more, you should have built a reputation by that point that makes people say "I want to work with that person again, or I'd recommend that person to a friend who has a job opening". (Important thing to clarify, though, I'm not denigrating anyone who has been out of work a long time. I've seen many categories of jobs in the tech industry where there are simply a lot fewer jobs to go around - it's musical chairs and a lot of chairs got taken away all at once).

I would put in an important caveat, though, and that's for people who are early in their careers. The hiring process really is truly shitty for people just entering the workforce and for people with only one or two jobs under their belt.


building that reputation is harder than it sounds. you don't always work in positions where you have contact with other people who could build up your reputation. i was a contractor for a small company for 10 years. i had little contact with the employees in that company, only working with the boss. the boss was nice but he had no useful contacts in the industry. the employees that i did have contact with were too low in the hierarchy to provide meaningful connections even after they changed jobs.

how am i supposed to build up a reputation with that?


Yeah, I worked at a tiny company (<10 employees total) for five years, then was in a small company with a tiny dev team (3 engineers and 1 PM at peak) for almost five years. Now I've been at a government contractor for five years, so I have a ton of contacts now... but they tend to remain in the civic tech space, and I'd like to move into research. Where I know absolutely nobody. I feel ya.


> As someone who's recently been hiring (sorry folks, position was filled just a few days ago), it's wild to me how distorted things have become.

Same here. I have been hiring and it is a shit show. We advertise one position and get inundated with resumes. Many of these resumes are complete fabrications, so we cannot rely on them at all. So we implemented a filter by asking candidates to do a small project. Candidates do not have to hand-code it. We encourage candidates to just use AI for the simple project. Only about 10% actually do the required work, which typically takes 15-20 minutes to complete with AI assistance. Some get offended that we even dared ask them to take the assessment test and start using profanity to let their displeasure be known. Quite strange.


When you're applying to hundreds of positions, 99% of which will auto-reject you, it can be quite annoying to be asked to do extra work before you've gotten any further in the process.


Like a captcha for resumes.


You'd think that many sites would already be using a captcha before accepting an application


The number of fake resumes is insane. During reviews I ended up passing a number of fake profiles through because their CVs looked real. None of them showed up to the initial screening call.

There are now AI CVs mimicking real people, so the CVs point to real Linkedin profiles, Github profiles.

Not sure what their end game is unless it's to continually test CV creation or find woefully inept companies that will hire them with limited vetting.


> I ended up passing a number of fake profiles through because their CVs looked real. None of them showed up to the initial screening call.

That's just crazy. Probably those were for collecting data to analyze what makes a CV pass. Mass apply everywhere, combine the results, and analyze them manually or using LLMs. Selling that data could be profitable.


It's also possible that they got a job elsewhere, and didn't follow up.


Yeah, That would be more reasonable :)


> Not sure what their end game is unless it's to continually test CV creation or find woefully inept companies that will hire them with limited vetting.

I wonder about (and didn't immediately find) case-studies that lay out the strategy of Resume Of Total Lies Dude, their expected payout before they get fired, etc.


This is probably crazy talk, but I have been wondering how requiring people to slap a stamp on an envelope and mail in a résumé would go.


I don’t think it is crazy, and I have suggested before that there needs to be some sort of proof of work on the candidate side to prevent resume spam.

I think your idea is very elegant as everyone has access to the mail system, an actual stamp is pretty cheap, but it is just enough hassle to mail an application that it will filter out some of the spam.

The other suggestion I have had is that candidates need to hand in the resume in person, but I guess you could accept resumes from both mail and in person drop offs.


> The other suggestion I have had is that candidates need to hand in the resume in person

This might be a bigger lift than asking for a take-home project; if I'm expected to drop off a resume in Manhattan that's a minimum of a two hour trip for me. I'd rather spend two hours banging together a CRUD app to show that I can actually write code.


I read this as applying to in-office roles. If I were willing to commute, then it's a good chance to exercise the lift required.


> but it is just enough hassle to mail an application that it will filter out some of the spam.

More than that, bots currently have no way of sending snail-mail :-)


The only time I had to hire somebody, the university I was working for in Switzerland made it mandatory for the candidates to send their application via mail, not email. That was back in 2014. I found this odd at the time, but I'm pretty sure it made my job way easier (fewer applications to review, motivated/serious candidates, etc.).


I wonder if we are back to “who you know” because of a couple of factors:

1. The risk of a bad hire is great, and this de-risks that

2. It facilitates more natural and spontaneous conversations, which for better or worse short-circuits a well-crafted and pre-planned anti-bias interview process that can be too rigid for both parties to explore details


I must be missing something. 1200 real applications are hard to sort through. 1200 mostly fake applications are much easier. Hiring is a high-leverage activity, and it's absolutely worth spending a couple hours going through those by hand.


For 1200 applications, a couple hours translates to less than 10 seconds per application. In the age of LLMs, why do you think you'd be able to discern whether an application was fake in 10 seconds? Remember, it's not "obviously fake", it's "designed to con you" fake.


^ This. Exactly. Low value comment on my part, but as the OP in this case, I feel the need to say that this is the exact issue.

The "smell test" takes longer than you think and often involves an actual interview.


How about 50 per hour? That's just 24 hours of work and a reasonable first look at one minute for each application. Very little time to spend for an important company decision which can cost hundreds of thousands of dollars.


That's a half week of work without breaks at one resume per minute-ish.

I agree that it is a very important decision, but that's also unreasonable for a manager to set time aside to look through. You've just set the other projects that you're already behind on (that's why you need to hire in the first place) back another half week or so.

It's like a reverse rocket equation here. You need time to make more time, so you take time, but that time needs time, so ...

The cost isn't really borne by the hiring manager though, it's just their budget (that they argued for) that they need to spend down. The decision makers really don't care that much about the numbers, just that they don't go over.


How can that be unreasonable? To work at least half a week on a decision which will cost the company hundreds of thousands of dollars, or even millions of dollars in the long run. What else is the manager doing which is more important monetarily? And if managers are really too busy raking in the millions for the company, then it's a fine time to hire somebody whose only job will be to hire more people (not an HR person, of course).


The manager is responsible for $X but only gets paid their salary.

In their day-to-day, hiring is a pain. They need the extra hands, but they have to go through more work to get that person onboard. The activation energy is high, higher now with AI and automated job applications clogging things up.

Then you have onboarding and the continued costs of management of that person. Honestly, most managers would want the smallest team possible in terms of day to day workload.

This is also why AI is appealing. The promise of no sick days, no HR complaints, no chit chat. Just pure work done in plain language. Work done overnight, right, the first time. A middle manager's dream worker.

The thing that is more important is the budget. It's always the budget. Nothing matters but the budget. That's the second iron law of bureaucracy, of course.


As I see it, hiring people is the most important part of running any business - by a large margin. And if you have a lot of employees, then hiring people who are good at hiring becomes your highest priority.

> The manager is responsible for $X but only gets paid their salary.

That's why somebody higher in rank makes sure the manager gets the time he needs to make the best hires. Somewhere up the line there is somebody who cares about the basics of running a business right.

> In their day-to-day, hiring is a pain.

Of course it's a pain, that's why it's a job and why people get paid for it.

> This is also why AI is appealing. The promise of no sick days, no HR complaints, no chit chat. Just pure work done in plain language. Work done overnight, right, the first time. A middle managers dream worker.

Okay, but that means the company instantly lost all customers and all income and went bankrupt. Because why in the world would a client hire your company to use an AI, when they can just use the AI themselves? And don't say that there needs to be a human who is specialized in using the AI, because then you're back at hiring and having employees again.


I'm not trying to be glib here, but I'm not entirely certain that you have worked for a long time in a large corporation, right?

If you have not, I would like to introduce you to one of the best pieces of writing on corporate workings that I have ever come across: The Gervais Principle.

https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

It is a very good lens, but a very cynical one, to look at the corporation.

In general, shows like The Office, though satire, are closer, I feel, to reality than what you are espousing here.

Not that I disagree with you at all. There should be people that are all about hiring. There should be managers that see their paychecks as adequate compensation. There should be consumers that are that reactive to internal staffing decisions.

But in my limited experience, the things that should be there, typically are not.


> https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

This should be required reading for everyone wanting to enter the corporate world. It rambles on a bit but even today, decades later, it's still spot on!


The AI-written cover letters are a dead giveaway and you can often spot them in less than 10 seconds easy, but it’s still a terrible slog.

If you don’t believe me, try clustering 1,000 cover letters.


I don't know which 1200 applications they saw, but IME they're a lot better at trying to con you than succeeding. LLMs aren't great for a lot of use cases (yet?), and this is one of those areas where reality doesn't match the dream:

1. ~10% of applications are over-tailored. Really? You did <hyper-specific thing> with <uber-specific details> exactly matching our job description at $BigCo 3 years before the language existed and 5 years before we pioneered it? The person might be qualified, but if they can't be arsed to write a resume that reflects _their_ experiences then I don't have enough evidence to move them forward in the interviewing process.

2. ~40% of applications have obvious, major inconsistencies -- the name on LinkedIn doesn't match the name on the resume, the LinkedIn link isn't real, the GitHub link isn't real, the last 3 major jobs on LinkedIn are different from the last 3 major jobs on the resume, etc. I don't require candidates to put those things on a resume, but if they do then I have a hard time imagining the candidate copy-pasting incorrectly being more likely than the LLM hallucinating a LinkedIn profile.

Those are quick scans, well under 4s each on average. We've used 80 minutes of our budget and are down to 600 applications. Of the remainder:

3. ~90% of remaining applications fail to meet basic qualifications. I don't know if they're LLM-generated or not, but a year of Python and SQL isn't going to cut it for a senior role doing low-level optimizations in a systems language. If there's a cover letter, a professional summary, mention of some side project, or if their GitHub exists and has anything in it other than ipynb files with titles indicating rudimentary data science then they still pass this filter. If they're fresh out of school then I also give them the benefit of the doubt and consider them for a junior role. Even with that leeway, 90% of those remaining applicants don't have a single thing in any of the submitted materials suggesting that they're qualified.

So...we're down to 60 applications. We spent another 40 minutes. In retrospect, that's already our full 2hr budget, so I did exaggerate the speediness a bit, but it's ballpark close. You can spend 2min fully reading and taking notes on each of the remaining applications, skimming the GitHub projects of anyone who bothered to post them, and still come out in 4hr for the lot.

It's probably worth noting, that isn't to say that <5% of programmers with that skillset are qualified. I imagine the culprit is spray-and-pray LLM spam not even bothering to generate a plausible resume or managing to search for matching jobs. If bad resumes hit 99 jobs for every 1 job a good resume hits, then you only expect a 1% success rate from the perspective of somebody reviewing applications.
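
(For the skeptical, here's the funnel above as plain arithmetic, using the rates and per-application times from the comment:)

    apps = 1200
    after_scan  = apps * (1 - 0.10 - 0.40)       # over-tailored + inconsistent -> 600
    after_quals = after_scan * (1 - 0.90)        # basic-qualifications pass    -> 60

    minutes = apps * 4 / 60 + after_scan * 4 / 60 + after_quals * 2
    print(after_quals, minutes)                  # 60 applications, ~240 minutes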


Your take is very sensible and I agree with it 100%, but the reality is that (by my assessment) it is absolutely not present in the wall of ATS filters one's job application is up against. I've sent hundreds of CVs/cover letters over the last ten months, none of them touched by an LLM. Most cover letters I manually tailored to re-frame in line with the job ad - where I cared a lot - and some I just made with my generic template - still manually - where I couldn't be bothered to care. Invariably I either received no response at all, or for the remaining 10% I received a generic rejection email, identically worded and styled in almost all cases.

Here it is, if you are curious:

"Thank you for your interest in the <position> position at <company> in <country>. Unfortunately, we will not be moving forward with your application, but we appreciate your time and interest in <company>."

The Resume I am sending out is just an evolution of one that worked very well for me for 25+ years. The roles, as far as I am able to see, are 80%-95% keyword match, with the non-matched keywords being exceedingly superficial. Yes, I haven't listed "blob storage", but guess what else I have used but haven't listed: "semicolon", "variable declaration" and "for-loops". Yet in this day and age one seems to be punished for not doing so.

I am very principled in not letting any AI anywhere close to my CV, because I think the usefulness of the signal it conveys rests solely on it being addressed to and read by a human, hence it has to be fully authored and tailored by a human too. But these days this idea has completely flipped. Desperate times call for desperate measures. Standing by principles could lead to literal dying. Personally, I made peace with dying, but I cannot allow my family to go homeless. As such, I don't see it as beneath me to go down the path of mass-blasting heavily over-tailored resumes. If it bumps my chances from 0.05% to 0.2%, that's a four-fold increase that may be the difference between, literally, life and death. The organic job search with my natural skills and authentic ways of presentation I relied on for twenty years is dead.


> Those are quick scans, well under 4s each on average. We've used 80 minutes of our budget and are down to 600 applications. Of the remainder:

4 seconds each? You are... fast.


It's clear your org is looking specifically at coders/programmers. That's very different from the "academic research" background that the OP suggested. It takes a different type of analysis and vetting.

And different types of jobs require skillsets that aren't adequately conveyed in a traditional resume.


> 1200 real applications are hard to sort through.

Probably not much yield in going through more than a few hundred.

Shuffle them around, start skimming through and throw out the rest once you realize you’re just seeing more of the same.

Pat yourself on the back and mutter “you need to be skilled and lucky to work here”


> Pat yourself on the back and mutter “you need to be skilled and lucky to work here”

It would be absolutely amazing if employers and recruiters finally were doing exactly this. We are in this dead end precisely because everyone is under the false illusion that their pool of candidates has a hidden gem outshining everybody else in existence, and they absolutely need to sift through the whole pool to find this gem. As a result, the pools are never exhausted and only ever growing, with more and more desperate people sucked into multiples of them.


Would you have found it reasonable for interested candidates to have reached out directly instead of just submitting a resume to the ATS? With the AI spam etc. it feels like the usefulness of these automated systems is quickly diminishing. Hiring feels broken right now.


In theory, sure, but in reality, please god no. 99% of LinkedIn messages you get as a hiring manager are “Hi I applied to your role”, “Hi I applied to your role and I’d be a great fit when can we talk?”, “Hi I’m really interested in learning about your work can we meet for coffee?”, or “Hi I’d be a great fit for your role because <insert enormous AI-written cover letter>.”


> "Would you have found it reasonable for interested candidates to have reached out directly..."

If that worked, someone would automate a way to bulk spam that too.


What does "reaching out directly" mean?


I assume that they mean sending either a direct Linkedin message or an email to the recruiter or hiring manager.

When I was recently unemployed I started doing that after months of getting ignored by most companies and, in my experience, the only difference is that I got far more acks ("Hi! Sure, I'll take a look at your resume and reach out!") but I got a similar rate of applications-to-interview compared to applying through the official platforms.


At this point, probably forming a line on the door of the hiring manager.


this means finding a way to directly reach out to the hiring manager. like sending an email, asking a colleague for an introduction, sending a linkedin message, etc


I haven't applied for a job since the 1990s so I'd be out of the loop, but what are the faked resumes trying to achieve? Just get in a role and get paid before being found out? Are they trying to find brief or lazy interviewing processes? Do they only target remote positions?


When the requirements of every single job are impossible, people will lie.

Several people have been recommending that candidates lie for IT-related jobs for a long time now, and honestly, I think the vast majority of positions have such a crazy set of requirements that they only get the liars.


Amusing Idea: Advertise three vaguely-similar positions, only one of them real. Specify impossible-for-honest-humans requirements for the fake two. Then discard all applicants for the real position who also claimed to be qualified for a fake one.


Strict honesty here has always been a losing proposition. The "requirements" section of a job posting has almost never been accurate. It's more of an image they're painting. An honest applicant is one who reads the whole description to understand as best they can what the company is looking for, and sort of holistically matches their own expertise against that picture.

If the job posting lists requirements A-F and you have A, B, D, E, and F, then you'd do both yourself and the company a disservice by disqualifying yourself. Put it in your cover letter if you can't handle the discrepancy.

I'm not going to address either the morality or advisability of being "dishonest" by this standard. I've just seen too many people sell themselves short, when in fact they are exactly what the company is looking for, it's just that the recruiter wasn't able to spell that out in the job description. And it's not necessarily because they were stupid either; if they only put the true minimum necessary criteria into a job post, then (1) they'll get flooded with underqualified candidates who don't even come close to what they need, and (2) they may very well miss out on good candidates because the job looks lame.

Source: I've been on both ends. As a candidate, I mentioned during the interview that I actually had no experience in the required technology X but I had related experience. The interviewer just laughed; it was obvious to both of us that it didn't matter. As someone offering a job (not the hiring manager but sort of), I talked to a couple of people who were hired into other roles in the company and asked why they didn't apply for our position, they seemed perfect for it (to me). Several of them pointed to some specific line item under the requirements that disqualified them. Sometimes it was an item that we'd removed later because we weren't getting enough people in, even though strictly speaking it was part of the job. We would sometimes push the recruiter to add "experience with X, or willing to learn X", but they would push back and honestly I'm not sure I know better than them. They were the ones who had to be the front line filtering through the noise resumes, after all.


I've seen a number of job posts that have a note near the end, encouraging people to apply even if they don't meet all the requirements.

There's also the job posts that distinguish between hard requirements and nice-to-haves, using various language (e.g., "bonus if you...").


Well, there are people who hate the idea of lying, and can't bring themselves to do it, even if it's applying for a job where they don't meet one of the requirements.

Most likely this isn't an attribute that most employers actually want, though.


My mother always put a really high emphasis on honesty. I wasn't the best kid, and certainly not the most honest (sorry, Mom), but I've always been absolutely forthright in résumés and cover letters. If I don't fully tick C, I mention it and share some quality or experience that I think might compensate. In my experience, it hasn't helped. I still do it, though, just because TBH I'm not 100% qualified for any of the jobs I apply for because I don't want to do the same shit I've already done for the rest of my career. I'm trying to grow, lol.


I have seen several companies lie about the requirements they posted.


> If the job posting lists requirements A-F and you have A, B, D, E, and F, then you'd do both yourself and the company a disservice by disqualifying yourself. Put it in your cover letter if you can't handle the discrepancy.

In my experience the problem is that the missing "C" is deep domain expertise outside of the technical end, and that's just so much more important than the other ones, and importantly, something you can't really just learn on your own.


Sure, that happens, but that's also pretty clear to the job seeker. Don't try to BS your way past that one.

More commonly, that list of requirements comes from the recruiter quizzing the developers on what they need, and they throw out a bunch of stuff that could describe a person they'd be interested in hiring. But there are many other people who would work too, and the developers are likely to come up with stuff that they're familiar with and end up describing someone much like them with maybe 1 additional skill -- which is actually backwards, because they already have that expertise in the aggregate and what they really need is what they don't already have, but that stuff is harder to think of and value and therefore suggest to the recruiter because, well, it's stuff they're unfamiliar with.

A good recruiter will push back and make them figure out which are actual requirements. But getting it right requires a good recruiter + good developers who will make the time to think it through + good company culture. Most job posts are not coming from such a fortunate place.

On the flip side, the recruiter is hearing from management that they want someone who is perfectly carved out to accomplish a single task X, preferably someone who has already accomplished task X at another company so they can get hired and immediately do X here as well. Sure, they'll also be another body to shut up the whiny developers talking about how they have too much to do, but the position is open because they've been asking for X for months and the developers keep saying they don't have enough bandwidth. So they describe what they want to the recruiter in painful specificity. If their conception of X requires technologies and tools A, B, and C, then their requirements list is something like "Minimum 10 years experience doing X. Expert in A. Expert in B. Expert in C. Must have a PhD from my school or a school I'm envious of."

Maybe I've just had some bad experiences, but this is why I don't take requirements lists too seriously. Sure, if it wants "experience in medical imaging" and you have nothing related, don't apply. But if it gives a laundry list of specific technologies, it's either developers looking for clones or managers looking for someone to do a specific project.


Similar idea: make a list of "required" and "optional but nice to have" skills for the position. Among the optional ones, include experience with a non-existent technology. Discard everyone who claims to have the experience.


I have no idea, but yes, I suspect remote positions are heavily targeted and folks are looking for lazy hiring processes.

But when the job description contains a lot of very general terms (e.g. "scientific computing") and every part of your job history just parrots a specific term used in the job description with no details, it doesn't pass the smell test.

I absolutely respect keyword-heavy job/project descriptions. You kind of have to do it to make it through filtering by most recruiters. But real descriptions are coherent and don't just parrot back terms in ways that make it clear you don't understand what they are. You find a way to make a coherent keyword soup that still actually describes what you did. That's great! But it's really obvious folks are misrepresenting things when a resume uses all the terms in the job description in ways that don't make sense.

I kinda think we've reached this weird warfare stage of folks submitting uniquely LLM-generated resumes for each position to combat the aggressive LLM-based filtering that recruiting is starting to use. I assume people think they can do well in an interview if they can just get past the automated filtering. I'm sure some are trying to do 3 and 4 remote jobs at once with little real responsibilities, too, but I find it hard to believe that's the majority. I may be very wrong there, though...
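
One crude way to quantify that "parroting" smell test (a sketch of the idea, not what any real filter does): measure how many multi-word phrases of the job description come back verbatim. Honest keyword matching scores modestly; regurgitation scores near 1.

    def ngrams(text, n=3):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def parrot_score(resume_text, job_description):
        jd = ngrams(job_description)
        return len(ngrams(resume_text) & jd) / max(len(jd), 1)

    # A score near 1.0 means whole phrases of the posting were copied back;
    # coherent resumes that merely share keywords score low.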


This was the problem recruiters were invented to fix, but somehow recruiters have moved on to fixing every problem BUT this one.


The problem today is AI makes everything worse. It's jamming communication channels, and what you get once those channels are saturated is the equivalent effect that you see in cellular networks culminating in RNA interference.

No binding sites, no matches.

Additionally, if competent people can't find work within 2 short years, they will leave that sector forever and retrain. They may have been rockstars, but that doesn't matter. No work, no food, bad investment. When you have coordinated layoffs across sectors you have a short period of time to scoop up the competent people.

It's not immediately clear, but it seems like you either skipped over the part of filtering properly (and didn't do it?), or just jumped to this other strategy when what worked before no longer worked.

Instead of trying to wrangle the data, why didn't you put a physical barrier at the very beginning? A simple validation: this is your CV, you are this person, you have a valid DL with that name, and then you whittle down from there.


Going through this right now. I hired someone about 8 months ago and the process was still pretty normal. But for the role I opened last week, we are getting a ton of AI-written resumes that are just a rewrite of the job description.

When I look many of these people up in linkedin, they often have jobs listed but completely empty descriptions of each job. I guess this is so they can have AI generate a rewrite of their resume based solely on the job description for every role they apply. This used to be too labor intensive to do, but now with AI it's easy to churn out a hundred of those a day.

(The more careless ones leave their actual job description on linkedin and submit a resume with a wildly different version, which just happens to be a rewrite of our job description. At least those are easy to filter out.)

While I don't like this, I'm finding that I need to find the person on linkedin; the account must not be recently created, it must have a reasonably detailed description of what they did at each company, and that must reasonably closely match the resume.


> clearly faked resumes that far too closely matched the job description to be realistic.

Then why have unrealistic expectations in the ad?


>A huge amount were clearly faked resumes that far too closely matched the job description to be realistic.

In government work programs in British Columbia, we were taught to address every point or requirement in a job listing that we could. Is this tactic clearly distinguishable from clearly faked?


Your use of nepotism is actually reputation.


On the hiring side too, and I really don’t understand the fake resume with AI trend. How can they possibly think they’ll pass the interview? Because when I’m hiring I find it very easy to spot someone lying when I question them on the details of past experiences. Maybe they are betting on a broken process? Maybe you can pass (dumb) HR filters with lies, but not real interviewers, at least from what I do and have seen.


It's because they're using AI for the interviews, too.


> Because when I’m hiring I find it very easy to spot someone lying when questioning to go into the details of past experiences

ain't nobody gonna get past the interview sheriff


Use hiring brokers. The good ones will vet the candidates and verify them.

Job seekers should also consider seeking representation from top tier brokers.


Any suggestion on good brokers?


Iceberg Slim?


It’s pretty much always who you know… at least to get a showing. It’s rare in history to find counter examples. And in a LLM fueled world it’s going to be more important.

Companies can improve by ensuring they don’t hire _because_ of whom someone knows. It should only ever let you get in the room to interview.

So practical advice of what to do: be human. Get to know people. Care. Your time to do this is not when you’re looking for a job, but when you’re in a job.


I've never gotten a job from someone I know. I've heard it my whole life, but I've always gone in solo to a number of jobs big and small. In fact, I personally find it kind of not respectable in some weird way (leaning on others for something I naively still hold onto as a merit-based system. People that break this value break what makes the system good), but I'm obviously biased from having always gone into an interview knowing only myself and what I know.


This is my anecdotal experience too. There's a (non-sequential) human thread that connects all my work experience. Ironically the exception was my very first development job, which was a blind application.


Vetted resumes seem like a real solution here, the issue is incentives.

One possibility for a free and impartial service would be government funding. Unemployment insurance is paying out a few hundred per week per person; cutting that time down even a little could pay for a decent background check. That doesn't get you a job-specific resume, but it should be good enough for an initial screening for most jobs.


>> clearly faked resumes that far too closely matched the job description to be realistic

Can you elaborate on why you consider a close match to the job description to be unrealistic?


The fact that even well-meaning hiring managers can't see great candidates because of filtering overload says a lot about how dysfunctional the current system is.


My previous job (a somewhat well-known brand) got >500 resumes within hours for a mid-level position. My manager decided to close that job posting and found someone internally.


Been a couple of years since I last was an interviewer, but I’m always amazed at people who blatantly exaggerate in-depth experience while seeking a highly technical position.

Job Requirements: Senior Staff, Deep technical work in X, Y, Z

Resume: 10 years as tech lead in X, Y, Z

Reality: Once walked near someone with experience in X, Y, Z and heard them sneeze loudly. Can spell X correctly.

Why do they even bother?


Usually you really don’t need that much experience. Only a few percent of jobs need very specialized folks, regardless of the description.


> Why do they even bother?

Because the job requirements on the position are about as likely to be real as the applicant's accomplishments on their resume.

At every company I’ve done hiring at, my job descriptions for positions on my team were edited by my boss or HR and read like they were 1-2 levels above the nominal title of the position, or had shit like the well-worn joke of asking for X years of experience in a technology that hadn't existed for that long.

The entire hiring market for tech at least has devolved into almost 100% noise over the last few years


[flagged]


Hiring doesn't work like that. It's not like you glance at resumes then hire someone because what the paper says matches your job description. You spend a lot of time, if you're doing it right. Some resumes have everything you want, but aren't honest. Some resumes don't have everything, but they're pretty close, and worth the conversation. Some people seem perfect on paper but once you talk with them you realize (for whatever reason) that they don't fit. Even just a few applicants can take many hours of work before you can pick the one that fits best what you're looking for.

If you're a team of 5, handling 1,200 resumes, how much money are you expected to invest in this process? Does everyone take a week off billable work so you can find someone? Can you afford that? With only a team of 5, probably not.

We all want to feel like we're being treated well, but scolding someone because they were overwhelmed by the massive amount of adversarial spam they received for their job posting is a failure to put yourself in their shoes. Let's all be better people, here.


You and I both know the truth is in between both of our responses. It was worth discussing.


More likely, the date attribution in the imagery is incorrect.

As someone who works in that exact field (literally - I produce seamless mosaicked global maps and work for a satellite company), I can assure you that we don't and can't do this (generate "fake" imagery). Depending on the country a satellite is licensed through, some areas may be lower-res (e.g. sats licensed via Canada can't provide imagery of active conflict zones above a certain resolution). The US govt can, in principle, demand we stop imaging with US-licensed satellites (though they never have, so far as I know). A lot of regulatory details vary based on 1) the country your satellites are licensed in, 2) the country your company is based in, and 3) where you're selling data.

However, none of the imagery is "fake". Our imagery sometimes feeds into Google Maps, and I don't know Google's exact processing chain, so I can't rule out them doing something like that. That said, I'd be absolutely shocked if they were, for a lot of different reasons.

It's _way_ more likely that the tile metadata is incorrectly indicating 2025. E.g. they're using 3rd-party data that doesn't have detailed scene provenance for that area and are just showing "2025" in the absence of other information.

More interesting are the things China and some other countries do around datums. If you process things correctly, your satellite data won't align with their official street/infrastructure maps. Instead it will be randomly and smoothly shifted in different directions across the country. That's to make on-the-ground targeting based on official maps much more difficult. E.g. you can't reliably take one of the official maps and go "point the artillery at an azimuth of 321.5 degrees and target a location 4567 meters away". However, it makes things really tough when you're trying to provide a correctly processed "backdrop" mosaic to Chinese customers. (IIRC, this problem has faded due to the ubiquity of OSM data, or regulations have changed in recent years. Still, China in particular has a lot of interesting regulations around map accuracy.)
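
For the curious, the shift (the GCJ-02 datum) has been reverse-engineered, and approximations of it are all over GitHub under names like "eviltransform". Here's a minimal sketch of the forward WGS-84 -> GCJ-02 shift using the widely circulated constants (this is the community reverse-engineering, not an official spec, and it's only accurate to roughly a meter or two):

    import math

    # Krasovsky 1940 ellipsoid parameters used by the published approximation
    A = 6378245.0
    EE = 0.00669342162296594323

    def _dlat(x, y):
        # Empirical, reverse-engineered polynomial + sinusoid offset terms
        ret = (-100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y
               + 0.2 * math.sqrt(abs(x)))
        ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
        ret += (20.0 * math.sin(y * math.pi) + 40.0 * math.sin(y / 3.0 * math.pi)) * 2.0 / 3.0
        ret += (160.0 * math.sin(y / 12.0 * math.pi) + 320.0 * math.sin(y * math.pi / 30.0)) * 2.0 / 3.0
        return ret

    def _dlon(x, y):
        ret = (300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y
               + 0.1 * math.sqrt(abs(x)))
        ret += (20.0 * math.sin(6.0 * x * math.pi) + 20.0 * math.sin(2.0 * x * math.pi)) * 2.0 / 3.0
        ret += (20.0 * math.sin(x * math.pi) + 40.0 * math.sin(x / 3.0 * math.pi)) * 2.0 / 3.0
        ret += (150.0 * math.sin(x / 12.0 * math.pi) + 300.0 * math.sin(x / 30.0 * math.pi)) * 2.0 / 3.0
        return ret

    def wgs84_to_gcj02(lat, lon):
        # Real implementations apply this only inside mainland China.
        dlat = _dlat(lon - 105.0, lat - 35.0)
        dlon = _dlon(lon - 105.0, lat - 35.0)
        rlat = lat / 180.0 * math.pi
        magic = 1 - EE * math.sin(rlat) ** 2
        sqrtmagic = math.sqrt(magic)
        # Convert the raw offsets to degrees on the ellipsoid
        dlat = (dlat * 180.0) / ((A * (1 - EE)) / (magic * sqrtmagic) * math.pi)
        dlon = (dlon * 180.0) / (A / sqrtmagic * math.cos(rlat) * math.pi)
        return lat + dlat, lon + dlon

Note how smooth the offset field is - a few hundred meters that drifts gradually across the country - which is exactly why correctly georeferenced imagery and official street maps appear to slide past each other.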


I don't think the sat photography companies are providing false or manipulated data.

I would believe that Google and other "free" sites could be under orders to edit tile data by federal mandate.

A colleague of mine back in the early 2000s put gas/water/sewer/electric maps on a GIS. All from public sources. And within a few weeks, the feds caught wind of this and classified his combined maps.

That's why I suspect editing around gas pumping stations. And to be fair, they're ill-defended targets that could cause a massive chemical and pollutant spill if they were targeted (like the MAGAs shooting substations). And there's obvious national security aspects with shutting down energy grid operations.

Now yeah, there is the Chinese datum problem. But again, non-Chinese sat companies map in accordance with the rules of whichever government they're licensed and operating under.


> All from public sources.

Relevant PG&E: https://experience.arcgis.com/experience/641b21b495f049c6958...

> (like the MAGAs shooting substations)

There's no need to partisanize this. There's a very famous 2013 one from right here in the Bay Area that AFAIK is still unsolved: https://en.wikipedia.org/wiki/Metcalf_sniper_attack

> And there's obvious national security aspects with shutting down energy grid operations.

This reminds me of visiting the Moffat Tunnel and being surprised by the heavy security labeled Department of Homeland Security, of all things. Then I realized there's a giant pipe running alongside the tracks that carries the water supply for the entirety of Denver lol


DHS has a significant presence at many transportation infrastructure choke points as well as energy/resource infrastructure. It makes sense... an undefended tunnel or bridge would be easy to disrupt and could cause chaos. Note also the ~6+ (NYPD, NY State police, NJ State Police, MTAPD, PANYNJPD, NY National Guard, US Coast Guard, probably ICE since the ramp-up) law enforcement/defense organizations with a presence around the Hudson River crossings.


> NYPD

I always think about this when I watch the WKUK "It's illegal to say…" sketch, because "under the Brooklyn Bridge" is practically adjacent to One Police Plaza https://www.youtube.com/watch?v=QEQOvyGbBtY


I've been to the Moffat Tunnel many times and never realized there was a water project associated with the better-known rail tunnel there. FWIW, I've also never seen any sort of security there, aside from a lone Gilpin County sheriff's deputy who lives out along that road and makes it his life's mission to ticket any vehicle parked illegally on the county road.


https://www.denverwater.org/tap/tunnel-next-tunnel-no-one-kn...

Here's my pic of the east and west portals (respectively) from the last time I visited in 2022: https://i.imgur.com/sH0RNyg.jpeg https://i.imgur.com/fGdzV4G.jpeg

The big black fence around the east portal didn't used to be there, and you can see the DHS surveillance gantry right over the tracks. You can't get anywhere close enough to even read the plaque/dedication embedded on the right side of the portal. Hope you have a good telephoto lens!

On the west side, the DHS cameras are back to the left out of frame of the shot, at the very eastern edge of the Winter Park station platform. The water system is also much more visible on the west side than on the east side: https://earth.google.com/web/search/Winter+Park,+CO/@39.8871...

The east side is much less important-looking because it just becomes South Boulder Creek: https://earth.google.com/web/search/Winter+Park,+CO/@39.9022...

And here's one of my favorite hobo channels taking a ride through it westbound :) https://youtu.be/xNvnHAkm5dk?t=2715 and west portal emergence with the water pipe clearly in view: https://youtu.be/xNvnHAkm5dk?t=2787


Not sure about Google, but on Bing Maps they were definitely messing around with fake images.

Some of the airbases showed fields where the actual jet bunkers are, and if you zoomed out you could see it was just a copy/paste of a nearby field. Total fakery.

They have since stopped doing that, probably because there's no point with the amount of imagery available today.

And yeah, countries like China messing with their map datum is weird. And so easy to compensate for that it serves no military purpose.


> And yeah, countries like China messing with their map datum is weird. And so easy to compensate for that it serves no military purpose.

They have bought into mangled maps, and it would be a challenge to update everything to simple lat/lon. It would be easier to get every car in China to drive on the left, to change their railway gauge to 7' 1/4", or to move to the Swatch Internet Time standard. Think of all the title deeds, utility maps and everything that you need surveyors for.

As for military purpose, have you ever done any work with the military? Even though every army plays a good ballistics game, they tend not to be mathematicians. I would not want it to be tested, but my hunch is the mangled maps would work extremely well, even though their foes have had decades to do their own map-making.


> Think of all the title deeds, utility maps and everything that you need surveyors for.

All you need for this is to know whether an existing number corresponds to the old system or the new one, plus a piece of code that can convert from one to the other.
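
For the common case where both numbers are coordinates in known, documented systems, that piece of code already exists. A sketch with pyproj, using NAD27 -> WGS-84 as an illustrative old/new datum pair (the EPSG codes here are just for the example, not anything China-specific):

    from pyproj import Transformer

    # Old datum (NAD27, EPSG:4267) to new datum (WGS-84, EPSG:4326).
    # always_xy=True makes the argument order (lon, lat).
    to_wgs84 = Transformer.from_crs("EPSG:4267", "EPSG:4326", always_xy=True)

    lon, lat = to_wgs84.transform(-105.51, 39.90)
    print(lat, lon)  # the same physical point, expressed in WGS-84

Whether a well-defined transformation exists between the two systems is the real question; the code itself is trivial.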


Well, the old number is 35 feet south of a medium-sized pine tree and the new number is 45.1234567. If you can solve for the code, we can put Esri out of business.


But that's not how this works. China's map datum uses the same principle; it's just shifted somewhat.


> As for military purpose, have you ever done any work with the military?

Haha no, I can't stand them and their hierarchies.

But yeah, that was my point: the enemy has their own maps, with their own datums, anyway.


>we don't and can't do this

10 months ago there was an Ask HN: No planes visible at LHR on Google Maps Satellite view. https://news.ycombinator.com/item?id=41841727

No planes at all on the ground, none at any gates, none parked outside. I offered some possible explanations (corona, a bank holiday), but you would expect to see at least a couple. Seems like the imagery was altered - that someone did, and can, do that.

edit: another explanation could be that the airport was closed so that the plane taking the photos could fly over it.


That's a common request for manually created mosaics, which are often used in flight-simulation software. Customers want all planes manually photoshopped out of airports so that it doesn't look like you're landing on top of another plane. It's a surprisingly big business for flight training. Google is probably sourcing from some of those.

By "we" I mean my company, and my product does not do that; that part holds. (Or, more precisely, it's a different product that I don't work on and that isn't marketed as "imagery".)

But yes, some other mosaic products are specifically requested with planes photoshopped out of airports and all waterbodies set to a consistent artificial color, so that sunglint can be automatically simulated in flight sims for training pilots.

Because those mosaics are often high-quality datasets available for purchase, Google etc. sometimes reuse them.


Oh how interesting. Thank you for clearing this mystery up! I hope user slavomirvojacek sees it too.


Surprising amount of imagery with '© 2025 Airbus' as a watermark; I knew they did more than build aircraft, and I guess this is a part of their business.


They're one of the largest and longest-lived satellite imagery providers, FWIW. It's a major wing of the company. They manufacture and operate very high end satellites and have for a long time.


Chinese coordinates definitely can be converted to WGS-84 - it's Google that didn't do it. Look at the Shenzhen River in OpenStreetMap: the streets of Hong Kong and Shenzhen align with each other perfectly.
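
FWIW, the published GCJ-02 approximation has no closed-form inverse, but a few fixed-point iterations recover WGS-84 to well under a meter. A sketch, assuming a forward wgs84_to_gcj02 implementation like the one posted upthread:

    def gcj02_to_wgs84(lat, lon, iters=3):
        # No closed-form inverse: push the current WGS-84 guess through
        # the forward transform and correct by the residual. The offset
        # field is small and smooth, so this converges very quickly.
        glat, glon = lat, lon
        for _ in range(iters):
            clat, clon = wgs84_to_gcj02(glat, glon)
            glat += lat - clat
            glon += lon - clon
        return glat, glon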


> More interesting are the things China and some other countries do around datums.

Here's detailed info about this: https://en.wikipedia.org/wiki/Restrictions_on_geographic_dat...

