Hacker News | hakonjdjohnsen's comments

This video by AlphaPhoenix is absolutely incredible! I do research in (nonimaging) optics and am used to thinking about the propagation of light. Still, there is something amazing about seeing a real recording of the propagation of a real beam of light. I also love that you can see artefacts from how long light from different parts of the scene takes to reach the camera.


Thermophotovoltaics is really cool. It is an old idea, but recently several groups (including the group behind Fourth Power) have shown much better experimental performance than before, approaching the level where this starts to look like a real solid-state heat engine.

The idea is to use a photovoltaic cell (“solar cell”) to convert thermal radiation to electricity. A regular solar cell has limited efficiency because the sun has a wide spectrum and a single material is not efficient across the whole spectrum. With thermophotovoltaics, the emitter is so close to the cell that you can simply reflect the “bad” (sub-bandgap) photons back to the hot surface and recycle them instead of losing their energy.
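
To see why the recycling matters, here is a rough back-of-the-envelope in Python (the 2400 K emitter and 1.0 eV bandgap are illustrative assumptions on my part, not any company's actual numbers):

    import numpy as np

    # Blackbody spectral radiance per unit wavelength (Planck's law)
    h, c, kB, q = 6.626e-34, 2.998e8, 1.381e-23, 1.602e-19
    def planck(lam, T):
        return 2*h*c**2 / lam**5 / np.expm1(h*c / (lam*kB*T))

    T, Eg = 2400.0, 1.0            # emitter temperature (K), assumed bandgap (eV)
    lam_g = h*c / (Eg*q)           # bandgap wavelength, ~1.24 um
    lam = np.linspace(0.2e-6, 50e-6, 500_000)
    P = planck(lam, T)
    frac = P[lam <= lam_g].sum() / P.sum()   # uniform grid, so dlam cancels
    print(f"fraction of emitted power above the bandgap: {frac:.0%}")
    # ~27%; without recycling, the other ~73% would be wasted as sub-bandgap heat.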

In theory, a more efficient alternative to a traditional solar cell is to use the sunlight to heat a surface to ultra-high temperatures and then run a thermophotovoltaic cell on that hot surface, but this is easier said than done.

As an outsider, I do think the competitor Antora Energy looks to have a simpler approach: instead of pumping heat around with a high-temperature liquid (lots of moving parts), they just use thermal radiation to transfer the heat inside their battery.
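
At the temperatures these thermal batteries run at, radiation alone moves serious power. A quick Stefan-Boltzmann estimate (the 1500 °C figure is just illustrative, not Antora's spec):

    sigma = 5.670e-8             # Stefan-Boltzmann constant, W/m^2/K^4
    T = 1773.0                   # ~1500 C, an illustrative storage temperature
    print(f"{sigma * T**4 / 1e3:.0f} kW/m^2 radiated")   # ~560 kW/m^2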


I think there is at least some plausible interpretation of this that points to more than marketing fluff.

You want to count particles per volume of air, so conventional sensors use a fan to maintain a constant volumetric flow and then count particles per second to infer particles per volume.

The way I interpret the above marketing language is that they use the optical sensor not only to count particles but also to measure the particle movement and infer airflow. So as long as there is some natural movement in the air, they can measure both particle count and volumetric flow, and thus infer particles per volume.
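
In pseudo-numbers, that inference would look something like this (every value here is made up for illustration; I have no insight into the actual firmware):

    # Fan-less particle-concentration estimate: the optical sensor yields a
    # count rate, plus an air-speed estimate from tracking particle motion.
    counts_per_s = 12.0          # particles detected per second (made up)
    air_speed = 0.05             # inferred air speed through the sensor, m/s
    cross_section = 1e-4         # sensing cross-section, m^2 (made up)

    flow = air_speed * cross_section        # volumetric flow, m^3/s
    concentration = counts_per_s / flow     # particles per m^3
    print(f"~{concentration:.2e} particles/m^3")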


See also the YouTube video about the project: https://youtu.be/8s9TjRz01fo


The linked page has that video embedded.


Yeah good point!

When I came across this amazing project and wanted to share it on HN, I was debating whether to post the YouTube link or the project page. I decided to post the project page and mention the YouTube link in the description for those who prefer video, but somehow that description got posted as a comment instead (not sure how that happened?). Anyway, as you said, the video is embedded in the project page, so it wasn't really necessary.


I found this presentation from the Disobey 2025 conference to be a really good and entertaining watch!

Last year the presenter also wrote a blog post about the attack, which received some discussion at the time:

https://news.ycombinator.com/item?id=41608949


Cool concept! I do research in solar concentrator optics, so I enjoy seeing such completely different applications of concentrated sunlight. Still, I am not fully convinced in this specific case: I wonder if it would not be a lot easier to provide the missing spectrum ourselves instead of running fiber-optic bundles from the roof.

If I understand correctly, your two main benefits are a broader spectrum and the lack of PWM flicker. Did you measure the spectrum of the light from the prototype monitor? The light goes through several filters: first, I assume the daylighting system has an IR filter to prevent overheating; then it goes through the LCD itself and the color filter array in front. Are you still left with much IR (or whichever frequencies are considered beneficial) after all this?


Thanks, and good points!

The spectrum also matters in its dynamic changes through the day, so proportionality across the spectrum as well as within ranges is important (e.g. within NIR, there are metabolically inhibitory and facilitatory wavelengths just 50 nm apart). These are linked to countless physiological mechanisms that interact with each other. So, while we are also looking into more "full-spectrum" electric sources, it is not really possible, let alone feasible, to replicate daylight.

On the technology side, we are still searching for the best LCD to use (happy to share more in private), but it is possible not to limit the spectrum too much here. There are two aspects I can already summarize:

- Near-infrared is preserved through the collector and fiber and can be transmitted through the display (but finding the right components is a challenge, as so far nobody has cared about transmission properties outside the visible range).

- Opposing the current trend of increasing gamut with ever-narrower RGB spectra, we go for a broad spectrum and lower saturation. Aside from physiology, this also has some perceptual benefits, e.g. avoiding the long-term adaptations that make the world look dull as a consequence of today's wide-gamut displays.


The reason we get a lot of light from the sun is not that the sun is particularly "bright" (high radiance) compared to other stars; it is that the sun has an absolutely huge apparent size in the sky compared to all the other bright objects we can see.

Let's say you go to one of the illuminated areas that paid for Reflect Orbital's light and look up. What would you see? You would see a tiny bright spot flying past, with an angular size of about 10^-10 steradians [1].

This tiny spot has the same "brightness" (radiance) as the sun, because a mirror preserves radiance. However, the mirror looks roughly 250 000 times smaller than the sun from your perspective (the sun has an angular size of about 7×10^-5 steradians). This means that the satellite would only give you about 0.0004% of the light of the real sun.
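
Back-of-the-envelope in Python, using the mirror numbers from [1] below (the ~0.53 degree solar angular diameter is the standard value):

    import math

    area, alt = 100.0, 600e3              # mirror area (m^2), altitude (m), from [1]
    omega_sat = area / alt**2             # small-angle solid angle of the mirror
    theta_sun = math.radians(0.53) / 2    # angular radius of the sun
    omega_sun = math.pi * theta_sun**2    # ~6.8e-5 sr

    print(f"mirror: {omega_sat:.1e} sr, sun: {omega_sun:.1e} sr")
    print(f"light relative to real sun: {omega_sat / omega_sun:.1e}")      # ~4e-6
    print(f"satellites to match sunlight: {omega_sun / omega_sat:,.0f}")   # ~240,000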

If you could somehow take a couple hundred thousand such satellites and use them to illuminate the same spot, you could technically get it to resemble real sunlight. But imagine what that would look like: the satellites would need to be numerous enough / huge enough to cover a significant portion of the sky, on the order of the apparent size of our actual sun. They would be spread out in a sun-synchronous orbit, so they would be visible at dusk, at this size, from all points on Earth. Would we really want that?

The founder has been thinking about using mirrors to collimate the sunlight to get around this problem, but it won't work. The collimator design he showed in a 2022 article [2] would decrease the focal spot from a 5 km diameter to some smaller diameter as intended, but it would do so by throwing away light, not by increasing the brightness in the smaller spot. This is given by conservation of étendue, one of the fundamental laws of nonimaging optics (the field where I do research).

[1] They are planning a 100 m² mirror at 600 km altitude, which gives a solid angle of (100 m²)/(600×10³ m)² ≈ 2.8×10^-10 sr.

[2] https://www.vice.com/en/article/this-man-is-trying-to-put-mi...


> You make a big ring (roughly of the same area of the spot you are trying to make on the ground)

Unfortunately, this is not how the size of the spot on the ground is decided. Sunlight is not perfectly collimated (the sun subtends about half a degree), so even a beam reflected by a perfectly shaped mirror spreads by approx 1 meter every hundred meters. At the "edge of space", 100 km up, your spot already has a ~1 km diameter; in reality, with a higher orbit and an imperfect mirror and tracking, it will be much larger. The size of your (ideal) mirror determines the brightness of the spot, not the size of the spot.
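
A minimal check of those spot sizes, using the half-degree angular diameter of the sun:

    import math

    theta = math.radians(0.53)        # full angular diameter of the sun
    for alt_km in (100, 600):         # "edge of space" and the planned orbit
        spot = alt_km * 1e3 * theta   # minimum spot diameter, ideal flat mirror
        print(f"{alt_km} km altitude -> spot diameter ~{spot/1e3:.1f} km")

This gives ~0.9 km and ~5.6 km, consistent with the 5 km figure mentioned upthread.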

Liquid mirrors in space seem like a cool concept though!


The area comment was more about the power received: to have that power on the ground, you need to reflect it from space (stating the obvious: with 100 m² of surface area, you can't collect more power than surface area × power density). You also don't want to overheat everything (otherwise you need solar towers instead of solar panels), so a ballpark of around 1:1 is a good starting point.

To focus light, the ideal shape is a parabola, which is a shape that occurs naturally when you spin a liquid in gravity. (There are processes for building big telescopes this way, though they can only point up.) Of course you don't have gravity anymore in orbit, but you can pull on the surface with fields, or you can have concentric rings that you align more or less along the axis to deform the surface. We don't need a perfect focus, just a rough spot.
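
For the Earth-bound version of that trick (liquid-mirror telescopes), the standard result is that a liquid spinning at angular velocity w in gravity g settles into the paraboloid z(r) = w^2 r^2 / (2g), i.e. a focal length of f = g / (2 w^2):

    import math

    g = 9.81                                # gravity, m/s^2
    def focal_length(rpm):
        w = rpm * 2 * math.pi / 60          # spin rate in rad/s
        return g / (2 * w**2)               # f = g / (2 w^2)

    print(f"{focal_length(10):.1f} m")      # ~4.5 m focal length at 10 rpm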


So the bird kill is actually from a separate focal spot near the absorber, where standby heliostats are focused? Interesting, I did not know that! It makes me wonder why standby heliostats need to be focused at all. Couldn't their aim points be randomized over a much larger volume near the receiver, so they stay on standby but can quickly move back onto the receiver when needed?

By the way, I’m happy to find someone with CSP knowledge on HN. Are you working in the field?


This, very much this!

I do research in a subfield of optics called nonimaging optics (optics for energy transfer, e.g. solar concentrators or lighting systems). We typically use these classical optical design applications, and your observations are absolutely correct. Make optical design software that uses GPUs for raytracing and reverse-mode autodiff for optimization, sprinkle in some other modern techniques, and you may blow these older tools out of the water.
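
As a toy illustration of that optimization loop (not any particular tool's API; a real implementation would trace millions of rays on the GPU and get gradients from reverse-mode autodiff instead of finite differences):

    import numpy as np

    def reflect(d, phi):
        # Reflect direction d off a flat mirror tilted by angle phi
        n = np.array([np.sin(phi), np.cos(phi)])
        return d - 2 * np.dot(d, n) * n

    def loss(phi, target=np.array([2.0, 1.0])):
        # Misalignment between the reflected ray and the target direction
        r = reflect(np.array([0.0, -1.0]), phi)           # vertical incoming ray
        return (r[0] * target[1] - r[1] * target[0])**2   # 2D cross product -> 0

    phi, lr, eps = 0.3, 0.02, 1e-6
    for _ in range(200):                                  # plain gradient descent
        grad = (loss(phi + eps) - loss(phi - eps)) / (2 * eps)
        phi -= lr * grad
    print(f"tilt: {np.degrees(phi):.2f} deg "
          f"(analytic: {np.degrees(np.arctan(2) / 2):.2f} deg)")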

I am hoping to get some projects going in this direction (feel free to reach out if anyone is interested).

PS: I help organize an academic conference in my subfield of optics. We are running a design competition this year [1,2]. It would be super cool if someone submitted a design made by drawing inspiration from modern computer-graphics tools (maybe using Mitsuba 3, by one of the authors of this book?), instead of using our classical applications in the field.

[1] https://news.ycombinator.com/item?id=42609892

[2] https://nonimaging-conference.org/competition-2025/upload/


> I am hoping to get some projects going in this direction (feel free to reach out if anyone is interested).

This does sound interesting! I've just finished a Master's degree, also in nonimaging optics (in my case, oceanographic lidar systems). I have experience with raytracing for optical simulation, though not quite in the same sense as optical design software. How should I contact you to learn more?


Interesting! I added an email address to my profile now.


Great! I’ll send you an email now.



Yes, exactly. I have not looked at Mitsuba 2, but Mitsuba 3 is absolutely along these lines. It is just starting to be picked up by parts of the nonimaging/illumination community; e.g., last year there was a paper from Aurèle Adam's group at TU Delft where they used it to optimize a "magic window" [1]. Some tradeoffs and constraints are a bit different when doing optical design versus (inverse) rendering, but it definitely shows what is possible.

[1] https://doi.org/10.1364/OE.515422


Shameless plug: we use Mitsuba 3 / Dr.Jit for image optimization around volumetric 3D printing: https://github.com/rgl-epfl/drtvam


It looks quite interesting, especially the part about scripting everything in Python with a JIT, instead of traditionally having to do everything in C or C++.

Looking forward to some weekend paper reading.


Looks really cool! I look forward to reading your paper. Do you know if a recording of the talk is/will be posted somewhere?


We presented this work at SIGGRAPH Asia 2024, but I don't think they record the talks?

Maybe at some point we will also do an online workshop about it.


I don't know much about optical engineering, but this sounds super exciting. I think I meant to point to Mitsuba 3, not 2.

