Hacker News | cryptoz's comments

15 years later and still no word from Google on whether they will use the barometers in Android devices to assimilate surface pressure data. It has been shown that this can improve forecast accuracy. I think IBM may be doing it with their weather apps, but Google/Apple would have dramatically more data available.

Apple even bought Dark Sky, which purported to do this but never released any information - so I doubt they really did. And if they did, I doubt Apple continued the practice.

Been waiting a long time to hear Google announce they'll use your barometer to give you a better forecast. Still waiting, I guess.


The community has mostly abandoned smartphone pressure observation (SPO) data. It's extraordinarily difficult to use because of social issues like PII and technical ones like QA/QC. But even more importantly, there's very little compelling evidence that the data makes much of any difference in real forecasts.


> 15 years later and still no word from Google if they will use the barometers in Android devices to assimilate surface pressure data.

For WeatherNext, the answer is 'no'. The paper (https://arxiv.org/abs/2506.10772) describes in detail what data the model uses, and direct assimilation of user barometric data is not on the list.


Montreal public transit times used to be on some kind of, like, 28-hour clock. Bus times after midnight would be labelled 27:30 or something. Suuuper confusing. It sounds so bizarre, in fact, that I'm doubting my memory a bit, but I'm certain it was like that (say around 2006 or so).


This is actually how GTFS (a standard format for public transit data) works: https://gtfs.org/documentation/schedule/reference/#stop_time... . Sleeper trains especially can get weird, with 30+ hour times. But I don't think it's wise to show that to the user.
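
For illustration, a minimal sketch (not tied to any particular GTFS library) of normalizing a stop_times value like "25:30:00" into a service-day offset plus a wall-clock time:

    # Minimal sketch: normalize a GTFS stop_times value such as "25:30:00"
    # into (days past the service date, wall-clock time). GTFS allows the
    # hour field to exceed 23 for trips that run past midnight.
    def normalize_gtfs_time(value: str) -> tuple[int, str]:
        hours, minutes, seconds = (int(part) for part in value.split(":"))
        day_offset, wall_hour = divmod(hours, 24)
        return day_offset, f"{wall_hour:02d}:{minutes:02d}:{seconds:02d}"

    print(normalize_gtfs_time("25:30:00"))  # (1, '01:30:00')
    print(normalize_gtfs_time("08:15:00"))  # (0, '08:15:00')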


And it is the right thing to do, since otherwise the question of which day a train belongs to becomes confusing. Just take the time mod 24 hours before intersecting trains.

It is also how I personally record time spans. It makes things much easier: you do not need to deal with the case where the start time is larger than the end time, and you only need a single date field.
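
A tiny sketch of why that helps (the field names here are made up): with hours allowed past 24, the end is always numerically after the start, so duration is a plain subtraction and only one date is stored.

    # Hypothetical record: one date, times expressed as hours since that
    # date's midnight, so an end past midnight is just a number above 24.
    span = {"date": "2025-11-23", "start_h": 20.0, "end_h": 27.5}  # 8pm to 1:30am

    duration_h = span["end_h"] - span["start_h"]   # 7.5, no rollover handling
    wall_clock_end = span["end_h"] % 24            # 3.5 -> 03:30 the next day
    print(duration_h, wall_clock_end)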


I've seen this in Japan as well. A store that's open from, let's say, 8am to 1am will actually advertise itself as being open from 8am to 25pm. I guess the perception is that it's confusing to have a range where the second number is smaller than the first.


Japanese people are used to it because of TV shows etc. that have the same issue.

If a show airs at 2025-11-24 01:00, people will have an easier time remembering it as a very late slot after midnight on the 23rd than as a crazy early time on the 24th. Most TV or movie guides will show it as 25:00 on the 23rd.


I think it is more common for them to write 8:00 to 25:00 - omitting AM and PM.


AM and PM are used in a few languages (mostly English), but many don't have them in their vocabulary at all, which probably includes Japanese.


In the case of Japanese, there is 午前・午後 (gozen/gogo, i.e. A.M./P.M.) for 12-hour time, e.g. 午後9時に着く (arrive at 9 P.M.). If it's obvious from context, then only the hour is said, e.g. in「明日3時にね」("see you at 3 tomorrow"), the flow of the conversation disambiguates the hour (it's also unlikely the speaker means 3 A.M.).

There are also other ways to convey 12-hour time. e.g. 朝6時に起きる (wake up at 6 A.M. / wake up at 6 in the morning).


And even in languages that do have it, representations of noon and midnight differ.


Maybe at that point they should say "Closed 1am to 8am" instead.


UK buses and trains do this.

I think the reason is day return tickets, i.e. those where you go out and come back on the same day. It allows the return to be after midnight, which makes sense if you're going, for example, to a theatre show or a pub that shuts at 11pm.


I work with a factory that uses 32-hour timestamps, as some employees work a night shift.


They're testing specific things with no need for a full orbit, although I think they reach verrrry close to orbital velocity. They want the payload dummies to 'de-orbit' quickly (from a suborbital trajectory). They could easily have gone orbital if they wanted to. I guess we'll see orbital demonstrations after a few splashdowns of the v3 stack early next year.


The video and info: https://www.spacex.com/launches/starship-flight-11

(Liftoff is around 33 mins in)


I'm working on Code+=AI: https://codeplusequalsai.com/

It's an AI webapp builder with a twist: I proxy all the OpenAI API calls your webapp makes and charge 2x the token rate, so when you publish your webapp onto a subdomain, its users are charged 2x on their token usage. Then you, the webapp creator, get 80% of what's left over after I pay OpenAI (and I get 20%).
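
In concrete terms (illustrative numbers, not figures from the site):

    # Illustrative only: how the 2x markup splits for $1.00 of upstream
    # OpenAI usage generated by a published webapp's end user.
    openai_cost = 1.00             # what OpenAI charges for the tokens
    user_charge = 2 * openai_cost  # what the webapp's user pays
    margin = user_charge - openai_cost
    creator_share = 0.80 * margin  # to the webapp creator
    platform_share = 0.20 * margin # to Code+=AI
    print(user_charge, creator_share, platform_share)  # 2.0 0.8 0.2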

It's also a fun project because I'm making code changes in a different way than most people are: I'm having the LLM write AST modification code; my site immediately runs the code spit out by the LLM to make the changes you requested in a ticket. I blogged about how this works here: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
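
To illustrate the general idea (a generic sketch using Python's ast module, not Code+=AI's actual implementation): the LLM emits a small transformer script, and the host runs it against the project's source to apply the requested change.

    # Generic sketch of LLM-emitted AST modification code (not the actual
    # Code+=AI implementation): rename a function and rewrite its callers.
    import ast

    class RenameFunction(ast.NodeTransformer):
        def __init__(self, old: str, new: str):
            self.old, self.new = old, new

        def visit_FunctionDef(self, node: ast.FunctionDef):
            if node.name == self.old:
                node.name = self.new
            self.generic_visit(node)
            return node

        def visit_Call(self, node: ast.Call):
            if isinstance(node.func, ast.Name) and node.func.id == self.old:
                node.func.id = self.new
            self.generic_visit(node)
            return node

    source = "def fetch():\n    return 1\n\nprint(fetch())\n"
    tree = RenameFunction("fetch", "fetch_items").visit(ast.parse(source))
    print(ast.unparse(ast.fix_missing_locations(tree)))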


Computer Use models are going to ruin simple honeypot form fields meant to detect bots :(


I just tried to submit a contact form with it. It successfully solved the ReCaptcha but failed to fill in a required field and got stuck. We're safe.


You mean the ones where people add a question that is like "What is 10+3?"


I wonder if this stuff is trained on enough Hallmark movies that even AI actors will buy a hot coffee at a cafe and then proceed to flail the empty cup around like the humans do. Really takes me out of the scene every time - they can't even put water in the cup!?


A way for people to build LLM-powered webapps and then easily earn as they are used: I use the OpenAI API and charge 2x for tokens so that webapp builders can earn on the margin:

https://codeplusequalsai.com


I've really got to refactor my side project, which I tailored to use only OpenAI API calls. I think the Anthropic APIs are a bit different, so I just never put in the energy to support the changes. I think I remember reading that there are tools to simplify this kind of work, to support multiple LLM APIs? I'm sure I could do it manually, but how do you all support multiple API providers that have some differences in API design?



I built LLMRing (https://llmring.ai) for exactly this. Unified interface across OpenAI, Anthropic, Google, and Ollama - same code works with all providers.

The key feature: use aliases instead of hardcoding model IDs. Your code references "summarizer", and a version-controlled lockfile maps it to the actual model. Switch providers by changing the lockfile, not your code.
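
Roughly how the alias indirection works in practice (this is a conceptual illustration, not LLMRing's actual lockfile format or API; see the docs for those):

    # Illustration of the alias -> model indirection: application code asks
    # for "summarizer", and the lockfile decides which provider/model that
    # currently means. Model names here are examples only.
    lockfile = {
        "summarizer": "anthropic:claude-3-5-sonnet",  # swap providers here,
        "extractor": "openai:gpt-4o-mini",            # not in application code
    }

    def resolve(alias: str) -> tuple[str, str]:
        provider, model = lockfile[alias].split(":", 1)
        return provider, model

    print(resolve("summarizer"))  # ('anthropic', 'claude-3-5-sonnet')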

Also handles streaming, tool calling, and structured output consistently across providers. Plus a human-curated registry (https://llmring.github.io/registry/) that I keep updated with current model capabilities and pricing - helpful when choosing models.

MIT licensed, works standalone. I am using it in several projects, but it's probably not ready to be presented in polite society yet.


OpenRouter, Glama ( https://glama.ai/gateway/models/claude-sonnet-4-5-20250929 ), AWS Bedrock: all of them give you access to a wide range of AI models via an OpenAI-compatible API.
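
For example, with the official OpenAI Python SDK you only swap the base URL and the model ID (OpenRouter shown here; the model ID is illustrative and may change):

    # Same OpenAI SDK, different gateway: point base_url at an
    # OpenAI-compatible provider such as OpenRouter.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",
    )
    response = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # illustrative model ID
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)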



LiteLLM is your friend.
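
A minimal example of the kind of thing it does (model names are illustrative): one OpenAI-style call signature, with the provider chosen by the model string.

    # LiteLLM exposes one OpenAI-style call across providers; the provider
    # is picked from the model string. API keys are read from environment
    # variables such as OPENAI_API_KEY / ANTHROPIC_API_KEY.
    from litellm import completion

    for model in ["gpt-4o-mini", "anthropic/claude-3-5-sonnet-20240620"]:
        response = completion(
            model=model,
            messages=[{"role": "user", "content": "Say hello."}],
        )
        print(model, response.choices[0].message.content)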


or AI SDK


Why don't you ask an LLM to do it for you?


I use LiteLLM as a proxy.


> I think I remember reading that there are tools to simplify this kind of work, to support multiple LLM APIs

just ask Claude to generate a tool that does this, duh! and tell Claude to make the changes to your side project and then to have sex with your wife too since it's doing all the fun parts


Opening the App Store to browse and download a bunch of apps - nothing specific in mind - is probably the #1 thing people are doing when they open it. Of course, installing a specific app is a top use case. But I think you're just not the average user. Lots of people open the App Store frequently just to check out what's available.

~10 years ago I would do this all the time. It's fun, kind of like surfin' the net was back in the old days, but in a walled garden of applications.


Is there actually any data to back up the claim that the "#1 thing people do" is open the App Store to see what's available, besides your singular story about what you used to do a decade ago, when all of this was much more novel?


10 years ago that was fun. Today it’s an awful experience.


I'm surprised to hear this, as I am in the same boat as the other poster. Of course it makes sense: they wouldn't build that junk if there weren't junk consumers on the market. But I still can't grasp the concept of "just installing apps".


It seems plausible that casual browsing and downloading remains a significant use case; Apple surely wouldn't design the App Store around discovery this way otherwise. I'm not sure about the #1-activity hypothesis. What I am certain about, though, is that the App Store is deeply broken and they've started rushing down the path of platform "enshittification" (a real thing), where online platforms become less useful, less enjoyable, or less user-friendly.

