Congratulations to the team. Knowing some of the folks on the Bun team, I can't say I'm surprised. They are in the top 0.001% of engineers, writing code out of love. I'm hugely bullish on Anthropic; this is a great first acquisition.
Something to understand about the word “leak” is that it implies something was keeping things in at some point. Microsoft security is so underfunded and garbage that it is fundamentally making technology as a whole unsafe.
Example: if Kroger, or whatever your supermarket of choice, distributed infected meat, they would get sued to bits. Microsoft distributes thousands of malicious NPM dependencies and underfunds the NPM security team - if there is such a thing - which is why an entire industry of supply-chain security companies exists. No other registry has as bad a malicious-package problem as NPM has had since Microsoft acquired GitHub.
Microsoft just does not know how to handle security, which is why so many security companies exist to fill their gaps. I don’t trust their security practices one bit tbh.
Back at one of my previous employers, we had a long internal briefing about why our latest device did not have USB-C when other products on the market already did.
The connector is solid but my god have there been disasters because of USB-C.
1. Power delivery up to high wattages, not always with auto-sensing.
2. Cables rated for different data-transmission speeds.
3. Standard USB data transfer and Thunderbolt over the same connector and cable, even though most accessories are not rated for Thunderbolt.
Having worked on bot detection in the past: some really simple, old-fashioned attacks happened by doing the exact opposite of what the robots.txt file says.
While I doubt it does much today, that file really only matters to those who want to play by the rules, and on the free web that is not an awful lot of actors anymore, I'm afraid.
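For anyone curious what that looks like in practice, here is a minimal sketch of the idea (plain Python standard library, not any particular vendor's detector; the robots.txt content, user agents, and log entries are made up): since robots.txt cannot block anything, requests to its Disallow'd paths make a cheap bot signal.

    from urllib import robotparser

    # Example robots.txt content; in practice you would fetch the site's real file.
    ROBOTS_TXT = """\
    User-agent: *
    Disallow: /private/
    Disallow: /admin/
    """

    rp = robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())

    def hits_disallowed_path(user_agent: str, url: str) -> bool:
        """True if the request targets a path robots.txt asks crawlers to avoid."""
        return not rp.can_fetch(user_agent, url)

    # Hypothetical access-log entries (user agent, requested URL):
    requests = [
        ("FriendlyCrawler/1.0", "https://example.com/blog/post-1"),
        ("SneakyScraper/0.1", "https://example.com/private/reports"),
    ]
    for ua, url in requests:
        if hits_disallowed_path(ua, url):
            print(f"bot signal: {ua} requested disallowed URL {url}")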
That was the first thing I learnt about the robots.txt file. Even RFC 9309, the Robots Exclusion Protocol (https://www.rfc-editor.org/rfc/rfc9309.html), mentions:
> These rules are not a form of access authorization.
Meaning that these rules are not enforced in any way; they cannot actually prevent you from accessing anything.
I think the only approach that could work in this scenario is to find out which companies disregard robots.txt and bring it to the attention of the technical community. Practices like these could make a company look shady and untrustworthy if found out. That could be one way to keep them accountable, even though there is still no guarantee they will abide by the rules.
As someone who has done IT ops support at a hacker-friendly conference: from an admin perspective - not great! But at an operational level, our team could just blame Kevin Mitnick (RIP to a real one) for any single thing that went wrong in the building.
What do you mean exactly? If you need a notification engine, reaching for a pubsub implementation is very easy given Phoenix's popularity, and it is quite battle-tested. I've implemented notifications at scale a few times in the ecosystem. What problems are you encountering that you don't feel you have a tool in the shed for in this case?
While I like the blog post, I think the use of unexplained acronyms undermines its chance of being useful to a wider audience. Small nit: make sure acronyms and jargon are explained.
For these kinds of blog posts, where the content is very good but not very approachable because it assumes extensive prior knowledge, I find an AI tool very useful for explaining and simplifying. I just used the new browser Dia for this, and it worked really well for me. Or you can use your favorite model provider and copy and paste. This way the post stays concise, and yet you can use your AI tools to ask questions and clarify.
It's clearly written for an audience of other RL researchers, given that the conclusion is "will someone please come up with Q-learning methods that scale!"