
I experimented with an Ohm/CodeMirror bridge that would map an Ohm grammar to CodeMirror classes for marks and syntax highlighting.

It might be an interesting starting point for you: https://observablehq.com/@ajbouh/editor
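
If it helps, here's roughly the shape of the bridge in a TypeScript sketch (toy grammar, hypothetical class names, not the notebook's actual code), assuming ohm-js and a CodeMirror 5 editor instance:

    import * as ohm from "ohm-js";

    // Toy grammar; the bridge itself works with any Ohm grammar.
    const g = ohm.grammar(`
      Arith {
        Exp = Exp "+" num  -- plus
            | num          -- lone
        num = digit+
      }
    `);

    // Collect (ruleName, startIdx, endIdx) spans for every node in a match.
    const semantics = g.createSemantics().addOperation("spans", {
      _nonterminal(...children) {
        const own = {
          name: this.ctorName,
          start: this.source.startIdx,
          end: this.source.endIdx,
        };
        return [own, ...children.flatMap((c) => c.spans())];
      },
      _iter(...children) {
        return children.flatMap((c) => c.spans());
      },
      _terminal() {
        return [];
      },
    });

    // Turn each span's rule name into a CSS class on a CodeMirror 5 mark.
    function highlight(cm: any /* CodeMirror 5 editor */, input: string) {
      const m = g.match(input);
      if (m.failed()) return;
      for (const s of semantics(m).spans()) {
        cm.markText(cm.posFromIndex(s.start), cm.posFromIndex(s.end), {
          className: `ohm-${s.name}`,
        });
      }
    }

The nice part is that the grammar's rule names give you stable class names for free.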


If anyone is looking for ideas for how to build tooling that fights flaky tests, I consolidated a number of lessons into a tool I open sourced a while ago.

https://github.com/ajbouh/qa

It will do things like separate out different kinds of test failures (by error message and stacktrace) and then measure their individual rates of incidence.
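
The heart of that is simple enough to sketch in TypeScript (hypothetical types and names, not qa's actual code): fingerprint each failure by its message plus its top stack frames, then count how many runs each fingerprint shows up in.

    // Group failures by a fingerprint of message + top stack frames,
    // then estimate each failure kind's rate of incidence across runs.
    interface Failure {
      message: string;
      stack: string[];
    }

    function fingerprint(f: Failure): string {
      // Real tools also normalize volatile bits (addresses, line numbers).
      return [f.message, ...f.stack.slice(0, 5)].join("\n");
    }

    function incidenceRates(runs: Failure[][]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const failures of runs) {
        // Count each distinct failure kind at most once per run.
        for (const key of new Set(failures.map(fingerprint))) {
          counts.set(key, (counts.get(key) ?? 0) + 1);
        }
      }
      return new Map([...counts].map(([k, n]) => [k, n / runs.length]));
    }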

You can also ask it to reproduce a specific failure in a tight loop and once it succeeds it will drop you into a debugger session so you can explore what's going on.
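
And the repro loop is roughly this (again just a sketch; runTest and targetKey are hypothetical):

    // Re-run the test until the specific failure fingerprint reproduces,
    // then pause so you can poke around (e.g. under `node inspect`).
    async function reproduce(
      runTest: () => Promise<Failure | null>,
      targetKey: string
    ) {
      for (let attempt = 1; ; attempt++) {
        const failure = await runTest();
        if (failure && fingerprint(failure) === targetKey) {
          console.log(`reproduced on attempt ${attempt}`);
          debugger; // drops into the debugger when one is attached
          return failure;
        }
      }
    }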

There are demo videos in the project highlighting these techniques. Here's one: https://asciinema.org/a/dhdetw07drgyz78yr66bm57va


My understanding is that there is some evidence that Alzheimer's is transmissible (e.g. a higher rate of incidence among caregivers and neurosurgeons), but more study is needed.


A study showed that if you wire the circulatory systems of two mice together, one with Alzheimer's and one without, the second one will eventually develop everyone's favorite brain plaques.

https://www.newscientist.com/article/2151909-alzheimers-may-...

So while wiring together two bloodstreams is a bit extreme, and likely to result in most things being transmissible, there's nonzero evidence that you can have a bad day from a blood transfusion.


There have also been observations of particularly large amounts of certain human herpes viruses in the brains of Alzheimer's patients.


What have you found to work best when coordinating with your data management and development/operations staff?


Every organization and team is different. I've found two approaches work best: going around the roadblocks and managing everything end to end, then getting buy-in for data and ops to own it properly after the fact (playing up the political angle of owning more stuff once we've done the heavy lifting); and the brute-force method of meeting after meeting to educate people about the differences in use cases and deployment for ML products.


Same question here. I'm keen to understand this, especially around responsibilities, OKRs, and KRAs.


Car manufacturers are extremely careful about the failure modes of components and sensors they put in their vehicles. Consider the design of the CAN bus, which has explicit support for failed and misbehaving peers.

I have yet to see these sorts of considerations in any V2V communication system. The idea that my car might act on (or propagate) incorrect/sabotaged information that it receives from a "peer" is a terrifying form of fragility.

This sort of failure mode needs to be studied and addressed directly before any of these systems are deployed.

It should also be assumed that these new patterns of information propagation will create a huge financial incentive for people to sell aftermarket modifications that exploit the trust models of these protocols.


I feel like it will be introduced the way aerospace does it: add the newer technology to an aircraft in a non-safety-critical application, where reliability data can be collected without risking the safety of the aircraft. Then, once the technology has proven safe, it can be incorporated into a safety-critical system.

So maybe car manufacturers will first incorporate a driver assist that tells you if the car in front of you is stopping. If that fails, the driver is still in full control.


What bothers me is that this has to be designed under the assumption that the other car could be malicious and that any input received could be intentionally deceptive. I'm just not convinced that auto (or aerospace) companies have a high level of competency in thinking like this. They're used to thinking about physical defects, weather effects, bad users, and the like. It's very different if you have internet connected cars that could be (possibly in bulk) remotely hacked and given instructions to intentionally disrupt other cars.

Everything I have ever seen about vulnerabilities in car software systems has indicated a very poor understanding of the threat landscape on the part of car manufacturers and an embarrassingly weak ability to competently deal with these threats. So far this has been understandable since cars have been minimally networked, but going forward, I agree with GP that the appropriate term is "terrifying".


I think it’s one thing if you can spoof an input remotely and one hostile actor can target many vehicles simultaneously.

But if we can be certain, based on the physics of the system, that we are talking to the car in front of us, the fact is there are plenty of ways to commit vehicular homicide today, and V2V doesn't seem like a particularly "worse" way; frankly, it's probably one of the most traceable ways you could try to hurt someone.

So while you certainly need to defend against broken and malfunctioning input, I'm not quite convinced that malicious input is actually a case that needs to be specifically defended against.

The vehicle will have a “flight envelope” based on its own local sensors and rules just like modern aircraft that don’t allow even bad inputs to stall the plane. The inputs from V2V would not let you leave the envelope any more than the autopilot inputs would. I believe the steering wheel would still be allowed to exceed the envelope, for as long as there is a steering wheel.
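
To make that concrete, here's a minimal sketch of the clamping idea (all names and numbers are hypothetical, not any shipping system):

    // Clamp a V2V-requested deceleration to an envelope derived from the
    // car's own sensors, so untrusted input alone can't brake-check you.
    interface LocalSense {
      maxSafeDecel: number;   // m/s^2, from road/tire models
      obstacleSeen: boolean;  // corroboration from local radar/lidar
    }

    function applyV2VBrakeRequest(requestedDecel: number, local: LocalSense): number {
      // Hard braking is only allowed if local sensors corroborate a reason.
      const ceiling = local.obstacleSeen
        ? local.maxSafeDecel
        : 0.3 * local.maxSafeDecel;
      return Math.min(Math.max(requestedDecel, 0), ceiling);
    }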


I think vehicular homicide is a lot less likely than pranks (causing traffic jams, etc.), or people trying to game the system so that they always get to pass through intersections without stopping.

However, intentionally causing injury isn't something that should be ruled out, either. It's only "traceable" if someone is sending the signal from their own car, registered under their real name. If someone pulls the transmitter from a junked car (or builds their own, etc.), they could e.g. conceal it near an intersection, wait a day or two, and trigger it remotely, or attach it to the underside of someone else's vehicle, etc.

Someone could also jam the signals to potentially cause everything to stop working.

I'm with the crowd that thinks this is an inherently bad idea. The data is entirely untrusted, which makes it essentially useless for determining anything other than "there seems to be a radio transmitter at a particular location", and that's only if there are enough sensors to triangulate signal sources accurately.


I think auto manufacturing margins are slim enough that they will cut corners here, by simply not hiring the best people and not testing long enough. It's hard to know when you have a working system outside of astronomically expensive methods like formal verification, but it's really easy to know when you're out of budget.


The sabotaged-info point could (would?) happen anyway. There's very little effort put into securing the CAN bus, and that's how you get things like [1], even without V2V communication. There are a few papers out there describing similar attacks over Bluetooth as well. I'd argue the security portion will become just as important as the failure modes.

[1] https://www.wired.com/2015/07/hackers-remotely-kill-jeep-hig...


Good point.

V2V communication means BT exploits can become worms even more easily, spreading from phone to vehicle to vehicle.


Spoofing, jamming, and Sybil attacks are what concern me, more than failing parts. Really, you'd need some sort of special hardware that makes the protocol hard to spoof with off-the-shelf parts. Maybe a very high frequency band (since you're only communicating in LOS at extremely short distances) so that you can't just use any old SDR to hop on. Then use some kind of gossip protocol to achieve consensus and identify bad peers.
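
A toy sketch of the "identify bad peers" half (hypothetical names, and glossing over the genuinely hard parts): flag any peer whose self-reported position disagrees with the median of what its neighbors claim to observe.

    interface Report {
      peerId: string;
      observedX: number; // a neighbor's estimate of the peer's position (m)
    }

    function median(xs: number[]): number {
      const s = [...xs].sort((a, b) => a - b);
      return s[Math.floor(s.length / 2)];
    }

    function suspiciousPeers(
      selfReports: Map<string, number>, // peerId -> self-reported position
      observations: Report[],
      tolerance = 5 // meters of allowed disagreement
    ): string[] {
      const byPeer = new Map<string, number[]>();
      for (const r of observations) {
        byPeer.set(r.peerId, [...(byPeer.get(r.peerId) ?? []), r.observedX]);
      }
      const bad: string[] = [];
      for (const [peerId, claimed] of selfReports) {
        const seen = byPeer.get(peerId);
        if (seen && Math.abs(median(seen) - claimed) > tolerance) {
          bad.push(peerId);
        }
      }
      return bad;
    }

Of course, Sybil attacks target exactly this kind of majority logic, which is why the hard-to-spoof hardware matters.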


Just because there aren't $200 SDR transceivers available that can handle 20+ GHz right now doesn't mean there won't be in the next few years. These protocols have to be designed with more foresight than simple availability barriers, because the second you mass-produce something, it and its constituent parts are suddenly available.

That said, if you want to lose sleep at night, I suggest you look into the strict authentication and jamming resistance of ADS-B (effectively the aircraft equivalent of such a network).


I would not assume the communication can be considered fully line-of-sight.

There are many intersections where a car would need to communicate with crossing traffic while the intersecting road is behind a hill, trees, or a building.

Yes, attacks are an enormous concern, but comms and hardware failure are also very significant & non-trivial to solve.


LOS was a poor choice of words. But "within 500 m on roads or highways" is an even smaller range than LOS.


You are asserting that because information may be wrong or misunderstood, it should not be communicated. A more robust approach would be to share information promiscuously and hold people responsible for how that information is handled, in this case the drivers, who are responsible for their vehicles.

Turn signals also get misused and misinterpreted but no one suggests that means we must go without.


I'm actually just asserting that systems advocating for use of V2V messages should account for misinformation.

That is, as a human I know that the drivers around me might have broken lights or misuse their signals.

I also assume that other drivers do not have my best interest at heart.

A reasonable V2V system should be expected to handle these scenarios just as easily as it handles wireless interference and packet loss.


Wouldn't that invalidate the whole concept of autonomous V2V interaction?


I think this falls under the idea that we need to reach perfection before we deploy something. That simply isn't true, in my opinion. Many accidents already happen every day. If this system reduces them dramatically, it's a win. Even if there are some failures along the way, they will be noticed and improved upon.


Failure due to the inevitable limitations of a system is one thing; failure because someone deployed a hack job that works 90% of the time and fails catastrophically otherwise? That's just criminally negligent.

Even if total fatalities drop, if the reason for the remaining deaths is that your car will arbitrarily lie with no justification, then you’ll lose trust in the whole thing.

Systems that require trust should be reasonably perfect, so as to maintain that trust. Otherwise you’re really only going to get away with it by forcing it down the consumer’s throat, by top-down approaches (gov regulation, contracts with the ceo, etc).

And when you're doing that, who cares what the failure scenario and rate is? Lies to you 20% of the time, and it's still on your head. 30%? 80%? The only group that needs to trust it at this point is management; they're n steps removed from the issue, so as long as you can keep them from looking too hard, you can go as awful and shitty as you want.

Trust takes years to build, seconds to break, and forever to repair. Or you just take it to management.

Your website doesn't need much trust, and your text editor needs some but not much. But your car? It most certainly does.


Nothing about my comment advocates for perfectionism.

My point is that V2V systems and research papers I've seen just don't make meaningful claims about safety. They instead make claims about convenience and efficiency, which are not substitutes for safety.

We know to test vehicles for crash safety before putting them on the road.

While I believe there are certainly ways to make use of V2V communication that increase overall safety, I haven't seen anything remotely resembling a crash test for V2V systems.

Creating a system that relies on the correct behavior of all components is not a recipe for safety or reliability. This is especially true as the number of components increases (this happens when a car exchanges messages with all the cars around it).

We need V2V systems that don't rely on the correct behavior of any single component, and that allow for malicious behavior of some components.

The idea that a protocol version mismatch from some rolling deploy can cause injuries in cars made by other manufacturers is the sort of thing that I haven't seen a single person point out. How would something like this even be caught and debugged?

Advocating for an ecosystem where these things are likely but neither addressed nor considered is just plain irresponsible.


I agree with you. It has always been my hope that these oversights are a direct result of the projects struggling to find a baseline level of value rather than any underlying lack of forethought.

The level of security and approach to security will largely correlate to the types of messages that such a system ultimately needs. This sounds obvious, but I think it really isn't. In a world of self-driving vehicles with lidar + optical sensors, how much v2v communication is required beyond the sensor data?

Until that question is answered, it might not make sense to primarily focus on the security and reliability of said data.


I think there is something romantic about the siren song of "we don't need object detection or velocity estimation if all objects self-report their location, velocity, and intent".

I hope you're right and that once we establish best case utility, as a community we refocus on handling component failures more gracefully.

Although it's unclear if the second and third order system effects in our financial system will ever get that sort of treatment. So let's hope driving gets a bit closer to flying (and farther from wall street) in terms of attitude towards safety.

EDIT: clarity


It depends on the function of the system; it's not the same to have your infotainment freeze and lose the music as to have your drive-by-wire system crash and engage your parking brake at 70 mph.


"Move fast and break people" isn't an acceptable way forward.


I've been exposed to designing software and electronics for vehicles (autonomous or otherwise). There is ISO 26262 which gives guidelines on safety for these systems. Generally, developers of components will be fairly conservative, and for events where serious harm is possible, where the driver is not expected to get control if the system fails, and if it may occur in a "likely" situation[1], there are stringent requirements for testing, etc.

[1]"Likely" means how often that function is used. As an example, applying brakes is extremely common. So any electronics/software involving brakes is considered very likely.

Now whether companies like Tesla actively follow the ISO - I don't know.


V2V, as far as I remember, has not yet been able to offer reliable bandwidth and latency guarantees, right?

What you mentioned is probably going to be a higher-order technical problem.



Sounds like an attempted murder charge.


It can be whatever charge you want, but at the end of the day, some people are dead anyway.


TBH, you can do a lot of damage without a V2V communication system. I think that some people forget how much of the world is really ultimately run on various forms of trust relationships.


Transitive trust is the scariest form of trust in any system.

It's also the part of V2V that seems to be most powerful and least talked about.


> Consider the design of the CAN bus, which has explicit support for failed and misbehaving peers

How do you figure?


Low Speed/Fault Tolerant CAN offers baud rates from 40 Kbit/s to 125 Kbits/sec. This standard allows CAN bus communication to continue in case of a wiring failure on the CAN bus lines. In low speed/fault tolerant CAN networks, each device has its own termination.

From https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000...


With regard to German and other European cars, I've never seen or heard of a modern car operating on anything other than a "high speed" CAN bus, nothing lower than 250 kbit/s and all the way up to 1 Mbit/s. While some cars do have two separate buses, this is in no way a standard, and neither of them would necessarily be a fault-tolerant one. I'm also not sure who your source might supply, but nonetheless, I feel like your trust in the quality of modern cars and their components might be a bit misguided.


Fair enough. Though my point was that protocols should assume a variety of component failure modes.

I'd rather just have to trust the safety standards of the manufacturer of my car (and to a lesser extent the cars I might directly collide with), not the safety standards of every vehicle within transmit distance.


If someone has access to control an ECU on your CAN bus, you are just as badly off as in the case you worry about. And on top of that, I promise you the ECUs in modern cars are not works of art with good error handling. But consider that there are many currently unused technologies that could make, for example, the sharing of sensor data signed and trusted, such as the technologies Apple deploys today with great success, among others.


I am not aware of any sensor system that makes meaningful cryptographic claims about sensor readings. I'd love to learn more about anything along those lines.

Especially the associated threat modelling and engineering principles!


Error counters and error-disabled states are baked into every CAN controller implementation.

The arbitration system is designed to let chatty-but-low-priority messages try to talk as often as they'd like, but only succeed when higher-priority messages allow it.

Short of faults that actually electrically disable the bus, it's pretty good.
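
For the curious, the fault-confinement rules from the CAN spec are simple enough to sketch: a transmit error adds 8 to the node's error counter, a success subtracts 1, and the node degrades to error-passive past 127 and bus-off past 255.

    // CAN fault confinement: a node that keeps causing errors eventually
    // talks itself off the bus. (Sketch of the spec's TEC rules.)
    class CanNode {
      tec = 0; // transmit error counter

      onTransmitError() {
        this.tec += 8;
      }
      onTransmitSuccess() {
        this.tec = Math.max(0, this.tec - 1);
      }
      state(): "error-active" | "error-passive" | "bus-off" {
        if (this.tec > 255) return "bus-off";       // stops transmitting
        if (this.tec > 127) return "error-passive"; // limited error signaling
        return "error-active";
      }
    }

    // Arbitration: dominant (0) bits win bit-by-bit, so among nodes that
    // start transmitting simultaneously, the lowest frame ID takes the bus.
    function arbitrationWinner(frameIds: number[]): number {
      return Math.min(...frameIds);
    }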


From my fuzzy memory, the CAN bus also supports one of the two wires being severed, obviously with a drop in speed.


Hi @danicgross, glad to see you're iterating in this space! :)

It feels like your comment about the network and community being the most valuable part of the program is probably right. Given that, is it possible to be a Pioneer but not accept the money?


Beyond providing security, reproducible builds also provide an important ingredient for caching build artifacts (and thus accelerating build times) across CI and developer machines. They also can form the basis of a much simpler deploy and update pipeline, where the version of source code deployed is no longer as important. Instead a simple (recursive) binary diff can identify which components of a system must be updated, and which have not changed since the last deploy. This means a simpler state machine with fewer edge cases that works more quickly and reliably than the alternative.
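
As a minimal sketch of that recursive-diff idea (hypothetical names; assumes bit-for-bit reproducible artifact trees; uses Node's crypto and fs):

    import { createHash } from "crypto";
    import { existsSync, readdirSync, readFileSync, statSync } from "fs";
    import { join } from "path";

    // Content-hash an artifact tree. With reproducible builds, an equal
    // hash means "identical subtree": safe to skip for caching or deploys.
    function treeHash(path: string): string {
      const h = createHash("sha256");
      if (statSync(path).isDirectory()) {
        for (const entry of readdirSync(path).sort()) {
          h.update(entry).update(treeHash(join(path, entry)));
        }
      } else {
        h.update(readFileSync(path));
      }
      return h.digest("hex");
    }

    // List only the paths that differ between two artifact trees,
    // pruning any subtree whose hashes already match.
    function changed(oldRoot: string, newRoot: string, rel = "."): string[] {
      const a = join(oldRoot, rel);
      const b = join(newRoot, rel);
      if (!existsSync(a) || !existsSync(b)) return [rel]; // added/removed
      if (treeHash(a) === treeHash(b)) return [];         // identical: prune
      if (!statSync(a).isDirectory() || !statSync(b).isDirectory()) {
        return [rel]; // a changed file (or a file/dir type change)
      }
      const names = new Set([...readdirSync(a), ...readdirSync(b)]);
      return [...names].flatMap((n) => changed(oldRoot, newRoot, join(rel, n)));
    }

(A real version would memoize subtree hashes instead of recomputing them at every level.)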

I'm very grateful for the work that this project has done and continues to do. Thank you!


Cool seed for a community! Seems to lack a place for discussion about submitted ideas, so I'll follow others' lead and discuss here.

The "industry specific deep learning" project is similar to something I'm working on right now. Though I'm not planning to charge for it.

To any folks here interested in this: Are you looking for a tool to get started in ML or for a resource to apply existing ML knowledge to a specific (possibly new) domain?


Cool! Does this sort of thing ever upset users when you guess wrong?


Kudos to HashiCorp for realizing that the complexity of the project was getting away from them and for having the guts to pull the plug in such a public way.


Hear hear.

I work for Pivotal on the fringes of Cloud Foundry. Between us and other Cloud Foundry Foundation members, there are around 200 full-time engineers working on it.

Featuresome, robust, industrial-grade cloud platforms are hard.

