Did Defcon contract with Entropic Engineering for both hardware and software? Or did Defcon contract with EE for the hardware and get the software from Dmitry without a contract?
If it is the former, Defcon could say "you need to work that out with EE and if it turns out that EE wants to revoke the license for the software, we'll have our lawyers talk with your lawyers about what is in the contract."
If it's the latter, then things get trickier and more difficult in many different directions.
Based on https://old.reddit.com/r/Defcon/comments/1eoe4u7/so_the_guy_... ("/u/dmitrygr wrote the firmware for the badges as well at the behest of Entropic"), it's the former. And so if anyone is in trouble with the licensing, it's Entropic, for not having a contract with Dmitry and providing the software to Defcon. Defcon used it with the understanding that they had a license to the firmware.
This would depend on the contract that DEFCON has (had) with Entropic Engineering and what the deliverables were.
It may turn out that Entropic would be the one paying the penalty and footing the bill if one of the people they worked with decided to change the license.
If the license is revoked or changed afterwards, making things right may fall on the vendor rather than the distributor.
While no one is likely to come out of this smelling like roses... my crystal ball says that Entropic is going to come out the worse for it.
Having a "volunteer" working for a for profit company has hints of FLSA violations ( https://www.reddit.com/r/Defcon/comments/1ep00ln/comment/lhj... ). Having a person that Entropic is working with for embedded software put in easter eggs that went counter to the SOW becomes difficult. Entropic relying on software that has a license of "as long as the software author is ok with it" may complicate future business relationships with other clients.
Isn't it kind of too late at that point? If I understand correctly, this notice came after the badges were already distributed. Maybe that would work for future uses of the software, but I don't think constructive notice can be retroactive.
If I can be so bold as to chime in, perhaps "fundamentally flawed" because its design means it will never be more than a very clever BS engine. By design it is a stochastic token generator, and its output will only ever be some shade of random unless a fundamental redesign occurs.
I was also fooled and gave it too much credit; if you engage in a philosophical discussion with it, it seems purpose-built for passing the Turing test.
If LLMs are good at one thing, it's tricking people. I can't think of a more dangerous or valueless creation.
> If I can be so bold as to chime in, perhaps "fundamentally flawed" because its design means it will never be more than a very clever BS engine.
How is your fellow human better? People here seem to spend a lot of time talking about how much their average bosses, coworkers, and juniors are ass. The only reason I know that ChatGPT is based on a computer program is how fast it is. I wouldn't be able to tell its output (not its mannerisms) from a junior's, or even some "senior" programmers'. That itself is quite impressive.
With how much time we've spent on the internet, have we not realized how good PEOPLE are at generating bullshit? I am pretty sure I am writing bullshit at this very moment. This post is complete ass.
I don't think that's true. It helps to know a few obscure facts about LLMs. For example, they understand their own level of uncertainty. Their eagerness to please appears to be a result of subtle training problems that are correctable in principle.
I've noticed that GPT-4 is much less likely to hallucinate than GPT-3, and it's still early days. I suspect OpenAI is still tweaking the RLHF procedure to make their models less cocksure, at least for the next generation.
The other thing is that it's quite predictable when an LLM will hallucinate. If you directly command it to answer a question it doesn't know or can't do, it prefers to BS rather than refuse the command, due to the strength of its RLHF. That's a problem a lot of humans have too, and the same obvious techniques work to resolve it: don't ask for a list of five things if you aren't 100% certain there are actually five answers, for example. Let it decide how many to return. Don't demand an answer to X; ask it if it knows how to answer X first, and so on.
And finally, stick to questions where you already know other people have solved it and likely talked about it on the internet.
I use GPT-4 every day and rarely have problems with hallucinations as a result. It's very useful.
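For what it's worth, here's a minimal sketch of that prompting style (give the model room to refuse, and don't presuppose a fixed number of answers), assuming the openai>=1.0 Python client; the model name, prompt wording, and helper name are illustrative only, not anything official:

    # Minimal sketch of the prompting style described above, assuming the
    # openai>=1.0 Python client; model name, wording, and helper name are
    # illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_with_room_to_refuse(question: str) -> str:
        """Ask a question without demanding a guess or a fixed-size answer."""
        messages = [
            {
                "role": "system",
                # Give the model an explicit out instead of commanding an answer.
                "content": (
                    "If you are not confident you know the answer, say so plainly "
                    "instead of guessing. Return as many or as few items as are "
                    "actually correct."
                ),
            },
            # Ask whether it can answer first, rather than demanding an answer.
            {"role": "user", "content": f"Do you know how to answer this? If so, answer it: {question}"},
        ]
        response = client.chat.completions.create(model="gpt-4", messages=messages)
        return response.choices[0].message.content

    # Brittle version, for contrast: "List the five reasons X happens" presupposes
    # there are exactly five answers, which is exactly the setup that invites BS.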
Many governments do have laws like this, called "sunshine laws". Enforcing them can be difficult though, and often enough they fail to achieve the transparency that is their goal while also substantially hindering the process.
I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
In order to successfully do this, NIST needs to maintain a very large bank of social capital and industry trust that it can spend on very narrow issues.
But over the years there have been enough strange things (Dual EC DRBG being the most notorious) that that trust, at least when it comes to crypto design, simply isn't there. My perception is that newer ECC standards promoted by NIST have been trusted substantially less than AES was when it was released, and I can think of a number of major issues over the years that would lead to this distrust.
The inevitable outcome is that NIST loses much of its influence on the industry, which certainly is not in its own interest.
Everyone also discounts the other reason NIST (with NSA behind the scenes) might be shifty -- they know of a mathematical or computational exploit class that no one else does.
And therefore want to do things-which-seem-pointless-to-everyone-else to an algorithm to guard against it.
Without disclosing what "it" is.
Everyone's quick to jump to the "NSA is weakening algorithms" explanation, but there's both historical and practical precedent for the strengthening alternative: the NSA's changes to the DES S-boxes hardened it against differential cryptanalysis years before that attack was publicly known.
After all, if the US government and military use a NIST-standardized algorithm too... how is using one with known flaws good for the NSA? They have a dual mission.
>I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
That's an incredibly charitable version of their point of view. How's this for their POV: They're angry that they can't see every single piece of communications, and they think they can get away with weakening encryption because nobody can stop them legally (because the proof is classified), and nobody's going to stop them by any other avenue either.
> view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
Without any /sarcasm tags I have to take that at face value, and frankly there are few words to fully describe what a colossally stupid idea (not your idea, I am sure) that is. Belief in containable backdoors is the height of naivety and recklessly plays fast and loose with everyone's personal security, our entire economy and national security.
That is to say, even taking Hollywood Terror Plots into consideration [0], I don't believe there is ever a "mandate to insert a backdoor".
> In order to successfully do this, NIST needs to maintain a very large bank of social capital and industry trust that it can spend on very narrow issues.
Having some "trust to burn" is great for lone operatives, undercover mercs, double agents and crooks that John le Carré described as fugitives living by the seat of expedient alliances and fast goodbyes. Fine if you can disappear tomorrow, reinvent yourself and pop up somewhere else anew. But it is of absolutely no use to institutions holding on to any hope of permanence and the power that brings.
> The inevitable outcome is that NIST loses much of its influence on the industry, which certainly is not in its own interest.
Exactly this. And corrosion of institutional trust is a massive loss. Not for NIST or a bunch of corrupt academics who'd stop getting brown envelopes to stuff their pockets, but for the entire world.
But since you obliquely raise an interesting question... what is NIST's "interest" here?
Surely we're not saying that by spending trust "on very narrow issues" its ultimate ploy is to deceive, defect and double-cross everything the public believes it was created to protect? [1]
I'm all for the game, subterfuge and craft, but sometimes you just bump up against the brute reality of principles, and this is one of those cases. Backdoors always cost you more than you ever thought you'd save, and I've always assumed the people at a place like NIST are smart enough to know that.
> Belief in containable backdoors is the height of naivety
What if it is acceptable for potential enemies to (eventually) also have access to that backdoor, and your goal in providing the backdoor is just to give the masses a false belief that they can communicate secretly?
Obviously those in the know would not use the flawed system, but instead would have a similar/better one without the intentional flaws.
A system that generates plausible, seemingly authoritative information but often makes hard-to-detect errors, ranging from minor mistakes to outright lies, is dangerous. This goes double when the information is either difficult or impossible to verify.
This shouldn't be surprising, since the most effective and dangerous liars tell the truth most of the time.
> but often makes hard-to-detect errors, ranging from minor mistakes to outright lies, is dangerous.
Oh, I agree. However, the peer-review system does mitigate this problem to a useful degree. Without such a system, it would be impossible to have any degree of trust in any paper at all.
> What we're really finding out is that journals are often little better than LLMs as information sources.
It depends on the journal. There are absolutely crap ones out there that need to be ignored. But there are also good ones out there that have earned their reputation. Even there, BS can get through of course -- but to say that such journals are mostly unreliable is seriously overstating the issue.
What percentage of papers in your average, reputable journal have been replicated?
And how can one easily determine, while looking at a particular paper, whether it has been replicated? And whether those doing the replication have any undisclosed ties to the original?
At an epistemological level, the idea of a knowledge source like a journal where the information is only deemed reliable if personally verified seems problematic. Why even have it if all of its uncountable claims are indistinguishable from very clever lies, and attempts to quantify the extent of those lies indicate that they are pervasive?
Peer review is a mixed bag. It's the best system we've come up with, but still can be hit or miss.
Depends on how much the journal is willing to push the authors (predatory journals don't give a heck; prestigious journals usually care more because they've got a rep to uphold). Reviewers can be anyone from an expert in the field, to a competitor who gives you a hard time (usually they're not actively malicious though), to some random prof doing it because the editor couldn't find anyone else, or even a grad student in a prof's lab (it's a common training exercise).
I've had articles with anywhere from 1 to 3 reviewers; there is no standard beyond "the editor is a basic crackpottery filter, there is an external reviewer, and a copy editor makes it fit for viewing".
If the system is not catching "I am a language model", I have zero confidence in its ability to detect crackpottery, much less more insidious things like P-hacking.
That's pure marketing fluff, just like the difference between antivirus and EDR.
Heuristic detection has been a thing for literally decades, and cloud-based antivirus which uses aggregate detection has been around for almost as long. It's notable that NIST does not seem to distinguish between these and just lumps them under endpoint protection.
I don’t entirely disagree, but while deltas can be small, emails are usually pretty small themselves. It might be a single 4k write to each mailbox either way. And remember that appending to a file requires the OS to do a read plus a write, while adding a file to a directory requires a write plus an update to the directory itself (another read+write at minimum). Both require allocating sectors, which means updating the freelist(s). Of course there are filesystems like ZFS that can batch up all of those writes so that they become linear writes instead of random writes, but that’s not available if you’re running Exchange.
I have no doubt that email servers could be better optimized, but only because I have that optimistic belief about all software as a general rule. (The probability that we have already found the optimal way to write any non-trivial program you happen to examine is pretty low.) On the other hand, I doubt that Exchange has made _no_ progress at all over the decades.
The fact that their email server fell over from the load probably has more to do with a poor choice of hardware than any software flaw. Their Exchange server apparently fell over while sending email to “thousands” of people, while the Bedlam DL3 incident at Microsoft involved a mailing list with 13k people on it. In 1997. Maybe that order-of-magnitude difference means that the Senate needs to buy an NVMe disk. (Of course that ignores the fact that asking thousands of people to send you their location via email is pure stupidity; this is the Senate we’re talking about.)
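For what it's worth, here's a rough sketch of the two delivery patterns compared above (appending to one big mailbox file versus writing one file per message), using Python's standard mailbox module; the paths and the message are made up for illustration, and the comments just restate the I/O reasoning rather than anything measured:

    # Rough illustration of the two delivery patterns discussed above, using
    # Python's standard "mailbox" module. Paths and message contents are
    # made up; the comments restate the I/O reasoning, not measurements.
    import mailbox
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "hello"
    msg.set_content("A short message body, well under one 4k block.")

    # mbox-style delivery: one big file per mailbox, each new message appended.
    # The append means reading the file's tail block, rewriting it, and
    # extending the file (sector allocation, free-list update).
    mb = mailbox.mbox("/tmp/example.mbox")
    mb.lock()
    try:
        mb.add(msg)
        mb.flush()
    finally:
        mb.unlock()

    # Maildir-style delivery: one new file per message. That is a data write
    # plus a directory update (its own read+write), plus the same allocation
    # bookkeeping.
    md = mailbox.Maildir("/tmp/example_maildir", create=True)
    md.add(msg)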