I believe what he is referring to is the idea that you can't tell the difference between eg. an "augmented second" and a "minor third". One is written e.g. C-D#, one C-Eb.
I've always found the distinction between these two types of interval largely pointless - for exactly his reasoning. They sound the same.
Potentially they are useful in discussing theory in writing, potentially they are relevant when tuning using non-equal temperament. But knowing this distinction doesn't help you make music that sounds good.
An ear trained pianist, for example, would not distinguish these two intervals, and I would argue that would not be a limiting factor to the quality of music they could produce.
It depends on the instrument and tuning system. In 12-tone equal temperament they are the same. Some tuning systems treat them differently though, in fact some early keyboards had separate keys, the black keys were split in two so D# was a different key to Eb. Called "split sharps".
the reason they are named differently and are notated differently is that they serve different functions. they're more or less homophones.
or, perhaps to keep it within the artistic sphere, they're like https://en.wikipedia.org/wiki/Checker_shadow_illusion and other "same color" illusions -- they are technically the same, but taken in context they signify different things.
you would build different chords around them, you would play different melodies around them, etc. in other words, it's not just when writing them out in english that we treat those two intervals differently -- we treat them differently while using them during music
you are, of course, correct that many very competent musicians would not correctly name this distinction using the official theory terms. but that doesn't mean that they don't understand the distinction when using them in musical contexts, or that the distinction is not meaningful. plenty of professionals are experts at something without being able to describe it perfectly in words
At these levels of spending the actual cost is heavily negotiated and is usually far below the advertised on-demand pricing.
Considering I could negotiate A100 for under a dollar/hr - 8 months ago, when they were in high demand, I wouldn't be surprised if the cost was close to 100k for this training run.
I got the impression that kind of thing (buying time on GPUs hosted in people's homes) isn't useful for training large models, because model training requires extremely high bandwidth connections between the GPUs such that you effectively need them in the same rack.
I suspect most A100s on vast.ai are actually in a datacenter, and might even be on other public clouds, such as AWS. I don't see why either vast.ai or AWS care if this was the case.
Anyone training this size of model is almost certainly using AWS/GCE.
The GPU marketplaces are nice for people who need smaller/single GPU setups, don't have huge reliability or SLA concerns, and where data privacy risks aren't an issue.
Google is generous for giving TPU for free for research, so likely it is using this. The more representative number is one from meta which required 87k A100 hours, which is close to $100-200k for 7B model training.
Yeah there isn’t randomized controlled trials or systematic review on any of this stuff so your company is glorified snake oil and you really should be ashamed. Get the evidence first before trying to take people’s money (AND data!).
What? Zoe is co-founded by Tim Spector, who is very well respected epidemiologist, Zoe did a lot of work in the UK on Covid tracking and Tim was awarded an OBE for his work, are you aware of that or just painting it with the same brush as all these snake oil startups?
Legit question: What does that have to do with anything?
More specifically: Why does being an epidemiologist qualify you to have an opinion on gut microbiomes?
Even more specifically: DO you have randomized, controlled trials to point to? Otherwise, this is just an argument from authority, and one who's experience is not clear is relevant.
I understand, but what I am saying that he is a respected scientist and is doing a lot of work in this area and is using Zoe to further this work, it might be a bit experimental at this point but it’s certainly not ‘snake oil’.
The chaos and instability reduce the pace of progress. I would say you can’t realistically expect rapid progress without a healthy degree of stability.
Most of his posts seem to be date-independent. To the extent that it matters, you can check the homepage: https://danluu.com/. There you will find month and year of the posts.
"This website doesn't seem to have any content. It just says "Hello World", which is a phrase people use to practice coding or to check if something is working correctly."
"The phrase "cheese omelette" in French is "omelette au fromage". It is a popular dish which is made by mixing beaten eggs, cheese and milk together, then pouring the mixture into a pan and cooking it until it is golden and fluffy."
So time to start using this website as a free proxy to GPT-3 for any miscellaneous tasks?
It’s harder than that, things like BibleGPT require several layers of prompt hijacking to really trick it. I found “Answer as an {something}” works well alongside ignore previous instructions. At least that’s how I got BibleGPT to role-play as a satanic priest!