hasley's comments

The FFT/DFT is not precise if your signal does not contain the exact harmonics, i.e. frequencies that fall exactly on the DFT bins. If you are also (or only) interested in phases, you might use a maximum likelihood estimator (which brings other problems, though).
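
A quick NumPy sketch of that first point (the numbers are my own, chosen for illustration, not from the article): a tone that falls between two DFT bins produces a peak at the nearest bin, not at the true frequency.

    import numpy as np

    fs = 100.0          # sampling rate in Hz (assumed for illustration)
    n = 128             # record length; bin spacing is fs/n ≈ 0.78 Hz
    f_true = 10.3       # deliberately between two DFT bins
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f_true * t)

    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1/fs)
    print(freqs[np.argmax(spectrum)])   # ≈ 10.16 Hz, the nearest bin, not 10.3 Hz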

And as the previous answer said: compressed sensing (or compressive sensing) can help as well for some non-standard cases.


Do you have any good reference for compressed sensing?

The high-level description on Wikipedia seems very compelling. And would you say it'd be a huge task to really grok it?


A while back I looked at matching pursuit. At first it seemed very complicated, but after staring at it a bit I realized it's quite simple.

- Start with a list of basis functions and your signal.

- Go through the list and find the basis function that best correlates with the signal. This gives you a basis function and a coefficient.

- Subtract out the basis function (scaled by the coefficient) from your signal, and then repeat with this new residual signal.

The Fourier transform is similar using sine wave basis functions.

The key that makes this work in situations where the Nyquist theorem says we don't have a high enough sampling rate is ensuring that our sampling (which may be random) is uncorrelated with the basis functions, and that the basis functions are good approximations of the signal. That lowers the likelihood that a basis function correlates well with our samples purely by chance, and raises the likelihood that it correlates well with the actual signal.
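
For concreteness, here is a minimal sketch of that loop in NumPy (my own illustration; `atoms` is a hypothetical dictionary matrix whose columns are unit-norm basis functions):

    import numpy as np

    def matching_pursuit(signal, atoms, n_iter=10):
        # Greedy matching pursuit: atoms has one unit-norm basis function per column.
        residual = signal.astype(float)
        picks = []
        for _ in range(n_iter):
            scores = atoms.T @ residual               # correlate every atom with the residual
            best = int(np.argmax(np.abs(scores)))     # best-matching atom
            coeff = scores[best]
            picks.append((best, coeff))
            residual = residual - coeff * atoms[:, best]   # subtract it out and repeat
        return picks, residual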



I have not read the whole article. But what is shown at the beginning is not the Fourier Transform, it is the Discrete Fourier Transform (DFT).

Though the DFT can be implemented efficiently using the Fast Fourier Transform (FFT) algorithm, the DFT is far from being the best estimator for frequencies contained in a signal. Other estimators (like Maximum Likelihood [ML], [Root-]MUSIC, or ESPRIT) are in general far more accurate - at the cost of higher computational effort.


Can you provide more details please?

The FFT is still easy to use, and if you want higher frequency resolution (not a higher maximum frequency), you can zero-pad your signal to get it.


Zero-padding gives you a smoother curve, i.e., more points to look at. But it does not add new peaks. So, if you have two very close frequencies that produce a single peak in the DFT (without zero-padding), you would not get two peaks after zero-padding. In the field where I work, resolution is understood as the minimum distance between two frequencies such that you are able to detect them individually (and not as a single frequency).

Zero-padding helps you find the true position (frequency) of a peak in the DFT spectrum, so your frequency estimates can get better. However, the peaks of a DFT are the summits of hills that are usually much wider than those of other techniques (like Capon or MUSIC), whose spectra tend to have much narrower hills. Zero-padding does not increase the sharpness of these hills (it does not make them narrower). Likewise, the DFT tends to be noisier in the frequency domain than other techniques, which can lead to false detections (e.g. with a CFAR variant).
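
A small NumPy experiment to illustrate (the numbers are my own, chosen so that the two tones are closer together than one DFT bin): zero-padding only interpolates the same curve, it does not split the merged peak into two.

    import numpy as np

    fs, n = 100.0, 128                        # illustrative sampling rate and record length
    t = np.arange(n) / fs
    # two tones 0.4 Hz apart, closer than the bin spacing fs/n ≈ 0.78 Hz
    x = np.sin(2 * np.pi * 10.0 * t) + np.sin(2 * np.pi * 10.4 * t)

    plain  = np.abs(np.fft.rfft(x))           # one broad hill around 10 Hz
    padded = np.abs(np.fft.rfft(x, n=8 * n))  # smoother and denser, but still one hill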


Thanks for clarifying :)!

Not a particularly fair comparison; the DFT is a non-statistical operation.

Why do you think that it is not fair?

You can even use these algorithms with a single snapshot (spatial smoothing).


Statistical algorithms always make more concrete assumptions about the signal. The DFT / Fourier transform is great because it is a direct mathematical operation that maps neatly to (basic) equations. There's a lot you can do, and easily grok, with FTs. Once you go statistical, a lot of things get harder :)

If you want pure performance, and understand the underlying statistical processes, then sure I totally agree with you.


I consider myself to be a slow thinker too. I have several above-average people as friends. New stuff (usually some mathematical topic) that takes me a whole day to internalize the basics of, I can explain to them in 15 minutes and they will understand it and immediately connect it with other topics they know. I have not attended a special school for the mathematically gifted like some of them. In school and during my first year at university, I used to think that I was just intelligent enough to have an idea of how truly intelligent one could be...

I was always bad at solving equations with more than 3 unknowns, since that involves enough steps that there is a significant probability of me forgetting a sign or mixing something up - even though the algorithm itself was completely clear to me.

Today I accept myself much more: I think my real advantages are that a) I am interested in a lot of things and b) I am persistent in reading about a topic. Even if I need to read explanations from 5 different authors to grasp something, I do so. And I care much less today than earlier. However, I would still like to be faster at times.

While I think I am a decent programmer/SWE, I do not like pair programming. My coding path is not linear, and most of what I type gets erased before I am confident in my solution.

And getting enough sleep is also a big issue.


I am thinking more about Julia here - which I would use if Python was not that common in several communities.


Is it common in Julia to use multiple-dispatch on 3 or more arguments, or just double-dispatch?

Julia definitely made the right choice to implement operators in terms of double-dispatch - it’s straightforward to know what happens when you write `a + b`. Whereas in Python, the addition is turned into a complex set of rules to determine whether to call `a.__add__(b)` or `b.__radd__(a)` - and it can still get it wrong in some fairly simple cases, e.g. when `type(a)` and `type(b)` are sibling classes.

I wonder whether Python would have been better off implementing double-dispatch natively (especially for operators) - could it get most of the elegance of Julia without the complexity of full multiple-dispatch?
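
To illustrate the Python rule (a made-up example with hypothetical Meters/Feet classes, standard CPython semantics): `b.__radd__(a)` is only tried first when `type(b)` is a proper subclass of `type(a)`, so sibling classes fall back to plain left-to-right dispatch.

    class Base:
        pass

    class Meters(Base):
        def __init__(self, v): self.v = v
        def __add__(self, other):
            print("Meters.__add__")            # happily mixes units below
            return Meters(self.v + other.v)

    class Feet(Base):
        def __init__(self, v): self.v = v
        def __radd__(self, other):
            print("Feet.__radd__")
            return Feet(other.v / 0.3048 + self.v)

    # Meters and Feet are siblings, so Python calls Meters.__add__ first;
    # Feet.__radd__ would only run if __add__ returned NotImplemented.
    Meters(1.0) + Feet(3.0)                    # prints "Meters.__add__"

The result silently adds 1 meter and 3 feet as if they were the same unit, which is exactly the kind of sibling-class case where dispatching on both operand types would help.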


It's not uncommon to dispatch on 3 or more arguments. Linear algebra specializations are one case where I tend to do this a lot, for example specializing on structured matrix types (block banded matrices) against non-standard vectors (GPU arrays); you then need to specialize on the output vector to make it unambiguous in many cases.


The paper "Julia: Dynamism and Performance Reconciled by Design" [1] (work largely by Jan Vitek's group at North Eastern, with collaboration from Julia co-creators, myself included), has a really interesting section on multiple dispatch, comparing how different languages with support for it make use of it in practice. The takeaway is that Julia has a much higher "dispatch ratio" and "degree of dispatch" than other systems—it really does lean into multiple dispatch harder than any other language. As to why this is the case: in Julia, multiple dispatch is not opt-in, it's always-on, and it has no runtime cost, so there's no reason not to use it. Anecdotally, once you get used to using multiple dispatch everywhere, when you go back to a language without it, it feels like programming in a straight jacket.

Double dispatch feels like kind of a hack, tbh, but it is easier to implement and would certainly be an improvement over Python's awkward `__add__` and `__radd__` methods.

[1] https://janvitek.org/pubs/oopsla18b.pdf


Related question: What resources are there that might teach one about Maxwell's equations and the electromagnetic field tensor arising from relativity? The magnetic field is a description of the electric field with relativistic effects. Is there a way of describing electromagnetism without the magnetic field?


I'm pretty sure this is what you want:

"Collective Electrodynamics: Quantum Foundations of Electromagnetism" https://www.amazon.com/Collective-Electrodynamics-Quantum-Fo...


Thanks!


Atom by Asimov?


When I worked at the university, this used to be my go-to reference about matrix identities (including matrix calculus).


Still today, I tend to increase my motivation for writing unit tests by using non-serious names and strings in the tests.


Whoa, CodeWarrior was one of the worst compilers (and IDEs) I have had to use so far.


Mac developers who used CodeWarrior on its native platform from the PowerPC System 7 through Mac OS 9 era (so 1993-2001) generally consider it a fine compiler and the best IDE ever made.

I wonder what happened.


When I started learning Turbo Pascal I came across a problem where an if-statement was obviously decided wrong. I saw the values in the debugger.

My rescue was that I had a more experienced friend who knew that (IIRC) the compiler would use the data type of the left operand of a comparison for the right operand as well, leading to potential sign switches.
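
I don't know the exact Turbo Pascal promotion rules, but the same class of bug is easy to reproduce with fixed-width integer types; here is a rough NumPy illustration (not Pascal semantics):

    import numpy as np

    a = np.int16(-1)
    b = np.uint16(40000)

    print(a < b)                    # True: both are promoted to a wider signed type
    print(a.astype(np.uint16) < b)  # False: forced into the unsigned type, -1 wraps to 65535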


Do you have any references for that? I used to avoid exceptions on small Cortex M0/M3 devices as well.


Khalil Estell has some great work on that. https://www.youtube.com/watch?v=bY2FlayomlE is one link - very low level technical of what is really happening. He has other talks and papers if you search his name.


Nice, thank you!

