That was my answer as well. My reasoning: let x be the probability of purchasing disability coverage; then the probability of purchasing collision is 2x. From conditions (b) and (c) we know that 2x·x = 0.15, so we can calculate the probability of neither applying as 1 - (2x + x) + 2x·x = 0.18 + 0.15 = 0.33.
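The derivation can be checked numerically; a quick sketch (assuming, as in the problem, that the two purchases are independent):

```python
import math

# P(disability) = x, P(collision) = 2x, and (by independence) P(both) = 2x*x = 0.15
x = math.sqrt(0.15 / 2)                 # x ~= 0.2739

p_either = 2 * x + x - 2 * x * x        # inclusion-exclusion
p_neither = 1 - p_either
print(round(p_neither, 2))              # -> 0.33
```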
I can see the trap of forgetting to add 2x·x in the calculation (since we don't want to double-count the case where they purchase both coverages). Choices (d) and (e) follow from forgetting to subtract the probability of purchasing either from 1 (while making the same mistake of not removing the double count). I'm curious where the .48 distractor comes from. Coming up with good distractors is the secret to making a multiple-choice test hard. Students tend to think multiple choice will be easier than a fill-in-the-blank test, but they forget that on free-answer exams they can still get partial credit, whereas on a multiple-choice test they lose all credit for a small mistake.
I taught a lot of high school/junior high math for college students back in the day, and typically the midterm exams and quizzes would get partial credit while the final was multiple choice. We had very specific rubrics for partial-credit grading, and this was typical of the classes I took as well.
I've built at least a hundred little tools here and there to improve my productivity, to the point where a single keypress is all that's needed in many contexts to run a specific scenario.
I've tried to share them with my fellow devs to make their lives easier.
It sounds like you deserve a new job with more challenging peers. Environments like that can suck the life out of a bright flame.
Reminds me of a younger self going to interview for the technical team at a bank. I arrived full of youthful zest, thoroughly aced the interview. One of the interviewers, who already looked rather zombie-like, accompanied me to the exit. As I walked off he lit a cigarette and shouted after me, "don't accept the offer, this is not a place for someone like you!"
Made a variety of friends over the years who have shared stories of that bank. It's notorious, the interviewer was absolutely right.
I had a similar experience in 2000, during the .com bubble: I interviewed at an e-commerce company serving libraries (Amazon had not yet put them out of business). All C++ CGI. I asked a lot of questions about the stack and dev speed. They had looked at Java but didn't have experience with it, so they were just going to plug along in C++. Their customers, libraries and book stores, didn't care.
They passed. Reason was I was too much of a cowboy and they were afraid I'd rewrite their whole system within a few months of starting.
I can read that many ways these days but I'm glad they passed ...
The performance data from the linked arXiv paper: 965.9±3.4 [s] on an AWS p3.16xlarge instance for a program that calculates the Hamming distance between two 8-digit hexadecimal numbers.
Estimated sec. per cycle: 0.794 [s].
Also, from the appendix, it takes roughly 116 seconds on the same AWS p3.16xlarge instance to calculate fibonacci(n) for n=5.
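For scale, the same Hamming-distance computation in ordinary Python (the operand values are my own example), plus the cycle count implied by the reported figures:

```python
# Hamming distance between two 8-digit hexadecimal numbers:
# XOR the operands, then count set bits in the result.
a, b = 0xDEADBEEF, 0xCAFEBABE
hamming = bin(a ^ b).count("1")
print(hamming)                          # -> 10

# Cycle count implied by the paper's numbers: total runtime / time per cycle.
total_s, s_per_cycle = 965.9, 0.794
print(round(total_s / s_per_cycle))     # -> 1216 cycles
```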
Every single oscillating particle needs to travel a greater absolute distance when its speed increases, so it will oscillate at a lower frequency for an external observer, thus causing time dilation.
Could someone shed light on the economics of working on this kind of project?
Isn't developing a third-party client for a platform you don't control, and in some sense compete with, typically a futile exercise?
Doesn't it go against most ToSes, either directly (e.g. Discord/WhatsApp happily ban accounts using third-party clients) or indirectly (I guess no proprietary chat platform will be exactly happy to have third-party clients competing with its official, controlled app)?
I mean, how do people justify building a business on this, given that it essentially means playing on other people's playground by rules that can be changed at any time? Tomorrow Slack could decide to disallow any third-party apps, and you're done.
It’s very risky to build a service that relies directly on the API layer of other commercial services. If any of these chat apps change their APIs to stop Beeper traffic, then Beeper will lose users. Some services would cause a larger loss than others.
I think the economics are simple. The company is two devs and they charge $10/month. If the costs are $4/month/user, then they need roughly 7k users to clear a quarter-million a year in profit per dev. This isn’t a unicorn, but it has the potential to bring in good money while it lasts.
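Sanity-checking those unit economics:

```python
price, cost = 10, 4                    # $/user/month
margin = price - cost                  # $6 per user per month
target = 2 * 250_000                   # $/year to pay two devs $250k each
users_needed = target / (margin * 12)
print(round(users_needed))             # -> 6944 users
```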
TIL: apart from infinities (numbers larger than any real number in absolute value) there are also infinitesimals (nonzero numbers smaller than any positive real number in absolute value).
Edit: also, those infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632.
Those clerics were ahead of their time. They probably would have banned large cardinals as well (infinities so large we can't prove whether or not they exist).
Infinitesimals and infinities are nice if you're doing some sort of bucketing logic.
tiny = infinitesimal
huge = infinity
N = number to be bucketed
tiny <= N < 10 --> bucket 1
10 <= N < 20 --> bucket 2
...
X <= N < huge --> last bucket
This removes edge cases you need to test for if you're trying to bucket positive values. This may not be something you've had to do, but I've had reason to want this before on a few occasions.
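A runnable version of that bucketing scheme (Python; using math.inf to close off the top so positive inputs need no edge-case checks; the edge values are my own example):

```python
import bisect
import math

# Bucket i covers edges[i-2] <= N < edges[i-1]; math.inf closes off the top,
# so any positive N lands in some bucket with no special-case checks.
edges = [10, 20, 30, math.inf]

def bucket(n: float) -> int:
    # The index of the first edge strictly greater than n gives the bucket.
    return bisect.bisect_right(edges, n) + 1

print(bucket(5))      # -> bucket 1 (tiny <= N < 10)
print(bucket(10))     # -> bucket 2 (10 <= N < 20)
print(bucket(1e300))  # -> bucket 4 (30 <= N < huge)
```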
Infinitesimals do not exist in the standard real number system. 'tiny' here seems closer to the smallest positive representable (normal or subnormal) IEEE 754 float/double value, which is an ordinary real number.
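Concretely, the smallest positive IEEE 754 doubles, in Python (math.ulp requires 3.9+); note that unlike a true infinitesimal, halving the smallest one underflows to exactly zero:

```python
import math
import sys

print(sys.float_info.min)   # smallest positive *normal* double: ~2.225e-308
print(math.ulp(0.0))        # smallest positive *subnormal* double: 5e-324
print(math.ulp(0.0) / 2)    # -> 0.0: halving it rounds to zero
```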
Infinitesimals tautologically exist in any finite representation of numbers. For floats, it's the smallest representable positive number. For integers of any type, it's 1.
As for how common they are, we learn about them in any introductory calculus course when defining derivatives. You come across the idea whenever discussing limits, if somewhat obliquely.
If I learned about it in high school math, and again in "real" math courses at my university, I'd say it's pretty standard.
I've never heard of that, and your definition of 1 as an infinitesimal is incompatible with its properties (infinitesimal + infinitesimal + infinitesimal is greater than a non-infinitesimal 2?!). I don't see a mention on Wikipedia, and it goes against the plain reading of "in-finite-simal".
Also, you seem to be conflating "common" with "standard". "Standard" is a mathematical term here. Infinitesimals are handled hand-wavily in standard analysis (epsilon-delta arguments are the rigorous alternative), but they exist rigorously in nonstandard analysis.
I guess I'm using the wrong term, then. I often find it useful to have a concept of "smallest representable positive number," specifically for handling edge cases such as the one I gave up-thread. I see how that doesn't map to infinitesimal as defined in the shared link.
There are other instances where I've had a need for such a smallest positive number, where logic is simplified compared to special-casing 0. Whether or not there's an agreed-upon term for it, I know I've found value in it for programming tasks.
When I need such a thing, it is almost invariably in comparisons, so I am not doing arithmetic with multiple instances of that smallest representable positive number.
The theory of infinitesimals is intimately connected to how analysis (differential and integral calculus) was first formulated. Leibniz and Newton understood that you could approximate instant rates of change, or areas under a curve, by taking smaller and smaller "slices" of a function, but they did not yet have the rigorously formalized notion of limits that modern analysis depends on. So they developed an arithmetic of infinitesimals, numbers greater than zero but less than any positive real number¹, with some rather ad-hoc properties to make them work out in calculations.
Philosophical problems surrounding the perplexing concept of infinity were already hotly debated by the ancient Greeks. Aristotle made an ontological distinction between actual and potential infinities, and argued that actual infinity cannot exist but potential infinity can. This was also the consensus position of later scholars, and became a sticking point in the acceptance of calculus because infinitesimals (and infinite sums of them) were an example of the ontologically questionable actual infinities.
As I mentioned before, standard modern analysis is based on limits, not infinitesimals, and requires no extension of the real numbers. Indeed, the limit definition of calculus only requires the concept of potential infinities, so philosophers should be able to rest easy! But infinitesimals still occur in our notation, which is largely inherited from Leibniz. We say that the derivative of y(x) is dy/dx, or the antiderivative of y(x) is ∫ y(x) dx, and while acknowledging that dy and dx are not actual mathematical objects, just syntax, we still do arithmetic on them whenever it's convenient to do so! For example, when we make a change of variables in an integral, we can substitute x = f(t) for some f, and then say dx/dt = f'(t) and "multiply by dt" to get dx = f'(t) dt to figure out what we should put in the place of the "dx" in the integral.
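A concrete instance of that substitution recipe, with f(t) = t² chosen as an illustrative example (assuming t ≥ 0 so that √(t²) = t):

```latex
\int \sqrt{x}\,dx
\;\xrightarrow{\;x = t^2,\;\; dx = 2t\,dt\;}\;
\int t \cdot 2t\,dt
= \frac{2}{3}t^3 + C
= \frac{2}{3}x^{3/2} + C
```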
Actual infinitesimal numbers are not dead, either; they're used in a branch of analysis called nonstandard analysis, which formalizes them in the logically rigorous manner that is now expected from mathematics.
________
¹ Not that they had a rigorous theory of real numbers, either, that came in the 19th and early 20th century. In fact what we now understand as formal, axiomatized math didn't really exist before the 19th century at all!
I'm sorry, but if you could build a model to predict markets, why would you post it to Kaggle for a $40k prize instead of applying the model to your own brokerage account?
It is much harder to turn a model into a profitable trading strategy than people realize. Apart from transaction costs, risk management and market impact there are also a lot of small operational details which can make or break your execution. One example I vaguely recall was that the details of how a specific foreign exchange conducted its closing auction could make a substantial difference to a strategy that involved executing there alongside other trading venues.
The payoff for getting these operational details right or wrong is massively asymmetrical. If you get everything right, you'll only do as well as your model lets you. But if you get anything wrong, you run a real chance of losing far more money than you could have hoped to make!
Even just validating your strategy on historical data (ie back-testing) is harder than it sounds. If you make a mistake that leaks information to the code you're testing, you can end up with a much rosier return and risk profile than you really have. Another way to lose money when you go put your model into action.
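One classic way such a leak happens (a toy sketch with synthetic data, not any real strategy): centering your "training" returns with a mean computed over the full history, which silently includes the test period.

```python
import random

random.seed(0)

# Synthetic price series as a stand-in for historical data.
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

split = len(returns) // 2  # train on the first half, test on the second

# Leaky: the mean is taken over ALL returns, so the "training" features
# already carry information about the test period.
full_mean = sum(returns) / len(returns)
leaky_train = [r - full_mean for r in returns[:split]]

# Correct: use only information available up to the split point.
train_mean = sum(returns[:split]) / split
clean_train = [r - train_mean for r in returns[:split]]
```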
If you get over these challenges and run your strategy successfully for a while, other market participants are going to start adjusting against it and you have to adjust in turn. You can't just "set and forget".
I should note that I am far from an expert on any of this, though! I just know enough to not trade with serious money—my real savings are all in index funds I don't touch, thank you very much :).
I believe what you are referring to is the fix. No foreign exchange market that I am aware of has a closing auction.
I have heard of some quants trading foreign exchange markets, agreeing to trade at the fix with their counter-party, and not realising that traders often manipulate the fix, resulting in the quant's strategy appearing not to work. It is almost comical (I worked in finance, though not in FX; everyone knew this was going on for decades before the SEC started fining people) that someone who managed money was making this error.
You are 100% correct about all the other stuff. Lots of issues with "production"...that is why financial firms employ traders/risk people/etc. Most people who trade themselves tend to go for lower-frequency strategies that they can implement personally. I actually don't think there are huge barriers, smaller investors have a huge advantage (when you trade at scale, the market moves against you) but you have to work with what you have and realise that you will get crushed if you try to replicate what someone with more money is doing.
Also, data. Data is expensive, and a huge fixed cost.
I was half-remembering some story I heard a while ago about the work needed to arbitrage between some US ETF and some securities on a Brazilian exchange, or something to that effect. I don't remember the details, and I'm not even sure that specific example was real, but it stuck out as a great illustration of the complexities involved in executing a strategy vs just coming up with a model.
Nothing foreign-exchange-specific there although, now that you mention it, dealing with different currencies is another problem you can run into with strategies.
Data quality is another real-world problem. There can be typos, deliberate biases, omissions that need to be guesstimated, sudden structural changes (e.g. a stock split event or a change in reporting cycles) etc.
There's also heterogeneous data sources to aggregate and consolidate, each with their own way of measuring things. e.g. You can see how different states and countries are tracking Covid related stats, they all have their own metrics and interpretations. Some even change the way they report overnight. Companies will similarly report their data in different ways.
Data cleansing is its own science and art for this reason, quite separately from developing any algos on it. It's a practical problem that's easy to overlook when you're just looking at ML transformations from input to output data sets.
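A trivial illustration of that consolidation problem (the field names, formats, and figures are invented): two sources reporting the same daily count under different keys, date formats, and units.

```python
from datetime import datetime

# Hypothetical sources reporting the same daily figure differently.
source_a = {"date": "2021-03-01", "cases": 120}
source_b = {"Date": "01/03/2021", "new_cases_per_100k": 1.5, "population": 8_000_000}

def normalize_a(rec):
    # ISO date, absolute count: nothing to convert.
    return (datetime.strptime(rec["date"], "%Y-%m-%d").date(), rec["cases"])

def normalize_b(rec):
    # Day-first date, rate per 100k: convert back to an absolute count.
    date = datetime.strptime(rec["Date"], "%d/%m/%Y").date()
    absolute = rec["new_cases_per_100k"] * rec["population"] / 100_000
    return (date, round(absolute))

print(normalize_a(source_a))
print(normalize_b(source_b))  # both normalize to the same record
```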
This is basically why I’ve never seriously considered doing it myself. I had a neat idea in about 2005 which I tested on historical data, and it beat buy-and-forget on every share I tried except for Google, and it only had one free parameter.
But, even if I’d implemented it perfectly, and even if the algorithm has survived the financial crash, it would’ve only worked if I could trade for free, and other people copying the algorithm would probably have made it stop working.
Consider that you have some extra money: what do you do with it? Do you put it (or keep it) in a bank account? That is an investment choice you have. If you have more money than you need for living, then you are already investing it in some way, maybe in a bank account. And you make this investment decision following some algorithm in your head. So if your algorithm was so great, wouldn't it make sense to use it rather than put money into a low-interest bank account?
My point is when it comes to investing, inaction IS action too. Therefore an investment algorithm does not need to beat the market. It just needs to beat doing nothing.
Consider that if you have earnings you can put money into an IRA account and then trade with it almost for free.
But the transaction fee is not 1% if you are a professional. It’s now higher, but there was a time you could easily get closer to 0.05% with some effort and nontrivial but not huge volume.
Iirc, last I checked, BitMEX charged 0.075% to take liquidity and paid 0.025% to provide liquidity (which, it should be noted, is more than a tick; so market making on BitMEX is free money in an unpredictable market, which has made competing there technically expensive).
INET had a similar incentive structure before they were bought by Nasdaq, as did BATS when it started - it’s a common way to jumpstart liquidity in a new exchange.
Robinhood and IB will let you trade for free today (with other hidden and hard to quantify costs related to their order flow transactions instead)
So there’s no general solution (the details keep changing), but there’s likely a way to make a nice profit at 0.1%.
>if you could build a model to predict markets, why will you post in to Kaggle to get $40k in prize instead of applying this model to your own broker account?
Because things aren't that simple. I find this argument very similar to that of devs who complain, "I wrote this piece of code that made my company $3M in revenue over the past year, but I only got paid a fraction of that; I'm getting ripped off, oh woe is me." If you really could do that alone, you would have made it on your own and made that much money already.
Turns out, other people at the company are actually doing tons of work to make it possible to make that much revenue off your code. Same applies here. It isn't just as simple as having one good model at a single point in time to be able to make tons of money off it, there are a lot of other people doing their own work at those finance shops that make it possible for those models to bring in tons of money.
I'd be surprised if the data Jane Street provides isn't some form of high-frequency tick data. It's relatively easy to make accurate short-term predictions with such data, the challenge is being fast enough that behemoths like Jump and Citadel don't get all the good trades before you, leaving you with just the bad predictions. This requires a huge investment in infrastructure and connectivity, beyond the reach of individuals who aren't already quite wealthy.
This competition doesn’t predict markets. It’s a manufactured game to resemble things JS does. And even if it were an accurate representation, the competition is run on 128 unknown features that you would have to discover for yourself. And trust me, >90% of the work is identifying the features.
Well, because quant trading isn't about `import xgboost`; you need sustainable infra to handle API failovers, bad data...
Not to mention risk management, which is 50% of what quant trading is about.
The data provided is anonymized, but it's probably a mix of lagging measurements (moving averages, RSI...) and maybe some flow data.
Quant trading isn't really about finding "secret stuff"; most profitable strats you can deploy are based on stat-arb, basis trading, or even just delta-neutral funding farming and the like.
HFT firms aren't trying to predict "the market" as a whole - just small eddies of it. A typical example of this is arbing names at the bottom of index fund rebalances. Speed is important mostly to make sure someone else doesn't hit the arb first.
I'm a fan of heavy key-mapping customization to boost productivity.
The most productive remapping I've made so far is to globally remap (via xkb) Left Alt + HJKL to the respective arrow keys. It works in all GUI applications that rely on arrow keys for navigation; plus, the standard Ctrl + Shift + Left Alt + H (Left Arrow) works to select the previous word in text fields (e.g. in a browser).
The next one is remapping Left Ctrl to Left Shift and pressing it with the edge of the palm just below the pinky (I'm not sure what that part of the hand is properly called) instead of using a finger. This works best on mechanical keyboards without bevels.
I also find it quite useful to remap the keys 5-0 to Right Shift + Q (5), W (6), E (7), R (8), D (9), F (0). I find these keys easier to reach and type without mistakes than the standard 5-0 keys, which you have to stretch your fingers for.
For example, to type "()" I use Left Ctrl (remapped to Shift) + Right Shift + D, F, which is easier to type without my fingers leaving the home row.
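One way to implement the Alt+HJKL remap described above is an xkb symbols fragment along these lines (a sketch only; file placement, how you load it via xkbcomp/setxkbmap, and the level details vary by setup):

```
// Hypothetical fragment: make Left Alt a Level3 chooser and put arrows
// on level 3 of H/J/K/L (AC06..AC09 are the physical h/j/k/l keys).
partial modifier_keys
xkb_symbols "hjkl_arrows" {
    key <LALT> { [ ISO_Level3_Shift ] };
    key <AC06> { [ h, H, Left  ] };
    key <AC07> { [ j, J, Down  ] };
    key <AC08> { [ k, K, Up    ] };
    key <AC09> { [ l, L, Right ] };
};
```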
I use Hammerspoon and Karabiner-Elements on macOS to rebind keys (including the examples you gave). There are example configs and repositories available for these applications.
On Windows, I use AutoHotkey, though mainly for gaming (be aware it can get you banned in some games; investigate this beforehand).
On Linux, I have a bit more difficulty with all of this because:
1) There's no standard application, configuration, or even desktop. As of now I mainly use GNOME, but my preference would be a more lightweight WM/DE on some of my devices.
2) I sometimes use a Linux desktop in a VM, from macOS.
If you don't remember a particular command, why not look it up the first time and then assign a personalized alias that's easy to remember?
Like with the `tar` example: with `zsh` you can even define a suffix alias (`alias -s`), so you can just type `./your-file.tgz` and `zsh` will invoke `tar -xzf ./your-file.tgz` or whatever you prefer.
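For reference, the suffix-alias line itself (goes in ~/.zshrc; the tar flags are just the example from above):

```shell
# Any command line consisting of a path ending in .tgz runs tar -xzf on it
alias -s tgz='tar -xzf'

# Now typing the path alone extracts the archive:
#   ./your-file.tgz   ->   tar -xzf ./your-file.tgz
```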
Funny, I actually created my own alias which points to http://cheat.sh/
...that of course assumes I have an internet connection. Otherwise, fall back to man pages as well as my own directory of text files with reference info on commonly used commands - employing my own examples.
Doesn't seem to be a particularly tricky question unless I'm missing something; just basic probability theory calcs.