Well, I'm the guy in the black shirt who did the demo.
If you liked that lecture, I've already started on some Verilog implementations.
This is an example multiplication, 8-bit * 8-bit -> 16-bit, unpacked (20 bits). It differs from standard floating point in that the fractions are stored as two's complement. It takes a little bit of wrapping your head around, but the hidden bit for negative numbers is actually -2! Moment of zen.
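For anyone trying to wrap their head around that, here is a tiny hedged sketch (the 3-bit fraction and the helper function are mine, purely for illustration; regime/exponent handling is omitted) of how the significand works out when the fraction bits are read straight from the unnegated pattern and the hidden bit carries weight -2 for negative values:

    #include <stdio.h>

    /* Sketch: significand of a posit-style number whose fraction is kept
     * in two's complement.  For a positive value the hidden bit is +1,
     * so the significand lands in [1, 2); for a negative value the hidden
     * bit is -2, so it lands in [-2, -1).
     */
    static double significand(int sign, unsigned frac_bits, int nfrac) {
        double frac = (double)frac_bits / (double)(1u << nfrac); /* 0.fff in [0, 1) */
        double hidden = sign ? -2.0 : 1.0;                       /* the "moment of zen" */
        return hidden + frac;
    }

    int main(void) {
        /* 3 fraction bits 101 -> 0.625 */
        printf("%f\n", significand(0, 0x5, 3));  /* prints  1.625 */
        printf("%f\n", significand(1, 0x5, 3));  /* prints -1.375 */
        return 0;
    }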
Way cool -- I realized that when I checked out your profile just after posting the comment (what timing). I discovered the video a few days ago while looking for precise/compact interval representations. Interesting work indeed.
Has there been much traction for getting major chip manufacturers to implement this? I know they're all looking for the next big thing and Intel is working on specialized neuromorphic chips. A general "drop-in" replacement for floating point seems like low-hanging fruit for a general-purpose win.
Well, seeing as John invented these numbers literally two months ago, I haven't seen any traction yet! But I am pursuing fundraising opportunities. In the demo I showed how you can effectively reduce the bitwidth to 8 bits and still train in a very trivial machine learning exercise. I'm currently enrolled in the Udacity machine learning class and implementing everything in parallel in Julia so that I can try more complicated architectures using posits.
I do have a hardware architecture in mind for how to very effectively and efficiently execute machine learning calculations using posits.
I'm interested in representing angles / points on the circle; 3D unit vectors / points on the sphere; and unit quaternions / points on the 3-sphere using 1, 2, or 3 relatively low-resolution posits, under stereographic projection.
How efficient do you think regular C or GPU code (on existing hardware) can be made for compressing a 32-bit float to e.g. a 16-bit posit, and for expanding the posit back into a 32-bit float?
I don't think it can be made that efficient in software. Is there a particular reason why you need 16 bits? A 32-bit float is going to be better than a 16-bit posit almost always (posits are better than equivalently sized floats, but they're not that good), and once it's in posit representation, do you have a way of doing mathematical operations on them?
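For a concrete sense of what the expand half involves in plain C, here's a rough sketch (es = 1 assumed for the 16-bit posit; this is my own illustration, not code from SigmoidNumbers). The data-dependent regime loop is a big part of why a software round trip is hard to make cheap, and the compress direction is the mirror image plus round-to-nearest:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: expand a 16-bit posit (es = 1 assumed) into a float,
     * the classic way -- negate first, decode in the positive domain. */
    static float posit16_to_float(uint16_t p) {
        const int es = 1;
        if (p == 0)      return 0.0f;
        if (p == 0x8000) return NAN;                 /* the "not a real" pattern */

        int sign = p >> 15;
        uint16_t body = sign ? (uint16_t)(-p) : p;   /* two's complement negate */

        /* Regime: run of identical bits after the sign bit. */
        int bit = 14, first = (body >> bit) & 1, run = 0;
        while (bit >= 0 && (int)((body >> bit) & 1) == first) { run++; bit--; }
        bit--;                                       /* skip the terminating bit */
        int k = first ? run - 1 : -run;

        /* Exponent bits (up to es of them), zero-padded if they ran off the end. */
        int exp = 0, got = 0;
        for (; got < es && bit >= 0; got++, bit--)
            exp = (exp << 1) | ((body >> bit) & 1);
        exp <<= (es - got);

        /* Fraction with hidden bit 1. */
        double frac = 1.0;
        for (double w = 0.5; bit >= 0; bit--, w *= 0.5)
            if ((body >> bit) & 1) frac += w;

        double val = ldexp(frac, k * (1 << es) + exp);
        return (float)(sign ? -val : val);
    }

    int main(void) {
        printf("%g %g %g\n",
               posit16_to_float(0x4000),   /*  1.0 */
               posit16_to_float(0x5000),   /*  2.0 */
               posit16_to_float(0xC000));  /* -1.0 */
        return 0;
    }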
This is just for data compression, not for computation directly. 32 bits is often overkill for transmission/storage of rotations, unit vectors, geographical locations, unit quaternions, and the like. Depending on the use case 8 bits might be enough, or 12, or 16.
To actually do computation I would convert the posits back into 32-bit floats (or e.g. in the Javascript case, 64-bit floats), and then take the inverse stereographic projection.
[Stereographic projection is extremely cheap; for each data point it only requires one division and some additions and multiplications.]
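For concreteness, a minimal sketch (plain C, projecting from the north pole; the quantization of (X, Y) to posits is left out) of the forward and inverse maps for the unit-sphere case:

    #include <stdio.h>

    /* Stereographic projection of a 3D unit vector from the north pole
     * (0, 0, 1) onto the plane z = 0, and its inverse.  The projected
     * (X, Y) pair is what would then be stored as two low-resolution
     * posits; each direction needs exactly one division per point. */
    static void project(double x, double y, double z, double *X, double *Y) {
        double s = 1.0 / (1.0 - z);      /* blows up only at the pole itself */
        *X = x * s;
        *Y = y * s;
    }

    static void unproject(double X, double Y, double *x, double *y, double *z) {
        double r2 = X * X + Y * Y;
        double s = 1.0 / (1.0 + r2);
        *x = 2.0 * X * s;
        *y = 2.0 * Y * s;
        *z = (r2 - 1.0) * s;
    }

    int main(void) {
        double X, Y, x, y, z;
        project(0.6, 0.0, 0.8, &X, &Y);  /* (0.6, 0, 0.8) lies on the unit sphere */
        unproject(X, Y, &x, &y, &z);
        printf("%f %f %f\n", x, y, z);   /* recovers 0.6 0.0 0.8 */
        return 0;
    }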
Posits look too good to be true!
Is there a reason why regime bits do not include the sign bit?
Then both "0 0001" and "1 1110" could be interpreted as 4 regime bits. Even better, we could include the last flipped bit as well and we would have 5 bits.
Edit: Well. I see it would result in losing the values 0 and 1.
Another question:
Since it is a fixed length of 4 bits (for N = 32), why don't we just extract the 4-bit value? Then we could represent 2^4 regimes, this time without losing 0 and 1.
I wouldn't screw around too much with the sign bit. The way it's laid out is really kind of cool... Negation is simple two's complement.
In my software posit library (https://github.com/interplanetary-robot/SigmoidNumbers), which is intentionally strictly binary and not backed by IEEE floats, I did everything by first inverting negative numbers and doing the decode in the positive domain.
As I design the hardware, it's actually better NOT to do a two's complement inversion for the decode, and to keep the fraction as two's complement!
Also, the 4-bit posit was just a simplification to help you understand the structure from a constructive point of view. Posits can be of arbitrary length; they have a property I call isomorphism: appending zeros exactly preserves the value of a short posit when it is increased in length, and conversely, rounding a long posit to a shorter one reports the "nearest representable value".
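A quick, hedged illustration of both points on raw bit patterns (the example values are mine, with es = 1 assumed at both widths, derived from the decoding rule rather than taken from the slides):

    #include <stdint.h>
    #include <stdio.h>

    /* Two properties on raw bit patterns (es = 1 assumed):
     *   - negating a posit is two's complement negation of its bits
     *   - appending zeros to a short posit preserves its value exactly */
    int main(void) {
        uint16_t one = 0x4000;                    /* posit16 pattern for  1.0 */
        uint16_t minus_one = (uint16_t)(-one);    /* 0xC000: pattern for -1.0 */
        printf("%04X\n", minus_one);

        uint8_t  p8  = 0x50;                      /* posit8 pattern for 2.0 (es = 1 assumed) */
        uint16_t p16 = (uint16_t)p8 << 8;         /* 0x5000: still exactly 2.0 as a posit16 */
        printf("%04X\n", p16);
        return 0;
    }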
You need to be a lot more explicit about what you’re asking.
Look at the slides, searching down for “At nbits = 5, fraction bits appear”. Notice that every possible bit pattern is used and meaningful.
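To spell out why there is no fixed 4-bit regime field to extract: the regime is run-length encoded, so its width varies per value. A hedged sketch (assuming a non-negative pattern; the helper name is mine):

    #include <stdint.h>
    #include <stdio.h>

    /* The regime is a run-length code, not a fixed-width field.  For a
     * non-negative posit pattern, k comes from the run of identical bits
     * after the sign bit (run of 1s -> k = run - 1, run of 0s -> k = -run).
     * Short runs leave more bits for exponent and fraction, long runs buy
     * dynamic range, and every bit pattern stays meaningful. */
    static int regime_k(uint32_t p, int nbits, int *regime_len) {
        int bit = nbits - 2;                      /* first bit after the sign bit */
        int first = (p >> bit) & 1, run = 0;
        while (bit >= 0 && (int)((p >> bit) & 1) == first) { run++; bit--; }
        *regime_len = run + (bit >= 0 ? 1 : 0);   /* plus the terminating bit, if any */
        return first ? run - 1 : -run;
    }

    int main(void) {
        int len, k;
        k = regime_k(0x40, 8, &len);   /* 0b01000000: run of one 1  -> k = 0 */
        printf("k=%d regime bits=%d\n", k, len);
        k = regime_k(0x7E, 8, &len);   /* 0b01111110: run of six 1s -> k = 5 */
        printf("k=%d regime bits=%d\n", k, len);
        return 0;
    }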
If you are unsure why you were downvoted, I strongly suspect it's because politely giving and receiving due credit are important, knowing that one's interlocutor was involved in specific research is very useful information (to know what questions to ask), and it's probably kind of rude to call someone to account in this manner. Hope this helps :)
https://www.youtube.com/watch?v=aP0Y1uAA-2Y