I read your post, thought 'hmm sort of powershelly with a bit of awk', clicked on your Github project page and saw Powershell was listed as the first 'Software with Similar Goals to Osh'.
I've been searching for the 'panacea' of shells (and/or auxiliary shell tools/hacks that dup() fd's 0,1,2 to enhance existing shells) that hits that same 'sweet spot' you describe. As such, over the last ~15 years I've been through a litany of setups, ranging from:
- the standard "bash/zsh/fish" approach (where you extend the shell) to
- the "scsh/ipython/eshell" approach (where you bring an inferior shell's functionality into a language) to,
- the screen/tmux approach (where you take a shell and then layer functionality over it). I.e., for directory navigation,
I'd written my own f-recency+bookmark system that would hook 'cd <tab>' and generate a pane sort of like Midnight Commander to nav around
I'm not sure where I'm going with this other than: I feel your pain, and I'd imagine tons of other people do/did as well. Powershell is painfully slow and RAM heavy, but the ability to add custom properties(!), providers, access the registry, and manipulate all of these objects as you'd like makes up for a lot of that. Your project definitely looks like an interesting take on things as well. At least we're making some progress, I suppose ;)
===
(!) This is incredibly powerful since you can take a path, like C:\users\foo\downloads\video\, take each file item, and then have Powershell invoke an executable to extend functionality out. If Windows doesn't have "Length" or "Encoder" as a property on the file out-of-the-box, you can just use an auxiliary tool (say, ffprobe), "mapcar" the exec to the list-of-files, grep out the Length: field, and bam, that file now has Length. ``ls|where Length -gt 15'' ends up being pretty magical.
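(Not Powershell, but a rough Python sketch of that same map-a-tool-over-the-files-and-filter pattern, for anyone who hasn't seen it; the ffprobe invocation is the usual print-just-the-duration one, and the 15-minute threshold is only there to mirror the example above:)

    # Map an external tool (ffprobe) over a directory, pull out one field,
    # and filter on it, roughly like ``ls | where Length -gt 15''.
    import pathlib
    import subprocess

    def duration_seconds(path):
        """Ask ffprobe for the container duration of a media file, in seconds."""
        out = subprocess.run(
            ["ffprobe", "-v", "error",
             "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1",
             str(path)],
            capture_output=True, text=True, check=True,
        )
        return float(out.stdout.strip())

    videos = pathlib.Path("~/downloads/video").expanduser().glob("*.mp4")
    long_ones = [p for p in videos if duration_seconds(p) > 15 * 60]  # treating 15 as minutes
    for p in long_ones:
        print(p.name)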
osh gets extensibility by reading a file on startup (~/.oshrc by default). That file is Python, and contains both configuration (e.g. database login info, ssh login info), and functions that can be used in osh commands.
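(For flavor, a hypothetical ~/.oshrc along those lines; the variable and function names below are my own invention for illustration, not osh's documented conventions:)

    # Hypothetical ~/.oshrc sketch: plain Python holding config plus a helper
    # function that can then be used from osh commands.
    db = {"host": "db1.example.com", "user": "reports", "port": 5432}
    ssh_hosts = ["web1", "web2", "db1"]

    def big(size_bytes, threshold_mb=100):
        """Example helper: flag files larger than threshold_mb megabytes."""
        return size_bytes > threshold_mb * 1024 * 1024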
This will vary from state to state, but in MA anyone who holds a class D (i.e. standard consumer vehicle) license is also allowed to ride any moped that's 49cc or below. I think there's now a yearly registration fee[1] that the owner must pay. So, for example, if I have a moped in good standing with the state, anyone with a valid drivers license is allowed to borrow it and legally ride it[2].
=====
[1] I want to say I paid something like ~30 USD, but this was ~10 years ago. Unlike a standard vehicle, however, there is no inspection prerequisite for safety or emissions. I think the fee is purely for tags. Helmets are definitely a requirement though.
[2] AFAIK there are no additional insurance requirements though. There weren't any ~10 years ago when I owned one, but I can't remember if that's because my comprehensive car insurance did double-duty in the event I were to, say, hit a pedestrian in a crosswalk and they were to sustain injuries.
(Not a CPA or tax attorney, consult your regionally certified folks for specifics. Just parroting what the IRS has published for consumer-consumption.)
At the national level, the IRS declared Bitcoin pretty clearly in Notice 2014-21 for 2015 onwards as strictly a capital asset and not as a currency backed by a foreign nation state. Q/A #7 is the most pertinent.
The implications of ICOs for an investor under the dominion of the IRS would be nearly nothing, as it's treated as a capital asset (valued upon purchase at FMV, and then again upon sale, once again, at FMV). Think of it like buying MSFT at $foo, selling at $bar, then paying capital gains on your gross less fees. Relevant publications are the 544 if you're trading, the 525 if you're mining. All service/product-based income in Bitcoin must be reported on a 1099-MISC.
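(To make the mechanics concrete with made-up numbers, and emphatically not tax advice, the capital-asset treatment boils down to a small subtraction:)

    # Toy illustration: gain = FMV at sale, less fees, less FMV at purchase (basis).
    purchase_fmv = 2_000.00   # FMV when the coins were acquired (cost basis)
    sale_fmv     = 5_000.00   # FMV when sold
    fees         = 50.00      # exchange/transaction fees

    capital_gain = sale_fmv - fees - purchase_fmv
    print(f"Reportable capital gain: ${capital_gain:,.2f}")   # -> $2,950.00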
The last page is dedicated to a love-note saying "hey, so listen, you're 100% subject to the standard "failure to report (correctly or not)" tax evasion penalties, so uh, pay us".
If you're offering an ICO yourself, things are going to get an order or two of magnitude more complicated, I'd imagine. No idea how it's classified but I'd imagine your regulatory reporting burdens will be somewhere between pink sheets and a publicly traded NYSE post-Sarbox company
Edit: On second thought, it might be higher since you're very arguably operating a FinEx. "Know your customer" rules a la banking regulations might apply (i.e. filling out a SAR and filing with FinCEN). I'd definitely speak with a tax attorney who's worked on FINRA secondary market filings though.
The danger is definitely the ICOs for investors. The O part of ICO implies that you're selling something on the promise of having access to it when it is live/released. That smacks of a security in everything but name, and that's a big no-no. They're selling unregulated securities and raking in hundreds of millions through the process. I imagine the SEC doesn't take kindly to unregulated capital raising, and I wouldn't expect naming it something else to dissuade them. Bitcoin doesn't have this problem, because no-one controls the issuance of the tokens.
As a long-term bitcoin holder, I think I have a good handle on crypto. I'm actually quite surprised the dinosaur hasn't started giving direction about how ICOs fall afoul of existing capital raising regulations. Maybe they think it's too small to care? Not sure. It's an issue though.
I was primarily addressing parent re: the classification of security vs. commodity. I agree with you 100% that the danger lies with the investor and that it falls under the designation of a 'security' without a doubt[0].
And again, you're right - the SEC doesn't take kindly to unregulated capital raising. (There's a reason why the designation of 'accredited investor[2]' had capital requirements to begin with-- to legally invest in an ICO, or any block-chain based currency, you have to either have a net worth of >1MM or an earned income of >200k (>300k if filing with a spouse) for the last two years.) Investor safety isn't really what we're discussing here though, so much as which federal agency has regulatory control.
I was going to basically say what JumpCrisscross said, but he beat me to the punch[3]. Until we see some federal case law to set precedent, it's anyone's game.
==
[0] "On July 25, 2017, the SEC issued a Report of Investigation under Section 21(a) of the Securities Exchange Act of 1934... determining that DAO Tokens were securities."
> If you're offering an ICO yourself, things are going to get an order or two of magnitude more complicated, I'd imagine. No idea how it's classified but I'd imagine your regulatory reporting burdens will be somewhere between pink sheets and a publicly traded NYSE post-Sarbox company.
They are actually simpler, I know this from running an OTC for a few years. The blockchain keeps people far more honest and transparent than Stock Transfer Agents, CapitalIQ and DTCC combined.
> They are actually simpler, I know this from running an OTC for a few years
CFTC saying X is a commodity doesn’t exclude the SEC from also claiming jurisdiction. Lots of FINRA-member firms offer CFTC-regulated trading services. Congress had to write specific laws preventing this from happening to commodities futures; no such exemption has been legislated for ICOs. That said, yes, the CFTC is generally seen as an easier regulator than the SEC.
TL;DR: the CFTC and SEC have begun fighting for ICO jurisdiction. Score is kept with rule writing and prosecutions.
Disclaimer: I am not a lawyer. This is not legal nor any kind of advice. Don’t break the law.
You should be able to invest in anyone you want without restrictions but, more importantly, without manipulation by FINRA, the SEC, CapitalIQ, or stock transfer agents. And the DTCC is what everyone keeps their eyes on.
How so? Most ICOs see almost all the ether going into a private wallet and then transferred to exchanges. Easy to see if you know the addresses for Coinbase, Poloniex, etc. (etherscan.io automatically identifies known exchanges). No one seems to care. It always looks like they are cashing out before they run away with the money... Once it's off the chain, who knows what happens to it?
Dang I read the first paragraph of the article and immediately went searching for the real papers since I didn't expect any media outlet to include them at the bottom, but here they are for anyone who made the same mistake I did! https://arxiv.org/abs/1709.05024 https://arxiv.org/abs/1709.10378
Not a cosmologist, but here's my go at the de Graaff paper. (Let's get this out of the way, the title is click-bait and the paper/researchers makes no such claims as to anything near 50%. New Scientist is trolling for hits with the word "half" or the journalist is fundamentally misunderstanding the work.) In de Graaff et al., they claim 30% of "90% of the missing baryonic matter [that composes the ~25% of our total universe observable from within our light cone]" has been found in the CMB, structured as filaments between galaxies. They claim there's effectively a planar network layered on top of Minkowski space composed of this baryonic matter. The temperature sits in a "Goldilocks" midrange no one had previously analyzed (ranging from 10^5 to 10^7 K). This wasn't previously found because people were searching "only the lower and higher temperature end of the warm-hot baryons, leaving the majority of the baryons still unobserved(9)". [See "Warm-hot baryons comprise 5-10 percent of filaments in the cosmic web.", Nature, Eckert et al., for more about baryons of this composition.]
Additionally, these baryons have 10x the density of what we observe (so this could potentially be evidence for the first stable baryonic matter composed of second generation quarks, or more likely the binding energies are different from our standard uud/udd nucleon quarks) permeating the universe, and where the roads in the network meet ("dark matter haloes"), you have embedded galaxies and galaxy clusters. They continue with their analytic methods on the CMASS data, and claim that within this framework 30% of the total baryonic content (which, again, all analytical methods put at no more than ~25% of the total universe) is composed of this form of matter. I skimmed their methods and they seemed to at least logically hold -- they are using the appropriate data (SDSS 12) and didn't cherry-pick their galaxy pairs (so, no p-hacking here!).
From what I can tell, this basically also proves that large scale plasma exists between all bodies at any scale (planetary to systems to galaxies to clusters), and that universe-sized Birkeland currents exist, which is something cosmologists have been trying to prove/disprove for a while.
So, not only did they find some of the missing matter, they found some of the missing energy, too. This does, however, screw some of the more classical cosmologists.
> Not sure who you're referring to. These results are completely consistent with the standard model of cosmology.
Not if what is being detected is simply mass associated with filamentary currents of energy (with attendant magnetic fields) rather than particular particles.
They're modelling filaments as cylindrical tubes of hot electrons connecting pairs of galaxies. I don't know what you mean by "mass associated with filamentary currents of energy", but while the electrons are hot (~million Kelvin), they're non-relativistic and their kinetic energy is negligible.
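As a rough back-of-the-envelope check on "non-relativistic" (mine, not from the papers): at T ~ 10^6 K the thermal kinetic energy per electron is

    (3/2) k_B T ≈ 1.5 × (8.6 × 10^-5 eV/K) × 10^6 K ≈ 130 eV,

which is tiny next to the electron rest energy m_e c^2 ≈ 511 keV, so v/c is only on the order of a couple percent.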
> Additionally, these baryons have 10x the density of what we observe (so this could potentially be evidence for the first stable baryonic matter composed of second generation quarks, or more likely the binding energies are different from our standard uud/udd nucleon quarks)
No, these results are not evidence for exotic matter. They measured the over-density of the filaments relative to the average background density of the universe.
> Let's get this out of the way, the title is click-bait and the paper/researchers makes no such claims as to anything near 50%. New Scientist is trolling for hits with the word "half" or the journalist is fundamentally misunderstanding the work.
At first I agreed with you, but I've dug into the articles and re-read the New Scientist article too to make sure, and it seems the story is a bit more complicated than it at first appears (caveat: I'm also not a cosmologist). They should have clarified this research does not involve dark matter though.
Part of the confusion stems from losing context and awareness of implicit limits to the claims when translating exact cosmological terms to popular science. "Baryonic matter" means nothing to the average person, and calling it "observable matter" could also be confusing to lay-people, since this matter isn't actually directly observable:
> “There’s no sweet spot – no sweet instrument that we’ve invented yet that can directly observe this gas,” says Richard Ellis at University College London. “It’s been purely speculation until now.”
However, the researchers do seem to claim they solved the mystery of the missing observable matter by detecting gas filaments:
> “The missing baryon problem is solved,” says Hideki Tanimura at the Institute of Space Astrophysics in Orsay, France, leader of one of the groups. The other team was led by Anna de Graaff at the University of Edinburgh, UK.
Whether that claim should be translated as "finding the missing 50% of observable matter" depends on whether those baryons are in fact 50% of the missing observable matter. To make things more confusing for non-cosmologists here, the two papers tell slightly different stories, because they don't do exactly the same thing. De Graaff's paper mentions a much lower number than 50%, as you stated, but the introduction of Tanimura mentions:
> At high redshift (z ≳ 2), most of the expected baryons are found in the Lyα absorption forest: the diffuse, photo-ionized intergalactic medium (IGM) with a temperature of 10⁴ – 10⁵ K (e.g., Weinberg et al. 1997; Rauch et al. 1997). However, at redshifts z ≲ 2, the observed baryons in stars, the cold interstellar medium, residual Lyα forest gas, OVI and BLA absorbers, and hot gas in clusters of galaxies account for only ∼50% of the expected baryons – the remainder has yet to be identified (e.g., Fukugita & Peebles 2004; Nicastro et al. 2008; Shull et al. 2012). Hydrodynamical simulations suggest that 40–50% of baryons could be in the form of shock-heated gas in a cosmic web between clusters of galaxies.
It looks like this is where that half in the New Scientist title comes from: 40-50% of missing baryons should be in these gas filaments. This might appear to contradict De Graaff et al., but the latter mention Tanimura et al. in the conclusions of their paper:
> Similar conclusions to this work have been independently drawn by Tanimura et al. (...) who announced their analysis (...) at the same time as this publication. (my summary: We used different, independent but complementary galaxy pair catalogues). Despite the differences, we achieved similar results in terms of the amplitudes and statistical significances of the filament signal. (...) The fact that two independent studies using two different catalogues achieve similar conclusions provides strong evidence for the detection of gas filaments.
So given that these two groups seem to be in agreement with each other's conclusions, and that Tanimura himself was quoted (so presumably consulted for the article), it seems that the main clickbait aspect of the New Scientist article is that they did not clarify that no dark matter is involved in this story.
And the combination of your observation and DiabloD3's is an interesting one. The papers on Birkeland currents in the context of a galaxy-spanning plasma make for some fun conjecturing. A coulomb of charge moving in a million-light-year-long filament of plasma is a lot of energy.
(To contextualize [hopefully without misrepresenting their positions], Lurie and Vlad both think/thought that axiomatic set theory is not a proper mathematical foundation. Their disagreement lies in how to construct the reformalization of mathematics (Univalent Foundations vs Higher Topos Theory).
A (very rough) analogy would be the general consensus of climate change scientists agreeing on global warming being the result of the rapid re-release of fossil fuels into the ecosystem but disagreeing on whether the cause is from shipping container barges or the rapid industrialization of the BRIC nations.)
Here's what's regarded as the seminal resource[1] on Univalent Foundations: "Homotopy Type Theory: Univalent Foundations of Mathematics", which Vlad was working on at IAS. Not only mathematicians but also logicians and computer scientists have made large contributions to this work. Names like Awodey and Robert Harper will certainly sound familiar to the C.S. crowd here.
WYSIWYG is kind of limiting though. I.e., I can use mermaid[1] way faster than I can use Visio. I can type out a mathematics expression way faster in TeX than I can using Word and their (greatly improved, I'll give them that) Equation Editor. Hell, I'm barely 'proficient' at the AutoDesk toolset, but I can usually out-model assemblies using their LISP interface over someone with a SpaceMouse Pro[2].
Generally though, you don't need the table or graph to be "readable" if you're the one constructing the content for others to consume, since you've already modeled the structure of the content within your head. Those -item1, -item2, --subitem1 demarcations are already internalized in your head. (Though, if we're talking about a platform for capturing data/mind-mapping/outlining/note-taking, this may not be entirely true, I admit.)
Even Adobe opts for a non-WYSIWYG DITA/SGML for their professional content management systems if only because publishing a book or magazine becomes an editorial nightmare. (In publishing, they have separations of control just like we have separations of concerns. Our graphics and front-end guys have control over the .css and what-have-you, while our data guys will have control of the schema, and our app guys will have control of the binary; likewise, they'll have someone in charge of typesetting, someone who performs the layout management so the interstitial ads look consistent within your magazine's theme or whatever, someone generating the content, and then an editor who finally signs off on it.[3]) Now that I think about it, a magazine has many source-control-management problems quite similar to what we have.
Magic sequences are pretty brilliant, I'll give you that. If it 'degrades gracefully' (i.e., when copied into a standard instance of notepad.exe or Nano, it's still grokable), that hybrid solution might be the closest panacea we have. Keep on working on that project, it has tons of interesting, orthogonal avenues to explore[4]
[2] https://www.amazon.com/3DX-700040-3Dconnexion-SpaceMouse-Pro... Pretty much the paragon of HIDs for CAD, in my limited use. For the last 10% of touchups it's second to none, but at the project level it's still slower than using the LISP derivative once you build up a sufficient set of macros.
[4] I.e., take, say, chemists who want to share their findings with people in their dept. Being able to design their own markup for Lewis diagrams, multiple representations of orbitals, degenerate states, etc. -- then have them render the observed values in real time into a generated chart with a consistent theme (a la Jupyter Notebook, but... better...) -- would be quite powerful, I'd imagine.
WYSIWYG HTML/CSS is limited for a simple reason:
Given HTML/CSS as input, you can render it precisely (produce pixels on the screen). Mathematically speaking, the task of rendering is fully determined.
But WYSIWYG editing is essentially the opposite task: given a desired set of pixels, synthesize the HTML/CSS structure. And that task is not determined: different HTML/CSS constructs can produce the same set of pixels. That's why there are no acceptable/usable "WYSIWYG web site editors" - WYSIWYG is feasible only on a reduced scope, a limited set of HTML/CSS constructs that you can use to achieve a 1:1 source/rendering ratio.
And that is what Markdown is all about, again, mathematically speaking - its rendering is in a 1:1 relationship with its source.
Like any tool, it is good for its purpose. De facto it is a DSL (domain-specific language), in the same way as Markdown or the Emacs editing system in general.
It works if creating graphs is what you do for a living. But for occasional usage (say once a quarter or once a year) people usually prefer Visio - you do not need to keep the mermaid syntax in mind (a limited resource) - just drag and drop - actions common to all WYSIWYG systems.
Even with Markdown - each site has its own conventions - that's why occasional users prefer WYSIWYG if it is available.
I still use emacs in '-nw' mode because, I don't know, I'm old, but re: your criticism, skewer-mode[1] is available. Presuming your output is targeting some form of HTML, this enables you to retain the ability to at least present your information locally (on your own file system), to colleagues (authenticate via your existing SSO or whatever), or even to a public site (via a simple git-push to a standard httpd on a container). I'm sure there are other modes out there (hell, you could write something from scratch in a few minutes with inotify to cater to all your specific needs). I'd say it'd be more accurate to qualify your statement as such: "the value of org-mode documents decreases as a platform to manipulate (but not distribute) your data outside of emacs"[2].
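(For the "write something from scratch" option, here's roughly the shape of it in Python; a crude polling stand-in for inotify, and the file name and the Emacs batch-export incantation are just examples to adapt:)

    # Re-export an org file to HTML whenever it changes, so a plain httpd
    # (or a git push hook) can serve the result.
    import os
    import subprocess
    import time

    ORG_FILE = "notes.org"  # hypothetical file

    last_mtime = 0.0
    while True:
        mtime = os.stat(ORG_FILE).st_mtime
        if mtime != last_mtime:
            last_mtime = mtime
            # Batch export via Emacs itself; assumes org-mode ships with your Emacs.
            subprocess.run(["emacs", "--batch", ORG_FILE,
                            "-f", "org-html-export-to-html"], check=True)
            print("re-exported", ORG_FILE)
        time.sleep(1)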
[1] https://github.com/skeeto/skewer-mode
[2] And even then, modern cell phones have quad-core 1ghz ARMs on them. I was using org-mode with a P3 600, I'm sure modern phones + a Bluetooth keyboard would be sufficient to run emacs, or at least SSH + emacsclient.
> I'd say it'd be more accurate to qualify your statement as such: "the value of org-mode documents decreases as a platform to manipulate (but not distribute) your data outside of emacs"[2].
Definitely, that's what I meant. I share org stuff with ox-html or ox-latex. But colleagues cannot really use my org documents unless they are on Emacs.
To your [2] - you can run it, but legibility is kind of a problem. Also, the next Bluetooth keyboard I find that supports remapping Caps Lock to Control will be the first.
RE: Portability -
Not sure how far you'll be able to get by gcc -S'ing something like nuklear[1] (cross-platform ANSI C89) but it might save you some time.
I don't have much HLL asm/demoscene experience personally so I'm not sure what's "impressive" as engineering feats these days but this looks cool. As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level with a decent layer of POSIX compatibility and the ability to run AVX512 instructions, I like the idea that tools like this are out there. Cheers, mate
> RE: Portability - Not sure how far you'll be able to get by gcc -S'ing something like nuklear (cross-platform ANSI C89) but it might save you some time.
The big problem with using "gcc -S" is that as a result you have an HLL program, simply written out as an assembly language listing.
Humans write assembly code very differently than HLL code. Even translated to asm notation, this difference persists. An asm programmer will choose different algorithms, different data structures, and a different architecture for the program.
Actually, this is why on real-world tasks, regardless of how good the compiler is, the assembly programmer will always write a faster program than the HLL programmer.
Another effect is that in most cases a deeply optimized asm program is still more readable and maintainable than a deeply optimized HLL program.
In this regard, some early optimization in assembly programming is acceptable and even good for the code quality.
> As someone who aspires to see a viable Smalltalk-like runtime self-modifiable introspective debugger at the OS level
That's an interesting pile of keywords you've got there.
I don't know about Smalltalk (I find Squeak, Pharo, etc utterly incomprehensible - I have no idea what to do with them), but for some time I've been fascinated with the idea of a fundamentally mutable and even self-modifying environment. My favorite optimization would be that, in the case of tight loops with tons of if()s and other types of conditional logic, the language could JIT-_rearrange_ the code to nop the if()s and other logic just before the tight loop was entered - or even better, gather up the parts of code that will be executed and dump all of it somewhere contiguous.
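Something like this toy version of the idea, maybe, where the loop body gets regenerated with only the branches that will actually run (Python just to show the shape of the transformation, obviously not a real JIT):

    # Specialize the loop body for a fixed set of flags by generating only the
    # code that will actually execute, then compiling it once up front.
    def make_kernel(clamp, scale):
        lines = ["def kernel(xs):", "    out = []", "    for x in xs:"]
        if clamp:
            lines.append("        x = max(0, x)")   # only emitted when clamp=True
        if scale:
            lines.append("        x = x * 2")       # only emitted when scale=True
        lines += ["        out.append(x)", "    return out"]
        namespace = {}
        exec("\n".join(lines), namespace)            # compile the specialized body
        return namespace["kernel"]

    kernel = make_kernel(clamp=True, scale=False)    # no per-iteration flag checks left
    print(kernel([-3, 1, 4]))                        # [0, 1, 4]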
C compilers could probably be made to do this too, but that would break things like W^X and also squarely violate lots of expectations as well.
For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].
At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].
At the compiler level, you've got the seminal Turing Award lecture by Ken Thompson that does it[4].
At the processor level, you heuristically have branch prediction as a critical part of any pipeline[1]. (I think modern Intel processors as of the Haswell era assign each control flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. (Don't quote me on that)).
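(For the curious, the textbook building block behind those counters is a small saturating counter per branch; a toy sketch of that idea, not a claim about any real Intel design:)

    # Classic 2-bit saturating-counter branch predictor: each branch address gets
    # a tiny counter; predict "taken" when the counter is in its upper half.
    class TwoBitPredictor:
        def __init__(self):
            self.counters = {}  # branch address -> counter in 0..3

        def predict(self, pc):
            return self.counters.get(pc, 1) >= 2  # True means "predict taken"

        def update(self, pc, taken):
            c = self.counters.get(pc, 1)
            c = min(3, c + 1) if taken else max(0, c - 1)  # saturate at 0 and 3
            self.counters[pc] = c

    p = TwoBitPredictor()
    for outcome in [True, True, False, True]:   # observed history for one branch
        print(p.predict(0x401000), end=" ")     # prediction before seeing the outcome
        p.update(0x401000, outcome)             # then train on it
    # prints: False True True True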
RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom. When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved). If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #Method_Missing to create your own DSLs". This lends a lot of flexibility to the language (at the expense of performance, typing guarantees). In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.
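(For the non-Rubyists, a rough Python analogue of that trick, using __getattr__ where Ruby uses method_missing; the where_* naming convention is invented purely for illustration:)

    # Intercept calls to methods that don't exist and turn them into behavior on
    # the fly, which is the core of the "roll your own DSL" pattern.
    class Query:
        def __init__(self):
            self.filters = []

        def __getattr__(self, name):              # called only for missing attributes
            if name.startswith("where_"):
                field = name[len("where_"):]
                def add_filter(value):
                    self.filters.append((field, value))
                    return self                   # allow chaining, DSL-style
                return add_filter
            raise AttributeError(name)

    q = Query().where_status("open").where_owner("alice")
    print(q.filters)   # [('status', 'open'), ('owner', 'alice')]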
Imagine an environment[5] that has that structured intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE type debugger. Turtles all the way down, baby.
=====
[1] See ISL-TAGE from CBP3 and other, more modern reports from "Championship Branch Prediction" (if it's still being run).
[2] https://stackoverflow.com/a/8874314 Here's how it's done with the CLR. The JVM is crazy good so I'd imagine the analogue exists there as well.
[5] Use some micro-kernel OS architecture so process $foo won't alter $critical-driver-talking-to-SATA-devices or modify malloc. I'd probably co-opt QNX's Neutrino design since it's tried and true. Plus that sort of architecture has the design benefit of intrinsically safe high availability integrated into the network stack.
> For a VM, RE: code rearrangement, you're effectively describing dynamic DCE if I understand you correctly, CLR does this (and lots more)[2].
You mean Dynamic Code Evolution?
Regarding [2], branch prediction hinting being unnecessary (as well as statically storing n.length in `for (...; n.length; ...)`) is very neat. I like that. :D
> At the low-level programmer level, there's nothing stopping a (weakly) static language like C from adopting that behavior[3] at runtime [i.e. with a completely bit-for-bit identical, statically linked executable].
Right. The only problem is people's expectation for C to remain static. Early implementations of such a system may cause glitches due to these expectations being shattered, and result in people a) thinking it won't work or b) thinking the implementation is incompetent. I strongly suspect that the collective masses would probably refuse to use it citing "it's not Rust, it's not safe." Hmph.
> At the compiler level, you've got the seminal Turing Award lecture by Ken Thompson that does it[4].
> And it is "almost" impossible to detect because TheKenThompsonHack easily propagates into the binaries of all the inspectors, debuggers, disassemblers, and dumpers a programmer would use to try to detect it. And defeats them. Unless you're coding in binary, or you're using tools compiled before the KTH was installed, you simply have no access to an uncompromised tool.
...Nn..n-no, I don't quite think it can actually work in practice like that. What Coding Machines made me realize was that for such an attack to be possible, the hack would need to have local intelligence.
> There are no C compilers out there that don't use yacc and lex. But again, the really frightening thing is via linkers and below this hack can propagate transparently across languages and language generations. In the case of cross compilers it can leap across whole architectures. It may be that the paranoiac rapacity of the hack is the reason KT didn't put any finer point on such implications in his speech ...
Again, with the intelligence thing. The amount of logic needed to be able to dance around like that would be REALLY, REALLY HARD to hide.
Reflections on Trusting Trust didn't provide concrete code to alter /usr/bin/cc or /bin/login, only abstract theory, discussion and philosophy. It would have been interesting to be able to observe how the code was written.
I don't truly think that it's possible to make a program that can truly propagate to an extent that it can traverse hardware and even (in the case of Coding Machines) affect routers, etc.
> At the processor level, you heuristically have branch prediction as a critical part of any pipeline. (I think modern Intel processors as of the Haswell era assign each control flow point a total of 4 bits which just LSL/LSR to count the branch taken/not taken. (Don't quote me on that)).
Oh ok.
> RE: Smalltalk - for me, the power of the platform's mutability was revealed when I started using Cincom.
Okay, I just clicked my way through to get the ISO and MSI (must say the way the site offers the downloads is very nice). Haven't tested whether Wine likes them yet, hopefully it does.
> When I was using GNU implementations ~10 years ago, they felt like toys at the time (though I hear things have largely improved).
Right.
> If you've ever used Ruby, a simple analogy would be the whole "you can (ab)use the hell out of things like #Method_Missing to create your own DSLs".
Ruby is (heh) also on my todo list, but I did recently play with the new JavaScript Proxy object, which basically makes it easy to do similar things, like intercepting arbitrary property accesses on an object.
> This lends a lot of flexibility to the language (at the expense of performance, typing guarantees).
Mmm. More work for JITs...
> In a Smalltalk environment, you get that sort of extensibility + static typing guarantees + the dynamic ability to recover from faults in a fashion you want.
Very interesting, particularly fault recovery.
> Imagine an environment[5] that has that structured intrinsically + the performance of being able to use all them fancy XMM/YMM registers for numerical analysis + a ring0 SoftICE type debugger. Turtles all the way down, baby.
oooo :)
Okay, okay, I'll be looking at Cincom ST pretty soon, heh.
FWIW, while Smalltalk is a bit over my head (it's mostly the semantic-browser UI, which is specifically what completely throws me), I strongly resonate with a lot of the ideas in it, particularly message passing, around which I have some Big Ideas™ I hope to play with at some point. I keep QNX 4.5 and 6.5.0 (the ones with Photon!) running in QEMU and VNC to them when I'm bored.
Oh, also - searching for DCE found me Dynamic Code Evolution, a fork of the HotSpot VM that allows for runtime code re-evaluation - ie, live reload, without JVM restart. If only that were mainstream and open source. It's awesome.
Between IDA Pro 5.0 (full functionality, legally free), IDA eval 7.0 (limited functionality, legally free), and IDA 6.8 (haxxed), the students are pretty well covered. I'd imagine Ilfak doesn't mind if you pirate it as a student, but giving you a 7.0 full binary is more about operational security than anything else. This is illustrated by the fact that you can literally go to buy IDA + HR for full retail at 5k and be denied a license if you're not affiliated with a reputable organization.
The reason why it's so difficult to get a pro license (even if you want to pay for it legally) is because one leak of the most current version and enterprise sales drop by about ~50%[1]. So, theoretically, if Ilfak were to give you that $100 most-recent copy and you were to share it with the wrong people, the losses would be way more than just what he lost on your sale; the legitimate corporate sales drop the second a leak hits.
I'm not in rev-eng professionally, but I grew up (read: pirated it at 15) with it back when SoftICE and IDA were the only options on the market. Eventually I needed a license to side-step some legitimately licensed software for a client whose business depended on a dongle from a now-defunct company. Since IDA is what I already knew, it's what I purchased. The time I would have spent learning another platform (there are lovely open source alternatives on the market now) would have exceeded the price of the software by quite a bit. For people who use IDA professionally, 1k a seat (5k w/ HR) is more than reasonable, especially with the whole ecosystem of plugins that exists around it[2].
But the times, they are a-changin'. With all of the competitors on the market now, kids are growing up pirating not SoftICE and IDA but the alternatives. 5 years down the line, when those kids have purchase influence and go to their manager with a request ("this is what I grew up with... I need a __ license"), IDA is going to have a real problem[4].
====
[1] Ilfak delineated the whole business model and the decrease in sales as a result of leaks, with real numbers, on reddit. This was 3-4 years ago (maybe more, god I'm getting old) so I might be off on the 50%. I'm sure it's more than 1/3rd. This, interestingly enough, is why you see a version bump as soon as a leak shows up. Maybe purchasing departments are less likely to authorize a 5k license if the most recent version is on piratebay? Not sure how that gets past legal and whoever is in charge of license compliance, but it happens. Pure speculation: When you bump a pirated 6.8 to a non-pirated 6.9, the engineer/manager can "legitimize" the purchase by telling purchasing "I need 6.9 and can't steal it - now, cut the purchase order, or it'll be your name coming up when we have a meeting as to why we lost Client Foo".
[2] The reason I keep paying for maintenance fees is because the extensive number of community-made/maintained plugins makes IDA basically like emacs. Powerful base-software, but when you get all your scripts setup with things like DIE[3] you can't imagine working in another setting.
[3] https://github.com/ynvb/DIE This alone is worth the cost of the base $1k IMO. Sidenote: The plugin contest was the greatest marketing idea ever. Get people to develop (or release to the public domain the tools they've already developed for themselves) extensions that add significant value to your software, in exchange for $1k? Absolutely brilliant.
[4] https://i.imgur.com/Qb7GSCL.png Here's a comment I made about a year ago when we saw Binary Ninja/Radare2/etc all coming of age.
He misused the terminology but basically got it right. The originator of the loan (your local bank is probably just a broker, acting as an intermediary for a wholesale vendor) then sells it to a REMIC[1]. It's tranched by the REMIC and sold off to institutional investors. These were properly labeled as mortgage-backed securities[2] for residential real estate. This is all by the book, mind you. At least that's how it was conventionally done. If our housing market had crashed with only this system in place, we would have seen a hit, but it wouldn't have affected things nearly as badly. It became 'systemic' when bankers decided to trade 'synthetic CDOs' in volume[3].
[1] https://en.wikipedia.org/wiki/Real_estate_mortgage_investmen... - [For context, Freddie Mac/Fannie Mae were REMICs] So what OP meant when he said "prime / subprime" was a misuse of terminology, but if he replaces 'prime' with 'AAA' and 'subprime' with 'BBB and Residual' -- he's more or less on the mark there.
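(To make the tranching concrete: a toy waterfall with invented numbers. Cash collected from the mortgage pool pays the senior tranche first, which is why 'AAA' maps loosely to what OP called 'prime', while 'BBB and Residual' absorb losses first:)

    # Toy REMIC-style payment waterfall, invented numbers only.
    pool_cash = 80.0                      # collected this period (out of 100 owed)
    tranches = [("AAA", 60.0), ("BBB", 30.0), ("Residual", 10.0)]

    remaining = pool_cash
    for name, owed in tranches:
        paid = min(owed, remaining)
        remaining -= paid
        print(f"{name}: paid {paid:.1f} of {owed:.1f}")
    # AAA: paid 60.0 of 60.0 / BBB: paid 20.0 of 30.0 / Residual: paid 0.0 of 10.0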