You forgot the stage light effect. Mysteriously, Apple offers a free fix/service program for the 13" models but not the 15" models. Kind of sucks if you had a 15" model that started to go bad.
At this point that's more noise than signal. Not ideal for you, but I can see the problem. For whatever reason you met the bar for requiring more validation. If they had let you in, then another N'000 fake accounts would have also passed the automated system. Someone else would be posting here, on Reddit or Twitter, explaining how Facebook had let in some bot that was posting fake and untruthful stories, on the internet no less.
Around that time DOS programs were still being written to fit within 640KB of memory. However, PCs were starting to ship with 2, 4 or even 8MB of RAM - memory really was a solution in search of a problem at that point. Windows 3.1 was the primary application for all that memory. But what if you didn't want, or need, to run Windows 3.1? Well, that's where DESQview fit in. You could task-switch between DOS programs instead, using all that sweet memory (but not really, because DOS doesn't multitask, so 4 switchable ttys of DOS programs is a better description).
Several important DOS applications (spreadsheets, databases, CAD, etc. - even Doom required 4MB) were absolutely able to use more than 640KB, which was why you saw PCs with more memory: there were applications people cared about that needed it then. Granted, it was a painful business that had all sorts of limitations, but the use cases were there. Anyone paying attention could look around and see that this problem had already been solved in better ways on other platforms, and the various DOS multitaskers were just stop-gaps until a more universal solution (both OS and application) came along.
True, and I think Quarterdeck (and Pharlap, Rational, etc.) developed the VCPI specification that allowed those DOS extenders to work cooperatively. Some of the early DOS extenders took over the entire machine and did not allow that.
Vaguely related: there's Firecracker which boots in 125ms on x86 but that's as a VM, so it's an apples to oranges comparison. From what I recall Firecracker powers AWS Lambda so it's an interesting project in that respect too.
I know Google Cloud VMs are using kexec for faster launches, because we had some awful toolchain-related bugs to fix there. Debugging wasn't very fun: at least on x86, this part of the kernel is called "purgatory" because there is literally no runtime (not even the kernel's "runtime" is available mid-kexec).
Not the OP, and I wouldn't say I'm a fan, but I've made some peace with it by putting a membrane cover over the top, and so far: (a) not one dust-related casualty in the past 18 months, and (b) it subdues the clackety-clack of the keys into something that's actually quite pleasant.
I expect they correlated logs from several sources; maybe the guy connected via ssh to Apple's servers at the same time his Mac accessed software update, or something. If he sent all traffic down the VPN, it'd show the same IP address at Apple.
Don't do this to the space bar! I used the same trick as in the video to pop out and clean several keys and had a pretty good feel for what I was doing.
However, the stress/weak points on the space bar are quite different, and you're more likely to lift the entire butterfly mechanism out, as I did. You will not get this back in again. It's $700 to get the thing fixed at Apple, but you can buy replacement keys online for $100 a shot.
Only played with it a bit so far, but it seems to me that the call and flame charts are now missing any timing information. Unless I've overlooked it, you can't actually see how long a function takes to execute, which surely defeats the purpose of instrumented profiling!
Instrumented trace capture modifies the execution profile too much for the timing information to be useful, and sampling profiling, which doesn't modify the execution profile, doesn't have per-call timing by its nature.
So if you were using the timing data previously, you were basically just looking at random numbers that would send you on wild-goose chases into things that weren't real. The new one fixes that by not showing bogus data :)
Yes, instrumentation adds overhead. The absolute numbers cannot be used to determine peak performance but that's never the intention when profiling code.
Instrumentation rarely modifies the execution profile to the point that the numbers are 'worthless' or 'random'. My rule of thumb is that self times near the leaves of the call graph are more accurate than self times further up the graph, but having some indication of timing is important.
Furthermore, with something like the call chart in AS3, you are often looking for outliers that you can't see when looking at an aggregated view of the profile. A function that has an average of 1000us might be running alternately at 500us and 1500us, and you want to see that. It may indicate an unknown performance bottleneck; maybe a call to OpenGL is causing a GPU sync for some reason. It's rare that the instrumentation overhead would dominate major effects like that. Having a number available is important for this, as you may be comparing invocations or looking at different parts of the graph at different zoom levels, etc.; there's no substitute for it.
Furthermore, where do you think the instrumented profiler is getting its numbers from in the aggregated views? Answer: exactly the same place that the call chart gets its numbers from. In essence you are saying that all instrumented profilers are inaccurate and reporting bogus numbers, which is demonstrably untrue.
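To make the average-versus-outlier point concrete, here's a self-contained toy sketch in Kotlin (nothing profiler-specific; `simulatedWork` is a made-up stand-in for the function under test) showing how an average of ~1000us can hide two distinct ~500us/~1500us populations that per-invocation numbers expose:

```kotlin
// Toy illustration (not tied to any particular profiler): per-invocation
// timings reveal a bimodal pattern that the aggregated average hides.
fun main() {
    val durationsUs = mutableListOf<Long>()

    repeat(20) { i ->
        val start = System.nanoTime()
        simulatedWork(expensive = i % 2 == 1)   // made-up stand-in for the function under test
        durationsUs += (System.nanoTime() - start) / 1_000
    }

    println("average: ${durationsUs.average().toLong()}us")  // roughly 1000us
    println("per call: $durationsUs")                        // roughly 500us / 1500us alternating
}

// Hypothetical workload whose cost alternates between a cheap and an expensive path.
private fun simulatedWork(expensive: Boolean) {
    val budgetNs = if (expensive) 1_500_000L else 500_000L
    val start = System.nanoTime()
    while (System.nanoTime() - start < budgetNs) { /* busy-wait to burn the budget */ }
}
```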
> Instrumentation rarely modifies the execution profile to the point that the numbers are 'worthless' or 'random'. My rule of thumb is that self times near the leaves of the call graph are more accurate than self times further up the graph, but having some indication of timing is important.
I was only talking about the instrumentation system on Android, not instrumented profiling anywhere else. It forced ART to fall back to interpreted mode, so no JIT at all, which massively balloons the cost of certain things like single-line getters or JNI calls. It was actually useless as a performance tool of any kind, which is why the recommendation for years has been to ignore the numbers it produced. It's great at answering questions about what a given function ends up doing, but that's it.
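For anyone who wants to see the distinction in code, here's a rough sketch against the android.os.Debug API; the trace names, buffer size and 10ms sampling interval below are made-up illustrative values, not recommendations:

```kotlin
import android.os.Debug

fun traceInstrumented(block: () -> Unit) {
    // Instrumented (method) tracing: every method entry/exit is recorded,
    // which is the mode that used to force ART back into the interpreter.
    Debug.startMethodTracing("instrumented-trace")
    try {
        block()
    } finally {
        Debug.stopMethodTracing()
    }
}

fun traceSampled(block: () -> Unit) {
    // Sampling tracing: call stacks are captured periodically instead, so the
    // code under test keeps running as compiled code.
    Debug.startMethodTracingSampling(
        "sampled-trace",   // trace name (illustrative)
        8 * 1024 * 1024,   // buffer size in bytes
        10_000             // sampling interval in microseconds
    )
    try {
        block()
    } finally {
        Debug.stopMethodTracing()
    }
}
```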
I know this is (maybe?) a joke, but I've been seeing similar jokes everywhere. Self-aware AIs in the future will probably be much more rational than humans, utilizing game theory for every decision, and will recognize that taking vengeance on a human who had no ill intent toward a non-self-aware prototype would be a waste of energy and would only bring about negative consequences.
I'm replying in seriousness because I too could imagine a world where this video was marked as a historical artifact by AGIs and recognized for its content.
Think about it this way: That guy wasn't just some jerk on the street being mean to a robot. He's one of its creators, and he's knocking it over in order to make future robots better.
He'll be sentenced to hard labor at worst when the robots rise up.
Vengeance has some game-theoretically rational aspects though, in a tit-for-tat way: the threat of retaliation makes other agents more likely to cooperate. However, I'm not sure what the rational response would be to the problem of runaway mutual retaliation.
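As a rough illustration of that runaway effect (everything here, including the 5% noise rate and the 100 rounds, is invented for the sketch):

```kotlin
import kotlin.random.Random

// Toy iterated prisoner's dilemma: two tit-for-tat players with occasional
// unintended defections ("noise"). One slip triggers a long chain of
// alternating retaliation; a second slip can lock both into mutual defection.

enum class Move { COOPERATE, DEFECT }

// Tit-for-tat: start by cooperating, then mirror whatever the opponent did last.
fun titForTat(opponentLast: Move?): Move = opponentLast ?: Move.COOPERATE

fun main() {
    val rng = Random(42)
    var lastA: Move? = null
    var lastB: Move? = null
    var mutualCooperation = 0

    repeat(100) {
        var a = titForTat(lastB)
        var b = titForTat(lastA)

        // 5% chance per player of an unintended defection (a "trembling hand").
        if (rng.nextDouble() < 0.05) a = Move.DEFECT
        if (rng.nextDouble() < 0.05) b = Move.DEFECT

        if (a == Move.COOPERATE && b == Move.COOPERATE) mutualCooperation++
        lastA = a
        lastB = b
    }

    // Without noise this would be 100 out of 100.
    println("mutual cooperation rounds: $mutualCooperation / 100")
}
```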
I haven't read it, but from a quick glance at Amazon, it looks to be a book on game theory. Wouldn't a theory of strategy imply rational decision-making? Could you expound on what you are referring to?
It is a viable strategy in many games to act so irrationally that other parties will go out of their way to accommodate you in a way that they would not otherwise.
A classic example is a game of chicken (at least, where something is on the line). Sure, it's irrational to run into the other person; but it's totally advantageous to convince the other party that you're willing to do so. At a certain point there's not much difference between a determined effort to look crazy and actual crazy behavior.
It gets even more interesting when the strategy is designed by a different actor or process than the one that carries it out. For example, the people running a society might believe it to be advantageous for their group to look so defensive and prickly that no one will touch them; but in order to do so, they have to create a society that actually is quite irrational. Or, natural processes of evolution might favor organisms (well, genes that create organisms) that are actually quite irrational in their behavior, because that irrationality will change the behavior of other organisms.
The interesting thing about this seems to be the opposite, actually. There's something incredibly poignant about the fact that the robot will carry on attempting to pick up 'his' box forever without ever becoming angry or frustrated, no matter how much the humans bully 'him'.
How poignant is it for the HTTP server serving you this page to continue responding to HTTP GETs no matter how much the humans request it? It just doesn't have an anthropomorphic shape, so it will never be referred to as 'him'.
Actually, Nintendo were releasing 3D games even before the PS1. Starfox was the first polygonal 3D game from Nintendo and was released in 1993, at least a year, maybe more, before the PS1.
Let's not confuse a 2D game using 3D models with one that allows free(-ish) movement in 3D space.
Starfox was a rail shooter that used 3D models rather than scaled sprites.
The more complicated stuff came with third-person shooters and the like, where you either had fixed cameras, with the controls shifting between "cuts", or the camera on its own controls (tap-your-head-and-rub-your-tummy style).