Hacker News | zxcvcxz's comments

http://pyparallel.org/wrk-rps-comparison2.svg

According to your website, pretty much every other technology runs better on Linux than on Windows, and of course pyparallel runs better than everything you tested.

How can I run these tests myself? I specifically want to test it against golang.



I don't see where you take advantage of goroutines, and I don't see a real-world use case unless it's operating on concurrent connections.

https://www.reddit.com/r/programming/comments/3jhv80/pyparal...

heh..


Patches welcome?


No, it's not; it didn't get popular until it was redesigned. It's proof that sites can be too ugly, but also that sites don't have to be particularly pretty.


What redesign are you referring to? Reddit has never changed much in its design. I'm looking at it in the Wayback Machine from before subreddits were introduced, and it still looks like the same site.

Reddit got popular when Digg was redesigned.


I was literally just thinking that reddit's staff are either all back-end engineers or don't do anything at all. reddit has barely changed in the past several years, and the design has barely changed either. They only just recently put out a mobile app. Little to no improvement has been made in moderator tools. Where is this redesign you seem to be seeing?


>the NT kernel is much more sophisticated and powerful than Linux

Source?

It's not sophisticated or powerful enough to be the most used kernel on supercomputers (and in the world). Windows pretty much only dominates the desktop market; servers, supercomputers, mainframes, etc. mostly run Linux.

A few years ago there was even a bug in Windows that degraded network performance during multimedia playback, directly connected to mechanisms employed by the Multimedia Class Scheduler Service (MMCSS), which is used in a lot of audio setups. If they can't even get audio setups right, how can people consider anything Windows releases "sophisticated"?

It's made to do anything you throw at it, I guess, and it's definitely complicated, but "powerful" and "sophisticated" aren't words I would use to describe NT.


If you're arguing in favor of Linux, you probably shouldn't use any arguments that deal with getting audio setups right.


Indeed.

I would go so far as to say that a large part of why audio is such a CF under Linux is -- wait for it -- lack of real asynchronous I/O.

Audio is asynchronous by nature, and to do that right under Linux you need a "sound server" with all the additional overhead, jank, and shuffling of data between kernel and at least two different user spaces that implies. Audio under Linux was best with OSS, which was synchronous in nature and not suitable for professional applications. JACK mitigated that somewhat, but for an OS to do audio right you need a kernel-level async audio API like Core Audio or whatever Windows is doing these days.
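(For concreteness, a minimal sketch of the synchronous model being contrasted here, using ALSA's blocking API, which follows the same push-and-block pattern OSS did. The device name and parameters are illustrative only.)

    #include <alsa/asoundlib.h>

    /* Synchronous push model: each snd_pcm_writei() simply blocks until
     * the device can accept the buffer. Simple, but the application must
     * keep feeding samples on time or the stream underruns. */
    int play_silence(void)
    {
        snd_pcm_t *pcm;
        short buf[2 * 441] = {0};  /* 10 ms of stereo S16 silence */

        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return -1;
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000);  /* 2 ch, 44.1 kHz, 0.5 s latency */

        for (int i = 0; i < 100; i++)       /* ~1 second of audio */
            snd_pcm_writei(pcm, buf, 441);  /* blocks: synchronous I/O */

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }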


Windows has a sound server too, you know. I believe Core Audio on Mac does too. A large part of why audio is such a CF under Linux is that PulseAudio is incredibly badly written and poorly maintained. My favourite was the half-finished micro-optimization that broke some of the resamplers because the author got bored and never modified them to handle the change, which somehow made it all the way into a release. I dread to think what they'd do with powerful tools like asynchronous I/O.


Audio on Linux works fine in my experience.


Hey everyone, we found him!


I don't want to take sides in this discussion, but I'll share an anecdote. Hey, maybe someone even knows a solution for this.

I have a PC connected via on-board HDMI to a Denon AVR solely for the purpose of getting the audio to the amplifier. Windows doesn't let me use that audio interface without extending or mirroring my desktop to that HDMI port. Since there is no display connected to the AVR I don't want to extend the desktop, and mirroring heavily decreases performance of the system.

On Debian Sid the computer by default allows me to use the HDMI audio without doing anything to my desktop display. It seems the system realizes that there is no display connected to the AVR but it's still a valid sink for audio.


If I remember correctly, NVIDIA did a decent job with their on-board HDMI driver audio-wise; what brand is yours? I'm 100% sure the functionality depends on the driver rather than the OS.


Can you use optical instead of HDMI?


TOSLINK doesn't support uncompressed surround audio, while HDMI does, which is why I use HDMI.


Well, for various definitions of "fine", I guess.


It works fine as in "I can listen to audio on my laptop from multiple programs at once, with a centralized way to control audio volume on a per-application or per-sound-device basis." I literally cannot imagine any audio system doing better than that given the hardware I have to work with.



BeOS doesn't run on my hardware, I'm pretty sure, even though it was ported to x86 eventually.


Works very well for me too. Don't know why you got downvoted. It's like it's 1994 in here...


But this is exactly what made me switch: Windows was preventing me from accessing my sound card directly when I needed to record a remote interview.

I use Linux regularly to record and edit audio. It's free, it works, and I don't have to worry that my OS is actively reducing the functionality of my equipment.


They got audio setups right. The network degradation happened because video and audio playback were given realtime priority so background processes couldn't cause pops, stutters, etc. At the time Vista was released, most home users didn't have a gigabit network, so the degradation would only hit a small number of users, and most people would rather have good audio and video performance than a network slowdown affecting a small percentage of users. With today's massively multicore systems it's even less of an issue, while Linux still has latency problems with applications like pro audio.


The reason the network degradation happened is that Microsoft couldn't figure out how to stop heavy network activity causing audio glitches on some systems even after giving audio realtime priority, so they hacked around it by adding a fixed 10,000 packets-per-second cap on network activity regardless of system speed or CPU usage (less if you had multiple network adapters). See https://blogs.technet.microsoft.com/markrussinovich/2007/08/... This was just as much of an issue on multicore systems because the cap was unaffected by the system speed and chosen based on a slow single-core machine.
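(For reference, the throttling described above was controlled by a registry value. A hedged sketch, assuming the Vista-era key as documented at the time, that just reads the knob back; a value of 0xFFFFFFFF disabled the packet-rate cap.)

    #include <windows.h>
    #include <stdio.h>

    /* Reads the MMCSS network throttling index; if the value is absent,
     * the default cap (10 packets per millisecond) applied. */
    int main(void)
    {
        DWORD index = 0, size = sizeof index;
        LSTATUS rc = RegGetValueW(
            HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
            L"Multimedia\\SystemProfile",
            L"NetworkThrottlingIndex",
            RRF_RT_REG_DWORD, NULL, &index, &size);

        if (rc == ERROR_SUCCESS)
            printf("NetworkThrottlingIndex = %lu\n", (unsigned long)index);
        else
            printf("value not set (default throttling applies)\n");
        return 0;
    }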


I don't know why you're getting downvoted since the parent is basically stating random opinions about "power" and "sophistication" without anything to actually back it up.

11-param functions don't say "power" to me. They say "poorly thought out API design". The same can be said of most Windows APIs in general.
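(To make that concrete, a familiar example, not one the parent names: a routine Win32 call with a dozen positional parameters. The window class name is hypothetical.)

    #include <windows.h>

    /* Twelve positional parameters, several of which are typically
     * 0/NULL in any given call. */
    HWND make_window(void)
    {
        return CreateWindowExW(
            0,                      /* dwExStyle */
            L"MyWindowClass",       /* lpClassName: assumed registered earlier */
            L"Title",               /* lpWindowName */
            WS_OVERLAPPEDWINDOW,    /* dwStyle */
            CW_USEDEFAULT,          /* X */
            CW_USEDEFAULT,          /* Y */
            640, 480,               /* nWidth, nHeight */
            NULL,                   /* hWndParent */
            NULL,                   /* hMenu */
            GetModuleHandleW(NULL), /* hInstance */
            NULL);                  /* lpParam */
    }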


Can you show me the source so I can check?


Actually, there was a source leak for Windows 2000; most critics said the code was surprisingly good.


There are leaks galore: NT4, 2000, and more recently the Windows Research Kernel. Just google something like 'apcobj.c' and see. (Hah, the first link was a GitHub repo!)


I used to run Linux in a VM on Windows and use Chocolatey for package management, plus Cygwin and PowerShell, etc. Then I realized I was just trying to turn Windows into Linux. That seems to be the way things are going, and the addition of the Linux subsystem kind of proves that Windows really isn't a good OS on its own, especially not for developers.

I wish Windows/MS would abandon NT and just create a Linux distro. I don't know anyone who particularly likes NT and jamming multiple systems together seems like an awful idea.

Windows services and Linux services likely won't play nice together (think long file paths created by Linux services, and other incompatibilities). For them to be 100% backward compatible, they'd need to not only make Windows compatible with what Linux outputs but also make Linux compatible with what Windows services output, and to keep the Linux people from figuring out how to run Windows on Linux systems they'd need to make a lot of what they do closed source.

So I don't see a Linux+Windows setup being deployed for production. It's cool for developers, but even then you can't do much real-world stuff that utilizes both Windows and Linux. If you're only taking advantage of one system, then what's the point of having two?

I went ahead and made the switch to Linux since I was trying to make Windows behave just like Linux.


> I wish Windows/MS would abandon NT and just create a Linux distro. I don't know anyone who particularly likes NT and jamming multiple systems together seems like an awful idea.

I do. The NT kernel is pretty clean and well architected. (Yes, there are mistakes and cruft in it, but Unix has that in spades.) It's not "jamming multiple systems together"; an explicit design goal of the NT kernel was to support multiple userland APIs in a unified manner. Darwin is a much better example of a messy kernel, with Mach and FreeBSD mashed together in a way that neither was designed for.

It's the Win32 API that is the real mess. Having a better officially supported API to talk to the NT kernel can only be a good thing, from my point of view.


Well, large parts of the NT API are very close to the Win32 API for obvious reasons, and so are often in the realm of dozens of params and even crazier Ex functions. Internally there are redundancies that do not make much sense (like multiple versions of mutexes or spinlocks depending on which parts of kernel space use them, IIRC), and some whole-picture aspects of Windows make no sense at all given the architectural cost they induce (Winsock split in half between userspace and the obviously needed kernel support is completely, utterly crazy, beyond repair; it makes so little sense you want to go back in time and explain to the designer of that mess how stupid it is).

The initial approach of NT subsystems was absolutely insane (a hard dependency on an NT API core, so you can't do emulation with classic NT subsystems; you are either limited to OSes with some technical similarities, like OS/2, or to very small communities when doing a new target, as with the POSIX/SFU subsystem). WSL makes complete sense, though, but it is maybe a little late to the party. Classic NT subsystems are of so little use that MS did not even use them for their own Metro and then UWP things, even though they would very much like to distinguish those from Win32 and make the world consider Win32 legacy. I've read the original paper motivating putting POSIX in an NT subsystem, and it contained no really strong point, only repeated incantations that this would be better in an NT subsystem and worse if done otherwise (well, for fork this is obvious, but the paper was not even focused on that), with none of the limitations I've explained above ever considered.

Still, considering the whole system, an unstable user/kernel interface has few advantages and tons of drawbacks. MS is extremely late to the chroot and then container party because of that (and let's remember that the core technology behind WSL emerged because they wanted a chroot-like isolated userspace on their OS in the first place, NOT because they wanted to run Linux binaries) -- so that's yet another point against classic NT subsystems.

Back to core kernel stuff: the IRQL model is shit. It does not make any sense when you consider what really happens, and you can't really use the arbitrary multiple levels. It seems cute and clean and all that, but the Linux approach of top and bottom halves plus kernel and user threads might seem messy and is actually far more usable. Another point: now everybody uses multiprocessor computers, but back in the day the multiple HALs were also a false good idea. MS recognizes it now and only wants to handle ACPI computers, even on ARM; other OSes handle all kinds of computers... Cutler pretended not to like the "everything is a file" approach, but NT does basically the same thing with "everything is a handle". And soon enough you hit exactly the same conceptual limitations (except not in the same places), because not everything is actually the same, so that cute abstraction leaks soon enough (well, it does in any OS).

On a more results-oriented note, one of the things WSL makes clear is that NT file operations are very slow (just compare an identical file-heavy workload under WSL and then under a real Linux).

So of course there are (probably) some good parts, like in any mainstream kernel, but there are also some quite dark corners. I am not an expert on the whole architectural design of NT, but I'm not a fan of the parts I know, and I strongly prefer the Linux way of doing equivalent things.


> Cutler pretended not to like the "everything is a file" approach, but NT does basically the same thing with "everything is a handle". And soon enough you hit exactly the same conceptual limitations (except not in the same places), because not everything is actually the same, so that cute abstraction leaks soon enough (well, it does in any OS).

Explain? Pretty much the only thing you can do with a handle is to release it. That's very different from a file, which you can read, write, delete, modify, add metadata to, etc... handles aren't even an abstraction over anything, they're just a resource management mechanism.


You are right, but those points are details. FDs under modern Unixes (esp. Linux, but probably others) serve exactly the same purpose (resource management). FDs where read/write can't be used just don't define those operations (same principle for other syscalls) -- similarly, if you try NtReadFile on an incompatible handle it will also give you an error back. Both live in a single numbering space per process. NT makes heavy use of NtReadFile/NtWriteFile to communicate with drivers, even in quite core Windows components (Winsock and AFD). And NT handles do serve at least one abstraction (that I know of): they can be signaled and waited on with WaitFor*Objects.

So the naming distinction is quite arbitrary.
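(A minimal sketch of the parallel being drawn, with hypothetical fds/handles assumed to be obtained elsewhere: poll() over descriptors on Linux versus WaitForMultipleObjects() over handles on NT.)

    #ifdef _WIN32
    #include <windows.h>

    void wait_on_handles(HANDLE a, HANDLE b)
    {
        HANDLE objs[2] = { a, b };
        /* Blocks until either handle is signaled: events, processes,
         * threads, mutexes... all through the same call. */
        DWORD w = WaitForMultipleObjects(2, objs, FALSE, INFINITE);
        if (w == WAIT_OBJECT_0)          { /* a signaled */ }
        else if (w == WAIT_OBJECT_0 + 1) { /* b signaled */ }
    }
    #else
    #include <poll.h>

    void wait_on_fds(int a, int b)
    {
        struct pollfd fds[2] = {
            { .fd = a, .events = POLLIN },
            { .fd = b, .events = POLLIN },
        };
        /* Blocks until either fd is ready: sockets, pipes, eventfds,
         * timerfds... all through the same call. */
        poll(fds, 2, -1);
        if (fds[0].revents & POLLIN) { /* a ready */ }
        if (fds[1].revents & POLLIN) { /* b ready */ }
    }
    #endif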


> You are right, but those points are details.

Uh, no, they are very crucial details. For example, it means the difference between letting root delete /dev/null like any other "file" on Linux, versus an admin not being able to delete \Device\Null on Windows because it isn't a "file". The nonsense Linux lets you do because it treats everything like a "file" is the problem here. It's not a naming issue.


Linux has plenty of file descriptor types that do not correspond to a path, along with virtual file systems where files cannot be deleted...

Your example of device files is hardly universal, and the way it works is useful.


And to give you another example, look at how many people bricked their computers because Linux decided EFI variables were files. You can blame the vendors all you want, but the reality is this would not have happened (and, mind you, it would have been INTUITIVE to every damn user) if the OS were sane and just let people use efibootmgr instead of treating every bit and its mother as a file. Just because you have a gun doesn't mean you HAVE to try shooting yourself, you know? That holds even if the manufacturer was supposed to have put a safety lock on the trigger, by the way. Sometimes some things just don't make sense, if that makes sense.
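(A sketch of the failure mode being described; the variable name below is deliberately fake. The hazard was that the second call, syntactically identical to the first, wrote to firmware.)

    #include <unistd.h>

    int main(void)
    {
        unlink("/tmp/scratch.txt");  /* ordinary file: harmless */

        /* With efivarfs mounted read-write, the exact same syscall
         * deletes a firmware variable; on affected machines this
         * could render them unbootable. (Fake name, do not run.) */
        unlink("/sys/firmware/efi/efivars/"
               "ExampleVar-12345678-1234-1234-1234-123456789abc");
        return 0;
    }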


How many people really did this, compared to e.g. Windows users attacked by CryptoLocker?


"The way it works is useful?"! When was the last time you found it useful to delete something like /dev/null via command line? And how many poor people do you think have screwed up their systems and had to reboot because they deleted not-really-files by accident? Do you think the two numbers are even comparable if the first one isn't outright zero?

It literally doesn't make any sense whatsoever for many of these devices to behave like physical files, e.g. be deletable or whatnot. Yes there is an exception to every nonsense like this, so yes, some devices do make sense as files, but you completely miss the point when you ignore the widespread nonsense and justify it with the exceptions.


Your complaint is with the semantics of the particular file. There's no reason why files in /dev need be deletable using unlink. That's an historical artifact, and one that's being rectified.

"Everything is a file" is about reducing most operations to 4 very abstract operations--open, read, write, and close. The latter three take handles, and it's only the former that takes a path. But you're conflating the details of the underlying filesystem implementation with the relevant abstraction--being a file implies that it's part of an easily addressable, hierarchical namespace. Being a file doesn't imply it needs to be deletable. unlink/remove is not part of the core abstraction. But they are hints that the abstraction is a little more leaky than people let on. Instantiating and destroying the addressable character of a file poses difficult questions regarding what the proper semantics should be, though historically they're deletable simply because using major/minor device nodes sitting atop the regular persistent storage filesystem was the simplest and most obvious implementation at the time.


Hm, I was thinking more about open FDs, not just special file entries on the FS. Well, I agree with you: it's a little weird and in some cases dangerous to have the char/block devices in the FS, and it has already been worked around multiple times in different ways, in some cases even with multiple different workarounds simultaneously. NT is better on that point. But not once the "files" are open and you've got FDs.


> the IRQL model is shit. It does not make any sense when you consider what really happens,

On the contrary. It's only when one considers what actually happens, especially in the local APIC world as opposed to the old 8259 world, that what the model actually is finally makes sense.

* http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/ir...


I don't care about the 8259, and I don't see why anybody should except PIC driver programmers; it's doubtful anybody designing NT cared, given that the very first architecture it was designed against was not a PC, and this is typically the kind of thing that goes through the HAL.

IRQL is a completely software-abstraction thing, in the same way top/bottom halves and the various threads are under Linux (hey, in some versions of the Linux kernel it even transparently switches to threaded IRQs for the handlers; there is no close relationship with any interrupt controller at that point...). IRQL is shit because most of the arbitrary levels it provides are not usable to distinguish anything meaningful from an application point of view (application in the historical sense, no "App" bullshit intended here), even in seemingly continuous areas (DIRQL), so there is no value in providing so many levels with nothing really distinguishing them -- or, at some level transitions, too many completely different things. It's even highly misleading, to the point that the article you link is needed (and it does not even provide the whole picture). I see potential for misleading people with PIC knowledge, people used to real-time OSes (if you try to organize application priority based on IRQL, you will fail miserably), people with backgrounds in other kernels; well, pretty much everybody.
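(For readers who haven't met it, this is the mechanism being argued about; a minimal WDM-style sketch, driver boilerplate omitted, of how kernel code moves between levels.)

    #include <wdm.h>

    /* Raises the current processor's IRQL to DISPATCH_LEVEL, does work
     * that must not page or block, then restores the previous level. */
    VOID DoWorkAtDispatchLevel(VOID)
    {
        KIRQL oldIrql;

        KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);
        /* At DISPATCH_LEVEL: no paged memory, no waiting on dispatcher
         * objects; the scheduler cannot preempt this processor, only
         * higher-IRQL (device) interrupts can. */
        KeLowerIrql(oldIrql);
    }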


> Having a better officially supported API to talk to the NT kernel can only be a good thing, from my point of view.

That's particularly interesting now that SQL Server has been ported to Linux. It would be funny if they end up using the Linux subsystem on Windows too.

Although I suspect SQL Server already talks to the kernel directly.


No, they have a sophisticated user-mode library but use only public kernel APIs. That user-mode library also helped them migrate SQL Server to Linux relatively painlessly.


> Having a better officially supported API to talk to the NT kernel can only be a good thing, from my point of view.

This is what I am looking forward to with WinRT, hence why Rust should make it as easy as C++/CX and C# to use those APIs. :)


Well, I've personally seen Microsoft employees themselves complaining about the state of NT and saying it has "fallen behind Linux".

An old HN commenter (mrb) once wrote:

> There is not much discussion about Windows internals, not only because they are not shared, but also because quite frankly the Windows kernel evolves slower than the Linux kernel in terms of new algorithms implemented. For example it is almost certain that Microsoft never tested I/O schedulers, process schedulers, filesystem optimizations, TCP/IP stack tweaks for wireless networks, etc, as much as the Linux community did. One can tell just by seeing the sheer amount of intense competition and interest amongst Linux kernel developers to research all these areas.

> The net result of that is the generally acknowledged fact that Windows is slower than Linux when running complex workloads that push network/disk/CPU scheduling to its limit: https://news.ycombinator.com/item?id=3368771 A really concrete and technical example is the network throughput in Windows Vista, which is degraded when playing audio! https://blogs.technet.microsoft.com/markrussinovich/2007/08/...

> Note: my post may sound like I am freely bashing Windows, but I am not. This is the cold hard truth. Countless multi-platform developers will attest to this, me included. I can't even remember the number of times I have written a multi-platform program in C or Java that always runs slower on Windows than on Linux, across dozens of different versions of Windows and Linux. The last time I troubleshooted a Windows performance issue, I found that the MFT of an NTFS filesystem was fragmented; this is to say that I am generally regarded as the one guy in the company who can troubleshoot any issue, yet I acknowledge I can almost never get Windows to perform as well as, or better than, Linux when there is a performance discrepancy in the first place.


I wonder how a "perfect" society would devise a system to keep search engines and other service providers from creating monopolies? I imagine they would force people to use different software based on region or something. It would be nice if instead of having a mass spying apparatus we had a mass bug tracking apparatus and mandated open source, emphasized programming and hacking like we do sports, and gave people large rewards and fame for finding bugs in our software.

I imagine we'll be in a society soon (next 100 years) where we 3D print all of our own food and most of our stuff, which means it will likely be downloaded from some remote database. If the global food database were to be hacked we could see mass poisonings. I don't trust security through obscurity in these cases and wouldn't eat food if I couldn't view the ingredients.

The FBI/NSA/CIA's purpose is to keep people insecure so that they can spy on them. This is the opposite of what we need.


Any Linux distro using GNOME 3 has had this for a few years.


First thing on DuckDuckGo when you search for how to write a server in Erlang:

http://20bits.com/article/erlang-a-generic-server-tutorial

That's just ridiculous. Erlang looks like PHP and Python had an unholy child.


I won't downvote you because you are certainly entitled to your opinion, and there is no way I'm going to get into an Erlang vs Go For Writing Servers argument, but if you immediately write off Erlang/OTP because of how it looks, you are going to miss out on some pretty amazing server writing functionality.


Different strokes for different folks. Looks fine to me. But I'd prefer you didn't offer advice if you don't actually know Erlang in the slightest.


If all you dislike is the syntax, you can use Elixir instead.


Actually, it was more like Prolog self-pollination with a dash of acid. ;)

But seriously, the syntax is not that big a deal once you've understood it (and being receptive to the history behind it helps).


What is ridiculous exactly? You just dislike the superficial look of the syntax?


Before you get all indignant, that's not even a TCP server.


Why do you use vi and not vim?


I just opened a project in VSCode, closed VSCode, deleted the project, and now when I try to open VSCode the program won't load. Did I break it?

Also, how do I autocomplete using suggestions from other files within the project directory?


Do you mind filing that issue? https://github.com/Microsoft/vscode/issues/new


No idea about that.

For me, autocomplete works out of the box in all projects. There is also a bulb next to the file type in the bottom-right corner which offers to create a jsconfig file for better IntelliSense support. I haven't noticed any difference after creating it, though. VSCode makes JS sufferable for me :P


For VSCode crashing, I have no suggestions, since I've never seen it myself. You could try deleting the config files in your "home" directory and running from scratch.

Which language are you trying to autocomplete? For some, like JavaScript/TypeScript, it works out of the box. For many, like Golang, C++, and Python, you need to install a plugin, but that takes just a few seconds. Also, for autocomplete to work across files, I think you may need to open the whole directory ("File/Open Folder..." menu item).

