I hate this magazine narrative style, where every paragraph or two says exactly the same thing in different words, each time quoting another Mr. or Ms. Scientist. The whole article could easily be ten times shorter.
US tech companies actually had much larger layoffs, but these were again at the very low end. The high and middle ends are booming as if we were back in the 2000 bubble. Given that disparity, one should always clarify what kind of tech jobs one is talking about.
Computer, electronics and telecommunications companies shed more than 90,000 jobs last year, compared to nearly 80,000 in 2015, according to a new report out from Challenger, Gray & Christmas, a global outplacement firm based in Chicago. Tech layoffs accounted for 18 percent of the total 526,915 U.S. job cuts announced in 2016.
Computer companies were hit especially hard; layoffs in the sector were up 7 percent in 2016. Dell Technologies (NYSE: DVMT) was responsible for much of the uptick when it merged with EMC last year and cut thousands of positions. Intel (NASDAQ: INTC), IBM (NYSE: IBM), Cisco Systems (NASDAQ: CSCO) and Microsoft (NASDAQ: MSFT) also shed jobs, with Intel laying off 12,000 employees, or 11 percent of its workforce, per the report.
You're right that it's a fairly small proportion, but when the workforce overall is growing by roughly 1 million per month, growth within each sector is essential just to cope (the figure's source is two years old and not IT-specific, but still likely to be broadly accurate: https://blogs.wsj.com/briefly/2015/07/22/indias-labor-force/ )
I'd say that 2% of a whole industry being laid off is quite considerable; if 2% of the German car industry were laid off in a single year, for example, it would be considered a big deal.
I think it's fair to say that Go was hardly the first language to implement the concept of coroutines[1]. Probably Go's only real contribution here is the short, convenient name of the construct that launches one.
> libmill was a project that aimed to copy Go's
> concurrency model to C 1:1 [...]
> libdill is a follow-up project that experiments with
> structured concurrency and diverges from the Go model.
I would say the main advantage is being able to pass coroutine arguments as if it were a simple function call. In other coroutine implementations in C (e.g. libtask) you have to pack all the arguments into a struct and pass that to the run function.
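A minimal sketch of what that looks like with libdill's documented go()/coroutine API; the worker function and its arguments are just illustrative, and error handling is omitted (build against libdill with -ldill):

    /* Arguments are passed to the coroutine as in an ordinary call. */
    #include <libdill.h>
    #include <stdio.h>

    coroutine void worker(int id, const char *name) {
        msleep(now() + 100);               /* yield to other coroutines for ~100 ms */
        printf("worker %d: %s\n", id, name);
    }

    int main(void) {
        int h = go(worker(1, "hello"));    /* launch; no argument struct needed */
        msleep(now() + 200);               /* give the coroutine time to finish */
        hclose(h);                         /* close the coroutine handle */
        return 0;
    }

    /* With libtask, by contrast, the entry point is void (*fn)(void *), so the two
       arguments above would have to be packed into a struct and handed to
       taskcreate(fn, &args, stacksize). */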
I read that Node.js is written in C with the same concept at its core: asynchronous execution on a single core (thread), driven by interrupts/callbacks. How does it compare to libdill?
> So it looks like libuv could use libdill, right?
libuv has (iirc) the following genealogy:
libevent -> libev -> libuv
Like its predecessors (with the exception of libev, I think) it has grown to be quite large, but its core is still centered on the concept of an event loop. The event loop itself is hidden, and 'user' code interacts with it via callback event handlers.
Given this context, I still don't quite understand how/where libdill might play a role here?
FWIW, almost all of these event-processing libraries support multiple event loops, so the event loop is a first-class citizen within the library, with functions for creating/destroying/starting/stopping loops. Multiple event loops are particularly useful in the context of multi-threaded servers, for example.
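To make the callback-on-a-loop model concrete, here's a rough sketch using libuv's public API (uv_loop_init / uv_timer_start / uv_run); error checking is omitted and the timer callback is just a stand-in for real event handlers:

    #include <uv.h>
    #include <stdio.h>

    static void on_timer(uv_timer_t *handle) {
        printf("tick\n");
        uv_close((uv_handle_t *)handle, NULL);       /* no active handles left, so uv_run() returns */
    }

    int main(void) {
        uv_loop_t loop;
        uv_loop_init(&loop);                         /* loops are first-class; you can create several */

        uv_timer_t timer;
        uv_timer_init(&loop, &timer);
        uv_timer_start(&timer, on_timer, 1000, 0);   /* fire once after 1000 ms */

        uv_run(&loop, UV_RUN_DEFAULT);               /* the hidden loop: waits for events, runs callbacks */
        uv_loop_close(&loop);
        return 0;
    }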
libuv supports Windows and in some ways only exists because Windows support was needed; on Unix/Linux, where Node started, libev was sufficient. libuv was created to facilitate the Windows port of Node, though it now stands on its own.
I know - I am one of the authors of libuv.
However, libuv is pretty big and imposes its own asynchronous I/O model on your application.
Later I discovered that it is possible to do efficient epoll emulation on Windows, so I wrote wepoll. With it you can stick to the good ol' epoll/kqueue model and still support Windows.
It uses an ioctl that boils down to 'an overlapped version of poll()'. The call doesn't block; instead, when an event like POLLIN or POLLOUT happens, a completion is posted to the completion port.
This overlapped poll is issued for every monitored socket individually, so you don't inherit poll()'s scalability problems.
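For what the user-facing side looks like, here's a rough sketch against wepoll's epoll-style API as described in its README (epoll_create1 returns a HANDLE on Windows; 'sock' stands for some already-created SOCKET and its setup isn't shown):

    #include <wepoll.h>

    static void wait_for_readable(SOCKET sock) {
        HANDLE ep = epoll_create1(0);

        struct epoll_event ev = {0};
        ev.events = EPOLLIN;                  /* internally backed by an overlapped poll on this socket */
        ev.data.sock = sock;
        epoll_ctl(ep, EPOLL_CTL_ADD, sock, &ev);

        struct epoll_event out[8];
        int n = epoll_wait(ep, out, 8, -1);   /* blocks until a completion is posted */
        (void)n;                              /* handle the n ready sockets here */

        epoll_close(ep);
    }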
I'd be interested to know whether there is any research showing that reducing the cyclomatic complexity of one's code leads to faster development, better readability, etc. I know it seems kind of obvious, but it would still be great to see numbers and statistics.
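Not research, but as a toy illustration of the kind of change involved: replacing an if/else chain with a table lookup lowers the per-function cyclomatic complexity (the status_name functions below are purely hypothetical examples):

    #include <stddef.h>

    /* Chain of branches: four decision points, cyclomatic complexity 5. */
    const char *status_name_branchy(int code) {
        if (code == 200) return "OK";
        else if (code == 301) return "Moved";
        else if (code == 404) return "Not Found";
        else if (code == 500) return "Server Error";
        else return "Unknown";
    }

    /* Table-driven: one loop and one comparison, cyclomatic complexity 3. */
    const char *status_name_table(int code) {
        static const struct { int code; const char *name; } table[] = {
            {200, "OK"}, {301, "Moved"}, {404, "Not Found"}, {500, "Server Error"},
        };
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].code == code) return table[i].name;
        return "Unknown";
    }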
Is it capable of basic formatting, like lists, quotes, etc.? I see there's an option to export to Markdown, but does it parse those formatting elements, or should I write them in Markdown already?