
What is the typical use case for such a thing?



This is great - good work to you and/or your team


> WireGuard doesn't work over TCP

Can somebody well versed explain what the difference between TCP and UDP is in this case? I obviously know what these are, I just don't understand why it's such a debatable choice applied to VPNs.


01CGAT’s link sums it up: TCP is not designed to be stacked. Doing so makes the exponentially increasing retry timeout, a reliability feature of the protocol, interact badly across layers and provoke excessive retransmission attempts by the upper-layer TCP.

The detailed explanation is in the linked article: “Why TCP over TCP is a bad idea”[0]. The link was dead for me, so I dug up an archive.org copy.

The upper layer's transmission control and retransmission attempts are completely unnecessary, as delivery is already guaranteed by the lower-layer TCP. The upper-layer TCP, unaware of the TCP underneath and backing off exponentially on missed acknowledgments, can queue up more retransmissions than the lower layer can process, increasing congestion and inducing a meltdown effect.

Explained better here: [0]https://web.archive.org/web/20190531210932/https://sites.ink...
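For intuition, here's a toy model (just a sketch, not a real TCP stack, and the names and default timeouts are my own assumptions) of what happens when the tunnel TCP stalls while recovering from loss: the upper-layer TCP, backing off exponentially, keeps handing redundant retransmissions to the tunnel, which dutifully queues every one of them.

```python
# Toy model of TCP-over-TCP during an outage. While the lower (tunnel)
# TCP is stalled for `outage` seconds recovering from loss, the upper TCP
# sees no ACKs and retransmits with exponential backoff. Each
# retransmission is reliably queued by the tunnel and must still be
# delivered after recovery: pure redundant load on a congested path.

def redundant_retransmissions(outage, initial_rto=1.0, max_rto=64.0):
    """Count upper-layer retransmissions fired during a lower-layer stall."""
    t, rto, count = 0.0, initial_rto, 0
    while t + rto < outage:          # next timeout fires before recovery
        t += rto
        count += 1                   # another redundant copy queued below
        rto = min(rto * 2, max_rto)  # exponential backoff, RFC 6298 style
    return count

for outage in (3, 10, 60):
    print(f"{outage:2d}s stall -> {redundant_retransmissions(outage)} redundant copies")
```

The effect compounds in practice because the tunnel's own backoff stretches the stall, which triggers more upper-layer copies, which in turn lengthen the tunnel's queue.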


Mind you, this applies not only to VPN setups, but to any tunneling or overlay protocol.


It's applicable to any tunneling or overlay protocol that encapsulates TCP in TCP.



Is this the new oil? It could speed up space exploration.


I don't think its density would make it worth anything. It could be used in transit though, I suppose.


This is suspicious. They have the VPN traffic, now they want passwords. Encrypted of course, but still. The trust just isn't there. The company is too young. I don't trust them just because they have great design and UX.


I don't like that file operations are controlled by the F1-F8 keys. This UX is unfriendly to the macOS environment: most users' F-row is in media-key mode by default, so you have to hold Fn for every operation. And that's not even taking the Touch Bar into account.


Marta author here.

The default key binding set is a de facto standard for double-pane file managers. I personally used Total Commander for a long time until I switched to Mac, and I really missed an FM with similar hotkeys.

But I understand now that it's not what all Mac users expect from a file manager. I'll add an option to choose the hotkeys on first launch [1].

As for the Touch Bar, Marta has supported it since 0.1.2.

[1] https://github.com/marta-file-manager/marta-issues/issues/30...


> And I'm not even taking into account the Touch Bar.

Yep: the app provides Touch Bar buttons for many operations.


I use https://github.com/Pyroh/Fluor to change whether the function keys or the media keys show up based on the application in the foreground.

It works quite well for me.


You can use software like Karabiner to change the default behavior of F keys for specific apps. In my case, function keys behave as F1-F10 in Terminal, Virtualbox, Pycharm and Dosbox, for example.

Not sure about the touch bar though.


It's not really unfriendly for developers though, who I would expect make up a decent proportion of the intended audience, and who surely all have the 'Use F1, F2 keys as standard function keys' option set.

As for the Touch Bar, yes, that's a problem with the standard keybindings (and is also why I'll probably only be a macOS/Marta user for a few months, as I'll never buy a laptop with a fake keyboard).

But settings to the rescue! Cmd-shift-p & 'open default keybindings' reveals that they're all configurable.


> and who surely all have the 'Use F1, F2 keys as standard function keys' set.

I develop for several different platforms on macOS, and have never felt the need to enable that setting, because I've never encountered any macOS software that required the Fn keys in this particular non-idiomatic way. Even macOS IDEs stay away, tending to map things to complex key-chords instead.


IntelliJ.


It takes quite a bit of tweaking to get JetBrains IDEs to work like other macOS applications. Even AppCode has some issues there.


I guess it's a matter of taste whether you converge your familiar environment around the OS or your primary tools. I tend to do the latter (keeping IntelliJ, emacs, Chrome & shell use more-or-less consistent across platforms). I have more faith in my ongoing relationship with those tools than with a particular OS.


> and who surely all have the 'Use F1, F2 keys as standard function keys' set.

Why? As a heavy emacs and Xcode user at least, I rarely need a function key, and prefer the convenience of the media keys.


It's just been my experience with developers with macs. No big deal either way.


Where are these new battery technologies that are invented every 6 months, according to the news?


They are in your phone.

10 years ago it was almost impossible to find a phone with a battery of over 1500 mAh. Today the standard size is 3000 mAh.

While the capacity has doubled, the volume of the battery has stayed the same or diminished.

There are phones with 5000, 6000, and even 10000 mAh batteries on the market today.

That's how I know that everyone complaining about thinner phones and reduced battery life is kidding themselves.

Thinphones are the top sellers, thickphones with batteries that last for days are not.


> thickphones with batteries that last for days are not.

Point me to a recent one with a recent build of Android that receives regular security updates from a reputable builder and I'll buy it immediately. I'd line up outside a building like it was Black Friday to buy a Galaxy 8 or Pixel 2 with an integrated 10000mAh battery.

But, alas, this magical unicorn "thickphone" doesn't exist on the market, even if some random Chinese manufacturer that will be gone tomorrow makes one with a four year old SoC and a three year old build of Android with no security updates since the phone's release two years ago...


LG V20 with an aftermarket battery. I regularly go three days without charging.


>That's how I know that everyone complaining about thinner phones and reduced battery life is kidding themselves.

I definitely get worse battery life out of my iPhone 6s than I did with my iPhone 4 which was worse than my feature phones. My Nexus 6p is worse than any listed so far as well.

Putting 8 core processors in phones to (poorly) compensate for a pig of an operating system and app ecosystem doesn't help.


Batteries have gotten a lot better and cheaper over time.


Not a lot better. Moore's law does not apply to battery tech, no matter how much the electric-car evangelists would like it to.


Compare a Li-Ion battery from 1992 with one from today and you're easily talking 2x+ the recharge cycles; that's huge on its own. But you also get lower weight, lower volume, increased power, and lower cost. You can even get a vast array of form factors, not just little cylinders.

Granted, there are trade-offs, and I would love to have 10x the power. Still, 2x the energy by weight at 1/3 the cost means I have no problem saying batteries have gotten dramatically better.


Compare the advances in cpu tech or DRAM


How about the advancement in steel or gasoline? Batteries are old tech. CPU advancement is very similar to the increases in engine horsepower over its first 80 years, which steadily doubled all the way up to rockets and then almost completely stalled out.

Saying batteries are "only" 10x+ as good overall in the last 25 years is a silly standard: recharge increase * weight decrease * cost decrease. Remember, each of those is independently a large improvement.

If the same increases happen again, electric cars in 25 years would have 1,000+ mile range and charge in ~30 seconds.


Of course not. Moore's law is about the number of transistors per unit area. Battery capacity doesn't have much to do with transistor density.


You're missing the point (I'll give you the benefit of the doubt that it's in good faith): battery tech advances very slowly, but most of the "ambitious" claims for electric cars seem to assume that some magic discovery will result in a step change.

I am using it as an analogy that anyone familiar with basic technology would understand.


Battery technology is improving at a steady rate of 8% per year. Every time you hear of a breakthrough, it's at least 10 years away from being production-ready.


Just to add: 8% a year is doubling every decade. So if you hear about something 2x as good that takes 10 years to become mainstream, that's progress as normal.
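The arithmetic behind that is just compounding:

```python
# 8% yearly improvement, compounded over a decade
growth = 1.08 ** 10
print(f"{growth:.2f}x after 10 years")  # roughly a doubling
```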


They always hit the reality of engineering and scale.


Is this the second or third time they're fixing Meltdown?


I think Intel might get away with it.

For the last 5 years they were slacking off, because economically there was no reason to go beyond the usual 10-15% yearly performance bump. But actually they were accumulating aces up their sleeves. Again, no reason to show your hand if you don't have to.

But the time has come. Right now Intel has 3 major problems: 1) the Meltdown/Spectre situation 2) AMD has awoken from its slumber with a surprisingly good Ryzen lineup 3) Apple craves new powerful CPUs to satisfy unhappy MacBook Pro customers.

Intel can fix all of this in one sweep, just by releasing a brand new CPU that surprises everyone, of course with a hardware Meltdown/Spectre fix. They were holding off, but it's time to drop all those hidden aces on the table. And I believe it's gonna happen. Not right now with Cannon Lake, but with the one after, Ice Lake on 10nm, by the end of 2018. It's going to be even bigger than NVIDIA's GTX 1080 success.


Doubtful. You don't just develop a new processor overnight, and if they truly had all these aces up their sleeves, they would have dropped them already in response to Zen last year.

Intel's process advantage is shrinking. They're struggling like everybody else because the physics is getting harder and harder. Apart from the fact that it would have been nice to get easy process shrinking forever, this is good news for almost everybody: it means competition for them is getting tougher.


> this is good news for almost everybody

I don't think CPU capacity failing to double every 18 months is good news for anybody. I'd rather have a monopolistic Intel churning out 2x powerful chips every 2 years than a competitive market giving 5% performance bump per year.


It’s doubtful that they would drop all their aces in response to Zen.


Actually, I'd turn this on its head and ask: Why is there this claim that they had or have any aces in the first place, Zen or no Zen?

What you and the ggp are basically saying is that Intel slowed down the improvement in their processors on purpose over the last several years. Why on earth would they do that?

Besides, all the evidence points to the contrary, what with them being unable to compete in the mobile space.


I'd speculate that top management was aware of the physics limitations to their biggest market advantage, and they probably even had a timeline for when the competition would inevitably catch up. So they must have been spending their billions on something that's going to keep the company afloat in the 21st century.

Maybe quantum computing, neuromorphic chips, GPGPU and 3D NAND are where it's at for them in the future, and traditional CPUs will be more or less commoditized.


> Why is there this claim that they had or have any aces in the first place, Zen or no Zen?

I'm not a big hardware person, but from what I've heard, the speed with which they released 6-core processors after Ryzen makes it likely they were capable of producing 6-core (consumer) designs earlier.


The original hexacore Xeon is almost eight years old (March 2010 release). Intel released a consumer hexacore in response to Ryzen. Intel's artificial market segmentation is ridiculous, but so is the typical AMD watcher's near-total ignorance of what is happening in the Xeon line.


That may be overstating AMD watchers' ignorance by quite a bit. The big marketing push at the Zen launch was that Intel had a chip with a lot of cores, but at 2x the price with slightly worse performance.


They produce Xeon chips with dozens of cores forked off the same architecture, so that wasn't too surprising. Sticking to four cores was probably just market segmentation, like not supporting ECC memory in the consumer line, to protect Xeon sales.


If you think they can redesign their cache geometry, indirect branch predictors, and return stack buffers in a matter of a few months and then tape out new processors by the end of the year, you're nuts.

It's gonna take 3 years minimum until even the easiest of those things is resolved and silicon hits the street, even if Intel begrudgingly admits they need to do this.


> But actually they were accumulating aces up their sleeves.

...

> And I believe it's gonna happen.

I don't notice anything in your post supporting those beliefs, aside from Intel having a motivation to make them true.


Intel is a huge company and it's hard for huge companies to make abrupt transitions. CPUs have a development cycle that spans years, and employees who were laid off during ACT a couple years ago when Intel decided their headcount was too high aren't going to suddenly come back now.

What Intel could do in the short term is reduce their prices drastically. They have the profit margins to afford it.


They had an ace up their sleeve to fix those security issues? If that is true, they should get ready for the lawsuits I guess...


Does it include patches from wine-staging (CSMT)?


It includes some CSMT work, but it might not be the same as the wine-staging one. You need to enable it through a registry key.

See https://wiki.winehq.org/Useful_Registry_Keys

HKEY_CURRENT_USER

    +-Software
       |
       +-Wine
          |
          +-Direct3D
             |
             +-csmt
                 [DWORD Value (REG_DWORD): Enable (0x1) or disable (0x0, default) the multi-threaded
                  command stream feature. This serialises rendering commands from different threads into
                  a single rendering thread, in order to avoid the excessive glFlush()es that would otherwise
                  be required for correct rendering. This deprecates the "StrictDrawOrdering" setting.
                  Introduced in wine-2.6.]
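For convenience, that key can also be created from a terminal with Wine's built-in reg tool (this assumes the default Wine prefix; set WINEPREFIX first if you use another one):

```shell
# Create the csmt DWORD described above and set it to 1 (enabled)
wine reg add "HKEY_CURRENT_USER\Software\Wine\Direct3D" /v csmt /t REG_DWORD /d 1
```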


What's the difference between SentinelJS and DOM Mutation Observers?

I glanced at the source code; it seems this lib is using some kind of hack based on CSS animations. Why not just use Mutation Observers, the standard API for getting this job done?


Mutation observers are very, very bad for performance. They're still useful for some things as they give you a lot of detail about what happened, but if you're just looking for a change on a specific set of elements this library seems like a better bet.


> Mutation observers are very, very bad for performance.

No, they are not. Mutation events are tragic for performance and are deprecated (Chrome will make that pretty obvious in the console). The Mutation Observer was designed to have reasonable performance characteristics. It's still a very active listener, but rarely is it a bottleneck.

more: https://developers.google.com/web/updates/2012/02/Detect-DOM...


I would have thought MOs were implemented in C and thus pretty fast. Either way, even if they're slow compared to this technique at present, I haven't found performance to be an issue when using them, and no doubt they'll be optimised further.

