jdamato's comments | Hacker News

Thanks for mentioning this! I've seen this website before and it's really unfortunate how much attention it gets.

APT's use of plain text HTTP (even with GPG) is vulnerable to several attacks outlined in this paper: https://isis.poly.edu/~jcappos/papers/cappos_mirror_ccs_08.p....

Yes, this paper is old, but APT is still vulnerable to most of these attacks. I would advise anyone wanting to use APT to do so only with TLS.


The criticisms in that paper either do not apply to APT as described in TFA or amount to DoS attacks. HTTPS does not and cannot solve DoS.


APT will not reject replayed metadata as long as its `Valid-Until` date has not passed yet.

Imagine a version of, say, libEXAMPLE has a vulnerability allowing remote code execution. The `Valid-Until` date is some time in the future, maybe a few days from now. The authors release a new version of libEXAMPLE to patch the vulnerability and the APT repository metadata is updated.

However, a malicious actor performing a MitM against your machine has saved the metadata with the vulnerable version. The malicious actor replays that metadata to your system, preventing your system from seeing the newly patched libEXAMPLE. This gives the attacker up until the `Valid-Until` date to attempt to launch an attack against you.
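To make the exposure window concrete, here's a minimal C sketch (my own illustration, not APT's actual code) of the freshness check in play: the client parses the repository's `Valid-Until` timestamp and rejects metadata that has expired. Replayed metadata passes this check right up until that date, which is exactly the window the attacker gets; shortening the window (APT has a knob along these lines, Acquire::Max-ValidTime, if I recall correctly) or serving the repository over TLS narrows it.

    /* Minimal sketch of a Valid-Until freshness check; not APT's real code.
     * Assumes the RFC 2822-style date format used in Release files. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    static int metadata_is_fresh(const char *valid_until)
    {
        struct tm tm;
        memset(&tm, 0, sizeof(tm));

        /* e.g. "Sat, 01 Jan 2022 00:00:00 UTC" */
        if (!strptime(valid_until, "%a, %d %b %Y %H:%M:%S", &tm))
            return 0;                      /* unparseable: treat as stale */

        return timegm(&tm) >= time(NULL);  /* fresh only while the date is ahead of us */
    }

    int main(void)
    {
        const char *valid_until = "Sat, 01 Jan 2022 00:00:00 UTC";
        printf("metadata %s\n",
               metadata_is_fresh(valid_until) ? "accepted" : "rejected as stale");
        return 0;
    }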


Yes, plain text APT repositories (signed with GPG or not) are vulnerable to freeze attacks.


We never suggest that you turn security off -- rather, several versions of APT come with various security settings defaulted to off, as described in the article.

All of the attacks presented (replay attacks, freeze attacks, and downgrade attacks) affect GPG signed APT repositories.


Hi!

I'm the author of the article. We never suggest turning off GPG and checksum verification.

The bugs may be in APT itself, but they open up several attack vectors against systems using APT, as explained throughout. Let me know if you have any specific questions and I'd be happy to help clear things up!


The website you linked to has several factual errors, as explained in the article.


Yep, and the information is still relevant! The article explains how it applies to recent versions of APT in the current Ubuntu LTS releases.


We don't really know what EC2 does or precisely the type of hardware your VM will be spun up on. I've erred on the side of being cautious due to the vast amount of work being invested in timekeeping in various hypervisors. If EC2 knows that the TSC clocksource is safe on all of its hardware, perhaps modifying the Amazon Linux AMI to set TSC as the default clocksource would reassure many folks, myself included.
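For anyone who wants to check what a particular instance actually ended up with, the kernel exposes the active clocksource through sysfs; here's a minimal sketch that just reads it back (standard Linux sysfs path, assuming the usual single clocksource device):

    /* Print the clocksource the kernel is currently using. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/clocksource/clocksource0/current_clocksource";
        char name[64];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        if (!fgets(name, sizeof(name), f)) {
            perror("fgets");
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("current clocksource: %s", name);  /* e.g. "xen", "kvm-clock", "tsc" */
        return 0;
    }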

Advanced users who can run their own analysis, or whose applications can withstand potential time warps, are of course free to ignore my warning at their own risk ;)


This is precisely what the vDSO does. The clocksources mentioned explicitly mark themselves as not supporting userspace reads via the vDSO, hence the fallback to a regular system call.


Not quite; vdso is a general syscall-wrapper mechanism. The Solaris solution is specifically just for the gettimeofday(), gethrtime() interfaces, etc.

The difference is that on Solaris, since there is no public system call interface, there's also no need for a fallback. Every program is just faster, no matter how Solaris is virtualized, since every program is using libc.

There's also no need for an administrative interface to control clocksource; the best one is always used.


Not quite. The vDSO provides a general syscall-wrapper mechanism for certain types of system call interfaces. It also provides implementations of gettimeofday, clock_gettime, and two other system calls completely in userland, and acts precisely as you've described.
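As a rough illustration (my own sketch, not code from the post), the two paths can be seen side by side: the ordinary libc call is normally satisfied entirely in userland by the vDSO, while syscall(2) forces a real kernel entry for the same clock:

    /* Sketch: vDSO-backed clock_gettime vs. an explicitly forced system call.
     * Whether the first call actually stays in userland depends on the
     * current clocksource supporting vDSO reads. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);               /* usually the vDSO path */
        printf("libc/vDSO: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

        syscall(SYS_clock_gettime, CLOCK_MONOTONIC, &ts);  /* always a real syscall */
        printf("syscall:   %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

        return 0;
    }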

Please see this[1] for a detailed explanation. For a shorter explanation, please see the vDSO man page[2]. Thanks for reading my blog post!

[1]: https://blog.packagecloud.io/eng/2016/04/05/the-definitive-g...

[2]: http://man7.org/linux/man-pages/man7/vdso.7.html


I'm aware of the VDSO implementation at a high level, but I would still say that the Solaris implementation is more narrowly focused and, as a result, does not have the subtle issues / tradeoffs that VDSO does.

Also, I personally find VDSO disagreeable, as do others, although perhaps not in as dramatic terms as some:

https://mobile.twitter.com/bcantrill/status/5548101655902617...

I think Ian Lance Taylor's summary is the most balanced and thoughtful:

Basically you want the kernel to provide a mapping for a small number of magic symbols to addresses that can be called at runtime. In other words, you want to map a small number of indexes to addresses. I can think of many different ways to handle that in the kernel. I don't think the first mechanism I would reach for would be for the kernel to create an in-memory shared library. It's kind of a baroque mechanism for implementing a simple table.

It's true that dynamically linked programs can use the ELF loader. But the ELF loader needed special changes to support VDSOs. And so did gdb. And this approach doesn't help statically linked programs much. And glibc functions needed to be changed anyhow to be aware of the VDSO symbols. So as far as I can tell, all of this complexity really didn't get anything for free. It just wound up being complex.

All just my opinion, of course.

https://github.com/golang/go/issues/8197#issuecomment-660959...


> Not quite; vdso is a general syscall-wrapper mechanism.

It's not. On 32-bit x86, it sort of is, but that's just because the 32-bit x86 fast syscall mechanism isn't really compatible with inline syscalls. Linux (and presumably most other kernels) provides a wrapper function that means "do a syscall". It's only accelerated insofar as it uses a faster hardware mechanism. It has nothing to do with fast timing.

On x86_64, there is no such mechanism.

> It's true that dynamically linked programs can use the ELF loader. But the ELF loader needed special changes to support VDSOs. And so did gdb. And this approach doesn't help statically linked programs much.

That's because the glibc ELF loader is, ahem, baroque and overcomplicated. And there's no reason whatsoever that vDSO usage needs to be integrated with the dynamic linker at all.

I wrote a CC0-licensed standalone vDSO parser here:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

It's 269 lines of code, including lots of comments, and it works in static binaries just fine. Go's runtime (which is static!) uses a vDSO loader based on it. I agree that a static table would be slightly simpler, but the tooling for debugging the vDSO is a heck of a lot simpler with the ELF approach.
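For anyone curious where a parser like that starts: the kernel hands every process the vDSO's load address in the auxiliary vector, so no dynamic-linker machinery is needed just to find it. A minimal sketch (assumes a 64-bit Linux process and glibc's getauxval; the symbol-table walk the linked parser does is omitted):

    /* Locate the vDSO image via the auxiliary vector (64-bit sketch). */
    #include <stdio.h>
    #include <elf.h>
    #include <sys/auxv.h>

    int main(void)
    {
        unsigned long base = getauxval(AT_SYSINFO_EHDR);
        if (!base) {
            fprintf(stderr, "no vDSO mapped\n");
            return 1;
        }

        /* The mapping starts with an ordinary ELF header. */
        const Elf64_Ehdr *ehdr = (const Elf64_Ehdr *)base;
        printf("vDSO at %#lx: type %u, %u section headers\n",
               base, ehdr->e_type, ehdr->e_shnum);
        return 0;
    }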


This all seems predicated on the fact that Solaris doesn't support direct system calls and the fact that they ship their kernel and libc as one unified whole (like BSDs). Solaris is free to update the layout of their shared data structures whenever they want[1].

Because Linux kernel interfaces are distinct and separate from libc, and given Linus' policy on backwards compatibility, Linux had two choices for an _interface_: 1) export a data structure to userland that could never change, or 2) export a code linking mechanism to userland that could never change. In that light the latter choice seems far more reasonable.

[1] The shared data structures for this particular feature. There are other kernel data structures that leak through the libc interface and for which Solaris is bound to maintain compatibility.


The fallback isn't there because there's a public system call interface: the fallback is there because some of the kernel-side implementations of gettimeofday() (in particular, the Xen one) currently require the process to do a proper syscall.

This is separate from the fact that the gettimeofday() system call still exists too, which is a backwards-compatibility issue. The overwhelming majority of Linux applications do their system calls through libc too, so this doesn't affect them.


Author here, greetings. Anyone who finds this interesting may also enjoy our writeup describing every Linux system call method in detail [1].

[1]: https://blog.packagecloud.io/eng/2016/04/05/the-definitive-g...


Nitpick - `77 percent faster` is not the inverse of `77 percent slower`. The line that says `The results of this microbenchmark show that the vDSO method is about 77% faster` should read `446% faster`.


Should that not be 346% faster? If A takes 1 second and B takes two seconds, then B is 100% faster than A. So the calculation would be (B/A - 1) * 100. Applying this here gives around 346%.

EDIT: B would, of course, take 100% longer than A, rather than be 100% faster.
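A tiny worked example with made-up timings (picked only so the figures line up with this subthread; they're not the numbers from the post's benchmark):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical timings: syscall path 100 ns, vDSO path 22.4 ns. */
        double syscall_ns = 100.0, vdso_ns = 22.4;

        /* "Takes X% less time": presumably what the 77% figure was measuring. */
        double pct_less_time = (1.0 - vdso_ns / syscall_ns) * 100.0;

        /* "X% faster" in the (B/A - 1) sense: how much longer the slow path takes. */
        double pct_faster = (syscall_ns / vdso_ns - 1.0) * 100.0;

        printf("%.0f%% less time, i.e. roughly %.0f%% faster\n",
               pct_less_time, pct_faster);   /* prints 78% less time, 346% faster */
        return 0;
    }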


How can something that takes twice as long be faster?


You're right, of course: I hadn't had my morning coffee yet. It should have been 'takes 100% longer' in the 1 second/2 seconds example. The point I was trying to make is that you have to factor in the initial 100%, which doesn't contribute to the final value.


Yes. Foot meet mouth. :)


I will def check that out. Anyone who finds that interesting may also enjoy "The Linux Programming Interface" :D


Nitpick: slower _than_ what? It's implied, but "slower" (or "greater", or anything-er) is in relation to another thing.


This is rather out of date. Everything works quite similarly, but the kernel code is very different these days.

