Five new undisclosed Xen vulnerabilities (xen.org)
133 points by scottjg on Feb 27, 2015 | 46 comments



AWS have posted an update about related upcoming EC2 maintenance: https://aws.amazon.com/premiumsupport/maintenance-2015-03/

"We’ve received a Xen Security Advisory that requires us to update a portion of our Amazon EC2 fleet. Fewer than 10% of EC2 customer instances will need to be rebooted. We’ve started notifying affected customers when their reboots will take place. These updates must be completed by March 10, 2015 before the underlying issues we are addressing are made public. Following security best practices, the details behind these issues will be withheld until they are made public on March 10."


Just received a message from Rackspace Cloud regarding these; it seems like they will have to reboot all instances.

See https://community.rackspace.com/general/f/53/t/4978


Yet Linode is still silent...


I got an email from Linode yesterday telling me they need to do the same. Not sure if they've had any public communication yet.


Realize that there's Xen HVM and Xen PV. There have been significantly more security issues in HVM than there have been in PV.


Do we know why HVM/hw-virt has had more security issues than PV/sw-virt?


Yes, because it uses QEMU. That means more code, and ultimately more bugs, which means more possible exploits.


Linode rebooted my server in Tokyo ~16 hours ago.


See ya later uptime...

04:49:58 up 659 days


> See ya later uptime... 04:49:58 up 659 days

Your server has been vulnerable to a number of Xen security issues over that uptime: http://xenbits.xen.org/xsa/

Including this one from Oct 1, 2014 that allows guests to read up to 3KB of memory from the hypervisor or other guests:

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7188

http://threatpost.com/serious-hypervisor-bug-fix-causes-unex...
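For a rough feel of the class of bug, here is a made-up sketch (not the actual Xen code; the names, sizes, and ranges are invented for illustration): an emulated-register read handler whose bounds check accepts more indices than the backing block actually holds, so a guest-chosen index reads adjacent hypervisor memory.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Made-up layout, for illustration only: the first 1 KiB of hv_memory
     * stands in for an emulated register block; the 3 KiB after it stands
     * in for adjacent hypervisor data a guest must never be able to read. */
    static uint8_t hv_memory[1024 + 3072];

    #define REG_BLOCK  hv_memory
    #define REG_COUNT  128u                  /* 128 x 8-byte registers = 1 KiB */

    /* Buggy emulation handler: the bounds check admits indices up to
     * 4*REG_COUNT, but only 0..REG_COUNT-1 land inside the register block,
     * so a guest-chosen index can read up to 3 KiB past it. */
    static int buggy_reg_read(uint32_t index, uint64_t *out)
    {
        if (index >= 4 * REG_COUNT)          /* too wide: should be REG_COUNT */
            return -1;
        memcpy(out, REG_BLOCK + index * 8, sizeof *out);
        return 0;
    }

    int main(void)
    {
        uint64_t val;

        memcpy(hv_memory + 1024, "secret!!", 8); /* pretend hypervisor data */

        buggy_reg_read(5, &val);                 /* legitimate register read */
        buggy_reg_read(REG_COUNT, &val);         /* passes the check, but... */
        printf("leaked past the register block: %.8s\n", (char *)&val);
        return 0;
    }

The real bug and fix are more involved, but the point is the same: an over-wide range check on guest-controllable input becomes an out-of-bounds read of host memory.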


That vulnerability only applies to HVM guests. No doubt there are other reasons to have rebooted since 2013, but if one of Rackspace's servers only has paravirtualized guests (do they use HVM at all? I don't know), they can get by without patching it.


Did you see how many vulnerabilities 659 days covers? I mean, if that one doesn't apply, just go back a bit. How about this one from June 2014:

> memory pages that were in use by the hypervisor and are eligible to be allocated to guests weren't being properly cleaned. Such exposure of information would happen through memory pages freshly allocated to or by the guest. ... it is possible for an attacker to obtain modest amounts of in-flight and in-use data, which might contain passwords or cryptographic keys.

http://xenbits.xen.org/xsa/advisory-100.html
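Conceptually it's the failure mode in this toy page pool (entirely made-up code, nothing here is Xen's actual allocator): if a freed page goes back on the free list without being scrubbed, the next guest handed that page can read whatever its previous owner left behind.

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define NPAGES    4

    /* A made-up host page pool shared by all "guests" (illustration only). */
    static char pool[NPAGES * PAGE_SIZE];
    static int  in_use[NPAGES];

    static char *alloc_page(void)
    {
        for (int i = 0; i < NPAGES; i++)
            if (!in_use[i]) { in_use[i] = 1; return pool + i * PAGE_SIZE; }
        return NULL;
    }

    /* Return a page to the pool. The scrub (memset) is the conceptual fix;
     * skipping it is the XSA-100 class of bug. */
    static void free_page(char *page, int scrub)
    {
        if (scrub)
            memset(page, 0, PAGE_SIZE);
        in_use[(page - pool) / PAGE_SIZE] = 0;
    }

    int main(void)
    {
        /* The previous owner (hypervisor or another guest) leaves data
         * behind, and the page is freed without scrubbing... */
        char *a = alloc_page();
        strcpy(a, "previous owner's password");
        free_page(a, /* scrub = */ 0);

        /* ...so the next guest handed a "fresh" page can read it. */
        char *b = alloc_page();
        printf("new guest sees: \"%s\"\n", b);

        /* With scrubbing, the leak goes away. */
        free_page(b, /* scrub = */ 1);
        printf("after scrubbed free: \"%s\"\n", alloc_page());
        return 0;
    }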


Rackspace most likely uses HVM guests. I think they had FreeBSD before there was Xen PV support.


Rackspace has both HVM and PV for most default Linux images.


Yeah. I had one server with over 900 days of uptime prior to October. It probably should have been rebooted for other reasons, but that's the one that forced it.


Is that a guest or a host? If it's a guest, there shouldn't be a need for a reboot, only suspend/resume. (Note that a reboot can be a good idea from time to time anyway, just to make sure the current configuration, e.g. after a kernel upgrade, actually boots.)


Ouch! Might I ask what datacentre you're using?


I had something similar at Linode London.


prgmr did reboots yesterday


Xen's hypervisor would seem to be a great place to implement live patching, like Ksplice/kGraft/kpatch do for the Linux kernel. Presumably that stuff still works on KVM host machines with live guests.
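For anyone curious what the live-patching mechanism boils down to, here is a minimal x86-64 Linux-only sketch (assuming the old function is at least 5 bytes long and nothing is executing it while we patch; real patchers like kpatch handle consistency, rollback, and much more): make the code page writable and rewrite the old function's entry with a jump to the replacement.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    __attribute__((noinline)) static int old_version(void) { return 1; }
    __attribute__((noinline)) static int new_version(void) { return 2; }

    /* Overwrite the first 5 bytes of `from` with `jmp rel32` to `to`. */
    static int redirect(void *from, void *to)
    {
        long psz = sysconf(_SC_PAGESIZE);
        uintptr_t begin = (uintptr_t)from & ~(uintptr_t)(psz - 1);
        uintptr_t end   = ((uintptr_t)from + 5 + psz - 1) & ~(uintptr_t)(psz - 1);

        /* A real live patcher is far more careful: it quiesces other CPUs
         * and checks that no thread is currently inside the old function. */
        if (mprotect((void *)begin, end - begin,
                     PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
            return -1;

        uint8_t *p = from;
        int32_t rel = (int32_t)((uint8_t *)to - (p + 5));
        p[0] = 0xE9;                      /* jmp rel32 */
        memcpy(p + 1, &rel, sizeof rel);
        return 0;
    }

    int main(void)
    {
        int (*volatile fn)(void) = old_version;   /* defeat inlining/folding */

        printf("before patch: %d\n", fn());       /* prints 1 */
        if (redirect((void *)old_version, (void *)new_version) != 0) {
            perror("mprotect");
            return 1;
        }
        printf("after patch:  %d\n", fn());       /* prints 2 */
        return 0;
    }

Doing the same inside a running hypervisor or kernel adds the hard parts: finding a point where no CPU is executing the old code, and making sure the old and new versions agree on data layout.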


There was this talk at the last Linux Plumbers Conference, FYI: http://www.linuxplumbersconf.org/2014/ocw/sessions/2421


Amazon's security advisory seems to indicate that they have this capability for 90% of EC2 instances (leaving 10% that must be rebooted). https://aws.amazon.com/premiumsupport/maintenance-2015-03/


It could be either that (live patching) or live migration: do a live migration of all instances on this host to an already patched host, reboot the host, repeat for the next host.


Not really - to me, it implies that 90% of EC2 instances are not running on a vulnerable version of Xen...


FTA:

> While all instance types need to be updated, we have developed the capability to live-update instances running on newer hardware. The vast majority of the EC2 fleet will be live-updated, but a portion of instances (less than 10% of customer EC2 instances) running on older hardware will require a reboot to complete the update process.


Why do the major Xen providers get advance access to the patches while my machines have to sit vulnerable for over a week?


> One working week between notification arriving at security@xenproject and the issue of our own advisory to our predisclosure list. We will use this time to gather information and prepare our advisory, including required patches.

> Two working weeks between issue of our advisory to our predisclosure list and publication.

> When a discoverer reports a problem to us and requests longer delays than we would consider ideal, we will honour such a request if reasonable. If a discoverer wants an accelerated disclosure compared to what we would prefer, we naturally do not have the power to insist that a discoverer waits for us to be ready and will honour the date specified by the discoverer.

> Naturally, if a vulnerability is being exploited in the wild we will make immediately public release of the advisory and patch(es) and expect others to do likewise.

This is an extraordinarily aggressive (in a good way) and transparent process. Big commercial vendors routinely sit on vulnerabilities for months.


This is explained in the Xen security policy, from the 'Embargo and disclosure schedule' heading.

http://www.xenproject.org/security-policy.html


Because responsible adults have demonstrated their ability to follow a coordinated disclosure policy which lets them improve their own security without harming anyone else's.


From what I understand, the bar to get on the pre-disclosure list is not high. If you are a legitimate company serving the public you will likely qualify.


Presumably because making the patch public also makes the vulnerability public and they want to give the big players time to protect their customers.


At 31C3, somebody showed or told the audience about an issue in the Xen hypervisor that allowed breaking into the host from a guest.


I was told it was a series of bugs that made it possible.


This is why seL4 is awesome. http://ssrg.nicta.com.au/projects/seL4/

It's the first kernel with certain security guarantees formally proven, and it's now open source. It can be used as a hypervisor, which seems like its most obvious first use case, at least until there is enough middleware to build full systems directly on it.


It is indeed awesome, but from a practical perspective it doesn't even compare with Xen.

Hardware support is up to you. I think you can boot it on x86, but that's just the microkernel -- you have to add all the hardware support. I don't think seL4 is meant to run on servers either.


Quite a few use-after-frees there.


AWS uses Xen too, right?


There are a lot of vulnerabilities which don't affect them though -- either because the vulnerabilities are in specific features which EC2 doesn't use, or because Amazon is very conservative about which versions of Xen it uses and most vulnerabilities are in relatively new code.

I certainly hope Amazon will respond to these publicly, but I won't be very surprised if the response is "doesn't affect us".


Yes, that's part of the reason I moved away from AWS years ago. Now it doesn't even matter, since I am deploying to Docker anyway.


Good thing the host you run Docker on never needs to be patched or rebooted I guess?


Yes, Docker is immune to vulnerabilities because containers.


If the kernel you are running on is vulnerable, it can be attacked and the attacker can circumvent any container isolation.

If the hypervisor (Xen!) running underneath your container-Linux is vulnerable, the attacker can get access to your virtualized OS and circumvent any container isolation.


Your sarcasm detector may need recalibration.


What a bright, shiny future we live in where "Docker!" is the answer to all problems.



#2 will blow your mind.



