Storm courts I/O lovers with 96GB, 32-core cloud server (gigaom.com)
42 points by bbr on July 6, 2011 | 34 comments



At over $986/m, it's a damn sight more expensive than a Hetzner box.

Granted, Hetzner's €89 servers only have 24GB of RAM, not 96, but you can basically have 7 of them and some change left for the same price. This adds up to 168GB total RAM, with a total of 28 cores. ( http://www.hetzner.de/en/hosting/produktmatrix/rootserver-pr... )

Which one is best will no doubt depend on what you want to do with it, and certainly there are some applications where only a single 96GB machine will do, but those are rare.
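
Rough arithmetic behind that comparison (a sketch; the ~1.43 EUR/USD rate is my assumption for mid-2011, and the per-box specs are the ones quoted above):

    # Compare one Storm 96GB/32-core box against a stack of Hetzner EQ boxes.
    storm_monthly_usd = 986.0      # "over $986/m"
    hetzner_monthly_eur = 89.0     # one EQ box: 24GB RAM, 4 cores
    eur_usd = 1.43                 # assumed mid-2011 exchange rate
    n_boxes = 7

    hetzner_total_usd = n_boxes * hetzner_monthly_eur * eur_usd
    print("%d Hetzner boxes: $%.0f/mo, %d GB RAM, %d cores"
          % (n_boxes, hetzner_total_usd, n_boxes * 24, n_boxes * 4))
    print("change left vs the Storm box: $%.0f" % (storm_monthly_usd - hetzner_total_usd))
    # -> roughly $891/mo for 168 GB and 28 cores, with ~$95 left over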


True, but the I/O characteristics of one 96GB box are drastically different from running 7 in parallel. It's kind of a silly comparison.


Thanks for mentioning Hetzner. The EQ4 is exactly what I've been looking for.


I would compare against the specials on WHT. Here's one from WebNX:

http://www.webhostingtalk.com/showthread.php?t=1063117

48 Core Monster - likely the fastest server offered on WHT:

4x AMD 6164 12-core CPUs (48 cores total at 1.7GHz each)
256GB DDR3 RAM (wow! perfect for RAM drives, DB cache, file cache, or caching heavily accessed content)
16x 73GB 15,000 RPM SAS drives w/ hardware RAID 10 (almost 600GB of fast disk I/O)
1x 2TB SATA for backups
$999 with 10TB bandwidth


The price and specs are great compared to EC2, but there's a problem for some use cases: you're charged for a month as soon as you create a server. The credit remains available as long as you have an account, and any remaining credit is refunded if you close your account, but you can't just spin up a big server for 3 hours and pay $5. Instead, you have to pay $1000, then close your account if you want your $995 back.


To follow up: I talked to sales today and they confirmed that there's a hackish workaround for this use case:

Spin up a small server. You'll be charged about $35 (the exact amount will be shown). Use it if you like, or shut it down right away. You can then start servers of any size using the ~$35 credit you have. It is important that you not be running a large server when the billing cycle rolls over, or you will be charged for a month of that, and this may result in an effective $35/month usage minimum (though credit does roll over). It's a small hassle, but it may be worth it for access to that 32-core box.
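
Back-of-the-envelope on what that workaround costs for a short burst (a sketch; I'm assuming the big box is prorated at roughly its monthly price over a 720-hour month, and the ~$35 / ~$986 figures are the approximate ones mentioned in this thread):

    # Rough cost of the "small server first" credit workaround.
    small_server_month = 35.0   # approximate charge for the small server
    big_server_month = 986.0    # "over $986/m" for the 96GB/32-core box
    hours_needed = 3

    assumed_hourly = big_server_month / 720.0      # ~$1.37/hr (assumption)
    burst_cost = hours_needed * assumed_hourly     # ~$4.11 drawn from the credit
    credit_left = small_server_month - burst_cost  # ~$30.89 still on the account

    print("3-hour burst on the big box: $%.2f" % burst_cost)
    print("credit remaining from the ~$35 charge: $%.2f" % credit_left)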

If you, like me, want EC2-style billing, they have a suggestion to that effect on their uservoice: https://storm.uservoice.com/forums/23166-general/suggestions...


We've got some servers similar to this in one of our compute clusters. Four 12-core AMD Magny-Cours processors, 128 GB RAM... they are a thing of beauty, let me tell you. Actually very good for certain bioinformatics codes, especially wired up to each other with Infiniband. :-D


Can you point to the product page and give an idea of the price?


It's not exactly the same system, but... ICC (Supermicro) 1042G-T with 48 cores (4x 12-core Opteron 6174, 2.2 GHz), 128 GB RAM, 2x 3TB SATA disk and 40 Gb/s QDR Infiniband, $10.6k. Subtract $300 if you'd rather have 10GbE instead of the IB. All in 1U. http://www.icc-usa.com/amd-4p-1u-6100.asp


Very helpful too. Thanks!


Hmmmm. I couldn't find this server on Storm's cloudharmony.com benchmarking page (it only goes up to the 48GB model), but it's interesting to compare nonetheless.

An OrionVM 16GB RAM server gets an aggregate disk I/O score of 156.79 vs. their 170.5, and an IOPS score of 159.79 vs. their 159.99.

And that's with redundant, network-backed hard disks in a virtualised environment, with all of those benefits and overheads.

What's also interesting is that this is a PXE cloud with local 8-disk SAS RAID 10 (not SAN storage, etc.).

Overall, very interesting offering.

(NB: I work with OrionVM, a company that makes IOtastic servers.)


I run mogade.com on two 1GB bare-metal web servers and two 1.7GB mongo/redis replicas. It's been rock solid for about 8 months. I initially picked them because we ran UnixBench on it, Linode, and EC2, and they were significantly better (2x+ if I recall) and cheaper. They're also quick to answer support. There's been no downtime (that wasn't caused by me!).

But there are some downsides. First, their web management portal sucks; it's like whoever built it discovered Ajax and jQuery for the first time. But you hardly spend any time there, so it's no big deal.

Load balancing is expensive (at my cheap scale), and they don't have an API to do it yourself (remap an IP type of thing). Also, they aren't innovating. When they first started they were already quite behind AWS, and when you compare what Amazon has done over the last couple of years (email, DNS, Beanstalk...), they've only fallen farther behind. I already use S3 and I'm looking at using SQS; having a split infrastructure sucks.

Finally, they advertise way too much. Surely I'm not the only one who has seen it. It's annoying, especially when you consider how stagnant they are. It feels like a very short-sighted use of money.


> Finally, they advertise way too much.

Don't worry about that; they're just retargeting. I went to their website yesterday for the first time, and now every website I visit has their ads (which I'd never seen before). If someone is interested enough to hit your site, you want to show them lots of ads. That's usually very cost-effective.


Regarding the advertising, I see their ads all the time. I asked them about it: they use ReTargeter to show ads to previous customers and visitors. It's part of how they get the word out to existing customers about new products. Must be a decent ROI if they keep doing it.

Their cloud stack is waaaay younger than AWS's, but it seems like they are adding more things bit by bit. Tough to catch up to Amazon.


I wonder what the National Republican Senatorial Committee was doing with one of those?


It seems like their bandwidth costs are a bit high, though, which would offset the savings in many cases. Anyone know if they'll play ball and include free bandwidth? This would be a great machine to host Destructoid.com if so.


Damn that's a big server... we've got 6 boxes that don't even add up to that.


Not by today's standards; you can buy a PowerEdge R815 with thirty-two 2GHz cores and 128GB of RAM for $8,000.


Are you sure that price is right? Dell is showing that box at quite a bit more on their site right now.


Can't buy the Dell by the hour though.


Hmmm.... the pricing looks tempting.

Anyone know of any history of their uptime?


Check out http://cloudharmony.com/status. We've been monitoring Storm for over 18 months and have never once experienced an outage. They're one of only a few providers with no downtime (we monitor over 100 different cloud services).


I've had very good reliability with Storm. Plus, they have excellent 24/7 phone support.


Damn. I wish I could somehow actually upload video fast enough to do my 1080p video processing on those.


I was interested up until I read this little number in the newsletter I got from them:

>In addition to 96GB of RAM, each of these servers contain 32 cores (at 2.0GHZ each, 64.0GHZ total), writes at over 3 Gbit/s and reads at over 4 Gbits/s.

You would think a company like Storm would know you don't just add up the cores to get 64GHz.

Also, for what it's worth, we just ordered 3 physical servers from Dell: 96GB of RAM each, dual Xeons for a total of 32 cores, and dual 10Gbps fiber channel links to hook up to our SAN. So yeah, the price seems pretty high...


I've run raytracing clusters with 4000 cores. It basically scales linearly. So for this task, or any massively parallel compute-bound task, you pretty much can just add up the cores.

Also, I would rent them by the hour, because if a service has 500 machines available right now, I can get the job done in an hour, instead of 500 hours. Or 10 hours instead of 7 months.

Not something I can do with purchased hardware.
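
A toy sketch of that trade-off (the hourly price is a hypothetical placeholder; the point is that for an embarrassingly parallel job the total machine-hours, and therefore the total cost, stay roughly constant while wall time divides by the number of machines):

    # Fixed amount of parallel work: machines trade money for wall-clock time.
    total_machine_hours = 500      # e.g. 500 hours on a single box
    price_per_machine_hour = 1.37  # hypothetical hourly rate

    for machines in (1, 10, 500):
        wall_hours = total_machine_hours / machines
        cost = total_machine_hours * price_per_machine_hour   # unchanged
        print("%3d machines: %5.1f hours of wall time, ~$%.0f total"
              % (machines, wall_hours, cost))
    # Buying hardware fixes `machines` up front; hourly rental lets you pick it per job.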


For pay-by-the-hour services where you can scale up a ton of instances, destroy them when you're done, and only pay for the time used, yeah, it's more cost-effective to do it that way. However, this is about a monthly payment of nearly $1000 for a server you're supposed to keep for a long time (there's no ordering it for 3 hours, then killing it and only paying for 3 hours of use). If we were comparing EC2 instances to purchased hardware, then that argument would be better suited.


I don't think that's right. Their website says this:

    For your convenience, all Storm on Demand services are 
    billed hourly. This flexibility allows you to create and 
    destroy Storm Servers as you need them, and only pay for 
    the time you need.
https://www.stormondemand.com/servers/whystorm.html

It would appear you can bring up a server for 3 hours, and just pay for that.


Ah, I knew they offered pay-as-you-go (I tried them out before), but I assumed this was a sort of "dedicated only" server type that required you to pay monthly, since I didn't see the hourly rate anywhere, just "$xxx / month".


Can you point to the product page at Dell, and give an idea of the price?


IIRC our R610s (96GB, 12 cores @ 3GHz) were around $7K apiece.

256GB/32-core/1.8GHz R910s were around $22K last I looked. This stuff really is stupid cheap in comparison to renting.

You'd need to talk to a salesperson to get that kind of pricing, but the general rule is to discount MSRP by at least 30% when you're working with a major vendor.


Very helpful. Thanks!


I just ordered a 32GB, 2x4-core, 4x600GB VelociRaptor machine for $5000. Add the fibre channel and extra RAM and I bet it cost $7-9K per box.


Unfortunately I can't, since I wasn't the one who wrote up the PO for it. I just asked, out of curiosity, how much the equipment I had to install cost.



