<html><head><style type="text/css"><!-- DIV {margin:0px;} --></style></head><body><div style="font-family:times new roman, new york, times, serif;font-size:12pt">OK, at the risk of being a bit of a homeboy....<br>I've been looking at storage, and I have my own biased interest in which brand names to use...<br><br>The good news: ATTO probably makes the fastest and most reliable RAID controllers/host adapters available. Our primary market is video, where customers such as Pixar, DreamWorks, and television studios use brands like Avid to dump their raw video feeds to; ATTO makes the controllers behind the Avid boxes.<br>We're trying to break into the IT market, and our newer adapters do have Linux drivers (the bummer is, not open-source Linux drivers). Right now, if you purchase a server from HP and ask for a SCSI interface, they're putting in ATTO cards. So we are a player.<br>If you are looking for 8Gb/s Fibre Channel, you need to get your host adapters from ATTO (nobody else has them in production).<br><br>The bad news: we're not the cheapest ones out there.<br><br>HOWEVER, if 2Gb/s is good enough, we are closing out our 2Gb Fibre Channel cards and you can get them at a very good price ($1,000 cards going for about $200).<br><br>There are drive arrays out there that will take FC input and provide SAS/SATA output. SAS is pretty cost-effective, can be had for about $0.40/GB, and has reliability similar to SCSI. Obviously, SATA is cheaper, but may not be as reliable.<br><br>If you need the RAID controller, ATTO has a rack-mountable FC-to-SAS/SATA RAID controller. The box is pricey, but you can plug cheap SATA JBODs into it and get very good performance for the price you pay.<br><br>If you need more info, or want to get hounded by a salesman, let me know.<br><br><span style="font-family:comic sans ms;">Richard Hubbard </span><br>ATTO Technology Inc<div style="font-family:
times new roman,new york,times,serif; font-size: 12pt;"><br><br><div style="font-family: times new roman,new york,times,serif; font-size: 12pt;">----- Original Message ----<br>From: Brad Bartram &lt;brad.bartram@gmail.com&gt;<br>To: nflug@nflug.org<br>Sent: Monday, June 30, 2008 1:03:57 PM<br>Subject: Re: [nflug] Opinions on Linux and Massive Storage<br><br>Believe it or not, I'm actually not doing anything with a database for<br>this. This is going to be massive file storage - or storage for<br>massive files, if you prefer. This will be a mainly read-intensive<br>system, though there will be writes from time to time.<br><br>I'm looking at Fibre Channel connections with fast disks.<br>Initially I'm looking to have a system of 40-100TB to start,<br>with expansion of at least another 100TB in 4-6 months.<br><br>I've looked at some of the offerings from Sun, but both IBM and Dell<br>have SAN hardware that fits the bill. I'd like to stay with Linux,<br>since I'm most familiar with it, but if there are limitations in<br>dealing with big storage arrays, I have no problem moving to a<br>different platform.<br><br>On Mon, Jun 30, 2008 at 12:56 PM, Robert Meyer <<a ymailto="mailto:meyer_rm@yahoo.com" href="mailto:meyer_rm@yahoo.com">meyer_rm@yahoo.com</a>> wrote:<br>> Well, for that kind of storage, I'd recommend getting an EMC SAN setup. Are<br>> you planning on building something for static data that you'll mostly just<br>> be reading, or will there be lots of read and write activity? If it's the<br>> latter, you're going to want SCSI interfaces on the drives. EMC makes both<br>> SCSI and ATA SANs. You want a configuration that uses fibre connections to the<br>> SAN, if you can afford it.<br>><br>> If you're doing heavy database work with lots of random access, when you<br>> build the RAIDs I'd recommend going with RAID 10 with as many disks as<br>> you can. I found that with databasing, RAID 5 is really bad at<br>> large-scale writes. RAID 10 with lotsa disks will give you more speed (the more<br>> disks you can spread the load over, the better). Not having to compute<br>> parity is a major win. Go for more disks of smaller capacity, rather than<br>> fewer large-capacity disks, if speed is the major issue.<br>><br>> Also, try not to build multiple systems on the same spindle sets if you're<br>> doing databasing. I've watched a single spindle set show massive I/O wait<br>> when multiple systems were hitting it.<br>><br>> This can get really complex, really fast. Basically, I think I'd need lots<br>> more information on the intended use of the system in order to be able to<br>> help with it. If you have a set of design requirements, that would help a<br>> lot.<br>><br>>
Cheers!<br>><br>> Bob<br>><br>> --<br>> "When once you have tasted flight, you will forever walk the earth with your<br>> eyes turned skyward, for there you have been, and there you will always long<br>> to return."<br>> --Leonardo da Vinci<br>><br>> ----- Original Message ----<br>> From: Brad Bartram <<a ymailto="mailto:brad.bartram@gmail.com" href="mailto:brad.bartram@gmail.com">brad.bartram@gmail.com</a>><br>> To: <a ymailto="mailto:nflug@nflug.org" href="mailto:nflug@nflug.org">nflug@nflug.org</a><br>> Sent: Monday, June 30, 2008 12:33:50 PM<br>> Subject: [nflug] Opinions on Linux and Massive Storage<br>><br>> I know there are some people on this list who have experience with<br>> massive storage using Linux. By massive I mean the >20TB range.<br>><br>> I'd love to hear your thoughts on building out and optimizing a system<br>> that is fast, scalable, and reliable, whether you have opinions on direct<br>> attached storage or are running a storage area<br>> network.<br>><br>> It's kind of a broad topic, but I'm about to embark on a major<br>> build-out and want to avoid as many pitfalls as possible.<br>><br>> Thanks<br>><br>> Brad<br>> _______________________________________________<br>> nflug mailing list<br>> <a ymailto="mailto:nflug@nflug.org" href="mailto:nflug@nflug.org">nflug@nflug.org</a><br>> <a href="http://www.nflug.org/mailman/listinfo/nflug" target="_blank">http://www.nflug.org/mailman/listinfo/nflug</a><br></div></div></div><br>
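<br>To put rough numbers on the RAID 10 vs. RAID 5 tradeoff and the cost-per-gig figure discussed in this thread, here is a quick back-of-the-envelope sketch in Python. The disk counts and sizes are hypothetical examples, not quotes; the only figure taken from the thread is the ~$0.40/GB SAS price.<br>

```python
# Back-of-the-envelope numbers for the RAID and cost discussion above.
# The 24 x 1TB disk config is a hypothetical example; $0.40/GB is the
# SAS price mentioned in the thread.

def usable_tb(disks: int, disk_tb: float, level: str) -> float:
    """Usable capacity for two common RAID levels (ignores hot spares)."""
    if level == "raid10":
        return (disks // 2) * disk_tb   # mirrored pairs: half the raw capacity
    if level == "raid5":
        return (disks - 1) * disk_tb    # one disk's worth of space lost to parity
    raise ValueError(f"unknown RAID level: {level}")

def raw_cost_usd(target_tb: float, usd_per_gb: float) -> float:
    """Cost of raw capacity at a given $/GB (using 1 TB = 1000 GB)."""
    return target_tb * 1000 * usd_per_gb

print(usable_tb(24, 1, "raid10"))      # 24 x 1TB in RAID 10 -> 12 TB usable
print(usable_tb(24, 1, "raid5"))       # same disks in RAID 5 -> 23 TB usable
print(round(raw_cost_usd(100, 0.40)))  # ~40000: 100 TB of raw SAS at $0.40/GB
```

<br>RAID 10 gives up half the raw capacity to mirroring but avoids the parity computation, which is why the advice above is to buy many smaller disks when random-access speed matters more than usable capacity per dollar.<br>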
</body></html>