Volkswagen Passat Forum

1 - 7 of 7 Posts

Registered · 3,946 Posts · Discussion Starter #1
A few questions. Don't laugh.

1. As I understand it, a RAID 0 array will speed up communication between the processor and the hard drives, which makes the machine faster. Correct?

2. I understand I need SATA hard drives, but what type of cables do I need? Do I need both regular IDE cables and SATA cables, or just SATA cables?
 

Registered · 835 Posts
1. Yes, kind of. RAID 0, or a striped array, uses two drives to store data. You get a performance increase because two physical drives are reading and writing data at once.
The communication rate between the CPU and the hard drive doesn't increase, but the speed at which you get data onto that link does.

2. SATA hard drives are faster (not hugely), so it's best to use them. Assuming your motherboard has SATA RAID, then you only need SATA cables.
 

Registered · 1,948 Posts
2. What you're looking for here is really a technology that lets you read and write to multiple disks at the same time. The EIDE and IDE buses (standard ATA) never let you do this, as they only allow access to one device at a time on the IDE chain. So to do a RAID 0 array of disks, you'd need an individual IDE bus for each device (yech!)

RAID 0 came about when SCSI devices were more commonly used in high-end servers (and still are today), as the SCSI bus allows you to read and write to multiple devices at the same time. SATA and SCSI both allow this parallel data access without requiring an individual bus for each device.

Now, why does RAID 0 speed up data access? It's simply a matter of numbers. Let's say, for argument's sake, the system reads 64k at a time from the disk in 1 millisecond (it's a nice round number). If you had to read 1024k from the disk, it would take 16 milliseconds (1 millisecond for every 64k you read off the one disk).

With RAID 0, you can spread that data in a 64k stripe written across multiple devices (let's say 4 disks in this case, but it could just as well be 2 or 10). When the data is written, the first 64k goes to the first disk, the second 64k to the second, the 3rd to the third, the 4th to the fourth, and then the 5th wraps back around to the first disk, and so on. Now when we read our 1024k of data, we're reading it from 4 disks, so the access time is cut four-fold, since we're reading four 64k chunks off 4 disks AT THE SAME TIME.

(Insert diagram that I can't find from my last class)
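In place of a diagram, the striping walk-through above can be sketched in a few lines of Python. The numbers are the hypothetical ones from the example (4 disks, 64k stripes, 1 ms per 64k read), not measurements from real hardware:

```python
# RAID 0 striping sketch: chunk i of the data lands on disk i % N,
# and chunks on different disks can be read in parallel.
STRIPE_KB = 64          # stripe size from the example above
N_DISKS = 4             # hypothetical 4-disk array
READ_MS_PER_CHUNK = 1   # assumed: 1 ms to read one 64k chunk

def stripe_layout(total_kb):
    """Which disk holds each 64k chunk (round-robin)."""
    chunks = total_kb // STRIPE_KB
    return [chunk % N_DISKS for chunk in range(chunks)]

def read_time_ms(total_kb, disks):
    """Parallel reads: total time is set by the busiest disk,
    i.e. ceil(chunks / disks) milliseconds."""
    chunks = total_kb // STRIPE_KB
    return -(-chunks // disks) * READ_MS_PER_CHUNK

print(stripe_layout(512))     # -> [0, 1, 2, 3, 0, 1, 2, 3]
print(read_time_ms(1024, 1))  # single disk: 16 ms
print(read_time_ms(1024, 4))  # 4-disk stripe: 4 ms
```

Running it reproduces the 16 ms vs 4 ms comparison from the example.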

So what kind of performance do you see in the real world? It depends on whether you're doing the RAID in hardware or at the OS level. Usually you see at least a 50% increase in speed of data access, because you still have filesystem buffers and other middle-management type stuff. The more disks you have in the array, though, the faster access gets. One caveat, though: more disks = more chance of failure, and if you lose one disk, you've lost all the data on the whole array. Backups = good. RAID 0+1 can be your friend, but it's expensive (requires more disks).
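The "more disks = more chance of failure" point is easy to put numbers on: RAID 0 dies if ANY one disk dies. Assuming (purely for illustration) each disk independently has a 3% chance of failing in a given year:

```python
# RAID 0 reliability back-of-envelope: the array survives only if
# EVERY disk survives, so P(array fails) = 1 - (1 - p)^n.
def array_failure_chance(n_disks, per_disk=0.03):
    """Probability at least one of n_disks fails, given each has
    an independent per-disk failure probability (assumed 3%/year)."""
    return 1 - (1 - per_disk) ** n_disks

for n in (1, 2, 4, 10):
    print(n, "disks:", round(array_failure_chance(n), 3))
```

With these made-up numbers, a 10-disk stripe is roughly nine times as likely to lose your data in a year as a single disk, which is why the backups (or RAID 0+1) advice matters.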

Hope this answers any questions, or confuses you more. There's lots of info on the net about how it all works.
 

Registered · 4,095 Posts
Boris said:
A few questions. Don't laugh.

1. As I understand it, a RAID 0 array will speed up communication between the processor and the hard drives, which makes the machine faster. Correct?
no! ONLY if it's a real hardware RAID controller (one with a CPU and RAM on the PCI card). most 'RAID' is nothing more than software (which runs on the HOST, i.e. the CPU) plus some dumb IDE controllers with just enough smarts to boot from the RAID pack. it will NOT make the CPU more efficient, since there's no other CPU to offload the I/O to; the host CPU still has to do all the work. for a real RAID controller, see 3ware.com (I have a few of these and can say good things about them).

2. I understand I need SATA hard drives, but what type of cables do I need? Do I need both regular IDE cables and SATA cables, or just SATA cables?
you don't NEED SATA; that's only for the el-cheapo motherboard 'RAID'.

if you can afford $100 for a better controller, here's a link:

http://www.hypermicro.com/product.asp?pf_id=CT3W100&dept_id=13-004

that one has a CPU on it. you can get a serial or parallel disk card; either is fine, it's just a question of cables and what drives you might have.
 

Registered · 4,095 Posts
Kosmas said:
2. SATA hard drives are faster (not hugely), so it's best to use them. Assuming your motherboard has SATA RAID, then you only need SATA cables.
in the real world, drives' internal transfer rates have NOT reached channel capacity (i.e. the speed of the bus). so don't go to SATA because you think the drives are faster. they are NOT!

they may be, someday. but even parallel IDE has so much headroom in its channel that no drive on the planet can saturate an IDE bus yet.
 

Registered · 4,095 Posts
Gurft said:
2. What you're looking for here is really a technology that lets you read and write to multiple disks at the same time. The EIDE and IDE buses (standard ATA) never let you do this, as they only allow access to one device at a time on the IDE chain. So to do a RAID 0 array of disks, you'd need an individual IDE bus for each device (yech!)
but that's what controllers do!

each controller on a RAID card IS its own IDE 'bus'. even more so with SATA, since that topology is ONLY point-to-point and there is no sharing between controller and device.

in theory, the CPU can issue a write (post it) and BOTH drives can run at the same time, the first getting chunks 1..5 and the second getting 6..10 (if you get what I'm saying). if each drive is on its own cable (parallel OR SATA can do this), then it's up to the CONTROLLER whether true concurrency happens. with cheap controllers that NEED the host CPU, you won't get true parallelism. with the 3ware (and similar) stuff, you CAN get parallel I/O and NOT need the host CPU for it to happen.

Usually you see at least a 50% increase in speed of data access, because you still have filesystem buffers and other middle-management type stuff. The more disks you have in the array, though, the faster access gets.
I'm not sure about that 50% increase! I seriously doubt it. it sounds good on paper, but unless you really have a detached CPU and controller, the host CPU is STILL the choke point.

and beyond a point, adding disks buys you nothing but noise and heat ;)
 