Talk:Hard disk drive/Archive 2
This is an archive of past discussions about Hard disk drive. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3 | Archive 4 | Archive 5
Virtual Byte vs Factual Byte
How come they measure hard drives in false byte computations? For example, they count 1000 KB as being 1 MB when in fact 1024 KB is 1 MB. Is this just a way to rip people off, charging more money for fewer bytes, or are they too lazy to use the factual bytes? I don't understand, thanks!
- Here we go again... See Megabyte and Mebibyte. Technically, 1 MB (megabyte) should be 1000 kilobytes. Then, 1 MiB (mebibyte) should be 1024 kibibytes. --Last Avenue 23:22, 5 February 2006 (UTC)
- The problem here is that this runs counter to the normal HDD naming scheme. Originally, someone proposed 1 MB being 1024 and 1 MiB being 1000, but that just added to the confusion. --Last Avenue 23:22, 5 February 2006 (UTC)
- Actually, originally someone suggested using the SI prefix "kilo" (10^3) to indicate 1024 bytes, since 1000 is "close enough" to 1024. At the time there were no IEC binary prefixes to properly describe powers-of-two byte capacities, so the convention just stuck and became more notable as larger prefixes were used to describe incorrect byte capacities. This of course led to the dual meanings of some of the SI prefixes in computerland and the opportunity for storage device manufacturers to capitalize on the confusion and make their drives seem more spacious to the layman (who probably doesn't care about all this mess anyway). The article already covers this information, though. -- uberpenguin 16:38, 6 February 2006 (UTC)
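As an illustrative aside (not part of the original exchange), the growing percentage error described above is easy to compute; a quick Python sketch:

    # The gap between decimal (SI) and binary (IEC) prefixes widens at
    # every step, which is why "close enough" for kilobytes became a
    # real marketing gap for gigabytes and terabytes.
    for n, (si, iec) in enumerate([("kB", "KiB"), ("MB", "MiB"),
                                   ("GB", "GiB"), ("TB", "TiB")], start=1):
        decimal, binary = 1000 ** n, 1024 ** n
        print(f"1 {iec} = {binary} bytes, "
              f"{(binary / decimal - 1) * 100:.1f}% more than 1 {si}")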
- Here is my point of view, as someone with a microelectronic engineering background. I don't buy that, historically, it was just because there was little discrepancy between base-2 and base-10 multiples in capacity ratings. I think that, also historically, the only data capacity where it makes sense to measure in base-2 multiples was (and still is) RAM, when you're talking about row/column-based binary addressing. 59.167.116.194 16:03, 2 September 2006 (UTC)
- So from an engineering point of view it does NOT make sense to measure a hard disk in base-2 multiples, because an engineer will want to know the true number of sectors*bytes. This is because the natural, intrinsically sequential nature of that storage medium is NOTHING LIKE addressing RAM - it literally is just a big stream of bits. Likewise with serial communications specifications - it makes NO LESS SENSE to count the number of bits coming out in base-10 than it does to do so in the base-2 multiples that software types seem to cling to so much... in fact, from an engineering point of view, base-2 multiples would be very annoying (there are many interesting calculations that can be done with base-10 multiple specs, but base-2 multiples distort the true number of "symbols" we're trying to specify and have to be converted back to base-10 multiples every time!) Anyway, my point is that although I can understand the exasperation of typical end-users, it's not a conspiracy motivated by evil ;-) 59.167.116.194 16:03, 2 September 2006 (UTC)
- Here's my point of view, as someone with a semiconductor device engineering background. I do believe the classical history because early operating systems tended to use multipliers of powers of two to incorrectly define SI prefixes. This was before hard disks, so memory capacities DID refer to RAM of some sort (core or the like). When hard disks came along (a good while after we had operating systems and RAM) it was simple to keep using the same multipliers. Nobody claimed it was a conspiracy, it was just an early convenience that has become an annoyance as storage capacities and the percentage error increase. Incidentally, how does using a different radix "distort the true number of symbols"? Radix shouldn't confuse anybody who can do simple arithmetic. -- mattb @ 2006-09-02 16:14Z
Those who do not know history are condemned to misinterpret it.
- The first hard disk drive, the IBM RAMAC (1956) had 5,000,000 six-bit characters - decidedly not binary.
- IBM soon adopted their Count Key Data (CKD) track format for their disk drives. In this format a sector (called record in IBM parlance) could be any size from one byte up to the maximum track length - again decidedly not binary. IBM specified a formatted media capacity based upon a sector of maximum track length, which varied from subsystem to subsystem but in all cases was not binary, see IBM Mainframe Disk Capacity.
- A popular IBM CKD disk drive record size was 800 bytes, representing ten 80-column cards - decidedly not binary.
- The unformatted capacity of IBM media was known within the art but not publicized. For example, the IBM 3336 disk pack for the IBM 3330 disk drive was generally known as a 100-megabyte disk pack because it had 100,018,280 bytes available to an IBM operating system with full-track records. The unformatted capacity was 104,952,960 bytes - decidedly not binary, and a proper use of decimal prefixes.
- The same 3336 disk pack when formatted for a Digital system had a formatted capacity of about 70 megabytes but was still called a 100-megabyte disk pack because of IBM's market power.
- As unique media and drives were introduced by suppliers other than IBM, they began the practice of specifying the products in unformatted capacities using decimal prefixes. See for example the CDC SMD HDD product line or the Shugart 5.25" FDD. The system or subsystem manufacturer advertised and/or reported the formatted capacity - NOT the disk drive or media manufacturer - because formatted capacity is a function of their controllers! Decimal prefixes continued to be used by drive and media manufacturers because it made no sense to use binary - we have 10 fingers and there is nothing binary about the unformatted capacities.
- Also note that systems and system programmers generally do NOT use prefixes, binary or decimal. At least I know of no compiler or assembler or low-level OS call that uses or reports with prefixes - you get a string which is usually hexadecimal but can be decimal, octal or binary, but never (to my knowledge) with prefixes (or, for that matter, commas).
- The first misrepresentation by a systems house may be Digital, who appears to have reported and advertised the formatted capacity of its 5.25" DSDD FDD in kibibytes misrepresented as kilobytes. Apple and Microsoft rapidly followed this for 5.25" FDDs. The DSHD 3.5" FDD is a 2.0-megabyte unformatted product; to date I haven't figured out who came up with the universal and incorrect 1.44 MB designation, which we all know to be 2 * 720 kibibytes (double capacity DSDD) - see the worked arithmetic below.
- It appears that Microsoft accurately reported HDDs in DOS but fell from grace in Windows. In most DOSes that I looked at, Microsoft reported capacity as a decimal string with no prefixes (therefore no misrepresentation) but then incorrectly adopted prefixes in the graphical Windows environment.
- This got further confused as the industry migrated from dumb device-level interfaces to intelligent interfaces. For example, drives with the “dumb” ST506/ST412 interface were specified with unformatted capacity. No one was surprised when the 12.76-megabyte unformatted ST412 sold by Seagate became a 10.0-megabyte formatted PC/XT HDD - and both had the correct prefix usage. Intelligent interfaces such as IDE and SCSI hide the unformatted capacity, so drive manufacturers continued their practice of using decimal units and prefixes.
A long rant, but it is pretty clear, at least to me, that the HDD companies have been historically consistent and reasonable in their usage of decimal prefixes and that the confusion and misapplication of binary prefixes comes from some careless translations by systems and subsystems manufacturers. 216.103.87.80 00:33, 3 September 2006 (UTC)
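As a worked illustration of the "1.44 MB" point above (my own arithmetic, assuming the standard DSHD 3.5" geometry of 2 sides x 80 tracks x 18 sectors of 512 bytes):

    capacity = 2 * 80 * 18 * 512     # sides * tracks * sectors * bytes/sector
    print(capacity)                  # 1474560 bytes
    print(capacity / 1000 ** 2)      # 1.47456  decimal megabytes
    print(capacity / 1024 ** 2)      # 1.40625  binary mebibytes
    print(capacity / (1000 * 1024))  # 1.44     the advertised figure:
                                     # 1440 kibibytes relabeled as "kilo"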
- While I agree with what you say, binary prefixes can sometimes make sense, for things like cluster sizes... but OTOH, most users should never really have to deal with binary prefixes, and kB should mean 1000 bytes as far as they are concerned. I would argue that when you need to measure a quantity such as the length of a file, you should do it with decimal prefixes, i.e. divide by powers of 1000. The only time you should use kibibytes/mebibytes/gibibytes is when it allows better accuracy (for instance when dealing with quantities of RAM, where 512 MiB is most likely the exact amount). --StuartBrady (Talk) 01:06, 3 September 2006 (UTC)
Fastest Location on Drive?
Which part of the disk is data most quickly accessed? Is it at the start of the drive or near the end of the drive? This information would be handy to know if I want a swap partition at the start or end of my hard drive to be sure I get optimal speed. Thanks! 71.112.224.112 23:28, 12 February 2006 (UTC)
- The beginning would be fastest, since pretty much all hard disks use zoned recording now, which makes the outer tracks operate at a higher bitrate than the inner ones. Seek times would still be about the same unless you get a drive with a fast spindle. Also, if you're swapping a lot and your computer can handle it, add more memory! Memory will always be faster than disk space, and it's cheap these days. -lee 17:19, 7 March 2006 (UTC)
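To make the zoned-recording point concrete, here is a rough sketch; the radii are my own assumptions, not actual drive specs. At constant RPM, the linear velocity under the head (and hence the sustained data rate, given roughly constant linear bit density across zones) scales with track radius:

    import math

    rpm = 7200
    inner_r, outer_r = 0.020, 0.046        # metres; assumed, not real specs
    for name, r in (("inner", inner_r), ("outer", outer_r)):
        v = 2 * math.pi * r * rpm / 60     # linear velocity under the head
        print(f"{name} track: {v:5.1f} m/s")
    print(f"outer/inner data-rate ratio: {outer_r / inner_r:.2f}x")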
137gb barrier
There's very little detail on this. I think it should be a little more technical (48-bit something?) and give the next barrier.
- That's really more a drive controller/interface issue than a physical hard disk issue. -- uberpenguin 18:07, 5 March 2006 (UTC)
- Starting from the ATA-3 standard, up through the ATA-5 standard, the LBA address was specified as a 28-bit quantity. This means that pow( 2, 28 ) * 512 bytes = 137 438 953 472 bytes (about 137 GB) is addressable with the older commands. ATA-6 provided extensions to the command set that allow 48-bit LBA addressing, i.e. pow( 2, 48 ) * 512 bytes = 144 115 188 075 855 872 bytes (about 144 PB). This will be sufficient for some time to come. GMW 14:45, 14 March 2006 (UTC)
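Restated as runnable Python, GMW's arithmetic:

    # Maximum addressable capacity with 512-byte sectors:
    print(2 ** 28 * 512)   # 137438953472       bytes, ~137 GB (the barrier)
    print(2 ** 48 * 512)   # 144115188075855872 bytes, ~144 PB (48-bit LBA)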
- The "barrier" is another BIOS-based limit rather than a hardware-based one; you could do 48-bit LBA on a 386 with an ISA paddleboard in if you were willing to live with the speed. The other issue is that some PCI IDE controllers (mainly from the late 1990s, in the early days of UDMA) either can't handle 48-bit LBA at all, or can't use 48-bit LBA with DMA on (which means the performance would suck). As I understand it (from looking at the Linux and FreeBSD driver sources), Intel's and VIA's controllers are immune to this and handle 48-bit LBA just fine, no matter how old they are. -lee 14:19, 11 April 2006 (UTC)
parallel reading on platters/tracks
Are there any mechanisms for reading multiple tracks/regions in parallel? Or maybe something as simple as reading a whole line from a drum etc?
- Currently there is no method for doing so. First, all heads are affixed to the same armature, so in order to read multiple tracks that aren't at the same radius, one would need multiple armatures. To do so would require multiple VCMs to drive the armatures, which poses multiple problems, namely where to stick the additional large permanent magnets required, how to handle the extra current demands, and how to servo on multiple surfaces at once. Typically there is only one servo path, and only one servo processor. Also, the heads are multiplexed by means of the preamplifier electronics, so if multiple read paths were desired, multiple preamps would be required. (Note that there is typically a so-called "gang write" function in the preamp to write the same pattern on all the heads simultaneously. This is only meaningful for writing the servo pattern to the disk, however.)
- Second, current HDD signal processing techniques require a lot of sophisticated processing in the data channel. Having multiple read paths would mean not just multiple preamps, but multiple channels to encode/decode the data, and multiple paths in the controller to handle ECC processing and storing simultaneous streams of data. In general, this is not very practical. It's far simpler to just get a bunch of disks in a striped RAID configuration to speed up access. GMW 01:32, 25 March 2006 (UTC)
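For the striped-RAID alternative GMW suggests, here is a minimal sketch of how RAID 0 spreads logical blocks across drives (a hypothetical helper, not any particular controller's layout):

    def raid0_map(lba, num_drives, stripe_blocks):
        # Map a logical block to (drive, block on that drive).
        stripe = lba // stripe_blocks      # which stripe unit overall
        offset = lba % stripe_blocks       # position within that unit
        drive = stripe % num_drives        # units rotate across the drives
        block = (stripe // num_drives) * stripe_blocks + offset
        return drive, block

    print(raid0_map(1000, num_drives=2, stripe_blocks=128))  # (1, 488)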
- Conner was demoing a line of drives with two actuators fitted back in the 1990s, but I don't know if they ever made it out of beta testing (the chipset they developed for them showed up on some of their other high-performance drives, though). Also, there were a few mainframe drives (CDCs spring to mind) that had multiple actuators. -lee 14:22, 11 April 2006 (UTC)
- The drum memory article mentions that "A row of read-write heads runs along the long axis of the drum, one for each track. A key difference between a drum and a disk is that on a drum, the heads do not have to move, or seek, in order to find the track they are looking for." --70.177.117.132 08:22, 19 November 2006 (UTC)
Fujitsu and Class/Form-Factor
Edited the bit about Fujitsu "exiting the mass market". Depending on one's perspective one might consider everything but server-class to be the mass market. Anyway, Fujitsu's 2.5" mobile and 3.5" server HDDs are currently obtainable in both the U.S. and Japan. Here's the Fujitsu press release regarding their exit from the desktop market.
One more thing: someone should probably standardize the way the form factors are referred to within the article. There are, for example, 2.5 inch, 2.5-inch, 2.5", notebook, etc. to choose from. My personal opinion is that the class and form factor should be stated, e.g. Mobile 2.5", Server 3.5", and so forth. Thoughts? GMW 15:24, 28 March 2006 (UTC)
- Now that Seagate has 2.5" drives meant for servers, and there's a third class of "near-line" 3.5" drives, this sounds like a good idea. Also, I fixed your link to the Fujitsu PR. -lee 14:25, 11 April 2006 (UTC)
- Thanks for the fix. (I'm still something of a wiki newbie.) The Server 2.5" is indeed newer. If I'm not mistaken, "near-line" usually refers to always-available storage, as opposed to tape storage, which requires retrieval from an archive and is much slower. Lately it seems like the term is used to mean Desktop-class drives that are used in what were traditionally Server applications. Internally, we call this "entry-level server", but who knows what the marketing guys tell people. *wink* GMW 07:42, 29 April 2006 (UTC)
Comments about the following categories/classes?
- Server 3.5"
- Server 2.5"
- Desktop 3.5"
- Mobile 2.5"
- Mobile 1.8"
- CE 1.0"
- CE 0.85"
Not sure where to put "entry-level server", but since design- and reliability-wise it's closer to Desktop than to Server, I'd prefer to leave it in the former.
Mobile 2.5" comes in various spindle speeds, and the high end 7200 RPM files come close to Desktop. Perhaps eventually 2.5" 7200 RPM will be referred to as Desktop 2.5". Mobile 2.5" at 4200 RPM is a dying design point, so that category would be left with 5400 RPM. GMW 08:04, 29 April 2006 (UTC)
Magnetic Surface
Hi there, I added 3 paragraphs in the 'mechanics' section about how the magnetic surface works (my first Wikipedia contribution!). I am no expert on this subject, so please check it. It might have more to do with magnetic recording in general, or maybe with read heads, than with hard disks. BlankAxolotl 07:20, 14 April 2006 (UTC)
- Good for a first entry. Nice pictures! That was indeed what I'd been taught about the usage of grains in magnetic media. However, each grain is not a single magnetic domain...yet. Right now, each magnet is multiple grains. One future advance in HDDs is what's called "patterned media" where each grain is capable of being individually magnetized.
- Regarding organization of this article, it seems to me that all the magnetics stuff doesn't belong under mechanics. Of course you put your addition there because the rest of the material was there, which fits. "Mechanics" seems to imply the aspects mechanical engineers would deal with, i.e. the base casting, spindle, hub, spacers, breather filter, actuator, gimbal assembly, etc. "Magnetic Recording" would better encompass the physics, material science, and electrical engineering aspects of HDDs, I think. Perhaps when all the contributors to this article decide it's time for a rewrite we can break things down logically.
- One thing I should mention: The read head is no longer an inductor, although the write head is. Modern read heads, using (CIP-)GMR or the newer Tunneling MR (variously abbreviated "TMR", ambiguous with "Track Misregistration", or "TuMR"), are not classified as inductive heads as there is no coil present. GMW 08:26, 29 April 2006 (UTC)
- Thanks for the feedback. Notes:
- I was under the impression that modern grains really were single domains (though I admit my memory is a bit fuzzy.. and I don't have my references with me). The magnetic regions, of course, are made up of hundreds of grains. Though, I just checked your talk page, and you are a real HDD engineer, unlike me :) . At any rate, in all the theoretical calculations I saw, grains are approximated as single domains. I have not really looked into patterned media, but from what I remember the idea is to make really big single-domain grains, and the magnetic regions will be composed of a single grain.
- Yes, I completely ignored write heads. Maybe I will write something for them sometime. Also I agree that this article is starting to need a rewrite. I was sort of thinking of moving this section into the magnetic recording article. BlankAxolotl 17:57, 29 April 2006 (UTC)
Lifetime
How long does a HD last compared to other kinds of storage? 71.250.15.252 23:44, 14 April 2006 (UTC)
- Mine is a Toshiba HD. Toshiba said it would last 5 years, but mine only lasted 2 years. --W.Tanoto 22:00, 30 September 2006 (UTC)
Encoding?
These two sentences confuse me when compared with the diagram:
- If the magnetization reverses between two magnetic domains, this signifies one state, while no change in magnetization signifies the other state. For various reasons, the actual binary data is encoded using sequences of these two possible states, rather than the states themselves.
This ASCII-art elucidates, N = no reversal, R = magnetic field reversal:
magnetic regions ... head travels this way ===>
     v         v         v
| --> --> | --> --> | <-- <-- | <-- <-- | <-- <-- | --> --> |
| --> --> | --> --> | <-- <-- | <-- <-- | <-- <-- | --> --> |
States:   N         R         N         N         R
Binary:   0         1         0         0         1
I read the two states as "no reversal" and "reversal", and so the diagram shows binary data *is* encoded as the states themselves. Ie. N is 0 and R is 1.
After reading Storage Review's Frequency Modulation (FM) page, this article's diagram appears oversimplified, and I suspect it omits the modulation stage.
I suggest the diagram should show frequency modulation (in itself a now obsolete simplification) encoding like this:
| --> --> | <-- <-- | <-- <-- | --> --> | <-- <-- |
| --> --> | <-- <-- | <-- <-- | --> --> | <-- <-- |
States:   R         N         R         R
Encoding:      0                   1
Binary:        0                   1
The sequence of states RN encodes to binary data 0 while the sequence RR encodes to 1. The sequence NN is forbidden to prevent a long stretch of no reversals causing the read controller to lose track of where the region boundaries are. More complicated encoding schemes are described in Run Length Limited and RLL diagrammed.
Lastly I would try to improve the last sentence to read: For various reasons, the actual binary data is encoded using consecutive sequences of these two possible states, rather than the states themselves. -213.219.160.64 16:22, 24 April 2006 (UTC)
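The FM scheme described here is simple enough to state in code (illustrative only; as noted, real drives long ago moved to more efficient RLL codes):

    def fm_encode(bits):
        # Every bit cell opens with a clock reversal R; the data bit then
        # contributes R for 1 or N for 0, so the forbidden NN never occurs.
        return "".join("R" + ("R" if b == "1" else "N") for b in bits)

    print(fm_encode("01"))   # RNRR -- matches the diagram: 0 -> RN, 1 -> RR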
- (I'm the maker of the diagram) Yeah, I saw that too when I wrote it up, but I've been too lazy to correct it. I just had my last final exam today, so I'll fix it in the next few days. BlankAxolotl 02:42, 28 April 2006 (UTC)
- I've uploaded the changes you suggested. I also uploaded the original svg file (drawn in inkscape) here: file, so you can edit it. It's too bad the frequency modulation page doesn't actually talk about the encoding the diagram shows... BlankAxolotl 04:34, 28 April 2006 (UTC)
Regarding encoding on HDDs in general, RLL parameters have relaxed quite a bit. Most modern codes are highly efficient (e.g. rate 60/62), not having many of the constraints that were previously required. This means that the read channel has to deal with quite a bit more stress in clock recovery. There is typically no constraint on the minimum bit run length (0s between 1s), i.e. d=0. The k parameter, the maximum bit run length (0s between 1s), is typically very large. Sometimes another parameter is used to specify the maximum run length of 1s between 0s, but these are more often than not the same.
Frequently, transition constraints are achieved by making use of the parity bits. For example, if there are a bunch of 0s in a row, the parity would end up being 1 (odd parity or similar); by evenly spacing the parity throughout the frame, optimal transition constraints can be achieved. (A rate-60/62 code would have j/k around 30.) Combining an MTR coding with parity can also be effective.
Earlier coding methods like FM and MFM had the advantage of reducing the flux change frequency. On highly efficient codes, the flux change frequency is equal to the channel data rate, which is in turn very close to the user data rate.
Lastly: NRZ versus NRZI. In NRZ, the binary data refers to the absolute state of the write current, i.e. "0" means one direction (say, -1 or S), "1" means the other (say, +1 or N). In NRZI, "0" means no transition, "1" means transition. The above diagram shows NRZI. NRZ and NRZI are related in that NRZI is the derivative (as in calculus) of NRZ. Also, the length of equivalent NRZ and NRZI data is such that NRZI is shorter by one, as NRZI sits on transitions between states, whereas NRZ is the state. (Analogy with a fork: NRZ is the tines, NRZI is the gaps between the tines.) GMW 09:07, 29 April 2006 (UTC)
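GMW's NRZ/NRZI relationship can be sketched with a pair of toy functions (my own illustration):

    def nrz_to_nrzi(nrz):
        # NRZI marks transitions between consecutive NRZ states, so it
        # comes out one symbol shorter (the gaps between the tines).
        return [int(a != b) for a, b in zip(nrz, nrz[1:])]

    def nrzi_to_nrz(nrzi, start=0):
        state, out = start, [start]
        for t in nrzi:
            state ^= t                   # a 1 flips the write-current state
            out.append(state)
        return out

    nrz = [0, 0, 1, 1, 0, 1]
    nrzi = nrz_to_nrzi(nrz)
    print(nrzi)                             # [0, 1, 0, 1, 1]
    print(nrzi_to_nrz(nrzi, start=nrz[0]))  # [0, 0, 1, 1, 0, 1] -- recovered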
- Wow! The encoding stuff seems like it should go in the Run Length Limited article.
- About NRZ vs NRZI: From my physical intuition, I thought that read heads ONLY read in NRZI: Both the inductor and GMR heads can only detect flux changes reliably. Read heads cannot detect magnetization direction because the fields are too weak, and I think (?) detecting overall magnetic fields is hard in these small spaces (a hall probe or gaussmeter would be needed..). However, I can see that one could easily add a tiny circuit, which uses some sort of parity counter to convert the NRZI to NRZ. Thus any NRZ reading you get must (according to me) always be an interpolation based on the raw NRZI data the read head gives. To the circuitry engineer, though, they are equivalent. Right, wrong? BlankAxolotl 18:21, 29 April 2006 (UTC)
750 GB Hard Drive
I added to the timeline the creation of the new Seagate 750 GB hard drive. While I don't know if it would be necessary to note every jump in capacity, I think the fact that capacity leapt 50% in a period of 7 or 8 months due to the introduction of perpendicular recording seems noteworthy.
I agree, though perhaps you should add in parentheses that it was perpendicular recording which made it possible (I do realize it mentions it as an achievement right above, but still).
Ferroelectric Hard drive
This is obviously rather significant, but I'm not sure I should just put "12.8 Petabyte hard drive" in 2006, especially since it's not completed. Any thoughts? I think that if this enters the market in a few years at reasonable prices, this will have absurd implications for uncompressed, workable HD video. Not to mention... how FAST must these things be? That kind of areal density? Wow. I guess the medium will be the problem for once... which is a bit of a scary thought. SCSI-640 anyone? :) Dan 15:56, 17 May 2006 (UTC)
I think it should be left in since it may accelerate its development by people/researchers who find out about the technology through Wikipedia. You can change the title.
fluid bearing
Some hard disks use fluid bearings; it makes them more silent.
How can something be "more" silent? Silent means there is zero sound, therefore it is impossible to be less than silent. I think you mean it makes them quieter :) Wtatour 23:48, 17 July 2006 (UTC)
Hard Disk Drive
This article needs to be renamed Hard Disk Drive.
A "hard disk" could be anything from a cast-iron frisbee to a stale Ritz cracker. It's just slang for the platters in the drive itself... dreddnott 19:50, 23 May 2006 (UTC)
"....platters spin at a constant RPM...", later "...electronics control.. rotation of the disk"
I think something is not quite coherent here. First we have "....platters spin at a constant RPM...", but later "...electronics control.. rotation of the disk".
AFAIK, platters spin at a constant RPM; the electronics can only stop them. The phrase "...electronics control.. rotation of the disk" leads to a small misunderstanding: some people think that different platters can revolve at different speeds.
What do you think?
- The ideal situation is that the platters spin at exactly constant RPM. However, that is not the case. There are actually two servo mechanisms in HDDs, one being the actuator servo, and the second being the spindle servo. Consider: How can one spin a disk at a fixed frequency, and how would you be sure it was rotating at the desired angular velocity? The crude answer is that the spindle servo control code measures the instantaneous speed of the disk and tweaks the motor torque accordingly. The result is constant slight adjustments, and since no servo mechanism is perfect, there is slight variation in the RPM. This "spindle jitter" is a parameter (mechanical tolerance) used to leave engineering margin in the track format's gate timings. GMW 15:18, 12 July 2006 (UTC)
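A toy version of the feedback loop GMW describes (all constants invented; a real spindle servo is far more sophisticated):

    import random

    target, rpm, gain = 7200.0, 7190.0, 0.5
    for step in range(5):
        rpm += random.uniform(-2.0, 2.0)  # disturbances: drag, bearing noise
        error = target - rpm              # measure instantaneous speed error
        rpm += gain * error               # tweak motor torque accordingly
        print(f"step {step}: {rpm:.2f} rpm")  # residual jitter never vanishes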
Length of "Mechanics and Magnetics" section
It seems to me the M&M section is far too lengthy, and the article would benefit from more specific categorization. Also, there may be a need for a History of HDDs article spinoff. I'm curious to hear what others think. Geo.per 19:54, 26 June 2006 (UTC)
Lifetime
Does anyone know the average lifetime of a hard disk under normal use? I'm quite certain it's not the MTBF, 'cause that's listed as 1 million hours, and that's over a hundred years if you use the hard disk constantly. Viltris 03:44, 30 June 2006 (UTC)
- Most manufacturers that explicitly give a service life say the drives are good for 5 years. This, of course, varies; I've used drives much older than that that still work (no bad sectors or cocked/worn bearings), but those drives are like the storage equivalent of an antique car that stays in the garage and only gets driven to shows during the summer. Drives that have been run hard for most of their lifetime (like SCSI drives in a busy array) are often lucky to last 3 years -- between the high rotational speeds and the vibration of N other drives in the same chassis (and also from seeking, especially on the older drives with much heavier actuators), the bearings will eventually grind themselves into dust. (This is the other reason why people are moving to fluid dynamic bearings; just like the bearings in a car engine, they last pretty much forever as long as the oil is clean.) -lee 14:50, 3 August 2006 (UTC)
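To quantify why a 1,000,000-hour MTBF is not a lifetime: within the (much shorter) service life, MTBF translates into an annualized failure rate. Simple arithmetic:

    mtbf_hours = 1_000_000
    hours_per_year = 24 * 365
    print(f"{hours_per_year / mtbf_hours:.2%} of drives failing per year")  # 0.88%
    print(f"naive 'lifetime': {mtbf_hours / hours_per_year:.0f} years")     # 114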
5.25 inch drive
Why is it that all hard drives nowadays are 3.5 inches? Why doesn't someone make one that's 5.25 inches to fit in an external drive bay of the computer? By my calculations that would give you over 2 times the area on the platters and thus two times the GB. I know it would be slower, but this would be great for an archival drive, backup drive or large-file storage drive.--God Ω War 22:14, 28 July 2006 (UTC)
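Checking the "over 2 times the area" claim with the nominal form-factor sizes (actual platter diameters are somewhat smaller in both cases, but the ratio is similar):

    print((5.25 / 3.5) ** 2)   # 2.25 -- platter area scales with diameter squared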
- I had a 5.25" drive, the Bigfoot 8GB, that I bought in the good old days of 1998. That beast was SLOOOWWW. I think you have a good point regarding using it for archiving. Part of the reason may be the intense competition between manufacturers (no time to waste on stuff outside the mainstream) and the fact that drives are doubling in size every 12-18 months. They tried 5.25 inch drives at one point and now they don't make them, so there must be a reason. -Ravedave 22:37, 28 July 2006 (UTC)
- After the first 100GB, the only thing most people use hard drives for is media. For a media center PC, you don't need a seek time under 10ms. That's a hundredth of a second! I know the OS and programs are constantly using the hard drive, so this would probably work best if you had a 100 gig hard drive for the OS and applications and a 2 terabyte drive for file storage. 2 terabytes is completely possible with current technology if hard drive space actually doubles with area.--God Ω War 22:49, 28 July 2006 (UTC)
- Quantum did indeed try this, and the people who really went for it were white-box dealers and OEMs like Compaq. Thing is, they made it a little too cheap; the original Bigfoot was only 3600 RPM, at a time when 4500 and 5400 RPM drives were common and 7200 was just breaking into the IDE world. Even with a super-high capacity, people are going to complain if it's too slow, and complain they did; Quantum ended up selling Bigfoots at bargain-basement prices to OEMs that didn't give a damn (I'm looking at you, Compaq). As for why it hasn't been revisited, well, it's a combination of economics (3.5" drives are dirt cheap these days and every decent OS will let you do at least striping) and engineering (not even Seagate was able to make a 7200 RPM 5.25" drive, though I suppose it could be done now with FDB motors). That and a bigger drive costs more to build, generates more heat and uses more energy to turn the motors -- and with PCs hot enough as it is, I'm sure the OEMs wouldn't like that. It'd end up being a niche product for which you'd pay boutique prices, and there are more practical ways of doing the same thing (2.5" drives in a 5.25" bay RAID case, maybe). -lee 15:22, 3 August 2006 (UTC)