HDD performance & Data Corruption 101


#1

by request I'm cobbling together a little HDD\Power\Corruption info. It will take a little while to get it all in order, so bear with me :stuck_out_tongue:

As the Hard Disc Spins
indepth analysis of HDD performance @ Lost Circuits

Summary Excerpts

"Hard disk drive technology has moved more and more into the center of attention of the IT industry. The original role of HDDs was meant to be a simple mass storage media with relatively little emphasis on performance; twenty years ago, this did suffice. With the internet evolving and servers becoming a powerful factor in the electronic fabric of data communication throughout the entire world, it soon became obvious that storage media were also the most prevalent bottleneck, thus, the need for increased performance.

Performance of hard disk drives is a rather touchy topic; depending on whom one asks, one may get very different answers, based on the different criteria applied and the different benchmarks used. Buzz words like internal performance, seek latencies, media transfer rates and response times are being thrown around, burst speed and effective host transfer rates are held against those, and last but not least, the cache size matters.

This is the level of argumentation for a single drive, once the subject switches to RAID configurations, things are getting even more complicated. In addition, there are the standard benchmarks and then there are those advanced benchmarks like IOMeter, which by their mere use already qualify the testers as experts. Or not?

In this and the following articles we will first discuss HDD performance in general, then go over a few basic functional drive architecture parameters and finally discuss how certain benchmarks fit into the grand scheme. We will also show which benchmarks, despite their popularity, are either meaningless for the end user or else create a false impression of the drive’s capabilities. There will be follow-up articles that will deal in greater detail with individual benchmarks."

I: Internal Drive Performance
What is HDD Performance Anyway?
Hard Disk Drive Architecture
Zones
Effective Internal Transfer Rate (TxD)
Internal Performance vs Sequential Data Transfer Rate
Servo Bursts
Skew
Summary: Effective Internal Performance (TxD)

"Effective internal performance or effective media transfer rate is defined by the media density times the linear velocity relative to the head minus housekeeping data that are interspersed with the actual data on the platters. Higher area densities and higher rotational speeds will increase the effective internal performance, higher amounts of housekeeping data as required for e.g. positional corrections will reduce the effective transfer rate. Depending on the drive’s targeted environment, the house-keeping overhead will vary, e.g. laptop drives that are subject to higher levels of vibrations will need more “corrective actions” / repositioning than desktop drives. Likewise, rack-mounted server drives may require more servo data since vibrations can easily propagate throughout an entire rack.

It should also be clear now why the relatively higher amount of servo data on high-end, e.g. SCSI drives will result in lower performance than that of comparable (in terms of rpm) desktop drives. On the other hand, within the environment that SCSI drives usually are operating in, the very same “faster” desktop drives would run into the need for constant recalibration which would cause a severe performance hit under operational conditions"
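To make the arithmetic concrete, here is a minimal sketch (mine, not from the article) of how platter geometry, spindle speed and servo overhead combine into an effective media transfer rate. All numbers are hypothetical round figures for illustration only:

```python
def effective_transfer_rate(sectors_per_track: int,
                            rpm: int,
                            servo_overhead: float) -> float:
    """Effective internal (media) transfer rate in MB/s.

    raw rate  = bytes per track x revolutions per second
    effective = raw rate x (1 - fraction of the track used for servo data)
    """
    bytes_per_track = sectors_per_track * 512        # 512-byte sectors
    raw = bytes_per_track * (rpm / 60)               # bytes per second
    return raw * (1 - servo_overhead) / 1e6

# Outer zone of a hypothetical 7200 rpm desktop drive, ~10% housekeeping:
print(effective_transfer_rate(900, 7200, 0.10))      # ~49.8 MB/s
# Same geometry with the heavier servo share of a server or laptop drive:
print(effective_transfer_rate(900, 7200, 0.15))      # ~47.0 MB/s
```

The second call is the article's point in numbers: more servo data (vibration compensation, repositioning) directly shaves the effective rate even at identical rpm and density.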

II: Averages, Seeks and other Paradoxes
Average Sequential Transfer
Read vs. Write Performance
Random Access
Long Seek vs. Short Seek Optimizations
Longer Platters Will Cause Higher Seek/ Random Access Latencies
Summary

"In this (second) article, we have covered some misleading internal drive parameters like the average sequential transfer rates for READ and WRITE as well as the different parameters influencing Random Access and Seek latencies and we further showed that different capacity models of the same drive can generate some performance data that paint the opposite picture of what the user would get in real world performance. Keep in mind that all parameters covered so far are only indicative of the drive’s internal performance and should not vary from one interface to the other, at least in single drive test configurations.

That means that e.g. using the “average sequential transfer” to show differences in interfaces or controllers is somewhat useless since by definition the results will be the same. Comparing e.g. the SATA and PATA interface on the same setup with different drives can only be called a self-fulfilling prophecy and will not bear any relevance for the intended test either.

In the case of RAID setups, there will be differences between READ and WRITE performance, moreover, the interface bandwidth will likely become a limiting factor and the cause for speed matching conditions. "

III: Effective Host Transfer Rates
Interface Speed as Basis for Classification
A Brief History of ATA
Host Transfer Rates (TH) and Effective Host Transfer Rates (TxH)
Quantifying the CMD Overhead
Data Transfer Vs. I/O Performance
Summary

“In the last two articles, we have covered the internal performance parameters of hard disc drives, that is, more precisely, those parameters that mostly relate to a HDD as an electromechanical device. For a brief recap, HDDs use electronic and mechanical parts and the mechanical latencies are what mostly holds back the performance of any disc drive. The main reason is the inertia of any mechanical component, starting from the spindle and the platter to the read / write heads and the actuator. Whoever read through the first two articles on the subject should at least have some basic understanding of how media density and rotational speed translate into sequential transfers, likewise, the influence of rotational latency and seek latency on the random access speed should be clear”

IV: DMAs, Latencies and Speed Matching
Bus Master?
Other factors influencing host transfer rate
…PCI Latency
…Bus Parking
…Write Combining
Speed Matching Condition
Summary

"In the last article, I briefly brushed on the issue that current version of Windows, as well as Unix and Linux systems are not real time operating systems, rather, software stacks are created to allow internal scheduling of the execution of events. That, however, also means that there can be conflicts between the availability of interrupts and the software stacks, that is, the hardware is fighting against the software for priority. The result in cases like that is that there is no winner, performance is being reduced and at the same time, CPU usage is increased because of access errors and retries on the PCI bus and ineffective execution of transfers.

Some of the issues with poor utilization are plainly and simply a matter of incorrect or marginally functional software. One good example is the chipset and bus master drivers that are necessary for almost any chipset not conforming to the original Intel specifications. That is, there are hardware bridges and buffers that need to be configured in order to optimally interface with the Microsoft OS environment by means of drivers. Only in a few rare instances will the new interface be completely transparent to the OS; one example is the ICH5 Serial ATA interface introduced by Intel with the Canterwood / Springdale chipset. However, even in this case, the transparency (the fact that the OS does not even notice anything has changed) is limited to the non-RAID version of the south bridge; the RAID version, the ICH5R, will require the installation of drivers for RAID operation."

“The VIA Hyperion drivers are the currently last step in a development that we have followed for about 5 years to optimize the South Bridge and IDE controller interaction with the Windows environment by means of bus master and GART drivers.”

"Third-party chipsets and external controllers will in almost all cases require the installation of extra drivers, not necessarily for any basic functionality but definitely for enabling performance while reducing CPU utilization. A few well-known examples are the VIA or nVidia or any other chipset bus master drivers."

V: Protocol Differences For Reduced Latencies
Cabling and Parallel Signaling Properties
Serial ATA
Shared Bus Issues
Parallel vs. Serial Command Overhead
Serial ATA: First Party DMA
SATA “Out Of Order Data Delivery” To Reduce Rotational Latencies

"we have concentrated on some of the internal design parameters that influence the internal performance of Hard Disc Drives as well as some of the issues that relate to the effective host transfer rate and the interfacing of the drive with the Host Bus Adapter (HBA) and the DMA and busmastering channels needed to interface with the system logic. The current article will concentrate more on the differences between Parallel and Serial ATA with a focus on the different interfacing protocols

Most of the marketing strategies promoting the migration from Parallel to Serial ATA have focused on the physical cabling properties, that is, the reduction from 16 bidirectional data channels to two unidirectional pairs of Low Voltage Differential Signaling (LVDS) lines. On the surface, the obvious effect is a greatly facilitated ease of routing, reduced obstacles in the air flow and a smaller connector footprint. The main reasons, however, relate to the signaling properties in the context of the fact that parallel signaling across long distances had no headroom left for further speed grades."

VI: Command Queuing
Queuing Schemes: Parallel ATA vs. Serial ATA
Mechanical Overhead
Seek Latencies
Supermarkets and Elevators
Rotational Latencies
Different Queues in Different Standards
The Hardware Behind Queuing
Queue Depth against The Rest Of The World
The Big Picture

In summary, here is the short and sweet on the different forms of Command Queuing. In Parallel ATA, the merely passive role of the drive, along with the command overhead associated with the disconnect and polling of master and slave devices on the same cable renders the legacy command queuing scheme somewhat ineffective. Therefore, there has been little incentive to move to the more sophisticated albeit more expensive command queuing scheme.

Serial ATA extensions add Native Command Queuing to the First Party DMA engine setup. The point-to-point topology allows continuous communication between the device and the controller, which in turn allows taking full advantage of advanced features like, for example, a so-called non-zero offset DMA engine setup to allow for out of order data delivery, as well as reordering of commands within the queue. SATA NCQ does not allow prioritizing of queues; however, Virtual Head of Queue attributes are possible, with the effect that any such command will trash any existing queue. This Virtual Head of Queue command will, thus, grant priority to itself in a Last-Man-Standing fashion. Thereafter, the previously uncompleted commands will have to be reissued.
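The "Supermarkets and Elevators" section above refers to elevator-style reordering: instead of servicing commands first-in-first-out, the drive sorts them along the direction the actuator is already sweeping. A toy sketch of the seek-ordering idea (track numbers and the queue are invented for illustration):

```python
def elevator_order(queue: list[int], head: int) -> list[int]:
    """Service requests in one sweep up from the head position,
    then one sweep back down for the ones behind it."""
    ahead = sorted(t for t in queue if t >= head)               # on the way up
    behind = sorted((t for t in queue if t < head), reverse=True)
    return ahead + behind

pending = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(pending, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14] -- two clean sweeps instead of the
# back-and-forth thrashing a FIFO order would cost in seek latency
```

Real NCQ firmware also weighs rotational position, not just track distance, but the reordering principle is the same.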

that's all as of the post date, but others will follow

RAID I: The Lesser Levels

Summary
MOST OF OUR PREVIOUS ARTICLES have focused on single drive technology, that is, HDD architecture, interface protocols and the back-end support of ATA by means of DMA channels. Likewise, we have shown some issues with the use of a number of benchmarks that can yield false results caused by speed-matching conditions. Especially the latter issue mostly occurs with RAID configurations, reason enough to start a short RAID overview series. For starters, we will cover the “lesser” forms of RAID, that is RAID Levels 0, 1 and 10, before taking the plunge into the Exclusive-Or calculations that form the operational backbone of real RAID configurations.

Different “Categories” of RAID

In general, we need to distinguish between three different categories of RAID, namely:

1. Standalone RAID solutions for mass storage in the backplane of servers (fiber channel-attached) or Network-Attached Storage (NAS) connected via Firewire or Gigabit Ethernet.

2. RAID functionality via separate Host Bus Adapter (HBA) cards using PCI, PCI-X (64bit / 66 MHz) or PCI-Express (3GIO) interface.

3. RAID controller integrated on the mainboard level. Often called RAID-lite because of limited functionality, except for dedicated Server boards. RAID on Mainboard (ROMB).

Software vs. Hardware RAID
Software RAID
Hardware RAID
JBOD and Spanning
RAID Level1 Mirroring / Duplexing
Duplexing vs. Mirroring
RAID Level 0+1 (Level 10)

As a summary of where we are and what is about to come in the next few articles, currently used levels of RAID are:

Lesser RAID Levels:
Level 1: Mirroring
Level 1: Duplexing
Level 0: Striping
Some companies use the term “spanning” when they really mean striping. Spanning normally only refers to JBOD
Level 1/0 or 10: Mirroring of Striped Drives (Expensive!!!)

and next
True RAID Levels
Level 2: Bit-Level striping across a minimum of 11 drives using Hamming codes, a form of error correcting code (ECC)
Level 3: Byte Level striping with Parity
Level 4: Block Level Striping with dedicated Parity
Level 5: Block Level Striping with distributed Parity
Level 6: Same as Level 5 but with dual distributed Parity
Level 7: Multiple Cache-level striping with dedicated Parity (proprietary format of Storage Computer Corporation including real-time processor)
We will go through those step by step in the next few articles.
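Before those follow-ups, here is a minimal sketch of the Exclusive-Or parity the "true" levels (3 through 6) are built on: the parity block is the XOR of the data blocks, and XORing the survivors with the parity rebuilds any single lost block. Toy 4-byte "blocks", not a real striping layout:

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0, d1, d2 = b"\x10\x20\x30\x40", b"\x0f\x0e\x0d\x0c", b"\xaa\xbb\xcc\xdd"
parity = xor_blocks(d0, d1, d2)          # written to the parity drive

# The drive holding d1 dies; rebuild it from the survivors plus parity:
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1                     # any single lost block comes back
```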

again, all as of the post date


#2

Corruption 101
There sometimes seem to be a few hundred ways to munge your data;
my personal list, ordered by probability, based on

The Risks To Your Data @ the PC Guide
(with a few additions, and split into hardware, software and filesystem)

Hardware Failure

Memory Errors: With so many systems today running without error detection or correction on their system memory, there is a chance of a memory error corrupting the data on the hard disk. It is rare for it to happen, but it does happen.
Test your RAM with memtest86 and/or Memtest; consider a board that supports ECC RAM

Power Loss: Losing power at the wrong time, such as when you are doing sensitive work on your hard disk, can easily result in the loss of many files.
Use a high quality PSU with stable voltage, employ a UPS or other line conditioning, avoid hard restarts

Cables: not included on the PC Guide’s original list, it is nonetheless an important possibility, especially with today's transfer speeds. There is a very real reason that the industry is adopting SATA over PATA; see the post below for a full discussion and links

System Timing Problems: Setting the timing for memory or cache access too aggressively, or using a hard disk interface transfer mode that is too fast for the system or device, can cause data loss. This is often not something that will generally be realized until after some amount of damage has been done.
see tRAS below, bump back an overclock, try a different divider, and don't overclock if you can't lock the PCI bus. I would bump the probability of this up a few slots if the machine is overclocked; otherwise I've put it here

Resource Conflicts: Conflicts resulting from peripherals that try to use the same interrupt requests, DMA channels or I/O addresses, can cause data to become corrupted.
Review PIRQ routing and, in the worst-case scenario, manual assignment of IRQs; chipset specific, but a good overview

The Hard Drive: test with the manufacturer's diagnostic. Most of the rest is preventative: proper handling, a vibration-free and cool environment, clean air and a stable floor

Software Failure
Busmastering Drivers

Filesystem Corruption

Power Issues
There are three basic areas of power problems:
1. Source Power: brownouts, blackouts, spikes\surges etc.
see > Power Conditioning and DIY UPS @ Dans Data, for the basics
In this category I would also place power issues due to pilot error, hard restarts and shorts; avoid both. Shut down properly and pay attention when mounting your motherboard and routing power cables.

2. Under Power: basically too many components for the power supply.
Don't be deceived by wattage figures; it's the amount of amps per rail that is really important. See > Choosing the right Power Supply
takaman’s Power Supply Calculator rev0.61x
to determine the amps you need per rail

3. Voltage Stability: pretty much all of the following
[H]ardcore PSU info (Charts)
http://terasan.okiraku-pc.net/dengen/tester/index.html
http://terasan.okiraku-pc.net/dengen/tester2/index.html

In Japanese :stuck_out_tongue:
But the graphs speak volumes
and the PSU are identified in English

the “translation” links are below, but they really don't add much
and take considerable time to load
http://www.worldlingo.com/wl/translate?wl_lp=JA-en&wl_glossary=gl1&wl_fl=2&wl_rurl=http%3A%2F%2Fterasan.okiraku-pc.net%2Fdengen%2Findex.html&wl_url=http%3A%2F%2Fterasan.okiraku-pc.net%2Fdengen%2Ftester%2Findex.html

http://www.worldlingo.com/wl/translate?wl_lp=JA-en&wl_glossary=gl1&wl_fl=2&wl_rurl=http%3A%2F%2Fterasan.okiraku-pc.net%2Fdengen%2Findex.html&wl_url=http%3A%2F%2Fterasan.okiraku-pc.net%2Fdengen%2Ftester2%2Findex.html

Continuous Power vs. Peak Power at Spin-Up

“12V power profile (current vs. time) of an IDE/ATA hard disk at startup. You can see that the peak power draw is over quadruple the steady-state operating requirement. The graph appears “noisy” due to frequent oscillations in current requirements.”

Peak vs. Continuous Power

“Despite this extra capacity, it is still a good idea to not load up your system to the very limit of your power supply’s stated power capacity. It is also wise, if possible, to employ features that delay the startup of some disk drive motors when the PC is first turned on, so the +12 voltage is not overloaded by everything drawing maximum current at the same time.”
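The quote above is also why per-rail budgeting matters. Here is a back-of-the-envelope sketch of the kind of sum takaman's calculator automates, comparing steady-state and spin-up draw on the +12V rail; every figure is a made-up example, not a real component rating:

```python
rated_12v = 10.0      # amps the PSU label promises on the +12 V rail

cpu_vrm  = 6.0        # hypothetical draws, in amps
fans     = 0.5
hdd_run  = 1.2        # two drives, spinning steady-state
hdd_spin = 4.0        # the same two drives at spin-up peak

steady  = cpu_vrm + fans + hdd_run
startup = cpu_vrm + fans + hdd_spin
print(f"steady  {steady:.1f} A of {rated_12v} A")    # 7.7 A  -- fine
print(f"spin-up {startup:.1f} A of {rated_12v} A")   # 10.5 A -- over budget
```

A rig that runs fine once booted can still brown out its drives on every cold start, which is exactly the delayed-spin-up case the quote describes.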

Referring to the links above again:
http://terasan.okiraku-pc.net/dengen/tester/index.html

note the consistent voltage instability at startup and shortly thereafter in those graphs

Winbond Launches New Bus Termination Regulator April 4th 2003

"Winbond Electronics Corporation, a leading supplier of semiconductor solutions, today launched the W83310S, a new DDR SDRAM bus termination regulator. The solution, new to Winbond’s ACPI product family, is aimed at desktop PC and embedded system applications with DDR SDRAM requirements.

Computer systems architectures continue to evolve and are becoming more complex; CPU and memory speeds continue to increase ever more rapidly with every technology turn. More and more high current/low voltage power sources are required for PC systems. This is particularly true for high-speed components such as CPU, memory, and system chipsets. The performance of these components is highly dependent upon stable power. Therefore, motherboard designers require accurate, stable, low-ripple and robust power solutions for these components.

Many system designs use discrete components to implement bus termination functions. This approach creates several problems including poorer quality load regulation, higher voltage ripple, increased usage of board space and inconsistent designs when different discrete components are used."

and just to reiterate this point one more time
http://www.anandtech.com/showdoc.html?i=1774&p=8
"the majority of damaged RAM returned to memory manufacturers is destoryed by fluctuations in the voltage."

the transient response is the critical measure; unfortunately it's not a metric that is commonly supplied with the PSU specs

"Transient Response: As shown in the diagram here, a switching power supply uses a closed feedback loop to allow measurements of the output of the supply to control the way the supply is operating. This is analogous to how a thermometer and thermostat work together to control the temperature of a house. As mentioned in the description of load regulation above, the output voltage of a signal varies as the load on it varies. In particular, when the load is drastically changed, either increased or decreased a great deal, suddenly, the voltage level may shift drastically. Such a sudden change is called a transient. If one of the voltages is under heavy load from several demanding components and suddenly all but one stops drawing current, the voltage to the remaining component may temporarily surge. This is called a voltage overshoot.

Transient response measures how quickly and effectively the power supply can adjust to these sudden changes. Here’s an actual transient response specification that we can work together to decode: “+5V,+12V outputs return to within 5% in less than 1ms for 20% load change.” What this means is the following: “for either the +5 V or +12 V outputs, if the output is at a certain level (call it V1) and the current load on that signal either increases or decreases by up to 20%, the voltage on that output will return to a value within 5% of V1 within 1 millisecond”. Obviously, faster responses closer to the original voltage are best."
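Decoding that spec mechanically: a small sketch that checks an invented oscilloscope trace of a rail after a load step against the quoted "within 5% in less than 1 ms" criterion:

```python
def meets_transient_spec(samples, nominal, tol=0.05, deadline_ms=1.0):
    """samples: (time_ms, volts) pairs after the load step, in time order.
    True if the rail gets back within tol of nominal before the deadline."""
    for t_ms, volts in samples:
        if t_ms > deadline_ms:
            break
        if abs(volts - nominal) / nominal <= tol:
            return True
    return False

# Hypothetical +12 V trace after a 20% load dump (overshoot, then recovery):
trace_12v = [(0.1, 13.1), (0.4, 12.8), (0.8, 12.2), (1.5, 12.05)]
print(meets_transient_spec(trace_12v, nominal=12.0))  # True: 12.2 V at 0.8 ms
```

This is a simplification (it only looks for the first in-tolerance sample), but it captures what the spec sentence promises.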


#3

Cables

there is definitely a reason why SATA is being adopted: ATA\IDE\EIDE\ATAPI is an unterminated standard, and as speeds increase that causes more and more problems, especially with cheap cables, complex device configurations, hotswap\removable drivebay bridge cards and poor cable routing

a few (:p) links and excerpts:
Standard (40-Conductor) IDE/ATA Cables

In many ways, the cable is the weak link in the IDE/ATA interface. It was originally designed for very slow hard disks that transferred less than 5 MB/s, not the high-speed devices of today. Flat ribbon cables have no insulation or protection from electromagnetic interference. Of course, these are reasons why the 80-conductor cable was developed for Ultra DMA. However, even with slower transfer modes there are limitations on how the cable can be used.

The main issue is the length of the cable. The longer the cable, the more the chance of data corruption due to interference on the cable and uneven signal propagation, and therefore, it is often recommended that the cable be kept as short as possible. According to the ATA standards, the official maximum length is 18 inches, but if you suspect problems with your hard disk you may find that a shorter cable will eliminate them. Sometimes moving where the disks are physically installed in the system case will let you use a shorter cable

Warning: There are companies that sell 24" and even 36" IDE cables. They are not recommended because they can lead to data corruption and other problems. Many people use these with success, but many people do a lot of things they shouldn’t and get away with it. :^)

Ultra DMA (80-Conductor) IDE/ATA Cables

There are a lot of issues and problems associated with the original 40-conductor IDE cable, due to its very old and not very robust design. Unterminated flat ribbon cables have never been all that great in terms of signal quality and dealing with reflections from the end of the cable. The warts of the old design were tolerable while signaling speeds on the IDE/ATA interface were relatively low, but as the speed of the interface continued to increase, the limitations of the cable were finally too great to be ignored.

that "upgrade happened at 66MB/s burst, we are now at the same speed as the PCI bus for burst rates 133MB/s

Fancy IDE leads - The Terrible Truth

The spec mandates such short cables for two reasons.

Reason one - practically all IDE cables are unshielded. There’s nothing around the conductors but insulation. Electromagnetic radiation goes straight through insulation. So external interference from the rest of your computer’s giblets can influence the signal on your IDE leads.

Unshielded cables act like antennas. Generally speaking, the longer you make 'em, the more energy they can pick up from their environment.

Reason two - IDE cables are unterminated. “Termination”, in the electrical sense, is essential to provide “impedance matching”, which in English is what you have to do to stop the signal from reflecting off the end of the cable like a wave that hits the end of a bathtub.

Electric current does not move instantaneously down a wire. It travels at nearly the speed of light, but when you’ve got thirty-three and a third million clock pulses per second - which is the speed of the IDE bus - even light in a vacuum only moves a hair under nine metres per clock pulse.

So if you’re fooling around with, say, a double-the-rated-length 900mm IDE lead, there’s an end-to-end signal delay in it of about a tenth of a clock pulse. The signals you want your drives and your motherboard to be able to hear will be significantly blurred by delayed reflections from each end of the cable.

Transfer your data at twice or three times the UDMA/33 speed - as UDMA/66 and 100 do - and reflected signals get more and more out of step with the real signal, and do it more and more harm.
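Dan's arithmetic checks out; here is the same calculation spelled out, using his own simplification of vacuum light speed as the upper bound:

```python
C = 299_792_458            # speed of light in a vacuum, m/s
CLOCK_HZ = 33_333_333      # the ~33.3 MHz clocking of the UDMA/33-era bus

metres_per_pulse = C / CLOCK_HZ
print(f"{metres_per_pulse:.2f} m per clock pulse")   # ~8.99 m, "a hair under 9"

cable_m = 0.9              # the double-the-rated-length 900 mm lead
print(f"{cable_m / metres_per_pulse:.3f} of a clock pulse end to end")  # ~0.100
```

At UDMA/66 and UDMA/100 the data is strobed faster still, so the same reflection delay eats an even larger fraction of each bit, which is his closing point.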

Serial ATA and the 7 Deadly Sins of Parallel ATA
Critical Limiting Factors in Parallel Design
There are some fundamental differences between serial and parallel buses, more importantly, there are some critical limiting factors in the design and implementation of any parallel bus.

  1. Non-Interlocked (source synchronous) clocking

  2. 3.3 V high-low signaling with 5V legacy tolerance

  3. Cabling constraints

  4. Connector legacy

  5. Termination

  6. Command queuing

  7. PCB Design

  8. Cable Design Issues: Cross-Talk and Ground Bouncing vs. Ringing

Each signal propagating through a data line makes the data line act like the inductor of a transformer. That is, each voltage swing generates a dynamic electromagnetic field, that, depending on cable length and proximity will induce another signal in adjacent data lines. This cross-talk adds noise to data lines and can produce errors by generating false positives or negatives simply by induction of voltage swings in data lines.

Another problem with parallel pathways is the phenomenon of simultaneously switching outputs (SSO) noise. As we explained in detail in our reviews of the i845 and the SIS645 chipsets, SSO noise becomes really problematic if the majority of signals switch from high to low, since this can induce ground bouncing. On the chipset level, a workaround in the form of dynamic bus inversion (DBI) is feasible, that is, instead of switching all bits, only the reference bit is switched simultaneously at the sender and receiver end, which has the same net effect, namely, that the system does not see the reference switch but thinks that all other lines have switched. DBI, however, requires an additional latency cycle and this is where the 40 ns clock cycle time starts to look really ugly.

ATA not so Frequently Asked Questions
Or: Why Ribbon Cables are unsuitable for RF transmission of data

The following article was written by snn47 to address some of the issues associated with standard ribbon cables and the use of e.g. removable drive racks, as an attempt to share some insight into factors that can adversely affect the life or reliability of desktop Hard Disk Drives. Specifically, issues like why some drives are working in some systems and not in others, the impact of cable routing and why it is that the drive manufacturers always recommend using their own cables (if supplied with the drive). (emphasis mine)

Any RF system has a limited tolerance for distortion of signals, which, in the worst case, can destroy some of the semiconductor components. While a certain amount of variation is part of any system's specification, one needs to remember that ATA was never intended to handle today’s data rates. ATA or Advanced Technology Attachment started as the usual run of the mill or: “just a system at the lowest possible price point that will work most of the time without the need for huge financial investments”. The problems started when the system was forced to handle higher and higher clock and data rates within the original design limitations. Keep in mind that the latest ATA/ATAPI-7 specifications allow data rates of 133 MB/sec, which is 44 times faster than the original ATA transfer of 3 MB/sec. This increase in speed makes it necessary to enforce minimum tolerances and detailed specifications to allow for the manufacturing of affordable systems with minimum compatibility problems.

these are just a few excerpts; I would highly advise that everyone give them a good read. There ARE good rounded ATA cables, e.g. the RD3XP Super Shielded:
“RD3XP is made from ATA 100/133 High impedance flat cable cut into 8 layers of 10 cable wires, with a ground wire and signal wire alternatively, and folded in zigzag-piled so that each signal wire is surrounded by 4 ground wires.”

but like their SCSI counterparts, they ain't cheap. There are also high quality flat cables (buy a $300 RAID card and they don't ship you crappy PVC cables; they are either Teflon or Thermoplastic Olefin (TPO))

Up until a little while ago I would have said any investment made in high quality cables was money well spent; however, with the introduction of SATA, that doesn't necessarily hold true anymore,
unless you're dealing with critical data (in which case you should be running ECC RAM) or you're actually experiencing problems

a further excerpt from ATA not so FAQs

Preliminary Conclusions and Possible Cure

Reasons for changes in the propagation impedance, cross-coupling between adjacent signal wires and signal velocity from one setup to another are:

Impedance of the drive and controller at high/low signal levels will be different for different models.
Reflection of signals that garble the pulse, due to incorrect termination impedance or impedance inconsistencies from the controller to the drive, meaning the impedance of the controller and the drive(s) differ.

If there is a second drive (connector present/connected), the impedance will fluctuate at this point.

A. Only one HDD per controller channel.

B. Use a cable with only 2 connectors.

Signal delay will increase with the length of the flat-ribbon-cable; propagation of the signals was intended for a max. flat-ribbon-cable length of 18", which at ~5 ns/m works out to ~2.3 ns of delay.

C. shorten the cable whenever possible.

D. If the case requires long cables, consider mounting just the HDD closer to the connectors of the controller, or consider exchanging the usual desktop case for a 19" case. Mount the HDD just above or below the PCB-controller-connector to allow you to reduce the length of the flat-ribbon-cable to a few cm.

Flat-ribbon-cable with different isolation material (higher/lower eR) and change in the conductor diameter will change the ratio of (2D/d).

Are rounded cables used?
E. Try exchanging the cable for another type/brand of flat-ribbon-cable.

Is the flat-ribbon-cable at some point parallel to a conducting grounded surface?
F. Try a different routing of your flat-ribbon-cable away from a ground-plane.

Was the cable cut apart and/or rolled to get a rounded cable?
G. Unroll it and try B., if cut apart then start with A.

Is the drive mounted in a removable drive rack?
H. Remove HDD from the drive-bay and start with A.

However, you should check out the section in Dansdata’s “Fancy IDE leads - The Terrible Truth” as to why, with all this going on,
for the most part, it still works anyway :stuck_out_tongue:

Check out what your chipset has to sort out here
http://www.vicstech.com/en/rd3xp/NoiseTest/
click on a picture to see an animated test
(note: not all types of cables were employed; for instance there are no high quality TPO or teflon cables in this test)

like the power supply, cables are widely underrated as a source of problems, and few ever spend any money on them for anything but “looks”

System Timing
Setting the memory timings too aggressively, overclocking the Front Side Bus (without locking the PCI bus) or using a hard disk interface transfer mode that is too fast for the system or device (or cables) can cause data loss

These days most “enthusiast” boards allow the PCI bus to be locked or employ a divider, but overclocking is always a risk to your data. A more common tweak is aggressive memory timings,
and those too can affect data integrity

from Eight Ways To Kill Your HDD

TRAS Violation: The Creeping Corruption of a HDD

One of the most common reasons for HDD failure is what is called tRAS violation. tRAS is the minimum bank open time of the DRAM, that is, we are talking about system memory here. Many mainboard manufacturers still include Ultra and Turbo settings in their CMOS setup options that are only workable at 100 MHz memory bus settings, a.k.a. PC1600 mode. One setting that has absolutely no impact on performance is the minimum bank open time or tRAS, while the same setting can have catastrophic consequences for data integrity, including HDD addressing schemes, if the latency is set too short. In theory, tRAS can be as short as tRCD + CAS delay; in reality, however, the minimum bank open time is dictated by the RAS Pulse Width, that is, the time required to reach a voltage differential between memory bitlines and reference lines to safely identify a 0 or 1 logical state.

The main reason why tRAS violation commonly leads to HDD corruption may relate to the translation of the physical memory space into virtual memory sub-spaces by the operating system and finally writing the data back to the storage media, but it is not entirely clear what is going on there. A fact is, though, that a tRAS value of 5 is adequate for PC1600 or 100 MHz operation. At 133 MHz or PC2100, tRAS should never undercut 6T; likewise, at PC2700, the value should be increased to 7T where applicable. In terms of performance, tRAS settings hardly make any difference. We challenged some performance gurus at AMD on this matter and they reported a drop in Quake frame rates from 792 fps to 790 fps when increasing tRAS from 5T to 6T.

for in-depth information > tRAS violation as a cause of data corruption @ Lost Circuits
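As a cheat-sheet for the numbers in that excerpt, a tiny sketch encoding its rule of thumb (the dictionary values come straight from the text; everything else is scaffolding):

```python
SAFE_TRAS = {100: 5, 133: 6, 166: 7}   # memory MHz -> minimum tRAS in cycles

def tras_ok(mem_mhz: int, tras_cycles: int) -> bool:
    """True if the configured tRAS meets the excerpt's floor for this clock.

    The bank must stay open a roughly constant time in nanoseconds, so a
    faster clock needs more cycles: the 5T that is safe at 100 MHz is a
    violation at 133 MHz.
    """
    return tras_cycles >= SAFE_TRAS[mem_mhz]

print(tras_ok(133, 5))   # False: a 5T "Ultra/Turbo" preset at PC2100
print(tras_ok(133, 6))   # True
```

Given the measured cost (792 vs 790 fps), there is no reason to run the lower value.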


#4

FileSystem
http://ntfs.com/data-integrity.htm

An Explanation of CHKDSK and the New /C and /I Switches


"To understand when it might be appropriate to use these switches (/C and /I) , it is important to have a basic understanding of some of the internal NTFS data structures, the kinds of corruption that can take place, what actions CHKDSK takes when it verifies a volume, and what the potential consequences are in circumventing CHKDSK’s usual verification steps.

CHKDSK’s activity is split into three major “passes” during which it examines all the “metadata” on the volume and an optional fourth pass. Metadata is “data about data.” It is the file system overhead, so to speak, that is used to keep track of everything about all of the files on the volume. Metadata tells what allocation units make up the data for a given file, what allocation units are free, what allocation units contain bad sectors, and so on. The “contents” of a file, on the other hand, is termed “user data.” NTFS protects its metadata through the use of a transaction log. User data is not so protected.

During its first pass, CHKDSK displays a message on the screen saying that it is verifying files and counts from 0 to 100 percent complete. During this phase, CHKDSK examines each file record segment (FRS) in the volume’s master file table (MFT). Every file and directory on an NTFS volume is uniquely identified by a specific FRS in the MFT and the percent complete that CHKDSK displays during this phase is the percent of the MFT that has been verified. During this pass, CHKDSK examines each FRS for internal consistency and builds two bitmaps, one representing what FRSs are in use, and the other representing what clusters on the volume are in use. At the end of this phase, CHKDSK knows what space is in use and what space is available both within the MFT and on the volume as a whole. NTFS keeps track of this information in bitmaps of its own that are stored on the disk allowing CHKDSK to compare its results with NTFS’s stored bitmaps. If there are discrepancies, they are noted in CHKDSK’s output. For example, if an FRS that had been in use is found to be corrupted, the disk clusters formerly associated with that FRS will end up being marked as available in CHKDSK’s bitmap, but will be marked as being “in use” according to NTFS’s bitmap.

During its second pass, CHKDSK displays a message on the screen saying that it is verifying indexes and counts from 0 to 100 percent complete a second time. During this phase, CHKDSK examines each of the indexes on the volume. Indexes are essentially NTFS directories and the percent complete that CHKDSK displays during this phase is the percent of the total number of directories on the volume that have to be checked. During this pass, CHKDSK examines each directory on the volume for internal consistency and also verifies that every file and directory represented by an FRS in the MFT is referenced by at least one directory. It also confirms that every file or subdirectory referenced in each directory actually exists as a valid FRS in the MFT and checks for circular directory references. Finally, it confirms that the various time stamps and file size information associated with files are all up-to-date in the directory listings for those files. At the end of this phase, CHKDSK has ensured that there are no “orphaned” files and that all the directory listings are for legitimate files. An orphaned file is one for which a legitimate FRS exists, but which is not listed in any directory. When an orphaned file is found, it can often be restored to its rightful directory, provided that directory is still around. If the directory that should hold the file no longer exists, CHKDSK will create a directory in the root directory and place the file there. If directory listings are found that reference FRSs that are no longer in use or that are in use but do not correspond to the file listed in the directory, the directory entry is simply removed.

During its third pass, CHKDSK displays a message on the screen saying that it is verifying security descriptors and counts from 0 to 100 percent complete a third time. During this phase, CHKDSK examines each of the security descriptors associated with each of the files and directories on the volume. Security descriptors contain information regarding the owner of the file or directory, NTFS permission for the file or directory, and auditing information for the file or directory. The percent complete in this case is the percent of the number of files and directories on the volume. CHKDSK verifies that each security descriptor structure is well formed and internally consistent. It does not verify that the listed users or groups actually exist or that the permissions granted are in any way appropriate.

The fourth pass of CHKDSK is only invoked if the /R switch is used. /R is used to locate bad sectors in the volume’s free space. When /R is used, CHKDSK attempts to read every sector on the volume to confirm that the sector is usable. Sectors associated with metadata are read during the natural course of running CHKDSK even when /R is not used. Sectors associated with user data are read during earlier phases of CHKDSK provided /R is specified. When an unreadable sector is located, NTFS will add the cluster containing that sector to its list of bad clusters and, if the cluster was in use, allocate a new cluster to do the job of the old. If a fault tolerant disk driver is being used, data is recovered and written to the newly allocated cluster. Otherwise, the new cluster is filled with a pattern of 0xFF bytes. When NTFS encounters unreadable sectors during the course of normal operation, it will also remap them in the same way. Thus, the /R switch is usually not essential, but it can be used as a convenient mechanism for scanning the entire volume if a disk is suspected of having bad sectors.

The preceding paragraphs give only the broadest outline of what CHKDSK is actually doing to verify the integrity of an NTFS volume. There are many specific checks made during each pass and several quick checks between passes that have not been mentioned. Instead, this is simply an outline to the more important facets of CHKDSK activity as a basis for the following discussion regarding the time required to run CHKDSK and the impact of the new switches provided in SP4"
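As a mental model of the pass-1 cross-check described above, a toy sketch: CHKDSK builds its own in-use bitmap from the FRS scan and diffs it against the bitmap NTFS keeps on disk. The cluster numbers are invented; a real volume stores this in the $Bitmap metadata file:

```python
def bitmap_discrepancies(chkdsk_in_use: set[int], ntfs_in_use: set[int]):
    """Clusters the two views disagree on."""
    freed_by_chkdsk = ntfs_in_use - chkdsk_in_use   # NTFS says in use, but the
                                                    # owning FRS was corrupt
    missing_in_ntfs = chkdsk_in_use - ntfs_in_use   # referenced by a valid
                                                    # FRS, but marked free
    return freed_by_chkdsk, missing_in_ntfs

# A corrupted FRS means CHKDSK no longer counts clusters 17 and 18 as used:
freed, missing = bitmap_discrepancies({5, 6, 9}, {5, 6, 9, 17, 18})
print(freed)    # {17, 18} -> noted as a discrepancy; space becomes available
```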


Description of Enhanced Chkdsk, Autochk, and Chkntfs Tools in Windows 2000


#5

Originally posted by Bung
There have been many cases of problems with SATA drives esp in RAID arrays traced to noise in cabling or connectors.

very true (I compiled that quite a while ago and haven't updated it all that recently)

I have a SATA Cautions thread stickied on my forum :stuck_out_tongue:

http://www.ata-atapi.com/sata.htm

FIRST, THINGS YOU DO NOT DO WHEN USING SATA!

If you are setting up a system using SATA here are some things you must be aware of:

DO NOT operate SATA devices outside of a sealed system unit. DO NOT operate SATA devices from a power supply that is not the system unit’s power supply.
DO NOT tie wrap SATA cables together. DO NOT put sharp bends in SATA cables. DO NOT route SATA cables near PATA cables. Avoid placing SATA devices close to each other such that the SATA cable connectors are close to each other.
DO NOT operate a radio transmitter (such as a cell phone) near an exposed SATA cable or device.
Why all these warnings? The basic problem is that the SATA cable connector is not shielded. This has to be the number one most stupid thing that has been done in the SATA world.

SECOND, LETS TALK ABOUT SATA RELIABILITY!

Are you thinking about buying a Serial ATA system and drive? If yes, read this… The Serial ATA (or SATA) products that are now shipping and available in your local computer store may not be the most reliable products. Testing of SATA products with tools such as the ATACT program is finding a variety of problems. These problems are timeout errors, data compare errors, and strange status errors. These problems are being reported by a large number of people doing SATA product testing. Hale’s advice at this time is to be very careful - make sure you can return the SATA product you purchased if it does not perform as you expect. See the ATACT link above for some ATACT log files showing both normal testing of a parallel ATA (PATA) drive (no errors!) and testing of a SATA drive (lots of errors!).

The unshielded SATA cable connector is most likely the source of many of these problems. Making things worse is the failure of the SATA specification to implement an equivalent to the ATA Soft Reset. On a PATA interface, Soft Reset rarely fails to get ATA/ATAPI devices back to a known state so that a command can be retried. On a SATA interface, the equivalent to this reset does not seem to reset anything, and at times it is basically ignored by the SATA controller and device.

And finally, … Don’t buy SATA because it claims to be faster than PATA. The marketing claims that it can transfer data at up to 150MB/second (making it faster than the fastest PATA Ultra DMA mode, mode 6 or 133MB/second) will not be seen with the SATA products that are shipping today (late 2003). Today’s SATA products are actually 10% to 20% slower than PATA. This is because today’s SATA products are really PATA products with an extra SATA-to-PATA ‘bridge chip’ in the device. These bridge chips add significant overhead to the SATA protocols. In time there will be real ‘native’ SATA devices that do not need these bridge chips - then we can see what the true performance of SATA is. But remember, SATA is a ‘serial interface’ and serial interfaces rarely live up to their marketing claims.

Hale Landis maintains the ata-atapi.com website, and has been working for open standards for 25 years. He has been a participant in the ANSI X3/NCITS Technical Committees that developed the ATA and ATA/ATAPI standards since 1990, and works as a consultant and provider of test software.

there is a demo version of ATA Command Test available for download there as well :wink:

quite a few people have fabricated their own shielded cables to good effect


#6

reserved 4000


#7

Advanced HDD Issues Linkfarm

Hard Disk Drive Reference Section @ Storagereview.com (reprinted from the PC Guide link below)
Including:
A Brief History of the Hard Disk Drive
Construction and Operation of the Hard Disk Drive
Hard Disk Geometry and Low-Level Data Structures
Hard Disk Performance, Quality and Reliability
Hard Disk BIOS and Capacity Factors
Hard Disk Interfaces and Configuration
Hard Disk Logical Structures

Hard Disk Drives @ the PC Guide (same as above)
PC Guide Topic Index

ATA-ATAPI.COM
How it Works Document series (HTML and Zip available)
Including:
Hale’s ATA FAQ
Fact and Fiction
CHS Translation
Partition Tables
Masterboot Record
DOS Floppy Disk Boot Sector
OS2 Boot Sector

Computer Boot Sequence @ Mossywell.com
Including:
Hard Disk Geometry
How the Physical Disk is accessed
The Standards
How we used to access the disk: CHS, ECHS, Revised ECHS, Assisted LBA, LBA
How we now access the disk: LBA and Extended INT13h
Better than LBA and Extended INT13h: Direct Disk Access (DMA)
The BIOS
The Master Boot Record \ The Master Boot Record Code
Partition Boot Sector and Clusters \ Partition Boot Sector Code
FAT Locations \ The FAT in Detail
The Root Directory \ The Root and Other Directories in Detail
IO.SYS with MS-DOS
NTLDR with Win NT (W2K, XP)

NTFS vs FAT
The NTFS Filesystem Comprehensive Overview
Fat System Guide
NTFS Basics
Converting FAT32 to NTFS (you should really read this)
Default Cluster Sizes (chart)

Windows 2000 and the Boot @ Windows & .NET Magazine
Inside the Boot Process Part 1 (NTFS) & Part 2 @ Windows & .NET Magazine

Partition Strategies @ Radified.com
Fdisk Guide @ Radified.com

Bootdisks
Ultimate Boot CD A MUST HAVE
Bootdisk.com
ETPlanet
TCP/IP bootdisks +
Ultimatebootcd
Bootable CD image with:
Hard Disk Diagnosis: Drive Fitness Test (IBM/Hitachi) 3.50, PowerMax (Maxtor/Quantum) 4.06, Data Lifeguard (Western Digital) 10.0, SeaTools Desktop (Seagate) 1.06.02, Diagnostic Tool (Fujitsu) 6.10, SHDIAG (Samsung) 1.25
Hard Disk Management: IBM/Hitachi Feature Tool 1.90, Ranish Partition Manager 2.43, AutoClave (HDD Wiper) 0.3, Partition Resizer 1.3.4, SavePart (Partition Saver) 2.70, XFDISK (Extended FDISK) 0.9.3beta, g4u (HDD Cloning) 1.12, HDClone (Free Edition) 1.0, TestDisk 4.4
Memory Diagnosis: Memtest86 3.0
Linux-based Rescue Disks: Offline NT Password & Registry Editor 030426, Tom’s Boot Disk 2.0.103, Recovery Is Possible (RIP) 2.0 (RIP Linux Rescue Disk), AIDA16 (System Information) 2.08
Antivirus: F-Prot Antivirus for DOS (personal use only), virus definitions 22 Aug 2003; macro virus definitions have been disabled so that everything can fit onto a 2.88MB virtual floppy boot image
Also includes the read-only freeware version of NTFSDOS 3.14a and Active NTFS Reader for DOS
Thanx to styckx who originally posted this

Boot Managers
Gujin
Understanding MultiBooting and Booting Windows from an Extended Partition
XOSL opensource freeware
Smart Boot Manager opensource freeware
System Commander $
OS-BS FreeBSD boot manager opensource freeware
Ranish Partition Manager freeware
GNU GRUB opensource freeware
LILO Linux Bootmanager opensource freeware
Solaris boot manager
Masterbooter shareware

Linux-NTFS Project

Microsoft Disk Reference
How Windows 2000 Assigns, Reserves, and Stores Drive Letters
Diskpart Utility
HOW TO: Change Drive Letter Assignments in Windows XP

Additional Reference
Serial ATA (SATA)
ATA EIDE
ATAPI-ATA- EIDE History
EIDE vs SCSI
SCSI FAQ
SCSI FAQ
Fibre Channel
List of Partition IDs
Error codes for Ghost

Performance issues and Tradeoffs in Configuring for multiple devices
Independent Master Slave Timing

Dynamic Disks
Description of Disk Groups in Windows Disk Management
Dynamic vs. Basic Storage in Windows 2000
Basic and Dynamic Disks @ Windows & .net Magazine
HOW TO: Recover an Accidentally Deleted NTFS or FAT32 Dynamic Volume
Dynamic Disk Hardware Limitations (No firewire, USB, removable or laptop)
HOW TO: Set Up Fault-Tolerant Sets on Dynamic Disks in Windows 2000
Dynamic Disk Numbering and the DmDiag.exe Tool
HOW TO: Regenerate a Dynamic Mirrored Volume in Windows 2000
Restrictions on Extending or Spanning Simple Volumes on Dynamic Disks

Limits of Dynamic Disks in Windows 2000
LDMDump (Freeware utility) @ sysinternals
LDM Database @ Linux-NTFS project
LDM FAQ @ Linux-NTFS project

Recovery Reference
Recovering NTFS Boot Sector on NTFS Partitions

NTFS Advanced Studies
NTFS Volume Management and HKLM\System\DiskKey
NTFS Boot INI Options Reference
NTFS Defragmenting
Inside W2K NTFS Part 1
Inside W2K NTFS Part 2
Exploring NTFS On-Disk Structures
Inside Storage Management, Part 1
Inside storage Management Part 2 Basic vs Dynamic Disks
Inside Encrypting File System Part 1
Inside Encrypting File System Part 2
Inside Memory Management Paging Files

Additional NT Articles at Windows & .NET Magazine by Mark Russinovich, including: Crash Dump Analysis, Inside Win32 Services, Windows 2000 Kernel, Scalability Enhancements, Management Interface, Reliability Enhancements, and the Registry.
Additional NTFS Articles and Utilities at Sysinternals

Other Filesystems
The Linux Filesystem Explained
Linux Filesystems Comparison
Ext2FS
Ext3
ReiserFS
JFS for Linux
The Unix UFS Filesystem
Space efficiency: SFS, FFS, AFS, FAT16, FAT32

RAID
Definitive Guide to RAID @ Storagereview
RAID I: The Lesser Levels @ Storage Review (0, 1 mirroring, 1 duplexing, 1+0)
RAID an In-Depth Guide @ SLCentral.com
The Skinny on RAID @ arstechnica
RAID: Your Guide @ PCMechanic
RAID Explained @ AnandTech (part of IDE RAID Comparison dated)

SATA
Serial ATA in the Microsoft Operating System Environment

including:
The Significance of Serial ATA
The Different Modes of Serial ATA Controllers
Serial ATA 1.0 Features and Details
Emulating Parallel ATA Mode
Native Serial ATA Mode
SATA II Features and Details
Serial ATA Hardware Register Interface
Naming Conventions for Serial ATA Products
Support for Serial ATA in Windows
Ataport
Serial ATA Emulating Parallel ATA Mode Controller Support in the Windows Family of Operating Systems
Native Serial ATA Mode Controller Support in Future Versions of Windows
Emulating Parallel ATA Mode Controller Program
Identifying Emulating Parallel ATA Mode and Native Serial ATA Mode Controllers
Multiple Controllers in a System
Booting from the Different Modes of Serial ATA
Serial ATA as an External Connection
Hot Plugging
Hard Disk Drive Capacity Limitations on Serial ATA
CD-ROM Opportunities

and

BIOS Settings for Native-Mode-Capable ATA Controllers


#8

Defragmentation
O&O Defrag Pro
PageDefrag

Freeware Utilities

BootPart+

PartitionInfo and Partition Table and Boot Record Editor

Savepart

Partition Resizer

Zpart

Partition Image for Linux Partition Image is a Linux/UNIX utility which saves partitions in many formats (see below) to an image file. The image file can be compressed in the GZIP/BZIP2 formats to save disk space, and split into multiple files to be copied on removable floppies (ZIP for example), … Partitions can be saved across the network since version 0.6.0.

GNU Parted a program for creating, destroying, resizing,
checking and copying partitions, and the file systems on them.
Can be run from a GNU\Linux boot image; supports:
ext2, ext3, fat16, fat32, linux-swap, HFS, JFS, NTFS, ReiserFS, UFS, XFS

FIPS is a program for non-destructive splitting of harddisk partitions.

TestDisk a tool to check and undelete partitions

Disk Utilities

THE LIST: more freeware disk utilities than you can shake a stick at

Diskmon Freeware
a Windows NT device driver/GUI combination that together log and display all process activity on a Windows NT/2000 system. You can also minimize Diskmon to your system tray where it acts as a disk light, presenting a green icon when there is disk-read activity and a red icon when there is disk-write activity.

Filemon Freeware
Monitors and displays file system activity on a system in real-time

NTFSInfo Freeware
Information about NTFS volumes. Dump includes the size of a drive’s allocation units, where key NTFS files are located, and the sizes of the NTFS metadata files on the volume.

Disk Investigator Freeware
View and search raw directories, files, clusters and system sectors.

ActiveSmart Trialware
S.M.A.R.T. diagnostic and failure prediction software for hard drives

AIDA32 Freeware
S.M.A.R.T. Monitoring, Drive, ASPI, ATA Info, Plus lots lots more

DBAN (Darik's Boot and Nuke)
a self-contained boot floppy that securely wipes the hard disks of most computers
submitted by roncomatic

File \ Directory Utilities
i.disk freeware
SequoiaView freeware

Flash Utilities

MtkWinFlash (freeware)
Windows utility will flash most Mediatek-chip based ATAPI drives
supports all single-file firmware in BIN and HEX format
(see link for supported models includes most LiteOn and many more)
-Contributed by TechHead


Backup
GHOST
Guide to backing up with Ghost @ Radified.com
Drive Image
XXCopy
Storebackup (for Linux, GNU freeware)
HDClone Free Edition (rudimentary Clone Utility)
g4u both local and via ftp clone utility (Freeware BSD license) all filesystems supported

ASPI
ASPI Layers/Drivers @ Radified.com
ACPI HAL
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q246236

Benchmarking
IOMeter Freeware (Open Source)
IOMeter User Guide
Introducing IOMeter @ StorageReview (Testbed II)
Using IOMeter @ 2CPU.com (contributed here by big daddy fatsacks)
Note from the Storage Review FAQ
"Can IOMeter measure single-user performance?
In a nutshell, no it can not.

With all due respect to both Intel and to the SourceForge.net team (cache) that picked up the project after Intel discontinued it, IOMeter measures random access performance across varying loads and simulated nodes (network performance). In other words, while it remains quite suited to assessing multi-user performance, it has little capability to tackle single-user scenarios.

Why not? As explained on a page in our comprehensive methodology outline (cache), IOMeter does not have the capability to accurately simulate the localized data access that dominates single-user drive use. That is, it can not simulate the tendency for a drive to spend a large amount of time seeking across a very small area relative to broader-stroke movements.

StorageReview? itself is primarily to blame for IOMeter’s popularity across hardware sites to simulate “Workstation” usage. One of the goals of the Testbed3 project was to recant this erroneous deployment while introducing far more accurate tools to assess single-user (desktop/workstation) performance. It has been nearly two years… we hope that other hardware evaluation sites will eventually give careful consideration to the theories behind IOMeter’s inability to simulate locality and as a result consider removing the “Workstation” pattern from their performance suites."

IOZone Freeware
Important Filesystem Benchmark

WinBench 99 (freeware)

ATTO
HD Tach

Diskspeed32 Freeware

George’s HDSpeed Disk Performance Test Freeware
Rudimentary Interface speed and Read Sequential Sustained Transfer Rate

Disk Bench
Bart’s SCSItool
IPEAK SPT - Intel Performance Evaluation & Analysis Kit Storage Performance Toolbox
(StorageReview’s new Testbed 3) yours for a measly $995 :eek:
which is actually a suite of utilities
WinTrace32, AnalyzeTrace, AnalyzeLocality, RankDisk and AnalyzeDisk
IPEAK SPT AnalyzeDisk Reference Guide @ StorageReview.com
a good read if nothing else :wink:

The Older StorageReview Testbeds
Testbed I
Testbed II
Optical Testbed

Legacy Benchmark Database

RAID.edu Benchmarks
many Unix NT and DOS Benchmarks
(some unavailable anywhere else)
UNIX
Bonnie++
Bonnie v.2.0.6
IOZone
IOStone v. C/II
Disktest
IOBench
IOCall
RawIO
PostMark
IOGEN

DOS
SCSITool
RAIDmark
Qbench
COREtest

NT\W2K|XP
Nbench
NTiogen
Threadmark
(others are already linked above)

Simulated Clean Room originally posted by DeepFreeze
Simulated CLEAN room. Buy or get a CLEAR Rubbermaid container, plastic. WASH the container thoroughly. Rinse, let air dry.

Then buy long-sleeved rubber gloves. Duct tape the sleeves to the Rubbermaid container with room to move.
NOW ATTACH duct tape rolled over itself, like full-circle sticky, like when you roll toilet paper to wipe. (0)
Attach it to 2 SIDES, preferably adjacent. This will help CATCH particles that will move later on in the process.

Tape the lid and box down on a SOLID work table, preferably a wooden table to decrease the chance of static problems. Now drill a hole about 1" in diameter and VACUUM out the air and anything in the box. SEAL up the box. Place duct tape wads around the plugged hole to decrease the chance of air particles with lint or dust getting into the box. CLEAN ROOM


#9

And a few informational posts

www.plasma-online.de (Navigate) > Upgrades weak hardware > hardware fixes > Use of large harddisks in PC’s with BIOS size limit

Use of large harddisks in PC’s with BIOS size limit

There are several reasons why you might not be able to use the full capacity of your harddisk right from the beginning.
Case 1
Your mainboard BIOS lacks support for large harddisks (540MB / 2GB / 8GB / 16GB / 32GB limit). This is when your harddisk is reported as unsupported [AutoDetect reports None] or with a wrong size in the BIOS setup, or even when your computer stops booting altogether.
Case 2
Your harddisk features an UltraDMA mode not known to your BIOS/chipset, or vice versa, so no communication can be established.
Case 3
Your operating system does not support partitions larger than 2GB.

Workarounds exist for all of these limitations.

Case 1
Solution 1
Your mainboard manufacturer may provide a new BIOS binary for download to update your current version with a Flash utility. To identify your mainboard, at least the manufacturer and product name, or a BIOS ID, is needed.
Info on how to find the BIOS ID and manufacturer

Solution 2
If you use an operating system like Windows 95, 98, ME, 2000, Linux, MacOS… you can manually enter harddisk data in the BIOS to limit your HD to 2GB. This will prevent your BIOS from autodetecting your harddisk and will limit BIOS access to the first 2GB. During boot-up your BIOS will report this “wrong” size, but that is only important up to the point when the BIOS hands over control to your operating system (e.g. Windows 95 IO.SYS / COMMAND.COM). From then on, your operating system uses its own method to access the harddisk and is independent of your BIOS limitations. Creating partitions is more difficult when using this solution, as FDISK relies on proper initialization of your harddisk during boot-up.
You can download a partition tool to create partitions of any size and file system you like here:
http://www.plasma-online.de/index.html?content=http%3A//www.plasma-online.de/english/upgrade/tweak/fixes/fix_harddisk_bios.html
FAQ for solving problems or questions:
http://www.users.intercom.com/~ranish/part/faq.htm

Solution 3
Some harddisks provide a jumper to limit the harddisk size visible to your BIOS to 2GB (sometimes 8GB or 32GB). That is easier than manually entering data into the BIOS settings as in the solution above.

Solution 4
Most harddisk manufacturers provide a software tool that does both parts (using fake settings for the BIOS and translating the harddisk size correctly to its full capacity). Check your harddisk type to find the right download location. This solution works with legacy operating systems, too.
Fujitsu DiskManager
IBM DiskManager 2000 (v3.10.14) (Hitachi having taken over IBM’s HDD division, support is now here)
Seagate DiscWizard 2000
Full version for all harddisks by OnTrack ($59.95)

Case 2
Change the UDMA mode for your harddisk. If your chipset is based on, for example, the VIA 82C586B southbridge, all modes above UDMA33 are useless, because this chipset can only handle up to UDMA33; so limiting your harddisk’s firmware to UDMA66 or UDMA33 does NOT decrease performance. The problem can be that your harddisk never gets initialized because it features a UDMA mode unknown to the BIOS / chipset.
Fujitsu UDMA changer

Case 3
Microsoft DOS-based operating systems like Windows 3.1 and Windows 95 implement a file system called FAT16. This file system can only handle partitions up to 2GB:
65,525 clusters (max) * 32,768 bytes (max) = 2,147,123,200 bytes
So with a 20GB disk you would end up with ten partitions of 2GB each, named C:, D:, E: and so on. Newer releases of Microsoft Windows, starting with Windows 95 OSR2 and continuing with Windows 98 and later, use a new file system called FAT32, which allows partitions larger than 2GB. Windows 95 and Windows 95 SP1 don’t feature FAT32 (see Properties of My Computer for the version of Windows 95 -> 950 or 950A have no FAT32). All in all, you can use a 20GB harddisk in your PC, and if ten 2GB partitions are OK for you, no further activities are necessary. If you want to use one partition with more than 2GB, upgrade to an operating system with support for a different file system like FAT32 (Windows 95 OSR2, Windows 98/ME, Windows 2000), NTFS (Windows NT, Windows 2000), HFS (MacOS), ext2 (Linux) etc.
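
As a quick sanity check, the FAT16 arithmetic above can be reproduced straight from a W2K/XP command prompt; a minimal sketch (the FAT16MAX variable name is just for illustration):

rem FAT16 ceiling: 65,525 clusters x 32,768 bytes per cluster
rem (the result still fits in CMD's signed 32-bit arithmetic)
set /a FAT16MAX=65525*32768
echo Largest FAT16 partition: %FAT16MAX% bytes, i.e. just under 2GB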


And finally, you can employ an IDE controller card on the PCI bus. I just added a Promise Technology TX2 to my brother’s system (it came with his new 160GB HDD, which is too large to be recognized without it, XP\W2K without 48-bit LBA support being limited to ~137GB).
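
As an alternative to adding a controller card, Windows 2000 SP3+ and XP can address drives past the 28-bit LBA barrier (2^28 sectors * 512 bytes = ~137GB) once 48-bit LBA support is switched on, as described in Microsoft KB303013. A hedged sketch of the registry change, assuming the required service pack is already installed (reg.exe ships with XP; on W2K you can set the value in Regedit instead):

rem enable 48-bit LBA per MS KB303013 - back up the registry first, reboot afterwards
reg add "HKLM\SYSTEM\CurrentControlSet\Services\atapi\Parameters" /v EnableBigLba /t REG_DWORD /d 1 /f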


#10

Contributed by Chuckle01
CHKDSK Runs Uncontrollably Every Boot Up

Check the “Check Disk - Disk Checking Runs Upon Boot” entry in the Windows XP From A to Z section of kellys-korner-xp.com for tips:

Check Disk runs on every boot: Note - I have seen this happen when Windows File Protection has either been disabled or not been allowed to run after the boot-up Check Disk was cancelled.

Suggestions and Checkpoints:

Go to Start/Run/CMD and type in: fsutil dirty query c:
(Modify the drive letter accordingly)

If it comes back as dirty, it hasn’t cleared. For more information go to Start/Run/CMD and type in: CHKNTFS /?

Option: From a command prompt type chkntfs /d and then reboot; a chkdsk should run once, but not again on the next boot.
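
Put together, the whole check-and-reset sequence looks roughly like this from a command prompt (substitute your own drive letter for c:; fsutil ships with XP):

rem is the volume flagged dirty? dirty = autochk fires at the next boot
fsutil dirty query c:
rem is a check currently scheduled for the volume?
chkntfs c:
rem restore the default behaviour, clearing any stray every-boot scheduling
chkntfs /d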

This edit does not work for all users, circumstances depending:

Disable or Enable CheckDisk Upon Boot (Line 82)
http://www.kellys-korner-xp.com/xp_tweaks.htm

To use the Regedit: Save the REG File to your hard disk. Double click it and answer yes to the import prompt. REG files can be viewed in Notepad by right clicking on the file and selecting Edit.

Chkdsk Runs Each Time That You Start Computer
http://support.microsoft.com/support/kb/articles/q316/5/06.asp

Checkpoint:

Go to Start/Run/Regedit and navigate to this key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon

Highlight the Winlogon key.

In the list, look for “SfcScan”; this should be set to 0. If it is set to 1, the scan will happen at every boot.
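
If you prefer a command prompt to browsing through Regedit, the same value can be read with reg.exe; a small sketch, assuming the value name is exactly the one given above:

rem read the SfcScan value without opening Regedit
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v SfcScan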

Go to Start/Run/Regedit and navigate to this key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager

Look for the REG_MULTI_SZ value with the following name: BootExecute. This value contains commands that will be executed at startup. The default value is: autocheck autochk *

After scheduling one or more chkdsks, the entry will contain one or more autochk lines. Delete each of these lines and put the default one in place.

If you always want a check to be performed at startup, change the value to: autocheck autochk /f *

If you don’t want any checks to be performed, delete all autocheck entries.
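
For reference, restoring the default BootExecute entry can also be done from a command prompt; a sketch only, so back up the key before touching it (reg.exe separates multiple REG_MULTI_SZ strings with \0, but the default here is a single string):

rem reset BootExecute to its single default entry
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v BootExecute /t REG_MULTI_SZ /d "autocheck autochk *" /f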

Last checkpoint: Modify as needed:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\cleanuppath
Registry entry should read: %SystemRoot%\system32\cleanmgr.exe /D %c

How to schedule a CHKDSK on every boot
Chkdsk Runs Each Time That You Start Computer After Upgrade to Windows XP

Scandisk after a Bad Shutdown - Go to Start/Run/CMD and type in: CHKNTFS /T:4
Where 4 is the countdown time in seconds before the check starts.


#11

reserved 30


#12

what does all that mean in short

would take me like 3 hours to read that


#13

Wow, lots of info, very useful reference if you want a crash education in PCs :thumbsup:

/fox


#14

Ice great work! I really enjoyed reading it.

Maybe you should put it all in a pdf or postscript file, because as you may have noticed :slight_smile: , it is A LOT of text and this site is not that printer-friendly.


#15

Originally posted by circle frame
Ice great work! I really enjoyed reading it.

Maybe you should put it all in a pdf or postscript file, because as you may have noticed :slight_smile: , it is A LOT of text and this site is not that printer-friendly.

You know there is that nice “show printable version” button at the bottom of the page that makes the display printer-friendly.


#16

wow where have you been all of my life, could have used that info sooner :wink:


#17

I have a problem with a HD.
It’s listed as a ST312002 3AS SCSI Disk Device in my hardware list, and it’s my drive F, a 120GB HD. The problem is that after a few days of hard work the disk got duplicated… now I have my drive F and a new drive G with the same stuff inside…
Now my computer runs a system check every time it starts and I’m losing data from drive F every now and then. The drive appears as healthy in Computer Management, where by the way drive G does not exist…
Some programs, mostly games, do not run from that drive anymore because they say the drive is cycling or something like that!!! Did my HD go for a bike ride!?!

What can I do??? Please help…
I ran Norton’s defragmentation and I lost almost 60% of my data. Folders just gone! Am I doing something stupid or just plain ignorant? Is this normal? I don’t know!

let me stress: HELP!!!
:cry:


#18

I suggest

you run a full virus scan and a spyware/adware scan

Norton AntiVirus 2004 and Spybot Search & Destroy will do the trick, I think.

back up that data if you can, if you didn’t already, and format that drive if the virus check comes up empty


#19

I already ran the virus check and it came up empty. I guess there’s only one more thing to do… format F: :shrug:


#20

you have a PM