Wednesday, 27 January 2010

Computer Hardware History

History
Main article: History of hard disk drives

HDDs (introduced in 1956 as data storage for an IBM accounting computer)[7] were originally developed for use with general purpose computers. During the 1990s, the need for large-scale, reliable storage, independent of a particular device, led to the introduction of embedded systems such as RAIDs, network attached storage (NAS) systems, and storage area network (SAN) systems that provide efficient and reliable access to large volumes of data. In the 21st century, HDD usage expanded into consumer applications such as camcorders, cellphones (e.g. the Nokia N91), digital audio players, digital video players, digital video recorders, personal digital assistants and video game consoles.
Technology
Diagram of a computer hard disk drive

HDDs record data by magnetizing ferromagnetic material directionally, to represent either a 0 or a 1 binary digit. They read the data back by detecting the magnetization of the material. A typical HDD design consists of a spindle that holds one or more flat circular disks called platters, onto which the data is recorded. The platters are made from a non-magnetic material, usually aluminum alloy or glass, and are coated with a thin layer of magnetic material, typically 10–20 nm thick (for reference, standard copy paper is between 0.07 mm, or 70,000 nm, and 0.18 mm, or 180,000 nm, thick[8]), with an outer layer of carbon for protection. Older disks used iron(III) oxide as the magnetic material, but current disks use a cobalt-based alloy.[citation needed]
A cross section of the magnetic surface in action. In this case the binary data is encoded using frequency modulation.

The platters are spun at very high speeds. Information is written to a platter as it rotates past devices called read-and-write heads that fly very close (tens of nanometers in new drives) above the magnetic surface. The read-and-write head is used to detect and modify the magnetization of the material immediately under it. There is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of its platter. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor.

The magnetic surface of each platter is conceptually divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. Initially the regions were oriented horizontally, but beginning about 2005, the orientation was changed to perpendicular. Due to the polycrystalline nature of the magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic grains are typically 10 nm in size and each forms a single magnetic domain. Each magnetic region as a whole forms a magnetic dipole, which generates a highly localized magnetic field nearby. A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic induction. Later versions of inductive heads included metal-in-gap (MIG) heads and thin film heads. As data density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the head changed according to the strength of the magnetism from the platter. Later development made use of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in close proximity, on the head portion of an actuator arm. The read element is typically magneto-resistive while the write element is typically thin-film inductive.[9]

HDD heads are kept from contacting the platter surface by a cushion of air immediately above the platter, which moves at, or close to, the platter speed.[citation needed] The read-and-write head is mounted on a block called a slider, whose surface next to the platter is shaped to keep it just barely out of contact. This is a type of air bearing.

In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers, separated by a 3-atom-thick layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other.[10] Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005,[11] and as of 2007 the technology was used in many HDDs.[12][13][14]

The grain boundaries turn out to be very important in HDD design. The grains are very small and close to each other, so the coupling between adjacent grains is very strong. When one grain is magnetized, the adjacent grains tend to be aligned parallel to it or demagnetized; both the stability of the data and the signal-to-noise ratio then suffer. A clear grain boundary weakens the coupling between grains and subsequently increases the signal-to-noise ratio. In longitudinal recording, the single-domain grains have uniaxial anisotropy with easy axes lying in the film plane. The consequence of this arrangement is that adjacent magnets repel each other, so the magnetostatic energy is large and it is difficult to increase areal density. Perpendicular recording media, on the other hand, have the easy axis of the grains oriented perpendicular to the disk plane. Adjacent magnets attract each other and the magnetostatic energy is much lower, so much higher areal densities can be achieved. Another unique feature of perpendicular recording is that a soft magnetic underlayer is incorporated into the recording disk; this underlayer conducts the writing magnetic flux so that writing is more efficient. As a result, a higher-anisotropy medium film, such as L10-FePt or a rare-earth magnet material, can be used.
[edit] Error handling

Modern drives also make extensive use of error-correcting codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits for each block of data that are determined by mathematical formulas. The extra bits allow many errors to be fixed. While these extra bits take up space on the hard drive, they allow higher recording densities to be employed, resulting in much larger storage capacity for user data.[15] As of 2009, in the newest drives, low-density parity-check (LDPC) codes are supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus allow for the highest storage density available.[16]
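
Reed–Solomon and LDPC are too involved for a short example, but the underlying idea (extra bits computed from the data let errors be located and fixed) can be illustrated with the much simpler Hamming(7,4) code. The following Python sketch is purely illustrative and is not the ECC actually used on disk:

    # Hamming(7,4): 4 data bits plus 3 parity bits, able to correct any
    # single-bit error. Drives use far stronger codes (Reed-Solomon, LDPC),
    # but the principle is the same: redundancy locates and fixes errors.

    def hamming74_encode(d):
        """Encode 4 data bits into a 7-bit codeword."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Return the 4 data bits, correcting a single flipped bit if present."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3    # 0 = clean; else 1-based error position
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1           # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]

    codeword = hamming74_encode([1, 0, 1, 1])
    codeword[5] ^= 1                       # simulate a read error
    assert hamming74_decode(codeword) == [1, 0, 1, 1]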

Typical hard drives attempt to "remap" the data in a physical sector that is going bad to a spare physical sector—hopefully while the number of errors in that bad sector is still small enough that the ECC can completely recover the data without loss. The S.M.A.R.T. system counts the total number of errors in the entire hard drive fixed by ECC, and the total number of remappings, in an attempt to predict hard drive failure.
See also: file system
[edit] Architecture
A hard disk drive with the platters and motor hub removed showing the copper colored stator coils surrounding a bearing at the center of the spindle motor. The orange stripe along the side of the arm is a thin printed-circuit cable. The spindle bearing is in the center.

A typical hard drive has two electric motors, one to spin the disks and one to position the read/write head assembly. The disk motor has an external rotor attached to the platters; the stator windings are fixed in place. The actuator carries a read-write head at the tip of its very end (near center); a thin printed-circuit cable connects the read-write heads to the hub of the actuator. A flexible, somewhat 'U'-shaped, ribbon cable, seen edge-on below and to the left of the actuator arm in the first image and more clearly in the second, continues the connection from the head to the controller board on the opposite side.

The head support arm is very light, but also rigid; in modern drives, acceleration at the head reaches 550 Gs.
Opened hard drive with top magnet removed, showing copper head actuator coil (top right).

The silver-colored structure at the upper left of the first image is the top plate of the permanent-magnet and moving coil motor that swings the heads to the desired position (it is shown removed in the second image). The plate supports a thin neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives only have one magnet).

The voice coil itself is shaped rather like an arrowhead, and is made of doubly coated copper magnet wire. The inner coating is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore the surface of the magnet is half N pole, half S pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
Capacity and access speed
PC hard disk drive capacity (in GB) over time. The vertical axis is logarithmic, so the fit line corresponds to exponential growth.

Using rigid disks and sealing the unit allows much tighter tolerances than in a floppy disk drive. Consequently, hard disk drives can store much more data than floppy disk drives and can access and transmit them faster.

* As of April 2009[update], the highest capacity consumer HDDs are 2 TB.[17]
* A typical "desktop HDD" might store between 120 GB and 2 TB, although rarely above 500 GB (based on US market data[18]), rotate at 5,400 to 15,000 rpm, and have a media transfer rate of 0.5 Gbit/s or higher. (1 GB = 10^9 bytes; 1 Gbit/s = 10^9 bit/s)
* The fastest "enterprise" HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer speeds above 1.6 Gbit/s[19] and a sustained transfer rate up to 1 Gbit/s.[19] Drives running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power requirements (as smaller platters have less air drag) and therefore generally have lower capacity than the highest-capacity desktop drives.
* "Mobile HDDs", i.e., laptop HDDs, which are physically smaller than their desktop and enterprise counterparts, tend to be slower and have lower capacity. A typical mobile HDD spins at 5,200, 5,400 or 7,200 rpm, with 5,400 rpm being the most common. 7,200 rpm drives tend to be more expensive and have smaller capacities, while 5,200 rpm models usually offer very high storage capacities. Because of their physically smaller platters, mobile HDDs generally have lower capacity than their larger desktop counterparts.

The exponential increases in disk space and data access speeds of HDDs have enabled the commercial viability of consumer products that require large storage capacities, such as digital video recorders and digital audio players.[20] In addition, the availability of vast amounts of cheap storage has made viable a variety of web-based services with extraordinary capacity requirements, such as free-of-charge web search, web archiving and video sharing (Google, Internet Archive, YouTube, etc.).

The main way to decrease access time is to increase rotational speed, thus reducing rotational delay, while the main way to increase throughput and storage capacity is to increase areal density. Based on historic trends, analysts predict a future growth in HDD bit density (and therefore capacity) of about 40% per year.[21] Access times have not kept up with throughput increases, which themselves have not kept up with growth in storage capacity.
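
Average rotational latency is simply half a revolution, which is why spindle speed is the main lever on access time. A quick back-of-the-envelope calculation in Python:

    # Average rotational latency: half a revolution, in milliseconds.
    def avg_rotational_latency_ms(rpm):
        return 0.5 * 60_000 / rpm

    for rpm in (5400, 7200, 10_000, 15_000):
        print(f"{rpm:>6} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")
    # 5400 rpm: 5.56 ms ... 15000 rpm: 2.00 ms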

The first 3.5″ HDD marketed as able to store 1 TB was the Hitachi Deskstar 7K1000. It contains five platters at approximately 200 GB each, providing 1 TB (935.5 GiB) of usable space;[22] note the difference between its capacity in decimal units (1 TB = 10^12 bytes) and binary units (1 TiB = 1024 GiB = 2^40 bytes). Hitachi has since been joined by Samsung (Samsung SpinPoint F1, which has 3 × 334 GB platters), Seagate and Western Digital in the 1 TB drive market.[23][24]

In September 2009, Showa Denko announced capacity improvements in platters that they manufacture for HDD makers. A single 2.5" platter is able to hold 334 GB worth of data, and preliminary results for 3.5" indicate a 750 GB per platter capacity.[25]
Form factor              Width     Largest capacity     Platters (max)
5.25″ FH                 146 mm    47 GB[26] (1998)     14
5.25″ HH                 146 mm    19.3 GB[27] (1998)   4[28]
3.5″ SATA                102 mm    2 TB[29] (2009)      5
3.5″ PATA                102 mm    750 GB[30] (2006)    ?
2.5″ SATA                69.9 mm   1 TB[31] (2009)      3
2.5″ PATA                69.9 mm   320 GB[32] (2009)    ?
1.8″ SATA                54 mm     320 GB[33] (2009)    3
1.8″ PATA/LIF            54 mm     240 GB[34] (2008)    2
1.3″                     43 mm     40 GB[35] (2007)     1
1″ (CFII/ZIF/IDE-Flex)   42 mm     20 GB (2006)         1
0.85″                    24 mm     8 GB[36] (2004)      1
Capacity measurements
A disassembled and labeled 1997 hard drive. All major components were placed on a mirror, which created the symmetrical reflections.

Raw unformatted capacity of a hard disk drive is usually quoted with SI prefixes (metric system prefixes), incrementing by powers of 1000; today that usually means gigabytes (GB) and terabytes (TB). This convention is also used for data speeds and for memory sizes that are not inherently manufactured in powers of two, as RAM and flash memory are. Hard disks, by contrast, have no inherent binary size, as capacity is determined by the number of heads, tracks and sectors.

This can cause some confusion because some operating systems may report the formatted capacity of a hard drive using binary prefix units which increment by powers of 1024.

A one terabyte (1 TB) disk drive would be expected to hold around 1 trillion bytes (1,000,000,000,000), or 1000 GB, and indeed most 1 TB hard drives contain slightly more than this number. However, some operating system utilities report this as around 931 GB or 953,674 MB, whereas the correct units would be 931 GiB or 953,674 MiB. (The actual formatted capacity will be somewhat smaller still, depending on the file system.) The following are the correct ways of reporting one terabyte.
SI prefixes (hard drive)       Equivalent               Binary prefixes (OS)          Equivalent
1 TB (terabytes)               1 × 1000^4 B             0.9095 TiB (tebibytes)        0.9095 × 1024^4 B
1000 GB (gigabytes)            1000 × 1000^3 B          931.3 GiB (gibibytes)         931.3 × 1024^3 B
1,000,000 MB (megabytes)       1,000,000 × 1000^2 B     953,674.3 MiB (mebibytes)     953,674.3 × 1024^2 B
1,000,000,000 KB (kilobytes)   1,000,000,000 × 1000 B   976,562,500 KiB (kibibytes)   976,562,500 × 1024 B
1,000,000,000,000 B (bytes)    -                        1,000,000,000,000 B (bytes)   -
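
The table's arithmetic is easy to reproduce. A minimal Python sketch expressing one decimal terabyte (10^12 bytes) in binary-prefix units:

    TB = 10**12
    for name, power in (("TiB", 4), ("GiB", 3), ("MiB", 2), ("KiB", 1)):
        print(f"1 TB = {TB / 1024**power:,.4f} {name}")
    # 1 TB = 0.9095 TiB
    # 1 TB = 931.3226 GiB
    # 1 TB = 953,674.3164 MiB
    # 1 TB = 976,562,500.0000 KiB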

Microsoft Windows reports disk capacity both in a decimal integer to 12 or more digits and in binary prefix units to three significant digits.

The capacity of an HDD can be calculated by multiplying the number of cylinders by the number of heads by the number of sectors by the number of bytes per sector (most commonly 512). Drives with the ATA interface and a capacity of eight gigabytes or more behave as if they were structured into 16383 cylinders, 16 heads, and 63 sectors, for compatibility with older operating systems. Unlike in the 1980s, the cylinder, head, sector (C/H/S) counts reported to the CPU by a modern ATA drive are no longer actual physical parameters: the reported numbers are constrained by historic operating-system interfaces, and with zone bit recording the actual number of sectors per track varies by zone. Disks with a SCSI interface address each sector with a unique integer number; the operating system remains ignorant of their head or cylinder count.

The old C/H/S scheme has been replaced by logical block addressing (LBA). In some cases, to try to "force-fit" the C/H/S scheme to large-capacity drives, the number of heads was given as 64, although no modern drive has anywhere near 32 platters.
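
The C/H/S capacity formula and the conventional CHS-to-LBA mapping can be sketched in a few lines of Python. The 16383/16/63 geometry below is the fixed legacy value large ATA drives report, not a physical layout:

    def chs_capacity(cylinders, heads, sectors, bytes_per_sector=512):
        """Capacity in bytes from a C/H/S geometry."""
        return cylinders * heads * sectors * bytes_per_sector

    def chs_to_lba(c, h, s, heads=16, sectors_per_track=63):
        """Standard CHS-to-LBA mapping; sectors are numbered from 1."""
        return (c * heads + h) * sectors_per_track + (s - 1)

    print(chs_capacity(16383, 16, 63))   # 8455200768 bytes: the ~8.4 GB legacy ceiling
    print(chs_to_lba(0, 0, 1))           # 0, the first sector on the disk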
Formatted disk overhead

For a formatted drive, the operating system's file system overhead is another, although minor, reason why a hard drive's reported capacity may differ from its theoretical capacity. This overhead includes storage for, as examples, a file allocation table (FAT) or inodes, as well as other operating system data structures, and is usually less than 1% on drives larger than 100 MB. For RAID arrays, data integrity and fault-tolerance requirements also reduce the realized capacity: a RAID 1 array has about half the total raw capacity as a result of data mirroring, while a RAID 5 array with x drives loses 1/x of its space to parity. RAID arrays are multiple drives that appear to be one drive to the user, but provide some fault tolerance.
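
The RAID overhead rules above translate directly into code. A small Python sketch of usable capacity for RAID 1 and RAID 5:

    def raid_usable(drive_size_gb, n_drives, level):
        """Usable capacity in GB after redundancy overhead."""
        if level == 1:
            return drive_size_gb * n_drives / 2    # mirroring halves raw space
        if level == 5:
            return drive_size_gb * (n_drives - 1)  # one drive's worth lost to parity
        raise ValueError("only RAID 1 and RAID 5 handled here")

    print(raid_usable(1000, 2, 1))   # 1000.0 GB from two mirrored 1 TB drives
    print(raid_usable(1000, 4, 5))   # 3000 GB from four 1 TB drives in RAID 5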

A general rule of thumb to quickly convert the manufacturer's stated hard disk capacity to the standard Microsoft Windows formatted capacity is to multiply by 0.93 for HDDs smaller than a terabyte and by 0.91 for HDDs of one terabyte or larger.
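
In Python, that rule of thumb looks like this (a rough estimate only; the exact figure depends on the file system):

    def windows_reported_capacity(manufacturer_gb):
        """Estimate the capacity Windows will report, per the rule of thumb."""
        factor = 0.91 if manufacturer_gb >= 1000 else 0.93
        return manufacturer_gb * factor

    print(windows_reported_capacity(500))    # ~465 GB
    print(windows_reported_capacity(1500))   # ~1365 GB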
Form factors
5¼″ full height 110 MB HDD,
2½″ (8.5 mm) 6495 MB HDD,
US/UK pennies for comparison.
Six hard drives with 8″, 5.25″, 3.5″, 2.5″, 1.8″, and 1″ disks, partially disassembled to show platters and read-write heads, with a ruler showing inches.

Before the era of PCs and small computers, hard disks were of widely varying dimensions, typically in free standing cabinets the size of washing machines (e.g. DEC RP06 Disk Drive) or designed so that dimensions enabled placement in a 19" rack (e.g. Diablo Model 31).

With increasing sales of small computers having built-in floppy-disk drives (FDDs), HDDs that would fit the FDD mountings became desirable, and this led the market to evolve towards drives with standard form factors, initially derived from the sizes of 8", 5.25" and 3.5" floppy disk drives. Sizes smaller than 3.5" have since emerged as popular in the marketplace and/or been defined by various industry groups.

* 8 inch: 9.5 in × 4.624 in × 14.25 in (241.3 mm × 117.5 mm × 362 mm)
In 1979, Shugart Associates' SA1000 was the first form-factor-compatible HDD, having the same dimensions as, and a compatible interface to, the 8″ FDD.
* 5.25 inch: 5.75 in × 1.63 in × 8 in (146.1 mm × 41.4 mm × 203 mm)
This smaller form factor, first used in an HDD by Seagate in 1980, was the same size as a full-height 5¼-inch FDD, i.e., 3.25 inches high. This is twice as high as the "half height" commonly used today, i.e., 1.63 in (41.4 mm). Most desktop models of drives for optical 120 mm disks (DVD, CD) use the half-height 5¼″ dimension, but it fell out of fashion for HDDs. The Quantum Bigfoot HDD was the last to use it in the late 1990s, with "low-profile" (≈25 mm) and "ultra-low-profile" (≈20 mm) high versions.
* 3.5 inch: 4 in × 1 in × 5.75 in (101.6 mm × 25.4 mm × 146 mm) = 376.77344 cm³
This smaller form factor, first used in an HDD by Rodime in 1984, was the same size as the "half height" 3½″ FDD, i.e., 1.63 inches high. Today it has been largely superseded by 1-inch high "slimline" or "low-profile" versions of this form factor, which are used by most desktop HDDs.
* 2.5 inch: 2.75 in × 0.374–0.59 in × 3.945 in (69.85 mm × 7–15 mm × 100 mm) = 48.895–104.775 cm3
This smaller form factor was introduced by PrairieTek in 1988; there is no corresponding FDD. It is widely used today for hard disk drives in mobile devices (laptops, music players, etc.) and, as of 2008, was replacing 3.5 inch enterprise-class drives. It is also used in the Xbox 360 and PlayStation 3 video game consoles. Today, the dominant height of this form factor is 9.5 mm for laptop drives, but high-capacity drives (750 GB and 1 TB) have a height of 12.5 mm. Enterprise-class drives can have a height up to 15 mm.[37] In December 2009, Seagate released a wafer-thin 7 mm drive aimed at entry-level laptops and high-end netbooks.[38]
* 1.8 inch: 54 mm × 8 mm × 71 mm = 30.672 cm³
This form factor, originally introduced by Integral Peripherals in 1993, has evolved into the ATA-7 LIF with dimensions as stated. It is increasingly used in digital audio players and subnotebooks. An original variant exists for 2–5 GB sized HDDs that fit directly into a PC card expansion slot. These became popular for their use in iPods and other HDD based MP3 players.
* 1 inch: 42.8 mm × 5 mm × 36.4 mm
This form factor was introduced in 1999 as IBM's Microdrive to fit inside a CF Type II slot. Samsung calls the same form factor "1.3 inch" drive in its product literature.[39]
* 0.85 inch: 24 mm × 5 mm × 32 mm
Toshiba announced this form factor in January 2004[40] for use in mobile phones and similar applications, including SD/MMC slot compatible HDDs optimized for video storage on 4G handsets. Toshiba currently sells a 4 GB (MK4001MTD) and 8 GB (MK8003MTD) version[1] and holds the Guinness World Record for the smallest hard disk drive.[41]

3.5" and 2.5" hard disks currently dominate the market.

By 2009 all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash memory.[42][43]

The inch-based nicknames of these form factors usually do not indicate any actual product dimension (which is specified in millimetres for more recent form factors), but roughly indicate a size relative to the disk diameter, in the interest of historic continuity.
Other characteristics
Data transfer rate

As of 2008, a typical 7,200 rpm desktop hard drive has a sustained "disk-to-buffer" data transfer rate of about 70 megabytes per second.[44] This rate depends on the track location: it is highest for data on the outer tracks (where there are more data sectors) and lower toward the inner tracks (where there are fewer data sectors); it is generally somewhat higher for 10,000 rpm drives. A widely used current standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can send about 300 megabytes per second from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using special file generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files.
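
The measurement method described above (write a large file, read it back, and time both) can be sketched in Python. This is only an illustration: a serious benchmark must defeat the operating system's cache, e.g. with direct I/O, which this sketch does not do:

    import os, time

    PATH, SIZE_MB, CHUNK = "testfile.bin", 1024, 1024 * 1024
    block = os.urandom(CHUNK)

    t0 = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())             # force the data out of the page cache
    write_s = time.perf_counter() - t0
    print(f"write: {SIZE_MB / write_s:.1f} MB/s")

    t0 = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass
    read_s = time.perf_counter() - t0
    print(f"read:  {SIZE_MB / read_s:.1f} MB/s")   # likely inflated by caching
    os.remove(PATH)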
Seek time

Seek time currently ranges from just under 2 ms for high-end server drives, to 15 ms for miniature drives, with the most common desktop type typically being around 9 ms.[citation needed] There has not been any significant improvement in this speed for some years. Some early PC drives used a stepper motor to move the heads, and as a result had access times as slow as 80–120 ms, but this was quickly improved by voice coil type actuation in the late 1980s, reducing access times to around 20 ms.
Power consumption

Power consumption has become increasingly important, not just in mobile devices such as laptops but also in server and desktop markets. Increasing data center machine density has led to problems delivering sufficient power to devices (especially for spin up), and getting rid of the waste heat subsequently produced, as well as environmental and electrical cost concerns (see green computing). Similar issues exist for large companies with thousands of desktop PCs. Smaller form factor drives often use less power than larger drives. One interesting development in this area is actively controlling the seek speed so that the head arrives at its destination only just in time to read the sector, rather than arriving as quickly as possible and then having to wait for the sector to come around (i.e. the rotational latency). Many of the hard drive companies are now producing Green Drives that require much less power and cooling. Many of these 'Green Drives' spin slower (<5400 RPM compared to 7200 RPM, 10,000 RPM, and 15,000 RPM) and also generate less waste heat.

In server and workstation systems with multiple hard disk drives, there are also various ways of controlling when the drives spin up, since spin-up is the moment of highest power draw.

On SCSI hard disk drives, the SCSI controller can directly control spin up and spin down of the drives.

Some Parallel ATA (PATA) and SATA hard disk drives support power-up in standby (PUIS): the drive does not spin up until the controller or system BIOS issues a specific command to do so. This limits the power draw at power-on.

Newer SATA hard disk drives offer a staggered spin-up feature: the drive does not spin up until the SATA PHY comes ready (i.e., communication with the host controller starts).[citation needed]

To further reduce power draw, the hard disk drive can be spun down when idle.
Audible noise

Measured in dBA, audible noise is significant for certain applications, such as PVRs, digital audio recording and quiet computers. Low-noise disks typically use fluid bearings, slower rotational speeds (usually 5,400 rpm) and reduced seek speed under load (AAM) to reduce audible clicks and crunching sounds. Drives in smaller form factors (e.g. 2.5 inch) are often quieter than larger drives.
Shock resistance

Shock resistance is especially important for mobile devices. Some laptops now include active hard drive protection that parks the disk heads if the machine is dropped, hopefully before impact, to offer the greatest possible chance of survival in such an event. Maximum shock tolerance to date is 350 Gs for operating and 1000 Gs for non-operating.[45]
Access and interfaces

Hard disk drives are accessed over one of a number of bus types, including parallel ATA (P-ATA, also called IDE or EIDE), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Bridge circuitry is sometimes used to connect hard disk drives to buses that they cannot communicate with natively, such as IEEE 1394, USB and SCSI.

For the ST-506 interface, the data encoding scheme as written to the disk surface was also important. The first ST-506 disks used Modified Frequency Modulation (MFM) encoding, and transferred data at a rate of 5 megabits per second. Later controllers using 2,7 RLL (or just "RLL") encoding fit 50% more data into each rotation under the heads, increasing data storage and data transfer rate by 50%, to 7.5 megabits per second.

Many ST-506 interface disk drives were only specified by the manufacturer to run at the MFM data transfer rate, one-third lower than the RLL rate, while other drive models (usually more expensive versions of the same drive) were specified to run at the higher RLL rate. In some cases, a drive had sufficient margin to allow the MFM-specified model to run at the denser/faster RLL rate, though this was neither recommended nor guaranteed by manufacturers. Also, any RLL-certified drive could run on any MFM controller, but with one-third less data capacity and as much as one-third less data transfer rate than its RLL specifications.
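
The MFM/RLL arithmetic is worth making explicit: RLL(2,7) packs 50% more bits into the same space, so both capacity and transfer rate scale by 1.5. A two-line Python check:

    mfm_rate_mbit = 5.0
    rll_rate_mbit = mfm_rate_mbit * 1.5     # 7.5 Mbit/s: 50% denser encoding
    print(rll_rate_mbit)
    print(rll_rate_mbit * (1 - 1 / 3))      # 5.0 Mbit/s: an RLL drive on an MFM
                                            # controller gives the gain back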

Enhanced Small Disk Interface (ESDI) also supported multiple data rates (ESDI disks always used 2,7 RLL, but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by the disk drive and controller; most of the time, however, 15 or 20 megabit ESDI disk drives weren't downward compatible (i.e. a 15 or 20 megabit disk drive wouldn't run on a 10 megabit controller). ESDI disk drives typically also had jumpers to set the number of sectors per track and (in some cases) sector size.

Modern hard drives present a consistent interface to the rest of the computer, no matter what data encoding scheme is used internally. Typically a DSP in the electronics inside the hard drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction[46] to decode the sector boundaries and sector data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks.

SCSI originally had just one signaling frequency of 5 MHz for a maximum data rate of 5 megabytes/second over 8 parallel conductors, but later this was increased dramatically. The SCSI bus speed had no bearing on the disk's internal speed because of buffering between the SCSI bus and the disk drive's internal data bus; however, many early disk drives had very small buffers, and thus had to be reformatted to a different interleave (just like ST-506 disks) when used on slow computers, such as early Commodore Amiga, IBM PC compatibles and Apple Macintoshes.

ATA disks have typically had no problems with interleave or data rate, due to their controller design, but many early models were incompatible with each other and couldn't run with two devices on the same physical cable in a master/slave setup. This was mostly remedied by the mid-1990s, when ATA's specification was standardized and the details began to be cleaned up, but still causes problems occasionally (especially with CD-ROM and DVD-ROM disks, and when mixing Ultra DMA and non-UDMA devices).

Serial ATA does away with master/slave setups entirely, placing each disk on its own channel (with its own set of I/O ports) instead.

FireWire/IEEE 1394 and USB (1.0/2.0) HDDs are external units generally containing ATA or SCSI disks, with ports on the back allowing simple and effective expansion and mobility. Most FireWire/IEEE 1394 models can be daisy-chained, so peripherals can keep being added without requiring additional ports on the computer itself. USB, however, is a point-to-point connection and does not allow daisy-chaining; USB hubs are used instead to increase the number of available ports, though because the current supplied by a hub is typically lower than that of the computer's built-in USB ports, hub-attached ports are better suited to devices that do not draw their power over USB.
Disk interface families used in personal computers

Notable families of disk interfaces include:

* Historical bit serial interfaces — connect a hard disk drive (HDD) to a hard disk controller (HDC) with two cables, one for control and one for data. (Each drive also has an additional cable for power, usually connecting it directly to the power supply unit). The HDC provided significant functions such as serial/parallel conversion, data separation, and track formatting, and required matching to the drive (after formatting) in order to assure reliability. Each control cable could serve two or more drives, while a dedicated (and smaller) data cable served each drive.
o ST506 used MFM (Modified Frequency Modulation) for the data encoding method.
o ST412 was available in either MFM or RLL (Run Length Limited) encoding variants.
o Enhanced Small Disk Interface (ESDI) was an interface developed by Maxtor to allow faster communication between the processor and the disk than MFM or RLL.

* Modern bit serial interfaces — connect a hard disk drive to a host bus interface adapter (today typically integrated into the "south bridge") with one data/control cable. (As for historical bit serial interfaces above, each drive also has an additional power cable, usually direct to the power supply unit.)
o Fibre Channel (FC) is a serial protocol and the successor to the parallel SCSI interface in the enterprise market. In disk drives, the Fibre Channel Arbitrated Loop (FC-AL) connection topology is usually used. FC has much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs). Recently, other protocols for this field, like iSCSI and ATA over Ethernet, have been developed as well. Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fibre optics; the latter are traditionally reserved for larger devices, such as servers or disk array controllers.
o Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the device, and one pair for differential receiving from the device, just like EIA-422. That requires that data be transmitted serially. A similar differential signaling system is used in RS485, LocalTalk, USB, FireWire, and differential SCSI.
o Serial Attached SCSI (SAS). SAS is a new-generation serial communication protocol for devices designed to allow for much higher speed data transfers, and is compatible with SATA. SAS uses a mechanically identical data and power connector to standard 3.5" SATA1/SATA2 HDDs, and many server-oriented SAS RAID controllers are also capable of addressing SATA hard drives. SAS uses serial communication instead of the parallel method found in traditional SCSI devices but still uses SCSI commands.

* Word serial interfaces — connect a hard disk drive to a host bus adapter (today typically integrated into the "south bridge") with one cable for combined data/control. (As for all bit serial interfaces above, each drive also has an additional power cable, usually direct to the power supply unit.) The earliest versions of these interfaces typically used an 8-bit parallel data transfer to/from the drive; 16-bit versions became much more common, and 32-bit versions also exist. Modern variants have serial data transfer. The word nature of data transfer makes the design of a host bus adapter significantly simpler than that of the precursor HDD controller.
o Integrated Drive Electronics (IDE), later renamed to ATA, with the alias P-ATA ("parallel ATA") retroactively added upon introduction of the new variant Serial ATA. The original name reflected the innovative integration of the HDD controller with the HDD itself, which was not found in earlier disks. Moving the HDD controller from the interface card to the disk drive helped to standardize interfaces, and to reduce cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements for data transfer to and from the hard drive led to an "ultra DMA" mode, known as UDMA. Progressively faster versions of this standard ultimately added the requirement for an 80-conductor variant of the same cable, in which half of the conductors provide the grounding necessary for enhanced high-speed signal quality by reducing crosstalk. The connector for the 80-conductor cable has only 39 pins, the missing pin acting as a key to prevent incorrect insertion of the connector into an incompatible socket, a common cause of disk and controller damage.
o EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key improvement being the use of direct memory access (DMA) to transfer data between the disk and the computer without the involvement of the CPU, an improvement later adopted by the official ATA standards. By directly transferring data between memory and disk, DMA eliminates the need for the CPU to copy byte per byte, therefore allowing it to process other tasks while the data transfer occurs.
o Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System Interface, was an early competitor of ESDI. SCSI disks were standard on servers, workstations, Commodore Amiga and Apple Macintosh computers through the mid-1990s, by which time most models had transitioned to IDE (and later, SATA) family disks. Only in 2005 did the capacity of SCSI disks fall behind IDE disk technology, though the highest-performance disks are still available in SCSI and Fibre Channel only. The length limitations of the data cable allow for external SCSI devices. Originally SCSI data cables used single-ended (common mode) data transmission, but server-class SCSI could use differential transmission, either low voltage differential (LVD) or high voltage differential (HVD). ("Low" and "high" voltages for differential SCSI are relative to SCSI standards and do not match the meaning of low voltage and high voltage in general electrical engineering contexts, as applied e.g. in statutory electrical codes; both LVD and HVD use low-voltage signals (3.3 V and 5 V respectively) in general terminology.)

Acronym   Meaning                               Description
SASI      Shugart Associates System Interface   Historical predecessor to SCSI.
SCSI      Small Computer System Interface       Bus-oriented interface that handles concurrent operations.
SAS       Serial Attached SCSI                  Improvement of SCSI; uses serial communication instead of parallel.
ST-506    Seagate Technology                    Historical Seagate interface.
ST-412    Seagate Technology                    Historical Seagate interface (minor improvement over ST-506).
ESDI      Enhanced Small Disk Interface         Historical; backwards compatible with ST-412/506, but faster and more integrated.
ATA       Advanced Technology Attachment        Successor to ST-412/506/ESDI, integrating the disk controller completely onto the device. Incapable of concurrent operations.
SATA      Serial ATA                            Modification of ATA; uses serial communication instead of parallel.
Integrity
An IBM HDD head resting on a disk platter. Since the drive is not in operation, the head is simply pressed against the disk by the suspension.
Close-up of a hard disk head resting on a disk platter. A reflection of the head and its suspension is visible on the mirror-like disk.

Due to the extremely close spacing between the heads and the disk surface, any contamination of the read-write heads or platters can lead to a head crash — a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, wear and tear, corrosion, or poorly manufactured platters and heads.

The HDD's spindle system relies on air pressure inside the enclosure to support the heads at their proper flying height while the disk rotates. Hard disk drives require a certain range of air pressures in order to operate properly. The connection to the external environment and pressure occurs through a small hole in the enclosure (about 0.5 mm in diameter), usually with a filter on the inside (the breather filter).[47] If the air pressure is too low, then there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation, above about 3,000 m (10,000 feet).[48] Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on all disk drives — they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation (or "recirc") filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity for extended periods can corrode the heads and platters.

For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction with the disk surface, and can render the data unreadable for a short period until the head temperature stabilizes (so called "thermal asperity", a problem which can partially be dealt with by proper electronic filtering of the read signal).
Actuation of moving arm

The hard drive's electronics control the movement of the actuator and the rotation of the disk, and perform reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology), or segments interspersed with real data (in the case of embedded servo technology). The servo feedback optimizes the signal to noise ratio of the GMR sensors by adjusting the voice-coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media which have failed.
Landing zones and load/unload technology
A read/write head from a circa-1998 Fujitsu 3.5" hard disk. The area pictured is approximately 2.0 mm x 3.0mm.
Microphotograph of an older generation hard disk head and slider (1990s). The size of the front face (which is the "trailing face" of the slider) is about 0.3 mm × 1.0 mm. It is the location of the actual 'head' (magnetic sensors). The non-visible bottom face of the slider is about 1.0 mm × 1.25 mm (so-called "nano" size) and faces the platter. It contains the lithographically micro-machined air bearing surface (ABS) that allows the slider to fly in a highly controlled fashion. One functional part of the head is the round, orange structure visible in the middle - the lithographically defined copper coil of the write transducer. Also note the electric connections by wires bonded to gold-plated pads.

Modern HDDs prevent power interruptions or other malfunctions from landing their heads in the data zone, either by parking the heads in a landing zone or by unloading (i.e., load/unload) the heads. Some early PC HDDs did not park the heads automatically, and they would land on data. In some other early units the user parked the heads manually by running a program before shutting down.

A landing zone is an area of the platter usually near its inner diameter (ID), where no data is stored. This area is called the Contact Start/Stop (CSS) zone. Disks are designed such that either a spring or, more recently, rotational inertia in the platters is used to park the heads in the case of unexpected power loss. In this case, the spindle motor temporarily acts as a generator, providing power to the actuator.

Spring tension from the head mounting constantly pushes the heads towards the platter. While the disk is spinning, the heads are supported by an air bearing and experience no physical contact or wear. In CSS drives the sliders carrying the head sensors (often also just called heads) are designed to survive a number of landings and takeoffs from the media surface, though wear and tear on these microscopic components eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%. However, the decay rate is not linear: when a disk is younger and has had fewer start-stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage disk (as the head literally drags along the disk's surface until the air bearing is established). For example, the Seagate Barracuda 7200.10 series of desktop hard disks are rated to 50,000 start-stop cycles, in other words no failures attributed to the head-platter interface were seen before at least 50,000 start-stop cycles during testing.[49]

Around 1995 IBM pioneered a technology where a landing zone on the disk is made by a precision laser process (Laser Zone Texture = LZT) producing an array of smooth nanometer-scale "bumps" in a landing zone,[50] thus vastly improving stiction and wear performance. This technology is still largely in use today (2008), predominantly in desktop and enterprise (3.5 inch) drives. In general, CSS technology can be prone to increased stiction (the tendency for the heads to stick to the platter surface), e.g. as a consequence of increased humidity. Excessive stiction can cause physical damage to the platter and slider or spindle motor.

Load/unload technology relies on the heads being lifted off the platters into a safe location, thus eliminating the risks of wear and stiction altogether. The first HDD, the IBM RAMAC, and most early disk drives used complex mechanisms to load and unload the heads. Modern HDDs use ramp loading, first introduced by Memorex in 1967,[51] to load/unload onto plastic "ramps" near the outer disk edge.

All HDDs today still use one of these two technologies listed above. Each has a list of advantages and drawbacks in terms of loss of storage area on the disk, relative difficulty of mechanical tolerance control, non-operating shock robustness, cost of implementation, etc.

Addressing shock robustness, IBM also created a technology for their ThinkPad line of laptop computers called the Active Protection System. When a sudden, sharp movement is detected by the built-in accelerometer in the Thinkpad, internal hard disk heads automatically unload themselves to reduce the risk of any potential data loss or scratch defects. Apple later also utilized this technology in their PowerBook, iBook, MacBook Pro, and MacBook line, known as the Sudden Motion Sensor. Sony,[52] HP with their HP 3D DriveGuard[53] and Toshiba[54] have released similar technology in their notebook computers.

This accelerometer based shock sensor has also been used for building cheap earthquake sensor networks.[55]
Disk failures and their metrics
Wikibooks has a book on the topic of Minimizing hard disk drive failure and data loss.

Most major hard disk and motherboard vendors now support S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology), which measures drive characteristics such as operating temperature, spin-up time, data error rates, etc. Certain trends and sudden changes in these parameters are thought to be associated with increased likelihood of drive failure and data loss.

However, not all failures are predictable. Normal use can eventually lead to a breakdown of this inherently fragile device, which makes it essential for the user to periodically back up the data onto a separate storage device; failure to do so can lead to the loss of data. While it may sometimes be possible to recover lost information, it is normally an extremely costly procedure, and success cannot be guaranteed. A 2007 study published by Google suggested very little correlation between failure rates and either high temperature or activity level; however, the correlation between manufacturer/model and failure rate was relatively strong. Statistics on this matter are kept highly secret by most entities; Google did not publish the manufacturers' names along with their respective failure rates,[56] though it has since revealed that it uses Hitachi Deskstar drives in some of its servers.[57] While several S.M.A.R.T. parameters have an impact on failure probability, a large fraction of failed drives do not produce predictive S.M.A.R.T. parameters,[56] so S.M.A.R.T. parameters alone may not be useful for predicting individual drive failures.[56]

A common misconception is that a colder hard drive will last longer than a hotter one. The Google study suggests the reverse: "lower temperatures are associated with higher failure rates". Hard drives with S.M.A.R.T.-reported average temperatures below 27 °C (80.6 °F) had higher failure rates than hard drives with the highest reported average temperature of 50 °C (122 °F), and failure rates at least twice as high as drives in the optimum S.M.A.R.T.-reported temperature range of 36 °C (96.8 °F) to 47 °C (116.6 °F).[56]

SCSI, SAS and FC drives are typically more expensive and are traditionally used in servers and disk arrays, whereas inexpensive ATA and SATA drives evolved in the home computer market and were perceived to be less reliable. This distinction is now becoming blurred.

The mean time between failures (MTBF) of SATA drives is usually about 600,000 hours (some drives, such as the Western Digital Raptor, are rated at 1.2 million hours MTBF), while SCSI drives are rated for upwards of 1.5 million hours.[citation needed] However, independent research indicates that MTBF is not a reliable estimate of a drive's longevity.[58] MTBF testing is conducted in laboratory test chambers and is an important metric for determining the quality of a disk drive before it enters high-volume production. Once the drive product is in production, the more valid metric is the annualized failure rate (AFR),[citation needed] the percentage of real-world drive failures after shipping.
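
For a constant failure rate (the standard exponential model), an MTBF rating can be converted into an idealized annualized failure rate. The Python sketch below uses that textbook approximation and assumes 24/7 operation; as the surrounding text notes, real-world AFRs are often higher than such idealized figures:

    import math

    def afr_from_mtbf(mtbf_hours, hours_per_year=8760):
        """Idealized AFR from MTBF, assuming a constant failure rate."""
        return 1 - math.exp(-hours_per_year / mtbf_hours)

    print(f"{afr_from_mtbf(600_000):.2%}")     # ~1.45% for a typical SATA rating
    print(f"{afr_from_mtbf(1_500_000):.2%}")   # ~0.58% for a high-end SCSI rating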

SAS drives are comparable to SCSI drives, with high MTBF and high reliability.[citation needed]

Enterprise SATA drives, designed and produced for enterprise markets, have reliability comparable to other enterprise-class drives, unlike standard SATA drives.[59][60]

Typically, enterprise drives (all enterprise drives, including SCSI, SAS, enterprise SATA and FC) experience annual failure rates of between 0.70% and 0.78% of the total installed drives.[citation needed]

Eventually all mechanical hard disk drives fail, so the strategy to mitigate loss of data is to have redundancy in some form, such as RAID and backup. RAID should never be relied on as backup, since RAID controllers also break down, making the disks inaccessible. Following a backup strategy, for example daily differential and weekly full backups, is the only sure way to prevent data loss.
Manufacturers
A Western Digital 3.5 inch 250 GB SATA HDD. This specific model features both SATA and Molex power inputs.
Seagate's hard disk drives being manufactured in a factory in Wuxi, China

See also: List of defunct hard disk manufacturers

The technological resources and know-how required for modern drive development and production mean that as of 2009, virtually all of the world's HDDs are manufactured by just five large companies: Seagate, Western Digital, Hitachi (which owns the former disk manufacturing division of IBM), Samsung, and Toshiba.

Dozens of former HDD manufacturers have gone out of business, merged, or closed their HDD divisions; as capacities and demand for products increased, profits became hard to find, and the market underwent significant consolidation in the late 1980s and late 1990s. The first notable casualty of the business in the PC era was Computer Memories Inc. (CMI); after an incident with faulty 20 MB AT disks in 1985,[61] CMI's reputation never recovered, and it exited the HDD business in 1987. Another notable failure was MiniScribe, which went bankrupt in 1990 after it was found to have engaged in accounting fraud and inflated sales numbers for several years. Many other smaller companies (like Kalok, Microscience, LaPine, Areal, Priam and PrairieTek) also did not survive the shakeout, and had disappeared by 1993; Micropolis was able to hold on until 1997, and JTS, a relative latecomer to the scene, lasted only a few years and was gone by 1999, after attempting to manufacture HDDs in India. JTS's claim to fame was creating a new 3″ form factor drive for use in laptops. Quantum and Integral also invested in the 3″ form factor but eventually ceased support as it failed to catch on. Rodime was also an important manufacturer during the 1980s, but stopped making disks in the early 1990s amid the shakeout and now concentrates on technology licensing; it holds a number of patents related to 3.5-inch form factor HDDs.


* 1988: Tandon Corporation sold its disk manufacturing division to Western Digital (WDC), which was then a well-known controller designer.[62]
* 1989: Seagate Technology bought Control Data's high-end disk business, as part of CDC's exit from hardware manufacturing.
* 1990: Maxtor buys MiniScribe out of bankruptcy, making it the core of its low-end disk division.
* 1992: HP introduces the Kittyhawk microdrive, a 1.3" 20MB hard drive. Due to lack of demand and applications, Kittyhawk was discontinued by HP in September 1994.
* 1994: Quantum bought DEC's storage division, giving it a high-end disk range to go with its more consumer-oriented ProDrive range, as well as the DLT tape drive range.
* 1995: Conner Peripherals, which was founded by one of Seagate Technology's co-founders along with personnel from MiniScribe, announces a merger with Seagate, which was completed in early 1996.
* 1996: JTS merges with Atari, allowing JTS to bring its disk range into production. Atari was sold to Hasbro in 1998, while JTS itself went bankrupt in 1999.
* 1996: Largely due to Kittyhawk's failure, Hewlett Packard closed its Disk Memory Division and exited the disk drive business.
* 1996: Quantum begins having their drives manufactured by MKE.
* 2000: Quantum sells its disk division to Maxtor to concentrate on tape drives and backup equipment.
* 2003: Following the controversy over mass failures of its Deskstar 75GXP range, HDD pioneer IBM sold the majority of its disk division to Hitachi, who renamed it Hitachi Global Storage Technologies (HGST).
* 2003: Western Digital purchased Read-Rite Corp, which makes recording heads used on disk drive platters, for $95.4 million.
* December 21, 2005: Seagate and Maxtor announced an agreement under which Seagate would acquire Maxtor in an all stock transaction valued at $1.9 billion. The acquisition was approved by the appropriate regulatory bodies, and closed on May 19, 2006.
* July 2007: Western Digital (WDC) acquires Komag U.S.A, a thin-film media manufacturer, for USD 1 billion.[63]
* 2009: Toshiba acquires Fujitsu disk division[64]

Sales

In 2007, 516.2 million hard disks were sold.[65]
See also

* Automatic Acoustic Management
* Binary prefix (KiB, MiB, GiB, etc.)
* Click of death
* Data erasure
* Disk formatting
* Drive mapping
* du (Unix disk usage program)
* External hard disk drive
* File System
* HDD recorder
* History of hard disk drives
* Hybrid drive
* IBM 305 RAMAC
* kilobyte, megabyte, gigabyte definitions
* Multimedia
* Solid-state drive
* Spintronics
* Write precompensation

from: en.wikipedia.org

Keyboard (computing)

A computer keyboard
Wireless multimedia media-center keyboard with German layout and trackball

In computing, a keyboard is an input device, partially modeled after the typewriter keyboard, which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. However, producing some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or computer commands.

In normal usage, the keyboard is used to type text and numbers into a word processor, text editor or other program. In a modern computer, the interpretation of keypresses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all keypresses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or by using keyboards with special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination, which brings up a task window or shuts down the machine.
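
Because the interpretation of keypresses is left to software, a program can read raw key codes and decide for itself what they mean. A minimal sketch using Python's curses module (available on Unix-like terminals):

    import curses

    def main(stdscr):
        stdscr.addstr("Press keys (q to quit)\n")
        while True:
            code = stdscr.getch()      # one key event, as an integer code
            if code == ord("q"):
                break
            stdscr.addstr(f"key code: {code}\n")

    curses.wrapper(main)               # sets up and restores the terminal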

from : en.wikipedia.org

Mouse (computing)

Mouse (computing)

From Wikipedia, the free encyclopedia

A computer mouse with the most common standard features: two buttons and a scroll wheel, which can also act as a third button

In computing, a mouse (plural mouses, mice, or mouse devices) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of an object held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, and extra buttons or features can add more control or dimensional input. The mouse's motion typically translates into the motion of a cursor on a display, which allows for fine control of a graphical user interface.
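As a minimal sketch of that translation (the screen size and sensitivity values below are arbitrary assumptions), the system can be modeled as accumulating each relative (dx, dy) report into an absolute cursor position, clamped to the screen edges:

# Minimal sketch: a mouse reports relative motion (dx, dy); the system
# accumulates it into an absolute cursor position, clamped to the screen.
# Screen size and sensitivity are arbitrary values for illustration.

WIDTH, HEIGHT = 1920, 1080  # assumed display resolution
SENSITIVITY = 1.5           # counts-to-pixels scale factor

class Cursor:
    def __init__(self):
        self.x, self.y = WIDTH // 2, HEIGHT // 2  # start at screen center

    def move(self, dx, dy):
        # Scale the relative motion and clamp to the visible screen.
        self.x = min(max(int(self.x + dx * SENSITIVITY), 0), WIDTH - 1)
        self.y = min(max(int(self.y + dy * SENSITIVITY), 0), HEIGHT - 1)

cursor = Cursor()
for dx, dy in [(10, 0), (0, -20), (-5000, 0)]:  # last move clamps at the edge
    cursor.move(dx, dy)
    print(cursor.x, cursor.y)

Because the device only ever reports displacements, not positions, the cursor stays put when the mouse is lifted and replaced elsewhere on the surface.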

The name mouse, which originated at the Stanford Research Institute, derives from the resemblance of early models (which had a cord attached to the rear part of the device, suggesting the idea of a tail) to the common mouse.[1]

The first marketed integrated mouse – shipped as a part of a computer and intended for personal computer navigation – came with the Xerox 8010 Star Information System in 1981. However, the mouse remained relatively obscure until the appearance of the Apple Macintosh; in 1984 PC columnist John C. Dvorak ironically commented on the release of this new computer with a mouse: “There is no evidence that people want to use these things.”[2][3]

A mouse now comes with most computers, and many other varieties can be bought separately.


from : en.wikipedia.org

USB flash drive

USB flash drive
From the Indonesian Wikipedia, the free encyclopedia

Internal components of a common flash drive:
1 USB connector
2 USB mass storage controller
3 Test points
4 Flash memory chip
5 Crystal oscillator
6 LED
7 Write-protect switch
8 Space for a second flash memory chip

USB flash drives are data storage devices based on NAND flash memory with an integrated USB interface. Flash drives are typically small and lightweight, and can be read from and written to with ease. As of November 2006, USB flash drives were available in capacities ranging from 128 megabytes up to 64 gigabytes.

USB flash drives have many advantages over other data storage devices, particularly floppy disks and compact discs. They are faster and smaller, have larger capacity, and are more reliable (because they have no moving parts) than floppy disks.
[edit] USB Flash Drive in Windows

The Microsoft Windows operating system implements a USB flash drive as a USB Mass Storage Device, using the usbstor.sys device driver. Because Windows features auto-mounting, and a USB flash drive is a plug-and-play device, Windows mounts it automatically as soon as the device is plugged into a USB socket. Windows XP and later also have an AutoPlay feature, which scans the entire contents of the flash drive to determine how it should be presented.

Lately, many local computer viruses, such as Brontok/RontokBro and PendekarBlank, have used USB flash drives as a medium for transmission from one host to another, replacing the floppy disk. These viruses mostly run on Windows, and they spread faster when Windows accesses the drive using its AutoPlay feature. It is therefore a good idea to disable AutoPlay, although by itself this does little to prevent the spread of such viruses; a minimal sketch of how to do so follows.
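For illustration, the sketch below disables AutoRun/AutoPlay for all drive types for the current user by setting the standard NoDriveTypeAutoRun policy value in the Windows registry. Treat it as a minimal sketch rather than a complete defense: newer Windows versions and group-policy settings handle AutoPlay differently, and this script does not touch those.

# Minimal sketch: disable Windows AutoRun/AutoPlay for all drive types
# by setting the NoDriveTypeAutoRun policy for the current user.
# Run on Windows; takes effect after the user logs off and back on.
import winreg

POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
ALL_DRIVE_TYPES = 0xFF  # bit mask, one bit per drive type; all set = disable everywhere

def disable_autorun():
    # Create the policies key if it does not exist, then set the DWORD value.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD,
                          ALL_DRIVE_TYPES)

if __name__ == "__main__":
    disable_autorun()
    print("AutoRun disabled for all drive types (current user).")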

from : id.wikipedia.org

Academic Affairs Computer Hardware/Software

Academic Affairs Computer Hardware/Software

Capital Request Process


Each fall, the Financial Planning and Budgeting Office conducts the Capital Budget Request project. As part of that project, “Computer/Technical Enhancement” requests are considered. However, unlike other Capital Budget Request categories, “Computer/Technical Enhancement” requests require a collaborative process between Academic Affairs and IT. As a result, the process for these requests has created some confusion and, in some instances, duplication of requests. The following information is provided in the hopes of clarifying the process and the logic behind that process. In accordance with the procedures established by the Financial Planning and Budgeting Office, there are three basic categories for IT Systems and Equipment:

  • Desktop Computers and Shared Printers (under the Four-Year Replacement Plan). Please note: This is not a part of the department Capital Budget Request process and is, therefore, handled strictly through IT by means of the procedure explained below.
  • Major IT Systems and Equipment (Academic Affairs/IT Capital Budget request).
  • Minor IT Systems and Equipment (departmental Operating Budget request).

Desktop Computers and Shared Printers (Four-Year Replacement Plan)

The current “Four-Year Replacement Plan” for desktop computers and shared printers is not handled through the department Capital Budget Request Process; rather, the Plan is administered separately through IT’s Capital Budget. The Four-Year Replacement Plan covers inventoried computer equipment including all unit-for-unit equipment replacements as well as new equipment for qualified new hires and incoming tenure-track faculty. Each fall, the Computer Depot Manager will contact department chairs or program directors in areas identified for equipment replacement under the Four-Year Replacement Plan (see Spring IT newsletter for list). To begin the process, current equipment will be identified and inventoried and a needs assessment will be conducted. The needs assessment will include consideration of increases or reductions in staff, technology requirements of individual faculty and staff, platform migrations (Mac to PC or vice versa), and changes in configuration; e.g., desktop to laptop, etc. Unless otherwise approved, each unit identified for replacement will be replaced with a corresponding standard configuration unit. Incoming tenure-track faculty, prior to arrival, will be given a choice between a desktop unit and a laptop unit, but not both. Docking stations are not included as part of IT’s Four-Year Replacement Plan budget and, therefore, will require purchase and funding from department operating budget funds. Computer units may not be purchased with start-up funds unless prior special approval has been granted from the Dean of the Faculty’s Office.

Major IT Systems and Equipment

The Computer Depot Manager will work with each department chair or program director in identifying additional needs above and beyond unit-for-unit replacements including unique configurations that fall outside the standard computer configuration identified by IT and any off-cycle equipment replacement needs. A faculty member (with prior approval of his or her chair or director) will be required to contact the Computer Depot Manager to discuss additional or off-cycle needs and then provide that information to his or her chair or director for submission through the Department Capital Budget Request Process.

It is the responsibility of the Computer Depot Manager to synthesize all IT-related requests and provide feedback, including pricing and product information, to each chair or director. It then becomes the chair’s or director’s responsibility to submit “Computer/Technical Enhancement” requests (located in the drop-down list under “Categories”) through the Capital Budget Request Process. Each request must be submitted individually and must include the following: a pedagogical justification for the request, the pricing and product information that was provided by the Computer Depot Manager, and finally, the designation of a priority within the other requests of the department or program.

All Academic Affairs/IT-related capital budget requests should be submitted only by department chairs or program directors and will not be accepted from individuals through IT. Examples of such Academic Affairs/IT-related capital budget requests include:

  • A department or program has a technological need to replace a computer off-cycle at the two-year mark (versus the standard four-year cycle).
  • A department or program has a pedagogical need to upgrade from IT’s standard configuration to a more robust platform.
  • A department or program has a need to add a projector or computer to a specific classroom identified as not currently in line for a projection unit.
  • A department or program, on behalf of a faculty member, requests a lab computer not covered by his or her own start-up or grant funds.
  • A department or program, on behalf of a faculty member, wishes to have a computer replaced that was initially purchased outside the scope of the Four-Year Replacement Plan, for example, with start-up or grant funds. These computers are not identified or maintained in the Four-Year Replacement Plan. Please note: these requests are not automatic and will require prior approval from the Dean of the Faculty’s Office before the department or program submits such a request.
  • A department or program is required to upgrade certain software (in excess of $1000) due to technology changes imposed by IT; e.g., operating system upgrades such as Tiger (10.4) to Leopard (10.5).

After requests are submitted, the Dean of Faculty and/or the Vice President for Academic Affairs will further evaluate each request and work with IT in determining the weight of each request. Requests which are approved by the DOF/VPAA are then forwarded to the Financial Planning and Budgeting Office for budgetary consideration in conjunction with IT. Upon final approval of the Academic Affairs/IT-related capital budget requests, funds will then be allocated to the IT Capital Budget for approved requests. It is then the responsibility of the Computer Depot Manager to work with each department chair or program director with respect to making the purchase(s) for all approved requests.

Minor IT Systems and Equipment

Minor equipment requests (such as software, personal scanners, digital cameras, etc.) that fall under the $1000 mark should be submitted as part of the departmental Operating Budget through the annual Operating Budget Request Process. The Computer Depot Manager is available to speak with all chairs and directors, regardless of where their department fits into the Four-Year Replacement Plan, with respect to pricing and product information. Once approved, these funds are deposited into each department’s Operating Budget. The Computer Depot Manager is available at that time to facilitate purchases if necessary.

Computer Hardware/Software Capital Request Process—Administration

All of the above applies to administrative offices with the exception that the directors of each department should vet similar requests in each of their areas, with their respective division heads. Each capital request is submitted by the department and final consideration is handled by the Financial Planning and Budgeting Office in conjunction with IT.

from : cms.skidmore.edu

Tulsa Computer Repair Services

Tulsa Computer Repair Services

Tulsa Computer Repair provides onsite computer repair, upgrades, networking, and much more. In addition to our computer repair services, we offer custom built computer systems, custom built gaming computers, and computer hardware upgrades.

Tulsa Computer Repair is dedicated to focusing on excellent customer service by providing the most reliable, affordable and efficient computer products and services for the home and small business computer user. Our rates are set to fit any budget and are among the lowest in the area. Quality, reliability and peace of mind, however, are not compromised.

Our dedication to providing a solution to your computer issues is unsurpassed.

from : www.tulsacomputerrepair.com

Network Design, Cabling & Wiring, Cable Clean Up

Network Design, Cabling & Wiring,
Cable Clean Up

I&T Plans, Implements and Updates your Network

Whether planning a new computer network or updating your old one, certified I&T techs do it right the first time. Don't play guessing games with the backbone and nerve center of your business.

Network Design

Your business needs a well-designed network. To make sure you have one, experienced I&T technicians use the industry-standard OSI model as a guide to build your network. Fast, secure, and reliable.

Every network design we create plans for easy monitoring of hardware, cabling, patch panels and switches. Scalable to grow with your company? Absolutely. I&T builds or updates your network right the first time. Because you're not done growing.

Cabling & Wiring

It's the nitty gritty of your network. And it needs to be installed correctly. Faulty connections and poor planning can slow your network, cause it to fail, or lead to expensive replacements in the future.

Internet & Telephone technicians have years of experience installing voice and data networks for all types of businesses.

Before we begin, we go over your project plans to identify any potential problems. We make sure the plan allows for future expansion and we use only the highest-quality cable, patch panels, posts, and cabinets. I&T guarantees our work for three years after every install and certifies that throughput meets or exceeds industry standards. In short, it's going to work for you now and in the future.

Cable Clean Up

Does your server room look like a bowl of spaghetti? Are tangled cables making every minor upgrade a time-consuming pain? Give us a day and we'll make it look like new. Better than new. Neat, organized, and simplified so your IT crew can keep your voice and data networks separate and see what's what. We'll even draw up a network map to help diagnose problems and plan for future growth.

from : www.itllc.net

Wireless Network, Firewall & Security, Hosted Services

Wireless Network, Firewall & Security, Hosted Services

Create, Maintain and Protect your Network

Provide your team with convenient wireless access across your facility. But do it safely. Internet & Telephone can give you easy access everywhere and make sure that both your wireless and wired networks are safe, secure and efficient.

Wireless Network

Your office is alive. Clients in the lobby. Managers in conference rooms. And everyone wants fast, easy Internet access. A wireless network gives you freedom to connect anywhere near your building.

I&T designs and installs private, multi-access-point wireless networks that give laptop users Internet access from anywhere in your space. Don't worry about security. We create an airtight firewall and monitor your network around the clock to make sure the good guys get in but the bad guys stay out.

Convenience and safety. It's what Internet & Telephone gives you in a wireless network.

Firewall & Security

You've heard the stories. International hackers prod a company's network and access confidential financial info. Maybe client information. You can't have that. You won't have that. We won't let it happen.

Internet & Telephone takes care of your wireless network by monitoring and analyzing activity 24/7. To start, we thoroughly analyze your network to create unique, custom solutions using both hardware and software firewall technology. We don't just plug the holes. We eliminate them - and keep prying eyes off your data.

Hosted Services

If you're using Internet & Telephone for Internet service, why not use us for Web hosting also?

We provide the convenience of one point of contact and one bill for all your web-related services. That way you get cost-effective hosting plus POP3 email addressing. And our technology guarantees maximum uptime and fast loading speed. Make your life easier. Get on board with I&T.

from : www.itllc.net

Computer Hardware, Computer Software, Virtual Private Networks

Computer Hardware, Computer Software, Virtual Private Networks

Enterprise Software or New Computers, Internet & Telephone Helps You Make Good IT Decisions

Upgrading your IT infrastructure can be a frustrating experience. Internet & Telephone's friendly, experienced staff can help by guiding you through the process or simply doing the legwork for you. We know the best hardware, software, and VPN products. But we also understand it's important to give you realistic options that suit your individual budget and needs.

Computer Hardware

Internet & Telephone helps you choose the right server, data backup, desktops, laptops, netbooks or smartphones for your business. Our techs help you make good decisions to meet your needs not only today, but also as you grow in the future. I&T will even research the best pricing for you. Our products include IBM, Dell, Lenovo, Microsoft and more.

Computer Software

Choosing the right software and keeping it up to date is a critical issue for every company. Internet & Telephone can take this never-ending task off your hands for good. We help you maintain your software licensing compliance, make sure you're running the latest versions of applications, close security holes, reduce errors, and make applications transparent for each user.

Virtual Private Network (VPN)

With so many employees working on the road or from home, secure remote access is a high priority. Internet & Telephone's VPN service gives remote users secure access to corporate resources from any Web browser. Employees can access all of their desktop applications from anywhere in the world.

Internet & Telephone's VPN services are monitored through our Network Operations Center (NOC), providing a fully integrated and redundant end-to-end connection between locations.

VPN eliminates your company's geographical barriers, enabling your employees to work efficiently from home and allowing you to connect securely with your satellite offices, vendors and partners.

from : www.itllc.net

Introduction to the InfoTech Industry

Introduction to the InfoTech Industry

The technology breakthrough that enabled the modern computer occurred over 60 years ago when researchers at Bell Laboratories in New Jersey created the first working transistor on December 16, 1947. William Shockley, John Bardeen and Walter Brattain received a well-deserved Nobel Prize in Physics in 1956 for their groundbreaking work in transistors.

What started with one transistor has grown at an astonishing rate. The Semiconductor Industry Association estimates that in 2008, a total of 6 quintillion transistors were manufactured (that’s a six followed by 18 zeroes), an amount equal to 900 million transistors for every person on Earth. To see this growth in transistors in action, consider the steady evolution of Intel’s semiconductors. In 1978, its wildly popular 8086 processor contained 29,000 transistors. The first Pentium processor was introduced by Intel in 1993, with 3.1 million transistors. In 2007, each of Intel’s Xeon Quad-Core processors contained 820 million transistors. In 2009, the company will commercialize its new monster chip, code-named Tukwila, with 2 billion transistors!
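As a quick back-of-the-envelope check of that per-person figure (the world population of roughly 6.7 billion in 2008 is an assumption, not a number from the source):

# Back-of-the-envelope check of the per-person transistor figure.
# World population (~6.7 billion in 2008) is an assumed round number.
transistors_2008 = 6e18          # 6 quintillion, per the SIA estimate
world_population = 6.7e9         # assumed 2008 world population
per_person = transistors_2008 / world_population
print(f"{per_person:,.0f} transistors per person")  # ~896 million, i.e. about 900 million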

The worldwide market for information and communications technologies and services (ICT) was estimated at more than $3 trillion in 2006, growing to $3.7 trillion in 2008. That number will grow further to more than $4 trillion by 2011. (These numbers are according to data developed by Global Insight, Inc. as published by WITSA, the World Information Technology and Services Alliance, www.witsa.org.) Annual growth in global ICT was estimated at 10.3% for each of 2007 and 2008, slowing to 3.6% by 2011.

Are the boom years in IT spending over? It certainly looks that way for now, but watch for a rebound by 2012. Analysts at technology research firm IDC revised their estimates of global spending downward in a November 2008 press release. According to their analysts, global spending on IT will grow by only 2.6% in 2009, down from a previous estimate of 5.9%. Growth in the U.S. looks particularly dismal by their estimate, at 0.9% for 2009. IDC estimated worldwide spending on IT at $1.37 trillion for 2008. (Note: their figures do not include the communications segment, and are consequently much lower than those of WITSA.)

For 2008-2009, a global economic slowdown will dampen hardware and software sector growth. Nonetheless, sales through 2008 were relatively strong for such items as notebook computers and smaller “netbooks” (a sector in which prices have dropped dramatically), along with advanced, Internet-enabled cell phones with color screens, electronic game players, MP3 music players, digital cameras, servers and many other types of advanced consumer and business electronics. In 2009, consumer purchases of electronic items will be particularly soft, while business enterprises will be watching their budgets closely, investing only in those projects that will clearly create operating efficiencies.

Meanwhile, many major tech companies announced layoffs in late 2008 and early 2009. Companies in this sector will be doing the same thing that their customers will be doing: trying to cut operating costs and reduce risks.

Emerging markets are of extreme importance to the IT sector. Developing countries now account for more than one-half of all sales of PCs, and for about 70% of unit sales of cell phones. China has grown to be the number five market worldwide for IT expenditures. A recent study by the OECD, titled “Information Technology Outlook,” shows that developed nations’ share of global IT spending was only 76% in 2008, compared to 85% in 2003.

Worldwide sales of semiconductors decreased 2.8% to $248.6 billion in 2008 from $255.6 billion the previous year. Fourth quarter sales were extremely dull, and the outlook for 2009 is not promising.

Gartner estimated growth in the global PC market at 10.9% for 2008, with 302.2 million units shipped worldwide. As with the semiconductor sector, PC sales were dismal in the fourth quarter of the year.

The InfoTech industry is galloping into globalization at a very rapid rate. Research, development and manufacturing of components and completed systems have grown quickly in the labs and manufacturing plants of India, China, Taiwan, Korea, the Philippines and Indonesia, among other lands. Computer services continue to move offshore quickly, particularly to the tech centers of India. Asian PC brands are gaining strength, including Acer and Lenovo.

While the 1970s and 1980s will be remembered as the “Information Age,” and the 1990s will undoubtedly be singled out in history as the beginning of the “Internet Age,” the first decades of the 21st Century may become the “Broadband Age” or, even better said, the “Convergence Age.” A few years back, the advent of the networked computer was truly revolutionary in terms of information processing, data sharing and data storage. In the ‘90s, the Internet was even more revolutionary in terms of communications and furthering the progress of data sharing, from the personal level to the global enterprise level.

Today, broadband sources such as Fiber-to-the-premises, Wi-Fi and cable modems provide high-speed access to information and media, creating an “always-on” environment for many users. The result is a widespread convergence of entertainment, telephony and computerized information: data, voice and video, delivered to a rapidly evolving array of Internet appliances, PDAs, wireless devices (including cellular telephones) and desktop computers. This will fuel the next era of growth. Broadband access has been installed in enough U.S. households and businesses (more than 120 million in 2008) to create a true mass market, fueling demand for new Internet-delivered services, information and entertainment. Growth in broadband subscriptions worldwide is very strong.

The advent of the Convergence Age is leading to a steady evolution in the way we access and utilize software applications.

Major innovations due to the Convergence Age:

1) On the consumer side, widespread access to fast Internet lines has created a boom in user-generated content (such as Flickr, YouTube and Wikipedia); games; social networking (such as Facebook and MySpace); as well as TV, radio and movies delivered via the Internet.

2) On the business side, the Convergence Age is leading to rapid adoption of Software as a Service. That is, the delivery of sophisticated software applications by remote servers that are accessed via the Internet, as opposed to software that is installed locally by its users (such as Salesforce and Microsoft’s Windows Live).

3) On the technology side, the Convergence Age is leading to booming growth in computing power that is distributed over large numbers of small servers, now referred to as “Cloud Computing.”

4) Mobile computing is booming worldwide, taking advantage of the three trends listed above.

The promise of the Convergence Age—the delivery of an entire universe of information and entertainment to PCs and mobile devices, on-demand with the click of a mouse—is at hand. Consumers are swarming to new and enhanced products and services, such as the iPod and the iPhone. Over the next five to ten years, significant groundbreaking products will be introduced in areas such as high-density storage, artificial intelligence, optical switches and networking technologies, and advances will be made in quantum computing.

The InfoTech revolution continues in the office as well as in the home. The U.S. workforce totals more than 150 million people. Microsoft has estimated that there are 40 million “knowledge workers” in the U.S. A large majority of the workforce uses a computer of some type on the job daily, in every conceivable application—from receptionists answering computerized telephone systems to cashiers ringing up sales at Wal-Mart on registers that are tied into vast computerized databases. This is the InfoTech revolution at work, moving voice, video and data through the air and over phone lines, driving productivity ahead at rates that we do not yet know how to calculate. Our ability to utilize technology effectively is finally catching up to our ability to create the technologies themselves. We’re finding more and more uses for computers with increased processing speed, increased memory capacity, interfaces that are friendly and easy to use, and software created to speed up virtually every task known to man. Cheaper, faster chips and more powerful software will continue to enter the market at blinding speed.

InfoTech continually creates new efficiency-enhancing possibilities. Now, RFID (radio frequency ID tagging, a method of digitally identifying and tracking each individual item of merchandise) promises to revolutionize logistics and drive InfoTech industry revenues even higher.

The health care industry is undergoing a technology revolution of its own. Patient records are slowly going digital in standardized formats, and RFID is starting to make hospital inventories more manageable.

For businesses, the stark realities of global competition are fueling investments in InfoTech. Demands from customers for better service, lower prices, higher quality and more depth of inventory are mercilessly pushing companies to achieve efficient re-stocking, higher productivity and faster, more thorough management information. These demands will continue to intensify, partly because of globalization.

The solutions are rising from InfoTech channels: vast computer networks that speed information around the globe; e-mail, instant messaging, collaboration software and improved systems for real-time communication between branches, customers and headquarters; software with the power to call up answers to complex questions by delving deep into databases; satellites that are beginning to clutter the skies; and clear fiber-optic cables that carry tens of thousands of streams of data across minuscule beams of light. Businesses are paving the paths to their futures with dollars invested in InfoTech because: 1) substantial productivity gains are possible; 2) the relative cost of the technology itself has plummeted while its power has multiplied; and 3) competitive pressures leave them no choice.

from : www.plunkettresearch.com