3.1.1 Power Supply Units

The power supply unit (PSU) delivers low-voltage direct current (DC) power to PC components. It contains a rectifier to convert the alternating current (AC) building supply to DC, transformers to step the voltage down, and filters and regulators to ensure consistent output. The PSU also includes a fan to dissipate heat.

The PSU's size and shape determine its compatibility with the system case and motherboard, particularly regarding screw and fan locations and power connectors. Most desktop PC PSUs are based on the ATX form factor.

Ensure the PSU is compatible with the outlet's voltage before plugging it in. North American outlets typically provide 120 VAC (low-line), while UK outlets provide 230 VAC (high-line). Data centers often use high-line voltage for efficiency. Most PSUs are dual voltage and auto-switching, though some have a manual switch or are fixed to either low-line or high-line. Input operating voltages are marked on the PSU and its documentation.

Note: AC voltage supply varies by country and distribution circuits, so PSUs have a wide tolerance range: 100-127 VAC for low-line and 220-240 VAC for high-line.

Autoswitching PSU
A power supply unit with a metal casing, cooling fan on top, rear power switch, and cables with various connectors. Image © 123RF.com

______________________________________________________________

3.1.2 Wattage Rating

Power is the rate at which energy is generated or used, measured in watts (W) and calculated as voltage multiplied by current (P = V × I). A PSU must meet the combined power requirements of a PC's components, with its output capability measured in watts, known as its wattage rating. Standard desktop PSUs are typically rated at 400–500 W, while enterprise workstations and servers often have PSUs rated well over 500 W, sometimes exceeding 1000 W, especially in systems with multiple CPUs and GPUs. Gaming PCs may require 600 W or more due to high-spec CPUs and graphics cards.
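The arithmetic behind a wattage rating can be sketched in a few lines of Python. The component names and wattage figures below are hypothetical examples for illustration, not measured values:

```python
# Sketch of PSU sizing: sum each component's draw (P = V * I on any one rail),
# then add headroom. All wattage figures here are hypothetical examples.
components = {
    "CPU": 105,
    "Graphics card": 220,
    "Motherboard and chipset": 50,
    "RAM (2 modules)": 6,
    "SSD": 5,
    "HDD": 8,
    "Case fans": 10,
}

total_load = sum(components.values())    # estimated steady-state draw in watts
recommended = total_load * 1.3           # ~30% headroom is a common rule of thumb

print(f"Estimated load: {total_load} W")                    # Estimated load: 404 W
print(f"Suggested PSU rating: {recommended:.0f} W or higher")
```

An online calculator such as the one referenced below performs the same exercise against a much larger component database.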
It is crucial to correctly match the PSU wattage to the system's power requirements to prevent system instability or damage. An underpowered PSU can lead to several issues:

System Instability: Insufficient power can cause random shutdowns, reboots, or crashes, as the PSU struggles to supply adequate power to all components.

Component Damage: Consistently running a PSU at or beyond its capacity can lead to overheating, potentially damaging the PSU itself or other components.

Note: Component power requirements vary widely. For example, CPUs can range from 17 W to over 100 W. Online calculators, such as coolermaster.com/power-supply-calculator, can help determine power needs.

When specifying a PSU for a system with high power requirements, assess the power distribution across output voltages. Distribution refers to the power supplied over each rail, which is a wire providing current at a specific voltage. For modern computers, the +12 VDC rail is the most important, as it is the most heavily used.

Example Power Distribution:

Output Rail (VDC)    Maximum Load (A)    Maximum Output (W)
+3.3                 20                  130
+5                   20                  130
+12                  33                  396
-12                  0.8                 9.6
+5 (standby)         2.5                 12.5

Note: The +3.3 V and +5 V outputs have a combined limit.

Energy Efficiency

PSU efficiency is a critical factor in system performance and energy consumption. For example, a 300 W PSU operating at 75% efficiency draws 400 W from the outlet, with the excess 100 W lost as heat. This inefficiency not only increases energy costs but also contributes to additional heat generation, which can impact the cooling needs of the system. To address these concerns, PSUs are often rated according to the 80 PLUS certification program, which signifies their efficiency levels. Some common efficiency ratings include:

80 PLUS Bronze: At least 82% efficiency at 20% load, 85% at 50% load, and 82% at 100% load.
80 PLUS Silver: At least 85% efficiency at 20% load, 88% at 50% load, and 85% at 100% load.
80 PLUS Gold: At least 87% efficiency at 20% load, 90% at 50% load, and 87% at 100% load.
80 PLUS Platinum: At least 90% efficiency at 20% load, 92% at 50% load, and 89% at 100% load.
80 PLUS Titanium: At least 90% efficiency at 10% load, 92% at 20% load, 94% at 50% load, and 90% at 100% load.

These ratings indicate how efficiently a PSU converts AC power from the outlet into DC power for the computer's components. More efficient PSUs, such as those with Gold or higher ratings, reduce the amount of wasted energy, thereby generating less heat. This reduction in heat generation can decrease the cooling requirements of the system, leading to quieter operation and potentially extending the lifespan of components by maintaining lower operating temperatures.

At the base level, an 80 PLUS compliant PSU must be at least 80% efficient at 20%, 50%, and 100% load, ensuring a baseline of energy efficiency and reliability for consumers. By choosing a PSU with a higher efficiency rating, users can benefit from lower energy costs, reduced heat output, and improved overall system performance.

______________________________________________________________

3.1.3 Power Supply Connectors

Each PSU has multiple power connectors that supply DC voltage to the motherboard and devices at 3.3 VDC, 5 VDC, and 12 VDC. Voltage regulators adjust the supplied voltage to match the component's requirements. The motherboard's power port is called the P1 connector or the 24-pin ATX power connector. A PSU also includes Molex and SATA power connectors, as well as 4/6/8/16-pin connectors for CPU and PCIe adapter card power ports.

______________________________________________________________

3.1.4 20-pin to 24-pin Motherboard Adapter

The ATX PSU standard has undergone several revisions, specifying different connector form factors.
In the original ATX specification, the P1 connector is 20-pin (2x10), with black wires for ground, yellow for +12 V, red for +5 V, and orange for +3.3 V. Most systems use the 24-pin (2x12) P1 connector. Some PSUs include a 20+4-pin P1 adapter cable for compatibility with older 20-pin motherboards.

A 24-pin main motherboard power cable and port
A power cable and port with 24 pins. 24 colored wires extend downward for motherboard power supply. Image ©123RF.com

______________________________________________________________

3.1.5 Modular Power Supplies

A modular power supply has detachable power connector cables, allowing you to use only the necessary ones. This reduces clutter within the chassis, improving airflow and cooling. For example, a non-modular PSU might have four or five Molex or SATA connectors, but the PC might only need two. With a modular PSU, you can remove the unnecessary cables.

Modular power supply with pluggable cables
A modular power supply unit with black cables connected to labeled ports. A cooling fan is visible on the side of the metal casing. Image © 123RF.com

______________________________________________________________

3.1.6 Redundant Power Supplies

Redundant power supplies are crucial in maintaining system uptime and preventing data loss, especially in enterprise environments where continuous operation is vital. A computer system may be equipped with two PSUs, with one serving as a failover redundant power supply. This setup ensures that if one PSU fails, the other can immediately take over, minimizing downtime and protecting against data loss. This configuration is particularly critical in scenarios such as data centers or high-availability systems, where uninterrupted service is essential. In these environments, redundant power supplies help maintain system reliability and performance, even during power failures or PSU malfunctions.
In server setups, each PSU typically connects to a backplane, a circuit board that provides the electrical connections between different components. The backplane allows for hot-swappable PSUs, meaning faulty units can be replaced without opening the case or interrupting power to the system. This feature is invaluable in maintaining uptime and ensuring that critical services remain available.

Redundant power supplies are less common in desktop computers because desktops are generally not required to maintain the same level of uptime as servers. In server environments, however, the need for continuous operation and data integrity makes redundant PSUs a standard feature.

______________________________________________________________

3.1.7 Fan Cooling Systems

Computer components emit heat due to resistance as electrical current passes through them. Without cooling, this heat raises the temperature of each component and the overall case, potentially causing malfunctions or damage. This is especially critical for CPUs. Despite efforts by Intel and AMD to improve thermal efficiency, all CPUs need cooling to maintain safe operating temperatures.

Note: Other components, like memory modules and graphics adapters, also require cooling solutions.

______________________________________________________________

3.1.8 Heat Sinks and Thermal Paste

Two different types of heat sinks exist: passive and active. Memory modules use passive heat sinks, which are also called heat spreaders. They do not have a fan because they rely on increased surface area and passive air movement to cool them. Active heat sinks are used by components that generate more heat, such as CPUs, high-end video cards, and some motherboard chipsets with integrated graphics, and typically use a fan. An active heat sink is made of a copper or aluminum block with fins that increase the surface area available for heat exchange; its fan creates forced air convection over the fins to cool the component.
It is attached to the CPU chip using thermal paste or a thermal pad to eliminate air gaps and ensure efficient heat transfer. Thermal pads, which soften when heated, are easier to apply but may be less reliable than thermal paste.

CPU heat sink and fan assembly
CPU heat sink and fan assembly has a CPU heat sink, CPU fan, CPU, and CPU socket (on motherboard). Image © 123RF.com

CPU heat sinks can be clamped to the motherboard using various mechanisms, such as retaining clips or push pins. Push pins can be released and reset with a half-turn of a screwdriver.

______________________________________________________________

3.1.9 Fans

A heat sink is a passive cooling device that doesn't require electricity. For optimal performance, it needs good airflow, so minimize cable clutter and cover spare adapter slots with blanking plates.

Many PCs generate more heat than passive cooling can handle. Fans improve airflow and help dissipate heat. They are used in power supplies and chassis exhaust points, drawing cool air from front vents and expelling warm air from the back. Most heat sinks have fans to enhance cooling, which must be connected to a motherboard fan power port. Thermometer sensors at each fan location set appropriate speeds and detect fan failures. Some chassis designs use plastic shrouds or baffles to channel airflow over the CPU, attached with plastic clips.

Both fans and heat sinks become less effective if dust accumulates. Clean these components and air vents periodically with a soft brush, compressed air, or a PC-approved vacuum cleaner.

______________________________________________________________

3.1.10 Liquid Cooling Systems

High-end gaming PCs, high-performance workstations, and those used in high ambient temperatures may require advanced cooling solutions.

A liquid-cooled PC
A liquid cooling PC with tubes and multiple fans.
Image © 123RF.com

A liquid cooling system pumps coolant around the chassis, offering more effective cooling than air convection and often operating more quietly than multiple fans. An open-loop liquid cooling system includes:

Water loop/tubing and pump: Pushes coolant added via the reservoir around the system.

Water blocks and brackets: Attached to each device to remove heat by convection, similar to heat sink/fan assemblies, and connected to the water loop.

Radiators and fans: Positioned at air vents to expel excess heat.

Note: Simpler closed-loop systems (All-In-One coolers) are available for single components (CPU or GPU) only.

Maintenance for an open-loop system includes periodic draining, cleaning, and refilling. Fans and radiators must be kept dust-free, and the system should be drained before moving the PC to a different location.

______________________________________________________________

3.2 Storage Devices

______________________________________________________________

3.2.1 Mass Storage Devices

Non-volatile storage devices, also known as mass storage, retain data even when the system is powered off. These devices use magnetic, optical, or solid-state technology. Internal mass storage devices are called fixed disks and come in standard widths of 5.25 inches, 3.5 inches, and 2.5 inches. Computer chassis have drive bays to fit these form factors, with 5.25-inch bays often featuring removable panels for devices like DVD drives and smart card readers.

Fixed disks are typically installed in drive bays using caddies, which allow for secure mounting and can adapt different drive sizes to fit various bays. For example, a 2.5-inch drive can be installed in a 3.5-inch bay using an adapter caddy. Some caddies use rails for easy removal without opening the case.

Motherboard Storage Drive Bays
A computer tower interior shows Power Supply Bay and Motherboard at the back and 5.25-inch Drive Bays and 3.5-inch Drive Bays at the front.
Image © 123RF.com

Removable mass storage devices and media enable data archiving and transfer between PCs. External storage devices, such as external hard drives, are used for backup, data transfer, or providing additional drive types and typically connect via USB or Thunderbolt ports.

Several factors impact the choice of mass storage devices:

Reliability: This includes the risk of total device failure and partial data corruption, rated by various statistics for each technology type.

Performance: Evaluate based on the type of data transfer, considering read/write performance, sequential vs. random access, data throughput (MB/s or GB/s), and input/output operations per second (IOPS).

Use: Consider reliability and performance in the context of specific use cases, such as running an OS, hosting a database, streaming audio/video, removable media, or data backup and archiving.

Major mass storage drive vendors include Seagate, Western Digital, Hitachi, Fujitsu, Toshiba, and Samsung.

______________________________________________________________

3.2.2 Solid-State Drives

A solid-state drive (SSD) uses flash memory technology for persistent mass storage. SSDs offer significantly better performance than hard disk drives (HDDs), whose mechanical components limit them, especially in read operations. SSDs are less prone to failure from mechanical shock and wear, and their cost per gigabyte has decreased rapidly in recent years. However, SSDs can sometimes underperform compared to HDDs when handling sustained transfers of multi-gigabyte files.

A 2.5-inch form factor solid state drive with SATA interface
A 256 GB solid-state drive with gray casing and gold connector pins visible on the side. Image © 123RF.com

Flash memory in SSDs can degrade over many write operations. To mitigate this, drive firmware and operating systems use wear leveling routines to distribute writes evenly across all blocks, optimizing the device's lifespan.
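The wear-leveling idea is simple enough to sketch: direct each new write to the block with the fewest program/erase cycles so no block wears out early. This toy model (hypothetical block counts, nothing like real SSD firmware) shows how writes spread evenly:

```python
# Toy wear-leveling model: each flash block tracks its program/erase count,
# and every new write goes to the least-worn block.
NUM_BLOCKS = 8
wear = [0] * NUM_BLOCKS

def write(data):
    target = wear.index(min(wear))   # pick the least-worn block
    wear[target] += 1                # writing costs one program/erase cycle
    return target

for i in range(32):                  # simulate 32 logical writes
    write(f"payload-{i}")

print(wear)                          # [4, 4, 4, 4, 4, 4, 4, 4] -- evenly worn
```

Without this policy, repeated writes to the same logical address would exhaust one block's write cycles while the rest of the device sat idle.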
Note: The NOT AND (NAND) flash memory used in SSDs comes in different types. Single-level cell (SLC) is more reliable and more expensive than multi-level cell types.

In modern desktop PCs, an SSD might serve as the sole internal drive or as a boot drive alongside a hard drive, with the SSD hosting the OS and applications and the HDD storing user data. SSDs can be connected via different interfaces:

SATA: SSDs may be packaged in a 2.5-inch caddy and connected using standard SATA data and power connectors. Alternatively, mSATA form factor SSDs plug into a combined data and power port on the motherboard. However, the 600 MBps SATA interface can bottleneck high-performance SSDs, which can otherwise achieve transfer rates of up to 6.7 GB/s.

PCI Express (PCIe): Modern SSDs often use the PCIe bus directly, utilizing the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), better known as non-volatile memory express (NVMe), for better performance. NVMe SSDs can be installed in a PCIe slot as an expansion card or in an M.2 slot. M.2 SSDs are smaller and oriented horizontally, making them suitable for laptops and PC motherboards. M.2 slots provide power over the bus, eliminating the need for a separate power cable. M.2 adapters come in various sizes, indicated by labels such as M.2 2280 (22 mm wide and 80 mm long). PCIe 4.0 offers transfer rates up to 16 GT/s per lane, while PCIe 5.0 doubles this to 32 GT/s per lane, significantly enhancing SSD performance.

Serial Attached SCSI (SAS): SAS is another interface used for high-performance storage devices, often in enterprise environments. SAS drives offer faster data transfer rates and better reliability compared to SATA drives. SAS connects multiple devices to a single controller using a point-to-point serial protocol, enabling high-speed data access and robust performance. SAS drives are commonly used in servers and workstations.
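The interface figures quoted above can be reconciled with a quick calculation. The encoding overheads (8b/10b for SATA, 128b/130b for PCIe 3.0 and later) come from the published bus specifications; treat the results as theoretical ceilings rather than real-world throughput:

```python
# Theoretical usable bandwidth per storage interface.
# SATA III: 6 Gb/s line rate with 8b/10b encoding (10 line bits per data byte).
sata_mb_s = 6e9 / 10 / 1e6                       # 600 MB/s

# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding.
pcie4_lane_gb_s = 16e9 * (128 / 130) / 8 / 1e9   # ~1.97 GB/s per lane
pcie4_x4_gb_s = pcie4_lane_gb_s * 4              # ~7.9 GB/s for an M.2 x4 SSD

# PCIe 5.0 doubles the per-lane transfer rate.
pcie5_x4_gb_s = pcie4_x4_gb_s * 2                # ~15.8 GB/s

print(f"SATA III: {sata_mb_s:.0f} MB/s")
print(f"PCIe 4.0 x4: {pcie4_x4_gb_s:.2f} GB/s")
```

The gap between 600 MB/s and roughly 7.9 GB/s is why high-performance SSDs moved from SATA to NVMe over PCIe.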
SSDs are vulnerable to electrostatic discharge (ESD), so always take anti-ESD precautions when handling and storing these devices.

mSATA SSD form factor
An mSATA solid-state drive with a green circuit board and gold connector pins on the left. Image © 123RF.com

Note: M.2 is a physical form factor. M.2 SSDs can use either the SATA/AHCI bus or the NVMe interface. NVMe-based M.2 SSDs typically outperform their SATA counterparts. Check your motherboard documentation to ensure compatibility with both types. SATA interface SSDs are usually B keyed, 2-lane PCIe SSDs are B/M keyed, and 4-lane SSDs are M keyed.

______________________________________________________________

3.2.3 Hard Disk Drives

A hard disk drive (HDD) stores data on metal or glass platters coated with a magnetic substance. Each platter has a read/write head on both sides, moved by an actuator mechanism. The platters are mounted on a spindle and spin at high speeds. Each side of a platter is divided into circular tracks, which are further divided into sectors, traditionally holding 512 bytes each (newer Advanced Format drives use 4,096-byte sectors). This low-level layout of tracks and sectors is known as the drive geometry.

HDD with drive circuitry and casing removed showing 1) Platters; 2) Spindle; 3) Read/Write Heads; 4) Actuator
An open hard disk drive with the Actuator, Platters, Spindle, and Read/Write Heads labeled. Image by mkphotoshu © 123RF.com

HDD performance is determined by the disk's spindle speed, measured in revolutions per minute (RPM). High-performance drives spin at 15,000 or 10,000 RPM, while average drives spin at 7,200 or 5,400 RPM. Spindle speed strongly influences access time, measured in milliseconds. Seek time is the time taken for the read/write heads to move to the target track; access time adds rotational latency, the delay while the platter spins the required sector under the heads.
High-performance drives have access times below 3 ms, while typical drives have around 6 ms. The internal transfer rate (or data transfer rate) measures how fast read/write operations are performed on the platters. A 15,000 RPM drive supports up to 180 MBps, while a 7,200 RPM drive supports around 110 MBps. Most HDDs use a SATA interface.

HDDs come in two main form factors: 3.5-inch units for desktop PCs and 2.5-inch units for laptops and portable external drives. The 2.5-inch form factor can vary in height, with options including 15 mm, 9.5 mm, 7 mm, and 5 mm.

______________________________________________________________

3.2.4 Redundant Array of Independent Disks

Both HDDs and SSDs store critical data, including system files for the OS and user-generated data files. If a boot drive fails, the system crashes. If a data drive fails, users lose access to files, potentially resulting in permanent data loss if not backed up. To mitigate these risks, disks can be configured as a redundant array of independent/inexpensive disks (RAID). RAID works by distributing data across multiple disks to provide fault tolerance and improve performance. It sacrifices some disk capacity to achieve redundancy. To the OS, a RAID array appears as a single storage volume, which can be partitioned and formatted like any other drive.

Note: RAID can also be said to stand for "Redundant Array of Independent Devices."

RAID levels represent different drive configurations with specific types of fault tolerance, numbered from 0 to 6, with nested solutions like RAID 10 (RAID 1 + RAID 0). RAID can be implemented via software (software RAID) using OS features or via hardware (hardware RAID) using a dedicated controller installed as an adapter card. RAID disks connect to SATA ports on the RAID controller adapter card rather than the motherboard.

Note: As another option, some motherboards implement integrated RAID functionality as part of the chipset.
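The capacity trade-offs of the RAID levels discussed in the following sections reduce to simple formulas. This sketch assumes identical disks and ignores controller overhead:

```python
# Usable capacity (in GB) for common RAID levels, given n identical disks.
def raid_usable_gb(level, n_disks, disk_gb):
    if level == 0:                                   # striping: no redundancy
        assert n_disks >= 2
        return n_disks * disk_gb
    if level == 1:                                   # mirroring: one copy kept
        assert n_disks == 2
        return disk_gb
    if level == 5:                                   # one disk's worth of parity
        assert n_disks >= 3
        return (n_disks - 1) * disk_gb
    if level == 6:                                   # two disks' worth of parity
        assert n_disks >= 4
        return (n_disks - 2) * disk_gb
    if level == 10:                                  # stripe of mirrors: 50% overhead
        assert n_disks >= 4 and n_disks % 2 == 0
        return (n_disks // 2) * disk_gb
    raise ValueError(f"unsupported RAID level: {level}")

print(raid_usable_gb(5, 3, 80))    # 160 GB usable from three 80 GB disks
print(raid_usable_gb(6, 4, 1000))  # 2000 GB usable from four 1 TB disks
```

Comparing the output of each level for the same disk set is a quick way to weigh redundancy against cost when planning an array.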
Hardware RAID solutions vary by the RAID levels they support. Entry-level controllers might support only RAID 0 or RAID 1, while mid-level controllers might add RAID 5 and RAID 10. Hardware RAID often allows for hot swapping, meaning a damaged disk can be replaced without shutting down the OS.

Configuring a volume using RAID controller firmware
A RAID configuration utility screen shows two devices labeled ATA with details like identifier, RAID disk, status, and size.

______________________________________________________________

3.2.5 RAID 0 and RAID 1

When implementing RAID, selecting the appropriate RAID level is crucial. Factors to consider include the required fault tolerance, read/write performance, capacity, and cost.

Note: When building a RAID array, it is best to use disks that are identical in capacity, type, and performance to avoid underutilization or performance bottlenecks. If the disks differ in size, the smallest disk determines the usable space on each member of the array.

RAID 0 (Striping without Parity)

Disk striping divides data into blocks and distributes them across all disks in the array, improving performance by allowing multiple disks to service requests in parallel. RAID 0, which requires at least two disks, uses this method. The logical volume size is the capacity of the smallest disk multiplied by the number of disks in the array. However, RAID 0 provides no redundancy. If any disk fails, the entire logical volume fails, causing a system crash and necessitating data recovery from backups. Therefore, RAID 0 is typically used only for non-critical cache storage.

RAID 0 (striping)—Data is spread across the array
A CPU with a card connected to two storage arrays. The top array has disks Data 1, 3, 5, and 7. The bottom array has disks Data 2, 4, 6, and 8. Image ©123RF.com

RAID 1 (Mirroring)

RAID 1 is a mirrored drive configuration using two disks.
Each write operation is duplicated on the second disk, introducing a small performance overhead. Read operations can use either disk, slightly boosting performance. This setup is the simplest way to protect against a single disk failure. If one disk fails, the other takes over with minimal performance impact, maintaining good availability. However, the failed disk should be replaced quickly to restore redundancy. When it is replaced, the new disk is populated with data from the remaining disk. Performance during rebuilding is temporarily reduced, though RAID 1 rebuilds faster than parity-based RAID levels.

RAID 1 (mirroring)—Data is written to both disks simultaneously
A CPU with a card connected to two storage disks. The top disk is labeled Data 123 and the bottom disk is labeled Data 123. Image ©123RF.com

Disk mirroring is more expensive per gigabyte than other forms of fault tolerance because it utilizes only 50% of the total disk space.

______________________________________________________________

3.2.6 RAID 5 and RAID 10

RAID 5 and RAID 10 offer better performance and disk utilization than basic mirroring while retaining fault tolerance, making them more suitable choices in many scenarios.

RAID 5 (Striping with Distributed Parity)

RAID 5 combines striping (like RAID 0) with distributed parity. Distributed parity means that error correction information is spread across all the disks in the array. Data and parity information for any given stripe are always on different disks. If a single disk fails, the data can be reconstructed using the information on the remaining disks. RAID 5 offers excellent read performance, but if a disk fails, read performance slows down because the system needs to use the parity information to recover data. Write operations are also slower due to the need to calculate parity.

RAID 5 (striping with parity)
A CPU with a card connected to three storage arrays. Image ©123RF.com

RAID 5 requires at least three drives but can use more.
This provides more flexibility in determining the array's overall capacity compared to RAID 1. The maximum number of drives is set by the controller or OS, but practical considerations like cost and risk usually determine the number of drives used. Adding more disks increases the chance of a failure occurring, and if more than one disk fails at the same time, the entire logical storage unit (volume) becomes unavailable.

The share of capacity reserved for parity and the available disk space are inversely related. As you add more disks, the proportion of each disk given over to parity decreases, so usable disk space increases. For example, in a RAID 5 array with three disks, one-third of each disk is used for parity. With four disks, one-quarter of each disk is reserved for parity. In a three-disk configuration with 80 GB disks, you would have 160 GB of usable space.

RAID 10 (Stripe of Mirrors)

RAID 10 combines features of RAID 0 and RAID 1. It is a striped volume (RAID 0) made up of mirrored arrays (RAID 1). This setup offers excellent fault tolerance, as one disk in each mirror can fail without losing data.

RAID 10—Either disk in each of the sub-volumes can fail without bringing down the main volume.
A CPU with a card connected to volume (RAID 0). Volume (RAID 0) is further connected to Sub-volume (RAID 1) at the top and bottom. Image ©123RF.com

RAID 10 requires at least four disks and must have an even number of disks. It has a 50% disk overhead due to mirroring.

______________________________________________________________

3.2.7 RAID 6 (Striping with Double Parity)

RAID 6 uses striping with dual distributed parity, spreading two sets of parity information across all disks. This allows RAID 6 to tolerate the simultaneous failure of two disks, providing greater fault tolerance than RAID 5. It's ideal for environments with higher disk failure risks, such as large arrays or critical systems. In the event of a disk failure, the array continues to operate with slightly reduced performance.
Rebuilding a failed disk uses parity data from the remaining disks, a process that can be time-consuming depending on disk size and system load. RAID 6 offers good read performance similar to RAID 5, but slower write performance due to the need to calculate and write two sets of parity data. The array remains operational even if one or two disks fail, though performance will degrade.

A minimum of four disks is required, with the equivalent of two disks' capacity used for parity, distributed across all members. RAID 6 is more suitable for larger arrays than RAID 5 due to its ability to tolerate two disk failures. The usable capacity of a RAID 6 array is the total capacity minus the capacity of two disks. For example, a four-disk array with 1 TB disks has 2 TB of usable capacity.

RAID 6 is more expensive than RAID 5 due to the need for additional disks and more powerful RAID controllers. Longer rebuild times and the write penalty are potential risks, so monitoring the array's health and planning for prompt disk replacements are essential.

RAID 6
A CPU with a card connected to four storage arrays. Images © 123RF.com

______________________________________________________________

3.2.8 Removable Storage Drives

Removable storage refers to devices that can be moved between computers without opening the case or to media that can be removed from its drive.

Drive Enclosures

HDDs and SSDs can be used as removable storage by placing them in an enclosure. The enclosure provides a data interface (such as USB, Thunderbolt, or eSATA), a power connector (if needed), and physical protection for the disk.

External storage device
A rectangular portable hard drive with a USB port on the side. Image ©123RF.com

Some enclosures, known as Network Attached Storage (NAS), can be connected directly to a network. Advanced enclosures can host multiple disks configured as a RAID array.
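Whether the array lives in a server, on a controller card, or inside a multi-disk enclosure, parity-based RAID recovers a failed disk with XOR arithmetic: the parity block is the XOR of the data blocks in a stripe, so any single missing block equals the XOR of everything that survives. A minimal sketch with toy-sized blocks:

```python
from functools import reduce

# One stripe written across three data disks (RAID 5-style, toy-sized blocks).
stripe = [b"AAAA", b"BBBB", b"CCCC"]

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(stripe)

# Disk 1 fails; rebuild its block from the surviving data blocks plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt)   # b'BBBB'
```

RAID 6 applies the same principle with a second, independently computed parity set, which is why it survives two simultaneous failures.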
Flash Drives and Memory Cards

SSDs use flash memory, which is also found in other forms, such as flash drives and memory cards. Also known as a USB drive, thumb drive, or pen drive, a flash drive consists of a flash memory board with a USB connector and a protective cover. You can plug it into any available USB port on your computer.

USB thumb drive (left) and SD memory card (right)
A USB thumb drive and a 64 GB high-speed SD card. Image ©123RF.com

Commonly used in digital cameras, smartphones, and tablets to store photos, videos, and other data, a memory card requires a card reader to be used with a PC. These readers often fit into a front-facing drive bay of a PC and connect to the motherboard via a USB controller. Most motherboards have spare USB headers for these internal connections, or the reader might come with an expansion card.

Multi-card reader
A multi-card reader with labeled slots for Smart Media or xD, Compact Flash I or II, MMC or SD, MS or MS PRO, and USB 2.0. Image ©123RF.com

There are several proprietary types of memory cards, each with different sizes and performance levels. Most memory card readers can read multiple types of cards. For example, Secure Digital (SD) cards come in three capacity types:

SD: Up to 2 GB.
SDHC (Secure Digital High Capacity): Up to 32 GB.
SDXC (Secure Digital Extended Capacity): Up to 2 TB.

SD cards also have different speed ratings:

Original SD: Up to 25 MBps.
UHS (Ultra High Speed): Up to 104 MBps.
UHS-II: Up to 156 MBps full-duplex or 312 MBps half-duplex.
UHS-III: Up to 312 MBps (FD312) or 624 MBps (FD624) full-duplex.
SD Express: The latest variant, which combines the SD card form factor with PCIe and NVMe interfaces, offers speeds up to 985 MBps.
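The practical impact of those speed ratings shows up in transfer times. This sketch uses approximate theoretical peak rates (assumed figures; real-world sustained speeds are lower):

```python
# Time to fill a 64 GB card at various SD bus speeds (theoretical peaks).
card_mb = 64 * 1000                      # 64 GB expressed in MB

rates_mb_s = {"Original SD": 25, "UHS-I": 104, "SD Express": 985}

minutes = {name: card_mb / rate / 60 for name, rate in rates_mb_s.items()}
for name, mins in minutes.items():
    print(f"{name}: {mins:.1f} minutes")   # e.g. Original SD: 42.7 minutes
```

Filling the card drops from roughly three-quarters of an hour at original SD speed to about a minute over SD Express, which matters for high-bitrate video capture.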
Note: The smaller form factors can be used with regular-size readers using an adapter (caddy) to hold the card.

______________________________________________________________

3.2.9 Optical Drives

Compact Discs (CDs), Digital Versatile Discs (DVDs), and Blu-ray Discs (BDs) are widely used for retail music and video distribution. These optical media types use a laser to read data encoded on the disc surface. While marketed as durable, scratches can make them unreadable. These discs can also store PC data and come in recordable and rewritable formats:

Basic Recordable Media: Can be written to once in a single session.
Multisession Recordable Media: Can be written to in multiple sessions, but data cannot be erased.
Rewritable Media: Can be written and erased multiple times, up to a certain number of write cycles.

Capacity and Transfer Rates

Each optical disc type has different capacities and transfer rates:

CD: Capacity: Up to 700 MB. Formats: Recordable (CD-R) and Rewritable (CD-RW). Base Transfer Rate: 150 KBps.
DVD: Capacity: 4.7 GB (single-layer, single-sided) to about 17 GB (dual-layer, double-sided). Formats: DVD+R/RW and DVD-R/RW (most drives support both, indicated by the ± symbol). Base Transfer Rate: 1.32 MBps (equivalent to 9x CD speed).
Blu-ray: Capacity: 25 GB per layer. Formats: Read-only (BD-ROM) and Rewritable (BD-RE). Base Transfer Rate: 4.5 MBps, with a maximum theoretical rate of 16x (72 MBps).

Installation and Connectivity

Internal Optical Drive: Installed in a 5.25-inch drive bay and connected to the motherboard via SATA data and power connectors.
External Optical Drive: Connected via USB, eSATA, or Thunderbolt. These drives usually require an external power supply via an AC adapter. They may use either a tray-based or slot-loading mechanism.

Optical drive unit
An optical drive unit with a front tray, indicator light, and a label with specifications on the top.
Image ©123RF.com

Note: Drives also feature a small hole that accesses a disc eject mechanism (insert a paper clip to activate the mechanism). This is useful if the standard eject button does not work or if the drive does not have power.

Drives are rated by their data transfer speeds, expressed as record/rewrite/read speeds (e.g., 24x/16x/52x). New drives are generally multi-format, but older drives may lack Blu-ray support.

Consumer DVDs and Blu-rays often include digital rights management (DRM) and region coding. Region coding restricts disc usage to players from the same region. On PCs, the region can be set via device properties but is usually limited to a few changes by the firmware.

______________________________________________________________
______________________________________________________________

3.3 System Memory

______________________________________________________________

3.3.1 System RAM and Virtual Memory

The CPU processes software instructions through a pipeline, with the top instructions stored in its registers and cache. However, the CPU's cache is limited, necessitating support from additional storage technologies. When executing a process or opening a data file, the image is loaded from the fixed disk into system memory (RAM). Instructions are then fetched from RAM into the CPU's cache and registers as needed, managed by a memory controller.

System memory, implemented as random-access memory (RAM), is faster than SSD flash memory and much faster than HDDs, but it is volatile, meaning it retains data only while powered on. System memory is measured in gigabytes (GB), and the amount of RAM determines a PC's ability to handle multiple applications simultaneously and process large files efficiently.

Address Space

The bus, or communication system, connecting the CPU, memory controller, and memory devices has two main pathways: data and address.
Data Pathway: Determines the amount of information transferred per clock cycle.
In a single-channel memory controller, this bus is typically 64 bits wide.
Address Pathway: Determines the number of memory locations the CPU can track, thus limiting the maximum physical and virtual memory. A 32-bit CPU with a 32-bit address bus can access up to 4 GB of memory. A 64-bit CPU could theoretically use a 64-bit address space (16 exabytes), but most use a 48-bit address bus, allowing up to 256 terabytes of memory.

A 64-bit CPU can address more memory locations than a 32-bit CPU. The 64-bit data bus determines the amount of data that can be transferred between the CPU and RAM per cycle A comparison of 32-bit and 64-bit C P U architectures showing data buses, RAM, mass storage, and virtual memory address spaces. Image ©123RF.com

______________________________________________________________

3.3.2 RAM Types

Modern system RAM is implemented as Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), reflecting a progression of memory technologies from the 1990s to today:
Dynamic RAM (DRAM) stores data bits as electrical charges in bit cells made of capacitors, which hold a charge, and transistors, which read the capacitor's contents. A charged capacitor represents 1; an uncharged capacitor represents 0.
Synchronous DRAM (SDRAM) is synchronized to the system clock, ensuring that memory operations are timed with the CPU's instructions.
DDR SDRAM (Double Data Rate SDRAM) doubles data transfer by transmitting data on both the rising and falling edges of the clock cycle.

DDR memory modules are labeled by their maximum theoretical bandwidth, such as PC1600 or PC2100. Here's how these values are derived using DDR-200 (PC1600) as an example:
Clock Speed: Both the internal memory device clock speed and the memory bus speed are 100 MHz.
Data Rate: DDR performs two operations per clock cycle, resulting in a data rate of 200 megatransfers per second (MT/s), hence the DDR-200 designation.
Peak Transfer Rate: The peak transfer rate is 1600 MBps (200 MT/s multiplied by 8 bytes per transfer), giving the PC-1600 designation. This is equivalent to 1.6 GBps.

Subsequent generations of DDR technology (DDR1, DDR2, DDR3, DDR4, and DDR5) increase bandwidth by multiplying the bus speed rather than the speed of the memory devices. This approach allows for scalable speed improvements without making the memory modules too unreliable or too hot. Design improvements also increase the maximum possible capacity of each memory module.

RAM Type   Data Rate            Transfer Rate         Maximum Size
DDR1       200 to 400 MT/s      1.6 to 3.2 GB/s       1 GB
DDR2       400 to 1066 MT/s     3.2 to 8.5 GB/s       4 GB
DDR3       800 to 2133 MT/s     6.4 to 17.066 GB/s    16 GB
DDR4       1600 to 3200 MT/s    12.8 to 25.6 GB/s     32 GB
DDR5       4800 to 8000+ MT/s   38.4 to 51.2+ GB/s    128 GB or higher

Note: MT/s stands for "megatransfers per second," where "mega" refers to one million. It is a measure of the data transfer rate, indicating how many million data transfers occur per second. The transfer rate is the speed at which data is transferred by the memory controller.

Memory modules also have internal timing characteristics that are expressed as values like 14-15-15-35 CAS 14. Lower values indicate better performance among RAM modules of the same type and speed. CAS (Column Access Strobe) latency refers to the delay between the memory controller requesting data from the RAM and the moment it becomes available. It is measured in clock cycles. Lower CAS latency means the memory can access data more quickly, leading to better performance. This latency is a crucial factor in determining the overall speed and efficiency of RAM, especially when comparing modules of the same type and speed.
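The PC-rating arithmetic above, and the conversion of CAS cycles into real time, can be captured in two small helper functions. This is an illustrative sketch; the function names and the DDR4/DDR5 timing examples are assumptions for demonstration, not figures from the table:

```python
# Derive a module's peak transfer rate and convert its CAS latency from
# clock cycles into nanoseconds. 8 bytes per transfer matches the 64-bit
# data bus; a multi-channel controller multiplies the bandwidth figure.

def peak_transfer_mbps(data_rate_mts: float, bus_bytes: int = 8,
                       channels: int = 1) -> float:
    """Peak bandwidth in MB/s: megatransfers/s x bytes per transfer."""
    return data_rate_mts * bus_bytes * channels

def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """CAS latency in nanoseconds.

    DDR clocks at half the transfer rate, so one clock cycle lasts
    2000 / data_rate_mts nanoseconds.
    """
    return cas_cycles * 2000 / data_rate_mts

# DDR-200 (PC1600) from the worked example above:
print(peak_transfer_mbps(200))     # 1600.0 MB/s
# Hypothetical retail timings, for comparison only:
print(cas_latency_ns(3200, 16))    # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(4800, 40))    # DDR5-4800 CL40 -> ~16.7 ns
```

Note how a DDR5 module can have a higher CAS value in cycles yet a broadly similar real-time latency, because each cycle is shorter at the faster data rate.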
Architectural Improvements in DDR4 and DDR5:
DDR4: Introduced improvements such as increased power efficiency and higher density, allowing for larger memory capacities. It also features a more efficient channel design, which enhances data throughput.
DDR5: Further enhances power efficiency and introduces a new dual 32-bit subchannel architecture within each DIMM, effectively allowing more efficient access and reducing latency.

______________________________________________________________

3.3.3 Memory Modules

A memory module is a printed circuit board that holds a group of RAM devices acting as a single unit. These modules come in various capacities, with each DDR generation setting an upper limit on maximum capacity. Desktop memory is packaged as a Dual Inline Memory Module (DIMM). The notches on the module's edge connector identify the DDR generation (DDR3, DDR4, DDR5) and prevent incorrect insertion. DIMMs often feature heat sinks due to high clock speeds.

DDR SDRAM packaged in DIMMs A set of two D D R S D RAM D I M M modules with circuit boards, memory chips, and gold connector edges. Image ©123RF.com

Note: Memory modules are vulnerable to electrostatic discharge (ESD). Always take anti-ESD precautions when handling and storing these devices.

DIMMs and SODIMMs are designed for different purposes based on their size and application. DIMMs, or Dual Inline Memory Modules, are primarily used in desktop computers. Their larger size allows for higher capacities and better performance, making them ideal for systems that require significant memory, such as gaming PCs and workstations. In contrast, SODIMMs (Small Outline DIMMs) are smaller and typically used in laptops and compact devices. Although they offer lower capacities and performance compared to DIMMs, SODIMMs are preferable in scenarios where space is limited.
Typical SODIMM capacities range from 4 GB to 32 GB, making them suitable for portable devices and small form factor systems. In terms of use-case scenarios, DIMMs are favored in systems where performance and capacity are prioritized, such as gaming rigs, high-performance desktops, and servers. Conversely, SODIMMs are ideal for laptops, mini PCs, and other compact systems where space constraints are a consideration.

Installation and Compatibility

When installing memory modules, it is crucial to ensure that the DDR type matches the motherboard. For instance, DDR5 modules cannot be installed in DDR4 slots. For optimal performance, it is advisable to use modules rated at the same bus speed as the motherboard. While mixing different speeds is possible, it is not recommended, as the system will operate at the speed of the slowest component.

Memory slots resemble expansion slots but have catches on each end to secure the modules. SODIMMs are typically fitted into slots that pop up at a 45° angle for easy insertion or removal.

SODIMM A S O D I M M module with circuit boards, memory chips, and gold connector edges. Image © 123RF.com

______________________________________________________________

3.3.4 Multi-channel System Memory

In the 2000s, the increasing speed and architectural improvements of CPUs led to memory becoming a bottleneck in system performance. To address this, Intel and AMD developed dual-channel architecture for DDR memory controllers. Initially used in server-level hardware, dual-channel is now common in desktop systems and laptops.

Single-Channel vs. Dual-Channel Memory
Single-channel: Features one 64-bit data bus between the CPU, memory controller, and RAM, which can limit data transfer rates.
Dual-channel: Utilizes two 64-bit pathways, allowing 128 bits of data per transfer, effectively doubling the data bandwidth. This requires support from the CPU, memory controller, and motherboard, but not from the RAM modules themselves.
Ordinary RAM modules are used; there are no specific "dual-channel" DDR memory modules.

Note: DDRx memory is sold in "kits" for dual-channel use, but the modules are identical to standard ones.

Motherboard DIMM slots (dual channel). Slots 1 and 3 (black slots) make up one channel, while slots 2 and 4 (grey slots) make up a separate channel A motherboard with Channel A (D I M M A 1 and A 2) and Channel B (D I M M B 1 and B 2) labeled. Image ©123RF.com

Configuring Dual-Channel Systems
Slot Arrangement: Dual-channel motherboards often have four DIMM slots arranged in color-coded pairs. Each pair represents one channel (e.g., channel A might be orange, and channel B might be blue).
Installation: To enable dual-channel, install identical modules in the corresponding slots of each channel (e.g., A1 and B1). The modules should match in clock speed, capacity, timings, and latency. Clock speed refers to the frequency at which the memory operates, affecting how quickly data can be processed. Timings and latency refer to the delay before data transfer begins, with lower values indicating faster performance. If the modules do not match, the system will default to the lowest (worst performing) values.
System Setup: Dual-channel mode may need to be enabled via the PC firmware's system setup program.

Note: Be aware that not all motherboards use consistent labeling or color-coding for their DIMM slots. Some may color-code each channel, while others color-code the socket numbers. Always consult the system documentation to ensure proper installation.

Mismatched Modules and Flex Mode

When adding an odd number of memory modules or modules with different clock speeds and sizes, several outcomes can occur. The system may default to single-channel mode, which limits data transfer rates compared to dual-channel configurations. Alternatively, the system might enable dual-channel mode but disable the spare module, effectively ignoring the additional memory.
In some cases, flex mode may be utilized. For instance, if slot A1 contains a 2 GB module and slot B1 contains a 6 GB module, dual-channel mode is enabled for the matched 2 GB in each slot, while the remaining 4 GB in B1 operates in single-channel mode. This configuration allows for some performance benefits of dual-channel operation while accommodating mismatched module sizes.

Triple-Channel and Quadruple-Channel Memory

Some CPUs and chipsets support triple-channel or quadruple-channel memory controllers. If the full complement of modules is not installed, the system will revert to as many channels as are populated.

Note: DDR5 introduces a new data bus architecture. Each memory module has two 32-bit channels. When installed in a dual-channel memory controller configuration, this results in four 32-bit channels. This architecture distributes the load on each RAM device more effectively, supporting higher density (more gigabytes per module) and reducing latency. It also works better with the multi-core features of modern CPUs.

______________________________________________________________

3.3.5 ECC RAM

Error correction code (ECC) RAM is primarily used in workstations and servers where high reliability is critical. It can detect and correct single-bit memory errors, preventing data corruption and system crashes. It can also detect (but not correct) multi-bit errors, generating an error message and halting the system if such an error occurs.

ECC RAM adds an 8-bit checksum to each data transfer, requiring a 72-bit data bus instead of the standard 64-bit bus. The memory controller calculates the checksum and compares it to the one stored by the RAM to detect errors.

Most ECC RAM is supplied as Registered DIMMs (RDIMMs), which include a register that reduces the electrical load on the memory controller, improving system stability when large amounts of memory are used. This comes with a slight performance penalty.
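The checksum-and-compare mechanism described above can be illustrated with a toy Hamming(7,4) code. Real ECC modules use a larger single-error-correct, double-error-detect code over each 64-bit word (hence the 72-bit bus), so this sketch only demonstrates the underlying principle of locating a flipped bit from parity checks:

```python
# Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits, enough
# to locate and correct any single flipped bit. ECC RAM applies the same
# idea at 64-bit scale; this is an illustration, not controller firmware.

def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits as the codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code: list[int]) -> list[int]:
    """Return the 4 data bits, correcting up to one flipped bit."""
    c = list(code)
    # Re-check each parity group; the failed checks spell out the
    # 1-based position of the corrupted bit (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[5] ^= 1                       # simulate a single-bit memory error
assert hamming74_correct(sent) == word
print("single-bit error corrected")
```

The "detect but not correct" behavior for multi-bit errors comes from adding one more overall parity bit (the SECDED arrangement), which is how a controller knows to halt rather than silently miscorrect.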
Unbuffered DIMMs (UDIMMs) are more common in consumer-grade systems and typically do not support ECC. However, some ECC RAM is available as UDIMMs, though this is less common.

Compatibility Considerations
Motherboard and CPU Support: Both must support ECC for it to be enabled.
DIMM Type Compatibility: Most motherboards support either UDIMMs or RDIMMs, not both. Mixing different types (e.g., UDIMMs with RDIMMs) will prevent the system from booting.
Mixing ECC and Non-ECC: Mixing ECC and non-ECC UDIMMs is generally not supported and will likely result in system instability or failure to boot.

Note: DDR5 RAM introduces an internal error-checking mechanism within the module itself, but this is not the same as traditional ECC. DDR5 still comes in both ECC and non-ECC varieties; only the ECC variety provides full error correction via the memory controller.

______________________________________________________________
______________________________________________________________

3.4

______________________________________________________________

3.4.1 x64 CPU Architecture

The x86 architecture refers to the 32-bit instruction set, where each instruction can be up to 32 bits wide, and was the standard for CPUs through the 1990s. The x64 (or x86-64) architecture is the 64-bit extension of the x86 architecture, developed by Advanced Micro Devices (AMD) as AMD64 and adopted by Intel as Intel 64 or EM64T. This extension allows CPUs to handle 64-bit instructions, data paths, and memory addressing, enabling access to more than 4 GB of RAM. 64-bit CPUs can run both 32-bit and 64-bit software, whereas 32-bit CPUs cannot run 64-bit software.

Software, including operating systems, device drivers, and applications, must be specifically designed and compiled to take advantage of the x64 architecture. Device drivers must match the operating system's architecture, meaning 64-bit drivers are required for a 64-bit OS.
Modern operating systems such as Windows 11 and most current Linux distributions are released only in 64-bit versions; modern OSs and applications are predominantly 64-bit, and 32-bit versions are becoming obsolete.

The transition from 32-bit to 64-bit systems is evident in both hardware and software contexts. For example, Apple's shift to 64-bit began with the introduction of the 64-bit A7 chip in the iPhone 5s, and by macOS Catalina, support for 32-bit applications was completely dropped. In the software realm, many applications, such as Adobe Creative Cloud, have transitioned to 64-bit to leverage enhanced performance and memory capabilities.

The x64 architecture offers better performance, increased memory capacity, and support for more advanced computing tasks, making it the current standard for new hardware and software. Additionally, x64 architecture impacts virtualization and security by supporting hardware-based virtualization technologies, such as Intel VT-x and AMD-V, which enhance the performance and efficiency of virtual machines. It also supports Data Execution Prevention (DEP) for 64-bit systems, a security feature that helps prevent code execution from non-executable memory regions, thereby enhancing system security.

Note: A device driver is code that provides support for a specific model of hardware component for a given operating system.

______________________________________________________________

3.4.2 x86 CPU Architecture

CPU architecture plays a crucial role in determining performance and suitability for different applications. Two primary architectures are RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing). RISC uses a small, optimized set of instructions for faster execution and efficiency, ideal for high-performance, power-efficient applications like mobile devices and embedded systems.
Conversely, CISC, exemplified by the x86 architecture, employs a larger instruction set for more complex operations, simplifying programming and enhancing performance in general-purpose tasks, making it suitable for desktops and servers. The x86 architecture, a CISC design, supports both 32-bit (IA-32) and 64-bit instruction sets and is primarily produced by Intel and Advanced Micro Devices (AMD). These processors optimize the fetch, decode, execute, and write-back processes within the execution pipeline, allowing simultaneous processing of multiple instructions and improving throughput.

Key internal CPU components include the Arithmetic Logic Unit (ALU), which performs arithmetic and logical operations, and the Control Unit, which manages the fetch, decode, and execute cycles. Performance features such as multi-core processors enable parallel processing and improved multitasking, beneficial for applications requiring simultaneous task execution. Hyper-threading technology further boosts performance by allowing a single core to handle multiple threads, simulating additional cores.

Cache memory, a small, high-speed memory within the CPU, stores frequently accessed data and instructions. The cache hierarchy (L1, L2, L3) reduces access time to main memory, enhancing CPU efficiency. L1 cache is the smallest and fastest, integrated directly into CPU cores.

CPU families like Intel's Core series and AMD's Ryzen series utilize these architectural features for various use cases. Multi-core and hyper-threading capabilities make them ideal for demanding applications such as gaming, video editing, and data processing. Meanwhile, RISC architectures are preferred in environments prioritizing power efficiency and speed, such as mobile and embedded systems. Understanding these architectural differences helps users select the right CPU for their needs.
______________________________________________________________

3.4.4 ARM CPU Architecture

ARM (Advanced RISC Machines) provides CPU designs that are customized and manufactured by companies like Qualcomm, Nvidia, Apple, and Samsung. ARM processors are widely used in modern Apple hardware (like the M1 and M2 chips), most Android devices, Chromebooks, and some Windows tablets and laptops. A typical ARM design implements a system-on-chip (SoC), integrating components like video, sound, networking, and storage controllers into the CPU, making ARM ideal for mobile and fanless devices due to its power efficiency and compact size.

Note: ARM's architecture is based on Reduced Instruction Set Computing (RISC), which uses simpler, more efficient instructions compared to the Complex Instruction Set Computing (CISC) architecture used in x86/x64 CPUs from Intel and AMD. While RISC may require more instructions to perform certain tasks, each instruction typically completes in a single clock cycle, allowing for better performance-per-watt and increased battery life.

For an operating system and hardware drivers to run on an ARM-based device, they must be redesigned and compiled to use the ARM instruction set. While this task is typically within the reach of operating system developers, converting existing x86/x64 software applications to run on a different instruction set is an onerous task. Another option is support for emulation, meaning that the ARM device runs a facsimile of an x86 or x64 environment. Windows 10 ARM-based devices use emulation to run x86 and x64 software apps. Emulation typically imposes a significant performance penalty, however.
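Because binaries are tied to an instruction set, software often checks the host architecture at runtime before choosing which build to load or whether emulation is needed. A minimal sketch using only Python's standard library (the exact strings reported vary by platform):

```python
# Query the host machine type and report what that implies for running
# x86/x64 versus ARM builds. Illustrative only: installers and package
# managers perform a more thorough version of this check.

import platform

machine = platform.machine()  # e.g. "x86_64", "AMD64", "arm64", "aarch64"
print(f"Machine type: {machine}")

if machine.lower() in ("arm64", "aarch64"):
    print("ARM host: needs native ARM builds, or x86 emulation (e.g., Rosetta 2).")
elif machine.lower() in ("x86_64", "amd64"):
    print("x64 host: runs 64-bit software natively; 32-bit x86 apps usually run too.")
else:
    print("Other architecture: check vendor documentation for supported builds.")
```

This is essentially the decision an OS makes on your behalf when it routes an x86 binary through an emulation layer on an ARM device.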
Operating systems, drivers, and applications must be specifically compiled for the ARM instruction set to run on ARM-based devices. Apple's macOS and iOS are optimized for ARM-based chips, and Android apps are built primarily for ARM. Emulation, such as in Windows on ARM or Apple's Rosetta 2, allows x86 software to run on ARM, typically with a performance penalty, although Apple has reduced this significantly. ARM's power efficiency and increasing performance, particularly demonstrated by Apple's M1 and M2 chips, have made it a strong competitor to x86/x64 architectures in both mobile and high-performance computing.

Physical Considerations for ARM-based SoCs

Integrating ARM-based SoCs into devices like mobile phones, tablets, or fanless laptops leverages their compact size and thermal efficiency. Unlike traditional CPUs, ARM SoCs are typically soldered directly onto the motherboard, saving space and enhancing durability. This design reduces the device's thickness and weight, making it ideal for portable or slim devices. In fanless devices, ARM's low power consumption and efficient heat management allow for passive cooling solutions, such as heat sinks, instead of fans. This contributes to silent operation and longer battery life. Additionally, ARM SoCs integrate multiple functions, like GPU, networking, and storage controllers, reducing the number of components needed on the motherboard, which simplifies design and lowers production costs.

______________________________________________________________

3.4.5 CPU Features

The speed at which a CPU runs (clock speed) is an important performance indicator when comparing processors with the same architecture but is less reliable for different architectures. Performance is also constrained by thermal and power limits, preventing CPUs from running indefinitely faster. Efficiency can be improved by optimizing the instruction pipeline to maximize work per clock cycle.
Simultaneous multithreading (SMT), known as Hyper-Threading by Intel, allows multiple instruction streams (threads) from software applications to be processed concurrently, reducing CPU idle time and enhancing performance in multithreaded applications. Intel's Hyper-Threading Technology and AMD's SMT allow each physical core to act like two virtual cores, enabling better performance in applications that can use multiple threads, such as video editing, gaming, or running virtual machines.

Another approach is Symmetric Multiprocessing (SMP), which uses two or more physical CPUs in a system. An SMP-aware operating system can distribute tasks across available CPUs, regardless of whether applications are multithreaded. SMP is more common in servers and high-end workstations because multisocket motherboards are expensive and the CPUs installed in each socket must support SMP and be of identical model and specification.

Chip-level multiprocessing (CMP), or multicore CPUs, became possible with advancements in fabrication techniques. A multicore CPU places multiple processing units (cores) on a single chip, enabling better performance without the complexity of multisocket configurations. Each core has its own execution unit and cache, often with access to shared caches. The market has progressed beyond dual-core CPUs to models with eight or more cores, using the nC/nT notation to designate multicore and multithreading features. For example, an 8C/16T CPU has eight cores and can process 16 threads simultaneously, thanks to multithreading capabilities.

Finally, virtualization allows a single machine to run multiple operating systems (virtual machines or VMs) simultaneously. When building or specifying a machine for virtualization, look for CPUs with hardware-assisted virtualization technologies, such as Intel VT and AMD-V, which improve virtualization performance.
Additionally, second-generation virtualization features such as Second Level Address Translation (SLAT), known as Extended Page Table (EPT) by Intel and Rapid Virtualization Indexing (RVI) by AMD, are critical for efficient virtual memory management, enabling VMs to run more smoothly. High core counts, multithreading (e.g., Intel Hyper-Threading or AMD SMT), and support for IOMMU (Input-Output Memory Management Unit) are also important for handling multiple VMs and resource-intensive virtualization tasks.

______________________________________________________________

3.4.6 CPU Socket Types

CPU packaging refers to the form factor and connection method between the CPU and motherboard. Intel and AMD use different packaging and socket types, meaning that AMD CPUs cannot be installed in Intel motherboards and vice versa. CPU sockets feature a Zero Insertion Force (ZIF) mechanism, which allows CPUs to be installed without applying pressure, reducing the risk of damaging the pins.

Note: CPUs are vulnerable to electrostatic discharge (ESD). Always take anti-ESD precautions when handling and storing these devices.

Intel predominantly uses Land Grid Array (LGA) sockets, where the pins are located on the motherboard socket, and the CPU has contact pads. Popular Intel socket types include:
LGA 1200: Used for the 10th and 11th generation Core processors, offering compatibility with Intel's Comet Lake and Rocket Lake CPUs.
LGA 1700: Designed for the 12th generation Alder Lake CPUs, supporting Intel's latest architecture improvements.

Intel CPUs often feature Turbo Boost technology, which dynamically increases the processor's clock speed to enhance performance during demanding tasks.

GIGABYTE Z590 Gaming motherboard with Intel Socket 1200 LGA form factor CPU socket. Note that the socket is covered by a protective dust cap A Gigabyte Z 590 Gaming motherboard. The socket is covered by a cap. Image used with permission from Gigabyte Technology.
When installing or removing a CPU, care must be taken to orient pin 1 on the CPU correctly with pin 1 on the socket to avoid bending or breaking any pins. Release the latch securing the CPU before attempting to remove it, and when removing a CPU attached to a heat sink, gently twist the heat sink to avoid pulling the CPU from the socket. If reinstalling the same heat sink, clean off the old thermal paste and apply new thermal paste sparingly (such as in an "X" pattern) to ensure proper heat transfer. Do not apply too much; if it overruns, the excess could damage the socket.

AMD Socket Types and Features

AMD traditionally uses Pin Grid Array (PGA) sockets, where the pins are located on the CPU itself and fit into the motherboard socket. Key AMD socket types include:
AM4: Widely used for AMD's Ryzen processors, providing compatibility across multiple generations.
TR4: Designed for Threadripper CPUs, catering to high-performance desktop applications.
SP3: Used for AMD's EPYC server processors, which utilize LGA sockets for enhanced durability and performance.

GIGABYTE X570S Gaming X motherboard with AMD Socket AM4 PGA form factor CPU socket A Gigabyte X 570S Gaming X motherboard. Image used with permission from Gigabyte Technology.

______________________________________________________________

3.4.7 CPU Types and Motherboard Compatibility

The CPU market experiences rapid turnover, with vendors like Intel and AMD regularly releasing new models. Each new generation typically introduces architectural improvements and often a new socket design. Motherboards are designed specifically for either Intel or AMD CPUs, and compatibility is generally limited to CPUs from the same generation, requiring both the physical socket and the chipset to support the CPU. Motherboard chipsets play a critical role in compatibility, as they determine which features (e.g., overclocking, PCIe lanes, or memory support) are available.
For example, high-end chipsets like Intel's Z-series or AMD's X-series offer more advanced features compared to budget chipsets. Additionally, form factors such as ATX, microATX, and Mini-ITX impact the size of the motherboard and the number of components (e.g., RAM slots, PCIe slots) it can accommodate. For example, ATX boards typically offer more RAM slots and PCIe slots compared to smaller microATX or Mini-ITX boards, which are better suited for compact builds.

When installing a CPU and motherboard, follow these steps:
Socket Alignment: Ensure the CPU matches the motherboard socket type (e.g., LGA for Intel or AM4/AM5 for AMD). Align the CPU with the socket using the notches or triangle markers to avoid damaging the pins.
Securing the CPU: Gently place the CPU into the socket and secure it using the retention mechanism. Avoid applying excessive force.
Thermal Paste Application: Apply a small, pea-sized amount of thermal paste to the center of the CPU before attaching the cooler. This ensures proper heat transfer between the CPU and the cooler.
Attach the Cooler: Secure the CPU cooler to the motherboard, ensuring it is properly aligned and connected to the CPU fan header.
Install the Motherboard: Place the motherboard into the case, aligning it with the standoffs and securing it with screws.

Upgrading a CPU on the same motherboard is often limited, as new CPUs may require new sockets or chipsets. Even when upgrades are possible, they rarely offer significant performance improvements without replacing other components. Both Intel and AMD release multiple CPU models within each generation, targeting different market segments such as desktop, server, and mobile processors.

Core Configurations and Performance Impacts

CPU performance is heavily influenced by its core configuration, which determines how efficiently it can handle tasks:
Single-Core vs. Multi-Core Processing: Single-core CPUs process one task at a time, making them less efficient for multitasking.
Multi-core CPUs can handle multiple tasks simultaneously, improving performance for modern applications. For example, a quad-core CPU can process four tasks at once, making it ideal for multitasking or running resource-intensive applications. Hyper-Threading/Simultaneous Multithreading (SMT): Technologies like Intel's Hyper-Threading and AMD's SMT allow each physical core to handle two threads, effectively doubling the number of tasks a CPU can manage at once. This is particularly beneficial for applications like video editing, virtualization, and 3D rendering. Practical Scenarios: Core Configurations in Action Gaming: Most games rely on high single-core performance, as they are optimized to use fewer cores. A CPU with fewer, faster cores (e.g., Intel Core i5 or AMD Ryzen 5) is often sufficient for gaming. Virtualization: Virtualization benefits from CPUs with higher core counts and multithreading support, as running multiple virtual machines requires significant processing power. CPUs like AMD Ryzen 9, Threadripper, or Intel Core i9 are better suited for these tasks. Workstations: Tasks like video editing, 3D rendering, and software development require both high core counts and multithreading. Workstation-grade CPUs like AMD Threadripper or Intel Xeon are designed for these workloads. Desktops The term desktop refers to a basic PC used at home or in the office. Originally, computer cases sat horizontally on desks, but today most desktops come in tower or all-in-one configurations. The desktop segment includes a broad range of performance levels, from budget PCs to high-end gaming systems, reflected in both Intel and AMD CPU lineups. Intel offers Core i3/i5/i7/i9 processors, with i3 being entry-level and i9 catering to enthusiasts and professionals. AMD offers Ryzen 3/5/7/9, ranging from budget to high-performance, and the high-end Threadripper series for workstation-grade performance. 
Intel also markets Pentium and Celeron CPUs as low-cost options for entry-level PCs.
Socket Types:
Intel: Transitioned from older sockets like LGA 1151 and LGA 1200 to the current LGA 1700 socket, used for its 12th-gen (Alder Lake) and 13th-gen (Raptor Lake) processors, supporting DDR5 and PCIe 5.0.
AMD: Used the AM4 socket for several Ryzen generations but has now shifted to AM5 for the Ryzen 7000 series, introducing support for DDR5 memory and PCIe 5.0. Most AMD CPUs historically used the PGA form factor (pins on the CPU), but newer platforms like AM5 have transitioned to LGA (pins on the motherboard socket), similar to Intel's design.
Workstations
The term workstation can refer to any business PC or network client. However, in PC sales, it typically means a high-performance PC used for tasks like software development or graphics/video editing. Workstation-class PCs often use components similar to those in server-class computers.
Servers
Server-class computers handle demanding workloads and require high reliability. These systems are designed with multisocket motherboards, allowing the installation of multiple CPU packages, each with multiple cores and support for multithreading, providing the necessary processing power for heavy workloads. Key features of server-class motherboards include support for large amounts of ECC RAM (hundreds of gigabytes or more) and expanded cache memory. Intel Xeon and AMD EPYC CPUs, built for scalability and high performance, are typically paired with dedicated server-grade motherboards. Recent Intel Xeon processors use sockets such as LGA 4189 (for Xeon Scalable processors) and LGA 3647 (for older Xeon Scalable models); older sockets like LGA 1150 and LGA 1151 are no longer used for modern Xeon CPUs. AMD EPYC processors use the Socket SP3 form factor, with newer models utilizing Socket SP5, introduced for AMD's Genoa and Bergamo series, supporting greater scalability, higher core counts, and larger memory capacity.
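The relationship between physical cores and hardware threads described above can be observed from software. The following is a minimal sketch using only Python's standard library; note that `os.cpu_count()` reports logical CPUs (hardware threads), and the halving shown here is an assumption that only holds for CPUs with uniform two-way SMT:

```python
import os

# os.cpu_count() returns the number of logical CPUs the OS sees,
# i.e. hardware threads, not physical cores.
logical_cpus = os.cpu_count() or 1
print(f"Logical CPUs (hardware threads): {logical_cpus}")

# Assumption: a CPU with two-way SMT/Hyper-Threading exposes two
# threads per physical core. This does not hold for every part
# (e.g. SMT disabled in firmware, or hybrid core designs).
estimated_cores = max(1, logical_cpus // 2)
print(f"Estimated physical cores (assuming 2-way SMT): {estimated_cores}")
```

For an accurate physical core count, an OS-specific query or a third-party library such as psutil (`psutil.cpu_count(logical=False)`) is needed.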
Mobiles
Mobile devices like smartphones, tablets, and laptops prioritize power efficiency, thermal management, and portability over raw performance. Many smartphones and tablets use ARM-based CPUs for their superior energy efficiency. Both Intel and AMD offer separate mobile CPU models within each generation of their platforms. Unlike desktop CPUs, mobile CPUs often use different form factors and are frequently soldered to the motherboard, making them impossible to replace or upgrade.
______________________________________________________________
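As a study aid, the socket-to-CPU pairings named in this section can be gathered into a simple lookup table. This is an illustrative sketch only, not an authoritative compatibility database; as noted above, real compatibility also depends on the chipset and firmware support:

```python
# Hypothetical lookup table built from socket types named in this
# section; real-world compatibility also depends on chipset and BIOS.
SOCKET_CPU_FAMILIES = {
    "LGA 1700": ["Intel 12th-gen (Alder Lake)", "Intel 13th-gen (Raptor Lake)"],
    "AM5": ["AMD Ryzen 7000 series"],
    "LGA 4189": ["Intel Xeon Scalable"],
    "SP5": ["AMD EPYC (Genoa, Bergamo)"],
}

def families_for_socket(socket_name: str) -> list[str]:
    """Return the CPU families this sketch associates with a socket."""
    return SOCKET_CPU_FAMILIES.get(socket_name, [])

print(families_for_socket("AM5"))   # ['AMD Ryzen 7000 series']
print(families_for_socket("AM99"))  # [] (unknown socket)
```

A lookup like this only answers the physical-socket question; a real compatibility check would also consult the motherboard vendor's qualified CPU list.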