This is a list of interface bit rates, a measure of information transfer rates, or digital bandwidth capacity, at which digital interfaces in a computer or network can communicate over various kinds of buses and channels. The distinction between a computer bus, often closer in space, and larger telecommunications networks can be arbitrary. Many device interfaces or protocols (e.g., SATA, USB, SAS, PCIe) are used both inside many-device boxes, such as a PC, and in one-device boxes, such as a hard drive enclosure. Accordingly, this page lists both internal ribbon and external communications cable standards together in one sortable table.
Most of the listed rates are theoretical maximum throughput measures; in practice, the actual effective throughput is almost inevitably lower in proportion to the load from other devices (network/bus contention), physical or temporal distances, and other overhead in data link layer protocols, etc. The maximum goodput (for example, the file transfer rate) may be even lower due to higher-layer protocol overhead and data packet retransmissions caused by line noise or interference such as crosstalk, or lost packets in congested intermediate network nodes. All protocols lose something, and the more robust ones that deal resiliently with very many failure situations tend to sacrifice more maximum throughput in exchange for higher total long-term rates.
Device interfaces where one bus transfers data via another will be limited to the throughput of the slowest interface, at best. For instance, SATA revision 3.0 (6 Gbit/s) controllers on one PCI Express 2.0 (5 Gbit/s) channel will be limited to the 5 Gbit/s rate and have to employ more channels to get around this problem. Early implementations of new protocols very often have this kind of problem. The physical phenomena on which the device relies (such as spinning platters in a hard drive) will also impose limits; for instance, no spinning platter shipping in 2009 saturates SATA revision 2.0 (3 Gbit/s), so moving from this 3 Gbit/s interface to USB 3.0 at 4.8 Gbit/s for one spinning drive will result in no increase in realized transfer rate.
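As an illustrative sketch of the bottleneck reasoning above, the end-to-end rate of a chain of interfaces can be estimated as the minimum of the individual link rates; the sustained platter rate used below is an assumed figure chosen only to mirror the 2009 example, not a measured value.

```python
# Rough sketch: the effective rate of a chained path is bounded by its slowest link.
# The specific rates below are illustrative assumptions, not measured values.

def effective_rate_gbits(*link_rates_gbits: float) -> float:
    """Return the bottleneck (minimum) rate of a chain of interfaces, in Gbit/s."""
    return min(link_rates_gbits)

# SATA 3.0 controller behind a single PCIe 2.0 lane: capped at the PCIe lane.
print(effective_rate_gbits(6.0, 5.0))    # 5.0

# An assumed 2009-era spinning drive (~1.0 Gbit/s sustained) behind SATA 2.0
# or USB 3.0: the platter, not the interface, is the bottleneck either way.
print(effective_rate_gbits(1.0, 3.0))    # 1.0
print(effective_rate_gbits(1.0, 4.8))    # 1.0
```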
Contention in a wireless or noisy spectrum, where the physical medium is entirely out of the control of those who specify the protocol, requires measures that also use up throughput. Wireless devices, BPL, and modems may produce a higher line rate or gross bit rate, due to error-correcting codes and other physical layer overhead. It is extremely common for throughput to be far less than half of theoretical maximum, though the more recent technologies (notably BPL) employ preemptive spectrum analysis to avoid this and so have much more potential to reach actual gigabit rates in practice than prior modems.
Another factor reducing throughput is deliberate policy decisions made by Internet service providers for contractual, risk management, aggregation saturation, or marketing reasons. Examples are rate limiting, bandwidth throttling, and the assignment of IP addresses to groups. These practices tend to limit the throughput available to each user, but maximize the number of users that can be supported on one backbone.
Furthermore, chips implementing the fastest rates are often simply not available. AMD, for instance, does not support the 32-bit HyperTransport interface on any CPU it has shipped as of the end of 2009. Additionally, WiMAX service providers in the US typically support only up to 4 Mbit/s as of the end of 2009.
Choosing service providers or interfaces based on theoretical maxima is unwise, especially for commercial needs. A good example is large scale data centers, which should be more concerned with price per port to support the interface, wattage and heat considerations, and total cost of the solution. Because some protocols such as SCSI and Ethernet now operate many orders of magnitude faster than when originally deployed, scalability of the interface is one major factor, as it prevents costly shifts to technologies that are not backward compatible. Underscoring this is the fact that these shifts often happen involuntarily or by surprise, especially when a vendor abandons support for a proprietary system.
By convention, bus and network data rates are denoted either in bits per second (bit/s) or bytes per second (B/s). In general, parallel interfaces are quoted in B/s and serial interfaces in bit/s. The more commonly used unit is shown below in bold type.
On devices like modems, bytes may be more than 8 bits long because they may be individually padded out with additional start and stop bits; the figures below will reflect this. Where channels use line codes (such as Ethernet, Serial ATA, and PCI Express), quoted rates are for the decoded signal.
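A hedged sketch of the two conventions above: converting a line rate in bit/s to a payload rate in B/s depends on how many line bits are spent per byte. The 8-N-1 serial framing and the 8b/10b figure below are common cases, assumed here only for illustration.

```python
# Sketch: payload byte rate from a line bit rate, for a few framing assumptions.
# The framings below (8-N-1 serial, 8b/10b line code) are illustrative examples.

def payload_bytes_per_second(line_bits_per_second: float, bits_per_byte_on_wire: float) -> float:
    """Divide the raw line rate by the number of line bits spent per payload byte."""
    return line_bits_per_second / bits_per_byte_on_wire

# Async serial 8-N-1: 1 start + 8 data + 1 stop = 10 line bits per byte.
print(payload_bytes_per_second(9_600, 10))           # 960.0 B/s

# 8b/10b line code (e.g., SATA, PCIe 1.x/2.x): 10 line bits per 8-bit byte.
print(payload_bytes_per_second(3_000_000_000, 10))   # SATA 2.0 at 3 Gbit/s -> 300 MB/s
```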
The figures below are simplex data rates, which may conflict with the duplex rates vendors sometimes use in promotional materials. Where two values are listed, the first value is the downstream rate and the second value is the upstream rate.
As stated above, all quoted bandwidths are for each direction. Therefore for duplex interfaces (capable of simultaneous transmission both ways), the stated values are simplex (one way) speeds, rather than total upstream+downstream.
802.11 networks in infrastructure mode are half-duplex; all stations share the medium. In infrastructure or access point mode, all traffic has to pass through an Access Point (AP). Thus, two stations on the same access point that are communicating with each other must have each and every frame transmitted twice: from the sender to the access point, then from the access point to the receiver. This approximately halves the effective bandwidth.
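A rough sketch of the relay effect described above; the nominal link rate is an assumption chosen only for illustration.

```python
# Sketch: two stations on the same 802.11 access point relay every frame
# through the AP, so the air time per delivered frame roughly doubles.
# The nominal rate below is an illustrative assumption (802.11g link rate).

nominal_rate_mbits = 54.0
effective_station_to_station = nominal_rate_mbits / 2   # each frame is sent twice
print(effective_station_to_station)  # 27.0 Mbit/s, before other protocol overhead
```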
802.11 networks in ad hoc mode are still half-duplex, but devices communicate directly rather than through an access point. In this mode all devices must be able to "see" each other, instead of only having to be able to "see" the access point.
x The LPC protocol includes high overhead. While the gross data rate equals 33.3 million 4-bit transfers per second (or 16.67 MB/s), the fastest transfer type, firmware read, results in 15.63 MB/s. The next-fastest bus cycle, 32-bit ISA-style DMA write, yields only 6.67 MB/s. Other transfers may be as low as 2 MB/s.[54]
y Uses 128b/130b encoding, meaning that about 1.54% of each transfer is used by the interface for framing instead of carrying data between the hardware components at each end of the interface. For example, a single-link PCIe 3.0 interface has an 8 Gbit/s transfer rate, yet its usable bandwidth is only about 7.88 Gbit/s.
z Uses 8b/10b encoding, meaning that 20% of each transfer is used by the interface instead of carrying data between the hardware components at each end of the interface. For example, a single-link PCIe 1.0 interface has a 2.5 Gbit/s transfer rate, yet its usable bandwidth is only 2 Gbit/s (250 MB/s).
w Uses PAM-4 encoding and a 256-byte FLIT block, of which 14 bytes are FEC and CRC, meaning that 5.47% of the total data rate is used for error detection and correction instead of carrying data. For example, a single-link PCIe 6.0 interface has a 64 Gbit/s total transfer rate, yet its usable bandwidth is only 60.5 Gbit/s.
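A hedged sketch of the arithmetic behind the three notes above: usable bandwidth is the raw rate scaled by the fraction of transferred bits or bytes that actually carry payload.

```python
# Sketch: usable bandwidth = raw rate x (payload units / total units on the wire).
# Figures mirror the encoding notes above and are illustrative.

def usable_gbits(raw_gbits: float, payload_units: float, total_units: float) -> float:
    return raw_gbits * payload_units / total_units

print(usable_gbits(2.5, 8, 10))      # PCIe 1.0, 8b/10b     -> 2.0 Gbit/s
print(usable_gbits(8.0, 128, 130))   # PCIe 3.0, 128b/130b  -> ~7.88 Gbit/s
print(usable_gbits(64.0, 242, 256))  # PCIe 6.0, 256 B FLIT with 14 B FEC/CRC -> 60.5 Gbit/s
```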
The table below shows values for PC memory module types. These modules usually combine multiple chips on one circuit board. SIMM modules connect to the computer via an 8-bit- or 32-bit-wide interface. RIMM modules used by RDRAM are 16-bit- or 32-bit-wide.[70] DIMM modules connect to the computer via a 64-bit-wide interface. Some other computer architectures use different modules with a different bus width.
In a single-channel configuration, only one module at a time can transfer information to the CPU. In multi-channel configurations, multiple modules can transfer information to the CPU at the same time, in parallel. FPM, EDO, SDR, and RDRAM memory was not commonly installed in a dual-channel configuration. DDR and DDR2 memory is usually installed in single- or dual-channel configuration. DDR3 memory is installed in single-, dual-, tri-, and quad-channel configurations. Bit rates of multi-channel configurations are the product of the module bit rate (given below) and the number of channels.
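A short sketch of the bandwidth arithmetic above; the DDR3-1600 dual-channel figures are assumed only as an example.

```python
# Sketch: module bandwidth = transfer rate (MT/s) x bus width (bytes), and
# multi-channel bandwidth = module bandwidth x number of channels.
# DDR3-1600 in dual channel is used here as an illustrative assumption.

def module_bandwidth_mb_s(transfers_mt_s: float, bus_width_bits: int = 64) -> float:
    return transfers_mt_s * bus_width_bits / 8

ddr3_1600 = module_bandwidth_mb_s(1600)   # 12,800 MB/s per module (PC3-12800)
dual_channel = ddr3_1600 * 2              # 25,600 MB/s across two channels
print(ddr3_1600, dual_channel)
```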
a The clock rate at which the DRAM memory cells operate. The memory latency is largely determined by this rate. Note that until the introduction of DDR4, the internal clock rate saw relatively slow progress. DDR/DDR2/DDR3 memory uses a 2n/4n/8n (respectively) prefetch buffer to provide higher throughput, while the internal memory speed remains similar to that of the previous generation.
b The "memory speed/clock" advertised by manufactures and suppliers usually refers to this rate (with 1 GT/s = 1 GHz). Note that modern types of memory use DDR bus with two transfers per clock.
RAM memory modules are also utilised by graphics processing units; however, memory modules for those differ somewhat from standard computer memory, particularly with lower power requirements, and are specialised to serve GPUs: for example, GDDR3 was fundamentally based on DDR2. Every graphics memory chip is directly connected to the GPU (point-to-point). The total GPU memory bus width varies with the number of memory chips and the number of lanes per chip. For example, GDDR5 specifies either 16 or 32 lanes per "device" (chip), while GDDR5X specifies 64 lanes per chip. Over the years, bus widths rose from 64-bit to 512-bit and beyond: e.g. HBM is 1024 bits wide.[71] Because of this variability, graphics memory speeds are sometimes compared per pin. For direct comparison to the values for 64-bit modules shown above, video RAM is compared here in 64-lane lots, corresponding to two chips for those devices with 32-bit widths. In 2012, high-end GPUs used 8 or even 12 chips with 32 lanes each, for a total memory bus width of 256 or 384 bits. Combined with a transfer rate per pin of 5 GT/s or more, such cards could reach 240 GB/s or more.
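A hedged sketch of the per-pin bandwidth arithmetic in the paragraph above; the 384-bit bus at 5 GT/s is the example the text itself cites, and the 256-bit case is included only for comparison.

```python
# Sketch: GPU memory bandwidth (GB/s) = bus width (bits) / 8 x transfer rate per pin (GT/s).
# The 384-bit bus at 5 GT/s mirrors the 2012 high-end example above.

def gpu_memory_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    return bus_width_bits / 8 * transfer_rate_gt_s

print(gpu_memory_bandwidth_gb_s(384, 5.0))   # 240.0 GB/s
print(gpu_memory_bandwidth_gb_s(256, 5.0))   # 160.0 GB/s
```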