The copper telephone wire running into nearly every home and office in the developed world had been installed, in many cases, decades earlier. It had been engineered to carry voice — frequencies between roughly 300 and 3400 hertz. But copper wire is capable of carrying signals far beyond that range. The telephone companies had simply chosen not to use those frequencies, because voice was all they needed.
The broadband revolution of the late 1990s and 2000s was built on a single insight: the wire can do much more than we have been asking of it. ADSL, the technology that brought broadband to most of the world's households, did not require new cables. It used the same copper pair that had carried telephone calls for a century — and simply used more of it.
The transition from dial-up to broadband was not merely a speed increase. It changed the fundamental nature of the internet connection. Dial-up was a telephone call: you dialled, you connected, you were billed by the minute or the hour, and when you were done you hung up. Broadband was infrastructure: always on, always available, billed by the month like electricity or water. This shift — from connection as event to connection as utility — changed how people used the internet as profoundly as the internet itself had changed how people communicated.
The technology that made DSL possible had been understood since the late 1980s. ADSL — Asymmetric Digital Subscriber Line — was first proposed by Joseph Lechleider at Bellcore in 1988 and standardised by the ITU-T as G.992.1 (informally G.dmt) in 1999. Commercial trials began in the mid-1990s, and deployment expanded rapidly through 2000–2005.
The principle was straightforward: divide the frequency spectrum of the copper wire into three separate bands. The lowest band, below 4 kHz, carried the ordinary telephone voice signal, unchanged. Above that, a band from roughly 25 kHz to 138 kHz carried the upstream data channel. Above that, a much wider band from 138 kHz to 1.1 MHz carried the downstream data channel. A small passive filter kept the voice and data signals from interfering with each other: either a microfilter plugged into each telephone socket, or, in many deployments, a single central splitter installed at the point where the telephone line entered the building.
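The band plan described above can be sketched as a simple classification. The band edges below follow the figures in the text; the exact edges vary slightly between ADSL annexes, so treat this as illustrative rather than definitive:

```python
# Illustrative sketch of the ADSL-over-POTS frequency plan described above.
# Band edges (4 kHz voice, 25-138 kHz upstream, 138 kHz - 1.1 MHz downstream)
# follow the text; real band plans differ slightly between annexes.

def adsl_band(freq_hz: float) -> str:
    """Classify a frequency on the copper pair into its ADSL band."""
    if freq_hz < 4_000:
        return "voice"          # ordinary telephony, passed through by the splitter
    elif 25_000 <= freq_hz <= 138_000:
        return "upstream"       # data from the subscriber towards the exchange
    elif 138_000 < freq_hz <= 1_104_000:
        return "downstream"     # data from the exchange to the subscriber
    else:
        return "guard/unused"   # guard bands and frequencies above the plan

print(adsl_band(1_000))       # a voice frequency
print(adsl_band(100_000))     # inside the upstream band
print(adsl_band(500_000))     # inside the downstream band
```

The splitter and microfilter perform exactly this separation in analogue hardware: a low-pass path for the telephone, a high-pass path for the modem.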
ADSL did not use a single carrier signal across its frequency band. Instead it used a technique called DMT — Discrete Multitone Modulation — which divided the available spectrum into 256 separate sub-channels, each 4.3125 kHz wide. Think of it as 256 tiny modems, each operating independently on its own narrow slice of frequency.
At the start of each ADSL connection, the modem tested each sub-channel individually to determine how much noise was present in that frequency range. Sub-channels with a good signal-to-noise ratio were loaded with high-order QAM constellations — up to 15 bits per symbol. Sub-channels with high noise were loaded with fewer bits, or disabled entirely. The modem then continuously monitored conditions and adapted the loading to match.
This adaptive loading was why ADSL performance varied so much from one household to another. A house close to the telephone exchange, on a clean modern cable, might achieve 8 Mbit/s downstream. A house far from the exchange, on an old cable with bridge taps and corroded joints, might manage only 512 kbit/s. The modem was doing its best with whatever the wire could offer — automatically, transparently, every time you connected.
ADSL was asymmetric by design: downstream speeds (to the user) were much higher than upstream speeds (from the user). The original ADSL standard offered up to 8 Mbit/s downstream and 800 kbit/s upstream. Its successor, ADSL2+ (ITU G.992.5, 2003), doubled the downstream frequency range to 2.2 MHz and raised the maximum downstream speed to 24 Mbit/s, though line length and quality remained the practical limiting factors.
VDSL (Very-high-bit-rate DSL) pushed further still, using frequencies up to 12 MHz and achieving downstream speeds of up to 52 Mbit/s — but only over very short copper runs, typically less than 300 metres. VDSL was most effective when deployed with fibre running close to the premises and a short copper tail for the final connection — an architecture called FTTC (Fibre to the Cabinet).
The first generation of ADSL modems were, like their dial-up predecessors, simple bridges: a box that connected a single computer to the DSL line via a USB or Ethernet cable. The modem handled the physical layer — the DMT modulation and the ATM framing — while the PPPoE or PPPoA session that authenticated the subscriber ran either on the modem itself or on the attached computer, which received a raw IP connection.
Very quickly, the standalone modem was superseded by the ADSL router: a combined device that incorporated the DSL modem, a router (to share the connection among multiple computers), a firewall, a DHCP server, and usually a wireless access point. By the mid-2000s, the device that most people called their "modem" was actually a modem-router combination — and the word modem was already beginning to lose its precise technical meaning in everyday use.
The leading manufacturers of DSL chipsets in this era were Infineon (formerly Siemens Semiconductor), Broadcom, Texas Instruments (which acquired Amati Communications, the pioneers of DMT), and Globespan (which merged with Virata to form GlobespanVirata, itself later acquired by Conexant). The modem hardware that sat inside routers from Netgear, Linksys, D-Link, Thomson, and BT was almost always built around chipsets from one of these four companies.
While telephone companies were deploying ADSL over their copper networks, cable television companies were pursuing a parallel path. Their coaxial cable infrastructure — already running into millions of homes to deliver television signals — was capable of carrying data at very high speeds. The technical challenge was that cable TV networks had originally been designed as one-way broadcast systems: signal flowed from the headend to the subscriber, never in the other direction. Making them bidirectional required significant hardware upgrades at distribution nodes throughout the network.
The standard that made cable internet possible was DOCSIS — Data Over Cable Service Interface Specification — developed by CableLabs and first published in 1997. DOCSIS defined how cable modems should communicate with the cable company's headend equipment, ensuring interoperability between modems from different manufacturers and headend systems from different vendors.
A cable TV network carries dozens or hundreds of television channels, each occupying a 6 MHz (in North America) or 8 MHz (in Europe) frequency slot on the coaxial cable. A cable modem borrows one or more of these slots for data. The downstream channel — from the headend to your home — uses a 6 or 8 MHz slot in the upper frequency range, typically between 108 MHz and 750 MHz or higher, and modulates data onto it using QAM-64 or QAM-256. A single 6 MHz downstream channel with QAM-256 carries about 38 Mbit/s.
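The 38 Mbit/s figure follows from simple arithmetic. The symbol rate and overhead factor below are approximate values for a 6 MHz QAM-256 channel, used here only to show the calculation:

```python
# Back-of-the-envelope check of the ~38 Mbit/s figure above. A 6 MHz
# downstream slot carries roughly 5.36 Msym/s after filter roll-off (an
# assumed, approximate value); QAM-256 encodes 8 bits per symbol, and
# FEC/framing overhead (assumed ~11% here) trims the raw rate to payload.

symbol_rate = 5.36e6                       # symbols per second in a 6 MHz slot
bits_per_symbol = 8                        # QAM-256: 2^8 constellation points
raw_rate = symbol_rate * bits_per_symbol   # raw channel rate in bit/s
payload_rate = raw_rate * 0.89             # usable payload after overhead

print(f"raw: {raw_rate / 1e6:.1f} Mbit/s, payload: {payload_rate / 1e6:.1f} Mbit/s")
```

The raw rate works out to just under 43 Mbit/s, and after overhead the payload lands at roughly the 38 Mbit/s quoted above.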
The upstream channel — from your home back to the headend — is more complex, because many subscribers share the same return path. DOCSIS uses a time-division multiple access (TDMA) scheme: the headend allocates specific time slots to each modem for upstream transmissions. Each modem waits for its allocated slot before sending. This coordination is managed by the Cable Modem Termination System (CMTS) at the headend.
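The grant mechanism can be illustrated with a toy scheduler — a deliberate simplification of what a real CMTS does. The modem IDs, slot counts, and first-come allocation policy are all invented for the sketch:

```python
# Minimal sketch of the upstream TDMA idea: the CMTS builds a schedule
# (a "MAP" in DOCSIS terms) granting each modem a window of mini-slots on
# the shared return path. Real CMTS schedulers weigh service flows,
# priorities, and contention regions; this toy version allocates
# contiguous slot ranges first-come, first-served.

def build_upstream_map(requests: dict[str, int], total_slots: int) -> dict[str, range]:
    """Allocate contiguous mini-slot ranges to requesting modems, in order,
    until the scheduling interval is full."""
    grants: dict[str, range] = {}
    cursor = 0
    for modem_id, wanted in requests.items():
        granted = min(wanted, total_slots - cursor)
        if granted <= 0:
            break                      # interval exhausted; remaining modems wait
        grants[modem_id] = range(cursor, cursor + granted)
        cursor += granted
    return grants

# Three modems contend for 100 mini-slots in one scheduling interval;
# the last modem gets only the slots that remain.
print(build_upstream_map({"modem-a": 40, "modem-b": 40, "modem-c": 40}, 100))
```

Each modem then transmits only inside its granted range, which is what keeps many subscribers from colliding on the single shared return path.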
Unlike DSL, where each subscriber has a dedicated copper pair all the way to the exchange, cable subscribers share the coaxial segment of their neighbourhood. If many people in the same area are downloading heavily simultaneously, they share the available bandwidth. This is why cable internet performance could vary noticeably by time of day — a phenomenon familiar to anyone who used cable internet in the early 2000s.
DOCSIS evolved through several versions. DOCSIS 1.0 (1997) offered up to 40 Mbit/s downstream and 10 Mbit/s upstream. DOCSIS 2.0 (2001) roughly tripled upstream capacity, to about 30 Mbit/s. DOCSIS 3.0 (2006) introduced channel bonding — combining multiple downstream channels to multiply throughput — enabling speeds of 160 Mbit/s and beyond. By the end of the 2000s, cable modems were competitive with or superior to DSL for most residential users in urban and suburban areas.
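The arithmetic of channel bonding is straightforward — a sketch, assuming the roughly 38 Mbit/s usable payload of a single 6 MHz QAM-256 downstream channel (8 MHz European channels carry somewhat more):

```python
# Channel bonding as described above simply aggregates per-channel payload:
# N bonded downstream channels, each carrying ~38 Mbit/s (the assumed
# usable payload of one 6 MHz QAM-256 channel).

PER_CHANNEL_MBPS = 38

def bonded_downstream_mbps(channels: int) -> int:
    """Aggregate downstream payload for N bonded channels."""
    return channels * PER_CHANNEL_MBPS

print(bonded_downstream_mbps(4))   # an early DOCSIS 3.0 four-channel tier
print(bonded_downstream_mbps(8))   # doubling the bonded group doubles it again
```

Four bonded channels give roughly 150 Mbit/s of payload, which is the scale behind the "160 Mbit/s and beyond" headline figures of early DOCSIS 3.0.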
The transition from dial-up to broadband was rapid in urban areas and agonisingly slow in rural ones. In the United States, the number of broadband subscribers overtook dial-up subscribers for the first time in 2004, according to FCC data. In the United Kingdom, broadband passed dial-up in late 2005. In South Korea, the transition happened earlier than almost anywhere else: by 2002, South Korea already had among the highest broadband penetration rates in the world, driven by aggressive government investment and a dense urban population that made DSL deployment economically attractive.
For those who made the switch, the experience was transformative. Broadband did not just make the same activities faster. It enabled activities that had been impractical on dial-up: streaming audio, then video; online gaming with low latency; voice over IP telephony; video calling; large file downloads measured in minutes rather than days. The web itself changed in response — pages grew richer, images larger, JavaScript more ambitious — in a feedback loop between what broadband made possible and what designers and developers chose to build.
The old rituals of dial-up — the wait for the handshake, the busy signal on the ISP's line, the decision to stay connected a little longer because reconnecting would take time, the careful management of download queues to run overnight — became memories. Connection was now simply a condition of modern life, like running water or electric light. The modem had become invisible.
Through the second half of the 2000s, it became clear that copper wire, however cleverly exploited by ADSL and VDSL, had physical limits that fibre optic cable did not. Fibre carried light rather than electrical signals, was immune to electromagnetic interference, suffered no signal degradation over distance at the speeds involved, and could carry bandwidth orders of magnitude greater than any copper technology.
Japan and South Korea led the deployment of FTTH (Fibre to the Home) networks through the 2000s, with government support and high urban density making the economics work. In Europe and North America, the dominant model was FTTC (Fibre to the Cabinet): fibre ran from the telephone exchange to a street cabinet close to the premises, with a short copper tail for the final metres. This gave VDSL2 the short line length it needed for high speeds, without the cost of digging fibre into every building.
The modem for a fibre connection was no longer a modem in the strict sense. It did not modulate audio tones onto a copper wire. It converted between the optical signals on the fibre and the electrical signals in the home network. The device was called an ONT (Optical Network Terminal) or ONU (Optical Network Unit). The word modem persisted in popular use — people still called the box on their desk their modem — but the technology it described had moved far beyond the modulator-demodulator concept that the word originally named.