Saturday, September 26, 2009

An Explanation of Wireless Communication Technology Part 1: The History of G

Verizon calls its wireless network “the largest and most reliable 3G network in America.” Sprint insists that its 3G network is the most “dependable.” Now Sprint is telling us that its new 4G network is coming to select major cities, declaring, “This is what’s happening now.”

There are many companies out there that tout the number of Gs they offer, but to the layperson (like me), this quantity is poorly understood at best. In a quick and highly informal Facebook poll, I asked 100 well-educated people two questions: 1. Do you know what the G actually stands for? and 2. Do you understand the differences between the numbered G networks?

Out of 57 respondents, only fifteen answered yes to the first question (26%) and only ten confidently answered yes to the second (17.5%). Sixteen people guessed; half of them came close.

Because of these results, I decided to embark on a research project through the recesses of the internet in an attempt to decode the terminology, the theory, and the massive sea of acronyms that make up modern wireless communication technology. In two relatively short segments, I will attempt to explain what I’ve learned.

The G stands, quite simply, for generation. A 3G network means that we are currently operating our cell phones in the third generation of wireless systems. An organization called the International Telecommunication Union (ITU) sets all international telephone standards, and it determines which interface falls into which G-category.

The designation was adopted just after the introduction of what is now referred to as second generation wireless technology, and the previous systems were named retroactively by the ITU. To remain accurate, 0G was the name given to the pre-cellular mobile communication that emerged in the 1960s. This innovation grew out of two-way radio, but with a key difference: instead of multiple receivers tuned to one shared frequency, with one person at a time allowed to Push-To-Talk (PTT), it connected a wireless radio device to the Public Switched Telephone Network (PSTN) and assigned each receiving unit its own telephone number.

The 0G technology was built on the same analog radio system many people are familiar with. Sound comes in the form of a continuous, time-varying signal (a wave) with a certain frequency of oscillation. Just as an antenna helps to pick up a radio station, these 0G phones had transceivers (hybrid transmitter-receivers) mounted in the back of a car or truck that sent and received voice data. This behemoth was connected by a hard line to a five-pound brick of a handset near the driver’s seat. The standard included the Improved Mobile Telephone Service (IMTS), released in the US by Bell in 1964, Autoradiopuhelin (ARP), launched in Finland in 1971, and B-Netz, launched in West Germany in 1972.

First generation wireless technology was introduced in the 1980s, and this is when the term cell began replacing the word mobile. A “cell system” is a series of well-placed radio towers that divide a coverage area into sites. These unit sites are arranged in an array, just like biological cells. Since any electrical signal decays over distance, this cell grid optimizes coverage over a given space.

The FCC (the body that governs the airwaves) granted cell phone technology a bandwidth (a range of frequencies) to operate calls on. In 1G standards, all signal was analog (except the connection between radio towers, which was digital). The voice signal was modulated onto a higher “carrier frequency” for transmission over long distances (audio frequencies are far too low to radiate efficiently from an antenna of practical size), and each conversation took place on a different channel (another word for frequency).
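Modulation can be sketched in a few lines of Python. To be clear, the 100 Hz “voice” tone, the 1 kHz carrier, and the `energy_at` helper below are all invented for illustration: the point is that multiplying the voice by the carrier shifts its energy up to the carrier frequency, so each conversation can be parked on its own channel.

```python
import math

fs = 8000                      # sample rate (Hz)
N = fs                         # one second of samples
t = [n / fs for n in range(N)]

voice   = [math.cos(2 * math.pi * 100 * x) for x in t]    # stand-in "voice" tone
carrier = [math.cos(2 * math.pi * 1000 * x) for x in t]   # 1 kHz carrier

# Amplitude modulation: the carrier's strength follows the voice signal.
am = [(1 + 0.5 * v) * c for v, c in zip(voice, carrier)]

def energy_at(signal, freq_hz):
    """Correlate the signal against a test tone to gauge energy at that frequency."""
    c = sum(s * math.cos(2 * math.pi * freq_hz * n / fs) for n, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * freq_hz * n / fs) for n, s in enumerate(signal))
    return math.hypot(c, q)

# After modulation, the energy sits around the carrier, not at the voice frequency.
print(energy_at(am, 1000) > energy_at(am, 100))   # → True
```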

Cell grids operate on two important ideas. One is “frequency reuse,” which is the repeated use of radio channels of the same frequency to cover different areas that are separated by a significant distance. The second idea is “cell splitting,” which is as simple as it sounds: a large cell is split into an array of smaller cells when phone traffic is high. Together these ideas helped manage phone traffic, which was handled under the 1G standard by two different schemes: Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA).
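How far apart do two cells have to be before it’s safe to reuse a frequency? For the classic hexagonal-cell geometry there’s a textbook formula (not something from my research above, so take it as a side note): the reuse distance D depends on the cell radius R and the cluster size N (how many cells are in each repeating group), via D = R·√(3N).

```python
import math

def reuse_distance(cell_radius_km, cluster_size):
    """Minimum frequency-reuse distance for hexagonal cells: D = R * sqrt(3N).

    cluster_size is the number of cells per repeating group; valid values
    follow N = i**2 + i*j + j**2, e.g. 1, 3, 4, 7, 12...
    """
    return cell_radius_km * math.sqrt(3 * cluster_size)

# A common 7-cell cluster with 2 km cells: the same channel
# can safely be reused roughly 9.2 km away.
print(round(reuse_distance(2.0, 7), 1))   # → 9.2
```

Notice that “cell splitting” (shrinking R) shrinks the reuse distance proportionally, which is exactly why smaller cells let a crowded city squeeze more calls out of the same bandwidth.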

In FDMA, one channel is needed for each call, so cell sites are constantly searching out and storing free channels, waiting to distribute them. When one call is finished, its channel is freed up and put back on the list. If a person is talking while moving between cells, the call is “handed off” to a free channel in the new site. If there are no free channels, the call is lost. In TDMA, several calls share a single channel at once: each call is assigned recurring time slots, and the conversations are interleaved in turn. This results in a slight delay between the two sides of the conversation, but it packs more calls into the same slice of spectrum, making it far more likely that a call can be placed at any time (and kept while moving).
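The FDMA bookkeeping described above can be sketched as a simple free-channel pool per cell site. The class and function names here are my own invention, just to make the hand-off logic concrete:

```python
class CellSite:
    """Toy FDMA cell site: a pool of free channels, one per active call."""

    def __init__(self, channels):
        self.free = set(channels)   # channels not currently carrying a call

    def place_call(self):
        """Assign a free channel, or return None (call blocked) if none remain."""
        return self.free.pop() if self.free else None

    def end_call(self, channel):
        self.free.add(channel)      # the channel goes back on the free list

def hand_off(old_site, new_site, channel):
    """Moving between cells: grab a free channel in the new site, release the old.

    If the new site has no free channels, the call is lost (returns None)."""
    new_channel = new_site.place_call()
    if new_channel is not None:
        old_site.end_call(channel)
    return new_channel

site_a = CellSite([1, 2, 3])
site_b = CellSite([4])

ch = site_a.place_call()            # caller gets a channel in site A
ch = hand_off(site_a, site_b, ch)   # drives into site B: handed off to channel 4
print(ch)                           # → 4
```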

The 1G Wireless Common Carriers (WCCs, otherwise known as cell phone companies) of note were Nordic Mobile Telephone (NMT) in Europe and Advanced Mobile Phone System (AMPS), operated in the US by none other than Bell.

The move to 2G was launched in Finland in 1991, bringing with it the transformation to an all-digital signal. Multiplexing became more efficient through the use of compression and codecs. This generation of wireless networks also introduced the scheme called Code Division Multiple Access (CDMA). CDMA grew out of military spread-spectrum communication, and it works by digitizing each call, stamping its data with a unique spreading code, and transmitting every call over one shared band at the same time; the receiver uses the same code to pick its own call back out of the mix and convert it back into analog sound.
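A toy sketch of CDMA’s core trick (the codes and bit values here are invented for illustration): each caller’s bits are multiplied by that caller’s own orthogonal spreading code, both transmissions add together in the air, and each receiver correlates the composite signal against its code to recover just its own call.

```python
# Toy CDMA: two callers share the same band at the same time.
# Each gets an orthogonal spreading code (length-4 Walsh codes).
CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]

def spread(bits, code):
    """Each data bit (+1/-1) is stretched into len(code) 'chips'."""
    return [bit * chip for bit in bits for chip in code]

def despread(signal, code):
    """Correlate against a code to pull one caller's bits back out of the mix."""
    n = len(code)
    return [1 if sum(signal[i + j] * code[j] for j in range(n)) > 0 else -1
            for i in range(0, len(signal), n)]

bits_a = [1, -1, 1]
bits_b = [-1, -1, 1]

# Both transmissions add together in the air into one composite signal.
composite = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

print(despread(composite, CODE_A))   # → [1, -1, 1]   (caller A recovered)
print(despread(composite, CODE_B))   # → [-1, -1, 1]  (caller B recovered)
```

Because the codes are orthogonal (their chip-by-chip products sum to zero), each caller’s correlation cancels the other caller out entirely, which is what lets everyone talk on the same frequencies at once.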

The benefits of this new digital encoding included higher efficiency (more compressed calls can be packed into a bandwidth) and a wider range of coverage (less decay over distance). Additionally, 2G saw the introduction of slow data-transmission services such as SMS (Short Message Service) text messaging. Even more compelling for the second generation, digital handsets transmitted at lower power than their analog ancestors. This meant that cell phones could be smaller and could operate on relatively little battery power, which made them more affordable and allayed some of the public’s health-related fears. Cell towers and radio equipment also became less expensive, accelerating industry growth and broadening the range of coverage not just across the country, but across the world. Fraud, eavesdropping, and phone number duplication became much less likely, improving personal security and privacy.

There were some disadvantages to the switch. Digital signal is weaker, and in less populated areas it was often not sufficient to reach a cell tower. Additionally, while a digital signal doesn’t suffer as much decay as its predecessor, it fails abruptly where an analog signal degrades smoothly. What this translates to is that as you move out of range with an analog device, the signal gets noisy and jumbled, but it doesn’t just suddenly drop off altogether, which is exactly what happens with digital devices (this is where the “dropped call” became a culturally understood event). Lastly, compressing sound data can take away some of the tonality and detail, so while the signal is cleaner, it has lost some of the complexity of the original human voice that produced it.

Please come back next Saturday for the thrilling part 2: The Modern G.