Technology Has Changed Television

Television (TV) is a form of mass media that uses electronic transmission to carry moving images and sound from a source to a recipient. It has had a significant impact on society by extending the senses of vision and hearing beyond the limits of physical distance. Originally envisioned in the early 20th century as a possible medium for interpersonal and educational communication, it developed by mid-century into a vibrant broadcast medium, using the model of broadcast radio to bring news and entertainment to audiences worldwide. Television is now delivered over a variety of platforms: by terrestrial radio waves ("over the air," the traditional broadcast TV); by coaxial cable (cable TV); by satellites in geostationary Earth orbit that relay television signals to the ground (direct broadcast satellite, or DBS, TV); by Internet streaming; and by optical recording on digital video discs (DVDs) and Blu-ray discs.

The first technological standards for modern television, covering both monochrome (black and white) and color, were set in the mid-1900s. Improvements have been made continually since then, and the early 21st century brought significant changes in television technology. Much attention was given to wide-screen viewing, to increasing picture resolution (high-definition television, or HDTV), and to changing the dimensions of the television receiver. In addition, digitally encoded television signals were introduced in order to provide interactive services and to broadcast multiple programs in the channel space previously occupied by one.

Despite this continual technological evolution, modern television is best understood by first learning the history and principles of monochrome television and then extending that understanding to color. Therefore, this article focuses on fundamental principles and major developments: basic knowledge that is needed to appreciate and understand future technical advances and improvements. For the history and evolution of TV programming, see "television in the United States"; American programs, like American popular culture in general in the 20th and early 21st centuries, have spread far beyond the boundaries of the United States and have had a pervasive influence on global popular culture.

The Development of Television Systems
Mechanical systems

The desire to see far-off places is as old as the human imagination. Priests in ancient Greece studied the entrails of birds, trying to divine what the birds had seen as they flew over the horizon. They believed that their gods, sitting in comfort on Mount Olympus, could watch human activity anywhere in the world. And Shakespeare's play Henry IV, Part 2 opens by introducing Rumour, a character on whom the other characters rely for news of what is happening in the far corners of England.
For a very long time it remained only a dream, until the invention of television began with an accidental discovery. In 1872, while investigating materials for use in the transatlantic cable, English telegraph worker Joseph May noticed that the electrical conductivity of a selenium wire varied. Closer examination showed that the wire, which happened to be lying on a table near a window, changed in conductivity when a ray of sunlight fell on it. Although little understood at the time, this chance observation provided the starting point for converting light into an electric signal.

The basis of all subsequent television was laid out in an article by French engineer Maurice LeBlanc, published in the journal La Lumière électrique in 1880. LeBlanc proposed a scanning mechanism that would take advantage of the retina's ability to retain a visual image briefly. He envisioned a photoelectric cell that would look at only one portion of the picture to be transmitted at a time. Starting at the upper left corner of the picture, the cell would proceed to the right, then jump back to the left, but one line lower, continuing in this way until the whole picture had been scanned, much as the eye reads a page of text, and transmitting information about the amount of light seen at each point. The transmitter and receiver would be kept in step, allowing the receiver to rebuild the original image line by line.


The concept of scanning, which made it possible to transmit a complete image over a single wire or channel, is the foundation of all television, and it survives to this day. LeBlanc, however, never built a working machine. Nor was the next man to advance television, Paul Nipkow, a German engineer who invented the scanning disk, able to do so. Nipkow's 1884 design for an Elektrisches Teleskop was based on a simple rotating disk perforated with a series of holes arranged in an inward spiral, positioned so as to intercept the light reflected from the subject. As the disk turned, the outermost hole would move across the scene, letting through light from the first "line" of the picture. The next hole would do the same, slightly lower, and so on. One complete revolution of the disk would yield a complete picture, or "scan," of the subject.

It was this principle that John Logie Baird in Britain and Charles Francis Jenkins in the United States eventually used to build the first successful televisions. (In a photograph from 1925–26, Baird stands by his television transmitter; to his left is "Stookie Bill," a ventriloquist's dummy that was scanned by the rotating Nipkow disk to produce an image signal.) The question of priority depends on how one defines television. Jenkins transmitted a still picture by radio waves in 1922, but the first true television broadcast was Baird's transmission of a live human face in 1925.

For the most part, the efforts of Jenkins and Baird were met with indifference or scorn. As early as 1880 an article in the British journal Nature had speculated that television was possible but not worthwhile: the cost of building a system would never be recovered, because there was no way to make money from it. A later article in Scientific American suggested that television might have a few uses, but entertainment was not among them. Most people thought the idea absurd.

Not everyone was impressed. C.P. Scott, editor of the Manchester Guardian, warned: "Television? The word is half Latin and half Greek. No good will come of it." Most important, the novelty of the new technology quickly wore off. The pictures, made up of only 30 lines repeating roughly 12 times per second, flickered badly on tiny receiver screens only a few inches high. The programs were simple, repetitive, and ultimately boring. Yet even as the mechanical boom collapsed, a rival technology was developing in the realm of electrons.

Electronic systems

The main problems with mechanical scanning were, ultimately, its low scan rate, which produced a flickering image, and the relatively large size of each hole in the disk, which gave poor resolution. In 1908 the Scottish electrical engineer A.A. Campbell Swinton proposed abandoning spinning disks altogether in favor of "two beams of cathode rays," that is, beams of electrons generated in a vacuum tube. Guided by magnetic or electric fields, Swinton argued, the beams could "paint" a fleeting image onto the phosphorescent inner coating of a glass tube's screen. Because the beams travel at nearly the speed of light, there would be no flicker, and their tiny size would give excellent resolution. Swinton never built a set, because in his view the financial return would not justify the effort, but he was unaware that similar work had already begun in Russia. In 1907 Boris Rosing, a lecturer at the St. Petersburg Institute of Technology, assembled an apparatus consisting of a mechanical scanner and a cathode-ray-tube receiver. Rosing is not known to have ever demonstrated a working television, but he had a student, Vladimir Zworykin, who was fascinated by the work and who eventually immigrated to the United States.

Zworykin, employed in 1923 by the Westinghouse Electric Company in Pittsburgh, Pennsylvania, filed a patent application for an all-electronic television system, though he was not yet able to build and demonstrate it. In 1929 he persuaded David Sarnoff, vice president and general manager of Westinghouse's parent company, the Radio Corporation of America (RCA), to fund his research, projecting that with $100,000 he could produce a workable electronic television system within two years. Meanwhile, a rudimentary electronic system had already been demonstrated in San Francisco in 1927 by Philo Farnsworth, a young man with only a high-school education. Farnsworth had raised research money by convincing his backers that he could deliver a commercially viable television system within six months for an investment of only $5,000. In the end, it took the work of both men and nearly $50 million before anyone saw a profit.


Color Television

Color television was by no means a new idea. In the late 1800s a Russian scientist, A.A. Polumordvinov, devised a system of spinning Nipkow disks and concentric cylinders with slits covered by red, green, and blue filters. But he was far ahead of the technology of his day: even the simplest black-and-white television still lay decades in the future. In 1928 Baird demonstrated a color system in London, using a Nipkow disk with three spirals of 30 apertures, one spiral for each primary color in sequence. At the receiver the light source consisted of two gas-discharge tubes, of mercury vapor and helium, for green and blue, and a neon tube for red.

Many inventors of the early 20th century devised color systems that were sound on paper but depended on technology that did not yet exist. Their basic idea, later called the "sequential" system, was to scan the image through filters of the three primary colors (red, green, and blue) in turn. At the receiving end the three component images would be reproduced in rapid succession, so quickly that the human eye would perceive the original multicolored picture. Unfortunately, the crude television equipment of the day could not handle the high scanning rate that this method required. Furthermore, the pictures could not be reproduced by existing black-and-white receivers. For this reason, sequential systems came to be called "noncompatible."

An alternative approach, far more difficult in practice and at first even daunting, was a "simultaneous" system, which would transmit the three primary-color signals at the same time and would be "compatible" with black-and-white receivers already in use. Harold McCreary designed such a system using cathode-ray tubes in 1924. He intended to scan each of the three primary-color components of an image with a separate cathode-ray camera. He would then transmit the three signals simultaneously and use a separate cathode-ray tube at the receiving end for each color. In each tube, phosphors coating the "screen" end would glow in the appropriate color when struck by the incoming electron beam. The result would be three images, each in a single primary color, which a system of mirrors would then combine into one full-color picture. Although McCreary never got this apparatus to work, it is significant as the first patented simultaneous system to employ a separate camera tube for each primary color and luminescent color phosphors at the receiving end. In 1929 Herbert Ives and colleagues at Bell Laboratories used a mechanical method to transmit 50-line color television images from New York City to Washington, D.C.; this system sent the three primary-color signals simultaneously over three separate circuits.

Following World War II, the Columbia Broadcasting System (CBS) demonstrated a sequential color system developed by Peter Goldmark. Its combination of spinning red, green, and blue filter wheels and cathode-ray tubes was so impressive that The Wall Street Journal declared there was "little doubt that color television [had] reached the perfection of black and white." This marked the start of a protracted battle between CBS and RCA over the future of color television. While CBS pressed the Federal Communications Commission (FCC) to approve the Goldmark system for commercial television, Sarnoff warned against adopting a "horse-and-buggy" system that was incompatible with monochrome TV. At the same time, he spurred his team at RCA to develop an all-electronic color system that would be compatible with existing black-and-white sets.

Digital Television

Digital television technology came to prominence in the 1990s. A 1987 demonstration by NHK, Japan's public television network, of a new analog high-definition television (HDTV) system spurred the American industry into action. The Federal Communications Commission (FCC) announced an open competition to develop American HDTV, and in June 1990 the General Instrument Corporation (GI) shocked the industry by presenting the first all-digital television system in history. The GI system, designed by Korean-born engineer Woo Paik, could send the data needed to produce a 1,080-line color picture through a conventional television channel and display it on a wide-screen receiver. Bandwidth had previously been the chief obstacle to digital television: once digitized, even a standard-definition television (SDTV) transmission would occupy more than ten times the radio-frequency space of conventional analog television, which is normally carried in a six-megahertz channel.

To be a practical replacement, a digital HDTV signal would have to be compressed to roughly 1 percent of its original volume. The GI team overcame the problem by transmitting, after an initial complete frame, only the changes in the image from one frame to the next.
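The general idea of transmitting only interframe changes can be illustrated with a small sketch. This is a toy model in Python (the function name and threshold parameter are invented for illustration), not a description of GI's actual compression scheme, which was far more sophisticated.

```python
import numpy as np

def encode_frames(frames, threshold=0):
    """Toy inter-frame coder: send the first frame whole, then only the
    pixels that changed from the previous frame (a simplified stand-in
    for the 'send only the changes' idea, not GI's actual algorithm)."""
    prev = None
    for frame in frames:
        if prev is None:
            yield ("key", frame)                      # full reference frame
        else:
            diff = frame.astype(int) - prev.astype(int)
            changed = np.nonzero(np.abs(diff) > threshold)
            # transmit only coordinates and new values of changed pixels
            yield ("delta", list(zip(changed[0], changed[1], frame[changed])))
        prev = frame

# Example: two nearly identical 4x4 "frames" -> the second costs only 1 pixel.
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy(); f1[2, 3] = 255
for kind, payload in encode_frames([f0, f1]):
    print(kind, payload if kind == "delta" else payload.shape)
```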

Within a few months of GI's announcement, both the Zenith Electronics Corporation and the David Sarnoff Research Center (formerly RCA Laboratories) unveiled digital HDTV systems of their own. In 1993 these laboratories joined forces to form the "Grand Alliance" to develop commercially viable HDTV. Meanwhile, a whole new set of possibilities beyond HDTV emerged. Instead of broadcasting a single high-definition picture, digital broadcasters could "multicast" five or six standard-definition programs over one six-megahertz channel. Digital transmission also made "smart TV" a reality, with the household receiver functioning as a self-contained computer: in addition to pay-per-view or interactive entertainment, broadcasters could offer computer services such as email, two-way paging, and Internet access.

In late 1996 the FCC adopted standards for all-digital television in the United States, both high-definition and standard-definition, as recommended by the Advanced Television Systems Committee (ATSC). Under the FCC's plan, every station in the country would be transmitting digitally on a second channel by May 1, 2003. Stations would also continue to broadcast in analog, "simulcasting" programs in both formats so that the public could adjust to the change gradually. In 2006 analog broadcasts would cease, old television sets would become obsolete, and broadcasters would turn their original analog spectrum over to the government for auction.


Principles of Television Systems
The Television Picture
Human Perception of Motion

A television system comprises equipment at the point of origination, equipment in the viewer's home, and equipment that transmits the television signal from one to the other. As noted at the beginning of this article, the purpose of all this equipment is to extend human sight and hearing beyond their natural limits of physical distance. The design of a television system must therefore take into account the essential characteristics of these senses, particularly vision. Among the characteristics of vision that must be considered are the eye's ability to distinguish the brightness, colors, details, sizes, shapes, and positions of the objects in a scene before it; among those of hearing are the ear's ability to distinguish the pitch, loudness, and distribution of sounds. To serve these faculties, television systems must make appropriate compromises between the quality of the reproduced image and the cost of reproducing it. They must also be designed to minimize visual and aural distortion in the processes of transmission and reproduction and to withstand interference up to a certain point. The particular compromises adopted for a given television service (broadcast or cable, for example) are embodied in the television standards adopted and enforced by the appropriate national agencies in each country.

Human vision relies on hundreds of thousands of separate electrical circuits running through the optic nerve from the retina to the brain, so that the entire content of a scene on which the eye is focused is conveyed simultaneously in two dimensions. In electrical communication, however, it is feasible to connect a transmitter and a receiver by only a single circuit, the broadcast channel. Television practice overcomes this fundamental mismatch through the process of image analysis: the camera's image sensors dissect the scene to be televised into an ordered sequence of electrical waves and send these waves over the single channel one after another. At the receiver the waves are converted back into a corresponding sequence of lights and shadows, which are assembled in their correct positions on the viewing screen.

This sequential reproduction of visual images is possible only because the visual sense displays persistence; that is, the brain retains the impression of illumination for about a tenth of a second after the source of light is removed from the eye. Therefore, if the image synthesis is completed in less than a tenth of a second, the viewer is unaware that the picture is being built up piecemeal and perceives the whole viewing screen as continuously illuminated. By the same token, if more than ten images are reproduced each second, motion can be simulated and the scene appears continuous.

In practice, 25 to 30 complete images are transmitted each second so that rapid motion appears smooth. To provide enough detail for a wide range of subject matter, each image is dissected into 200,000 or more individual picture elements, or pixels. It follows that these picture elements are transmitted at a rate exceeding 2,000,000 per second. Designing a system fast enough for this task, yet suitable for public use, has drawn on the full resources of modern electronic technology.
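As a quick check on the figures just quoted, the element rate is simply the number of picture elements per image multiplied by the number of images per second; the short sketch below (plain arithmetic, nothing standardized) shows that 200,000 elements at 25 to 30 images per second indeed exceeds 2,000,000 elements per second.

```python
# Rough check of the figures quoted above: picture elements per image
# times images per second gives the element rate the channel must carry.
pixels_per_image = 200_000          # SDTV-class detail, as stated above
for images_per_second in (25, 30):
    rate = pixels_per_image * images_per_second
    print(f"{images_per_second} images/s -> {rate:,} picture elements per second")
# Both results comfortably exceed the 2,000,000-per-second figure in the text.
```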

Flicker

Image analysis must also be carried out rapidly enough that the reproduced picture is free of flicker, since flicker causes severe visual fatigue. Flicker becomes more noticeable as the brightness of the picture increases. For flicker to be acceptable at a brightness suitable for home viewing by day and night, the picture screen must be illuminated at least 50 times per second. This is roughly twice the rate at which images must be repeated to depict motion accurately, so avoiding flicker would seem to require about twice as much channel space as depicting motion.

The same discrepancy arises in motion pictures, where twice as much film would seem to be needed for satisfactory flicker performance as for smooth simulation of motion. In both motion pictures and television the problem is solved by presenting each image twice. In film projection, a shutter passes briefly in front of the lens while a single frame is being projected, so that each frame is shown twice. In television, each image is scanned and assembled as two sets of interleaved lines, each set fitting into the gaps of the other in succession. The picture area is thus illuminated twice during each complete picture transmission, even though each line of the image appears only once. This expedient works because the eye is comparatively insensitive to flicker when the variation of light is confined to a small fraction of the field of view, so the flicker of the individual lines is hardly noticeable. Were it not for this fortunate property of the eye, a television channel would have to occupy roughly twice as much spectrum space as it does.
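The splitting of a frame into two interlaced fields can be pictured with a brief sketch. This is a simplified illustration (the line numbering and function name are arbitrary), and it ignores sync and blanking intervals.

```python
def interlace(frame_lines):
    """Split a frame's numbered lines into two fields: the odd-numbered
    lines are scanned first, then the even-numbered lines fill the gaps.
    (Illustrative only; real systems also interleave sync intervals.)"""
    field1 = frame_lines[0::2]   # lines 1, 3, 5, ...
    field2 = frame_lines[1::2]   # lines 2, 4, 6, ...
    return field1, field2

lines = list(range(1, 11))       # a toy 10-line frame
f1, f2 = interlace(lines)
print("field 1:", f1)            # [1, 3, 5, 7, 9]
print("field 2:", f2)            # [2, 4, 6, 8, 10]
```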

Resolution

The second performance requirement of a television system is fineness of picture structure. A printed engraving may contain several million halftone dots per square foot, but engravings are intended to be examined closely, and even then the dot structure is not visible to the unaided eye. Because television pictures are viewed from a comparatively great distance, such fine detail would be an expensive waste. Standard-definition television (SDTV) is designed on the assumption that viewers in a typical home sit at a distance of six or seven times the height of the picture screen, on average about three meters (10 feet) away. Even HDTV assumes that the viewer sits no closer than about three times the picture height. Under these conditions a picture structure of roughly 200,000 picture elements for SDTV and 800,000 for HDTV is a satisfactory compromise.

The physiological basis of this compromise is that, under viewing conditions typical of television, the normal eye can resolve pictorial details provided the angle they subtend at the eye is no less than about two minutes of arc. This implies that the HDTV structure can be resolved at a distance of about 1 meter (3 feet), whereas the SDTV structure of 200,000 elements in a picture 16 cm (0.5 foot) high can be resolved only out to a distance of roughly 3 meters (10 feet). At close range, as when adjusting the receiver, the structure of either picture may be objectionably evident, but it would not be reasonable to force a system to bear the high cost of transmitting detail that would be used by only a small portion of the audience for a small portion of the viewing time.

Picture shape

The third decision in image analysis concerns the shape of the picture. For SDTV the standard picture is a rectangle one-third wider than it is high, as shown in the figure. This 4:3 aspect ratio was chosen in the 1950s to match the proportions of conventional 35-mm motion-picture film, so that no frame area would be wasted when films were televised. HDTV systems, first introduced in the 1980s, use a 16:9 aspect ratio to display wide-screen images. Whatever the aspect ratio, in both SDTV and HDTV the screen rectangle is wider than it is high to accommodate the horizontal motion that characterizes almost all televised events.

Scanning

The fourth decision in image analysis concerns the path along which the picture structure is explored at the camera and reconstituted on the receiver screen. In standard television this pattern is a series of straight parallel lines, each traced from left to right and proceeding in sequence from top to bottom of the frame. The exploration proceeds at a constant speed along each line so that a given picture detail places the same demands on the transmission channel no matter where in the frame it lies. This line-by-line, left-to-right, top-to-bottom dissection and reconstruction of television images is called scanning, because it resembles the motion of the eye in reading a page of text. The agent that dissects the light values along each line is known as the scanning spot: in a camera tube it is the focused electron beam that scans the image, and in a picture tube it is the beam that recreates it. Most video cameras today use solid-state sensors rather than tubes (see the section Television cameras and displays), but even in these cameras the image is divided into a series of "spots," and the path of this division is known as the scanning pattern, or raster.
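The raster order itself (left to right along each line, lines taken from top to bottom) amounts to nothing more than a pair of nested loops, as the minimal sketch below suggests; it shows a progressive, non-interlaced raster for simplicity, and the function name is illustrative.

```python
def raster_scan(width, height):
    """Yield (row, column) positions in raster order: each line left to
    right, lines taken from top to bottom (progressive, no interlace)."""
    for row in range(height):        # top to bottom
        for col in range(width):     # left to right along each line
            yield row, col

# A tiny 3x4 "image" visited in raster order:
print(list(raster_scan(width=4, height=3)))
```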

The scanning pattern
Interlaced lines

The figure shows the geometry of the standard scanning pattern as it appears on a typical television screen. It consists of two sets of lines. One set is scanned first, its lines spaced so that an equal empty space is left between them. The second set is then laid down, its lines falling precisely in the empty spaces of the first. In this way the picture area is scanned twice, yet every point is covered only once. This technique, known as interlaced scanning, is used in all of the world's standard television broadcast services. Each set of alternate lines is called a scanning field, and the two fields that together make up the complete scanning pattern are called the scanning frame. Field repetition rates are standardized at 50 or 60 fields per second, depending on the local electric power frequency; the corresponding frame rates are 25 and 30 frames per second. In the North American monochrome system, 525 scan lines are transmitted about 30 times per second, giving a horizontal sweep frequency of 525 × 30 = 15,750 hertz. The color television system retains the 525 scan lines, but the field rate is reduced to just below 60 hertz and the sweep frequency is changed to 15,734 hertz. This ensures that the color system remains compatible with the earlier black-and-white system, a principle discussed in the section Compatible color television.
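The sweep frequencies quoted above follow directly from the line and frame rates. A short numerical check is given below; the color frame rate is written as 60/1.001/2 ≈ 29.97 per second, a commonly cited approximation of the NTSC rate, used here only for the arithmetic.

```python
# Horizontal sweep frequency = lines per frame x frames per second.
lines_per_frame = 525
mono_frame_rate = 30                  # North American monochrome
color_frame_rate = 60 / 1.001 / 2     # NTSC color field rate ~59.94 Hz, two fields per frame

print(lines_per_frame * mono_frame_rate)             # 15750.0 Hz
print(round(lines_per_frame * color_frame_rate, 2))  # ~15734.27 Hz, the 15,734-hertz figure above
```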

The total number of lines in the SDTV scanning pattern was chosen to yield a maximum picture detail of about 200,000 pixels. Since the frame is four units wide by three units high, this figure implies a structure roughly 520 pixels wide (along each line) and 390 pixels high (across the lines). The latter figure would suggest a scanning pattern of about 400 lines (one line per pixel vertically), were it not for the fact that many picture details inevitably fall partly on two adjacent lines, so that two lines are needed to reproduce such detail exactly. Scanning patterns are therefore designed with roughly 40 percent more lines than the number of pixels to be reproduced vertically. The values actually used in television broadcasting (405, 525, 625, and 819 lines per frame) were chosen in different regions to fit the channel frequency bands allotted there.
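The 520 × 390 breakdown and the 40 percent line allowance can be reproduced with simple arithmetic, as in the rough sketch below. The 1.4 factor is the 40 percent allowance stated above; everything else follows from the 4:3 ratio and the 200,000-pixel total.

```python
import math

total_pixels = 200_000           # SDTV-class detail, as above
aspect_w, aspect_h = 4, 3        # 4:3 frame

# width/height = 4/3 and width*height = 200,000
height = math.sqrt(total_pixels * aspect_h / aspect_w)
width = height * aspect_w / aspect_h
print(round(width), round(height))   # ~516 x 387, i.e. roughly 520 x 390

# Allowing ~40 percent more lines than vertical picture elements:
print(round(height * 1.4))           # ~542, of the same order as the 525- and 625-line standards
```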

The aspect-ratio diagram shows the relationship between the ideal and the actual scanning patterns. The portion of the pattern outside the dashed lines at A (bounding what is called the "safe action area") is lost as the scanning spot retraces. The remainder of the pattern is actively used to analyze and synthesize the picture information and is adjusted to an aspect ratio of 4:3 for SDTV or 16:9 for HDTV. In practice, part of the safe action area may be hidden by the decorative mask surrounding the receiver's picture tube, so programmers work within the smaller "safe title area" indicated by the dashed lines at B.

Transmission
Generating the color picture signal

As noted in the section Compatible color television, the color television signal actually consists of two components: luminance, or brightness, and chrominance, which itself has two aspects, hue (color) and saturation (strength of color). The television camera, however, does not produce these values directly; rather, it produces three picture signals representing the amounts of the three primary colors (red, green, and blue) present at each point in the image pattern. The luminance and chrominance components are then derived from these three primary-color signals by electronic circuits.

The color coder, which immediately follows the color camera, converts the primary-color signals into the luminance and chrominance signals. The primary-color signals are simply applied to an electronic addition circuit, or adder, which sums the values of all three signals at each point along their individual picture-signal waveforms to generate the luminance signal. Because white light is produced when the primary colors are added in the proper proportions, the resulting sum signal is the black-and-white (luminance) version of the color image. The luminance signal is then subtracted separately from each of the original primary-color signals in three electronic subtraction circuits, and the resulting color-difference signals are combined in a matrix unit to produce the I (orange-cyan) and Q (magenta-yellow) signals. These are applied together to a modulator, where they are combined with the chrominance subcarrier signal in such a way that the amplitude and phase of the subcarrier correspond to the saturation and hue, respectively. Finally, the luminance and chrominance components are combined in a second addition circuit to form the overall color picture signal.
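A compact sketch of the coder's arithmetic may help. The text above describes the adder conceptually; in the actual NTSC coder the primaries enter weighted sums, and the coefficients below are the commonly published NTSC values, shown here for illustration rather than derived from this article.

```python
def ntsc_coder(r, g, b):
    """Derive luminance and chrominance components from primary-color
    signals (values 0..1). The weighted sums below use commonly published
    NTSC coefficients; the 'adder' in the text is the conceptual version."""
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance (black-and-white) signal
    i = 0.596 * r - 0.274 * g - 0.322 * b      # I (orange-cyan) color-difference signal
    q = 0.211 * r - 0.523 * g + 0.312 * b      # Q (magenta-yellow) color-difference signal
    return y, i, q

# Pure white (equal primaries) yields full luminance and no chrominance:
print(ntsc_coder(1.0, 1.0, 1.0))   # (1.0, ~0.0, ~0.0)
```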

In NTSC systems the chrominance subcarrier is generated at the standard frequency of 3.579545 megahertz by a precise electronic oscillator. Samples of this subcarrier, known collectively as the "color burst," are inserted into the signal waveform during the blanking interval between line scans, just after the horizontal synchronization pulses; as explained in the section Fundamentals of compatible color: The NTSC television system, these samples are used in the receiver to control the synchronous detector. Finally, the chrominance subcarrier controls the timing of a scanning generator, which produces the horizontal and vertical deflection currents that drive the scanning in the three camera sensors. This common timing of deflection and chrominance is responsible for the frequency interlacing in color transmission and the cancellation of dot interference in monochrome reception mentioned earlier.
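How the subcarrier's amplitude and phase come to represent saturation and hue can be sketched as quadrature modulation: the I and Q signals modulate two versions of the 3.579545-megahertz subcarrier that are 90 degrees apart. The snippet below is a bare-bones illustration of that relationship (function names and sample values are invented), not a complete NTSC modulator.

```python
import math

F_SC = 3.579545e6    # NTSC chrominance subcarrier frequency, Hz

def chrominance(i, q, t):
    """Quadrature modulation of the subcarrier: I and Q modulate two
    carriers 90 degrees apart, so the resulting amplitude encodes
    saturation and the phase encodes hue (a simplified sketch)."""
    return i * math.cos(2 * math.pi * F_SC * t) + q * math.sin(2 * math.pi * F_SC * t)

def composite(y, i, q, t):
    """Luminance plus chrominance gives the overall color picture signal."""
    return y + chrominance(i, q, t)

# Saturation and hue recovered from the I and Q values:
i, q = 0.3, 0.2
print(math.hypot(i, q))                 # amplitude -> saturation
print(math.degrees(math.atan2(q, i)))   # phase angle -> hue
print(round(composite(0.5, i, q, t=0.0), 3))  # one sample of the composite signal
```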

The Intermittent Projector

In the intermittent projector, which more closely resembles the projector used in theaters, each frame of film is held motionless momentarily while a brief flash of light is passed through it. The light passes through every part of the film frame simultaneously and falls on the sensitive surface of a storage-type imager, such as the Vidicon (described in the section Camera image sensors: Electron tubes). The flashes of light are timed to occur during the intervals between field scans, while the extinguished scanning spot is returning from the bottom to the top of the frame. Brief as it is, the flash is bright enough to set up a strong electrical image inside the tube. The stored electrical image is then scanned to produce the picture signal for the next scanning field. Between fields, light is admitted again, and the stored image is scanned for the second field. When a film frame has been scanned in this way, a claw mechanism pulls the film down to bring the next frame into position.

In Europe and other regions where the television scanning rate is 25 picture scans per second, it has long been customary to run intermittent projectors at 25 frames per second, about 4 percent faster than the standard film rate of 24 frames per second. The resulting increases in motion speed and sound pitch are not great enough to degrade the performance objectionably. In the United States and other regions where television scanning occurs at 30 scans per second, running the projector at 30 frames per second is not feasible, since the 25 percent error in speed and pitch would be intolerable. Fortunately, the 30-per-second scan rate and the 24-frame-per-second film rate share a small common factor, 6: five scanning frames occupy exactly the same time as four film frames. If, therefore, four film frames pass through the projector while five complete picture scans (10 fields) are completed, both the film motion and the scanning proceed at their standard rates. The two processes are kept in step by holding one film frame in place for three scanning fields, the next for two fields, the next for three, and so on.
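The 3:2 assignment of film frames to scanning fields described above can be written out as a short sketch (the function name is illustrative):

```python
def pulldown_32(film_frames):
    """Assign 24-frame/s film frames to 60-field/s scanning fields by
    holding frames for 3, 2, 3, 2, ... fields (the pattern described above)."""
    fields = []
    for n, frame in enumerate(film_frames):
        repeat = 3 if n % 2 == 0 else 2
        fields.extend([frame] * repeat)
    return fields

# Four film frames fill exactly ten fields (five scanning frames):
print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']  -> 10 fields
```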

 
