Sunday, December 19, 2010

Infrared Detectors

Infrared detectors rely on the change of a physical characteristic to sense illumination by infrared radiation (i.e., radiation having a wavelength longer than that of visible light). The origins of such detectors lie in the nineteenth century, although their development, variety, and applications exploded during the twentieth century. William Herschel (c. 1800) employed a thermometer to detect this "radiant heat"; Macedonio Melloni (c. 1850) invented the "thermochrose" to display spatial differences of irradiation as color patterns on a temperature-sensitive surface; and in 1882 William Abney found that photographic film could be sensitized to respond to wavelengths beyond the red end of the spectrum. Most infrared detectors, however, convert infrared radiation into an electrical signal via a variety of physical effects. Here, too, nineteenth-century innovations continued in use well into the twentieth century.

Electrical photodetectors can be classed as either thermal detectors or quantum detectors. The first infrared detectors were thermal detectors: they responded to infrared radiation by the relatively indirect physical process of an increase in temperature. A thermal detector having a blackened surface is sensitive to radiation of any wavelength, a characteristic that was to become valuable to spectroscopists. The discovery of new physical principles facilitated the development of new thermal detectors. Thomas J. Seebeck reported a new "thermoelectric effect" in 1821 and then demonstrated the first "thermocouple," consisting of junctions of two metals that produced a small potential difference (voltage) when at different temperatures. In 1829 Leopoldo Nobili constructed the first "thermopile" by connecting thermocouples in series, and it was soon adapted by Melloni for radiant heat measurements (in modern parlance, for detecting infrared radiation) rather than for temperature changes produced by contact and conduction. In 1880, Samuel P. Langley announced the "bolometer," a temperature-sensitive electrical resistance device to detect weak sources of radiant heat.
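
As a rough illustration of the two principles just described, the sketch below estimates the output voltage of a thermopile (series-connected thermocouples) and the resistance change of a bolometer for a small radiation-induced temperature rise. All numerical values (junction count, Seebeck coefficient, resistance, temperature coefficient) are assumed for illustration and are not taken from the article.

```python
# Back-of-envelope figures for the two classic thermal detectors described
# above; every numerical value here is an assumed, illustrative one.

def thermopile_voltage(n_junctions, seebeck_uV_per_K, delta_T_K):
    """Open-circuit output of a thermopile: n junction pairs in series,
    each contributing S * dT volts via the Seebeck effect. Returns volts."""
    return n_junctions * seebeck_uV_per_K * 1e-6 * delta_T_K

def bolometer_resistance_change(R_ohm, alpha_per_K, delta_T_K):
    """Resistance change of a bolometer element whose resistance varies
    with temperature as dR = alpha * R * dT. Returns ohms."""
    return R_ohm * alpha_per_K * delta_T_K

if __name__ == "__main__":
    # e.g. a 50-junction thermopile, ~100 uV/K per junction pair,
    # warmed by 0.01 K by the incident radiation
    print(f"thermopile output ~ {thermopile_voltage(50, 100, 0.01) * 1e6:.1f} uV")
    # e.g. a 1 kOhm metal-strip bolometer (alpha ~ 0.003 /K) warmed by 0.01 K
    print(f"bolometer resistance change ~ "
          f"{bolometer_resistance_change(1000, 0.003, 0.01) * 1000:.1f} mOhm")
```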

Such detectors were quickly adopted by physicists for studying optical radiation of increasingly long wavelength. Early twentieth-century research in spectroscopy was largely detector-centered. This program sought to show the connections between infrared "optical" radiation and electrically generated "radio" waves, and indeed to bridge the perceived gap between them. Infrared methods, for the most part, developed as an analog of visible methods while relying implicitly on electrical detectors. During the twentieth century a variety of quantum detectors were developed and applied to a growing range of detection and measurement devices. Relying on a direct link between photons of infrared radiation and the electrical properties of the detecting material, they proved dramatically more sensitive than thermal detectors in restricted wavelength regions. As with thermal detectors, quantum detectors rely on a variety of principles. They may exhibit increased conductivity when illuminated with infrared radiation (examples of such "photoconductive" materials being pure crystals such as selenium, and compound semiconductors such as lead sulfide or lead selenide). Alternatively, quantum detectors may generate an electrical current directly from infrared illumination; examples of these "photovoltaic" detectors include semiconductor compounds such as indium antimonide and gallium arsenide.
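
The restricted wavelength regions mentioned above follow from the band gap of the detector material: a quantum detector responds only to photons energetic enough to bridge that gap, which sets a long-wavelength cutoff. The short sketch below works out the cutoff for a few of the materials named in the article, using assumed, approximate band-gap values (the InSb figure corresponds to cryogenic operation) chosen purely for illustration.

```python
# Long-wavelength cutoff of a photon detector: lambda_c = h*c / E_g.
# Band-gap values below are approximate, illustrative assumptions.

PLANCK_EV_UM = 1.2398  # h*c expressed in eV*um

def cutoff_wavelength_um(band_gap_eV):
    """Longest wavelength (micrometres) a detector with this band gap senses."""
    return PLANCK_EV_UM / band_gap_eV

approx_band_gaps_eV = {
    "PbS (lead sulfide)": 0.41,
    "PbSe (lead selenide)": 0.27,
    "InSb (indium antimonide, cooled)": 0.23,
}

for material, eg in approx_band_gaps_eV.items():
    print(f"{material}: cutoff ~ {cutoff_wavelength_um(eg):.1f} um")
```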

Physical research on infrared detectors soon attracted military sponsors. Military interest centered initially on the generation and detection of invisible radiation for signaling. During World War I, Theodore W. Case found that sulfide salts were photoconductive and developed thallous sulfide (Tl2S) cells. Supported by the U.S. Army, Case adapted these unreliable "thalofide" detectors for use as sensors in an infrared signaling device: a searchlight served as the source of radiation and was alternately blocked and uncovered to send messages (much as in smoke signaling or early optical telegraphy), while a thalofide detector sat at the focus of a receiving mirror. With this system messages were successfully sent over several miles. During the 1930s, British infrared research focused on aircraft detection via infrared radiation as an alternative to radar, and during World War II relatively large-scale development programs in Germany and America generated a number of infrared-based prototypes and limited-production devices.

Edgar W. Kutzscher developed the lead sulfide (PbS) photoconductive detector in Germany in 1932. It became the basis of a major wartime program during the following decade, covering the basic physics of detectors and materials as well as production techniques and applications of infrared detection. Like the other combatants, the German military managed to deploy only limited production runs of infrared detectors and devices during World War II, for example using radiation reflected from targets such as tanks to direct guns, and developing the "Lichtsprecher," or optical telephone.

In the U.S., successful developments during World War II included an infrared-guided bomb that used a bolometer as its sensor, heat-sensitive phosphors for night-vision "metascopes," and scanning systems for detecting infrared-radiating targets.

In the years following World War II, German detector technology was rapidly disseminated to British and American firms. Some of this information was recognized as having considerable military potential and was therefore classified. Infrared detectors were of great interest for locating the new jet aircraft and rockets; for their ability to be used "passively" (i.e., by measuring the radiation emitted by warm bodies rather than illuminating the targets with another source, as in radar); and for their increasing reliability. These potential military applications promoted intensive postwar research on the sensitivity of infrared detectors.

While largely a product of military funding, these detectors gradually became available to academic spectroscopists. Improved sensitivity to infrared radiation was the postwar goal of both military designers and research scientists. The concurrent rise of infrared spectroscopy provided an impetus to improve laboratory-based detectors and led to developments such as the "Golay cell," introduced by Marcel Golay in 1947. Although this hybrid device, essentially an optically monitored pneumatic expansion cell, was a thermal detector like the commonly used thermopile or bolometer, it was a reliable and sensitive alternative for use in spectrometers.

Another new thermal detector was the thermistor bolometer, based on a blackened semiconductor (generally an oxide of a transition metal) having a narrow band gap. Spectroscopists were also eager to discover the ultimate limitations of the newer quantum infrared detectors, and work by Peter B. Fellgett and by R. C. Jones in the early 1950s demonstrated both the poor practical performance of contemporary detectors and their theoretical potential.
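
Comparisons of practical and theoretical performance of this kind are commonly made concrete through a figure of merit associated with Jones, the specific detectivity D*, which normalizes a detector's noise-equivalent power by its area and measurement bandwidth. The sketch below shows the calculation; the numbers are assumed and purely illustrative.

```python
import math

# Specific detectivity: D* = sqrt(A * bandwidth) / NEP, where NEP is the
# noise-equivalent power. Normalizing by area and bandwidth lets detectors
# of different sizes be compared. All values below are illustrative.

def specific_detectivity(area_cm2, bandwidth_hz, nep_w):
    """D* in the customary units of cm*Hz^0.5/W."""
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w

# e.g. a 1 mm x 1 mm element (0.01 cm^2), 1 Hz bandwidth, NEP of 1e-10 W
print(f"D* ~ {specific_detectivity(0.01, 1.0, 1e-10):.2e} cm*Hz^0.5/W")
```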

During this period, further developments in Germany included the thallous sulfide and lead sulfide (PbS) detectors; Americans added lead selenide (PbSe), lead telluride (PbTe), and indium antimonide (InSb) detectors; and British workers introduced mercury-cadmium-telluride (HgCdTe) infrared detectors. These detectors found rapid military application: a guided aircraft rocket (the American GAR-2) was in production by 1956, and missile guidance systems, fire control systems, bomber-defense devices, and thermal reconnaissance equipment, all employing infrared measurement devices, were available to many countries by the mid-1960s.

By the late 1970s the military technology of infrared detection was increasingly available in the commercial sphere. Further military research and development during the 1980s extended capabilities dramatically, especially for detector arrays sensitive in the mid-infrared and able to operate against high background temperatures. This technology was also adapted by civilian astronomers throughout the 1980s for high-sensitivity, far-infrared use. Modern infrared detection systems fall into three distinct classes:
1. Thermal imaging devices, operating analogously to visible-light cameras
2. Low-cost single-element thermal detectors
3. Radiometric or spectrometric devices, employed for precise quantification of energy.

Detectors adapted to new, low-cost markets included the pyroelectric sensor, which relies on the change in the electrical polarization of a ferroelectric crystal as its temperature changes. It is considerably more sensitive than traditional bolometers and thermopiles.
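
In operation, a pyroelectric element produces a current proportional to the rate of change of its temperature, which is why such sensors are normally used with chopped radiation or moving sources. The sketch below estimates that current using assumed values for the pyroelectric coefficient and element size (roughly representative of a lithium tantalate element); none of these numbers come from the article.

```python
# Pyroelectric response: i = p * A * dT/dt, where p is the pyroelectric
# coefficient, A the element area, and dT/dt the rate of temperature change.
# The coefficient and geometry below are assumed, illustrative values.

def pyroelectric_current_A(p_C_per_m2K, area_m2, dT_dt_K_per_s):
    """Short-circuit current of a pyroelectric element, in amperes."""
    return p_C_per_m2K * area_m2 * dT_dt_K_per_s

# e.g. a 1 mm^2 element with p ~ 2e-4 C/(m^2*K), warming at 0.1 K/s
print(f"current ~ {pyroelectric_current_A(2e-4, 1e-6, 0.1) * 1e12:.0f} pA")
```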

While many infrared detector principles were patented, patents seldom determined their commercial exploitation. Individual firms were nevertheless able to dominate market sectors, partly because of in-house manufacturing expertise (particularly for the post-World War II semiconductor detectors) and partly because of the small markets for specific detector types.

Detectors, both as single-element devices and in arrays, are now increasingly packaged as components. Thermopiles, for instance, have benefited from silicon micromachining technology and can be manufactured by photolithography. Radiometric instruments are typically designed around such single-detector systems, although infrared arrays have been adopted for special applications, particularly astronomical observation. Such arrays, capable of precise spectroscopic, radiometric, and spatial measurement, now match thermal imagers in spatial resolution. From the Infrared Astronomy Satellite (IRAS), launched in 1983 with a mere 62 detector elements, arrays had grown in complexity to exceed one million detector elements by 1995.
