EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Wednesday, January 27, 2010

Immersion Lithography (electronics seminar topic)

OPTICAL LITHOGRAPHY

The dramatic increase in performance and cost reduction in the electronics industry is attributable to innovations in the integrated circuit and packaging fabrication processes. ICs are made using optical lithography. The speed and performance of the chips, their associated packages, and, hence, the computer systems are dictated by the lithographic minimum printable size. Lithography, which replicates a pattern rapidly from chip to chip, wafer to wafer, or substrate to substrate, also determines the throughput and the cost of electronic systems. From the late 1960s, when integrated circuits had linewidths of 5 µm, to 1997, when minimum linewidths had reached 0.35 µm in 64Mb DRAM circuits, optical lithography was used ubiquitously for manufacturing. This dominance of optical lithography in production is the result of a worldwide effort to improve optical exposure tools and resists.

A lithographic system includes exposure tool, mask, resist, and all of the processing steps to accomplish pattern transfer from a mask to a resist and then to devices. Light from a source is collected by a set of mirrors and light pipes, called an illuminator, which also shapes the light. Shaping of light gives it a desired spatial coherence and intensity over a set range of angles of incidence as it falls on a mask. The mask is a quartz plate onto which a pattern of chrome has been deposited.

It contains the pattern to be created on the wafer. The light that passes through the mask is reduced by a factor of four by a focusing lens and projected onto the wafer, which has been prepared by coating a silicon substrate with a layer of silicon nitride, followed by a layer of silicon dioxide, and finally a layer of photoresist. The photoresist that is exposed to the light becomes soluble and is rinsed away, leaving a miniature image of the mask pattern at each chip location.

Regions unprotected by photoresist are etched by gases, removing the silicon dioxide and the silicon nitride and exposing the silicon. Impurities are added to the etched areas, changing the electrical properties of the silicon as needed to form the transistors.

As early as the 1980s, experts were already predicting the demise of optical lithography as the wavelength of the light used to project the circuit image onto the silicon wafer was too large to resolve the ever-shrinking details of each new generation of ICs. Shorter wavelengths are simply absorbed by the quartz lenses that direct the light onto the wafer.

Although lithography system costs (which are typically more than one third the costs of processing a wafer to completion) increase as minimum feature size on a semiconductor chip decreases, optical lithography remains attractive because of its high wafer throughput.

RESOLUTION LIMITS FOR OPTICAL LITHOGRAPHY

The minimum feature W that may be printed with an optical lithography system is determined by the Rayleigh equation:

    W = k1 · λ / NA

where k1 is the resolution factor, λ is the wavelength of the exposing radiation, and NA is the numerical aperture.
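As a quick illustration, the Rayleigh equation can be evaluated directly. This is a minimal sketch only; the 193 nm ArF wavelength, NA of 0.75 and k1 of 0.4 below are assumed example values, not figures taken from this article:

    # Evaluate the Rayleigh resolution equation W = k1 * lambda / NA
    def min_feature_nm(k1: float, wavelength_nm: float, na: float) -> float:
        """Minimum printable feature size in nanometres."""
        return k1 * wavelength_nm / na

    # Assumed example values: ArF excimer laser at 193 nm, NA = 0.75, k1 = 0.4
    print(min_feature_nm(k1=0.4, wavelength_nm=193.0, na=0.75))  # ~103 nm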

Non Visible Imaging

Near infrared light consists of light just beyond visible red light (wavelengths greater than 780 nm). Contrary to popular thought, near infrared photography does not allow the recording of thermal radiation (heat); far-infrared thermal imaging requires more specialized equipment. Infrared images exhibit a few distinct effects that give them an exotic, antique look. Plant life looks completely white because it reflects almost all infrared light (because of this effect, infrared photography is commonly used in aerial photography to analyze crop yields, pest control, etc.). The sky is a stark black because no infrared light is scattered. Human skin looks pale and ghostly. Dark sunglasses all but disappear in infrared because they don't block any infrared light, and it's said that you can capture the near infrared emissions of a common iron.

Infrared photography has been around for at least 70 years, but until recently has not been easily accessible to those not versed in traditional photographic processes. Since the charge-coupled devices (CCDs) used in digital cameras and camcorders are sensitive to near-infrared light, they can be used to capture infrared photos. With a filter that blocks out all visible light (also frequently called a "cold mirror" filter), most modern digital cameras and camcorders can capture photographs in infrared. In addition, they have LCD screens, which can be used to preview the resulting image in real-time, a tool unavailable in traditional photography without using filters that allow some visible (red) light through.

INTRODUCTION


Near-infrared (1000-3000 nm) spectrometry, which employs an external light source for determination of chemical composition, has previously been utilized for industrial determination of the fat content of commercial meat products, for in vivo determination of body fat, and in our laboratories for determination of lipoprotein composition in carotid artery atherosclerotic plaques. Near-infrared (IR) spectrometry has been used industrially for several years to determine the saturation of unsaturated fatty acid esters (1). Near-IR spectrometry uses a tunable light source external to the experimental subject to determine its chemical composition. Utilization of near-IR will allow for the in vivo measurement of the tissue-specific rate of oxygen utilization as an indirect estimate of energy expenditure. However, assessment of regional oxygen consumption by these methods is complex, requiring a high level of surgical skill for implantation of indwelling catheters to isolate the organ under study.

Adaptive Optics in Ground Based Telescopes (electronics seminar topic)

Adaptive optics is a new technology now being used in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of stars. Without this system, the images obtained through telescopes on Earth appear blurred because of the turbulent mixing of air at different temperatures.

Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in a telescope to counteract, in real time, the atmospheric distortion.
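The idea behind the real-time correction can be illustrated with a toy simulation. This is a minimal sketch under simplified assumptions: the atmosphere is modelled as a frozen random phase screen and the deformable mirror as a plain array; no real wavefront-sensor hardware or adaptive-optics library is involved:

    import numpy as np

    # Toy closed-loop correction: push the opposite of the measured error
    # onto the "mirror" each cycle and watch the residual error shrink.
    rng = np.random.default_rng(0)
    atmosphere = rng.normal(size=(32, 32))   # frozen "atmospheric" phase screen
    mirror = np.zeros_like(atmosphere)       # deformable-mirror correction shape
    gain = 0.5                               # integrator loop gain

    for step in range(20):
        residual = atmosphere + mirror       # wavefront error seen by the sensor
        mirror -= gain * residual            # bend the mirror to cancel it
        print(step, np.sqrt(np.mean(residual**2)))  # RMS error decreases each step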

The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as the gravitational effects on their parent stars, and none has actually been detected directly.

WHAT IS ADAPTIVE OPTICS ?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.
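To put rough numbers on this, the diffraction-limited angular resolution θ ≈ 1.22 λ/D can be compared for a large mirror and the roughly 20 cm seeing-limited aperture mentioned above. This is a back-of-the-envelope sketch; the 2.2 µm observing wavelength is an assumed example value:

    import math

    def diffraction_limit_arcsec(wavelength_m: float, diameter_m: float) -> float:
        """Rayleigh criterion theta = 1.22 * lambda / D, converted to arcseconds."""
        theta_rad = 1.22 * wavelength_m / diameter_m
        return math.degrees(theta_rad) * 3600

    wavelength = 2.2e-6  # assumed near-infrared observing wavelength (2.2 um)
    print(diffraction_limit_arcsec(wavelength, 6.5))   # ~0.09 arcsec for a 6.5 m mirror
    print(diffraction_limit_arcsec(wavelength, 0.20))  # ~2.8 arcsec for a 20 cm aperture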

Space telescopes avoid problems with the atmosphere, but they are enormously expensive and the limit on their aperture size is quite restrictive. The Hubble Space Telescope, the world's largest telescope in orbit, has an aperture of only 2.4 metres, while terrestrial telescopes can have a diameter four times that size.

In order to avoid atmospheric aberration, one can turn to larger telescopes on the ground that have been equipped with an adaptive optics system. With this setup, the image quality that can be recovered is close to what the telescope would deliver if it were in space. Images obtained from the adaptive optics system on the 6.5 m diameter MMT telescope illustrate the impact.

A 64 Point Fourier Transform Chip (electronics seminar topic)

Fourth generation wireless and mobile systems are currently the focus of research and development. Broadband wireless systems based on orthogonal frequency division multiplexing (OFDM) will allow packet-based, high-data-rate communication suitable for video transmission and mobile internet applications. Considering this fact, we proposed a data path architecture using dedicated hardware for the baseband processor. The most computationally intensive parts of such a high-data-rate system are the 64-point inverse FFT in the transmit direction and the Viterbi decoder in the receive direction. Accordingly, an appropriate design methodology for constructing them has to be chosen, considering: a) how much silicon area is needed; b) how easily the particular architecture can be made flat for implementation in VLSI; c) how many wire crossings and how many long wires carrying signals to remote parts of the design are necessary in the actual implementation; and d) how small the power consumption can be. This paper describes a novel 64-point FFT/IFFT processor which has been developed as part of a larger research project to develop a single-chip wireless modem.

ALGORITHM FORMULATION

The discrete Fourier transform A(r) of a complex data sequence B(k) of length N, where r, k ∈ {0, 1, …, N-1}, can be described as

    A(r) = Σ_{k=0}^{N-1} B(k) · W_N^{rk}

where W_N = e^(-2πj/N). Let us consider that N = MT, r = s + Tt and k = l + Mm, where s, m ∈ {0, 1, …, T-1} and l, t ∈ {0, 1, …, M-1}. Applying these values in the first equation, we get

    A(s + Tt) = Σ_{l=0}^{M-1} [ W_N^{sl} · ( Σ_{m=0}^{T-1} B(l + Mm) · W_T^{sm} ) ] · W_M^{tl}

This shows that it is possible to realize an FFT of length N by first decomposing it into one M-point and one T-point FFT, where N = MT, and then combining the results. However, this results in a two-dimensional rather than a one-dimensional FFT structure. We can formulate the 64-point FFT by taking M = T = 8:

    A(s + 8t) = Σ_{l=0}^{7} [ W_64^{sl} · ( Σ_{m=0}^{7} B(l + 8m) · W_8^{sm} ) ] · W_8^{tl}

This shows that it is possible to express the 64-point FFT in terms of a two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional constant multiplications. At first, appropriate data samples undergo an 8-point FFT computation; each intermediate result is then multiplied by the corresponding inter-dimensional constant W_64^{sl}. Eight such computations are needed to generate a full set of 64 intermediate data, which once again undergo a second 8-point FFT operation. As with the first 8-point FFT, the second 8-point FFT again requires eight such computations. Proper reshuffling of the data coming out of the second 8-point FFT generates the final output of the 64-point FFT.
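The decomposition is easy to check numerically. The following short NumPy sketch is an illustrative model of the algorithm described above, not the chip's actual hardware data path; it builds a 64-point FFT from two banks of 8-point FFTs plus the 64 inter-dimensional constants and compares the result with a direct 64-point FFT:

    import numpy as np

    def fft64_8x8(b):
        """64-point FFT via the 8 x 8 decomposition:
        A(s + 8t) = sum_l [ W64^(s*l) * ( sum_m B(l + 8m) * W8^(s*m) ) ] * W8^(t*l)."""
        b = np.asarray(b, dtype=complex).reshape(8, 8)  # b[m, l] = B(l + 8m)
        first = np.fft.fft(b, axis=0)                   # inner 8-point FFTs over m -> index s
        s = np.arange(8)[:, None]                       # row index s
        l = np.arange(8)[None, :]                       # column index l
        w64 = np.exp(-2j * np.pi / 64)
        twiddled = first * w64 ** (s * l)               # the 64 inter-dimensional constants
        second = np.fft.fft(twiddled, axis=1)           # outer 8-point FFTs over l -> index t
        return second.T.reshape(64)                     # reshuffle so the output index is r = s + 8t

    rng = np.random.default_rng(0)
    x = rng.normal(size=64) + 1j * rng.normal(size=64)
    print(np.allclose(fft64_8x8(x), np.fft.fft(x)))     # True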

Fig. Signal flow graph of an 8-point DIT FFT.

Realization of the 8-point FFT using the conventional DIT algorithm does not require the use of any true multiplication operation.

The constants to be multiplied in the first two columns of the 8-point FFT structure are either 1 or j. In the third column, the multiplications by the constants are actually addition/subtraction operations followed by a multiplication by 1/√2, which can easily be realized using only a hardwired shift-and-add operation. Thus an 8-point FFT can be carried out without using any true digital multiplier, which provides a way to realize a low-power 64-point FFT at reduced hardware cost. Since a basic 8-point FFT does not need a true multiplier, the only true complex multiplications are those by the inter-dimensional constants. On the other hand, the number of non-trivial complex multiplications for the conventional 64-point radix-2 DIT FFT is 66. Thus the present approach results in a reduction of about 26% in complex multiplications compared to the conventional radix-2 64-point FFT. This reduction in arithmetic complexity further enhances the scope for realizing a low-power 64-point FFT processor. However, the arithmetic complexity of the proposed scheme is almost the same as that of the radix-4 FFT algorithm, since the radix-4 64-point FFT algorithm needs 52 non-trivial complex multiplications.
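As a small illustration of the shift-and-add idea, multiplication by 1/√2 can be approximated as a sum of right-shifted copies of the operand. This is a sketch only; the particular five-term approximation of 1/√2 below is an assumed example, not the coefficient encoding used in the actual chip:

    # Approximate y = x / sqrt(2) using shifts and adds only (illustrative sketch).
    # 1/sqrt(2) ~= 2^-1 + 2^-3 + 2^-4 + 2^-6 + 2^-8 = 0.70703125 (error ~0.01%)
    def mul_inv_sqrt2(x: int) -> int:
        return (x >> 1) + (x >> 3) + (x >> 4) + (x >> 6) + (x >> 8)

    x = 1 << 12                    # a sample fixed-point operand (4096)
    print(mul_inv_sqrt2(x))        # 2896 with the shift-and-add approximation
    print(round(x / 2 ** 0.5))     # 2896 exact value, for comparison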