EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Tuesday, February 16, 2010

Thermomechanical Data Storage

INTRODUCTION

In the 21st century, the nanometer will very likely play a role similar to the one played by the micrometer in the 20th century. The nanometer scale will presumably pervade the field of data storage. In magnetic storage today, there is no clear-cut way to achieve the nanometer scale in all three dimensions. The basis for storage in the 21st century might still be magnetism, but within a few years magnetic storage technology will arrive at a stage of its exciting and successful evolution at which fundamental changes are likely to occur, as current storage technology hits the well-known superparamagnetic limit. Several ideas have been proposed for overcoming this limit. One proposal involves the use of patterned magnetic media; others call for entirely different media and techniques, such as local probes or holographic methods. Optical lithography is in a similar position: although still the predominant patterning technology, it will soon reach its fundamental limits and be replaced by a technology as yet unknown.

In general, when an existing technology reaches its limits in the course of its evolution and new alternatives emerge in parallel, two things usually happen. First, the existing and well-established technology is explored further and everything possible is done to push its limits, to take maximum advantage of the considerable investments already made. Then, when the possibilities for improvement have been exhausted, the old technology may still survive in certain niche applications, but the emerging technology takes over, opening up new perspectives and new directions.

THERMOMECHANICAL AFM DATA STORAGE

In recent years, AFM thermomechanical recording in polymer storage media has undergone extensive refinement, mainly with respect to the integration of sensors and heaters designed to enhance simplicity and to increase data rate and storage density. Using these heater cantilevers, high storage densities and data rates have been achieved. The storage operations are described in detail below.

DATA WRITING

Thermomechanical writing combines the application of a local force by the cantilever/tip to the polymer layer with softening of the polymer by local heating. Initially, heat transfer from the tip to the polymer through the small contact area is very poor, improving as the contact area increases. This means the tip must be heated to a relatively high temperature (about 400 °C) to initiate softening. Once softening has commenced, the tip is pressed into the polymer, which increases the heat transfer to the polymer, increases the volume of softened polymer, and hence increases the bit size. Rough estimates indicate that at the beginning of the writing process only about 0.2% of the heating power is used in the very small contact zone (10-40 nm²) to soften the polymer locally, whereas about 80% is lost through the cantilever legs to the chip body and about 20% is radiated from the heater platform through the air gap to the medium/substrate. After softening has started and the contact area has increased, the heating power available for generating the indentations increases at least tenfold, to 2% or more of the total heating power.
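
This power budget can be restated as a minimal back-of-the-envelope sketch in Python. It only re-expresses the fractions quoted above, with the heater power normalized to 1; none of the numbers are new measurements.

# Rough power budget for thermomechanical writing, using the
# illustrative fractions quoted in the text (heater power normalized).
P_heater = 1.0                # total heating power, normalized

# Before softening: poor tip-polymer thermal contact.
frac_contact_initial = 0.002  # ~0.2% reaches the 10-40 nm^2 contact zone
frac_legs = 0.80              # ~80% conducted through the cantilever legs
frac_air_gap = 0.20           # ~20% radiated across the air gap

# After softening: contact area grows, useful fraction rises ~10x.
frac_contact_soft = 0.02      # >= 2% of the total heating power

print(f"useful power before softening: {P_heater * frac_contact_initial:.1%}")
print(f"useful power after softening:  {P_heater * frac_contact_soft:.1%}")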


With this highly nonlinear heat-transfer mechanism it is very difficult to achieve small tip penetration, and hence small bit sizes, and to control and reproduce the thermomechanical writing process. The situation improves if the thermal conductivity of the substrate is increased and if the depth of tip penetration is limited. Both can be achieved by using very thin polymer layers deposited on Si substrates, as shown in Figure 1. The hard Si substrate prevents the tip from penetrating farther than the film thickness, and it transports heat away from the heated region more rapidly, as Si is a much better thermal conductor than the polymer. By coating Si substrates with a 40-nm film of polymethylmethacrylate (PMMA), bit sizes between 10 and 50 nm are achieved. However, this increases tip wear, probably because of contact between the Si tip and the Si substrate during writing. Therefore a 70-nm layer of cross-linked photoresist (SU-8) was introduced between the Si substrate and the PMMA film to act as a softer penetration stop that avoids tip wear but remains thermally stable.

PEA Space Charge Measurement System

INTRODUCTION

Pulsed electro-acoustic analysis (PEA) can be used for space charge measurements under dc or ac fields. The PEA method is a non-destructive technique for profiling space charge accumulation in polymeric materials, first proposed by T. Takada et al. in 1985, and it has since been used for a variety of applications. PEA systems can measure space charge profiles through the thickness of a specimen with a resolution of around 10 microns and a repetition rate on the order of milliseconds. The experimental results contribute to the investigation of charge transport in dielectrics, the aging of insulating materials, and the clarification of the effect of chemical properties on space charge formation. Note that the PEA method measures only net charge and does not indicate the source of the charge.

The main space charge measurement techniques are the thermal step, thermal pulse, piezoelectric pressure step, laser-induced pressure pulse, and pulsed electro-acoustic methods. In the thermal step method, both electrodes are initially in contact with a heat sink at a temperature around -10 °C. A heat source is then brought into contact with one electrode, and the temperature profile through the sample begins to evolve towards equilibrium consistent with the new boundary conditions.

The resulting thermal expansion of the sample causes a current to flow between the electrodes, and an appropriate deconvolution procedure based on Fourier analysis extracts the space charge distribution from the current-flow data. This technique is particularly suited to thicker samples (between 2 and 20 mm). Next is the thermal pulse technique. The common characteristic of these thermal methods is a temporary, non-destructive displacement of the space charge in the bulk of a sample, created by a travelling disturbance such as a thermal wave, leading to a time-dependent change in the charge induced on the electrodes by the space charge. Compression or expansion of the sample also contributes to the change in induced charge on the electrodes, through a change in relative permittivity. The change in electrode charge is analyzed to yield the space charge distribution.
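
As a rough illustration of the Fourier-analysis step, the Python sketch below recovers an underlying distribution from a measured signal by regularized division in frequency space. This is a generic deconvolution sketch under stated assumptions, not the exact thermal-step procedure: the true response function is the sample's thermal diffusion kernel, and the function names here are hypothetical.

import numpy as np

def deconvolve_fft(measured, response, eps=1e-3):
    # Recover a distribution from a signal that is approximately the
    # convolution of that distribution with a known response function.
    M = np.fft.rfft(measured)
    R = np.fft.rfft(response, n=len(measured))
    # Tikhonov-style regularization avoids dividing by near-zero bins.
    D = M * np.conj(R) / (np.abs(R) ** 2 + eps)
    return np.fft.irfft(D, n=len(measured))

# Example: blur a synthetic two-peak charge profile, then recover it.
x = np.zeros(256); x[60] = 1.0; x[180] = -0.7
kernel = np.exp(-np.arange(256) / 8.0)            # assumed response shape
blurred = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(kernel), n=256)
recovered = deconvolve_fft(blurred, kernel)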

The thermal pulse technique yields only the first moment of the charge distribution and its first few Fourier coefficients. Next is the laser-induced pressure pulse technique. A temporary displacement of space charge can also be achieved using a pressure pulse in the form of a longitudinal sound wave. Such a wave is generated, through conservation of momentum, when a small volume of a target attached to the sample is ablated following absorption of energy delivered by a short laser pulse. The pressure pulse duration in laser-induced pressure pulse measurements depends on the laser pulse duration, and it can be chosen to suit the sample thickness, i.e., the thinner the sample, the shorter the laser pulse should be.
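
The thickness/pulse-length trade-off can be made concrete with a small estimate: the pulse must be short compared with the acoustic transit time through the sample. The speed of sound and the resolution fraction in the sketch below are assumed, illustrative values, not figures from the text.

# Back-of-the-envelope: shorter samples need shorter laser pulses.
v_sound = 2000.0          # speed of sound in a typical polymer, m/s (assumed)

def max_pulse_duration(thickness_m, fraction=0.1):
    # Pulse duration giving a spatial resolution of roughly
    # `fraction` of the sample thickness (assumed target).
    transit_time = thickness_m / v_sound
    return fraction * transit_time

for d_um in (10, 100, 1000):
    tau = max_pulse_duration(d_um * 1e-6)
    print(f"{d_um:5d} um sample -> pulse <= {tau * 1e9:.1f} ns")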

Space charge measurement has become a common method for investigating the dielectric properties of solid materials. Space charge observation is becoming the most widely used technique for evaluating polymeric materials for dc-insulation applications, particularly high-voltage cables. The presence of space charge is the main cause of premature failure in high-voltage dc polymeric cables, and it has been shown that insulation degradation under service stresses can be diagnosed by space charge measurements.

The term" space charge" means uncompensated real charge generated in the bulk of the sample as a result of (a) charge injection from electrodes, driven by a dc field not less than approximately 10 KV/mm, (b) application of mechanical/thermal stress, if the material is piezoelectric/ pyroelectric (c) field-assisted thermal ionization of impurities in the bulk of the dielectric.

Pebble-Bed Reactor

INTRODUCTION

The development of the nuclear power industry has been nearly stagnant in the past few decades; in fact, no new nuclear power plant has been built in the United States since the late 1970s. What many thought was a promising technology during the nation's "Cold War" days is now frowned upon, despite the fact that nuclear power currently provides the world with 17% of its energy needs. Nuclear technology's lack of popularity is not difficult to understand, since fear of it has been promoted by the entertainment industry, news media, and extremists. The public is fearful because movies portray radiation as the cause of every biological mutation, and now terrorist threats against nuclear installations have been hypothesized. The lack of understanding of nuclear science has also kept the news media and extremists on the offensive. The accidents at Three Mile Island (TMI) and Chernobyl were real, and their effects were dangerous and, in the latter case, lethal. However, many prefer to give up the technology rather than learn from these mistakes.

Recently, despite this resistance, there has been a resurgence of interest in nuclear power development by several governments. The value of nuclear power as an alternative fuel source remains, and public fears have only made the approval process more difficult. The resurgence is driven by the real threat that global warming, caused by the burning of fossil fuels, is destroying the environment. Moreover, these limited fossil resources are being depleted quickly as a growing population increases consumption.

It is estimated that by the mid-21st century developing countries will expand their energy consumption to 3.9 times that of today, and that global consumption will grow by 2.2 times. Development has been slow, since deregulation of the power industry has forced companies to look for inexpensive, short-term-return solutions to our energy needs rather than invest in expensive, long-term-return solutions. Short-term solutions, such as burning natural gas in combined-cycle gas turbines (CCGT), have been the most cost-effective but remain resource-limited. Therefore, a few companies and universities, subsidized by governments, are examining new ways to provide nuclear power.

An acceptable nuclear power solution for energy producers and consumers depends on safety and cost-effectiveness. Many solutions have been proposed, including the retrofit of current light water reactors (LWR). At present, the most popular solution appears to be a High Temperature Gas Cooled Reactor (HTGR) called the Pebble Bed Modular Reactor (PBMR).

HISTORY OF PBMR

The history of gas-cooled reactors (GCRs) began in November 1943 with the graphite-moderated, air-cooled, 3.5-MW X-10 reactor in Oak Ridge, Tennessee. Gas-cooled reactors use graphite as a moderator and circulating gas as a coolant. A moderator such as graphite slows the prompt neutrons created by the reaction so that a nuclear chain reaction can be sustained. Reactors used commercially in the United States are generally LWRs, which use light water as both moderator and coolant.

Development of the more advanced HTGRs began in the 1950s to improve upon the performance of the GCRs. HTGRs use helium as the gas coolant, permitting higher operating temperatures. The initial HTGRs were the Dragon reactor in the U.K., developed in 1959, and, almost simultaneously, the Arbeitsgemeinschaft Versuchsreaktor (AVR) in Germany.

Dr. Rudolf Schulten (considered the "father" of the pebble bed concept) decided to do something different for the AVR reactor. His idea was to compact silicon-carbide-coated uranium granules into hard, billiard-ball-like graphite spheres (pebbles) and use them as fuel for the helium-cooled reactor.

The first HTGR prototype in the United States was Peach Bottom Unit 1, in the late 1960s. The success of these reactors led to the construction of the Fort St. Vrain (FSV) reactor in Colorado and the Thorium High Temperature Reactor (THTR-300) in Germany. These reactors used primary systems enclosed in prestressed-concrete reactor vessels rather than the steel vessels of previous designs. The FSV incorporated ceramic-coated fuel particles embedded within rods placed in large hexagonal graphite elements, while the THTR-300 used spherical fuel elements (a pebble bed). These test reactors provided valuable information for future designs.

Low-k Dielectrics

INTRODUCTION

In this fast-moving world, time delay is one of the most dreaded problems in the field of data communication. A delay in communication is as bad as losing the information, whether on the internet, on television, or in a telephone conversation. We need to find ways to improve communication speed. The approaches adopted by the communication industry include wireless technology, optical communications, and ultra-wideband communication networks. But all these methods require a large initial capital outlay, which makes them cost-ineffective. So improving the existing network is very important, especially in a country like India.

A communication system consists mainly of a transceiver and a channel. The transceiver is the core of all data communications. It contains a wide variety of electronic components, mostly integrated into different forms of IC chips. These ICs provide the various signal operations such as amplification and modulation. The delay caused in these circuits directly affects the speed of data communication.

This is where the topic of LOW-k DIELECTRICS becomes relevant. It is one of the most recent developments in the field of integrated electronics. Most ICs are manufactured using CMOS technology. This technology has an embedded coupling capacitance that reduces the speed of operation. Many other logic families are available, such as RTL, DTL, ECL, and TTL, but they all consume more power than CMOS, so the industry prefers CMOS over the other logic families.

Inside the IC there are many interconnections between points in the CMOS substrate. These are the connections between the different transistors in the IC; in the case of NAND logic, for example, there are many connections between the transistors and their feedback paths. These connections are made by the interconnect inside the IC. Aluminum has been the material of choice for the circuit lines used to connect transistors and other chip components. These thin aluminum lines must be isolated from each other with an insulating material, usually silicon dioxide (SiO2).

This basic circuit construction technique has worked well through the many generations of computer chip advances predicted by Moore's Law. However, as aluminum circuit lines approach 0.18 µm in width, the limiting factor in processor speed shifts from the transistors' gate delay to the interconnect delay caused by the aluminum lines and the SiO2 insulating material. With the introduction of copper lines, part of the "speed limit" has been removed. However, the properties of the dielectric material between the layers and lines must now be addressed. Although integration of low-k dielectrics will begin at the 0.13-µm technology node, industry opinion is that the 0.10-µm generation, set for commercialization in 2003 or 2004, will be the true proving ground for low-k dielectrics, because the whole industry will need to use low-k at that line width.
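
A crude estimate shows why the dielectric constant k matters: the line-to-line coupling capacitance, and hence the interconnect RC delay, scales linearly with k. The geometry and material numbers in this Python sketch are illustrative assumptions only, not figures from the text.

# Sketch: lower k means lower coupling capacitance and lower RC delay.
eps0 = 8.854e-12          # vacuum permittivity, F/m
rho_cu = 1.7e-8           # copper resistivity, ohm*m

def rc_delay(k, length=1e-3, width=0.18e-6, thickness=0.3e-6, spacing=0.18e-6):
    # Crude parallel-plate estimate of the RC delay of one metal line
    # (all geometry values are assumed for illustration).
    R = rho_cu * length / (width * thickness)     # line resistance
    C = k * eps0 * length * thickness / spacing   # line-to-line capacitance
    return R * C

print(f"SiO2  (k=3.9): {rc_delay(3.9) * 1e12:.1f} ps")
print(f"low-k (k=2.7): {rc_delay(2.7) * 1e12:.1f} ps")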