EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Sunday, February 28, 2010

Google on antitrust - blame Microsoft (LATEST NEWS)

Google said the European Commission has opened an investigation into the company's power over the advertising industry. Google also said that Microsoft is the driving force behind the company's regulatory headache.

Samsung Jet 2 – 3.1-inch touchscreen mobile phone (latest news)


The detailed features of the new Samsung Jet 2 mobile phone are:

  • 3.1-inch WVGA AMOLED display
  • 800MHz application processor
  • Google Push Email
  • Media content sharing technology among DLNA certified devices
  • English dictionary
  • Media Browser
  • Samsung Dolphin Internet browser
  • Motion control
  • Social networking site shortcuts
  • 3D media gate UI and motion-response UI
  • TouchWiz 2.0 User interface
  • One-finger zoom
  • 5 mega-pixel camera
  • GPS with AGPS
  • DNSe & SRS Sound Effect technology
  • DivX and XviD video support

Satelloons and lunar lasers: communicating in space

Last week, NASA broke ground on three new radio dishes near Canberra, Australia. In the coming years, the new antennas will help boost the capabilities of NASA's Deep Space Network, which is used to communicate with spacecraft that travel far beyond Earth's orbit. But the agency has other, more ambitious plans in store. New Scientist takes a look at the history and future of NASA's space communication projects.

Robots to rescue soldiers

The US military is asking inventors to come up with designs for a robot that can trundle onto a battlefield and rescue injured troops, with little or no help from outside.

Retrieving casualties while under fire is a major cause of combat losses, says a posting on the Pentagon's small business technology transfer website (bit.ly/aRXXQU). So the army wants a robot with strong, dexterous arms and grippers that can cope with "the large number of body positions and types of locations in which casualties can be found".

It should be capable of planning an approach and escape route without prior knowledge of the local terrain and geography. The army also wants the robot to be able to cooperate with swarms of similar machines for mass rescues.

Inventors have until 24 March to file their ideas.

Smart dust (SEMINAR TOPIC)

Smart dust is a tiny, dust-sized device with extraordinary capabilities. It combines sensing, computing, wireless communication and an autonomous power supply within a volume of only a few cubic millimeters, and at low cost. These devices are proposed to be so small and light that they can remain suspended in the environment like ordinary dust particles. These properties will make Smart Dust useful for monitoring real-world phenomena without disturbing the original process to any observable extent. At present the achievable size of Smart Dust is about a 5 mm cube, but the hope is that it will eventually be as small as a speck of dust. Individual sensors of smart dust are often referred to as motes because of their small size. These devices are also known as MEMS, which stands for micro-electro-mechanical systems.
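As a rough conceptual sketch (not real mote firmware; the function names and timings below are hypothetical), a mote's life is essentially a low-power loop of sense, transmit, sleep:

    # Conceptual sketch of a smart-dust mote's duty cycle (hypothetical names, not real firmware).
    # A mote spends most of its time "asleep" to conserve its tiny energy budget, waking briefly
    # to sample a MEMS sensor and radio the reading toward a base station.
    import random
    import time

    def read_sensor():
        # Stand-in for a MEMS sensor reading (e.g. temperature or vibration).
        return 20.0 + random.random()

    def transmit(reading, mote_id):
        # Stand-in for a low-power radio or passive optical transmission.
        print(f"mote {mote_id}: sent {reading:.2f}")

    def mote_loop(mote_id, wake_interval_s=1.0, cycles=3):
        for _ in range(cycles):
            transmit(read_sensor(), mote_id)  # sense, then communicate
            time.sleep(wake_interval_s)       # sleep to save power

    mote_loop(mote_id=42)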

Saturday, February 27, 2010

The messiah of Morris Avenue: a novel By Tony Hendra

Tony Hendra’s Father Joe became a new classic of faith and spirituality—even for those not usually inclined. Now Hendra is back with a novel set in a very reverent future where church and state walk hand in hand. Fade-in as Johnny Greco—a fallen journalist who nurses a few grudges along with his cocktails—stumbles onto the story of a young man named Jay who’s driving around New Jersey preaching radical notions (kindness, generosity) and tossing off miracles. How better, Johnny schemes, to stick it to the Reverend Sabbath, America’s #1 Holy Warrior, than to write a headline-making story announcing Jay as the Second Coming? Then something strange happens. Dyed-in-the-wool skeptic Johnny actually finds his own life being transformed by the new messiah.
Alternately hilarious and genuinely moving, The Messiah of Morris Avenue brings to life a savior who reminds the world of what Jesus actually taught and wittily skewers all sorts of sanctimoniousness on both sides of the political spectrum. Writing with heart, a sharp eye, and a passionate frustration with those who feel they hold a monopoly on God, Tony Hendra has created a delightful entertainment that reminds us of the unfailing power of genuine faith.

The Intimate World of Abraham Lincoln By C. A. Tripp, Lewis Gannett

For four years in the 1830s, in Springfield, Illinois, a young state legislator shared a bed with his best friend, Joshua Speed. The legislator was Abraham Lincoln. When Speed moved home to Kentucky in 1841 and Lincoln's engagement to Mary Todd was broken off, Lincoln suffered an emotional crisis. An underground body of speculation about Abraham Lincoln's intimate relationships has been accumulating for years. He was famously awkward around single women. Before Mary Todd, he was engaged to another woman, but his fiancée called off the marriage on the grounds that he was "lacking smaller attentions." His marriage to Mary was troubled. Meanwhile, throughout his adult life, he enjoyed close relationships with a number of men, disclosed here for the first time, including an affair with an army captain when Mrs. Lincoln was away. This extensive study by renowned psychologist, therapist, and sex researcher C. A. Tripp examines not only Lincoln's sexuality but aims to make sense of the whole man. It includes an introduction by Jean Baker, biographer of Mary Todd Lincoln, and an afterword containing reactions by two Lincoln scholars and one clinical psychologist. This timely book finally allows the true Lincoln to be fully understood.

Monday, February 22, 2010

FireWire (seminar topic for computer science)

The IEEE 1394 interface is a serial bus interface standard for high-speed communications and isochronous real-time data transfer, frequently used by personal computers, as well as in digital audio, digital video, automotive, and aeronautics applications. The interface is also known by the brand names of FireWire (Apple), i.LINK (Sony), and Lynx (Texas Instruments). IEEE 1394 replaced parallel SCSI in many applications because of lower implementation costs and a simplified, more adaptable cabling system. The 1394 standard also defines a backplane interface, though this is not as widely used.

IEEE 1394 was adopted as the High-Definition Audio-Video Network Alliance (HANA) standard connection interface for A/V (audio/visual) component communication and control. FireWire is also available in wireless, fiber optic, and coaxial versions using the isochronous protocols.

Nearly all digital camcorders have included a four-circuit 1394 interface, though, except for premium models, such inclusion is becoming less common. It remains the primary transfer mechanism for high end professional audio and video equipment. Since 2003 many computers intended for home or professional audio/video use have built-in FireWire/i.LINK ports, especially prevalent with Sony and Apple's computers. The legacy (alpha) 1394 port is also available on premium retail motherboards.

History and development


FireWire is Apple's name for the IEEE 1394 High Speed Serial Bus. It was initiated by Apple in 1986 and developed by the IEEE P1394 Working Group, largely driven by contributions from Apple, although major contributions were also made by engineers from Texas Instruments, Sony, Digital Equipment Corporation, IBM, and INMOS/SGS Thomson (now STMicroelectronics).

Apple intended FireWire to be a serial replacement for the parallel SCSI bus while providing connectivity for digital audio and video equipment. Apple's development began in the late 1980s, later presented to the IEEE, and was completed in 1995. As of 2007, IEEE 1394 is a composite of four documents: the original IEEE Std. 1394-1995, the IEEE Std. 1394a-2000 amendment, the IEEE Std. 1394b-2002 amendment, and the IEEE Std. 1394c-2006 amendment. On June 12, 2008, all these amendments as well as errata and some technical updates were incorporated into a superseding standard IEEE Std. 1394-2008.

Apple's internal code-name for FireWire was "Greyhound" as of May 11, 1992.

Sony's implementation of the system, "i.LINK", used a smaller connector with only the four signal circuits, omitting the two circuits which provide power to the device in favor of a separate power connector. This style was later added into the 1394a amendment. This port is sometimes labeled "S100" or "S400" to indicate speed in Mbit/s.

The system is commonly used for connection of data storage devices and DV (digital video) cameras, but is also popular in industrial systems for machine vision and professional audio systems. It is preferred over the more common USB for its greater effective speed and power distribution capabilities, and because it does not need a computer host. Perhaps more important, FireWire uses all SCSI capabilities and has high sustained data transfer rates, important for audio and video editors. Benchmarks show that the sustained data transfer rates are higher for FireWire than for USB 2.0, especially on Apple Mac OS X with more varied results on Microsoft Windows.

However, the royalty which Apple and other patent holders initially demanded from users of FireWire (US$0.25 per end-user system) and the more expensive hardware needed to implement it (US$1–$2), both of which have since been dropped, have prevented FireWire from displacing USB in low-end mass-market computer peripherals, where product cost is a major constraint.

Technical specifications


FireWire can connect up to 63 peripherals in a tree or daisy-chain topology (as opposed to Parallel SCSI's electrical bus topology). It allows peer-to-peer device communication, such as communication between a scanner and a printer, to take place without using system memory or the CPU. FireWire also supports multiple hosts per bus. It is designed to support plug and play and hot swapping. The copper cable it uses (1394's most common implementation) can be up to 4.5 metres (15 ft) long and is more flexible than most Parallel SCSI cables. In its six-circuit or nine-circuit variations, it can supply up to 45 watts of power per port at up to 30 volts, allowing moderate-consumption devices to operate without a separate power supply.

FireWire devices implement the ISO/IEC 13213 "configuration ROM" model for device configuration and identification, to provide plug-and-play capability. All FireWire devices are identified by an IEEE EUI-64 unique identifier (an extension of the 48-bit Ethernet MAC address format) in addition to well-known codes indicating the type of device and the protocols it supports.
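To make the identifier format concrete, here is a small sketch (in Python, with a made-up sample value) of how a 64-bit EUI-64 breaks down into hex octets and a 24-bit vendor prefix:

    # Illustrative sketch: formatting a FireWire node's EUI-64 (a 64-bit unique identifier)
    # and pulling out its 24-bit company/vendor prefix (OUI). The sample value is made up.
    def format_eui64(value: int) -> str:
        """Render a 64-bit EUI-64 as colon-separated hex octets."""
        octets = value.to_bytes(8, "big")
        return ":".join(f"{b:02X}" for b in octets)

    def vendor_prefix(value: int) -> int:
        """The top 24 bits identify the manufacturer (the OUI)."""
        return value >> 40

    eui64 = 0x0010DC1234567890  # hypothetical identifier read from a device's configuration ROM
    print(format_eui64(eui64))  # 00:10:DC:12:34:56:78:90
    print(f"OUI: {vendor_prefix(eui64):06X}")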


Operating system support

Full support for IEEE 1394a and 1394b is available for Microsoft Windows XP, FreeBSD, Linux, Apple Mac OS 8.6 through Mac OS 9, Mac OS X, NetBSD, and Haiku. Historically, performance of 1394 devices may have decreased after installing Windows XP Service Pack 2, but this was resolved in Hotfix 885222 and in a later service pack. Some FireWire hardware manufacturers also provide custom device drivers which replace the Microsoft OHCI host adapter driver stack, enabling S800-capable devices to run at full 800 Mbit/s transfer rates on older versions of Windows (XP SP2 without Hotfix 885222) and Windows Vista. At the time of its release, Microsoft Windows Vista supported only 1394a, with assurances that 1394b support would come in the next service pack. Service Pack 1 for Microsoft Windows Vista has since been released; however, the addition of 1394b support is not mentioned anywhere in the release documentation. The 1394 bus driver was rewritten for Windows 7 to provide support for higher speeds and alternative media.

[Figure captions: 4-circuit (left) and 6-circuit (right) FireWire 400 alpha connectors; the alternative Ethernet-style cabling used by 1394c; a 6-circuit FireWire 400 alpha connector.]


Security issues


Devices on a FireWire bus can communicate by direct memory access (DMA), where a device can use hardware to map internal memory to FireWire's "Physical Memory Space". The SBP-2 (Serial Bus Protocol 2) used by FireWire disk drives uses this capability to minimize interrupts and buffer copies. In SBP-2, the initiator (controlling device) sends a request by remotely writing a command into a specified area of the target's FireWire address space. This command usually includes buffer addresses in the initiator's FireWire "Physical Address Space", which the target is supposed to use for moving I/O data to and from the initiator.

On many implementations, particularly those like PCs and Macs using the popular OHCI, the mapping between the FireWire "Physical Memory Space" and device physical memory is done in hardware, without operating system intervention. While this enables high-speed and low-latency communication between data sources and sinks without unnecessary copying (such as between a video camera and a software video recording application, or between a disk drive and the application buffers), it can also be a security risk if untrustworthy devices are attached to the bus. For this reason, high-security installations will typically either purchase newer machines which map a virtual memory space to the FireWire "Physical Memory Space" (such as a Power Mac G5, or any Sun workstation), disable the OHCI hardware mapping between FireWire and device memory, physically disable the entire FireWire interface, or avoid FireWire altogether.
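The risk is easier to see with a toy model (pure illustration, not a real driver API): with direct hardware mapping every host page is reachable from the bus, whereas a remapped or IOMMU-style window only exposes pages the operating system has granted.

    # Toy model (not a real driver) of why hardware-mapped "Physical Memory Space" is risky.
    # With direct OHCI-style mapping, any bus read inside the window is satisfied from host
    # memory with no OS check; a remapped table only honours addresses the OS exposed.
    HOST_MEMORY = {0x1000: "video frame", 0x2000: "login password"}  # pretend physical pages

    def dma_read_direct(bus_address):
        # Direct hardware mapping: everything in host memory is reachable.
        return HOST_MEMORY.get(bus_address, "<unmapped>")

    def dma_read_remapped(bus_address, allowed):
        # Virtualised mapping: only pages the OS granted are reachable.
        return HOST_MEMORY.get(bus_address, "<unmapped>") if bus_address in allowed else "<blocked>"

    print(dma_read_direct(0x2000))                       # a rogue device sees the sensitive page
    print(dma_read_remapped(0x2000, allowed={0x1000}))   # blocked unless explicitly exposed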

This feature can be used to debug a machine whose operating system has crashed, and in some systems for remote-console operations. On FreeBSD, the dcons driver provides both, using gdb as the debugger. Under Linux, firescope and fireproxy exist.

The influence of technology on engineering education By John R. Bourne, M. Dawant

Book overview

"This book is the outcome of a National Science Foundation study entitled: 'Paradigm Shifts in Engineering Education: The Influence of Technology,' SED-9253002. The overall objective of this study was to forecast which of the various possible futures in engineering education were most promising to pursue.The first part of the book contains a series of critical review papers that survey the state-of-the-art in various aspects of engineering education and attempts to look at the future to determine directions for future directions for engineering education.The second part of the book contains data and summaries from meetings held by focus groups convened to discuss possible alternative forecasts."-From the Editor's Note

Photography By Barbara London, John Upton

Book overview

This best-selling, comprehensive guide to photography, featuring superb instructional illustrations, is the most cutting-edge photography book on the market. It offers extensive coverage of digital imaging, with the latest technological developments, such as Web page design and formatting photos on CD-ROMs. Chapter topics explore the process of getting started, camera, lens, film and light, exposure, processing the negative, mounting and finishing, color, digital camera, digital darkroom, lighting, special techniques, view camera, zone system, seeing photographs, and the history of photography. Step-by-step instructions include a "Lights Out" feature to help learners better identify darkroom techniques. For anyone with a personal or professional interest in photography.

Philosophy of mind By Jaegwon Kim

Book overview

The philosophy of mind has always been a staple of the philosophy curriculum. But it has never held a more important place than it does today, with both traditional problems and new topics often sparked by the developments in the psychological, cognitive, and computer sciences. Jaegwon Kim’s Philosophy of Mind is the classic, comprehensive survey of the subject. Now in its second edition, Kim explores, maps, and interprets this complex and exciting terrain. Designed as an introduction to the field for upper-level undergraduates and graduate students, Philosophy of Mind focuses on the mind/body problem and related issues, some touching on the status of psychology and cognitive science. The second edition features a new chapter on Cartesian substance dualism, a perspective that has been little discussed in the mainstream philosophy of mind and almost entirely ignored in most introductory books in philosophy of mind. In addition, all the chapters have been revised and updated to reflect the trends and developments of the last decade. Throughout the text, Kim allows readers to come to their own terms with the central problems of the mind. At the same time, the author’s own emerging views are on display and serve to move the discussion forward. Comprehensive, clear, and fair, Philosophy of Mind is a model of philosophical exposition. It is a major contribution to the study and teaching of the philosophy of mind.

Saturday, February 20, 2010

LATEST NEWS

Analysts: Flash to prevail against competition

Adobe Flash faces an onslaught of trends threatening to remove it as the de facto rich-media Web platform, but developer support will keep it at the top.

LATEST NEWS

Windows Phone 7 a balancing act

Microsoft must tread a fine line between control and allowing device makers the freedom to tailor the new mobile platform to their needs, an industry analyst notes.

Wipro exec in $4 mn fraud

A Wipro employee embezzled crores of rupees over the past three years, sending India’s third-largest software exporter scrambling to tighten internal controls in the finance division where the fraud took place.

Biggest phone launches of '10

If you think the phones currently on the shelf are hot, think again. A hotter crop of smart phones will hit the market later this year.

Tuesday, February 16, 2010

Thermomechanical Data Storage

INTRODUCTION

In the 21st century, the nanometer will very likely play a role similar to the one played by the micrometer in the 20th century. The nanometer scale will presumably pervade the field of data storage. In magnetic storage today, there is no clear-cut way to achieve the nanometer scale in all three dimensions. The basis for storage in the 21st century might still be magnetism. Within a few years, however, magnetic storage technology will arrive at a stage of its exciting and successful evolution at which fundamental changes are likely to occur, when current storage technology hits the well-known superparamagnetic limit. Several ideas have been proposed on how to overcome this limit. One such proposal involves the use of patterned magnetic media. Other proposals call for totally different media and techniques such as local probes or holographic methods. Similarly, consider optical lithography. Although still the predominant technology, it will soon reach its fundamental limits and be replaced by a technology yet unknown. In general, if an existing technology reaches its limits in the course of its evolution and new alternatives are emerging in parallel, two things usually happen: First, the existing and well-established technology will be explored further and everything possible done to push its limits to take maximum advantage of the considerable investments made. Then, when the possibilities for improvements have been exhausted, the technology may still survive for certain niche applications, but the emerging technology will take over, opening up new perspectives and new directions.

THERMOMECHANICAL AFM DATA STORAGE

In recent years, AFM thermomechanical recording in polymer storage media has undergone extensive modifications mainly with respect to the integration of sensors and heaters designed to enhance simplicity and to increase data rate and storage density. Using these heater cantilevers, high storage density and data rates have been achieved. Let us now describe the storage operations in detail.

DATA WRITING

Thermomechanical writing is a combination of applying a local force by the cantilever/tip to the polymer layer, and softening it by local heating. Initially, the heat transfer from the tip to the polymer through the small contact area is very poor and improves as the contact area increases. This means the tip must be heated to a relatively high temperature (about 400 °C) to initiate the softening. Once softening has commenced, the tip is pressed into the polymer, which increases the heat transfer to the polymer, increases the volume of softened polymer, and hence increases the bit size. Our rough estimates indicate that at the beginning of the writing process only about 0.2% of the heating power is used in the very small contact zone (10–40 nm²) to soften the polymer locally, whereas about 80% is lost through the cantilever legs to the chip body and about 20% is radiated from the heater platform through the air gap to the medium/substrate. After softening has started and the contact area has increased, the heating power available for generating the indentations increases by at least ten times to become 2% or more of the total heating power.


With this highly nonlinear heat transfer mechanism it is very difficult to achieve small tip penetration and hence small bit sizes, as well as to control and reproduce the thermomechanical writing process. This situation can be improved if the thermal conductivity of the substrate is increased, and if the depth of tip penetration is limited. These characteristics can be improved by the use of very thin polymer layers deposited on Si substrates, as shown in figure 1. The hard Si substrate prevents the tip from penetrating farther than the film thickness, and it enables more rapid transport of heat away from the heated region, as Si is a much better conductor of heat than the polymer. By coating Si substrates with a 40-nm film of polymethylmethacrylate (PMMA), bit sizes ranging between 10 and 50 nm are achieved. However, this causes increased tip wear, probably caused by the contact between the Si tip and the Si substrate during writing. Therefore a 70-nm layer of cross-linked photoresist (SU-8) was introduced between the Si substrate and the PMMA film to act as a softer penetration stop that avoids tip wear, but remains thermally stable.

PEA Space Charge Measurement System (Electrical & Electronics Seminar Topics)

INTRODUCTION

The pulsed electro acoustic analysis (PEA) can be used for space charge measurements under dc or ac fields. The PEA method is a non-destructive technique for profiling space charge accumulation in polymeric materials. The method was first proposed by T. Takada et al. in 1985, and it has since been used for various applications. PEA systems can measure space charge profiles in the thickness direction of a specimen, with a resolution of around 10 microns and a repetition rate in the order of milliseconds. The experimental results contribute to the investigation of charge transport in dielectrics, aging of insulating materials and the clarification of the effect of chemical properties on space charge formation. The PEA method can measure only net charges and does not indicate the source of the charge.

Various space charge measurement techniques include the thermal step, thermal pulse, piezoelectric pressure step, laser-induced pressure pulse, and pulsed electro acoustic methods. In the thermal step method, both electrodes are initially in contact with a heat sink at a temperature around -10 degrees Celsius. A heat source is then brought into contact with one electrode, and the temperature profile through the sample begins to evolve towards equilibrium consistent with the new boundary conditions.

The resulting thermal expansion of the sample causes a current to flow between the electrodes, and application of an appropriate deconvolution procedure using Fourier analysis allows extraction of the space charge distribution from the current flow data. This technique is particularly suited for thicker samples (between 2 and 20 mm). Next is the thermal pulse technique. The common characteristic is a temporary, non -destructive displacement of the space charge in the bulk of a sample created by a traveling disturbance, such as a thermal wave, leading to a time dependent change in charge induced on the electrodes by the space charge. Compression or expansion of the sample will also contribute to the change in induced charge on the electrodes, through a change in relative permittivity. The change in electrode charge is analyzed to yield the space charge distribution.

The thermal pulse technique yields only the first moment of the charge distribution and its first few Fourier coefficients. Next is the laser-induced pressure pulse method. A temporary displacement of space charge can also be achieved using a pressure pulse in the form of a longitudinal sound wave. Such a wave is generated, through conservation of momentum, when a small volume of a target attached to the sample is ablated following absorption of energy delivered in the form of a short laser pulse. The pressure pulse duration in laser-induced pressure pulse measurements depends on the laser pulse duration, and it can be chosen to suit the sample thickness, i.e., the thinner the sample, the shorter the laser pulse should be.
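The Fourier deconvolution step mentioned above can be sketched numerically. The example below is illustrative only: it assumes the measured signal is the noisy convolution of the true charge profile with a known instrument response, and recovers the profile by regularised division in the frequency domain; real PEA or thermal-step analysis adds calibration, attenuation and dispersion corrections.

    # Minimal sketch of Fourier-domain deconvolution of the kind used to recover a charge
    # profile from a measured response (illustrative only).
    import numpy as np

    n = 256
    x = np.arange(n)
    true_profile = np.exp(-0.5 * ((x - 80) / 6.0) ** 2) - 0.7 * np.exp(-0.5 * ((x - 170) / 10.0) ** 2)

    # Assume the instrument smears the profile with a known response h (here a Gaussian pulse).
    h = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
    h /= h.sum()
    H = np.fft.fft(np.fft.ifftshift(h))
    measured = np.real(np.fft.ifft(np.fft.fft(true_profile) * H))
    measured += 0.01 * np.random.default_rng(0).standard_normal(n)  # measurement noise

    # Regularised (Wiener-like) deconvolution: divide spectra, damping ill-conditioned frequencies.
    eps = 1e-3
    recovered = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps)))

    print(float(np.corrcoef(true_profile, recovered)[0, 1]))  # close to 1 if the recovery works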

Space charge measurement has become a common method for investigating the dielectric properties of solid materials. Space charge observation is becoming the most widely used technique to evaluate polymeric materials for dc-insulation applications, particularly high-voltage cables. The presence of space charges is the main problem causing premature failure of high-voltage dc polymeric cables. It has been shown that insulation degradation under service stresses can be diagnosed by space charge measurements.

The term" space charge" means uncompensated real charge generated in the bulk of the sample as a result of (a) charge injection from electrodes, driven by a dc field not less than approximately 10 KV/mm, (b) application of mechanical/thermal stress, if the material is piezoelectric/ pyroelectric (c) field-assisted thermal ionization of impurities in the bulk of the dielectric.

Pebble-Bed Reactor (Electrical & Electronics Seminar Topics)

INTRODUCTION

The development of the nuclear power industry has been nearly stagnant in the past few decades. In fact there has been no new nuclear power plant construction in the United States since the late 1970s. Many who saw nuclear power as a promising technology during the nation's "Cold War" days now frown upon it, despite the fact that nuclear power currently provides the world with 17% of its energy needs. Nuclear technology's lack of popularity is not difficult to understand, since fear of it has been promoted by the entertainment industry, news media, and extremists. There is public fear because movies portray radiation as the cause of every biological mutation, and now terrorist threats against nuclear installations have been hypothesized. Also, the lack of understanding of nuclear science has kept news media and extremists on the offensive. The accidents at Three Mile Island (TMI) and Chernobyl were real and their effects were dangerous and, in the latter case, lethal. However, many prefer to give up the technology rather than learn from these mistakes.

Recently, there has been a resurgence of interest in nuclear power development by several governments, despite the resistance. The value of nuclear power as an alternative fuel source is still present and public fears have only served to make the process of obtaining approval more difficult. This resurgence is due to the real threat that global warming, caused by the burning of fossil fuels, is destroying the environment. Moreover, these limited resources are quickly being depleted because of their increased usage from a growing population.

It is estimated that developing countries will expand their energy consumption to 3.9 times that of today by the mid-21st century, and global consumption is expected to grow by 2.2 times. Development has been slow, since deregulation of the power industry has forced companies to look for inexpensive solutions with short-term returns rather than to invest in expensive solutions with long-term returns. Short-term solutions, such as the burning of natural gas in combined cycle gas turbines (CCGT), have been the most cost effective but remain resource limited. Therefore, a few companies and universities, subsidized by governments, are examining new ways to provide nuclear power.

An acceptable nuclear power solution for energy producers and consumers would depend upon safety and cost effectiveness. Many solutions have been proposed including the retrofit of the current light water reactors (LWR). At present, it seems the most popular solution is a High Temperature Gas Cooled Reactor (HTGR) called the Pebble Bed Modular Reactor (PBMR).

HISTORY OF PBMR

The history of gas-cooled reactors (GCR) began in November of 1943 with the graphite-moderated, air-cooled, 3.5-MW, X-10 reactor in Oak Ridge, Tennessee. Gas-cooled reactors use graphite as a moderator and a circulation of gas as a coolant. A moderator like graphite is used to slow the prompt neutrons created from the reaction such that a nuclear reaction can be sustained. Reactors used commercially in the United States are generally LWRs, which use light water as a moderator and coolant.

Development of the more advanced HTGRs began in the 1950s to improve upon the performance of the GCRs. HTGRs use helium as a gas coolant to increase operating temperatures. Initial HTGRs were the Dragon reactor in the U.K., developed in 1959, and, almost simultaneously, the Arbeitsgemeinschaft Versuchsreaktor (AVR) reactor in Germany.

Dr Rudolf Schulten (considered "father" of the pebble bed concept) decided to do something different for the AVR reactor. His idea was to compact silicon carbide coated uranium granules into hard billiard-ball-like graphite spheres (pebbles) and use them as fuel for the helium cooled reactor.

The first HTGR prototype in the United States was Peach Bottom Unit 1 in the late 1960s. The success of these reactors was followed by construction of the Fort St. Vrain (FSV) reactor in Colorado and the Thorium High Temperature Reactor (THTR-300) in Germany. These reactors used primary systems enclosed in prestressed concrete reactor vessels rather than the steel vessels of previous designs. The FSV incorporated ceramic-coated fuel particles embedded within rods placed in large hexagonal graphite elements, and the THTR-300 used spherical fuel elements (pebble bed). These test reactors provided valuable information for future designs.

Low-k Dielectrics

INTRODUCTION

In this fast-moving world, time delay is one of the most dreaded situations in the field of data communication. A delay in communication is as bad as losing the information, whether it is on the internet, on television or over a telephone. We need to find different ways to improve communication speed. The various methods adopted by the communication industry are wireless technology, optical communications, ultra wide band communication networks, etc. But all these methods need an initial capital outlay, which makes them cost ineffective. So improving the existing network is very important, especially in a country like India.

A communication system mainly consists of a transceiver and a channel. The transceiver is the core of all data communications. It contains a vast variety of electronic components, mostly integrated into different forms of IC chips. These ICs provide the various signal modifications like amplification, modulation, etc. The delay caused in these circuits will definitely affect the speed of data communication.

This is where the topic of low-k dielectrics becomes relevant. It is one of the most recent developments in the field of integrated electronics. Mostly, ICs are manufactured using CMOS technology. This technology has an embedded coupling capacitance that reduces the speed of operation. There are many other logic families available, like RTL, DTL, ECL, TTL, etc., but all of them have higher power consumption than CMOS technology, so the industry prefers CMOS over the other logic families.

Inside the IC there are many interconnections between points in the CMOS substrate. These refer to the connections between the different transistors in the IC. For example, in the case of NAND logic there are many connections between the transistors and their feedbacks. These connections are made by the interconnect inside the IC. Aluminum has been the material of choice for the circuit lines used to connect transistors and other chip components. These thin aluminum lines must be isolated from each other with an insulating material, usually silicon dioxide (SiO2).

This basic circuit construction technique has worked well through the many generations of computer chip advances predicted by Moore's Law. However, as aluminum circuit lines approach 0.18 µm in width, the limiting factor in computer processor speed shifts from the transistors' gate delay to the interconnect delay caused by the aluminum lines and the SiO2 insulation material. With the introduction of copper lines, part of the "speed limit" has been removed. However, the properties of the dielectric material between the layers and lines must now be addressed. Although integration of low-k will occur at the 0.13 µm technology node, industry opinion is that the 0.10 µm generation, set for commercialization in 2003 or 2004, will be the true proving ground for low-k dielectrics, because the whole industry will need to use low-k at that line width.
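A back-of-the-envelope comparison shows why k matters: line-to-line capacitance, and with it the RC component of interconnect delay, scales roughly linearly with the dielectric constant. The k values below are typical textbook figures (SiO2 around 3.9, a carbon-doped low-k film around 2.7), not numbers from this article.

    # Back-of-the-envelope illustration: the RC component of interconnect delay scales
    # roughly with the dielectric constant k, all other geometry and metal being equal.
    k_sio2 = 3.9     # conventional SiO2 (typical value, assumed)
    k_lowk = 2.7     # a representative low-k film (typical value, assumed)

    relative_delay = k_lowk / k_sio2
    print(f"RC delay with low-k vs SiO2: {relative_delay:.2f}x "
          f"({(1 - relative_delay) * 100:.0f}% reduction, other factors equal)")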

Sunday, February 14, 2010

The YouTube (R)evolution Turns 5

Founded five years ago, YouTube is now a full-fledged grown-up by Internet standards. Its days as an impulsive startup--replete with a cluttered office located between a pizza parlor and a Japanese restaurant--are long gone, and its incredible growth over the past half-decade has changed how we live, play, and do business.

Opera 10.5 lags in my speed tests

Opera, which was bumped down to fifth place in browser usage after Google Chrome burst on the scene, has embraced a super-fast JavaScript engine as part of its bid to stay relevant.

Unfortunately for Opera, my tests show more work is needed.

The beta version of Opera 10.5 arrived Thursday morning, and I thought it a good time to compare how some of the cutting-edge versions of the browsers were shaping up in performance--especially because Mozilla has released a preview version of the next version of Firefox.

Wednesday, February 10, 2010

VoCable (Electronics Seminar Topics)

Voice (and fax) service over cable networks is known as cable-based Internet Protocol (IP) telephony. Cable-based IP telephony holds the promise of simplified and consolidated communication services provided by a single carrier at a lower cost than consumers currently pay to separate Internet, television and telephony service providers. Cable operators have already worked through the technical challenges of providing Internet service and optimizing the existing bandwidth in their cable plants to deliver high-speed Internet access. Now, cable operators have turned their efforts to the delivery of integrated Internet and voice service using that same cable spectrum. Cable-based IP telephony falls under the broad umbrella of voice over IP (VoIP), meaning that many of the challenges facing cable operators are the same challenges that telecom carriers face as they work to deliver voice over ATM (VoATM) and frame-relay networks. However, ATM and frame-relay services are targeted primarily at the enterprise, a decision driven by economics and the need for service providers to recoup their initial investments in a reasonable amount of time. Cable, on the other hand, is targeted primarily at the home. Unlike most businesses, the overwhelming majority of homes in the United States are passed by cable, reducing the required up-front infrastructure investment significantly. Cable is not without competition in the consumer market, for digital subscriber line (xDSL) has emerged as the leading alternative to broadband cable.

Optic Fibre Cable

Optical fiber (or "fiber optic") refers to the medium and the technology associated with the transmission of information as light pulses along a glass or plastic wire or fiber. Optical fiber carries much more information than conventional copper wire, is in general not subject to electromagnetic interference, and needs signals retransmitted less often. Most telephone company long-distance lines are now of optical fiber. Transmission on optical fiber requires repeaters at distance intervals. The glass fiber requires more protection within an outer cable than copper. For these reasons, and because the installation of any new wiring is labor-intensive, few communities yet have optical fiber wires or cables from the phone company's branch office to local customers (known as local loops). Optical fiber consists of a core, cladding, and a protective outer coating, which guide light along the core by total internal reflection. The core and the lower-refractive-index cladding are typically made of high-quality silica glass, though they can both be made of plastic as well. An optical fiber can break if bent too sharply. Due to the microscopic precision required to align the fiber cores, connecting two optical fibers, whether done by fusion splicing or mechanical splicing, requires special skills and interconnection technology. The two main categories of optical fiber used in fiber optic communications are multi-mode optical fiber and single-mode optical fiber. Multimode fiber has a larger core, allowing less precise, cheaper transmitters and receivers to connect to it, as well as cheaper connectors.
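The total-internal-reflection guidance can be quantified with the numerical aperture, NA = sqrt(n_core² - n_clad²), which sets the fiber's acceptance cone. The index values in this sketch are typical for silica fiber and are assumed, not taken from the article.

    # Quick illustration of light guidance in a step-index fiber: numerical aperture,
    # acceptance half-angle in air, and the critical angle at the core/cladding boundary.
    import math

    n_core, n_clad = 1.475, 1.460   # typical silica values (assumed)
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
    acceptance_half_angle = math.degrees(math.asin(numerical_aperture))  # launch cone in air
    critical_angle = math.degrees(math.asin(n_clad / n_core))            # for total internal reflection

    print(f"NA ≈ {numerical_aperture:.3f}, acceptance half-angle ≈ {acceptance_half_angle:.1f}°, "
          f"critical angle ≈ {critical_angle:.1f}°")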

VLSI Computations

Over the past four decades the computer industry has experienced four generations of development, physically marked by the rapid changing of building blocks from relays and vacuum tubes (1940-1950s) to discrete diodes and transistors (1950-1960s), to small- and medium-scale integrated (SSI/MSI) circuits (1960-1970s), and to large- and very-large-scale integrated (LSI/VLSI) devices (1970s and beyond). Increases in device speed and reliability and reductions in hardware cost and physical size have greatly enhanced computer performance. However, better devices are not the sole factor contributing to high performance. Ever since the stored-program concept of von Neumann, the computer has been recognized as more than just a hardware organization problem. A modern computer system is really a composite of such items as processors, memories, functional units, interconnection networks, compilers, operating systems, peripheral devices, communication channels, and database banks. To design a powerful and cost-effective computer system, and to devise efficient programs to solve a computational problem, one must understand the underlying hardware and software system structures and the computing algorithm to be implemented on the machine with some user-oriented programming languages. These disciplines constitute the technical scope of computer architecture. Computer architecture is really a system concept integrating hardware, software, algorithms, and languages to perform large computations. A good computer architect should master all these disciplines.

Digital Subscriber Line(EC seminar topics)

The accelerated growth of content-rich applications that demand high bandwidth has changed the nature of information networks. High-speed communication is now an ordinary requirement throughout business, government, academic, and "work-at-home" environments. High-speed Internet access, telecommuting, and remote LAN access are three services that network access providers clearly must offer. These rapidly growing applications are placing a new level of demand on the telephone infrastructure, in particular the local loop portion of the network (i.e., the local connection from the subscriber to the local central office). The local loop facility is provisioned with copper cabling, which cannot easily support high-bandwidth transmission. This environment is now being stressed by the demand for increasingly higher bandwidth capacities. Although this infrastructure could be replaced by a massive rollout of fiber technologies, the cost to do so is prohibitive in today's business models. More importantly, the time to accomplish such a transition is unacceptable, because the market demand exists today! This demand for data services has created a significant market opportunity for providers that are willing and able to invest in technologies that maximize the copper infrastructure. Both incumbent and competitive Local Exchange Carriers (ILECs and CLECs) are capitalizing on this opportunity by embracing such technologies. The mass deployment of high-speed Digital Subscriber Line (DSL) has changed the playing field for service providers. DSL, which encompasses several different technologies, essentially allows the extension of megabit bandwidth capacities from the service provider central office to the customer premises. Utilizing existing copper cabling, DSL is available at very reasonable costs without the need for massive infrastructure replacement.

Heliodisplay (EC seminar topics)

The heliodisplay is an interactive planar display. Though the image it projects appears much like a hologram, its inventors claim that it doesn't use holographic technology, though it does use rear projection (not lasers as originally reported) to project its image. It does not require any screen or substrate other than air to project its image, but it does eject a water-based vapour curtain for the image to be projected upon. The curtain is produced using ultrasonic technology similar to that used in foggers and comprises a number of columns of fog. This curtain is sandwiched between curtains of clean air to create an acceptable screen. Air drawn into the heliodisplay moves through a dozen metal plates and then comes out again. (The exact details of its workings are unknown, pending patent applications.) It works as a kind of floating touch screen, making it possible to manipulate images projected in air with your fingers, and can be connected to a computer using a standard VGA connection. It can also connect with a TV or DVD player by a standard RGB video cable. Due to the turbulent nature of the curtain, though, it is not currently suitable as a workstation. The Heliodisplay is an invention by Chad Dyner, who built it as a 5-inch prototype in his apartment before founding IO2 Technologies to further develop the product.

Tuesday, February 9, 2010

Microsoft Unveils Child-friendly Net Browser

To protect children from viewing inappropriate content and from the even greater danger of online paedophilia, Microsoft has launched an enhanced version of Internet Explorer 8. Called the 'Click Clever, Click Safe' browser, it will empower youngsters and families with a one-click tool to report offensive content or the threat of cyber bullying to the authorities. The Internet giant said it developed the tool in collaboration with Ceop, the Child Exploitation and Online Protection Centre, reports The Telegraph.


It is estimated that almost two-thirds of under-18s in the UK have been contacted online by strangers. More than a third responded to the person's overtures. 41 per cent of parents were unaware whether their children had updated their privacy settings on their social networking profile. More than half the youngsters questioned said their parents did not monitor their online activities.

IBM Launches Eight-core Power7 Processor

IBM on Monday launched its latest Power7 processor, which adds more cores and improved multithreading capabilities to boost the performance of servers requiring high up time.

The Power7 chip has eight cores, with each core able to run four threads, IBM said. A Power7 chip can run 32 tasks simultaneously, which is quadruple the number of cores on the older Power6 chip. The Power7 will also run up to eight times more threads than Power6 cores.

The new chip also has TurboCore technology, which allows customers to crank up the speed of active cores for performance gains. The technology also puts memory and bandwidth from eight cores behind the four active cores to drive up the performance gains per core.

The company also launched four Power7-based servers. IBM Power 780 and Power 770 high-end servers are based on modular designs and come with up to 64 Power7 cores. The IBM Power 755 will support up to 32 Power7 cores. The company also launched the 750 Express server. The Power 750 Express and 755 will ship on Feb. 19, while the Power 770 and 780 will become available on March 16.

Saturday, February 6, 2010

Skid Steer Loader and Multiterrain Loader (mechanical seminar topics)

Definition

Skid-steer loaders began catching on in the construction field in the 1980s because they offered contractors a way to automate functions that had previously been performed by manual labor.

Those were small, inexpensive machines that improved labor productivity and reduced work-related injuries. Their small size and maneuverability allowed them to operate in tight spaces, their light weight allowed them to be towed behind a full-size pickup truck, and a wide array of work-tools made them very flexible. They were utility machines, used for odd jobs ranging from work site clean up to small scale digging, lifting, and loading. In most cases, they logged far fewer hours of usage each year than backhoe loaders and wheel loaders, but they were cheap, and so easy to operate that anyone on a job site could deploy them with very little training.

Since then, the category has become wildly popular in all avenues of construction. They are the best-selling type of construction equipment in North America, with annual sales exceeding 50,000 units. They still tend to be low-hour machines, but, thanks to a virtually unlimited variety of attachments, skid-steer loaders can handle a huge array of small-scale jobs, from general earthmoving and material handling to post hole digging and landscaping to pavement milling and demolition.

As the machine has grown in popularity, it has become one of the hottest rental items in North America. Equipment rental houses consume roughly one-third of the new units sold each year, and most stock a wide array of attachments, too. The ready availability of rental attachments - especially high-ticket, specialty items like planers, vibratory rollers, tillers, and snow blowers and pushers - has turned the machine's potential for versatility into a cost-effective reality.

As the skid-steer has become more popular in construction, the average size of the machine has grown, too. In the mid-1980s, the most popular operating load class was 900 to 1,350 pounds. By the mid-1990s, the 1,350 to 1,750 pound class was the most popular. Today, the over-1,750-pound classifications are the fastest growing.

Larger machines have dominated new product introductions, though our survey of recent new product announcements has also turned up a spate of compact and sub-compact introductions. The smallest of these are ride-behind models aimed mainly at the consumer rental trade, but they are also used in landscaping and other types of light construction, essentially to automate jobs that would otherwise be done by laborers with shovels.

Road contractors and government highway departments should find the new super-duty class of skid-steer loaders especially interesting. These units have retained the skid-steer's traditional simplicity of operation and compact packaging, while also boasting power and weight specifications that let them perform many of the tasks done by backhoe loaders and compact wheel loaders. Nearly all boast high-pressure, high-flow hydraulic systems to run the most sophisticated hydraulic attachments. They also feature substantial break-out force ratings for serious loading and substantial lifting capacities for material handling.

The skid-steer loader represents an interesting alternative for fleets that have low-hour backhoe loaders in inventory. Led by Bobcat, Gehl, Mustang, and other companies that make skid-steers but not backhoe loaders, skid-steer marketers have been pushing the proposition that it is more cost effective to replace a backhoe loader with a skid-steer and a mini-excavator. The rationale: for about the same amount of money, you get more hours of utilization because you have two machines that can be working simultaneously at different jobs.

F1 Track Design and Safety

Definition

Success is all about being in the right place at the right time, and the axiom is a guiding principle for designers of motorsport circuits. To avoid problems you need to know where and when things are likely to go wrong before cars turn a wheel, and anticipating accidents is a science.

Take barriers, for example. There is little point erecting them in the wrong place, but predicting the right place is a black art. The FIA has developed bespoke software, the Circuit and Safety Analysis System (CSAS), to predict problem areas on F1 circuits.

Where and when cars leave circuits is due to the complex interaction between their design, the driver's reaction and the specific configuration of the track, and the CSAS allows the input of many variables (lap speeds, engine power curves, car weight changes, aerodynamic characteristics, etc.) to predict how cars may leave the circuit at particular places. The variables are complex. The impact point of a car continuing in a straight line at a corner is easy to predict, but if the driver has any remaining control and alters the car's trajectory, or if a mechanical fault introduces fresh variables, its final destination is tricky to model.
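As a simplified worked example of the "easy" straight-on case, constant-deceleration kinematics (d = v²/2a) gives a first estimate of how much run-off a car needs; the 1 g gravel deceleration assumed below is illustrative, not an FIA figure.

    # Simplified worked example: stopping distance for a car that leaves the track at a corner
    # and decelerates uniformly in the run-off area. Figures are illustrative, not FIA data.
    def runoff_distance(speed_kmh, decel_g=1.0, g=9.81):
        v = speed_kmh / 3.6                 # convert km/h to m/s
        return v * v / (2 * decel_g * g)    # d = v^2 / (2a), uniform deceleration

    for speed in (150, 200, 250):
        print(f"{speed} km/h -> about {runoff_distance(speed):.0f} m of run-off needed")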

Modern tyre barriers are built of road tyres with plastic tubes sandwiched between them. The side facing the track is covered with conveyor belting to prevent wheels becoming snagged and distorting the barrier. The whole provides a deformable 'cushion', a principle that has found its way to civilian roads. Barriers made of air-filled cells, currently under investigation, may be the final answer. Another important safety factor is the road surface. Racing circuits are at the cutting edge of surface technology, experimenting with new materials for optimum performance.

Circuit and Safety Analysis System (CSAS)

Predicting the trajectory and velocity of a racing car when it is driven at the limit within the confines of a racing track is now the subject of a great deal of analytical work by almost all teams involved in racing at all levels. However, predicting the trajectory and velocity of a car once the driver has lost control of it has not been something the teams have devoted a great deal of time to. This can now also be analyzed, though, in the same sort of detail, to assess the safety features of the circuits on which it is raced. The two tasks are very different, and the FIA had to start almost from scratch when it set out to develop software for its Circuit and Safety Analysis System (CSAS).

The last two decades have seen a steady build-up of the R&D effort going into vehicle dynamics modeling, particularly by those teams that design and develop cars as well as race them. The pace of development has been set by the availability of powerful PCs, the generation of vehicle and component data, and the supply of suitably qualified graduates to carry out the work. Their task is to be able to model and predict the effects of every nuance of aerodynamic, tire, engine, damper, etc., characteristics on the speed of their car at every point on a given circuit. The detail in the model will only be limited by available dynamic characteristics and track data, and will require a driver model to complete the picture. However, they are only interested in the performance of the car while the tires are in contact with the tarmac, and the driver is operating them at or below their peaks.

Green Engine (mechanical seminar topics)

Definition

Global Issues

Every day, radios, newspapers, televisions and the internet warn us of energy exhaustion, atmospheric pollution and hostile climatic conditions. After a few hundred years of industrial development, we are facing these global problems while at the same time we maintain a high standard of living. The most important problem we are faced with is whether we should continue "developing" or "die".

Coal, petroleum, natural gas, water and nuclear energy are the five main energy sources that have played important roles and have been widely used by human beings.

The United Nations Energy Organization names all of them "elementary energies", as well as "conventional energies". Electricity is merely a "secondary energy" derived from these sources. At present, the energy consumed all over the world relies almost completely on the supply of the five main energy sources. The consumption of petroleum constitutes approximately 60 percent of energy used from all sources, so it is the most heavily consumed energy source.

Statistics show that the daily consumption of petroleum all over the world today is 40 million barrels, of which about 50 percent is for automobile use. That is to say, automotive petroleum constitutes about 35 percent of total petroleum consumption. In accordance with this calculation, daily consumption of petroleum by automobiles all over the world is over two million tonnes. As these fuels are burnt, poisonous materials such as 500 million tonnes of carbon monoxide (CO), 100 million tonnes of hydrocarbons (HC), 550 million tonnes of carbon (C), and 50 million tonnes of nitrogen oxides (NOx) are emitted into the atmosphere every year, severely polluting it. At the same time, large quantities of carbon dioxide (CO2) gas, resulting from the burning, bear major responsibility for the "greenhouse effect".
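A rough arithmetic check of the figures above (assuming about 0.136 tonnes per barrel of crude, a conversion not stated in the article) lands close to the two-million-tonne daily figure:

    # Rough arithmetic check of the consumption figures quoted above.
    barrels_per_day = 40e6
    tonnes_per_barrel = 0.136   # assumed crude-oil conversion factor
    auto_share = 0.35           # share attributed to automobiles above

    total_tonnes = barrels_per_day * tonnes_per_barrel
    auto_tonnes = total_tonnes * auto_share
    print(f"total ≈ {total_tonnes/1e6:.1f} million tonnes/day, "
          f"automobiles ≈ {auto_tonnes/1e6:.1f} million tonnes/day")  # roughly two million tonnes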

Atmospheric scientists now believe that carbon dioxide is responsible for about half the total "greenhouse effect". Therefore, automobiles have to be deemed the major energy consumers and atmospheric polluters. This situation is also growing fast, with more than 50 million vehicles produced annually all over the world and placed on the market. However, it is estimated that the world's petroleum reserves will last for only 38 years. The situation is really very grim.

Addressing such problems is what a Green engine does, or tries to do. The Green engine, as it is named for the time being, is a six-phase engine which has very low exhaust emissions, higher efficiency, low vibrations, etc. Apart from these features, it is unique in its ability to adapt to any fuel, which is also well burnt. Needless to say, if implemented it will serve the purpose to a large extent.

Compared to conventional piston engines, which operate on four phases, the Green engine is an actual six-phase internal combustion engine with a much higher expansion ratio. Thus it has six independent or separate working processes: intake, compression, mixing, combustion, power and exhaust, resulting in a high air charge rate, satisfactory air-fuel mixing, complete burning, high combustion efficiency and full expansion. The most important characteristic is that the expansion ratio is much bigger than the compression ratio.


IP Spoofing (seminar topic)

Definition

Criminals have long employed the tactic of masking their true identity, from disguises to aliases to caller-id blocking. It should come as no surprise then, that criminals who conduct their nefarious activities on networks and computers should employ such techniques. IP spoofing is one of the most common forms of on-line camouflage. In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by "spoofing" the IP address of that machine. In the subsequent pages of this report, we will examine the concepts of IP spoofing: why it is possible, how it works, what it is used for and how to defend against it.

Brief History of IP Spoofing

The concept of IP spoofing was initially discussed in academic circles in the 1980s. In the April 1989 article entitled "Security Problems in the TCP/IP Protocol Suite", author S. M. Bellovin of AT&T Bell Labs was among the first to identify IP spoofing as a real risk to computer networks. Bellovin describes how Robert Morris, creator of the now infamous Internet Worm, figured out how TCP created sequence numbers and forged a TCP packet sequence. This TCP packet included the destination address of his "victim", and using an IP spoofing attack Morris was able to obtain root access to his targeted system without a user ID or password. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed IP spoofing and TCP sequence prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators. A common misconception is that "IP spoofing" can be used to hide your IP address while surfing the Internet, chatting on-line, sending e-mail, and so forth. This is generally not true. Forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection. However, IP spoofing is an integral part of many network attacks that do not need to see responses (blind spoofing).

2. TCP/IP Protocol Suite

IP spoofing exploits flaws in the TCP/IP protocol suite. To understand fully how these attacks can take place, one must examine the structure of the TCP/IP protocol suite; a basic understanding of its headers and network exchanges is crucial to what follows.

2.1 Internet Protocol - IP

The Internet Protocol (or IP, as it is generally known) is the network layer of the Internet. IP provides a connectionless service: its job is to route a packet to the packet's destination, and it gives no guarantee whatsoever for the packets it tries to deliver. IP packets are usually termed datagrams. The datagrams pass through a series of routers before they reach the destination; at each node, the node determines the next hop for the datagram and routes it onward. Since the network is dynamic, it is possible that two datagrams from the same source take different paths to the destination, and since the network has variable delays, it is not guaranteed that the datagrams will be received in sequence. IP only attempts a best-effort delivery; it does not take care of lost packets, which is left to the higher-layer protocols. There is no state maintained between two datagrams; in other words, IP is connectionless.
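To make this concrete, a bare-bones IPv4 header can be packed by hand as below (the checksum is left at zero and the field values and addresses are assumptions for illustration). Nothing in the header authenticates the source address; routers forward on the destination address and take the source on faith, which is exactly what spoofing exploits.

```python
import socket
import struct

# A minimal 20-byte IPv4 header packed by hand (illustrative values only).
version_ihl = (4 << 4) | 5                 # IPv4, header length of 5 words (20 bytes)
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl, 0, 20,                    # version/IHL, TOS, total length
    0, 0,                                  # identification, flags/fragment offset
    64, socket.IPPROTO_ICMP, 0,            # TTL, protocol, checksum (0 for brevity)
    socket.inet_aton("10.0.0.5"),          # claimed (spoofed) source address
    socket.inet_aton("192.0.2.10"),        # actual destination address
)
print(header.hex())
```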

Lean manufacturing(mechanical seminar topics)

INTRODUCTION

In the early 1900s, U.S. manufacturers such as Henry Ford introduced the concept of mass production. U.S. manufacturers have always searched for efficiency strategies that help reduce costs, improve output, establish competitive position, and increase market share. The early process-oriented mass production methods common before World War II shifted afterwards to the results-oriented, output-focused production systems that control most of today's manufacturing businesses.

Japanese manufacturers re-building after the Second World War faced declining human, material, and financial resources. The problems they faced in manufacturing were vastly different from those of their Western counterparts. These circumstances led to the development of new, lower-cost manufacturing practices. Early Japanese leaders such as the Toyota Motor Company's Eiji Toyoda, Taiichi Ohno, and Shigeo Shingo developed a disciplined, process-focused production system now known as "lean production." The objective of this system was to minimize the consumption of resources that added no value to a product.

The "lean manufacturing" concept was popularized in American factories in large part by the Massachusetts Institute of Technology study of the movement from mass production toward production as described in The Machine That Changed the World, (Womack, Jones & Roos, 1990), which discussed the significant performance gap between Western and Japanese automotive industries. This book described the important elements accounting for superior performance as lean production. The term "lean" was used because Japanese business methods used less human effort, capital investment, floor space, materials, and time in all aspects of operations. The resulting competition among U.S. and Japanese automakers over the last 25 years has lead to the adoption of these principles within all U.S. manufacturing businesses. Now it has got global acceptance and is adopted by industries world over to keep up with the fast moving and competing industrial field.


WHAT IS LEAN MANUFACTURING?
Lean manufacturing is a manufacturing system and philosophy that was originally developed by Toyota in Japan and is now used by many manufacturers throughout the world.

Lean Manufacturing can be defined as:
"A systematic approach to identifying and eliminating waste (non-value-added activities) through continuous improvement by flowing the product at the pull of the customer in pursuit of perfection."

The term lean manufacturing is a more generic term and refers to the general principles and further developments of becoming lean. The term "lean" is very apt because the emphasis in lean manufacturing is on cutting out the "fat", or waste, in the manufacturing process. Waste is defined as anything that does not add value to the product; it can also be defined as anything the customer is not willing to pay for.

The manufacturing philosophy pivots on designing a manufacturing system that blends together the fundamentals of minimizing cost and maximizing profit. These fundamentals are Man (labour), Materials and Machines (equipment), the 3 M's of manufacturing; lean manufacturing results in a well-balanced 3M.


WASTES IN MANUFACTURING
The aim of Lean Manufacturing is the elimination of waste in every area of production including customer relations, product design, supplier networks, and factory management. Its goal is to incorporate less human effort, less inventory, less time to develop products, and less space to become highly responsive to customer demand while producing top quality products in the most efficient and economical manner possible.

Essentially, a "waste" is anything that the customer is not willing to pay for.
Typically the types of waste considered in a lean manufacturing system include:

Overproduction
To produce more than demanded or produce it before it is needed. It is visible as storage of material. It is the result of producing to speculative demand. Overproduction means making more than is required by the next process, making earlier than is required by the next process, or making faster than is required by the next process.
Causes for overproduction waste include:
" Just-in-case logic
" Misuse of automation
" Long process setup
" Unleveled scheduling
" Unbalanced work load
" Over engineered
" Redundant inspections

Electro Discharge Machining(mechanical seminar topic)

Unconventional machining methods offer several specific advantages over conventional machining methods; they allow formidable tasks to be undertaken and have set new records in manufacturing technology. EDM is one such machining process, and it has been of immense help to manufacturing process engineers in producing intricate shapes on any conducting metal or alloy irrespective of its hardness and toughness.

CLASSIFICATION

1. Contact-initiated discharge
2. Spark-initiated discharge
3. Electrolytic discharge

ADVANTAGES

1. The process can be applied to all electrically conducting metals and alloys irrespective of their melting point, hardness, toughness, or brittleness.
2. Any complicated shape that can be made on the tool can be produced on the work piece.
3. Machining time is less than in conventional machining processes.

DISADVANTAGES

1. The power required for machining in EDM is very high compared with conventional processes.
2. Reproduction of sharp corners is a limitation of the process.
3. Surface cracking takes place in some materials.

Friday, February 5, 2010

MPEG Video Compression(Information Technology Seminar Topics)

Definition

MPEG is the famous four-letter acronym that stands for the "Moving Picture Experts Group".
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to turn the compact tokens back into something resembling the original stream of samples.

These semantics are merely a collection of rules (which people like to call algorithms, though that would imply there is a mathematical coherency to a scheme cooked up by trial and error…). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.

MPEG is an institution unto itself, as seen from within its own universe. When (unadvisedly) placed in the same room, its inhabitants can spontaneously erupt into a blood-letting debate, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt can appear totally indifferent to critical issues set before it for many months.

It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (JVC, Japan). The excellent by-product of the successful MPEG process became an International Standards document, safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).

Pre MPEG
Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (it always boils down to visual tweaking and implementation trade-offs), you never know what is really behind any such scheme (other than a lot of marketing hype).
Seeing this threat… that is, this need for world interoperability, the fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs… and maybe it would be useful for other things too.

MPEG borrowed significantly from JPEG and, more directly, from H.261. By the end of the third year (1990), a syntax emerged which, when applied to SIF-rate video and compact-disc-rate audio at a combined bitrate of 1.5 Mbit/s, approximated the pleasure-filled viewing experience offered by the standard VHS format.
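As a toy sketch of the transform coding that MPEG inherits from this JPEG/H.261 lineage, the snippet below applies an 8x8 DCT to a smooth block of pixels and coarsely quantizes the coefficients so that most of them become zero; the block values and the quantization step are assumptions chosen for illustration, not values from the standard.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform used for intra-coded blocks."""
    k = np.arange(n)
    c = np.where(k == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
    return c[:, None] * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))

def encode_block(block, q_step=16):
    """2-D DCT of a level-shifted 8x8 block, then coarse uniform quantization."""
    d = dct_matrix(8)
    coeffs = d @ (block - 128.0) @ d.T
    return np.round(coeffs / q_step).astype(int)

rng = np.random.default_rng(0)
smooth_block = np.clip(100 + np.add.outer(np.arange(8), np.arange(8)) * 3
                       + rng.normal(0, 2, (8, 8)), 0, 255)
q = encode_block(smooth_block)
print(q)
print("non-zero coefficients:", np.count_nonzero(q), "of 64")
```

The handful of surviving coefficients, plus the rules of the syntax, are essentially what the bitstream carries for such a block.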

After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television), not to mention the side benefits: frequent-flier miles.
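For a rough sense of what that 1.5 Mbit/s target implies, a back-of-the-envelope comparison of raw SIF-rate video against the approximate video share of the stream is sketched below; the frame size, frame rate and ~1.15 Mbit/s video share are assumptions, not an exact MPEG-1 budget.

```python
# Back-of-the-envelope only: raw SIF-rate 4:2:0 video versus a coded stream.
width, height, fps = 352, 240, 30      # SIF at 30 frames/s (assumed)
bits_per_pixel = 12                    # 4:2:0 sampling averages 12 bits per pixel
raw_bps = width * height * fps * bits_per_pixel
coded_bps = 1.15e6                     # approximate video share of 1.5 Mbit/s (assumed)
print(f"raw: {raw_bps / 1e6:.1f} Mbit/s, coded: {coded_bps / 1e6:.2f} Mbit/s, "
      f"ratio ~{raw_bps / coded_bps:.0f}:1")
```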

Quantum Information Technology(Information Technology Seminar Topics)

Definition

The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This document aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the paper begins with an introduction to classical information theory. The principles of quantum mechanics are then outlined.

The EPR-Bell correlation, and quantum entanglement in general, form the essential new ingredient that distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource.
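Since entanglement is the recurring resource here, a minimal state-vector sketch may help. The plain-numpy snippet below (a toy calculation, not a full quantum simulator) prepares the two-qubit Bell state that underlies teleportation and entanglement-based key distribution.

```python
import numpy as np

# Prepare the Bell state (|00> + |11>)/sqrt(2) with a Hadamard and a CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on the first qubit
state = CNOT @ state                           # entangle the two qubits

print("amplitudes:", np.round(state, 3))       # approximately [0.707, 0, 0, 0.707]
print("P(00) =", round(state[0] ** 2, 3), " P(11) =", round(state[3] ** 2, 3))
```

Measuring either qubit alone gives a random bit, yet the two outcomes are perfectly correlated, which is the property the protocols above exploit.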

Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, superconducting cavities, nuclear magnetic resonance (NMR) techniques, and quantum dots. "Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 tubes and weigh only 1 1/2 tons" (Popular Mechanics, March 1949).

Now, if this seems like a joke, wait a second. "Tomorrow's computer might well resemble a jug of water."
This for sure is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another --- from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement.

Today's advanced lithographic techniques can squeeze logic gates and wires a fraction of a micron wide onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale, matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new quantum technology must replace or supplement what we have now.

Quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock speed of microprocessors. It can support an entirely new kind of computation, with qualitatively new algorithms based on quantum principles!

Single Photon Emission Computed Tomography (SPECT)

Definition

Emission computed tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, where two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym for Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical, as in ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides administered to the patient's body.

SPECT dates from the early 1960s, when the idea of emission transverse-section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first commercial single-photon ECT, or SPECT, imaging device was developed by Edwards and Kuhl, who produced tomographic images from emission data in 1963. Many research systems that became clinical standards were developed in the 1980s.

SPECT is short for single photon emission computed tomography. As its name suggests (single photon emission), gamma rays are the source of the information, rather than the X-ray emission used in a conventional CT scan.

Similar to X-ray CT, MRI and other modalities, SPECT allows us to visualize information about a patient's specific organ or body system, in this case functional information.

Internal radiation is administered by means of a pharmaceutical labeled with a radioactive isotope. This radiopharmaceutical decays, resulting in the emission of gamma rays, and these gamma rays give us a picture of what is happening inside the patient's body.

The gamma rays are detected using the most essential tool in nuclear medicine: the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image, or in SPECT imaging to acquire a 3-D image.
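To illustrate how an image is recovered from such camera views, the sketch below simulates projections of an assumed activity phantom and reconstructs a slice by simple unfiltered backprojection. This is only a toy illustration of the principle; clinical SPECT uses filtered or iterative reconstruction and models attenuation and scatter.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Simulate gamma-camera views: sum activity along rays at each angle."""
    return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sinogram, angles_deg, size):
    """Smear each 1-D projection back across the image plane and accumulate."""
    recon = np.zeros((size, size))
    for profile, a in zip(sinogram, angles_deg):
        smear = np.tile(profile, (size, 1))          # constant along the ray direction
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

size = 64
phantom = np.zeros((size, size))
phantom[24:40, 28:36] = 1.0                          # a simple "hot" region of activity
angles = np.linspace(0, 180, 60, endpoint=False)
sino = forward_project(phantom, angles)
recon = backproject(sino, angles, size)
print("hot region stands out in the reconstruction:",
      recon[24:40, 28:36].mean() > recon.mean())
```

The unfiltered reconstruction is blurred compared with the phantom, which is precisely why filtered backprojection and iterative methods are used in practice.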