EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Sunday, January 24, 2010

Integrated Power Electronics Module

Introduction

In power electronics, solid-state devices are used for the control and conversion of electric power. The goal of power electronics is to convert power from an electrical source to an electrical load in a highly efficient, highly reliable, and cost-effective way. Power electronics modules are key units in a power electronics system. These modules integrate power switches with the associated electronic circuitry for drive, control, and protection, together with other passive components.

During the past decades, power devices have undergone generation-by-generation improvements and can now handle significant power density. Power electronics packaging, on the other hand, has not kept pace with the development of semiconductor devices. This is due to the particular constraints of power electronics circuits: their integration is quite different from that of other electronic circuits. The objective of a power electronics circuit is electrical energy processing, so it requires high power-handling capability and proper thermal management.

Most of the currently used power electronics modules are made using wire-bonding technology [1, 2]. In these packages, power semiconductor dies are mounted on a common substrate and interconnected with wire bonds. The other associated electronic circuitry is mounted on a multilayer PCB and connected to the power devices by vertical pins. The wire bonds are prone to parasitic resistance and inductance and to fatigue failure. Because of its two-dimensional structure, the package is large. Another disadvantage is the ringing produced by the parasitics associated with the wire bonds.

To improve the performance and reliability of power electronics packages, wire bonds must be replaced. Research in power electronics packaging has resulted in an advanced packaging technique that can replace them. This new-generation package is termed the 'Integrated Power Electronics Module' (IPEM) [1]. In an IPEM, planar metallization is used instead of conventional wire bonds. It uses a three-dimensional integration technique that yields low-profile, high-density systems. It offers high-frequency operation and improved performance, and it also reduces the size, weight, and cost of the power module.

Features of IPEMs

The basic structure of an IPEM contains power semiconductor devices, control/drive/protection electronics, and passive components. The power devices together with their drive and protection circuits are called the active IPEM; the remaining part is called the passive IPEM. The drive and protection circuits are realized as a hybrid integrated circuit and packaged together with the power devices. Passive components include inductors, capacitors, transformers, etc.

The commonly used power switching devices are MOSFETs and IGBTs [3], mainly because of their high-frequency operation and low on-state losses. Another advantage is their inherently vertical structure, in which the metallization electrode pads are on two sides. Usually the gate and source pads are on the top surface, with non-solderable thin-film aluminum contacts. The drain metallization, of Ag or Au, is deposited on the bottom of the chip and is solderable. This vertical structure of the power chips makes it possible to build sandwich-type 3-D integrated constructions.

Friday, January 22, 2010

DSP Processor (electronics seminar topics)

Definition
The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter.

For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum over all of the taps. Hence the main component of the FIR filter is a dot product: multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations performed in signal processing. Convolution, IIR filtering, and the Fourier transform also make heavy use of the multiply-accumulate operation. Originally, microprocessors implemented multiplication as a series of shift and add operations, each of which consumed one or more clock cycles. A DSP processor therefore first requires hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit.
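As a concrete illustration, here is a minimal C sketch of the FIR dot product that a single-cycle MAC unit accelerates; the function and variable names are illustrative and not taken from any particular DSP library.

    #include <stddef.h>

    /* Minimal FIR kernel: each output sample is the dot product of the most
     * recent input samples and the filter coefficients. On a DSP, this
     * multiply-accumulate loop maps directly onto the hardware MAC unit. */
    float fir_filter(const float *samples, const float *coeffs, size_t num_taps)
    {
        float acc = 0.0f;                   /* running sum (the accumulator) */
        for (size_t i = 0; i < num_taps; i++) {
            acc += samples[i] * coeffs[i];  /* one multiply-accumulate per tap */
        }
        return acc;
    }

On a conventional processor each pass through this loop costs several instructions; a DSP aims to retire one complete multiply-accumulate per clock cycle.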

In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute DSP algorithms in real time on lengthy segments of data. Parallel operation of several independent execution units is therefore a must: for example, in addition to the MAC unit, an ALU and a shifter are also required.
Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth, higher than that of general-purpose microprocessors, which had a single bus connection to memory and could make only one access per cycle. The most common approach was to use two or more separate banks of memory, each of which is accessed by its own bus and can be written or read in a single cycle: programs are stored in one memory and data in another. With this arrangement, the processor can fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches and thus enabling the processor to execute a MAC in a single cycle.

High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address calculation units operate in parallel with the DSP processor's main execution units, enabling it to access data at a new location in memory without pausing to calculate the new address.

Memory accesses in DSP algorithms tend to exhibit very predictable patterns. For example, for each sample in an FIR filter, the filter coefficients are accessed sequentially from start to finish, and the access then starts over from the beginning of the coefficient vector when the next input sample is processed. This is in contrast to other computing tasks, such as database processing, where accesses to memory are far less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to access data efficiently in the patterns commonly found in DSP algorithms. The most common of these modes is register-indirect addressing with post-increment, which automatically increments the address pointer in algorithms where repetitive computations are performed on a series of data stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.
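In C, the same access pattern is expressed with pointer post-increment; the sketch below only mimics in software what the address generation unit does in hardware, and the names are illustrative.

    /* Pointer post-increment as a software analogue of register-indirect
     * addressing with post-increment: the pointer plays the role of the
     * address register and is advanced automatically after each access. */
    float fir_tap_loop(const float *sample_ptr, const float *coeff_ptr, int num_taps)
    {
        float acc = 0.0f;
        while (num_taps-- > 0) {
            /* fetch both operands, then advance both address pointers */
            acc += (*sample_ptr++) * (*coeff_ptr++);
        }
        return acc;
    }

On a DSP, the two pointer updates happen in the address generation units in parallel with the multiply-accumulate, so they cost no extra cycles.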

Embedded Systems and Information Appliances (electronics seminar topics)

Definition
An embedded system is a combination of computer hardware, software, and perhaps additional mechanical parts, designed to perform a specific function.

Embedded systems are usually programmed in a high-level language that is compiled (and/or assembled) into executable ("machine") code. This code is loaded into read-only memory (ROM) and is called "firmware", "microcode", or a "microkernel". The microprocessor is typically 8-bit or 16-bit; the bit size refers to the amount of memory the processor can access. There is usually no operating system and perhaps 0.5 KB of RAM, and the functions implemented normally have no priorities. As the need for features increases, and as the need to establish priorities arises, it becomes more important to have some sort of decision-making mechanism as part of the embedded system. The most advanced systems actually have a tiny, streamlined operating system running the show, executing on a 32-bit or 64-bit processor. Such an operating system is called an RTOS (real-time operating system).
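Where there is no operating system, the firmware is typically organized as a single endless loop; the sketch below shows that structure with placeholder task bodies, purely for illustration.

    /* Classic "super loop" firmware structure for a system with no OS:
     * tasks run in a fixed order, with no priorities and no pre-emption.
     * The task bodies here are empty placeholders. */
    static void read_sensors(void)   { /* poll inputs (placeholder) */ }
    static void update_outputs(void) { /* drive actuators (placeholder) */ }
    static void service_comms(void)  { /* handle serial traffic (placeholder) */ }

    int main(void)
    {
        for (;;) {                  /* firmware never exits */
            read_sensors();
            update_outputs();
            service_comms();
        }
    }

An RTOS replaces this fixed ordering with a scheduler, so that higher-priority functions can run first when priorities matter.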

Embedded Hardware
Every embedded system has a microprocessor or microcontroller for processing information and executing programs, memory in the form of ROM/RAM for storing embedded software and data, and I/O interfaces for connecting to the outside world. Any additional hardware in an embedded system depends on the equipment it is controlling. Very often these systems have a standard serial port, a network interface, an I/O interface, or hardware to interact with sensors and actuators on the equipment.

Embedded Software
C has become the language of choice for embedded programmers because it offers processor independence, which allows the programmer to concentrate on algorithms and applications rather than on the details of the processor architecture. Many of its advantages apply equally to other high-level languages. Perhaps the greatest strength of C, however, is that it gives embedded programmers an extraordinary degree of direct hardware control without sacrificing the benefits of a high-level language. C compilers and cross-compilers are also available for almost every processor.
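To show what "direct hardware control" looks like in practice, here is a sketch of a memory-mapped register access; the register addresses and bit mask are invented for illustration and would in reality come from the target microcontroller's datasheet.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO registers -- real addresses and bit
     * layouts are defined by the specific microcontroller being used. */
    #define GPIO_DIR_REG   (*(volatile uint32_t *)0x40020000u)
    #define GPIO_DATA_REG  (*(volatile uint32_t *)0x40020004u)
    #define LED_PIN_MASK   (1u << 5)

    void led_init(void)
    {
        GPIO_DIR_REG |= LED_PIN_MASK;    /* configure the pin as an output */
    }

    void led_toggle(void)
    {
        GPIO_DATA_REG ^= LED_PIN_MASK;   /* flip the output level */
    }

The volatile qualifier tells the compiler that the hardware, not the program alone, may change these locations, so the accesses must not be optimized away.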

Any source code written in C, C++, or assembly language must be converted into an executable image that can be loaded onto a ROM chip. The process of converting the source-code representation of your embedded software into an executable image involves three distinct steps, and the system or computer on which these processes are executed is called the host computer. First, each of the source files that make up the embedded application must be compiled or assembled into a distinct object file. Second, all of the object files that result from the first step must be linked together into a single object file, called the relocatable program. Third, physical memory addresses must be assigned to the relative offsets within the relocatable program, producing the final executable image.

4G Wireless Systems

Definition
A fourth-generation (4G) wireless system is a packet-switched wireless system with wide-area coverage and high throughput. It is designed to be cost-effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra-Wideband (UWB) radio, and millimeter-wave wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved through long-term channel prediction in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and 4G provides worldwide roaming, with the ability to access a cell anywhere.

Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first-generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second-generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in-between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) on top of GSM. 3G systems, which made their appearance in late 2002 and 2003, are designed for voice and paging services as well as interactive media such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle-mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra-Wideband (UWB) radio, millimeter-wave wireless, and smart antennas. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The frequency band is 2-8 GHz, and 4G provides worldwide roaming, with the ability to access a cell anywhere.

Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless networking across multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development.

Mesh Topology (electronics seminar topics)

According to a San Francisco-based market research and consulting firm, Internet traffic will have reached 350,000 terabytes per month as we pass into the new millennium. This is a significant milestone, as it indicates that data traffic has already surpassed voice traffic. To keep pace with seemingly insatiable demand for higher-speed access, a huge, complex network-building process is beginning. Decisions made by network architects today will have an immense impact on the future profitability, flexibility, and competitiveness of network operators. Despite the dominance of the synchronous optical network (SONET), a transport technology based on time-division multiplexing (TDM), more and more operators are considering adopting a point-to-point strategy and an eventual mesh topology. This article highlights the key advantages of this new approach.

With such strong demand for wideband access (1.5 million households already have cable or digital subscriber line (DSL) modems capable of operating at 1 Mbps) there is no doubt that the future for service providers is extremely bright. However, there are a number of more immediate challenges that must be addressed. At the top of the list is the fact that network investments must be made before revenues are realized. As a result, there is a need for less complex and more efficient network builds. In an effort to cut network costs, action is being taken on several fronts: consolidating network elements, boosting reliability, reducing component and system costs, and slashing operational costs. As far as optical networks are concerned, the action likely to make the most positive impact is the development of new network architectures, such as point-to-point/mesh designs. Ring architectures will still be supported, but new Internet Protocol (IP) and Asynchronous Transfer Mode (ATM) networks will find that mesh, with its well-defined optical nodes, lends itself to robust optical rerouting schemes.

POINT-TO-POINT OR MESH TOPOLOGIES IN THE METRO OPTICAL NETWORK

Definition
In a point-to-point topology, one node connects directly to another node. A mesh is a network architecture that improves on point-to-point topology by providing each node with a dedicated connection to every other node.
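As a quick illustration of what "a dedicated connection to every other node" costs, a full mesh of n nodes needs n(n-1)/2 links. The small C sketch below simply computes that count and is not tied to any particular network.

    #include <stdio.h>

    /* A full mesh of n nodes needs a dedicated link between every pair of
     * nodes, i.e. n * (n - 1) / 2 links in total. */
    static unsigned mesh_links(unsigned n)
    {
        return n * (n - 1) / 2;
    }

    int main(void)
    {
        for (unsigned n = 3; n <= 6; n++) {
            printf("%u nodes: %u mesh links\n", n, mesh_links(n));
        }
        return 0;
    }

The link count grows roughly with the square of the node count; that redundancy is what makes the robust optical rerouting schemes mentioned above possible.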

This article highlights the key advantages of adopting a point-to-point strategy and eventual mesh topology, a new approach in transport technology.

Topology

Topology is the method of arranging the various devices in a network. Depending on the way in which the devices are interlinked, topologies are classified as:

" Star
" Ring
" Bus
" Tree
" Mesh

Of the above-mentioned types, the most popular and advantageous is the mesh topology.

FPGA (electronics seminar topic)

A quiet revolution is taking place. Over the past few years, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 500,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design.

As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators have long ago taught us all that that kind of reasoning is foolish. There are even now products that allow designers to create their hardware designs in traditional programming languages like C. So language differences alone are not enough of a distinction.

Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software?

Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic.

TYPES OF PROGRAMMABLE LOGIC

Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size there is also much variation in architecture. In this section, I'll introduce you to the most common types of programmable logic and highlight the most important features of each type.

PLDs

At the low end of the spectrum are the original Programmable Logic Devices (PLDs). These were the first chips that could be used to implement a flexible digital logic design in hardware. In other words, you could remove a couple of the 7400-series TTL parts (ANDs, ORs, and NOTs) from your board and replace them with a single PLD. Other names you might encounter for this class of device are Programmable Logic Array (PLA), Programmable Array Logic (PAL), and Generic Array Logic (GAL).
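Internally, each output of such a device implements a small sum-of-products Boolean equation. The C sketch below shows the kind of equation involved, written with bitwise operators purely for illustration; the specific function is made up and does not come from any real device.

    #include <stdint.h>

    /* Illustrative sum-of-products equation of the kind a single PLD output
     * implements in place of a few discrete AND/OR/NOT gates:
     *     Y = (A AND B) OR ((NOT C) AND D)
     * Each input is treated as a single bit (0 or 1). */
    static uint8_t pld_output(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
    {
        return (uint8_t)((a & b) | (((~c) & 1u) & d));
    }

In the actual chip, the product terms come from a programmable AND array and are then ORed together, which is what makes the logic "programmable".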

Wednesday, January 20, 2010

HVAC (seminar topics for EC students)

Definition

Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals indoors, including cordless, wireless, and cellular telephone signals, paging signals, two-way radio signals, video-conferencing signals, and LAN signals.

Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers, and antennas placed throughout the interior of a building. These devices must be located so that signals are not lost and signal strength is not excessively attenuated, and any change in the existing building layout can also affect the wireless transmission. Another challenge in installing wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials inside the buildings.

In general, the attenuation in buildings is larger than in free space, requiring more cells and higher power to obtain wide coverage. Despite all this, the placement of transmitters, receivers, and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the building.

This paper suggests an alternative method of distributing electromagnetic signals in buildings, based on the recognition that every building is already equipped with an RF waveguide distribution system: the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure, and it should be significantly less expensive than other approaches, since existing infrastructure is used and the RF is distributed more efficiently.
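Whether a given RF signal can actually travel down a duct is governed by the waveguide cutoff frequency; for the dominant TE10 mode of a rectangular duct of width a, fc = c / (2a). The C sketch below evaluates this standard formula, with the duct width chosen purely for illustration.

    #include <stdio.h>

    #define SPEED_OF_LIGHT 3.0e8   /* m/s, free-space value */

    /* Dominant-mode (TE10) cutoff frequency of a rectangular waveguide:
     * fc = c / (2 * a), where a is the wider cross-section dimension in
     * metres. Signals below fc do not propagate along the duct. */
    static double cutoff_frequency_hz(double width_m)
    {
        return SPEED_OF_LIGHT / (2.0 * width_m);
    }

    int main(void)
    {
        double duct_width_m = 0.30;   /* illustrative 30 cm duct width */
        printf("Cutoff frequency: %.0f MHz\n",
               cutoff_frequency_hz(duct_width_m) / 1.0e6);
        return 0;
    }

For a 30 cm wide duct the cutoff falls near 500 MHz, so cellular, paging, and wireless LAN signals in the high hundreds of MHz and above can propagate through it.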