EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by Sven Goran Eriksson

Friday, January 22, 2010

DSP Processor (electronics seminar topics)

Definition
The best way to understand the requirements is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter.

For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum over all of the taps. Hence the main component of the FIR filter is a dot product: multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations performed in signal processing, and convolution, IIR filtering, and the Fourier transform also involve heavy use of the multiply-accumulate operation. Originally, microprocessors implemented multiplication as a series of shift and add operations, each of which consumed one or more clock cycles. A DSP processor therefore first of all requires hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit.
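To make the dot-product structure concrete, here is a minimal C sketch of one FIR output computation. The array names, the tap count, and the use of float are illustrative assumptions, not taken from any particular DSP library.

#include <stddef.h>

#define NUM_TAPS 16                     /* illustrative filter length */

/* Compute one FIR output sample: acc = sum over k of coeff[k] * delay[k].
 * Each loop iteration is one tap: a multiply followed by an add into the
 * running sum, i.e. exactly the multiply-accumulate (MAC) operation that
 * a DSP processor performs in a single cycle. */
float fir_sample(const float coeff[NUM_TAPS], const float delay[NUM_TAPS])
{
    float acc = 0.0f;                   /* the running sum (accumulator) */
    for (size_t k = 0; k < NUM_TAPS; ++k) {
        acc += coeff[k] * delay[k];     /* one MAC per tap */
    }
    return acc;
}

On a general-purpose processor each pass through this loop costs several instructions; on a DSP with a single-cycle MAC unit the loop body can collapse to one instruction per tap.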

In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they often must execute their algorithms in real time on lengthy segments of data. Parallel operation of several independent execution units is therefore a must; for example, in addition to the MAC unit, an ALU and a shifter are also required.
Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth, higher than that of the general-purpose microprocessors of the time, which had a single bus connection to memory and could make only one access per cycle. The most common approach was to use two or more separate banks of memory, each of which was accessed by its own bus and could be written or read in a single cycle: programs are stored in one memory and data in another. With this arrangement, the processor can fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that is used as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches and thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address generation units operate in parallel with the DSP processor's main execution units, enabling it to access data at a new location in memory without pausing to calculate the new address.

Memory accesses in DSP algorithms tend to exhibit very predictable patterns: for example, for each sample in an FIR filter, the filter coefficients are accessed sequentially from start to finish, and the access then starts over from the beginning of the coefficient vector when the next input sample is processed. This is in contrast to other computing tasks, such as database processing, where accesses to memory are far less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to efficiently access data in the patterns commonly found in DSP algorithms. The most common of these modes is register-indirect addressing with post-increment, which automatically increments the address pointer in algorithms where repetitive computations are performed on a series of data values stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.
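In C this access pattern shows up as pointer post-increment, which a DSP compiler can map directly onto register-indirect addressing with post-increment. The fragment below is a sketch of the same FIR inner loop written that way; the variable names are illustrative.

/* FIR inner loop using pointer post-increment. On a DSP, *cp++ and *dp++
 * map onto register-indirect addressing with post-increment: the address
 * generation units update the pointers in parallel with the MAC, so no
 * separate increment instructions are needed. */
float fir_sample_ptr(const float *coeff, const float *delay, int taps)
{
    const float *cp = coeff;    /* walks the coefficient vector */
    const float *dp = delay;    /* walks the sample history */
    float acc = 0.0f;

    while (taps-- > 0) {
        acc += *cp++ * *dp++;   /* MAC plus two post-incremented fetches */
    }
    return acc;
}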

Embedded Systems and Information Appliances (electronics seminar topics)

Definition
An embedded system is a combination of computer hardware, software and, perhaps, additional mechanical parts, designed to perform a specific function.

Embedded systems are usually programmed in a high-level language that is compiled (and/or assembled) into executable ("machine") code. This code is loaded into read-only memory (ROM) and called "firmware", "microcode" or a "microkernel". The microprocessor is typically 8-bit or 16-bit; the bit size refers to the amount of memory the processor can access. There is usually no operating system and perhaps only 0.5 KB of RAM, and the functions implemented normally have no priorities. As the need for features increases and/or the need to establish priorities arises, it becomes more important to have some sort of decision-making mechanism as part of the embedded system. The most advanced systems actually have a tiny, streamlined operating system running the show, executing on a 32-bit or 64-bit processor; such an operating system is called a real-time operating system (RTOS).
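A minimal sketch of such a system is the classic "super loop": initialization followed by an endless loop that services each function in turn, with no operating system and no priorities. The function names below (system_init, read_sensor, update_output) are placeholders invented for illustration, not a real API.

/* Hypothetical bare-metal firmware skeleton: no OS, no priorities,
 * just an endless loop that services each task in turn. The hardware
 * access is stubbed out for illustration. */
static void system_init(void)    { /* configure clocks, pins, peripherals */ }
static int  read_sensor(void)    { return 0; /* poll an input device */ }
static void update_output(int v) { (void)v;  /* drive an actuator */ }

int main(void)
{
    system_init();

    for (;;) {                   /* the "super loop" never exits */
        int value = read_sensor();
        update_output(value);
        /* further functions are simply called here in turn, once per
         * pass, with no priorities between them */
    }
}

Once features multiply and some tasks must pre-empt others, this structure is what an RTOS scheduler replaces.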

Embedded Hardware
Every embedded system has a microprocessor or microcontroller for processing information and executing programs, memory in the form of ROM/RAM for storing the embedded software and data, and I/O interfaces for connecting to the outside world. Any additional hardware in an embedded system depends on the equipment it is controlling. Very often these systems have a standard serial port, a network interface, an I/O interface, or hardware to interact with sensors and actuators on the equipment.

Embedded Software
C has become the language of choice for embedded programmers because it offers processor independence, which allows the programmer to concentrate on algorithms and applications rather than on the details of a particular processor architecture. However, many of its advantages apply equally to other high-level languages. Perhaps the greatest strength of C is that it gives embedded programmers an extraordinary degree of direct hardware control without sacrificing the benefits of a high-level language. C compilers and cross compilers are also available for almost every processor.
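One concrete form that hardware control takes is writing directly to memory-mapped peripheral registers through volatile pointers. The register address and bit layout below are invented for illustration; on a real part they would come from the chip's datasheet.

#include <stdint.h>

/* Hypothetical memory-mapped control register of some peripheral. */
#define CTRL_REG    (*(volatile uint32_t *)0x40021000u)
#define ENABLE_BIT  (1u << 0)   /* turn the peripheral on */
#define READY_BIT   (1u << 1)   /* set by hardware when it is ready */

void peripheral_enable(void)
{
    CTRL_REG |= ENABLE_BIT;             /* read-modify-write the register */
    while ((CTRL_REG & READY_BIT) == 0) /* busy-wait until hardware is ready */
        ;
}

The volatile qualifier tells the compiler that every access really must reach the hardware, which is exactly the kind of low-level control the paragraph above refers to.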

Any source code written in C, C++, or assembly language must be converted into an executable image that can be loaded onto a ROM chip. The process of converting the source code representation of your embedded software into an executable image involves three distinct steps, and the system or computer on which these processes are executed is called the host computer. First, each of the source files that make up the embedded application must be compiled or assembled into a distinct object file. Second, all of the object files that result from the first step must be linked into a single object file called the relocatable program. Third, physical memory addresses must be assigned to the relative offsets within the relocatable program, producing the final executable image that can be loaded into ROM.

4G Wireless Systems

Definition
A fourth generation (4G) wireless system is a packet-switched wireless system with wide-area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wideband (UWB) radio, and millimeter-wave wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system gives the ability to roam worldwide and access a cell anywhere.

Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and were used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in-between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, which made their appearance in late 2002 and in 2003, are designed for voice and paging services as well as interactive media such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicular mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wideband (UWB) radio, millimeter-wave wireless, and smart antennas. A data rate of 20 Mbps is employed, mobile speeds of up to 200 km/h are supported, and the frequency band is 2-8 GHz, giving the ability to roam worldwide and access a cell anywhere.

Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi-hop networks (the strict delay requirements of voice make multi-hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development.

Mesh Topology (electronics seminar topics)

According to the San Francisco-based market research and consulting firm, Internet traffic will have reached 350,000 terabytes per month as we pass into the new millennium. This is a significant milestone, as it indicates that data traffic has already surpassed voice traffic on the network. To keep pace with the seemingly insatiable demand for higher-speed access, a huge, complex network-building process is beginning. Decisions made by network architects today will have an immense impact on the future profitability, flexibility, and competitiveness of network operators. Despite the dominance of the synchronous optical network (SONET), a transport technology based on time division multiplexing (TDM), more and more operators are considering adopting a point-to-point strategy and an eventual mesh topology. This article highlights the key advantages of this new approach.

With such strong demand for wideband access - 1.5 million households already have cable or digital subscriber line (DSL) modems capable of operating at 1 Mbps - there is no doubt that the future for service providers is extremely bright. However, there are a number of more immediate challenges that must be addressed. At the top of the list is the fact that network investments must be made before revenues are realized. As a result, there is a need for less complex and more efficient network builds. In an effort to cut network costs, action is being taken on several fronts: consolidating network elements, boosting reliability, reducing component and system costs, and slashing operational costs. As far as optical networks are concerned, the action likely to make the most positive impact is the development of new network architectures, such as point-to-point/mesh designs. Ring architectures will still be supported, but new Internet protocol (IP) and asynchronous transfer mode (ATM) networks will find that mesh, with its well-defined optical nodes, lends itself to robust optical rerouting schemes.

2. POINT-TO-POINT OR MESH TOPOLOGIES IN THE METRO OPTICAL NETWORK

Definition
In a point-to-point topology, one node connects directly to another node. Mesh is a network architecture that improves on point-to-point topology by providing each node with a dedicated connection to every other node.
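One way to see what that dedication costs is to count links: a full mesh of n nodes needs n(n-1)/2 connections, whereas a simple chain of point-to-point links needs only n-1. The helper below just evaluates that formula; it is an illustration, not part of any networking standard.

#include <stdio.h>

/* Number of dedicated links in a full mesh of n nodes: n(n-1)/2. */
static unsigned mesh_links(unsigned n)
{
    return n * (n - 1) / 2;
}

int main(void)
{
    for (unsigned n = 2; n <= 8; ++n) {
        printf("%u nodes: %2u mesh links\n", n, mesh_links(n));
    }
    return 0;
}

For eight nodes that is already 28 mesh links, versus 7 for a chain of point-to-point connections.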

This article highlights the key advantages of adopting a point-to-point strategy and an eventual mesh topology, a new approach in transport technology.

Topology

It is the method of arranging the various devices in a network. Depending on the way in which the devices are interlinked with each other, topologies are classified into:

" Star
" Ring
" Bus
" Tree
" Mesh

Of the above-mentioned types, the most popular and advantageous is the mesh topology.

FPGA (electronics seminar topic)

A quiet revolution is taking place. Over the past few years, the density of the average programmable logic device has begun to skyrocket. The maximum number of gates in an FPGA is currently around 500,000 and doubling every 18 months. Meanwhile, the price of these chips is dropping. What all of this means is that the price of an individual NAND or NOR gate is rapidly approaching zero! And the designers of embedded systems are taking note. Some system designers are buying processor cores and incorporating them into system-on-a-chip designs; others are eliminating the processor and software altogether, choosing an alternative hardware-only design.

As this trend continues, it becomes more difficult to separate hardware from software. After all, both hardware and software designers are now describing logic in high-level terms, albeit in different languages, and downloading the compiled result to a piece of silicon. Surely no one would claim that language choice alone marks a real distinction between the two fields. Turing's notion of machine-level equivalence and the existence of language-to-language translators have long ago taught us all that that kind of reasoning is foolish. There are even now products that allow designers to create their hardware designs in traditional programming languages like C. So language differences alone are not enough of a distinction.

Both hardware and software designs are compiled from a human-readable form into a machine-readable one. And both designs are ultimately loaded into some piece of silicon. Does it matter that one chip is a memory device and the other a piece of programmable logic? If not, how else can we distinguish hardware from software?

Regardless of where the line is drawn, there will continue to be engineers like you and me who cross the boundary in our work. So rather than try to nail down a precise boundary between hardware and software design, we must assume that there will be overlap in the two fields. And we must all learn about new things. Hardware designers must learn how to write better programs, and software developers must learn how to utilize programmable logic.

TYPES OF PROGRAMMABLE LOGIC

Many types of programmable logic are available. The current range of offerings includes everything from small devices capable of implementing only a handful of logic equations to huge FPGAs that can hold an entire processor core (plus peripherals!). In addition to this incredible difference in size there is also much variation in architecture. In this section, I'll introduce you to the most common types of programmable logic and highlight the most important features of each type.

PLDs

At the low end of the spectrum are the original Programmable Logic Devices (PLDs). These were the first chips that could be used to implement a flexible digital logic design in hardware. In other words, you could remove a couple of the 7400-series TTL parts (ANDs, ORs, and NOTs) from your board and replace them with a single PLD. Other names you might encounter for this class of device are Programmable Logic Array (PLA), Programmable Array Logic (PAL), and Generic Array Logic (GAL).
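To make the idea concrete, a small PLD essentially realizes sum-of-products Boolean equations of the kind you would otherwise wire up from discrete AND, OR, and NOT gates. The C model below is only an illustration of such an equation; on a real device the equation is programmed into the chip's AND/OR arrays rather than executed as software.

#include <stdbool.h>
#include <stdio.h>

/* A sum-of-products equation of the sort a small PLD replaces:
 *   out = (a AND b) OR ((NOT c) AND d)
 * Each product term maps onto the device's AND array, and the OR of
 * the product terms onto its OR array. */
static bool logic_out(bool a, bool b, bool c, bool d)
{
    return (a && b) || (!c && d);
}

int main(void)
{
    printf("a=1 b=1 c=0 d=0 -> %d\n", logic_out(true, true, false, false));
    printf("a=0 b=0 c=1 d=1 -> %d\n", logic_out(false, false, true, true));
    return 0;
}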