EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Wednesday, January 20, 2010

HVAC (seminar topics for EC students)

Definition

Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals such as cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals indoors.

Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. Devices must be located so that signals are not lost and signal strength is not excessively attenuated. A change in the existing architecture also affects the wireless transmission. Another challenge in installing wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials in the buildings.

In general, the attenuation in buildings is larger than that in free space, requiring more cells and higher power to obtain wider coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the buildings.

This paper suggests an alternative method of distributing electromagnetic signals in buildings, based on the recognition that every building is already equipped with an RF waveguide distribution system: the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure and should be significantly less expensive than other approaches, since existing infrastructure is used and RF is distributed more efficiently.
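Since a metal duct is effectively a rectangular waveguide, only signals above the duct's cutoff frequency propagate along it, which is what makes the approach amenable to systematic design. A minimal sketch of the cutoff calculation, assuming an illustrative 30 cm x 15 cm duct cross-section (not a figure from the paper):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def cutoff_frequency(a: float, b: float, m: int = 1, n: int = 0) -> float:
    """Cutoff frequency (Hz) of the TE(m,n) mode of an air-filled
    rectangular waveguide with cross-section a x b (metres)."""
    return (C / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# Assumed duct cross-section: 30 cm x 15 cm.
a, b = 0.30, 0.15
fc = cutoff_frequency(a, b)                # dominant TE10 mode
print(f"TE10 cutoff: {fc / 1e6:.0f} MHz")  # ~500 MHz

# Signals above cutoff (e.g. 900 MHz or 1.9 GHz cellular bands)
# propagate along the duct; signals below cutoff decay rapidly.
```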

Smart Card (electronics seminar topics)

Definition

This seminar presents some basic concepts about smart cards. The physical and logical structure of the smart card and the corresponding security access control are discussed. It is believed that smart cards offer more security and confidentiality than other kinds of information or transaction storage. Moreover, applications built with smart card technologies are illustrated, demonstrating that the smart card is one of the best solutions for providing and enhancing system security and integrity.

The seminar also covers the contactless type of smart card briefly. Different kinds of schemes for organising and accessing multiple-application smart cards are discussed. The first and second schemes are practical and workable today, and real applications have been developed using those models. For the third one, multiple independent applications in a single card, there is still a long way to go before it becomes feasible, for several reasons.

At the end of the paper, an overview of attack techniques on the smart card is given as well. The existence of these attacks does not mean that the smart card is insecure. It is important to realise that attacks against any secure system are nothing new or unique. Any claim that a system or technology is 100% secure is irresponsible. The main consideration in determining whether a system is secure is whether its level of security meets the requirements of the system.

The smart card is one of the latest additions to the world of information technology. Similar in size to today's plastic payment card, the smart card has a microprocessor or memory chip embedded in it that, when coupled with a reader, has the processing power to serve many different applications. As an access-control device, smart cards make personal and business data available only to the appropriate users. Another application provides users with the ability to make a purchase or exchange value. Smart cards provide data portability, security and convenience. Smart cards come in two varieties: memory and microprocessor.

Memory cards simply store data and can be viewed as a small floppy disk with optional security. A microprocessor card, on the other hand, can add, delete and manipulate information in its memory on the card. Similar to a miniature computer, a microprocessor card has an input/output port, an operating system and a hard disk with built-in security features. On a fundamental level, microprocessor cards are similar to desktop computers: they have operating systems, they store data and applications, they compute and process information, and they can be protected with sophisticated security tools. The self-containment of the smart card makes it resistant to attack, as it does not need to depend upon potentially vulnerable external resources. Because of this characteristic, smart cards are often used in applications which require strong security protection and authentication.
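Host applications exchange data with a microprocessor card through short command/response packets called APDUs, defined by ISO 7816-4. The sketch below assembles a SELECT-by-name command; the helper function and the application identifier are illustrative assumptions, not part of the seminar text:

```python
# Build an ISO 7816-4 command APDU: CLA INS P1 P2 [Lc data] [Le].
def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data  # Lc field + command data
    if le is not None:
        apdu += bytes([le])                # Le: expected response length
    return apdu

# SELECT an application by name (INS 0xA4, P1 0x04) using an example AID.
aid = bytes.fromhex("A0000000031010")
select = build_apdu(0x00, 0xA4, 0x04, 0x00, aid, le=0x00)
print(select.hex())  # 00a4040007a000000003101000
```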


EDGE (electronics seminar topic)

Introduction
EDGE is the next step in the evolution of GSM and IS-136. The objective of the new technology is to increase data transmission rates and spectrum efficiency and to facilitate new applications and increased capacity for mobile use. With the introduction of EDGE in GSM phase 2+, existing services such as GPRS and high-speed circuit-switched data (HSCSD) are enhanced by offering a new physical layer. The services themselves are not modified. EDGE is introduced within existing specifications and descriptions rather than by creating new ones. This paper focuses on the packet-switched enhancement for GPRS, called EGPRS. GPRS allows data rates of 115 kbps and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps.

A new modulation technique and error-tolerant transmission methods, combined with improved link adaptation mechanisms, make these EGPRS rates possible. This is the key to increased spectrum efficiency and enhanced applications, such as wireless Internet access, e-mail and file transfers.

GPRS/EGPRS will be one of the pacesetters in the overall wireless technology evolution, in conjunction with WCDMA. Higher transmission rates for specific radio resources enhance capacity by enabling more traffic for both circuit- and packet-switched services. As the Third Generation Partnership Project (3GPP) continues standardization toward the GSM/EDGE radio access network (GERAN), GERAN will be able to offer the same services as WCDMA by connecting to the same core network. This is done in parallel with means to increase the spectral efficiency. The goal is to boost system capacity, both for real-time and best-effort services, and to compete effectively with other third-generation radio access networks such as WCDMA and cdma2000.

Technical differences between GPRS and EGPRS

Introduction
Regarded as a subsystem within the GSM standard, GPRS has introduced packet-switched data into GSM networks. Many new protocols and new nodes have been introduced to make this possible. EDGE is a method to increase the data rates on the radio link for GSM. Basically, EDGE only introduces a new modulation technique and new channel coding that can be used to transmit both packet-switched and circuit-switched voice and data services. EDGE is therefore an add-on to GPRS and cannot work alone. GPRS has a greater impact on the GSM system than EDGE has. By adding the new modulation and coding to GPRS and by making adjustments to the radio link protocols, EGPRS offers significantly higher throughput and capacity.

GPRS and EGPRS have different protocols and different behavior on the base station system side. However, on the core network side, GPRS and EGPRS share the same packet-handling protocols and, therefore, behave in the same way. Reuse of the existing GPRS core infrastructure (serving GPRS support node/gateway GPRS support node) emphasizes the fact that EGPRS is only an "add-on" to the base station system and is therefore much easier to introduce than GPRS. In addition to enhancing the throughput for each data user, EDGE also increases capacity. With EDGE, the same time slot can support more users. This decreases the number of radio resources required to support the same traffic, thus freeing up capacity for more data or voice services. EDGE makes it easier for circuit-switched and packet-switched traffic to coexist, while making more efficient use of the same radio resources. Thus in tightly planned networks with limited spectrum, EDGE may also be seen as a capacity booster for the data traffic.

EDGE technology
EDGE leverages the knowledge gained through use of the existing GPRS standard to deliver significant technical improvements. Figure 2 compares the basic technical data of GPRS and EDGE. Although GPRS and EDGE share the same symbol rate, the modulation bit rate differs. EDGE can transmit three times as many bits as GPRS during the same period of time. This is the main reason for the higher EDGE bit rates. The differences between the radio and user data rates are the result of whether or not the packet headers are taken into consideration. These different ways of calculating throughput often cause misunderstanding within the industry about actual throughput figures for GPRS and EGPRS. The data rate of 384 kbps is often used in relation to EDGE. The International Telecommunication Union (ITU) has defined 384 kbps as the data rate limit required for a service to fulfill the International Mobile Telecommunications-2000 (IMT-2000) standard in a pedestrian environment. This 384 kbps data rate corresponds to 48 kbps per time slot, assuming an eight-time-slot terminal.
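The arithmetic behind these figures can be checked directly. A back-of-the-envelope sketch, assuming the standard GSM symbol rate of 270.833 ksymbols/s and the usual per-slot user rates (48 kbps for the IMT-2000 figure, 59.2 kbps for the highest EGPRS coding scheme):

```python
# GPRS (GMSK, 1 bit/symbol) vs EGPRS (8-PSK, 3 bits/symbol).
SYMBOL_RATE = 270_833            # GSM symbol rate, symbols/s

print(SYMBOL_RATE * 1)           # ~270.8 kbps gross modulation rate (GPRS)
print(SYMBOL_RATE * 3)           # ~812.5 kbps gross modulation rate (EGPRS)

# User data rates after channel coding, per time slot, for an
# eight-time-slot terminal:
slots = 8
print(48.0 * slots)              # 384.0 kbps: the IMT-2000 pedestrian target
print(59.2 * slots)              # 473.6 kbps: theoretical EGPRS maximum
```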

Optic Fibre Cable (electronics seminar topics)

Optical fiber (or "fiber optic") refers to the medium and the technology associated with the transmission of information as light pulses along a glass or plastic wire or fiber. Optical fiber carries much more information than conventional copper wire and is in general not subject to electromagnetic interference or the need to retransmit signals. Most telephone company long-distance lines are now of optical fiber.

Transmission on optical fiber wire requires repeaters at distance intervals. The glass fiber requires more protection within an outer cable than copper. For these reasons and because the installation of any new wiring is labor-intensive, few communities yet have optical fiber wires or cables from the phone company's branch office to local customers (known as local loops).

Optical fiber consists of a core, cladding, and a protective outer coating, which guide light along the core by total internal reflection. The core and the lower-refractive-index cladding are typically made of high-quality silica glass, though they can both be made of plastic as well. An optical fiber can break if bent too sharply. Because of the microscopic precision required to align the fiber cores, connecting two optical fibers, whether by fusion splicing or mechanical splicing, requires special skills and interconnection technology.
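Total internal reflection only traps rays that enter within the fibre's acceptance cone, which is set by the core/cladding index contrast. A short sketch, assuming typical step-index silica values (core n = 1.48, cladding n = 1.46; these are illustrative, not from the text):

```python
import math

n_core, n_clad = 1.48, 1.46   # assumed typical step-index silica values

# Rays striking the core/cladding boundary beyond the critical angle
# are totally internally reflected and guided along the core.
theta_c = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: sine of the maximum acceptance half-angle in air.
na = math.sqrt(n_core**2 - n_clad**2)
theta_accept = math.degrees(math.asin(na))

print(f"critical angle:     {theta_c:.1f} deg")     # ~80.6 deg
print(f"numerical aperture: {na:.3f}")              # ~0.243
print(f"acceptance angle:   {theta_accept:.1f} deg")  # ~14.0 deg
```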

Metamorphic Robots (electronics seminar topics)

Robots out on the factory floor pretty much know what's coming. Constrained as they are by programming and geometry, their world is just an assembly line. But for robots operating outdoors, away from civilization, both mission and geography are unpredictable. Here, robots with the ability to change their shape could adapt to constantly varying terrain.

Metamorphic robots are designed so that they can change their external shape without human intervention. One general way to achieve such functionality is to build a robot composed of multiple, identical unit modules. If the modules are designed so that they can be assembled into rigid structures, and so that individual units within such structures can be relocated within and about the structure, then self-reconfiguration is possible.
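To make the relocation idea concrete, here is a minimal, purely hypothetical sketch that models a 1-modular robot as a set of occupied cells on a 2D lattice; a real system would also have to verify that the structure stays connected and that each move is physically realizable:

```python
# A 1-modular robot as a set of occupied lattice cells (a 4-module chain).
structure = {(0, 0), (1, 0), (2, 0), (3, 0)}

def neighbours(cell):
    x, y = cell
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def relocate(structure, module, target):
    """Move one module to a free cell that touches the rest of the structure."""
    assert module in structure and target not in structure
    assert neighbours(target) & (structure - {module}), "target must touch structure"
    return (structure - {module}) | {target}

# Roll the tail module onto the top of the chain: the shape starts to fold,
# a first step toward, say, an earthworm or rolling-track configuration.
structure = relocate(structure, (0, 0), (1, 1))
print(sorted(structure))  # [(1, 0), (1, 1), (2, 0), (3, 0)]
```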

These systems claim many desirable properties, including versatility, robustness and low cost. Each module has its own computer, a rich set of sensors, actuators and communication networks. However, practical application outside of research has yet to be seen. One outstanding issue for such systems is the increasing complexity of effectively programming a large distributed system, with hundreds or even thousands of nodes in changing configurations. PolyBot, now in its third generation at the Xerox Palo Alto Research Center, and the CONRO robot built at the Information Sciences Institute at the University of Southern California are examples of metamorphic robots.

SELF-RECONFIGURATION THROUGH MODULARITY

Modularity means being composed of multiple identical units, called modules. The robot is made up of thousands of modules. The systems addressed here reconfigure automatically, and for this the hardware tends to be more homogeneous than heterogeneous. That is, the system may have different types of modules, but the ratio of the number of module types to the number of modules is very low. Systems with all of these characteristics are called n-modular, where n refers to the number of module types and n is small, typically one or two (e.g. a system with two types of modules is called 2-modular).

The general philosophy is to simplify the design and construction of components while enhancing functionality and versatility through larger numbers of modules. Thus, the low heterogeneity of the system is a design leverage point: more functionality for a given amount of design effort. The analogue in architecture is the building of a cathedral from many simple bricks of only a few types. In nature, the analogy is complex organisms like mammals, which have billions of cells but only hundreds of cell types.

Figure 3 shows an earthworm-type configuration slithering through obstacles.

MOBILE IPv6 (electronics seminar topics)

INTRODUCTION
Mobile IP is the IETF proposed standard solution for handling terminal mobility among IP subnets and was designed to allow a host to change its point of attachment transparently to an IP network. Mobile IP works at the network layer, influencing the routing of datagrams, and can easily handle mobility among different media (LAN, WLAN, dial-up links, wireless channels, etc.). Mobile IPv6 is a protocol being developed by the Mobile IP Working Group (abbreviated as MIP WG) of the IETF (Internet Engineering Task Force).

The intention of Mobile IPv6 is to provide functionality for handling terminal, or node, mobility between IPv6 subnets. Thus, the protocol was designed to allow a node to change its point of attachment to the IP network in such a way that the change does not affect the addressability and reachability of the node. Mobile IP was originally defined for IPv4, before IPv6 existed. MIPv6 is currently becoming a standard due to the inherent advantages of IPv6 over IPv4, and will therefore soon be ready for adoption in 3G mobile networks. Mobile IPv6 is a highly feasible mechanism for implementing static IPv6 addressing for mobile terminals. Mobility signaling and security features (IPsec) are integrated in the IPv6 protocol as header extensions.

LIMITATIONS OF IPv4
The current version of IP (known as version 4 or IPv4) has not changed substantially since RFC 791, which was published in 1981. IPv4 has proven to be robust, and easily implemented and interoperable. It has stood up to the test of scaling an internetwork to a global utility the size of today's Internet. This is a tribute to its initial design.

However, the initial design of IPv4 did not anticipate:
" The recent exponential growth of the Internet and the impending exhaustion of the IPv4 address space
Although the 32-bit address space of IPv4 allows for 4,294,967,296 addresses, previous and current allocation practices limit the number of public IP addresses to a few hundred million. As a result, IPv4 addresses have become relatively scarce, forcing some organizations to use a Network Address Translator (NAT) to map a single public IP address to multiple private IP addresses.
" The growth of the Internet and the ability of Internet backbone routers to maintain large routing tables
Because of the way that IPv4 network IDs have been (and are currently) allocated, there are routinely over 85,000 routes in the routing tables of Internet backbone routers today.
" The need for simpler configuration

Most current IPv4 implementations must be either manually configured or use a stateful address configuration protocol such as Dynamic Host Configuration Protocol (DHCP). With more computers and devices using IP, there is a need for a simpler and more automatic configuration of addresses and other configuration settings that do not rely on the administration of a DHCP infrastructure.

" The requirement for security at the IP level
Private communication over a public medium like the Internet requires cryptographic services that protect the data being sent from being viewed or modified in transit. Although a standard now exists for providing security for IPv4 packets (known as Internet Protocol Security, or IPSec), this standard is optional for IPv4 and proprietary security solutions are prevalent.
" The need for better support for real
-time delivery of data-also called quality of service (QoS)
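To make the address-space contrast concrete, a quick sketch using Python's standard ipaddress module (the prefixes are the standard documentation ranges, not values from this paper):

```python
import ipaddress

print(2 ** 32)    # 4,294,967,296 possible IPv4 addresses
print(2 ** 128)   # ~3.4e38 possible IPv6 addresses

# The standard library treats both families uniformly:
v4 = ipaddress.ip_network("192.0.2.0/24")     # IPv4 documentation range
v6 = ipaddress.ip_network("2001:db8::/32")    # IPv6 documentation range
print(v4.num_addresses)   # 256
print(v6.num_addresses)   # 2**96: one /32 dwarfs the whole IPv4 space
```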

HART Communication

INTRODUCTION
For many years, the field communication standard for process automation equipment has been a milliamp (mA) analog current signal. The milliamp current signal varies within a range of 4-20 mA in proportion to the process variable being represented. In typical applications a signal of 4 mA will correspond to the lower limit (0%) of the calibrated range and 20 mA will correspond to the upper limit (100%) of the calibrated range. Virtually all installed systems use this international standard for communicating process-variable information between process automation equipment.

The HART Field Communications Protocol extends this 4-20 mA standard to enhance communication with smart field instruments. The HART protocol was designed specifically for use with intelligent measurement and control instruments which traditionally communicate using 4-20 mA analog signals. HART preserves the 4-20 mA signal and enables two-way digital communication to occur without disturbing the integrity of the 4-20 mA signal. Unlike other digital communication technologies, the HART protocol maintains compatibility with existing 4-20 mA systems and, in doing so, provides users with a uniquely backward-compatible solution. The HART Communication Protocol is well established as the existing industry standard for digitally enhanced 4-20 mA field communication.

HART - AN OVERVIEW

HART is an acronym for "Highway Addressable Remote Transducer". The HART protocol makes use of the Bell 202 Frequency Shift Keying (FSK) standard to superimpose digital communication signals at a low level on top of the 4-20 mA signal. This enables two-way field communication to take place and makes it possible for additional information beyond just the normal process variable to be communicated to or from a smart field instrument. The HART protocol communicates at 1200 bps without interrupting the 4-20 mA signal and allows a host application (master) to get two or more digital updates per second from a field device. As the digital FSK signal is phase-continuous, there is no interference with the 4-20 mA signal.
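Bell 202 FSK signals a logical "1" as a 1200 Hz tone and a logical "0" as a 2200 Hz tone at 1200 bits/s, riding on the loop current at roughly ±0.5 mA. A minimal, illustrative waveform generator (the sample rate and the constant 12 mA loop level are assumptions):

```python
import math

FS = 48_000                        # assumed sample rate, Hz
BIT_RATE = 1200                    # HART/Bell 202 bit rate, bits/s
F_MARK, F_SPACE = 1200.0, 2200.0   # Hz for logical "1" and "0"
AMPLITUDE_MA = 0.5                 # FSK amplitude, ~0.5 mA

def hart_waveform(bits, loop_current_ma=12.0):
    """Phase-continuous FSK superimposed on a (here constant) loop current."""
    samples, phase = [], 0.0
    for bit in bits:
        f = F_MARK if bit else F_SPACE
        for _ in range(FS // BIT_RATE):        # 40 samples per bit
            phase += 2 * math.pi * f / FS      # phase never jumps between bits
            samples.append(loop_current_ma + AMPLITUDE_MA * math.sin(phase))
    return samples

wave = hart_waveform([1, 0, 1, 1, 0])          # 5 bits, 200 samples
```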

HART is a master/slave protocol, which means that a field (slave) device speaks only when spoken to by a master. The HART protocol can be used in various modes for communicating information to and from smart field instruments and central control or monitoring systems. HART provides for up to two masters (primary and secondary). This allows secondary masters, such as handheld communicators, to be used without interfering with communications to or from the primary master, i.e. the control/monitoring system. The most commonly employed HART communication mode is master/slave communication of digital information simultaneous with transmission of the 4-20 mA signal. The HART protocol permits all-digital communication with field devices in either point-to-point or multidrop network configurations. There is an optional "burst" communication mode in which a single slave device can continuously broadcast a standard HART reply message.

HART COMMUNICATION LAYERS

The HART protocol utilizes the OSI reference model. As is the case for most communication systems at the field level, the HART protocol implements only Layers 1, 2 and 7 of the OSI model. Layers 3 to 6 remain empty, since their services are either not required or are provided by the application layer (Layer 7).

Aluminum Electrolytic Capacitors (electronics seminar topics)

Aluminium electrolytic capacitors are widely used in the power supply circuitry of electronic equipment, as they offer several advantages over other types of capacitors. Selecting a capacitor for an application without knowing the basics may result in unreliable performance of the equipment due to capacitor problems. It may lead to customer dissatisfaction and damage the market potential or the image of a reputed company. Aluminium electrolytic capacitors are suitable when a large capacitance value is required in a very small size. The volume of an electrolytic capacitor can be more than 10 times smaller than that of a film capacitor of the same rated voltage and capacitance. The cost per microfarad is also lower than that of all other capacitors.

CONSTRUCTION

An aluminium electrolytic capacitor is composed of high-purity, thin aluminium foil (0.05 to 1 mm thick) carrying a dielectric oxide layer on its surface that prevents current flow in one direction. This acts as the anode. Between these two aluminium foils is an electrolyte-impregnated paper, which acts as the dielectric. Since the capacitance is inversely proportional to the dielectric thickness, and the dielectric thickness is proportional to the forming voltage, the relationship between capacitance and forming voltage is:

capacitance x forming voltage = constant

Aluminium tabs attached to the anode and cathode foils act as the positive and negative leads of the capacitor, respectively. The entire element is sealed into an aluminium can using rubber, bakelite or phenolic plastic. The construction of an aluminium electrolytic capacitor is as follows:

The anode (A):

The anode is formed by an aluminium foil of extreme purity. The effective surface area of the foil is greatly enlarged (by a factor of up to 200) by electrochemical etching in order to achieve the maximum possible capacitance values.

The dielectric (O):

The aluminium foil (A) is covered by a very thin oxidized layer of aluminium oxide (O = Al2O3). This oxide is obtained by means of an electrochemical process; the oxide grows at a typical rate of 1.2 nm per volt of forming voltage. The oxide withstands a high electric field strength and has a high dielectric constant. Aluminium oxide is therefore well suited as a capacitor dielectric in a polar capacitor. The Al2O3 has a high insulation resistance for voltages lower than the forming voltage. The oxide layer constitutes a nonlinear, voltage-dependent resistance: the current increases more steeply as the voltage increases.
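The parallel-plate relation shows why capacitance falls as forming voltage rises. A rough sketch, assuming a relative permittivity of about 8.5 for Al2O3 and the 1.2 nm/V growth figure above; the result is per unit of flat plate area, before etching multiplies the effective area:

```python
EPS0 = 8.854e-12   # F/m, vacuum permittivity
EPS_R = 8.5        # assumed relative permittivity of Al2O3 (typically 8-10)
K_FORM = 1.2e-9    # oxide thickness: ~1.2 nm per volt of forming voltage

def capacitance_per_area(forming_voltage):
    """C/A = eps0 * eps_r / d with d = k * V_form, so C is proportional
    to 1 / V_form: capacitance x forming voltage = constant."""
    d = K_FORM * forming_voltage   # dielectric (oxide) thickness, m
    return EPS0 * EPS_R / d        # farads per square metre

for v in (10, 50, 100):
    print(v, "V ->", round(capacitance_per_area(v) * 1e6, 1), "uF/m^2")
# 10 V -> ~6271.6, 50 V -> ~1254.3, 100 V -> ~627.2: ten times the
# forming voltage gives one-tenth the capacitance per unit area.
```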

The electrolyte, paper and cathode (C, K):

The negative electrode is a liquid electrolyte absorbed in a paper. The paper also acts as a spacer between the positive foil carrying the dielectric layer and the opposite aluminium foil (the negative foil), which acts as a contact medium to the electrolyte. The cathode foil serves as a large contact area for passing current to the operating electrolyte. Bipolar aluminium electrolytic capacitors are also available. In this design both the anode foil and the cathode foil are anodized, and the cathode foil has the same capacitance rating as the anode foil. This construction allows operation with direct voltage of either polarity, as well as operation on purely alternating voltages. Since it causes internal heating, the applied alternating voltage must be kept considerably below the direct voltage rating. Since we have a series connection of two capacitor elements, the total capacitance is only half the individual capacitance value. So compared to a polar capacitor, a bipolar capacitor requires up to twice the volume for the same total capacitance.

Chip Morphing (seminar topic for EC)

Role of energy

Engineering is a study of tradeoffs. In computer engineering the tradeoff has traditionally been between performance, measured in instructions per second, and price. Because of fabrication technology, price is closely related to chip size and transistor count. With the emergence of embedded systems, a new tradeoff has become the focus of design. This new tradeoff is between performance and power or energy consumption. The computational requirements of early embedded systems were generally more modest, and so the performance-power tradeoff tended to be weighted towards power. "High performance" and "energy efficient" were generally opposing concepts.

However, new classes of embedded applications are emerging which not only have significant energy constraints, but also require considerable computational resources. Devices such as space rovers, cell phones, automotive control systems, and portable consumer electronics all require or can benefit from high-performance processors. The future generations of such devices should continue this trend.

Processors for these devices must be able to deliver high performance with low energy dissipation. Additionally, these devices evidence large fluctuations in their performance requirements. Often a device will have very low performance demands for the bulk of its operation, but will experience periodic or asynchronous "spikes" when high performance is needed to meet a deadline or handle some interrupt event. These devices not only require a fundamental improvement in the performance-power tradeoff, but also necessitate a processor which can dynamically adjust its performance and power characteristics to provide the tradeoff which best fits the system requirements at that time.

PROCESSOR PERFORMANCE

These motivations point to three major objectives for a power conscious embedded processor. Such a processor must be capable of high performance, must consume low amounts of power, and must be able to adapt to changing performance and power requirements at runtime.

The objective of this seminar is to define a micro-architecture which can exhibit low power consumption without sacrificing high performance. This will require a fundamental shift in the power-performance curve presented by traditional microprocessors. Additionally, the processor design must be flexible and reconfigurable at run-time, so that it may present a series of configurations corresponding to different tradeoffs between performance and power consumption.
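One way to picture such run-time reconfiguration is a table of frequency/voltage operating points ("gears") and a policy that picks the lowest-energy gear still meeting the current deadline. The sketch below is purely illustrative: the gear values and the energy-proportional-to-V-squared-per-cycle model are assumptions, not figures from this work:

```python
# Hypothetical gears: (clock in MHz, supply voltage in V).
GEARS = [(100, 0.9), (300, 1.1), (600, 1.3), (1000, 1.5)]

def pick_gear(cycles, deadline_s):
    """Lowest-energy gear that finishes `cycles` within `deadline_s`.
    Dynamic energy per cycle is modelled as proportional to V^2."""
    best = None
    for freq_mhz, vdd in GEARS:
        runtime = cycles / (freq_mhz * 1e6)
        if runtime > deadline_s:
            continue                      # this gear is too slow
        energy = cycles * vdd ** 2        # relative units
        if best is None or energy < best[2]:
            best = (freq_mhz, vdd, energy)
    return best

print(pick_gear(50e6, deadline_s=1.0))    # relaxed: the (100 MHz, 0.9 V) gear wins
print(pick_gear(50e6, deadline_s=0.06))   # a "spike": must shift up to (1000 MHz, 1.5 V)
```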

MORPH

These objectives and motivations were identified during the MORPH project, a part of the Power Aware Computing / Communication (PACC) initiative. In addition to exploring several mechanisms to fundamentally improve performance, the MORPH project brought forth the idea of "gear shifting" as an analogy for run-time reconfiguration. Realizing that real world applications vary their performance requirements dramatically over time, a major goal of the project was to design microarchitectures which could adjust to provide the minimal required performance at the lowest energy cost. The MORPH project explored a number of microarchitectural techniques to achieve this goal, such as morphable cache hierarchies and exploiting bit-slice inactivity. One technique, multi-cluster architectures, is the direct predecessor of this work. In addition to microarchitectural changes, MORPH also conducted a survey of realistic embedded applications which may be power constrained. Also, design implications of a power aware runtime system were explored.