EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by Sven-Göran Eriksson

Sunday, January 31, 2010

MSI's new CX420, CR420 and CR720 laptops put Intel's new processors to good, workaday use

Smell that? That's the smell of a real man's budget computer, doing real manly things like task processing and pixel churning. MSI's new CX420, CR420 and CR720 laptops aren't much for looks, but under the hood you can find new-gen Core i Series processors across the board and ATI Radeon HD5470 graphics in the CX420 (pictured). Sure, there's only Intel integrated HD graphics in the CR420 and CR720, and the 1366 x 768 14-inch displays in the CX420 / CR420 are a bit of a letdown, but knowing MSI we're sure the prices for this trio will more than make up for any mild disappointments on the spec sheet. Hit up the PR for the full breakdown, but there's no release date to be found just yet.


8.9-inch ExoPC Slate has iPad looks, netbook internals, Windows 7 soul
The 8.9-inch ExoPC Slate bears more than a passing resemblance to a certain recently announced 9.7-inch multitouch tablet. Nevertheless, this one's quite different on the inside, delivering "the web without compromise," meaning full browser support with Flash courtesy of Windows 7 on a 1.6GHz Atom N270, with 2GB of DDR2 memory and a 32GB SSD with SD expansion. Yeah, those specs are familiar too, and while we're not thinking this will deliver the sort of snappy performance seen on the iPad, it will certainly be a lot more functional. Battery life is only four hours, but at least the battery is user-replaceable, and the $599 price matches the 32GB iPad. Likewise, it will be available in March -- or you can get a non-multitouch prototype for $780 right this very moment. If, that is, you speak enough French to manage the order page.

Adaptive optics (Electrical & Electronics Seminar Topics)

INTRODUCTION

Adaptive optics is a technology now being used in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of the stars. Without such a system, the images obtained through telescopes on Earth appear blurred, because the turbulent mixing of air at different temperatures causes the speed and direction of starlight to vary as it passes through the atmosphere.

Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in the telescope to counteract, in real time, the atmospheric distortion.

The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as their gravitational effects on their parent stars, and none has actually been imaged directly.


WHAT IS ADAPTIVE OPTICS?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.

Under ideal circumstances, the resolution of an optical system is limited by the diffraction of light waves. This so-called "diffraction limit" is generally described by the following angle (in radians), calculated from the light's wavelength λ and the optical system's pupil diameter D:

θ = 1.22 λ / D

Thus, the fully dilated human eye should be able to separate objects as close as 0.3 arcmin in visible light, and the 10 m Keck Telescope should be able to resolve objects as close as 0.013 arcsec.
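As a quick numerical check of the diffraction-limit figures above, the short Python sketch below evaluates θ = 1.22 λ / D for a fully dilated eye and for a Keck-class 10 m mirror. The 550 nm wavelength and 7 mm pupil diameter are assumed values chosen for illustration; they are not taken from the text.

```python
import math

def diffraction_limit_rad(wavelength_m, aperture_m):
    """Rayleigh criterion: smallest resolvable angle, in radians."""
    return 1.22 * wavelength_m / aperture_m

RAD_TO_ARCSEC = 180 / math.pi * 3600

wavelength = 550e-9        # visible light, ~550 nm (assumed)
eye_pupil = 7e-3           # fully dilated pupil, ~7 mm (assumed)
keck_aperture = 10.0       # Keck primary mirror diameter, 10 m

eye_limit = diffraction_limit_rad(wavelength, eye_pupil) * RAD_TO_ARCSEC
keck_limit = diffraction_limit_rad(wavelength, keck_aperture) * RAD_TO_ARCSEC

print(f"Eye:  {eye_limit / 60:.2f} arcmin")   # ~0.33 arcmin
print(f"Keck: {keck_limit:.3f} arcsec")       # ~0.014 arcsec
```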


In practice, these limits are never achieved. Owing to imperfections in the cornea and lens of the eye, the practical limit to resolution is only about 1 arcmin. To turn the problem around, scientists wishing to study the retina of the eye can only see details about 5 microns in size. In astronomy, the turbulent atmosphere blurs images to a size of 0.5 to 1 arcsec even at the best sites.

Adaptive optics provides a means of compensating for these effects, leading to appreciably sharper images that sometimes approach the theoretical diffraction limit. With sharper images comes an additional gain in contrast; for astronomy, where light levels are often very low, this means fainter objects can be detected and studied.

Ultrasonic Motor (Electrical & Electronics Seminar Topics)

INTRODUCTION

All of us know that a motor is a machine which produces or imparts motion; more precisely, it is an arrangement of coils and magnets that converts electric energy into mechanical energy. Ultrasonic motors are the next generation of motors.

In 1980, the world's first ultrasonic motor was invented. It utilizes the piezoelectric effect in the ultrasonic frequency range to provide its motive force, resulting in a motor with unusually good low-speed, high-torque and power-to-weight characteristics.

Electromagnetism has always been the driving force behind electric motor technology, but these motors suffer from many drawbacks. The field of ultrasonics seems to be changing that driving force.

DRAWBACKS OF ELECTROMAGNETIC MOTORS

Electromagnetic motors rely on the attraction and repulsion of magnetic fields for their operation. Without good noise-suppression circuitry, their electrically noisy operation will affect the electronic components around them. Surges and spikes from these motors can cause disruption or even damage in non-motor-related items such as CRTs and various types of receiving and transmitting equipment. Electromagnetic motors are also notorious for consuming large amounts of power and creating high ambient motor temperatures, both of which are undesirable from the efficiency point of view. Excessive heat energy is wasted as losses; even efficiently rated electromagnetic motors have high input-to-output energy loss ratios.

Replacing them with ultrasonic motors would virtually eliminate these undesirable effects. Electromagnetic motors produce strong magnetic fields which cause interference; ultrasonic motors use the piezoelectric effect and hence produce no magnetic interference.

PRINCIPLE OF OPERATION

PIEZOELECTRIC EFFECT

Many polymers, ceramics and molecules are permanently polarized; that is, some parts of the molecule are positively charged, while other parts are negatively charged. When an electric field is applied to these materials, the polarized molecules align themselves with the field, resulting in induced dipoles within the molecular or crystal structure of the material. Furthermore, a permanently polarized material such as quartz (SiO2) or barium titanate (BaTiO3) will produce an electric field when the material changes dimensions as a result of an imposed mechanical force. Such materials are piezoelectric, and this phenomenon is known as the piezoelectric effect. Conversely, an applied electric field can cause a piezoelectric material to change dimensions; this is known as electrostriction, or the reverse piezoelectric effect. Current ultrasonic motor designs work from this principle, only in reverse.

When a voltage at a resonance frequency above 20 kHz is applied to the piezoelectric element of an elastic body (the stator), the piezoelectric element expands and contracts. When voltage is applied, the material curls; the direction of the curl depends on the polarity of the applied voltage, and the amount of curl is determined by how many volts are applied.

Robotic Monitoring of Power Systems

INTRODUCTION

Economically effective maintenance and monitoring of power systems to ensure the high quality and reliability of electric power supplied to customers is becoming one of the most significant tasks of today's power industry. This is highly important because, in the case of unexpected failures, both the utilities and the consumers face heavy losses. The ideal power network can be approached by minimizing maintenance cost and maximizing the service life and reliability of existing power networks, but both goals cannot be achieved simultaneously. Timely preventive maintenance can dramatically reduce system failures. Currently, there are three maintenance methods employed by utilities: corrective maintenance, scheduled maintenance and condition-based maintenance. The following block diagram shows the important features of the various maintenance methods.

Corrective maintenance dominates in today's power industry. This method is passive, i.e. no action is taken until a failure occurs. Scheduled maintenance on the other hand refers to periodic maintenance carried out at pre-determined time intervals. Condition-based maintenance is defined as planned maintenance based on continuous monitoring of equipment status. Condition-based maintenance is very attractive since the maintenance action is only taken when required by the power system components. The only drawback of condition-based maintenance is monitoring cost. Expensive monitoring devices and extra technicians are needed to implement condition-based maintenance. Mobile monitoring solves this problem.

Mobile monitoring involves the development of a robotic platform carrying a sensor array. This continuously patrols the power cable network, locates incipient failures and estimates the aging status of electrical insulation. Monitoring of electric power systems in real time for reliability, aging status and presence of incipient faults requires distributed and centralized processing of large amounts of data from distributed sensor networks. To solve this task, cohesive multidisciplinary efforts are needed from such fields as sensing, signal processing, control, communications and robotics.

As with any preventive maintenance technology, the effort spent on status monitoring is justified by the reduction in fault occurrence and the elimination of consequent losses due to disruption of electric power and damage to equipment. Moreover, it is a well-recognized fact in the surveillance and monitoring fields that measurement of the parameters of a distributed system has higher accuracy when it is accomplished using distributed sensing techniques. In addition to sensitivity improvement and subsequent reliability enhancement, the use of robotic platforms for power system maintenance has many other advantages, such as replacing human workers in dangerous and highly specialized operations like live-line maintenance.

MOBILE ROBOT PLATFORM

Generally speaking, the mobile monitoring of power systems involves the following issues:
SENSOR FUSION: The aging of power cables begins long before the cable actually fails. Several external phenomena indicate ongoing aging, including partial discharges, hot spots, mechanical cracks and changes in the insulation's dielectric properties. These phenomena can be used to locate deteriorating cables and estimate their remaining lifetime. If incipient failures can be detected, or the aging process can be predicted accurately, possible outages and the economic losses that follow can be avoided.

In the robotic platform, non-destructive miniature sensors capable of determining the status of power cable systems are developed and integrated into a monitoring system, including a video sensor for visual inspection, an infrared thermal sensor for detection of hot spots, an acoustic sensor for identifying partial discharge activity and a fringing electric field sensor for determining the aging status of electrical insulation. Among these failure phenomena, the most important is partial discharge activity.


Wireless Power Transmission via Solar Power Satellite (Electrical & Electronics Seminar Topics)

INTRODUCTION

A major problem facing Planet Earth is the provision of an adequate supply of clean energy. It has been said that we face "...three simultaneous challenges -- population growth, resource consumption, and environmental degradation -- all converging particularly in the matter of sustainable energy supply." It is widely agreed that our current energy practices will not provide for all the world's peoples in an adequate way and still leave our Earth with a livable environment. Hence, a major task for the new century will be to develop sustainable and environmentally friendly sources of energy.

Projections of future energy needs over this new century show an increase by a factor of at least two and one half, perhaps by as much as a factor of five. All of the scenarios from reference 3 indicate continuing use of fossil sources, nuclear, and large hydro. However, the greatest increases come from "new renewables," and all scenarios show extensive use of these sources by 2050. Indeed, the projections indicate that the amount of energy derived from new renewables by 2050 will exceed that presently provided by oil and gas combined. This would imply a major change in the world's energy infrastructure. It will be a Herculean task to acquire this projected amount of energy, and this author asserts that there are really only a few good options for meeting the additional energy needs of the new century in an environmentally acceptable way.

One of the so-called new renewables on which major reliance is almost certain to be placed is solar power. Solar power captured on the Earth is familiar to all. However, an alternative approach is to capture it in space and convey it to the Earth by wireless means. As with terrestrial capture, Space Solar Power (SSP) provides a source that is virtually carbon-free and sustainable. As will be described later, the power-collecting platforms would most likely operate in geosynchronous orbit, where they would be illuminated 24 hours a day (except for short eclipse periods around the equinoxes). Thus, unlike systems for the terrestrial capture of solar power, a space-based system would not be limited by the vagaries of the day-night cycle. Furthermore, if the transmission frequency is properly chosen, delivery of power can be carried out essentially independent of weather conditions. Thus Space Solar Power could provide base-load electricity.

Eddy current brakes (Electrical & Electronics Seminar Topics)

INTRODUCTION

Many of the ordinary brakes in use today stop the vehicle by means of mechanical blocking. This causes skidding and wear and tear on the vehicle, and if the speed of the vehicle is very high, the brake cannot provide a sufficiently high braking force, which causes problems. These drawbacks of ordinary brakes can be overcome by a simple and effective braking mechanism, the eddy current brake. It is an abrasion-free method for braking vehicles, including trains, and it makes use of the opposing tendency of eddy currents.

Eddy currents are the swirling currents produced in a conductor that is subjected to a change in magnetic field. Because eddy currents oppose the change that creates them, they cause energy to be lost; more precisely, eddy currents transform useful forms of energy, such as kinetic energy, into heat, which is much less useful. In many applications this loss of useful energy is not particularly desirable, but there are some practical applications where it is, and one such application is the eddy current brake.

PRINCIPLE OF OPERATION

The eddy current brake works according to Faraday's law of electromagnetic induction. According to this law, whenever a conductor cuts magnetic lines of force, an emf is induced in the conductor, the magnitude of which is proportional to the strength of the magnetic field and the speed of the conductor. If the conductor is a disc, circulatory currents, i.e. eddy currents, are produced in the disc. According to Lenz's law, the direction of the current is such as to oppose its cause, i.e. the movement of the disc.

Essentially, the eddy current brake consists of two parts: a stationary magnetic field system and a solid rotating part that includes a metal disc. During braking, the metal disc is exposed to a magnetic field from an electromagnet, generating eddy currents in the disc. The magnetic interaction between the applied field and the eddy currents slows down the rotating disc, and thus the wheels of the vehicle, since the wheels are directly coupled to the disc of the eddy current brake, producing a smooth stopping motion.
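Because the induced emf, and hence the braking torque, grows with the disc's speed, a disc slowed only by eddy currents loses speed roughly exponentially. The following Python sketch is a toy illustration of that behaviour; the inertia, torque coefficient, initial speed and time step are made-up values, not figures from the text.

```python
# Toy model of eddy current braking: torque proportional to angular speed.
inertia = 0.05     # disc moment of inertia, kg*m^2 (assumed)
k_brake = 0.2      # eddy-current torque coefficient, N*m*s/rad (assumed)
omega = 100.0      # initial angular speed, rad/s (assumed)
dt = 0.01          # integration time step, s

t = 0.0
while omega > 1.0:                     # stop once the disc is nearly at rest
    torque = -k_brake * omega          # Lenz's law: torque opposes the motion
    omega += (torque / inertia) * dt   # simple Euler integration
    t += dt

print(f"Disc spun down below 1 rad/s after about {t:.2f} s")
```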


EDDY CURRENT INDUCED IN A CONDUCTOR

Essentially, an eddy current brake consists of two members: a stationary magnetic field system and a solid rotary member, generally of mild steel, which is sometimes referred to as the secondary because the eddy currents are induced in it. The two members are separated by a short air gap, there being no contact between them for the purpose of torque transmission. Consequently there is no wear, as in a friction brake.

The stator consists of a pole core, pole shoes, and a field winding. The field winding is wound on the pole core. The pole core and pole shoes are made of cast steel laminations and fixed to the stator frame by means of screws or bolts. Copper or aluminium is used as the winding material; the arrangement is shown in Fig. 1. This system consists of two parts:
1. Stator
2. Rotor

Friday, January 29, 2010

FireWire (seminar topic for computer science)

Definition
FireWire, originally developed by Apple Computer, Inc., is a cross-platform implementation of the high-speed serial data bus, defined by the IEEE 1394-1995 and IEEE 1394a-2000 (FireWire 400) and IEEE 1394b (FireWire 800) standards, that moves large amounts of data between computers and peripheral devices. Its features include simplified cabling, hot swapping and transfer speeds of up to 800 megabits per second. FireWire is a high-speed serial input/output (I/O) technology for connecting peripheral devices to a computer or to each other; it is one of the fastest peripheral standards ever developed, and now, at 800 megabits per second (Mbps), it's even faster.

Based on Apple-developed technology, FireWire was adopted in 1995 as an official industry standard (IEEE 1394) for cross-platform peripheral connectivity. By providing a high-bandwidth, easy-to-use I/O technology, FireWire inspired a new generation of consumer electronics devices from many companies, including Canon, Epson, HP, Iomega, JVC, LaCie, Maxtor, Mitsubishi, Matsushita (Panasonic), Pioneer, Samsung and Sony. FireWire has also been a boon to professional users because of the high-speed connectivity it has brought to audio and video production systems.

In 2001, the Academy of Television Arts & Sciences presented Apple with an Emmy award in recognition of the contributions made by FireWire to the television industry. Now FireWire 800, the next generation of FireWire technology, promises to spur the development of more innovative high-performance devices and applications. This technology brief describes the advantages of FireWire 800 and some of the applications for which it is ideally suited.

TOPOLOGY
The 1394 protocol is a peer-to-peer network with a point-to-point signaling environment. Nodes on the bus may have several ports on them; each of these ports acts as a repeater, retransmitting any packets received by the other ports within the node. Figure 1 shows what a typical consumer might have attached to their 1394 bus. Because 1394 is a peer-to-peer protocol, no specific host is required, such as the PC in USB. In Figure 1, the digital camera could easily stream data to both the digital VCR and the DVD-RAM without any assistance from other devices on the bus.
FireWire uses 64-bit fixed addressing, based on the IEEE 1212 standard. There are three parts to each packet of information sent by a device over FireWire:

" A 10-bit bus ID that is used to determine which FireWire bus the data came from
" A 6-bit physical ID that identifies which device on the bus sent the data
" A 48-bit storage area that is capable of addressing 256 terabytes of information for each node!

The bus ID and physical ID together comprise the 16-bit node ID, which allows for roughly 64,000 nodes on a system. Individual FireWire cables can run as long as 4.5 meters, and data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Look at the example below: a camcorder is connected to an external hard drive connected to Computer A; Computer A is connected to Computer B, which in turn is connected to Computer C. It takes four hops for Computer C to access the camera.
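To make the 10/6/48-bit split concrete, here is a small Python sketch that packs and unpacks a 64-bit FireWire address using plain bit arithmetic. It is only an illustration of the layout described above, not code from any FireWire library; the example values are made up.

```python
def pack_firewire_address(bus_id: int, phys_id: int, offset: int) -> int:
    """Combine a 10-bit bus ID, 6-bit physical ID and 48-bit offset into 64 bits."""
    assert 0 <= bus_id < 2**10 and 0 <= phys_id < 2**6 and 0 <= offset < 2**48
    node_id = (bus_id << 6) | phys_id      # 16-bit node ID
    return (node_id << 48) | offset

def unpack_firewire_address(addr: int):
    """Split a 64-bit address back into (bus ID, physical ID, offset)."""
    offset = addr & (2**48 - 1)
    node_id = addr >> 48
    return node_id >> 6, node_id & 0x3F, offset

addr = pack_firewire_address(bus_id=1023, phys_id=5, offset=0x1234)
print(hex(addr), unpack_firewire_address(addr))
```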
The 1394 protocol supports both asynchronous and isochronous data transfers.

Isochronous transfers: Isochronous transfers are always broadcast in a one-to-one or one-to-many fashion. No error correction or retransmission is available for isochronous transfers. Up to 80% of the available bus bandwidth can be used for isochronous transfers.
Asynchronous transfers: Asynchronous transfers are targeted to a specific node with an explicit address. They are not guaranteed a specific amount of bandwidth on the bus, but they are guaranteed a fair shot at gaining access to the bus when asynchronous transfers are permitted. This allows error-checking and retransmission mechanisms to take place.

Biological Computers

INTRODUCTION

Biological computers have emerged from an interdisciplinary field that draws together molecular biology, chemistry, computer science and mathematics. The highly predictable hybridization chemistry of DNA, the ability to completely control the length and content of oligonucleotides, and the wealth of enzymes available for modification of DNA make nucleic acids an attractive candidate for all of these nanoscale applications.

A 'DNA computer' has been used for the first time to find the only correct answer from over a million possible solutions to a computational problem. Leonard Adleman of the University of Southern California in the US and colleagues used different strands of DNA to represent the 20 variables in their problem, which could be the most complex task ever solved without a conventional computer. The researchers believe that the complexity of the structure of biological molecules could allow DNA computers to outperform their electronic counterparts in future.

Scientists have previously used DNA computers to crack computational problems with up to nine variables, which involves selecting the correct answer from 512 possible solutions. But now Adleman's team has shown that a similar technique can solve a problem with 20 variables, which has 2^20 - or 1,048,576 - possible solutions.

Adleman and colleagues chose an 'exponential time' problem, in which each extra variable doubles the amount of computation needed. This is known as an NP-complete problem, and is notoriously difficult to solve for a large number of variables. Other NP-complete problems include the 'travelling salesman' problem - in which a salesman has to find the shortest route between a number of cities - and the calculation of interactions between many atoms or molecules.

Adleman and co-workers expressed their problem as a string of 24 'clauses', each of which specified a certain combination of 'true' and 'false' for three of the 20 variables. The team then assigned two short strands of specially encoded DNA to each of the 20 variables, representing 'true' and 'false' respectively.

In the experiment, each of the 24 clauses was represented by a gel-filled glass cell. The strands of DNA corresponding to the variables - and their 'true' or 'false' states - in each clause were then placed in the cells.

Each of the 1,048,576 possible solutions was then represented by a much longer strand of specially encoded DNA, which Adleman's team added to the first cell. If a long strand had a 'subsequence' that complemented all three short strands, it bound to them; otherwise it passed through the cell.

To move on to the second clause of the formula, a fresh set of long strands was sent into the second cell, which trapped any long strand with a 'subsequence' complementary to all three of its short strands. This process was repeated until a complete set of long strands had been added to all 24 cells, corresponding to the 24 clauses. The long strands captured in the cells were collected at the end of the experiment, and these represented the solution to the problem.
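The clause-by-clause filtering described above maps naturally onto a brute-force sketch in software. The Python snippet below is purely illustrative: the 24 random clauses stand in for Adleman's actual formula (which the article does not reproduce, and which was built to have a single satisfying assignment), and each filtering pass plays the role of one gel-filled cell.

```python
from itertools import product
import random

NUM_VARS = 20
random.seed(1)

# 24 made-up clauses; each lists three (variable index, required truth value) pairs.
clauses = [[(random.randrange(NUM_VARS), random.choice([True, False]))
            for _ in range(3)] for _ in range(24)]

def satisfies(assignment, clause):
    """A clause is satisfied if at least one of its three literals matches."""
    return any(assignment[var] == value for var, value in clause)

# Start with all 2**20 = 1,048,576 candidate assignments (the 'long strands').
survivors = product([True, False], repeat=NUM_VARS)

# Pass the survivors through each clause in turn (each 'gel-filled cell').
for clause in clauses:
    survivors = [a for a in survivors if satisfies(a, clause)]

print(f"{len(survivors)} assignments satisfy all 24 clauses")
```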

THE WORLD'S SMALLEST COMPUTER

The world's smallest computer (around a trillion can fit in a drop of water) might one day go on record again as the tiniest medical kit. Made entirely of biological molecules, this computer was successfully programmed to identify - in a test tube - changes in the balance of molecules in the body that indicate the presence of certain cancers, to diagnose the type of cancer, and to react by producing a drug molecule to fight the cancer cells.

DOCTOR IN A CELL

In previously produced biological computers, input, output and "software" were all composed of DNA, the material of genes, while DNA-manipulating enzymes were used as "hardware." The newest version's input apparatus is designed to assess concentrations of specific RNA molecules, which may be overproduced or underproduced, depending on the type of cancer. Using pre-programmed medical knowledge, the computer then makes its diagnosis based on the detected RNA levels. In response to a cancer diagnosis, the output unit of the computer can initiate the controlled release of a single-stranded DNA molecule that is known to interfere with the cancer cell's activities, causing it to self-destruct.

Semantic Web (seminar topics for computer science)

INTRODUCTION

In the beginning, there was no Web. The Web began as a concept of Tim Berners-Lee, who worked for CERN, the European organization for physics research. CERN's technical staff urgently needed to share documents located on their many computers. Berners-Lee had previously built several systems to do that, and with this background he conceived the World Wide Web.

The design had a relatively simple technical basis, which helped the technology take hold and gain critical mass. Berners-Lee wanted anyone to be able to put information on a computer and make that information accessible to anyone else, anywhere. He hoped that eventually, machines would also be able to use information on the Web. Ultimately, he thought, this would allow powerful and effective human-computer-human collaboration.

What is the Semantic Web?
The word semantic implies meaning. For the Semantic Web, semantic indicates that the meaning of data on the Web can be discovered not just by people, but also by computers. The phrase "the Semantic Web" stands for a vision in which computer software, as well as people, can find, read, understand, and use data over the World Wide Web to accomplish useful goals for users.

Of course, we already use software to accomplish things on the Web, but the distinction lies in the words we use. People surf the Web, buy things on web sites, work their way through search pages, read the labels on hyperlinks, and decide which links to follow. It would be much more efficient and less time-consuming if a person could launch a process that would then proceed on its own, perhaps checking back with the person from time to time as the work progressed. The business of the Semantic Web is to bring such capabilities into widespread use.

MAJOR VISIONS OF THE SEMANTIC WEB
- Indexing and retrieving information
- Metadata
- Annotation
- The Web as a large, interoperable database
- Machine retrieval of data
- Web-based services
- Discovery of services
- Intelligent software agents

THE SEMANTIC WEB FOUNDATION
The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed.

The World Wide Web has certain design features that make it different from earlier hyperlink experiments. These features will play an important role in the design of the Semantic Web. The Web is not the whole Internet, and it would be possible to develop many capabilities of the Semantic Web using other means besides the World Wide Web. But because the Web is so widespread, and because its basic operations are relatively simple, most of the technologies being contemplated for the Semantic Web are based on the current Web, sometimes with extensions.

The Web is designed around resources, standardized addressing of those resources (Uniform Resource Locators and Uniform Resource Identifiers), and a small, widely understood set of commands. It is also designed to operate over very large and complex networks in a decentralized way. Let us look at each of these design features.

BitTorrent (seminar topics for computer science)

INTRODUCTION

BitTorrent is a protocol designed for transferring files. It is peer-to-peer in nature, as users connect to each other directly to send and receive portions of the file. However, there is a central server (called a tracker) which coordinates the action of all such peers. The tracker only manages connections; it does not have any knowledge of the contents of the files being distributed, and therefore a large number of users can be supported with relatively limited tracker bandwidth. The key philosophy of BitTorrent is that users should upload (transmit outbound) at the same time they are downloading (receiving inbound). In this manner, network bandwidth is utilized as efficiently as possible. BitTorrent is designed to work better as the number of people interested in a certain file increases, in contrast to other file transfer protocols.

One analogy to describe this process might be to visualize a group of people sitting at a table. Each person at the table can both talk and listen to any other person at the table. These people are each trying to get a complete copy of a book. Person A announces that he has pages 1-10, 23, 42-50, and 75. Persons C, D, and E are each missing some of those pages that A has, and so they coordinate such that A gives them each copies of the pages he has that they are missing. Person B then announces that she has pages 11-22, 31-37, and 63-70. Persons A, D, and E tell B they would like some of her pages, so she gives them copies of the pages that she has.

The process continues around the table until everyone has announced what they have. The people at the table coordinate to swap parts of this book until everyone has everything. There is also another person at the table, who we will call 'S'. This person has a complete copy of the book, and so does not need anything sent to him. He responds with pages that no one else in the group has. At first, when everyone has just arrived, they all must talk to him to get their first set of pages. However, the people are smart enough to not all get the same pages from him. After a short while, they all have most of the book amongst themselves, even if no one person has the whole thing. In this manner, this one person can share a book that he has with many other people, without having to give a full copy to everyone that is interested. He can instead give out different parts to different people, and they will be able to share it amongst themselves. This person who we have referred to as 'S' is called a seed in the terminology of BitTorrent.
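The table analogy translates almost directly into code. The Python sketch below is a heavily simplified, made-up simulation rather than the real BitTorrent protocol: there is no tracker or networking, the seed answers only one request per round, and each downloader otherwise grabs a random missing piece that some other downloader already holds.

```python
import random

NUM_PIECES = 100
random.seed(0)

seed = set(range(NUM_PIECES))          # 'S' holds the complete copy
peers = [set() for _ in range(4)]      # four downloaders start with nothing

rounds = 0
while any(len(p) < NUM_PIECES for p in peers):
    rounds += 1
    seed_used = False                  # the seed serves one request per round (assumption)
    for i, peer in enumerate(peers):
        missing = set(range(NUM_PIECES)) - peer
        if not missing:
            continue
        others = set().union(*(p for j, p in enumerate(peers) if j != i))
        available = missing & others   # prefer pieces the other downloaders already hold
        if available:
            peer.add(random.choice(sorted(available)))
        elif not seed_used:            # otherwise ask the seed for a fresh piece
            peer.add(random.choice(sorted(missing)))
            seed_used = True

print(f"Swarm completed after {rounds} rounds")
```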

WHAT BITTORRENT DOES

When a file is made available using HTTP, all upload cost is placed on the hosting machine. With BitTorrent, when multiple people are downloading the same file at the same time, they upload pieces of the file to each other. This redistributes the cost of upload to downloaders (where it is often not even metered), thus making hosting a file with a potentially unlimited number of downloaders affordable. Researchers have attempted to find practical techniques to do this before. It has not previously been deployed on a large scale because the logistical and robustness problems are quite difficult. Simply figuring out which peers have what parts of the file and where they should be sent is difficult to do without incurring a huge overhead. In addition, real deployments experience very high churn rates: peers rarely connect for more than a few hours, and frequently for only a few minutes. Finally, there is a general problem of fairness. The total download rate across all downloaders must, of mathematical necessity, be equal to the total upload rate. The strategy for allocating upload that seems most likely to make peers happy with their download rates is to make each peer's download rate proportional to its upload rate. In practice it's very difficult to keep peer download rates from sometimes dropping to zero by chance, much less to keep upload and download rates correlated.

BitTorrent Interface

BitTorrent's interface is almost the simplest possible. Users launch it by clicking on a hyperlink to the file they wish to download and are given a standard "Save As" dialog, followed by a download progress dialog that is mostly notable for having an upload rate in addition to a download rate. This extreme ease of use has contributed greatly to BitTorrent's adoption, and may even be more important than, although it certainly complements, the performance and cost-redistribution features.

Thursday, January 28, 2010

Apple introduces new $499 iPad tablet computer

Apple CEO Steve Jobs unveiled the company's much-anticipated iPad tablet computer Wednesday, calling it a new third category of mobile device that is neither smart phone nor laptop, but something in between.

The iPad will start at $499, a price tag far below the $1,000 that some analysts were expecting. But Apple must still persuade recession-weary consumers who already have other devices to open their wallets yet again. Apple plans to begin selling the iPad in two months.

Jobs said the device would be useful for reading books, playing games or watching video, describing it as "so much more intimate than a laptop and so much more capable than a smart phone."

The half-inch-thick iPad is larger than the company's popular iPhone but similar in design. It weighs 1.5 pounds and has a touch screen that is 9.7 inches diagonally. It comes with 16, 32 or 64 gigabytes of flash memory storage, and has Wi-Fi and Bluetooth connectivity built in.

Jobs said the device has a battery that lasts 10 hours and can sit for a month on standby without needing a charge.

Raven Zachary, a contributing analyst with a mobile research agency called The 451 Group, considered the iPad a laptop replacement, especially because Apple is also selling a dock with a built-in keyboard.

Wednesday, January 27, 2010

Immersion Lithography (electronic seminar topics)

OPTICAL LITHOGRAPHY

The dramatic increase in performance and cost reduction in the electronics industry are attributable to innovations in the integrated circuit and packaging fabrication processes. ICs are made using Optical Lithography. The speed and performance of the chips, their associated packages, and, hence, the computer systems are dictated by the lithographic minimum printable size. Lithography, which replicates a pattern rapidly from chip to chip, wafer to wafer, or substrate to substrate, also determines the throughput and the cost of electronic systems. From the late 1960s, when integrated circuits had linewidths of 5 µm, to 1997, when minimum linewidths have reached 0.35 µm in 64Mb DRAM circuits, optical lithography has been used ubiquitously for manufacturing. This dominance of optical lithography in production is the result of a worldwide effort to improve optical exposure tools and resists.

A lithographic system includes exposure tool, mask, resist, and all of the processing steps to accomplish pattern transfer from a mask to a resist and then to devices. Light from a source is collected by a set of mirrors and light pipes, called an illuminator, which also shapes the light. Shaping of light gives it a desired spatial coherence and intensity over a set range of angles of incidence as it falls on a mask. The mask is a quartz plate onto which a pattern of chrome has been deposited.

It contains the pattern to be created on the wafer. The light patterns that pass through the mask are reduced by a factor of four by a focusing lens and projected onto the wafer, which is made by coating a silicon wafer with a layer of silicon nitride, followed by a layer of silicon dioxide and finally a layer of photoresist. The photoresist that is exposed to the light becomes soluble and is rinsed away, leaving a miniature image of the mask pattern at each chip location.

Regions unprotected by photo resist are etched by gases, removing the silicon dioxide and the silicon nitride and exposing the silicon. Impurities are added to the etched areas, changing the electrical properties of the silicon as needed to form the transistors.

As early as the 1980s, experts were already predicting the demise of optical lithography as the wavelength of the light used to project the circuit image onto the silicon wafer was too large to resolve the ever-shrinking details of each new generation of ICs. Shorter wavelengths are simply absorbed by the quartz lenses that direct the light onto the wafer.

Although lithography system costs (which are typically more than one third the costs of processing a wafer to completion) increase as minimum feature size on a semiconductor chip decreases, optical lithography remains attractive because of its high wafer throughput.

RESOLUTION LIMITS FOR OPTICAL LITHOGRAPHY

The minimum feature that may be printed with an optical lithography system is determined by the Rayleigh equation:

W = k1 · λ / NA

where k1 is the resolution factor, λ is the wavelength of the exposing radiation and NA is the numerical aperture.
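As a quick numerical illustration of the Rayleigh equation, the Python sketch below plugs in a 193 nm ArF exposure wavelength, a numerical aperture of 0.75 and k1 = 0.4. These particular values are assumptions chosen for illustration, not figures from the text; the second call simply shows how raising the effective NA (for example with a water immersion medium of refractive index about 1.44) shrinks the printable feature.

```python
def min_feature_nm(k1: float, wavelength_nm: float, numerical_aperture: float) -> float:
    """Rayleigh equation for the smallest printable feature: W = k1 * lambda / NA."""
    return k1 * wavelength_nm / numerical_aperture

# Dry 193 nm ArF exposure (illustrative numbers)
print(min_feature_nm(k1=0.4, wavelength_nm=193, numerical_aperture=0.75))          # ~103 nm

# Same tool with the effective NA boosted by a water immersion medium (~1.44)
print(min_feature_nm(k1=0.4, wavelength_nm=193, numerical_aperture=0.75 * 1.44))   # ~71 nm
```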

Non Visible Imaging

INTRODUCTION

Near-infrared light consists of light just beyond visible red light (wavelengths greater than 780 nm). Contrary to popular belief, near-infrared photography does not record thermal radiation (heat); far-infrared thermal imaging requires more specialized equipment. Infrared images exhibit a few distinct effects that give them an exotic, antique look. Plant life looks completely white because it reflects almost all infrared light (because of this effect, infrared photography is commonly used in aerial photography to analyze crop yields, pest control, and so on). The sky is a stark black because no infrared light is scattered. Human skin looks pale and ghostly. Dark sunglasses all but disappear because they don't block infrared light, and it's said that you can even capture the near-infrared emissions of a common iron.

Infrared photography has been around for at least 70 years, but until recently it has not been easily accessible to those not versed in traditional photographic processes. Since the charge-coupled devices (CCDs) used in digital cameras and camcorders are sensitive to near-infrared light, they can be used to capture infrared photos. With a filter that blocks out all visible light (frequently called a "cold mirror" filter), most modern digital cameras and camcorders can capture photographs in infrared. In addition, their LCD screens can be used to preview the resulting image in real time, a tool unavailable in traditional photography without filters that let some visible (red) light through.

Near-infrared (1000 - 3000 nm) spectrometry, which employs an external light source for determination of chemical composition, has previously been utilized for industrial determination of the fat content of commercial meat products, for in vivo determination of body fat, and in our laboratories for determination of lipoprotein composition in carotid artery atherosclerotic plaques. Near-infrared (IR) spectrometry has been used industrially for several years to determine the saturation of unsaturated fatty acid esters (1). Near-IR spectrometry uses a tunable light source external to the experimental subject to determine its chemical composition. Industrial utilization of near-IR will allow for the in vivo measurement of the tissue-specific rate of oxygen utilization as an indirect estimate of energy expenditure. However, assessment of regional oxygen consumption by these methods is complex, requiring a high level of surgical skill for implantation of indwelling catheters to isolate the organ under study.

Adaptive Optics in Ground-Based Telescopes (electronics seminar topic)

Adaptive optics is a technology now being used in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of the stars. Without such a system, the images obtained through telescopes on Earth appear blurred, which is caused by the turbulent mixing of air at different temperatures.

Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in the telescope to counteract, in real time, the atmospheric distortion.

The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as their gravitational effects on their parent stars, and none has actually been imaged directly.

WHAT IS ADAPTIVE OPTICS?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.

Space telescopes avoid problems with the atmosphere, but they are enormously expensive and the limit on the aperture size of telescopes is quite restrictive. The Hubble Space Telescope, the world's largest telescope in orbit, has an aperture of only 2.4 metres, while terrestrial telescopes can have a diameter four times that size.

In order to avoid atmospheric aberration, one can turn to large telescopes on the ground that have been equipped with an ADAPTIVE OPTICS system. With this setup, the image quality that can be recovered is close to what the telescope would deliver if it were in space. Images obtained from the adaptive optics system on the 6.5 m diameter MMT telescope illustrate the impact.

A 64-Point Fourier Transform Chip (electronics seminar topic)

Fourth-generation wireless and mobile systems are currently the focus of research and development. Broadband wireless systems based on orthogonal frequency division multiplexing will allow packet-based high data rate communication suitable for video transmission and mobile Internet applications. Considering this, we propose a data path architecture using dedicated hardware for the baseband processor. The most computationally intensive parts of such a high data rate system are the 64-point inverse FFT in the transmit direction and the Viterbi decoder in the receive direction. Accordingly, an appropriate design methodology for constructing them has to be chosen: a) how much silicon area is needed; b) how easily the particular architecture can be made flat for implementation in VLSI; c) in the actual implementation, how many wire crossings and how many long wires carrying signals to remote parts of the design are necessary; d) how small the power consumption can be. This paper describes a novel 64-point FFT/IFFT processor which has been developed as part of a larger research project to develop a single-chip wireless modem.

ALGORITHM FORMULATION

The discrete Fourier transform A(r) of a complex data sequence B(k) of length N, where r, k ∈ {0, 1, …, N-1}, can be described as

A(r) = Σ_{k=0}^{N-1} B(k) · W_N^{kr},  where W_N = e^{-j2π/N}.

Let us consider that N = MT, r = s + Tt and k = l + Mm, where s, l ∈ {0, 1, …, 7} and m, t ∈ {0, 1, …, T-1}. Applying these values in the first equation, we get

A(s + Tt) = Σ_{l=0}^{M-1} W_M^{lt} · W_N^{ls} · [ Σ_{m=0}^{T-1} B(l + Mm) · W_T^{ms} ].

This shows that it is possible to realize the FFT of length N by first decomposing it into one M-point and one T-point FFT, where N = MT, and combining them. But this results in a two-dimensional rather than a one-dimensional FFT structure. We can formulate the 64-point FFT by taking M = T = 8:

A(s + 8t) = Σ_{l=0}^{7} W_8^{lt} · W_64^{ls} · [ Σ_{m=0}^{7} B(l + 8m) · W_8^{ms} ].

This shows that it is possible to express the 64-point FFT in terms of a two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional constant multiplications. At first, the appropriate data samples undergo an 8-point FFT computation; the constants to be multiplied within each 8-point FFT are trivial, so no true multiplications are needed there. Eight such computations are needed to generate a full set of 64 intermediate data, which, after multiplication by the inter-dimensional constants, undergo a second set of eight 8-point FFT operations. Proper reshuffling of the data coming out of the second 8-point FFT stage generates the final output of the 64-point FFT.
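A quick way to sanity-check this decomposition is to compute the 64-point DFT both directly and through the two 8-point stages plus the 64 inter-dimensional twiddle factors. The NumPy sketch below does exactly that; it is an illustrative reformulation of the equations above, not the paper's hardware data path.

```python
import numpy as np

N, M, T = 64, 8, 8
x = np.random.randn(N) + 1j * np.random.randn(N)      # arbitrary complex input B(k)

direct = np.fft.fft(x)                                 # reference 64-point DFT

# Two-dimensional formulation: k = l + M*m, r = s + T*t.
B = x.reshape(T, M).T                                  # B[l, m] = x[l + M*m]
stage1 = np.fft.fft(B, axis=1)                         # first 8-point FFTs (over m), indexed by s
l = np.arange(M).reshape(M, 1)
s = np.arange(T).reshape(1, T)
twiddled = stage1 * np.exp(-2j * np.pi * l * s / N)    # the 64 inter-dimensional constants W_N^(ls)
stage2 = np.fft.fft(twiddled, axis=0)                  # second 8-point FFTs (over l), indexed by t
two_dim = stage2.reshape(N)                            # A(s + T*t) laid out as a flat 64-point result

print(np.allclose(direct, two_dim))                    # True: decomposition matches the direct DFT
```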

Fig. Signal flow graph of an 8-point DIT FFT.

Realization of an 8-point FFT using the conventional DIT structure does not require any multiplication operations.

The constants to be multiplied in the first two columns of the 8-point FFT structure are either 1 or j. In the third column, the multiplications by constants are actually addition/subtraction operations followed by a multiplication by 1/√2, which can easily be realized using only a hardwired shift-and-add operation. Thus an 8-point FFT can be carried out without using any true digital multiplier, which provides a way to realize a low-power 64-point FFT at reduced hardware cost. On the other hand, the number of non-trivial complex multiplications for the conventional 64-point radix-2 DIT FFT is 66. Thus the present approach results in a reduction of about 26% in complex multiplications compared to the conventional radix-2 64-point FFT. This reduction in arithmetic complexity further enhances the scope for realizing a low-power 64-point FFT processor. However, the arithmetic complexity of the proposed scheme is almost the same as that of the radix-4 FFT algorithm, since the radix-4 64-point FFT algorithm needs 52 non-trivial complex multiplications.

Tuesday, January 26, 2010

Time Division Multiple Access (TDMA)

TDMA, or Time Division Multiple Access, was one of the first digital cell phone standards available in the United States. It was the first successor to the original AMPS analog service that was popular throughout the country, and it remained in widespread service from the early-to-mid 1990s until roughly 2003, when the last of the TDMA carriers, Cingular and AT&T, switched to the GSM digital standard.

TDMA was a significant leap over the analog wireless service in place at the time, and its chief benefit for carriers was that it used the available wireless spectrum much more efficiently than analog, allowing more phone calls to go through simultaneously. An additional benefit for carriers was that it virtually eliminated the criminal cell phone cloning that was common at the time, by encrypting its wireless signal.

The primary benefit for wireless users of the era was dramatically increased call quality over the scratchy, frequently garbled or "under water" sounds that analog users had become accustomed to. All manufacturers produced TDMA handsets during this period, but Nokia's ubiquitous model 5165 is probably the most popular example of TDMA technology. TDMA was replaced by GSM to permit the use of advanced, data-intensive features such as text messaging and picture messaging, and to allow an even more efficient use of bandwidth.

Asynchronous Transfer Mode (ATM)

Definition
These computers include the entire spectrum of PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, or between central servers and the associated host computer systems. The replacement of copper with fiber and the advances in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible.

With work increasingly being shared over large distances, including international communication, the systems must be interconnected via wide area networks with increasing demands for higher bit rates.

For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.

ATM is simply a data link layer protocol. It is asynchronous in the sense that the recurrence of cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communications world based on ATM is taking place.


Analog-Digital Hybrid Modulation (electronics seminar topic)

This paper seeks to present ways to eliminate the inherent quantization noise component in digital communications, instead of conventionally making it minimal. It deals with a new signaling concept called the Signal Code Modulation (SCM) technique. The primary analog signal is represented by two parts: a sample which is quantized and encoded digitally, and an analog component which is a function of the quantization error of the digital sample. The advantages of such a system are two-sided, offering the benefits of both analog and digital signaling. The presence of the analog residual allows the system performance to improve when excess channel SNR is available, while the digital component provides increased SNR and makes it possible for coding to be employed to achieve near error-free transmission.

Introduction

Let us consider the transmission of an analog signal over a band-limited channel. This is possible with two conventional techniques: analog transmission and digital transmission, of which the latter uses sampling and quantization principles. Analog modulation techniques such as frequency and phase modulation provide significant noise immunity, with an SNR improvement proportional to the square root of the modulation index, and are thus able to trade off bandwidth for SNR.

The SCM Technique: An Analytical Approach

Suppose we are given a band-limited signal of bandwidth B Hz, which needs to be transmitted over a channel of bandwidth Bc with Gaussian noise of spectral density N0 watts per Hz. Let the transmitter have an average power of P watts. We consider that the signal is sampled at the Nyquist rate of 2B samples per second, to produce a sampled signal x(n).

Next, let the signal be quantized to produce a discrete-amplitude signal of M = 2^b levels, where b is the number of bits per sample of the digital symbol D which is to be encoded. More explicitly, let the values of the 2^b levels be q1, q2, q3, ..., qM, distributed over the range [-1, +1] scaled by a proportionality factor determined relative to the signal. Given a sample x(n), we find the nearest level qi(n). Here, qi(n) is the digital symbol and xa(n) = x(n) - qi(n) is the analog representation. The exact representation of the analog signal is given by x(n) = qi(n) + xa(n).

We can accomplish the transmission of this information over the noisy channel by dividing it into two channels: one for the analog information and another for the digital information. The analog channel bandwidth is Ba = βa·B and the digital channel bandwidth is Bd = βd·B, where Ba + Bd = Bc, the channel bandwidth. Let β = Bc/B be the bandwidth expansion factor, i.e. the ratio of the bandwidth of the channel to the bandwidth of the signal.
Similarly, the variables βa and βd are the ratios Ba/B and Bd/B. Here we will assume that βa = 1, so that βd = β - 1. The total power is also divided between the two channels, with fraction pa for the analog channel and fraction pd for the digital one, so that pa + pd = 1.
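The split described above is easy to see numerically. The sketch below is a minimal illustration in Python (my own, not from the paper), assuming uniform quantization levels over [-λ, +λ]; the names scm_split and scm_reconstruct are chosen here only for illustration.

# Illustrative sketch: splitting samples into a digital symbol plus an
# analog residual, as in Signal Code Modulation (SCM).
import numpy as np

def scm_split(x, b=3, lam=1.0):
    """Quantize each sample to the nearest of M = 2**b uniform levels in
    [-lam, +lam]; return (level indices, quantized values, analog residuals)."""
    M = 2 ** b
    levels = np.linspace(-lam, lam, M)                           # q1 ... qM
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    q = levels[idx]                                              # digital component qi(n)
    residual = x - q                                             # analog component xa(n)
    return idx, q, residual

def scm_reconstruct(q, residual):
    """Exact reconstruction x(n) = qi(n) + xa(n)."""
    return q + residual

rng = np.random.default_rng(0)
x = np.clip(rng.normal(0, 0.3, size=8), -1, 1)   # a toy block of samples
idx, q, r = scm_split(x, b=3)
assert np.allclose(scm_reconstruct(q, r), x)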

Multisensor Fusion and Integration (electronics seminar topic)

Introduction
A sensor is a device that detects or senses the value, or changes in value, of the variable being measured. The term sensor is sometimes used instead of the terms detector, primary element, or transducer.

The fusion of information from sensors with different physical characteristics, such as light, sound, etc., enhances our understanding of the surroundings and provides the basis for planning, decision making, and control of autonomous and intelligent machines.

Sensors Evolution

A sensor is a device that responds to some external stimuli and then provides some useful output. With the concept of input and output, one can begin to understand how sensors play a critical role in both closed and open loops.

One problem is that sensors are not very specific: they tend to respond to a variety of stimuli without being able to differentiate one from another. Nevertheless, sensors and sensor technology are necessary ingredients in any control application. Without the feedback from the environment that sensors provide, the system has no data or reference points, and thus no way of understanding what is right or wrong with its various elements.

Sensors are especially important in automated manufacturing, particularly in robotics. Automated manufacturing is essentially the process of removing the human element as much as possible from the manufacturing process. Sensors in the condition-measurement category sense various types of inputs, conditions, or properties to help monitor and predict the performance of a machine or system.

Multisensor Fusion And Integration

Multisensor integration is the synergistic use of the information provided by multiple sensory devices to assist in the accomplishment of a task by a system.

Multisensor fusion refers to any stage in the integration process where there is an actual combination of different sources of sensory information into one representational format.

Multisensor Integration

The diagram represents multisensor integration as a composite of basic functions. A group of n sensors provides input to the integration process. In order for the data from each sensor to be used for integration, it must first be effectively modelled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions.
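As a toy illustration of how such a quality measure can feed the integration step (my own sketch, not the article's method), the snippet below fuses two hypothetical range readings by inverse-variance weighting, so the reading the sensor model trusts more contributes more to the combined estimate.

# Illustrative sketch: a minimal sensor model and one common fusion rule,
# inverse-variance weighting of independent readings.
from dataclasses import dataclass

@dataclass
class SensorReading:
    value: float     # measured value
    variance: float  # sensor model's uncertainty estimate

def fuse(readings):
    """Combine independent readings into one estimate; lower-variance
    (higher-quality) sensors get proportionally more weight."""
    weights = [1.0 / r.variance for r in readings]
    fused_value = sum(w * r.value for w, r in zip(weights, readings)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return SensorReading(fused_value, fused_variance)

# e.g. a sonar range and a camera-derived range for the same obstacle
print(fuse([SensorReading(2.10, 0.04), SensorReading(1.95, 0.01)]))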


Monday, January 25, 2010

Mesh Radio

Governments are keen to encourage the roll-out of broadband interactive multimedia services to business and residential customers because they recognise the economic benefits of e-commerce, information and entertainment. Digital cable networks can provide a compelling combination of simultaneous services including broadcast TV, VOD, fast Internet and telephony. Residential customers are likely to be increasingly attracted to these bundles as the cost can be lower than for separate provision. Cable networks have therefore been implemented or upgraded to digital in many urban areas in the developed countries.

ADSL has been developed by telcos to allow on-demand delivery via copper pairs. A bundle comparable to cable can be provided if ADSL is combined with PSTN telephony and satellite or terrestrial broadcast TV services, but incumbent telcos have been slow to roll it out and 'unbundling' has not proved successful so far. Some telcos have been accused of restricting ADSL performance and keeping prices high to protect their existing business revenues. Prices have recently fallen, but even now the ADSL (and SDSL) offerings are primarily targeted at the provision of fast (but contended) Internet services for SME and SOHO customers. This slow progress (which is partly due to the unfavourable economic climate) has also allowed cable companies to move slowly.


A significant proportion of customers in suburban and semi-rural areas will only be able to have ADSL at lower rates because of the attenuation caused by the longer copper drops. One solution is to take fibre out to street cabinets equipped for VDSL but this is expensive, even where ducts are already available.

Network operators and service providers are increasingly beset by a wave of technologies that could potentially close the gap between their fibre trunk networks and a client base that is all too anxious for the industry to accelerate the rollout of broadband. While the established vendors of copper-based DSL and fibre-based cable are finding new business, many start-up operators, discouraged by the high cost of entry into wired markets, have been looking to evolving wireless radio and laser options.

One relatively late entrant into this competitive mire is mesh radio, a technology that has quietly emerged to become a potential holder of the title 'next big thing'. Mesh Radio is a new approach to Broadband Fixed Wireless Access (BFWA) that avoids the limitations of point to multi-point delivery. It could provide a cheaper '3rd Way' to implement residential broadband that is also independent of any existing network operator or service provider. Instead of connecting each subscriber individually to a central provider, each is linked to several other subscribers nearby by low-power radio transmitters; these in turn are connected to others, forming a network, or mesh, of radio interconnections that at some point links back to the central transmitter.
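To make the relaying idea concrete, here is a minimal sketch (hypothetical topology, not from the article) in which each subscriber knows only its nearby neighbours and traffic is routed hop by hop until it reaches a gateway node that links back to the central transmitter.

# Illustrative sketch: in a mesh, each subscriber node relays traffic over a
# few short radio hops until it reaches a gateway wired back to the core.
from collections import deque

def route_to_gateway(links, start, gateways):
    """Breadth-first search for the fewest-hop path from a subscriber to any gateway."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] in gateways:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no radio path back to the core network

# Each key lists the nearby subscribers a node can reach with its low-power radio.
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "G"], "G": ["D"]}
print(route_to_gateway(links, "A", gateways={"G"}))  # ['A', 'B', 'D', 'G']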


Opaque Networks Utilizing TOS

There are potential cost, footprint and power savings in eliminating unnecessary opto-electronic conversions on a signal path in a core optical mesh network. Current networks have seen the deployment of wavelength division multiplexing (WDM) technology, followed more recently by the deployment of an optical transport layer where optical crossconnects (OXCs) are connected using WDM links. Both currently deployed WDM systems and OXCs use electronics in the signal path, thereby creating an opaque network. It is very compelling to imagine an optical transport layer where signals remain in the optical domain from the time they enter the network until they leave it, thereby creating a transparent network.

To carry out the assessment of opaque and transparent networks, we make the following basic assumptions on the requirements for core mesh networks:
" Network operators require a lowest cost network, not just lowest cost network elements. For example, even though optical may be cheaper than electrical network elements, a network without wavelength conversions and tunable wavelength access in the optical domain could lead to higher network cost due to inefficient capacity usage than a network with wavelength conversions in the electrical domain.
" A network operator must not be constrained to buy the entire network from a single vendor.
" In order to build a dynamic, scalable and manageable backbone network it is essential that manual configuration be eliminated as much as possible.
" An optical switching system must be easily scalable with low cost and and a small footprint as the network grows to many hundreds of wavelength channels per fiber and to a speed 40 Gb/s

NETWORK ARCHITECTURES

Increased traffic volume due to the introduction of new broadband services is driving carriers to deploy an optical transport layer based on WDM. The network infrastructure of existing core networks is currently undergoing a transformation from rings using synchronous optical network (SONET) add/drop multiplexers (ADMs) to mesh topologies using OXCs. Even though the applications driving large-scale deployment of transparent optical switches are not currently in place, and the traffic demand does not currently justify the use of transparent switches that are cost effective at very high bit rates, it is possible that at some point in the future transparent switches may be deployed in the network.

Transparent network architecture

The transparent network is shown in the figure. Since a signal from a client network element (NE), such as a router, connected via a specific wavelength must remain on the same wavelength when there is no wavelength conversion, only a small switch fabric is needed to interconnect the WDMs and NEs in a node. This architecture also implies end-to-end bit-rate and data-format transparency. Another architecture for a transparent switch in a transparent network may use a single large fabric instead of multiple switch matrices with small port counts. To provide flexibility, such a design would require tunable lasers at the clients and wavelength conversion.
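A small sketch may help show why transparency complicates routing. The code below (my own, with a made-up topology) enforces the wavelength-continuity constraint: without converters, a lightpath must find a single wavelength that is free on every hop.

# Illustrative sketch of the wavelength-continuity constraint in a transparent
# network: each wavelength plane is searched separately for an end-to-end path.
from collections import deque

def find_lightpath(free, src, dst, wavelengths):
    """free[(u, v)] is the set of wavelengths still unused on link u-v.
    Returns (wavelength, path) for the first wavelength with an end-to-end path."""
    for wl in wavelengths:
        # Build the subgraph of links where this wavelength is available.
        adj = {}
        for (u, v), wls in free.items():
            if wl in wls:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        # Plain BFS within that single wavelength plane.
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return wl, path
            for nxt in adj.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None  # blocked: would need conversion (an opaque hop) to proceed

free = {("A", "B"): {1, 2}, ("B", "C"): {2}, ("A", "C"): {1}}
print(find_lightpath(free, "A", "C", wavelengths=[1, 2]))  # (1, ['A', 'C'])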

DV Libraries and the Internet

The recent academic and commercial efforts in digital libraries have demonstrated the potential for wide-scale online search and retrieval of catalogued electronic content. By improving access to scientific, educational and historical documents and information, digital libraries create powerful opportunities for revamping education, accelerating scientific discovery and technical advancement, and improving knowledge. Furthermore, digital libraries go well beyond traditional libraries in storing and indexing diverse and complex types of material such as images, video, graphics, audio, and multimedia. Concurrent with the advancements in digital libraries, the Internet has become a pervasive medium for information access and communication. With the broad penetration of the Internet, network-based digital libraries can interoperate with other diverse networked information systems and provide around-the-clock, real-time access to widely distributed information catalogs.

Ideally, the integration of digital libraries and the Internet completes a powerful picture for accessing electronic content. However, in reality, the current technologies underlying digital libraries and the Internet need considerable advancement before digital libraries supplant traditional libraries. While many of the benefits of digital libraries result from their support for complex content, such as video, many challenges remain in enabling efficient search and transport. Many of the fundamental problems with digital video libraries will gain new focus in the Next Generation Internet (NGI) initiative.


DIGITAL VIDEO LIBRARIES

Digital video libraries deal with cataloging, searching, and retrieving digital video. Since libraries are designed to serve large numbers of users, digital video libraries have the greatest utility when deployed online. In order to serve users effectively, digital video libraries need to handle both the search and the transport of video efficiently.

The model for user interaction with a digital video library is illustrated in the figure. Video is initially added to the library in an accessioning process that catalogs, indexes, and stores the video data. The user then searches the digital video library by querying the catalog and index data. The results are returned to, and browsed by, the user. The user then has options for refining the search, such as by relevance feedback, and for selecting items for delivery.
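As a minimal illustration of that query-and-browse loop (my own sketch, with a made-up catalog, not the article's system), the snippet below ranks catalogued videos by how many query terms their index entries match; the user would then browse the hits and refine the query.

# Illustrative sketch: querying the catalog and index data of a video library.
from dataclasses import dataclass, field

@dataclass
class VideoRecord:
    title: str
    keywords: set = field(default_factory=set)   # index terms from accessioning

def search(catalog, query_terms):
    """Rank catalogued videos by how many query terms their index entries match."""
    scored = [(len(rec.keywords & query_terms), rec) for rec in catalog]
    return [rec for score, rec in sorted(scored, key=lambda s: -s[0]) if score > 0]

catalog = [
    VideoRecord("Apollo 11 footage", {"space", "history", "nasa"}),
    VideoRecord("Coral reef survey", {"ocean", "biology"}),
]
for hit in search(catalog, {"space", "nasa"}):
    print(hit.title)   # the user browses these results, then refines the query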

The two prevalent modes for delivering video to the user are video retrieval and video streaming. In video streaming, the video is played back over the network to the user; in many cases the user can fast forward, reverse, pause, and so forth. In video retrieval, the video is downloaded over the network to the user's local terminal; in this case the video may be viewed later or used in other applications. Other forms of video information systems, such as video-on-demand (VOD), video conferencing, and video database (VDB) systems, share characteristics with digital video libraries. These systems generally differ in their support for video storage, searching, cataloging, browsing, and retrieval. Video conferencing systems typically deal with the live, real-time communication of video over networks. VOD systems deliver high-bandwidth video to groups of users. VDBs deal with storing and searching the structured metadata related to video, but are not oriented towards video streaming or concurrent playback to large numbers of users.

Sunday, January 24, 2010

Wireless LAN Security (seminar topic for students)

Wireless local area networks (WLANs) based on the Wi-Fi (wireless fidelity) standards are one of today's fastest growing technologies in businesses, schools, and homes, for good reasons. They provide mobile access to the Internet and to enterprise networks so users can remain connected away from their desks. These networks can be up and running quickly when there is no available wired Ethernet infrastructure. They can be made to work with a minimum of effort without relying on specialized corporate installers.

Some of the business advantages of WLANs include:
" Mobile workers can be continuously connected to their crucial applications and data;
" New applications based on continuous mobile connectivity can be deployed;
" Intermittently mobile workers can be more productive if they have continuous access to email, instant messaging, and other applications;
" Impromptu interconnections among arbitrary numbers of participants become possible.
" But having provided these attractive benefits, most existing WLANs have not effectively addressed security-related issues.

THREATS TO WLAN ENVIRONMENTS

All wireless computer systems face security threats that can compromise their systems and services. Unlike in a wired network, the intruder does not need physical access in order to pose the following security threats:

Eavesdropping

This involves attacks against the confidentiality of the data being transmitted across the network. In a wireless network, eavesdropping is the most significant threat because the attacker can intercept the transmission over the air from a distance, away from the company's premises.

Tampering

The attacker can modify the content of the intercepted packets from the wireless network and this results in a loss of data integrity.

Unauthorized access and spoofing

The attacker could gain access to privileged data and resources in the network by assuming the identity of a valid user. This kind of attack is known as spoofing. To counter it, proper authentication and access control mechanisms need to be put in place in the wireless network.
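One standard countermeasure to the tampering and spoofing threats above is a keyed message authentication code shared by valid stations. The sketch below is illustrative only (it is not how any particular WLAN standard implements protection): a frame altered or forged without the shared key fails verification at the receiver.

# Illustrative sketch: detecting tampering and forgery with an HMAC tag.
import hmac, hashlib, os

shared_key = os.urandom(32)          # distributed out of band to valid stations

def protect(frame: bytes) -> bytes:
    tag = hmac.new(shared_key, frame, hashlib.sha256).digest()
    return frame + tag               # transmit the frame with its 32-byte tag

def verify(message: bytes):
    frame, tag = message[:-32], message[-32:]
    expected = hmac.new(shared_key, frame, hashlib.sha256).digest()
    return frame if hmac.compare_digest(tag, expected) else None  # None => reject

sent = protect(b"sensor reading: door open")
assert verify(sent) == b"sensor reading: door open"
tampered = sent[:-40] + b"X" + sent[-39:]   # attacker flips a byte in transit
assert verify(tampered) is None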

Low Power UART Design for Serial Data Communication

Definition

With the proliferation of portable electronic devices, power-efficient data transmission has become increasingly important. For serial data transfer, universal asynchronous receiver/transmitter (UART) circuits are often implemented because of their inherent design simplicity and application-specific versatility. Components such as laptop keyboards, Palm Pilot organizers and modems are a few examples of devices that employ UART circuits. In this work, the design and analysis of a robust UART architecture has been carried out to minimize power consumption during both idle and continuous modes of operation.

UART

A UART (universal asynchronous receiver/transmitter) is responsible for performing the main task in serial communication with computers. The device changes incoming parallel information to serial data which can be sent on a communication line. A second UART can be used to receive the information. The UART performs all the tasks (timing, parity checking, etc.) needed for the communication. The only extra devices attached are line-driver chips capable of transforming the TTL-level signals to line voltages and vice versa.

To use the device in different environments, registers are accessible to set or review the communication parameters. Settable parameters include, for example, the communication speed, the type of parity check, and the way incoming information is signaled to the running software.
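The serializing step itself is simple to sketch. The snippet below is illustrative only (not taken from any particular UART datasheet): it frames one byte as a start bit, eight data bits sent LSB-first, an optional parity bit, and a stop bit, with the parity mode as a settable parameter.

# Illustrative sketch: how a UART serializes one parallel byte onto the line.
def uart_frame(byte: int, parity: str = "even"):
    """Return the line levels for one character frame (the idle line is 1)."""
    data = [(byte >> i) & 1 for i in range(8)]     # 8 data bits, LSB first
    frame = [0] + data                             # start bit is a logic 0
    if parity in ("even", "odd"):
        bit = sum(data) % 2                        # even parity bit
        frame.append(bit if parity == "even" else 1 - bit)
    return frame + [1]                             # stop bit is a logic 1

print(uart_frame(0x55, parity="even"))
# [0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1]  (start, 8 data bits, parity, stop)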

UART types

Serial communication on PC compatibles started with the 8250 UART in the XT. In the years that followed, new family members were introduced, such as the 8250A and 8250B revisions and the 16450. The 16450 was first implemented in the AT, since the higher bus speed of that computer could not be handled by the 8250 series. The differences between these first UART series were rather minor; the most important property that changed with each new release was the maximum allowed speed at the processor bus side.

The 16450 was capable of handling a communication speed of 38.4 kbit/s without problems. The demand for higher speeds led to the development of newer series which could relieve the main processor of some of its tasks. The main problem with the original series was the need to perform a software action for each single byte transmitted or received. To overcome this problem, the 16550 was released, which contained two on-board FIFO buffers, each capable of storing 16 bytes: one buffer for incoming bytes and one for outgoing bytes.
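The benefit of those FIFOs is easy to picture: the processor is interrupted once per batch of bytes rather than once per byte. Below is a rough model of my own (not the 16550's actual register behaviour) of a 16-byte receive buffer that counts overruns when the CPU is too slow to drain it.

# Illustrative sketch of a bounded receive FIFO like the 16550's 16-byte buffer.
from collections import deque

class RxFifo:
    def __init__(self, depth: int = 16):
        self.buf = deque(maxlen=depth)
        self.overruns = 0

    def on_byte_received(self, byte: int) -> None:
        if len(self.buf) == self.buf.maxlen:
            self.overruns += 1        # CPU too slow: data lost, like a real overrun
        else:
            self.buf.append(byte)

    def drain(self):
        """What the interrupt handler reads out in one service pass."""
        out = list(self.buf)
        self.buf.clear()
        return out

fifo = RxFifo()
for b in range(20):                   # 20 bytes arrive before the CPU services the UART
    fifo.on_byte_received(b)
print(len(fifo.drain()), fifo.overruns)   # 16 4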