EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Friday, January 29, 2010

FireWire (seminar topic for computer science)

Definition
FireWire, originally developed by Apple Computer, Inc., is a cross-platform implementation of a high-speed serial data bus - defined by the IEEE 1394-1995 and IEEE 1394a-2000 (FireWire 400) and IEEE 1394b (FireWire 800) standards - that moves large amounts of data between computers and peripheral devices. It features simplified cabling, hot swapping and transfer speeds of up to 800 megabits per second. FireWire is a high-speed serial input/output (I/O) technology for connecting peripheral devices to a computer or to each other. It is one of the fastest peripheral standards ever developed and now, at 800 megabits per second (Mbps), it's even faster.

Based on Apple-developed technology, FireWire was adopted in 1995 as an official industry standard (IEEE 1394) for cross-platform peripheral connectivity. By providing a high-bandwidth, easy-to-use I/O technology, FireWire inspired a new generation of consumer electronics devices from many companies, including Canon, Epson, HP, Iomega, JVC, LaCie, Maxtor, Mitsubishi, Matsushita (Panasonic), Pioneer, Samsung and Sony. FireWire has also been a boon to professional users because of the high-speed connectivity it has brought to audio and video production systems.

In 2001, the Academy of Television Arts & Sciences presented Apple with an Emmy award in recognition of the contributions made by FireWire to the television industry. Now FireWire 800, the next generation of FireWire technology, promises to spur the development of more innovative high-performance devices and applications. This technology brief describes the advantages of FireWire 800 and some of the applications for which it is ideally suited.

TOPOLOGY
The 1394 protocol is a peer-to-peer network with a point-to-point signaling environment. Nodes on the bus may have several ports on them. Each of these ports acts as a repeater, retransmitting any packets received by the other ports within the node. Figure 1 shows what a typical consumer may have attached to their 1394 bus. Because 1394 is a peer-to-peer protocol, no specific host is required, such as the PC in USB. In Figure 1, the digital camera could easily stream data to both the digital VCR and the DVD-RAM without any assistance from other devices on the bus.
FireWire uses 64-bit fixed addressing, based on the IEEE 1212 standard. There are three parts to each packet of information sent by a device over FireWire:

" A 10-bit bus ID that is used to determine which FireWire bus the data came from
" A 6-bit physical ID that identifies which device on the bus sent the data
" A 48-bit storage area that is capable of addressing 256 terabytes of information for each node!

The bus ID and physical ID together comprise the 16-bit node ID, which allows for roughly 64,000 nodes on a system. Individual FireWire cables can run as long as 4.5 meters. Data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Consider the example below: a camcorder is connected to an external hard drive, which is connected to Computer A. Computer A is connected to Computer B, which in turn is connected to Computer C. It takes four hops for Computer C to access the camera.
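
As a rough sketch of how this 64-bit layout fits together (the field handling and helper functions below are illustrative, not taken from the IEEE 1394 or IEEE 1212 documents), the address can be treated as three bit fields:

# Illustrative sketch of the 64-bit FireWire address layout described above:
# a 10-bit bus ID, a 6-bit physical ID and a 48-bit offset. The helper names
# are hypothetical, for explanation only.

def pack_address(bus_id: int, phys_id: int, offset: int) -> int:
    """Combine the three fields into one 64-bit address."""
    assert 0 <= bus_id < (1 << 10)   # 10-bit bus ID
    assert 0 <= phys_id < (1 << 6)   # 6-bit physical ID
    assert 0 <= offset < (1 << 48)   # 48-bit offset: 256 terabytes per node
    return (bus_id << 54) | (phys_id << 48) | offset

def unpack_address(address: int):
    """Split a 64-bit address back into (bus_id, phys_id, offset)."""
    bus_id = (address >> 54) & 0x3FF
    phys_id = (address >> 48) & 0x3F
    offset = address & ((1 << 48) - 1)
    return bus_id, phys_id, offset

address = pack_address(bus_id=1023, phys_id=5, offset=0x1000)
print(hex(address), unpack_address(address))
node_id = address >> 48        # the 16-bit node ID is just the top two fields
print(f"node ID = {node_id:#06x}")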
The 1394 protocol supports both asynchronous and isochronous data transfers.

Isochronous transfers: Isochronous transfers are always broadcast in a one-to-one or one-to-many fashion. No error correction or retransmission is available for isochronous transfers. Up to 80% of the available bus bandwidth can be used for isochronous transfers.
Asynchronous transfers: Asynchronous transfers are targeted to a specific node with an explicit address. They are not guaranteed a specific amount of bandwidth on the bus, but they are guaranteed a fair shot at gaining access to the bus when asynchronous transfers are permitted. This allows error-checking and retransmission mechanisms to take place.
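
A toy model of how these two transfer types might share the bus is sketched below. It assumes the familiar 125-microsecond bus cycle together with the 80% isochronous cap mentioned above; the class and method names are invented for illustration:

# Toy model of one 1394 bus cycle: isochronous channels reserve bandwidth up
# to the 80% cap, and whatever is left over is arbitrated for asynchronous
# packets. The 125-microsecond cycle and the 80% figure follow the text above;
# everything else is simplified.

CYCLE_US = 125.0
ISO_CAP_US = 0.80 * CYCLE_US      # at most 80% of each cycle for isochronous data

class Bus:
    def __init__(self):
        self.iso_reserved_us = 0.0

    def reserve_isochronous(self, duration_us: float) -> bool:
        """Grant a per-cycle isochronous slot only if it fits under the cap."""
        if self.iso_reserved_us + duration_us <= ISO_CAP_US:
            self.iso_reserved_us += duration_us
            return True
        return False               # no guarantee and no retransmission

    def asynchronous_time_us(self) -> float:
        """The remainder of the cycle is shared fairly by asynchronous traffic."""
        return CYCLE_US - self.iso_reserved_us

bus = Bus()
print(bus.reserve_isochronous(60.0))   # True: e.g. a video stream
print(bus.reserve_isochronous(50.0))   # False: would exceed the 80% cap
print(bus.asynchronous_time_us())      # 65.0 microseconds left for async packets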

Biological Computers

INTRODUCTION

Biological computers have emerged as an interdisciplinary field that draws together molecular biology, chemistry, computer science and mathematics. The highly predictable hybridization chemistry of DNA, the ability to completely control the length and content of oligonucleotides, and the wealth of enzymes available for modification of DNA make nucleic acids an attractive candidate for such nanoscale applications.

A 'DNA computer' has been used for the first time to find the only correct answer from over a million possible solutions to a computational problem. Leonard Adleman of the University of Southern California in the US and colleagues used different strands of DNA to represent the 20 variables in their problem, which could be the most complex task ever solved without a conventional computer. The researchers believe that the complexity of the structure of biological molecules could allow DNA computers to outperform their electronic counterparts in future.

Scientists have previously used DNA computers to crack computational problems with up to nine variables, which involves selecting the correct answer from 512 possible solutions. But now Adleman's team has shown that a similar technique can solve a problem with 20 variables, which has 2^20 - or 1,048,576 - possible solutions.

Adleman and colleagues chose an 'exponential time' problem, in which each extra variable doubles the amount of computation needed. This is known as an NP-complete problem, and is notoriously difficult to solve for a large number of variables. Other NP-complete problems include the 'travelling salesman' problem - in which a salesman has to find the shortest route between a number of cities - and the calculation of interactions between many atoms or molecules.
Adleman and co-workers expressed their problem as a string of 24 'clauses', each of which specified a certain combination of 'true' and 'false' for three of the 20 variables. The team then assigned two short strands of specially encoded DNA to each of the 20 variables, one representing 'true' and the other 'false'.

In the experiment, each of the 24 clauses was represented by a gel-filled glass cell. The strands of DNA corresponding to the variables in each clause, and their 'true' or 'false' states, were then placed in the cells.

Each of the 1,048,576 possible solutions was then represented by a much longer strand of specially encoded DNA, which Adleman's team added to the first cell. If a long strand had a 'subsequence' that complemented all three short strands, it bound to them; otherwise it passed through the cell.

To move on to the second clause of the formula, a fresh set of long strands was sent into the second cell, which trapped any long strand with a 'subsequence' complementary to all three of its short strands. This process was repeated until a complete set of long strands had been added to all 24 cells, corresponding to the 24 clauses. The long strands captured in the cells were collected at the end of the experiment, and these represented the solution to the problem.
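
For comparison, the short sketch below performs the same clause-by-clause sieve on a conventional computer. Each clause is treated as a standard 3-SAT clause that a surviving assignment must satisfy, and the 24 clauses are generated at random, so this is only an analogue of the filtering idea, not Adleman's actual instance:

# Conventional-computer analogue of the gel-cell sieve: keep only those
# candidate assignments that satisfy each 3-literal clause in turn, just as
# each cell retained only the strands matching its clause. The formula is
# randomly generated and is NOT Adleman's 24-clause instance. Brute force over
# 2**20 candidates; expect it to run for a few seconds.
import random

NUM_VARS, NUM_CLAUSES = 20, 24
random.seed(1)

def random_clause():
    """Three distinct variables, each required to be True or False."""
    variables = random.sample(range(NUM_VARS), 3)
    return [(v, random.choice([True, False])) for v in variables]

formula = [random_clause() for _ in range(NUM_CLAUSES)]

def satisfies(assignment: int, clause) -> bool:
    """A 3-SAT clause is satisfied if at least one of its literals matches."""
    return any(bool((assignment >> v) & 1) == want for v, want in clause)

# All 2**20 = 1,048,576 assignments play the role of the long DNA strands.
surviving = list(range(1 << NUM_VARS))
for clause in formula:                      # one "gel-filled cell" per clause
    surviving = [a for a in surviving if satisfies(a, clause)]

print(f"{len(surviving)} of {1 << NUM_VARS} assignments survive all {NUM_CLAUSES} clauses")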

THE WORLD'S SMALLEST COMPUTER

The world's smallest computer (around a trillion can fit in a drop of water) might one day go on record again as the tiniest medical kit. Made entirely of biological molecules, this computer was successfully programmed to identify - in a test tube - changes in the balance of molecules in the body that indicate the presence of certain cancers, to diagnose the type of cancer, and to react by producing a drug molecule to fight the cancer cells.

DOCTOR IN A CELL

In previously produced biological computers, the input, output and "software" were all composed of DNA, the material of genes, while DNA-manipulating enzymes were used as "hardware." The newest version's input apparatus is designed to assess concentrations of specific RNA molecules, which may be overproduced or underproduced, depending on the type of cancer. Using pre-programmed medical knowledge, the computer then makes its diagnosis based on the detected RNA levels. In response to a cancer diagnosis, the output unit of the computer can initiate the controlled release of a single-stranded DNA molecule that is known to interfere with the cancer cell's activities, causing it to self-destruct.
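
A very loose software caricature of that sense-diagnose-respond loop is given below. The marker names, thresholds and diagnostic rule are invented purely for illustration and do not come from the original work, which operates on molecular concentrations rather than code:

# Loose conceptual sketch of the sense -> diagnose -> respond pipeline described
# above. All marker names, reference levels and thresholds are invented.

REFERENCE_LEVELS = {"marker_A": 1.0, "marker_B": 1.0, "marker_C": 1.0}

# Hypothetical cancer "signature": marker_A overproduced, marker_B underproduced.
SIGNATURE = {"marker_A": "over", "marker_B": "under"}

def classify(level: float, reference: float) -> str:
    if level > 2.0 * reference:
        return "over"
    if level < 0.5 * reference:
        return "under"
    return "normal"

def diagnose(measured: dict) -> bool:
    """True only if every marker in the signature shows the expected deviation."""
    return all(classify(measured[m], REFERENCE_LEVELS[m]) == expected
               for m, expected in SIGNATURE.items())

def respond(measured: dict) -> str:
    # The real output is the controlled release of an inhibitory ssDNA molecule.
    return "release inhibitory ssDNA" if diagnose(measured) else "do nothing"

print(respond({"marker_A": 3.1, "marker_B": 0.2, "marker_C": 1.1}))  # release
print(respond({"marker_A": 1.1, "marker_B": 0.9, "marker_C": 1.0}))  # do nothing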

Semantic Web (seminar topics for computer science)

INTRODUCTION

In the beginning, there was no Web. The Web began as a concept of Tim Berners-Lee, who worked for CERN, the European organization for physics research. CERN's technical staff urgently needed to share documents located on their many computers. Berners-Lee had previously built several systems to do that, and with this background he conceived the World Wide Web.

The design had a relatively simple technical basis, which helped the technology take hold and gain critical mass. Berners-Lee wanted anyone to be able to put information on a computer and make that information accessible to anyone else, anywhere. He hoped that eventually, machines would also be able to use information on the Web. Ultimately, he thought, this would allow powerful and effective human-computer-human collaboration.

What is the Semantic Web?
The word semantic implies meaning. For the Semantic Web, semantic indicates that the meaning of data on the Web can be discovered not just by people, but also by computers. The phrase the Semantic Web stands for a vision in which computer software, as well as people, can find, read, understand, and use data over the World Wide Web to accomplish useful goals for users.

Of course, we already use software to accomplish things on the Web, but the distinction lies in the words we use. People surf the Web, buy things on web sites, work their way through search pages, read the labels on hyperlinks, and decide which links to follow. It would be much more efficient and less time-consuming if a person could launch a process that would then proceed on its own, perhaps checking back with the person from time to time as the work progressed. The business of the Semantic Web is to bring such capabilities into widespread use.

MAJOR VISIONS OF THE SEMANTIC WEB
" Indexing and retrieving information
" Meta data
" Annotation
" The Web as a large, interoperable database
" Machine retrieval of data
" Web-based services
" Discovery of services
" Intelligent software agents

THE SEMANTIC WEB FOUNDATION
The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed.

The World Wide Web has certain design features that make it different from earlier hyperlink experiments. These features will play an important role in the design of the Semantic Web. The Web is not the whole Internet, and it would be possible to develop many capabilities of the Semantic Web using other means besides the World Wide Web. But because the Web is so widespread, and because its basic operations are relatively simple, most of the technologies being contemplated for the Semantic Web are based on the current Web, sometimes with extensions.

The Web is designed around resources, standardized addressing of those resources (Uniform Resource Locators and Uniform Resource Identifiers), and a small, widely understood set of commands. It is also designed to operate over very large and complex networks in a decentralized way. Let us look at each of these design features.
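
As a minimal, concrete reminder of what those design features look like in practice, the sketch below fetches a resource named by a URI using one of the Web's small set of commands; the URL is only an example:

# The Web's design in miniature: a resource is named by a URI and manipulated
# with a small set of verbs (GET, POST, PUT, DELETE, ...). The URL below is
# just an example resource.
from urllib.request import Request, urlopen

uri = "http://example.org/"             # a resource, named by its URI
request = Request(uri, method="GET")    # GET: retrieve a representation of it

with urlopen(request) as response:
    print(response.status)                        # e.g. 200
    print(response.headers.get("Content-Type"))   # how the representation is encoded
    body = response.read()
    print(len(body), "bytes retrieved")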

BitTorrent (seminar topics for computer science)

INTRODUCTION

BitTorrent is a protocol designed for transferring files. It is peer-to-peer in nature, as users connect to each other directly to send and receive portions of the file. However, there is a central server (called a tracker) which coordinates the action of all such peers. The tracker only manages connections; it does not have any knowledge of the contents of the files being distributed, and therefore a large number of users can be supported with relatively limited tracker bandwidth. The key philosophy of BitTorrent is that users should upload (transmit outbound) at the same time they are downloading (receiving inbound). In this manner, network bandwidth is utilized as efficiently as possible. BitTorrent is designed to work better as the number of people interested in a certain file increases, in contrast to other file transfer protocols.

One analogy to describe this process might be to visualize a group of people sitting at a table. Each person at the table can both talk and listen to any other person at the table. These people are each trying to get a complete copy of a book. Person A announces that he has pages 1-10, 23, 42-50, and 75. Persons C, D, and E are each missing some of those pages that A has, and so they coordinate such that A gives them each copies of the pages he has that they are missing. Person B then announces that she has pages 11-22, 31-37, and 63-70. Persons A, D, and E tell B they would like some of her pages, so she gives them copies of the pages that she has.

The process continues around the table until everyone has announced what they have. The people at the table coordinate to swap parts of this book until everyone has everything. There is also another person at the table, who we will call 'S'. This person has a complete copy of the book, and so does not need anything sent to him. He responds with pages that no one else in the group has. At first, when everyone has just arrived, they all must talk to him to get their first set of pages. However, the people are smart enough to not all get the same pages from him. After a short while, they all have most of the book amongst themselves, even if no one person has the whole thing. In this manner, this one person can share a book that he has with many other people, without having to give a full copy to everyone that is interested. He can instead give out different parts to different people, and they will be able to share it amongst themselves. This person who we have referred to as 'S' is called a seed in the terminology of BitTorrent.
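
The toy simulation below mimics that round-the-table exchange. It is only meant to mirror the analogy; real BitTorrent peers also rely on a tracker, piece-selection heuristics and choking, all of which are ignored here:

# Toy model of the book-swapping analogy: one seed ("S") starts with every
# piece, the other peers start with nothing and grab one missing piece per
# round from a randomly chosen peer that already has it.
import random

NUM_PIECES = 100
random.seed(0)

peers = {"S": set(range(NUM_PIECES)),   # the seed: a complete copy
         "A": set(), "B": set(), "C": set(), "D": set(), "E": set()}

rounds = 0
while any(len(have) < NUM_PIECES for have in peers.values()):
    rounds += 1
    for name, have in peers.items():
        if len(have) == NUM_PIECES:
            continue                    # this peer has finished downloading
        neighbour = random.choice([p for p in peers if p != name])
        available = peers[neighbour] - have
        if available:                   # take one piece the neighbour has and we lack
            have.add(random.choice(sorted(available)))

print(f"all peers completed the file after {rounds} rounds")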

WHAT BITTORRENT DOES

When a file is made available using HTTP, all upload cost is placed on the hosting machine. With BitTorrent, when multiple people are downloading the same file at the same time, they upload pieces of the file to each other. This redistributes the cost of upload to downloaders (where it is often not even metered), thus making it affordable to host a file with a potentially unlimited number of downloaders. Researchers have attempted to find practical techniques to do this before. It has not previously been deployed on a large scale because the logistical and robustness problems are quite difficult. Simply figuring out which peers have what parts of the file and where they should be sent is difficult to do without incurring a huge overhead. In addition, real deployments experience very high churn rates: peers rarely stay connected for more than a few hours, and frequently for only a few minutes. Finally, there is a general problem of fairness. The total download rate across all downloaders must, of mathematical necessity, be equal to the total upload rate. The strategy for allocating upload that seems most likely to make peers happy with their download rates is to make each peer's download rate proportional to its upload rate. In practice it's very difficult to keep peer download rates from sometimes dropping to zero by chance, much less keep upload and download rates correlated.
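
A simplified sketch of that "reward your uploaders" idea is shown below. It is a generic proportional-share rule, not BitTorrent's actual choking algorithm:

# Fairness in miniature: give each peer a share of our upload capacity that is
# proportional to how fast that peer has been uploading to us. This is a
# generic proportional-share rule, not BitTorrent's real choking logic.

def allocate_upload(observed_rates_kbps: dict, capacity_kbps: float) -> dict:
    """Split our outbound capacity among peers in proportion to what they send us."""
    total = sum(observed_rates_kbps.values())
    if total == 0:
        # Nobody has uploaded to us yet: share equally so newcomers can start.
        share = capacity_kbps / len(observed_rates_kbps)
        return {peer: share for peer in observed_rates_kbps}
    return {peer: capacity_kbps * rate / total
            for peer, rate in observed_rates_kbps.items()}

observed = {"peer1": 80.0, "peer2": 40.0, "peer3": 0.0}   # their upload rates to us
print(allocate_upload(observed, capacity_kbps=600.0))
# {'peer1': 400.0, 'peer2': 200.0, 'peer3': 0.0}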

BitTorrent Interface

BitTorrent's interface is almost the simplest possible. Users launch it by clicking on a hyperlink to the file they wish to download, and are given a standard "Save As" dialog, followed by a download progress dialog that is mostly notable for showing an upload rate in addition to a download rate. This extreme ease of use has contributed greatly to BitTorrent's adoption, and may even be more important than, although it certainly complements, the performance and cost-redistribution features.

Thursday, January 28, 2010

Apple introduces new $499 iPad tablet computer

Apple CEO Steve Jobs unveiled the company's much-anticipated iPad tablet computer Wednesday, calling it a new third category of mobile device that is neither smart phone nor laptop, but something in between.

The iPad will start at $499, a price tag far below the $1,000 that some analysts were expecting. But Apple must still persuade recession-weary consumers who already have other devices to open their wallets yet again. Apple plans to begin selling the iPad in two months.

Jobs said the device would be useful for reading books, playing games or watching video, describing it as "so much more intimate than a laptop and so much more capable than a smart phone."

The half-inch-thick iPad is larger than the company's popular iPhone but similar in design. It weighs 1.5 pounds and has a touch screen that is 9.7 inches diagonally. It comes with 16, 32 or 64 gigabytes of flash memory storage, and has Wi-Fi and Bluetooth connectivity built in.

Jobs said the device has a battery that lasts 10 hours and can sit for a month on standby without needing a charge.

Raven Zachary, a contributing analyst with a mobile research agency called The 451 Group, considered the iPad a laptop replacement, especially because Apple is also selling a dock with a built-in keyboard.

Wednesday, January 27, 2010

Immersion Lithography (electronic seminar topics)

OPTICAL LITHOGRAPHY

The dramatic increases in performance and reductions in cost in the electronics industry are attributable to innovations in the integrated circuit and packaging fabrication processes. ICs are made using optical lithography. The speed and performance of the chips, their associated packages and, hence, the computer systems are dictated by the lithographic minimum printable size. Lithography, which replicates a pattern rapidly from chip to chip, wafer to wafer, or substrate to substrate, also determines the throughput and the cost of electronic systems. From the late 1960s, when integrated circuits had linewidths of 5 µm, to 1997, when minimum linewidths had reached 0.35 µm in 64Mb DRAM circuits, optical lithography was used ubiquitously for manufacturing. This dominance of optical lithography in production is the result of a worldwide effort to improve optical exposure tools and resists.

A lithographic system includes exposure tool, mask, resist, and all of the processing steps to accomplish pattern transfer from a mask to a resist and then to devices. Light from a source is collected by a set of mirrors and light pipes, called an illuminator, which also shapes the light. Shaping of light gives it a desired spatial coherence and intensity over a set range of angles of incidence as it falls on a mask. The mask is a quartz plate onto which a pattern of chrome has been deposited.

It contains the pattern to be created on the wafer. The light that passes through the mask is reduced by a factor of four by a focusing lens and projected onto the wafer, which has been prepared by coating a silicon substrate with a layer of silicon nitride, followed by a layer of silicon dioxide and finally a layer of photoresist. The photoresist that is exposed to the light becomes soluble and is rinsed away, leaving a miniature image of the mask pattern at each chip location.

Regions unprotected by photoresist are etched by gases, removing the silicon dioxide and the silicon nitride and exposing the silicon. Impurities are added to the etched areas, changing the electrical properties of the silicon as needed to form the transistors.

As early as the 1980s, experts were already predicting the demise of optical lithography as the wavelength of the light used to project the circuit image onto the silicon wafer was too large to resolve the ever-shrinking details of each new generation of ICs. Shorter wavelengths are simply absorbed by the quartz lenses that direct the light onto the wafer.

Although lithography system costs (which are typically more than one third the costs of processing a wafer to completion) increase as minimum feature size on a semiconductor chip decreases, optical lithography remains attractive because of its high wafer throughput.

RESOLUTION LIMITS FOR OPTICAL LITHOGRAPHY

The minimum feature that may be printed with an optical lithography system is determined by the Rayleigh equation:

W = k1 · λ / NA

where k1 is the resolution factor, λ is the wavelength of the exposing radiation and NA is the numerical aperture.
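
To get a feel for the numbers, the sketch below plugs illustrative values into the Rayleigh equation: a 193 nm ArF source, k1 = 0.4, and a numerical aperture of 0.93 for a dry lens versus 1.35 for a water-immersion lens. These particular values are examples, not figures quoted above:

# Worked example of the Rayleigh equation W = k1 * wavelength / NA.
# The wavelength, k1 and NA values below are illustrative only.

def min_feature_nm(k1: float, wavelength_nm: float, numerical_aperture: float) -> float:
    """Minimum printable feature size, in nanometres."""
    return k1 * wavelength_nm / numerical_aperture

print(f"dry lens:       {min_feature_nm(0.4, 193.0, 0.93):.1f} nm")   # about 83 nm
print(f"immersion lens: {min_feature_nm(0.4, 193.0, 1.35):.1f} nm")   # about 57 nm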

Non Visible Imaging

INTRODUCTION

Near-infrared light consists of light just beyond visible red light (wavelengths greater than 780 nm). Contrary to popular thought, near-infrared photography does not allow the recording of thermal radiation (heat); far-infrared thermal imaging requires more specialized equipment. Infrared images exhibit a few distinct effects that give them an exotic, antique look. Plant life looks completely white because it reflects almost all infrared light (because of this effect, infrared photography is commonly used in aerial photography to analyze crop yields, pest control and so on). The sky is a stark black because no infrared light is scattered. Human skin looks pale and ghostly. Dark sunglasses all but disappear in infrared because they don't block any infrared light, and it's said that you can capture the near-infrared emissions of a common household iron.

Infrared photography has been around for at least 70 years, but until recently has not been easily accessible to those not versed in traditional photographic processes. Since the charge-coupled devices (CCDs) used in digital cameras and camcorders are sensitive to near-infrared light, they can be used to capture infrared photos. With a filter that blocks out all visible light (also frequently called a "cold mirror" filter), most modern digital cameras and camcorders can capture photographs in infrared. In addition, they have LCD screens, which can be used to preview the resulting image in real-time, a tool unavailable in traditional photography without using filters that allow some visible (red) light through.

Near-infrared (1000 - 3000 nm) spectrometry, which employs an external light source for determination of chemical composition, has previously been utilized for industrial determination of the fat content of commercial meat products, for in vivo determination of body fat, and in our laboratories for determination of lipoprotein composition in carotid artery atherosclerotic plaques. Near-infrared (near-IR) spectrometry has been used industrially for several years to determine the saturation of unsaturated fatty acid esters (1). Near-IR spectrometry uses a tunable light source external to the experimental subject to determine its chemical composition. Utilization of near-IR will allow for the in vivo measurement of the tissue-specific rate of oxygen utilization as an indirect estimate of energy expenditure. However, assessment of regional oxygen consumption by these methods is complex, requiring a high level of surgical skill for implantation of indwelling catheters to isolate the organ under study.