EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by:Sven Goran Eriksson

Friday, January 15, 2010


Hey guys! Worried about your educational future? Don't worry: this blog is dedicated purely to your bright future. EXPERT'S EDGE offers an exclusive learning experience for students up to Class XII and for engineering students (all branches, B.Tech). And that's not the end: we also offer final-year project assistance for engineering students.

This blog is also dedicated to book lovers: here you can read all types of books. Just stay with us and get any book you need. So, guys, enjoy the free EXPERT'S EDGE service. We are always with you.

GPS and Applications

INTRODUCTION

The Global Positioning System, usually called GPS (the US military refers to it as NAVSTAR), is an intermediate circular orbit (ICO) satellite navigation system used for determining one's precise location and providing a highly accurate time reference almost anywhere on Earth or in Earth orbit.

The first of the 24 satellites that form the current GPS constellation (Block II) was placed into orbit on February 14, 1989. The 50th GPS satellite since the program began in 1978 was launched on March 21, 2004, aboard a Delta II rocket.

GPS HISTORY

The initial concept of GPS began to take form soon after the launch of Sputnik in 1957: "some scientists and engineers realized that radio transmissions from a satellite in a well-defined orbit could indicate the position of a receiver on the ground". This insight led to the U.S. Navy's development and use of the Transit system in the 1960s. That system, however, proved cumbersome to use and limited in positioning accuracy.

Starting in the mid-1970s the U.S. Department of Defense (DOD) began the construction of today's GPS and has funded, operated, and maintained control of the system it developed. Roughly $12 billion would eventually take GPS from concept to completion. Full Operational Capability (FOC) was reached on July 17, 1995 (U.S.C.G., 1996, www). At one point GPS was renamed NAVSTAR, a name that is regularly ignored by system users and others. Although the primary use of GPS was expected to be classified military operations, provisions were made for civilian use of the system. For national security reasons, however, civilian access to accurate positioning was intentionally degraded.

GPS ELEMENTS

GPS was designed as a system of radio navigation that utilizes "ranging" -- the measurement of distances to several satellites -- for determining location on the ground, at sea, or in the air. The system works by using radio frequencies to broadcast satellite positions and time. With an antenna and receiver, a user can access these radio signals and process the information they contain to determine the "range", or distance, to each satellite. Each such distance represents the radius of an imaginary sphere surrounding that satellite. With four or more known satellite positions, the user's processor can determine a single intersection of these spheres and thus the position of the receiver; a small numerical sketch of this idea follows the list below. The system comprises three segments:
1. The space segment
2. The control segment
3. The user segment
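
To make the ranging idea concrete, here is a minimal sketch in Python of recovering a receiver position from ranges to four satellites by iterative least squares. All satellite positions and ranges below are invented for illustration, and a real GPS receiver must additionally solve for its own clock bias, which this sketch omits.

```python
import numpy as np

def locate(sat_positions, ranges, iterations=10):
    """Estimate receiver position from ranges to satellites at known positions."""
    x = np.zeros(3)  # initial guess at the origin
    for _ in range(iterations):
        diffs = x - sat_positions               # vectors from each satellite to the guess
        dists = np.linalg.norm(diffs, axis=1)   # ranges predicted by the current guess
        residuals = ranges - dists              # how far off each predicted range is
        J = diffs / dists[:, None]              # Jacobian of range w.r.t. position
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x += step                               # Gauss-Newton update
    return x

# Invented satellite positions (km) and a receiver location to recover.
sats = np.array([[20200.0, 0.0, 0.0],
                 [0.0, 20200.0, 0.0],
                 [0.0, 0.0, 20200.0],
                 [12000.0, 12000.0, 12000.0]])
truth = np.array([3000.0, 4000.0, 5000.0])
measured = np.linalg.norm(sats - truth, axis=1)  # ideal, noise-free ranges
print(locate(sats, measured))                    # recovers ~[3000. 4000. 5000.]
```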

SPACE SEGMENT

The space segment consists of 24 satellites, each in its own orbit about 11,000 nautical miles above the Earth. The user segment consists of receivers, which users can hold in their hands or mount in a vehicle. The control segment consists of ground stations located around the world that make sure the satellites are working properly.

The GPS space segment uses a total of 24 satellites in a constellation of six orbital planes. This configuration provides at least four equally spaced satellites within each of the six planes. The orbital path repeats relative to the Earth, meaning that a satellite traces the same ground track on each orbit. At 10,900 nautical miles (20,200 km), GPS satellites complete one orbit around the Earth every 12 hours. They orbit at a 55-degree inclination to the equatorial plane. This configuration ensures that a minimum of 5 satellites are in view from any place on Earth, satisfying the four needed for three-dimensional positioning.
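
As a quick plausibility check on the ~12-hour orbit quoted above, Kepler's third law gives the period of a circular orbit directly from its radius. The constants below are standard values; the altitude is taken from the text.

```python
import math

mu = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
earth_radius = 6_371_000.0   # mean Earth radius, m
altitude = 20_200_000.0      # GPS orbital altitude from the text, m

a = earth_radius + altitude                   # orbital radius (circular orbit)
period = 2 * math.pi * math.sqrt(a**3 / mu)   # Kepler's third law, seconds
print(f"{period / 3600:.2f} hours")           # ~11.97 hours, i.e. about 12
```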

Nanotechnology (Mechanical Seminar Topic)

Definition
Nanotechnology is any technology that exploits phenomena and structures occurring only at the nanometer scale -- the scale of several atoms and small molecules. It is the understanding and control of matter at dimensions of roughly 1 to 100 nanometers, where unique phenomena enable novel applications.

Overview
The related term nanoscience describes the interdisciplinary fields of science devoted to the study of the nanoscale phenomena employed in nanotechnology. This is the world of atoms, molecules, macromolecules, quantum dots, and macromolecular assemblies, and it is dominated by surface effects such as van der Waals attraction, hydrogen bonding, electronic charge, ionic bonding, covalent bonding, hydrophobicity, hydrophilicity, and quantum-mechanical tunneling, to the virtual exclusion of macro-scale effects such as turbulence and inertia. For example, the vastly increased ratio of surface area to volume opens new possibilities in surface-based science, such as catalysis.

Nanotechnologies may provide new solutions for the millions of people in developing countries who lack access to basic services such as safe water, reliable energy, health care, and education. The United Nations has set Millennium Development Goals for meeting these needs. The 2004 UN Task Force on Science, Technology and Innovation noted that among the advantages of nanotechnology are production using little labor, land, or maintenance; high productivity; low cost; and modest requirements for materials and energy.
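
The surface-area-to-volume claim above is easy to quantify: for a sphere the ratio is 3/r, so shrinking a particle from centimetre to nanometre scale raises the ratio by six orders of magnitude. A two-line illustration:

```python
# SA/V for a sphere of radius r is (4*pi*r^2) / (4/3*pi*r^3) = 3/r.
for radius_m, label in [(1e-2, "1 cm bead"), (1e-8, "10 nm particle")]:
    print(f"{label}: SA/V = {3.0 / radius_m:.0e} per metre")
```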

Many developing countries, for example Costa Rica, Chile, Bangladesh, Thailand, and Malaysia, are investing considerable resources in research and development of nanotechnologies. Emerging economies such as Brazil, China, India, and South Africa are spending millions of US dollars annually on R&D and are rapidly increasing their scientific capacity, as demonstrated by their growing numbers of papers in peer-reviewed scientific journals.

Introduction
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made today. Scanning probe microscopy is an important technique both for the characterization and for the synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, researchers can carve out structures on surfaces and help guide self-assembling structures. Atoms can be moved around on a surface with scanning probe techniques, but doing so is cumbersome, expensive, and very time-consuming. For these reasons it is not feasible to construct nanoscale devices atom by atom; assembling a billion-transistor microchip at a rate of about one transistor an hour would be hopelessly inefficient, as the sketch below shows. However, these techniques may eventually be used to make primitive nanomachines, which in turn can be used to make more sophisticated nanomachines.
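
The arithmetic behind that claim is worth spelling out: at one transistor per hour, a billion-transistor chip would take on the order of a hundred thousand years to assemble.

```python
transistors = 1_000_000_000   # a billion-transistor chip
rate_per_hour = 1             # one transistor placed per hour
years = transistors / rate_per_hour / (24 * 365)
print(f"about {years:,.0f} years")   # ~114,000 years
```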

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly, and positional assembly. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.

Solar Sails (Mechanical Seminar Topic)

SUMMARY

Nearly 400 years ago, when much of Europe was still engaged in naval exploration of the world, Johannes Kepler proposed the idea of exploring the galaxy using sails. From his observation that comet tails were blown around by some kind of solar breeze, he believed sails could capture that wind to propel spacecraft the way winds moved ships on the oceans. What Kepler actually observed was the pressure of solar photons on dust particles released by the comet as it orbits. While Kepler's notion of a solar breeze has since been disproved, scientists have discovered that sunlight does exert enough force to move objects. Photonic pressure is a very gentle force that is not noticeable on Earth, because the frictional forces in the atmosphere are so much larger. To take advantage of this force, NASA has been experimenting with giant solar sails that could be pushed through the cosmos by light. Solar sails were seriously studied by NASA in the 1960s as a possible means of manned transportation around the solar system. In those optimistic days, serious plans were made for lunar bases by 1975, nuclear launchers and interplanetary engines, and unmanned interstellar probes. None of these ever received serious funding, and they all died on the drawing boards and test beds by the early 1970s.


WHAT IS A SOLAR SAIL?

A solar sail is a very large mirror that reflects sunlight. As photons of sunlight strike the sail and bounce off, they gently push it along by transferring momentum to it. Because there are so many photons, and because they hit the sail constantly, they exert a constant pressure (force per unit area) that produces a constant acceleration of the spacecraft. Although the force on a solar-sail spacecraft is far smaller than that of conventional chemical rockets, such as the space shuttle's engines, the solar-sail spacecraft keeps accelerating over time and eventually reaches a greater velocity. It is like comparing the effects of a gust of wind and a steady, gentle breeze on a dandelion seed floating in the air. Although the gust of wind (the rocket engine) initially pushes the seed with greater force, it dies quickly and the seed coasts only so far. The breeze, in contrast, pushes the seed weakly but over a much longer period of time, and the seed travels farther. Solar sails thus enable spacecraft to move within the solar system, and between stars, without bulky rocket engines and enormous amounts of fuel.
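
A back-of-the-envelope sketch makes this "gentle but constant" point concrete. At Earth's distance the Sun delivers about 1361 W/m², and a perfectly reflecting sail feels a pressure of twice that flux divided by the speed of light. The sail area and spacecraft mass below are invented round numbers, and the force is treated as constant, ignoring its fall-off as the sail moves away from the Sun.

```python
flux = 1361.0            # solar irradiance at 1 AU, W/m^2
c = 299_792_458.0        # speed of light, m/s
area = 800.0             # assumed sail area, m^2 (illustrative)
mass = 100.0             # assumed spacecraft mass, kg (illustrative)

pressure = 2 * flux / c                 # ~9.1e-6 N/m^2 on a perfect mirror
accel = pressure * area / mass          # resulting acceleration, m/s^2
year = 365 * 24 * 3600.0
print(f"accel = {accel:.1e} m/s^2")                         # tiny: ~7e-5 m/s^2
print(f"after one year: {accel * year / 1000:.1f} km/s")    # yet it adds up
```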


COMPONENTS OF A SOLAR SAIL

There are three components to a solar-sail-powered spacecraft:
1. A continuous force exerted by sunlight
2. A large, ultra-thin mirror
3. A separate launch vehicle

Cryocar (Mechanical Seminar Topic)

INTRODUCTION

The importance of cars in the present world is increasing day by day. Various factors influence the choice of a car, including performance, fuel, and pollution. As fuel prices rise and availability falls, we have to look for alternatives.

Here an automotive propulsion concept is presented that utilizes liquid nitrogen as the working fluid for an open Rankine cycle. When the only heat input to the engine is supplied by ambient heat exchangers, an automobile can readily be propelled while satisfying stringent tailpipe emission standards. Nitrogen propulsion systems can provide automotive ranges of nearly 400 kilometers in zero-emission mode, with lower operating costs than those of the electric vehicles currently being considered for mass production. In geographical regions that allow ultra-low-emission vehicles, the range and performance of the liquid nitrogen automobile can be significantly extended by the addition of a small, efficient burner. Among the advantages of a transportation infrastructure based on liquid nitrogen are that recharging the energy storage system takes only minutes and that there are minimal environmental hazards associated with the manufacture and use of the cryogenic "fuel". The basic idea of the nitrogen propulsion system is to use the atmosphere as the heat source, in contrast to a typical heat engine, where the atmosphere serves as the heat sink.

SUMMARY

The LN2000 is an operating proof-of-concept test vehicle, a converted 1984 Grumman-Olson Kubvan mail-delivery van. Applying LN2 as a portable thermal-storage medium to propel both commuter and fleet vehicles appears to be an attractive means of meeting the ZEV regulations soon to be implemented. Pressurizing the working fluid while it is at cryogenic temperatures, heating it with ambient air, and expanding it in reciprocating engines is a straightforward approach to powering pollution-free vehicles. Ambient heat exchangers that do not suffer extreme icing will have to be developed to enable wide use of this propulsion system.

Since the expansion engine operates at sub-ambient temperatures, the potential for attaining quasi-isothermal operation appears promising. The engine, a radial five-cylinder 15-hp air motor, drives the front wheels through a five-speed manual Volkswagen transmission. The liquid nitrogen is stored in a thermos-like stainless-steel tank. At present the tank is pressurized with gaseous nitrogen to develop system pressure, but a cryogenic liquid pump will take over this role in the future. A preheater, called an economizer, uses leftover heat in the engine's exhaust to warm the liquid nitrogen before it enters the main heat exchanger. The specific energy densities of LN2 are 54 and 87 W-h/kg for the adiabatic and isothermal expansion processes, respectively, and the corresponding amounts of cryogen needed for a 300 km driving range would be 450 kg and 280 kg. Many details of applying LN2 thermal storage to ground transportation remain to be investigated; to date, however, no fundamental technological hurdles have been discovered that might stand in the way of fully realizing the potential offered by this revolutionary propulsion concept.
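
A quick consistency check on the figures quoted above: the adiabatic and isothermal cases should imply the same per-kilometre energy demand for the same vehicle, and they do, at roughly 81 W-h/km.

```python
# (energy density, cryogen mass) pairs taken from the text
cases = {"adiabatic": (54.0, 450.0), "isothermal": (87.0, 280.0)}
range_km = 300.0
for name, (wh_per_kg, kg_ln2) in cases.items():
    onboard_wh = wh_per_kg * kg_ln2            # total usable stored energy
    print(f"{name}: {onboard_wh / range_km:.0f} W-h/km")   # ~81 in both cases
```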


Robots in Radioactive Environments (Mechanical Seminar Topic)

Robots are developed for use in areas inaccessible to human beings. A radioactive environment is one in which high-energy radiations such as α, β, and γ rays are emitted by radioactive materials. Under international regulations there are limits on the time and dose for which a professional worker may be exposed to nuclear radiation, so it is very useful to employ robots in such environments.

Properly automated robots can also be used to control nuclear power plants and hence to avert disasters like the one that occurred at Chernobyl. Robots can also be used for the disposal of radioactive waste.

The future is still bright for robots in radioactive environments, as they are also to be used to isolate nuclear power plants from their surroundings in case of a disaster.

SUMMARY
The word robot was introduced in 1921 by the Czech playwright Karel Čapek in his play Rossum's Universal Robots; it is derived from the Czech word "robota", meaning "forced labour". The story concerns a brilliant scientist named Rossum and his son, who develop a chemical substance similar to protoplasm in order to manufacture robots. Their plan is that the robots will serve mankind obediently and do all physical labour. Finally, after improvements and the elimination of unnecessary parts, they develop a "perfect robot", which eventually goes out of control and attacks humans.

Although Čapek introduced the word robot to the world, the term robotics was coined by Isaac Asimov in his science-fiction story "Runaround", in which he portrayed robots not in a negative manner but as machines built with safety measures in mind to assist human beings. In the story, Asimov established three fundamental laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first and second laws.
Robots were introduced into industry in the early 1960s. They were originally used in hazardous operations, such as handling toxic and radioactive materials and loading and unloading hot workpieces in furnaces and foundries.


SOME OTHER POINTS

Radioactive environments are mainly encountered in nuclear power plants. Some regular repair and maintenance activities at nuclear power plants involve risks of contamination and irradiation. While contamination is an accidental and avoidable phenomenon, irradiation is continuous and affects the operators' work areas. Various countries have laws establishing the maximum annual doses to which professional workers may be exposed and the maximum time they may stay inside areas subject to radiation.

Most tasks at nuclear facilities are carried out by in-house maintenance specialists. They are few in number and in many cases require several years of experience and extensive training. The number of hours they can work continuously is limited by national and international regulations on the maximum dose an exposed professional worker may receive. Legal regulations establish that when a worker reaches a specific dose limit, the worker cannot enter areas subject to radiation for a given period of time. This increases the cost of maintenance services, because personnel can only operate for short periods. Given the discontinuous use of human resources and the discontinuous nature of the work, nuclear service companies are obliged to allow for some uncertainty in the scheduling of services and in the rationalization of their workforce.
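
The scheduling constraint described above can be sketched in a few lines: given an annual dose limit and the dose rate in a work area, the maximum worker-hours per year follow directly. The 20 mSv annual limit is the ICRP occupational figure; the area dose rates are invented examples.

```python
annual_limit_msv = 20.0   # ICRP occupational dose limit, mSv per year
areas_msv_per_hr = {"low-radiation zone": 0.05,       # assumed dose rates
                    "near reactor vessel": 2.5}

for area, rate in areas_msv_per_hr.items():
    max_hours = annual_limit_msv / rate               # hours before limit is hit
    print(f"{area}: at most {max_hours:.0f} worker-hours per year")
```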

For all the above reasons it is generally advisable, and in some cases mandatory, to use telerobotics for the execution of repair and maintenance tasks in nuclear power plants. This is particularly true of tasks entailing high exposure to radiation.