EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven-Goran Eriksson

Saturday, January 16, 2010

Computer Aided Process Planning (mechanical seminar topics)

SUMMARY/INTRODUCTION

Technological advances are reshaping the face of manufacturing, creating paperless manufacturing environments in which computer-aided process planning (CAPP) will play a preeminent role. Two reasons drive this effect: costs are declining, which encourages partnerships between CAD and CAPP developers, and access to manufacturing data is becoming easier to accomplish in multivendor environments. This is primarily because the increasing use of LANs, together with exchange standards such as IGES, is facilitating transfer of data from one point to another on the network, while relational databases (RDBs) and the associated Structured Query Language (SQL) allow distributed data processing and data access.
With the introduction of computers into design and manufacturing, the process planning function needed to be automated. The shop-trained people who were familiar with the details of machining and other processes were gradually retiring, and they would be unavailable in the future to do process planning. An alternative way of accomplishing this function was needed, and Computer Aided Process Planning (CAPP) was that alternative. CAPP was usually considered to be a part of computer aided manufacturing; however, computer aided manufacturing by itself was a stand-alone system. In fact, a synergy results when CAM is combined with CAD to create CAD/CAM. In such a system CAPP becomes the direct connection between design and manufacturing.

Moreover, the reliable knowledge-based computer-aided process planning application MetCAPP looks for the least costly plan capable of producing the design, and continuously generates and evaluates plans until it is evident that none of the remaining plans will be better than the best one seen so far. The goal is to find a useful, reliable solution to a real manufacturing problem in a safer environment. If alternate plans exist, a rating that accounts for safety conditions is used to select the best plan.

WHAT IS CAD?

A product must be defined before it can be manufactured. Computer Aided Design involves any type of design activity that makes use of the computer to develop, analyze or modify an engineering design. There are a number of fundamental reasons for implementing a computer aided design system.
a. Increase the productivity of the designer: This is accomplished by helping the designer to visualize the product and its component subassemblies and parts; and by reducing the time required in synthesizing, analyzing, and documenting the design. This productivity improvement translates not only into lower design cost but also into shorter project completion times.
b. To improve the quality of the design: A CAD system permits a more thorough engineering analysis and a larger number of design alternatives can be investigated. Design errors are also reduced through the greater accuracy provided by the system. These factors lead to a better design.
c. To improve communications: Use of a CAD system provides better engineering drawings, more standardization in the drawings, better documentation of the design, fewer drawing errors, and greater legibility.
d. To create a database for manufacturing: In the process of creating the documentation for the product design (geometries and dimensions of the product and its components, material specifications for components, bill of materials, etc.), much of the database required to manufacture the product is also created.

Design usually involves both creative and repetitive tasks. The repetitive tasks within design are very appropriate for computerization.

Mine Detection Using Radar Bullets(Mechanical seminar topic)

SUMMARY

Nowadays, in places like Afghanistan and Iraq, land mines pose a serious threat to the lives of civilians. Mines implanted during wartime may remain undetected for several decades and may suddenly be activated after that. Also, during wartime, mines implanted by enemy countries must be detected and defused properly in order to save the lives of soldiers. Detecting landmines is therefore important for every country today.

AFFECTED COUNTRIES

The countries known to have severe landmine problems include Afghanistan, Bosnia, Cambodia, Ethiopia, Vietnam, Iraq, Kuwait, Laos, Egypt, Eritrea, Chevalier and China. India, Pakistan, Sri Lanka and Myanmar are among the roughly 100 other countries that are less severely mine-affected.

LAND MINES

The purpose of a landmine is to disable, immobilize or kill. It is an explosive device activated either by a person or vehicle, or by command detonation via electric wire or radio signal. Most land mines are laid just below the surface of the ground and are activated by pressure or a trip-wire. Most land mines contain many metallic parts, which can be exploited for their detection.

Anti-personnel mines claim 70 new victims every day. This weapon is particularly cruel to children, whose smaller bodies, being closer to the blast, are more likely to sustain serious injury. The severe disabilities and psychological traumas that follow the blast mean these children will have to be looked after for many years.

A child injured at the age of 10 will need about 25 artificial limbs during their lifetime. Each costs about 3,000 dollars, a huge sum to pay in countries where people earn as little as 10 dollars a month. Between 1979 and 1996 the Red Cross fitted over 70,000 amputees with artificial limbs, and the landmine problem is still growing. Considering these factors, the development of the radar bullet is a real boost to our world as we advance into the 21st century.

RADAR BULLET

The radar bullet is a special type of bullet whose main use is to find landmines without setting foot on the ground. The technique consists of firing a special bullet into the ground from a helicopter, which can pinpoint buried land mines.

The bullet emits a radar pulse as it grinds to a halt. This pulse strikes the mine, and an image of the mine becomes available on the computer in the helicopter, offering a safe and efficient way of finding land mines.

Apache Helicopter(mechanical seminar topics)

SUMMARY
The Apache Helicopter is a revolutionary development in the history of war. It is essentially a flying tank: a helicopter designed to survive heavy attack and inflict massive damage. It can zero in on specific targets, day or night, even in terrible weather. As you might expect, it is a terrifying machine to ground forces.

In this topic, we look at the Apache's amazing flight systems, engines, weapon systems, sensor systems and armour systems. Individually these components are remarkable pieces of technology. Combined together they make up an unbelievable fighting machine - the most lethal helicopter ever created.

HISTORY

The first series of Apaches, developed by Hughes Helicopters in the 1970s, went into active service in 1985. The U.S. military is gradually replacing this original design, known as the AH-64A Apache, with the more advanced AH-64D Apache Longbow. In 1984, McDonnell Douglas purchased Hughes Helicopters, and McDonnell Douglas in turn merged with Boeing in 1997. Boeing now manufactures Apache helicopters, and the UK-based GKN Westland Helicopters manufactures the English version of the Apache, the WAH-64.

DRAG: Drag is an aerodynamic force that resists the motion of an object moving through a fluid. The amount of drag depends on a few factors, such as the size of the object, the speed of the object and the density of the air.

THRUST: Thrust is an aerodynamic force that must be created by an airplane in order to overcome the drag. Airplanes create thrust using propellers, jet engines or rockets.

WEIGHT: This is the force acting downwards or the gravitational force.

LIFT: Lift is the aerodynamic force that holds an airplane in the air, and is probably the most important of the four aerodynamic forces. Lift is created by the wings of the airplane.

Lift is a force on a wing immersed in a moving fluid, and it acts perpendicular to the direction of the fluid flow; drag is the corresponding force that acts parallel to the direction of the fluid flow.

1. Air approaching the top surface of the wing is compressed into the air above it as it moves upward. Then, as the top surface curves downward and away from the air stream, a low pressure area is developed and the air above is pulled downward toward the back of the wing.
2. Air approaching the bottom surface of the wing is slowed, compressed and redirected in a downward path. As the air nears the rear of the wing, its speed and pressure gradually match that of the air coming over the top. The overall pressure effects encountered on the bottom of the wing are generally less pronounced than those on the top of the wing.

3.1 FOR STRAIGHT AND LEVEL FLIGHT

The following relationships must be true:
THRUST = DRAG
WEIGHT = LIFT

If, for any reason, the amount of drag becomes larger than the amount of thrust, the plane will slow down. If the thrust is increased so that it is greater than the drag, the plane will speed up.

If the amount of lift drops below the weight of the airplane, the plane will descend. By increasing the lift, the pilot can make the airplane climb.
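These force-balance rules can be sketched as a tiny classifier. The numbers in the usage example are hypothetical round figures, not data for any particular aircraft:

```python
# For straight and level flight at constant speed, thrust must equal
# drag and lift must equal weight (zero net force on both axes).

def flight_state(thrust, drag, lift, weight):
    """Classify the flight condition from the four aerodynamic forces (N)."""
    if thrust > drag:
        speed = "speeding up"
    elif thrust < drag:
        speed = "slowing down"
    else:
        speed = "constant speed"

    if lift > weight:
        altitude = "climbing"
    elif lift < weight:
        altitude = "descending"
    else:
        altitude = "level flight"

    return speed, altitude

# Balanced forces -> straight and level flight at constant speed
print(flight_state(thrust=2000.0, drag=2000.0, lift=12000.0, weight=12000.0))
# Drag exceeds thrust -> the aircraft slows down
print(flight_state(thrust=1500.0, drag=2000.0, lift=12000.0, weight=12000.0))
```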

Friday, January 15, 2010

EXPERT'S EDGE

Hey guys! Worried about your educational future? Don't worry: this blog is dedicated purely to your bright future, and EXPERT'S EDGE gives an exclusive learning experience for students of classes up to XII and engineering students (all branches, B.Tech). And this is not the end: we also give final-year engineering students project assistance.

This blog is also dedicated to book lovers; here you can read all types of books. Just stay with us and get any book you need. So, guys, enjoy EXPERT'S EDGE's free service. We are always with you.

GPS And Applications

INTRODUCTION

The Global Positioning System, usually called GPS (the US military refers to it as NAVSTAR), is an intermediate circular orbit (ICO) satellite navigation system used for determining one's precise location and providing a highly accurate time reference almost anywhere on Earth or in Earth orbit.

The first of the 24 satellites that form the current GPS constellation (Block II) was placed into orbit on February 14, 1989. The 50th GPS satellite since the program began in 1978 was launched on March 21, 2004 aboard a Delta II rocket.

GPS HISTORY

The initial concept of GPS began to take form soon after the launch of Sputnik in 1957: "some scientists and engineers realized that radio transmissions from a satellite in a well-defined orbit could indicate the position of a receiver on the ground". This knowledge resulted in the U.S. Navy's development and use of the "Transit" system in the 1960s. This system, however, proved to be cumbersome to use and limited in terms of positioning accuracy.

Starting in the mid-1970s, the U.S. Department of Defense (DOD) began the construction of today's GPS and has funded, operated, and maintained control of the system it developed. Eventually, $12 billion would take GPS from concept to completion. Full Operational Capability (FOC) of GPS was reached on July 17, 1995 (U.S.C.G., 1996, www). At one point GPS was renamed NAVSTAR; this name, however, seems to be regularly ignored by system users and others. Although the primary use of GPS was thought to be for classified military operations, provisions were made for civilian use of the system. National security reasons, however, would require that civilian access to accurate positioning be intentionally degraded.

GPS ELEMENTS

GPS was designed as a system of radio navigation that utilizes "ranging" -- the measurement of distances to several satellites -- for determining location on ground, at sea, or in the air. The system basically works by using radio frequencies to broadcast satellite positions and time. With an antenna and receiver, a user can access these radio signals and process the information contained within them to determine the "range", or distance, to each satellite. Each such distance represents the radius of an imaginary sphere surrounding that satellite. With four or more known satellite positions, the user's processor can determine a single intersection of these spheres and thus the position of the receiver. The system generally comprises three segments:
1. The space segment
2. The control segment
3. The user segment
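The ranging idea above can be sketched numerically: subtracting the sphere equations pairwise turns the intersection problem into a small linear system. The satellite coordinates below are hypothetical round numbers for illustration, not real ephemeris data, and the sketch ignores receiver clock error.

```python
import math

def locate(sats, ranges):
    """Recover the receiver position from 4 satellite positions and ranges.

    Subtracting sphere equation i from sphere equation 0 gives the linear
    relation  2*(p_i - p_0) . X = |p_i|^2 - |p_0|^2 - (r_i^2 - r_0^2),
    yielding a 3x3 system for the receiver position X.
    """
    x0, y0, z0 = sats[0]
    r0 = ranges[0]
    A, b = [], []
    for (x, y, z), r in zip(sats[1:], ranges[1:]):
        A.append([2 * (x - x0), 2 * (y - y0), 2 * (z - z0)])
        b.append((x*x + y*y + z*z) - (x0*x0 + y0*y0 + z0*z0) - (r*r - r0*r0))

    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    X = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        X[i] = (b[i] - sum(A[i][j] * X[j] for j in range(i + 1, 3))) / A[i][i]
    return X

# Hypothetical satellite positions (km) and a receiver at (1000, 2000, 500) km
sats = [(15600, 7540, 20140), (18760, 2750, 18610),
        (17610, 14630, 13480), (19170, 610, 18390)]
true_pos = (1000.0, 2000.0, 500.0)
ranges = [math.dist(s, true_pos) for s in sats]  # simulated exact ranges
print([round(c, 3) for c in locate(sats, ranges)])  # recovers the receiver position
```

A real receiver must also solve for its own clock offset, which is why four satellites (not three) are the practical minimum.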

SPACE SEGMENT

The space segment consists of 24 satellites, each in its own orbit about 11,000 nautical miles above the Earth. The user segment consists of receivers, which you can hold in your hand or mount in a vehicle. The control segment consists of ground stations located around the world that make sure the satellites are working properly.

The GPS space segment uses a total of 24 satellites in a constellation of six orbital planes. This configuration provides for at least four equally-spaced satellites within each of the six orbital planes. The orbital path repeats in relation to the earth, meaning that a satellite's track over the earth follows the same path with each orbit. At 10,900 nm (20,200 km) altitude, GPS satellites are able to complete one orbit around the earth every 12 hours. GPS satellites orbit at a 55-degree inclination to the equatorial plane. This space segment configuration provides for a minimum of five satellites to be in view from any place on earth, fulfilling the four needed for three-dimensional positioning.
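The 12-hour figure can be checked from Kepler's third law, T = 2π√(a³/μ), using the quoted 20,200 km altitude:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

altitude = 20.2e6           # 20,200 km GPS altitude, m
a = R_EARTH + altitude      # orbital radius for a circular orbit, m

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(round(period_s / 3600, 2), "hours")  # approximately 12 hours
```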

NANO TECHNOLOGY(mechanical SEMINAR TOPICS)

Definition
Nanotechnology is any technology that exploits phenomena and structures that can only occur at the nanometer scale, which is the scale of several atoms and small molecules. It is the understanding and control of matter at dimensions of roughly 1 to 100 nanometers, where unique phenomena enable novel applications.

Overview
The related term nanoscience describes the interdisciplinary fields of science devoted to the study of the nanoscale phenomena employed in nanotechnology. This is the world of atoms, molecules, macromolecules, quantum dots, and macromolecular assemblies, and is dominated by surface effects such as Van der Waals attraction, hydrogen bonding, electronic charge, ionic bonding, covalent bonding, hydrophobicity, hydrophilicity, and quantum mechanical tunneling, to the virtual exclusion of macro-scale effects such as turbulence and inertia. For example, the vastly increased ratio of surface area to volume opens new possibilities in surface-based science, such as catalysis.

Nanotechnologies may provide new solutions for the millions of people in developing countries who lack access to basic services such as safe water, reliable energy, health care, and education. The United Nations has set Millennium Development Goals for meeting these needs. The 2004 UN Task Force on Science, Technology and Innovation noted that some of the advantages of nanotechnology include production using little labor, land, or maintenance; high productivity; low cost; and modest requirements for materials and energy.
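The surface-area-to-volume remark is easy to put in numbers: for a sphere, SA/V = 3/r, so shrinking a particle from millimetre to nanometre scale multiplies the ratio by a factor of a million. A minimal sketch:

```python
import math

def sa_to_v(radius_m):
    """Surface-area-to-volume ratio of a sphere, which simplifies to 3/r."""
    surface = 4 * math.pi * radius_m**2
    volume = (4 / 3) * math.pi * radius_m**3
    return surface / volume

# From a 1 mm bead down to a 10 nm nanoparticle, SA/V grows 100,000-fold
for r in (1e-3, 1e-6, 1e-8):   # 1 mm, 1 micron, 10 nm
    print(f"r = {r:.0e} m  ->  SA/V = {sa_to_v(r):.0e} per metre")
```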

Many developing countries, for example Costa Rica, Chile, Bangladesh, Thailand, and Malaysia, are investing considerable resources in research and development of nanotechnologies. Emerging economies such as Brazil, China, India and South Africa are spending millions of US dollars annually on R&D, and are rapidly increasing their scientific strength, as demonstrated by their growing numbers of publications in peer-reviewed scientific journals.

Introduction
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are currently made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. Atoms can be moved around on a surface with scanning probe microscopy techniques, but doing so is cumbersome, expensive and very time-consuming. For these reasons, it is not feasible to construct nanoscale devices atom by atom: assembling a billion-transistor microchip at the rate of about one transistor an hour is inefficient. However, these techniques may eventually be used to make primitive nanomachines, which in turn could be used to make more sophisticated nanomachines.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.

solar sails(mechanical seminar topic)

SUMMARY

Nearly 400 years ago, when much of Europe was still involved in naval exploration of the world, Johannes Kepler proposed the idea of exploring the galaxy using sails. From his observation that comet tails were blown around by some kind of solar breeze, he believed sails could capture that wind to propel spacecraft the way winds moved ships on the oceans. What Kepler actually observed was the pressure of solar photons on dust particles released by the comet as it orbits. While Kepler's idea of a solar wind has since been disproved, scientists have discovered that sunlight does exert enough force to move objects. Photonic pressure is a very gentle force that is not observable on earth, because the frictional forces in the atmosphere are so much larger. To take advantage of this force, NASA has been experimenting with giant solar sails that could be pushed through the cosmos by light. Solar sails were seriously studied by NASA in the 1960s as a possible means of manned transportation around the solar system. In those days of optimism, serious plans were formed for lunar bases by 1975, nuclear launchers and interplanetary engines, and unmanned interstellar probes. None of these ever received serious funding, and they all died on the drawing boards and test beds by the early 1970s.


WHAT IS A SOLAR SAIL?

A solar sail is a very large mirror that reflects sunlight. As the photons of sunlight strike the sail and bounce off, they gently push the sail along by transferring momentum to it. Because there are so many photons from sunlight, and because they are constantly hitting the sail, there is a constant pressure (force per unit area) exerted on the sail that produces a constant acceleration of the spacecraft. Although the force on a solar-sail spacecraft is less than that of conventional chemical rockets, such as the space shuttle's engines, the solar-sail spacecraft constantly accelerates over time and achieves a greater velocity. It's like comparing the effects of a gust of wind and a steady, gentle breeze on a dandelion seed floating in the air. Although the gust of wind (the rocket engine) initially pushes the seed with greater force, it dies down quickly and the seed coasts only so far. In contrast, the breeze pushes the seed weakly but over a longer period of time, and the seed travels farther. Solar sails enable spacecraft to move within the solar system and between stars without bulky rocket engines and enormous amounts of fuel.
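The "gentle but constant" point is easy to quantify. For a perfectly reflecting sail, radiation pressure is P = 2I/c; the sail area and spacecraft mass below are hypothetical round numbers for illustration, not figures from any actual mission:

```python
I_SUN = 1361.0        # solar irradiance at 1 AU, W/m^2
C = 2.998e8           # speed of light, m/s
AREA = 10_000.0       # sail area, m^2 (hypothetical 100 m x 100 m sail)
MASS = 100.0          # spacecraft mass, kg (hypothetical)

pressure = 2 * I_SUN / C   # ~9e-6 N/m^2: far too weak to feel on earth
force = pressure * AREA    # total photon force on the whole sail, N
accel = force / MASS       # resulting acceleration, m/s^2

# A constant acceleration integrates into real speed over time (v = a*t)
for days in (1, 30, 365):
    print(days, "days ->", round(accel * days * 86400, 1), "m/s")
```

After a single day the sail has gained tens of metres per second, and after a year tens of kilometres per second, which is the sense in which the steady breeze beats the gust.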


COMPONENTS OF SOLAR SAIL

There are three components to a solar sail-powered spacecraft:
" Continuous force exerted by sunlight
" A large, ultra thin mirror
" A separate launch vehicle