EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven-Göran Eriksson

Friday, February 5, 2010

MPEG Video Compression (Information Technology Seminar Topics)

Definition

MPEG is the famous four-letter acronym that stands for the "Moving Picture Experts Group".
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to map the compact tokens back into something resembling the original stream of samples.

These semantics are merely a collection of rules (which people like to call algorithms, but that would imply there is a mathematical coherency to a scheme cooked up by trial and error...). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.

MPEG is an institution unto itself as seen from within its own universe. When (unadvisedly) placed in the same room, its inhabitants can spontaneously erupt into a blood-letting debate, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt will appear to remain totally indifferent to critical issues set before it for many months.

It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (NTT, Japan). The excellent byproduct of the successful MPEG process became an International Standards document safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).

Pre MPEG
Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (which always boils down to visual tweaking and implementation tradeoffs), you never know what's really behind any such scheme (other than a lot of marketing hype).
Seeing this threat... that is, this need for world interoperability, the Fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs... and maybe it would be useful for other things too.

MPEG borrowed significantly from JPEG and, more directly, H.261. By the end of the third year (1990), a syntax emerged which, when applied to represent SIF-rate video and compact-disc-rate audio at a combined bitrate of 1.5 Mbit/sec, approximated the pleasure-filled viewing experience offered by the standard VHS format.
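
To put the 1.5 Mbit/sec figure in perspective, here is a rough back-of-the-envelope calculation in Python. It is only a sketch under stated assumptions that are not in the text: an NTSC-style SIF picture of 352x240 pixels, 30 frames/sec, 4:2:0 chroma subsampling, and roughly 1.15 Mbit/sec of the total budget allotted to video.

# Rough arithmetic: raw SIF video rate vs. an assumed MPEG-1 video budget.
width, height = 352, 240          # SIF resolution (NTSC variant) - assumption
bytes_per_pixel = 1.5             # 4:2:0 sampling: 1 luma byte + 0.5 chroma bytes
fps = 30                          # assumed frame rate

raw_bits_per_sec = width * height * bytes_per_pixel * 8 * fps
video_budget = 1.15e6             # bits/sec assumed for the video stream

print(f"Raw SIF video:      {raw_bits_per_sec / 1e6:.1f} Mbit/s")
print(f"Assumed video budget: {video_budget / 1e6:.2f} Mbit/s")
print(f"Compression needed: about {raw_bits_per_sec / video_budget:.0f}:1")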

After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television), not to mention the side benefits: frequent flier miles.

Quantum Information Technology (Information Technology Seminar Topics)

Definition

The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This document aims to summarize not just quantum computing, but the whole subject of quantum information theory. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, the paper begins with an introduction to classical information theory. The principles of quantum mechanics are then outlined.

The EPR-Bell correlation, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are described, including key distribution, teleportation, the universal quantum computer and quantum algorithms. The common theme of all these ideas is the use of quantum entanglement as a computational resource.

Experimental methods for small quantum processors are briefly sketched, concentrating on ion traps, superconducting cavities, nuclear magnetic resonance (NMR) based techniques, and quantum dots. "Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 tubes and weigh only 1 1/2 tons" - Popular Mechanics, March 1949.

Now, if this seems like a joke, wait a second. "Tomorrow's computer might well resemble a jug of water."
This for sure is no joke. Quantum computing is here. What was science fiction two decades back is a reality today and is the future of computing. The history of computer technology has involved a sequence of changes from one type of physical realization to another --- from gears to relays to valves to transistors to integrated circuits and so on. Quantum computing is the next logical advancement.

Today's advanced lithographic techniques can squeeze fraction-of-a-micron-wide logic gates and wires onto the surface of silicon chips. Soon they will yield even smaller parts and inevitably reach a point where logic gates are so small that they are made out of only a handful of atoms. On the atomic scale, matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new, quantum technology must replace or supplement what we have now.

Quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!
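
To give a numerical feel for what "quantum principles" means here, the following is a minimal NumPy sketch (not from the text): a single qubit as a two-component state vector, put into superposition by a Hadamard gate, with measurement probabilities given by the Born rule.

import numpy as np

# A qubit is a 2-component complex state vector: |0> = [1, 0], |1> = [0, 1].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate turns a basis state into an equal superposition.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                      # (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(state) ** 2
print("amplitudes:", state)           # ~[0.707, 0.707]
print("P(0), P(1):", probs)           # [0.5, 0.5]

# Two qubits live in a 4-dimensional space built by the tensor product;
# this exponential growth of the state space is what quantum algorithms exploit.
two_qubits = np.kron(state, state)
print("2-qubit state has", two_qubits.size, "amplitudes")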

Single Photon Emission Computed Tomography (SPECT)

Definition

Emission Computed Tomography is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue.

SPECT, the acronym for Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient's specific organ or body system. SPECT images are functional in nature rather than purely anatomical, as in ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides introduced into the patient's body.

SPECT dates from the early 1960s, when the idea of emission transverse section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first single photon ECT, or SPECT, imaging device was developed by Kuhl and Edwards, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were developed in the 1980s.

SPECT is short for Single Photon Emission Computed Tomography. As its name suggests (single photon emission), gamma rays are the source of the information, rather than the X-ray emission used in a conventional CT scan.

As with X-ray CT, MRI, etc., SPECT allows us to visualize information about a patient's specific organ or body system, though the information here is functional.

Internal radiation is administered by means of a pharmaceutical which is labeled with a radioactive isotope. This radiopharmaceutical decays, resulting in the emission of gamma rays. These gamma rays give us a picture of what's happening inside the patient's body.

This is done using the most essential tool in nuclear medicine: the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image or in SPECT imaging to acquire a 3-D image.
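
Turning the camera views collected around the patient into a cross-sectional image is a reconstruction problem. The sketch below is a deliberately toy, unfiltered backprojection in Python/NumPy, shown only to illustrate the idea; real SPECT systems use filtered backprojection or iterative methods and correct for attenuation and scatter, none of which is modeled here.

import numpy as np
from scipy.ndimage import rotate   # used both to simulate and to backproject

# Toy phantom: one 2-D slice with a small "hot" region of tracer uptake.
slice_true = np.zeros((64, 64))
slice_true[24:32, 36:44] = 1.0

angles = np.arange(0, 180, 6)      # camera stops around the patient (degrees)

# Simulate projections: at each angle, sum the activity along one axis.
projections = [rotate(slice_true, a, reshape=False, order=1).sum(axis=0)
               for a in angles]

# Unfiltered backprojection: smear each projection back across the image
# at its acquisition angle and accumulate.
recon = np.zeros_like(slice_true)
for a, p in zip(angles, projections):
    smear = np.tile(p, (slice_true.shape[0], 1))
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)

print("true hot-spot peak :", np.unravel_index(slice_true.argmax(), slice_true.shape))
print("reconstructed peak :", np.unravel_index(recon.argmax(), recon.shape))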

The Future Wallet (Information Technology Seminar Topics)

Definition

"Money in the 21st century will surely prove to be as different from the money of the current century as our money is from that of the previous century. Just as fiat money replaced specie-backed paper currencies, electronically initiated debits and credits will become the dominant payment modes, creating the potential for private money to compete
with government-issued currencies." Just as everything is coming under the shadow of "e" today, we have paper currency being replaced by electronic money, or e-cash.

Hardly a day goes by without some mention in the financial press of new developments in "electronic money". In the emerging field of electronic commerce, novel buzzwords like smartcards, online banking, digital cash, and electronic checks are being used to discuss money. But how secure are these brand-new forms of payment? And most importantly, which of these emerging secure electronic money technologies will survive into the next century?

These are some tough questions to answer, but here is a solution which provides a form of security for these modes of currency exchange using biometrics technology. The Money Pad introduced here uses biometrics technology for fingerprint recognition. The Money Pad is, in effect, a form of credit card or smartcard.

Every time users want to access the Money Pad, they have to make an impression of their fingers, which is scanned and matched with the one stored on the hard disk of the database server. If the fingerprint matches the user's, they are allowed to access and use the Pad; otherwise the Money Pad is not accessible. This provides a form of security for the everlasting transaction currency of the future, "e-cash".
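
One way to picture the matching step is a simple server-side template lookup. The sketch below is purely hypothetical: real fingerprint matching compares minutiae features with fuzzy matching rather than exact hashes, and every name and field here is invented for illustration.

import hashlib

# Hypothetical server-side store: user id -> digest of an enrolled template.
enrolled_templates = {
    "alice": hashlib.sha256(b"alice-fingerprint-template").hexdigest(),
}

def scan_finger() -> bytes:
    """Stand-in for the Money Pad's fingerprint scanner."""
    return b"alice-fingerprint-template"

def unlock_money_pad(user_id: str) -> bool:
    """Allow access only if the fresh scan matches the enrolled template."""
    scanned = hashlib.sha256(scan_finger()).hexdigest()
    return enrolled_templates.get(user_id) == scanned

if unlock_money_pad("alice"):
    print("Access granted: Money Pad unlocked")
else:
    print("Access denied: Money Pad stays locked")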

Money Pad - a form of credit card or smart card, similar to a floppy disk, which is introduced to provide secure e-cash transactions.

Cisco IOS Firewall (Information Technology Seminar Topics)

Definition

The Cisco IOS Firewall provides robust, integrated firewall and intrusion detection functionality for every perimeter of the network. Available for a wide range of Cisco IOS software-based routers, the Cisco IOS Firewall offers sophisticated security and policy enforcement for connections within an organization (intranet) and between partner networks (extranets), as well as for securing Internet connectivity for remote and branch offices.

A security-specific, value-add option for Cisco IOS Software, the Cisco IOS Firewall enhances existing Cisco IOS security capabilities, such as authentication, encryption, and failover, with state-of-the-art security features, such as stateful, application-based filtering (context-based access control), defense against network attacks, per user authentication and authorization, and real-time alerts.

The Cisco IOS Firewall is configurable via Cisco ConfigMaker software, an easy-to-use Microsoft Windows 95, 98, NT 4.0 based software tool.

A firewall is a network security device that ensures that all communications attempting to cross it meet an organization's security policy. Firewalls track and control communications, deciding whether to allow, reject or encrypt them. Firewalls are used to connect a corporate local network to the Internet and are also used within networks. In other words, they stand between the trusted network and the untrusted network.

The first and most important decision reflects the policy of how your company or organization wants to operate the system: is the firewall in place to explicitly deny all services except those critical to the mission of connecting to the net, or is the firewall in place to provide a metered and audited method of "queuing" access in a non-threatening manner? The second issue is what level of monitoring, redundancy and control you want. Having established the acceptable risk level, you can form a checklist of what should be monitored, permitted and denied. The third issue is financial.
Implementation methods

Two basic methods to implement a firewall are:
1. As a Screening Router:
A screening router is a special computer or an electronic device that screens (filters out) specific packets based on criteria that are defined. Almost all current screening routers operate in the following manner (a minimal sketch of this first-match logic appears after the list).
a. Packet filter criteria must be stored for the ports of the packet filter device. The packet filter criteria are called packet filter rules.
b. When a packet arrives at the port, the packet header is parsed. Most packet filters examine the fields in only the IP, TCP and UDP headers.
c. The packet filter rules are stored in a specific order. Each rule is applied to the packet in the order in which the rules are stored.
d. If a rule blocks the transmission or reception of a packet, the packet is not allowed.
e. If a rule allows the transmission or reception of a packet, the packet is allowed.
f. If a packet does not satisfy any rule, it is blocked.
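
Here is that sketch in Python. It follows the ordered, first-match, default-deny behaviour described in steps a-f; the particular rule fields, port numbers and dictionary layout are invented for illustration and are not Cisco configuration syntax.

# Each rule matches on (protocol, destination port) and carries an action.
# Rules are checked in stored order; the first match wins; no match means "block".
RULES = [
    {"proto": "tcp", "dport": 80, "action": "allow"},   # web traffic
    {"proto": "tcp", "dport": 25, "action": "allow"},   # mail
    {"proto": "udp", "dport": 53, "action": "allow"},   # DNS
    {"proto": "tcp", "dport": 23, "action": "block"},   # telnet
]

def filter_packet(packet: dict) -> str:
    """Apply the rules in stored order; default-deny if nothing matches (rule f)."""
    for rule in RULES:
        if rule["proto"] == packet["proto"] and rule["dport"] == packet["dport"]:
            return rule["action"]
    return "block"

print(filter_packet({"proto": "tcp", "dport": 80}))    # allow
print(filter_packet({"proto": "tcp", "dport": 6667}))  # block (no rule matched)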


Nanorobotics (Information Technology Seminar Topics)

Definition

Nanorobotics is an emerging field that deals with the controlled manipulation of objects with nanometer-scale dimensions. Typically, an atom has a diameter of a few Ångstroms (1 Å = 0.1 nm = 10^-10 m), a molecule's size is a few nm, and clusters or nanoparticles formed by hundreds or thousands of atoms have sizes of tens of nm. Therefore, Nanorobotics is concerned with interactions with atomic- and molecular-sized objects, and is sometimes called Molecular Robotics.

Molecular Robotics falls within the purview of Nanotechnology, which is the study of phenomena and structures with characteristic dimensions in the nanometer range. The birth of Nanotechnology is usually associated with a talk by Nobel-prize winner Richard Feynman entitled "There's Plenty of Room at the Bottom", whose text may be found in [Crandall & Lewis 1992]. Nanotechnology has the potential for major scientific and practical breakthroughs.

Future applications ranging from very fast computers to self-replicating robots are described in Drexler's seminal book [Drexler 1986]. In a less futuristic vein, the following potential applications were suggested by well-known experimental scientists at the Nano4 conference held in Palo Alto in November 1995:

" Cell probes with dimensions ~ 1/1000 of the cell's size
" Space applications, e.g. hardware to fly on satellites
" Computer memory
" Near field optics, with characteristic dimensions ~ 20 nm
" X-ray fabrication, systems that use X-ray photons
" Genome applications, reading and manipulating DNA
" Nanodevices capable of running on very small batteries
" Optical antennas

Nanotechnology is being pursued along two converging directions. From the top down, semiconductor fabrication techniques are producing smaller and smaller structures; see, e.g., [Colton & Marrian 1995] for recent work. For example, the line width of the original Pentium chip is 350 nm. Current optical lithography techniques have obvious resolution limitations because of the wavelength of visible light, which is on the order of 500 nm. X-ray and electron-beam lithography will push sizes further down, but with a great increase in the complexity and cost of fabrication. These top-down techniques do not seem promising for building nanomachines that require precise positioning of atoms or molecules.

Alternatively, one can proceed from the bottom up, by assembling atoms and molecules into functional components and systems. There are two main approaches for building useful devices from nanoscale components. The first is based on self-assembly, and is a natural evolution of traditional chemistry and bulk processing; see, e.g., [Gómez-López et al. 1996]. The other is based on controlled positioning of nanoscale objects by direct application of forces, electric fields, and so on. The self-assembly approach is being pursued at many laboratories. Despite all the current activity, self-assembly has severe limitations because the structures produced tend to be highly symmetric, and the most versatile self-assembled systems are organic and therefore generally lack robustness. The second approach involves nanomanipulation, and is being studied by a small number of researchers, who are focusing on techniques based on Scanning Probe Microscopy.

Light Emitting Polymers (LEP) (Information Technology Seminar Topics)

Definition

Light-emitting polymers, or polymer-based light-emitting diodes, discovered by Friend et al. in 1990, have been found superior to other displays such as liquid crystal displays (LCDs), vacuum fluorescence displays and electroluminescence displays. Though not yet commercialised, they have proved to be a milestone in the field of flat panel displays. Research in LEP is underway at Cambridge Display Technology Ltd (CDT) in the UK.

In the last decade, several other display contenders, such as plasma and field emission displays, were hailed as the solution for pervasive displays. Like LCDs, they suited certain niche applications but failed to meet the broad demands of the computer industry.

Today the trend is towards non-CRT flat panel displays. As LEDs are inexpensive devices, they can be extremely handy in constructing flat panel displays. The idea was to combine the characteristics of a CRT with the performance of an LCD and the added design benefits of formability and low power. Cambridge Display Technology Ltd is developing a display medium with exactly these characteristics.

The technology uses a light-emitting polymer (LEP) that costs much less to manufacture and run than CRTs because the active material used is plastic.

LEP is a polymer that emits light when a voltage is applied to it. The structure comprises a thin-film semiconducting polymer sandwiched between two electrodes, namely an anode and a cathode. When electrons and holes are injected from the electrodes, these charge carriers recombine, leading to the emission of light that escapes through the glass substrate.

Y2K38 (Information Technology Seminar Topics)

Definition

The Y2K38 problem has been described as a non-problem, given that we are expected to be running 64-bit operating systems well before 2038. Well, maybe.

The Problem
Just as Y2K problems arise from programs not allocating enough digits to the year, Y2K38 problems arise from programs not allocating enough bits to internal time. Unix internal time is commonly stored in a data structure using a long int containing the number of seconds since 1970. This time is used in all time-related processes such as scheduling, file timestamps, etc. On a 32-bit machine, this value is sufficient to store time up to 19-Jan-2038. After this date, 32-bit clocks will overflow and return erroneous values such as 31-Dec-1969 or 13-Dec-1901.
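
The arithmetic behind those dates is easy to check. The following sketch uses Python purely as a calculator (a real 32-bit time_t lives in C); it shows the last second a signed 32-bit counter can hold and where the counter lands when it wraps to its most negative value.

from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)
MAX_INT32 = 2**31 - 1                 # largest signed 32-bit value

# Last representable second for a signed 32-bit time_t.
print(EPOCH + timedelta(seconds=MAX_INT32))    # 2038-01-19 03:14:07

# One second later the counter wraps to the most negative 32-bit value.
wrapped = -(2**31)
print(EPOCH + timedelta(seconds=wrapped))      # 1901-12-13 20:45:52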

Machines Affected
Currently (March 1998) there are a huge number of machines affected. Most of these will be scrapped before 2038. However, it is possible that some machines going into service now may still be operating in 2038. These may include process control computers, space probe computers, embedded systems in traffic light controllers, navigation systems, etc. Many of these systems may not be upgradeable. For instance, Ferranti Argus computers survived in service longer than anyone expected; long enough to present serious maintenance problems.

Note: Unix time is safe for the indefinite future for referring to future events, provided that enough bits are allocated. Programs or databases with a fixed field width should probably allocate at least 48 bits to storing time values.
Hardware, such as clock circuits, which has adopted the Unix time convention, may also be affected if 32-bit registers are used.

In my opinion, the Y2K38 threat is more likely to result in aircraft falling from the sky, glitches in life-support systems, and nuclear power plant meltdown than the Y2K threat, which is more likely to disrupt inventory control, credit card payments, pension plans etc. The reason for this is that the Y2K38 problem involves the basic system timekeeping from which most other time and date information is derived, while the Y2K problem (mostly) involves application programs.
Emulation and Megafunctions
While 32-bit CPUs may be obsolete in desktop computers and servers by 2038, they may still exist in microcontrollers and embedded circuits. For instance, the Z80 processor is still available in 1999 as an Embedded Function within Altera programmable devices. Such embedded functions present a serious maintenance problem for Y2K38 and similar rollover issues, since the package part number and other markings typically give no indication of the internal function.

Software Issues
Databases using 32-bit Unix time may survive through 2038. Care will have to be taken to avoid rollover issues.

Now that we've far surpassed the problem of "Y2K," can you believe that computer scientists and theorists are now projecting a new worldwide computer glitch for the year 2038? Commonly called the "Y2K38 Problem," it seems that computers using "long int" time systems, which were set up to start recording time from January 1, 1970 will be affected.

XML Encryption

Definition

As XML becomes a predominant means of linking blocks of information together, there is a requirement to secure specific information: that is, to allow authorized entities access to specific information and to prevent unauthorized entities from accessing it. Current methods on the Internet include password protection, smart cards, PKI, tokens and a variety of other schemes. These typically solve the problem of site access by unauthorized users, but do not provide mechanisms for protecting specific information from all those who have authorized access to the site.

Now that XML is being used to provide searchable and organized information there is a sense of urgency to provide a standard to protect certain parts or elements from unauthorized access. The objective of XML encryption is to provide a standard methodology that prevents unauthorized access to specific information within an XML document.
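
As a rough illustration of element-level protection, the sketch below encrypts the text of a single element while leaving the rest of the document readable. To be clear, this is not the W3C XML Encryption syntax; it is only a minimal stand-in using Python's standard xml.etree and the third-party cryptography package, with a made-up document.

import xml.etree.ElementTree as ET
from cryptography.fernet import Fernet   # pip install cryptography

doc = ET.fromstring(
    "<order><item>book</item><cardnumber>4111111111111111</cardnumber></order>"
)

key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt only the sensitive element; the rest of the document stays in the clear.
card = doc.find("cardnumber")
card.text = fernet.encrypt(card.text.encode()).decode()

print(ET.tostring(doc, encoding="unicode"))   # <cardnumber> now holds ciphertext

# An authorized party holding the key can recover the original value.
print(fernet.decrypt(card.text.encode()).decode())   # 4111111111111111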

XML (Extensible Markup Language) was developed by an XML Working Group (originally known as the SGML Editorial Review Board) formed under the auspices of the World Wide Web Consortium (W3C) in 1996. Even though HTML, DHTML and SGML already existed, XML was developed by the W3C to achieve the following design goals.

" XML shall be straightforwardly usable over the Internet.
" XML shall be compatible with SGML.
" It shall be easy to write programs, which process XML documents.
" The design of XML shall be formal and concise.
" XML documents shall be easy to create.

XML was created so that richly structured documents could be used over the web. The alternatives, HTML and SGML, are not practical for this purpose. HTML comes bound with a set of semantics and does not provide arbitrary structure. SGML does provide arbitrary structure, but it is too difficult to implement just for a web browser; SGML is so comprehensive that only large corporations can justify the cost of its implementation.

The eXtensible Markup Language, abbreviated as XML, describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. Thus XML is a restricted form of SGML.

A data object is an XML document if it is well-formed, as defined in the XML specification. A well-formed XML document may in addition be valid if it meets certain further constraints. Each XML document has both a logical and a physical structure. Physically, the document is composed of units called entities. An entity may refer to other entities to cause their inclusion in the document. A document begins in a "root" or document entity.
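
A well-formedness check is exactly what an XML parser performs. The short sketch below uses Python's standard xml.etree on two made-up fragments; checking validity against a DTD or schema would require a separate validator and is not shown.

import xml.etree.ElementTree as ET

well_formed = "<note><to>Tove</to><from>Jani</from></note>"
broken      = "<note><to>Tove</note>"        # mismatched tags

for text in (well_formed, broken):
    try:
        root = ET.fromstring(text)           # parsing starts at the root entity
        print("well-formed, root element:", root.tag)
    except ET.ParseError as err:
        print("not well-formed:", err)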

Unicode And Multilingual Computing (Information Technology Seminar Topics)

Definition

Unicode provides a unique number for every character,
no matter what the platform,
no matter what the program,
no matter what the language.

Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.

These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.
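
The conflict is easy to demonstrate: the same byte value decodes to different characters under different legacy encodings, whereas a Unicode code point is unambiguous. A small illustration in Python (the specific byte and encodings are chosen only as an example):

raw = b"\xe9"                      # one byte, value 0xE9

# The same byte means different characters in different legacy encodings.
print(raw.decode("latin-1"))       # é  (Western European)
print(raw.decode("cp1251"))        # й  (Cyrillic)

# Unicode gives each character a single, encoding-independent number.
for ch in ("é", "й"):
    print(ch, "-> U+%04X" % ord(ch))

# UTF-8 is one way of serializing those code points back to bytes.
print("é".encode("utf-8"), "й".encode("utf-8"))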

This paper is intended for software developers interested in support for the Unicode standard in the Solaris™ 7 operating environment. It discusses the following topics:

" An overview of multilingual computing, and how Unicode and the internationalization framework in the Solaris 7 operating environment work together to achieve this aim
" The Unicode standard and support for it within the Solaris operating environment
" Unicode in the Solaris 7 Operating Environment
" How developers can add Unicode support to their applications
" Codeset conversions

Unicode And Multilingual Computing

It is not a new idea that today's global economy demands global computing solutions. Instant communications and the free flow of information across continents - and across computer platforms - characterize the way the world has been doing business for some time. The widespread use of the Internet and the arrival of electronic commerce (e-commerce) together offer companies and individuals a new set of horizons to explore and master. In the global audience, there are always people and businesses at work - 24 hours of the day, 7 days a week. So global computing can only grow.

What is new is the increasing demand of users for a computing environment that is in harmony with their own cultural and linguistic requirements. Users want applications and file formats that they can share with colleagues and customers an ocean away, application interfaces in their own language, and time and date displays that they understand at a glance. Essentially, users want to write and speak at the keyboard in the same way that they always write and speak. Sun Microsystems addresses these needs at various levels, bringing together the components that make possible a truly multilingual computing environment.

Ubiquitous Networking (Information Technology Seminar Topics)

Definition

Mobile computing devices have changed the way we look at computing. Laptops and personal digital assistants (PDAs) have unchained us from our desktop computers. A group of researchers at AT&T Laboratories Cambridge are preparing to put a new spin on mobile computing. In addition to taking the hardware with you, they are designing a ubiquitous networking system that allows your program applications to follow you wherever you go.

By using a small radio transmitter and a building full of special sensors, your desktop can be anywhere you are, not just at your workstation. At the press of a button, the computer closest to you in any room becomes your computer for as long as you need it. In addition to computers, the Cambridge researchers have designed the system to work for other devices, including phones and digital cameras. As we move closer to intelligent computers, they may begin to follow our every move.

The essence of mobile computing is that a user's applications are available, in a suitably adapted form, wherever that user goes. Within a richly equipped networked environment such as a modern office the user need not carry any equipment around; the user-interfaces of the applications themselves can follow the user as they move, using the equipment and networking resources available. We call these applications Follow-me applications.


Context-Aware Application

A context-aware application is one which adapts its behaviour to a changing environment. Other examples of context-aware applications are 'construction-kit computers' which automatically build themselves by organizing a set of proximate components to act as a more complex device, and 'walk-through videophones' which automatically select streams from a range of cameras to maintain an image of a nomadic user. Typically, a context-aware application needs to know the location of users and equipment, and the capabilities of the equipment and networking infrastructure. In this paper we describe a sensor-driven, or sentient, computing platform that collects environmental data, and presents that data in a form suitable for context-aware applications.

The platform we describe has five main components (a small, illustrative sketch follows the list):
1. A fine-grained location system, which is used to locate and identify objects.
2. A detailed data model, which describes the essential real world entities that are involved in mobile applications.
3. A persistent distributed object system, which presents the data model in a form accessible to applications.
4. Resource monitors, which run on networked equipment and communicate status information to a centralized repository.
5. A spatial monitoring service, which enables event-based location-aware applications.
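
To make the idea concrete, here is a minimal, purely hypothetical Python sketch in the spirit of components 1 and 5: identified objects with fine-grained positions, and a spatial monitor that fires an event when an object enters a region. Every class and field name is invented; the real AT&T platform is far richer than this.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class LocatedObject:
    """An identified object with a fine-grained position (metres)."""
    name: str
    position: Tuple[float, float]

@dataclass
class Region:
    """An axis-aligned area of interest, e.g. the space in front of a workstation."""
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, pos: Tuple[float, float]) -> bool:
        x, y = pos
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

class SpatialMonitor:
    """Raise an event whenever a located object is seen inside a subscribed region."""
    def __init__(self) -> None:
        self.subscriptions: List[Tuple[Region, Callable[[LocatedObject, Region], None]]] = []

    def subscribe(self, region: Region, callback) -> None:
        self.subscriptions.append((region, callback))

    def update(self, obj: LocatedObject) -> None:
        for region, callback in self.subscriptions:
            if region.contains(obj.position):
                callback(obj, region)

# A "follow-me" application: move the user's desktop to the nearest machine.
monitor = SpatialMonitor()
desk_a = Region("workstation-A", 0, 0, 2, 2)
monitor.subscribe(desk_a, lambda o, r: print(f"Move {o.name}'s desktop to {r.name}"))

monitor.update(LocatedObject("alice", (1.0, 1.5)))   # inside -> event fires
monitor.update(LocatedObject("alice", (5.0, 5.0)))   # outside -> nothing happens
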
Finally, we describe an example application to show how this platform may be used.

IDS (Information Technology Seminar Topics)

Definition

A correct firewall policy can minimize the exposure of many networks; however, firewalls are quite useless against attacks launched from within. Hackers are also evolving their attacks and network subversion methods. These techniques include email-based Trojans, stealth scanning techniques, and malicious code and attacks that bypass firewall policies by tunneling over allowed protocols such as ICMP, HTTP, DNS, etc. Hackers are also very good at creating and releasing malware for the ever-growing list of application vulnerabilities, to compromise the few services that are let through by a firewall.

An IDS arms your business against attacks by continuously monitoring network activity and ensuring that all activity is normal. If the IDS detects malicious activity, it responds immediately by destroying the attacker's access and shutting down the attack. An IDS reads network traffic and looks for patterns of attacks, or signatures; if a signature is identified, the IDS sends an alert to the Management Console and a response is immediately deployed.
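
At its simplest, signature matching is a search for known byte patterns in captured traffic. The toy sketch below illustrates only that step; the signatures and payloads are made up, and real IDS engines use far richer rule languages, protocol decoding and response mechanisms.

# Toy signature database: pattern bytes -> human-readable attack name (invented).
SIGNATURES = {
    b"/etc/passwd":  "attempt to read the Unix password file",
    b"cmd.exe":      "IIS directory-traversal / command execution attempt",
    b"' OR '1'='1":  "SQL injection probe",
}

def inspect(payload: bytes) -> None:
    """Compare a captured payload against every signature and alert on a match."""
    for pattern, name in SIGNATURES.items():
        if pattern in payload:
            print(f"ALERT -> management console: {name}")
            return            # a response (e.g. dropping the session) would follow
    # No signature matched: the traffic is treated as normal.

inspect(b"GET /scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir HTTP/1.0")
inspect(b"GET /index.html HTTP/1.0")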

What is intrusion?

An intrusion is somebody attempting to break into or misuse your system. The word "misuse" is broad, and can reflect something as severe as stealing confidential data or something as minor as misusing your email system for spam.

What is an IDS?

An IDS is the real-time monitoring of network/system activity and the analysing of data for potential vulnerabilities and attacks in progress.
Need For IDS

Who gets attacked?

Internet Information Services (IIS) web servers - which host web pages and serve them to users - are highly popular among business organizations, with over 6 million such servers installed worldwide. Unfortunately, IIS web servers are also popular among hackers and malicious fame-seekers - as a prime target for attacks!

As a result, every so often, new exploits emerge which endanger your IIS web server's integrity and stability. Many administrators have a hard time keeping up with the various security patches released for IIS to cope with each new exploit, making it easy for malicious users to find a vulnerable web server on the Internet. There are multiple issues which can completely endanger your Web server - and possibly your entire corporate network and reputation.

People feel there is nothing on their system that anybody would want, but what they are unaware of is the issue of legal liability. You are potentially liable for damages caused by a hacker using your machine. You must be able to prove to a court that you took "reasonable" measures to defend yourself from hackers. For example, suppose you put a machine on a fast link (cable modem or DSL) and left administrator/root accounts open with no password. If a hacker then breaks into that machine and uses it to break into a bank, you may be held liable because you did not take the most obvious measures in securing the machine.