EXPERT'S EDGE


"The greatest barrier to success is the fear of failure"

by: Sven Goran Eriksson

Tuesday, February 2, 2010

Unlicensed Mobile Access (Information Technology Seminar Topics)

Definition

During the past year, mobile and integrated fixed/mobile operators announced an increasing number of fixed-mobile convergence initiatives, many of which are materializing in 2006. The majority of these initiatives are focused around UMA, the first standardized technology enabling seamless handover between mobile radio networks and WLANs. Clearly, in one way or another, UMA is a key agenda item for many operators.
Operators are looking at UMA to address the indoor voice market (i.e. accelerate or control fixed-to-mobile substitution) as well as to enhance the performance of mobile services indoors. Furthermore, these operators are looking at UMA as a means to fend off the growing threat from new Voice-over-IP (VoIP) operators.

However, when evaluating a new 3GPP standard like UMA, many operators ask themselves how well it fits with other network evolution initiatives, including:
o UMTS
o Soft MSCs
o IMS Data Services
o I-WLAN
o IMS Telephony
This whitepaper aims to clarify the position of UMA in relation to these other strategic initiatives. For a more comprehensive introduction to the UMA opportunity, refer to "The
UMA Opportunity," available on the Kineto web site (www.kineto.com).

Mobile Network Reference Model

To best understand the role UMA plays in mobile network evolution, it is helpful to first
introduce a reference model for today's mobile networks. Figure 1 provides a simplified
model for the majority of 3GPP-based mobile networks currently in deployment. Based
on Release 99, they typically consist of the following:

o GSM/GPRS/EDGE Radio Access Network (GERAN): In mature mobile markets, the
GERAN typically provides good cellular coverage throughout an operator's service
territory and is optimized for the delivery of high-quality circuit-based voice services.
While capable of delivering mobile data (packet) services, GERAN data throughput is
typically under 80Kbps and network usage cost is high.

o Circuit Core/Services: The core circuit network provides the services responsible for the vast majority of mobile revenues today. The circuit core consists of legacy Serving and Gateway Mobile Switching Centers (MSCs) providing mainstream mobile telephony services as well as a number of systems supporting the delivery of other circuit-based services including SMS, voice mail and ring tones.

o Packet Core/Services: The core packet network is responsible for providing mobile data services. The packet core consists of GPRS infrastructure (SGSNs and GGSNs) as well as a number of systems supporting the delivery of packet-based services including WAP and MMS.

Introducing UMA into Mobile Networks

For mobile and integrated operators, adding UMA to existing networks is not a major undertaking. UMA essentially defines a new radio access network (RAN), the UMA access network. Like GSM/GPRS/EDGE (GERAN) and UMTS (UTRAN) RANs, a UMA access network (UMAN) leverages well-defined, standard interfaces into an operator's existing circuit and packet core networks for service delivery. However, unlike GSM or UMTS RANs, which utilize expensive private backhaul circuits as well as costly base stations and licensed spectrum for wireless coverage, a UMAN enables operators to leverage their subscribers' existing broadband access connections for backhaul as well as inexpensive WLAN access points and unlicensed spectrum for wireless coverage.
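
To make the "new RAN, same core" point concrete, here is a minimal Python sketch; the class and field names are invented for illustration and are not from the whitepaper. It models the idea that a UMAN presents the same core-facing interfaces as the existing GERAN (in UMA these are the standard A and Gb interfaces), so the circuit and packet cores themselves are left unchanged.

from dataclasses import dataclass

# Minimal illustrative sketch (names invented, not from the whitepaper): every
# RAN exposes the same standard interfaces toward the operator's existing
# circuit and packet cores, so adding a UMAN leaves the core network untouched.
@dataclass
class RadioAccessNetwork:
    name: str
    coverage: str            # licensed macro cells vs. unlicensed WLAN
    backhaul: str            # private leased circuits vs. subscriber broadband
    circuit_interface: str   # interface toward the MSC (circuit core)
    packet_interface: str    # interface toward the SGSN (packet core)

geran = RadioAccessNetwork("GERAN", "licensed spectrum, operator base stations",
                           "private backhaul circuits", "A", "Gb")
uman = RadioAccessNetwork("UMAN", "unlicensed spectrum, subscriber WLAN access points",
                          "subscriber broadband connections", "A", "Gb")

# The cores see the UMAN exactly as they see any other RAN.
assert (geran.circuit_interface, geran.packet_interface) == \
       (uman.circuit_interface, uman.packet_interface)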

Tempest and Echelon (Information Technology Seminar Topics)

Introduction

Spying has been an especially sensitive topic since the terrorist attacks of September 11 in New York. In the novel 1984, George Orwell foretold a future where individuals had no expectation of privacy because the state monopolized the technology of spying. The National Security Agency (NSA) of the USA has now developed a secret project, named Echelon, to spy on people by tracing their messages through technology-enabled interception, with the aim of detecting terrorist activities across the globe. It leaves any traditional method of interception far behind.

The secret project developed by the NSA and its allies traces every single transmission, down to a single keystroke. The USA's allies in this project are the UK, Australia, New Zealand and Canada. Echelon is built on enormous computing power, with computers connected through satellites all over the world. With this project the NSA has gone beyond its remarkable earlier methods, Tempest and Carnivore.

Echelon is the technology for sniffing through messages sent over a network or any transmission medium, including wireless links. Tempest is the technology for intercepting electromagnetic waves over the air: it picks up the electromagnetic radiation emitted by any device, even the monitor of a computer. Tempest can capture a computer's screen contents and keystrokes through walls, even when the computer is not connected to any network. Compared with this, the traditional way of hacking offers little advantage for spying.
For most people it is hard to believe that the contents of their monitor can be reproduced from up to a kilometre away, with no transmission medium between the interception equipment and their computer. Yet the technology makes it possible to reconstruct anything from a computer's display to its hard disk and memory (RAM) at a distance, without any physical or visual contact, purely from the electromagnetic waves the device radiates.

The main principle behind Tempest (Transient Electromagnetic Pulse Emanation Standard) is that any electronic or electrical device emits electromagnetic radiation with a characteristic signature when it operates. For example, the picture tube of a computer monitor radiates as its electron beam scans the screen vertically and horizontally. The radiation is very weak and causes no harm to humans, but it occupies a specific frequency range. With sufficiently powerful receiving equipment, and powerful filtering methods to correct the errors introduced along the way, those emissions can be captured and the original signal reconstructed. The device was never meant to transmit anything; the waves are incidental, but with the right receiver they can nevertheless be traced.

For the Echelon project, the NSA uses supercomputers, taking advantage of distributed computing, to sniff through packets and any messages carried as electromagnetic waves. Messages are first intercepted with Tempest and with Carnivore; every packet is then examined by the NSA for security purposes.

Zero Knowledge Protocols and Proof Systems (Information Technology Seminar Topics)

Zero-knowledge protocols allow identification, key exchange and other basic cryptographic operations to be implemented without leaking any secret information during the conversation, and with smaller computational requirements than comparable public-key protocols. Zero-knowledge protocols therefore seem very attractive, especially in smart-card and embedded applications. A great deal has been written about zero-knowledge protocols in theory, but far less practical, down-to-earth material is available, even though zero-knowledge techniques have been used in many applications. Some of the practical aspects of zero-knowledge protocols and related issues are discussed here, with minimalistic practical environments in mind. The hardware technology used in these environments is described, and the resulting real-world problems are related to zero-knowledge protocols. A very lightweight zero-knowledge protocol is outlined, and its possible uses, cryptographic strengths and weaknesses are analyzed.

ZERO-KNOWLEDGE PROTOCOL BASICS

Zero-knowledge protocols, as their name says, are cryptographic protocols that do not reveal the information or secret itself during the protocol run, either to the other party or to any eavesdropper. They have some very interesting properties: for example, since the secret itself (e.g. your identity) is never transferred to the verifying party, the verifier cannot use it to masquerade as you to any third party.
Although zero-knowledge protocols look a bit unusual, most common cryptographic problems can be solved with them just as well as with public-key cryptography. For some applications, such as key exchange (to set up normal cheap and fast symmetric encryption on the communications link) or proving mutual identities, zero-knowledge protocols are often a very good and suitable solution.

2.1 THE PARTIES IN A ZERO-KNOWLEDGE PROTOCOL

The following people appear in zero-knowledge protocols:

Peggy the Prover
Peggy has some information that she wants to prove to Victor, but she doesn't want to tell the secret itself to Victor.

Victor the Verifier
Victor asks Peggy a series of questions, trying to find out whether Peggy really knows the secret or not. Victor learns nothing of the secret itself, even if he cheats or does not adhere to the protocol.

Eve the Eavesdropper
Eve is listening to the conversation between Peggy and Victor. A good zero-knowledge protocol also ensures that no third party learns anything about the secret, and that Eve cannot even replay the conversation later to convince anyone else.
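
To make these roles concrete, the following Python sketch shows one round of a classic Fiat-Shamir-style zero-knowledge identification protocol with toy parameters. It is a standard textbook example, not the lightweight protocol outlined later in this seminar: Peggy proves she knows a secret s with v = s^2 mod n without ever sending s, and anything Eve overhears could equally well have been simulated without the secret.

import random

# One round of a Fiat-Shamir-style zero-knowledge identification protocol.
# Toy parameters only: a real deployment would use a large RSA-size modulus
# and repeat the round many times.
n = 3233            # public modulus (61 * 53); its factorization stays secret
s = 123             # Peggy's secret
v = (s * s) % n     # public value registered with Victor: v = s^2 mod n

def protocol_round():
    # Peggy commits to a fresh random value without revealing it.
    r = random.randrange(1, n)
    x = (r * r) % n                  # commitment sent to Victor

    # Victor replies with a random one-bit challenge.
    b = random.randrange(2)

    # Peggy answers with r itself, or with r*s mod n, depending on the challenge.
    y = r if b == 0 else (r * s) % n

    # Victor checks the answer against the commitment and the public value v;
    # Eve sees only (x, b, y), which reveal nothing about s.
    expected = x if b == 0 else (x * v) % n
    return (y * y) % n == expected

# An honest Peggy passes every round; a cheater succeeds with probability 1/2
# per round, so k rounds give confidence 1 - (1/2)^k.
assert all(protocol_round() for _ in range(20))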

UMTS (Information Technology Seminar Topics)


Standing for "Universal Mobile Telecommunications System", UMTS represents an evolution in terms of services and data speeds from today's "second generation" mobile networks. As a key member of the "global family" of third generation (3G) mobile technologies identified by the ITU, UMTS is the natural evolutionary choice for operators of GSM networks, currently representing a customer base of more than 850 million end users in 195 countries and representing over 70% of today's digital wireless market.

Using fresh radio spectrum to support increased numbers of customers in line with industry forecasts of demand for data services over the next decade and beyond, "UMTS" is synonymous with a choice of WCDMA radio access technology that has already been selected by approaching 120 licensees worldwide.

WHAT IS UMTS?

Universal Mobile Telecommunications System (UMTS) is one of the third-generation (3G) mobile phone technologies. It uses WCDMA as the underlying standard, is standardized by the Third Generation Partnership Project (3GPP), and represents the European answer to the International Telecommunication Union's (ITU) International Mobile Telecommunications-2000 requirements for 3G cellular radio systems. UMTS is sometimes marketed as 3GSM, emphasizing the combination of the 3G nature of the technology and the GSM standard, which it was designed to succeed.

FEATURES

UMTS supports data transfer rates of up to 1920 kbit/s, although typical users can expect performance of around 384 kbit/s in a heavily loaded real-world system. However, this is still much greater than the 14.4 kbit/s of a single GSM error-corrected data channel or multiple 14.4 kbit/s channels in HSCSD, and it offers the first prospect of practical, inexpensive access to the World Wide Web on a mobile device, together with general use of MMS. The precursor to 3G is the now widely used GSM mobile telephony system, referred to as 2G. There is also an evolution path from 2G, called GPRS, also known as 2.5G. GPRS supports a much better data rate (up to a maximum of 140.8 kbit/s) and is packet based rather than connection oriented. It is deployed in many places where GSM is used.
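
To put these figures in perspective, the short Python sketch below (illustrative only, using the rates quoted above) compares how long a 1 MB download would take over each bearer.

# Back-of-the-envelope comparison of the data rates quoted above (illustrative
# only; real throughput depends on cell load and radio conditions).
rates_kbit_per_s = {
    "GSM CSD, one channel": 14.4,
    "GPRS, quoted maximum": 140.8,
    "UMTS, typical loaded": 384.0,
    "UMTS, peak": 1920.0,
}

file_size_kbit = 1 * 1024 * 8  # a 1 MB download expressed in kilobits

for name, rate in rates_kbit_per_s.items():
    seconds = file_size_kbit / rate
    print(f"{name:>22}: {seconds:7.1f} s")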

In the near future today's UMTS networks will be upgraded with High Speed Downlink Packet Access (HSDPA). This will make a downlink transfer speed of up to 10 Mbit/s possible.

Marketing material for UMTS has emphasised the possibility of mobile videoconferencing, although whether there is actually a mass market for this service remains untested.

Other possible uses for UMTS include the downloading of music.

Setting up a LAN using Linux (Information Technology Seminar Topics)

Linux is increasingly popular in the computer networking/telecommunications industry. Acquiring the Linux operating system is a relatively simple and inexpensive task since virtually all of the source code can be downloaded from several different FTP or HTTP sites on the Internet.

This seminar describes how to put together a Local Area Network (LAN) consisting of two or more computers using the Red Hat Linux 6.2 operating system. A LAN is a communications network that interconnects a variety of devices and provides a means for exchanging information among those devices. The size and scope of a LAN is usually small, covering a single building or group of buildings. In a LAN, modems and phone lines are not required, and the computers should be close enough to run a network cable between them.

For each computer that will participate in the LAN, you'll need a network interface card (NIC) to which the network cable will be attached. You will also need to assign a unique hostname and IP address to each computer in the LAN.

INTRODUCTION TO TCP/IP

2.1. INTRODUCTION

TCP/IP is the suite of protocols used by the Internet and most LANs throughout the world. In TCP/IP, every host (computer or other communications device) that is connected to the network has a unique IP address. An IP address is composed of four octets (numbers in the range of 0 to 255) separated by decimal points, and it uniquely identifies a host or computer on the LAN. For example, a computer with the hostname Morpheus could have an IP address of 192.168.7.127. You should avoid giving two or more computers the same IP address, and you should use the range of IP addresses that are reserved for private, local area networks; this range usually begins with the octets 192.168.
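
As a quick illustration, Python's standard ipaddress module can confirm that an address like the Morpheus example lies in the reserved private range; the snippet below is a minimal sketch of that check.

import ipaddress

# Check that the example host address falls inside the reserved private range.
host = ipaddress.ip_address("192.168.7.127")
private_lan = ipaddress.ip_network("192.168.0.0/16")

print(host in private_lan)   # True: the address belongs to the 192.168.x.x range
print(host.is_private)       # True: also flagged as private by the standard library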

2.2 LAN NETWORK ADDRESS

The first three octets of an IP address should be the same for all computers in the LAN. For example, if a total of 128 hosts exist in a single LAN, the IP addresses could be assigned starting with 192.168.1.x, where x represents a number in the range of 1 to 128. You could create consecutive LANs within the same company in a similar manner, each consisting of up to another 128 computers. Of course, you are not limited to 128 computers, as there are other ranges of IP addresses that allow you to build even larger networks.

There are different classes of networks that determine the size and total possible unique IP addresses of any given LAN. For example, a class A LAN can have over 16 million unique IP addresses, while a class B LAN can have over 65,000. The size of your LAN depends on which reserved address range you use and the subnet mask associated with that range.
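
The relationship between an address range and the number of hosts it can hold can be verified with the same ipaddress module; the sketch below uses representative /8, /16 and /24 prefixes (the traditional class A, B and C sizes) and reproduces the figures quoted above.

import ipaddress

# Illustrative only: the prefix length determines how many host addresses a
# network can contain (two addresses are reserved for network and broadcast).
for label, prefix in [("Class A, e.g. 10.0.0.0/8", "10.0.0.0/8"),
                      ("Class B, e.g. 172.16.0.0/16", "172.16.0.0/16"),
                      ("Class C, e.g. 192.168.1.0/24", "192.168.1.0/24")]:
    net = ipaddress.ip_network(prefix)
    print(f"{label}: {net.num_addresses - 2:,} usable host addresses")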

Protein Memory (Information Technology Seminar Topics)

Since the dawn of time, man has tried to record important events and techniques of everyday life. At first, it was sufficient to paint on the family cave wall how one hunted. Then spoken languages were invented, and the need arose to record what was said without hearing it firsthand. Years later, early scholars invented writing to convey what was being said. Pictures gave way to letters representing spoken sounds. Eventually clay tablets gave way to parchment, which gave way to paper. Paper was, and still is, the main way people convey information. However, in the mid-twentieth century computers began to come into general use.

Computers have gone through their own evolution in storage media. In the forties, fifties, and sixties, everyone who took a computer course used punched cards to give the computer information and store data. In 1956, researchers at IBM developed the first disk storage system, called RAMAC (Random Access Method of Accounting and Control). Since the days of punch cards, computer manufacturers have strived to squeeze more data into smaller spaces. That mission has produced both competing and complementary data storage technologies, including electronic circuits, magnetic media like hard disks and tape, and optical media such as compact disks.

Today, companies constantly push the limits of these technologies to improve their speed, reliability, and throughput -- all while reducing cost. The fastest and most expensive storage technology today is based on electronic storage in a circuit such as a solid state "disk drive" or flash RAM. This technology is getting faster and is able to store more information thanks to improved circuit manufacturing techniques that shrink the sizes of the chip features. Plans are underway for putting up to a gigabyte of data onto a single chip.

Magnetic storage technologies used for most computer hard disks are the most common and provide the best value for fast access to a large storage space. At the low end, disk drives cost as little as 25 cents per megabyte and provide access times of around ten milliseconds. Drives can be ganged to improve reliability or throughput in a Redundant Array of Inexpensive Disks (RAID). Magnetic tape is somewhat slower than disk, but it is significantly cheaper per megabyte. At the high end, manufacturers are starting to ship tapes that hold 40 gigabytes of data. These can be arrayed together into a Redundant Array of Inexpensive Tapes (RAIT) if the throughput needs to be increased beyond the capability of one drive.

For randomly accessible removable storage, manufacturers are beginning to ship low-cost cartridges that combine the speed and random access of a hard drive with the low cost of tape. These drives can store from 100 megabytes to more than one gigabyte per cartridge.

Standard compact disks are also gaining a reputation as an incredibly cheap way of delivering data to desktops. They are the cheapest distribution medium around when purchased in large quantities ($1 per 650 megabyte disk). This explains why so much software is sold on CD-ROM today. With desktop CD-ROM recorders, individuals are able to publish their own CD-ROMs.

With existing methods fast approaching their limits, it is no wonder that a number of new storage technologies are being developed. Currently, researchers are looking at protein-based memory to compete with the speed of electronic memory, the reliability of magnetic hard disks, and the capacities of optical/magnetic storage. We contend that three-dimensional optical memory devices made from bacteriorhodopsin, utilizing the two-photon read-and-write method, are such a technology, and that the future of memory lies with them.

Plastic Memory (Information Technology Seminar Topics)

A conducting plastic has been used to create a new memory technology with the potential to store a megabit of data in a millimeter-square device - 10 times denser than current magnetic memories. The device should also be cheap and fast, but cannot be rewritten, so would only be suitable for permanent storage.

Imagine a scenario where the memory in your digital camera or personal digital assistant is partially based on one of the most flexible materials made by man: plastic.

Scientists at HP Labs and Princeton University are excited about a new memory technology that could store more data and cost less than traditional silicon-based chips for mobile devices such as handheld computers, cell phones and MP3 players.

But this chip is different from silicon technologies such as the popular flash memory, the researchers said, because it is partially made of plastic, in addition to a foil substrate and some silicon. And while flash memory can be rewritten, the new technology can be written to only once. It can, however, be read many times, retains data without power, and requires no laser or motor to read or write.

HP scientist Warren Jackson said simplifying the production of such memory chips is a key factor, because it has the potential to lower the cost of memory for customers on a per-megabyte basis. The technology could also potentially store more data than flash, and perhaps even become fast enough to store video, he said.

"This has the ability to work for a slightly different market than flash because we would now have the ability to not be able to write it a bunch of applications, but just read it so it becomes a permanent record.," Jackson told internetnews.com.


Moreover, this could be favorable to companies concerned about compliance regulations such as HIPAA and Sarbanes-Oxley, ensuring that the integrity of data on documents is preserved over long periods of time, the scientists said.

According to research analysts, finding alternative sources of memory has become a popular research issue because flash memory is expected to hit serious limitations as device dimensions continue to shrink to fit a growing variety of form factors. Smaller memory geometries mean the transistors leak more electricity and draw more power.

But Gartner research analyst Richard Gordon said engineering obstacles facing memory technologies stretch back 30-plus years and noted that just last week Intel announced a new transistor to take care of the leakage problem.

"Flash technology is currently at a process node of the .11 micron level," Gordon said "There is a roadmap to accommodate it for the next 10 years so it still has a long time to go before it runs out of steam. I don't see that changing unless there is a technology in terms of cost-per-bit and performance that blows flash out of the water."

While unique, the concept of plastic or polymer-based memory is not entirely alien. Rival chipmakers are also looking into polymer-based memory. Intel has a program to develop ferroelectric polymer memory. AMD recently bought Coatue, one of several companies working on polymer memory; another is Thin Film Electronics, a Swedish company in which Intel has a stake.