Pages

Friday, August 31, 2007

IP spoofing

Criminals have long employed the tactic of masking their true identity, from disguises to aliases to caller-ID blocking. It should come as no surprise, then, that criminals who conduct their nefarious activities on networks and computers employ such techniques as well. IP spoofing is one of the most common forms of on-line camouflage. In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by "spoofing" the IP address of that machine. In the subsequent pages of this report, we will examine the concepts of IP spoofing: why it is possible, how it works, what it is used for and how to defend against it.
The concept of IP spoofing was initially discussed in academic circles in the 1980s. In his April 1989 article "Security Problems in the TCP/IP Protocol Suite," S. M. Bellovin of AT&T Bell Labs was among the first to identify IP spoofing as a real risk to computer networks. Bellovin describes how Robert Morris, creator of the now infamous Internet Worm, figured out how TCP created sequence numbers and forged a TCP packet sequence. This TCP packet included the destination address of his "victim," and using an IP spoofing attack Morris was able to obtain root access to his targeted system without a user ID or password. Another infamous attack, Kevin Mitnick's Christmas Day crack of Tsutomu Shimomura's machine, employed IP spoofing and TCP sequence-prediction techniques. While the popularity of such cracks has decreased due to the demise of the services they exploited, spoofing can still be used and needs to be addressed by all security administrators. A common misconception is that IP spoofing can be used to hide your IP address while surfing the Internet, chatting on-line, sending e-mail, and so forth. This is generally not true. Forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection. However, IP spoofing is an integral part of many network attacks that do not need to see responses (blind spoofing).
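The mechanics of forging a source address are easy to sketch. The snippet below is a minimal illustration (not taken from any of the attacks above): it packs a 20-byte IPv4 header carrying an arbitrary source address, which is exactly the field an attacker controls when writing to a raw socket. Actually injecting such a packet onto a network would additionally require raw-socket privileges; the addresses and identification value here are made up.

```python
import struct
import socket

def ip_checksum(header: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) + header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_spoofed_ip_header(src_ip: str, dst_ip: str, payload_len: int = 0) -> bytes:
    """Build a 20-byte IPv4 header whose source address is forged.
    The OS normally fills in the true source; a raw socket lets the
    sender supply any value, which is the essence of IP spoofing."""
    version_ihl = (4 << 4) | 5              # IPv4, 5 x 32-bit header words
    total_length = 20 + payload_len
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,
        0x1234,                             # identification (arbitrary)
        0,                                  # flags / fragment offset
        64,                                 # TTL
        socket.IPPROTO_TCP,
        0,                                  # checksum placeholder
        socket.inet_aton(src_ip),           # forged source address
        socket.inet_aton(dst_ip),
    )
    csum = ip_checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

hdr = build_spoofed_ip_header("10.0.0.99", "192.168.1.1")
print(socket.inet_ntoa(hdr[12:16]))  # the source field carries the forged address
```

Note that only the header construction is shown; responses to such a packet would go to 10.0.0.99, not to the real sender, which is why spoofing suits blind attacks.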

Internet Access via Cable TV Network

The Internet is a network of networks in which computers throughout the world connect to each other. The connection is made possible with the help of an ISP (Internet Service Provider). Most Internet users depend on dial-up connections, which have many disadvantages, such as very poor speed and frequent disconnections. To solve this problem, Internet data can be carried over cable TV networks wired to the user's computer. The connection types in use are PSTN (dial-up) connections, ISDN connections, and Internet via cable networks. The advantages of cable access include high availability, high bandwidth at low cost, high-speed data access, and always-on connectivity.

The number of households getting on the Internet has increased exponentially in the recent past. First-time Internet users are amazed at the Internet's richness of content and personalization, never before offered by any other medium. But this initial awe lasts only until they experience the slow speed of Internet content delivery. Hence the popular quip "World Wide Wait" (not World Wide Web). There is pent-up demand for high-speed (or broadband) Internet access for fast web browsing and more effective telecommuting.

Visit http://www.seminarsonly.com for more details.

Gaming Consoles

Gaming consoles have proved themselves to be the best in digital entertainment. Gaming consoles were designed for the sole purpose of playing electronic games and nothing else. A gaming console is a highly specialised piece of hardware that has rapidly evolved since its inception incorporating all the latest advancements in processor technology, memory, graphics, and sound among others to give the gamer the ultimate gaming experience.
The global computer and video game industry, generating revenue of over 20 billion U.S. dollars a year, forms a major part of the entertainment industry. The sales of major games are counted in millions (and these are for software units that often cost 30 to 50 UK pounds each), meaning that total revenues often match or exceed cinema movie revenues. Game playing is widespread; surveys collated by organisations such as the Interactive Digital Software Association indicate that up to 60 per cent of people in developed countries routinely play computer or video games, with an average player age in the mid to late twenties, and only a narrow majority being male. Add on those who play the occasional game of Solitaire or Minesweeper on the PC at work, and one observes a phenomenon more common than buying a newspaper, owning a pet, or going on holiday abroad.

Fluorescent Multi-layer Disc

The introduction of the Fluorescent Multi-layer Disc (FMD) smashes the barriers of existing data storage formats. Depending on the application and the market requirements, the first generation of 120 mm (CD-sized) FMD ROM discs will hold 20-100 gigabytes of pre-recorded data on 12-30 data layers, with a total thickness of under 2 mm. In comparison, a standard DVD disc holds just 4.7 gigabytes. With C3D's (Constellation 3D) proprietary parallel reading and writing technology, data transfer speeds can exceed 1 gigabit per second, again depending on the application and market need.

Extreme Programming (XP)

Extreme Programming (XP) is actually a deliberate and disciplined approach to software development. About six years old, it has already been proven at many companies of all different sizes and industries worldwide. XP is successful because it stresses customer satisfaction. The methodology is designed to deliver the software your customer needs when it is needed. XP empowers software developers to confidently respond to changing customer requirements, even late in the life cycle. This methodology also emphasizes teamwork. Managers, customers, and developers are all part of a team dedicated to delivering quality software. XP implements a simple, yet effective way to enable groupware style development.

Earth Simulator

In July 1996, as part of the Global Change Prediction Plan, the promotion of research and development for the Earth Simulator was reported to the Science and Technology Agency, based on the report titled "For Realization of the Global Change Prediction" by the Aero-Electronics Technology Committee.

In April 1997, the budget for the development of the Earth Simulator was authorized to be allocated to the National Space Development Agency of Japan (NASDA) and the Power Reactor and Nuclear Fuel Development Corporation (PNC). The Earth Simulator Research and Development Center was established, with Dr. Miyoshi appointed as director of the center. The project had begun.


X-INTERNET

X Internet offers several important advantages over the Web:
1) It rides Moore's Law -- the wide availability of cheap, powerful, low real-estate processing;
2) it leverages ever-dear bandwidth -- once the connection is made, only a small number of bits are exchanged, unlike the Web, where lots of pages are shuttled out to the client; and
3) X Internet will be far more peer-to-peer -- unlike the server-centric Web.

This scenario could be marred by two threats: viruses and lack of standards. Once executables start to move fluidly through the Net, viruses will have perfect conditions to propagate. Standards, or rather the lack thereof, will block the quick arrival of X Internet. I can't see Microsoft, Sun, IBM, or other traditionalists setting the standards. The Web-killer's design will emerge from pure research, academe, or open source -- as did the Web.

3-D ICs

The three dimensional (3-D) chip design strategy exploits the vertical dimension to alleviate the interconnect related problems and to facilitate heterogeneous integration of technologies to realize system on a chip (SoC) design. By simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short and vertical interlayer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved.

In the 3-D design architecture, an entire chip is divided into a number of blocks, and each block is placed on a separate layer of Si; these layers are stacked on top of each other.

Thursday, June 7, 2007

Seminar Topics

http://www.seminarsonly.com and http://www.guidance4all.info are sites that contain a fantastic collection of seminar topics. Below we mention some of the abstracts of the topics. For more information, please visit the sites mentioned above.

Smart Dust

Smart dust consists of tiny electronic devices designed to capture mountains of information about their surroundings while literally floating on air. Nowadays, sensors, computers and communicators are shrinking down to ridiculously small sizes. If all of these are packed into a single tiny device, it can open up new dimensions in the field of communications. The idea behind 'smart dust' is to pack sophisticated sensors, tiny computers and wireless communicators into a cubic-millimeter mote to form the basis of integrated, massively distributed sensor networks. The motes will be light enough to remain suspended in air for hours. As they drift on the wind, they can monitor the environment for light, sound, temperature, chemical composition and a wide range of other information, and beam that data back to a base station miles away.

Smart Dust requires both evolutionary and revolutionary advances in miniaturization, integration, and energy management. Designers can use microelectromechanical systems to build small sensors, optical communication components, and power supplies, whereas microelectronics provides increasing functionality in smaller areas, with lower energy consumption. The power system consists of a thick-film battery, a solar cell with a charge-integrating capacitor for periods of darkness, or both. Depending on its objective, the design integrates various sensors, including light, temperature, vibration, magnetic field, acoustic, and wind shear, onto the mote. An integrated circuit provides sensor-signal processing, communication, control, data storage, and energy management. A photodiode allows optical data reception. There are presently two transmission schemes: passive transmission using a corner-cube retroreflector, and active transmission using a laser diode and steerable mirrors.

M-Voting

M-Voting (an alternative to e-voting): Mobile technology has attained new heights, and the market trend suggests that every citizen of India will possess a mobile handset by the year 2010 (at cheaper service rates). When such a device is available, why not use it for a time-saving, cost-effective and secure method of voting? The concept is as follows:

1) Every citizen above the age of 18 has the right to vote, so obtaining their fingerprints and storing them in a database along with birth/death records becomes necessary.
2) The user sends his fingerprint (the print is encrypted and sent as a sequence of data in encoded form) to the service provider.
3) The service provider verifies the fingerprint, checks the validity of the vote, and sends a voter list (a mobile ballot paper) through SMS.
4) The user casts his vote and sends a second message.

Since mobile phones have connectivity with computer systems, votes are easy to store and access at the service provider, and results can be published instantly.
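The verification and one-vote-per-voter logic described above can be sketched in a few lines. Note the simplifying assumptions: fingerprint matching is reduced to an exact hash comparison (real fingerprint matching is fuzzy), and the voter records, sample fingerprints, and party names are all hypothetical.

```python
import hashlib

# Hypothetical voter roll: fingerprint digests registered in advance.
VOTER_DB = {
    hashlib.sha256(b"alice-fingerprint-sample").hexdigest(): {"voted": False},
    hashlib.sha256(b"bob-fingerprint-sample").hexdigest(): {"voted": False},
}

def cast_vote(fingerprint: bytes, choice: str, tally: dict) -> str:
    """Verify the sender against the roll, enforce one vote per voter,
    and record the choice in the running tally."""
    digest = hashlib.sha256(fingerprint).hexdigest()
    record = VOTER_DB.get(digest)
    if record is None:
        return "REJECTED: unknown fingerprint"
    if record["voted"]:
        return "REJECTED: already voted"
    record["voted"] = True
    tally[choice] = tally.get(choice, 0) + 1
    return "ACCEPTED"

tally = {}
print(cast_vote(b"alice-fingerprint-sample", "Party A", tally))  # ACCEPTED
print(cast_vote(b"alice-fingerprint-sample", "Party A", tally))  # REJECTED: already voted
```

A real deployment would also need the encrypted transport and SMS ballot delivery described in the steps above, which are outside this sketch.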

3G vs WiFi

3G: 3G is a technology for mobile service providers. Mobile services are provided by service providers that own and operate their own wireless networks and sell mobile services to end-users. Mobile service providers use licensed spectrum to provide wireless telephone coverage over some relatively large contiguous geographic service area; today this may include an entire country. From a user's perspective, the key feature of mobile service is that it offers ubiquitous and continuous coverage. To support the service, mobile operators maintain a network of interconnected and overlapping mobile base stations that hand off customers as those customers move among adjacent cells. Each mobile base station may support users up to several kilometers away. The cell towers are connected to each other by a backhaul network that also provides interconnection to the wireline Public Switched Telephone Network (PSTN) and other services. The mobile system operator owns the end-to-end network from the base stations to the backhaul networks to the point of interconnection with the PSTN. Third-generation (3G) mobile technologies will support higher-bandwidth digital communications. To expand the range and capability of data services that digital mobile systems can support, service providers will have to upgrade their networks to one of the 3G technologies, which can support data rates from 384 kbps up to 2 Mbps.
WiFi: WiFi is the popular name for the wireless Ethernet 802.11b standard for WLANs. WiFi allows collections of PCs, terminals, and other distributed computing devices to share resources and peripherals such as printers, access servers, etc. It extends Ethernet, one of the most popular LAN technologies, into the wireless domain.

Java Ring

A Java Ring is a finger ring that contains a small microprocessor with built-in capabilities for the user, a sort of smart card that is wearable on a finger. Sun Microsystems' Java Ring was introduced at their JavaOne Conference in 1998 and, instead of a gemstone, contained an inexpensive microprocessor in a stainless-steel iButton running a Java virtual machine and preloaded with applets (little application programs). The rings were built by Dallas Semiconductor.


Workstations at the conference had "ring readers" installed on them that downloaded information about the user from the conference registration system. This information was then used to enable a number of personalized services. For example, a robotic machine made coffee according to the user's preferences, which it downloaded when the user snapped the ring into another "ring reader."


Although Java Rings aren't widely used yet, such rings or similar devices could have a number of real-world applications, such as starting your car and having all your vehicle's components (such as the seat, mirrors, and radio selections) automatically adjust to your preferences.

Face Recognition Technology

Face is an important part of who we are and how people identify us. It is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up. Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like. Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks, which make up the different facial features. These landmarks are referred to as nodal points. There are about 80 nodal points on a human face.
Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others.
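A toy version of the nodal-point idea: represent each face as a small vector of measurements and match by distance to enrolled templates. The four measurements, the names, and the threshold below are invented for illustration; FaceIt's actual matching algorithm is proprietary and far more sophisticated.

```python
import math

def match_score(probe, enrolled):
    """Euclidean distance between nodal-point measurement vectors;
    smaller means more similar."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(probe, enrolled)))

def identify(probe, database, threshold=2.0):
    """Return the closest enrolled identity, or None if no face is close enough."""
    best_name, best_score = None, float("inf")
    for name, vec in database.items():
        score = match_score(probe, vec)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= threshold else None

# Hypothetical database of nodal-point measurements (e.g. eye spacing, nose width).
db = {
    "alice": [34.1, 52.0, 41.3, 28.7],
    "bob":   [30.2, 55.9, 44.0, 31.5],
}
print(identify([34.0, 52.2, 41.0, 28.9], db))  # alice
```

The threshold is what separates verification from false matches; tuning it trades off false accepts against false rejects, the central problem in any biometric system.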

Bacterio-Rhodopsin Memory

The bacterio-rhodopsin protein is one of the most promising organic memory materials. Seven helix-shaped polymers form a membrane structure, which contains a molecule known as the retinal chromophore. The chromophore absorbs light of a certain color and is therefore able to switch to another stable state in addition to its original state. Only blue light can change the molecule back to its original state.


There have been many methods and proteins researched for use in computer applications in recent years. Among the most promising approaches, and the focus of this particular page, is 3-dimensional optical RAM storage using the light-sensitive protein bacterio-rhodopsin. Bacterio-rhodopsin is a protein found in the purple membranes of several species of bacteria, most notably Halobacterium halobium. This particular bacterium lives in salt marshes, where salinity is very high and temperatures can reach 140 degrees Fahrenheit. Unlike most proteins, bacterio-rhodopsin does not break down at these high temperatures.

Light Tree

A light path is an all-optical channel, which may be used to carry circuit-switched traffic, and it may span multiple fiber links. It is set up by assigning a particular wavelength to it on each link along its route. In the absence of wavelength converters, a light path must occupy the same wavelength on all the fiber links it traverses; this is known as the wavelength-continuity constraint.

A light path can create logical (or virtual) neighbors out of nodes that may be geographically far apart from each other. A light path carries not only the direct traffic between the nodes it interconnects, but also traffic from nodes upstream of the source to nodes downstream of the destination. A major objective of light path communication is to reduce the number of hops a packet has to traverse.

Under light path communication, the network employs an equal number of transmitters and receivers because each light path operates on a point-to-point basis. However, this approach is not able to fully utilize all of the wavelengths on all of the fiber links in the network, nor can it fully exploit the switching capability of each wavelength-routing switch (WRS). A light tree is a point-to-multipoint all-optical channel, which may span multiple fiber links. Hence, a light tree enables single-hop communication between a source node and a set of destination nodes. Thus, a light-tree-based virtual topology can significantly reduce the hop distance, thereby increasing the network throughput.
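The hop-distance benefit can be illustrated numerically. In the sketch below, a small physical ring requires multi-hop electronic forwarding from a source to a set of destinations, while a light tree spanning those destinations would reach each of them in a single all-optical hop. The 6-node topology, node numbering, and destination set are invented for illustration.

```python
from collections import deque

def hops(adj, src, dst):
    """Breadth-first search: number of hops from src to dst."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Hypothetical 6-node physical ring with electronic forwarding at each node.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
dests = [2, 3, 4]

avg_multi_hop = sum(hops(ring, 0, d) for d in dests) / len(dests)
print(round(avg_multi_hop, 2))  # 2.33 hops per destination on average

# A light tree rooted at node 0 and spanning {2, 3, 4} replicates the signal
# optically inside the network, so every destination is one logical hop away.
```

The gap widens with network size: electronic hop counts grow with the topology's diameter, while a light tree keeps every destination at logical distance one.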

Virtual keyboard

A virtual keyboard is a key-in device, roughly the size of a fountain pen, which uses highly advanced laser technology to project a full-sized keyboard onto a flat surface. Since their invention, computers have undergone rapid miniaturization, from 'space saver' to 'as tiny as your palm'. Disks and components grew smaller in size, but one component remained the same for decades: the keyboard. Miniaturization of the keyboard has proved a nightmare for users; users of PDAs and smartphones are annoyed by the tiny size of the keys. Since miniaturizing a traditional keyboard is so difficult, we go for a virtual keyboard instead: a keyboard that a user operates by typing on or within a wirelessly or optically detectable surface or area rather than by depressing physical keys. A camera tracks the finger movements of the typist to determine the correct keystroke.

This new innovation uses advanced technologies to project a full-sized computing keyboard onto any surface, and it has become the solution for mobile computer users who prefer touch-typing to cramping over tiny keys. Typing information into mobile devices usually feels about as natural as a linebacker riding a Big Wheel; the virtual keyboard is a way to eliminate finger cramping. All that's needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys, so the virtual keyboard behaves like a real one. It's designed to support any typing speed.

SyncML

The popularity of mobile computing and communications devices can be traced to their ability to deliver information to users when needed. Users want ubiquitous access to information and applications from the device at hand, plus they want to access and update this information on the fly. The ability to use applications and information on one mobile device, then to synchronize any updates with the applications and information back at the office, or on the network, is key to the utility and popularity of this pervasive, disconnected way of computing. Unfortunately, we cannot yet achieve these dual visions:
1) networked data that support synchronization with any mobile device; and
2) mobile devices that support synchronization with any networked data.
Rather, there is a proliferation of different, proprietary data synchronization protocols for mobile devices. Each of these protocols is available only for selected transports, implemented on a selected subset of devices, and able to access only a small set of networked data.
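SyncML itself is an XML-based protocol, but the core task it standardizes, reconciling two copies of a data store, can be sketched with a simple last-writer-wins merge. The record keys, values, and timestamps below are invented for illustration; real SyncML also handles deletions, conflict resolution policies, and change logs.

```python
def sync(device: dict, server: dict) -> None:
    """Two-way merge of (value, timestamp) records: for every key, the copy
    with the newer timestamp wins on both sides (last-writer-wins)."""
    for key in set(device) | set(server):
        d, s = device.get(key), server.get(key)
        newer = max((rec for rec in (d, s) if rec is not None),
                    key=lambda rec: rec[1])
        device[key] = server[key] = newer

# Hypothetical address books: a phone edited offline and the office server.
phone  = {"contact:anna": ("555-0101", 10), "contact:raj": ("555-0199", 30)}
office = {"contact:anna": ("555-0111", 20)}

sync(phone, office)
print(phone["contact:anna"])   # ('555-0111', 20) -- the server's copy was newer
print(office["contact:raj"])   # ('555-0199', 30) -- the new record propagated
```

The point of a standard like SyncML is precisely that this reconciliation logic, rather than being reimplemented per device and per transport, is defined once for any client and any data store.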

4G Wireless Systems

A fourth-generation wireless system is a packet-switched wireless system with wide-area coverage and high throughput. It is designed to be cost-effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra-Wideband (UWB) radio, and millimeter wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz. It gives the ability for worldwide roaming, with access to a cell anywhere.


Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first-generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second-generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in-between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, which made their appearance in late 2002 and 2003, are designed for voice and paging services, as well as interactive media use such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle-mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra-Wideband (UWB) radio, millimeter wireless, and smart antennas. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The frequency band is 2-8 GHz. It gives the ability for worldwide roaming, with access to a cell anywhere.

EDGE

EDGE is the next step in the evolution of GSM and IS-136. The objective of the new technology is to increase data transmission rates and spectrum efficiency and to facilitate new applications and increased capacity for mobile use. With the introduction of EDGE in GSM phase 2+, existing services such as GPRS and high-speed circuit-switched data (HSCSD) are enhanced by offering a new physical layer. The services themselves are not modified. EDGE is introduced within existing specifications and descriptions rather than by creating new ones. This paper focuses on the packet-switched enhancement for GPRS, called EGPRS. GPRS allows data rates of 115 kbps and, theoretically, of up to 160 kbps on the physical layer. EGPRS is capable of offering data rates of 384 kbps and, theoretically, of up to 473.6 kbps.
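The headline figures follow from per-timeslot arithmetic: a GSM carrier is divided into eight timeslots, and the theoretical ceiling is the best coding scheme's per-slot rate times eight. The per-slot rates below are the ones implied by the ceilings quoted above (59.2 kbps corresponds to EGPRS coding scheme MCS-9; the 20 kbps GPRS figure is inferred from the 160 kbps total).

```python
# Per-timeslot rates in kbps for the top coding schemes.
GPRS_PER_SLOT = 20.0    # inferred from the 160 kbps theoretical ceiling
EGPRS_PER_SLOT = 59.2   # MCS-9, enabled by EDGE's 8-PSK modulation
TIMESLOTS = 8           # timeslots on one GSM carrier

gprs_ceiling = GPRS_PER_SLOT * TIMESLOTS
egprs_ceiling = EGPRS_PER_SLOT * TIMESLOTS
print(gprs_ceiling, egprs_ceiling)  # 160.0 473.6
```

Real terminals rarely get all eight slots; multislot classes assign only a subset, so practical rates sit well below these ceilings.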

A new modulation technique and error-tolerant transmission methods, combined with improved link adaptation mechanisms, make these EGPRS rates possible. This is the key to increased spectrum efficiency and enhanced applications, such as wireless Internet access, e-mail and file transfers.

GPRS/EGPRS will be one of the pacesetters in the overall wireless technology evolution in conjunction with WCDMA. Higher transmission rates for specific radio resources enhance capacity by enabling more traffic for both circuit- and packet-switched services. As the Third-generation Partnership Project (3GPP) continues standardization toward the GSM/EDGE radio access network (GERAN), GERAN will be able to offer the same services as WCDMA by connecting to the same core network. This is done in parallel with means to increase the spectral efficiency. The goal is to boost system capacity, both for real- time and best-effort services, and to compete effectively with other third-generation radio access networks such as WCDMA and cdma2000.

Blu-ray Disc

Optical disks hold a major share among secondary storage devices. Blu-ray Disc is a next-generation optical disc format. The technology utilizes a blue laser diode operating at a wavelength of 405 nm to read and write data. Because it uses a blue laser, it can store far more data than was ever possible before. Data is stored on Blu-ray discs in the form of tiny ridges on the surface of an opaque 1.1-millimetre-thick substrate, which lies beneath a transparent 0.1 mm protective layer. With the help of Blu-ray recording devices it is possible to record up to 2.5 hours of very high-quality audio and video on a single BD.

Blu-ray also promises added security, making way for copyright protection. Blu-ray discs can have a unique ID written on them to provide copyright protection inside the recorded streams. Blu-ray takes DVD technology one step further, just by using a laser with a nicer color.

Bio-Molecular Computing

Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid state physics and computer science contribute to a large extent. It involves the encoding, manipulation and retrieval of information at a macromolecular level in contrast to the current techniques, which accomplish the above functions via IC miniaturization of bulk devices. The biological systems have unique abilities such as pattern recognition, learning, self-assembly and self-reproduction as well as high speed and parallel information processing. The aim of this article is to exploit these characteristics to build computing systems, which have many advantages over their inorganic (Si,Ge) counterparts.

DNA computing began in 1994, when Leonard Adleman proved that DNA computing was possible by solving a real problem, an instance of the Hamiltonian Path Problem (a close relative of the Traveling Salesman Problem), with a molecular computer. In theoretical terms, some scientists say the actual beginnings of DNA computation should be attributed to Charles Bennett's work. Adleman, now considered the father of DNA computing, is a professor at the University of Southern California and spawned the field with his paper "Molecular Computation of Solutions to Combinatorial Problems." Since then, Adleman has demonstrated how the massive parallelism of a trillion DNA strands can simultaneously attack different aspects of a computation to crack even the toughest combinatorial problems.
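Adleman's experiment encoded vertices and edges as DNA strands, generated essentially all candidate paths at once in a test tube, and then chemically filtered out the invalid ones. The same generate-and-filter idea can be sketched sequentially in a few lines; the 4-vertex directed graph below is made up for illustration and is smaller than the 7-vertex instance Adleman actually solved.

```python
from itertools import permutations

def hamiltonian_paths(vertices, edges):
    """Generate-and-filter search for Hamiltonian paths: enumerate every
    ordering of the vertices (what the DNA did massively in parallel) and
    keep only those where consecutive vertices are joined by an edge."""
    edge_set = set(edges)
    for candidate in permutations(vertices):
        if all((a, b) in edge_set for a, b in zip(candidate, candidate[1:])):
            yield candidate

# Hypothetical directed graph: vertices 0-3 and four edges.
verts = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
print(list(hamiltonian_paths(verts, edges)))  # [(0, 1, 2, 3)]
```

The sequential version takes factorial time, which is precisely why the massive parallelism of DNA strands, each strand testing one candidate path, was the interesting part of Adleman's result.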