Sunday, October 5, 2008

Anti-Rootkits

Anti-Rootkit is an application that finds and removes any rootkit hidden on your computer using advanced rootkit detection technology. Rootkits on Windows systems are particularly insidious because they can make themselves completely invisible to antispyware programs. There are numerous specialized anti-rootkit products available for detecting and removing these types of malicious programs. Anti-Rootkit can even remove Trojans and rootkits that are hiding inside NTFS Alternate Data Streams.

Note that on-demand anti-rootkits vary in terms of removal options. Some will only show hidden files/drivers/processes/registry keys but will not remove them. Others will show hidden items but will remove only known rootkits. Most of the stand-alone anti-rootkits released by AV companies are relatively new, and many will eventually be incorporated into future products to extend their anti-rootkit abilities. Surprisingly, most of the current offerings that specifically target rootkits are freeware or open source.

ROOTKITS

A rootkit is a collection of tools (programs) that enables administrator-level access to a computer or computer network. Also known as "kernel-mode Trojans," rootkits are far more sophisticated than the usual batch of Windows backdoor programs that irk network administrators today. The difference is the depth at which they control the compromised system. Conventional backdoors like BO2K operate in "user mode", which is to say, they play at the same level as any other application running on the compromised machine. That means that other applications - like anti-virus scanners - can easily discern evidence of the backdoor's existence in the Windows registry or deep among the computer's files.
In contrast, a rootkit hooks itself into the operating system's Application Programming Interface (API), where it intercepts the system calls that other programs use to perform basic functions, like accessing files on the computer's hard drive. The rootkit is the man-in-the-middle, squatting between the operating system and the programs that rely on it, deciding what those programs can see and do.
It uses that position to hide itself. If an application tries to list the contents of a directory containing one of the rootkit's files, the malware will censor the filename from the list. It'll do the same thing with the system registry and the process list.

Despite their increasingly sophisticated design, the current crop of Windows rootkits is generally not completely undetectable. Because a rootkit typically relies on a device driver, booting in "safe mode" will disable its cloaking mechanism, rendering its files visible.
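Many anti-rootkit tools exploit this gap with so-called cross-view detection: they compare the file listing returned by the high-level Windows API with a listing obtained at a lower level (or while booted in safe mode), and flag anything that appears in one view but not the other. Below is a minimal Python sketch of that comparison step only; the example listings are invented, and actually collecting the low-level view requires platform-specific code not shown here.

# Cross-view detection sketch: compare a high-level (API) directory listing
# with a low-level (raw / safe-mode) listing and report discrepancies.
# Illustrative only; gathering the low-level view is platform-specific.

def find_hidden_entries(api_view, raw_view):
    """Return entries present in the raw view but censored from the API view."""
    return sorted(set(raw_view) - set(api_view))

if __name__ == "__main__":
    # Hypothetical listings of the same directory from two vantage points.
    api_view = ["explorer.exe", "notepad.exe"]
    raw_view = ["explorer.exe", "notepad.exe", "rk_driver.sys"]  # hidden by the rootkit

    for name in find_hidden_entries(api_view, raw_view):
        print("Possible hidden file:", name)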

Thursday, October 2, 2008

Intrusion detection system

An intrusion detection system (IDS) monitors network traffic for unwanted attempts to access, manipulate, or disable services. In some cases the IDS may also respond to anomalous or malicious traffic by taking action such as blocking the user or source IP address from accessing the network.


There are IDSs that detect threats by looking for specific signatures of known attacks - similar to the way antivirus software typically detects and protects against malware - and there are IDSs that detect by comparing traffic patterns against a baseline and looking for anomalies.
Host Intrusion Detection Systems (HIDS) run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected. A signature-based IDS will monitor packets on the network and compare them against a database of signatures or attributes of known malicious threats. This is similar to the way most antivirus software detects malware. The issue is that there will be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to your IDS. During that lag time your IDS would be unable to detect the new threat.
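As a rough illustration of the signature approach, here is a toy Python sketch that scans packet payloads for known-bad byte patterns; the signatures are invented placeholders, and real systems such as Snort use much richer rules covering headers, ports and connection state.

# Toy signature-based IDS: flag payloads containing known-bad byte patterns.
# The signatures here are invented placeholders, not real threat data.

SIGNATURES = {
    b"/etc/passwd": "possible path traversal",
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
    b"cmd.exe": "suspicious command reference",
}

def match_signatures(payload):
    """Return a list of (pattern, description) pairs found in the payload."""
    return [(sig, desc) for sig, desc in SIGNATURES.items() if sig in payload]

if __name__ == "__main__":
    packet = b"GET /../../etc/passwd HTTP/1.1"
    for sig, desc in match_signatures(packet):
        print("ALERT:", desc)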


Wireless IDSs can be purchased from a vendor or developed in-house. There are currently only a handful of vendors who offer a wireless IDS solution - but the products are effective and have an extensive feature set. A wireless IDS can be centralized or decentralized. A centralized wireless IDS is usually a combination of individual sensors which collect and forward all data to a central management system, where the wireless IDS data is stored and processed. Decentralized wireless intrusion detection usually includes one or more devices that perform both the data-gathering and processing/reporting functions of the IDS. The decentralized method is best suited for smaller WLANs due to cost and management issues. The cost of sensors with data-processing capability can become prohibitive when many sensors are required. Also, management of multiple processing/reporting sensors can be more time-intensive than in a centralized model.

OSSTMM

The Open Source Security Testing Methodology Manual (OSSTMM) is a peer-reviewed methodology for performing security tests and metrics. The OSSTMM is a great resource for systems administrators who want to evaluate the security of a wide range of systems in an ordered and detailed way. It is a guide for evaluating how secure systems are, containing detailed instructions on how to test systems methodically and how to evaluate and report on the results.


The OSSTMM consists of six sections:

* Information Security
* Process Security
* Internet Technology Security
* Communications Security
* Wireless Security
* Physical Security

An OSSTMM audit is an accurate measurement of security at an operational level, free of assumptions and anecdotal evidence. A proper methodology makes for a valid security measurement which is consistent and repeatable. An open methodology means that it is free from political and corporate agendas. An open source methodology allows for free dissemination of information and intellectual property. The OSSTMM is the collective development of a true security test and the computation of factual security metrics. The primary purpose of the OSSTMM is to provide a scientific methodology for the accurate characterization of security through examination and correlation in a consistent and reliable way. This manual is adaptable to most IS audits, penetration tests, ethical hacking, security assessments, vulnerability assessments, red-teaming, blue-teaming, posture assessments, war games, and security audits.

Firewalls

A personal firewall is an application which controls network traffic to and from a computer, permitting or denying communications based on a security policy. A firewall, whether a piece of hardware or software, is intended to stop people from getting their fingers on your property. Like a physical barrier, a firewall ring-fences your computer system and in the process protects you from a variety of destructive threats that could cause a loss of information or data, or a security breach that could damage your reputation or finances. It is either a hardware device or a software program that filters your internet connection so that your internal private network is kept separate from the outside world, the Internet.

The firewall examines packets of information individually, and if they are not considered safe, they are not allowed through. Firewalls can protect individuals and their personal PCs when they connect to the web, and they can also protect computers within a large organisation or business. The company will have an internal network, and the firewall keeps this safe by providing a barrier through which all information has to pass; if it is not safe, it is stopped, or blocked.
Hackers will probe networks and personal web connections and try to make a connection, perhaps by FTP or telnet, and attempt to gain control of machines by exploiting security holes. These exploits can cause a great deal of trouble, and this is why it is important to have a firewall, even a rudimentary one. Firewalls can help defend against spyware, browser hijackers, viruses, Trojan horses, worms, phishing, and spam. Firewalls use packet filtering, proxy services and stateful inspection to control data traffic into and out of a network. They allow filtering on IP addresses, domain names and protocols, and can differentiate between Telnet, SNMP, SMTP, ICMP, UDP, FTP, HTTP, TCP and IP traffic.
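To make the packet-filtering idea concrete, here is a hypothetical rule evaluator sketched in Python; the rules and the first-match/default-deny policy are illustrative assumptions, and real firewalls implement this in the kernel or in dedicated hardware with stateful connection tracking on top.

# Minimal packet-filter sketch: first matching rule wins, default deny.
# Rules and ports are illustrative, not a real policy.

RULES = [
    # (action, protocol, destination port)
    ("allow", "tcp", 80),    # HTTP
    ("allow", "tcp", 443),   # HTTPS
    ("deny",  "tcp", 23),    # Telnet
]

def filter_packet(protocol, dst_port):
    """Return 'allow' or 'deny' for a packet, defaulting to deny."""
    for action, proto, port in RULES:
        if proto == protocol and port == dst_port:
            return action
    return "deny"

if __name__ == "__main__":
    print(filter_packet("tcp", 80))   # allow
    print(filter_packet("tcp", 23))   # deny
    print(filter_packet("udp", 53))   # deny (no matching rule)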


Firewalls can either be software based, or they can be a piece of physical hardware that acts as a gateway. They can protect you from hackers who try to log on to your computer using remote login software, they help to close application backdoors and prevent SMTP session hijacking, and they are a good way to cut down junk email spamming. Operating system bugs, denial-of-service attacks, email bombs, malicious macros and viruses can all be mitigated by effective firewall solutions. A firewall can also act as a proxy server or as part of a DMZ (demilitarized zone). A good firewall keeps personal data in and hackers out. Out of the box it makes your PC invisible on the Internet so that hackers cannot find it. The program's intelligent intrusion prevention technology blocks suspicious Internet traffic, and easy-to-use privacy controls prevent personal information from being sent out without your knowledge.

Software Patching

A patch is a small piece of software designed to fix problems with, or update, a computer program or its supporting data. Software patching is an increasingly important aspect of today’s computing environment, as the volume, complexity, and number of configurations under which a piece of software runs have grown considerably. Software architects and developers do everything they can to build secure, bug-free software products. To ensure quality, development teams leverage all the tools and techniques at their disposal.

Most software will be used for many years in an ever-changing user environment. This can place new compatibility demands on software and introduce new security vulnerabilities not originally envisioned. Whatever their source, problems can be found in any piece of software and must be addressed with patches. While readers are likely familiar with many of the issues addressed here, my intention is to provide an overview of patching that will help frame one’s thinking when tackling these problems rather than to suggest specific solutions to the problems themselves. The primary focus is on security patches, but the issues discussed are equally applicable to non-security-related defects in any software.
In many cases, security researchers and hackers find vulnerabilities missed during the development cycle; software vendors also find some themselves after the product ships. In the best case, those who find a problem will notify the vendor immediately, before publicly announcing the vulnerability. Other times they do not, and in some cases they even post exploit code publicly prior to the availability of a fix, thereby greatly increasing the risk to users of the affected component. Regardless of the source of the vulnerability, the software vendor has a responsibility to research the issue and, if valid, produce a patch to address the problem and distribute it as widely as possible. Developing a patch requires a thorough understanding of the problem beyond what the finder reported. In some cases, the vulnerability is a simple code flaw that may be easy to fix. In other cases, it may be a much more difficult architectural issue or a problem with how two components interact.

Business Process Outsourcing

BPO is the process of hiring another company to handle business activities for you. It is distinct from information technology (IT) outsourcing, which focuses on hiring a third-party company or service provider to do IT-related activities, such as application management and application development, data center operations, or testing and quality assurance.

In the early days, BPO usually consisted of outsourcing processes such as payroll. Then it grew to include employee benefits management. Now it encompasses a number of functions that are considered "non-core" to the primary business strategy, and it is common for organizations to outsource finance and administration (F&A) processes, human resources (HR) functions, call center and customer service activities, and accounting and payroll.

These outsourcing deals frequently involve multi-year contracts that can run into hundreds of millions of dollars. Often, the people performing the work internally for the client firm are transferred and become employees for the service provider. Dominant outsourcing service providers in the BPO fields (some of which also dominate the IT outsourcing business) include US companies IBM, Accenture, and Hewitt Associates, as well as European and Asian companies Capgemini, Genpact, TCS, Wipro and Infosys.

Also coming into use is the term BTO -- business transformation outsourcing. This refers to the idea of having service providers contribute to the effort of transforming a business into a leaner, more dynamic, agile and flexible operation.

What is Outsourcing?

Outsourcing is contracting with another company or person to do a particular function. Almost every organization outsources in some way. An insurance company, for example, might outsource its janitorial and landscaping operations to firms that specialize in those types of work, since they are not related to insurance or strategic to the business. The outside firms that are providing the outsourcing services are third-party providers, or as they are more commonly called, service providers.

Although outsourcing has been around as long as work specialization has existed, in recent history, companies began employing the outsourcing model to carry out narrow functions, such as payroll, billing and data entry. Those processes could be done more efficiently, and therefore more cost-effectively, by other companies with specialized tools and facilities and specially trained personnel.

Currently, outsourcing takes many forms. Organizations still hire service providers to handle distinct business processes, such as benefits management. But some organizations outsource whole operations. The most common forms are information technology outsourcing (ITO) and business process outsourcing (BPO).

Business process outsourcing encompasses call center outsourcing, human resources outsourcing (HRO), finance and accounting outsourcing, and claims processing outsourcing. These outsourcing deals involve multi-year contracts that can run into hundreds of millions of dollars.

Sunday, September 14, 2008

VZ Navigator

The simplest definition of VZ Navigator is that it's a GPS on a cellphone. You see a detailed map, updated in real time, of where you are and directions to where you're going. While that's the easiest way to describe it, it doesn't do VZ Navigator justice. There are plenty of additional options and features that go way beyond the capabilities of a conventional GPS.

The version I tested was VZ Navigator 2.8.0.80, which I was told was pre-release beta code. However, it was impressively and happily bug-free, and performed fine. Currently, the service is available on the Motorola v325, though I'm told it will soon be available on all new Verizon phones with location tracking capabilities.

From the settings menu you can change certain GPS options, including metric unit display, download options, and the voice and level of detail of the VZ Navigator announcer. This menu also lets you tweak, customize and skin (to a very limited extent) your interface, and it is the location of a "Check for Updates" tool and basic "About" information.
Note that if VZ Navigator is turned on and in navigation mode (telling you where you are and where you're going), it is getting constant updates to refresh your location and is draining battery the entire time. Just remember to turn off VZ Navigator when you're done with it. Battery consumption, however, was very acceptable. The Motorola v325's standard battery is an 880 mAh 3.6 V lithium-ion unit - fairly small, but so is the v325 itself.

Thursday, August 14, 2008

How to avoid phishing

  • If you receive an unexpected e-mail saying your account will be shut down unless you confirm your billing information, do not reply or click any links in the e-mail body.
  • Before submitting financial information through a Web site, look for the "lock" icon on the browser's status bar. It indicates that your information is encrypted during transmission.

  • If you are uncertain about the information, contact the company through an address or telephone number you know to be genuine.
  • If you unknowingly supplied personal or financial information, contact your bank and credit card company immediately.
At its core, phishing means creating a replica of an existing Web page to fool a user into submitting personal, financial, or password data.
Phishing is the term coined by hackers who imitate legitimate companies in e-mails to entice people to share passwords or credit-card numbers. Recent victims include Charlotte's Bank of America, Best Buy and eBay, where people were directed to Web pages that looked nearly identical to the companies' sites. The term had its coming out this week when the FBI called phishing the "hottest, and most troubling, new scam on the Internet." It used to be that you could make a fake account on AOL as long as you had a credit card generator; however, AOL became smart, and now it verifies every card with a bank after it is typed in.
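Some of the checks listed above can be partly automated. The Python sketch below applies a few naive heuristics to a link found in an e-mail (is the connection HTTPS, does the link really point at the trusted domain, does the visible text mention the brand while the link goes elsewhere); the function, its inputs and the example URL are all invented for illustration, and a real anti-phishing filter would need far more than this.

# Naive phishing-link heuristics; illustrative only, not a real filter.
from urllib.parse import urlparse

def suspicious_link(display_text, actual_url, trusted_domain):
    """Return a list of warnings for a link found in an e-mail."""
    warnings = []
    parsed = urlparse(actual_url)
    if parsed.scheme != "https":
        warnings.append("link is not HTTPS")
    host = parsed.hostname or ""
    if host != trusted_domain and not host.endswith("." + trusted_domain):
        warnings.append("host %r is not part of %s" % (host, trusted_domain))
    if trusted_domain in display_text and trusted_domain not in host:
        warnings.append("display text mentions the brand but the link points elsewhere")
    return warnings

if __name__ == "__main__":
    print(suspicious_link("www.ebay.com", "http://ebay.example-login.com/verify", "ebay.com"))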


The term phishing comes from the fact that Internet scammers are using increasingly sophisticated lures as they "fish" for users' financial information and password data.
Hackers have an endearing tendency to change the letter "f" to "ph," and phishing is but one example. The f-to-ph transformation is not new among hackers, either. It first appeared in the late 1960s among telephone system hackers, who called themselves phone phreaks.


The challenges of WiMAX service activation

On cellular networks, the operator retains primary control over the devices operating on its network, but WiMAX will change this.

In traditional cellular networks, the operator retains primary control over the devices operating on its network, with most devices being directly supplied to the subscriber through the operator's retail stores or partners, and pre-provisioned with the operator's software or SIM card.
WiMAX will change this. Subscribers buying a WiMAX-enabled device will be able to choose the device model they prefer and buy it from an operator-independent retailer. Separating the device distribution model from the service delivery model will result in a strong supply chain of devices needed for successful uptake of mobile applications.
This represents a new operating model for the WiMAX operator - one that reduces the pressure to subsidize devices, maintain extensive inventory, and sell non-core devices to subscribers.

A range of devices operating on the network can create complex challenges for customer support staff.
The ability to push firmware to the device keeps subscribers' devices updated, reducing customer support workload and cost for the operator. Ideally, device management, including firmware updates and device configuration, should be tied to the plan preferences of each subscriber and to automated identification of the device.

The ability to set different priority levels for subscribers becomes a requirement. Because WiMAX can support a range of applications such as VoIP, videoconferencing, or video on demand (VoD), the operator needs the ability to set QoS prioritization. Subscribers need to be able to change their profiles and seamlessly download the required configuration settings to their devices.
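One hypothetical way a provisioning system might tie plan preferences to device configuration is sketched below in Python; the plan names, QoS parameters and the idea of returning a configuration dictionary for a later push are all assumptions made for illustration, not any operator's actual scheme.

# Hypothetical mapping from subscriber plan to WiMAX QoS/device settings.
# Plan names, priorities and bandwidth figures are illustrative only.

PLAN_PROFILES = {
    "basic":   {"priority": "best_effort", "max_downlink_kbps": 1000,  "voip": False},
    "premium": {"priority": "real_time",   "max_downlink_kbps": 6000,  "voip": True},
    "video":   {"priority": "real_time",   "max_downlink_kbps": 10000, "voip": True},
}

def build_device_config(subscriber_id, plan, device_model):
    """Assemble the configuration that would be pushed to the device."""
    profile = PLAN_PROFILES[plan]
    return {
        "subscriber": subscriber_id,
        "device": device_model,
        "qos": profile,
    }

if __name__ == "__main__":
    cfg = build_device_config("sub-0042", "premium", "generic-wimax-handset")
    print(cfg)  # in practice this would be delivered by the operator's device-management push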

WIMAX

WiMAX is a wireless digital communications system intended for wireless "metropolitan area networks". WiMAX can provide broadband wireless access (BWA) up to 30 miles (50 km) for fixed stations, and 3 - 10 miles (5 - 15 km) for mobile stations. In contrast, the WiFi/802.11 wireless local area network standard is limited in most cases to only 100 - 300 feet (30 - 100m).



With WiMAX, WiFi-like data rates are easily supported, but the issue of interference is lessened. WiMAX operates on both licensed and non-licensed frequencies, providing a regulated environment and viable economic model for wireless carriers.


WiMAX can be used for wireless networking in much the same way as the more common WiFi protocol. WiMAX is a second-generation protocol that allows more efficient bandwidth use and interference avoidance, and is intended to deliver higher data rates over longer distances.

The IEEE 802.16 standard defines the technical features of the communications protocol. The WiMAX Forum offers a means of testing manufacturers' equipment for compatibility, and serves as an industry group dedicated to fostering the development and commercialization of the technology.


WiMax.com provides a focal point for consumers, service providers, manufacturers, analysts, and researchers who are interested in WiMAX technology, services, and products. Soon, WiMAX will be a very well recognized term to describe wireless Internet access throughout the world.

Saturday, March 8, 2008

How was the Phoenix Mission born?

In the mid to late ’90s, at Dan Goldin’s insistence, what was then called the Human Exploration program at HQ and the Science Office put together a joint mission to Mars scheduled for launch in 2001. It was a lander mission based on a second copy of the Mars Polar Lander (scheduled for 1998). It had an interesting payload that included instruments selected for relevance to human exploration (including MECA and an oxygen production unit). Dr. Chris McKay had been on the committee that had helped develop the plan for this mission and was a supporter of the mission and its connection to human exploration. He had no direct involvement in any of the instruments selected; none of the instruments on which he was a P.I. or a Co-I. was selected.

When Polar Lander crashed in 1999, NASA HQ understandably canceled the 2001 mission, since it was based on the same lander.

Forward to 2002 and the first call for ideas for a Scout mission to Mars. NASA held a workshop in Pasadena to hear ideas for Scout missions and promised to provide seed funding for a few selected ideas.

Carol Stoker and Dr. Chris McKay thought it would be useful to push the 2001 lander concept. We proposed a Scout mission called Ameba. Here is our summary:
“Ameba is an integrated lander mission that would complement the 2007 lander to investigate the chemical, geological and biological properties of the Martian dust, characterize the environment on Mars, and collect data relevant to future robotic and human exploration missions. The existing 2001 lander and its existing soil-analysis instruments form the baseline payload. The following basic questions will be answered by Ameba at low cost and with reduced mission risk: Are there any indications of carbon chemistry and oxidants relevant to life? Are there geological signs that Mars had significant quantities of surface water, or hydrothermal activity? What are the mineralogical and mechanical properties of the dust? How will the soil interact with living organisms? What are the radiation and electrostatic properties of the environment that may be detrimental to life?”

Peter Smith and Mike Hecht were Co-I’s, NASA Ames was the lead institution and would manage the mission, and Dr. Chris McKay was the P.I. At the time of this review of Scout ideas, the word within NASA was that HQ would “never let the 2001 lander fly.” Many people thought we were wasting our time trying to reuse that hardware and those instruments.

We were not selected for seed funding at this point, but NASA Ames agreed to provide us with in-house support to develop a proposal for the real Scout competition.

However, soon after the real Scout competition started it was clear that HQ had decided that essentially all planetary missions would have to be managed by JPL. (In fact, the four Scouts selected a year later for further study were all JPL-managed.) In light of this, Carol and Dr. Chris McKay had a meeting to review the prospects for Ameba and concluded that it had no chance of being selected with an Ames lead. We (correctly) concluded that the only way the 2001 lander would fly again was if it was proposed with a highly qualified and experienced member of the original 2001 team as P.I. and with JPL as the managing institution. Peter Smith was the obvious choice. We both knew Peter well, so we just called him up and had a three-way teleconference, and Peter agreed to be the P.I. Peter did several important things that made the proposal successful: he steered the science rationale into line with the selection criteria, combining parts of the 2001 and 1998 landers; he worked effectively with the instrument teams and JPL; and he presented the mission to HQ. The rest, as they say, is history.

Phoenix - Scouting for Water on the Red Planet

Phoenix is a robotic spacecraft that will be used for a space exploration mission to Mars. The scientists conducting the mission will use instruments aboard the Phoenix lander to search for environments suitable for microbial life on Mars, and to research the history of water on the red planet. Phoenix is scheduled to launch in August 2007 and land on Mars in May 2008. The multiagency program is headed by the Lunar and Planetary Laboratory at the University of Arizona, under the direction of NASA. The program is a partnership of universities from the U.S., Canada, Switzerland and Germany; NASA; the Canadian Space Agency; and the aerospace industry. Phoenix will land in the planet’s water-ice-rich northern polar region and use its robotic arm to dig into the arctic terrain.

NASA selected the University of Arizona to lead the Phoenix mission back in August 2003, hoping it would be the first in a new line of smaller, low-cost “Scout” missions in the agency’s exploration of Mars (the cost is about $75 million cheaper than the Spirit/Opportunity rovers, and less than a third the cost of the Viking landers of 1976). The selection was the result of an intense two-year competition with proposals from other institutions. The $325-million NASA award is more than six times larger than any other single research grant in University of Arizona history.


The Mission has a collaborative approach to space exploration. As the very first of NASA’s Mars Scout class, Phoenix combines legacy and innovation in a framework of a true partnership: government, academia and industry. Scout-class missions are led by a scientist, known as a Principal Investigator (PI), whose role is to manage all the scientific data gathered by the spacecraft and lead the mission’s technical and scientific teams.

Pathfinder Airbags in a test

Phoenix is a partnership of universities, NASA centers and the aerospace industry. The science instruments and operations will be the University of Arizona’s responsibility. The Jet Propulsion Laboratory in Pasadena, California, operated under contract by Caltech for NASA, will manage the project and provide mission design and control. Lockheed Martin Space Systems in Denver, Colorado, will build and test the spacecraft. The Canadian Space Agency will provide a meteorological station, including an innovative laser-based atmospheric sensor. The co-investigator institutions include Malin Space Science Systems, the Max Planck Institute for Solar System Research, NASA Ames Research Center, NASA Johnson Space Center, Optech Incorporated and the SETI Institute, to name just a few.

The lander will touch down the same way the Viking landers did, slowed primarily by landing rockets, shifting from the recent trend of using airbags to soften landings, as demonstrated in the Pathfinder, Spirit and Opportunity missions, as well as Europe's ill-fated Beagle 2 probe. In 2007, a report was presented to the American Astronomical Society by Washington State University professor Dirk Schulze-Makuch claiming that rocket exhaust contaminated the Viking landing sites, potentially killing any life that may have been there. The hypothesis came long after any modifications to Phoenix could be made without delaying the mission significantly. One of the investigators on the Phoenix mission, NASA astrobiologist Chris McKay, merely stated that the report "piqued his interest." Experiments conducted by Nilton Renno, mission co-investigator from the University of Michigan, and his students have specifically looked at how much surface dust will be kicked up when Phoenix lands. It was determined, however, that the robotic arm could reach undisturbed soil for sampling and analysis.

The Future of BMI

Utah's electrode array (Credit: University of Utah)

One of the next challenges in the field of BMI prosthetics is making them feel like normal limbs. A normal limb has a sense of touch and proprioception, the process by which sensory feedback to the brain transmits the location and position of the body's muscles, allowing us to be aware of the arm’s position without having to look. This is accomplished by an array of receptors in the muscles and joints, as well as mechanical receptors in the skin, that enable us to know when we are touching an object. The next generation of prosthetic arms will have proprioception and “feeling,” generating feedback pulses to the brain or to nerve endings that will result in their bearers having an almost natural feel to their bionic limb.

It seems that today, more than ever, BMIs that can operate bionic prosthetics are within our grasp. The Defense Advanced Research Project Agency (DARPA) set an ambitious goal of releasing a fully functioning bionic arm for Food and Drug Administration (FDA) approval by 2009. This arm will have far more degrees of freedom than any other available prosthetic, and in 2011 DARPA is planning to release a prosthetic that has nearly all the motion ability and dexterity of a normal limb, including touch and proprioception. Theoretically, an amputee using this arm will be able to play the piano.

Normann artificial vision (Credit: John A. Moran Eye Center, University of Utah)

A future type of BMI for patients with paralyzed limbs or spinal cord injuries will send efferent motor impulses directly to the muscles of the limb. Unlike the situation of amputees, in spinal cord injuries the muscles are functional, but nerve impulses cannot reach them. A muscle-stimulating BMI will be able to bypass the severed point and directly innervate the muscle through small electric currents. Robotic arms and hands approaching the agility and sensitivity of the human hand already exist and have been covered recently by TFOT.

BMI technologies are not confined to prosthetic and paralyzed limbs. In the future, BMIs may allow blind people to see using an artificial picture-capturing device, much like a camera. Several methods for visual prosthetics have already been used successfully with patients. These methods use a computer chip implanted on the retina that is fed by a miniature camera on the patient's glasses. The chip stimulates the optic nerves, transmitting a picture to the brain. Devices used today allow patients to see vague shapes or distinguish light from dark, but future devices, such as the Cortical Visual Prosthesis now under development, promise improved synthetic vision.

Professor Eilon Vaadia (Credit: Hebrew University)

The John A. Moran Eye Center at the University of Utah has developed such a chip, which could also be applied to other BMI applications. The chip contains an array of electrodes that can be individually stimulated, are small enough to be inserted into brain tissue without much damage, and at the same time are strong enough to withstand the insertion procedure. Some of these implants have been successfully implanted in blind people with positive results. Future generations of these devices will lead to improved resolution, and ultimately the restoration of sight to the blind.

What we are witnessing today is only the tip of the iceberg of the great potential BMIs hold for medical, military, recreation, and other purposes in the future. BMI research is on the threshold where science meets science fiction. There will surely be exciting news emerging from this field in the very near future.

Noninvasive Brain-Machine Interface (BMI)

BMIs can be divided into two main groups: invasive and noninvasive. Noninvasive BMIs rely on reading the brain's activity without actually piercing the brain surface. The EEG is one of the earliest noninvasive BMI techniques, measuring the combined activity of massive groups of brain neurons through voltage differences between different parts of the scalp. An EEG is performed by placing approximately 20 electrodes on the scalp; these electrodes are connected by wires to an amplifier, through which the signal is converted to a digital reading, which can then be filtered by a computer to remove artifacts and interference. Once connected to the EEG, the subject can be shown different stimuli, and the brain’s electrical patterns in response to the stimuli can be studied.
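As a small illustration of that digital filtering step, the Python sketch below band-pass filters a synthetic one-channel EEG trace with SciPy to isolate the 8-12 Hz alpha band; the sampling rate, band edges and the synthetic signal are assumptions chosen for the example.

# Band-pass filtering a synthetic EEG channel to isolate the alpha band (8-12 Hz).
# Sampling rate, band edges and the test signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                       # sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of data
# Synthetic trace: 10 Hz "alpha" activity plus 50 Hz mains interference and noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)

def bandpass(signal, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

alpha = bandpass(eeg, 8.0, 12.0, fs)
print("raw RMS: %.2f, alpha-band RMS: %.2f" % (np.sqrt(np.mean(eeg**2)), np.sqrt(np.mean(alpha**2))))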

Some EEG BMIs rely on the subject’s ability to develop control of their own brain activity using a feedback system, whereas others use algorithms that recognize EEG patterns that appear with particular voluntary intentions. Virtual-reality systems have been used to give patients efficient feedback, and subjects have been able to navigate through a virtual-reality setting by imagining themselves walking or driving. These systems can also be used for gaming.

EEG - a noninvasive method of establishing a BCI. Subjects are hooked into a virtual reality setting while their brain activity is monitored by an EEG. Subjects train using the biofeedback setting to manipulate the virtual reality using their thoughts alone (Credit: Rochester Institute of Technology)

EEG-based BMIs have been implemented to help patients suffering from body paralysis, such as those with the motor-neuron disease ALS. By generating certain brain patterns that are then read by the EEG, patients are able to control a computer cursor and indicate their intentions, and thereby communicate with the external world. EEGs are also reported to have enabled severely disabled tetraplegic patients to grasp an object using a paralyzed hand. In these cases, the patient generated certain brain waves that were detected by the EEG and converted into external electrical muscle stimulation, which allowed the contraction of the muscles and movement of the paralyzed limb.

EEGs have many shortcomings, owing to the heavy overlap of electrical activity in the brain as well as electrical artifacts. To achieve better resolution, electrodes can be inserted between the skull and the brain, without piercing the brain tissue, which reportedly yields a higher-resolution picture of brain activity. Although noninvasive BMI techniques can improve the quality of life for some disabled patients by allowing them a limited and slow capacity for communication, they are unlikely to hold the solution for allowing patients to perform complex tasks that involve multiple degrees of freedom, such as controlling a robotic arm. These abilities are more likely to be achieved through invasive techniques.

Brain-Machine Interface

The majority of motor functions in our body are driven by electrical currents originating in the brain motor cortex and conducted through the spinal cord and peripheral nerves to the muscles, where the electrical impulse is converted to motion by the contraction and retraction of specific muscles. For example, to bend the arm at the elbow joint, the biceps muscle contracts and the triceps relaxes. This seemingly simple movement is the result of the cumulative activity of many brain cells in the area of the cortex in charge of arm movement. The neurons, following a cognitive decision to bend the arm, generate an electric impulse through the peripheral nerves, causing the correct muscles to contract or relax.

The term used for neuronal activity is "action potential." Action potential occurs when an electric impulse shoots through the long shaft of the neuron, called the axon. Each neuron has one axon but is connected to many other neurons through chemical connections called synapses, and can influence other neurons or be influenced itself by the activity of adjacent neurons, creating an extremely complex network of neural cells.

The action potential in a neuron can be measured by inserting an extremely thin electrode adjacent to the axon, where the passing electric current can be detected. The electrode counts the neuron’s action potentials per second (its firing rate), thereby measuring its activity.

Most neuroscientists agree that the rate or frequency of the firing constitutes a sort of code for brain activity. For instance, if a certain group of neurons fires action potentials at a high frequency together, the result is the movement of a limb.
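In computational terms, reading this rate code starts with something very simple: counting spikes per unit time. The Python sketch below estimates a firing rate from a list of spike timestamps; the spike times are invented, and a real BMI would do this for many neurons at once and then decode movement from the combined rates.

# Estimate a neuron's firing rate from spike timestamps (in seconds).
# The spike times below are invented for illustration.

def firing_rate(spike_times, window_start, window_end):
    """Spikes per second within the given time window."""
    count = sum(window_start <= t < window_end for t in spike_times)
    return count / (window_end - window_start)

if __name__ == "__main__":
    spikes = [0.01, 0.05, 0.12, 0.31, 0.33, 0.34, 0.70, 0.95]  # hypothetical data
    print("Rate over 1 s: %.1f Hz" % firing_rate(spikes, 0.0, 1.0))
    print("Rate in first 0.5 s: %.1f Hz" % firing_rate(spikes, 0.0, 0.5))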

The Basis of Palm Vein Technology

An individual first rests his wrist, and on some devices the middle of his fingers, on the sensor's supports so that the palm is held a few centimeters above the device's scanner, which flashes a near-infrared ray onto the palm. Unlike the skin, through which near-infrared light passes, deoxygenated hemoglobin in the blood flowing through the veins absorbs near-infrared rays, making the veins visible to the scanner. Arteries and capillaries, whose blood contains oxygenated hemoglobin, which does not absorb near-infrared light, are invisible to the sensor. The still image captured by the camera, which photographs in the near-infrared range, appears as a black network reflecting the palm's vein pattern against the lighter background of the palm.

An individual's palm vein image is converted by algorithms into data points, which are then compressed, encrypted, and stored by the software and registered along with other details in his profile as a reference for future comparison. Then, each time the person attempts to gain access by palm scan to a particular bank account or secured entryway, the newly captured image is likewise processed and compared to the registered one, or to the bank of stored files, for verification, all in a matter of seconds. The numbers and positions of veins and their crossing points are all compared and, depending on verification, the person is either granted or denied access.
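Fujitsu has not published its exact matching algorithm, but the general idea of comparing extracted feature points against a stored template can be sketched as follows in Python; the feature representation (vein crossing coordinates), the tolerance and the acceptance threshold are all assumptions for illustration only.

# Toy template comparison for vein-pattern features.
# Each feature is an (x, y) point such as a vein crossing; values are invented.

def match_score(enrolled, probe, tolerance=3.0):
    """Fraction of enrolled feature points with a close counterpart in the probe."""
    def close(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tolerance ** 2
    hits = sum(any(close(p, q) for q in probe) for p in enrolled)
    return hits / len(enrolled)

if __name__ == "__main__":
    enrolled = [(10, 12), (34, 40), (55, 21), (70, 66)]   # stored at registration
    probe    = [(11, 12), (33, 41), (56, 20), (90, 90)]   # captured at login
    score = match_score(enrolled, probe)
    print("match score: %.2f -> %s" % (score, "accept" if score >= 0.75 else "reject"))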

How Secure is the Technology?

On the basis of testing the technology on more than 70,000 individuals, Fujitsu declared that the new system had a false rejection rate of 0.01% (i.e., only one out of 10,000 scans was an incorrect denial of access), and a false acceptance rate of less than 0.00008% (i.e., an incorrect approval of access in fewer than one in a million scans). Also, if your profile is registered with your right hand, don't log in with your left - the patterns of an individual's two hands differ. And if you registered your profile as a child, it will still be recognized as you grow, as an individual's pattern of veins is established in utero (before birth). No two people in the world share a palm vein pattern - even those of identical twins differ (so your evil twin won't be able to draw on your portion of the inheritance!).

Potential Applications


The new technology has many potential applications (some of which are already in use), such as an ultra-secure system for ATMs and banking transactions; a login system for PCs, handhelds, or servers; an authorization system for front doors, schools, hospital wards, storage areas, and high-security areas in airports; and even facilitating library lending, doing away with the age-old library card system. Fujitsu is planning to continue development of the palm vein technology, shrinking the scanner to fit in a mobile phone. Fujitsu hopes that its success might usher in a new age in personal data protection techniques, which is especially important as sales of smartphones and other handhelds skyrocket.

Palm Vein Technology

How secure are your assets? Can your personal identification number be easily guessed? As we increasingly rely on computers and other machines in our daily lives, ensuring the security of personal information and assets becomes more of a challenge. If your bank card or personal data falls into the wrong hands, others can profit at your expense. To help deal with this growing problem, Fujitsu has developed a unique biometric security technology that puts access in the palm of your hand and no one else's.

Fujitsu's palm vein authentication technology consists of a small palm vein scanner that's easy and natural to use, fast and highly accurate. Simply hold your palm a few centimeters over the scanner and within a second it reads your unique vein pattern. A vein picture is taken and your pattern is registered. Now no one else can log in under your profile. ATM transactions are just one of the many applications of this new technology.

Fujitsu's technology capitalizes on the special features of the veins in the palm. Vein patterns are unique even among identical twins. Indeed each hand has a unique pattern. Try logging in with your left hand after registering with your right, and you'll be denied access. The scanner makes use of a special characteristic of the reduced hemoglobin coursing through the palm veins — it absorbs near-infrared light. This makes it possible to take a snapshot of what's beneath the outer skin, something very hard to read or steal.

Besides the high accuracy of a false rejection rate of 0.01% and a false acceptance rate of less than 0.00008% (as of February 2005), Fujitsu's contactless palm vein authentication offers a range of advantages over other biometric technologies.

"Fingerprint scans and face recognition ID methods are associated with the police by some people on a psychological level," says Shigeru Sasaki, director of Fujitsu's Media Solutions Laboratory. (Interview date: Feb 2nd, 2005) "In public areas, others don't like the thought of touching what everyone else has touched for sanitary reasons. This is why we created a contactless palm vein scanner."

"Fingerprint scans and face recognition ID methods are

The near-infrared rays in the palm vein scanner have no effect on the body when scanning. To protect the privacy and personal information of the user, the registered biometric information itself can be stored in bank cards. Bank of Tokyo-Mitsubishi ATMs in Japan are already equipped with palm vein scanners developed by Fujitsu. Users access their accounts by having a scan of their palm compared to a pre-registered scan stored on their bank card. This is expected to help reduce the growing cases of bank card thefts and fraudulent financial transactions.

Amid the heightened security climate in recent years and fears of terrorism, there has been a surge in demand for accurate biometric authentication methods. Meanwhile, recent bank card forgery cases in Japan have numbered in the hundreds, involving dozens of financial institutions and hundreds of millions of yen. Victims are usually unaware their money is being stolen until it's too late.

Bank card security isn't just the responsibility of the end user. Financial institutions around the world are being urged to take a greater role in preventing bank card fraud by improving card security. Japan's Financial Services Agency, for instance, has called on banks to implement added security measures such as introducing biometric identification systems.

Fujitsu's palm vein authentication technology will help stop this new wave of crime, and can also be adapted for use in access to secure areas as well as online transactions, customer identification and claiming baggage.

Saturday, January 26, 2008

Laser design

The term design can have two different meanings. In some cases, it means a detailed description of a device, including, e.g., the parts used, how they are put together, important operation parameters, etc. In other cases, the term denotes the process leading to such a description. This article assumes the first meaning and discusses some important aspects of the design of laser devices, such as diode-pumped solid-state lasers, or similar devices such as optical parametric oscillators. A separate article on laser development gives additional information, in particular on the role which a laser design plays within the process of laser development, and how this process can be optimized.

Defining the Design Goals

Before a design is made, the design goals must be carefully evaluated. These should include not only central performance parameters such as output power and wavelength; many more details can be relevant:

  • optimum performance, e.g. in terms of output power, power efficiency, beam quality, brightness, intensity and/or phase noise, long-term stability (e.g. of the output power or the optical frequency), timing jitter, etc.
  • compact and convenient setup, ease of operation (e.g. simple turn-on procedure, simple wavelength tuning, no need for realignment)
  • maximum flexibility (e.g. for changing operation parameters)
  • reliability, low maintenance requirements, simple and cost-effective error analysis, maintenance and repair
  • minimum sensitivity to vibrations, temperature changes, electromagnetic interference, aging of components
  • low production cost, i.e., low number of parts, simple alignment and testing, avoiding the use of parts which are expensive, sensitive, or difficult to obtain
It is certainly advisable to carefully work out the list of these requirements for the particular case before investing any significant resources in laser development, because it can easily be much more expensive and time-consuming to introduce additional properties into an already existing device.

Important Aspects of Laser Designs

Of course, the properties of the designed laser device are largely determined by the design details, not only by the parts used. Some aspects are particularly important:

This list, which is certainly not yet complete, shows that proper laser designs are not a trivial matter, but are essential for achieving full customer satisfaction, cost efficiency, and flexibility for future developments.


Laser applications

Lasers are sources of light with very special properties, as discussed in the article on laser light. For that reason, there is a great variety of laser applications. The following sections give an overview.

Manufacturing

Lasers are widely used in manufacturing, e.g. for cutting, welding, soldering, surface treatment, marking, micromachining, pulsed laser deposition, lithography, alignment, etc. In most cases, relatively high optical intensities are applied to a small spot, leading to intense heating, possibly evaporation and plasma generation. Essential aspects are the high spatial coherence of laser light, allowing for strong focusing, and often also the potential for generating intense pulses.

Laser processing methods have many advantages compared to mechanical approaches. They make it possible to fabricate very fine structures with high quality, avoiding the mechanical stress caused, e.g., by mechanical drills and blades. A laser beam with high beam quality can be used to drill very fine and deep holes, e.g. for injection nozzles. A high processing speed is often (but not always) achieved, and it can also be advantageous to process materials without touching them.

Medical Applications

There is a wide range of medical applications. Often these relate to the outer parts of the human body, which are easily reached with light; examples are eye surgery and vision correction (LASIK), dentistry, dermatology (e.g. photodynamic therapy of cancer), and various kinds of cosmetic treatment such as tattoo removal or hair removal.

Lasers are also used for surgery (e.g. of the prostate), exploiting the possibility to cut tissues while causing only a low amount of bleeding.

Very different types of lasers are required for medical applications, depending on the optical wavelength, output power, pulse format, etc. In many cases, the laser wavelength is chosen so that certain substances (e.g. pigments in tattoos or caries in teeth) absorb light more strongly than surrounding tissue, so that they can be more precisely targeted.

Medical lasers are not always used for therapy. Some of them rather assist the diagnosis e.g. via methods of laser microscopy or spectroscopy (see below).

Metrology

Lasers are widely used in optical metrology, e.g. for extremely precise position measurements with interferometers, for long-distance range finding and navigation.

Laser scanners are based on collimated laser beams, which can read e.g. bar codes or other graphics over some distance. It is also possible to scan three-dimensional objects, e.g. in the context of crime scene investigation (CSI).

Optical sampling is a technique applied for the characterization of fast electronic microcircuits, microwave photonics, terahertz science, etc.

Lasers also allow for extremely precise time measurements and are therefore an essential ingredient of optical clocks, which may soon outperform the currently used cesium atomic clocks.

Fiber-optic sensors, often probed with laser light, allow for the distributed measurement of temperature, stress, and other quantities e.g. in oil pipelines and wings of airplanes.

Data Storage

Optical data storage, e.g. in compact disks (CDs), DVDs, HD DVDs, Blu-ray disks and magneto-optical disks, nearly always relies on a laser source, which has a high spatial coherence and can thus be used to address very tiny spots in the recording medium, allowing very high-density data storage. Another case is holography, where the temporal coherence can also be important.

Communications

Optical fiber communications, extensively used particularly for long-distance optical data transmission, mostly relies on laser light in optical glass fibers. Free-space optical communications, e.g. for inter-satellite links, is based on higher-power lasers, generating collimated laser beams which propagate over large distances with small beam divergence.

Displays

Laser projection displays containing RGB sources can be used for cinemas, home videos, flight simulators, etc., and are often superior to other displays concerning possible screen dimensions, resolution and color saturation. Further reductions of manufacturing costs will be essential for deep market penetration.

Spectroscopy

Laser spectroscopy is useful e.g. in atmospheric physics and pollution monitoring (e.g. trace gas sensing with differential absorption LIDAR technology). It also plays a role in medicine (e.g. cancer detection), biology, and various types of fundamental research, partly related to metrology (see above).

Microscopy

Laser microscopes and setups for coherence tomography provide images e.g. of biological samples with very high resolution, often in three dimensions. It is also possible to realize functional imaging.

Various Scientific Applications

Laser cooling makes it possible to bring clouds of atoms or ions to extremely low temperatures. This has applications in fundamental research as well as for industrial purposes.

Laser guide stars are used in astronomical observatories in combination with adaptive optics for atmospheric correction. They allow a substantially increased image resolution even in cases where a sufficiently close-by natural guide star is not available.

Military Applications

There are various military laser applications. In relatively few cases, lasers are used as weapons; the "laser sword" has become quite popular via films, but not in practice. Some high-power lasers are currently being developed for potential use on the battlefield, or for destroying missiles, projectiles and mines.

In other cases, lasers function as target designators or laser sights (essentially laser pointers emitting visible or invisible laser beams), or as irritating or blinding (normally not directly destructive) countermeasures, e.g. against heat-seeking anti-aircraft missiles. It is also possible to temporarily or permanently blind soldiers with laser beams, although the latter is forbidden by the rules of war.

There are also many laser applications which are not specific for military use, e.g. in areas like range finding, LIDAR, and optical communications.