Sunday, December 30, 2007

Benefits of IT

For an organization to improve its business processes with technology, an IT department is needed to manage and support the infrastructure.

An IT department is required in the following areas of technology to provide value to the business, because maintenance tasks must be performed by technically competent staff:

End-User Technical Support
Desktop Management
Network Management
Voice and Data Communications
Business Applications
Strategic Technology Planning
Project Management

Beyond keeping technology running efficiently, an IT department also provides a business with lower costs, higher productivity and greater efficiency in other areas. The IT department does this by:

• Minimizing over 85% of downtime (a rough cost sketch follows this list), which:
  - Avoids losing revenue, such as lost sales from customers being unable to make purchases
  - Decreases costs, such as payroll for idle employees and the fee paid to a technician to fix the problem
  - Increases productivity, because employees spend less time idle
• Providing a single point of contact for technology issues, which:
  - Increases efficiency by ensuring that the people handling technology issues are knowledgeable in the area
  - Increases productivity by allowing employees to focus on core competencies rather than technology issues
• Technology planning, which:
  - Reduces the risk of financial, technological and data losses caused by disasters
  - Increases the return on investment (ROI) and business value realized from technology projects
  - Improves equipment efficiency with planned maintenance activities
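
As a rough illustration of the downtime costs itemized above, here is a minimal sketch in Python; every figure is an invented example, so substitute your own numbers.

```python
# Back-of-the-envelope downtime cost; every figure below is an invented example.

def downtime_cost(hours, lost_sales_per_hour, idle_payroll_per_hour, technician_fee):
    """Total cost of an outage: lost revenue + idle payroll + one-off repair fee."""
    return hours * (lost_sales_per_hour + idle_payroll_per_hour) + technician_fee

# Example: a 4-hour outage.
print(downtime_cost(hours=4,
                    lost_sales_per_hour=1200,
                    idle_payroll_per_hour=800,
                    technician_fee=500))  # -> 8500
```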

Sunday, December 16, 2007

Types of technology

To many of us, the term technology conjures up visions of things such as computers, cell phones, spaceships, digital video players, computer games, advanced military equipment, and other highly sophisticated machines. Such perceptions have been acquired and reinforced through exposure to televised reports of fascinating devices and news articles about them, science fiction books and movies, and our use of equipment such as automobiles, telephones, computers, and automatic teller machines.

While this focus on devices and machines seems to be very prevalent among the general population, many educators also hold a similar perspective. Since Pressey developed the first teaching machine in 1926 (Nazzaro, 1977), technology applications in public schools and post-secondary education institutions have tended to focus on the acquisition and use of equipment such as film projectors, audio and video tape recorders, overhead projectors, and computers.

Since the early 1960s, however, a trend has emerged that is changing the way we perceive technology in education. At that time, educators began considering the concept of instructional technology. Subsequently, after considerable deliberation, a Congressional Commission on Instructional Technology (1970) concluded that technology involved more than just hardware. The Commission concluded that, in addition to the use of devices and equipment, instructional technology also involves a systematic way of designing and delivering instruction.

With the rapid development of microcomputer technology, increased research on instructional procedures, and the invention of new devices and equipment to aid those with health problems, physical disabilities, and sensory impairments, the latter third of the 20th century has borne witness to a very dramatic evolution. The current perspective is a broad one in which six types of technology are recognized: the technology of teaching, instructional technology, assistive technology, medical technology, technology productivity tools, and information technology (Blackhurst & Edyburn, 2000).

TECHNOLOGY OF TEACHING
The technology of teaching refers to instructional approaches that are very systematically designed and applied in very precise ways. Such approaches typically include the use of well-defined objectives, precise instructional procedures based upon the tasks that students are required to learn, small units of instruction that are carefully sequenced, a high degree of teacher activity, high levels of student involvement, liberal use of reinforcement, and careful monitoring of student performance.

Instructional procedures that embody many of these principles include approaches such as direct instruction (Carnine, Silbert, & Kameenui, 1990), applied behavior analysis (Alberto & Troutman, 1995; Wolery, Bailey, & Sugai, 1988), learning strategies (Deshler & Schumaker, 1986), and response prompting (Wolery, Ault, & Doyle, 1992). Most often, machines and equipment are not involved when implementing various technologies of teaching; however, they can be, as will be seen later.

MEDICAL TECHNOLOGY
The field of medicine continues to amaze us with the advances constantly being made in medical technology. In addition to seemingly miraculous surgical procedures that are technology-based, many individuals depend upon medical technology to stay alive or to function outside of hospitals and other medical settings. It is not uncommon to see people in their home and community settings who use medical technology.

For example, artificial limbs and hip and knee implants can help people function in the environment. Cochlear implants can often improve the hearing of people with auditory nerve damage. Some devices provide respiratory assistance through oxygen supplementation and mechanical ventilation. Others, such as cardiorespiratory monitors and pulse oximeters, are used as surveillance devices that alert an attendant to a potential vitality problem. Nutritive assistive devices can assist in tube feeding or elimination through ostomies. Intravenous therapy can be provided through medication infusion, and kidney function can be assumed by kidney dialysis machines (Batshaw & Perret, 1992). In addition to keeping people alive, technologies such as these can enable people to fully participate in school, community, and work activities.

Monday, December 10, 2007

GSM Technology

What is GSM?

GSM (Global System for Mobile communications) is an open, digital cellular technology used for transmitting mobile voice and data services. GSM differs from first-generation wireless systems in that it uses digital technology and time division multiple access (TDMA) transmission methods. GSM is a circuit-switched system that divides each 200 kHz carrier into eight time slots. GSM operates in the 900 MHz and 1.8 GHz bands in Europe and the 1.9 GHz and 850 MHz bands in the US. The 850 MHz band is also used for GSM and 3GSM in Australia, Canada and many South American countries. GSM supports data transfer speeds of up to 9.6 kbit/s, allowing the transmission of basic data services such as SMS (Short Message Service). Another major benefit is its international roaming capability, allowing users to access the same services when travelling abroad as at home. This gives consumers seamless and same-number connectivity in more than 210 countries. GSM satellite roaming has also extended service access to areas where terrestrial coverage is not available.
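
As a rough illustration of the TDMA scheme described above, the sketch below (plain Python, with invented user names) assigns up to eight users to the time slots of a single carrier and prints the burst order for one frame.

```python
# Toy illustration of GSM-style TDMA: one carrier, eight time slots per frame.
# The user names and printout format are invented for this example.

SLOTS_PER_FRAME = 8

def assign_slots(users):
    """Assign each user to one time slot on the carrier (at most eight users)."""
    if len(users) > SLOTS_PER_FRAME:
        raise ValueError("a single GSM carrier has only eight time slots")
    return {slot: user for slot, user in enumerate(users)}

def show_frame(slot_map):
    """Print which user transmits in each slot of one TDMA frame."""
    for slot in range(SLOTS_PER_FRAME):
        print(f"slot {slot}: {slot_map.get(slot, '<idle>')}")

if __name__ == "__main__":
    show_frame(assign_slots(["alice", "bob", "carol"]))
```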

Did you know that when you travel with your GSM phone, you can be instantly contactable on your usual number in over 100 countries worldwide?

The major advantage of GSM technology is that it allows you to use your GSM phone when you travel outside your own country or region. This is known as roaming.

Roaming is the ability to use your own GSM phone number in another GSM network. You can roam to another region or country and use the services of any network operator in that region that has a roaming agreement with your GSM network operator in your home region/country.

A roaming agreement is a business agreement between two network operators to transfer items such as call charges and subscription information back and forth, as their subscribers roam into each other's areas.

General Packet Radio Services (GPRS)



DEFINITION - General Packet Radio Services (GPRS) is a packet-based wireless communication service that promises data rates from 56 up to 114 Kbps and continuous connection to the Internet for mobile phone and computer users. The higher data rates allow users to take part in video conferences and interact with multimedia Web sites and similar applications using mobile handheld devices as well as notebook computers. GPRS is based on Global System for Mobile (GSM) communication and complements existing services such as circuit-switched cellular phone connections and the Short Message Service (SMS).
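
To put those data rates in perspective, here is a small back-of-the-envelope comparison in Python; the 500 KB file size is an arbitrary example, and protocol overhead is ignored.

```python
# Rough transfer-time comparison for GSM vs. GPRS data rates (overhead ignored).

def transfer_seconds(size_kbytes, rate_kbits_per_s):
    """Time to move size_kbytes of data at rate_kbits_per_s."""
    return (size_kbytes * 8) / rate_kbits_per_s

FILE_KB = 500  # arbitrary example file
for label, rate in [("GSM at 9.6 kbit/s", 9.6),
                    ("GPRS at 56 kbit/s", 56),
                    ("GPRS at 114 kbit/s", 114)]:
    print(f"{label}: about {transfer_seconds(FILE_KB, rate):.0f} seconds")
```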

In theory, GPRS packet-based services cost users less than circuit-switched services, since communication channels are used on a shared, as-packets-are-needed basis rather than dedicated to only one user at a time. It is also easier to make applications available to mobile users, because the faster data rate means that the middleware currently needed to adapt applications to the slower speed of wireless systems is no longer needed. As GPRS has become more widely available, along with other 2.5G and 3G services, mobile users of virtual private networks (VPNs) have been able to access the private network continuously over wireless rather than through a dial-up connection.

GPRS also complements Bluetooth, a standard for replacing wired connections between devices with wireless radio connections. In addition to the Internet Protocol (IP), GPRS supports X.25, a packet-based protocol that is used mainly in Europe. GPRS is an evolutionary step toward Enhanced Data GSM Environment (EDGE) and Universal Mobile Telephone Service (UMTS).

The Great Technology War: LCD vs. DLP

Introduction

If you are new to the world of digital projectors, you won't have to shop around the market very long before discovering that "LCD" and "DLP" somehow refer to two different kinds of projectors. You might not even know what LCD and DLP are before asking the obvious question: "which one is better?"

The answer is simple. Sort of. LCD and DLP each have unique advantages over the other. Neither one is perfect. So it is important to understand what each one gives you. Then you can make a good decision about which will be better for you.

By the way, there is a third very significant light engine technology called LCOS (liquid crystal on silicon). It is being developed by several vendors, most notably JVC and Hitachi. Several outstanding home theater projectors have been manufactured with this technology, and JVC's LCOS-based DLA-SX21 is currently on our list of Highly Recommended Home Theater Projectors. However the discussion of LCOS technology is beyond the scope of this article. For more on LCOS click here.

The Technical Differences between LCD and DLP

LCD (liquid crystal display) projectors usually contain three separate LCD glass panels, one each for red, green, and blue components of the image signal being fed into the projector. As light passes through the LCD panels, individual pixels ("picture elements") can be opened to allow light to pass or closed to block the light, as if each little pixel were fitted with a Venetian blind. This activity modulates the light and produces the image that is projected onto the screen.

DLP ("Digital Light Processing") is a proprietary technology developed by Texas Instruments. It works quite differently than LCD. Instead of having glass panels through which light is passed, the DLP chip is a reflective surface made up of thousands of tiny mirrors. Each mirror represents a single pixel.

In a DLP projector, light from the projector's lamp is directed onto the surface of the DLP chip. The mirrors wobble back and forth, directing light either into the lens path to turn the pixel on, or away from the lens path to turn it off.

In very expensive DLP projectors, there are three separate DLP chips, one each for the red, green, and blue channels. However, in DLP projectors under $20,000, there is only one chip. In order to define color, there is a color wheel that consists of red, green, blue, and sometimes white (clear) filters. This wheel spins between the lamp and the DLP chip and alternates the color of the light hitting the chip from red to green to blue. The mirrors tilt away from or into the lens path based upon how much of each color is required for each pixel at any given moment in time. This activity modulates the light and produces the image that is projected onto the screen.
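
As a very simplified model of the single-chip scheme just described, the sketch below (with hypothetical numbers) computes, for one pixel, the fraction of each color-wheel segment during which the mirror should sit in the lens path, given the pixel's target RGB intensities. Real projectors do this with very fast pulse-width modulation within each segment; this only shows the idea.

```python
# Simplified single-chip DLP model: during the red, green and blue segments of the
# spinning color wheel, the pixel's micromirror tilts into the lens path for a
# fraction of the segment proportional to that color's target intensity (0-255).

def mirror_duty_cycles(red, green, blue):
    """Return the fraction of each wheel segment the mirror spends 'on'."""
    return {"red": red / 255, "green": green / 255, "blue": blue / 255}

if __name__ == "__main__":
    # Example pixel: a warm orange.
    for segment, duty in mirror_duty_cycles(255, 128, 0).items():
        print(f"{segment} segment: mirror on for {duty:.0%} of the segment")
```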

The Advantages of LCD Technology

One benefit of LCD is that it has historically delivered better color saturation than you get from a DLP projector. That's primarily because in most single-chip DLP projectors, a clear (white) segment is included in the color wheel along with red, green, and blue in order to boost brightness, or total lumen output. Though the image is brighter than it would otherwise be, this tends to reduce color saturation, making the DLP picture appear not quite as rich and vibrant. However, some of the DLP-based home theater products now have six-segment color wheels that eliminate the white component. This contributes to a richer display of color. And even some of the newer high-contrast DLP units that have a white segment in the wheel are producing better color saturation than they used to. Overall, however, the best LCD projectors still have a noteworthy performance advantage in this area.

LCD also delivers a somewhat sharper image than DLP at any given resolution. The difference here is more relevant for detailed financial spreadsheet presentations than it is for video. This is not to say that DLP is fuzzy--it isn't. When you look at a spreadsheet projected by a DLP projector it looks clear enough. It's just that when a DLP unit is placed side-by-side with an LCD of the same resolution, the LCD typically looks sharper in comparison.

A third benefit of LCD is that it is more light-efficient. LCD projectors usually produce significantly higher ANSI lumen outputs than do DLPs with the same wattage lamp. In the past year, DLP machines have gotten brighter and smaller--and there are now DLP projectors rated at 2500 ANSI lumens, which is a comparatively recent development. Still, LCD competes extremely well when high light output is required. All of the portable light cannons under 20 lbs putting out 3500 to 5000 ANSI lumens are LCD projectors.

The Weaknesses of LCD Technology

LCD projectors have historically had two weaknesses, both of which are more relevant to video than they are to data applications. The first is visible pixelation, or what is commonly referred to as the "screendoor effect" because it looks like you are viewing the image through a screendoor. The second weakness is not-so-impressive black levels and contrast, which are vitally important elements in a good video image. LCD technology has traditionally had a hard time being taken seriously among some home theater enthusiasts (understandably) because of these flaws in the image.

However, in many of today's projectors these flaws aren't nearly what they used to be. Three developments have served to reduce the screendoor problem on LCD projectors. First was the step up to higher resolutions, first to XGA resolution (1,024x768), and then to widescreen XGA (WXGA, typically either 1280x720 or 1365x768). This widescreen format is found, for example, on the Sanyo PLV-70 and Epson TW100, (two more products currently on our Highly Recommended list). Standard XGA resolution uses 64% more pixels to paint the image on the screen than does an SVGA (800x600) projector. The inter-pixel gaps are reduced in XGA resolution, so pixels are more dense and less visible. Then with the widescreen 16:9 machines, the pixel count improves by another quantum leap. While an XGA projector uses about 589,000 pixels to create a 16:9 image, a WXGA projector uses over one million. At this pixel density, the screendoor effect is eliminated at normal viewing distances.
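
The pixel-count comparisons above are easy to verify; the short sketch below simply redoes the arithmetic for the resolutions mentioned in the text (SVGA, XGA and the two common WXGA formats).

```python
# Pixel counts behind the comparisons in the text.

def pixels(width, height):
    return width * height

svga = pixels(800, 600)        # 480,000
xga = pixels(1024, 768)        # 786,432
xga_169 = pixels(1024, 576)    # pixels an XGA chip uses for a 16:9 image
wxga_720 = pixels(1280, 720)
wxga_768 = pixels(1365, 768)

print(f"XGA vs SVGA: {xga / svga - 1:.0%} more pixels")     # about 64%
print(f"XGA pixels used for 16:9: {xga_169:,}")             # about 589,000
print(f"WXGA 1280x720: {wxga_720:,}")
print(f"WXGA 1365x768: {wxga_768:,}")                       # over one million
```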

Second, the inter-pixel gaps on all LCD machines, no matter what resolution, are reduced compared to what they used to be. So even today's inexpensive SVGA-resolution LCD projectors have less screendoor effect than older models did. And it is virtually invisible on the Panasonic PT-L300U, which has a medium-resolution widescreen format of 960x540.

The third development in LCDs was the use of Micro-Lens Array (MLA) to boost the efficiency of light transmission through XGA-resolution LCD panels. Some XGA-class LCD projectors have this feature, but most do not. For those that do, MLA has the happy side effect of reducing pixel visibility a little bit as compared to an XGA LCD projector without MLA. On some projectors with this feature, the pixel grid can also be softened by placing the focus just a slight hair off perfect, a practice recommended for the display of quality video. This makes the pixels slightly indistinct without any noticeable compromise in video image sharpness.

Now when it comes to contrast, LCD still lags behind DLP by a considerable margin. But recent major improvements in LCD's ability to render higher contrast have kept LCD machines in the running among home theater enthusiasts. All of the LCD projectors just mentioned have contrast ratios of at least 800:1. They produce much more snap, better black levels, and better shadow detail than the LCD projectors of years past were able to deliver.

Friday, December 7, 2007

Health Risks due to infrared

Imagine for a moment going about your daily routine without electricity. You probably awoke to an electric clock radio/alarm, showered under warm water supplied via an electric hot water heater, drank a couple of cups of coffee from your automatic electric coffee maker, listened to the weather on the electric powered TV or radio - and the list goes on and on. We live in an electrical environment!

Electricity is all around you and while you cannot see electricity, you can certainly appreciate the results. However, any time electric current travels through a wire, the air, or runs an appliance, it produces an electromagnetic field. It is important to remember that electromagnetic fields are found everywhere that electricity is in use. While researchers have not established an ironclad link between the exposure to electromagnetic fields and ailments such as leukemia, the circumstantial evidence concerns many people.

The evidence also suggests that we need to use some common sense when dealing with electricity. In scientific terms, your body can act as an antenna, as it has a higher conductivity for electricity than does air. Therefore, when conditions are right, you may experience a small "tingle" of electric current from a poorly grounded electric appliance. As long as these currents are very small there isn't much danger from electric fields, except for potential shocks. Your body, however, also has a permeability almost equal to air, thus allowing a magnetic field to easily enter the body. Unfortunately your body cannot detect the presence of a strong magnetic field, which could potentially do much more harm.

In terms of wireless technology, there are no confirmed health risks or scientific dangers from infrared or radio frequency, with two known exceptions:

  1. point-to-point lasers which can cause burns or blindness
  2. prolonged microwave exposure which has been linked to cancer and leukemia
Therefore, most health concerns related to electromagnetic fields are due to the electricity in our day-to-day use, such as computer monitors and TVs. These dangers, if any, are already in the home and workplace, and the addition of wireless technology should not be seen as an exceptional risk. We might be rightfully worried or concerned about the electric power grid two blocks from our home or school, but at the same time we sleep each night with our head only a few feet from an AC-powered clock radio, which may be far worse simply because of its proximity. We might also be worried about the magnetic radiation or magnetically induced electrical fields which surround us from the fluorescent light fixtures and high voltage, high frequency lighting we sit under at work and at home.

The real danger, however, is that we normally position ourselves too close to the electromagnetic field source (computer monitor, TV, etc.). Remember that the strength of the electromagnetic field (EMF) decreases as the square of the distance from the field source. Therefore, if we are 2 meters away from the source, the EMF strength is reduced to 1/4 of what it is at 1 meter, and if we move 8 meters away, it is reduced to 1/64 of its original strength.
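
The inverse-square falloff described above is easy to tabulate; the short sketch below (relative units, with 1 meter taken as the reference distance) shows how quickly field strength drops as you move away from a source.

```python
# Relative EMF strength under an inverse-square law; 1 meter is the reference.

def relative_strength(distance_m, reference_m=1.0):
    """Field strength at distance_m relative to the strength at reference_m."""
    return (reference_m / distance_m) ** 2

for d in [1, 2, 4, 8]:
    print(f"{d} m: {relative_strength(d):.4f} of the 1 m strength")
# 2 m -> 0.2500 (1/4) and 8 m -> 0.0156 (1/64), matching the figures in the text.
```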


Safety

There are a few things you can do to make your home and work environment a safer "electronic" place. The first thing to consider when possible is to buy Federal Communications Commission (FCC) Class B rated equipment. The FCC classifies computer equipment for its potential to generate radio frequency pollution. Class B emits less radio frequency pollution than Class A, and is more suitable for the residential environment. Unfortunately, while Class B emits less radio frequency pollution, there is nothing in the FCC classes regarding magnitude or level of the pollution.

Other potential risks exist in high voltage (e.g. power) components such as display monitors, computer power supplies, etc. If possible select low power units, shielded units, etc. and operate them at lower resolutions. For example, VGA resolution has a lower refresh scan rate than SVGA, and thus lower magnetic field pollution. If you are adding internal cards to your computers, don't tamper with the computer by removing any internal shielding, covers, etc. Any metal shielding inside your computer was probably put there for a purpose, although to you it may look like a harmless spacer!

If you are really concerned, you can purchase formal safety testing tools or hire a consultant to do formal testing for EMF. There are also cheap tools you can utilize to test for the presence of strong radio or magnetic fields. For example, the presence of a strong magnetic field will deflect a compass needle from pointing north, or the presence of a strong radio frequency field will distort an AM radio's ability to clearly tune in a station. Simple tools like these can be used to screen for strong EMF.


Security

Electromagnetic frequencies currently have little legal status for protection and as such, can be freely intercepted by motivated individuals. This doesn't mean wireless transmission is easily breached, as security varies by the type of wireless transmission method. As presented earlier in the advantages and disadvantages of infrared versus radio frequency transmission, what might be considered an advantage to one method for transmission could turn out to be a disadvantage for security. For example, because infrared is line-of-sight it has less transmission range but is also more difficult to intercept when compared to radio frequency. Radio frequency can penetrate walls, making it much easier to transmit a message, but also more susceptible to tapping.

A possible solution to security issues will likely be some form of data encryption. Data encryption standards (DES) are also being quickly developed for the exchange of information over the Internet, and many of these same DES will be applied to wireless technology.


An Introduction to Infrared Technology

As next-generation electronic information systems evolve, it is critical that all people have access to the information available via these systems. Examples of developing and future information systems include interactive television, touchscreen-based information kiosks, and advanced Internet programs. Infrared technology, increasingly present in mainstream applications, holds great potential for enabling people with a variety of disabilities to access a growing list of information resources. Already commonly used in remote control of TVs, VCRs and CD players, infrared technology is also being used and developed for remote control of environmental control systems, personal computers, and talking signs.

For individuals with mobility impairments, the use of infrared or other wireless technology can facilitate the operation of information kiosks, environmental control systems, personal computers and associated peripheral devices. For individuals with visual impairments, infrared or other wireless communication technology can enable users to locate and access talking building directories, street signs, or other assistive navigation devices. For individuals using augmentative and alternative communication (AAC) devices, infrared or other wireless technology can provide an alternate, more portable, more independent means of accessing computers and other electronic information systems.

In this presentation/paper, an introduction to wireless communication in general is first presented. A discussion specific to infrared technology then follows, with advantages and disadvantages of the technology presented along with security, health and safety issues. The importance of establishing a standard is also discussed with relevance to the disability field, and future uses of infrared technology are presented.


Wireless Communication

Wireless communication, as the term implies, allows information to be exchanged between two devices without the use of wire or cable. A wireless keyboard sends information to the computer without the use of a keyboard cable; a cellular telephone sends information to another telephone without the use of a telephone cable. Changing television channels, opening and closing a garage door, and transferring a file from one computer to another can all be accomplished using wireless technology. In all such cases, information is being transmitted and received using electromagnetic energy, also referred to as electromagnetic radiation. One of the most familiar sources of electromagnetic radiation is the sun; other common sources include TV and radio signals, light bulbs and microwaves. To provide background information in understanding wireless technology, the electromagnetic spectrum is first presented and some basic terminology defined.

The electromagnetic spectrum classifies electromagnetic energy according to frequency or wavelength (both described below). As shown in Figure 1, the electromagnetic spectrum ranges from energy waves having extremely low frequency (ELF) to energy waves having much higher frequency, such as x-rays.

Description of figure(s) below

[Figure 1 description: The electromagnetic spectrum is depicted in Figure 1. A horizontal bar represents a range of frequencies from 10 Hertz(cycles per second) to 10 to the 18th power Hertz. Some familiar allocated frequency bands are labeled on the spectrum. Approximate locations are as follows. (Exponential powers of 10 are abbreviated as 10exp.)

10 Hertz: extremely low frequency or ELF.
10exp5 Hertz: AM radio.
10exp8 Hertz: FM radio.
10exp10 Hertz: Television.
10exp11 Hertz: Microwave.
10exp14 Hertz: Infrared (frequency range is below the visible light spectrum).
10exp15 Hertz: Visible Light.
10exp16 Hertz: Ultraviolet (frequency range is above the visible light spectrum).
10exp18 Hertz: X-rays.]

A typical electromagnetic wave is depicted in Figure 2, where the vertical axis represents the amplitude or strength of the wave, and the horizontal axis represents time. In relation to electromagnetic energy, frequency is:

  1. the number of cycles a wave completes (or the number of times a wave repeats itself) in one second

  2. expressed as Hertz (Hz), which equals one cycle per second

  3. commonly indicated by prefixes such as

    a. Kilo (KHz) one thousand
    b. Mega (MHz) one million
    c. Giga (GHz) one billion

  4. directly related to the amount of information that can be transmitted on the wave

Description of figure(s) below

[Figure 2 description: A sine wave is depicted in the graph in Figure 2. The horizontal axis of the graph represents time, and the vertical axis of the graph represents amplitude. One cycle (or one complete sine wave) is labeled on the graph.]

Description of figure(s) below

[Figure 3 description: Graphs of three different sine waves are depicted in Figure 3. The horizontal axis, with values ranging from 0 to 1, represents time in seconds. The vertical axis, with values ranging from -1 to 1, represents arbitrary amplitude. The first graph in the figure depicts a sine wave with a frequency of 1 cycle per second. As shown, the energy wave makes a complete cycle from 0 to its maximum positive value, then through to its maximum negative value, then back to 0. The second graph in the figure depicts a sine wave with a frequency of 2 cycles per second. The sine wave therefore makes 2 complete cycles of moving from 0 to its maximum positive value, through to its maximum negative value, and back to 0, in the same time that the wave in the first graph completes 1 cycle. The third graph in the figure depicts a sine wave with a frequency of 3 cycles per second. The sine wave therefore completes 3 full cycles in the same amount of time that the wave in the first graph completes 1 cycle.]

Figure 3 illustrates energy waves completing one cycle, two cycles and three cycles per second. Generally, the higher the range of frequencies (or bandwidth), the more information can be carried per unit of time.

The term wavelength is used almost interchangeably with frequency. In relation to electromagnetic energy, wavelength is:

  1. the shortest distance at which the wave pattern fully repeats itself

  2. expressed as meters

  3. commonly indicated by prefixes such as

    a. Kilo (km) 10exp3
    b. Milli (mm) 10exp-3
    c. Nano (nm) 10exp-9

  4. inversely proportional to frequency

Figure 4 depicts an infrared energy wave and a radio energy wave, and illustrates the two different energy wavelengths. As is expected based on the electromagnetic spectrum, the infrared wave is higher frequency and therefore shorter wavelength than the radio wave. Conversely, the radio wave is lower frequency and therefore longer wavelength than the infrared wave. Anyone who has listened to the radio while driving long distances can appreciate that longer wavelength AM radio waves carry further than the shorter wavelength FM radio waves.
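
Since wavelength and frequency are tied together by the speed of light (wavelength = c / frequency), the relationship illustrated in Figure 4 can be checked with a few lines of code; the frequencies below are round example values chosen only for illustration.

```python
# Wavelength = speed of light / frequency, for a few illustrative frequencies.

SPEED_OF_LIGHT = 3.0e8  # meters per second (approximate)

def wavelength_m(frequency_hz):
    return SPEED_OF_LIGHT / frequency_hz

examples = {
    "AM radio (about 1 MHz)": 1e6,
    "FM radio (about 100 MHz)": 1e8,
    "Near infrared (about 300 THz)": 3e14,
}
for label, freq in examples.items():
    print(f"{label}: wavelength of roughly {wavelength_m(freq):.2e} m")
```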

Description of figure(s) below

[Figure 4 description: Figure 4 depicts a radio frequency energy wave superimposed upon an infrared energy wave, and illustrates the inverse relationship between frequency and wavelength. The infrared energy wave completes nearly 5 and a half cycles in the time that the radio frequency wave completes 2 cycles. The wavelengths of the infrared wave and the radio wave are labeled, and the infrared wavelength is less than half the wavelength of the radio wave.]

Other terms commonly used in describing wireless communication include transmitter, receiver, and transceiver. In any type of wireless technology, information must be sent (or transmitted) by one device and captured (or received) by another device. The transmitter takes its input (a voice or a stream of data bits, for example), creates an energy wave that contains the information, and sends the wave using an appropriate output device. As an example, a radio transmitter outputs its energy waves using an antenna, while an infrared transmitter uses an infrared light-emitting diode (LED) or laser diode. The electromagnetic energy waves are captured by the receiver, which then processes the waves to retrieve and output the information in its original form. Any wireless device having the circuitry to both transmit and receive energy signals is referred to as a transceiver. Depending on the communication protocol being used, a device may be capable of only transmitting or receiving information at one time, or it may be capable of both transmitting and receiving information at the same time.

The terminology described above is relevant in all forms of wireless communication, regardless of the band of electromagnetic energy (radio, infrared, etc.) being used. Although radio and ultrasound waves have frequent application in wireless communication, the remainder of the presentation/paper is devoted more specifically to infrared (IR) technology. Infrared technology is highlighted because of its increasing presence in mainstream applications, its current and potential usage in disability-related applications, and its advantages over other forms of wireless communication.


Wednesday, December 5, 2007

Nanorobotics

Nanorobotics is an emerging field that deals with the controlled manipulation of objects with nanometer-scale dimensions. Typically, an atom has a diameter of a few Ångstroms (1 Å = 0.1 nm = 10^-10 m), a molecule's size is a few nm, and clusters or nanoparticles formed by hundreds or thousands of atoms have sizes of tens of nm. Therefore, Nanorobotics is concerned with interactions with atomic- and molecular-sized objects, and is sometimes called Molecular Robotics. We use these two expressions, plus Nanomanipulation, as synonyms in this article.

Molecular Robotics falls within the purview of Nanotechnology, which is the study of phenomena and structures with characteristic dimensions in the nanometer range. The birth of Nanotechnology is usually associated with a talk by Nobel Prize winner Richard Feynman entitled "There's Plenty of Room at the Bottom", whose text may be found in [Crandall & Lewis 1992]. Nanotechnology has the potential for major scientific and practical breakthroughs. Future applications ranging from very fast computers to self-replicating robots are described in Drexler's seminal book [Drexler 1986]. In a less futuristic vein, the following potential applications were suggested by well-known experimental scientists at the Nano4 conference held in Palo Alto in November 1995:

  • Cell probes with dimensions ~ 1/1000 of the cell's size
  • Space applications, e.g. hardware to fly on satellites
  • Computer memory
  • Near field optics, with characteristic dimensions ~ 20 nm
  • X-ray fabrication, systems that use X-ray photons
  • Genome applications, reading and manipulating DNA
  • Nanodevices capable of running on very small batteries
  • Optical antennas

Nanotechnology is being pursued along two converging directions. From the top down, semiconductor fabrication techniques are producing smaller and smaller structures-see e.g. [Colton & Marrian 1995] for recent work. For example, the line width of the original Pentium chip is 350 nm. Current optical lithography techniques have obvious resolution limitations because of the wavelength of visible light, which is in the order of 500 nm. X-ray and electron-beam lithography will push sizes further down, but with a great increase in complexity and cost of fabrication. These top-down techniques do not seem promising for building nanomachines that require precise positioning of atoms or molecules.

Alternatively, one can proceed from the bottom up, by assembling atoms and molecules into functional components and systems. There are two main approaches for building useful devices from nanoscale components. The first is based on self-assembly, and is a natural evolution of traditional chemistry and bulk processing-see e.g. [Gómez-López et al. 1996]. The other is based on controlled positioning of nanoscale objects, direct application of forces, electric fields, and so on. The self-assembly approach is being pursued at many laboratories. Despite all the current activity, self-assembly has severe limitations because the structures produced tend to be highly symmetric, and the most versatile self-assembled systems are organic and therefore generally lack robustness. The second approach involves Nanomanipulation, and is being studied by a small number of researchers, who are focusing on techniques based on Scanning Probe Microscopy (abbreviated SPM, and described later in this article).

A top-down technique that is closely related to Nanomanipulation involves removing or depositing small amounts of material by using an SPM. This approach falls within what is usually called Nanolithography. SPM-based Nanolithography is akin to machining or to rapid prototyping techniques such as stereolithography. For example, one can remove a row or two of hydrogen atoms on a silicon substrate that has been passivated with hydrogen by moving the tip of an SPM in a straight line over the substrate and applying a suitable voltage. The removed atoms are "lost" to the environment, much like metal chips in a machining operation. Lines with widths in the order of 10 to 100 nm have been written by these techniques-see e.g. [Wiesendanger 1994] for a survey of some of this work. In this article we focus on Nanomanipulation proper, which is akin to assembly in the macroworld.

Nanorobotics research has proceeded along two lines. The first is devoted to the design and computational simulation of robots with nanoscale dimensions-see [Drexler 1992] for the design of robots that resemble their macroscopic counterparts. Drexler's nanorobot uses various mechanical components such as nanogears built primarily with carbon atoms in a diamondoid structure. A major issue is how to build these devices, and little experimental progress has been made towards their construction.

The second area of Nanorobotics research involves manipulation of nanoscale objects with macroscopic instruments. Experimental work has been focused on this area, especially through the use of SPMs as robots. The remainder of this article describes SPM principles, surveys SPM use in Nanomanipulation, looks at the SPM as a robot, and concludes with a discussion of some of the challenges that face Nanorobotics research.


Figure 3 - The initial pattern of 15 nm Au balls (left) and the "USC"
pattern obtained by nanomanipulation (right).


Figure 4 - The "USC" pattern viewed in perspective

Nanotechnology

Manufactured products are made from atoms. The properties of those products depend on how those atoms are arranged. If we rearrange the atoms in coal we can make diamond. If we rearrange the atoms in sand (and add a few other trace elements) we can make computer chips. If we rearrange the atoms in dirt, water and air we can make potatoes.

Today's manufacturing methods are very crude at the molecular level. Casting, grinding, milling and even lithography move atoms in great thundering statistical herds. It's like trying to make things out of LEGO blocks with boxing gloves on your hands. Yes, you can push the LEGO blocks into great heaps and pile them up, but you can't really snap them together the way you'd like.

In the future, nanotechnology will let us take off the boxing gloves. We'll be able to snap together the fundamental building blocks of nature easily, inexpensively and in most of the ways permitted by the laws of physics. This will be essential if we are to continue the revolution in computer hardware beyond about the next decade, and will also let us fabricate an entire new generation of products that are cleaner, stronger, lighter, and more precise.

It's worth pointing out that the word nanotechnology has become very popular and is used to describe many types of research where the characteristic dimensions are less than about 1,000 nanometers. For example, continued improvements in lithography have resulted in line widths that are less than one micron: this work is often called "nanotechnology." Sub-micron lithography is clearly very valuable (ask anyone who uses a computer!), but it is equally clear that conventional lithography will not let us build semiconductor devices in which individual dopant atoms are located at specific lattice sites. Many of the exponentially improving trends in computer hardware capability have held steady for the last 50 years. There is fairly widespread belief that these trends are likely to continue for at least several more years, but then conventional lithography starts to reach its limits.

If we are to continue these trends we will have to develop a new manufacturing technology which will let us inexpensively build computer systems with mole quantities of logic elements that are molecular in both size and precision and are interconnected in complex and highly idiosyncratic patterns. Nanotechnology will let us do this.

When it's unclear from the context whether we're using the specific definition of nanotechnology (given here) or the broader and more inclusive definition (often used in the literature), we'll use the terms "molecular nanotechnology" or "molecular manufacturing."

Whatever we call it, it should let us

  • Get essentially every atom in the right place.
  • Make almost any structure consistent with the laws of physics that we can specify in molecular detail.
  • Have manufacturing costs not greatly exceeding the cost of the required raw materials and energy.


Introduction to Multithreading, Superthreading and Hyperthreading

Introduction

Back in the dual-Celeron days, when symmetric multiprocessing (SMP) first became cheap enough to come within reach of the average PC user, many hardware enthusiasts eager to get in on the SMP craze were asking what exactly (besides winning them the admiration and envy of their peers) a dual-processing rig could do for them. It was in this context that the PC crowd started seriously talking about the advantages of multithreading. Years later when Apple brought dual-processing to its PowerMac line, SMP was officially mainstream, and with it multithreading became a concern for the mainstream user as the ensuing round of benchmarks brought out the fact you really needed multithreaded applications to get the full benefits of two processors.

Even though the PC enthusiast SMP craze has long since died down and, in an odd twist of fate, Mac users are now many times more likely to be sporting an SMP rig than their x86-using peers, multithreading is once again about to increase in importance for PC users. Intel's next major IA-32 processor release, codenamed Prescott, will include a feature called simultaneous multithreading (SMT), also known as hyper-threading. To take full advantage of SMT, applications will need to be multithreaded; and just like with SMP, the higher the degree of multithreading the more performance an application can wring out of Prescott's hardware.

Intel actually already uses SMT in a shipping design: the Pentium 4 Xeon. Near the end of this article we'll take a look at the way the Xeon implements hyper-threading; this analysis should give us a pretty good idea of what's in store for Prescott. Also, it's rumored that the current crop of Pentium 4's actually has SMT hardware built-in, it's just disabled. (If you add this to the rumor about x86-64 support being present but disabled as well, then you can get some idea of just how cautious Intel is when it comes to introducing new features. I'd kill to get my hands on a 2.8 GHz P4 with both SMT and x86-64 support turned on.)

SMT, in a nutshell, allows the CPU to do what most users think it's doing anyway: run more than one program at the same time. This might sound odd, so in order to understand how it works this article will first look at how the current crop of CPUs handles multitasking. Then, we'll discuss a technique called superthreading before finally moving on to explain hyper-threading in the last section. So if you're looking to understand more about multithreading, symmetric multiprocessing systems, and hyper-threading then this article is for you.

As always, if you've read some of my previous tech articles you'll be well equipped to understand the discussion that follows. From here on out, I'll assume you know the basics of pipelined execution and are familiar with the general architectural division between a processor's front end and its execution core. If these terms are mysterious to you, then you might want to reach way back and check out my "Into the K7" article, as well as some of my other work on the P4 and G4e.

Conventional multithreading

Quite a bit of what a CPU does is illusion. For instance, modern out-of-order processor architectures don't actually execute code sequentially in the order in which it was written. I've covered the topic of out-of-order execution (OOE) in previous articles, so I won't rehash all that here. I'll just note that an OOE architecture takes code that was written and compiled to be executed in a specific order, reschedules the sequence of instructions (if possible) so that they make maximum use of the processor resources, executes them, and then arranges them back in their original order so that the results can be written out to memory. To the programmer and the user, it looks as if an ordered, sequential stream of instructions went into the CPU and an identically ordered, sequential stream of computational results emerged. Only the CPU knows in what order the program's instructions were actually executed, and in that respect the processor is like a black box to both the programmer and the user.
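
As a toy model of what the paragraph above describes, the sketch below (an invented five-instruction program with made-up latencies, single issue per cycle) lets instructions execute as soon as their inputs are ready but retires their results strictly in program order, so the outside world only ever sees the original order.

```python
# Toy out-of-order execution model: issue when operands are ready, at most one
# instruction per cycle, but retire strictly in program order. The instruction
# list, register names and latencies are all invented for this illustration.

from collections import namedtuple

Instr = namedtuple("Instr", "name dest srcs latency")

PROGRAM = [
    Instr("load r1", "r1", [], 3),
    Instr("load r2", "r2", [], 3),
    Instr("add r3",  "r3", ["r1", "r2"], 1),
    Instr("load r4", "r4", [], 3),
    Instr("mul r5",  "r5", ["r3", "r4"], 1),
]

def simulate(program):
    done_at = {}      # instruction index -> cycle its result is ready
    reg_ready = {}    # register name -> cycle its value is ready
    issued, retired, cycle = set(), 0, 0
    while retired < len(program):
        cycle += 1
        # Issue the first not-yet-issued instruction whose sources are ready,
        # even if an earlier instruction is still waiting (the out-of-order part).
        for i, ins in enumerate(program):
            if i in issued:
                continue
            if all(reg_ready.get(s, float("inf")) <= cycle for s in ins.srcs):
                issued.add(i)
                done_at[i] = reg_ready[ins.dest] = cycle + ins.latency
                print(f"cycle {cycle}: issue  {ins.name}")
                break
        # Retire completed instructions in program order only.
        while retired < len(program) and done_at.get(retired, float("inf")) <= cycle:
            print(f"cycle {cycle}: retire {program[retired].name}")
            retired += 1

simulate(PROGRAM)
```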

The same kind of sleight-of-hand happens when you run multiple programs at once, except this time the operating system is also involved in the scam. To the end user, it appears as if the processor is "running" more than one program at the same time, and indeed, there actually are multiple programs loaded into memory. But the CPU can execute only one of these programs at a time. The OS maintains the illusion of concurrency by rapidly switching between running programs at a fixed interval, called a time slice. The time slice has to be small enough that the user doesn't notice any degradation in the usability and performance of the running programs, and it has to be large enough that each program has a sufficient amount of CPU time in which to get useful work done. Most modern operating systems include a way to change the size of an individual program's time slice. So a program with a larger time slice gets more actual execution time on the CPU relative to its lower priority peers, and hence it runs faster. (On a related note, this brings to mind one of my favorite .sig file quotes: "A message from the system administrator: 'I've upped my priority. Now up yours.'")
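
A minimal sketch of the time-slicing idea follows; the task names, amounts of work and slice lengths are all hypothetical, and this is nothing like a real OS scheduler, but it shows how a round-robin scheduler with per-program slices hands out CPU time and why a larger slice means more execution time per pass.

```python
# Toy round-robin scheduler with per-program time slices (all values invented).

from collections import deque

def run(tasks):
    """tasks: list of (name, total_work_ms, time_slice_ms) tuples."""
    queue = deque(tasks)
    clock = 0
    while queue:
        name, remaining, slice_ms = queue.popleft()
        ran = min(slice_ms, remaining)          # run for one slice, or less if done
        clock += ran
        remaining -= ran
        print(f"t={clock:4d} ms: ran {name} for {ran} ms ({remaining} ms left)")
        if remaining > 0:
            queue.append((name, remaining, slice_ms))  # back of the queue

run([
    ("editor",   30, 10),   # larger slice -> more CPU time per pass
    ("mp3",      20,  5),
    ("compiler", 50, 10),
])
```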

Clarification of terms: "running" vs. "executing," and "front end" vs. "execution core."

For our purposes in this article, "running" does not equal "executing." I want to set up this terminological distinction near the outset of the article for clarity's sake. So for the remainder of this article, we'll say that a program has been launched and is "running" when its code (or some portion of its code) is loaded into main memory, but it isn't actually executing until that code has been loaded into the processor. Another way to think of this would be to say that the OS runs programs, and the processor executes them.

The other thing that I should clarify before proceeding is that the way that I divide up the processor in this and other articles differs from the way that Intel's literature divides it. Intel will describe its processors as having an "in-order front end" and an "out-of-order execution engine." This is because for Intel, the front-end consists mainly of the instruction fetcher and decoder, while all of the register rename logic, out-of-order scheduling logic, and so on is considered to be part of the "back end" or "execution core." The way that I and many others draw the line between front-end and back-end places all of the out-of-order and register rename logic in the front end, with the "back end"/"execution core" containing only the execution units themselves and the retire logic. So in this article, the front end is the place where instructions are fetched, decoded, and re-ordered, and the execution core is where they're actually executed and retired.

Preemptive multitasking vs. Cooperative multitasking

While I'm on this topic, I'll go ahead and take a brief moment to explain preemptive multitasking versus cooperative multitasking. Back in the bad old days, which weren't so long ago for Mac users, the OS relied on each program to voluntarily give up the CPU after its time slice was up. This scheme was called "cooperative multitasking" because it relied on the running programs to cooperate with each other and with the OS in order to share the CPU among themselves in a fair and equitable manner. Sure, there was a designated time slice in which each program was supposed to execute, but the rules weren't strictly enforced by the OS. In the end, we all know what happens when you rely on people and industries to regulate themselves--you wind up with a small number of ill-behaved parties who don't play by the rules and who make things miserable for everyone else. In cooperative multitasking systems, some programs would monopolize the CPU and not let it go, with the result that the whole system would grind to a halt.

Preemptive multi-tasking, in contrast, strictly enforces the rules and kicks each program off the CPU once its time slice is up. Coupled with preemptive multi-tasking is memory protection, which means that the OS also makes sure that each program uses the memory space allocated to it and it alone. In a modern, preemptively multi-tasked and protected memory OS each program is walled off from the others so that it believes it's the only program on the system.

Monday, December 3, 2007

Benefits of wireless tech.

The benefits of networking (either wired or wireless) in homes are:
  • file sharing - Network file sharing between computers gives you more flexibility than using floppy drives or Zip drives. Not only can you share photos, music files, and documents, you can also use a home network to save copies of all of your important data on a different computer. Backups are one of the most critical yet overlooked tasks in home networking.

  • printer / peripheral sharing - Once a home network is in place, it's easy to set up all of the computers to share a single printer. No longer will you need to bounce from one system to another just to print out an email message. Other computer peripherals can be shared similarly, such as network scanners, Web cams, and CD burners.

  • Internet connection sharing - Using a home network, multiple family members can access the Internet simultaneously without having to pay an ISP for multiple accounts. You will notice the Internet connection slows down when several people share it, but broadband Internet can handle the extra load with little trouble. Sharing dial-up Internet connections works, too. Painfully slow sometimes, you will still appreciate having shared dial-up on those occasions you really need it.

  • multi-player games - Many popular home computer games support LAN mode, where friends and family can play together if they have their computers networked.
  • Internet telephone service - So-called Voice over IP (VoIP) services allow you to make and receive phone calls through your home network across the Internet, saving you money.

  • home entertainment - Newer home entertainment products such as digital video recorders (DVRs) and video game consoles now support either wired or wireless home networking. Having these products integrated into your network enables online Internet gaming, video sharing and other advanced features.

Although you can realize these same benefits with a wired home network, you should carefully consider building a wireless home network instead, for the following reasons:

1. Computer mobility. Notebook computers and other portable devices are much more affordable than they were a few years ago. With a mobile computer and wireless home network, you aren't chained to a network cord and can work on the couch, on your porch, or wherever in the house is most convenient at the moment.

2. No unsightly wires. Businesses can afford to lay cable under their floors or inside walls. But most of us don't have the time or inclination to fuss with this in our home. Unless you own one of the few newer homes pre-wired with network cable, you'll save substantial time and energy avoiding the cabling mess and going wireless.

3. Wireless is the future. Wireless technology is clearly the future of networking. In building a wireless home network, you'll learn about the technology and be able to teach your friends and relatives. You'll also be better prepared for future advances in network technology.

The Methodologies


XP (Extreme Programming)

Of all the lightweight methodologies, this is the one that has received the most attention. Partly this is because of the remarkable ability of the leaders of XP, in particular Kent Beck, to get attention. It's also because of Kent Beck's ability to attract people to the approach and to take a leading role in it. In some ways, however, the popularity of XP has become a problem, as it has rather crowded out the other methodologies and their valuable ideas.

The roots of XP lie in the Smalltalk community, and in particular the close collaboration of Kent Beck and Ward Cunningham in the late 1980's. Both of them refined their practices on numerous projects during the early 90's, extending their ideas of a software development approach that was both adaptive and people-oriented.

The crucial step from informal practice to a methodology occurred in the spring of 1996. Kent was asked to review the progress of a payroll project for Chrysler. The project was being carried out in Smalltalk by a contracting company, and was in trouble. Due to the low quality of the code base, Kent recommended throwing it out entirely and starting from scratch under his leadership. The result was the Chrysler C3 project (Chrysler Comprehensive Compensation), which has since become the early flagship and training ground for XP.

The first phase of C3 went live in early 1997. The project continued after that but ran into difficulties, which resulted in the cancellation of further development in 1999. As I write this, the system still pays the original 10,000 salaried employees.

XP begins with four values: Communication, Feedback, Simplicity, and Courage. It then builds up to a dozen practices which XP projects should follow. Many of these practices are old, tried and tested techniques, yet often forgotten by many, including most planned processes. As well as resurrecting these techniques, XP weaves them into a synergistic whole where each one is reinforced by the others.

One of the most striking features, as well as one of the most initially appealing to me, is its strong emphasis on testing. While all processes mention testing, most do so with a pretty low emphasis. XP, however, puts testing at the foundation of development, with every programmer writing tests as they write their production code. The tests are integrated into a continuous integration and build process, which yields a highly stable platform for future development.
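
To make this concrete, here is a minimal, hypothetical sketch in Python of what writing tests alongside production code can look like; XP itself grew up around Smalltalk and its own testing tools, so this is only an illustration of the practice, and the function and test names are invented for the example.

    import unittest

    # Production code, written together with its tests.
    def dollars_to_cents(amount):
        return round(amount * 100)

    # Tests written by the same programmer, at the same time as the code above.
    # A continuous integration build would run the whole suite on every check-in.
    class DollarsToCentsTest(unittest.TestCase):
        def test_whole_dollars(self):
            self.assertEqual(dollars_to_cents(12), 1200)

        def test_fractional_dollars(self):
            self.assertEqual(dollars_to_cents(0.99), 99)

    if __name__ == "__main__":
        unittest.main()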

On this platform XP builds an evolutionary design process that relies on refactoring a simple base system with every iteration. All design is centered around the current iteration with no design done for anticipated future needs. The result is a design process that is disciplined, yet startling, combining discipline with adaptivity in a way that arguably makes it the most well developed of all the adaptive methodologies.
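
As a hedged illustration of refactoring within this evolutionary design process, the Python sketch below shows a first, deliberately simple version and a later iteration that restructures it without changing its behavior; the names and the tax figure are hypothetical and exist only for the example.

    # Iteration 1: the simplest thing that could possibly work for the current story.
    def net_price(price, quantity):
        return price * quantity * 1.08  # hypothetical 8% tax folded into the calculation

    # A later iteration: another story needs the tax rate too, so the refactoring
    # step extracts it. The behavior of net_price is unchanged, and the existing
    # tests confirm that. (This definition supersedes the one above.)
    TAX_RATE = 0.08

    def with_tax(amount):
        return amount * (1 + TAX_RATE)

    def net_price(price, quantity):
        return with_tax(price * quantity)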

XP has developed a wide leadership, many of them springing from the seminal C3 project. As a result, there are a lot of sources for more information. The best summary at the moment is written by an outsider, Jim Highsmith, whose own methodology I'll cover later. Kent Beck wrote Extreme Programming Explained, the key manifesto of XP, which explains the rationale behind the methodology and gives enough of an explanation of it to tell folks whether they are interested in pursuing it further.

Two further books are in the works. Three members of the C3 project: Ron Jeffries, Ann Anderson, and Chet Hendrickson are writing Extreme Programming Installed, an explanation of XP based on the C3 experience. Kent Beck and I are writing Planning Extreme Programming, which discusses how you do planning in this adaptive manner.

As well as books, there are a fair number of web resources. Much of the early advocacy and development of the XP ideas occurred on Ward Cunningham's wiki web collaborative writing environment. The wiki remains a fascinating place to explore, although its rambling nature does tend to suck you in. For a more structured approach to XP, it's best to start with two sites from C3 alumni: Ron Jeffries's xProgramming.com and Don Wells's extremeProgramming.org. Bill Wake's xPlorations contains a slew of useful papers. Robert Martin, the well known author on C++ and OO design, has also joined the list of XP promoters. His company, ObjectMentor, has a number of papers on its web site. They also sponsor the xp discussion egroup.


Open Source

You may be surprised by this heading. After all open source is a style of software, not so much a process. However there is a definite way of doing things in the open source community, and much of their approach is as applicable to closed source projects as it is to open source. In particular their process is geared to physically distributed teams, which is important because most adaptive processes stress co-located teams.

Most open source projects have one or more maintainers. A maintainer is the only person who is allowed to commit a change into the source code repository. However people other than the maintainer may make changes to the code base. The key difference is that other folks need to send their change to the maintainer, who then reviews it and applies it to the code base. Usually these changes are made in the form of patch files which make this process easier. The maintainer thus is responsible for coordinating the patches and maintaining the design cohesion of the software.
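
As a small, hedged illustration of this patch workflow, the Python sketch below generates a unified diff between an old and a new version of a file, which is the kind of patch a contributor would then send to the maintainer; the file names are hypothetical.

    import difflib

    # Read the contributor's original and modified copies of a source file.
    with open("parser_old.py") as f:
        original = f.readlines()
    with open("parser_new.py") as f:
        modified = f.readlines()

    # Produce a unified diff, the same format a maintainer can review and
    # apply to the repository with a standard patch tool.
    patch_lines = difflib.unified_diff(original, modified,
                                       fromfile="a/parser.py",
                                       tofile="b/parser.py")

    with open("fix_parser.patch", "w") as f:
        f.writelines(patch_lines)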

Different projects handle the maintainer role in different ways. Some have one maintainer for the whole project, some divide into modules and have a maintainer per module, some rotate the maintainer, some have multiple maintainers on the same code, others have a combination of these ideas. Most open source folks are part time, so there is an issue on how well such a team coordinates for a full time project.

A particular feature of open source development is that debugging is highly parallelizable. So many people can be involved in debugging. When they find a bug they can send the patch to the maintainer. This is a good role for non-maintainers since most of the time is spent finding the bug. It's also good for folks without strong design skills.

The open source process isn't well written up as yet. The most famous paper is Eric Raymond's The Cathedral and the Bazaar, which, while an excellent description, is also rather brief. Karl Fogel's book on the CVS code repository also contains several good chapters on open-source process that would be interesting even to those who never want to do cvs update.


Transition To Digital TV Technology Causes Confusion


There's a good chance that a lot of people who have heard about the mandatory transition to digital TV for over-the-air TV broadcasts, scheduled for February 17, 2009, are confused about it in one of several ways. The first thing to be confused about is what will actually be happening with the transition. Fortunately, that's relatively easy to understand. Basically, there are two different formats in which TV programming can be transmitted. The older format is called the analog format, and it doesn't make use of computer technology to encode or decode the TV programming. TV broadcasters have been using the analog format to send TV programming over the air ever since TV was first introduced back in the middle of the twentieth century.
Digital TV is the more modern and higher tech method of transmitting TV. With digital TV, all of the video and audio that makes up the picture and sound of the TV programming is converted into digital computer data before being transmitted. Once the digital TV programming has been received, a digital tuner converts it back into picture and sound.

The next thing that many people are confused about is why anyone would want to bother converting from analog TV transmissions to digital TV. After all, if it means getting a new TV set or having to buy a special converter box in order to keep watching an older TV set, why would anyone want to spend the money? Actually, money is one of the main reasons for the conversion to all-digital transmission. The consumer electronics industry stands to make a lot of money from people buying new TV sets and digital receiver boxes in preparation for the conversion, and it has been lobbying Congress for years to mandate the change.

In addition to the fact that the consumer electronics industry stands to make a lot of money from the change, the adoption of digital TV provides a lot of technical benefits as well. There are things that can be done to a digital signal that simply can't be done to an analog one. For example, it can be compressed so that it takes up less bandwidth. Digital TV tuners also routinely clean out any interference that crops up during transmission (at least to a point), which makes the sound and picture quality of digital TV programming much higher than that of analog TV. The fact that most TV stations are transmitting both digital and analog signals right now also means that converting everything over to digital TV will free up a lot of broadcasting frequencies. The FCC will then designate some of those frequencies for emergency response so that authorities can more effectively respond to terrorist strikes and natural disasters.

Of course, like anything else, digital TV has its downsides as well. Besides having to buy digital TVs and digital receivers, more Americans might also have to buy TV antennas if they want to keep watching TV over the air. That's because, while the quality of an analog transmission fades with distance but can still remain watchable long after it has started to fade, digital TV signals will be crystal clear and then just turn into noise almost right away. Digital TV simply requires better reception to view.

Hopefully this article clarifies many aspects of the conversion to digital TV transmission.

Comcast Re-engineers Cable TV For The 21st Century

Cable TV has developed a reputation for being inferior to satellite TV. One could argue that this reputation was deserved at one point, but since then the cable TV industry has undergone a major transformation that has brought it to the point where it's arguably superior to satellite TV. Comcast is one example of a cable TV company that has brought cable TV up to twenty-first century standards.
The biggest change that Comcast made when it revamped its cable TV service was to convert its transmission format from the older, less efficient analog format to digital TV. Digital TV alone gives Comcast an edge in a number of different ways. The most obvious edge that Comcast gets from digital TV is the clearer picture. This clearer picture has to do with the fact that a certain amount of interference crops up any time video is transmitted more than a few feet, even when it's transmitted over underground cables. That interference can be cleaned out of a digital signal by the digital receiver so that the resulting picture is as clean as the source recording. Though it's less obvious, digital TV delivers superior quality sound as well.

The real advantage that the adoption of digital TV gave to Comcast was the ability to compress video so that more channels could be transmitted over the same cables. This allowed Comcast to more than triple the number of channels it could offer. Now the biggest programming plan from Comcast offers very close to three hundred channels. A new technology that the company is in the process of implementing, called Switched Digital Video, will allow Comcast to make even better use of its existing bandwidth. Switched Digital Video will allow Comcast to send any given viewer just the channel that viewer is watching at any given time. That's a lot more efficient than sending all of the available channels to each and every viewer all the time and letting the individual receivers sort out what actually gets displayed on the screen. This technology will remove all practical limits on the number of channels that a cable TV company can offer.
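
To see why switching is more efficient, here is a rough back-of-the-envelope sketch in Python; the bit rate and viewing figures are assumptions chosen purely to illustrate the comparison, not Comcast's actual numbers.

    # Hypothetical figures, for illustration only.
    CHANNELS_OFFERED = 300   # channels in the full lineup
    MBPS_PER_CHANNEL = 3.75  # assumed rate for one standard-definition digital channel
    CHANNELS_IN_USE = 60     # assumed distinct channels being watched in one neighborhood node

    send_everything = CHANNELS_OFFERED * MBPS_PER_CHANNEL
    switched_digital_video = CHANNELS_IN_USE * MBPS_PER_CHANNEL

    print(f"Broadcast every channel to everyone: {send_everything:.0f} Mbps")
    print(f"Send only the channels in use:       {switched_digital_video:.0f} Mbps")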

Switched Digital Video has shown up just in time, given the surge of interest in HDTV programming and the rampant competition among TV service providers to deliver the most high-def channels to their subscribers. Under the older transmission method, the "shotgun approach," HDTV channels were extremely difficult to deliver because they are up to ten times more data intensive than normal TV, and video can only be compressed so much. The bandwidth required to transmit the number of HDTV channels that viewers will soon demand would have crippled a cable system of the past, but thanks to Comcast's implementation of Switched Digital Video technology, the bandwidth requirements of HDTV will be irrelevant. This makes Comcast a good choice for this kind of TV now and in the years to come.

All of this makes the cable TV industry in general, and Comcast specifically, worth a good deal of consideration.

HDTV Displays Take Advantage of Fascinating Technology

HDTV has gotten a lot of interest among home entertainment enthusiasts and normal people alike, and it's really no wonder. After all, who could not be excited about bringing all of the best parts of the commercial movie theater experience into their own home? The HDTV experience includes the same wide screen format that most major motion pictures are filmed in, a higher resolution picture than standard definition televisions are capable of providing, and the theater quality sound of Dolby Digital 5.1 Surround Sound.
Of course, the technology that goes into making HDTV sets is also really fascinating. After all, how often do you get to buy something called a plasma screen TV or a technology called digital light processing?

While HDTV sets use a variety of very different technologies to provide you with a large, high quality picture, there are a number of things that all of the technologies have in common. For example, they all have to be able to create pictures with extremely high resolutions, which means being able to cram lots of pixels onto the TV screen. The more pixels are used, the higher the resolution of the picture. Some HDTV sets are capable of creating pictures with resolutions as high as 1080p, while others barely qualify as high definition by only being able to render pictures at 720p.
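
A quick back-of-the-envelope calculation makes the gap between those two resolutions concrete: 1080p is 1920 by 1080 pixels and 720p is 1280 by 720, so a 1080p picture has roughly 2.25 times as many pixels.

    # Pixel counts behind the two common HDTV resolutions.
    resolutions = {"720p": (1280, 720), "1080p": (1920, 1080)}
    for name, (width, height) in resolutions.items():
        print(f"{name}: {width} x {height} = {width * height:,} pixels")
    # 720p:  1280 x 720  =   921,600 pixels
    # 1080p: 1920 x 1080 = 2,073,600 pixels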

Plasma screen HDTVs are the ones that probably get the most attention because they combine a brilliant variety of colors with a really cool sounding name. Plasma screens are made up of numerous pockets of gas, one pocket for each pixel, and when an electrical current is applied to a pocket of gas, the pocket glows. The color and intensity with which the gas pocket glows is determined by the voltage and amperage of the current applied to it. Plasma screens have a reputation for very high contrast because a total absence of electrical current produces a total absence of light and color, for a very deep black.

Despite their reputation for providing a great picture and the appeal of their name, plasma screens are probably the least suitable for most people's purposes. For one thing, they aren't very versatile when it comes to looking good in a variety of lighting conditions: they just don't glow brightly enough to look good in rooms with high levels of light, and they lose their brightness with time. Most models are also inappropriate for use at higher altitudes because they'll emit an annoying humming sound at elevations above six thousand feet. To top things off, plasma screen HDTVs are also energy hogs.

LCD screens don't sound as cool, but they are better for a greater variety of uses. They use less energy than many other technologies and they are bright enough to be used in a variety of lighting conditions, plus their brightness doesn't fade over time. LCD screens have the disadvantages of not being able to display very deep blacks and of blurring when displaying fast-moving objects. It's worth taking into account, though, that both of these problems are more traditional than current, as they've both been minimized over the years through more advanced applications of the technology.

Of course these are the two most popular HDTV display technologies, but there are lots of other exciting technologies to explore as well.

Sunday, December 2, 2007

Web-enabled system to open Remedy Helpdesk Tickets

Our Client Needed:

  • Customized web interface to open Call Center tickets with Remedy Helpdesk software.
  • Ability to open Call Center tickets via email.
  • Web-enabled status and update screens for previously opened tickets.

Solution:

  • Integrated Remedy Arweb software into the user's browser interface, providing the look and feel of the main web site.
  • The browser interface allowed users to add tickets, check status, and add action item information to a ticket.
  • Created an email template that allows input into customized Remedy schemas to open a ticket with the call center.

TechInfo provides technical experts that deliver Enterprise IT Solutions.

TechInfo provides subcontractor personnel in the following key areas:
  • COTS configuration and integration
  • Software Architects, Software Development
  • Database Design and Development
  • Enterprise Systems Design
  • O&M Staffing

TechInfo is a leader in providing computer solutions for business and government. TechInfo delivers enterprise-enabled solutions on a variety of platforms, including Solaris, HP-UX, Linux, and Microsoft Windows Server 2003. Our architects have experience using the RUP, UML, and CMMI processes to build enterprise solutions. Our development personnel have expertise utilizing the latest development languages and COTS products, including:

  • Java, J2EE, EJB, JDBC, JSP, Servlets, Portlets

  • .Net, ASP, XML, COM, VB

  • Oracle, SQL Server, MySQL

  • BEA, EMC/Documentum, Microsoft, HP, and much more