Sunday, June 27, 2010

Bluetooth 4.0

Bluetooth low energy and its predecessors (think Wibree) have been in the pipe for ages now, but we might actually see this tech take off en masse for the first time now that the Bluetooth SIG has officially added it into a release: 4.0. While Bluetooth 3.0 was all about high energy with the introduction of WiFi transfer, 4.0 takes things down a notch by certifying single-mode low energy devices in addition to dual-mode devices that incorporate both the low energy side of the spec plus either 2.1+EDR or 3.0. In a nutshell, the technology should bring a number of new categories and form factors of wireless devices into the fold since 1Mbps Bluetooth low energy can operate on coin cells -- the kinds you find in wristwatches, calculators, and remote controls -- and the SIG's pulling no punches by saying that "with today's announcement the race is on for product designers to be the first to market." Nokia pioneered Wibree, so you can bet they'll be among the frontrunners -- bring it, guys.

"Bluetooth v4.0 throws open the doors to a host of new markets for Bluetooth manufacturers and products such as watches, remote controls, and a variety of medical and in-home sensors," said Bluetooth SIG executive director Michael Foley. "Many of these products run on button-cell batteries that must last for years versus hours and will also benefit from the longer range enabled by this new version of the Bluetooth specification."
When talking about "longer range," Foley is referring to the reach of the new low energy radio itself, not to WiFi. Bluetooth 4.0 also carries forward the WiFi-assisted high-speed transfer introduced in v3.0 + High Speed, and the SIG now refers to the traditional Bluetooth radio as 'Classic Bluetooth', far less of a mouthful.
So Bluetooth continues to evolve and defy the critics who claimed the standard would die off years ago. Whether it will be more widely adopted than Bluetooth 3.0 + High Speed remains to be seen, however. Consumer devices featuring Bluetooth 4.0 should launch between late 2010 and early 2011, provided companies care to implement it.
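The coin-cell claim is easy to sanity-check with a back-of-the-envelope budget. The sketch below estimates battery life from a typical CR2032's capacity and an assumed duty cycle for a low energy sensor that wakes briefly to advertise; every number here is an illustrative assumption, not a figure from the 4.0 spec.

```python
# Back-of-the-envelope battery-life estimate for a Bluetooth low energy
# sensor on a CR2032 coin cell. All figures are illustrative assumptions.

CR2032_CAPACITY_MAH = 225.0     # typical coin-cell capacity
ACTIVE_CURRENT_MA = 15.0        # radio on (assumed)
ACTIVE_TIME_S = 0.001           # 1 ms burst per interval (assumed)
SLEEP_CURRENT_MA = 0.001        # 1 uA deep-sleep current (assumed)
INTERVAL_S = 1.0                # one advertising event per second

def average_current_ma():
    """Time-weighted average current over one advertising interval."""
    active = ACTIVE_CURRENT_MA * ACTIVE_TIME_S
    sleep = SLEEP_CURRENT_MA * (INTERVAL_S - ACTIVE_TIME_S)
    return (active + sleep) / INTERVAL_S

def battery_life_days():
    hours = CR2032_CAPACITY_MAH / average_current_ma()
    return hours / 24.0

if __name__ == "__main__":
    print(f"average current: {average_current_ma() * 1000:.1f} uA")
    print(f"estimated life: {battery_life_days():.0f} days")
```

Under these assumptions the average draw lands around 16 microamps, giving well over a year on one cell, which is exactly the wristwatch-and-remote-control territory the SIG is talking about.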

Intel Core i7

Intel Core i7 is an Intel brand name for several families of desktop and laptop 64-bit x86-64 processors using the Nehalem microarchitecture that are marketed for the business and high-end consumer markets. The "Core i7" brand is intended to differentiate these processors from Core i5 processors intended for the mainstream consumer market and Core i3 processors intended for the entry-level consumer market.
"Core i7" is a successor to the Intel Core 2 brand. The Core i7 identifier was first applied to the initial family of processors codenamed Bloomfield introduced in 2008. In 2009 the name was applied to Lynnfield and Clarksfield models. Prior to 2010, all models were quad-core processors. In 2010, the name was applied to dual-core Arrandale models, and the Gulftown Core i7-980X Extreme processor which has six hyperthreaded cores.
Intel representatives state that the moniker Core i7 is meant to help consumers decide which processor to purchase as the newer Nehalem-based products are released in the future. The name continues the use of the Intel Core brand. Core i7, first assembled in Costa Rica, was officially launched on November 17, 2008 and is manufactured in Arizona, New Mexico and Oregon, though the Oregon (PTD, Fab D1D) plant has already moved to the next generation 32 nm process.

Intel Atom Processor


Intel Atom is a direct successor of the Intel A100 and A110 low-power microprocessors (code-named Stealey), which were built on a 90 nm process, had 512 KB of L2 cache, and ran at 600 MHz/800 MHz with a 3W TDP (Thermal Design Power). Prior to the Silverthorne announcement, outside sources had speculated that Atom would compete with AMD's Geode system-on-a-chip processors, used by the One Laptop per Child project, and other cost- and power-sensitive applications for x86 processors. However, Intel revealed on October 15, 2007 that it was developing another new mobile processor, codenamed Diamondville, for OLPC-type devices.
"Atom" was the name under which Silverthorne would be sold, while the supporting chipset formerly code-named Menlow was called Centrino Atom. Intel's initial Atom press release only briefly discussed "Diamondville" and implied that it too would be named "Atom", strengthening speculation that Diamondville is simply a lower-cost, higher-yielding version of Silverthorne with slightly higher TDPs at slightly lower clock speeds.
At Spring Intel Developer Forum (IDF) 2008 in Shanghai, Intel officially announced that Silverthorne and Diamondville are based on the same microarchitecture. Silverthorne would be called the Atom Z series and Diamondville would be called the Atom N series. The more expensive lower-power Silverthorne parts will be used in Intel Mobile Internet Devices (MIDs) whereas Diamondville will be used in low-cost desktops and notebooks. Several Mini-ITX motherboard samples have also been revealed. Intel and Lenovo also jointly announced an Atom-powered MID called the IdeaPad U8. The IdeaPad U8 weighs 280 g and has a 4.8 in (12 cm) touchscreen, providing better portability than a netbook PC and easier Internet viewing than a mobile phone or PDA.
In April 2008, a MID development kit was announced by Sophia Systems, and the first board, called CoreExpress-ECO, was revealed by the German company LiPPERT Embedded Computers GmbH. Intel also offers Atom-based motherboards.
Intel Atom is the brand name for a line of ultra-low-voltage x86 and x86-64 CPUs (or microprocessors) from Intel, designed in 45 nm CMOS and used mainly in netbooks, nettops, and Mobile Internet Devices (MIDs). On December 21, 2009, Intel announced the next generation of Atom processors, including the N450, with total kit power consumption down 40%.

Nvidia Physx


PhysX is a proprietary realtime physics engine middleware SDK acquired by Ageia (which itself was acquired by Nvidia in February 2008) with the purchase of ETH Zurich spin-off NovodeX in 2004. The term PhysX can also refer to the PPU add-in card designed by Ageia to accelerate PhysX-enabled video games. Video games supporting hardware acceleration by PhysX can be accelerated by either a PhysX PPU or a CUDA-enabled GeForce GPU (which has at least 32 CUDA cores), thus offloading physics calculations from the CPU, allowing it to perform other tasks instead — resulting in a smoother gaming experience and additional visual effects.
Middleware physics engines allow game developers to avoid writing their own code to handle the complex physics interactions possible in modern games.
The PhysX engine and SDK are available for the following platforms:
Apple Mac OS X
Windows
Linux (32-bit)
Nintendo Wii
Sony PlayStation 3
Microsoft Xbox 360
Nvidia provides both the engine and SDK for free to Windows and Linux users and developers. The PlayStation 3 SDK is also freely available due to Sony's blanket purchase agreement.

A physics processing unit (PPU) is a dedicated microprocessor designed to handle the calculations of physics, especially in the physics engine of video games. Examples of calculations involving a PPU might include rigid body dynamics, soft body dynamics, collision detection, fluid dynamics, hair and clothing simulation, finite element analysis, and fracturing of objects. The idea is that specialized processors offload time consuming tasks from a computer's CPU, much like how a GPU performs graphics operations in the main CPU's place.
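Rigid body dynamics, the first workload on that list, is the bread and butter of such hardware. The sketch below is a minimal CPU-side illustration of the kind of per-body work a PPU accelerates in bulk: integrating motion under gravity and resolving a collision against a ground plane. It is a toy example under simple assumptions, not Ageia's actual algorithm.

```python
# Toy rigid-body step: semi-implicit Euler integration plus a ground
# collision with restitution -- the per-body work a PPU would offload.

GRAVITY = -9.81      # m/s^2
RESTITUTION = 0.5    # fraction of speed kept after a bounce (assumed)

def step(pos, vel, dt):
    """Advance one body by dt seconds; bounce off the ground at y = 0."""
    vel += GRAVITY * dt          # integrate velocity first (semi-implicit)
    pos += vel * dt              # then position
    if pos < 0.0:                # collision with the ground plane
        pos = 0.0
        vel = -vel * RESTITUTION
    return pos, vel

def simulate(height, seconds, dt=0.001):
    """Drop a body from `height` metres and run for `seconds`."""
    pos, vel = height, 0.0
    for _ in range(int(seconds / dt)):
        pos, vel = step(pos, vel, dt)
    return pos, vel

if __name__ == "__main__":
    pos, vel = simulate(height=10.0, seconds=5.0)
    print(f"after 5 s: y = {pos:.2f} m")
```

A game runs this kind of loop for hundreds or thousands of bodies per frame, plus collision detection between all pairs, which is why dedicating a parallel processor to it pays off.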
The first PPUs were the SPARTA and HELLAS.
The term was coined by Ageia's marketing to describe their PhysX chip to consumers. Several other technologies in the CPU-GPU spectrum have some features in common with it, although Ageia's solution is the only complete one designed, marketed, supported, and placed within a system exclusively as a PPU.

Thursday, June 24, 2010

ANIMATRONICS

Animatronics is a cross between animation and electronics. Basically, an animatronic is a mechanized puppet, which may be preprogrammed or remotely controlled. The term, originally coined by Walt Disney as "Audio-Animatronics" to describe his mechanized characters, covers a craft that can be traced as far back as Leonardo da Vinci's automaton lion (reportedly built to present lilies to the King of France during one of his visits). It has since developed into a career that may require combined talent in mechanical engineering, sculpting and casting, control technologies, electrical and electronic design, airbrushing, and radio control.
The subject of animatronics, emotional display and recognition has evolved into a major industry and has become more efficient through new technologies. Animatronics is constantly changing due to rapid advancements and trends on both the hardware and software sides of the industry. The purpose of this research was to design and build an animatronic robot that will enable students to investigate current trends in robotics. This paper seeks to highlight the debate and discussion around an engineering challenge that mainly involved secondary-level students.
This paper explores the hardware and software design of animatronics and the emotional face displays of robots. The design experience included artistic design of the robot, selection of actuators, mechanical design, and programming of the animatronic robot. Students were challenged to develop models with the purpose of creating interest in learning Science, Technology, Engineering, and Mathematics.
It is also possible to build your own animatronics using ready-made animatronic kits from companies such as Mister Computers, which require no programming skills, only a working knowledge of Windows.
Animatronics was developed by Walt Disney in the early sixties. Essentially, an animatronic puppet is a figure that is animated by means of electromechanical devices. Early examples were found at the 1964 World's Fair in New York and at Disneyland, where Lincoln, with all the gestures of a statesman, delivered the Gettysburg Address; body language and facial motions were matched to perfection with the recorded speech. Animatronics proved itself as a popular form of entertainment in theme parks and in the cinema industry.
Animatronics is a subset of anthropomorphic robots, which are designed by drawing inspiration from nature. The most notable recent advance in building an anthropomorphic robot is Kismet (developed at MIT), which engages people in expressive face-to-face interaction. Inspired by infant social development, psychology, ethology, and an evolutionary perspective, this work integrates theories and concepts from these diverse scientific viewpoints to enable Kismet to enter into natural and intuitive social interaction with a person, reminiscent of adult-infant exchanges. Kismet perceives a variety of natural social cues from visual and auditory channels, and delivers social signals to people through gaze direction, facial expression, body posture, and vocalization.
There is a great deal of recent research around the world, particularly in Japan, on developing interactive robots with a human face. The development of interactive human-like robots brings this research to the frontiers of artificial intelligence, materials, robotics, and psychology. Machines displaying emotions is a relatively new endeavor, though the dream behind it goes back much further. The entertainment field also overlaps with new research on androids; the term android comes from fiction and refers to a complete mechanical automaton.
An extension of the engineering challenge is to explore the effectiveness of the project's capability to display human emotions, and to design the physical mechanisms that produce realistic human facial movements. The objective of this effort was to design and build an animatronic robot, SSU-1 (Savannah State University-1). The SSU-1 will be controlled by a preprogrammed embedded microcontroller and will create human-like motions for entertainment purposes.
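A preprogrammed animatronic like the SSU-1 typically stores its motion as keyframes (servo angle targets at given times) and interpolates between them during playback. The sketch below shows that idea in Python; the servo names and angles are made up for illustration, and on a real microcontroller the output would drive PWM signals rather than print.

```python
# Keyframe playback for an animatronic face: each keyframe maps a time
# (seconds) to servo angles (degrees). Angles between keyframes are
# linearly interpolated. Servo names and values are illustrative only.

KEYFRAMES = [
    (0.0, {"jaw": 0.0,  "brow": 10.0}),
    (1.0, {"jaw": 30.0, "brow": 10.0}),   # open mouth
    (2.0, {"jaw": 0.0,  "brow": 40.0}),   # close mouth, raise brow
]

def pose_at(t):
    """Return the interpolated servo angles at time t."""
    if t <= KEYFRAMES[0][0]:
        return dict(KEYFRAMES[0][1])
    if t >= KEYFRAMES[-1][0]:
        return dict(KEYFRAMES[-1][1])
    for (t0, a), (t1, b) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)   # fraction of the way to keyframe b
            return {s: a[s] + f * (b[s] - a[s]) for s in a}

if __name__ == "__main__":
    print(pose_at(0.5))   # halfway through the mouth-opening move
```

The microcontroller simply sweeps t forward in small ticks, calls something like pose_at, and writes each angle to its servo, which is why no per-frame programming is needed once the keyframes are authored.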


Tuesday, June 22, 2010

Android OS

Android is Google's operating system for mobile devices. It is a competitor to Apple's iOS for the iPhone.
Technologically, Android includes middleware and key applications, and uses a modified version of the Linux kernel. It was initially developed by Android Inc., a firm later purchased by Google, and is now developed by the Open Handset Alliance. It allows developers to write managed code in the Java language, controlling the device via Google-developed Java libraries.
The Android operating system software stack consists of Java applications running on a Java-based, object-oriented application framework, on top of Java core libraries running on the Dalvik virtual machine, which features JIT compilation. Libraries written in C include the surface manager, the OpenCore media framework, the SQLite relational database management system, the OpenGL ES 2.0 3D graphics API, the WebKit layout engine, the SGL graphics engine, SSL, and Bionic libc. The Android operating system consists of 12 million lines of code, including 3 million lines of XML, 2.8 million lines of C, and 2.1 million lines of Java.
The unveiling of the Android distribution on 5 November 2007 was announced with the founding of the Open Handset Alliance, a consortium of 71 hardware, software, and telecom companies devoted to advancing open standards for mobile devices. Google released most of the Android code under the Apache License, a free software and open source license.
According to NPD Group, unit sales for Android OS smartphones ranked second among all smartphone OS handsets sold in the U.S. in the first quarter of 2010. BlackBerry OS and iOS ranked first and third respectively.

Sunday, June 20, 2010

OPTICAL COMPUTING

With the growth of computing technology, the need for high-performance computing (HPC) has increased significantly. Optics has been used in computing for a number of years, but the main emphasis has been, and continues to be, on linking portions of computers, on communications, or more intrinsically on devices that have some optical application or component (optical pattern recognition, etc.).

Optical computing was a hot research area in the 1980s, but the work tapered off due to materials limitations that prevented optochips from getting small enough and cheap enough to be anything beyond laboratory curiosities. Now optical computers are back, with advances in self-assembled conducting organic polymers that promise super-tiny all-optical chips.

Optical computing technology is, in general, developing in two directions. One approach is to build computers that have the same architecture as present-day computers but use optics, that is, electro-optical hybrids. The other approach is to create a completely new kind of computer that can perform all functional operations in the optical domain. In recent years, a number of devices that could ultimately lead to real optical computers have already been manufactured, including optical logic gates, optical switches, optical interconnections and optical memory.

Current trends in optical computing emphasize communications, for example the use of free-space optical interconnects as a potential solution to the 'bottlenecks' experienced in electronic architectures. Optical technology is one of the most promising, and may eventually lead to new computing applications thanks to faster processing speeds as well as better connectivity and higher bandwidth.

2. NEED FOR OPTICAL COMPUTING

The pressing need for optical technology stems from the fact that today's computers are limited by the time response of electronic circuits. A solid transmission medium limits both the speed and volume of signals, and builds up heat that damages components.

One theoretical limit on how fast a computer can function is given by Einstein's principle that a signal cannot propagate faster than the speed of light. To make computers faster, their components must be smaller, thereby decreasing the distances between them. This has resulted in the development of very large scale integration (VLSI) technology, with smaller device dimensions and greater complexity. The smallest VLSI feature sizes today are about 0.08 um (80 nm). Despite the incredible progress in the development and refinement of these basic technologies over the past decade, there is growing concern that they may not be capable of solving the computing problems ahead. Computer speed has been won by miniaturizing electronic components down to the micron scale, but components are limited not only by the speed of electrons in matter but also by the increasing density of interconnections necessary to link the electronic gates on microchips.
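The scale of this limit is easy to put in numbers. The sketch below computes how far a signal can travel during one clock period; even light in vacuum covers only about 30 cm per nanosecond, and an electrical signal on a chip moves slower still (the on-chip fraction used here is a rough assumption), so at multi-gigahertz rates a signal cannot comfortably cross a large chip within a single cycle.

```python
# How far can a signal travel in one clock tick?

C = 299_792_458.0            # speed of light in vacuum, m/s
ON_CHIP_FRACTION = 0.5       # assumed effective on-chip signal speed (~0.5c)

def distance_per_cycle_cm(clock_hz, fraction=1.0):
    """Distance covered during one clock period, in centimetres."""
    return C * fraction / clock_hz * 100.0

if __name__ == "__main__":
    for ghz in (1, 3, 10):
        d = distance_per_cycle_cm(ghz * 1e9, ON_CHIP_FRACTION)
        print(f"{ghz} GHz: ~{d:.1f} cm per cycle on chip")
```

At a few gigahertz the per-cycle reach shrinks to a few centimetres, comparable to the die itself, which is exactly the interconnect pressure the paragraph describes.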

The optical computer comes as a solution to this miniaturization problem. Optical data processing can perform several operations in parallel, much faster and more easily than electronics can, and this parallelism yields staggering computational power. For example, a calculation that takes a conventional electronic computer more than 11 years to complete could be performed by an optical computer in a single hour. Either way, in an optical computer electrons are replaced by photons, the quanta of electromagnetic radiation that make up light.

3. SOME KEY OPTICAL COMPONENTS FOR COMPUTING

The major breakthroughs on optical computing have been centered on the development of micro-optic devices for data input.

VCSEL (VERTICAL CAVITY SURFACE EMITTING LASER)

A VCSEL (pronounced 'vixel') is a semiconductor laser diode that emits light in a cylindrical beam vertically from the surface of a fabricated wafer, and offers significant advantages over the edge-emitting lasers currently used in the majority of fiber optic communications devices. The principle involved in the operation of a VCSEL is very similar to that of regular lasers.

There are two special semiconductor materials sandwiching an active layer where all the action takes place. But rather than reflective ends, a VCSEL has several layers of partially reflective mirrors above and below the active layer. Layers of semiconductors with differing compositions create these mirrors, and each mirror reflects a narrow range of wavelengths back into the cavity in order to cause light emission at just one wavelength.

CLOCKLESS CHIPS

How fast is your personal computer? When people ask this question, they are typically referring to the frequency of a minuscule clock inside the computer, a crystal oscillator that sets the basic rhythm used throughout the machine. In a computer with a speed of one gigahertz, for example, the crystal "ticks" a billion times a second. Every action of the computer takes place in tiny steps, each a billionth of a second long. A simple transfer of data may take only one step; complex calculations may take many steps. All operations, however, must begin and end according to the clock's timing signals.

The use of a central clock also creates problems. As speeds have increased, distributing the timing signals has become more and more difficult. Present-day transistors can process data so quickly that they can accomplish several steps in the time that it takes a wire to carry a signal from one side of the chip to the other. Keeping the rhythm identical in all parts of a large chip requires careful design and a great deal of electrical power. Wouldn't it be nice to have an alternative?

The clockless approach, which uses a technique known as asynchronous logic, differs from conventional computer circuit design in that the switching on and off of digital circuits is controlled individually by specific pieces of data rather than by a tyrannical clock that forces all of the millions of circuits on a chip to march in unison. It overcomes the disadvantages of a clocked circuit, such as slow speed, high power consumption and high electromagnetic noise. For these reasons, clockless technology is expected to drive the majority of electronic chips in the coming years.
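The request/acknowledge handshake at the heart of asynchronous logic can be sketched in a few lines. Below, two pipeline stages pass data only when the producer raises a request and the consumer answers with an acknowledge; there is no global clock, just local agreement between neighbours. This is a software caricature of the idea, not a circuit design.

```python
# A two-stage self-timed pipeline modelled with an explicit
# request/acknowledge handshake instead of a global clock.

class Stage:
    def __init__(self, func):
        self.func = func
        self.data = None
        self.req = False     # "I have data ready for you"

    def offer(self, value):
        """Producer side: compute, latch the result, raise the request."""
        self.data = self.func(value)
        self.req = True

    def take(self):
        """Consumer side: acknowledge, so data moves and the request drops."""
        assert self.req, "handshake violated: no request pending"
        self.req = False
        value, self.data = self.data, None
        return value

def run_pipeline(inputs):
    double = Stage(lambda x: 2 * x)
    inc = Stage(lambda x: x + 1)
    results = []
    for x in inputs:
        double.offer(x)               # stage 1 raises req when its data is ready
        inc.offer(double.take())      # stage 2 acks, then raises its own req
        results.append(inc.take())    # the environment acks the final stage
    return results

if __name__ == "__main__":
    print(run_pipeline([1, 2, 3]))
```

Notice that each value advances exactly when both neighbours agree, so a fast stage simply waits on its slower neighbour rather than on a chip-wide worst-case clock period.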

DIAMOND CHIP

Electronics without silicon sounds unbelievable, but it may come true with the evolution of the diamond, or carbon, chip. Today we use silicon for the manufacture of electronic chips, but silicon has many disadvantages in power electronics applications, such as bulky size and slow operating speed. Carbon, silicon and germanium belong to the same group in the periodic table: they have four valence electrons in their outer shell, and pure silicon and germanium are semiconductors at normal temperature. In the early days both were widely used for the manufacture of electronic components, but germanium was later found to have many disadvantages compared to silicon, such as large reverse current and less stability over temperature, so the industry focused on developing electronic components using silicon wafers.

Researchers have now found that carbon is more advantageous than silicon. By using carbon as the manufacturing material, we can achieve smaller, faster and stronger chips, and smaller prototypes of carbon chips have already been made. A major component made from carbon is the carbon nanotube (CNT), which could become a key building block of microprocessors in the coming era.
Researchers have also grown crystalline diamond film that could produce more resilient semiconductor chips than those made from silicon. Until now, synthetic diamond has proved a poor semiconducting material: its microscopic crystals are a disorderly hodgepodge whose edges are not evenly aligned, impeding the flow of current. Schreck and his colleagues have discovered that by growing the diamond film on a surface of iridium, instead of on silicon, they can keep its grain boundaries aligned. Adding atoms of boron or nitrogen enables the diamond film to conduct electricity. Manufacturers plan to build a diamond chip that can withstand temperatures of 500 C, compared to only about 150 C for silicon chips. Such chips would be most useful in devices located near hot-burning engines, such as those used in automobiles or airplanes.

WHAT IS IT?
In a single definition, a diamond chip, or carbon chip, is an electronic chip manufactured on a diamond-structured carbon wafer, or more generally any electronic component manufactured using carbon as the wafer. The major carbon component is the carbon nanotube (CNT), a nano-scale structure made of carbon that has many unique properties.

HOW IS IT POSSIBLE?
Pure diamond-structured carbon is non-conducting in nature. In order to make it conducting, we have to perform a doping process, using boron as the p-type doping agent and nitrogen as the n-type doping agent. The doping process is similar to that used in silicon chip manufacturing, but it takes more time because it is very difficult to diffuse dopants through the strongly bonded diamond structure. The carbon nanotube (CNT), by contrast, is already a semiconductor.

ADVANTAGES OF DIAMOND CHIP

1 SMALLER COMPONENTS ARE POSSIBLE

As a carbon atom is smaller than a silicon atom, it is possible to etch much finer lines through diamond-structured carbon. We could realize a transistor one-hundredth the size of a silicon transistor.

2 IT WORKS AT HIGHER TEMPERATURE

Diamond is a very strongly bonded material and can withstand higher temperatures than silicon. At very high temperatures the crystal structure of silicon will collapse, but a diamond chip can function well at these elevated temperatures. Diamond is also a very good conductor of heat, so any heat dissipated inside the chip is transferred very quickly to the heat sink or other cooling mechanism.

3 FASTER THAN SILICON CHIP

A carbon chip works faster than a silicon chip. The mobility of electrons in doped diamond-structured carbon is higher than in silicon: because a silicon atom is larger than a carbon atom, the chance of electrons colliding with atoms is greater in silicon, whereas the smaller carbon atoms reduce the chance of collision. The charge-carrier mobility is therefore higher in doped diamond-structured carbon than in silicon.

4 LARGER POWER HANDLING CAPACITY

Silicon is used for power electronics applications, but it has many disadvantages, such as bulky size, slow operating speed, low efficiency and a smaller band gap, and at very high voltages the silicon structure will collapse. Diamond has a strongly bonded crystal structure, so a carbon chip can work in high-power environments; it is estimated that a carbon transistor could deliver one watt of power at 100 GHz. Nowadays all power electronic circuits use relays or MOSFET interconnection circuits (inverter circuits) to interface a low-power control circuit with a high-power circuit. With a carbon chip this interface is not needed; the high-power circuit can be connected directly to the diamond chip.

Wireless Charging Through Microwaves!


With the mobile phone becoming a basic part of life, recharging its battery has always been a problem. Mobile phones vary in their talk time and battery standby according to their manufacturer and batteries, but all of them, irrespective of manufacturer and battery, must be put on charge after the battery has drained. The main objective of this proposal is to make recharging of the mobile phone independent of its manufacturer and battery maker. This paper proposes that mobile phones be recharged automatically as you talk on them, using microwaves. A microwave signal is transmitted from the transmitter along with the message signal, using a special kind of antenna called a slotted waveguide antenna, at a frequency of 2.45 GHz. Only minimal additions have to be made to the handset: a sensor, a "rectenna" and a filter. With this setup, the need for separate chargers is eliminated and charging becomes universal; the more you talk, the higher your charge level. Manufacturers could then remove talk time and battery standby from their phone specifications.
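Whether such a scheme is practical comes down to the link budget. The sketch below applies the standard Friis free-space path loss formula at the 2.45 GHz frequency named in the proposal; the transmit power and distances are assumed numbers for illustration, and real-world losses (antenna gains and efficiency, obstructions, rectenna conversion efficiency) would change the picture further.

```python
import math

# Free-space path loss at 2.45 GHz, the frequency named in the proposal.
C = 299_792_458.0          # speed of light, m/s
FREQ_HZ = 2.45e9

def path_loss_db(distance_m):
    """Friis free-space path loss between isotropic antennas, in dB."""
    wavelength = C / FREQ_HZ
    return 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)

def received_dbm(tx_dbm, distance_m):
    """Power at the receiver given transmit power in dBm."""
    return tx_dbm - path_loss_db(distance_m)

if __name__ == "__main__":
    # Assumed: a 30 dBm (1 W) transmitter at the base station.
    for d in (10, 100, 1000):
        print(f"{d:5d} m: loss {path_loss_db(d):5.1f} dB, "
              f"received {received_dbm(30.0, d):6.1f} dBm")
```

The loss grows by 20 dB for every tenfold increase in distance, so the harvested power falls off steeply with range; this is the central engineering constraint any rectenna-based charger has to work within.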

4G TECHNOLOGY



The major driver of change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving extensions in the use of mobiles. Starting out from speech-dominated services, we are now experiencing massive growth in applications involving SMS (Short Message Service) together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television and provide mobility to facilities previously only available on one network.

The development from first generation analogue systems (1985) to second generation (2G) digital GSM (1992) was the heart of the digital revolution. But much more than this it was a huge success for standardisation emanating from Europe and gradually spreading globally.

However, world-wide roaming still presents some problems, with pockets of US standards IS-95 (a code division multiple access [CDMA] rather than a time division multiple access [TDMA] digital system) and IS-136 (a TDMA variant) still entrenched in some countries. Extensions to GSM (2G) via GPRS (General Packet Radio Service) and EDGE (Enhanced Data rates for GSM Evolution) (E-GPRS) as well as WAP and i-mode (so called 2.5G) will allow the transmission of higher data rates as well as speech prior to the introduction of 3G.

Mobile systems comprise a radio access together with a supporting core network. In GSM the latter is characterised by MAP (Mobile Applications Protocol), which provides the mobility management features of the system.

GSM was designed for digital speech services or for low bit rate data that could fit into a speech channel (e.g. 9.6kbit/s). It is a circuit- rather than packet-oriented network and hence is inefficient for data communications. To address the rapid popularity increase of Internet services, GPRS is being added to GSM to allow packet (Internet Protocol [IP]) communications at up to about 100kbit/s.

Third generation (3G) systems were standardised in 1999. These include IMT-2000 (International Mobile Telecommunications 2000), which was standardised within ITU-R and includes the UMTS (Universal Mobile Telecommunications System) European standard from ETSI (European Telecommunications Standards Institute), the US derived CDMA 2000 and the Japanese NTT DoCoMo W-CDMA (Wideband Code Division Multiple Access) system. Such systems extend services to (multirate) high-quality multimedia and to convergent networks of fixed, cellular and satellite components. The radio air interface standards are based upon W-CDMA (UTRA FDD and UTRA TDD in UMTS, multicarrier CDMA 2000 and single carrier UWC-136 on derived US standards). The core network has not been standardised, but a group of three—evolved GSM (MAP), evolved ANSI-41 (from the American National Standards Institute) and IP-based— are all candidates. 3G is also about a diversity of terminal types, including many non-voice terminals, such as those embedded in all sorts of consumer products. Bluetooth (another standard not within the 3G orbit, but likely to be associated with it) is a short-range system that addresses such applications. Thus services from a few bits per second up to 2Mbit/s can be envisioned.
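The generational jump in data rates is easiest to appreciate as transfer times. Using the figures quoted above (9.6 kbit/s for a GSM speech-channel data call, roughly 100 kbit/s for GPRS, and 2 Mbit/s peak for 3G), the sketch below computes how long a 5 MB file would take on each; the file size is an arbitrary example and the rates are idealised peaks with no protocol overhead.

```python
# Time to move a 5 MB file at the data rates quoted for each generation.

RATES_KBITPS = {
    "GSM data (2G)": 9.6,
    "GPRS (2.5G)": 100.0,
    "3G (peak)": 2000.0,
}

def transfer_seconds(size_mb, rate_kbitps):
    """Idealised transfer time: no protocol overhead, no retransmits."""
    size_kbit = size_mb * 8 * 1000      # MB -> kbit (decimal megabytes)
    return size_kbit / rate_kbitps

if __name__ == "__main__":
    for name, rate in RATES_KBITPS.items():
        t = transfer_seconds(5, rate)
        print(f"{name:14s}: {t / 60:7.1f} minutes")
```

The same file drops from over an hour on a 2G data call to under half a minute at the 3G peak rate, which is the qualitative leap that makes multimedia services plausible at all.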


Chip Morphing Technology

International Business Machines Friday unveiled its new “chip morphing technology” that would, as the company said, enable a new class of semiconductor products that can monitor and adjust their functions to improve their quality, performance and power consumption without human intervention.

The patented technology, called “eFUSE”, combines unique software algorithms and microscopic electrical fuses to produce chips that can regulate and adapt their own actions in response to changing conditions and system demands. By dynamically sensing that a chip needs a “tune-up”, eFUSE can alter the configuration and efficiency of circuitry to enhance performance or avoid a potential problem. This autonomic capability is expected to change the way chips are designed, manufactured and integrated into computers, cell phones, consumer electronics and other products.

“eFUSE reroutes chip logic, much the way highway traffic patterns can be altered by opening and closing new lanes,” said Dr. Bernard Meyerson, IBM Fellow, vice president and chief technologist, IBM Systems and Technology Group.

eFUSE is part of a built-in self-repair system that constantly monitors a chip’s functionality. If an imperfection is detected, this innovative technology “instinctively” initiates corrective actions by tripping inexpensive, simple electrical fuses that are designed into the chip at no additional cost. The activated fuses help the chip control individual circuit speed to manage power consumption and repair unexpected, and potentially costly flaws. If the technology detects that the chip is malfunctioning because individual circuits are running too fast or too slow, it can “throttle down” these circuits or speed them up by controlling the appropriate local voltage.
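The rerouting idea can be illustrated with a toy model. Below, a bank of redundant circuit lanes carries a one-way fuse map: blowing a fuse permanently retires a lane, and the routing logic simply works around it. This is an illustrative abstraction of the concept only, not IBM's eFUSE implementation, and all names here are invented for the example.

```python
# Toy model of fuse-based self-repair: blowing a fuse retires a lane
# for good, and routing only uses lanes whose fuses are intact.

class FuseBank:
    def __init__(self, lanes):
        self.blown = [False] * lanes   # fuse state is one-way, like hardware

    def blow(self, lane):
        """Program the fuse for a faulty lane (irreversible)."""
        self.blown[lane] = True

    def active_lanes(self):
        """Physical lanes whose fuses are still intact."""
        return [i for i, b in enumerate(self.blown) if not b]

    def route(self, logical_lane):
        """Map a logical lane onto the nth surviving physical lane."""
        survivors = self.active_lanes()
        if logical_lane >= len(survivors):
            raise RuntimeError("not enough good lanes left")
        return survivors[logical_lane]

if __name__ == "__main__":
    bank = FuseBank(lanes=4)
    bank.blow(1)                 # self-test found physical lane 1 faulty
    print(bank.route(1))         # logical lane 1 now lands on a good lane
```

The key property mirrored here is that the repair is permanent and transparent: after the fuse is blown, every later lookup routes around the bad lane with no further intervention, which is what lets the fix survive packaging and shipping.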

The morphing technology also will optimize and tailor the performance and capabilities of a chip to meet an individual customer’s product needs in response to changing end-user or software demand. Customers further benefit from the versatility of eFUSE as the morphing can be repeated several times – even after the chip has been packaged and shipped in a product.

Invented and refined by IBM scientists and engineers, eFUSE achieves a goal pursued by chip designers for many years by putting to positive use the phenomenon of “electromigration”, the company said.

This phenomenon has traditionally been detrimental to chip performance and was avoided — even at significant cost and effort. IBM has perfected a technique that harnesses electromigration and uses it to program a fuse without damaging other parts of the chip. Previous implementations of on-chip fuse technology in the industry often involved rupturing fuses, which resulted in unwanted performance and reliability problems.

Both versatile and adaptable, eFUSE is being implemented to support a variety of applications, such as high-performance microprocessors based on IBM's Power Architecture, including Power5 and other chips used in IBM eServer systems, as well as low-power IBM silicon germanium (SiGe) chips. eFUSE-enabled chips also are available to IBM foundry customers.

IBM also is leveraging the self-managing function of eFUSE in all 90nm custom chips, including those designed with IBM’s advanced embedded DRAM technology.

eFUSE is technology independent, does not require the introduction of new materials, tools or processes, and is in production today at IBM’s 300mm facility in East Fishkill, New York, and its 200mm plant in Burlington, Vermont.

CHAMELEON CHIP

Today's microprocessors sport a general-purpose design which has its own advantages and disadvantages.

Advantage: One chip can run a range of programs. That's why you don't need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
Disadvantage: For any one application, much of the chip's circuitry isn't needed, and the presence of those "wasted" circuits slows things down.

Suppose, instead, that the chip's circuits could be tailored specifically for the problem at hand--say, computer-aided design--and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users.

So computer scientists are hatching a novel concept that could increase number-crunching power--and trim costs as well. Call it the chameleon chip.

Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs).

An FPGA is covered with a grid of wires. At each crossover, there's a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime--and even software that can map out circuitry that's optimized for specific problems.
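The grid-of-wires idea can be modelled as a small matrix of programmable crosspoints. The Python sketch below is a toy model only: the one-bit-per-crossover "bitstream" format is invented for illustration and does not correspond to any real FPGA vendor's format. A configuration string opens or closes each crossover switch, determining which horizontal wire connects to which vertical wire, and loading a new string "rewires" the chip.

```python
# Toy model of an FPGA-style crossbar: a grid of wires with a
# programmable switch at every crossover. The bitstream format
# here is invented purely for illustration.

class SwitchMatrix:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # All switches start open (no connections).
        self.closed = [[False] * cols for _ in range(rows)]

    def program(self, bitstream):
        """Load a configuration: one bit per crossover, row-major."""
        assert len(bitstream) == self.rows * self.cols
        for i, bit in enumerate(bitstream):
            self.closed[i // self.cols][i % self.cols] = (bit == "1")

    def connected(self, row_wire, col_wire):
        """Is this horizontal wire routed to this vertical wire?"""
        return self.closed[row_wire][col_wire]

# Program a 3x3 matrix so wire 0 -> 2, wire 1 -> 0, wire 2 -> 1 ...
matrix = SwitchMatrix(3, 3)
matrix.program("001" "100" "010")
assert matrix.connected(0, 2) and matrix.connected(1, 0)

# ... then "rewire" it on the fly with a second bitstream.
matrix.program("100" "010" "001")   # reconfigure: identity routing
```

The reprogramming step is the whole point: the same silicon implements a different circuit after each `program()` call, which is what lets mapping software tailor the chip to a specific problem.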

The chips still won't change colors. But they may well color the way we use computers in years to come. The chameleon chip is a fusion of custom integrated circuits and programmable logic. For highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than lots of things averagely, have traditionally been used. Now, with field-programmable chips, we have chips that can be rewired in an instant. Thus the benefits of customization can be brought to the mass market.

A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. Such a chip could be called a "chip on demand." In practical terms, this ability translates to immense flexibility in device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function.
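The "chip on demand" idea, where one device downloads a configuration and takes on a new role, can be sketched as a processor object that swaps loadable configurations. Everything below (the class, the pipeline names, the data) is hypothetical; a real reconfigurable processor would load a hardware bitstream, not a Python function.

```python
# Hypothetical sketch of a "chip on demand": the same processor
# object is reconfigured for different roles by loading a new
# "configuration" (here just a function; in hardware, a bitstream).

def camera_pipeline(data):
    return f"encoded {len(data)} pixels"

def recorder_pipeline(data):
    return f"recorded {len(data)} samples"

class ReconfigurableProcessor:
    def __init__(self):
        self.pipeline = None

    def reconfigure(self, pipeline):
        """Swap in a new configuration; can be repeated at any time."""
        self.pipeline = pipeline

    def run(self, data):
        if self.pipeline is None:
            raise RuntimeError("no configuration loaded")
        return self.pipeline(data)

chip = ReconfigurableProcessor()
chip.reconfigure(camera_pipeline)      # the device acts as a camera
out1 = chip.run([0] * 640)
chip.reconfigure(recorder_pipeline)    # same device, now a recorder
out2 = chip.run([0] * 44100)
```

The key property the sketch captures is that reconfiguration is repeatable after "shipping": the object is built once but can take on a new role any number of times.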

Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players.

Another variant, programmable logic chips, are equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These are more flexible than the specialized DSP chips but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest - but also the least flexible - of all the options.

Friday, June 18, 2010

Tongue Drive System

Have you ever thought that computers could be operated with the human tongue? Interesting? Scientists have developed a revolutionary new system to help individuals with disabilities control computers, wheelchairs and other devices simply by using their tongue. This technology is called the Tongue Drive System.
Engineers suggest that this technology will be very helpful to people suffering from serious disabilities, allowing them to become more active and lead independent lives. The person has to move only his tongue, which matters greatly when a person has paralyzed limbs. A tiny magnet, about the size of a grain of rice, is attached to the person's tongue by implantation, piercing or adhesion. This technology will help the disabled person use a computer mouse or a powered wheelchair.

Scientists chose the tongue to control the system because, unlike the feet and the hands, which are connected to the brain through the spinal cord, the tongue and the brain have a direct connection through a cranial nerve. Even when a person has a severe spinal cord injury or other damage, the tongue usually remains mobile enough to activate the system.
The motions of the magnet attached to the tongue are detected by a number of magnetic field sensors installed on a headset worn outside the mouth or on an orthodontic brace inside it. The signals coming from the sensors are wirelessly sent to a portable computer that is placed on the wheelchair or attached to the individual's clothing.

The Tongue Drive system is designed to recognize a wide array of tongue movements and to map specific movements to certain commands, taking into account the user's oral anatomy, abilities and lifestyle. The Tongue Drive system is a touch-free, wireless and non-invasive technology that requires no surgery for its operation.
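The sensor-to-command chain can be sketched in software. The axis convention, threshold and command names below are invented for illustration; the real system infers tongue position from several magnetometers and a trained, per-user mapping rather than a fixed rule.

```python
# Illustrative mapping from a magnetic-sensor reading to a wheelchair
# command. Axes, threshold and command names are all hypothetical.

THRESHOLD = 0.5   # minimum field deflection to register a command

def classify(x, y):
    """Map a 2-D field deflection (from the tongue magnet) to a command."""
    if abs(x) < THRESHOLD and abs(y) < THRESHOLD:
        return "stop"                         # tongue at rest
    if abs(x) >= abs(y):
        return "right" if x > 0 else "left"   # dominant sideways motion
    return "forward" if y > 0 else "backward" # dominant front/back motion

assert classify(0.1, 0.2) == "stop"
assert classify(0.9, 0.1) == "right"
assert classify(-0.8, 0.3) == "left"
assert classify(0.2, 0.9) == "forward"
```

A dead zone around the rest position (the `stop` case) is essential in any such interface, so that ordinary tongue movement during speech or swallowing does not trigger spurious commands.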


Thursday, June 17, 2010


Seminar topics for B-Tech and BE students!


Hi dear friends, this post is created for Computer Science & Information Technology engineering and degree students. Make use of the blog for preparing your seminar. This blog will give you proper guidance for preparing your seminar and presenting it in a beautiful and in-depth manner. The only thing you have to do is visit this blog regularly to gain knowledge about the latest technologies and happenings.
"Knowledge is an asset... Achieve it!"

All you have to do is visit this blog regularly. I will update the blog with new posts each and every day. I hope the information in the blog will help you.
For your convenience, further posts regarding the seminar topics will be given related post names so that you can identify each post quickly and easily.