
26 January 2015

Entangled Photon Emitter That Fits in Computer Chip Developed


Researchers at the Università degli Studi di Pavia in Italy have developed a component that can generate a steady stream of entangled photons. The device, called a micro-ring resonator, is tiny enough to fit on a standard silicon computer chip.

Entanglement may become the foundation of a new way to connect devices and transmit information. Entanglement is the mysterious connection between two particles whose states remain linked even when they are separated, regardless of distance. Einstein famously described this property as 'spooky action at a distance'.

In theory, even if the two photons are galaxies apart, they remain linked to each other.

Current entangled photon emitters are too big to fit in a computer chip. The development of this micro-ring resonator is a step toward secure communications and faster computers.

The resonators are loops etched onto the silicon chip. They can corral and re-emit particles of light as illustrated in the image above of the silicon ring resonator with its access waveguide. In the image, the green wave at the input represents the laser pump, the red and blue wave-packets at the output represent the generated photon pairs, and the infinity symbol linking the two outputs indicates the entanglement between the pair of photons.

26 August 2013

Magnetic Mystery Behind Lanthanum Aluminate and Strontium Titanate Could Combine Computer Processors with Memory Chips


Scientists have theorized how two non-conductive and non-magnetic materials, lanthanum aluminate and strontium titanate, become conductive and magnetic when combined. This phenomenon could lead to the development of computer memory with data-processing capabilities.

Scientists believe that a magnetic phenomenon called "local moments" explains why lanthanum aluminate and strontium titanate become conductive and magnetic when placed together. With these two properties, the combined material has the ability to process binary data (like a computer processor) and to store it (like a memory chip) in a single device: a computer processor that can store data.

A semiconductor is a material whose conductivity lies midway between that of a conductor, like metal, and a non-conductor, such as glass. Depending on the flow of electrons in the semiconductor, it can be either on (1), where electrons flow freely, or off (0), where electrons cannot pass through. Data streamed through these semiconductors can be permanently stored on magnetic devices.

14 August 2013

Ordinary Cars Go Green and Reduce Carbon Footprint


Product Advancements Let Drivers Go Green Without a New Car.

With gas prices soaring, environmental concerns growing, and technology advancing in the auto and energy industries, ecological cars (eco cars) are slowly entering the mainstream market. Eco cars are vehicles that do not rely solely on fossil fuels for energy.

The most popular energy source for eco cars is electricity. Battery packs (usually lithium-ion cells) power an electric motor that drives the car. Hybrids pair a gasoline engine with a battery-powered electric motor, alternating between the two depending on which is the more efficient choice. There are also pure electric vehicles, but they are still limited in range and use.

These types of vehicles have come into focus because of the movement to reduce the global carbon footprint. A carbon footprint can be described simply as the greenhouse gas emissions produced in an area or by an event, product, or person. One of the largest sources of emissions is driving fossil-fueled vehicles (gasoline use). Reducing the carbon footprint from driving can be done primarily by driving less or, for the moment, by developing more environment-friendly cars.

Aside from the engine and power source, there are other ways for drivers to go green. Taking alternative transportation (public transit, bicycles, walking) when the distance is not too far, scheduling trips during off-peak traffic hours, and using more environment-friendly products can all benefit the green movement and reduce greenhouse gas emissions.

21 December 2012

ESO Installs Supercomputer At ALMA Facility - The ALMA Correlator



The ALMA correlator
One of the fastest supercomputers in existence today, the ALMA correlator, has been fully installed and tested at the ALMA astronomical facility in Chile. It has over 134 million processors and performs up to 17 quadrillion operations per second.

The Atacama Large Millimeter/submillimeter Array (ALMA) is a ground-based telescope located on the Chajnantor plateau in the Chilean Andes. It has 66 high-precision antennas, spread over distances of up to 16 kilometres. The facility is partially operational and will be fully completed by March 2013.

ALMA studies light emitted by some of the coldest objects in space. Since these objects emit light that is hard to detect, ALMA observes wavelengths between infrared light and radio waves, known as millimetre and submillimetre radiation. The telescope can detect light emitted by objects only a few degrees above absolute zero.
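For a sense of the band involved, here is a quick back-of-the-envelope sketch in Python, using only the standard relation f = c/λ (the sample wavelengths are illustrative, not figures from the article):

```python
# Convert observing wavelengths to frequencies with f = c / wavelength.
C = 299_792_458  # speed of light, m/s

for wavelength_mm in (10, 1, 0.3):  # from millimetre down to submillimetre
    f_hz = C / (wavelength_mm * 1e-3)
    print(f"{wavelength_mm} mm  ->  {f_hz / 1e9:.0f} GHz")
# 10 mm -> 30 GHz, 1 mm -> 300 GHz, 0.3 mm -> 999 GHz
```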

The telescope can help astronomers study the chemical and physical conditions in the molecular clouds where stars are produced. These clouds are made up of dense gas and dust, which are dark and obscured in visible light, much like clouds in the sky. By detecting the light these objects emit at millimetre and submillimetre wavelengths, ALMA can collect data from them.

The installation of the ALMA correlator will increase the sensitivity and image quality of the facility's observations of outer space.

03 December 2012

Applying Magnetism and Microwaves To Bismuth Atoms As Qubits For Quantum Computers


Researchers are looking at bismuth atoms and their nuclei as a source of qubits for a quantum computer. Using magnetism and microwaves, they find that this approach may be the next step toward practical quantum computers.

Moore's law states that the processing power of a computer will continue to double every eighteen months. For the last 50 years the law has held true. But technology is fast approaching the limit to the number of transistors that can fit on a silicon chip.

The current record for the most transistors put on a chip is 2 billion.

By the year 2030, Moore's law would have brought computers to the atomic level. This would be the time of quantum computers. The speed and computing power of a quantum computer far surpasses that of present day computers (also called classical computers).
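A rough sketch of the arithmetic behind that projection, assuming the 18-month doubling period and the 2-billion-transistor figure cited above:

```python
# Compound Moore's law forward: transistor count doubles every 18 months.
count = 2e9   # record transistor count cited in the post (circa 2012)
year = 2012.0

while year < 2030:
    count *= 2      # one doubling
    year += 1.5     # ...per 18 months

print(f"By {year:.0f}: roughly {count:.2e} transistors per chip")
# By 2030: roughly 8.19e+12 transistors per chip
```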

21 September 2012

3D Screens On Mobile Devices Using Tilt Displays Being Developed


Mobile devices such as smartphones and tablets are experiencing tremendous growth in sales. Their use in various applications is gaining popularity.

Manufacturers vie for attention and market share by developing features and technology guaranteed to catch the consumer's eye. Apple alone sold 172 million of its mobile devices (iPad, iPhone and iPod) in 2011.

One notable feature users look at is the display screen. Screen size is important since bigger screens can display more data and bigger images. Standard smartphone screens range from 3.5 inches to 5.5 inches (Samsung Galaxy Note II). An iPad screen stands at 9.7 inches.

There are also different types of screens, ranging from the widely used TFT-LCD to the various flavors of Super AMOLED. Even Apple has marketed its own display under the name "Retina Display", essentially a screen with a pixel density so high that individual pixels are indistinguishable at normal viewing distance.

But aside from screen size and screen type, it seems another screen feature for mobile devices may be on the horizon: 3D screens.

3D screens are already being featured in high-definition LCD screens. To capitalize on this, movies are being released in Blu-ray and DVD formats that support 3D viewing. Most require the use of 3D glasses to fully appreciate the experience.

3-D display screen on mobile devices could be on the horizon

Imagine a mobile device that visually displays a street map and whose screen physically mutates to show the hilly terrain and buildings. A team of scientists, led by the University of Bristol, have developed a new type of screen display that not only moves but also physically tilts.

Researchers from the University's Department of Computer Science will present the display, called Tilt Displays, at MobileHCI 2012, the world's leading conference in the field of Human Computer Interaction (HCI), held in San Francisco, USA.

Tilt Displays is a display surface about half the size of a standard tablet, such as an iPad. It consists of a collection of individual display components each of which can tilt along one or more axes and move vertically up and down. This ability to tilt along multiple axes distinguishes it from previous actuatable displays.

23 August 2012

Non-Volatile Ferroelectric Memory Developed To Address Upkeep of Cloud Computing


Ferroelectricity is the property of some materials to maintain a permanent electric polarization. Ferroelectric RAM (Random Access Memory), or FRAM, is a memory technology that combines the technologies behind RAM and ROM (Read Only Memory).

FRAM is high performance, non-volatile memory that can be used in many applications. Smart cards, security, metering and RFID are some of the applications that can benefit from FRAM.

The major advantages of ferroelectric RAM over EEPROM and Flash are its speed, endurance, low power consumption, security, and tolerance for radiation. In lab tests, FRAM has proven to process information faster than other RAM technologies. It also has high endurance, with an estimated lifetime of 317 years.

Ferroelectric materials could bring down cost of cloud computing and electronic devices

A new class of organic materials developed at Northwestern University boasts a very attractive but elusive property: ferroelectricity. The crystalline materials also have a great memory, which could be very useful in computer and cellphone memory applications, including cloud computing.

A team of organic chemists discovered they could create very long crystals with desirable properties using just two small organic molecules that are extremely attracted to each other. The attraction between the two molecules causes them to self-assemble into an ordered network -- order that is needed for a material to be ferroelectric.

The starting compounds are simple and inexpensive, making the lightweight materials scalable and very promising for technology applications. In contrast, conventional ferroelectric materials -- special varieties of polymers and ceramics -- are complex and expensive to produce. The Northwestern materials can be made quickly and are very versatile.

In addition to computer memory, the discovery of the Northwestern materials could potentially improve sensing devices, solar energy systems and nanoelectronics. The study will be published Aug. 23 by the journal Nature.

"This work will serve as a guide for designing these materials and using ferroelectricity in new ways," said Samuel I. Stupp, Board of Trustees Professor of Chemistry, Materials Science and Engineering, and Medicine. He is a senior author of the paper. "Our molecular design enables us to invent a nearly infinite library of ferroelectric materials."

Ferroelectric materials exhibit spontaneous electric polarization (making one side of the material positive and the opposite side negative) that can be reversed by the application of an electric field (from a battery, for example). These two possible orientations make the materials attractive to researchers developing computer memory because one orientation could correspond to a 1 and the other to a 0. (Computer memory stores information in 1's and 0's.)
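A toy model can make the memory analogy concrete. The sketch below is purely illustrative; the cell class, field values, and coercive threshold are invented for the example and are not from the study:

```python
# Toy ferroelectric memory cell: polarization is +1 or -1 and flips only
# when an applied field exceeds a coercive threshold (all values invented).
class FerroCell:
    COERCIVE_FIELD = 1.0  # arbitrary units

    def __init__(self):
        self.polarization = +1  # one of the two stable orientations

    def apply_field(self, field):
        if abs(field) >= self.COERCIVE_FIELD:
            self.polarization = +1 if field > 0 else -1

    def read_bit(self):
        # One orientation corresponds to 1, the other to 0.
        return 1 if self.polarization > 0 else 0

cell = FerroCell()
cell.apply_field(-2.0)    # strong negative field writes a 0
print(cell.read_bit())    # -> 0
cell.apply_field(+2.0)    # strong positive field writes a 1
print(cell.read_bit())    # -> 1 (the state persists with no field applied)
```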

"The material's behavior is complex, but the superstructure is simple," said Sir Fraser Stoddart, Board of Trustees Professor of Chemistry in the Weinberg College of Arts and Sciences at Northwestern. He also is a senior author. "It is the superstructure that gives the material its desirable properties."

30 July 2012

New Process In Isolating Graphene Leads To Next Generation Devices



Graphene is a material derived from graphite. It is two-dimensional and consists of a single layer of carbon atoms arranged in a honeycomb structure that resembles chicken wire.

Graphene is the thinnest known material and also the strongest. It conducts electricity as efficiently as copper and outperforms all other materials as a conductor of heat. Graphene is almost completely transparent, yet so dense that even helium, the smallest atom, cannot pass through it.

Andre Geim and Konstantin Novoselov (the two scientists who successfully isolated the material and won the 2010 Nobel Prize for it) defined graphene as "a flat monolayer of carbon atoms tightly packed into a two-dimensional (2D) honeycomb lattice, and is a basic building block for graphitic materials of all other dimensionalities. It can be wrapped up into 0D fullerenes, rolled into 1D nanotubes or stacked into 3D graphite."

Graphene is a material that has the potential to create foldaway mobile phones, wallpaper-thin lighting panels and the next generation of aircraft.

A graphene circuit can operate at high frequencies of up to 10GHz (10 billion cycles per second), and at temperatures of up to 127°C. It is the most transparent, strongest and most conductive material on Earth.

Cutting the graphene cake

The method sandwiches individual graphene sheets between insulating layers to produce electrical devices with unique new properties, and it could open up a new dimension of physics research.

Writing in Nature Materials, the scientists show that a new side-view imaging technique can be used to visualize the individual atomic layers of graphene within the devices they have built. They found that the structures were almost perfect even when more than 10 different layers were used to build the stack.

This surprising result indicates that the latest techniques of isolating graphene could be a huge leap forward for engineering at the atomic level.

This development gives more weight to graphene's suitability as a major component in the next generation of computer chips.

The researchers' side-view imaging approach works by first extracting a thin slice from the centre of the device. This is similar to cutting through a rock to reveal its geological layers, or slicing into a chocolate gateau to reveal the individual layers of icing.

22 June 2012

Phase Change Materials (PCM) and Nanotechnology For Faster and Efficient Computer Memory


What is a phase change material?

A phase change material (PCM) is a substance that changes from one state of matter to another at a certain temperature. It can be compared to water: when boiled (at 100 °C) it turns to gas, and when frozen (at 0 °C) it turns to ice.

PCMs are usually used in construction as a means of regulating room temperature. When the temperature in a room rises, the PCM liquefies, absorbing and storing heat. Once the temperature drops, it solidifies and releases the absorbed heat.

PCMs usually use the solid-to-liquid phase change because a change to gas requires a much larger volume, or high pressure, to store the material in that state.

Current phase change materials use paraffin as their prime material. Paraffin (a wax) melts at a specified temperature and solidifies again at a lower one. It is produced as microscopic wax spheres (paraffin wrapped in glass-acrylic containers called beads) that are integrated into building materials.
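A rough calculation shows why the melt/freeze cycle stores a useful amount of heat. The latent-heat figure for paraffin used here (about 200 kJ/kg) is a typical textbook value, not one from the article:

```python
# Heat absorbed when a paraffin PCM melts: Q = m * L.
LATENT_HEAT = 200e3   # J/kg for paraffin, typical textbook value (assumption)
mass_kg = 50          # paraffin embedded in a wall panel, illustrative figure

heat_j = mass_kg * LATENT_HEAT
print(f"{heat_j / 1e6:.0f} MJ stored")          # -> 10 MJ
print(f"= {heat_j / 3.6e6:.1f} kWh of heat")    # -> about 2.8 kWh
```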

Penn researchers' study of phase change materials could lead to better computer memory

Memory devices for computers require a large collection of components that can switch between two states, which represent the 1's and 0's of binary language. Engineers hope to make next-generation chips with materials that distinguish between these states by physically rearranging their atoms into different phases. Researchers at the University of Pennsylvania have now provided new insight into how this phase change happens, which could help engineers make memory storage devices faster and more efficient.

The research was conducted by Ritesh Agarwal, associate professor in the Department of Materials Science and Engineering in Penn's School of Engineering and Applied Science, along with members of his research group. A.T. Charlie Johnson, professor in the Department of Physics and Astronomy in the School of Arts and Sciences, and Ju Li, now a professor of nuclear science and engineering at the Massachusetts Institute of Technology, also contributed to the study.

Their research was published in the journal Science.

01 May 2012

MIT News: Side Channel Attacks - New Developments in Computer Security


In the last 10 years, cryptography researchers have demonstrated that even the most secure-seeming computer is shockingly vulnerable to attack. The time it takes a computer to store data in memory, fluctuations in its power consumption and even the noises it emits can betray information to a savvy assailant.

Attacks that use such indirect sources of information are called side-channel attacks, and the increasing popularity of cloud computing makes them an even greater threat. An attacker would have to be pretty motivated to install a device in your wall to measure your computer’s power consumption. But it’s comparatively easy to load a bit of code on a server in the cloud and eavesdrop on other applications it’s running.

Fortunately, even as they’ve been researching side-channel attacks, cryptographers have also been investigating ways of stopping them. Shafi Goldwasser, the RSA Professor of Electrical Engineering and Computer Science at MIT, and her former student Guy Rothblum, who’s now a researcher at Microsoft Research, recently posted a long report on the website of the Electronic Colloquium on Computational Complexity, describing a general approach to mitigating side-channel attacks. At the Association for Computing Machinery’s Symposium on Theory of Computing (STOC) in May, Goldwasser and colleagues will present a paper demonstrating how the technique she developed with Rothblum can be adapted to protect information processed on web servers.

In addition to preventing attacks on private information, Goldwasser says, the technique could also protect devices that use proprietary algorithms so that they can’t be reverse-engineered by pirates or market competitors — an application that she, Rothblum and others described at last year’s AsiaCrypt conference.

07 April 2012

CERN News: Large Hadron Collider Achieves Record Collision Energy of 8 TeV


In early 2012, CERN announced that the LHC would run with a beam energy of 4 TeV, 0.5 TeV higher than in the previous two years of LHC experiments. The decision was taken by CERN management following the annual performance workshop held in Chamonix and a report delivered by the external CERN Machine Advisory Committee (CMAC).

An electron volt (eV) is a measure of the amount of energy gained by a single electron moving across an electric potential difference of one volt. One trillion electron volts is called a teraelectronvolt, or simply a TeV.

The exact conversion is 1 teraelectronvolt = 1.60217646 × 10⁻⁷ joules.

Why is the amount of energy, or TeV, important to using the Large Hadron Collider? Based on Einstein's theory of special relativity, E = mc², energy is the product of mass multiplied by the speed of light squared. By colliding particles travelling at nearly the speed of light in the LHC and measuring the energy released, the LHC can detect fluctuations in energy that may signify a particle, most notably the Higgs boson. By measuring the energy released, one can deduce the mass of the escaping particles.
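A quick sketch of the unit arithmetic, using the standard eV-to-joule constant given above (the 8 TeV figure is the record energy discussed in this post):

```python
# Convert the LHC's 8 TeV collision energy to joules, then to an
# equivalent mass via m = E / c^2.
EV_TO_JOULES = 1.60217646e-19   # one electron volt in joules
C = 299_792_458                 # speed of light, m/s

energy_j = 8e12 * EV_TO_JOULES  # 8 TeV in joules
mass_kg = energy_j / C**2       # mass equivalent of that energy
print(f"8 TeV = {energy_j:.3e} J  ->  {mass_kg:.3e} kg")
```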

LHC physics data taking gets underway at new record collision energy of 8 TeV

On 5 April 2012 at 0:38 CEST, the LHC shift crew declared ‘stable beams’ as two 4 TeV proton beams were brought into collision at the LHC’s four interaction points. This signals the start of physics data taking by the LHC experiments for 2012. The collision energy of 8 TeV is a new world record, and increases the machine’s discovery potential considerably.

“The experience of two good years of running at 3.5 TeV per beam gave us the confidence to increase the energy for this year without any significant risk to the machine,” explained CERN’s Director for Accelerators and Technology, Steve Myers. “Now it’s over to the experiments to make the best of the increased discovery potential we’re delivering them!”

Although the increase in collision energy is relatively modest, it translates to an increased discovery potential that can be several times higher for certain hypothetical particles. Some such particles, for example those predicted by supersymmetry, would be produced much more copiously at the higher energy. Supersymmetry is a theory in particle physics that goes beyond the current Standard Model, and could account for the dark matter of the Universe.

22 March 2012

New Advancement in Quantum Computers: Researchers Generate Qubits From Electrons


The computing world is all abuzz about quantum computers and quantum computing.

People are pointing to the inevitability of quantum computers because of Moore's law. The often-recited law states that computer processing power doubles roughly every 18 months, and it has held true for more than 50 years. At that rate, by the year 2030 or earlier the circuits on a microprocessor will approach the atomic scale, and quantum computing will be the norm.

Unlike a classical computer (today's computers, built on bits and bytes), a quantum computer does not use bits to store information. It uses quantum bits, or qubits. A qubit can be an atom that carries information just as a bit does, but unlike a bit, it is not limited to holding a 1 or 0. An atom can have a spin-up state (1) or a spin-down state (0), and, remarkably, it can also be in both states at the same time. This peculiar property of the atom is what everyone is excited about.
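A minimal sketch of that idea in NumPy: a qubit is a pair of complex amplitudes, and an equal superposition assigns equal probability to both outcomes (the amplitudes chosen here are just the textbook example):

```python
import numpy as np

# A qubit is a pair of complex amplitudes: state = alpha*|0> + beta*|1>.
# |alpha|^2 and |beta|^2 are the probabilities of measuring 0 or 1.
ket0 = np.array([1, 0], dtype=complex)   # e.g. spin-down, read as 0
ket1 = np.array([0, 1], dtype=complex)   # e.g. spin-up, read as 1

state = (ket0 + ket1) / np.sqrt(2)       # "both states at the same time"

probs = np.abs(state) ** 2
print(probs)                             # [0.5 0.5]

# A measurement forces one definite outcome.
outcome = np.random.choice([0, 1], p=probs)
print("measured:", outcome)
```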

In terms of computing power, this means a quantum computer can theoretically perform a calculation in one step where a classical or digital computer may take several. A classical computer programmed to dial a million phone numbers would dial them one at a time, a million times over. A quantum computer could dial the same million numbers all at the same time, in one step.


The alphabet of data processing could include more elements than the "0" and "1" in future. An international research team has achieved a new kind of bit with single electrons, called quantum bits, or qubits. With them, considerably more than two states can be defined. So far, quantum bits have only existed in relatively large vacuum chambers; the team has now generated them in semiconductors. In doing so, they have put into practice an effect that the RUB physicist Prof. Dr. Andreas Wieck predicted theoretically 22 years ago. This represents another step along the path to quantum computing. Together with colleagues from Grenoble and Tokyo, Wieck, from the Chair of Applied Solid State Physics, reports on the results in the journal Nature Nanotechnology.

04 March 2012

Helix Nebula - The Science Cloud: Business and Science Tie Up For European Cloud Computing


Cloud computing is a service where computer applications, data, and software are provided to computers and other devices as a metered service over a network or the internet. This does away with installing them directly on each computer; everything is stored on a server, the 'cloud', for computers to access.

A consortium of leading IT providers and three of Europe’s biggest research centres announced a partnership to launch a European cloud computing platform. "Helix Nebula - the Science Cloud" will support the massive IT requirements of European scientists, and will become available to governmental organisations and industry after an initial pilot phase.

The three research centers are: The European Organization for Nuclear Research (CERN), The European Molecular Biology Laboratory (EMBL), and The European Space Agency (ESA).

The partnership is working to establish a sustainable European cloud computing infrastructure, supported by industrial partners, which will provide stable computing capacities and services that elastically meet demand.

This pan-European partnership across academia and industry is in line with the Digital Agenda of the European Commission and will foster innovation for science and create new commercial markets.

29 February 2012

Software Engineers Develop Cryptographic Attack That Allows Access to Secure Internet Servers


HTTP or HyperText Transfer Protocol is the underlying protocol used by the World Wide Web. It defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various browser and computer commands. Entering a web address or URL in a browser sends an HTTP command to the web server directing it to fetch and transmit the requested Web page.

Hypertext Transfer Protocol Secure (HTTPS) is a combination of the Hypertext Transfer Protocol (HTTP) with the SSL/TLS protocol. It provides encrypted communication and secure identification of a network web server. HTTPS connections are often used for payment transactions on the World Wide Web and for sensitive transactions in corporate information systems. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that provide communication security over the Internet.
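To see the TLS layer in action, here is a short Python sketch that opens a secure connection and reports the negotiated protocol version and cipher suite (the host name is only a placeholder):

```python
import socket
import ssl

host = "example.com"  # placeholder HTTPS server
context = ssl.create_default_context()  # verifies certificates by default

with socket.create_connection((host, 443)) as raw_sock:
    # wrap_socket performs the TLS handshake on top of the TCP connection
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("protocol:", tls_sock.version())      # e.g. TLSv1.3
        print("cipher:  ", tls_sock.cipher()[0])    # negotiated cipher suite
```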

A padlock icon in a web browser, or a URL starting with https, signals that the connection is secured, typically indicating that OpenSSL, a cryptography toolkit implementing the SSL protocol, or a similar system is being used. New research by a collaborative team has developed an attack that can circumvent the security OpenSSL should provide. The attack worked on a very specific version of the OpenSSL software, 0.9.8g, and only when a specific set of options was used.

Dr Dan Page, Senior Lecturer in Computer Science in the Department of Computer Science at the University of Bristol, and one of the collaborative team, will present a paper at the RSA conference in San Francisco next week [Wednesday 29 February] about the EPSRC-funded research.

The attack worked by targeting a bug in the software. Carefully constructed messages were sent to the web-server, each of which triggered the bug and allowed part of a cryptographic key to be recovered. Using enough messages, the entire key could be recovered.

15 February 2012

Large Hadron Collider Will Run At Higher Beam Energy Level of 4 TeV


An electron volt (eV) is a measure of the amount of energy gained by a single electron moving across an electric potential difference of one volt. One trillion electron volts is called a teraelectronvolt, or simply a TeV.

The exact conversion is 1 teraelectronvolt = 1.60217646 × 10⁻⁷ joules.

Why is the amount of energy, or TeV, important to using the Large Hadron Collider? Basically it boils down to Einstein's theory of special relativity, E = mc²: energy is the product of mass multiplied by the speed of light squared. By colliding particles travelling at nearly the speed of light in the LHC and measuring the energy released, the LHC can detect fluctuations in energy that may signify a particle, most notably the Higgs boson. By measuring the energy released, one can deduce the mass of the escaping particles.

CERN today announced that the LHC will run with a beam energy of 4 TeV this year, 0.5 TeV higher than in 2010 and 2011. This decision was taken by CERN management following the annual performance workshop held in Chamonix last week and a report delivered today by the external CERN Machine Advisory Committee (CMAC). It is accompanied by a strategy to optimise LHC running to deliver the maximum possible amount of data in 2012 before the LHC goes into a long shutdown to prepare for higher energy running. The data target for 2012 is 15 inverse femtobarns for ATLAS and CMS, three times higher than in 2011. Bunch spacing in the LHC will remain at 50 nanoseconds.

01 February 2012

Sleep Apnea: A High Risk Factor in Silent Stroke and Small Brain Lesions


Sleep apnea is a disorder in which breathing is shallow, obstructed, or paused during sleep. The pauses can last from a few seconds to minutes and can occur five to thirty times, or even more, in one hour. A loud snort or choking sound happens when normal breathing starts again.

Sleep apnea disrupts sleep and moves the person out of deep sleep and into light sleep when breathing is hindered. This results in poor sleep quality that makes a person tired throughout the day. Sleep apnea is one of the leading causes of excessive daytime sleepiness.

Studies have unearthed a link between strokes and sleep apnea

People with severe sleep apnea may have an increased risk of silent strokes and small lesions in the brain, according to a small study presented at the American Stroke Association's International Stroke Conference 2012.

A silent stroke is a stroke that does not manifest any obvious and identifiable outward symptoms. Although the person may be unaware of suffering a silent stroke, it still can cause damage to the brain and places the patient at an increased risk of a major stroke in the future.

"We found a surprisingly high frequency of sleep apnea in patients with stroke that underlines its clinical relevance as a stroke risk factor," said Jessica Kepplinger, M.D., the study's lead researcher and stroke fellow in the Dresden University Stroke Center's Department of Neurology at the University of Technology in Dresden, Germany.

31 January 2012

What Are Gravitational Waves?


In 1916, Albert Einstein predicted the existence of gravitational waves. Gravitational waves are ripples in the space-time fabric.

Gravitational waves are similar in concept to sound waves, but instead of travelling through air, they travel as compressions and expansions of the fabric of space-time itself. They may be pictured as small distortions of space-time geometry that propagate through space as waves: a slight distortion in one region of space distorts nearby regions, producing a moving distortion that speeds along at the speed of light.

These waves have not been detected yet, but it is believed that the distortions that cause gravitational waves come from the acceleration of large masses, such as supernova explosions, black holes consuming nearby matter, collisions of black holes, and the coalescence of neutron stars.

To date, there is only indirect evidence pointing to their existence. To prove that gravitational waves exist, they must be observed and measured directly. For 20 years, the Explorer gravitational wave detector at CERN has been trying to detect them.

20 January 2012

Healthy Crying: Teardrops Contain Lysozyme That Destroys Bacteria


A disease-fighting protein in our teardrops has been tethered to a tiny transistor, enabling UC Irvine scientists to discover exactly how it destroys dangerous bacteria. The research could prove critical to long-term work aimed at diagnosing cancers and other illnesses in their very early stages.

Ever since Nobel laureate Alexander Fleming discovered in 1922 that human tears contain antiseptic proteins called lysozymes, scientists have tried to solve the mystery of how they could relentlessly wipe out far larger bacteria. It turns out that lysozymes have jaws that latch on and chomp through rows of cell walls like someone hungrily devouring an ear of corn, according to findings that will be published Jan. 20 in the journal Science.

Lysozyme is an enzyme found in egg white, tears, and other secretions. It is responsible for breaking down the polysaccharide walls of many kinds of bacteria and thus it provides some protection against infection.

"Those jaws chew apart the walls of the bacteria that are trying to get into your eyes and infect them," said molecular biologist and chemistry professor Gregory Weiss, who co-led the project with associate professor of physics & astronomy Philip Collins.

The researchers decoded the protein's behavior by building one of the world's smallest transistors – 25 times smaller than similar circuitry in laptop computers or smartphones. Individual lysozymes were glued to the live wire and their eating activities were monitored.

Quantum Computers Offer Secure Cloud Computing


With the advent of cloud computing, scientists and engineers are looking to integrate this service with the next evolution of computers: quantum computers.

Basically, cloud computing is a service where computer applications, data, and software are provided to computers and other devices as a metered service over a network or the internet. This does away with installing them directly on each computer; everything is stored on a server, the 'cloud', for computers to access.

Our present-day computers (classical computers) use transistors to compute. The more transistors on a computer chip, the more processing power it has; the record for the most transistors put on a chip is 2 billion. A quantum computer instead uses atoms for its computing power. Put simply, the way atoms behave makes computing faster.

A quantum computer can theoretically perform a calculation in one step where a classical or digital computer may take several. A classical computer programmed to dial a million phone numbers would dial them one at a time, a million times over. A quantum computer could dial the same million numbers all at the same time, in one step.

Researchers have succeeded in combining the power of quantum computing with the security of quantum cryptography and have shown that perfectly secure cloud computing can be achieved using the principles of quantum mechanics. They have performed an experimental demonstration of quantum computation in which the input, the data processing, and the output remain unknown to the quantum computer. The international team of scientists will publish the results of the experiment, carried out at the Vienna Center for Quantum Science and Technology (VCQ) at the University of Vienna and the Institute for Quantum Optics and Quantum Information (IQOQI), in the forthcoming issue of Science.

18 January 2012

MIT NEWS: The faster-than-fast Fourier transform


CAMBRIDGE, Mass. -- The Fourier transform is one of the most fundamental concepts in the information sciences. It’s a method for representing an irregular signal — like the voltage fluctuations in the wire that connects an MP3 player to a loudspeaker — as a combination of pure frequencies. It’s universal in signal processing, but it can also be used to compress image and audio files, solve differential equations, and price stock options, among other things.

The reason the Fourier transform is so prevalent is an algorithm called the fast Fourier transform (FFT), devised in the mid-1960s, which made it practical to calculate Fourier transforms on the fly. Ever since the FFT was proposed, however, people have wondered whether an even faster algorithm could be found.

At the 2012 Association for Computing Machinery’s Symposium on Discrete Algorithms (SODA), a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic — a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smart phones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.

Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers — discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies.
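To get a feel for what the FFT computes, here is a small NumPy example: it builds a signal from two pure tones and recovers exactly those frequencies. A spectrum like this, with only a few significant frequencies, is the 'sparse' case the new MIT algorithm is designed to exploit (the tone frequencies and sample rate are arbitrary choices):

```python
import numpy as np

# One second of signal: two pure tones at 50 Hz and 120 Hz.
sample_rate = 1024
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT expresses the samples as a weighted sum of frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# Almost every bin is near zero; only the two tones stand out.
print(freqs[np.abs(spectrum) > 100])   # -> [ 50. 120.]
```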