Researchers have developed a novel photonic chip that uses light instead of electricity, and consumes relatively little power in the process. The chip design could be used to process massive neural networks millions of times more efficiently than today's classical computers do.
Neural networks are machine-learning models that are widely used for such tasks as robotic object identification, natural language processing, drug development, medical imaging, and powering driverless cars. Emerging optical neural networks, which use optical phenomena to accelerate computation, can run much faster and more efficiently than their electrical counterparts.
But as traditional and optical neural networks grow more complex, they consume ever more power. To tackle that issue, researchers and major tech companies have developed "AI accelerators": specialized chips that improve the speed and efficiency of training and testing neural networks.
The chip design relies on a more compact, energy-efficient optoelectronic scheme that encodes data with optical signals but uses "balanced homodyne detection" for matrix multiplication. That technique produces a measurable electrical signal proportional to the product of the amplitudes (wave heights) of two optical signals. The design requires only one channel per input and output neuron, and only as many homodyne photodetectors as there are neurons, not weights.
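The idea behind homodyne matrix multiplication can be sketched numerically. In the sketch below, a 50/50 beamsplitter mixes a signal amplitude with a weight amplitude, two photodetectors measure the output intensities, and the electronic difference of those intensities is proportional to the amplitude product; one detector per output neuron then accumulates its row's dot product as inputs are streamed in time. This is an illustrative simulation under assumed real-valued amplitude encoding, not the researchers' actual chip design; the function names are mine.

```python
import numpy as np

def balanced_homodyne_product(x, w):
    """Simulate balanced homodyne detection of two optical amplitudes.

    A 50/50 beamsplitter mixes signal x and weight w; two photodetectors
    measure the output intensities, and their difference is taken
    electronically. The result equals 2 * x * w for real amplitudes.
    (Illustrative model, not the actual chip's implementation.)
    """
    plus = np.abs((x + w) / np.sqrt(2)) ** 2   # intensity at detector 1
    minus = np.abs((x - w) / np.sqrt(2)) ** 2  # intensity at detector 2
    return plus - minus                        # = 2 * Re(x * conj(w))

def homodyne_matvec(W, x):
    """One homodyne detector pair per output neuron: each detector
    accumulates its row's dot product as weight/input pairs stream by."""
    return np.array([
        sum(balanced_homodyne_product(xj, wij) for wij, xj in zip(row, x)) / 2
        for row in W
    ])

W = np.array([[0.5, -0.2],
              [0.1,  0.8]])
x = np.array([1.0, 2.0])
print(homodyne_matvec(W, x))  # agrees with W @ x: [0.1, 1.7]
```

Note the detector count: the accumulation happens in a single detector pair per output neuron, which is why the scheme needs only as many photodetectors as neurons rather than one per weight.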