Engineers have introduced a new approach to performing high-level neural network computations by substituting a photonic tensor core for existing digital processors such as GPUs. In the approach, light replaces electricity, and optical data processing delivers performance two to three orders of magnitude higher than an electrical tensor processing unit (TPU), supporting unsupervised learning and higher performance in AI systems.
Because neural networks underpin most machine learning today, the advance has the potential to accelerate artificial intelligence across a wide range of applications. In machine learning, neural networks are trained to classify unseen data and to make decisions from it without supervision. Once trained, a network can run inference to identify and classify objects and patterns, extracting a distinctive signature from the data.
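For context, inference in a trained network reduces to repeated matrix multiplications followed by nonlinearities, which is exactly the workload a tensor core accelerates. The sketch below is illustrative only: the layer sizes and random stand-in weights are assumptions, not anything from the reported system.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(x, 0.0)

def infer(x, weights, biases):
    """Run a trained feed-forward network on one input vector.

    Each layer is a matrix multiplication (the operation a tensor
    core accelerates) followed by a nonlinearity; the index of the
    largest final output is taken as the predicted class.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    logits = weights[-1] @ a + biases[-1]
    return int(np.argmax(logits))

# Toy 3-layer network with random stand-in "trained" parameters.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((4, 16))]
biases = [np.zeros(16), np.zeros(16), np.zeros(4)]

print(infer(rng.standard_normal(8), weights, biases))  # predicted class index, 0-3
```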
In the new system, the photonic TPU improves both the speed and the efficiency of existing deep learning hardware by performing matrix multiplications in parallel. It relies on an electro-optical interconnect that allows the optical memory to be read and written efficiently and lets the TPU interface with other architectures.
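A rough way to picture the parallel matrix multiplication is to treat the core as a fixed-size unit that multiplies whole tiles of a matrix in a single pass, rather than accumulating scalar products one at a time. The NumPy sketch below is only a conceptual model of that idea; the tile size and function names are assumptions, not parameters of the actual photonic device.

```python
import numpy as np

TILE = 4  # assumed size of one tensor-core pass (illustrative only)

def tiled_matmul(A, B):
    """Model a tensor core that consumes TILE x TILE blocks per pass.

    Each block product stands in for one optical pass, in which every
    multiply-accumulate within the tile happens in parallel as light
    propagates through the core.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2 and m % TILE == n % TILE == k % TILE == 0
    C = np.zeros((m, n))
    passes = 0
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            for p in range(0, k, TILE):
                # One "pass": an entire TILE x TILE block product at once
                C[i:i+TILE, j:j+TILE] += A[i:i+TILE, p:p+TILE] @ B[p:p+TILE, j:j+TILE]
                passes += 1
    return C, passes

A = np.random.rand(8, 8)
B = np.random.rand(8, 8)
C, passes = tiled_matmul(A, B)
assert np.allclose(C, A @ B)
print(f"{passes} tile passes instead of {A.size * B.shape[1]} scalar multiply-accumulates")
```

The point of the model is the ratio it prints: the same result is reached in a handful of block-level passes instead of hundreds of sequential scalar operations, which is the kind of parallelism the photonic core exploits.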
One distinct advantage of the photonic TPU concerns the limits digital processors face in performing complex operations accurately, and the amount of power they require to complete them. Neural networks comprise multiple layers of interconnected neurons, mimicking the human brain, and those layers multiply the demand on conventional hardware.