Neural Networks
Neural networks (NNs) have become increasingly relevant in recent years, owing largely to their flexibility of application, the availability of large amounts of data, and the computational speed of modern computers. Their flexibility comes from a diverse range of possible architectures, such as convolutional NNs and recurrent NNs, through which NNs can, among many other things, perform facial recognition, predict economic patterns, and compose music. This flexibility depends on access to large amounts of data, which is required for training and is becoming increasingly easy to obtain via the Internet. Even with this quantity of data available, training large NNs would be impractical without the high computational speed of modern computers. Speed, however, is not without cost: processing more data faster leads to higher energy consumption, which is placing a growing burden on power grids across Canada.
Research Proposal - Energy-Efficient Neural Networks
This research proposal involves creating an optical device to decrease the energy cost of neural networks while maintaining their high computational speed. Optical processors can handle large numbers of matrix multiplications efficiently and at low energy cost because of the parallelism inherent in optics: an entire matrix product can be computed at once, rather than element by element as in digital computers. This ability makes optical processors an ideal candidate for use in NNs, since matrix multiplications dominate the computational cost of both training and inference. Optical matrix multiplication therefore has the potential to significantly decrease the energy usage of NNs while maintaining the high computational speed of digital NNs.
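As a rough illustration of the computational load involved, the following sketch (in Python with NumPy; the layer sizes are hypothetical, not taken from this proposal) counts the multiply-accumulate operations in a single fully connected layer. A digital processor performs these operations one at a time in sequence, whereas an optical processor can, in principle, evaluate the entire product in a single parallel pass of light.

```python
# A minimal sketch, assuming hypothetical layer sizes, of why matrix
# multiplication dominates NN compute: a fully connected layer is just
# y = W @ x, costing one multiply-accumulate per weight.
import numpy as np

n_in, n_out = 784, 128            # hypothetical layer dimensions
W = np.random.randn(n_out, n_in)  # weight matrix
x = np.random.randn(n_in)        # input activations

# A digital processor evaluates the product element by element,
# performing n_out * n_in sequential multiply-adds:
y_digital = np.zeros(n_out)
for i in range(n_out):
    for j in range(n_in):
        y_digital[i] += W[i, j] * x[j]

# An optical processor would compute the same product "all at once";
# here the single vectorised call stands in for that parallel step.
y_optical = W @ x

assert np.allclose(y_digital, y_optical)
print(f"multiply-accumulates in this one layer: {n_out * n_in:,}")
```

Scaled over many layers and millions of training examples, these multiply-accumulates are what make matrix multiplication the dominant cost in NN workloads, and why performing it optically could yield substantial energy savings.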