Machine Learning Honours Project
Over the past few years, there has been a push to make neural networks larger. While larger models bring many benefits, they can be difficult to run on smaller devices with limited memory and CPU power. This thesis shows how volume preserving neural networks (VPNNs) can be refined to increase their accuracy on image data, yielding models that are either more accurate or smaller in terms of number of parameters. It demonstrates that spatial permutations improve the accuracy of small and medium sized VPNNs, so a smaller network can often be used in place of a larger one, reducing the parameter count. Other techniques, such as Haar wavelets and pooling, are also explored. The thesis further shows how VPNNs can be used to make other neural networks smaller: replacing dense layers with VPNN layers reduced the parameter count of certain CNNs by 19% with little or no loss in accuracy. Overall, the goal is to present methods that improve VPNNs on image data and to use VPNNs to decrease the number of parameters of other networks.
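As a rough illustration of why spatial permutations fit naturally into a volume preserving network, the sketch below (an assumption for illustration, not the thesis implementation) shuffles the flattened pixels of an image batch with a fixed permutation. A permutation matrix has determinant ±1, so the operation preserves volume exactly and adds zero trainable parameters:

```python
import numpy as np

def make_spatial_permutation(n_pixels, rng):
    """Return a fixed permutation of pixel indices (parameter-free)."""
    return rng.permutation(n_pixels)

def apply_permutation(x, perm):
    """Permute the flattened pixels of a batch x of shape (batch, n_pixels)."""
    return x[:, perm]

rng = np.random.default_rng(0)
perm = make_spatial_permutation(16, rng)

# The matrix of this linear map is a permutation matrix, so |det| = 1,
# i.e. the layer is volume preserving.
P = np.eye(16)[perm]
print(abs(round(np.linalg.det(P))))  # 1

# The map is trivially invertible via the inverse permutation.
x = np.arange(32, dtype=float).reshape(2, 16)
y = apply_permutation(x, perm)
x_back = y[:, np.argsort(perm)]
```

Because the permutation mixes distant pixel positions between layers, it can spread local image structure across the network without costing any parameters, which is one intuition for why it helps smaller VPNNs.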
Please check out the publication if you are interested in learning more.
- Linux servers with large GPUs