Project Level: Honours, PhD

Optimising neural networks during training is an active research topic. Training is computationally expensive, and tuning the hyperparameters of the optimisation algorithms is more of an art than a science. Various methods have been proposed to achieve better final accuracies and to remove the need for hyperparameter tuning. Normalised gradient descent, which uses only the direction of the gradient, has been shown to be much more stable than gradient descent and can achieve good test accuracies on some learning tasks. Despite these good properties, because it lacks the ability to adjust the learning rate, it does not match the accuracy of gradient descent on many other tasks. In this project, we investigate different methods for adjusting the learning rate of normalised gradient descent, with the aim of creating an adaptive optimisation method for neural networks.
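As a rough illustration of the trade-off described above, the following NumPy sketch compares normalised gradient descent with a fixed step size against the same method with a simple decaying step size on a toy quadratic. The objective, the schedules, and all names here are illustrative assumptions, not the project's actual methods:

```python
import numpy as np

def normalised_gd(grad_fn, x0, lr_schedule, steps=300, eps=1e-12):
    """Normalised gradient descent: at step t, move a distance
    lr_schedule(t) along the negative unit-gradient direction,
    ignoring the gradient's magnitude entirely."""
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = grad_fn(x)
        x = x - lr_schedule(t) * g / (np.linalg.norm(g) + eps)
    return x

# Toy convex objective f(x) = 0.5 * ||x||^2, so grad f(x) = x.
# (This objective and the schedules below are assumptions chosen
# for illustration, not methods from the project description.)
grad = lambda x: x
x0 = [3.0, 4.2]

# Fixed step size: stable, but stalls in a ball of radius about lr
# around the minimiser, because the step length never shrinks.
x_fixed = normalised_gd(grad, x0, lr_schedule=lambda t: 0.5)

# Decaying step size, one simple way to "adapt" the learning rate:
# the iterate gets progressively closer to the minimiser.
x_decay = normalised_gd(grad, x0, lr_schedule=lambda t: 0.5 / np.sqrt(t + 1))

print(np.linalg.norm(x_fixed))  # stuck at roughly the step size
print(np.linalg.norm(x_decay))  # much closer to the optimum
```

The fixed-step run illustrates why normalisation alone is insufficient: once the iterate is within one step of the minimiser it overshoots back and forth, which is exactly the behaviour an adaptive learning rate would address.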

Project members