Neural Network Basics

Location: CG Auditorium
Speaker: Callie Federer

Machine learning techniques, and artificial neural networks in particular, are powerful and dangerous tools that have become too easy to implement and too difficult to understand. I propose a tutorial that builds a feedforward artificial neural network from scratch. The tutorial will open with a brief history of artificial neural networks and how they originated from efforts to understand the brain. We will briefly discuss the ‘AI winter’ and the breakthrough backpropagation algorithm that brought us back into the ‘AI summer’. I will then walk through one iteration of a neural network by hand, calculating its outputs and updating each weight in the network via backpropagation, to demystify what is happening. We will then build a neural network in Python without any specialized machine learning packages. We will also build a convolutional neural network and a recurrent neural network (I will have template code so this moves quickly) so that participants understand conceptually how each type of network differs and what kinds of problems each is used for. The value of this kind of tutorial is that participants take the time to understand what is happening inside neural networks instead of just running pre-built code from packages and hoping for the best (i.e., high accuracy). The goal is to make machine learning more accessible to scientists in other fields so they can get a better sense of how to apply it in their own work.
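To give a flavor of the hand-worked iteration described above, here is a minimal sketch (not material taken from the talk itself) of one forward pass and one backpropagation update for a tiny 2-2-1 network with sigmoid activations and squared-error loss, in plain Python. The input, target, learning rate, and initial weights are illustrative placeholders, and bias terms are omitted for brevity.

import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
x = [0.5, -1.0]   # illustrative input
target = 1.0      # illustrative target
lr = 0.1          # learning rate

# Hidden layer: 2 neurons with 2 inputs each; output neuron: 2 inputs.
W_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

# Forward pass: hidden activations, network output, and the loss.
hidden = [sigmoid(sum(w * xi for w, xi in zip(W_hidden[j], x))) for j in range(2)]
y = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
loss = 0.5 * (y - target) ** 2

# Backward pass: apply the chain rule (the sigmoid derivative is a * (1 - a)).
delta_out = (y - target) * y * (1 - y)
delta_hidden = [delta_out * w_out[j] * hidden[j] * (1 - hidden[j]) for j in range(2)]

# Gradient-descent update of every weight in the network.
for j in range(2):
    w_out[j] -= lr * delta_out * hidden[j]
    for i in range(2):
        W_hidden[j][i] -= lr * delta_hidden[j] * x[i]

print(f"output = {y:.4f}, loss = {loss:.4f}")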

Speaker Description: 

I am an Associate Research Scientist at RadiaSoft, LLC, working on machine learning for various applications, including prostate cancer treatment plans and particle accelerators. I recently completed my PhD in Computational Bioscience, with thesis work at the intersection of machine learning and neuroscience.

Event Category: