Data Science, Deep Learning, Generative Adversarial Networks, Machine Learning
Convolutional neural networks have been used for some time now for, among other things, classification of images. Recent research leads us to believe that neural networks can be "tricked" into wrongly classifying images by adding noise or other artifacts to them. If we want to use deep convolutional models in critical scenarios (e.g. self-driving cars), we have to be certain that the models we are using are accurate and robust.
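As a minimal sketch of the "tricking" described above, the Fast Gradient Sign Method perturbs an image along the sign of the loss gradient; the model and input below are illustrative stand-ins, not the talk's actual setup.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; only a stand-in for a real trained CNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in MNIST image
label = torch.tensor([3])

# FGSM: take the gradient of the loss w.r.t. the input,
# then nudge each pixel by epsilon in the gradient's sign direction.
loss = loss_fn(model(image), label)
loss.backward()
epsilon = 0.25
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

To a human, `adversarial` looks almost identical to `image`, yet such perturbations can flip a trained model's prediction.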
In this presentation I will talk about a novel research approach for increasing neural network accuracy and robustness in difficult or adversarial situations.
We will go over how convolutional layers work and how we can modify them to classify based on missing features in an image. With a simple modification we will gain some accuracy on a variation of the well-known MNIST dataset.
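For orientation, a small PyTorch CNN for MNIST-sized inputs might look like the sketch below; the layer sizes are illustrative assumptions, not the modified architecture the talk presents.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN for 28x28 grayscale images (MNIST-style)."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(32 * 7 * 7, 10)  # 10 digit classes

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))  # 28x28 -> 14x14
        x = self.pool(torch.relu(self.conv2(x)))  # 14x14 -> 7x7
        return self.fc(x.flatten(1))

logits = SmallCNN()(torch.rand(4, 1, 28, 28))  # batch of 4 random images
```

Each convolutional layer slides learned filters over the image; the modification discussed in the talk changes how such layers respond when expected features are absent.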
All the examples will be in PyTorch 1.1.
Some basic knowledge of neural networks, backpropagation, etc. is needed.
Type: Talk (45 mins); Python level: Beginner; Domain level: Intermediate
Teaching Assistant and Neural Network Researcher at UNSPMF, Serbia. Ambassador at the Fedora Project. Likes to make weird neural network models. Sometimes they even work!