Had RIAI today; intro to adversarial examples in neural nets, which are very interesting. The big picture is that neural nets are very vulnerable to targeted tweaking of the input. If you take an image of a panda, where the network gives a score corresponding to 60% likelihood of the label “panda”, and you add a small noise term to every pixel of the image, so that a human can’t tell the difference, suddenly the network assigns very high confidence (>99%) that the resulting image is a gibbon (or any other target label you want). The intuition is that the noise gets magnified layer by layer. People don’t know how to deal with this, and it’s scary for certain applications like self-driving cars: it turns out that, with access to the internals of a NN, you can place black and white stickers on a stop sign so that it gets classified as a speed limit sign with high confidence. Needless to say, a very active research topic!
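For my own reference, here's a minimal sketch of the fast gradient sign method (FGSM), the classic one-step way to build this kind of noise: take the gradient of the loss with respect to the input pixels and step a tiny epsilon in its sign direction. This is the untargeted version (push the image away from its true label); the targeted panda-to-gibbon variant just steps *down* the gradient of the loss toward the target label instead. The model and class index below are stand-ins, not anything from the lecture.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, true_label, epsilon=0.007):
    """Return image + epsilon * sign(d loss / d image)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step *up* the loss: a tiny per-pixel change that moves the logits a lot.
    # (For a targeted attack, use the target label and subtract instead of add.)
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical setup: any pretrained classifier and a photo tensor would do.
model = models.resnet18(weights="IMAGENET1K_V1").eval()  # downloads weights
x = torch.rand(1, 3, 224, 224)   # stand-in for the panda photo, values in [0, 1]
y = torch.tensor([388])          # ImageNet class index for "giant panda"
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax().item(), model(x_adv).argmax().item())  # clean vs. adversarial prediction
```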
After Randomized Algorithms (I skipped the RIAI exercise because it just walked through some calculations), I met Jenda (previously referred to as John, the Czech dude) for some homework. We went back to Culmann and plugged away; we had fun chatting about the problems and I got through two questions, which I felt pretty good about. Then I said I was going to Aldi’s and Jenda decided to join me, so we walked down together and got a pile of groceries.
After that I made myself a really amazing plate of curry, which was very simple :D I tried to emulate the Mongolian Grill, using the frozen hamburgers from Migros; I chopped them up into little pieces and cooked them; then I threw in onion and mushroom and fresh garlic along with (the crucial bit) some curry powder and some chili flakes, which I browned in the grease in the pan. Along with this terrific bread called St. Galler’s Brot, it made a hell of a meal.