Session:How does artificial intelligence accomplish the feat of learning? (Wondrous Mathematics)

Description: How a mathematical breakthrough made at the end of the 17th century is the workhorse of the artificial neural networks of today
Website(s):
Type: Talk
Kids session: No
Keyword(s): software
Processing assembly: Assembly:Curry Club Augsburg
Person organizing: User:IngoBlechschmidt
Language: en - English
Starts at: 2017/12/28 11:30
Ends at: 2017/12/28 12:20
Duration: 50 minutes
Location: Room:Seminar room 14-15

Conventional computer algorithms are superior to the human intellect in many regards, for instance at multiplying large numbers or at winning at chess by analyzing huge numbers of moves. But there are also many tasks which come naturally to us yet far exceed the capabilities of such algorithms: rigid algorithms can't decipher human handwriting or drive cars.

The recent breakthroughs in artificial intelligence circumvent these barriers by employing quite a different approach: they use artificial neural networks, which are inspired by the partially understood way the human brain works.

The unique feature of artificial neural nets is that they aren't rigid, but can learn. Human programmers specify their rough structure and supply training data, but don't write a single line of code governing their behavior.
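To make this division of labor concrete, here is a minimal sketch in Python with numpy. The layer sizes, the sigmoid activation and the names W1, W2 and forward are illustrative assumptions, not taken from the talk; the point is only that the programmer fixes the wiring of the network, while the numerical weights remain to be filled in by training.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # The "rough structure" the programmer specifies: 2 inputs,
    # 3 hidden neurons, 1 output, sigmoid nonlinearities.
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((3, 2))   # input -> hidden weights, to be learned
    W2 = rng.standard_normal((1, 3))   # hidden -> output weights, to be learned

    def forward(x):
        hidden = sigmoid(W1 @ x)       # each neuron: weighted sum + nonlinearity
        return sigmoid(W2 @ hidden)

    print(forward(np.array([0.0, 1.0])))   # untrained: essentially a random guess

Before training, the output is just noise; "learning" means adjusting the entries of W1 and W2 until the outputs match the training data.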

In the spirit of a good Unix command-line tool, this talk aspires to explain one thing and explain it well: How do artificial neural nets accomplish the feat of learning?

We'll see that the answer is related to a mathematical breakthrough made at the end of the 17th century, and we'll discuss why deep learning surged only in the last few years even though the basics of artificial neural nets were already understood in the 1980s. We'll also touch upon some of the biggest problems of neural nets, which emerge directly from the way they learn.
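As a small taste of the idea (on the assumption that the breakthrough in question is the derivative of Newton and Leibniz): gradient descent repeatedly nudges a parameter downhill along the slope of a loss function. The single parameter w and the toy loss (w - 3)^2 below are illustrative assumptions; real networks apply the same downhill step to millions of weights, computing the slopes via the chain rule (backpropagation).

    def loss(w):
        return (w - 3.0) ** 2      # toy loss, minimal at w = 3

    def dloss_dw(w):
        return 2.0 * (w - 3.0)     # its derivative: the slope of the loss at w

    w = 0.0                        # arbitrary starting guess
    learning_rate = 0.1
    for step in range(50):
        w -= learning_rate * dloss_dw(w)   # step downhill along the slope

    print(w)                       # approximately 3.0, the minimum of the loss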

The talk doesn't require any advanced knowledge of mathematics. It's intended as a warmup to Katharine's talk on deep learning blindspots. If you're already familiar with Michael Nielsen's book, then don't expect to learn anything new; come to this talk only if you want to contribute interesting remarks.

Slides (will be extended after the congress)