ML is crowdsourced trial and error, and brute force on sample data.
Deep learning is just the same word with more emphasis on the neural network (crowdsourced computing).
The computers are just doing the same thing a large number of people do; that's the "neural network", "crowdsourced", "deep learning" aspect. Whatever synonym you like to use.
The applications of spotting eye disease better than a doctor relying on his recognition knowledge and his caveman eyeballs far exceed anything available.
That is quite a buzzwordy nonsense soup you barfed up there. I do enjoy you going from "zomg it's crowdsourcing" to "zomg it's an improvement on human cognition" in a single thought. The ignorant tech journalists whose sole purpose is to sell ads by hyping up shit they don't understand would be proud.
Deep learning has been around for 60 years, and it is nothing but fairly complex and fancy optimization and curve-fitting approaches to data analysis that have existed for centuries. Its success now is due entirely to the massive amounts of data we collect and the improvements in brute-force computational power. As such, it is a powerful tool for any computational examination of data.
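To make the curve-fitting point concrete, here is a minimal sketch in plain NumPy (the toy sine data, layer size, and learning rate are my own arbitrary choices for illustration, nothing from the thread): a one-hidden-layer network fitted to noisy samples by gradient descent on squared error. Strip away the branding and this loop is the whole trick.

import numpy as np

# Toy data: noisy samples of a curve we want to fit.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer with a tanh non-linearity: "deep" learning in miniature.
hidden = 32
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05
n = len(x)
for step in range(5000):
    # Forward pass: predict, measure squared error.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)

    # Backward pass: plain gradient descent on the mean squared error.
    grad_pred = 2 * err / n
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_pre = grad_h * (1 - h ** 2)   # derivative of tanh
    grad_W1 = x.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(f"final mean squared error: {loss:.4f}")

Swap the tanh for something fancier, stack more layers, and hand the loop to a warehouse of GPUs, and you have the thing currently being sold as a step toward human cognition.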
However, unlike what you are implying, deep learning is not a step closer to computers having a human level of pattern matching. The artificial neural network, meant to mimic the human neural network, replaces millions of years of heuristics honed through evolutionary processes with computational neurons meant to introduce non-linearity to the system. These are, for many reasons, not a sufficient replacement, but the most important reason I will discuss next.
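Before that main reason, a quick aside on what "introduce non-linearity to the system" means in practice. A small sketch (NumPy again; the shapes and the ReLU choice are mine): without the non-linear activation, stacked layers collapse into a single linear map, so the activation is essentially the only thing the computational "neuron" adds on top of centuries-old linear algebra.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 3))        # a batch of 4 inputs with 3 features

W1 = rng.standard_normal((3, 5))
W2 = rng.standard_normal((5, 2))

# Two linear layers with no activation collapse into one linear map ...
two_linear = (x @ W1) @ W2
one_linear = x @ (W1 @ W2)
print(np.allclose(two_linear, one_linear))       # True

# ... while a non-linear "neuron" (here ReLU) between them does not.
def relu(z):
    return np.maximum(z, 0.0)

with_activation = relu(x @ W1) @ W2
print(np.allclose(with_activation, one_linear))  # False in general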
The data that we have is information processed by humans. As such, if you can imagine the set of all data, the human is observing this set from the outside. You can think of the human as a function mapping from the set of information to the set of data; a large portion of this function is the aforementioned heuristics. The computer, on the other hand, does not exist outside the set of data; it exists inside the data, because it is human-made. In other words, the computer is examining the data from within and does not have the luxury of processing the information that humans do. This leads to the usual self-referential paradoxes, which force the system to be either complete or consistent, but not both. Because of this, the computer will never be able to attain the levels of cognition that a human has and will always be limited to some percentage of human capability, which is roughly where deep learning systems are today. That is, while deep learning will get fairly close to human cognition, it will forever make catastrophic mistakes like confusing ice cream cones for buses, chihuahuas for muffins, and gorillas for black people.
Why the fuck is this drivel here? Because deep learning is ridiculously hyped up. However, almost all of the gains are not from advances in AI but from advances in computational infrastructure, which is usually omitted from the hype. The gains that are currently touted have stalled and will not go much further than maybe another few percentage points. In a decade, deep learning will be nothing but another black-box method that someone plugs data into and gets some information out of. It is a dead end as far as AI is concerned, just like it was in the 80s. Now, this is my opinion of course, and if people want to learn this shit, they are welcome to it, as long as they are aware that the sources they should be using are academic and not journalist-originated trash.
As to the video, Two Minute Papers is an excellent YouTube channel that people should follow and watch. However, don't be a Tyen and hype up a piece of research that has not been validated or replicated. The only thing to take from that video is that it is an interesting application of something most of you do as a means to earn a living, i.e. using computers to accomplish some task.
Oh, and a PS: IMO, human-level AI is not possible on a von Neumann architecture.
TLDR: Turing/Church/Godel > von Neumann >> deep learning >>>>>>>>> potato >>>>> tyen