alexp / tags / yann lecun

Tagged with “yann lecun” (2)

  1. The Future of Machine Learning from the Inside Out — Talking Machines

    We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). They talk with us about the history (and future) of research on neural nets. We explore how to use Determinantal Point Processes. Alex Kulesza and Ben Taskar (who passed away recently) have done some really exciting work in this area; for more on DPPs, check out their paper on the topic. Also, we take a listener question about machine learning and function approximation (spoiler alert: it is, and then again, it isn’t).

    http://www.thetalkingmachines.com/blog/2015/3/13/how-machine-learning-got-where-it-is-and-the-future-of-the-field

    —Huffduffed by alexp

  2. The History of Machine Learning from the Inside Out — Talking Machines

    In episode five of Talking Machines, we hear the first part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). Ryan introduces us to the ideas in tensor factorization methods for learning latent variable models (which is both a tongue twister and one of the new tools in ML). To find out more on the topic, the paper Tensor decompositions for learning latent variable models is a good place to start. You can also take a look at the work of Daniel Hsu, Animashree Anandkumar and Sham M. Kakade. Plus, we take a listener question about just where statistics stops and machine learning begins.

    http://www.thetalkingmachines.com/blog/2015/2/26/the-history-of-machine-learning-from-the-inside-out

    —Huffduffed by alexp
