Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
a16z Podcast: Making Sense of Big Data, Machine Learning, and Deep Learning (by a16z, on SoundCloud)
Published on 2015-05-01 21:50 UTC
"Machine learning is to big data as human learning is to life experience," says Christopher Nguyen, the co-founder and CEO of big data intelligence company Adatao. Sure, but then, what IS big data? (especially as it’s become a buzzword that captures so many things)…
On this episode of the a16z Podcast, Nguyen puts on his former computer science professor hat to describe 'big data' in relation to 'machine learning', as well as what comes next with 'deep learning'. The former Google exec also shares how Hadoop and Spark evolved from the efforts of companies dealing with massive amounts of real-time information; what we need to make machine learning a property of every application (and why we would even want to); and how we can make all this intelligence accessible to everyone.
Combining the trademark deep reporting of The Atlantic with the expertise of Fidelity Investments, "The Future According To Now" podcast explores the technological innovations that hold true potential to impact our lives in the near future.
Elon Musk discusses his new project digging tunnels under LA, the latest from Tesla and SpaceX, and his motivation for building a future on Mars in conversation with TED's Head Curator, Chris Anderson.
Rodney Brooks, emeritus professor of robotics at MIT, talks with EconTalk host Russ Roberts about the future of robots and artificial intelligence. Brooks argues that we both under-appreciate and over-appreciate the impact of innovation. He applies this insight to the current state of driverless cars and other changes people expect to transform our daily lives.
When a crash is inevitable and a human is at the wheel, the driver makes a split-second decision. In a car controlled by algorithms, this choice is predetermined by a programmer. Should your car hit a pedestrian to save your life? Should your car sacrifice your life to save the lives of others? Self-driving cars will become a reality in the near future, but what of the moral quandaries involved?
In the face of artificial intelligence and machine learning, we need a new radical humanism, says Tim Leberecht. For the self-described "business romantic," this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four (admittedly subjective) principles for building beautiful organizations.
The artificial intelligence marketplace is primed to surge from $8 billion this year to $47 billion by 2020. AI’s prime contribution will be to work through the enormous amount of data that modern technology is producing, make sense of the big data by discovering trends and patterns, and suggest ways forward. Already, algorithms are conducting research for law firms and writing stories for newspapers, among other tasks. As AI continues to evolve, it will continue to transform the way businesses function and how people live their lives, augmenting—not replacing—human intelligence and expertise.
Will humans have moral obligations to robots? Are we guilty of 'origin chauvinism' if we believe only natural phenomena can exhibit consciousness? And is this line of thinking a sinister cousin to racism? What is the definition of moral agency with regard to driverless cars and lethal autonomous weapons? How might artificial intelligence mitigate human suffering?
In our first meeting of the Longform Society, we’ll read a selection of compelling, entertaining and occasionally terrifying pieces on the subject of robots and artificial intelligence.
Stream the talk here – and then unpack the ideas in depth at http://wheelercentre.com/events/meeting-1-robots
What if the cybernetic matrix calculates that humans are the problem? Man, machine, or something in between… To whom - or what - does the future belong?