From Google search to Facebook news, algorithms shape our online experience. But like us, algorithms are flawed. Programmers write cultural biases into code, whether they realize it or not. Author Luke Dormehl explores the impact of algorithms on and offline. Staci Burns and James Bridle investigate the human cost when YouTube recommendations are abused. Anthropologist Nick Seaver talks about the danger of automating the status quo. Safiya Noble looks at preventing racial bias from seeping into code. And Allegheny County’s Department of Children and Family Services shows us how a well-built algorithm can help save lives.
Tagged with “algorithm” (17)
Our site: https://designersandgeeks.com

Finding an apartment (or a partner), deciding whether to eat at our favorite restaurant or try something new, managing our messy desks, and scheduling our time: we think of these as uniquely human problems. They’re not. Deep, fundamental parallels exist between these dilemmas and some of the canonical problems in computer science—which gives us an opportunity to learn something about how to make better decisions in our own lives.
Brian Christian is the coauthor, with Tom Griffiths, of Algorithms to Live By, a #1 bestseller, and the author of The Most Human Human, a New York Times Editors’ Choice, Wall Street Journal bestseller, and New Yorker favorite book of the year. Christian’s writing has appeared in The New Yorker, The Atlantic, Wired, The Wall Street Journal, The Paris Review, and in scientific journals such as Cognitive Science, and has been translated into eleven languages. He has appeared on The Daily Show with Jon Stewart, The Charlie Rose Show, and Radiolab, and has lectured at Google, Facebook, Microsoft, the Santa Fe Institute, and the London School of Economics. He lives in San Francisco.
DESIGNERS + GEEKS EVENTS We host monthly events like this in San Francisco, New York, and Boston. Sign up for our newsletter to be notified when …
We’re building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren’t even the real threat. What we need to understand is how the powerful might use AI to control us — and what we can do in response.
Computer algorithms now shape our world in profound and mostly invisible ways. They predict if we’ll be valuable customers and whether we’re likely to repay a loan. They filter what we see on social media, sort through resumes, and evaluate job performance. They inform prison sentences and monitor our health. Most of these algorithms have been created with good intentions. The goal is to replace subjective judgments with objective measurements. But it doesn’t always work out like that.
Our increasingly smart machines aren’t just changing the workforce; they’re changing us. Already, algorithms are directing human activity in all sorts of ways, from choosing what news people see to highlighting new gigs for workers in the gig economy. What will human life look like as machine learning overtakes more aspects of our society?
Alexis Madrigal, who covers technology for The Atlantic, shares what he’s learned from his reporting on the past, present, and future of automation with our Radio Atlantic co-hosts, Jeffrey Goldberg (editor in chief), Alex Wagner (contributing editor and CBS anchor), and Matt Thompson (executive editor).
Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
"The actual path of a raindrop as it goes down the valley is unpredictable, but the general direction is inevitable," says digital visionary Kevin Kelly — and technology is much the same, driven by patterns that are surprising but inevitable. Over the next 20 years, he says, our penchant for making things smarter and smarter will have a profound impact on nearly everything we do. Kelly explores three trends in AI we need to understand in order to embrace it and steer its development. "The most popular AI product 20 years from now that everyone uses has not been invented yet," Kelly says. "That means that you’re not late."
In this special episode, Janie speaks about her background as a non-traditional student and programmer and about why she feels that asking about algorithms in job interviews is discriminatory.
We take another look at algorithms. Tim Hwang explains how Uber’s algorithms generate phantom cars and marketplace mirages. And we revisit our conversation with Christian Sandvig, who last year asked Facebook users to explain how they imagine the EdgeRank algorithm works (the algorithm that powers Facebook’s news feed). Sandvig discovered that most of his subjects had no idea there was an algorithm at work at all. Plus, James Essinger and Suw Charman-Anderson tell us about Ada Lovelace, the woman who wrote the first computer program (or, as James puts it, algorithm) in 1843.
Solving hard decisions
Deciding when to stop your quest for the ideal apartment, or ideal spouse, depends entirely on how long you expect to be looking, says Brian Christian.
The first one you check will be the best you’ve seen, but it’s unlikely to be the best you’ll ever see.
So you keep looking and keep finding new bests, though ever less frequently, and you start to wonder if maybe you refused the very best you’ll ever find.
And the search is wearing you down.
When should you take the leap and look no further?
The answer from computer science is precise: 37% of the way through your search period.
If you’re spending a month looking for an apartment, you should spend the first 11 days just looking, calibrating your sense of the market (and being sorely tempted), and then commit to the next place that beats everything you’ve seen.
Likewise with the search for a mate.
If you’re looking from, say, age 18 to 40, the time to shift from browsing and having fun to getting serious and proposing is at age 26.1.
(However, if you’re getting lots of refusals, “propose early and often” from age 23.5.
Or, if you can always go back to an earlier prospect, you could carry on exploring to age 34.4.)
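The 37% rule is easy to check empirically. Below is a minimal simulation sketch (not from the talk): options are ranked at random, the first 37% are observed without committing, and then the first option that beats everything seen so far is taken. The function names are my own illustration.

```python
import random

def optimal_stop_trial(n_options, look_fraction=0.37):
    """One trial of the 37% rule: shuffle n ranked options, observe the
    first look_fraction of them without committing, then take the first
    option that beats everything seen so far."""
    options = list(range(n_options))  # 0 = worst, n_options - 1 = best
    random.shuffle(options)
    cutoff = int(n_options * look_fraction)
    best_seen = max(options[:cutoff], default=-1)
    for candidate in options[cutoff:]:
        if candidate > best_seen:
            return candidate == n_options - 1  # did we land the very best?
    return options[-1] == n_options - 1  # forced to settle for the last one

def success_rate(n_options=100, trials=20000):
    wins = sum(optimal_stop_trial(n_options) for _ in range(trials))
    return wins / trials

print(f"P(best) with the 37% rule: {success_rate():.2f}")
```

Run it and the success probability comes out near 37%—the same number as the cutoff, which is the elegant punchline of the theory (both are 1/e).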
This “Optimal Stopping” is one of twelve subjects examined in Christian’s (and co-author Tom Griffiths’) book, Algorithms to Live By.
(The other subjects are: Explore/Exploit; Sorting; Caching; Scheduling; Bayes’ Rule; Overfitting; Relaxation; Randomness; Networking; Game Theory; and Computational Kindness.
An instance of Bayes’ Rule, called the Copernican Principle, lets you predict how long something of unknown lifespan will last into the future by assuming you’re looking at the middle of its duration—hence the USA, now 241 years old, might be expected to last through 2257.)
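The arithmetic behind that estimate is just doubling. A one-function sketch (my illustration, not from the book) makes the midpoint assumption explicit:

```python
def copernican_estimate(age_so_far):
    """Copernican Principle: absent other information, assume you are
    observing at the midpoint of a phenomenon's lifespan, so its expected
    total lifespan is double its age so far."""
    return 2 * age_so_far

# A 241-year-old institution is expected to reach about 482 in total,
# i.e. to have roughly 241 more years ahead of it.
print(copernican_estimate(241))
```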
Christian went into detail on the Explore/Exploit problem.
Optimism minimizes regret.
You’ve found some restaurants you really like.
How often should you exploit that knowledge for a guaranteed good meal, and how often should you optimistically take a chance and explore new places to eat?
The answer, again, depends partly on the interval of time involved.
When you’re new in town, explore like mad.
If you’re about to leave a city, stick with the known favorites.
Infants with 80 years ahead are pure exploration: they try tasting everything.
Old people, drawing on 70 years of experience, have every reason to pare the friends they want to spend time with down to a favored few.
The joy of the young is discovering.
The joy of the old is relishing.
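The interval-dependent logic above can be sketched as a simple horizon-aware rule. This is my own toy illustration of the idea (not an algorithm from the book): explore with probability proportional to the fraction of your stay still remaining, otherwise exploit your best-known option. The restaurant names and ratings are invented.

```python
import random

def choose_restaurant(known_ratings, days_left, total_days, rng=random):
    """Horizon-aware explore/exploit: the more of your stay remains,
    the more likely you are to try somewhere new; near the end, you
    exploit the best-rated place you already know."""
    explore_prob = days_left / total_days
    if not known_ratings or rng.random() < explore_prob:
        return "explore"  # try somewhere new
    return max(known_ratings, key=known_ratings.get)  # best known favorite

random.seed(0)
ratings = {"taqueria": 4.5, "noodle bar": 3.8}
# Early in a 30-day stay, you almost always explore...
print(choose_restaurant(ratings, days_left=29, total_days=30))
# ...and on your last night, you almost surely exploit.
print(choose_restaurant(ratings, days_left=1, total_days=30))
```

The same rule captures the infant and the elder: with the whole horizon ahead, the explore probability is near 1; with almost none left, it collapses to pure exploitation of known favorites.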