Ian Sample delves into the ethics of track-and-trace apps, with Carly Kind and Seeta Peña Gangadharan
Over the last 28 years, Stephen Wolfram has led his partially distributed company from his home in Massachusetts. Over that time he's developed countless productivity tools and collaborative processes that help Wolfram Research create world-changing tech.
David Wood, smartphone pioneer and Chair of London Futurists, speaks on the topic "Predictions, good, bad, and ugly: roadblocks en route to 2025" at the London Futurists Anticipating 2025 event, http://anticipating2025.com/agenda/
The O’Reilly Design Podcast: Designing for the “six minds,” the importance of talking like a human, and the future of predictive AI. In this week’s Design Podcast, I sit down with John Whalen, chief experience officer at 10 Pearls, a digital development company focused on mobile and web apps, enterprise solutions, cyber security, big data, IoT, and cloud and DevOps. We talk about the “six minds” that underlie each human experience, why it’s important for designers to understand brain science, and what people really look for in a voice assistant. Here are some highlights:
Why it’s important for product designers to understand how the brain works
I think that by knowing a little bit more about the brain—what draws your attention, how you hold things in memory, how you make decisions, and how emotions can cloud those decisions…the constellation of all these different pieces helps us make sure we’re thinking like our audience and trying to discover their frame of…literally their frame of mind when they’re picking a product or service and using it.
The “six minds” that underlie each human experience
One is vision and attention. The second is memory and all your preconceived ideas about the ways you think the world works. The third is wayfinding—that’s your ability to move around in space, in this case, move around a virtual world. The fourth is language, the ability to work with different linguistic terms. The fifth is the emotional content associated with all of that. And the sixth, finally: all of that is in service of helping you make decisions and solve problems in your world.
What we look for in a voice-based assistant
We studied how a diverse group of people use Siri, Cortana, Alexa, and Google Assistant, and then we asked, "Well, which one would be your favorite to take home? Which was your personal preference?" A lot of people did pick Google Assistant, which made all kinds of sense because that one did the best at answering questions. But then the second most popular by a wide margin was Alexa from Amazon’s Echo—despite actually being the least successful at answering questions. So, that was intriguing to us and we kind of wondered why.
It turns out that the folks who picked Google Assistant often described what they were looking for from these systems as things like, "I just want the answer fast, just the facts. Give me the answer; I just want to know what’s happening." And some of the people who preferred Alexa said things like, "Well, it answered the question the way I asked it." Or, "I like that I can converse back and forth with it. It makes me feel like I’m speaking to a human." So, there are really humanistic qualities they gravitated to with Alexa.
…We can’t just go out and test our systems to be “percent correct” accurate, we also need to think about this human component. I think that’s the thing I wasn’t necessarily expecting to find from our study. We were curious about this humanistic quality, but we didn’t know how important it was.
How predictive should AI systems be…when does it become creepy?
In our study, we asked questions like, “How much would you like this to know about you?” For example, Amazon knows how often you’ve bought toothpaste, so it could probably predict if you’re running low on toothpaste. It could ask on a random Tuesday, "Gosh, Nikki, would you like some more toothpaste?" And you’re thinking, "How did it know? And where is it looking? And did it have a camera? And who else is in the room?" There are mathematical models that can predict these things quite well.
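The replenishment prediction Whalen describes can be approximated with a very simple baseline: estimate the typical interval between a customer's purchases and project the next one forward. A minimal sketch (the purchase history and the mean-interval approach are invented for illustration; production systems use far richer statistical models):

```python
from datetime import date, timedelta

def predict_next_purchase(purchase_dates):
    """Estimate the next purchase date from a history of past purchases.

    Uses the mean interval between consecutive purchases -- a toy
    stand-in for the predictive models the interview alludes to.
    """
    dates = sorted(purchase_dates)
    if len(dates) < 2:
        return None  # not enough history to estimate an interval
    intervals = [(b - a).days for a, b in zip(dates, dates[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return dates[-1] + timedelta(days=round(mean_interval))

# Hypothetical toothpaste purchase history, roughly every 30 days
history = [date(2018, 1, 3), date(2018, 2, 2), date(2018, 3, 5)]
print(predict_next_purchase(history))  # a date in early April 2018
```

With that estimated date in hand, an assistant could decide whether "today" is close enough to the prediction to warrant the unprompted toothpaste question.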
…There can be all kinds of ways that devices can augment your cognition—and we already do this; we’re already, in some ways, cyborgs, every time we use Google Maps or every time we Google a price to make a decision on choosing something. There are a lot of ways this works, and we are very comfortable with it now. Finding out the weather in advance is actually augmenting what we know, helping us make better decisions.
It can keep doing this; it’s just that we’re not used to it doing it in space and time, and we’re not used to it being as predictive. We’re used to asking it a question and then receiving the answer as opposed to it anticipating that you might need an answer.
Abstract A new wave of creative applications of AI has arrived, making science fiction authors struggle to keep up with reality. Recent advances in Deep Learning, especially generative models, make it possible to generate text, audio, speech, and images. There’s a wonderfully trippy world of neural nets "going wild" out there, with AI-choreographed dance moves, freestyle raps, impressionist paintings, and Trump-impersonating bots. Such "bots" and experiments are but one novel use of this kind of "Creative AI". Taking a more human-centered approach, allowing for control and agency, has the potential to turn these content-generating neural nets into tools for creative use and explorations of human-machine interaction, where the main theme is "augmentation, not automation". The talk will particularly focus on "generative" models, and show the Python fanatic how to make your move with these particular forms of Deep…
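The generative models the abstract refers to are deep networks, but the basic loop they share — learn a distribution over sequences, then sample from it — can be illustrated with a toy character-level Markov chain (the corpus and order below are invented for illustration; a real "Creative AI" system would use an RNN or transformer in place of the count table):

```python
import random
from collections import defaultdict

def train_char_model(text, order=2):
    """Count which character follows each length-`order` context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample one character at a time, conditioning on the last `order` chars."""
    rng = rng or random.Random(0)
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen during training
        out += rng.choice(choices)
    return out

corpus = "the neural net generates text and the net generates art and the net dreams"
model = train_char_model(corpus, order=2)
print(generate(model, "th"))
```

Swapping the count table for a trained neural net changes the quality of the samples, not the shape of the loop — which is why the "augmentation" framing applies equally well to both: the human supplies the seed and curates the output.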
What happens when doing what you want to do means giving up who you really are?
Listen to SciFri: Science Goes To The Movies: ‘Her’, an episode of the Science Friday Audio Podcast.
What does the future look like from the past? This exciting program features three people who could not better represent the intelligentsia of futurism circa 1970. This recording is from a radio program called “Sound on Film”, a series on films and the people who make them. This episode is entitled “2001–Science Fiction or Man’s Future?”, recorded May 7th, 1970. Joseph Gelman is the moderator.
At the time of this recording Arthur C. Clarke had recently collaborated on the movie 2001: A Space Odyssey with Stanley Kubrick. Alvin Toffler’s mega-influential book, Future Shock, is about to be published. And Margaret Mead is the world’s foremost cultural anthropologist.
An intriguing conversation that still has relevance today.
2001–Science Fiction or Man’s Future?
Kathy Sierra talks about expertise and neuroscience. The study of the differences between the world-class performer and the average performer reveals something more important than genetics. Sierra shares several tips on how everyone can improve their performance, and the most important factors in getting really good at something.
The battle over the calculus. Professor Marcus du Sautoy reveals how the great hero of British science is rather less gentlemanly than his German rival. An astronaut and investment analyst pay homage to the enormous power of the calculus.