Kartik Hosanagar was ready to turn his home into a home of the future. He was looking forward to connecting his thermostat, television, light bulbs and other internet-enabled smart devices and controlling all of them with a phone, tablet or just his voice.
Hosanagar, a technology and digital business professor at the Wharton School at the University of Pennsylvania, had everything hooked up and the new arrangement went fine for several months — until one day his television started turning itself on and off.
It turned out that a friend who had helped Hosanagar set up his smart home still had access to the system and was inadvertently controlling Hosanagar's TV from his own house.
“Sometime during the setup, we switched to his phone and used the TV app to set it up,” Hosanagar said. “So when he was trying to turn on his TV, he would accidentally turn mine on and off.”
It was at that point that the technology professor took a step back and reconsidered whether he even needed a smart home and, on a grander scale, thought about what it meant to have algorithms and artificial intelligence be a part of his day-to-day life.
Hosanagar will be at the Buena Vista Branch Library in Burbank at 7 p.m. Wednesday for a free event to discuss his book, “A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control,” which dives into the benefits and risks of artificial intelligence and the algorithms that power it.
He said algorithms show up in all sorts of applications. They're at work whenever someone talks to Amazon's Alexa or Google Assistant, or when Netflix or Spotify suggests new movies, shows, artists or songs based on a person's viewing or listening habits.
“Look at dating apps like Tinder. Nearly all the matches come from algorithmic recommendations,” Hosanagar said. “If you go to the workplace, recruiters are asked to consult algorithms to figure out which applications to shortlist and who to invite.”
Even doctors use algorithms to help make better decisions on how to treat a specific patient, he added.
While algorithms can be beneficial for consumers and businesses, Hosanagar said the downsides of relying on artificial intelligence are becoming harder to ignore.
Hosanagar said Facebook was known for replacing its human editors who curated trending news feeds with an algorithm because the humans were accused of being politically biased. However, the artificial intelligence that replaced them was not very good at sorting out fake-news stories.
In 2010, the stock market lost nearly a trillion dollars in value in a single day after one trader's orders spoofed the automated trading algorithms used by other traders.
“We’re starting to see that algorithms are capable of not only the failings we see in some human decisions but also a lot of subjectivity we see in humans, like race bias or gender bias,” Hosanagar said.
To ensure that people retain control over these algorithms rather than the other way around, Hosanagar outlines in his book what he calls the Algorithmic Bill of Rights — a set of rules to adhere to when developing programs and artificial intelligence.
One idea would be to require businesses to be transparent about how they use consumers' data to develop their algorithms, or simply to tell people when an algorithm was used to make a decision.
Hosanagar said Google is a good example of this rule after the company introduced its Duplex technology during its I/O keynote in 2018.
Thousands of people watched Google Assistant call a hair salon and schedule an appointment, witnessing an artificial intelligence interact with the person on the other end of the line as if it were a real human caller.
The conversation between the artificial intelligence and the human both amazed and concerned the public, mainly because of the lack of transparency. Google officials have since said that whenever this feature is used, Google Assistant will start the conversation by notifying the person on the other end of the line that they are talking to a program.
“We’re moving toward a certain type of AI or machine learning that’s more and more like a black box, where even the developer can’t tell you the logic behind the program’s decision,” Hosanagar said. “I think that push for transparency will force the developers to think about interpretability of the models they use.”