This commentary is by Andrew K. Gentile, a resident of Sheffield and a self-employed electrical engineer.

Artificial intelligence can be extremely valuable when applied to the right kinds of problems. AI can scan through the medical records of 10 million cancer patients, finding trends that a human could not. It can find genetic markers that indicate a patient’s chances of developing cancer, giving the patient the opportunity to deal with the disease before it becomes symptomatic.

That’s a tremendous benefit to medicine, like being able to see the future. The reason AI can do this so effectively is that it has a mountain of information in the form of years of medical records per patient. 

Of course, AI is also used for less admirable duties, such as targeting advertising. Google has a tremendous amount of information on me. I’m sure it knows many things about me that I don’t even know. It continually scans my emails and my texts. It monitors my web activity, and it even listens to my conversations. 

And from this set of data it picks up keywords that then become the inputs to the AI program assigned to monitor me. The problem that the Google AI is trying to solve is how to select articles or videos that will get my attention. 

I get a lot of trivia articles with titles like, “Only 7% of People Can Answer This ’70s TV Quiz,” or “The Unhappiest Cities in Every State.” 

In a sense, my newsfeed resembles the checkout line in the grocery store, cluttered with mostly useless items placed there to test my restraint. Although I might grab a candy bar while waiting to check out, I do so out of boredom, or impulse. It’s not that I actually wanted the candy. In fact, if candy were sold only in the candy aisle, I might never buy candy.

Similarly, I read articles in my newsfeed because they are there, and they are there because I read them. This circular feedback is Google’s self-fulfilling prophecy. Above all else, convenience matters.

I could argue that Google’s AI doesn’t work because it doesn’t know what I really want to read. Google is mistaking convenience and curiosity for genuine interest. The main feedback Google gets is whether or not I open an article, perhaps along with how long I view it and whether I open it more than once. Beyond that, Google has no clue why I opened the article.

But Google’s AI has not been programmed to find out what my literary preferences are. It has been programmed to maximize my value as a Google product. My attention is what Google is selling, and the more Google can get my attention, the more valuable I am as a product. Google only has to get, but not hold, my attention. Distracting me is both easier and more profitable than understanding me. 

Google makes money by hindering my ability to see the online world. Sadly, this is a win-lose business model. The more trivia I have to sift through, the more likely a title will catch my eye and I will click on it. And of course, I do. Everyone does. 

The AI doesn’t just adapt to me; I adapt to it. Over time, I will probably become more like what Google thinks I am. Perhaps I already have. This is Google’s checkout line, and that’s where the candy is.
