A few years ago, Alan Feldman wandered onto the exhibition floor at ICE London, a major event in the gambling industry.
Feldman had spent the past 30 years as an executive with MGM Resorts International, focusing on problem gambling and its financial, personal and professional repercussions. Before his departure from the company, he helped build a nationwide responsible-gambling program that focused on helping players shift their behaviors to reduce the risk of becoming problem gamblers.
While on the floor at ICE, he noticed a few companies promoting new products that would use artificial intelligence to not only identify problem gambling, but predict it. Feldman was immediately skeptical. AI, he thought, might do a lot of things, but he had never heard of a use that predicted a state of mind.
AI as a solution to problem gambling “raised far more questions than it did answers,” said Feldman, now a distinguished fellow in responsible gambling at the International Gaming Institute at the University of Nevada, Las Vegas. “It was slick. It was interesting. It was compelling intellectually. But whether or not it was really going to do anything, I thought, very much remained at question.”
Another question, this one obvious to any observer: Isn’t a problem gambler exactly what a casino wants financially? In short: no. Even putting aside regulatory issues — gambling operators can be fined or lose their licenses if they fail to monitor problem gambling and act when necessary — it is, counterintuitively, not in their best financial interest.
“Casinos need to have customers in order to sustain themselves,” Feldman said. “And the only way to have customers is to have customers who themselves are healthy and thriving and able to pay their bills and come back the next time.” Problem gamblers “always end the same way,” he added. “The end of the road is the exact same with all of them: They have no money.”
In a more general way, the pairing of AI and gambling makes perfect sense: unlimited and constant data, decision-making, computerized systems. With the explosion of online gaming, the opportunities to harness this combination for a public good seem endless. The reality — interpreting human behavior, navigating privacy laws, addressing regulatory issues — is much more complicated.
At the same time that Feldman was questioning those potential solutions, Danish researchers were trying to solve the same problems. Mindway AI, a company that grew out of Aarhus University, does exactly what Feldman was skeptical of: It predicts future problem gambling. Built using research at Aarhus by its founder, Kim Mouridsen, the company uses psychologists to train AI algorithms in identifying behaviors associated with problem gambling.
One significant challenge is that there is no single indicator of whether someone is a problem gambler, said Rasmus Kjærgaard, Mindway’s CEO. And at most casinos, human detection of problem gambling focuses on just a few factors — mostly money spent and time played. Mindway’s system takes into account 14 different risk factors. Those include money and time but also canceled bank withdrawals, shifts in the time of day the player is playing and erratic changes of wagers. Each factor is given a score from 1 to 100, and the AI then builds out a risk assessment of each player, improving itself with each hand of poker or spin of the roulette wheel. Players are scored from green (you’re doing fine) to blood red (immediately step away from the game).
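The scoring described above can be sketched in a few lines of code. To be clear, this is an illustrative toy, not Mindway's actual model: the factor names, the equal weighting, and the color thresholds here are all assumptions made for demonstration.

```python
# Illustrative sketch of a multi-factor gambling-risk score.
# Factor names, equal weighting, and color cutoffs are assumptions
# for demonstration only; they are not Mindway's actual model.

def risk_color(factor_scores: dict) -> str:
    """Map per-factor scores (1-100 each) to a traffic-light label.

    factor_scores might hold entries like "money_spent",
    "time_played", or "canceled_withdrawals" (hypothetical names).
    """
    avg = sum(factor_scores.values()) / len(factor_scores)
    if avg < 40:
        return "green"
    elif avg < 70:
        return "yellow"
    return "red"
```

A real system would weight the 14 factors unevenly and update the model as play continues; this sketch only shows the basic shape of turning many partial signals into one label.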
In order to tailor the algorithm to a new casino or online operator, Mindway hands the data to a group of experts and psychologists trained in identifying such behavior. (The company said they were independent, paid consultants.) They assess each client’s customers, and those expert judgments serve as a sort of baseline; the algorithm then replicates their diagnoses across the full customer database.
“As soon as a player profile or player behavior goes from green to yellow and to the other steps as well, we are able to do something about it,” Kjærgaard said. The value in the program isn’t necessarily just identifying those blood-red problem gamblers; by monitoring the jumps along Mindway’s color spectrum, it predicts and catches players as their play devolves. Currently, he said, casinos and online operators focus their attention on the blood-red gamblers; with Mindway, they can identify the players before they ever reach that point.
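The escalation monitoring Kjærgaard describes amounts to watching for upward jumps along the color spectrum rather than waiting for a player to hit blood red. A toy version, with a hypothetical ordering of labels, might look like this:

```python
# Toy escalation monitor: flag a player whenever their risk label
# moves up the spectrum, not only once it reaches the highest level.
# The labels and their ordering are illustrative assumptions.

SPECTRUM = ["green", "yellow", "orange", "red", "blood_red"]

def escalations(history: list) -> list:
    """Return (previous, current) pairs where risk increased."""
    jumps = []
    for prev, curr in zip(history, history[1:]):
        if SPECTRUM.index(curr) > SPECTRUM.index(prev):
            jumps.append((prev, curr))
    return jumps
```

The point of the sketch is the design choice the article describes: intervention is triggered by the direction of change, so a player drifting from green to yellow gets attention long before they become a blood-red case.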
The trickiest step, though, according to Brett Abarbanel, director of research at UNLV’s International Gaming Institute, is taking that data and explaining it to a player.
“If my algorithm flags someone, and it thinks that they’re a problem gambler, I’m not going to send them a note and say, ‘Hey, great news: My algorithm has identified you as potentially a problem gambler. You should stop gambling right away!’” The response would be obvious, Abarbanel said, deploying a middle finger: “That’s what will happen.”
How to actually communicate that information — and what to tell the gambler — is an ongoing debate. Some online gaming companies use pop-up messaging; others use texts or emails. Kjærgaard hopes that clients take his data and, depending on the level of risk, reach out to the player directly by phone; the specificity of the data, he said, helps personalize such calls.
Since starting in 2018, Mindway has contracted its services to seven Danish operators, two each in Germany and the Netherlands, one global operator and a U.S. sports-gambling operator, Kjærgaard said. Online gambling giants Flutter Entertainment and Entain have both partnered with Mindway as well, according to the companies’ annual reports.
Since this technology is so new and there’s no regulatory body setting a standard, Mindway and similar companies are, for now, essentially on their own. “We wanted to be able to say to you, to anybody else — operators, obviously — that not only do we provide this scientific-based software, but we also want to have a third party to test the validation of what we do,” Kjærgaard said. “But it is a paradox that there’s no specific requirements which I can ask my team to fulfill.”
Currently, Mindway’s technology lives mostly in online gambling. Operators attach Mindway’s GameScanner system to their portal, and it analyzes not only individual risks but also total risks for the system. Applying that level of oversight to in-person gambling is much more difficult.
One model of what success could look like can be found in Macao. Casino operators there use hidden cameras and facial recognition technology to track gamblers’ betting behavior, as well as poker chips enabled with radio frequency identification technology and sensors on baccarat tables. This data then heads to a central database where a player’s performance is tracked and monitored for interplayer collusion.
This, Kjærgaard said, is the future: The financial incentives will drive success. “Smart tables” and efforts to address money laundering and financial regulations may eventually provide the data that will supercharge the application of AI to in-person gambling.
(It also highlights another difficulty in applying AI to gambling: cultural differences. In Chinese casinos, Abarbanel said, the players are used to this level of monitoring; not so in the United States.)
AI would certainly work for casinos when it came to marketing, promotions and game suggestions, Feldman said, but despite progress in recent years, he remains skeptical of its use to help problem gamblers. Such a tool may be better applied personally than broadly, he believes, much like the “Your spending is 25% higher than last month” reminders that pop up in online banking accounts.
“It’s sort of like drinking. Is there anyone you know who hasn’t gotten drunk once in their life? Doesn’t mean they’re an alcoholic,” he said. “But maybe that one drink a night that’s kind of become one and a half, sometimes two, sometimes three — maybe you want to bring that in a little bit. But you don’t want to have the bar tracking every record here, right?”