HQ, the wildly popular trivia game app, may soon have to contend with a cyber pest that often swarms successful platforms: bots. At least one developer has programmed a bot that has been independently proven to augment, though not necessarily replace, the capabilities of a human player.
The Daily Beast reported on Wednesday that its writers had tested the “HQ Trivia Assistant,” a bot created by Canadian developer Mike Almond, which initially answered 90 percent of questions correctly before they even appeared on most users’ screens. Yet in later rounds, after reporters contacted HQ to ask about the presence of bots in the game, the accuracy rate dropped to 40 percent.
The HQ Trivia Assistant is able to pull each question from the app’s programming interface seconds before the game-show host reads it aloud. The bot then feeds the question text into Google and Yahoo searches and cross-references the preview text of the search results with the words in the answer options. Users then see a percentage likelihood that each potential answer is correct.
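Almond’s actual code isn’t public, but the cross-referencing step can be sketched in a few lines of Python. Here, `snippets` stands in for the preview text of the search results, and the scoring logic, counting how often each answer option appears in the snippets, is an assumption about how such a bot might weigh the options, not the assistant’s real implementation.

```python
def score_options(snippets, options):
    """Score each answer option by how often it appears in search-result
    preview text, returning a rough percentage likelihood per option."""
    counts = {opt: 0 for opt in options}
    for snippet in snippets:
        text = snippet.lower()
        for opt in options:
            counts[opt] += text.count(opt.lower())
    total = sum(counts.values()) or 1  # avoid dividing by zero when nothing matches
    return {opt: round(100 * counts[opt] / total, 1) for opt in counts}

# Hypothetical question: "Which city is home to the Eiffel Tower?"
snippets = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "Paris is the capital and most populous city of France.",
]
print(score_options(snippets, ["Paris", "London", "Berlin"]))
# → {'Paris': 100.0, 'London': 0.0, 'Berlin': 0.0}
```

Simple occurrence counting like this also makes the bot’s reported weaknesses plausible: options that share a word, or that are bare numbers or dates, match snippets indiscriminately and muddy the percentages.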
Of course, this process isn’t foolproof. Almond reportedly has found that if the answer options share the same word, or if they include numbers or dates, the bot has a hard time knowing what to look for. It’s better suited to sanity-checking a hunch than to answering outright. In fact, Almond has never won with help from the trivia assistant, and it appears he built the bot just for kicks. He told the Daily Beast, “I haven’t done this for any other games. This one was just to see how it works. I am not into making money or anything out of it, I like to see how things work and I use it as a teaching moment.”
HQ does offer cash prizes to winners, though players can only withdraw money once they have at least $20 in their accounts, so it’s possible that someone more determined might develop a more thorough bot for monetary gain. Almond suspects that other winners are using similar bots.
Others have used bots to similar effect, though not through the app’s back end as Almond’s does. Polygon reported in December that Toby Mellor, a 19-year-old computer science student at Loughborough University, programmed a bot that takes a screenshot of the question and potential answers, then uses text detection and Google’s search engine to determine which option is most likely correct. Mellor claims it would be extremely difficult for HQ to detect the bot.
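Mellor’s code isn’t reproduced in the report, but the screenshot approach has two halves: optical character recognition to read the question and options off the screen (in Python, a library such as pytesseract can turn a screenshot into text), and parsing that text before searching. A minimal sketch of the parsing half, under the assumption that the OCR output puts the question first, ending in a question mark, with each answer option on its own line:

```python
def parse_screenshot_text(ocr_text):
    """Split OCR'd screenshot text into a question string and a list of
    answer options. Assumes the question ends with '?' and each option
    occupies its own line below it."""
    lines = [line.strip() for line in ocr_text.splitlines() if line.strip()]
    question_lines = []
    options = []
    for line in lines:
        if not question_lines or not question_lines[-1].endswith("?"):
            question_lines.append(line)  # still reading the question
        else:
            options.append(line)  # everything after the '?' is an option
    return " ".join(question_lines), options

# Hypothetical OCR output from a screenshot
sample = """Which planet is known
as the Red Planet?
Mars
Venus
Jupiter"""
print(parse_screenshot_text(sample))
# → ('Which planet is known as the Red Planet?', ['Mars', 'Venus', 'Jupiter'])
```

From there, the question and options could be fed to a scoring step like the search-based one described above; the fragile part in practice is the OCR, which is why this approach works outside the app’s back end but depends on the screenshot being clean.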
HQ, for its part, reserves the right to “disqualify any entries that it believes in good faith are generated by an automated means or scripts. Entries generated by script, macro or other automated means are void,” according to its terms of service, which the company pointed to when the Daily Beast asked for comment.