Expert warns of incorrect flagging amid OpenAI standard review

By News Room

As the artificial intelligence company OpenAI moves to strengthen its policies and reporting system, a professor at the University of British Columbia believes more users will be wrongfully flagged in an effort to identify problematic behaviour.

Computer science professor Kevin Leyton-Brown says we have seen examples of incorrect flagging in the past, when companies tried to automatically report child pornography.

“That has ended up ensnaring parents who took a picture of their child in the bathtub or no-fly lists that have ended up barring innocent people who have had a really hard time getting their names taken off the list,” said Leyton-Brown. “Any kind of system like that is going to have false positives.”

The revamp comes after federal Artificial Intelligence Minister Evan Solomon instructed OpenAI to strengthen its safeguards in the wake of the Tumbler Ridge mass shooting. OpenAI has faced criticism for initially failing to report shooter Jesse Van Rootselaar’s activity on ChatGPT to police in the lead-up to the shooting.

OpenAI was also asked to review previously flagged cases to ensure they are properly reported to the RCMP. Leyton-Brown says if companies want to detect problematic behaviour on their platforms, they need to build a separate system.

“Any system like that is going to be imperfect, and it is going to have some threshold where it decides, ‘this person is roleplaying,’ ‘this person is discussing a fantasy,’ ‘this person looks like they might actually be serious,’” he said.

“When you’re speaking to a psychiatrist or another human being, they are forming an opinion about what you are saying as you are having the conversation. AI systems are not like this. They are just literally having the conversation.”

Leyton-Brown says the conversation around AI regulation is needed, arguing that society has a right to regulate the technology rather than leave it to the discretion of private companies.

“There is nothing in principle that stops a company like OpenAI from monitoring conversations, deciding that certain lines have been crossed…and reacting to it. The question is exactly how this should work, what kinds of expectations of privacy people ought to have, and what the system should do about it?”

Leyton-Brown expects we will have similar conversations about AI regulation in the months ahead and says we will likely see some kind of regulation from the federal government.

B.C. Premier David Eby says OpenAI will work with the province to advocate for a national legislative standard for AI to report problematic interactions with its users.
