Google’s artificial intelligence (AI) has reportedly flagged parents’ accounts for potential abuse over nude photos of their sick kids.

According to The Verge, citing The New York Times (NYT), a father said that after he used his Android smartphone to take photos of an infection on his toddler’s groin, the tech giant flagged the images as child sexual abuse material (CSAM).

The company closed his accounts, filed a report with the National Center for Missing and Exploited Children (NCMEC), and spurred a police investigation.

The case highlights the complications of distinguishing potential abuse from an innocent photo once it becomes part of a user’s digital library, whether on a personal device or in cloud storage.

The incident occurred in February 2021, when some doctor’s offices were still closed due to the Covid-19 pandemic.

As per the report, Mark (whose last name was not revealed) noticed swelling in his child’s genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor wound up prescribing antibiotics that cured the infection.

Mark received a notification from Google two days after taking the photos, stating that his accounts had been locked due to “harmful content” that was “a severe violation of Google’s policies and might be illegal.”

Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft’s PhotoDNA to scan uploaded images for matches with known CSAM.
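
To illustrate the general idea of hash matching, here is a minimal sketch using a perceptual hash. It is not PhotoDNA, which is proprietary; the library names, threshold, and placeholder hash value are assumptions for illustration only.

```python
# A minimal sketch of hash-based image matching, assuming the Pillow and
# imagehash libraries. This is NOT PhotoDNA; it only illustrates the general
# technique of comparing uploads against a database of known hashes.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of previously flagged images.
# Real systems hold large hash lists supplied by clearinghouses such as NCMEC.
KNOWN_HASHES = {
    imagehash.hex_to_hash("d1c4d1c4d1c4d1c4"),  # placeholder value
}

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Return True if the image's hash is within max_distance of a known hash."""
    candidate = imagehash.phash(Image.open(path))
    # Perceptual hashes survive resizing and re-encoding, so matching is done
    # by Hamming distance rather than exact equality.
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_image("upload.jpg"))
```

The distance threshold matters: exact cryptographic hashes would only catch byte-identical copies, so systems of this kind use perceptual hashes and a tolerance so that resized or re-encoded versions of a known image still match.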

In 2012, this scanning led to the arrest of a man who was a registered sex offender and had used Gmail to send images of a young girl.