“The problem is that literally anybody can watch these videos—kids, adults, it doesn’t matter,” she says. Matt first saw a fractal wood burning video shared by a friend on Facebook and was so intrigued that “he started watching YouTube videos on it—and they’re endless.”
Matt was electrocuted when a piece of the casing around the jumper cables he was using came loose and his palm touched metal. “I truly believe if my husband had been fully aware [of the dangers], he wouldn’t have been doing it,” Schmidt says. Her plea is simple: “When you’re dealing with something that has the capability of killing somebody, there should always be a warning … YouTube needs to do a better job, and I know that they can, because they censor all types of people.”
After Matt’s death, medical professionals from the University of Wisconsin wrote a paper titled “Shocked Through the Heart and YouTube Is to Blame.” Citing Matt’s death and four fractal wood burning injuries they’d personally treated, they asked that “a warning label be inserted before users can access video content” on the crafting technique. “While it is not possible, or even desirable, to flag every video depicting a potentially risky activity,” they wrote, “it seems practical to apply a warning label to videos that could lead to instantaneous death when imitated.”
Matt and Caitlin Schmidt had been best friends since they were 12 years old. He leaves behind three children. Schmidt says that her family has suffered “pain, loss and devastation” and will carry lifelong grief. “We are now the cautionary tale,” she says, “and I wish on everything in my life that we weren’t.”
YouTube told MIT Technology Review its community guidelines prohibit content that’s intended to encourage dangerous activities or has an inherent risk of physical harm. Warnings and age restrictions are applied to graphic videos, and a combination of technology and human staff enforces the company’s guidelines. Dangerous videos banned by YouTube include challenges that pose an imminent risk of injury, pranks that cause emotional distress, drug use, the glorification of violent tragedies, and instructions on how to kill or harm. However, videos can depict dangerous acts if they contain sufficient educational, documentary, scientific, or artistic context.
YouTube first introduced a ban on dangerous challenges and pranks in January 2019—a day after a blindfolded teenager crashed a car while participating in the so-called “Bird Box challenge.”
YouTube removed “a number” of fractal wood burning videos and age-restricted others when approached by MIT Technology Review. But the company did not say why it moderates against pranks and challenges but not hacks.
It would certainly be challenging to do so: each 5-Minute Crafts video contains numerous crafts, one after the other, many of which are simply bizarre but not harmful. And hack videos carry an ambiguity that challenge videos do not, one that can be difficult for human moderators to judge, let alone AI. In September 2020, YouTube reinstated human moderators who had been “put offline” during the pandemic after determining that its AI had been overzealous, doubling the number of incorrect takedowns between April and June.