A Washington state judge has ruled against the admission of “AI-enhanced” video evidence in a triple homicide case, a decision that underscores judicial skepticism toward the belief that AI filters can uncover hidden visual information. King County Judge Leroy McCullough wrote in his recent decision that the technology relies on “opaque methods to represent what the AI model ‘thinks’ should be shown,” as reported by NBC News on Tuesday.
“This Court finds that admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable-process used by the AI model,” McCullough stated.
AI-enhanced video can’t be used as evidence, according to the court
Essentially, an AI-enhanced photo is an image that has been modified, improved, or altered using artificial intelligence. These enhancements range from increasing resolution, restoring damaged or old photographs, colorizing black-and-white images, and removing noise or unwanted objects, to altering facial expressions and lighting. AI algorithms analyze the content of the photo and apply changes to achieve a desired effect, often producing a more polished, higher-quality, or more visually appealing result than the original.
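To see why this kind of enhancement is synthesis rather than recovery, consider learned super-resolution, one common form of AI upscaling. The sketch below is illustrative only: it assumes the opencv-contrib-python package and a pre-trained EDSR model file (a widely available community checkpoint), and is not the software used in this case.

```python
# Minimal sketch of learned super-resolution ("AI enhancement"),
# assuming opencv-contrib-python and a pre-trained EDSR_x4.pb model
# file (an illustrative checkpoint; not the tool from the case).
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")       # weights learned from other images
sr.setModel("edsr", 4)           # 4x upscaling

low_res = cv2.imread("frame.png")
high_res = sr.upsample(low_res)  # new pixels are predicted, not recovered:
                                 # the network fills in detail it learned
                                 # during training, not detail that was
                                 # ever present in the original frame
cv2.imwrite("frame_enhanced.png", high_res)
```

The output looks sharper, but every added pixel is a statistical guess, which is precisely why a court may treat it as new content rather than clarified evidence.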
The legal proceedings concern Joshua Puloka, 46, who is accused of fatally shooting three people and wounding two others at a bar near Seattle in 2021. His defense sought to introduce a bystander’s cellphone video that had been artificially enhanced, though what the enhanced footage was expected to show was never specified. Puloka’s legal team enlisted an editor with no prior experience in criminal-case video enhancement, who used AI software from Texas-based Topaz Labs to sharpen the clip. The episode highlights a common misconception about AI’s ability to clarify visual data: in practice, these tools often add details that were never in the original image rather than recovering ones that were.
The surge in products branded as “AI” has left the public confused about what the technology can actually do. Even sophisticated users mistake the human-like output of large language models such as ChatGPT for complex thought. In reality, these models are chiefly predicting the next word in a sequence to mimic human conversation, not engaging in deep reasoning.
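A minimal sketch of that next-word mechanism, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both illustrative choices, not anything cited in the case):

```python
# Minimal sketch of next-token prediction, assuming the Hugging Face
# transformers library and the small "gpt2" checkpoint (illustrative
# choices; any causal language model works the same way).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tok("The judge ruled that the", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # a score for every possible next token
next_id = int(torch.argmax(logits))    # greedily pick the most likely one
print(tok.decode(next_id))             # the model only ever answers
                                       # "what word comes next?"
```

Chaining this single step thousands of times produces fluent text, but each step is a statistical prediction over a vocabulary, not a reasoning process.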
Despite heavy investment in AI, many people overestimate its sophistication, attributing its errors to bias or overly strict controls rather than to the technology’s inherent limitations. Recognizing those limits, the Washington judge ruled against admitting the AI-enhanced evidence, noting that it could not genuinely improve understanding of the original footage. The decision cuts against a growing trend of accepting AI-generated content in legal contexts, and it underscores the need for a clearer grasp of what AI can and cannot do.
Featured image credit: Saúl Bucio/Unsplash