A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • @ricecake@sh.itjust.works
    9 months ago

    Computational photography in general gets tricky because it relies on your answer to the question “Is a photograph supposed to reflect reality, or should it reflect human perception?”

    We like to think those are the same, but they’re not. Your brain only has a loose interest in reality and is much more focused on utility: deleting the irrelevant, making important things literally bigger, enhancing contrast and color to make details stand out more.
    You “see” a reconstruction of reality continuously updated by your eyes, which work fundamentally differently than a camera does.

    Applying different exposure settings to different parts of an image, or reconstructing a video scene based on optical data captured over the entire video, doesn’t capture what the sensor captured, but it can come much closer to representing what the human holding the camera perceived.
    Low light photography is a great illustration of this. We see a person walk from light to dark, and our brains will shamelessly remember what color their shirt was and that grass is green, and update our perception, as well as using a much longer “exposure” time to capture more light data to maintain color perception in low light conditions, even though we might not have enough actual light to make those determinations without clues.
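    The longer-“exposure” trick described above is roughly what phones do in night modes: stack many noisy short exposures and average them. A toy sketch in numpy (all numbers hypothetical), showing that the stacked result is much closer to the true scene than any single frame:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dim scene: a flat patch with true brightness 20.
scene = np.full((8, 8), 20.0)

# Each short exposure is the scene plus heavy sensor noise.
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(32)]

single = frames[0]                 # one noisy capture
stacked = np.mean(frames, axis=0)  # simulated longer "exposure"

# Averaging N frames shrinks the noise by roughly sqrt(N).
err_single = np.abs(single - scene).mean()
err_stacked = np.abs(stacked - scene).mean()
```

    Note that this only combines light the sensor actually captured, which is why it sits at the “closer to perception, still grounded in data” end of computational photography.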

    I think most people want a snapshot of what they perceived at the moment.
    I like the trend of the camera capturing the enhanced image and also storing the “plain” image. There’s also capturing the raw image data, which is basically a dump of the camera’s optical sensor data. That’s what the automatic post-processing is tweaking, and what human photographers use to correct white balance and the like.
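    One of those corrections can be sketched in a few lines. A toy gray-world white balance in numpy (hypothetical pixel values), the kind of adjustment applied on top of raw sensor data:

```python
import numpy as np

# Hypothetical "raw" RGB pixels with a strong blue cast.
raw = np.array([[ 80.0,  90.0, 150.0],
                [ 60.0,  70.0, 120.0],
                [100.0, 110.0, 170.0]])

# Gray-world assumption: the scene should average out to neutral
# gray, so scale each channel toward the overall mean brightness.
channel_means = raw.mean(axis=0)
gains = channel_means.mean() / channel_means
balanced = raw * gains  # blue cast removed, channels now balanced
```

    Automatic pipelines do something far more elaborate, but the principle is the same: the raw data stays intact and the “correction” is just a reversible reinterpretation of it.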

    • Natanael
      9 months ago

      There’s different types of computational photography, the ones which ensures to capture enough sensor data to then interpolate in a way which accurately simulates a different camera/lighting setup are in a way “more realistic” than the ones which heavily really on complex algorithms to do stuff like deblurring. My point is essentially that the calculations done has to be founded in physics rather than in just trying to produce something artistic.

    • @TheBest@midwest.social
      9 months ago

      Great points! Thanks for expanding. I agree with your point that people most often want a recreation of what was perceived. It’s going to make this whole AI-enhanced evidence question even more nuanced when the tech improves.

      • @ricecake@sh.itjust.works
        9 months ago

        I think the “best” possible outcome is that AI images are essentially treated as witness data, as opposed to direct evidence. (“Best” is meant in terms of how we treat AI-enhanced images, not justice outcomes. I don’t think we should use them for such things until they’re significantly better developed, if ever.)

        Because at that point the image is essentially a neural network’s interpretation of what it captured, which is functionally similar to a human testifying to what they believe they saw in an image.

        I think it could have a use if presented in conjunction with the original or raw image, and if the network can explain what drove its interpretation, which is a tricky thing for a lot of neural network based systems.
        That brings it much closer to how doctors are using them for imaging analysis. It doesn’t supplant the original, but points to a part of it with an interpretation, and a synopsis of why it thinks that blob is a tumor/gun.