Seems very unlikely to me at the moment. Everything that defines generative AI today is based on recognizing patterns in 2D images.
That’s true, but those patterns don’t necessarily have to come from the same image. There is a strong pattern that a car which looks like a red Ferrari from the left will also look like a red Ferrari from the right. You can “look up” what that should look like if you have enough knowledge about the world.
Software like Stable Diffusion can generate a red Ferrari from a text description alone, without ever being shown an example. And if that works, it should work even better when you can supply examples: the left side of the car as the viewer sees it, frames from earlier in the film (or later ones, if the computer can look ahead), photos from the internet, or (in the future) perhaps a manufacturer’s database with product information.
It’s not just a theory, it already exists; it’s just not fast or good enough for home use yet. At least, that’s what I thought: according to @StGermain it was already available on LG TVs ten years ago. I didn’t know that, but I did see scientific demos back then (more than ten years ago), so I believe it. Those didn’t go as far as what I described above, but it’s amazing how far you can get when you supplement images with relatively simple patterns and some general knowledge of how the world works. Much of the world is extremely predictable.
Perhaps an even better example is our own eyes. We think we see the world around us sharply, but in reality only a very small part is sharp, really just a few centimeters. The rest is photoshopped/stabilized by our brain. Most of our visual field is not seen clearly; we mainly use it to detect sudden movement. Only when something moves do we “really” look, whether it’s grass blowing in the wind or a tiger creeping closer.
But you never experience this yourself; your brain hides it from you. The moment you try to catch it, your brain quickly glances at the missing piece and masks the waiting time, so it feels as if you see everything sharply and all at once.
The convenient thing about our brain is that it does this even with poor input. The 3D TV’s AI doesn’t have to get everything right: if the shapes are roughly correct, that’s good enough. We don’t see any more detail anyway; our brain fills in the rest. This falls apart if you freeze the frame and study the details at leisure, but you’re not supposed to do that anyway.
Samsung unveils 37-inch monitor that displays content glasses-free (Update) – Computer News