Found 6 bookmarks
Introduction to Diffusion Models for Machine Learning
Fundamentally, Diffusion Models work by destroying training data through the successive addition of Gaussian noise, and then learning to recover the data by reversing this noising process. After training, we can use the Diffusion Model to generate data by simply passing randomly sampled noise through the learned denoising process.
·assemblyai.com·
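The two-phase process the excerpt describes (destroy data with Gaussian noise, learn to reverse it) has a well-known closed form for the forward direction. Below is a minimal NumPy sketch of that forward noising step, sampling x_t directly from x_0 under a linear beta schedule; the function name and schedule values are illustrative, not from the linked article.

```python
import numpy as np

def forward_noise(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0): scale the clean data down and
    mix in Gaussian noise according to the cumulative schedule."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # how much signal survives by step t
    noise = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)          # linear noise schedule (assumed)
x0 = np.random.randn(8, 8)                     # stand-in for a training image
xT = forward_noise(x0, 999, betas)             # by the last step, nearly pure noise
```

Generation then runs the learned model in the opposite direction: start from `xT`-like random noise and denoise step by step back toward the data distribution.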
The iPhones 15 Pro (and iPhones 15)
Effectively, this is a next-generation step into computational photography. With quad pixels, you can easily see how the iPhone 14 Pro generated 12MP images from a 48MP sensor. It’s grade school division: 48 ÷ 4 = 12. But there is no 24MP sensor in the 15 Pro main camera. The Photonic Engine takes several images from the sensor for each photo, including a 48MP one for detail, and a 12MP quad-pixel one for low-light and noise reduction, then computationally fuses them together to produce a single 24MP image. Apple is having its cake and eating it too, merging the benefits of a sensor with many small pixels with the benefits of a sensor with fewer large pixels. You don’t need to know that this is happening, you just get more detail in your photos from the main camera.
·daringfireball.net·
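The "grade school division" in the excerpt is quad-pixel (2×2) binning: each block of four small pixels is combined into one larger effective pixel, which is how a 48MP readout becomes 12MP. A minimal sketch of that arithmetic on a toy array (Apple's actual fusion pipeline is proprietary; this only illustrates the binning step):

```python
import numpy as np

def bin_2x2(sensor):
    """Quad-pixel binning: average each 2x2 block into one pixel,
    quartering the pixel count (48 / 4 = 12)."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

full = np.arange(16, dtype=float).reshape(4, 4)  # toy full-resolution readout
binned = bin_2x2(full)                           # half the width, half the height
```

The 24MP output is then a computational fusion of the full-resolution and binned captures, not a native sensor mode.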
The Limits of Computational Photography
How much of that is the actual photo and how much you might consider to be synthesized is a line I think each person draws for themselves. I think it depends on the context; Moon photography makes for a neat demo but it is rarely relevant. A better question is whether these kinds of software enhancements hallucinate errors along the same lines as what happened in Xerox copiers for years.
·pxlnv.com·
Have iPhone Cameras Become Too Smart? | The New Yorker
iPhones are no longer cameras in the traditional sense. Instead, they are devices at the vanguard of “computational photography,” a term that describes imagery formed from digital data and processing as much as from optical information. Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal. Gregory Gentert, a friend who is a fine-art photographer in Brooklyn, told me, “I’ve tried to photograph on the iPhone when light gets bluish around the end of the day, but the iPhone will try to correct that sort of thing.”
·newyorker.com·