[HN Gopher] The Center of the Pixel is (0.5, 0.5)
___________________________________________________________________
 
The Center of the Pixel is (0.5, 0.5)
 
Author : ingve
Score  : 49 points
Date   : 2020-06-11 07:23 UTC (15 hours ago)
 
(HTM) web link (www.realtimerendering.com)
(TXT) w3m dump (www.realtimerendering.com)
 
| qubex wrote:
| Agree.
|
| Hard to disagree, really.
 
| draw_down wrote:
| Sure, and the center of a foursquare of pixels is (1, 1).
 
| alleycat5000 wrote:
| This comes up all the time in dealing with geospatial rasters.
|
| For instance, in GDAL there's a whole RFC for dealing with
| issues related to pixel corners versus pixel centers!
|
| https://gdal.org/development/rfc/rfc33_gtiff_pixelispoint.ht...
|
| "Traditionally GDAL has treated this flag as having no relevance
| to the georeferencing of the image despite disputes from a
| variety of other software developers and data producers. This
| was based on the author's interpretation of something said once
| by the GeoTIFF author. However, a recent review of section
| 2.5.2.2 of the GeoTIFF specification has made it clear that GDAL
| behavior is incorrect and that PixelIsPoint georeferencing needs
| to be offset by half a pixel when transformed to the GDAL
| georeferencing model."
 
| LeoPanthera wrote:
| Related: "A pixel is not a little square".
|
| http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
 
| pornel wrote:
| However, an image is not a wave either.
|
| I have mixed feelings about this memo. It's right about the
| practical aspects of resampling filters, but it tries too hard
| to justify them with sampling theory. For example, pixel-aligned
| sharp edges exist and are meaningful in images, unlike perfectly
| square waves in sampling theory.
 
| sudosysgen wrote:
| Pixel-aligned sharp edges do not actually exist and are not
| meaningful in images, because a perfectly sharp lens does not
| exist (and cannot exist), and as a result you can never form a
| sharp edge in an image.
| You also have de-focus that prevents you from doing so, and a
| lens that has a wider depth of field has an _immediately_
| noticeable limit in sharpness.
|
| Even if you were somehow able to create a perfect lens, you
| would not be able to create a perfectly sharp edge with
| real-world objects.
 
| wtallis wrote:
| See, here you're committing the same error that the paper does:
| pretending pixels are all about photography and optics, and
| ignoring that some computer-generated graphics actually are
| supposed to represent perfect squares.
|
| Sometimes, pixels really are little squares. Not always, but
| not never, either.
 
| centimeter wrote:
| Worth noting that this does not apply to physical cameras - a
| pixel is not, in fact, a point sample, but the integral over a
| sub-region of the sensor plane. It's also not a complete
| integral - the red pixels in an image are interpolated from
| squares that cover only a quarter of the image plane (on 95+%
| of sensors). Then you bring in low-pass filters (or don't), and
| the signal theory starts to get a bit complicated.
 
| Kye wrote:
| Bayer filter: https://en.wikipedia.org/wiki/Bayer_filter
|
| There are other ways to do it, but they generally have a lot
| in common.
 
| dTal wrote:
| It doesn't apply to screens either, as pixels are - manifestly -
| little squares. Your screen does not apply any sort of lovely
| reconstruction filter over this "array of point samples".
|
| In short, it's wrong. You can model an image as an array of
| point samples - however, these are not "pixels".
 
| JonathonW wrote:
| Your screen _is_ the "lovely reconstruction filter over this
| 'array of point samples'".
|
| This is, for LCDs, _usually_ an array of little squares... sort
| of (probably more accurately described as an array of little
| rectangles of different colors).
| Things get more complicated when you start talking about less
| traditional subpixel arrangements like PenTile, or the behavior
| of old CRTs (where you don't necessarily have fully discrete
| pixels at all).
 
| ckcheng wrote:
| Interestingly, the memo talked about screens, saying they do
| not support the pixels-as-squares model because there are
| "overlapping shapes that serve as natural reconstruction
| filters"...
|
| But that was in the context of old CRT and Sony Trinitron
| monitors! I was wondering what it'd say about LCD screens, but
| the memo is from 1995, and the first standalone LCDs only
| appeared in the mid-1990s and were expensive [1].
|
| What it says about CRT electron beams no longer applies, but
| I'm guessing this still does:
|
| > The value of a pixel is converted, for each primary color,
| to a voltage level. This stepped voltage is passed through
| electronics which, by its very nature, rounds off the edges of
| the level steps
|
| > Your eye then integrates the light pattern from a group of
| triads into a color
|
| > There are, as usual in imaging, overlapping shapes that serve
| as natural reconstruction filters
|
| [1]: https://en.wikipedia.org/wiki/Computer_monitor#Liquid_
| crysta...
 
| Kednicma wrote:
| I know that many languages have some sort of support for units.
| It would be nice to have libraries which explicitly say that
| (0, 1) and (-1, 1) are different, and support transforming
| between them. I think that this transformation comes up all the
| time when working with pixels that are properly aligned and
| centered.
 
| aspaceman wrote:
| I know of yt-project, which has a lot of cool support for units
| in the context of sci-vis. Support for transforms between
| coordinate systems is nice, though. Would love to have that.
| The only hard part is that systems which try to do this sort of
| thing lose some of the elegance of saying "V = (0, 1)" when you
| also have to specify the coordinate system you're working in
| for every vector.
|
| There have been some papers that do this, though. I can't find
| the reference, but I know it exists.
___________________________________________________________________
(page generated 2020-06-11 23:00 UTC)
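
The conventions discussed in the thread - integer pixel (i, j)
having its center at (i + 0.5, j + 0.5), and GDAL RFC 33's
half-pixel shift for PixelIsPoint rasters - can be sketched as
below. This is a minimal illustration, not GDAL's API; the helper
names are made up, and the geotransform layout assumed is GDAL's
usual six-tuple (x origin, pixel width, row rotation, y origin,
column rotation, pixel height).

```python
import math

def pixel_to_continuous(i, j):
    """Center of integer pixel (i, j) in continuous coordinates.

    With the half-integer convention, pixel (0, 0) is centered at
    (0.5, 0.5) - the point the article's title refers to.
    """
    return (i + 0.5, j + 0.5)

def continuous_to_pixel(x, y):
    """Integer pixel containing the continuous point (x, y)."""
    return (math.floor(x), math.floor(y))

def pixel_is_point_to_area(gt):
    """Shift a geotransform by half a pixel.

    A PixelIsPoint file anchors its georeferencing to pixel
    centers; GDAL's model anchors it to the top-left corner, so
    the origin moves back by half a pixel in each axis (the
    adjustment RFC 33 describes, including the rotation terms).
    """
    x0, dx, rx, y0, ry, dy = gt
    return (x0 - dx / 2 - rx / 2, dx, rx,
            y0 - ry / 2 - dy / 2, ry, dy)
```

For example, a north-up raster with 10-unit pixels and its center
origin at (100, 200) gets a corner origin of (95, 205):
`pixel_is_point_to_area((100.0, 10.0, 0.0, 200.0, 0.0, -10.0))`.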