[HN Gopher] Band-Limiting Procedural Textures
___________________________________________________________________
 
Band-Limiting Procedural Textures
 
Author : todsacerdoti
Score  : 162 points
Date   : 2020-08-27 13:35 UTC (9 hours ago)
 
(HTM) web link (iquilezles.org)
(TXT) w3m dump (iquilezles.org)
 
| pixelpoet wrote:
| Once more: escape your TeX functions, folks! (In this case, use
| e.g. "\cos", not "cos".)
 
| spekcular wrote:
| There should also be a "\," before the dt in the integral. See
| the second mistake on this list:
| https://www.johndcook.com/blog/2010/02/15/top-latex-mistakes....
 
| TheRealPomax wrote:
| It would be super great if those super short animations ran a
| simple forward-reverse loop instead of forward-forward. It would
| make them a lot easier to read for people with attention
| disabilities, while only making things more pleasant for all the
| normal folk out there.
 
| debacle wrote:
| Would someone mind breaking this down a bit? It seems like we are
| pre-dithering the textures so that, when rendered, the noise is
| less visible. Is that right?
 
| shoo wrote:
| It's a blur operation. Blurs are often defined as transforming an
| input function by convolving it with some kernel function (often
| a Gaussian; see the Wikipedia page on Gaussian blur, and a
| similar idea holds for blurring 1d, 2d, 3d, nd signals).
| Convolution is an operation that combines two functions by
| integrating them together in a particular way. The output at each
| location is a weighted average of the values the input function
| takes at points neighbouring that location. Integration provides
| a way of computing averages. The weights used in the weighted
| average are defined by the choice of kernel function. In this
| case the kernel function is an indicator function that is 1 in
| some window and 0 everywhere else, instead of the Gaussian
| function that'd be used for a Gaussian blur.
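The box-kernel convolution described above can be sketched in a few lines. This is a minimal numeric check, not code from the article; the function names are mine. Averaging cos over a window of width w (convolution with the box kernel) collapses to the closed form cos(x) * sin(w/2) / (w/2), which is what the post's integral evaluates to:

```python
import math

def box_filtered_cos(x, w, n=2000):
    """Numerically convolve cos with a box kernel of width w: the
    average of cos(t) over [x - w/2, x + w/2], via the midpoint rule."""
    h = w / n
    return sum(math.cos(x - w / 2 + (i + 0.5) * h) for i in range(n)) / n

def box_filtered_cos_closed_form(x, w):
    """The same average computed analytically:
    (1/w) * [sin(x + w/2) - sin(x - w/2)] = cos(x) * sin(w/2) / (w/2)."""
    return math.cos(x) * math.sin(w / 2) / (w / 2)

# The numeric convolution and the closed form agree for any window:
for x in (0.0, 1.3, 10.0):
    for w in (0.5, 2.0, 8.0):
        assert abs(box_filtered_cos(x, w)
                   - box_filtered_cos_closed_form(x, w)) < 1e-5
```

The sin(w/2)/(w/2) factor is the sinc attenuation the thread discusses below: it is ~1 for narrow windows and decays toward 0 as the window spans more periods.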
|
| Since the input image is defined as a continuous function that
| maps a coordinate to a colour (it isn't discretised into pixels),
| we can apply the blur operation to the input image function and
| derive, analytically, a new image function that directly
| evaluates the smoothed output colour value at any coordinate.
| Then we can just sample that function at each pixel.
|
| Blurring removes or damps the high-frequency components of the
| input, leaving the lower-frequency components, so it can be used
| to remove or reduce high-frequency "noise".
 
| pfortuny wrote:
| You are "blurring" the samples because you know your function is
| NOT a step function, essentially. So instead of trying to be
| "super precise" (you cannot), you "blur" your Fourier transform.
|
| It is NOT exactly that, but my explanation is morally what you
| achieve.
 
| mmastrac wrote:
| I can try:
|
| If you're familiar with Moire patterns and/or the Nyquist
| theorem, it's basically ensuring that we don't have too much
| information for the sampled channel (i.e. the pixels that make up
| the line). The symptom of "overloading the channel" is shimmering
| and/or moire patterns -- the same sort of artifacts you'd get
| while trying to record high frequencies at a very low sampling
| rate.
 
| thatcherc wrote:
| Here's what I think is going on:
|
| - In raytracing, you're evaluating some complicated equation at
| each pixel location. In this case, there are some cosine
| components that have a really high spatial frequency, so you get
| that aliased TV-static-looking effect in some parts of the image.
|
| - One way to avoid that would be to take many samples in a small
| region around each pixel location (at sub-pixel distances), which
| the author refers to as 'supersampling'.
This would work except
| you'd need to raytrace a _lot_ more points, which would slow down
| rendering.
|
| - What you could do instead (and this is what the post is mostly
| about) would be to replace the cosine(x) function with a function
| that is "the average value of cosine(t) from t = x-w/2 to
| t = x+w/2" -- that's the big integral in the post. This function
| is effectively just cosine(x) when the window w is much smaller
| than the cosine's wavelength, but it averages out the high-
| frequency cosine components of the image when w is comparable to
| or larger than the wavelength.
|
| - The neat effect is that you can get the same smooth, alias-free
| image as you would with the expensive supersampling operation
| just by using a modified version of cosine instead!
 
| 0-_-0 wrote:
| In short: as the frequency of the cosine gets too high, you
| gradually turn it off so you don't get aliasing from the high
| frequencies.
 
| inetsee wrote:
| The video at the bottom isn't working for me in Firefox. It does
| work in Chromium, and it is quite pretty.
 
| rollulus wrote:
| It might help to know that the "video" is an interactive, real-
| time rendered shader.
 
| Zardoz84 wrote:
| Works fine on the new Firefox for Android.
 
| kostadin wrote:
| The opposite for me: working in Firefox and not Chrome. It's a
| WebGL2 shader, I believe.
 
| inetsee wrote:
| I'm running Firefox 79.0 and Chromium Version 83.0.4103.61
| (Official Build) Built on Ubuntu, running on Ubuntu 18.04
| (64-bit).
 
| Adam1775 wrote:
| Works with Chrome; make sure you don't have uBlock Origin,
| Privacy Badger or similar plugins running, and it should
| hopefully display properly.
 
| kostadin wrote:
| This. It was Privacy Badger blocking connect.soundcloud.
 
| nwhitehead wrote:
| This is awesome! This reminds me of MinBLEP audio synthesis of
| discontinuous functions
| (https://www.cs.cmu.edu/~eli/papers/icmc01-hardsync.pdf). Instead
| of doing things at a high sampling rate and explicitly filtering,
| generate the band-limited version directly.
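The supersampling-versus-closed-form equivalence described in the bullets above is easy to verify. A minimal sketch (function names are mine, not from the article): brute-force averaging of many cos evaluations across a pixel-sized window agrees with a single evaluation of the modified cosine.

```python
import math

def supersampled_cos(x, w, samples=64):
    """The expensive route: average many point evaluations of cos
    across the pixel footprint [x - w/2, x + w/2]."""
    h = w / samples
    return sum(math.cos(x - w / 2 + (i + 0.5) * h)
               for i in range(samples)) / samples

def bandlimited_cos(x, w):
    """The cheap route: the same average, evaluated in closed form
    as cos(x) * sin(w/2) / (w/2)."""
    return math.cos(x) * math.sin(w / 2) / (w / 2)

# One call to the modified cosine matches 64 supersamples per pixel:
x, w = 2.0, 1.5
assert abs(supersampled_cos(x, w) - bandlimited_cos(x, w)) < 1e-3
```

In a shader the same substitution replaces the inner sampling loop entirely, which is why the post's version runs at the cost of a single sample.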
|
| In the article, talking about the smoothstep approximation of
| sinc: "I'd argue the smoothstep version looks better". Why would
| this be? I would have thought the theoretically correct sinc
| version would look nicer.
 
| shoo wrote:
| > "I'd argue the smoothstep version looks better" Why would this
| be? I would have thought the theoretically correct sinc version
| would look nicer.
|
| For a fixed, well-defined mathematical problem, you might be able
| to solve it optimally or approximately. One perspective is to
| treat the problem as given and immutable and then try to compute
| an exact or optimal solution.
|
| But often the original problem statement is fairly arbitrary,
| based on a bunch of guesses or simplifications, and you might be
| able to get a better result by changing the problem definition
| (perhaps unfortunately making it much messier to solve exactly)
| and then solving the new problem statement approximately.
|
| What's the actual problem we're trying to solve here? Generate
| something that looks visually pleasing. Why is an expression
| involving cosine the natural way to define that problem statement
| mathematically? There's likely a lot of freedom here to vary our
| problem definition.
|
| It might be interesting to start with the smoothstep-multiplied
| result, take the derivative, look at how that differs from a
| normal cosine, and ponder why that might produce a more pleasing
| result than a cosine.
 
| gnramires wrote:
| > In the article, talking about the smoothstep approximation of
| sinc: "I'd argue the smoothstep version looks better" Why would
| this be? I would have thought the theoretically correct sinc
| version would look nicer.
|
| In this case we are sort of mimicking the eye. The eye doesn't do
| sinc band-limiting (it does a sort of angular integration -- it
| sums the photons received in a region).
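The smoothstep-versus-sinc question raised above is easier to see with a small sketch. The edge values below are made up for illustration (the article's exact thresholds may differ); the point is the qualitative difference: the exact box-filter attenuation (a sinc) has negative lobes, while a smoothstep fade is monotonic and never goes negative.

```python
import math

def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: clamped cubic Hermite ramp from 0 to 1."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def sinc_attenuation(w):
    """Exact box-filter attenuation of cos: sin(w/2) / (w/2)."""
    return 1.0 if w == 0.0 else math.sin(w / 2) / (w / 2)

def smoothstep_attenuation(w):
    """Monotonic stand-in: 1 for a tiny window, fading to 0 once the
    window spans a full period (edges chosen here for illustration)."""
    return 1.0 - smoothstep(0.0, 2.0 * math.pi, w)

# Past one period the sinc goes negative (it flips the cosine's sign);
# the smoothstep version just fades it out and stays non-negative:
assert sinc_attenuation(8.0) < 0.0
assert all(smoothstep_attenuation(w) >= 0.0 for w in (1.0, 4.0, 8.0))
```

A negative attenuation means distant stripes render with inverted contrast, which reads visually as ringing; the monotone fade avoids that at the cost of exact band-limiting.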
|
| I say "sort of", because we're really doing two steps: first we
| are projecting a scene onto a screen, and then the eye is viewing
| the screen. We want (in most cases) that what the eye sees on the
| screen corresponds to what it would see directly (if seeing the
| scene in reality).
|
| The naive rendering approach simply samples an exact point for
| each pixel. When there's high pixel variation (higher spatial
| frequency than the pixel frequency), as you move the camera the
| samples will alternate rapidly, which wouldn't correspond to the
| desired eye reconstruction. The eye would see approximately an
| averaged (integrated) colour over a small, smooth angular window.
|
| Note we really never get the perfect eye reconstruction unless
| the resolution of your display is much larger than the resolution
| your eye can perceive [1]. But through anti-aliasing at least
| this sampling artifact disappears.
|
| This window-integration is not an ideal sinc filtering! Actually,
| it's not bandlimiting at all, since it is a finite-support
| convolution -- bandlimiting is just a convenient theoretical
| (approximate/incorrect) description.
|
| In the frequency domain this convolution is not a box (ideal sinc
| filtering); it's smooth, with ripples. In the spatial domain
| (that's what's really used here), it probably does look something
| like a smoothstep (a smooth window) [2]. The details don't matter
| if the resolution is large [3].
|
| [1] Plus we would actually need to model other optical effects of
| the eye (like focus and aberration) that I won't go into in
| detail :) But you can ask if interested.
|
| [2] It looks something like this:
| https://foundationsofvision.stanford.edu/wp-content/uploads/...
| found here:
| https://foundationsofvision.stanford.edu/chapter-2-image-for...
| This describes only the optical behavior of the eye; there's also
| the sampling behavior of the retina.
|
| [3] Because our own eye integrates the pixels anyway.
Again,
| this does ignore other optical effects of the eye (such as
| "focus" and aberration) that vary with distance to the focal
| plane, and more.
|
| TL;DR: The correct function looks something like this:
| https://foundationsofvision.stanford.edu/wp-content/uploads/...
| which seems close to a smoothstep.
 
| CyberDildonics wrote:
| The smoothstep function is close to a Gaussian function, which is
| very difficult to beat as a pixel filter.
 
| skybrian wrote:
| It seems like a theoretically correct box filter might not
| actually be the best filter to use? By approximating it you get a
| different filter, and whether it's a better filter is something
| you need to judge by looking at the result.
|
| It looks like the sinc version is still adding a little bit of
| some higher frequencies (the dampened sine wave), and the
| approximation doesn't. Maybe those higher frequencies don't
| actually make things look better?
 
| GuB-42 wrote:
| Short answer: ringing artefacts.
|
| sinc is perfect if you are looking only at frequency response.
| But in images, you also want to preserve locality; that is,
| processing of one part of the image should not affect the rest of
| the image. For example, sharpening an edge should only affect the
| edge in question, not its surroundings. This is in contradiction
| with the idea of preserving frequency response. Frequencies are
| about sine waves, and sine waves are wide, infinitely wide in
| fact.
|
| BTW, that's also the reason why in quantum mechanics you can't
| know both position and momentum (a frequency) precisely.
|
| So we need to compromise, and as with scaling algorithms, you
| have 3 qualities: sharpness (the result of a good frequency
| response), locality and aliasing. You can't have all 3, so you
| need to pick the most pleasant combination.
|
| The extreme cases are:
|
| - Point sampling: excellent locality and sharpness, terrible
| aliasing
|
| - Linear filtering: excellent locality and no aliasing, very
| blurry
|
| - sinc filtering: excellent sharpness and no aliasing, terrible
| locality (ringing artefacts)
|
| Using smoothstep is a good compromise: it has a bit of aliasing
| because it is a step function, and it has a bit of smearing
| because it is smooth, but neither effect is so bad as to be
| unpleasant.
|
| Side note: for audio, frequency response is more important than
| locality; that's why sinc window functions are so popular.
 
| CyberDildonics wrote:
| This was called frequency clamping in the book 'Advanced
| RenderMan', which talks a lot about procedural textures.
|
| A simple way to think about it is to imagine a pattern of thin
| black and white stripes. If you go far enough away from the
| pattern, there will be multiple black and white stripes in the
| same pixel. When they are smaller than a pixel, the average color
| will be grey. Knowing this, you can fade to grey as the stripes
| get tiny instead of arriving at grey through heavy sampling.
 
| tgb wrote:
| Delightful post.
|
| > in theory, once per half-pixel, according to Nyquist
|
| I'd have thought this should be once per two pixels instead.
| Nyquist says there's no aliasing between functions with
| wavelength L if sampling at intervals of L/2. So sampling once
| per pixel should imply a 2-pixel minimum wavelength without
| aliasing. Assuming the author is right, what am I messing up?
 
| andreareina wrote:
| If L is 1 pixel, then L/2 would be 0.5 pixels.
 
| tgb wrote:
| But L/2 should be the _sampling interval_, which is fixed at 1
| pixel. For example, a signal with a wavelength of 1 pixel (or 1/2
| pixel) would be identical to a constant signal.
___________________________________________________________________
(page generated 2020-08-27 23:00 UTC)