{"id":"b4b5a110f73ccecd","slug":"symmetric-nearest-neighbour-all-the-things","trashed":false,"description":"","likes":54,"publish_level":"live","forks":2,"fork_of":null,"has_importers":true,"update_time":"2025-09-15T22:18:58.363Z","first_public_version":2276,"paused_version":null,"publish_time":"2022-12-13T21:45:04.792Z","publish_version":2287,"latest_version":2287,"thumbnail":null,"default_thumbnail":null,"roles":[],"sharing":null,"owner":{"id":"3f83172cfde1e56a","avatar_url":"https://avatars.observableusercontent.com/avatar/808373580aa09723ceabb9cb31d01ea6db287280b59eaa63618e259f5d28dd71","login":"jobleonard","name":"Job van der Zwan","bio":"Tinkering with code and algorithms for fun","home_url":"","type":"team","tier":"starter_2024"},"creator":{"id":"d36034ddfb454bdd","avatar_url":"https://avatars.observableusercontent.com/avatar/808373580aa09723ceabb9cb31d01ea6db287280b59eaa63618e259f5d28dd71","login":"jobleonard","name":"Job van der Zwan","bio":"Tinkering with code and algorithms for fun","home_url":"","tier":"pro"},"authors":[{"id":"d36034ddfb454bdd","avatar_url":"https://avatars.observableusercontent.com/avatar/808373580aa09723ceabb9cb31d01ea6db287280b59eaa63618e259f5d28dd71","name":"Job van der Zwan","login":"jobleonard","bio":"Tinkering with code and algorithms for fun","home_url":"","tier":"pro","approved":true,"description":""}],"collections":[{"id":"a9e4eebe2253b0c1","type":"public","slug":"data-science-in-observable","title":"Data Science in Observable","description":"Collection of notebooks related to data science in 
Observable","update_time":"2023-07-24T18:41:05.870Z","pinned":false,"ordered":true,"custom_thumbnail":null,"default_thumbnail":"b79a61f6b4048f09d2b2781d0c721b3d175885eb4a40e56c42270f77a903ebae","thumbnail":"b79a61f6b4048f09d2b2781d0c721b3d175885eb4a40e56c42270f77a903ebae","listing_count":22,"parent_collection_count":0,"owner":{"id":"f35c755083683fe5","avatar_url":"https://avatars.observableusercontent.com/avatar/5a51c3b908225a581d20577e488e2aba8cbc9541c52982c638638c370c3e5e8e","login":"observablehq","name":"Observable","bio":"The end-to-end solution for building and hosting better data apps, dashboards, and reports.","home_url":"https://observablehq.com","type":"team","tier":"enterprise_2024"}},{"id":"f94ea952d748db30","type":"public","slug":"data-science","title":"Data Science ","description":"Examples of how Data Scientists are using Observable","update_time":"2023-01-18T05:42:35.940Z","pinned":false,"ordered":true,"custom_thumbnail":null,"default_thumbnail":"e31113b6edc4090d97b4eafaefe496036dfe2fa5990c28ec972c101d185e32e3","thumbnail":"e31113b6edc4090d97b4eafaefe496036dfe2fa5990c28ec972c101d185e32e3","listing_count":25,"parent_collection_count":1,"owner":{"id":"f35c755083683fe5","avatar_url":"https://avatars.observableusercontent.com/avatar/5a51c3b908225a581d20577e488e2aba8cbc9541c52982c638638c370c3e5e8e","login":"observablehq","name":"Observable","bio":"The end-to-end solution for building and hosting better data apps, dashboards, and reports.","home_url":"https://observablehq.com","type":"team","tier":"enterprise_2024"}},{"id":"b0b8183de839aa32","type":"public","slug":"algorithms","title":"Algorithms","description":"Explaining 
code.","update_time":"2018-03-23T05:32:50.477Z","pinned":false,"ordered":false,"custom_thumbnail":"63d4ba1e4c859f6e0d57fb101e317b12c83557a084fcddd4b436292e2026e3f4","default_thumbnail":"4ec29900fd05a0ebe1b59672361be6d757b18d454f7f68f64868b2b763f021df","thumbnail":"63d4ba1e4c859f6e0d57fb101e317b12c83557a084fcddd4b436292e2026e3f4","listing_count":18,"parent_collection_count":0,"owner":{"id":"f35c755083683fe5","avatar_url":"https://avatars.observableusercontent.com/avatar/5a51c3b908225a581d20577e488e2aba8cbc9541c52982c638638c370c3e5e8e","login":"observablehq","name":"Observable","bio":"The end-to-end solution for building and hosting better data apps, dashboards, and reports.","home_url":"https://observablehq.com","type":"team","tier":"enterprise_2024"}}],"files":[],"comments":[],"commenting_lock":null,"suggestion_from":null,"suggestions_to":[],"version":2287,"title":"Variations on Symmetric Nearest Neighbour smoothing","license":null,"copyright":"","nodes":[{"id":0,"value":"md`# Variations on Symmetric Nearest Neighbour smoothing\n\n[The 2.10 release of GIMP](https://www.gimp.org) added a *Symmetric Nearest Neighbour* filter:\n\n<img src=\"https://blindedcyclops.neocities.org/ObservableData/SNN/symmetric%20nearest%20neighbour%20example.png\" style=\"width:100%;height:100%;image-rendering: crisp-edges; image-rendering: pixelated\">\n\nI had never heard of this before, so expected it to be some novel fancy complicated algorithm, but it turns out to be quite simple *and* old, which made its obscurity surprising.\n\n\nBeyond the GIMP documentation itself, the only bit of useful information that I could find at first was [a brief but sufficient description on the SubSurf Wiki](http://subsurfwiki.org/wiki/Symmetric_nearest_neighbour_filter) ([archive.org mirror](https://web.archive.org/web/20190706044806/http://subsurfwiki.org/wiki/Symmetric_nearest_neighbour_filter)), a wiki for subsurface science. 
Looking for the sources mentioned there on Google Scholar showed that this filter was first described by Harwood et al. in a 1987 paper called [*\"A new class of edge-preserving smoothing filters\"*](https://scholar.google.se/scholar?cluster=11619776633817084182&hl=en&as_sdt=0,5). That makes it quite old, and even more surprising that it was not in any photo-editing software packages before. Perhaps it was encumbered by software patents? Or just somehow not picked up (I guess things were different in the pre-internet era). Whatever the cause, it's in GIMP now, and we can start playing around with it!\n\n## How it works.\n\nThe description of the filter [from gimp.org itself](https://pippin.gimp.org/image-processing/chap_area.html) is quite good:\n\n> *An improvement over the kuwahara filter is the symmetric nearest neighbour filter that compares symmetric pairs/quadruples of pixels with the center pixel and only takes into account the one from this set which is closest in value to the center pixel.*\n\nIn other words: we blur, but only with the pixels most similar to the pixel being blurred over. The image below illustrates selecting between a symmetric pair of points:\n\n<img src=\"https://blindedcyclops.neocities.org/ObservableData/SNN/symmetric%20nearest%20neighbour%20demo.png\" style=\"width:100%;height:100%\">\n\nGiven a center point \\`C\\`, we look at symmetrically opposed pairs like \\`P1\\` and \\`P2\\`. The point that has the least color difference to \\`C\\` is selected (in this pairing that would be \\`P2\\`). This is done for the whole neighbourhood. Then the selected values will be averaged.\n\nThe illustration shows the pair-wise selection method, but this notebook will mostly work with the cross-wise version, as it turns out to generally perform better visually speaking.\n\nConceptually speaking this is quite a simple approach. 
One tricky aspect is defining a distance function for two colors (or, alternatively and not explored here, doing this separately for each color channel), but we can build on other people's work there. I decided to use the Kotsarenko/Ramos algorithm.\n\n## Variations\n\nThinking about the SNN algorithm some more, it becomes clear these are essentially two composed algorithms:\n\n- the pair-wise or cross-wise selection of pixels\n- averaging the selected pixels (which in the GIMP version as well as the original paper seems to be a simple box blur)\n\nThat also means we can decouple the two, and experiment with using the first part as a form of \"preselection\" with a whole bunch of other image filters and see what the results of that will be! It turns out that the authors of the original paper already realised this, and created both a \"SNN mean\" and \"SNN median\" variant, effectively composing the selection algorithm with a box blur and a median blur, respectively.\n\nBut let's try replacing the box blur with something generally considered a better image filter, like a [Gaussian blur](https://en.wikipedia.org/wiki/Gaussian_blur), see what the effects are on the image quality, and explore from there.\n\nAs test images I picked two photographs I took about a decade ago:\n`","pinned":false,"mode":"js","data":null,"name":null},{"id":35,"value":"tulip = await new Promise((resolve, reject) => {\n  const image = new Image();\n  image.crossOrigin = \"anonymous\";\n  image.onerror = reject;\n  image.onload = () => {\n    resolve(image);\n  };\n  image.src = \"https://blindedcyclops.neocities.org/ObservableData/SNN/tulip.jpg\";\n})","pinned":false,"mode":"js","data":null,"name":null},{"id":999,"value":"kiss = await new Promise((resolve, reject) => {\n  const image = new Image();\n  image.crossOrigin = \"anonymous\";\n  image.onerror = reject;\n  image.onload = () => {\n    // image is 1200 x 600 pixels,\n    // if width is less than that we crop \n    // rather than resize 
to avoid artefacts\n    if (width >= 1200) return resolve(image);\n    const ctx = DOM.context2d(width, 600);\n    ctx.drawImage(image, 0, 0);\n    resolve(ctx.canvas);\n  };\n  image.src = \"https://blindedcyclops.neocities.org/ObservableData/SNN/kiss.jpg\"\n})","pinned":false,"mode":"js","data":null,"name":null},{"id":115,"value":"md`The tulip photo was picked because it is blurry at the edges, having nice examples of gradients in real-world photos, but also has elements that are in sharp focus. The dust in the corner will be a good point of reference too. The kissing couple combines fog with grain to create a very noisy signal that human eyes can extract information from quite well, but naive image filters tend to struggle with (for example, the subtle brick road texture that fades away).\n\nFirst we should recreate the original Symmetric Nearest Neighbour filter. This is done by combining a symmetric nearest neighbour selection algorithm with a box blur. Let's start with implementing the box blur to see if that works as expected:`","pinned":false,"mode":"js","data":null,"name":null},{"id":93,"value":"// parameters are passed as an object, so that we can turn boxblur into a worker\nfunction boxblur({source, width, height, radius}) {\n  if (!(radius > 0)) return source.slice();\n\n  let target = new Uint8ClampedArray(source.length);\n  /*\n   * total number of summed pixels is two times the radius plus\n   * the central pixel, squared. See for example this neighbourhood\n   * of radius 3 around a pixel:\n   *\n   *     . . . . . . .\n   *     . . . . . . .\n   *     . . . . . . .\n   *     . . . # . . .\n   *     . . . . . . .\n   *     . . . . . . .\n   *     . . . . . . 
.\n   *\n   * At the end we have to divide by the total area, but it's quicker\n   * to multiply by the inverse instead.\n   */\n  const factor = 1 / ((2*radius + 1) * (2*radius + 1));\n\n  const {min, max} = Math;\n  /* \n   * Tangent: this is a really slow way to implement a blur function.\n   * We loop over the entire area for every pixel, which makes it\n   * effectively O(n * r²). That gets slow really quickly for larger\n   * radii (not to mention memory access patterns and such). There\n   * are faster methods - we explore a two-pass version below\n   * that is effectively 2 * O(n*r) near the end of this notebook.\n   * For educational purposes we stick to these slow versions first.\n   */\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      let r = 0;\n      let g = 0;\n      let b = 0;\n      for (let dy = -radius; dy <= radius; dy++) {\n        const line_dy = min(max(0, y + dy), height-1) * width;\n        for (let dx = -radius; dx <= radius; dx++) {\n          const idx = (min(max(0, x + dx), width-1) + line_dy) * 4;\n          r += source[idx];\n          g += source[idx + 1];\n          b += source[idx + 2];\n        }\n      }\n      const cIdx = x*4 + line;\n      target[cIdx] = r * factor;\n      target[cIdx+1] = g * factor;\n      target[cIdx+2] = b * factor;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":96,"value":"boxblurTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(boxblur, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1010,"value":"boxblurKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(boxblur, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":81,"value":"md`Next we write a generic \\`selectNearest\\` selection function. 
To make it easier to plug into our filters, it will be applied directly to the \\`Uint8Array\\` of the image data, and return the index of the picked color in that array:`","pinned":false,"mode":"js","data":null,"name":null},{"id":22,"value":"// This is not very efficient, so don't use this in filters with big\n// radii. On the other hand, it's easier to follow code-wise\nfunction selectNearest(cx, cy, dx, dy, source, width, height) {\n  // assuming that source is an RGBA array, the central index should be at:\n  const cIdx = (cx + cy * width) * 4;\n\n\n  // Side-note: interesting dilemma: do we pick \"per color\" or \"per color channel\"?\n  // I suspect the latter will perform better. However, we first try this with a\n  // closest-color formula using Kotsarenko/Ramos.\n\n  // 35215 is the maximum possible value for the Kotsarenko/Ramos YIQ difference metric\n  let minDiff = 35215;\n  let pick = cIdx;\n\n  // we only want to test pixels that are inside the boundaries\n  if (cx + dx < width && cy + dy < height) {\n    const pos = cIdx + (dx + dy * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      pick = pos;\n      minDiff = diff;\n    }\n  }\n  \n  if (cx - dx >= 0 && cy - dy >= 0) {\n    const pos = cIdx - (dx + dy * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      pick = pos;\n      minDiff = diff;\n    }\n  }\n  \n  // the other two points of the cross are the first pair rotated\n  // a quarter turn: (dy, -dx) and (-dy, dx)\n  if (cx + dy < width && cy - dx >= 0) {\n    const pos = cIdx + (dy - dx * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      pick = pos;\n      minDiff = diff;\n    }\n  }\n  \n  if (cx - dy >= 0 && cy + dx < height) {\n    const pos = cIdx - (dy - dx * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      pick = pos;\n      minDiff = diff;\n    }\n  }\n  \n  return pick;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1021,"value":"gaussianBlurSNNKiss = {\n  
const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(gaussianBlurSNN, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":23,"value":"// Adapted from code in https://github.com/mapbox/pixelmatch\n\n/* Inlined for better performance\n    //   function rgb2y(r, g, b) { return r * 0.29889531 + g * 0.58662247 + b * 0.11448223; }\n    //   function rgb2i(r, g, b) { return r * 0.59597799 - g * 0.27417610 - b * 0.32180189; }\n    //   function rgb2q(r, g, b) { return r * 0.21147017 - g * 0.52261711 + b * 0.31114694; }\n\n    //   // blend semi-transparent color with white\n    //   function blend(c, a) {\n    //     return 255 + (c - 255) * a;\n    //   }\n  */\n\n// calculate color difference according to the paper \"Measuring perceived color difference\n// using YIQ NTSC transmission color space in mobile applications\" by Y. Kotsarenko and F. Ramos\nfunction colorDelta(img, pos1, pos2) {\n  let r1 = img[pos1 + 0];\n  let g1 = img[pos1 + 1];\n  let b1 = img[pos1 + 2];\n  // let a1 = img[pos1 + 3]; // we ignore alpha in this demo, for perf reasons\n\n  let r2 = img[pos2 + 0];\n  let g2 = img[pos2 + 1];\n  let b2 = img[pos2 + 2];\n  // let a2 = img[pos2 + 3];\n\n  if (r1 === r2 && g1 === g2 && b1 === b2) return 0;\n\n//   if (a1 < 255) {\n//     a1 /= 255;\n//     r1 = 255 + (r1 - 255) * a1; // blend(r1, a1);\n//     g1 = 255 + (g1 - 255) * a1; // blend(g1, a1);\n//     b1 = 255 + (b1 - 255) * a1; // blend(b1, a1);\n//   }\n\n//   if (a2 < 255) {\n//     a2 /= 255;\n//     r2 = 255 + (r2 - 255) * a2; // blend(r2, a2);\n//     g2 = 255 + (g2 - 255) * a2; // blend(g2, a2);\n//     b2 = 255 + (b2 - 255) * a2; // blend(b2, a2);\n//   }\n\n  // const y = rgb2y(r1, g1, b1) - rgb2y(r2, g2, b2);\n  // const i = rgb2i(r1, g1, b1) - rgb2i(r2, g2, b2);\n  // const q = rgb2q(r1, g1, b1) - rgb2q(r2, g2, b2);\n  \n  const r = r1 - r2;\n  const g = g1 - g2;\n  const b = b1 - b2;\n\n  // Without strength-reduction\n  // const y = r * 
0.29889531 + g * 0.58662247 + b * 0.11448223\n  // const i = r * 0.59597799 - g * 0.27417610 - b * 0.32180189\n  // const q = r * 0.21147017 - g * 0.52261711 + b * 0.31114694\n  // return 0.5053 * y * y + 0.299 * i * i + 0.1957 * q * q;\n  \n  // With strength-reduced constants\n  const y = r * 0.2124681075446384 + g * 0.4169973963260294 + b * 0.08137907133969426;\n  const i = r * 0.3258860837850668 - g * 0.14992193838645426 - b * 0.17596414539861255;\n  const q = r * 0.0935501584120867 - g * 0.23119531908149002 + b * 0.13764516066940333;\n  return y*y + i*i + q*q\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":223,"value":"md`And with a simple tweak we created a new filter: Gaussian Symmetric Nearest Neighbour Blurring. There are definitely some differences with the previous filter, although it might not be very obvious if you don't see the pictures side-by-side.\n\nSpecifically, it looks like our edge-effects are less severe than before: the leaves don't have a darkened center, and the bright spot in the second photo remains bright. This makes sense: pixels further away from the central pixel have a lower weight, so even if our radius might be too large, the central pixels are weighed more heavily. As a result, the inverted-center effect that we had with the unweighted box blur is significantly reduced.\n\nAnother thing to notice is that the image is less smoothed than before for the same radius value. The dust in the top-left corner isn't blurred away as strongly as it was with the SNN box blur. This is again a result of central pixels having more weight than outer ones.\n\nStill, in overall image quality I think this wins over our previous filter.\n\n## Symmetrical *Furthest* Neighbour?\n\nSo far we have been picking the colors that are the most similar. 
What would be the effect of picking the *least* similar color?`","pinned":false,"mode":"js","data":null,"name":null},{"id":32,"value":"function boxblurSNN({source, width, height, radius}) {\n  if (!(radius > 0)) return source.slice();\n\n  let target = new Uint8ClampedArray(source.length);\n  /*\n   * total number of summed pixels is the central pixel,\n   * plus one quadrant, which turns out to be radius * (radius + 1)\n   * See for example this neighbourhood of radius 3 around a pixel:\n   *\n   *     | | | | - - -\n   *     | | | | - - -\n   *     | | | | - - -\n   *     - - - # - - -\n   *     - - - | | | |\n   *     - - - | | | |\n   *     - - - | | | |\n   * At the end we have to divide by that number, but it's quicker\n   * to multiply by the inverse instead.\n   */\n  const factor = 1 / (radius * (radius + 1) + 1);\n\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      let r = source[cIdx];\n      let g = source[cIdx + 1];\n      let b = source[cIdx + 2];\n      for (let dx = 1; dx <= radius; dx++) {\n        for (let dy = 0; dy <= radius; dy++) {\n          const pick = selectNearest(x, y, dx, dy, source, width, height);\n          r += source[pick];\n          g += source[pick + 1];\n          b += source[pick + 2];\n        }\n      }\n      target[cIdx] = r * factor | 0;\n      target[cIdx+1] = g * factor | 0;\n      target[cIdx+2] = b * factor | 0;\n      target[cIdx+3] = 255;\n    }\n  }\n  \n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":194,"value":"md`Works like a charm! 
Ok, onto turning that into a symmetric nearest neighbour variant:`","pinned":false,"mode":"js","data":null,"name":null},{"id":46,"value":"boxblurSNNTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(boxblurSNN, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1013,"value":"boxblurSNNKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(boxblurSNN, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":134,"value":"md`And just like that our general blur turns into a filter that both smooths and (somewhat) enhances edges! For example, the dirt in the upper left of the tulip photo is mostly smoothed away. At the same time, the edge of the outer ring of the fisheye lens is more defined. For other examples of the edge-enhancing effect, look at the blurry buildings on the horizon. While there is no fine detail to recover, what used to be a blurry transition to the sky has changed into a sharp edge. Similarly, the couple in the second photo has a sharper silhouette, while the overall amount of grain is greatly reduced.\n\nHowever, it's not all good. If you look closely there are some strange \"inverted area\" effects happening. For example, some of the small leaves in the bottom background of the first photo: their edges are enhanced, but their centers are actually blurred with the pixels *outside* the leaf. A similar effect can be seen with the bright spot in the upper-left of the kissing couple. It is much more muted than before. \n\nWhat causes these \"inversion\" effects? Well, I set the blur radius to 10 pixels, so the total pixel area being averaged over is 21 by 21 pixels. If a leaf has a smaller area than that, say 15 by 15 pixels, then a pixel in the middle of that leaf will have many instances where all four selectable neighbours are outside of the leaf. 
Meanwhile, for a pixel on the edge of the leaf there will always be one selectable neighbour inside the leaf. So counter-intuitively, pixels at the edge of an area have a better chance of only selecting neighbours inside that area!\n\nSo that is our basic SNN filter. Let's see what else we can build with this! Maybe we can even deal with the problem described above? Let's \"invent\" a Gaussian Symmetric Nearest Neighbour Blur. A Gaussian blur is basically just a blur with weights. So all we need to do is determine a Gaussian kernel of weights and add it into our existing algorithms. Although [I wrote a notebook for determining a proper Gaussian kernel a while ago](https://observablehq.com/@jobleonard/gaussian-kernel-calculater), I don't really know how to automatically find the right sigma-value for a given radius. So instead we'll use an approximation based on [the quadratic stackblur](https://observablehq.com/@jobleonard/mario-klingemans-stackblur) method:`","pinned":false,"mode":"js","data":null,"name":null},{"id":1289,"value":"function bellcurvish2D(radius) {\n  const p = new Uint32Array(radius+1);\n  for (let i = 0; i <= (radius+1)/2; i++){\n    for (let j = 0; j <= radius/2; j++) {\n      p[i+j]++;\n    }\n  }\n  const q = new Uint32Array(p.length*2 - 1);\n  for (let i = 0; i < p.length; i++){\n    for (let j = 0; j < p.length; j++) {\n      q[i+j] += p[j];\n    }\n  }\n  \n  let total = 0;\n  const quadratic2D = [];\n  for (let i = 0; i < q.length; i++){\n    const c = q[i];\n    const q2 = [];\n    for(let j = 0; j < q.length; j++){\n      const val = c * q[j];\n      total += val;\n      q2.push(val);\n    }\n    quadratic2D.push(q2);\n  }\n  const norm = 1/total;\n  for (let i = 0; i < q.length; i++){\n    for(let j = 0; j < q.length; j++){\n      quadratic2D[i][j] *= norm;\n    }\n  }\n  return quadratic2D;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":163,"value":"md`Now let's write a Gaussian(ish) 
blur:`","pinned":false,"mode":"js","data":null,"name":null},{"id":169,"value":"function gaussianBlur({source, width, height, radius}) {\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  const kernel2 = bellcurvish2D(radius);\n  \n  const {min, max} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      let r = 0;\n      let g = 0;\n      let b = 0;\n      for (let dy = -radius; dy <= radius; dy++) {\n        const line_dy = min(max(0, y + dy), height-1) * width;\n        for (let dx = -radius; dx <= radius; dx++) {\n          const idx = (min(max(0, x + dx), width-1) + line_dy) * 4;\n          const weight = kernel2[radius + dx][radius + dy];\n          r += source[idx] * weight;\n          g += source[idx + 1] * weight;\n          b += source[idx + 2] * weight;\n        }\n      }\n      const cIdx = x*4 + line;\n      target[cIdx] = r;\n      target[cIdx+1] = g;\n      target[cIdx+2] = b;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":87,"value":"md`Now we can plug that \\`selectNearest\\` function into our simple box blur:`","pinned":false,"mode":"js","data":null,"name":null},{"id":181,"value":"gaussianBlurTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(gaussianBlur, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1017,"value":"gaussianBlurKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(gaussianBlur, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":208,"value":"gaussianBlurSNNTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(gaussianBlurSNN, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":203,"value":"function gaussianBlurSNN({source, width, height, radius}) {\n  if 
(!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // because we're dealing with quadrants again, we only want\n  // 1/4 of the kernel weights plus the central pixel.\n  for(let x = radius ; x < kernel.length; x++) {\n    const row = kernel[x];\n    for (let y = radius + 1; y < row.length; y++) {\n      totalWeight += row[y];\n    }\n  }\n  const centerWeight = kernel[radius][radius];\n  totalWeight += centerWeight;\n\n  const norm = 1 / totalWeight\n  \n  const {min, max} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      \n      let r = source[cIdx] * centerWeight;\n      let g = source[cIdx + 1] * centerWeight;\n      let b = source[cIdx + 2] * centerWeight;\n      for (let dx = 1; dx <= radius; dx++) {\n        const row = kernel[radius + dx];\n        for (let dy = 0; dy <= radius; dy++) {\n          const pick = selectNearest(x, y, dx, dy, source, width, height);\n          const weight = row[radius + dy];\n          r += source[pick] * weight;\n          g += source[pick + 1] * weight;\n          b += source[pick + 2] * weight;\n        }\n      }\n      target[cIdx] = r * norm | 0;\n      target[cIdx+1] = g * norm | 0;\n      target[cIdx+2] = b * norm | 0;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1685,"value":"function gaussianBlurSFN({source, width, height, radius}) {\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // because we're dealing with quadrants again, we only want\n  // 1/4 of the kernel weights plus the central pixel.\n  for(let x = radius ; x < kernel.length; x++) {\n    const row = kernel[x];\n    for (let y = radius + 1; y < 
row.length; y++) {\n      totalWeight += row[y];\n    }\n  }\n  const centerWeight = kernel[radius][radius];\n  totalWeight += centerWeight;\n\n  const norm = 1 / totalWeight\n  \n  const {min, max} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      \n      let r = source[cIdx] * centerWeight;\n      let g = source[cIdx + 1] * centerWeight;\n      let b = source[cIdx + 2] * centerWeight;\n      for (let dx = 1; dx <= radius; dx++) {\n        const row = kernel[radius + dx];\n        for (let dy = 0; dy <= radius; dy++) {\n          const pick = selectFurthest(x, y, dx, dy, source, width, height);\n          const weight = row[radius + dy];\n          r += source[pick] * weight;\n          g += source[pick + 1] * weight;\n          b += source[pick + 2] * weight;\n        }\n      }\n      target[cIdx] = r * norm | 0;\n      target[cIdx+1] = g * norm | 0;\n      target[cIdx+2] = b * norm | 0;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1687,"value":"gaussianBlurSFNTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(gaussianBlurSFN, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2037,"value":"gaussianBlurSFNKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(gaussianBlurSFN, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1694,"value":"md`So our edge-enhancing blur turned into an edge-*inverting* blur. That might seem kind of useless at first beyond intentional glitch art, but think about what basically happened here: if we pick the color that is the least like the central one, then pixels near an edge will likely pick a color outside of the edge. 
So the pixels near an edge will \"flip\", as we see in the above images, and the closer to the edge the stronger the flip.\n\nThis means that our Symmetrical Furthest Neighbour selection method responds most strongly to edges. So perhaps by combining nearest and furthest neighbour selection we could create a form of edge detection?`","pinned":false,"mode":"js","data":null,"name":null},{"id":918,"value":"function selectDipole(cx, cy, dx, dy, source, width, height) {\n  // assuming that source is an RGBA array, the central index should be at:\n  const cIdx = (cx + cy * width) * 4;\n  // 35215 is the maximum possible value for the Kotsarenko/Ramos YIQ difference metric\n  let minDiff = 35215, maxDiff = 0;\n  let minPick = cIdx, maxPick = cIdx;\n\n  // we only want to test pixels that are within the boundaries\n  if (cx + dx < width && cy + dy < height) {\n    const pos = cIdx + (dx + dy * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      minPick = pos;\n      minDiff = diff;\n    }\n    if (diff > maxDiff) {\n      maxPick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  if (cx - dx >= 0 && cy - dy >= 0) {\n    const pos = cIdx - (dx + dy * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      minPick = pos;\n      minDiff = diff;\n    }\n    if (diff > maxDiff) {\n      maxPick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  // the rotated pair of the cross: (dy, -dx) and (-dy, dx)\n  if (cx + dy < width && cy - dx >= 0) {\n    const pos = cIdx + (dy - dx * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      minPick = pos;\n      minDiff = diff;\n    }\n    if (diff > maxDiff) {\n      maxPick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  if (cx - dy >= 0 && cy + dx < height) {\n    const pos = cIdx - (dy - dx * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff < minDiff) {\n      minPick = pos;\n      minDiff = diff;\n    }\n    if (diff > maxDiff) {\n      maxPick = 
pos;\n      maxDiff = diff;\n    }\n  }\n  \n  return {minPick, minDiff, maxPick, maxDiff};\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1721,"value":"function symmetricalNeighbourEdgeDetection({source, width, height, radius}){\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // because we're dealing with quadrants again, we only want\n  // 1/4 of the kernel weights plus the central pixel.\n  for(let x = radius ; x < kernel.length; x++) {\n    const row = kernel[x];\n    for (let y = radius + 1; y < row.length; y++) {\n      totalWeight += row[y];\n    }\n  }\n  const centerWeight = kernel[radius][radius];\n  totalWeight += centerWeight;\n\n  const norm = 1 / totalWeight\n\n  const {min, max, abs, round} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      let rf = source[cIdx] * centerWeight;\n      let gf = source[cIdx + 1] * centerWeight;\n      let bf = source[cIdx + 2] * centerWeight;\n      let rn = rf;\n      let gn = gf;\n      let bn = bf;\n      for (let dx = 1; dx <= radius; dx++) {\n        const row = kernel[radius + dx];\n        for (let dy = 0; dy <= radius; dy++) {\n          const {minPick, maxPick} = selectDipole(x, y, dx, dy, source, width, height);\n          const weight = row[radius + dy];\n          rf += source[maxPick] * weight;\n          gf += source[maxPick + 1] * weight;\n          bf += source[maxPick + 2] * weight;\n          rn += source[minPick] * weight;\n          gn += source[minPick + 1] * weight;\n          bn += source[minPick + 2] * weight;\n        }\n      }\n      // In order to show whether the edge is brighter or darker,\n      // we subtract the difference from 127.5, or neutral grey\n      target[cIdx] = round(127.5 - (rn - rf) * norm);\n      target[cIdx+1] = round(127.5 - (gn - gf) 
* norm);\n      target[cIdx+2] = round(127.5 - (bn - bf) * norm);\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1742,"value":"symmetricalNeighbourEdgeDetectTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(symmetricalNeighbourEdgeDetection, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":252,"value":"function selectFurthest(cx, cy, dx, dy, source, width, height) {\n  const cIdx = (cx + cy * width) * 4;\n  let maxDiff = 0;\n  let pick = cIdx;\n\n  // we only want to test pixels that are inside the boundaries\n  if (cx + dx < width && cy + dy < height) {\n    const pos = cIdx + (dx + dy * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff > maxDiff) {\n      pick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  if (cx - dx >= 0 && cy - dy >= 0) {\n    const pos = cIdx - (dx + dy * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff > maxDiff) {\n      pick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  if (cx + dy < width && cy - dx >= 0) {\n    const pos = cIdx + (dy - dx * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff > maxDiff) {\n      pick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  if (cx - dy >= 0 && cy + dx < height) {\n    const pos = cIdx - (dy - dx * width) * 4;\n    const diff = colorDelta(source, cIdx, pos);\n    if (diff > maxDiff) {\n      pick = pos;\n      maxDiff = diff;\n    }\n  }\n  \n  return pick;\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1759,"value":"symmetricalNeighbourEdgeDetectKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(symmetricalNeighbourEdgeDetection, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2234,"value":"// A version that shows the absolute value of the change - highlights which color is 
without including \"direction\" of change\nfunction symmetricalNeighbourEdgeDetectionAbsolute({source, width, height, radius}){\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // because we're dealing with quadrants again, we only want\n  // 1/4 of the kernel weights plus the central pixel.\n  for(let x = radius ; x < kernel.length; x++) {\n    const row = kernel[x];\n    for (let y = radius + 1; y < row.length; y++) {\n      totalWeight += row[y];\n    }\n  }\n  const centerWeight = kernel[radius][radius];\n  totalWeight += centerWeight;\n\n  const norm = 1 / totalWeight\n\n  const {min, max, abs, round} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      let rf = source[cIdx] * centerWeight;\n      let gf = source[cIdx + 1] * centerWeight;\n      let bf = source[cIdx + 2] * centerWeight;\n      let rn = rf;\n      let gn = gf;\n      let bn = bf;\n      for (let dx = 1; dx <= radius; dx++) {\n        const row = kernel[radius + dx];\n        for (let dy = 0; dy <= radius; dy++) {\n          const {minPick, maxPick} = selectDipole(x, y, dx, dy, source, width, height);\n          const weight = row[radius + dy];\n          rf += source[maxPick] * weight;\n          gf += source[maxPick + 1] * weight;\n          bf += source[maxPick + 2] * weight;\n          rn += source[minPick] * weight;\n          gn += source[minPick + 1] * weight;\n          bn += source[minPick + 2] * weight;\n        }\n      }\n      target[cIdx] = abs(rn - rf) * norm;\n      target[cIdx+1] = abs(gn - gf) * norm;\n      target[cIdx+2] = abs(bn - bf) * norm;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2241,"value":"symmetricalNeighbourEdgeDetectAbsoluteTulip = {\n  const blurParams = 
makeFilterParams(tulipData, 10);\n  return makeFilterWorker(symmetricalNeighbourEdgeDetectionAbsolute, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2238,"value":"symmetricalNeighbourEdgeDetectAbsoluteKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(symmetricalNeighbourEdgeDetectionAbsolute, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1715,"value":"md`Looks like it works! Even the subtle brickwork in the kissing couple photo is slightly highlighted.\n\nNow that we have edge-detection we can use it to sharpen images. As a point of comparison, let's first look at a more traditional method: the [unsharp mask](https://en.wikipedia.org/wiki/Unsharp_masking). It turns out that once you have a Gaussian blur, you can quite easily implement a basic unsharp mask by subtracting the inverted version of the blurred image from the original image. The linked wikipedia article explains it in more detail, but here is an implementation:`","pinned":false,"mode":"js","data":null,"name":null},{"id":231,"value":"function unsharpMask({source, width, height, radius}) {\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // we need a negative weight to get an unsharp-mask effect\n  for(let i = 0; i < kernel.length; i++) {\n    const row = kernel[i];\n    for(let j = 0; j < kernel.length; j++) {\n      row[j] = -row[j];\n    }\n  }\n  // the central kernel has to be offset by the negative weight\n  const centerWeight = 2 + kernel[radius][radius];\n  kernel[radius][radius] = centerWeight;\n\n  const {min, max} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      let r = 0;\n      let g = 0;\n      let b = 0;\n      for (let dy = -radius; dy <= radius; dy++) {\n        const line_dy = min(max(0, y + dy), 
height-1) * width;\n        // funny enough, because our kernel is symmetrical we can\n        // interchangeably do kernel[x][y] or kernel[y][x]\n        const column = kernel[radius + dy];\n        for (let dx = -radius; dx <= radius; dx++) {\n          const idx = (min(max(0, x + dx), width-1) + line_dy) * 4;\n          const weight = column[radius + dx];\n          r += source[idx] * weight;\n          g += source[idx + 1] * weight;\n          b += source[idx + 2] * weight;\n        }\n      }\n      const cIdx = x*4 + line;\n      target[cIdx] = r;\n      target[cIdx+1] = g;\n      target[cIdx+2] = b;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":235,"value":"unsharpMaskTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(unsharpMask, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1025,"value":"unsharpMaskKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(unsharpMask, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":302,"value":"md`There we go! One unsharp mask, with all the familiar halos and over-sharpened noise. \n\nSo how to turn our earlier symmetric neighbour edge-detection method into a sharpening filter? Well, the output of the first edge-detection function showed darker or lighter pixels near edges, with lightness depending on which \"side\" of the edge our filter was on - a bit like a positive or negative sign. We did that by subtracting the difference of the sum of the nearest and furthest neighbors from a neutral gray value. 
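As a quick aside, the per-channel arithmetic of that edge detector boils down to something very small. Here's a hypothetical single-channel sketch (the name `edgeValue` and the numbers are invented for illustration; `n` and `f` stand for the weighted nearest and furthest sums, `norm` for `1 / totalWeight`):

```javascript
// Hypothetical single-channel sketch of the edge-detection arithmetic:
// n and f are the weighted sums over the nearest and furthest picks
// (both seeded with the centre pixel), norm is 1 / totalWeight.
function edgeValue(n, f, norm) {
  return Math.round(127.5 - (n - f) * norm);
}

// On a flat area the nearest and furthest sums agree, so we get neutral grey:
console.log(edgeValue(1000, 1000, 1 / 8)); // 128 (127.5 rounds up)
// Where the nearest side is brighter than the furthest side, the output darkens:
console.log(edgeValue(1200, 1000, 1 / 8)); // 103
```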
All we have to do now is replace this neutral grey value with the value of the original pixel:`","pinned":false,"mode":"js","data":null,"name":null},{"id":1807,"value":"function symmetricalNeighbourSharpen({source, width, height, radius}){\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // because we're dealing with quadrants again, we only want\n  // 1/4 of the kernel weights plus the central pixel.\n  for(let x = radius ; x < kernel.length; x++) {\n    const row = kernel[x];\n    for (let y = radius + 1; y < row.length; y++) {\n      totalWeight += row[y];\n    }\n  }\n  const centerWeight = kernel[radius][radius];\n  totalWeight += centerWeight;\n\n  const norm = 1 / totalWeight\n\n  const {min, max, abs, round} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      const r0 = source[cIdx];\n      const g0 = source[cIdx + 1];\n      const b0 = source[cIdx + 2];\n      let rf = r0 * centerWeight;\n      let gf = g0 * centerWeight;\n      let bf = b0 * centerWeight;\n      let rn = rf;\n      let gn = gf;\n      let bn = bf;\n      for (let dx = 1; dx <= radius; dx++) {\n        const row = kernel[radius + dx];\n        for (let dy = 0; dy <= radius; dy++) {\n          const {minPick, maxPick} = selectDipole(x, y, dx, dy, source, width, height);\n          const weight = row[radius + dy];\n          rf += source[maxPick] * weight;\n          gf += source[maxPick + 1] * weight;\n          bf += source[maxPick + 2] * weight;\n          rn += source[minPick] * weight;\n          gn += source[minPick + 1] * weight;\n          bn += source[minPick + 2] * weight;\n        }\n      }\n      // instead of neutral grey, we now apply\n      // the difference to the original pixel\n      target[cIdx] = round(r0 + (rn - rf) * norm);\n      target[cIdx+1] = 
round(g0 + (gn - gf) * norm);\n      target[cIdx+2] = round(b0 + (bn - bf) * norm);\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1810,"value":"symmetricalNeighbourSharpenTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(symmetricalNeighbourSharpen, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1815,"value":"symmetricalNeighbourSharpenKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(symmetricalNeighbourSharpen, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2035,"value":"md`And out pops a sharpening filter. The halos are even more pronounced than with the regular unsharp mask though. We'll try to fix that in a bit, but notice too that there are some enhancements: with the plain unsharp mask filter, the grain of the kissing couple photo got more pronounced (more extreme black and white pixels). This version of sharpening seems to avoid that problem, while still keeping the grain.\n\nSo about those halos. One way to think about what causes this is that we push the pixels away from the furthest neighbours, and pull them towards the closest neighbours. However, we do this push by subtracting the differences. Which means that bigger differences will have a proportionally larger effect. If you then add the realization that the *furthest* neighbour will likely have a bigger difference than the *nearest* neighbour, kind of by definition, it should not be surprising any more that we end up overshooting our target. 
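To see the overshoot with concrete (made-up) single-channel numbers: suppose a centre pixel of 100 sits next to an edge, with its nearest pick at 110 and its furthest pick at 200.

```javascript
// Made-up numbers illustrating the halo overshoot on one channel:
const p0 = 100;        // centre pixel
const nearest = 110;   // pick on the similar side of the edge
const furthest = 200;  // pick on the dissimilar side of the edge
const pull = nearest - p0;   // +10, toward the similar neighbours
const push = furthest - p0;  // +100, away from the dissimilar ones
console.log(p0 + (pull - push)); // 10: far darker than anything nearby - a halo
```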
But that also means we have an easy fix: we have to scale the \"push\" of the furthest difference down to the \"pull\" of the nearest neighbour!`","pinned":false,"mode":"js","data":null,"name":null},{"id":2054,"value":"function symmetricalNeighbourSharpenNoHalo({source, width, height, radius}){\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length);\n  let totalWeight = 0;\n  const kernel = bellcurvish2D(radius);\n\n  // because we're dealing with quadrants again, we only want\n  // 1/4 of the kernel weights plus the central pixel.\n  for(let x = radius ; x < kernel.length; x++) {\n    const row = kernel[x];\n    for (let y = radius + 1; y < row.length; y++) {\n      totalWeight += row[y];\n    }\n  }\n  const centerWeight = kernel[radius][radius];\n  totalWeight += centerWeight;\n\n  const norm = 1 / totalWeight\n\n  const {min, max, abs, round} = Math;\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      const r0 = source[cIdx];\n      const g0 = source[cIdx + 1];\n      const b0 = source[cIdx + 2];\n      let rf = r0 * centerWeight;\n      let gf = g0 * centerWeight;\n      let bf = b0 * centerWeight;\n      let rn = rf;\n      let gn = gf;\n      let bn = bf;\n      let diffF = 0;\n      let diffN = 0;\n      for (let dx = 1; dx <= radius; dx++) {\n        const row = kernel[radius + dx];\n        for (let dy = 0; dy <= radius; dy++) {\n          const {minPick, minDiff, maxPick, maxDiff} = selectDipole(x, y, dx, dy, source, width, height);\n          diffF += maxDiff;\n          diffN += minDiff;\n          const weight = row[radius + dy];\n          rf += source[maxPick] * weight;\n          gf += source[maxPick + 1] * weight;\n          bf += source[maxPick + 2] * weight;\n          rn += source[minPick] * weight;\n          gn += source[minPick + 1] * weight;\n          bn += source[minPick + 2] * weight;\n        }\n     
 }\n      const diffscale = (diffN > 0 && diffF > 0) ? diffN / diffF : 1;\n      rf = (rf * norm - r0) * diffscale;\n      gf = (gf * norm - g0) * diffscale;\n      bf = (bf * norm - b0) * diffscale;\n      rn = rn * norm - r0;\n      gn = gn * norm - g0;\n      bn = bn * norm - b0;\n      // instead of neutral grey, we now apply\n      // the difference to the original pixel\n      target[cIdx] = round(r0 + (rn - rf));\n      target[cIdx+1] = round(g0 + (gn - gf));\n      target[cIdx+2] = round(b0 + (bn - bf));\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":2056,"value":"symmetricalNeighbourSharpenNoHaloTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(symmetricalNeighbourSharpenNoHalo, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2065,"value":"symmetricalNeighbourSharpenNoHaloKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(symmetricalNeighbourSharpenNoHalo, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2078,"value":"md`Well, the halos are indeed gone! But isn't this just like our earlier Gaussian SNN blur now? It does look similar, but upon closer inspection we can see that contrast in fine details has been enhanced in this version: the dust in the upper-left corner of the tulip photo was smoothed away in the GSNNB filter, while it survives here. Similarly, small highlights in the kissing couple photo, like the lights in the background and the reflections on the bike, were slightly blurred out with the GSNNB, while their contrast is higher in these outputs.`","pinned":false,"mode":"js","data":null,"name":null},{"id":911,"value":"md`## Two-Pass Symmetric Nearest Neighbour Filters?\n\nRegular blur functions have the annoying property that a single-pass filter has to average over a surface area, which scales badly. 
To be exact: if our blurring radius is ${tex`n`} pixels, our total number of pixels being averaged over is actually ${tex`(2n+1)^2`} pixels. One optimization is to blur in two passes instead: one horizontal, and then one vertical. Because these blur kernels are [separable](https://en.wikipedia.org/wiki/Separable_filter), this is visually almost the same as long as we avoid severe rounding errors, but instead only needs to process ${tex`2(2n+1)`} pixels per pixel.\n\nHowever, applying the SNN/SFN approaches in this case would not work in the exact same way: we would have to first apply it to the horizontal blur, and then to the vertical blur. I have no idea what the expected effects on the results would be, so let's see what it does! I'll skip rendering the plain blur versions, because they have (almost) identical output to the originals.`","pinned":false,"mode":"js","data":null,"name":null},{"id":669,"value":"function boxblurTwoPass({source, width, height, radius}) {\n  let target = new Uint8ClampedArray(source.length),\n      buffer = new Uint32Array(source.length);\n  const factor = 1 / ((2*radius + 1)*(2*radius + 1));\n\n  const {min, max} = Math;\n  // Horizontal pass\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      let r = 0;\n      let g = 0;\n      let b = 0;\n      for (let dx = -radius; dx <= radius; dx++) {\n        const idx = min(max(0, x + dx), width-1) * 4 + line;\n        r += source[idx];\n        g += source[idx + 1];\n        b += source[idx + 2];\n      }\n      const cIdx = x*4 + line;\n      buffer[cIdx] = r;\n      buffer[cIdx+1] = g;\n      buffer[cIdx+2] = b;\n    }\n  }\n  \n  // Vertical pass\n  for(let y = 0; y < height; y++) {\n    for (let x = 0; x < width; x++) {\n      let r = 0;\n      let g = 0;\n      let b = 0;\n      const cIdx = (x + y * width) * 4;\n      for (let dy = -radius; dy <= radius; dy++) {\n        const idx = (x + min(max(0, y + dy), height-1)*width) * 4;\n        r += 
buffer[idx];\n        g += buffer[idx + 1];\n        b += buffer[idx + 2];\n      }\n      target[cIdx] = r * factor | 0;\n      target[cIdx+1] = g * factor | 0;\n      target[cIdx+2] = b * factor | 0;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":680,"value":"function boxblurSNNTwoPass({source, width, height, radius}) {\n  let target = new Uint8ClampedArray(source.length),\n      buffer = new Float64Array(source.length);\n  const factor = 1 / (radius + 1);\n\n  const {min, max} = Math;\n  // Horizontal pass\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      let r = source[cIdx];\n      let g = source[cIdx];\n      let b = source[cIdx];\n      for (let dx = 1; dx <= radius; dx++) {\n        const pick = selectNearest(x, y, dx, 0, source, width, height);\n        r += source[pick];\n        g += source[pick + 1];\n        b += source[pick + 2];\n      }\n      buffer[cIdx] = r * factor;\n      buffer[cIdx+1] = g * factor;\n      buffer[cIdx+2] = b * factor;\n    }\n  }\n  \n  for(let y = 0; y < height; y++) {\n    for (let x = 0; x < width; x++) {\n      const cIdx = (x + y * width) * 4;\n      let r = buffer[cIdx];\n      let g = buffer[cIdx+1];\n      let b = buffer[cIdx+2];\n      for (let dy = 1; dy <= radius; dy++) {\n        const pick = selectNearest(x, y, 0, dy, buffer, width, height);\n        r += buffer[pick];\n        g += buffer[pick + 1];\n        b += buffer[pick + 2];\n      }\n      target[cIdx] = r * factor | 0;\n      target[cIdx+1] = g * factor | 0;\n      target[cIdx+2] = b * factor | 0;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":684,"value":"boxblurSNNTwoPassTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(boxblurSNNTwoPass, 
blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1053,"value":"boxblurSNNTwoPassKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(boxblurSNNTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":690,"value":"md`That turned out... surprisingly good actually? And since it now scales a lot better we can try using a big radius and see what the results are:`","pinned":false,"mode":"js","data":null,"name":null},{"id":778,"value":"bigboxblurTwoPassTulipSNN = {\n  const blurParams = makeFilterParams(tulipData, 50);\n  return makeFilterWorker(boxblurSNNTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1064,"value":"bigboxblurTwoPassKissSNN = {\n  const blurParams = makeFilterParams(kissData, 50);\n  return makeFilterWorker(boxblurSNNTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1837,"value":"md`... I'm not sure what I was expecting: I mentioned before that when the radius is too big, we get \"inversion\" problems. There is nothing about the two-pass version that would avoid this problem, so obviously we should see that effect too. \n\nSo if this works for box blur, we should expect the Gaussian blur to work as well. 
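Before moving on, a quick sanity check on the earlier scaling claim (a throwaway sketch, not a notebook cell):

```javascript
// Per-pixel work: single-pass 2-D kernel vs separable two-pass version,
// for the radii used in this notebook.
const perPixelSinglePass = n => (2 * n + 1) ** 2;
const perPixelTwoPass = n => 2 * (2 * n + 1);

console.log(perPixelSinglePass(10), perPixelTwoPass(10)); // 441 42
console.log(perPixelSinglePass(50), perPixelTwoPass(50)); // 10201 202
```

At radius 50 the two-pass version does roughly fifty times less work per pixel, which is why the big-radius experiment above is feasible at all.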
We need the one-dimensional version of our bellcurve approximation, but otherwise the principle is the same:`","pinned":false,"mode":"js","data":null,"name":null},{"id":1363,"value":"function bellcurvish1D(radius) {\n  const p = new Uint32Array(radius+1);\n  for (let i = 0; i <= (radius+1)/2; i++){\n    for (let j = 0; j <= radius/2; j++) {\n      p[i+j]++;\n    }\n  }\n  const q = new Uint32Array(p.length*2 - 1);\n  for (let i = 0; i < p.length; i++){\n    for (let j = 0; j < p.length; j++) {\n      q[i+j] += p[j];\n    }\n  }\n\n  const quadratic = [];\n  for (let v of q){\n    quadratic.push(v*v);\n  }\n  return quadratic;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":701,"value":"function gaussianBlurSNNTwoPass({source, width, height, radius}) {\n  let target = new Uint8ClampedArray(source.length),\n      buffer = new Float64Array(source.length);\n  const kernel = bellcurvish1D(radius);\n  let totalWeight = 0;\n  for(let i = radius; i < kernel.length; i++) totalWeight += kernel[i];\n  const norm = 1 / totalWeight;\n  \n  const {min, max} = Math;\n  \n  // Horizontal pass\n  const centerWeight = kernel[radius];\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      let r = source[cIdx] * centerWeight;\n      let g = source[cIdx+1] * centerWeight;\n      let b = source[cIdx+2] * centerWeight;\n      for (let dx = 1; dx <= radius; dx++) {\n        const pick = selectNearest(x, y, dx, 0, source, width, height);\n        const weight = kernel[radius + dx]\n        r += source[pick] * weight;\n        g += source[pick + 1] * weight;\n        b += source[pick + 2] * weight;\n      }\n      buffer[cIdx] = r * norm;\n      buffer[cIdx+1] = g * norm;\n      buffer[cIdx+2] = b * norm;\n    }\n  }\n  \n  // Vertical pass\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      
let r = buffer[cIdx] * centerWeight;\n      let g = buffer[cIdx+1] * centerWeight;\n      let b = buffer[cIdx+2] * centerWeight;\n      for (let dy = 1; dy <= radius; dy++) {\n        const pick = selectNearest(x, y, 0, dy, buffer, width, height);\n        const weight = kernel[radius + dy];\n        r += buffer[pick] * weight;\n        g += buffer[pick + 1] * weight;\n        b += buffer[pick + 2] * weight;\n      }\n      target[cIdx] = r * norm;\n      target[cIdx+1] = g * norm;\n      target[cIdx+2] = b * norm;\n      target[cIdx+3] = 255;\n    }\n  }\n\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":704,"value":"gaussianBlurSNNTwoPassTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(gaussianBlurSNNTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":1056,"value":"gaussianBlurSNNTwoPassKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(gaussianBlurSNNTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2109,"value":"md`Also working fine. 
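As an aside, we can eyeball the 1-D kernel itself. Reproducing bellcurvish1D from the cell above and printing it for a small radius shows the symmetric, centre-peaked weights we'd hope for:

```javascript
// Copy of the notebook's bellcurvish1D, for a quick look at its output.
function bellcurvish1D(radius) {
  // build a small box-convolved ramp...
  const p = new Uint32Array(radius + 1);
  for (let i = 0; i <= (radius + 1) / 2; i++) {
    for (let j = 0; j <= radius / 2; j++) {
      p[i + j]++;
    }
  }
  // ...convolve it out to the full kernel width...
  const q = new Uint32Array(p.length * 2 - 1);
  for (let i = 0; i < p.length; i++) {
    for (let j = 0; j < p.length; j++) {
      q[i + j] += p[j];
    }
  }
  // ...and square to sharpen the peak
  const quadratic = [];
  for (let v of q) {
    quadratic.push(v * v);
  }
  return quadratic;
}

console.log(bellcurvish1D(2)); // [ 1, 9, 16, 9, 1 ]
```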
And finally, let's try adapting our no-halo sharpening:`","pinned":false,"mode":"js","data":null,"name":null},{"id":2084,"value":"// This is becoming quite the mouthful\nfunction symmetricalNeighbourSharpenNoHaloTwoPass({source, width, height, radius}){\n  if (!(radius > 0)) return source.slice();\n  let target = new Uint8ClampedArray(source.length),\n      buffer = new Float64Array(source.length);\n  const kernel = bellcurvish1D(radius);\n  let totalWeight = 0;\n  const centerWeight = kernel[radius];\n  totalWeight += centerWeight;\n  for(let i = radius+1; i < kernel.length; i++) {\n    // we apply the unsharp mask in two passes, so we should\n    // reduce the weights a bit in order to avoid double-sharpening\n    const v = kernel[i] * 0.5;\n    totalWeight += v;\n    kernel[i] = v;\n  }\n  const norm = 1 / totalWeight;\n\n  const {min, max, abs, round} = Math;\n  \n  // Horizontal pass  \n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      const r0 = source[cIdx];\n      const g0 = source[cIdx + 1];\n      const b0 = source[cIdx + 2];\n      let rf = r0 * centerWeight;\n      let gf = g0 * centerWeight;\n      let bf = b0 * centerWeight;\n      let rn = rf;\n      let gn = gf;\n      let bn = bf;\n      let diffF = 0;\n      let diffN = 0;\n      for (let dx = 1; dx <= radius; dx++) {\n        const {minPick, minDiff, maxPick, maxDiff} = selectDipole(x, y, dx, 0, source, width, height);\n        diffF += maxDiff;\n        diffN += minDiff;\n        const weight = kernel[radius + dx];\n        rf += source[maxPick] * weight;\n        gf += source[maxPick + 1] * weight;\n        bf += source[maxPick + 2] * weight;\n        rn += source[minPick] * weight;\n        gn += source[minPick + 1] * weight;\n        bn += source[minPick + 2] * weight;\n      }\n      const diffscale = (diffN > 0 && diffF > 0) ? 
diffN / diffF : 1;\n      rf = (rf * norm - r0) * diffscale;\n      gf = (gf * norm - g0) * diffscale;\n      bf = (bf * norm - b0) * diffscale;\n      rn = rn * norm - r0;\n      gn = gn * norm - g0;\n      bn = bn * norm - b0;\n      buffer[cIdx] = rn - rf;\n      buffer[cIdx+1] = gn - gf;\n      buffer[cIdx+2] = bn - bf;\n    }\n  }\n\n  // Vertical pass\n  for(let y = 0; y < height; y++) {\n    const line = y * width * 4;\n    for (let x = 0; x < width; x++) {\n      const cIdx = x*4 + line;\n      const r0 = source[cIdx];\n      const g0 = source[cIdx + 1];\n      const b0 = source[cIdx + 2];\n      let rf = buffer[cIdx] * centerWeight;\n      let gf = buffer[cIdx + 1] * centerWeight;\n      let bf = buffer[cIdx + 2] * centerWeight;\n      let rn = rf;\n      let gn = gf;\n      let bn = bf;\n      let diffF = 0;\n      let diffN = 0;\n      for (let dy = 1; dy <= radius; dy++) {\n        const {minPick, minDiff, maxPick, maxDiff} = selectDipole(x, y, dy, 0, source, width, height);\n        diffF += maxDiff;\n        diffN += minDiff;\n        const weight = kernel[radius + dy];\n        rf += buffer[maxPick] * weight;\n        gf += buffer[maxPick + 1] * weight;\n        bf += buffer[maxPick + 2] * weight;\n        rn += buffer[minPick] * weight;\n        gn += buffer[minPick + 1] * weight;\n        bn += buffer[minPick + 2] * weight;\n      }\n      const diffscale = (diffN > 0 && diffF > 0) ? 
diffN / diffF : 1;\n      target[cIdx] = round(r0 + (rn - rf * diffscale) * norm);\n      target[cIdx+1] = round(g0 + (gn - gf * diffscale) * norm);\n      target[cIdx+2] = round(b0 + (bn - bf * diffscale) * norm);\n      target[cIdx+3] = 255;\n    }\n  }\n  return target;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":2090,"value":"symmetricalNeighbourSharpenNoHaloTwoPassTulip = {\n  const blurParams = makeFilterParams(tulipData, 10);\n  return makeFilterWorker(symmetricalNeighbourSharpenNoHaloTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2091,"value":"\nsymmetricalNeighbourSharpenNoHaloTwoPassKiss = {\n  const blurParams = makeFilterParams(kissData, 10);\n  return makeFilterWorker(symmetricalNeighbourSharpenNoHaloTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2142,"value":"md`Well, our two-pass version does sharpen, but it also re-introduces the halos and amplifies the grain noise again. Less so than a naive unsharp mask though. 
It may also be a matter of tweaking the strength of the filter through other means, because if I set the radius to 5 we seem to hit a sweet spot in sharpening without visible halos:`","pinned":false,"mode":"js","data":null,"name":null},{"id":2164,"value":"symmetricalNeighbourSharpenNoHaloTwoPassTulip5 = {\n  const blurParams = makeFilterParams(tulipData, 5);\n  return makeFilterWorker(symmetricalNeighbourSharpenNoHaloTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2161,"value":"\nsymmetricalNeighbourSharpenNoHaloTwoPassKissR5 = {\n  const blurParams = makeFilterParams(kissData, 5);\n  return makeFilterWorker(symmetricalNeighbourSharpenNoHaloTwoPass, blurParams);\n}","pinned":false,"mode":"js","data":null,"name":null},{"id":2171,"value":"md`There might be some way to formulate the ideal sharpening strength automatically, but right now I don't see it.\n\n## Conclusions\n\nSo, as we've seen, with the simple addition of a pixel selection method (nearest or furthest neighbour in the color space) we can turn simple blurring filters into edge-enhancing smoothing filters. One issue with the original algorithm is the area-inversion effect when the radius covers larger areas than the fine detail. By using a Gaussian blur kernel we can partially address this issue. Furthermore, by combining nearest and furthest neighbour selection, we can perform edge detection and sharpening, and even come up with a way to auto-scale an unsharp mask to mostly avoid halos.\n\nFinally, we tried converting the filters into two-pass versions, which scale significantly better. It seems to work fairly well for blurring filters, but has mixed results for the sharpening version. There might be some ways to address the latter issues though.\n\nAlso, if anyone responsible for implementing GIMP filters is reading this: maybe we can expand the existing SNN filter to include a few of the ones explored here? 
;)\n\nThanks for reading, hope it was interesting!`","pinned":false,"mode":"js","data":null,"name":null},{"id":166,"value":"html`<hr>`","pinned":false,"mode":"js","data":null,"name":null},{"id":38,"value":"tulipData = {\n  width; // trigger resize when going to fullscreen, or landscape on mobile\n  const ctx = DOM.context2d(tulip.width, tulip.height, 1);\n  ctx.drawImage(tulip, 0, 0, ctx.canvas.width, ctx.canvas.height);\n  const imageData  = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);\n  return imageData;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1005,"value":"kissData = {\n  width;\n  const ctx = DOM.context2d(kiss.width, kiss.height, 1);\n  ctx.drawImage(kiss, 0, 0, ctx.canvas.width, ctx.canvas.height);\n  const imageData  = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);\n  return imageData;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1028,"value":"function makeFilterParams(imageData, radius) {\n  return {\n    source: imageData.data,\n    width: imageData.width,\n    height: imageData.height,\n    radius,\n  };\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":544,"value":"// Assumes the filter will return a typed array\nfunction makeFilterWorker(filter, params) {\n  const script = makeFilterWorkerScript(filter);\n  \n  const context = DOM.context2d(params.width, params.height, 1);\n  const imageData = context.getImageData(0, 0, params.width, params.height);\n  // use original image to start with\n  imageData.data.set(params.source);\n  context.putImageData(imageData, 0, 0);\n  \n  // emphasize that this is the unrendered image\n  context.font = '40px serif';\n  context.fillStyle = '#00000088';\n  context.fillRect(0, 0, params.width, params.height);\n  context.fillText('Rendering...', 24, 64);\n  context.fillStyle = 'white';\n  context.fillText('Rendering...', 20, 60);\n  context.strokeStyle = 'black';\n  context.strokeText('Rendering...', 20, 60);\n  \n  const worker = new 
Worker(script);\n  const messaged = ({data: {rgba}}) => {\n    imageData.data.set(rgba);\n    context.putImageData(imageData, 0, 0);\n  };\n  invalidation.then(() => worker.terminate());\n  worker.onmessage = messaged;\n  worker.postMessage({params});\n  context.canvas.style = \"image-rendering: crisp-edges; image-rendering: pixelated\"\n  return context.canvas;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":1112,"value":"function identityFilter({source}){\n  return source;\n}","pinned":true,"mode":"js","data":null,"name":null},{"id":568,"value":"// Assumes the filter will return a typed array\nfunction makeFilterWorkerScript(filter) {\n  \n  const filterScript = filter.toString();\n  const script = `\n    // Inline all the generic functions used by the filter\n    ${colorDelta.toString()}\n    ${filterScript.indexOf(\"selectNearest\") !== -1 ? selectNearest.toString() : \"\"}\n    ${filterScript.indexOf(\"selectFurthest\") !== -1 ? selectFurthest.toString() : \"\"}\n    ${filterScript.indexOf(\"selectDipole\") !== -1 ? selectDipole.toString() : \"\"}\n    ${filterScript.indexOf(\"bellcurvish2D\") !== -1 ? bellcurvish2D.toString() : \"\"}\n    ${filterScript.indexOf(\"bellcurvish1D\") !== -1 ? bellcurvish1D.toString() : \"\"}\n    // Inline the filter\n    ${filterScript}\n    \n    onmessage = event => {\n      const {params} = event.data;\n      const rgba = ${filter.name}(params);\n      postMessage({rgba}, [rgba.buffer]);\n      close();\n    }\n  `;\n  const url = URL.createObjectURL(new Blob([script], {type: 'text/javascript'}))\n  invalidation.then(() => URL.revokeObjectURL(url));\n  return url;\n}","pinned":true,"mode":"js","data":null,"name":null}],"resolutions":[],"schedule":null,"last_view_time":null}