Z antialiasing support/3D antialiasing

Alex W 2019-8-28

I've been experimenting with antialiasing support and so far I'm pleased - the AA performance in X/Y seems great. However, it appears there's no support at all for AA in Z, which would help reduce layer lines. It seems like this should be eminently possible: while curing a given layer, each voxel is bounded on one to five sides by the already-cured model and on the remaining side by the build window, and the resin will preferentially cure onto the model, making it possible to grow a partial layer height off of it. See https://www.youtube.com/watch?v=5qTAmPrHLow&feature=youtu.be&t=257

I suspect that the current AA algorithm operates by taking a slice of the model at some arbitrary Z within the layer to be rendered, then antialiasing that slice with a traditional 2D technique (which the 2x, 4x, and 8x options for the setting would tend to suggest).
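
For reference, here's a minimal sketch of what I imagine that per-slice pipeline looks like - purely my assumption about how the 2x/4x/8x setting is applied, not ChiTuBox's actual code. The idea: rasterize the slice mask at k times the panel resolution, then box-average each k-by-k block down to one grayscale pixel.

```python
# Assumed per-slice supersampling AA pass (my guess, not ChiTuBox internals).
import numpy as np

def supersample_slice(mask_hi: np.ndarray, k: int) -> np.ndarray:
    """mask_hi: binary slice rasterized at (k*H, k*W).
    Returns an (H, W) grayscale image in 0.0..1.0 where each pixel
    is the mean of its k x k block of sub-pixels."""
    H, W = mask_hi.shape[0] // k, mask_hi.shape[1] // k
    blocks = mask_hi[:H * k, :W * k].reshape(H, k, W, k)
    return blocks.mean(axis=(1, 3))
```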

I propose instead that the algorithm should, for any given voxel, calculate the total volume of the model inside that voxel and compute the inclusion ratio (model volume inside the voxel)/(total voxel volume). For instance, imagine the trivial case of a 45° slope: a plane through 4 of the voxel's 8 corners, bisecting it. In this case the inclusion ratio would be 0.5. I then propose setting the pixel intensity for this voxel to 0.5 (or 128,128,128, assuming the panel is operated as a regular 8-bit RGB panel). Of course, this pixel value would need to be corrected for the nonlinearity of luminous intensity vs. curing rate, per https://www.youtube.com/watch?v=5qTAmPrHLow&feature=youtu.be&t=101 - math I can only assume already exists in the AA implementation anyway, although I can't be sure, since it appears that enabling AA results in intermediate voxel volumes that are not spatially linear.
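
To make that concrete, here's a rough sketch of the per-voxel calculation I have in mind. Everything in it is illustrative: the inside/outside query, the sub-sample count, and especially the gamma=2 stand-in for the real exposure-vs-cure curve are my own assumptions, not anything measured or pulled from ChiTuBox.

```python
# Rough sketch of the proposed occupancy -> grayscale voxel algorithm.
# curing_correction is a placeholder; the real curve would come from
# characterizing the resin (cure depth vs. exposure is nonlinear).
import numpy as np

def voxel_occupancy(inside_fn, x0, y0, z0, pitch, n=8):
    """Estimate the fraction of a voxel (corner at x0,y0,z0, edge length
    `pitch`) occupied by the model, by testing an n*n*n grid of sub-points.
    inside_fn(points) -> bool array is whatever inside/outside test the
    slicer already has for the mesh."""
    offs = (np.arange(n) + 0.5) / n * pitch
    xs, ys, zs = np.meshgrid(x0 + offs, y0 + offs, z0 + offs, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    return inside_fn(pts).mean()

def curing_correction(fraction, gamma=2.0):
    """Placeholder nonlinearity mapping desired cured fraction to relative
    exposure; gamma=2.0 is made up purely for illustration."""
    return fraction ** (1.0 / gamma)

def voxel_to_gray(fraction, gamma=2.0):
    """Occupancy fraction -> 8-bit pixel value for the LCD."""
    return int(round(255 * curing_correction(fraction, gamma)))

# Example: the 45-degree case above. The plane z = x passes through 4 corners
# of this voxel, and the half-space z <= x fills half of it, so occupancy ~0.5.
inside = lambda p: p[:, 2] <= p[:, 0]
frac = voxel_occupancy(inside, 0.0, 0.0, 0.0, pitch=0.05)
print(frac, voxel_to_gray(frac))  # ~0.56 (sampling bias around 0.5), 191
```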

I wrote up a bit more detail, including some images of test prints, here: https://www.alexwhittemore.com/efficacy-of-antialiasing-on-msla-prints/. Mostly though, it's just a rehash of what's above.

Since antialiasing seems to be variably written as anti-aliasing or anti aliasing, I've written this sentence to make the post more searchable :)


New Post (3)
  • Guest 2019-9-9
    hi, this is TheEZhexagon. Changing the layer height as it slopes is one thing, but you've got to consider a model that may not slope and has very high detail in certain non-sloping areas. To make this work well, it needs to detect sloping and high-polygon areas.
  • SolidForm 2020-1-20
    Yeah, that would be something! A true 3D AA done on voxels, not just pixels. But that would require some truly heavy computing and a VERY good algorithm. Though this would be The Holy Grail of resin printing!!!
  • Alex W 2020-3-21
    I mean, the very algorithm I outline above may well suffice, and wouldn't require substantially greater compute power than the current implementation at all! The only reason the current implementation may be easier is that, I imagine, it's probably a built-in function of an off-the-shelf image processing library.

    As far as preserving detail goes: I don't think the notion that AA trades detail for smoothness applies in a 3D printing context, or at least, it doesn't have to. In fact, it probably makes sense to call the algorithm I'm proposing "superresolution," as Autodesk does (in the https://www.youtube.com/watch?v=5qTAmPrHLow video I linked above), rather than antialiasing.

    In 2D imaging (and also 1D signal processing), AA tends to sacrifice detail because you only have so much information in the original samples to begin with. The whole point of AA is that you can only gather as much information about reality as your sampling resolution allows, so you have to throw away a bit of information to make sure the "frequencies too high for me to know about" don't contaminate the "frequencies I'm capable of measuring." 2D AA, performed on a slice-by-slice basis like ChiTuBox currently does it, almost certainly suffers from this problem of throwing out information. I say this because it offers "2x, 4x, 8x" settings, which apply to 2D image processing (how many adjacent pixels to consider) but make no sense in the context of a superresolution algorithm like the one I outline above.

    In the context of printing, an algorithm like the one I propose DOESN'T HAVE to throw out ANY detail - we know exactly what the "true" scene is, with effectively infinite resolution, because it's described for us in vector form by the .stl! Merely calculating "how much of this voxel is occupied" and then applying a nonlinear correction for actual resin growth shouldn't cost any detail whatsoever, and indeed, should result in much GREATER detail than the current pipeline offers.
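
    Just to illustrate "the .stl already tells us the true scene": the inside/outside query for the occupancy sketch in my original post could come straight from the mesh. This is only an example wiring, assuming the open-source trimesh Python library and a watertight model.stl (hypothetical filename):

```python
# Example of feeding an exact mesh query into the occupancy sketch above.
# Assumes the trimesh library and a watertight STL.
import trimesh

mesh = trimesh.load("model.stl", force="mesh")  # hypothetical file
inside_fn = mesh.contains                       # points -> bool array
# then: voxel_occupancy(inside_fn, x0, y0, z0, pitch) per the earlier sketch
```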