Well, I don't really know if there is already a filter available covering this idea: the smoothing/denoising filters I know are either spatial or temporal. What about smoothing spatially AND temporally at the same time? E.g. you could average a 3x3x3 block across 3 consecutive frames, i.e. the middle pixel would be averaged with its neighbours in the same frame and with the corresponding neighbourhoods in the previous and next frames. I hope the explanation wasn't too complicated to follow, but I think applying existing averaging algorithms to a three-dimensional cube instead of just a two-dimensional plane would not be too complicated to implement. What do you think? bb
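Just to make the idea concrete, here's a rough C sketch of the plain 3x3x3 average (assuming a single 8-bit luma plane; borders are simply copied and there's no edge or scene-change handling yet):
Code:
#include <stdint.h>

/* Plain 3x3x3 spatio-temporal box average (sketch).
 * prev/cur/next are 8-bit luma planes (width x height, given stride);
 * dst receives the smoothed version of the current frame.
 * Borders are simply copied; no edge or scene-change handling. */
void average_3x3x3(const uint8_t *prev, const uint8_t *cur, const uint8_t *next,
                   uint8_t *dst, int width, int height, int stride)
{
    const uint8_t *frames[3] = { prev, cur, next };

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                dst[y * stride + x] = cur[y * stride + x];   /* copy border pixels */
                continue;
            }
            int sum = 0;
            for (int t = 0; t < 3; t++)                      /* previous/current/next frame */
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += frames[t][(y + dy) * stride + (x + dx)];
            dst[y * stride + x] = (uint8_t)((sum + 13) / 27);  /* rounded mean of 27 samples */
        }
    }
}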
AFAIK that is exactly what the General Convolution 3D filter does. However, it's made for VDub, not Avisynth. Something like this can easily be achieved by just chaining two filters, but the problem is that they don't work together intelligently to detect edges, sharp lines etc.
Hmm, the 3D averaging I was thinking of is not the same as chaining a spatial and a temporal filter. But the General Convolution 3D filter seems to be the solution (I don't know if it has scene-change detection, which would be nice). I'll try that one. Is there a General Convolution 3D filter available for AviSynth, too? If not, maybe someone feels challenged to implement it. There's also a lot of room for tweaking, like edge detection, etc. I think this could be a superior method of getting rid of noise in complicated sources like DV. bb
Not entirely so, because the temporal check would not be a complete 3D check - it would only compare the current pixel against the same pixel in the previous frame (one extra check per pixel). The idea is quite neat - the algorithm could work like this:
* Calculate the blurred pixel as usual.
* Compare the original pixel to the same pixel in the previous frame. The bigger the difference, the less the pixel gets blurred.
That will make low-frequency noise even easier to detect. Might actually work! I've also received a great suggestion for an adaptive threshold that adapts the threshold to the luma value of the current pixel - so there are lots of improvements in the works. First priority is still an MMX-optimized YUV version though.
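In C, that weighting step might look something like this (just a sketch; the 8-bit luma assumption and the threshold parameter are made up for illustration):
Code:
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the weighting idea above: blend each pixel toward its spatially
 * blurred value, but reduce the blending as the difference to the previous
 * frame grows. The 8-bit luma assumption and 'threshold' (0..255) are
 * illustrative only. */
static uint8_t temporal_adapt(uint8_t orig, uint8_t blurred,
                              uint8_t prev, int threshold)
{
    int diff = abs((int)orig - (int)prev);    /* temporal difference           */
    if (diff >= threshold)
        return orig;                          /* big change: leave untouched   */

    /* weight of the original pixel rises from 0 (full blur) to 256 (no blur)
     * as the temporal difference approaches the threshold */
    int w = (diff * 256) / threshold;
    return (uint8_t)((orig * w + blurred * (256 - w) + 128) >> 8);
}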
Hi Klaus - SSHiQ is far and away the best spatial (and someday spatial-temporal?) smoother out there, and I use it frequently. Please don't get me wrong - I understand what you're saying and was just joking around. Much respect and thanks are due to you and your program, and the optimized version is eagerly awaited.
If I get the time, I'll try to throw together a test-version in C - I've been thinking about using temporal data for some time, and this just triggered a good idea. :) Maybe I should do it tonight, instead of watching a movie ;)
I don't get it. Both multiplication (convolution) and addition (averaging) are associative and commutative. You can specify thresholds within a layer or by making a mask from the original (unfiltered) source. 5+3 = 3+5, every time... The problem with temporal filters is that they are so doggone slow. It seems like every filter that adds another frame fetch from a file doubles the encode time.
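For the plain (non-adaptive) case the commutativity point is easy to check numerically - a throwaway C snippet (the pixel values are made up) comparing "3x3 spatial average, then 3-frame temporal average" against a direct 3x3x3 average at one pixel:
Code:
#include <stdio.h>

/* Tiny check that, for a plain box average, "spatial then temporal" gives
 * the same answer as a direct 3x3x3 average. Uses a made-up 3x5x5 block of
 * values and only looks at the centre pixel of the middle frame, so no
 * border handling is needed. */
int main(void)
{
    double v[3][5][5];
    for (int t = 0; t < 3; t++)                 /* fill with arbitrary values */
        for (int y = 0; y < 5; y++)
            for (int x = 0; x < 5; x++)
                v[t][y][x] = t * 31 + y * 7 + x * 3 + (x * y) % 5;

    /* (a) 3x3 spatial average per frame, then 3-tap temporal average */
    double chained = 0.0;
    for (int t = 0; t < 3; t++) {
        double s = 0.0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                s += v[t][2 + dy][2 + dx];
        chained += s / 9.0;
    }
    chained /= 3.0;

    /* (b) direct 3x3x3 average */
    double direct = 0.0;
    for (int t = 0; t < 3; t++)
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                direct += v[t][2 + dy][2 + dx];
    direct /= 27.0;

    /* both print the same value (up to floating-point rounding) */
    printf("chained = %f, direct = %f\n", chained, direct);
    return 0;
}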
Sorry, I don't follow you. Adaptive operations are usually neither commutative nor associative. If, for example, the operator @ represents the following function: Code:
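#include <stdlib.h>

/* Illustrative sketch only - one possible adaptive operator of the kind
 * meant here: a thresholded average. a @ b averages a with b when they
 * differ by less than T, otherwise it keeps a unchanged. T = 10 is an
 * arbitrary example value. */
#define T 10

static int op(int a, int b)                     /* a @ b */
{
    return (abs(a - b) < T) ? (a + b) / 2 : a;
}

/* Not commutative:   20 @ 40 = 20,   but  40 @ 20 = 40
 * Not associative:  (20 @ 28) @ 36 = 24 @ 36 = 24,
 *                    20 @ (28 @ 36) = 20 @ 32 = 20 */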