The blocking effect (blocking artifact) refers to the visually noticeable "block by block" appearance of an image caused by discontinuities at coding block boundaries. It arises because current coding techniques are block-based: prediction, transform, quantization and the other processing steps are carried out independently for each block, so the magnitude and distribution of the quantization error introduced in each block are also independent of each other. Applying a smoothing filter across the block boundaries can effectively reduce or remove the blocking effect; this is the Deblocking Filter.
The processing flow of the deblocking filter is as follows:
Deblocking filtering in H.265/HEVC
Deblocking filtering in H.265 has the following characteristics:
- Both the luminance and chrominance components are processed only along 8x8 block boundaries, and only where these are also TU or PU boundaries; picture boundaries are not processed. For the chrominance component, a boundary is filtered only if at least one of the blocks on its two sides uses intra prediction, which greatly reduces the number of filtering operations.
- At most 3 pixel values are corrected on each side of a boundary to be processed, which allows the boundaries of different 8x8 blocks to be processed independently and in parallel.
- All vertical boundaries of the whole picture can be processed before the horizontal boundaries, instead of interleaving vertical and horizontal boundaries as in H.264.
Filtering decision
Although deblocking filtering is organized around 8x8 block boundaries, each boundary is in fact split into two halves that are processed independently: vertical boundaries use 8x4 as the basic unit, and horizontal boundaries use 4x8 as the basic unit. This is shown in the figure below:
The filtering decision determines the filtering strength and filtering parameters for every 8x8 block boundary that is also a PU or TU boundary. Only discontinuous boundaries in flat regions need to be filtered. The following figure shows the filtering decision process:
For the luminance component, a block boundary requires the three steps above to determine the filter strength and filter parameters, whereas for the chrominance component the filtering decision only needs to determine the boundary strength, whose value is taken directly from the corresponding luminance boundary.
(1) Obtaining Boundary Strength
Boundary Strength (BS) is a preliminary judgment, based on the coding parameters of the blocks on either side of the boundary, of whether the boundary needs to be filtered and with which parameters. The value of BS is 0, 1 or 2. For the luminance component, BS = 0 means the boundary does not need to be filtered and no further processing is carried out; when BS is 1 or 2, the subsequent steps are performed. For the chrominance component, BS = 0 or 1 means the boundary does not need to be filtered and no further processing is performed; only when BS is 2 does the boundary need to be filtered (and in that case no subsequent filter switching decision or filter strength selection is needed).
The derivation of BS is shown in the figure above. P and Q are the blocks on either side of the boundary, as shown in the previous figure.
The boundary strength calculation is implemented in the following function.
Void xGetBoundaryStrengthSingle ( TComDataCU* pCtu, DeblockEdgeDir edgeDir, UInt uiPartIdx );
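The rules encoded in the BS derivation figure can be summarized as in the sketch below. This is a simplified illustration, not the HM implementation: the BlockInfo struct and getBoundaryStrength helper are hypothetical, only a single reference list is considered, and the checks on the number of motion vectors are omitted.

#include <cstdlib>

//!< Hypothetical description of one block adjacent to the boundary (not an HM type).
struct BlockInfo
{
    bool isIntra;          //!< block is intra coded
    bool hasNonZeroCoeffs; //!< block contains non-zero transform coefficients
    int  refIdx;           //!< reference picture index (simplified: single reference list)
    int  mvX, mvY;         //!< motion vector in quarter-pel units
};

//!< Simplified BS derivation for the boundary between blocks P and Q.
int getBoundaryStrength(const BlockInfo& P, const BlockInfo& Q)
{
    if (P.isIntra || Q.isIntra)
        return 2; //!< at least one side is intra coded
    if (P.hasNonZeroCoeffs || Q.hasNonZeroCoeffs)
        return 1; //!< residual coefficients present on either side
    if (P.refIdx != Q.refIdx)
        return 1; //!< the two sides use different reference pictures
    if (std::abs(P.mvX - Q.mvX) >= 4 || std::abs(P.mvY - Q.mvY) >= 4)
        return 1; //!< motion vectors differ by one integer sample or more
    return 0;     //!< boundary is not filtered
}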
(2) Filter switch decision
Due to the spatial masking effect of the human visual system, discontinuous boundaries in flat regions of an image are more easily noticed. When the pixel values change drastically on both sides of a boundary, a discontinuity at that boundary may be caused by the video content itself. In addition, filtering attenuates texture information that should be preserved in strongly textured regions. The filter switch decision therefore determines the content characteristics of the boundary from the degree of variation of the pixel values within the boundary blocks, and then decides, based on those characteristics, whether a filtering operation is needed.
The following figure illustrates a vertical block boundary region. p(x,y) and q(x,y) are the pixel values on either side of the boundary.
The pixel change rates of the first and last rows, and the texture degree of this vertical block boundary region, are defined from these samples (see the reconstruction below). Larger texture values indicate that the region is less flat, and when the value is large enough the boundary does not need to be filtered. H.265/HEVC specifies that the boundary filter switch is on when the condition below is met, otherwise it is off.
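The original equations were given as images; a minimal reconstruction, assuming the usual notation in which $p_{i,k}$ and $q_{i,k}$ denote the $i$-th sample away from the boundary in row $k$ (counting from 0 at the boundary), is:

$d_{p,k} = \left|\,p_{2,k} - 2p_{1,k} + p_{0,k}\,\right|, \qquad d_{q,k} = \left|\,q_{2,k} - 2q_{1,k} + q_{0,k}\,\right|, \qquad k \in \{0, 3\}$
$d = d_{p,0} + d_{q,0} + d_{p,3} + d_{q,3}$

The filter switch is on only when $d < \beta$.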
The threshold β is the judgment threshold. It is related to the quantization parameter QP of the blocks on both sides of the boundary and is looked up from the sm_betaTable below; the sm_tcTable defined alongside it in HM is used later for the filter strength selection.
const UChar TComLoopFilter::sm_tcTable[MAX_QP + 1 + DEFAULT_INTRA_TC_OFFSET] =
{
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,5,5,6,6,7,8,9,10,11,13,14,16,18,20,22,24
};
const UChar TComLoopFilter::sm_betaTable[MAX_QP + 1] =
{
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,7,8,9,10,11,12,13,14,15,16,17,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64
};
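The sketch below shows how the two tables are indexed from the average QP of the neighboring blocks. It is a hypothetical standalone helper, modeled on the threshold derivation in TComLoopFilter::xEdgeFilterLuma, not the exact HM code; the tables are passed in so that the function stays self-contained.

#include <algorithm>

//!< Derive the tc and beta thresholds from QP; tcTable / betaTable are the
//!< sm_tcTable / sm_betaTable arrays listed above.
void getDeblockThresholds(const unsigned char* tcTable, const unsigned char* betaTable,
                          int qpP, int qpQ, int bs, int tcOffsetDiv2, int betaOffsetDiv2,
                          int bitDepthLuma, int& tc, int& beta)
{
    const int MAX_QP = 51;
    const int DEFAULT_INTRA_TC_OFFSET = 2;             //!< extra tc offset applied when BS = 2
    int qp            = (qpP + qpQ + 1) >> 1;          //!< average QP of blocks P and Q
    int bitdepthScale = 1 << (bitDepthLuma - 8);       //!< scale thresholds for bit depths above 8
    int indexTc = std::min(MAX_QP + DEFAULT_INTRA_TC_OFFSET,
                           std::max(0, qp + DEFAULT_INTRA_TC_OFFSET*(bs - 1) + (tcOffsetDiv2 << 1)));
    int indexB  = std::min(MAX_QP, std::max(0, qp + (betaOffsetDiv2 << 1)));
    tc   = tcTable[indexTc]  * bitdepthScale;          //!< tc threshold
    beta = betaTable[indexB] * bitdepthScale;          //!< beta threshold
}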
(3) Selection of filter strength
With the filter switch turned on in the previous step, a more detailed judgment of the video content is required to further determine the filter strength.
Among the three boundary cases in the figure above, in (a), compared with (b), the pixels on both sides of the boundary are flat while the change across the boundary is drastic; this creates a visually stronger blocking effect and calls for a wide-ranging, substantial correction of the pixels around the boundary. In (c), the pixel change across the boundary is particularly large: since quantization distortion always stays within a certain range, a difference beyond that range indicates that the block boundary comes from the video content itself.
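The six inequalities referred to below were shown as an image in the original; based on the strong-filter criterion in the standard and on the xUseStrongFiltering code that follows, they can be reconstructed as, for rows $k = 0$ and $k = 3$ of the 8x4 unit (integer arithmetic as in the code):

$2\,(d_{p,k} + d_{q,k}) < \beta/4$ (Eqs. 1, 2)
$|p_{3,k} - p_{0,k}| + |q_{0,k} - q_{3,k}| < \beta/8$ (Eqs. 3, 4)
$|p_{0,k} - q_{0,k}| < (5\,t_C + 1)/2$ (Eqs. 5, 6)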
Strong filtering is used when all six of the above inequalities are satisfied; otherwise weak filtering is used. Eqs. 1 and 2 judge the rate of change of the pixel values on both sides of the boundary, Eqs. 3 and 4 judge whether the pixels on both sides of the boundary are flat, and Eqs. 5 and 6 judge whether the span of the pixel values across the boundary stays within a certain range. tc is the judgment threshold; it is related to the quantization parameter QP of the blocks on both sides of the boundary and is looked up from the sm_tcTable given earlier.
__inline Bool TComLoopFilter::xUseStrongFiltering( Int offset, Int d, Int beta, Int tc, Pel* piSrc)
{
Pel m4 = piSrc[0];         //!< q0: first pixel on the Q side of the boundary
Pel m3 = piSrc[-offset];   //!< p0: first pixel on the P side of the boundary
Pel m7 = piSrc[ offset*3]; //!< q3
Pel m0 = piSrc[-offset*4]; //!< p3
Int d_strong = abs(m0-m3) + abs(m7-m4); //!< flatness of the two sides (Eqs. 3 and 4)
return ( (d_strong < (beta>>3))         //!< both sides are flat
&& (d<(beta>>2))                        //!< rate of change is small (Eqs. 1 and 2); d is passed as 2*(dp+dq) for this row
&& ( abs(m3-m4) < ((tc*5+1)>>1)) );     //!< span across the boundary is limited (Eqs. 5 and 6)
}
Filtering operation
When the filtering decision above determines that filtering is required, the filtering operation is performed. There are three cases: strong filtering of luminance boundaries, weak filtering of luminance boundaries, and filtering of chrominance boundaries.
Strong filtering of luminance boundaries:
Strong filtering applies a wide-ranging, substantial correction to the pixels on both sides of the boundary: 3 pixels are corrected on each side of the boundary.
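The correction formulas were shown as an image in the original; a reconstruction that matches the xPelFilterLuma code further below is, for one row (the q side is corrected symmetrically, and every result is clipped to within $\pm 2\,t_C$ of the original sample):

$p_0' = (p_2 + 2p_1 + 2p_0 + 2q_0 + q_1 + 4) \gg 3$
$p_1' = (p_2 + p_1 + p_0 + q_0 + 2) \gg 2$
$p_2' = (2p_3 + 3p_2 + p_1 + p_0 + q_0 + 4) \gg 3$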
Weak filtering of luminance boundaries:
The range and magnitude of the pixel corrections in weak filtering are smaller: at most 2 pixels are corrected on each side of the boundary, and the filtering is carried out pixel by pixel for each row (or column). Taking the pixels of one row as an example, an offset Δ is first computed and used to correct p(0,0) and q(0,0), and it is then determined whether p(1,0) and q(1,0) also need to be corrected (see the reconstruction below).
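A reconstruction of those steps, consistent with the xPelFilterLuma code below (Clip1 denotes clipping to the valid sample range for the bit depth):

$\Delta = \bigl(9\,(q_0 - p_0) - 3\,(q_1 - p_1) + 8\bigr) \gg 4$

If $|\Delta| \ge 10\,t_C$, the discontinuity is judged to come from the content itself and this row is not filtered; otherwise
$\Delta \leftarrow \mathrm{Clip3}(-t_C,\, t_C,\, \Delta), \qquad p_0' = \mathrm{Clip1}(p_0 + \Delta), \qquad q_0' = \mathrm{Clip1}(q_0 - \Delta)$

If the second pixel on the P side is also to be corrected (bFilterSecondP in the code):
$\Delta p_1 = \mathrm{Clip3}\bigl(-t_C/2,\, t_C/2,\, \bigl(((p_2 + p_0 + 1) \gg 1) - p_1 + \Delta\bigr) \gg 1\bigr), \qquad p_1' = \mathrm{Clip1}(p_1 + \Delta p_1)$
and similarly for $q_1$ on the Q side, with $-\Delta$ in place of $\Delta$.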
The following code is an implementation of luminance strong filtering and weak filtering:
/**
- Deblocking for the luminance component with strong or weak filter
.
\param piSrc pointer to picture data
\param iOffset offset value for picture data
\param tc tc value
\param sw decision strong/weak filter
\param bPartPNoFilter indicator to disable filtering on partP
\param bPartQNoFilter indicator to disable filtering on partQ
\param iThrCut threshold value for weak filter decision
\param bFilterSecondP decision weak filter/no filter for partP
\param bFilterSecondQ decision weak filter/no filter for partQ
\param bitDepthLuma luma bit depth
*/
__inline Void TComLoopFilter::xPelFilterLuma( Pel* piSrc, Int iOffset, Int tc, Bool sw, Bool bPartPNoFilter, Bool bPartQNoFilter, Int iThrCut, Bool bFilterSecondP, Bool bFilterSecondQ, const Int bitDepthLuma)
{
Int delta;
Pel m4 = piSrc[0];          //!< q0
Pel m3 = piSrc[-iOffset];   //!< p0
Pel m5 = piSrc[ iOffset];   //!< q1
Pel m2 = piSrc[-iOffset*2]; //!< p1
Pel m6 = piSrc[ iOffset*2]; //!< q2
Pel m1 = piSrc[-iOffset*3]; //!< p2
Pel m7 = piSrc[ iOffset*3]; //!< q3
Pel m0 = piSrc[-iOffset*4]; //!< p3
if (sw)
{ //!< strong filtering
piSrc[-iOffset] = Clip3(m3-2*tc, m3+2*tc, ((m1 + 2*m2 + 2*m3 + 2*m4 + m5 + 4) >> 3));
piSrc[0] = Clip3(m4-2*tc, m4+2*tc, ((m2 + 2*m3 + 2*m4 + 2*m5 + m6 + 4) >> 3));
piSrc[-iOffset*2] = Clip3(m2-2*tc, m2+2*tc, ((m1 + m2 + m3 + m4 + 2)>>2));
piSrc[ iOffset] = Clip3(m5-2*tc, m5+2*tc, ((m3 + m4 + m5 + m6 + 2)>>2));
piSrc[-iOffset*3] = Clip3(m1-2*tc, m1+2*tc, ((2*m0 + 3*m1 + m2 + m3 + m4 + 4 )>>3));
piSrc[ iOffset*2] = Clip3(m6-2*tc, m6+2*tc, ((m3 + m4 + m5 + 3*m6 + 2*m7 +4 )>>3));
}
else
{ //!< weak filtering
/* Weak filter */
delta = (9*(m4-m3) -3*(m5-m2) + 8)>>4 ;
if ( abs(delta) < iThrCut )
{
delta = Clip3(-tc, tc, delta);
piSrc[-iOffset] = ClipBD((m3+delta), bitDepthLuma);
piSrc[0] = ClipBD((m4-delta), bitDepthLuma);
Int tc2 = tc>>1;
if(bFilterSecondP)
{
Int delta1 = Clip3(-tc2, tc2, (( ((m1+m3+1)>>1)- m2+delta)>>1));
piSrc[-iOffset*2] = ClipBD((m2+delta1), bitDepthLuma);
}
if(bFilterSecondQ)
{
Int delta2 = Clip3(-tc2, tc2, (( ((m6+m4+1)>>1)- m5-delta)>>1));
piSrc[ iOffset] = ClipBD((m5+delta2), bitDepthLuma);
}
}
}
if(bPartPNoFilter)
{
piSrc[-iOffset] = m3;
piSrc[-iOffset*2] = m2;
piSrc[-iOffset*3] = m1;
}
if(bPartQNoFilter)
{
piSrc[0] = m4;
piSrc[ iOffset] = m5;
piSrc[ iOffset*2] = m6;
}
}
Filtering of the chrominance boundary:
When BS = 2, the chrominance boundary needs to be filtered; 1 pixel is corrected on each side of the boundary.
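The chrominance correction was also given as an image; a reconstruction consistent with the xPelFilterChroma code below is, for one row:

$\Delta = \mathrm{Clip3}\bigl(-t_C,\, t_C,\, \bigl(((q_0 - p_0) \ll 2) + p_1 - q_1 + 4\bigr) \gg 3\bigr)$
$p_0' = \mathrm{Clip1}(p_0 + \Delta), \qquad q_0' = \mathrm{Clip1}(q_0 - \Delta)$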
Here is the implementation of chroma filtering:
/**
- Deblocking of one line/column for the chrominance component
.
\param piSrc pointer to picture data
\param iOffset offset value for picture data
\param tc tc value
\param bPartPNoFilter indicator to disable filtering on partP
\param bPartQNoFilter indicator to disable filtering on partQ
\param bitDepthChroma chroma bit depth
*/
__inline Void TComLoopFilter::xPelFilterChroma( Pel* piSrc, Int iOffset, Int tc, Bool bPartPNoFilter, Bool bPartQNoFilter, const Int bitDepthChroma)
{
Int delta;
Pel m4 = piSrc[0];          //!< q0
Pel m3 = piSrc[-iOffset];   //!< p0
Pel m5 = piSrc[ iOffset];   //!< q1
Pel m2 = piSrc[-iOffset*2]; //!< p1
//!< compute delta
delta = Clip3(-tc,tc, (((( m4 - m3 ) << 2 ) + m2 - m5 + 4 ) >> 3) );
piSrc[-iOffset] = ClipBD((m3+delta), bitDepthChroma);
piSrc[0] = ClipBD((m4-delta), bitDepthChroma);
if(bPartPNoFilter)
{
piSrc[-iOffset] = m3;
}
if(bPartQNoFilter)
{
piSrc[0] = m4;
}
}