Image fusion-based low-dose CBCT enhancement method for visualizing miniscrew insertion in the infrazygomatic crest | BMC Medical Imaging


In this section, we describe the proposed method, which consists of four basic components: the sharpening correction, visibility restoration, contrast enhancement, and image fusion modules. The overall flowchart of the proposed method is shown in Fig. 2.

Fig. 2

Overall framework. The input CBCT image is passed through the sharpening correction, visibility restoration and contrast enhancement modules, and is finally fed into the perceptual fusion module for fusion.

Sharpening correction module

First, we sharpen the input oral CBCT image. The main purpose of this step is to enhance the edges and details of the image so that it looks clearer and sharper. We define the sharpened image \(\hat{Z}\) by the following equation:

$$\hat{Z} = \left( I + \mathcal{N}\{ I - G * I \} \right)/2,$$

(1)

in Eq. (1), \(G * I\) is defined as the Gaussian filtering result of \(I\), and \(\mathcal{N}\{ \cdot \}\) is defined as the normalization operator. The oral CBCT image is sharpened to make it clearer and crisper: sharpening highlights the differences between boundaries and regions in the image and makes object edges more visible, thereby improving the visual quality of the image.
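
For illustration, a minimal NumPy/SciPy sketch of the sharpening step in Eq. (1) is given below. The Gaussian kernel width `sigma` and the choice of min-max scaling for the normalization operator \(\mathcal{N}\{ \cdot \}\) are assumptions, since neither is specified in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def minmax_normalize(img, eps=1e-8):
    """Min-max scaling to [0, 1]; an assumed reading of the operator N{.}."""
    return (img - img.min()) / (img.max() - img.min() + eps)

def sharpen(I, sigma=1.0):
    """Eq. (1): Z_hat = (I + N{I - G*I}) / 2, with G*I a Gaussian-blurred copy of I.

    `sigma` (the Gaussian kernel width) is a placeholder, not a value from the paper.
    """
    high_pass = I - gaussian_filter(I, sigma=sigma)   # I - G*I
    return (I + minmax_normalize(high_pass)) / 2.0
```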

Visibility restoration module

In the visibility restoration module, we focus on enhancing the visual clarity of CBCT images. To achieve this, we introduce an innovative method based on type-II fuzzy sets. Building on type-II fuzzy set theory, we propose new upper and lower bounds for the Hamacher t-conorm and employ a transform-based gamma correction technique to enhance CBCT image visibility. Through the combined application of these tools, we aim to improve the quality and clarity of CBCT images so that they can more accurately support related applications and medical diagnosis. First, the mean \(\mu\) and standard deviation \(\sigma\) of the fuzzy image \(\hat{Z}(x)\) are calculated:

$$\mu = \frac{1}{n} \cdot \sum\limits_{i = 1}^{n} \hat{Z}(x_i),$$

(2)

$$\sigma = \sqrt{ \frac{1}{n - 1} \cdot \sum\limits_{i = 1}^{n} \left( \hat{Z}(x_i) - \mu \right)^2 },$$

(3)

based on Eqs. (2) and (3), we compute the new lower and upper bounds for the Hamacher t-conorm. The new upper bound \(\hat{u}(x)\) is computed by the following equation:

$$\hat{u}(x) = \left( \hat{Z}(x) \right)^{\alpha} + \left( 1 - \left( \hat{Z}(x) \right)^{\alpha} \right) \cdot \left( \sigma^2 \right)^{\alpha},$$

(4)

where \(\alpha = 0.95\). The new lower bound \(\hat{w}(x)\) is expressed by the following equation:

$$\hat{w}(x) = \left( \frac{k \cdot \mu}{\sigma + b} \right) \cdot \left( \hat{Z}(x) - c \cdot \mu \right) + \mu^{d}.$$

(5)

The Hamacher t-conorm is an operation used in fuzzy logic to merge the membership values of two fuzzy sets. When computing the new Hamacher t-conorm, we must ensure that the updated lower and upper bounds are taken into account in order to accurately reflect the relationship between the fuzzy sets. This guarantees accurate results when dealing with fuzzy data, thus increasing the reliability of mathematical and statistical applications:

$$t(x) = \frac{\hat{u}(x) + \hat{w}(x) + \left( \sigma^2 - 2 \right) \cdot \hat{u}(x) \cdot \hat{w}(x)}{1 - \left( 1 - \sigma^2 \right) \cdot \hat{u}(x) \cdot \hat{w}(x)}.$$

(6)

Gamma correction can be used to improve the visual quality of CBCT images when they appear dull or unclear after processing. By remapping the pixel values of the input CBCT image, the sharpness and contrast of the image are enhanced. Gamma correction relies on a nonlinear transformation of the image pixel values using a gamma function, which adjusts the brightness and contrast of the image so that both dark and bright details become more prominent:

$$L_1(x) = \max\left( t(x) \right) \cdot \left( \frac{t(x)}{\max\left( t(x) \right)} \right)^{1.5 \cdot \alpha},$$

(7)

where \(L_1(x)\) is the final output of the visibility restoration module.
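
The sketch below strings Eqs. (2)-(7) together in NumPy under stated assumptions: the input is the sharpened image scaled to [0, 1]; the constants \(k\), \(b\), \(c\) and \(d\) in Eq. (5) are placeholders because the text does not give their values; and the clipping before the fractional power in Eq. (7) is an implementation safeguard rather than part of the method.

```python
import numpy as np

def visibility_restoration(Z_hat, alpha=0.95, k=1.0, b=1e-6, c=1.0, d=2.0):
    """Sketch of Eqs. (2)-(7): type-II fuzzy bounds, Hamacher t-conorm and
    transform-based gamma correction. k, b, c, d are placeholder values."""
    mu = Z_hat.mean()                       # Eq. (2)
    sigma = Z_hat.std(ddof=1)               # Eq. (3), with the 1/(n-1) factor

    # Eq. (4): new upper bound
    u = Z_hat ** alpha + (1.0 - Z_hat ** alpha) * (sigma ** 2) ** alpha
    # Eq. (5): new lower bound
    w = (k * mu / (sigma + b)) * (Z_hat - c * mu) + mu ** d

    # Eq. (6): Hamacher t-conorm with the updated bounds
    t = (u + w + (sigma ** 2 - 2.0) * u * w) / (1.0 - (1.0 - sigma ** 2) * u * w)

    # Eq. (7): gamma correction; clipping keeps the fractional power well defined
    t = np.clip(t, 1e-8, None)
    return t.max() * (t / t.max()) ** (1.5 * alpha)
```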

Contrast enhancement module

The main goal of the contrast enhancement module is to improve the contrast of CBCT images. To achieve this goal, we first process the image using two distinct curve transformation functions and obtain a significantly contrast-enhanced image by combining their outputs. Next, we introduce a gamma-corrected stretching function that stretches the image intensities to the standard interval. The key to this process is to effectively enhance the grayscale differences in the image through the combined application of different transforms and adjustments, making the details in the image more prominent and legible and providing a more reliable basis for subsequent medical image analysis and diagnosis. Combining Eqs. (8) and (9), we apply the probability density function of the standard normal distribution and the softplus function to each pixel value of the CBCT image, processing the image pixel by pixel to improve its visual quality and its capacity to convey information:

$$g(x) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{\left( \hat{Z}(x) \right)^2}{2} \right),$$

(8)

$$s(x) = \log\left( 1 + \exp\left( \hat{Z}(x) \right) \right).$$

(9)

Then, the logarithmic image processing model in Eq. (10) combines the outputs of these two functions:

$$l(x) = \sqrt{ g(x) + s(x) + g(x) \cdot s(x) }.$$

(10)

Finally, a gamma-controlled normalization function is applied via Eq. (11) in order to fully stretch the image intensities to the standard interval:

$$L_2(x) = \left( \frac{l(x) - \min\left( l(x) \right)}{\max\left( l(x) \right) - \min\left( l(x) \right)} \right)^{\eta}.$$

(11)

The result produced by the contrast enhancement module has improved contrast while preserving brightness and a natural appearance. Here, \(g(x)\) is the generated contrast-modified image, \(\hat{Z}(x)\) is the input contrast-distorted image, and \(L_2(x)\) is the contrast-stretched image after normalization, where \(\eta = 0.8\) is the gamma correction parameter responsible for adjusting the contrast of the image.
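
A minimal NumPy sketch of Eqs. (8)-(11) follows, assuming the input image \(\hat{Z}\) is scaled to [0, 1]; the small constant in the denominator of the normalization step is an implementation safeguard not stated in the text.

```python
import numpy as np

def contrast_enhancement(Z_hat, eta=0.8):
    """Sketch of Eqs. (8)-(11): standard-normal PDF curve, softplus curve,
    logarithmic-image-processing style combination, gamma-controlled stretch."""
    g = np.exp(-0.5 * Z_hat ** 2) / np.sqrt(2.0 * np.pi)    # Eq. (8)
    s = np.log1p(np.exp(Z_hat))                             # Eq. (9), softplus
    l = np.sqrt(g + s + g * s)                              # Eq. (10)

    # Eq. (11): stretch to [0, 1], then gamma correction with eta = 0.8
    return ((l - l.min()) / (l.max() - l.min() + 1e-8)) ** eta
```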

Perceptual fusion module

At this point, we have obtained the visibility-restored CBCT image and the contrast-enhanced CBCT image. Unlike conventional image fusion methods, our inputs come from two independent tasks. We therefore propose a novel fusion strategy that considers the weight assignment of the two images simultaneously. This weight assignment consists of two aspects: a weight based on pixel intensity and a weight based on global gradient. By integrating the pixel-level intensity information and the gradient characteristics of the whole image, our method captures and exploits the useful information generated by the two different tasks more comprehensively during the fusion process, further improving the quality and information content of the synthesized image. This weight-adjustment strategy gives our fusion method greater flexibility and adaptability, enabling it to perform well across different scenarios and tasks.

Weight design based on pixel intensity

The pixel-intensity-based fused image \(F(x)\) can be expressed as a weighted sum of the images:

$$F(x) = W_1(x) L_1(x) + W_2(x) L_2(x),$$

(12)

where \(W_1(x)\) and \(W_2(x)\) denote the importance weights of the pixels of \(L_1(x)\) and \(L_2(x)\). Thus, \(W(x)\) gives more weight to regions where the pixel intensities are well exposed. Denoting by \(m_n\) the mean of the pixel intensities, the weight should be larger when \(L_n(x)\) is close to \(1 - m_n\), which can be written as \(\exp\left( -\left( L_n(x) - \left( 1 - m_n \right) \right)^2 \right)\). When processing an image, it is important to pay attention to the exposure level of the input images, because a large difference in brightness between the images results in more well-exposed pixels. To account for this, a larger value of \(\sigma_n\) is assigned when there is a significant difference in average brightness between the images. The weight based on pixel intensity is defined as:

$$w_{1,n}(x) = \exp\left( -\frac{\left( L_n(x) - \left( 1 - m_n \right) \right)^2}{2\sigma_n^2} \right).$$

(13)

Among them,

$$\sigma_n = \left\{ \begin{array}{ll} 1.5\left( m_{n + 1} - m_n \right) & n = 1 \\ 0.75\left( m_{n + 1} - m_{n - 1} \right) & 1 < n < N \\ 1.5\left( m_n - m_{n - 1} \right) & n = N \end{array} \right.,$$

(14)

where \(N\) is the number of images in the image set. In Eq. (13), dark pixels are assigned a larger weight when \(m_n\) is close to 1, and vice versa. In addition, when the average brightness differs significantly between images, the weights are assigned larger values.
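
As an illustration, the sketch below computes the pixel-intensity weights of Eqs. (13)-(14) for a list of \(N\) images scaled to [0, 1]; the small constant added to the denominator guards against a zero \(\sigma_n\) and is an implementation detail, not part of the stated method.

```python
import numpy as np

def intensity_weights(L_list):
    """Sketch of Eqs. (13)-(14): pixel-intensity weights for images L_1 ... L_N."""
    N = len(L_list)
    m = [L.mean() for L in L_list]          # per-image mean intensity m_n

    # Eq. (14): sigma_n from differences of neighbouring mean intensities
    sigma = np.empty(N)
    for n in range(N):
        if n == 0:
            sigma[n] = 1.5 * (m[n + 1] - m[n])
        elif n == N - 1:
            sigma[n] = 1.5 * (m[n] - m[n - 1])
        else:
            sigma[n] = 0.75 * (m[n + 1] - m[n - 1])

    # Eq. (13): larger weight where L_n(x) is close to 1 - m_n
    return [np.exp(-((L - (1.0 - m_n)) ** 2) / (2.0 * s ** 2 + 1e-12))
            for L, m_n, s in zip(L_list, m, sigma)]
```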

Weight design based on global gradient

We observe that in regions lacking texture, images often have low contrast or small gradient values. Emphasizing only large-gradient regions may therefore fail to highlight pixels within regions with smaller gradients. Based on this observation, we introduce a global gradient weighting strategy aimed at emphasizing global contrast. In images with higher contrast, the cumulative histogram has smaller gradient values, so we need to give larger weight to pixels that lie within the range of the cumulative histogram with relatively small gradients. In other words, the weight of each pixel should be adjusted dynamically according to its position and gradient information in the image. In regions with smaller gradients, we expect the pixels to contribute more, as these tend to be regions of the image that lack texture. We therefore design a global gradient weighting strategy that better accounts for global contrast and dynamically adjusts pixel weights according to the gradient distribution of the image. This global gradient-based weight adjustment makes our fusion method more adaptive, able to cope flexibly with different image characteristics and contrast conditions, and effectively improves overall image quality and information transfer. The weight based on global gradient can be expressed by the following equation:

$$w_{2,n}(x) = \frac{\operatorname{Grad}_n\left( L_n(x) \right)^{-1}}{\sum\limits_{n=1}^{N} \operatorname{Grad}_n\left( L_n(x) \right)^{-1} + \epsilon},$$

(15)

where \(\epsilon\) is a very small positive value and \(\operatorname{Grad}_n\left( L_n(x) \right)\) denotes the gradient of the cumulative histogram at intensity \(L_n(x)\). In image processing, the global gradient allows a more comprehensive analysis of a CBCT image that takes the overall characteristics of the image into account. Whereas traditional local gradient methods focus on local variations around individual pixels, global gradient methods capture a wider range of image context by considering the gradients of distant pixels. This helps to better understand the structure and features of the whole image, thus improving the analysis of CBCT images. To compute the final weight for each CBCT image, the two weights are combined and normalized as follows:

$$W_n(x) = \frac{w_{1,n}(x) \times w_{2,n}(x)}{\sum\limits_{n=1}^{N} w_{1,n}(x) \times w_{2,n}(x) + \epsilon}.$$

(16)

Using the weights obtained from Eq. (16), we fuse the images according to Eq. (12).
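
To make the fusion step concrete, the sketch below derives the global-gradient weights of Eq. (15) from the gradient of each image's cumulative histogram, combines them with the intensity weights via Eq. (16), and forms the fused image of Eq. (12). The 256-bin histogram and the reuse of \(\epsilon\) inside the inverse gradient are assumptions; images are assumed to be scaled to [0, 1].

```python
import numpy as np

def perceptual_fusion(L_list, w1_list, bins=256, eps=1e-8):
    """Sketch of Eqs. (15), (16) and (12): global-gradient weights, weight
    combination and weighted fusion. `w1_list` comes from intensity_weights()."""
    # Grad_n(L_n(x)): gradient of the cumulative histogram at each pixel's intensity
    inv_grads = []
    for L in L_list:
        hist, _ = np.histogram(L, bins=bins, range=(0.0, 1.0))
        cdf = np.cumsum(hist) / L.size
        grad_cdf = np.gradient(cdf)
        idx = np.clip((L * (bins - 1)).astype(int), 0, bins - 1)
        inv_grads.append(1.0 / (grad_cdf[idx] + eps))    # inverse gradient

    denom = sum(inv_grads) + eps
    w2_list = [g / denom for g in inv_grads]             # Eq. (15)

    prod = [w1 * w2 for w1, w2 in zip(w1_list, w2_list)]
    denom = sum(prod) + eps
    W_list = [p / denom for p in prod]                   # Eq. (16)

    return sum(W * L for W, L in zip(W_list, L_list))    # Eq. (12)
```

Under these assumptions, the whole fusion stage reads as `F = perceptual_fusion([L1, L2], intensity_weights([L1, L2]))`, where `L1` and `L2` are the outputs of the visibility restoration and contrast enhancement modules sketched above.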
