Quality as an image-specific characteristic perceived by an average human observer


Introduction

Ranking images by their quality is one of the most common challenges in many areas of applied science and technology. For example, a set of images returned by a web search may have good search relevance, but if these relevant images also have the best quality, this would certainly improve the users' impression. Another area is medicine, where patient examinations produce terabytes of visual data that are hard to analyze at once. Preprocessing these data by extracting the best images for further diagnostics would therefore be a time-saving solution for physicians. Finally, if we know what defines image quality numerically, we can start developing quality-enhancing filters, making imaging data more appealing to the human visual system.

Image quality is a complex concept which may have different interpretations. In our work we consider quality as an image-specific characteristic perceived by an average human observer. Thus, an image of good quality corresponds to our general idea of a regular, informative, and well-presented image. Presently, it is common to measure image quality with a single metric such as contrast or blurriness.

Our goal is to provide a more complex and formal definition of human quality perception by identifying the top factors responsible for visual quality. To eliminate subjectivity, we consider quality as an objective, non-reference, multidimensional measure that we want to be able to compute for each image independently, without comparing it to other images. Our practical goal is to find a restricted set of features that are most responsible for quality perception. Such a set would be a first step toward a practical tool for displaying medical images while improving their quality.

1. Non-reference image quality measures

Most research published on image quality uses quality measures estimated for an original image and its distorted copies. In this study we use so-called non-reference measures, where quality is estimated for a single image independently. We use a number of previously developed measures and a number of basic measures such as contrast, as described below.

1.1 Blurriness measures

A partly blurred image affects human perception of quality. That is why we consider blurriness an important factor of image quality perception. In this work we use two different blurriness measures.

The first one, described by F. Crete and T. Dolmiere [1], uses a low-pass filter and is based on the principle that the gray level of neighboring pixels in a less blurred image varies more than in its blurred copy. So, they compute the absolute vertical and horizontal differences D for neighboring pixels in the original and blurred images (1):

(Eq. 1a)

(Eq. 1b)

where I(x,y) is the intensity value at pixel (x,y), and h and w are the height and width of the image. After that, the variation of neighboring pixels before and after blurring is analyzed: if the variation is high, the original image is considered sufficiently sharp. To evaluate the variation, we consider only the differences that decreased, and obtain the variation V for the vertical and horizontal directions (2):

(Eq. 2)

where DB_ver(x,y) is the absolute difference for the blurred image B. Then, the blurriness for the vertical direction is computed as:

(Eq. 3)

Horizontal blurriness is computed in the same way. Finally, the maximum of the two is selected as the final blurriness measure: Fblur = max(Fblur_hor, Fblur_ver). Further we will write it as Fblur_1.
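
As an illustration, here is a minimal Python sketch of this first blur measure; the averaging-filter size and the exact normalisation follow our reading of the description above and should be treated as assumptions rather than the authors' reference implementation.

import numpy as np

def blur_crete(img):
    # Sketch of the Crete-Dolmiere style blur measure (Fblur_1); img is a 2-D grayscale array.
    img = img.astype(float)
    k = np.ones(9) / 9.0  # 1-D averaging (low-pass) filter, size assumed
    b_ver = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    b_hor = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    # absolute differences of neighbouring pixels for original and blurred images (Eq. 1)
    d_f_ver, d_f_hor = np.abs(np.diff(img, axis=0)), np.abs(np.diff(img, axis=1))
    d_b_ver, d_b_hor = np.abs(np.diff(b_ver, axis=0)), np.abs(np.diff(b_hor, axis=1))
    # keep only the variation that decreased after blurring (Eq. 2)
    v_ver = np.maximum(0.0, d_f_ver - d_b_ver)
    v_hor = np.maximum(0.0, d_f_hor - d_b_hor)
    # normalised blurriness per direction (Eq. 3) and the final maximum
    f_ver = (d_f_ver.sum() - v_ver.sum()) / (d_f_ver.sum() + 1e-12)
    f_hor = (d_f_hor.sum() - v_hor.sum()) / (d_f_hor.sum() + 1e-12)
    return max(f_ver, f_hor)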

Another blurriness measure was presented by Min Goo Choi [2]; it is based on edge extraction using the intensity gradient. The authors define the horizontal and vertical absolute difference values of a pixel, computed as the difference between its left and right or upper and lower neighboring pixels. Then they obtain the mean horizontal and vertical absolute differences (Dhor_mean in the horizontal case) over the entire image as in (Eq. 4).

(Eq. 4)

Next, each pixel value is compared with the mean absolute horizontal difference computed for the whole image to select edge candidates Chor(x,y):

(Eq. 5)

If a candidate pixel Chor(x,y) has an absolute horizontal value larger than its horizontal neighbors, this pixel is classified as an edge pixel Ehor(x,y), as shown in (6).

(Eq. 6)

Each edge pixel is then examined to find whether it corresponds to a blurred edge or not. First, the horizontal blurriness of a pixel is computed according to (7).

(Eq. 7)

The vertical value is obtained in the same way, and the maximum of the two is selected for the final decision. A pixel is considered blurred if its value is larger than a predefined threshold (0.1 is suggested in the paper).

(Eq. 8)

Finally, the resulting measure of blurriness for the whole image is called inverse blurriness and is computed as the ratio of the blurred edge pixel count to the edge pixel count (9).

(Eq. 9)

Further we will term this measure Fblur_2 to distinguish it from the blur measure described in [1]. We assume that an increase in blurriness should negatively affect quality perception, because a very blurred image will lose important information and be less attractive.

1.2 Image entropy

The basic idea behind entropy is to measure the uncertainty of the image. The more information and the less noise the image contains, the more useful it is, and we may relate image usefulness to its objective quality. In our study Shannon entropy was computed for the entire image, its foreground, and its background according to (Eq. 10).

(Eq. 10)

where p(Ik) is the probability of the particular intensity value Ik. We assume that higher entropy should mean that more signal is contained in the image. For example, if there are fewer details and more plain surfaces, entropy will be lower. However, a noisy image will have higher entropy, so we consider entropy for three levels of the image.
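
For concreteness, here is a minimal Python sketch of the Shannon entropy computation (Eq. 10), assuming a grayscale image and a 256-bin intensity histogram:

import numpy as np

def shannon_entropy(img, bins=256):
    # Entropy of the intensity histogram; empty bins are dropped to avoid log(0).
    hist, _ = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

The same function can be applied to the foreground and background pixels separately to obtain the local entropies used above.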

1.3 Segmentation

Presented in [3], this measure shows how well various segments of the image can be separated. We use the simplest yet most intuitive implementation, comparing two major segments: the image background (seg1) and foreground (seg2). In this study we simply computed the average intensity value and classified all pixels with lower intensity as background, while the rest of the pixels formed the foreground. To compute the segmentation measure, the average difference U for neighboring pixels in a 3x3 sliding window is computed for each image segment (Eq. 11):

(Eq. 11)

This leads to the following measure W:

(Eq. 12)

Then we compute the average pixel intensity in each segment and obtain the squared difference between the average intensities of every pair of segments; in our case there is only one pair. The inverse sum of squared differences of average intensities is called B:

(Eq. 13)

The final separability measure is obtained as:

sep = 1000*W+B (Eq. 14)

It will be high for images with high separability between segments and low separability within a segment. In our case this measure makes sense mainly for the set of images depicting trees, because the set of medical images mostly presents a dark background, which is clearly separated from the foreground.
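
A minimal Python sketch of this separability measure, assuming the mean-intensity thresholding described above; the treatment of border pixels and the per-segment averaging are simplifications of the original formulation.

import numpy as np

def separability(img):
    img = img.astype(float)
    fg = img >= img.mean()                       # foreground / background split by mean intensity
    def within_variation(mask):
        # average squared difference to the 8 neighbours over one segment (Eq. 11)
        total, count = 0.0, 0
        h, w = img.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not mask[y, x]:
                    continue
                nb = img[y - 1:y + 2, x - 1:x + 2].ravel()
                total += ((nb - img[y, x]) ** 2).sum() / 8.0
                count += 1
        return total / max(count, 1)
    W = (within_variation(fg) + within_variation(~fg)) / 2.0        # Eq. 12
    B = 1.0 / ((img[fg].mean() - img[~fg].mean()) ** 2 + 1e-10)     # Eq. 13
    return 1000.0 * W + B                                           # Eq. 14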

1.4 Flatness

This measure is described in [4] and uses the two-dimensional discrete Fourier transform of the image. First, we obtain the 2D DFT of the image, which is reshaped into a one-dimensional vector FV. Next, the spectral flatness SF is computed as the ratio of the geometric mean to the arithmetic mean:

(Eq. 15)

The resulting measure proposed in the paper is called entropy power and is obtained as the product of the spectral flatness measure SF presented in (15) and the image variance, as shown below:

(Eq. 16)

where the mean in (Eq. 16) denotes the average intensity value of the image. This measure is assumed to be higher for less informative, non-predictive and redundant images.
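
A short Python sketch of the entropy power measure, assuming the spectral flatness is taken over the magnitude-squared 2-D DFT spectrum as described above:

import numpy as np

def entropy_power(img):
    img = img.astype(float)
    spec = np.abs(np.fft.fft2(img)).ravel() ** 2 + 1e-12   # power spectrum as a 1-D vector
    geo_mean = np.exp(np.mean(np.log(spec)))               # geometric mean
    sf = geo_mean / spec.mean()                            # spectral flatness (Eq. 15)
    return sf * img.var()                                  # entropy power (Eq. 16)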

1.5 Sharpness

This measure [5] is based on the assumption that differences of neighboring pixels change more in areas with sharp edges. Therefore, the authors compute a second-order difference for neighboring pixels as a discrete analog of the second derivative for the image passed through a denoising median filter:

, (Eq. 17)

where Im is the original image passed through the median filter. The authors define the vertical sharpness Sver for each pixel as shown below:

, (Eq. 18)

Each pixel is treated as sharp if its sharpness exceeds 0.0001. The number of sharp pixels NSver is computed, and the edge pixels are found with the Canny method, NEver being their count. Then the same process is repeated in the horizontal direction, and the sharp-to-edge pixel ratio for the vertical and horizontal directions is combined as:

(Eq. 19)

We assume that a sharper image should be perceived as more attractive and informative.

1.6 Blockness measure

This measure evaluates the image from the point of view of block artifacts [6]. Absolute intensity differences for neighboring pixels are obtained for the vertical and horizontal directions as shown in (1), and each element of the resulting matrix is then normalized:

(Eq. 20)

By taking the average of each column of the matrix we obtain the horizontal profile of the image Phor, as shown in (Eq. 21):

(Eq. 21)

The vertical profile is obtained in the same way, and a 1-D DFT is applied to both profiles. The magnitude M of the DFT coefficients is then considered:

(Eq. 22)

where 0 ≤ T ≤ w−2.

The blockness measure Bl for block size Z is computed as shown in (Eq. 23). Due to the nature of the DFT, Mhor(T) will have peaks at the frequencies corresponding to the block size, indexed by b = 1, 2, …, Z. The values of Mhor(T) at these peak points correspond to the horizontal blockness of the image Blhor:

(Eq. 23)

The vertical blockness measure can be obtained similarly. In our study 2, 4, 6 and 8 pixels were used as block widths. The resulting measure is shown in (24):

, (Eq. 24)

where r and 1−r are the weights for the horizontal and vertical measures. We use r equal to 0.5. This measure will be higher for images distorted with block artifacts.
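
The following rough Python sketch illustrates the idea of the blockness measure; the per-element normalisation and the selection of the DFT peak frequencies are simplified assumptions rather than the exact procedure of [6].

import numpy as np

def blockness(img, block=8, r=0.5):
    img = img.astype(float)
    def directional(axis):
        d = np.abs(np.diff(img, axis=axis))        # neighbour differences, as in (Eq. 1)
        d = d / (d.std() + 1e-10)                  # crude global normalisation (stands in for Eq. 20)
        profile = d.mean(axis=1 - axis)            # 1-D profile (Eq. 21)
        mag = np.abs(np.fft.fft(profile))          # DFT magnitude (Eq. 22)
        n = len(profile)
        peaks = [mag[int(b * n / block)] for b in range(1, block) if int(b * n / block) < n]
        return sum(peaks) / (profile.sum() + 1e-10)   # peak energy relative to the profile (Eq. 23)
    b_ver, b_hor = directional(0), directional(1)
    return float(np.sqrt(r * b_hor ** 2 + (1 - r) * b_ver ** 2))   # Eq. 24 with r = 0.5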

1.7 Fractal dimension

The idea of a possible relation between image quality and the amount of image detail brings us to measures of fractal dimension. We detect the main contours in the image using the Canny method and then estimate the fractal dimension of the obtained curve. We use box counting to compute the dimension (Eq. 25), where N stands for the number of square blocks with side ε, with ε = 2, 3, 4, and 5.

We assume that higher values of the fractal dimension correspond to more informative images containing more detail.
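
A small Python sketch of the box-counting estimate, assuming a binary edge map (e.g. produced by a Canny detector) as input:

import numpy as np

def box_count_dimension(edges, sizes=(2, 3, 4, 5)):
    counts = []
    for eps in sizes:
        h, w = edges.shape
        n = 0
        for y in range(0, h, eps):
            for x in range(0, w, eps):
                if edges[y:y + eps, x:x + eps].any():
                    n += 1
        counts.append(max(n, 1))
    # N(eps) ~ eps^(-D), so D is the slope of log N versus log(1/eps)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope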

1.8 Noise level

It is natural to assume that the presence of noise is detrimental to the perceived image quality. Therefore we included the noise level measure proposed in [7], where the noise level is described as the standard deviation of Gaussian noise. The authors propose a patch-based algorithm. First, the original image is decomposed into overlapping patches, and the model for the whole image is written as pi = zi + ni, where zi is the original image patch centered at the i-th pixel and transformed to a one-dimensional vector, pi is the observed patch (also transformed to a vector) distorted by Gaussian noise, and ni is the noise vector. To estimate the noise level we need to recover this unknown standard deviation using only the observed noisy image. If the image patches are treated as data points in Euclidean space, their variance can be projected onto a single axis whose direction is defined by a vector u. The variance V of the data projected onto u can be written as:

(Eq. 26)

where σ denotes the standard deviation of the Gaussian noise. The minimum-variance direction of the data is then found using Principal Component Analysis (PCA). First, the data covariance matrix π is defined as:

(Eq. 27)

where b is the number of patches and m is the average over the dataset {pi}. Then the variance of the original data projected onto the minimum-variance direction equals the minimum eigenvalue:

, (Eq. 28)

where ϕ is the covariance matrix of the noise-free patches z. The noise level can be estimated by decomposing the minimum eigenvalue of the noisy patches' covariance matrix, which is an ill-posed problem because the minimum eigenvalue of the noiseless patches' covariance matrix is unknown. The authors therefore suggest selecting weakly textured patches from the noisy image: such patches span a low-dimensional space and the minimum eigenvalue of their covariance matrix is close to zero, so the noise level Fnoise can be estimated as:

, (Eq. 29)

where π is the covariance matrix for weak textured patches.

Undoubtedly, the most important part of the proposed algorithm is the selection of weakly textured patches. The main idea is to compare the maximum eigenvalue of a patch's gradient covariance matrix with a threshold. The gradient covariance matrix C of patch j is computed as:

(Eq. 30)

where Gj = [Dhorj, Dverj], and Dhor and Dver are the horizontal and vertical derivative operators. To select weakly textured patches, a statistical hypothesis is tested: the null hypothesis (the patch has a weak, flat texture) is accepted if the maximum eigenvalue of its gradient covariance matrix Cj is less than the threshold. The threshold τ for the maximum eigenvalue of the gradient covariance matrix can be found as:

(Eq. 31)

where the significance level is set to 0.99, and the threshold is obtained from the inverse gamma cumulative distribution function with shape parameter b/2 and a corresponding scale parameter. The inverse gamma cumulative distribution function is defined as:

(Eq. 32)

where Γ(.) denotes gamma function, a is a scale parameter, b is a shape parameter. Gamma function for positive integer n is defined as:

(Eq. 33)

We assume that noisier images have worse quality and are less informative.
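
The following very simplified Python sketch conveys the idea of the patch-PCA noise estimate; it replaces the statistical test of (Eqs. 30-33) with a plain gradient-energy percentile to pick weakly textured patches, so it is an approximation of the idea, not the authors' algorithm.

import numpy as np

def noise_level_estimate(img, patch=7):
    h, w = img.shape
    patches, energies = [], []
    for y in range(0, h - patch, patch):
        for x in range(0, w - patch, patch):
            p = img[y:y + patch, x:x + patch].astype(float)
            gy, gx = np.gradient(p)
            patches.append(p.ravel())
            energies.append((gx ** 2 + gy ** 2).sum())
    patches, energies = np.array(patches), np.array(energies)
    weak = patches[energies <= np.percentile(energies, 25)]    # crude weak-texture selection
    cov = np.cov(weak, rowvar=False)
    # smallest eigenvalue of the weak-patch covariance approximates the noise variance (Eq. 29)
    return float(np.sqrt(max(np.linalg.eigvalsh(cov).min(), 0.0)))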

1.9 Average gradient and edge intensity

Both measures are taken from [8]. Average gradient FAG shows how pixel values change on average for vertical and horizontal directions according to:

(Eq. 34)

The edge intensity FEI is computed as:

(Eq. 35)

where Gver and Ghor are the vertical and horizontal gradients obtained as:

(Eq. 36)

(Eq. 37)

In addition, we use a number of simple image quality metrics. First of all, the average intensity FAI is computed as:

(Eq. 38)

The image contrast FC and the contrast per pixel FCPP are obtained as:

(Eq. 39)

(Eq. 40)
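
Since the exact formulas are given by (Eqs. 34-40), the following Python sketch only illustrates plausible implementations of these basic metrics; the precise normalisations (e.g. the high-pass kernel used for contrast per pixel) are assumptions.

import numpy as np
from scipy.signal import convolve2d

def simple_metrics(img):
    img = img.astype(float)
    gy, gx = np.gradient(img)
    avg_gradient = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))   # average gradient (Eq. 34)
    avg_intensity = img.mean()                                   # average intensity (Eq. 38)
    contrast = img.std()                                         # RMS contrast (Eq. 39)
    kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]) / 8.0
    cpp = np.abs(convolve2d(img, kernel, mode="same")).mean()    # contrast per pixel (Eq. 40)
    return avg_gradient, avg_intensity, contrast, cpp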

Table 1. Correspondence between the described measures and the names of features in our dataset. The 0, 1 and 2 suffixes correspond to the three levels of the Laplacian pyramid.

Metric | Name | Corresponding variables
No-reference blur metric | Fblur1 | Blur10, blur11, blur12
Min Goo Choi method | Fblur2 | Blur20, blur21, blur22
Shannon entropy | Fent | Ent10, ent11, ent12
Local Shannon entropy | | EntB0, entF0
Separability measure | Fsep | Sep0, sep1, sep2
Flatness measure | Fflat | Flat0, Flat1, Flat2
Sharpness | Fsharp | Sharp0, sharp1, sharp2
Contrast | FC | Contr20, contr21, contr22
Blockness measure | Fblock | Block20, block40, block60, block80, block21, etc.
Fractal dimension | Ffrac | Frac0, frac1, frac2
Average intensity | FAI | Intens0, intens1, intens2
Noise level | Fnoise | Noise0, noise1, noise2
Contrast per pixel (CPP) | FCPP | Contr10, contr11, contr12
Average gradient | FAG | AG0, AG1, AG2
Edge intensity | FEI | EI0, EI1, EI2

2. Research design, data collecting and image markup

In order to evaluate the performance of various quality measures and validate the results, we used two datasets of grayscale images of different nature and quality. The quality of each image was assessed twice: first by human observers (thus capturing our visual perception of the image quality), and second by the set of metrics described above. The metrics were applied to the original images as well as their lower-resolution copies derived with Laplacian pyramid decomposition, which produced a total of 57 quality metric measurements per image. Our main intention was to find the best sets of numerical metrics that would explain the observed human perception of image quality. The image data used in this work consisted of two sets of similar images: the first set had 50 medical images (CT scans of an abdomen), and the second had 50 scenery photographs of trees and forest landscapes. We intentionally chose images of a rather abstract and emotion-free nature to exclude any subjective bias in human perception.

The human perception ranks for the images were obtained with pairwise comparisons between all images in each dataset. The images were presented in random pairs to 15 human observers, who were asked to choose the better of the two. This task was implemented using the Amazon Mechanical Turk platform; Figure 1 shows a screenshot of the assignment for image markup. To ensure comparison robustness, we used markup with triple overlap: each pair of images was compared three times by different observers, and the final choice was computed using the majority rule. As a result, more than 7000 pairs were presented and compared. To get image features, 19 basic quality measures were computed for three copies of each image: the original image and its two lower-resolution copies derived as two levels of the Laplacian pyramid. The resulting 57 measurements were treated as 57-dimensional image feature vectors and used as independent variables in the models.

Figure 1. Mechanical Turk assignment for image markup

3. Experimental Results

3.1 Linear regression with known target variable

At the first step of the research we try to solve the task using known quality measures for every image; that is, we fit models to predict a known outcome. Based on the pairwise image comparison results we computed a quality index for every image as the number of this image's wins divided by the number of its comparisons. This allowed us to put the images in a linear quality order. Note that in general this linear order cannot correspond to all the recorded comparisons: in some instances an image with a higher quality index might have been perceived as inferior when compared with some lower-quality image. This non-linearity in image grades originates from differences in quality perception between human observers, and we call such image pairs inverted. Overall, 10% of pairs were inverted in the medical dataset and 14% in the trees dataset. Using the linear quality indices (rankings) as a target variable, we implemented linear regression with the L2 norm as a basic model. We considered all possible regression models containing combinations of k features, k = 1…57, and extracted the best models for each k as those providing the least regression error. Note that this resulted in an exhaustive search through millions of possible models (feature combinations), therefore we used a branch-and-bound algorithm to speed up the search. The regression error E for L2 regression was defined as:

(Eq. 41)

where Wp stands for the model-predicted image quality and W for the real observed quality. One of the main goals of the study was to find a set of factors that are responsible for the human perception of image quality. We validated our feature-modeling results using the medical (MS) and trees (TS) image datasets separately, to make sure that models that perform well for one dataset would also be good for the other.
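
A minimal Python sketch of how the quality index and the share of inverted pairs described above can be computed; the (winner, loser) pair format is an assumed encoding of the comparison matrix.

import numpy as np

def quality_index(n_images, comparisons):
    wins = np.zeros(n_images)
    games = np.zeros(n_images)
    for winner, loser in comparisons:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    q = wins / np.maximum(games, 1)                            # wins / comparisons per image
    inverted = sum(1 for w, l in comparisons if q[w] < q[l])   # pairs contradicting the linear order
    return q, inverted / len(comparisons)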

Figure 2 shows various models with 1, 2, 3, 4 and 5 features. We used R-squared to evaluate each model, as a measure of the fraction of the original data variation explained by the model. Treating the concept of image quality as a function of our visual perception rather than of image selection, we assumed that a good model should perform well for both the MS and TS datasets. Figure 2 visualizes our results. As one can see, R-squared does not increase dramatically after using more than 6 features, so we show only the models with up to 5 predictors. Circle sizes correspond to the average error of each model: the largest circles are close to 0.27, while the best models have errors close to 0.08.

Figure 2. Regression models for both datasets

We can also observe that the circles on the plot tend to cluster along the diagonal line, which means that most models perform similarly on both the MS and TS datasets. Moreover, the higher k (the number of model features/predictors), the closer the circles are to the diagonal line. As a result, higher k generally corresponds to more accurate and more image-independent models, which can provide optimal quality predictions for both the MS and TS sets.

Figures 3 a, b illustrate best models obtained for MS and TS independently. As the figure indicates, the models selected as the best for one dataset perform well on the other. This already can be viewed as a strong demonstration of the objectivity in the human image quality perception: despite the obvious differences between the images of CT scans and forest landscapes, the models optimal for one set were among the best performers for the other.

Figure 3 a, b. Best models obtained for the MS and TS datasets independently

Finally, Figure 4 shows the top ten models for each model size, sorted by the average mean error on the two datasets. It can be seen that most models lie on the diagonal line, with models of 4, 5 and 6 features becoming increasingly close to each other due to their high R-squared on both datasets. Table 2 summarizes the best predictors selected for each number of features, using the names defined in Table 1. It provides some significant insights. First of all, there is a limited set of quality measures that occur in most optimal models derived for the MS and TS data. It can be assumed that these factors play the most important role in our perception of image quality:

Figure 4. Best ten models for both sets

·Entropy power of the image on first and second levels of Laplacian pyramid (metrics flat0, flat1). It is a product of spectral flatness and variance of the image and shows image signal compressibility, reflecting how much useful signal is contained in the image.

·Entropy of the background (entB0, entB1) and entropy of the whole image, present in many optimal models for both sets

·Blockness measures for all block sizes (of 2, 4, 6 and 8 pixels) are important for all sets of images on all three levels of pyramid

·Both blur measures, as well as sharpness, contrast and edge intensity measures on all resolution levels, are significant for all datasets, showing that perception of contrast and blurriness is one of the major components of image quality.

·Fractal dimension on all levels of image resolution can be found in models for both sets.

·Average gradient is especially important for the trees dataset. This measure shows how much pixel values change on average. According to it, images with higher-contrast edges between objects get a higher mark.

·Object separability on the first and second levels of the pyramid can be found in models for both sets. This measure is higher for images with distinguishable, higher-contrast parts.

As a result, we identify the following major factors responsible for the human perception of image quality:

·Amount of information contained in image, which can be described by spectral flatness and entropy measures. It is remarkable that random noise is not taken into account, while larger objects have some impact.

·Contrast, average gradient and blurriness are the most important non-reference quality measures that affect visual perception of the whole image, while sharpness and noise level hardly appear in the best models. This might be explained by the sensitivity of the metrics used.

·Artifact measures like blockness appear to be significant in most models.

·Background entropy performs well only as an add-on factor which explains the variance that is not already covered by the other factors.

All things considered, we obtained models containing restricted sets of features that are able to explain quality perception. However, the basic matrix of comparisons remains our ground truth and main source of information. To measure the quality of the described approach, we compared each pair of images by the predicted quality measures computed by the best five-feature models mentioned above. To get the vector of predicted values we performed leave-one-out cross-validation for each of the two sets, which gave a more stable resulting vector of quality measures. At each step one image was separated from the others, the model weights were learned on the remaining images, and the quality measure was predicted for the held-out image. The final vector of model quality measures was constructed from the predicted values and normalized.
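
A short sketch of this leave-one-out procedure, using scikit-learn for the linear model; the final min-max normalisation is an assumption.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_predicted_quality(X, y):
    preds = np.zeros(len(y))
    for train, test in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train], y[train])   # weights learned on the other images
        preds[test] = model.predict(X[test])
    return (preds - preds.min()) / (preds.max() - preds.min() + 1e-12)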

The average share of inverted pairs computed for the predicted quality measures relative to the initial comparison matrix is 31% for medical images and 29% for trees. However, this result is far from the original ranking and could be improved.

Table 2. Best predictor sets for models with restricted numbers of factors. The table contains the best three models according to the average error on the two datasets.

Model size N | Best L2 predictors for both datasets | Best L2 predictors for trees dataset | Best L2 predictors for medical dataset
1 | Blur10, Blur12 · Sep0 · Blur20, blur22 · Intens2 · EntF0 | AG1 · Sharp1 · EI1, EI0 | Ent10 · ent11 · sep0
2 | Blur20, sep0 · Blur20, sep1 · EntF1, blur20 · Blur20/21, intens0/1/2 | Blur20, entF0 · EntB0, block60 · EntB0, frac0 | Blur20, sep0 · Blur20, sep1 · Blur20, intens0
3 | Blur20, EntB0, sharp1 · Blur10, Blur11, blur22 · Block measures + blur · Blur20, EntB0, frac0 | Blur20, entB0, frac2 · entB0, sep0, flat2 | Blur10, blur11, blur22 · Contr20, blur21, noise2 · Contr20, intens0, ent11 · Blur20, block22, block62
4 | Blur20 + blockness measures · Blur11, entB1, intens1, block22 · Contr22, noise2, blur21, entB0 | entB0, sep0, block80, flat2 · blur10, entB0, sep0, flat2 | block62, blur20, contr10, block22 · blur20, contr20, block62, block22
5, 6 | entB0, blur21, flat1, EI1, frac2 · entB0, blur21, flat1, EI1, block62 | blur10, entB0, sep0, block40, flat2 · blur10, entB0, sep0, block80, flat2 | Blur20, block60, block62, block22 · Blur20, EI0, EI1, block22, block42

3.2 Checking linearity of image quality perception

As we mentioned before, the reduction of pairwise comparison scores to one-dimensional linear quality indices resulted in 10%-14% of inverted pairs: instances where the linear image quality values mismatch the result of the pairwise comparison. Using OLS regression models with five features resulted in 29-30% of inverted pairs. To improve our results and to account for more arbitrary ways of defining image quality indices, we decided to consider a scenario with no predefined quality order. That is, the basic idea was to treat the quality measures as unknown variables and then find their optimal values satisfying two major criteria: good predictability with linear regression, and the lowest number of inverted image pairs.

However, there is another issue: in the previous part we used a linear model of quality, but the linear dependence is not obvious and should be checked. To do this, we used a simple method based on the best models obtained at the previous step. The idea was to use linear models and to increase R-squared and minimize the error without increasing the number of inverted pairs, taking the known quality measures from the previous step as starting values. If a linear model is appropriate, then we should be able to adjust the target vector to get a higher R-squared without violating the restrictions imposed by the initial matrix of comparisons. To start, we looked for the best set of measures with the lowest regression error that does not increase the number of inversions according to the initial pairwise comparison matrix. In addition, we tried to decrease the number of inverted pairs with the new set of measures. To check this we implemented a simple algorithm described below.

Input: the starting set q1…qn, where qi is the fraction of wins for the i-th image in pairwise comparisons.
Process: WHILE the terminal condition does not hold, for each qi, 1 < i < n:
1. Get an interval for qi_new: [qi_min, qi_max], where qi_min = max of all qj such that qj < qi, j ≠ i, and qi_max = min of all qk such that qk > qi, k ≠ i.
If qi_min > qi_max: consider the sorted array [q1, q2, …, qm, qm+1, …, qn].

For each interval [qm, qm+1] set qt = (qm + qm+1)/2 and take qt = argmin(N_inverted_pairs); then qi_min ← qt, qi_max ← qt+1. If no qt gives fewer inverted pairs, set qi_new = qi. If qi_min < qi_max: go to step 2.

2. For each candidate qi in [qi_min, qi_max], with a step of 0.1 of the interval length, choose the optimal qi = argmin(MSE) of the linear regression model.

END WHILE*

* Repeat steps 1 and 2 until R-squared exceeds a threshold and the squared-error difference between steps s and s-1 is less than a threshold. To compare the error at step s with the previous step s-1, we fit the feature weights using the vector Qs as a target, obtain the model vector Qs_mod, and compute the errors of Qs-1 and Qs against this vector.

We assumed that in the case of a nonlinear dependence between quality and features this algorithm would not converge: the idea of the algorithm is to move the initial quality measures closer to the model line. If this is possible without violating the restrictions imposed by the comparison matrix, then the mean squared error (MSE) decreases because the model line fits the new quality measures better. We used the best ten five-feature models and the quality measures from the previous section as initial values. However, in all cases it was impossible to decrease the fraction of inverted pairs by more than 2 percentage points. We suggest that this is caused by peculiarities of human perception and the lack of transitivity in pairwise comparisons: it is natural that a person who compares images two at a time is not able to keep all the seen images in mind and provide an ideal linear order.


Figure 5 a, b. Old and new values of quality for MS (a) and TS (b)

3.3 Computing quality measures of images using Elo ratings approach

To improve the initial assignment of the quality indices, we tried one more approach that does not use any initial target vector of quality measures and is based directly on the initial comparisons matrix, in order to improve the results achieved at the previous step.

This approach is based on the Elo rating system for chess tournaments [9]. Each pair is considered an independent Bernoulli trial where one of the two outcomes (image A winning over image B) has probability p. All comparisons are seen as a series of such trials. Each image in a pair has a rating, which determines the outcome of the comparison, so that the image with the higher rating wins. The rating of image K is a linear combination of its L features with weights:

(Eq. 42)

The probability of choosing image A in pairwise comparison i or, in other words, the probability of image A's rating being larger than image B's rating, is written as a logistic function:

(Eq. 43)

The optimal set of feature weights would provide ratings that make the observed pairwise comparisons most likely. The outcome x of each comparison can be 0 or 1, which can be written using the Bernoulli formula, where the probability P is the logistic probability shown in (Eq. 43):

, x = {0,1} (Eq. 44)

The likelihood function is written as:

(Eq. 45)

To obtain image rankings that make the pairwise comparisons in the initial matrix most likely, we iteratively change the feature weights to maximize the logarithm of the likelihood, which is the sum of the logarithms of Pi(x) shown in (Eq. 44). Optimization was conducted using a gradient-based method from the SciPy library.
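
A compact Python sketch of this fit (a complete listing is given in Appendix A); here scipy.optimize.minimize with L-BFGS-B stands in for the optimizer, which is an implementation choice rather than part of the method.

import numpy as np
from scipy.optimize import minimize

def fit_rating_weights(features, comparisons):
    # features: (n_images, n_features) array; comparisons: (a, b, outcome) with outcome = 1 if a won
    def neg_log_likelihood(w):
        ratings = features @ w                     # Eq. 42: rating as a weighted feature sum
        ll = 0.0
        for a, b, outcome in comparisons:
            p = 1.0 / (1.0 + np.exp(ratings[b] - ratings[a]))   # Eq. 43: logistic win probability
            p = np.clip(p, 1e-12, 1 - 1e-12)
            ll += outcome * np.log(p) + (1 - outcome) * np.log(1 - p)   # Eqs. 44-45
        return -ll
    res = minimize(neg_log_likelihood, x0=np.zeros(features.shape[1]), method="L-BFGS-B")
    return res.x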

This method was applied to various combinations of five features used in the previous method, independently on each image set, in order to compare features and estimate their importance in determining image quality perception. Besides, the best models for the mixed set of images were obtained. To compare models we simply used the rate of correctly predicted pairwise outcomes; the results are presented in Table 3.

We applied the ranking approach to possible combinations of five features and looked at the best models that provide the best results for each set separately and that perform well for both sets. When testing a model on both sets we use the sum of the log-likelihoods for the two sets and take the average of the feature weights over the two sets. The performance of every model was estimated by the number of pairwise comparisons predicted correctly by the ratings; the results are presented in Table 3.

Table 3. Best predictor sets for models with restricted numbers of factors. The table contains the best three models according to the average rate of correct pairwise comparisons.

Model | Ratio of correct comparisons: Both | Med | Trees
ent11, sharp2, block42, block62, intens2 | 0.667 | 0.63 | 0.67
entB0, entF0, ent11, ent12, block82 | 0.69 | 0.73 | 0.75
Ent10, entF1, block20, noise1, block22 | 0.66 | 0.74 | 0.75
Contr20, ent11, entF2, sharp2, contr12 | 0.69 | 0.76 | 0.73
Blur20, ent10, ent11, block21, block22 | 0.69 | 0.75 | 0.74
Ent10, entF1, block20, noise1, block22 | 0.66 | 0.75 | 0.76

According to the table, some of the best models that perform well on each set separately give worse results on the mixed set of images. This can be clearly seen in the 3D plot (Figure 6) and the 2D plots (Figure 7 a, b, c) of the models, where each axis corresponds to the quality on one of the sets: TS, MS, or the mixed set containing both. Most models have better quality on each of the MS and TS sets but lower quality on the mixed set. This means that the models are quite good even with five features; however, these features are sensitive to image content, so using averaged weights degrades model quality. Moreover, in many cases the feature weights for the different sets have opposite signs.

Another interesting finding concerns putting all 57 features into one model: this seriously degrades the result and yields around 40-50% of correctly predicted pairs, which is almost as good as random choice.

If we look at the features contained in the best five-feature models, we see that the features occurring in most models repeat the results obtained with OLS regression. One of the most important is the entropy of the whole image and of its background and foreground on all levels of the pyramid. Besides, blurriness, blockness, noise, average intensity and contrast occur in the top models, which does not contradict the results obtained with OLS regression in Section 3.1.

In comparison with the previous approach based on known quality measures, the Elo rating approach gives 24-27% of inverted pairs on the separate sets, which is better than linear regression. This is expected, since the initial comparison matrix is used directly as the ground truth. As for the quality on the mixed set, the models are not able to provide a good result because of the difference in weights. We take a closer look at this question in the next section.

Figure 6. Models in 3-dimensional space


Figure 7 a b c

3.4 Comparing feature weights

After obtaining the sets of most important features, our intention was to check for features invariant to the scene and to try to get a single formula of quality based on separate models for both image sets. In addition, we tested the best models for each image set separately. Using the initial comparison matrix as ground truth, we trained a linear classifier with a binary outcome to check the results obtained at the previous steps. The first part of this experiment aimed at training a model on one set and testing it on the other. If the feature weights derived from the first image set provided a good prediction for the second set as well, we would conclude that the selected features provide a good representation of human image quality perception. The second part was to check model performance on each set and to draw training and testing samples from a mixed set, to make sure that a restricted number of features is able to provide acceptable results. For both parts, the main requirement was the use of linear classifiers, in line with the previous assumption that image quality depends on the image features linearly. We used a logistic regression classifier, which assumes a linear dependence between the outcome and the features. For every pair we use the differences of the features between the left and right images and a binary target variable, which equals 1 if the left image wins. The scikit-learn implementation of the logistic regression classifier was used. We studied model quality metrics such as accuracy and the area under the ROC curve to evaluate model performance and to see whether the selected features are able to provide a good result. At the final step we took the best ten five-feature models and performed a number of binary classification experiments using the logistic regression classifier with an intercept.
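
A minimal sketch of this pairwise classification set-up with scikit-learn; the cross-validation scheme shown here is illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pairwise_accuracy(features, comparisons):
    # predictor = feature difference of the two images, label = 1 if the left image won
    X = np.array([features[a] - features[b] for a, b, _ in comparisons])
    y = np.array([label for _, _, label in comparisons])
    clf = LogisticRegression(fit_intercept=True, max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()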

The first part of the experiment considered training the classifier on one homogeneous set of images and testing it on the other. The results of these experiments demonstrate very low quality regardless of the number of features in the model. The accuracy score is below 45%, and precision and recall are close to 50%, which is the same as random choice. This result was obtained for all experiments with the same design. The example feature weights for the same model learned on each set of images, presented in Table 4, demonstrate that the coefficients differ between the sets.

Table 4. Feature weights learned on each set

Feature | MS | TS | Mixed set
EntF0 | 0.7244 | 0.745 | 0.632
Spec0 | 0.5731 | -0.01 | 0.312
Block60 | -1.293 | 0.4741 | 0.65
Spec2 | 1.195 | 0.424 | 0.251
Contr22 | -1.634 | -0.396 | -0.56
F1 score | 0.75 | 0.75 | 0.63

As for training and testing on the same set of images, better results were achieved even with a five-feature set. For example, the fifth model from Table 3 provides better results on both sets: it reaches an average accuracy of 72% on the trees dataset using a random-shuffle cross-validation scheme with a 20% test size, and a 71% accuracy score on the medical dataset. On the mixed dataset, where examples of both sets were included in the training and testing sets, the average accuracy score is about 59%. Other sets of experiments considered models including all 57 features. In this case the average accuracy score is 76% for the mixed dataset, 80% for the medical dataset and 77% for the trees dataset. This demonstrates that the best five-feature models contain most of the useful signal needed for classification. When we train on one set and test on the other using all 57 features, the model still gives only 50% accuracy. These results show that the selected models containing restricted feature sets are good enough for both sets of images. However, there is no universal formula of quality for both sets at once, due to the different feature weights.

Figure 8. ROC curves for five-feature classifiers


Conclusion and further research

Using two datasets of very different nature, we identified the most important image quality factors explaining human perception of image quality. We used two major approaches: the first uses a vector of known quality measures obtained from the initial comparisons via the simple wins-to-comparisons ratio; the second treats quality as an unknown variable and finds its values using the raw comparison matrix as the source of information. Comparing these two approaches by their fraction of falsely predicted pairwise comparisons (inverted image pairs), we obtained 29-30% for the first and 24-27% for the second approach. We also observed that some factors were conceptually similar, which enabled us to select a limited set of really important quality factors. In the case of medical images this is a very useful finding: it allows us to interpret quality perception and not only rank images by a number of features but also try to build a framework that improves particular image features. Such a tool could be one of the potential practical extensions of this study.

In the future we would like to extend and generalize the achieved results by validating them on more datasets. Another potential direction lies in the field of ranking and classifying images by quality. After increasing the dataset of manually ranked images we could compare a ranking produced by a neural network, which can use a large number of all possible features, with a classifier that uses a restricted set of the most important features. However, such a comparison would be fair only on a dataset of neutral monochrome images, which makes it useful mainly for a specific field such as medicine and medical imaging.

All things considered, our results demonstrate that image quality perception can be modeled with a small set of non-reference factors that are easy to interpret. This can lead to new useful tools for image quality control.

Works cited

[1] Crete F., Dolmiere T., Ladret P. The Blur Effect: Perception and Estimation with a New No-Reference Perceptual Blur Metric. SPIE Electronic Imaging Symposium, Conf. Human Vision and Electronic Imaging, 2007.

[2] Kerouh F., Serir A. A No-Reference Blur Image Quality Measure Based on Wavelet Transform. Digital Information Processing and Communications, 2012.

[3] De K. A New No-Reference Image Quality Measure to Determine the Quality of a Given Image Using Object Separability. 2012 International Conference on Machine Vision and Image Processing (MVIP), Taipei, 2012.

[4] Woodard J.P., Carley-Spencer M.P. No-Reference Image Quality Metrics for Structural MRI. Neuroinformatics, vol. 4, 2006.

[5] Kumar J., Chen F., Doermann D. Sharpness Estimation for Document and Scene Images. 21st International Conference on Pattern Recognition (ICPR), Tsukuba, 2012, pp. 3292-3295.

[6] Chen C., Bloom J.A. A Blind Reference-Free Blockiness Measure. Proceedings of the Pacific Rim Conference on Advances in Multimedia Information Processing: Part I, Shanghai, 2010.

[7] Liu X., Tanaka M., Okutomi M. Noise Level Estimation Using Weak Textured Patches of a Single Noisy Image. IEEE International Conference on Image Processing (ICIP), 2012.

[8] Yuan T., Zheng X., Hu X., Zhou W., Wang W. A Method for the Evaluation of Image Quality According to the Recognition Effectiveness of Objects in the Optical Remote Sensing Image Using Machine Learning Algorithm. PLoS ONE, 2014.

[9] Elo A.E. 8.4 Logistic Probability as a Rating Basis. In: The Rating of Chessplayers, Past & Present. New York: Ishi Press, 2008.

Source code

A. Elo rating approach

import scipy
import scipy.optimize
import itertools
import random
import math
import numpy
import pandas as pd


class LikelihoodCalculator:
    def __init__(self, features, comparisons):
        self.features = features
        self.comparisons = comparisons

    def getLogLikelihood(self, ratings):
        logLikelihoodSum = 0.0
        for (i1, i2, v) in self.comparisons:
            print i1, i2, len(ratings)
            if abs(ratings[i2] - ratings[i1]) > 200.0:
                logLikelihoodSum += -abs(ratings[i2] - ratings[i1]) if (v == 1) == (ratings[i2] > ratings[i1]) else 0.0
            else:
                p = 1.0 / (1.0 + math.exp(ratings[i2] - ratings[i1]))
                logLikelihoodSum += math.log(abs(1.0 - p - v))
        return logLikelihoodSum

    def getRatings(self, weights):
        return [sum([weight * feature for (weight, feature) in itertools.izip(weights, features1)])
                for features1 in self.features]

    def updateDerivatives(self, weights, featuresA, featuresB, v, derivatives):
        ratingA = sum([weight * featureA for (weight, featureA) in itertools.izip(weights, featuresA)])
        ratingB = sum([weight * featureB for (weight, featureB) in itertools.izip(weights, featuresB)])
        exp1 = math.exp(ratingB - ratingA)
        value = 1.0 / (1.0 + exp1)
        if exp1 > 1e50:
            derivativeA = 1.0 / exp1
        else:
            derivativeA = exp1 / (1.0 + exp1) ** 2
        derivativeB = -derivativeA
        for j in range(len(weights)):
            derivatives[j] += (derivativeA * featuresA[j] + derivativeB * featuresB[j]) / (value + v - 1.0)

    def __call__(self, weights):
        weights = list(weights)
        ratings = self.getRatings(weights)
        value = self.getLogLikelihood(ratings)
        derivatives = [0.0 for j in range(len(weights))]
        for (a, b, v) in self.comparisons:
            self.updateDerivatives(weights, self.features[a], self.features[b], v, derivatives)
        print "Value: " + str(value)
        return (-value, numpy.array([-d for d in derivatives]))


def findOptimalWeights(features, comparisons):
    weightsCount = len(features[0])
    weights0 = [random.random() for j in range(weightsCount)]
    print 'START'
    (weights, f, d) = scipy.optimize.fmin_l_bfgs_b(LikelihoodCalculator(features, comparisons), weights0)
    print f
    print d
    return weights


def checkDerivative(obj, point, u):
    print "Starting, point: " + str(len(point)) + ", u: " + str(u)
    (initialValue, gradient) = obj.__call__(point)
    gradient = list(gradient)
    print "Starting, point: " + str(len(point)) + ", gradient: " + str(len(gradient)) + ", u: " + str(u)
    print "Calculated derivative for u = " + str(u) + ": " + str(gradient[u])
    for power in range(-7, -4):
        delta = 10.0 ** power
        pointWithDelta = point[:]
        pointWithDelta[u] += delta
        (value, gradient1) = obj.__call__(pointWithDelta)
        print "delta: " + str(delta) + ", value: " + str(value) + ", derivative: " + str((value - initialValue) / delta)


def main():
    features_df = pd.read_csv("/Users/nephidei/Documents/imgproc/final/reit/trees_features.csv", sep=';')
    for nabor in [[3, 15, 2, 50, 6, 11, 21]]:
        features_selected = features_df[nabor]
        features = map(list, features_selected.values)
        comparisons_df = pd.read_csv("/Users/nephidei/Documents/imgproc/final/reit/trees_comp_sure.csv", sep=';')
        comparisons = map(list, comparisons_df.values)
        for comp in comparisons:
            comp[0] -= 1
            comp[1] -= 1
        print comparisons
        weights = findOptimalWeights(features, comparisons)
        print "Weights: " + str(weights)
        okCount = 0
        badCount = 0
        for (i, features1) in enumerate(features):
            rating = sum([weight * feature for (weight, feature) in itertools.izip(weights, features1)])
        for (a, b, v) in comparisons:
            ratingA = sum([weight * featureA for (weight, featureA) in itertools.izip(weights, features[a])])
            ratingB = sum([weight * featureB for (weight, featureB) in itertools.izip(weights, features[b])])
            if (ratingA > ratingB) == (v == 1):
                okCount += 1
            else:
                badCount += 1
        print "OK: " + str(okCount) + ", bad: " + str(badCount)

main()

B. Code for non-reference quality measures

% Average gradient
function AG = avgGrad(image)
% original image F:
imageF = im2double(image);
[m, n] = size(imageF);
Gx = zeros(m-1, n-1);
for i = 1:m-1
    for j = 1:n-1
        a1 = imageF(i, j);
        a2 = imageF(i+1, j);
        a3 = imageF(i, j+1);
        sum1 = ((a1-a2)^2 + (a1-a3)^2);
        Gx(i, j) = sqrt(sum1/2);
    end
end
C = 1/((m-1)*(n-1));
S = sum(Gx(:));
AG = C*S;
end

% blockness

% A Blind Reference-Free Blockiness Measureblockness = blockness(image, bl)= rgb2gray(image);

[m, n] = size(imageF);

% window width= 8;

% block size parameter: bl

% difference

diff_hor = abs(imageF(1:m-1, :) - imageF(2:m, :));_vert = abs(imageF(:, 1:n-1) - imageF(:, 2:n));

% normalization_norm_hor = zeros(m,n);ii=1+w:m-w-1j=1:n= sum(diff_hor(ii-w:ii+w,j).^2) - diff_hor(ii,j)^2;= double(expr1 / (2 * w + 0.0))^0.5;_norm_hor(ii,j) = diff_hor(ii,j)/koren;

% horizontal profile_hor = 1/n*(sum(d_norm_hor,2));_values= zeros(m-1);_FPH = 0.0;ii = 1:m-1= ii*(m-1)/bl-1.0;_PH = 0.0;xi = 1:m-1_PH = sum_PH + prof_hor(xi) * exp(-i*2*pi*xi*X/(m-1));= abs(sum_PH);_values(ii) = FPH;_FPH = sum_FPH + FPH^2;_h = 1/sum(prof_hor(1:m-1))*sqrt((1/(bl-1))*sum_FPH);

% normalization vert_norm_vert = zeros(m,n);

for ii=1:mj=1+w:n-w-1

expr1 = sum(diff_vert(ii,j-w:j+w).^2) - diff_vert(ii,j)^2;= double(expr1 / (2 * w + 0.0))^0.5;_norm_vert(ii,j) = diff_vert(ii,j)/koren;

end

% vertical profile_vert = 1/n*(sum(d_norm_vert,1));

PH_values_vert= zeros(n-1);_FPH_vert = 0.0;j = 1:n-1_vert = j*(n-1)/bl-1.0;_PH_vert = 0.0;xj = 1:n-1_PH_vert = sum_PH_vert + prof_vert(xj) * exp(-i*2*pi*xj*X_vert/(n-1));_vert = abs(sum_PH_vert);_values_vert(j) = FPH_vert;_FPH_vert = sum_FPH_vert + FPH_vert^2;

end_v = 1/sum(prof_vert(1:n-1))*sqrt((1/(bl-1))*sum_FPH_vert);= sqrt((bm_v^2)*0.5 + (bm_h^2)*0.5);

% BLUR METRICS

%

% The Blur Effect: Perception and Estimation with a New No-Reference

% Perceptual Blur Metric

%

% metric NO 001
function blur = blurOne(image)
% original image F:
imageF = im2double(image);
[m, n] = size(imageF);
hv = [1 1 1 1 1 1 1 1 1] * 1/9;
hh = hv';
Bvertical = imfilter(imageF, hv);
Bhorizontal = imfilter(imageF, hh);
% compute absolute difference images of B and F:
diff_Fvertical = abs(imageF(2:m, :) - imageF(1:m-1, :));
diff_Fhorizontal = abs(imageF(:, 2:n) - imageF(:, 1:n-1));
diff_Bvertical = abs(Bvertical(2:m, :) - Bvertical(1:m-1, :));
diff_Bhorizontal = abs(Bhorizontal(:, 2:n) - Bhorizontal(:, 1:n-1));
% compute variation
var_vertical = max(0, diff_Fvertical - diff_Bvertical);
var_horizontal = max(0, diff_Fhorizontal - diff_Bhorizontal);
% compute sum of coefficients of differences (sum of all matrix elements):
sum_Fvertical = sum(sum(diff_Fvertical(2:m-1, 2:n-1)));
sum_Fhorizontal = sum(sum(diff_Fhorizontal(2:m-1, 2:n-1)));
sum_var_vertical = sum(sum(var_vertical(2:m-1, 2:n-1)));
sum_var_horizontal = sum(sum(var_horizontal(2:m-1, 2:n-1)));
% normalize the result:
norm_Fvertical = (sum_Fvertical - sum_var_vertical)/sum_Fvertical;
norm_Fhorizontal = (sum_Fhorizontal - sum_var_horizontal)/sum_Fhorizontal;
blur = max(norm_Fvertical, norm_Fhorizontal);
end

% BLUR METRICS

%

% A No-Reference Blur Image Quality measure

% Based on Wavelet Transform

%

% Min Goo Choi method

% metric NO 002blur = blurTwo(image)

% original image F:= im2double(image);

[m, n] = size(imageF);

% treshold = 0.1

% compute absolute horizontal difference of image F (HADV):_horizontal = abs(imageF(:, 1:n-2) - imageF(:, 3:n));_horizontal = sum(sum(diff_horizontal));_hor_mean = sum_horizontal /(m*(n-2));

% find edge candidates:= zeros(m,n-2);i=1:mj=1:n-2diff_horizontal(i,j) > diff_hor_mean(i,j) = diff_horizontal(i,j);(i,j) = 0;= zeros(m, n-2);i=1:mj=2:n-3(Ch(i,j) > Ch(i, j-1)) & (Ch(i,j) > Ch(i, j+1))(i,j) = 1;(i,j) = 0;

% compute for vertical:_vertical = abs(imageF(1:m-2, :) - imageF(3:m, :));_vertical = sum(sum(diff_vertical));_vert_mean = sum_vertical /(m*(n-2));= zeros(m-2,n);i=1:m-2j=1:ndiff_vertical(i,j) > diff_vert_mean(i,j) = diff_vertical(i,j);(i,j) = 0;= zeros(m-2, n);i=2:m-3j=1:n(Cv(i,j) > Cv(i-1, j)) & (Cv(i,j) > Cv(i+1, j))(i,j) = 1;(i,j) = 0;

% does detected edge pixel correspond to a blurred edge?_vertical = imageF;j=1:ni=2:m-1_vertical(i,j) = (abs(imageF(i+1, j) + imageF(i-1, j)))/2.0;i = 1_vertical(i,j) = A_vertical(i,j);i = m_vertical(i,j) = A_vertical(i,j);_vertical = (abs(imageF(:,:) - A_vertical(:,:)))./max(A_vertical(:,:),zeros(m,n) + 1e-10);_horizontal = imageF;i=1:mj=2:n-1_horizontal(i,j) = (abs(imageF(i, j-1) + imageF(i, j+1)))/2.0;j = 1_horizontal(i,j) = A_horizontal(i,j);j = n_horizontal(i,j) = A_horizontal(i,j);_horizontal = (abs(imageF(:,:) - A_horizontal(:,:)))./max(A_horizontal(:,:),zeros(m,n) + 1e-10) ;= zeros(m, n);i=1:mj=1:nmax(BR_vertical(i,j), BR_horizontal(i,j)) < .1;(i,j) = 1;(i,j) = 0;

% how many edge and blurred pixels?_cnt_h = nnz(B(:, 2:n-1) .* Eh);_cnt_v = nnz(B(2:m-1, :) .* Ev);_cnt_h = nnz(Eh);_cnt_v = nnz(Ev);_h = blur_cnt_h/edge_cnt_h;_v = blur_cnt_v/edge_cnt_v;= max(ratio_v, ratio_h);contr = contrast(image)

% original image F:= im2double(image);

[m, n] = size(imageF);= [-1, -1, -1, -1, 8, -1, -1, -1]/8;= convn(imageF, kernel, 'same');= mean2(diffImage);= cpp;

Contrast 2

contr2 = contrast2(image)

% original image F:= im2double(image);

[m, n] = size(imageF);= imageF(:); % make it vector= mean(image);= image - m;=norm(image)/sqrt(numel(image));EI = edgeIntens(image)

% original image F:= im2double(image);

[m, n] = size(imageF);= zeros(m,n);= Gx;= Gy;i=2:m-1j=2:n-1= imageF(i+1,j-1);= 2*imageF(i+1,j);= imageF(i+1, j+1);= abs(a1+a2+a3);= imageF(i-1,j);= 2*imageF(i-1,j);=imageF(i-1,j+1);=abs(a4+a5+a6);(i,j) = sum1-sum2;= imageF(i-1,j+1);= 2*imageF(i,j+1);= imageF(i+1, j+1);= abs(a1+a2+a3);= imageF(i-1,j-1);= 2*imageF(i,j-1);=imageF(i+1,j-1);=abs(a4+a5+a6);(i,j) = bum1-bum2;(i,j) = abs(Gx(i,j)^2 + Gy(i,j)^2);= sum(F(:))/(m*n);

% Fractal dimensionfractal = fractal(imageF)= im2double(imageF);

[m, n] = size(imageF);

% const

e= 0.04;

%edge pixels_x = abs(imageF(1:m-2, :) - imageF(3:m, :));

max_edge_x = max(edge_x);_y = abs(imageF(:, 1:n-2) - imageF(:, 3:n));_edge_y = max(edge_y);_pix_x= zeros(m,n);i=2:m-1j=1:nedge_x(i-1,j) > (max_edge_x*e)_pix_x(i,j) = 1;_pix_x(i,j) = 0;_pix_y= zeros(m,n);j=2:n-1i=1:medge_y(i,j-1) > (max_edge_y*e)_pix_y(i,j) = 1;_pix_y(i,j) = 0;

[n,r] = boxcount(edge_pix_x);= -gradient(log(n))./gradient(log(r));= mean(s(2:5));

% Segmentation

% object separability

%segmentation by edgeseparability = separability(image)

% original image F:= im2double(image);

[m, n] = size(imageF);= mean(imageF(:));

% foregroundi=1:mj=1:nimageF(i,j) <= medianF(i,j) = 0;imageF(i,j) = imageF(i,j);= im2double(image);

% backgroundi=1:mj=1:nimageB(i,j) >= medianF(i,j) = 0;imageB(i,j) = imageB(i,j);

%size of segment

[ff,e] = size(nonzeros(imageF));

[bb,e] = size(nonzeros(imageB));

[m, n] = size(imageF);

%avg diff for central pixel in segment= zeros(m,n);i=2:m-1j=2:n-1= imageF(i,j);= imageF(i-1,j-1);= imageF(i-1, j);= imageF(i-1,j+1);= imageF(i,j+1);=imageF(i,j-1);= imageF(i+1,j-1);= imageF(i+1, j);=imageF(i+1,j+1);a1 > 0= [a2,a3,a4,a5,a6,a7,a8,a9];= find(vec>0);

[m,n]=size(ind);= zeros(1,n);k=1:n(1,k) = (vec(ind(k))-a1)^2;(i,j) = (sum(pix))/8;;

%avg diff for central pixel in segment= zeros(m,n);i=2:m-1j=2:n-1= imageB(i,j);= imageB(i-1,j-1);= imageB(i-1, j);= imageB(i-1,j+1);= imageB(i,j+1);=imageB(i,j-1);= imageB(i+1,j-1);= imageB(i+1, j);=imageB(i+1,j+1);a1 > 0= [a2,a3,a4,a5,a6,a7,a8,a9];= find(vec>0);

[m,n]=size(ind);= zeros(1,n);k=1:n(1,k) = (vec(ind(k))-a1)^2;(i,j) = (sum(pix))/8;;= sum(Uk(:))/ff;= sum((:))/bb;= (PkF+PkB)/2;

%average pixel intensity in segment= (sum(nonzeros(imageF)))/ff;

IB = (sum(nonzeros(imageB)))/bb;

G = (IF-IB)^2;= 1/G;= 1000*W+B;= IM;

% Sharpness

% Sharpness Estimation for Document and Scene Images

% metric NO 005sharpness1 = sharpness1(image)

% original image F:

% width of block= 10;

% sharpness threshold= 2;

% edge threshold= 0.09;

%imageF = im2double(image);= rgb2gray(image);

[m, n] = size(imageF);

% median filter for image_med = medfilt2(double(imageF), [3,3]);

% DoM:diff of diff for median filtered image_med_vertical = abs(abs(image_med(3:m-2, :) - image_med(1:m-4, :)) - abs(image_med(3:m-2,:) - image_med(5:m,:)));_med_horizontal = abs(abs(image_med(:, 3:n-2) - image_med(:,1:n-4)) - abs(image_med(:, 3:n-2) - image_med(:, 5:n)));

% diff for original image _vertical = abs(image_med(1:m-1, :) - image_med(2:m, :));_horizontal = abs(image_med(:, 1:n-1) - image_med(:, 2:n));_x = zeros(m,n);i = 4+w:m-3-wj=1+w:n-w= diff_med_vertical(i-w-2:i+w-2, j);= sum(podmatrix(:));_original = diff_vertical(i-w-1:i+w-1, j);= sum(podmatrix_original(:));down == 0_x(i,j) = 1;_x(i,j) = up/(down+1e-10);_y = zeros(m,n);j = 4+w:n-3-wi=1+w:m-w= diff_med_horizontal(i, j-w-2:j+w-2);= sum(podmatrix(:));_original = diff_horizontal(i, j-w-1:j+w-1);= sum(podmatrix_original(:));down == 0_y(i,j) = 1;_y(i,j) = up/(down+1e-10);

%edge pixels_x = abs(imageF(1:m-2, :) - imageF(3:m, :));_edge_x = max(edge_x);_y = abs(imageF(:, 1:n-2) - imageF(:, 3:n));_edge_y = max(edge_y);_pix_x= zeros(m,n);i=2:m-1j=1:nedge_x(i-1,j) > (max_edge_x*e)_pix_x(i,j) = 1;_pix_x(i,j) = 0;_pix_y= zeros(m,n);j=2:n-1i=1:medge_y(i,j-1) > (max_edge_y*e)_pix_y(i,j) = 1;_pix_y(i,j) = 0;_edge_sharp_x = zeros(m,n);i = 4+w:m-3-wj=1+w:n-wedge_pix_x(i,j) == 1 && s_x(i,j) > p_edge_sharp_x(i,j) = 1;_edge_sharp_y = zeros(m,n);j = 4+w:n-3-wi=1+w:m-wedge_pix_y(i,j) == 1 && s_y(i,j) > p_edge_sharp_y(i,j) = 1;_x = sum(matrix_edge_sharp_x(:))/sum(edge_pix_x(:));_y = sum(matrix_edge_sharp_y(:))/sum(edge_pix_y(:));= sqrt(R_x^2 + R_y^2);

%sharpness1 = imshow(matrix_edge_sharp_x);

%sharpness1= imshow(s_x);

% Spectral flatness

% No-Reference Image Quality Metrics for Structural MRI

% original image F:= im2double(image);

[m, n] = size(imageF);= mean(imageF(:));

% backgroundi=1:mj=1:nimageF(i,j) <= medianF(i,j) = 0;imageF(i,j) = imageF(i,j);

%image as 2D signal

% 2D descrete Furier transform= fft2(imageF);

% reshape _v1 = reshape(F, m*n, 1);s = a(x)= abs(x)^2;_v = arrayfun(@a, F_v1(:));_mean = geomean(F_v);_mean = mean2(F_v);

%spectral flatness quality measure= geom_mean / arith_mean;

%spatial flatness quality measure_v1 = reshape(imageF, m*n, 1);_v = arrayfun(@a, I_v1(:));_mean = geomean(I_v);_mean = mean2(I_v);= geom_mean / arith_mean;

%final measure_I = var(imageF(:));= FF * var_I;= FE;

end

C. Finding best linear regression models

function [cBest, fitBest, CombsF] = OM_BestRegression(Y, X, nVars, nSets, nRegressionType)

% Find the best subset of predictor variables X

% containing exactly nVars variables

% to predict the dependent variable Y

% nSets - the number of best regression parameter combinations

% nRegressionType - defines different regression types

% to return in CombsF_trees = Y(1:50,:);_trees = X(1:50,:);_med = Y(50:104,:);_med = X(50:104,:);

% Verify inputnargin < 4= 20;nargin < 5= 2; % least squares

% Define residual function for L1 regression=@(r)mean(abs(r));=@(r)std(r);=@(r)median(abs(r));= optimset('MaxIter', 10000000, 'MaxFunEvals', 1000000);

% Find the sizes of independent variables X

[~, nx] = size(X);

% Initialize all possible combinations of nVars variables= FindBestCombs(Y_trees,X_trees,nVars,50*nVars);= FindBestCombs(Y_med,X_med,nVars,50*nVars);

%Combs = [Combs1;Combs2]= length(Combs);

% If we have too many combinations, branch and bound

% to use 20 best

% nUseBranchAndBound = 1000000;

% if (nRegressionType~=2)

% nUseBranchAndBound = nUseBranchAndBound/10;

% end;

% if(nAllCombs>nUseBranchAndBound)

% fprintf('Using branch-and-bound for speedup\n');

% [~,Combs,~,~] = bbdireg(Y,X,nVars,10*nSets);

% nAllCombs = length(Combs);

% end;

% Find full model regression, using L2 as a baseline_trees = Y_trees-X_trees*(X_trees\Y_trees);_trees = std(r0_trees); _med = Y_med-X_med*(X_med\Y_med);_med = std(r0_med);

% Set combinatorial parameters= [zeros(nAllCombs,4) Combs];= max(1, round(nAllCombs/20));

% Try all possible combinations, find the bestnComb = 1:nAllCombs

% Output current progressmod(nComb, nSteps)==0 && nSteps>50(' %d', round(100*nComb/nAllCombs));;

% Build predictor for the current subset_trees = X_trees(:, Combs(nComb,:));_med = X_med(:, Combs(nComb,:));

% Compute least squares regression L2 asa baseline

[pL2_trees, ~, r_trees, ~, stats_trees] = regress(Y_trees, Xtmp_trees);

[pL2_med, ~, r_med, ~, stats_med] = regress(Y_med, Xtmp_med);

% Compute the required regression typenRegressionType==0 % median regression=fminsearch(@(p)NormLM(Y-Xtmp*p),pL2,opts);= Y-Xtmp*p; = median(abs(r0));= median(abs(r));= 1-Err/median(abs(Y));nRegressionType==1 % L1 regression_trees=fminsearch(@(p)NormL1(Y_trees-Xtmp_trees*p),pL2_trees,opts);_med=fminsearch(@(p)NormL1(Y_med-Xtmp_med*p),pL2_med,opts);_trees = Y_trees-Xtmp_trees*p_trees; _trees = mean(abs(r0_trees));_trees = mean(abs(r_trees));_trees = 1-(Err_trees/mean(abs(Y_trees)));_med = Y_med-Xtmp_med*p_med; _med = mean(abs(r0_med));_med = mean(abs(r_med));_med = 1-(Err_med/mean(abs(Y_med)));

%weights_complex = 1*Err_trees + 0*Err_med;_avg = (R2_trees+R2_med)/2;nRegressionType==5 % L1 regression_trees = mean(abs(r0_trees));_trees = mean(abs(r_trees));_trees = 1-(Err_trees/mean(abs(Y_trees)));_med = mean(abs(r0_med));_med = mean(abs(r_med));_med = 1-(Err_med/mean(abs(Y_med)));

%weights_complex = 1*Err_trees + 0*Err_med;_avg = (R2_trees+R2_med)/2;;

% Record regression statistics(nComb,1) = Err_complex;(nComb,2) = R2_avg;(nComb,3) = R2_med; % R2, similar to stats(1)(nComb,4) = R2_trees+1; % R2, similar to stats(1)

% CombsF(nComb,5) = Err_trees; % Error for this regression type %log10(stats(3)); % p-val, log10 of

% CombsF(nComb,6) = Err_med+1; % Error for this regression type %log10(stats(3)); % p-val, log10 of;('\n');

% Record best predictor sets= sortrows(CombsF,1);= CombsF(1,1);= CombsF(1, 4:nVars+3);= min(size(CombsF,1), 50);

%len = size(CombsF,1);= CombsF(1:len, :);

% Helper function to check ALL combinations and return the

% most promising NCombs. Use it when the number of combinations

% is too large to fit into memoryCombs = FindBestCombs(Y,X,nVars,NCombs)

% Initialize= [];

% Find the sizes of independent variables X

[~, nX] = size(X);(nX<=nVars);;= sum(log10(1:nX)) - sum(log10(1:nVars)) - sum(log10(1:(nX-nVars)));

% If we have realtively small combination count, enumerate all of them(nCombsLog<1)= combnk(1:nX, nVars);;;

% We have lots of combinaitons. Evaluate each one= 10*NCombs;= 1.0e40*ones(NCombsMax, nVars+1);= 0;= nextchoose(nX, nVars);= prod((nX-nVars+1):nX)/prod(1:nVars);= max(1, round(nAll/20));= 0;n=1:nAll-1

% Output current progressmod(n, nSteps)==0 && nSteps>50(' %d', round(100*n/nAll));;

% Find next combination= H();= X(:, com);= std(Y-xt*(xt\Y));(nc>=NCombsMax) % cleanup= sortrows(Combs, 1);= NCombs+1;= 1;(bSorted>0 && nc>NCombs && e>Combs(NCombs,1)); % no improvement;

% OK, good candidate, add= nc+1;(nc, :) = [e com];;

% Cleanup= sortrows(Combs, 1);= Combs(1:NCombs, 2:nVars+1);
