Journal of Materials Research and Technology
Original Article
New approach to evaluate a non-grain oriented electrical steel electromagnetic performance using photomicrographic analysis via digital image processing
Pedro Pedrosa Rebouças Filhoa,*, José Ciro dos Santosa, Francisco Nélio Costa Freitasa, Douglas de Araújo Rodriguesa, Roberto Fernandes Ivoa, Luis Flávio Gaspar Herculanob, Hamilton Ferreira Gomes de Abreub
a Instituto Federal de Educação, Ciência e Tecnologia do Ceará, Programa de Pós-Graduação em Energias Renováveis, Fortaleza, CE, Brazil
b Universidade Federal do Ceará (UFC), Departamento de Engenharia Metalúrgica e de Materiais, Fortaleza, CE, Brazil
Received 14 June 2017, Accepted 06 September 2017
Abstract

The growing global demand for energy makes it necessary to adopt measures ranging from the exploration of new energy sources to the development of machinery and equipment with greater energy efficiency. Non-grain oriented electrical steels are widely used in the construction of the rotors and stators that form the core of electric motors, and their microstructures are directly related to their electromagnetic performance. This paper presents a new, fast and efficient method for classifying the microstructural states of non-grain oriented electrical steel, and hence its electromagnetic performance, using photomicrographic analysis. The study was performed on non-grain oriented electrical steel samples with 1.28% silicon, cold-rolled with reductions of 50% and 70%, box-annealed at 730°C for 12h, and subjected to a subsequent annealing heat treatment for grain growth at 620°C, 730°C, 840°C and 900°C for 1, 10, 100 and 1000min at each temperature. A total of 32 samples were used to form a database with 192 images. Our approach combined feature extractors (GLCM, LBP and moments) with classifiers (Bayes, K-NN, K-means, MLP and SVM), using two data partitioning methods, hold out and leave one out. KNN with 1 neighbor using the GLCM extractor showed the highest accuracy rate, 97.44%, with values greater than 96.0% for the other evaluation metrics. The time required for the test was only 15.4ms. These results establish a new approach to evaluate the electromagnetic performance of non-grain oriented electrical steel.

Keywords
Electrical steel, Electromagnetic performance, Digital image processing, Pattern recognition
1. Introduction

The economic development of a country is strongly correlated with the growth of demand for electric energy. This demand, coupled with serious environmental problems, makes it necessary to adopt measures ranging from the exploration of new energy sources to the development of more energy-efficient machinery and equipment [1,2].

In terms of optimization and efficiency, research and development are focused on reducing electrical losses in the materials used in electrical equipment. Electrical steels are widely used in such equipment and are characterized by a higher percentage of silicon in their chemical composition than most common steels [3]. In addition, this material exhibits greater electrical resistivity, low magnetic losses, and efficient amplification of an externally applied magnetic field.

Steels for electrical purposes are classified into two groups: grain-oriented steels (GO) and non-grain oriented steels (NGO). The first group presents excellent magnetic properties, but its magnetic flux is unidirectional: optimum characteristics are displayed only in the rolling direction, making it ideal for the production of large transformers [4,5]. The second group has magnetic properties that are independent of direction. Its main use is in the construction of the rotors and stators that form the core of electric motors [6,7].

Non-grain oriented steels are divided into two distinct classes: fully-processed and semi-processed [8]. Fully-processed steels are produced with the ideal characteristics for their final application, while semi-processed steels require the end user to perform the heat treatments needed to achieve the desired microstructural characteristics.

The conventional characterization of non-grain oriented electrical steels in terms of their efficiency can be achieved by studying their magnetic hysteresis curves and their microstructural state, through the grain size and crystallographic texture maps.

A relevant portion of the total electric energy consumption can be attributed to magnetic losses in electrical steel [9,10]. The total magnetic losses Pt are divided into three types: parasitic losses Pp, hysteretic losses Ph and anomalous losses Pa, according to Eq. (1).
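In the notation above, Eq. (1) is the sum of the three components:

Pt = Pp + Ph + Pa      (1)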

The parasitic losses can be reduced by using high levels of silicon [11]. This loss is calculated by Eq. (2), where Pp is the parasitic loss, e the sheet thickness, f the test frequency, B the maximum induction of the test, d the density, and ρ the electrical resistivity.
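A form consistent with the variables just listed, assuming the classical thin-sheet eddy-current loss expression, is:

Pp = (π · e · f · B)² / (6 · d · ρ)      (2)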

The hysteretic loss component is usually measured by calculating the internal area of the hysteresis curve [7]. These losses decrease as grain size increases, owing to the interaction between the surfaces of the grain boundaries and the walls of the magnetic domains [11].

Magnetic losses, in general, can be measured by the internal area of the material's hysteresis curve. This curve shows the amount of energy dissipated by the Joule effect during a complete cycle.

The grain size of non-grain oriented electrical steel also provides significant information regarding the energy efficiency of the material [12–15]. Shimanaka et al. [16] show that there is an optimal grain size at which the magnetic losses are minimized, lying between 100μm and 150μm. Fig. 1 illustrates this: with increasing grain size, the hysteresis losses decrease continuously, but the anomalous losses increase, which is the reason for the upper limit.

Fig. 1.

Influence of grain size on the total magnetic losses of electrical steels with 1.85% Si, 2.8% Si and 3.2% Si [16].

The grain characteristics are directly related to the treatments to which the electrical steel is submitted and to its manufacturing procedures. Annealing a metal that has undergone cold rolling causes a phenomenon called recrystallization, in which new grains are formed within the deformed structure of the metal. These grains grow by absorbing the smaller grains, generating a new structure; this effect is known as primary recrystallization. Continued annealing of this material can lead to even greater grain growth, causing abnormal grain growth of the already recrystallized structure, known as secondary recrystallization.

Burgers [17] states that, for the same time and temperature conditions in the annealing heat treatment applied to electrical steels, the more heavily deformed rolled steel shows a smaller grain size after primary recrystallization. The crystallographic texture is another parameter that carries information about the magnetic efficiency of electrical steels; it describes the distribution of crystal orientations in a polycrystalline material. Distribution Functions of Crystalline Orientations (DFCOs) are used to obtain a more complete texture description, including information about the plane, the direction, and the volume of each orientation [18]. Normally, the steel texture is represented using only the ϕ2 = 45° section, since this section contains the main information about the crystallographic planes and directions of the material.

The texture of NGO electrical steels does not have a single component but several components, known as fiber textures. The ideal texture is the fiber that is easiest to magnetize, in this case the 〈100〉 fiber. However, an industrially viable process to develop this ideal fiber texture is still under study. Research aims to produce a 〈100〉//DN texture, where DL is the rolling direction and DN is the normal direction of the sheet, with the corresponding fiber being {100}〈0vw〉. Since this is not yet achievable, manufacturers of NGO electrical steels use a fiber around the Goss component (110)[001], which belongs to the 〈110〉//DN fiber. This is the texture that gives GO steel its excellent magnetic permeability in the rolling direction [19].

When the efficiency of an NGO steel needs to be evaluated, a professional experienced in metallurgical and materials engineering must carry out a careful analysis of all the relevant parameters. Manual analysis, however, is a complex task: it is subjective, laborious and prone to failures caused by fatigue, repetitive actions and analytical error. Commercial software can be used to analyze the microstructure and magnetic properties of metals, but its cost is high, making low-budget academic and experimental projects impractical.

Our objective is to present a computational study based on Digital Image Processing (DIP) to classify the microstructural state of a non-grain oriented electrical steel containing 1.28% silicon, submitted to thermal treatments for grain growth. The approach uses digital images of the microstructure and builds on the established interaction between magnetic properties and microstructure, in which grain size has a strong effect on the magnetic losses of this material.

2. Feature extraction

Feature extraction consists of extracting significant features from the regions of interest in the image, obtained through segmentation and post-processing or, as in this study, directly from the original grayscale images [20]. These features evidence the differences and similarities between the objects in the images. Characteristics such as size, shape, texture, position, curvature or the occurrence of certain geometric shapes are measured. These measures yield quantitative data that are used to discriminate each image in the classification process through machine learning [21]. The extractors used in this study are described below.

Central Moments make it possible to describe the shape of an object. Their main reference in the formation of the attributes is the center of gravity of the object, which makes them invariant to translation, though still dependent on scale and rotation [22,23].

Statistical Moments perform the extraction based on the distribution of the gray levels of the image, usually calculated from the histogram. These characteristics provide a statistical description of the relationship between the gray levels [24].

Hu's Moments, also known as Invariant Moments, perform recognition regardless of orientation, position and size. The theory proposed by Ming-Kuei Hu [24] shows a way of describing plane geometric figures from their two-dimensional moments [25–27].

Haralick [28] proposed a feature extraction method based on image texture. Texture is an innate property of virtually all surfaces and contains important information about their structural arrangement and their relationship to the surrounding environment. The extractor proposed by Haralick [28] uses measures derived from the Gray Level Co-occurrence Matrix (GLCM), which is the basis for the statistical measures known as Haralick Descriptors [29,30].
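The authors' implementation is not reproduced in the paper. As an illustration only, a minimal C++ sketch of the idea (C++ with OpenCV being the environment reported in Section 5.2) builds a normalized co-occurrence matrix for a single offset and derives three classic Haralick-style measures; the 8-level quantization and the (1,0) offset are arbitrary choices:

#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <vector>

// Build a normalized gray-level co-occurrence matrix for one pixel offset
// (dx, dy) after quantizing the image to `levels` gray levels, then derive
// three classic Haralick-style descriptors from it.
std::vector<double> glcmFeatures(const cv::Mat& gray, int levels = 8,
                                 int dx = 1, int dy = 0) {
    CV_Assert(gray.type() == CV_8UC1);
    std::vector<double> P(levels * levels, 0.0);
    double total = 0.0;
    for (int y = 0; y < gray.rows; ++y)
        for (int x = 0; x < gray.cols; ++x) {
            int ny = y + dy, nx = x + dx;
            if (ny < 0 || ny >= gray.rows || nx < 0 || nx >= gray.cols)
                continue;
            int i = gray.at<uchar>(y, x) * levels / 256;   // quantized level
            int j = gray.at<uchar>(ny, nx) * levels / 256;
            P[i * levels + j] += 1.0;
            total += 1.0;
        }
    for (double& p : P) p /= total;                        // probabilities
    double contrast = 0.0, energy = 0.0, homogeneity = 0.0;
    for (int i = 0; i < levels; ++i)
        for (int j = 0; j < levels; ++j) {
            double p = P[i * levels + j];
            contrast    += (i - j) * (i - j) * p;
            energy      += p * p;
            homogeneity += p / (1.0 + std::abs(i - j));
        }
    return {contrast, energy, homogeneity};
}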

Ojala et al. [31] proposed a method for representing textures based on local binary codes, called LBP (Local Binary Patterns). The LBP operator assigns a label to every pixel of an image by thresholding the 3×3 neighborhood of each pixel with the center pixel value and reading the result as a binary number. The histogram of the labels can then be used as a texture descriptor. To describe textures at several scales, the LBP operator works with different kernels.
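A minimal sketch of the basic 3×3 operator, again in C++ with OpenCV and again only an illustration (the 256-bin histogram shown here is the simplest variant; the paper's 48-attribute LBP necessarily uses a different configuration):

#include <opencv2/opencv.hpp>
#include <vector>

// Basic 3x3 LBP: threshold the 8 neighbors of each pixel against the center
// value, read the resulting bits as one byte, and histogram the 256 codes.
std::vector<double> lbpHistogram(const cv::Mat& gray) {
    CV_Assert(gray.type() == CV_8UC1);
    static const int dy[8] = {-1, -1, -1, 0, 0, 1, 1, 1};
    static const int dx[8] = {-1, 0, 1, -1, 1, -1, 0, 1};
    std::vector<double> hist(256, 0.0);
    for (int y = 1; y < gray.rows - 1; ++y)
        for (int x = 1; x < gray.cols - 1; ++x) {
            uchar c = gray.at<uchar>(y, x);
            int code = 0;
            for (int k = 0; k < 8; ++k)
                code |= (gray.at<uchar>(y + dy[k], x + dx[k]) >= c) << k;
            hist[code] += 1.0;
        }
    double n = double(gray.rows - 2) * (gray.cols - 2);    // labeled pixels
    for (double& h : hist) h /= n;                         // normalize
    return hist;
}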

3. Pattern recognition

The constant development of computational tools and resources has made the design and development of complex classification and pattern recognition systems possible for a wide range of purposes, such as facial recognition [32], as well as uses in medicine [33,34] and materials science [35,36].

The classifiers used in this study are described below.

3.1. K-nearest neighbors

K-nearest neighbors (KNN) evaluates patterns that are neighbors in the feature space, under the assumption that objects with similar characteristics belong to the same class. For the similarity calculation, distance metrics between the training and test samples are used.

The similarity analysis in this classifier can consider K neighbors, where K is a free parameter defined beforehand by the programmer. Each neighbor considered indicates a class; when K is greater than 1, different classes may be indicated for the same test point. The classifier then identifies the most frequent class among the K neighbors and allocates the unknown sample to that class [37,38].
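A minimal sketch of this voting rule in C++, assuming Euclidean distance and dense feature vectors (an illustration, not the implementation used in the paper):

#include <algorithm>
#include <map>
#include <vector>

// Majority vote over the K nearest training samples in Euclidean distance
// (squared distances preserve the ordering, so the square root is skipped).
// Assumes K does not exceed the number of training samples.
int knnPredict(const std::vector<std::vector<double>>& trainX,
               const std::vector<int>& trainY,
               const std::vector<double>& query, int K = 1) {
    std::vector<std::pair<double, int>> dist;              // (distance, label)
    for (size_t i = 0; i < trainX.size(); ++i) {
        double d = 0.0;
        for (size_t j = 0; j < query.size(); ++j) {
            double diff = trainX[i][j] - query[j];
            d += diff * diff;
        }
        dist.push_back({d, trainY[i]});
    }
    std::partial_sort(dist.begin(), dist.begin() + K, dist.end());
    std::map<int, int> votes;                              // label -> count
    for (int k = 0; k < K; ++k) ++votes[dist[k].second];
    int best = -1, bestCount = -1;
    for (const auto& v : votes)
        if (v.second > bestCount) { best = v.first; bestCount = v.second; }
    return best;
}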

3.2. K-means

K-means belongs to the group of unsupervised classification methods [39]. It classifies the data according to the information in the data itself, without the need for any pre-classification or supervision, through the concept of clusters. A cluster is a finite set of samples in which each sample is closer, or more similar, to the centroid of its own cluster than to any centroid outside it.

In the K-means algorithm, the samples are initially assigned to the cluster with the nearest centroid. The clusters are then reorganized and new centroids are determined. This process is repeated until, in the end, the data are associated with k patterns and fixed centroids. Each centroid receives the label of the class with the most occurrences among its associated patterns. When an unseen sample is presented, K-means finds the nearest centroid and assigns the label of this centroid to the sample [40].
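OpenCV 3.0, the library reported in Section 5.2, exposes this algorithm as cv::kmeans; whether the authors used this routine or their own is not stated. A minimal sketch with toy two-dimensional data:

#include <opencv2/opencv.hpp>
#include <iostream>

// Cluster feature vectors (one per row, CV_32F) into 3 groups; `labels`
// receives the index of the nearest learned centroid for each row.
int main() {
    cv::Mat samples = (cv::Mat_<float>(6, 2) <<
        0.1f, 0.2f,  0.0f, 0.3f,    // two points near the origin
        5.0f, 5.1f,  5.2f, 4.9f,    // two points in a second group
        9.8f, 9.9f, 10.1f, 9.7f);   // two points in a third group
    cv::Mat labels, centers;
    cv::kmeans(samples, 3, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                100, 1e-4),
               5, cv::KMEANS_PP_CENTERS, centers);
    std::cout << labels << std::endl;   // one cluster index per sample
    return 0;
}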

3.3. Bayes classifier

The Bayes classifier is based on statistical techniques. It estimates the probability that a given sample belongs to each of the possible classes and predicts the most probable class for the sample, that is, the class that received the highest of these probabilities [41,42].

The Bayesian theorem uses the conditional probability P(x|ci), which describes the samples belonging to each class. It also uses the a priori probability P(ci), obtained from the number of samples of each class i relative to the total number of samples, and the probability of occurrence of the samples themselves, P(x). The probability of a sample belonging to a class, P(ci|x), is calculated by Eq. (3), called the Bayes Rule. Through this equation, the prior probability is updated in light of new evidence to obtain the posterior probability.
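In this notation, the Bayes Rule (Eq. (3)) is:

P(ci|x) = P(x|ci) · P(ci) / P(x)      (3)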

3.4. Support vector machine

The support vector machine (SVM) is a part of the supervised classification algorithm group and is also based on the Statistical Learning Theory [43].

In SVMs, the objective is to obtain a function that minimizes the probability that the output produced by the machine differs from the desired output. The generated function defines linear or non-linear boundaries separating a binary data set. Although the SVM was formulated to solve problems with only two classes, approaches such as the one-against-one and one-against-all techniques allow the resolution of multiclass problems [44].

Different from the other classifiers, SVMs work to find the best boundary, or hyperplane, between the classes, so that the different classes are separated with the maximum possible margin. This hyperplane is defined with the help of patterns found during training, called support vectors.

In situations where the data are not linearly separable, the separation surface is obtained through the kernel trick and depends only on the kernel function used.
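As an illustration of this setup, a minimal two-class SVM with an RBF kernel using the ml module of OpenCV 3.x (the library reported in Section 5.2; the toy data and the C and gamma values are arbitrary choices, not the paper's parameters):

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

// Minimal two-class SVM with an RBF kernel; toy data with two well-separated
// groups, so the predicted label for a nearby query should be class 1.
int main() {
    using namespace cv::ml;
    cv::Mat X = (cv::Mat_<float>(4, 2) << 0, 0,  0, 1,  10, 10,  10, 11);
    cv::Mat y = (cv::Mat_<int>(4, 1) << 0, 0, 1, 1);
    cv::Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::RBF);
    svm->setC(1.0);                       // soft-margin penalty
    svm->setGamma(0.5);                   // RBF kernel width
    svm->train(X, ROW_SAMPLE, y);
    cv::Mat query = (cv::Mat_<float>(1, 2) << 9, 9);
    std::cout << svm->predict(query) << std::endl;   // expected: 1
    return 0;
}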

3.5. Multi-layer perceptron (MLP)

Pattern recognition using multi-layer perceptrons (MLPs) is applied to solve problems in which the classes are not linearly separable, which is the most common situation in real applications. For non-linear classification, this method offers advantages over statistical classifiers.

This method uses artificial neural networks (ANNs) with two or more layers of neurons, since single-layer ANNs can only solve linearly separable problems. Structurally, these networks are divided into an input layer, one or more hidden layers, and an output layer [45]. Hidden layers are introduced to increase the network's ability to model complex functions.

First, the training step is performed, in which the synaptic weights are adjusted by a network training algorithm. This algorithm trains the network so that the presented classes can be separated and, consequently, the desired outputs obtained. Then, the test step is performed, consisting of the recognition of patterns presented to the network that were not in the training set.
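A minimal sketch of such a network with OpenCV's ANN_MLP (one hidden layer and one-hot target vectors, as the module requires; the XOR data, layer sizes and learning parameters are illustrative only and do not correspond to the configurations in Section 5.2):

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <iostream>

// Tiny MLP (2 inputs, 4 hidden neurons, 2 outputs) trained by
// backpropagation on XOR; targets are one-hot, one column per class.
int main() {
    using namespace cv::ml;
    cv::Mat X = (cv::Mat_<float>(4, 2) << 0, 0,  0, 1,  1, 0,  1, 1);
    cv::Mat Y = (cv::Mat_<float>(4, 2) << 1, 0,  0, 1,  0, 1,  1, 0);
    cv::Ptr<ANN_MLP> mlp = ANN_MLP::create();
    mlp->setLayerSizes((cv::Mat_<int>(3, 1) << 2, 4, 2));
    mlp->setActivationFunction(ANN_MLP::SIGMOID_SYM, 1.0, 1.0);
    mlp->setTrainMethod(ANN_MLP::BACKPROP, 0.1, 0.1);
    mlp->setTermCriteria(cv::TermCriteria(
        cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 10000, 1e-6));
    mlp->train(X, ROW_SAMPLE, Y);
    cv::Mat out;
    mlp->predict((cv::Mat_<float>(1, 2) << 1, 0), out);
    std::cout << out << std::endl;   // larger value marks the predicted class
    return 0;
}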

4. Confusion matrix and evaluation metrics

The confusion matrix is a very effective tool for evaluating the efficiency of a computational classifier; its function is to validate the supervised learning.

The confusion matrix compares the classes predicted for the test set with the true classes of the samples. The matrix indicates the percentages of correct and incorrect classification: the values on the main diagonal indicate the hits, while the remaining entries correspond to the classification errors.
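A minimal sketch of how such a matrix is accumulated (in C++, as an illustration; here with raw counts rather than the percentages reported in Table 3):

#include <vector>

// Accumulate an N x N confusion matrix: rows index the true class, columns
// the predicted class, so the main diagonal counts the hits.
std::vector<std::vector<int>> confusionMatrix(const std::vector<int>& truth,
                                              const std::vector<int>& pred,
                                              int nClasses) {
    std::vector<std::vector<int>> cm(nClasses, std::vector<int>(nClasses, 0));
    for (size_t i = 0; i < truth.size(); ++i)
        ++cm[truth[i]][pred[i]];
    return cm;
}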

The metrics used to evaluate the classifiers are calculated using data extracted from the matrix. These metrics are accuracy, PPV (positive predictive values), sensitivity and f-score. They are calculated, respectively, by Eqs. (4)–(7).
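In their standard forms, consistent with the definitions below, these metrics are:

Accuracy = (TP + TN) / (TP + TN + FP + FN)      (4)

PPV = TP / (TP + FP)      (5)

Sensitivity = TP / (TP + FN)      (6)

f-score = 2 · (PPV · Sensitivity) / (PPV + Sensitivity)      (7)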

where TP, FN, TN and FP are the numbers of true-positives, false-negatives, true-negatives and false-positives, respectively.

TP and FN represent, respectively, the numbers of samples of a given class that are correctly and incorrectly classified. TN is the number of samples that do not belong to a given class and are classified as not belonging to it. FP is the number of samples incorrectly assigned to a given class.

The accuracy indicates the number of correct predictions. This metric is computed as the ratio of the number of correctly classified samples to the total number of samples analyzed, and is one of the most important measures when deciding which classifier to use.

Sensitivity is the ratio of correctly classified samples of a class to the total number of samples of that class. The PPV metric, also known as precision, indicates the proportion of samples classified as belonging to a class that actually belong to it. The f-score can be interpreted as the harmonic mean of sensitivity and PPV.

5. Materials and methods

In this section, the procedures, the methodology employed and the materials used are presented, including the chemical composition of the material and details of the metallographic preparation of the samples. First, we present photomicrographs of the three classes studied and describe the acquisition of these images, as well as the feature extractors and the data selection methods for training and testing. Finally, the classifiers and validation metrics used are briefly presented. The methodology adopted in this study is shown in Fig. 2.

Fig. 2.

The procedures adopted in this study.

5.1. Materials

In this study, the materials correspond to portions of semi-processed NGO electrical steel sheets with 1.28% silicon, measuring 60mm×40mm. The chemical composition of the steel was 0.05% C, 0.29% Mn, 1.28% Si, 0.025% P, 0.014% S and 0.036% Al.

The samples were cold rolled to reduce the thickness by 50% and 70% and box-annealed at 730°C for 12h. The subsequent microstructural states, up to secondary recrystallization, were obtained by subjecting the samples to a further heat treatment at 620°C, 730°C, 840°C and 900°C for 1, 10, 100 and 1000min at each temperature.

5.2. Methods

The samples were submitted to heat treatment, after which the metallographic preparations were performed. First, the samples were hot mounted in Bakelite, then ground and polished. These procedures aim, respectively, to facilitate sample handling, to remove surface imperfections of the material and to leave a polished, mark-free surface. Chemical etching was then carried out to reveal the grains and their boundaries in the electrical steel microstructure; a Nital solution was applied for approximately 5s.

The metallographic images were collected using an optical microscope (Zeiss) with digital image acquisition at an original magnification of 100×. In this study, 32 different samples were used and, for each sample, 6 different images were collected, making a total of 192 images, all with a resolution of 2436×2042 pixels.

The samples were divided into 3 distinct classes to train the classifiers. This division was based on the evolution of the microstructural state, characterized by grain growth.

Compared with the microstructural state of the samples annealed at 730°C for 12h, class 0 exhibited no changes in its microstructure. Class 1 showed considerable grain growth in relation to class 0; these are grains with normal growth generated by primary recrystallization. The samples from class 2 showed abnormal grain growth, characteristic of the secondary recrystallization phenomenon.

Fig. 3 shows examples from the metallographic images of the three classes studied. These images clearly show the differences in the microstructure and the grain sizes that compose them.

Fig. 3.

Electrical steel samples, where (a), (b), (c) correspond to class 0; (d), (e), (f) to class 1; and (g), (h), (i) to class 2.

Then, feature extraction was applied. The extractors used and the respective numbers of attributes were: Central Moments (7), Statistical Moments (10), Hu Moments (7), GLCM (14), and LBP (48).

The data from the extractors were divided into two distinct groups, one for the training set and one for the test set, and the assignment to the groups was performed by cross-validation. In this study, we used two distinct data partitioning methods: Hold Out and Leave One Out.

The Hold Out method is the simplest kind of cross-validation: the data are assigned to each set at random, with the percentage of data destined for each group defined by the user. In Leave One Out, the division is performed by class: a part of the data from each class is reserved for the test set, and the rest is kept for the training set.

In this study, the extracted data were divided into two sets, with 50% of each class's data selected for training and 50% for testing.
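A minimal sketch of such a random 50/50 split over sample indices (illustrative; in the study the split is taken within each class, so the routine would be applied to the indices of one class at a time, and the seed handling is not specified):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Shuffle the sample indices and split them in half: one half for training,
// the other for testing, as in the 50/50 hold-out used here.
void holdOutSplit(std::size_t nSamples, std::vector<std::size_t>& train,
                  std::vector<std::size_t>& test, unsigned seed = 42) {
    std::vector<std::size_t> idx(nSamples);
    std::iota(idx.begin(), idx.end(), 0);                  // 0, 1, ..., n-1
    std::mt19937 rng(seed);
    std::shuffle(idx.begin(), idx.end(), rng);
    std::size_t half = nSamples / 2;
    train.assign(idx.begin(), idx.begin() + half);
    test.assign(idx.begin() + half, idx.end());
}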

KNN was set up with K values of 1, 3 and 5; K-means was set up with three groups; the Bayes classifier was validated using a normal probability distribution; in the SVM model, four different kernel types (linear, polynomial, radial basis function [RBF] and sigmoid) were used; in the MLP, the number of neurons in the hidden layer varied according to the number of features extracted: GLCM, LBP, Hu Moments, Central Moments and Statistical Moments used 80, 35, 25, 90 and 250 neurons, respectively.

The validation of the applied models was performed using different evaluation metrics: accuracy, sensitivity, PPV and f-score. All methodologies were implemented in C/C++ using Visual Studio 2012.

In addition, this study used OpenCV 3.0, and the computational process was performed on a computer running Mac OS X El Capitan 10.11.2, with an Intel Core i5 processor at 2.4GHz and 8GB of RAM. Moreover, the proposed computer vision method was compared with a conventional analysis carried out by two experts.

6. Results

The results of this study are divided into two subsections: first, the metallurgical results are presented, then the computational results.

6.1. Metallurgical results

Fig. 4a is a photomicrograph corresponding to class 0, which has small recrystallized grains. This class consists of samples that underwent thickness reductions of 50% and 70% and were submitted to annealing heat treatments at 620°C and 730°C, respectively. There are no significant changes in the microstructure in relation to the material without treatment. In this case, the resulting grain size for class 0 was much smaller than the ideal.

Fig. 4.

Photomicrographs of samples from classes 0, 1 and 2, and their respective DFCOs used for the complete texture description.

The crystallographic texture of this sample (Fig. 4a) can be analyzed in Fig. 4b. This figure presents the DFCOs in the ϕ2 = 45° section, according to Bunge's notation; it shows only the formation of the γ fiber (〈111〉//DN), without the formation of the Goss component.

Fig. 4c shows a sample image from class 1; the photomicrograph reveals considerable grain growth in relation to class 0, with uniform and equiaxed grains. In this class, the increase in grain size is directly related to the increase in annealing temperature and time.

In this class, the grain size resulting from annealing the material laminated with a 50% thickness reduction was greater than that of the material laminated with a 70% reduction, confirming the studies of Burgers [17].

The DFCOs in the ϕ2 = 45° section for this sample are shown in Fig. 4d. The analysis of its crystallographic texture again reveals only the formation of the γ fiber (〈111〉//DN), without the Goss component that characterizes secondary recrystallization.

Fig. 4e shows an image from a class 2 sample. The samples of this class present the secondary recrystallization phenomenon, brought about by the increase in annealing temperature and time. The samples of this class also validate the studies in [17].

The crystallographic texture of these samples can be analyzed in Fig. 4f, where their DFCOs present the emergence of the component (110) [001] known as Goss. The formation of this component coincides with the onset of secondary recrystallization.

6.2. Computational results

Figs. 5–8 show the computational results using the Hold Out data partition: the mean values and standard deviations of the accuracy, f-score, PPV and sensitivity, respectively.

Fig. 5.

Accuracy – hold out.

Fig. 6.

f-Score – hold out.

Fig. 7.

PPV – hold out.

Fig. 8.

Sensitivity – hold out.

The best results were obtained with GLCM and LBP. This is because these extractors describe the image texture, unlike the extractors that depend on the shapes of the objects, such as the Hu, Central and Statistical Moments.

The graph in Fig. 5 indicates that the highest accuracy was obtained using the KNN-1 classifier with the GLCM extractor and the Hold Out data partition. A rate of 97.44% was achieved for the accuracy, and this configuration also had excellent results for the f-score with 98.55%, PPV with 96.51%, and sensitivity with 96.30%, as shown in Figs. 6–8, respectively.

Accuracy values above 90% were also obtained using the GLCM extractor with the SVM RBF-kernel, KNN-3 and MLP-2 classifiers, as well as using the LBP extractor with the KNN-1 classifier. Classifications with accuracy above 90% also showed values above 90% for the other metrics, except for the sensitivity of the KNN-1 classifier with the LBP extractor, which was 89.60%.

Figs. 9–12 show, respectively, the mean and standard deviation values of the accuracy, f-score, PPV and sensitivity obtained with the Leave One Out method.

Fig. 9.

Accuracy – leave one out.

Fig. 10.

f-Score – leave one out.

Fig. 11.

PPV – leave one out.

Fig. 12.

Sensitivity – leave one out.

Fig. 9 shows that the highest accuracies were obtained by MLP-1 with 86.44% and MLP-2 with 82.64%, both using the GLCM extractor data. However, these values are lower than those obtained using Hold Out.

With Leave One Out, the evaluation metrics using GLCM and LBP reached higher values than those achieved with the other extractors, confirming the superiority of texture-based extractors for the study in question. In general, the low f-score, PPV and sensitivity values obtained with the Hu, Central and Statistical Moments data show the low reliability of the accuracy rates presented by the classifiers under Leave One Out.

The classifier results with Hold Out and Leave One Out showed some agreement: the classifiers that used texture-based data obtained the highest values in the evaluation metrics. Between the two partitioning methods, Hold Out reached the highest accuracy values, which is confirmed by the f-score metric.

Fig. 13 shows box diagrams of the extraction times. The average time for each extractor is indicated by a red line inside the box. Note that the LBP extractor has the lowest average value and the smallest time dispersion; the central part of the LBP box is much narrower than the others. The LBP extractor, even with a larger number of extracted features, was 3.4 times faster than the GLCM extractor.

Fig. 13.

Extraction time.

The times to train and test the classifiers using Hold Out and Leave One Out are presented in Tables 1 and 2. According to Figs. 5–12, the best accuracy with Hold Out was obtained using the KNN-1 classifier and the GLCM, with 97.44%, while the best result for the same metric using Leave One Out was presented by MLP-1 and the GLCM, with 86.44%. Again, the classifier that used the data partitioned by Hold Out had a large advantage, performing training and testing 144 times faster than the classifier with the best accuracy under the other partitioning method.

Table 1.

Time in milliseconds to train and test the classifiers using hold out.

Feature  Classifier  Setup  Training time (ms)  Test time (ms)  Total time (ms)
GLCM  Bayes  Normal  3.8  4.8  8.6
GLCM  SVM  Linear  8.2  –  8.2
GLCM  SVM  RBF  10.4  0.8  11.2
GLCM  SVM  Polynomial  8.8  0.8  9.6
GLCM  SVM  Sigmoid  12.2  1.2  13.4
GLCM  MLP  Conf. 1  4108.6  1.2  4109.8
GLCM  MLP  Conf. 2  3312.8  1.2  3314
GLCM  KNN  N=1  7.2  15.4  22.6
GLCM  KNN  N=3  –  2.6  2.6
GLCM  KNN  N=5  –  0.2  0.2
GLCM  K-means  –  3.8  4.8  8.6
LBP  Bayes  Normal  274.2  9.4  283.6
LBP  SVM  Linear  9.2  0.2  9.4
LBP  SVM  RBF  7.6  2.2  9.8
LBP  SVM  Polynomial  9.8  –  10.8
LBP  SVM  Sigmoid  3.8  1.2  –
LBP  MLP  Conf. 1  3460.2  1.8  3462
LBP  MLP  Conf. 2  6217  4.6  6222.2
LBP  KNN  N=1  0.2  6.6  6.8
LBP  KNN  N=3  0.2  1.0  1.2
LBP  KNN  N=5  0.0  1.8  1.8
LBP  K-means  –  4.8  3.8  8.6
Central M.  Bayes  Normal  62.54  141.45  203.99
Central M.  SVM  Linear  12,413.6  76  12,489.6
Central M.  SVM  RBF  20,046.801  23,837.6  43,884.398
Central M.  SVM  Polynomial  7768.2  9233.8  17,001.4
Central M.  SVM  Sigmoid  29,552  23,860  53,412
Central M.  MLP  Conf. 1  156,830.203  374.2  157,204.406
Central M.  MLP  Conf. 2  122,489.797  382.8  122,872.594
Central M.  KNN  N=1  6.333  3,102,581  3,102,587.25
Central M.  KNN  N=3  –  3,226,295.25  3,226,300.25
Central M.  KNN  N=5  –  3,295,157  3,295,162
Central M.  K-means  –  43.91  36,978.25  37,022.16
Hu M.  Bayes  Normal  74.805  166.451  241.256
Hu M.  SVM  Linear  15,946.6  90.8  16,037.399
Hu M.  SVM  RBF  16,612.6  17,199.199  33,811.797
Hu M.  SVM  Polynomial  5470.4  4080.4  9550.8
Hu M.  SVM  Sigmoid  35,113  28,303.199  63,416.199
Hu M.  MLP  Conf. 1  511,205.594  447.6  511,653.188
Hu M.  MLP  Conf. 2  346,479.594  452  346,931.594
Hu M.  KNN  N=1  6.333  4,298,305  4,298,311.5
Hu M.  KNN  N=3  6.333  4,440,274  4,440,280.5
Hu M.  KNN  N=5  –  4,518,090.5  4,518,096.5
Hu M.  K-means  –  51.42  12,053.67  12,105.09
Statistical M.  Bayes  Normal  133.496  224.991  358.487
Statistical M.  SVM  Linear  13,190.4  93.8  13,284.2
Statistical M.  SVM  RBF  25,281  27,834  53,115
Statistical M.  SVM  Polynomial  6355.8  4197.8  10,553.6
Statistical M.  SVM  Sigmoid  39,253.398  31,429  70,682.398
Statistical M.  MLP  Conf. 1  22,886,948.812  463.2  287,412
Statistical M.  MLP  Conf. 2  251,243.522  498.7  251,742.222
Statistical M.  KNN  N=1  –  4,697,294.5  4,697,302.5
Statistical M.  KNN  N=3  7.333  4,703,470.5  4,703,478
Statistical M.  KNN  N=5  –  4,704,359.5  4,704,366.5
Statistical M.  K-means  –  82.33  47,061.92  47,144.25
Table 2.

Time in milliseconds to train and test the classifiers using leave one out.

Feature  Classifier  Setup  Training time (ms)  Test time (ms)  Total time (ms)
GLCM  Bayes  Normal  12.8  9.4  22.2
GLCM  SVM  Linear  8.8  0.6  9.4
GLCM  SVM  RBF  11.6  1.0  12.6
GLCM  SVM  Polynomial  12.4  0.6  13.0
GLCM  SVM  Sigmoid  4.6  1.8  6.4
GLCM  MLP  Conf. 1  3264.2  1.2  3265.4
GLCM  MLP  Conf. 2  2541.0  1.2  2542.2
GLCM  KNN  N=1  0.2  7.0  7.2
GLCM  KNN  N=3  0.4  0.6  1.0
GLCM  KNN  N=5  1.0  2.4  3.4
GLCM  K-means  –  3.8  4.8  8.6
LBP  Bayes  Normal  147.6  12.8  160.4
LBP  SVM  Linear  13.0  0.4  13.4
LBP  SVM  RBF  8.4  2.2  10.6
LBP  SVM  Polynomial  15.2  1.0  16.2
LBP  SVM  Sigmoid  7.0  2.2  9.2
LBP  MLP  Conf. 1  2210.0  1.8  2211.8
LBP  MLP  Conf. 2  4640.4  –  4642.4
LBP  KNN  N=1  0.2  8.6  8.8
LBP  KNN  N=3  0.0  1.4  1.4
LBP  KNN  N=5  0.2  1.4  1.6
LBP  K-means  –  4.8  3.8  8.6
Central M.  Bayes  Normal  67.14  135.80  199.94
Central M.  SVM  Linear  12,464.2  77.0  12,541.2
Central M.  SVM  RBF  19,840.6  23,411.6  43,252.199
Central M.  SVM  Polynomial  7665.2  9875.6  17,540.801
Central M.  SVM  Sigmoid  29,500.199  22,983.6  52,483.797
Central M.  MLP  Conf. 1  230,502.203  366.0  230,868.203
Central M.  MLP  Conf. 2  397,162.594  371.8  397,534.406
Central M.  KNN  N=1  5.4  3,094,500.5  3,094,506.0
Central M.  KNN  N=3  4.2  3,212,955.25  3,212,959.5
Central M.  KNN  N=5  4.2  3,278,905.5  3,278,909.75
Central M.  K-means  –  43.91  36,978.25  37,022.16
Hu M.  Bayes  Normal  72.665  159.477  232.132
Hu M.  SVM  Linear  14,435.2  91.4  14,526.601
Hu M.  SVM  RBF  17,941.199  18,098.0  36,039.199
Hu M.  SVM  Polynomial  5108.4  2840.6  7949.0
Hu M.  SVM  Sigmoid  35,201.602  27,293.0  32,494.602
Hu M.  MLP  Conf. 1  114,089.797  434.6  114,524.398
Hu M.  MLP  Conf. 2  258,006.594  443.2  258,449.797
Hu M.  KNN  N=1  5.4  4,298,112.5  4,298,117.9
Hu M.  KNN  N=3  5.0  4,437,221.0  4,437,226.0
Hu M.  KNN  N=5  5.2  4,515,119.0  4,515,124.2
Hu M.  K-means  –  51.42  12,053.67  12,105.09
Statistical M.  Bayes  Normal  141.584  226.214  367.798
Statistical M.  SVM  Linear  12,431.6  93.2  12,524.8
Statistical M.  SVM  RBF  25,554.4  28,241.801  53,796.203
Statistical M.  SVM  Polynomial  6322.8  3952.8  10,275.6
Statistical M.  SVM  Sigmoid  39,431.0  31,745.4  71,176.4
Statistical M.  MLP  Conf. 1  23,945,515.812  457.8  345,973.625
Statistical M.  MLP  Conf. 2  815,079.188  455.2  815,534.375
Statistical M.  KNN  N=1  7.667  4,725,178.0  4,725,185.667
Statistical M.  KNN  N=3  7.0  4,707,717.5  4,707,724.5
Statistical M.  KNN  N=5  7.0  4,708,149.0  4,708,156.0
Statistical M.  K-means  –  82.33  47,061.92  47,144.25

Table 3 shows the confusion matrices for the combinations of classifiers and extractors that presented the best accuracy results with the Hold Out and Leave One Out cross-validation methods. Note that the evaluated classifiers obtained relevant percentages of correct classifications in relation to the total number of samples used. These percentages, combined with the low error percentages for each class, indicate the high performance of these classifiers.

Table 3.

Confusion matrix percentage of GLCM and LBP.

Confusion matrix – HOLD OUT (values in %)
MLP 1 – GLCM | MLP 2 – GLCM | KNN 1 – GLCM | KNN 3 – GLCM
Class 0:  42.11  4.21  0.00 | 43.16  2.63  0.00 | 44.74  0.00  0.00 | 42.11  0.00  0.00
Class 1:  2.63  27.89  2.63 | 1.58  27.37  0.53 | 0.00  31.58  0.00 | 2.63  31.58  0.00
Class 2:  0.00  2.11  18.42 | 0.00  4.21  20.53 | 0.00  2.63  21.05 | 0.00  2.63  21.05

KNN 5 – GLCM | SVM RBF – GLCM | SVM POLY – LBP | KNN 1 – LBP
Class 0:  42.11  5.26  0.00 | 44.74  2.63  2.63 | 39.47  2.63  2.63 | 39.47  5.26  0.00
Class 1:  2.63  23.68  0.00 | 0.00  31.58  0.00 | 5.26  28.95  2.63 | 2.63  28.95  0.00
Class 2:  0.00  5.26  21.05 | 0.00  0.00  18.42 | 0.00  2.63  15.79 | 2.63  0.00  21.05

Confusion matrix – LEAVE ONE OUT (values in %)
MLP 1 – GLCM | MLP 2 – GLCM | MLP 2 – LBP | KNN 1 – LBP
Class 0:  34.21  15.79  0.00 | 33.68  16.84  0.00 | 42.63  16.32  0.00 | 44.74  13.16  5.26
Class 1:  10.53  18.42  8.42 | 11.05  16.84  7.89 | 2.11  17.89  13.68 | 0.00  21.05  5.26
Class 2:  0.00  0.00  12.63 | 0.00  0.53  13.16 | 0.00  0.00  7.37 | 0.00  0.00  10.53

SVM LINEAR – LBP | MLP 1 – STAT. MOM. | MLP 2 – STAT. MOM. | SVM POLY – STAT. MOM.
Class 0:  42.11  10.53  0.00 | 81.85  17.21  0.94 | 81.85  17.21  0.94 | 81.85  17.21  0.94
Class 1:  2.63  23.68  5.26 | 0.00  0.00  0.00 | 0.00  0.00  0.00 | 0.00  0.00  0.00
Class 2:  0.00  0.00  15.79 | 0.00  0.00  0.00 | 0.00  0.00  0.00 | 0.00  0.00  0.00
7. Conclusion

This study presented an innovative methodology for classifying NGO electrical steels in terms of their electromagnetic efficiency using only photomicrographic analyses. A high percentage of accuracy and reliability was obtained, along with the important contribution of a rapid and precise automation of this classification process. All the classifications were supervised, and the classifiers were trained on the basis of the hysteresis curves and crystallographic texture of the material.

The best result, in terms of classification correctness, was obtained using the GLCM extractor combined with the KNN classifier with 1 nearest neighbor and the Hold Out data partition. The evaluation metrics for this model were 97.44% for accuracy, 96.71% for f-score, 98.21% for precision, and 96.3% for sensitivity. The training time was 7.2ms and the testing time 15.4ms, totaling 22.6ms, a considerable reduction in time compared to the classification procedure of a conventional analysis.

Based on the results, we can conclude that the proposed procedure is effective, within the accepted tolerance range, for academic use by students, engineers, researchers and specialists working in renewable energy with a focus on energy efficiency, in engineering and materials science, or as a case study in the field of Computer Vision. It is a viable, reliable and fast option for obtaining accurate results when classifying NGO electrical steels.

Conflicts of interest

The authors declare no conflicts of interest.

References
[1]
H. Zuo,D. Ai
Environment, energy and sustainable economic growth
http://www.sciencedirect.com/science/article/pii/S187770581104879X
[2]
P. Laha,B. Chakraborty
Energy model – a tool for preventing energy dysfunction
Renew Sustain Energy Rev, 73 (2017), pp. 95-114 http://dx.doi.org/10.1016/j.rser.2017.01.106
http://www.sciencedirect.com/science/article/pii/S1364032117301193
[3]
J. Qin,P. Yang,W. Mao,F. Ye
Effect of texture and grain size on the magnetic flux density and core loss of cold-rolled high silicon steel sheets
J Magn Magn Mater, 393 (2015), pp. 537-543 http://dx.doi.org/10.1016/j.jmmm.2015.06.032
http://www.sciencedirect.com/science/article/pii/S030488531530247X
[4]
F. Qiu,W. Ren,G.Y. Tian,B. Gao
Characterization of applied tensile stress using domain wall dynamic behavior of grain-oriented electrical steel
J Magn Magn Mater, 432 (2017), pp. 250-259 http://dx.doi.org/10.1016/j.jmmm.2017.01.076
http://www.sciencedirect.com/science/article/pii/S0304885316317383
[5]
K. Chwastek,A.P.S. Baghel,P. Borowik,B.S. Ram,S.V. Kulkarni
Loss separation in chosen grades of grain-oriented steel
2016 progress in applied electrical engineering (PAEE), pp. 1-6 http://dx.doi.org/10.1109/PAEE.2016.7605105
[6]
O. Hubert,L. Daniel,R. Billardon
Experimental analysis of the magnetoelastic anisotropy of a non-oriented silicon iron alloy
Proceedings of the 15th international conference on soft magnetic materials (SMM15), 254–255 (2003), pp. 352-354 http://dx.doi.org/10.1016/S0304-8853(02)00850-8
http://www.sciencedirect.com/science/article/pii/S0304885302008508
[7]
F.E. da Silva,F.N.C. Freitas,H.F.G. Abreu,L.L. Gonçalves,E.P. de Moura,M.R. Silva
Characterization of the evolution of recrystallization by fluctuation and fractal analyses of the magnetic hysteresis loop in a cold rolled non-oriented electric steel
J Mater Sci, 46 (2011), pp. 3282-3290 http://dx.doi.org/10.1007/s10853-010-5215-8
[8]
E.J. Hilinski,G.H. Johnston
Annealing of electrical steel
2014 4th international electric drives production conference (EDPC), pp. 1-7 http://dx.doi.org/10.1109/EDPC.2014.6984385
[9]
Z. Xia,Y. Kang,Q. Wang
Developments in the production of grain-oriented electrical steel
J Magn Magn Mater, 320 (2008), pp. 3229-3233 http://dx.doi.org/10.1016/j.jmmm.2008.07.003
http://www.sciencedirect.com/science/article/pii/S0304885308007695
[10]
S. Sorrell
Reducing energy demand: a review of issues, challenges and approaches
Renew Sustain Energy Rev, 47 (2015), pp. 74-82 http://dx.doi.org/10.1016/j.rser.2015.03.002
http://www.sciencedirect.com/science/article/pii/S1364032115001471
[11]
F. Bohn,A. Gündel,F. Landgraf,A. Severino,R. Sommer
Magnetostriction in non-oriented electrical steels
LAW3M-05 proceedings of the seventh Latin American workshop on magnetism, magnetic materials and their applications, 384 (2006), pp. 294-296 http://dx.doi.org/10.1016/j.physb.2006.06.014
http://www.sciencedirect.com/science/article/pii/S0921452606013317
[12]
H.-T. Liu,Y.-P. Wang,L.-Z. An,Z.-J. Wang,D.-Y. Hou,J.-M. Chen
Effects of hot rolled microstructure after twin-roll casting on microstructure, texture and magnetic properties of low silicon non-oriented electrical steel
J Magn Magn Mater, 420 (2016), pp. 192-203 http://dx.doi.org/10.1016/j.jmmm.2016.07.034
http://www.sciencedirect.com/science/article/pii/S0304885316314639
[13]
J. Salinas-Beltrán,A. Salinas-Rodríguez,E. Gutiérrez-Castañeda,R.D. Lara
Effects of processing conditions on the final microstructure and magnetic properties in non-oriented electrical steels
J Magn Magn Mater, 406 (2016), pp. 159-165 http://dx.doi.org/10.1016/j.jmmm.2016.01.017
http://www.sciencedirect.com/science/article/pii/S0304885316300166
[14]
F. Fang,Y.-B. Xu,Y.-X. Zhang,Y. Wang,X. Lu,R. Misra
Evolution of recrystallization microstructure and texture during rapid annealing in strip-cast non-oriented electrical steels
J Magn Magn Mater, 381 (2015), pp. 433-439 http://dx.doi.org/10.1016/j.jmmm.2015.01.026
http://www.sciencedirect.com/science/article/pii/S0304885315000372
[15]
G. Bertotti
Connection between microstructure and magnetic properties of soft magnetic materials
Proceedings of the 18th international symposium on soft magnetic materials, 320 (2008), pp. 2436-2442 http://dx.doi.org/10.1016/j.jmmm.2008.04.001
http://www.sciencedirect.com/science/article/pii/S030488530800382X
[16]
H. Shimanaka,Y. Ito,K. Matsumara,B. Fukuda
Recent development of non-oriented electrical steel sheets
J Magn Magn Mater, 26 (1982), pp. 57-64 http://dx.doi.org/10.1016/0304-8853(82)90116-0
http://www.sciencedirect.com/science/article/pii/0304885382901160
[17]
W. Burgers
Principles of recrystallization
The art and science of growing crystals, Wiley, (1963)pp. 416-450
[18]
K.R. Chwastek,A.P.S. Baghel,M.F. de Campos,S.V. Kulkarni,J. Szczygłowski
A description for the anisotropy of magnetic properties of grain-oriented steels
IEEE Trans Magn, 51 (2015), pp. 1-5 http://dx.doi.org/10.1109/TMAG.2015.2449775
[19]
W.A. Pluta
Loss components in electrical steel with Goss texture
2013 international symposium on electrodynamic and mechatronic systems (SELM), pp. 87-88 http://dx.doi.org/10.1109/SELM.2013.6562993
[20]
P.P.R. Filho,F.D.L. Moreira,F.G.d.L. Xavier,S.L. Gomes,J.C.d. Santos,F.N.C. Freitas
New analysis method application in metallographic images through the construction of mosaics via speeded up robust features and scale invariant feature transform
Materials, 8 (2015), pp. 3864-3882 http://dx.doi.org/10.3390/ma8073864
http://www.mdpi.com/1996-1944/8/7/3864
[21]
A.K. Jain,R.P.W. Duin,J. Mao
Statistical pattern recognition: a review
IEEE Trans Pattern Anal Mach Intell, 22 (2000), pp. 4-37 http://dx.doi.org/10.1109/34.824819
[22]
L. Xiankang,G. Meiguo,F. Xiongjun
Application of HRRP even rank central moments features in satellite target recognition
2007 IET international conference on radar systems, pp. 1-4 http://dx.doi.org/10.1049/cp:20070620
[23]
R. Lai,X. Liu,F. Ohkawa
A fast template matching algorithm based on central moments of images
2008 international conference on information and automation, pp. 596-600 http://dx.doi.org/10.1109/ICINFA.2008.4608069
[24]
M.-K. Hu
Visual pattern recognition by moment invariants
IRE Trans Inf Theory, 8 (1962), pp. 179-187 http://dx.doi.org/10.1109/TIT.1962.1057692
[25]
J. Flusser,T. Suk,J. Boldys,B. Zitova
Projection operators and moment invariants to image blurring
IEEE Trans Pattern Anal Mach Intell, 37 (2015), pp. 786-802 http://dx.doi.org/10.1109/TPAMI.2014.2353644
[26]
G. Licciardi,A. Villa,M. Dalla Mura,L. Bruzzone,J. Chanussot,J. Benediktsson
Retrieval of the height of buildings from worldview-2 multi-angular imagery using attribute filters and geometric invariant moments
IEEE J Sel Top Appl Earth Observ Remote Sens, 5 (2012), pp. 71-79 http://dx.doi.org/10.1109/JSTARS.2012.2184269
[27]
J. Yang,N. Xiong,A. Vasilakos,Z. Fang,D. Park,X. Xu
A fingerprint recognition scheme based on assembling invariant moments for cloud computing communications
IEEE Syst J, 5 (2011), pp. 574-583 http://dx.doi.org/10.1109/JSYST.2011.2165600
[28]
R. Haralick
Statistical and structural approaches to texture
Proc IEEE, 67 (1979), pp. 786-804 http://dx.doi.org/10.1109/PROC.1979.11328
[29]
G.L.B. Ramalho,D.S. Ferreira,P.P. Rebouças Filho,F.N.S. de Medeiros
Rotation-invariant feature extraction using a structural co-occurrence matrix
Measurement, 94 (2016), pp. 406-415
[30]
E.C. Neto,P.C. Cortez,T.S. Cavalcante,V.E.R. da Silva Filho,P.P.R. Filho,M.A. Holanda
Supervised enhancement filter applied to fissure detection
Springer International Publishing, (2015)pp. 337-340 http://dx.doi.org/10.1007/978-3-319-13117-7-87
[31]
T. Ojala,M. Pietikäinen,D. Harwood
A comparative study of texture measures with classification based on featured distributions
Pattern Recognit, 29 (1996), pp. 51-59 http://dx.doi.org/10.1016/0031-3203(95)00067-4
http://www.sciencedirect.com/science/article/pii/0031320395000674
[32]
G. Guo,S.Z. Li,K. Chan
Face recognition by support vector machines
Proceedings fourth IEEE international conference on automatic face and gesture recognition (Cat. No. PR00580), pp. 196-201 http://dx.doi.org/10.1109/AFGR.2000.840634
[33]
C.D.A. Vanitha,D. Devaraj,M. Venkatesulu
Gene expression data classification using support vector machine and mutual information-based gene selection
Proc Comput Sci, 47 (2015), pp. 13-21 http://dx.doi.org/10.1016/j.procs.2015.03.178
http://www.sciencedirect.com/science/article/pii/S1877050915004469
[34]
T.S. Furey,N. Cristianini,N. Duffy,D.W. Bednarski,M. Schummer,D. Haussler
Support vector machine classification and validation of cancer tissue samples using microarray expression data
[35]
J.P. Papa,V.H.C. de Albuquerque,A.X. Falcão,J.M.R.S. Tavares
Fast automatic microstructural segmentation of ferrous alloy samples using optimum-path forest
[36]
V.H.C. de Albuquerque,A.R. de Alexandria,P.C. Cortez,J.M.R. Tavares
Evaluation of multilayer perceptron and self-organizing map neural network topologies applied on microstructure segmentation from metallographic images
NDT E Int, 42 (2009), pp. 644-651 http://dx.doi.org/10.1016/j.ndteint.2009.05.002
http://www.sciencedirect.com/science/article/pii/S0963869509000899
[37]
T. Cover,P. Hart
Nearest neighbor pattern classification
IEEE Trans Inf Theory, 13 (1967), pp. 21-27 http://dx.doi.org/10.1109/TIT.1967.1053964
[38]
Z. Deng,X. Zhu,D. Cheng,M. Zong,S. Zhang
Efficient knn classification algorithm for big data
Neurocomputing, 195 (2016), pp. 143-148 http://dx.doi.org/10.1016/j.neucom.2015.08.112
Learning for Medical Imaging. http://www.sciencedirect.com/science/article/pii/S0925231216001132
[39]
M. Capó,A. Pérez,J.A. Lozano
An efficient approximation to the k-means clustering for massive data
Knowl-Based Syst, 117 (2017), pp. 56-69 http://dx.doi.org/10.1016/j.knosys.2016.06.031
Volume, Variety and Velocity in Data Science. http://www.sciencedirect.com/science/article/pii/S0950705116302027
[40]
E. Lee,M. Schmidt,J. Wright
Improved and simplified in approximability for k-means
Inf Process Lett, 120 (2017), pp. 40-43 http://dx.doi.org/10.1016/j.ipl.2016.11.009
http://www.sciencedirect.com/science/article/pii/S0020019016301739
[41]
P. Domingos,M. Pazzani
On the optimality of the simple bayesian classifier under zero-one loss
Mach Learn, 29 (1997), pp. 103-130 http://dx.doi.org/10.1023/A:1007413511361
[42]
M.J. Islam,Q.M.J. Wu,M. Ahmadi,M.A. Sid-Ahmed
Investigating the performance of naive bayes classifiers and k-nearest neighbor classifiers
2007 international conference on convergence information technology (ICCIT 2007), pp. 1541-1546 http://dx.doi.org/10.1109/ICCIT.2007.148
[43]
V.N. Vapnik
An overview of statistical learning theory
IEEE Trans Neural Netw, 10 (1999), pp. 988-999 http://dx.doi.org/10.1109/72.788640
[44]
K.-B. Duan,S.S. Keerthi
Which is the best multiclass SVM method? An empirical study
Springer, (2005)pp. 278-285 http://dx.doi.org/10.1007/11494683-28
[45]
D.W. Ruck,S.K. Rogers,M. Kabrisky,M.E. Oxley,B.W. Suter
The multilayer perceptron as an approximation to a bayes optimal discriminant function
IEEE Trans Neural Netw, 1 (1990), pp. 296-298 http://dx.doi.org/10.1109/72.80266
* Corresponding author: Pedro Pedrosa Rebouças Filho (pedrosarf@gmail.com).
Copyright © 2017. Brazilian Metallurgical, Materials and Mining Association