Classifying colour differences in dyed fabrics using an improved hunger games search optimised random vector functional link

This study proposes an algorithm for classifying colour differences in dyed fabrics using a random vector functional link (RVFL) network optimised by an improved hunger games search (HGS) algorithm, to replace inefficient traditional classification methods. First, to prevent the HGS algorithm from easily converging to a local optimal solution, we used the grey wolf optimiser (GWO) to generate the initial solution set of the HGS algorithm. Subsequently, to reduce the impact of the randomness of the input weights and hidden layer offsets on the classification accuracy of RVFL, we used the improved HGS to optimise these two parameters of RVFL. Finally, the RVFL optimised using the improved HGS algorithm is used for classifying the colour differences of dyed fabrics. The performance of the proposed classification algorithm is compared with that of HGS variants improved using the whale optimisation algorithm, sine cosine algorithm and Harris hawks optimiser. The results revealed that the proposed algorithm achieves the smallest maximum, minimum and average classification errors, good stability and fast convergence.


Introduction
The colour quality of dyed fabrics is an important index for product inspection in textile printing and dyeing enterprises. The traditional methods for classifying colour differences in dyed fabrics mainly depend on manual classification by experienced and skilled workers. This approach is highly subjective and easily influenced by the visual fatigue of the classifier; therefore, it cannot meet the requirements of the automatic production of dyed fabrics. A model for classifying colour differences in dyed fabrics is therefore required to establish the relationship between the colour feature information and the evaluation index. Using this model, a reasonable classification can be performed according to the colour feature information.
With the emergence of machine vision and intelligent learning algorithms, classification algorithms for the colour differences of dyed fabrics based on intelligent learning have been used to build high-precision models that meet the requirements of automatic production.
Recently, many scientists have studied the colour differences of dyed fabrics to improve the automatic production of these fabrics. 1,2 For example, Wang et al. 3 proposed a method for producing qualified dyed fabrics with unqualified dyed yarns. Xin et al. 4 proposed a method to classify the colour textures of coloured fabrics based on a double-sided back propagation neural network and a co-occurrence matrix. Li et al. 5 used a Levenberg–Marquardt-optimised backpropagation algorithm to detect the L*a*b* values of the image of a fabric sample and subsequently combined them with the colour difference of the fabric, which was calculated using a formula. Zhang and Yang 6 applied a support vector machine model optimised using a genetic algorithm to evaluate the consistency of colour and uniformity of dyeing according to the measured characteristics of the colour differences. Wang et al. 7 proposed a proofing strategy with a standard yarn arrangement to detect warp alignment errors by analysing different types of these errors, in an attempt to develop an automatic warp colour detection system. Xie et al. 8 adopted the image pyramid principle to downsample images for correcting the illuminance of the images of warp-knitted fabrics, thereby improving the real-time online detection of the colour differences of these fabrics. Barua et al. 9 used a deep learning model to identify defects such as holes, missing yarn, yarn breakage and dyeing defects that may occur during the production of coloured fabrics. The application of neural networks to investigate the colour differences in fabrics provides a method for improving the production efficiency of these fabrics. However, most neural networks suffer from long training times, and deep learning requires numerous training samples and complicated calculations. [10][11][12] To shorten the training time of traditional neural networks for detecting the colour differences of dyed fabrics, Zhou et al. 13 dynamically selected parameters and applied a differential evolution algorithm to optimise the parametric combination of a regularised extreme learning machine model, which increased the classification accuracy pertaining to colour differences. Li et al. 14 employed an improved grasshopper optimiser to tune the bandwidth and penalty parameters of a kernel extreme learning machine to achieve a superior classification of colour differences. The application of optimisation algorithms improved the classification accuracy of neural networks. In recent years, novel optimisation algorithms have been proposed, such as the grey wolf optimiser (GWO), 15 sine cosine algorithm (SCA), 16 whale optimisation algorithm (WOA), 17 marine predators algorithm, 18 Harris hawks optimisation (HHO), 19,20 hunger games search (HGS) 21 and dragonfly algorithm, 22 which have been widely used in research.
The learning ability of deep learning models is very strong; however, their design is complex, the number of calculations required is large and the requirements for hardware and cost are extremely high. Unlike deep learning algorithms, the random vector functional link (RVFL) network does not require high-performance computing hardware and has obvious advantages in computational volume. RVFL 23 randomly sets the weights of the hidden and input layers during the training process, and its direct links from the input to the output combine the original features with the nonlinear hidden-layer features to enhance the generalisation ability of the network. Based on the aforementioned studies, this study proposes a classification method for the colour differences of dyed fabrics using an improved-HGS-optimised RVFL. The main contributions are as follows:
1. An HGS algorithm improved using GWO is proposed. We used the GWO algorithm to determine the initial population of the HGS algorithm. Because the locations of the population search agents are close to that of the optimal solution after this preprocessing, the HGS can largely avoid converging to a local optimal solution.
2. The improved HGS algorithm is used to optimise the hidden layer offset and input weight of RVFL. These two parameters are the decisive factors affecting the classification performance and network stability of RVFL.
3. The optimised RVFL is used for the classification of colour differences of dyed fabrics, and the stability, convergence and significance of the proposed GWO-HGS-RVFL model are analysed.

RVFL network
In the network proposed by Pao et al., 23 during classification, N is the number of input samples, d is the dimension of the input data, y_i is the label of the ith sample and L is the number of nodes in the hidden layer. The RVFL network can be expressed as follows:

f(x_i) = \sum_{j=1}^{L} \beta_j g(W_j \cdot x_i + b_j) + \sum_{k=1}^{d} \beta_{L+k} x_{ik}, \quad i = 1, \ldots, N \qquad (1)

where \beta = [\beta_1, \beta_2, \ldots, \beta_{L+d}]^T is the output weight and W_j and b_j are the randomly generated parameters of RVFL. The network aims to reduce the output deviation as follows:

\sum_{i=1}^{N} \| f(x_i) - y_i \| = 0 \qquad (2)

Equations (1) and (2) can be directly expressed as:

\sum_{j=1}^{L} \beta_j g(W_j \cdot x_i + b_j) + \sum_{k=1}^{d} \beta_{L+k} x_{ik} = y_i, \quad i = 1, \ldots, N \qquad (3)

Equation (3) can be expressed in matrix form as follows:

H \beta = Y \qquad (4)

where \beta and Y represent the output weight and expected output, respectively, and H concatenates the original input features with the hidden-layer outputs, H = [X \; G] with G_{ij} = g(W_j \cdot x_i + b_j). Furthermore, \beta = H^{\dagger} Y can be applied to obtain the output weight, where H^{\dagger} represents the Moore–Penrose generalised inverse of matrix H.
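As a concrete illustration, the closed-form RVFL training described above can be sketched in Python with NumPy. This is a minimal sketch, not the authors' implementation: the function names, the uniform weight range [−1, 1] and the sigmoid ('sig') activation are illustrative assumptions.

```python
import numpy as np

def rvfl_train(X, Y, L, seed=0):
    """Minimal RVFL sketch: random hidden parameters, closed-form output weights.

    X: (N, d) inputs; Y: (N, c) one-hot targets; L: number of hidden nodes.
    Returns (W, b, beta) so that predictions are H(X) @ beta.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    W = rng.uniform(-1.0, 1.0, size=(d, L))   # random input weights (kept fixed)
    b = rng.uniform(-1.0, 1.0, size=L)        # random hidden-layer offsets
    G = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden-layer outputs
    H = np.hstack([X, G])                     # direct links + hidden features
    beta = np.linalg.pinv(H) @ Y              # beta = H† Y (Moore–Penrose inverse)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    G = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.hstack([X, G]) @ beta
```

The closed-form solve is what makes RVFL training fast compared with iterative backpropagation: only the output weights are learned, in one pseudo-inverse computation.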

GWO algorithm
The GWO algorithm 15 uses mathematical modelling of the hierarchy among wolves, as well as their encircling, tracking and attacking behaviour towards other animals, to mathematically express the lives of grey wolves and their methods of capturing prey. Grey wolves usually encircle their prey, and this behaviour can be expressed in the following mathematical form:

\vec{D} = | \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) |

\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}

where t is the current iteration, \vec{A} and \vec{C} are coefficient vectors, \vec{X}_p is the position vector of the prey and \vec{X} is the position vector of the grey wolf. The coefficient vectors are computed as \vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} and \vec{C} = 2\vec{r}_2, where \vec{r}_1 and \vec{r}_2 are random vectors in [0, 1] and the components of \vec{a} decrease linearly from 2 to 0 over the iterations.
Grey wolves find a specific prey and surround it. The hunt is usually commanded by the alpha wolf, whereas the beta and delta wolves may also participate and help the remaining search agents improve their positions. The position of a search agent is updated as follows:

\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot | \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} |, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot | \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} |, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot | \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} |

\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}

When the prey stops moving, the grey wolf attacks it to complete the hunt. To model approaching the prey, \vec{a} must be continuously reduced, which also decreases the fluctuation range of \vec{A}.
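The encircling and leader-guided position update can be sketched as a single GWO step. This is a hedged illustration, not the authors' code; the function name `gwo_step` and the array layout are assumptions.

```python
import numpy as np

def gwo_step(X, leaders, a, rng):
    """One GWO position update.

    X: (n, d) current wolf positions; leaders: (3, d) alpha, beta, delta
    positions; a: scalar that decreases from 2 to 0 over the run.
    """
    n, d = X.shape
    new_X = np.empty_like(X)
    for i in range(n):
        cand = np.zeros(d)
        for Xl in leaders:                      # alpha, beta, delta in turn
            A = 2.0 * a * rng.random(d) - a     # A = 2a * r1 - a
            C = 2.0 * rng.random(d)             # C = 2 * r2
            D = np.abs(C * Xl - X[i])           # encircling distance to leader
            cand += Xl - A * D                  # candidate guided by this leader
        new_X[i] = cand / 3.0                   # average of the three candidates
    return new_X
```

Note that when `a` reaches 0 the update collapses onto the mean of the three leaders, which is the exploitative end of the search.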

HGS algorithm
The HGS algorithm 21 can be used for global search and optimisation while avoiding local optimal solutions. The activities and behaviours of a hungry animal can be represented by the following mathematical model:

X(t+1) = X(t) \cdot (1 + randn(1)), \quad r_1 < l
X(t+1) = W_1 \cdot X_b + R \cdot W_2 \cdot | X_b - X(t) |, \quad r_1 > l, \; r_2 > E
X(t+1) = W_1 \cdot X_b - R \cdot W_2 \cdot | X_b - X(t) |, \quad r_1 > l, \; r_2 < E

where R is a random number in [−a, a], r_1 and r_2 are stochastic numbers in [0, 1], t is the current iteration, W_1 and W_2 are the current starvation weights, X_b represents the global optimal position, X(t) represents the current individual position and l is a set constant. Additionally, E can be calculated as follows:

E = sech(| F(i) - BF |)

where F(i) is the fitness of the ith individual, BF is the current best fitness and sech() is the hyperbolic secant function. Furthermore, R can be calculated as follows:

R = 2 \times a \times rand - a, \quad a = 2 \times (1 - t / Max\_iter)

where rand is a random number in [0, 1] and Max_iter is the number of loops for core optimisation. To mathematically model hunger in an individual, W_1 is calculated as follows:

W_1(i) = hungry(i) \times N / SHungry \times r_4, \; if \; r_3 < l; \quad W_1(i) = 1, \; otherwise

Additionally, W_2 is calculated as follows:

W_2(i) = (1 - \exp(-| hungry(i) - SHungry |)) \times r_5 \times 2

where hungry(i) is the degree of starvation of the ith unit, N is the total number of units, SHungry represents the sum of the degrees of hunger of all the individuals and r_3, r_4 and r_5 are random numbers in [0, 1]. Additionally, hungry(i) can be represented as follows:

hungry(i) = 0, \; if \; AllFitness(i) = BF; \quad hungry(i) = hungry(i) + H, \; otherwise

where AllFitness(i) represents the fitness of the ith individual. Finally, H is calculated as follows:

TH = \frac{F(i) - BF}{WF - BF} \times r_6 \times 2 \times (UB - LB), \quad H = LH \times (1 + rand), \; if \; TH < LH; \quad H = TH, \; otherwise

where r_6 is a random number in [0, 1], F(i) represents the fitness of the individual, WF is the present worst fitness value, UB and LB represent the upper and lower bounds of the search space, respectively, and LH represents the lower bound of H.
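The variation control E and the hunger weights W_1 and W_2 described above can be sketched as follows. This is an illustrative reading of the standard HGS formulas; the function names and the small constant guarding against division by zero are assumptions.

```python
import numpy as np

def variation_control(F_i, BF):
    """E = sech(|F(i) - BF|): fitness close to the best gives E near 1."""
    x = abs(F_i - BF)
    return 2.0 / (np.exp(x) + np.exp(-x))

def hunger_weights(hungry, l=0.03, rng=None):
    """Starvation weights W1 and W2 for a population's hunger levels.

    hungry: (N,) degrees of starvation; l: the set constant of HGS.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    N = hungry.size
    s_hungry = hungry.sum()                          # SHungry
    r3, r4, r5 = rng.random(N), rng.random(N), rng.random(N)
    # W1: hunger-scaled with probability governed by r3 < l, otherwise 1
    W1 = np.where(r3 < l, hungry * N / (s_hungry + 1e-12) * r4, 1.0)
    # W2: bounded in [0, 2), saturating with the hunger gap
    W2 = (1.0 - np.exp(-np.abs(hungry - s_hungry))) * r5 * 2.0
    return W1, W2
```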

HGS algorithm improved using GWO
The HGS algorithm possesses excellent features: its classification accuracy after convergence and its convergence speed are both high, and it can search for the optimal solution globally. However, the mathematical model of approaching the prey reveals that random numbers and constants influence whether an individual approaches the optimum, as well as the speed of this approach. Consequently, the HGS algorithm can still become trapped in a local optimal solution when seeking the best solution, and the search for the optimal solution can be slow. In contrast, the GWO algorithm retains the current optimal solution, as well as the second and third alternative solutions, owing to its strict hierarchical system. This enhances the ability of the algorithm to continue searching globally even if the current optimal solution falls within a local area, because the top three solutions jointly guide the search towards the global optimum. Therefore, GWO is first applied to optimise the HGS algorithm and provide a suitable initial position for it. After the GWO stage, the population search agents are ranked by fitness, and all of them are transferred to the HGS algorithm to initialise its population. At this point, the positions of the search agents are already close to the optimal solution, which effectively prevents the solution from falling within a local area.
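The two-stage initialisation described above can be sketched as a short GWO warm-up whose final, fitness-ranked agents would then seed the HGS population. This is a self-contained sketch, not the authors' implementation; the function name, agent count and warm-up length are illustrative assumptions.

```python
import numpy as np

def gwo_warmup_population(f, lb, ub, n_agents=30, warmup_iter=15, seed=0):
    """Run a short GWO search and return its final agents ranked by fitness.

    The ranked population (best first) is intended as the initial
    population of a subsequent HGS run. f: objective to minimise;
    lb, ub: box bounds of the search space.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    fit = np.array([f(x) for x in X])
    for t in range(warmup_iter):
        idx = np.argsort(fit)
        leaders = X[idx[:3]].copy()             # alpha, beta, delta
        a = 2.0 * (1.0 - t / warmup_iter)       # decreases linearly from 2
        for i in range(n_agents):
            cand = np.zeros(dim)
            for Xl in leaders:
                A = 2.0 * a * rng.random(dim) - a
                C = 2.0 * rng.random(dim)
                cand += Xl - A * np.abs(C * Xl - X[i])
            X[i] = np.clip(cand / 3.0, lb, ub)  # average of the three guides
            fit[i] = f(X[i])
    order = np.argsort(fit)
    return X[order], fit[order]                 # best-first: hand to HGS as init
```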

Optimised RVFL model for classifying colour differences
Hidden layer offset and input weight are decisive factors that affect the classification performance and stability of RVFL. Therefore, the randomness of these two parameters affects the classification accuracy of RVFL when classifying the colour differences of dyed fabrics. To reduce this effect, the HGS algorithm improved using GWO is applied to optimise these parameters: each individual in the population encodes a candidate hidden layer offset and input weight of RVFL. After the optimisation loop, the position that yields the best classification is located, and the hidden layer offset and input weight are assigned accordingly to improve the stability and classification accuracy of RVFL. The classification error rate of RVFL on the test set is used as the fitness function of the optimisation algorithm:

F = \frac{1}{N} \sum_{i=1}^{M} T_i

where F and N are the classification error rate and the number of samples in the test set, respectively, T_i is the number of classification errors in category i and M is the total number of categories. The proposed GWO-HGS-RVFL model for classifying the colour differences of dyed fabrics is shown in Figure 1. The initial population directly influences the convergence rate of HGS and the ability of the algorithm to avoid local optimal solutions. The following steps are employed by GWO-HGS-RVFL to classify the colour differences in dyed fabrics:
1. Use high-precision industrial cameras to obtain the images required for the experiment and change the image format.
2. Calculate the colour differences within the images of the dyed fabrics, classify the dyed fabrics according to the colour differences and combine the features to generate the dataset required for the experiment.
3. Initialise the population of the optimisation algorithm, whose dimension is determined by the input weights and hidden layer offsets of RVFL, where L and n are the numbers of hidden layer and input layer nodes, respectively. Next, set the maximum number of iterations to T.
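The fitness function, i.e. the test-set classification error rate, can be sketched as follows. The helper name `fitness` and the integer label encoding are assumptions for illustration.

```python
import numpy as np

def fitness(y_true, y_pred):
    """Classification error rate F = (1/N) * sum_i T_i over the M categories.

    y_true, y_pred: arrays of category labels for the N test samples.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    N = y_true.size
    # T_i: number of misclassified samples belonging to category i
    T = [int(np.sum((y_true == c) & (y_pred != c))) for c in np.unique(y_true)]
    return sum(T) / N
```

Because summing the per-category errors T_i simply counts every misclassified sample once, F equals the overall misclassification rate, which the optimiser minimises.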

Experimental setup
The dataset is acquired using the device displayed in Figure 2: a high-precision colour industrial camera captures images of the fabrics under standard light sources. The choice of the light source is crucial for classifying the colour differences of dyed fabrics because an inappropriate light source can cause colour distortion. Since this study classifies the colour differences of dyed cotton and polyester fabrics, the D65, D50 and A light sources were chosen, which are the three most common light sources in the textile industry. The obtained images are transferred to a computer for processing to obtain the relevant dataset, and the colour difference is calculated from the L*a*b* values of the standard and sample images. The diagram of the device used in this experiment is shown in Figure 2, and the classification of the colour differences is shown in Table 1.
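The exact colour-difference formula used in the experiment is not reproduced here. Assuming the common CIE76 definition in the CIELAB space, ΔE*ab = sqrt((ΔL*)² + (Δa*)² + (Δb*)²), the calculation can be sketched as follows; the function name and the CIE76 choice are assumptions, since other formulas (e.g. CMC or CIEDE2000) are also used in the textile industry.

```python
import math

def delta_e_cie76(lab_std, lab_sample):
    """CIE76 colour difference between a standard and a sample.

    lab_std, lab_sample: (L*, a*, b*) triples.
    """
    dL, da, db = (s - t for s, t in zip(lab_sample, lab_std))
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)
```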
The acquisition process of the dataset is described as follows:

Arithmetic parameter selection
Selecting the activation function. The choice of the activation function of the RVFL neural network affects its accuracy. To maximise the accuracy of the experimental results, the number of nodes in the hidden layer of the network is set to 50, and the RVFL neural network applies five-fold cross-validation with each candidate activation function. The results obtained are averaged for verification, and the algorithm is executed 10 times; the average value and standard deviation of these 10 outcomes are calculated. As shown in Table 2, when 'sig' is used as the activation function of RVFL, the classification accuracy of the neural network is only 0.002 smaller than the highest classification accuracy, which is obtained with the 'sin' activation function. Additionally, the standard deviation of the classification accuracy obtained using 'sig' is the smallest among all the activation functions. Based on the classification accuracy and its stability, 'sig' is applied as the activation function of the neural network in the subsequent experiments.
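The evaluation protocol used throughout these experiments, five-fold cross-validation averaged and then repeated 10 times, can be sketched as follows. The helper names and the per-repeat reseeding are assumptions; `evaluate` stands for any train/test routine that returns an accuracy on the given index split.

```python
import numpy as np

def repeated_five_fold(evaluate, n_samples, n_repeats=10):
    """Run 5-fold CV, average the fold accuracies, repeat n_repeats times.

    Returns the mean and standard deviation over the repeats, matching the
    Avg/Stdv columns reported in the experiments.
    """
    run_means = []
    for r in range(n_repeats):
        rng = np.random.default_rng(r)                    # fresh shuffle per repeat
        folds = np.array_split(rng.permutation(n_samples), 5)
        accs = []
        for k in range(5):
            test_idx = folds[k]
            train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
            accs.append(evaluate(train_idx, test_idx))
        run_means.append(float(np.mean(accs)))            # one CV-averaged score
    return float(np.mean(run_means)), float(np.std(run_means))
```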
Effect of the number of nodes in the hidden layer of RVFL on the classification accuracy. In RVFL, the number of nodes is a key factor affecting the classification accuracy. When the number of nodes is too low, RVFL cannot be fitted, thereby resulting in low classification accuracy. In contrast, when the number of nodes is too high, RVFL is over-fitted, thereby making the neural network ineffective in classifying data other than the training samples. This experiment tests the classification accuracies of the various algorithms according to the number of nodes in the hidden layer. The number of nodes is increased from 5 to 100, and five-fold cross-validation is performed; the algorithm is then executed 10 times and the average value is considered to be the final result. Table 3 and Figure 3 show the classification accuracies of the various algorithms for different numbers of nodes in the hidden layer. Figure 3 reveals that when the number of nodes is set between [5, 50], the classification accuracy of a neural network increases with the number of nodes. However, when the number of nodes is set between [50, 100], the classification accuracy increases relatively slowly and subsequently stabilises. After comprehensively analysing the figures and tables, we conclude that when the number of nodes reaches 80, the classification accuracy becomes relatively stable; therefore, the number of nodes in the hidden layer of the model was set to 80 in this experiment. In addition, the classification accuracies of HGS-RVFL and GWO-HGS-RVFL are better than those of the other algorithms when the number of nodes in the hidden layer is small.
Parameters of the improved HGS algorithm using GWO. The population size and the number of iterations play an important role in an optimisation algorithm, and setting these parameters improperly consumes unnecessary resources. If these parameters are too large, the algorithm executes for an extended duration and increases the time cost; if they are too small, the algorithm produces poor results, thereby resulting in low classification accuracy. In this experiment, different values are set for the two parameters, and the resulting classification accuracies are compared to obtain the best combination of the number of iterations and the population size. The population size can assume five possible values (10, 20, 30, 40 or 50) and the number of iterations can assume six possible values (5, 10, 15, 20, 25 or 30), yielding 30 possible combinations of these two parameters. We applied each combination to GWO-HGS-RVFL, averaged the results using five-fold cross-validation and took the average of 10 executions of the algorithm as the final result. Figure 4 displays the influence of the different combinations of the two parameters on the classification accuracy of the algorithm, which is represented by the area of the bubbles in the image: the higher the classification accuracy, the larger the bubble. For a fixed population size, the classification accuracy of the algorithm gradually improves with the increase in the number of iterations. Additionally, for a fixed number of iterations, the classification accuracy gradually improves with the increase in the population size. However, when the population size is set to 40 or 50, the classification accuracy does not change substantially; therefore, we set the population size to 40.
Because the convergence speeds of different algorithms are different, the number of iterations is set to 60 in this experiment.
Parameter settings for the various algorithms. To ensure the credibility of the results, the parameter settings are consistent with those of GWO-HGS-RVFL. The activation function of RVFL is set to 'sig', number of nodes in the hidden layer is set to 80, population size in the optimisation algorithm is set to 40, and number of iterations is set to 60.

Discussion of the results obtained from the algorithms
This experiment uses five-fold cross-validation for the algorithms, and the obtained results are averaged. The algorithms are executed 10 times to reduce the randomness of the results and improve their credibility. Table 4 displays the minimum value (Min), maximum value (Max), standard deviation (Stdv) and average value (Avg) obtained by executing the various algorithms 10 times. As shown in Figure 5, the stability of GWO-HGS-RVFL is substantially better than those of GWO-RVFL and WOA-RVFL, slightly better than that of HHO-RVFL, and similar to those of SCA-RVFL and HGS-RVFL. Additionally, Table 4 and Figure 5 reveal that the GWO-HGS-RVFL algorithm proposed in this study possesses a small error and good stability.

Algorithm convergence analysis
In Figure 6, the convergence speed of GWO-HGS-RVFL is compared with those of the other algorithms, with the classification error rate used to express the fitness. The figure clearly shows that the convergence speed of GWO-HGS-RVFL is much higher than those of the neural networks optimised using the other algorithms. Additionally, as the number of iterations increases, the fitness of GWO-HGS-RVFL becomes very low; its final fitness is nearly the same as that of HGS-RVFL, but GWO-HGS-RVFL reaches the lowest fitness in the fewest iterations. This comprehensive analysis confirms the superiority of GWO-HGS-RVFL. The no free lunch theorem 24 indicates that no single optimisation algorithm can perform best on all optimisation problems. Therefore, we initialised the population of the HGS algorithm using GWO to improve the convergence speed of HGS and avoid local optimal solutions, thereby improving the classification accuracy of RVFL.

Difference between algorithms
To determine the difference between GWO-HGS-RVFL and the other algorithms, the Mann–Whitney U test is used to analyse the experimental results. As in the previous experiments, to increase the reliability of the results, we use five-fold cross-validation, execute each algorithm 10 times and average the results of the 10 executions. Table 5 reveals that the p-value between GWO-HGS-RVFL and each of the other algorithms is less than 0.05 and h is equal to one for all the algorithms; therefore, considerable differences exist between GWO-HGS-RVFL and the other algorithms. The experimental results prove that using GWO to optimise HGS, which in turn optimises RVFL, offers obvious advantages over using HGS alone.
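The Mann–Whitney U test can be sketched from first principles via pairwise comparisons, with a normal approximation for the two-sided p-value. This is a minimal sketch: in practice a library routine (e.g. SciPy's implementation) would be used, and the normal approximation assumes moderately large samples such as the 10 runs per algorithm used here.

```python
import math
import numpy as np

def mann_whitney_u(x, y):
    """U statistic: number of (x, y) pairs with x > y, ties counted as 0.5."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = np.sum(x[:, None] > y[None, :])
    ties = np.sum(x[:, None] == y[None, :])
    return float(greater + 0.5 * ties)

def mann_whitney_p(x, y):
    """Two-sided p-value via the normal approximation to U."""
    n1, n2 = len(x), len(y)
    U = mann_whitney_u(x, y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (U - mu) / sigma
    # standard normal two-sided tail probability via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Applied to the 10 error rates of two algorithms, a p-value below 0.05 supports the reported conclusion that the two result distributions differ significantly.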