Fabric Defect Detection
Project report
submitted to D Y Patil International University, Akurdi, Pune
in partial fulfilment of the requirements for the degree of Bachelor of Technology.
Submitted By:
Parth Sumbre 20200802033
Sushilkumar Sahani 20200802102
This is to certify that the project entitled Fabric Defect Detection, submitted by the above-mentioned students
in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology
in Computer Science and Engineering, is an authentic work carried out by them under my
supervision and guidance.
Director
School of Computer Science Engineering & Application
D Y Patil International University, Akurdi
Pune, 411044, Maharashtra, INDIA
DECLARATION
We hereby declare that the following report, which is being presented in the Major Project
entitled Fabric Defect Detection, is an authentic documentation of our own original work to
the best of our knowledge. The following project and its report, in part or whole, have not been
presented or submitted by us for any purpose in any other institute or organization. Any
contribution made to the research by others, with whom we have worked at D Y Patil
International University, Akurdi, Pune or elsewhere, is explicitly acknowledged in the report.
ACKNOWLEDGEMENT
With due respect, we express our deep sense of gratitude to our respected guide and coordinator
Dr. Maheshwari Biradar, for his/her valuable help and guidance. We are thankful for the
encouragement that he/she has given us in completing this project successfully.
It is imperative for us to mention that the report of the major project could not have been
accomplished without the periodic suggestions and advice of our project supervisor, Dr.
Maheshwari Biradar, and our project co-ordinator, Dr. Rahul Sharma.
We are also grateful to our respected Director, Dr. Bahubali Shiragapur and Hon’ble Vice
Chancellor, DYPIU, Akurdi, Prof. Prabhat Ranjan for permitting us to utilize all the necessary
facilities of the college.
We are also thankful to all the other faculty, staff members and laboratory attendants of our
department for their kind cooperation and help. Last but certainly not the least, we would like
to express our deep appreciation towards our family members and batch mates for providing
support and encouragement.
Abstract
The textile industry is among those that form the core support of not just the national but also
the international economy, producing a line of merchandise that ranges from apparel and fashion
wear to essentials and even manufacturing and production equipment. Ensuring the quality of
these fabrics is significant because the existence of defects or shortcomings in production limits
their effective use and aesthetic appeal. Fabric defect detection has generally been implemented
as an offline inspection step, since the whole process would be time-consuming if done online;
in the current state, human operators inspect the fabric themselves, which is a costly, slow,
inefficient, and error-prone way of working. This project therefore aims to identify how these
challenges may be addressed through an automated fabric defect detection system. The efficiency
of the designed image recognition system is improved by utilising a Deep Convolutional Neural
Network, a recent and well-performing real-time detection model, which makes it a strong
candidate for identifying different defects in a fabric such as holes, stains, and irregularities in
thread density. The experiments were performed on two datasets: one that we compiled ourselves
and the TILDA (Textured Image Database for Fabric Defect) dataset, which contains pictures of
numerous types of fabric defects; the system was trained and validated on a thorough set of data
with different fabric colours, sizes, and orientations. The first stage encompasses data
augmentation and pre-processing, while the second encompasses hyperparameter optimisation.
The resulting model isolates fabric defects from other areas accurately and within a relatively
short time, giving a marked improvement over traditional methods. Notable contributions of this
project include the identification of successful pre-processing strategies, a real-time detection
pipeline, a refined detection algorithm, and a general strategy for analysing the results using
metrics such as precision, recall, and mean Average Precision (mAP) obtained image-wise. The
system's ability to function in real time, together with its accuracy, makes it a worthy tool for
quality control in the textile industry, where it could reduce the reliance on manual inspection
and enhance productive efficiency.
TABLE OF CONTENTS
Declaration i
ACKNOWLEDGEMENT ii
ABSTRACT iii
LIST OF FIGURES vi
1 INTRODUCTION 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 LITERATURE REVIEW 5
2.1 Drawbacks of existing system . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Gaps Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 PROPOSED METHODOLOGY 17
3.1 Dataset Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.1 Dataset Pre-processing . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Hyper parameter Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Model Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.6 Block diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.7 Mathematical Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.7.1 Convolutional Neural Networks (CNNs) . . . . . . . . . . . . . . . . . 23
3.7.2 Convolution Operation . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.7.3 Sigmoid Activation Function . . . . . . . . . . . . . . . . . . . . . . . 24
3.7.4 Pooling Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.8 Deep Convolutional Neural Network . . . . . . . . . . . . . . . . . . . . . . . 25
3.8.1 Bounding Box Prediction . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.8.2 Loss Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.9 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.9.1 Precision and Recall . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.9.2 F1 Score . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.9.3 Mean Average Precision (mAP) . . . . . . . . . . . . . . . . . . . . . 27
3.10 Data Augmentation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.11 Advantage & Disadvantage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6 CONCLUSION 50
REFERENCES 52
List of Figures
3.1 Distribution of Number of Images in Each Folder . . . . . . . . . . . . . . . . . . 17
3.2 Types of Defect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Post Processing Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Post Processing Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 Steps for preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2 Model Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Complete flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.1 Test Sample 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.2 Test Sample 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.3 Test Sample 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.4 Test Sample 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.5 Test Sample 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.6 Test Sample 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.7 Test Sample 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.8 Validation Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.9 Validation Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.10 Training Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.11 System output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.12 Confusion Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
List of Tables
2.1 Gap Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Method Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1 Comparison of Max Pooling and Global Average Pooling . . . . . . . . . . . . . . 24
5.1 Hardware Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1. INTRODUCTION
The textile industry is one of the major industries that contribute to the growth of both the
domestic and the global economy, making everything from basic clothing for work or casual
wear to more complex industrial materials. It is therefore vital to sustain the quality of the
textiles delivered to consumers in order to hold a favourable market position and to stabilise
market demand. A central part of this effort is the identification and removal of defects in
fabrics before the products reach the consumer markets. In the past, fabric defect detection was
done through visual monitoring by trained fabric inspectors who carefully examine each surface
for any flaw. While effective to a degree, this method is highly cumbersome, incredibly
time-consuming, and profoundly subjective in nature. It carries with it several problems, such as
increased costs of material and time, as well as inconsistencies that make the quality control
process inefficient. Over the last couple of years there has been progress in the use of artificial
intelligence[1]. Machine learning, and in particular artificial intelligence (AI) combined with
computer vision, has provided the foundation for automatic defect detection methods that can be
more capable and less expensive than the traditional ones. Among various AI-driven methods,
recent advancements in machine intelligence and other deep learning technologies have been
regarded as some of the most useful approaches to automating visual inspection tasks[2]. Among
the frameworks introduced in this area, object detection models based on the Convolutional
Neural Network are recognized for their effectiveness[3, 4]. The Deep Convolutional Neural
Network used here extends these capabilities further and is a good option for real-time fabric
defect detection, helping to improve the learning process and to achieve better and faster
results[5]. The work presented in this report proposes the application of a Deep Convolutional
Neural Network to the problem of fabric defect detection in service of the textile manufacturing
industry. By using deep learning and computer vision techniques, the objective is to replace
inefficient, time-consuming, and mechanical manual examination procedures with an effective,
fully automated system. This system is meant to minimise the time spent on quality assurance
when producing textiles in the factory[6], while at the same time reducing the chances of
misreporting defects and improving the precision of defect identification. A comprehensive
study of the Deep Convolutional Neural Network model for identifying the presence of a fabric
flaw is discussed in the report, including the methodology for choosing the topic of study, the
design of the experiment, and the statistical analysis. By applying the approach to varied and
complex real-life samples, the techniques were refined until they passed all tests, and the
studies show that constructing predictive models from the acquired datasets is effective in
predicting different types of defects. Furthermore, the wider prospects of the Deep
Convolutional Neural Network's capabilities for the textile industry are considered.
1.1. Background
The textile industry is a critical sector of the global economy, producing a vast array of fabrics
for various applications, from clothing to industrial materials. Ensuring the quality of these
fabrics is paramount, as defects can significantly impact the usability and aesthetic appeal of
the final product. Traditionally, quality control in this industry has relied on manual inspection
by human operators. This method, while straightforward, is fraught with challenges such as
labor intensity, inaccuracy, and inconsistency due to human subjectivity[7]. The integration of
computer vision and deep learning technologies offers a promising solution to these
challenges. Over the past decade, significant advancements in these fields have revolutionized
image processing tasks, including object detection and classification. Despite these
advancements, the unique challenges posed by fabric defect detection—such as the variability
in fabric types, patterns, and defect types—require robust and efficient models capable of
real-time processing in a manufacturing environment[8]. Deep Convolutional Neural Network
is a state-of-the-art object detection model known for its speed and accuracy. The latest
iteration, Deep Convolutional Neural Network, enhances these capabilities further, making it
an ideal candidate for fabric defect detection. By leveraging the power of Deep Convolutional
Neural Network, this project aims to develop an automated and speedy textile fabric defect
detection system that can identify and categorize a wide array of textile flaws without human
involvement.
1.2. Motivation
The justification for executing this research is that the conventional methods of diagnosing
fabric defects are inadequate and that better approaches are needed. The disadvantages of the
traditional forms of inspection in the textile industry include the high cost of the labour
involved[2], the compromises in quality introduced by manual intervention, and the
inconsistency of the resulting quality control measures. Automating the defect detection process
can overcome these challenges: it reduces operational costs while providing better inspection,
increased accuracy, and improved production. The use of a Deep Convolutional Neural Network
presents a window of opportunity to actualise these goals. It utilises deep convolutional layers
and is known for its high performance and time optimisation[9], so the method is well
positioned to perform the real-time fault identification that is needed in industry. By developing
an automated Fabric Defect Detection System based on a Deep Convolutional Neural Network,
the objective of this research is to ensure that more secure and efficient methods and
technologies are used in production, leading to even higher quality textile products.
1.3. Objectives
1. Develop an Automated System: Build an automatic fabric defect scanning system based on a
Deep Convolutional Neural Network model for the classification and categorisation of different
types of fabric defects.
2. Real-Time Detection: Develop a system that works in real time with the production setup, so
that defects can be detected at almost any point during manufacture, which helps to cut down
inspection time.
3. High Accuracy: Achieve substantial improvements in the detection and differentiation of
defects by making full use of the Deep Convolutional Neural Network[7].
4. Wide Array of Defects: Design the system to recognise a wide range of defects, from holes
and rips to other clearly distinguishable imperfections such as discolorations, blots, wrong
patterns, or wrong colours.
5. Enhance Quality Control: Improve the overall competency and execution of quality
evaluation for textile products, streamlining the workflow by avoiding manual checking of every
item.
6. Scalability: Choose an implementation that can be expanded as needs grow, and that is
versatile with respect to the kinds of fabrics and manufacturing processes to which it can be
applied.
7. Cost-Effective Solution: Propose a system that offers the same or higher value at a lower
cost than the current inspection processes, promoting widespread adoption.
At the foundation level, the idea of this project is to avoid relying on manual labour and instead
create an advanced real-time system capable of detecting many categories of fabric defects at
once, without human intervention. The previous way of physically inspecting textiles involved
one or more people and was labour-intensive, inaccurate, and inconsistent. With today's
advances in computer vision, deep learning, and neural networks, it is now possible to build
models that can set out and define all sorts of flaws on difficult fabrics, although handling large
variations remains a challenge. The approach suggested to tackle the aims of this project is the
Deep Convolutional Neural Network, a state-of-the-art object detection model[10]. The system
is intended to run constantly in real-time mode to ensure the highest level of efficiency, so that
a defect can be identified at the initial stage of manufacturing and corrective measures can be
taken. In doing so, the objectives of this project contribute to enhancing the overall reliability
of the quality assurance process in the textile industry.
2. LITERATURE REVIEW
The literature review on fabric defect detection encompasses various methodologies and
approaches, illustrating the progression of process automation in this field. Numerous studies
compare existing methods with newer methodologies, often employing metrics such as
accuracy, mean Average Precision (mAP), and Intersection over Union (IoU) to evaluate the
efficiency of different approaches in identifying and marking defects.
Traditional methods of fabric defect detection include manual inspection and statistical
techniques. Manual inspection, although accurate, is labor-intensive and susceptible to human
error, which can lead to inconsistencies in defect detection. Statistical methods, such as
thresholding and edge detection, offer a simpler approach but struggle with the complexities of
fabric patterns and varying lighting conditions. These limitations highlight the need for more
advanced and automated techniques.
Machine learning-based methods have significantly advanced fabric defect detection. Support
Vector Machines (SVMs) and K-Nearest Neighbors (KNN) are among the prominent
techniques[11]. SVMs, while requiring extensive feature engineering, are effective in
classifying defects by finding optimal hyperplanes that separate defect classes[2]. KNN, on the
other hand, classifies defects based on feature vector similarity, comparing new samples to a
database of known defect types. However, KNN can be computationally intensive, particularly
with large datasets, posing a challenge for real-time applications.
Deep learning-based methods have further revolutionized fabric defect detection by offering
more sophisticated and robust solutions. Convolutional Neural Networks (CNNs) are
particularly effective, as they automatically learn hierarchical feature representations from
images, making them well-suited for handling variations in fabric patterns and lighting
conditions. Among the deep learning techniques, the You Only Look Once (YOLO) model
stands out for its real-time object detection capabilities. The latest iteration, YOLO v8,
enhances accuracy and speed, making it highly suitable for fabric defect detection. YOLO’s
ability to process images quickly and accurately in real-time offers a significant advantage over
traditional and earlier machine learning methods[4].
Data augmentation and preprocessing techniques are crucial for improving the performance
and generalization of fabric defect detection models. Techniques such as flipping, rotation,
scaling, and translation are applied to increase the diversity of training data, helping the model
generalize better to unseen data. Image normalization and resizing are essential preprocessing
steps that ensure numerical stability and compatibility with the detection model’s input size
requirements[12]. These techniques enhance the robustness and effectiveness of the detection
models, particularly when dealing with diverse and complex fabric patterns.
Evaluation metrics play a critical role in assessing the performance of fabric defect detection
models. Metrics such as accuracy, precision, recall, F1-score, IoU, and mAP provide a
comprehensive evaluation framework. Accuracy measures the overall correctness of the
model’s predictions, while precision and recall offer insights into the model’s ability to
correctly identify defects and detect all relevant instances, respectively. The F1-score, being
the harmonic mean of precision and recall, balances these two metrics to give a single
performance measure[13]. IoU assesses the accuracy of predicted bounding boxes by
comparing the overlap between predicted and actual defect regions. mAP, which averages the
precision across different defect classes, provides a single performance measure for multi-class
defect detection.
Recent research has focused on improving the accuracy and efficiency of fabric defect detection
models. Key advancements include:
- Transfer Learning: Using pre-trained models on large datasets and fine-tuning them for fabric
defect detection to leverage existing knowledge and improve performance.
- Weakly Supervised Learning: Developing methods that require less annotated data by
leveraging weakly labeled or unlabeled data for training.
Some of the most effective methods can be categorized into supervised and unsupervised
approaches for fabric defect detection:
Supervised Methods:
• PRAN Net
• An improved loss function for defect segmentation on yarn-dyed and fabric images
Unsupervised Methods:
• Competitive Cat Swarm Optimizer based RideNN and deep neurofuzzy network
Despite advancements in fabric defect detection, several challenges remain:
- High Variability in Fabric Patterns: Fabrics can have diverse patterns and textures, making it difficult for models to generalize across different types of defects[14].
- Noise and Artifacts: Images can contain noise and artifacts that interfere with defect detection. Effective pre-processing and robust models are required to handle such issues.
- Data Scarcity: Obtaining a large, diverse, and annotated dataset of fabric defects can be challenging. Data augmentation techniques and synthetic data generation can help mitigate this issue.
Table 2.1: Gap Analysis (continued)

No. | Title | Author(s) | Method | Metrics / Tools | Research Gap
4 | Context-Aware Progressive Attention Aggregation Network | Zhoufeng Liu et al. | Saliency detection approach using CAMS features, feature refinement, and MDS for multi-level deep supervision | Accuracy, precision, recall, F1-score | Further validation needed on different fabric types; exploration on optimizing the training speed and handling high-noise prints[18]
5 | Methodology of Proposed System | Jyothi M R et al. | Data gathering, preprocessing, model generation, and prediction using deep learning | Deep-learning-specific information, activity diagrams | Needs more data on the specific types of fabric defects and their characteristics[2]
6 | Domain-Generalized Texture Anomaly Detector | Simon Thomine et al. | Domain-generalized texture anomaly detection, fast texture-specific training/inference, automated assessment, avoiding retraining | Performance on MVTEC and custom fabric datasets, epoch numbers[19] | Further investigation into the generalization capability and scalability for different industrial applications
7 | Plain Fabric Defect Detection | Xiang Jun et al. | LDPG (local defect prediction and global defect recognition) with supervised learning | AUC, F1-score, detection rate, detection accuracy | Further research needed on handling complex fabric types with creases and folds[1]
8 | CCSO Algorithm for Fabric Defect Detection | Maheshwari S. Biradar et al. | CCSO algorithm for optimal performance parameters of RideNN or DNFNN | Feature extraction, rule generation, de-fuzzification, summation layer | Needs further validation in real-world industrial settings and across different fabric types[20]
9 | Deep Learning-Based Defect Detection Method for Textile Materials | Minghang Yuan et al. | Image pretreatment and crack identification using an image processing approach[21] | Interclass variance, crack identification | Needs a broader dataset and validation across different types of fabric defects
10 | Effective Fabric Defect Detection Model for High-Resolution Images | Long Li et al. | Automated leather defect classification using deep learning, IoU threshold, shape similarity parameter | IoU, mAP, detection rate | Needs more research on detecting smaller population defects and optimizing detection accuracy[22]
11 | Trials with Test Images on Fabric Defect Sensing | L. Song et al. | Quality of defect region and background texture segmentation using segmentation coefficient and weight coefficient | Segmentation accuracy, feature fusion, saliency mapping[6] | Further validation needed on larger datasets and across different fabric types
12 | Fabric Defect Detection Using Tactile Information | Xingming Long et al. | Statistical, spectral, and model-based analysis of fabrics' surface properties | Histogram features, GLCM, LBP, fractal analysis, Fourier, Gabor, wavelet transforms | Needs further exploration on integrating tactile information with visual methods[23]
13 | Fabric Defect Detection Using MSDS | Shuxuan Zhao et al. | Multi-scale saliency maps for detecting prominent areas on fabric images | Detection accuracy, background suppression | Needs further validation on handling complex fabric structures and limited defective samples[24]
14 | Fabric Flaw Measurement and Analysis | Guang Zheng et al. | Thresholding, contour analysis, thinning, island elimination, enlargement, narrowing, measurement | Area, peripheral length, center point position, defect shape factors, angle of defect | Needs further research on detecting complex faults like creased material and variegated yarn[25]
15 | Jump Connection Generative Adversarial Network | Hanqing Cheng et al. | GAN-based fabric defect detection, benchmarked against conventional methods | TPR, FPR, PPV, NPV, F-value | Needs further exploration on adapting to diverse defect patterns and improving detection rates[26]
16 | Segmentation of Fabric Defects | Junfeng Jing et al. | Improved loss function for defect segmentation in fabric images | IoU, recall, precision, F1 measure | Further research needed on weakly supervised and unsupervised learning for better performance with limited labeled datasets[27]
17 | Structural Approach to Fabric Defect Detection | Zhiyong Zhao et al. | Statistical feature extraction, spectrum analysis, model-based approach | Classification accuracy, mAP | Needs more research on integrating multiple detection approaches for better accuracy[28]
18 | Automatic Unsupervised Fabric Defect Detection | Zhengrui Peng et al. | Fusion technique using anomaly maps, upsampling[29] | Detection accuracy, precision, recall, F1-score | Needs further exploration on real-world industrial application and scalability
19 | Statistical Summaries of Fabric Objects | Guodong Lu et al. | Trace and yarn count analysis using statistical techniques | Trace, yarn counts, statistical analysis | Needs further research on integrating statistical analysis with modern machine learning techniques[9]
20 | Real-Time Fabric Defect Detection | Shuxuan Zhao et al. | Multi-scale CNN for detecting multiple defects | Detection accuracy, false positives, false negatives | Needs further validation on handling diverse fabric types and defect patterns[12]
21 | Fabric Defect Detection Using Customized Deep CNN | Mahdi HATAMI VARJOVI et al. | Customized deep CNN for circular knitting fabrics | Detection accuracy, success rate | Needs further research on optimizing the model for different fabric types and enhancing real-time performance[11]
22 | Fabric Defect Detection Based on Saliency Histogram Features | Min Li et al. | Saliency-map-derived algorithm for defect detection using histogram features[7] | Detection accuracy, saliency-based methods | Needs further validation on diverse fabric patterns and integration with other detection methods
23 | Fabric Defect Detection Using a Saliency-Driven Technique | Zhoufeng Liu et al. | Color attribute mapping scheme, feature fusion, refinement, multilevel deep supervision | Detection accuracy, MAE, F-score, S-score | Needs further validation on different fabric types and optimization of the technique for industrial application[18]
24 | Real-Time Fabric Defect Detection Based on Multi-Scale CNN | Shuxuan Zhao et al. | Multi-scale CNN for real-time fabric defect detection | Detection accuracy, false positives, false negatives | Needs further research on enhancing the model's scalability and robustness in diverse industrial environments[24]
25 | Jump Connection Generative Adversarial Network for Fabric Defect Detection | Hanqing Cheng et al. | Generative Adversarial Network (GAN) based approach | TPR, FPR, PPV, NPV, F-value | Needs further exploration on adapting to diverse defect patterns and improving detection rates[26]
26 | Fabric Defect Detection Using a VLSTM-Based CNN | Meng An et al. | VLSTM-based CNN model integrating multiple deep learning processes | AP, mAP, detection precision | Needs further research on improving detection precision and handling diverse fabric types[13]
27 | Research on Fabric Defect Detection Based on Deep Fusion DenseNet-SSD Network | Xinying He et al. | DenseNet-SSD network for enhanced defect detection flexibility and precision | Detection accuracy, defect classification rate | Needs further research on optimizing the model for diverse textures and enhancing real-time performance[14]
28 | Research on Fabric Defect Detection Using Tactile Information | Xingming Long et al. | Statistical, spectral, and model-based analysis of fabric surface properties | Histogram features, GLCM, LBP, fractal analysis, Fourier, Gabor, wavelet transforms | Needs further exploration on integrating tactile information with visual methods for improved detection accuracy[23]
29 | Fabric Defect Detection Using Customized Deep CNN for Circular Knitting Fabrics | Mahdi HATAMI VARJOVI et al. | Customized deep CNN for detecting defects in circular knitting fabrics | Detection accuracy, success rate | Needs further research on optimizing the model for different fabric types and enhancing real-time performance[11]
30 | Fabric Defect Detection Using a Saliency-Driven Technique | Zhoufeng Liu et al. | Color attribute mapping scheme, feature fusion, refinement, multilevel deep supervision | Detection accuracy, MAE, F-score, S-score | Needs further validation on different fabric types and optimization of the technique for industrial application[18]
Ensuring Reliability: The essence is to emphasise the need to identify the difficulties within
fabric defect detection, which is essential to improving the validity and resilience of such
detection approaches.
Real-World Applications: The methods surveyed have great potential to develop into practicable
techniques in the field and, eventually, in industry. These indicators are very important for
revealing fabric flaws at the production stage and during quality checks.
Ground Truth Determination: A manual approach was adopted for determining the ground truth,
through annotation using annotation software.
Structural vs Statistical Approach: The application of the structural approach to fabrics with a
clearly regular structure is discussed in the literature; nevertheless, the statistical approach
performs better, especially across both defective and defect-free textile samples.
3. PROPOSED METHODOLOGY
3.1. Dataset Preparation
The performance of fabric fault detection depends to a large extent on the quality of the data
and the variety of perspectives it covers. A well-defined dataset that the model can be fitted to
aids in improving the model accuracy as well as its ability to perform when confronted with new
data[1]. It also specifies the different forms of fabric defects and the manner in which various
factors affect their nature. The information was derived from both primary and secondary data
collection, carried out after evaluating the datasets available in the current literature and
additional datasets compiled in the present work. On the basis of this selection process, we
used the original collection methods to construct a sample set for the fabric defect detection
models[9]; the Deep Convolutional Neural Network was used to classify the defective images.
The data comprise the TILDA dataset and our own dataset, and care was taken to keep the
collection free from flaws so that it genuinely enhances the area of fabric defect detection. It
consists of a collection of various high-resolution textile images, captured for example by laser
diode arrays, and a majority of the fabrics exhibit stains, blotches, damage, rips, and yarn or
weave irregularities, few of which are perfectly even or regular. Leveraging both datasets helps
the model to generalise across the diverse types of defect patterns typical of fabrics in real
environments[16]; consequently, the generalisation capability of the tested algorithms increases
once augmentation and optimisation are applied[22]. In addition, the images gathered for this
purpose were adjusted to achieve uniformity when transferring them and to adapt them to the
requirements of the particular application.
3.1.1. Dataset Pre-processing
To further enhance and diversify the dataset and to improve what the model can learn from the
data, we used data augmentation techniques[7]. Flipping, rotation, scaling, and translation were
among the methods applied in the study, simulating the conditions and variations that may be
met in real-world fabric inspection. This expansion of the database helps to ensure that the
model can 'see' a much wider range of images and several forms of fabric defects, and it
enhances the precision and versatility of the model being developed. Pre-processing is required
before fine-tuning the model so that the data are compatible with the Deep Convolutional Neural
Network. To create uniformity in the analysed images, all images were resized to a fixed
dimension of 416 x 416 pixels to match the input size of the Deep Convolutional Neural Network
model. All pixel values were scaled between 0 and 1 so that the calculations during model
training remain numerically stable. Finally, the dataset was split into training, validation, and
test subsets[14], which are the key components used for evaluating and validating the model.
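As a minimal sketch of these pre-processing steps (assuming, for illustration, that the images are arranged in one folder per defect class and that OpenCV and scikit-learn are available; the folder name and the 70/15/15 split ratio are assumptions rather than details given in this report), the resizing, scaling, and splitting could look like this:

```python
# Minimal pre-processing sketch (assumed layout: one folder per defect class).
import os
import cv2                      # OpenCV for reading and resizing images
import numpy as np
from sklearn.model_selection import train_test_split

IMG_SIZE = 416                  # fixed input size described above
DATA_DIR = "dataset/"           # hypothetical root folder, one sub-folder per class

images, labels = [], []
class_names = sorted(os.listdir(DATA_DIR))
for label, cls in enumerate(class_names):
    for fname in os.listdir(os.path.join(DATA_DIR, cls)):
        img = cv2.imread(os.path.join(DATA_DIR, cls, fname))
        if img is None:
            continue                                    # skip unreadable files
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))     # uniform 416 x 416 input
        images.append(img.astype("float32") / 255.0)    # scale pixels to [0, 1]
        labels.append(label)

X = np.array(images)
y = np.array(labels)

# 70 / 15 / 15 split into training, validation and test subsets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)
```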
3.2. Model Selection
Defining the appropriate model is one of the most important issues in fabric defect detection.
Many kinds of neural networks exist; the Deep Convolutional Neural Network was chosen
because of its strong performance and its likelihood of satisfying the requirements of
production. Its object detection capability is commendable, making it suitable for detecting the
several kinds of variation that occur in fabric defects. As a complete single-stage detection
algorithm it requires little time to process each picture and has very high accuracy, which is
also very useful for the immediate removal of substandard products right from the production
line[12].
Compared with region-based approaches such as Faster R-CNN, in which candidate regions of
the image are classified separately[27], the chosen network processes the image in a single pass.
This ensures that all the defects are properly described without the quality and efficiency
problems caused by delays. For these reasons the Deep Convolutional Neural Network offers
high stability and high accuracy for this task, and it has enabled the development of better
quantitative and qualitative techniques for detecting defects[6].
3.3. Training
Training is the learning phase in which the neural network learns to distinguish the fabric
defect areas within the images[19]. The process entails fine-tuning the network weights in an
effort to reduce the error measured by a loss function and to increase the reliability of the
model. The Deep Convolutional Neural Network learns a composite loss function comprising
object localisation loss, objectness (confidence) loss, and class probability loss; in addition,
resampling strategies such as oversampling, undersampling, SMOTE, borderline SMOTE,
SVMSMOTE, synthetic data generation, and random sampling can be used to balance the
classes and improve defect detection accuracy[27]. The network parameters are tuned using
optimisers such as Adam or SGD, which minimise the loss.
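A minimal Keras-style training sketch is shown below; the optimiser settings, batch size, and loss name are illustrative assumptions rather than the exact configuration used in this project:

```python
# Minimal training sketch using Keras (model construction is omitted; see Chapter 4).
import tensorflow as tf

def train(model, X_train, y_train, X_val, y_val, epochs=100):
    # Adam minimises the loss; SGD could be substituted via tf.keras.optimizers.SGD(...)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="sparse_categorical_crossentropy",   # class-probability loss (cf. Eq. 3.4/3.5)
        metrics=["accuracy"],
    )
    history = model.fit(
        X_train, y_train,
        validation_data=(X_val, y_val),
        epochs=epochs,
        batch_size=16,
    )
    return history
```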
3.4. Hyper parameter Tuning
Hyperparameter tuning is carried out after the model has been trained, in order to fine-tune it
further and achieve the best possible performance[2]. The hyperparameters that have the
greatest influence on the effectiveness of the model are altered so that the model detects fabric
defects well while maintaining computational efficiency. The learning rate is adjusted so that
the model converges properly, the training and batch sizes are matched to the capacity of the
available hardware, and the sizes of the anchor boxes are chosen to improve defect localisation
and classification. Coordinated fine-tuning of these hyperparameters brings out the efficiency
of the Deep Convolutional Neural Network model in recognising fabric defects[24] and supports
the application of corrective measures in fabric production.
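The following sketch illustrates one simple way such a tuning loop could be organised; the candidate learning rates, batch sizes, and the short 10-epoch budget are assumptions for illustration only:

```python
# Illustrative grid search over two influential hyperparameters (learning rate, batch size).
import itertools
import tensorflow as tf

def tune(build_model, X_train, y_train, X_val, y_val):
    best = (None, -1.0)
    for lr, batch in itertools.product([1e-2, 1e-3, 1e-4], [8, 16, 32]):
        model = build_model()                     # fresh model for every configuration
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        model.fit(X_train, y_train, epochs=10, batch_size=batch, verbose=0)
        _, val_acc = model.evaluate(X_val, y_val, verbose=0)
        if val_acc > best[1]:
            best = ((lr, batch), val_acc)         # keep the configuration with the best validation accuracy
    return best
```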
3.5. Model Evaluation
Once the Deep Convolutional Neural Network model has been developed and its
hyperparameters tested and analysed, a careful evaluation of the results of that parameter
optimisation is crucial. Measures such as precision, recall, and F1-score are applied to
summarise how well the model performs[21]. Precision measures the proportion of detections
that are true positives, whereas recall is the ratio of true positives correctly identified to the
total number of actual positives present. In addition, metrics such as mAP (mean Average
Precision) and accuracy are used to evaluate the model[10]: mAP averages the performance
scores over the classes of defects, and accuracy gives the total percentage of correct
predictions. Qualitative evaluation, obtained by analysing the results of the model on a few
sample images, assists in revealing patterns and in selecting the direction for further
improvement[20]. These findings are backed up by a more comprehensive assessment of the
deep convolutional neural network model and shed light on its optimisation and refinement for
production use.
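A sketch of how these quantitative metrics could be computed with scikit-learn on the held-out test set is given below; the macro averaging choice is an assumption made for illustration:

```python
# Sketch of the quantitative evaluation on the held-out test set.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def evaluate(model, X_test, y_test):
    y_pred = np.argmax(model.predict(X_test), axis=1)   # predicted class per image
    print("Accuracy :", accuracy_score(y_test, y_pred))
    print("Precision:", precision_score(y_test, y_pred, average="macro"))
    print("Recall   :", recall_score(y_test, y_pred, average="macro"))
    print("F1-score :", f1_score(y_test, y_pred, average="macro"))
    print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
```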
3.6. Block diagram
1. Input (Fabric Images): This is the entry point of the system, through which fabric images are
supplied. These are the images that will be assessed for flaws.
2. Pre-processing: The raw fabric images arriving at this stage are first filtered and prepared
through several steps of image pre-processing, for example resizing, normalisation, and possibly
noise reduction, before they are passed to the detection model.
3. Deep Convolutional Neural Network Model: The detection model, a Deep Convolutional
Neural Network, performs object recognition on the pre-processed images in order to identify
and localise defects.
4. Defect Detection: In this step the specific areas of the images where defects are present are
delineated. The Deep Convolutional Neural Network produces bounding boxes around the
detected defects.
5. Classification: The detected defects are then categorised into different classes depending on
the type of flaw, such as holes, blotches, or tears; for complex defects this step may take
slightly longer.
6. Output (Detected Defects): The last stage produces the result: images annotated with the
locations and categories of the defects. This information can be passed on for further analysis
or used to partially automate decision making on the production line.
3.7. Mathematical Concepts
3.7.1. Convolutional Neural Networks (CNNs)
Convolutional Neural Networks (CNNs) form the backbone of the Deep Convolutional Neural
Network model used in this project. The following mathematical concepts are fundamental to
understanding CNNs.
3.7.2. Convolution Operation
The convolution operation involves a filter (or kernel) that slides over the input image,
performing element-wise multiplication and summing the results to produce a feature map. For
an input image I and a kernel K, the value of the output feature map S at position (i, j) is

S(i, j) = \sum_{m} \sum_{n} I(i + m, j + n) \cdot K(m, n) \quad (3.1)

where:
• (i, j): the coordinates of the current position in the output feature map.
• S(i, j): the value at position (i, j) in the output feature map resulting from the convolution operation.
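For illustration, a direct (unoptimised) NumPy implementation of this operation, in the cross-correlation form commonly used by CNN frameworks, might look as follows:

```python
# Direct NumPy implementation of the convolution in Eq. (3.1) ("valid" mode, no padding).
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise multiply the window by the kernel and sum the result
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a 3x3 edge-style kernel applied to a random 8x8 "image"
feature_map = conv2d(np.random.rand(8, 8), np.array([[1, 0, -1]] * 3, dtype=float))
```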
3.7.3. Sigmoid Activation Function
The Sigmoid activation function is often used in neural networks for binary classification tasks
because it outputs a value between 0 and 1. This makes it suitable for predicting probabilities.
It is defined as:
\sigma(x) = \frac{1}{1 + e^{-x}} \quad (3.2)
3.7.4. Pooling Operation
Pooling operations reduce the spatial dimensions of the feature maps, typically using max
pooling; here, however, we use GlobalAveragePooling2D:

\mathrm{GAP}(x) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_{i,j} \quad (3.3)
where H and W are the height and width of the feature map, respectively.
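The following small sketch checks Eq. (3.3) against the Keras GlobalAveragePooling2D layer on a randomly generated batch of feature maps (the shapes are illustrative):

```python
# Global average pooling (Eq. 3.3) computed by hand and via the Keras layer.
import numpy as np
import tensorflow as tf

feature_maps = np.random.rand(1, 4, 4, 512).astype("float32")   # (batch, H, W, channels)

manual = feature_maps.mean(axis=(1, 2))                          # average over H and W
keras_out = tf.keras.layers.GlobalAveragePooling2D()(feature_maps).numpy()

print(np.allclose(manual, keras_out))                            # True: both give a 512-dim vector
```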
Table 3.1: Comparison of Max Pooling and Global Average Pooling

Feature | Max Pooling | Global Average Pooling
Effect | Reduces the spatial size of the feature maps, helps in reducing the computational load, and provides some translation invariance | Dramatically reduces the number of parameters and helps in preventing overfitting
3.8. Deep Convolutional Neural Network
Deep Convolutional Neural Network is an object detection algorithm that divides an image into
a grid and predicts bounding boxes and class probabilities for each grid cell.
3.8.1. Bounding Box Prediction
Each bounding box is represented by five predictions: (x, y, w, h, c), where (x, y) is the center of
the box, w and h are the width and height, and c is the confidence score. The coordinates x and
y are predicted relative to the grid cell, while w and h are normalized by the dimensions of the
image.
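A small sketch of how such a grid-relative prediction could be decoded into absolute pixel coordinates is shown below; the grid size and image size used here are illustrative assumptions:

```python
# Sketch: converting one grid-relative prediction (x, y, w, h, c) into absolute pixel
# coordinates; the 13x13 grid and 416x416 image size are illustrative assumptions.
def decode_box(pred, cell_row, cell_col, grid=13, img_w=416, img_h=416):
    x, y, w, h, c = pred
    cx = (cell_col + x) * img_w / grid      # x, y are offsets inside the grid cell
    cy = (cell_row + y) * img_h / grid
    bw, bh = w * img_w, h * img_h           # w, h are normalised by the image size
    # return (x_min, y_min, x_max, y_max, confidence)
    return cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2, c

print(decode_box((0.5, 0.5, 0.2, 0.3, 0.9), cell_row=6, cell_col=6))
```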
3.8.2. Loss Function
For classification, the cross-entropy loss for a single instance is

\text{Loss} = -\sum_{i=1}^{C} y_i \log(\hat{y}_i) \quad (3.4)

where C is the number of classes, y_i is the true label (1 if the class is the correct class, 0
otherwise), and \hat{y}_i is the predicted probability for class i.
Suppose we have three classes, and the true class is the second one. The true labels y = [0, 1, 0]
and the predicted probabilities ŷ = [0.2, 0.7, 0.1]. The loss would be:
\text{Loss} = -\log(0.7) \approx -(-0.357) = 0.357
For a batch of N instances, the loss is averaged over the batch:

\text{Loss}_{\text{batch}} = -\frac{1}{N} \sum_{j=1}^{N} \sum_{i=1}^{C} y_{ij} \log(\hat{y}_{ij}) \quad (3.5)

where y_{ij} is the true label for instance j and class i, and \hat{y}_{ij} is the predicted probability for
instance j and class i.
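The worked example above can be reproduced in a few lines of NumPy:

```python
# Reproducing the worked example for Eq. (3.4) with NumPy.
import numpy as np

y_true = np.array([0, 1, 0])          # one-hot label: class 2 is the correct class
y_pred = np.array([0.2, 0.7, 0.1])    # predicted class probabilities

loss = -np.sum(y_true * np.log(y_pred))
print(round(loss, 3))                  # ~0.357, matching -log(0.7)
```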
3.9. Evaluation Metrics
3.9.1. Precision and Recall
Precision (P) measures the proportion of predicted defects that are correct:

P = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}} \quad (3.6)

Recall (R) measures the model's ability to find all relevant instances:

R = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}} \quad (3.7)
3.9.2. F1 Score
F_1 = 2 \cdot \frac{P \cdot R}{P + R} \quad (3.8)
3.9.3. Mean Average Precision (mAP)
mAP is the mean of the average precision scores for each class:
\text{mAP} = \frac{1}{N} \sum_{i=1}^{N} AP_i \quad (3.9)
where APi is the average precision for class i, and N is the number of classes.
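As a small illustration of Eq. (3.9), mAP can be computed from per-class AP values; the AP numbers below are placeholders, not results from this project:

```python
# Eq. (3.9): mAP as the mean of per-class average precision values.
# The AP values below are hypothetical placeholders, not results from this project.
import numpy as np

ap_per_class = {"hole": 0.91, "oil spot": 0.88, "thread error": 0.84}
mAP = float(np.mean(list(ap_per_class.values())))
print(f"mAP = {mAP:.3f}")
```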
3.10. Data Augmentation Techniques
1. Horizontal Flip
I'(x, y) = I(W - x, y) \quad (3.10)
where I is the original image, I' is the flipped image, x and y are the pixel coordinates, and W is
the width of the image.
2. Vertical Flip
I'(x, y) = I(x, H - y) \quad (3.11)
where H is the height of the image.
3. Random Rotate 90
For rotations of 180 and 270 degrees, the corresponding coordinate remappings are applied analogously.
4. Rotate
5. Random Brightness and Contrast
Brightness adjustment:
I'(x, y) = I(x, y) + \Delta B \quad (3.16)
Contrast adjustment:
I'(x, y) = \alpha \, (I(x, y) - \mu) + \mu \quad (3.17)
6. Advanced Blur
I'(x, y) = \sum_{i=-k}^{k} \sum_{j=-k}^{k} I(x + i, y + j) \cdot G(i, j) \quad (3.18)

G(i, j) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{i^2 + j^2}{2\sigma^2}} \quad (3.19)
with σ being the standard deviation of the Gaussian distribution and k the kernel size.
7. Gauss Noise
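The names of the transformations listed above match transforms in the Albumentations library; assuming that library is what is used, a sketch of the augmentation pipeline could look like this (probabilities, parameters, and the input file name are illustrative):

```python
# Sketch of the listed augmentations using the Albumentations library
# (assumed here because the names match its transforms); values are illustrative.
import albumentations as A
import cv2

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.Rotate(limit=30, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.AdvancedBlur(p=0.2),
    A.GaussNoise(p=0.2),
])

image = cv2.imread("fabric_sample.jpg")          # hypothetical input image
augmented = augment(image=image)["image"]        # randomly transformed copy
```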
3.11. Advantage & Disadvantage
Advantages
2. High Accuracy: - The utilisation of a Deep Convolutional Neural Network as the deep
learning model ensures high accuracy in defect detection. The results show that the system
is capable of correctly identifying different types of defects; as outlined in the confusion
matrix, the high accuracy obtained proves the effectiveness of the algorithm. - Specifically,
the study reports strong values for precision-based performance estimates, including
precision, recall, F1-score, and IoU, both for detection and for identification of the specific
area affected by the defect.
3. Real-Time Processing: - The DCNN provides fast image analysis, a key feature in
industrial environments where quick identification of defects and a quick response are
paramount. - Real-time processing makes it possible to detect defects during the production
phase and to make corrections without much delay, avoiding repeated errors that lead to
increased costs.
4. Versatility: - The system can handle not only different kinds of fabric colouration but also
different sorts of pattern and textile density across a wide range of textile products, so that
the same system can be used for different product types.
5. Data Augmentation and Preprocessing: - Data augmentation techniques enhance the ability
of the model to generalise, which makes it more efficient at identifying defects in varied
samples that it has not encountered before. - Basic steps such as normalization and resizing
of the image data prepare the recorded input so that it is accurate and relevant for modelling,
increasing the effectiveness of the training process.
6. Reduction in Human Error: - Human factors, including errors, are reduced, since automated
defect detection does not carry biases and is consistent. Manual inspection, by contrast,
requires meticulous and conscious effort, so the automated approach yields more dependable
and accurate quality evaluation results.
Disadvantages
1. Data Dependency: - A key issue with this model, as with any model, is its reliance on the
quality and variety of the training data. Poor generalization and failures in detecting defects
can arise from small or biased datasets in real-world scenarios. - Retrieving a large collection
of fabrics with defects labelled and described is not always easy and can be time-consuming.
4. False Positives and Negatives: - It is possible for the model to produce false positives,
where defects are reported when none are present, and false negatives, where real defects
are missed. These errors can impact the optimization of production rates and the consistency
of product quality. - To reduce the gap between predicted and actual values, it is imperative
to monitor the data streams continuously and to retrain the model periodically to limit these
errors.
5. Adaptability to New Defects: - Although the model can incorporate new observations into
its decision making, it may not be effective in detecting new or infrequent types of defects
that were not present in the training data. Maintaining performance most likely requires
frequent updates and retraining with newly collected defect samples. - In addition, although
the model is a suitable framework for fabric defect analysis, applying it to different fabrics
and varied production conditions may require additional fine-tuning and validation.
6. Initial Setup Cost: - The capital outlay required at inception for the development and
implementation of the automated defect detection system can be substantial. This covers
expenses relating to data acquisition, model development, hardware, and software. - It may
be difficult for small-scale producers to absorb these fixed costs, spread over their production
volume, before seeing a clear return on investment.
4. ANALYSIS AND DESIGN
1. Image Acquisition: The process starts with capturing images of the fabric on the
surface of the substrate. This can be implemented with cameras or other imaging devices
depending on the requirements of the system.
3. Noise Reduction: Techniques are applied to clean up the noise from the dataset on
images, which aids in enhancing the reliability of developing a correct understanding of
the defects.
4. Normalization: Pixel intensities of the images are scaled to a standard scale to make
it consistent and to enhance the performance of the detection model in order to achieve
better accuracy.
5. Segmentation: In this step, the images are divided into areas of interest in order to find
regions that contain useful information of the fabric which in turn helps in emphasizing
and directing the detection model towards the region of the image that is of interest.
4.2. Model Architecture
Our model architecture is a deep Convolutional Neural Network (CNN) designed for image
classification tasks. Here is a detailed explanation of the model architecture:
32
Figure 4.2: Model Architecture
The model contains a series of convolutional layers, each followed by an activation function
(SiLU, the Sigmoid Linear Unit, in this case). Here is the breakdown: Conv2D Layers: The
convolutional layers use 32, 64, 128, 256, 512, and finally 1280 filters of size (3x3). The number
of filters increases as the model progresses deeper, allowing the network to learn more complex
features. Activation Layers: After each convolutional layer, a SiLU activation function is applied.
This introduces non-linearity into the model, which helps it to learn more complex patterns.
The model's structure alternates between Conv2D layers and Activation layers.
4.2.4. Global Average Pooling Layer
GlobalAveragePooling2D: This layer reduces each (4, 4, 512) feature map to a single 512-
dimensional vector by taking the average of all values in each feature map. This significantly
reduces the number of parameters and helps in preventing overfitting.
Dropout: This layer randomly sets a fraction of input units to 0 at each update during training
time, which helps prevent overfitting.
Dense: This is the final layer with 5 neurons, corresponding to the 5 classes in our
classification problem. This layer uses a softmax activation function (not explicitly shown here
but typically implied in classification tasks) to output a probability distribution over the 5
classes.
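A compact Keras sketch of the layer pattern described above is given below; the strides, the single repetition per filter size, and the 0.5 dropout rate are assumptions, since those details are not stated in the text:

```python
# Compact Keras sketch of the described layer pattern (filter counts taken from the text;
# strides, layer repetitions, and the 0.5 dropout rate are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes=5, input_shape=(416, 416, 3)):
    model = models.Sequential([layers.Input(shape=input_shape)])
    for filters in [32, 64, 128, 256, 512, 1280]:
        model.add(layers.Conv2D(filters, (3, 3), strides=2, padding="same"))
        model.add(layers.Activation("silu"))          # SiLU non-linearity after each Conv2D
    model.add(layers.GlobalAveragePooling2D())         # collapse each feature map to one value
    model.add(layers.Dropout(0.5))                     # regularisation against overfitting
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

model = build_model()
model.summary()
```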
4.2.7. Parameters
• The model has a large number of layers and parameters, making it capable of learning
highly complex patterns.
• The increasing number of filters as the depth of the network increases allows for
hierarchical feature learning.
• The use of GlobalAveragePooling reduces the spatial dimensions and thus the number of
parameters, while Dropout helps in regularization.
• The final dense layer outputs a probability distribution over the classes, making it suitable
for a classification task.
• Overall, this is a very deep and complex model, typical for tasks that require high levels
of feature extraction and pattern recognition, such as image classification.
4.3. Data flow diagram
• Start of Program - Initialize System: This step prepares the environment and settings needed for the system to function, including loading libraries, starting the required services, and applying any pending system updates.
• Image Acquisition: The system captures images of the fabric using various imaging devices. This step is crucial for obtaining the high-quality images needed to detect defects that may go unnoticed by the human eye.
• Is Image Acquisition Successful? - Decision Point: The system decides if the image
acquisition was successful. If yes, it proceeds to preprocessing; if no, the step is repeated
until a satisfactory image is captured.
• Preprocessing: This involves filtering to clean noise, adjusting contrast, resizing, and
normalizing the image to prepare it for further processing.
• Segmentation: The image is segmented to separate the background from the fabric and to subdivide the fabric into regions, aiding focused analysis.
• Feature Extraction: Relevant features are extracted from the pre-processed image, converting the picture information into a numerical form suitable for mathematical manipulation and further processing.
• Model Detection: Utilizes a deep convolutional neural network with 216 layers to
analyze the image and identify probable defects.
• Bounding Box Prediction: Following model detection, bounding boxes are drawn
around detected defects to show their extent and location.
• Non-Max Suppression: Reduces the overlap of bounding boxes, decreasing the number
of false positives and enhancing defect detection accuracy.
• Classification: Detected regions are categorized into types of defects (major, minor, or
condition-based), with each defect tagged according to its characteristics.
• Is Defect Detected? - Decision Point: The system determines whether defects are
present and, if found, displays their types and locations. If no defects are found, it moves
to the next image.
• Output Defect Type and Location: Outputs the type and location of detected defects
for user analysis or other uses.
• Continue to Next Image: The system proceeds to the next image, part of a continuous
monitoring process.
• End Program: Represents the program's conclusion, where the system may be turned off or set to idle, waiting for further commands. The overall control flow is sketched below.
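The control flow in the diagram can be condensed into a short Python-style sketch. The helper callables (preprocess, detect, nms) are placeholders passed in by the caller, not functions from the project's code base:

from typing import Callable, Iterable, List, Tuple

# (label, confidence, (x, y, w, h)) for one detected defect
Box = Tuple[str, float, Tuple[int, int, int, int]]

def run_inspection(images: Iterable,
                   preprocess: Callable,
                   detect: Callable[..., List[Box]],
                   nms: Callable[[List[Box]], List[Box]],
                   conf_threshold: float = 0.5) -> None:
    """Illustrative main loop mirroring the data flow diagram above."""
    for image in images:
        if image is None:
            continue                                  # acquisition failed; try the next frame
        candidates = detect(preprocess(image))        # preprocessing + CNN detection
        defects = [b for b in nms(candidates) if b[1] >= conf_threshold]
        if defects:
            for label, conf, box in defects:
                print(f"defect: {label} ({conf:.0%}) at {box}")   # output type and location
        # otherwise continue to the next image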
5. RESULTS AND DISCUSSIONS
We built a deep Convolutional Neural Network image classifier for the detection of fabric defects across five classes: good, holes, oil spot, thread error, and objects. The study used the TILDA dataset together with our own dataset, and after training for 100 epochs the model achieved encouraging results. Training converged to a low training error, with a training accuracy of approximately 95.8%; evaluated on validation data, the test/validation accuracy was 90.7%. This suggests the proposed framework is effective for identifying faulty fabric in practical, real-world scenarios. The hardware used for building this model was:
5.1. Results
The fabric defect detection model was tested thoroughly using a randomly selected set of fabric images, which were used to measure the accuracy of the classifier and to determine how effectively it can classify different kinds of defects. The testing process consisted of feeding the model images containing known defects so that every defect category was exercised.
In the testing phase, the model's predictions were compared with the actual defects visible in the fabric images. This comparison gave insight into how well the model identifies and classifies defects such as holes, stains, misweaves, and other imperfections seen in textile material.
These insights help identify the limitations of the model and guide future enhancements, so that it can be used as effectively as possible in textile quality control. A sketch of how a test image is fed to the classifier is given below.
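A single test image can be scored in the way sketched here, assuming the classifier from Section 4 was trained with Keras and saved to disk; the file name fabric_cnn.h5, the preprocess_fabric_image helper, and the alphabetical class order are illustrative assumptions, not the project's actual artefacts:

import numpy as np
import tensorflow as tf

# Assumed (alphabetical) class order; the project's real label ordering is not documented here.
CLASS_NAMES = ["good", "hole", "objects", "oil spot", "thread error"]

def classify_fabric(model_path: str, image: np.ndarray) -> dict:
    """Return class probabilities for one preprocessed fabric image."""
    model = tf.keras.models.load_model(model_path)
    probs = model.predict(image[np.newaxis, ...])[0]   # add batch dimension, run inference
    return {name: float(p) for name, p in zip(CLASS_NAMES, probs)}

# Example (hypothetical file names), producing percentages like the results listed below:
# scores = classify_fabric("fabric_cnn.h5", preprocess_fabric_image("sample.jpg"))
# print({k: f"{v:.0%}" for k, v in sorted(scores.items(), key=lambda kv: -kv[1])})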
-Result 1
- Detected Defects:
• Objects 81%
• Thread Error 7%
• Good 7%
• Hole 2%
-Result 2
- Detected Defects:
• Good 29%
• Hole 18%
• Objects 9%
• Oil Spot 1%
Figure 5.2: Test Sample 2
-Result 3
- Detected Defects:
• Hole 88%
• Good 6%
• Objects 4%
• Oil Spot 1%
• Thread Error 1%
Figure 5.3: Test Sample 3
-Result 4
- Detected Defects:
• Objects 53%
• Good 18%
• Hole 14%
• Oil Spot 5%
Figure 5.4: Test Sample 4
-Result 5
- Detected Defects:
• Objects 48%
• Good 29%
-Result 6
- Detected Defects:
• Good 33%
• Hole 23%
• Objects 19%
• Oil Spot 8%
-Result 7
- Detected Defects:
• Objects 41%
• Good 31%
• Hole 10%
• Oil Spot 9%
• Thread Error 9%
Figure 5.7: Test Sample 7
These trend curves represent the training loss of the fabric defect detection model over 100 epochs. The training loss starts at around 1.5 and decreases steadily, flattening out below about 0.25 by the final epochs. This steady decrease suggests that the model is learning to identify imperfections in fabrics by reducing the discrepancy between its predicted labels and the true labels. The consistent reduction in training loss is a positive indication that the model keeps improving and becomes more accurate over time.
This plot shows the training and validation loss of the fabric defect detection model over the 100 epochs. Similar to the training loss, the validation loss begins at about 1.4 in the first epoch and decreases steadily, flattening out at around 1.0. The validation loss is recorded at the end of each epoch. Its consistent decrease indicates that the model generalizes well to unseen data, meaning it can apply what it learned from the training set of fabric defects to new fabric samples.
This plot shows the training accuracy of the fabric defect detection model over the 100 epochs. Accuracy here is the ratio of correctly identified fabric defects to the total number of predictions made. The graph shows a steep rise in accuracy during the initial epochs, after which the curve gradually approaches its maximum: accuracy improves from roughly 50% in the early stages of training to over 90% in the later stages. This upward trend shows that the model is steadily developing its proficiency in identifying the correct defect type, with the high training accuracy suggesting good learning during the training phase. A short sketch for reproducing these loss and accuracy curves is given below.
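Assuming the model was trained with Keras and the History object returned by model.fit() was kept (the report does not show the training script, so this is only a sketch), the loss and accuracy curves described above can be reproduced as follows:

import matplotlib.pyplot as plt

def plot_history(history):
    """Plot training/validation loss and training accuracy over the epochs."""
    epochs = range(1, len(history.history["loss"]) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(epochs, history.history["loss"], label="training loss")
    ax1.plot(epochs, history.history["val_loss"], label="validation loss")
    ax1.set_xlabel("epoch"); ax1.set_ylabel("loss"); ax1.legend()
    ax2.plot(epochs, history.history["accuracy"], label="training accuracy")
    ax2.set_xlabel("epoch"); ax2.set_ylabel("accuracy"); ax2.legend()
    plt.tight_layout()
    plt.show()

# history = model.fit(train_ds, validation_data=val_ds, epochs=100)
# plot_history(history)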
The conceptual diagram above highlights the generic categories of defect types that our model is able to identify. The fabric industry encounters a wide variety of defects that differ in their nature and in the consequences they have for the quality of the final product. These include stains, holes, tears, irregular or improper sewing, overlocking and fraying edges, misweaves, mispicks, warp burls, water damage, and broken ends. Each defect type can substantially alter the fabric both visually and functionally, reducing its value by a considerable measure and causing financial losses if it is not found and dealt with as early as possible. The diagram draws attention to the model's ability to recognize these various defect types with high accuracy, as indicated by the red arrow in the figure, demonstrating the reliability and robustness required in a real industrial environment. Being able to cover so many varieties of defects is invaluable for consistently delivering high-quality fabric. This automated detection process is highly beneficial: it eliminates the need for manual inspection, which is tedious and prone to human error, while guaranteeing thorough coverage and providing a more orderly and therefore more accurate method of rating fabric quality.
5.3. Accuracy
The confusion matrix in the figure above represents the normalized performance of the fabric defect detection model. A confusion matrix is a performance measurement used to assess a classification model: it compares the predicted defect labels with the true labels, so one can identify which classes the model recognises successfully and where it gets confused. In this confusion matrix the true labels are indicated on the x-axis and the predicted labels are recorded along the y-axis. The percentage in each cell indicates how often the model made that prediction for the corresponding class. Cells on the diagonal are the correctly classified instances in the test set, while cells off the diagonal represent misclassifications. Looking at the matrix, the model achieves high accuracy in identifying the different types of fabric defects. For instance, "good" fabric is classified correctly 95% of the time, which shows that defect-free material is rarely reported as defective, and the diagonal entries for "holes" and "oil spots" likewise indicate that these classes are recognised reliably. Some misclassifications are still visible: "objects" are occasionally mistaken for "good" or "thread error", and "thread errors" are sometimes categorised as "oil spots". These amount to slight over-estimation and the occasional misclassification of small areas, which is not a major concern. Overall, the strong diagonal shows that the model does a good job of identifying and differentiating the different fabric defects. The colour intensity in the matrix represents the percentage of accurate predictions, with darker shades signifying higher values, which makes it easy to pinpoint the strengths of the model as well as the areas that still need development. To sum up, the confusion matrix offers valuable insight into the behaviour of the constructed model, emphasising its ability to detect most fabric defects while also revealing the areas that could benefit from additional focus and training. A sketch of how such a normalized confusion matrix can be computed is given below.
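A normalized confusion matrix like the one discussed above can be computed and plotted with scikit-learn; this is a sketch only, assuming arrays of true and predicted class indices are available and using the same assumed class order as before:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

CLASS_NAMES = ["good", "hole", "objects", "oil spot", "thread error"]  # assumed order

def plot_normalized_confusion_matrix(y_true: np.ndarray, y_pred: np.ndarray) -> None:
    """Row-normalized matrix: each cell is the fraction of a true class assigned to a prediction."""
    cm = confusion_matrix(y_true, y_pred,
                          labels=list(range(len(CLASS_NAMES))),
                          normalize="true")
    disp = ConfusionMatrixDisplay(cm, display_labels=CLASS_NAMES)
    disp.plot(cmap="Blues", values_format=".2f")   # darker cells correspond to higher values
    plt.show()

# Example with dummy label indices into CLASS_NAMES:
# plot_normalized_confusion_matrix(np.array([0, 1, 2, 0]), np.array([0, 1, 0, 0]))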
6. CONCLUSION
This project demonstrates a comprehensive and practical approach to automated fabric defect
detection using advanced deep learning techniques. By leveraging the YOLO v8 model, the
system achieves high accuracy and efficiency in identifying various types of fabric defects,
providing a robust framework for quality control in the textile industry.
Integration of Software Components
The core of this system is the YOLO v8 model, which is known for its real-time object detection capabilities. The model is trained to detect a wide array of fabric defects, including holes, stains, tears, and irregular weaves. The integration of this model with pre-processing and post-processing techniques ensures that the system can handle various types of fabrics and defect patterns effectively.
The system employs several image processing techniques to enhance the detection capabilities of the YOLO v8 model. These techniques include:
- Data Augmentation: Techniques such as flipping, rotation, scaling, and translation are applied to increase the diversity of the training data, which helps the model generalize better to unseen data.
- Image Normalization: Pixel values are normalized to a standard range (e.g., [0, 1]) to improve the numerical stability of the model during training.
- Image Resizing: Images are resized to match the input size required by the detection model (e.g., 416x416 pixels for YOLO v8).
- Model Training: The YOLO v8 model is trained on a comprehensive dataset that includes various types of fabric defects. The training process optimizes the model's parameters to minimize the detection error and improve accuracy, and hyperparameter tuning adjusts learning rates, batch sizes, and other settings to achieve the best performance.
- Bounding Box Prediction: The YOLO v8 model predicts bounding boxes for detected defects, which are then refined using techniques such as Non-Max Suppression to eliminate duplicate detections.
- Confidence Scoring: The model calculates a confidence score for each detected defect, ensuring that only high-confidence predictions are considered.
A minimal training and inference sketch using the Ultralytics package is shown after this list.
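The sketch below uses the Ultralytics package [4]; the dataset YAML name, checkpoint, image size, and confidence threshold are assumptions rather than the project's actual configuration:

from ultralytics import YOLO

# Train a YOLOv8 detector on a fabric-defect dataset described by a (hypothetical) YAML file.
model = YOLO("yolov8n.pt")                                    # start from a pretrained checkpoint
model.train(data="fabric_defects.yaml", epochs=100, imgsz=416)

# Inference on a new fabric image; non-max suppression is applied internally by the library.
results = model.predict("sample_fabric.jpg", conf=0.5)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())   # class id, confidence, box corners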
The performance of the fabric defect detection system is evaluated using various metrics,
including accuracy, precision, recall, F1-score, Intersection over Union (IoU), and mean
Average Precision (mAP). The system demonstrates high accuracy and reliability in detecting
fabric defects across different types of fabrics and defect patterns.
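Of these metrics, Intersection over Union (IoU) is the one specific to bounding boxes; a small self-contained sketch of how it can be computed for two boxes in (x1, y1, x2, y2) format is shown here:

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width/height of the overlap rectangle, clamped at zero when the boxes do not intersect.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: iou((0, 0, 10, 10), (5, 5, 15, 15)) == 25 / 175, roughly 0.14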
- Robust Evaluation Metrics: The use of multiple evaluation metrics ensures a thorough assessment of the system's performance, highlighting its reliability and effectiveness.
By leveraging the latest advancements in deep learning, this project provides a significant
contribution to the field of textile manufacturing, ensuring higher quality products and more
efficient production processes.
References
[1] J. Xiang, J. Wang, J. Zhou, S. Meng, R. Pan, and W. Gao, “Fabric defect detection
based on a deep convolutional neural network using a two-stage strategy,” Textile
Research Journal, vol. 91, no. 1-2, pp. 130–142, 06 2020. [Online]. Available:
https://doi.org/10.1177/0040517520935984
[2] M. R. Jyothi and M. Prasad, “Fabric default detection using cnn,” International Journal
for Research in Applied Science and Engineering Technology, vol. 10, no. 7, pp. 647–653,
07 2022. [Online]. Available: https://doi.org/10.22214/ijraset.2022.45356
[3] Y. Guo, X. Kang, J. Li, and Y. Yang, “Automatic fabric defect detection method using
ac-yolov5,” Electronics, vol. 12, no. 13, pp. 2950–2950, 07 2023. [Online]. Available:
https://doi.org/10.3390/electronics12132950
[4] G. Jocher, A. Chaurasia, and J. Qiu, “Ultralytics yolov8,” 2023. [Online]. Available:
https://github.com/ultralytics/ultralytics
[5] W. Ouyang, B. Xu, J. Hou, and X. Yuan, “Fabric defect detection using activation layer
embedded convolutional neural network,” IEEE Access, vol. 7, pp. 70 130–70 140, 01
2019. [Online]. Available: https://doi.org/10.1109/access.2019.2913620
[6] L. Song, R. Li, and S. Chen, “Fabric defect detection based on membership degree
of regions,” IEEE Access, vol. 8, pp. 48 752–48 760, 01 2020. [Online]. Available:
https://doi.org/10.1109/access.2020.2978900
[7] M. Li, S. Wan, Z. Deng, and Y. Wang, “Fabric defect detection based on saliency
histogram features,” Computational Intelligence, vol. 35, no. 3, pp. 517–534, 04 2019.
[Online]. Available: https://doi.org/10.1111/coin.12206
[9] J. Wang, J. Yang, G. Lu, C. Zhang, Z. Yu, and Y. Yang, “Adaptively fused attention
module for the fabric defect detection,” Advanced intelligent systems, vol. 5, no. 2, 01
2023. [Online]. Available: https://doi.org/10.1002/aisy.202200151
[10] P. Peng, Y. Wang, C. Hao, Z. Zhu, T. Liu, and W. Zhou, “Automatic fabric defect
detection method using pran-net,” Applied sciences, vol. 10, no. 23, pp. 8434–8434, 11
2020. [Online]. Available: https://doi.org/10.3390/app10238434
[11] M. H. Varjovi, M. F. Talu, and K. Hanbay, “Fabric defect detection using
customized deep convolutional neural network for circular knitting fabrics,” Türk
doğa ve fen dergisi, vol. 11, no. 3, pp. 160–165, 09 2022. [Online]. Available:
https://doi.org/10.46810/tdfd.1108264
[13] M. An, S. Wang, L. Zheng, and X. Liu, “Fabric defect detection using
deep learning: An improved Faster R-CNN approach,” 07 2020. [Online]. Available:
https://doi.org/10.1109/cvidl51233.2020.00-78
[14] X. He, L. Wu, F. Song, D. Jiang, and G. Zheng, “Research on fabric defect
detection based on deep fusion densenet-ssd network,” 05 2020. [Online]. Available:
https://doi.org/10.1145/3411201.3411701
[15] J. Zhao, Z. Shi, Q. Zheng, and M. Shunqi, “Fabric defect detection based on
transfer learning and improved faster r-cnn,” Journal of Engineered Fibers and Fabrics,
vol. 17, pp. 155 892 502 210 866–155 892 502 210 866, 01 2022. [Online]. Available:
https://doi.org/10.1177/15589250221086647
[16] H. İbrahim Çelik, L. C. Dülger, and M. Topalbekiroğlu, “Fabric defect detection using
linear filtering and morphological operations,” Indian Journal of Fibre Textile Research
(IJFTR), vol. 39, no. 3, pp. 254–259, 11 2014. [Online]. Available: http://nopr.niscair.res.
in/bitstream/123456789/29392/1/IJFTR%2039%283%29%20254-259.pdf
[17] Y. Huang and X. Zhong, “Rpdnet: Automatic fabric defect detection based on a
convolutional neural network and repeated pattern analysis,” Sensors, vol. 22, no. 16, pp.
6226–6226, 08 2022. [Online]. Available: https://doi.org/10.3390/s22166226
[18] Z. Liu, B. Tian, C. Li, X. Li, and Q. Wang, “A context-aware progressive attention
aggregation network for fabric defect detection,” Journal of Engineered Fibers and
Fabrics, vol. 18, pp. 155 892 502 311 746–155 892 502 311 746, 01 2023. [Online].
Available: https://doi.org/10.1177/15589250231174612
Sensing and Imaging, vol. 23, no. 1, 12 2021. [Online]. Available: https:
//doi.org/10.1007/s11220-021-00370-2
[21] M. Yuan, J. Gu, W. Xu, and Y. Zhao, “Deep learning-based defect detection method for
textile materials,” Journal of physics, vol. 2450, no. 1, pp. 012 074–012 074, 03 2023.
[Online]. Available: https://doi.org/10.1088/1742-6596/2450/1/012074
[22] L. Li, Q. Li, Z. Liu, and X. Lin, “Effective fabric defect detection model for
high-resolution images,” Applied sciences, vol. 13, no. 18, pp. 10 500–10 500, 09 2023.
[Online]. Available: https://doi.org/10.3390/app131810500
[23] X. Long, B. Fang, Y. Zhang, G. Luo, and F. Sun, “Fabric defect detection using
tactile information,” 05 2021. [Online]. Available: https://doi.org/10.1109/icra48506.
2021.9561092
[24] S. Zhao, Y. Li, J. Zhang, J. Wang, and R. Y. Zhong, “Real-time fabric defect
detection based on multi-scale convolutional neural network,” IET collaborative
intelligent manufacturing, vol. 2, no. 4, pp. 189–196, 12 2020. [Online]. Available:
https://doi.org/10.1049/iet-cim.2020.0062
[25] G. Zheng, “Fabric defect detection method based on image distance difference,”
Journal of Hebei University of Science and Technology, 01 2006. [Online]. Available:
http://en.cnki.com.cn/Article en/CJFDTOTAL-HBQJ200603016.htm
[26] H. Cheng, J. Liang, and H. Liu, “Jump connection generative adversarial network
for fabric defect detection,” Research Square (Research Square), 04 2023. [Online].
Available: https://doi.org/10.21203/rs.3.rs-2756284/v1
[28] Z. Zhao, K. Gui, and W. Pei-mao, “Fabric defect detection based on cascade faster r-cnn,”
Proceedings of the 4th International Conference on Computer Science and Application
Engineering, 10 2020. [Online]. Available: https://doi.org/10.1145/3424978.3425080
[29] Z. Peng, X. Gong, B. Wei, X. Xu, and S. Meng, “Automatic unsupervised fabric defect
detection based on self-feature comparison,” Electronics, vol. 10, no. 21, pp. 2652–2652,
10 2021. [Online]. Available: https://doi.org/10.3390/electronics10212652
[30] S. Zhao, G. Li, M. Zhou, and M. Li, “Ica-net: Industrial defect detection network based
on convolutional attention guidance and aggregation of multiscale features,” Engineering
Applications of Artificial Intelligence, vol. 126, pp. 107 134–107 134, 11 2023. [Online].
Available: https://doi.org/10.1016/j.engappai.2023.107134
[31] H. Zhao and T. Zhang, “Fabric surface defect detection using se-ssdnet,” Symmetry,
vol. 14, no. 11, pp. 2373–2373, 11 2022. [Online]. Available: https://doi.org/10.3390/
sym14112373