
MODULAR DATA ANALYTICS AS A TOOL FOR CITIZEN DATA SCIENTISTS IN QUALITY MANAGEMENT

Sebastian Pähler
Miele & Cie. KG
Gütersloh, Germany
[email protected]

Marius Syberg
Institute of Production Systems, Technical University Dortmund
Dortmund, Germany
[email protected]

Jochen Deuse
Centre for Advanced Manufacturing, University of Technology Sydney
Sydney, Australia
[email protected]

Abstract: The vision of Industrial Data Science to link data along the entire value chain with data from the product life cycle holds great potential for holistic quality improvements. To exploit this growing potential, suitable tools and evaluations as well as the qualification of quality management experts in the field of Industrial Data Science are essential. A promising approach is the generalisation and modularisation of data analysis processes, which can then be reused in Quality Management for similar analytics tasks on different data sources in this discipline. With an appropriate integration into the company's processes and easy accessibility of the generalised analytics modules, this approach promises to reduce the complexity and difficulty of data analysis. This paper presents a total of seven exemplary analysis modules that quality management experts can use to handle, visualise and analyse new data in Field Quality Surveillance. These are provided as part of a research project via a browser-based platform that enables users to perform data analyses without additional software or programming knowledge. The analysis modules are a central building block for supporting Quality Management experts in acting as Citizen Data Scientists. In addition, skills training and general change management for organisational restructuring must go hand in hand with the use of the modules in order to achieve valid analytics results quickly.

Keywords: Quality Management, Field Quality Surveillance, Industrial Data Science, Data Analytics

I. INTRODUCTION
The volatility of today's markets poses major challenges for companies that consider quality to be their brand essence. With an increasing amount of data available, organisations are well advised to rely on advanced tools and techniques to extract valuable insights for quality optimisation. However, the necessary knowledge and expertise to use those advanced tools and techniques are not always readily available. Additionally, while dedicated data scientists bring valuable technical skills, they may not possess the process understanding that is critical for improving quality. Quality is crucial throughout the entire product life cycle. It is essential for the effectiveness and efficiency of a company's processes and supports continuous improvement in areas such as product development, production and customer service. Quality Management (QM) acts as a "service provider" to ensure the overall quality of products and processes. The integration of data science and artificial intelligence has the potential to significantly improve quality. The concept of the "Citizen Data Scientist" (CDS) has emerged as a promising way for organisations to empower their employees to gain insights from data on a broad basis without the bottleneck of limited expert data scientist support. CDS in QM can support every employee in making data-driven decisions with regard to good quality. With the right tools and role definitions, organisations can leverage this concept to drive quality improvements and create a more data-driven culture.
Within the framework of the AKKORD research project, this question was explored in a dedicated use case. As the project's purpose was the creation of a reference platform for knowledge transfer, execution and business development of Industrial Data Science solutions on a modular basis along the complete value chain and product life cycle, this use case contributed data from the area of Field Quality Surveillance (FQS). The data from this area allows direct and indirect conclusions to be drawn about the quality experienced by the customer during usage. For the aspects of knowledge transfer and execution of developed solutions, the AKKORD project realises a platform approach that allows users to build up competence in the field of Industrial Data Science (IDS) [1–3] and to execute collaborative, modular data analysis. Based on the data from FQS, analysis modules were created to analyse key figures of this field and were shared with other users on the platform. Meanwhile, the experiences gained while developing these modules contributed to the design of the competence development part of the platform. Based on these experiences, the platform solution is a promising tool for further qualifying QM experts and employees towards the role of CDS. This paper presents the results of the AKKORD project that were developed within the scope of a QM use case with the support of the data science tool RAPIDMINER. To this end, the state of the literature on the role of data science in QM as well as on the concept of the CDS is discussed first. Then, the methodological approach is described, in which the general approach of the AKKORD project is also outlined. The results are then presented and the industrial application is described. An outlook on further transfer measures and future applications completes this article.



II. LITERATURE REVIEW
The following section provides an overview of methods and relevant solutions from the field of data science in QM as well as of the idea of the CDS itself. Since a common definition of the term FQS is currently lacking, it will briefly be characterised. As Zhang observes, there is a gap in the literature because many companies use their own methods of investigating field quality without publishing them [4]. In this context, FQS refers to the surveillance or monitoring of consumer products, which the manufacturing companies primarily conduct in their customer service and QM departments. It is often used interchangeably with 'field data management' and the terms mentioned subsequently [5]. The term FQS is connected to product monitoring obligations arising from product liability but goes beyond this task. In the business context the term is mainly shaped by the automotive industry [6] and by medical device manufacturers [7, 8], where it is also called field surveillance, product or market surveillance, or post-market surveillance. The task is an analysis of data reflecting complaints and possible faults of products during the customer's usage phase. Further, it should outline the real usage as well as misuse of the products by customers. In summary, it is the data that should help to measure the overall product quality in the field [9].
A. Special Role of Data Science and Quality Management
In modern manufacturing companies and organisations, QM plays a unique role by connecting various departments. Its field of activities ranges from supporting the management of these organisations through the scope of continuous improvement of processes and products up to the development and monitoring of product quality, internally and externally, within preventive and reactive quality assurance tasks [10]. This means that QM acts as a management tool for the whole organisation and serves as a technical controller. It has to connect management and methodical/analytical competencies with measuring tasks [5]. During product development, it leads preventive quality measures such as risk analyses, feasibility studies, design reviews and the testing of prototyped products to ensure their performance [11]. Further, in production, QM is responsible for monitoring and controlling product characteristics and ensuring that they meet the required specifications. This includes tasks such as conducting product inspections, testing and verifying product performance, and ensuring that production processes produce output within the defined tolerances [5]. Beyond this, it uses data from the service or marketing departments to measure and assure product quality externally, from the customer's point of view. In cases of deviations in product properties or product quality, it is necessary to compile the relevant information quickly in order to reach fast managerial decisions and actions for the correction or improvement of the product or process [5]. Additionally, Vasiliev et al. [12] state that the currently isolated analyses of various statistical data need to be developed further and integrated into an integrated management system. As Industry 4.0 progresses, data science approaches have become essential for decision-making within the manufacturing environment [2, 13]. This underscores the need for deep integration into business processes and for empowering the individuals who operate and manage those processes.
In the context of the described QM tasks, the focus of this paper and the AKKORD project is on the possibilities to support the integration of data science approaches in a flexible way along the different steps of the quality backward chain described by Beaujean and Schmitt [14]. One exemplary concept for this kind of integration in a QM context has been developed in research by Schafer et al. [15]. They integrated the data science approach CRISP-DM [16] with existing tools and QM methods, particularly within the realm of industrial production and the DMAIC methodology. Beyond this production-centric focus, it is essential to evolve data analysis to account for data from the product usage phase. Within this context, QM frequently taps into data from after-sales service processes as a vital information source. Furthermore, QM tries to integrate data from the usage of connected Internet of Things (IoT) products [17] into that analysis. All of these data give several indications regarding customer satisfaction, use, misuse and faults of the appliances [18]. In consequence, this supports the major aim of FQS, which lies in the detection, measurement and analysis of product deviations in the field. This data can originate from various sources. Since the primary reason for recording this data is not QM analysis, it has to be integrated and prepared before any analysis. The data are often incomplete and messy and need to be pre-processed by careful cleaning as well as an interpretation of the meanings of typical values in the data sets [19]. This kind of evaluation of product quality is increasingly supported by methods of Data Mining [20]. In conclusion, for FQS, the synergy between data science projects using the Cross Industry Standard Process for Data Mining (CRISP-DM) [16] and the traditional DMAIC approach of Six Sigma seems suitable as a basis for the concept of a CDS in QM; it supports QM methodologically in making new data sources accessible for analysis, as in the examples of this paper.
B. The Concept and Role of a Citizen Data Scientist
The use of data science in general has become increasingly popular in recent years, as organisations have recognised the
potential benefits of using data to inform decision-making and gain a competitive advantage [21]. However, despite its growing
popularity, the adoption of data science by organisations remains a complex and challenging process [22]. Several factors
contribute to this difficulty, including the complexity of data science techniques, the need for specialised expertise, the
availability and quality of data and the need for cultural change within organisations [3]. These challenges can make it difficult
for organisations to effectively use data science and realise its potential benefits, highlighting the importance of addressing
these challenges in order to fully realise the potential of data science in business.
A CDS is a person who creates or generates models that use advanced diagnostic analytics or predictive and prescriptive
capabilities, but whose primary job function is outside the field of statistics and analytics. They are usually employees from a
different department, but with the necessary skills and understanding to use data analytics tools to solve business problems and
support decision-making [23]. They often connect the business problem with a data analytics use case and build a link between
the two worlds [24, 25].
To become a CDS, employees need to have a basic understanding of the use case, be able to define the business goals, and make
connections to the relevant domain [26]. This understanding can be acquired through short training sessions that are designed
to be immediately applicable, but it also requires a fundamental understanding of analytics and statistics [27]. While CDS do
not necessarily need to be able to program on their own, they should have a basic understanding of the data analytics process.
CDS are being utilised in a variety of industries, including finance, healthcare, retail, and manufacturing [28]. In finance, they
can be used to analyse large amounts of data to make informed investment decisions, identify market trends, and develop
predictive models. In healthcare, they are analysing patient data to improve the understanding of illnesses or inform treatment
decisions, e.g. in the development of applications for tracking pandemics [29]. In retail, the use of citizen data science could
improve the analysis of sales data in order to improve and accelerate marketing strategies [30]. In manufacturing, they are being used to
analyse production data to improve operational efficiency and identify opportunities for process improvement [31].
An individual working in modern QM possesses several attributes and skills that are conducive to becoming a CDS according to the state of the art. Firstly, they possess a strong understanding of the data analysis process, including data collection, cleaning, and validation techniques [28]. Today, QM experts already work on some of the same questions with some of the same data, but use different tools that are sometimes no longer sufficient. This provides a solid foundation for them to build upon as they learn new
techniques and tools in data analysis [32]. Furthermore, they may have a background in statistical analysis, which is essential
for making informed decisions based on data. Finally, they may have experience working with cross-functional teams,
collaborating with others to solve problems and make data-driven decisions [33]. These skills, combined with a strong desire
to learn and a willingness to continuously improve, make them well-suited to become CDS.
In addition to the skills mentioned, becoming a CDS requires access to resources that enable easy entry into data analytics. This
can be supported by different approaches, two of them are the provision of tools for executing data analyses and the availability
of content for individuals to upskill themselves [3, 34]. For the first one, low-code tools can help CDS to build and deploy data
analysis projects without requiring extensive coding knowledge [35]. Such tools enable the creation of data pipelines, automated
data processing, and data visualisation without the need for complex coding. For the second one, there are many sources of
content for competence development available, including online courses, tutorials, and community forums that can provide a
foundation in statistics, machine learning, and data visualisation [36]. These resources can equip individuals with the necessary
skills to engage in data analysis projects and create value within their organisation. Overall, the availability of low-code tools
and competence development resources can empower CDS to take on data analysis projects and provide insights that drive
business outcomes. It is important to consider that the use of tools and the development of competences will usually take place
on the job. This explains, according to current literature, the focus on self-directed learning [37]. Self-directed learning allows
workers to take control of their learning and development by identifying areas where they need to improve and seeking out
opportunities to acquire new knowledge and skills. In addition, the requirements of the industrial environment and domain
specificity must be considered.
From the domain perspective of QM, the focus lies on identifying trends, making informed decisions and driving continuous quality improvement. To do this efficiently, it is important to interpret results and extract knowledge from the data to gain insights and make meaningful changes. Choosing the right visualisation is a key aspect of effective data interpretation. A visual approach enables the interpretation of data analytical results without prior mathematical and statistical knowledge [38]. Large amounts of data can be captured with the help of suitable visualisations and surveyed in a first step. This increases the transparency and comprehensibility of decisions [39]. Mazarov et al. [40] recommend that visualisations be used in a context-specific and goal-oriented way. Thus, goals, contents and communication needs define the choice of visualisation in the individual phases of CRISP-DM. By choosing the appropriate visualisation, QM professionals can quickly identify patterns and anomalies in the data, leading to improved communication about product quality issues as well as more informed decisions.
Depending on the analysis problem to be solved, other skills and tools may need to be considered.

III. USING MODULAR DATA SCIENCE IN QUALITY MANAGEMENT


With respect to the AKKORD research project in general, a concept and platform have been developed that allow users to build competence in IDS and to perform collaborative, modular data analysis. As the focus of this paper is an approach to develop QM staff towards becoming CDS, this specific QM approach was developed in parallel to that generic solution in AKKORD, with mutual benefit. The generic solution is realised as a publicly accessible 'Learning and Collaboration Platform' as part of the AKKORD reference toolkit. Within this, the 'AI Toolbox' enables modular data analysis, where users generate data analytics pipelines and provide them to other users for collaboration. Furthermore, collaborative skills development is implemented in the 'Work & Learn Platform' [41] to ensure the skill enhancement of interested employees towards the desired roles in the data analysis process [42]. There is a direct link between the two elements. From the specific QM perspective, the use case actively participated in the development of the general platform approach. At the same time, specific analysis modules, especially for time series, were developed and made available on the platform, and content in the learning area was actively shaped in order to qualify QM employees as CDS. Figure 1 outlines how this specific QM approach works. In a general problem-solving process, the FQS needs data for quality monitoring and deviation detection. These initial steps of the DMAIC methodology can be supported by the data from Error Codes of IoT Appliances and Spare Parts Evaluation as sources of information for this use case. To develop the right analysis solutions for the steps of Define, Measure, Analyse and Control in the 'AI Toolbox', knowledge about IDS processes is fundamental and is provided by the 'Work & Learn Platform'.
[Figure 1: FQS data sources (complaints data, error parts analysis, test panels, Error Codes of IoT appliances, spare parts evaluation) feed the AKKORD 'AI Toolbox' (visualisation, forecast, KPI and clustering modules), which supports quality monitoring and deviation detection within the problem-solving process and the Industrial Data Science process.]
Figure 1 AKKORD ‘AI Toolbox’ as an enabler for using Data Science in Field Quality Surveillance

This paper presents the analysis modules that were deployed on the platform during the project with the tool RAPIDMINER. These have been developed with data from laundry care in a way that creates direct added value for the MIELE company. They are deliberately designed to be pragmatic and applicable at a low threshold, without expert programming knowledge. The modules make use of FQS data, which provides direct or indirect insights into the customer's quality experience during the usage phase. Specifically, the paper explores two application areas: Sales of Spare Parts and Error Codes from connected IoT Appliances. These data sets were chosen to explore their potential as additional sources of information for FQS. Furthermore, these data sets are used completely anonymised. The analysis modules developed based on both data sets (see Figure 2) are presented in the next sections. Both sections start with a discussion of the problem, followed by a description of the developed analysis modules as well as the generated value for MIELE and the AKKORD project.
[Figure 2: the seven modules are grouped into Monitoring & Observation (Trigger with Alert Level, QM Error Code Overview), Grouping & Clustering (Spare Parts Heatmap, QM Error Code Heatmap, QM Clustering) and Time Series Analysis & Forecasting (Component Analysis, QM Forecast), based on the Spare Parts Evaluation and Error Codes of IoT Appliances data sets.]
Figure 2 Overview of the analysis modules developed in FQS for application in different contexts

In order to achieve mutual value creation for QM at MIELE as well as for the public AKKORD project, both application areas have been considered with two ambitions. On the one hand, the modules need to be applicable for reliable company-internal use, and on the other hand, they need to be easily applicable for generic, public use. In particular for the second ambition, the FURPS model, which Syberg et al. [41] use for IDS-specific requirements for a collaboration platform, can be followed and adapted to the requirements for modules in those two use cases. The focus is on functionality and usability; reusability and application design are particularly important (see Figure 3). Reliability and performance, which play a more decisive role in the roll-out in the company, are less of a focus in the development of the modules [43].

Figure 3 Requirements for the development of generalised data analysis modules for Error Codes of IoT Appliances according to the FURPS model [43]

A. Use case of Spare Parts Evaluation


In terms of FQS, the data of spare parts sales is a possible source of information. As it can be assumed that most of these sales are caused by a technical issue of an appliance, this data is assumed to be a useful indicator for possibly unknown quality issues in the field, especially in a global perspective across different countries. Additionally, this data can highlight issues emerging post-warranty, which is particularly vital for companies producing long-lasting products. Since the spare part sales data originates from its specific sales process, it requires transformation, preparation, and anonymisation for FQS applicability. This necessitates the expertise of a QM-CDS familiar with internal business processes. Integrating service and QM master data with logistics and distribution data is crucial. The data should be tailored to specific QM concerns. The objective is to generate a comprehensive view of spare part metrics for over 10,000 distinct parts across more than 10 appliance categories sold in over 40 countries, pinpointing potential quality issues in the field.

1) Problem statement and Business Goals:


A concept is needed that enables a business expert working in the FQS to identify the most relevant spare parts for one group of appliances. Based on the described variety of spare parts, in addition to the spare parts sold most, the expert has to identify those spare parts that show unexpected deviation from the expected values. The data must be prepared and transformed such that a business expert can discern insights while being aware of its limitations and potential issues. Spare parts sales data can be inconsistent, occasionally incomplete, irregularly structured and, at times, intermittent [44]. Consequently, the extracted information necessitates rigorous validation given the inherent irregularities in the data [44, 45]. Furthermore, analysing spare parts sales data from the described perspective as a support for FQS is a new kind of analysis. The available literature primarily shows an analysis of this data in the context of resource planning [46]. Additional use cases aim to predict sales numbers using time series analysis [44, 47]. However, each of these cases acknowledges the limited informational value due to the unpredictable and irregular nature of spare parts sales data.
For this use case, a short description and characterisation of the different kinds of necessary and useful master data as well as sales data will be given. The sales data consists of a monthly accumulation of the sales of one material number of the different spare parts per country as the location of sale. This data also contains a hierarchy description of the material. This means the kind of product every part is sold for can be identified. Because some parts possibly fit different groups of appliances, the process and product knowledge of the QM-CDS is needed to correctly interpret this data. Parts of this data are also self-explanatory, so helpful information can already be extracted. Further, the different material numbers are named with standardised text modules, which allows an aggregation of similar parts based on these naming modules.
2) Analysis Modules:
As the major target is to support business analysts in QM in getting an overview of spare part sales in general and identifying spare parts whose trajectory of sales figures deviates most obviously from what business experts would expect (in the following called deviating sales figures), a well-chosen visualisation of the data should be the first method of analysis [48]. Furthermore, the mentioned research on spare part sales data analysis has shown different approaches to time series analysis that could also fit this use case. Three major methods turned out to fit well for this use case and are also transferable to comparable analyses and use cases. They go beyond the bar, line, pie and statistical charts typically used in the FQS. In consequence, they have been adjusted as modules for more generic use in the AKKORD framework.
a) Heatmap:
In particular for creating overviews of sales data, a concept is needed that shows the behaviour of sales figures for one product similar to a line chart but also provides the ability to easily structure, divide and compare the figures of single products. In this use case, that kind of structure and division should be created, for example, for groups of spare parts of one product group. For this kind of visualisation of time series data, Hashimoto and Matsushita [49] suggest heatmaps and streamgraphs that go beyond line and bar charts. These provide the ability to present changes in stacked time series for several attributes, with changes scaled by area, colour or colour intensity on a relative basis such as the mean value of the shown time period. Interpreting the given sales data as a stack of time series from groups of spare parts, the heatmap is a useful visualisation that helps the business experts. Therefore, this heatmap visualisation was implemented as a module that enables QM users to create an overview and visual analysis of stacked time series with hierarchical description and group assignment. Below, an example is shown for groups of spare parts from a sample data set. In this heatmap, one row represents the pattern of sales figures of one group of spare parts based on a moving average of that time series. The change of colour from green via white to red indicates sales rising above the moving average of this group of spare parts over time.

Figure 4 Spare Part Sales Heatmap (schematic)
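As an illustration of the module's logic, the following minimal sketch builds such a heatmap in Python with pandas and matplotlib. The column names (part_group, month, sales), the synthetic data and the 12-month moving-average baseline are illustrative assumptions, not the project's RAPIDMINER implementation.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative long-format sales data: one row per spare part group and month.
rng = np.random.default_rng(0)
months = pd.period_range("2020-01", periods=36, freq="M")
groups = [f"Group {c}" for c in "ABCDE"]
df = pd.DataFrame(
    [(g, m, rng.poisson(100) + (40 if g == "Group C" and m.year == 2022 else 0))
     for g in groups for m in months],
    columns=["part_group", "month", "sales"],
)

# Pivot to a matrix: rows = spare part groups, columns = months.
mat = df.pivot(index="part_group", columns="month", values="sales")

# Express each month relative to the group's moving average, so colour
# encodes deviation from the expected level (green below, red above).
baseline = mat.T.rolling(12, min_periods=6).mean().T
rel = (mat - baseline) / baseline

fig, ax = plt.subplots(figsize=(12, 3))
im = ax.imshow(rel.to_numpy(), cmap="RdYlGn_r", vmin=-0.5, vmax=0.5, aspect="auto")
ax.set_yticks(range(len(rel.index)), rel.index)
ax.set_xlabel("month")
fig.colorbar(im, label="sales relative to moving average")
plt.show()
```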

b) Trigger with alert level:


For the automatic notification of QM staff about spare parts with deviating sales figures, a trigger with alert levels was created. For this purpose, a forecast value of the sales figures is calculated for the current month and for each of the three months before. These forecast values are then compared to the true values of realised sales, so the comparison covers a horizon of four months. If the realised sales figures are higher than the forecast in all four months, the highest alarm level is raised. In consequence, the business experts get these spare parts pointed out as deviating. Based on this hint, further analysis should be carried out in the FQS business context regarding known Error Code modes or similar indications from other data sources connected to this spare part. The classification is shown below.
Table 1 Alarm levels for spare parts

Alarm Level | Definition
4 | Strongly deviating sales figures: in the time frame of the last four months, the real sales figures of every month were higher than the forecasted value
3 | Deviating sales figures: in the time frame of the last four months, the real sales figures of three months were higher than the forecasted value
2 | In the time frame of the last four months, the real sales figures of two months were higher than the forecasted value
1 | In the time frame of the last four months, the real sales figures of one month were higher than the forecasted value
0 | In the time frame of the last four months, the real sales figures of no month were higher than the forecasted value, or a forecast value could not be calculated
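The classification logic of Table 1 reduces to counting exceedances over the four-month horizon. A minimal sketch, assuming realised and forecasted sales are already available as aligned sequences (a hypothetical helper, not the project's RAPIDMINER process):

```python
def alarm_level(actual, forecast):
    """Alarm level 0-4 per Table 1: the number of months in the last
    four-month horizon whose realised sales exceeded the forecast.
    Returns 0 when no forecast could be calculated."""
    if forecast is None or len(forecast) < 4:
        return 0
    return sum(a > f for a, f in zip(actual[-4:], forecast[-4:]))

# Example: realised sales beat the forecast in three of the last four months.
print(alarm_level([120, 95, 130, 140], [100, 100, 110, 115]))  # -> 3
```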

For the calculation of forecast values for the sales figures, the methods Auto-Regressive Integrated Moving Average (ARIMA) and Support Vector Machine (SVM) were used, with the given sales figures as historical training data. ARIMA was chosen as a commonly used method for time series analysis, also used in a comparable spare parts evaluation by Wang et al. [44, 50], and represents solution one. The second approach, an SVM, was chosen as it has already been tested successfully in the context of spare parts analysis in other use cases [44, 47, 51] and represents solution two. The calculation was done with the RAPIDMINER data analytics software. For the ARIMA model, a window of time series data of the sales figures for one group of spare parts is used as the input. The model outputs a forecast value for the first month after the window, which is then compared to the actual value of that month to generate the trigger. Within RAPIDMINER, for the given data set, the time series needs at least a period of 20 months of historical sales data to calculate forecast values with the ARIMA method. For the SVM, a window of time series data of the sales figures for one group of spare parts is likewise used as the basic input. Additionally, for each data point, the name of the month, the number of countries in which the spare parts were sold in that month, and the total sales of spare parts for the affected product group were added to the input table, making this information available for the training of the SVM. As an output, again a forecast value for the first month after the window was calculated and compared with the actual values.
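A minimal sketch of solution one's windowed one-step forecast in Python with statsmodels; the project itself used RAPIDMINER, and the ARIMA order shown here is illustrative, while the 20-month minimum window mirrors the description above:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def one_step_forecast(sales: pd.Series, min_history: int = 20):
    """Fit ARIMA on the historical window and forecast the next month.
    Returns None when the history is too short, which maps to alarm
    level 0 ('a forecast value could not be calculated')."""
    if len(sales) < min_history:
        return None
    model = ARIMA(sales, order=(1, 1, 1))  # illustrative order
    return float(model.fit().forecast(steps=1).iloc[0])

def last_four_forecasts(sales: pd.Series):
    # Forecast each of the last four months from the data before it,
    # so realised values can be compared against them by the trigger.
    return [one_step_forecast(sales.iloc[:-k]) for k in range(4, 0, -1)]
```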
A comparison of the performance of both solutions on the given data set was carried out, with the relative error chosen as the performance measure. Using default parameter values for these methods in RAPIDMINER, the SVM was the method with the smaller relative error. Furthermore, the SVM can work with fewer than 20 months of historical sales figures, which is an advantage because the data set often contains spare parts with only 10 to 20 months of historical data. For the given business goals, a data set covering at least 12 months of sales data is recommended when using the SVM. A further advantage of the SVM method in this use case is that it offers the ability to integrate additional data sources into the training of the model, increasing the accuracy of the prognosis, whereas the ARIMA model can only work with the historical data of the time series. In consequence, solution two with the SVM was deployed for industrial use at MIELE. However, for transferring both methods from this use case to more general use in the context of AKKORD, solution one was chosen for deployment as a generally usable module on the 'AI-Toolbox' of the AKKORD platform because of the less complex data preparation of the ARIMA model. According to the mentioned FURPS model, this module has higher functionality and usability in terms of generic use. This means that both variants of the trigger with alert level have been developed for further use: the analysis based on ARIMA was transformed into a more generic module available for other use cases with this kind of time series analysis, and the analysis based on the SVM was developed further for regular internal use at MIELE for this specific use case of spare parts sales evaluation. Below, a sample from the internally developed trigger with alarm levels for groups of spare parts from one product group is shown. The left column shows the name of the group of spare parts, the middle column shows the calculated alert level and the right column provides additional information about the mean total sales of this group of spare parts per month.

Figure 5 Sample report of triggers with alarm levels and mean sales figures (language is German for a pilot with German users; spare part names hidden due to data privacy)

c) Component analysis (Seasonality, Trend, Random):


For a deeper analysis of special spare parts or groups of spare parts that an analyst might have identified via the trigger, a component analysis has turned out to be helpful. As Altay et al. [52], building on Croston's research [53], have stated, spare parts sales data is often lumpy or intermittent; especially for parts with these characteristics, an analysis of seasonal and random effects in the sales figures is helpful. Therefore, an additive component analysis for time series based on moving averages was chosen and processed for a visual analysis. This analysis method is a basic, built-in method in RAPIDMINER and divides the given time series data into a seasonal, a trend and a random component. Processed into a line chart visualisation, this analysis supports QM analysts in identifying whether a suspicious appearance in the heatmap or trigger is based on an intermittent or seasonal effect or on a trend in the sales figures. It also helps the analysts to apply their experience-based business knowledge, for example on the intermittent sales behaviour of some special spare parts such as electronics. As this type of time series analysis is also suitable for other use cases, it is part of the AKKORD 'AI Toolbox' in the form of a general module. The example below shows in red the real sales figures of one group of spare parts and in green the seasonal component of these sales figures; the black line represents the trend component and the grey one the random component. The example shows how this group of spare parts is sold with seasonal fluctuations but also has a rising trend that, towards the end, rises more strongly than in the months before.
Figure 6 Spare Part Component Analysis
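An equivalent additive, moving-average-based decomposition is available as a standard routine outside RAPIDMINER as well; a minimal sketch with statsmodels, where the synthetic monthly series and the 12-month period are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Illustrative monthly sales series with trend, seasonality and noise.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(1)
sales = pd.Series(
    100 + 0.8 * np.arange(48)                      # trend component
    + 15 * np.sin(2 * np.pi * np.arange(48) / 12)  # seasonal component
    + rng.normal(0, 5, 48),                        # random component
    index=idx,
)

# Additive decomposition based on moving averages, as in the module.
result = seasonal_decompose(sales, model="additive", period=12)
result.plot()  # observed, trend, seasonal and residual panels
plt.show()
```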

3) Overview with a dashboard and evaluation:


In the context of the needed integration into the company's business processes (see section II.A and [12]), the described analysis modules have to be integrated into the company's QM reporting systems; the analytics modules on the 'AI-Toolbox' are more appropriate for analytics in isolated use cases. The spare parts evaluation turned out to be an additional and regular source of information for the FQS at MIELE. Therefore, the described modules were integrated into a dashboard that supports the business experts in their regular analysis. An example is shown below.

Figure 7 Dashboard

An evaluation of this analysis of spare parts sales figures in expert interviews revealed that it is an additional helpful tool for business and analytics experts in after-sales service and the QM departments who work in the context of FQS. Especially the direct contextualisation of a triggered alert for deviating sales figures with a visual presentation of the sales figures was evaluated as helpful. The reason is that the information from the trigger can be evaluated immediately within a further visual analysis, drawing on the experts' experience-based knowledge about the sales behaviour of specific spare parts. This evaluation has also shown that the tool is primarily addressed to experts and not to the managers and executive level in the company.
B. Use case of Error Codes of IoT Appliances
The increasing adoption of connected MIELE appliances opens up the possibility to gain insights into the causes and patterns of appliance Error Codes, for example in washing machines. The AKKORD research project explores the possibilities to help experts understand the factors that contribute to device Error Codes, as well as to define, measure and analyse potential areas for improvement in terms of device design and maintenance.
Besides finding new insights within the Error Codes of IoT appliances themselves, the use case focuses on the development of general analysis modules that can be used by other companies or experts working with data of similar characteristics. The goal is to provide a scalable and flexible solution for analysing anonymised connected appliance Error Code data. The modules are designed to be adaptable to various types of data, allowing for broader applications across the industry. Further, they are tested and validated on real-world data from different product groups of the company to ensure their accuracy and reliability. The outcome of this research is a set of tools that other companies can use to monitor and better understand the causes of connected device Error Codes and similar data.
1) Problem Statement and Business Goals
The Error Codes of IoT appliances are special for various reasons. An Error Code is not necessarily an actual failure of the device, but rather a message that a device sends out about its status during an activity; for example, it can also provide the user with information about necessary care activities for the appliance. The data is anonymised in order to remove customer reference. This means, however, that suitable key figures must be generated from the raw data, which provide additional information about the appliances in monitoring. At the same time, it needs to be accessible and usable for experts from different business divisions so that the analyses can be used optimally for the different applications of those divisions. Ewerszumrode et al. [54] have shown that this kind of data can assist QM, but that further knowledge of business experts is needed. So, the focus here is to develop analysis modules enabling business experts from differing divisions to get a better overview and easily apply the analytics modules to similar use cases. The aim is to develop a proof of concept that will find as broad a field of application as possible. In addition, this data set was used in the development of the 'AI-Toolbox' platform itself to develop the fundamentals for the deployment of extended analysis procedures and visualisations. In consequence, these modules also need high functionality and usability, whereas reliability and performance are weighted lower according to the mentioned FURPS model. The modules are designed with the aim of making it as easy as possible for QM staff to draw conclusions about the behaviour of the appliances at the customer's site. The focus must be on the users of the modules, who are not data scientists. Analyses must be comprehensible, and visualisations must be familiar and/or easy to interpret for QM staff.
2) Analysis Modules
The focus in this field is to get an overview of the behaviour of IoT devices in the field. To this end, modules are developed with which the behaviour can be displayed and monitored and, in subsequent developments, predicted and possibly even explained. Since the anonymisation makes it impossible to draw conclusions about individual devices from the data, the focus is on a good overview and the comparability of individual device groups. Four modules are described in detail.
a) QM Error Code Overview
The first analysis module gives an overall view of the number of Error Codes in the field in a specific time interval. This allows a visual analysis and makes it possible to see at a glance whether significant changes have occurred in the field in relation to the total number of all or specific errors compared to the previous point in time. The corresponding Error Codes can be filtered in order to draw conclusions about critical faults that require action (see Figure 8). Using a line chart to present a QM Error Code Overview over time delivers a lucid depiction of evolving trends, making it easier to discern anomalies or persistent issues. This chronological representation not only contextualises errors but also empowers decision-makers to strategise and allocate resources effectively based on historical insights.

Figure 8 QM Error Code Overview Dashboard, Comparison of three device types in relation to one Error Code over a given time interval
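A sketch of the underlying aggregation for such a line chart, assuming an event-level table with columns timestamp, device_type and error_code (illustrative names and synthetic data, not the module's actual schema):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative event data: one row per emitted Error Code.
rng = np.random.default_rng(2)
events = pd.DataFrame({
    "timestamp": pd.to_datetime("2022-01-01")
                 + pd.to_timedelta(rng.integers(0, 365, 500), unit="D"),
    "device_type": rng.choice(["Type A", "Type B", "Type C"], 500),
    "error_code": rng.choice(["E123", "E456"], 500),
})

# Filter to one Error Code, count per month and device type, plot over time.
code = "E123"  # hypothetical code
counts = (events[events["error_code"] == code]
          .groupby([pd.Grouper(key="timestamp", freq="MS"), "device_type"])
          .size()
          .unstack("device_type", fill_value=0))
counts.plot(kind="line", xlabel="month", ylabel=f"count of {code}")
plt.show()
```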

b) QM Error Code Heatmap


The QM Error Code Heatmap is an analysis module based on the prepared data of the Error Code overview. It visualises the frequency of certain Error Codes in certain device groups, whereby this number is normalised to the number of appliances of the group in the field. This heatmap enables a more detailed view of the Error Codes in relation to the appliances that emit them. The module provides the user with an overview from which the analysts can delve directly into specific devices or faults and analyse them for their origin (see Figure 9). Employing a heatmap for this offers immediate visual insight. It efficiently presents the data, enabling a clear comparison of failure rates across device categories.

Figure 9 QM Error Code Heatmap, Visualisation Example (Error Codes hidden due to data privacy)
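The step that distinguishes this module from the spare parts heatmap is the normalisation by fleet size. A minimal sketch, where the counts and fleet sizes are illustrative:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative raw counts: device groups as rows, Error Codes as columns.
counts = pd.DataFrame(
    {"E123": [240, 90, 60], "E456": [30, 160, 25], "E789": [75, 40, 90]},
    index=["Type A", "Type B", "Type C"],
)
# Number of appliances of each group in the field (illustrative).
fleet_size = pd.Series({"Type A": 12000, "Type B": 8000, "Type C": 3000})

# Normalise counts to rates so differently sized groups become comparable.
rates = counts.div(fleet_size, axis=0)

plt.imshow(rates.to_numpy(), cmap="Reds", aspect="auto")
plt.xticks(range(rates.shape[1]), rates.columns)
plt.yticks(range(rates.shape[0]), rates.index)
plt.colorbar(label="Error Codes per appliance in the field")
plt.show()
```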

c) QM Clustering
Cluster analysis based on different fault types can be a useful approach to identifying errors in different types of equipment and their root causes [55, 56]. This technique involves grouping similar appliances based on the types of Error Code they exhibit, allowing a more targeted analysis of potential Error Codes. By identifying patterns of Error Codes within each cluster, it is possible to identify common sources of error and develop more effective strategies to mitigate and prevent future events that lead to Error Codes. Possible causes are patterns in the use of certain appliances (e.g., successive washing programmes, low temperatures, etc.), certain components or possible production steps that can lead to defects. Therefore, clustering based on Error Code type can provide valuable insights into the complex interplay of factors that contribute to equipment Error Codes, helping to improve the reliability and performance of critical systems. From this kind of analysis, business experts will be able to identify some overall salient groups of appliances through further factorial analysis [54]. The module allows users to cluster the unit types into groups according to the numerous types of faults, as explained above. For this purpose, a principal factor analysis is carried out to obtain an approximate overview of the relevant influencing factors. K-means clustering is then used to form clusters of the units. The possible number of clusters and the maximum number of runs can be changed by the user in the module. In this way, the user can apply their domain knowledge to change the critical parameters without having to do any programming. An extension of the module to other processes is planned. This module has been deployed as a proof of concept for a web-based, user-adjustable analysis using AI. The resulting visualisation of this proof of concept is shown below.

Figure 10 QM Clustering, Visualisation Example (Four Clusters, Three Principal Factors)
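A minimal sketch of the module's two-stage logic with scikit-learn; the feature matrix (unit types × Error Code frequencies) and the numbers of factors, clusters and runs are illustrative, user-adjustable assumptions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Illustrative feature matrix: rows = unit types, columns = relative
# frequencies of the numerous Error Code types.
rng = np.random.default_rng(3)
X = rng.random((60, 25))

# Stage 1: factor analysis to compress the many fault types into a few
# relevant influencing factors (three, as in Figure 10).
factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)

# Stage 2: k-means on the factor scores; n_clusters and n_init correspond
# to the parameters the user can adjust in the module without programming.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)
print(clusters[:10])  # cluster assignment per unit type
```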

d) QM Forecast
Another QM application is the forecasting of key figures. Unlike in the other contexts and described modules, however, it is not primarily important here to produce forecasts that are as accurate or as stable as possible over a certain horizon, but rather to first take an exploratory look at which forecast method best fits the course of the key figure trends derived from the new data sets and sources. A further scope of application for this module is the creation of scenarios for possible future trends of issues detected in the data. For this reason, a module has been developed in which the actual trend of a key figure from a selectable point in time can be visually compared with the results of different forecasting methods on the basis of the input data set. In this way, a rough pre-selection of appropriate methods can be made without the need for in-depth statistical and data-analytical knowledge. The module is developed to run univariate as well as multivariate analyses. This can support the work of business experts in identifying expectable future figures (see Figure 11) as well as the work of CDS in identifying the right methods for solutions like the trigger with alert level.
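A sketch of this exploratory comparison in Python with statsmodels, holding out the most recent months and overlaying each method's forecast against the actual trajectory; the synthetic series, model orders and split point are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Illustrative monthly key figure with trend and seasonality.
idx = pd.date_range("2019-01-01", periods=60, freq="MS")
rng = np.random.default_rng(4)
y = pd.Series(50 + 0.5 * np.arange(60)
              + 10 * np.sin(2 * np.pi * np.arange(60) / 12)
              + rng.normal(0, 3, 60), index=idx)

# Hold out the last year so forecasts can be compared visually with actuals.
train = y[:-12]

forecasts = {
    "ARIMA": ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12),
    "Holt-Winters": ExponentialSmoothing(
        train, trend="add", seasonal="add", seasonal_periods=12
    ).fit().forecast(12),
}

ax = y.plot(color="black", label="actual")
for name, fc in forecasts.items():
    fc.plot(ax=ax, label=name)
ax.legend()
plt.show()
```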

Compared to the use case of spare parts evaluation, these solutions for the analysis of the Error Codes of IoT appliances have only been deployed on the 'AI-Toolbox' and not for a specific application in the QM reporting system of the company. However, the experience gained in developing these modules was applied methodologically to an integrated analysis of this data source within the company's QM. These developments also validated the importance of supporting QM professionals with context-specific visualisation of data, and the generic modules of this use case were assessed as a useful foundation for communication during the development phase of the internal solution.
Figure 11 QM Forecast, Visualisation Example (Comparison of ARIMA, Holt-Winters and a Function and Seasonal Component Forecast
Model)

IV. CONCLUSION
In summary, the project partners have succeeded in developing several tools that MIELE can now use for its work in the process of FQS, as well as trainings for developing its QM staff into CDS. On the one hand, the modules described are relevant as tools. The modules for key figure monitoring, analysis and forecasting can be used in different areas of application with similarly characterised data and, if necessary, customised by implementing them with the 'AI Toolbox' based on RAPIDMINER. On the other hand, employees can be trained in the targeted use of the modules within the 'Work & Learn Platform', so that both tools and methods are available for competence development [41].
The presented work has generated experience in connecting data science approaches with QM tasks and methods, especially in the area of FQS. Based on these experiences, the goal is to further develop the staff of the individual business units into CDS. Further analysis modules are also being developed, which will be fed with other data from the QM area. Particularly with regard to the presented FURPS model [43], the modules have been developed for generic use with high usability and functionality. Nevertheless, the example of applying the trigger with alert level has already shown that it is important to focus on the area of 'performance' in the individual implementation at the company in order to produce good and reliable results for special use cases. In addition, basic aspects have been actively worked on in order to take a further organisational step in the digital transformation of QM. In summary, QM works along the entire process chain of manufacturing companies, and AKKORD is one of many necessary components that can enable and support staff in doing their work well.
ACKNOWLEDGMENT
The work on this paper has been supported by the German Federal Ministry of Education and Research (BMBF) as part of the
funding program ‘Industry 4.0 - Collaborations in Dynamic Value Networks (InKoWe)’ in the research project AKKORD
(02P17D210); www.akkord-projekt.de.

V. REFERENCES
[1] F. Reinhart, “Industrial Data Science - Data Science in der industriellen Anwendung,” Industrie 4.0 Management, vol. 32,
pp. 27–30, 2016.
[2] J. Deuse, N. West, and M. Syberg, “Rediscovering scientific management. The evolution from industrial engineering to
industrial data science,” Int. J. Prod. Manag. Eng., vol. 10, no. 1, pp. 1–12, 2022, doi: 10.4995/ijpme.2022.16617.
[3] V. Nolte, T. Sindram, J. Mazarov, and J. Deuse, “Industrial Data Science erfolgreich implementieren,” Zeitschrift für
wirtschaftlichen Fabrikbetrieb, vol. 115, no. 10, pp. 734–737, 2020, doi: 10.1515/zwf-2020-1151020.
[4] J. Zhang, “Field Product Reliability Risk Assessment,” in 2022 Annual Reliability and Maintainability Symposium
(RAMS), Tucson, AZ, USA, 2022, pp. 1–6.
[5] R. Schmitt and T. Pfeifer, Qualitätsmanagement: Strategien, Methoden, Techniken, 5th ed. München, Wien: Hanser,
2015.
Verband der Automobilindustrie and Qualitätsmanagement-Center, Schadteilanalyse Feld: Vermarktung und Kundenbetreuung, 1st ed. Oberursel: VDA QMC, 2009.
[7] REGULATION (EU) 2017/745 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 5 April 2017 on
medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and
repealing Council Directives 90/385/EEC and 93/42/EEC, 2017.
[8] ISO/TR 20416:2020: Medical devices — Post-market surveillance for manufacturers, 2020. Accessed: Feb. 28 2022.
[Online]. Available: https://www.iso.org/standard/67942.html
[9] Robert Bosch GmbH, C/QMM Tilsch, Ed., “6. Auswerten von Felddaten: Qualitätsmanagement in der Bosch-Gruppe |
Technische Statistik,” Accessed: Feb. 28 2023. [Online]. Available: https://assets.bosch.com/media/global/bosch_group/
purchasing_and_logistics/information_for_business_partners/downloads/quality_docs/general_regulations/bosch_
publications/booklet-no06-evaluation-of-field-data_en.pdf
[10] DIN EN ISO 9000:2015-11, Qualitätsmanagementsysteme - Grundlagen und Begriffe (ISO 9000:2015); Deutsche und Englische Fassung EN ISO 9000:2015, Berlin.
[11] J. M. Juran, Der neue Juran: Qualität von Anfang an. Landsberg/Lech: Moderne Industrie, 1993.
[12] V. A. Vasiliev, S. V. Aleksandrova, and M. N. Aleksandrov, “Integration of Quality Management Tools into a Digital
Management System,” in 2021 International Conference on Quality Management, Transport and Information Security,
Information Technologies (IT&QM&IS), Yaroslavl, Russian Federation, 2021, pp. 352–354.
[13] N. West, J. Gries, C. Brockmeier, J. C. Göbel, and J. Deuse, “Towards integrated Data Analysis Quality: Criteria for the
application of Industrial Data Science,” in 2021 IEEE 22nd International Conference on Information Reuse and
Integration for Data Science (IRI), 2021, pp. 131–138.
[14] P. Beaujean and R. Schmitt, “The Quality Backward Chain - The Adaptive Controller of Entrepreneurial Quality,” in
Advances in Intelligent and Soft Computing, Proceedings of the 6th CIRP-Sponsored International Conference on Digital
Enterprise Technology, J. Kacprzyk, G. Q. Huang, K. L. Mak, and P. G. Maropoulos, Eds., Berlin, Heidelberg: Springer
Berlin Heidelberg, 2010, pp. 1133–1143.
[15] F. Schafer, C. Zeiselmair, J. Becker, and H. Otten, “Synthesizing CRISP-DM and Quality Management: A Data Mining
Approach for Production Processes,” in 2018 IEEE International Conference on Technology Management, Operations
and Decisions (ICTMOD), Marrakech, Morocco, 2018, pp. 190–195.
[16] Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T.P., Shearer, C., & Wirth, R., CRISP-DM 1.0: Step-by-step
data mining guide, 2000.
[17] H. Schulte, F. Hoffmann, and R. Mikut, Eds., Proceedings - 31. Workshop Computational Intelligence : Berlin, 25. - 26.
November 2021: KIT Scientific Publishing, 2021.
[18] S. A. Grishaeva, “Quality Data Analysis in the Service Management System,” in 2022 International Conference on
Quality Management, Transport and Information Security, Information Technologies (IT&QM&IS), Saint Petersburg,
Russian Federation, 2022, pp. 34–36.
[19] W. S. Smith, S. Coleman, J. Bacardit, and S. Coxon, “Insight from data analytics with an automotive aftermarket SME,”
Qual Reliab Engng Int, vol. 35, no. 5, pp. 1396–1407, 2019, doi: 10.1002/qre.2529.
[20] H. Song and Z. Cao, "Research on product quality evaluation based on big data analysis," in 2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA), Beijing, China, 2017, pp. 173–177.
[21] A. Bousdekis, K. Lepenioti, D. Apostolou, and G. Mentzas, “A Review of Data-Driven Decision-Making Methods for
Industry 4.0 Maintenance Applications,” Electronics, vol. 10, no. 7, p. 828, 2021, doi: 10.3390/electronics10070828.
[22] K. Vassakis, E. Petrakis, and I. Kopanakis, “Big Data Analytics: Applications, Prospects and Challenges,” in Lecture
Notes on Data Engineering and Communications Technologies, Mobile Big Data, G. Skourletopoulos, G. Mastorakis, C.
X. Mavromoustakis, C. Dobre, and E. Pallis, Eds., Cham: Springer International Publishing, 2018, pp. 3–20.
[23] M. T. Mullarkey, A. R. Hevner, T. Grandon Gill, and K. Dutta, “Citizen Data Scientist: A Design Science Research
Method for the Conduct of Data Science Projects,” in Lecture Notes in Computer Science, Extending the Boundaries of
Design Science Theory and Practice, B. Tulu, S. Djamasbi, and G. Leroy, Eds., Cham: Springer International Publishing,
2019, pp. 191–205.
[24] V. Raina and S. Krishnamurthy, Building an Effective Data Science Practice: A Framework to Bootstrap and Manage a Successful Data Science Practice, 1st ed. Boston, MA: Apress, 2021. [Online]. Available: https://learning.oreilly.com/library/view/-/9781484274194/?ar
[25] J. Tapadinhas and C. Idoine, Citizen Data Science Augments Data Discovery and Simplifies Data Science, Gartner. [Online].
[26] Towards a Process Model to Enable Domain Experts to Become Citizen Data Scientists for Industrial Applications:
IEEE, 2022.
[27] P. Alpar and M. Schulz, “More Data Analysis with Citizen Data Scientists?,” in Lecture Notes in Networks and Systems,
Information Systems and Technologies, A. Rocha, H. Adeli, G. Dzemyda, and F. Moreira, Eds., Cham: Springer
International Publishing, 2022, pp. 122–130.
[28] T. Murallie, Welcome to the Age of Citizen Data Scientists. [Online]. Available: https://towardsdatascience.com/how-to-
become-a-citizen-data-scientist-294660da0494
[29] H. Vella, “Citizen scientists - the power of people data to manage public health [research - citizen data],” Engineering &
Technology, vol. 16, no. 5, pp. 52–55, 2021, doi: 10.1049/et.2021.0509.
[30] J. R. Saura, “Using Data Sciences in Digital Marketing: Framework, methods, and performance metrics,” Journal of
Innovation & Knowledge, vol. 6, no. 2, pp. 92–102, 2021, doi: 10.1016/j.jik.2020.08.001.
[31] G. Gradinaru, “Tools For Citizen Data Scientist in Industry 4.0,” Economic and Social Development: Book of
Proceedings, pp. 132–138, 2019.
[32] M. Kim, T. Zimmermann, R. DeLine, and A. Begel, “The emerging role of data scientists on software development
teams,” in Proceedings of the 38th International Conference on Software Engineering, Austin Texas, 2016, pp. 96–107.
[33] M. Sakpal, How to Use Citizen Data Scientists to Maximize Your D&A Strategy. [Online]. Available: https://
www.gartner.com/smarterwithgartner/how-to-use-citizen-data-scientists-to-maximize-your-da-strategy
[34] M. M. de Medeiros, N. Hoppen, and A. C. G. Maçada, “Data science for business: benefits, challenges and
opportunities,” BL, vol. 33, no. 2, pp. 149–163, 2020, doi: 10.1108/BL-12-2019-0132.
[35] R. Sanchis, Ó. García-Perales, F. Fraile, and R. Poler, “Low-Code as Enabler of Digital Transformation in Manufacturing
Industry,” Applied Sciences, vol. 10, no. 1, p. 12, 2020, doi: 10.3390/app10010012.
[36] C. Romero and S. Ventura, “Educational data mining and learning analytics: An updated survey,” WIREs Data Mining
Knowl Discov, vol. 10, no. 3, 2020, doi: 10.1002/widm.1355.
[37] S. Loeng, “Self-Directed Learning: A Core Concept in Adult Education,” Education Research International, vol. 2020,
pp. 1–12, 2020, doi: 10.1155/2020/3816132.
[38] D. A. Keim, “Information visualization and visual data mining,” IEEE Trans. Visual. Comput. Graphics, vol. 8, no. 1, pp.
1–8, 2002, doi: 10.1109/2945.981847.
[39] P. Cortez and M. J. Embrechts, “Using sensitivity analysis and visualization techniques to open black box data mining
models,” Information Sciences, vol. 225, pp. 1–17, 2013, doi: 10.1016/j.ins.2012.10.039.
[40] J. Mazarov, J. Schmitt, J. Deuse, R. Richter, R. Kühnast-Benedikt, and H. Biedermann, "Visualisierung in Industrial Data-Science-Projekten" [Visualisation in Industrial Data Science projects], Industrie 4.0 Management, vol. 36, no. 6, pp. 63–66, 2020.
[41] M. Syberg, N. West, J. Schwenken, R. Adams, and J. Deuse, “Requirements for the Development of a Collaboration
Platform for Competency-Based Collaboration in Industrial Data Science Projects,” in Lecture Notes on Data
Engineering and Communications Technologies, IoT and Data Science in Engineering Management, F. P. García
Márquez, I. Segovia Ramírez, P. J. Bernalte Sánchez, and A. Del Muñoz Río, Eds., Cham: Springer International
Publishing, 2023, pp. 64–69.
[42] J. Schwenken, C. Klupak, M. Syberg, N. West, F. Walker, and J. Deuse, “Development of a Transdisciplinary Role
Concept for the Process Chain of Industrial Data Science,” in Lecture Notes in Networks and Systems, Proceedings of
Data Analytics and Management, A. Khanna, Z. Polkowski, and O. Castillo, Eds., Singapore: Springer Nature Singapore,
2023, pp. 81–88.
[43] R. B. Grady and D. L. Caswell, Software metrics: Establishing a company-wide program. Englewood Cliffs, NJ:
Prentice-Hall, 1987.
[44] J. Wang, X. Pan, L. Wang, and W. Wei, “Method of Spare Parts Prediction Models Evaluation Based on Grey
Comprehensive Correlation Degree and Association Rules Mining: A Case Study in Aviation,” Mathematical Problems
in Engineering, vol. 2018, pp. 1–10, 2018, doi: 10.1155/2018/2643405.
[45] W. J. Kennedy, J. Wayne Patterson, and L. D. Fredendall, “An overview of recent literature on spare parts inventories,”
International Journal of Production Economics, vol. 76, no. 2, pp. 201–215, 2002, doi: 10.1016/S0925-5273(01)00174-8.
[46] C. Menden, J. Mehringer, A. Martin, and M. Amberg, “Vorhersage von Ersatzteilbedarfen mit Hilfe von
Clusteringverfahren,” HMD, vol. 56, no. 5, pp. 1000–1016, 2019, doi: 10.1365/s40702-019-00532-7.
[47] A. Bacchetti and N. Saccani, “Spare parts classification and demand forecasting for stock control: Investigating the gap
between research and practice,” Omega, vol. 40, no. 6, pp. 722–737, 2012, doi: 10.1016/j.omega.2011.06.008.
[48] D. L. Olson, “Data Visualization,” in Computational Risk Management, Descriptive Data Mining, D. L. Olson, Ed.,
Singapore: Springer Singapore, 2017, pp. 9–28.
[49] Y. Hashimoto and R. Matsushita, “Heat Map Scope Technique for Stacked Time-series Data Visualization,” in 2012 16th
International Conference on Information Visualisation, Montpellier, France, 2012, pp. 270–273.
[50] P. Ramos, N. Santos, and R. Rebelo, “Performance of state space and ARIMA models for consumer retail sales
forecasting,” Robotics and Computer-Integrated Manufacturing, vol. 34, pp. 151–163, 2015, doi:
10.1016/j.rcim.2014.12.015.
[51] Z. Wang, J. Wen, and D. Hua, “Research on distribution network spare parts demand forecasting and inventory quota,” in
2014 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), Hong Kong, 2014, pp. 1–6.
[52] N. Altay, F. Rudisill, and L. A. Litteral, “Adapting Wright's modification of Holt's method to forecasting intermittent
demand,” International Journal of Production Economics, vol. 111, no. 2, pp. 389–408, 2008, doi:
10.1016/j.ijpe.2007.01.009.
[53] J. D. Croston, “Forecasting and Stock Control for Intermittent Demands,” Operational Research Quarterly (1970-1977),
vol. 23, no. 3, p. 289, 1972, doi: 10.2307/3007885.
[54] J. Ewerszumrode, M. Schöne, S. Godt, and M. Kohlhase, “Assistenzsystem zur Qualitätssicherung von IoT-Geräten
basierend auf AutoML und SHAP,” in Proceedings - 31. Workshop Computational Intelligence : Berlin, 25. - 26.
November 2021, H. Schulte, F. Hoffmann, and R. Mikut, Eds.: KIT Scientific Publishing, 2021, pp. 285–305. Accessed:
Feb. 28 2023. [Online]. Available: https://publikationen.bibliothek.kit.edu/1000138532
[55] J.-W. Lin, P. R. Chelliah, M.-C. Hsu, and J.-X. Hou, “Efficient Fault-Tolerant Routing in IoT Wireless Sensor Networks
Based on Bipartite-Flow Graph Modeling,” IEEE Access, vol. 7, pp. 14022–14034, 2019, doi:
10.1109/ACCESS.2019.2894002.
[56] A. Shahraki, A. Taherkordi, Ø. Haugen, and F. Eliassen, “A survey and future directions on clustering: From WSNs to
IoT and modern networking paradigms,” IEEE Transactions on Network and Service Management, vol. 18, no. 2, pp.
2242–2274, 2020.
