Problem Description

A major problem faced by businesses in asset-heavy industries such as manufacturing is the significant cost of delays in the production process due to mechanical problems. Most of these businesses want to predict such problems in advance so that they can be prevented proactively, reducing the costly impact of downtime. Please refer to the playbook for predictive maintenance for a detailed explanation of common predictive maintenance use cases and modelling approaches.

In this notebook, we follow the ideas from the playbook referenced above and walk through the steps of implementing a predictive model for a scenario based on a synthesis of multiple real-world business problems. The example brings together common data elements observed in many predictive maintenance use cases; the data itself was created by simulation.

The business problem for this example is predicting problems caused by component failures, so that the question “What is the probability that a machine will fail in the near future due to the failure of a certain component?” can be answered. The problem is formulated as a multi-class classification problem, and a machine learning algorithm is used to create a predictive model that learns from historical data collected from machines. In the following sections, we go through the steps of implementing such a model: feature engineering, label construction, training, and evaluation. First, we start by explaining the data sources in the next section.

Data Sources

Common data sources for predictive maintenance problems are:
– Failure history: The failure history of a machine or a component within the machine.
– Maintenance history: The repair history of a machine, e.g. error codes, previous maintenance activities, or component replacements.
– Machine conditions and usage: The operating conditions of a machine, e.g. data collected from sensors.
– Machine features: The features of a machine, e.g. engine size, make and model, location.
– Operator features: The features of the operator, e.g. gender, past experience.

The data for this example comes from four different sources: real-time telemetry data collected from the machines, error messages, historical maintenance records (including failure records), and machine information such as type and age.


    # load the packages used throughout the notebook
    library(dplyr)      # data manipulation
    library(ggplot2)    # plotting
    library(zoo)        # rollapply() for rolling aggregates
    library(data.table) # rolling joins
    library(gbm)        # gradient boosted machines

    telemetry <- read.csv('PdM_telemetry.csv')
    errors <- read.csv('PdM_errors.csv')
    maint <- read.csv('PdM_maint.csv')
    failures <- read.csv('PdM_failures.csv')
    machines <- read.csv('PdM_machines.csv')


The first data source is the telemetry time-series data, which consists of voltage, rotation, pressure, and vibration measurements collected from 100 machines in real time and averaged over each hour during the year 2015. Below, we display the first and last 10 records in the dataset, together with a summary of the whole dataset.
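As an aside, the hourly averaging described above can be sketched in base R; the timestamps and voltage readings below are made up for illustration (the dataset itself already comes pre-aggregated):

```r
# toy raw voltage readings at 20-minute intervals (illustrative values)
ts <- as.POSIXct("2015-01-01 00:10:00", tz = "UTC") + c(0, 1200, 2400, 3600, 4800)
volt <- c(160, 170, 180, 150, 160)

# truncate each timestamp to its hour and average the readings per hour
hour <- format(trunc(ts, units = "hours"), "%Y-%m-%d %H:00:00")
hourly <- tapply(volt, hour, mean)
```

Here the first three readings fall in hour 00 and average to 170, while the last two fall in hour 01 and average to 155.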

    # format the datetime field, which comes in as character
    telemetry$datetime <- as.POSIXct(telemetry$datetime,
                                     format = "%Y-%m-%d %H:%M:%S",
                                     tz = "UTC")

    cat("Total number of telemetry records:", nrow(telemetry))
    head(telemetry, 10)
    tail(telemetry, 10)
    summary(telemetry)

As an example, below is a plot of voltage values for machine ID 1 during January 2015.


    theme_set(theme_bw())  # theme for figures
    options(repr.plot.width = 8, repr.plot.height = 3)

    ggplot(data = telemetry %>% filter(machineID == 1, 
                                     datetime > as.POSIXct("2015-01-01"),
                                     datetime < as.POSIXct("2015-02-01")),
           aes(x = datetime, y = volt)) + geom_line(color = "blue") + labs(y = "voltage")


The second major data source is the error logs. These are non-breaking errors thrown while the machine is still operational and do not constitute failures. The error dates and times are rounded to the closest hour, since the telemetry data is collected at an hourly rate.
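Rounding a timestamp to the closest hour, as described above, can be done with base R's round() method for date-time objects; the timestamp below is made up for illustration:

```r
# a hypothetical error timestamp
err_time <- as.POSIXct("2015-01-03 07:40:00", tz = "UTC")

# round to the closest hour so it aligns with the hourly telemetry grid
rounded <- round(err_time, units = "hours")
```

07:40 is closer to 08:00 than to 07:00, so the rounded value is 2015-01-03 08:00:00.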

    # format datetime and errorID fields
    errors$datetime <- as.POSIXct(errors$datetime,
                                  format = "%Y-%m-%d %H:%M:%S",
                                  tz = "UTC")
    errors$errorID <- as.factor(errors$errorID)

    cat("Total number of error records:",nrow(errors))

    options(repr.plot.width = 5, repr.plot.height = 3)
    ggplot(errors, aes(x = errorID)) + 
        geom_bar(fill = "orange") + 
        labs(title = "Errors by type", x = "error types")


The third data source is the maintenance records, which contain both scheduled and unscheduled maintenance corresponding to regular inspections of components as well as failures. A record is generated when a component is replaced, either during a scheduled inspection or after a breakdown. Records created due to breakdowns are referred to as failures, as explained in the later sections. The maintenance data covers both 2014 and 2015.

    # format datetime and comp fields
    maint$datetime <- as.POSIXct(maint$datetime,
                                 format = "%Y-%m-%d %H:%M:%S",
                                 tz = "UTC")
    maint$comp <- as.factor(maint$comp)

    cat("Total number of maintenance records:", nrow(maint))
    head(maint, 10)

    options(repr.plot.width = 5, repr.plot.height = 3)
    ggplot(maint, aes(x = comp)) + 
        geom_bar(fill = "magenta") +
        labs(title = "Component replacements", x = "component types")


This data set includes some information about the machines: model type and age (years in service).

    # format model field
    machines$model <- as.factor(machines$model)

    cat("Total number of machines:", nrow(machines))
    head(machines, 10)

    options(repr.plot.width = 6, repr.plot.height = 3)
    ggplot(machines, aes(x = age, fill = model)) +
        geom_histogram(color = "black") +
        labs(title = "Machines", x = "age") 


The failure history contains the records of component replacements due to failures. Each record has a date and time, a machine ID, and the failed component type.

    # format datetime and failure fields
    failures$datetime <- as.POSIXct(failures$datetime,
                                    format = "%Y-%m-%d %H:%M:%S",
                                    tz = "UTC")
    failures$failure <- as.factor(failures$failure)

    cat("Total number of failures:", nrow(failures))
    head(failures, 10)

Below is the distribution of failures by component. We see that most failures are due to component 2.

    options(repr.plot.width = 5, repr.plot.height = 3)
    ggplot(failures, aes(x = failure)) + 
        geom_bar(fill = "red") + 
        labs(title = "Failure distribution", x = "component type")

Feature Engineering

The first step in predictive maintenance applications is feature engineering, which brings the different data sources together to create features that best describe a machine's health condition at a given point in time. In the following sections, several feature engineering methods are used to create features based on the properties of each data source.

Lag Features from Telemetry

Telemetry data almost always comes with timestamps, which makes it suitable for calculating lag features. A common method is to pick a window size for the lag features and compute rolling aggregates such as the mean, standard deviation, minimum, and maximum to represent the short-term history of the telemetry over the lag window. In the following, the rolling mean and standard deviation of the telemetry data over the last 3-hour lag window are calculated every 3 hours.
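To clarify what the rollapply() calls below compute, here is a base-R sketch of a right-aligned rolling mean with width 3, evaluated every 3 observations, on a made-up toy series:

```r
# toy hourly readings
x <- c(1, 2, 3, 4, 5, 6, 7, 8, 9)
width <- 3

# right-aligned rolling mean: window i covers x[(i - width + 1):i]
roll_mean <- sapply(seq_along(x), function(i) {
    if (i < width) NA else mean(x[(i - width + 1):i])
})

# keep every 3rd window (the effect of by = 3 in rollapply)
roll_mean_by3 <- roll_mean[seq(width, length(x), by = 3)]
```

The first two positions have no complete window (NA, the effect of fill = NA), and the retained windows end at positions 3, 6, and 9, giving means 2, 5, and 8.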


    # calculate the rolling mean and rolling standard deviation 
    # on the last 3 hour lag window (width=3), for every 3 hours (by=3)
    # for each machine ID.
    telemetrymean <- telemetry %>%
        arrange(machineID, datetime) %>% 
        group_by(machineID) %>%
        mutate(voltmean = rollapply(volt, width = 3, FUN = mean, align = "right", fill = NA, by = 3),
               rotatemean = rollapply(rotate, width = 3, FUN = mean, align = "right", fill = NA, by = 3),
               pressuremean = rollapply(pressure, width = 3, FUN = mean, align = "right", fill = NA, by = 3),
               vibrationmean = rollapply(vibration, width = 3, FUN = mean, align = "right", fill = NA, by = 3)) %>%
        select(datetime, machineID, voltmean, rotatemean, pressuremean, vibrationmean) %>%
        filter(!voltmean)) %>%
        ungroup()

    telemetrysd <- telemetry %>% 
        arrange(machineID, datetime) %>%
        group_by(machineID) %>%
        mutate(voltsd = rollapply(volt, width = 3, FUN = sd, align = "right", fill = NA, by = 3),
               rotatesd = rollapply(rotate, width = 3, FUN = sd, align = "right", fill = NA, by = 3),
               pressuresd = rollapply(pressure, width = 3, FUN = sd, align = "right", fill = NA, by = 3),
               vibrationsd = rollapply(vibration, width = 3, FUN = sd, align = "right", fill = NA, by = 3)) %>%
               select(datetime, machineID, voltsd, rotatesd, pressuresd, vibrationsd) %>%
               filter(!voltsd)) %>%
               ungroup()


For capturing a longer term effect, 24 hour lag features are also calculated as below.

    # calculate the rolling mean and rolling standard deviation 
    # on the last 24 hour lag window (width=24), for every 3 hours (by=3)
    # for each machine ID.
    telemetrymean_24hrs <- telemetry %>%
        arrange(machineID, datetime) %>%
        group_by(machineID) %>%
        mutate(voltmean_24hrs = rollapply(volt, width = 24, FUN = mean, align = "right", fill = NA, by = 3),
               rotatemean_24hrs = rollapply(rotate, width = 24, FUN = mean, align = "right", fill = NA, by = 3),
               pressuremean_24hrs = rollapply(pressure, width = 24, FUN = mean, align = "right", fill = NA, by = 3),
               vibrationmean_24hrs = rollapply(vibration, width = 24, FUN = mean, align = "right", fill = NA, by = 3)) %>%
               select(datetime, machineID, voltmean_24hrs, rotatemean_24hrs, pressuremean_24hrs, vibrationmean_24hrs) %>%
               filter(!voltmean_24hrs)) %>%
               ungroup()


    telemetrysd_24hrs <- telemetry %>% 
        arrange(machineID, datetime) %>%
        group_by(machineID) %>%
        mutate(voltsd_24hrs = rollapply(volt, width = 24, FUN = sd, align = "right", fill = NA, by = 3),
               rotatesd_24hrs = rollapply(rotate, width = 24, FUN = sd, align = "right", fill = NA, by = 3),
               pressuresd_24hrs = rollapply(pressure, width = 24, FUN = sd, align = "right", fill = NA, by = 3),
               vibrationsd_24hrs = rollapply(vibration, width = 24, FUN = sd, align = "right", fill = NA, by = 3)) %>%
               select(datetime, machineID, voltsd_24hrs, rotatesd_24hrs, pressuresd_24hrs, vibrationsd_24hrs) %>%
               filter(!voltsd_24hrs)) %>%
               ungroup()


Next, the columns of the feature datasets created earlier are merged to create the final feature set from telemetry.

    # merge columns of feature sets created earlier
    telemetryfeat <- data.frame(telemetrymean, telemetrysd[,-c(1:2)]) 
    telemetryfeat_24hrs <- data.frame(telemetrymean_24hrs, telemetrysd_24hrs[,-c(1:2)])
    telemetryfeat <- telemetryfeat %>%
                     left_join(telemetryfeat_24hrs, by = c("datetime", "machineID")) %>%
                     filter(!voltmean_24hrs)) %>%
                     ungroup()


Lag Features from Errors

Like telemetry data, errors come with timestamps. An important difference is that the error IDs are categorical values and should not be averaged over time intervals like the telemetry measurements. Instead, we count the number of errors of each type in a lagging window:

    # create a column for each error type
    errorcount <- errors %>% select(datetime, machineID, errorID) %>% 
      mutate(error1 = as.integer(errorID == "error1"), 
             error2 = as.integer(errorID == "error2"),
             error3 = as.integer(errorID == "error3"),
             error4 = as.integer(errorID == "error4"),
             error5 = as.integer(errorID == "error5"))

    # sum the duplicate errors in an hour
    errorcount <- errorcount %>% 
      group_by(machineID, datetime) %>%
      summarise(error1sum = sum(error1), 
                error2sum = sum(error2), 
                error3sum = sum(error3), 
                error4sum = sum(error4), 
                error5sum = sum(error5)) %>%
      ungroup()


    # align errors with telemetry datetime field
    errorfeat <- telemetry %>% 
        select(datetime, machineID) %>%
        left_join(errorcount, by = c("datetime", "machineID"))

    # replace missing values
    errorfeat[] <- 0


    # count the number of errors of different types in the last 24 hours, for every 3 hours
    errorfeat <- errorfeat %>% 
        arrange(machineID, datetime) %>%
        group_by(machineID) %>%
        mutate(error1count = rollapply(error1sum, width = 24, FUN = sum, align = "right", fill = NA, by = 3),
               error2count = rollapply(error2sum, width = 24, FUN = sum, align = "right", fill = NA, by = 3),
               error3count = rollapply(error3sum, width = 24, FUN = sum, align = "right", fill = NA, by = 3),
               error4count = rollapply(error4sum, width = 24, FUN = sum, align = "right", fill = NA, by = 3),
               error5count = rollapply(error5sum, width = 24, FUN = sum, align = "right", fill = NA, by = 3)) %>%
        select(datetime, machineID, error1count, error2count, error3count, error4count, error5count) %>%
        filter(!error1count)) %>%
        ungroup()


Days Since Last Replacement from Maintenance

A crucial data set in this example is the maintenance records, which contain the component replacement history. Possible features from this data set could include, for example, the number of replacements of each component in the last 3 months, to capture the frequency of replacements. However, more relevant information is how long it has been since a component was last replaced, as that is expected to correlate better with component failures: the longer a component has been in use, the more degradation is expected.

As a side note, creating lag features from maintenance data is not as straightforward as for telemetry and errors, so the features from this data are generated in a more custom way. This type of ad-hoc feature engineering is very common in predictive maintenance, since domain knowledge plays a big role in understanding the predictors of a problem. In the following, the days since the last replacement of each component type are calculated as features from the maintenance data.
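The rolling match used below relies on data.table's roll = TRUE join, which attaches to each observation the most recent replacement at or before it. A base-R sketch with made-up times (in days, for a single machine and component) illustrates the idea:

```r
# hypothetical replacement days for one machine/component (must be sorted)
rep_days <- c(5, 20, 41)

# observation days at which features are computed
obs_days <- c(7, 20, 35, 50)

# findInterval() returns the index of the latest replacement <= each observation
# (an index of 0 would mean no prior replacement; not handled in this sketch)
idx <- findInterval(obs_days, rep_days)
since_last <- obs_days - rep_days[idx]
```

For observation day 35, for example, the latest replacement at or before it happened on day 20, so the feature value is 15 days.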

    # create a binary column for each component: 1 if a replacement occurred, 0 if not.
    comprep <- maint %>% 
        select(datetime, machineID, comp) %>% 
        mutate(comp1 = as.integer(comp == "comp1"), 
               comp2 = as.integer(comp == "comp2"),
               comp3 = as.integer(comp == "comp3"),
               comp4 = as.integer(comp == "comp4")) %>%
        select(-comp)


    comprep <-
    setkey(comprep, machineID, datetime)

    # separate different component type replacements into different tables
    comp1rep <- comprep[comp1 == 1, .(machineID, datetime, lastrepcomp1 = datetime)] # component 1 replacements
    comp2rep <- comprep[comp2 == 1, .(machineID, datetime, lastrepcomp2 = datetime)] # component 2 replacements
    comp3rep <- comprep[comp3 == 1, .(machineID, datetime, lastrepcomp3 = datetime)] # component 3 replacements
    comp4rep <- comprep[comp4 == 1, .(machineID, datetime, lastrepcomp4 = datetime)] # component 4 replacements

    # use telemetry feature table datetime and machineID to be matched with replacements
    compdate <-[, c(1:2)])
    setkey(compdate, machineID, datetime)

    # data.table rolling match will attach the latest record from the component replacement tables 
    # to the telemetry date time and machineID
    comp1feat <- comp1rep[compdate[,.(machineID, datetime)],roll = TRUE] 
    comp1feat$sincelastcomp1 <- as.numeric(difftime(comp1feat$datetime, comp1feat$lastrepcomp1, units = "days"))
    comp2feat <- comp2rep[compdate[,.(machineID, datetime)], roll = TRUE] 
    comp2feat$sincelastcomp2 <- as.numeric(difftime(comp2feat$datetime, comp2feat$lastrepcomp2, units = "days"))
    comp3feat <- comp3rep[compdate[,.(machineID, datetime)], roll = TRUE] 
    comp3feat$sincelastcomp3 <- as.numeric(difftime(comp3feat$datetime, comp3feat$lastrepcomp3, units="days"))
    comp4feat <- comp4rep[compdate[,.(machineID, datetime)], roll = TRUE] 
    comp4feat$sincelastcomp4 <- as.numeric(difftime(comp4feat$datetime, comp4feat$lastrepcomp4, units = "days"))

    # merge all tables
    compfeat <- data.frame(compdate, comp1feat[,.(sincelastcomp1)], comp2feat[,.(sincelastcomp2)],
                           comp3feat[,.(sincelastcomp3)], comp4feat[,.(sincelastcomp4)])


Machine Features

The machine features can be used without further modification. These include descriptive information about the type of each machine and its age (number of years in service). If the age information had been recorded as a “first use date” for each machine, a transformation would have been necessary to turn those into numeric values indicating the years in service.
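Had the machine data contained a first-use date instead of a recorded age, a sketch of that transformation could look like this; the dates and the snapshot date are made up:

```r
# hypothetical first-use dates instead of a recorded age
first_use <- as.Date(c("2008-06-15", "2012-01-10"))
snapshot  <- as.Date("2015-01-01")

# whole years in service as of the snapshot date
age <- floor(as.numeric(snapshot - first_use) / 365.25)
```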

Lastly, we merge all the feature data sets we created earlier to get the final feature matrix.

    # telemetry and error features have the same datetime 
    finalfeat <- data.frame(telemetryfeat, errorfeat[,-c(1:2)])

    # merge with component features and machine features lastly
    finalfeat <- finalfeat %>% 
        left_join(compfeat, by = c("datetime","machineID")) %>% 
        left_join(machines, by = c("machineID"))

    head(finalfeat, 10)
    cat("The final set of features are:",paste0(names(finalfeat), ","))

Label Construction

When using multi-class classification for predicting failure due to a problem, labelling is done by taking a time window prior to the failure of an asset and labelling the feature records that fall into that window as “about to fail due to a problem”, while labelling all other records as “normal.” This time window should be picked according to the business case: in some situations it may be enough to predict failures hours in advance, while in others days or weeks may be needed, e.g. to allow for the arrival of replacement parts.

The prediction problem for this example scenario is to estimate the probability that a machine will fail in the near future due to the failure of a certain component. More specifically, the goal is to compute the probability that a machine will fail in the next 24 hours due to a certain component failure (component 1, 2, 3, or 4). Below, a categorical failure feature is created to serve as the label: all records within a 24-hour window before a failure of component 1 have failure=comp1, and so on for components 2, 3, and 4; all records not within 24 hours of a component failure have failure=none.
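As a minimal sketch of this windowing logic, with a made-up failure time and record times, every record that falls within the 24 hours before the failure gets the component label:

```r
# one hypothetical failure of component 1
fail_time <- as.POSIXct("2015-01-05 06:00:00", tz = "UTC")

# three hypothetical feature-record times
rec_times <- as.POSIXct(c("2015-01-03 06:00:00",
                          "2015-01-04 09:00:00",
                          "2015-01-05 06:00:00"), tz = "UTC")

# hours from each record to the failure; label records inside the 24-hour window
datediff <- as.numeric(difftime(fail_time, rec_times, units = "hours"))
label <- ifelse(datediff >= 0 & datediff <= 24, "comp1", "none")
```

The first record is 48 hours before the failure and stays “none”; the other two fall inside the 24-hour window and are labelled “comp1”.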

    # left join final features with failures on machineID then mutate a column for datetime difference
    # filter date difference for the prediction horizon which is 24 hours
    labeled <- left_join(finalfeat, failures, by = c("machineID")) %>%
        mutate(datediff = difftime(datetime.y, datetime.x, units = "hours")) %>%
        filter(datediff <= 24, datediff >= 0)

    # left join labels to final features and fill NA's with "none" indicating no failure
    labeledfeatures <- left_join(finalfeat, 
                                 labeled %>% select(datetime.x, machineID, failure),
                                 by = c("datetime" = "datetime.x", "machineID")) %>%
        arrange(machineID, datetime)

    levels(labeledfeatures$failure) <- c(levels(labeledfeatures$failure), "none")
    labeledfeatures$failure[$failure)] <- "none"

Below is an example of records that are labeled as failure=comp4 in the failure column. Notice that the first 8 records all occur in the 24-hour window before the first recorded failure of component 4. The next 8 records are within the 24 hour window before another failure of component 4.

    head(labeledfeatures[labeledfeatures$failure == "comp4",], 16)


After the feature engineering and labelling steps, either Azure Machine Learning Studio or this notebook can be used to create a predictive model. The recommended Azure Machine Learning Studio experiment can be found in the Cortana Intelligence Gallery: Predictive Maintenance Modelling Guide Experiment. Below, we describe the modelling process and provide an example R model.

Training, Validation and Testing

When working with time-stamped data as in this example, record partitioning into training, validation, and test sets should be performed carefully to prevent overestimating the performance of the models. In predictive maintenance, the features are usually generated using lagging aggregates: records in the same time window will likely have identical labels and similar feature values. These correlations can give a model an “unfair advantage” when predicting on a test set record that shares its time window with a training set record. We therefore partition records into training, validation, and test sets in large chunks, to minimize the number of time intervals shared between them.

Predictive models have no advance knowledge of future chronological trends: in practice, such trends are likely to exist and to adversely impact the model’s performance. To obtain an accurate assessment of a predictive model’s performance, we recommend training on older records and validating/testing using newer records.

For both of these reasons, a time-dependent record splitting strategy is an excellent choice for predictive maintenance models. The split is performed by choosing a point in time based on the desired sizes of the training and test sets: all records before that point are used for training the model, and all remaining records are used for testing. (If desired, the timeline could be further divided to create validation sets for parameter selection.) To prevent any records in the training set from sharing time windows with records in the test set, we remove the records at the boundary, in this case by ignoring 24 hours' worth of data prior to the split point.
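The split-with-gap logic can be sketched as follows; the split point matches the first split used below, and the three record times are made up (one in the training period, one inside the 24-hour gap, one in the test period):

```r
split_point <- as.POSIXct("2015-08-01 01:00:00", tz = "UTC")
gap_hours <- 24

# three hypothetical record times
times <- as.POSIXct(c("2015-07-30 12:00:00",
                      "2015-07-31 12:00:00",
                      "2015-08-02 00:00:00"), tz = "UTC")

# training records end 24 hours before the split; records in the gap are dropped
in_train <- times < split_point - gap_hours * 3600
in_test  <- times > split_point
```

The second record falls in the gap and belongs to neither set, which is exactly what prevents training and test records from sharing a labelling window.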

    # split at 2015-08-01 01:00:00, to train on the first 8 months and test on last 4 months
    # labelling window is 24 hours so records within 24 hours prior to split point are left out
    trainingdata1 <- labeledfeatures[labeledfeatures$datetime < "2015-07-31 01:00:00",] 
    testingdata1 <- labeledfeatures[labeledfeatures$datetime > "2015-08-01 01:00:00",]

    # split at 2015-09-01 01:00:00, to train on the first 9 months and test on last 3 months
    # labelling window is 24 hours so records within 24 hours prior to split point are left out
    trainingdata2 <- labeledfeatures[labeledfeatures$datetime < "2015-08-31 01:00:00",] 
    testingdata2 <- labeledfeatures[labeledfeatures$datetime > "2015-09-01 01:00:00",]

    # split at 2015-10-01 01:00:00, to train on the first 10 months and test on last 2 months
    # labelling window is 24 hours so records within 24 hours prior to split point are left out
    trainingdata3 <- labeledfeatures[labeledfeatures$datetime < "2015-09-30 01:00:00",] 
    testingdata3 <- labeledfeatures[labeledfeatures$datetime > "2015-10-01 01:00:00",]


    # create the training formula 
    trainformula <- as.formula(paste('failure',
                                     paste(names(labeledfeatures)[c(3:29)],collapse=' + '),
                                     sep=' ~ '))

    # train model on 3 splits
    gbm_model1 <- gbm(formula = trainformula, data = trainingdata1, 
                      distribution = "multinomial", n.trees = 50,
                      interaction.depth = 5, shrinkage = 0.1)
    gbm_model2 <- gbm(formula = trainformula, data = trainingdata2, 
                      distribution = "multinomial", n.trees = 50,
                      interaction.depth = 5, shrinkage = 0.1)
    gbm_model3 <- gbm(formula = trainformula, data = trainingdata3,
                      distribution = "multinomial", n.trees = 50,
                      interaction.depth = 5, shrinkage = 0.1)

    # print relative influence of variables for 1st model as an example
    summary(gbm_model1)

In predictive maintenance, machine failures are usually rare occurrences over the lifetime of the assets compared to normal operation. This causes an imbalance in the label distribution, which usually leads to poor model performance: algorithms tend to classify majority-class examples better at the expense of minority-class examples, since the total misclassification error improves most when the majority class is labelled correctly. This results in low recall rates even though accuracy can be high, and becomes a larger problem when the cost of false alarms to the business is very high. To help with this problem, sampling techniques such as oversampling of the minority examples are usually used, along with more sophisticated techniques which are not covered in this notebook.

Also, due to the class imbalance problem, it is important to look at evaluation metrics other than accuracy alone, and to compare those metrics to baseline metrics computed when predictions are made by random chance rather than by a machine learning model. The comparison brings out the value and benefits of using a machine learning model.
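A tiny numeric example with made-up label counts shows why accuracy alone is misleading under class imbalance: a degenerate model that always predicts the majority class achieves high accuracy but zero recall on the failures we actually care about.

```r
# 95 normal records, 5 failures; a degenerate model that always predicts "none"
actual    <- c(rep("none", 95), rep("fail", 5))
predicted <- rep("none", 100)

accuracy    <- mean(actual == predicted)          # high despite being useless
recall_fail <- sum(predicted == "fail" & actual == "fail") /
               sum(actual == "fail")              # no failures caught
```

This model scores 95% accuracy while catching none of the failures, which is why the recall and baseline comparisons computed below matter.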

In the following, we use an evaluation function that computes many important evaluation metrics, along with baseline metrics, for classification problems. For a detailed explanation of this function and the metrics, please refer to the blog post Computing Classification Evaluation Metrics in R.

    # label distribution after features are labeled - the class imbalance problem
    ggplot(labeledfeatures, aes(x=failure)) + 
        geom_bar(fill="red") + 
        labs(title = "label distribution", x = "labels")

    # define evaluate function
    Evaluate <- function(actual = NULL, predicted = NULL, cm = NULL){
        if(is.null(cm)) {
            actual = actual[!]
            predicted = predicted[!]
            f = factor(union(unique(actual), unique(predicted)))
            actual = factor(actual, levels = levels(f))
            predicted = factor(predicted, levels = levels(f))
            cm = as.matrix(table(Actual = actual, Predicted = predicted))
        }

        n = sum(cm) # number of instances
        nc = nrow(cm) # number of classes
        diag = diag(cm) # number of correctly classified instances per class 
        rowsums = apply(cm, 1, sum) # number of instances per class
        colsums = apply(cm, 2, sum) # number of predictions per class
        p = rowsums / n # distribution of instances over the classes
        q = colsums / n # distribution of instances over the predicted classes

        accuracy = sum(diag) / n

        #per class
        recall = diag / rowsums
        precision = diag / colsums
        f1 = 2 * precision * recall / (precision + recall)

        macroPrecision = mean(precision)
        macroRecall = mean(recall)
        macroF1 = mean(f1)

        #1-vs-all matrix
        oneVsAll = lapply(1 : nc,
                          function(i){
                              v = c(cm[i,i],
                                    rowsums[i] - cm[i,i],
                                    colsums[i] - cm[i,i],
                                    n - rowsums[i] - colsums[i] + cm[i,i]);
                              return(matrix(v, nrow = 2, byrow = T))})

        s = matrix(0, nrow=2, ncol=2)
        for(i in 1:nc){s=s+oneVsAll[[i]]}

        #avg accuracy
        avgAccuracy = sum(diag(s))/sum(s)

        microPrf = (diag(s) / apply(s,1, sum))[1];

        #majority class
        mcIndex = which(rowsums==max(rowsums))[1] # majority-class index
        mcAccuracy = as.numeric(p[mcIndex]) 
        mcRecall = 0*p;  mcRecall[mcIndex] = 1
        mcPrecision = 0*p; mcPrecision[mcIndex] = p[mcIndex]
        mcF1 = 0*p; mcF1[mcIndex] = 2 * mcPrecision[mcIndex] / (mcPrecision[mcIndex] + 1)

        #random accuracy
        expAccuracy = sum(p*q)
        kappa = (accuracy - expAccuracy) / (1 - expAccuracy)

        #random guess
        rgAccuracy = 1 / nc
        rgPrecision = p
        rgRecall = 0*p + 1 / nc
        rgF1 = 2 * p / (nc * p + 1)

        #random weighted guess
        rwgAccuracy = sum(p^2)
        rwgPrecision = p
        rwgRecall = p
        rwgF1 = p

        classNames = names(diag)
        if(is.null(classNames)) classNames = paste("C", (1:nc), sep = "")

        return(list(
            ConfusionMatrix = cm,
            Metrics = data.frame(
                Class = classNames,
                Accuracy = accuracy,
                Precision = precision,
                Recall = recall,
                F1 = f1,
                MacroAvgPrecision = macroPrecision,
                MacroAvgRecall = macroRecall,
                MacroAvgF1 = macroF1,
                AvgAccuracy = avgAccuracy,
                MicroAvgPrecision = microPrf,
                MicroAvgRecall = microPrf,
                MicroAvgF1 = microPrf,
                MajorityClassAccuracy = mcAccuracy,
                MajorityClassPrecision = mcPrecision,
                MajorityClassRecall = mcRecall,
                MajorityClassF1 = mcF1,
                Kappa = kappa,
                RandomGuessAccuracy = rgAccuracy,
                RandomGuessPrecision = rgPrecision,
                RandomGuessRecall = rgRecall,
                RandomGuessF1 = rgF1,
                RandomWeightedGuessAccuracy = rwgAccuracy,
                RandomWeightedGuessPrecision = rwgPrecision,
                RandomWeightedGuessRecall = rwgRecall,
                RandomWeightedGuessF1 = rwgF1)))
    }

    # evaluation metrics for first split
    pred_gbm1 <-, testingdata1, 
                                       n.trees = 50, type = "response"))

    names(pred_gbm1) <- gsub(".50", "", names(pred_gbm1))
    pred_gbm1$failure <- as.factor(colnames(pred_gbm1)[max.col(pred_gbm1)])

    eval1 <- Evaluate(actual=testingdata1$failure,predicted=pred_gbm1$failure)

    # evaluation metrics for second split
    pred_gbm2 <-, testingdata2,  
                                       n.trees = 50, type = "response"))

    names(pred_gbm2) <- gsub(".50", "", names(pred_gbm2))
    pred_gbm2$failure <- as.factor(colnames(pred_gbm2)[max.col(pred_gbm2)])

    eval2 <- Evaluate(actual=testingdata2$failure,predicted=pred_gbm2$failure)

    # evaluation metrics for third split
    pred_gbm3 <-, testingdata3,  
                                       n.trees = 50, type = "response"))

    names(pred_gbm3) <- gsub(".50", "", names(pred_gbm3))
    pred_gbm3$failure <- as.factor(colnames(pred_gbm3)[max.col(pred_gbm3)])

    eval3 <- Evaluate(actual=testingdata3$failure,predicted=pred_gbm3$failure)

In predictive maintenance, we are often most concerned with how many of the actual failures were predicted by the model, i.e. the model’s recall. (Recall becomes more important as the consequences of false negatives — true failures that the model did not predict — exceed the consequences of false positives, viz. false prediction of impending failure.) Below, we compare the recall rates for each failure type for the three models. The recall rates for all components as well as no failure are all above 90% meaning the model was able to capture above 90% of the failures correctly.

    # report the recall rates for the models
    rownames <- c("comp1","comp2","comp3","comp4","none")
    data.frame(cbind(failure = rownames,
                     gbm_model1_Recall = eval1$Metrics$Recall,
                     gbm_model2_Recall = eval2$Metrics$Recall,
                     gbm_model3_Recall = eval3$Metrics$Recall))