Peter Hahn
5 min read · Oct 10, 2022


Peter’s R — Solving prolonged waiting times with tidymodels P4

Part 4: tidytext and final solution

In the previous posts, I established a basic model and introduced resampling and model tuning by means of tidymodels. The aim was to reduce the waiting time for patients of the hand surgery department who are scheduled for an operation. I defined waiting time as the time from arrival until the beginning of the operation.

Material

As in the previous posts, you can find the data and the markdown files in my GitHub repo.

Department specifics

When I optimize waiting time, I must keep in mind that there must be no delays in the process, because a delay causes extra costs and makes the employees unhappy. Normally, three patients are scheduled at 6:30 a.m.; they receive a plexus anesthesia with a long-acting local anesthetic. The remaining patients, beginning with the fourth, are then scheduled by summing the p2p (patient-to-patient) times.

Principle of planning with p2p (patient to patient)
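
To make the principle concrete, here is a toy example of how the start times from the fourth patient onwards could be derived from predicted p2p times (all values, column names, and the start time are invented for illustration):

library(dplyr)
library(lubridate)

# Toy example: schedule patients 4-7 by cumulatively summing predicted p2p times
# (values and the 8:00 start time are invented)
plan <- tibble(
  patient  = 4:7,
  p2p_pred = c(35, 50, 40, 45)   # predicted patient-to-patient times in minutes
)

plan %>%
  mutate(start = ymd_hm("2022-10-11 08:00") +
                 minutes(cumsum(lag(p2p_pred, default = 0))))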

What information exists during planning

The previous models were based on the ops code (ICPM) as one variable. When I create the plan for the next day, the ops codes are not available for all operations: for some standard operations they already exist, but non-standard operations have none, and it would be very time-consuming to look up all ops codes in advance. The variables available are:
- age of the patient
- operating surgeon
- day of the week
- outpatient versus inpatient
Besides these variables, there are two text variables.
The first, named “bez”, is the free text of the planning. It contains information about the planned operation; it is unstructured, and the medical assistant enters it during operation planning.
The other variable, “op-kenn”, is structured: its text is specific to frequently used operations, e.g. Karpaltunnelsyndrom (carpal tunnel syndrome) or Dupuytren. Unfortunately, there are inconsistencies between the EHR export used to create the daily operation plan and the export I used to train the model.

Plan of operations. First line (Info): ‘op-kenn’. Bold text: ‘bez’

I tried to use these two variables as a substitute for the “ops”, because they are available when I plan the operations.

Preprocess the text information

Because both variables contain a lot of clutter, e.g. the time requested by the patient, some preprocessing is necessary. Both variables are merged into one variable. Then I removed all numbers and special characters such as ‘:’ or ‘.’ from the variables. Within the recipe, I tokenize this variable and remove stop words. The maximum number of tokens is determined by tuning max_tokens. The rest of the code is unchanged.
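
A minimal sketch of this clean-up step, assuming the two text columns are called bez and op_kenn (as in the recipe below); the exact rules are assumptions:

library(dplyr)
library(stringr)

# Merge both text variables into 'bez' and strip numbers and special characters
# (a sketch; column names and clean-up rules are assumptions)
hc_train <- hc_train %>%
  mutate(bez = str_c(bez, op_kenn, sep = " "),
         bez = str_remove_all(bez, "[0-9]"),
         bez = str_remove_all(bez, "[.:]"),
         bez = str_squish(bez)) %>%
  select(-op_kenn)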

library(textrecipes)   # text preprocessing steps for recipes

basic_rec <-
  recipe(p2p ~ ., data = hc_train) %>%
  step_tokenize(bez) %>%                           # split the text into word tokens
  step_stopwords(bez, language = "de") %>%         # remove German stop words
  step_tokenfilter(bez, max_tokens = tune()) %>%   # keep only the most frequent tokens
  step_tf(bez) %>%                                 # term-frequency features
  # step_lencode_glm(op_kenn, outcome = vars(p2p)) %>%
  step_dummy(all_nominal(), -all_outcomes()) %>%   # dummy-code nominal predictors
  step_zv(all_predictors())                        # drop zero-variance predictors

Training

I fit an XGBoost model using cross-validation and tune it with ANOVA racing, which is less time-consuming than a full grid search: based on an intermediate evaluation on a subset of resamples, parameter combinations that perform clearly worse are not considered for further resampling. A detailed explanation is available here. Fitting the data on my arm64 Mac using 6 cores took 52 minutes.
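
The tuning code below refers to xgb_wf, xgb_param and hc_fold, which were built in the previous posts. Roughly, they could be set up like this (a sketch; the seed, fold count, and tuned parameters are assumptions):

library(tidymodels)

# Resampling folds and XGBoost workflow (a sketch of objects from the previous posts)
set.seed(123)
hc_fold <- vfold_cv(hc_train, v = 10)

xgb_spec <-
  boost_tree(trees = tune(), min_n = tune(),
             tree_depth = tune(), learn_rate = tune()) %>%
  set_engine("xgboost") %>%
  set_mode("regression")

xgb_wf <-
  workflow() %>%
  add_recipe(basic_rec) %>%
  add_model(xgb_spec)

# The recipe's max_tokens needs finite bounds for tuning
xgb_param <-
  xgb_wf %>%
  extract_parameter_set_dials() %>%
  update(max_tokens = max_tokens(c(10L, 500L)))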

library(doMC)       # parallel backend
library(finetune)   # tune_race_anova()

registerDoMC(cores = 6)
multi_metric <- metric_set(rmse)   # only RMSE is used

set.seed(1308)
xgb_race <-
  xgb_wf %>%
  tune_race_anova(
    hc_fold,
    grid = 20,
    param_info = xgb_param,
    metrics = multi_metric,
    control = control_race(save_pred = TRUE, verbose_elim = TRUE)
  )
registerDoSEQ()

Final evaluation of the model

select_best() identifies the best model parameters. With these parameters, last_fit() fits the model to the entire training data and evaluates it on the test data.

# Inspect and select the best parameter combination
best_results_all <-
  xgb_race %>%
  show_best(metric = "rmse")

best_results <-
  xgb_race %>%
  select_best(metric = "rmse")

# Fit on the full training data and evaluate on the test set
boosting_test_results <-
  xgb_wf %>%
  finalize_workflow(best_results) %>%
  last_fit(split = fall_split)

collect_metrics(boosting_test_results)

collect_metrics() reveals an RMSE of 14, and the plot of predicted vs. observed values shows larger deviations at higher values of p2p.

Observed vs predicted p2p
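
The plot can be recreated from the collected predictions roughly like this (a sketch):

library(ggplot2)

# Observed vs. predicted p2p on the test set
collect_predictions(boosting_test_results) %>%
  ggplot(aes(x = p2p, y = .pred)) +
  geom_point(alpha = 0.5) +
  geom_abline(lty = 2) +
  coord_obs_pred() +   # same scale on both axes (from the tune package)
  labs(x = "Observed p2p", y = "Predicted p2p")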

After the final fit on the entire training data, vip estimates the importance of the variables.

library(vip)

# Finalize the workflow with the best parameters and fit on the training data
final_wfl <-
  xgb_wf %>%
  finalize_workflow(best_results)

fitted_wfl <-
  final_wfl %>%
  fit(data = hc_train)

# Variable importance of the fitted XGBoost model
fitted_wfl %>%
  extract_fit_parsnip() %>%
  vip(geom = "point", num_features = 15)

Feature importance

The most important feature is stat (inpatient vs. outpatient), followed by tf_bez_allgemein. Both mark operations with a longer duration. In contrast, the features arzt_code_ap and arzt_code_nf stand for the two most experienced and fastest operating hand surgeons.

Use the model

Finally, we can save the model,

saveRDS(wz_model, "../models/model1.RDS")

and use it to predict p2p of all operations of the next day.
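
In the daily script this roughly amounts to the following (a sketch; the file name of the export is a placeholder, and the plan has to go through the same text clean-up as the training data first):

library(tidymodels)

# Load the saved workflow and predict p2p for the next day's (already cleaned) plan
# (wz_model is assumed to be the fitted workflow saved above)
wz_model <- readRDS("../models/model1.RDS")
plan_next_day <- readr::read_csv("../data/plan_next_day.csv")   # hypothetical export

plan_next_day %>%
  bind_cols(predict(wz_model, new_data = .))   # adds a .pred column with p2p estimates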

What’s next?

Currently, I have a separate R program in which I load the model, preprocess the operating plan for the next day, predict with the model, and obtain the p2p times for the next day. With these, I plan the schedule for the next day, as described above.

I attentively observe the course of the operations every day in order to detect problems or delays.

Next, I must build a frontend that is simple enough for everybody to use in my absence. Perhaps I will try vetiver.

After a while, I must evaluate whether we achieve a reduction in t_diff (the time from arrival until the beginning of the operation), which was the primary goal. At regular intervals, I must retrain the model, because circumstances will change: the staff will change, the operations we perform will change, and so on. Hopefully, it is possible to improve the results; at the moment I haven't tried tf-idf, embeddings, and other text-feature approaches.
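
For instance, switching from plain term frequencies to tf-idf would only require swapping one recipe step (untried so far; just a sketch):

# Variant of the recipe with tf-idf weighting instead of raw term frequencies
tfidf_rec <-
  recipe(p2p ~ ., data = hc_train) %>%
  step_tokenize(bez) %>%
  step_stopwords(bez, language = "de") %>%
  step_tokenfilter(bez, max_tokens = tune()) %>%
  step_tfidf(bez) %>%                      # step_tf() replaced by step_tfidf()
  step_dummy(all_nominal(), -all_outcomes()) %>%
  step_zv(all_predictors())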

I will report the first results in a month or two and let you know whether deploying the model solves the basic problem.

Read here about deploying the model and the first results.

I hope you enjoyed my journey through the problem of prolonged waiting times and my work through the book Tidy Modeling with R. The journey will continue.
Stay tuned.
