Figures & data
Figure 1. The model architecture of the Transformer encoder (based on (Vaswani, Shazeer, and Parmar et al. Citation2017)).
![Figure 1. The model architecture of the Transformer encoder (based on (Vaswani, Shazeer, and Parmar et al. Citation2017)).](/cms/asset/52e6d54d-256e-4031-a924-27b5848ae474/uaai_a_2346059_f0001_oc.jpg)
Table 1. Information about the event log.
Table 2. Details of the masked transformer-based event log repair method.
Table 3. Confusion matrix (Sokolova and Lapalme Citation2009).
Table 4. Evaluation metrics (Sokolova and Lapalme Citation2009).
Figure 5. Performance of the model with different noise addition schemes (fixed 15% noise inclusion rate).
![Figure 5. Performance of the model with different noise addition schemes (fixed 15% noise inclusion rate).](/cms/asset/e8ab26a4-4b45-4015-99a0-e5b835eadf35/uaai_a_2346059_f0005_oc.jpg)
Table 5. Execution time reported in milliseconds (fixed 15% noise content).
Table 6. Details of hyperparameters for the baseline methods (Nguyen et al. Citation2019).
Table 7. Accuracy of missing noise repair.
Table 8. Performance of the model on the BPIC2015 dataset.