Materials Informatics

Prediction of martensite start temperature of steel combined with expert experience and machine learning

Article: 2354655 | Received 11 Mar 2024, Accepted 06 May 2024, Published online: 29 May 2024

ABSTRACT

The martensite start temperature (MS) plays a pivotal role in formulating heat treatment regimes for steel. Compiling experimental data from the literature and incorporating expert knowledge to construct features, this paper employs machine learning algorithms to predict the MS of steel. Among the algorithms compared, the extremely randomized trees (ETR) algorithm attains the best prediction accuracy, and the inclusion of atomic features further enhances the model's performance. Feature selection is accomplished by evaluating linear and nonlinear relationships in the data using the Pearson correlation coefficient (PCC), variance inflation factor (VIF), and maximum information coefficient (MIC). The performance of the machine learning models on unknown data is then compared to validate the model's generalization ability. SHAP values are introduced for model interpretability analysis, unveiling the influencing mechanisms between the features and the target variable. Finally, using specific steels as an illustration, the paper underscores the practical value of the model.


IMPACT STATEMENT

This study integrates experimental data, expert knowledge, and the ETR algorithm for accurate MS prediction in steel; systematic feature selection and interpretability analysis enhance the model's performance and demonstrate its practical utility.

1. Introduction

The martensite start temperature (MS) holds significant importance in analyzing the martensitic phase transformation process, guiding alloy design, formulating heat treatment processes, and welding applications. Currently, experimental determination of MS relies on various methods such as metallography, dilatometry, hardness testing, and thermal analysis [Citation1–4]. The entire process demands considerable human and material resources. Therefore, researchers are now exploring methods for accurately predicting MS.

Numerous factors influence the MS, the most crucial being the chemical composition of the steel. Since the 1940s, researchers have proposed numerous empirical formulas relating MS to chemical composition, as illustrated in Table 1. Initially, these formulas mostly used linear regression to establish multivariate linear equations between alloying elements and MS, as seen in No. 1~No. 6, No. 8~No. 11, and No. 14~No. 15. However, such purely linear relationships neglect the interactions among alloying elements. Consequently, some scholars improved traditional linear regression models by introducing nonlinear terms, as shown in No. 7, No. 12, No. 13, and No. 16. Additionally, aside from alloy composition, studies indicate that the austenite grain size (dr) also influences MS [Citation21–24], leading to the empirical model shown in No. 18. In summary, while numerous empirical formulas exist for calculating MS, most have specific applicability ranges and exhibit optimal predictive performance only within certain composition limits. As a result, they still fall short of the increasingly precise requirements of scientific research and production.

Table 1. Empirical equations for MS calculation.

The development of the martensitic phase transformation nucleation thermodynamic theory has laid the foundation for thermodynamic models of MS. In the early stages, Bhadeshia et al. [Citation25] investigated the driving forces of martensitic phase transformation in different carbon steels and established a functional relationship describing the martensitic phase transformation driving force (Gc) with carbon content as the independent variable. This formula is applicable for predicting MS in low-alloy steels. However, steels contain alloying elements beyond C, Si, Mn, and Ni; elements such as Cr and Mo also affect MS. To broaden the applicability range of the thermodynamic model for MS calculation, Ghosh et al. [Citation26,Citation27], drawing on solid-solution strengthening theory, proposed a new expression for the martensitic phase transformation free energy, as shown in Equation (1).

$$G_c = K_1 + W_\mu(X_i) + W_{th}(X_i, T) \tag{1}$$

Where K1 is a constant, Xi is the molar fraction of element i, Wμ represents temperature-independent resistance work, and Wth represents temperature-dependent resistance work, which can be neglected when MS exceeds 300 K. Ghosh and Olson, starting from the empirical formula for the resistance work in binary systems, used a Pythagorean superposition to derive a mathematical model for the martensitic phase transformation free energy in multicomponent systems, as shown in Equation (2). This model predicts the MS of multicomponent alloy steels effectively; however, its predictive performance for highly alloyed steels is somewhat unsatisfactory.

$$\Delta G_c\,(\mathrm{J/mol}) = K_1 + \sqrt{\sum_i \left(K_i X_i^{0.5}\right)^2} + \sqrt{\sum_j \left(K_j X_j^{0.5}\right)^2} + \sqrt{\sum_k \left(K_k X_k^{0.5}\right)^2} + K_{Co} X_{Co}^{0.5} \tag{2}$$

Where i represents C and N; j represents Cr, Mn, Mo, Nb, Si, Ti, V; k encompasses Al, Cu, Ni, and W; Ki, Kj and Kk are the corresponding calculation coefficients, respectively.
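To make Equation (2) concrete, the following sketch evaluates ΔGc for a composition given as mole fractions. The coefficient values below are placeholders for illustration only; the actual K1, Ki, Kj, Kk, and KCo values come from Table 3 and are not reproduced here.

```python
# Illustrative evaluation of Equation (2); all K coefficients are PLACEHOLDERS.
import math

K1 = 1.0e3                                              # placeholder, J/mol
K_i = {"C": 1.0, "N": 1.0}                              # placeholder
K_j = {"Cr": 1.0, "Mn": 1.0, "Mo": 1.0, "Nb": 1.0,
       "Si": 1.0, "Ti": 1.0, "V": 1.0}                  # placeholder
K_k = {"Al": 1.0, "Cu": 1.0, "Ni": 1.0, "W": 1.0}       # placeholder
K_Co = 1.0                                              # placeholder

def delta_g_c(x: dict[str, float]) -> float:
    """Transformation free energy per Equation (2); `x` maps element
    symbols to mole fractions, with missing elements counted as zero."""
    def rss(coeffs):
        # Pythagorean (root-sum-square) superposition of one element group
        return math.sqrt(sum((k * x.get(el, 0.0) ** 0.5) ** 2
                             for el, k in coeffs.items()))
    return (K1 + rss(K_i) + rss(K_j) + rss(K_k)
            + K_Co * x.get("Co", 0.0) ** 0.5)
```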

Both empirical formulas and thermodynamic models fall within the realm of statistical mathematics, traditionally applied to relatively simple problems that assume data follows a normal distribution. However, due to the multitude of factors influencing MS and their complex interactions, it is challenging to derive a universal predictive model. The advent of Machine Learning (ML) effectively addresses this issue by autonomously learning patterns and rules from data for prediction, classification, and decision-making [Citation28–32]. ML does not rely on explicit rules but extracts information from data, making it suitable for handling nonlinear problems and high-dimensional data. Capdevila et al. [Citation33], utilizing a neural network model, obtained a highly accurate prediction formula by considering the impact weights of each element on MS. Rahaman et al. [Citation34] employed a random forest model for MS prediction, and their results outperformed those of thermodynamic models. However, these predictions only consider the influence of alloy composition and lack integration with expert knowledge and prior information, requiring further model refinement.

In this study, we combine expert knowledge and machine learning to predict steel MS. We propose features related to MS based on expert knowledge and enhance the model’s performance by introducing atomic features. Through feature selection, we identify the optimal subset of features. Additionally, we conduct interpretability analysis on the machine learning model to determine feature importance ranking and understand the underlying mechanisms influencing MS. Finally, we compare the predictive results of our proposed model with those of existing models to demonstrate its advantages and application prospects.

2. Data and methods

2.1. Data collection and analysis

The dataset used in this study is derived from literature data published by Lu et al. [Citation35]. The majority of MS values in the dataset were obtained from continuous cooling transformation (CCT) diagrams; MATLAB code was employed to analyze the CCT diagrams and extract the MS information. The dataset comprises 1157 entries, including chemical compositions, austenitizing temperature (TAust), and corresponding MS data for various steels. The specific distribution ranges are outlined in Table 2. Because the features differ by orders of magnitude, they are normalized and scaled to the range 0~1; the calculation formula is shown in Equation (3) [Citation30]. Figure 1 shows the distribution of each feature after normalization; since each feature is scaled by its minimum and maximum values, these plots give an intuitive view of each feature's range in the dataset. Figure 2 depicts the distribution of MS in the dataset, which overall conforms to a normal distribution.

$$X^* = \frac{X - \min(X)}{\max(X) - \min(X)} \tag{3}$$

Figure 1. Distribution of normalized features.

Figure 2. Distribution of MS in data set.

Table 2. Spatial distribution range of data set.

Where X* represents the normalized feature, while min(X) and max(X) denote the minimum and maximum values of the original feature X, respectively.
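A minimal sketch of this normalization, assuming the raw features are columns of a pandas DataFrame `df`:

```python
# Min-max normalization per Equation (3), scaling each column to [0, 1].
import pandas as pd

def min_max_normalize(df: pd.DataFrame) -> pd.DataFrame:
    # X* = (X - min(X)) / (max(X) - min(X)), applied column-wise
    return (df - df.min()) / (df.max() - df.min())
```

sklearn.preprocessing.MinMaxScaler would give the same result.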

2.2. Feature construction and selection

As mentioned earlier, both the austenite grain size and the martensitic phase transformation driving force significantly affect MS, so incorporating both factors into the features is crucial for enhancing the model's predictive performance. Since TAust largely determines dr [Citation36,Citation37], TAust is used in place of dr. Gc is computed from Equation (2), with the corresponding parameters listed in Table 3. The original features of the dataset thus include TAust, Gc, and the alloy element compositions. Additionally, studies indicate that incorporating atomic features can significantly improve a model's predictive performance [Citation31,Citation38,Citation39]: factors such as electronegativity, atomic mass, and atomic radius can influence the austenite-to-martensite transformation. Hence, this study introduces a series of atomic features, as shown in Table 4. The lattice parameters of the face-centered cubic (FCC) and body-centered cubic (BCC) phases are calculated from Equations (4) and (5) [Citation40,Citation41].

$$\mathrm{FCC} = 3.578 + 0.033 w_C + 0.00095 w_{Mn} - 0.0002 w_{Ni} + 0.0006 w_{Cr} + 0.0056 w_{Al} + 0.0031 w_{Mo} + 0.0018 w_V \tag{4}$$

$$\mathrm{BCC} = 2.8664 + \frac{(a_{Fe} - 0.279 x_C)^2 (a_{Fe} + 2.49 x_C) - a_{Fe}^3}{3 a_{Fe}^2} - 0.03 x_{Si} + 0.06 x_{Mn} + 0.07 x_{Ni} + 0.31 x_{Mo} + 0.05 x_{Cr} + 0.096 x_V \tag{5}$$

Table 3. Parameters used in Equation (2).

Table 4. Atomic features.

Where FCC is the lattice parameter of austenite and wi is the mass fraction of element i; BCC is the lattice parameter of ferrite; xi is the molar fraction of element i; aFe is the lattice parameter of ferrite in pure iron, taken as 2.8664.
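A sketch of Equations (4) and (5) as code, using the coefficients as printed above; note that the minus signs before the Ni and Si terms are assumptions restored during reconstruction of the garbled source, following the sign pattern of standard lattice-parameter formulas.

```python
# Lattice parameters per Equations (4) and (5). `w` holds mass fractions
# (wt.%), `x` mole fractions; missing elements count as zero.
A_FE = 2.8664  # lattice parameter of ferrite in pure iron

def a_fcc(w: dict[str, float]) -> float:
    """Austenite (FCC) lattice parameter, Equation (4)."""
    g = w.get
    return (3.578 + 0.033 * g("C", 0) + 0.00095 * g("Mn", 0)
            - 0.0002 * g("Ni", 0) + 0.0006 * g("Cr", 0)
            + 0.0056 * g("Al", 0) + 0.0031 * g("Mo", 0)
            + 0.0018 * g("V", 0))

def a_bcc(x: dict[str, float]) -> float:
    """Ferrite (BCC) lattice parameter, Equation (5)."""
    g = x.get
    c = g("C", 0)
    carbon_term = ((A_FE - 0.279 * c) ** 2 * (A_FE + 2.49 * c)
                   - A_FE ** 3) / (3 * A_FE ** 2)
    return (2.8664 + carbon_term - 0.03 * g("Si", 0) + 0.06 * g("Mn", 0)
            + 0.07 * g("Ni", 0) + 0.31 * g("Mo", 0)
            + 0.05 * g("Cr", 0) + 0.096 * g("V", 0))
```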

After creating the necessary input features, the next step is feature selection, which eliminates potentially irrelevant features and reduces the model's complexity. First, the Pearson correlation coefficient (PCC) [Citation42,Citation43] is computed to measure the linear correlation between features, as shown in Equation (6). Generally, a PCC absolute value greater than 0.8 [Citation32] suggests multicollinearity among variables, which can be assessed using the variance inflation factor (VIF) [Citation44]. VIF is the ratio of the variance of a coefficient estimate when multicollinearity is present among the explanatory variables to its variance when it is absent; a higher VIF indicates more severe collinearity, and it is calculated using Equation (7). After eliminating linearly correlated features, nonlinear correlations between variables are assessed using the maximum information coefficient (MIC) [Citation45]. MIC is based on mutual information, which measures the mutual dependence between two variables, i.e., the information gained about one variable from observing the other; MIC normalizes and maximizes this quantity through a series of steps. For continuous variables, the mutual information between X and Y is calculated using Equation (8). I(X;Y) lies in [0, +∞), with higher values indicating stronger correlation. MIC is then computed from I(X;Y) using Equation (9); MIC lies in [0, 1], with higher values likewise indicating stronger correlation between the two variables.

$$\mathrm{PCC} = \frac{1}{n-1} \sum_{i=1}^{n} \frac{(a_i - \bar{a})(b_i - \bar{b})}{S_a S_b} \tag{6}$$

$$\mathrm{VIF}_i = \frac{1}{1 - R_i^2} \tag{7}$$

$$I(X;Y) = \iint p(x,y) \log\!\left(\frac{p(x,y)}{p(x)\,p(y)}\right) \mathrm{d}x\,\mathrm{d}y \tag{8}$$

$$\mathrm{MIC}(X,Y) = \max_{f,g} \frac{I(f(X), g(Y))}{\log_2 \min\{k_1, k_2\}} \tag{9}$$

Where n is the sample size; a and b are two feature variables; $\bar{a}$ and $\bar{b}$ are their mean values; Sa and Sb are their standard deviations; $R_i^2$ is the coefficient of determination obtained by regressing the i-th explanatory variable on the remaining explanatory variables; p(x,y) is the joint probability density function of X and Y; p(x) and p(y) are their marginal probability density functions; f and g are monotonic functions mapping X and Y to the [0,1] interval; and k1 and k2 are the numbers of possible values of X and Y, respectively.
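The following sketch strings the three screening stages together, assuming the candidate features are columns of a pandas DataFrame `X` and `y` holds the MS values; statsmodels is used for VIF and the minepy package for MIC (tooling assumptions, since the paper does not name its implementation). The thresholds follow the text: |PCC| > 0.8, VIF > 100, MIC < 0.1.

```python
# PCC -> VIF -> MIC feature-selection pipeline (sketch).
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from minepy import MINE

def select_features(X: pd.DataFrame, y: pd.Series,
                    pcc_thresh=0.8, vif_thresh=100.0, mic_thresh=0.1):
    # 1) PCC: flag every feature appearing in a pair with |PCC| > threshold.
    corr = X.corr(method="pearson").abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    hits = upper.stack()
    hits = hits[hits > pcc_thresh]
    flagged = set(hits.index.get_level_values(0)) | set(hits.index.get_level_values(1))

    # 2) VIF: among flagged features, iteratively drop the worst offender
    #    until every remaining VIF falls at or below the threshold.
    kept = list(flagged)
    while len(kept) > 1:
        vifs = [variance_inflation_factor(X[kept].values, i)
                for i in range(len(kept))]
        worst = int(np.argmax(vifs))
        if vifs[worst] <= vif_thresh:
            break
        kept.pop(worst)

    candidates = [c for c in X.columns if c not in flagged] + kept

    # 3) MIC: keep only candidates with a nonlinear association to MS.
    mine = MINE(alpha=0.6, c=15)  # default MINE estimator settings
    selected = []
    for c in candidates:
        mine.compute_score(X[c].to_numpy(), y.to_numpy())
        if mine.mic() >= mic_thresh:
            selected.append(c)
    return selected
```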

2.3. Machine learning model

With the advancement of computer science and technology, numerous machine learning algorithms have emerged, each with its own set of applicable scenarios. Therefore, it is essential to compare machine learning algorithms and select the most suitable one for a given dataset.

This study employed four machine learning algorithms: Extremely Randomized Trees (ETR), Gradient Boosting Machine (GBT), Support Vector Regression (SVR), and Lasso Regression (LSO). Using the features as input and MS as output, the predictive performance of the different algorithms was compared to identify the optimal predictive model. Among the four, ETR and GBT are ensemble methods: ETR belongs to the Bagging family, whereas GBT belongs to the Boosting family. The individual learners built by Bagging algorithms are independent of each other, allowing parallel computation; in Boosting algorithms, by contrast, the individual learners are strongly interdependent, so computation must proceed sequentially. SVR seeks a hyperplane in a high-dimensional feature space that fits the data within a prescribed error tolerance while keeping the model as flat as possible. LSO is a form of linear regression that adds an L1-norm penalty to the loss function to control model complexity.
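For reference, the four algorithms as instantiated via their common scikit-learn implementations; this is a sketch, with library defaults or illustrative settings rather than the tuned hyperparameters reported later in Table 5.

```python
# Baseline model definitions (settings illustrative, not the tuned values).
from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Lasso

models = {
    "ETR": ExtraTreesRegressor(random_state=0),        # Bagging-style ensemble
    "GBT": GradientBoostingRegressor(random_state=0),  # Boosting ensemble
    "SVR": SVR(kernel="rbf"),                          # kernel-based regression
    "LSO": Lasso(alpha=0.01),                          # L1-regularized linear model
}
```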

To compare the predictive performance of different models, this study selected the coefficient of determination (R2) and mean absolute error (MAE) as evaluation metrics, assessing the goodness of fit of the models. The specific calculation formulas are outlined in Equations (10) and (11).

$$R^2 = 1 - \frac{\sum_{i=1}^{n} (f_i - y_i)^2}{\sum_{i=1}^{n} (y_i - y_{i,ave})^2} \tag{10}$$

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - f_i \right| \tag{11}$$

where fi represents the predicted value; yi is the actual value; yi,ave is the mean value of the actual values. Additionally, during the model development process, the dataset was partitioned into a training set and a test set in an 8:2 ratio. The model was fitted using the training set, and subsequently, the effectiveness of the model was validated using the test set.
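A sketch of the split-and-evaluate loop described above, assuming `X_sel` (the selected features), `y` (the MS values), and the `models` dict from the earlier sketch:

```python
# 8:2 train/test split, then R2 and MAE per Equations (10) and (11).
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y, test_size=0.2, random_state=0)

for name, model in models.items():
    model.fit(X_train, y_train)                # fit on the training set
    pred = model.predict(X_test)               # validate on the held-out set
    print(f"{name}: R2 = {r2_score(y_test, pred):.2f}, "
          f"MAE = {mean_absolute_error(y_test, pred):.2f} K")
```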

In machine learning, hyperparameters are parameters set before model training; they are not learned by the model but defined manually by practitioners. Examples include the learning rate, regularization parameters, and tree depth. Common methods for optimizing hyperparameters include grid search, random search, and Bayesian optimization. Unlike the first two, which sample the search space uniformly, Bayesian optimization [Citation46,Citation47] leverages previous observations to intelligently select the next candidate point in the search space, and therefore tends to discover good hyperparameter configurations in fewer iterations. This study accordingly adopts Bayesian optimization for hyperparameter tuning. Additionally, to mitigate the potential impact of randomness in data partitioning on model performance, cross-validation is combined with Bayesian optimization to jointly search for the best hyperparameters. Table 5 presents the hyperparameter search results for the four algorithms.

Table 5. Hyperparameter tuning results.
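The sketch below illustrates the combination of Bayesian optimization with cross-validation using scikit-optimize's BayesSearchCV, one possible implementation; the search spaces shown are assumptions for illustration, not the ranges behind Table 5.

```python
# Bayesian hyperparameter search with 5-fold cross-validation (sketch).
from skopt import BayesSearchCV
from sklearn.ensemble import ExtraTreesRegressor

search = BayesSearchCV(
    ExtraTreesRegressor(random_state=0),
    {                                   # illustrative search space only
        "n_estimators": (100, 1000),
        "max_depth": (3, 30),
        "min_samples_leaf": (1, 10),
    },
    n_iter=50,                          # evaluation budget of the search
    cv=5,                               # CV guards against split randomness
    scoring="neg_mean_absolute_error",
    random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_, -search.best_score_)
```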

3. Results and discussion

3.1. Model prediction

Figure 3 presents the comparison between model predictions and measured values when using the original feature inputs. The x-axis represents measured data and the y-axis predicted data; closer alignment to the diagonal indicates smaller prediction errors. Among the four machine learning models, the ETR model consistently outperforms the others, exhibiting the best performance on both the training and test sets. In particular, on the test set it achieves an R2 of 0.91 and an MAE of 16.58 K, indicating excellent predictive capability. Thus, the ETR algorithm is selected as the optimal algorithm for this study. The outstanding performance of the ETR model stems primarily from its characteristics as an ensemble learning algorithm: it adapts flexibly to diverse data types and handles complex nonlinear relationships effectively. Moreover, ETR builds multiple decision trees with randomized feature and data-subset selection during tree construction, which mitigates the risk of overfitting and lets it perform well on high-dimensional and noisy datasets. Conversely, other models, notably SVR, often struggle with high-dimensional data owing to the sparsity induced by the curse of dimensionality, escalating computational complexity, and difficulties in parameter selection. To further enhance the model's predictive ability, atomic features are introduced in addition to the original features. Figure 4 illustrates the comparison between model predictions and measured values in this scenario. Compared to Figure 3, R2 increases to 0.94 and the MAE decreases to 13.71 K, indicating improved model performance with the introduction of atomic features.

Figure 3. Comparison of predicted and measured values of four machine learning models with original feature input: (a) ETR, (b) GBT, (c) SVR, (d) LSO.

Figure 4. Comparison between the predicted and measured values of the model after adding atomic features.

3.2. Feature selection

Because inter-feature correlation was not considered when the atomic features were introduced, feature selection is required. Figure 5 shows the PCC heat map between the dataset variables. Notably, multicollinearity exists between some features; for example, the PCC between mAW and mN is 1, indicating a perfect linear correlation between them, so one of the two can be deleted to avoid redundant information. PCC screening identifies multicollinear features including W, mAW, afve, ffve, C, FCC, mR, mN, apve, fsve, fpve, mAR, and BCC. These features require further analysis to decide how to handle them so as to ensure the robustness and predictive performance of the model. VIF values were therefore calculated for these features, as shown in Figure 6. With a threshold of 100, only BCC, C, ffve, and W were retained. Subsequently, MIC values were calculated for the remaining features to assess their nonlinear relationships with the target variable, as depicted in Figure 7. Using 0.1 as the threshold, features with MIC less than 0.1 were considered unrelated to MS and discarded. The final retained features are C, Gc, TAust, BCC, mE, Mn, Cr, fdve, mC, asve, Si, Ni, adve, Mo, V, rN, and Al. Figure 8 illustrates the comparison between model predictions and measured values after feature selection. Despite the reduced input dimensionality, the model's predictive performance has not deteriorated, demonstrating that feature selection maintains prediction accuracy while reducing model complexity.

Figure 5. PCC heat map between features.

Figure 6. VIF values of the multicollinear features.

Figure 7. MIC analysis of features and target variables.

Figure 8. Comparison between predicted values and measured values of the model after feature selection.

3.3. Validation of generalization ability

To validate the predictive capability of the model on unknown data, this study collected 89 sets of data from other literature sources [Citation18,Citation48–51]; the disparity between measured and calculated MS values is depicted in Figure 9. Although the model was not trained on these samples, it still exhibits commendable predictive performance, indicating robust generalization to unknown data. Additionally, the results are compared with those computed using JMatPro and empirical formulas, showing that the proposed model's accuracy surpasses both alternatives by a significant margin.

Figure 9. Difference between measured and calculated MS values.

3.4. Model interpretable analysis

In the realm of MS prediction, machine learning exhibits higher predictive accuracy compared to traditional empirical formulas. However, due to its nature as a ‘black-box model’, where complex mapping relations exist between inputs and outputs, it lacks transparency and interpretability in internal decision-making. Researchers face challenges in comprehending how the model makes specific predictions or decisions based on input data. Therefore, alternative methods are needed to enhance interpretability. SHapley Additive exPlanations (SHAP) values [Citation52,Citation53] offer an approach to elucidate the outputs of machine learning models. Rooted in cooperative game theory’s Shapley values, SHAP values provide a framework to allocate contributions of each feature to the model’s output, aiding in understanding the model’s decision-making process. The central idea behind SHAP values is to simulate the impact of incorporating different features on the model output, assigning a Shapley value to each feature. This method possesses properties like consistency, balance, and linearity, making it a potent tool for model interpretation.
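A sketch of this workflow with the shap package's TreeExplainer (suitable for tree ensembles), assuming `etr` is the fitted ETR model and `X_test` is a DataFrame of the selected features whose column names match those in the text:

```python
# SHAP interpretability analysis of the trained tree-ensemble model.
import shap

explainer = shap.TreeExplainer(etr)          # exact SHAP values for tree models
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global importance (cf. Figure 10(a)) and per-sample beeswarm (Figure 10(b)).
shap.summary_plot(shap_values, X_test, plot_type="bar")
shap.summary_plot(shap_values, X_test)

# Feature-level dependence, e.g. carbon content (cf. Figure 11(a)).
shap.dependence_plot("C", shap_values, X_test)
```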

Figure 10 summarizes the feature SHAP values. Figure 10(a) shows the average SHAP value of each feature, with bar length representing the magnitude of the average SHAP value and thus reflecting feature importance; in descending order of importance, the features are C, Gc, Ni, TAust, mE, Cr, BCC, adve, Mn, fdve, Al, asve, rN, V, Si, Mo, and mC. Figure 10(b) displays the distribution of SHAP values for each sample in the dataset. Each point represents a sample, and its color corresponds to the feature value: the redder the color, the larger the feature value; the bluer, the smaller. The x-axis represents the SHAP value, where SHAP > 0 indicates a positive impact of the feature on the target variable (an increase) and SHAP < 0 a negative impact (a decrease). Taking feature C as an example, predominantly blue points lie on the positive half-axis while reddish points lie on the negative half-axis, indicating that increasing the C content of steel tends to decrease MS. For the top eight features, Figure 11 shows the distribution of SHAP values against feature values. Using the sign of the SHAP value as a boundary, one can determine the ranges of feature values over which the target variable increases or decreases. For instance, when C > 0.3, Gc > 5000, Ni > 1.3, mE > 1.85, Cr > 3.3, BCC > 10, or adve > 5, the corresponding SHAP values are less than 0, leading to a decrease in MS. Furthermore, Figure 11(d) shows the distribution of SHAP values against TAust; there is no distinct boundary between positive and negative SHAP values for TAust. This is because dr is one of the factors influencing MS and this study approximates dr by TAust; since austenitizing time also affects dr, it is difficult for TAust alone to clearly demonstrate its influence on MS.

Figure 10. Summary chart of feature SHAP values: (a) average SHAP values; (b) SHAP values of each sample.

Figure 11. Distribution of SHAP values corresponding to features: (a) C, (b) Gc, (c) Ni, (d) TAust, (e) mE, (f) Cr, (g) BCC, (h) adve.

3.5. Model application

The primary objective of this study is to establish a unified prediction model for MS that predicts accurately across various steel types, including low-carbon, high-carbon, low-alloy, and high-alloy steels. This section illustrates the model's application using specific steels as examples. Data for 63 steel samples spanning these steel types were collected from the literature [Citation4,Citation54–72]. Figure 12 depicts the disparity between the model's predictions and the measured values, along with the corresponding carbon and alloying element contents. The machine learning model demonstrates robust predictive performance even for special steel types that pose challenges for empirical formulas or thermodynamic models, validating the broad applicability and value of the model.

Figure 12. Difference between the calculated and measured values of the model, with the corresponding C and alloying element contents.

4. Conclusion

  1. The ETR algorithm establishes a model with optimal predictive performance among the four machine learning algorithms. Additionally, the introduction of atomic features proves advantageous in enhancing the model’s performance.

  2. Feature selection was achieved through PCC, VIF, and MIC, effectively reducing the model’s complexity without compromising predictive accuracy.

  3. Comparative analysis of model predictions on an unknown dataset against JMatPro and empirical formulas validates the model’s strong generalization capabilities.

  4. SHAP values were employed for interpretability analysis, providing insights into feature importance rankings and critical value ranges.

  5. Using a special steel type as an example demonstrates the model’s universality, affirming its extensive practical value.

Acknowledgments

The authors are very grateful to the reviewers and editors for their valuable suggestions, which have helped improve the paper substantially.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work is supported by grants from the National Key Research and Development Program of China (Grant No. 2021YFB3501502, Grant No. 2021YFB3702500); Ministry of Science and Technology of the People’s Republic of China.

References

  • Yang HS, Bhadeshia H. Uncertainties in dilatometric determination of martensite start temperature. Mater Sci Technol. 2007;23(5):556–14. doi: 10.1179/174328407X176857
  • Liu C, Huang M, Ren Q, et al. Effect of grain size and cooling rate on the martensite start temperature of stainless steel. Steel Res Int. 2022;93(8):2200044. doi: 10.1002/srin.202200044
  • Luo Q, Chen H, Chen W, et al. Thermodynamic prediction of martensitic transformation temperature in Fe-Ni-C system. Scripta Materialia. 2020;187:413–417. doi: 10.1016/j.scriptamat.2020.06.062
  • Bojack A, Zhao L, F MP, et al. In-situ determination of austenite and martensite formation in 13Cr6Ni2Mo supermartensitic stainless steel. Mater Charact. 2012;71:77–86. doi: 10.1016/j.matchar.2012.06.004
  • Payson P. Martensite reactions in alloy steels. Trans American Soc Met. 1944;33:261–280.
  • Rowland E. The application of Ms points to case depth measurement. Trans ASM. 1946;27(2):27–47. doi: 10.2307/1932518
  • Grange R. The temperature range of martensite formation. Trans American Inst Mining Metal Eng. 1946;167:467–501.
  • Nehrenberg A. The temperature range of martensite formation. Trans AIME. 1946;167:494–498.
  • Steven W. The temperature of martensite and bainite in low-alloy steels. J Iron Steel Inst. 1956;183:349–359.
  • Andrews K. Empirical formulae for the calculation of some transformation temperatures. J Iron Steel Inst. 1965;0:721–727.
  • Kung C, Rayment J. Examination of the validity of existing empirical formulae for the calculation of M/sub s/temperature. Metall Trans A. 1982;13(2):328–331. doi: 10.1007/BF02643327
  • Ishida K. Calculation of the effect of alloying elements on the ms temperature in steels. Journal of Alloys and Compounds. 1995;220(1–2):126–131. doi: 10.1016/0925-8388(94)06002-9
  • Sverdlin A, Ness A. Steel heat treatment handbook. GE Totten and MAH Howes, editors. New York: Marcel Dekker Inc.; 1997.
  • Capdevila C, Caballero FG, García de Andrés C. Determination of Ms temperature in steels: a Bayesian neural network model. ISIJ Inter. 2002;42(8):894–902. doi: 10.2355/isijinternational.42.894
  • Van Bohemen S. Bainite and martensite start temperature calculated with exponential carbon dependence. Mater Sci Technol. 2012;28(4):487–495. doi: 10.1179/1743284711Y.0000000097
  • Barbier D. Extension of the martensite transformation temperature relation to larger alloying elements and contents. Adv Eng Mater. 2014;16(1):122–127. doi: 10.1002/adem.201300116
  • Eichelman GH, Hull FC. The effect of composition on the temperature of spontaneous transformation of austenite to martensite in 18-8-type stainless steel. Trans Amer Soc Met. 1953;45:77–95.
  • Finkler H, Schirra M. Transformation behaviour of the high temperature martensitic steels with 8–14% chromium. Steel Res Int. 1996;67(8):328–342. doi: 10.1002/srin.199605498
  • Kaar S, Steineder K, Schneider R, et al. New Ms-formula for exact microstructural prediction of modern 3rd generation AHSS chemistries. Scripta Materialia. 2021;200:113923. doi: 10.1016/j.scriptamat.2021.113923
  • Lee SJ, Jung M. Prediction of martensite start temperatures of highly alloyed steels. Arch Metallurgy Mat. 2021;66:224. doi: 10.24425/amm.2021.134765
  • Lee SJ, Park KS. Prediction of martensite start temperature in alloy steels with different grain sizes. Metall Mater Trans A. 2013;44(8):3423–3427. doi: 10.1007/s11661-013-1798-4
  • Yang HS, Bhadeshia HKDH. Austenite grain size and the martensite-start temperature. Scripta Materialia. 2009;60(7):493–495. doi: 10.1016/j.scriptamat.2008.11.043
  • Guimarães JRC, Rios PR. Martensite start temperature and the austenite grain-size. J Mater Sci. 2010;45(4):1074–1077. doi: 10.1007/s10853-009-4044-0
  • García-Junceda A, Capdevila C, Caballero FG, et al. Dependence of martensite start temperature on fine austenite grain size. Scripta Materialia. 2008;58(2):134–137. doi: 10.1016/j.scriptamat.2007.09.017
  • Bhadeshia H. Driving force for martensitic transformation in steels. Metal Sci. 1981;15(4):175–177. doi: 10.1179/030634581790426714
  • Ghosh G, Olson G. Kinetics of F.C.C. → B.C.C. heterogeneous martensitic nucleation—I. The critical driving force for athermal nucleation. Acta Metallurgica et Materialia. 1994;42(10):3361–3370. doi: 10.1016/0956-7151(94)90468-5
  • Ghosh G, Olson G. Kinetics of F.C.C. → B.C.C. heterogeneous martensitic nucleation—II. Thermal activation. Acta Metallurgica et Materialia. 1994;42(10):3371–3379. doi: 10.1016/0956-7151(94)90469-3
  • Liu C, Lu Y, Feng J, et al. Prediction and customized design of Curie temperature of Fe-based amorphous alloys based on interpretable machine learning. Mater Today Commun. 2023;38:107667. doi: 10.1016/j.mtcomm.2023.107667
  • Liu C, Wang X, Cai W, et al. Machine learning aided prediction of glass-forming ability of metallic glass. Processes. 2023;11(9):2806. doi: 10.3390/pr11092806
  • Liu C, Wang X, Cai W, et al. Optimal design of the austenitic stainless-steel composition based on machine learning and genetic algorithm. Materials. 2023;16(16):5633. doi: 10.3390/ma16165633
  • Liu C, Wang X, Cai W, et al. Prediction of the fatigue strength of steel based on interpretable machine learning. Materials. 2023;16(23):7354. doi: 10.3390/ma16237354
  • Liu C, Wang X, Cai W, et al. Prediction of magnetocaloric properties of Fe-based amorphous alloys based on interpretable machine learning. J Non-Crystalline Solids. 2024;625:122749. doi: 10.1016/j.jnoncrysol.2023.122749
  • Capdevila C, Caballero F, Andrés CGD. Prediction of martensite start temperature by neural network analysis. EDP Sciences; 2003. p. 217–221.
  • Rahaman M, Mu W, Odqvist J, et al. Machine learning to predict the martensite start temperature in steels. Metall Mater Trans A. 2019;50(5):2081–2091. doi: 10.1007/s11661-019-05170-8
  • Lu Q, Liu S, Li W, et al. Combination of thermodynamic knowledge and multilayer feedforward neural networks for accurate prediction of MS temperature in steels. Mater Design. 2020;192:108696. doi: 10.1016/j.matdes.2020.108696
  • Lee S-J, Lee Y-K. Prediction of austenite grain growth during austenitization of low alloy steels. Mater Design. 2008;29(9):1840–1844. doi: 10.1016/j.matdes.2008.03.009
  • S ZS, Q LM, G LY, et al. The growth behavior of austenite grain in the heating process of 300M steel. Mater Sci Eng A. 2011;528(15):4967–4972. doi: 10.1016/j.msea.2011.02.089
  • Xiong J, Zhang T, Shi S. Machine learning of mechanical properties of steels. Sci China Technol Sci. 2020;63(7):1247–1255. doi: 10.1007/s11431-020-1599-5
  • Yan Z, Li L, Cheng L, et al. New insight in predicting martensite start temperature in steels. J Mater Sci. 2022;57(24):11392–11410. doi: 10.1007/s10853-022-07329-y
  • Garcia-Mateo C, Peet M, Caballero F, et al. Tempering of hard mixture of bainitic ferrite and austenite. Mater Sci Technol. 2004;20(7):814–818. doi: 10.1179/026708304225017355
  • Bhadeshia H, David S, Vitek J, et al. Stress induced transformation to bainite in Fe–cr–mo–C pressure vessel steel. Mater Sci Technol. 1991;7(8):686–698. doi: 10.1179/mst.1991.7.8.686
  • Park E, Lee YJ. Estimates of standard deviation of Spearman’s rank correlation coefficients with dependent observations. Commun Stat Simul Comput. 2001;30:129–142. doi: 10.1081/SAC-100001863
  • Hauke J, Kossowski T. Comparison of values of Pearson’s and Spearman’s correlation coefficients on the same sets of data. Quaestiones Geographicae. 2011;30:87–93. doi: 10.2478/v10117-011-0021-1
  • G TC, S KR, M AA, et al. Extracting the variance inflation factor and other multicollinearity diagnostics from typical regression results. Basic Appl Social Psychol. 2017;39(2):81–90. doi: 10.1080/01973533.2016.1277529
  • Reshef DN, Reshef YA, Finucane HK, et al. Detecting novel associations in large data sets. Science. 2011;334(6062):1518–1524. doi: 10.1126/science.1205438
  • Zhang W, Wu C, Zhong H, et al. Prediction of undrained shear strength using extreme gradient boosting and random forest based on Bayesian optimization. Geosci Front. 2021;12(1):469–477. doi: 10.1016/j.gsf.2020.03.007
  • Shahriari B, Swersky K, Wang Z, et al. Taking the human out of the loop: a review of Bayesian optimization. Vol. 104. Vancouver: Proceedings of the IEEE; 2015. p. 148–175.
  • L NHK, Nakashima K, Tsuchiyama T, et al. Effect of solution nitriding on microstructure and hardness in 12% Cr martensitic stainless steels. Vol. 98. Kyushu: Tetsu-To-Hagane/Journal of the Iron Steel Institute of Japan; 2012. p. 25–31.
  • Haynes A, Steven W. The temperature of formation of martensite and bainite in low-alloy steel. J Iron Steel Inst. 1956;183:349–359.
  • Van Bohemen S, Sietsma J. Kinetics of martensite formation in plain carbon steels: critical assessment of possible influence of austenite grain boundaries and autocatalysis. Mater Sci Technol. 2014;30(9):1024–1033. doi: 10.1179/1743284714Y.0000000532
  • Zhang Z. An atlas of continuous cooling transformation (CCT) diagrams applicable to low carbon low alloy weld metals. Beijing: CRC Press; 2021.
  • Lundberg SM, Erion G, Chen H, et al. From local explanations to global understanding with explainable AI for trees. Nature Mach Intell. 2020;2(1):56–67. doi: 10.1038/s42256-019-0138-9
  • Štrumbelj E, Kononenko I. Explaining prediction models and individual predictions with feature contributions. Knowl Inf Syst. 2014;41(3):647–665. doi: 10.1007/s10115-013-0679-x
  • Field DM, Baker DS, Van Aken DC. On the prediction of α-martensite temperatures in medium manganese steels. Metall Mater Trans A. 2017;48(5):2150–2163. doi: 10.1007/s11661-017-4020-2
  • Tereshchenko N, Yakovleva I, Mirzaev D, et al. Features of isothermal formation of carbide-free bainite in high-carbon manganese-silicon steel. Phys Metals Metallogr. 2018;119(6):569–575. doi: 10.1134/S0031918X18060145
  • S WX, Narayana P, Maurya A, et al. Modeling the quantitative effect of alloying elements on the Ms temperature of high carbon steel by artificial neural networks. Materials Letters. 2021;291:129573. doi: 10.1016/j.matlet.2021.129573
  • Sourmail T, Smanio V. Low temperature kinetics of bainite formation in high carbon steels. Acta Materialia. 2013;61(7):2639–2648. doi: 10.1016/j.actamat.2013.01.044
  • J PM, S HH, Avettand-Fènoël M-N, et al. Low-temperature transformation to bainite in a medium-carbon steel. Int J Mater Res. 2017;108(2):89–98. doi: 10.3139/146.111461
  • Leiro A, Vuorinen E, Sundin K-G, et al. Wear of nano-structured carbide-free bainitic steels under dry rolling–sliding conditions. Wear. 2013;298:42–47. doi: 10.1016/j.wear.2012.11.064
  • A SM, Rementeria R, Kuntz M, et al. Low-temperature bainite: a thermal stability study. Metall Mater Trans A. 2018;49(6):2026–2036. doi: 10.1007/s11661-018-4595-2
  • A SM, Eres-Castellanos A, Ruiz-Jimenez V, et al. Quantitative assessment of the time to end bainitic transformation. Metals. 2019;9(9):925. doi: 10.3390/met9090925
  • Garcia-Mateo C, Caballero F, Capdevila C, et al. Estimation of dislocation density in bainitic microstructures using high-resolution dilatometry. Scripta Materialia. 2009;61(9):855–858. doi: 10.1016/j.scriptamat.2009.07.013
  • Zhou P, Guo H, M ZA, et al. Effect of pre-existing martensite on bainitic transformation in low-temperature bainite steel. Qingdao: Trans Tech Publ; 2017. p. 803–809.
  • Zhao J, Jia X, Guo K, et al. Transformation behavior and microstructure feature of large strain ausformed low-temperature bainite in a medium C-Si rich alloy steel. Mater Sci Eng A. 2017;682:527–534. doi: 10.1016/j.msea.2016.11.073
  • Seol J-B, Raabe D, Choi P-P, et al. Atomic scale effects of alloying, partitioning, solute drag and austempering on the mechanical properties of high-carbon bainitic–austenitic TRIP steels. Acta Materialia. 2012;60(17):6183–6199. doi: 10.1016/j.actamat.2012.07.064
  • Gao G, Zhang H, Tan Z, et al. A carbide-free bainite/martensite/austenite triplex steel with enhanced mechanical properties treated by a novel quenching–partitioning–tempering process. Mater Sci Eng A. 2013;559:165–169. doi: 10.1016/j.msea.2012.08.064
  • Tian J, Chen G, Xu Y, et al. Comprehensive analysis of the effect of ausforming on the martensite start temperature in a Fe-C-Mn-si medium-carbon high-strength bainite steel. Metall Mater Trans A. 2019;50(10):4541–4549. doi: 10.1007/s11661-019-05376-w
  • Soliman M, Palkowski H. Development of the low temperature bainite. Arch Civil Mech Eng. 2016;16(3):403–412. doi: 10.1016/j.acme.2016.02.007
  • Yang J, Lu Y, Guo Z, et al. Corrosion behaviour of a quenched and partitioned medium carbon steel in 3.5 wt.% NaCl solution. Corros Sci. 2018;130:64–75. doi: 10.1016/j.corsci.2017.10.027
  • Long X, Zhang F, Kang J, et al. Low-temperature bainite in low-carbon steel. Mater Sci Eng A. 2014;594:344–351. doi: 10.1016/j.msea.2013.11.089
  • Grajcar A, Zalecki W, Skrzypczyk P, et al. Dilatometric study of phase transformations in advanced high-strength bainitic steel. J Therm Anal Calorim. 2014;118(2):739–748. doi: 10.1007/s10973-014-4054-2
  • Naderi M, Saeed-Akbari A, Bleck W. The effects of non-isothermal deformation on martensitic transformation in 22MnB5 steel. Mater Sci Eng A. 2008;487(1–2):445–455. doi: 10.1016/j.msea.2007.10.057