50th ANNIVERSARY INVITED REVIEW

Review of studies on criticality safety evaluation and criticality experiment methods

Pages 1045-1061 | Received 28 Dec 2012, Accepted 01 Jul 2013, Published online: 24 Sep 2013

Abstract

Since the early 1960s, many studies on criticality safety evaluation have been conducted in Japan. Computer code systems were developed, initially employing finite difference methods and more recently Monte Carlo methods. Criticality experiments have also been carried out in many laboratories in Japan as well as overseas. By effectively using these study results, the Japanese Criticality Safety Handbook was published in 1988, near the midpoint of the last 50 years. Interest in criticality safety studies has increased, and a Working Party on Nuclear Criticality Safety (WPNCS) was set up by the Nuclear Science Committee of the Organisation for Economic Co-operation and Development in 1997. WPNCS has several task forces, each in charge of one of the following: the International Criticality Safety Benchmark Evaluation Project (ICSBEP), Subcritical Measurement, Experimental Needs, Burn-up Credit Studies, and Minimum Critical Values. Criticality safety studies in Japan have been carried out in cooperation with WPNCS. This paper describes criticality safety study activities in Japan along with the contents of the Japanese Criticality Safety Handbook and the tasks of WPNCS.

1. Introduction

The first criticality accident in Japan occurred on September 30, 1999, at the Tokai nuclear fuel-manufacturing plant operated by JCO Co., Ltd. The number of fission reactions in the accident was 2.5 × 10^18 over the whole period of about 20 h. Two employees died from radiation exposure. The radiation dose at a distance of 80 m from the plant was estimated at almost the permissible annual limit of 1 mSv. This was a typical criticality accident occurring outside reactors.

“Criticality” is a state in which the number of fission reactions does not change from generation to generation. In 1932, J. Chadwick discovered the elementary particle, the neutron. In 1938, O. Hahn and F. Strassmann discovered the phenomenon of nuclear fission. In the early 1900s, the concept of the nuclear cross section was constructed by a research group guided by E. Rutherford. In 1872, Ludwig Eduard Boltzmann published an important paper entitled “Further Studies on the Thermal Equilibrium of Gas Molecules.”

In the paper, he introduced the following equation:

\[
\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{r}} f + \frac{\mathbf{F}}{m}\cdot\nabla_{\mathbf{v}} f = \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}} \tag{1}
\]

The right-hand side of Equation (1) is a collision term. Expressing it with neutron cross sections, the following "Boltzmann transport equation" can be derived, which is often used in reactor physics:

\[
\frac{1}{v}\frac{\partial \phi}{\partial t} = -\mathbf{\Omega}\cdot\nabla\phi - \Sigma_t(E)\,\phi + \int_0^{\infty}\!\!\int_{4\pi} \Sigma_s(E'\!\to\! E,\mathbf{\Omega}'\!\to\!\mathbf{\Omega})\,\phi(\mathbf{r},E',\mathbf{\Omega}',t)\,d\mathbf{\Omega}'\,dE' + S(E,\mathbf{\Omega}) \tag{2}
\]

where \(\phi = \phi(\mathbf{r},E,\mathbf{\Omega},t)\) is the angular neutron flux and the fission source is

\[
S(E,\mathbf{\Omega}) = \frac{\chi(E)}{4\pi}\int_0^{\infty} \nu(E')\,\Sigma_f(E')\int_{4\pi}\phi(\mathbf{r},E',\mathbf{\Omega}',t)\,d\mathbf{\Omega}'\,dE' \tag{3}
\]

Setting the left-hand side of Equation (2) to zero and dividing the neutron source S(E, Ω) of Equation (3) by k eff, Equation (2) becomes an eigenvalue equation whose eigenvalue is k eff. The eigenvalue k eff physically means the neutron multiplication factor.

Cases of k eff being less than, equal to, and greater than unity correspond to a subcritical, a critical, and a supercritical state, respectively.

Determining the criticality of a system can be achieved through critical experiments using nuclear reactors.

As computers developed in function and performance, attempts were made to evaluate criticality from computed results. It was assumed that if the computed k eff of a system equaled 1.0, the system was critical. However, the accuracy of the computed results was not clear and the computer codes were immature. In the early days, computed results were not reliable enough to be used to directly determine criticality.

Due to the progress of experimental data collection, the arrangement of nuclear data, and the development of computer codes, the reliability of computed results rapidly improved. Regarding the collection of experimental data, many critical experiments have been performed, and the results are compiled in the ICSBEP Handbook [Citation1]. For the arrangement of nuclear data, Evaluated Nuclear Data Files (ENDF) are produced in the United States [Citation2] and European countries [Citation3], and are used as standard data. In Japan, the Japanese Evaluated Nuclear Data Libraries (JENDL) [Citation4] are compiled. Regarding the development of computer codes, finite difference method codes were developed first, and Monte Carlo method codes have been developed more recently. The Monte Carlo codes have become especially important for criticality evaluation calculations because they make it possible to treat geometry more rigorously than finite difference method codes. From the late 1960s, the multi-group Monte Carlo code KENO [Citation5] was often used. Recently, continuous-energy Monte Carlo codes, such as MCNP [Citation6] and MVP [Citation7], are frequently used for criticality calculations.
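The power of the Monte Carlo approach comes from following individual neutron histories. As a toy illustration (a minimal sketch with invented one-group cross sections, not how KENO, MCNP, or MVP are actually implemented), an analog simulation in an infinite homogeneous medium estimates k-infinity = νΣf/(Σc + Σf) by following each source neutron until it is absorbed:

```python
import random

def k_inf_analog(sig_s, sig_c, sig_f, nu, histories, seed=1):
    """Analog Monte Carlo estimate of k-infinity in an infinite
    homogeneous one-group medium.  Each source neutron is followed
    through scattering until it is captured or causes fission; nu is
    scored on fission.  The expected score is nu*sig_f/(sig_c+sig_f)."""
    rng = random.Random(seed)
    sig_t = sig_s + sig_c + sig_f          # total macroscopic cross section
    production = 0.0
    for _ in range(histories):
        while True:
            xi = rng.random() * sig_t      # sample the reaction type
            if xi < sig_s:
                continue                   # scattering: keep following
            elif xi < sig_s + sig_c:
                break                      # capture: history ends
            else:
                production += nu           # fission: nu neutrons produced
                break
    return production / histories

# With nu*sig_f/(sig_c+sig_f) = 2.5*0.04/0.10, the expected answer is 1.0.
k_est = k_inf_analog(sig_s=0.2, sig_c=0.06, sig_f=0.04, nu=2.5, histories=200000)
```

For a finite system the random walk would also track positions and leakage, and k eff would be estimated by power iteration over fission source generations.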

Regarding criticality safety evaluation methods, several criticality safety handbooks have been compiled in many countries. For example, TID-7016 of the United States [Citation8], CEA-R3114 of France [Citation9], and AHSB of the United Kingdom [Citation10] were prepared before 1980. These handbooks are based on critical experimental data and simplified estimation methods with safety margin tables. In Japan, TID-7016 of the United States was mainly used for criticality safety evaluation. There were, however, some differences in methods and data among those handbooks, and so the Science and Technology Agency of Japan decided to prepare the Criticality Safety Handbook of Japan. In producing it, a guiding principle was adopted: criticality should be assessed by newly advanced computer codes with credible nuclear data files. Because evaluation of the computed results is essential to satisfy this principle, a large amount of critical experimental data was analyzed with computer codes and the accuracy of the computations was examined. This methodology is quite different from that of the Western handbooks, in which criticality safety is evaluated with simplified computation methods supported by experimental data. While the Japanese Criticality Safety Handbook data were compiled from 1980 to 1988, tasks for evaluating the accuracy of criticality codes were maintained [Citation11] by task forces of the international organization OECD/NEA. Japanese researchers participated in these tasks and made many contributions, the fruits of which are reflected in the Japanese Handbook [Citation12] published in 1988.
Afterwards, with the cooperation of the OECD/NEA, many tasks concerning criticality safety have been maintained, such as the "International Criticality Safety Benchmark Evaluation Project (ICSBEP)" initiated in 1995, research on the "Use of sensitivity analysis methods for criticality safety evaluation," research on "Source convergence of Monte Carlo criticality calculation," research on "Burn-up credit," and research on "Uncertainty for criticality safety."

The second version of the Japanese handbook will be compiled using the results of the research studies mentioned above.

In Japan, numerous criticality and subcriticality experimental studies have been conducted in research laboratories of universities, manufacturers, and national institutes (the Japan Atomic Energy Research Institute (JAERI) and the Power Reactor and Nuclear Fuel Development Corporation). In the beginning, the studies were restricted to examining reactor cores. In the 1980s, the Japanese government approved a plan to construct a commercial nuclear fuel-reprocessing plant in Rokkasho-mura, where nuclear fuel is treated in many kinds of physical and chemical forms. Assessing the criticality conditions of such fuel materials is essential, and two critical facilities, the static experiment critical facility (STACY) and the transient experiment critical facility (TRACY), were constructed at JAERI. With TRACY, neutron phenomena in a supercritical state were studied, which provided useful information for the analysis of the JCO criticality accident in Japan.

The authors of this review paper consider it important to address the following criticality safety-related issues: (1) criticality safety guides and standards, (2) criticality safety evaluation methods, (3) criticality and subcriticality measurement techniques, and (4) preparedness for criticality accidents. From this point of view, this paper describes the activities for compiling the first Japanese Criticality Safety Handbook in Section 2.1. Enhancement of criticality safety evaluation methods, which has mainly been performed under the international cooperative activities of the OECD/NEA, is reviewed in Sections 2.2 and 2.3. Criticality experiments and subcriticality measurement techniques are reviewed in Section 3. Finally, Section 4 reviews criticality accident evaluation methods.

2. Criticality safety evaluation methods

2.1. Nuclear Criticality Safety Handbook of Japan

2.1.1. Preparation of Nuclear Criticality Safety Handbook of Japan

The nuclear safety guide was conceived by a group that met at the Rocky Flats Plant in 1955 to discuss industrial nuclear safety problems. The methodology for securing criticality safety of nuclear fuel facilities, the evaluation methods, and the required data were summarized in handbooks, standards, and guides including TID-7016, AHSB(S), and CEA-R3114 in the United States, the United Kingdom, and France, respectively. Computational methods and basic nuclear data, however, had either not yet been properly developed or had not reached sufficient sophistication to reliably predict the critical status of fissile materials. Criticality safety values were therefore estimated by applying safety factors or by using simplified calculation methods with these handbook data. In Japan, these handbooks were often used for evaluating criticality safety.

However, some differences were noted in criticality control methods and data among the handbooks. Over the years, substantial progress has been made in developing nuclear data and computer codes to evaluate criticality safety for nuclear fuel handling. Therefore, in compiling a Japanese handbook, the criticality safety values were determined based on calculations with precise computer codes validated against experimental data.

2.1.2. Preparation of the criticality safety evaluation codes JACS

Thanks to the development of computers, many computer codes and nuclear data files have been created. Until the 1970s, one- and two-dimensional finite-difference transport codes, using diffusion approximations or the discrete ordinates (Sn) method, were widely used for k eff calculations. In nuclear fuel-handling facilities, nuclear fuel takes many kinds of physical and chemical forms and complicated geometries. When evaluating the criticality safety of such facilities, Monte Carlo methods have been found to be more effective than finite difference methods.

In FY1982, JAERI began a project to improve computer codes and available data for nuclear safety assessment. In this project, the Monte Carlo code KENO-IV and the evaluated neutron cross-section data file ENDF/B-IV were used. With this code and data, the computer code system JACS [Citation13] was developed by JAERI.

A flow diagram of JACS is shown in Figure 1. As shown in this figure, JACS has four important functions: the first is to produce multi-group nuclear constants by processing ENDF/B-IV data with the MGCL-ACE [Citation14] system; the second is to calculate macroscopic cross sections of each material region with the MAIL or REMAIL [Citation14] codes; the third is to solve the neutron transport equation and evaluate k eff with Sn or Monte Carlo codes; and the fourth is to evaluate criticality safety with the accuracy evaluation table.

Figure 1 Calculation flow in JACS code system


A Multi-group Cross-Section Library (MGCL) is a Bondarenko-type constants library [Citation15] like the JAERI-FAST Set. The number of energy groups in the MGCL master library was limited to 137 due to the lack of computer capacity at the time of development.

To generate an “accuracy evaluation table,” many experimental criticality data were collected and analyzed with JACS [Citation16]. Figure 2 shows the computed results for homogeneous low-enriched uranium systems, together with a critical lower limit of k eff: if the k eff of a system is computed to be less than this value, the system is judged to be subcritical. This value was obtained by analyzing 389 experimental data using MGCL and KENO-IV.

Figure 2 JACS calculation results for critical experiment systems (part 1: homogeneous low-enriched uranium system)


A criticality working group of the NEA/NSC assessed many criticality codes around the world with criticality benchmark data. JACS performed well in this assessment [Citation17]. After this assessment, the OECD/NEA started some projects on criticality safety evaluation. In 1992, the International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated, which contributed to collecting experimental data that were used for evaluating the criticality safety analysis codes. As a result, it became easier to obtain the critical lower limit of a corresponding experiment system [Citation18]. Utilizing these analytical results effectively, sensitivity analysis methods are now being studied actively by a task group of the OECD/NEA.
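The statistical treatment behind such critical lower limits can be pictured as follows. Assuming, purely for illustration, the simple convention of taking the mean computed k eff over experimentally critical benchmarks minus three sample standard deviations (the handbook's actual statistics are more elaborate), a lower limit might be derived as:

```python
from math import sqrt

def critical_lower_limit(keff_results):
    """Illustrative critical lower limit: mean of the computed k_eff
    values of experimentally critical benchmarks, minus three sample
    standard deviations.  (Assumed convention for this sketch only.)"""
    n = len(keff_results)
    mean = sum(keff_results) / n
    var = sum((k - mean) ** 2 for k in keff_results) / (n - 1)
    return mean - 3.0 * sqrt(var)

# Computed k_eff values for benchmarks that were experimentally critical.
limit = critical_lower_limit([0.995, 1.002, 0.998, 1.005, 1.000])
```

A design whose computed k eff stays below such a limit would then be judged subcritical with the particular code and data used in the validation.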

2.2. Enhancement of criticality safety evaluation methods

2.2.1. Activities of the International Criticality Safety Evaluation Project

Validating a criticality safety evaluation method, in other words, judging how accurately the method can predict the eigenvalue k eff, is usually carried out by comparing the calculated k eff with relevant criticality experiments. Although thousands of criticality experiments have been performed throughout the world, many were not evaluated with a high degree of quality assurance and were not well documented. To identify, evaluate, verify, and formally document a comprehensive peer-reviewed set of criticality safety benchmark data, the ICSBEP was initiated in 1992 by the US Department of Energy. The ICSBEP then became an official activity of the OECD Nuclear Energy Agency in 1995. It is now recognized that the ICSBEP significantly reduces the time, budget, and workforce needed for the tedious process of evaluating critical experiment data scattered throughout journals, reports, and logbooks. The peer-reviewed ICSBEP evaluation reports are published every year as the ICSBEP Handbook, which is available on DVD or on the Internet. The 2011 edition of the ICSBEP Handbook [Citation19] contains benchmark specifications for 4552 critical or subcritical configurations covering an enormous number of combinations of nuclear fuels, moderators, absorbers, and neutron spectra. The ICSBEP is part of international collaborative efforts involving numerous scientists, engineers, administrative support personnel, and program sponsors from 24 different countries and the OECD/NEA. The ICSBEP Handbook is now widely used for validation and verification of reactor core design codes as well as criticality safety analysis tools.

Each evaluation report of the ICSBEP Handbook consists mainly of four sections. The first section contains a detailed description of the experiment. The second section contains an overall evaluation of the experiment, in which the effects of uncertainties in the data on k eff are discussed and quantified. The third section contains the data necessary to construct calculation models of the critical or subcritical systems. The fourth section contains the calculated k eff obtained with the benchmark specification data given in the third section. The input listings used in the fourth section are contained on the DVD of the ICSBEP Handbook.

The ICSBEP Handbook spans a wide variety and a large number of critical and subcritical configurations. With the increasing number of critical and subcritical configurations, it is becoming hard to find experiments that meet the user's requirements. To make effective use of the ICSBEP Handbook, a relational database DICE (Database for the International Handbook of Evaluated Criticality Safety Benchmark Experiments) [Citation20] has been developed and included in the DVD of the ICSBEP Handbook. DICE enables the user to easily access relevant experiments by specifying some related keywords (e.g., fuel types, moderator, neutron absorber, and neutron spectrum).
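The retrieval DICE performs can be pictured as a relational filter over benchmark records (a hypothetical miniature; the actual DICE schema, field names, and query interface differ):

```python
# Hypothetical miniature of a DICE-style query.  The identifiers follow
# the ICSBEP naming convention, but the records and fields are invented.
BENCHMARKS = [
    {"id": "LEU-COMP-THERM-001", "fuel": "LEU", "moderator": "water", "spectrum": "thermal"},
    {"id": "HEU-MET-FAST-001",   "fuel": "HEU", "moderator": "none",  "spectrum": "fast"},
    {"id": "PU-SOL-THERM-001",   "fuel": "Pu",  "moderator": "water", "spectrum": "thermal"},
]

def query(records, **criteria):
    """Return the records matching every supplied field=value constraint."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# All water-moderated thermal-spectrum configurations.
thermal_water = query(BENCHMARKS, spectrum="thermal", moderator="water")
```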

The ICSBEP will continue to provide and disseminate precise, structured, and formalized documentation of criticality safety-related benchmark data, along with best-estimate uncertainties, in a form that enables users to reliably validate analytical methods, including nuclear cross-section data.

2.2.2. Use of the sensitivity analysis method for criticality safety evaluation

Even though a criticality safety evaluation method is validated against a set of criticality experiments, the method cannot be applied to the criticality safety evaluation of fissionable systems outside its range of applicability. For example, a criticality safety evaluation method that has been validated against a fast uranium metal system cannot be directly applied to a thermal plutonium solution system because the systems are quite different. But how can we determine whether two fissionable systems are similar or not?

To answer this question, sensitivity and uncertainty (S/U) analysis techniques have been developed. TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) is one of the most comprehensive S/U tools [Citation21]. One of the outstanding features of TSUNAMI is its capability for perturbation-theory eigenvalue sensitivity calculation with Monte Carlo techniques. While most other sensitivity calculation tools rely on deterministic methods, TSUNAMI adopts Monte Carlo techniques, which are standard tools for criticality safety evaluations.

The TSUNAMI analysis techniques are based on the conjecture that systems whose neutron multiplication factors exhibit similar sensitivities to perturbations in the neutron cross-section data will have similar biases when the same computational method and the same nuclear data are used for the criticality safety analysis. TSUNAMI provides a degree of similarity between a benchmark experiment and a particular criticality safety application. The degree of similarity is an integral parameter (customarily denoted as c k) that couples the sensitivity data with tabulated cross-section covariance data to give a correlation coefficient that provides a measure of the shared variance due to cross-section uncertainties in the computed k eff for the application and a given experiment. The definition of c k is given below. Suppose that a symmetric M×M matrix containing the relative variances (diagonal elements) and covariances (off-diagonal elements) is given as C αα, where M is the number of nuclide-reaction pairs multiplied by the number of neutron energy groups. The uncertainty matrix is given by

\[
C_{kk} = S_k\, C_{\alpha\alpha}\, S_k^{\,t}
\]

where t indicates a transpose and S k is an I×M matrix containing the sensitivities of the calculated k eff of each system to the nuclear data parameters α, I being the number of critical systems considered. The sensitivity matrix S k is calculated by a Monte Carlo or discrete ordinates transport method installed in the SCALE code system. The C kk matrix consists of the relative variance values \(\sigma_i^2\) for each of the critical systems considered (diagonal elements) and the relative covariances \(\sigma_{ij}^2\) between two systems (off-diagonal elements). The degree of similarity between the systems i and j is

\[
c_k(i,j) = \frac{\sigma^2_{ij}}{\sigma_i\,\sigma_j}
\]

Based on the criteria given in Ref. [Citation22], any benchmark experiment that demonstrates a ck value of 0.8 or greater with a given application is deemed applicable for the criticality code validation of that system.
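Given a sensitivity matrix and a relative covariance matrix, the c k correlation can be computed directly. The sketch below uses invented two-group, two-system numbers; in practice S k would come from a TSUNAMI sensitivity calculation and C αα from an evaluated covariance library:

```python
from math import sqrt

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def ck_matrix(S, C):
    """Form the uncertainty matrix C_kk = S * C * S^t, then normalize
    each entry by the diagonal standard deviations to get c_k values."""
    St = [list(col) for col in zip(*S)]
    Ckk = matmul(matmul(S, C), St)
    n = len(Ckk)
    return [[Ckk[i][j] / sqrt(Ckk[i][i] * Ckk[j][j]) for j in range(n)]
            for i in range(n)]

# Two systems, two nuclide-reaction/energy-group parameters (invented).
S = [[0.8, 0.1],     # sensitivities of system 1's k_eff
     [0.1, 0.8]]     # sensitivities of system 2's k_eff
C = [[0.04, 0.00],   # relative covariance matrix of the parameters
     [0.00, 0.04]]
ck = ck_matrix(S, C)   # ck[0][1] is the similarity of systems 1 and 2
```

Two systems with identical sensitivity profiles give c k = 1, and systems sensitive to disjoint (uncorrelated) parameters give c k = 0.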

Traditional methods for choosing experiments with characteristics similar to those of the application being validated rely on the moderation level (H/X), the energy of average lethargy causing fission (EALF), etc. The value of c k, on the other hand, is expected to give a more quantitative measure of similarity between critical experiments and applications. Some critical experiments that have been deemed applicable in terms of the traditional criteria may be rejected as benchmark data in terms of c k; conversely, some critical experiments that have not been deemed applicable may be accepted as benchmark data. For example, the EALF of a uranium-plutonium mixed oxide powder (Pu content: 22 wt%, H/(Pu + U) = 1.6, density of mixed oxide: 5.5 g/cm3) is 3751 eV. An experimental critical configuration (Pu content: 30 wt%, H/(Pu + U) = 3), which is included in the ICSBEP Handbook, has an EALF of 41 eV [Citation22]. Judging from the EALF, this experiment does not seem appropriate for use as benchmark data for the MOX powder. However, the c k between the MOX powder and the experiment is 0.93, showing very close similarity of the two configurations. Furthermore, the c k values between the MOX powder and critical experiments with PuO2 compact blocks range from 0.80 to 0.91 [Citation22]. The conventional criteria for screening benchmark data for MOX fuel do not allow for experiments without uranium. However, based on the screening criteria of the TSUNAMI methodology, the experiments with PuO2 fuel may become usable as benchmark data for MOX fuel.

While the traditional methods for screening benchmark experiments are still being used for licensing, the value of c k could provide a more reasonable basis from a reactor physics point of view. The advent of TSUNAMI S/U methodology has caused a paradigm shift in the community of criticality safety evaluation in selecting, planning, and performing critical experiments.

2.2.3. Source convergence of Monte Carlo criticality calculation

Fission source convergence in Monte Carlo criticality calculations has been a challenging issue since the early stages of development of the Monte Carlo method. Fission source convergence is very important from a criticality safety point of view, since false convergence may lead to an erroneous effective multiplication factor. In the fall of 2000, the OECD/NEA Working Party on Nuclear Criticality Safety (WPNCS) Expert Group on Source Convergence in Criticality Analysis was established to explore this issue for the benefit of the international criticality safety community. The first phase of the Expert Group's program posed four challenging test problems in which slow convergence, under-sampling, or seemingly intractable noise were encountered. Various Monte Carlo criticality codes and methods were applied to the four test problems [Citation23]. The second phase of the Group's work, summarized in Ref. [Citation24], addresses source convergence studies with emphasis on their benefits and limitations.

The source convergence-related issues that we have to confront to seek more reliable criticality safety evaluations are (1) determining source convergence, (2) preventing false convergence, (3) accelerating convergence, and (4) eliminating bias in confidence intervals.

For the first issue (determining source convergence), a quantity called the Shannon entropy has been introduced to assess the convergence status of the fission source distribution [Citation25]. The Shannon entropy is based on information theory and is defined as

\[
H_{\mathrm{src}} = -\sum_{i=1}^{N} P_i \log_2 P_i
\]

where N is the number of regions for scoring fission source sites and P i is the ratio of the number of fission source sites in the i-th region to the total number of fission source sites. The Shannon entropy provides a single number for each cycle to represent the cycle-by-cycle transient of the fission source convergence. The Shannon entropy converges to a single steady-state value as the source distribution becomes stationary. Monte Carlo users are advised to examine the convergence of both k eff and the Shannon entropy.
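A minimal implementation of this diagnostic (using the base-2 logarithm, one common convention; the scoring regions are simply bins here):

```python
from math import log2

def shannon_entropy(site_counts):
    """Shannon entropy of a binned fission source distribution.
    Bins with zero sites contribute nothing, since p*log2(p) -> 0."""
    total = sum(site_counts)
    h = 0.0
    for count in site_counts:
        if count > 0:
            p = count / total
            h -= p * log2(p)
    return h

# A source spread uniformly over 8 bins has entropy log2(8) = 3;
# a fully collapsed source has entropy 0.  In a real calculation one
# plots this value cycle by cycle and waits for it to level off.
h_uniform = shannon_entropy([50] * 8)
h_point = shannon_entropy([400, 0, 0, 0, 0, 0, 0, 0])
```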

Regarding the second and third issues (preventing false convergence and accelerating convergence), we have to note that false convergence often occurs in systems suffering from slow convergence. Thus, in most cases, false convergence can be avoided by introducing a technique for accelerating fission source convergence. Most methods for accelerating convergence were developed for deterministic calculation methods, and implementing them with Monte Carlo techniques is not straightforward. However, two promising Monte Carlo methods for accelerating source convergence have been developed, i.e., the superhistory powering method [Citation26] and Wielandt's method [Citation27]. It should be noted that “acceleration” in Monte Carlo criticality calculation does not always mean reducing the total CPU time up to convergence. Rather, it means reducing the number of cycle iterations up to convergence.

In the superhistory powering method, a set of starting fission neutrons from the previous batch is tracked, including all their fission neutron progeny through generation N (typically 10). Only the fission source sites in the last Nth generation are used for the starting fission source sites in the next batch, thereby making source renormalizations at the end of batches less frequent.

Wielandt's method is a well-known acceleration technique that is commonly used in deterministic criticality calculations. The first attempt to adapt the method to Monte Carlo criticality calculations was made by Yamamoto and Miyoshi [Citation27]. In the same manner as the superhistory powering method, this adaptation of Wielandt's method tracks neutrons belonging to several forthcoming generations in each cycle. In contrast to the superhistory powering method, the neutron population always decays completely within a cycle. Moreover, new fission neutrons for the subsequent cycles are sampled from all neutron histories within the current cycle, i.e., not only from the last generation. The basic concept of Wielandt's method is that a fraction of the fission neutrons produced during the random walk processes, which would be used in the next cycle in the conventional method, are tracked within the current cycle. The fraction is controlled by a user-specified parameter k e, which has to be larger than k eff. This procedure reduces the dominance ratio. As k e approaches k eff, the dominance ratio decreases and fission chains in one cycle become longer. The longer fission chains within one cycle permit the source distribution to “spread out” more in a single iteration, decreasing the number of cycles for source convergence. Wielandt's method provides a drastic improvement in convergence rate and greatly reduces the likelihood of false convergence.
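The eigenvalue transformation underlying Wielandt's method can be demonstrated on a deterministic toy problem: power iteration on a 2×2 "fission matrix" with eigenvalues 1.2 and 0.6 (invented numbers), with and without a shift k e = 1.3. The Monte Carlo implementation differs, but the reduction in the number of iterations is the same effect:

```python
from math import hypot

A = [[0.9, 0.3],
     [0.3, 0.9]]      # eigenvalues 1.2 and 0.6, dominance ratio 0.5

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

def solve2(M, b):
    """Solve the 2x2 linear system M y = b by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - M[0][1]*b[1]) / det,
            (M[0][0]*b[1] - M[1][0]*b[0]) / det]

def power_iteration(step, recover, tol=1e-10, max_iter=10000):
    """Generic power iteration: step applies the iteration matrix and
    recover maps the transformed eigenvalue back to k."""
    x, k_old = [1.0, 0.0], 0.0
    for n in range(1, max_iter + 1):
        y = step(x)
        mu = hypot(*y) / hypot(*x)     # eigenvalue estimate from norms
        k = recover(mu)
        x = [c / hypot(*y) for c in y]  # renormalize the iterate
        if abs(k - k_old) < tol:
            return k, n
        k_old = k
    raise RuntimeError("power iteration did not converge")

# Conventional power iteration: x <- A x.
k_plain, n_plain = power_iteration(lambda x: matvec(A, x), lambda mu: mu)

# Wielandt's method: x <- (I - A/ke)^(-1) A x with ke > k_eff.  An
# eigenvalue k of A maps to mu = k*ke/(ke - k), so k = mu*ke/(ke + mu),
# and the dominance ratio shrinks sharply.
ke = 1.3
shifted = [[1 - A[0][0]/ke, -A[0][1]/ke],
           [-A[1][0]/ke, 1 - A[1][1]/ke]]
k_wiel, n_wiel = power_iteration(
    lambda x: solve2(shifted, matvec(A, x)),
    lambda mu: mu * ke / (ke + mu))
```

Both iterations converge to k eff = 1.2, but the shifted iteration needs far fewer steps because the transformed dominance ratio drops from 0.5 to about 0.07.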

As a method for avoiding false convergence, Naito and Yang proposed the novel “sandwich method” [Citation28]. The essence of the sandwich method is that the true converged k eff lies between the two k eff values obtained from two extreme initial source guesses giving the highest and lowest k eff, respectively. With this method, a range containing the finally converged k eff is estimated without fail. The sandwich method can be used with any code, Monte Carlo or deterministic.

The fourth issue is related to elimination of bias in confidence intervals of k eff. Even if we have a converged source distribution, individual samples of that distribution taken in each cycle may be correlated. This arises from the fact that a fission source location in one cycle is inherited from the fission source locations of its parent neutrons. This correlation leads to underprediction of the variances of k eff. One of the methods to eliminate the underestimation is the superhistory powering method, which was originally designed to reduce the bias of the k eff estimators that stems from frequent fission source renormalization. An iterative method applied to real variance estimation of k eff has been developed by Ueki, Mori, and Nakagawa [Citation29]. The relations among the “real” and “apparent” variances are formulated, where “real” refers to a true value that is calculated from independently repeated Monte Carlo runs and “apparent” refers to the expected value of estimates from a single Monte Carlo run.
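The gap between apparent and real variance can be demonstrated with any positively correlated sequence. Below, an AR(1) process stands in for the cycle-wise k eff estimates (a statistics illustration only, not a transport calculation; the correlation parameter is invented):

```python
import random

def ar1_series(rho, n, rng):
    """AR(1) sequence x_t = rho*x_(t-1) + e_t with e_t ~ N(0, 1).
    Positive rho mimics the cycle-to-cycle correlation of k_eff."""
    xs, x = [], 0.0
    for _ in range(n):
        x = rho * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def mean(v):
    return sum(v) / len(v)

def sample_var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

rng = random.Random(12345)
rho, n, runs = 0.8, 500, 200

# "Real" variance of the mean: spread of means over independent runs.
means = [mean(ar1_series(rho, n, rng)) for _ in range(runs)]
real_var = sample_var(means)

# "Apparent" variance of the mean from one run, naively assuming the
# samples are independent: sample variance divided by n.
apparent_var = sample_var(ar1_series(rho, n, rng)) / n

# For large n the ratio real/apparent approaches (1+rho)/(1-rho) = 9.
```

The same mechanism makes a single Monte Carlo criticality run understate the standard deviation of its k eff estimate.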

In Wielandt's method, only a fraction of the fission neutrons are used as the fission source for the next cycle. The remaining fission neutrons continue in the current cycle. This extends the number of fission generations tracked in each cycle. Kiedrowski [Citation30] demonstrated that as the number of fission generations within a cycle increases, the correlation and bias of the variance will decrease. Thus, Wielandt's method is an effective method for removing variance bias [Citation31].

2.3. Burn-up credit criticality benchmarks

The reactivity of nuclear fuel decreases with irradiation (or burn-up) due to the transformation of heavy nuclides and the formation of fission products (FPs). Burn-up credit studies aim at accounting for fuel irradiation in criticality studies of the nuclear fuel cycle. The Japanese Criticality Safety Handbook discusses the reactivity effects of fuel irradiation and the change in composition of spent fuel during irradiation. In 1995, Naito et al. presented a study on a criticality safety evaluation method for burn-up credit at JAERI [Citation32]. Kurosawa et al. presented a report titled “Isotopic Composition of Spent Fuels for Criticality Safety Evaluation: Isotopic Composition Database (SFCOMPO)” [Citation33].
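The isotopic half of the burn-up credit problem is depletion. Its simplest instance, a two-member transmutation chain under constant flux, already has the classic Bateman solution; the sketch below (with invented effective removal constants, unrelated to any real nuclide) checks the analytic formula against a fine-step Euler integration:

```python
from math import exp

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for the chain N1 -> N2 -> (removed),
    where lam1 and lam2 are effective removal constants (decay plus
    flux times cross section) of parent and daughter."""
    n1 = n1_0 * exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (exp(-lam1 * t) - exp(-lam2 * t))
    return n1, n2

def bateman_euler(n1_0, lam1, lam2, t, steps=200_000):
    """Explicit Euler integration of the same chain, for cross-checking.
    The tuple assignment uses the old n1 and n2 on the right-hand side."""
    dt = t / steps
    n1, n2 = n1_0, 0.0
    for _ in range(steps):
        n1, n2 = n1 - lam1 * n1 * dt, n2 + (lam1 * n1 - lam2 * n2) * dt
    return n1, n2

# Invented constants: parent removed quickly (lam1 = 2), daughter slowly.
analytic = bateman_two(1.0, 2.0, 0.5, 1.0)
numeric = bateman_euler(1.0, 2.0, 0.5, 1.0)
```

Production depletion codes chain dozens of nuclides and couple each depletion step to the neutron flux solution; the benchmark comparisons in the following subsections probe exactly that machinery.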

Many criticality safety calculation tools have been developed. However, they must be shown to be appropriate for application to burned fuel systems and to guarantee a reasonable safety margin. To this end, a suite of burn-up credit criticality benchmarks has been established by the NEA. The benchmarks were selected to allow a comparison of results among participants using a wide variety of calculation tools and nuclear data sets. The nature of the burn-up credit problem requires that the capability to calculate both spent fuel composition and reactivity be demonstrated. The benchmark problems were selected to investigate code performance over a variety of physics issues associated with burn-up credit, as described in Table 1. In 1991, the OECD/NEA inaugurated a benchmark group to pursue the study of burn-up credit criticality benchmarks. Many Japanese researchers benefitted from these projects. The following is a summary of the results.

Table 1 Summary of benchmark problems addressed by the OECD/NEA Burn-up Credit Task Force

2.3.1. Burn-up Credit Criticality Benchmark Final Results of Phase IA: Infinite Array of PWR Pin-Cells [Citation34]

The report describes the final results of Phase IA of the Burn-up Credit Criticality Benchmark conducted by the OECD/NEA. The Phase IA benchmark problem is an infinite array of simple PWR spent fuel rods. Analysis was done on PWR spent fuels of 30 and 40 GWd/tHM burn-up after 1 year and 5 years of cooling time. In total, 25 results from 19 institutes in 11 countries were submitted.

For the nuclides in spent fuel, 7 major actinides and 15 major FPs were selected for the benchmark calculation. In the case of 30 GWd/tHM burn-up, it was found that the major actinides and the major FPs contributed more than 50% and 30%, respectively, of the total reactivity loss due to burn-up. Therefore, more than 80% of the reactivity loss is covered by these 22 nuclides. However, larger deviations among the reactivity losses reported by participants were found for cases including FPs than for cases with only actinides, indicating the existence of relatively large uncertainties in FP cross sections. The large deviation also seen in the case of fresh fuel was found to decrease sufficiently when the cross-section library was changed from ENDF/B-IV to ENDF/B-V.

2.3.2. Burn-up Credit Criticality Benchmark Phase IIA conducted by OECD/NEA [Citation35]

In the Phase IIA benchmark problems, the effect of an axial burn-up profile of PWR spent fuel on criticality (the end effect) was studied. The axial profiles at 10, 30 and 50 GWd/tHM burn-up were considered. In total, 22 results from 18 institutes of 10 countries were submitted. The calculated multiplication factors from the participants fall within a band of ±1% Δk eff. For burn-ups of up to 30 GWd/tHM, the end effect was found to be less than 1.0% Δk eff. For the 50 GWd/tHM case, however, the effect was more than 4.0% Δk eff when both actinides and FPs were taken into account, whereas it remained less than 1.0% Δk eff when only actinides were considered. The fission density data indicate the importance of the end regions in the criticality safety analysis of spent fuel systems.

2.3.3. Burn-up Credit Criticality Benchmark Phase IB Isotopic Prediction [Citation36]

This report summarizes the results and findings of the Phase IB benchmark, which was proposed to compare the abilities of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. The results presented here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effects on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree within 10% in estimating the spent fuel concentrations of most actinides, and all methods agree within 11% for all FPs studied. Furthermore, most deviations are less than 10%, and many are less than 5%.

2.3.4. Burn-up Credit Criticality Benchmark Analysis of Phase IIB [Citation37]

The OECD/NEA “burn-up credit Criticality Benchmark” working group has studied the effect of axial burn-up profile on the criticality of a realistic PWR spent fuel transport cask (Phase IIB). The final results of this benchmark are presented and analyzed in this report.

Nine basic cases and two additional accident configurations were considered with the following varying parameters: burn-up (0 GWd/tHM for fresh fuel, 30 and 50 GWd/tHM), fuel composition (actinides only, and actinides with 15 FPs), and axial burn-up discretization (1 or 9 zones). In all, 14 participants from seven different countries submitted partial or complete results (multiplication factors and fission reaction rates). Good agreement was found between participants for calculated k eff. The dispersion of results, characterized by 2σr (where σr is the ratio of the standard deviation to the average value), ranged from 0.5% to 1.1% for irradiated fuels and equaled 1.3% for fresh fuel. The reactivity effect of the axial burn-up profile for the basic cases was similar to that obtained in Phase IIA: less than 1000 pcm for cases with burn-up less than or equal to 30 GWd/tHM or for cases without FPs, and about −4000 pcm for 50 GWd/tHM burn-up with a composition including FPs. However, the two accident cases highlighted that the reactivity effect of the axial burn-up discretization depends on the configuration studied. For the accident conditions defined for this benchmark, the axially averaged flat distribution was found to be a nonconservative approximation even for low burn-ups (10 GWd/tHM) and without FPs, owing to the reactivity effect of the burn-up profile.
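The dispersion statistic used in this benchmark can be sketched in a few lines of Python; the participant k eff values below are hypothetical, not the actual benchmark submissions:

```python
import statistics

def dispersion_2sigma_r(keff_values):
    """Benchmark dispersion statistic 2*sigma_r, where sigma_r is the
    ratio of the sample standard deviation to the mean of the
    participants' calculated k_eff values."""
    mean = statistics.mean(keff_values)
    return 2.0 * statistics.stdev(keff_values) / mean

# Hypothetical participant results for one irradiated-fuel case.
keff_results = [0.936, 0.939, 0.941, 0.938, 0.935, 0.942]
dispersion = dispersion_2sigma_r(keff_results)   # about 0.6%
```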

2.3.5. Burn-up Credit Criticality Benchmark Phase IIIA [Citation38]

The report describes the final results of the Phase IIIA Benchmarks conducted by the Burn-up Credit Criticality Calculation Working Group under the auspices of the OECD/NEA. The benchmarks are intended to confirm the predictive capability of current computer code and data library combinations for the neutron multiplication factor k eff of a layer model of an irradiated BWR fuel assembly array. In total, 22 benchmark problems were proposed for calculations of k eff. The effects of the following parameters were investigated: cooling time, inclusion/exclusion of FP nuclides, axial burn-up profile, and inclusion of an axial profile of void fraction or constant void fractions during burn-up. Axial profiles of fractional fission rates were further requested for 5 cases out of the 22 problems. Twenty-one sets of results are presented, contributed by 17 institutes from 9 countries. The relative dispersion of the k eff values calculated by the participants from the mean value was almost all within a band of ±1% Δk/k. The deviations from the averaged calculated fission rate profiles were found to be within ±5% for most cases.

2.3.6. OECD/NEA Burn-up Credit Criticality Benchmarks Phase IIIB: Burn-up Calculations of BWR Fuel Assemblies for Storage and Transport [Citation39]

The report describes the final results of the Phase IIIB Benchmark conducted by the Expert Group on Burn-up Credit Criticality Safety under the auspices of the NEA of the OECD. The Benchmark was intended to compare the predictability of current computer code and data library combinations for the atomic number densities of an irradiated BWR fuel assembly model. The fuel assembly was irradiated at a specific power of 25.6 MW/tHM up to 40 GWd/tHM and cooled for 5 years. The void fraction was assumed to be uniform throughout the channel box and constant, at 0%, 40% and 70%, during burn-up. In total, 16 results were submitted from 13 institutes of 7 countries. The calculated atomic number densities of 12 actinides and 20 FP nuclides were found to lie for the most part within a range of ±10% relative to the average, although some results, especially for 155Eu and the gadolinium isotopes, exceeded the band and will require further investigation. Pin-wise burn-up results were in agreement among the participants. The results for the infinite neutron multiplication factor were also in agreement with each other for the void fractions of 0% and 40%; however, some results deviated noticeably from the averaged value for the void fraction of 70%.

2.3.7. The next challenge for the burn-up credit method

The next challenge for the burn-up credit method lies in its application to mixed oxide (MOX) fuels, i.e., fuels containing a mixture of uranium and plutonium oxides. A comprehensive MOX benchmark study would contain all the elements of the previous phases, but with the added difficulties associated with the nonunique specification of MOX fuels and the manner in which they would be utilized in existing thermal reactor designs. The definition of a universally attractive benchmark exercise is further complicated by the differing incentives for adopting a MOX fuel strategy among the participants' countries [Citation40,Citation41].

2.4. Concluding remarks on the review of criticality safety evaluation method

This section has reviewed the compilation of the first Japanese Criticality Safety Handbook and the international cooperative activities on criticality safety evaluation methods that have been promoted mainly by the WPNCS of the OECD/NEA. The delegates from Japan have played a key role in all these activities, and the Working Party's activities have made a significant contribution to enhancing the accuracy of computed k eff.

3. Subcriticality measurement

3.1. Purpose of subcriticality measurement

Subcriticality measurement is an experimental technique to measure the reactivity difference from a critical state; the absolute value of the negative reactivity is defined as the subcriticality, expressed in units of dk/k (or pcm) or dollars.
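As a minimal illustration of these units, the following Python sketch converts a subcritical k eff into dk/k, pcm, and dollars; the β eff value of 0.0065 is a typical light-water-reactor figure assumed here purely for illustration:

```python
def subcriticality_units(k_eff, beta_eff=0.0065):
    """Express the subcriticality of a state with multiplication factor
    k_eff in dk/k, pcm, and dollars.  beta_eff = 0.0065 is a typical
    light-water-reactor value, assumed here for illustration."""
    rho = (k_eff - 1.0) / k_eff      # reactivity (negative when subcritical)
    sub = -rho                       # subcriticality = |negative reactivity|
    return {"dk/k": sub, "pcm": sub * 1.0e5, "dollars": sub / beta_eff}

s = subcriticality_units(0.95)       # about 5263 pcm, about 8.1 dollars
```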

Research on this measurement has been carried out since the early stages of nuclear development until the present day. However, the purpose and the target facilities for this measurement have gradually changed during that period. In the 1950s, several subcritical assembly facilities were constructed in research institutes or universities, and initial reactor physics experiments were started in Japan to measure various reactor physics parameters such as critical mass. Subcriticality was one of the measurement items in these experiments [Citation42]. The subcritical assemblies were soon replaced by critical assemblies constructed in the 1960s, such as TCA, FCA, and SHE, in the Japan Atomic Energy Research Institute (the present Japan Atomic Energy Agency). With these critical assemblies, subcriticality could be determined with reference to data measured at the critical state, so the accuracy of measured subcriticality was greatly improved compared with the subcritical assembly experiments, and subcriticality measurement was applied to various reactivity measurements such as material sample worth [Citation43]. In this period, nuclear criticality safety during the treatment of nuclear materials was a very important issue in the design of nuclear facilities. However, subcriticality measurement was not applied, because the accuracy of measured subcriticality was still considerably poor in this era. This situation changed in the 1980s with the decision to construct a new commercial fuel-reprocessing plant in Rokkasho. As a result, the importance of subcriticality measurement increased in nuclear fuel-cycle facilities, including fuel-reprocessing plants, spent fuel storage facilities, etc., and various kinds of studies in this field were carried out [Citation44]. The development of devices that can measure subcriticality accurately in a short time, described later, further increased this importance.
In the 1990s, application of subcriticality measurement to spent fuel facilities was studied with the aim of using such facilities effectively: by monitoring subcriticality, the number of fuel assemblies stored in the limited spent-fuel pool area could be increased. This was considered a new purpose of subcriticality measurement, not for criticality safety but for economic reasons. At present, application of subcriticality measurement is being attempted in reactor physics experiments in power reactors, where reactivity such as control rod worth is measured in a subcritical state to shorten the period of reactor physics experiments before start-up of high-power operation [Citation45]. Subcriticality monitoring in the Accelerator-Driven Subcritical System (ADS) [Citation46] is also one of the important subjects to assure the safe operation of ADS. Moreover, in the treatment of fuel debris at the Fukushima-1 power station of TEPCO, subcriticality measurement will be necessary from a criticality safety point of view for future decommissioning.

3.2. Various subcriticality measurement methods

In the measurement of subcriticality, several common difficulties have been encountered in the various measurement methods that have been developed since the early days of these techniques. The first is the necessity of an external neutron source, such as 252Cf or accelerator neutrons, to maintain the fission chain reactions, except in systems containing a spontaneous fission source, such as plutonium solution. An external neutron source is usually placed outside the objective system, so the measured results depend strongly on the neutron source position; this is called the space dependency problem. The effect can be reduced if the neutron source is placed inside the system; however, it grows as the subcriticality increases, because the total fission rate caused by fissile materials in the system decreases relative to the intensity of the external neutron source. To overcome this difficulty, several experimental techniques have been proposed: increasing the number of neutron detectors and summing up their data, considering higher-mode fluxes caused by the external neutron source [Citation47], etc. However, the problem has not yet been completely solved. The second difficulty is the low neutron count rate of the detector, caused by the limited external neutron source intensity, which degrades the accuracy of the measured subcriticality. This can be overcome only by increasing the measurement time or the detection efficiency, which is hard to achieve in reality. These difficulties should always be considered during subcriticality measurement.

Various subcriticality measurement methods are briefly described below. Note that most of them are based on the well-known one-point reactor kinetics equation.
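As a rough sketch of the one-point kinetics model underlying most of these methods, the following Python fragment integrates a single delayed-neutron group with a constant external source; all parameter values are illustrative, not taken from any particular facility:

```python
def point_kinetics_subcritical(rho, source, t_end=200.0, dt=1.0e-4,
                               beta=0.0065, lam=0.08, Lambda=1.0e-4):
    """One delayed-neutron group, one-point kinetics with a constant
    external source, integrated by explicit Euler.  All parameter
    values (beta, lam, Lambda) are illustrative."""
    n, c = 0.0, 0.0                     # neutron level, precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * c + source
        dc = (beta / Lambda) * n - lam * c
        n += dn * dt
        c += dc * dt
    return n

# In a subcritical steady state the neutron level approaches
# source * Lambda / (-rho), which is the basis of source multiplication.
n_ss = point_kinetics_subcritical(rho=-0.01, source=1.0e5)
```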

3.2.1. Control rod drop method

In this method, the step-wise negative reactivity caused when a control rod is dropped into the system is determined by measuring the flux change profile before and after the insertion [Citation48]. This method is often adopted for control rod worth calibration, and it can be applied up to fairly large subcriticalities. Because a neutron absorber is inserted locally into the system and disturbs the neutron flux at the corresponding position, the space dependency problem occurs near the periphery of the control rod.

3.2.2. Neutron source extraction method

This method involves extracting the external neutron source quickly, or turning off the accelerator power, to terminate neutron production, and measuring the flux change profile before and after the extraction, similar to the control rod drop method [Citation49]. In this method, it is necessary to know the neutron count rate in another subcritical state where the subcriticality is known beforehand. It is a simple experimental method; however, except for very shallow subcritical states, the neutron flux decreases very rapidly and it is difficult to measure the subcriticality accurately.

3.2.3. Neutron source multiplication method

In this easily implemented method, neutron count rates are measured in a known subcritical state, and the subcriticality of the objective unknown state can then be obtained from the measured neutron count rate by assuming constant neutron source intensity. To improve the accuracy of the measurement, a modified neutron source multiplication method was developed, in which calculated neutron flux and adjoint neutron flux distributions are utilized to correct the space dependency of the measured data [Citation50]. This method is widely used in reactivity measurement for sample worth determination.
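The count-rate relation behind this method can be sketched as follows; the numerical values are hypothetical:

```python
def keff_by_source_multiplication(k_known, rate_known, rate_unknown):
    """With constant external source intensity, the detector count rate
    is proportional to the source multiplication 1/(1 - k_eff), so
    (1 - k_unknown) = (1 - k_known) * rate_known / rate_unknown."""
    return 1.0 - (1.0 - k_known) * rate_known / rate_unknown

# If the count rate halves relative to a calibrated k_eff = 0.98 state,
# (1 - k_eff) doubles and the unknown state has k_eff = 0.96.
k_unknown = keff_by_source_multiplication(0.98, 1000.0, 500.0)
```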

3.2.4. Pulsed neutron method

This is the most famous experimental technique to measure subcriticality, and for this reason, a pulsed neutron source device has been installed in most critical facilities. In this method, pulsed neutrons produced by such means as a DT-reaction accelerator are repeatedly injected into the objective system, and the neutron flux change after the injection is measured. Two procedures are used to analyze the experimental data. One is to obtain the prompt neutron decay constant by measuring the decay profile of prompt neutrons just after the injection; the subcriticality is then calculated using the effective delayed neutron fraction and the neutron generation time, both of which are calculated or measured in other known subcritical conditions. The other is called the area ratio method, in which the integrated flux levels of both the prompt neutron and delayed neutron components are measured [Citation51]. The advantage of this method is that subcriticality can be obtained in dollar units without using measured results from another known subcritical state. This method can also be applied to a large subcritical state.
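A simplified sketch of the area ratio analysis, applied here to a synthetic histogram rather than real pulsed-neutron data, might look like the following (the bin width cancels in the ratio and is therefore omitted):

```python
import math

def area_ratio_dollars(counts, prompt_end):
    """Area-ratio analysis of a pulsed-neutron time histogram:
    subcriticality in dollars is the prompt-decay area divided by the
    delayed-neutron area.  The delayed level is estimated from the
    (nearly flat) tail of the histogram beyond index `prompt_end`."""
    tail = counts[prompt_end:]
    delayed_level = sum(tail) / len(tail)
    delayed_area = delayed_level * len(counts)
    prompt_area = sum(counts) - delayed_area
    return prompt_area / delayed_area

# Synthetic histogram: a prompt decay exp(-alpha*t) on a delayed plateau.
counts = [5000.0 * math.exp(-200.0 * i * 1.0e-4) + 100.0 for i in range(1000)]
rho_dollars = area_ratio_dollars(counts, prompt_end=500)
```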

3.2.5. Neutron noise analysis method

Even in a steady-state subcritical condition, the neutron count rate of a detector always fluctuates, and analysis of this fluctuation is called the neutron noise analysis method. Two well-known approaches are utilized: time domain analysis and frequency domain analysis. For time domain analysis, a variance-to-mean method, the Feynman-alpha method, is often adopted in subcriticality measurement [Citation52]. In this method, the deviation of the neutron count distribution from the Poisson distribution is analyzed, and it is possible to obtain the prompt neutron decay constant, which can be converted to subcriticality [Citation53]. To reduce the space dependency problem, a correlation measurement with two neutron detectors has been developed. Since this method is easy to employ, requiring only an external neutron source and a neutron detector, it is a promising candidate for subcriticality measurement in an operating nuclear fuel facility. It is currently used in reactor physics experiments in power reactors such as Monju [Citation45]. An extension of this method to the measurement of the third moment of neutron correlation, a new noise analysis method, was proposed to determine the absolute value of subcriticality without any other calibration data [Citation54], and experimental confirmation of this method is now progressing.
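The core quantities of the Feynman-alpha analysis can be sketched as below; the fitting of measured Y(T) values against the model curve, which is what actually yields the prompt neutron decay constant, is left out of this sketch:

```python
import math
import statistics

def feynman_Y(gate_counts):
    """Feynman-alpha statistic: excess of the gate-count
    variance-to-mean ratio over the Poisson value of 1.
    Y > 0 indicates chain-correlated counts."""
    return (statistics.pvariance(gate_counts)
            / statistics.mean(gate_counts) - 1.0)

def feynman_model(T, Y_inf, alpha):
    """Expected Y as a function of gate width T; fitting measured Y(T)
    with this curve yields the prompt neutron decay constant alpha."""
    return Y_inf * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))
```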

Frequency domain analysis transforms the measured neutron detection time-series data by the Fourier transformation method to obtain the prompt neutron decay constant. A noted method based on frequency analysis is the Mihalczo method, in which a specially designed 252Cf-containing detector is used in combination with two other ordinary detectors. It makes it possible to measure subcriticality without calibration data from a known subcritical condition [Citation55].

3.2.6. Exponential experiment

Unlike the other methods, this method measures the neutron flux distribution and the decay constant of the neutron flux with distance from the neutron source, which is related to the subcriticality [Citation56]. In a real situation, such as a spent fuel facility, the neutron flux distribution is measured by moving a neutron detector, and if the burn-up distribution is complicated, several calculational corrections will be required.
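A minimal sketch of the data reduction in an exponential experiment, assuming an ideal exponential flux profile and a log-linear least-squares fit, might be:

```python
import math

def decay_constant_fit(positions, flux):
    """Log-linear least-squares fit of flux ~ exp(-gamma*z) measured at
    several detector positions; returns the spatial decay constant
    gamma, which is related to the subcriticality of the system."""
    y = [math.log(f) for f in flux]
    n = len(positions)
    zbar = sum(positions) / n
    ybar = sum(y) / n
    slope = (sum((z - zbar) * (yi - ybar) for z, yi in zip(positions, y))
             / sum((z - zbar) ** 2 for z in positions))
    return -slope

# Illustrative data: an exactly exponential profile with gamma = 0.05 /cm.
z = [10.0, 20.0, 30.0, 40.0]
phi = [math.exp(-0.05 * zi) for zi in z]
gamma = decay_constant_fit(z, phi)
```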

3.3. Development of measuring devices in subcriticality measurement

Among the various subcriticality measurement methods, the achievement of successful results by the neutron noise analysis method depends largely on the development of data acquisition devices, which distinguishes it from other reactor physics experimental methods. In the early days of reactor experiments, neutron noise analyses were carried out with analog data acquisition systems such as an oscilloscope or a chart recorder; later, digital recording systems such as the Multichannel Scaler (MCS) were developed to acquire time-series data from neutron detectors continuously, and as a result, the accuracy of measured subcriticality greatly improved [Citation3–Citation7]. Because of such data acquisition systems, the neutron noise analysis method can be applied to subcriticality measurement. In the early stages of the MCS, it took a long time to analyze measured data; however, owing to the development of computer systems, it is currently possible to analyze the data in real time. These days, data acquisition systems have greatly improved: continuous measurement with a time resolution of less than 100 ns for several detectors, simultaneous measurement of pulse-height data through an analog-to-digital converter, simultaneous measurement of the pulse shape itself, and transfer of the measured results to a computer in real time. Such a system is close to the ultimate for neutron noise analysis, and was a distant dream for researchers in the early days.

In contrast to the rapid development of data acquisition systems, conventional neutron detectors such as BF3 or 3He counters are still utilized in subcriticality measurement. The required properties for neutron detectors are high detection efficiency, small size, discrimination of gamma-ray noise, etc., and these requirements are met by the above counters. Some subcriticality measurements are carried out using a thin optical fiber detector [Citation57] or a position-sensitive detector that can measure a one-dimensional flux distribution in real time [Citation58]. However, most subcriticality measurements have been performed using existing detectors such as the BF3 counter.

3.4. Review of development of measuring devices and future works

As mentioned above, many kinds of devices and methods have been developed, such as the (a) control rod drop method, (b) neutron source extraction method, (c) neutron source multiplication method, (d) pulsed neutron method, (e) neutron noise method, and (f) exponential experiment. These methods have been reviewed, and future work on them is discussed below.

The purpose of subcriticality measurement has gradually changed, but it is still an important issue to assure nuclear criticality safety in nuclear fuel cycle facilities and to improve economic efficiency during the operation of those facilities. Moreover, subcriticality measurement has begun to be applied to other fields, such as reactivity measurement in power reactors, monitoring in ADS operation, and fuel debris treatment. Several problems, space dependency for example, need to be solved in the future for precise and reliable measurement. Subcriticality measurement using new detectors is also needed: for example, if detectors that can acquire neutron spectrum information, very small detectors, or two-dimensional flux measuring detectors are developed, innovative theory for subcriticality measurement will need to be developed for accurate measurement. This can only be achieved by future collaboration between reactor physicists and radiation physics researchers. Recently, the number of available critical facilities and research reactors for basic criticality safety experiments in Japan has been decreasing, so joint collaboration between these facilities and researchers should be encouraged to continue future basic research for the development of subcriticality measurement.

4. Methods for criticality accident evaluation

The first criticality accident reported in Ref. [Citation59] occurred at Los Alamos Scientific Laboratory in the United States on 11 February 1945. Since then, more than 50 criticality accidents have occurred in the world. In Japan, two criticality accidents occurred in 1999. One is known as the JCO accident in Tokai-mura, which occurred on 30 September [Citation60,Citation61]; the other is the unintended withdrawal of control rods at the Shiga 1 nuclear power plant of Hokuriku Electric Power Company on 18 June [Citation62].

Apart from criticality accidents at reactors or assemblies, 22 criticality accidents occurred during fuel-processing operations, of which 21 involved fissile material in solutions and slurries [Citation59]. Therefore, much work has been done on criticality accidents with fissile material in solution.

In Japan, as of 2012, one nuclear fuel-reprocessing plant is running in Tokai-mura, and another is under construction in Rokkasho-mura, where an MOX fuel fabrication plant is also planned. For each plant, the total number of fissions for a postulated scenario has been evaluated. As an example, the total number of fissions assumed for the Rokkasho fuel-reprocessing plant is 10^19 for design basis accidents and 10^20 for site evaluation accidents [Citation63]. These are upper bounding values, set large enough, based on the experience of past criticality accidents, to evaluate technical adequacy.

This section focuses on criticality accidents in solution systems, because most criticality accidents have occurred in solutions during processing operations and much work has been devoted to them. On the other hand, it should be mentioned that the scope of criticality accident evaluation methods is wide, with many other issues such as MOX powder systems, re-criticality in the Fukushima accident, geological disposal, and criticality accidents in power reactors.

4.1. Simple evaluation method

From the late 1950s to the early 1960s, criticality accidents frequently occurred in the United States and Russia. Additionally, some experiments on criticality accidents of fissile material in solution were conducted using the Kinetic Experiment Water Boiler (KEWB) in the United States from 1956 to 1966 [Citation64,Citation65], the VIRs in Russia since 1964 [Citation66], IGRIK in Russia since 1976 [Citation67,Citation68], Conséquences Radiologiques d'un Accident de Criticité (CRAC) in France from 1967 to 1971 [Citation69], and SILENE in France from 1974 to 2010 [Citation70,Citation71].

From the 1970s, in light of the results of those experiments, some simple methods were proposed to estimate the total number of fission reactions in criticality accidents of nuclear fuel solution [Citation72–Citation77]. The methods were based on the experiments and on the experience of past criticality accidents. The maximum power or the total number of fissions was expressed as a function of the volume of the fuel solution or the container, etc. It was reported that most simple methods give the upper bounding value of the number of fissions occurring in a criticality accident.

The equation of each method is shown in Table 2. The features of each method are as follows:

Table 2 Simple evaluation methods

a. Tuck [Citation72]

The total number of fissions is expressed by a function of the solution volume. It is based on the consideration that the released fission energy causes the fuel solution to dry out and the criticality accident to terminate.

b. Olsen et al. [Citation73]

The total number of fissions is expressed by the sum of two functions. The one for the initial burst is a function of the solution volume; the one for the plateau is a function of time. These functions are based on the data of the CRAC experiments.

c. Barbry et al. [Citation74]

The total number of fissions is expressed by a function of the volume of the fuel solution and the duration time of the free excursion, which is based on the data of the CRAC and SILENE experiments.

d. Nomura et al. [Citation75,Citation76]

The total number of fissions is expressed by a function of the volume of the fuel solution. Two expressions are provided: one for the boiling case and the other for the nonboiling case. The former is based on the idea that a limited amount of water vaporizes in the criticality accident.

e. Duluc [Citation77]

The total number of fissions is expressed by a function of the volume or mass of the fuel solution, the duration time of the free excursion, and a constant depending on the geometry of the fuel solution. Four expressions are provided: two for the boiling case and the other two for the nonboiling case.

The simple evaluation method is intended to provide the upper bounding value of the possible number of fissions [Citation78]. Therefore, the value calculated using a simple method can overestimate by one or two orders of magnitude.

4.2. Kinetics analysis method

The improvement of computer performance has made it possible to numerically solve the basic equations of neutron kinetics, heat transfer, and so on. A number of numerical simulation codes have been developed since around the mid-1980s [Citation79–Citation85]. In addition, three experimental studies, TRACY [Citation86–Citation91], SHEBA [Citation92–Citation94], and YAGUAR [Citation95], began in the 1990s.

One-point kinetics was implemented in codes, such as AGNES [Citation79], TRACE [Citation80], CREST [Citation81], CRITEX [Citation82], SKINATH [Citation83], and FELIX [Citation84], which have been developed for the fuel solution system. The FETCH code [Citation85] directly solves the neutron transport equation. These codes can calculate the power and temperature profiles of fuel solution in detail. Experimental data are needed to validate these codes. For example, the AGNES code has been developed and validated using the result of TRACY experiments [Citation91].

A series of transient criticality experiments with a nitrate solution of low-enriched uranium began in 1996 using the transient experiment criticality facility, TRACY; the experiments were conducted by the Japan Atomic Energy Agency in order to obtain data for the criticality safety evaluation of the Rokkasho fuel-reprocessing plant.

Based on the data of the TRACY and SILENE experiments, a transient criticality benchmark problem was formulated. The results calculated using the AGNES code and others were compared to examine the accuracy of the values obtained with different calculation models.

In 1999, the JCO criticality accident occurred [Citation60]. Remarkably, the fission power remained at a high level for 8 h owing to the cooling by the water jacket surrounding the precipitation tank. This cooling gave rise to an unexpectedly large total number of fissions, larger than the values calculated using the simple methods. The cooling model of the AGNES code was improved to take the JCO accident into consideration, and was used for criticality accident analysis [Citation96].

As an example of kinetics codes being validated using the data of TRACY experiments, the outline of the calculation model of the AGNES code is explained below [Citation78].

The purpose of the AGNES code is to simulate a criticality accident of a fuel solution system. It numerically solves a one-point kinetics equation, a one-dimensional heat balance equation from the fuel solution to the cooling water and equations describing the behavior of radiolytic gas void.

The one-point kinetics equation is solved with the coefficients of temperature and void reactivity feedbacks:

where

The heat balance equation is considered in three parts. The equations in fuel solution (j = 1), core tank wall (j = 2), and cooling water (j = 3) are as follows:

and

For radiolytic gas void calculation, the region of fuel solution is divided into some small regions on the rz plane. The balance equation of the gas void in a small region is the following:

where Fij is the void fraction, Cij is the radiolytic gas mol concentration, P is the power density, is the void velocity, ν is the energy-void transfer coefficient, C 0 is the radiolytic gas saturation mol concentration, and θ is the Heaviside function.

4.3. Pseudo-steady analysis method

An original pseudo-steady method was developed in Germany [Citation97] in 1989. In 2002, some modifications were made in Japan to make it more suitable for practical use [Citation98]. It was modified to calculate the average power using reactivity feedback coefficients and to be applicable to complicated geometries. The modified pseudo-steady method is applicable to cases of boiling of the solution, in which steam void must be considered, as well as to the nonboiling case.

The modified pseudo-steady method requires a much lower calculation cost than a one-point kinetics code. Nevertheless, it provides more accurate results than the original method if the proper coefficients are given.

It is assumed that the system is in a pseudo-steady state; therefore, the inserted reactivity is balanced by the sum of the feedback reactivities:

where ρin is an inserted reactivity, ρtemp is the temperature feedback reactivity, and ρvoid is the void feedback reactivity. The feedback reactivities ρtemp and ρvoid are written using solution fuel temperature T and void fraction f as in the following:

where α Tn and α Vn (n = 1, 2, …) are constants called “temperature reactivity coefficient” and “void reactivity coefficient,” respectively. Usually, the first or second order terms are considered. The relation between void fraction and power is written in the following way:

The calculation procedure is as follows: (a) the inserted reactivity is given for the first step, (b) the void fraction is calculated using Equation (14) with T = 0, (c) using the void fraction calculated in (b) and Equation (15) or (16), the power is calculated, (d) the temperature increase is calculated, and (e) the time step is advanced and (a) to (d) are repeated until the end of the calculation.
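The stepwise procedure above can be sketched in Python with hypothetical first-order feedback coefficients and an assumed linear void-power relation; these stand in for the equations above, whose actual forms and coefficients are case-specific:

```python
def pseudo_steady_transient(rho_in, dt, n_steps,
                            alpha_T=1.0e-4,       # temperature coefficient (1/K), hypothetical
                            alpha_V=0.05,         # void coefficient (per unit fraction), hypothetical
                            void_to_power=2.0e3,  # assumed linear void-power relation (W)
                            heat_capacity=4.0e3): # heat capacity (J/K), hypothetical
    """Sketch of steps (a)-(e): at every step the void fraction is
    whatever closes the reactivity balance, the power follows from an
    assumed linear void-power relation, and the power raises the
    solution temperature, feeding back on the balance."""
    T = 0.0                        # temperature rise (K)
    history = []
    for _ in range(n_steps):
        f = max(rho_in - alpha_T * T, 0.0) / alpha_V   # (b) void fraction from balance
        P = void_to_power * f                          # (c) power from void fraction
        T += P * dt / heat_capacity                    # (d) temperature increase
        history.append((f, P, T))                      # (e) advance to the next step
    return history

# Power starts at its peak and dies away as the temperature feedback
# gradually absorbs the inserted reactivity.
h = pseudo_steady_transient(rho_in=0.002, dt=1.0, n_steps=5000)
```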

The pseudo-steady method has been developed using the data of SILENE experiments. The calculated power or energy is in agreement with the experimental data on average.

4.4. Future work concerning criticality accident evaluation

To evaluate upper bounding values of fissions under various criticality accident conditions, many experiments have been performed worldwide, such as KEWB in the United States, VIR in Russia, and CRAC and SILENE in France. They have been analyzed by (1) simple evaluation methods, (2) solving the one-point kinetics equation, and (3) the pseudo-steady analysis method. In such supercritical problems, the system condition changes with the release of radiation and heat, and the k eff varies due to the feedback reactivity effects; strictly speaking, therefore, the time-dependent Boltzmann transport equation must be solved. This calculation is much more difficult than that for the steady-state equation. However, with the recent experiences of the JCO accident at Tokai-mura and the power reactor accident in Fukushima Prefecture, research on supercritical phenomena is expected to progress.

5. Concluding remarks

Research on criticality safety has been overviewed for the 50 years since the first edition of the Journal of Nuclear Science and Technology was published. The neutron multiplication factor is related to the eigenvalue of the Boltzmann transport equation, and the history of criticality research over these 50 years has focused on finding methods to obtain the effective multiplication factor (k eff) accurately. The main tasks were (1) the development of numerical calculation methods, (2) the enhancement of criticality safety evaluation methods, and (3) the collection of subcritical, supercritical, and critical experiment data. Calculation methods have been developed, assisted by the improvement of computer performance and nuclear data libraries. More than 50 years ago, simplified Boltzmann equations were solved by hand calculation, whereas recently, rigorous Boltzmann equations are solved by continuous-energy Monte Carlo methods on high-performance computers. To judge the accuracy of a computed k eff, it must be rigorously compared with measured data. To this end, many factors must be evaluated together, such as (a) the degree of approximation to the Boltzmann transport equation, (b) the numerical calculation model, (c) the nuclear constants used in the equation, (d) the accuracy of the measured reactivity itself, and (e) the applied physical model. Japanese researchers have contributed to improving the accuracy of computed k eff through the production of the Japanese Criticality Safety Handbook, through research on subcriticality measurement methods, and under the tasks of WPNCS of OECD/NEA.

In the case of supercritical problems, the system condition changes due to the release of radiation and heat, and the k eff varies due to the feedback reactivity effects. Thus, the time-dependent Boltzmann transport equation must be solved. This calculation is much more difficult than that for the steady-state equation. The effort to overcome this difficulty has advanced methods such as the simple equation method, the one-point kinetics method, and the pseudo-steady method. Recently, the improvement of computer performance has made it possible to solve the time-dependent Boltzmann equation directly. On the other hand, in nuclear facilities other than reactors, nuclear fuel must usually be handled under subcritical conditions, and therefore there have been very few criticality calculations or experiments under supercritical conditions in the past 50 years. However, with the recent experiences of the JCO accident at Tokai-mura and the power reactor accident in Fukushima Prefecture, we can expect that research on criticality accidents will progress hereafter.

Notes

F T, total number of fissions; V, volume (liter); D, diameter (cm); H, height (cm); t, duration time (s).

References

  • ICSBEP . International Criticality Safety Benchmark Evaluation Project, initiated in October 1992 .
  • Garber , D , ed. 1979 . ENDF/B summary document , 3rd edition , BNL-17541 .
  • Nordborg , C and Salvatores , M . 1994 . Status of the JEF evaluated data library . Proceedings of International Conference on Nuclear Data for Science and Technology . 1994 , Gatlinburg , TN .
  • Shibata , K , Kawano , T , Nakagawa , T , Iwamoto , O , Katakura , J , Fukahori , T , Chiba , S , Hasegawa , A , Murata , T , Matsunobu , H , Ohsawa , T , Nakajima , Y , Yoshida , T , Zukeran , A , Kawai , M , Baba , M , Ishikawa , M , Asami , T , Watanabe , T , Watanabe , Y , Igashira , M , Yamamuro , N , Kitazawa , H , Yamano , N and Takano , H . 2002 . Japanese evaluated nuclear data library version 3 revision-3: JENDL-3.3 . J Nucl Sci Technol. , 39 : 1125 – 1136 . doi: 10.1080/18811248.2002.9715303
  • Whitesides , G E and Cross , N F . 1969 . KENO, a multi-group Monte Carlo criticality program , CTC-5 .
  • MCNP Monte Carlo Team . 2003 . MCNP – a general Monte Carlo N-Particle transport code, version 5 , LA-UR-03-1987 . Los Alamos National Laboratory
  • Nagaya , Y , Okumura , K , Mori , T and Nakagawa , M . 2005 . MVP/GMVPII: general Monte Carlo codes for neutron and photon transport calculations based on continuous energy and multi-group methods , JAERI 1348 .
  • 1978 . Nuclear safety guide, TID-7016 revision 2 , NUREG/CR-0095 .
  • 1967 . Guide de criticite , CEA-R3114 .
  • 1965 . Nuclear safety guide, AHSB(s) R92 , United Kingdom Atomic Energy Authority .
  • Okuno , H and Naito , Y . 1987 . Evaluation of calculation errors by the criticality safety analysis code system JACS , JAERI-M 87-5057 .
  • 1995 . Nuclear criticality safety handbook of Japan [in English] , JAERI-Review 95-013 . Originally published in 1988 in Japanese
  • Katakura , J , Naito , Y and Komuro , Y . 1982 . “ Development of computer code system JACS for criticality safety ” . In Trans. 41 of 1982 Annual Meeting of the American Nuclear Society
  • Naito , Y . 1981 . MGCL-PROCESSOR: a computer code system for processing multi-group constants library MGCL , JAERI-M 9396 .
  • Bondarenko , I I , ed. 1964 . Group constants for nuclear calculation , New York : Consultants Bureau .
  • Nomura , Y . 1986 . Benchmark calculation by the nuclear criticality safety analysis code system JACS (MGCL, KENO-IV) , JAERI 1303 .
  • Whitesides , G E . 1988 . Standard problem exercise on criticality codes for dissolving fissile oxides in acids , NEA/NEACRP/L-306 . Issued in 1990
  • Okuno , H . Second version of data collection part of nuclear criticality safety handbook JAEA-Data/Code 2009-90.
  • Briggs , J B , Bess , J D and Gulliford , J . 2011 . Growth of the international criticality safety and reactor physics experiment evaluation projects . Proceedings of International Nuclear Criticality Safety . Sep 19–23 2011 , Edinburgh .
  • Nouri , A , Nagel , P , Briggs , J B and Ivanova , T . 2003 . DICE: database for the international criticality safety benchmark evaluation program handbook . Nucl Sci Eng. , 145 : 11 – 19 .
  • Rearden , B T . 2004 . Perturbation theory eigenvalue sensitivity analysis with Monte Carlo technique . Nucl Sci Eng. , 146 : 367 – 382 .
  • Elam , K R and Rearden , B T . 2003 . Use of sensitivity and uncertainty analysis to select benchmark experiments for the validation of computer codes and data . Nucl Sci Eng. , 145 : 196 – 212 .
  • Blomquist , R N , Nouri , A , Armishaw , M , Jacquet , O , Naito , Y and Yamamoto , T . 2003 . OECD/NEA source convergence benchmark program: overview and summary of results . Proceedings of the 7th International Conference on Nuclear Criticality Safety . Oct 20–24 2003 , Tokai , Ibaraki . Vol. 1 , pp. 278 – 282 .
  • Blomquist , R N . 2011 . Source convergence in Monte Carlo criticality calculations: status . Proceedings of International Nuclear Criticality Safety . Sep 19–23; 2011 , Edinburgh .
  • Ueki , T and Brown , F B . 2002 . Stationarity diagnostics using Shannon entropy in Monte Carlo criticality calculation I: F test . Trans Am Nucl Soc. , 87 : 156 – 157 .
  • Brissenden , R J and Garlick , A R . 1986 . Biases in the estimation of k-eff and its errors by Monte Carlo methods . Ann Nucl Energy. , 13 : 63 – 86 . doi: 10.1016/0306-4549(86)90095-2
  • Yamamoto , T and Miyoshi , Y . 2004 . Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method . J Nucl Sci Technol. , 41 : 99 – 107 . doi: 10.1080/18811248.2004.9715465
  • Naito , Y and Yang , J . 2004 . The sandwich method for determining source convergence in Monte Carlo calculation . J Nucl Sci Technol. , 41 : 559 – 568 . doi: 10.1080/18811248.2004.9715519
  • Ueki , T , Mori , T and Nakagawa , M . 1997 . Error estimations and their biases in Monte Carlo calculations . Nucl Sci Eng. , 125 : 1 – 11 .
  • Kiedrowski , B C and Brown , F B . 2008 . Using Wielandt's method to eliminate confidence interval underprediction bias in MCNP5 criticality calculation . Trans Am Nucl Soc. , 99 : 338 – 40 .
  • Shim , H J and Kim , C H . 2009 . Tally efficiency analysis for Monte Carlo Wielandt method . Ann Nucl Energy. , 36 : 1694 – 1701 . doi: 10.1016/j.anucene.2009.09.004
  • Naito , Y , Takano , M , Kurosawa , M and Suzaki , T . 1995 . Study on the criticality safety evaluation method for burn-up credit in JAERI . Nucl Technol. , 110 : 40 – 52 .
  • Kurosawa , M and Naito , Y . 1995 . Isotopic composition of spent fuels for criticality safety evaluation isotopic composition database (SFCOMPO) . Proceedings of 5th International Conference on Nuclear Criticality Safety, ICNC 95 . Sep 17–23 1995 , Albuquerque . Vol. 1 , pp. 2.11 – 2.15 .
  • Takano , M . 1995 . Burn-up credit criticality benchmark final results of phase I-A, infinite array of pin-cell , JAERI-M 94–003 .
  • Takano , M and Okuno , H . 1996 . OECD/NEA burn-up credit criticality benchmark , JAERI – Research 96–003 .
  • DeHart , M D , Brady , M C and Parks , C V . 1996 . OECD/NEA burn-up credit calculation criticality benchmark phase-IB , ORNL-6901 .
  • Nouri , A. 1998 . Burn-up credit criticality benchmark analysis of phase II-B: conceptual PWR spent fuel transport cask , IPSN/98-05,NEA/NSC/DOC .
  • Okuno , H , Naito , Y and Ando , Y . 2000 . Burn-up credit criticality benchmark phase III-A: criticality calculations of BWR spent fuel assemblies in storage and transport , 12 JAERI-Research 2000-041, NEA/NSC/DOC .
  • Okuno , H , Naito , Y and Suyama , K . 2002 . Burn-up credit criticality benchmark phase III-B: burn-up calculations of BWR spent fuel assemblies in storage and transport , 2 JAERI-Research 2002-001, NEA/NSC/DOC .
  • O’Connor , G , Bowden , R and Thorne , P . 2003 . Phase IV-A: reactivity prediction calculations for infinite arrays of PWR MOX fuel pincells , 3 NEA/NSC/DOC .
  • O’Connor (BNFL), P.H. Liam (NAIS) . 2003 . OECD/NEA burn-up credit criticality benchmark phase IV-B: results of phase IV-B analysis , 4 NEA/NSC/DOC .
  • Oyama , A . 1959 . Reactor physics experiments using subcritical systems . J Atomic Energy Soc Japan , 1 : 436 in Japanese doi: 10.3327/jaesj.1.436
  • Sumita , K . 1962 . Reactivity measurement on the reflected semi-homogeneous critical assembly by pulsed neutron technique . J Atomic Energy Soc Japan , 4 : 825 doi: 10.3327/jaesj.4.825
  • 1987 . Proceedings of International Seminar on Nuclear Criticality Safety, ISCS '87 . Oct 19–23 1987 .
  • Kitano , A . 2010 . Monju reactor physics experiments in the restart core . Trans Am Nucl Soc , 103 : 785
  • Mishima , K . 2007 . Research project on accelerator-driven subcritical system using FFAG accelerator and Kyoto university critical assembly . J Nucl Sci Technol , 44 : 499 doi: 10.1080/18811248.2007.9711314
  • Misawa , T . 2003 . Measurement of subcriticality by higher mode source multiplication method , 178 JAERI-conf 2003-019 .
  • Misawa , T . 2010 . Nuclear reactor physics experiments. , Kyoto : Kyoto University Press .
  • Taninaka , H . 2011 . Determination of subcritical reactivity of a thermal accelerator-driven system from beam trip and restart experiment . J Nucl Sci Technol , 48 : 873 doi: 10.1080/18811248.2011.9711772
  • Tsuji , M . 2003 . Subcriticality measurement by neutron source multiplication method with fundamental mode extraction . J Nucl Sci Technol , 40 : 158 doi: 10.1080/18811248.2003.9715346
  • Sjostrand , N G . 1956 . Measurements on a subcritical reactor using a pulsed neutron source . Ark Fys , 11 : 233
  • Feynman , R P . 1956 . Dispersion of the neutron emission in U-235 fission . J Nucl Energy , 3 : 64
  • Misawa , T . 1990 . Measurement of prompt decay constant and subcriticality by the Feynman-α method . Nucl Sci Eng , 104 : 53
  • Furuhash , A . 1968 . Third moment of the number of neutrons detected in short time intervals . J Nucl Sci Technol , 5 : 48 doi: 10.1080/18811248.1968.9732402
  • Mihalczo , J T . 1978 . Determination of reactivity from power spectral density measurement with Californium-252 . Nucl Sci Eng , 66 : 29
  • Suzaki , T . 1991 . Subcriticality determination of low-enriched UO2 lattices in water by exponential experiment . J Nucl Sci Technol , 28 : 1067 doi: 10.1080/18811248.1991.9731473
  • Yagi , T . 2013 . Application of wavelength shifting fiber to subcriticality measurements . Appl Radiat Isot , 72 : 11 doi: 10.1016/j.apradiso.2012.10.009
  • Kitamura , Y . 1999 . Time-spatial neutron measurement by using position-sensitive 3He proportional counter . Nucl Inst Methods A , 422 : 64 doi: 10.1016/S0168-9002(98)01064-X
  • McLaughlin , T P . 2000 . A review of criticality accidents 2000 revision , LA-13638 .
  • Nuclear Safety Committee Uranium Fabrication Facility Criticality Accident Investigation Commission . 1999 . Uranium fabrication facility criticality accident investigation commission report [in Japanese]
  • Watanabe , N and Tamaki , H . 2000 . Review and compilation of criticality accidents in nuclear fuel processing facilities outside of Japan , JAERI-Review 2000-006 .
  • METI . 2007 . Report on criticality accident at Shiga 1 of HOKURIKU electric company in 1999 and other incidents of unexpected withdrawal of control rod under suspension [abstract]
  • 1999 . Criticality safety handbook version 2 , JAERI 1340 .
  • Flora , J W . 1962 . Kinetic experiments on water boilers “A” core report – Part I program history, facility description, and experimental results , Report NAA-SR-5415 .
  • Dunenfeld , M S and Stitt , R K . 1962 . Summary review of the kinetics experiments on water boilers , Report NAA-SR-7087 .
  • Voinov , A M . “ Nuclear safety in pulse reactor and critical assembly at RFNC-VNIIEF ” . In International Conference on Nuclear Criticality Safety. ICNC 2007
  • Taskin , V B . 1995 . Investigating operations of the IGRIK solutions pulse reactor . Trans Amer Nucl Soc , 72 : 315 – 316 .
  • Taskin , V B . 1994 . “ Pulsed homogeneous reactor IGRIK ” . In International Embedded Topical Meeting of Physics, Safety, and Applications of pulse reactors 55 – 66 . Washington , DC
  • Lecorche , P and Seale , R I . 1973 . Review of the experiments performed to determine the radiological consequences of a criticality accident , Oak Ridge , TN : Y-CDC-12 .
  • Barbry , F . 1994 . SILENE reactor: results of selected typical experiments , Report SRSC no. 223 .
  • Barbry , F . 2009 . Review of the CRAC and SILENE criticality accident studies . Nucl Sci Eng. , 161 : 160 – 187 .
  • Tuck , G . 1974 . Simplified methods of estimating the results of accidental solution excursions . Nucl Technol , 23 : 177
  • Olsen , A R . 1974 . Empirical model to estimate energy release from accidental criticality . Trans Am Nucl Soc , 19 : 189
  • Barbry , F , Rozain , J P , Ratel , R and Manaranche , J C . 1987 . Criticality accident studies in France: experimental programs and modelisation . Proceedings of International Seminar on Nuclear Criticality Safety, ISCS '87 . 1987 , Tokyo , Japan. pp. 423
  • Nomura , Y and Okuno , H . 1995 . Simplified evaluation models for total fission number in a criticality accident . Nucl Technol , 109 : 142 – 152 .
  • Nomura , Y , Okuno , H and Miyoshi , Y . 2004 . Validation of simplified evaluation models for first peak power, energy, and total fissions of a criticality accident in a nuclear fuel processing facility by TRACY experiments . Nucl Technol , 148 : 235 – 243 .
  • Duluc , M . “ New improvement in simplified methods of estimating the number of fissions during a criticality accident in solution ” . In Proceedings of Topical Meeting of the Nuclear Criticality Safety Division (NCSD 2009) . Sep 13–17 2009 .
  • Nakajima , K . 2003 . Application of simplified methods to evaluate consequences of criticality accident using past accident data . International Conference on Nuclear Criticality Safety ICNC 2003 . 2003 , Tokai-mura . pp. 171 – 176 .
  • Nakajima , K , Yamane , Y and Miyoshi , Y . 2002 . A kinetics code for criticality accident analysis of fissile solution systems: AGNES2 , JAERI-Data/Code 2002–004 .
  • Basoglu , B . 1998 . Development of a new simulation code for evaluation of criticality transients involving fissile solution boiling , JAERI-Data/Code 98–011 .
  • Kato , R . 1989 . The code CREST to simulate criticality accident power excursion in fuel solution . Proceedings of Safety Margin in Critical Safety . Nov 26–30 1989 , San Francisco .
  • Mather , D J . 1984 . CRITEX – A computer program to calculate criticality excursions in fissile liquid systems , SRD R 380 .
  • Dodds , H L . 1984 . SKINATH – A computer program for solving the reactor point kinetics equations with sample thermal hydraulic feedback , ORNL/ CSD/ TM –210 .
  • Gmal , B and Weber , J . 1989 . “ FELIX–A computer code for simulation of criticality excursions in liquid fissile solutions ” . In Safety of the nuclear fuel cycle , Edited by: Ebert , K and Ammon , R V . Deerfield Bch: VCH Publishers .
  • Pain , C C . 2001 . Transient criticality in fissile solutions compressibility effects . Nucl Sci Eng , 138 : 78 – 95 .
  • Nakajima , K . 2002 . TRACY transient experiment databook 1 pulse withdrawal experiment , Report JAERI Data Code 2002–005 .
  • Nakajima , K . 2002 . TRACY transient experiment databook 2 ramp withdrawal experiment , Report JAERI Data Code 2002–006 .
  • Nakajima , K . 2002 . TRACY transient experiment databook 3 ramp feed experiment , Report JAERI Data Code 2002–007 .
  • Yamane , Y . 2003 . Transient characteristics observed in TRACY supercritical experiments . International Conference on Nuclear Criticality Safety ICNC 2003 . 2003 , Tokai-mura . pp. 791 – 796 .
  • Yamane , Y . 2011 . Final series of TRACY experiments . International Conference on Nuclear Criticality ICNC 2011, paper 6-09 . 2011 , Edinburgh .
  • Miyoshi , Y . 2009 . Inter-code comparison exercise for criticality excursion analysis: benchmarks phase I: pulse mode experiments with uranyl nitrate solution using the TRACY and SILENE experimental facilities , OECD/NEA Report no. 06285 . (ISBN: 978-92-64-99073-9)
  • Cappiello , C C . 1997 . Solution high-energy burst assembly (SHEBA) results from subprompt critical experiments with uranyl fluoride fuel , Report LA-13373-MS .
  • Souto , F J , Kimpland , R H and Hegar , A S . 2005 . Analysis of the effects of radiolytic-gas bubbles on the operation of solution reactors for the production of medical isotopes . Nucl Sci Eng. , 150 : 322 – 325 .
  • Butterfield , K B . 1994 . The SHEBA experiment . Trans Amer Nucl Soc , 70 : 199 – 200 .
  • Levakov , B G , Gorin , N V , Kurakov , N P , Lukin , A V , Lyzhin , A E and Nevodnichy , N N . 1994 . Pulse reactor YAGUAR with its core of highly concentrated solution of uranium salts in light water . International Embedded Topical Meeting of Physics, Safety, and Applications of Pulse Reactors . Nov 13–17 1994 , Washington , DC .
  • Yamane , Y . 2000 . Analysis of JCO criticality accident (3)-kinetic evaluation of initial burst by AGNES2- . Proceedings of 2000 Fall meeting of AESJ . 2000 .
  • Shulenberg , T and Dohler , J . 1986 . Heating-up nucleation and boiling of a critical solution of fissile material . Int J Multiphase Flow , 12 ( 5 ) : 759 doi: 10.1016/0301-9322(86)90050-9
  • Nakajima , K , Yamamoto , T and Miyoshi , Y . 2002 . Modified quasi-steady-state method to evaluate the mean power profiles of nuclear excursions in fissile solution . J Nucl Sci Technol , 39 ( 11 ) : 1162 doi: 10.1080/18811248.2002.9715307
