
Statistical Properties of the Department of Commerce’s Antidumping Duty Calculation Method with Implications for Current Trade Cases

Received 14 Apr 2023, Accepted 27 May 2024, Accepted author version posted online: 05 Jun 2024

Abstract

The Department of Commerce (DOC) uses differential pricing analysis in order to detect whether a foreign exporter dumps goods in the U.S. market at prices lower than the exporter sells the goods for in its domestic market. A dumping duty is then levied on the exporter, the amount of which depends on the dumping margin. Several recent cases at the Federal Circuit Court of Appeals have challenged the DOC’s methodology on statistical grounds. In this paper, the DOC’s procedure for calculating the dumping margin is described in detail, including the rules for the controversial zeroing policy. Several statistical issues with the DOC’s approach are identified and some potential improvements are proposed.

Disclaimer

As a service to authors and researchers we are providing this version of an accepted manuscript (AM). Copyediting, typesetting, and review of the resulting proofs will be undertaken on this manuscript before final publication of the Version of Record (VoR). During production and pre-press, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal relate to these versions also.

1 Introduction

The implementation of laws and treaties governing international trade relies on statistical methodology that rarely is brought to the attention of the statistical community. One major area of international trade law concerns trade remedies, the tools used by government to take corrective action against foreign exporters causing material injury to a domestic industry because of unfair pricing and/or foreign government subsidies. Several laws protecting U.S. companies from unfair competition concern the dumping of goods by a foreign company, which occurs when it sells a product in the U.S. at a price below what it sells for in its home market (or a third country where prices are thought to be representative of the home prices), which is considered the “normal value.” When the International Trade Administration (ITA), a division of the Department of Commerce (DOC), finds that a foreign exporter has dumped goods, it imposes a duty intended to recapture the value of the dumping.

Casey (2020) describes the history and general procedures used in antidumping cases. Statistical issues can arise in at least two stages of an investigation of possible dumping: (1) when detecting dumping, and (2) in the calculation of the dumping duty (if dumping is detected). The statute 19 U.S.C. §1677f-1(d)(1)(B) says that dumping occurs when there is a “pattern of export prices …that differ significantly among purchasers, regions, or periods of time.” To implement the statute, the DOC developed a differential pricing method, described in more detail in Section 2. This method involves many comparisons, between each purchaser and the rest, each time period and the rest, and each region and the rest. Each comparison is summarized by the effect size measure known as Cohen’s D (Cohen, 1988). If a transaction involves a purchaser, period, or region with absolute Cohen’s D exceeding 0.8, the transaction is flagged as potentially having been dumped. If the flagged transactions account for more than 33% or more than 66% of total sales, increasingly severe dumping duties are levied. The difference between the duties can be substantial and many cases concern whether the flagged percentage exceeded the 33% or 66% level. If the calculated dumping duty is very small (less than 2% of the total sales price, the so-called de minimis threshold), then no duty is imposed. Once a dumping duty has been levied, the exporter must—in addition to paying the duty—prepay dumping duties on future merchandise exported to the U.S. at the same rate as the original dumping duty. This rate is known as the dumping margin.

The Federal Circuit Court of Appeals in three recent cases, Stupp v. United States, 5 F.4th 1341 (Fed. Cir. 2021); Mid Continent Steel v. United States, 31 F.4th 1367 (Fed. Cir. 2022); and Nexteel Co., Ltd. v. United States, No. 2021-1334 (Fed. Cir. 2022), accepted plaintiffs’ statistical criticisms of the appropriateness of the DOC’s use of Cohen’s D. Originally, Cohen developed the 0.8 threshold (Cohen, 1988, Section 2.2.3) assuming that in both populations the variable of interest (here, sales price) had a normal distribution and the variances of those distributions were equal. The plaintiffs in Stupp noted that the sample sizes in many of the comparisons were small, the data were not normally distributed, and the variances were not equal. They cited the relevant literature showing that Cohen’s D was sensitive to violations of the underlying assumptions. The appellate court agreed with the plaintiffs that the DOC needed to provide more justification for its use of differential pricing methodology in situations where the data do not satisfy the assumptions on which the procedure is based. These issues were especially important in this case because the dumping margin (2.53%) was close to the de minimis threshold of 2% below which no penalty would be assessed.

The formula for Cohen’s D uses an estimated standard deviation in the denominator. Instead of the standard textbook formula for estimating the common standard deviation with the square root of a weighted average of the group variances (Cohen, 1988, p. 67), the DOC calculates the common standard deviation by taking the square root of the unweighted average of the group variances. An imbalance in sample sizes can impact the estimated standard deviation and hence Cohen’s D. After the plaintiff questioned this aspect of the DOC’s calculation by submitting evidence of the potential effect and showing that most texts use the weighted average, the Federal Circuit in Mid Continent Steel remanded the case to give the DOC an opportunity to justify its differential pricing method. As in Stupp, the calculated dumping margin (2.16%) was just over the de minimis threshold.

In Nexteel, plaintiffs argued that the DOC differential pricing methodology was flawed because the assumptions of normality, sufficient sample size, and roughly equal variances were not met. The Federal Circuit Court vacated the trial court’s decision upholding the differential pricing methodology and remanded the case for reconsideration in view of Stupp.

The paper will proceed as follows. Section 2 describes the DOC’s method for calculating the dumping margin with a worked example. Section 3 explains several problems with the DOC’s method and offers a modified method that corrects these. In Section 4 we conduct a simulation study comparing the DOC’s method and our proposed modification. We close with a discussion of the relevance of this work to recent court cases.

2 DOC’s Antidumping Duty Calculation

2.1 Description of Procedure

To detect dumping, the DOC looks for disparities among an exporter’s individual sales in the U.S. instead of looking for comparatively low prices in the U.S. versus prices in the exporter’s home country. U.S. prices of the product under examination during a time frame of interest (usually one year) are divided into mutually exclusive groups on the basis of three different criteria (Period, Purchaser, or Region). Under each criterion, disparities between groups are checked multiple times, with the number of checks equal to the number of groups.

For example, consider the criterion Period, which is usually described in terms of quarters. All sales in the same quarter are grouped together. In turn, each of the four quarters is taken as the test group and compared to the comparison group composed of all other quarters. For example, all transactions occurring in quarter 2 (abbreviated Q2) form one test group and all transactions occurring in Q1, Q3, or Q4 form the comparison group. Each quarter is used as the test group once and included in the comparison group three times.

Each comparison is done by calculating a Cohen’s D statistic; if its absolute value exceeds 0.8, then all transactions in the test group are flagged. The formula the DOC uses for Cohen’s D is
$$D = \frac{\bar{X}_{\mathrm{test}} - \bar{X}_{\mathrm{comp}}}{\sqrt{\tfrac{1}{2}S_{\mathrm{test}}^2 + \tfrac{1}{2}S_{\mathrm{comp}}^2}},$$
where the means and standard deviations are weighted by quantity, i.e., sales volume. In particular, letting $w_i$ be the quantity corresponding to the $i$th sales price $X_i$ in the U.S.,
$$\bar{X}_g = \frac{\sum_i w_i X_i}{\sum_i w_i}, \qquad S_g^2 = \frac{\sum_i w_i (X_i - \bar{X}_g)^2}{\sum_i w_i},$$
where the subscript $g$ identifies the group (test or comparison) and defines the set over which the sums are taken.
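As a sketch, this calculation can be written in Python (the function names are ours and purely illustrative; the DOC distributes SAS code and our own implementation is in R):

```python
import numpy as np

def weighted_mean(x, w):
    """Quantity-weighted average price for one group."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    return np.sum(w * x) / np.sum(w)

def weighted_var(x, w):
    """Quantity-weighted variance about the weighted mean."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    return np.sum(w * (x - weighted_mean(x, w)) ** 2) / np.sum(w)

def cohens_d_doc(x_test, w_test, x_comp, w_comp):
    """Cohen's D as the DOC computes it: the denominator is the square
    root of the unweighted average of the two group variances."""
    diff = weighted_mean(x_test, w_test) - weighted_mean(x_comp, w_comp)
    s = np.sqrt(0.5 * weighted_var(x_test, w_test)
                + 0.5 * weighted_var(x_comp, w_comp))
    return diff / s
```

Applied to the Purchaser 6 comparison in the worked example of Section 2.2, this function returns approximately -0.2009.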

An individual transaction is flagged if it was flagged in any of the Period, Region, or Purchaser comparisons. The DOC procedure considers the percentage of total sales coming from flagged transactions when calculating the dumping margin. If the flagged percentage is less than 33%, the dumping duty is calculated as the quantity-weighted sum, across all transactions, of the difference between the average home price and the average U.S. transaction price (“average-to-average” [A-to-A]). If the percentage is more than 66%, the dumping duty is the quantity-weighted sum of the difference between the average home price and each individual U.S. transaction’s price, with negative differences, where the U.S. transaction’s price exceeds the average home price, excluded or “zeroed out” (“average-to-transaction” [A-to-T]).1 If the flagged percentage is between 33% and 66%, then zeroing is done only for flagged transactions (“mixed”). The dumping margin is the dumping duty divided by total sales. These rules are summarized in Table 1.
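These rules can be sketched as follows (a simplified illustration with hypothetical function names, not the DOC's production code; we treat the transaction-matched average home price as a per-transaction input):

```python
import numpy as np

def dumping_margin(prices, qty, home_avg, flagged):
    """Dumping margin under the three methods summarized in Table 1.

    prices, qty : U.S. transaction prices and quantities
    home_avg    : average home (normal) price matched to each transaction
    flagged     : boolean array from the Cohen's D tests
    """
    prices, qty, home_avg = (np.asarray(a, float)
                             for a in (prices, qty, home_avg))
    flagged = np.asarray(flagged, bool)
    total_sales = np.sum(prices * qty)
    flagged_pct = np.sum(prices[flagged] * qty[flagged]) / total_sales
    margins = (home_avg - prices) * qty        # per-transaction margin
    if flagged_pct < 0.33:                     # A-to-A: no zeroing
        duty = np.sum(margins)
    elif flagged_pct <= 0.66:                  # mixed: zero flagged negatives only
        duty = np.sum(np.where(flagged, np.maximum(margins, 0.0), margins))
    else:                                      # A-to-T: zero all negatives
        duty = np.sum(np.maximum(margins, 0.0))
    return duty / total_sales, flagged_pct
```

On the worked example of Section 2.3 this returns a margin of about 82.2% and a flagged percentage of about 70.8%.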

When A-to-T or the mixed method is called for, it will only actually be used if the difference between A-to-T (or mixed) and A-to-A is deemed significant. For the purposes of this exception, a difference is considered significant if it either (a) constitutes a 25% (or greater) change relative to the A-to-A method, or (b) changes from de minimis to non de minimis.
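A sketch of this exception (the function name and signature are ours):

```python
def meaningfully_different(margin_alt, margin_a2a, de_minimis=0.02):
    """DOC's 'meaningful difference' test for moving away from A-to-A:
    (a) a 25% or greater change relative to the A-to-A margin, or
    (b) a move from de minimis under A-to-A to non-de-minimis under
    the alternative (A-to-T or mixed) method."""
    if margin_a2a != 0 and abs(margin_alt - margin_a2a) / abs(margin_a2a) >= 0.25:
        return True
    return margin_a2a < de_minimis <= margin_alt
```

For instance, in the modified example of Section 2.3, an A-to-A margin of 53.1% against an A-to-T margin of 62.6% is only a 17.9% relative change, so the difference is not significant and A-to-A would be used.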

2.2 Conducting Cohen’s D Tests - Worked Example

The authors contacted Trade.gov and were provided with the DPSALES data set2 along with SAS code for doing the Cohen’s D analyses. We coded the DOC procedure in R and confirmed our code’s accuracy using the SAS code and output. The DPSALES file is an illustrative data set containing 12 rows of data from each of 13 unique products (CONNUMs) sold by a (fictional) foreign exporter in the U.S. market, for a total of 156 rows. For the purpose of illustration, we use only the data on product number “XL-13-14-15” and assume it constitutes all U.S. sales by this exporter. We have changed some variable names to match our notation and rounded the price column to 6 decimal places. In this data set, the sales price and quantity do not change within Purchaser, but this need not be the case in general.

We calculate Cohen’s D for each level of Purchaser, Period, and Region, separately. We first test Purchaser 6 versus the comparison group consisting of all other Purchasers. The weighted average price for Purchaser 6 is $\bar{X}_{\mathrm{test}} = 1.496362$, with a standard deviation of $S_{\mathrm{test}} = 0$, since all three sales involving this purchaser consisted of exactly one unit sold at this same price. The weighted average price for the comparison group is
$$\bar{X}_{\mathrm{comp}} = \frac{3(2 \times 1.560769 + 3 \times 2.472525 + 4 \times 0.945159)}{3(2 + 3 + 4)} = 1.591083.$$

The weighted variance is
$$S_{\mathrm{comp}}^2 = \frac{\sum_{i=4}^{12} w_i (X_i - \bar{X}_{\mathrm{comp}})^2}{\sum_{i=4}^{12} w_i} = \frac{\sum_{i=4}^{12} w_i (X_i - 1.591083)^2}{\sum_{i=4}^{12} w_i} = 0.444614.$$

Cohen’s D, calculated using the DOC’s method3, is
$$D = \frac{\bar{X}_{\mathrm{test}} - \bar{X}_{\mathrm{comp}}}{\sqrt{\tfrac{1}{2}S_{\mathrm{test}}^2 + \tfrac{1}{2}S_{\mathrm{comp}}^2}} = \frac{1.496362 - 1.591083}{\sqrt{\tfrac{1}{2}(0) + \tfrac{1}{2}(0.444614)}} = \frac{-0.094721}{0.471494} = -0.200896.$$

Since the absolute value of Cohen’s D is less than 0.8, the transactions corresponding to Purchaser 6 are not flagged.

We likewise calculate Cohen’s D for the other levels of Purchaser, all levels of Period, and all levels of Region. The results can be found on the righthand side of Table 2. Note that the Cohen’s D column for Period is NA because there were only sales during Q1.

2.3 Calculating the Dumping Margin - Worked Example

The DOC procedure flags any transaction for which the absolute Cohen’s D value for Purchaser, Period, or Region exceeds 0.8. The rightmost column of Table 2 indicates that the last six transactions were flagged (because of the Purchaser tests). These observations account for
$$\sum_{i=7}^{12} w_i X_i \Big/ \sum_{i=1}^{12} w_i X_i = 33.595/47.448 = 70.8\%$$
of the total sales value, which exceeds the 66% threshold. Thus, in calculating the dumping margin, the DOC would employ the A-to-T method that zeroes out any transaction in which the sales price exceeds the average home price, also called the “normal value.” Details of how the home price is calculated are beyond the scope of this paper, but it can be different for different transactions. Table 3 contains the original prices (X, rounded to 3 places) and quantities (w) as well as the home prices (H, rounded to 3 places). It also contains the average home price ($\bar{H}$), the margin (average home price minus sales price), and total margin (margin times quantity).

No transaction had a sales price exceeding the normal value, so no zeroing is necessary. Put another way, the A-to-A, mixed, and A-to-T methods all produce the same dumping margin (but recall that A-to-T is required here). The dumping margin is the sum of the margins divided by the value of total sales:
$$\text{Dumping margin[any method]} = \frac{3(1 \times 1.386 + 2 \times 1.321 + 3 \times 0.409 + 4 \times 1.937)}{3(1 \times 1.496 + 2 \times 1.561 + 3 \times 2.473 + 4 \times 0.945)} = 82.2\%,$$
where we multiplied by 3 at the beginning of the numerator and denominator because there were three transactions at each sales price in Table 2.

Had any of the sales prices been above the average normal value, when doing the A-to-T method we would zero out the affected transactions by omitting the associated addend(s) from the parenthetical expression in the numerator. For example, suppose the 2.473 sales price in rows 7–9 of Table 2 were instead 3.473; then the margin would be $2.882 - 3.473 = -0.591$. The A-to-A and A-to-T dumping margins would be
$$\text{Dumping margin[A-to-A]} = \frac{3(1 \times 1.386 + 2 \times 1.321 + 3 \times (-0.591) + 4 \times 1.937)}{3(1 \times 1.496 + 2 \times 1.561 + 3 \times 3.473 + 4 \times 0.945)} = 53.1\%,$$
and
$$\text{Dumping margin[A-to-T]} = \frac{3(1 \times 1.386 + 2 \times 1.321 + 3 \times 0 + 4 \times 1.937)}{3(1 \times 1.496 + 2 \times 1.561 + 3 \times 3.473 + 4 \times 0.945)} = 62.6\%.$$

Recall that if the flagged percentage is less than 33%, then A-to-A is used. If it is between 33% and 66%, then A-to-A is used on unflagged transactions while A-to-T is used on flagged transactions. If the flagged percentage exceeds 66%, then A-to-T is used. When A-to-T or the mixed method is called for, the DOC will only actually use it if the difference between A-to-T (or mixed) and A-to-A is deemed significant. In the above example where we used 3.473 as a sales price instead of 2.473, the relative difference between 53.1% (A-to-A) and 62.6% (A-to-T) is only 17.9% (which is less than the 25% necessary). Thus, the difference between A-to-T and A-to-A would not be considered significant, so the A-to-A method would be used even if the flagged percentage (after recalculating the Cohen’s D values using a sales price of 3.473) exceeded 66%.

3 Problems and Remedies

In this section we highlight several situations where the DOC procedure can lead to undesirable results and we recommend simple fixes. In the next section, we demonstrate superior performance of the modified procedure via a simulation study.

3.1 Problem: DOC flags high priced transactions as dumped because it considers absolute value of Cohen’s D. Remedy: Only flag very negative Cohen’s D values.

The DOC procedure flags transactions for which any of the three Cohen’s D values exceeds 0.8 in absolute value. This means that if, for example, Purchaser A receives goods at a much higher price than all the other purchasers, then when Purchaser A forms the test group its Cohen’s D value would be large and all transactions involving A would be flagged. This is undesirable because of all the Purchasers, Purchaser A should have the least suspicion of being the recipient of dumped goods, since dumping occurs when goods are sold at an unreasonably low price, not a high one. The remedy for this problem is: require Cohen’s D to be less than –0.8, instead of less than –0.8 or greater than 0.8. This problem and remedy are illustrated in setting 6 in the simulation study of the next section.

We can see this problem in the Trade.gov data set as well. Referring to Table 2, it should be clear without any formal dumping detection algorithm at all that transactions involving Purchaser 2001 are the most suspect. But the absolute value rule causes Purchaser 1701 to be flagged as well—even though the prices charged to it were higher than the prices charged to Purchasers 6 and 42, which remain unflagged. Our modified approach would flag only Purchaser 2001.

Advocates of the current DOC procedure might claim that even though the transactions involving a Purchaser (or Period or Region) with D > 0.8 do not actually exhibit dumping, there is little harm in flagging them at this stage. After all, the duty calculation requires comparing the actual transaction prices to the average home price, so any truly non-dumped transactions will not contribute to the dumping duty. The problem with this line of reasoning is that it ignores the effect of zeroing and the fact that the 33% and 66% cutoffs that determine how much zeroing will be allowed are employed in the first stage of the procedure.

3.2 Problem: Outliers might affect estimate of σ. Remedy: Use a robust estimate.

Rare but severe dumping in sales to a specific Purchaser, Period, or Region can cause overestimation of σ. For example, suppose that Purchaser A is occasionally the recipient of goods sold at a tremendous markdown. When A’s transactions form the test group, these very low prices would inflate the estimate of the test group’s standard deviation because the DOC procedure does not consider possible outliers. The standard deviation is in the denominator, so this causes Cohen’s D to be artificially low (in absolute value), preventing it from reaching the 0.8 threshold. Thus, transactions that really were dumped might not be flagged as such. To remedy this problem, we calculate Cohen’s D using a robust estimate of σ that is resistant to outliers. We chose 1.0483 times the median of the (volume-weighted) absolute pairwise differences (Rousseeuw and Croux, 1993), but other robust estimators could be utilized. See settings 7 and 8 in the next section for a situation where this problem and remedy occur.

3.3 Problem: Dumping duty and margin are not monotonic in amount of dumping. Remedy: Use quantity of dumped goods instead of sales value of dumped goods when calculating flagged percentage.

The method the DOC uses to calculate the flagged percentage can lead to tax implications inconsistent with the purpose of the antidumping legislation. In particular, it can lead to situations in which the dumping duty is reduced when the dumping is increased! As described in Section 2.1, the flagged percentage is the sales value of all flagged transactions divided by the total sales value. Recall that when the flagged percentage reaches the 33% or 66% thresholds, increasingly severe taxes are imposed. Since the flagged percentage includes only flagged sales in the numerator, but flagged and unflagged sales in the denominator, a decrease in the sales price can actually reduce the flagged percentage such that it crosses the threshold to a more lenient tax rate.

Indeed, we have constructed an example, available as a supplement to this article, where such perverse incentives occur. Purchaser A purchases 35% of the total quantity of a product, at an average price of $914.65, whereas the other Purchasers pay $997.34 on average. All of A’s (and only A’s) transactions are flagged, yielding a flagged percentage of 33.06% and a dumping margin of 2.36% (assuming a normal price of $985 under the mixed method). However, if the exporter simply gives a further $7 discount on each of A’s transactions, A’s average price is $907.65, the flagged percentage is 32.89%, and the calculated dumping margin is 1.97%, which because of the de minimis rule is actually set to zero. So under the DOC’s rules, the exporter avoids the entire 2.36% dumping tax simply by dumping a little more!

To force dumping duties and dumping margins to vary monotonically with the amount of dumping, a simple fix would be to calculate flagged percentage as the quantity of goods sold in flagged transactions divided by the total quantity of goods sold. This would prohibit transactions at a low price from driving down the flagged percentage. For instance, in the above example, the flagged percentage would be 35% in both situations.
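The two definitions of the flagged percentage can be contrasted in code (a sketch with our own function names):

```python
import numpy as np

def flagged_pct_value(prices, qty, flagged):
    """DOC's rule: flagged sales value over total sales value."""
    prices, qty = np.asarray(prices, float), np.asarray(qty, float)
    flagged = np.asarray(flagged, bool)
    return np.sum(prices[flagged] * qty[flagged]) / np.sum(prices * qty)

def flagged_pct_quantity(qty, flagged):
    """Proposed rule: flagged quantity over total quantity, which a
    further price cut on flagged sales cannot reduce."""
    qty = np.asarray(qty, float)
    flagged = np.asarray(flagged, bool)
    return np.sum(qty[flagged]) / np.sum(qty)
```

With the numbers from the supplemental example above, the value-based percentage falls from about 33.06% to about 32.89% after the further $7 discount, while the quantity-based percentage stays at 35% in both situations.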

3.4 Recommended modification of the DOC procedure

We recommend modifying the DOC’s procedure to incorporate these three remedies. That is, we calculate Cohen’s D using robust estimates of σ in both the test and comparison groups, and we flag transactions if the robust Cohen’s D is less than –0.8. We then calculate the flagged percentage as the quantity of flagged transactions divided by the total quantity.

The exact formula for the recommended robust Cohen’s D is similar to the original formula, except that the standard deviations are defined differently. In particular,
$$D_{\mathrm{robust}} = \frac{\bar{X}_{\mathrm{test}} - \bar{X}_{\mathrm{comp}}}{S},$$
where $\bar{X}_g$ ($g$ indexes the test or comparison group) equals $\sum_i X_i w_i / \sum_i w_i$, and $S = \sqrt{\tfrac{1}{2}S_{\mathrm{test}}^2 + \tfrac{1}{2}S_{\mathrm{comp}}^2}$, where $S_g$ (for the test or comparison group) is defined as explained below.

For convenience, we drop notational dependence on the group index $g$. Let $\Delta$ denote the vector of all absolute differences $|X_i - X_j|$ of sales prices, ordered such that for any $k \le k'$, $\Delta_k \le \Delta_{k'}$. Define $\Gamma$ to be a vector with length equal to that of $\Delta$, correspondingly ordered, but containing the products of weights $w_i w_j$ instead of the absolute differences $|X_i - X_j|$. We define $S$ to be $1.0483\,\Delta_{k^*}$, where the index $k^* = \min\{K \ge 1 : (\sum_{k=1}^{K} \Gamma_k)/(\sum_k \Gamma_k) \ge 0.5\}$. In other words, $S$ is 1.0483 times the median of the sales-volume-weighted absolute pairwise differences in sales prices.
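A sketch of this estimator in Python (we read $\Delta$ over the pairs $i < j$, which leaves the weighted median unchanged by symmetry; function names are ours):

```python
import numpy as np
from itertools import combinations

def robust_scale(x, w):
    """S for one group: 1.0483 times the sales-volume-weighted median of
    the absolute pairwise price differences (Rousseeuw and Croux, 1993)."""
    pairs = list(combinations(range(len(x)), 2))
    deltas = np.array([abs(x[i] - x[j]) for i, j in pairs])
    gammas = np.array([w[i] * w[j] for i, j in pairs], float)
    order = np.argsort(deltas, kind="stable")
    deltas, gammas = deltas[order], gammas[order]
    cum = np.cumsum(gammas) / np.sum(gammas)
    k_star = int(np.searchsorted(cum, 0.5))  # first index with cum >= 0.5
    return 1.0483 * deltas[k_star]

def robust_cohens_d(x_test, w_test, x_comp, w_comp):
    """Robust Cohen's D: same weighted-mean numerator, robust scales in S."""
    def wmean(x, w):
        x, w = np.asarray(x, float), np.asarray(w, float)
        return np.sum(w * x) / np.sum(w)
    s = np.sqrt(0.5 * robust_scale(x_test, w_test) ** 2
                + 0.5 * robust_scale(x_comp, w_comp) ** 2)
    return (wmean(x_test, w_test) - wmean(x_comp, w_comp)) / s
```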

4 Simulation Study

4.1 Setup

We conducted a simulation study to investigate the performance of four methods: the DOC’s procedure, our proposed modification, and pooled-variance variants of each using
$$S^2 = \frac{(n_{\mathrm{test}} - 1)S_{\mathrm{test}}^2 + (n_{\mathrm{comp}} - 1)S_{\mathrm{comp}}^2}{n_{\mathrm{test}} + n_{\mathrm{comp}} - 2}.$$
In each of 1000 simulation runs, Purchaser, Period, and Region were sampled without replacement but independently of each other. A data set of n = 200 rows (i.e., transactions) was created with the number of rows for each value of Purchaser, Period, and Region given below.

  • Purchaser: A (0.7n), B (0.15n), C (0.1n), D (0.05n);

  • Period: 1 (0.25n), 2 (0.25n), 3 (0.25n), 4 (0.25n); and

  • Region: Northeast (0.1n), South (0.25n), West (0.2n), Midwest (0.45n).

The sales volume for each transaction was set to 1 and the normal value was set to 1000. All prices were independent draws from a $N(\mu = 1000, \sigma = 100)$ distribution, except as explained in the second column of Tables 4 and 5. (For example, under setting 2, all prices came from N(1000, 100), except for transactions involving Purchaser C, which came from N(1100, 100).) Setting pairs (4, 4b) and (6, 6b) were selected to study the importance of Cohen’s normality assumption, whereas (6, 6b) and (9, 9b) allow us to study the equal variance assumption. The standard deviation in setting 6 was set to $\sqrt{0.1 \times 25^2 + 0.9 \times 100^2} \approx 95.20$ so that Purchaser C’s true Cohen’s D value (calculated with population parameters) would be equal to Purchaser C’s true Cohen’s D in setting 6b. Similarly, the standard deviation in setting 9 was set to $\sqrt{0.7 \times 25^2 + 0.3 \times 100^2} \approx 58.63$ so that Purchaser A’s true Cohen’s D values in settings 9 and 9b would match.
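One baseline (setting 1) data set might be generated as follows (our own illustrative code, not the simulation code used for the paper):

```python
import numpy as np

def simulate_baseline(n=200, seed=0):
    """One simulated data set under setting 1: group labels in the stated
    proportions, assigned independently across the three criteria, with
    unit volumes and all prices drawn from N(1000, 100)."""
    rng = np.random.default_rng(seed)
    purchaser = rng.permutation(["A"] * round(0.7 * n) + ["B"] * round(0.15 * n)
                                + ["C"] * round(0.1 * n) + ["D"] * round(0.05 * n))
    period = rng.permutation([1, 2, 3, 4] * (n // 4))
    region = rng.permutation(["NE"] * round(0.1 * n) + ["S"] * round(0.25 * n)
                             + ["W"] * round(0.2 * n) + ["MW"] * round(0.45 * n))
    price = rng.normal(1000, 100, n)
    qty = np.ones(n)          # sales volume fixed at 1 per transaction
    return purchaser, period, region, price, qty
```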

For each setting, based on 1000 simulation runs, we calculate: (a) the fraction of flagged percentages (whether calculated according to DOC’s procedure or ours) falling in each of the [0, 0.33), [0.33,0.66], and (0.66,1] bins; and (b) the average dumping margin without de minimis rules. We also calculate the recommended target flagged percentage and the recommended target dumping margin.

We define the recommended target flagged percentage as the quantity of goods sold associated with a Purchaser, Period, or Region having $D < -0.8$ (where D is calculated using the population means and variance), divided by the total quantity sold. For example, under setting 6, where Purchaser C defines the test group and Purchasers A, B, and D define the comparison group, the means are $\mu_{\mathrm{test}} = 600$ and $\mu_{\mathrm{comp}} = 1000$, and $\sigma = 95.20$. So $D = (600 - 1000)/95.20 = -4.20$, and no other comparisons have $D < -0.8$. The recommended target flagged percentage is therefore 10%, since Purchaser C accounts for 10% of the quantity of goods sold.
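A quick check of the setting-6 arithmetic (variable names ours):

```python
import math

# Setting 6: the comparison-group sigma is chosen so that Purchaser C's
# population Cohen's D matches its value in setting 6b.
sigma = math.sqrt(0.1 * 25 ** 2 + 0.9 * 100 ** 2)   # about 95.20
d_purchaser_c = (600 - 1000) / sigma                # about -4.20
```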

We define the recommended target dumping margin as the expected tax (without applying de minimis rules) divided by the expected total sales value, where the duty calculation method (A-to-A, mixed, or A-to-T) is determined based on the recommended target flagged percentage. For example, under setting 6, the recommended target flagged percentage is 10%, so the A-to-A method will be used. The expected sales price is $\$1000 \times 0.9 + \$600 \times 0.1 = \$960$, so the expected total sales value is $200 \times \$960$ (since there are n = 200 transactions). According to Table 1, first row, the expected tax is $E[\sum_{i=1}^{200}(\bar{H} - X_i)w_i] = \sum_{i=1}^{200}(\$1000 - \$960) = 200 \times \$40$. The recommended target dumping margin is therefore $(200 \times \$40)/(200 \times \$960) = 40/960 = 0.0417$. Note that because de minimis rules are not applied, it is possible for this quantity to be negative.
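The two denominator choices compared in the study (the DOC's equal-weight average of the group variances and the sample-size-weighted textbook pooling) can be sketched as follows; the function name is ours:

```python
import math

def cohens_d_denominator(s2_test, n_test, s2_comp, n_comp, weighted=False):
    """Scale used in the denominator of Cohen's D: the DOC's equal-weight
    average of the two group variances, or the sample-size-weighted
    textbook pooling used in the variant methods of the simulation."""
    if weighted:
        pooled = ((n_test - 1) * s2_test + (n_comp - 1) * s2_comp) \
                 / (n_test + n_comp - 2)
    else:
        pooled = 0.5 * s2_test + 0.5 * s2_comp
    return math.sqrt(pooled)
```

With a small test group and a large comparison group of unequal variance, the two choices can differ materially, which is the sample-size imbalance issue raised in Mid Continent Steel.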

4.2 Simulation Study Results

Results from the simulation study can be found in Tables 4 and 5. Table 4 gives the results for the DOC procedure and our recommended procedure, both using equal weighting in the pooled variance estimates. Table 5 gives the results using sample-size-based weighted pooled variance estimates. Column groups 4 – 6 give the percentage of simulation runs with flagged percentage in the respective bins (might not sum to 100% because of rounding). Bold values indicate the bin of the recommended target flagged percentage (column 3). Column group 7 gives average dumping margins and the recommended target dumping margin. For all settings, the normal value was 1000. Settings 1 – 3 do not exhibit dumping (since no Purchaser, Period, or Region has an average price less than 1000), but the other settings do.

In many settings both our procedure and the DOC’s give similar results, suggesting that the DOC procedure is often adequate. We discuss in more detail the notable results. Unless otherwise specified, the discussion in this section applies to the results in Table 4.

In setting 3, even though there was no dumping, the 66% threshold was reached about half the time under both the DOC procedure and our recommended procedure. This is because when Purchaser A formed the test group, Purchaser C was part of the comparison group; and in half the simulation runs, Purchaser C increased the average price of the comparison group so much that Purchaser A’s Cohen’s D value dropped below –0.8, as if it had been the recipient of dumping (even though it had not). In these cases, the dumping margin was about 3.5% under either method. In the other half of the simulation runs, A-to-A was used, which leads to a negative dumping margin of about –3.8% under either method, yielding average dumping margins of about –0.1%.

The performance discrepancy between the DOC and proposed procedures in setting 6 is due to the absolute value rule. Purchaser C, the recipient of the dumped goods, accounts for 10% of the transactions, and the modified procedure correctly applies A-to-A in every simulation run. On the other hand, in more than half of the simulation runs, Purchaser C being in the comparison group causes the Cohen’s D values for the test group to exceed 0.8 such that the 66% threshold is reached. Because of this, the DOC procedure imposes too severe a penalty in the dumping margin calculation stage. The lack of normality among the Purchaser C sales prices in setting 6b appears to have little effect on the DOC procedure’s problem.

In settings 7 and 8, the DOC procedure fails to flag any transactions even though a full 70% of them involve a Purchaser who was the recipient of dumped goods. The procedure fails because the estimate for σ is inflated as a result of the extremely low prices (outliers) coming from the N(400,50) distribution. In setting 7, the average-to-average method produces an average dumping margin of 4.4% but this is an underestimate of the recommended target as it does not incorporate any zeroing. In setting 8, the average-to-average method produces a negative dumping margin on average because the non-dumped prices are actually above the normal value and balance out the low outliers. The modified method imposes larger duties in these situations, as desired.

In setting 11, the dumping margins under either method are a little higher on average than the recommended target. This is because the Cohen’s D for Purchaser A using known parameter values is –0.75, making the recommended target flagged percentage zero; but the Cohen’s D is estimated with error, so in many simulation runs the DOC and proposed methods flag all of A’s transactions, which leads to a larger dumping margin than the recommended target.

The recent cases at the Federal Circuit Court of Appeals concerned assumptions of normality, equal variance, and equal weights in the pooled variance calculation. The results from settings 4 and 4b and from settings 6 and 6b suggest that lack of normality need not have a large impact on the dumping duty calculation method used or the final dumping margin. Likewise, the results from settings 6 and 6b and from settings 9 and 9b suggest that unequal variances need not have a large impact either. In particular, the average dumping margin was unchanged between 6 and 6b, and it went from 6.5% in 9 to 6.8% in 9b. However, if the dumping margin is near the 2% de minimis threshold, even a small change in the calculated dumping margin can have a big impact on the duty owed. Of course, these borderline cases are the ones most likely to generate litigation.

Whether a weighted average is used for the pooled variance appears to make quite a difference in many settings. For example, compare Tables 4 and 5 at settings 3, 6, and 6b (and, to a lesser extent, 9b). Highlighting just one of these: in setting 6, the DOC method with equal-weight pooling led to an average dumping margin of 6.3%, but this increased to 7.5% for weighted pooling.

5 Discussion and Conclusion

The policy of zeroing has come under international scrutiny for allegedly violating World Trade Organization (WTO) agreements. As Casey (2020) reports: “The United States has been a respondent in more than 150 disputes before the WTO. Fifty-six of those involved the [WTO’s Antidumping Agreement] and many of those cases involved zeroing. In all the finalized cases, the United States lost or settled.” Cox (2015) has argued that Congress must amend the Tariff Act of 1930 to disallow zeroing if the U.S. wants to meet its WTO obligations.

For its part, the DOC justifies the practice of zeroing on the grounds that “large negative margins on some transactions where importers charged very high prices could mask dumping in other transactions” (DOC, 2013b, p. 27). If there were no zeroing, a firm intent on dumping could compare its sales prices in the U.S. to those in its home country, calculate the lowest average U.S. price that will keep its dumping margin under 2%, and dump its excess goods at this calculated price. When the firm realizes that its average U.S. price is lower than the calculated price because it actually did dump the product in certain time periods or regions or to some purchasers, it might artificially increase the price of some sales with a collaborative purchaser to avoid the dumping duty.4 We can imagine such a firm offering other benefits to a collaborative purchaser in exchange for accepting the artificially high price. For example, the firm might reduce the price of the product under investigation in the following year, reduce the price of a different product, or give the collaborating purchaser priority when ordering a different product that is highly sought after. Although such hypothetical scenarios might not be applicable to every defendant, they are certainly possible. Zeroing is intended to prevent such behavior, and the de minimis rule mitigates harm from zeroing to good actors whose prices might fluctuate naturally in the absence of price manipulation. While the practice of zeroing is somewhat ad hoc, it is arguably effective at preventing malicious manipulation of sales prices to the detriment of U.S. manufacturers and their employees.

In this article, we have described the DOC’s dumping duty calculation method in detail and pointed out several flaws with the method in practice. Our criticisms are more statistical than economic or legal; in that sense our critique is similar to those of Rude and Gervais (2009) and McFarland (2015). We found that the DOC’s method (a) flags transactions at especially high prices, which are unlikely to represent actual dumping; (b) estimates σ with no protection against outliers; and (c) can create perverse incentives, in that an exporter’s dumping duty would sometimes be reduced if the exporter were to dump more. We have proposed simple fixes for these flaws that leave the DOC’s overall framework intact.
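Flaw (a) can be seen directly in the Cohen’s D flagging rule. The sketch below (hypothetical data) computes a Cohen’s D using an equal-weight average of the two group variances, one of the pooling conventions discussed in this article rather than the DOC’s exact production estimator, and shows that a group priced well above the comparison group trips the |D| > 0.8 flag just as readily as one priced below it, even though above-market prices cannot constitute dumping.

```python
import statistics

def cohens_d(test_group, comparison_group):
    """Cohen's D with an equal-weight average of the two group
    variances (an illustrative convention, not the DOC's exact estimator)."""
    m1 = statistics.fmean(test_group)
    m2 = statistics.fmean(comparison_group)
    s = ((statistics.pvariance(test_group)
          + statistics.pvariance(comparison_group)) / 2) ** 0.5
    return (m1 - m2) / s

# A test group priced well ABOVE the comparison group is still flagged,
# because the rule uses |D| > 0.8 without regard to the sign of D.
high = [12.0, 12.1, 11.9, 12.0]
rest = [10.0, 10.2, 9.8, 10.0]
print(abs(cohens_d(high, rest)) > 0.8)  # flagged despite above-average prices
```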

However, some simulation settings reveal problems that appear to be inherent to that framework. In setting 3, for example, the great disparity between the Purchasers’ prices results in heavy zeroing even though no dumping has occurred. As noted in Section 3.1, an inherent problem with the DOC’s two-stage framework is that the decision about how much zeroing to do is made in the first stage, during which the normal (home) value is not considered. Even if zeroing were appropriate in genuine dumping cases (as the DOC contends), the disparity-based method might not be accurate enough to detect dumping reliably. We have not studied how often situations like setting 3 arise in actual cases, largely because of the proprietary nature of such data.

Many of the cases that end up in court involve a dumping margin slightly above the de minimis threshold that drops below it under a different dumping margin calculation method. The recent cases at the Federal Circuit Court of Appeals challenged the DOC’s methodology over its questionable assumptions of normality, equal variances, and equal weights in the pooled variance. Our work shows that the dumping margin can depend greatly on how the pooling of variances is done. We also identified several other statistical issues with the DOC’s procedure, including non-monotonicity of the dumping duty (and dumping margin) in the amount of dumping, and we offered remedies for these in case the DOC wants to refine its procedure.
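The sensitivity to variance pooling can be illustrated with a toy example. With a small, low-variance test group and a larger, dispersed comparison group, the |D| > 0.8 flagging decision can flip depending on whether the pooled variance is an equal-weight or a sample-size-weighted average of the group variances. All data below are hypothetical, and the two pooling conventions are illustrative stand-ins for the variants discussed in this article.

```python
import statistics

def cohens_d(x, y, weighting="equal"):
    """Cohen's D under two pooled-variance conventions:
    'equal' averages the two group variances with equal weight;
    'size' weights them by sample size. Illustrative only."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    if weighting == "equal":
        s2 = (vx + vy) / 2
    else:  # sample-size-weighted pooling
        s2 = (len(x) * vx + len(y) * vy) / (len(x) + len(y))
    return (mx - my) / s2 ** 0.5

# Small, tight test group vs. a larger, dispersed comparison group:
# the |D| > 0.8 flagging decision flips with the pooling convention.
test = [9.1, 9.15, 9.2]
comp = [10.0, 8.0, 12.0, 9.0, 11.0, 10.5, 9.5, 10.0, 8.5, 11.5]
print(abs(cohens_d(test, comp, "equal")) > 0.8)  # flagged
print(abs(cohens_d(test, comp, "size")) > 0.8)   # not flagged
```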

SUPPLEMENTARY MATERIAL

TheoreticalResultsSupplement: Distributions of the standard textbook Cohen’s D and a special case of the DOC’s Cohen’s D are derived. The probability that a transaction is flagged is given as a function of group sample sizes and effect size. Asymptotic results are also presented. (.pdf file)

ConstructedDataSet: This tab-delimited data set was referred to in Section 3.3. It has the property that when dumping is increased, the dumping margin and dumping duty decrease. (.txt file)

CohensD_and_Simulation_Rcode: R code for running the simulation study and for calculating the dumping duty and dumping margin for the constructed data set. (.txt file)

Notes

1 The practice of zeroing is a controversial one; see, e.g., Cox (2015) and Casey (2020).

2 See https://access.trade.gov/resources/sas/programs/amcp.html

3 The standard deviation estimate in the denominator of this expression would be 0.596399 instead of 0.471494 if the typical pooled standard deviation were used.

4 Based on the results from simulation setting 3, though, a smaller increase spread out among several collaborators would be less detectable.

Table 1 Details on dumping margin calculations. Xi and wi are the U.S. sales price and quantity for transaction i, H̄ is the average home price, and F is the index set for flagged transactions. *For the mixed and A-to-T methods, dumping margins are subject to the exception explained in the final paragraph of Section 2.1.

Table 2 Left: Sales data from an illustrative data set provided by Trade.gov. Right: Cohen’s D values and an indicator of whether any of the three Cohen’s D values exceeds 0.8 in absolute value, which means that DOC would flag the transaction.

Table 3 Data used for calculating dumping margin.

Table 4 Simulation study results for the DOC and proposed methods (equal-weight average for pooled variance).

Table 5 Simulation study results for the DOC and proposed methods (weighted average for pooled variance).


References

  • (2021). Stupp v. United States. 5 F.4th 1341 (Fed. Cir. 2021).
  • (2022). Mid Continent Steel v. United States. 31 F.4th 1367 (Fed. Cir. 2022).
  • (2022). Nexteel Co., Ltd. v. United States. No. 2021-1334 (Fed. Cir. 2022).
  • Casey, C. A. (2020). Trade remedies: Antidumping. Congressional Research Service, R46296. Available online at: https://crsreports.congress.gov/product/pdf/R/R46296/30.
  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Lawrence Erlbaum Associates, Mahwah, New Jersey, 2nd edition.
  • Cox, C. (2015). International trade’s zero-sum game: How zeroing in accordance with the Tariff Act of 1930 harms the American economy and why it must go. Loyola Consumer Law Review, 28:107–136.
  • McFarland, H. (2015). Antidumping: The third rail of trade policy. Journal of Economic Policy Reform, 18:293–308.
  • Rousseeuw, P. and Croux, C. (1993). Alternatives to the median absolute deviation. Journal of the American Statistical Association, 88:1273–1283.
  • Rude, J. and Gervais, J.-P. (2009). Biases in calculating dumping margins: The case of cyclical products. Review of Agricultural Economics, 31:122–142.