Perspectives

The Financial Analysts Journal and Investment Management

Abstract

The Financial Analysts Journal is a leading forum for sharing knowledge about investment management. It often features academic research, but its focus has consistently been on practice and how new knowledge can support one of society’s most important endeavors: preserving and growing assets for our collective economic future. In this article, I review some key contributions about portfolio management published in this journal. The lively debates demonstrate how asset management has evolved through give-and-take discussion of innovation versus established practice. The Financial Analysts Journal has consistently introduced its readers to new ideas and methods. In doing so, it has greatly improved professional practice.

Disclosure: The author reports no conflicts of interest.

Editor’s Note

Submitted 8 April 2020

Accepted 29 April 2020 by Stephen J. Brown

The Financial Analysts Journal has long played an important role in the professionalization of investment management. During its distinguished history, it has been a leading outlet, together with the Journal of Portfolio Management, for shared knowledge about the critical issue of portfolio choice. The Financial Analysts Journal regularly brings academic finance theory into practice, but just as often, it opens new challenges for scholars to explore. This article traces the development of modern portfolio management through a tour of some of the Financial Analysts Journal’s most important contributions to that field. The selection is personal. Over the course of my academic career, the Financial Analysts Journal has introduced me to ideas and techniques that lay parallel to, and yet were distinctly different from, the concurrent topics in academic finance journals. It was—and still is—full of big ideas, interesting puzzles, and new insights into such topics as asset allocation, diversification, performance measurement, hedging, and investor behavior. Academia has introduced a number of key models and methods to practice over the past several decades, but translating, adapting, and using these models has been the central concern of practice. Contributors to the Financial Analysts Journal have been and continue to be at the vanguard of this effort.

Bottom Up vs. Top Down

When the Financial Analysts Journal was founded in 1945, practitioners had no comprehensive financial model of portfolio management. For the first two decades of its life, the Financial Analysts Journal focused squarely on security analysis and valuation methods. The informed analyst was interested in such questions as which variables best forecast stock and bond performance, the economic outlook for various industries and commodities, and how the macroeconomy affected markets. Important issues in the pages of the Financial Analysts Journal in this early era included the forecasting power of the price-to-earnings ratio (Molodovsky 1953), the challenge of growth stock valuation (Jenks 1947), and the professionalization of security analysis (Graham 1946, 1952). As Irving Kahn (2005, p. 6), a founding editorial board member, put it:

Many of us were interested in hands-on, fundamental analysis of actual companies and actual industries. The approach we pursued was to give how-to advice for day-to-day practice on analyzing company, industry, and national statistics and facts.

Issues of portfolio construction, particularly with respect to methods of diversification, although not entirely neglected, clearly took a back seat. Financial analysis was a bottom-up profession.

Diversification was certainly recognized, however, as beneficial. A smattering of articles considered the broader structure of the portfolio. In an important contribution, Carter (1950) argued that in a mutual fund context, analyzing and appraising a security on its individual merits is not enough; rather, stocks must be considered within the context of the overall portfolio. Several Financial Analysts Journal contributors highlighted portfolio diversification within US markets. A study by Robert Milne (1958) used a panel of a decade of earnings by industry to argue that institutions should hold a mix of stocks mirroring the multiple sources of corporate profits. He closed by posing the question to readers, “What constitutes sound diversification?” (p. 54). Economists Edward Renshaw and Paul Feldstein contributed one of the most prescient of the early Financial Analysts Journal articles about diversified portfolios (Renshaw and Feldstein 1960). Their proposal for an “unmanaged” index fund is discussed in detail by Stephen Brown in his recent article in this journal (Brown 2020).

The “New Research.”

G. H. Fisher (1953) introduced mean–variance optimization to Financial Analysts Journal readers. His article focused on the problem of estimating inputs to the model developed by Harry Markowitz. Fisher's tenure at the RAND Corporation overlapped with Markowitz's, and he was intimately familiar with Markowitz's (1952) landmark paper "Portfolio Selection," which had appeared in the Journal of Finance less than a year before Fisher's article. In fairly technical terms, Fisher argued that diversification derived from the structure of the economy and that a stochastic version of Leontief's input–output model could help estimate the covariance matrix.Footnote1 Fisher's article is evidence that, on the one hand, the Financial Analysts Journal did not shy away from highly mathematical but potentially useful contributions.

On the other hand, the profession did not entirely embrace new methods and tools from operations research. The Markowitz model was lumped together with other “computer programs” that, to some extent, posed a challenge to the bottom-up security analysis that prevailed. In 1963, when Benjamin Graham reflected in the Financial Analysts Journal on the future of financial analysis, much of his article focused on the traditional role of valuation—in which he was the reigning expert. Graham did see the future in applying statistical and computational models, however, and he nicely summarized the approach as follows:

I believe there is theoretical merit in the original Markowitz concept of “efficient portfolios.” This seeks to find by a computer program the portfolio that offers the largest expected return compatible with a given acceptable risk, or, conversely, the least risk associated with a required or expected return. Under this approach it would be up to the Analyst to estimate the degree of risk as well as the expectable return for each issue in the large group from which the portfolio would be drawn. (Graham 1963, p. 70)

In the following decades, the flow of articles in the Financial Analysts Journal reflected the evolution of investment analysis from a bottom-up, fundamental quest for value to a top-down, holistic approach that relied on new quantitative methods. By 1966, mean–variance analysis, once a shock to the profession, was hailed as “the new research” (Shelton 1966). Mathematics and statistics became forever part of the toolkit of readers of the Financial Analysts Journal.

The Markowitz Revolution

Markowitz’s mean–variance model represented a revolutionary approach to portfolio construction, but practitioners quickly realized that implementation presented challenges:

  • What securities or asset classes should be used?

  • What inputs should be used?

  • What should be done when the mathematical model generated patently absurd portfolios?

  • What time horizon is appropriate?

  • When should the model be updated?

  • How does one estimate investor risk aversion, preferences, and goals?

  • How does one integrate investment objectives?

All these questions were important, and academic research was not necessarily focused on answering them. The Financial Analysts Journal and, subsequently, the Journal of Portfolio Management were key forums where solutions to these questions were proposed, results of model tests were presented, and debates about the merits of the new model took place.

The Problem of Inputs.

Applying mean–variance optimization to the universe of investable securities requires the estimation of a huge covariance matrix. William Baumol’s (1966) article “Mathematical Analysis of Portfolio Selection” is an early, highly readable assessment of this problem. Despite the industry having moved into the computer age, covariance estimation was evidently extremely costly. Citing the pathbreaking paper of William Sharpe (1963), Baumol (1966, p. 98) wrote:

The cost of a simplified portfolio calculation involving 1500 securities has been estimated to lie between $150 and $350 for a single run in computer time alone, and it has been suggested that a single run of the complete Markowitz calculations might come to as much as 50 times these orders of magnitude.

Baumol articulated the Achilles heel of applying the model at the individual-security level—namely, that inverting a covariance matrix of 2,000 security returns was practically impossible at the time. Forecasting expected returns security by security was also problematic—not only because of efficient market theory but also because companies change over time: "For even if the individual data were all known, the portfolio selection calculation in which they were utilized is hardly simply and intuitively obvious" (Baumol 1966, p. 99).

Sharpe’s 1963 article was arguably more important to the practice of portfolio management than his famous CAPM equilibrium theory. In the 1963 work, he introduced the "diagonal model," showing how the vast dimensionality of the covariance matrix of individual security returns could be reduced to a manageable size by assuming a single-factor representation. In his words:

The major characteristic of the diagonal model is the assumption that the returns of various securities are related only through common relationships with some basic underlying factor. The return from any security is determined solely by random factors and this single outside element. . . . More explicitly:

$$R_i = A_i + B_i I + C_i$$

where $A_i$ and $B_i$ are parameters, $C_i$ is a random variable with an expected value of zero and variance $Q_i$, and $I$ is the level of some index. The index, $I$, may be the level of the stock market as a whole, the Gross National Product, some price index or any other factor thought to be the most important single influence on the returns from securities. The future level of $I$ is determined in part by random factors:

$$I = A_{n+1} + C_{n+1}$$

where $A_{n+1}$ is a parameter and $C_{n+1}$ is a random variable with an expected value of zero and a variance of $Q_{n+1}$. It is assumed that the covariance between $C_i$ and $C_j$ is zero for all values of i and j ($i \neq j$). (Sharpe 1963, p. 281)

Sharpe noted later in the article, “The model’s extreme simplicity enables the investigator to perform a portfolio analysis at a very small cost” (p. 291).

I quoted Sharpe (1963) at length because his method radically changed the application of mean–variance optimization to portfolio management. It also provided the intuition behind the capital asset pricing model (CAPM). If the simplifying assumption of a single-factor model delivers a close approximation to the efficient frontier, then the loading on that single factor is perhaps a sufficient metric, in practice, for risk. For Sharpe, evidently the notion of an equilibrium was closely tied to the practical challenge of implementation of the Markowitz model. His 18 contributions to the Financial Analysts Journal over his career testify to a lifelong interest in the practice of asset management, not simply in its theory.

The diagonal model—a single-factor representation of security returns—was an important contribution to portfolio optimization. Not only did it sharply reduce the dimensionality of the portfolio optimization problem by cutting the number of input parameters to be estimated, but it also led to simplified algorithms for portfolio selection.Footnote2 In a reappraisal of factor model representations of security returns in the context of large-scale portfolio optimization, Fan, Fan, and Lv (2008) argued that because portfolio optimization algorithms rely on the inverse of the return covariance matrix, reducing the covariance matrix to a factor structure not only is less computationally complex but also produces more reliable estimates—and, therefore, more reliable optimization output—than inverting the full covariance matrix.
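As a concrete illustration of this dimensionality reduction, the sketch below (with invented numbers, not estimates from any study cited here) builds the covariance matrix implied by a single-index model from one beta and one residual variance per security plus the variance of the index.

```python
# A minimal sketch of the covariance matrix implied by a single-index (diagonal)
# model: n betas, n residual variances, and one index variance replace the
# n(n-1)/2 pairwise covariances of the full problem. All inputs are illustrative.
import numpy as np

def diagonal_model_covariance(betas, resid_vars, index_var):
    betas = np.asarray(betas, dtype=float)
    return index_var * np.outer(betas, betas) + np.diag(resid_vars)

# Three hypothetical securities:
cov = diagonal_model_covariance(betas=[0.8, 1.0, 1.3],
                                resid_vars=[0.02, 0.03, 0.05],
                                index_var=0.04)
print(cov)
```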

The late Lawrence Fisher was a long-time member of the University of Chicago faculty and co-founder, with James Lorie, of the database from which all modern research in equities has sprung: CRSP (the Center for Research in Security Prices). He also contributed to the theory of optimization. Fisher’s 1975 Financial Analysts Journal article was motivated by the problem of parameter estimation errors and resulting nonsensical portfolios. He reconciled potentially flawed parameter inputs with capital market theory by working backwards from the investor’s current portfolio. Taking current portfolio variance as a measure of risk tolerance and using Sharpe’s diagonal model, he showed that the dual of the portfolio optimization problem identifies a set of expected asset returns that support the current allocation.Footnote3 These results can be compared with theory—for example, with the CAPM—or with an investor’s own fundamental views. Rather than imposing priors on the outputs to the optimization, he used the outputs to adjust priors on the inputs. He showed how to use these expected returns iteratively to tune the portfolio. His work appears to have paralleled Sharpe (1974), who also proposed an iterative allocation process through inspection of implied expected returns derived from the dual problem.

Fisher (1975) described how the idea originated in discussions of the dual approach with Markowitz in 1964, and he credits Treynor and Black (1973) with a Bayesian approach related to the CAPM. Sharpe’s 1974 article might have prompted Fisher to finally write the idea up and place it in an influential practitioner publication. Regardless of precedent, this notion of using the optimization framework as a structure for assessing the logic of the inputs as well as the reasonableness of the output emerged early in practice. Both works prefigure Black and Litterman (1991), a highly influential model for applied portfolio management. Black and Litterman proposed that the initial expected return inputs should be whatever are required to ensure that the implied default asset allocation is equal to what we observe in the markets.
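The "dual" logic these authors shared can be sketched in a few lines. In the standard unconstrained mean–variance setup, the expected excess returns implied by a given allocation are proportional to the covariance matrix times the weights; the risk-aversion coefficient and covariance figures below are illustrative assumptions, not values from the cited studies.

```python
# A minimal sketch of backing out the expected returns implied by a current
# allocation (the "dual" logic in Fisher 1975, Sharpe 1974, and Black-Litterman):
# for an unconstrained mean-variance investor, mu = risk_aversion * Sigma * w.
# The covariance matrix, weights, and risk aversion below are illustrative.
import numpy as np

def implied_excess_returns(weights, cov, risk_aversion=3.0):
    return risk_aversion * np.asarray(cov) @ np.asarray(weights)

cov = np.array([[0.0400, 0.0024],     # hypothetical stock/bond covariance matrix
                [0.0024, 0.0036]])
print(implied_excess_returns([0.6, 0.4], cov))  # returns that would "justify" a 60/40 mix
```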

The practical problem of statistical inputs to mean–variance optimization continued to be a focus of the Financial Analysts Journal. A straightforward tack was taken by Ibbotson and Sinquefield (1979), who updated their 1977 study of historical estimates of the risk and return of major US asset classes. The firm Ibbotson Associates grew, in part, to meet the demand in practice for reliable statistical evidence about risk and return by asset class. In a stationary world, the longer the time series, the more accurate the estimate of the mean. In 1977, the CRSP data provided 52 years of returns—a reasonable framework for making projections about the long-term future of US capital markets. Jeremy Siegel (1992) extended the data for US stock and bond returns back to 1802, providing even more data on risk, return, and correlation.

Dimson, Marsh, and Staunton (2004) presented an international version of the Ibbotson data based on more than a century of annual returns to global financial markets—essentially doubling the length of the Ibbotson and Sinquefield study. They demonstrated that the risk-and-return characteristics documented in US markets—particularly the equity risk premium—were consistent with the international evidence.

Of course, Financial Analysts Journal contributors also recognized problems with relying on historical inputs to mean–variance optimization. Carleton and Lakonishok (1985) showed how using historical estimates may be misleading because of estimation error.

Despite the optimizer revolution, not all Financial Analysts Journal contributors advocated its use. The critique of mean–variance optimization launched by Richard Michaud (1989) was particularly influential. He coined the phrase “estimation-error maximizers” for the Markowitz model. Michaud observed that investment professionals were abandoning mean–variance optimizers when they found their portfolios to be “unintuitive and without obvious investment value.” He also highlighted research to reduce estimation error through such tools as Bayes–Stein shrinkage (Jorion 1986). Michaud’s message is that the most attractive assets—those with the highest mean—are also those most likely to have the highest estimation error. This caution is important to those chasing historical asset class returns. Philippe Jorion (1992), whose Bayes–Stein work Michaud cited, published his own take on input estimation problems in a Financial Analysts Journal article. Studying how best to construct an international bond portfolio, he pioneered the use of simulation to quantify the range of portfolio weights one would obtain for each country.
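Jorion's simulation idea can be sketched roughly as follows: draw many hypothetical return histories from the estimated parameters, re-optimize on each draw, and observe how widely the "optimal" weights scatter. This is an illustrative reconstruction with invented inputs, not his data or exact procedure.

```python
# A minimal sketch of using simulation to gauge how estimation error moves
# "optimal" weights: resample return histories, re-estimate inputs, re-optimize.
# Parameters are illustrative and are not drawn from any of the cited studies.
import numpy as np

rng = np.random.default_rng(0)
mu_hat = np.array([0.06, 0.04, 0.05])      # estimated mean excess returns
cov_hat = np.diag([0.04, 0.01, 0.02])      # estimated covariance (diagonal for brevity)

def tangency_weights(mu, cov):
    w = np.linalg.solve(cov, mu)           # unconstrained tangency portfolio
    return w / w.sum()

draws = []
for _ in range(1_000):
    history = rng.multivariate_normal(mu_hat, cov_hat, size=60)   # one simulated sample
    draws.append(tangency_weights(history.mean(axis=0),
                                  np.cov(history, rowvar=False)))

draws = np.array(draws)
print("mean weights:", draws.mean(axis=0).round(2))
print("std of weights:", draws.std(axis=0).round(2))   # dispersion caused by estimation error
```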

Rebalancing and Risk Parity.

Another big issue that emerged in applying the mean–variance model was portfolio rebalancing. In “Adaptive Asset Allocation Policies” in the Financial Analysts Journal, Sharpe (2010) summarized why rebalancing matters and why it necessarily reflects investor beliefs. He pointed out that rebalancing is contrarian, so it must reflect a view that the equity market will mean-revert—a theoretically unsatisfying axiom with marginal empirical support. In contrast, a buy-and-hold portfolio presumes a static supply of assets. He proposed an adaptive approach that would, taking into account both growth in values and net security issuance, rebalance toward the composition of the world wealth portfolio. He relied, again, on the dual of the portfolio optimization problem and a conviction that equilibrium asset pricing is a reasonable prior. In Sharpe’s view, assessing the asset mix of the world wealth portfolio is easier than confidently estimating expected returns.

An interesting model that contradicts Sharpe’s equilibrium proposition emerged in financial practice in the mid-2000s—namely, risk parity. In simple form, the model derives weights from the risks of the asset classes, not their weights in the world wealth portfolio or from implied expected returns. An example, which is illustrated in Asness, Frazzini, and Pedersen (2012), weights the equity and debt components according to the inverse of their rolling three-year excess volatilities. Without leverage, this process would result in something like a 25%/75% stock/bond portfolio and, in standard capital market equilibrium, an expected return consistent with a beta of 0.25. Levering the fixed-income part of the portfolio to the same risk as the equity, however, brings the investor to risk parity. The backtest carried out by Asness et al. from 1926 shows the risk-parity portfolio handily beating a 60/40 long-only portfolio and the value-weighted market portfolio on a raw and a risk-adjusted basis.
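The arithmetic of the simple version can be sketched as follows, using illustrative volatilities and, for brevity, an assumed zero stock/bond correlation rather than the rolling three-year estimates used by Asness et al.

```python
# A minimal sketch of simple risk parity: weight stocks and bonds by inverse
# volatility, then lever the mix up to equity-level volatility. The volatilities
# are illustrative, and the stock/bond correlation is assumed to be zero.
from math import sqrt

stock_vol, bond_vol = 0.16, 0.05
w_stock = (1 / stock_vol) / (1 / stock_vol + 1 / bond_vol)
w_bond = 1 - w_stock
print(f"unlevered mix: {w_stock:.0%} stocks / {w_bond:.0%} bonds")   # roughly the 25/75 split noted above

mix_vol = sqrt((w_stock * stock_vol) ** 2 + (w_bond * bond_vol) ** 2)
print(f"leverage to reach equity volatility: {stock_vol / mix_vol:.1f}x")
```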

Proponents of risk parity recognize that it is inconsistent with standard capital market theory because it implies that low-risk debt lies above the capital market line. Asness et al. (2012) and Anderson, Bianchi, and Goldberg (2012) proposed an institutional/behavioral explanation: leverage aversion. Leverage itself presents a risk for which the marginal investor demands compensation. Given a target return of, say, 8% annually, investors (either by choice or because of institutional constraints) prefer to hold more equities rather than lever a portfolio of low-yield fixed-income assets. If the marginal investor is leverage averse, the result could compress the spread in expected returns between low-risk assets and equity and generate a high Sharpe ratio for the risk-parity portfolio. The resulting premium for low-beta assets accords with Frazzini and Pedersen (2010), who pointed out that it is consistent with a constrained version of capital market theory investigated by Fischer Black (1972).

Skeptics of risk parity expressed concern that leverage is a seductive palliative for underfunded pension liabilities (Sullivan 2010) and that belief in a higher-than-merited Sharpe ratio for levered bonds is not necessary to support a capital market line with a higher slope than the equity risk premium as commonly measured. Some analysts (e.g., Siegel 2010) noted that the market portfolio should contain a mix of all asset classes, not just equities. In fact, with a broad set of inputs, an analyst might be able to estimate the market portfolio in better ways than with the risk-parity heuristic.

How Much Does Asset Allocation Matter Anyway?

A question that encapsulates the age-old contrast between top-down and bottom-up portfolio management is the importance of active management versus asset allocation. To address this issue, Brinson, Hood, and Beebower (1986) decomposed the variance of returns to 91 large pension plans over the period 1974–1983 into three parts: returns resulting from passive allocation, from timing, and from selection. They attributed 94% of the variation in the average fund’s performance to its passive asset allocation. Timing and security selection, once the original bread and butter of financial analysts (and Financial Analysts Journal readers in its early years), made up less than 7% of the performance differential. The study was updated in 1991 (Brinson, Singer, and Beebower 1991) with a larger, out-of-sample dataset covering another 10 years, and the conclusion was the same: The marginal impact of getting the allocation right swamped the focus on how it was executed.

Brinson et al. (1986) sparked a 20-year debate about the importance of asset allocation, the subtext of which was whether active management mattered (e.g., Hood 2005; Ellis 2015). Ibbotson and Kaplan (2000) pointed out that Brinson, Hood, and Beebower’s analysis was not as straightforward as it seemed. Brinson et al.’s (1986) method did not account for most funds having roughly the same market exposure, or beta, which would necessarily explain the lion’s share of time-series variation in portfolio performance. Ibbotson and Kaplan asked instead how much of the variation in returns among funds is explained by differences in policy, studying 10 years of returns and allocations for 94 balanced mutual funds (1989–1998) and a similar sample of pension funds. Essentially, they verified Brinson et al.’s (1986) result: A regression on the market could explain 80%–90% of time-series variation in returns. They found, however, that cross-sectional regressions of individual funds’ returns on their passive policy benchmarks explained only 35%–40% of the differences in performance among funds. In short, when the common exposure was accounted for, Ibbotson and Kaplan found that security selection explains more of the return variation among funds than asset allocation explains. The authors made a simple econometric point: Cross-sectional regressions remove the time-series effect of common movement via the intercept term in the regression. This approach is useful in studying why one fund’s return differs from another, whereas the Brinson et al. (1986) time-series approach is useful for understanding how well a passive proxy explains the average performance of a population of funds.
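The econometric point is easy to demonstrate with simulated data: when funds share a large common market exposure, time-series R²s against each fund's policy benchmark are high even if policy differences explain little of the cross-sectional spread in average returns. Every number below is invented for illustration.

```python
# A minimal simulated illustration of the time-series versus cross-sectional
# distinction discussed above. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_funds, n_months = 100, 120
market = rng.normal(0.006, 0.04, n_months)                 # common market factor
policy_beta = rng.uniform(0.9, 1.1, n_funds)               # modest policy differences
selection = rng.normal(0, 0.02, (n_funds, n_months))       # active/selection noise
returns = policy_beta[:, None] * market + selection        # fund returns
policy = policy_beta[:, None] * market                     # each fund's policy benchmark

# Time-series R^2 (Brinson-style): how much of each fund's variation policy explains.
ts_r2 = 1 - np.var(returns - policy, axis=1) / np.var(returns, axis=1)
print("average time-series R^2:", ts_r2.mean().round(2))

# Cross-sectional R^2 (Ibbotson-Kaplan-style): do policy returns explain why one
# fund's average return differs from another's?
x, y = policy.mean(axis=1), returns.mean(axis=1)
b = np.polyfit(x, y, 1)
cs_r2 = 1 - np.var(y - np.polyval(b, x)) / np.var(y)
print("cross-sectional R^2:", cs_r2.round(2))
```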

The controversy was joined by several other Financial Analysts Journal authors. For example, Vardharaj and Fabozzi (2007) found a similar difference between time-series and cross-sectional variation in their study of US stock portfolios. Xiong, Ibbotson, Idzorek, and Chen (2010) followed up with larger fund samples and a slightly different benchmark of industry average performance. In their Financial Analysts Journal comment, Sénéchal and Singer (2010) argued that asset allocation during the crash mattered a lot but the Xiong et al. approach masked this effect. Thus, asset allocation remains an important issue. Evidently, pointing out methodological differences in measuring the effects of asset allocation did not settle differences in interpretation. Regardless, the debate was useful for Financial Analysts Journal readers in choosing what approach to take to answer various kinds of questions.

Risk

Some of the biggest financial innovations of the twentieth century were derivatives and dynamic models of portfolio choice. The Black–Scholes model was a watershed in the theory of asset pricing, but its mathematical complexity provided limited intuition (Black and Scholes 1972, 1973; Merton 1973b). A year after the model’s publication in academic outlets, Black (1975) introduced option pricing to Financial Analysts Journal readers. He straightforwardly explained how the Black–Scholes option valuation model worked and provided a schedule of hedge ratios for those daunted by the mathematics. Others also pressed to bridge the gap between the complex formula and the needs of practice. For example, Dimson (1977a, 1977b) developed a graphical method for rapidly valuing call options.

The potential application of option-pricing theory to portfolio management was quickly recognized. University of California, Berkeley, professor Nils Hakansson (1976) floated a visionary proposal in his Financial Analysts Journal article “The Purchasing Power Fund: A New Kind of Financial Intermediary.” He described a means by which investors could buy “supershares” that paid out in specific market outcomes. These conditional contracts could serve as building blocks for an investor to construct payoffs of all sorts—including downside risk protection in case of a market decline. With the solution to option pricing at hand, investors could finely tailor payoffs to suit their preferences, and Hakansson envisioned new institutions to deliver these tradable baskets of securities.

Later, Cox, Ross, and Rubinstein (1979) published their famous article on binomial option pricing. It was revolutionary because it showed how to dynamically construct an option even when it did not exist in the marketplace. Shortly thereafter, Mark Rubinstein co-founded Leland, O’Brien, and Rubinstein (LOR) and began using the binomial model to offer capital preservation to institutional investors. In the Financial Analysts Journal in 1981, Rubinstein and Leland (p. 69) explained the mechanism of a dynamic hedge that lay at the heart of the binomial model:

If options on a particular stock or on a portfolio do not exist, we can create them by using the appropriate strategy for the underlying asset and cash. For example, we can effectively create an at-the-money protective put option on our equity portfolio. We would begin by placing part of our capital in the equity portfolio and part in cash and then, without changing the composition of the equity portfolio, shift between the portfolio and cash as the equity portfolio value changes and as the “expiration date” approaches. Such an investment strategy would be tantamount to insuring the equity portfolio against losses by paying a fixed premium to an insurance company.

Rubinstein’s (1985) more detailed account, “Alternative Paths to Portfolio Insurance,” having won first prize in the Institute for Quantitative Research in Finance competition in 1984, appeared in the July/August 1985 issue of the Financial Analysts Journal. He showed how the binomial model could be used to construct a “synthetic put,” a dynamic hedge between equity and debt positions that theoretically would maintain a floor value for the portfolio. LOR’s portfolio insurance was a mathematically derived stop-loss strategy that would sell equities as their value declined—reaching a zero stock position before hitting the floor. The article briefly touched on one potential weakness: “Stop-loss strategies may be threatened by jumps in security prices” (Rubinstein 1985, p. 49). How LOR would provide insurance against security price jumps was not clear in the article.
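The mechanics can be sketched with the continuous-time Black–Scholes put rather than LOR's binomial implementation (an assumption made here purely for brevity): replicating stock plus put means holding a stock position equal to the combined position's delta, with the remainder in cash, so the prescribed stock weight falls as the market falls. All parameter values are illustrative.

```python
# A minimal sketch of a synthetic protective put: replicate stock + put with stock
# and cash, using the Black-Scholes put value and delta. Inputs are illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def synthetic_put_mix(S, K, T, r, sigma):
    """Return (stock_weight, cash_weight) of the insured portfolio's current value."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    put = K * exp(-r * T) * N(-d2) - S * N(-d1)      # Black-Scholes put value
    insured_value = S + put                           # stock plus protective put
    stock_dollars = S * N(d1)                         # delta of (stock + put) is N(d1)
    return stock_dollars / insured_value, 1 - stock_dollars / insured_value

# As the market falls, the prescribed stock weight falls, so the strategy sells equities:
for S in (100, 90, 80):
    w_stock, w_cash = synthetic_put_mix(S, K=100, T=1.0, r=0.05, sigma=0.20)
    print(f"S={S}: {w_stock:.0%} stock / {w_cash:.0%} cash")
```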

Other contributors to the Financial Analysts Journal filled in some of the details about the effects of portfolio insurance. For example, Clarke and Arnott (1987), in an article accepted prior to the 1987 crash, illustrated the trade-offs and costs associated with holding or constructing a put on the portfolio. They showed how the put created a spike in the return distribution near the floor and that the cost of the put was high compared with the expected return of an all-equity portfolio. Bookstaber and Clarke (1985) pointed out that a portfolio with options was no longer lognormally distributed; a third moment had to be considered. Therefore, standard portfolio metrics, such as the capital market line, security market line, and the Sharpe and Treynor measures, were not valid tools for risk adjustment in such a portfolio. Mean–variance optimization could not be used for portfolio choice when options were a significant component of the portfolio. Modern financial tools had once again changed the paradigm of portfolio management. Just when Financial Analysts Journal readers figured they had mastered the new finance, the game changed once again.

The binomial model, and the LOR application of it to portfolio insurance, provided a key insight for investment managers. The put was no magic. It was an intuitively appealing variation of a more traditional stop-loss strategy where, instead of a dichotomous choice between risky and safe assets, the choice was based on a schedule of continuous rebalancing between stocks and bonds over a fixed time horizon. Other researchers soon showed how to relax the time horizon constraint. Estep and Kritzman (1988) proposed TIPP—that is, “time-invariant portfolio protection.” And Perold (1986) proposed CPPI—that is, “constant proportion portfolio insurance.” Both were based on concepts introduced in Merton (1971) and discussed in Perold and Sharpe’s 1988 Financial Analysts Journal review article on varieties of dynamic allocation strategies. Dybvig (1999) extended the idea of CPPI to endowment spending policies that use a dynamic cash cushion to cover liabilities; this Financial Analysts Journal article won the Commonfund Prize that year.
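A minimal sketch of the CPPI rule, with an illustrative multiplier and floor (not values prescribed by Perold): equity exposure is a constant multiple of the cushion between portfolio value and the floor, so exposure shrinks automatically as the cushion shrinks.

```python
# A minimal sketch of constant proportion portfolio insurance (CPPI): the equity
# exposure is a constant multiple of the cushion (portfolio value minus floor).
# The multiplier and floor below are illustrative.
def cppi_allocation(portfolio_value, floor, multiplier=4.0):
    cushion = max(portfolio_value - floor, 0.0)
    equity = min(multiplier * cushion, portfolio_value)  # no leverage in this sketch
    return equity, portfolio_value - equity              # (equity dollars, safe-asset dollars)

# Example: a $100 portfolio with an $80 floor holds $80 in equities;
# if the portfolio drops to $90, equities are cut to $40.
print(cppi_allocation(100.0, 80.0))
print(cppi_allocation(90.0, 80.0))
```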

The Crash of 1987.

The market crash of 1987 was, at that time, the biggest one-day decline in the history of the US stock market. It was a sudden free fall that called into question not only the legitimacy of the efficient market theory but also the reliability of quantitative investment models. In the January/February 1988 issue, Rubinstein assessed the connection between portfolio insurance and this market crash. As his 1985 article had warned, market jumps made continuous rebalancing impossible. When the market opened down several percentage points, so-called portfolio insurance (a misnomer to begin with because it was not a guarantee) did not perform as expected. It failed at precisely the moment when it was most needed.

Rubinstein (1988) also reflected on the question of whether portfolio insurance caused the crash itself. He estimated that on Black Monday (19 October 1987), portfolio insurers accounted for about 11.5% of the volume and that the amount of equities committed to portfolio insurance strategies was $60 billion–$80 billion before the crash. Neither estimate seemed to him dispositive of guilt. After all, investors had always used stop-loss strategies, albeit not all based on a common quantitative model. More relevant, in his view, was a sudden fear about the structural integrity of the market itself. Investors on Black Monday discovered they could not execute their trades and had no clarity about when and how they could. Market failure may have provoked the nine-standard-deviation plunge of 22.6% in the Dow Jones Industrial Average. Or not: Richard Roll’s (1988) widely cited study of the crash found only marginal evidence that market arrangements played a role.

Paradoxically, Rubinstein (1988) also predicted that the crash of 1987 would increase the demand for protective dynamic strategies. A couple of generations after the extreme volatility of the 1930s, investors had, overnight, relearned the lessons of downside risk the hard way. He guessed they might be willing to forgo some return to avoid another such disaster. A search of books published over the period 1980–2012 on the term “portfolio insurance” (using the Google N-gram viewer) showed a peak in 1990 and a decline to near zero by 2012, which is consistent with a half-life of interest in extreme risk mitigation of about 10 years. Rubinstein, now deceased, was a true financial visionary. After portfolio insurance, he created the exchange-traded fund (ETF)—an innovation likely to be with us for a long time. The concept of the ETF had its origins in Hakansson’s (1976) supershares idea; the challenge lay in getting regulatory approval (McLaughlin 2007). In Rubinstein’s 1989 Financial Analysts Journal article, he proposed a tradable, large, diversified basket of stocks, which his firm launched and began trading on the Philadelphia Stock Exchange a few months before the article was published. Although Rubinstein planned to build out Hakansson’s entire “superfund” concept, only the first step—the ETF—was realized.

Hedging Liabilities

Every investment portfolio has an objective—namely, to meet a set of future expenditures. Pension funds, for example, want to provide for an expected stream of future liabilities with particular characteristics, including duration and sensitivity to inflation. Liabilities should drive portfolio choice, but integrating them into mean–variance optimization is not trivial. Markowitz’s original model used indifference curves derived from quadratic utility to proxy for the investor’s objective function. He also realized, however, that this abstraction made application difficult. In “Markowitz Revisited” (Markowitz 1976, p. 51), he suggested simply reporting the risk-and-return pairs from the frontier and leaving the choice up to the investor:

Real investors these days usually seem more comfortable with the idea of examining risk–return tradeoffs than with psychoanalyzing their utility function and letting the computer pick a portfolio that maximizes its expected value.

In a later Financial Analysts Journal article (Markowitz 1999), he credited A. D. Roy (1952) with “linearizing” the objective function by choosing a “disaster point,” d, on the y-axis and maximizing the slope to tangency in risk–return space.
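In symbols, Roy's linearized objective is to choose the portfolio whose expected return and risk maximize the slope from the disaster point to the frontier,

$$\max_{p}\ \frac{E(R_p)-d}{\sigma_p},$$

which, under normality, is equivalent to minimizing the probability that the portfolio return falls below d.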

A Liability Asset.

In the 1980s, Martin Leibowitz, then research director at Salomon Brothers, began to publish studies on the importance of matching the duration of assets and liabilities. Leibowitz and Henriksson (1988) used Roy’s (1952) linearization to express the notion of a shortfall constraint—a disaster being the failure to meet pension liabilities. They proposed that the shortfall target should be a portfolio that proxies for the liabilities rather than a simple numeric value.

Leibowitz (1987) and Ezra (1991) applied this idea to pension plans, arguing that defined benefit plan sponsors care about the expected surplus of assets minus liabilities and thus should seek to maximize that value. The notion is simple. A pension fund holds a short position in a series of promised future cash payments and requires a long position to meet the liabilities. The present value of the liabilities is interest-rate sensitive and varies with their duration. Despite the trend in the 1980s to embrace the equity risk premium as a source of long-term growth, Leibowitz (1986) argued that bonds in the pension fund portfolio offer an important hedge against interest-rate shocks. He believed that chief investment officers should look at total portfolio duration to understand the pension’s exposure to underfunding.

But what about stocks? Their dividend stream is presumably infinite and growing. Leibowitz and Kogelman (1993) attributed the puzzlingly low empirical estimate of the duration of corporate equities to the observation that over the long term, equities are a hedge against inflation, which is correlated with interest rates. Delving deeply into the role equities play in a portfolio with bondlike liabilities, Bodie (1995) showed that an all-equity portfolio poses serious shortfall risk, even over a long horizon, and that not only does equity risk not diminish with the holding period but it may even grow.

The extremely practical technique of surplus optimization emerged from this and related research in the Financial Analysts Journal and the Journal of Portfolio Management. Researchers have added interesting extensions. For example, Bookstaber and Gold (1988), integrating the “liability asset” approach with portfolio insurance, argued that the liability can be properly proxied only by a dynamic portfolio. They showed that pension liabilities contain, at least conceptually, some equities with low R² values, which provides the theoretical justification for pensions to hold equities, not just duration-matched bonds. In summary, the idea that liabilities can be modeled and hedged with a “shadow portfolio” of high interest-rate sensitivity is a fundamental insight. It almost certainly helps us understand—if not address—the current funding crisis in public pension funds and has continued to highlight the future consequences of a sustained low-interest-rate environment.

Value at Risk.

The goal of measuring the probability of a shortfall—and the sensitivity of that probability to factor exposures—motivated the widespread adoption in the 1990s of the concept of value at risk (VAR). Facing highly visible portfolio failures (such as the 1994 bankruptcy of Orange County, California, because of its use of derivatives), the investment profession adopted a constellation of metrics and methods for assessing the probability of Roy’s d—the disaster point. In a simple world that obeys lognormality and Sharpe’s diagonal model, calculating the probability of a disaster of a given magnitude over a given horizon is straightforward: It is the area under the left-most piece of the lognormal curve. Derivatives make this calculation vastly more complicated, and VAR estimation suffers from the familiar problem of parameter uncertainty as well as the curse in financial markets of fat-tailed distributions.
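In that simple case, the calculation can be sketched in a few lines, using a normal approximation to returns and illustrative numbers; real applications must confront the derivative positions and fat tails just mentioned.

```python
# A minimal sketch of a parametric VAR calculation under simple distributional
# assumptions (no derivatives). The portfolio value, mean, volatility, and
# confidence level below are illustrative.
from statistics import NormalDist

def parametric_var(value, mu, sigma, confidence=0.99):
    """Loss that is exceeded with probability (1 - confidence) over the horizon."""
    z = NormalDist().inv_cdf(1 - confidence)        # left-tail quantile, e.g., about -2.33
    return -(mu + z * sigma) * value

# $100 million portfolio, 0.5% expected monthly return, 4% monthly volatility:
print(parametric_var(100e6, mu=0.005, sigma=0.04, confidence=0.99))  # roughly $9 million
```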

Linsmeier and Pearson (2000) provided Financial Analysts Journal readers a general overview of VAR methods. Early red flags about the methodology were raised by Beder (1995), who found that, in practice, no two VAR measurements were alike. Her opinion was particularly valuable in light of her central role as a risk consultant attempting to prevent the Orange County bankruptcy. She also pointed out the dangerous but common misapprehension that a VAR loss estimate is a worst-case scenario; in fact, one should expect a loss at least as bad as the VAR for a given fraction of the time.

Philippe Jorion, whose 2006 book Value at Risk became the bible of VAR methods, also cautioned early in the technique’s adoption (Jorion 1996) that higher-order uncertainty about parameters was an important issue in VAR calculations. The seemingly precise quantification of expected minimum losses at given probability levels belies the fact that VAR figures are estimates—indeed, estimates built on estimates, for which there is precious little empirical validation conditional on rare events. He pointed out that precise estimates of the left tail of a future distribution may be simply impossible. These and other articles demonstrate that, well before the financial crisis of 2008, the Financial Analysts Journal had become a major forum for discussion about (and critique of) VAR approaches.Footnote4

Beta and Risk: Factor Investing

The Financial Analysts Journal has also been an important forum for the topic of factor investing since virtually the inception of the concept. Sharpe and Cooper (1972) published a test of the CAPM risk–return relationship in the Financial Analysts Journal soon after Black, Jensen, and Scholes (1972) appeared. Using CRSP data for the period 1931–1967, Sharpe and Cooper found that portfolios of stocks in the top decile of sensitivity to the market index (measured over the prior 60 months) had high out-of-sample returns. This horizon was long enough to presume that realized returns reflected ex ante expected returns. This early test of the CAPM was particularly useful for investment managers because it provided an empirical foundation for using a single-factor model to build high-return portfolios.

The research on multifactor models also has a long history in the Financial Analysts Journal. Barr Rosenberg was an early innovator in the use of multifactor models of stock returns (Rosenberg 1982). He wrote presciently in the Financial Analysts Journal in 1982 about the challenges of estimating risk premiums associated with macroeconomic factors:

Various studies of factors in security returns—that is, common elements that underlie the returns of similar securities, such as shares in the same industry or bonds in the same quality group—have led to substantial progress in developing models for investment risk. These models allow prediction of the uncertainty of returns on securities and on portfolios of securities. Much less success has attended our efforts to establish the normal rewards for these factors, which are properties of general equilibrium. (p. 47)

In prior work, Rosenberg (1974) pointed out that, particularly in a multifactor framework, returns-based factor-loading estimates are subject to estimation error. He proposed using, instead, individual-security characteristics as the basis for factor construction. These traits are not regression based and can change as the company’s business changes. He argued that they can capture the systematic structure of residual risks that is otherwise difficult to estimate. This idea—the use of security characteristics as opposed to time-series estimates of factor loadings—has become an important approach to factor investing.Footnote5 Rosenberg found that after the market factor is extracted, company-level characteristics explain a significant amount of the residual covariance matrix. Interestingly, he also noted a problem—namely, unclear economic interpretation of the resulting factors.

Some 20 years after Sharpe and Cooper (1972) and 18 years after Rosenberg (1974), Fama and French (1992) revisited CAPM beta as a major determinant of expected stock returns. They raced the market factor against factor portfolios sorted on size, leverage, book‐to‐market equity, and the earnings-to-price ratio. The portfolio characteristics crowded out the market factor as the source of explanatory power for returns.

Research on these and related predictors of returns had a long prior history, but Fama and French (1992) invigorated an active modern interest in using characteristic-based factors to generate positive risk-adjusted returns.

The quest for powerful factors to explain stock returns continues apace. Academia and practice together have made significant progress since 1972 in modeling the cross-sectional differences in security returns.

Factors and the Macroeconomy.

Sharpe (1963) opened the door to much more than tests of the CAPM. Recall from his description of the diagonal model that it does not specify what the factor is. His CAPM theory identified it as the market portfolio, but the diagonal model allows it to be anything the analyst believes to be “the most important single influence on the returns from securities” (p. 281). So, the factor need not be the sole influence on returns, only the single most important one. It could even be a macroeconomic factor, such as the GNP. The general framework of Sharpe (1963) has proven to be extremely fruitful for theory and for practice. His diagonal model, devised for expediently addressing the dimensionality problem, opened the door for researchers to propose a variety of candidates for the most important factor—for example, consumption (Breeden 1979)—or to consider the relevance of multiple factors (Merton 1973a; Ross 1976a, 1976b).

Stephen Ross’s arbitrage pricing theory (APT) was a key innovation in factor investing. It proposed a link between asset returns and macroeconomic risks (Ross 1972, 1976a, 1976b). The theory derived from the same concept as diagonality—that risk can be reduced to a few important dimensions defined by factors, each of which has an associated risk premium. Investors can hold a riskless asset with no factor covariance or take risk along factor dimensions in exchange for higher expected returns.

APT’s intuition is analogous to insurance companies choosing to underwrite a mix of property, casualty, and life depending on their specialization. A competitive, efficient marketplace for insurance determines the premium rates for each type of risk. An investor’s expected portfolio return (or a stock’s expected return) is simply a linear combination of the amount of exposure to each factor times the market-determined premium for that factor.
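In the linear form described here, a security's expected return is the riskless rate plus the sum of its factor exposures multiplied by the market-determined premiums on those factors:

$$E(R_i) = r_f + \sum_{k=1}^{K}\beta_{ik}\,\lambda_k,$$

where $\beta_{ik}$ is security i's exposure to factor k and $\lambda_k$ is the premium per unit of exposure to that factor.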

APT is useful for practice because it relaxes the CAPM implication that everyone would want to hold shares in the same market portfolio. The CAPM might tell investors to hold shares in one big index fund, but APT showed investors how to design portfolios that, instead, fit idiosyncratic preferences. Roll and Ross (1984) proposed just such an approach in their Financial Analysts Journal article. In it, they showed how to construct portfolios for investors who differ in their sensitivities to risks captured by macroeconomic variables—namely, inflation, industrial production, the default spread, and the term structure. Of course, most investors want to hedge against negative shocks to all of these. However, certain investors—for example, sovereign funds endowed with natural resources—might accept exposure to one risk, such as term structure shocks, in exchange for hedging crashes in global industrial production.

Chen, Roll, and Ross (1986) empirically identified important influences on stock returns that had a macroeconomic logic to them. These influences must be pervasive factors to command an insurance premium. Thus, a higher exposure to them should be compensated with a higher expected return. Chen et al. showed that betas on macro-factors explained cross-sectional differences in returns to US equities.

The APT framework is general enough to accommodate some of the other anomalies that have been systematized into investable factors. For example, the “betting against beta” puzzle discussed previously relies on the notion of leverage aversion. In APT terms, it can be interpreted as a theory about the risk premium on the yield-curve factor. Similarly, liquidity is an important systemic risk that is probably priced (Ibbotson, Chen, Kim, and Hu 2013). Less obvious, however, is momentum, which does not seem to easily fit known or postulated economic risk factors.Footnote6

A key question for the application of factor investing is how to meaningfully assess whether a client has a greater-than-average or less-than-average sensitivity to shocks to these factors. On the one hand, history can reveal certain statistical norms, but large shocks—for example “momentum crashes”—are rare and may not be representative. On the other hand, if everyone invested in factors that had a history of positive excess return but no special risk, then—as Sharpe would predict—prices for these factors should adjust.

The Financial Analysts Journal article written by Gregory Connor (1995) does an excellent job of comparing the explanatory power of three ways to construct factors: via loadings on macroeconomic variables (as in Chen et al. 1986), with statistical estimates of factors latent in returns (as in Roll and Ross 1980 and Connor and Korajczyk 1988), and on the basis of fundamental, characteristic-based groupings estimated by BARRA (the firm founded by Rosenberg). The comparison is instructive: Using macroeconomic variables as risk factors, which makes the most sense economically, did the worst job of explaining differences in returns. The statistical model worked best (in sample), and the fundamental factors did well—but mostly because of differences in the performance of industries rather than because they identified truly fundamental risk sources. Despite the economic logic of macroeconomic variables, study after study has confirmed that the Fama and French (1992) factors (and the periodic updates that Fama and French have provided as they continue to delve into the mystery of the cross-section of stock returns) are difficult to beat in terms of explanatory value.

Factor investing by various names, including “smart beta,” had become pervasive in asset management by the mid-2010s. It is not difficult to see why. Factors composed of securities with particular characteristics or past behavior were found to have performed well over long stretches of time and to have attractive covariance profiles. APT provided a theoretical framework about why the factors should work and an agenda for how to assess whether past performance will replicate out of sample. At the same time, factor investing provides a framework for incorporating the liability structure of the portfolio into the asset allocation process.

The continued challenge of modern asset pricing research is to convincingly explain the nature of the factors. Do the factors that best spread returns capture some pervasive fundamental risk, or are they driven by systematic behavioral biases and tendencies—which themselves may be sources of systematic risk? With many creative researchers looking for new factors, a “factor zoo” has not been difficult to identify. The problem for investors is whether these factors will deliver out-of-sample returns as risk premiums (if you believe in the APT) or alpha (if you do not).

Alternatives

A fundamental question about factor investing is whether all priced factors trade in public securities markets. There is no reason they should. Some assets may not be amenable to public ownership and control. Others may have information asymmetries that institutional investors cannot easily access and monitor. The fact that the public capital markets are limited in scope was long understood as a nuisance to asset pricing tests of factors. Roll (1977) pointed out that an investment could look like it had positive alpha but only because the market proxy used was not on the efficient frontier, which theory says is formed from all assets. Serious attempts to identify the global wealth portfolio have appeared in the Financial Analysts Journal. Doeswijk, Lam, and Swinkels (2014) estimated the components of the invested world asset portfolio over the period 1990–2012, an important update of Ibbotson and Siegel (1983).

The Endowment Model.

What is a problem to academia can be an opportunity for enterprising portfolio managers. In the 1990s, David Swensen, chief investment officer of the Yale Investments Office, pioneered an approach that relied on the aggressive use of extended diversification across a variety of asset classes: international markets, hedge funds, venture capital, private equity, real estate, and natural resources. Loosely termed “alternative investments,” these assets are all things that should be in the world wealth portfolio (Swensen 2000). At the time Swensen arrived at Yale in 1985, alternative investments were extremely understudied but held the prospect of high returns for astute investors. He noted that within each of these asset classes, the performance spread between winning and losing managers was huge. To him, that phenomenon implied inefficient markets and the potential to add alpha by selecting good managers.

The Yale endowment model seemed, at first, both risky and revolutionary. For example, the risks of limited partnership investment in private capital and hedge funds were difficult to assess. No reliable return metrics were available for private capital, and extant hedge fund databases in the early 1990s suffered from significant biases.Footnote7 VAR did not capture the uncertainties of opaque strategies and structures that relied fundamentally on human skill and integrity.

Swensen and his team developed a risk management process that took into account statistical metrics but focused on personnel assessment and the possibility of operational risk. My co-authors and I found this perspective to be of first-order importance to fund survival (Brown, Goetzmann, Liang, and Schwarz 2009).

The Yale endowment approach, with its focus on alternative investments and the use of outside managers, has been widely emulated.Footnote8 Barber and Wang (2013) studied the performance of university endowments over a 21-year period and found that many followed strategies heavily emphasizing alternative investments. They found that Yale and other endowments of large universities did well by adopting this approach. The reason may be their capacity for maintaining a long-term perspective, which allows for purchasing and holding the less liquid investments.

Over its lifetime, the Financial Analysts Journal has been an important forum for research and discussion about alternative investments. For example, in the post–World War II period, non-US equities were a novelty, but in the 1950s, investment adviser Roman Gorski argued for international portfolio diversification to hedge geopolitical uncertainties (Gorski 1954) and Standard Oil’s investment adviser, T. R. Lilley, championed European stocks as diversifiers in the institutional portfolio (Lilley 1959). Later, Bruno Solnik (1974) pointed out the benefits of international diversification—a topic he had long studied academically.

Another example of an alternative asset discussed by contributors to the Financial Analysts Journal is commodities. Bodie and Rosansky (1980) provided one of the first reports of a rigorous study of commodity futures’ rates of return as an investment. Gorton and Rouwenhorst (2006) documented, over more than a half-century of commodity futures returns, a premium similar to the equity premium along with low correlations with equities. Their follow-up study (Bhardwaj, Gorton, and Rouwenhorst 2015) tracked the out-of-sample performance of those findings over the following decade.

Similarly, for private capital, the Financial Analysts Journal published a remarkably early empirical study of venture capital investing. Rotch (1968) documented the highly successful performance of a few major funds but noted that a “high degree of risk is inherent in the route they take to financial gain” (p. 147).

Further afield from financial assets but plainly relevant for such institutions as university endowments and museums are collectibles. Dimson and Spaenjers (2014) aggregated the evidence about the risk and return of investing in collectibles of many kinds—gold, stamps, art, and even violins. None beat stocks in terms of return, but all beat bills and bonds, albeit with considerable volatility. Moreover, holding and transaction costs were difficult to evaluate.

Hedge Funds.

Hedge funds are a key component of the endowment model. The Yale endowment has regularly allocated more than 20% to “absolute-return” strategies, the commonly used category for hedge funds. Conceptually, absolute-return strategies are the opposite of factor strategies. Their purpose is not to profit from exposure to systematic risk premiums but to capture mispricing; they focus on alpha, not beta. In 1966, hedge fund manager Martin Sosnoff provided Financial Analysts Journal readers with an excellent introduction to hedge funds (Sosnoff 1966):

A HEDGE FUND is a securities fund which not only buys stocks for long-term price appreciation but also sells stocks short. The concept of short selling is injected to reduce risk during periods of market decline. The emphasis is on maximizing stock market selection, i.e., buying stocks with above average prospects and selling short stocks which appear over-priced based upon investment judgment. The leverage of borrowed money is used to maximize capital gains. As the fund continually will be short a certain percentage of invested capital, a fully invested investment posture generally is maintained. Hopefully, the elimination of market risk will be attained by being long and short the market in varying proportions over a period of time. . . . (p. 105) The hedge fund manager either lives by the sword or dies by the sword. (p. 105)

Michael Steinhardt’s speech about hedge fund styles to the Financial Analysts Federation’s Investment Management Workshop at Princeton University on 19 July 1982 was reprinted in the Financial Analysts Journal (Steinhardt 1982). With the usual disclaimers, he touted Steinhardt Partners’ impressive track record: From 1967 to 1982, the fund had generated after-fee gains of 26.3%; most notably, it earned 27.8% in 1974, when the S&P 500 Index was down 38.1%.

When the Yale endowment model popularized the use of hedge funds, it reintroduced the original DNA of the Financial Analysts Journal to portfolio management: fundamental security analysis focused on valuation and identification of mispriced securities. Although some investors were astute enough to hire top hedge fund talent (like Michael Steinhardt) early on, not until the 1990s did hedge funds truly become an asset class. At that point, the trajectory from bottom-up investing to top-down asset allocation had come full circle. Steinhardt himself closed his fund in 1995 after huge losses in 1994—more than a decade after his speech at Princeton. Ironically, he went on to become chair of WisdomTree Investments, a “fundamental indexing” provider that focused on ETF factor investing.

The Financial Analysts Journal, together with the Journal of Portfolio Management, has continued to explore the merits and role of hedge funds in investment portfolios and in markets. A thorough review of the hedge fund literature—even just within the Financial Analysts Journal—would require an article of its own, but a few key points can be highlighted. As data quality improved, researchers began to provide useful quantitative metrics to investors. Liang (1999, 2001) provided some of the earliest empirical studies of hedge funds. Taking into account the crucial problem of survivorship bias, he documented high Sharpe ratios for hedge funds over a five-year period ending in 1996. Malkiel and Saha (2005), using a different database and methodology, found less positive results; they argued that the survivorship problem was even bigger than previously believed. The question of risk-adjusted hedge fund performance remains open, and each year brings more data and a fresh opportunity to study collective and individual performance (e.g., Ibbotson, Chen, and Zhu 2011).
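The mechanics of survivorship bias are simple to illustrate. The sketch below is not Liang’s or Malkiel and Saha’s methodology (the fund names and return figures are purely hypothetical); it only shows why averaging the funds still reporting at the end of a sample overstates the performance of the full universe.

```python
# A minimal sketch of survivorship bias, using purely hypothetical funds:
# funds that stop reporting (often the poor performers) drop out of databases,
# so an average taken over survivors alone overstates the universe's return.

funds = {
    # fund name: (annualized return, still reporting at end of sample?)
    "Fund A": (0.12, True),
    "Fund B": (0.09, True),
    "Fund C": (-0.04, False),  # liquidated mid-sample; drops out of the database
    "Fund D": (0.02, False),   # stopped reporting
    "Fund E": (0.15, True),
}

survivor_returns = [r for r, alive in funds.values() if alive]
all_returns = [r for r, _ in funds.values()]

biased_mean = sum(survivor_returns) / len(survivor_returns)
full_mean = sum(all_returns) / len(all_returns)

print(f"Mean return, survivors only: {biased_mean:.1%}")   # 12.0%
print(f"Mean return, full universe:  {full_mean:.1%}")     # 6.8%
print(f"Estimated survivorship bias: {biased_mean - full_mean:.1%}")
```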

Survivorship has been a central issue in measuring hedge fund performance, but another problem involves the proper measurement of absolute return. The first issue is how to adjust for exposure to systematic risk factors when a hedge fund has the freedom to pursue fleeting opportunities across a broad spectrum of securities and markets. The second issue is how to measure return when the expectation is that skill will derive from successful timing of investments. Fung and Hsieh (2002, 2004) addressed these issues by introducing dynamic risk factors estimated from the space of hedge fund returns themselves and represented by marketable assets. Their model included factors to capture hedge fund–specific behavior, such as trend following and merger arbitrage.
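A minimal sketch of the kind of regression this line of research relies on appears below. It is not the Fung and Hsieh model itself: the factor series are random placeholders standing in for marketable risk factors, and the fund’s returns are simulated.

```python
# A stylized factor regression: regress a fund's excess returns on a set of
# risk-factor returns, then read the intercept as "alpha" and the slopes as
# factor exposures ("beta"). The factors here are random placeholders, not
# the actual Fung-Hsieh trend-following or merger-arbitrage factors.
import numpy as np

rng = np.random.default_rng(0)
n_months = 120

# Placeholder monthly factor returns (three factors).
factors = rng.normal(0.0, 0.03, size=(n_months, 3))

# Simulated fund excess returns: assumed exposures plus 20 bps/month of true alpha.
true_betas = np.array([0.4, 0.3, 0.1])
fund_excess = 0.002 + factors @ true_betas + rng.normal(0.0, 0.01, n_months)

# Ordinary least squares with an intercept column; the intercept is the alpha estimate.
X = np.column_stack([np.ones(n_months), factors])
coefs, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)

alpha_hat, beta_hat = coefs[0], coefs[1:]
print(f"Estimated monthly alpha: {alpha_hat:.4f}")
print("Estimated factor exposures:", np.round(beta_hat, 2))
```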

Since Fung and Hsieh’s landmark work, Financial Analysts Journal contributors have continued active debate about whether and how to control for systematic factor exposures. At issue is the question of whether returns come from skill or exposure to known risk premiums: If from skill, then hedge fund managers earn their 2% management and 20% incentive fees. If from exposure to known premiums, then presumably, such exposures could—and perhaps should—be replicated less expensively (Waring and Siegel 2006).
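The fee arithmetic behind this argument is easy to see in a stylized example. The figures below are illustrative assumptions only (real fee structures involve hurdles, high-water marks, and other terms), but they show how much of a factor-driven return a 2-and-20 structure consumes relative to a low-cost replication.

```python
# Back-of-the-envelope comparison of net returns under a "2 and 20" fee
# structure versus a low-cost factor replication. All inputs are assumptions.

gross_return = 0.10       # assumed gross return of the strategy
mgmt_fee = 0.02           # 2% management fee on assets
incentive_rate = 0.20     # 20% of profits after the management fee
replication_fee = 0.005   # assumed all-in cost of a replication product

after_mgmt = gross_return - mgmt_fee
hedge_fund_net = after_mgmt - incentive_rate * max(after_mgmt, 0.0)
replication_net = gross_return - replication_fee

print(f"Net of 2-and-20:        {hedge_fund_net:.2%}")   # 6.40%
print(f"Net of replication fee: {replication_net:.2%}")  # 9.50%
```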

Conclusion

Asset management is one of society’s most important activities. It is a huge fiduciary responsibility to manage the assets on which the future livelihood of households and vital organizations relies. Without question, therefore, the science of portfolio management has made an important contribution to society. Asset management in the twenty-first century is an improvement over asset management of 100 years ago. More people all over the world have greater access to diversified, regulated, and thoughtfully constructed portfolios than ever before—and often at a lower cost than before. The Financial Analysts Journal and related publications have played no small role in this development.

Society makes progress through institutions. Professional societies and focused journals are vital to this progress. The professional approach to asset management adopted by CFA Institute embraces the idea that fiduciary duty demands not only a mastery of established concepts and methods but also constant learning and sharing of knowledge. The adoption of new ideas and tools is difficult, and the vehicle by which they are introduced, tested, and adapted makes a huge difference in how readily they spread. The editorial process sets the tone for the discourse around new ideas. To read through the contributions, letters, debates, and editors’ comments in the Financial Analysts Journal over the past 75 years is to trace the evolving technology of investment management—to see how it absorbs new knowledge, challenges it, tests it through application, and feeds these experiences and views back into the conversation. This process has been well curated by the succession of Financial Analysts Journal editors, and the profession is the better for it.

Acknowledgments

I thank Stephen J. Brown, Elroy Dimson, Myra Drucker, Luis García-Feijóo, CFA, CIPM, Martin Gruber, Roger Ibbotson, Martin Leibowitz, Ranji Nagaswami, CFA, Geert Rouwenhorst, William Sharpe, Larry Siegel, Sandra Urie, CFA, and Paula Volent, CFA, for helpful comments.

Notes

1 Wassily Leontief (1906–1999) was awarded the Nobel Memorial Prize in Economic Sciences in 1973 for developing comprehensive input–output models of the US and the world economy. His approach has been widely used to understand the systemic propagation of demand and supply shocks through economies and the complex network of global production and trade that emerged in the 20th century.

2 See Elton, Gruber, and Padberg (1978); Latané and Tuttle (1967); and Sharpe (1972). These simplified algorithms are discussed at length in Chapter 6 of Elton, Gruber, Brown, and Goetzmann (2014).

3 The duality principle states that an optimization problem may be viewed from either of two perspectives: the primal problem or the dual problem. For a minimization problem, the solution to the dual problem provides a lower bound on the solution to the primal problem.
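As a brief illustration (in generic notation, not taken from the original note), weak duality for a minimization problem can be written as

\[
\begin{aligned}
p^{*} &= \min_{x}\, f(x) \quad \text{subject to } h(x) \le 0, \\
g(\lambda) &= \inf_{x}\, \bigl[\, f(x) + \lambda^{\top} h(x) \,\bigr], \qquad \lambda \ge 0, \\
g(\lambda) &\le p^{*} \quad \text{for every } \lambda \ge 0,
\end{aligned}
\]

so any feasible dual solution certifies a lower bound on the primal optimum.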

4 See Fong and Vasicek (1997); Kritzman and Rich (2002); and Duffie and Ziegler (2003).

5 “The results show that there are highly significant extra-market components of covariance among security returns; moreover, these risk components are such that the loadings of individual security returns on the factors are determined by observable characteristics of the firm: income statement and balance sheet data, industry membership, and historical behavior of returns on the security” (Rosenberg 1974, p. 263).

6 For Financial Analysts Journal articles on momentum, see Chan, Jegadeesh, and Lakonishok (1999). For value and momentum, see Asness (1997).

7 See Brown, Goetzmann, and Ibbotson (1999); Fung and Hsieh (2002); and Aggarwal and Jorion (2010).

8 Yale’s endowment model may not have been the first of its kind. Chambers and Dimson (2015) showed that Swensen’s philosophy closely follows that of John Maynard Keynes—a visionary college endowment manager in his own right.
