Phillip W. Magness

Historian – 19th century United States
  • .: Phil Magness’ Blog :.

    Personal blog of Dr. Phil Magness, historian of the American Civil War and the 19th Century United States. Here I will post my thoughts and commentary on current research topics, upcoming events, and the general state of academia.
    How Nancy MacLean went whistlin’ Dixie

    Posted on June 27, 2017

    If you read Duke University historian Nancy MacLean’s new book Democracy in Chains, you will probably come away thinking that the late economist James M. Buchanan believed himself to be something of an intellectual heir to the Vanderbilt Agrarians of the 1930s. According to MacLean, these now-obscure southern literary figures were a main reason Buchanan wanted to go to Vanderbilt University.

    Even though Buchanan’s family ultimately could not afford to send him to the prestigious university, MacLean claims that Buchanan owed these men a direct intellectual debt. They allegedly “stamped his vision of the good society and the just state.”  One of the Agrarians in particular, she claims, had a “decisive” influence on “Jim Buchanan’s emerging intellectual system” – the poet Donald Davidson.

    MacLean has a very specific reason for making this claim, and she returns to it at multiple points in her book. The Agrarians, in addition to spawning a southern literary revival (the novelist Robert Penn Warren was one of their members), were also segregationists. By connecting them to Buchanan, she bolsters one of the primary charges of her book: an attempt to link Buchanan’s economic theories to a claimed resentment over Brown v. Board and the subsequent defeat of racial segregation in 1960s Virginia.

    MacLean’s argument has a problem though. Buchanan wrote very little on Brown or the ensuing school desegregation, and the archival evidence she presents from his papers is both thin and far short of the smoking gun she implies it to be. Instead, she sets out to strengthen her portrayal of Buchanan as a segregationist by tying him to other known segregationists. The Agrarians, and specifically Davidson, serve this purpose in her narrative by becoming formative intellectual influences on Buchanan.

    There’s a problem with MacLean’s story though: it appears to be completely made up.

    Her footnotes to the passages on the Agrarians don’t actually check out, and the Davidson link in particular appears to be a figment of her own imagination. I’ll walk through the sources in detail, starting with the passage where Davidson appears:

    MacLean’s purpose here is to identify Davidson as the source of one of the concepts Buchanan invoked most frequently in his academic work – the all-powerful Leviathan state. Of course, most students of political philosophy will immediately recognize this metaphor as a famous one. It derives from the 17th century English philosopher Thomas Hobbes, as MacLean begrudgingly concedes. But Buchanan’s version of the Leviathan is different, she contends – a product of Davidson’s “new and distinctive” use of the term to describe a northern-dominated post-Civil War federal government, and thus a code word for racially tinged “states’ rights” and other nefarious purposes.

    There’s another problem with MacLean’s evidence. Donald Davidson’s name does not appear anywhere in Buchanan’s academic works. The massive 20 volume Collected Works of James Buchanan is searchable online. It contains most of his major books and papers and it does not yield a single hit for the name. Thomas Hobbes, by contrast, is one of the most frequently discussed figures in Buchanan’s works according to the index:

    MacLean nonetheless presses ahead with her invented connection and attempts to tar Buchanan with a litany of vices from the Agrarians: sympathy with the Confederacy, voter suppression, and racial animosity toward African-Americans. These and other charges may be seen in the passage below from MacLean, including a quotation that she claims to show Buchanan’s endorsement of the Agrarians’ vision:

    This passage points us to footnote 12 for the chapter for a list of its sources, which – again – purportedly link Buchanan to this literary group in ways that reflect all the aforementioned claims and charges. Except that’s not what the reader actually finds in footnote 12, or any of its neighboring notes on the Agrarians…

    Along with the citations to a couple Agrarian tracts, what we find instead is a fairly boilerplate list of secondary literature on 20th century racism and its links to the Agrarians. The only reference to Buchanan at all is not an archival source but rather a citation to page 126 of his autobiography, Better than Plowing. Not recalling any passages that would support what MacLean claims here about the Agrarians, I turned to Buchanan’s autobiography to check the reference. The page appears below and consists of a single passing reference to the Southern Agrarians having been influenced by Thomas Jefferson’s famous concept of the yeoman farmer.

    That’s it. There are no references to Donald Davidson. No segregationist visions, or pining over the Confederacy. No claims about wanting to study with the Agrarians at Vanderbilt. No intellectual nods to them at all, aside from a brief factual statement that they espoused a well known Jeffersonian argument about the agricultural lifestyle.

    MacLean’s book has already caught some flak for factual misrepresentations of her sources. In this case she appears to have simply made up an inflammatory association and tacked it onto Buchanan in an effort to paint him as a racist. When her own sources are scrutinized, it quickly becomes apparent that she has no actual evidence to sustain her many detailed and specific claims about this supposed link. In fact, one could legitimately note that there are more references to the pro-segregation Vanderbilt Agrarians on Nancy MacLean’s own CV than in the entire Collected Works of James M. Buchanan.

    On Tariffs and the American Civil War

    Posted on May 26, 2017

    A new piece that I wrote on the role of tariffs in the American Civil War era is now available at the Essential Civil War Curriculum, hosted by Virginia Tech. The article is an encyclopedia-style overview of my own research on the subject as well as what other scholars have written, and it provides a short primer on a topic that is often mired in confusion and erroneous claims.

    If you ever encounter the argument that “tariffs caused the Civil War,” I’d simply urge you to read this piece first to see why the reasoning behind that claim is both in error and representative of a false “Lost Cause” historiography that came out of the Reconstruction era. At the same time, I detail the history of the tariff issue’s role in antebellum economic debates and show how this culminated in a controversy, ancillary to slavery, on the eve of the Civil War.

    Philanthropy and the Great Depression: what historical tax records tell us about charity

    Posted on May 19, 2017

    As part of my ongoing investigation into early 20th century tax policy, I recently compiled a data series to track patterns in charitable giving during the 1920s and 1930s. As a result of tax code changes in 1917, the IRS began allowing federal income tax payers to deduct up to 15% of their taxable income for donations to recognized philanthropic causes. Eligible donations included charities for the poor, as well as certain contributions to the arts, scientific study, and education. The policy was intended to incentivize private giving and its successor program persists to the present day in the form of tax deductible donations to eligible non-profit organizations.

    The IRS required tax filers to report their charitable deductions on their tax forms and tabulated the total amounts in their annual report on the income tax system. Surprisingly, little work has been done with the resulting data series on annual charitable contributions in this period.

    The chart above illustrates the total amount (inflation-adjusted) of charitable contributions in the period between the implementation of the deductions policy and the eve of World War II. Several patterns are noticeable. First, the Great Depression caused a precipitous decline in charitable giving that persisted into the late 1930s. While this effect is partially attributable to the economic decline caused by the Depression, its persistence also attests to the documented ‘crowding out’ effect that New Deal era spending had upon private charities. Earlier work by Jonathan Gruber and Daniel Hungerman noticed a similar drop-off in church-based charitable giving during the New Deal era, as the federal government picked up the tab for Depression-relief programs.
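    As a rough illustration of how such a series can be put together, the sketch below deflates nominal deduction totals into constant dollars. The amounts and index values are made-up placeholders rather than the actual IRS totals or price index underlying the chart.

        # Minimal sketch: deflating nominal charitable deduction totals into
        # constant dollars. All figures are illustrative placeholders, not the
        # actual IRS totals or price index values used for the chart above.
        nominal_deductions = {1925: 400_000_000, 1930: 340_000_000, 1935: 280_000_000}
        price_index = {1925: 17.5, 1930: 16.7, 1935: 13.7}  # hypothetical CPI-style index
        base_year = 1935

        real_deductions = {
            year: amount * price_index[base_year] / price_index[year]
            for year, amount in nominal_deductions.items()
        }
        print(real_deductions)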

    Further evidence of this phenomenon may be seen in the raw IRS figures showing where charitable deductions came from. The chart below depicts earners in the $1 million+ income tax bracket, extended through the middle of World War II. Notice several patterns. A sharp spike in charitable giving followed the introduction of the charitable deduction, indicating that the incentive structure worked. Giving among the wealthiest Americans also spiked dramatically after the 1924 tax cut, which reduced high World War I-era rates to a top marginal rate of 25%. Charitable giving among the wealthy dropped off during the New Deal, however, and especially following another income tax hike enacted by Herbert Hoover in 1932. It never really recovered until some point after World War II.

    Now compare this second chart to the first, which depicts overall charitable giving. While the mid-1920s surge in donations by the wealthy is evident in both, something else stands out. The total amount of deductions actually began to accelerate around 1939-1940 in the first chart, even though deduction patterns for the wealthiest earners remained essentially flat during the same years.

    What explains this somewhat counter-intuitive pattern? The answer may be seen in the next chart, showing raw charitable deduction amounts claimed by lower-level income earners – specifically tax brackets for incomes between $5,000 and $10,000 per year (note: an almost identical pattern appears for earners in tax brackets below the $5,000 mark, although IRS records did not report these brackets individually in most years, only in cumulative form).

    As we can see in this chart, donations from the bottom of the income ladder actually drove the spike in charitable giving in the early 1940s. The reason has to do with yet another set of changes to the federal tax code. Beginning on the eve of the war and continuing until 1945, Congress rapidly expanded the federal tax base onto lower income earners and simultaneously stepped up income tax enforcement to curb evasion. This was done in part to finance the war, though it also involved major administrative reforms such as the addition of automatic payroll withholding in 1943 to increase tax compliance. Faced with a newfound tax burden, lower income individuals began taking advantage of the same charitable deduction allowance that the wealthy had utilized to alleviate their own tax burdens in the 1920s.

    Different Measurements of Income Inequality – the interwar Wisconsin Example

    Posted on May 16, 2017

    I have a new paper, co-written with Vincent Geloso, on the measurement of inequality in Wisconsin between 1919 and 1941. The discussion’s geography may initially seem obscure, but there’s a method to this investigation. In the early part of the 20th century Wisconsin had a stable state income tax system and, more importantly, generated high quality data about its taxpayers on a semi-regular basis throughout the period. The Wisconsin tax featured low but moderately progressive rates on a scale of 1 to 7%, was applied broadly across the state’s population, and underwent relatively few statutory changes to its tax rates over the period between its inception in 1911 and the end of World War II.

    Wisconsin’s state income tax contrasts greatly with the federal income tax of the same period. Federal tax rates fluctuated wildly until the end of World War II, as Congress frequently tinkered with the top marginal rate. It fell to a low of 25% in the 1920s, rose above 60% during both World War I and the Great Depression era, and exceeded 90% on the top income earners by the end of World War II. The federal income tax was also notoriously inconsistent in this period. Tax avoidance and evasion were both recognized problems of the system, and the tax base itself was constrained to a narrow slice of the population – in some years fewer than 10% of all U.S. households were even eligible to pay. This narrow base was in keeping with the tax’s intended progressivity, although it ended with a rapid tax base expansion during World War II that gave us our modern income tax system.

    The point in contrasting these two systems – Wisconsin and federal – is to show how the structure of the tax code affected the data each generated. For example, Wisconsin’s low rates and broad tax base ensured that it collected tax returns from the majority of Wisconsin households in most years even as the IRS collected returns from a much smaller tax base (these collections predated automatic payroll withholding and were therefore self-reported). During the late 1920s and 1930s, Wisconsin consistently reported a higher overall net income for the state than did IRS figures.

    Wisconsin also had more tax filers within the state than the IRS did in every year between 1915 and 1941, when the IRS finally surpassed it due to the wartime tax expansions. During the 1930s it was not uncommon for Wisconsin to collect 3 to 4 times as many state-level tax returns as the IRS collected in the state at the federal level.

    To make a long story short, all of this matters for purposes of data analysis, including the calculation of income shares and inequality. Thomas Piketty and Emmanuel Saez’s article on the historical distribution of incomes in the United States uses federal income tax data from the IRS to compile its estimates. Though their method is an innovative improvement upon numerous prior attempts to calculate income shares, it remains highly sensitive to the underlying quality of its tax data source.

    The Wisconsin tax system therefore has implications for measuring inequality by serving as a point of comparison against the IRS. Since Wisconsin had a more stable tax regime and collected its returns from a broader portion of the population, it is likely a superior data set to the federal IRS records for the reasons mentioned.

    So what happens when we calculate the income distribution for Wisconsin in the 1920s and 30s? If we use the state income tax records as our data source, we end up with very different results than we do with the federal IRS records. In particular, the IRS records tend to show a higher level of inequality than the state records – which is to be expected considering that the IRS primarily collected returns from the wealthiest income brackets whereas Wisconsin taxed the state’s population more broadly. A depiction of this effect in the 1920s appears in the graph above, and our full series of generated income shares for the top 10% of earners may be found in the appendix to the paper.

    On Keynes and Eugenics

    Posted on April 25, 2017

    My article with Sean J. Hernandez on the “Economic Eugenicism of John Maynard Keynes” is now available at SSRN. This article should be approached as a synthesis of the role that eugenics played across Keynes’ career and in the formation of his economic theories. It is also the proverbial tip of the iceberg as far as new and under-explored evidence goes.

    I intend to write more on this topic in the coming months as part of an ongoing project to provide contextual detail and background to the emergence of Keynesian thought in the 1920s and 30s. I’ll offer a short preview in the form of a quotation of Keynes, recorded by Margaret Sanger at a 1925 conference on population in Geneva:

    “I am discouraged because they are not striking at fundamentals. They do not want to think of one fundamental question, and that is the population question. There is not a city, not a country, in the League of Nations today that will accept it, or discuss it, and until the nations of the world are willing to sit down and talk about their problems from the population point of view, its rate of growth, its distribution, and its quality, they might just as well throw their peace proposals into the waste basket, because they will never have international peace until they do consider that problem.”

    To my knowledge, none of Keynes’ biographers have engaged with his role at this event or the implications of his statement for his views on unemployment, conflict, war, and resource allocation. Stay tuned!

    How the AAUP bends statistics to create an adjunct crisis

    Posted on April 13, 2017

    Earlier this week the American Association of University Professors released its annual report on the economic status of academia. Repeating a theme from prior years, this report heavily emphasizes the position of adjunct faculty and makes a number of bold empirical claims about the alleged growth of the part time academic workforce. For example, the statement released with the report asserts:

    “Faculty on part-time appointments continue to make up the largest share of the academic labor force, and the percentage of faculty jobs that are part time continued to trend higher.”

    Similar claims appear throughout the full report. The AAUP uses the following figure to “prove” assertions about the current size and alleged trajectory of adjunct growth.

    Taken at face value, the figure appears to support their contention of adjunct growth using figures from 1975, 1995, and 2015. This figure, however, is an act of empirical deception. In reality, the adjunct workforce peaked in 2011. It has been on a continuous decline ever since.

    By selectively presenting only 3 actual years of data with 20-year gaps in between, the AAUP’s figure creates an illusion of dramatic and continuing growth in the higher ed adjunct workforce. They make no real effort to understand the complex causes behind these data points, and the trend they do purport to show appears to be almost willfully designed to obscure the actual drop in higher ed’s use of adjunct labor since 2011.

    The actual pattern is revealed in this chart, which I compiled from the AAUP’s own previously published figures. Curiously, they abandoned the practice from previous years of including these data in their most recent report’s figures.

    As this trend line shows, the adjunct workforce percentage peaked in 2011 and was followed by a continued decline in 2013, 2014, and 2015 (the AAUP did not release a figure for 2012). The 2015 decline was even steeper than the previous two years. As it stands right now, the total percentage of adjuncts has dropped below its published figures for 2007. One would have to go back a decade to 2005 to find a lower percentage of adjuncts.

    The reasons for this decline are simple: the adjunct-heavy for-profit higher ed industry bubble collapsed around 2011, initiating a sharp overall drop in the number of adjunct faculty. As the latest stats show, this decline has continued for the past 5 years and shows no signs at the moment of letting up.

    The real question though is this: why is the AAUP, in spite of evidence showing a clear and still-ongoing contraction in the adjunct workforce, asserting the opposite to be true? And why did they utilize an intentionally partial and biased statistical portrayal to obscure the fact that adjunct numbers have been shrinking for the past several years? As is often the case with the AAUP of late, political ideology appears to trump both scientific credibility and intellectual consistency.

    Addendum:

    The same AAUP report also obscures another trend about faculty employment. The total percentage of tenured and tenure-track faculty is actually up over the past decade, even though it is below its 1970s level. In addition to the roughly 30% of faculty who are tenured according to the AAUP figures, another 17% are employed in full time non-tenure track positions.

    Most of this spike has also taken place after the 2007-2008 financial crisis, defying another popular but empirically unattested claim about the supposed decline of tenure.

    Note that the AAUP figures also include graduate students as a “faculty” category, even though these positions are actually more akin to apprenticeships and usually come with sizable tuition credits in addition to payments. The removal of the grad student figures from their totals would have the effect of increasing the percentages of tenured and tenure-track faculty.

    Why Piketty-Saez yields an unreliable inequality estimate before World War II

    Posted on April 8, 2017

    Next week I will be co-presenting a paper at the APEE conference on the reliability of historical estimates of income inequality in the United States. Our paper examines and offers a number of corrections to the widely cited income inequality time series by Thomas Piketty and Emmanuel Saez (2003). This series provides the baseline for multiple subsequent studies of inequality, and is the primary U.S. inequality series in the World Wealth & Income Database.

    The Piketty-Saez series is the primary example of the famous U-shaped inequality trend line for the United States in the 20th century that was prominently featured in Piketty’s 2014 book Capital in the 21st Century. It is calculated using income tax records from the IRS and a variety of complex statistical techniques to extract a distributional measure of income inequality for the top 1% through top 10% of income earners.

    In this post I want to focus specifically on how Piketty & Saez arrive at their estimates for the pre-World War II period, or basically the first half of their U-shape. This period is both interesting and statistically problematic because the IRS data they use as their source has several under-recognized drawbacks. Most American households were not eligible to pay income taxes prior to a rapid expansion of the tax base through new wartime income tax laws in 1941-1945. Before 1941, only about 10% of U.S. households – or even fewer in some years – were required to file their income taxes. In addition, tax enforcement was often deeply inconsistent in those early years, resulting in ample opportunities for both illegal tax evasion and legal tax avoidance. There were even year-to-year inconsistencies in the accounting measurements that the IRS employed to tabulate reported income. Piketty & Saez are aware of some of these issues and attempt to adjust for them (e.g. IRS accounting issues), but are largely inattentive to others (e.g. evasion and avoidance problems). Our paper argues that the cumulative effect of these issues renders their pre-World War II estimates – the first half of the U-shape – unusable.

    I will be detailing several of these issues in the coming months, but today I’ll be walking you through some of the issues with one of the most dramatic adjustments that Piketty and Saez make. To reach their initial income distributions for the pre-war period, they begin by taking raw filing data from the IRS’ annual Statistics of Income (SOI) report (most of their post-war data comes from more comprehensive IRS microfile sources that are only available from the 1960s to the present). They use the SOI to calculate distributional estimates using a Pareto interpolation technique that is discussed at length in their paper’s data appendix. The technique itself is fairly standard fare, assuming the source data are accurate. Due to some of the aforementioned problems of IRS accounting inconsistencies and the low number of eligible tax filers before the war, they have to make a few adjustments to its results.
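    For readers unfamiliar with the technique, the sketch below shows a generic, textbook-style Pareto interpolation from bracketed tabulations of the kind found in SOI reports. It is not Piketty and Saez’s actual code, and any bracket figures fed to it would be placeholders for real SOI tabulations.

        # A minimal sketch of Pareto interpolation from bracketed tax tabulations.
        # This is a generic textbook version, not Piketty & Saez's actual code.

        def pareto_top_share(brackets, total_units, total_income, top_frac):
            """Estimate the income share of the top `top_frac` of tax units.

            brackets: list of (threshold, units_above, income_above) tuples,
            sorted by ascending threshold, as tabulated in an SOI-style report.
            """
            target_units = top_frac * total_units
            # Find the highest tabulated threshold whose population still covers
            # the target group (e.g. the top 10% of all tax units).
            bracket = None
            for threshold, units_above, income_above in brackets:
                if units_above < target_units:
                    break
                bracket = (threshold, units_above, income_above)
            threshold, units_above, income_above = bracket

            # Inverted Pareto coefficient: mean income above threshold / threshold.
            b = (income_above / units_above) / threshold
            a = b / (b - 1.0)  # implied Pareto exponent
            # Interpolate the income threshold for the target percentile under a
            # Pareto tail, where mean income above a cutoff y is b * y.
            y_p = threshold * (units_above / target_units) ** (1.0 / a)
            top_income = target_units * b * y_p
            return top_income / total_income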

    It’s easiest to see the effects of the adjustments they make through 3 steps in the chart below (showing the calculations for the top 10% income share):

    Model 1, in blue, shows the raw Pareto interpolation from the unadjusted IRS SOI reports. Model 2, in red, attempts to address the problem of insufficient returns due to the low number of eligible tax filers before World War II. To do so it estimates and integrates a modest number of “missing returns” by taking the ratio of married vs. single tax filers in the pre-war years. As you can see, it increases the distributional share of the top 10% slightly before 1940. It does not alter any of the post-1940 results.

    A much larger adjustment comes from what we describe as Model 3, shown here in dark green. This is an accounting adjustment that purports to address the IRS’ switch from Net Income (NI) to Adjusted Gross Income (AGI) in 1943-1944. The difference between the two involves how each handles deductions for charitable giving, local and state tax payments, and some categories of interest payments; it is strictly a feature of the way the tax code treated each measure. AGI encompasses a share of untaxed but realized income that is not present in NI, hence the justification for making an adjustment.

    A problem emerges though with how Piketty and Saez calculate this NI-to-AGI adjustment for Model 3. As you can see in the chart above, the Model 3 adjustments are the most substantial change that Piketty and Saez make to the pre-World War II Pareto calculations. They consistently add about 5 percentage points to the distributional share before 1941, but relatively little thereafter.

    The way that Piketty and Saez go about calculating this adjustment is, unfortunately, opaque. Using their calculation files to replicate the adjustment, it appears that they simply inserted an even, constant, and nicely rounded multiplier across the pre-war income share. The multiplier in turn “bumps” the entire trendline upward until World War II, when Piketty and Saez begin relaxing their multipliers and then bottoming them out to zero with the 1943-1944 NI to AGI switch at the IRS. The weights that Piketty and Saez use for their multipliers are highlighted in blue on their spreadsheet below. The yellow highlighted cells reflect the post-AGI switch at the IRS.

    Notice that all of the adjustments are even, rounded numbers. Also notice two important features: (1) the weights they apply scale upward toward the highest income earning percentiles and (2) the weights are held perfectly constant across the board from 1918 to 1941, and then rapidly reduced from 1941 to 1943. This presumably reflects a number of assumptions that Piketty & Saez make, including the effects of the wartime expansion of the tax base that occurred with a succession of tax hikes after 1940. They also very conveniently create a shape in the resulting time series that looks like the first half of the famous U-shaped pattern.
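    To make the mechanics concrete, here is a rough sketch of how a flat pre-war weight of this kind would operate, with the constant held through 1940 and rapidly phased out by the accounting switch. The weight value and taper schedule are illustrative placeholders, not the actual Piketty-Saez figures.

        # Illustrative sketch of a flat "NI-to-AGI"-style adjustment: a constant
        # multiplier on the pre-1941 shares, phased out around the 1943-44
        # accounting switch. The weight below is a made-up placeholder, not an
        # actual Piketty-Saez value.

        def adjusted_share(year, raw_share, prewar_weight=0.08):
            if year <= 1940:
                weight = prewar_weight                       # held constant, 1918-1940
            elif year <= 1943:
                weight = prewar_weight * (1944 - year) / 4   # rapidly reduced, 1941-1943
            else:
                weight = 0.0                                 # post-AGI switch: no adjustment
            return raw_share * (1 + weight)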

    Piketty and Saez provide very little indication of where any of these weights even come from, let alone if they accurately reflect the size and distribution of untaxed deductions from the pre-AGI period of IRS accounting. The evenly rounded and constant numbers also strongly suggest that simple “guesstimation” is at play (my readers will remember that Piketty has a bad habit of guesstimating numbers and weights along these lines in historical periods of sparse or insufficient data, usually to create the trend line shape that he wishes to depict).

    Part of the problem comes from the unavoidable issue of insufficient historical data. The IRS records are not sufficiently complete to perform a direct NI-to-AGI adjustment in most pre-war years. The question then becomes one of whether the Piketty-Saez weights, apparently guesstimated, are justifiable. Let me offer one piece of evidence that strongly suggests they are not. While the IRS did not report or differentiate all types of deductions in the pre-war period, they did track tax-exempt donations to charities from the mid 1920s onward.

    Annual charitable deduction totals fluctuated wildly throughout this period, showing deep responsiveness to changes in the tax code and to the Great Depression. The inflation-adjusted totals are depicted below:

    Charitable deductions represent only a part of the NI-to-AGI adjustment, so we cannot make a direct 1-to-1 claim about their effects. Still, the severity of the fluctuations itself suggests at least one strong reason why the stable, constant multiplier that Piketty and Saez employ could be highly problematic.

    This is but one of many similar issues I will be highlighting with their pre-World War II adjustments in the forthcoming paper and future posts. It is a substantial one though, the removal of which completely alters and substantially diminishes the first half of their famous U-shaped distribution.

     

    Further debating Adjunct Justice

    Posted on April 4, 2017

    Economist Steven Shulman recently authored a rebuttal of sorts to the first of two articles that Jason Brennan and I wrote on the subject of adjunct justice. If nothing else he deserves credit for doing so in a submission to a scholarly journal, where this conversation needs to take place. Most adjunct “activists” have thus far avoided submitting their arguments to scholarly outlets where they would be subjected to peer review and a higher standard of sourcing. A few of them have even attacked me for “publishing privilege” and insinuated that I only publish in academic journals to keep my work “behind a paywall.” These claims are specious but also common. Shulman is not an activist, and understands the value of conducting research in professional outlets. So even as we disagree on this specific topic, I welcome his efforts to elevate the conversation over adjuncting into a scholarly venue.

    That noted, Shulman’s core criticisms of our work contain multiple misinterpretations and erroneous lines of reasoning. I still encourage others to read the piece in full, but I wanted to take this opportunity to respond to a couple of his claims. The first concerns his attempt to calculate an alternative estimate of the cost of “adjunct justice” in response to the figures we presented in our original article. He asserts that he reaches a set of figures “one-third to one-half below B[rennan] & M[agness]’s range.” I’ll quote the relevant passage for his calculations:

    “The average entry-level salary for assistant professors in is $70,655 (CHE:ibid). If a fulltime faculty member whose only responsibility is teaching (i.e., no research or administration) is required to teach eight courses per academic year, she or he would be paid $8832 per course. Including benefits brings per course compensation for new assistant professors to $11,776. If adjunct faculty pay per course is $2923, fair pay for new adjunct faculty members would require an additional $8853 per course. In this frame of reference, the aggregate cost of adjunct justice would amount to $27.9 billion per year, qualified by the same over-estimate and under-estimate biases noted above.”
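    For clarity, the arithmetic in the quoted passage works out as follows (a minimal sketch; the one-third benefits markup is inferred from the quoted figures rather than stated explicitly):

        # Reproducing the per-course arithmetic in Shulman's quoted passage.
        # The one-third benefits markup is inferred from his figures
        # ($8,832 -> $11,776); the other numbers come from the quote itself.
        assistant_salary = 70_655      # quoted entry-level assistant professor salary
        courses_per_year = 8           # his assumed teaching-only annual load
        per_course = assistant_salary / courses_per_year  # ~ $8,832
        with_benefits = per_course * 4 / 3                # ~ $11,776
        adjunct_per_course = 2_923                        # his adjunct pay figure
        gap = with_benefits - adjunct_per_course          # ~ $8,853 per course
        print(round(per_course), round(with_benefits), round(gap))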

    There are two substantial errors in this approach. The first is that Shulman is using the wrong figure as a starting point. The job he describes and uses as the basis for his calculation is not actually reflective of the typical entry-level assistant professor appointment in the United States. It is much closer to the rank of “instructor” or “lecturer” – typically an entry-level full-time faculty appointment that carries heavy teaching loads and operates on a renewable contract basis, as opposed to tenure track. As we argued in our second article on the issue of adjunct exploitation, a full time entry level instructor/lecturer position is much closer to the job qualifications of the average current adjunct professor than an assistant professorship (and even this requires the generous assumption that adjuncts possess the proper terminal degrees that are usually required for these roles. Most do not). The most recent data on instructor/lecturer level appointments places the average salary in the $50-56 thousand range, or well below Shulman’s assistant professor salary starting point. As a result he is severely overstating the pay differential between his hypothetical adjunct and the faculty rank they would most likely qualify for if converted to full time positions. He is essentially comparing  apples to oranges.

    The second issue with Shulman’s comparison is his assessment of faculty duties. He incorrectly assumes that the full time faculty conversion entails only teaching obligations. In reality, almost all full time faculty are contractually obligated to meet expectations of research output and contribute to a variety of tasks known as university service (serving on committees, advising students, department obligations etc.) This is certainly true of almost all assistant professor appointments, where such tasks are an integral component of a professor’s application for tenure. But it is also the norm for instructors/lecturers, even if they are hired for teaching-heavy positions with only modest research expectations. So how much of their workload do full-time faculty spend on teaching versus research and service? The numbers vary somewhat by rank and type of institution, but this question has in fact been exhaustively investigated over the years with surveys and case studies. The general range is between 40 and 65% on teaching, with the remainder divided between research and service. A tenured professor at an R1 university will likely be closer to the 40% range, or perhaps even less if they are productive “star” researchers who can negotiate a reduction in teaching obligations. An entry level professor at a liberal arts college will likely have a heavier 3-3 to 4-4 teaching load, and thus be expected to commit more time to classroom instruction. In either case though, it’s reasonable to expect that faculty at even the lowest entry-level academic ranks will be spending at least a third of their time on activities other than teaching. This further complicates Shulman’s calculations as it further reduces the actual per-course compensation that faculty receive for the teaching portions of their contracts. Based on the calculations that Jason and I did, a true apples-to-apples comparison would yield only modest compensation differences between a PhD-holding adjunct and an entry level full time lecturer. In one of the scenarios we considered, the per-classroom-hour difference between the two was only about $4.

    These two corrections reveal that the pay differential between adjuncts and the closest comparable full-time position is actually pretty modest when compared on hours actually spent on teaching-related activities. It would be a mistake, though, to conclude that this difference makes “adjunct justice” affordable. While a $4/hour pay hike would undoubtedly be welcomed by most adjuncts, it is also far short of what practically any of the adjunct activist organizations purport to be a “just” wage. For an adjunct who strings together a 4-4 teaching load, it would probably amount to an extra $4,000-5,000 a year on a $26,000 base salary. That figure is less than half of even the most conservative salary demands of adjunct activists like the SEIU and New Faculty Majority, let alone their stated goal of $15,000 per course (Shulman also incorrectly states that we based our original estimates off “implausible” adjunct salary goals – we actually took them directly from the published statements of multiple adjunct labor activists and organizations). So in effect, Shulman ends up both overstating the classroom-related pay differential between comparably ranked adjunct and full-time faculty, and understating the “justice” demands of the adjunct activists by conflating their stated goals with his own erroneous calculation.

    Another way to put it is this: after adjusting for qualifications and the actual portion of a full time faculty member’s job that goes into teaching, the teaching compensation-based salary difference between a 4-4 PhD holding adjunct and a full time instructor/lecturer – the closest equivalent rank – is probably about $4-5K per year. Shulman wants around twice that at almost $9K more per year. And the adjunct activists want well in excess of Shulman’s figure, with a variety of proposals demanding anywhere from $20K to as high as $90K more per year.

    I’d also be remiss if I didn’t point out another argument in this passage:

    “According to B&M, adjunct faculty justice would harm many adjunct faculty members as well as students because the conversion of part-time positions into a smaller number of full-time positions would cost many adjunct faculty members their jobs and deprive students of their expertise. This argument is similar to the conservative claim that workplace reforms like the minimum wage harm the very people they are meant to help. The fact that the evidence of these harms has proved thin (Brown, 1999) has not made the argument any less potent for anti-reformers like B&M. They insist that “it is not plausible that universities can help all adjuncts or give them all a better deal. Instead, because of budget constraints, they can at best help some and hurt others.” But that conclusion rests more on their implausible assumptions than on the budget constraints faced by higher education.”

    Let’s break down the argument here:

    1. Brennan & Magness’ argument about the trade-offs entailed in adjunct justice sounds somewhat similar to the “conservative” argument against the minimum wage.
    2. Here’s a single decades-old citation that purports to show the “conservative” argument against the minimum wage is wrong.
    3. Brennan & Magness’ trade-offs argument is therefore wrong too. Also, Brennan & Magness are “anti-reformers.”

    That’s a classic non-sequitur, and an amusing one for an economist to make as well.

    The conversation over the use of adjuncts in higher ed is ongoing, and I look forward to future examination of it in suitable scholarly venues. To that end, Shulman’s argument – even with the aforementioned faults – is at least a conversation starter. Its empirical argument still falls far short of its conclusions. But perhaps it will inspire other adjunct activists to take their case to scholarly venues for an actual discussion, instead of shrieking on Storify and making unsubstantiated empirical claims to journalists who uncritically accept their accuracy.

    Low lie the yields of Malthunry

    Posted on March 29, 2017

    Every year around St. Patrick’s Day, the Great Irish Famine of 1845-52 briefly reenters the public’s consciousness. Parallels to more recent political events, including the Syrian refugee crisis and the ongoing debate over immigration, have also elevated its salience as a historical precursor. In a subtle rebuke of President Donald Trump, Irish Prime Minister Enda Kenny recently invoked the United States’ relatively liberal immigration policy of the 1840s as a core feature of the American identity: “four decades before Lady Liberty lifted her lamp, we were the wretched refuse on the teeming shore.”

    The famine’s immediate instigator was the potato blight – a disease that wiped out the island’s primary subsistence food crop and resulted in the starvation deaths of over a million people. Further investigation of its causes quickly goes astray, and in some quarters of academia and the press alike it has become common to blame the Irish famine on the ravages of “laissez-faire” capitalism.

    The argument for this view is often historically simplistic. It usually casts the famine as an instance of class-discrimination arising from a market failure in which the British government allegedly shirked its duties to relieve the starving masses out of a belief that the unhindered market would sort the matter out. Some darker variants go so far as to portray the event as a capitalism-induced economic cleansing to rid Ireland of its poor and dependent classes at the behest of wealthy landowners.

     

    The Political Origins of the Irish Famine

    While it is difficult to overstate the misery of the famine itself, these portrayals politicize its history beyond recognition, while conveniently sidestepping the pronounced role that illiberal economic and political institutions played in Ireland’s food crisis. Two preexisting policies were largely to blame for the famine’s severity once the potato blight struck.

    The first was a legacy of England’s 16th century break with the Roman Catholic church and, more directly, Oliver Cromwell’s proto-genocidal conquest of Ireland in the wake of the English Civil War some two centuries prior. These events gave rise to a series of brutally repressive anti-Catholic penal laws in Ireland. The most far-reaching was a government-enforced land redistribution scheme that stripped Irish Catholics of their property for a variety of “offenses” against English Protestant rule – for supporting the breakaway Kilkenny Confederation in 1641, for pledging allegiance to Charles I in the Civil War, for backing the Catholic King James II after the Glorious Revolution of 1688 in the Williamite War, and for recurring support for later Jacobite causes over the next half-century.

    As applied in Ireland, these laws struck at the heart of the very same institutions that fueled the Industrial Revolution in neighboring Great Britain. After the landowner redistributions of 1649-1691, Catholics were severely restricted from purchasing or even leasing property for the next century. The British Parliament prohibited Catholic ownership of firearms, barred Catholics from public office and disenfranchised Catholic voters, imposed Catholic-specific taxes, established preferential inheritance laws that favored Protestant converts, restricted Catholics from specific gentry-level professions, and even barred Catholics from sending their children abroad to be educated. Edmund Burke, the liberal-turned-conservative philosopher, described their cumulative effect as a “machine…fitted for the oppression, impoverishment, and degradation of a people.” The relief of these punitive provisions became a primary cause of late 18th and early 19th century English liberals, with major though imperfect reform bills being secured in 1791 and 1829.

    The second political source of the famine’s severity was found in the economic philosophy of mercantilism, and specifically the protectionist Corn Laws that the British Parliament enacted in the wake of the Napoleonic Wars. A cronyist political tool of agricultural landowners in England, these measures severely taxed the importation of foreign-grown wheat and other grains, which could be grown more efficiently and cheaply in better climates. By intentional design, these tariffs raised the price of food items in Britain and Ireland to the benefit of agricultural landowners. Consumers paid the direct price.

    Though the Corn Laws were the most famous legislative products of British mercantilism, they actually built upon several decades of earlier agricultural protectionism. Ireland was particularly hard hit, as the grain tariffs came into existence in conjunction with the aforementioned penal statutes. In the 18th century, the British Parliament imposed a variety of commodity-specific laws that restricted the export of Irish commodities to non-English merchants. Food production in Ireland itself was greatly distorted by the tariff system, which incentivized comparatively less efficient uses of agricultural land. Combined with the restrictive laws on property ownership, these measures gave rise to an Irish agricultural model built around the cultivation of grains for sale to fixed buyers at tariff-protected prices to the benefit of absentee landowners in England. Adam Smith, in fact, diagnosed the economic ills of this system in the Wealth of Nations. He denounced its extremely regressive tax effects “as is the case in Ireland [where] such absentees may derive a great revenue from the protection of a government to the support of which they do not contribute a single shilling.”

    When the blight arrived in 1845, it struck an Irish Catholic population that had been forcibly reduced to a state of landless poverty and subsistence agriculture by 200 years of government predation and punitively protectionist trade policies on food items. Landless and still under the shadow of two centuries of economic and political repression, they were also held captive to a food market that artificially raised grain prices beyond their reach and relegated them to subsistence on a failing potato crop. Symptomatic of these government-created distortions, Ireland actually continued to export grains to England under politically preferential land and trade arrangements even after the onset of the famine.

     

    Free Market Liberalism and the Famine Response

    The conditions that made the Irish famine so catastrophic were largely created by centuries of political intrusions upon everything from the freedom of trade to the most basic ability of Irish Catholics to own property. This created an economic system in Ireland that stood in direct antithesis to both the free trade prescriptions of Smith and David Ricardo and the economic doctrine of non-intervention. In fact, the most important famine-relief policies from within the United Kingdom and from abroad were rooted in the doctrines of laissez-faire. In Britain, the onset of the famine proved to be a major trigger of the Corn Law system’s destruction.

    In 1846, after two years of crop failures and poor agricultural conditions all around, Prime Minister Robert Peel bucked the protectionist majorities of his own party and acquiesced to the liberal Whig free trade cause of Richard Cobden and John Bright. Ireland weighed heavily on Peel’s conversion to free trade, and in late 1845 he even surreptitiously approved the importation of over £100,000 of corn from the United States, in circumvention of the tariffs, for distribution through the Irish workhouses. The tariff relief ultimately succeeded, though not without a price: it split Peel’s cabinet and party, costing him the government. The compromises needed to secure a majority for the Corn Law repeal also resulted in its implementation being dragged out for three years over a schedule of successive reductions. By its completion in 1849, the ravages of the famine had spread to all of Ireland.

    A second source of famine relief emerged abroad in the form of emigration. Following the Jeffersonian “Revolution of 1800,” the United States adopted a relatively liberal immigration policy (at least by 19th century standards) that permitted easy entry into its ports as well as remittances back home, which could in turn be used to pay for the transatlantic voyage of other family members. Britain also permitted relatively unimpeded migration to Canada and other overseas parts of its empire, though nativist political reactions resulted in this policy being restricted after 1847. All told, more than a million Irish refugees used the option of emigration to escape the famine between 1845 and its conclusion in 1854.

     

    A Malthusian Famine Relief

    We’ve established thus far that (1) the illiberal economic and political restrictions upon Ireland were a major cause of the famine’s severity and (2) the liberal policies of free trade and free migration were two of its most important means of relief. Despite these realizations, the famine itself is often blamed on capitalism. How are we to reconcile the two claims?

    Simply put, those who cast the blame for the famine on free markets have largely misidentified their target through a combination of sloppy history and poor economics. They normally accuse Peel’s successor John Russell of adhering to an economic “orthodoxy” of “laissez-faire” rooted in Smith and Ricardo, and point to the British government’s reluctance to invest in famine-relief charities out of the belief that the free market would sort things out. Briefly setting aside the reality that Ireland’s punitively regulated economy in 1845 was anything but a laissez-faire paradise, another problem emerges in the historical misidentification of the intellectual inspirations of the blamed parties.

    Foremost among them is the political administrator whose name, perhaps more than any other, has come to be associated with the British government’s failures during the famine years: Charles Trevelyan. The scion of an aristocratic Whig family who occupied a prominent post in the British Treasury civil service, Trevelyan effectively took over the famine relief after the fall of Peel’s government in 1846. He is curiously portrayed as a free-market dogmatist and devotee of Adam Smith, as is the case in this depiction from a leftist political science professor. Seldom mentioned, however, is that Trevelyan’s economic beliefs are more closely linked to the teachings of Thomas Malthus.

    Malthus is a perpetually controversial figure, both for competing claimants to his intellectual legacy and disparate assessments of his own intentions. The connection between Trevelyan and Malthus is undeniable though. Trevelyan’s introduction to the study of political economy occurred when he was a student of Malthus himself at the East India Company College in Hertfordshire.

    Some decades prior to the famine, Malthus supported the original enactment of the Corn Laws on the grounds that they permitted a balance between manufacturing and agricultural interests. He claimed influence from Smith as well, even as this inheritance was vigorously contested by his contemporary David Ricardo. To this end, Malthus also criticized the penal laws in Ireland for their economic detriments, placing him at least at times on the classical liberal side of the issue. His connection to the famine, however, comes from his most famous work, an oft-revised tract on the economics of overpopulation. In its basic form, the Malthusian doctrine predicted a mathematical conundrum in which exponential population growth would eventually surpass the ability of natural resource production to sustain it.

    The implications of Malthus’ theory have been hotly debated for centuries. He personally resisted one of its possible implications – coercive population control – in favor of encouraging people to restrict procreation through abstinence and late-life marriage. Many of his intellectual followers have shown more pronounced proclivities toward population control, including the forced sterilization and eugenics programs advanced by self-described “neo-Malthusians” in the early 20th century. While we cannot assign guilt to Malthus for the actions of others after his lifetime, it is not difficult to see how these positions could be arrived at from a reading of his works. In fact, one startling passage on Ireland itself appeared in an 1817 letter that Malthus wrote to Ricardo:

    “Through most of this country, great marks of improvement were observable, though its progress had received a severe check during the last two years, the effect of which was peculiarly to aggravate the predominant evil of Ireland, namely population greatly in excess above the demand for labour, though in general not much in excess of the means of subsistence on account of the rapidity with which potatoes have increased under a system of cultivating them on very small properties with a view to support rather than sale. The land in Ireland is infinitely more peopled than in England; and to give full effect to the natural resources of the country, a great part of this population should be swept from the soil into large manufacturing and commercial Towns.”

    It is not difficult to see in this passage the underpinnings of Trevelyan’s attribution of the famine to Irish breeding and overpopulation a generation later. Trevelyan’s own report on the famine relief, published in 1848, is deeply rooted in Malthusian population doctrine. It faults the forces of population strain for the situation in Ireland and, in some of its more infamous passages, suggests that the unfolding events reflected divine will upon the island’s consumptive excesses.

    Consistent with portrayals that incorrectly brand the famine relief as a “laissez-faire” enterprise, Trevelyan is often portrayed as having taken a callous do-nothing approach to his task that would allow the population crisis to sort itself out by migration or, if necessary, starvation. It is difficult to reconcile this claim with the actual famine relief pursued by Russell’s government.

     

    Keynesianism before Keynes

    Here Trevelyan’s course has much more in common with a different line of economic thought that emerged from one camp of Malthus’ followers, one viewing the state as a mechanism to manage the untamed “natural” forces of an economy. It is more commonly associated with the 20th century economist John Maynard Keynes, who also styled himself a neo-Malthusian on both population issues and macroeconomic management. In addition to sharing Keynes’ near-obsession with Malthusian population pressures as a putative explanation for social ills, Trevelyan believed the government’s role was essentially to oversee the crisis as a countervailing force to the “nature” of an unregulated market. To accomplish this control he sought to position the government as a jobs provider for an economy in crisis. In 1846 he launched a massive “public works” program that sought to employ the “surplus” Irish population in the construction of roads and river improvements.

    Trevelyan’s report openly boasts of employing almost 100,000 people in government projects within a few months of its start. By 1847 it employed five times that number. Like many centralized economic programs however, Trevelyan’s “public works” succumbed to waste, graft, and maladministration. Borrowing from additional Malthusian doctrines about consumer demand and clinging to the notion that potato subsistence had detached the Irish from any familiarity with purchasing their own food, he indulged multiple failed experiments in price and wage manipulation. These were ostensibly designed to “teach” the workers how to properly manage their earnings without a potato crop. In reality, they ended up suppressing the public sector wages that the government offered while also, at times, inducing artificial spikes in grain prices.

    The construction projects had additional unanticipated effects. They diverted laborers away from agricultural work (an objective itself rooted in the interventionist belief that too many Irish were wedded to agricultural pursuits and thereby flooded that labor market). This in turn suppressed potato planting once a blight-free crop could be raised and impeded the recovery. To make matters worse, the British government also attempted to finance its massive expenditures with a changing array of taxes on landowners in the affected districts. As Mark Thornton has argued, these taxes likely introduced multiple unintended consequences: their burdens were passed through onto poor tenants by absentee landowners, as land value levies they further diverted resources away from food production, and they likely crowded out private charitable relief for the famine itself.

    The bureaucratic disaster of the public works program eventually drew scrutiny in parliament and the press, resulting in its cancellation followed by a succession of similarly disastrous attempts to manage the famine by different forms of government-induced “charity.” In the end, the most effective relief mechanisms proved to be free migration – owing to the relatively liberal policies of the United States in its willingness to receive Irish famine refugees – and the elimination of trade protectionism over Britain’s food sources as the Corn Law repeal’s implementation took full effect in 1849.

    The government’s approaches to famine relief were widely derided at the time as a succession of failures – not on account of their “laissez-faire” character, but the exact opposite. They tried to manage Ireland out of a food crisis through public spending, price controls, and make-work programs. One testimonial recorded in the House of Lords in 1852 succinctly captured the absurdity of the entire enterprise:

    “We continued the Works we had selected originally, but towards the end a number of works we had excluded were commenced, merely for the purpose of employing the people, nearly in the same way as if we had dug a hole to fill it up again.”

    It would appear from this evidence that Charles Trevelyan, the overpopulation-obsessed Malthusian administrator of one of the largest public works programs in Irish history, was far from a laissez-faire dogmatist. Perhaps we should enlist an alternative descriptor: Trevelyan was actually something of a proto-Keynesian.

    The English Department attacks academic freedom again

    Posted By on March 23, 2017

    The Faculty Senate at Wake Forest University adopted the following resolution at a meeting last week:

    “Motion 2: To freeze current hiring by the Eudaimonia Institute, and cancel any internal (e.g. Eudaimonia conference) or external presentations related to the IE, and to restrict publication of material from EI until the COI committee is established and the University COI policy can be applied.”

    For the sake of academic freedom, the Faculty Senate fortunately has no enforcement power to carry out this resolution. It is nonetheless difficult to imagine a more direct assault than this measure against the Eudaimonia Institute, a free-market-aligned scholarly institute composed of an interdisciplinary group of Wake Forest faculty members. The resolution openly seeks the power to censor these faculty members’ ability to publish their own scholarly work, to host lectures and events, and even to make hiring decisions for their own personnel. Two other concurrently adopted resolutions from the same meeting seek to subject the Eudaimonia Institute to an oversight review board, and to suspend its funding. These too are, fortunately, non-binding.

    The motive for this assault upon the basic rights of faculty to conduct research free of censorship and intimidation is equally chilling. The Faculty Senate committee that drafted the resolutions did so out of political opposition to one of the Eudaimonia Institute’s main donors, the Charles Koch Foundation. They believe the Koch Foundation’s free-market political beliefs are objectionable and wish to see them excluded from campus, so they set out to persecute a group of faculty who receive Koch funding. Despite the committee’s claim that it is simply seeking to investigate “conflicts of interest” posed by private donors irrespective of their politics, note that no similar objections have been raised about the activities or donors of a multitude of left-leaning institutes at Wake Forest, including two that openly engage in political activism for progressive causes: the Anna Julia Cooper Center for Social Justice and the Pro Humanitate Institute, a project of former MSNBC pundit Melissa Harris-Perry.

    The Koch Foundation is a major financial supporter of classical liberal scholarly endeavors in the United States. It funds faculty and research centers at hundreds of universities and provides resources for research projects, student scholarships, and speaker events. The majority of this funding sustains research on free-market economics, though the foundation has also provided several million dollars in scholarships for students from historically disadvantaged and minority groups. (Full disclosure: my own university and institution have similarly benefited from the Koch Foundation’s academic support, and I’m proud that they consider my own research to be worthy of support. I’m also proud to report that they’ve never once tried to influence the findings of anything I’ve ever written, despite conspiratorial insinuations otherwise by a number of madjunct activists.) This funding is nonetheless seen as unacceptable by the numerous partisans of ideological orthodoxy who inhabit higher education. Even though the Koch Foundation represents a tiny fraction of a percent of total research funding in higher education – with much larger shares coming from progressive left-leaning foundations and deeply politicized government sources – and even though free-market and classical liberal faculty are a distinct minority in left-leaning academia, their very existence is deemed intolerable by the illiberal elements of the campus left.

    …which brings us back to the events at Wake Forest. Here a number of faculty have decided that their own colleagues should not be permitted to conduct research on perfectly mainstream economic and philosophical topics because it conflicts with progressive political ideology. These faculty have therefore set out to sabotage their colleagues’ funding and censor their work. Last week’s resolutions came about as a product of an anti-Koch petition circulated last semester among some Wake Forest faculty members. The breakdown of signers by discipline displays a familiar pattern. Most signers come from the humanities and social sciences, STEM disciplines are comparatively rare, and – as always – the English Department was the main instigator:

    We’ve seen multiple examples of this exact same pattern in recent controversies over academic freedom, including the events a month ago at Middlebury College in Vermont where a faculty-fomented protest resulted in a violent attack upon speaker Charles Murray and another faculty member. That protest also involved a widely circulated faculty petition denouncing Murray’s talk. It too was dominated by the humanities, with a group of English and MLA department faculty leading the pack.

    In pointing this out, please note that I am in no way making a gratuitous attack upon English as a discipline. English has an important place in a well-rounded liberal education. We should, however, be deeply alarmed by the politicization of English faculty (as well as the other humanities at large). Their involvement in these blatant attempts to silence dissenting political views on campus is quickly becoming a recurring pattern. It also stands in stark contrast with STEM faculty and the quantitative social sciences, who lend comparatively few supporters to campus illiberalism.

    This is not without reason. As a recent article in the American Interest magazine showed, university faculty have become more politicized over the past 25 years even as the American public at large has maintained a relatively stable left/right split. More startling, though, the most pronounced politicization has taken place in a few disciplines that are now overwhelmingly skewed toward the political left. English is, unambiguously, the most skewed discipline, with over 80% of its faculty self-identifying on the political left according to the most recent UCLA Higher Education Research Institute survey.

    Faculty political biases come with the territory of academia, and are not objectionable in themselves as they represent a direct product of freedom of thought and freedom of inquiry. A problem emerges, though, when certain fields skew so heavily to one side that they effectively shut out viewpoints that dissent from a prevailing political orthodoxy. The chart above suggests we have surpassed that point in English, and that several of the humanities are not far behind. More alarming still is the correlation revealed by the petitions at Wake Forest, Middlebury, and other campuses where political disagreements have resulted in threats to academic freedom. The most aggressively left-leaning fields like English/MLA and the other humanities also seem to dominate faculty petitions that actively call for the suppression of dissenting viewpoints on campus. It is increasingly apparent that the two patterns – progressive ideological homogeneity within a discipline and support for restricting the academic freedom of right-leaning faculty, speakers, and students – are closely related.