Phillip W. Magness

Historian – 19th century United States
  • .: Phil Magness’ Blog :.

    Personal blog of Dr. Phil Magness, historian of the American Civil War and the 19th Century United States. Here I will post my thoughts and commentary on current research topics, upcoming events, and the general state of academia.
  • Philanthropy and the Great Depression: what historical tax records tell us about charity

    Posted By on May 19, 2017

    As part of my ongoing investigation into early 20th century tax policy, I recently compiled a data series to track patterns in charitable giving during the 1920s and 1930s. As a result of tax code changes in 1917, the IRS began allowing federal income tax payers to deduct up to 15% of their taxable income for donations to recognized philanthropic causes. Eligible donations included charities for the poor, as well as certain contributions to the arts, scientific study, and education. The policy was intended to incentivize private giving and its successor program persists to the present day in the form of tax deductible donations to eligible non-profit organizations.
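    The mechanics of the 1917 rule are simple enough to sketch. The 15% cap is from the policy described above; the function name and inputs are my own illustration:

```python
def allowed_charitable_deduction(taxable_income, donations, cap_rate=0.15):
    """Deductible amount under the 1917-style rule: donations to
    recognized philanthropic causes are deductible, but only up to
    a fixed share (15%) of taxable income."""
    return min(donations, cap_rate * taxable_income)

# A filer with $10,000 of taxable income who gives $2,000
# may deduct only $1,500; a $1,000 gift is fully deductible.
```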

    The IRS required tax filers to report their charitable deductions on their tax forms and tabulated the total amounts in their annual report on the income tax system. Surprisingly, little work has been done with the resulting data series on annual charitable contributions in this period.

    The chart above illustrates the total amount (inflation-adjusted) of charitable contributions in the period between the implementation of the deductions policy and the eve of World War II. Several patterns are noticeable. First, the Great Depression caused a precipitous decline in charitable giving that persisted into the late 1930s. While this effect is partially attributable to the economic decline caused by the Depression, its persistence also attests to the documented ‘crowding out’ effect that New Deal era spending had upon private charities. Earlier work by Jonathan Gruber and Daniel Hungerman noticed a similar drop-off in church-based charitable giving during the New Deal era, as the federal government picked up the tab for Depression-relief programs.

    Further evidence of this phenomenon may be seen in the raw IRS figures showing where charitable deductions came from. The chart below depicts earners in the $1 million+ income tax bracket, extended through the middle of World War II. Notice several patterns. A sharp spike in giving followed the introduction of the charitable deduction, indicating that the incentive structure worked. Giving among the wealthiest Americans also spiked dramatically after the 1924 tax cut, which reduced high World War I-era rates to a top marginal rate of 25%. Charitable giving among the wealthy dropped off during the New Deal, however, and especially after another income tax hike enacted by Herbert Hoover in 1932. It did not really recover until some point after World War II.

    Now compare this second chart to the first, which depicts overall charitable giving. While donations by the wealthy in the mid 1920s are evident on this chart as well, something is missing at the end of the series: the total amount of deductions actually began to accelerate around 1939-1940 in the first chart, even though deduction patterns for the wealthiest earners remained essentially flat during the same years.

    What explains this somewhat counter-intuitive pattern? The answer may be seen in the next chart, showing raw charitable deduction amounts claimed by lower-level income earners – specifically tax brackets for incomes between $5,000 and $10,000 per year (note: an almost identical pattern appears for earners in tax brackets below the $5,000 mark, although in most years the IRS reported those brackets only as a cumulative total rather than individually).

    As we can see in this chart, donations from the bottom of the income ladder actually drove the spike in charitable giving in the early 1940s. The reason has to do with yet another set of changes to the federal tax code. Beginning on the eve of the war and continuing until 1945, Congress rapidly expanded the federal tax base onto lower income earners and simultaneously stepped up income tax enforcement to curb evasion. This was done in part to finance the war, though it also involved major administrative reforms such as the addition of automatic payroll withholding in 1943 to increase tax compliance. Faced with a newfound tax burden, lower income individuals began taking advantage of the same charitable deduction allowance that the wealthy had used to alleviate their own tax burdens in the 1920s.

    Different Measurements of Income Inequality – the interwar Wisconsin Example

    Posted By on May 16, 2017

    I have a new paper, co-written with Vincent Geloso, on the measurement of inequality in Wisconsin between 1919 and 1941. The discussion’s geography may initially seem obscure, but there’s a method to this investigation. In the early part of the 20th century Wisconsin had a stable state income tax system and, more importantly, generated high quality data about its taxpayers on a semi-regular basis throughout the period. The Wisconsin tax featured low but moderately progressive rates ranging from 1% to 7%, was applied broadly across the state’s population, and underwent relatively few statutory changes to its tax rates over the period between its inception in 1911 and the end of World War II.

    Wisconsin’s state income tax contrasts greatly with the federal income tax of the same period. Federal tax rates fluctuated wildly until the end of World War II, as Congress frequently tinkered with top marginal rates. These hit a low of 25% in the 1920s but exceeded 60% during both World War I and the Great Depression era; by the end of World War II, the top marginal rate exceeded 90% on the highest income earners. The federal income tax was also notoriously inconsistent in this period. Tax avoidance and evasion were both recognized problems of the system, and the tax base itself was constrained to a narrow slice of the population – sometimes fewer than 10% of all U.S. households were eligible to pay. This narrow base was in keeping with the tax’s intended progressivity, although it ended with a rapid tax base expansion during World War II that gave us our modern income tax system.

    The point in contrasting these two systems – Wisconsin and federal – is to show how the structure of the tax code affected the data each generated. For example, Wisconsin’s low rates and broad tax base ensured that it collected tax returns from the majority of Wisconsin households in most years even as the IRS collected returns from a much smaller tax base (these collections predated automatic payroll withholding and were therefore self-reported). During the late 1920s and 1930s, Wisconsin consistently reported a higher overall net income for the state than did IRS figures.

    Wisconsin also had more tax filers on the local level in every year between 1915 and 1941, when the IRS surpassed them due to the wartime tax expansions. During the 1930s it was not uncommon for Wisconsin to collect 3 to 4 times as many state-level tax returns as the IRS did in the state at the federal level.

    To make a long story short, all of this matters for purposes of data analysis, including the calculation of income shares and inequality. Thomas Piketty and Emmanuel Saez’s article on the historical distribution of incomes in the United States uses federal income tax data from the IRS to compile its estimates. Though their method is an innovative improvement upon numerous prior attempts to calculate income shares, it remains highly sensitive to the underlying quality of its tax data source.

    The Wisconsin tax system therefore has implications for measuring inequality by serving as a point of comparison against the IRS. Since Wisconsin had a more stable tax regime and collected its returns from a broader portion of the population, it is likely a superior data set to the federal IRS records for the reasons mentioned.

    So what happens when we calculate the income distribution for Wisconsin in the 1920s and 30s? If we use the state income tax records as our data source, we end up with very different results than the federal IRS records. In particular, the IRS records tend to show a higher level of inequality than the state records – which is to be expected considering that the IRS primarily collected returns from the wealthiest income brackets whereas Wisconsin taxed the entire state more broadly. A depiction of this effect in the 1920s appears in the graph above, and our full series of generated income shares for the top 10% of earners may be found in the appendix to the paper.

    On Keynes and Eugenics

    Posted By on April 25, 2017

    My article with Sean J. Hernandez on the “Economic Eugenicism of John Maynard Keynes” is now available at SSRN. This article should be approached as a synthesis of the role that eugenics played across Keynes’ career and in the formation of his economic theories. It is also the proverbial tip of the iceberg as far as new and under-explored evidence goes.

    I intend to write more on this topic in the coming months as part of an ongoing project to provide contextual detail and background to the emergence of Keynesian thought in the 1920s and 30s. I’ll offer a short preview in the form of a quotation of Keynes, recorded by Margaret Sanger at a 1925 conference on population in Geneva:

    “I am discouraged because they are not striking at fundamentals. They do not want to think of one fundamental question, and that is the population question. There is not a city, not a country, in the League of Nations today that will accept it, or discuss it, and until the nations of the world are willing to sit down and talk about their problems from the population point of view, its rate of growth, its distribution, and its quality, they might just as well throw their peace proposals into the waste basket, because they will never have international peace until they do consider that problem.”

    To my knowledge, none of Keynes’ biographers have engaged with his role at this event or the implications of his statement for his views on unemployment, conflict, war, and resource allocation. Stay tuned!

    How the AAUP bends statistics to create an adjunct crisis

    Posted By on April 13, 2017

    Earlier this week the American Association of University Professors released its annual report on the economic status of academia. Repeating a theme from prior years, this report heavily emphasizes the position of adjunct faculty and makes a number of bold empirical claims about the alleged growth of the part time academic workforce. For example, the statement released with the report asserts:

    “Faculty on part-time appointments continue to make up the largest share of the academic labor force, and the percentage of faculty jobs that are part time continued to trend higher.”

    Similar claims appear throughout the full report. The AAUP uses the following figure to “prove” assertions about the current size and alleged trajectory of adjunct growth.

    Taken at face value, the figure appears to support their contention of adjunct growth using figures from 1975, 1995, and 2015. The figure, however, is an act of empirical deception. In reality, the adjunct workforce peaked in 2011 and has been on a continuous decline ever since.

    By selectively presenting only 3 actual years of data with 20-year gaps in between, the AAUP’s figure creates an illusion of dramatic and continuing growth in the higher ed adjunct workforce. They make no real effort to understand the complex causes behind these data points, and the trend they do purport to show appears to be almost willfully designed to obscure the actual drop in higher ed’s use of adjunct labor since 2011.

    The actual pattern is revealed in this chart, which I compiled from the AAUP’s own previously published figures. Curiously, they abandoned the practice from previous years of including these data in their most recent report’s figures.

    As this trend line shows, the adjunct workforce percentage peaked in 2011 and was followed by a continued decline in 2013, 2014, and 2015 (the AAUP did not release a figure for 2012). The 2015 decline was even steeper than the previous two years. As it stands right now, the total percentage of adjuncts has dropped below its published figures for 2007. One would have to go back a decade to 2005 to find a lower percentage of adjuncts.

    The reasons for this decline are simple: the adjunct-heavy for-profit higher ed industry bubble collapsed around 2011, initiating a sharp overall drop in the number of adjunct faculty. As the latest stats show, this decline has continued for the past 5 years and shows no signs at the moment of letting up.

    The real question though is this: why is the AAUP, in spite of evidence showing a clear and still-ongoing contraction in the adjunct workforce, asserting the opposite to be true? And why did they utilize an intentionally partial and biased statistical portrayal to obscure the fact that adjunct numbers have been shrinking for the past several years? As is often the case with the AAUP of late, political ideology appears to trump both scientific credibility and intellectual consistency.

    Addendum:

    The same AAUP report also obscures another trend about faculty employment. The total percentage of tenured and tenure-track faculty is actually up over the past decade, even though it is below its 1970s level. In addition to the roughly 30% of faculty who are tenured according to the AAUP figures, another 17% are employed in full time non-tenure track positions.

    Most of this spike has also taken place after the 2007-2008 financial crisis, defying another popular but empirically unattested claim about the supposed decline of tenure.

    Note that the AAUP figures also include graduate students as a “faculty” category, even though these positions are actually more akin to apprenticeships and usually come with sizable tuition credits in addition to payments. The removal of the grad student figures from their totals would have the effect of increasing the percentages of tenured and tenure-track faculty.

    Why Piketty-Saez yields an unreliable inequality estimate before World War II

    Posted By on April 8, 2017

    Next week I will be co-presenting a paper at the APEE conference on the reliability of historical estimates of income inequality in the United States. Our paper examines and offers a number of corrections to the widely cited income inequality time series by Thomas Piketty and Emmanuel Saez (2003). This series provides the baseline for multiple subsequent studies of inequality, and is the primary U.S. inequality series in the World Wealth & Income Database.

    The Piketty-Saez series is the primary example of the famous U-shaped inequality trend line for the United States in the 20th century that was prominently featured in Piketty’s 2014 book Capital in the 21st Century. It is calculated using income tax records from the IRS and a variety of complex statistical techniques to extract a distributional measure of income inequality for the top 1% through top 10% of income earners.

    In this post I want to focus specifically on how Piketty & Saez arrive at their estimates for the pre-World War II period, or basically the first half of their U-shape. This period is both interesting and statistically problematic because the IRS data they use as their source has several under-recognized drawbacks. Most American households were not eligible to pay income taxes prior to a rapid expansion of the tax base through new wartime income tax laws in 1941-1945. Before 1941, only about 10% of U.S. households – or even fewer in some years – were required to file their income taxes. In addition, tax enforcement was often deeply inconsistent in those early years, resulting in ample opportunities for both illegal tax evasion and legal tax avoidance. There were even year-to-year inconsistencies in the accounting measurements that the IRS employed to tabulate reported income. Piketty & Saez are aware of some of these issues and attempt to adjust for them (e.g. IRS accounting issues), but are largely inattentive to others (e.g. evasion and avoidance problems). Our paper argues that the cumulative effect of these issues renders their pre-World War II data, or basically the first half of the U-shape, unusable.

    I will be detailing several of these issues in the coming months, but today I’ll be walking you through some of the issues with one of the most dramatic adjustments that Piketty and Saez make. To reach their initial income distributions for the pre-war period, they begin by taking raw filing data from the IRS’ annual Statistics of Income (SOI) report (most of their post-war data comes from more comprehensive IRS microfile sources that are only available from the 1960s to the present). They use the SOI to calculate distributional estimates using a Pareto interpolation technique that is discussed at length in their paper’s data appendix. The technique itself is fairly standard fare, assuming the source data are accurate. Due to some of the aforementioned problems of IRS accounting inconsistencies and the low number of eligible tax filers before the war, they have to make a few adjustments to its results.

    It’s easiest to see the effects of the adjustments they make through 3 steps in the chart below (showing the calculations for the top 10% income share):

    Model 1, in blue, shows the raw Pareto interpolation from the unadjusted IRS SOI reports. Model 2, in red, attempts to address the problem of insufficient returns due to the low number of eligible tax filers before World War II. To do so it estimates and integrates a modest number of “missing returns” by taking the ratio of married vs. single tax filers in the pre-war years. As you can see, it increases the distributional share of the top 10% slightly before 1940. It does not alter any of the post-1940 results.
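    For readers unfamiliar with the underlying technique, a minimal Pareto interpolation from bracketed tax tabulations looks roughly like the following. This is a toy sketch, not Piketty and Saez's actual code, and the bracket values in the comments are synthetic:

```python
import math

def pareto_exponent(y1, n1, y2, n2):
    """Fit the Pareto tail exponent from two bracket thresholds.

    n1 and n2 are counts of tax units with incomes above y1 and y2
    (y2 > y1). Under a Pareto tail the count above y is proportional
    to y**-a, so the two points pin down a.
    """
    return math.log(n1 / n2) / math.log(y2 / y1)

def top_share(y1, n1, a, total_units, total_income, top_frac=0.10):
    """Estimate the income share of the top `top_frac` of tax units."""
    n_top = top_frac * total_units
    # Invert N(y) = n1 * (y / y1)**-a to find the top-group threshold.
    y_top = y1 * (n1 / n_top) ** (1.0 / a)
    # Pareto mean income above a threshold y is y * a / (a - 1).
    mean_above = y_top * a / (a - 1.0)
    return n_top * mean_above / total_income
```

    On a synthetic population whose whole income distribution is Pareto, this recovers the exact top share, which is the sense in which the technique is "standard fare" when the source data are accurate; the action is all in the adjustments applied afterward.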

    A much larger adjustment comes from what we describe as Model 3, shown here in dark green. This is an accounting adjustment that purports to address the IRS’ switch from Net Income (NI) to Adjusted Gross Income (AGI) in 1943-1944. The difference between the two involves how each handles deductions for charitable giving, state and local tax payments, and some categories of interest payment; it is strictly a feature of the way the tax code treated each. AGI encompasses a share of untaxed but realized income that is not present in NI, hence the justification for making an adjustment.

    A problem emerges though with how Piketty and Saez calculate this NI-to-AGI adjustment for Model 3. As you can see in the chart above, the Model 3 adjustments are the most substantial change that Piketty and Saez make to the pre-World War II Pareto calculations. They consistently add about 5 percentage points to the distributional share before 1941, but relatively little thereafter.

    The way that Piketty and Saez go about calculating this adjustment is, unfortunately, opaque. Using their calculation files to replicate the adjustment, it appears that they simply inserted an even, constant, and nicely rounded multiplier across the pre-war income share. The multiplier in turn “bumps” the entire trendline upward until World War II, when Piketty and Saez begin relaxing their multipliers and then bottoming them out to zero with the 1943-1944 NI to AGI switch at the IRS. The weights that Piketty and Saez use for their multipliers are highlighted in blue on their spreadsheet below. The yellow highlighted cells reflect the post-AGI switch at the IRS.

    Notice that all of the adjustments are even, rounded numbers. Also notice two important features: (1) the weights they apply scale upward toward the highest income earning percentiles and (2) the weights are held perfectly constant across the board from 1918 to 1941, and then rapidly reduced from 1941 to 1943. This presumably reflects a number of assumptions that Piketty & Saez make, including the effects of the wartime expansion of the tax base that occurred with a succession of tax hikes after 1940. The adjustments also, very conveniently, create a shape in the resulting time series that looks like the first half of the famous U-shaped pattern.
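    To make the mechanics concrete, the adjustment behaves roughly like the sketch below. The bump size and the taper schedule are invented for illustration; they are not the actual Piketty-Saez weights, which vary by income percentile:

```python
def model3_adjust(year, pareto_share_pct, bump_pts=5.0):
    """Illustrative Model 3-style adjustment: add a constant bump
    (in percentage points) to the interpolated share before 1941,
    taper it through 1942, and drop it entirely once the IRS moves
    to AGI accounting in 1943-44."""
    if year <= 1941:
        return pareto_share_pct + bump_pts
    if year >= 1943:
        return pareto_share_pct
    # linear taper between 1941 and 1943
    return pareto_share_pct + bump_pts * (1943 - year) / 2.0
```

    The point of the sketch is that any flat pre-war bump of this kind mechanically raises the left arm of the U, regardless of what the underlying deduction behavior actually did year to year.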

    Piketty and Saez provide very little indication of where any of these weights even come from, let alone if they accurately reflect the size and distribution of untaxed deductions from the pre-AGI period of IRS accounting. The evenly rounded and constant numbers also strongly suggest that simple “guesstimation” is at play (my readers will remember that Piketty has a bad habit of guesstimating numbers and weights along these lines in historical periods of sparse or insufficient data, usually to create the trend line shape that he wishes to depict).

    Part of the problem comes from the unavoidable issue of insufficient historical data. The IRS records are not sufficiently complete to perform a direct NI-to-AGI adjustment in most pre-war years. The question then becomes one of whether the Piketty-Saez weights, apparently guesstimated, are justifiable. Let me offer one piece of evidence that strongly suggests they are not. While the IRS did not report or differentiate all types of deductions in the pre-war period, they did track tax-exempt donations to charities from the mid 1920s onward.

    Annual charitable deduction totals fluctuated wildly throughout this period, showing deep responsiveness to changes in the tax code and to the Great Depression. The inflation-adjusted totals are depicted below:

    Charitable deductions represent only a part of the NI-to-AGI adjustment, so we cannot make a direct 1-to-1 claim about their effects. Still, the severity of the fluctuations itself suggests at least one strong reason why the stable, constant multiplier that Piketty and Saez employ could be highly problematic.

    This is but one of many similar issues I will be highlighting with their pre-World War II adjustments in the forthcoming paper and future posts. It is a substantial one, though: removing it substantially diminishes the first half of their famous U-shaped distribution.

     

    Further debating Adjunct Justice

    Posted By on April 4, 2017

    Economist Steven Shulman recently authored a rebuttal of sorts to the first of two articles that Jason Brennan and I wrote on the subject of adjunct justice. If nothing else he deserves credit for doing so in a submission to a scholarly journal, where this conversation needs to take place. Most adjunct “activists” have thus far avoided submitting their arguments to scholarly outlets where they would be subjected to peer review and a higher standard of sourcing. A few of them have even attacked me for “publishing privilege” and insinuated that I only publish in academic journals to keep my work “behind a paywall.” These claims are specious but also common. Shulman is not an activist, and understands the value of conducting research in professional outlets. So even as we disagree on this specific topic, I welcome his efforts to elevate the conversation over adjuncting into a scholarly venue.

    That noted, Shulman’s core criticisms of our work contain multiple misinterpretations and erroneous lines of reasoning. I still encourage others to read the piece in full, but I wanted to take this opportunity to respond to a couple of his claims. The first concerns his attempt to calculate an alternative estimate of the cost of “adjunct justice” in response to the figures we presented in our original article. He asserts that he reaches a set of figures “one-third to one-half below B[rennan] & M[agness]’s range.” I’ll quote the relevant passage for his calculations:

    “The average entry-level salary for assistant professors in is $70,655 (CHE:ibid). If a fulltime faculty member whose only responsibility is teaching (i.e., no research or administration) is required to teach eight courses per academic year, she or he would be paid $8832 per course. Including benefits brings per course compensation for new assistant professors to $11,776. If adjunct faculty pay per course is $2923, fair pay for new adjunct faculty members would require an additional $8853 per course. In this frame of reference, the aggregate cost of adjunct justice would amount to $27.9 billion per year, qualified by the same over-estimate and under-estimate biases noted above.”
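    Shulman's arithmetic can be reproduced as follows. The one-third benefits markup is inferred from his own figures (it is not stated in the quoted passage):

```python
salary = 70655            # entry-level assistant professor salary (CHE figure)
courses_per_year = 8
adjunct_per_course = 2923

per_course = salary / courses_per_year     # teaching-only pay per course
with_benefits = per_course * (1 + 1 / 3)   # benefits at one-third of pay
gap_per_course = with_benefits - adjunct_per_course

print(round(per_course))       # 8832
print(round(with_benefits))    # 11776
print(round(gap_per_course))   # 8853
```

    The internal arithmetic checks out (his $27.9 billion aggregate implies roughly 3.15 million adjunct-taught courses per year at that gap); the problems lie in the inputs, as discussed below.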

    There are two substantial errors in this approach. The first is that Shulman is using the wrong figure as a starting point. The job he describes and uses as the basis for his calculation is not actually reflective of the typical entry-level assistant professor appointment in the United States. It is much closer to the rank of “instructor” or “lecturer” – typically an entry-level full-time faculty appointment that carries heavy teaching loads and operates on a renewable contract basis, as opposed to tenure track. As we argued in our second article on the issue of adjunct exploitation, a full-time entry-level instructor/lecturer position is much closer to the job qualifications of the average current adjunct professor than an assistant professorship (and even this requires the generous assumption that adjuncts possess the terminal degrees usually required for these roles; most do not). The most recent data on instructor/lecturer-level appointments place the average salary in the $50,000-56,000 range, well below Shulman’s assistant professor starting point. As a result he severely overstates the pay differential between his hypothetical adjunct and the faculty rank they would most likely qualify for if converted to full-time positions. He is essentially comparing apples to oranges.

    The second issue with Shulman’s comparison is his assessment of faculty duties. He incorrectly assumes that the full-time faculty conversion entails only teaching obligations. In reality, almost all full-time faculty are contractually obligated to meet expectations of research output and to contribute to a variety of tasks known as university service (serving on committees, advising students, meeting departmental obligations, etc.). This is certainly true of almost all assistant professor appointments, where such tasks are an integral component of a professor’s application for tenure. But it is also the norm for instructors/lecturers, even if they are hired for teaching-heavy positions with only modest research expectations. So how much of their workload do full-time faculty spend on teaching versus research and service? The numbers vary somewhat by rank and type of institution, but this question has in fact been exhaustively investigated over the years with surveys and case studies. The general range is between 40 and 65% on teaching, with the remainder divided between research and service. A tenured professor at an R1 university will likely be closer to the 40% range, or perhaps even less if they are productive “star” researchers who can negotiate a reduction in teaching obligations. An entry-level professor at a liberal arts college will likely have a heavier 3-3 to 4-4 teaching load, and thus be expected to commit more time to classroom instruction. In either case, it is reasonable to expect that faculty at even the lowest entry-level academic ranks will spend at least a third of their time on activities other than teaching. This further complicates Shulman’s calculations, as it reduces the actual per-course compensation that faculty receive for the teaching portions of their contracts.

    Based on the calculations that Jason and I did, a true apples-to-apples comparison would yield only modest compensation differences between a PhD-holding adjunct and an entry-level full-time lecturer. In one of the scenarios we considered, the per-classroom-hour difference between the two was only about $4.
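    The workload correction can be sketched as follows. The instructor salary, benefits rate, and teaching share below are illustrative assumptions consistent with the ranges discussed above, not Shulman's inputs or our exact figures:

```python
def pay_per_course_for_teaching(salary, benefits_rate=1 / 3,
                                teaching_share=1.0, courses_per_year=8):
    """Per-course compensation attributable to teaching alone.

    Shulman's calculation implicitly sets teaching_share = 1.0; a
    workload-adjusted comparison uses a share closer to 0.6."""
    return salary * (1 + benefits_rate) * teaching_share / courses_per_year

# Shulman-style: assistant professor salary, all time counted as teaching.
shulman_style = pay_per_course_for_teaching(70655)
# Adjusted: instructor/lecturer salary (assumed $53,000), 60% teaching share.
adjusted = pay_per_course_for_teaching(53000, teaching_share=0.6)
```

    Swapping in the instructor-level salary and a realistic teaching share cuts the apparent per-course gap over the $2,923 adjunct rate by well over half, which is the direction (if not the exact magnitude) of the correction described above; the precise figure depends on further assumptions about classroom hours and qualifications.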

    These two corrections reveal that the pay differential between adjuncts and the closest comparable full-time position is actually pretty modest when compared on hours actually spent on teaching-related activities. It would be a mistake, though, to conclude that this difference makes “adjunct justice” affordable. While a $4/hour pay hike would undoubtedly be welcomed by most adjuncts, it is also far short of what practically any of the adjunct activist organizations purport to be a “just” wage. For an adjunct who strings together a 4-4 teaching load, it would probably amount to an extra $4,000-5,000 a year on a $26,000 base salary. That figure is less than half of even the most conservative salary demands of adjunct activists like the SEIU and New Faculty Majority, let alone their stated goal of $15,000 per course (Shulman also incorrectly states that we based our original estimates off of “implausible” adjunct salary goals – we actually took them directly from the published statements of multiple adjunct labor activists and organizations). So in effect, Shulman ends up both overstating the classroom-related pay differential of comparably ranked adjunct and full-time faculty, and understating the “justice” demands of the adjunct activists by conflating their stated goals with his own erroneous calculation.

    Another way to put it is this: after adjusting for qualifications and the actual portion of a full time faculty member’s job that goes into teaching, the teaching compensation-based salary difference between a 4-4 PhD holding adjunct and a full time instructor/lecturer – the closest equivalent rank – is probably about $4-5K per year. Shulman wants around twice that at almost $9K more per year. And the adjunct activists want well in excess of Shulman’s figure, with a variety of proposals demanding anywhere from $20K to as high as $90K more per year.

    I’d also be remiss if I didn’t point out another argument in this passage:

    “According to B&M, adjunct faculty justice would harm many adjunct faculty members as well as students because the conversion of part-time positions into a smaller number of full-time positions would cost many adjunct faculty members their jobs and deprive students of their expertise. This argument is similar to the conservative claim that workplace reforms like the minimum wage harm the very people they are meant to help. The fact that the evidence of these harms has proved thin (Brown, 1999) has not made the argument any less potent for anti-reformers like B&M. They insist that “it is not plausible that universities can help all adjuncts or give them all a better deal. Instead, because of budget constraints, they can at best help some and hurt others.” But that conclusion rests more on their implausible assumptions than on the budget constraints faced by higher education.”

    Let’s break down the argument here:

    1. Brennan & Magness’ argument about the trade-offs entailed in adjunct justice sounds somewhat similar to the “conservative” argument against the minimum wage.
    2. Here’s a single decades-old citation that purports to show the “conservative” argument against the minimum wage is wrong.
    3. Brennan & Magness’ trade-offs argument is therefore wrong too. Also, Brennan & Magness are “anti-reformers.”

    That’s a classic non-sequitur, and an amusing one for an economist to make as well.

    The conversation over the use of adjuncts in higher ed is ongoing, and I look forward to future examination of it in suitable scholarly venues. To that end, Shulman’s argument – even with the aforementioned faults – is at least a conversation starter. Its empirical argument still falls far short of its conclusions. But perhaps it will inspire other adjunct activists to take their case to scholarly venues for an actual discussion, instead of shrieking on Storify and making unsubstantiated empirical claims to journalists who uncritically accept their accuracy.

    Low lie the yields of Malthunry

    Posted By on March 29, 2017

    Every year around St. Patrick’s Day, the Great Irish Famine of 1845-52 briefly reenters the public’s consciousness. Parallels to more recent political events, including the Syrian refugee crisis and the ongoing debate over immigration, have also elevated its salience as a historical precursor. In a subtle rebuke of President Donald Trump, Irish Prime Minister Enda Kenny recently invoked the United States’ relatively liberal immigration policy of the 1840s as a core feature of the American identity: “four decades before Lady Liberty lifted her lamp, we were the wretched refuse on the teeming shore.”

    The famine’s immediate instigator was the potato blight – a disease that wiped out the island’s primary subsistence food crop and resulted in the starvation deaths of over a million people. Further investigation of its causes quickly goes astray, and in some quarters of academia and the press alike it has become common to blame the Irish famine on the ravages of “laissez-faire” capitalism.

    The argument for this view is often historically simplistic. It usually casts the famine as an instance of class-discrimination arising from a market failure in which the British government allegedly shirked its duties to relieve the starving masses out of a belief that the unhindered market would sort the matter out. Some darker variants go so far as to portray the event as a capitalism-induced economic cleansing to rid Ireland of its poor and dependent classes at the behest of wealthy landowners.


    The Political Origins of the Irish Famine

    While it is difficult to overstate the misery of the famine itself, these portrayals politicize its history beyond recognition, while conveniently sidestepping the pronounced role that illiberal economic and political institutions played in Ireland’s food crisis. Two preexisting policies were largely to blame for the famine’s severity once the potato blight struck.

    The first was a legacy of England’s 16th century break with the Roman Catholic church and, more directly, Oliver Cromwell’s proto-genocidal conquest of Ireland in the wake of the English Civil War some two centuries prior. These events gave rise to a series of brutally repressive anti-Catholic penal laws in Ireland. The most far-reaching was a government-enforced land redistribution scheme that stripped Irish Catholics of their property for a variety of “offenses” against English Protestant rule – for supporting the breakaway Kilkenny Confederation formed after the 1641 rebellion, for pledging allegiance to Charles I in the Civil War, for backing the Catholic King James II in the Williamite War after the Glorious Revolution of 1688, and for recurring support for later Jacobite causes over the next half-century.

    As applied in Ireland, these laws struck at the heart of the very same institutions that fueled the Industrial Revolution in neighboring Great Britain. After the landowner redistributions of 1649-1691, Catholics were severely restricted from purchasing or even leasing property for the next century. The British Parliament prohibited Catholic ownership of firearms, barred Catholics from public office and disenfranchised Catholic voters, imposed Catholic-specific taxes, established preferential inheritance laws that favored Protestant converts, restricted Catholics from specific gentry-level professions, and even barred Catholics from sending their children abroad to be educated. Edmund Burke, the liberal-turned-conservative philosopher, described their cumulative effect as a “machine…fitted for the oppression, impoverishment, and degradation of a people.” The relief of these punitive provisions became a primary cause of late 18th and early 19th century English liberals, with major though imperfect reform bills being secured in 1791 and 1829.

    The second political source of the famine’s severity was found in the economic philosophy of mercantilism, and specifically the protectionist Corn Laws that the British Parliament enacted in the wake of the Napoleonic Wars. A cronyist political tool of agricultural landowners in England, these measures severely taxed the importation of foreign-grown wheat and other grains, which could be grown more efficiently and cheaply in better climates. By intentional design, these tariffs raised the price of food items in Britain and Ireland to the benefit of agricultural landowners. Consumers paid the direct price.

    Though the Corn Laws were the most famous legislative products of British mercantilism, they actually built upon several decades of earlier agricultural protectionism. Ireland was particularly hard hit, as the grain tariffs came into existence in conjunction with the aforementioned penal statutes. In the 18th century, the British Parliament imposed a variety of commodity-specific laws that restricted the export of Irish commodities to non-English merchants. Food production in Ireland itself was greatly distorted by the tariff system, which incentivized comparatively less efficient uses of agricultural land. Combined with the restrictive laws on property ownership, these measures gave rise to an Irish agricultural model built around the cultivation of grains for sale to fixed buyers at tariff-protected prices, to the benefit of absentee landowners in England. Adam Smith, in fact, diagnosed the economic ills of this system in the Wealth of Nations, denouncing its extremely regressive tax effects: “as is the case in Ireland [where] such absentees may derive a great revenue from the protection of a government to the support of which they do not contribute a single shilling.”

    When the blight arrived in 1845, it struck an Irish Catholic population that had been forcibly reduced to a state of landless poverty and subsistence agriculture by 200 years of government predation and punitively protectionist trade policies on food items. Dispossessed and still under the shadow of two centuries of economic and political repression, they were also held captive to a food market that artificially increased grain prices beyond their reach and relegated them to subsistence on a failing potato crop. Symptomatic of these government-created distortions, Ireland actually continued to export grains to England under politically preferential land and trade arrangements even after the onset of the famine.


    Free Market Liberalism and the Famine Response

    The conditions that made the Irish famine so catastrophic were largely created by centuries of political intrusions upon everything from the freedom of trade to the most basic abilities of Irish Catholics to own property. This created an economic system in Ireland that stood in direct antithesis to both the free trade prescriptions of Smith and David Ricardo, and to the economic doctrine of non-intervention. In fact, the most important famine-relief policies from within the United Kingdom and from abroad were rooted in the doctrines of laissez-faire. In Britain, the onset of the famine proved to be a major trigger of the Corn Law system’s destruction.

    In 1846, after two years of crop failures and poor agricultural conditions all around, Prime Minister Robert Peel bucked the protectionist majorities of his own party and acquiesced to the liberal free trade cause of Richard Cobden and John Bright. Ireland weighed heavily on Peel’s conversion to free trade, and in late 1845 he even surreptitiously approved the importation of over £100,000 of corn from the United States in circumvention of the tariffs, for distribution through the Irish workhouses. The repeal ultimately succeeded, though not without a price: it split Peel’s cabinet and party, costing him the government. The compromises needed to secure a majority for the Corn Law repeal also resulted in its implementation being dragged out for three years over a schedule of successive reductions. By its completion in 1849, the ravages of the famine had spread to all of Ireland.

    A second source of famine relief emerged abroad in the form of immigration. Following the Jeffersonian “Revolution of 1800,” the United States adopted a relatively liberal immigration policy (at least by 19th century standards) that permitted easy entry into its ports as well as remittances back home, which could be used in turn to pay for the transatlantic voyage of other family members. Britain also permitted relatively unimpeded migration to Canada and other overseas parts of its empire, though nativist political reactions resulted in this policy being restricted after 1847. All told, more than a million Irish refugees emigrated to escape the famine between 1845 and its conclusion in 1854.


    A Malthusian Famine Relief

    We’ve established thus far that (1) the illiberal economic and political restrictions upon Ireland were a major cause of the famine’s severity and (2) the liberal policies of free trade and free migration were two of its most important means of relief. Despite these realizations, the famine itself is often blamed on capitalism. How are we to reconcile the two claims?

    Simply put, those who cast the blame for the famine on free markets have largely misidentified their target through a combination of sloppy history and poor economics. They normally accuse Peel’s successor John Russell of adhering to an economic “orthodoxy” of “laissez-faire” rooted in Smith and Ricardo, and point to the British government’s reluctance to invest in famine-relief charities out of the belief that the free market would sort things out. Briefly setting aside the reality that Ireland’s punitively regulated economy in 1845 was anything but a laissez-faire paradise, another problem emerges in the historical misidentification of the intellectual inspirations of the blamed parties.

    Chief among them is the political administrator whose name, perhaps more than any other, has come to be associated with the British government’s failures during the famine years: Charles Trevelyan. The scion of an aristocratic Whig family who occupied a prominent post in the British Treasury civil service, Trevelyan effectively took over the famine relief after the fall of Peel’s government in 1846. He is curiously portrayed as a free-market dogmatist and devotee of Adam Smith, as is the case in this depiction from a leftist political science professor. Seldom mentioned, however, is that Trevelyan’s economic beliefs are more closely linked to the teachings of Thomas Malthus.

    Malthus is a perpetually controversial figure, both for competing claimants to his intellectual legacy and disparate assessments of his own intentions. The connection between Trevelyan and Malthus is undeniable though. Trevelyan’s introduction to the study of political economy occurred when he was a student of Malthus himself at the East India Company College in Hertfordshire.

    Some decades prior to the famine, Malthus supported the original enactment of the Corn Laws on the grounds that they permitted a balance between manufacturer and agricultural interests. He claimed influence from Smith as well, even as this inheritance was vigorously contested by his contemporary David Ricardo. To this end, Malthus also criticized the penal laws in Ireland for their economic detriments, placing him at least at times on the classical liberal side of the issue. His connection to the famine however comes from his most famous work, an oft-revised tract on the economics of overpopulation. In its basic form, the Malthusian doctrine predicted a mathematical conundrum in which exponential population growth would eventually surpass the ability of natural resource production to sustain it.

    The implications of Malthus’ theory have been hotly debated for centuries. He personally resisted one of its possible implications – coercive population control – in favor of encouraging people to restrict procreation through abstinence and late-life marriage. Many of his intellectual followers have shown more pronounced proclivities toward population control, including the forced sterilization and eugenics programs advanced by self-described “neo-Malthusians” in the early 20th century. While we cannot assign guilt to Malthus for the actions of others after his lifetime, it is not difficult to see how these positions could be arrived at from a reading of his works. In fact, one startling passage on Ireland itself appeared in an 1817 letter that Malthus wrote to Ricardo:

    “Through most of this country, great marks of improvement were observable, though its progress had received a severe check during the last two years, the effect of which was peculiarly to aggravate the predominant evil of Ireland, namely population greatly in excess above the demand for labour, though in general not much in excess of the means of subsistence on account of the rapidity with which potatoes have increased under a system of cultivating them on very small properties with a view to support rather than sale. The land in Ireland is infinitely more peopled than in England; and to give full effect to the natural resources of the country, a great part of this population should be swept from the soil into large manufacturing and commercial Towns.”

    It is not difficult to see in this passage the underpinnings of Trevelyan’s attribution of the famine to Irish breeding and overpopulation a generation later. Trevelyan’s own report on the famine relief, published in 1848, is deeply rooted in Malthusian population doctrine. It faults the forces of population strain for the situation in Ireland and, in some of its more infamous passages, suggests that the unfolding events reflected divine will upon the island’s consumptive excesses.

    Consistent with portrayals that incorrectly brand the famine relief as a “laissez-faire” enterprise, Trevelyan is often portrayed as having taken a callous do-nothing approach to his task that would allow the population crisis to sort itself out by migration or, if necessary, starvation. It is difficult to reconcile this claim with the actual famine relief pursued by Russell’s government.


    Keynesianism before Keynes

    Here Trevelyan’s course has much more in common with a different line of economic thought that emerged from one camp of Malthus’ followers, viewing the state as a mechanism to manage the untamed “natural” forces of an economy. It is more commonly associated with the 20th century economist John Maynard Keynes, who also styled himself a neo-Malthusian on both population issues and macroeconomic management. In addition to sharing Keynes’ near-obsession with Malthusian population pressures as a putative explanation for social ills, Trevelyan believed the government’s role was essentially to oversee the crisis as if it were a countervailing force to the “nature” of an unregulated market. To accomplish this control he sought to position the government as a jobs provider for an economy in crisis. In 1846 he launched a massive “public works” program that sought to employ the “surplus” Irish population in the construction of roads and river improvements.

    Trevelyan’s report openly boasts of employing almost 100,000 people in government projects within a few months of its start. By 1847 it employed five times that number. Like many centralized economic programs however, Trevelyan’s “public works” succumbed to waste, graft, and maladministration. Borrowing from additional Malthusian doctrines about consumer demand and clinging to the notion that potato subsistence had detached the Irish from any familiarity with purchasing their own food, he indulged multiple failed experiments in price and wage manipulation. These were ostensibly designed to “teach” the workers how to properly manage their earnings without a potato crop. In reality, they ended up suppressing the public sector wages that the government offered while also, at times, inducing artificial spikes in grain prices.

    The construction projects had additional unanticipated effects. They diverted laborers away from agricultural pursuits (although this objective was similarly rooted in an interventionist belief that too many Irish were wedded to agricultural pursuits and thereby flooded its labor market). This in turn suppressed potato planting once a blight-free crop could be raised and impeded the recovery. To make matters worse, the British government also attempted to finance its massive expenditures with a changing array of taxes on landowners in the affected districts. As Mark Thornton has argued, these taxes likely introduced multiple unintended consequences: their burdens were passed through onto poor tenants by absentee landowners, as land value levies they further diverted resources away from food production, and they likely squeezed out private charity relief for the famine itself.

    The bureaucratic disaster of the public works program eventually drew scrutiny in parliament and the press, resulting in its cancellation followed by a succession of similarly disastrous attempts to manage the famine by different forms of government-induced “charity.” In the end, the most effective relief mechanisms proved to be free migration – owing to the relatively liberal policies of the United States in its willingness to receive Irish famine refugees – and the elimination of trade protectionism over Britain’s food sources as the Corn Law repeal’s implementation took full effect in 1849.

    The government’s approaches to famine relief were widely derided at the time as a succession of failures – not on account of their “laissez-faire” character, but the exact opposite. The authorities tried to manage Ireland out of a food crisis through public spending, price controls, and make-work programs. One testimonial recorded in the House of Lords in 1852 succinctly captured the absurdity of the entire enterprise:

    “We continued the Works we had selected originally, but towards the end a number of works we had excluded were commenced, merely for the purpose of employing the people, nearly in the same way as if we had dug a hole to fill it up again.”

    It would appear in this evidence that Charles Trevelyan, the overpopulation-obsessed Malthusian administrator of one of the largest public works programs in Irish history, was far from a laissez-faire dogmatist. Perhaps we should enlist an alternative descriptor: Trevelyan was actually something of a proto-Keynesian.

    The English Department attacks academic freedom again

    Posted By on March 23, 2017

    The Faculty Senate at Wake Forest University adopted the following resolution at a meeting last week:

    “Motion 2: To freeze current hiring by the Eudaimonia Institute, and cancel any internal (e.g. Eudaimonia conference) or external presentations related to the IE, and to restrict publication of material from EI until the COI committee is established and the University COI policy can be applied.”

    For the sake of academic freedom, the Faculty Senate fortunately has no enforcement power to carry out this resolution. It is nonetheless difficult to imagine a more direct assault than this measure against the Eudaimonia Institute, a free-market aligned scholarly institute composed of an interdisciplinary group of Wake Forest faculty members. The resolution openly seeks the power to censor these faculty members’ ability to publish their own scholarly work, to host lectures and events, and even to make hiring decisions for their own personnel. Two other concurrently adopted resolutions from the same meeting seek to subject the Eudaimonia Institute to an oversight review board, and to suspend its funding. These too are, fortunately, non-binding.

    The motive for this assault upon the basic rights of faculty to conduct research free of censorship and intimidation is equally chilling. The Faculty Senate committee that drafted the resolutions did so out of political opposition to one of the Eudaimonia Institute’s main donors, the Charles Koch Foundation. They believe the Koch Foundation’s free-market political beliefs are objectionable and wish to see them excluded from campus, so they set out to persecute a group of faculty who receive Koch funding. Despite the committee’s claim that it is simply seeking to investigate “conflicts of interest” posed by private donors irrespective of their politics, note that no similar objections have been raised about the activities or donors of a multitude of left-leaning institutes at Wake Forest, including two that openly engage in political activism for progressive causes: the Anna Julia Cooper Center for Social Justice and the Pro Humanitate Institute, a project of former MSNBC pundit Melissa Harris-Perry.

    The Koch Foundation is a major financial supporter of classical liberal scholarly endeavors in the United States. It funds faculty and research centers at hundreds of universities and provides resources for research projects, student scholarships, and speaker events. The majority sustains research on free-market economics, though the foundation has also funded several million dollars in scholarships for students from historically disadvantaged and minority groups. (Full disclosure: my own university and institution have similarly benefited from the Koch Foundation’s academic support, and I’m proud that they consider my own research to be worthy of support. I’m also proud to report that they’ve never once tried to influence the findings of anything I’ve ever written, despite conspiratorial insinuations otherwise by a number of madjunct activists). This funding is nonetheless seen as unacceptable by the numerous partisans of ideological orthodoxy who inhabit higher education. Even though the Koch Foundation represents a tiny fraction of a percent of the total research funding in higher education, with much larger shares coming from progressive left-leaning foundations and deeply politicized government sources, and even though free-market and classical liberal faculty are a distinct minority in left-leaning academia, their very existence is deemed intolerable by the illiberal elements of the campus left.

    …which brings us back to the events at Wake Forest. Here a number of faculty have decided that their own colleagues should not be permitted to conduct research on perfectly mainstream economic and philosophical topics because it conflicts with progressive political ideology. These faculty have therefore set out to sabotage their colleagues’ funding and censor their work. Last week’s resolutions came about as a product of an anti-Koch petition circulated last semester among some Wake Forest faculty members. The breakdown of signers by discipline displays a familiar pattern. Most signers come from the humanities and social sciences, STEM disciplines are comparatively rare, and – as always – the English Department was the main instigator:

    We’ve seen multiple examples of this exact same pattern in recent controversies over academic freedom, including the events a month ago at Middlebury College in Vermont where a faculty-fomented protest resulted in a violent attack upon speaker Charles Murray and another faculty member. That protest also involved a widely circulated faculty petition denouncing Murray’s talk. It too was dominated by the humanities, with a group of English and MLA department faculty leading the pack.

    In pointing this out, please note that I am in no way making a gratuitous attack upon English as a discipline. English has an important place in a well-rounded liberal education. We should actually be deeply alarmed though by the politicization of English faculty (as well as the other humanities at large). Their involvement in these blatant attempts to silence dissenting political views on campus is quickly becoming a recurring pattern. It also stands in stark contrast with faculty in STEM and the quantitative social sciences, who lend comparatively fewer supporters to campus illiberalism.

    This is not without reason. As a recent article in the American Interest magazine showed, university faculty have become more politicized over the past 25 years even though the American public at large has maintained a relatively stable left/right split. More startling though, the most pronounced politicization has taken place in a few disciplines that are now overwhelmingly skewed toward the political left. English is, unambiguously, the most skewed discipline, with over 80% of its faculty self-identifying on the political left according to the most recent UCLA Higher Education Research Institute survey.

    Faculty political biases come with the territory of academia, and are not objectionable in themselves as they represent a direct product of freedom of thought and freedom of inquiry. A problem emerges though when certain fields skew so heavily to one side that they effectively shut out viewpoints that dissent from a prevailing political orthodoxy. The chart above suggests we have surpassed that point in English, and that several of the humanities are not far behind. More alarming though is the correlation revealed with the petitions at Wake Forest, Middlebury, and other campuses where political disagreements have resulted in threats to academic freedom. The most aggressively left-leaning fields like English/MLA and the other humanities also seem to dominate faculty petitions that actively call for the suppression of dissenting viewpoints on campus. It is increasingly apparent that the two patterns – progressive ideological homogeneity within a discipline and support for restricting the academic freedom of right-leaning faculty, speakers, and students – are closely related.

    The Marxist Devil and Free Speech on Campus

    Posted By on March 9, 2017

    Jason Brennan authored a long post the other day that presented multiple challenges to anti-speech activism on campus. The entire piece is worth reading, but I wanted to call attention to one point in particular:

    Some people say we can’t “platform” ideas that could be used for evil. I look forward to seeing those same people demand we shut down all Marxist talks and fire all the Marxist scholars, since Marxist ideas led to 100 million or more democides in the 20th century.

    The underlying argument – that certain ideas are too “dangerous” to be tolerated on campus, thereby making them “exempt” from the principles of free speech and academic freedom alike – is an increasingly common one among campus radicals. Following recent events at Berkeley, the editor of a student paper ran an entire series of political editorials in this vein espousing the use of violence to shut down controversial speakers and ideas. The logic of this claim is symptomatic of a frenzied paranoia afflicting the illiberal campus left, as I discussed in my last post. It is also deeply ingrained in the pseudoscholarly fever swamp of Critical Theory, exemplified in Herbert Marcuse’s 1965 essay espousing a philosophically incoherent “liberating tolerance” that intentionally excludes beliefs on the political right by deeming them “repressive” and oppositional to various pet causes of revolutionary radicals. Marcuse justified his position by citing the horrors of Nazism and Fascism, thereby attempting to position intolerance as a necessary defensive bulwark against atrocity. The intended consequence was a free license to like-minded radicals to silence their opponents in the name of inhibiting any belief they deem dangerous:

    They would include the withdrawal of toleration of speech and assembly from groups and movements which promote aggressive policies, armament, chauvinism, discrimination on the grounds of race and religion, or which oppose the extension of public services, social security, medical care, etc. Moreover, the restoration of freedom of thought may necessitate new and rigid restrictions on teachings and practices in the educational institutions which, by their very methods and concepts, serve to enclose the mind within the established universe of discourse and behavior–thereby precluding a priori a rational evaluation of the alternatives.

    The targets of Marcuse’s world have since multiplied to practically anything and everything that a self-appointed campus epistocracy of Critical Theorists deems even mildly offensive, hence the recent proliferation of “microaggression” reporting policies and the like from the basements of the English department and the cubicles of student affairs bureaucracies. Yet for all its pretenses of being a necessary defense against “dangerous” ideas, this line of argument suffers from a glaring vulnerability that Jason identifies above: Karl Marx.

    Critical Theorists will no doubt carve themselves an exemption for atrocities committed in the “service of the revolution” or some similar nonsense, but the horrendous body count of Marxist communism is indisputable. The most commonly accepted estimate places the 20th century total at 100 million victims, with the repressive communist societies of the Soviet Union and Maoist China constituting the two deadliest regimes in all of human history. Lesser communist societies leave similarly devastating death tolls practically every place they are attempted, the only difference between them and Stalin being a matter of scale.

    With a few noted exceptions who retreat into the Soviet equivalent of Holocaust denial, most modern Marxists try to dissociate themselves from these numbers by insisting the perpetrators were not sufficiently authentic adherents of Marxism. This excuse is intellectually flimsy, and tends to overlook the fact that every one of these regimes (1) considered itself Marxist, (2) believed itself to be implementing Marxist ideas, and (3) enjoyed alarming degrees of support and enthusiasm in its own time by contemporary Marxist intellectuals, particularly before their death tolls were widely known. All said, the empirical evidence of Marxism’s connections to unprecedented levels of death and devastation is impossible to escape.

    All of this leads to an inescapable conclusion: If one admits the principle that academia is obliged to deny a platform to specific ideas on account of their demonstrated propensity to do harmful and horrendous things, then one must *necessarily* exclude Marxists from that platform on account of Marxism’s track record of murder and devastation, which is empirically unparalleled in all of human history.

    To be completely clear, I make this observation as a free speech absolutist. I shudder at the thought of excluding even the most wrongheaded idea from free and open discussion – Marxism included – precisely because I value open inquiry as a principle unto itself and because I fear any power that could be turned around and abusively deployed against non-Marxist beliefs, my own included. We give the devil the benefit of law for our own safety’s sake. If one removes these absolute protections for free and open discussion and deems dangerous ideas unworthy of a platform, then the very same Marcusian activists who advanced this argument in the first place are made vulnerable to the track record of their own Marx-derived philosophical roots. Surely they are less than ready to grapple with the implication that vulnerability carries for their own standing on campus.

    Measuring Marx on Campus

    So how prevalent is Marxism in academia these days, and can we measure what the illiberal campus activists must logically be willing to forego if their argument on denying dangerous speech is admitted? There are many empirical dimensions to this question, almost all of which point to a sizable presence compared to society at large.

    First, as I’ve pointed out previously, Karl Marx enjoys an elevated and disproportionately common presence on American university syllabi. By this metric, Marx’s Communist Manifesto is the single most frequently assigned text in the college classroom other than the Strunk and White grammar manual. While some of this standing obviously reflects the historical significance of communism in the past two centuries, its prominence also far exceeds other similar historically significant works of political philosophy. Only Plato’s Republic approaches these numbers among other major figures in the philosophical canon. Locke’s treatises, Hobbes’ Leviathan, Machiavelli’s The Prince, and Mill’s On Liberty don’t even come close. Marx’s use on syllabi also noticeably concentrates in certain disciplines in the humanities, though far less so in economics or the more empirical end of the social sciences.

    Second, a handful of studies have actually sought to measure the number of faculty who self-identify as Marxists. The most recent comprehensive study is a survey conducted in 2006 by Neil Gross and Solon Simmons. The authors conducted a national poll of university faculty to ask them about their political self-identifications, including options to identify as “radical” and “marxist.”

    The authors of this study downplayed the total number of Marxists they found, calling them “rare” and noting that they made up only 3% of all faculty. This figure, though, includes respondents from the STEM fields, business schools, and medical professions – all of which showed next to zero Marxist faculty members. Their breakdown appears in the figure at the top of the post.

    A more complicated picture emerges, though, when we look at these survey results within other areas of the academy. As Bryan Caplan has pointed out, the Marxists (as well as the radicals) actually concentrate in a few specific areas – and they do so in sizable numbers. The largest concentration of self-identified Marxist professors is in the social sciences at 17.5% (on top of that, 24% of social scientists identify as radical).

    The same survey also published discipline-specific data on some of the larger disciplines within the social sciences. These results suggest that Marxist faculty also cluster at comparatively higher rates in specific disciplines. In the largest example they reported, 25.5% of Sociology faculty self-identified as Marxists.

    How many actual professors does this encompass? According to the Bureau of Labor Statistics, there are approximately 16,160 sociology professors in the United States today. Assuming the percentages have remained constant since the survey, there are approximately 4,120 self-declared Marxist sociology professors in the United States today.

    (Note that holding the Marxist percentages constant is a conservative assumption. More recent faculty surveys conducted on a simpler left-right scale have shown a strong leftward shift in faculty ideology since 2006.)

    The BLS also estimates that approximately 136,000 professors are employed in the social sciences as a whole. At 17.5% held constant, there are about 23,800 self-described Marxist social science professors in the United States.

    At these numbers (again assuming the 2006 survey breakdown is constant), sociologists would also make up about 17% of the Marxist professors who teach in the social sciences (for perspective, sociology accounts for about 11.8% of social science faculty as a whole).

    These are admittedly rough calculations based on a survey that likely warrants updating in light of the more recent leftward shift in faculty ideology. They also omit other left-leaning identifications such as “radical” that likely include faculty who share some favorable views of Marxism, such as the Critical Theorists who dominate many fields in the humanities. Both groups are common in the humanities and social sciences, though unfortunately the 2006 survey authors did not release enough information to parse out these overlaps. We may nonetheless add the 5% of humanities faculty who fall into the Marxist category by their own designation. Doing so, we get a combined total of about 31,600 self-identified Marxists between the humanities and social sciences.
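    The back-of-envelope arithmetic above is easy to verify. A minimal sketch in Python, using only the BLS headcounts and 2006 survey percentages cited in this post (the humanities component is left out, since the post does not report a humanities headcount directly):

    ```python
    # Replicating the post's rough estimates, assuming (as the post does)
    # that the 2006 Gross-Simmons survey percentages still hold today.
    sociology_faculty = 16_160        # BLS estimate cited above
    social_science_faculty = 136_000  # BLS estimate cited above

    marxist_sociologists = sociology_faculty * 0.255            # ~4,120
    marxist_social_scientists = social_science_faculty * 0.175  # ~23,800

    # Sociology's share of social-science Marxists vs. its share of
    # social-science faculty overall (~17% vs. ~11.9%).
    share_of_marxists = marxist_sociologists / marxist_social_scientists
    share_of_faculty = sociology_faculty / social_science_faculty

    print(round(marxist_sociologists))       # 4121
    print(round(marxist_social_scientists))  # 23800
    print(f"{share_of_marxists:.1%} vs. {share_of_faculty:.1%}")
    ```

    The small discrepancies against the rounded figures in the text (4,121 vs. 4,120; 11.9% vs. 11.8%) are artifacts of rounding, not of the method.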

    I would argue that the concentration of Marxists in some academic disciplines is disproportionately high relative to the limited intellectual insights they provide – particularly given the disastrous track record of Marxist insights into social scientific matters, as seen in action across the past century or more. But I’d also contend that these faculty deserve the unwavering protections of the academic freedom to advance their scholarly research and of free speech to advance their arguments in an open societal discussion, even as I’d likely disagree with most of their specific insights.

    The illiberal campus activists we’ve seen and heard so much from in recent years make no similar commitment to either of these protections, and in fact openly call for their removal in cases where they deem an alternative viewpoint “dangerous” or “evil.” In light of Marxism’s political track record, and given the high concentrations of Marxists in some of the same academic disciplines that cater to and advance these same illiberal anti-speech arguments, all I can say is this: be careful what you wish for.

    The Paranoid Style of the Illiberal Campus

    Posted By on March 8, 2017

    In 1964 historian Richard Hofstadter wrote an essay for Harper’s Magazine entitled “The Paranoid Style in American Politics.” His thesis was a simple one – that American politics was “an arena for angry minds,” and that this anger often fomented “a sense of heated exaggeration, suspiciousness, and conspiratorial fantasy” among small but vocal groups of the electorate. Hofstadter famously adhered to deeply progressive political beliefs, and the immediate target of his essay was a flurry of conspiratorial movements operating in his own day – McCarthyism, the John Birch Society, and even elements of the Goldwater campaign.

    Despite these examples, Hofstadter conceded that this “style of mind” involved a character that was “far from new and that is not necessarily right-wing.” It had antecedents throughout American political history on the right and left, and its patterns reflected a certain type of paranoid approach to political analysis rather than any one ideological perspective. Hofstadter’s paranoiac had distinct traits, summarized in the following excerpt:

    The paranoid spokesman sees the fate of conspiracy in apocalyptic terms—he traffics in the birth and death of whole worlds, whole political orders, whole systems of human values. He is always manning the barricades of civilization. He constantly lives at a turning point.

    As a member of the avant-garde who is capable of perceiving the conspiracy before it is fully obvious to an as yet unaroused public, the paranoid is a militant leader. He does not see social conflict as something to be mediated and compromised, in the manner of the working politician. Since what is at stake is always a conflict between absolute good and absolute evil, what is necessary is not compromise but the will to fight things out to a finish. Since the enemy is thought of as being totally evil and totally unappeasable, he must be totally eliminated—if not from the world, at least from the theatre of operations to which the paranoid directs his attention. This demand for total triumph leads to the formulation of hopelessly unrealistic goals, and since these goals are not even remotely attainable, failure constantly heightens the paranoid’s sense of frustration. Even partial success leaves him with the same feeling of powerlessness with which he began, and this in turn only strengthens his awareness of the vast and terrifying quality of the enemy he opposes.

    The enemy is clearly delineated: he is a perfect model of malice, a kind of amoral superman—sinister, ubiquitous, powerful, cruel, sensual, luxury-loving. Unlike the rest of us, the enemy is not caught in the toils of the vast mechanism of history, himself a victim of his past, his desires, his limitations. He wills, indeed he manufactures, the mechanism of history, or tries to deflect the normal course of history in an evil way. He makes crises, starts runs on banks, causes depressions, manufactures disasters, and then enjoys and profits from the misery he has produced. The paranoid’s interpretation of history is distinctly personal: decisive events are not taken as part of the stream of history, but as the consequences of someone’s will. Very often the enemy is held to possess some especially effective source of power: he controls the press; he has unlimited funds; he has a new secret for influencing the mind (brainwashing); he has a special technique for seduction.

    We live in a similarly paranoid time, owing in no small part to the political insanities of the day – be it the 2016 election, the current president’s Twitter feed, or the explosion of illiberal and anti-speech protests on campus.

    I did not vote in the last election for reasons I have explained elsewhere, and I have little interest in the White House save for deriving occasional entertainment from the absurdities of its self-marginalization. The anti-speech movement on campus is another matter though. It affects the scholarly environment of my own chosen career path, including my ability to research and investigate controversial subjects that run counter to conventional political wisdom on inequality, on higher education, and a host of other topics.

    When encountering the ravings of these illiberal activists, it is useful to recall Hofstadter’s diagnosis. Modern paranoiacs abound in academia.

    They include the paranoid militants who see conspiracies of “neoliberalism” lurking behind every corner of the university. Their ranks are flush with student and faculty activists who openly eschew the norms of free discussion and mediation as a means of addressing social conflict, instead preferring to block, silence, and stamp out dissenting scholarly viewpoints. They see conscientious disagreement with their own beliefs as a malicious evil, and assign it that designation as a shallow pretext to violently assault their opponents. They demand viewpoint uniformity from entire academic disciplines and often seek to remove other voices from the theatre of their operations, the university. They indulge fantastical visions of the university system, and rage in frustration and fury at the tiniest of perceived slights, however unintentional, even when evidence of a claimed occurrence is spotty at best. They situate themselves within one of the most ideologically skewed institutions of our society, and yet remain entirely dissatisfied by what they claim is its insufficiently “activist” disposition. The academic marxiness of certain disciplines, including those in the humanities that seem to take aggressively ideological positions on practically every conceivable topic of social science that arises from beyond their own professional competencies, is not enough to satiate the complaint. Entire areas of academic study that merely sit on the center-left are denounced for being insufficiently subservient to the pseudoscholarly wasteland of Critical Theory. And those areas of study that exhibit actual intellectual diversity are said to be infested with greed and therefore, presumably, outright evil.

    These modern paranoiacs advance elaborate mythologies about the imperiled state of their own academic corners, even though a mountain of empirical evidence shows they are actually expanding in size. They see a world full of sinister villains: dehumanized faculty who are the dupes of malicious forces and “Dark Money” from the Koch brothers, faceless administrators who are supposedly driven by schemes of “corporatization” to seek out “profits” through some mysterious mechanism that is never adequately explained, and, lurking behind it all, an “exploitative” planet-hating, global warming-causing, capitalist economic order that feeds itself through insatiable consumption of university resources that belong, by right, to postmodernist faculty in the English department. Above all else, the paranoiacs are the victims in their own minds – victims of “privilege” and other conceptual claims of oppression that they can barely articulate, victims of any and all acts of free speech that they find personally disagreeable, victims of their own employment status as envisioned through offensively hyperbolic self-comparisons to past atrocities, and even victims of being impeded from inflicting violence upon people they dislike.

    The paranoid style that Hofstadter diagnosed half a century ago is still with us today and, in some quarters of academia, it is thriving.