Leftists Don’t Want To Talk About The Abortion Rate Of Black Babies

FEB 25 

WRITTEN BY AUSTIN STONE & T.W. SHANNON

This article originally appeared at The Federalist.

In a hearing last week about racism in public school curriculum, Arizona state Rep. Walt Blackman said any “honest conversation” about America’s past of slavery and discrimination must also acknowledge the genocide of our present age: abortion.

“There are more black babies aborted than born every day,” he said.

White leftist activists constantly talk about racism, but they conveniently ignore that their sacred cow of abortion is racist to its core.

Abortion should never have become a political football. It used to be something every American, Democrat and Republican, agreed was wrong.

Democrat politician and activist Jesse Jackson is a good example. In 1975, he compared the Roe v. Wade decision to slavery: “There are those who argue that the right to privacy is of a higher order than the right to life … That was the premise of slavery.” But after Jackson ran for the Democrat presidential nomination in 1988, he conformed to the pro-choice party line.

Since then, a tragic dissonance has ensued in the black community over party affiliation and abortion. While most black Americans (54 percent) think abortion is morally unacceptable, they are still more closely associated with the Democratic Party and its abortion-friendly platform. Many black voters find themselves out of step with Democrat candidates seeking their votes, especially on social issues, similar to blue-collar voters who feel Democrat policies have left them behind.

Democrats can easily reconcile with their voter base on this issue. Restoring respect for all life would win the hearts of many in the black community.

A recently released report from an organization we work with, the Center for Urban Renewal and Education (CURE), shows the devastating effects of abortion on generations of black Americans. The report details the abortion industry’s predatory practices — how abortion providers seek out minority women, advertise directly to them, and sometimes perform illegal procedures, like the infamous late-term abortionist Kermit Gosnell. This multibillion-dollar industry isn’t trying to protect anyone’s rights; it’s trying to profit off desperate women, especially black women.

In 2012, the Life Issues Institute reported that “79% of Planned Parenthood’s surgical abortion facilities are strategically located within walking distance of African-American and/or Hispanic communities.” In 2017, it updated these numbers to include 25 new abortion mega centers, 100 percent of which were within walking distance of minority neighborhoods. With each abortion bringing in hundreds or even thousands of dollars, depending on whether it is early- or late-term, it’s clear that abortion providers are engaging in a grisly sort of supply-side economics.

Worse still, abortion has always been a tool of racial eugenics, the ideology that seeks to limit “undesirable” black births. Star Parker, the founder of CURE and author of the report, argues that “From its inception, the abortion industry has sought to control and hinder the growth of the Black population, a core objective of the movement’s founders.”

This is a historical fact. Margaret Sanger, the founder of abortion giant Planned Parenthood, was a racial eugenicist who was concerned that “the mass of Negroes … still breed carelessly and disastrously,” and hired black pastors lest “word go out that we want to exterminate the Negro population.” Many other early abortion activists, such as Hugh Moore and Edward Ross, sought to expand abortion to prevent non-white population growth. Today, there are still abortion activists who behave like the black community needs more abortions.

The politicization of abortion — an issue that is moral to its core — is a wound in our national fabric that we feel deeply and personally. One of us was born to a single mother and later adopted, but many others in the exact same situation were aborted. We work with CURE to support policies that help mothers with unplanned pregnancies avoid the grievous act of abortion. Black mothers especially face intense manipulation and pressure because of the politics of abortion.

We must restore bipartisan moral common sense and offer hope to these women. There is no hope without justice, and there’s no justice without truth. We can’t talk about racism without talking about the dark stain on our society that abortion represents. Black Americans, who suffer disproportionately from abortion, deserve equal rights, including the right to life. The unborn deserve racial justice too.

The Black Family: 40 Years of Lies

Rejecting the Moynihan report caused untold, needless misery.

By Kay S. Hymowitz

Read through the megazillion words on class, income mobility, and poverty in the recent New York Times series “Class Matters” and you still won’t grasp two of the most basic truths on the subject: 1. entrenched, multigenerational poverty is largely black; and 2. it is intricately intertwined with the collapse of the nuclear family in the inner city.

By now, these facts shouldn’t be hard to grasp. Almost 70 percent of black children are born to single mothers. Those mothers are far more likely than married mothers to be poor, even after a post-welfare-reform decline in child poverty. They are also more likely to pass that poverty on to their children. Sophisticates often try to dodge the implications of this bleak reality by shrugging that single motherhood is an inescapable fact of modern life, affecting everyone from the bobo Murphy Browns to the ghetto “baby mamas.” Not so; it is a largely low-income—and disproportionately black—phenomenon. The vast majority of higher-income women wait to have their children until they are married. The truth is that we are now a two-family nation, separate and unequal—one thriving and intact, and the other struggling, broken, and far too often African-American.

So why does the Times, like so many who rail against inequality, fall silent on the relation between poverty and single-parent families? To answer that question—and to continue the confrontation with facts that Americans still prefer not to mention in polite company—you have to go back exactly 40 years. That was when a resounding cry of outrage echoed throughout Washington and the civil rights movement in reaction to Daniel Patrick Moynihan’s Department of Labor report warning that the ghetto family was in disarray. Entitled “The Negro Family: The Case for National Action,” the prophetic report prompted civil rights leaders, academics, politicians, and pundits to make a momentous—and, as time has shown, tragically wrong—decision about how to frame the national discussion about poverty.

To go back to the political and social moment before the battle broke out over the Moynihan report is to return to a time before the country’s discussion of black poverty had hardened into fixed orthodoxies—before phrases like “blaming the victim,” “self-esteem,” “out-of-wedlock childbearing” (the term at the time was “illegitimacy”), and even “teen pregnancy” had become current. While solving the black poverty problem seemed an immense political challenge, as a conceptual matter it didn’t seem like rocket science. Most analysts assumed that once the nation removed discriminatory legal barriers and expanded employment opportunities, blacks would advance, just as poor immigrants had.

Conditions for testing that proposition looked good. Between the 1954 Brown decision and the Civil Rights Act of 1964, legal racism had been dismantled. And the economy was humming along; in the first five years of the sixties, the economy generated 7 million jobs.

Yet those most familiar with what was called “the Negro problem” were getting nervous. About half of all blacks had moved into the middle class by the mid-sixties, but now progress seemed to be stalling. The rise in black income relative to that of whites, steady throughout the fifties, was sputtering to a halt. More blacks were out of work in 1964 than in 1954. Most alarming, after rioting in Harlem and Paterson, New Jersey, in 1964, the problems of the northern ghettos suddenly seemed more intractable than those of the George Wallace South.

Moynihan, then assistant secretary of labor and one of a new class of government social scientists, was among the worriers, as he puzzled over his charts. One in particular caught his eye. Instead of rates of black male unemployment and welfare enrollment running parallel as they always had, in 1962 they started to diverge in a way that would come to be called “Moynihan’s scissors.” In the past, policymakers had assumed that if the male heads of household had jobs, women and children would be provided for. This no longer seemed true. Even while more black men—though still “catastrophically” low numbers—were getting jobs, more black women were joining the welfare rolls. Moynihan and his aides decided that a serious analysis was in order.

Convinced that “the Negro revolution . . . , a movement for equality as well as for liberty,” was now at risk, Moynihan wanted to make several arguments in his report. The first was empirical and would quickly become indisputable: single-parent families were on the rise in the ghetto. But other points were more speculative and sparked a partisan dispute that has lasted to this day. Moynihan argued that the rise in single-mother families was not due to a lack of jobs but rather to a destructive vein in ghetto culture that could be traced back to slavery and Jim Crow discrimination. Though black sociologist E. Franklin Frazier had already introduced the idea in the 1930s, Moynihan’s argument defied conventional social-science wisdom. As he wrote later, “The work began in the most orthodox setting, the U.S. Department of Labor, to establish at some level of statistical conciseness what ‘everyone knew’: that economic conditions determine social conditions. Whereupon, it turned out that what everyone knew was evidently not so.”

But Moynihan went much further than merely overthrowing familiar explanations about the cause of poverty. He also described, through pages of disquieting charts and graphs, the emergence of a “tangle of pathology,” including delinquency, joblessness, school failure, crime, and fatherlessness that characterized ghetto—or what would come to be called underclass—behavior. Moynihan may have borrowed the term “pathology” from Kenneth Clark’s The Dark Ghetto, also published that year. But as both a descendant and a scholar of what he called “the wild Irish slums”—he had written a chapter on the poor Irish in the classic Beyond the Melting Pot—the assistant secretary of labor was no stranger to ghetto self-destruction. He knew the dangers it posed to “the basic socializing unit” of the family. And he suspected that the risks were magnified in the case of blacks, since their “matriarchal” family had the effect of abandoning men, leaving them adrift and “alienated.”

More than most social scientists, Moynihan, steeped in history and anthropology, understood what families do. They “shape their children’s character and ability,” he wrote. “By and large, adult conduct in society is learned as a child.” What children learned in the “disorganized home[s]” of the ghetto, as he described through his forest of graphs, was that adults do not finish school, get jobs, or, in the case of men, take care of their children or obey the law. Marriage, on the other hand, provides a “stable home” for children to learn common virtues. Implicit in Moynihan’s analysis was that marriage orients men and women toward the future, asking them not just to commit to each other but to plan, to earn, to save, and to devote themselves to advancing their children’s prospects. Single mothers in the ghetto, on the other hand, tended to drift into pregnancy, often more than once and by more than one man, and to float through the chaos around them. Such mothers are unlikely to “shape their children’s character and ability” in ways that lead to upward mobility. Separate and unequal families, in other words, meant that blacks would have their liberty, but that they would be strangers to equality. Hence Moynihan’s conclusion: “a national effort towards the problems of Negro Americans must be directed towards the question of family structure.”

Astonishingly, even for that surprising time, the Johnson administration agreed. Prompted by Moynihan’s still-unpublished study, Johnson delivered a speech at the Howard University commencement that called for “the next and more profound stage of the battle for civil rights.” The president began his speech with the era’s conventional civil rights language, condemning inequality and calling for more funding of medical care, training, and education for Negroes. But he also broke into new territory, analyzing the family problem with what strikes the contemporary ear as shocking candor. He announced: “Negro poverty is not white poverty.” He described “the breakdown of the Negro family structure,” which he said was “the consequence of ancient brutality, past injustice and present prejudice.” “When the family collapses, it is the children that are usually damaged,” Johnson continued. “When it happens on a massive scale, the community itself is crippled.”

Johnson was to call this his “greatest civil rights speech,” but he was just about the only one to see it that way. By that summer, the Moynihan report that was its inspiration was under attack from all sides. Civil servants in the “permanent government” at Health, Education, and Welfare (HEW) and at the Children’s Bureau muttered about the report’s “subtle racism.” Academics picked apart its statistics. Black leaders like Congress of Racial Equality (CORE) director Floyd McKissick scolded that, rather than the family, “[i]t’s the damn system that needs changing.”

In part, the hostility was an accident of timing. Just days after the report was leaked to Newsweek in early August, L.A.’s Watts ghetto exploded. The televised images of the South Central Los Angeles rioters burning down their own neighborhood collided in the public mind with the contents of the report. Some concluded that the “tangle of pathology” was the administration’s explanation for urban riots, a view quite at odds with civil rights leaders’ determination to portray the violence as an outpouring of black despair over white injustice. Moreover, given the fresh wounds of segregation, the persistent brutality against blacks, and the ugly tenaciousness of racism, the fear of white backsliding and the sense of injured pride that one can hear in so many of Moynihan’s critics are entirely understandable.

Less forgivable was the refusal to grapple seriously—either at the time or in the months, years, even decades to come—with the basic cultural insight contained in the report: that ghetto families were at risk of raising generations of children unable to seize the opportunity that the civil rights movement had opened up for them. Instead, critics changed the subject, accusing Moynihan—wrongfully, as any honest reading of “The Negro Family” proves—of ignoring joblessness and discrimination. Family instability is a “peripheral issue,” warned Whitney Young, executive director of the National Urban League. “The problem is discrimination.” The protest generating the most buzz came from William Ryan, a CORE activist, in “Savage Discovery: The Moynihan Report,” published in The Nation and later reprinted in the NAACP’s official publication. Ryan, though a psychologist, did not hear Moynihan’s point that as the family goes, so go the children. He heard code for the archaic charge of black licentiousness. He described the report as a “highly sophomoric treatment of illegitimacy” and insisted that whites’ broader access to abortion, contraception, and adoption hid the fact that they were no less “promiscuous” than blacks. Most memorably, he accused Moynihan of “blaming the victim,” a phrase that would become the title of his 1971 book and the fear-inducing censor of future plain speaking about the ghetto’s decay.

That Ryan’s phrase turned out to have more cultural staying power than anything in the Moynihan report is a tragic emblem of the course of the subsequent discussion about the ghetto family. For white liberals and the black establishment, poverty became a zero-sum game: either you believed, as they did, that there was a defect in the system, or you believed that there was a defect in the individual. It was as if critiquing the family meant that you supported inferior schools, even that you were a racist. Though “The Negro Family” had been a masterpiece of complex analysis that implied that individuals were intricately entwined in a variety of systems—familial, cultural, and economic—it gave birth to a hardened, either/or politics from which the country has barely recovered.

By autumn, when a White House conference on civil rights took place, the Moynihan report, initially planned as its centerpiece, had been disappeared. Johnson himself, having just introduced large numbers of ground troops into Vietnam, went mum on the subject, steering clear of the word “family” in the next State of the Union message. This was a moment when the nation had the resources, the leadership (the president had been overwhelmingly elected, and he had the largest majorities in the House and Senate since the New Deal), and the will “to make a total . . . commitment to the cause of Negro equality,” Moynihan lamented in a 1967 postmortem of his report in Commentary. Instead, he declared, the nation had disastrously decided to punt on Johnson’s “next and more profound stage in the battle for civil rights.” “The issue of the Negro family was dead.”

Well, not exactly. Over the next 15 years, the black family question actually became a growth industry inside academe, the foundations, and the government. But it wasn’t the same family that had worried Moynihan and that in the real world continued to self-destruct at unprecedented rates. Scholars invented a fantasy family—strong and healthy, a poor man’s Brady Bunch—whose function was not to reflect truth but to soothe injured black self-esteem and to bolster the emerging feminist critique of male privilege, bourgeois individualism, and the nuclear family. The literature of this period was so evasive, so implausible, so far removed from what was really unfolding in the ghetto, that if you didn’t know better, you might conclude that people actually wanted to keep the black family separate and unequal.

Consider one of the first books out of the gate, Black Families in White America, by Andrew Billingsley, published in 1968 and still referred to as “seminal.” “Unlike Moynihan and others, we do not view the Negro as a causal nexus in a ‘tangle of pathologies’ which feeds on itself,” he declared. “[The Negro family] is, in our view, an absorbing, adaptive, and amazingly resilient mechanism for the socialization of its children and the civilization of its society.” Pay no attention to the 25 percent of poor ghetto families, Billingsley urged. Think instead about the 75 percent of black middle-class families—though Moynihan had made a special point of exempting them from his report.

Other black pride–inspired scholars looked at female-headed families and declared them authentically African and therefore a good thing. In a related vein, Carol Stack published All Our Kin, a 1974 HEW-funded study of families in a midwestern ghetto with many multigenerational female households. In an implicit criticism of American individualism, Stack depicted “The Flats,” as she dubbed her setting, as a vibrant and cooperative urban village, where mutual aid—including from sons, brothers, and uncles, who provided financial support and strong role models for children—created “a tenacious, active, lifelong network.”

In fact, some scholars continued, maybe the nuclear family was really just a toxic white hang-up, anyway. No one asked what nuclear families did, or how they prepared children for a modern economy. The important point was simply that they were not black. “One must question the validity of the white middle-class lifestyle from its very foundation because it has already proven itself to be decadent and unworthy of emulation,” wrote Joyce Ladner (who later became the first female president of Howard University) in her 1972 book Tomorrow’s Tomorrow. Robert Hill of the Urban League, who published The Strengths of Black Families that same year, claimed to have uncovered science that proved Ladner’s point: “Research studies have revealed that many one-parent families are more intact or cohesive than many two-parent families: data on child abuse, battered wives and runaway children indicate higher rates among two-parent families in suburban areas than one-parent families in inner city communities.” That science, needless to say, was as reliable as a deadbeat dad.

Feminists, similarly fixated on overturning the “oppressive ideal of the nuclear family,” also welcomed this dubious scholarship. Convinced that marriage was the main arena of male privilege, feminists projected onto the struggling single mother an image of the “strong black woman” who had always had to work and who was “superior in terms of [her] ability to function healthily in the world,” as Toni Morrison put it. The lucky black single mother could also enjoy more equal relationships with men than her miserably married white sisters.

If black pride made it hard to grapple with the increasingly separate and unequal family, feminism made it impossible. Fretting about single-parent families was now not only racist but also sexist, an effort to deny women their independence, their sexuality, or both. As for the poverty of single mothers, that was simply more proof of patriarchal oppression. In 1978, University of Wisconsin researcher Diana Pearce introduced the useful term “feminization of poverty.” But for her and her many allies, the problem was not the crumbling of the nuclear family; it was the lack of government support for single women and the failure of business to pay women their due.

With the benefit of embarrassed hindsight, academics today sometimes try to wave away these notions as the justifiably angry, but ultimately harmless, speculations of political and academic activists. “The depth and influence of the radicalism of the late 1960s and early 1970s are often exaggerated,” historian Stephanie Coontz writes in her new book, Marriage, a History: From Obedience to Intimacy, or How Love Conquered Marriage. This is pure revisionism. The radical delegitimation of the family was so pervasive that even people at the center of power joined in. It made no difference that so many of these cheerleaders for single mothers had themselves spent their lives in traditional families and probably would rather have cut off an arm than seen their own unmarried daughters pushing strollers.

Take, for instance, Supreme Court Justice William Brennan, who wrote a concurring opinion in the 1977 Moore v. City of East Cleveland decision. The case concerned a woman and her grandson evicted from a housing project under a city ordinance that defined “family” as parents—or parent—and their own children. Brennan did not simply agree that the court should rule in favor of the grandmother—a perfectly reasonable position. He also assured the court that “the extended family has many strengths not shared by the nuclear family.” Relying on Robert Hill’s “science,” he declared that delinquency, addiction, crime, “neurotic disabilities,” and mental illness were more prevalent in societies where “autonomous nuclear families prevail,” a conclusion that would have bewildered the writers of the Constitution that Brennan was supposedly interpreting.

In its bumbling way and with far-reaching political consequences, the executive branch also offered warm greetings to the single-parent family. Alert to growing apprehension about the state of the American family during his 1976 presidential campaign, Jimmy Carter had promised a conference on the subject. Clearly less concerned with conditions in the ghetto than with satisfying feminist advocates, the administration named a black single (divorced) mother to lead the event, occasioning an outcry from conservatives. By 1980, when it finally convened after numerous postponements, the White House Conference on the Family had morphed into the White House Conference on Families, to signal that all family forms were equal.

Instead of the political victory for moderate Democrats that Carter had expected, the conference galvanized religious conservatives. Later, conservative heavyweight Paul Weyrich observed that the Carter conference marked the moment when religious activists moved in force into Republican politics. Doubtless they were also more energized by their own issues of feminism and gay rights than by what was happening in the ghetto. But their new rallying cry of “family values” nonetheless became a political dividing line, with unhappy fallout for liberals for years to come.

Meanwhile, the partisans of single motherhood got a perfect chance to test their theories, since the urban ghettos were fast turning into nuclear-family-free zones. Indeed, by 1980, 15 years after “The Negro Family,” the out-of-wedlock birthrate among blacks had more than doubled, to 56 percent. In the ghetto, that number was considerably higher, as high as 66 percent in New York City. Many experts comforted themselves by pointing out that white mothers were also beginning to forgo marriage, but the truth was that only 9 percent of white births occurred out of wedlock.

And how was the black single-parent family doing? It would be fair to say that it had not been exhibiting the strengths of kinship networks. According to numbers crunched by Moynihan and economist Paul Offner, of the black children born between 1967 and 1969, 72 percent received Aid to Families with Dependent Children before the age of 18. School dropout rates, delinquency, and crime, among the other dysfunctions that Moynihan had warned about, were rising in the cities. In short, the 15 years since the report was written had witnessed both the birth of millions of fatherless babies and the entrenchment of an underclass.

Liberal advocates had two main ways of dodging the subject of family collapse while still addressing its increasingly alarming fallout. The first, largely the creation of Marian Wright Edelman, who in 1973 founded the Children’s Defense Fund, was to talk about children not as the offspring of individual mothers and fathers responsible for rearing them, but as an oppressed class living in generic, nebulous, and never-to-be-analyzed “families.” Framing the problem of ghetto children in this way, CDF was able to mount a powerful case for a host of services, from prenatal care to day care to housing subsidies, in the name of children’s developmental needs, which did not seem to include either a stable domestic life or, for that matter, fathers. Advocates like Edelman might not have viewed the collapsing ghetto family as a welcome occurrence, but they treated it as a kind of natural event, like drought, beyond human control and judgment. As recently as a year ago, marking the 40th anniversary of the Civil Rights Act, CDF announced on its website: “In 2004 it is morally and economically indefensible that a black preschool child is three times as likely to depend solely on a mother’s earnings.” This may strike many as a pretty good argument for addressing the prevalence of black single-mother families, but in CDF-speak it is a case for federal natural-disaster relief.

The Children’s Defense Fund was only the best-known child-advocacy group to impose a gag rule on the role of fatherless families in the plight of its putative constituents. The Carnegie Corporation followed suit. In 1977, it published a highly influential report by Kenneth Keniston called All Our Children: The American Family Under Pressure. It makes an obligatory nod toward the family’s role in raising children, before calling for a cut in unemployment, a federal job guarantee, national health insurance, affirmative action, and a host of other children’s programs. In a review in Commentary, Nathan Glazer noted ruefully that All Our Children was part of a “recent spate of books and articles on the subject of the family [that] have had little if anything to say about the black family in particular and the matter seems to have been permanently shelved.” For that silence, children’s advocates deserve much of the credit—or blame.

The second way not to talk about what was happening to the ghetto family was to talk instead about teen pregnancy. In 1976 the Alan Guttmacher Institute, Planned Parenthood’s research arm, published “Eleven Million Teenagers: What Can Be Done About the Epidemic of Adolescent Pregnancy in the United States?” It was a report that launched a thousand programs. In response to its alarms, HEW chief Joseph Califano helped push through the 1978 Adolescent Health Services and Pregnancy Prevention and Care Act, which funded groups providing services to pregnant adolescents and teen moms. Nonprofits, including the Center for Population Options (now called Advocates for Youth), climbed on the bandwagon. The Ford and Robert Wood Johnson Foundations showered dollars on organizations that ran school-based health clinics, the Charles Stewart Mott Foundation set up the Too Early Childbearing Network, the Annie E. Casey Foundation sponsored “A Community Strategy for Reaching Sexually Active Adolescents,” and the Carnegie, Ford, and William T. Grant Foundations all started demonstration programs.

There was just one small problem: there was no epidemic of teen pregnancy. There was an out-of-wedlock teen-pregnancy epidemic. Teenagers had gotten pregnant at even higher rates in the past. The numbers had reached their zenith in the 1950s, and the “Eleven Million Teenagers” cited in the Guttmacher report actually represented a decline in the rate of pregnant teens. Back in the day, however, when they found out they were pregnant, girls had either gotten married or given their babies up for adoption. Not this generation. They were used to seeing children growing up without fathers, and they felt no shame about arriving at the maternity ward with no rings on their fingers, even at 15.

In the middle-class mind, however, no sane girl would want to have a baby at 15—not that experts mouthing rhetoric about the oppressive patriarchal family would admit that there was anything wrong with that. That middle-class outlook, combined with post-Moynihan mendacity about the growing disconnect between ghetto childbearing and marriage, led the policy elites to frame what was really the broad cultural problem of separate and unequal families as a simple lack-of-reproductive-services problem. Ergo, girls “at risk” must need sex education and contraceptive services.

But the truth was that underclass girls often wanted to have babies; they didn’t see it as a problem that they were young and unmarried. They did not follow the middle-class life script that read: protracted adolescence, college, first job, marriage—and only then children. They did not share the belief that children needed mature, educated mothers who would make their youngsters’ development the center of their lives. Access to birth control couldn’t change any of that.

At any rate, failing to define the problem accurately, advocates were in no position to find the solution. Teen pregnancy not only failed to go down, despite all the public attention, the tens of millions of dollars, and the birth control pills that were thrown its way. It went up—peaking in 1990 at 117 pregnancies per 1,000 teenage girls, up from 105 per 1,000 in 1978, when the Guttmacher report was published. About 80 percent of those young girls who became mothers were single, and the vast majority would be poor.

Throughout the 1980s, the inner city—and the black family—continued to unravel. Child poverty stayed close to 20 percent, hitting a high of 22.7 percent in 1993. Welfare dependency continued to rise, soaring from 2 million families in 1970 to 5 million by 1995. By 1990, 65 percent of all black children were being born to unmarried women.

In ghetto communities like Central Harlem, the number was closer to 80 percent. By this point, no one doubted that most of these children were destined to grow up poor and to pass down the legacy of single parenting to their own children.

The only good news was that the bad news was so unrelentingly bad that the usual bromides and evasions could no longer hold. Something had to shake up what amounted to an ideological paralysis, and that something came from conservatives. Three thinkers in particular—Charles Murray, Lawrence Mead, and Thomas Sowell—though they did not always write directly about the black family, effectively changed the conversation about it. First, they did not flinch from blunt language in describing the wreckage of the inner city, unafraid of the accusations of racism and victim blaming that came their way. Second, they pointed at the welfare policies of the 1960s, not racism or a lack of jobs or the legacy of slavery, as the cause of inner-city dysfunction, and in so doing they made the welfare mother the public symbol of the ghetto’s ills. (Murray in particular argued that welfare money provided a disincentive for marriage, and, while his theory may have overstated the role of economics, it’s worth noting that he was probably the first to grasp that the country was turning into a nation of separate and unequal families.) And third, they believed that the poor would have to change their behavior instead of waiting for Washington to end poverty, as liberals seemed to be saying.

By the early 1980s the media also had woken up to the ruins of the ghetto family and brought about the return of the repressed Moynihan report. Declaring Moynihan “prophetic,” Ken Auletta, in his 1982 The Underclass, proclaimed that “one cannot talk about poverty in America, or about the underclass, without talking about the weakening family structure of the poor.” Both the Baltimore Sun and the New York Times ran series on the black family in 1983, followed by a 1985 Newsweek article called “Moynihan: I Told You So” and a 1986 CBS documentary, The Vanishing Black Family, produced by Bill Moyers, a onetime aide to Lyndon Johnson, who had supported the Moynihan report. The most symbolic moment came when Moynihan himself gave Harvard’s prestigious Godkin lectures in 1985 in commemoration of the 20th anniversary of “The Negro Family.”

For the most part, liberals were having none of it. They piled on Murray’s 1984 Losing Ground, ignored Mead and Sowell, and excoriated the word “underclass,” which they painted as a recycled and pseudoscientific version of the “tangle of pathology.” But there were two important exceptions to the long list of deniers. The first was William Julius Wilson. In his 1987 The Truly Disadvantaged, Wilson chastised liberals for being “confused and defensive” and failing to engage “the social pathologies of the ghetto.” “The average poor black child today appears to be in the midst of a poverty spell which will last for almost two decades,” he warned. Liberals have “to propose thoughtful explanations for the rise in inner city dislocations.” Ironically, though, Wilson’s own “mismatch theory” for family breakdown—which hypothesized that the movement of low-skill jobs out of the cities had sharply reduced the number of marriageable black men—had the effect of extending liberal defensiveness about the damaged ghetto family. After all, poor single mothers were only adapting to economic conditions. How could they do otherwise?

The research of another social scientist, Sara McLanahan, was not so easily rationalized, however. A divorced mother herself, McLanahan found Auletta’s depiction of her single-parent counterparts in the inner city disturbing, especially because, like other sociologists of the time, she had been taught that the Moynihan report was the work of a racist—or, at least, a seriously deluded man. But when she surveyed the science available on the subject, she realized that the research was so sparse that no one knew for sure how the children of single mothers were faring. Over the next decade, McLanahan analyzed whatever numbers she could find, and discovered—lo and behold—that children in single-parent homes were not doing as well as children from two-parent homes on a wide variety of measures, from income to school performance to teen pregnancy.

Throughout the late eighties and early nineties, McLanahan presented her emerging findings, over protests from feminists and the Children’s Defense Fund. Finally, in 1994 she published, with Gary Sandefur, Growing Up with a Single Parent. McLanahan’s research shocked social scientists into re-examining the problem they had presumed was not a problem. It was a turning point. One by one, the top family researchers gradually came around, concluding that McLanahan—and perhaps even Moynihan—was right.

In fact, by the early 1990s, when the ghetto was at its nadir, public opinion had clearly turned. No one was more attuned to this shift than triangulator Bill Clinton, who made the family a centerpiece of his domestic policy.

In his 1994 State of the Union Address, he announced: “We cannot renew our country when, within a decade, more than half of our children will be born into families where there is no marriage.” And in 1996, despite howls of indignation, including from members of his own administration (and mystifyingly, from Moynihan himself), he signed a welfare-reform bill that he had twice vetoed—and that included among its goals increasing the number of children living with their two married parents.

So, have we reached the end of the Moynihan report saga? That would be vastly overstating matters. Remember: 70 percent of black children are still born to unmarried mothers. After all that ghetto dwellers have been through, why are so many people still unwilling to call this the calamity it is? Both NOW and the National Association of Social Workers continue to see marriage as a potential source of female oppression. The Children’s Defense Fund still won’t touch the subject. Hip-hop culture glamorizes ghetto life: “ ’cause nowadays it’s like a badge of honor/to be a baby mama” go the words to the current hit “Baby Mama,” which young ghetto mothers view as their anthem. Seriously complicating the issue is the push for gay marriage, which dismisses the formula “children growing up with their own married parents” as a form of discrimination. And then there is the American penchant for to-each-his-own libertarianism. In opinion polls, a substantial majority of young people say that having a child outside of marriage is okay—though, judging from their behavior, they seem to mean that it’s okay, not for them, but for other people. Middle- and upper-middle-class Americans act as if they know that marriage provides a structure that protects children’s development. If only they were willing to admit it to their fellow citizens.

All told, the nation is at a cultural inflection point that portends change. Though they always caution that “marriage is not a panacea,” social scientists almost uniformly accept the research that confirms the benefits for children growing up with their own married parents. Welfare reform and tougher child-support regulations have reinforced the message of personal responsibility for one’s children. The Bush administration unabashedly uses the word “marriage” in its welfare policies. There are even raw numbers to support the case for optimism: teen pregnancy, which finally started to decline in the mid-nineties in response to a crisper, teen-pregnancy-is-a-bad-idea cultural message, is now at its lowest rate ever.

And finally, in the ghetto itself there is a growing feeling that mother-only families don’t work. That’s why people are lining up to see an aging comedian as he voices some not-very-funny opinions about their own parenting. That’s why so many young men are vowing to be the fathers they never had. That’s why there has been an uptick, albeit small, in the number of black children living with their married parents.

If change really is in the air, it’s taken 40 years to get here—40 years of inner-city misery for the country to reach a point at which it fully signed on to the lesson of Moynihan’s report. Yes, better late than never; but you could forgive lost generations of ghetto men, women, and children if they found it cold comfort.

Talking Honestly About Abortion

In most cases pregnancy is not an accident, and that fact should tell us something about our obligations as moral actors.

by Doug Bandow

I was listening to an online debate over abortion. It was a good forum, seeking to encourage civil dialogue among those holding radically different views. There may be no more incendiary issue, yet participants treated each other with respect; only occasionally did passions flare and voices rise ever so slightly.

The topic focused on abortion’s impact on women’s equality, which made for a sometimes stilted discussion. Perhaps as a result, some fundamental issues failed to make more than a brief appearance. Thus, the forum missed a chance to force greater understanding.

Addressing abortion is never going to be easy. It involves intimate conduct, is ill-suited to government regulation, and requires trade-offs between the fundamental values of liberty and life. Like family affairs, it would be best if government never had to get involved. However, like family affairs, sometimes government must get involved.

The first point, which was generally ignored by advocates, is that abortion is not just another “medical procedure.” Rather, a second life is involved, which counteracts the standard (and otherwise persuasive) arguments about personal autonomy.

People who advocate restricting the right to an abortion are not bluenoses worried about what other people are doing behind closed doors. There has never been a lobby seeking to ban masturbation. After the Supreme Court tossed out sodomy laws in 2003 in Lawrence v. Texas, no national movement arose to overturn the result. Morality loses political potency when it involves protecting people’s souls rather than people’s lives.

One need not get lost in theological arguments or philosophical assessments about when personhood attaches. Whether at conception or implantation, a process begins which, if not interrupted, will yield a human being. That delivers moral value irrespective of the level of fetal development. After seeing a sonogram few putative parents speak of a “collection of cells.” The preferred term is “baby.” At some point, whether viability or another measure, it is impossible for any serious observer to deny that potential becomes real.

Consider how most people react to pregnancy. Strangers respond to moms-to-be and their babies with solicitude, compassion, concern, and protectiveness, generally irrespective of cultural, religious, and political viewpoints. Abusing a pregnant woman is seen as particularly wicked. Victimizing a pregnant woman results in an additional legal charge. Harming a baby often is a separate crime. Almost everyone implicitly recognizes that the child-to-be has independent moral status. That status is not extinguished by citing the legal rights of others.

One can still contend that the mother-to-be’s well-being and desire trump her baby’s interests. Indeed, most pro-life advocates believe abortion is justified if the mother’s life is in danger. However, balancing moral equities is not simple. The interests of the baby, already possessing moral value and soon to become what all acknowledge to be an equal person, also must be respected. Moreover, it becomes increasingly implausible to dismiss the developing life as his or her development proceeds. Late-term abortions look and feel like infanticide for a reason.

An equally important factor, which as far as I could tell went unmentioned in the forum, is that pregnancy is not a condition forced on most people. Other than the case of rape — an important but very limited exception — people choose to have sex. And having sex is what yields babies. That means most pregnancies are the result of voluntary action if not explicit intent.

Making the point is not a censorious attempt to punish people who abandon traditional religious strictures regarding sex. Rather, it is to indicate that in a typical case parents have what should be an obvious moral responsibility for the children that they create. Bring someone into the world, even if inadvertently, and you bear some obligation for and to the resulting life.

There is nothing unique about this argument. Conduct normally has consequences even if the specific result was not intended. Drive recklessly or drunk and you will be held liable if you cause harm — even if that was not what you planned to do. Shoot and kill someone without justification (e.g., self-defense) and you will be held responsible, even if his or her death was not your intent. Although mens rea will, appropriately, affect charge and punishment, lack of desire does not mean exoneration.

Again, one can argue about degrees of responsibility and appropriate remedies. But a serious conversation is required. There’s nothing mysterious about the connection between sex and babies. Take the risk and create a life … something more than just “oops, that was an accident, so where can I get an abortion?” is necessary in response.

Such a discussion is most likely to happen in the 50 state legislatures. Roe v. Wade was recognized as bad law by many liberals at the time. Seven justices concocted a fundamental right out of constitutional emanations, effusions, exhalations, and eruptions. As Yale Law Professor John Hart Ely observed: “It is bad because it is bad constitutional law, or rather because it is not constitutional law and gives almost no sense of an obligation to try to be.”

Overturning Roe, which as bad constitutional law should be reversed, would not outlaw abortion nationwide. Rather, such a ruling would return the issue to the states, where it was evolving when Roe short-circuited the political process. The New York Times figured that more than half the states would likely keep abortion legal without Roe.

That result seems so outrageous to the Left only because it gained a total victory — extreme even compared to most abortion laws elsewhere in the world — a half century ago when the Supreme Court decided to seize power and act like an uber legislature. A reversal of Roe would force the pro-abortion lobby to become a normal political movement again and make its case to the American people rather than federal judges.

Abortion is a tough issue. However, any discussion needs to be honest and address the most fundamental questions, which requires recognizing that more than one life is involved and that pregnancy is no accident. When a baby shows up, someone should take responsibility for his or her actions. We might still argue over what that means for the legality of abortion. But only then will the conversation be honest and the response legitimate.

How the Democrats fell for Mussolini

America’s elite has adopted the fascist dream of a corporate oligarchy

BY JOEL KOTKIN

There’s a tendency today to see Benito Mussolini as a pathetic sideshow, an incompetent blusterer who went from Adolf Hitler’s idol to his lapdog. Yet in many ways, Mussolini’s notion of fascism has become increasingly dominant in much of the world, albeit in an unexpected form: in the worldview of those progressives who typically see “proto-fascism” lurking on the Right.

Mussolini, a one-time radical socialist, viewed himself as a “revolutionary” transforming society by turning the state into “the moving centre of economic life”. In Italy and, to a greater extent, Germany, fascism also brought with it, at least initially, an expanded, highly populist welfare state, much as we see today.

Indeed, Mussolini’s idea of an economy controlled from above, with generous benefits but dominated by large business interests, is gradually supplanting the old liberal capitalist model. In the West, for example, the “Great Reset,” introduced by the World Economic Forum’s Klaus Schwab, proposes an expanded welfare state and an economy that transcends the market for the greater goal of serving racial and gender “equity”, as well as saving the planet.

Wherever it appears, whether in the early 20th century or today, fascism — in its corporate sense — relies on concentrated economic power to achieve its essential and ideological goals. In 1922, for instance, large corporations and landowners helped finance Mussolini’s Black Shirts for their March on Rome. Confindustria, the leading organisation of Italian industrialists, was glad to see the end of class-based chaos and welcomed the state’s infrastructure surge.

Elsewhere, the German cartels and Japanese zaibatsu both kowtowed to and benefited from fascist state support and contracts. Even today, China, in many aspects the model fascist state of our times, follows Il Duce’s model of cementing the corporate elite into the power structure. Since 2000, a hundred billionaires have come to sit in the country’s Communist legislature, a development that Mao would never have countenanced. 1

Capitalist countries have historically resisted such concentrations of power, but this process seems inexorable after a pandemic which devastated small businesses yet saw the ultra-rich grow richer and the largest firms record eye-watering profits. A handful of giant tech corporations now account for nearly 40% of the value of the Standard and Poor’s index, a level of concentration unprecedented in modern history.

Companies like Amazon are our zaibatsu, with influence over a vast array of industries, from online retail to cloud computing, the health food business, media and even space travel. Once such firms may have adhered to free market capitalism, but they have increasingly grown to see the value of a larger, more centralised and pervasive state.

This parallels the alarming transformation of the US Democratic Party, the putative “party of the people”, now increasingly a subsidiary of the corporate elite. Among financial firms, communications companies and lawyers, Biden outraised Trump by five-to-one or more. Today’s oligarchs are particularly keen on the progressive non-profit sector, which provides important support for their political and social advocacy — a means for them to make politically correct statements about climate change, gender and race, while still obtaining enormous profit margins and unprecedented wealth.

But whereas the old fascism sought greater prosperity, its new form, at least in the West, supports only an expanded welfare state that keeps the beleaguered middle and working classes both quiescent and stripped of aspiration. Worthies such as former Bank of Canada and Bank of England chief Mark Carney even embrace “de-growth,” a conscious slowing of the economy and embrace of declining living standards.

Indeed, the widely hailed Club of Rome report in 1972 — “The Limits to Growth” — was financed not by green activists but by the Agnelli family from Fiat, once a linchpin of Mussolini’s original corporate state.2 The Report predicted massive shortages of natural resources, slower economic growth, less material consumption and ultimately less social mobility.3

Fast forward to today’s new economic order, and it’s clear that not all economic animals are equal. There are opportunities galore for Wall Street investors, Silicon Valley tech oligarchs, cobalt miners, electric car manufacturers and renewable energy producers through the massive subsidies for going green.

And these woke oligarchs, like their fascist counterparts before them, see little use for democracy. Eric Heymann, a senior executive at Deutsche Bank, suggests that to reach the climate goals of Davos, corporations will have to embrace “a certain degree of eco-dictatorship”.4 After all, it would be difficult to get elected officials to approve limits on such mundane popular pleasures as affordable air travel, cars, freeways and suburbs with single-family houses, unless they were imposed by judicial or executive fiat.

Unsurprisingly, the biggest losers will inevitably be the poor. Wherever the conventional green policies central to the “Great Reset” have been imposed — California, Britain, Canada, Australia, Greece, Germany, France — the result has been to create high levels of “energy poverty”; the Jacques Delors Institute estimated that some thirty million Europeans were not able to adequately heat their homes during the most recent winter.

But then there are many hypocrisies at the heart of today’s incarnation of Mussolini-style fascism. Our new elites, for example, see no contradiction in supporting claims of “systemic racism” and “social justice” at home, while cooperating with Chinese authorities who abuse basic human rights in Hong Kong or to impose forced labour in Xinjiang. Boldly progressive firms like Airbnb have no problems sharing customer data with China’s security state; nor does Apple show compunction in relying on Uighur labour to build their products.

But in the battle between the two emergent fascist systems, China possesses powerful advantages. Communist Party cadres at least offer more than a moralising agenda; they can point to the country’s massive reduction of extreme poverty and a huge growth in monthly wages, up almost five-fold since 2006. At a time when the middle class is shrinking in the West, China’s middle class increased enormously from 1980 to 2000, although its growth appears to have slowed in recent years.

Like Mussolini, who linked his regime to that of Ancient Rome, China’s rulers look to Han supremacy and the glories of China’s Imperial past. “The very purpose of the [Chinese Communist] Party in leading the people in revolution and development,” Xi Jinping told party cadres a decade ago, “is to make the people prosperous, the country strong, and [to] rejuvenate the Chinese nation.”

In contrast, the tired capitalism of our corporate elite — who seem to have given up on broad-based economic growth — seems increasingly detached from the interests and aspirations of their own citizens.

Apple’s Tim Cook, for example, waxes enthusiastically about a “common future in cyberspace” with autocratic China. Wall Street also actively lobbies on behalf of Beijing, hoping to cash in on investments that strip America’s productive capacity but enrich them. Oligarchs like Michael Bloomberg describe China, a country of business opportunity for his firm, as “ecologically friendly, democratically accountable, and invulnerable to the threat of revolution”.

How do we combat this trend towards fascist structures? The answer is straightforward, if unprescriptive: to resist them with liberal ideals and a renewed commitment to upward mobility. That won’t be easy. As of today, the consolidation of oligarchic power is supported by massive lobbying operations and dispersals of cash, including to some Right-wing libertarians, who doggedly justify censorship and oligopoly on private property grounds.

Yet despite their riches and technical know-how, the oligarchic elites face widespread and growing scepticism towards both the traditional and social media outlets under their control. Similarly, it’s also unlikely many in the middle class will embrace their programme of race indoctrination, or accept a marked decline in living standards.

But building a coalition against the new fascism requires avoiding destructive nativism and instead focusing on how to restore competition and protect consumers from the overweening power, and vast wealth of the corporate elites.

Will a citizenry, dependent on transfer payments and increasingly voiceless, still put up a fight? To slow fascism’s spread, either from China or from within, requires a re-awakening of the spirit of resistance to authority that has long marked human progress and now seems far too rare.

FOOTNOTES
  1. See Richard McGregor, The Party: The Secret World of China’s Communist Rulers (New York: Harper, 2010), 206–8; David S. G. Goodman, Class in Contemporary China (Cambridge: Polity Press, 2014), 26, 86.
  2. See “Club of Rome a Worldwide Organization,” New York Times Archives, February 27, 1972; Encyclopaedia Britannica Online, s.v. “Agnelli, Giovanni,” accessed May 11, 2021, https://www.britannica.com/biography/Giovanni-Agnelli-Italian-industrialist-1921-2003.
  3. Norman Yoffee, “Orienting Collapse,” in The Collapse of Ancient States and Civilizations, ed. Norman Yoffee and George L. Cowgill (Tucson: University of Arizona Press, 1991), 4–5.
  4. Eric Heymann, “What We Must Do To Rebuild,” Deutsche Bank Research, November 2020.

Male Transjacking Will Ultimately End Women’s Sports

Transgender males are increasingly entering and dominating women’s sports at all levels, taking away opportunities that women have fought years to win.

In 2016, Therese Johaug, a Norwegian three-time Olympic cross-country skiing champion, received an 18-month suspension from the sport she loved after it was discovered that the team-approved lip balm she was using to treat her badly sunburned lips contained a performance-enhancing steroid.

A devastated Johaug lamented, “I feel I did everything right. I went to an expert who gave me the ointment, and I asked him if the cream was on a doping list. The answer I got was ‘no.’”

But the powers that be were undeterred from their well-established hard line of fairness, and Johaug was forced to watch the 2018 Winter Olympics from the sidelines.

It’s an unfortunate set of circumstances that raises the question: If chemicals from a necessary, medicated lip balm can be construed as such an unjust physical advantage, how on Earth can athletic authorities continue to turn a blind eye to the litany of physical advantages the transgender men increasingly competing in women’s sports so obviously possess in their male bodies?

The ‘Standards’ for Trans Athletes Are Ludicrous

This question remains unanswered, as the International Olympic Committee continues to waffle over the rules for participation in Olympic women’s events. Their rules presently allow men to participate as women, provided their testosterone levels are below 10 nanomoles per liter for at least 12 consecutive months.

These standards completely fail to consider the host of other advantages inherent in the male body: increased O2 capacity, overall musculature, bone size and density, increased joint stability, and lower body fat, to name a few. These advantages don’t magically disappear with the wave of a synthetic estrogen wand.

For those tempted to believe the male takeover of women’s sports is such a fringe issue that it’s not likely to be an important or frequent enough problem to merit any concern, think again. Here are just a few of the many ways women and girls are losing to their impersonators.

Men Easily Dominate in Women’s Sports

Fallon Fox is a male, American mixed martial arts fighter who competes in the women’s division. Fox ended the career of his opponent, Tamikka Brents, within the first three minutes of their fight when he shattered her eye socket, an injury requiring seven staples in her head, prompting her to declare, “I’ve never felt so overpowered in all my life.”

Hannah Mouncey is going to injure someone if allowed to continue dominating on the Australian women’s national handball team. He played on the men’s national team before deciding to grow out his hair and declare himself a woman.

Rachel McKinnon is a man and two-time women’s world cycling champion, who also uses his status as a professor of philosophy at College of Charleston in South Carolina to bully those who disagree with him, responding to dissenting opinions on Twitter with threats such as, “Abigail Shrier got wrecked on FOX Nation. I’ll do it to you, too.”

Gabrielle Ludwig is a 6-foot-6-inch man who took a starting spot on the women’s basketball team at Mission College in California. He was named first team all conference and mysteriously led the league in rebounds.

Fewer than 5,000 spots are available on NCAA Division III women’s volleyball teams. That didn’t prevent Chloe Anderson, a male, from taking one of them at the University of California, Santa Cruz.

Some of Alaska’s finest female track athletes watched the state final race from the sidelines after Nattaphon Wangyot, a male, edged them out of their places in it.

Terry Miller and Andraya Yearwood are a dynamic duo from Connecticut, where the unmedicated, post-pubescent boys took first- and second-place state championship titles in girls’ track events. When asked about his obvious physical advantage, Miller flippantly said the girls “should work harder.”

Similarly unmoved by the inequity of his male advantage is Cece Telfer, a man who ran on the Franklin Pierce University men’s 2016-17 track and field team before deciding he would rather race against women. He became the women’s 2019 Division II national champion in the 400-meter race, beating his closest opponent by a second and a half.

Amelia Galpin is a man who competes against women in the Boston Marathon. He, ironically, was featured on “the body edition” of Women’s Running Magazine, sending the message loud and clear that the ideal woman’s body includes a penis. How very progressive.

Laurel Hubbard and JayCee Cooper are two men doing their darndest to dominate women’s powerlifting. Hubbard took gold in two women’s heavyweight categories at the Pacific Games. Cooper filed a discrimination claim against USA Powerlifting, demanding a right to lift against women.

Caroline Layt is a man who was once voted “Women’s Rugby Player of the Year.” Britney Stinson, also a man, has broken into the Women’s Football Alliance and USA Baseball. Maxine Blythin is a man who just recently won the title “Women’s Cricketer of the Week.” Cate McGregor — you guessed it, another man — is on the Canberra women’s cricket team.

Lies Are Informing Public Policy

The list goes on and on, and so does the utterly nonsensical rhetoric relentlessly shoved down the public’s collective throat as fact. In the Human Rights Campaign’s “Guide for Schools in Transition,” the section related to trans-identified people in sports reads, “Concerns about competitive advantage are unfounded and often grounded in stereotypes about the differences and abilities of males vs. females.”

This is the kind of rhetoric informing public policy — the notion that men’s advantage in sports is nothing more than a sex stereotype that can be overcome with a little more elbow grease and courage from the females. It’s fascinating, is it not, that given this newfound clarity, we don’t somehow see females identifying their way onto the starting lineups of NFL or NBA teams.

Does anyone else remember what happened when Serena Williams challenged the 203rd-ranked player from the men’s league? I’ll give you a hint: She lost. Badly. The dominant U.S. women’s soccer team routinely loses to high school boys teams.

Sex-Based Protections Exist for a Reason

I take no pleasure in acknowledging this reality. As a former small-college basketball player, I’ll never forget the day a group of meathead-looking men showed up for our open gym and asked to play against us. I had a bit of a chip on my shoulder and something to prove, so I played as hard and aggressively as I possibly could.

At one point, I decided to try to stop one of the men from completing a fast-break layup. I sprinted in front of him, planted my body on the block outside the key, and braced myself for impact.

I was still seeing stars 10 minutes later. I had never been hit by so much brute force in my entire life. I later discovered I had, in fact, taken a charge from then-Seattle Seahawks running back Shaun Alexander, so my bravado was actually sheer stupidity, but the point has stayed with me, stamped into my memory for more than a decade now: No amount of 5 a.m. practices or extra drills or mental toughness or “working harder” would ever be sufficient for me to overcome the physical gap between our abilities.

That’s why sex-based protections exist in the first place. Without them, women like me would never be able to afford our college educations, as men would have swept up the scholarships we received. And that’s exactly what’s starting to happen.

Fight Back to Protect Women’s Sports

I hesitated to write this article for quite some time because it’s so profoundly discouraging to know that every time I talk about this, plenty of men are sitting around saying, “Women’s sports are a joke anyway,” or, “Feminists made their bed; now they need to lie in it.”

The fact that I feel compelled even to consider writing an extra paragraph unpacking the merits of women’s sports is evidence of the volume of work we have left to do. Ambivalent men will always find a way to blame women for our mistreatment, which is why feminism will always exist: Someone has to care about this stuff. It’s not right, and it needs to stop.

Men’s and women’s bodies are different. It’s not rocket science, it’s biology, and it turns out biology is one bigoted son of a gun. Anatomy discriminates. Women have known this for centuries. Biology is science, however, and as the left is constantly reminding us, science denial is pretty dangerous.

It’s time to step up and speak up and stop this nonsense once and for all. It’s going to take all hands on deck. Women, girls, and the people who love them need to complain loudly and often whenever they’re faced with the prospect of having to compete against the men who would cheat them out of what is rightfully theirs. Film the lunacy of it all. Share it broadly on social media.

Women had to fight loud and hard to acquire athletic opportunities in the first place. Unfortunately, it has become increasingly clear we will have to fight long and hard to keep them.

Elizabeth Warren’s “Accountable” Court

Considering that Sen. Elizabeth Warren was a professor at Harvard Law School, one would think her policy pronouncements would be more in line with judicial procedure and requirements, but that doesn’t seem to be the case. An analysis by Greg Weiner.

Elizabeth Warren, the Senator from Harvard Law School, has a plan—of course she does—for guaranteeing an “impartial and ethical judiciary” based on “the basic premise of our legal system,” which is “that every person is treated equally in the eyes of the law.” Shortly before its unveiling, she tweeted a promise to nominate “a demonstrated advocate for workers” to the Supreme Court.

In other words, she seeks a justice who would violate Canon 3 of the Code of Conduct for United States Judges, which requires jurists to disqualify themselves from cases in which they have “a personal bias or prejudice concerning a party.” The Code does not apply to the Supreme Court, but buckle up: The aforesaid “plan for that” would extend the ethical rules to the Supreme Court, which means Warren is promising to appoint justices whose conduct she will seek to classify as unethical.

This tangle of contradiction—as to her plans, Warren likely wants us to behold the magnificence of the forest, not the individual trees—illustrates the outcome-based constitutionalism that has infected American jurisprudence. It may be true, as Chief Justice John Roberts has said, that we do not have Obama judges or Trump judges. But we are apparently supposed to have worker judges or employer judges, abortion judges or gun judges.

Conspicuously lacking from Warren’s plan for an impartial judiciary is any sense of what that means for the judge’s role in the constitutional order. The bulk of the plan seeks to root out among judges the corruption Warren sees lurking around the corner of every disagreement. Judges retire to escape ethics inquiries; take away their pensions. “Ban judges from owning or trading individual stocks.” Supreme Court justices would have to explain recusal decisions. She would apply to Supreme Court justices the judicial code of conduct. She would fast-track impeachment of judges by changing the rules of the House of Representatives.

There may be some merit in some of this. There is certainly none in her comical description of the Federalist Society as “an extremist right-wing legal group.” (Try the American Bar Association as “an extremist left-wing legal group.” Neither rolls plausibly off the tongue.)

Other proposals, such as Congress dictating which justices can rule in which cases, may present separation-of-powers concerns. Requiring justices to explain recusal decisions because litigants asked for them would encourage frivolous recusal requests. As to fast-tracking impeachments, could someone please tell the vaunted law professor that (Article I, Section 5) “each House may determine the Rules of its Proceedings”? There is nothing there, and everything disturbing, about the president telling Congress what its rules for impeachment should be.

But the plan’s real significance lies in two broader points. The first is the overall thrust of the proposals, which assume, as the Progressive movement did, that sweeping away the detritus of corruption will do away with disagreement (read: politics) and illuminate right answers in all their sparkling clarity. In this schema, we can be done with the messiness of prudential judgment.

The second is the negative space. Warren has no conception of the proper judicial role other than that it should favor litigants whose political stances she supports. The plan does not even do the courtesy of endorsing living constitutionalism. It apparently assumes that such is the natural result of eliminating corruption.

The first rule for constitutional law students should be that if their policy preferences and constitutional conclusions always align, they should reassess their interpretive methods. A similar question of judicial nominees—from Warren or others—would be to name a case in which a policy was substantively wrong but constitutionally permissible. Warren’s constitutional and policy views coincide with suspicious consistency. Nor is she alone. Robert Bork used to say that most constitutional law was a question of whose ox was being gored.

That appears to be the case for Warren. But what is even more striking is that she elucidates no judicial philosophy at all other than evaluating judges according to the outcomes they reach and assuming that those who reach the wrong ones must have been corrupt. This is a one-way standard, of course, unless Warren would assume that her pro-worker judges must be corruptly beholden to organized labor.

To be sure, corruption among judges should be rooted out, and there is a case for continuing investigations after judges leave the bench. But this incessant talk of “accountability” is no substitute for a judicial philosophy that encompasses a substantive, constitutional idea of the judge’s role in a republic.

There is nothing inherently wrong with holding misbehaving judges—according to Federalist 81, even judges who consistently rule abusively—accountable. But to reduce jurisprudence to accountability is to assume that judges have two choices in every case: Warren’s preferred outcome and the corrupt one for which they must be held responsible.

Would that constitutionalism and politics were so simple. On second thought, we may be thankful they are not. The need for judgment is what makes politics as opposed to technocracy possible. If Warren is to be president, as opposed to a senator-cum-orator, she had better get used to the fact of politics. The sheer scope of Warren’s plans for everything means she has no hope of achieving them if her legislative strategy is to stigmatize those with opposing views as corrupt.

Perhaps most disturbing, while Warren’s judicial proposals evince no judicial philosophy, there may in fact be a latent constitutional theory discernible in her spate of “plans for that.” It is that the president runs the regime and everyone else is a minion in it. We have ingested an ample serving of that philosophy for the last 12 years, perhaps longer. The word “Congress” appears only twice in Warren’s judicial plan—once to refer to judges lying to Congress and the other to demand that Congress “take action” when a judge is accused of an ethical violation. Consider this in reverse: Would anyone give serious consideration to a congressional candidate whose platform was to proclaim how the president will behave?

They would not. Nor should they. If the basis of Warren’s candidacy is that she has a plan for everything, perhaps she should have a defensible plan for the Constitution too.

Deniable Dishonesty

An analysis by Theodore Dalrymple of an answer Sen. Warren gave in a recent debate.

A paradigm shift is a sudden change in fundamental assumptions about, or way of looking at, the world. Senator Elizabeth Warren illustrated one of the most startling ones of recent years with the answer that she gave to a question put to her recently on television.

“How would you react,” she was asked, “to a supporter who said to you, ‘I’m old-fashioned and my faith teaches me that marriage is between one man and one woman.’” Warren replied, “Well, I’m going to assume it’s a guy who said that. And I’m going to say, then just marry one woman. I’m cool with that. Assuming you can find one.”

The audience, reportedly, laughed. The Guardian newspaper said that she had won plaudits for this sally, but it surely must have been something other than the sheer wit of her distinctly sub-Wildean reply that caused the audience to laugh.

For many centuries it was assumed that marriage is between a man and a woman. However, we have changed all that, as Sganarelle, pretending to be a doctor, said when he was told that the heart is on the left and the liver on the right. And we have changed it all in an historical twinkling of an eye.

Senator Warren’s semi-facetious reply was a masterpiece of deniable dishonesty. In that sense it was worthy of admiration for its subtle employment of the old rhetorical tricks of suppressio veri and suggestio falsi. What did her assumption that it was a man who asked the question mean to imply? Surely that men are the principal beneficiaries of marriage and that women its victims—under the assumption that human relations are a zero-sum game. In one circumstance, the senator’s implication was correct: that of forced marriage as practiced, say, by the people of Pakistani descent in Britain, which allows men their freedom to play around while the wife stays at home as a drudge, whether domestic or sexual or both. But it is unlikely that the senator had this situation in mind, since it would have contradicted her multicultural sensibilities, and her audience’s politically correct sensitivities, to have said so.

In fact, ample evidence exists that marriage is protective of women rather than harmful to them, to say nothing of their children. If I were a Marxist, I would say that Warren’s attitude was a means by which she strove to protect the interests and power of the upper-middle classes against those of the lower classes, for the higher up the social scale you go, the stronger the institution of marriage becomes, for all its hypocrisies and betrayals. The upper-middle classes pretending to despise marriage are no more sincere than was Marie Antoinette playing shepherdess, though they do more harm by their pretense than Marie Antoinette ever did, for no one was ever encouraged to become a shepherdess by her playacting. It is otherwise with the upper-middle class’s playacting.

But perhaps the most destructive (and surely insincere) aspect of Warren’s answer was the implication that it now requires tolerance to countenance marriage, the assumption being that marriage is abnormal and therefore to be reprehended—the need for tolerance implying reprehension, for there is no need to tolerate what we already approve of.

As for the senator’s implication that men with traditional views will have difficulty in finding a woman to marry—or even have trouble getting a second date, after they express their deplorable opinions on the first one—my experience of treating unmarried mothers is that they hope that their daughters will not follow their own path in life, but rather find a responsible, stable man as the father of their children. The problem is that such men seem in short supply in their social sphere.

The audience’s laughter implied that at least a part of the population is willing, perhaps eager, to be complicit in Warren’s dishonesty. If criticized, she could always claim that she was only joking, but behind her joke she was deadly serious. Or should I say deadly frivolous?

Candace Owens: Dems Using White Supremacy Issue to Scare Blacks into Voting for Them

By Melanie Arter | September 20, 2019 | 3:25 PM EDT

Conservative commentator and Blexit leader Candace Owens testified in Congress Friday at a hearing on confronting violent white supremacy, telling a House Oversight and Government Reform subcommittee that Democrats are using the issue of white supremacy to scare blacks into voting for the Democrat Party.

During her opening testimony, Owens acknowledged that white supremacy is “indeed real,” but added that “despite the media’s obsessive coverage of it, it represents an isolated, uncoordinated and fringe occurrence within America.”

“It’s a fringe occurrence that is being used in my opinion by Democrats to scare Americans into giving up their votes to a party that can no longer win based on simple ideas, which is why we’re seeing so many of these hearings back-to-back despite other threats that are facing this nation,” she said.

“I want to reiterate that point. White supremacy is real, just as racism is real, but neither of these ideologies are real in this room. They have become mechanisms for the left to continue to call these hearings and to distract from much bigger issues that are facing this country and which threaten minorities, much bigger issues that they are responsible for,” Owens added.

She ticked off a handful of issues that she said are greater threats to black America: father absence, illiteracy, and abortion.

White nationalism sounds a lot better as a threat than father absence. When are we going to call a hearing on the 74 percent single-motherhood rate in black America today? My guess is probably never, since Democrats are the authors of that epidemic, which leaves us – black Americans – 20 times more likely to end up in prison, nine times more likely to drop out of high school, and five times more likely to lead a life in poverty and to commit crime.

White nationalism also sounds a lot better than illiteracy rates. I’m assuming we’re never going to call a hearing on that, which is a real epidemic which is facing black Americans and minority Americans today, an epidemic which by the way has a lot closer of a tie to our nation’s history of white supremacy. Slave codes in the early 19th century made it illegal for black Americans to learn to read. Why? Because if slaves could read, they could access information. I don’t believe that much has changed.

On the most recent National Assessment of Educational Progress, just 17 percent of black students scored proficient in reading at a 12th grade level. Eighty-three percent of blacks in America were not found proficient in reading at a 12th grade level. Are we going to have a hearing on that? Probably not.

White nationalism also sounds a lot better than abortion as a threat, which has resulted in the slaughter of 18 million black Americans since 1973 and points to a bigger crisis: the stagnation of black population growth in this country. In major cities like New York, we have more black babies aborted than born alive. If we’re talking about preserving lives and we’re talking about white supremacy, we should probably have a conversation about that.

Owens said that Democrats in the hearing are focused on white supremacy on the Internet so they can get permission to censor conservatives.

“But today in this room, we’re going to see Democrats try to connect the dots to white supremacy on the Internet. So the question is why? So that people who have absolutely nothing to do with propagating white supremacy are censored, silenced and controlled. What they are actually after is our permission to censor and silence and control any dissenting voices that go against the mainstream narrative that they wish to propagate,” she said.

Owens described attempts by liberals to silence her on social media.

To give a glimpse into just how absurd and expansive the definition of white supremacy has become, I offer to the committee that I have been libeled and smeared by Democrat media cohorts as someone who supports white supremacy. You need but look at me to determine that that just isn’t true.

Why? Because I routinely say black people don’t have to be Democrats. I am now considered somebody that is radicalizing people on the Internet. What a radical idea – black people waking up to the abuses in the Democrat Party, which has been instigated upon black America over the last 60 years. There have been sincere attempts – just so everybody knows – to censor me on social media, because I am radical.

YouTube once censored me for criticizing Black Lives Matter. They reversed the censorship, and they apologized, and they called it a mistake. Facebook once censored me for calling out liberal supremacy as a threat facing black America. What I said specifically was that in any community where liberal policies reign supreme, you’ll find that black America is hurting. I stand by that assessment.

Facebook reversed my censorship, apologized and claimed it was a mistake. Of course, I’m fortunate that I have a big enough platform that when I get branded something extreme, I can reverse it, but the majority of Americans don’t have that platform. The majority of Americans with dissenting opinions are silenced forever.

Owens said that liberals use the term racist to silence those who disagree with them and the term white nationalism is being used to anger black Americans into voting for the Democrat Party.

Many words which have once held very serious meanings have come to take on a very different definition over the last couple of years as Democrats have desperately tried to grapple with the fact that they are no longer able to manipulate Americans with broad claims and broad strokes of racism, sexism, misogyny and the like, words like racism, which today most nearly means anything or anyone that disagrees with a liberal and terms like white nationalism, which today and in this room and upon this floor most nearly means that it’s election time in America.

It’s time for the left to do what they do best – divide, distract and hope to keep the most important voting block to their party – which is black Americans – angry and emotional and reactive enough to keep voting for the same party that has systematically destroyed our families, sent our men to prison, and deferred all of our dreams.

I will close out by telling you that this is not going to work. America and more importantly, black America, is waking up to the ploy, the bad acting, the faux concern, these hearings. It’s not going to stop black America from breaking the chains of victimhood, and it’s certainly not going to stop me from being one of the loudest voices against it.

Just to show how the media can distort “news”: the basis for the challenge of the New York law was an Orthodox Jewish therapist who opposed the law on religious grounds, not a right-wing Christian group as described below.

By Oscar Lopez

MEXICO CITY, Sept 12 (Thomson Reuters Foundation) – New York City took the first step on Thursday toward repealing its ban on gay conversion therapy, aiming to avert a legal challenge that could put LGBT+ rights at risk nationwide, officials said.

The legal challenge has come from a conservative Christian group, the Alliance Defending Freedom (ADF), that claims the therapy ban is censorship of free speech and unconstitutional.

Several U.S. states and cities have banned conversion therapy, which rests on the belief that being LGBT+ is a mental illness that can be cured, either for minors or altogether.

Hundreds of thousands of LGBT+ Americans have undergone the widely discredited process that uses psychological, spiritual or physical practices, according to a study by the UCLA School of Law in California.

A bill to repeal the therapy ban in the New York City Council was introduced on Thursday by its speaker, Corey Johnson, who said he had consulted with LGBT+ rights advocates.

Advocates fear the legal challenge by the ADF could make its way through the increasingly conservative federal courts to the U.S. Supreme Court.

President Donald Trump has made scores of conservative judicial appointments, including two Supreme Court justices.

If successful, advocates fear, the ADF case could give the conservative courts an opportunity to set legal precedents that could have broad negative implications for LGBT+ rights.

“The courts have changed considerably over the last few years, and we cannot count on them to rule in favor of much-needed protections for the LGBTQ community,” Johnson, who is openly gay, said in a statement emailed to the Thomson Reuters Foundation.

“This was a painful decision,” he said. “I listened to the advocates who know the issue best, as well as my heart. I will never stop fighting for the community I am so proud to be a part of.”

In the ADF’s view, the ban threatened the constitutional right of New York citizens “to have whatever private conversations they want to have,” said Roger Brooks, an attorney for the group, in a statement.

“A Supreme Court decision making clear that psychologists, counselors, and their patients continue to enjoy their First Amendment rights … would be an important victory for free speech,” he said.

Attorneys for ADF also represented a Colorado baker who won a Supreme Court victory in 2018 over his refusal to make a wedding cake for a gay couple.

The ADF is listed as a hate group by the Southern Poverty Law Center, a civil rights organization that tracks and monitors right-wing groups.

If New York City repeals its ban, “that will be the right thing to do,” the ADF said in a statement. “We commend them for it.”

Nationwide, however, efforts to ban conversion therapy for people under age 18 are gaining momentum, and this year New York state lawmakers approved such a ban.

“The City Council’s action will stop unnecessary litigation after the passage of statewide protections and save valuable resources that can be used to help LGBTQ residents,” said Amit Paley, the head of The Trevor Project, a suicide prevention group, in an email.

Eighteen U.S. states have banned conversion therapy for minors, with legislation pending in 21 more, according to Born Perfect, an advocacy group that wants to ban the practice.

The Provocations of Camille Paglia

The maverick critic and scholar has championed great art, defended free speech, and offered groundbreaking analysis of popular culture.

The word “person” captures a concept so fundamental to Westerners that it can be jarring to discover that it once had a different meaning. Etymologically, “person” comes from the Latin word persona, which means “mask.” To be a person is to wear a mask, act out a role—what people today might call being fake.

But to Camille Paglia, the dissident social critic, a mask does not conceal a person’s true nature; it helps reveal it. This is why Halloween was her favorite holiday as a child. It was “a fantastic opportunity,” she told an interviewer recently, “to enact one’s repressed and forbidden self—which in my case was male.” When she was five, she dressed up as Robin Hood; at seven, she was a Roman soldier; at eight, Napoleon; at nine, Hamlet. “These masks,” Paglia told me in Philadelphia recently, “are parts of myself.”

Paglia, 72, grew up in the 1950s, when girls played house, not Hamlet. It was an unforgiving time to be different. As a fifth-grader, Paglia shoved a boy in order to be first in line; her teacher made her look up “aggressive” in the dictionary after school, an exercise that left her in tears. But at Halloween, she could defy conventions. Eventually, she would explain not only her personality but also the development of Western civilization through sexual masks. “I show how much of Western life, art, and thought,” she writes in Sexual Personae, her 735-page history of Western culture, “is ruled by personality, which the book traces through recurrent types of personae (‘masks’).”

A professor of humanities and media studies at the University of the Arts in Philadelphia, where she has taught since 1984, Paglia became an intellectual celebrity after the 1990 publication of Sexual Personae, her first book, which carries the subtitle Art and Decadence from Nefertiti to Emily Dickinson. Melding history and psychology with art and literature and laced with references to popular culture, the book delivered a one-two punch to academe. A feminist critical of the modern women’s movement, Paglia insisted on the greatness of Western civilization, though it was already unfashionable to do so. And she asserted that its greatness resulted from a creative but violent tension between male and female—between the Apollonian male principle of order (civilization) and the Dionysian female principle of chaos (nature). Two of the book’s most quoted lines are “If civilization had been left in female hands, we would still be living in grass huts” and “There is no female Mozart because there is no female Jack the Ripper.” Reading Sexual Personae, one reviewer wrote, was “a bit like being mugged.”

Now, nearly 30 years later, Paglia has once again found herself in the middle of the culture wars. Taking aim at the #MeToo movement, she told an interviewer that it is “ridiculous that any university ever tolerated a complaint of a girl coming in six months or a year after an event. If a real rape was committed, go frigging report it to the police.” In April, students at her university, upset by such statements, tried to de-platform Paglia, a lesbian who identifies as transgender. When they failed to get her scheduled lecture, “Ambiguous Images: Sexual Duality and Sexual Multiplicity in Western Art,” canceled or moved off campus, they organized a protest during the talk—and someone pulled the fire alarm. Later, the protesters urged the university to replace Paglia with a “queer person of color.”

Fortunately, the university’s president, David Yager, did what many of his peers at other schools roiled by such protests have failed to do: issued a statement defending freedom of expression. “Artists over the centuries,” Yager wrote in an e-mail to campus, “have suffered censorship, and even persecution, for the expression of their beliefs through their work. My answer is simple: Not now, not at UArts.” Paglia was delighted. An outspoken defender of free speech, she is horrified by the rise of censorship in academia—and was especially aghast, given her own history, at Yale’s attempt to police students’ Halloween costumes in 2015.

In her latest book, an essay collection called Provocations, she states that she’d like to be remembered as a “dissident writer who defended free thought and free speech.” But Provocations is not just a polemic against political correctness. The career retrospective, which includes writings from the last 25 years, covers subjects like gender, education, popular culture, and art. It showcases Paglia’s sweeping scholarship and puckish irreverence for PC pieties. “To questioning young people drawn to the siren song of hormones and surgery,” she writes, “I say: Stay fluid! Stay free!”

The book also reveals Paglia’s humility, a quality usually concealed by what she calls her “raging egomania.” Provocations, she writes, is for people who see art “as a medium of intuition and revelation.” It’s for those who stand in awe before nature, “a vast and sublime force”; for people “who see life in spiritual terms as a quest for enlightenment”; and “for those who elevate free thought and free speech over all other values, including material considerations of wealth, status, or physical well-being.”

Behind that devotion to heterodoxy lies something softer. She admitted that she’s chosen to censor herself in front of her students, no longer teaching them, for example, Billie Holiday’s “Strange Fruit,” a song about lynching, which was for years an important part of her course “The Art of Song Lyric.” “I don’t want to upset them. The historical material is too painful for a music class,” she said.

This reveals something important about Paglia. Her project in Provocations, and in much of her later work, is not to provoke simply for the sake of it, in the manner of, say, Milo Yiannopoulos. Her project is cultural populism. “I feel I should use my name recognition for service, for art,” she told the blog Bookslut in 2015. “I’m just a teacher in the classroom from beginning to end,” she added. Paglia sees culture, from the stories of the Bible to the paintings of Picasso to the ballads of Joni Mitchell, as a vast patchwork of meaning that inspires awe and delivers wisdom. She wants to bring the riches of art, literature, and religion to everyday people.

Like Athena, the Greek goddess of wisdom who sprouted adult-like from the head of Zeus, Paglia appears to have entered the world fully formed. She was born in working-class Endicott, New York, in 1947, when thousands of immigrants were arriving in the upstate town looking for work in the shoe factories. Her mother, Lydia, and her four grandparents were Italian immigrants. Her father, Pasquale, was the only member of his family to attend college, later becoming a professor of Romance languages at LeMoyne College in Syracuse. “I got my intellectuality, studiousness, and severity from my father,” she told New York magazine in 1991. “And I got my energy, optimism, and practicality from my mother.” Her sister, Lenora, was born when Paglia was 14.

Paglia’s early childhood was, she said, a “total immersion in Italian culture.” She and her parents lived with her maternal grandparents in the Italian section of Endicott. Her paternal grandparents lived two long blocks away, next to a Sons of Italy hall. Though her parents spoke English at home, Paglia was otherwise surrounded by people who communicated in “mutually unintelligible Italian dialects.”

Endicott was in many ways like a rural Italian village—which meant that Paglia saw how gender dynamics worked in the premodern world. Her grandmothers were matriarchal, goddess-like figures, who ruled home and hearth. They dictated the affairs of Paglia’s daily life. “Eat!” they’d command her in Italian. “Sleep!” Even more severe were the petite elderly Italian ladies who would visit their homes. “You had to watch out for them,” she said, “because when they kissed you, they’d bite your earlobe.” When Paglia and her parents moved from Endicott to the top floor of a dairy farm in Oxford, New York, where her father taught high school Spanish and her mother worked as a teller at the local bank, she encountered more tough women—farmers working the animals and land. Paglia dedicated Sexual Personae to her grandmothers and a paternal aunt.

Looking back, Paglia saw that her grandmothers had their own sphere of power at home, separate from the male sphere—where older women ruled. “Young women were nothing” in that world, Paglia said. Today, it’s the opposite: women try to gain power in the male sphere of work and lose status culturally as they age. “You’re unhappy,” Paglia said of today’s professional women, “because you’re spending all day long in this mechanical professional world. But we willingly put up with that because we want the financial autonomy and freedom.”

Her childhood also instilled in her an appreciation of men, especially working-class men—the plumbers, factory workers, and policemen who keep the world going. Paglia’s paternal grandfather was a barber, and her maternal grandfather operated a leather-stretching device at the Endicott-Johnson shoe factory. Four of her uncles served in the military during World War II, and her father was an army paratrooper. “One of the reasons I’m not anti-male,” Paglia told me, “is because I saw the sacrifices made by my father’s generation in those men.”

Paglia encountered her first works of art with her family at St. Anthony of Padua Catholic Church in Endicott. The stained-glass windows and polychrome statues of the saints entranced her. So did the large art book Art Treasures of the Louvre, which her father brought back from France after spending a year at the Sorbonne on the GI Bill. Five-year-old Camille was enchanted by the “gorgeous plates, in chronological order, of the history of oil painting.” One image made a special impression—a big photograph of a marble sculpture of the goddess Diana, the huntress, by the School of Fontainebleau. Paglia hung the image in her room. “I loved the idea of the armed woman,” she said. Prekindergarten, she made her first visit to the Metropolitan Museum of Art, where the Egyptian section mesmerized her. “I can remember very clearly that you could smell the age, the mummy casing, the wood.”

The other “overwhelming” experience she remembers from her early childhood is seeing the movie Show Boat, starring Ava Gardner, at the theater with her parents when she was four. This ignited Paglia’s passion for popular culture, “the master mythology of my postwar generation.” Gardner, Paglia’s “first crush,” was a goddess in the Hollywood pantheon. “When she’s performing ‘Can’t Help Lovin’ That Man,’ ” Paglia said, “and they have her face filling the big screen—this is what knocked me out!” She admired Gardner’s glamour and confidence. “The quality she exudes on screen is kind of eerie, almost, like, vampiric.”

Paglia laments the loss in today’s world of the wonder that defined her childhood. “When I was young,” she said, “there was all this energy, color, grandeur exploding from the big screen.” But today, people increasingly watch videos on the small screens of their phones or laptops. When she was a child, the Met overwhelmed her senses and filled her with awe. Today, the museum feels too sanitized, the objects too remote. “I can’t believe they redesigned it with all those stupid glass cases!” she said.

“I loved the idea of the armed woman,” Paglia said of Diana the Huntress, whose image entranced her as a child. (LEEMAGE/CORBIS/GETTY IMAGES)

It shouldn’t be surprising, then, that Paglia’s favorite book as a child was about a girl living in a land of wonder. She still owns the worn copies of Through the Looking Glass and Alice in Wonderland that her parents read to her as a child. She loved Lewis Carroll’s playful language. She memorized lines from the story, like the Red Queen’s command “Remove the joint!” “It was the first thing that I heard that inspired me about language,” Paglia said about Through the Looking Glass. “It’s so crisp and witty.”

She also credits Time and Oscar Wilde’s epigrams with teaching her to write in a condensed and succinct manner. “I love the one-liner, the axiom. I adore and parody them in my work,” she said. “Like in Sexual Personae, I’m talking about the status of cats in ancient Egypt and I say, ‘The cat is the least Christian inhabitant of the average home.’ ” She laughed. “I’m parodying the marketing analysis of ‘average home.’” But the greatest sentence she ever wrote, she said, is “God is man’s greatest idea.”

In adolescence, she wrote poetry and kept notebooks in which she’d copy prose that she admired from newspapers or novels, studying the passages to understand what made them good. Her writing would eventually bring her into contact with feminism for the first time. At 14, after seeing an item about Amelia Earhart in the newspaper, she began obsessively researching the feminist aviator, with the goal of writing a book about her. Earhart became a symbol to Paglia of “female freedom, thought, and movement.” As she researched Earhart, she also encountered figures such as politician Clare Boothe Luce, journalist Dorothy Thompson, and aviator Anne Morrow Lindbergh. “These women of the twenties and thirties were amazing pioneers without all this male bashing that goes on now,” Paglia said.

She worked on the Earhart project for three years, showing an academic’s patience and proficiency for research. By this point, she knew she wanted to be a scholar when she grew up. She wrote nearly 300 letters of inquiry, spent Saturdays at the Syracuse public library, and made pilgrimages to Earhart sites during family road trips. But Paglia drifted from the Earhart project after she turned 16, in 1963. For her birthday that year, one of her father’s colleagues, a Belgian woman, gave her a copy of Simone de Beauvoir’s The Second Sex. Reading it changed Paglia’s life. “I date my intellectual independence from that moment,” she wrote.

In Beauvoir, she found not only a vision of feminism but also a model. “Her commanding voice and enormous historical scope were huge inspirations for me,” she said. “She’s so magisterial and did such copious scholarship.”

After reading The Second Sex, Paglia “began to imagine a vaster project, which would build on Beauvoir and go beyond her.” That project began to take shape when Paglia was a student at Harpur College at SUNY Binghamton between 1964 and 1968. She majored in English and began writing essays on gender and sexual ambiguity in literature that would shape her ideas in Sexual Personae. “No one,” she said, “was thinking about sex in those days in academe.”

After graduating as valedictorian from Harpur, Paglia landed at Yale in 1968 for graduate school in English literature. It was a year of revolutionary social change, but at Yale, traditionalism ruled, especially in the English department, where Beat poetry and leftist literary critics like Leslie Fiedler, two major influences on Paglia, were disdained. Her genteel, WASP professors didn’t know what to make of this young woman decked out in psychedelic outfits who wrote about Freud and sex in her papers. One professor felt so uneasy around Paglia that he nervously rolled and unrolled his necktie with his fingers whenever she spoke in class. When rumors circulated that Paglia wanted to write a doctoral dissertation about sex in art, Harold Bloom summoned her to his office and declared, “My dear, I am the only one who can direct that dissertation!”

Paglia resisted the reigning approach to literature of the time, the New Criticism, whose epicenter was Yale. The twentieth-century literary movement championed “textual explication,” or close readings, of literary works, treating them as self-contained objects. Paglia admired the “microscopic” method but wanted much more—to ground literature in history, biography, and psychology, which included sex. That became the aim of her dissertation, originally called “The Androgynous Dream” and later known as Sexual Personae. Paglia ransacked Yale’s Sterling Memorial Library looking for different approaches to literature. She read Emile Durkheim, Max Weber, Erich Neumann, and Carl Jung, but the scholar who floored her was Sir James George Frazer. His seminal work The Golden Bough was a synthesis of myth. “My largest ambition,” Paglia writes in her preface to Sexual Personae, is “to fuse Frazer with Freud.”

Sterling Library was a gothic temple to scholarship—and Paglia worked with the reverence of a medieval monk. “To be a scholar,” Paglia has written, “is the greatest of vocations: to compose a devout commentary, a talmud, on the created world.” Her mother, she likes to point out, was born near the sixth-century monastery where Thomas Aquinas was educated. Her two mentors, Milton Kessler and Harold Bloom, were “visionary rabbis.” “Universities descend from medieval institutions,” she told me, “that were [intended] to train clergy, and there’s always been a model of withdrawal from the world and contemplation and honor and ethics in the academic tradition.”

Paglia admired Ava Gardner’s glamour and confidence: “The quality she exudes on screen is kind of eerie, almost, like, vampiric.” (MONDADORI PORTFOLIO/GETTY IMAGES)

Her devotion to this noble vision explains why Paglia was appalled by what happened next in academia. In the early 1970s, as she was finishing her doctoral course work, a new school of literary studies gained its first U.S. foothold at Yale and would eventually overthrow New Criticism as the main way academics would interpret texts in English departments across the country. It was known by many names: post-structuralism, continental theory, and deconstruction. Its leaders were the French theorists Jacques Derrida, Jacques Lacan, and Michel Foucault.

Paglia was repelled by the pretensions of these French thinkers. Though she had her problems with the “old-guard professors at the Yale Graduate School,” she recognized them as “genuine scholars, passionately devoted to study and learning. They believed they had a moral obligation to seek the truth and to express it as accurately as they could,” she writes in Provocations. But the French theorists and their converts in American universities were “like high priests murmuring to each other.” Rather than revealing and clarifying the meaning of literature, they obscured it.

As Paglia found herself on the wrong side of literary fashion, she also found herself at odds with feminism. In 1970, Paglia, a lover of rock and roll, told members of the New Haven Women’s Liberation Rock Band that she thought the Rolling Stones’ song “Under My Thumb” was a work of art. They considered the song sexist and “went into a rage,” Paglia wrote, and “surrounded me, practically spat in my face, literally my back was to the wall.” In another incident, Paglia was at dinner with some women professors who “went ballistic” on her and told her she’d been “brainwashed by male scientists” because she alluded to hormonal differences between the sexes. “I was rebuffed and rejected from the women’s movement from day one,” Paglia said.

The irony was that Paglia had always been a feminist. In high school, she wrote a letter to the editor of Newsweek, which was published in the magazine, about Amelia Earhart’s pioneering status as a woman aviator. In college, she rebelled against the sexist parietal rules that imposed curfews on women but not men. And in her first job, as a professor at Bennington College, originally an all-girls school that had gone co-ed two years before Paglia arrived, she kicked a male student in the derriere for an offensive nightclub skit he had performed on campus. “He sprawled out on the floor, and his glasses flew!” Some of the women students, delighted, gave Paglia the Award of the Order of the Golden Boot, a poster with an image of a gleaming yellow Frye boot that Paglia had worn to do the deed. But after a final incident, at a dance, where Paglia got into a fistfight with another student, it was Bennington’s turn to try to give her the boot. In the end, Paglia accepted a settlement and quietly resigned.

Paglia said that Bennington is where she “grew up.” She realized that her “do your own thing” attitude was wasting the time of her colleagues and students and diverting her from her vocation as a teacher. When I asked about the “kick story,” Paglia was bashful. “Oh, my God, when I look back—oh, my God,” she said. She held her hands up to her face. “That was so wrong,” she said. “Who am I, a free-speech militant?” In an interview published in Provocations, Paglia said that the one thing she would edit from her past is “the arrogantly militant Amazon feminism which I foolishly tried to impose” at Bennington. “Deep social change,” she learned, “takes time.”

The early 1980s were Paglia’s wilderness years. Unable to find steady employment in academia, she worked to finish Sexual Personae, which was rejected by five agents and seven publishers. When I asked Paglia if she had despaired that her work would never reach an audience, she mentioned Emily Dickinson, dubbed “Madame de Sade” in Sexual Personae, who was virtually unknown in her day. Whenever Paglia felt frustrated by her lack of success, she reminded herself “how a great genius like Dickinson got absolutely nothing back from her staggeringly innovative work.”

Her fate would not be Dickinson’s, however. Thanks to a chance meeting that Paglia had with a senior editor of Yale University Press, Sexual Personae would appear in February 1990, but with little fanfare—no publicity, no marketing, no picture of the author on the flap. Still, it started making its way into the hands of influential readers. One was Herbert Golder, a classics professor at Boston University and the editor of Arion, a literary journal. He contacted Paglia and asked if she’d review two books on classical antiquity by openly gay academics. When Paglia read the books, One Hundred Years of Homosexuality, by David M. Halperin, and The Constraints of Desire, by John J. Winkler, she was appalled. When post-structuralism took root at Yale in the early 1970s, Paglia assumed that it would be a short-lived fad. “Not until I read those awful two books,” she said, “did I realize how bad the situation was—and that what was going on was the literal destruction of a scholarly tradition that began in medieval monasteries and universities.”

Perceiving that the contamination of the humanities by theory represented a crisis for higher education, she devoted the next six months to researching and writing a mega-essay called “Junk Bonds and Corporate Raiders” for Arion. According to Paglia, Bloom told her that she was wasting her time. But to Paglia, nothing was more important than saving the universities from the “soulless, beady-eyed careerists” who “cynically deny the possibility of meaning” in the great works of the past and have ruined the humanities with their “shallow, juvenile attitude toward culture.” During our conversation, Paglia called them “absolutely the most corrupt and evil individuals on the landscape.”

In “Junk Bonds,” she argued that Lacan, Derrida, Foucault, and their followers were frauds. “These minor French theorists,” she wrote in a version of the essay that appeared in the New York Times Book Review, “have had a disastrous effect on American education. Lacan encourages pompous bombast and Foucault teaches cheap cynicism, while Derrida’s aggressive method, called deconstruction, systematically trashes high culture by reducing everything to language and then making language destroy itself.” “Junk Bonds” contains one of Paglia’s other favorite sentences: “Better Jehovah than Foucault.”

To Paglia, it made no sense to study French theorists in America. Their work, she argued in a Fordham University lecture in 2000, is specific to French language and to the culture of postwar Europe, and it doesn’t transfer to the Anglo-American tradition, where pragmatism and Romanticism infuse the arts. In the lecture, “The North American Intellectual Tradition,” republished in Provocations, she offers a counter-canon to Lacan, Derrida, and Foucault in the critics Marshall McLuhan, Leslie Fiedler, and Norman O. Brown. Their theories of culture, she told me, rely on “social observation of real people, real experiences, and of nature itself and the material world.”

As Paglia worked on “Junk Bonds,” a generally favorable review of Sexual Personae appeared in the New York Times Book Review. Then, in November 1990, she gave a slide lecture about the history of women in Hollywood at the 92nd Street Y, the New York cultural center. The novelist and journalist Francesca Stanfill attended and approached Paglia about writing a magazine profile on her. Because Paglia’s talk had celebrated Madonna, the New York Times invited her to write about the controversy over alleged pornography in Madonna’s music video for “Justify My Love.”

“Madonna—Finally, a Real Feminist,” appeared in December 1990, and it rocketed Paglia to fame. A month later, her notoriety was secured when she wrote an op-ed on date rape for Newsday, which has become her most reprinted piece of writing. Another month later, New York magazine published “Woman Warrior,” Stanfill’s cover story about Paglia. After “Junk Bonds” appeared in Arion, publications around the country excerpted it, including the New York Times Book Review, which ran it on its front page in May 1991 with the headline “Ninnies, Pedants, Tyrants and Other Academics.” When Paglia lectured at MIT in September of that year, she drew an overflow audience of thousands of people.

Paglia had broken through. She was called the second Marshall McLuhan and “the academic Joan Rivers.” New Yorker cartoonists caricatured her, and the New York Post’s Page Six gossip column even chronicled her doings. Paglia seemed to relish her celebrity.

Paglia’s next two books—Sex, Art, and American Culture and Vamps and Tramps—were essay collections with extensive appendixes documenting her media appearances and mentions, but they also contained real scholarship. Paglia’s essays can leave readers with the impression that she contains the whole of Western civilization in her mind. In her slender 1998 book The Birds, for example, published by the British Film Institute, she writes that the Hitchcock classic is “in the main line of British Romanticism, descending from the raw nature-tableaux and sinister femmes fatales of Coleridge.”

For lovers of the humanities, her subsequent volumes, Break, Blow, Burn (about poetry, and published in 2005) and Glittering Images (about art, and appearing in 2012), are the jewels in the Paglia canon. Her mission in these books is to “save culture from theory,” in the words of the poet Clive James. “Technical analysis of a poem is like breaking down a car engine,” she writes, “which has to be reassembled to run again. Theorists childishly smash up their subjects and leave the disjecta membra like litter.” Paglia instead sought to make the arts accessible and relevant to ordinary readers through a series of concise, smart explications of the greatest works of Western civilization, from Shakespeare’s Sonnet 73 and Sylvia Plath’s “Daddy” to Titian’s Venus with a Mirror and Piet Mondrian’s Composition with Red, Blue, and Yellow.

Both books end with Paglia’s signature nod to pop culture. The final poem covered in Break, Blow, Burn is Joni Mitchell’s “Woodstock,” and the last essay in Glittering Images is a tribute to the Star Wars film Revenge of the Sith. Mitchell’s ballad, she writes, is a commentary on the 1960s, “a harrowing lament for hopes dashed and energies tragically wasted.” George Lucas, meanwhile, was the “only” cultural figure “during the decades bridging the twentieth and twenty-first centuries” with “the pioneering boldness and world impact that we associate with the early masters of avant-garde modernism.” To Paglia, filmmakers like Lucas are our modern mythmakers; singers like Joni Mitchell, our modern bards.

But Paglia worries that we are moving into a “soulless future” as art, literature, and religion recede from the public square. In one essay in Provocations, “The Magic of Images,” she writes that today’s young adults “are unmoored from the mother ship of culture.” For Paglia’s generation, popular culture was the “brash alternative” to religion, literature, and the fine arts. For young adults these days, raised on the “darting images” of television and social media, it is the culture. Manic, jittery, and kinetic, it has produced adults with the same qualities.

To Paglia, the antidote to this is the kind of education she received at Harpur College, which counterbalances the sensory immediacy of pop with the philosophical depth of complex high art. But unless they deliberately seek them out, today’s students are rarely exposed to the greatest and most influential works of Western civilization. What they often encounter instead is a watered-down Marxism that sees the world in terms of society, politics, and economics—a materialistic philosophy that has no sense of the spiritual or sublime.

“That’s why they’re in a terrible fever and so emotional,” Paglia said. “There is a total vacuum in their view of life. They don’t have religion any longer. Religion teaches you metaphysics. It shows you how to examine yourself and ask questions about your relationship with the universe.” The Bible, she said, is “one of the greatest books ever written.”

Instead of finding meaning in religion or culture, today’s new generation has turned to politics. This, Paglia said, is “absolute idolatry.” Her students believe that “human happiness is possible through social reform—that utopia is possible.” A much better understanding of human nature is found in the great works of art and literature, which reveal “the tragic view of life.” The fact that Break, Blow, Burn became a national bestseller reveals that there is a craving for the kind of education Paglia is advocating.

The route to a renaissance in education and the arts, she argues, lies in the study of religion. “All art began as religion,” Paglia said in a debate at the Yale Political Union in 2017. Its metaphysics “frees the mind from parochial entrapment in the immediate social environment.” Its “stress on personal responsibility for the condition of the soul,” she added, “releases the individual from irrational blame of others.” And it has the potential to satisfy students’ existential yearnings. In her remarks at Yale, published in Provocations, Paglia argued that college students should be taught religion as culture, not morality, and that a study of comparative religion is the “true multiculturalism.”

Her fascination with comparative religion began as a college student. Like many spiritual seekers of her generation, she was drawn to mysticism, nature, and the occult. The British scholar Alan Watts, who helped to popularize Zen Buddhism in America, had a profound influence on her. His books on Eastern and Western culture, “comparing the way a Hindu or Buddhist sees the world to the way a Judeo-Christian sees the world,” gave her the multicultural education she advocates for students today.

In a brilliant essay in Provocations, “Cults and Cosmic Consciousness,” she argues that the spiritual yearnings of her generation gave rise to the New Age movement that flowered in the 1980s and 1990s. She “absolutely” considers herself part of that movement, she said. Though acknowledging that it is “choked with debris,” Paglia believes that New Age “deserves respect for its attunement to nature and its search for meaning at a time when neither nature nor meaning is valued in discourse in the humanities.” It has a “core of perennial wisdom” that draws from “Asian religion, European paganism, and Native American nature-cult.”

Her latest research project is New Age to the core. Ten years ago, as she was researching Glittering Images, she started noticing odd stone formations near her home in the Philadelphia suburbs. Wondering what they could be, she went to the library at the University of Pennsylvania’s Museum of Archaeology and Anthropology and inspected the vast collection on Native American culture. Eventually, she became convinced that these stone formations had religious significance. She is now working on a book about the nature religion of Native Americans of the Northeast. “I’ve found stone objects that are mind-boggling,” she said. “I now can just cross a lawn and find fragments of artifacts everywhere. I have many tools—scrapers, hammers, and knife blades, some of them still razor sharp,” she said. Stone tools in Pennsylvania “may date from 10,000 BC, which makes them older than the pyramids of Egypt.”

Cosmic reality is both wondrous and terrifying to her. “The sublime,” she said, “opens up the vastness of the universe, in which human beings and their works are small and nothing!” The world may be less enchanted than it was when Paglia was a child, but she still stands in awe of it. Her life’s work has been to share that message with others.

Emily Esfahani Smith, a journalist in Washington, D.C., is the author of The Power of Meaning.

Top Photo: In her latest essay collection, “Provocations,” Paglia writes that she wants to be remembered as a “dissident writer” who defended free thought. (PETER KRAMER/BRAVO/NBCU PHOTO BANK/GETTY IMAGES)