2AC


Comp Adv

Inno Adv
Inno Adv Response to: Patent thickets
Data disproves patent thickets – existing innovation is because of patents
Adam Mossoff, 24 - Professor of Law, Antonin Scalia Law School George Mason University. Answers from Adam Mossoff to Questions
for the Record from Senator Alex Padilla Senate Committee on the Judiciary, Subcommittee on Intellectual Property, United States Senate “The
Patent Eligibility Restoration Act – Restoring Clarity, Certainty, and Predictability to the U.S. Patent System” 1/24,
https://www.judiciary.senate.gov/imo/media/doc/2024-01-23_-_qfr_responses_-_mossoff.pdf //DH

Although conventional wisdom and classical economics define patents as monopolies by which the incentive to invent is balanced against
restraints on access and higher short-term prices, this is a fundamental misunderstanding of the nature and function of patents. Patents and
other intellectual property rights, such as copyright and trademark, are
not merely incentives to create, but also
incentives to commercialize innovation. They are property rights. Thus, they represent an equal opportunity for any person who
creates a new invention to secure the fruits of their labors, just like any person who works as a farmer or worker should have secured to them
the fruits of their productive labors. Thus, patents, like all property rights, are
the basis for commercialization activities, such as
obtaining venture capital financing, entering into license deals, and creating new commercial
structures for efficiently placing new products and services into the hands of consumers, such as the
franchise business model invented by U.S. patent owners in the nineteenth century. In the healthcare market, for example, this has meant an
ever-increasing supply of cutting-edge medical treatments and increasing availability of older medical treatments that are now “off patent.”
Patents not only function for companies to recoup billions in investments and thousands of labor
hours in creating new drugs and other healthcare innovations, they facilitate extensive licensing and
information-sharing agreements that efficiently distribute these healthcare innovations to patients.
These extensive manufacturing, commercial distribution, and information-sharing agreements were the launch pad for the unprecedented
response by the biopharmaceutical sector in inventing, producing, and distributing billions of doses of the COVID-19 vaccines during the
pandemic—an achievement never before accomplished by the biopharmaceutical sector since the invention and patent for Aspirin in 1900 and
the invention of vaccines in the 18th century.1 Although drug prices are a subject of policy debate, it is important to recognize that 95% of the
essential medicines identified by the World Health Organization are in the public domain; thus, these drugs are available for production by any
generic company wishing to sell them in the healthcare market in any country in the world, subject to regulatory approval by health officials.2
In the high-tech sector, the patent system has driven an explosion in new products and services at a
rate never before seen in any sector of the global innovation economy. “Several empirical studies
demonstrate that the observed pattern in high-tech industries, especially in the smartphone industry, is one of
constant lower quality-adjusted prices, increased entry and competition, and higher performance
standards.”3 This has occurred in one of the most patent-intensive sectors of the economy.4 This empirical evidence
contradicted the predictions of academics and economists almost twenty years ago that “patent holdup”
and “patent thickets” on smartphones and other high-tech devices would raise prices for consumers and stifle
innovation.5 All of this economic and historical evidence creates a strong presumption that reforming patent eligibility doctrine by
returning it to its longstanding function within the U.S. patent system would benefit consumers. Consumers will benefit from
the continued creation of new products and services and more jobs. Overall, the U.S. will continue to
experience economic growth and a rising standard of living for all consumers.

No evidence of patent thickets exists – the more likely effect is greater competition
Randall Rader, 2024 – Former Chief Judge (ret.) U.S. Court of Appeals for the Federal Circuit, and as Professor, Chief Judge Rader has
taught courses on patent law and other advanced intellectual property courses at George Washington University Law School, University of
Virginia School of Law, Georgetown University Law Center, the Munich Intellectual Property Law Center “Rader’s Ruminations – Patent
Eligibility III: Seven Times the Federal Circuit Has Struck Out” IP Watchdog, 3/31, https://ipwatchdog.com/2024/03/31/raders-
ruminations-patent-eligibility-iii-seven-times-federal-circuit-struck/id=174751/ //DH

The main point for this
softball pitch is the justification that follows: “[M]onopolization of those tools through the grant of a patent might tend to impede innovation
more than it would tend to promote it.” Mayo, 132 S.Ct. at 1923. This theory (and the Court senses the weakness of its sole justification by
using the words “might tend to”) is akin to a theory known in academic circles as the “tragedy of the anti-commons.” The
“tragedy” in a few words is that “too many” patents give too many owners the right to inhibit all future research
and progress. This hypothesis sprang from the 1998 writings of Professors Heller and Eisenberg. Heller; Eisenberg; Can Patents Deter
Innovation? The Anti-commons in Biomedical Research; SCIENCE Mag. (May 1998). In truth, this so-called tragedy has been fully
rejected by academic and empirical studies. See, e.g., Teece, David; The “Tragedy of the Anticommons” Fallacy: A Law and
Economics Analysis of Patent Thickets and FRAND Licensing; Berkeley Tech.L.J. Vol 32:1489 (2017) (“The systematic problem identified here is
undercompensation, and possibly overuse, not underuse.”). Upon reflection, the Supreme Court’s “tragedy” reasoning becomes a floating
softball pitch that the Federal Circuit should hit to knock the entire eligibility doctrine back to statutory sanity. Now, the Supreme Court often
advises the use of “common sense” in patent law settings. See, e.g., KSR v. Teleflex, 550 U.S. 398 (2007). Let’s apply “common sense” to the
Court’s “too many” patents justification. If
the United States has “too many” patents endangering technological progress,
where is the empirical evidence to prove that hypothesis? See, e.g., John P. Walsh, Ashish Arora & Wesley M. Cohen; “Effects
of Research Tool Patents and Licensing on Biomedical Innovation”; PATENTS IN THE KNOWLEDGE-BASED ECONOMY 285, 285 (2003) (“[Despite]
an increase in patents on . . . ‘research tools,’ . . . we find that drug discovery has not been substantially impeded.”). Where
have fields of research been shut down by “too many” patents? Where have prices soared in technologies
captured by overbearing exclusive rights? Where have groups of companies abandoned technology
because it is too expensive or already locked up? Where is the evidence? The empirical evidence suggests
that technology availability has soared and prices have declined as innovation creates intense cycles
of research competition. Indeed, the semiconductor chips that run most high-tech phones cost less than a cup of designer coffee.
Actually, the reason patents do not deter, but spur scientific development, is embedded in the disclosure doctrines of the Patent Act. By
statutory design, each patent on a new, non-obvious invention opens more doors to future research than it could ever close. Yet, where has the
Federal Circuit undertaken to explain that the “too many patents” theory has no empirical or theoretical foundation? The Supreme Court has
served up a pitch that begs to be hit: After all, the Court’s justification for its new “exceptions” claim-by-claim validity doctrine does not pass
the “common sense” test. No empirical data shows declining patent filing rates; no empirical data shows
patents closing down technology markets; no empirical data shows patents causing research to dry up
or grind to a halt. This softball pitch begs the Federal Circuit to show that the Court’s reasoning has no basis. Instead, the Circuit has yet
to swing its bat at this softball pitch, instead swinging only its sledgehammer. Strike four!
Inno Adv Response to: Chem Pollution impact negation
Chemical pollution is ongoing – it kills 9 million a year and outweighs war
Ravi Naidu, 2021 – Global Centre for Environmental Remediation (GCER), The University of Newcastle “Chemical pollution: A growing peril
and potential catastrophic risk to humanity” Environment International Volume 156, November 2021, Science Direct,
https://www.sciencedirect.com/science/article/pii/S0160412021002415 //DH

Rockström et al. (2009) warned that chemical pollution is one of the planetary boundaries that ought not to be
crossed to safeguard humanity. Altogether more than nine million humans are dying prematurely each year – one in
six deaths – due to contamination of their air, water, food, homes, workplaces, or consumer goods (Landrigan et al. 2018). To place this in perspective,
the chemical-related annual death toll is significantly greater than that of World War II and today constitutes
the greatest preventable form of mortality. Furthermore, it inflicts catastrophic losses on wildlife, notably insects and
animals that depend on them, ecosystems and their services, such as pollination or clean water, on
which humans depend for our own existence. This underlines the role of chemical pollution in
potential planet-wide ecological breakdown (Dave 2013). There is increasing evidence in recent decades of cognitive, reproductive and
developmental disorders and premature deaths caused by chemical contamination of the human living environment (Diamanti-Kandarakis et al. 2009). A thorough
and state-of-the-art literature and global database search was made to support the perspective developed here. We present a global picture of chemical pollutants
from many sources affecting human wellbeing in general, and humanity’s long-term survival prospects in particular. This analysis is in addition to the effects of
greenhouse gases and their effects on climate and humanity, which are considered elsewhere (Cavicchioli et al. 2019). Emphasis is given to chronic toxicity from
exposure to low levels of pollutants on human reproductive capability, cognitive and foetal health, and food security. We identify priority issues and propose
potential solutions to reduce impacts on human civilisation. 2. Production and consumption of chemicals In Man in a Chemical World Abraham Cressy Morrison
outlined the importance of chemistry, not only in contemporary post-industrial times, but also during earlier periods of traditional lifestyles (Morrison 1937).
Chemical processes and innovations have been a cornerstone of civilisation, which probably started ca. 17,000 years ago during the transition of humans from hunters
to civil societies, and will continue to be so for the foreseeable future (Rasmussen 2015). In 2017, approximately 2.3 billion tonnes of synthetic chemicals were
produced globally – double the amount produced in 2000 (Cayuela and Hagan 2019). The majority of the chemicals were petroleum compounds (expressed as
25.7% of sales), speciality chemicals (26.2% of sales) and polymers (19.2% of sales) (CEFIC 2021). The use of chemicals other than pharmaceuticals is projected to
increase by 70% by 2030, with China and the European Union (EU) remaining the largest consumers (see such projections in Supplementary Information, Fig. S1a,b).
In 2019, world sales of chemicals were estimated at $4,363 billion, equivalent to the production of more than 2.3 billion tonnes of chemicals (excluding
pharmaceuticals), which is approximately 300 kg per year for every man, woman, and child in the world (CEFIC, 2021, UNEP, 2019). Since the 1970s there has been
strong growth in the development and production of industrial chemicals that has introduced thousands of novel substances to daily use. According to the European
Chemical Industry Council, the major sectors other than pharmaceuticals that utilise synthetic chemicals are agriculture, health, mining, services, rubber and plastic
manufacturing, construction, and other industrial production (CEFIC 2021). New chemicals are often released with insufficient risk assessment (Sala and Goralczyk,
2013, Wang et al., 2020), and their mixtures are creating new chemical environments with very uncertain toxicity. Chemical intensification is a feature of almost all
major industries: in modern agriculture, for example, the intensive production of crops and livestock to feed much of the world now
relies on the annual application of some 5 million tonnes of pesticides and 200 million tonnes of
concentrated nitrogen, phosphorus and potassium (NPK) fertilisers. According to the Food and Agriculture Organization of the
United Nations (FAO) database, the total volume of pesticides was 3,835,826 tonnes in 2008, which increased by ca. 7% in the next decade (See comparative
statistics in Supplementary Information, Fig. S1c) (FAOSTAT 2019). In the USA alone, the number of active chemical components in various pesticides stands at more
than 400 (USGS 2017). Agrichemical use is also increasing in newly industrialising countries, such as China, which is now the world’s largest producer and user of
industrial chemicals, itself accounting for 36% and 25% of world demand for chemical fertilisers and pesticides, respectively (Guo et al. 2010). 3. Chemicals as global
pollutants Although anthropogenic and synthetic chemicals have delivered enormous benefits to human civilisation, including disease control and food productivity,
their benefits are now being offset by equally large-scale negative impacts resulting from unintentional human and environmental exposure, and insidious toxicity
(Fig. 1) (ECHA, 2018, NPI, 2017, US-EPA., 2017). Well-known harmful pollutants such as arsenic (As), lead (Pb), cadmium (Cd) and mercury (Hg), as well as smog and
air-borne particulate pollutant in large cities, have been documented since ancient Rome and Athens, whose citizens suffered from contaminated water supplies,
air, cooking and eating utensils, and food (Patterson et al. 1987). The Agency for Toxic Substances and Disease Registry (ATSDR) lists 275 priority chemicals as
pollutants, based on their frequency, toxicity and potential for human exposure. However, this is likely to be a significant underestimate given the difficulties in
tracking novel or ‘unknown’ chemicals in the environment after they have been released (Anna et al. 2016). To overcome this uncertainty, science is attempting to
define ‘emerging contaminants’ that are yet to be regulated, in order to anticipate future problems (Richardson and Kimura 2017). Many chemicals now considered
pollutants were beneficial at the time of their discovery (Kerr 2017). For example, when organochlorine insecticides were developed in the 1950s their main
application was to control agricultural and disease-carrying insect pests, and they were successful in the short term. However, with the publication of Rachel
Carson’s Silent Spring in 1962 (Carson 1962), the world began to recognise it was facing severe problems due to the persistence of organic pesticides in the
environment and the resulting cumulative exposure of wildlife and humans. Although some persistent organic pesticides have since been banned, humanity is still
dealing with their legacy. Dichloro-diphenyl-trichloroethane (DDT), which was used widely in the 1950s, is a well-known example. Continuing illicit pesticide
manufacture and use, and lasting residues, remain a problem in some countries. The lag between discovering a chemical’s benefits and understanding its potential
harms has resulted in a pattern of new chemical synthesis, licensing, production and use, followed by concerns over potential effects, bans and restrictions, followed
by an urgent search for replacement chemicals – frequently with other negative effects. This has led to ‘pulses’ of new chemicals being released into the
environment and food chain in recent decades, followed by frequent detection of negative side-effects. So, while chemical toxicity is not new, it
is the phenomenal 40-fold increase in the production of chemicals and resource extraction during the last
100 years that now poses a serious risk to humanity (see Table 1 for an estimate of combined anthropogenic chemical emissions)
(Cribb, 2014, Cribb, 2017, Cribb, 2021). Emissions of pollutants can be continuous but they are often under-reported and there is great variability in reported values
(Supplementary Information, S2).
Inno Adv Response to: No bio-innovation anyways

Patents are the vital component to commercialize synbio innovation

David Cain, 2024 - Patent Attorney, former cryptography primary examiner for USPTO. “Strategic Patenting: The Role of Patents in
Sustaining Biotechnological Innovation” 5/9,

https://www.linkedin.com/pulse/strategic-patenting-role-patents-sustaining-innovation-david-cain-zcrqf //DH

In the verdant frontier of the biotechnology industry, innovation is not just a pathway to scientific discovery; it is the very foundation upon
which the sector's growth is built. Over the past few decades, this dynamic field has expanded dramatically, evolving from a niche area of
scientific exploration into a robust industry that spans health, agriculture, and environmental sciences. The driving force behind this
expansion is a relentless pursuit of breakthroughs—new ways to edit genes, reprogram cells, and
redesign biological systems—which promise to revolutionize how we treat diseases, cultivate crops, and
mitigate ecological impacts. However, the path from laboratory insight to marketable product is fraught with
complexity and competition, where ideas alone do not suffice for success. Here, patents emerge as critical
instruments of protection, serving not only as legal shields against infringement but also as vital assets in the biotech company’s
arsenal. Patents protect the substantial investments made in research and development, ensuring that
innovators can reap the financial benefits of their discoveries. This protection, in turn, fuels further
innovation, creating a cycle of funding and discovery that drives the industry forward. The importance
of patents in the biotech industry cannot be overstated. They provide the necessary security for
investors to allocate capital towards risky biotech ventures, knowing that intellectual property laws safeguard their
investments. Moreover, patents facilitate an environment where shared knowledge leads to new innovations
—through licensing agreements or research collaborations—thus broadening the scope of scientific exploration and
application. In sum, as biotechnology continues to advance by leaps and bounds, the strategic use of patents is
indispensable in nurturing the ecosystem of innovation. They are not merely legal formalities but the lifeblood
of progress in a realm where the next great discovery is always just over the horizon.
AT Topicality
1. We meet. PERA expands patent eligibility. The exceptions it codifies exist now
Philip S. Johnson, 2024 - Chair of the Steering Committee of the Coalition for 21st Century Patent
Reform, JD from Harvard. Answers to Questions for the Record from Senator Alex Padilla, before the
Intellectual Property Subcommittee of the Judiciary Committee of the United States Senate on “The
Patent Eligibility Restoration Act – Restoring Clarity, Certainty, and Predictability to the U.S. Patent
System,” January 23, https://www.judiciary.senate.gov/imo/media/doc/2024-01-23_-_qfr_responses_-
_johnson.pdf //DH

As explained in my written testimony, the Eligibility Exclusions of Subsection 101(b) have been added to
PERA to reassure its critics that items that never would have been eligible for patenting prior to
the recent Supreme Court’s activity will still be ineligible for patenting. Subsection 101(b) codifies five
eligibility exclusions.

2. Counter-interpretation. Strengthen means looking at the net effect of a law


Gregory N. Mandel, 17 - Dean & Peter J. Liacouras Professor of Law, Temple University. “Institutional
Fracture in Intellectual Property Law: The Supreme Court Versus Congress” Minnesota Law Review,
102:803 https://www.minnesotalawreview.org/wp-content/uploads/2018/01/Mandel_MLR.pdf //DH

I constructed a database of every Supreme Court opinion implicating patent, copyright, trademark, or
trade secret issues from July 1, 2002 through June 30, 2016. The database entries are the Supreme
Court’s final decision in each matter; certiorari dispositions are not included. I removed from the
database any cases that, though referring to intellectual property law, did not actually decide any
intellectual property issue. For example, Illinois Tool Works, Inc. v. Independent Ink, Inc. concerned
whether there should be a presumption of market power under antitrust law where the product in
question is subject to patent protection.7 Though this case bears a relation to patent protection, its
result did not turn on or affect patent law. The final Supreme Court database includes forty-four
intellectual property decisions and is summarized in Appendix A. Using similar methods, I developed a
database of every federal statute concerning patent, copyright, trademark, or trade secret rights during
the same time period. As with the Supreme Court data, I removed statutes that did not actually affect
patent, copyright, trademark, or trade secret doctrine. For example, the Lanham Act is the primary
statute providing for Federal trademark protection in the United States.8 Portions of the Lanham Act
regulate nontrademark activities, such as false advertising.9 Legislation that affected only the false
advertising portions of the Lanham Act was excluded from the database. The final dataset includes forty-three
legislative entries for the pertinent period and is summarized in Appendix B.10 The following
sections analyze the contours of intellectual property activity in these Supreme Court and congressional
datasets.
A. SUPREME COURT DECISIONS
The primary variable for analysis is whether a given Supreme Court decision or legislative action
strengthened or weakened intellectual property protection. Consistent with prior research in this
context, strengthened versus weakened refers to the extent of protection afforded to the intellectual
property rights owner.11 Accordingly, Supreme Court decisions that make it easier to acquire
intellectual property rights; broaden the scope of intellectual property protection; make it easier to
prove infringement; or strengthen remedies for infringement are all considered to strengthen
intellectual property protection. Decisions that have the opposite effects weaken protection.12
(footnote 12)
12. Consistent with the standard approach applied in analyzing the ideological direction of Supreme
Court decisions, whether a given decision strengthened or weakened intellectual property rights was
determined based on the net effect on intellectual property law with respect to the issue at hand, not
based simply on whether there was a change from the status quo. See, e.g., Lee Epstein & Andrew D.
Martin, Does Public Opinion Influence the Supreme Court? Possibly Yes (But We’re Not Sure Why), 13 U.
PA. J. CONST. L. 263, 272 (2010) (applying this methodology to code decisions as liberal versus
conservative); Isaac Unah et al., U.S. Supreme Court Justices and Public Mood, 30 J.L. & POL. 293, 307–
10 (2015) (same); The Supreme Court Database, WASH. U. L. SCH., supremecourtdatabase.org (last
visited Nov. 5, 2017) (same). Thus, Eldred v. Ashcroft, 537 U.S. 186 (2003), is coded as strengthening
intellectual property rights because it upheld the Copyright Term Extension Act against a constitutional
challenge. As the statutory name implies, the Copyright Term Extension Act extended owners’ copyright
terms. Though upholding the law effectively maintained the status quo, the Court’s decision on the issue
before it favored greater protection.

3. Predictability – Mandel is a comprehensive study of all IP protection in the US –
their interpretation of ‘strengthen’ is arbitrary with no intent to define
4. No ground loss – if the net effect is to strengthen, they get every disadvantage link
5. Topic education – PERA is the biggest patent reform law in the literature, it’s the
core controversy
6. Extra topicality is good. It increases ground; PICs check topic irrelevant offense.
7. Prefer reasonability. Competing interpretations encourage a race to the bottom of
the most self-serving definitions, which crowds out substance.
Cap K
Consequentialism Good
The goal of policy-making should be to maximize benefit and minimize costs---that
requires analysis of consequences, not adherence to moral absolutes
Fettweis 13, Professor of IR @ Tulane (Chris, “The Pathologies of Power,” p. 242-243)

Classical realists have long considered prudence, in Hans Morgenthau's words, "the supreme virtue in politics."47 Their conception of the term,
and how it has traditionally been used in U.S. foreign policy, is similar to the dictionary definition: wisdom, caution, circumspection, and "provident care in the
management of resources."48 Simply put, a prudent foreign policy would aim above all to minimize cost and maximize benefits.49 It
would strive to be rational, careful, and restrained, and it would not waste national resources pursuing low-priority goals or
addressing minor threats. Prudence is essentially the ability to weigh potential consequences of alternative political
actions. It demands that the main criteria for any decision be a cost-benefit analysis, or an honest attempt to
assess the implications for the national interest. Although such calculations are by necessity uncertain in a world where rationality is bounded and values
unquantifiable, if policy makers were to value prudence above all other virtues they would by force of habit explain and justify their decisions using a rational
framework, with reference to reason and evidence rather than emotion. Were prudence the defining virtue in policy debates, the ideal for which policy makers
strive, it would quickly silence the voices of fear, honor, glory, and hubris. The process of evaluation can never be foolproof, but by insisting that it be at the center
of decision making at the very least prudence can make assumptions clear and offer a basis for evaluation absent in those decisions driven by pathology. The
evaluation of policy cannot be done without recognition of cost. Simply achieving a goal - or winning - does
not justify action. To be considered rational, the other side of the ledger must be considered as well. This may
sound obvious, but a surprising number of scholars and analysts judge foreign policies based solely on
whether or not objectives are fulfilled.50 Neoconservatives in particular tend to ignore costs, assuming that the United States is
capable of paying virtually any price in the fight against evil. The war in Iraq, that exemplar of imprudence, was not preceded
by extensive projections of the likely price tag. When pressed, Bush administration officials repeatedly deferred such discussions by
denying such estimates were possible.5' At best, they were of secondary relevance. In the war's aftermath, the same officials stress how much better the world is
without Saddam rather than how much worse it is without those who gave their lives in removing him.

Like realism itself, prudence is hardly amoral. It merely demands a focus on the morality of outcomes, not intentions. Actions that produce
bad results are imprudent, no matter how good the intent. On this, Morgenthau quotes Lincoln:

I do the very best I know, the
very best I can, and I mean to keep doing so until the end. If the end brings me out all right, what is said against me won't amount to anything. If the end brings me
out wrong, ten angels swearing I was right would make no difference.2∂ Although the central criteria for prudent cost-benefit analyses must be the national
interest, no abnegation of national ideals or international responsibility need follow. Foreign humanitarian assistance is cheap, relatively speaking, and often carries
benefits for donor and recipient alike. The entire operation in Somalia, during which as many as a quarter million lives were saved, cost U.S. taxpayers less than two
billion dollars.53 More was spent every week at the height of the Iraq war. Qaddafi was removed for half that. A focus on the outcome makes it clear that the Iraq
war was a blunder of the first order. Even if the intentions of the Bush administration were indeed good, it is hard to see how the outcome can be said to be worth
the cost. Thomas Ricks quotes a senior intelligence official in Iraq as saying that the long-term American goal after the surge is "a stable Iraq that is unified, at peace
with its neighbors, and is able to police its internal affairs, so it isn't a sanctuary for Al Qaeda. Preferably a friend to us, but it doesn't have to be."54 Presumably
one could add the absence of weapons of mass destruction to this rather scaled-back list of goals, and perhaps the continuation of the uninterrupted flow of oil
from the Gulf. In other words, if all goes well over the course of the next few years -and there is obviously no guarantee it will - Iraq might look quite a bit like it did
in 2003, only with a marginally more friendly dictator in charge. The cost of this restoration of the virtual status quo ante will be at least forty-five hundred American
dead and some thirty thousand wounded, at least a hundred thousand Iraqis killed and millions more displaced, and up to as many as three trillion U.S. taxpayer
dollars spent.55 The war inspired many young Arabs, such as Ibrahim Hassan al-Asiri, to join the ranks of jihadi terrorists, swelling the ranks of America's true
enemies. Al-Asiri is currently the main bomb maker for "Al Qaeda in the Arabian Peninsula," the group that operates out of Yemen and continues to try to take
down Western airliners, and he is considered the "most dangerous man in the world" according to many people who maintain such rankings.56 The decision to
invade Iraq may well turn out to be the most imprudent action this country has ever taken.

Another operation from the same year might serve as a
counterexample to Iraq, a prudent foreign policy adventure where the benefits outweighed the costs. The July 2003 intervention in Liberia may be little
remembered, but that is partially because it was such a success. The United States deployed around two thousand Marines to Monrovia and ended a siege during a
particularly brutal civil war. Security returned to the capital and an unknowable number of lives were saved. Unlike in Somalia, the mission did not creep into nation
building, proving that intervention need not be tainted by hubris. By October the civil war had effectively ended and the Marines withdrew, having suffered no
casualties and incurring little cost to the U.S. taxpayer. In the years since, Charles Taylor, the paragon of the West African kleptocratic despot, was put on trial at The
Hague and the security situation in Liberia has improved markedly. The Marines have not returned.

No assessment of costs and benefits can guarantee good
decisions, of course. But by making assumptions clear, by inculcating and rewarding a systematic evaluation of alternatives, expectations can
be assessed more rationally and decisions rescued from emotion. If leaders work actively to minimize pathologies and replace them with rational, fact-based beliefs,
the odds of arriving at rational conclusions rise. If prudence is the goal, therefore, the following should form the core of the foreign policy
conventional wisdom:
• The world is more peaceful than ever before.
• While no country is ever completely safe, the United States has few - if any - serious security threats.
Chevron
1. No net benefit of the counterplan
Chevron will destroy government legitimacy.
Hamburger 16—(Maurice & Hilda Friedman Professor of Law, Columbia Law School). Philip
Hamburger. September 2016. “Chevron Bias”. The George Washington Law Review, Volume 84, Number
Accessed 10/21/21.
The standard question about deference to administrative interpretation focuses on the statutory authority for agencies. To
understand judicial deference, however, it is necessary to ask the constitutional questions about judges
—about their office or duty to exercise independent judgment and about their systematic bias in violation of the right of due process.
Of course, if judges cannot defer to an agency’s interpretation of its authorizing statute, they will face difficult questions about what to do with
agency interpretations. These difficult
statutory questions, however, are no excuse for failing to confront the less
difficult constitutional questions raised here.
First, whatever the statutory extent of an agency’s power to interpret for its purposes, judges have the constitutional duty to interpret for
purposes of deciding their cases. The
Constitution vests judicial power in the courts, and it staffs the courts with
judges—that is, with persons who have an office of independent judgment. Judges, in adjudicating their cases, thus have the duty to
exercise their own independent judgment about what the law is, including their own independent
judgment about the interpretation of the law. Accordingly, when judges defer to agency judgments about statutory
interpretation, the judges abandon their very office or duty as judges. They make a mockery of their office, reducing it from a posture of
independent judgment to a posture of bowing to power.
Second, the Constitution prohibits judges from denying the due process of law, and judges therefore cannot engage in systematic bias in favor
of the government. Nonetheless, judges
defer to administrative interpretation, thus often engaging in systematic
bias for the government and against other parties.
In cases such as Chevron, the judges candidly declare their abandonment of judicial office and their
embrace of systematic bias. What are Americans to think when their judges openly declare such
things? When the judges brazenly tell Americans that they cannot get unbiased independent judgment in the courts, is
it surprising that Americans become suspicious of the judges and the rest of the government? And if
Americans, in their disagreements with the government, cannot find unbiased independent judgment
in the courts, will it be surprising if they seek justice in other ways?
The judges thus are playing a dangerous game. The availability of judges who exercise their
own independent and unbiased judgment, without deference, is the foundation
of American government, in which conflict is resolved by law rather than force, and in which conflicts
about the law are decided by the judges. Without what Locke called “indifferent judges,” the people are apt
to become their own judges—that is, they eventually will be tempted to take judgment into their own
hands.192 And who then will be in a position to say they are unjustified? Certainly not the judges.
Although deference to administrative interpretation is only part of the judiciary’s involvement with administrative power, it
illustrates a broader problem with such involvement—that it corrupts the judiciary. Judges during the past century
have in myriad ways participated in administrative power, to ensure both its efficiency and its
legitimacy, and this has had consequences not only for administrative power, but also for the
judges. Rather than worry about maintaining administrative power, judges need to worry about their own role; in this instance,
about the unlawfulness of their deference and the consequences for them and the entire government.
In the end, it must be hoped that the judges themselves will solve the dangers of deference. Although they
undoubtedly will continue to enjoy the robes of their office, they need to decide whether they will fill those robes. Under the
Constitution, they
must exercise an office of independent judgment and must avoid systematic bias in
violation of due process, and if they fail to meet these most basic requirements, they will have little right
to public respect or even self-respect.
Court Clog
1. Non-unique—the courts will hear a tsunami of lawsuits about regulations now
Katz 24 (Eric Katz, Senior Correspondent for Government Executive. “Will recent Supreme Court rulings
'devastate the functioning of the federal government?'” 7/1/2024. Accessed 7/11/2024.
https://www.govexec.com/management/2024/07/will-recent-supreme-court-rulings-devastate-
functioning-federal-government/397795/) wtk

Last week, the court overturned the precedent known as Chevron deference, which said broadly that
courts must defer to agencies when interpreting ambiguous statutory language. In Relentless and Loper
Bright v. Commerce Department, the court ruled in a 6-3 decision—the same tally in all three of the
recent rulings—that the judiciary, not federal agencies, should resolve questions of law according to
their own judgment. In Jarkesy v. Securities and Exchange Commission, it ruled that agencies issuing civil
penalties should defend those decisions in federal court rather than solely in in-house tribunals.
The court dealt yet another blow to federal agencies on Monday, deciding in Corner Post v. Federal
Reserve that it must strike down the existing six-year statute of limitations to sue the government over a
rule. Instead, the court said, the clock starts whenever a party can claim to suffer an injury as a result of
an agency-issued rule. In this case, a truck stop in North Dakota challenged a cap on debit-card
processing fees issued by the Federal Reserve.
Like the other cases, the fallout from Corner Post is likely to be widespread: agencies will see no time
limit on the challenges they face from their rules and regulations. Any regulated party can sue at
virtually any time, leaving agencies to constantly defend themselves in court even decades after issuing
a rule.
In her dissent, Associate Justice Ketanji Brown Jackson warned of the consequences, particularly when
taken in conjunction with the court’s other recent decisions.
“At the end of a momentous term, this much is clear: the tsunami of lawsuits against agencies that the
court’s holdings in this case and Loper Bright have authorized has the potential to devastate the
functioning of the federal government,” Jackson wrote. “That result simply cannot be what Congress
intended when it enacted legislation that stood up and funded federal agencies and vested them with
authority to set the ground rules for the individuals and entities that participate in…our economy and
our society.”
Bridget Dooling, a law professor at Ohio State University who served for 10 years in the White House's
Office of Information and Regulatory Affairs, including as its deputy chief, said the cumulative impact of
the decisions could open the floodgates of new litigation against federal agencies.
“Now it is open season on the regulatory state,” Dooling said.

2. Non-unique—patent litigation is high because of the vagueness of the Alice/Mayo framework
David J. Kappos, 24 - attorney and former government official who served as Under Secretary of
Commerce for Intellectual Property and Director of the United States Patent and Trademark Office from
2009 to 2013 “Written Testimony to the U.S. Senate Judiciary Subcommittee on Intellectual Property
Regarding PERA”, 1/23, https://www.judiciary.senate.gov/imo/media/doc/2024-01-23_-_testimony_-
_kappos.pdf //DH
The vagueness and randomness of the Alice/Mayo framework have also enabled patent infringers to
exploit Section 101 as a litigation weapon exacting unnecessary burdens and costs on good-faith
patent holders and the courts, further disincentivizing investment and innovation. Patent infringers
now routinely raise Section 101 as a defense, often merely as a strategy to complicate and prolong
litigation, rather than as a good-faith defense. One analysis found that from 2012 to 2014 (when Alice
was decided), Section 101 was raised in just two Rule 12(b)(6) motions across the country each year. In
the year after Alice, that number rose to 36 motions, and by 2019, accused infringers were filing nearly
100 such motions each year.8

3. Link turn—PERA will reduce patent litigation


Courtenay C. Brinckerhoff, 24 – registered patent attorney and have been representing chemical,
biotech, and pharmaceutical clients before the USPTO for over 30 years. Answers To Questions For The
Record from Senator Padilla, before the U.S. Senate Committee on the Judiciary Subcommittee on
Intellectual Property, “The Patent Eligibility Restoration Act”, 1/23,
www.judiciary.senate.gov/imo/media/doc/2024-01-23_-_qfr_responses_-_brinckerhoff.pdf //DH

My practice does not focus on patent litigation, but I understand that Alice and Mayo have impacted
patent litigation by permitting patent challengers to invalidate patents at an early stage of litigation,
such as at the motion to dismiss stage, on a record with very little or no evidence other than the patent
document itself. Moreover, the uncertainty surrounding the scope of the “judicial exceptions” and the
willingness of courts to invalidate seemingly concrete inventions as “abstract ideas,” has incentivized
challenges based on patent eligibility. PERA would rein in the use of Section 101 as a blunt
instrument against patents, and would require patent eligibility determinations to be made on a more
precise basis.

4. Courts don’t solve climate change


Goho et al. 24 (Shaun Goho, Frank Sturges, Veronica Saltzman, and Mary Sasso, writers for the Clean
Air Task Force. “Advocating for climate and clean air rules after a Supreme Court power grab”
7/15/2024. Accessed 7/15/2024. https://www.catf.us/2024/07/advocating-climate-clean-air-rules-after-
supreme-court-power-grab/) wtk

In the final week of its term, the Supreme Court issued a string of decisions expressing remarkable
hostility to federal regulatory agencies and public health protections. Of most immediate importance to
public health, the Supreme Court blocked EPA’s Good Neighbor Rule — a regulation that protects
downwind states from air pollution. In the longer term, though, this group of decisions builds on and
accelerates recent trends in the Court’s decisions and, cumulatively, amounts to a sea change in entire
fields of the law. The jarring ripple effects will be felt for years to come. In particular, these decisions:
• Carry out a massive judicial power grab that takes decisions out of the hands of subject matter
experts and gives them to unelected and unaccountable judges;
• Destabilize environmental regulation by making it more likely that different courts will reach
conflicting decisions, that previously settled questions of law will be reopened, and that judges
will overturn agency actions based on their own policy preferences; and
• Demonstrate a striking distrust of environmental regulation and agency expertise more
generally.
States
Entire argument is killed by "perm do the counterplan" - the US consists of the 50
states, so any joint action by these 50 states will pass through Congress, making it US
action
Perm do the CP
Google Arts and Culture, N.D., "Federal government of the United States", Xoxo 8.17.2021
The federal government of the United States is the national government of the United States, a federal
republic in North America, composed of 50 states, a federal district, five major self-governing territories
and several island possessions. The federal government is composed of three distinct branches:
legislative, executive, and judicial, whose powers are vested by the U.S. Constitution in the Congress, the
president and the federal courts, respectively. The powers and duties of these branches are further
defined by acts of Congress, including the creation of executive departments and courts inferior to the
Supreme Court
Farm bill
Fiat assumes the plan is already passed - not that the bill is still on the floor
Neg burden is to prove that PERA and the Farm Bill are mutually exclusive
Entire argument is that PERA will take a long time to pass- PERA is already on
