Monday, May 2, 2016

Life, Death, and Property Rights:
The Pharmaceutical Industry Faces AIDS in Africa
At the start of the twenty-first century, AIDS (Acquired Immune Deficiency Syndrome) was poised to become the most deadly disease known to humanity. In the 20 years since its first documented outbreaks, 24.4 million people had died from AIDS, while over 40 million were living with the disease.1 Rapidly spreading around the world, AIDS had proliferated in countries where basic factors including education and safer sex practices were at a minimum, with 95% of infections occurring in the developing world.2 In certain countries in sub-Saharan Africa as many as one in three adults tested positive for HIV (Human Immunodeficiency Virus), the virus that eventually causes AIDS. In South Africa alone, in the year 2000, there were an estimated 4,700,000 people infected with HIV/AIDS.3 Terrifyingly, many experts believed that worse was yet to come. With infection rates continuing to increase in many areas and treatment woefully inadequate for the vast majority of those infected, AIDS threatened the developing world with an epidemic of unprecedented magnitude.
There were, to be sure, treatments for AIDS. While a cure remained frustratingly elusive decades after the disease first emerged, scientists and pharmaceutical firms had managed to concoct a series of increasingly effective drugs that enhanced the quality of life for many AIDS victims and transformed what had once been a certain sentence of death into a manageable, if still incurable, condition. The drugs were complicated to administer, however, and extremely expensive—roughly $10,000 to $15,000 per person, per year. In Africa, where average annual per capita incomes ranged from $450 to $8,900, they were essentially unaffordable.
Not surprisingly, then, the death toll in Africa had begun by the 1990s to plague the world's pharmaceutical firms as well. Across Africa and into the developed nations, doctors and AIDS activists clamored for an industrywide response to the AIDS epidemic. Pharmaceutical companies, they argued, needed to lower the prices they charged for AIDS drugs in Africa. They needed to make their drugs more accessible and to ensure that even the poorest victims could benefit from cutting-edge research. Where the companies were unwilling or unable to comply with these demands, the activists argued, local pharmaceutical firms should be able to step into the gap, selling generic versions of the sought-after drugs.
Such "solutions" were anathema to the established Western pharmaceutical industry. These were firms, after all, that spent millions of dollars on research and development; firms that often spent years, or decades, perfecting the treatments they discovered. In their home markets, these firms were protected by patent systems that enabled them to recover the development costs of their drugs. They were not simply going to turn around and distribute their product for free. Moreover, the patents that protected the Western pharmaceutical firms were part of a broader system of intellectual property rights—a system based on the sanctity of privately owned information and enshrined in the newly created rules of the World Trade Organization. If the pharmaceutical companies relented in the face of Africa's epidemic, they risked denting the entire global structure of intellectual property rights. For if drugs were free—or cheap, or copied—in Africa, why not books or music or software? Indeed, why stop in Africa? There were poor people everywhere, and millions who were dying from AIDS.
The stakes of AIDS in Africa, therefore, were immense. If the pharmaceutical companies did nothing, they risked the lives of millions and a massive blow to their global standing and reputation. But if they lowered prices in Africa or weakened their patent rights, they risked destroying the system that sustained them; the system that, arguably, enabled them to save the very people who were now organizing against them.
Prelude to Catastrophe: The Emergence of AIDS in the United States
The first wave of the AIDS epidemic was concentrated in the United States, and particularly in the cities of New York and San Francisco, where a group of otherwise healthy gay men began in the early 1980s to develop a rare form of cancer. Suspecting that an unknown virus might be to blame, scientists began to look for the culprit and transmission mechanism. Quickly, they realized this strange new disease traveled through direct physical contact: blood transfusions, shared needles, perinatal transmission (from mother to child during childbirth), and sexual contact, both homosexual and heterosexual. Nevertheless, because AIDS was first discovered in the gay community and remained concentrated there for some time, it swiftly became known as the "gay disease" and was associated, almost entirely at first, with homosexual activity.
In the 1980s, the disease spread through the United States at an alarming rate: 100,000 people tested positive for AIDS in the first eight years of testing, with an additional 100,000 cases identified just two years later.4 Because scientists were unable to define an effective medical treatment for this mysterious virus, death rates were extremely high and patients rarely survived for more than 10 years.
This initial phase of infection was characterized by a general lack of information, widespread fear, and an inability or unwillingness to deal with the extent of the problem in the United States. In a sign of the times, Ryan White, a 13-year-old hemophiliac with AIDS, had to attend middle school classes by phone after being barred from his Indiana school in 1985. The superintendent justified his decision by explaining, "With all the things we do and don't know about AIDS, I just decided not to do it. There are a lot of unknowns and uncertainties, and then you have the inherent fear that would generate among classmates."5 The U.S. government, meanwhile, did little to stem public fears. President Reagan waited six years after the initial outbreak of AIDS to address the topic publicly and the United States was among the last of the major Western industrialized nations to launch a coordinated education campaign.
Soon, however, it became clear that AIDS was not an isolated problem of the United States. In 1983, 33 countries reported AIDS infection and by 1987 that number had increased to 127. In 1984 the Ugandan Minister of Health asked the World Health Organization for assistance in battling AIDS, and the United Kingdom formed a special cabinet committee to investigate the disease. That same year, the Zambian Ministry of Health launched a national AIDS education campaign. It would do little to stop the spread of the virus, though, which swiftly claimed the life of the president's son.
As the effects of AIDS swept across the world, activists in the United States finally propelled the disease into the public spotlight. In 1987, a group of mostly gay protestors created ACT UP (AIDS Coalition to Unleash Power), a "diverse, non-partisan group of individuals united in anger and committed to direct action to end the AIDS crisis."6 In "zap" demonstrations designed to attract media attention and raise public awareness, ACT UP members staged "die-ins," harassed politicians, and picketed corporate buildings. Seven demonstrators broke into the New York Stock Exchange in 1989, where they chained themselves to railings to protest the high prices of AIDS treatment. In a less confrontational but equally influential campaign, Elizabeth Glaser, wife of TV's "Starsky and Hutch" star Paul Glaser, took the cause to Washington during her well-publicized battle with the disease in the late 1980s. Infected by a blood transfusion during childbirth before transmitting the disease to her two children, Glaser became a marquee celebrity spokesperson for the disease, lobbying to raise money and challenging the public's conception of AIDS as a "gay" or "drug-user" disease.
Before long, increased awareness and scientific breakthrough began to make some headway against the disease. Through a combination of preventive campaigns and improved treatment, health care officials greatly reduced the rate of transmission while boosting the quality of care provided to those with AIDS. By the mid-1990s, the rate of infection in the U.S. appeared to have peaked, dropping from nearly 80,000 new cases diagnosed in 1993 to just 21,700 in 2000.7 While more people in the United States were living with HIV and AIDS than ever before, doctors had found a menu of ways to check the spread of the disease in the body and thus to slow the inevitable pace of death.
Some of this improvement, to be sure, was due to preventive measures that the activists had helped bring to light: blood screening, needle exchange programs, and promotion of condom use. But much was the result of a slow but steady onslaught against the virus itself. In the 20 years that followed the discovery of AIDS, scientists had still not succeeded in curing the disease or formulating an effective vaccine against it. But they had made great strides in understanding and treating both AIDS and HIV. Essentially, HIV spreads in a patient's body by targeting the cells that normally clear foreign pathogens. As the virus replicates inside these cells, it eventually—often rapidly—devastates the immune system of its victim. HIV develops into full-blown AIDS when the virus has spread to such a degree that it disrupts the basic function of the immune system. Patients die from diseases that their immune systems can no longer prevent, rather than from AIDS itself.
Initially, scientists struggling with the HIV/AIDS virus were forced to treat only the disease's symptoms, focusing for instance on the pneumonia that felled so many patients in the latter stages of AIDS. In 1986, however, the U.S. Food and Drug Administration (FDA) approved the first AIDS-specific drug, an antiviral medication known as zidovudine or AZT, which worked by inhibiting one of the enzymes involved in viral replication. Several years later, scientists developed a new class of drugs, called protease inhibitors, that attempted to stop already infected cells from further spreading the HIV virus. Both AZT and the protease inhibitors represented a quantum leap in treatment quality, and it was their usage that began, by the early 1990s, to reduce the toll of AIDS. However, as the drugs were increasingly prescribed, the virus itself began to mutate, morphing into new and more drug-resistant strains. In response, doctors backed away from the once-promising "monotherapies" and began to attack AIDS with what became known as HAART (Highly Active Antiretroviral Therapy), a combination of drugs designed to combat the virus at various levels.
By the late 1990s, the HAART approach seemed to be working. AIDS infection rates had plummeted in the United States and infected patients were living longer, healthier lives. While a diagnosis of AIDS was still devastating to the victim, it was no longer the death sentence it had been a decade earlier. Infected patients such as basketball great Magic Johnson or New Republic editor Andrew Sullivan were living normal and productive lives, and had reclaimed their position in society. HAART, however, was far from perfect. It was, in its early days, extremely complicated, demanding that patients take over 20 pills a day in accordance with a rigid schedule. The cost of these pills, moreover, was exorbitant, reaching $10,000-$15,000 a year for patients in the United States. And finally, all this medicine still did not get at the root cause of the disease: HAART treated AIDS, but it didn't cure it.
The Second Wave: AIDS and the Developing World
Just as the United States was beginning to stem the HIV/AIDS crisis within its own borders, infection rates in much of the rest of the world began spinning out of control. By the year 2000, 25.3 million people with HIV/AIDS lived in Africa, while South and Southeast Asia and Latin America hosted 6.1 million and 1.4 million, respectively.8 In many of these regions, heterosexual contact was the primary method of transmission; indeed, 55% of sub-Saharan Africans infected were women.9 In the year 2001 alone, the virus infected 3.4 million new victims in Africa, raising the subcontinent's percentage of total adults infected to 8.4%.10 In Zimbabwe, where over 30% of the adult population tested positive for HIV, the demographic toll of the disease had already been felt, pushing the country's once-robust rate of population growth down to zero. Starting in 2003, the populations of Zimbabwe, Botswana, and South Africa were actually predicted to decline by 0.1%-0.3% a year.11 Similar declines were likely to sweep across the continent, as other countries watched their HIV cases develop into full-blown AIDS.
In South Africa, home to the largest HIV-positive population in the world, the prognosis was particularly grim. Already staggering under the burdens of poverty and a grossly unequal distribution of wealth, South Africa had seen AIDS ravage its population in the late 1990s: 20% of the adult population was infected with HIV in the year 2000, with 500,000 new cases reported in that year alone. A total of 4.7 million suffered from the disease, which had also become the leading cause of death.12 Predictions for the future were terrifying. Fueled by a lack of information about the disease and continued high-risk behavior (a 1998 survey found that only 16% of women reported using a condom in their last encounter with a nonspousal sexual partner), health care officials predicted that HIV infection rates could hit 7.5 million by 2010, with 635,000 people dying that year from the disease.13 The vast majority of those affected would be between 15 and 29 years old, with half infected before they turned 25, and half dead before 35.14
Beyond this unspeakable death toll, experts also feared broader setbacks to South Africa, including a reduction in life expectancy, a decrease in educational opportunities for its citizens, and growing socioeconomic disparities. Various reports noted that the economic, political, and social fallout of the epidemic could be enormous, ranging from a decimated workforce (including a 40% to 50% loss of employees in certain sectors) to declining growth and an impending crisis for a healthcare system unequipped to deal with millions of sick and dying.15 Most disturbing, perhaps, was the disease's expected toll on the next generation of South Africans. Because of the nature of its transmission, AIDS tended to infect both parents of a family, and thus to leave a staggering number of orphans in its wake. By 2010 experts predicted that there could be nearly 2 million AIDS orphans in South Africa. How would the country, already pushed to its fiscal limits, provide either physical or emotional care to these swelling ranks of children? "Everybody is scared," wrote the author of one USAID report. "We're moving into uncharted territory."16
Sadly, the countries hit hardest by the HIV/AIDS epidemic were also those least able to provide their citizens with adequate treatment. In most of the developing world, HIV-positive populations did not have access to the most rudimentary health care or doctors, let alone the ultra-expensive antiviral drugs regularly prescribed in the United States. Indeed, conditions in these countries often precluded all but the most basic care. Doctors were scarce across the developing world, storage and distribution systems rare, and education about the disease often sorely limited. Under these conditions, even the limited funding that did flow to HIV/AIDS treatment was often wasted, with the World Bank estimating that for every $100 that African governments spent on drugs, only $12 worth of medicine reached the patient.17 In many countries, moreover, the social stigma that surrounded AIDS kept infected people from seeking assistance and allowed their governments to turn a blind eye to the disease. The most notorious example of this behavior occurred in South Africa, the heart of the epidemic, where President Thabo Mbeki stubbornly refused to increase his country's spending on medical treatment, arguing that poverty, poor diet, and other social ills were to blame.18
Others, however, were not so complacent, nor were they prepared to blame Africa's ills solely on the African context. Instead, raging rates of death and infection on the continent were linked, not illogically, to the things Africa lacked: drugs, in particular, and the means to afford them. And from this vantage point, the weight of blame fell heavily on the pharmaceutical industry.
The Pharmaceutical Industry and Intellectual Property Rights
Over the course of the twentieth century, Western pharmaceutical companies had transformed themselves from anonymous, low-profit chemical suppliers into high profile, top-performing firms. They earned billions of dollars, employed thousands of the world's leading scientists, and made the drugs that saved lives. Many factors had contributed to the pharmaceutical industry's success— medical advances, government support, and a capital market eager to oblige. At the center of the industry, though, and close to the heart of all industry executives sat the patent system, a legal bulwark protecting—some might even say permitting—the development of drugs.
First devised in fifteenth-century Venice, patents were intended to repay their holders for the effort and expenses incurred during a product's development: if a glassblower pioneered a new method for shaping glass, for example, or a craftsman built a better compass, the republic wanted to ensure that they could reap the commercial benefits of their work. So Venice granted its patent holders a window of market exclusivity for their product, a kind of government-sanctioned temporary monopoly. Over subsequent centuries, justification for patents varied from recognizing the "natural right" of an inventor's ownership of ideas to providing a practical way for governments to promote and reward innovation. Such motives figured prominently in the early history of the United States. Story has it that the Constitutional Convention adjourned one afternoon to watch John Fitch's steamboat undergo tests on the Delaware River. Hoping to safeguard the freedom of individuals and the enterprises they were creating, many of the founding fathers of the United States saw the implementation of a federal patent system as a sensible way of defending emerging American industry. And thus, to "encourage the progress of science and the useful arts,"19 America's legislators enshrined a system of patents in the U.S. Constitution.20
Not all Americans, however, favored laws that restricted the dissemination of ideas. Indeed, many argued that patents were an invidious form of regulation, one that favored commerce over the greater development of humankind. As Thomas Jefferson, an early proponent of freely flowing ideas, famously reasoned:
He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature.... Inventions then cannot, in nature, be a subject of property.21
Such poetic objection, though, remained rare. Several decades after Jefferson's argument, President Lincoln appeared to speak for the nation when he noted that "the Patent System added the fuel of interest to the fire of genius."22 Patents enjoyed permanent inclusion in the laws of the United States.
For the pharmaceutical industry, however, this system of protection was initially irrelevant. For patents were designed to protect research—innovation—and the pharmaceutical trade in the early United States had little to do with discovery. Instead, the drug industry was largely composed of small, low-profile firms that supplied pharmacists with the bulk chemicals they used to produce formulaic mixtures. Indeed, until the turn of the twentieth century, "patented drugs" were simply those marketed under a trademarked name—often the most dubious drugs or "elixirs" available.
Events in the 1930s and 1940s, however, brought drastic change to the pharmaceutical industry. In 1932, a German pharmacologist discovered Prontosil, a sulfanilamide drug with significant antibacterial properties. Suddenly, it appeared that scientific research could lead to direct cures; that chemistry and discovery could pave the path to drugs. Accordingly, the U.S. government poured $3 million into a massive research and production effort during World War II, involving more than 20 companies and several universities in a coordinated research effort.23 These initial funds flowed primarily to war-specific concerns and led, over time, to the development of yellow fever and typhus vaccines and the discovery of oral saline therapies to help exhausted soldiers on the battlefields. After the war, research efforts widened, leading to a widely heralded stream of pharmaceutical breakthroughs: antibiotics in the 1940s, steroids in the 1950s, and birth control pills in the 1960s. Far more efficacious than the elixirs of earlier years, these new drugs flooded the U.S. market, bringing massive improvements in medicine and (with birth control pills especially) significant social change.24 They also brought a correspondingly large improvement in the fortunes of pharmaceutical companies and a newfound interest in patents.
By the 1970s, the U.S. pharmaceutical industry was completely transformed. Companies such as Merck and Pfizer, once humble chemical suppliers, had become corporate giants, with armies of in-house scientists and a major commitment to research and development. (See Exhibit 4a for the growth of research and development expenses.) They had arsenals of proprietary drugs (such as Merck's Pepcid and Abbott Labs' erythromycin) that were distributed through complex networks and supported by sophisticated promotion campaigns. And they had a patent system, enshrined since the eighteenth century, which gave each company 20 years of exclusive control over the drugs that it developed.25 As of 1980, U.S. pharmaceutical companies were spending just under $2 billion annually on research and development.26 By 2001, the $170 billion U.S. pharmaceutical market was easily the largest in the world, a combination of high demand (Americans consumed relatively large amounts of drugs), high quality, and prices that remained stubbornly aloof from government regulation. U.S. pharmaceutical firms, together with European players such as Novartis and Bayer, were among the largest corporations in the world and easily among the most profitable: between 1994 and 1998, after-tax profits in the drug industry averaged 17% of sales, or three times higher than industrywide averages.27
Not surprisingly, representatives of the industry explained these profit levels as evidence of their products' power: pharmaceutical companies made money because they made products that worked; products that customers wanted to buy at almost any cost. Moreover, the industry argued, drug prices were high because they had to be. Each drug, after all, was the product of an increasingly expensive research process, one that was inherently fraught with uncertainty. For each drug that Pfizer, say, developed, there were roughly 5,000 compounds that Pfizer researchers had tested and abandoned in the laboratory.28 And even those compounds that did prove promising were subjected to a long and tedious process of testing and regulation. Under guidelines established by the U.S. Food and Drug Administration (FDA), no drug could be sold in the U.S. market until it underwent three stages of successful clinical testing and survived a rigorous approval procedure. Only one in 20 trial-phase drugs was eventually approved for public use, and even successful candidates often took 10 years and 50,000 pages of documentation to win FDA approval.29 The total cost of this process was reported to be between $500 million and $880 million.30 No wonder, industry representatives argued, that drug prices were high. No wonder drugs demanded years of patent protection to recoup their research costs. Or as one industry expert explained, "You have to relativize this. A company can support some research without being paid off, but not much. Especially with the pressure on shareholder value."31
To be sure, not all observers shared the pharmaceutical industry's analysis of its own economics. Indeed, criticism of the industry was rife in the 1980s and 1990s, and eventually assumed a fairly predictable course. Periodically, critics (such as the early AIDS activists) would decry the industry's high profits and the prices it charged for life-saving drugs. During the Clinton administration, there was a burst of concern about the trend of drug prices and an intense, if ill-fated, attempt to increase regulation of the industry.32 In other instances, critics complained that drug companies spent more on advertising than on research, and that Americans, stimulated perhaps by this marketing blitz, were taking drugs unnecessarily.33 One study, for example, revealed that two-thirds of the people taking allergy-fighting Claritin did not in fact have allergies.34 Moreover, the critics also claimed that pharmaceutical firms vastly overstated their drug development costs: according to Ralph Nader's consumer advocacy organization, R&D costs for an average drug totaled only between $57 million and $71 million throughout the 1990s.35
In general, however, the structure of the U.S. pharmaceutical industry did not cause Americans undue concern. Although drug prices in the United States were substantially higher than they were elsewhere—nearly triple average drug prices in Spain and Greece, for example—Americans (and their insurers) seemed willing to pay, or at least unwilling to fight.36 Or as one industry expert noted, "In reality, if a drug is going to save a life, we will find a way to afford it."37 Moreover, the skeleton that supported the U.S. drug industry—the U.S. patent system—was itself almost entirely immune from dissent. Even if Americans resisted high drug prices, it appeared, and even if they resented the companies that charged these prices, they weren't willing to topple the underlying system. For most Americans believed, like Lincoln, in the inviolability of intellectual property rights and in the advantages of combining "interest" with "genius."
Going Global: The International Battlefield for Intellectual Property Rights
Elsewhere in the world, attitudes toward intellectual property rights were quite different. Some countries had strong patent laws for pharmaceuticals, but mixed them with a regulatory structure that kept drug prices low. Others had licensing requirements or weaker laws that led to lower prices and, often, the development of a generic pharmaceutical industry. And some had no patent laws, no domestic drug industry, and relied entirely on foreign imports.
For countries that lacked a domestic pharmaceutical industry, looser patent laws made sense. In these countries, there was often a sense that the nation simply could not afford either to develop new medications on its own or to pay the high prices that patented drugs commanded elsewhere. Instead, it was economically more efficient simply to allow local firms to copy high-priced imports, reproducing their chemical composition without having to bear the costs of research and development. Such strategies also paid significant social dividends, since they led almost inevitably to cheaper and more accessible drugs. After India loosened its patent laws, for example, drug prices in that country were often more than 90% lower than those in the OECD states.38 Similarly, after local drug companies began producing a generic version of Pfizer's antifungal Fluconazole, prices dropped within months from $7.00 per dose to just 60 cents.39
Occasionally, weaker property rights actually led over time to the creation of a substantial generic industry, as occurred in India, Turkey, and Argentina. India's growth was particularly notable, since it took place only after 1970, when the country scaled down its patent laws in order to reduce drug prices. Elsewhere, countries that otherwise adhered to strict protection of intellectual property made a specific exception in the pharmaceutical sector, arguing that health concerns trumped property rights in this industry. Such was the case, for example, in Canada throughout the 1980s, where the government historically offered compulsory licenses for any drug imported into the country. After paying a government-determined royalty payment (generally 4%) to the patent holder, local Canadian firms were free to produce their own version of the drug, even if it was still on patent in its home market. Brazil had a similar system, requiring foreign patent holders to produce their drugs locally and charge locally mandated prices.
In the initial phase of post-war pharmaceutical growth, this diffusion of national systems functioned relatively well. Nations fashioned their laws to suit their own needs, and pharmaceutical firms adapted to their local circumstances: where property rights were strong and markets large (as in the United States and Great Britain), well-funded drug companies made both substantial profits and significant breakthroughs. Where rights were weak and patients poor, pharmaceutical firms were either generic (as in India or Brazil) or nonexistent.
As the major Western drug manufacturers began to go global, however, they also began a decades-long campaign to bring Western-style property rights to the developing world. Led by Pfizer and its politically active chairman, Edmund T. Pratt, Jr., the companies negotiated for these rights with individual governments, lobbied to have them embedded under the auspices of WIPO—the World Intellectual Property Organization—and formed international alliances with other knowledge-based industries, such as software and publishing.40 They ran advertising campaigns and held extensive meetings, striving to convince foreign officials of the urgency of their plight.
And in the end, the pharmaceutical companies essentially returned to Washington. In 1988, the U.S. Congress amended the Trade and Tariff Act of 1984, authorizing the U.S. Trade Representative (USTR) to take retaliatory action against any country believed to be guilty of failing to protect U.S. companies' intellectual property rights through a provision known as "Special 301." Justifying the policy by arguing that "we don't work for" consumers in patent-violating countries such as Argentina and South Korea, the USTR compiled annual lists of suspected violators and threatened unilateral sanctions unless these countries conformed with the dictates of U.S. law. In response, countries—including Argentina, Thailand, Brazil, Italy, and Korea—swiftly tightened their patent and copyright laws. At an ideological level, several of these states continued to assert the superiority of their former systems. But pressure from the United States was sufficient to compel a regulatory change of heart.
In the meantime, the pharmaceutical lobby also labored to include intellectual property rights on the World Trade Organization's agenda. Allying with other developed nations during the Uruguay Round of GATT (General Agreement on Tariffs and Trade) negotiations, U.S. officials successfully added intellectual property rights to the slate of new requirements. Under a provision known as TRIPS (Trade-Related Aspects of Intellectual Property Rights), all countries that wanted to join the newly created World Trade Organization (WTO) would henceforth be required to grant a minimum of 20 years of patent protection in most fields of medicine. Despite loopholes allowing for the compulsory licensing of drugs in some cases and the concession of a 10-year implementation period for the least-developed countries, TRIPS marked a landmark achievement in the pharmaceutical industry's quest for global patent protection. "TRIPS," claimed one industry specialist, "is the most important international agreement on intellectual property this century."41
Debate, however, continued to rage. In many parts of the world, both government officials and public opinion maintained that U.S.-style patent rights were ill-suited to their countries' circumstances—that they were unfair, unrealistic, and ultimately damaging to public health. Indeed, despite their formal adherence to TRIPS, many countries still argued that health had to take precedence over profits, and that all countries (including the United States) should be able to license patented drugs in case of medical emergency. Healthcare experts also stressed that generic production posed far less of a threat than industry representatives suggested. Indeed, in most parts of the world, people simply could not afford patented drugs and would not purchase them under any circumstances. In areas such as sub-Saharan Africa, for example, where per capita expenditure on drugs rarely topped $100 a year, activists stressed that "the purchasing power of 1.2 billion people living on US$1 a day simply does not constitute a commercial incentive to research and development."42
Yet the pharmaceutical companies remained firm. Snug behind the protection of both U.S. law and TRIPS, they maintained the intellectual purity of their position. Without patent protection, no company could afford to take the financial risks that drug development demanded. And without drugs and drug research, millions of people would suffer.
Back to Africa
At the turn of the century, the AIDS situation in Africa remained grim. Even as AIDS-related deaths in the developed world were plummeting at last, and even though nonprofit groups such as Doctors Without Borders were trying to distribute drugs to small clusters of Africans, the vast majority of Africa's AIDS patients remained without treatment. In 2001, 25 million Africans were HIV-positive, roughly half of whom were medically eligible for combination treatments such as HAART.43 Only 25,000 of these patients had received antiretroviral drugs, however—a tiny fraction by Western standards.44 In Uganda, for example, only about 1,000 patients, or 0.1% of the infected population, had received the prescribed triple therapy; in Malawi, the number was only 30.45 Without drugs, these patients were also essentially without hope. And thus the AIDS epidemic continued to ravage Africa and to wreak an escalating havoc on a continent already beset with troubles.
It wasn't long, therefore, before Africa spawned AIDS activists of its own—patients or their supporters who struggled to find some way of attacking, or at least forestalling, the disease's deadly hold. Like their Western predecessors, the activists cast a wide net at first, cajoling governments and the media to take notice of their plight. But eventually they focused on the easiest and most obvious target: the pharmaceutical companies whose drugs held the prospect of survival.
Although these groups were split to some extent by their negotiating tactics, they shared a common underlying complaint—the same complaint that had initially motivated ACT UP a decade earlier in the United States. Simply put, the activists charged that the AIDS plague in Africa was due, at least in part, to the actions of Western drug firms. It was the drug companies, they claimed, that consciously held prices beyond the grasp of most Africans; the drug companies that refused to allow lower-priced versions of their product. It was the drug companies that hid behind the veneer of Western patent laws and used international pressure to preserve their own profits. And while the companies reaped profits from high-priced AIDS drugs, millions of Africans were dying.
Central to the activists' complaint was the problem of prices. In the developed world, the typical price of HAART treatment in 1999 remained at the staggering level of $10,000-$15,000. Much of this cost, however, was assumed either by government insurance plans (in Canada and Europe) or by private insurance (in the United States). In Africa, where both public and private insurance schemes were scarce, pricing drugs at Western levels essentially meant pricing them far beyond the reach of nearly all potential patients: at $10,000, virtually no one in Africa, or indeed anywhere in the developing world, could afford HAART-style treatment. Accordingly, AIDS activists and patients in Africa accused the Western drug companies of using their pricing policies to "wage an undeclared drugs war" against the developing world.46 By refusing to lower their prices, the activists charged, pharmaceutical firms were directly responsible for millions of unnecessary deaths and for allowing AIDS to spread unchecked across the continent. "There are drugs," argued one prominent AIDS consultant, "there is unimaginable wealth in this world, and the people who have the money are refusing to help."47 Echoed another: "The poor have no consumer power, so the market has failed them. I'm tired of the logic that says, 'He who can't pay, dies.'"48
What particularly infuriated many Africa-based activists was the pharmaceutical companies' reluctance to endorse generic substitutes for their products. Typically, branded drugs from the developed world were sold in Africa and elsewhere as lower-priced copies—there were generic antibiotics, for example, and ulcer-fighting medicines rather than the branded products that prevailed in the West. Sometimes these drugs were produced by Western firms, and sometimes by generic producers based in India, Argentina, or Brazil. Nearly always, though, the generic drugs were legally generic (even by U.S. standards) since the original patents had already expired. AIDS drugs, however, were different: they were new, they were often experimental, and nearly all of them remained protected by Western patents in their home markets. In the case of these drugs, therefore, Western companies were reluctant to permit generic production—even if generic firms could arguably produce the same drug at a substantially reduced price. For the companies, this was simply accepted practice. But for the activists, it was greed that led to murder. Industry experts estimated that generic competition in the AIDS drug market could lead to an immediate 50%-90% reduction in drug prices.49
Already, a handful of countries had formally embraced generic solutions. In India, for example, local firms were legally permitted to produce patented medicines as long as they could develop a new means of producing them, and in Thailand, the government actively encouraged the production of generic AIDS drugs.50 The most dramatic policy, though, came from Brazil, where a controversial "Free Drugs for All" program provided a compelling example of low-priced, large-scale antiretroviral treatment. Launched in 1997, this government-sponsored initiative offered a variety of HAART therapies, for free, to all AIDS patients. Using generic drugs produced by its own local industry, the government spent only $4,716 per patient per year—compared with $12,000 for similar therapy in the United States. By the turn of the century, over 90,000 patients had received treatment and infection rates were plummeting in Brazil.51 AIDS-related deaths had fallen by half since 1996 and the government actually appeared to be saving money: according to official estimates, treatment under the Free Drugs program had kept 146,000 people out of the hospital and saved the country $422 million in averted hospital charges.52 Overjoyed with their success, health officials argued that Brazil demonstrated both the efficacy of generic solutions and the economic feasibility of treating even very poor patients. "The simplistic argument that treating AIDS is expensive is no longer convincing," asserted one well-placed observer, "In Brazil, the cost of the investment is economically positive."53 Even more impressive, perhaps, was that Brazil had not completely abandoned the recognition of international patents. Indeed, under pressure from the United States, Brazil only permitted the generic production of antiretroviral drugs commercialized before 1997. Half of its per-patient cost came from the importation of more recently patented treatments.
As word of Brazil's success spread, health workers in Africa urged a similar response. They argued that drugs had to be made available on the continent in large quantities and at reduced prices, meaning that the pharmaceutical companies would have to either cut their prices or agree to generic production of still-patented treatments. In South Africa, seeds of a future battle were sown on December 12, 1997, when President Nelson Mandela signed a series of key amendments to the South African Medicines Act. Initiated by the controversial South African Health Minister, Nkosazana Zuma, the amendments gave the health minister authority to break international patents and import or manufacture generic drugs.54 As part of a larger plan to improve the sorry state of healthcare in a country that paid some of the highest drug prices in the world, Zuma pledged, "I will not sacrifice the public's right to health on the altar of vested interests."55
In May of 1999, the World Health Assembly (the policymaking body of the World Health Organization) passed a resolution that declared public health concerns "paramount" in intellectual property issues related to pharmaceuticals. When U.S. Vice President Gore visited Africa in the summer of 1999, AIDS was a constant refrain; in South Africa, the vice president was actually booed in public for his pro-patent stance. Under attack from mounting public criticism, U.S. President Clinton announced in late 1999 that the United States would no longer impose sanctions on developing countries seeking cheaper AIDS treatment. But U.S. pharmaceutical companies, which obviously bore the brunt of any policy relaxation, were far from being convinced. On the contrary, they continued to stress that the problem of AIDS in Africa had little to do with either the cost or availability of cutting-edge treatments. Africans were dying, they insisted, because of Africa's own problems.
View from the Pharmaceuticals
It was not easy, of course, for the pharmaceutical industry to respond dispassionately to the criticism that erupted in the late 1990s. People in the industry genuinely believed that they were saving lives, not harming them, and they felt that the criticism was seriously misplaced. So they tried to build a careful and compelling case, one that shifted the spotlight away from their own activities.
According to the major pharmaceutical producers, the slow spread of AIDS treatment in Africa had little to do with either the price or availability of AIDS drugs. Instead, it was due simply to endemic features of the African landscape: to the lack of clean water in many areas, to poor medical infrastructure and limited information, and to stubborn political and social issues that local authorities either could not, or would not, address. Industry executives claimed, for example, that government officials in Africa were far less committed to the AIDS struggle than their public pronouncements might suggest and that corruption remained a corroding influence across the African continent. To support this contention, industry advocates pointed to a World Bank study that found that, for every $100 that certain African governments spent on drugs, only $12 worth of medicine actually reached the patient.56 Under such conditions, the industry argued, making drugs more widely available would not necessarily make Africans healthier; instead, drugs might simply be stolen and resold elsewhere in the world. As Dr. Josef Decosas, director of the Southern Africa AIDS Training Program, put it:
"Virtually all African countries have centralized drug import and distribution centers, and most of them are broken or corrupted. Even if you make these drugs available for free, the systems to deliver them are not there."57 In early 2001, an estimated 50% of drug stocks were reported stolen from South African public hospitals and clinics.58
Pharmaceutical companies also emphasized the sheer difficulty of distributing AIDS drugs across the African continent. Transportation links in the region were relatively scarce, as were basic elements of a medical infrastructure. Without clinics to support the treatments, they argued, and health-care workers to administer them, regimes like HAART simply wouldn't work. Indeed, to be successful, HAART-style therapies demanded support programs, palliative care, and preventive treatments—none of which could be easily established in most parts of Africa. Moreover, because AIDS preyed on the immune system, patients on HAART treatment were prone to opportunistic infections and needed to depend on a regime of regular tests and monitoring—a regime which, again, would be difficult to implement in the African context. Burroughs Wellcome, for example, which manufactured the AIDS-fighting compound AZT, had concluded in the late 1990s that Africa lacked the laboratory support necessary to administer its own drug safely.59 Similarly, Britain's GlaxoSmithKline had funded a study demonstrating the difficulties of adhering to an AIDS treatment regime, with the number of pills, side effects, and food restrictions all cited as problems.60 Although the study was not specific to Africa, other industry representatives used it and similar data to imply that poorer patients from other cultures might not be able to adhere to the demands of an AIDS regime. Even USAID director Andrew Natsios articulated this controversial sentiment when he justified his organization's decision not to fund antiretroviral treatment by observing that African AIDS patients "don't know what Western time is" and thus would be unable to take treatments effectively.61
Pharmaceutical executives also pointed to the difficulty of maintaining effective treatment in poverty. Taking care of AIDS, they noted, entailed more than just taking drugs. Patients also had to take care of their general health, getting proper nutrition and exercise and avoiding other diseases. Where people lived in poverty, all of these environmental conditions became considerably more difficult to sustain—to a point where even the most powerful drugs might be rendered useless. Indeed, failed treatment, as one health worker noted, "can become a vicious circle—no food, no money—so they can't take their medicine properly, so they get opportunistic diseases, so they can't work, they get depressed and that leads them further away from treatment."62 Under these conditions, scientists feared that resistant strains of the HIV virus could easily evolve, thwarting efforts at long-term control.63 Moreover, while AIDS under any circumstance was a treacherous, deadly disease, Africa was still plagued by other ills—river blindness, tuberculosis, measles, typhoid, malaria—that were relatively easy to treat. "In view of the competing demands of other health problems," therefore, many industry observers concluded that wide-scale HAART treatment in Africa was both unwise and possibly dangerous.64
Finally, although pharmaceutical companies were reluctant to engage in detailed price discussions, they were generally willing to defend their position on both patents and pricing. Essentially, the industry argued that high prices and long-term patents were a critical element of their business, regardless of where this business occurred. If they lowered prices for AIDS drugs, decreased revenues in this area would drive innovation to other, more lucrative diseases. Pharmaceutical companies would stop investing in AIDS research, and AIDS patients, over the long run, would suffer. Moreover, once the sanctity of patents was undermined in Africa or for AIDS, what would stop this loosening trend from continuing elsewhere, in European markets, for example, or for equally potent (and profitable) cancer-fighting drugs? The Western patent system, industry representatives urged, had facilitated the emergence of research-driven companies and life-saving scientific breakthroughs. If patents were undermined in Africa, then they would swiftly be undermined elsewhere in the global economy. And without patents, research across the global economy would suffer. Ultimately, the problem of AIDS in Africa, argued the pharmaceutical industry, was a problem for governments and society. Tackling AIDS meant tackling education. It meant talking about subjects (sexual behavior, gender norms) that were still taboo in many places. And it meant spending money—public money—in places where funds were scarce. None of these tasks were the responsibility of the world's pharmaceutical firms. Instead, as a senior drug lobbyist had reasoned earlier in the decade:
The predominant contribution that the research-based pharmaceutical industry could reasonably be expected to make is in their predominant area of competence—research and development. The broader responsibility for ensuring that such products are delivered to those they could benefit should be borne by society, particularly government.65
Africa's AIDS activists, however, were not content to leave their burden with either government or society. Instead, pointing to success stories such as Brazil and Thailand, they insisted that large-scale drug treatment was eminently feasible in Africa. "The limiting factor for us," stated the nonprofit Doctors Without Borders, "is price."66 Similarly, leaders from various health NGOs argued that while conditions in poor countries precluded treatment for some, most major cities in the developing world would be fully equipped to handle AIDS patients if they had access to affordable tests and drugs.67
In late 2000, Joep Lange, a negotiator at nonprofit International Antiviral Therapy Evaluation, articulated his frustration at industry representatives' reaction to plans that would have expanded treatment by extending business discounts for drugs distributed to employees:
They laughed at us. The [pharmaceutical] companies are not interested. They don't want to treat a million people tomorrow. They say, "We want to do it responsibly," but there's a lot of window dressing there. They don't know what could be the repercussions: Their whole price structure could collapse. They are scared to death.68
