Book
Basic & Clinical Pharmacology, 16th Edition

by Bertram G. Katzung, Todd W. Vanderah

The most comprehensive and authoritative pharmacology text—updated with new content and USMLE-style questions

Katzung's Basic & Clinical Pharmacology has been helping students master key pharmacological concepts and practices for decades. Continuing the tradition, this updated sixteenth edition facilitates learning with engagingly written text, a full-color presentation, hundreds of illustrations, and important new content.

The text is organized to reflect the course sequence in many pharmacology courses and in integrated curricula. Each chapter opens with a case study, covers drug groups and prototypes, and closes with summary tables and diagrams that encapsulate important information. This edition has been updated to reflect the latest research and practices.

Delivers the Knowledge and Insight Needed to Excel in Every Facet of Pharmacology!

• Encompasses all aspects of medical pharmacology, including botanicals and over-the-counter drugs

• Addresses the clinical choice and use of drugs in patients and the monitoring of their effects

• Case studies introduce clinical problems and issues

• Trade Name/Generic Name tables are provided at the end of each chapter for easy reference when writing a chart order or prescription

• New: Chapter on cannabinoids

• New: 50 USMLE-style questions

• New: Drug tables and descriptions of important new drugs

• Full-color illustrations help clarify important concepts, and provide more information about drug mechanisms and effects

Book Chapter
9. Adrenoceptor Agonists & Sympathomimetic Drugs

The effects of catecholamines are mediated by cell-surface receptors. Adrenoceptors are typical G protein-coupled receptors (GPCRs; see Chapter 2). The receptor protein has an extracellular N-terminus, traverses the membrane seven times (transmembrane domains) forming three extracellular and three intracellular loops, and has an intracellular C-terminus (Figure 9–1). They are coupled to G proteins that regulate various effector proteins. Each G protein is a heterotrimer consisting of α, β, and γ subunits. G proteins are classified on the basis of their distinctive α subunits. G proteins of particular importance for adrenoceptor function include Gs, the stimulatory G protein of adenylyl cyclase; Gi and Go, the inhibitory G proteins of adenylyl cyclase; and Gq and G11, the G proteins coupling α receptors to phospholipase C. The activation of G protein-coupled receptors by catecholamines promotes the dissociation of guanosine diphosphate (GDP) from the α subunit of the corresponding G protein. Guanosine triphosphate (GTP) then binds to this G protein, and the α subunit dissociates from the βγ subunit. The activated GTP-bound α subunit then regulates the activity of its effector. Effectors of adrenoceptor-activated α subunits include adenylyl cyclase, phospholipase C, and ion channels. The α subunit is inactivated by hydrolysis of the bound GTP to GDP and phosphate, and the subsequent reassociation of the α subunit with the βγ subunit. The βγ subunits have additional independent effects, acting on a variety of effectors such as ion channels and enzymes.
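The GDP/GTP activation cycle described above can be sketched as a minimal state machine. This is a didactic Python sketch, not a biochemical model; the class and attribute names are invented for illustration.

```python
# Didactic sketch of the heterotrimeric G protein cycle described above.
# States and transitions follow the text; all names are illustrative only.

class GProtein:
    def __init__(self):
        self.alpha_nucleotide = "GDP"   # resting: alpha subunit binds GDP
        self.trimer_assembled = True    # alpha associated with beta-gamma

    def receptor_activation(self):
        """Agonist-bound GPCR promotes GDP release; GTP binds and the trimer splits."""
        self.alpha_nucleotide = "GTP"
        self.trimer_assembled = False

    def effector_active(self):
        """Only the GTP-bound, dissociated alpha subunit regulates its effector."""
        return self.alpha_nucleotide == "GTP" and not self.trimer_assembled

    def hydrolyze_gtp(self):
        """Intrinsic GTPase activity: GTP -> GDP; alpha rejoins beta-gamma."""
        self.alpha_nucleotide = "GDP"
        self.trimer_assembled = True

g = GProtein()
assert not g.effector_active()   # resting state: no effector signaling
g.receptor_activation()
assert g.effector_active()       # signaling state after agonist binding
g.hydrolyze_gtp()
assert not g.effector_active()   # hydrolysis and reassociation end the signal
```

The cycle is self-terminating: as the text notes, inactivation requires only GTP hydrolysis and reassociation of the subunits, with no further receptor input.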

Figure 9–1 Activation of α1 responses. Stimulation of α1 receptors by catecholamines leads to the activation of a Gq-coupling protein. The activated α subunit (αq*) of this G protein activates the effector, phospholipase C, which leads to the release of IP3 (inositol 1,4,5-trisphosphate) and DAG (diacylglycerol) from phosphatidylinositol 4,5-bisphosphate (PtdIns 4,5P2). IP3 stimulates the release of sequestered stores of calcium, leading to an increased concentration of cytoplasmic Ca2+. Ca2+ may then activate Ca2+-dependent protein kinases, which in turn phosphorylate their substrates. DAG activates protein kinase C (PKC). GDP, guanosine diphosphate; GTP, guanosine triphosphate. See text for additional effects of α1-receptor activation.

Adrenoceptors were originally characterized pharmacologically by their relative affinities for agonists; α receptors have the comparative potencies epinephrine ≥ norepinephrine >> isoproterenol, and β receptors have the comparative potencies isoproterenol > epinephrine ≥ norepinephrine. Molecular cloning further identified distinct subtypes of these receptors (Table 9–1).

Table 9–1 Adrenoceptor types and subtypes.

| Receptor | Agonist | Antagonist | G Protein | Effects | Gene on Chromosome |
|---|---|---|---|---|---|
| α1 type | Phenylephrine | Prazosin | Gq | ↑ IP3, DAG common to all | |
| α1A | | Tamsulosin | | | C8 |
| α1B | | | | | C5 |
| α1D¹ | | | | | C20 |
| α2 type | Clonidine | Yohimbine | Gi | ↓ cAMP common to all | |
| α2A | Oxymetazoline | | | | C10 |
| α2B | | Prazosin | | | C2 |
| α2C | | Prazosin | | | C4 |
| β type | Isoproterenol | Propranolol | Gs | ↑ cAMP common to all | |
| β1 | Dobutamine | Betaxolol | | | C10 |
| β2 | Albuterol | Butoxamine | | | C5 |
| β3 | Mirabegron | | | | C8 |
| Dopamine type | Dopamine | | | | |
| D1 | Fenoldopam | | Gs | ↑ cAMP | C5 |
| D2 | Bromocriptine | | Gi | ↓ cAMP | C11 |
| D3 | | | Gi | ↓ cAMP | C3 |
| D4 | | Clozapine | Gi | ↓ cAMP | C11 |
| D5 | | | Gs | ↑ cAMP | C4 |

¹Initially an "α1C" receptor was described but was later recognized to be identical to the α1A receptor. To avoid confusion, the nomenclature now omits α1C.

Nomenclature: The adrenoceptors (and the genes that encode them) are also known by the abbreviation ADR, followed by the type (ADRA, ADRB) and subtype (ADRA1A, ADRA1B, etc). The corresponding nomenclature for the dopamine receptors is DRD1, DRD2, etc.

Likewise, the endogenous catecholamine dopamine produces a variety of biologic effects that are mediated by interactions with specific dopamine receptors (see Table 9–1). These receptors are particularly important in the brain (see Chapters 21, 28, and 29) and in the splanchnic and renal vasculature. Molecular cloning has identified several distinct genes encoding five receptor subtypes: two D1-like receptors that activate adenylyl cyclase (D1 and D5), and three D2-like receptors that inhibit adenylyl cyclase (D2, D3, and D4). Further complexity exists because alternative splicing produces D2 receptor isoforms, and D2 receptors might also form oligo- and heterodimers. These subtypes may have importance for understanding the efficacy and adverse effects of novel antipsychotic drugs (see Chapter 29).

Receptor Types

A. Alpha Receptors

Alpha1 receptors are coupled via G proteins of the Gq family to phospholipase C. This enzyme hydrolyzes polyphosphoinositides, leading to the formation of inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG) (see Table 9–1 and Figure 9–1). IP3 promotes the release of sequestered Ca2+ from intracellular stores, which increases cytoplasmic free Ca2+ concentrations that activate various calcium-dependent protein kinases and other calmodulin-regulated proteins. Activation of these receptors may also increase influx of calcium across the cell’s plasma membrane. IP3 is sequentially dephosphorylated, which ultimately leads to the formation of free inositol. DAG cooperates with Ca2+ in activating protein kinase C (PKC), which modulates activity of many signaling pathways. In addition, α1 receptors activate signal transduction pathways that stimulate tyrosine kinases such as mitogen-activated protein kinases (MAP kinases) and polyphosphoinositol-3-kinase (PI-3-kinase).

Alpha2 receptors are coupled to the inhibitory regulatory protein Gi (Figure 9–2) that reduces adenylyl cyclase activity and lowers intracellular levels of cyclic adenosine monophosphate (cAMP). It is likely that not only the α subunit of Gi, but also its βγ subunits, contribute to inhibition of adenylyl cyclase. It is also likely that α2 receptors are coupled to other signaling pathways that regulate ion channels and enzymes involved in signal transduction.

Figure 9–2 Activation and inhibition of adenylyl cyclase by agonists that bind to catecholamine receptors. Binding to β adrenoceptors stimulates adenylyl cyclase by activating the stimulatory G protein, Gs, which leads to the dissociation of its α subunit charged with GTP. This activated αs subunit directly activates adenylyl cyclase, resulting in an increased rate of synthesis of cAMP. Alpha2-adrenoceptor ligands inhibit adenylyl cyclase by causing dissociation of the inhibitory G protein, Gi, into its subunits—ie, an activated αi subunit charged with GTP and a β-γ unit. The mechanism by which these subunits inhibit adenylyl cyclase is uncertain. cAMP binds to the regulatory subunit (R) of cAMP-dependent protein kinase, leading to the liberation of active catalytic subunits (C) that phosphorylate specific protein substrates and modify their activity. These catalytic units also phosphorylate the cAMP response element binding protein (CREB), which modifies gene expression. See text for other actions of β and α2 adrenoceptors.

B. Beta Receptors

All three receptor subtypes (β1, β2, and β3) are coupled to the stimulatory regulatory protein Gs, which activates adenylyl cyclase to increase intracellular levels of cAMP (see Table 9–1 and Figure 9–2). Cyclic AMP is the second messenger that mediates most of the actions of β receptors. In the liver, cAMP mediates a cascade of events culminating in the activation of glycogen phosphorylase; in the heart, it increases the influx of calcium across the cell membrane; in smooth muscle, it promotes relaxation through phosphorylation of myosin light-chain kinase to an inactive form (see Figure 12–1). Some actions of β adrenoceptors may be mediated through different intracellular signaling pathways: via exchange proteins activated by cAMP rather than conventional protein kinase A (PKA), via coupling to Gs but independent of cAMP, or via coupling to Gq proteins and activation of MAP kinases.

The β3 adrenoceptor is a lower-affinity receptor compared with β1 and β2 receptors but is more resistant to desensitization. It is found in several tissues, but its physiologic or pathologic role in humans is not clear. Beta3 receptors are expressed in the detrusor muscle of the bladder and induce its relaxation, and the selective β3 agonist mirabegron is used clinically for the treatment of symptoms of overactive bladder (urinary urgency and frequency).

C. Dopamine Receptors

The D1 receptor is typically associated with the stimulation of adenylyl cyclase (see Table 9–1); for example, D1 receptor–induced smooth muscle relaxation is presumably due to cAMP accumulation in the smooth muscle of those vascular beds in which dopamine is a vasodilator. D2 receptors have been found to inhibit adenylyl cyclase activity, open potassium channels, and decrease calcium influx.

Adrenoceptor Polymorphisms

Since elucidation of the sequences of the genes encoding adrenoceptors, it has become clear that there are relatively common genetic polymorphisms (variations in the gene sequence) for many of these receptor subtypes in humans. Distinct polymorphisms may be inherited together, in combinations termed haplotypes. Genetic polymorphisms can result in changes in critical amino acids that may alter the function of the receptor in ways that are clinically relevant. Some polymorphisms have been shown to alter susceptibility to diseases such as heart failure, modify the propensity of a receptor to desensitize, or modulate therapeutic responses to drugs in diseases such as asthma. In many other cases, studies have reported inconsistent results as to the pathophysiologic importance of polymorphisms.

Receptor Regulation

Responses mediated by adrenoceptors are not fixed and static. The magnitude of the response depends on the number and function of adrenoceptors on the cell surface and on the regulation of these receptors by catecholamines themselves, other hormones and drugs, age, and a number of disease states (see Chapter 2). These changes may modify the magnitude of a tissue’s physiologic response to catecholamines and can be important clinically during the course of treatment. One of the best-studied examples of receptor regulation is the desensitization of adrenoceptors that may occur after exposure to catecholamines and other sympathomimetic drugs. After a cell or tissue has been exposed for a period of time to an agonist, that tissue often becomes less responsive to further stimulation by that agent (see Figure 2–12). Other terms such as tolerance, refractoriness, and tachyphylaxis also have been used to describe desensitization. This process has potential clinical significance because it may limit the therapeutic response to sympathomimetic agents.

Many mechanisms have been found to contribute to desensitization. Some mechanisms occur relatively slowly, over the course of hours or days, and these typically involve transcriptional or translational changes in the receptor protein level, or its transport to the cell surface. Other mechanisms of desensitization occur quickly, within minutes. Rapid modulation of receptor function in desensitized cells may involve critical covalent modification of the receptor, association of these receptors with other proteins, or changes in their subcellular location.
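The loss of responsiveness during sustained agonist exposure is often approximated as a first-order process. The sketch below illustrates that idea with an invented rate constant chosen to match the minutes-scale "rapid" mechanisms mentioned above; real kinetics vary by receptor and mechanism.

```python
import math

# Hypothetical first-order desensitization during sustained agonist exposure:
# response(t) = R0 * exp(-k * t).  The rate constant is invented for
# illustration; it is not a measured value.

def response_remaining(t_min, k_per_min=0.2):
    """Fraction of the initial response left after t_min minutes of exposure."""
    return math.exp(-k_per_min * t_min)

# With k = 0.2/min, responsiveness falls within minutes, matching the
# "rapid" mechanisms described in the text:
print(round(response_remaining(5), 2))    # ~0.37 of the response after 5 min
print(round(response_remaining(15), 2))   # ~0.05 after 15 min
```

Slower mechanisms (transcriptional or translational changes over hours to days) would correspond to a much smaller rate constant in this picture.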

There are two major categories of desensitization of responses mediated by G protein-coupled receptors. Homologous desensitization refers to loss of responsiveness exclusively of the receptors that have been exposed to repeated or sustained activation by an agonist. Heterologous desensitization refers to the process by which desensitization of one receptor by its agonists also results in desensitization of another receptor that has not been directly activated by the agonist in question.

A major mechanism of desensitization that occurs rapidly involves phosphorylation of receptors by members of the G protein-coupled receptor kinase (GRK) family, of which there are seven in most mammals, with four subtypes (GRK2, GRK3, GRK5, and GRK6) being ubiquitously expressed. Specific adrenoceptors become substrates for these kinases only when they are bound to an agonist. This mechanism is an example of homologous desensitization because it specifically affects only agonist-occupied receptors.

Phosphorylation of these receptors enhances their affinity for arrestins, a family of four proteins, of which the two nonvisual arrestin subtypes are widely expressed. Upon binding of arrestin, the capacity of the receptor to activate G proteins is blunted, likely as a result of steric hindrance, as suggested by the crystal structures of GPCR complexes with G proteins and arrestins (see Figure 2–12). Arrestin then interacts with clathrin and clathrin adaptor AP2, leading to endocytosis of the receptor. In addition to their role in the desensitization process, arrestins can trigger G protein-independent signaling pathways.

Receptor desensitization may also be mediated by second-messenger feedback. For example, β adrenoceptors stimulate cAMP accumulation, which leads to activation of PKA; PKA can phosphorylate residues on β receptors, resulting in inhibition of receptor function. For the β2 receptor, PKA phosphorylation occurs on serine residues in the third cytoplasmic loop of the receptor. Similarly, activation of PKC by Gq-coupled receptors may lead to phosphorylation of this class of G protein-coupled receptors. PKA phosphorylation of the β2 receptor also switches its G protein preference from Gs to Gi, further reducing cAMP response. This second-messenger feedback mechanism has been termed heterologous desensitization because activated PKA or PKC may phosphorylate any structurally similar receptor with the appropriate consensus sites for phosphorylation by these enzymes—e.g., the elevation of cAMP by activation of other receptors can trigger PKA phosphorylation of β receptors.

Receptor Selectivity

Selectivity means that a drug preferentially activates one subgroup of receptors at concentrations that have little or no effect on another subgroup. However, selectivity is not usually absolute (nearly absolute selectivity has been termed specificity), and at higher concentrations a drug may also interact with related classes of receptors. The clinical effects of a given drug depend not only on its selectivity for adrenoceptor types but also on the relative expression of receptor subtypes in a given tissue. Examples of clinically useful sympathomimetic agonists that are relatively selective for α1-, α2-, and β-adrenoceptor subgroups are compared with some nonselective agents in Table 9–2.
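The concentration dependence of selectivity can be made concrete with simple receptor occupancy, C/(C + Kd). The Kd values below are invented purely to show the principle: a drug distinguishes subtypes at low concentration but loses that distinction as concentration rises.

```python
# Receptor selectivity illustrated with simple occupancy (C / (C + Kd)).
# The Kd values are hypothetical, chosen to represent a 100-fold selective
# drug; they are not measured affinities for any real agonist.

def occupancy(conc, kd):
    """Fractional receptor occupancy at concentration conc (same units as kd)."""
    return conc / (conc + kd)

kd_target, kd_other = 1.0, 100.0   # hypothetical 100-fold preference

for conc in (1.0, 1000.0):
    print(conc,
          round(occupancy(conc, kd_target), 2),
          round(occupancy(conc, kd_other), 2))
# At conc = 1:    50% of the target subtype, ~1% of the other -> selective
# At conc = 1000: both subtypes are nearly fully occupied -> selectivity lost
```

This is why, as the text notes, selectivity is not absolute: any finite affinity ratio is overwhelmed at sufficiently high drug concentrations.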

Table 9–2 Relative receptor affinities.

| | Relative Receptor Affinities |
|---|---|
| Alpha agonists | |
| Phenylephrine, methoxamine | α1 > α2 >>>>> β |
| Clonidine, methylnorepinephrine | α2 > α1 >>>>> β |
| Mixed alpha and beta agonists | |
| Norepinephrine | α1 = α2; β1 >> β2 |
| Epinephrine | α1 = α2; β1 = β2 |
| Beta agonists | |
| Dobutamine¹ | β1 > β2 >>>> α |
| Isoproterenol | β1 = β2 >>>> α |
| Albuterol, terbutaline, metaproterenol, ritodrine | β2 >> β1 >>>> α |
| Dopamine agonists | |
| Dopamine | D1 = D2 >> β >> α |
| Fenoldopam | D1 >> D2 |

¹See text.

Even though each receptor subtype is coupled to a G protein that mediates most of its intracellular signaling (see Table 9–1), in many cases a receptor can be coupled to other G proteins, or signal through both G protein-dependent and G protein-independent pathways. This observation has prompted the concept of developing biased agonists that selectively activate one of the signaling pathways (see Box: Therapeutic Potential of Biased Agonists at Beta Receptors). There is also interest in discovering allosteric modulators of receptor function, i.e., ligands that bind to the receptor at a site different from the agonist binding site and modulate the response to the agonist.

Therapeutic Potential of Biased Agonists at Beta Receptors

Traditional β agonists like epinephrine activate cardiac β1 receptors, increasing heart rate and cardiac workload through coupling with G proteins. This can be deleterious in situations such as myocardial infarction. Beta1 receptors are also coupled through G protein-independent signaling pathways involving β-arrestin, which are thought to be cardioprotective. A “biased” agonist could potentially activate only the cardioprotective, β-arrestin–mediated signaling (and not the G protein–mediated signals that lead to greater cardiac workload). Such a biased agonist would be of great therapeutic potential in situations such as myocardial infarction or heart failure. In asthma, there is interest in developing biased agonists that are effective bronchial muscle relaxants but are not subject to desensitization. Biased agonists potent enough to reach these therapeutic goals have not yet been developed.

The Norepinephrine Transporter

When norepinephrine is released into the synaptic cleft, it binds to postsynaptic adrenoceptors to elicit the expected physiologic effect. However, just as the release of neurotransmitters is a tightly regulated process, the mechanisms for removal of neurotransmitter must also be highly effective. The norepinephrine transporter (NET) is the principal route by which this occurs. It is particularly efficient in the synapses of the heart, where up to 90% of released norepinephrine is removed by the NET. Remaining synaptic norepinephrine may escape into the extrasynaptic space and enter the bloodstream or be taken up into extraneuronal cells and metabolized by catechol-O-methyltransferase. In other sites such as the vasculature, where synaptic structures are less well developed, removal by NET may still be 60% or more. The NET, often situated on the presynaptic neuronal membrane, pumps the synaptic norepinephrine back into the neuron cell cytoplasm. In the cell, this norepinephrine may reenter the vesicles or undergo metabolism through monoamine oxidase to dihydroxyphenylglycol (DHPG). Elsewhere in the body similar transporters remove dopamine (dopamine transporter, DAT), serotonin (serotonin transporter, SERT), and other neurotransmitters. Surprisingly, the NET has as high an affinity for dopamine as for norepinephrine, and it can sometimes clear dopamine in brain areas where DAT expression is low, such as the cortex.

Blockade of the NET, e.g., by the nonselective psychostimulant cocaine or the NET-selective agents atomoxetine or reboxetine, impairs this primary site of norepinephrine removal and thus synaptic norepinephrine levels rise, leading to greater stimulation of α and β adrenoceptors. In the periphery this effect may produce a clinical picture of sympathetic activation, but it is often counterbalanced by concomitant stimulation of α2 adrenoceptors in the brain stem that reduces sympathetic activation.

The function of the norepinephrine and dopamine transporters is complex, and drugs can interact with the NET to actually reverse the direction of transport and induce the release of intraneuronal neurotransmitter. This is illustrated in Figure 9–3. Under normal circumstances (panel A), presynaptic NET (red) inactivates and recycles norepinephrine (NE, red) released by vesicular fusion. In panel B, amphetamine (black) acts as both an NET substrate and a reuptake blocker, eliciting reverse transport and blocking normal uptake, thereby increasing NE levels in and beyond the synaptic cleft. In panel C, agents such as methylphenidate and cocaine (hexagons) block NET-mediated NE reuptake and enhance NE signaling.

Figure 9–3 Pharmacologic targeting of monoamine transporters. Commonly used drugs such as antidepressants, amphetamines, and cocaine target monoamine (norepinephrine, dopamine, and serotonin) transporters with different potencies. A shows the mechanism of reuptake of norepinephrine (NE) back into the noradrenergic neuron via the norepinephrine transporter (NET), where a proportion is sequestered in presynaptic vesicles through the vesicular monoamine transporter (VMAT). B and C show the effects of amphetamine and cocaine on these pathways. See text for details.

Book Chapter
10. Adrenoceptor Antagonist Drugs

Mechanism of Action

Alpha-receptor antagonists may be reversible or irreversible in their interaction with these receptors. Reversible antagonists dissociate from receptors, and the block can be overcome with sufficiently high concentrations of agonists; irreversible drugs do not dissociate and cannot be surmounted. Phentolamine and prazosin (Figure 10–1) are examples of reversible antagonists. Phenoxybenzamine forms a reactive ethyleneimonium intermediate (see Figure 10–1) that covalently binds to α receptors, resulting in irreversible blockade. Figure 10–2 illustrates the effects of a reversible drug in comparison with those of an irreversible agent.
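The contrast between surmountable and insurmountable blockade can be expressed with a simple occupancy-response model (hyperbolic, with no receptor reserve). All constants below are illustrative, not measured values for any of the drugs named above.

```python
# Simplified dose-response sketch of the two antagonist types described in the
# text.  Hyperbolic occupancy-response model without receptor reserve; Ka, Kb,
# Emax, and the concentrations are illustrative only.

def effect_competitive(agonist, ka=1.0, antagonist=0.0, kb=1.0, emax=100.0):
    """Reversible competitive block: curve shifts right, maximum is preserved."""
    return emax * agonist / (agonist + ka * (1 + antagonist / kb))

def effect_irreversible(agonist, ka=1.0, fraction_blocked=0.0, emax=100.0):
    """Irreversible block: the receptor pool shrinks, so the maximum falls."""
    return emax * (1 - fraction_blocked) * agonist / (agonist + ka)

high_dose = 1e6  # effectively saturating agonist concentration

# A reversible antagonist is surmounted by enough agonist...
print(round(effect_competitive(high_dose, antagonist=10.0)))       # -> ~100
# ...but covalent loss of half the receptors caps the response at half.
print(round(effect_irreversible(high_dose, fraction_blocked=0.5))) # -> ~50
```

With receptor reserve (spare receptors) an irreversible antagonist can initially mimic a competitive one, shifting the curve rightward before depressing the maximum; this simplified model omits that refinement.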

Figure 10–1 Structure of several α-receptor–blocking drugs.
Figure 10–2 Dose-response curves to norepinephrine in the presence of two different α-adrenoceptor–blocking drugs. The tension produced in isolated strips of cat spleen, a tissue rich in α receptors, was measured in response to graded doses of norepinephrine. Left: Tolazoline, a reversible blocker, shifted the curve to the right without decreasing the maximum response when present at concentrations of 10 and 20 μmol/L. Right: Dibenamine, an analog of phenoxybenzamine and irreversible in its action, reduced the maximum response attainable at both concentrations tested. (Reproduced with permission from Bickerton RK: The response of isolated strips of cat spleen to sympathomimetic drugs and their antagonists. J Pharmacol Exp Ther 1963;142:99-110.)

As discussed in Chapters 1 and 2, the duration of action of a reversible antagonist is largely dependent on the half-life of the drug in the body and the rate at which it dissociates from its receptor: The shorter the half-life, the less time it takes for the effects of the drug to dissipate. In contrast, the effects of an irreversible antagonist may persist long after the drug has been cleared from the plasma. In the case of phenoxybenzamine, the restoration of tissue responsiveness after extensive α-receptor blockade is dependent on synthesis of new receptors, which may take several days. The rate of return of α1-adrenoceptor responsiveness may be particularly important in patients who have a sudden cardiovascular event or who become candidates for urgent surgery.

Pharmacologic Effects

A. Cardiovascular Effects

Because arteriolar and venous tone are determined to a large extent by α receptors on vascular smooth muscle, α-receptor antagonist drugs lower peripheral vascular resistance and blood pressure (Figure 10–3). These drugs can prevent the pressor effects of usual doses of α agonists; indeed, in the case of agonists with both α and β2 effects (eg, epinephrine), selective α-receptor antagonism may convert a pressor to a depressor response (see Figure 10–3). This epinephrine reversal illustrates how activation of both α and β receptors in the vasculature may lead to opposite responses, and how blockade of α adrenoceptors unmasks the effects of epinephrine on β receptors. Alpha-receptor antagonists (Table 10–1) block sympathetically mediated vasoconstriction, causing a decrease in blood pressure. The fall in blood pressure is greater in situations in which blood pressure depends on increased sympathetic activity, for example, on standing, when blood pressure is maintained by sympathetic activation that compensates for gravitational pooling of blood in the lower body. Thus, α-adrenoceptor antagonists can cause orthostatic hypotension by blocking sympathetically mediated peripheral arterial vasoconstriction and splanchnic capacitance venoconstriction. Because β receptors are left unopposed, this is associated with a compensatory baroreflex-mediated tachycardia.

Figure 10–3 Top: Effects of phentolamine, an α-receptor–blocking drug, on blood pressure in an anesthetized dog. Epinephrine reversal is demonstrated by tracings showing the response to epinephrine before (middle) and after (bottom) phentolamine. All drugs given intravenously. BP, blood pressure; HR, heart rate.
Table 10–1 Relative selectivity of antagonists for adrenoceptors.

| Drugs | Receptor Affinity |
|---|---|
| Alpha antagonists | |
| Prazosin, terazosin, doxazosin | α1 >>>> α2 |
| Phenoxybenzamine | α1 > α2 |
| Phentolamine | α1 = α2 |
| Yohimbine, tolazoline | α2 >> α1 |
| Mixed antagonists | |
| Labetalol, carvedilol | β1 = β2 ≥ α1 > α2 |
| Beta antagonists | |
| Metoprolol, acebutolol, alprenolol, atenolol, betaxolol, celiprolol, esmolol, nebivolol | β1 >>> β2 |
| Propranolol, carteolol, nadolol, penbutolol, pindolol, timolol | β1 = β2 |
| Butoxamine | β2 >>> β1 |

B. Other Effects

Blockade of α receptors in noncardiac tissues elicits miosis (small pupils) and nasal stuffiness. Alpha1 receptors are expressed in the base of the bladder and the prostate, and their blockade decreases resistance to the flow of urine. Alpha blockers, therefore, are used therapeutically for the treatment of urinary retention due to prostatic hyperplasia (see below).

Book Chapter
39. Adrenocorticosteroids & Adrenocortical Antagonists

The adrenal cortex releases a large number of steroids into the circulation. Some have minimal biologic activity and function primarily as precursors, and there are some for which no function has been established. The hormonal steroids may be classified as those having important effects on intermediary metabolism and immune function (glucocorticoids), those having principally salt-retaining activity (mineralocorticoids), and those having androgenic or estrogenic activity (see Chapter 40). In humans, the major glucocorticoid is cortisol and the most important mineralocorticoid is aldosterone. Quantitatively, dehydroepiandrosterone (DHEA) in its sulfated form (DHEAS) is the major adrenal androgen. However, DHEA and two other adrenal androgens, androstenedione and androstenediol, are weak androgens, and androstenediol is a potent estrogen. Androstenedione can be converted to testosterone and estradiol in extra-adrenal tissues (Figure 39–1). Adrenal androgens constitute the major endogenous precursors of estrogen in women after menopause and in younger patients in whom ovarian function is deficient or absent.

Figure 39–1 Outline of major pathways in adrenocortical hormone biosynthesis. The major secretory products are underlined. Pregnenolone is the major precursor of corticosterone and aldosterone, and 17-hydroxypregnenolone is the major precursor of cortisol. The enzymes and cofactors for the reactions progressing down each column are shown on the left and across columns at the top of the figure. When a particular enzyme is deficient, hormone production is blocked at the points indicated by the shaded bars. (Reproduced with permission from Ganong WF: Review of Medical Physiology, 22nd ed. New York, NY: McGraw Hill; 2005.)

Cortisol (also called hydrocortisone, compound F) exerts a wide range of physiologic effects, including regulation of intermediary metabolism, cardiovascular function, growth, and immunity. Its synthesis and secretion are tightly regulated by the central nervous system, which is very sensitive to negative feedback by the circulating cortisol and exogenous (synthetic) glucocorticoids. Cortisol is synthesized from cholesterol (as shown in Figure 39–1). The mechanisms controlling its secretion are discussed in Chapter 37.

The rate of secretion follows a circadian rhythm (Figure 39–2) governed by pulses of ACTH that peak in the early morning hours and after meals. In plasma, cortisol is bound to circulating proteins. Corticosteroid-binding globulin (CBG), an α2 globulin synthesized by the liver, binds about 90% of the circulating hormone under normal circumstances. The remainder is free (about 5–10%) or loosely bound to albumin (about 5%) and is available to exert its effect on target cells. When plasma cortisol levels exceed 20–30 mcg/dL, CBG is saturated, and the concentration of free cortisol rises rapidly. CBG is increased in pregnancy, with estrogen administration, and in hyperthyroidism. It is decreased by hypothyroidism, genetic defects in synthesis, and protein deficiency states. Albumin has a large capacity but low affinity for cortisol, and for practical purposes albumin-bound cortisol should be considered free. Synthetic corticosteroids such as dexamethasone are largely bound to albumin rather than CBG.
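The saturable nature of CBG binding explains why free cortisol rises sharply above the 20–30 mcg/dL range. A crude piecewise sketch (which deliberately ignores albumin binding and uses a capacity of 25 mcg/dL, the midpoint of the range quoted above) captures the idea:

```python
# Sketch of saturable CBG binding as described in the text.  Below CBG
# capacity ~90% of cortisol is bound; above it, additional cortisol adds
# directly to the free pool.  The piecewise model and the 25 mcg/dL capacity
# are simplifications for illustration (albumin binding is ignored).

def free_cortisol(total_mcg_dl, cbg_capacity=25.0, bound_fraction=0.90):
    """Approximate free cortisol (mcg/dL) for a given total plasma cortisol."""
    if total_mcg_dl <= cbg_capacity:
        return total_mcg_dl * (1 - bound_fraction)
    # CBG saturated: everything above capacity stays unbound
    return cbg_capacity * (1 - bound_fraction) + (total_mcg_dl - cbg_capacity)

print(round(free_cortisol(10), 2))   # modest free fraction at normal levels
print(round(free_cortisol(40), 2))   # free cortisol rises steeply once CBG saturates
```

A 4-fold rise in total cortisol (10 to 40 mcg/dL) thus produces a far larger proportional rise in the free, biologically available fraction.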

Figure 39–2 Circadian variation in plasma cortisol throughout the 24-hour day (upper panel). The sensitivity of tissues to glucocorticoids is also circadian but inverse to that of cortisol, with low sensitivity in the late morning and high sensitivity in the evening and early night (lower panel). Tissue sensitivity is inversely related to acetylation of the glucocorticoid receptor (GR) by the transcription factor CLOCK; the acetylated receptor has decreased transcriptional activity.

The half-life of cortisol in the circulation is normally about 60–90 minutes; it may be increased when hydrocortisone (the pharmaceutical preparation of cortisol) is administered in large amounts or when stress, hypothyroidism, or liver disease is present. Only 1% of cortisol is excreted unchanged in the urine as free cortisol; about 20% of cortisol is converted to cortisone by 11-hydroxysteroid dehydrogenase in the kidney and other tissues with mineralocorticoid receptors (see below) before reaching the liver. Most cortisol is metabolized in the liver. About one-third of the cortisol produced daily is excreted in the urine as dihydroxy ketone metabolites and is measured as 17-hydroxysteroids (see Figure 39–3 for carbon numbering). Many cortisol metabolites are conjugated with glucuronic acid or sulfate at the C3 and C21 hydroxyls, respectively, in the liver; they are then excreted in the urine.
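Assuming simple first-order elimination, the 60–90 minute half-life quoted above translates directly into the fraction of circulating cortisol remaining at any later time; this is an approximation that ignores the prolongation by stress, hypothyroidism, or liver disease noted in the text.

```python
# Fraction of circulating cortisol remaining under first-order elimination,
# using the 60-90 minute half-life range from the text.

def fraction_remaining(t_min, half_life_min):
    """Fraction left after t_min minutes, given the elimination half-life."""
    return 0.5 ** (t_min / half_life_min)

for t_half in (60, 90):
    print(t_half, round(fraction_remaining(180, t_half), 3))
# After 3 hours: 3 half-lives have elapsed at t1/2 = 60 min (12.5% remains),
# but only 2 half-lives at t1/2 = 90 min (25% remains).
```

The same arithmetic underlies why large doses of hydrocortisone, which prolong the apparent half-life, extend the duration of effect disproportionately.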

Figure 39–3 Chemical structures of several glucocorticoids. The acetonide-substituted derivatives (eg, triamcinolone acetonide) have increased surface activity and are useful in dermatology. Dexamethasone is identical to betamethasone except for the configuration of the methyl group at C16: in betamethasone it is beta (projecting up from the plane of the rings); in dexamethasone it is alpha.

In some species (eg, the rat), corticosterone is the major glucocorticoid. It is less firmly bound to protein and therefore metabolized more rapidly. The pathways of its degradation are similar to those of cortisol.

Most of the known effects of the glucocorticoids are mediated by widely distributed intracellular glucocorticoid receptors. These proteins are members of the superfamily of nuclear receptors, which includes steroid, sterol (vitamin D), thyroid, retinoic acid, and many other receptors with unknown or nonexistent ligands (orphan receptors). All these receptors interact with the promoters of—and regulate the transcription of—target genes (Figure 39–4). In the absence of the hormonal ligand, glucocorticoid receptors are primarily cytoplasmic, in oligomeric complexes with chaperone heat-shock proteins (hsp). The most important of these are two molecules of hsp90, although other proteins (eg, hsp40, hsp70, FKBP5) also are involved. Free hormone from the plasma and interstitial fluid enters the cell and binds to the receptor, inducing conformational changes that allow it to dissociate from the heat-shock proteins and dimerize. The dimeric ligand-bound receptor complex then is actively transported into the nucleus, where it interacts with DNA and nuclear proteins. As a homodimer, it binds to glucocorticoid response elements (GREs) in the promoters of responsive genes. The GRE is composed of two palindromic sequences that bind to the hormone receptor dimer.

Figure 39–4 A model of the interaction of a steroid, S (eg, cortisol), and its receptor, R, and the subsequent events in a target cell. The steroid is present in the blood in bound form on the corticosteroid-binding globulin (CBG) but enters the cell as the free molecule. The intracellular receptor is bound to stabilizing proteins, including two molecules of heat-shock protein 90 (hsp90) and several others including FKBP5, denoted as “X” in the figure. This receptor complex is incapable of activating transcription. When the complex binds a molecule of cortisol, an unstable complex is created and the hsp90 and associated molecules are released. The steroid-receptor complex is now able to dimerize, enter the nucleus, bind to a glucocorticoid response element (GRE) on the regulatory region of the gene, and regulate transcription by RNA polymerase II and associated transcription factors. A variety of regulatory factors (not shown) may participate in facilitating (coactivators) or inhibiting (corepressors) the steroid response. The resulting mRNA is edited and exported to the cytoplasm for the production of protein that brings about the final hormone response. An alternative to the steroid-receptor complex interaction with a GRE is an interaction with and altering the function of other transcription factors, such as NF-κB in the nucleus of cells.

In addition to binding to GREs, the ligand-bound receptor also forms complexes with and influences the function of other transcription factors, such as AP1 and nuclear factor kappa-B (NF-κB), which act on non-GRE-containing promoters, to contribute to the regulation of transcription of their responsive genes. These transcription factors have broad actions on the regulation of growth factors, proinflammatory cytokines, etc, and to a great extent mediate the anti-growth, anti-inflammatory, and immunosuppressive effects of glucocorticoids.

Two genes for the corticoid receptor have been identified: one encoding the classic glucocorticoid receptor (GR) and the other encoding the mineralocorticoid receptor (MR). Alternative splicing of human glucocorticoid receptor pre-mRNA generates two highly homologous isoforms termed hGRα and hGRβ. Human GRα is the classic ligand-activated glucocorticoid receptor, which, in the hormone-bound state, modulates the expression of glucocorticoid-responsive genes. In contrast, hGRβ does not bind glucocorticoids and is transcriptionally inactive. However, hGRβ is able to inhibit the effects of hormone-activated hGRα on glucocorticoid-responsive genes, playing the role of a physiologically relevant endogenous inhibitor of glucocorticoid action. It was recently shown that the two hGR alternative transcripts have eight distinct translation initiation sites—ie, in a human cell there may be up to 16 GRα and GRβ isoforms, which may form up to 256 homodimers and heterodimers with different transcriptional and possibly nontranscriptional activities. This variability suggests that this important class of steroid receptors has complex stochastic activities. In addition, rare mutations in hGR may result in partial glucocorticoid resistance. Affected individuals have increased ACTH secretion because of reduced pituitary feedback and additional endocrine abnormalities (see below).
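The isoform arithmetic in the paragraph above is easy to verify: two alternative transcripts, each with eight translation initiation sites, yield 16 receptor isoforms, and every pairing of those isoforms gives a possible homo- or heterodimer. A minimal sketch (variable names are illustrative):

```python
# Counting human GR isoforms and dimer combinations as described in the text.
transcripts = 2        # the two alternative transcripts, hGRalpha and hGRbeta
start_sites = 8        # distinct translation initiation sites per transcript

isoforms = transcripts * start_sites   # up to 16 GRalpha/GRbeta isoforms
dimers = isoforms * isoforms           # up to 256 homodimers and heterodimers

print(isoforms, dimers)  # 16 256
```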

The prototype GR isoform is composed of about 800 amino acids and can be divided into three functional domains (see Figure 2–6). The glucocorticoid-binding domain is located at the carboxyl terminal of the molecule. The DNA-binding domain is located in the middle of the protein and contains nine cysteine residues. This region folds into a “two-finger” structure stabilized by zinc ions connected to cysteines to form two tetrahedrons. This part of the molecule binds to the GREs that regulate glucocorticoid action on glucocorticoid-regulated genes. The zinc fingers represent the basic structure by which the DNA-binding domain recognizes specific nucleic acid sequences. The amino terminal domain is involved in the transactivation activity of the receptor and increases its specificity.

The interaction of glucocorticoid receptors with GREs or other transcription factors is facilitated or inhibited by several families of proteins called steroid receptor coregulators, divided into coactivators and corepressors. The coregulators do this by serving as bridges between the receptors and other nuclear proteins and by expressing enzymatic activities such as histone acetylase or deacetylase, which alter the conformation of nucleosomes and the transcribability of genes.

Between 10% and 20% of expressed genes in a cell are regulated by glucocorticoids. The number and affinity of receptors for the hormone, the complement of transcription factors and coregulators, and post-transcription events determine the relative specificity of these hormones’ actions in various cells. The effects of glucocorticoids are due mainly to proteins synthesized from mRNA transcribed from their target genes.

Some of the effects of glucocorticoids can be attributed to their binding to mineralocorticoid receptors. Indeed, MRs bind aldosterone and cortisol with similar affinity. A mineralocorticoid effect of the higher levels of cortisol is avoided in some tissues (eg, kidney, colon, salivary glands) by expression of 11β-hydroxysteroid dehydrogenase type 2, the enzyme responsible for biotransformation of cortisol to its 11-keto derivative (cortisone), which has minimal action on mineralocorticoid receptors.

The GR also interacts with other regulators of cell function. One such molecule is CLOCK/BMAL-1, a transcription factor dimer expressed in all tissues and generating the circadian rhythm of cortisol secretion (see Figure 39–2) at the suprachiasmatic nucleus of the hypothalamus. CLOCK is an acetyltransferase that acetylates the hinge region of the GR, neutralizing its transcriptional activity and thus rendering target tissues resistant to glucocorticoids. As shown in Figure 39–2, lower panel, the glucocorticoid target tissue sensitivity rhythm generated is in reverse phase to that of circulating cortisol concentrations, explaining the increased sensitivity of the organism to evening administration of glucocorticoids. The activated GR also interacts with NF-κB, a regulator of production of cytokines and other molecules involved in inflammation. This explains the circadian variability of the inflammatory reaction, which is enhanced in the evening and early night and suppressed in the morning.

Prompt effects such as initial feedback suppression of pituitary ACTH occur in minutes and are too rapid to be explained on the basis of gene transcription and protein synthesis. It is not known how these effects are mediated. Among the proposed mechanisms are direct effects on cell membrane receptors for the hormone or nongenomic effects of the classic hormone-bound glucocorticoid receptor. The putative membrane receptors might be entirely different from the known intracellular receptors. For example, recent studies implicate G protein-coupled membrane receptors in the response of glutamatergic neurons to glucocorticoids in rats. Furthermore, all steroid receptors (except the MRs) have been shown to have palmitoylation motifs that allow enzymatic addition of palmitate and increased localization of the receptors in the vicinity of plasma membranes. Such receptors are available for direct interactions with, and effects on, various membrane-associated or cytoplasmic proteins without the need for entry into the nucleus and induction of transcriptional actions.

The glucocorticoids have widespread effects because they influence the function of most cells in the body. The major metabolic consequences of glucocorticoid secretion or administration are due to direct actions of these hormones in the cell. However, some important effects are the result of homeostatic responses by insulin and glucagon. Although many of the effects of glucocorticoids are dose-related and become magnified when large amounts are administered for therapeutic purposes, there are also other effects—called permissive effects—without which many normal functions become deficient. For example, the response of vascular and bronchial smooth muscle to catecholamines is diminished in the absence of cortisol and restored by physiologic amounts of this glucocorticoid. Similarly, the lipolytic responses of fat cells to catecholamines, ACTH, and growth hormone are attenuated in the absence of glucocorticoids.

The glucocorticoids have important dose-related effects on carbohydrate, protein, and fat metabolism. The same effects are responsible for some of the serious adverse effects associated with their use in therapeutic doses. Glucocorticoids stimulate and are required for gluconeogenesis and glycogen synthesis in the fasting state. They stimulate phosphoenolpyruvate carboxykinase, glucose-6-phosphatase, and glycogen synthase and the release of amino acids in the course of muscle catabolism.

Glucocorticoids increase serum glucose levels and thus stimulate insulin release but inhibit the uptake of glucose by muscle cells, while they stimulate hormone-sensitive lipase and thus lipolysis. The increased insulin secretion stimulates lipogenesis and to a lesser degree inhibits lipolysis, leading to a net increase in fat deposition combined with increased release of fatty acids and glycerol into the circulation.

The net results of these actions are most apparent in the fasting state, when the supply of glucose from gluconeogenesis, the release of amino acids from muscle catabolism, the inhibition of peripheral glucose uptake, and the stimulation of lipolysis all contribute to maintenance of an adequate glucose supply to the brain.

Although glucocorticoids stimulate RNA and protein synthesis in the liver, they have catabolic and antianabolic effects in lymphoid and connective tissue, muscle, peripheral fat, and skin. Supraphysiologic amounts of glucocorticoids lead to decreased muscle mass and weakness, and thinning of the skin. Catabolic and antianabolic effects on bone are the cause of osteoporosis in Cushing syndrome and impose a major limitation in the long-term therapeutic use of glucocorticoids. In children, glucocorticoids reduce growth. This effect may be partially prevented by administration of growth hormone in high doses, but this use of growth hormone is not recommended except in extreme situations.

Glucocorticoids dramatically reduce the manifestations of inflammation. This is due to their profound effects on the concentration, distribution, and function of peripheral leukocytes and to their suppressive effects on inflammatory cytokines and chemokines, and on other small molecule mediators of inflammation. Inflammation, regardless of its cause, is characterized by the extravasation and infiltration of leukocytes into the affected tissue. These events are mediated by a complex series of interactions of white cell adhesion molecules with those on endothelial cells and are inhibited by glucocorticoids. After a single dose of a short-acting glucocorticoid, the concentration of neutrophils in the circulation increases while the lymphocytes (T and B cells), monocytes, eosinophils, and basophils decrease. The changes are maximal at 6 hours and are dissipated in 24 hours. The increase in neutrophils is due both to increased influx into the blood from the bone marrow and to decreased migration from the blood vessels, leading to a reduction in the number of cells at the site of inflammation. The reduction in circulating lymphocytes, monocytes, eosinophils, and basophils is primarily the result of their movement from the vascular bed to lymphoid tissue.

Glucocorticoids also inhibit the functions of tissue macrophages as well as dendritic and other antigen-presenting cells. The ability of these cells to respond to antigens and mitogens is reduced. The effect on macrophages is particularly marked and limits their ability to phagocytose and kill microorganisms and to produce tumor necrosis factor α, interleukin 1, metalloproteinases, and plasminogen activator. Both macrophages and lymphocytes produce less interleukin 12 and interferon-γ, important inducers of Th1 cell activity, and cellular immunity.

In addition to their effects on leukocyte function, glucocorticoids influence the inflammatory response by inhibiting phospholipase A2 and thus reduce the synthesis of arachidonic acid, the precursor of prostaglandins and leukotrienes, and of platelet-activating factor. Finally, glucocorticoids reduce expression of cyclooxygenase 2, the inducible form of this enzyme, in inflammatory cells, thus reducing the amount of enzyme available to produce prostaglandins (see Chapters 18 and 36).

Glucocorticoids cause vasoconstriction when applied directly to the skin, possibly by suppressing mast cell degranulation. They also decrease capillary permeability by reducing the amount of histamine released by basophils and mast cells.

The anti-inflammatory and immunosuppressive effects of glucocorticoids are largely due to the actions described above. In humans, complement activation is unaltered, but its effects are inhibited. Antibody production can be reduced by large doses of steroids, although it is unaffected by moderate doses (eg, 20 mg/d of prednisone).

The anti-inflammatory and immunosuppressive effects of these agents are widely useful therapeutically but are also responsible for some of their most serious adverse effects (see text that follows).

Glucocorticoids have important effects on the nervous system. Adrenal insufficiency causes marked slowing of the alpha rhythm of the electroencephalogram and is associated with depression. Increased amounts of glucocorticoids often produce behavioral disturbances in humans: initially insomnia and euphoria and subsequently depression. Large doses of glucocorticoids may increase intracranial pressure (pseudotumor cerebri).

Glucocorticoids given chronically suppress the pituitary release of ACTH, growth hormone, thyroid-stimulating hormone, and luteinizing hormone.

Large doses of glucocorticoids have been associated with the development of peptic ulcer, possibly by suppressing the local immune response against Helicobacter pylori. They also promote fat redistribution in the body, with increase of visceral, facial, nuchal, and supraclavicular fat, and they appear to antagonize the effect of vitamin D on calcium absorption. The glucocorticoids also have important effects on the hematopoietic system. In addition to their effects on leukocytes, they increase the number of platelets and red blood cells.

Cortisol deficiency results in impaired renal function (particularly glomerular filtration), augmented vasopressin secretion, and diminished ability to excrete a water load.

Glucocorticoids have important effects on the development of the fetal lungs. Indeed, the structural and functional changes in the lungs near term, including the production of pulmonary surface-active material required for air breathing (surfactant), are stimulated by glucocorticoids.

Recently, glucocorticoids were found to have direct effects on the epigenetic regulation of specific target genes by altering the activities of DNA methyltransferases and other enzymes participating in epigenesis. This is of particular importance in the prenatal treatment of pregnant mothers or treatment of young infants and children, when the epigenetic effects of glucocorticoids may be long term or even permanent. These effects may predispose these patients to behavioral or somatic disorders, such as depression or obesity and metabolic syndrome.

Glucocorticoids have become important agents for use in the treatment of many inflammatory, immunologic, hematologic, and other disorders. This has stimulated the development of many synthetic steroids with anti-inflammatory and immunosuppressive activity.

Pharmaceutical steroids are usually synthesized from cholic acid obtained from cattle or steroid sapogenins found in plants. Further modifications of these steroids have led to the marketing of a large group of synthetic steroids with special characteristics that are pharmacologically and therapeutically important (Table 39–1; see Figure 39–3).

Table 39–1 Some commonly used natural and synthetic corticosteroids for general use. (See Table 61–4 for dermatologic corticosteroids.)

Agent | Anti-Inflammatory[1] | Topical[1] | Salt-Retaining[1] | Equivalent Oral Dose (mg) | Forms Available

Short- to medium-acting glucocorticoids
 Hydrocortisone (cortisol) | 1 | 1 | 1 | 20 | Oral, injectable, topical
 Cortisone | 0.8 | 0 | 0.8 | 25 | Oral
 Prednisone | 4 | 0 | 0.3 | 5 | Oral
 Prednisolone | 5 | 4 | 0.3 | 5 | Oral, injectable
 Methylprednisolone | 5 | 5 | 0.25 | 4 | Oral, injectable
 Meprednisone[2] | 5 | — | 0 | 4 | Oral, injectable

Intermediate-acting glucocorticoids
 Triamcinolone | 5 | 5[3] | 0 | 4 | Oral, injectable, topical
 Paramethasone[2] | 10 | — | 0 | 2 | Oral, injectable
 Fluprednisolone[2] | 15 | 7 | 0 | 1.5 | Oral

Long-acting glucocorticoids
 Betamethasone | 25–40 | 10 | 0 | 0.6 | Oral, injectable, topical
 Dexamethasone | 30 | 10 | 0 | 0.75 | Oral, injectable, topical

Mineralocorticoids
 Fludrocortisone | 10 | 0 | 250 | 2 | Oral
 Desoxycorticosterone acetate[2] | 0 | 0 | 20 | — | Injectable, pellets

[1] Potency relative to hydrocortisone.

[2] Outside the United States.

[3] Triamcinolone acetonide: up to 100.
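The equivalent oral doses in Table 39–1 support rough glucocorticoid potency conversions: the ratio of the two agents' equivalent doses scales one dose into the other. The sketch below is illustrative only (the dictionary values come from the table, but the function name and rounding are assumptions, and real conversions must also weigh duration of action, mineralocorticoid activity, and route; this is not clinical guidance):

```python
# Equivalent oral doses (mg) from Table 39-1; each entry is roughly
# interchangeable in glucocorticoid effect with 20 mg hydrocortisone.
EQUIVALENT_DOSE_MG = {
    "hydrocortisone": 20,
    "cortisone": 25,
    "prednisone": 5,
    "prednisolone": 5,
    "methylprednisolone": 4,
    "triamcinolone": 4,
    "betamethasone": 0.6,
    "dexamethasone": 0.75,
}

def convert_dose(dose_mg: float, from_drug: str, to_drug: str) -> float:
    """Scale a dose of one glucocorticoid to the roughly equivalent dose
    of another by the ratio of their equivalent oral doses."""
    return dose_mg * EQUIVALENT_DOSE_MG[to_drug] / EQUIVALENT_DOSE_MG[from_drug]

# Example: 40 mg prednisone is roughly equivalent to 6 mg dexamethasone,
# and 20 mg hydrocortisone to 5 mg prednisone.
print(convert_dose(40, "prednisone", "dexamethasone"))
print(convert_dose(20, "hydrocortisone", "prednisone"))
```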

The metabolism of the naturally occurring adrenal steroids has been discussed above. The synthetic corticosteroids (see Table 39–1) are in most cases rapidly and almost completely absorbed when given by mouth. Although they are transported and metabolized in a fashion similar to that of the endogenous steroids, important differences exist.

Alterations in the glucocorticoid molecule influence its affinity for glucocorticoid and mineralocorticoid receptors as well as its protein-binding affinity, side chain stability, rate of elimination, and metabolic products. Halogenation at the 9 position, unsaturation of the Δ1–2 bond of the A ring, and methylation at the 2 or 16 position prolong the half-life by more than 50%. The Δ1 compounds are excreted in the free form. In some cases, the agent given is a prodrug; for example, prednisone is rapidly converted to the active product prednisolone in the body.

Pharmacodynamics

The actions of the synthetic steroids are similar to those of cortisol (see above). They bind to the specific intracellular receptor proteins and produce the same effects but have different ratios of glucocorticoid to mineralocorticoid potency (Table 39–1).

Chronic adrenocortical insufficiency is characterized by weakness, fatigue, weight loss, hypotension, hyperpigmentation, and inability to maintain the blood glucose level during fasting. In such individuals, minor noxious, traumatic, or infectious stimuli may produce acute adrenal insufficiency with circulatory shock and even death.

In primary adrenal insufficiency, about 20–30 mg of hydrocortisone must be given daily, with increased amounts during periods of stress. Although hydrocortisone has some mineralocorticoid activity, this must be supplemented by an appropriate amount of a salt-retaining hormone such as fludrocortisone. Synthetic glucocorticoids that are long-acting and devoid of salt-retaining activity should not be administered as hormone substitution to these patients.

When acute adrenocortical insufficiency is suspected, treatment must be instituted immediately. Therapy consists of large amounts of parenteral hydrocortisone in addition to correction of fluid and electrolyte abnormalities and treatment of precipitating factors.

Hydrocortisone sodium succinate or phosphate in doses of 100 mg intravenously is given every 8 hours until the patient is stable. The dose is then gradually reduced, achieving maintenance dosage within 5 days.

The administration of salt-retaining hormone is resumed when the total hydrocortisone dosage has been reduced to less than 50 mg/d.

This group of disorders is characterized by specific defects in the synthesis of cortisol. In pregnancies at high risk for congenital adrenal hyperplasia, fetuses can be protected from genital abnormalities by administration of dexamethasone to the mother.

The most common defect is a decrease in or lack of P450c21 (21α-hydroxylase) activity.[1] As can be seen in Figure 39–1, this would lead to a reduction in cortisol synthesis and secretion and thus produce a compensatory increase in ACTH release. The adrenal becomes hyperplastic and produces abnormally large amounts of precursors such as 17-hydroxyprogesterone that can be diverted to the androgen pathway, which leads to virilization and can result in ambiguous genitalia in the female fetus. Metabolism of this compound in the liver leads to pregnanetriol, which is characteristically excreted into the urine in large amounts in this disorder and can be used to make the diagnosis and to monitor efficacy of glucocorticoid substitution. However, the most reliable method of detecting this disorder is the increased response of plasma 17-hydroxyprogesterone to ACTH stimulation.

If the defect is in 11-hydroxylation, large amounts of deoxycorticosterone are produced, and because this steroid has mineralocorticoid activity, hypertension with or without hypokalemic alkalosis may ensue. When 17-hydroxylation is defective in the adrenals and gonads, hypogonadism also is present. However, increased amounts of 11-deoxycorticosterone are formed, and the signs and symptoms associated with mineralocorticoid excess—such as hypertension and hypokalemia—also are observed.

When first seen, the infant with congenital adrenal hyperplasia may be in acute adrenal crisis and should be treated as described above, using appropriate electrolyte solutions and an intravenous preparation of hydrocortisone in stress doses. Once the patient is stabilized, oral hydrocortisone, 12–18 mg/m2 per day in two unequally divided doses (two thirds in the morning, one third in late afternoon) is begun. The dosage is adjusted to allow normal growth and bone maturation and to prevent androgen excess. Alternate-day therapy with prednisone has also been used to achieve greater ACTH suppression without increasing growth inhibition. Fludrocortisone, 0.05–0.2 mg/d, should also be administered by mouth, with added salt to maintain normal blood pressure, plasma renin activity, and electrolytes.
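The dosing rule above (12–18 mg/m² per day of oral hydrocortisone, two-thirds in the morning and one-third in the late afternoon) can be sketched as simple arithmetic. The function name, default dose, and rounding are illustrative assumptions, not clinical guidance:

```python
# Split the daily oral hydrocortisone dose for congenital adrenal
# hyperplasia: 12-18 mg/m2/day, given two-thirds in the morning and
# one-third in the late afternoon (per the text).
def cah_hydrocortisone_split(bsa_m2: float, mg_per_m2: float = 15.0):
    total = bsa_m2 * mg_per_m2       # total daily dose, mg
    morning = total * 2 / 3          # two-thirds in the morning
    afternoon = total / 3            # one-third in the late afternoon
    return round(total, 1), round(morning, 1), round(afternoon, 1)

# Example: a child with body surface area 0.8 m2 at 15 mg/m2/day.
print(cah_hydrocortisone_split(0.8))  # (12.0, 8.0, 4.0)
```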

Cushing syndrome is usually the result of bilateral adrenal hyperplasia secondary to an ACTH-secreting pituitary adenoma (Cushing disease) but occasionally is due to tumors or nodular hyperplasia of the adrenal gland or ectopic production of ACTH by other nonadrenocortical tumors. The manifestations are those associated with the chronic presence of excessive glucocorticoids. When glucocorticoid hypersecretion is marked and prolonged, a rounded, plethoric face and trunk obesity are striking in appearance. Protein loss may be significant and includes muscle wasting; thinning, purple striae, and easy bruising of the skin; poor wound healing; and osteoporosis. Other serious disturbances include mental disorders, hypertension, and diabetes. This disorder is treated by surgical removal of the tumor producing ACTH or cortisol, irradiation of the pituitary tumor, or resection of one or both adrenals. These patients must receive large doses of cortisol during and after the surgical procedure. Doses of up to 300 mg of soluble hydrocortisone may be given as a continuous intravenous infusion on the day of surgery. The dose must be reduced slowly to normal replacement levels, since rapid reduction in dose may produce withdrawal symptoms, including nausea, vomiting, fever, and joint pain. If adrenalectomy has been performed, long-term maintenance is similar to that outlined above for adrenal insufficiency.

c. Primary generalized glucocorticoid resistance (Chrousos syndrome)

This rare sporadic or familial genetic condition is usually due to inactivating mutations of the glucocorticoid receptor gene. The hypothalamic-pituitary-adrenal (HPA) axis hyperfunctions in an attempt to compensate for the defect, and the increased production of ACTH leads to high circulating levels of cortisol and cortisol precursors such as corticosterone and 11-deoxycorticosterone with mineralocorticoid activity, as well as of adrenal androgens. These increased levels may result in hypertension with or without hypokalemic alkalosis and hyperandrogenism expressed as virilization and precocious puberty in children and acne, hirsutism, male pattern baldness, and menstrual irregularities (mostly oligo-amenorrhea and hypofertility) in women. The therapy of this syndrome is high doses of synthetic glucocorticoids such as dexamethasone with no inherent mineralocorticoid activity. These doses are titrated to normalize the production of cortisol, cortisol precursors, and adrenal androgens.

d. Aldosteronism

Primary aldosteronism usually results from the excessive production of aldosterone by an adrenal adenoma. However, it may also result from abnormal secretion by hyperplastic glands or from a malignant adrenal tumor. The clinical findings of hypertension, weakness, and tetany are related to the continued renal loss of potassium, which leads to hypokalemia, alkalosis, and elevation of serum sodium concentrations. Recently, aldosteronism was found to be a more frequent cause of hypertension than originally thought, with a rate higher than 20%. This syndrome can also be produced in disorders of adrenal steroid biosynthesis by excessive secretion of deoxycorticosterone, corticosterone, or 18-hydroxycorticosterone—all compounds with inherent mineralocorticoid activity.

In contrast to patients with secondary aldosteronism (see text that follows), these patients have low (suppressed) levels of plasma renin activity and angiotensin II. When treated with fludrocortisone (0.2 mg twice daily orally for 3 days) or deoxycorticosterone acetate (20 mg/d intramuscularly for 3 days—but not available in the United States), patients fail to retain sodium and the secretion of aldosterone is not significantly reduced. When the disorder is mild, it may escape detection if only serum potassium levels are used for screening. However, it may be detected by an increased ratio of plasma aldosterone to renin. Patients generally improve when treated with spironolactone or eplerenone, steroidal aldosterone receptor-blocking agents, and the response to these agents is of diagnostic and therapeutic value. Recently, nonsteroidal compounds with aldosterone receptor-mediated antagonist activity were added to our armamentarium. One of these, finerenone, is already on the market.

It is sometimes necessary to suppress the production of ACTH to identify the source of a particular hormone or to establish whether its production is influenced by the secretion of ACTH. In these circumstances, it is advantageous to use a very potent substance such as dexamethasone because the use of small quantities reduces the possibility of confusion in the interpretation of hormone assays in blood or urine. For example, if complete suppression is achieved by the use of 50 mg of cortisol, the urinary 17-hydroxycorticosteroids will be 15–18 mg/24 h, since one-third of the dose given will be recovered in urine as 17-hydroxycorticosteroid. If an equivalent dose of 1.5 mg of dexamethasone is used, suppression will again be complete but the urinary excretion will be only 0.5 mg/24 h and blood levels will be low.
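The recovery arithmetic above, in which about one-third of the administered dose appears in urine as 17-hydroxycorticosteroids, can be checked directly (the function name is illustrative):

```python
# Expected urinary 17-hydroxycorticosteroid recovery: about one-third of
# the administered glucocorticoid dose (per the text).
def urinary_17ohcs_mg(dose_mg: float) -> float:
    return dose_mg / 3

print(urinary_17ohcs_mg(50))   # ~16.7 mg/24 h, within the quoted 15-18 mg
print(urinary_17ohcs_mg(1.5))  # 0.5 mg/24 h for 1.5 mg dexamethasone
```

This is why a potent agent like dexamethasone is preferred for suppression testing: its tiny equivalent dose contributes almost nothing to the urinary steroid measurement being interpreted.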

The dexamethasone suppression test is used for the diagnosis of Cushing syndrome and has also been used in the differential diagnosis of depressive psychiatric states. As a screening test, 1 mg dexamethasone is given orally at 11 PM, and a plasma sample is obtained the following morning. In normal individuals, the morning cortisol concentration is usually <2 mcg/dL, whereas in Cushing syndrome the level is usually >5 mcg/dL. The results are not reliable in the patient with depression, anxiety, concurrent illness, and other stressful conditions or in the patient who is receiving a medication that enhances the catabolism of dexamethasone in the liver. To distinguish between hypercortisolism due to anxiety, depression, and alcoholism (pseudo-Cushing syndrome) and bona fide Cushing syndrome, a combined test is carried out, consisting of dexamethasone (0.5 mg orally every 6 hours for 2 days) followed by a standard corticotropin-releasing hormone (CRH) test (1 mg/kg given as a bolus intravenous infusion 2 hours after the last dose of dexamethasone).

In patients in whom the diagnosis of Cushing syndrome has been established clinically and confirmed by a finding of elevated free cortisol in the urine, suppression with large doses of dexamethasone will help to distinguish patients with Cushing disease from those with steroid-producing tumors of the adrenal cortex or with the ectopic ACTH syndrome. Dexamethasone is given in a dosage of 0.5 mg orally every 6 hours for 2 days, followed by 2 mg orally every 6 hours for 2 days, and the urine is then assayed for cortisol or its metabolites (Liddle test); or dexamethasone is given as a single dose of 8 mg at 11 PM, and the plasma cortisol is measured at 8 AM the following day. In patients with Cushing disease, the suppressant effect of dexamethasone usually produces a 50% reduction in hormone levels. In patients in whom suppression does not occur, the ACTH level will be low in the presence of a cortisol-producing adrenal tumor and elevated in patients with an ectopic ACTH-producing tumor.
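The interpretive rule described above, where a fall of at least 50% from baseline indicates suppression consistent with Cushing disease, reduces to a simple comparison. A sketch under stated assumptions (function name and threshold handling are illustrative; this is not a diagnostic tool):

```python
# High-dose dexamethasone suppression criterion from the text: in Cushing
# disease, urinary cortisol or its metabolites typically fall by >= 50%
# from baseline; failure to suppress suggests an adrenal tumor or ectopic
# ACTH production (then distinguished by the plasma ACTH level).
def suppresses(baseline: float, post_dex: float, threshold: float = 0.5) -> bool:
    return (baseline - post_dex) / baseline >= threshold

print(suppresses(100.0, 40.0))  # True: 60% reduction
print(suppresses(100.0, 85.0))  # False: inadequate suppression
```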

Lung maturation in the fetus is regulated by the fetal secretion of cortisol. Treatment of the mother with large doses of glucocorticoid reduces the incidence of respiratory distress syndrome in infants delivered prematurely. When delivery is anticipated before 34 weeks of gestation, intramuscular betamethasone, 12 mg, followed by an additional dose of 12 mg 18–24 hours later, is commonly used. Betamethasone is chosen because maternal protein binding and placental metabolism of this corticosteroid are less than those of cortisol, allowing increased transfer across the placenta to the fetus. A study of more than 10,000 infants born at 23–25 weeks of gestation indicated that exposure to exogenous corticosteroids before birth reduced the death rate and evidence of neurodevelopmental impairment.

The synthetic analogs of cortisol are useful in the treatment of a diverse group of diseases generally unrelated to any known disturbance of adrenal function (Table 39–2). The usefulness of corticosteroids in these disorders is a function of their ability to suppress inflammatory and immune responses and to alter leukocyte function, as previously described (see also Chapter 55). These agents are useful in disorders in which host response is the cause of the major manifestations of the disease. A good example of this is the therapy of patients with severe COVID-19, in whom high doses of a synthetic glucocorticoid decreased mortality by approximately 30%. In instances in which the inflammatory or immune response is important in controlling the pathologic process, therapy with corticosteroids may be dangerous but justified to prevent irreparable damage from an inflammatory response—if used in conjunction with specific therapy for the disease process.

Table 39–2 Some therapeutic indications for the use of glucocorticoids in nonadrenal disorders (disorder: examples).

Allergic reactions: Angioneurotic edema, asthma, bee stings, contact dermatitis, drug reactions, allergic rhinitis, serum sickness, urticaria

Collagen-vascular disorders: Giant cell arteritis, lupus erythematosus, scleroderma, mixed connective tissue syndromes, polymyositis, rheumatoid arthritis, temporal arteritis

Eye diseases: Acute uveitis, allergic conjunctivitis, choroiditis, optic neuritis

Gastrointestinal diseases: Inflammatory bowel disease, nontropical sprue, subacute hepatic necrosis

Hematologic disorders: Acquired hemolytic anemia, acute allergic purpura, leukemia, lymphoma, autoimmune hemolytic anemia, idiopathic thrombocytopenic purpura, multiple myeloma

Systemic inflammation: Acute respiratory distress syndrome (sustained therapy with moderate dosage accelerates recovery and decreases mortality)

Infections: Acute respiratory distress syndrome, sepsis, COVID-19

Inflammatory conditions of bones and joints: Arthritis, bursitis, tenosynovitis

Nausea and vomiting: A large dose of dexamethasone reduces emetic effects of chemotherapy and general anesthesia

Neurologic disorders: Cerebral edema (large doses of dexamethasone are given to patients following brain surgery to minimize cerebral edema in the postoperative period), multiple sclerosis

Organ transplants: Prevention and treatment of rejection (immunosuppression)

Pulmonary diseases: Aspiration pneumonia, bronchial asthma, prenatal prevention of infant respiratory distress syndrome, sarcoidosis

Renal disorders: Nephrotic syndrome

Skin diseases: Atopic dermatitis, dermatoses, lichen simplex chronicus (localized neurodermatitis), mycosis fungoides, pemphigus, psoriasis, seborrheic dermatitis, xerosis

Thyroid diseases: Malignant exophthalmos, subacute thyroiditis

Miscellaneous: Hypercalcemia, mountain sickness

Since corticosteroids are not usually curative, the pathologic process may progress while clinical manifestations are suppressed. Therefore, chronic systemic therapy with these drugs should be undertaken with great care and only when the seriousness of the disorder warrants their use and when less hazardous measures have been exhausted.

In general, attempts should be made to bring the disease process under control using medium- to intermediate-acting glucocorticoids such as prednisone and prednisolone (Table 39–1), as well as all ancillary measures possible to keep the dose low. Where possible, alternate-day therapy should be used (see the following text). Therapy should not be decreased or stopped abruptly. When prolonged therapy is anticipated, it is helpful to obtain chest x-rays and a tuberculin test, since glucocorticoid therapy can reactivate dormant tuberculosis. The presence of diabetes, peptic ulcer, osteoporosis, and psychological disturbances should be taken into consideration, and cardiovascular function should be assessed.

Treatment for transplant rejection is a very important application of glucocorticoids. The efficacy of these agents is based on their ability to reduce antigen expression from the grafted tissue, delay revascularization, and interfere with the sensitization of cytotoxic T lymphocytes and the generation of primary antibody-forming cells.

Toxicity

The benefits obtained from glucocorticoids vary considerably. Use of these drugs must be carefully weighed in each patient against their widespread effects. The major undesirable effects of glucocorticoids are the result of their hormonal actions, which lead to the clinical picture of iatrogenic Cushing syndrome (see later in text).

When glucocorticoids are used for short periods (<2 weeks), it is unusual to see serious adverse effects even with moderately large doses. However, insomnia, behavioral changes (primarily hypomania), and acute peptic ulcers are occasionally observed even after only a few days of treatment. Acute pancreatitis is a rare but serious acute adverse effect of high-dose glucocorticoids.

Most patients who are given daily doses of 100 mg of hydrocortisone or more (or the equivalent amount of synthetic steroid) for longer than 2 weeks undergo a series of changes that have been termed iatrogenic Cushing syndrome. The rate of development is a function of the dosage and the genetic background of the patient. In the face, rounding, puffiness, fat deposition, and plethora usually appear (moon facies). Similarly, fat tends to be redistributed from the extremities to the trunk, the back of the neck, and the supraclavicular fossae. There is an increased growth of fine hair over the face, thighs, and trunk. Steroid-induced punctate acne may appear, and insomnia and increased appetite are noted. In the treatment of dangerous or disabling disorders, these changes may not require cessation of therapy. However, the underlying metabolic changes accompanying them can be very serious by the time they become obvious. The continuing breakdown of protein and diversion of amino acids to glucose production increase the need for insulin and over time result in weight gain; visceral fat deposition; myopathy and muscle wasting; thinning of the skin, with striae and bruising; hyperglycemia; and eventually osteoporosis, diabetes, and aseptic necrosis of the hip. Wound healing also is impaired under these circumstances. When diabetes occurs, it is treated with diet and insulin. These patients are often resistant to insulin but rarely develop ketoacidosis. In general, patients treated with corticosteroids should be on high-protein and potassium-enriched diets and should receive adequate doses of vitamin D.
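The "100 mg of hydrocortisone or the equivalent" threshold above depends on the equipotency relationships in Table 39–1. A minimal conversion sketch, using commonly cited equivalent doses (since the table itself is not reproduced here, the specific numbers below are our assumption, for illustration only, not a dosing reference):

```python
# Commonly cited equipotent glucocorticoid doses (mg), relative to
# hydrocortisone 20 mg. Illustrative values only.
EQUIVALENT_DOSE_MG = {
    "hydrocortisone": 20.0,
    "prednisone": 5.0,
    "prednisolone": 5.0,
    "methylprednisolone": 4.0,
    "dexamethasone": 0.75,
}


def to_hydrocortisone_equivalent(drug: str, dose_mg: float) -> float:
    """Convert a daily glucocorticoid dose to its hydrocortisone-equivalent
    dose by simple ratio against the equipotency table."""
    return dose_mg * EQUIVALENT_DOSE_MG["hydrocortisone"] / EQUIVALENT_DOSE_MG[drug]
```

On these assumed values, prednisone 25 mg/day corresponds to 100 mg/day of hydrocortisone, the threshold at which iatrogenic Cushing syndrome commonly develops with use beyond 2 weeks.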

Other serious adverse effects of glucocorticoids include peptic ulcers and their consequences. The clinical findings associated with certain disorders, particularly bacterial and mycotic infections, may be masked by the corticosteroids, and patients must be carefully monitored to avoid serious mishap when large doses are used. Severe myopathy is more frequent in patients treated with long-acting glucocorticoids. The administration of such compounds has been associated with nausea, dizziness, and weight loss in some patients. These effects are treated by changing drugs, reducing dosage, and increasing potassium and protein intake.

Hypomania or acute psychosis may occur, particularly in patients receiving very large doses of corticosteroids. Long-term therapy with intermediate- and long-acting steroids is associated with depression and the development of posterior subcapsular cataracts. Psychiatric follow-up and periodic slit-lamp examination are indicated in such patients. Increased intraocular pressure is common, and glaucoma may be induced. Benign intracranial hypertension also occurs. In dosages of 45 mg/m2 per day or more of hydrocortisone equivalent, growth retardation occurs in children. Medium-, intermediate-, and long-acting glucocorticoids have greater growth-suppressing potency than the natural steroid at equivalent doses.
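The growth-retardation threshold above is expressed per body surface area, so applying it requires a BSA estimate. A sketch assuming the Mosteller formula (the text does not specify a BSA formula; the choice and the function names are ours):

```python
import math


def bsa_mosteller_m2(height_cm: float, weight_kg: float) -> float:
    """Body surface area by the Mosteller formula:
    BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)


def exceeds_growth_threshold(daily_hc_equiv_mg: float,
                             height_cm: float,
                             weight_kg: float) -> bool:
    """Check the text's threshold of 45 mg/m2 per day of hydrocortisone
    equivalent, above which growth retardation occurs in children."""
    dose_per_m2 = daily_hc_equiv_mg / bsa_mosteller_m2(height_cm, weight_kg)
    return dose_per_m2 >= 45.0
```

For a hypothetical child of 120 cm and 25 kg (BSA about 0.91 m2), 50 mg/day of hydrocortisone equivalent exceeds the threshold while 30 mg/day does not.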

When given in larger than physiologic amounts, steroids such as cortisone and hydrocortisone, which have mineralocorticoid effects in addition to glucocorticoid effects, cause some sodium and fluid retention and loss of potassium. In patients with normal cardiovascular and renal function, this leads to a hypokalemic, hypochloremic alkalosis and eventually to a rise in blood pressure. In patients with hypoproteinemia, renal disease, or liver disease, edema also may occur. In patients with heart disease, even small degrees of sodium retention may lead to heart failure. These effects can be minimized by using synthetic non-salt-retaining steroids, sodium restriction, and judicious amounts of potassium supplements.

When corticosteroids are administered for more than 2 weeks, adrenal suppression may occur. If treatment extends over weeks to months, the patient should be given appropriate supplementary therapy at times of minor stress (twofold dosage increases for 24–48 hours) or severe stress (up to tenfold dosage increases for 48–72 hours) such as accidental trauma or major surgery. If corticosteroid dosage is to be reduced, it should be tapered slowly. If therapy is to be stopped, the reduction process should be quite slow when the dose reaches replacement levels. It may take 2–12 months for the hypothalamic-pituitary-adrenal axis to function acceptably, and cortisol levels may not return to normal for another 6–9 months. The glucocorticoid-induced suppression is not a pituitary problem, and treatment with ACTH does not reduce the time required for the return of normal function.
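The supplementary stress-dosing guidance above reduces to simple multiplication of the baseline dose. A sketch of that arithmetic (factors and durations from the text; the function and the baseline example are ours, and real regimens are individualized):

```python
# Rule of thumb from the text: minor stress -> twofold increase for
# 24-48 hours; severe stress (eg, accidental trauma or major surgery)
# -> up to tenfold increase for 48-72 hours.
STRESS_FACTOR = {"minor": 2.0, "severe": 10.0}


def stress_dose_mg(baseline_daily_mg: float, stress: str) -> float:
    """Return the supplemented daily dose during stress.

    `stress` must be "minor" or "severe"; illustrative only.
    """
    return baseline_daily_mg * STRESS_FACTOR[stress]
```

For example, a patient maintained on 20 mg/day would receive 40 mg/day for 24–48 hours around a minor stress, and up to 200 mg/day for 48–72 hours around major surgery.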

If the dosage is reduced too rapidly in patients receiving glucocorticoids for a certain disorder, the symptoms of the disorder may reappear or increase in intensity. However, patients without an underlying disorder (eg, patients cured surgically of Cushing disease) also develop symptoms with rapid reductions in corticosteroid levels. These symptoms include anorexia, nausea or vomiting, weight loss, lethargy, headache, fever, joint or muscle pain, and postural hypotension. Although many of these symptoms may reflect true glucocorticoid deficiency, they may also occur in the presence of normal or even elevated plasma cortisol levels, suggesting glucocorticoid dependence.

Contraindications & Cautions

Patients receiving glucocorticoids must be monitored carefully for the development of hyperglycemia, glycosuria, sodium retention with edema or hypertension, hypokalemia, peptic ulcer, osteoporosis, and hidden infections.

The dosage should be kept as low as possible, and intermittent administration (eg, alternate-day) should be used when satisfactory therapeutic results can be obtained on this schedule. Even patients maintained on relatively low doses of corticosteroids may require supplementary therapy at times of stress, such as when surgical procedures are performed or intercurrent illnesses or accidents occur.

Glucocorticoids must be used with great caution in patients with peptic ulcer, heart disease or hypertension with heart failure, certain infectious illnesses such as varicella and tuberculosis, psychoses, diabetes, osteoporosis, or glaucoma.

Selection of Drug & Dosage Schedule

Glucocorticoid preparations differ with respect to relative anti-inflammatory and mineralocorticoid effect, duration of action, cost, and dosage forms available (Table 39–1), and these factors should be taken into account in selecting the drug to be used.

In patients with normal adrenals, ACTH was used in the past to induce the endogenous production of cortisol to obtain similar effects. However, except when an increase in androgens is desirable, the use of ACTH as a therapeutic agent has been abandoned. Instances in which ACTH was claimed to be more effective than glucocorticoids were probably due to the administration of smaller amounts of corticosteroids than were produced by the dosage of ACTH.

In determining the dosage regimen to be used, the physician must consider the seriousness of the disease, the amount of drug likely to be required to obtain the desired effect, and the duration of therapy. In some diseases, the amount required for maintenance of the desired therapeutic effect is less than the dose needed to obtain the initial effect, and the lowest possible dosage for the needed effect should be determined by gradually lowering the dose until a small increase in signs or symptoms is noted.

When it is necessary to maintain continuously elevated plasma corticosteroid levels to suppress ACTH, a slowly absorbed parenteral preparation or small oral doses at frequent intervals are required. The opposite situation exists with respect to the use of corticosteroids in the treatment of inflammatory and allergic disorders. The same total quantity given in a few doses may be more effective than that given in many smaller doses or in a slowly absorbed parenteral form.

Severe autoimmune conditions involving vital organs must be treated aggressively, and undertreatment is as dangerous as overtreatment. To minimize the deposition of immune complexes and the influx of leukocytes and macrophages, 1 mg/kg per day of prednisone in divided doses is required initially. This dosage is maintained until the serious manifestations respond. The dosage can then be gradually reduced.

When large doses are required for prolonged periods of time, alternate-day administration of the compound may be tried. When used in this manner, very large amounts (eg, 100 mg of prednisone) can sometimes be administered with less marked adverse effects because there is a recovery period between each dose. The transition to an alternate-day schedule can be made after the disease process is under control. It should be done gradually and with additional supportive measures between doses.

When selecting a drug for use in large doses, a medium- or intermediate-acting synthetic steroid with no or little mineralocorticoid effect is advisable. If possible, it should be given as a single morning dose.

Local therapy, such as topical preparations for skin disease, ophthalmic forms for eye disease, intra-articular injections for joint disease, inhaled steroids for asthma, and hydrocortisone enemas for ulcerative colitis, provides a means of delivering large amounts of steroid to the diseased tissue with reduced systemic effects.

Beclomethasone dipropionate and several other glucocorticoids—primarily budesonide, flunisolide, and mometasone furoate, administered as inhaled aerosols—have been found to be extremely useful in the treatment of asthma (see Chapter 20).

Beclomethasone dipropionate, triamcinolone acetonide, budesonide, flunisolide, fluticasone, and others are available as nasal sprays for the topical treatment of allergic rhinitis. They are effective at doses (one or two sprays one, two, or three times daily) that in most patients result in plasma levels that are too low to influence adrenal function or have any other systemic effects.

Corticosteroids incorporated in ointments, creams, lotions, and sprays are used extensively in dermatology. These preparations are discussed in more detail in Chapter 61.

Recently, new timed-release hydrocortisone tablets were developed for replacement treatment of patients with Addison disease and congenital adrenal hyperplasia. These tablets produce plasma cortisol levels similar to those normally secreted in a circadian fashion.

Timed-release prednisone tablets have also been developed for the therapy of patients with rheumatoid arthritis.

The most important mineralocorticoid in humans is aldosterone. However, small amounts of 11-deoxycorticosterone (11-DOC) also are formed and released. Although the amount is normally insignificant, 11-DOC was of some importance therapeutically in the past. Its actions, effects, and metabolism are qualitatively similar to those described below for aldosterone. Fludrocortisone, a synthetic corticosteroid, is the most commonly prescribed salt-retaining hormone.

Aldosterone

Aldosterone is synthesized mainly in the zona glomerulosa of the adrenal cortex. Its structure and synthesis are illustrated in Figure 39–1. The rate of aldosterone secretion is subject to several influences. ACTH produces a moderate stimulation of its release, but this effect is not sustained for more than a few days in the normal individual. The quantities of aldosterone produced by the adrenal cortex and its plasma concentrations are insufficient to participate in any significant feedback control of ACTH secretion.

Without ACTH, aldosterone secretion falls to about half the normal rate, indicating that other factors, eg, angiotensin, are able to maintain and perhaps regulate its secretion (see Chapter 17). Independent variations between cortisol and aldosterone secretion can also be demonstrated by means of lesions in the nervous system such as decerebration, which decreases the secretion of cortisol while increasing the secretion of aldosterone.

Aldosterone and other steroids with mineralocorticoid properties promote the reabsorption of sodium from the distal part of the distal convoluted renal tubule and from the cortical collecting tubules, loosely coupled to the excretion of potassium and hydrogen ion. Sodium reabsorption in the sweat and salivary glands, in the gastrointestinal mucosa, and across cell membranes in general also is increased. Excessive levels of aldosterone produced by tumors or overdosage with synthetic mineralocorticoids lead to hypokalemia, metabolic alkalosis, increased plasma volume, and hypertension.

Mineralocorticoids act by binding to the mineralocorticoid receptor in the cytoplasm of target cells, especially principal cells of the distal convoluted and collecting tubules of the kidney. The drug-receptor complex activates a series of events similar to those described above for the glucocorticoids and illustrated in Figure 39–4. It is of interest that this receptor has the same affinity for cortisol as for aldosterone, and cortisol is present in much higher concentrations in the extracellular fluid. The specificity for mineralocorticoids in the kidney appears to be conferred, at least in part, by the presence in the kidney of the enzyme 11β-hydroxysteroid dehydrogenase type 2, which converts cortisol to cortisone. The latter has low affinity for the receptor and is inactive as a mineralocorticoid or glucocorticoid in the kidney. The major effect of activation of the aldosterone receptor is increased expression of Na+/K+-ATPase and the epithelial sodium channel (ENaC).

Aldosterone is secreted at the rate of 100–200 mcg/d in normal individuals with a moderate dietary salt intake. The plasma level in men (resting supine) is about 0.007 mcg/dL. The half-life of aldosterone injected in tracer quantities is 15–20 minutes, and it does not appear to be firmly bound to serum proteins.

The metabolism of aldosterone is similar to that of cortisol, about 50 mcg/24 h appearing in the urine as conjugated tetrahydroaldosterone. Approximately 5–15 mcg/24 h is excreted free or as the 3-oxo glucuronide.

11-Deoxycorticosterone (11-DOC)

11-DOC, which also serves as a precursor of aldosterone (see Figure 39–1), is normally secreted in amounts of about 200 mcg/d. Its half-life when injected into the human circulation is about 70 minutes. Estimates of its concentration in plasma are approximately 0.03 mcg/dL. The control of its secretion differs from that of aldosterone in that the secretion of 11-DOC is primarily under the control of ACTH. Although the response to ACTH is enhanced by dietary sodium restriction, a low-salt diet does not increase 11-DOC secretion, owing to compensatory adaptations. The secretion of DOC may be markedly increased in abnormal conditions such as adrenocortical carcinoma and congenital adrenal hyperplasia with reduced P450c11 or P450c17 activity.

Fludrocortisone

This compound, a potent steroid with both glucocorticoid and mineralocorticoid activity, is the most widely used mineralocorticoid. Oral doses of 0.1 mg two to seven times weekly have potent salt-retaining activity and are used in the treatment of adrenocortical insufficiency associated with mineralocorticoid deficiency. These dosages are too small to have important anti-inflammatory or antigrowth effects.

The adrenal cortex secretes large amounts of DHEA and smaller amounts of androstenedione and testosterone. Although these androgens are thought to contribute to the normal maturation process, they do not stimulate or support major androgen-dependent pubertal changes in humans. Studies suggest that DHEA and its sulfate might have other important physiologic actions. If that is correct, these results are probably due to the peripheral conversion of DHEA to more potent androgens or to estrogens and interaction with androgen and estrogen receptors, respectively. Additional effects may be exerted through an interaction with the GABAA and glutamate receptors in the brain or with a nuclear receptor in several central and peripheral sites. The therapeutic use of DHEA in humans has been explored, but the substance has already been adopted with uncritical enthusiasm by members of the sports drug and the vitamin and food supplement cultures.

The results of a placebo-controlled trial of DHEA in patients with systemic lupus erythematosus have been reported as well as those of a study of DHEA replacement in women with adrenal insufficiency. In both studies a small beneficial effect was seen, with some improvement of the disease in the former and an added sense of well-being in the latter. The androgenic or estrogenic actions of DHEA could explain the effects of the compound in both situations. In contrast, there is no evidence to support DHEA use to increase muscle strength or improve memory.

42. Agents That Affect Bone Mineral Homeostasis

Calcium and phosphate, the major mineral constituents of bone, are also two of the most important minerals for general cellular function. Accordingly, the body has evolved complex mechanisms to carefully maintain calcium and phosphate homeostasis (Figure 42–1). Approximately 98% of the 1–2 kg of calcium and 85% of the 1 kg of phosphorus in the human adult are found in bone, the principal reservoir for these minerals. This reservoir is dynamic, with constant remodeling of bone and ready exchange of bone mineral with that in the extracellular fluid. Bone also serves as the principal structural support for the body and provides the space for hematopoiesis. This relationship is more than fortuitous, as elements of the bone marrow affect skeletal processes just as skeletal elements affect hematopoietic processes. During aging and in nutritional diseases such as anorexia nervosa and obesity, fat accumulates in the marrow, suggesting a dynamic interaction between marrow fat and bone. Furthermore, bone has been implicated as an endocrine tissue with release of osteocalcin, which in its uncarboxylated form stimulates insulin secretion, testicular function, and muscle endurance. Abnormalities in bone mineral homeostasis can lead to a wide variety of cellular dysfunctions (eg, tetany, coma, muscle weakness), disturbances in structural support of the body (eg, osteoporosis with fractures), and loss of hematopoietic capacity (eg, infantile osteopetrosis).

Figure 42–1 Mechanisms contributing to bone mineral homeostasis. Serum calcium (Ca) and phosphorus (P) concentrations are controlled principally by three hormones, 1,25-dihydroxyvitamin D (D), fibroblast growth factor 23 (FGF23), and parathyroid hormone (PTH), through their action on absorption from the gut and from bone and on renal excretion. PTH and 1,25(OH)2D increase the input of calcium and phosphorus from bone into the serum and stimulate bone formation. 1,25(OH)2D also increases calcium and phosphate absorption from the gut. In the kidney, 1,25(OH)2D decreases excretion of both calcium and phosphorus, whereas PTH reduces calcium but increases phosphorus excretion. FGF23 stimulates renal excretion of phosphate. Calcitonin (CT) is a less critical regulator of calcium homeostasis, but in pharmacologic concentrations can reduce serum calcium and phosphorus by inhibiting bone resorption and stimulating their renal excretion. Feedback may alter the effects shown; for example, 1,25(OH)2D increases urinary calcium excretion indirectly through increased calcium absorption from the gut and inhibition of PTH secretion and may increase urinary phosphate excretion because of increased phosphate absorption from the gut and stimulation of FGF23 production.
katzung16_ch42_f001

Calcium and phosphate enter the body from the intestine. The average American diet provides 600–1000 mg of calcium per day, of which approximately 100–250 mg is absorbed. This amount represents net absorption, because both absorption and secretion occur. Although the duodenum is the site of the highest rate of calcium absorption, the long dwell time of intestinal contents in the ileum makes it the site of the greatest amount of calcium absorption. The quantity of phosphorus in the American diet is about the same as that of calcium. However, the efficiency of absorption (principally in the jejunum) is greater, ranging from 70% to 90%, depending on intake. In the steady state, renal excretion of calcium and phosphate balances intestinal absorption. In general, more than 98% of filtered calcium and 85% of filtered phosphate are reabsorbed by the kidney. The movement of calcium and phosphate across the intestinal and renal epithelia is closely regulated. Dysfunction of the intestine (eg, nontropical sprue) or kidney (eg, chronic renal failure) can disrupt bone mineral homeostasis.
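The steady-state bookkeeping described above can be worked through numerically: renal excretion matches net intestinal absorption, and with roughly 98% tubular reabsorption the filtered load must be many times the excreted amount. A toy calculation (the midpoint values are our choice from the ranges in the text; variable names are ours):

```python
# Daily calcium fluxes, using figures from the text.
dietary_intake_mg = 800.0    # midpoint of the 600-1000 mg/day range
net_absorbed_mg = 175.0      # midpoint of the 100-250 mg/day net absorption

# In the steady state, renal excretion balances net intestinal absorption.
renal_excretion_mg = net_absorbed_mg

# With ~98% of filtered calcium reabsorbed, the filtered load satisfies
# filtered * (1 - 0.98) = excreted, so:
filtered_load_mg = renal_excretion_mg / (1.0 - 0.98)
```

This back-calculation gives a filtered load of 8750 mg/day, illustrating how a small fractional change in tubular reabsorption can shift calcium balance far more than a change in dietary intake.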

Three hormones serve as the principal regulators of calcium and phosphate homeostasis: parathyroid hormone (PTH), fibroblast growth factor 23 (FGF23), and vitamin D via its active metabolite 1,25-dihydroxyvitamin D (1,25[OH]2D) (Figure 42–2). The role of calcitonin (CT) is less critical during adult life but may play a greater role during pregnancy and lactation. The term vitamin D, when used without a subscript, refers to both vitamin D2 (ergocalciferol) and vitamin D3 (cholecalciferol). This applies also to the metabolites of vitamin D2 and D3. Vitamin D2 and its metabolites differ from vitamin D3 and its metabolites only in the side chain where they contain a double bond between C-22–23 and a methyl group at C-24 (Figure 42–3). Vitamin D is considered a prohormone because it must be further metabolized to gain biologic activity (see Figure 42–3). Vitamin D3 is produced in the skin under ultraviolet B (UVB) radiation (eg, in sunlight) from its precursor, 7-dehydrocholesterol (7-DHC). The initial product, pre-vitamin D3, undergoes a temperature-sensitive isomerization to vitamin D3. 7-DHC is on the pathway to cholesterol, a step controlled by the enzyme 7-dehydrocholesterol reductase (DHCR7). Levels and regulation of DHCR7 control the levels of 7-DHC in the skin and thus the amount of substrate available for vitamin D production. The precursor of vitamin D2 is ergosterol, found in plants and fungi (mushrooms). It undergoes a similar transformation to vitamin D2 with UVB radiation. Vitamin D2 thus comes only from the diet, whereas vitamin D3 comes from the skin or the diet, or both. The subsequent metabolism of these two forms of vitamin D is essentially the same and follows the illustration for vitamin D3 metabolism in Figure 42–3. The first step is the 25-hydroxylation of vitamin D to 25-hydroxyvitamin D (25[OH]D). A number of enzymes in the liver and other tissues perform this function, of which CYP2R1 is the most important at least in the liver. 
25(OH)D is then metabolized to the active hormone 1,25-dihydroxyvitamin D (1,25[OH]2D) in the kidney and elsewhere. PTH stimulates the production of 1,25(OH)2D in the kidney, whereas FGF23 is inhibitory. Elevated levels of blood phosphate and calcium also inhibit 1,25(OH)2D production in part by their effects on FGF23 (high phosphate stimulates FGF23 production) and PTH (high calcium inhibits PTH production). 1,25(OH)2D regulates its own levels by stimulating the enzyme 24-hydroxylase (CYP24A1), which begins the catabolism of 1,25(OH)2D, by suppressing PTH production, and by stimulating FGF23 production, all of which combine to reduce 1,25(OH)2D levels. Other tissues also produce 1,25(OH)2D; the control of this production differs from that in the kidney, as will be discussed subsequently. The complex interplay among PTH, FGF23, and 1,25(OH)2D is discussed in detail later.

Figure 42–2 The hormonal interactions controlling bone mineral homeostasis. In the body (A), 1,25-dihydroxyvitamin D (1,25[OH]2D) is produced by the kidney under the control of parathyroid hormone (PTH), which stimulates its production, and fibroblast growth factor 23 (FGF23), which inhibits its production. 1,25(OH)2D in turn inhibits the production of PTH by the parathyroid glands and stimulates FGF23 release from bone. 1,25(OH)2D is the principal regulator of intestinal calcium and phosphate absorption. At the level of the bone (B), both PTH and 1,25(OH)2D regulate bone formation and resorption, with each capable of stimulating both processes. This is accomplished by their stimulation of preosteoblast proliferation and differentiation into osteoblasts, the bone-forming cell. PTH also stimulates osteoblast formation indirectly by inhibiting the osteocyte’s production of sclerostin, a protein that blocks osteoblast proliferation by inhibiting the wnt pathway (not shown). PTH and 1,25(OH)2D stimulate the expression of RANKL by the osteoblast, which, with MCSF, stimulates the differentiation and subsequent activation of osteoclasts, the bone-resorbing cell. OPG blocks RANKL action, and may be inhibited by PTH and 1,25(OH)2D. FGF23 in excess leads to osteomalacia indirectly by inhibiting 1,25(OH)2D production and lowering phosphate levels. MCSF, macrophage colony-stimulating factor; OPG, osteoprotegerin; RANKL, ligand for receptor for activation of nuclear factor-κB.
katzung16_ch42_f002
Figure 42–3 Conversion of 7-dehydrocholesterol to vitamin D3 in the skin and its subsequent metabolism to 25-hydroxyvitamin D3 (25[OH]D3) in the liver and to 1,25-dihydroxyvitamin D3 (1,25[OH]2D3) and 24,25-dihydroxyvitamin D3 (24,25[OH]2D3) in the kidney. Control of vitamin D metabolism is exerted primarily at the level of the kidney, where high concentrations of serum phosphorus (P) and calcium (Ca) as well as fibroblast growth factor 23 (FGF23) inhibit production of 1,25(OH)2D3 (indicated by a minus [−] sign), but promote that of 24,25(OH)2D3 (indicated by a plus [+] sign). Parathyroid hormone (PTH), on the other hand, stimulates 1,25(OH)2D3 production but inhibits 24,25(OH)2D3 production. The insert (shaded) shows the side chain for ergosterol, vitamin D2, and the active vitamin D2 metabolites. Ergosterol is converted to vitamin D2 (ergocalciferol) by UV radiation similar to the conversion of 7-dehydrocholesterol to vitamin D3. Vitamin D2, in turn, is metabolized to 25-hydroxyvitamin D2, 1,25-dihydroxyvitamin D2, and 24,25-dihydroxyvitamin D2 via the same enzymes that metabolize vitamin D3. In humans, corresponding D2 and D3 metabolites have equivalent biologic effects, although they differ in pharmacokinetics. +, facilitation; –, inhibition; P, phosphorus; Ca, calcium; PTH, parathyroid hormone; FGF23, fibroblast growth factor 23.
katzung16_ch42_f003

To summarize: 1,25(OH)2D suppresses the production of PTH, as does calcium, but stimulates the production of FGF23. Phosphate stimulates both PTH and FGF23 secretion. In turn PTH stimulates 1,25(OH)2D production, whereas FGF23 is inhibitory. 1,25(OH)2D stimulates the intestinal absorption of calcium and phosphate. 1,25(OH)2D and PTH promote both bone formation and resorption in part by stimulating the proliferation and differentiation of osteoblasts and osteoclasts. Both PTH and 1,25(OH)2D enhance renal retention of calcium, but PTH promotes renal phosphate excretion, as does FGF23, whereas 1,25(OH)2D promotes renal reabsorption of phosphate. These feedback loops combine to maintain calcium and phosphate homeostasis.
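The feedback relationships in this summary can be encoded as signed edges (+1 for stimulation, -1 for inhibition) to check that each loop is net negative, ie, stabilizing. A bookkeeping sketch of the paragraph above (the dictionary keys and helper are our own naming):

```python
# Signed regulatory effects, as stated in the summary paragraph.
# (source, target): +1 = stimulates, -1 = inhibits.
EFFECTS = {
    ("1,25(OH)2D", "PTH"): -1,      # 1,25(OH)2D suppresses PTH
    ("calcium", "PTH"): -1,         # calcium suppresses PTH
    ("1,25(OH)2D", "FGF23"): +1,    # 1,25(OH)2D stimulates FGF23
    ("phosphate", "PTH"): +1,       # phosphate stimulates PTH secretion
    ("phosphate", "FGF23"): +1,     # phosphate stimulates FGF23 secretion
    ("PTH", "1,25(OH)2D"): +1,      # PTH stimulates 1,25(OH)2D production
    ("FGF23", "1,25(OH)2D"): -1,    # FGF23 inhibits 1,25(OH)2D production
}


def loop_sign(*edges):
    """Multiply edge signs around a loop; -1 means negative feedback."""
    sign = 1
    for edge in edges:
        sign *= EFFECTS[edge]
    return sign
```

For example, PTH stimulates 1,25(OH)2D production, and 1,25(OH)2D in turn suppresses PTH; the signs multiply to -1, a negative feedback loop. The 1,25(OH)2D–FGF23 pair behaves the same way, which is how these hormones hold calcium and phosphate within narrow limits.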

Other hormones—calcitonin, prolactin, growth hormone, insulin, insulin-like growth factors, thyroid hormone, glucocorticoids, and sex steroids—influence calcium and phosphate homeostasis under certain physiologic circumstances and can be considered secondary regulators. Deficiency or excess of these secondary regulators within a physiologic range does not produce the disturbance of calcium and phosphate homeostasis that is observed in situations of deficiency or excess of PTH, FGF23, and vitamin D. However, certain of these secondary regulators—especially calcitonin, glucocorticoids, and estrogens—are useful therapeutically and are discussed in subsequent sections.

In addition to these hormonal regulators, calcium and phosphate themselves, other ions such as sodium and fluoride, and a variety of drugs (bisphosphonates, anticonvulsants, various anti-HIV drugs, and diuretics) also alter calcium and phosphate homeostasis.

Parathyroid hormone (PTH) is a single-chain peptide hormone composed of 84 amino acids. It is produced in the parathyroid gland in a precursor form of 115 amino acids; the excess 31 amino terminal amino acids are cleaved off before secretion. Within the gland is a calcium-sensitive protease capable of cleaving the intact hormone into fragments, thereby providing one mechanism by which calcium limits the production of PTH. A second mechanism involves the calcium-sensing receptor (CaSR), which, when stimulated by calcium, reduces PTH production and secretion. The parathyroid gland also contains the vitamin D receptor (VDR) and the enzyme CYP27B1, which produces 1,25(OH)2D, thus enabling circulating or endogenously produced 1,25(OH)2D to suppress PTH production. 1,25(OH)2D also induces the CaSR, making the parathyroid gland more sensitive to suppression by calcium. Biologic activity resides in the amino terminal region of PTH, such that synthetic PTH 1-34 (available as teriparatide) is fully active and is used in the treatment of osteoporosis. However, a full-length form of PTH (rhPTH 1-84, Natpara) has been approved for treatment of hypoparathyroidism. In addition, an analog of PTHrP (abaloparatide) that functions much like teriparatide has recently been approved for the treatment of osteoporosis. Other analogs of PTH are currently in development. Loss of the first two amino terminal amino acids eliminates most biologic activity.

The metabolic clearance of intact PTH is rapid, with a half-time of disappearance measured in minutes. Most of the clearance occurs in the liver and kidney. The inactive carboxyl terminal fragments produced by metabolism of the intact hormone have a much lower clearance, especially in renal failure. In the past, this accounted for the very high PTH values observed in patients with renal failure when the hormone was measured by radioimmunoassays directed against the carboxyl terminal region. Currently, most PTH assays differentiate between intact PTH 1-84 and large inactive fragments, so that it is possible to more accurately evaluate biologically active PTH status in patients with renal failure. That said, in renal failure, biologically inactive fragments of PTH detected by the newer “intact” PTH assays still complicate the measurement.
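A half-time "measured in minutes" implies that intact hormone is essentially gone within the hour, as the standard first-order elimination relation makes explicit (the 4-minute half-time below is illustrative only, not a measured value):

```latex
% First-order elimination: concentration falls exponentially.
C(t) = C_0\, e^{-kt}, \qquad k = \frac{\ln 2}{t_{1/2}}
% With an illustrative half-time of 4 min, after 20 min (five half-lives):
\frac{C(20)}{C_0} = 2^{-20/4} = 2^{-5} \approx 3\%
```

The same relation underlies the contrast with the carboxyl terminal fragments: a lower clearance means a smaller k, a longer half-time, and therefore accumulation in renal failure.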

PTH regulates calcium and phosphate flux across cellular membranes in bone and kidney, resulting in increased serum calcium and decreased serum phosphate (see Figure 42–1). In bone, PTH increases the activity of osteoblasts, the bone-forming cells, as well as the activity and number of osteoclasts, the cells responsible for bone resorption (see Figure 42–2). However, this stimulation of osteoclasts is not a direct effect. Rather, PTH acts on the osteoblast to induce membrane-bound and secreted soluble forms of a protein called RANK ligand (RANKL). RANKL acts on osteoclasts and osteoclast precursors to increase both the numbers and activity of osteoclasts. This action increases bone remodeling, a specific sequence of cellular events initiated by osteoclastic bone resorption and followed by osteoblastic bone formation. Denosumab, an antibody that inhibits the action of RANKL, has been developed for the treatment of excess bone resorption in patients with osteoporosis and certain cancers. PTH also inhibits the production and secretion of sclerostin from osteocytes. Sclerostin is one of several proteins that block osteoblast proliferation by inhibiting the Wnt pathway. Antibodies against sclerostin (eg, romosozumab) have recently been approved for the treatment of osteoporosis. Unlike PTH, romosozumab does not stimulate osteoclast activity, but rather suppresses it, at least initially. Thus, PTH directly and indirectly increases proliferation of osteoblasts, the cells responsible for bone formation. Although both bone resorption and bone formation are enhanced by PTH, the net effect of excess endogenous PTH is to increase bone resorption. However, administration of exogenous PTH in low and intermittent doses increases bone formation without first stimulating bone resorption. This net anabolic action may be indirect, involving other growth factors such as insulin-like growth factor 1 (IGF1) as well as inhibition of sclerostin as noted above.
These anabolic actions have led to the approval of recombinant PTH 1-34 (teriparatide) and the PTHrP analog abaloparatide for the treatment of osteoporosis. In the kidney, PTH stimulates 1,25(OH)2D production and increases tubular reabsorption of calcium and magnesium, but reduces reabsorption of phosphate, amino acids, bicarbonate, sodium, chloride, and sulfate. As mentioned earlier, full-length PTH (rhPTH 1-84, Natpara) has been approved in part for these renal effects, which otherwise limit standard calcium and calcitriol treatment of hypoparathyroidism.

Vitamin D is a secosteroid produced in the skin from 7-dehydrocholesterol under the influence of ultraviolet radiation. Vitamin D is also found in certain foods and is used to supplement dairy products and other foods. Both the natural form (vitamin D3, cholecalciferol) and the plant-derived form (vitamin D2, ergocalciferol) are present in the diet. As discussed earlier, these forms differ in that ergocalciferol contains a double bond and an additional methyl group in the side chain (see Figure 42–3). Ergocalciferol and its metabolites bind less well than cholecalciferol and its metabolites to vitamin D–binding protein (DBP), the major transport protein of these compounds in blood, and have a somewhat different path of catabolism. As a result, their half-lives are shorter than those of the cholecalciferol metabolites. This influences treatment strategies, as will be discussed. However, the key steps in metabolism and biologic activities of the active metabolites are comparable, so with this exception the following comments apply equally well to both forms of vitamin D.

Vitamin D is a precursor to a number of biologically active metabolites (see Figure 42–3). Vitamin D is first hydroxylated in the liver and other tissues to form 25(OH)D (calcifediol). As noted earlier, there are a number of enzymes with 25-hydroxylase activity. This metabolite is further converted in the kidney to a number of other forms, the best studied of which are 1,25(OH)2D (calcitriol) and 24,25-dihydroxyvitamin D (secalciferol, 24,25[OH]2D), by the enzymes CYP27B1 and CYP24A1, respectively. The regulation of vitamin D metabolism is complex, involving calcium, phosphate, and a variety of hormones, the most important of which are PTH, which stimulates, and FGF23, which inhibits the production of 1,25(OH)2D by the kidney while reciprocally inhibiting or promoting the production of 24,25(OH)2D. The importance of CYP24A1, the enzyme that 24-hydroxylates 25(OH)D and 1,25(OH)2D, is well demonstrated in children and adults with inactivating mutations of this enzyme who develop high levels of calcium and 1,25(OH)2D resulting in kidney damage from nephrocalcinosis and stones. Of the natural metabolites, vitamin D, 25(OH)D (calcifediol) and 1,25(OH)2D (as calcitriol) are available for clinical use (Table 42–1). A number of analogs of 1,25(OH)2D have been synthesized to extend the usefulness of this metabolite to a variety of nonclassic conditions. Calcipotriene (calcipotriol), for example, is being used to treat psoriasis, a hyperproliferative skin disorder (see Chapter 61). Doxercalciferol and paricalcitol are approved for the treatment of secondary hyperparathyroidism in patients with chronic kidney disease. Eldecalcitol is approved in Japan for the treatment of osteoporosis. Other analogs are being investigated for the treatment of various malignancies.

Table 42–1 Vitamin D and its major metabolites and analogs.

Chemical and Generic Names | Abbreviation
Vitamin D3; cholecalciferol | D3
Vitamin D2; ergocalciferol | D2
25-Hydroxyvitamin D3; calcifediol | 25(OH)D3
1,25-Dihydroxyvitamin D3; calcitriol | 1,25(OH)2D3
24,25-Dihydroxyvitamin D3; secalciferol | 24,25(OH)2D3
Dihydrotachysterol | DHT
Calcipotriene (calcipotriol) | None
1α-Hydroxyvitamin D2; doxercalciferol | 1α(OH)D2
19-nor-1,25-Dihydroxyvitamin D2; paricalcitol | 19-nor-1,25(OH)2D2

Vitamin D and its metabolites circulate in plasma tightly bound to the DBP. This α-globulin binds 25(OH)D and 24,25(OH)2D with comparably high affinity and vitamin D and 1,25(OH)2D with lower affinity. There is increasing evidence that it is the free or unbound forms of these metabolites that have biologic activity. This is of clinical importance because patients with liver disease or nephrotic syndrome have lower levels of DBP, whereas DBP levels are increased with estrogen therapy and during the later stages of pregnancy. Furthermore, there are several different forms of DBP in the population with different affinities for the vitamin D metabolites, and, as noted earlier, the affinity of DBP for the D2 metabolites is less than that for the D3 metabolites. Thus, individuals can vary with respect to the fraction of free metabolite available, so that measuring only the total metabolite concentration may be misleading with respect to assessing vitamin D status. In normal subjects, the terminal half-life of injected calcifediol (25[OH]D) is around 23 days, whereas in anephric subjects it is around 42 days. The half-life of 24,25(OH)2D is probably similar. Tracer studies with vitamin D have shown a rapid clearance from the blood. The liver appears to be the principal organ for clearance. Excess vitamin D is stored in adipose tissue. The metabolic clearance of calcitriol (1,25[OH]2D) in humans likewise indicates a rapid turnover, with a terminal half-life measured in hours. Several of the 1,25(OH)2D analogs are bound poorly by DBP. As a result, their clearance is very rapid, with a terminal half-life of minutes. Such analogs have less hypercalcemic and hypercalciuric effect than calcitriol, an important aspect of their use in the management of conditions such as psoriasis and secondary hyperparathyroidism.
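The practical consequence of the longer calcifediol half-life in anephric subjects can be made concrete with the same first-order elimination relation; the sketch below uses the half-lives quoted above, while the 30-day time point is chosen arbitrarily for illustration:

```python
import math

def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of an injected dose remaining after t days,
    assuming simple first-order (exponential) elimination."""
    return math.exp(-math.log(2) * t_days / half_life_days)

# Terminal half-lives of injected calcifediol (25[OH]D) quoted in the text:
normal = fraction_remaining(30, 23)    # normal subjects, t1/2 ~23 days
anephric = fraction_remaining(30, 42)  # anephric subjects, t1/2 ~42 days

print(f"normal: {normal:.2f}, anephric: {anephric:.2f}")  # normal: 0.40, anephric: 0.61
```

After a month, roughly 40% of a dose persists in normal subjects versus about 60% in anephric subjects, illustrating why impaired renal handling prolongs exposure to the metabolite.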

The mechanism of action of the vitamin D metabolites remains under active investigation. However, 1,25(OH)2D is well established as the most potent stimulant of intestinal calcium and phosphate transport and bone resorption. 1,25(OH)2D appears to act on the intestine both by induction of new protein synthesis (eg, calcium-binding protein and TRPV6, an intestinal calcium channel) and by modulation of calcium flux across the brush border and basolateral membranes by processes that do not all require new protein synthesis. The molecular action of 1,25(OH)2D on bone is more complex and controversial as it is both direct and indirect. Much of the skeletal effect is attributed to the provision of adequate calcium and phosphate from the diet by stimulation of their intestinal absorption. However, like PTH, 1,25(OH)2D can induce RANKL in osteoblasts to regulate osteoclast activity and proteins such as osteocalcin and alkaline phosphatase, which may regulate the mineralization process by osteoblasts. The metabolites 25(OH)D and 24,25(OH)2D are far less potent stimulators of intestinal calcium and phosphate transport or bone resorption.

Specific receptors for 1,25(OH)2D (VDR) exist in nearly all tissues, not just intestine, bone, and kidney. As a result much effort has been made to develop analogs of 1,25(OH)2D that will target these nonclassic target tissues without increasing serum calcium. These nonclassic actions include regulation of the secretion of PTH, insulin, and renin; regulation of innate and adaptive immune function through actions on dendritic cell and T-cell differentiation; enhanced muscle function; and proliferation and differentiation of a number of cancer cells. Thus, the potential clinical utility of 1,25(OH)2D and its analogs is expanding. A different receptor has recently been found for 24,25(OH)2D. However, the physiologic role of this receptor is not yet fully understood.

Fibroblast growth factor 23 (FGF23) is a single-chain protein with 251 amino acids, including a 24-amino-acid leader sequence. It inhibits 1,25(OH)2D production and phosphate reabsorption (via the sodium phosphate cotransporters NaPi 2a and 2c) in the kidney and can lead to both hypophosphatemia and inappropriately low levels of circulating 1,25(OH)2D. Whereas FGF23 was originally identified in certain mesenchymal tumors, osteoblasts and osteocytes in bone appear to be its primary site of production. Other tissues can also produce FGF23, though at lower levels. FGF23 requires O-glycosylation for its secretion, a glycosylation mediated by the glycosyl transferase GALNT3. Mutations in GALNT3 result in abnormal deposition of calcium phosphate in periarticular tissues (tumoral calcinosis) with elevated phosphate and 1,25(OH)2D. FGF23 is normally inactivated by cleavage at an RXXR site (amino acids 176–179). Mutations of the arginines (R) in this site lead to excess FGF23, the underlying problem in autosomal dominant hypophosphatemic rickets. A similar disease, X-linked hypophosphatemic rickets (XLH), is due to mutations in PHEX, an endopeptidase, which initially was thought to cleave FGF23. However, this concept has been shown to be invalid, and the mechanism by which PHEX mutations lead to increased FGF23 levels remains obscure. FGF23 binds to FGF receptors (FGFR) 1 and 3c in the presence of the accessory receptor Klotho-α. Both Klotho and the FGFR must be present for signaling in most tissues, although high levels of FGF23 appear to affect cardiomyocytes lacking Klotho. Mutations in Klotho disrupt FGF23 signaling, resulting in elevated phosphate and 1,25(OH)2D levels, a phenotype quite similar to inactivating mutations in FGF23 or GALNT3. FGF23 production is stimulated by 1,25(OH)2D and phosphate and directly or indirectly inhibited by the dentin matrix protein DMP1 found in osteocytes. Mutations in DMP1 lead to increased FGF23 levels and osteomalacia. 
Recently an antibody to FGF23, burosumab, has been approved for the treatment of XLH, an approval likely to extend to other diseases marked by high FGF23 levels.
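The RXXR recognition motif at which FGF23 is normally cleaved is straightforward to express as a pattern match. The sketch below scans a made-up peptide fragment; the sequence is hypothetical, chosen only so that it contains the motif, and is not the actual FGF23 sequence:

```python
import re

# RXXR: arginine, any two residues, arginine (one-letter amino acid codes).
# Note that in FGF23 the site spans amino acids 176-179 of the full protein.
MOTIF = re.compile(r"R..R")

def find_rxxr(seq: str):
    """Return (start, end) spans of candidate RXXR cleavage motifs in seq."""
    return [m.span() for m in MOTIF.finditer(seq)]

# Hypothetical fragment: R at index 3 and R at index 6 form an RXXR site.
print(find_rxxr("AGSRHTRSAVL"))  # [(3, 7)]
print(find_rxxr("AAAA"))         # []
```

Mutating either arginine in such a motif destroys the match, which parallels the biology described above: arginine mutations at the RXXR site prevent cleavage and lead to excess intact FGF23.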

A summary of the principal actions of PTH, FGF23, and vitamin D on the three main target tissues—intestine, kidney, and bone—is presented in Table 42–2. The net effect of PTH is to raise serum calcium, reduce serum phosphate, and increase 1,25(OH)2D; the net effect of FGF23 is to decrease serum phosphate and 1,25(OH)2D; the net effect of vitamin D is to raise both calcium and phosphate while decreasing PTH and increasing FGF23. Regulation of calcium and phosphate homeostasis is achieved through important feedback loops. Calcium is one of two principal regulators of PTH secretion. It binds to a novel ion recognition site that is part of a Gq protein-coupled receptor called the calcium-sensing receptor (CaSR) that employs the phosphoinositide second messenger system to link changes in the extracellular calcium concentration to changes in the intracellular free calcium. As serum calcium levels rise and activate this receptor, intracellular calcium levels increase and inhibit PTH secretion. This inhibition by calcium of PTH secretion, along with inhibition of renin and atrial natriuretic peptide secretion, is the opposite of the effect of calcium in other tissues such as the beta cell of the pancreas, in which calcium stimulates secretion. Phosphate regulates PTH secretion directly and indirectly. Its indirect actions are the result of forming complexes with calcium in the serum. Because it is the ionized free concentration of extracellular calcium that is detected by the parathyroid gland, increases in serum phosphate levels reduce the ionized calcium levels, leading to enhanced PTH secretion. Whether the parathyroid gland expresses phosphate receptors that mediate the direct action of phosphate on PTH secretion remains unclear. Such feedback regulation is appropriate to the net effect of PTH to raise serum calcium and reduce serum phosphate levels. 
Likewise, both calcium and phosphate at high levels reduce the amount of 1,25(OH)2D produced by the kidney and increase the amount of 24,25(OH)2D produced.

Table 42–2 Actions of parathyroid hormone (PTH), vitamin D, and FGF23 on gut, bone, and kidney.

Intestine
  PTH: Increased calcium and phosphate absorption (by increased 1,25[OH]2D production)
  Vitamin D: Increased calcium and phosphate absorption by 1,25(OH)2D
  FGF23: Decreased calcium and phosphate absorption by decreased 1,25(OH)2D production

Kidney
  PTH: Decreased calcium excretion, increased phosphate excretion, stimulation of 1,25(OH)2D production
  Vitamin D: Calcium and phosphate excretion may be decreased by 25(OH)D and 1,25(OH)2D¹
  FGF23: Increased phosphate excretion, decreased 1,25(OH)2D production

Bone
  PTH: Calcium and phosphate resorption increased by high doses; low doses increase bone formation
  Vitamin D: Increased calcium and phosphate resorption by 1,25(OH)2D; bone formation may be increased by 1,25(OH)2D
  FGF23: Decreased mineralization due to hypophosphatemia and low 1,25(OH)2D levels

Net effect on serum levels
  PTH: Serum calcium increased, serum phosphate decreased
  Vitamin D: Serum calcium and phosphate both increased
  FGF23: Decreased serum phosphate

¹Direct effect. Vitamin D also indirectly increases urine calcium owing to increased calcium absorption from the intestine and decreased PTH.

High serum calcium works directly and indirectly by reducing PTH secretion. High serum phosphate works directly and indirectly by increasing FGF23 levels. Since 1,25(OH)2D raises serum calcium and phosphate, whereas 24,25(OH)2D has less effect, such feedback regulation is again appropriate. 1,25(OH)2D directly inhibits PTH secretion (independent of its effect on serum calcium) by a direct inhibitory effect on PTH gene transcription. The parathyroid gland expresses both the VDR and CYP27B1, so that endogenous production of 1,25(OH)2D within the parathyroid gland may be more important for the regulation of PTH secretion than serum levels of 1,25(OH)2D. This provides yet another negative feedback loop. In patients with chronic renal failure, who frequently are deficient in producing 1,25(OH)2D due in part to elevated FGF23 levels, loss of this 1,25(OH)2D-mediated feedback loop coupled with impaired phosphate excretion and intestinal calcium absorption leads to secondary hyperparathyroidism. The ability of 1,25(OH)2D to inhibit PTH secretion directly is being exploited with calcitriol analogs that have less effect on serum calcium because of their lesser effect on intestinal calcium absorption. Such drugs are proving useful in the management of secondary hyperparathyroidism accompanying chronic kidney disease and may be useful in selected cases of primary hyperparathyroidism. 1,25(OH)2D also stimulates the production of FGF23. This completes the negative feedback loop in that FGF23 inhibits 1,25(OH)2D production while promoting hypophosphatemia, which in turn inhibits FGF23 production and stimulates 1,25(OH)2D production. However, the rise in FGF23 in the early stages of renal failure remains unexplained; it is not due to increases in either 1,25(OH)2D or phosphate and appears not to be under the same feedback control as operates under normal physiologic conditions. The interaction between FGF23 and PTH is less clear.
FGF23 has been found to suppress PTH secretion, although it appears to enhance PTH actions on the kidney, at least with respect to phosphate excretion. PTH, on the other hand, has been reported to promote FGF23 production in bone.

A number of hormones modulate the actions of PTH, FGF23, and vitamin D in regulating bone mineral homeostasis. Compared with that of PTH, FGF23, and vitamin D, the physiologic impact of such secondary regulation on bone mineral homeostasis is minor. However, in pharmacologic amounts, several of these hormones, including calcitonin, glucocorticoids, and estrogens, have actions on bone mineral homeostatic mechanisms that can be exploited therapeutically.

The calcitonin secreted by the parafollicular cells of the mammalian thyroid is a single-chain peptide hormone with 32 amino acids and a molecular weight of 3600. A disulfide bond between positions 1 and 7 is essential for biologic activity. Calcitonin is produced from a precursor with a molecular weight of 15,000. The circulating forms of calcitonin are multiple, ranging in size from the monomer (molecular weight 3600) to forms with an apparent molecular weight of 60,000. Whether such heterogeneity includes precursor forms or covalently linked oligomers is not known. Because of its chemical heterogeneity, calcitonin preparations are standardized by bioassay in rats. Activity is compared to a standard maintained by the British Medical Research Council (MRC) and expressed as MRC units.

Human calcitonin monomer has a half-life of about 10 minutes. Salmon calcitonin has a longer half-life of 40–50 minutes, making it more attractive as a therapeutic agent. Much of the clearance occurs in the kidney by metabolism; little intact calcitonin appears in the urine.

The principal effects of calcitonin are to lower serum calcium and phosphate by actions on bone and kidney. Calcitonin inhibits osteoclastic bone resorption. Although bone formation is not impaired at first after calcitonin administration, with time both formation and resorption of bone are reduced. In the kidney, calcitonin reduces both calcium and phosphate reabsorption as well as reabsorption of other ions, including sodium, potassium, and magnesium. Tissues other than bone and kidney are also affected by calcitonin. Calcitonin in pharmacologic amounts decreases gastrin secretion and reduces gastric acid output while increasing secretion of sodium, potassium, chloride, and water in the gut. Pentagastrin is a potent stimulator of calcitonin secretion (as is hypercalcemia), suggesting a possible physiologic relationship between gastrin and calcitonin. In the adult human, no readily demonstrable problem develops in cases of calcitonin deficiency (thyroidectomy) or excess (medullary carcinoma of the thyroid). However, the ability of calcitonin to block bone resorption and lower serum calcium makes it a useful drug for the treatment of Paget disease, hypercalcemia, and osteoporosis, albeit a less efficacious drug than other available agents such as the bisphosphonates.

Glucocorticoid hormones alter bone mineral homeostasis by antagonizing vitamin D-stimulated intestinal calcium transport, stimulating renal calcium excretion, blocking bone formation, and at least initially stimulating bone resorption. Although these observations underscore the negative impact of glucocorticoids on bone mineral homeostasis, these hormones have proved useful in reversing the hypercalcemia associated with lymphomas and granulomatous diseases such as sarcoidosis in which unregulated ectopic production of 1,25[OH]2D occurs or in cases of vitamin D intoxication. Prolonged administration of glucocorticoids is a common cause of osteoporosis in adults and can cause stunted skeletal development in children (see Chapter 39).

Estrogens can prevent accelerated bone loss during the immediate postmenopausal period and at least transiently increase bone in postmenopausal women.

The prevailing hypothesis advanced to explain these observations is that estrogens reduce the bone-resorbing action of PTH. Estrogen administration leads to an increased 1,25(OH)2D level in blood, but estrogens have no direct effect on 1,25(OH)2D production in vitro. The increased 1,25(OH)2D levels in vivo following estrogen treatment may result from decreased serum calcium and phosphate and increased PTH. However, estrogens also increase DBP production by the liver, which increases the total concentrations of the vitamin D metabolites in circulation without necessarily increasing the free levels. Estrogen receptors have been found in bone, and estrogen has direct effects on bone remodeling. Case reports of men who lack the estrogen receptor or who are unable to produce estrogen because of aromatase deficiency noted marked osteopenia and failure to close epiphyses. This further substantiates the role of estrogen in bone development, even in men. The principal therapeutic application for estrogen administration in disorders of bone mineral homeostasis is the treatment or prevention of postmenopausal osteoporosis. However, long-term use of estrogen has fallen out of favor due to concern about adverse effects. Selective estrogen receptor modulators (SERMs) have been developed to retain the beneficial effects on bone while minimizing deleterious effects on breast, uterus, and the cardiovascular system (see Box: Therapies for Osteoporosis and Chapter 40).

Therapies for Osteoporosis

Bone undergoes a continuous remodeling process involving resorption and formation. Any process that disrupts this balance by increasing bone resorption relative to formation results in osteoporosis. Inadequate gonadal hormone production is a major cause of osteoporosis in men and women. Estrogen replacement therapy at menopause is a well-established means of preventing osteoporosis in the female, but many women fear its adverse effects, particularly the increased risk of breast cancer from continued estrogen use (the well-demonstrated increased risk of endometrial cancer is prevented by combining the estrogen with a progestin) and do not like the persistence of menstrual bleeding that often accompanies this form of therapy. Medical enthusiasm for this treatment has waned with the demonstration that it does not protect against and may increase the risk of heart disease. Raloxifene was the first of the selective estrogen receptor modulators (SERMs; see Chapter 40) to be approved for the prevention of osteoporosis. Raloxifene shares some of the beneficial effects of estrogen on bone without increasing the risk of breast or endometrial cancer (it may actually reduce the risk of breast cancer). Although not as effective as estrogen in increasing bone density, raloxifene has been shown to reduce vertebral fractures.

Nonhormonal forms of therapy for osteoporosis have been developed with proven efficacy in reducing fracture risk. Bisphosphonates such as alendronate, risedronate, ibandronate, and zoledronate have been conclusively shown to increase bone density and reduce fractures over at least 5 years when used continuously at a dosage of 10 mg/d or 70 mg/week for alendronate; 5 mg/d or 35 mg/week for risedronate; 2.5 mg/d or 150 mg/month for ibandronate; and 5 mg annually for intravenous zoledronate. Side-by-side trials between alendronate and calcitonin (another approved nonestrogen drug for osteoporosis) indicated a greater efficacy of alendronate. Bisphosphonates are poorly absorbed and must be given on an empty stomach or infused intravenously. At the higher oral doses used in the treatment of Paget disease, alendronate causes gastric irritation, but this is not a significant problem at the doses recommended for osteoporosis when patients are instructed to take the drug with a glass of water and remain upright. Denosumab is a human monoclonal antibody directed against RANKL, and it is very effective in inhibiting osteoclastogenesis and activity. Denosumab is given in 60-mg doses subcutaneously every 6 months. Unlike the bisphosphonates, when denosumab treatment is discontinued there is often a surge of bone resorption that is only partially prevented with antiresorptive agents like zoledronate. However, recent trials indicate that denosumab treatment continues to increase bone mineral density up to 10 years, unlike the plateau seen with bisphosphonates after a couple of years. All of these drugs inhibit bone resorption with secondary effects to inhibit bone formation. On the other hand, teriparatide, the recombinant form of PTH 1-34 and abaloparatide, an analog of PTHrP, directly stimulate bone formation as well as bone resorption. However, teriparatide and abaloparatide are given daily by subcutaneous injection. 
Their efficacy in preventing fractures is at least as great as that of the bisphosphonates. More recently, antibodies to sclerostin, an inhibitor of bone formation that is produced in osteocytes, have been developed. Romosozumab is now approved for the treatment of osteoporosis. Romosozumab is injected subcutaneously once monthly in 210-mg doses for one year. No renal adjustment is required. However, it carries a black box warning indicating it should not be used in patients at risk of stroke or coronary vascular disease. It is important that, following the one-year course, patients be started on an antiresorptive agent to maintain the bone that was gained. In all cases, adequate intake of calcium and vitamin D needs to be maintained.
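The intermittent oral bisphosphonate regimens quoted in this box can be compared against their daily counterparts with simple arithmetic (the drug names and doses are taken from the text; the comparison itself is only an illustration):

```python
# Daily vs intermittent oral bisphosphonate regimens quoted in the text (mg).
regimens = {
    "alendronate": {"daily": 10,  "intermittent": 70,  "days": 7},   # weekly
    "risedronate": {"daily": 5,   "intermittent": 35,  "days": 7},   # weekly
    "ibandronate": {"daily": 2.5, "intermittent": 150, "days": 30},  # monthly
}

for drug, r in regimens.items():
    # Ratio of the intermittent dose to the cumulative daily dose over the same interval.
    ratio = r["intermittent"] / (r["daily"] * r["days"])
    print(f"{drug}: intermittent dose is {ratio:.1f}x the cumulative daily dose")
```

The weekly regimens match the cumulative daily dose exactly (70 = 7 × 10 mg; 35 = 7 × 5 mg), whereas the monthly ibandronate dose of 150 mg is twice the 75 mg that a 2.5-mg daily regimen would deliver over 30 days.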

Furthermore, there are several other forms of therapy used in other countries but not available in the United States. In Europe, strontium ranelate, a drug that appears to stimulate bone formation and inhibit bone resorption, has been used for several years with favorable results in large clinical trials. However, approval for its use in the United States has not been achieved. In Japan, eldecalcitol, an analog of 1,25(OH)2D, has been approved for the treatment of osteoporosis with minimal effects on serum calcium. It is not yet available in the United States.

The bisphosphonates are analogs of pyrophosphate in which the P-O-P bond has been replaced with a nonhydrolyzable P-C-P bond (Figure 42–4). Currently available bisphosphonates include etidronate, pamidronate, alendronate, risedronate, tiludronate, ibandronate, and zoledronate. With the development of the more potent bisphosphonates, etidronate is seldom used.

Figure 42–4 The structure of pyrophosphate and of the first three bisphosphonates—etidronate, pamidronate, and alendronate—that were approved for use in the United States.

Results from animal and clinical studies indicate that less than 10% of an oral dose of these drugs is absorbed. Food reduces absorption even further, necessitating their administration on an empty stomach. A major adverse effect of oral forms of the bisphosphonates (risedronate, alendronate, ibandronate) is esophageal and gastric irritation, which limits the use of this route by patients with upper gastrointestinal disorders. This complication can be circumvented with infusions of pamidronate, zoledronate, and ibandronate. Intravenous dosing also allows a larger amount of drug to enter the body and markedly reduces the frequency of administration (eg, zoledronate is infused once per year). Nearly half of the absorbed drug accumulates in bone; the remainder is excreted unchanged in the urine. Decreased renal function dictates a reduction in dosage. The portion of drug retained in bone depends on the rate of bone turnover; drug in bone often is retained for months to years.

The bisphosphonates exert multiple effects on bone mineral homeostasis, which make them useful for the treatment of hypercalcemia associated with malignancy, for Paget disease, and for osteoporosis (see Box: Therapies for Osteoporosis). They owe at least part of their clinical usefulness and toxicity to their ability to retard formation and dissolution of hydroxyapatite crystals within and outside the skeletal system as well as inhibiting osteoclast activity. Some of the newer bisphosphonates appear to increase bone mineral density well beyond the 2-year period predicted for a drug whose effects are limited to slowing bone resorption. This may be due to their other cellular effects, which include inhibition of 1,25(OH)2D production, inhibition of intestinal calcium transport, metabolic changes in bone cells such as inhibition of glycolysis, inhibition of cell growth, and changes in acid and alkaline phosphatase activity.

Amino bisphosphonates such as alendronate and risedronate inhibit farnesyl pyrophosphate synthase, an enzyme in the mevalonate pathway that appears to be critical for osteoclast survival. The cholesterol-lowering statin drugs (eg, lovastatin), which block mevalonate synthesis (see Chapter 35), stimulate bone formation, at least in animal studies. Thus, the mevalonate pathway appears to be important in bone cell function and provides new targets for drug development. The mevalonate pathway effects vary depending on the bisphosphonate used (only amino bisphosphonates have this property) and may account for some of the clinical differences observed in the effects of the various bisphosphonates on bone mineral homeostasis.

With the exception of the induction of a mineralization defect by higher than approved doses of etidronate and gastric and esophageal irritation by the oral bisphosphonates, these drugs have proved to be remarkably free of adverse effects when used at the doses recommended for the treatment of osteoporosis. Esophageal irritation can be minimized by taking the drug with a full glass of water and remaining upright for 30 minutes or by using the intravenous forms of these compounds. The initial infusion of zoledronate is commonly associated with several days of a flu-like syndrome that generally does not recur with subsequent infusions. Of other complications, osteonecrosis of the jaw has received considerable attention but is rare in patients receiving usual doses of bisphosphonates (perhaps 1/100,000 patient-years). This complication is more frequent when high intravenous doses of zoledronate are used to control bone metastases and cancer-induced hypercalcemia. Concern has also been raised about over-suppressing bone turnover. This may underlie the occurrence of subtrochanteric femur fractures in patients on long-term bisphosphonate treatment. This complication appears to be rare, comparable to that of osteonecrosis of the jaw, but has led some authorities to recommend a “drug holiday” after 5 years of treatment if the clinical condition warrants it (ie, if the fracture risk of discontinuing the bisphosphonate is not deemed high). Impetus to reevaluate the results of antiresorptive therapy after 5 years of treatment (3 years for zoledronate) comes from the observation that these relatively rare side effects become more common as treatment extends beyond 5 years.

Denosumab is a fully humanized monoclonal antibody that binds to and prevents the action of RANKL. As described earlier, RANKL is produced by osteoblasts and other cells, including T lymphocytes. It stimulates osteoclastogenesis via RANK, the receptor for RANKL that is present on osteoclasts and osteoclast precursors. By interfering with RANKL function, denosumab inhibits osteoclast formation and activity. It is at least as effective as the potent bisphosphonates in inhibiting bone resorption and has been approved for treatment of postmenopausal osteoporosis and some cancers (prostate and breast). The latter application is to limit the development of bone metastases or bone loss resulting from the use of drugs that suppress gonadal function. Denosumab is administered subcutaneously every 6 months. The drug appears to be well tolerated, but four concerns remain. First, a number of cells in the immune system also express RANKL, suggesting that there could be an increased risk of infection associated with the use of denosumab. Second, because the suppression of bone turnover with denosumab is similar to that of the potent bisphosphonates, the potential risk of osteonecrosis of the jaw and subtrochanteric fractures is comparable. Third, denosumab can lead to transient hypocalcemia, especially in patients with marked bone loss (and bone hunger) or compromised calcium regulatory mechanisms, including chronic kidney disease and vitamin D deficiency. That said, denosumab can be used in patients with advanced renal disease, unlike the bisphosphonates, as it is not cleared by the kidney, and it has the advantage over bisphosphonates in that it is readily reversible because it does not deposit in bone. However, when used in patients with renal failure, careful attention to serum calcium levels is necessary. 
Fourth, unlike the bisphosphonates, denosumab does not result in the death of osteoclasts, so if denosumab therapy is interrupted, a surge of bone resorption can occur, putting the patient at risk for new fractures. Efforts to prevent this surge of bone resorption with potent antiresorptives like zoledronate are only partially effective.

Sclerostin is a protein produced by osteocytes that blocks the action of the Wnt receptor in osteoblasts. When activated by selected Wnts, this receptor promotes beta-catenin signaling, which increases the proliferation of osteoblasts. By blocking Wnt activation, sclerostin suppresses bone formation. Antibodies to sclerostin have been developed; of these, only romosozumab has been FDA approved for the treatment of osteoporosis. This form of therapy promotes bone formation and inhibits bone resorption, although the mechanism for its effects on osteoclasts is not well understood. The use of romosozumab is limited to one year, after which it is recommended that patients be switched to an antiresorptive to prevent loss of the bone gained. In some of the trials with this drug there appeared to be an increased risk of myocardial infarction (MI), stroke, and cardiovascular death, generating a black box warning that contraindicates its use in patients with an MI or stroke within the past year.

Cinacalcet is the first representative of a new class of drugs that activates the calcium-sensing receptor (CaSR) described above. Etelcalcetide is a more recently approved and somewhat more potent calcimimetic that currently is approved only for secondary hyperparathyroidism in CKD patients on dialysis. CaSR is widely distributed but has its greatest concentration in the parathyroid gland. By activating the parathyroid gland CaSR, these drugs inhibit PTH secretion. These drugs are approved for the treatment of secondary hyperparathyroidism in chronic kidney disease (CKD), and cinacalcet is also approved for the treatment of parathyroid carcinoma and severe primary hyperparathyroidism. In CKD patients requiring dialysis, hypocalcemia can occur with the use of these drugs, and nausea is often a limiting factor. CaSR antagonists are also being developed and may be useful in conditions of hypoparathyroidism or as a means to stimulate intermittent PTH secretion in the treatment of osteoporosis.

The chemistry and pharmacology of the thiazide family of drugs are discussed in Chapter 15. The principal application of thiazides in the treatment of bone mineral disorders is in reducing renal calcium excretion. Thiazides may increase the effectiveness of PTH in stimulating reabsorption of calcium by the renal tubules or may act on calcium reabsorption secondarily by increasing sodium reabsorption in the proximal tubule. In the distal tubule, thiazides block sodium reabsorption at the luminal surface, increasing the calcium-sodium exchange at the basolateral membrane and thus enhancing calcium reabsorption into the blood at this site (see Figure 15–4). Thiazides have proved to be useful in reducing the hypercalciuria and incidence of urinary stone formation in subjects with idiopathic hypercalciuria. Part of their efficacy in reducing stone formation may lie in their ability to decrease urine oxalate excretion and increase urine magnesium and zinc levels, both of which inhibit calcium oxalate stone formation. By reducing urine calcium losses, these drugs may also support treatment of osteoporosis.

Fluoride is well established as effective for the prophylaxis of dental caries and has previously been investigated for the treatment of osteoporosis. Both therapeutic applications originated from epidemiologic observations that subjects living in areas with naturally fluoridated water (1–2 ppm) had fewer dental caries and fewer vertebral compression fractures than subjects living in nonfluoridated water areas. Fluoride accumulates in bones and teeth, where it may stabilize the hydroxyapatite crystal. Such a mechanism may explain the effectiveness of fluoride in increasing the resistance of teeth to dental caries, but it does not explain its ability to promote new bone growth.

Fluoride in drinking water appears to be most effective in preventing dental caries if consumed before the eruption of the permanent teeth. The optimum concentration in drinking water supplies is 0.5–1 ppm. Topical application is most effective if done just as the teeth erupt. There is little further benefit to giving fluoride after the permanent teeth are fully formed. Excess fluoride in drinking water leads to mottling of the enamel proportionate to the concentration above 1 ppm.

Fluoride has also been evaluated for the treatment of osteoporosis. Results of earlier studies indicated that fluoride alone, without adequate calcium supplementation, produced osteomalacia. Subsequent studies in which calcium supplementation was adequate demonstrated an improvement in calcium balance, an increase in bone mineral, and an increase in trabecular bone volume. Despite these promising effects of fluoride on bone mass, clinical studies have failed to demonstrate a reliable reduction in fractures, and some studies showed an increase in fracture rate. At present, fluoride is not approved by the US Food and Drug Administration (FDA) for treatment or prevention of osteoporosis, and it is unlikely to be.

Adverse effects observed—at the higher doses used for testing fluoride’s effect on bone—include nausea and vomiting, gastrointestinal blood loss, arthralgias, and arthritis in a substantial proportion of patients. Such effects are usually responsive to reduction of the dose or giving fluoride with meals (or both).

Strontium ranelate is composed of two atoms of strontium bound to an organic ion, ranelic acid. Although not approved for use in the United States, this drug is used in Europe for the treatment of osteoporosis. Strontium ranelate appears to block differentiation of osteoclasts while promoting their apoptosis, thus inhibiting bone resorption. At the same time, strontium ranelate appears to promote bone formation. Unlike bisphosphonates, denosumab, or teriparatide, but similar to romosozumab, strontium ranelate increases bone formation markers while inhibiting bone resorption markers. Large clinical trials have demonstrated its efficacy in increasing bone mineral density and decreasing fractures in the spine and hip. Toxicities reported thus far are similar to placebo.

Book Chapter
14. Agents Used in Cardiac Arrhythmias

The electrical impulse that triggers a normal cardiac contraction originates at regular intervals in the sinoatrial (SA) node (Figure 14–1), usually at a frequency of 60–100 bpm. This impulse spreads rapidly through the atria and enters the atrioventricular (AV) node, which is normally the only conduction pathway between the atria and ventricles. Conduction through the AV node is slow, requiring about 0.15 seconds. (This delay provides time for atrial contraction to propel blood into the ventricles.) The impulse then propagates down the His-Purkinje system and invades all parts of the ventricles, beginning with the endocardial surface near the apex and ending with the epicardial surface at the base of the heart. Activation of the entire ventricular myocardium is complete in less than 0.1 seconds. As a result, ventricular contraction is synchronous and hemodynamically effective. Arrhythmias represent electrical activity that deviates from the above description as a result of an abnormality in impulse initiation and/or impulse propagation.

Figure 14–1 Schematic representation of the heart and normal cardiac electrical activity (intracellular recordings from areas indicated and electrocardiogram [ECG]). Sinoatrial (SA) node, atrioventricular (AV) node, and Purkinje cells display pacemaker activity (phase 4 depolarization). The ECG is the body surface manifestation of the depolarization and repolarization waves of the heart. The P wave is generated by atrial depolarization, the QRS by ventricular muscle depolarization, and the T wave by ventricular repolarization. Thus, the PR interval is a measure of conduction time from atrium to ventricle, and the QRS duration indicates the time required for all of the ventricular cells to be activated (ie, the intraventricular conduction time). The QT interval reflects the duration of the ventricular action potential.
katzung16_ch14_f001

Ionic Basis of Membrane Electrical Activity

The electrical excitability of cardiac cells is a function of the unequal distribution of ions across the plasma membrane—chiefly sodium (Na+), potassium (K+), calcium (Ca2+), and chloride (Cl−)—and the relative permeability of the membrane to each ion. The gradients are generated by transport mechanisms that move these ions across the membrane against their concentration gradients. The most important of these transport mechanisms is the Na+/K+-ATPase, or sodium pump, described in Chapter 13. It is responsible for keeping the intracellular sodium concentration low and the intracellular potassium concentration high relative to their respective extracellular concentrations. Other transport mechanisms maintain the gradients for calcium and chloride.

As a result of the unequal distribution, when the membrane becomes permeable to a given ion, that ion tends to move down its concentration gradient. However, because of its charged nature, ion movement is also affected by differences in the electrical charge across the membrane, or the transmembrane potential. The potential difference that is sufficient to offset or balance the concentration gradient of an ion is referred to as the equilibrium potential (Eion) for that ion, and for a monovalent cation at physiologic temperature, it can be calculated by a modified version of the Nernst equation:

Eion = 61 × log10 (Ce/Ci)

where Ce and Ci are the extracellular and intracellular ion concentrations, respectively. Thus, the movement of an ion across the membrane of a cell is a function of the difference between the transmembrane potential and the equilibrium potential. This is also known as the “electrochemical gradient” or “driving force.”
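As a numeric illustration, the simplified Nernst relation for a monovalent cation at body temperature (Eion ≈ 61 × log10(Ce/Ci), in millivolts) reproduces the equilibrium potentials quoted later in this section. The sketch below is illustrative; the hyperkalemic value of 8 mmol/L is an invented example, not from the text.

```python
import math

def nernst_mv(c_out, c_in):
    """Equilibrium potential (mV) for a monovalent cation at about 37 degC,
    using the simplified relation Eion = 61 * log10(Ce/Ci)."""
    return 61 * math.log10(c_out / c_in)

# Concentrations (mmol/L) quoted later in this section:
e_k = nernst_mv(5, 150)    # potassium: about -90 mV
e_na = nernst_mv(140, 10)  # sodium: about +70 mV

# Raising extracellular K+ (hyperkalemia; 8 mmol/L is an invented value)
# shifts EK in the positive direction, ie, depolarizes the cell:
e_k_high = nernst_mv(8, 150)
print(round(e_k), round(e_na), round(e_k_high))  # -90 70 -78
```

The same arithmetic underlies the discussion of hyper- and hypokalemia in the Box: Effects of Potassium.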

The relative permeability of the membrane to different ions determines the transmembrane potential. However, ions contributing to this potential difference are unable to diffuse freely across the lipid membrane of a cell. Their movement depends on aqueous channels (specific pore-forming proteins). The ion channels that are thought to contribute to cardiac action potentials are illustrated in Figure 14–2. Most channels are relatively ion-specific, and the current generated by the flux of ions through them is controlled by “gates” (flexible portions of the peptide chains that make up the channel proteins). Sodium, calcium, and some potassium channels are thought to have two types of gates—one that opens or activates the channel and another that closes or inactivates the channel. For the majority of the channels responsible for the cardiac action potential, the movement of these gates is controlled by voltage changes across the cell membrane; that is, they are voltage-sensitive. However, certain channels are primarily ligand- rather than voltage-gated. Furthermore, the activity of many voltage-gated ion channels can be modulated by a variety of other factors, including permeant ion concentrations, tissue metabolic activity, and second messenger signaling pathways.

Figure 14–2 Schematic diagram of the ion permeability changes and transport processes that occur during an action potential. Yellow indicates inward (depolarizing) membrane currents; blue indicates outward (repolarizing) membrane currents. Multiple subtypes of potassium and calcium currents, with different sensitivities to blocking drugs, have been identified. The right side of the figure lists the genes and proteins responsible for each type of channel or transporter.
katzung16_ch14_f002

Pumps and exchangers that contribute indirectly to the membrane potential by creating ion gradients (as discussed above) can also contribute directly because of the current they generate through the unequal exchange of charged ions across the membrane. Such transporters are referred to as being “electrogenic.” An important example is the sodium-calcium exchanger (NCX). Throughout most of the cardiac action potential, this exchanger couples the movement of one calcium ion out of the cell for every three sodium ions that move in, thus generating a net inward or depolarizing current. Although this current is typically small during diastole, when intracellular calcium concentrations are low, spontaneous release of calcium from intracellular storage sites can activate this exchange mechanism, generating a depolarizing current that contributes to pacemaker activity as well as arrhythmogenic events called delayed afterdepolarizations (see below).
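The electrogenic nature of the exchanger follows directly from its 3:1 stoichiometry. A one-line sketch of the arithmetic (illustrative, with charge counted in elementary units):

```python
def ncx_net_charge(na_in=3, ca_out=1):
    """Net charge (elementary charges) entering the cell per NCX cycle:
    each Na+ carries +1 inward; each Ca2+ removed carries +2 outward."""
    return na_in * 1 - ca_out * 2

# 3 Na+ in and 1 Ca2+ out -> net +1 charge in per cycle,
# ie, a small inward (depolarizing) current.
print(ncx_net_charge())
```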

The Active Cell Membrane

In atrial and ventricular cells, the diastolic membrane potential (phase 4) is typically very stable. This is because it is dominated by a potassium permeability or conductance that is due to the activity of channels that generate an inward-rectifying potassium current (IK1). This keeps the membrane potential near the potassium equilibrium potential, EK (about –90 mV when Ke = 5 mmol/L and Ki = 150 mmol/L). It also explains why small changes in extracellular potassium concentration have significant effects on the resting membrane potential of these cells. For example, increasing extracellular potassium shifts the equilibrium potential in a positive direction, causing depolarization of the resting membrane potential. It is important to note, however, that potassium is unique in that changes in the extracellular concentration can also affect the permeability of potassium channels, which can produce some nonintuitive effects (see Box: Effects of Potassium).

Effects of Potassium

Changes in serum potassium can have profound effects on electrical activity of the heart. An increase in serum potassium, or hyperkalemia, can depolarize the resting membrane potential due to changes in EK. If the depolarization is great enough, it can inactivate some of the sodium channels, resulting in increased refractory period duration and slowed impulse propagation. Conversely, a decrease in serum potassium, or hypokalemia, can hyperpolarize the resting membrane potential. This can lead to an increase in pacemaker activity due to greater activation of pacemaker channels, especially in latent pacemakers (eg, Purkinje cells), which are more sensitive to changes in serum potassium than normal pacemaker cells.

In addition to effects on the potassium electrochemical gradient, changes in serum potassium can also produce effects that appear somewhat paradoxical, especially as they relate to action potential duration. This is because changes in serum potassium also affect the potassium conductance (increased potassium increases the conductance, whereas decreased potassium decreases the conductance), and this effect often predominates. As a result, hyperkalemia can reduce action potential duration, and hypokalemia can prolong action potential duration. This effect of potassium probably contributes to the observed increase in sensitivity to potassium channel-blocking antiarrhythmic agents (quinidine or sotalol) during hypokalemia, resulting in accentuated action potential prolongation and a tendency to cause torsades de pointes arrhythmia.

The upstroke (phase 0) of the action potential is due to the inward sodium current (INa). From a functional point of view, the behavior of the channels responsible for this current can be described in terms of three states (Figure 14–3). It is now recognized that these states actually represent different conformations of the channel protein. Depolarization of the membrane by an impulse propagating from adjacent cells results in opening of the activation (m) gates of sodium channels (see Figure 14–3, middle), and sodium permeability is markedly increased. Extracellular sodium is then able to diffuse down its electrochemical gradient into the cell, causing the membrane potential to move very rapidly toward the sodium equilibrium potential, ENa (about +70 mV when Nae = 140 mmol/L and Nai = 10 mmol/L). As a result, the maximum upstroke velocity of the action potential is very fast. This intense influx of sodium is very brief because opening of the m gates upon depolarization is promptly followed by closure of the h gates and inactivation of these channels (see Figure 14–3, right). This inactivation contributes to the early repolarization phase of the action potential (phase 1). In some cardiac myocytes, phase 1 is also due to a brief increase in potassium permeability due to the activity of channels generating fast and slow transient outward currents (Ito,f and Ito,s).

Figure 14–3 A schematic representation of Na+ channels cycling through different conformational states during the cardiac action potential. Transitions between resting, activated, and inactivated states are dependent on membrane potential and time. The activation gate is shown as m and the inactivation gate as h. Potentials typical for each state are shown under each channel schematic as a function of time. The dashed line indicates that part of the action potential during which most Na+ channels are completely or partially inactivated and unavailable for reactivation.
katzung16_ch14_f003
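The three functional states in Figure 14–3 can be summarized as a simple state machine. This is a deliberately coarse sketch: real transitions are continuous, voltage- and time-dependent, and the event names below are invented labels, not standard terminology.

```python
# Toy state machine for the three functional Na+ channel states
# described in the text (resting, activated, inactivated).
TRANSITIONS = {
    ("resting", "depolarize"): "activated",      # m gates open
    ("activated", "inactivate"): "inactivated",  # h gates close
    ("inactivated", "repolarize"): "resting",    # recovery from inactivation
}

def step(state, event):
    # Transitions not listed (eg, depolarizing an inactivated channel)
    # leave the state unchanged: the channel is refractory.
    return TRANSITIONS.get((state, event), state)

state = "resting"
for event in ("depolarize", "inactivate", "repolarize"):
    state = step(state, event)
print(state)  # a full cycle returns the channel to "resting"
```

Note that the "no transition" default captures refractoriness: a depolarizing stimulus arriving while the channel is inactivated has no effect.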

A small fraction of the sodium channels activated during the upstroke may actually remain open well into the later phases of the action potential, generating a late sodium current (INaL). However, sustained depolarization during the plateau (phase 2) is due primarily to the activity of calcium channels. Because the equilibrium potential for calcium, like sodium, is very positive, these channels generate a depolarizing inward current. Cardiac calcium channels activate and inactivate in what appears to be a manner similar to sodium channels, but in the case of the most common type of calcium channel (the “L” type), the transitions occur more slowly and at more positive potentials. After activation, these channels eventually inactivate, decreasing the permeability to calcium, and the permeability to potassium begins to increase, leading to final repolarization (phase 3) of the action potential. Two types of potassium channels are particularly important in phase 3 repolarization. They generate what are referred to as the rapidly activating (IKr) and slowly activating (IKs) delayed rectifier potassium currents. Repolarization, especially late in phase 3, is also aided by the inward rectifying potassium channels that are responsible for the resting membrane potential.

It is noteworthy that other delayed rectifier-type potassium currents also play important roles in repolarization of certain cardiac cell types. For example, the ultra-rapidly activating delayed rectifier potassium current (IKur) is particularly important in repolarizing the atrial action potential. The resting membrane potential and repolarization of atrial myocytes are also affected by a potassium current (IK,ACh) generated by channels that are gated by the parasympathetic neurotransmitter acetylcholine.

Purkinje cells are similar to atrial and ventricular cells in that they generate an action potential with a fast upstroke due to the activity of sodium channels. However, unlike atrial and ventricular cells, the membrane potential during phase 4 exhibits spontaneous depolarization. This is due to the presence of pacemaker channels that generate an inward depolarizing pacemaker current. This is sometimes referred to as the “funny” current (If), because the channels involved have the unusual property of being activated by membrane hyperpolarization. Under some circumstances, Purkinje cells can act as pacemakers for the heart by spontaneously depolarizing and initiating an action potential that is then propagated throughout the ventricular myocardium. However, under normal conditions, the action potential in Purkinje cells is triggered by impulses that originate in the SA node and are conducted to these cells through the AV node.

Pacemaking activity in the SA node is due to spontaneous depolarization during phase 4 of the action potential as well (see Figure 14–1). This diastolic depolarization is mediated in part by the activity of pacemaker channels (If). It is also thought to be due to the net inward current generated by the sodium-calcium exchanger, which is activated by the spontaneous release of calcium from intracellular storage sites. Unlike the action potential in Purkinje cells, spontaneous depolarization in the SA node triggers the upstroke of an action potential that is primarily due to an increase in permeability to calcium, not sodium. Because the calcium channels involved open or activate slowly, the maximum upstroke velocity of the action potential in SA node cells is relatively slow. Repolarization occurs when the calcium channels subsequently close due to inactivation and delayed rectifier-type potassium channels open.

A similar process is involved in generating action potentials in the AV node. Although the intrinsic rate of spontaneous diastolic depolarization in the AV node is typically faster than that of Purkinje cells, it is still slower than the rate of depolarization in the SA node. Therefore, action potentials in the AV node are normally triggered by impulses that originate in the SA node and are conducted to the AV node through the atria. It is important to recognize that action potential upstroke velocity is a key determinant of impulse conduction velocity. Because the action potential upstroke in AV node cells is mediated by calcium channels, which open or activate relatively slowly, impulse conduction through the AV node is slow. This contributes to the delay between atrial and ventricular contraction.

Electrical activity in the SA node and AV node is significantly influenced by the autonomic nervous system (see Chapter 6). Sympathetic activation of β adrenoceptors speeds pacemaker activity in the SA node and impulse propagation through the AV node by enhancing pacemaker and calcium channel activity, respectively. Conversely, parasympathetic activation of muscarinic receptors slows pacemaker activity and conduction velocity by inhibiting the activity of these channels, as well as by increasing the potassium conductance by turning on acetylcholine-activated potassium channels.

The Effect of Membrane Potential on Excitability

A key factor in the pathophysiology of arrhythmias and the actions of antiarrhythmic agents is the relationship between the membrane potential and the effect it has on the ion channels responsible for excitability of the cell. During the plateau of atrial, ventricular, or Purkinje cell action potentials, most sodium channels are inactivated, rendering the cell refractory or inexcitable. Upon repolarization, recovery from inactivation takes place (in the terminology of Figure 14–3, the h gates reopen), making the channels available again for excitation. This is a time- and voltage-dependent process. The actual time required for enough sodium channels to recover from inactivation in order that a new propagated response can be generated is called the refractory period. Full recovery of excitability typically does not occur until action potential repolarization is complete. Thus, refractoriness or excitability can be affected by factors that alter either action potential duration or the resting membrane potential. This relationship can also be significantly impacted by certain classes of antiarrhythmic agents. One example is drugs that block sodium channels. They can reduce the extent and rate of recovery from inactivation (Figure 14–4). Changes in refractoriness caused by either altered recovery from inactivation or altered action potential duration can be important in the genesis or suppression of certain arrhythmias. A reduction in the number of available sodium channels can reduce excitability. In some cases, it may result in the cell being totally refractory or inexcitable. In other cases, there may be a reduction in peak sodium permeability. This can reduce the maximum upstroke velocity of the action potential, which will in turn reduce action potential conduction velocity.

Figure 14–4 Dependence of sodium channel function on the membrane potential preceding the stimulus. Left: The fraction of sodium channels available for opening in response to a stimulus is determined by the membrane potential immediately preceding the stimulus. The decrease in the fraction available when the resting potential is depolarized in the absence of a drug (control curve) results from the voltage-dependent closure of h gates in the channels. The curve labeled Drug illustrates the effect of a typical local anesthetic antiarrhythmic drug. Most sodium channels are inactivated during the plateau of the action potential. Right: The time constant for recovery from inactivation after repolarization also depends on the resting potential. In the absence of drug, recovery occurs in less than 10 ms at normal resting potentials (−85 to −95 mV). Depolarized cells recover more slowly (note logarithmic scale). In the presence of a sodium channel-blocking drug, the time constant of recovery is increased, but the increase is far greater at depolarized potentials than at more negative ones.
katzung16_ch14_f004
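Assuming simple single-exponential recovery, the effect of a slowed time constant can be shown numerically. The τ values below are invented for illustration and are not the values underlying Figure 14–4.

```python
import math

def fraction_recovered(t_ms, tau_ms):
    """Fraction of Na+ channels recovered from inactivation t ms after
    repolarization, assuming single-exponential recovery (illustrative)."""
    return 1 - math.exp(-t_ms / tau_ms)

# Invented time constants: recovery is fast at normal resting potentials
# and much slower when a sodium channel-blocking drug is present or the
# cell is depolarized.
normal = fraction_recovered(10, tau_ms=2)      # ~0.99 recovered at 10 ms
with_drug = fraction_recovered(10, tau_ms=20)  # ~0.39 recovered at 10 ms
print(round(normal, 2), round(with_drug, 2))
```

With the longer time constant, far fewer channels are available 10 ms after repolarization, which is the basis of the prolonged refractoriness produced by sodium channel blockers.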

In cells like those found in the SA and AV nodes, where excitability is determined by the availability of calcium channels, excitability is most sensitive to drugs that block these channels. As a result, calcium channel blockers can decrease pacemaker activity in the SA node as well as conduction velocity in the AV node.

Book Chapter
33. Agents Used in Cytopenias; Hematopoietic Growth Factors

Basic Pharmacology

Iron deficiency is the most common cause of chronic anemia. Like other forms of chronic anemia, iron deficiency anemia leads to pallor, fatigue, dizziness, exertional dyspnea, and other generalized symptoms of tissue hypoxia. The cardiovascular adaptations to chronic anemia—tachycardia, increased cardiac output, vasodilation—can worsen the condition of patients with underlying cardiovascular disease.

Iron forms the nucleus of the iron-porphyrin heme ring, which together with globin chains forms hemoglobin. Hemoglobin reversibly binds oxygen and provides the critical mechanism for oxygen delivery from the lungs to other tissues. In the absence of adequate iron, small erythrocytes with insufficient hemoglobin are formed, giving rise to microcytic hypochromic anemia. Iron-containing heme is also an essential component of myoglobin, cytochromes, and other proteins with diverse biologic functions.

Pharmacokinetics

Free inorganic iron is extremely toxic, but iron is required for essential proteins such as hemoglobin; therefore, evolution has provided an elaborate system for regulating iron absorption, transport, and storage (Figure 33–1). The system uses specialized transport, storage, ferrireductase, and ferroxidase proteins whose concentrations are controlled by the body’s demand for hemoglobin synthesis and adequate iron stores (Table 33–1). A peptide called hepcidin, produced primarily by liver cells, serves as a key central regulator of the system. Nearly all of the iron used to support hematopoiesis is reclaimed from catabolism of the hemoglobin in senescent or damaged erythrocytes. Normally, only a small amount of iron is lost from the body each day, so dietary requirements are small and easily fulfilled by the iron available in a wide variety of foods. However, in special populations with either increased iron requirements (eg, growing children, pregnant women) or increased losses of iron (eg, menstruating women), iron requirements can exceed normal dietary supplies, and iron deficiency can develop.

Figure 33–1 Absorption, transport, and storage of iron. Intestinal epithelial cells actively absorb inorganic iron via the divalent metal transporter 1 (DMT1) and heme iron via the heme carrier protein 1 (HCP1). Iron that is absorbed or released from absorbed heme iron in the intestine (1) is actively transported into the blood by ferroportin (FP) and stored as ferritin (F). In the blood, iron is transported by transferrin (Tf) to erythroid precursors in the bone marrow for synthesis of hemoglobin (Hgb) in red blood cells (RBC) (2), or to hepatocytes for storage as ferritin (3). The transferrin-iron complex binds to transferrin receptors (TfR) in erythroid precursors and hepatocytes and is internalized. After release of iron, the TfR-Tf complex is recycled to the plasma membrane and Tf is released. Macrophages that phagocytize senescent erythrocytes (RBC) reclaim the iron from the RBC hemoglobin and either export it or store it as ferritin (4). Hepatocytes use several mechanisms to take up iron and store it as ferritin. High hepatic iron stores increase hepcidin synthesis, and hepcidin inhibits ferroportin; low hepatocyte iron and increased erythroferrone inhibit hepcidin and enhance iron absorption via ferroportin. Ferrous iron (Fe2+), blue diamonds and squares; ferric iron (Fe3+), red; DB, duodenal cytochrome B; F, ferritin. (Modified from Trevor A: Pharmacology Examination & Board Review, 9th ed. New York, NY: McGraw Hill; 2010.)
Table 33–1 Iron distribution in normal adults.1

                                      Iron Content (mg)
                                      Men       Women
Hemoglobin                            3050      1700
Myoglobin                              430       300
Enzymes                                 10         8
Transport (transferrin)                  8         6
Storage (ferritin and other forms)     750       300
Total                                 4248      2314

1Values are based on data from various sources and assume that normal men weigh 80 kg and have a hemoglobin level of 16 g/dL and that normal women weigh 55 kg and have a hemoglobin level of 14 g/dL.

Reproduced with permission from Wyngaarden JB, Smith LH: Cecil Textbook of Medicine, 18th ed. Philadelphia, PA: Saunders/Elsevier; 1988.

A. Absorption

The average American diet contains 10–15 mg of elemental iron daily. A normal individual absorbs 5–10% of this iron, or about 0.5–1 mg daily. Iron is absorbed in the duodenum and proximal jejunum, although the more distal small intestine can absorb iron if necessary. Iron absorption increases in response to low iron stores or increased iron requirements. Total iron absorption increases to 1–2 mg/d in menstruating women and may be as high as 3–4 mg/d in pregnant women.

Iron is available in a wide variety of foods but is especially abundant in meat. The iron in meat protein can be efficiently absorbed, because heme iron in meat hemoglobin and myoglobin can be absorbed intact without first having to be dissociated into elemental iron (see Figure 33–1). Iron in other foods, especially vegetables and grains, is often tightly bound to organic compounds and is much less available for absorption. Nonheme iron in foods and iron in inorganic iron salts and complexes must be reduced by a ferrireductase to ferrous iron (Fe2+) before it can be absorbed by intestinal mucosal cells.

Iron crosses the luminal membrane of the intestinal mucosal cell by two mechanisms: active transport of ferrous iron by the divalent metal transporter DMT1, and absorption of iron complexed with heme (see Figure 33–1). Together with iron split from absorbed heme, the newly absorbed iron can be actively transported into the blood across the basolateral membrane by a transporter known as ferroportin and oxidized to ferric iron (Fe3+) by the ferroxidase hephaestin. The liver-derived hepcidin inhibits intestinal cell iron release by binding to ferroportin and triggering its internalization and destruction. Excess iron is stored in intestinal epithelial cells as ferritin, a water-soluble complex consisting of a core of ferric hydroxide covered by a shell of a specialized storage protein called apoferritin.

B. Transport

Iron is transported in the plasma bound to transferrin, a β-globulin that can bind two molecules of ferric iron (see Figure 33–1). The transferrin-iron complex enters maturing erythroid cells by a specific receptor mechanism. Transferrin receptors—integral membrane glycoproteins present in large numbers on proliferating erythroid cells—bind and internalize the transferrin-iron complex through the process of receptor-mediated endocytosis. In endosomes, the ferric iron is released, reduced to ferrous iron, and transported by DMT1 into the cytoplasm, where it is funneled into hemoglobin synthesis or stored as ferritin. The transferrin-transferrin receptor complex is recycled to the cell membrane, where the transferrin dissociates and returns to the plasma. This process provides an efficient mechanism for supplying the iron required by developing red blood cells.

Increased erythropoiesis is associated with an increase in the number of transferrin receptors on developing erythroid cells and a reduction in hepatic hepcidin release. Iron store depletion and iron deficiency anemia are associated with an increased concentration of serum transferrin.

C. Storage

In addition to the storage of iron in intestinal mucosal cells, iron is also stored, primarily as ferritin, in macrophages in the liver, spleen, and bone, and in parenchymal liver cells (see Figure 33–1). The mobilization of iron from macrophages and hepatocytes is primarily controlled by hepcidin regulation of ferroportin activity. Low hepcidin concentrations result in iron release from these storage sites; high hepcidin concentrations inhibit iron release. Ferritin is detectable in serum. Since the ferritin present in serum is in equilibrium with storage ferritin in reticuloendothelial tissues, the serum ferritin level can be used to estimate total body iron stores.

D. Elimination

There is no mechanism for excretion of iron. Small amounts are lost in the feces by exfoliation of intestinal mucosal cells, and trace amounts are excreted in bile, urine, and sweat. These losses account for no more than 1 mg of iron per day. Because the body’s ability to excrete iron is so limited, regulation of iron balance must be achieved by changing intestinal absorption and storage of iron in response to the body’s needs. As noted below, impaired regulation of iron absorption leads to serious pathology.

Clinical Pharmacology

A. Indications for the Use of Iron

The only clinical indication for the use of iron preparations is the treatment or prevention of iron deficiency anemia. This manifests as a hypochromic, microcytic anemia in which the erythrocyte mean cell volume (MCV) and the mean cell hemoglobin concentration are low (Table 33–2). Iron deficiency is commonly seen in populations with increased iron requirements. These include infants, especially premature infants; children during rapid growth periods; pregnant and lactating women; and patients with chronic kidney disease who lose erythrocytes at a relatively high rate during hemodialysis and also form them at a high rate as a result of treatment with the erythrocyte growth factor erythropoietin (see below). Inadequate iron absorption also can cause iron deficiency. This is seen after gastrectomy and in patients with severe small bowel disease that results in generalized malabsorption.

Table 33–2 Distinguishing features of the nutritional anemias.

Nutritional Deficiency

Type of Anemia

Laboratory Abnormalities

MCV, mean cell volume; MCHC, mean cell hemoglobin concentration; SI, serum iron; TIBC, total iron-binding capacity.

Iron

Microcytic, hypochromic with MCV <80 fL and low MCHC

Low SI (<30 mcg/dL) with increased TIBC, resulting in a % transferrin saturation (SI/TIBC) of <10%; low serum ferritin level (<20 mcg/L)

Folic acid

Macrocytic, normochromic with MCV >100 fL and normal or elevated MCHC

Low serum folic acid (<4 ng/mL)

Vitamin B12

Same as folic acid deficiency

Low serum cobalamin (<100 pmol/L), accompanied by increased serum homocysteine (>13 μmol/L) and increased serum (>0.4 μmol/L) and urine (>3.6 μmol/mol creatinine) methylmalonic acid

The most common cause of iron deficiency in adults is blood loss. Menstruating women lose about 30 mg of iron with each menstrual period; women with heavy menstrual bleeding may lose much more. Thus, many premenopausal women have low iron stores or even iron deficiency. In men and postmenopausal women, the most common site of blood loss is the gastrointestinal tract. Patients with unexplained iron deficiency anemia should be evaluated for occult gastrointestinal bleeding.

B. Treatment

Iron deficiency anemia is treated with oral or parenteral iron preparations. Oral iron corrects the anemia just as rapidly and completely as parenteral iron in most cases if iron absorption from the gastrointestinal tract is normal. An exception is the high requirement for iron of patients with advanced chronic kidney disease who are undergoing hemodialysis and treatment with erythropoietin; for these patients, parenteral iron administration is preferred.

1. Oral iron therapy

A wide variety of oral iron preparations is available. Because ferrous iron is most efficiently absorbed, ferrous salts should be used. Ferrous sulfate, ferrous gluconate, and ferrous fumarate are all effective and inexpensive and are recommended for the treatment of most patients.

Different iron salts provide different amounts of elemental iron, as shown in Table 33–3. In an iron-deficient individual, about 50–100 mg of iron can be incorporated into hemoglobin daily, and about 25% of oral iron given as ferrous salt can be absorbed. Therefore, 200–400 mg of elemental iron should be given daily to correct iron deficiency most rapidly. Patients unable to tolerate such large doses of iron can be given lower daily doses of iron, which results in slower but still complete correction of iron deficiency. Treatment with oral iron should be continued for 3–6 months after correction of the cause of the iron loss. This corrects the anemia and replenishes iron stores.

Table 33–3 Some commonly used oral iron preparations.

Preparation                   Tablet Size   Elemental Iron per Tablet   Usual Adult Dosage for Treatment of Iron Deficiency (Tablets per Day)
Ferrous sulfate, hydrated     325 mg        65 mg                       2–4
Ferrous sulfate, desiccated   200 mg        65 mg                       2–4
Ferrous gluconate             325 mg        36 mg                       3–4
Ferrous fumarate              325 mg        106 mg                      2–3
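The dosing arithmetic described in the text can be checked against Table 33–3 with a short sketch; Python is used here purely as a calculator. The per-tablet iron contents and usual dosages are those from the table, and the ~25% absorption figure is from the text:

```python
# (preparation, elemental iron per tablet in mg, usual tablets/day) from Table 33-3
PREPARATIONS = [
    ("Ferrous sulfate, hydrated", 65, (2, 4)),
    ("Ferrous sulfate, desiccated", 65, (2, 4)),
    ("Ferrous gluconate", 36, (3, 4)),
    ("Ferrous fumarate", 106, (2, 3)),
]

ABSORBED_FRACTION = 0.25  # ~25% of oral iron given as a ferrous salt is absorbed (see text)

for name, mg_per_tablet, (low, high) in PREPARATIONS:
    # Elemental iron delivered per day at the usual dosage range
    daily_low, daily_high = low * mg_per_tablet, high * mg_per_tablet
    print(f"{name}: {daily_low}-{daily_high} mg elemental iron/day, "
          f"~{ABSORBED_FRACTION * daily_low:.0f}-{ABSORBED_FRACTION * daily_high:.0f} mg absorbed")
```

For hydrated ferrous sulfate, for example, 2–4 tablets deliver 130–260 mg of elemental iron daily, about a quarter of which is absorbed, on the order of the 50–100 mg that an iron-deficient individual can incorporate into hemoglobin each day.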

Common adverse effects of oral iron therapy include nausea, epigastric discomfort, abdominal cramps, constipation, and diarrhea. These effects are usually dose-related and often can be overcome by lowering the daily dose of iron or by taking the tablets immediately after or with meals (avoiding dairy products, whose calcium impairs iron absorption). Some patients have less severe gastrointestinal adverse effects with one iron salt than another and benefit from changing preparations. Patients taking oral iron commonly develop black stools; this has no clinical significance in itself but may obscure the diagnosis of continued gastrointestinal blood loss.

2. Parenteral iron therapy

Parenteral therapy should be reserved for patients with documented iron deficiency who are unable to tolerate or absorb oral iron, for patients with extensive chronic anemia who cannot be maintained with oral iron alone, and for patients who require rapid iron repletion. This includes patients with advanced chronic renal disease requiring hemodialysis and treatment with erythropoietin, various postgastrectomy conditions and previous small bowel resection, inflammatory bowel disease involving the proximal small bowel, and malabsorption syndromes.

The challenge with parenteral iron therapy is that administration of free inorganic ferric iron produces serious dose-dependent toxicity, which severely limits the dose that can be administered. However, when the ferric iron is formulated as a colloid containing particles with a core of iron oxyhydroxide surrounded by a shell of carbohydrate, bioactive iron is released slowly from the stable colloid particles. In the United States, the three traditional forms of parenteral iron are iron dextran, sodium ferric gluconate complex, and iron sucrose. Two newer preparations are also available (see below).

Iron dextran is a stable complex of ferric oxyhydroxide and dextran polymers containing 50 mg of elemental iron per milliliter of solution. It can be given by deep intramuscular injection or by intravenous infusion, although the intravenous route is used most commonly. Intravenous administration eliminates the local pain and tissue staining that often occur with the intramuscular route and allows delivery of the entire dose of iron necessary to correct the iron deficiency at one time. Adverse effects of intravenous iron dextran therapy include headache, light-headedness, fever, arthralgias, nausea and vomiting, back pain, flushing, urticaria, bronchospasm, and, rarely, anaphylaxis and death. Owing to the risk of a hypersensitivity reaction, a small test dose of iron dextran should always be given before a full intramuscular or intravenous dose. Patients with a strong allergy history and patients who have had a prior reaction to parenteral iron are more likely to have hypersensitivity reactions to treatment with parenteral iron dextran. The iron dextran formulations used clinically are available in high-molecular-weight and low-molecular-weight forms. In the United States, the INFeD preparation is a low-molecular-weight form while Dexferrum is a high-molecular-weight form. Clinical data—primarily from observational studies—indicate that the risk of anaphylaxis is largely associated with high-molecular-weight formulations.

Sodium ferric gluconate complex and iron-sucrose complex are alternative parenteral iron preparations. Ferric carboxymaltose is a colloidal iron preparation embedded within a carbohydrate polymer. Ferumoxytol is a superparamagnetic iron oxide nanoparticle coated with carbohydrate. The carbohydrate shell is removed in the reticuloendothelial system, allowing the iron to be stored as ferritin or released to transferrin. Ferumoxytol may interfere with magnetic resonance imaging (MRI) studies; if imaging is needed, MRI should be performed before ferumoxytol is administered, or an alternative imaging modality should be used if imaging is required soon after dosing. The US Food and Drug Administration (FDA) has issued a black box warning about the risk of potentially fatal allergic reactions associated with the use of ferumoxytol.

For patients treated chronically with parenteral iron, it is important to monitor iron storage levels to avoid the serious toxicity associated with iron overload. Unlike oral iron therapy, which is subject to the regulatory mechanism provided by the intestinal uptake system, parenteral administration—which bypasses this regulatory system—can deliver more iron than can be safely stored. Iron stores can be estimated on the basis of serum concentrations of ferritin and the transferrin saturation, which is the ratio of the total serum iron concentration to the total iron-binding capacity (TIBC).
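The transferrin saturation calculation just described is simple enough to sketch. The numeric values below are hypothetical, chosen only to match the iron-deficiency pattern in Table 33–2 (low serum iron with increased TIBC):

```python
def transferrin_saturation(serum_iron_mcg_dl, tibc_mcg_dl):
    """Percent transferrin saturation = (serum iron / TIBC) x 100."""
    return 100.0 * serum_iron_mcg_dl / tibc_mcg_dl

# Hypothetical values showing the iron-deficiency pattern of Table 33-2:
# low serum iron together with an increased TIBC.
tsat = transferrin_saturation(serum_iron_mcg_dl=25, tibc_mcg_dl=450)
print(f"Transferrin saturation: {tsat:.1f}%")  # ~5.6%, below the <10% cutoff
```

Note that a low saturation can result from either a fall in serum iron or a rise in TIBC; in iron deficiency both changes typically occur together.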

Clinical Toxicity

A. Acute Iron Toxicity

Acute iron toxicity is seen almost exclusively in young children who accidentally ingest iron tablets. As few as 10 tablets of any of the commonly available oral iron preparations can be lethal in young children. Adult patients taking oral iron preparations should be instructed to store tablets in child-proof containers out of the reach of children. Children who are poisoned with oral iron experience necrotizing gastroenteritis with vomiting, abdominal pain, and bloody diarrhea followed by shock, lethargy, and dyspnea. Subsequently, improvement is often noted, but this may be followed by severe metabolic acidosis, coma, and death. Urgent treatment is necessary. Whole bowel irrigation (see Chapter 58) should be performed to flush out unabsorbed pills. Deferoxamine, a potent iron-chelating compound, can be given intravenously to bind iron that has already been absorbed and to promote its excretion in urine and feces. Activated charcoal, a highly effective adsorbent for most toxins, does not bind iron and thus is ineffective. Appropriate supportive therapy for gastrointestinal bleeding, metabolic acidosis, and shock must also be provided.

B. Chronic Iron Toxicity

Chronic iron toxicity (iron overload), also known as hemochromatosis, results when excess iron is deposited in the heart, liver, pancreas, and other organs. It can lead to organ failure and death. It most commonly occurs in patients with inherited hemochromatosis, a disorder characterized by excessive iron absorption, and in patients who receive many red cell transfusions over a long period of time (eg, individuals with β-thalassemia).

Chronic iron overload in the absence of anemia is most efficiently treated by intermittent phlebotomy. About one unit of blood can be removed every week until all of the excess iron is removed. Iron chelation therapy using parenteral deferoxamine or the oral iron chelators deferasirox or deferiprone (see Chapter 57) is less efficient and more complicated, expensive, and hazardous, but it is sometimes the only option for patients whose iron overload cannot be managed by phlebotomy, as is often the case for individuals with inherited and acquired causes of refractory anemia such as thalassemia major, sickle cell anemia, and aplastic anemia. Deferiprone has rarely been associated with agranulocytosis; thus weekly complete blood count monitoring is required.

Vitamin B12 (cobalamin) serves as a cofactor for several essential biochemical reactions in humans. Deficiency of vitamin B12 leads to megaloblastic anemia (see Table 33–2), gastrointestinal symptoms, and neurologic abnormalities. Although deficiency of vitamin B12 due to an inadequate supply in the diet is unusual, deficiency of B12 in adults—especially older adults—due to inadequate absorption of dietary vitamin B12 is a relatively common and easily treated disorder.

Chemistry

Vitamin B12 consists of a porphyrin-like ring with a central cobalt atom attached to a nucleotide. Various organic groups may be covalently bound to the cobalt atom, forming different cobalamins. Deoxyadenosylcobalamin and methylcobalamin are the active forms of the vitamin in humans. Cyanocobalamin and hydroxocobalamin (both available for therapeutic use) and other cobalamins found in food sources are converted to the active forms. The ultimate source of vitamin B12 is microbial synthesis; the vitamin is not synthesized by animals or plants. The chief dietary source of vitamin B12 is microbially derived vitamin B12 in meat (especially liver), eggs, and dairy products. Vitamin B12 is sometimes called extrinsic factor to differentiate it from intrinsic factor, a protein secreted by the stomach that is required for gastrointestinal uptake of dietary vitamin B12.

Pharmacokinetics

The average American diet contains 5–30 mcg of vitamin B12 daily, 1–5 mcg of which is usually absorbed. The vitamin is avidly stored, primarily in the liver, with an average adult having a total vitamin B12 storage pool of 3000–5000 mcg. Only trace amounts of vitamin B12 are normally lost in urine and stool. Because the normal daily requirements of vitamin B12 are only about 2 mcg, it would take about 5 years for all of the stored vitamin B12 to be exhausted and for megaloblastic anemia to develop if B12 absorption were stopped. Vitamin B12 is absorbed after it complexes with intrinsic factor, a glycoprotein secreted by the parietal cells of the gastric mucosa. Intrinsic factor combines with the vitamin B12 that is liberated from dietary sources in the stomach and duodenum, and the intrinsic factor-vitamin B12 complex is subsequently absorbed in the distal ileum by a highly selective receptor-mediated transport system. Vitamin B12 deficiency in humans most often results from malabsorption of vitamin B12 due either to lack of intrinsic factor or to loss or malfunction of the absorptive mechanism in the distal ileum. Nutritional deficiency is rare but may be seen in strict vegetarians after many years without meat, eggs, or dairy products.
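The 5-year figure follows from simple arithmetic on the numbers just given (stores of 3000–5000 mcg, a requirement of about 2 mcg/day); a quick sketch, with Python used only as a calculator:

```python
# Values from the text: total body stores 3000-5000 mcg; daily requirement ~2 mcg
DAILY_REQUIREMENT_MCG = 2

for stores_mcg in (3000, 5000):
    years = stores_mcg / DAILY_REQUIREMENT_MCG / 365  # days of supply -> years
    print(f"{stores_mcg} mcg of stored vitamin B12 lasts about {years:.1f} years")
```

The result, roughly 4–7 years depending on the size of the storage pool, is why megaloblastic anemia takes years to appear after vitamin B12 absorption ceases.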

Once absorbed, vitamin B12 is transported to the various cells of the body bound to a family of specialized glycoproteins, transcobalamin I, II, and III. Excess vitamin B12 is stored in the liver.

Pharmacodynamics

Two essential enzymatic reactions in humans require vitamin B12 (Figure 33–2). In one, methylcobalamin serves as an intermediate in the transfer of a methyl group from N5-methyltetrahydrofolate to homocysteine, forming methionine (see Figure 33–2A; Figure 33–3, section 1). Without vitamin B12, conversion of the major dietary and storage folate—N5-methyltetrahydrofolate—to tetrahydrofolate, the precursor of folate cofactors, cannot occur. As a result, vitamin B12 deficiency leads to deficiency of folate cofactors necessary for several biochemical reactions involving the transfer of one-carbon groups. In particular, the depletion of tetrahydrofolate prevents synthesis of adequate supplies of the deoxythymidylate (dTMP) and purines required for DNA synthesis in rapidly dividing cells, as shown in Figure 33–3, section 2. The accumulation of folate as N5-methyltetrahydrofolate and the associated depletion of tetrahydrofolate cofactors in vitamin B12 deficiency have been referred to as the “methylfolate trap.” This is the biochemical step whereby vitamin B12 and folic acid metabolism are linked, and it explains why the megaloblastic anemia of vitamin B12 deficiency can be partially corrected by ingestion of large amounts of folic acid. Folic acid can be reduced to dihydrofolate by the enzyme dihydrofolate reductase (see Figure 33–3, section 3) and thereby serve as a source of the tetrahydrofolate required for synthesis of the purines and dTMP required for DNA synthesis.

Figure 33–2 Enzymatic reactions that use vitamin B12. See text.
Figure 33–3 Enzymatic reactions that use folates. Section 1 shows the vitamin B12-dependent reaction that allows most dietary folates to enter the tetrahydrofolate cofactor pool and becomes the “folate trap” in vitamin B12 deficiency. Section 2 shows the deoxythymidine monophosphate (dTMP) cycle. Section 3 shows the pathway by which folic acid enters the tetrahydrofolate cofactor pool. Double arrows indicate pathways with more than one intermediate step. dUMP, deoxyuridine monophosphate.

Vitamin B12 deficiency causes the accumulation of homocysteine due to reduced formation of methylcobalamin, which is required for the conversion of homocysteine to methionine (see Figure 33–3, section 1). The increase in serum homocysteine can be used to help establish a diagnosis of vitamin B12 deficiency (see Table 33–2). There is evidence from observational studies that elevated serum homocysteine increases the risk of atherosclerotic cardiovascular disease. However, randomized clinical trials have not shown a definitive reduction in cardiovascular events (myocardial infarction, stroke) in patients receiving vitamin supplementation that lowers serum homocysteine.

The other reaction that requires vitamin B12 is isomerization of methylmalonyl-CoA to succinyl-CoA by the enzyme methylmalonyl-CoA mutase (see Figure 33–2B). In vitamin B12 deficiency, this conversion cannot take place and the substrate, methylmalonyl-CoA, as well as methylmalonic acid accumulate. The increase in serum and urine concentrations of methylmalonic acid can be used to support a diagnosis of vitamin B12 deficiency (see Table 33–2). In the past, it was thought that abnormal accumulation of methylmalonyl-CoA causes the neurologic manifestations of vitamin B12 deficiency. However, newer evidence implicates the disruption of the methionine synthesis pathway as the cause of neurologic problems. Whatever the biochemical explanation for neurologic damage, the important point is that administration of folic acid in the setting of vitamin B12 deficiency will not prevent neurologic manifestations even though it will largely correct the anemia caused by the vitamin B12 deficiency.

Clinical Pharmacology

Vitamin B12 is used to treat or prevent deficiency. The most characteristic clinical manifestation of vitamin B12 deficiency is megaloblastic, macrocytic anemia (see Table 33–2), often with associated mild or moderate leukopenia or thrombocytopenia (or both), and a characteristic hypercellular bone marrow with an accumulation of megaloblastic erythroid and other precursor cells. The neurologic syndrome associated with vitamin B12 deficiency usually begins with paresthesias in peripheral nerves and weakness that progresses to spasticity, ataxia, and other central nervous system dysfunctions. Correction of vitamin B12 deficiency arrests the progression of neurologic disease, but it may not fully reverse neurologic symptoms that have been present for several months. Although most patients with neurologic abnormalities caused by vitamin B12 deficiency have megaloblastic anemia when first evaluated, occasional patients have few if any hematologic abnormalities.

Once a diagnosis of megaloblastic anemia is made, it must be determined whether vitamin B12 or folic acid deficiency is the cause; other causes of megaloblastic anemia are rare. This can usually be accomplished by measuring serum levels of the vitamins. The Schilling test, which measures absorption and urinary excretion of radioactively labeled vitamin B12, can be used to further define the mechanism of vitamin B12 malabsorption when this is found to be the cause of the megaloblastic anemia.

The most common causes of vitamin B12 deficiency are pernicious anemia, partial or total gastrectomy, and conditions that affect the distal ileum, such as malabsorption syndromes, inflammatory bowel disease, or small bowel resection. Strict vegans eating a diet free of meat and dairy products may become B12 deficient.

Pernicious anemia results from defective secretion of intrinsic factor by the gastric mucosal cells. Patients with pernicious anemia have gastric atrophy and fail to secrete intrinsic factor (as well as hydrochloric acid). These patients frequently have autoantibodies to intrinsic factor. Historically, the Schilling test demonstrated diminished absorption of radioactively labeled vitamin B12, which is corrected when intrinsic factor is administered with radioactive B12, since the vitamin can then be normally absorbed. This test is now rarely performed due to use of radioactivity in the assay.

Vitamin B12 deficiency also occurs when the region of the distal ileum that absorbs the vitamin B12-intrinsic factor complex is damaged, as when the ileum is involved with inflammatory bowel disease or when the ileum is surgically resected. In these situations, radioactively labeled vitamin B12 is not absorbed in the Schilling test, even when intrinsic factor is added. Rare cases of vitamin B12 deficiency in children have been found to be secondary to congenital deficiency of intrinsic factor or to defects of the receptor sites for vitamin B12-intrinsic factor complex located in the distal ileum. Alternatives to the Schilling test include testing for intrinsic factor antibodies and testing for elevated homocysteine and methylmalonic acid levels (see Figure 33–2) to make a diagnosis of pernicious anemia with high sensitivity and specificity.

Almost all cases of vitamin B12 deficiency are caused by malabsorption of the vitamin; therefore, parenteral injections of vitamin B12 are required for therapy. For patients with potentially reversible diseases, the underlying disease should be treated after initial treatment with parenteral vitamin B12. Most patients, however, do not have curable deficiency syndromes and require lifelong treatment with vitamin B12.

Vitamin B12 for parenteral injection is available as cyanocobalamin or hydroxocobalamin. Hydroxocobalamin is preferred because it is more highly protein-bound and therefore remains longer in the circulation. Initial therapy should consist of 100–1000 mcg of vitamin B12 intramuscularly daily or every other day for 1–2 weeks to replenish body stores. Maintenance therapy consists of 100–1000 mcg intramuscularly once a month for life. If neurologic abnormalities are present, maintenance therapy injections should be given every 1–2 weeks for 6 months before switching to monthly injections. Oral vitamin B12-intrinsic factor mixtures and liver extracts should not be used to treat vitamin B12 deficiency; however, oral doses of 1000 mcg of vitamin B12 daily are usually sufficient to treat patients with pernicious anemia who refuse or cannot tolerate the injections. After pernicious anemia is in remission following parenteral vitamin B12 therapy, the vitamin can be administered intranasally as a spray or gel.

Reduced forms of folic acid are required for essential biochemical reactions that provide precursors for the synthesis of amino acids, purines, and DNA. Folate deficiency is relatively common, even though the deficiency is easily corrected by administration of folic acid. The consequences of folate deficiency go beyond the problem of anemia because folate deficiency is implicated as a cause of congenital malformations in newborns and may play a role in vascular disease (see Box: Folic Acid Supplementation: A Public Health Dilemma).

Folic Acid Supplementation: A Public Health Dilemma

Starting in January 1998, all products made from enriched grains in the United States and Canada were required to be supplemented with folic acid. These rulings were issued to reduce the incidence of congenital neural tube defects (NTDs). Epidemiologic studies show a strong correlation between maternal folic acid deficiency and the incidence of NTDs such as spina bifida and anencephaly. The requirement for folic acid supplementation is a public health measure aimed at the significant number of women who do not receive prenatal care and are not aware of the importance of adequate folic acid ingestion for preventing birth defects in their infants. Observational studies from countries that supplement grains with folic acid have found that supplementation is associated with a significant (20–25%) reduction in NTD rates. Observational studies also suggest that rates of other types of congenital anomalies (heart and orofacial) have fallen since supplementation began.

There may be an added benefit for adults. N5-Methyl-tetrahydrofolate is required for the conversion of homocysteine to methionine (Figure 33–2; Figure 33–3, section 1). Impaired synthesis of N5-methyltetrahydrofolate results in elevated serum concentrations of homocysteine. Data from several sources suggest a positive correlation between elevated serum homocysteine and occlusive vascular diseases such as ischemic heart disease and stroke. Clinical data suggest that the folate supplementation program has improved the folate status and reduced the prevalence of hyperhomocysteinemia in a population of middle-aged and older adults who did not use vitamin supplements. There is also evidence that adequate folic acid protects against several cancers, including colorectal, breast, and cervical cancer.

Although the potential benefits of supplemental folic acid during pregnancy are compelling, the decision to require folic acid in grains was controversial. As described in the text, ingestion of folic acid can partially or totally correct the anemia caused by vitamin B12 deficiency. However, folic acid supplementation does not prevent the potentially irreversible neurologic damage caused by vitamin B12 deficiency. People with pernicious anemia and other forms of vitamin B12 deficiency are usually identified because of signs and symptoms of anemia, which typically occur before neurologic symptoms. Some opponents of folic acid supplementation were concerned that increased folic acid intake in the general population would mask vitamin B12 deficiency and increase the prevalence of neurologic disease in the elderly population. To put this in perspective, approximately 4000 pregnancies, including 2500 live births, in the United States each year are affected by NTDs. In contrast, it is estimated that more than 10% of the elderly population in the United States, or several million people, are at risk for the neuropsychiatric complications of vitamin B12 deficiency. In acknowledgment of this controversy, the FDA kept its requirements for folic acid supplementation at a somewhat low level. There is also concern based on observational and prospective clinical trials that high folic acid levels can increase the risk of some diseases, such as colorectal cancer, for which folic acid may exhibit a bell-shaped curve. Further research is needed to more accurately define the optimal level of folic acid fortification in food and recommendations for folic acid supplementation in different populations and age groups.

Chemistry

Folic acid (pteroylglutamic acid) is composed of a heterocycle (pteridine), p-aminobenzoic acid, and glutamic acid (Figure 33–4). Various numbers of glutamic acid moieties are attached to the pteroyl portion of the molecule, resulting in monoglutamates, triglutamates, or polyglutamates. Folic acid undergoes reduction, catalyzed by the enzyme dihydrofolate reductase (“folate reductase”), first to dihydrofolic acid and then to tetrahydrofolic acid (see Figure 33–3, section 3). Tetrahydrofolate is subsequently transformed to folate cofactors possessing one-carbon units attached to the 5-nitrogen, to the 10-nitrogen, or to both positions (see Figure 33–3). Folate cofactors are interconvertible by various enzymatic reactions and serve the important biochemical function of donating one-carbon units at various levels of oxidation. In most of these, tetrahydrofolate is regenerated and becomes available for reutilization.

Figure 33–4 The structure of folic acid. (Reproduced with permission from Murray RK, Granner DK, Mayes PA, et al: Harper’s Biochemistry, 24th ed. McGraw Hill, 1996.)

Pharmacokinetics

The average American diet contains 500–700 mcg of folates daily, 50–200 mcg of which is usually absorbed, depending on metabolic requirements. Pregnant women may absorb as much as 300–400 mcg of folic acid daily. Various forms of folic acid are present in a wide variety of plant and animal tissues; the richest sources are yeast, liver, kidney, and green vegetables. Normally, 5–20 mg of folates is stored in the liver and other tissues. Folates are excreted in the urine and stool and are also destroyed by catabolism, so serum levels fall within a few days when intake is diminished. Because body stores of folates are relatively low and daily requirements high, folic acid deficiency and megaloblastic anemia can develop within 1–6 months after the intake of folic acid stops, depending on the patient’s nutritional status and the rate of folate utilization.
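The relationship among stores, daily utilization, and time to deficiency quoted above can be put into rough numbers. The following is a back-of-envelope sketch only (it assumes intake stops completely and utilization continues at a constant daily rate equal to the normally absorbed amount, which is a simplification):

```python
def days_until_depleted(stores_mcg, daily_use_mcg):
    """Crude estimate of days until folate stores are exhausted,
    assuming zero intake and constant daily utilization."""
    return stores_mcg / daily_use_mcg

# Body stores of 5-20 mg (5000-20,000 mcg); assume daily utilization
# roughly matches the 50-200 mcg normally absorbed each day.
fastest = days_until_depleted(5_000, 200)   # low stores, high demand
slowest = days_until_depleted(20_000, 50)   # high stores, low demand
print(f"{fastest:.0f} to {slowest:.0f} days")  # 25 to 400 days
```

This crude window brackets the 1–6 months quoted above: a poorly nourished patient with high folate demand depletes stores within weeks, whereas a well-nourished patient with low demand takes many months.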

Unaltered folic acid is readily and completely absorbed in the proximal jejunum. Dietary folates, however, consist primarily of polyglutamate forms of N5-methyltetrahydrofolate. Before absorption, all but one of the glutamyl residues of the polyglutamates must be hydrolyzed by the enzyme α-1-glutamyl transferase (“conjugase”) within the brush border of the intestinal mucosa. The monoglutamate N5-methyltetrahydrofolate is subsequently transported into the bloodstream by both active and passive transport and is then widely distributed throughout the body. Inside cells, N5-methyltetrahydrofolate is converted to tetrahydrofolate by the demethylation reaction that requires vitamin B12 (see Figure 33–3, section 1).

Pharmacodynamics

Tetrahydrofolate cofactors participate in one-carbon transfer reactions. As described earlier in the discussion of vitamin B12, one of these essential reactions produces the dTMP needed for DNA synthesis. In this reaction, the enzyme thymidylate synthase catalyzes the transfer of the one-carbon unit of N5, N10-methylenetetrahydrofolate to deoxyuridine monophosphate (dUMP) to form dTMP (see Figure 33–3, section 2). Unlike all the other enzymatic reactions that use folate cofactors, in this reaction the cofactor is oxidized to dihydrofolate, and for each mole of dTMP produced, 1 mole of tetrahydrofolate is consumed. In rapidly proliferating tissues, considerable amounts of tetrahydrofolate are consumed in this reaction, and continued DNA synthesis requires continued regeneration of tetrahydrofolate by reduction of dihydrofolate, catalyzed by the enzyme dihydrofolate reductase. The tetrahydrofolate thus produced can then reform the cofactor N5, N10-methylenetetrahydrofolate by the action of serine transhydroxymethylase and thus allow for the continued synthesis of dTMP. The combined catalytic activities of thymidylate synthase, dihydrofolate reductase, and serine transhydroxymethylase are referred to as the dTMP synthesis cycle. Enzymes in the dTMP cycle are the targets of two anticancer drugs: methotrexate inhibits dihydrofolate reductase, and a metabolite of 5-fluorouracil inhibits thymidylate synthase (see Chapter 54).
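The stoichiometry of the dTMP cycle described above can be made concrete with a toy bookkeeping model. This is an illustrative sketch, not a kinetic model: cofactor amounts are arbitrary "units," and methotrexate is represented simply as complete blockade of dihydrofolate reductase (DHFR):

```python
def dtmp_cycle(thf_pool, turnovers, dhfr_active=True):
    """Toy stoichiometry of the dTMP synthesis cycle.

    Each turnover: thymidylate synthase converts dUMP to dTMP and
    oxidizes one unit of tetrahydrofolate (THF, via its N5,N10-methylene
    cofactor form) to dihydrofolate (DHF). If DHFR is active, that DHF
    unit is reduced back to THF, regenerating the pool.
    """
    dhf = dtmp = 0
    for _ in range(turnovers):
        if thf_pool == 0:      # no cofactor left; dTMP synthesis halts
            break
        thf_pool -= 1          # cofactor oxidized by thymidylate synthase
        dhf += 1
        dtmp += 1
        if dhfr_active:        # DHFR regenerates the cofactor
            dhf -= 1
            thf_pool += 1
    return thf_pool, dhf, dtmp

# Normal cells: the THF pool is regenerated on every turnover.
print(dtmp_cycle(10, 1000))                     # (10, 0, 1000)
# DHFR blocked (as by methotrexate): dTMP output is capped by the
# starting pool, which is trapped as unusable DHF.
print(dtmp_cycle(10, 1000, dhfr_active=False))  # (0, 10, 10)
```

The second call illustrates why inhibiting DHFR starves rapidly proliferating tissue of dTMP: the cofactor pool is stoichiometrically consumed rather than recycled.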

Cofactors of tetrahydrofolate participate in several other essential reactions. N5-Methyltetrahydrofolate is required for the vitamin B12-dependent reaction that generates methionine from homocysteine (see Figure 33–2A; Figure 33–3, section 1). In addition, tetrahydrofolate cofactors donate one-carbon units during the de novo synthesis of essential purines. In these reactions, tetrahydrofolate is regenerated and can reenter the tetrahydrofolate cofactor pool.

Clinical Pharmacology

Folate deficiency results in a megaloblastic anemia that is microscopically indistinguishable from the anemia caused by vitamin B12 deficiency (see above). However, folate deficiency does not cause the characteristic neurologic syndrome seen in vitamin B12 deficiency. In patients with megaloblastic anemia, folate status is assessed with assays for serum folate or for red blood cell folate. Red blood cell folate levels are often of greater diagnostic value than serum levels, because serum folate levels tend to be labile and do not necessarily reflect tissue levels.

Folic acid deficiency is often caused by inadequate dietary intake of folates. Patients with alcohol dependence and patients with liver disease can develop folic acid deficiency because of poor diet and diminished hepatic storage of folates. Pregnant women and patients with hemolytic anemia have increased folate requirements and may become folic acid-deficient, especially if their diets are marginal. Evidence implicates maternal folic acid deficiency in the occurrence of fetal neural tube defects. (See Box: Folic Acid Supplementation: A Public Health Dilemma.) Patients with malabsorption syndromes also frequently develop folic acid deficiency. Patients who require renal dialysis are at risk of folic acid deficiency because folates are removed from the plasma during the dialysis procedure.

Folic acid deficiency can be caused by drugs. Methotrexate and, to a lesser extent, trimethoprim and pyrimethamine, inhibit dihydrofolate reductase and may result in a deficiency of folate cofactors and ultimately in megaloblastic anemia. Long-term therapy with phenytoin also can cause folate deficiency, but it only rarely causes megaloblastic anemia.

Parenteral administration of folic acid is rarely necessary, since oral folic acid is well absorbed even in patients with malabsorption syndromes. A dose of 1 mg folic acid orally daily is sufficient to reverse megaloblastic anemia, restore normal serum folate levels, and replenish body stores of folates in almost all patients. Therapy should be continued until the underlying cause of the deficiency is removed or corrected. Therapy may be required indefinitely for patients with malabsorption or dietary inadequacy. Folic acid supplementation to prevent folic acid deficiency should be considered in high-risk patients, including pregnant women, patients with alcohol dependence, hemolytic anemia, liver disease, or certain skin diseases, and patients on renal dialysis.

Book Chapter
35. Agents Used in Dyslipidemia


Structure

Lipoproteins have hydrophobic core regions containing cholesteryl esters and triglycerides surrounded by unesterified cholesterol, phospholipids, and apoproteins. Certain lipoproteins contain very high-molecular-weight B proteins that exist in two forms: B-48, formed in the intestine and found in chylomicrons and their remnants; and B-100, synthesized in liver and found in VLDL, VLDL remnants (IDL), LDL (formed from VLDL), and Lp(a) lipoproteins. HDL consist of at least 20 discrete molecular species containing apolipoprotein A-I (apo A-I). About 100 other proteins are known to be distributed variously among the HDL species, exhibiting antioxidant, antimicrobial, anti-inflammatory, and molecular signaling activities. They also transport microRNAs. Twelve HDL species are recognized in the ovary and six in cerebrospinal fluid.

ACRONYMS

Apo: Apolipoprotein
CETP: Cholesteryl ester transfer protein
CK: Creatine kinase
HDL: High-density lipoproteins
HMG-CoA: 3-Hydroxy-3-methylglutaryl-coenzyme A
IDL: Intermediate-density lipoproteins
LCAT: Lecithin:cholesterol acyltransferase
LDL: Low-density lipoproteins
Lp(a): Lipoprotein(a)
LPL: Lipoprotein lipase
PCSK9: Proprotein convertase subtilisin/kexin type 9
PPAR: Peroxisome proliferator-activated receptor
VLDL: Very-low-density lipoproteins

Synthesis & Catabolism

A. Chylomicrons

Chylomicrons are formed in the intestine and carry triglycerides of dietary origin, unesterified cholesterol, and cholesteryl esters. They transit the thoracic duct to the bloodstream.

Triglycerides are removed from the chylomicrons in extrahepatic tissues through a pathway shared with VLDL that involves hydrolysis by the lipoprotein lipase (LPL) system. Decrease in particle diameter occurs as triglycerides are depleted. Surface lipids and small apoproteins are transferred to HDL. The resultant chylomicron remnants are taken up by receptor-mediated endocytosis into hepatocytes.

B. Very-Low-Density Lipoproteins

VLDL are secreted by liver and export triglycerides to peripheral tissues (Figure 35–1). VLDL triglycerides are hydrolyzed by LPL, yielding free fatty acids for storage in adipose tissue and for oxidation in tissues such as cardiac and skeletal muscle. Depletion of triglycerides produces remnants (IDL), some of which undergo endocytosis directly into hepatocytes. The remainder are converted to LDL by further removal of triglycerides mediated by hepatic lipase. This process explains the “beta shift” phenomenon, the increase of LDL (beta-lipoprotein) in serum as hypertriglyceridemia subsides. Increased levels of LDL can also result from increased secretion of VLDL and from decreased LDL catabolism.

Figure 35–1 Metabolism of lipoproteins of hepatic origin. The heavy arrows show the primary pathways. Nascent VLDL are secreted via the Golgi apparatus. They acquire additional apo C lipoproteins and apo E from HDL. Very-low-density lipoproteins (VLDL) are converted to VLDL remnants (IDL) by lipolysis via lipoprotein lipase in the vessels of peripheral tissues. In the process, C apolipoproteins and a portion of the apo E are given back to high-density lipoproteins (HDL). Some of the VLDL remnants are converted to LDL by further loss of triglycerides and loss of apo E. A major pathway for LDL degradation involves the endocytosis of LDL by LDL receptors in the liver and the peripheral tissues, for which apo B-100 is the ligand. Dark color denotes cholesteryl esters; light color denotes triglycerides; the asterisk denotes a functional ligand for LDL receptors; triangles indicate apo E; circles and squares represent C apolipoproteins. FFA, free fatty acid; RER, rough endoplasmic reticulum. (Adapted with permission from Rosenberg RN, Prusiner S, DiMauro S, et al: The Molecular and Genetic Basis of Neurological Disease, 2nd ed. Philadelphia, PA: Butterworth-Heinemann; 1997.)

C. Low-Density Lipoproteins

LDL are catabolized chiefly in hepatocytes and other cells after receptor-mediated endocytosis. Cholesteryl esters from LDL are hydrolyzed, yielding free cholesterol for the synthesis of cell membranes. Cells also obtain cholesterol by synthesis via a pathway involving the formation of mevalonic acid by HMG-CoA reductase. Production of this enzyme and of LDL receptors is transcriptionally regulated by the content of cholesterol in the cell. Normally, about 70% of LDL is removed from plasma by hepatocytes. Even more cholesterol is delivered to the liver via IDL and chylomicrons. Unlike other cells, hepatocytes can eliminate cholesterol by secretion in bile and by conversion to bile acids.

D. Lp(a) Lipoprotein

Lp(a) is formed from LDL and the (a) protein, linked by a disulfide bridge. The (a) protein is highly homologous with plasminogen but is not activated by tissue plasminogen activator. It occurs in a number of isoforms of different molecular weights. Levels of Lp(a) vary from nil to over 2000 nmol/L and are determined chiefly by genetic factors. Lp(a) is found in atherosclerotic plaques and also contributes to coronary disease by inhibiting thrombolysis. It is associated with aortic stenosis. Levels are elevated in certain inflammatory states. The risk of coronary disease is strongly related to the level of Lp(a) and is partially mitigated by aspirin. A common variant (14399M) in the coding region is associated with elevated levels.

E. High-Density Lipoproteins

The apoproteins of HDL are secreted largely by the liver and intestine. Much of the lipid comes from the surface monolayers of chylomicrons and VLDL during lipolysis. HDL also acquires cholesterol from peripheral tissues, protecting the cholesterol homeostasis of cells. Free cholesterol is chiefly exported from the cell membrane by a transporter, ABCA1, acquired by a small particle termed prebeta-1 HDL, and then esterified by lecithin:cholesterol acyltransferase (LCAT), leading to the formation of larger HDL species. Cholesterol is also exported by the ABCG1 transporter and the scavenger receptor, SR-BI, to large HDL particles. The cholesteryl esters are transferred to VLDL, IDL, LDL, and chylomicron remnants with the aid of cholesteryl ester transfer protein (CETP). Much of the cholesteryl ester thus transferred is ultimately delivered to the liver by endocytosis of the acceptor lipoproteins. HDL can also deliver cholesteryl esters directly to the liver via SR-BI that does not involve endocytosis of the lipoproteins. At the population level, HDL cholesterol (HDL-C) levels relate inversely to atherosclerosis risk. Among individuals, the capacity to accept exported cholesterol can vary widely at identical levels of HDL-C. The ability of peripheral tissues to export cholesterol via the transporter mechanism and the acceptor capacity of HDL are emerging as major determinants of coronary atherosclerosis.

Lipoprotein disorders are detected by measuring lipids in serum after a 10-hour fast. Risk of heart disease increases with concentrations of the atherogenic lipoproteins, is inversely related to levels of HDL-C, and is modified by other risk factors. Evidence from clinical trials suggests that an LDL cholesterol (LDL-C) of about 50 mg/dL is optimal for patients with coronary or peripheral arterial disease. Ideally, triglycerides should be below 120 mg/dL. Although LDL-C is still the primary target of treatment, reducing the levels of VLDL and IDL also is important. The lipoproteins involved in each disorder are shown in Table 35–1. Diagnosis of a primary disorder usually requires further clinical and genetic data as well as ruling out secondary hyperlipidemias (Table 35–2). Because measurement of plasma triglycerides commonly focuses on constituent glycerol, patients with a rare condition, glycerol kinase deficiency, can be erroneously identified as having hypertriglyceridemia. This can be excluded by ultracentrifugation.
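In practice, the LDL-C value in a fasting panel is often not measured directly but estimated from total cholesterol, HDL-C, and triglycerides. A widely used method is the Friedewald approximation, which is not described in this chapter and is shown here only as a worked example; it treats VLDL-C as triglycerides/5 (mg/dL units) and becomes unreliable when triglycerides exceed roughly 400 mg/dL or chylomicrons are present:

```python
def friedewald_ldl_mg_dl(total_chol, hdl, triglycerides):
    """Estimate LDL-C (mg/dL) from a fasting lipid panel.

    VLDL-C is approximated as triglycerides/5; the estimate is
    unreliable above ~400 mg/dL triglycerides or with chylomicronemia.
    """
    if triglycerides > 400:
        raise ValueError("Friedewald estimate invalid; measure LDL-C directly")
    return total_chol - hdl - triglycerides / 5

# Example fasting panel (all mg/dL): TC 220, HDL-C 50, TG 150
print(friedewald_ldl_mg_dl(220, 50, 150))  # 140.0
```

The high-triglyceride guard mirrors the clinical caveat: in marked hypertriglyceridemia, VLDL composition changes and the TG/5 approximation for VLDL-C breaks down.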

Table 35–1 The primary hyperlipoproteinemias and their treatment.

Disorder | Manifestations | Diet + Single Drug | Drug Combination
Primary chylomicronemia (familial lipoprotein lipase, cofactor deficiency; others) | Chylomicrons, VLDL increased | Dietary management; omega-3 fatty acids, fibrate, or niacin (Apo C-III antisense) | Fibrate plus niacin
Familial hypertriglyceridemia | VLDL increased; chylomicrons may be increased | Dietary management; omega-3 fatty acids, fibrate, niacin, statin | Two or three of the individual drugs
Familial combined hyperlipoproteinemia | VLDL predominantly increased | Omega-3 fatty acids, statin, fibrate | Statin plus omega-3 fatty acids or fibrate
 | LDL predominantly increased | Statin or ezetimibe | Statin plus ezetimibe
 | VLDL, LDL increased | Statin, omega-3 fatty acids, fibrate | Statin plus omega-3 fatty acids or fibrate
Familial dysbetalipoproteinemia | VLDL remnants, chylomicron remnants increased | Statin, fibrate | Statin plus fibrate¹
Familial hypercholesterolemia, heterozygous | LDL increased | Statin, ezetimibe, resin, or PCSK9 MAB | Two or three of the individual drugs
Familial hypercholesterolemia, homozygous | LDL increased | Statin, ezetimibe, lomitapide, PCSK9 MAB, evinacumab | Combinations of several single agents
Familial ligand-defective apo B-100 | LDL increased | Statin, PCSK9 MAB, ezetimibe | Two or three of the single agents
Lp(a) hyperlipoproteinemia | Lp(a) increased | Niacin, PCSK9 MAB |

¹Select pharmacologically compatible statin or bempedoic acid (see text).

Table 35–2 Secondary causes of hyperlipoproteinemia.

Hypertriglyceridemia | Hypercholesterolemia
Diabetes mellitus | Hypothyroidism
Alcohol ingestion | Early nephrosis
Severe nephrosis | Resolving lipemia
Estrogens | Immunoglobulin-lipoprotein complex disorders
Uremia | Anorexia nervosa
HIV infection | Cholestasis
Myxedema | Hypopituitarism
Glycogen storage disease | Corticosteroid excess
Hypopituitarism | Androgen overdose
Acromegaly |
Immunoglobulin-lipoprotein complex disorders |
Lipodystrophy |
Protease inhibitors, tacrolimus, sirolimus, other drugs |

Phenotypes of abnormal lipoprotein distribution are described in this section. Drugs mentioned for use in these conditions are described in the following section on basic and clinical pharmacology.

Hypertriglyceridemia is associated with increased risk of coronary disease. Chylomicrons, VLDL, and IDL are found in atherosclerotic plaques. These patients tend to have cholesterol-rich VLDL of small particle diameter and small, dense LDL. Hypertriglyceridemic patients with coronary disease or risk equivalents should be treated aggressively. Patients with triglycerides above 700 mg/dL should be treated to prevent acute pancreatitis because the LPL clearance mechanism is saturated at about this level.

Hypertriglyceridemia is an important component of the metabolic syndrome, which also includes insulin resistance, hypertension, and abdominal obesity. Reduced levels of HDL-C are usually observed due to transfer of cholesteryl esters to triglyceride-rich lipoproteins. Hyperuricemia is frequently present. Insulin resistance occurs in most patients. Management frequently requires the use of metformin, another antidiabetic agent, or both (see Chapter 41). The severity of hypertriglyceridemia of any cause is increased in the presence of the metabolic syndrome, type 2 diabetes, or the ingestion of alcohol.

Primary Chylomicronemia (Familial Chylomicronemia Syndrome)

Chylomicrons are not present in the serum of normal individuals who have fasted 10 hours. The recessive traits of genetically compromised LPL, its cofactor Apo C-II, and the LMF1, CREB3L3, or GPIHBP1 proteins are usually associated with severe lipemia (1000 mg/dL of triglycerides or higher when the patient is consuming a typical American diet). Mutations in Apo A-V can impair lipolysis in both the homozygous and heterozygous states. Familial chylomicronemia might not be diagnosed until an attack of acute pancreatitis occurs or a woman becomes pregnant. Patients may have eruptive xanthomas, hepatosplenomegaly, hypersplenism, lipemia retinalis, and lipid-laden foam cells in bone marrow, liver, and spleen. The lipemia is increased by estrogens because they stimulate VLDL production. Although these patients have a predominant chylomicronemia, they may also have elevated VLDL, presenting with a pattern called mixed lipemia. Deficiency of lipolytic activity can be diagnosed after intravenous injection of heparin. A presumptive diagnosis is made by demonstrating a pronounced decrease in triglycerides 72 hours after elimination of dietary fat. Marked restriction of dietary fat, weight control, exercise, and abstention from alcohol are the basis of effective long-term treatment of chylomicronemia and all hypertriglyceridemias. A fibrate, niacin, or marine omega-3 fatty acids may be of some benefit if VLDL levels are increased. Apo C-III antisense, available in Europe as volanesorsen, is a potential adjunct to therapy. Plasmapheresis may contribute to the rapid reduction of triglycerides in the setting of acute pancreatitis.

Familial Hypertriglyceridemia

The primary hypertriglyceridemias probably reflect a variety of genetic determinants. Variants in a large number of genes have been implicated as causative in patients with hypertriglyceridemia, for which a polygenic risk score has been developed. Many patients have central obesity with insulin resistance. Impaired removal of triglyceride-rich lipoproteins with overproduction of VLDL can result in mixed lipemia. Eruptive xanthomas, lipemia retinalis, epigastric pain, and pancreatitis are variably present depending on the severity of the lipemia. Treatment is primarily dietary. Marine omega-3 fatty acids, especially EPA-only preparations, may be helpful for patients with coronary artery disease or those at high risk. Some patients require treatment with a statin if LDL is elevated, and a fibrate may be needed if triglycerides are consistently greater than 500 mg/dL. If insulin resistance is not present, niacin may be helpful. Metformin is useful in patients with insulin resistance.

Familial Combined Hyperlipoproteinemia (FCH)

The genetic basis of FCH is undetermined but probably involves multiple loci. In this very common disorder, which is associated with an increased incidence of coronary disease, individuals may have elevated levels of VLDL, LDL, or both, and the pattern may change with time. An elevated level of Apo B-100 is a constant feature. FCH involves an approximate doubling in VLDL secretion and appears to be transmitted as a dominant trait. Triglycerides can be increased by factors noted above. Elevations of cholesterol and triglycerides are generally moderate. Diet alone does not normalize lipid levels. A statin alone, or in combination with niacin or fenofibrate, is often required to treat these patients. When fenofibrate is combined with a statin, either pravastatin or rosuvastatin is recommended because neither is metabolized via CYP3A4. Marine omega-3 fatty acids may be useful.

Familial Dysbetalipoproteinemia

In this disorder, remnants of chylomicrons and VLDL accumulate and levels of LDL are decreased. Because remnants are rich in cholesteryl esters, the level of total cholesterol may be as high as that of triglycerides. Diagnosis is confirmed by the absence of the ε3 and ε4 alleles of apo E, i.e., the ε2/ε2 genotype. Other rare apo E isoforms that lack receptor ligand properties can also be associated with this disorder. Patients often develop tuberous or tuberoeruptive xanthomas, or characteristic planar xanthomas of the palmar creases. They tend to be obese, and some have impaired glucose tolerance. These factors, as well as hypothyroidism, can increase the lipemia. Coronary and peripheral atherosclerosis occurs with increased frequency. Weight loss, together with decreased fat, cholesterol, and alcohol consumption, may be sufficient. Statins are often effective because they increase hepatic LDL receptors that participate in remnant removal. A fibrate is sometimes needed to control the condition.

LDL Receptor-Deficient Familial Hypercholesterolemia (FH)

This is a common autosomal dominant trait. Although levels of LDL tend to increase throughout childhood, the diagnosis can often be made on the basis of elevated umbilical cord blood cholesterol. In most heterozygotes, cholesterol levels range from 260 to 400 mg/dL. Triglycerides are usually normal. Tendon xanthomas are often present. Arcus corneae and xanthelasma may also be present. Coronary disease tends to occur prematurely. In homozygous FH, which can lead to coronary disease in childhood, levels of cholesterol can range from 500 to over 1000 mg/dL. Early tuberous, tendinous, and planar xanthomas occur, and aortic stenosis is common.

Some individuals have combined heterozygosity for alleles producing nonfunctional and kinetically impaired LDL receptors. In heterozygous patients, LDL can be normalized with reductase inhibitors or combined drug regimens (Figure 35–2). Homozygotes and those with combined heterozygosity whose receptors retain even minimal function may partially respond to combinations of niacin, ezetimibe, and reductase inhibitors. Lomitapide, a small molecule inhibitor of microsomal triglyceride transfer protein (MTP), and monoclonal antibodies directed at PCSK9 also have some effectiveness. Evinacumab, a monoclonal antibody against ANGPTL3, is very effective because its mechanism of action does not depend on LDL receptor function. LDL apheresis is a definitive treatment in medication-refractory patients.

Figure 35–2 Sites of action of HMG-CoA reductase inhibitors, PCSK9 MAB, niacin, ezetimibe, and resins used in treating hyperlipidemias. Low-density lipoprotein (LDL) receptors are increased by treatment with resins and HMG-CoA reductase inhibitors. PCSK9 MAB decreases destruction of LDL receptors by PCSK9. VLDL, very-low-density lipoproteins; R, LDL receptor; L, lysosome.

Familial Ligand-Defective Apolipoprotein B-100

Defects in the domain of apo B-100 that binds to the LDL receptor impair the endocytosis of LDL, leading to hypercholesterolemia of moderate severity. Tendon xanthomas may occur. Response to reductase inhibitors is variable. Upregulation of LDL receptors in liver does not increase uptake of ligand-defective LDL particles. Fibrates or niacin may have beneficial effects by reducing VLDL production.

PCSK9 Gain of Function

The receptor chaperone PCSK9 normally conducts the receptor to the lysosome for degradation (see Figure 35–2). Gain-of-function mutations in PCSK9 are associated with elevated levels of LDL-C and are managed with a PCSK9 antibody.

LDLRAP1 Variants (Autosomal Recessive Hypercholesterolemia)

The LDLRAP1 gene product facilitates the uptake into hepatocytes of the LDL receptor and its associated LDL particle, enhancing the removal of LDL from plasma. Rare defective variants lead to increased LDL, clinically resembling homozygous FH. Statins and bile acid sequestrants may be effective. A similar mechanism for an apolipoprotein E variant (Leu167del) has been described.

CYP7a Deficiency

Decreased catabolism of cholesterol to bile acids and accumulation of cholesterol in hepatocytes result from loss of function mutations in CYP7a. LDL in plasma is increased with downregulation of LDL receptors. LDL is moderately elevated in heterozygous patients. Homozygous patients have higher LDL levels, sometimes elevated triglycerides, early coronary disease, increased risk of gallstones, and are resistant to statins. The addition of niacin to a statin results in a significant reduction in lipid levels.

Familial Combined Hyperlipoproteinemia (FCH)

As described above, some persons with FCH have only an elevation in LDL necessitating treatment with a statin.

Lp(a) Hyperlipoproteinemia

This genetic disorder, which is associated with increased atherogenesis and arterial thrombus formation, is determined chiefly by alleles that dictate increased production of the (a) protein moiety. Lp(a) can be secondarily elevated in patients with severe nephrosis and certain other inflammatory states. Niacin reduces levels of Lp(a) in many patients, and an (a) antisense will be available soon. Reduction of levels of LDL-C below 100 mg/dL decreases the risk of atherosclerosis attributable to Lp(a), and the administration of low-dose aspirin may reduce the risk of thrombus. PCSK9 MABs also reduce levels of Lp(a) by about 25%.

Cholesteryl Ester Storage Disease

Individuals lacking activity of lysosomal acid lipase (LAL) accumulate cholesteryl esters in liver and macrophages, leading to hepatomegaly with subsequent fibrosis, and to atherosclerosis. They have elevated levels of LDL-C, low levels of HDL-C, and often modest hypertriglyceridemia. Rarely, a fatal, totally ablative form, Wolman disease, occurs in infancy. A recombinant replacement enzyme, sebelipase alfa, given intravenously weekly or every other week, effectively restores the hydrolysis of cholesteryl esters in liver, normalizing plasma lipoprotein levels.

Phytosterolemia

The ABCG5 and ABCG8 half-transporters act together in enterocytes and hepatocytes to export phytosterols into the intestinal lumen and bile, respectively. Homozygous or combined heterozygous ablative mutations in either transporter result in elevated levels of LDL enriched in phytosterols, tendon and tuberous xanthomas, and accelerated atherosclerosis. Many techniques for quantitation of cholesterol include phytosterols and can misdiagnose this disorder. Ezetimibe is the specific therapeutic.

Rare genetic disorders, including Tangier disease and LCAT (lecithin:cholesterol acyltransferase) deficiency, are associated with extremely low levels of HDL. Familial hypoalphalipoproteinemia is a more common disorder with levels of HDL usually below 35 mg/dL in men and 45 mg/dL in women, most commonly attributable to mutations in the ABCA1 gene. These patients tend to have premature atherosclerosis, and the low HDL may be the only identified risk factor. Paradoxically, HDL levels above 90 mg/dL are associated with increased atherosclerotic vascular disease in population studies. This risk relationship is associated in some cases with variants in the SCARB-1 gene.

Management of HDL deficiency includes special attention to avoidance or treatment of other risk factors. Niacin increases HDL in many of these patients but the effect on clinical outcome is unknown. Reductase inhibitors and fibric acid derivatives exert lesser effects. Aggressive reduction of LDL and VLDL is indicated.

In the presence of hypertriglyceridemia, HDL is low because of exchange of cholesteryl esters from HDL into triglyceride-rich lipoproteins. Treatment of hypertriglyceridemia increases HDL.

Before primary disorders can be diagnosed, secondary causes of the lipid phenotype must be considered. The more common conditions are summarized in Table 35–2. The lipoprotein abnormality usually resolves if the underlying disorder can be treated successfully. These secondary entities can also amplify a primary genetic disorder.

Book Chapter
23. The Alcohols


Pharmacokinetics

Ethanol is a small water-soluble molecule that is absorbed rapidly from the gastrointestinal tract. After ingestion of alcohol in the fasting state, peak blood alcohol concentrations are reached within 30 minutes. The presence of food in the stomach delays absorption by slowing gastric emptying. Distribution is rapid, with tissue levels approximating the concentration in blood. The volume of distribution for ethanol approximates total body water (0.5–0.7 L/kg). After an equivalent oral dose of alcohol, women have a higher peak concentration than men, in part because women have a lower total body water content and in part because of differences in first-pass metabolism. In the central nervous system (CNS), the concentration of ethanol rises quickly, since the brain receives a large proportion of total blood flow and ethanol readily crosses biologic membranes.

More than 90% of alcohol consumed is oxidized in the liver; much of the remainder is excreted through the lungs and in the urine. The excretion of a small but consistent proportion of alcohol by the lungs can be quantified with breath alcohol tests that serve as a basis for a legal definition of “driving under the influence” (DUI) in many countries. In most states in the USA, the alcohol level for driving under the influence is set at 80 mg/dL (0.08%). At levels of ethanol usually achieved in blood, the rate of oxidation follows zero-order kinetics; that is, it is independent of time and concentration of the drug. The typical adult can metabolize 7–10 g (150–220 mmol) of alcohol per hour, the equivalent of approximately one “drink” [10 oz (300 mL) beer, 3.5 oz (105 mL) wine, or 1 oz (30 mL) distilled 80-proof spirits]. A commercial product (“Palcohol”), approved in some states in the USA in 2015, consists of a powder to be mixed to form a drink containing 10% ethanol (approximately equivalent to wine).
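The figures above (a volume of distribution approximating total body water and zero-order elimination of roughly 7–10 g/h) permit a rough Widmark-style estimate of blood alcohol concentration over time. The following sketch uses assumed illustrative values (a 70-kg adult, Vd of 0.6 L/kg, elimination of 8 g/h, complete and instantaneous absorption); real kinetics vary with sex, body composition, food, and absorption rate:

```python
def peak_bac_mg_dl(dose_g, weight_kg, vd_l_per_kg=0.6):
    """Peak blood alcohol (mg/dL), assuming the dose distributes
    instantly into total body water (Widmark-style estimate)."""
    g_per_l = dose_g / (vd_l_per_kg * weight_kg)
    return g_per_l * 100  # 1 g/L = 100 mg/dL

def hours_above(dose_g, weight_kg, limit_mg_dl=80,
                elim_g_per_h=8, vd_l_per_kg=0.6):
    """Hours until BAC falls to the limit. With zero-order kinetics,
    the concentration falls at a constant rate per hour."""
    peak = peak_bac_mg_dl(dose_g, weight_kg, vd_l_per_kg)
    fall_per_h = elim_g_per_h / (vd_l_per_kg * weight_kg) * 100  # mg/dL per h
    return max(0.0, (peak - limit_mg_dl) / fall_per_h)

# Four 10-g "drinks" in a 70-kg adult:
print(round(peak_bac_mg_dl(40, 70)))  # ~95 mg/dL, above the 80 mg/dL limit
print(round(hours_above(40, 70), 1))  # ~0.8 h until BAC falls to 80 mg/dL
```

Note how the constant fall rate (here about 19 mg/dL per hour) follows directly from zero-order elimination: sobering up cannot be hastened by metabolic demand, only by time.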

Two major pathways of alcohol metabolism to acetaldehyde have been identified (Figure 23–1). Acetaldehyde is then oxidized to acetate by a third metabolic process.

Figure 23–1 Metabolism of ethanol by alcohol dehydrogenase and the microsomal ethanol-oxidizing system (MEOS). Alcohol dehydrogenase and aldehyde dehydrogenase are inhibited by fomepizole and disulfiram, respectively. NAD+, nicotinamide adenine dinucleotide; NADPH, nicotinamide adenine dinucleotide phosphate.

A. Alcohol Dehydrogenase Pathway

The primary pathway for alcohol metabolism involves alcohol dehydrogenase (ADH), a family of cytosolic enzymes that catalyze the conversion of alcohol to acetaldehyde (see Figure 23–1, left). These enzymes are located mainly in the liver, but small amounts are found in other organs such as the brain and stomach. There is considerable genetic variation in ADH enzymes, affecting the rate of ethanol metabolism and also appearing to alter vulnerability to alcohol-abuse disorders. For example, one ADH allele (the ADH1B*2 allele), which is associated with rapid conversion of ethanol to acetaldehyde, has been found to be protective against alcohol dependence in several ethnic populations, especially East Asians.

Some metabolism of ethanol by ADH occurs in the stomach in men, but a smaller amount occurs in women, who appear to have lower levels of the gastric enzyme. This difference in gastric metabolism of alcohol in women probably contributes to the sex-related differences in blood alcohol concentrations noted above.

During conversion of ethanol by ADH to acetaldehyde, hydrogen ion is transferred from ethanol to the cofactor nicotinamide adenine dinucleotide (NAD+) to form NADH. As a net result, alcohol oxidation generates an excess of reducing equivalents in the liver, chiefly as NADH. The excess NADH production appears to contribute to the metabolic disorders that accompany chronic alcoholism and to both the lactic acidosis and hypoglycemia that frequently accompany acute alcohol poisoning.

B. Microsomal Ethanol-Oxidizing System (MEOS)

This enzyme system, also known as the mixed function oxidase system, uses NADPH as a cofactor in the metabolism of ethanol (see Figure 23–1, right) and consists primarily of cytochrome P450 2E1, 1A2, and 3A4 (see Chapter 4).

During chronic alcohol consumption, MEOS activity is induced. As a result, chronic alcohol consumption results in significant increases not only in ethanol metabolism but also in the clearance of other drugs eliminated by the cytochrome P450s that constitute the MEOS system, and in the generation of the toxic byproducts of cytochrome P450 reactions (toxins, free radicals, H2O2).

C. Acetaldehyde Metabolism

Much of the acetaldehyde formed from alcohol is oxidized in the liver in a reaction catalyzed by mitochondrial NAD-dependent aldehyde dehydrogenase (ALDH). The product of this reaction is acetate (see Figure 23–1), which can be further metabolized to CO2 and water, or used to form acetyl-CoA.

Oxidation of acetaldehyde is inhibited by disulfiram, a drug that has been used to deter drinking by patients with alcohol dependence. When ethanol is consumed in the presence of disulfiram, acetaldehyde accumulates and causes an unpleasant reaction of facial flushing, nausea, vomiting, dizziness, and headache. Several other drugs (eg, metronidazole, cefotetan, trimethoprim) inhibit ALDH and have been claimed to cause a disulfiram-like reaction if combined with ethanol.

Some people, primarily of East Asian descent, have genetic deficiency in the activity of the mitochondrial form of ALDH, which is encoded by the ALDH2 gene. When these individuals drink alcohol, they develop high blood acetaldehyde concentrations and experience a noxious reaction similar to that seen with the combination of disulfiram and ethanol. This form of reduced-activity ALDH is strongly protective against alcohol-use disorders.

Pharmacodynamics of Acute Ethanol Consumption

A. Central Nervous System

The CNS is markedly affected by acute alcohol consumption. Alcohol causes sedation, relief of anxiety and, at higher concentrations, slurred speech, ataxia, impaired judgment, and disinhibited behavior, a condition usually called intoxication or drunkenness (Table 23–1). These CNS effects are most marked as the blood level is rising, because acute tolerance to the effects of alcohol occurs after a few hours of drinking. For chronic drinkers who are tolerant to the effects of alcohol, higher concentrations are needed to elicit these CNS effects. For example, an individual with chronic alcoholism may appear sober or only slightly intoxicated with a blood alcohol concentration of 300–400 mg/dL (0.30–0.40%), whereas this level is associated with marked intoxication or even coma in a nontolerant individual. The propensity of moderate doses of alcohol to inhibit the attention and information-processing skills as well as the motor skills required for operation of motor vehicles has profound effects. Approximately 30–40% of all traffic accidents resulting in a fatality in the United States involve at least one person with blood alcohol near or above the legal level of intoxication, and drunken driving is a leading cause of death in young adults.

Table 23–1 Blood alcohol concentration (BAC) and clinical effects in nontolerant individuals.

BAC (mg/dL)1    Clinical Effect
50–100          Sedation, subjective “high,” slower reaction times
100–200         Impaired motor function, slurred speech, ataxia
200–300         Emesis, stupor
300–400         Coma
>400            Respiratory depression, death

1In many parts of the United States, a blood level above 80–100 mg/dL for adults or 5–20 mg/dL for persons under 21 is sufficient for conviction of driving while “under the influence.”
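The ranges in Table 23–1 can be expressed as a simple lookup (a hypothetical illustration; thresholds and wording are copied from the table, taking each effect to apply from the lower bound of its range):

```python
# Map a BAC (mg/dL) to the clinical-effect band of Table 23-1
# (nontolerant individuals). Thresholds copied from the table.

def clinical_effect(bac_mg_dl):
    """Return the Table 23-1 effect band for a given BAC in mg/dL."""
    if bac_mg_dl > 400:
        return "Respiratory depression, death"
    bands = [
        (300, "Coma"),
        (200, "Emesis, stupor"),
        (100, "Impaired motor function, slurred speech, ataxia"),
        (50, "Sedation, subjective 'high,' slower reaction times"),
    ]
    for lower_bound, effect in bands:
        if bac_mg_dl >= lower_bound:
            return effect
    return "Below the lowest band in Table 23-1"

print(clinical_effect(150))  # Impaired motor function, slurred speech, ataxia
```

Note that these bands apply only to nontolerant individuals; as described above, a tolerant chronic drinker may appear only mildly intoxicated at 300–400 mg/dL.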

Like other sedative-hypnotic drugs, alcohol is a CNS depressant. At high blood concentrations, it induces coma, respiratory depression, and death.

Ethanol affects a large number of membrane proteins that participate in signaling pathways, including neurotransmitter receptors for amines, amino acids, opioids, and neuropeptides; enzymes such as Na+/K+-ATPase, adenylyl cyclase, phosphoinositide-specific phospholipase C; a nucleoside transporter; and ion channels. Much attention has focused on alcohol’s effects on neurotransmission by glutamate and γ-aminobutyric acid (GABA), the main excitatory and inhibitory neurotransmitters in the CNS. Acute ethanol exposure enhances the action of GABA at GABAA receptors, which is consistent with the ability of GABA-mimetics to intensify many of the acute effects of alcohol and of GABAA antagonists to attenuate some of the actions of ethanol. Ethanol inhibits the ability of glutamate to open the cation channel associated with the N-methyl-D-aspartate (NMDA) subtype of glutamate receptors. The NMDA receptor is implicated in many aspects of cognitive function, including learning and memory. “Blackouts”—periods of memory loss that occur with high levels of alcohol—may result from inhibition of NMDA receptor activation. Experiments that use modern genetic approaches eventually will yield a more precise definition of ethanol’s direct and indirect targets. In recent years, experiments with mutant strains of mice, worms, and flies have reinforced the importance of previously identified targets and helped identify new candidates, including a calcium-regulated and voltage-gated potassium channel that may be one of ethanol’s direct targets (see Box: What Can Drunken Worms, Flies, and Mice Tell Us About Alcohol?).

What Can Drunken Worms, Flies, and Mice Tell Us About Alcohol?

For a drug like ethanol, which exhibits low potency and specificity and modifies complex behaviors, the precise roles of its many direct and indirect targets are difficult to define. Increasingly, ethanol researchers are employing genetic approaches to complement standard neurobiologic experimentation. Three experimental animal systems for which powerful genetic techniques exist—mice, flies, and worms—have yielded intriguing results.

Strains of mice with abnormal sensitivity to ethanol were identified many years ago by breeding and selection programs. Using sophisticated genetic mapping and sequencing techniques, researchers have made progress in identifying the genes that confer ethanol susceptibility or resistance traits. A more targeted approach is the use of transgenic mice to test hypotheses about specific genes. For example, after earlier experiments suggested a link between brain neuropeptide Y (NPY) and ethanol, researchers used two transgenic mouse models to further investigate the link. They found that a strain of mice that lacks the gene for NPY—NPY knockout mice—consume more ethanol than control mice and are less sensitive to ethanol’s sedative effects. As would be expected if increased concentrations of NPY in the brain make mice more sensitive to ethanol, a strain of mice that overexpresses NPY drinks less alcohol than the controls even though their total consumption of food and liquid is normal. Work with other transgenic knockout mice supports the central role in ethanol responses of signaling systems that have long been believed to be involved (eg, GABAA, glutamate, dopamine, opioid, and serotonin receptors) and has helped build the case for newer candidates such as NPY and corticotropin-releasing hormone, cannabinoid receptors, ion channels, and protein kinase C.

It is easy to imagine mice having measurable behavioral responses to alcohol, but drunken worms and fruit flies are harder to imagine. Actually, both invertebrates respond to ethanol in ways that parallel mammalian responses. Drosophila melanogaster fruit flies exposed to ethanol vapor show increased locomotion at low concentrations but at higher concentrations, they become poorly coordinated, sedated, and finally immobile. These behaviors can be monitored by sophisticated laser or video tracking methods or with an ingenious “chromatography” column of air that separates relatively insensitive flies from inebriated flies, which drop to the bottom of the column. The worm Caenorhabditis elegans similarly exhibits increased locomotion at low ethanol concentrations and, at higher concentrations, reduced locomotion, sedation, and—something that can be turned into an effective screen for mutant worms that are resistant to ethanol—impaired egg laying. The advantage of using flies and worms as genetic models for ethanol research is their relatively simple neuroanatomy, well-established techniques for genetic manipulation, extensive libraries of well-characterized mutants, and completely or nearly completely solved genetic codes. Already, much information has accumulated about candidate proteins involved with the effects of ethanol in flies. In an elegant study on C elegans, researchers found evidence that a calcium-activated, voltage-gated BK potassium channel is a direct target of ethanol. This channel, which is activated by ethanol, has close homologs in flies and vertebrates, and evidence is accumulating that ethanol has similar effects in these homologs. Genetic experiments in these model systems should provide information that will help narrow and focus research into the complex and important effects of ethanol in humans.

B. Heart

Significant depression of myocardial contractility has been observed in individuals who acutely consume moderate amounts of alcohol, ie, at a blood concentration above 100 mg/dL.

C. Smooth Muscle

Ethanol is a vasodilator, probably as a result of both CNS effects (depression of the vasomotor center) and direct smooth muscle relaxation caused by its metabolite, acetaldehyde. In cases of severe overdose, hypothermia—caused by vasodilation—may be marked in cold environments. Preliminary evidence indicates that flibanserin, a drug that interacts with 5-HT receptors, augments the hypotensive effects of ethanol and may cause severe orthostatic hypotension and syncope (see Chapter 16). Ethanol also relaxes the uterus and—before the introduction of more effective and safer uterine relaxants (eg, calcium channel antagonists)—was used intravenously for the suppression of premature labor.

Consequences of Chronic Alcohol Consumption

Chronic alcohol consumption profoundly affects the function of several vital organs—particularly the liver—and the nervous, gastrointestinal, cardiovascular, and immune systems. Since ethanol has low potency, it requires concentrations thousands of times higher than other misused drugs (eg, cocaine, opiates, amphetamines) to produce its intoxicating effects. As a result, ethanol is consumed in quantities that are unusually large for a pharmacologically active drug. The tissue damage caused by chronic alcohol ingestion results from a combination of the direct effects of ethanol and acetaldehyde, and the metabolic consequences of processing a heavy load of a metabolically active substance. Specific mechanisms implicated in tissue damage include increased oxidative stress coupled with depletion of glutathione, damage to mitochondria, growth factor dysregulation, and potentiation of cytokine-induced injury.

Chronic consumption of large amounts of alcohol is associated with an increased risk of death. Deaths linked to alcohol consumption are caused by liver disease, cancer, accidents, and suicide.

A. Liver and Gastrointestinal Tract

Liver disease is the most common medical complication of alcohol abuse; an estimated 15–30% of chronic heavy drinkers eventually develop severe liver disease. Alcoholic fatty liver, a reversible condition, may progress to alcoholic hepatitis and finally to cirrhosis and liver failure. In the USA, chronic alcohol abuse is the leading cause of liver cirrhosis and of the need for liver transplantation. The risk of developing liver disease is related both to the average amount of daily consumption and to the duration of alcohol abuse. Women appear to be more susceptible to alcohol hepatotoxicity than men. Concurrent infection with hepatitis B or C virus increases the risk of severe liver disease. Cirrhosis contributes to elevated portal blood pressure and esophageal and gastric venous varices. These varices may rupture and result in massive bleeding.

The pathogenesis of alcoholic liver disease is a multifactorial process involving metabolic repercussions of ethanol oxidation in the liver, dysregulation of fatty acid oxidation and synthesis, and activation of the innate immune system by a combination of direct effects of ethanol and its metabolites and by bacterial endotoxins that access the liver as a result of ethanol-induced changes in the intestinal tract. Tumor necrosis factor-α appears to play a pivotal role in the progression of alcoholic liver disease and may be a fruitful therapeutic target.

Other portions of the gastrointestinal tract can also be injured. Chronic alcohol ingestion is by far the most common cause of chronic pancreatitis in the Western world. In addition to its direct toxic effect on pancreatic acinar cells, alcohol alters pancreatic epithelial permeability and promotes the formation of protein plugs and calcium carbonate-containing stones.

Individuals with chronic alcoholism are prone to gastritis and have increased susceptibility to blood and plasma protein loss during drinking, which may contribute to anemia and protein malnutrition. Alcohol also injures the small intestine, leading to diarrhea, weight loss, and multiple vitamin deficiencies.

Malnutrition from dietary deficiency and vitamin deficiencies due to malabsorption are common in alcoholism. Malabsorption of water-soluble vitamins is especially severe.

B. Nervous System

1. Tolerance and dependence

The consumption of alcohol in high doses over a long period results in tolerance and in physical and psychological dependence. Tolerance to the intoxicating effects of alcohol is a complex process involving poorly understood changes in the nervous system as well as the pharmacokinetic changes described earlier. As with other sedative-hypnotic drugs, there is a limit to tolerance, so that only a relatively small increase in the lethal dose occurs with increasing alcohol use.

Chronic alcohol drinkers, when forced to reduce or discontinue alcohol, experience a withdrawal syndrome, which indicates the existence of physical dependence. Alcohol withdrawal symptoms usually consist of hyperexcitability in mild cases and seizures, toxic psychosis, and delirium tremens (eg, shaking, confusion, high blood pressure, fever, hallucinations, death) in severe ones. The dose, rate, and duration of alcohol consumption determine the intensity of the withdrawal syndrome. When consumption has been very high, merely reducing the rate of consumption may lead to signs of withdrawal.

Psychological dependence on alcohol is characterized by a compulsive desire to experience the rewarding effects of alcohol and, for current drinkers, a desire to avoid the negative consequences of withdrawal. People who have recovered from alcoholism and become abstinent still experience periods of intense craving for alcohol that can be triggered by environmental cues associated in the past with drinking, such as familiar places, groups of people, or events.

The molecular basis of alcohol tolerance and dependence is not known with certainty, nor is it known whether the two phenomena reflect opposing effects on a shared molecular pathway. Tolerance may result from ethanol-induced up-regulation of a pathway in response to the continuous presence of ethanol. Dependence may result from overactivity of that same pathway after the ethanol effect dissipates and before the system has time to return to a normal ethanol-free state.

Chronic exposure of animals or cultured cells to alcohol elicits a multitude of adaptive responses involving neurotransmitters and their receptors, ion channels, and enzymes that participate in signal transduction pathways. Up-regulation of the NMDA subtype of glutamate receptors and voltage-sensitive Ca2+ channels may underlie the seizures that accompany the alcohol withdrawal syndrome. GABA neurotransmission is believed to play a significant role in tolerance and withdrawal because (1) sedative-hypnotic drugs that enhance GABAergic neurotransmission are able to substitute for alcohol during alcohol withdrawal, and (2) there is evidence of down-regulation of GABAA-mediated responses with chronic alcohol exposure.

Like other drugs of abuse, ethanol modulates neural activity in the brain’s mesolimbic dopamine reward circuit and increases dopamine release in the nucleus accumbens (see Chapter 32). Alcohol affects local concentrations of serotonin, endorphin, endocannabinoid, and dopamine—neurotransmitters involved in the brain reward system. The discovery that naltrexone, a nonselective opioid receptor antagonist, helps patients who are recovering from alcoholism abstain from drinking supports the idea that a common neurochemical reward system is shared by very different drugs associated with physical and psychological dependence. There is also convincing evidence from animal models that ethanol intake and seeking behavior are reduced by antagonists of another important regulator of the brain reward system, the cannabinoid CB1 receptor. Two other important neuroendocrine systems that appear to play key roles in modulating ethanol-seeking activity in experimental animals are the appetite-regulating system—which uses peptides such as leptin, ghrelin, and neuropeptide Y—and the stress response system, which is controlled by corticotropin-releasing factor.

2. Neurotoxicity

Consumption of large amounts of alcohol over extended periods (usually years) can lead to neurologic deficits. The most common neurologic abnormality in chronic alcoholism is generalized symmetric peripheral nerve injury, which begins with distal paresthesias of the hands and feet. Degenerative changes can also result in gait disturbances and ataxia. Other neurologic disturbances associated with alcoholism include dementia and, rarely, demyelinating disease.

Wernicke-Korsakoff syndrome is a relatively uncommon but important entity characterized by paralysis of the external eye muscles, ataxia, and a confused state that can progress to hallucinations, confabulations, coma, and death. It is associated with thiamine deficiency and is rarely seen in the absence of alcoholism, although it has also been reported in patients after gastric bypass surgery. Because of the importance of thiamine in this pathologic condition and the absence of toxicity associated with thiamine administration, all patients suspected of having Wernicke-Korsakoff syndrome (including virtually all patients who present to the emergency department with altered consciousness, seizures, or both) should receive thiamine therapy. Often, the ocular signs, ataxia, and confusion improve promptly upon administration of thiamine. However, most patients are left with a chronic disabling memory disorder known as Korsakoff psychosis.

Alcohol may also impair visual acuity, with painless blurring that occurs over several weeks of heavy alcohol consumption. Changes are usually bilateral and symmetric and may be followed by optic nerve degeneration. Ingestion of ethanol substitutes such as methanol (see section Pharmacology of Other Alcohols) causes severe visual disturbances.

C. Cardiovascular System

1. Cardiomyopathy and heart failure

Alcohol has complex effects on the cardiovascular system. Heavy alcohol consumption of long duration is associated with a dilated cardiomyopathy with ventricular hypertrophy and fibrosis. In animals and humans, alcohol causes cardiac membrane disruption, depressed function of mitochondria and sarcoplasmic reticulum, intracellular accumulation of phospholipids and fatty acids, as well as upregulation of voltage-gated calcium channels. There is evidence that patients with alcohol-induced dilated cardiomyopathy do significantly worse than patients with idiopathic dilated cardiomyopathy, even though cessation of drinking is associated with a reduction in cardiac size and improved function. The poorer prognosis for patients who continue to drink appears to be due in part to interference by ethanol with the beneficial effects of β blockers and angiotensin-converting enzyme (ACE) inhibitors.

2. Arrhythmias

Heavy drinking—and especially “binge” drinking—is associated with both atrial and ventricular arrhythmias. Patients undergoing alcohol withdrawal syndrome can develop severe arrhythmias that may reflect abnormalities of potassium or magnesium metabolism as well as enhanced release of catecholamines. Seizures, syncope, and sudden death during alcohol withdrawal may be due to these arrhythmias.

3. Hypertension

A link between heavier alcohol consumption (more than three drinks per day) and hypertension has been firmly established in epidemiologic studies. Alcohol is estimated to be responsible for approximately 5% of cases of hypertension, independent of obesity, salt intake, coffee drinking, and cigarette smoking. A reduction in alcohol intake appears to be effective in lowering blood pressure in hypertensive individuals who are also heavy drinkers; the hypertension seen in this population is also responsive to standard blood pressure medications.

4. Coronary heart disease

Although the deleterious effects of excessive alcohol use on the cardiovascular system are well established, there is strong epidemiologic evidence that moderate alcohol consumption actually prevents coronary heart disease (CHD), ischemic stroke, and peripheral arterial disease. This type of relationship between mortality and the dose of a drug is called a “J-shaped” relationship. Results of these clinical studies are supported by ethanol’s ability to raise serum levels of high-density lipoprotein (HDL) cholesterol (the form of cholesterol that appears to protect against atherosclerosis; see Chapter 35), by its ability to inhibit some of the inflammatory processes that underlie atherosclerosis while also increasing production of the endogenous anticoagulant tissue plasminogen activator (t-PA, see Chapter 34), and by the presence in some alcoholic beverages such as red wine of antioxidants (eg, resveratrol, ellagic acid) and other substances that may protect against atherosclerosis. These observational studies are intriguing, but randomized clinical trials examining the possible benefit of moderate alcohol consumption in prevention of CHD have not been carried out.

D. Blood

Alcohol indirectly affects hematopoiesis through metabolic and nutritional effects and may also directly inhibit the proliferation of all cellular elements in bone marrow. The most common hematologic disorder seen in chronic drinkers is mild anemia resulting from alcohol-related folic acid deficiency. Iron-deficiency anemia may result from gastrointestinal bleeding. Alcohol has also been implicated as a cause of several hemolytic syndromes, some of which are associated with hyperlipidemia and severe liver disease.

E. Endocrine System and Electrolyte Balance

Chronic alcohol use has important effects on the endocrine system and on fluid and electrolyte balance. Clinical reports of gynecomastia and testicular atrophy in alcoholics with or without cirrhosis suggest a derangement in steroid hormone balance.

Individuals with chronic liver disease may have disorders of fluid and electrolyte balance, including ascites, edema, and effusions. Alterations of whole body potassium induced by vomiting and diarrhea, as well as severe secondary aldosteronism, may contribute to muscle weakness and can be worsened by diuretic therapy. The metabolic derangements caused by metabolism of large amounts of ethanol can result in hypoglycemia, as a result of impaired hepatic gluconeogenesis, and in ketosis, caused by excessive lipolytic factors, especially increased cortisol and growth hormone.

F. Fetal Alcohol Syndrome

Chronic maternal alcohol abuse during pregnancy is associated with teratogenic effects, and alcohol is a leading cause of mental retardation and congenital malformation. The abnormalities that have been characterized as fetal alcohol syndrome include (1) intrauterine growth retardation, (2) microcephaly, (3) poor coordination, (4) underdevelopment of the midfacial region (appearing as a flattened face), and (5) minor joint anomalies. More severe cases may include congenital heart defects and mental retardation. Although the level of alcohol intake required to cause serious neurologic deficits appears quite high, the threshold for more subtle neurologic deficits is uncertain.

The mechanisms that underlie ethanol’s teratogenic effects are unknown. Ethanol rapidly crosses the placenta and reaches concentrations in the fetus that are similar to those in maternal blood. The fetal liver has little or no alcohol dehydrogenase activity, so the fetus must rely on maternal and placental enzymes for elimination of alcohol.

The neuropathologic abnormalities seen in humans and in animal models of fetal alcohol syndrome indicate that ethanol triggers apoptotic neurodegeneration and also causes aberrant neuronal and glial migration in the developing nervous system. In tissue culture systems, ethanol causes a striking reduction in neurite outgrowth.

G. Immune System

The effects of alcohol on the immune system are complex; immune function in some tissues is inhibited (eg, the lung), whereas pathologic, hyperactive immune function in other tissues is triggered (eg, liver, pancreas). In addition, acute and chronic exposure to alcohol have widely different effects on immune function. The types of immunologic changes reported for the lung include suppression of the function of alveolar macrophages, inhibition of chemotaxis of granulocytes, and reduced number and function of T cells. In the liver, there is enhanced function of key cells of the innate immune system (eg, Kupffer cells, hepatic stellate cells) and increased cytokine production. In addition to the inflammatory damage that chronic heavy alcohol use precipitates in the liver and pancreas, it predisposes to infections, especially of the lung, and worsens the morbidity and increases the mortality risk of patients with pneumonia.

H. Increased Risk of Cancer

Chronic alcohol use increases the risk for cancer of the mouth, pharynx, larynx, esophagus, and liver. Evidence also points to a small increase in the risk of breast cancer in women. A threshold level for alcohol consumption as it relates to cancer has not been determined. Alcohol itself does not appear to be a carcinogen in most test systems. However, its primary metabolite, acetaldehyde, can damage DNA, as can the reactive oxygen species produced by increased cytochrome P450 activity. Other factors implicated in the link between alcohol and cancer include changes in folate metabolism and the growth-promoting effects of chronic inflammation.

Alcohol-Drug Interactions

Interactions between ethanol and other drugs can have important clinical effects resulting from alterations in the pharmacokinetics or pharmacodynamics of the second drug.

The most common pharmacokinetic alcohol-drug interactions stem from alcohol-induced increases of drug-metabolizing enzymes, as described in Chapter 4. Thus, prolonged intake of alcohol without damage to the liver can enhance the metabolic biotransformation of other drugs. Ethanol-mediated induction of hepatic cytochrome P450 enzymes is particularly important with regard to acetaminophen. Chronic consumption of three or more drinks per day increases the risk of hepatotoxicity due to toxic or even high therapeutic levels of acetaminophen as a result of increased P450-mediated conversion of acetaminophen to reactive hepatotoxic metabolites (see Figure 4–5). Current US Food and Drug Administration (FDA) regulations require that over-the-counter products containing acetaminophen carry a warning about the relation between ethanol consumption and acetaminophen-induced hepatotoxicity.

In contrast, acute alcohol use can inhibit metabolism of other drugs because of decreased enzyme activity or decreased liver blood flow. Phenothiazines, tricyclic antidepressants, and sedative-hypnotic drugs are the most important drugs that interact with alcohol by this pharmacokinetic mechanism.

Pharmacodynamic interactions also are of great clinical significance. The additive CNS depression that occurs when alcohol is combined with other CNS depressants, particularly sedative-hypnotics, is most important. Alcohol also potentiates the pharmacologic effects of many nonsedative drugs, including vasodilators and oral hypoglycemic agents.

Book Chapter
45. Aminoglycosides & Spectinomycin

45. Aminoglycosides & Spectinomycin

The aminoglycosides include streptomycin, neomycin, kanamycin, amikacin, gentamicin, tobramycin, sisomicin, netilmicin, plazomicin, and others. They are used most widely in combination with other agents to treat drug-resistant organisms; for example, they are used with a β-lactam antibiotic in serious infections with gram-negative bacteria, with a β-lactam antibiotic or vancomycin for gram-positive endocarditis, and with one or several other agents for treatment of mycobacterial infections such as tuberculosis.

General Properties of Aminoglycosides

A. Physical and Chemical Properties

Aminoglycosides have a hexose ring, either streptidine (in streptomycin) or 2-deoxystreptamine (in other aminoglycosides), to which various amino sugars are attached by glycosidic linkages (Figures 45–1 and 45–2). They are water-soluble, stable in solution, and more active at alkaline than at acid pH.

Figure 45–1 Structure of streptomycin.
Figure 45–2 Structures of several aminoglycoside antibiotics. Ring II is 2-deoxystreptamine. The resemblance between kanamycin and amikacin and between gentamicin, netilmicin, and tobramycin can be seen. Plazomicin’s rings II and III are similar to the other structures; it shares the same hydroxyl-aminobutyric acid R group as amikacin. Its ring I differs from amikacin in that it is unsaturated. The circled numerals on the kanamycin molecule indicate points of attack of plasmid-mediated bacterial transferase enzymes that can inactivate this drug. ①, ②, and ③, acetyltransferase; ④, phosphotransferase; ⑤, adenylyltransferase. Amikacin is resistant to modification at ②, ③, ④, and ⑤, whereas plazomicin is resistant to modification at ①, ②, ④, and ⑤.

B. Mechanism of Action

The mode of action of streptomycin has been studied more closely than that of other aminoglycosides, but all aminoglycosides are thought to act similarly. Aminoglycosides are irreversible inhibitors of protein synthesis. The initial event is passive diffusion via porin channels across the outer membrane (Figure 43–3). Drug is then actively transported across the cell membrane into the cytoplasm by an oxygen-dependent process. The transmembrane electrochemical gradient supplies the energy for this process, and transport is coupled to a proton pump. Low extracellular pH and anaerobic conditions inhibit transport by reducing the gradient. Transport may be enhanced by cell wall–active drugs such as penicillin or vancomycin; this enhancement may be the basis of the synergy of those antibiotics with aminoglycosides.

Inside the cell, aminoglycosides bind to 30S-subunit ribosomal proteins. Protein synthesis is inhibited by aminoglycosides in at least three ways (see Figure 45–3): (1) interference with the initiation complex of peptide formation; (2) misreading of mRNA, which causes incorporation of incorrect amino acids into the peptide and results in a nonfunctional protein; and (3) breakup of polysomes into nonfunctional monosomes. These activities occur more or less simultaneously, and the overall effect is irreversible and leads to cell death.

Figure 45–3 Putative mechanisms of action of the aminoglycosides in bacteria. Normal protein synthesis is shown in the top panel. At least three aminoglycoside effects have been described, as shown in the bottom panel: blocking of formation of the initiation complex; miscoding of amino acids in the emerging peptide chain due to misreading of the mRNA; and blocking of translocation on mRNA. Blocking of movement of the ribosome may occur after the formation of a single initiation complex, resulting in an mRNA chain with only a single ribosome on it, a so-called monosome. (Reproduced with permission from Trevor AT, Katzung BG, Masters SB: Pharmacology: Examination & Board Review, 6th ed. New York, NY: McGraw Hill; 2002.)

C. Mechanisms of Resistance

Three principal mechanisms of resistance have been established: (1) production of a transferase enzyme that inactivates the aminoglycoside by adenylylation, acetylation, or phosphorylation. This is the principal type of resistance encountered clinically. (2) There is impaired entry of aminoglycoside into the cell. This may result from mutation or deletion of a porin protein involved in transport and maintenance of the electrochemical gradient or from growth conditions under which the oxygen-dependent transport process is not functional. (3) The receptor protein on the 30S ribosomal subunit may be deleted or altered as a result of a mutation.

D. Pharmacokinetics and Once-Daily Dosing

Aminoglycosides are absorbed very poorly from the intact gastrointestinal tract, and almost the entire oral dose is excreted in feces after oral administration. However, the drugs may be absorbed if ulcerations are present. Aminoglycosides are usually administered intravenously as a 30–60 minute infusion. After intramuscular injection, aminoglycosides are well absorbed, giving peak concentrations in blood within 30–90 minutes. After a brief distribution phase, peak serum concentrations are identical to those following intravenous injection. The normal half-life of aminoglycosides in serum is 2–3 hours, increasing to 24–48 hours in patients with significant impairment of renal function. Aminoglycosides are only partially and irregularly removed by hemodialysis—eg, 40–60% for gentamicin—and even less effectively by peritoneal dialysis. Aminoglycosides are highly polar compounds that do not enter cells readily. They are largely excluded from the central nervous system and the eye. In the presence of active inflammation, however, cerebrospinal fluid levels reach 20% of plasma levels, and, in neonatal meningitis, the levels may be higher. Intrathecal or intraventricular injection is required for high levels in cerebrospinal fluid. Even after parenteral administration, concentrations of aminoglycosides are not high in most tissues except the renal cortex. Concentration in most secretions is also modest; in the bile, the level may reach 30% of that in blood. With prolonged therapy, diffusion into pleural or synovial fluid may result in concentrations 50–90% of that of plasma.
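The first-order elimination described above can be illustrated numerically. The sketch below uses the half-life ranges quoted in the text; the 10 mcg/mL starting peak and the specific half-lives chosen (2.5 hours for normal renal function, 36 hours for significant renal impairment) are illustrative values from within those ranges, not dosing recommendations.

```python
import math

def concentration(c0, t_half, t):
    """First-order decay: C(t) = C0 * 0.5 ** (t / t_half)."""
    return c0 * 0.5 ** (t / t_half)

# Illustrative peak of 10 mcg/mL (upper end of the usual gentamicin target).
peak = 10.0

# Normal renal function: half-life ~2-3 h (2.5 h used here).
# Nearly complete washout by 24 h.
print(round(concentration(peak, 2.5, 24), 3))   # → 0.013

# Significant renal impairment: half-life ~24-48 h (36 h used here).
# Most of the dose is still present at 24 h, so drug accumulates
# if the same daily dose is continued.
print(round(concentration(peak, 36.0, 24), 2))  # → 6.3
```

The second result shows why dose adjustment is mandatory in renal insufficiency: with a 36-hour half-life, less than one half-life elapses between daily doses.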

Traditionally, aminoglycosides have been administered in two or three equally divided doses per day in patients with normal renal function. However, administration of the entire daily dose in a single injection may be preferred in many clinical situations for at least two reasons. Aminoglycosides exhibit concentration-dependent killing; that is, higher concentrations kill a larger proportion of bacteria and kill at a more rapid rate. They also have a significant postantibiotic effect, such that the antibacterial activity persists beyond the time during which measurable drug is present. The postantibiotic effect of aminoglycosides can last several hours. Because of these properties, a given total amount of aminoglycoside may have better efficacy and less toxicity when administered as a single large dose than when administered as multiple smaller doses.

When administered with a cell wall–active antibiotic (a β-lactam or vancomycin), aminoglycosides may exhibit synergistic killing against certain bacteria. The effect of the drugs in combination is greater than the anticipated effect of each individual drug; ie, the killing effect of the combination is more than additive. This synergy may be important in certain clinical situations, such as endocarditis.

Adverse effects from aminoglycosides are both time- and concentration-dependent. Toxicity is unlikely to occur until a certain threshold concentration is reached, but, once that concentration is achieved, the time beyond this threshold becomes critical. This threshold is not precisely defined, but a trough concentration above 2 mcg/mL is predictive of toxicity. At clinically relevant doses, the total time above this threshold is greater with multiple smaller doses of drug than with a single large dose.
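The trade-off described in the preceding paragraphs (a single large dose gives a higher peak, while divided dosing keeps concentrations above a toxicity threshold for longer) can be sketched with a minimal one-compartment IV bolus model. The 70-kg patient, 15 L volume of distribution, 2.5-hour half-life, and 6 mg/kg total daily dose are hypothetical values chosen for illustration.

```python
import math

def profile(doses, vd=15.0, t_half=2.5, dt=0.01, horizon=24.0):
    """Concentration-time curve for IV bolus doses in a one-compartment model.

    doses: list of (time_h, dose_mg); vd: volume of distribution in L.
    Returns (times, concentrations) with C in mg/L (numerically equal to mcg/mL).
    """
    k = math.log(2) / t_half
    times = [i * dt for i in range(int(horizon / dt) + 1)]
    conc = [sum((d / vd) * math.exp(-k * (t - t0))
                for t0, d in doses if t >= t0)
            for t in times]
    return times, conc

def summarize(doses):
    """Return (peak concentration, hours above the 2 mcg/mL threshold)."""
    times, conc = profile(doses)
    peak = max(conc)
    hours_above_2 = sum(1 for c in conc if c > 2.0) * 0.01  # grid step = 0.01 h
    return peak, hours_above_2

# 420 mg/day in a hypothetical 70-kg patient:
# one 6 mg/kg dose vs three 2 mg/kg doses every 8 hours.
single = summarize([(0.0, 420.0)])
divided = summarize([(0.0, 140.0), (8.0, 140.0), (16.0, 140.0)])
print(single)   # higher peak, shorter time above 2 mcg/mL
print(divided)  # lower peak, longer time above 2 mcg/mL
```

Under these assumptions the single dose peaks near 28 mcg/mL but spends roughly half as many hours above the 2 mcg/mL threshold as the divided regimen, consistent with the argument for once-daily dosing.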

Numerous clinical studies demonstrate that a single daily dose of aminoglycoside is just as effective as, and probably less toxic than, multiple smaller doses. Therefore, many authorities recommend that aminoglycosides be administered as a single daily dose in most clinical situations. However, the efficacy of once-daily aminoglycoside dosing in combination therapy of enterococcal endocarditis and staphylococcal endocarditis in patients with a prosthetic valve remains to be defined, and administration of lower doses two or three times daily is still recommended. In contrast, limited data do support once-daily dosing in streptococcal endocarditis. The role of once-daily dosing in pregnancy, in obesity, and in neonates also is not well defined.

Once-daily dosing has potential practical advantages. For example, repeated determinations of serum concentrations are unnecessary unless an aminoglycoside is given for more than 3 days. A drug administered once a day rather than three times a day is less labor intensive. And once-a-day dosing is more feasible for outpatient therapy.

Aminoglycosides are cleared by the kidney, and excretion is directly proportional to creatinine clearance. To avoid accumulation and toxic levels, once-daily dosing of aminoglycosides is generally avoided if renal function is impaired. Rapidly changing renal function, which may occur with acute kidney injury, must also be monitored to avoid overdosing or underdosing. Provided these pitfalls are avoided, once-daily aminoglycoside dosing is safe and effective. If the creatinine clearance is >60 mL/min, then a single daily dose of 5–7 mg/kg of gentamicin or tobramycin is recommended (15 mg/kg for amikacin and plazomicin). For patients with creatinine clearance <60 mL/min, traditional dosing as described below is recommended. Of note, plazomicin has been studied using only the extended or once-daily dosing strategy; specific information on dosing and therapeutic drug monitoring is outlined in the plazomicin section that follows. With once-daily dosing, serum concentrations need not be routinely checked until the second or third day of therapy, depending on the stability of renal function and the anticipated duration of therapy. In most circumstances, it is unnecessary to check peak concentrations; an exception may be when ensuring adequately high peak concentrations for treating infections caused by drug-resistant pathogens. The goal is to administer drug so that concentrations of <1 mcg/mL are present between 18 and 24 hours after dosing. This provides sufficient time for washout of drug to occur before the next dose is given. Several nomograms have been developed and validated to assist clinicians with once-daily dosing (eg, the Freeman and Hartford nomograms).
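The dose-selection rule above can be sketched as a small helper. The function name `once_daily_dose_mg` is hypothetical, the upper end of the 5–7 mg/kg range is used for gentamicin and tobramycin, and the Cockcroft-Gault creatinine clearance estimate is a standard formula included for completeness rather than taken from this chapter.

```python
def cockcroft_gault(age_y, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate (mL/min).
    A standard formula, not specified in this chapter."""
    crcl = ((140 - age_y) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def once_daily_dose_mg(drug, weight_kg, crcl_ml_min):
    """Once-daily dose per the rule stated in the text.

    Returns a dose in mg, or None when CrCl < 60 mL/min, in which
    case the text recommends traditional divided dosing instead.
    Uses the upper end (7 mg/kg) of the 5-7 mg/kg range for
    gentamicin/tobramycin.
    """
    if crcl_ml_min < 60:
        return None
    mg_per_kg = {"gentamicin": 7, "tobramycin": 7,
                 "amikacin": 15, "plazomicin": 15}[drug]
    return mg_per_kg * weight_kg

# Hypothetical 40-year-old, 70-kg man with serum creatinine 1.0 mg/dL:
crcl = cockcroft_gault(40, 70, 1.0)          # ~97 mL/min
print(once_daily_dose_mg("gentamicin", 70, crcl))   # → 490
```

With CrCl below 60 mL/min the helper returns None, deferring to the traditional dosing approach described in the next paragraph.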

With traditional dosing, adjustments must be made to prevent accumulation of drug and toxicity in patients with renal insufficiency. Either the dose of drug is kept constant and the interval between doses is increased, or the interval is kept constant and the dose is reduced. Nomograms and formulas have been constructed relating serum creatinine levels to adjustments in traditional treatment regimens. For a traditional twice- or thrice-daily dosing regimen, peak serum concentrations should be determined from a blood sample obtained 30–60 minutes after a dose, and trough concentrations from a sample obtained just before the next dose. Doses of gentamicin and tobramycin should be adjusted to maintain peak levels between 5 and 10 mcg/mL (typically between 8 and 10 mcg/mL in more serious infections) and trough levels <2 mcg/mL (<1 mcg/mL is optimal).
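The peak and trough targets for traditional gentamicin/tobramycin dosing can be encoded as a simple check. The `check_levels` helper and its advisory messages are hypothetical; only the numeric targets come from the text.

```python
def check_levels(peak_mcg_ml, trough_mcg_ml, serious_infection=False):
    """Compare measured gentamicin/tobramycin levels against the
    targets in the text: peak 5-10 mcg/mL (8-10 mcg/mL in more
    serious infections), trough <2 mcg/mL (<1 mcg/mL optimal)."""
    peak_lo = 8.0 if serious_infection else 5.0
    notes = []
    if peak_mcg_ml < peak_lo:
        notes.append("peak low: consider raising the dose")
    elif peak_mcg_ml > 10.0:
        notes.append("peak high: consider reducing the dose")
    if trough_mcg_ml >= 2.0:
        notes.append("trough high: extend the dosing interval")
    return notes or ["within target range"]

print(check_levels(9.0, 0.8))   # → ['within target range']
print(check_levels(9.0, 2.5))   # → ['trough high: extend the dosing interval']
```

Raising the dose increases the peak, whereas lengthening the interval mainly lowers the trough, which is why the two adjustment strategies described above are distinct.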

E. Adverse Effects

All aminoglycosides are ototoxic and nephrotoxic. Ototoxicity and nephrotoxicity are more likely to be encountered when therapy is continued for more than 5 days, at higher doses, in the elderly, and in the setting of renal insufficiency. Concurrent use with loop diuretics (eg, furosemide, bumetanide, or ethacrynic acid) or other nephrotoxic antimicrobial agents (eg, vancomycin or amphotericin) can potentiate nephrotoxicity and should be avoided if possible. Ototoxicity can manifest either as auditory damage, resulting in tinnitus and high-frequency hearing loss initially, or as vestibular damage with vertigo, ataxia, and loss of balance. Nephrotoxicity can be identified by rising serum creatinine levels or reduced estimated glomerular filtration rate, although the earliest indication can be an increase in trough serum aminoglycoside concentrations. Neomycin, kanamycin, and amikacin are the agents most likely to cause auditory damage. Streptomycin and gentamicin are the most vestibulotoxic. Neomycin, tobramycin, and gentamicin are the most nephrotoxic. Plazomicin may be associated with lower rates of ototoxicity and nephrotoxicity compared to older aminoglycosides; however, as the newest aminoglycoside, clinical experience is limited.

In very high doses, aminoglycosides can produce a curare-like effect with neuromuscular blockade that results in respiratory paralysis. This paralysis is usually reversible by calcium gluconate, when given promptly, or neostigmine. Hypersensitivity occurs infrequently.

F. Clinical Uses

Aminoglycosides are mostly used against aerobic gram-negative bacteria, especially when there is concern for drug-resistant pathogens or in critically ill patients. Tobramycin and gentamicin are almost always used in combination with a β-lactam antibiotic to extend empiric coverage and to take advantage of the potential synergy between these two classes of drugs. Amikacin and streptomycin are frequently given in combination with other antibacterials for the treatment of mycobacterial infections. It remains unclear whether plazomicin will be given alone or in combination with other agents. Penicillin-aminoglycoside combinations have also been used to achieve bactericidal activity in treatment of enterococcal endocarditis and to shorten duration of therapy for viridans streptococcal endocarditis. Due to toxicity, these combinations are used less frequently when alternate regimens are available. For example, in the case of enterococcal endocarditis, studies suggest that the combination of ampicillin and ceftriaxone is an effective regimen with less risk for nephrotoxicity. When aminoglycosides are used, the selection of agent and dose depends on the infection being treated and the susceptibility of the isolate.