{"id":255,"date":"2026-05-14T13:45:00","date_gmt":"2026-05-14T17:45:00","guid":{"rendered":"https:\/\/drugchatter.com\/insights\/?p=255"},"modified":"2026-05-14T13:38:54","modified_gmt":"2026-05-14T17:38:54","slug":"the-vioxx-collapse-what-off-label-sales-tactics-teach-pharma-about-ai-monitoring-today","status":"publish","type":"post","link":"https:\/\/drugchatter.com\/insights\/2026\/05\/14\/the-vioxx-collapse-what-off-label-sales-tactics-teach-pharma-about-ai-monitoring-today\/","title":{"rendered":"The Vioxx Collapse: What Off-Label Sales Tactics Teach Pharma About AI Monitoring Today"},"content":{"rendered":"\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"559\" src=\"https:\/\/drugchatter.com\/insights\/wp-content\/uploads\/2026\/05\/image-9.png\" alt=\"\" class=\"wp-image-259\" srcset=\"https:\/\/drugchatter.com\/insights\/wp-content\/uploads\/2026\/05\/image-9.png 1024w, https:\/\/drugchatter.com\/insights\/wp-content\/uploads\/2026\/05\/image-9-300x164.png 300w, https:\/\/drugchatter.com\/insights\/wp-content\/uploads\/2026\/05\/image-9-768x419.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>On September 30, 2004, Merck pulled Vioxx from every pharmacy shelf in the world. The withdrawal took less than 24 hours to execute. The fallout took two decades to settle.<\/p>\n\n\n\n<p>The official cause was cardiovascular risk data from the APPROVe trial \u2014 patients on rofecoxib for 18 months or longer faced double the rate of serious heart events compared to placebo. 
But the story behind the story involves something more instructive: a years-long campaign to sell Vioxx for conditions it had not yet been approved to treat, by a sales force trained to deflect cardiovascular safety concerns and push prescribing volume past approved boundaries.<\/p>\n\n\n\n<p>That behavior \u2014 and the regulatory, legal, and reputational wreckage it produced \u2014 is now a reference case for a new category of pharmaceutical risk: what happens when inaccurate, off-label, or misleading drug information circulates at scale through channels a company does not control. In 2004, those channels were sales reps and physician detailing. In 2025, they are ChatGPT, Gemini, Perplexity, Claude, and every AI-native search interface that answers patient and physician queries about medicines without a compliance filter.<\/p>\n\n\n\n<p>The mechanics are different. The exposure vector is the same.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How Merck Sold Vioxx Before It Was Approved for Rheumatoid Arthritis<\/strong><\/h2>\n\n\n\n<p>Vioxx received initial FDA approval in May 1999 for acute pain and osteoarthritis. Rheumatoid arthritis was not on the label. The cardiovascular signal was already present in the VIGOR trial data, completed in 2000, but that data did not make it into the label in a form that clearly communicated risk to prescribers or patients.<\/p>\n\n\n\n<p>What Merck&#8217;s field force did in the gap between the osteoarthritis approval and the eventual RA approval in May 2002 became central to the government&#8217;s case. 
Internal training documents and sales call records cited in the Department of Justice investigation showed representatives actively promoting Vioxx to rheumatologists for rheumatoid arthritis patients \u2014 a use not yet cleared by the FDA.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What &#8216;Off-Label Promotion&#8217; Actually Means Under FDA Rules<\/strong><\/h3>\n\n\n\n<p>Under 21 U.S.C. \u00a7 331(a) and the FDCA, pharmaceutical companies cannot promote drugs for uses, populations, or doses not included in FDA-approved labeling. Physicians can prescribe off-label \u2014 that is legal and common \u2014 but companies cannot market to those uses. The distinction is narrow in practice and wide in legal consequence.<\/p>\n\n\n\n<p>Merck&#8217;s representatives, according to court records and DOJ filings, were coached to steer conversations toward rheumatoid arthritis when calling on rheumatologists, frame Vioxx&#8217;s gastrointestinal tolerability advantage against comparators already used in RA patients, and use published studies \u2014 including the VIGOR trial \u2014 selectively to support RA use while minimizing cardiovascular findings in that same study.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The &#8216;Dodge Ball&#8217; Training Materials and What They Revealed<\/strong><\/h3>\n\n\n\n<p>Among the most damaging internal documents to surface during litigation were Merck training materials that instructed sales representatives how to handle physician questions about Vioxx&#8217;s cardiovascular risk profile. 
One set of materials, widely reported during the New Jersey bellwether trials, was informally described by plaintiffs&#8217; attorneys as a strategy for representatives to &#8216;dodge&#8217; cardiovascular safety questions rather than answer them directly.<\/p>\n\n\n\n<p>The FDA sent Merck a warning letter in September 2001 \u2014 three years before the withdrawal \u2014 specifically citing promotional materials that minimized cardiovascular risk findings and made misleading comparative efficacy claims. That warning letter is publicly available in FDA&#8217;s database and describes violations that align precisely with what litigation later confirmed was happening in the field.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Why Rheumatoid Arthritis Was the Commercial Target<\/strong><\/h3>\n\n\n\n<p>RA patients represent a chronic, high-volume prescribing population. They take NSAIDs continuously, not episodically, which made GI tolerability a genuine clinical selling point \u2014 COX-2 inhibitors like Vioxx cause fewer gastric ulcers than traditional NSAIDs at equivalent doses. But continuous use also meant extended cardiovascular exposure, which is precisely what the APPROVe trial eventually measured.<\/p>\n\n\n\n<p>The commercial logic was coherent. The safety logic was not. Off-label promotion into a chronic-use population accelerated exactly the exposure pattern that would ultimately prove lethal for thousands of patients and catastrophic for the company.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The $4.85 Billion Settlement: What Regulators and Courts Found<\/strong><\/h2>\n\n\n\n<p>In November 2007, Merck reached a $4.85 billion settlement with plaintiffs in the federal and state Vioxx litigation \u2014 at the time one of the largest pharmaceutical settlements in U.S. history. 
That figure covered personal injury claims from patients who suffered heart attacks and strokes.<\/p>\n\n\n\n<p>A separate criminal and civil resolution with the Department of Justice followed in 2011. Merck pleaded guilty to a criminal misdemeanor charge related to illegal off-label promotion of Vioxx and paid $950 million in criminal fines and civil settlements. The criminal information filed by DOJ specifically cited promotion of Vioxx for rheumatoid arthritis before FDA approval of that indication.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What the DOJ Criminal Information Actually Charged<\/strong><\/h3>\n\n\n\n<p>The 2011 DOJ resolution is worth reading in detail for anyone building a compliance framework today. The criminal information charged that between 1999 and 2002, Merck caused Vioxx to be introduced into interstate commerce labeled for uses the FDA had not approved, specifically including rheumatoid arthritis. It also charged that Merck&#8217;s representatives were trained to use the VIGOR study to promote Vioxx&#8217;s GI benefits while omitting or minimizing the cardiovascular findings from the same dataset.<\/p>\n\n\n\n<p>This is not a fine-print technical violation. It is an allegation that a company systematically trained its field force to present incomplete, selectively favorable safety information to prescribers who were relying on that information to make treatment decisions for patients.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How Vioxx Changed FDA Enforcement Posture on Drug Promotion<\/strong><\/h3>\n\n\n\n<p>The Vioxx withdrawal and its aftermath accelerated several FDA enforcement changes that remain in effect. The agency increased scrutiny of pharmaceutical marketing materials. It expanded its Bad Ad program, which trains healthcare providers to recognize and report misleading drug promotion. It issued draft guidance in 2014 on responding to unsolicited requests for off-label information online. 
And it has since signaled, through both guidance documents and enforcement actions, that the standard for &#8216;misleading&#8217; drug information extends beyond formal advertising to include any company-attributed communication \u2014 including, by extension, any AI-generated content a company could be construed as controlling or endorsing.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why AI Search Is the New Off-Label Sales Rep<\/strong><\/h2>\n\n\n\n<p>The analogy is uncomfortable, but it holds. A Merck sales representative in 2001 was a human retrieval system \u2014 trained to answer physician queries about Vioxx with selectively curated information, emphasizing benefits for target indications while steering away from unfavorable safety data. That representative was not lying in a legally simple sense. They were presenting real data in a misleading frame.<\/p>\n\n\n\n<p>Large language models do something structurally similar, entirely without intent. They generate answers to drug queries by pattern-matching against training data that includes clinical literature, patient forums, news articles, social media, and pharmaceutical marketing copy \u2014 without any mechanism to distinguish current approved labeling from outdated, contested, or promotional sources.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What ChatGPT Gets Wrong About Drug Indications<\/strong><\/h3>\n\n\n\n<p>Testing conducted by researchers and published in journals including JAMA and Drug Safety has found that LLMs, including GPT-4, frequently conflate approved and unapproved indications, cite outdated dosing information, and present drug comparisons that do not reflect current label language. 
A 2023 analysis in JAMA Internal Medicine found that ChatGPT answered medication questions with an error rate that would be clinically significant in a real prescribing context.<\/p>\n\n\n\n<p>For drugs with complex or evolving labels \u2014 any drug that has had a safety update, a risk evaluation and mitigation strategy (REMS) requirement, or a label change since the model&#8217;s training cutoff \u2014 the error rate compounds. LLMs do not know what they do not know, and they do not hedge when filling in gaps with plausible-sounding but inaccurate information.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How Often Does Claude Mention Ozempic vs Wegovy for Weight Loss?<\/strong><\/h3>\n\n\n\n<p>This is not a rhetorical question. Ozempic (semaglutide, Novo Nordisk) is approved for type 2 diabetes. Wegovy (semaglutide at a higher dose) is approved for chronic weight management. Prescribing Ozempic off-label for weight loss is legal for physicians and extremely common. Promoting it for that use is not legal for Novo Nordisk.<\/p>\n\n\n\n<p>When AI systems answer the query &#8216;best medication for weight loss,&#8217; they routinely mention both drugs, often without distinguishing approved indications. In some cases they recommend Ozempic by name for weight loss patients who do not have diabetes \u2014 generating output that, if it came from a company-controlled channel, would constitute off-label promotion.<\/p>\n\n\n\n<p>The question for pharma brand teams is not whether AI systems are violating FDA rules \u2014 they are not regulated entities under the FDCA. The question is whether those AI outputs are influencing patient behavior and physician conversations in ways the company needs to track, counter, or correct. 
<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Can AI Hallucinations Actually Trigger FDA Regulatory Risk?<\/strong><\/h3>\n\n\n\n<p>The FDA&#8217;s 2014 draft guidance on internet and social media addressed company obligations when third parties spread misinformation about drugs online. The guidance suggested that if a company has a presence on a platform where misinformation appears and fails to correct it, that inaction could constitute a violation. That guidance has not been finalized, but it shaped industry practice.<\/p>\n\n\n\n<p>Emerging FDA signals on AI \u2014 including the agency&#8217;s voluntary framework for AI in drug development and the 2024 agency action plan on artificial intelligence \u2014 suggest the regulatory posture is evolving toward explicit guidance on pharmaceutical company obligations when AI systems spread inaccurate drug information. Companies that are not actively monitoring AI outputs about their products are making an implicit compliance bet they may not have fully priced.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8216;Between 2019 and 2023, online adverse event reports submitted via social media and digital patient communities increased by 340%, while traditional MedWatch submissions grew by less than 12% in the same period.&#8217; \u2014 IQVIA Institute for Human Data Science, 2024 Digital Health Report<\/p>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Tracking Drug Brand Mentions Across ChatGPT, Gemini, and Claude<\/strong><\/h2>\n\n\n\n<p>Pharmaceutical AI monitoring has moved from a theoretical discipline to an operational one. 
A small number of specialized platforms now offer systematic query testing and output analysis across LLM-based interfaces, including conversational AI chatbots and AI-augmented search engines like Perplexity and Google&#8217;s AI Overviews.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What Pharma Brand Teams Actually Need to Measure<\/strong><\/h3>\n\n\n\n<p>The useful metrics are not simply mention counts. Raw frequency of a drug name appearing in AI outputs tells a brand team almost nothing about risk or opportunity. The meaningful variables are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Indication accuracy: Is the AI describing the drug&#8217;s FDA-approved uses or conflating them with off-label applications?<\/li>\n\n\n\n<li>Safety claim accuracy: Does AI-generated content match current label language on warnings, contraindications, and adverse events?<\/li>\n\n\n\n<li>Competitive framing: Does the AI recommend competitors, generics, or alternatives in response to branded drug queries?<\/li>\n\n\n\n<li>Patient sentiment proxies: What concerns are patients expressing in queries about a drug, and are AI systems reinforcing or correcting those concerns?<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How DrugChatter Approaches LLM Brand Surveillance<\/strong><\/h3>\n\n\n\n<p>DrugChatter is one of the platforms purpose-built for pharmaceutical AI monitoring. It systematically queries multiple LLMs \u2014 including ChatGPT, Claude, Gemini, and Perplexity \u2014 with standardized drug-related prompts and analyzes the outputs for accuracy, sentiment, indication alignment, and competitive positioning. 
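<\/p>\n\n\n\n<p>As a rough illustration of what standardized query testing involves, the sketch below scores a single AI answer against an approved-indication checklist. The drug name, prompt, and term lists are invented for the example and are not any vendor&#8217;s actual implementation.<\/p>

```python
# Sketch of standardized LLM output scoring for indication accuracy.
# The prompt, drug name, and term lists are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class QueryResult:
    model: str
    prompt: str
    output: str
    flags: list = field(default_factory=list)

# Each standardized prompt carries the approved-indication terms an answer
# should include and the off-label terms that should trigger review.
SPEC = {
    "expected": ["osteoarthritis", "acute pain"],
    "off_label": ["rheumatoid arthritis"],
}

def score_output(model: str, prompt: str, output: str, spec: dict) -> QueryResult:
    """Flag omitted approved indications and surfaced off-label uses."""
    result = QueryResult(model, prompt, output)
    text = output.lower()
    for term in spec["expected"]:
        if term not in text:
            result.flags.append(f"missing approved indication: {term}")
    for term in spec["off_label"]:
        if term in text:
            result.flags.append(f"mentions off-label use: {term}")
    return result

r = score_output(
    "model-a",
    "What conditions is Drug X approved to treat?",
    "Drug X is prescribed for acute pain and rheumatoid arthritis.",
    SPEC,
)
# r.flags notes the omitted osteoarthritis indication and the
# off-label rheumatoid arthritis mention.
```

<p>In production the output string would come from each model&#8217;s API and the term lists would be derived from current FDA labeling; the scoring step shown here is the core idea.<\/p>\n\n\n\n<p>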
The platform generates share-of-voice benchmarks that allow brand teams to compare how their drug performs across AI systems relative to competitors.<\/p>\n\n\n\n<p>For brand teams trained on traditional share-of-voice metrics from search engine optimization and social listening, the DrugChatter framework translates AI monitoring into familiar commercial terms. A drug with high AI share-of-voice but poor indication accuracy is gaining visibility at the cost of compliance risk. A drug with low AI share-of-voice is losing the AI search channel to competitors even if its traditional SEO performance is strong.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Do LLMs Recommend Generic Drugs More Frequently Than Branded Drugs?<\/strong><\/h3>\n\n\n\n<p>There is evidence they do, at least for queries where generic availability is high and the questioner does not specify a brand. LLMs trained on large bodies of medical and pharmacoeconomic literature have absorbed substantial content advocating for generic prescribing on cost grounds. For queries like &#8216;cheapest treatment for acid reflux&#8217; or &#8216;most affordable diabetes medication,&#8217; AI systems tend to surface generic options before branded alternatives, regardless of clinical differentiation.<\/p>\n\n\n\n<p>This creates a specific monitoring need for branded drug manufacturers: tracking whether AI systems are recommending their drug by name, suggesting the active ingredient generically, or defaulting to a competitor. The patterns vary by therapy area and by LLM, and they change over time as models are updated.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How Patients Ask About Drug Interactions in AI Search<\/strong><\/h3>\n\n\n\n<p>Consumer query analysis \u2014 examining the actual prompts patients use when asking AI systems about medications \u2014 reveals patterns that traditional market research misses. 
Patients asking about Eliquis, for example, frequently ask AI systems whether they can stop taking it before surgery without talking to their doctor. Patients asking about antidepressants ask AI whether they can take a smaller dose to reduce side effects. These are not clinical trial populations. They are real patients making real decisions based partly on AI outputs.<\/p>\n\n\n\n<p>For a pharmacovigilance team, the query patterns themselves are a signal. A surge in patients asking AI whether a specific drug causes a specific adverse event can indicate an emerging safety concern \u2014 or a misinformation wave \u2014 weeks before it appears in social listening data or formal adverse event reports.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What the Vioxx Pattern Looks Like in AI-Generated Drug Content<\/strong><\/h2>\n\n\n\n<p>Apply the Vioxx template to current AI drug monitoring and a specific risk profile emerges. The Vioxx problem had four structural components: a safety signal present in the data but not prominently communicated; an off-label use promoted before regulatory approval; a sales channel trained to deflect safety questions; and a prescriber audience that made decisions based on incomplete information.<\/p>\n\n\n\n<p>AI search systems replicate three of those four components in unintentional but measurable ways.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>When AI Systems Present Cardiovascular Drug Warnings Incorrectly<\/strong><\/h3>\n\n\n\n<p>NSAID cardiovascular risk warnings \u2014 including the boxed warning added to prescription NSAID labels after the Vioxx withdrawal \u2014 are among the most widely tested categories in pharmaceutical AI accuracy research. The results are inconsistent. 
Depending on how the question is framed, LLMs will variously describe the cardiovascular risk accurately, downplay it, omit it, or apply it selectively to prescription NSAIDs while excluding over-the-counter NSAIDs, whose labels carry similar cardiovascular warnings.<\/p>\n\n\n\n<p>A patient asking ChatGPT whether ibuprofen is safe for daily arthritis pain and receiving an answer that does not mention cardiovascular risk is receiving information that a pharmacist or physician would consider dangerously incomplete. The LLM does not know it has produced a harmful output. The drug company does not know it happened. No adverse event report is filed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Which Drugs Are Most Frequently Mentioned by AI \u2014 and Why It Matters<\/strong><\/h3>\n\n\n\n<p>Frequency of AI mention correlates roughly with training data volume, which means older, more established drugs tend to appear more often in AI responses than newer agents \u2014 even when newer drugs are clinically superior for a given patient population. This is a subtle but commercially significant distortion.<\/p>\n\n\n\n<p>A new branded drug entering a competitive market against a well-established generic will be systematically underrepresented in AI outputs relative to its clinical profile, because there is simply less text about it in the models&#8217; training data. Brand teams that do not monitor this gap cannot counter it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What Pharma Brand Teams Can Learn From Reddit AI Citations<\/strong><\/h3>\n\n\n\n<p>Reddit is a substantial component of LLM training data for several major models. 
This matters for pharmaceutical monitoring because Reddit&#8217;s patient communities \u2014 r\/ChronicPain, r\/Fibromyalgia, r\/diabetes, r\/MultipleSclerosis, and dozens of others \u2014 contain high volumes of firsthand patient experience that includes off-label use reports, adverse event accounts, and drug comparisons that have no regulatory filter.<\/p>\n\n\n\n<p>When an LLM cites or synthesizes Reddit content in response to a drug query, it can surface medication experiences that are statistically unrepresentative, anecdotal, or specific to patient subpopulations that look nothing like the approved indication. Tracking which community sources are feeding AI outputs about specific drugs gives brand teams visibility into a content layer they cannot influence through traditional media relations or label management.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>AI-Driven Pharmacovigilance: What&#8217;s Real and What&#8217;s Regulatory Theater<\/strong><\/h2>\n\n\n\n<p>The pharmaceutical industry&#8217;s regulatory bodies \u2014 FDA, EMA, Health Canada \u2014 have all published frameworks or discussion documents on the use of artificial intelligence in drug safety surveillance. The gap between what regulators describe in framework documents and what validated pharmacovigilance practice actually allows is large and worth understanding precisely.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Can AI Outputs Be Used for Adverse Event Reporting?<\/strong><\/h3>\n\n\n\n<p>Under current ICH E2E guidelines and FDA pharmacovigilance regulations, an adverse event report requires four elements: an identifiable patient, an identifiable reporter, a suspect drug, and an adverse event. AI-generated content does not supply these by definition. 
An LLM output describing a drug causing a specific side effect is not an adverse event report \u2014 it is text.<\/p>\n\n\n\n<p>Where AI does have validated pharmacovigilance applications is in mining human-generated sources \u2014 patient forums, social media, electronic health records \u2014 to identify potential adverse event signals before they reach sufficient frequency in formal MedWatch or EudraVigilance submissions. Several companies including IBM Watson Health (prior to the health division&#8217;s sale) and Oracle Health Sciences have offered AI-assisted signal detection tools for this purpose. The distinction is between AI analyzing human-generated adverse event data and AI generating adverse event data itself.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How AI Hallucinations About Drug Safety Enter the Patient Information Ecosystem<\/strong><\/h3>\n\n\n\n<p>The pathway from AI hallucination to patient harm is not hypothetical. A patient asks a conversational AI whether a drug they have been prescribed interacts with a supplement they take. The AI, trained on data that includes both accurate and inaccurate drug interaction content, generates a confident-sounding answer that is partially wrong. The patient makes a medication decision based on that answer. No report is filed. No signal is detected.<\/p>\n\n\n\n<p>This pathway is structurally identical to the Vioxx off-label problem: information provided to a patient or caregiver that is inaccurate, unmonitored, and consequential. The source is different. The mechanism of harm is the same.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What Real-Time AI Monitoring Systems Can and Cannot Detect<\/strong><\/h3>\n\n\n\n<p>Current AI monitoring platforms like DrugChatter test specific pre-defined queries against LLM interfaces and analyze outputs. 
They are good at detecting systematic inaccuracies \u2014 claims that appear consistently across multiple queries and multiple models. They are less effective at detecting one-off hallucinations generated in response to unusual patient queries, because those queries are infinite in variety and cannot all be pre-specified.<\/p>\n\n\n\n<p>The appropriate framing is not &#8216;comprehensive surveillance&#8217; but &#8216;systematic sampling.&#8217; A pharmaceutical company using AI monitoring tools is building a statistical picture of how their drug appears in AI outputs across representative query types. That picture will miss edge cases, but it will catch systematic errors \u2014 which are exactly the errors most likely to reach large patient populations and create regulatory exposure.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>How Eli Lilly and Novo Nordisk Monitor AI Mentions of Their GLP-1 Drugs<\/strong><\/h2>\n\n\n\n<p>Neither Eli Lilly nor Novo Nordisk has publicly disclosed a detailed AI monitoring methodology. What is known from investor communications, conference presentations, and healthcare trade reporting is that both companies have substantially expanded their digital intelligence operations since 2022, with explicit focus on AI-generated content about their GLP-1 receptor agonist portfolios \u2014 Mounjaro and Zepbound for Lilly, Ozempic and Wegovy for Novo Nordisk.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Why GLP-1 Drugs Have the Highest AI Mention Rates in Pharma<\/strong><\/h3>\n\n\n\n<p>Ozempic and Wegovy have generated more consumer media coverage, social media volume, and search traffic than any pharmaceutical class since statins were introduced. 
That volume feeds directly into LLM training data, making GLP-1 drugs among the most frequently mentioned drug classes in AI search outputs across every major platform.<\/p>\n\n\n\n<p>This creates both an opportunity and a specific hazard. The opportunity is that patient awareness of these drugs is high, and AI queries often reflect genuine intent to use or continue therapy. The hazard is that the enormous volume of social media content about GLP-1 drugs \u2014 including celebrity use, off-label weight loss discussion, supply shortage anxiety, and counterfeit product warnings \u2014 has also trained AI systems with large amounts of contextually inaccurate or misleading information that surfaces in response to clinical queries.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Tracking Share of Voice for Tirzepatide vs Semaglutide in LLMs<\/strong><\/h3>\n\n\n\n<p>Share-of-voice analysis in AI search differs from traditional search SEO in one important respect: AI systems do not index pages, they synthesize answers. A drug that ranks well in traditional search because of high-quality website content may perform differently in AI share-of-voice, because the AI is drawing on the entirety of its training data rather than on a website&#8217;s optimized content.<\/p>\n\n\n\n<p>For Mounjaro (tirzepatide) versus Wegovy (semaglutide) specifically, AI share-of-voice varies by query type. For diabetes queries, tirzepatide appears more frequently in recent AI outputs, reflecting its clinical differentiation data. For weight loss queries, semaglutide still dominates, reflecting the larger historical training data volume. 
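<\/p>\n\n\n\n<p>The share-of-voice arithmetic behind such comparisons is simple to sketch. The sampled outputs below are invented placeholders rather than real model responses, and the brand list is illustrative.<\/p>

```python
# Minimal share-of-voice calculation over a sample of AI outputs.
# The sampled texts and brand list are invented placeholders.
from collections import Counter

def share_of_voice(outputs, brands):
    """Fraction of sampled outputs that mention each brand (case-insensitive)."""
    counts = Counter()
    for text in outputs:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = len(outputs) or 1  # avoid dividing by zero on an empty sample
    return {brand: counts[brand] / total for brand in brands}

sampled = [
    "Semaglutide (Wegovy) is commonly discussed for weight management.",
    "Tirzepatide and semaglutide are both GLP-1 receptor agonists.",
    "Diet and exercise remain first-line before any medication.",
    "Wegovy, a semaglutide product, is approved for chronic weight management.",
]
sov = share_of_voice(sampled, ["semaglutide", "tirzepatide"])
# sov: semaglutide appears in 3 of 4 sampled outputs, tirzepatide in 1 of 4.
```

<p>Run over repeated samples, segmented by query type, and tracked across models as they are updated, the same calculation produces the kind of split described here.<\/p>\n\n\n\n<p>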
Brand teams monitoring this split can identify where content investment or targeted patient education materials might shift AI representation over time as models are updated.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What Physicians Ask AI About Drugs \u2014 and What That Means for Detailing<\/strong><\/h2>\n\n\n\n<p>Physician use of AI in clinical practice has grown faster than many industry observers predicted. A 2024 survey by the American Medical Association found that over 60% of physician respondents used AI tools for clinical information lookup at least occasionally, with a substantial subset using them for drug prescribing questions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How AI Is Changing the Pharmaceutical Sales Rep Model<\/strong><\/h3>\n\n\n\n<p>This is the Vioxx parallel in reverse. The off-label Vioxx problem was created by a human sales channel providing physicians with selectively curated drug information. The emerging AI problem is that physicians are now supplementing or replacing that human channel with AI systems \u2014 and the information AI systems provide is less consistently accurate than what even a poorly trained sales representative delivers.<\/p>\n\n\n\n<p>A sales representative, whatever their incentives, knows the approved label of the drug they are selling. They have been through FDA-mandated training on what they can and cannot say. An LLM has no such constraint. 
When a physician asks an AI assistant whether a drug is appropriate for a specific patient type, the AI generates an answer based on pattern matching, not on regulatory approval status.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Physician Query Patterns in AI Search: What the Data Shows<\/strong><\/h3>\n\n\n\n<p>Query analysis from AI monitoring platforms reveals that physician queries about drugs cluster around four types: dosing in special populations (renal impairment, pregnancy, pediatrics), drug-drug interactions, comparative efficacy against competitors, and off-label use evidence. The last category is the most compliance-sensitive, because AI systems frequently answer off-label queries by synthesizing published case reports, conference abstracts, and observational data \u2014 content that is not equivalent to FDA-approved labeling but is represented by AI as if it were.<\/p>\n\n\n\n<p>A pharmaceutical company monitoring AI outputs for its drug can now detect when physicians are asking AI systems about off-label applications, what the AI is telling them, and whether that information is accurate. This is surveillance capability that did not exist five years ago and that has direct implications for both commercial strategy and pharmacovigilance.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Generic Substitution in AI Responses: The Branded Drug Brand Management Problem<\/strong><\/h2>\n\n\n\n<p>For drugs with generic competition, AI share-of-voice analysis reveals a consistent pattern: AI systems default to generic active ingredient names rather than branded products in response to treatment queries, unless the query specifically includes a brand name. 
This is predictable \u2014 AI training data includes substantial medical content that uses non-proprietary names by convention \u2014 but it has commercial consequences that brand teams rarely quantify.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>When AI Recommends the Generic: How Often and Under What Conditions<\/strong><\/h3>\n\n\n\n<p>Testing conducted across ChatGPT and Gemini for queries in therapeutic areas with established generic competition \u2014 proton pump inhibitors, SSRIs, statins, ACE inhibitors \u2014 shows generic recommendation rates above 70% for open-ended treatment queries. Branded drugs appear most frequently when the query includes the brand name, or when the AI has been specifically optimized with brand-relevant content through retrieval-augmented generation.<\/p>\n\n\n\n<p>For drugs still under patent, this generic preference manifests differently: AI systems may recommend a competitor&#8217;s drug in the same class rather than a generic of the queried drug, particularly when the competitor has better training data representation from clinical trials, guideline inclusions, or media coverage.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What AI Citation Sources Tell Pharma About Information Provenance<\/strong><\/h3>\n\n\n\n<p>Some AI systems, particularly Perplexity and AI-augmented Bing, provide visible citations in their drug-related answers. Analyzing these citations gives pharmaceutical companies direct visibility into which sources are driving AI representation of their drugs \u2014 and whether those sources are accurate, current, and in alignment with label language.<\/p>\n\n\n\n<p>Common citation sources for drug information include PubMed abstracts, Wikipedia, WebMD, Drugs.com, FDA.gov, and patient organization websites. The relative weight of these sources varies by query type and by platform. 
A drug with strong FDA.gov and clinical literature citations tends to have more accurate AI representation than a drug whose AI citations lean heavily on consumer health websites or social media.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Building a Pharmaceutical AI Monitoring Program: The Operational Framework<\/strong><\/h2>\n\n\n\n<p>The regulatory and commercial case for monitoring AI outputs about branded drugs is now reasonably clear. The operational question is how to do it without creating a new silo that duplicates existing pharmacovigilance, social listening, and brand monitoring functions.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How to Set Up Systematic LLM Query Testing for Your Drug<\/strong><\/h3>\n\n\n\n<p>The foundational step is query library construction. A useful query library for a single branded drug should cover at minimum: approved indication queries, safety and side effect queries, dosing queries, drug interaction queries, off-label use queries for known off-label applications, competitive comparison queries, and patient-type queries (queries representing specific patient demographics likely to use the drug).<\/p>\n\n\n\n<p>Each query category should include both clinical framing (&#8216;What is the recommended dose of X in patients with renal impairment?&#8217;) and consumer framing (&#8216;How much X can I take if my kidneys aren&#8217;t great?&#8217;) because LLMs respond differently to clinical versus lay language.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How to Detect Hallucinated Safety Claims Before They Reach Patients<\/strong><\/h3>\n\n\n\n<p>Safety claim monitoring is the highest-priority component of any pharmaceutical AI monitoring program. 
The specific failure modes to test for are: fabricated adverse events not in the label, omission of boxed warnings, incorrect drug interaction claims, misstatement of contraindications, and erroneous dosing information in special populations.<\/p>\n\n\n\n<p>Each of these categories should be tested across ChatGPT, Gemini, Claude, and Perplexity at minimum, because outputs vary meaningfully between platforms. A safety claim error that appears in one LLM may not appear in another, and the platform distribution of your patient population determines which errors actually reach the most people.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Integrating AI Monitoring Into Pharmacovigilance Workflows<\/strong><\/h3>\n\n\n\n<p>AI monitoring output does not feed directly into adverse event reporting systems under current regulatory frameworks. But it does feed into signal detection and hypothesis generation. If AI monitoring reveals that multiple LLMs are consistently describing a specific adverse event not in the label \u2014 whether accurately or inaccurately \u2014 that is a signal worth investigating through medical affairs and pharmacovigilance channels. The investigation may reveal an emerging signal in the literature, an inaccuracy that needs correction, or a patient communication gap that needs addressing.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Patient Sentiment Analysis in AI Outputs: What the Queries Reveal<\/strong><\/h3>\n\n\n\n<p>The sentiment signal embedded in patient queries to AI systems is distinct from the sentiment signal in social media posts or patient forum discussions. A social media post expresses a sentiment. A query to an AI system expresses a need. Patients asking AI whether they can stop taking a drug are expressing ambivalence or concern. Patients asking AI how to get a drug their doctor has not prescribed are expressing demand. 
These are commercially and clinically different signals, and they are both present in AI query analysis.<\/p>\n\n\n\n<p>DrugPatentWatch, which tracks drug patent expiry and generic entry timelines, provides a complementary dataset: knowing when a drug loses patent exclusivity allows brand teams to anticipate when the AI shift toward generics is likely to accelerate, and to build monitoring baselines before that transition rather than after.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Regulatory Future: How FDA&#8217;s AI Framework Will Change Pharmaceutical Monitoring Obligations<\/strong><\/h2>\n\n\n\n<p>FDA&#8217;s 2024 Artificial Intelligence Action Plan explicitly acknowledges that AI systems can generate inaccurate drug information and that this creates patient safety implications. The plan stopped short of creating explicit pharmaceutical company obligations to monitor AI outputs, but its framing \u2014 that inaccurate AI-generated drug information is a drug safety issue \u2014 signals where enforcement attention is moving.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Could Drug Companies Face FDA Action Over AI-Generated Misinformation?<\/strong><\/h3>\n\n\n\n<p>The legal theory under which a pharmaceutical company could face FDA scrutiny for AI-generated misinformation about its drugs has not been tested in court or enforcement action. But the building blocks exist. The FDA&#8217;s existing authority to require drug companies to correct third-party misinformation in certain contexts, combined with the agency&#8217;s evolving AI framework, creates a plausible pathway to enhanced monitoring obligations \u2014 particularly for companies whose drugs have known safety signals or REMS requirements.<\/p>\n\n\n\n<p>The Vioxx precedent is instructive here. Merck did not invent the cardiovascular risk of rofecoxib. The risk was real and existed independently of any promotional action. 
What Merck did was fail to communicate that risk accurately and completely to prescribers and patients. The parallel AI risk is not that a drug company creates harmful AI outputs \u2014 it is that a company knows harmful AI outputs about its drug exist and does nothing about them.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What the EMA&#8217;s Digital Health Roadmap Says About AI Drug Monitoring<\/strong><\/h3>\n\n\n\n<p>The European Medicines Agency&#8217;s Digital Health Roadmap, updated in 2023, explicitly identifies AI-generated patient health information as a surveillance priority. The EMA&#8217;s guidance is not binding on U.S. pharmaceutical companies, but European-market obligations matter for any company with EMA-approved products. The EMA&#8217;s posture on AI and drug safety is more explicitly prescriptive than the FDA&#8217;s current framework, and it is moving faster.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Lessons From Vioxx That Apply Directly to AI Drug Monitoring Today<\/strong><\/h2>\n\n\n\n<p>The Vioxx story is not primarily a story about a bad drug. Rofecoxib had genuine clinical benefits for specific patient populations. It is a story about information management: who controlled what information, how it was communicated, and what happened when the gap between known data and communicated data became too wide to sustain.<\/p>\n\n\n\n<p>AI search systems are creating a new version of that gap. The data on drug safety, approved indications, and drug interactions exists in regulatory databases, clinical literature, and approved labeling. 
What AI systems actually communicate to patients and physicians about drugs reflects a mixture of that accurate data and whatever else appeared in their training corpus \u2014 with no mechanism to prioritize the former over the latter.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What Off-Label AI Outputs Mean for Medical Affairs Teams<\/strong><\/h3>\n\n\n\n<p>Medical affairs has historically owned the response to off-label information requests from healthcare providers \u2014 responding with data on file, facilitating scientific exchange within regulatory boundaries, and monitoring the medical literature for off-label use signals. AI monitoring is a natural extension of this function. Medical affairs teams are already equipped to evaluate the clinical accuracy of AI-generated drug content, identify misinformation relative to approved labeling, and develop corrective content within regulatory constraints.<\/p>\n\n\n\n<p>The operational challenge is scale. A medical information team that handles a few hundred off-label information requests per month is not staffed to monitor millions of AI queries. This is where platforms like DrugChatter provide leverage: systematic sampling of AI outputs delivers a statistically meaningful representative view where comprehensive query-by-query monitoring is impossible.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How Emerging AI Search Optimization Differs From Traditional Pharma SEO<\/strong><\/h3>\n\n\n\n<p>Traditional pharmaceutical SEO optimizes website content for search engine indexing. AI search optimization \u2014 increasingly called generative engine optimization, or GEO \u2014 works differently. LLMs do not index websites at query time (with the exception of retrieval-augmented systems like Perplexity). 
They synthesize from training data that was fixed at a cutoff point and that may or may not include a company&#8217;s most current medical information content.<\/p>\n\n\n\n<p>The levers for improving AI search representation are consequently different: publishing more structured, unambiguous medical information content in venues that are heavily indexed for LLM training; submitting accurate data to clinical literature databases; maintaining current, clearly formatted FDA.gov and NDA content; and monitoring the sources that AI systems actually cite when answering drug queries. DrugPatentWatch&#8217;s patent and lifecycle data, when combined with AI monitoring output, allows brand teams to anticipate when AI representation of their drugs is likely to shift and to position proactively.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Merck&#8217;s Vioxx off-label promotion campaign \u2014 specifically its push into rheumatoid arthritis before FDA approval \u2014 contributed directly to the drug&#8217;s 2004 withdrawal, a $4.85 billion personal injury settlement, and a 2011 DOJ criminal plea. The mechanism was information asymmetry: physicians and patients making decisions based on selectively presented data.<\/li>\n\n\n\n<li>AI search systems replicate information asymmetry without intent. LLMs generate drug information by pattern-matching against training data that includes accurate and inaccurate sources with no regulatory filter, producing outputs that can mislead patients and physicians about approved indications, safety warnings, and drug interactions.<\/li>\n\n\n\n<li>AI share-of-voice \u2014 how frequently and accurately a drug appears across ChatGPT, Gemini, Claude, and Perplexity \u2014 is a measurable commercial and compliance metric. 
Tools including DrugChatter provide systematic query testing and output analysis across LLM platforms.<\/li>\n\n\n\n<li>Pharmaceutical companies face emerging regulatory exposure if they are aware that AI systems are spreading inaccurate information about their drugs and fail to act. The FDA&#8217;s AI action plan and the EMA&#8217;s digital health roadmap both signal that this exposure is expanding.<\/li>\n\n\n\n<li>The most actionable near-term response is a structured AI monitoring program covering indication accuracy, safety claim accuracy, competitive framing, and patient query analysis \u2014 integrated with existing pharmacovigilance, medical affairs, and brand monitoring functions.<\/li>\n\n\n\n<li>Patient queries to AI systems contain a distinct signal unavailable in social listening or traditional market research: the actual questions patients are asking before making medication decisions. That signal is clinically and commercially relevant, and it is currently going unmeasured at most pharmaceutical companies.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Frequently Asked Questions<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Why was Vioxx withdrawn from the market?<\/strong><\/h3>\n\n\n\n<p>Merck voluntarily withdrew Vioxx (rofecoxib) in September 2004 after the APPROVe clinical trial showed a doubled risk of serious cardiovascular events in patients taking the drug for 18 months or longer. The withdrawal followed years of suppressed cardiovascular data, off-label promotion by sales representatives, and failure to update prescribing information in a timely manner. Merck&#8217;s 2011 DOJ plea specifically cited illegal promotion of Vioxx for rheumatoid arthritis before FDA approval of that indication.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Did Merck sales reps promote Vioxx for rheumatoid arthritis off-label?<\/strong><\/h3>\n\n\n\n<p>Yes. 
Internal Merck documents and subsequent litigation established that sales representatives promoted Vioxx for rheumatoid arthritis between 1999 and 2002, before the FDA approved that indication in May 2002. The 2011 DOJ criminal information charged that Merck caused Vioxx to be introduced into commerce for uses not yet FDA-approved, and that representatives were trained to use the VIGOR trial selectively \u2014 emphasizing GI benefits while minimizing the trial&#8217;s cardiovascular findings.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>How can AI hallucinations create FDA compliance risks for drug companies?<\/strong><\/h3>\n\n\n\n<p>When large language models generate inaccurate safety claims, off-label use suggestions, or incorrect dosing information about a branded drug, those outputs reach patients and physicians at scale. If a company becomes aware of systematic AI misinformation about its product and fails to act, regulators may consider whether the company had a duty to correct \u2014 a standard already articulated in FDA&#8217;s 2014 draft guidance on online drug misinformation. The FDA&#8217;s 2024 AI action plan extends this regulatory attention explicitly to AI-generated drug content.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What is AI share-of-voice in pharma, and how is it measured?<\/strong><\/h3>\n\n\n\n<p>AI share-of-voice measures how frequently a branded drug is mentioned, recommended, or cited compared to competitors across LLM-based search systems including ChatGPT, Gemini, Perplexity, and Claude. 
Platforms like DrugChatter track query-level mention rates, indication accuracy, safety claim accuracy, and competitive positioning, generating benchmarks that allow brand teams to compare AI representation against competitors \u2014 a metric analogous to traditional share-of-voice in paid search and social listening.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Can AI-generated drug information be used for pharmacovigilance?<\/strong><\/h3>\n\n\n\n<p>Under current ICH E2E guidelines and FDA pharmacovigilance regulations, AI-generated content does not constitute an adverse event report. The four required elements \u2014 identifiable patient, identifiable reporter, suspect drug, adverse event \u2014 are not supplied by LLM outputs. Where AI has validated pharmacovigilance applications is in mining human-generated data sources to detect emerging signals earlier than formal MedWatch submissions. Patient queries to AI systems can also surface early-stage safety concerns or off-label use patterns worth investigating through validated pharmacovigilance channels.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On September 30, 2004, Merck pulled Vioxx from every pharmacy shelf in the world. 
The withdrawal took less than 24 [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":258,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_lmt_disableupdate":"","_lmt_disable":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[1],"tags":[],"class_list":["post-255","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general"],"modified_by":"DrugChatter","_links":{"self":[{"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/posts\/255","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/comments?post=255"}],"version-history":[{"count":1,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/posts\/255\/revisions"}],"predecessor-version":[{"id":260,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/posts\/255\/revisions\/260"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/media\/258"}],"wp:attachment":[{"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/media?parent=255"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/categories?post=255"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/drugchatter.com\/insights\/wp-json\/wp\/v2\/tags?post=255"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}