COMMENTARY

Dec 16, 2022 This Week in Cardiology Podcast

John M. Mandrola, MD


Please note that the text below is not a full transcript and has not been copyedited. For more insight and commentary on these stories, subscribe to the This Week in Cardiology podcast, download the Medscape app or subscribe on Apple Podcasts, Spotify, or your preferred podcast provider. This podcast is intended for healthcare professionals only.

In This Week’s Podcast

For the week ending December 16, 2022, John Mandrola, MD comments on the following news and feature stories.

Announcements

Normally #TWICPodcast takes 2 weeks off over Christmas and New Year's. You all get a bonus this year: next Friday I will have a brief recap of the year. I've published my Top Ten column and will discuss some of the notable and honorable mentions, then take New Year's off and return the first week of January. Thank you, everyone. It's been a great year. I am so glad some of us got to meet in person this year.

BP Control Over the Long Term

Literally, I mean literally, every clinic day, at least once, blood pressure (BP) control comes up. It's a number, and people love numbers. And I need to apologize: I missed an important study in October regarding the longer-term results of SPRINT, the trial of two BP targets.

Let’s briefly recap SPRINT. The New England Journal of Medicine published the initial randomized controlled trial (RCT) in 2015.

  • More than 9000 patients with hypertension (HTN) and increased cardiovascular (CV) risk but no diabetes.

  • Average age 67, and nearly a third over age 75.

  • Systolic BP target of less than 120 mmHg (intensive) vs less than 140 mmHg (standard).

  • Primary endpoint: myocardial infarction (MI), acute coronary syndrome (ACS), stroke, heart failure (HF), or CV death.

  • Over a median of 3.3 years, the intensive arm had a 25% relative reduction in the primary endpoint, with significant reductions in HF and CV death; all-cause death was also reduced. However, the absolute risk reduction (ARR) was only 1.6%.

  • Serious adverse events were higher in the intensive arm (4.7% vs 2.5%), including hypotension, syncope, electrolyte abnormalities, and acute kidney injury (AKI).

A nice editorial in the Annals of Internal Medicine by Ortiz and James, in 2016, put it this way: “For every 1000 patients like those in SPRINT, there would be 16 persons who get benefit, 22 persons harmed and 962 not benefited or harmed.”
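
Those per-1000 figures follow directly from the absolute risk differences above. As a minimal sketch of the arithmetic, in Python, using the roughly 1.6% ARR for benefit and the 2.5% vs 4.7% serious adverse event rates for harm:

```python
# Per-1000 arithmetic behind the Ortiz and James framing of SPRINT.
arr_benefit = 0.016         # absolute risk reduction, primary endpoint (~1.6%)
ari_harm = 0.047 - 0.025    # absolute increase in serious adverse events (~2.2%)

helped = round(arr_benefit * 1000)   # ~16 per 1000 benefit
harmed = round(ari_harm * 1000)      # ~22 per 1000 harmed
neither = 1000 - helped - harmed     # ~962 neither helped nor harmed

print(helped, harmed, neither)       # 16 22 962
print(f"NNT ≈ {1 / arr_benefit:.0f}, NNH ≈ {1 / ari_harm:.0f}")  # ≈ 62 and ≈ 45
```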

When SPRINT came out there was intense controversy. One issue was the manner of the BP measurements: was there an attendant or not? Because if not, people felt, the unattended technique would result in lower BP readings. Other limitations were that the trial was open label, so performance bias could be present, and that at least 5% of the data were missing.

Now to the new paper in JAMA Cardiology looking at all-cause and CV death with intensive BP lowering over longer follow-up of SPRINT. First author is Byron Jaeger.

  • This was a secondary analysis of SPRINT in which the authors did extended observational follow-up for death via the National Death Index in the 5 years after SPRINT closed in late 2015.

  • This included a subset of about one-third of patients who were in SPRINT — about 3000 of the original 9000.

  • Recall that in the randomized portion of SPRINT, at 3.3 years, there was a 34% reduction in CV death and a 17% reduction in all-cause death with the intensive 120 mmHg arm.

  • But at 9 years, these reductions were gone. Absolutely no difference. The BP separation was gone too: mean systolic BP in the intensive arm had risen from 132 to 140 mmHg.

The authors concluded: “The beneficial effect of intensive treatment on cardiovascular and all-cause mortality did not persist after the trial. Given increasing outpatient SBP levels in participants randomized to intensive treatment following the trial, these results highlight the importance of consistent long-term management of hypertension.”

Comments. The reactions among many were, basically, do better. Clinicians, do better; systems, do better. The editorialist, Daniel Jones, called these results disappointing.

I don’t think so.

I lead with this study because I think it highlights the difference between what can be done in studies and what can be done in real life.

  • Take BP measurement, for example. HTN trials rightly standardize BP by taking three readings with an automated cuff. Attended or unattended, normal clinicians don't do that. It's not possible.

  • SPRINT also had intense follow-up by research nurses. That level of follow-up doesn't exist in real life.

  • SPRINT enrolled patients well enough to be in trials. Again, many patients we see don’t have the capacity to be in a trial.

I have no doubt that with SPRINT-like patients and SPRINT-like care, you can drive down CV events over the short term. But remember, this came at a cost of increased adverse events, even in the confines of the trial. Adverse events would surely be increased if real-world docs got too aggressive with BP meds without the follow-up of SPRINT and in frailer patients.

Messerli and Bangalore said it beautifully in the American Journal of Medicine in May 2016:

“We should remember a simple but inescapable truth in medicine: patients are genetically, physiologically, metabolically, pathologically, psychologically, and culturally different. Accordingly, there never will be only one way to diagnose and treat many medical disorders, including hypertension.

“To lower blood pressure of all hypertensive patients uniformly to ≤120 mmHg clearly has to be considered absurd, regardless of the SPRINT results.”

My friends, remember, the big bang in HTN is treating severe HTN, as in the Black Barbershop study. Messing with too much BP control in every older person would be like Icarus flying too close to the sun.

Omecamtiv Mecarbil Meets the FDA

Omecamtiv mecarbil is a novel, first-in-class cardiac myosin activator. From here on I will call it OM. It is a positive inotrope.

The US Food and Drug Administration (FDA) recently held an advisory committee (AdComm) meeting on its approval. FDA has the ultimate decision, but the agency often asks an advisory committee to render guidance on appraisal of the evidence. I've been on an AdComm (for vernakalant), and it was a great experience because of the depth to which the evidence is examined.

At the AdComm, FDA reviewers present their review; they also release documents ahead of time. For OM, their review was negative. More on that in a minute. They also noted safety issues: one was that the drug, if approved, would be dosed by drug levels, which is a big issue in the real world; there also seemed to be more troponin release and higher CV death in patients with atrial fibrillation (AF) at baseline.

Anyway, the FDA panel voted 8 to 3 that the benefits of OM do not outweigh the risks for patients with HF with reduced ejection fraction (HFrEF).

I covered OM in the June 3 #TWICPodcast. Let's very briefly go over the evidence, but first recall the difficult history that positive inotropes have had: dobutamine and milrinone, for example, have not improved outcomes.

It’s counter-intuitive, isn’t it? The problem with HFrEF is poor contractility, so you’d think drugs that increase contractility would help. Therein lies the entire basis of evidence-based medicine — things that should work, say because they move a surrogate marker, like a blood test, may not work. The obvious answer is that the human body is complex. And if you want to know what works, you have to randomize and measure clinical outcomes, not surrogate markers.

  • The GALACTIC-HF trial of OM vs placebo in patients with HFrEF did just that. More than 8000 patients were randomly assigned to OM, with dose guided by drug levels, or placebo.

  • During nearly 2 years of follow-up, the primary endpoint of first HF event or CV death occurred in 37.0% of patients in the OM group vs 39.1% of patients on placebo.

  • The hazard ratio (HR) was 0.92, an 8% relative reduction, with a 95% confidence interval (CI) of 0.86 to 0.99 and a P-value of 0.03, so barely significant. (The sketch below shows the arithmetic.)
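
As a quick back-of-the-envelope check, not the trial's actual analysis, you can recover an approximate P-value from the reported HR and CI with the standard normal approximation, and translate the event rates into a number needed to treat:

```python
import math

# Reported GALACTIC-HF results: HR 0.92, 95% CI 0.86 to 0.99.
hr, lo, hi = 0.92, 0.86, 0.99

# Standard shortcut: on the log scale, SE ≈ (ln(upper) - ln(lower)) / (2 * 1.96).
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(hr) / se
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided P, normal approximation
print(f"z ≈ {z:.2f}, P ≈ {p:.3f}")     # ≈ 0.02, in the ballpark of the reported 0.03

# Absolute terms: 39.1% vs 37.0% event rates.
arr = 0.391 - 0.370                    # absolute risk reduction ≈ 2.1%
print(f"NNT ≈ {1 / arr:.0f}")          # ≈ 48 patients treated to prevent one event
```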

The next step is to go to Table 2 and look at the components of the primary endpoint.

  • CV death was not different, and there were a lot of CV death events (19% in each group). HF events were lower in the OM arm, but not significantly so.

  • Secondary endpoints: Quality of life (QOL) as measured by the Kansas City Cardiomyopathy Questionnaire was not different. Again, OM is supposed to be a positive inotrope, so you’d expect people to feel better, but they did not.

  • All-cause death overall was nearly identical.

  • Also this year, a second study of OM vs placebo, called METEORIC-HF, came out. JAMA published this study of exercise capacity in patients with HFrEF, comparing OM vs placebo on peak VO2 during treadmill testing. The results were clear: OM had no significant effect on exercise capacity.

Here we have a new drug with no effect on QOL, no effect on exercise capacity, no reduction in CV death or overall death, and a statistically fragile, tiny effect in an outcomes trial.

Even Amgen, one of the makers of the drug, bailed on it last year. “The Big Biotech walked, passing the rights to the program back to Cytokinetics in November and ending a nearly 15-year collaboration.”

In the summer, the GALACTIC-HF authors published a substudy of the main trial in which they looked at subgroups of patients with low and normal BP and found that OM seemed to help those with low BP. The HR met significance in the low-BP group, but it did not for those with systolic BP above 100 mmHg. Two of the authors of this paper are industry employees.

It turns out that FDA reviewers were not persuaded by any of the attempts at subgroup analyses. Good on them.

The drug is slated for an FDA decision in February. We've learned over the past 2 years that FDA can go against the AdComm. If approved, you can expect the marketing train to go full steam ahead. Doctors will then have to decide. Cardiology's poor track record of shunning low-value, dubious interventions doesn't make me optimistic if OM makes it past FDA approval.

Measuring Quality of Care

JAMA has published an important paper on the folly of trying to measure the quality of doctors. Before I say anything about this important paper (first author: Amelia Bond, PhD, of Weill Cornell), I want to say two things:

  • We should strive to have the highest quality doctors.

  • But we should also strive to avoid soft thinking. Policy makers must never fail to read economist Charles Goodhart, who stated, "when a measure becomes a target, it ceases to be a good measure."

As the largest payer in US healthcare, Medicare has proposed and embraced the notion of “pay for performance,” which can be translated to all manner of catchy slogans – pay for quality instead of quantity, for instance.

The most recent paper in JAMA assessed the actual performance of a program called the Merit-based Incentive Payment System, or MIPS.

  • The authors did a cross-sectional study of 80,000 primary care docs who participated in the merit-based program.

  • They then created three groups of docs based on MIPS scores:

    • Bad: 773 physicians with low MIPS scores (≤30)

    • Medium: 6151 physicians with medium MIPS scores (>30 to 75)

    • Good: 69,322 physicians with high MIPS scores (>75)

Now for the main outcomes: None of this mattered when you looked at real outcomes.

  • “MIPS scores were inconsistently associated with outcomes.” For example, low-scoring MIPS docs had significantly better performance on emergency department visits, but worse performance on all-cause hospitalizations. And there were no differences in four other admission outcomes.

  • “Nineteen percent of physicians with low MIPS scores had composite outcomes performance in the top quintile, while 21% of physicians with high MIPS scores had outcomes in the bottom quintile.”

The authors concluded: “These findings suggest that the MIPS program may be ineffective at measuring and incentivizing quality improvement among US physicians.”

Editorialist J Michael Williams, who I don’t know, but who wrote a great editorial, started with the issue of sloganeering. I liked this quote, although it’s in JAMA-speak. Williams writes: “Extrinsic financial incentives applied to measured aspects of quality can detract attention and resources from harder-to-measure but important aspects and undermine the intrinsic motivation of clinicians to do what they think is best for patients.”

Translation to normal-speak: Foolish quality measures worsen quality. That's because doctors have a finite bank of attention and energy. The more you fill it with incentives for foolish surrogates, like how many of your patients get screening HbA1c tests or mammograms, the more you distract from real doctoring.

In fact, I would submit that if we did real doctoring, wherein patients were educated about the tiny or zero effect size on overall survival plus potential adverse effects of many of these process measures, fewer patients would accept them. That is, good doctoring, through a shared understanding of Medicine, might make a doctor look worse.

  • Williams goes on to point out that the last decade of data on quality measures is replete with “damning” evidence that has “consistently found little to no improvement — even on targeted measures — and revealed plenty of cause for concern.”

  • He then points out that technocrats contend it's the measurement design: give us more time and we can find a better system to measure and incent quality.

  • Dr. Williams rebuts – again in JAMA-speak: “While there are undoubtedly refinements that can be made, particularly in the case of the impressively flawed MIPS, the intractability of the drawbacks — long recognized in the economics and management literature — should not be underestimated.”

Translation: a) Read Goodhart’s law; b) understand human nature.

What I have trouble deciding is whether the technocrats who push these distractions onto doctors do it because of conflicts of interest, say, relationships with quality-measurement organizations such as the Joint Commission or the American Board of Internal Medicine, or because they didn't get enough time on playgrounds with other humans.

  • Here is J Michael Williams: "To err may be human, but to overlook humans is to err." He then eases ever so slightly into Hayek territory. In The Constitution of Liberty, a must-read by the way, Hayek argued that no one person or group of persons has sufficient wisdom to control markets. He believed in a collective wisdom, which would, of course, emerge in a totally free market.

  • J Michael Williams, on harnessing the collective wisdom of actual doctors who see real patients: “The potential gains from doing so are substantial. Strategies for leveraging peer motivation (eg, via teams), cultivating collective wisdom to support decision-making, and engaging clinicians in systems change could have wide-ranging and lasting effects extending well beyond the specific behavioral modifications achieved by scripted nudges.”

  • He might be talking about physician-owned organizations, or at least physician-managed systems. Maybe not.

  • He then moves to competition. He believes it's not just competition for patients but also for clinicians and for the ability to bring new models of care into the market. "Thus, the makings of quality improvement may not be as alluringly obvious as 'paying for quality instead of quantity' — eg, tougher antitrust measures, limits on noncompete provisions in physician employment contracts, and technical assistance from payers to ease market entry of promising delivery models."

In the end we go back to MIPS. Of course it should end. That is easy. But the real issues for the future are major system changes way above the folly of measuring who checks boxes better.

I really liked this editorial. It’s unusual in its courage. It makes you think about big issues. Human-nature issues. And the downsides of a technocracy that puts profits above quality. To any of the young US clinicians who listen: this is important for your future.

Open Science and Arterial Conduits for Heart Surgery

One of the best papers I have ever read comes from the group led by Brian Nosek of the University of Virginia; the first author is Raphael Silberzahn. It is called "Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results," and the journal Advances in Methods and Practices in Psychological Science published it in 2018.

  • They brought together 29 teams of data scientists to analyze one dataset, which happened to be soccer statistics from Europe. Their one question: were referees more likely to give red cards to dark-skinned players than to light-skinned players?

  • The 29 teams could use any analytic method they chose. They could even brainstorm together. In fact, they used 29 different ways to analyze this one question.

  • And boom, slightly more than two-thirds found a statistically significant positive association, and one-third did not.

  • Again, these were expert statistician teams. It was one question. It was the same data.

I was shocked. Because when you read studies, they report one analytic method, not 29. But here was a paper showing that results (or at least statistical significance) could turn on the choices we make in analytic methods.
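
To make that concrete, here is a small, purely hypothetical sketch (simulated data, not the soccer dataset) of how two defensible specifications can disagree on the same data: an unadjusted model finds a "significant" effect of an exposure that an adjusted model does not.

```python
# Hypothetical demonstration of analytic flexibility on one simulated dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
c = rng.normal(size=n)                        # a confounder (eg, player position)
x = 0.8 * c + rng.normal(scale=0.6, size=n)   # exposure, correlated with c
y = 1.0 * c + rng.normal(size=n)              # outcome driven by c, not by x

# Analyst 1: unadjusted model; x looks strongly "significant".
m1 = sm.OLS(y, sm.add_constant(x)).fit()

# Analyst 2: adjusts for c; the apparent effect of x typically vanishes.
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x, c]))).fit()

print(f"unadjusted P for x: {m1.pvalues[1]:.4f}")   # near zero
print(f"adjusted P for x:   {m2.pvalues[1]:.4f}")   # usually nonsignificant
```

Same data, same question, two reasonable choices, two different answers.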

The paper never really took hold, perhaps because it asked a social science question instead of something more technical or medical.

This week, we were treated to an amazing example of how this can play out in a medical study. Of course, the core reason it could happen at all was open science.

Here is what happened: This summer, a distinguished group of academic surgeons published a combined meta-analysis of five coronary artery bypass graft (CABG) trials, trying to answer the question of how different conduits perform in CABG. Don't get bogged down in the details of CABG; I am not a heart surgeon, and you probably aren't either. The story is about open science and how results can turn on choices made in analysis.

  • The study used five CABG trials to sort out mortality outcomes in patients who had radial artery (RA), saphenous vein (SV), or right internal thoracic artery (RITA) conduits in addition to the left internal thoracic artery (LITA).

  • The one-sentence background is that there is a great debate in the surgery world as to what is best for conduits for coronary bypass. Everyone agrees that the LITA is number one, but then what? Do you harvest the RA, RITA, or SV?

  • Gaudino and colleagues’ meta-analysis wasn’t a typical meta-analysis. That’s because the trials had many different combinations of comparisons.

  • Gaudino and colleagues took the 10,000-plus patients in the five trials and used propensity matching to create matched triplets: about 1700 patients in each of three groups (RA, SV, and RITA).

Propensity matching is a way to use data to find similar patients in a big dataset, a method to balance covariates, as randomization does, though not as effectively. Their method for doing this was very complicated; I don't pretend to understand it.
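
For intuition only, here is a toy sketch of 1:1 nearest-neighbor propensity matching (made-up variable names and data; far simpler than the triplet-matching method the authors actually used):

```python
# Toy 1:1 nearest-neighbor propensity matching; not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
covariates = rng.normal(size=(n, 3))     # eg, standardized age, EF, creatinine (made up)
treated = rng.binomial(1, 0.3, size=n)   # toy labels: 1 = one conduit, 0 = another

# Step 1: model the probability of treatment given covariates (the propensity score).
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Step 2: greedily pair each treated patient with the closest-scoring control.
controls = set(np.flatnonzero(treated == 0))
pairs = {}
for i in np.flatnonzero(treated == 1):
    j = min(controls, key=lambda k: abs(ps[k] - ps[i]))
    pairs[i] = j
    controls.remove(j)                   # match without replacement

# Step 3: outcomes would then be compared within the matched pairs (not shown).
print(f"matched {len(pairs)} pairs")
```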

Their results were surprising.

  • They found roughly 40% lower mortality with RA compared with the SV and RITA groups. Pause there.

  • The choice of conduit after LITA led to a 40% reduction in mortality. To the editors and other surgeons, that seemed implausible.

What happened next was beautiful. The editors asked Prof Gaudino if he would share the dataset. He said yes. Total transparency. Another statistics group at University College London, led by Professor Nick Freemantle, re-analyzed the same dataset.

  • They found, first, that when they analyzed it the way Dr. Gaudino’s group did, the results were identical. So, they showed there were no errors in the analysis.

  • But Freemantle felt there were more traditional ways to do propensity matching. Again, I can’t say which is better. But Freemantle’s analytic method yielded no significant differences in outcomes with the different conduits.

  • Again, it’s the same dataset. Just different choices in analyzing it.

Another academic CV surgeon, David Taggart, along with colleagues, published an editorial on the original Gaudino paper saying that a 40% reduction from a choice of conduit was implausibly large. While he did not mention the re-analysis, he explained why it was implausible. I was drawn to the fact that the reduction in major adverse cardiac events was less than the mortality reduction. Whenever you see that (overall mortality reduced by more than cardiac-specific outcomes), be on the lookout for confounding, eg, healthier patients getting one treatment.

I will link to all these publications, but I am absolutely thrilled to report on this. It is such good news that scientists would share their data for an independent review, and that the two reviews showed conflicting results. You know what the answer is now, right? The answer is a proper randomized controlled trial directly comparing the strategies, so no one has to make such varied analytic choices.
