We appreciate Dr. Tammemägi’s letter with regard to our recent article (1), but we disagree in some respects with his perspective and note a number of arithmetical and conceptual errors.

Dr. Tammemägi notes that the age of our cases with lung cancer is greater than the age of controls without lung cancer and considers that to reflect a flaw in our study. Case–control designs vary, intentionally, depending on their scientific purpose. Although it is true that matching on potential confounders can isolate the impact of other potential causal factors, case–control studies used to develop and validate screening tests should be designed instead to match the characteristics of a cohort that would emerge from a prospective screening study. It is a feature, not a bug, to have the distribution of population characteristics among those with and without lung cancer closely hew to those seen in the target population for screening (2). In our article, we highlighted that we had been successful in this regard.

Dr. Tammemägi also attempts to introduce his own lung cancer risk prediction model. It is a derivative of a lung cancer risk prediction model that one of us previously created (3). He provides no explanation of his methods, but he did not incorporate subject age when he generated his predictions, even though age is a powerful predictor of disease occurrence.

We also note mathematical and transcription errors in his letter.

He cites the biomarker test sensitivity and specificity without contextualizing them to the population in which testing is planned.
In this setting, after applying screening population age and lung cancer stage distributions from the National Lung Screening Trial and National Health Interview Survey datasets, we reported these as 80% and 58%, respectively.

He employs the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial cohort to project disease prevalence, but CT screening itself increases lung cancer prevalence (there was no CT screening in the PLCO study). Based on his prediction modeling, he estimates an improvement in the positive predictive value (PPV) associated with a positive biomarker of 36%. This is incorrect, as it conflates prevalence in the U.S. Preventive Services Task Force (USPSTF)–eligible population (0.7%; ref. 4) with prevalence in the PLCO trial (0.54%), producing a result with no clinical context. Using his approach, the PPV improvement he reported should have been 78%. Alternatively, if he had aligned the disease prevalence estimate to the intended-use cohort and used the estimates of biomarker performance in that population, a positive biomarker result would increase the PPV from 0.54% to 1.02%, an improvement of 90%. That is an approximate doubling of the likelihood of lung cancer, just as we reported in the article.

There are conceptual errors in the letter as well.

He compares a model that he claims to have generated from our unblinded data with the published performance of the locked biomarker classifier on blinded data. The two are not comparable, and this difference should have been highlighted by the author.

In his analysis of quit duration versus proportion of lung cancer cases, he introduces a boundary at 15 years of quit duration, a cutpoint that lacks clinical or biological rationale. He then applies two separate smoothing functions to participants with quit durations above and below this boundary, arguing that the observed discontinuity at this boundary is problematic.
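The boundary artifact at issue can be illustrated with a toy sketch (our own construction, not the letter's data): even a perfectly smooth linear series develops an artificial jump at an arbitrary cutpoint when each side is smoothed separately, because the smoothing windows at the cut are one-sided.

```python
def smooth(y, w=1):
    # Centered moving average; the window is truncated (one-sided)
    # at the ends of whatever segment it is given.
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - w), min(len(y), i + w + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

y = list(range(20))                  # a perfectly smooth linear trend
left, right = smooth(y[:10]), smooth(y[10:])
print(left[-1], right[0])            # 8.5 10.5 -> an artificial jump of 2.0
print(smooth(y)[9], smooth(y)[10])   # 9.0 10.0 -> the true step of 1.0
```

Splitting the data at index 10 before smoothing manufactures a discontinuity of 2.0 where the true increment is 1.0; smoothing the undivided series shows no such gap.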
It is well known that smoothing functions are unreliable at their edges (i.e., at the point of the discontinuity) because of a paucity of neighboring points on each side. Indeed, it would have been surprising if the two curves had intersected once he separated the data.

Finally, Dr. Tammemägi is skeptical that a blood test with high sensitivity for lung cancer can help improve screening rates, as he harbors a belief that individuals with tobacco use dependence are simply unlikely to get screened for cancer anyway. The citation supporting his opinion examines the characteristics of individuals who chose to enroll, versus those who did not enroll, in the Physicians' Health Study. Setting aside that this was a randomized interventional study conducted in the early 1980s and focused exclusively on male high-income professionals, he presents no evidence that predictors of the decision to enroll in such a study have anything to do with a person's inclination to get screened for cancer. Of all willing participants, 12.2% reported ongoing smoking, compared with 10.8% of those declining participation. These numbers suggest no meaningful association between tobacco use dependence and study enrollment and do not support Dr. Tammemägi's supposition. An examination of data evaluating the relationship between tobacco use dependence and overall cancer screening participation shows that smoking history does not predict cancer screening behavior, with most studies showing roughly equal rates of screening among those with or without a history of tobacco use (5).

Regardless of how we determine the population that should be screened for lung cancer (based on age and smoking history alone or based on a validated clinical risk prediction model), many individuals in the screen-eligible population are currently not being screened. A minimally invasive blood test with high sensitivity for the detection of lung cancer may increase uptake and adherence.
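The PPV arithmetic discussed above can be checked directly from Bayes' rule. A minimal sketch, using only the sensitivity (80%), specificity (58%), and PLCO-based prevalence (0.54%) figures quoted earlier:

```python
def ppv(sens, spec, prev):
    """Positive predictive value via Bayes' rule."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# Sensitivity 80%, specificity 58%, pretest prevalence 0.54%
post = ppv(0.80, 0.58, 0.0054)
print(f"{post:.2%}")                 # 1.02%
print(f"{post / 0.0054 - 1:.0%}")    # 90% relative improvement
```

With the pretest prevalence of 0.54%, a positive result raises the probability of lung cancer to about 1.02%, a roughly 90% relative increase, matching the figures in the text.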
A clinical risk model and blood test are therefore not competitors at the same point of the screening process; they are complementary and serial in their application. In the simulation models that forecast the potential population impact of blood test utilization, we made clear that we were modeling hypothetical scenarios and presented a broad range of assumptions in the article and supplementary materials that readers can evaluate. Across the full range of assumptions, there is a public health benefit to improving screening rates with a blood-based biomarker that has the performance we documented.

P.J. Mazzone reports grants from DELFI Diagnostics during the conduct of the study, as well as grants from Adela, Biodesix, Exact Sciences, Nucleix, and Veracyte outside the submitted work. P.B. Bach reports other support from DELFI Diagnostics during the conduct of the study. R.B. Scharpf reports personal fees from DELFI Diagnostics during the conduct of the study; patents for cell-free DNA for cancer detection licensed to DELFI Diagnostics, with royalties paid; being a co-founder of DELFI Diagnostics, ownership of DELFI Diagnostics stock, and being a consultant for this organization (reviewed and approved by the Johns Hopkins University in accordance with its conflict-of-interest policies). V.E. Velculescu reports grants and personal fees from DELFI Diagnostics during the conduct of the study; patents for cancer genomics and cell-free DNA analyses pending, issued, licensed, and with royalties paid from multiple entities; and being a founder of DELFI Diagnostics, serving on its board of directors, and ownership of DELFI Diagnostics stock, which is subject to certain restrictions under university policy. Additionally, Johns Hopkins University owns equity in DELFI Diagnostics. V.E.
Velculescu divested his equity in Personal Genome Diagnostics to Labcorp in February 2022 and reports being an inventor on patent applications submitted by Johns Hopkins University related to cancer genomic analyses and cell-free DNA for cancer detection that have been licensed to one or more entities, including DELFI Diagnostics, Labcorp, Qiagen, Sysmex, Agios, Genzyme, Esoterix, Ventana, and ManaT Bio. Under the terms of these license agreements, the university and inventors are entitled to fees and royalty distributions. V.E. Velculescu reports being an advisor to Virion Therapeutics and Epitope Diagnostics (reviewed and approved by Johns Hopkins University in accordance with its conflict-of-interest policies). L.R.G. Pike reports grants from Harbinger Health, DELFI Diagnostics, and Caris Life Sciences and personal fees from Dxcover and Genece Health outside the submitted work.