1. In a cohort of women aged 40 years or older, those with persistently dense breasts or those with increasing breast density over time had an increased risk of developing breast cancer.
Evidence Rating Level: 2 (Good)
Breast density has been shown to be an important predictor of breast cancer development. Prior studies have shown that increased breast density is associated with an increased risk of developing breast cancer. The Breast Imaging Reporting and Data System (BI-RADS) has been the main reporting standard for measuring breast density; however, its validity and reliability have been called into question. As a result, newer studies have focused on individual changes in mammographic density. Because research using these techniques is limited, this retrospective cohort study sought to examine patterns of change in mammographic breast density and how these patterns are associated with the risk of developing breast cancer. Women aged 40 years or older at baseline were included. Individuals were excluded if they had a prior cancer diagnosis at any site before the last screening (2015-2016) or if data on breast density were missing. According to BI-RADS, breast density is split into four categories: 1 – almost entirely fat, 2 – scattered fibroglandular density, 3 – heterogeneously dense, and 4 – extremely dense. After the eligibility criteria were applied, 1 747 507 women were included (mean [SD] age, 61.4 [9.3] years). At the time of the first screening, 41% (708 687 of 1 728 506) of women without breast cancer and 56% (10 640 of 19 001) of women with breast cancer had dense breasts. Among all the women, five breast density trajectories were identified. Group 1, used as the reference group, consisted of women with persistently fatty (nondense) breast tissue. Group 2 consisted of women with fatty breast tissue at baseline that increased in density over time, while groups 3-5 had progressively higher breast density at baseline that increased steadily over time. Women in groups 2-5 had a higher breast cancer risk compared to those in group 1. Specifically, group 2 had a 1.60-fold (95% confidence interval [CI] 1.49 to 1.72) increased risk of breast cancer, while groups 3-5 had risks increased by 1.86-fold (1.74 to 1.98), 2.49-fold (2.33 to 2.65), and 3.07-fold (2.87 to 3.28), respectively. The results were similar when analyzed by type of breast cancer (invasive breast cancer and ductal carcinoma in situ). Breast density trajectories were similar across age groups, regardless of menopausal status or body mass index (BMI). Overall, in this retrospective cohort study with longitudinal follow-up, women with persistently dense breasts or with breasts increasing in density had an increased risk of developing breast cancer.
Mental disorders among offspring prenatally exposed to systemic glucocorticoids
1. In this cohort study of offspring born to mothers at risk for preterm delivery or diagnosed with an autoimmune or inflammatory disorder, there was an association between prenatal exposure to systemic glucocorticoids and future development of a mental disorder.
Evidence Rating Level: 2 (Good)
Systemic glucocorticoids have been prescribed in preterm labour to decrease fetal morbidity and mortality, and for individuals with autoimmune or inflammatory disorders. Cortisol, an endogenous glucocorticoid hormone, is involved in key developmental processes in utero, including development of the central nervous system (CNS). However, infants exposed to excess glucocorticoids prenatally may have an increased risk of developing mental disorders. The goal of this cohort study was to understand the association between prenatal glucocorticoid exposure and the development of mental disorders in offspring. Data were excluded for stillbirths, infants born to mothers who used systemic glucocorticoids up to 3 months before conception, and infants with missing gestational age data. Offspring born to mothers at risk of delivering preterm were divided into exposed and unexposed groups; similarly, offspring born to mothers with autoimmune or inflammatory diseases were divided into exposed and unexposed groups. At follow-up, offspring were assessed for mental disorders commonly occurring in this age group, including intellectual disability, autism spectrum disorders, ADHD, and mood-related disorders. A total of 1 061 548 infants were included in the analysis, of whom 31 518 were born to mothers at risk of preterm birth (3659 [11.6%] exposed) and 288 747 were born to mothers with autoimmune or inflammatory conditions (6453 [2.2%] exposed). Among offspring born to mothers at risk of preterm delivery, mothers of exposed offspring had a higher prevalence of pregnancy complications than mothers of unexposed offspring. For offspring born to mothers at risk of preterm delivery, the adjusted risks (exposed vs unexposed) were as follows: autism spectrum disorders, 6.6% vs 4.3% (RR, 1.5 [95% CI, 1.2-1.9]); intellectual disabilities, 1.6% vs 1.3% (RR, 1.3 [95% CI, 0.8-1.8]); ADHD, 5.8% vs 4.3% (RR, 1.3 [95% CI, 1.0-1.7]); and mood, anxiety, and stress-related disorders, 7.2% vs 4.6% (RR, 1.5 [95% CI, 1.1-2.0]). The same comparisons were made in offspring born to mothers with autoimmune or inflammatory disorders. For exposed vs unexposed offspring in that cohort, the adjusted risks were 4.8% vs 3.8% (RR, 1.3 [95% CI, 1.1-1.5]) for autism spectrum disorders; 1.1% vs 0.8% (RR, 1.4 [95% CI, 0.9-2.0]) for intellectual disabilities; 5.5% vs 4.4% (RR, 1.3 [95% CI, 1.0-1.5]) for ADHD; and 6.6% vs 4.6% (RR, 1.4 [95% CI, 1.2-1.8]) for mood, anxiety, and stress-related disorders. Taken together, these results suggest an association between prenatal exposure to systemic glucocorticoids and the development of mental disorders later in life.
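As a rough illustration of how these estimates relate to the underlying risks, a relative risk (RR) is simply the ratio of the risk in the exposed group to the risk in the unexposed group; the published figures are adjusted for confounders, so this crude calculation is only an approximation. For the autism spectrum disorder comparison in the preterm-risk cohort:

\[
\mathrm{RR} = \frac{\text{risk}_{\text{exposed}}}{\text{risk}_{\text{unexposed}}} = \frac{6.6\%}{4.3\%} \approx 1.5
\]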
Risk model-guided clinical decision support for suicide screening
1. In this randomized clinical trial (RCT) comparing two forms of clinical decision support (CDS) for suicide prevention, interruptive CDS prompted more in-person assessments than noninterruptive CDS.
Evidence Rating Level: 1 (Excellent)
Effective suicide prevention includes accurately identifying individual risk, predicting outcomes, and intervening appropriately. One potentially helpful but under-researched approach that has yet to be evaluated in a randomized clinical trial (RCT) is the use of clinical decision support (CDS) in suicide prevention. Previous data have shown that screening during healthcare encounters is of utmost importance, as many individuals who die by suicide had recent contact with primary care. Because of gaps in screening methods, several models have been created, with recent research suggesting that statistical modelling combined with face-to-face screening is more effective than either alone. The objective of this study was to determine whether interruptive CDS led to more frequent in-person suicide risk assessments than noninterruptive CDS, and whether CDS in general increased in-person screening rates. To address this goal, a 2-arm RCT was employed in which patients were randomly assigned in a 1:1 ratio if their predicted risk was 2% or greater. Patients were eligible for randomization if they were routinely seen in primary care settings. In terms of design, the interruptive CDS displayed an alert window and a patient icon simultaneously, while the noninterruptive CDS displayed a summary panel. A total of 596 of 7732 encounters and 561 of 6062 patients were enrolled and randomized (mean [SD] age, 59.3 [16.5] years). The interruptive CDS arm included 289 encounters, in which clinicians chose to screen in 121 (42%) cases using an assessment of their choosing. In the noninterruptive CDS arm, there were 307 encounters, in which clinicians chose to screen in 12 (4%) cases. These results show that rates of in-person screening were higher with interruptive CDS than with noninterruptive CDS (odds ratio, 17.70; 95% CI, 6.42-48.79; P<.001). The interruptive CDS was also associated with a higher overall frequency of documented suicide risk assessments than the noninterruptive CDS (63 of 289 encounters [22%] vs 11 of 307 encounters [4%]; P<.001). In the same clinical settings, the suicide risk assessment rate was 8% in the prior year at baseline (64 of 832 encounters), compared with the current study rate of 22% (63 of 289 encounters). The results of this RCT demonstrate that interruptive CDS was more effective than noninterruptive CDS in prompting in-person assessments.
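For readers who want to see where the headline odds ratio comes from, a value close to it can be reconstructed from the screening counts reported above, assuming it is the crude (unadjusted) odds ratio of clinicians choosing to screen in the interruptive versus noninterruptive arms:

\[
\mathrm{OR} = \frac{121/(289-121)}{12/(307-12)} = \frac{121/168}{12/295} \approx \frac{0.720}{0.041} \approx 17.7
\]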
1. In a cohort of adults with chronic obstructive pulmonary disease (COPD), receiving fluticasone-umeclidinium-vilanterol was not associated with worse clinical outcomes compared to receiving budesonide-glycopyrrolate-formoterol.
Evidence Rating Level: 2 (Good)
Treatment recommendations for chronic obstructive pulmonary disease (COPD) include triple inhaler therapy combining an inhaled corticosteroid, a long-acting muscarinic antagonist, and a long-acting β-agonist. In the United States, two single inhalers incorporate this combination therapy: budesonide-glycopyrrolate-formoterol (Breztri Aerosphere), a twice-daily metered dose inhaler, and fluticasone-umeclidinium-vilanterol (Trelegy Ellipta), a once-daily dry powder inhaler. Globally, health systems have tried to reduce the use of metered dose inhalers because they contain propellants that harm the environment through greenhouse gas emissions. However, few studies have compared the two types of triple inhaler therapy. To address this gap, this study compared the effectiveness and safety of budesonide-glycopyrrolate-formoterol and fluticasone-umeclidinium-vilanterol in patients with COPD. Eligibility criteria included a diagnosis of COPD based on International Classification of Diseases codes, being aged 40 years or older, and being enrolled for at least one year before the study start. Individuals were excluded if they had a previous asthma diagnosis. For effectiveness, the primary outcome was moderate or severe COPD exacerbation, and for safety, the primary outcome was first admission to hospital with pneumonia. The study included 20 388 matched pairs, drawn from 87 751 individuals using fluticasone-umeclidinium-vilanterol and 20 395 using budesonide-glycopyrrolate-formoterol. There was a 9% increased hazard of first moderate or severe COPD exacerbation in patients receiving budesonide-glycopyrrolate-formoterol compared to those receiving fluticasone-umeclidinium-vilanterol (HR 1.09 [95% CI, 1.04 to 1.14]). When comparing the two treatment groups for pneumonia incidence, the hazard of first admission was similar (HR 1.00 [95% CI, 0.91 to 1.10]; absolute risk difference 0.4% [95% CI, -0.6% to 1.3%]). For the secondary outcome of all-cause mortality, there was a 7% increased relative hazard (HR 1.07 [95% CI, 1.02 to 1.12]) and a 1.9% increase in absolute risk (95% CI, 0.1% to 3.6%) in the budesonide-glycopyrrolate-formoterol group compared to the fluticasone-umeclidinium-vilanterol group. In summary, in a cohort of patients with COPD, those treated with fluticasone-umeclidinium-vilanterol did not experience worse clinical outcomes than those treated with budesonide-glycopyrrolate-formoterol. Health systems wanting to reduce their greenhouse gas emissions may therefore recommend fluticasone-umeclidinium-vilanterol as a safe and effective treatment.
Changes in sarcopenia and incident cardiovascular disease in prospective cohorts
1. In this prospective cohort, there was a significantly increased risk of developing cardiovascular disease (CVD) in participants with possible sarcopenia at baseline compared to those without sarcopenia. Participants with sarcopenia at baseline also had an increased risk; however, it was not statistically significant.
2. Participants with possible sarcopenia who regained non-sarcopenia status, as well as those with sarcopenia who improved to either possible- or non-sarcopenia status experienced a significantly lower risk of developing CVD.
Evidence Rating Level: 1 (Excellent)
Sarcopenia is characterized by a progressive loss of muscle mass and strength resulting from factors such as inflammation or mitochondrial dysfunction, and it commonly presents in middle-aged and older adults. The decline in muscle strength leads to adverse clinical outcomes, ranging from falls and fractures to decreased quality of life and increased risk of mortality. Cardiovascular disease (CVD) is another condition whose incidence increases with age. To understand the connection between the two conditions, this study sought to examine the association between changes in sarcopenia status and new-onset CVD in adults. The data for this prospective cohort study were collected from the China Health and Retirement Longitudinal Study (CHARLS). Individuals were ineligible to participate if they had missing sarcopenia data at baseline, had CVD at baseline, or were lost to follow-up. The Asian Working Group for Sarcopenia (AWGS) algorithm was used to evaluate sarcopenia status; it comprises three components: muscle strength, appendicular skeletal muscle mass (ASM), and physical performance. A total of 7499 participants (average age, 58.5 years) were included in the baseline analysis, while 4822 participants (average age, 58.4 years) were included in the analysis of sarcopenia changes. Of the participants included in the study, 180 (2.4%) met the criteria for severe sarcopenia, which occurs when low muscle mass is accompanied by both low muscle strength and low physical performance, and 1874 had possible sarcopenia, characterized by either low muscle strength or low physical performance. Compared to participants without sarcopenia, those with possible sarcopenia had an increased risk of developing CVD (HR 1.25, 95% CI, 1.11-1.42). Participants diagnosed with sarcopenia also had an increased risk of developing CVD compared to those without sarcopenia; however, this was not statistically significant (HR 1.01, 95% CI, 0.81-1.26). There was an increased risk of new-onset CVD in participants who progressed to possible sarcopenia or sarcopenia compared to those with stable non-sarcopenia (HR 1.30, 95% CI, 1.06-1.59). Conversely, there was a decreased risk of developing CVD in participants with sarcopenia at baseline who recovered to possible- or non-sarcopenia compared to those with stable sarcopenia (HR 0.61, 95% CI, 0.37-0.99). A similar trend was observed for those with possible sarcopenia at baseline who recovered to non-sarcopenia: they had a decreased risk of developing CVD compared to those who remained in the possible sarcopenia group (HR 0.67, 95% CI, 0.52-0.86). Overall, worsening sarcopenia status was associated with a higher risk of developing CVD, whereas recovery from sarcopenia was associated with a lower risk.
Image: PD
©2024 2 Minute Medicine, Inc. All rights reserved. No works may be reproduced without expressed written consent from 2 Minute Medicine, Inc. Inquire about licensing here. No article should be construed as medical advice and is not intended as such by the authors or by 2 Minute Medicine, Inc.