Objective To explore evidence on the links between patient experience and clinical safety and effectiveness outcomes. Design Systematic review. Setting A wide range of settings within primary and secondary care including hospitals and primary care centres. Participants A wide range of demographic groups and age groups. Primary and secondary outcome measures A broad range of patient safety and clinical effectiveness outcomes including mortality, physical symptoms, length of stay and adherence to treatment. Results This study, summarising evidence from 55 studies, indicates consistent positive associations between patient experience, patient safety and clinical effectiveness for a wide range of disease areas, settings, outcome measures and study designs. It demonstrates positive associations between patient experience and self-rated and objectively measured health outcomes; adherence to recommended clinical practice and medication; preventive care (such as health-promoting behaviour, use of screening services and immunisation); and resource use (such as hospitalisation, length of stay and primary-care visits). There is some evidence of positive associations between patient experience and measures of the technical quality of care and adverse events. Overall, it was more common to find positive associations between patient experience and patient safety and clinical effectiveness than no associations. Conclusions The data presented show that patient experience is positively associated with clinical effectiveness and patient safety, and support the case for the inclusion of patient experience as one of the central pillars of quality in healthcare. It supports the argument that the three dimensions of quality should be looked at as a group and not in isolation. Clinicians should resist sidelining patient experience as too subjective or mood-oriented, divorced from the ‘real’ clinical work of measuring safety and effectiveness.
Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.
Objectives To estimate global, regional (21 regions) and national (187 countries) sodium intakes in adults in 1990 and 2010. Design Bayesian hierarchical modelling using all identifiable primary sources. Data sources and eligibility We searched and obtained published and unpublished data from 142 surveys of 24 h urinary sodium and 103 of dietary sodium conducted between 1980 and 2010 across 66 countries. Dietary estimates were converted to urine equivalents based on 79 pairs of dual measurements. Modelling methods Bayesian hierarchical modelling used survey data and their characteristics to estimate mean sodium intake, by sex and 5-year age group, with associated uncertainty, for persons aged 20+ in 187 countries in 1990 and 2010. Country-level covariates were national income/person and composition of food supplies. Main outcome measures Mean sodium intake (g/day) as estimable by 24 h urine collections, without adjustment for non-urinary losses. Results In 2010, global mean sodium intake was 3.95 g/day (95% uncertainty interval: 3.89 to 4.01). This was nearly twice the WHO recommended limit of 2 g/day and equivalent to 10.06 (9.88–10.21) g/day of salt. Intake in men was ∼10% higher than in women; differences by age were small. Intakes were highest in East Asia, Central Asia and Eastern Europe (mean >4.2 g/day) and in Central Europe and Middle East/North Africa (3.9–4.2 g/day). Regional mean intakes in North America, Western Europe and Australia/New Zealand ranged from 3.4 to 3.8 g/day. Intakes were lower (<3.3 g/day), but more uncertain, in sub-Saharan Africa and Latin America. Between 1990 and 2010, modest, but uncertain, increases in sodium intakes were identified. Conclusions Sodium intakes exceed the recommended levels in almost all countries with small differences by age and sex. Virtually all populations would benefit from sodium reduction, supported by enhanced surveillance.
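The salt equivalence quoted above (3.95 g/day of sodium ≈ 10 g/day of salt) follows from the mass fraction of sodium in sodium chloride. A minimal sketch of the conversion, assuming standard molar masses (the study's exact rounding is not stated):

```python
# Convert a sodium intake (g/day) to its salt (NaCl) equivalent.
# Molar masses are standard reference values; the precise conversion
# factor used by the study is an assumption here.
NA_MOLAR = 22.99   # sodium, g/mol
CL_MOLAR = 35.45   # chloride, g/mol
SODIUM_FRACTION = NA_MOLAR / (NA_MOLAR + CL_MOLAR)   # ~0.3934

def sodium_to_salt(sodium_g_per_day: float) -> float:
    """Return the NaCl equivalent (g/day) of a sodium intake."""
    return sodium_g_per_day / SODIUM_FRACTION

print(round(sodium_to_salt(3.95), 2))  # 10.04, close to the reported 10.06
```

The same factor maps the WHO limit of 2 g/day of sodium to roughly 5 g/day of salt.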
Objective To estimate the prevalence of wounds managed by the UK's National Health Service (NHS) in 2012/2013 and the annual levels of healthcare resource use attributable to their management and corresponding costs. Methods This was a retrospective cohort analysis of the records of patients in The Health Improvement Network (THIN) Database. Records of 1000 adult patients who had a wound in 2012/2013 (cases) were randomly selected and matched with 1000 patients with no history of a wound (controls). Patients’ characteristics, wound-related health outcomes and all healthcare resource use were quantified and the total NHS cost of patient management was estimated at 2013/2014 prices. Results Patients’ mean age was 69.0 years and 45% were male. 76% of patients presented with a new wound in the study year and 61% of wounds healed during the study year. Nutritional deficiency (OR 0.53; p<0.001) and diabetes (OR 0.65; p<0.001) were independent risk factors for non-healing. There were an estimated 2.2 million wounds managed by the NHS in 2012/2013. Annual levels of resource use attributable to managing these wounds and associated comorbidities included 18.6 million practice nurse visits, 10.9 million community nurse visits, 7.7 million GP visits and 3.4 million hospital outpatient visits. The annual NHS cost of managing these wounds and associated comorbidities was £5.3 billion. This was reduced to between £4.5 and £5.1 billion after adjusting for comorbidities. Conclusions Real-world evidence highlights that wound management is predominantly a nurse-led discipline. Approximately 30% of wounds lacked a differential diagnosis, indicative of practical difficulties experienced by non-specialist clinicians. Wounds impose a substantial health economic burden on the UK's NHS, comparable to that of managing obesity (£5.0 billion). Clinical and economic benefits could accrue from improved systems of care and an increased awareness of the impact that wounds impose on patients and the NHS.
This paper uses elements of Weberian and Foucauldian social theory to speculate on the consequences of recent higher education change in the UK. We argue that changes in the political, institutional and funding environment have produced forms of HE organization that increase the power of management and diminish the autonomy of professional academics. These new forms of organization, which are increasingly bureaucratic and utilize sophisticated systems of surveillance, will make academics increasingly instrumental in their attitudes and behaviour. We conclude that the rationalization of HE should be resisted, but that nostalgia for a previous order should not be part of that resistance. 'Mass' higher education organizations are not simply good or bad, but their rationale and consequences need to be clearly thought through if their negative aspects are to be addressed.
Objective To synthesise qualitative studies that explore prescribers’ perceived barriers and enablers to minimising potentially inappropriate medications (PIMs) chronically prescribed in adults. Design A qualitative systematic review was undertaken by searching PubMed, EMBASE, Scopus, PsycINFO, CINAHL and INFORMIT from inception to March 2014, combined with an extensive manual search of reference lists and related citations. A quality checklist was used to assess the transparency of the reporting of included studies and the potential for bias. Thematic synthesis identified common subthemes and descriptive themes across studies from which an analytical construct was developed. Study characteristics were examined to explain differences in findings. Setting All healthcare settings. Participants Medical and non-medical prescribers of medicines to adults. Outcomes Prescribers’ perspectives on factors which shape their behaviour towards continuing or discontinuing PIMs in adults. Results 21 studies were included; most explored primary care physicians’ perspectives on managing older, community-based adults. Barriers and enablers to minimising PIMs emerged within four analytical themes: problem awareness; inertia secondary to lower perceived value proposition for ceasing versus continuing PIMs; self-efficacy in regard to personal ability to alter prescribing; and feasibility of altering prescribing in routine care environments given external constraints. The first three themes are intrinsic to the prescriber (eg, beliefs, attitudes, knowledge, skills, behaviour) and the fourth is extrinsic (eg, patient, work setting, health system and cultural factors). The PIMs examined and practice setting influenced the themes reported. Conclusions A multitude of highly interdependent factors shape prescribers’ behaviour towards continuing or discontinuing PIMs. 
A full understanding of prescriber barriers and enablers to changing prescribing behaviour is critical to the development of targeted interventions aimed at deprescribing PIMs and reducing the risk of iatrogenic harm.
Background Total hip or knee replacement is highly successful when judged by prosthesis-related outcomes. However, some people experience long-term pain. Objectives To review published studies in representative populations with total hip or knee replacement for the treatment of osteoarthritis reporting proportions of people by pain intensity. Data sources MEDLINE and EMBASE databases searched to January 2011 with no language restrictions. Citations of key articles in ISI Web of Science and reference lists were checked. Study eligibility criteria, participants and interventions Prospective studies of consecutive, unselected osteoarthritis patients representative of the primary total hip or knee replacement population, with intensities of patient-centred pain measured after 3 months to 5-year follow-up. Study appraisal and synthesis methods Two authors screened titles and abstracts. Data extracted by one author were checked independently against original articles by a second. For each study, the authors summarised the proportions of people with different severities of pain in the operated joint. Results Searches identified 1308 articles of which 115 reported patient-centred pain outcomes. Fourteen articles describing 17 cohorts (6 with hip and 11 with knee replacement) presented appropriate data on pain intensity. The proportion of people with an unfavourable long-term pain outcome in studies ranged from about 7% to 23% after hip and 10% to 34% after knee replacement. In the best quality studies, an unfavourable pain outcome was reported in 9% or more of patients after hip and about 20% of patients after knee replacement. Limitations Other studies reported mean values of pain outcomes. These and routine clinical studies are potential sources of relevant data. Conclusions and implications of key findings After hip and knee replacement, a significant proportion of people have painful joints.
There is an urgent need to improve general awareness of this possibility and to address determinants of good and bad outcomes.
Objective The objective of this study was to characterise the incidence rates of herpes zoster (HZ), also known as shingles, and risk of complications across the world. Design We systematically reviewed studies examining the incidence rates of HZ, temporal trends of HZ, the risk of complications including postherpetic neuralgia (PHN) and HZ-associated hospitalisation and mortality rates in the general population. The literature search was conducted using PubMed, EMBASE and the WHO library up to December 2013. Results We included 130 studies conducted in 26 countries. The incidence rate of HZ ranged between 3 and 5/1000 person-years in North America, Europe and Asia-Pacific, based on studies using prospective surveillance, electronic medical record data or administrative data with medical record review. A temporal increase in the incidence of HZ was reported in the past several decades across seven countries, often occurring before the introduction of varicella vaccination programmes. The risk of developing PHN varied from 5% to more than 30%, depending on the type of study design, age distribution of study populations and definition. More than 30% of patients with PHN experienced persistent pain for more than 1 year. The risk of recurrence of HZ ranged from 1% to 6%, with long-term follow-up studies showing higher risk (5–6%). Hospitalisation rates ranged from 2 to 25/100 000 person-years, with higher rates among elderly populations. Conclusions HZ is a significant global health burden that is expected to increase as the population ages. Future research with rigorous methods is important.
Objective To conduct a systematic review and meta-analysis of prices of healthier versus less healthy foods/diet patterns while accounting for key sources of heterogeneity. Data sources MEDLINE (2000–2011), supplemented with expert consultations and hand reviews of reference lists and related citations. Design Studies, reviewed independently and in duplicate, were included if they reported the mean retail price of foods or diet patterns stratified by healthfulness. We extracted, in duplicate, mean prices and their uncertainties of healthier and less healthy foods/diet patterns and rated the intensity of health differences for each comparison (range 1–10). Prices were adjusted for inflation and the World Bank purchasing power parity, and standardised to the international dollar (defined as US$1) in 2011. Using random effects models, we quantified price differences of healthier versus less healthy options for specific food types, diet patterns and units of price (serving, day and calorie). Statistical heterogeneity was quantified using I2 statistics. Results 27 studies from 10 countries met the inclusion criteria. Among food groups, meats/protein had the largest price differences: healthier options cost $0.29/serving (95% CI $0.19 to $0.40) and $0.47/200 kcal ($0.42 to $0.53) more than less healthy options. Price differences per serving for healthier versus less healthy foods were smaller among grains ($0.03), dairy (−$0.004), snacks/sweets ($0.12) and fats/oils ($0.02; p<0.05 each) and not significant for soda/juice ($0.11, p=0.64). Comparing extremes (top vs bottom quantile) of food-based diet patterns, healthier diets cost $1.48/day ($1.01 to $1.95) and $1.54/2000 kcal ($1.15 to $1.94) more. Comparing nutrient-based patterns, price per day was not significantly different (top vs bottom quantile: $0.04; p=0.916), whereas price per 2000 kcal was $1.56 ($0.61 to $2.51) more. Adjustment for intensity of differences in healthfulness yielded similar results.
Conclusions This meta-analysis provides the best evidence to date on price differences between healthier and less healthy foods/diet patterns, highlighting the challenges and opportunities for reducing financial barriers to healthy eating.
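The pooling approach the review describes (random-effects models, with heterogeneity quantified by I²) can be sketched with the DerSimonian-Laird estimator. The per-study price differences and standard errors below are invented for illustration, not data from the review:

```python
import math

def random_effects(estimates, ses):
    """DerSimonian-Laird random-effects pooled mean with 95% CI and I^2."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Cochran's Q measures between-study dispersion around the fixed estimate
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % heterogeneity
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

# Three hypothetical studies of a price difference per serving ($)
pooled, ci, i2 = random_effects([0.25, 0.35, 0.20], [0.05, 0.06, 0.04])
```

The pooled estimate lands between the study estimates, with its CI widened by the between-study variance component tau².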
Objective Comparison of recent national survey data on prevalence, awareness, treatment and control of hypertension in England, the USA and Canada, and correlation of these parameters with each country's stroke and ischaemic heart disease (IHD) mortality. Design Non-institutionalised population surveys. Setting and participants England (2006 n=6873), the USA (2007–2010 n=10 003) and Canada (2007–2009 n=3485) aged 20–79 years. Outcomes Stroke and IHD mortality rates were plotted against countries' specific prevalence data. Results Mean systolic blood pressure (SBP) was higher in England than in the USA and Canada in all age–gender groups. Mean diastolic blood pressure (DBP) was similar in the three countries before age 50 and then fell more rapidly in the USA, where it was lowest. Only 34% had a BP under 140/90 mm Hg in England, compared with 50% in the USA and 66% in Canada. Prehypertension and stages 1 and 2 hypertension prevalence figures were the highest in England. Hypertension prevalence (≥140 mm Hg SBP and/or ≥90 mm Hg DBP) was lower in Canada (19.5%) than in the USA (29%) and England (30%). Hypertension awareness was higher in the USA (81%) and Canada (83%) than in England (65%). England also had lower levels of hypertension treatment (51%; USA 74%; Canada 80%) and control (<140/90 mm Hg; 27%; USA 53%; Canada 66%). Canada had the lowest stroke and IHD mortality rates and England the highest; the rates were inversely related to the mean SBP in each country and strongly related to the blood pressure indicators, the strongest relationship being between low hypertension awareness and stroke mortality. Conclusions While current prevention efforts in England should yield improved figures in future, especially at younger ages, these data still show important gaps in the management of hypertension in these countries, with consequences for stroke and IHD mortality.
Objective The majority of cardiovascular diagnoses in the Danish National Patient Registry (DNPR) remain to be validated despite extensive use in epidemiological research. We therefore examined the positive predictive value (PPV) of cardiovascular diagnoses in the DNPR. Design Population-based validation study. Setting 1 university hospital and 2 regional hospitals in the Central Denmark Region, 2010–2012. Participants For each cardiovascular diagnosis, up to 100 patients from participating hospitals were randomly sampled during the study period using the DNPR. Main outcome measure Using medical record review as the reference standard, we examined the PPV for cardiovascular diagnoses in the DNPR, coded according to the International Classification of Diseases, 10th Revision. Results A total of 2153 medical records (97% of the total sample) were available for review. The PPVs ranged from 64% to 100%, with a mean PPV of 88%. The PPVs were ≥90% for first-time myocardial infarction, stent thrombosis, stable angina pectoris, hypertrophic cardiomyopathy, arrhythmogenic right ventricular cardiomyopathy, takotsubo cardiomyopathy, arterial hypertension, atrial fibrillation or flutter, cardiac arrest, mitral valve regurgitation or stenosis, aortic valve regurgitation or stenosis, pericarditis, hypercholesterolaemia, aortic dissection, aortic aneurysm/dilation and arterial claudication. The PPVs were between 80% and 90% for recurrent myocardial infarction, first-time unstable angina pectoris, pulmonary hypertension, bradycardia, ventricular tachycardia/fibrillation, endocarditis, cardiac tumours and first-time venous thromboembolism, and between 70% and 80% for first-time and recurrent admission due to heart failure, first-time dilated cardiomyopathy, restrictive cardiomyopathy and recurrent venous thromboembolism. The PPV for first-time myocarditis was 64%.
The PPVs were consistent within age, sex, calendar year and hospital categories. Conclusions The validity of cardiovascular diagnoses in the DNPR is overall high and sufficient for use in research since 2010.
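The PPV reported here is simply the proportion of registry diagnoses confirmed when the medical record is reviewed. A minimal sketch, with illustrative counts rather than the study's data:

```python
# Positive predictive value of a registry diagnosis code against
# medical record review as the reference standard.
# Counts below are invented for illustration, not from the DNPR study.

def positive_predictive_value(confirmed: int, reviewed: int) -> float:
    """PPV = confirmed diagnoses / all registry diagnoses reviewed."""
    return confirmed / reviewed

# e.g. 88 of 100 sampled diagnosis codes confirmed on record review
print(positive_predictive_value(88, 100))  # 0.88
```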
Objectives An estimated 6%–10% of US adults took a hypnotic drug for poor sleep in 2010. This study extends previous reports associating hypnotics with excess mortality. Setting A large integrated health system in the USA. Design Longitudinal electronic medical records were extracted for a one-to-two matched cohort survival analysis. Subjects Subjects (mean age 54 years) were 10 529 patients who received hypnotic prescriptions and 23 676 matched controls with no hypnotic prescriptions, followed for an average of 2.5 years between January 2002 and January 2007. Main outcome measures Data were adjusted for age, gender, smoking, body mass index, ethnicity, marital status, alcohol use and prior cancer. Hazard ratios (HRs) for death were computed from Cox proportional hazards models controlled for risk factors and using up to 116 strata, which exactly matched cases and controls by 12 classes of comorbidity. Results As predicted, patients prescribed any hypnotic had substantially elevated hazards of dying compared to those prescribed no hypnotics. For groups prescribed 0.4–18, 18–132 and >132 doses/year, HRs (95% CIs) were 3.60 (2.92 to 4.44), 4.43 (3.67 to 5.36) and 5.32 (4.50 to 6.30), respectively, demonstrating a dose–response association. HRs were elevated in separate analyses for several common hypnotics, including zolpidem, temazepam, eszopiclone, zaleplon, other benzodiazepines, barbiturates and sedative antihistamines. Hypnotic use in the upper third was associated with a significant elevation of incident cancer; HR=1.35 (95% CI 1.18 to 1.55). Results were robust within groups suffering each comorbidity, indicating that the death and cancer hazards associated with hypnotic drugs were not attributable to pre-existing disease. Conclusions Receiving hypnotic prescriptions was associated with greater than threefold increased hazards of death even when prescribed <18 pills/year.
This association held in separate analyses for several commonly used hypnotics and for newer shorter-acting drugs. Control of selective prescription of hypnotics for patients in poor health did not explain the observed excess mortality.
Objectives There is little consensus regarding the burden of pain in the UK. The purpose of this review was to synthesise existing data on the prevalence of various chronic pain phenotypes in order to produce accurate and contemporary national estimates. Design Major electronic databases were searched for articles published after 1990 reporting population-based prevalence estimates of chronic pain (pain lasting >3 months), chronic widespread pain, fibromyalgia and chronic neuropathic pain. Pooled prevalence estimates were calculated for chronic pain and chronic widespread pain. Results Of the 1737 articles generated through our searches, 19 studies matched our inclusion criteria, presenting data from 139 933 adult residents of the UK. The prevalence of chronic pain, derived from 7 studies, ranged from 35.0% to 51.3% (pooled estimate 43.5%, 95% CIs 38.4% to 48.6%). The prevalence of moderate-to-severely disabling chronic pain (Von Korff grades III/IV), based on 4 studies, ranged from 10.4% to 14.3%. 12 studies stratified chronic pain prevalence by age group, demonstrating a trend towards increasing prevalence with increasing age, from 14.3% in those aged 18–25 years to 62% in the over-75 age group, although the prevalence of chronic pain in young people (18–39 years old) may be as high as 30%. Reported prevalence estimates were summarised for chronic widespread pain (pooled estimate 14.2%, 95% CI 12.3% to 16.1%; 5 studies), chronic neuropathic pain (8.2% to 8.9%; 2 studies) and fibromyalgia (5.4%; 1 study). Chronic pain was more common in female than male participants across all measured phenotypes. Conclusions Chronic pain affects between one-third and one-half of the population of the UK, corresponding to just under 28 million adults, based on data from the best available published studies. This figure is likely to increase further in line with an ageing population.
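Each study-level prevalence estimate feeding into a pooled figure like those above carries its own confidence interval. A sketch of a 95% CI for a single survey's proportion using the normal approximation (counts invented for illustration, not from the 19 included studies):

```python
import math

# 95% CI for a prevalence estimate from one survey, via the normal
# (Wald) approximation to the binomial. Illustrative counts only.

def prevalence_ci(cases: int, n: int):
    """Return (prevalence, lower 95% bound, upper 95% bound)."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)      # standard error of a proportion
    return p, p - 1.96 * se, p + 1.96 * se

# e.g. 435 of 1000 respondents reporting pain lasting >3 months
p, lo, hi = prevalence_ci(435, 1000)
```

Larger surveys shrink the interval in proportion to 1/sqrt(n), which is why the pooled estimates from multiple studies are reported with fairly narrow CIs.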
Objectives To determine the relationship between the reduction in salt intake that occurred in England, and blood pressure (BP), as well as mortality from stroke and ischaemic heart disease (IHD). Design Analysis of the data from the Health Survey for England. Setting and participants England, 2003 N=9183, 2006 N=8762, 2008 N=8974 and 2011 N=4753, aged ≥16 years. Outcomes BP, stroke and IHD mortality. Results From 2003 to 2011, there was a decrease in mortality from stroke by 42% (p<0.001) and IHD by 40% (p<0.001). In parallel, there was a fall in BP of 3.0±0.33/1.4±0.20 mm Hg (p<0.001/p<0.001), a decrease of 0.4±0.02 mmol/L (p<0.001) in cholesterol, a reduction in smoking prevalence from 19% to 14% (p<0.001), an increase in fruit and vegetable consumption (0.2±0.05 portion/day, p<0.001) and an increase in body mass index (BMI; 0.5±0.09 kg/m2, p<0.001). Salt intake, as measured by 24 h urinary sodium, decreased by 1.4 g/day (p<0.01). It is likely that all of these factors (with the exception of BMI), along with improvements in the treatments of BP, cholesterol and cardiovascular disease, contributed to the falls in stroke and IHD mortality. In individuals who were not on antihypertensive medication, there was a fall in BP of 2.7±0.34/1.1±0.23 mm Hg (p<0.001/p<0.001) after adjusting for age, sex, ethnic group, education, household income, alcohol consumption, fruit and vegetable intake and BMI. Although salt intake was not measured in these participants, the fact that the average salt intake in a random sample of the population fell by 15% during the same period suggests that the falls in BP would be largely attributable to the reduction in salt intake rather than antihypertensive medications. Conclusions The reduction in salt intake is likely to be an important contributor to the falls in BP from 2003 to 2011 in England. As a result, it would have contributed substantially to the decreases in stroke and IHD mortality.
Objectives To synthesise current evidence for the effects of exenatide and liraglutide on heart rate, blood pressure and body weight. Design Meta-analysis of available data from randomised controlled trials comparing Glucagon-like peptide-1 (GLP-1) analogues with placebo, active antidiabetic drug therapy or lifestyle intervention. Participants Patients with type 2 diabetes. Outcome measures Weighted mean differences between trial arms for changes in heart rate, blood pressure and body weight, after a minimum of 12-week follow-up. Results 32 trials were included. Overall, GLP-1 agonists increased the heart rate by 1.86 beats/min (bpm) (95% CI 0.85 to 2.87) versus placebo and 1.90 bpm (1.30 to 2.50) versus active control. This effect was more evident for liraglutide and exenatide long-acting release than for exenatide twice daily. GLP-1 agonists decreased systolic blood pressure by −1.79 mm Hg (−2.94 to −0.64) and −2.39 mm Hg (−3.35 to −1.42) compared to placebo and active control, respectively. Reduction in diastolic blood pressure failed to reach statistical significance (−0.54 mm Hg (−1.15 to 0.07) vs placebo and −0.50 mm Hg (−1.24 to 0.24) vs active control). Body weight decreased by −3.31 kg (−4.05 to −2.57) compared to active control, but by only −1.22 kg (−1.51 to −0.93) compared to placebo. Conclusions GLP-1 analogues are associated with a small increase in heart rate and modest reductions in body weight and blood pressure. Mechanisms underlying the rise in heart rate require further investigation.
This review is an abridged version of a Cochrane Review previously published in the Cochrane Database of Systematic Reviews 2010, Issue 4, Art. No.: MR000013 DOI: 10.1002/14651858.MR000013.pub5 (see www.thecochranelibrary.com for information). Cochrane Reviews are regularly updated as new evidence emerges and in response to feedback, and Cochrane Database of Systematic Reviews should be consulted for the most recent version of the review. Objective To identify interventions designed to improve recruitment to randomised controlled trials, and to quantify their effect on trial participation. Design Systematic review. Data sources The Cochrane Methodology Review Group Specialised Register in the Cochrane Library, MEDLINE, EMBASE, ERIC, Science Citation Index, Social Sciences Citation Index, C2-SPECTR, the National Research Register and PubMed. Most searches were undertaken up to 2010; no language restrictions were applied. Study selection Randomised and quasi-randomised controlled trials, including those recruiting to hypothetical studies. Studies on retention strategies, examining ways to increase questionnaire response or evaluating the use of incentives for clinicians were excluded. The study population included any potential trial participant (eg, patient, clinician and member of the public), or individual or group of individuals responsible for trial recruitment (eg, clinicians, researchers and recruitment sites). Two authors independently screened identified studies for eligibility. Results 45 trials with over 43 000 participants were included. 
Some interventions were effective in increasing recruitment: telephone reminders to non-respondents (risk ratio (RR) 1.66, 95% CI 1.03 to 2.46; two studies, 1058 participants), use of opt-out rather than opt-in procedures for contacting potential participants (RR 1.39, 95% CI 1.06 to 1.84; one study, 152 participants) and open designs where participants know which treatment they are receiving in the trial (RR 1.22, 95% CI 1.09 to 1.36; two studies, 4833 participants). However, the effect of many other strategies is less clear, including the use of video to provide trial information and interventions aimed at recruiters. Conclusions There are promising strategies for increasing recruitment to trials, but some methods, such as open-trial designs and opt-out strategies, must be considered carefully as their use may also present methodological or ethical challenges. Questions remain as to the applicability of results originating from hypothetical trials, including those relating to the use of monetary incentives, and there is a clear knowledge gap with regard to effective strategies aimed at recruiters.
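The risk ratios reported above (e.g. RR 1.66, 95% CI 1.03 to 2.46) are conventionally given confidence intervals computed on the log scale. A sketch with an invented 2×2 table, not the trials' data:

```python
import math

# Risk ratio with a 95% CI computed on the log scale, the standard
# presentation for recruitment-intervention effects. Counts are
# illustrative only.

def risk_ratio_ci(a: int, n1: int, b: int, n2: int):
    """Events a/n1 in the intervention arm, b/n2 in the control arm."""
    rr = (a / n1) / (b / n2)
    # standard error of log(RR) from the 2x2 counts
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# e.g. 60/300 recruited with telephone reminders vs 40/300 without
rr, lo, hi = risk_ratio_ci(60, 300, 40, 300)
```

A CI whose lower bound stays above 1 (as for the telephone-reminder result) is what marks the effect as statistically significant.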
Objectives To investigate the contribution of ultra-processed foods to the intake of added sugars in the USA. Ultra-processed foods were defined as industrial formulations which, besides salt, sugar, oils and fats, include substances not used in culinary preparations, in particular additives used to imitate sensorial qualities of minimally processed foods and their culinary preparations. Design Cross-sectional study. Setting National Health and Nutrition Examination Survey 2009–2010. Participants We evaluated 9317 participants aged 1+ years with at least one 24 h dietary recall. Main outcome measures Average dietary content of added sugars and proportion of individuals consuming more than 10% of total energy from added sugars. Data analysis Gaussian and Poisson regressions estimated the association between consumption of ultra-processed foods and intake of added sugars. All models incorporated survey sample weights and adjusted for age, sex, race/ethnicity, family income and educational attainment. Results Ultra-processed foods comprised 57.9% of energy intake, and contributed 89.7% of the energy intake from added sugars. The content of added sugars in ultra-processed foods (21.1% of calories) was eightfold higher than in processed foods (2.4%) and fivefold higher than in unprocessed or minimally processed foods and processed culinary ingredients grouped together (3.7%). In both unadjusted and adjusted models, each increase of 5 percentage points in proportional energy intake from ultra-processed foods increased the proportional energy intake from added sugars by 1 percentage point. Consumption of added sugars increased linearly across quintiles of ultra-processed food consumption: from 7.5% of total energy in the lowest quintile to 19.5% in the highest.
A total of 82.1% of Americans in the highest quintile exceeded the recommended limit of 10% energy from added sugars, compared with 26.4% in the lowest. Conclusions Decreasing the consumption of ultra-processed foods could be an effective way of reducing the excessive intake of added sugars in the USA.
Objectives To report on the causes of blindness certifications in England and Wales in working age adults (16–64 years) in 2009–2010; and to compare these with figures from 1999 to 2000. Design Analysis of the national database of blindness certificates of vision impairment (CVIs) received by the Certifications Office. Setting and participants Working age (16–64 years) population of England and Wales. Main outcome measures Number and cause of blindness certifications. Results The Certifications Office received 1756 CVIs for blindness from persons aged between 16 and 64 inclusive between 1 April 2009 and 31 March 2010. The main causes of blindness certifications were hereditary retinal disorders (354 certifications comprising 20.2% of the total), diabetic retinopathy/maculopathy (253 persons, 14.4%) and optic atrophy (248 persons, 14.1%). Together, these three leading causes accounted for almost 50% of all blindness certifications. Between 1 April 1999 and 31 March 2000, the leading causes of blindness certification were diabetic retinopathy/maculopathy (17.7%), hereditary retinal disorders (15.8%) and optic atrophy (10.1%). Conclusions For the first time in at least five decades, diabetic retinopathy/maculopathy is no longer the leading cause of certifiable blindness among working age adults in England and Wales, having been overtaken by inherited retinal disorders. This change may be related to factors including the introduction of nationwide diabetic retinopathy screening programmes in England and Wales and improved glycaemic control. Inherited retinal disease, now representing the commonest cause of certification in the working age population, has clinical and research implications, including with respect to the provision of care/resources in the NHS and the allocation of research funding.
Objective To investigate trends in incident and prevalent diagnoses of type 2 diabetes mellitus (T2DM) and its pharmacological treatment between 2000 and 2013. Design Analysis of longitudinal electronic health records in The Health Improvement Network (THIN) primary care database. Setting UK primary care. Participants In total, we examined 8 838 031 individuals aged 0–99 years. Outcome measures The incidence and prevalence of T2DM between 2000 and 2013, and the effect of age, sex and social deprivation on these measures were examined. Changes in prescribing patterns of antidiabetic therapy between 2000 and 2013 were also investigated. Results Overall, 406 344 individuals had a diagnosis of T2DM, of which 203 639 were newly diagnosed between 2000 and 2013. The incidence of T2DM rose from 3.69 per 1000 person-years at risk (PYAR) (95% CI 3.58 to 3.81) in 2000 to 3.99 per 1000 PYAR (95% CI 3.90 to 4.08) in 2013 among men; and from 3.06 per 1000 PYAR (95% CI 2.95 to 3.17) to 3.73 per 1000 PYAR (95% CI 3.65 to 3.82) among women. Prevalence of T2DM more than doubled from 2.39% (95% CI 2.37 to 2.41) in 2000 to 5.32% (95% CI 5.30 to 5.34) in 2013. Being male, older, and from a more socially deprived area were all strongly associated with having T2DM (p<0.001). Prescribing changes over time reflected emerging clinical guidance and novel treatments. In 2013, metformin prescribing peaked at 83.6% (95% CI 83.4% to 83.8%), while sulfonylurea prescribing reached a low of 41.4% (95% CI 41.1% to 41.7%). Both nevertheless remained the most commonly used pharmacological treatments as first-line agents and add-on therapy. Thiazolidinediones and incretin-based therapies (gliptins and GLP-1 analogues) were also prescribed as alternative add-on therapy options, but were rarely used for first-line treatment in T2DM. Conclusions Prevalent cases of T2DM more than doubled between 2000 and 2013, while the number of incident cases increased more steadily.
The changes in prescribing patterns observed may reflect the impact of national policies and prescribing guidelines on UK primary care.
OBJECTIVES: Evaluating the variation in the strength of the effect across studies is a key feature of meta-analyses. This variability is reflected by measures like τ² or I², but their clinical interpretation is not straightforward. A prediction interval is less complicated: it presents the expected range of true effects in similar studies. We aimed to show the advantages of having the prediction interval routinely reported in meta-analyses. DESIGN: We show how the prediction interval can help understand the uncertainty about whether an intervention works or not. To evaluate the implications of using this interval to interpret the results, we selected the first meta-analysis per intervention review of the Cochrane Database of Systematic Reviews Issues 2009–2013 with a dichotomous (n=2009) or continuous (n=1254) outcome, and generated 95% prediction intervals for them. RESULTS: In 72.4% of 479 statistically significant (random-effects p<0.05) meta-analyses with measured heterogeneity (I²>0), the 95% prediction interval suggested that the intervention effect could be null or even be in the opposite direction. In 20.3% of those 479 meta-analyses, the prediction interval showed that the effect could be completely opposite to the point estimate of the meta-analysis. We also demonstrate how the prediction interval can be used to calculate the probability that a new trial will show a negative effect and to improve the calculations of the power of a new trial. CONCLUSIONS: The prediction interval reflects the variation in treatment effects over different settings, including what effect is to be expected in future patients, such as the patients that a clinician is interested in treating. Prediction intervals should be routinely reported to allow more informative inferences in meta-analyses.
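The prediction interval described above can be computed from the random-effects summary estimate, its standard error and the between-study variance τ², using the standard t-based formula (μ ± t_{k−2} · √(τ² + SE(μ)²)). A minimal Python sketch is given below; the function names and the illustrative numbers are our own, not taken from the abstract.

```python
# Sketch of a 95% prediction interval for a random-effects meta-analysis,
# using the conventional t-distribution formulation with k - 2 degrees of
# freedom (k = number of studies). Illustrative values only.
from math import sqrt
from scipy import stats

def prediction_interval(mu, se_mu, tau2, k, level=0.95):
    """Range of true effects expected in a similar new study:
    mu +/- t_{k-2} * sqrt(tau^2 + se(mu)^2)."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half_width = t_crit * sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

def prob_true_effect_below(threshold, mu, se_mu, tau2, k):
    """Approximate probability that the true effect in a new study
    falls below `threshold` (e.g. 0 on the log odds ratio scale)."""
    scale = sqrt(tau2 + se_mu ** 2)
    return stats.t.cdf((threshold - mu) / scale, df=k - 2)

# Hypothetical meta-analysis of k = 10 trials on the log odds ratio scale:
# pooled effect -0.5 (SE 0.1, so the 95% CI excludes 0), tau^2 = 0.09.
lo, hi = prediction_interval(mu=-0.5, se_mu=0.1, tau2=0.09, k=10)
print(f"95% prediction interval: ({lo:.2f}, {hi:.2f})")
p_benefit = prob_true_effect_below(0, mu=-0.5, se_mu=0.1, tau2=0.09, k=10)
print(f"P(true effect < 0 in a new trial): {p_benefit:.2f}")
```

Even though the pooled 95% confidence interval here excludes zero, the prediction interval crosses it, illustrating the abstract's central point: a statistically significant summary effect does not guarantee benefit in every new setting.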