Deploying wearable sensors for pandemic mitigation: A counterfactual modelling study of Canada's second COVID-19 wave [1]

Nathan Duarte (Department of Electrical and Computer Engineering, Faculty of Engineering, McGill University, Montreal); Rahul K. Arora (Department of Community Health Sciences, University of Calgary, Calgary)

Date: 2022-12

Wearable sensors can continuously and passively detect potential respiratory infections before or absent symptoms. However, the population-level impact of deploying these devices during pandemics is unclear. We built a compartmental model of Canada's second COVID-19 wave and simulated wearable sensor deployment scenarios, systematically varying detection algorithm accuracy, uptake, and adherence. With current detection algorithms and 4% uptake, we observed a 16% reduction in the second-wave burden of infection; however, 22% of this reduction was attributable to incorrectly quarantining uninfected device users. Improving detection specificity and offering confirmatory rapid tests each minimized unnecessary quarantines and lab-based tests. With a sufficiently low false positive rate, increasing uptake and adherence became effective strategies for scaling averted infections. We concluded that wearable sensors capable of detecting presymptomatic or asymptomatic infections have the potential to help reduce the burden of infection during a pandemic; in the case of COVID-19, technology improvements or supporting measures are required to keep social and resource costs sustainable.

Find-Test-Trace-Isolate (FTTI) systems reliant on lab-based tests are important components of pandemic mitigation but can miss infectious individuals who do not have symptoms and may be limited by slow test result turnaround times. Wearable sensors show promise in continuous, passive detection of respiratory infections, before or absent symptoms. Here, we used a mathematical model to study the counterfactual impact of deploying wearable sensors to detect SARS-CoV-2 infections during Canada's second COVID-19 wave. We observed a meaningful reduction in the burden of infection but also found that false positive alerts resulting from imperfect detection specificity carried high social and resource costs. Improving detection specificity and offering rapid antigen tests to confirm positive alerts both helped minimize unnecessary quarantines and lab-based tests. We found that once the false positive rate was sufficiently reduced, increasing uptake and adherence became effective strategies to scale the number of averted infections. Our study demonstrates that wearable sensors capable of detecting infections before or absent symptoms are promising pandemic mitigation tools. It also provides intuition around how detection performance, uptake, adherence, and supporting policies might shape the impact of broad-scale wearable sensor deployment.

In this study, we investigated the potential for wearable sensors capable of detecting presymptomatic and asymptomatic infections to help reduce the burden of infection during the acute phase of a pandemic. To do so, we used COVID-19 as an example and explored counterfactual scenarios in which these devices were deployed to combat Canada's second wave.
We built a compartmental epidemiological model in which wearable devices notify users of potential infection and prompt them to seek a confirmatory lab-based test, quarantining while waiting for the result. We aimed to (1) assess the baseline impact of deploying currently available detection algorithms during Canada's second COVID-19 wave, (2) investigate how detection accuracy and behavioural parameters influence this impact, and (3) explore a complementary strategy wherein rapid antigen tests are used to confirm wearable-based notifications of potential infection.

Wearable sensors have already been established as tools to detect deviations from users' physiological baselines [7]. Recent findings suggest that wearable sensors may also be able to detect infections caused by respiratory pathogens such as SARS-CoV-2, before or absent symptoms [8–10]. Alavi et al., for example, developed an algorithm that analyzes patterns in smartwatch-captured overnight resting heart rate and provides real-time alerts of potential presymptomatic and asymptomatic SARS-CoV-2 infection [10]. If such algorithms were widely deployed, wearable sensors could be promising tools for pandemic mitigation; they could help FTTI systems more rapidly identify (and subsequently isolate) infectious individuals, including those without symptoms. Wearable sensors would also offer the unique benefit of passive monitoring, which minimizes required user engagement, and could operate in a privacy-preserving fashion because sensor data would not need to be shared with a centralized database. With these potential benefits in mind, several studies have focused on developing wearable sensor-based infectious disease detection algorithms or even on using these devices for infectious disease surveillance [8–12]. However, to the best of our knowledge, the potential population-level impact of deploying these devices for pandemic mitigation has yet to be explored.

Infectious disease outbreaks can have devastating health and economic consequences. Effective public health strategies are crucial for limiting transmission and minimizing these harms. One approach to controlling viral spread during pandemics–a "Find, Test, Trace, Isolate" (FTTI) strategy–relies on identifying and isolating infectious individuals [1]. However, the COVID-19 pandemic has demonstrated that FTTI systems reliant on lab-based tests are often limited by hidden infection chains missed as a result of presymptomatic and asymptomatic transmission, and by slow test result turnaround times [2, 3]. Digital contact tracing and rapid testing programs have the potential to fill these gaps, but both approaches have faced numerous implementation barriers: inadequate participation, concerns around privacy and feasibility, and limited test availability [4–6]. Prior to vaccine availability–and also in scenarios where vaccines are available but immune-evasive variants are circulating–reducing viral transmission is an important public health policy objective.

We calculated the number of averted infections and the percent reduction in the burden of infection to quantify the health impact of wearable sensor deployment. We defined the number of averted infections as the difference between the historical number of infections and the number of infections in a counterfactual scenario. We calculated the percent reduction in the burden of infection by dividing the number of averted infections by the historical number of infections.
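These two health-impact metrics reduce to simple arithmetic on the historical and counterfactual infection time series. The sketch below illustrates the definitions above; it is a minimal sketch only, and the function and variable names are ours, not the authors'.

```python
import numpy as np

def health_impact(historical, counterfactual):
    """Compute health-impact metrics from two daily infection time series.

    historical: daily incidence of infection as it actually occurred.
    counterfactual: daily incidence in a simulated deployment scenario.
    """
    historical_total = np.sum(historical)
    counterfactual_total = np.sum(counterfactual)

    # Averted infections: historical burden minus counterfactual burden.
    averted = historical_total - counterfactual_total

    # Percent reduction in the burden of infection: averted infections
    # divided by the historical number of infections.
    percent_reduction = 100 * averted / historical_total
    return averted, percent_reduction
```

In the baseline scenario reported below, these metrics evaluate to roughly 366,000 averted infections and a 15.6% reduction in the burden of infection.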
We also measured the number of days incorrectly spent in quarantine per month per device user (a consequence of false positive notifications) as the primary indicator of the strategy's social burden [26]. Finally, to assess resource consumption, we quantified the average number of additional lab-based tests (and rapid antigen tests, where applicable) required each day.

To perform simulations, we first extracted the historical transmission rate (β) from the incidence of infection (π) according to Eq (1). In Eq (1), N represents the size of the entire population and λ represents the transmission potential of infected individuals without symptoms relative to those with symptoms. Using the true incidence of infection, rather than a time series of incompletely ascertained cases, is crucial to appropriately capture the extent of historical viral spread [20]. Because estimating π is challenging and was not itself an objective of the present work, we drew from the Institute for Health Metrics and Evaluation (IHME) infection model, a time series nowcasting model that is widely used to understand the historical extent of infection [21–23]. The IHME model estimates π from confirmed cases, hospitalizations, and deaths, and validates results against seroprevalence data. We downloaded these data from the IHME website on December 7, 2021. To ensure our findings were robust to the underlying infection model, we replicated core analyses using estimates of π from the Imperial College London (ICL) infection model (Fig B in S1 Text) [24].

Next, we applied β according to Eq (2) to simulate counterfactual scenarios. The time series for β that results from Eq (1) incorporates all historical policy measures (e.g., restrictions, business closures, testing availability) and behaviour (e.g., adherence to restrictions and quarantines). However, because some Susceptible, Exposed, and Infectious device users now quarantine in simulations, the counterfactual incidence of infection–the π obtained from Eq (2)–decreases relative to historical levels. In Section 4 in S1 Text, we investigated the possibility that device users who are not notified of potential infection act in a riskier fashion (e.g., increasing contacts) relative to historical behaviour (Fig C in S1 Text) [25]. The coefficient a is used to study this possibility and is nominally set to 1. When a is above 1, the average device user in the Susceptible, Presymptomatic Infectious, and Asymptomatic Infectious compartments acts in a riskier fashion relative to historical behaviour; when a is below 1, the average user in these groups acts more cautiously.

Multiple studies have established the potential for wearable sensors to detect presymptomatic, asymptomatic, and symptomatic SARS-CoV-2 infections [16–19]. That said, a meaningful yet unknown fraction of Symptomatic individuals would already have undergone some degree of quarantining–behaviour already accounted for in the historical transmission rate (β). For this reason, we did not include a pathway for Symptomatic device users to enter Quarantined states. We separately explored the impact of smaller and larger values for the prevalence of infected individuals who remain asymptomatic (Fig D in S1 Text). To incorporate wearable sensor deployment, we stratified the Susceptible, Exposed, and Infectious states by whether individuals are device users or not.
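As a rough sketch of the β back-calculation described above, the snippet below assumes a standard mass-action force of infection in which individuals without symptoms transmit at relative rate λ. The compartment names and functional form are our assumptions, consistent with the description of Eq (1) but not necessarily the authors' exact equation.

```python
def extract_beta(pi, S, I_sym, I_asym, N, lam):
    """Back-calculate a historical transmission rate time series, in the
    spirit of the paper's Eq (1).

    Assumes incidence follows pi[t] = beta[t] * S[t] * (I_sym[t] +
    lam * I_asym[t]) / N, where lam is the relative transmission
    potential of infected individuals without symptoms. This functional
    form is our assumption; the exact equation is the paper's.
    """
    return [
        pi[t] * N / (S[t] * (I_sym[t] + lam * I_asym[t]))
        for t in range(len(pi))
    ]
```

Re-running the model forward with this β(t) (in the spirit of the paper's Eq (2)), while diverting notified, adherent device users into Quarantined states and scaling the contact behaviour of un-notified users by the coefficient a, would then yield the counterfactual incidence.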
Wearable device users can enter Quarantined states if they are notified of potential infection, and if they adhere to this notification by seeking a confirmatory lab-based test and quarantining while awaiting the result. We modelled adherence as the fraction of notified users who comply with all recommended next steps; accordingly, non-adherent users ignore the notification entirely in this framework. We captured adherence in a single parameter to preserve model parsimony and considered all values of this parameter (i.e., from 0% to 100%), recognizing that, in reality, there will be great variation in the extent to which notified users are adherent. To explore the notion that adherence may not be "all or nothing" in practice, we separately considered the possibility that non-adherent users who do not take any recommended next steps still act more cautiously (e.g., limiting contacts, wearing a more protective mask) because of the notification (Fig C in S1 Text).

We first explored a baseline scenario in which wearable device users can download an application with currently available detection algorithms [10]. We then investigated the impact of technology and behavioural parameters: detection sensitivity and specificity; uptake, defined as the proportion of the population that has downloaded the application and uses their wearable device often enough; and adherence, defined as the proportion of users who comply with all recommended next steps after a positive notification. Finally, we considered a complementary intervention wherein users with a positive notification are offered a confirmatory rapid antigen test before they are prompted to seek a lab-based test and quarantine.

We simulated Canada's second COVID-19 wave (September 1, 2020 to February 20, 2021). This time window allowed us to capture the dynamics of wearable sensor deployment during an acute phase of the pandemic, at a time when the technology would have been ready and deployable. Further, it allowed us to consider scenarios prior to broad vaccine availability and before then-emerging variants of concern (VOCs) were dominant [13]. Potential reinfections were also likely to be negligible in this timeframe [14]. Although we do not explicitly model vaccines, VOCs, and reinfections, we later consider various hypotheses concerning their potential impact on wearable sensor deployment in the Discussion section.

The use of antigen tests reduced the number of days incorrectly spent in quarantine by ~300-fold by increasing the "effective specificity" of the strategy (Fig 6B). That is, with antigen tests, the likelihood of a Susceptible user being incorrectly prompted to quarantine on a given day fell from (1 − ν_w) to the product of (1 − ν_w) and (1 − ν_a), where ν_w and ν_a are detection algorithm specificity and antigen test specificity, respectively. In earlier scenarios (Figs 4A and 5A), improving detection specificity decreased the number of averted infections more than improving detection sensitivity increased it; fewer infections were averted in scenarios with "high" as opposed to nominal detection sensitivity and specificity. Here, the specificity contributed by the antigen tests diminished the relative impact of improving detection specificity on the number of averted infections: the "effective specificity" of the strategy was 99.976% with nominal detection specificity and 99.995% with high detection specificity (Table B in S1 Text) [31].
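The "effective" accuracy of the combined wearable-plus-antigen-test pathway follows from multiplying per-test probabilities, since a user is only prompted to quarantine when both the detection algorithm and the antigen test return positive results. The arithmetic below reproduces the reported effective specificities; the 99.7% antigen test specificity is our inference from those reported values (the exact figure is in Table B in S1 Text).

```python
# Detection algorithm specificity: nominal (92%) and "high" (98.4%).
det_spec_nominal, det_spec_high = 0.92, 0.984
# Antigen test accuracy: specificity inferred from the reported
# effective specificities; sensitivity (91.1%) as quoted in the text.
antigen_spec, antigen_sens = 0.997, 0.911
det_sens_nominal = 0.80  # nominal detection algorithm sensitivity

# A Susceptible user is incorrectly sent to quarantine only if BOTH
# tests produce false positives, so false positive rates multiply.
eff_spec_nominal = 1 - (1 - det_spec_nominal) * (1 - antigen_spec)
eff_spec_high = 1 - (1 - det_spec_high) * (1 - antigen_spec)
print(f"{eff_spec_nominal:.4%}")  # 99.9760%
print(f"{eff_spec_high:.4%}")     # 99.9952%

# Conversely, an Infectious user is correctly prompted to quarantine
# only if BOTH tests produce true positives, so sensitivities multiply,
# lowering the strategy's "effective sensitivity".
print(f"{antigen_sens * det_sens_nominal:.1%}")  # 72.9%
```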
With the impact of specificity improvements thus diminished, improving detection sensitivity was instead what increased the number of averted infections. Importantly, antigen tests had the secondary effect of decreasing the strategy's "effective sensitivity"–the product of antigen test sensitivity (91.1%) and detection algorithm sensitivity [31].

Our earlier findings suggested that false positive notifications of potential infection were the primary cause of unnecessary quarantines and lab-based tests. Improving detection specificity was one way to decrease false positive notifications. Here, we investigated whether offering confirmatory rapid antigen tests to users with a positive notification could also help reduce unnecessary quarantines and lab-based tests (Fig 6; Table 1; Table E in S1 Text). We considered multiple scenarios, each with low (0.5%), nominal (4%), or high (12.5%) uptake, and low (14%), nominal (50%), or high (86%) adherence. We examined these scenarios with nominal detection sensitivity and specificity, and with "high" detection sensitivity and specificity (using the same definitions of "high" as above).

Adherence meaningfully impacted the achievable reduction in the burden of infection (Fig 5B). With nominal detection sensitivity and specificity, increasing adherence among participating wearable device users from 20% to 80% tripled the achieved reduction in the burden of infection, raising it from 7.2% (95% CI: 6.3–8.1%) to 22.1% (95% CI: 20.4–23.6%). However, increasing the proportion of users who comply with notifications also magnified the consequences of false positive notifications: the number of days incorrectly spent in quarantine per month per user (Fig 5C) and the demand for lab-based tests (Fig 5D) grew proportionally with adherence. These social and resource costs grew more slowly with improved detection specificity.

Fig 5. (A) Averted infections, (B) reduction in the burden of infection, (C) days incorrectly spent in quarantine per month per user, and (D) average daily demand for lab-based tests, all over the simulation period, as a function of increasing adherence. Grey dashed lines denote nominal adherence (50%). In the "High Sensitivity" and "High Specificity" scenarios, detection specificity and sensitivity are kept at their nominal values, respectively. Symbol markers are added in (C) and (D) to distinguish overlapping curves: in these charts, the "Nominal Sensitivity and Specificity" and "High Sensitivity" curves overlap, and the "High Specificity" and "High Sensitivity and Specificity" curves overlap.

Adherence to public health guidelines also affects the success of pandemic control measures. Targeted policies–for example, compensating individuals in self-isolation–could help improve compliance with public health recommendations [30]. Here, we explored the role of adherence in wearable sensor deployment strategies (Fig 5; Fig A in S1 Text). We captured adherence in one parameter to preserve model parsimony but recognize that, in practice, there is likely to be great variation in the extent to which notified users adhere to recommended next steps (Table B in S1 Text). For this reason, we chose to explore outcomes at all values of adherence–from 0%, where no users comply with any recommended next steps, to 100%, where all users comply with all recommended next steps.
We kept uptake constant at 4% and assessed multiple technology scenarios using the same definitions of "high" detection sensitivity and specificity as before. Separately, we also considered the case of partial adherence, in which non-adherent users act more cautiously because of the notification (Fig C in S1 Text).

In all technology scenarios, increasing uptake averted more infections, though with eventual diminishing returns (Fig 4A and Fig 4B). Within our estimated range of uptake (0.5% to 7.5%), and with nominal detection sensitivity and specificity, each percentage-point increase in uptake yielded an additional 3.4% (95% CI: 2.8–4.0%) reduction in the burden of infection (Fig 4B). As expected, improving detection specificity resulted in fewer averted infections when uptake was held constant; this effect was most pronounced between ~30% and ~60% uptake. The number of days incorrectly spent in quarantine per month per device user remained constant as a function of uptake but decreased from ~2.15 to ~0.45 when detection specificity was increased (Fig 4C). This ~80% decrease was consistent with our definition of "high specificity", underscoring that detection specificity directly influences the burden of incorrect quarantines on device users. The average daily demand for lab-based tests scaled linearly with uptake, but more slowly with improved detection specificity (Fig 4D).

Fig 4. (A) Averted infections, (B) reduction in the burden of infection, (C) days incorrectly spent in quarantine per month per user, and (D) average daily demand for lab-based tests, all over the simulation period, as a function of increasing uptake. Grey dashed lines denote nominal uptake (4%). In the "High Sensitivity" and "High Specificity" scenarios, detection specificity and sensitivity are kept at their nominal values, respectively. Symbol markers are added in (C) and (D) to distinguish overlapping curves: in these charts, the "Nominal Sensitivity and Specificity" and "High Sensitivity" curves overlap, and the "High Specificity" and "High Sensitivity and Specificity" curves overlap.

Ensuring that public health measures reach sufficient levels of uptake has been a continued challenge throughout the COVID-19 pandemic. Digital contact tracing and vaccination efforts have demonstrated that well-constructed policies–for example, incentivizing participation–can improve uptake of measures [28, 29]. Here, we explored the role of uptake to provide relevant context for the design of wearable sensor deployment policies (Fig 4; Fig A in S1 Text). We estimated that uptake would fall between 0.5% and 7.5% at baseline (Tables B–D in S1 Text) but chose to present outcomes at all levels of uptake (i.e., from 0% to 100%) to illustrate emergent phenomena. We also explored multiple technology scenarios, setting "high" detection sensitivity and specificity at 96.0% and 98.4%, respectively; we based these increases on the respective goals of capturing 20% more infections and reducing the false positive rate by 80% relative to nominal values. We kept adherence constant at 50%.

Increasing detection sensitivity increased the number of averted infections by prompting more Infectious users to quarantine (Fig 3A and Fig 3B). On the other hand, increasing specificity had a two-part effect. First, as specificity approached 100%, the number of days incorrectly spent in quarantine approached zero (Fig 3C); sensitivity had negligible impact on incorrect quarantines.
Second, by virtue of decreasing the number of incorrect quarantines, increasing specificity left a larger pool of Susceptible individuals; in turn, fewer infections were averted. Despite this second effect, incorrect quarantines were not central to the strategy's public health impact. In the baseline scenario above (80% detection sensitivity, 4% uptake, and 50% adherence), a 12.1% (95% CI: 11.0–13.1%) reduction in the burden of infection was still achievable with perfect detection specificity (and no incorrect quarantines). In the baseline scenario, 22.7% (95% CI: 13.1–32.5%) of averted infections could be attributed to incorrect quarantines, though this proportion decreased as sensitivity improved (Fig E in S1 Text).

After their initial release on technology platforms, health detection algorithms can be updated and improved as more real-world data are collected. However, it is often challenging to dramatically raise detection sensitivity and specificity at the same time. We explored the implications of this tradeoff (Fig 3), varying detection sensitivity and specificity while keeping uptake and adherence constant at 4% and 50%, respectively.

We observed that in a baseline scenario, 366,143 (95% CI: 333,242–396,944) infections could have been averted during Canada's second COVID-19 wave–a 15.6% (95% CI: 14.2–16.9%) reduction in the burden of infection (Fig 2A). However, the social costs were high: between ~75,000 and ~125,000 device users were incorrectly quarantining on any given day (Fig 2B). Moreover, between ~40,000 and ~65,000 additional lab-based tests were required each day (Fig 2C), corresponding to a 51.6% (95% CI: 41.1–63.6%) increase in demand relative to historical volumes; historically, ~101,000 lab-based tests were performed each day, on average, during the simulation timeframe [22, 27]. The number of individuals incorrectly in quarantine and the daily demand for lab-based tests were generally steady over time because they largely depend on the number of Susceptible device users, adherence, and detection specificity; the gradual decrease can be attributed to the flow of users into the Removed state. These findings were robust to our use of the IHME infection model (Fig B in S1 Text).

We first investigated the baseline scenario in which currently available detection algorithms are made publicly available for device users to download and use (Fig 2) [10]. Upon notification of potential presymptomatic or asymptomatic infection, users are prompted to seek a confirmatory lab-based test, quarantine while awaiting the result (nominally, for 2 days), and self-isolate until recovery if positive. We used the nominal values outlined in Table B in S1 Text, setting uptake, adherence, detection sensitivity, and detection specificity to 4%, 50%, 80%, and 92%, respectively.

Discussion

We used a counterfactual model of Canada's second COVID-19 wave to demonstrate that wearable sensors capable of detecting infections before or absent symptoms have meaningful potential to help mitigate the acute phase of a pandemic. Through continuous and non-invasive monitoring of physiological parameters, these devices can help FTTI systems identify hidden infection chains with minimal delay and without active user engagement or broad sharing of user data.
We showed that (1) deploying currently available detection algorithms could have helped reduce the acute-phase burden of infection, but with substantial social and resource costs; (2) improving detection algorithm specificity and offering confirmatory rapid antigen tests can help minimize unnecessary quarantines and lab-based tests; and (3) once false positive notifications are minimized, increasing uptake and adherence become effective strategies to scale the number of averted infections.

In theory, wearable sensor deployment reduces the burden of infection by decreasing the pool of Infectious individuals (a function of detection algorithm sensitivity). Here, we found that detection specificity played an unexpectedly large role as well, with false positive notifications of potential infection prompting unnecessary quarantines and thereby decreasing the pool of Susceptible individuals. Thus, although prioritizing uptake and adherence as part of a wearable sensor deployment strategy could avert a substantial number of infections, the unsustainable growth of the associated costs must also be considered. In a baseline scenario, without improvements to detection specificity, every user would spend over two days a month, on average, incorrectly quarantining, and ~40,000 to ~65,000 additional confirmatory lab-based tests would be required each day. The social and economic harm caused by promoting uptake or adherence alone, without improvements to detection specificity, would likely undermine public confidence in, and compliance with, a wearable-based pandemic mitigation strategy [32]. Alavi et al. found that many false positives were due to the detection algorithm identifying lifestyle-driven changes in resting heart rate (e.g., after intense exercise or alcohol consumption); accounting for these factors using more advanced algorithms may be one way to improve detection specificity [10].

We found that confirmatory antigen testing was a valuable mechanism, beyond improving detection specificity, for increasing the "effective specificity" of the strategy and decreasing the overall false positive rate. The inclusion of antigen testing decreased the number of days incorrectly spent in quarantine by ~300-fold and brought the additional demand on lab-based testing infrastructure to justifiable levels. However, even with antigen tests, improvements to detection specificity still had value. In scenarios with "high" rather than nominal detection specificity, we observed a ~4-fold reduction in days incorrectly spent in quarantine per month per user, a ~2-fold reduction in lab-based tests performed each day, and a ~5-fold reduction in antigen tests used each day.

Importantly, a strategy in which antigen tests support the deployment of wearable sensors differs notably from one involving frequent use of rapid antigen tests for diagnosis or screening [33]. On their own, broad antigen test-based screening approaches require tremendous manufacturing volumes, infrastructure, and funding [34]. Conversely, wearable sensors can non-invasively detect infections without active user engagement, reducing the effort required to participate. Further, wearable sensors may even help improve diagnostic test allocation by directing tests toward individuals with a higher pre-test probability of infection [35].

The COVID-19 pandemic's evolution has been shaped by the uptake of vaccines, the emergence of more transmissible and immune-evasive variants, and the potential for breakthrough and repeat infections [36].
Although we did not consider these factors when modelling Canada's second wave, we speculate that their effects on wearable sensor-based mitigation strategies would be driven by changes in users' physiological responses and in SARS-CoV-2 epidemiology. In particular, we hypothesize that wearable sensor-based mitigation would be affected in four major ways. First, vaccination has been found to elicit physiological responses similar to those of infection (e.g., elevated resting heart rate), and these responses might be captured by wearable sensor-based detection algorithms [10, 37]. We expect this to manifest as an increase in the incidence of false positive notifications, which we have considered in depth in our analyses related to detection specificity. However, we also speculate that vaccination-driven false positive notifications would likely be recognized as such by the user and ignored. Second, prior immunity from vaccination may attenuate the physiological responses elicited by breakthrough infections, altering detection sensitivity [38]. Although the degree of attenuation might generally be expected to depend on the VOC causing infection, as well as the individual's specific infection and vaccination history, minimal differences between physiological responses to breakthrough infections during Germany's Delta and Omicron waves have been reported [38]. From a modelling perspective, incorporating temporal changes in detection sensitivity may be an appropriate starting point for exploring this effect. Third, the onset of symptoms may occur earlier in the infectious period in individuals with pre-existing immunity than in immunologically naïve individuals [39, 40]. In these scenarios, the early onset of symptoms would itself contribute to the detection of infections earlier in the infectious period. However, we speculate that if detection algorithms retained their ability to identify presymptomatic infections, wearable sensors could further reduce the fraction of the infectious period in which users unknowingly transmit the virus–and, in turn, further decrease the burden of infection. Finally, increases in transmissibility–whether due to higher viral loads or immune evasion in VOCs–would also influence the impact of wearable sensor-based mitigation strategies by attenuating the achievable reduction in the burden of infection (Fig H in S1 Text) [3, 41–43]. Moving forward, more empirical data will be needed to develop models of wearable sensor deployment in the SARS-CoV-2 vaccine and variant era, and in turn to explore these hypotheses.

Our work has important limitations. First, we do not account for heterogeneities in wearable device use, which, in reality, is influenced by age, race, level of education, and income [44, 45]. Future analyses could more precisely address how removing a device user from the pool of Susceptible or Infectious individuals affects the epidemic trajectory based on that user's demographic and socioeconomic profile. Indeed, the COVID-19 pandemic has disproportionately impacted low-income and minority groups, while younger individuals are more likely to be super-spreaders [46–48]. Future studies could also consider policies that subsidize wearable devices, reducing the participation barrier for groups underrepresented among current device owners.
Second, we made the simplifying assumption that all users without symptoms (and no users with symptoms) could benefit from wearable-informed prompts to seek a confirmatory test and tentatively quarantine. We may be underestimating the effect size because wearable sensors also show promise in detecting symptomatic SARS-CoV-2 infection, and many symptomatic individuals did not historically self-isolate [16–18, 49, 50]. Third, we did not consider how uptake or adherence may vary with time, detection accuracy, or other factors [28, 32, 49, 51]. Finally, we did not consider how detection algorithm performance varies over the course of infection.

Using the example of COVID-19, we demonstrated the potential of wearable sensors to support FTTI systems with real-time detection of presymptomatic and asymptomatic infections, and thereby to reduce the burden of infection during a pandemic. Messaging to the public will be important to ensure that a wearable sensor-based mitigation strategy succeeds: for example, public health leaders will need to communicate the limitations of wearable sensors with respect to detecting infections and emphasize that the lack of a notification does not rule out potential infection (Fig C in S1 Text). Moving forward, it will also be important to consider how wearable sensor data can be linked with other health data, such as laboratory tests, to yield more impactful diagnoses; to address potential issues with data format and secure storage, with an eye to heightened challenges in resource-constrained settings; and to ensure that device users prompted to quarantine have appropriate supports to do so [30, 52, 53]. Ultimately, as sensor technology and detection algorithms evolve–for example, to potentially distinguish infections with SARS-CoV-2 from those with seasonal influenza–there is clear merit to further exploring how wearable sensors can be incorporated into FTTI systems to support pandemic mitigation [54].

[1] https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000100