uttered)
Discussion: “What does it all mean?”
The “discussion” section is intended to explain to your reader what your data can be interpreted to mean. As with all science, the goal for your report is simply to provide evidence that something might be true or untrue—not to prove it unequivocally. The following questions should be addressed in your “discussion” section:
Resources: Hogg, Alan. "Tutoring Scientific Writing." Sweetland Center for Writing, University of Michigan, Ann Arbor. 15 Mar. 2011. Lecture. Swan, Judith A., and George D. Gopen. "The Science of Scientific Writing." American Scientist 78 (1990): 550–558. Print. "Scientific Reports." The Writing Center, University of North Carolina, n.d. Web. 5 May 2011. http://www.unc.edu/depts/wcweb/handouts/lab_report_complete.html
It was much more accurate than primary care doctors using cognitive tests and CT scans. The findings could speed the quest for an affordable and accessible way to diagnose patients with memory problems.
By Pam Belluck
Scientists have made another major stride toward the long-sought goal of diagnosing Alzheimer’s disease with a simple blood test. On Sunday, a team of researchers reported that a blood test was significantly more accurate than doctors’ interpretation of cognitive tests and CT scans in signaling the condition.
The study, published Sunday in the journal JAMA, found that about 90 percent of the time the blood test correctly identified whether patients with memory problems had Alzheimer’s. Dementia specialists using standard methods that did not include expensive PET scans or invasive spinal taps were accurate 73 percent of the time, while primary care doctors using those methods got it right only 61 percent of the time.
“Not too long ago measuring pathology in the brain of a living human was considered just impossible,” said Dr. Jason Karlawish, a co-director of the Penn Memory Center at the University of Pennsylvania who was not involved in the research. “This study adds to the revolution that has occurred in our ability to measure what’s going on in the brain of living humans.”
The results, presented Sunday at the Alzheimer’s Association International Conference in Philadelphia, are the latest milestone in the search for affordable and accessible ways to diagnose Alzheimer’s, a disease that afflicts nearly seven million Americans and over 32 million people worldwide. Medical experts say the findings bring the field closer to a day when people might receive routine blood tests for cognitive impairment as part of primary care checkups, similar to the way they receive cholesterol tests.
“Now, we screen people with mammograms and PSA or prostate exams and other things to look for very early signs of cancer,” said Dr. Adam Boxer, a neurologist at the University of California, San Francisco, who was not involved in the study. “And I think we’re going to be doing the same thing for Alzheimer’s disease and hopefully other forms of neurodegeneration.”
In recent years, several blood tests have been developed for Alzheimer’s. They are currently used mostly to screen participants in clinical trials and by some specialists like Dr. Boxer to help pinpoint if a patient’s dementia is caused by Alzheimer’s or another condition.
Scientific Reports, volume 14, Article number: 18058 (2024)
Recent advances in AI and intelligent vehicle technology hold the promise of revolutionizing mobility and transportation through advanced driver assistance systems (ADAS). Certain cognitive factors, such as impulsivity and inhibitory control, have been shown to relate to risky driving behavior and on-road risk-taking. However, existing systems fail to leverage such factors adequately in assistive driving technologies. Varying levels of these cognitive factors could influence the effectiveness and acceptance of ADAS interfaces. We demonstrate an approach for personalizing driver interaction via driver safety interfaces that are triggered based on the inference of the driver’s latent cognitive states from their driving behavior. To accomplish this, we adopt a data-driven approach and train a recurrent neural network on a population of human drivers to infer impulsivity and inhibitory control from recent driving behavior. Using data collected from a high-fidelity vehicle motion simulator experiment, we demonstrate the ability to deduce these factors from driver behavior. We then use these inferred factors to determine instantly whether or not to engage a driver safety interface. This approach was evaluated with leave-one-out cross-validation on actual human data. Our evaluations reveal that a personalized driver safety interface that captures the cognitive profile of the driver is more effective in influencing driver behavior in yellow light zones, reducing drivers’ inclination to run through them.
Improvements in advanced driver safety assistance systems have the potential to save lives 1 , 2 . However, these safety systems could benefit from targeting the cause of individual drivers’ dangerous driving behavior, which is known to be affected by many different factors, including cognitive, social, and situational 3 , 4 , 5 . Among the cognitive factors that influence risky driving behavior are cognitive impulsivity , which is the tendency to act without thinking 6 , and inhibitory control , which is the ability to suppress goal-irrelevant stimuli and behavioral responses 7 . Risky driving has been associated with higher self-reported impulsivity 4 , 8 , 9 , 10 , 11 , and with poorer inhibitory control in relevant laboratory tasks 4 , 10 , 11 , 12 , 13 . A recent review has shown the relationship between impulsivity and speeding and other driving violations 14 . More recent work has emphasized that the relationship between impulsive processes and driving errors and violations is influenced by cognitive abilities and self-regulation 15 , 16 . Further, such effects are associated with both sensation seeking (a concept related to impulsivity) and age, with recent work demonstrating that higher sensation-seeking and younger age were predictive of the highest speed during driving on a virtual reality track 17 . These cognitive factors also influence individuals’ reactions to different types of interfaces 18 , 19 .
Although the significance of impulsivity and inhibitory control as risk factors for vehicle accidents has not yet been leveraged in ADAS interfaces, these concepts have been used to develop effective driver educational materials: Paaver et al. 20 showed that even a brief classroom-style lesson on impulsivity and driving can prevent speeding. While there are numerous driver safety interfaces available, there is a gap in the research regarding the influence of impulsivity and inhibitory control on drivers’ responses to these interfaces. More specifically, studies have not adequately explored how to tailor the deployment of these safety interfaces to individual drivers, taking into account their unique levels of impulsivity and inhibitory control. Such personalization is crucial, as it can determine the effectiveness of the interface in enhancing driver safety.
Thus, the efficacy of driver safety systems may vary due to individual differences in cognition. The design of human-machine interfaces (HMIs), with a focus on addressing specific cognitive characteristics, has the potential to enhance both their safety effectiveness and user acceptance 21 . Crucially, the ability to estimate cognitive characteristics from observed driver behavior lays the groundwork for more personalized and effective safety interventions.
Our goal is to build a driver safety system that leverages learned representations of individual drivers’ cognitive factors to personalize HMIs that result in safer driving outcomes. Such a system would allow us to fully separate the underlying reasons for personalization (i.e., the learned cognitive factors) from the specific HMI attributes that are personalized as a result of those reasons. This, in turn, allows for the deployment of highly versatile safety systems: for instance, if a new HMI is developed, it can be integrated without additional re-training of the underlying representation. The neural representations of cognitive factors enable refinement of the estimated factors, as well as deployment of personalized safety interventions, at large scale.
In this paper, we present experimental evidence for how factors such as impulsivity and inhibitory control can influence people’s responses to driver safety interfaces and how the inference of such cognitive measures enables an approach for personalizing safety interfaces. We do so by constructing a neural network model that embeds driver behavior into a latent space that captures these factors; finally, we demonstrate the embedded representation’s utility for triggering the deployment of assistive driving interfaces targeted to inhibitory control and impulsivity. To our knowledge, we are the first to demonstrate driver assistance personalization in a high-fidelity simulator.
In this paper we contribute: (1) Experimental evidence of how impulsivity and inhibitory control relate to performance under different choices of driver safety systems on a new dataset collected in a large-scale, high-fidelity driving simulator; (2) A neural network model capable of encoding individual cognitive factor differences based on recent driving behavior; and (3) A decision-making system capable of personalizing the activation of a driver safety interface based on the inferred cognitive factors.
Our work is at the intersection of two active research areas: the role of cognitive factors in understanding driving behavior, and learning approaches that capture specific latent factors for HMIs.
Cognitive factors and driving behaviors
Common approaches for assessing driving behavior involve self-report surveys 22 , ticketed speeding violations 23 , or crash records 24 . While these measurements can be good indicators of risky driving behavior, self-report metrics such as these are not always reliable 25 , contain private information, and do not lend themselves to seamless integration into preventative use with drivers. Other studies have shown that driving characteristics can be estimated by measuring reactions to predetermined unsafe events in a simulated driving task 12 .
Our work provides a comprehensive general approach (Fig. 1 ) to inferring latent cognitive factors from driving behavior logs via a neural network encoder, and uses a high-fidelity driving motion simulator in which behavior is closer to that in real-world vehicles than in lower-fidelity simulators (e.g., a bench setup with a steering wheel) (Fig. 3 c).
A conceptual overview of our framework. Latent factors embed cognitive measures from driving behavior and are used to inform the HMI choice (dashed lines). Solid lines mark the observable driving behavior and the personalized HMI.
In addition to measuring driving behavior, researchers often measure impulsivity and other behavioral and cognitive factors via tests and questionnaires 26 , 27 , 28 , 29 . However, for these cognitive factors to effectively enhance vehicle safety systems, they should be estimated in a scalable way and applied to the development of personalized assistive interfaces within vehicles. In our work, we adopt a data-driven approach to train a neural network model that estimates cognitive factors from driving behavior (as opposed to relying on tests and questionnaires) thereby lending itself to deployment at scale. This could lead to more accurate information about drivers and further lead to effective intervention design and deployment criteria.
Learning Latent Factors for Human-Machine Interfaces
Since an intelligent vehicle is a robotic system, our approach also relates to efforts in personalizing interactions between humans and robots or other machines. Prior work in machine learning for HMIs and human-robot teaming has focused on various human-robot interaction modalities such as driver monitoring, optimal shared control laws, and design of assistive robot behaviors (see e.g. 30 , 31 , 32 , 33 ). However, these approaches for human-robot interactions typically do not explicitly consider individual differences in cognitive factors and therefore fall under the category of a “one-size-fits-all” design.
The same is true for modern-day driver assistance systems such as lane-departure warnings or forward-collision warnings. Typical interventions issued by such systems depend on an individual’s state and action history and manifest as corrections of unsafe or suboptimal human actions generated from a policy learned from a desired set of behaviors required of the system 34 . Such approaches have been found to over-fit to the average-case behavior of individuals in a population, leading to incorrect inference of the human’s state and poor generalizability 35 , 36 . Given both the safety risks and the high degree of individual variation in factors like impulsivity and inhibitory control, over-fitting can have potentially dire consequences for drivers 37 . Recent work has shown that learning latent representations summarizing human behavior can improve teaming and interaction with the human. For instance, work on dialog systems 38 , recommender systems 39 , 40 , and intent recognition for products and motion 41 , 42 has demonstrated that latent representations are capable of better predicting the user’s need for a given intervention and their reaction to that intervention. We posit that using this representation as a basis for deciding whether to interact and which modes of interaction to use should improve safety over “one-size-fits-all” decision schemes.
In this paper, we explore how to effectively personalize HMIs based on people’s impulsivity and inhibitory control. We posit that latent factors such as impulsivity and inhibitory control can be inferred in an automated manner from driving behavior and can inform choices of interactions with the drivers to benefit them at a large scale.
We now proceed to describe our computational approach for encoding latent cognitive factors. The resulting neural network distills a human driver’s recent driving history down to a low-dimensional parameter space whose structure can be easily shaped via multiple cognitive measures in a semi-supervised manner. The model we use includes a context encoder whose input is a time-receding, fixed-window trajectory of driving behavior in a scenario and whose output is a low-dimensional latent vector. This latent representation is then coupled with a separate decision-making module that takes this latent vector as input and outputs a decision at each decision time-step; for instance, whether or not to present a particular HMI to the driver at the current time-step. The architecture is shown in Fig. 2 , with further details in the “ supplemental information ”. As a result of experimentation, we found that a two-dimensional latent vector provided sufficient capacity to capture relevant cognitive factors while still allowing direct interpretation of the learned trends in the representation without the possible distortions introduced by dimensionality reduction schemes (e.g., t-distributed Stochastic Neighbor Embedding 43 ).
The context encoder is represented as a long short-term memory (LSTM) recurrent neural network 44 , \(q_{\psi }(z \mid \tau )\) , and defines the probability of latent vector z given a past trajectory \(\tau\) of the driver.
The hidden layer h is fed into two linear layers that output the mean and log-variance of the latent encoding 45 .
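To make the encoder concrete, the following is a minimal PyTorch sketch of a context encoder of this form. The observation dimensionality, hidden size, and module names are illustrative assumptions rather than the authors' exact implementation; only the two-dimensional latent size follows the text.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Sketch of q_psi(z | tau): an LSTM over a fixed window of recent driving
    behavior, with linear heads producing the mean and log-variance of z."""
    def __init__(self, obs_dim=6, hidden_dim=64, latent_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, tau):
        # tau: (batch, time, obs_dim) trajectory window of driving behavior
        _, (h, _) = self.lstm(tau)          # final hidden state: (1, batch, hidden_dim)
        h = h.squeeze(0)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterization: sample z ~ N(mu, diag(exp(logvar)))
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar
```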
As driving actions do not directly relate to psychological traits, we leverage contrastive learning 46 , 47 to encourage the latent representation to conform to measured cognitive factors (we introduce the specific factors we use in the Results section). As decisions should be based on more than one cognitive factor, we consider our cognitive factor target to be a vector.
The context encoding model transforms a driver’s past driving history \(\tau\) to a latent vector z , and uses a decoder network \(p_{\theta }(a|z)\) to predict the driver’s action a at the current time-step. We set up the loss terms to encourage z to capture both the individual’s cognitive factors and reconstruction of driver actions: the factors allow the downstream decision-making module to be aware of any time-independent factors inherent to the individual driver, while the driver actions provide awareness of the behaviors in a given situation. Any scene context information present in \(\tau\) will indirectly manifest in z through \(q_{\psi }(z \mid \tau )\) . Thus, we expect a weak dependence of predicted driver action on scene context. The overall loss used to train the encoder consists of three components:
\(L_1(a, z; \theta ) = -\mathbb {E}_z \log p_{\theta }(a \vert z)\) is the expected negative log likelihood of action a under the model (reconstruction loss) induced by the conditional distribution p over z , where z characterizes driving behavior up to time t and \(\theta\) represents the parameters of the action decoder network.
\(L_2(z,y;\psi)\) , a contrastive loss supervised using a vector of cognitive factor targets y 48 . For continuous-valued cognitive measures, this loss is
where \(\mathcal {Z}\) represents a training samples batch, where each independently-sampled \(z, z' \in \mathcal {Z}\) is a \(\vert Z \vert\) -dimensional latent vector induced by the LSTM context encoder with parameters \(\psi\) , \(y_{z}\) is a vector of batch-normalized cognitive measures associated with z , \(\ell (z, z')\) is a measure associated with two vectors z and \(z'\) (which we choose as their Euclidean distance, i.e. \(\Vert z - z'\Vert\) ), and \(\epsilon\) controls the magnitude of dissimilarity of y -values in z -space, where a larger \(\epsilon\) enforces higher separation of \(\Vert z - z'\Vert\) for fixed \(\Vert y_{z} - y_{z'}\Vert\) .
\(L_3(z) = D_{KL}(q_{\psi }(z \mid \tau )\vert \mathcal {N}(0,I))\) , a Kullback-Leibler (KL)-regularization loss for the distribution of z , e.g. as in 49 , 50 . \(\mathcal {N}(0, I)\) is the unit-normal distribution of appropriate dimension.
These terms are combined into an overall training loss, \(L = \alpha _1 L_1 + \alpha _2 L_2 + \alpha _3 L_3\), where \(\alpha _1\) , \(\alpha _2\) , and \(\alpha _3\) are the respective loss coefficients.
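The paper does not reproduce the exact form of the contrastive term here, so the sketch below uses one plausible margin-based formulation consistent with the description (latent distances \(\Vert z - z'\Vert\) are pushed to grow with the distance between batch-normalized cognitive targets, with \(\epsilon\) scaling the required separation). The hinge form and the coefficient values are assumptions, not the authors' exact losses.

```python
import torch
import torch.nn.functional as F

def contrastive_factor_loss(z, y, eps=1.0):
    """One plausible L2: require latent distances to be at least eps times the
    distance between the corresponding cognitive-factor targets (hinge penalty)."""
    dz = torch.cdist(z, z)                      # pairwise ||z - z'|| within the batch
    dy = torch.cdist(y, y)                      # pairwise ||y_z - y_z'||
    return F.relu(eps * dy - dz).mean()

def training_loss(a_logp, z, mu, logvar, y, alphas=(1.0, 1.0, 0.1)):
    """L = alpha_1 L1 + alpha_2 L2 + alpha_3 L3; coefficient values are placeholders."""
    l1 = -a_logp.mean()                         # reconstruction: -E[log p_theta(a | z)]
    l2 = contrastive_factor_loss(z, y)          # cognitive-factor contrastive term
    # L3: closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )
    l3 = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    a1, a2, a3 = alphas
    return a1 * l1 + a2 * l2 + a3 * l3
```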
Overall system architecture, including context encoder, decoder for future state and action prediction, outputs of cognitive measures, and latent factors used for HMI selection and decision-making.
HMI Decision-Making: We evaluate the utility of the inferred latent factor model by coupling it with a decision rule for selecting the activation of the HMI. The decisions are defined via a simple classifier whose inputs are the inferred latent factors. The classifier is trained to optimize a criterion for HMI selection within the training data. We take this criterion to be the difference in average speed between the two conditions, with and without HMI, when the yellow light is active (averaged across trajectories for a single subject). This criterion reflects the speed reduction induced in the subject when an HMI is presented. Therefore, for each subject, we have a single regression target, and the decision maker is trained to map the latent factors inferred from that subject’s trajectory snippets around yellow light transitions to the corresponding regression target, essentially learning a many-to-one function. We use Support Vector Regression 51 with a polynomial kernel as our decision model.
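A minimal sketch of this decision module is shown below, using scikit-learn's SVR with a polynomial kernel as stated; the placeholder arrays, the per-snippet training layout, and the zero threshold for deploying the HMI are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Placeholder training data: one latent vector per trajectory snippet and the
# corresponding subject-level regression target (speed reduction with vs. without HMI).
Z_train = np.random.randn(200, 2)          # inferred latent factors (illustrative)
speed_reduction = np.random.randn(200)     # per-subject speed-reduction target, in m/s

decision_model = SVR(kernel="poly", degree=3)
decision_model.fit(Z_train, speed_reduction)

def should_deploy_hmi(z, threshold=0.0):
    """Deploy the HMI only when the predicted speed reduction exceeds the threshold,
    i.e., when the model expects the interface to slow this particular driver down."""
    return decision_model.predict(z.reshape(1, -1))[0] > threshold
```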
Our motion-simulator driving experiment was designed to address the following hypotheses:
H1: People with different levels of cognitive factors should exhibit different driving behaviors.
H2: People with different levels of cognitive factors should respond differently to HMIs.
H3: Our model should infer individual differences in cognitive factors from driving behavior data.
H4: Using our model of inferred cognitive factor differences to choose HMIs should result in lower speeds when passing through traffic lights.
The goal of our experiments is to validate H1–H4 by performing the following: (1) constructing candidate HMIs using a simple hand-crafted decision rule to time the deployment of the HMI, alerting the driver when approaching a traffic light in order to influence their driving behavior (specifics can be found in Fig. 3 e); (2) data collection of unassisted, baseline driving behaviors from a variety of individual drivers in a simulated road setting involving traffic lights; (3) data collection of driving behaviors with the HMI assistance schemes; (4) utilizing the collected data for training a model that encodes cognitive traits, as measured by cognitive assessments, from driving behavior.
In post-hoc, retrospective analysis of the data, we conducted: (5) evaluation of the HMI effect on driver behavior on approach to traffic lights, (6) evaluation of our encoding of cognitive traits with respect to cognitive assessments, and (7) evaluation of individuals’ behavioral responses to the provided HMIs when using the models. Due to the logistical constraints associated with including more participants in our study, we designed our experiments to use a single pool of subjects to address each of tasks (1)–(7). Hence, we conducted a randomized study involving each candidate HMI without using the cognitive inference model. Data collected from the study was used to train a neural network-based cognitive inference model. The model was validated retrospectively using a leave-one-out cross-validation scheme with respect to a chosen behavior statistic (mean speed during the yellow light phase), averaging over trials in which the experimental condition matched the model’s decision.
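The following is a minimal sketch of this leave-one-out protocol; the data-structure fields (snippets, trials, condition, mean yellow-light speed) and the helper functions train_fn and decide_fn are hypothetical names standing in for the training and decision steps described above.

```python
import numpy as np

def evaluate_loocv(subjects, train_fn, decide_fn):
    """Leave-one-out evaluation: train on all other subjects, choose an HMI condition
    for the held-out subject from their inferred latent factors, and score their mean
    yellow-light speed over trials whose experimental condition matches that choice."""
    per_subject_scores = []
    for held_out in subjects:
        train_set = [s for s in subjects if s is not held_out]
        model = train_fn(train_set)                          # encoder + decision module
        decision = decide_fn(model, held_out["snippets"])    # chosen HMI condition
        matching = [t["mean_yellow_speed"] for t in held_out["trials"]
                    if t["condition"] == decision]
        if matching:
            per_subject_scores.append(np.mean(matching))
    return float(np.mean(per_subject_scores))
```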
Thirty-nine Northern California-based drivers aged 18 and older ( Mean age = 49, Female = 16, Non-binary = 1 ) were recruited to participate in our study via Fieldwork, a global market research firm. Participants were only invited to participate if they held an active driver’s license, were not pregnant, and were vaccinated for COVID-19. Further details can be found in the recruitment section in the “ supplemental information ”.
Half of the participants were between the ages of 18–22; the other half were over the age of 65. We chose to recruit these two age groups because previous research has shown significant differences in their levels of impulsivity, inhibitory control, and risk propensity 52 . Additionally, these two populations are at heightened risk of vehicle accidents 11 . We opted to start with these groups to determine if there is a detectable signal. While age-related differences are not discussed in this paper, additional analyses can be found in the “ supplemental information ”. We did not find any significant differences between these two populations in our analysis.
This research was reviewed, approved, and done according to the human-subject guidelines set by the Western Institutional Review Board-Copernicus Group (WCG) IRB protocol number 20221727. Participants filled out a consent form prior to participation and were compensated $150 for their two-hour participation.
Participants were excluded from the analysis if they did not complete the study. Of the 39 participants, 7 participants did not complete the driving trials due to motion sickness. Of the 32 remaining participants, the data of 5 participants was excluded from the analysis due to technical difficulties with the motion simulator during testing. The final sample size was therefore 27 individuals.
As illustrated in Fig. 3 d, participants drove on a looped road with traffic lights that randomly changed from green to yellow at varying times of arrival of the vehicle at the traffic light, inducing a dilemma zone 53 . Each loop consisted of eight traffic lights, four of which would turn yellow. The driving time during the laps, summed over all participants, was 540 min, which has been shown to be sufficient for driver behavior estimation in similar driving conditions 54 . We collected four driving trials (laps) where participants interacted with different prototype driver safety interfaces and two baseline driving laps without the interfaces.
Participants completed the driving portion of the task using our vehicle motion simulator (See Fig. 3 c 55 , 56 , 57 ). The motion simulator has a cabin with two car seats, a steering wheel, and pedals that resemble the front half of a vehicle. The cabin is supported by a 6 DOF Motion Platform 58 and actuated based on the simulated vehicle movement in a virtual traffic environment. The cabin is surrounded by a projection screen that shows the virtual traffic environment. The CARLA simulator controls the virtual traffic and renders high-fidelity visuals by Unreal Engine 59 . A control booth behind the cabin allows the experimenter to control the scenarios and monitor participant safety. Communication between the experimenter and participant is enabled through a headset that is connected to a microphone and speakers in the cabin.
Two types of warning interfaces were used: (a) transverse markings, projected on the road on which the car was driving; and (b) a 2D yellow circle, projected as if it appeared in a heads-up display. Figure 3 e shows the virtual scenario and both interface types. The first two laps had no interfaces. The purpose of the first baseline lap was for the participant to get acclimated to the simulator and get a feel for how it drives; this lap was not included in the analysis. For each interface, we also manipulated a trigger condition that determined whether or not it was displayed. Each interface was displayed either when the vehicle approached the traffic light (185 meters away) or when the upcoming traffic light changed from green to yellow.
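For concreteness, a toy restatement of the two trigger rules is given below; the argument names and the string encoding of light states are assumptions, while the 185 m threshold and the green-to-yellow transition come from the description above.

```python
def hmi_triggered(distance_to_light_m, light_state, prev_light_state, trigger="distance"):
    """Hand-crafted trigger rules for the warning interfaces:
    'distance' -> show the HMI once the vehicle is within 185 m of the traffic light;
    'light'    -> show the HMI at the moment the light turns from green to yellow."""
    if trigger == "distance":
        return distance_to_light_m <= 185.0
    if trigger == "light":
        return prev_light_state == "green" and light_state == "yellow"
    return False
```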
Impulsivity: To assess participants’ impulsivity 60 , we used the BIS/BAS scale and the UPPS-P scale. The BIS/BAS was used to measure both the behavioral inhibition system (BIS) and the behavioral activation system (BAS), while the UPPS-P was used to account for different facets of impulsivity 61 .
Inhibitory Control: We used the Go-No Go task 62 and the Stop Signal task 63 , 64 to measure response inhibition. Stop Signal task measures were as described by Verbruggen et al. 64 .
Self-reported Driving Behavior: To assess participants’ road errors and violations, we used the Manchester Driver Behavior Questionnaire (DBQ) 22 . It includes four sub-scales that measure driver errors (such as failing to check your mirrors), lapses (such as turning the wrong blinker on), aggressive violations (such as racing other vehicles on the street), and ordinary violations (such as ignoring the speed limit on the highway).
Driving Behavior in the Motion Simulator: We also captured driving behavior as participants drove in the motion simulator. We recorded their driving speed, acceleration, and response to yellow traffic lights.
( a ) Participant overview. ( b ) Set of surveys used to measure latent cognitive factors. ( c ) An illustration of the driving motion simulator used for data collection. ( d ) Driving task course overview. For each lap, four of the lights would transition from green to yellow to red; these were randomly selected for each trial. ( e ) Set of HMIs presented in the driving task. Participants would complete two baseline laps to start. The first baseline lap was considered practice to get the driver acclimated to the simulator and was not included in analysis. After the second baseline lap, the four HMI trials were presented to the driver in randomized order.
The effect of different HMI types on the mean speed during the lap. “D” refers to a distance-based trigger of the HMI, where the HMI is presented when the vehicle enters within 185 meters of the traffic light, and “L” refers to a light-based trigger, where the HMI is presented at the moment the traffic light turns from green to yellow. Each box plot displays the median, interquartile range (IQR), and outliers for the mean speed during these conditions.
We analyzed the relationship between various aspects of impulsivity, inhibitory control, driving behavior, and responses to HMIs designed to encourage drivers to slow down. We then analyzed the performance of our model in inferring participants’ cognitive factors and predicting whether they should interact with an HMI to support driving goals.
To understand the relationship between the different cognitive factors and driving behavior when reacting to the yellow lights, we conducted a Bayesian correlation analysis using the JASP software 65 . For the analysis, we used the data from all of the driving laps, including the ones with HMIs presented. A table with all of the Bayesian correlations can be found in the “ supplemental information ” document. As shown in that table, a number of significant correlations emerged.
The self-reported ordinary violations (errors such as speeding or staying close to another vehicle you are behind) measured in the DBQ 22 (mean = 12.778, sd = 1.819) were positively correlated with the mean speed at the yellow light (r = 0.4, BF10 = 9693) and the maximum speed when the yellow light was active (r = 0.54, BF10 = \(1.141\times 10^9\) ), indicating that drivers who reported higher levels of ordinary violations from the DBQ (mean = 13.556, sd = 4.348) were more likely to speed through yellow lights in this task.
We found several correlations between the BIS/BAS measures and driving behavior. In particular, BAS Fun Seeking (mean = 11.704, sd = 2.165) was positively correlated with the mean speed at the active yellow light (r = 0.473, BF10 = \(1.700\times 10^6\) ) and the maximum speed at the yellow light (r = 0.31, BF10 = 99.19). These data suggest that individuals who have a higher desire for new and exciting experiences may be more likely to take risks while driving, such as speeding through yellow lights. BAS Reward Responsiveness (mean = 16.741, sd = 1.740) was also positively correlated with the maximum speed at an active yellow light (r = 0.29, BF10 = 39.63).
Similar to the BIS/BAS measures, various correlations emerged using the UPPS-P subscales. For instance, UPPS-P Positive Urgency (mean = 6.630) was positively correlated with the maximum speed at an active yellow light (r = 0.28, BF10 = 26.93), and UPPS-P Sensation Seeking (mean = 11.000, sd = 3.150) was positively correlated with the mean speed at the active yellow light (r = 0.29, BF10 = 42.89) and the maximum speed at the active yellow light (r = 0.47, BF10 = \(1.540\times 10^6\) ). These results are consistent with the results found for BAS Fun Seeking (mean = 11.704, sd = 2.165) and BAS Reward Responsiveness (mean = 16.741, sd = 1.740), which provides further evidence that people who desire fun, new, and thrilling experiences are more likely to speed and take risks when reacting to traffic lights.
Multiple correlations also emerged using the measures from the Stop Signal task. For instance, the reaction time on go trials with a response (goRT_all, mean = 618.148, sd = 170.594) was negatively correlated with the mean speed at the yellow light (r = \(-\) 0.38, BF10 = 2933). This suggests that drivers with longer reaction times may be more likely to slow down at yellow lights rather than speeding through them.
Finally, we also found numerous correlations using the Go/No-Go measures. Among the correlations, the average response time (gonogo_average_rt, mean = 382.981, sd = 49.262) was negatively correlated with the mean speed at the yellow light (r = \(-\) 0.46, BF10 = 352747) and the maximum speed at the yellow light (r = \(-\) 0.40, BF10 = 9205), which is consistent with the reaction time results from the Stop Signal task (e.g. goRT_all).
Interaction plots showing how the presence of the HMI interacted with different measures. The lines represent different levels of the measures: +1 SD (High), Mean, and -1 SD (Low). From left to right, the measures are: ( a ) BAS Fun Seeking: Motivation to find novel rewards spontaneously; ( b ) SSRT (Stop Signal Reaction Time): Ability to inhibit a response; ( c ) UPPS-P Positive Urgency: Tendency to act impulsively due to positive affect; ( d ) DBQ Ordinary Violations: Self-reported ordinary driving violations.
We fitted separate linear mixed models to predict each driving behavior measure based on interface condition (Table 1 ). All conditions demonstrated a statistically significant and negative effect on the mean speed during the lap, as depicted in Fig. 4 .
To further understand how different factors affect drivers’ responses to HMI, we conducted a linear mixed-model (LMM) analysis, using multiple LMMs to examine the effects of various factors, including the presence or absence of HMI ( HMI_presence ) and their potential interactions. Participant ID was used as a random effect to account for individual differences. The lmer function in the lme4 R package 66 was employed for predicting mean speed when yellow lights were active based on these variables as \(\text {MeanSpeed} \sim \text {HMI\_presence} \times \text {Measure} + (1 \mid \text {Participant})\), where \((1 \mid \text {Participant})\) denotes the random intercept. The models were fitted using the Restricted Maximum Likelihood (REML) estimation method, and the t-tests utilized Satterthwaite’s approximation method.
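The original analysis was run in R with lme4; an approximately equivalent model can be sketched in Python with statsmodels as below. The column names and input file are assumptions, and statsmodels does not provide Satterthwaite-adjusted t-tests, so this should be read as an approximation of the reported analysis rather than a reproduction of it.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per trial with the mean yellow-light speed, an HMI presence
# indicator, one cognitive measure (e.g., BAS Fun Seeking), and the participant ID.
df = pd.read_csv("driving_trials.csv")   # hypothetical file name

model = smf.mixedlm("mean_yellow_speed ~ hmi_presence * measure",
                    data=df, groups=df["participant"])
result = model.fit(reml=True)            # REML estimation, as in the original analysis
print(result.summary())
```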
For detailed statistical outcomes, please refer to Table 1 . For a visual representation of some interaction effects, please see Fig. 5 , which complements the textual analysis. Here, we highlight some key findings that were noted to have a strong effect:
BIS/BAS : The BAS Fun Seeking subscale showed a significant main effect of HMI presence ( \(\beta = -11.14\) , \(SE = 4.64\) , \(t = -2.4\) , \(p = 0.018\) ) and a significant interaction with BAS Fun Seeking ( \(\beta = 0.9\) , \(SE = 0.39\) , \(t = 2.31\) , \(p = 0.023\) ), suggesting that individuals with higher BAS Fun Seeking scores drove faster in the presence of HMI compared to those with lower scores. The fixed effects accounted for 22.5% of the variance ( \(R^2_m = 0.225\) ), while the combined fixed and random effects accounted for 75% ( \(R^2_c = 0.75\) ).
UPPS-P : The Positive Urgency subscale revealed a significant main effect of HMI presence ( \(\beta = -8.71\) , \(SE = 2.66\) , \(t = -3.28\) , \(p = 0.0014\) ) and a significant interaction with Positive Urgency ( \(\beta = 1.23\) , \(SE = 0.38\) , \(t = 3.22\) , \(p = 0.0017\) ), indicating that individuals with higher Positive Urgency scores drove faster in the presence of HMI. The fixed effects explained 2.2% of the variance ( \(R^2_m = 0.022\) ), while the combined fixed and random effects explained 76.4% ( \(R^2_c = 0.764\) ).
Go/No-Go Measures : The Go/No-Go Average Response Time measure showed no significant main effect of HMI presence ( \(\beta = -0.85\) , \(SE = 6.88\) , \(t = -0.124\) , \(p = 0.9019\) ), but a significant effect of response time ( \(\beta = -0.072\) , \(SE = 0.028\) , \(t = -2.57\) , \(p = 0.0139\) ), indicating that longer response times were associated with slower driving speeds. The interaction between HMI presence and response time was not significant ( \(\beta = 0.00017\) , \(SE = 0.018\) , \(t = 0.010\) , \(p = 0.9922\) ). The fixed effects explained 20.4% of the variance ( \(R^2_m = 0.204\) ), while the combined fixed and random effects explained 74.0% ( \(R^2_c = 0.740\) ).
Stop Signal Measures : The SSRT measure showed no significant main effects of HMI presence ( \(\beta = 3.69\) , \(SE = 2.29\) , \(t = 1.61\) , \(p = 0.1094\) ) or SSRT ( \(\beta = 0.0145\) , \(SE = 0.0131\) , \(t = 1.11\) , \(p = 0.2724\) ). However, a significant interaction between HMI presence and SSRT was observed ( \(\beta = -0.0147\) , \(SE = 0.0073\) , \(t = -2.01\) , \(p = 0.0471\) ), suggesting that individuals with higher SSRTs drove slower in the presence of HMI compared to those with lower SSRTs. The fixed effects explained 1.0% of the variance ( \(R^2_m = 0.010\) ), while the combined fixed and random effects explained 75.1% ( \(R^2_c = 0.751\) ).
Manchester DBQ : The DBQ Ordinary Violations subscale showed a significant main effect of HMI presence ( \(\beta = -6.99\) , \(SE = 2.76\) , \(t = -2.53\) , \(p = 0.0128\) ) and a significant interaction with Ordinary Violations from the DBQ ( \(\beta = 0.473\) , \(SE = 0.194\) , \(t = 2.44\) , \(p = 0.0164\) ), suggesting that individuals with higher Ordinary Violations on the DBQ scores drove faster in the presence of HMI. The fixed effects explained 16.4% of the variance ( \(R^2_m = 0.164\) ), while the combined fixed and random effects explained 75.2% ( \(R^2_c = 0.752\) ).
Given the various measures collected in the study, we used stepwise regression to select the most important features for training our neural-network-based cognitive factor inference model. We combined forward selection, starting with an empty model and adding the predictor that produced the largest increase in model fit, with backward elimination, removing the predictor whose removal produced the smallest decrease in model fit, until no further improvement was observed. Following this process, the stepwise regression yielded a set of four cognitive factors to be used in the model: UPPS-P Positive Urgency, BAS Fun Seeking, goRT_all, and DBQ Ordinary Violations.
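A toy version of such a combined forward/backward procedure is sketched below; it uses p-value thresholds on an ordinary least squares fit as the selection criterion, which is a simplification of the fit-improvement criterion described above, and the thresholds and outcome variable are assumptions.

```python
import statsmodels.api as sm

def stepwise_select(X, y, threshold_in=0.01, threshold_out=0.05):
    """Combined forward-selection / backward-elimination sketch.
    X: DataFrame of candidate cognitive measures; y: a driving-behavior outcome."""
    selected = []
    changed = True
    while changed:
        changed = False
        # Forward step: add the remaining predictor with the smallest significant p-value.
        remaining = [c for c in X.columns if c not in selected]
        pvals = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().pvalues[c]
                 for c in remaining}
        if pvals and min(pvals.values()) < threshold_in:
            selected.append(min(pvals, key=pvals.get))
            changed = True
        # Backward step: drop the selected predictor that is no longer significant.
        if selected:
            fit = sm.OLS(y, sm.add_constant(X[selected])).fit()
            worst = fit.pvalues.drop("const").idxmax()
            if fit.pvalues[worst] > threshold_out:
                selected.remove(worst)
                changed = True
    return selected
```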
We adopt the learning approach described above to infer cognitive factors based on the subjects’ driving during the experiment. As mentioned earlier, we use the same data to perform training and to evaluate model inference. In order to conduct the evaluation fairly, we perform leave-one-out cross-validation over the 27 subjects, averaging model performance over 10 random seeds, and capture properties of the embedding as well as the performance of the resulting decision criterion. We include a complete description of the training and evaluation steps and further findings in the “ supplemental information ”. The distribution of the inferred latent factors is shown in Fig. 6 a. Qualitatively, we observe that fairly strong clustering has emerged for each of the cognitive factors, which indicates the effectiveness of the contrastive learning approach. To quantify this further, we show in Table 2 the fit between the distribution of the selected cognitive measures and that of the inferred latent factors. Since no direct or linear mapping is assumed in contrastive learning, we probed the uniformity of the inferred embedding using the KL distance between the distributions of the cognitive measures and the inferred factors. The results demonstrate the model’s ability to infer several variables of interest centered around impulsivity and inhibitory control.
We next proceed to probe the efficacy of the resulting latent space to inform HMI adaptation to the subjects. We use leave-one-out cross-validation to evaluate the decision classifier based on the inferred latent factors. From the test subject’s data, we extract trajectory snippets around the yellow light transitions. The segment of the trajectory before the transition is fed into the context encoder to generate an inferred latent factor. The decision classifier subsequently consumes this latent factor to produce the HMI decision. In order to evaluate the interface selection decisions made by the decision classifier, we compare them to a fixed interface choice chosen optimally across all participants (the “one-size-fits-all” approach). We then measure the participants’ behavior in terms of our chosen behavior statistic (mean speed when the yellow light was active) for the selected HMI choice (the classifier’s decision) for the withheld subject, averaged over the trials in which the experimental condition matched the decision classifier’s output (thereby treating the experiment as a within-subject randomized trial study).
We measure the performance of the decision scheme with three metrics: the mean yellow light speed, reporting the mean ( \(\mu\) ) and standard deviation ( \(\sigma\) ) aggregated over individuals, along with Cohen’s \(\kappa\) and balanced accuracy scores, which measure the accuracy of the interface selection scheme under an unbalanced dataset. When leveraging the latent factors to decide on an HMI choice, we achieve a balanced accuracy of 56% and a Cohen’s \(\kappa\) of 0.145 in selecting the optimal HMI for the specific driver, as shown in Table 3 , resulting in a reduction of 0.59 m/s in the mean speed throughout the yellow phase of the traffic light. Additionally, in Fig. 6 b (left), we color-code each of the latents generated from the trajectory snippets according to the decision module’s predictions. In conjunction with Fig. 6 b (right), which shows the trajectory snippets for which deployment of the HMI was the decision, we see that the average speed after the yellow light transitions is lower, showing the effectiveness of the HMI decision scheme. The color distributions in the different plots demonstrate how the embedding space captures both the driver traits as captured in the questionnaires (a), and the chosen interface decision and resulting driver speed at the yellow light interval (b).
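Both selection-quality metrics are available in scikit-learn; the short sketch below shows how they could be computed, with placeholder labels in place of the study's actual per-subject optimal and predicted HMI conditions.

```python
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

# y_true: the HMI condition that actually produced the largest speed reduction for each
# held-out subject; y_pred: the condition chosen from that subject's inferred latent factors.
y_true = [1, 0, 1, 1, 0, 1]   # placeholder labels
y_pred = [1, 0, 0, 1, 0, 1]   # placeholder predictions

print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
```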
Example embedding and decision module result based on training data from a 27-subject fold; ( a ) Embedding of participants’ past history trajectories with contrastive loss based on four factors: goRT_all, UPPS-P Positive Urgency, DBQ Ordinary Violations, and BAS Fun Seeking. Colors mark low (red) to high (blue) measures; ( b ) Trained decision boundary (left) and average speed during the yellow light phase conditioned on the decision scheme (right), plotted on the latent embedding space \(z_0\) , \(z_1\) . Each point represents a unique time window over which the inference was run.
Despite efforts to include a large sample in our study, our sample size was relatively small. Some of this is due to participant motion sickness, which at times was severe enough that participation had to be ended early. We highlight that this is due to various logistical limitations, such as the high costs involved in running a high-fidelity motion simulator study, COVID-related restrictions on recruiting human subjects and the need to implement in-lab social distancing measures, and the technological setup involved with a high-fidelity simulator. We also reiterate that some exclusion of participants was necessary, given our prioritization of a sound dataset over a larger one. While our sample size is in line with what others use in driving simulator studies 67 , 68 , or machine-learning driving behavior research 69 , 70 , it is still a relatively small population. We limited our experiment to older and younger participants, expecting a larger effect between these two groups. Although the effect we found did not appear related to age, we did find an effect independent of age. Future work should expand to a larger and more representative sample to examine the generalization of these findings. Since our analysis shows promise, a follow-up examining the algorithm’s decisions in real-time would be warranted.
As traffic accidents and violations frequently occur due to high impulsivity and poor inhibitory control, it is important to create driver safety systems that can overcome these cognitive limitations on a personalized level. In this work, we present an approach to infer an individual’s latent cognitive factors and then use them to decide when it is or is not appropriate to show a driver safety interface, depending on the driver’s inferred impulsivity and inhibitory control.
To create this approach, we conducted a driving study using a high-fidelity motion simulator to understand how cognitive factors affect people’s responses to driver safety interfaces. Our study revealed that the prototype interfaces had differing effects on drivers based on their level of impulsivity, as indicated by multiple self-reported and behavioral metrics. In particular, we observed that drivers with lower levels of impulsivity tended to slow down when exposed to the interfaces, while drivers with higher levels of impulsivity exhibited the opposite response. Indeed, previous research has shown that impulsive drivers are more likely to run yellow lights 71 , although yellow lights were designed to warn drivers that they may need to slow down. Our study is the first to show that vehicle safety interfaces may also lead to unintended driving behavior responses for some drivers based on their impulsivity.
Leveraging the data collected in the study, we trained an LSTM network that can infer cognitive traits and, based on these, decide whether or not to employ a driver safety interface. The results show that our decision-making scheme can infer latent factors that are compact, correlate with cognitive measures associated with impulsivity, and can be used effectively to select driver interfaces to improve driver behavior, resulting in lower speed in the dilemma zone of yellow lights. Although previous work has shown the relationship between cognitive factors such as impulsivity and driving behavior, this is, to our knowledge, the first model proposed and examined for making driver safety recommendations based on cognitive factors inferred from the driver’s behavior.
The suggested approach lends itself to fleet-scale, online, in-vehicle optimization of the interaction with the driver across the population. If deployed in such a manner, overall improvements in driver safety interfaces may lead to safer roads overall.
Data and material will be made available upon request by emailing the corresponding authors.
Singh, S. Critical reasons for crashes investigated in the national motor vehicle crash causation survey. Tech. Rep. DOT HS 812 115 (2015).
Bareiss, M., Scanlon, J., Sherony, R. & Gabler, H. C. Crash and injury prevention estimates for intersection driver assistance systems in left turn across path/opposite direction crashes in the united states. Traffic Inj. Prev. 20 , S133–S138 (2019).
Department of Transportation, U. S. NHTSA releases 2019 crash fatality data (2019).
Walshe, E. A., Ward McIntosh, C., Romer, D. & Winston, F. K. Executive function capacities, negative driving behavior and crashes in young drivers. Int. J. Environ. Res. Public Health 14 , 1314 (2017).
Albert, D., Chein, J. & Steinberg, L. The teenage brain: Peer influences on adolescent decision making. Curr. Dir. Psychol. Sci. 22 , 114–120 (2013).
Barati, F., Pourshahbaz, A., Nosratabadi, M. & Mohammadi, Z. The role of impulsivity, attentional bias and decision-making styles in risky driving behaviors. Int. J. High Risk Behav. Addict. 9 , 1-e98001 (2020).
Munakata, Y. et al. A unified framework for inhibitory control. Trends Cogn. Sci. 15 , 453–459 (2011).
Constantinou, E., Panayiotou, G., Konstantinou, N., Loutsiou-Ladd, A. & Kapardis, A. Risky and aggressive driving in young adults: Personality matters. Accid. Anal. Prev. 43 , 1323–1331 (2011).
Dahlen, E. R., Martin, R. C., Ragan, K. & Kuhlman, M. M. Driving anger, sensation seeking, impulsiveness, and boredom proneness in the prediction of unsafe driving. Accid. Anal. Prev. 37 , 341–348 (2005).
Hayashi, Y., Foreman, A. M., Friedel, J. E. & Wirth, O. Executive function and dangerous driving behaviors in young drivers. Transp. Res. Part F Traffic Psychol. Behav. 52 , 51–61 (2018).
National Research Council et al. Preventing Teen Motor Crashes: Contributions from the Behavioral and Social Sciences: Workshop Report (National Academies Press, 2007).
Hatfield, J., Williamson, A., Kehoe, E. J. & Prabhakharan, P. An examination of the relationship between measures of impulsivity and risky simulated driving amongst young drivers. Accid. Anal. Prev. 103 , 37–43 (2017).
Jongen, E. M. M., Brijs, K., Komlos, M., Brijs, T. & Wets, G. Inhibitory control and reward predict risky driving in young novice drivers—a simulator study. Proced. Soc. Behav. Sci. 20 , 604–612 (2011).
Sârbescu, P. & Rusu, A. Personality predictors of speeding: Anger-aggression and impulsive-sensation seeking. A systematic review and meta-analysis. J. Safety Res. 77 , 86–98 (2021).
Memarian, M., Lazuras, L., Rowe, R. & Karimipour, M. Impulsivity and self-regulation: A dual-process model of risky driving in young drivers in Iran. Accid. Anal. Prevent. 187 , 107055 (2023).
Lazuras, L., Rowe, R., Poulter, D. R., Powell, P. A. & Ypsilanti, A. Impulsive and self-regulatory processes in risky driving among young people: A dual process model. Front. Psychol. 10 , 439067 (2019).
Ju, U., Williamson, J. & Wallraven, C. Predicting driving speed from psychological metrics in a virtual reality car driving simulation. Sci. Rep. 12 , 10044 (2022).
McDonald, A., Carney, C. & McGehee, D. V. Vehicle owners’ experiences with and reactions to advanced driver assistance systems (2018).
Montgomery, J., Kusano, K. D. & Gabler, H. C. Age and gender differences in time to collision at braking from the 100-car naturalistic driving study. Traffic Inj. Prev. 15 (Suppl 1), S15-20 (2014).
Paaver, M. et al. Preventing risky driving: A novel and efficient brief intervention focusing on acknowledgement of personal risk factors. Accid. Anal. Prevent. 50 , 430–437 (2013).
Horberry, T., Regan, M. A. & Stevens, A. Driver Acceptance of New Technology: Theory, Measurement and Optimisation (Crc Press, 2018).
Af Wåhlberg, A., Dorn, L. & Kline, T. The manchester driver behaviour questionnaire as a predictor of road traffic accidents. Theor. Issues Ergon. Sci. 12 , 66–86 (2011).
O’Brien, F. & Gormley, M. The contribution of inhibitory deficits to dangerous driving among young people. Accid. Anal. Prev. 51 , 238–242 (2013).
Chang, Z., Lichtenstein, P., D’Onofrio, B. M., Sjölander, A. & Larsson, H. Serious transport accidents in adults with attention-deficit/hyperactivity disorder and the effect of medication: A population-based study. JAMA Psychiat. 71 , 319–325 (2014).
Gemming, L., Jiang, Y., Swinburn, B., Utter, J. & Mhurchu, C. N. Under-reporting remains a key limitation of self-reported dietary intake: An analysis of the 2008/09 New Zealand adult nutrition survey. Eur. J. Clin. Nutr. 68 , 259–264 (2014).
Dougherty, D. M., Mathias, C. W., Marsh, D. M. & Jagar, A. A. Laboratory behavioral measures of impulsivity. Behav. Res. Methods 37 , 82–90 (2005).
Lipszyc, J. & Schachar, R. Inhibitory control and psychopathology: A meta-analysis of studies using the stop signal task. J. Int. Neuropsychol. Soc. 16 , 1064–1076 (2010).
Maack, D. J. & Ebesutani, C. A re-examination of the BIS/BAS scales: Evidence for BIS and bas as unidimensional scales. Int. J. Methods Psychiatr. Res. 27 , e1612 (2018).
Cyders, M. A., Littlefield, A. K., Coffey, S. & Karyadi, K. A. Examination of a short English version of the UPPS-P impulsive behavior scale. Addict. Behav. 39 , 1372–1376 (2014).
Kaplan, S., Guvensan, M. A., Yavuz, A. G. & Karalurt, Y. Driver behavior analysis for safe driving: A survey. IEEE Trans. Intell. Transp. Syst. 16 , 3017–3032 (2015).
Schaff, C. & Walter, M. R. Residual policy learning for shared autonomy. In Robotics Science and Systems (2020). arXiv:2004.05097 .
Losey, D. P. et al. Learning latent actions to control assistive robots. Auton. Robots 46 , 115–147 (2022).
Backman, K., Kulić, D. & Chung, H. Reinforcement learning for shared autonomy drone landings (2022). arXiv:2202.02927 .
Nidamanuri, J., Nibhanupudi, C., Assfalg, R. & Venkataraman, H. A progressive review: Emerging technologies for ADAS driven solutions. IEEE Trans. Intell. Veh. 7 , 326–341 (2022).
Xie, A., Losey, D. P., Tolsma, R., Finn, C. & Sadigh, D. Learning latent representations to influence multi-agent interaction. In Conf. on Robot Learning (2020). arXiv:2011.06619 .
Tsividis, P. A. et al. Human-Level reinforcement learning through Theory-Based modeling, exploration, and planning. arXiv (2021). arXiv:2107.12544 .
Mazza, G. L. et al. Correlation database of 60 cross-disciplinary surveys and cognitive tasks assessing self-regulation. J. Pers. Assess. 103 , 238–245 (2021).
Yang, R., Chen, J. & Narasimhan, K. Improving dialog systems for negotiation with personality modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , 681–693 (Association for Computational Linguistics, Online, 2021).
Song, K. et al. Recommendation vs sentiment analysis: A text-driven latent factor model for rating prediction with cold-start awareness. In Int. Joint Conf. on Artificial Intelligence , Research Collection School Of Computing and Information Systems, 2744 (AAAI Press, 2017).
Yu, Z., Lian, J., Mahmoody, A., Liu, G. & Xie, X. Adaptive user modeling with long and short-term preferences for personalized recommendation. In Int. Joint Conf. on Artificial Intelligence (California, 2019).
Tanjim, M. M. et al. Attentive sequential models of latent intent for next item recommendation. In Proceedings of The Web Conference 2020 , WWW ’20, 2528–2534 (Association for Computing Machinery, New York, NY, USA, 2020).
Rudenko, A. et al. Human motion trajectory prediction: A survey. IJRR (2019).
Van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9 (2008).
Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9 , 1735–1780 (1997).
Kingma, D. P. & Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013).
Gutmann, M. & Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 297–304.
Khosla, P. et al. Supervised contrastive learning. Adv. Neural. Inf. Process. Syst. 33 , 18661–18673 (2020).
Rai, N., Adeli, E., Lee, K.-H., Gaidon, A. & Niebles, J. C. Cocon: Cooperative-contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 3384–3393 (2021).
Kingma, D. P. & Welling, M. Auto-Encoding variational bayes. In Int. Conf. on Learning Representations (2014).
Rezende, D. J., Mohamed, S. & Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Int. Conf. on Machine Learning (2014).
Chang, C.-C. & Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2 , 1–27 (2011).
Jonah, B. A. Age differences in risky driving. Health Educ. Res. 5 , 139–149 (1990).
Zhang, Y., Fu, C. & Hu, L. Yellow light dilemma zone researches: A review. J. Traffic Transp. Eng. (English Edition) 1 , 338–352 (2014).
Deo, N. & Trivedi, M. M. Multi-Modal trajectory prediction of surrounding vehicles with maneuver based LSTMs. In IVS (2018).
Best, A., Anderson, J. & Patrikalakis, A. Driver-in-the-loop simulation for guardian and chauffeur (2022).
Schrum, M. L., Sumner, E., Gombolay, M. C. & Best, A. Maveric: A data-driven approach to personalized autonomous driving. Trans. Rob. 40 , 1952–1965. https://doi.org/10.1109/TRO.2024.3359543 (2024).
Karagulle, R., Ozay, N., Arechiga, N., DeCastro, J. & Best, A. Incorporating logic in online preference learning for safe personalization of autonomous vehicles. 1–11, https://doi.org/10.1145/3641513.3650129 (2024).
Motion Systems. 6 DOF Platform. https://motionsystems.eu/ (2023).
Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A. & Koltun, V. CARLA: An open urban driving simulator. In Conference on Robot Learning, 1–16 (PMLR, 2017).
Carver, C. S. & White, T. L. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: the bis/bas scales. J. Pers. Soc. Psychol. 67 , 319 (1994).
Whiteside, S. P., Lynam, D. R., Miller, J. D. & Reynolds, S. K. Validation of the UPPS impulsive behaviour scale: A four-factor model of impulsivity. Eur. J. Pers. 19 , 559–574 (2005).
Gomez, P., Ratcliff, R. & Perea, M. A model of the go/no-go task. J. Exp. Psychol. Gen. 136 , 389 (2007).
Lappin, J. S. & Eriksen, C. W. Use of a delayed signal to stop a visual reaction-time response. J. Exp. Psychol. 72 , 805 (1966).
Verbruggen, F. et al. A consensus guide to capturing the ability to inhibit actions and impulsive behaviors in the stop-signal task. Elife 8 , e46323 (2019).
Team, J. Jasp (version 0.18.2)[computer software] (2024).
Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67 , 1–48. https://doi.org/10.18637/jss.v067.i01 (2015).
Megías, A., Di Stasi, L. L., Maldonado, A., Catena, A. & Cándido, A. Emotion-laden stimuli influence our reactions to traffic lights. Transport. Res. F: Traffic Psychol. Behav. 22 , 96–103 (2014).
Woide, M., Miller, L., Colley, M., Damm, N. & Baumann, M. I’ve got the power: Exploring the impact of cooperative systems on driver-initiated takeovers and trust in automated vehicles. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications , 123–135 (2023).
Scally, K. et al. Impact of external cue validity on driving performance in Parkinson’s disease. Parkinsons Dis. 2011 , 159621 (2011).
PubMed PubMed Central Google Scholar
Zhang, Y. & Kumada, T. Automatic detection of mind wandering in a simulated driving task with behavioral measures. PLoS One 13 , e0207092 (2018).
Chein, J., Albert, D., O’Brien, L., Uckert, K. & Steinberg, L. Peers increase adolescent risk taking by enhancing activity in the brain’s reward circuitry. Dev. Sci. 14 , F1-10 (2011).
Download references
This work has been funded by Toyota Research Institute. All authors work for and receive compensation from Toyota Research Institute.
These authors contributed equally: Emily S. Sumner, Jonathan DeCastro, Jean Costa, Deepak E. Gopinath, Everlyne Kimani, Tiffany Chen and Guy Rosman.
Author affiliations:
Toyota Research Institute, Los Altos, CA, USA: Emily S. Sumner, Jonathan DeCastro, Jean Costa, Deepak E. Gopinath, Everlyne Kimani, Shabnam Hakimi, Allison Morgan, Andrew Best, Hieu Nguyen, Daniel J. Brooks, Bassam ul Haq, Andrew Patrikalakis, Hiroshi Yasuda, Kate Sieck, Avinash Balachandran, Tiffany L. Chen & Guy Rosman
Cambridge, MA, USA: Emily S. Sumner, Jonathan DeCastro, Deepak E. Gopinath, Daniel J. Brooks & Guy Rosman
E.S., J.D., J.C., D.G., E.K., S.H., A.M., A.B., D.B., H.Y., K.S., T.L.C., A.B., and G.R. designed the research. E.S., J.D., J.C., E.G., E.K., A.M., A.B., H.N., D.B., and H.Y. performed the research. J.D., D.G., H.N., B.H., A.P., and D.B. designed analytic tools. J.D., D.G., J.C., and E.K. analyzed the data. E.S., J.D., J.C., D.G., E.K., A.M., H.Y., T.L.C. and G.R. wrote the paper. All authors reviewed the manuscript.
Correspondence to Emily S. Sumner or Jonathan DeCastro.
Competing interests
The authors declare no competing interests.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary figures 1–7 accompany this article (not reproduced here).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article
Sumner, E.S., DeCastro, J., Costa, J. et al. Personalizing driver safety interfaces via driver cognitive factors inference. Sci Rep 14, 18058 (2024). https://doi.org/10.1038/s41598-024-65144-8
Received: 18 January 2024
Accepted: 17 June 2024
Published: 05 August 2024
DOI: https://doi.org/10.1038/s41598-024-65144-8
How science REALLY works...
Scientific ideas can be tested through both experiments and other sorts of studies. Both provide important sources of evidence.
Misconception: Experiments are a necessary part of the scientific process. Without an experiment, a study is not rigorous or scientific.
Correction: Scientific testing involves more than just experiments. There are many valid ways to test scientific ideas, and the appropriate method depends on many factors.
Experiments are one way to test some sorts of ideas, but science doesn’t live on experiment alone. There are many other ways to scientifically test ideas too…
An experiment is a test that involves manipulating some factor in a system in order to see how that affects the outcome. Ideally, experiments also involve controlling as many other factors as possible in order to isolate the cause of the experimental results. Experiments can be simple tests set up in a lab, like rolling a ball down different inclines to see how the angle affects the rolling time. But large-scale experiments can also be performed out in the real world. For example, classic experiments in ecology involved removing a species of barnacles from intertidal rocks on the Scottish coast to see how that would affect other barnacle species over time. But whether they are large- or small-scale, performed in the lab or in the field, and require years or mere milliseconds to complete, experiments are distinguished from other sorts of tests by their reliance on the intentional manipulation of some factors and, ideally, the control of others.
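To make the "manipulate one factor, hold the rest constant" idea concrete, here is a minimal, illustrative sketch in Python of the incline example above. The ramp length, the angles, and the idealized rolling-sphere model are assumptions chosen for illustration; they are not taken from the original text.

import math

# Toy model of the incline experiment: the incline angle is the manipulated
# factor; the ramp length and the ball are held constant across trials.
G = 9.81           # gravitational acceleration, m/s^2
RAMP_LENGTH = 2.0  # meters, held fixed for every trial

def rolling_time(angle_deg):
    # For a uniform solid sphere rolling without slipping, the acceleration
    # down the incline is a = (5/7) * g * sin(theta), so t = sqrt(2 * L / a).
    a = (5.0 / 7.0) * G * math.sin(math.radians(angle_deg))
    return math.sqrt(2.0 * RAMP_LENGTH / a)

for angle in (10, 20, 30, 40):  # the only factor we vary between trials
    print(f"angle = {angle:2d} deg -> rolling time = {rolling_time(angle):.2f} s")

Changing the angle while fixing everything else is what lets the experimenter attribute differences in rolling time to the angle rather than to some other factor.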
Some aspects of the natural world aren’t manipulable, and hence can’t be studied with direct experiments. We simply can’t go back in time and introduce finches to three separate island groups to see how they evolve. We can’t move the planets around to see how their orbits would be altered by a new configuration. And we can’t cause volcanoes to erupt in order to investigate how they affect the ecosystems that surround them. Other times, it would be unethical to perform an experiment – for example, to investigate the effect of maternal alcohol consumption on babies.
In such cases, we can still figure out what expectations a hypothesis generates and make observations to test the idea. For example, we can’t actually experiment on distant stars in order to test ideas about which nuclear reactions occur within them, but we can test those ideas by building sensors that allow us to observe what forms of radiation the stars emit. Similarly, we can’t perform experiments to test ideas about what T. rex ate, but we can test those ideas by making detailed observations of their fossilized teeth and comparing those to the teeth of modern organisms that eat different foods. And of course, many ideas can be tested by both experiment and through straightforward observation. For example, we can test ideas about how chlorofluorocarbons interact with the ozone layer by performing chemical experiments in a lab and through observational studies of the atmosphere.
In some cases, we get lucky and are able to take advantage of a natural experiment. Natural experiments occur when the universe, in a sense, performs an experiment for us — that is, the relevant experimental set-up already exists, and all we have to do is observe the results. For example, researchers in England wanted to know if a program to improve the health and well-being of young children and their families was effective. Enrolling some children in the program and randomly excluding others to create a controlled experiment would be unethical. However, for other reasons, the program was rolled out in some geographic areas, but not in others. This set up a natural experiment that the researchers could take advantage of by comparing outcomes in families who received the program with outcomes in similar families who did not receive the program. Analyzing the results of this natural experiment suggested that the program helped children develop socially, encouraged families to build better learning environments for their kids, and discouraged poor parenting.
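The analytical core of such a natural experiment is a comparison between the group that happened to receive the program and a similar group that did not. Here is a minimal sketch in Python with made-up numbers; the English study's actual data and outcome measures are not reproduced here.

import statistics

# Illustrative outcome scores for families in areas where the program was
# rolled out versus similar families in areas where it was not.
program_area_scores    = [72, 68, 75, 80, 71, 77, 69, 74]
comparison_area_scores = [65, 70, 66, 72, 63, 68, 71, 64]

diff = statistics.mean(program_area_scores) - statistics.mean(comparison_area_scores)
print(f"Program areas average {diff:.1f} points higher than comparison areas")

# Because the groups were not randomly assigned, a real analysis would also
# adjust for pre-existing differences between areas (income, baseline scores,
# and so on) before attributing the gap to the program.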
To learn how a natural experiment provides support for the theory of general relativity, take an advanced side trip to Illuminating relativity: Experimenting with the stars.
What happens when you can’t do an experiment? There are plenty of other ways to test scientific ideas. To see how observational studies factor into the process of science, check out these stories:
The logic of scientific arguments
Digging into data
Oxygen spillover from RuO2 to MoO3 enhances activity and durability of RuO2 for acidic oxygen evolution
The trade-off between activity and durability of acidic oxygen evolution reaction (OER) catalysts is a key concern in electrocatalysis. RuO2 delivers good activity but poor stability, because surface ruthenium species are over-oxidized and then leached. Herein, we report an oxygen spillover strategy: RuO2/MoO3 catalysts with abundant, intimate interfaces allow the reactive *O intermediate to spill over from RuO2 to MoO3, suppressing over-oxidation and dissolution of RuO2 and delivering both high activity and high stability in Ru-based electrocatalysts. The RuO2/MoO3 catalysts exhibited a low overpotential of 167 mV at 10 mA cm−2 and negligible degradation of OER performance in 0.5 M H2SO4 over 300 h. Experimental evidence (in situ Raman spectroscopy, cyclic voltammetry analysis, operando Fourier transform infrared spectroscopy, etc.) and theoretical calculations demonstrated that oxygen spills over from RuO2 to MoO3 and that the lattice oxygen of MoO3, rather than that of RuO2, subsequently participates in the release of oxygen, the generation of oxygen vacancies and the rehabilitation of lattice oxygen during acidic OER. This study offers oxygen spillover as an approach to resolving the activity-stability dilemma of Ru-based OER electrocatalysts.
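For readers unfamiliar with this figure of merit, the reported overpotential can be unpacked with the standard definition; the 1.23 V equilibrium OER potential below is textbook background, not a value taken from the abstract:

% Overpotential at 10 mA cm^-2, measured against the equilibrium OER potential
% (1.23 V vs. RHE under standard conditions):
\[
  \eta_{10} = E_{\mathrm{applied}} - E^{0}_{\mathrm{O_2/H_2O}}
  \quad\Rightarrow\quad
  E_{\mathrm{applied}} \approx 1.23\,\mathrm{V} + 0.167\,\mathrm{V} \approx 1.40\,\mathrm{V}\ \text{vs. RHE}
\]

In other words, the catalyst sustains 10 mA cm−2 at roughly 1.40 V vs. RHE, and the stability claim is that this operating point barely drifts over 300 h in 0.5 M H2SO4.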
W. Gou, S. Zhang, Y. Wang, X. Tan, L. Liao, Z. Qi, M. Xie, Y. Ma, Y. Su and Y. Qu, Energy Environ. Sci., 2024, Accepted Manuscript, DOI: 10.1039/D4EE02549K
Team develops two nasal sprays -- an immune activator and a new vaccine -- to prevent virus transmission.
A team of researchers, led by the University of Houston, has discovered two new ways of preventing and treating respiratory viruses. In back-to-back papers in Nature Communications, the team, from the lab of Navin Varadarajan, M.D. Anderson Professor of William A. Brookshire Chemical and Biomolecular Engineering, reports the development and validation of NanoSTING, a nasal spray that acts as a broad-spectrum immune activator for controlling infection by multiple respiratory viruses, and of NanoSTING-SN, a pan-coronavirus nasal vaccine that can protect against infection and disease caused by all members of the coronavirus family.
NanoSTING is a special formula that uses tiny fat droplets to deliver an immune-boosting ingredient called cGAMP. This formula helps the body's cells stay on high alert to prevent attack from respiratory viruses.
"Using multiple models, the team demonstrated that a single treatment with NanoSTING not only protects against pathogenic strains of SARS-CoV-2 but also prevents transmission of highly transmissible variants like the Omicron variants," reports Varadarajan. "Delivery of NanoSTING to the nose ensures that the immune system is activated in the nasal compartment and this in turn prevents infection from viruses."
As the recent COVID-19 pandemic illustrated, the development of off-the-shelf treatments that counteract respiratory viruses is a largely unsolved problem with a huge impact on human lives.
"Our results showed that intranasal delivery of NanoSTING, is capable of eliciting beneficial type I and type III interferon responses that are associated with immune protection and antiviral benefit," reports first author and postdoctoral associate, Ankita Leekha.
The authors further show that NanoSTING can protect against both Tamiflu-sensitive and Tamiflu-resistant strains of influenza, underscoring its potential as a broad-spectrum therapeutic.
"The ability to activate the innate immune system presents an attractive route to armoring humans against multiple respiratory viruses, viral variants and also minimizing transmission to vulnerable people," said Leekha. "The advantage of NanoSTING is that only one dose is required unlike the antivirals like Tamiflu that require 10 doses."
The mechanism of action of NanoSTING is complementary to vaccines, monoclonal antibodies and antivirals, the authors noted.
NanoSTING-SN
Despite the successful implementation of multiple vaccines against SARS-CoV-2, these vaccines need constant updates as the virus evolves, and the current generation offers only limited protection against transmission of SARS-CoV-2.
Enter NanoSTING-SN, a multi-antigen, intranasal vaccine, that eliminates virus replication in both the lungs and the nostrils and has the ability to protect against multiple coronaviruses and variants.
"Using multiple preclinical models, the team demonstrated that the vaccine candidate protects the primary host from disease when challenged with highly pathogenic variants. Significantly, the vaccine also prevents transmission of highly transmissible variants like the Omicron variants to vaccine-naïve hosts," reports Varadarajan.
The authors further show that the nasal vaccine was 100% effective at preventing transmission of the Omicron VOCs to unvaccinated hosts.
"The ability to protect against multiple coronaviruses and variants provides the exciting potential towards a universal coronavirus vaccine," said Leekha. "The ability to prevent infections and transmission might finally end this cycle of onward transmission and viral evolution in immunocompromised people."
The research was conducted by a collaborative team at UH including Xinli Liu of the College of Pharmacy and Vallabh E. Das of the College of Optometry, along with Brett L. Hurst of Utah State University, with consultation from AuraVax Therapeutics, a spinoff from Varadarajan's Single Cell Lab at UH that is developing NanoSTING.
Funding for the studies was provided by NIH (R01GM143243), Owens Foundation, and AuraVax Therapeutics.
Story Source:
Materials provided by University of Houston. Original written by Laurie Fickman. Note: Content may be edited for style and length.
The illicit synthetic opioid industry is built on surprisingly simple chemistry. Here’s the science behind fentanyl, and how underworld “cooks” put it to work.
By DAISY CHUNG, LAURA GOTTESDIENER and DRAZEN JORGIC
Filed July 25, 2024, 9 a.m. GMT
Fentanyl Chemistry 101
Fentanyl is a synthetic drug. That means it is not derived from plants, as marijuana and cocaine are, but is made entirely from chemicals.
Fentanyl can be easy to make using compounds known as “precursors.” These are ready-made building blocks created from common industrial chemicals. Certain types of precursors are particularly prized by illicit fentanyl producers because they function as shortcuts to making the finished product. One senior U.S. administration official compared it to using “premixed brownie batter” versus trying to whip up a batch from scratch.
To make this harder for criminals to pull off, governments around the world strictly regulate a few of these key precursors. The U.S. government controls a couple of them as tightly as cocaine, methamphetamine and even finished fentanyl itself.
So, unscrupulous chemical sellers and illicit fentanyl producers have resorted to some creative chemistry to get around these strictures.
Once you understand these chemistry tricks, it is easy to see why illicit fentanyl producers have a big advantage. Every time a chemical is regulated, they can simply shift to an alternative to evade law enforcement.
It isn’t just the simple chemistry that assists illicit producers. Their supply chain is simple, too. Illicit fentanyl producers receive many of their chemicals the same way ordinary online shoppers receive all sorts of legal merchandise: by mail from China. A Reuters investigation published today reveals how easy it is to buy these chemicals .
Fentanyl is so potent, and each tablet contains such a tiny dose, that just a small amount of precursor chemicals can make a massive amount of illicit pills.
This means sellers can hide these chemicals in small boxes, often with false shipping labels.
Reuters purchased a dozen chemicals that independent chemists said could be used to make fentanyl. Many of these substances arrived in packages that listed the contents as cheap consumer goods: a doorknob, an adapter, hair accessories, typewriter parts.
With millions of packages flying around the world daily, authorities are hard-pressed to find the ones containing fentanyl chemicals.
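A rough sense of scale shows why such small parcels are enough; the per-pill dose below is an assumed round figure used only for illustration, not a number reported in the article:

% Back-of-the-envelope sketch, assuming on the order of 2 mg of fentanyl per
% counterfeit pill:
\[
  \frac{1\ \mathrm{kg}}{2\ \mathrm{mg\ per\ pill}}
  = \frac{1{,}000{,}000\ \mathrm{mg}}{2\ \mathrm{mg\ per\ pill}}
  = 500{,}000\ \mathrm{pills}
\]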
Initial processing
Once illicit fentanyl makers get their hands on the necessary precursors, they can synthesize fentanyl in crude labs in less than a day.
A reporter traveled in February to Mexico’s Sinaloa state, home of the powerful Sinaloa Cartel, to speak with a freelance fentanyl producer about his craft. He operated in a poor neighborhood on the edge of the state capital Culiacán, an area controlled by the cartel that’s dotted with stash houses. Lookouts clutching two-way radios stood by the side of the dirt road leading to the house.
He said whipping up the drug was as easy as “making chicken soup.”
This cook, who left school at age 12, got his start as an assistant to another producer. Fentanyl recipes are prized assets, he said. His mentor was stingy with information and forbade him from taking notes. But within six months the apprentice had memorized all the steps and went into business for himself. He said he sourced his chemicals from local brokers, who took orders on WhatsApp and delivered within hours. He’s since exited the trade due to threats from the cartel chieftains, who have barred freelance producers from manufacturing fentanyl in Sinaloa.
Virtually all the illicit fentanyl trafficked to the U.S. is produced in Mexico, according to U.S. authorities. Historically, the state of Sinaloa has been the epicenter of production, though crime syndicates in other regions of Mexico have entered the trade too. Traffickers in Sinaloa operate open-air labs in rural areas such as forests or remote ranches. They’ve also set up ventilated laboratories inside apartments and houses in cities such as Culiacán.
The most common way of making illicit fentanyl at the moment is known as the one-pot Gupta method. It's named after an Indian scientist, Dr. Pradeep Kumar Gupta, who helped develop a streamlined process for synthesizing medical-grade fentanyl, an analgesic used in operating rooms worldwide. Makers of street fentanyl have put their own spins on the technique. However, the name has stuck. (Gupta couldn't be reached for comment.)
Gupta's original method requires just three steps. The whole process takes place at room temperature, and no specialized lab equipment is required.
His technique could also be used for the synthesis of thousands of different types of fentanyl analogs.
The Sinaloa cook told Reuters how he made fentanyl. His description of the process indicates that he was able to take a shortcut and start with Step 3, according to Dr. Andrea Holmes, a chemistry professor at Doane University.
That’s because one of the chemicals the cook sourced from local brokers was something he called “El 400.” Holmes, who reviewed the cook’s process at Reuters’ request, said “El 400” is likely the immediate precursor 4-ANPP.
While that substance is tightly regulated internationally, some illegal producers can still find ways to obtain it, and thereby skip the first two synthesis steps in the three-step version of the Gupta method.
On the ground with a cook
Reuters depicted how the cook performed Step 3 to yield fentanyl; the step-by-step graphic is not reproduced here. (Reuters is withholding the names of some of the chemicals he used, to avoid providing detailed instructions and other information that could aid in synthesizing the drug.)
Post-production
The resulting paste is dried, ground into a fine powder, then carefully weighed and packaged into 1 kg bags for transport to a post-production laboratory.
In 2023, U.S. authorities seized nearly 116 million fentanyl pills, according to a National Institutes of Health-backed research paper published in May.
But hundreds of millions more likely ended up on American streets.
In a 2023 indictment targeting the sons of the jailed Sinaloa cartel kingpin, Joaquin “El Chapo” Guzman, U.S. officials estimated that $1,000 worth of fentanyl precursors can yield profits that are up to 800 times their original investment. With such economic incentives, said a U.S. federal investigator, few expect the flow of fentanyl to U.S. streets to stop any time soon.
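Taken at face value, the indictment's multiplier implies the following upper-bound return on a single purchase (simple arithmetic on the figures quoted above):

\[
  \$1{,}000 \times 800 = \$800{,}000\ \text{in potential profit}
\]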
Fentanyl Express: Deadly Chemistry
By Daisy Chung, Laura Gottesdiener and Drazen Jorgic
Additional reporting by Kristina Cooke
Graphics by Daisy Chung
Edited by Feilding Cage and Marla Dickerson
Sources
Dr. Andrea Holmes, professor of chemistry at Doane University in Nebraska
Dr. Alex J. Krotulski, director of toxicology and chemistry at the Center for Forensic Science Research and Education
The DEA’s Fentanyl Profiling Program
International Journal of Drug Policy
The International Narcotics Control Board