
Open Access

Peer-reviewed

Research Article

Does mathematics training lead to better logical thinking and reasoning? A cross-sectional assessment from students to professors

Clio Cresswell: School of Mathematics and Statistics, The University of Sydney, Sydney, Australia. Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Writing – original draft, Writing – review & editing.

Craig P. Speelman: School of Arts and Humanities, Edith Cowan University, Joondalup, Australia. Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing.

* E-mail: [email protected]

  • Published: July 29, 2020
  • https://doi.org/10.1371/journal.pone.0236153


Mathematics is often promoted as endowing those who study it with transferable skills such as an ability to think logically and critically, or improved investigative skills, resourcefulness and creativity in problem solving. However, there is scant evidence to back up such claims. This project tested participants with increasing levels of mathematics training on 11 well-studied rational and logical reasoning tasks aggregated from various psychological studies. These tasks, which included the Cognitive Reflection Test and the Wason Selection Task, are of particular interest as they have typically and reliably eluded participants in all studies, and results have been uncorrelated with general intelligence, education levels and other demographic information. The results of this study revealed that, in general, the greater the mathematics training of the participant, the more tasks were completed correctly, and that performance on some tasks was also associated with performance on others not traditionally associated with them. A ceiling effect also emerged. The work is discussed from the viewpoint of adding to the platform from which to approach the greater, and more scientifically elusive, question: are any skills associated with mathematics training innate, or do they arise from skills transfer?

Citation: Cresswell C, Speelman CP (2020) Does mathematics training lead to better logical thinking and reasoning? A cross-sectional assessment from students to professors. PLoS ONE 15(7): e0236153. https://doi.org/10.1371/journal.pone.0236153

Editor: Jérôme Prado, French National Center for Scientific Research (CNRS) & University of Lyon, FRANCE

Received: January 13, 2020; Accepted: June 30, 2020; Published: July 29, 2020

Copyright: © 2020 Cresswell, Speelman. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: The authors received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Mathematics is often promoted as endowing those who study it with a number of broad thinking skills, such as an ability to think logically, analytically, critically and abstractly, and a capacity to weigh evidence with impartiality. This view of mathematics as providing transferable skills can be found across educational institutions, governments and corporations worldwide, and it is material to the place of mathematics in curricula.

Consider the UK government's commissioned inquiry into mathematics education, "Making Mathematics Count", which asserts that "mathematical training disciplines the mind, develops logical and critical reasoning, and develops analytical and problem-solving skills to a high degree" [1 p11]. The Australian Mathematical Sciences Institute states very broadly in its policy document "Vision for a Maths Nation" that "Not only is mathematics the enabling discipline, it has a vital productive role planning and protecting our well-being" (emphasis in original) [2]. In Canada, British Columbia's new 2016 K-9 curriculum expressly mentions as part of its "Goals and Rationale": "The Mathematics program of study is designed to develop deep mathematical understanding and fluency, logical reasoning, analytical thought, and creative thinking." [3]. Universities, too, often make such specific claims with respect to their teaching programs. "Mathematics and statistics will help you to think logically and clearly, and apply a range of problem-solving strategies" is claimed by the School of Mathematical Sciences at Monash University, Australia [4]. The School of Mathematics and Statistics at The University of Sydney, Australia, directly attributes skills to particular course objectives and outcomes, including "enhance your problem-solving skills" in first year [5], "develop logical thinking" in second year (a statement, in fact, drafted by the lead author) [6], and "be fluent in analysing and constructing logical arguments" in third year [7]. The University of Cambridge's Faculty of Mathematics, UK, provides a dedicated document, "Transferable Skills in the Mathematical Tripos", as part of its undergraduate mathematics course information, which again lists "analytic ability; creativity; initiative; logical and methodical reasoning; persistence" [8].

In contrast, psychological research, which has been empirically investigating the transferability of skills since the early 1900s, points in quite the opposite direction: reasoning skills appear to be highly domain specific [9]. Support for claims that studying mathematics engenders more than specific mathematics knowledge is therefore highly pertinent, and yet it is largely absent. The 2014 Centre for Curriculum Redesign (CCR) four-part paper "Mathematics for the 21st Century: What Should Students Learn?" concludes in its fourth paper, titled "Does mathematics education enhance higher-order thinking skills?", with a call to action: "… there is not sufficient evidence to conclude that mathematics enhances higher order cognitive functions. The CCR calls for a much stronger cognitive psychology and neuroscience research base to be developed on the effects of studying mathematics" [10].

Inglis and Simpson [11], addressing this very issue, examined the performance of first-year undergraduate students from a high-ranking UK university mathematics department on the "Four Cards Problem" thinking task, also known as the Wason Selection Task. It is stated as follows.

Each of the following cards has a letter on one side and a number on the other.

[Figure: the four cards presented in the task, showing D, K, 3 and 7.]

Here is a rule: “if a card has a D on one side, then it has a 3 on the other”. Your task is to select all those cards, but only those cards, which you would have to turn over in order to find out whether the rule is true or false. Which cards would you select?

This task involves understanding conditional inference, namely understanding the rule "If P then Q" and thereby deducing the answer "P and not Q", or "D and 7". Such logical deduction presents as a good candidate for testing a potential ability of the mathematically trained. The task has also been substantially investigated in the psychology of reasoning [12 p8], revealing across a wide range of publications that only around 10% of the general population reach the correct result. The predominant mistake is to pick "D and 3"; in the original study by Wason [13], it is suggested that 65% of people picked this. This poor success rate, along with a standard mistake, has fuelled interest in the task, as well as attempts to understand why it occurs. A prevailing theory is the so-named matching bias effect: the tendency to concentrate disproportionately on items specifically mentioned in the situation, as opposed to reasoning according to logical rules.
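The deduction can be made mechanical: a card needs turning exactly when some possible hidden face would falsify the rule. A minimal Python sketch of this brute-force check, assuming the standard card set D, K, 3 and 7:

```python
# Wason Selection Task: test the rule "if a card has a D on one side,
# then it has a 3 on the other" over the card set D, K, 3, 7.
# A card must be turned over exactly when some possible hidden face
# could make that card violate the rule.

LETTERS = ["D", "K"]   # possible letter faces
NUMBERS = [3, 7]       # possible number faces (7 stands in for "any non-3")

def violates(letter, number):
    # The rule is broken only by a D paired with a non-3.
    return letter == "D" and number != 3

def must_turn(visible):
    if visible in LETTERS:
        # The hidden side is a number: turn if some number would violate.
        return any(violates(visible, n) for n in NUMBERS)
    # The hidden side is a letter: turn if some letter would violate.
    return any(violates(l, visible) for l in LETTERS)

cards = ["D", "K", 3, 7]
print([c for c in cards if must_turn(c)])  # → ['D', 7]
```

Enumerating the hidden possibilities returns exactly the "P and not Q" cards, D and 7; the commonly chosen 3 drops out because no hidden letter can make it violate the rule.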

Inglis and Simpson's results distinguished mathematically trained individuals on this task. The participants were under time constraint, and 13% of the first-year undergraduate mathematics students sampled reached the correct response, compared to 4% of the non-mathematics (arts) students who were included. Of note also was that 24% of the mathematics students, as opposed to 45% of the non-mathematics students, chose the standard mistake. The study thus revealed that mathematically trained individuals were significantly less affected by the matching bias effect on this problem than individuals without mathematics training. However, the achievement of the mathematically trained group was still far from masterful, and their greater propensity for non-standard mistakes compared with non-mathematically trained people is suggestive. Mathematical training appears to engender a different thinking style, but it remains unclear what the difference is.

Inglis, Simpson and colleagues followed up their results with a number of studies concentrated on conditional inference in general [14, 15]. A justification for this single investigatory pathway is that if transfer of knowledge is present, something subtle to test for in the first place, a key consideration should be the generalisation of learning rather than the application of skills learned in one context to another (where experimenter bias in the choice of contexts is more likely to be an issue). For this they typically used sixteen "if P then Q" comprehension tasks, and their samples across a number of studies have included 16-year-old pre-university mathematics students (from England and Cyprus), mathematics honours students in their first year of undergraduate university study, third-year university mathematics students, and associated control groups. The studies have encompassed controls for general intelligence and thinking disposition prior to training, as well as follow-ups of up to two years to address the issue of causation. The consistent thinking pattern that has emerged is a tendency of the mathematical groups towards a greater likelihood of rejecting the invalid denial-of-the-antecedent and affirmation-of-the-consequent inferences. But alongside this, and as validated by a second separate study, the English mathematics group actually became less likely to endorse the valid modus tollens inference. So again, mathematical training appears to engender a different thinking style, but there are subtleties and it remains unclear what the exact difference is.

This project was designed to broaden the search on the notion that mathematics training leads to increased reasoning skills. We focused on a range of reasoning problems considered in psychological research to be particularly insightful into decision making, critical thinking and logical deduction, distinguished by the fact that the general population typically struggles to answer them correctly. An Australian sample adds diversity to the current enquiries, which have been European focussed. Furthermore, in an effort to identify the impact of mathematics training through a possible gradation effect, individuals with different levels of mathematics training were tested.

Well-studied thinking tasks from a variety of psychological studies were chosen. Their descriptions, associated success rates and other pertinent details follow. All were chosen because the correct answer typically eludes participants in favour of a standard mistake.

The three-item Cognitive Reflection Test (CRT) was used as introduced by Frederick [16]. This test was devised in line with the theory that there are two general types of cognitive activity: one that operates quickly and without reflection, and another that requires not only conscious thought and effort, but also an ability to reflect on one's own cognition, including a step of suppressing the first type in order to reach the answer. The three items in the test each invite an incorrect "gut" response, and further cognitive skill is deemed necessary to reach the correct answer (although see [17] for evidence that correct responses can result from "intuition", which could be related to intelligence [18]).

Lily pads

In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Widgets

If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Bat and ball

A bat and a ball cost $1.10 in total. The bat costs a dollar more than the ball. How much does the ball cost?

The solutions are: 47 days for the Lily Pads problem, 5 minutes for the Widgets problem and 5 cents for the Bat and Ball problem. The intuitive, but wrong, answers are 24 days, 100 minutes and 10 cents, respectively. These wrong answers are attributed to participants becoming so focused on the numbers that they ignore the exponential growth pattern in the Lily Pads problem, merely complete a pattern in numbers in the Widgets problem, and neglect the relationship "more than" in the Bat and Ball problem [19]. The original study by Frederick [16] provides a composite measure of performance on the three items, with only 17% of those studied (n = 3428) reaching the perfect score. The CRT has since been studied extensively [19–21]. Research using the CRT tends not to report performance on the individual items of the test, but rather a composite measure of performance. Attridge and Inglis [22] used the CRT as a test of the thinking disposition of mathematics students, as one way of disentangling filtering according to prior thinking styles from transfer of knowledge in successful problem solving. They repeat-tested 16-year-old pre-university mathematics students and English literature students without mathematics subjects at a one-year interval and found no difference between the groups.
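The arithmetic behind each solution is a single step, which can be checked directly (a sketch using only the numbers stated in the problems):

```python
# Bat and Ball: ball + (ball + 1.00) = 1.10, so the ball costs 5 cents.
ball = (1.10 - 1.00) / 2
print(round(ball, 2))  # 0.05

# Widgets: 5 machines / 5 widgets / 5 minutes means each machine makes
# one widget in 5 minutes; 100 machines working in parallel therefore
# make 100 widgets in the same 5 minutes.
minutes = 5 * (100 / 100)
print(minutes)  # 5.0

# Lily Pads: the patch doubles daily and is full on day 48,
# so it was half full one doubling earlier, on day 47.
half_day = 48 - 1
print(half_day)  # 47
```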

Three problems were included that test the ability to reason about probability. All three were originally discussed by Kahneman and Tversky [23], with the typically poor performance on them explained by participants relying not on probability knowledge, but on a short-cut method of thinking known as the representativeness heuristic. In the late 1980s, Richard Nisbett and colleagues showed that graduate-level training in statistics, while not revealing any improvement in logical reasoning, did correlate with higher-quality statistical answers [24]. Their studies led in particular to the conclusion that comprehension of what is known as the law of large numbers did improve with training. The first of our next three problems targeted this law directly.

Hospitals

  • (a). the larger hospital
  • (b). the smaller hospital
  • (c). about the same (that is, within 5 percent of each other)

Kahneman and Tversky [ 23 ] reported that, of 50 participants, 12 chose (a), 10 chose (b), and 28 chose (c). The correct answer is (b), for the reason that small samples are more likely to exhibit extreme events than large samples from the same population. The larger the sample, the more likely it will exhibit characteristics of the parent population, such as the proportion of boys to girls. However, people tend to discount or be unaware of this feature of sampling statistics, which Kahneman and Tversky refer to as the law of large numbers. Instead, according to Kahneman and Tversky, people tend to adhere to a fallacious law of small numbers, where even small samples are expected to exhibit properties of the parent population, as illustrated by the proportion of participants choosing the answer (c) in their 1972 study. Such thinking reflects use of the representativeness heuristic, whereby someone will judge the likelihood of an uncertain event based on how similar it is to characteristics of the parent population of events.
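The reasoning behind answer (b) can be made exact with a binomial calculation. The sketch below assumes hypothetical daily counts of 15 births at the smaller hospital and 45 at the larger (the problem text with its exact figures is not reproduced above) and computes the probability that more than 60% of a day's births are boys:

```python
from math import comb

def p_more_than_60pct_boys(n):
    # Exact P(more than 60% of n births are boys), with boys and girls
    # equally likely. 60% of n is kept in integer arithmetic.
    threshold = 3 * n // 5
    return sum(comb(n, k) for k in range(threshold + 1, n + 1)) / 2**n

small = p_more_than_60pct_boys(15)   # smaller hospital
large = p_more_than_60pct_boys(45)   # larger hospital
print(f"small: {small:.3f}, large: {large:.3f}")
assert small > large  # small samples stray from 50/50 more often
```

Under these assumed counts the smaller hospital exceeds the 60% threshold on roughly twice as many days as the larger one, which is exactly the sampling-variability point the representativeness heuristic misses.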

Birth order

  • (a). What is your estimate of the number of families surveyed in which the exact order of births was BGBBBB?
  • (b). In the same survey set, which, if any, of the following two sequences would be more likely: BBBGGG or GBBGBG?

All of the events listed in the problem have an equal probability, so the correct answer to (a) is 72, and to (b) is "neither is more likely". Kahneman and Tversky [23] reported that 75 of 92 participants judged the sequence in (a) as less likely than the given sequence. A similar number (unspecified by Kahneman and Tversky, but the statistical effect was reported to be of the same order as in (a)) judged GBBGBG to be the more likely sequence. Again, Kahneman and Tversky suggested that these results reflect use of the representativeness heuristic. In the context of this problem, the heuristic takes the following form: some birth orders appear less patterned than others, and since less patterned sequences are associated with the perceived randomness of birth order, they are judged more likely.

Coin tosses

  • (a). H T H T H T H T
  • (b). H H H H T T T T
  • (c). T T H H T T H H
  • (d). H T T H T H H T
  • (e). all of the above are equally likely

The correct answer in this problem is (e). Kahneman and Tversky [ 23 ] reported that participants tend to choose less patterned looking sequences (e.g., H T T H T H H T) as more likely than more systematic looking sequences (e.g., H T H T H T H T). This reasoning again reflects the representativeness heuristic.
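The equal-likelihood claim behind answer (e) can be verified directly: any specific sequence of eight fair tosses has probability (1/2)^8 = 1/256, regardless of how patterned it looks. A quick sketch:

```python
# Probability of each specific heads/tails sequence from the problem.
seqs = ["HTHTHTHT", "HHHHTTTT", "TTHHTTHH", "HTTHTHHT"]

def prob(seq, p_heads=0.5):
    # Independent tosses: multiply per-toss probabilities.
    return p_heads ** seq.count("H") * (1 - p_heads) ** seq.count("T")

probs = [prob(s) for s in seqs]
print(probs[0])               # 0.00390625, i.e. 1/256
assert len(set(probs)) == 1   # all four sequences are equally likely
```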

Three further questions from the literature were included to test problem-solving skill.

Two drivers

  • (a). Driver A would win the race
  • (b). Driver B would win the race
  • (c). the two drivers would arrive at the same time (within a few seconds of one another)

This problem was developed by Pelham and Neter [ 25 ]. The correct answer is (a), which can be determined by calculations of driving times for each Driver, using time = distance/velocity. Pelham and Neter argue, however, that (c) is intuitively appealing, on the basis that both drivers appear to have the same overall average speed. Pelham and Neter reported that 67% of their sample gave this incorrect response to the problem, and a further 13% selected (b).
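The time = distance/velocity computation can be illustrated with hypothetical numbers (the problem's exact speeds and distances are not reproduced above): a driver who holds a constant speed beats one who splits the same distance between a slower and a faster speed with the same arithmetic average, because the slow half costs more time than the fast half saves.

```python
# Hypothetical figures: both drivers cover 100 km. Driver A holds a
# constant 50 km/h; Driver B does half the distance at 40 km/h and half
# at 60 km/h, the same *arithmetic* average speed as A.
distance = 100.0

time_a = distance / 50.0                                  # 2.0 hours
time_b = (distance / 2) / 40.0 + (distance / 2) / 60.0    # ≈ 2.083 hours

print(time_a, round(time_b, 3))  # 2.0 2.083
assert time_a < time_b  # the constant-speed driver arrives first
```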

Petrol station

Imagine that you are driving along the road and you notice that your car is running low on petrol. You see two petrol stations next to each other, both advertising their petrol prices. Station A’s price is 65c/litre; Station B’s price is 60c/litre. Station A’s sign also announces: “5c/litre discount for cash!” Station B’s sign announces “5c/litre surcharge for credit cards.” All other factors being equal (for example, cleanliness of the stations, number of cars waiting at each etc), to which station would you choose to go, and why?

This problem was adapted from one described by Galotti [ 26 ], and is inspired by research reported by Thaler [ 27 ]. According to Thaler’s research, most people prefer Station A, even though both stations are offering the same deal: 60c/litre for cash, and 65c/litre for credit. Tversky and Kahneman [ 28 ] explain this preference by invoking the concept of framing effects. In the context of this problem, such an effect would involve viewing the outcomes as changes from some initial point. The initial point frames the problem, and provides a context for viewing the outcome. Thus, depending on the starting point, outcomes in this problem can be viewed as either a gain (in Station A, you gain a discount if you use cash) or a loss (in Station B, you are charged more (a loss) for using credit). Given that people are apparently more concerned about a loss than a gain [ 29 ], the loss associated with Station B makes it the less attractive option, and hence the preference for Station A. The correct answer, though, is that the stations are offering the same deal and so no station should be preferred.
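The "same deal" claim reduces to two additions, which a minimal sketch makes explicit:

```python
# Effective prices in cents per litre once the discount/surcharge applies.
station_a_cash   = 65 - 5   # advertised 65c/litre, 5c cash discount
station_a_credit = 65
station_b_cash   = 60       # advertised 60c/litre
station_b_credit = 60 + 5   # 5c credit-card surcharge

# Both stations charge 60c for cash and 65c for credit.
assert station_a_cash == station_b_cash == 60
assert station_a_credit == station_b_credit == 65
```

Only the framing differs: the identical 5c gap is presented as a gain at Station A and a loss at Station B.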

Finally, a question described by Stanovich [30, 31] as testing our predisposition for cognitive operations that require the least computational effort was included.

Jack looking at Anne

  • (c). Cannot be determined

Stanovich reported that over 80% of people choose the “lazy” answer (c). The correct answer is (a).

The above questions survey, in a clear problem-solving setting: an ability to engage advanced cognitive processing in order to critically evaluate and possibly override an initial gut reaction; an ability to reason about probability within the framework of the law of large numbers and the relationship between randomness and patterning; an ability to isolate the salient features of a problem; and, with the last question in particular, an ability to map logical relations. It might be hypothesised that, in line with the knowledge base provided by degrees of mathematical training and the claims of associated broad and enhanced problem-solving abilities, participants with greater degrees of such training would outperform others on these questions. This hypothesis was investigated in this study. In addition, given that no previous study on this issue has examined the variety of problems used in this study, we also undertook an exploratory analysis to investigate whether there exist any associations between the problems in terms of their likelihood of correct solution. Similarities between problems might indicate which problem-solving domains could be susceptible to the effects of mathematics training.

Five groups with increasing levels of mathematics training were recruited:

  • Introductory—First year, second semester, university students with weak high school mathematical results, only enrolled in the current unit as a compulsory component for their chosen degree, a unit not enabling any future mathematical pathway, a typical student may be enrolled in a Biology or Geography major;
  • Standard—First year, second semester, university students with fair to good high school mathematical results, enrolled in the current mathematics unit as a compulsory component for their chosen degree with the possibility of including some further mathematical units in their degree pathway, a typical student may be enrolled in an IT or Computer Science major;
  • Advanced1—First year, second semester, university mathematics students with very strong interest as well as background in mathematics, all higher year mathematical units are included as possible future pathway, a typical student may be enrolled in a Mathematics or Physics major;
  • Advanced2—Second year, second semester, university mathematics students with strong interest as well as background in mathematics, typically a direct follow on from the previously mentioned Advanced1 cohort;
  • Academic—Research academics in the mathematical sciences.

Participants

A total of 123 first-year university students volunteered during "help on demand" tutorial times containing up to 30 students. These are course-allocated times that are supervised yet self-directed by students; this minimised disruption and discouraged coercion. A further 44 second-year university students completed the questionnaire during a weekly one-hour time slot dedicated to putting the latest mathematical concepts into practice with the lecturer (where, by contrast with what occurs in tutorial times, the lecturer does most of the work and all enrolled students are invited). All these university students completed the questionnaire in normal classroom conditions; they were not placed under strict examination conditions. The lead author walked around to prevent discussion and coercion, and there was minimal disruption. Finally, 30 research academics responded to local advertising and answered the questionnaire in their workplace while supervised.

The questionnaires were voluntary, anonymous and confidential. Participants were free to withdraw from the study at any time and without any penalty; no participant took this option, however. The questionnaires gathered demographic information which included age, level of education attained and current qualification pursued, name of last qualification and years since obtaining it, and an option to note current speciality for research academics. Each problem task was placed on a separate page. Participants were not placed under time constraint, but, while supervised, were asked to write their start and finish times on the front page of the survey to note approximate completion times. Speed of completion was not incentivised. Participants were not allowed to use calculators. A final "Comments Page" gave the option for feedback, including, specifically, whether the participants had previously seen any of the questions. Questionnaires were administered in person and supervised to avoid collusion or consulting of external sources.

The responses were coded four ways: A) correct; B) standard error (the errors discussed above in The Study); C) other error; D) left blank.

The ethical aspects of the study were approved by the Human Research Ethics Committee of the University of Sydney, protocol number [2016/647].

The first analysis examined the total number of correct responses provided by the participants as a function of group. Scores ranged from 1 to 11 out of a possible 11 (Problem 6 had two parts) ( Fig 1 ). An ANOVA of these data indicated a significant effect of group (F(4, 192) = 20.426, p < .001, partial η² = .299). Pairwise comparisons using Tukey's HSD test indicated that the Introductory group performed significantly worse than the Advanced1, Advanced2 and Academic groups. There were no significant differences between the Advanced1, Advanced2 and Academic groups.


Error bars are one standard error of the mean.

https://doi.org/10.1371/journal.pone.0236153.g001
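The analysis pipeline just described (a one-way ANOVA followed by Tukey HSD pairwise comparisons) can be sketched with SciPy; the group sizes and score distributions below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
# Synthetic total-correct scores for five hypothetical groups, with the
# "Introductory" group given a lower mean than the rest.
groups = {
    "Introductory": rng.normal(5, 2, 40),
    "Standard":     rng.normal(7, 2, 40),
    "Advanced1":    rng.normal(8, 2, 40),
    "Advanced2":    rng.normal(8, 2, 44),
    "Academic":     rng.normal(8, 2, 30),
}

# One-way ANOVA for an overall group effect.
f, p = f_oneway(*groups.values())
print(f"F = {f:.2f}, p = {p:.2g}")
assert p < 0.001  # the built-in group difference is detected

# Pairwise post-hoc comparisons, analogous to Tukey's HSD in the text.
posthoc = tukey_hsd(*groups.values())
```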

Overall solution time, while recorded manually and approximately, was positively correlated with group, such that the more training someone had received, the longer were these solution times (r(180) = 0.247, p = .001). However, as can be seen in Fig 2 , this relationship is not strong.


https://doi.org/10.1371/journal.pone.0236153.g002

A series of chi-squared analyses, and their Bayesian equivalents, were performed on each problem, to determine whether the distribution of response types differed as a function of group. To minimise the number of cells in which expected values in some of these analyses were less than 5, the Standard Error, Other Error and Blank response categories were collapsed into one category (Incorrect Response). For three of the questions, the expected values of some cells did fall below 5, and this was due to most people getting the problem wrong (Four Cards), or most people correctly responding to the problem (Bat and Ball, Coin Tosses). In these cases, the pattern of results was so clear that a statistical analysis was barely required. Significant chi-squared results were examined further with pairwise posthoc comparisons (see Table 1 ).


https://doi.org/10.1371/journal.pone.0236153.t001
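The collapse-and-test procedure described above can be sketched with `scipy.stats.chi2_contingency`; the counts below are hypothetical, not the paper's data, with the three error categories already collapsed into a single Incorrect column:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical Correct vs Incorrect counts for the five groups.
table = np.array([
    [ 5, 35],   # Introductory
    [10, 33],   # Standard
    [15, 25],   # Advanced1
    [17, 27],   # Advanced2
    [11, 19],   # Academic
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
assert dof == 4                   # (5 groups - 1) x (2 responses - 1)
assert (expected >= 5).all()      # the validity check discussed above
```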

The four cards problem

The three groups with the least amount of training in mathematics were far less likely than the other groups to give the correct solution (χ²(4) = 31.06, p < .001; BF₁₀ = 45,045) ( Table 1 ). People in the two most advanced groups (Advanced2 and Academic) were more likely to solve the card problem correctly, although it was still fewer than half of the people in these groups who did so. Further, these people were less likely to give the standard incorrect solution, so that most who were incorrect suggested some more cognitively elaborate answer, such as turning over all cards. The proportions of people in the Advanced2 and Academic groups (39% and 37%) who solved the problem correctly far exceeded the typical proportion observed with this problem (10%). Of note, also, is the relatively high proportion of those in the higher training groups who, when they made an error, did not make the standard error, a similar result to the one reported by Inglis and Simpson [11].

The cognitive reflection test

In the Lily Pads problem, although most people in the Standard, Advanced1, Advanced2 and Academic groups were likely to select the correct solution, it was also the case that the less training someone had received in mathematics, the more likely they were to select an incorrect solution (χ²(4) = 27.28, p < .001; BF₁₀ = 15,554), with the standard incorrect answer being the next most prevalent response for the two lower ability mathematics groups ( Table 1 ).

Performance on the Widgets problem was similar to performance on the Lily Pads problem in that most people in the Standard, Advanced1, Advanced2 and Academic groups were likely to select the correct solution, but the less training someone had received in mathematics, the more likely they were to select an incorrect solution (χ²(4) = 23.76, p < .001; BF₁₀ = 516) ( Table 1 ). As with the Lily Pads and Widgets problems, people in the Standard, Advanced1, Advanced2 and Academic groups were highly likely to solve the Bat and Ball problem (χ²(4) = 35.37, p < .001; BF₁₀ = 208,667). Errors were more likely from the least mathematically trained people (Introductory, Standard) than the other groups ( Table 1 ).

To compare performance on the CRT with previously published results, performance on the three problems (Lily Pads, Widgets, Bat and Ball) were combined. The number of people in each condition that solved 0, 1, 2, or 3 problems correctly is presented in Table 2 . The Introductory group were evenly distributed amongst the four categories, with 26% solving all three problems correctly. Around 70% of the rest of the groups solved all 3 problems correctly, which is vastly superior to the 17% reported by Frederick [ 16 ].


https://doi.org/10.1371/journal.pone.0236153.t002

Responses to the Hospitals problem were almost universally split between correct and standard errors in the Standard, Advanced1, Advanced2 and Academic groups. Although this pattern of responses was also evident in the Introductory group, this group also exhibited more non-standard errors and non-responses than the other groups. However, the differences between the groups were not significant (χ²(4) = 4.93, p = .295; BF₁₀ = .068) ( Table 1 ). Nonetheless, the performance of all groups exceeds the 20% correct response rate reported by Kahneman and Tversky [ 23 ].

The two versions of the Birth Order problem showed similar results, with correct responses being more likely in the groups with more training (i.e., Advanced1, Advanced2 and Academic), and responses being shared amongst the various categories in the Introductory and Standard groups (version (a): χ²(4) = 24.54, p < .001, BF₁₀ = 1,303; version (b): χ²(4) = 25.77, p < .001, BF₁₀ = 2,970) ( Table 1 ). Nonetheless, performance on both versions of the problem in this study was significantly better than the 82% error rate reported by Kahneman and Tversky [ 23 ].

The Coin Tosses problem was performed well by all groups, with very few people in any condition committing errors. There were no obvious differences between the groups (χ²(4) = 3.70, p = .448; BF₁₀ = .160) ( Table 1 ). Kahneman and Tversky [ 23 ] reported that people tend to make errors on this type of problem by choosing less patterned looking sequences, but they did not report relative proportions of people making errors versus giving correct responses. Clearly the sample in this study did not perform like those in Kahneman and Tversky's study.

Responses on the Two Drivers problem were clearly distinguished by a high chance of error in the Introductory and Standard groups (over 80%), and a fairly good chance of being correct in the Advanced1, Advanced2 and Academic groups (χ²(4) = 46.16, p < .001; BF₁₀ = 1.32 × 10⁸) ( Table 1 ). Academics were the standout performers on this problem, although over a quarter of this group produced an incorrect response. Thus, the first two groups performed similarly to the participants in the Pelham and Neter [ 25 ] study, 80% of whom gave an incorrect response.

Responses on the Petrol Station problem were marked by good performance in the Academic group (73% providing a correct response), with just over half of each of the other groups correctly solving the problem. This difference was not significant (χ²(4) = 4.68, p = .322; BF₁₀ = .059) ( Table 1 ). Errors were fairly evenly balanced between standard and other, except in the Academic group, who were more likely to provide a creative answer if they made an error. Thaler [ 27 ] reported that most people get this problem wrong. In this study, however, most people answered correctly, although this overall rate was boosted by the Academic group.

Responses on the Jack looking at Anne problem were generally standard errors, except in the Advanced2 and Academic groups, which were evenly split between standard errors and correct responses (χ²(4) = 18.03, p = .001; BF₁₀ = 46) ( Table 1 ). Thus, apart from these two groups, the error rate in this study was similar to that reported by Stanovich [ 30 ], where 80% of participants were incorrect.
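In the standard formulation of this problem, Jack looks at Anne and Anne looks at George; Jack is married, George is not, and Anne's marital status is unknown. The question is whether a married person is looking at an unmarried person, and the standard error is to answer "cannot be determined". A quick case analysis over Anne's two possible statuses (a sketch of the underlying logic, not the study's materials) shows the answer is yes either way:

```python
# Jack is married, George is unmarried, Anne's status is unknown.
looks_at = [("Jack", "Anne"), ("Anne", "George")]

for anne_married in (True, False):
    married = {"Jack": True, "Anne": anne_married, "George": False}
    # Is some married person looking at some unmarried person?
    answer = any(married[a] and not married[b] for a, b in looks_at)
    print(anne_married, answer)  # the answer is True in both cases
```

If Anne is married she is looking at the unmarried George; if she is not, the married Jack is looking at her. The disjunction of cases settles the question without knowing Anne's status.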

A series of logistic regression analyses was performed to examine whether the likelihood of solving a particular problem correctly could be predicted from whether other problems were solved correctly. Each analysis took performance (correct or error) on one problem as the outcome variable, and performance on the other problems as predictor variables. Amount of training was also included as a predictor in each analysis. A further logistic regression was performed with training as the outcome variable and performance on all of the problems as predictors. The results of these analyses are summarised in Table 3 . Three multivariable relationships were observed, which can be interpreted as sets of problems within which the likelihood of solving one problem was associated with solving the others. These sets were: (1) Lily Pads, Widgets and Petrol Station; (2) Hospitals, Four Cards and Two Drivers; (3) Birth Order and Coin Tosses. Training also featured in each of these sets, moderating the relationships as per the results presented above for each problem.
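The shape of these analyses can be sketched as follows. This is a minimal illustration with simulated data, not the study's data or code; a real analysis would use a statistics package, but a small gradient-descent fit keeps the sketch self-contained. All variable names and effect sizes here are hypothetical.

```python
import math, random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic regression via batch gradient descent.
    Returns weights with the intercept first."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for row, target in zip(X, y):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))
            err = 1 / (1 + math.exp(-z)) - target
            grad[0] += err
            for j, xi in enumerate(row):
                grad[j + 1] += err * xi
        w = [wi - lr * g / len(X) for wi, g in zip(w, grad)]
    return w

# Simulated participants: predict correctness on one problem (solved) from
# correctness on two other problems plus amount of training (0-4).
random.seed(1)
X, y = [], []
for _ in range(200):
    training = random.randint(0, 4)
    p1 = int(random.random() < 0.2 + 0.15 * training)
    p2 = int(random.random() < 0.3 + 0.10 * training)
    solved = int(random.random() < 0.1 + 0.25 * p1 + 0.10 * p2 + 0.08 * training)
    X.append([p1, p2, training]); y.append(solved)

weights = fit_logistic(X, y)
print([round(w, 2) for w in weights])  # intercept, then coefficients for p1, p2, training
```

Positive coefficients on the other problems, after training is included as a predictor, are what the "problem set" associations reported in Table 3 amount to.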


https://doi.org/10.1371/journal.pone.0236153.t003

The final “Comments Page” revealed that participants overwhelmingly enjoyed the questions. Analysis of previous exposure to the tasks proved impossible, as there was little to no consistency in participants' degree of recall, if any, or even in their perceptions of what exposure entailed. For example, some participants confused being exposed to the particular tasks with being habitually exposed to puzzles, or even to mathematics problems more broadly.

In general, the amount of mathematics training a group had received predicted their performance on the overall set of problems. The greater the training, the more problems were answered correctly, and the slower the recorded response times. There was no obvious difference between the Advanced1, Advanced2 and Academic groups on either of these measures; however, there were clear differences between these groups and the Introductory and Standard groups, with the former exhibiting clearly superior accuracy. Response times were recorded only approximately, so as to avoid adding time pressure as a variable, but the fact that the Advanced1, Advanced2 and Academic groups spent more time considering the problems may suggest that a “pause and consider” approach is characteristic of the advanced groups. This is in line with an eye-movement tracking study of mathematically trained students attempting the Four Cards Problem, in which participants who avoided the standard error spent longer considering the card linked to the matching bias effect [ 14 ]. It is important to note, however, that longer response times may reflect cognitive processes other than deliberation [ 32 ].

Performance on some problems was associated with performance on others: if someone correctly answered a problem in one of these sets, they were also highly likely to correctly answer the other problems in the set. These sets were: (1) Lily Pads, Widgets and Petrol Station; (2) Hospitals, Four Cards and Two Drivers; (3) Birth Order and Coin Tosses. This differs from how these problems have typically been clustered a priori in the research literature: (I) Lily Pads, Widgets and Bat and Ball (CRT); (II) Hospitals and Two Drivers (explained below); (III) Hospitals, Birth Order and Coin Tosses (representativeness heuristic); (IV) Birth Order and Coin Tosses (probability theory). Consideration of these problem groupings follows.

Correctly answering all three problems in (I) entailed not being distracted by particular pieces of information in the problems, so as to stay focused on uncovering the real underlying relationships. The Lily Pads and Widgets problems can mislead if attention is focused too narrowly on the numbers, and conversely, the Petrol Station problem can mislead if there is too much focus on the idea of a discount. While the Lily Pads and Widgets problems are traditionally paired with the Bat and Ball problem in the CRT, performance on the Bat and Ball problem may not have appeared as part of this set because of an added level of difficulty. With the problems in (I), avoiding the distracting parts of the questions leads almost directly to the correct answer. With the Bat and Ball problem, however, further steps in mathematical reasoning are still needed: finding two numbers that sum to one given value while differing by another.
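That extra step in the Bat and Ball problem (a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball) is a small simultaneous system rather than a number to be read off. A quick check with exact fractions:

```python
from fractions import Fraction

total = Fraction(110, 100)  # bat + ball = $1.10
diff  = Fraction(100, 100)  # bat - ball = $1.00

# Adding the two equations gives 2 * bat = total + diff.
bat  = (total + diff) / 2
ball = total - bat
print(ball, bat)  # 1/20 and 21/20, i.e. $0.05 and $1.05 -- not the intuitive $0.10
```

The intuitive answer of $0.10 for the ball satisfies the sum but not the difference, which is exactly the kind of unchecked step the CRT is designed to catch.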

With the problems in (II), it is of interest that the Two Drivers problem was created specifically to be paired with the Hospitals problem to test for motivation in problem solving [ 23 ]. Within this framework, more transparent versions of these problems were successfully devised to manipulate difficulty. The Two Drivers problem was amended to have Driver B travelling at exactly 5 mph during the first half of the race and at exactly 95 mph during the last half. The Hospitals problem was amended so that the smaller hospital would have “only 2” babies born each day, and so that, for a period of one year, the hospitals recorded the number of days on which all of the babies born were boys. Could the association in (II) be pointing to how participants overcome initial fictitious mathematical rules? Perhaps they reframe the question in simpler terms to see the pattern. The Four Cards Problem also elicited a high number of incorrect answers, but with mathematical training the standard incorrect solution gave way to more cognitively elaborate ones. Indeed, a gradation effect appeared across the groups, with the standard error of the “D and 3” cards becoming “D only” ( Table 4 ). Adrian Simpson and Derrick Watson found a comparable result across their two groups [14 p61]. This could again point to participants having avoided an initial fictitious rule, that of concentrating only on items directly mentioned in the question, and then seeking to reframe the question to unearth the logical rule to be deduced. An added level of difficulty with this question may be why participants become trapped in a false answer. The eye-movement tracking study mentioned above supports this interpretation.


https://doi.org/10.1371/journal.pone.0236153.t004
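For the amended Two Drivers version quoted above, the trap is averaging the two speeds arithmetically. Assuming the halves are halves of the distance (the reading such problems intend), Driver B's average speed is the harmonic mean of 5 and 95 mph, not 50 mph:

```python
from fractions import Fraction

# Driver B: first half of the distance at 5 mph, second half at 95 mph.
# Average speed = total distance / total time, which is the harmonic
# mean of the two speeds, not their arithmetic mean.
half = Fraction(1)                    # length of each half (any value cancels out)
total_time = half / 5 + half / 95
average_speed = 2 * half / total_time
print(average_speed)  # 19/2, i.e. 9.5 mph -- far below the intuitive 50 mph
```

Driver B spends vastly more time in the slow leg, so a driver at a constant 50 mph is far faster overall.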

The problems in (III) fit naturally together as part of basic probability theory, a topic participants would have assimilated, or not, as part of various education curricula. While the equal likelihood of all possible outcomes of a coin toss may be culturally assimilated, the same may not be as straightforward for birth gender outcomes, where such assumptions could be swayed by biological hypotheses or folk wisdom [ 33 ]. The gradation of the results in terms of mathematical training does not support this possibility.

The effect of training on performance accuracy was more obvious in some problems compared to others, and to some extent, this was related to the type of problem. For instance, most of the problems in which performance was related to training (Four Cards, CRT [Lily Pads, Widgets, Bat and Ball], Two Drivers, Jack looking at Anne) could be classed as relying on logical and/or critical thinking. The one exception was the Birth Order problems, which are probability related.

In contrast, two of the three problems in which training did not appear to have much impact on performance (Hospitals and Coin Tosses) require domain-specific knowledge. The Hospitals problem requires a degree of knowledge about sampling statistics, a topic of quite distinct flavour with which not all mathematically trained individuals gain familiarity. On the other hand, the good performance of all groups on the Coin Tosses problem is in line with basic probability having been introduced at high school. While the misconception that patterned sequences are less random also underlies the Birth Order question, it is arguably more concealed there. These results and problem grouping (III) could point to an area for improvement in teaching: the small gap in knowledge required to go from answering the Coin Tosses problem correctly to achieving similarly on the Birth Order problem could be easily addressed. A more formal introduction to sampling statistics in mathematical training could bridge this gap, and could further be extended towards improving performance on the Hospitals problem.

The other problem where performance was unrelated to training, the Petrol Station problem, cannot be characterised similarly. It is more of a logical/critical thinking problem, and there remains some suggestion that training may have impacted performance, as the Academic group seemed to perform better than the rest of the sample. An alternative interpretation is therefore that this problem should not be isolated, but grouped with the other problems where performance is affected by training.

  • The Introductory group’s mathematics high school syllabus studied prior to first semester course entry covered: Functions, Trigonometric Functions, Calculus (Introduction to Differentiation, Applications of the Derivative, Antiderivatives, Areas and the Definite Integral), Financial Mathematics, Statistical Analysis. The Introductory group then explored concepts in mathematical modelling with emphasis on the importance of calculus in their first semester of mathematical studies.
  • The Standard group’s mathematics high school syllabus studied prior to first semester course entry covered: Functions, Trigonometric Functions, Calculus (Rates of Change, Integration including the method of substitution, trigonometric identities and inverse trigonometric functions, Areas and Volumes of solids of revolution, some differential equations), Combinatorics, Proof (with particular focus on Proof by Mathematical Induction), Vectors (with application to projectile motion), Statistical Analysis. In first semester their mathematical studies then covered a number of topics the Advanced1 group studied prior to gaining entrance at university; further details on this are given below.
  • The Advanced1 group’s mathematics high school syllabus studied prior to first semester course entry covered: the same course content the Standard group covered at high school plus extra topics on Proof (develop rigorous mathematical arguments and proofs, specifically in the context of number and algebra and further develop Proof by Mathematical Induction), Vectors (3 dimensional vectors, vector equations of lines), Complex Numbers, Calculus (Further Integration techniques with partial fractions and integration by parts), Mechanics (Application of Calculus to Mechanics with simple harmonic motion, modelling motion without and with resistance, projectiles and resisted motion). The Standard group cover these topics in their first semester university studies in mathematics with the exclusion of further concepts of Proof or Mechanics. In first semester the Advanced1 group have built on their knowledge with an emphasis on both theoretical and foundational aspects, as well as developing the skill of applying mathematical theory to solve practical problems. Theoretical topics include a host of theorems relevant to the study of Calculus.

In summary, at the point of our study, the Advanced1 group had more knowledge of, and practice with, rigorous mathematical arguments and proofs in the context of number and algebra, and more in-depth experience with Proofs by Induction, but the bulk of their extra knowledge rests with a much deeper knowledge of Calculus. They have had longer experience with a variety of integration techniques, and have worked with a variety of applications of calculus to practical problems, including a large section on mechanics at high school. In first semester at university there has been a greater focus on theoretical topics, including a host of theorems and associated proofs relevant to the topics studied. Compared to the Introductory and Standard groups, the Advanced1 group have only widened the mathematics knowledge gap since their choice of post-compulsory mathematics at high school. The Advanced2 group come directly from an Advanced1 cohort, and the Academic group would have reached the Advanced1 group's proficiency as part of their employment. So, do specific reasoning skills result from this level of abstract mathematical training? Our findings suggest this should certainly be an area of investigation, and it links interestingly with other research. In studying one of the thinking tasks in particular (the Four Cards Problem), and its context of conditional inference more specifically, Inglis and Simpson [ 15 ] found a clear difference between undergraduates in mathematics and undergraduates in other university disciplines, yet also showed a lack of development on conditional inference measures over first-year university studies. A follow-up study by Attridge and Inglis [ 22 ] then zeroed in on post-compulsory high school mathematical training, and found that students with such training developed their conditional reasoning to a greater extent than a control group over the course of a year, despite having received no explicit tuition in conditional logic.
That development, though, while shown not to be the result of a domain-general change in cognitive capacity or thinking disposition, and most likely associated with the domain-specific study of mathematics, revealed a complex pattern of endorsing more of some inferences and fewer of others. The present study focused on a much broader problem set associated with logical and critical thinking, and it too suggests a more complex picture of how mathematics training may contribute to problem-solving styles. A more intricate account of the impact of mathematical training on problem-solving techniques appears to be required.

There is also a final interpretation to consider: that people in the Advanced1, Advanced2 and Academic groups did not gain anything from their mathematics training in terms of their ability to solve these problems. Instead, given that studies have found no correlation between many of these problems and what is currently measured as intelligence [ 30 ], they might simply have been people of a particular intelligence or thinking disposition to start with, able to use that intelligence not only to solve these problems but also to survive the challenges of their mathematics training.

That the CRT has traditionally been used as a measure of baseline thinking disposition, and that performance on it has been found to be immutable across the groups tested, is of particular interest, since our results show a possible training effect on these questions. The CRT is tied to a willingness to engage in effortful thinking, which presents as an ability suitable for training. It is beyond the scope of this study, but a thorough review of CRT testing could suggest a broader framework for understanding thinking disposition, ability and potential ability.

Mathematical training appears associated with certain thinking skills, but there are clearly some subtleties that need to be teased apart. The thinking tasks here add to the foundational results, the aim being a firmer platform on which to eventually base more targeted and illustrative inquiry. If thinking skills can be fostered, could first-year university mathematics teaching be improved so that all students reach the Advanced1 group's level of reasoning? Do university mathematics courses become purely about domain-specific knowledge from this point on? Intensive training has been shown to impact the brain and cognition across a number of domains, from music [ 34 ], to video gaming [ 35 ], to Braille reading [ 36 ]. The hypothesis that mathematics, with its highly specific practice, fits within this list remains legitimate, but simply uncharted. At our current level of understanding it is worth appreciating the careful wording of the NYU Courant Institute on ‘Why Study Math?’, where there is no assumption of causation: “Mathematicians need to have good reasoning ability in order to identify, analyze, and apply basic logical principles to technical problems.” [ 37 ].

Limitations

One possible limitation of the current study is that the problems may have been too easy for the more advanced participants, producing a ceiling effect (i.e., some people obtained 100% correct on all problems). This was most obvious in the Advanced1, Advanced2 and Academic groups. It is possible that participants in these groups had developed logical and critical thinking skills throughout their mathematical training that were sufficient to cope with most of the problems used in this study, which would support the contention that training in mathematics leads to the development of logical and critical thinking skills useful in a range of domains. Another interpretation is that participants in these groups already possessed the necessary thinking skills for solving the problems, which is why they were able to cope with the material in the advanced units they were enrolled in, or to complete a PhD in mathematics and hold down an academic position in a mathematics department. This would suggest that training in mathematics had no effect on abstract thinking skills; people in this study possessed them to varying extents prior to their studies. This issue might be settled in a future study that used a greater number of problems of varying difficulty, to maximise the chances of finding a difference between the three groups with the most training. Alternatively, a longitudinal study that followed people through their mathematics training could determine whether their logical and critical thinking abilities changed throughout their course.

A further limitation of the study may be that several of the reasoning biases examined in this study were measured by only one problem each (i.e., Four Cards Problem, Two Drivers, Petrol Station, Jack looking at Anne). A more reliable measure of these biases could be achieved by including more problems that tap into these biases. This would, however, increase the time required of participants during data collection, and in the context of this study, would mean a different mode of testing would likely be required.

Broad, sweeping, intuitive claims about the transferable skills endowed by a study of mathematics require evidence. Our study uniquely covers a wide range of participants, from those with limited mathematics training through to research academics in the mathematical sciences. It furthermore considered performance on 11 well-studied thinking tasks that typically elude participants in psychological studies and on which results have been uncorrelated with general intelligence, education levels and other demographic information [ 15 , 16 , 30 ]. We identified different performance on these tasks across groups defined by level of mathematical training, including on the CRT, which has developed into a measure of baseline thinking disposition. We identified different distributions of error types for the mathematically trained, and a performance threshold in first-year university for those with high-level mathematics training. This study thus provides insight into possible changes and adjustments to mathematics courses so that they fulfil their advertised goal of delivering improved rational and logical reasoning to a greater number of students.

It is central to any education program to have a clear grasp of the nature of what it delivers and how, and this is arguably especially so for the core discipline that is mathematics. In 2014 the Office of the Chief Scientist of Australia released a report, “Australia’s STEM workforce: a survey of employers”, in which the transferable skills attributed to mathematics were among those employers deemed most valuable [ 38 ]. A better understanding of what mathematics delivers in this space is an opportunity to truly capitalise on this historic, culture-crossing subject.

Supporting information

https://doi.org/10.1371/journal.pone.0236153.s001

Acknowledgments

The authors would like to thank Jacqui Ramagge for her proof reading and input, as well as support towards data collection.

  • 1. Smith A. Making mathematics count: The report of Professor Adrian Smith’s inquiry into post-14 mathematics education. 2004. London: The Stationery Office.
  • 2. AMSI, Vision for a Maths Nation. 2015. http://amsi.org.au/publications/a-vision-for-a-maths-nation/
  • 3. British Columbia [Internet]. Mathematics; Goals and Rationale. 2016 [cited 2019 Dec 5]. https://curriculum.gov.bc.ca/curriculum/mathematics/core/goals-and-rationale
  • 4. Monash University [Internet]. Mathematical Sciences. 2019 [cited 2019 Jul 30]. https://www.monash.edu/science/schools/mathematical-sciences/current .
  • 5. The University of Sydney [Internet]. MATH1014. 2017 [cited 2019 Dec 5]. http://www.maths.usyd.edu.au/u/UG/TU/YR1ADMIN/r/MATH1014.pdf .
  • 6. The University of Sydney [Internet]. MATH2965. 2016 [cited 2016 Dec 12]. http://www.maths.usyd.edu.au/u/UG/IM/MATH2965/
  • 7. The University of Sydney [Internet]. MATH3066. 2017 [cited 2017 Dec 8]. http://www.maths.usyd.edu.au/u/UG/SM/MATH3066/r/2017info3066.pdf .
  • 8. Cambridge University [Internet]. Mathematical Tripos. 2019 [cited 2019 Jul 30]. https://www.maths.cam.ac.uk/undergrad/course/transferable_skills .
  • 9. Speelman CP, Kirsner K. Beyond the learning curve: The construction of mind. Oxford: Oxford University Press; 2005.
  • 10. Fadel C. Mathematics for the 21st Century: What Should Students Learn? Boston, MA: Center for Curriculum Redesign; 2014.
  • 11. Inglis M, Simpson A. Heuristic biases in mathematical reasoning. In: Chick HL, Vincent JL, editors. Proceedings of the 29th Conference of the International Group for the Psychology of Mathematics Education. Melbourne: PME; 2005. p. 177–84.
  • 12. Manktelow KI. Reasoning and Thinking. UK: Psychology Press; 1999.
  • 14. Inglis M, Attridge N. Does mathematical study develop logical thinking? Testing the theory of formal discipline. London: World Scientific Publishing Europe Ltd; 2016.
  • 24. Nisbett RE. Can reasoning be taught? In: Callan E, Grotzer T, Kagan J, Nisbett RE, Perkins DN, Shulman LS, editors. Education and a Civic Society: Teaching Evidence-based Decision Making. Cambridge, MA: American Academy of Arts & Sciences; 2009.
  • 26. Galotti KM. Cognitive psychology in and out of the laboratory. Belmont, CA: Brooks/Cole; 1994.
  • 37. NYU [Internet]. Why Study Math? 2019 [cited 2019 Jul 30]. https://math.nyu.edu/dynamic/undergrad/overview/why-study-math/
  • 38. Office of The Chief Scientist. Australia’s STEM workforce: a survey of employers. Barton ACT: Deloitte Access Economics; 2014.

Neural foundations of logical and mathematical cognition

Olivier Houdé & Nathalie Tzourio-Mazoyer

Nature Reviews Neuroscience, volume 4, pages 507–514. Published: 01 June 2003.

Brain-imaging techniques have made it possible to explore the neural foundations of logical and mathematical cognition. These techniques are revealing more than simply where these high-order processes take place in the human cortex. Imaging is beginning to answer some of the oldest questions about what logic and mathematics are, and how they emerge and evolve through visuospatial cognition, language, executive functions and emotion.

This is a preview of subscription content, access via your institution

Access options

Subscribe to this journal

Receive 12 print issues and online access

176,64 € per year

only 14,72 € per issue

Buy this article

  • Purchase on SpringerLink
  • Instant access to full article PDF

Prices may be subject to local taxes which are calculated during checkout

logical thinking research

Similar content being viewed by others

logical thinking research

Two views on the cognitive brain

logical thinking research

Foundations of human spatial problem solving

logical thinking research

Biological constraints on neural network models of cognitive function

Fischer, K. & Kaplan, U. in The Encyclopedia of Cognitive Science Vol. 3 (ed. Nadel, L.) 679–682 (Nature Publishing Group, Macmillan, London, 2003).

Google Scholar  

Changeux, J.-P. & Connes, A. Conversations on Mind, Matter, and Mathematics (Princeton Univ. Press, Princeton, 1998).

Gardner, H. Frames of Mind: The Theory of Multiple Intelligences (Basic Books, New York, 1993).

Fodor, J. The Modularity of Mind (The MIT Press, Cambridge, 1983).

Fodor, J. The Mind Doesn't Work That Way (The MIT Press, Cambridge, 2000).

Book   Google Scholar  

Kosslyn, S. M. & Rosenberg, R. Psychology: the Brain, the Person, the World (Allyn and Bacon, Boston, 2001).

Braine, M. D. S. & O'Brien, D. P. (eds) Mental Logic (Erlbaum, Hove, 1998).

Johnson-Laird, P. N. Mental models and deduction. Trends Cogn. Sci. 5 , 434–442 (2001).

Article   Google Scholar  

Mellet, E., Petit, L., Denis, M. & Tzourio-Mazoyer, N. Reopening the mental imagery debate: lessons from functional anatomy. NeuroImage 8 , 129–139 (1998).

Article   CAS   Google Scholar  

Goel, V., Gold, B., Kapur, S. & Houle, S. The seats of reason? An imaging study of deductive and inductive reasoning. NeuroReport 8 , 1305–1310 (1997).

Goel, V., Gold, B., Kapur, S. & Houle, S. Neuroanatomical correlates of human reasoning. J. Cogn. Neurosci. 10 , 293–302 (1998).

Goel V., Buchel, C., Frith, C. & Dolan, R. Dissociation of mechanisms underlying syllogistic reasoning. NeuroImage 12 , 504–514 (2000).

Wharton, C. M. & Grafman, J. Deductive reasoning and the brain. Trends Cogn. Sci. 2 , 54–59 (1998).

Langer, J. The descent of cognitive development. Dev. Sci. 3 , 361–388 (2000).

Baillargeon, R. & Wang, S. Event categorization in infancy. Trends Cogn. Sci. 6 , 85–92 (2002).

Evans, J. St. B. T. Bias in Human Reasoning: Causes and Consequences (Erlbaum, London, 1989).

Gaukroger, S. in The Encyclopedia of Cognitive Science Vol. 1 (ed. Nadel, L.) 947–950 (Nature Publishing Group, Macmillan, London, 2003).

Houdé, O. et al. Shifting from the perceptual brain to the logical brain: the neural impact of cognitive inhibition training. J. Cogn. Neurosci. 12 , 721–728 (2000).

Evans, J. St B. T. Matching bias in conditional reasoning. Thinking Reasoning 4 , 45–82 (1998).

Raichle, M. E. et al. Practice-related changes in human brain functional anatomy during non-motor learning. Cereb. Cortex 4 , 8–26 (1994).

Sakai, K. et al. Transition of brain activation from frontal to parietal areas in visuomotor sequence learning. J. Neurosci. 18 , 1827–1840 (1998).

Diamond, A., Kirkham, N. & Amso, D. Conditions under which young children can hold two rules in mind and inhibit a prepotent response. Dev. Psychol. 38 , 352–362 (2002).

Houdé, O. Inhibition and cognitive development: object, number, categorization, and reasoning. Cogn. Dev. 15 , 63–73 (2000).

Dehaene S., Kerszberg M. & Changeux, J.-P. A neuronal model of a global workspace in effortful cognitive tasks. Proc. Natl Acad. Sci. USA 95 , 14529–14534 (1998).

Smith, E. E. & Jonides, J. Storage and executive processes in the frontal lobes. Science 283 , 1657–1661 (1999).

Miller, E. K. The prefrontal cortex and cognitive control. Nature Rev. Neurosci. 1 , 59–65 (2000).

Fuster, J. M. Linkage at the top. Neuron 21 , 1223–1229 (1998).

Goel, V. & Dolan, R. J. Explaining modulation of reasoning by belief. Cognition 87 , 11–22 (2003).

Moutier, S. Deductive reasoning and matching-bias inhibition training in school children. Curr. Psychol. Cogn. 19 , 429–452 (2000).

Fuster, J. The Prefrontal Cortex: Anatomy, Physiology, and Neuropsychology of the Frontal Lobe (Lippincott, New York, 1997).

Bjorklund, D. F. In search of a metatheory for cognitive development (or, Piaget is dead and I don't feel so good). Child Dev. 68 , 144–148 (1997).

Johnson, M. H. Functional brain development in humans. Nature Rev. Neurosci. 2 , 475–483 (2001).

Durston, S. et al. A neural basis for the development of inhibitory control. Dev. Sci. 5 , 9–16 (2002).


Acknowledgements

We would like to thank S. Moutier, L. Zago and B. Mazoyer for their contribution to our work on logical and mathematical cognition. Support for our work is provided by The Centre National de la Recherche Scientifique, the Commissariat à l'Energie Atomique, Université de Caen, Université Paris-5 (René-Descartes) and the Institut Universitaire de France. We are also grateful to V. Waltz for her help in preparing the manuscript.

Author information

Authors and affiliations

Groupe d'Imagerie Neurofonctionnelle (GIN), Unité Mixte de Recherche 6095, Centre National de la Recherche Scientifique, Commissariat à l'Energie Atomique, Université de Caen and Université Paris-5, France

Olivier Houdé & Nathalie Tzourio-Mazoyer


Corresponding author

Correspondence to Olivier Houdé .


About this article

Cite this article

Houdé, O., Tzourio-Mazoyer, N. Neural foundations of logical and mathematical cognition. Nat Rev Neurosci 4 , 507–514 (2003). https://doi.org/10.1038/nrn1117




Critical Thinking: A Model of Intelligence for Solving Real-World Problems

Diane F. Halpern

1 Department of Psychology, Claremont McKenna College, Emerita, Altadena, CA 91001, USA

Dana S. Dunn

2 Department of Psychology, Moravian College, Bethlehem, PA 18018, USA; dunn@moravian.edu

Most theories of intelligence do not directly address the question of whether people with high intelligence can successfully solve real world problems. A high IQ is correlated with many important outcomes (e.g., academic prominence, reduced crime), but it does not protect against cognitive biases, partisan thinking, reactance, or confirmation bias, among others. There are several newer theories that directly address the question about solving real-world problems. Prominent among them is Sternberg’s adaptive intelligence with “adaptation to the environment” as the central premise, a construct that does not exist on standardized IQ tests. Similarly, some scholars argue that standardized tests of intelligence are not measures of rational thought—the sort of skill/ability that would be needed to address complex real-world problems. Other investigators advocate for critical thinking as a model of intelligence specifically designed for addressing real-world problems. Yes, intelligence (i.e., critical thinking) can be enhanced and used for solving a real-world problem such as COVID-19, which we use as an example of contemporary problems that need a new approach.

1. Introduction

The editors of this Special Issue asked authors to respond to a deceptively simple statement: “How Intelligence Can Be a Solution to Consequential World Problems.” This statement holds many complexities, including how intelligence is defined and which theories are designed to address real-world problems.

2. The Problem with Using Standardized IQ Measures for Real-World Problems

For the most part, we identify high intelligence as having a high score on a standardized test of intelligence. Like any test score, IQ can only reflect what is on the given test. Most contemporary standardized measures of intelligence include vocabulary, working memory, spatial skills, analogies, processing speed, and puzzle-like elements (e.g., Wechsler Adult Intelligence Scale Fourth Edition; see ( Drozdick et al. 2012 )). Measures of IQ correlate with many important outcomes, including academic performance ( Kretzschmar et al. 2016 ), job-related skills ( Hunter and Schmidt 1996 ), reduced likelihood of criminal behavior ( Burhan et al. 2014 ), and for those with exceptionally high IQs, obtaining a doctorate and publishing scholarly articles ( McCabe et al. 2020 ). Gottfredson ( 1997, p. 81 ) summarized these effects when she said the “predictive validity of g is ubiquitous.” More recent research using longitudinal data found that general mental abilities and specific abilities are good predictors of several work variables, including job prestige and income ( Lang and Kell 2020 ). Although assessments of IQ are useful in many contexts, having a high IQ does not protect against falling for common cognitive fallacies (e.g., blind spot bias, reactance, anecdotal reasoning), relying on biased and blatantly one-sided information sources, failing to consider information that does not conform to one’s preferred view of reality (confirmation bias), or resisting pressure to think and act in a certain way, among others. This point was clearly articulated by Stanovich ( 2009, p. 3 ) when he stated that “IQ tests measure only a small set of the thinking abilities that people need.”

3. Which Theories of Intelligence Are Relevant to the Question?

Most theories of intelligence do not directly address the question of whether people with high intelligence can successfully solve real-world problems. For example, Grossmann et al. ( 2013 ) cite many studies in which IQ scores have not predicted well-being, including life satisfaction and longevity. Using a stratified random sample of Americans, these investigators found that wise reasoning is associated with life satisfaction, and that “there was no association between intelligence and well-being” (p. 944). (Critical thinking [CT] is often referred to as “wise reasoning” or “rational thinking.”) Similar results were reported by Wirthwein and Rost ( 2011 ), who compared life satisfaction in several domains for gifted adults and adults of average intelligence. There were no differences in any of the measures of subjective well-being, except for leisure, which was significantly lower for the gifted adults. Additional research in a series of experiments by Stanovich and West ( 2008 ) found that participants with high cognitive ability were as likely as others to endorse positions that are consistent with their biases, and they were equally likely to prefer one-sided arguments over those that provided a balanced argument. There are several newer theories that directly address the question about solving real-world problems. Prominent among them is Sternberg’s adaptive intelligence with “adaptation to the environment” as the central premise, a construct that does not exist on standardized IQ tests (e.g., Sternberg 2019 ). Similarly, Stanovich and West ( 2014 ) argue that standardized tests of intelligence are not measures of rational thought—the sort of skill/ability that would be needed to address complex real-world problems. Halpern and Butler ( 2020 ) advocate for CT as a useful model of intelligence for addressing real-world problems because it was designed for this purpose. Although there is much overlap among these more recent theories, often using different terms for similar concepts, we use Halpern and Butler’s conceptualization to make our point: Yes, intelligence (i.e., CT) can be enhanced and used for solving a real-world problem like COVID-19.

4. Critical Thinking as an Applied Model for Intelligence

One definition of intelligence that directly addresses the question about intelligence and real-world problem solving comes from Nickerson ( 2020, p. 205 ): “the ability to learn, to reason well, to solve novel problems, and to deal effectively with novel problems—often unpredictable—that confront one in daily life.” Using this definition, the question of whether intelligent thinking can solve a world problem like the novel coronavirus is a resounding “yes” because solutions to real-world novel problems are part of his definition. This is a popular idea in the general public. For example, over 1000 business managers and hiring executives said that they want employees who can think critically based on the belief that CT skills will help them solve work-related problems ( Hart Research Associates 2018 ).

We define CT as the use of those cognitive skills or strategies that increase the probability of a desirable outcome. It is used to describe thinking that is purposeful, reasoned, and goal-directed: the kind of thinking involved in solving problems, formulating inferences, calculating likelihoods, and making decisions, when the thinker is using skills that are thoughtful and effective for the particular context and type of thinking task. International surveys conducted by the OECD ( 2019, p. 16 ) established “key information-processing competencies” that are “highly transferable, in that they are relevant to many social contexts and work situations; and ‘learnable’ and therefore subject to the influence of policy.” One of these skills is problem solving, which is one subset of CT skills.

The CT model of intelligence is comprised of two components: (1) understanding information at a deep, meaningful level and (2) appropriate use of CT skills. The underlying idea is that CT skills can be identified, taught, and learned, and when they are recognized and applied in novel settings, the individual is demonstrating intelligent thought. CT skills include judging the credibility of an information source, making cost–benefit calculations, recognizing regression to the mean, understanding the limits of extrapolation, muting reactance responses, using analogical reasoning, rating the strength of reasons that support and fail to support a conclusion, and recognizing hindsight bias or confirmation bias, among others. Critical thinkers use these skills appropriately, without prompting, and usually with conscious intent in a variety of settings.

One of the key concepts in this model is that CT skills transfer in appropriate situations. Thus, assessments using situational judgments are needed to assess whether particular skills have transferred to a novel situation where it is appropriate. In an assessment created by the first author ( Halpern 2018 ), short paragraphs provide information about 20 different everyday scenarios (e.g., A speaker at the meeting of your local school board reported that when drug use rises, grades decline; so schools need to enforce a “war on drugs” to improve student grades); participants provide two response formats for every scenario: (a) constructed responses where they respond with short written responses, followed by (b) forced choice responses (e.g., multiple choice, rating or ranking of alternatives) for the same situations.

There is a large and growing empirical literature to support the assertion that CT skills can be learned and will transfer (when taught for transfer). See, for example, Holmes et al. ( 2015 ), who wrote in the prestigious Proceedings of the National Academy of Sciences that there was “significant and sustained improvement in students’ critical thinking behavior” (p. 11,199) for students who received CT instruction. Abrami et al. ( 2015, para. 1 ) concluded from a meta-analysis that “there are effective strategies for teaching CT skills, both generic and content specific, and CT dispositions, at all educational levels and across all disciplinary areas.” Abrami et al. ( 2008, para. 1 ) included 341 effect sizes in a meta-analysis. They wrote: “findings make it clear that improvement in students’ CT skills and dispositions cannot be a matter of implicit expectation.” A strong test of whether CT skills can be used for real-world problems comes from research by Butler et al. ( 2017 ). Community adults and college students (N = 244) completed several scales, including an assessment of CT, an intelligence test, and an inventory of real-life events. Both CT scores and intelligence scores predicted individual outcomes on the inventory of real-life events, but CT was a stronger predictor.

Heijltjes et al. ( 2015, p. 487 ) randomly assigned participants to either a CT instruction group or one of six other control conditions. They found that “only participants assigned to CT instruction improved their reasoning skills.” Similarly, when Halpern et al. ( 2012 ) used random assignment of participants to either a learning group where they were taught scientific reasoning skills using a game format or a control condition (which also used computerized learning and was similar in length), participants in the scientific skills learning group showed higher proportional learning gains than students who did not play the game. As the body of additional supportive research is too large to report here, interested readers can find additional lists of CT skills and support for the assertion that these skills can be learned and will transfer in Halpern and Dunn ( Forthcoming ). There is a clear need for more high-quality research on the application and transfer of CT and its relationship to IQ.

5. Pandemics: COVID-19 as a Consequential Real-World Problem

A pandemic occurs when a disease runs rampant over an entire country or even the world. Pandemics have occurred throughout history. At the time of writing this article, COVID-19 is a world-wide pandemic whose actual death toll is unknown but is projected at several million over the course of 2021 and beyond ( Mega 2020 ). Although vaccines are available, it will take some time to inoculate most or much of the world’s population. Since March 2020, national and international health agencies have created lists of actions that can slow and hopefully stop the spread of COVID (e.g., wearing face masks, practicing social distancing, avoiding group gatherings), yet many people in the United States and other countries have resisted their advice.

Could instruction in CT encourage more people to accept and comply with simple life-saving measures? There are many reasons to believe that by increasing citizens’ CT abilities, this problematic trend can be reversed for at least some unknown percentage of the population. We recognize the long history of social and cognitive research showing that changing attitudes and behaviors is difficult, and it would be unrealistic to expect that individuals with extreme beliefs supported by their social group and consistent with their political ideologies are likely to change. For example, an Iranian cleric and an orthodox rabbi both claimed (separately) that the COVID-19 vaccine can make people gay ( Marr 2021 ). These unfounded opinions are based on deeply held prejudicial beliefs that we expect to be resistant to CT. We are targeting those individuals whose beliefs are less extreme and may be based on reasonable reservations, such as concern about the hasty development of the vaccine and the lack of long-term data on its effects. There should be some unknown proportion of individuals who can change their COVID-19-related beliefs and actions with appropriate instruction in CT. CT can be a (partial) antidote for the chaos of the modern world, with armies of bots creating content on social media, political and other forces deliberately attempting to confuse issues, and almost all media labeled “fake news” by social influencers (i.e., people with followers that sometimes number in the millions on various social media). Here are some CT skills that could be helpful in getting more people to think more critically about pandemic-related issues.

Reasoning by Analogy and Judging the Credibility of the Source of Information

Early communications about the ability of masks to prevent the spread of COVID from national health agencies were not consistent. In many regions of the world, the benefits of wearing masks incited prolonged and acrimonious debates ( Tang 2020 ). However, after the initial confusion, virtually all of the global and national health organizations (e.g., WHO, National Health Service in the U. K., U. S. Centers for Disease Control and Prevention) endorse masks as a way to slow the spread of COVID ( Cheng et al. 2020 ; Chu et al. 2020 ). However, as we know, some people do not trust governmental agencies and often cite the conflicting information that was originally given as a reason for not wearing a mask. There are varied reasons for refusing to wear a mask, but the one most often cited is that it is against civil liberties ( Smith 2020 ). Reasoning by analogy is an appropriate CT skill for evaluating this belief (and a key skill in legal thinking). It might be useful to cite some of the many laws that already regulate our behavior such as, requiring health inspections for restaurants, setting speed limits, mandating seat belts when riding in a car, and establishing the age at which someone can consume alcohol. Individuals would be asked to consider how the mandate to wear a mask compares to these and other regulatory laws.

Another reason why some people resist the measures suggested by virtually every health agency concerns questions about whom to believe. Could training in CT change the beliefs and actions of even a small percentage of those opposed to wearing masks? Such training would include considering the following questions, with practice across a wide domain of knowledge: (a) Does the source have sufficient expertise? (b) Is the expertise recent and relevant? (c) Is there a potential for gain by the information source, such as financial gain? (d) What would the ideal information source be, and how close is the current source to the ideal? (e) Does the information source offer evidence that what they are recommending is likely to be correct? (f) Have you traced URLs to determine if the information in front of you really came from the alleged source? Of course, not everyone will respond in the same way to each question, so there is little likelihood that we would all think alike, but these questions provide a framework for evaluating credibility. Donovan et al. ( 2015 ) were successful using a similar approach to improve dynamic decision-making by asking participants to reflect on questions that relate to the decision. Imagine the effect of rigorous large-scale education in CT from elementary through secondary schools, as well as at the university level. As stated above, empirical evidence has shown that people can become better thinkers with appropriate instruction in CT. With training, could we encourage some portion of the population to become more astute at judging the credibility of a source of information? It is an experiment worth trying.
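The credibility questions above amount to a simple checklist, which can be sketched in a few lines of code. The Python sketch below is illustrative only: the condensed question wording, the yes/no response format, and the equal weighting of answers are our assumptions for the example, not part of the authors’ framework.

```python
# Illustrative sketch of the source-credibility checklist.
# Condensed wording and equal weighting are assumptions for this demo.

CREDIBILITY_QUESTIONS = [
    "Does the source have sufficient expertise?",
    "Is the expertise recent and relevant?",
    "Is the source free of potential gain (e.g., financial)?",
    "Is the current source close to the ideal information source?",
    "Does the source offer evidence that its recommendation is correct?",
    "Does the URL trace back to the alleged source?",
]

def credibility_score(answers):
    """Return the fraction of checklist questions answered 'yes'.

    `answers` is a sequence of booleans, one per question, in order.
    """
    if len(answers) != len(CREDIBILITY_QUESTIONS):
        raise ValueError("one answer per checklist question is required")
    return sum(answers) / len(answers)

# A source with expertise and evidence but a potential financial stake:
print(credibility_score([True, True, False, True, True, True]))  # ≈ 0.83
```

As the text notes, different people will answer the questions differently; the value of the checklist is the shared framework, not any particular score.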

6. Making Cost–Benefit Assessments for Actions That Would Slow the Spread of COVID-19

Historical records show that refusal to wear a mask during a pandemic is not a new reaction. The influenza pandemic of 1918 also included mandates to wear masks, which drew public backlash. Then, as now, many people refused, even when they were told that wearing one was a symbol of “wartime patriotism,” because the 1918 pandemic occurred during World War I ( Lovelace 2020 ). CT instruction would include why and how to compute cost–benefit analyses. Estimates of “lives saved” by wearing a mask can be made meaningful with graphical displays that allow more people to understand large numbers. Gigerenzer ( 2020 ) found that people can understand risk ratios in medicine when the numbers are presented as frequencies instead of probabilities. If this information were used when presenting the likelihood of illness and death from COVID-19, could we increase the number of people who understand the severity of this disease? Small-scale studies by Gigerenzer have shown that it is possible.
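Gigerenzer’s frequency format can be made concrete with a few lines of code. The sketch below is our own illustration, assuming a simple rounding rule and a default reference population of 1,000; neither detail comes from the paper.

```python
# Re-express a probability as a natural frequency ("about 3 in 1,000"),
# the format Gigerenzer (2020) found easier to grasp than "0.3%".
# The default population of 1,000 and the rounding are our assumptions.

def as_natural_frequency(probability, population=1000):
    """Express a probability as 'about k in <population>'."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    k = round(probability * population)
    return f"about {k:,} in {population:,}"

print(as_natural_frequency(0.003))          # about 3 in 1,000
print(as_natural_frequency(0.012, 10_000))  # about 120 in 10,000
```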

Analyzing Arguments to Determine Degree of Support for a Conclusion

The process of analyzing arguments requires that individuals rate the strength of support for and against a conclusion. By engaging in this practice, they must consider evidence and reasoning that may run counter to a preferred outcome. Kozyreva et al. ( 2020 ) call the deliberate failure to consider both supporting and conflicting data “deliberate ignorance”—avoiding or failing to consider information that could be useful in decision-making because it may collide with an existing belief. When applied to COVID-19, people would have to decide whether the evidence for and against face masks shows that wearing them is a reasonable way to stop the spread of this disease, and if they conclude that it is not, what the costs and benefits of not wearing masks are at a time when governmental health organizations are making them mandatory in public spaces. Again, we wonder if rigorous and systematic instruction in argument analysis would result in more positive attitudes and behaviors that relate to wearing a mask or other real-world problems. We believe that it is an experiment worth doing.
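The tallying step of the argument-analysis exercise can be sketched mechanically. The code below is our own illustration: the 0–3 strength scale and the sample reasons about masks are invented for the demonstration, not taken from the paper.

```python
# Tally rated reasons for and against a conclusion (argument analysis).
# The 0-3 strength scale and the example reasons are illustrative only.

def net_support(reasons_for, reasons_against):
    """Return total support minus total opposition.

    Each reason is a (description, strength) pair with strength in 0-3.
    """
    total_for = sum(strength for _, strength in reasons_for)
    total_against = sum(strength for _, strength in reasons_against)
    return total_for - total_against

masks_for = [
    ("systematic reviews report reduced transmission", 3),
    ("endorsed by virtually all national health agencies", 2),
]
masks_against = [
    ("early official guidance was inconsistent", 1),
]

print(net_support(masks_for, masks_against))  # 4
```

The forced explicitness is the point: rating reasons on both sides makes it harder to practice the “deliberate ignorance” described above.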

7. Conclusions

We believe that teaching CT is a worthwhile approach for educating the general public in order to improve reasoning and motivate actions to address, avert, or ameliorate real-world problems like the COVID-19 pandemic. Evidence suggests that CT can guide intelligent responses to societal and global problems. We are NOT claiming that CT skills will be a universal solution for the many real-world problems that we confront in contemporary society, or that everyone will substitute CT for other decision-making practices, but we do believe that systematic education in CT can help many people become better thinkers, and we believe that this is an important step toward creating a society that values and practices routine CT. The challenges are great, but the tools to tackle them are available, if we are willing to use them.

Author Contributions

Conceptualization, D.F.H. and D.S.D.; resources, D.F.H.; data curation, writing—original draft preparation, D.F.H.; writing—review and editing, D.F.H. and D.S.D. All authors have read and agreed to the published version of the manuscript.

This research received no external funding.

Institutional Review Board Statement

No IRB Review.

Informed Consent Statement

No Informed Consent.

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Abrami Philip C., Bernard Robert M., Borokhovski Evgueni, Wade C. Anne, Surkes Michael A., Tamim Rana, Zhang Dai. Instructional interventions affecting critical thinking skills and dispositions: A Stage 1 meta-analysis. Review of Educational Research. 2008;78:1102–34. doi: 10.3102/0034654308326084.
  • Abrami Philip C., Bernard Robert M., Borokhovski Evgueni, Waddington David I., Wade C. Anne. Strategies for teaching students to think critically: A meta-analysis. Review of Educational Research. 2015;85:275–341. doi: 10.3102/0034654314551063.
  • Burhan Nik Ahmad Sufian, Kurniawan Yohan, Sidek Abdul Halim, Mohamad Mohd Rosli. Crimes and the Bell curve: The role of people with high, average, and low intelligence. Intelligence. 2014;47:12–22. doi: 10.1016/j.intell.2014.08.005.
  • Butler Heather A., Pentoney Christopher, Bong Maebelle P. Predicting real-world outcomes: Critical thinking ability is a better predictor of life decisions than intelligence. Thinking Skills and Creativity. 2017;25:38–46. doi: 10.1016/j.tsc.2017.06.005.
  • Cheng Vincent Chi-Chung, Wong Shuk-Ching, Chuang Vivien Wai-Man, So Simon Yung-Chun, Chen Jonathan Hon-Kwan, Sridhar Sidharth, To Kelvin Kai-Wang, Chan Jasper Fuk-Wu, Hung Ivan Fan-Ngai, Ho Pak-Leung, et al. The role of community-wide wearing of face mask for control of coronavirus disease 2019 (COVID-19) epidemic due to SARS-CoV-2. Journal of Infection. 2020;81:107–14. doi: 10.1016/j.jinf.2020.04.024.
  • Chu Derek K., Akl Elie A., Duda Stephanie, Solo Karla, Yaacoub Sally, Schunemann Holger J. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: A systematic review and meta-analysis. Lancet. 2020;395:1973–87. doi: 10.1016/S0140-6736(20)31142-9.
  • Donovan Sarah J., Guss C. Dominick, Naslund Dag. Improving dynamic decision-making through training and self-reflection. Judgment and Decision Making. 2015;10:284–95.
  • Drozdick Lisa Whipple, Wahlstrom Dustin, Zhu Jianjun, Weiss Lawrence G. The Wechsler Adult Intelligence Scale—Fourth Edition and the Wechsler Memory Scale—Fourth Edition. In: Flanagan Dawn P., Harrison Patti L., editors. Contemporary Intellectual Assessment: Theories, Tests, and Issues. The Guilford Press; New York: 2012. pp. 197–223.
  • Gigerenzer Gerd. When all is just a click away: Is critical thinking obsolete in the digital age? In: Sternberg Robert J., Halpern Diane F., editors. Critical Thinking in Psychology. 2nd ed. Cambridge University Press; Cambridge: 2020. pp. 197–223.
  • Gottfredson Linda S. Why g matters: The complexity of everyday life. Intelligence. 1997;24:79–132. doi: 10.1016/S0160-2896(97)90014-3.
  • Grossmann Igor, Varnum Michael E. W., Na Jinkyung, Kitayama Shinobu, Nisbett Richard E. A route to well-being: Intelligence versus wise reasoning. Journal of Experimental Psychology: General. 2013;142:944–53. doi: 10.1037/a0029560.
  • Halpern Diane F. Halpern Critical Thinking Assessment. Schuhfried Test Publishers; Modling: 2018. Available online: www.schuhfried.com (accessed on 30 March 2021).
  • Halpern Diane F., Butler Heather A. Is critical thinking a better model of intelligence? In: Sternberg Robert J., editor. The Nature of Intelligence. 2nd ed. Cambridge University Press; Cambridge: 2020. pp. 183–96.
  • Halpern Diane F., Dunn Dana S. Thought and Knowledge: An Introduction to Critical Thinking. 6th ed. Taylor & Francis; New York: Forthcoming.
  • Halpern Diane F., Millis Keith, Graesser Arthur, Butler Heather, Forsyth Carol, Cai Zhiqiang. Operation ARA: A computerized learning game that teaches critical thinking and scientific reasoning. Thinking Skills and Creativity. 2012;7:93–100. doi: 10.1016/j.tsc.2012.03.006.
  • Hart Research Associates. Employers Express Confidence in Colleges and Universities: See College as Worth the Investment, New Research Finds. 2018 Aug 29. Available online: https://hartresearch.com/employers-express-confidence-in-colleges-and-universities-see-college-as-worth-the-investment-new-research-finds/ (accessed on 30 March 2021).
  • Heijltjes Anita, Gog Tamara van, Lippink Jimmie, Paas Fred. Unraveling the effects of critical thinking instructions, practice, and self-explanation on students’ reasoning performance. Instructional Science. 2015;43:487–506. doi: 10.1007/s11251-015-9347-8.
  • Holmes Natasha G., Wieman Carl E., Bonn Doug A. Teaching critical thinking. Proceedings of the National Academy of Sciences. 2015;112:11199–204. doi: 10.1073/pnas.1505329112.
  • Hunter John E., Schmidt Frank L. Intelligence and job performance: Economic and social implications. Psychology, Public Policy, and Law. 1996;2:447–72. doi: 10.1037/1076-8971.2.3-4.447.
  • Kozyreva Anastasia, Lewandowsky Stephan, Hertwig Ralph. Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest. 2020;21. doi: 10.1177/1529100620946707. Available online: https://www.psychologicalscience.org/publications/confronting-digital-challenges-with-cognitive-tools.html (accessed on 30 March 2021).
  • Kretzschmar Andre, Neubert Jonas C., Wustenberg Sascha, Greiff Samuel. Construct validity of complex problem solving: A comprehensive view on different facets of intelligence and school grades. Intelligence. 2016;54:55–69. doi: 10.1016/j.intell.2015.11.004.
  • Lang Jonas W. B., Kell Harrison J. General mental ability and specific abilities: Their relative importance for extrinsic career success. Journal of Applied Psychology. 2020;105:1047–61. doi: 10.1037/apl0000472.
  • Lovelace Berkeley., Jr. Medical Historians Compare the Coronavirus to the 1918 Flu Pandemic: Both Were Highly Political. [(accessed on 30 March 2021)]; CNBC. 2020 Available online: https://www.cnbc.com/2020/09/28/comparing-1918-flu-vs-corona virus.html?fbclid=IwAR1RAVRUOIdN9qqvNnMPimf5Q4XfV-pn_qdC3DwcfnPu9kavwumDI2zq9Xs
  • Marr Rhuaridh. Iranian Cleric Claims COVID-19 Vaccine Can Make People Gay. [(accessed on 30 March 2021)]; Metro Weekly. 2021 Available online: https://www.metroweekly.com/2021/02/iranian-cleric-claims-covid-19-vaccine-can-make-people-gay/
  • McCabe Kira O., Lubinski David, Benbow Camilla P. Who shines most among the brightest?: A 25-year longitudinal study of elite STEM graduate students. Journal of Personality and Social Psychology. 2020; 119 :390–416. doi: 10.1037/pspp0000239. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mega Emiliano R. COVID Has Killed more than One Million People. How Many more will Die? [(accessed on 30 March 2021)]; Nature. 2020 Available online: https://www.nature.com/articles/d41586-020-02762-y [ PubMed ]
  • Nickerson Raymond S. Developing intelligence through instruction. In: Sternberg Robert J., editor. The Cambridge Handbook of Intelligence. 2nd ed. Cambridge University Press; Cambridge: 2020. pp. 205–37. [ Google Scholar ]
  • OECD . The Survey of Adult Skills: Reader’s Companion. 3rd ed. OECD Publishing; Paris: 2019. OECD Skills Studies. [ CrossRef ] [ Google Scholar ]
  • Smith Matthew. Why won’t Britons Wear Face Masks? [(accessed on 30 March 2021)]; YouGov. 2020 Available online: https://yougov.co.uk/topics/health/articles-reports/2020/07/15/why-wont-britons-wear-face-masks
  • Stanovich Keith E. What Intelligence Tests Miss: The Psychology of Rational Thought. Yale University Press; New Haven: 2009. [ Google Scholar ]
  • Stanovich Keith E., West Richard F. On the failure of cognitive ability to predict my-side bias and one-sided thinking biases. Thinking & Reasoning. 2008; 14 :129–67. doi: 10.1080/13546780701679764. [ CrossRef ] [ Google Scholar ]
  • Stanovich Keith E., West Richard F. What intelligence tests miss. The Psychologist. 2014; 27 :80–83. doi: 10.5840/inquiryctnews201126216. [ CrossRef ] [ Google Scholar ]
  • Sternberg Robert J. A theory of adaptive intelligence and its relation to general intelligence. Journal of Intelligence. 2019; 7 :23. doi: 10.3390/jintelligence7040023. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tang Julian W. COVID-19: Interpreting scientific evidence—Uncertainty, confusion, and delays. BMC Infectious Diseases. 2020; 20 :653. doi: 10.1186/s12879-020-05387-8. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wirthwein Linda, Rost Detlef H. Giftedness and subjective well-being: A study with adults. Learning and Individuals Differences. 2011; 21 :182–86. doi: 10.1016/j.lindif.2011.01.001. [ CrossRef ] [ Google Scholar ]


What is Logical Thinking? A Beginner's Guide 


Whether you're solving a complex problem, engaging in critical discussions, or just navigating your daily routines, Logical Thinking plays a pivotal role in ensuring that your thoughts and actions are rational and coherent. In this blog, we will discuss What is Logical Thinking in detail, its importance, and its components. You'll also learn about the skills that make up Logical Thinking and how to develop them.

Table of contents  

1)  Understanding Logical Thinking 

2)  Components of Logical Thinking 

3)  Why is Logical Thinking important? 

4)  What are Logical Thinking skills?   

5)  Developing Logical Thinking skills 

6)  Exercises to improve Logical Thinking 

7)  Conclusion 

Understanding Logical Thinking  

Logical Thinking is the capacity to employ reason and systematic processes to analyse information, establish connections, and reach well-founded conclusions. It entails a structured and rational approach to problem-solving and decision-making. 

For example, consider a scenario where you're presented with a puzzle. To logically think through it, you would assess the provided clues, break down the problem into smaller elements, and systematically find potential solutions. You'd avoid hasty or emotion-driven judgments and rely on evidence and sound reasoning to arrive at the correct answer, showcasing the essence of Logical Thinking in problem-solving.


Components of Logical Thinking  

Now that we know What is Logical Thinking, let's move on to its key components. Logical Thinking comprises several components that work together to facilitate reasoned analysis and problem-solving:

1)  Deductive reasoning : Deductive reasoning involves drawing specific conclusions from general premises or facts. It's like moving from a broad idea to a more specific conclusion. For example, if all humans are mortal, and Socrates is a human, then you can logically conclude that Socrates is mortal. 

2)  Inductive reasoning : Inductive reasoning is the process of forming general conclusions based on specific observations or evidence. It's the opposite of deductive reasoning. For instance, if you observe that the sun has risen every day, you might inductively reason that the sun will rise again tomorrow.  

3)  Causal inference : Causal inference is the ability to identify cause-and-effect relationships between events, actions, or variables. It involves understanding that one event or action can lead to another event as a consequence. In essence, it's the recognition that a specific cause produces a particular effect.  

4)  Analogy : Analogical reasoning or analogy involves drawing similarities and making comparisons between two or more situations, objects, or concepts. It's a way of applying knowledge or understanding from one context to another by recognising shared features or characteristics. Analogical reasoning is powerful because it allows you to transfer what you know in one domain to another, making it easier to comprehend and solve new problems. 
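The first two reasoning modes above can be contrasted in a short sketch. This is a toy illustration, not from the blog; the function names and data are invented for the example:

```python
# Deduction: a conclusion that follows with certainty from general premises.
def deduce_mortality(premises: dict, subject: str) -> bool:
    """If all humans are mortal and the subject is human, the subject is mortal."""
    return premises["all_humans_mortal"] and subject in premises["humans"]

# Induction: a generalisation from repeated observations; probable, not guaranteed.
def induce_sunrise(observations: list) -> bool:
    """If the sun rose in every observation so far, expect it to rise tomorrow."""
    return all(obs == "sun rose" for obs in observations)

premises = {"all_humans_mortal": True, "humans": {"Socrates", "Plato"}}
print(deduce_mortality(premises, "Socrates"))  # True: certain, given the premises
print(induce_sunrise(["sun rose"] * 365))      # True: probable, never certain
```

The asymmetry is the point: the deductive conclusion cannot be false if the premises are true, while the inductive one could be overturned by a single new observation.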

Why is Logical Thinking Important?  


1)  Effective problem-solving : Logical Thinking equips individuals with the ability to dissect complex problems, identify patterns, and devise systematic solutions. Whether it's troubleshooting a technical issue or resolving personal dilemmas, Logical Thinking ensures that problems are approached with a structured and efficient methodology. 

2)  Enhanced decision-making : Making sound decisions is a cornerstone of success in both personal and professional life. Logical Thinking allows individuals to evaluate options, consider consequences, and choose the most rational course of action. This is particularly critical in high-stakes situations. 

3)  Critical thinking : Logical Thinking is at the core of critical thinking. It encourages individuals to question assumptions, seek evidence, and challenge existing beliefs. This capacity for critical analysis fosters a deeper understanding of complex issues and prevents the acceptance of unfounded or biased information. 

4)  Effective communication : In discussions and debates, Logical Thinking helps individuals express their ideas and viewpoints clearly and persuasively. It enables individuals to construct well-structured arguments, provide evidence, and counter opposing views, fostering productive and respectful communication. 

5)  Academic and professional success : Logical Thinking is highly valued in educational settings and the workplace. It allows students to excel academically by tackling challenging coursework and assignments. In the professional world, it's a key attribute for problem-solving, innovation, and career advancement. 

6)  Avoiding Logical fallacies : Logical Thinking equips individuals with the ability to recognise and avoid common logical fallacies such as circular reasoning, straw man arguments, and ad hominem attacks. This safeguards them from being deceived or manipulated by flawed or deceptive arguments. 


What are Logical Thinking skills?  

Logical Thinking skills are cognitive abilities that allow individuals to process information, analyse it systematically, and draw reasonable conclusions. These skills enable people to approach problems, decisions, and challenges with a structured and rational mindset.  

Developing Logical Thinking skills  

Developing strong Logical Thinking skills is essential for improved problem-solving, decision-making, and critical analysis. Here are some key strategies to help you enhance your Logical Thinking abilities.   

1)  Practice critical thinking : Engage in activities that require critical thinking, such as analysing articles, solving puzzles, or evaluating arguments. Regular practice sharpens your analytical skills.  

2)  Learn formal logic : Study the principles of formal logic, which provide a structured approach to reasoning. This can include topics like syllogisms, propositional logic, and predicate logic. 

3)  Identify assumptions : When faced with a problem or argument, be aware of underlying assumptions. Question these assumptions and consider how they impact the overall reasoning. 

4)  Break down problems : When tackling complex problems, break them down into smaller, more manageable components. Analyse each component individually before looking at the problem as a whole. 

5)  Seek diverse perspectives : Engage in discussions and debates with people who hold different viewpoints. This helps you consider a range of perspectives and strengthens your ability to construct arguments and counter opposing ones. 

6)  Read widely : Reading a variety of materials, from academic articles to literature, exposes you to different modes of reasoning and argumentation. This broadens your thinking and enhances your ability to connect ideas.  

7)  Solve puzzles and brain teasers : Engaging in puzzles, riddles, and brain teasers challenges your mind and encourages creative problem-solving. It's an enjoyable way to exercise your Logical Thinking. 

8)  Develop mathematical skills : Mathematics is a discipline that heavily relies on Logical Thinking. Learning and practising mathematical concepts and problem-solving techniques can significantly boost your logical reasoning skills. 
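The formal logic suggested in point 2 can start very small. As a hedged illustration (the helper names here are invented for this sketch), a few lines of Python can verify that modus ponens, a basic rule of propositional logic, holds under every possible truth assignment:

```python
from itertools import product

def modus_ponens(p: bool, q: bool) -> bool:
    """Evaluate ((p -> q) and p) -> q, encoding 'a -> b' as 'not a or b'."""
    implies = lambda a, b: (not a) or b
    return implies(implies(p, q) and p, q)

# A tautology is a formula that is True in every row of its truth table.
assert all(modus_ponens(p, q) for p, q in product([True, False], repeat=2))
print("modus ponens holds in all four truth assignments")
```

Enumerating truth tables like this is exactly the kind of systematic, exhaustive checking that formal logic trains: no case is skipped and no conclusion rests on intuition alone.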


Exercises to improve Logical Thinking  

Enhancing your Logical Thinking skills is achievable through various exercises and activities. Here are some practical exercises to help you strengthen your Logical Thinking abilities:  

1)   Sudoku puzzles : Solve Sudoku puzzles, as they require logical deduction to fill in the missing numbers.  

2)   Crossword puzzles : Crosswords challenge your vocabulary and logical word placement.  

3)  Brain teasers : Engage in brain teasers and riddles that encourage creative problem-solving.  

4)  Chess and board games : Play strategic board games like chess, checkers, or strategic video games that require forward thinking and planning.  

5)  Logical argumentation : Engage in debates or discussions where you must construct reasoned arguments and counter opposing viewpoints.  

6)  Coding and programming : Learn coding and programming languages which promote structured and Logical Thinking in problem-solving. 

7)  Mathematical challenges : Solve mathematical problems and equations, as mathematics is inherently logical.  

8)   Mensa puzzles : Work on Mensa puzzles, which are designed to test and strengthen Logical Thinking skills. 

9)  Logic games : Play logic-based games like Minesweeper or Mastermind.  

10)   Logical analogy exercises : Practice solving analogy exercises, which test your ability to find relationships between words or concepts.  

11)  Visual logic puzzles : Tackle visual logic puzzles like nonograms or logic grid puzzles. 

12)  Critical reading : Read books, articles, or academic papers and critically analyse the arguments and evidence presented. 

13)  Coding challenges : Participate in online coding challenges and competitions that require logical problem-solving in coding. 

14)  Scientific method : Conduct simple science experiments or projects, applying the scientific method to develop hypotheses and draw logical conclusions.  

15)  Poker or card games : Play card games like poker, where you must strategise and make logical decisions based on probabilities and information. 

16)  Analyse real-world situations : Analyse real-world situations or news stories, evaluating the information, causes, and potential consequences. 

These exercises will help you practice and enhance your Logical Thinking skills in a fun and engaging way, making them an integral part of your problem-solving and decision-making toolkit. 
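Several of the exercises above (Sudoku, logic grids, Minesweeper) rest on the same move: eliminate what the constraints rule out and see what remains. A minimal sketch in Python, with an invented grid used purely for illustration:

```python
# Candidate elimination for one Sudoku cell: a number is possible only if
# it does not already appear in the cell's row, column, or 3x3 box.
def candidates(grid, row, col):
    used = set(grid[row])                          # numbers in the same row
    used |= {grid[r][col] for r in range(9)}       # numbers in the same column
    br, bc = 3 * (row // 3), 3 * (col // 3)        # top-left of the 3x3 box
    used |= {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
    return set(range(1, 10)) - used                # 0 (empty) never blocks anything

# Invented example grid: 0 marks an empty cell.
grid = [[1, 2, 3, 4, 5, 6, 7, 8, 0]] + [[0] * 9 for _ in range(8)]
print(candidates(grid, 0, 8))  # {9}: the row alone forces the last cell
```

When elimination leaves exactly one candidate, the cell is solved by pure deduction, which is why these puzzles are such reliable practice for logical reasoning.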


Conclusion  

In this blog, we have discussed What is Logical Thinking, its importance, its components, and ways to improve this skill. When you learn how to think logically, you gather as much relevant information as possible, analyse the facts, and methodically choose the best way forward with your decision. Logical Thinking is an essential tool for brainstorming ideas, assessing issues and finding solutions. 



What is logical thinking?


Logical thinking can also be defined as the act of analysing a situation and coming up with a sensible solution. It is similar to critical thinking. Logical thinking uses reasoning skills to objectively study a problem, which helps you reach a rational conclusion about how to proceed. For example, when you face a problem in the office and address it using the available facts, you are using logical reasoning skills.

In this write-up, we will explore tips on how you can improve your logical thinking skills and the reasons why logical thinking can help you be a stronger professional.

Now the question arises in our mind, why are logical thinking skills important?

Also Read – What is Empathy in Design Thinking?

Logical thinking skills play a very important and necessary role in developing your career because they can help you reason through important decisions, solve problems, generate creative ideas, and set goals. Whether you want to advance your career or have just entered the industry, you will encounter challenges daily that require logical reasoning skills. The stronger your logical thinking skills are, the more easily you will be able to come up with solutions and plans that can benefit you and your workplace.

There are many ways in which you can strengthen logical thinking in your daily work.

Methods that help you develop your logical thinking skills are:

  • Spend time on creative hobbies.
  • Practice questioning.
  • Socialize with others.
  • Learn a new skill.
  • Anticipate the outcomes of your decisions.

1. Spending time on creative hobbies

It has been observed that creative hobbies like drawing, painting, writing, or playing music can stimulate the brain and help promote logical thinking. Creative thinking, in a way, naturally develops problem-solving abilities that can help you become a better performer at your workplace.

Let's consider another example: learning a new instrument requires deep thought and concentration. The logical thinking skills that you gain from the process of learning a new instrument can help you approach your work more intently, developing your ability to solve problems with more flexibility and ease.

In addition to this, creative hobbies also help reduce stress. When your stress levels are manageable, you will have an easier time focusing and making logical decisions wherever required. There are many different ways in which you can handle stress, but developing a creative mind is especially productive and can help you bolster both personal and professional life.

2. Practice questioning

Another great way to strengthen your logical thinking skills is to question things that you typically accept as fact. When you regularly ask questions, it helps you view situations more completely and intricately, allowing you to approach problems at work more logically and creatively.

Asking more and more questions often leads to discoveries about topics you had not considered before, which may encourage you to explore further. This method can be used anywhere, especially at work. For example, pick a department at your workplace that you are not familiar with and create a list of questions about anything that needs clarity or understanding. This will help you understand its purpose.

Let us take an example. If you work in the sales-marketing department and want to know more about search engine optimization skills , consider asking someone in that department for an overview to learn more about their current projects and processes. This will help you think more critically about the role you would be taking at work as it relates to that team.

3. Socialize with others

Socializing and building relationships with others help you broaden your perspective, giving you more opportunities to develop your logical thinking skills. When you get to know the point of view of other people, it helps you approach problems at work in a new and different way.

There are many ways in which you can invest time in building relationships. It can be from participating in an activity to simply eating lunch or meeting over coffee together regularly. It is truly said that the more logically you can handle problems at work, the more easily you will be able to advance in your career.

4. Learn a new skill

Learning a new skill can also help in sharpening logical skills.

If you take the opportunity to learn as often as possible, you will apply the same level of deliberate thinking to your job, making you more successful.

For example, suppose you decide to start learning a new coding language. This process will require careful thinking and planning. Practicing every day will help to put you in the mindset of thoughtfully approaching problems at work and will also help you develop a new skill that will help you advance your career.

5. Anticipating the outcome of your decisions

When you are working to strengthen your logical thinking skills, it is helpful for you to consider what impact your decisions might have in the future. The closer you pay attention to the results of your decisions and analyze them, the easier the process will become.

Whenever you come up with a solution to a problem at the workplace, try to think about what the outcome may be. Slowly and eventually, you will find it easier to think of your decisions’ immediate and long-term results. This is an important aspect of logical thinking.

Logical skills can be strengthened with daily practice. When you apply these exercises regularly, and learn more from professional courses, you will find yourself starting to naturally approach everyday decisions at work with a more logical perspective.


Macquarie University

The development of statistical reasoning in primary school students

While formal statistical practices are not generally accessible to students in the primary years of schooling, the principles underpinning statistical thinking and reasoning, such as posing questions, collecting data, comparing groups, and representing and inferring from data, are relevant in primary mathematics (Watson et al., 2018). Recent Australian studies by English (2012, 2013, 2018), Fielding-Wells (2014, 2018a,b), Kinnear (2013, 2018), Makar (2014, 2016, 2018), Mulligan (2015) and Watson (2018) have focused on primary school students' capacities to engage in data modelling and on statistical reasoning more broadly. An early years' approach to the teaching of statistics includes students' personal experiences, encourages self-collected data sets, and emphasises the reasoning process rather than outcomes or conclusions (Doerr et al., 2017). However, how young students develop and apply the modelling and refinement process is not clearly understood, especially when working with an abstract or complex data set. This thesis aimed to gain a more coherent understanding of the developmental aspects of Grade 1 through 4 students' statistical reasoning and metarepresentational competence, with explicit emphasis on predictive reasoning.

Three interconnected design studies on model-based reasoning and predictive reasoning were conducted with 46 Australian students drawn from one cohort of a single, independent, metropolitan primary school. In the first design study, nine high-ability Grade 1 students created a word-based model for categorisation of self-portraits drawn by students in other grades, and assessed the model using three reasoning tasks. Of interest were the features of the modelling process observed in Grade 1 students, and how students used test data collected from the model to inform judgements regarding its efficacy and limitations. The second design study focused on predictive reasoning: how Grade 2 students used the variability of a temperature table to inform their predictions, how they justified those predictions, and how they used probabilistic language. Ten high-ability Grade 2 students, including seven students retained from the previous study, predicted maximum monthly temperatures from a temperature table, then plotted their predictions against background temperature readings using TinkerPlots™.

For both design studies, student predictions, representations and explanations were coded using three levels of statistical reasoning: idiosyncratic, transitional and quantitative (Leavy, 2008). Seven of the Grade 1 students were observed using data-based reasoning when justifying and revising their decisions. Six of the Grade 2 students made predictions similar to other monthly values in the data table, increasing to nine students after plotting the predictions with TinkerPlots™. All ten students used probabilistic language when describing the data set, including terms such as outliers, clusters and range.

Following this pilot work, the main study employed 46 students from Grade 3, and 44 of the same students from Grade 4, in a longitudinal teaching experiment. Students predicted maximum monthly temperatures for the current year using a data table containing past maximum temperatures, represented the data table using informal freehand inscriptions or graphing, and described their predictive strategies in verbal and written form. Data were collected at the beginning of Grade 3 and the beginning and end of Grade 4 using the same tasks. Data were coded using a data lenses framework (Konold et al., 2015) in Grade 3 and a framework for analysis of structural features (Awareness of Mathematical Pattern and Structure [AMPS]) (Mulligan & Mitchelmore, 2009) in Grades 3 and 4. Most Grade 4 students (87%) made predictions within the historical range, compared with about half in Grade 3 (54%). Representations included co-ordinate graphing (column, line and dot plots) and were more sophisticated in Grade 4, with 57% demonstrating data transnumeration, while in Grade 3 they were predominantly idiosyncratic or copies of the data table. Grade 4 students were more likely (79%) than Grade 3 students (51%) to use and describe predictions based on extraction, clustering, aggregation, noticing seasonal trends and range, identifying causal and random variation, and observing measures of central tendency. Large individual differences emerged: three developmental pathways are illustrated through case studies of high, average and low ability students. This range suggests that pathways for predictive reasoning are somewhat flexible or idiosyncratic.

The design studies in this thesis demonstrated the advanced potential of some young students to reason statistically: Grade 1 students developed a viable word-based model using a complex data set, and Grade 2 students employed TinkerPlots™ to critique their data predictions. Levels of statistical reasoning in these students were higher than previously reported in studies of students in first and second grade, such as those by Makar (2016) and Lehrer and Schauble (2000b), as demonstrated through their use of data when justifying their reasoning.

The longitudinal study on student predictive reasoning and meta-representational competence contributes a more in-depth, fine-grained analysis of the possible developmental sequence of these capacities across Grades 3 and 4. Primary school students used contextual cues and data content when making predictions, and appeared to make realistic predictions from data tables before being able to describe viable prediction strategies or to select data for representational purposes. However, other skills appear to develop unevenly: some students develop meta-representational competence and formal graphing before reasoning about their strategies, while others develop reasoning strategies before meta-representational competence. Intermediate stages of transnumeration of data tables to formal graphs were described, providing a comprehensive longitudinal set of student representations from a single data set. The studies contribute to a growing body of research that investigates the predictive and data-modelling capacities of young students, and make a distinct contribution by reporting on the use of TinkerPlots™ as a visualisation tool with second graders. The research supports the inclusion and extension of curriculum reform highlighting data-driven learning, and the development of statistical concepts that are integral to statistical literacy and mathematics learning. Research implications include arguments for more explicit outcomes on informal statistical inference and data exploration in the Statistics and Probability strand of the early years mathematics curriculum. This needs to be accompanied by newly developed professional development programs, resources and support for teachers' acquisition of pedagogical content knowledge in statistical reasoning, and by extended opportunities for primary school students to engage in informal data representation prior to the introduction of formal graphing instruction.


Concepts and Reasoning: a Conceptual Review and Analysis of Logical Issues in Empirical Social Science Research

  • Published: 08 July 2023
  • Volume 58, pages 502–530 (2024)


  • Qingjiang Yao (ORCID: orcid.org/0000-0002-0550-4211)


A substantial number of social science studies have shown a lack of conceptual clarity, an inadequate understanding of the nature of empirical research approaches, and an undue preference for deduction, which have caused much confusion, created paradigmatic incommensurability, and impeded scientific advancement. This study, through a conceptual review and analysis of canonical discussions of concepts, of the reasoning approaches of deduction and induction, and of their applications in social science theorization by philosophers and social scientists, sets out to unveil the logical nature of empirical research and to examine the legitimacy of the preference for deduction among social scientists. The findings note that conceptual clarity, as the foundation of social science research, exchange, and replication, can be achieved through an interdisciplinary emphasis on conceptual analysis to establish universal measurements, and that the primacy of deduction in the social sciences needs to concede to, or be balanced with, induction for new knowledge, more discoveries, and scientific advancement. The study recommends that institutions and researchers of the social sciences invest more in conceptual analysis and inductive research, through collaboration and separate efforts.
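The abstract's contrast between deduction and induction can be made concrete with a toy sketch (my own illustration, not drawn from the article): deduction applies a general rule to a particular case, while induction generalizes, fallibly, from observed cases.

```python
# Toy illustration of the two reasoning directions contrasted in the
# abstract (an illustrative sketch, not code from the article).

# Deduction: a general rule plus a particular case entails a conclusion.
def deduce(rule, case):
    return rule(case)

def all_swans_white(swan):
    # General premise: every swan is white.
    return "white"

print(deduce(all_swans_white, "swan #1"))  # prints "white"

# Induction: from finitely many observations, infer a general (fallible) rule.
def induce(observations):
    colors = {color for _, color in observations}
    # Generalize only when every observed case agrees; new data can overturn it.
    return colors.pop() if len(colors) == 1 else None

observed = [("swan #1", "white"), ("swan #2", "white")]
print(induce(observed))  # prints "white"
```

The deductive conclusion is guaranteed by its premises; the inductive one is only as good as the sample, which is the asymmetry the article's argument turns on.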


Data Availability

This conceptual review paper involves no empirical data.


Acknowledgements

The author thanks Dr. Steven H. Chaffee, whose work on explication inspired this study.

Author information

Qingjiang Yao, Department of Communication & Media, Lamar University, P.O. Box 10050, Beaumont, TX, 77710, USA

Contributions

This paper is solely authored by Qingjiang (Q. J.) Yao, who bears all responsibility related to the paper.

Corresponding author

Correspondence to Qingjiang Yao.

Ethics declarations

The author has no financial or non-financial interests that are directly or indirectly related to the work submitted.

Ethical Approval

This conceptual research study was conducted in accordance with the applicable guidelines and regulations and involves no human participants; informed consent was therefore not required.

Competing interests

The author declares no competing interests.


About this article

Yao, Q. Concepts and Reasoning: a Conceptual Review and Analysis of Logical Issues in Empirical Social Science Research. Integr. psych. behav. 58 , 502–530 (2024). https://doi.org/10.1007/s12124-023-09792-x

Accepted: 14 June 2023

Issue Date: June 2024

  • Conceptual analysis
  • Hypothetico-deductive model
  • Falsification
  • Social sciences


Title: MultiMath: Bridging Visual and Mathematical Reasoning for Large Language Models

Abstract: The rapid development of large language models (LLMs) has spurred extensive research into their domain-specific capabilities, particularly mathematical reasoning. However, most open-source LLMs focus solely on mathematical reasoning, neglecting integration with visual input, despite the fact that many mathematical tasks rely on visual inputs such as geometric diagrams, charts, and function plots. To fill this gap, we introduce MultiMath-7B, a multimodal large language model that bridges the gap between math and vision. MultiMath-7B is trained through a four-stage process focusing on vision-language alignment, visual and math instruction-tuning, and process-supervised reinforcement learning. We also construct a novel, diverse and comprehensive multimodal mathematical dataset, MultiMath-300K, which spans K-12 levels with image captions and step-wise solutions. MultiMath-7B achieves state-of-the-art (SOTA) performance among open-source models on existing multimodal mathematical benchmarks and also excels on text-only mathematical benchmarks. Our model and dataset are available at this https URL.
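The abstract describes MultiMath-300K records as pairing image captions with step-wise solutions across K-12 levels. A minimal sketch of what one such record might look like (field names and structure are my own assumptions, not the dataset's actual schema):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a MultiMath-300K-style record, based only on the
# abstract's description (image captions and step-wise solutions spanning
# K-12). The schema below is illustrative, not the published one.
@dataclass
class MathVisionRecord:
    image_path: str               # e.g. a geometric diagram or function plot
    caption: str                  # natural-language description of the image
    question: str                 # the mathematical problem statement
    solution_steps: List[str] = field(default_factory=list)  # step-wise solution
    grade_level: str = "K-12"     # coarse difficulty label

    def final_answer(self) -> str:
        # By convention in this sketch, the last step holds the final answer.
        return self.solution_steps[-1] if self.solution_steps else ""

record = MathVisionRecord(
    image_path="triangle.png",
    caption="A right triangle with legs 3 and 4.",
    question="What is the length of the hypotenuse?",
    solution_steps=[
        "Apply the Pythagorean theorem: c^2 = 3^2 + 4^2 = 25.",
        "c = 5",
    ],
)
print(record.final_answer())  # prints "c = 5"
```

Step-wise solutions of this shape are what make process-supervised training signals possible, since each intermediate step can be checked or rewarded individually.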
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)



COMMENTS

  1. Does mathematics training lead to better logical thinking and reasoning

    In contrast, psychological research, which has been empirically investigating the concept of transferability of skills since the early 1900s, points quite oppositely to reasoning skills as being highly domain specific. Therefore, support for claims that studying mathematics engenders more than specific mathematics knowledge is highly pertinent.

  2. Logical Thinking

    Handbook of the History of Logic. William J. Mander, in Handbook of the History of Logic, 2008, 1 Introduction. The history of logical thought in Britain contains few texts more central than The Principles of Logic by F. H. Bradley. The date of its first appearance (1883) heralds a sharp and profound turn in logical thinking in Britain as, almost single-handedly, it shifted the main patterns of ...

  3. Studying and improving reasoning in humans and machines

    Further research utilizing open-source models and training/tuning experiments will be necessary to better understand the features that mitigate reasoning errors in these models.

  4. Bridging critical thinking and transformative learning: The role of

    In recent decades, approaches to critical thinking have generally taken a practical turn, pivoting away from more abstract accounts - such as emphasizing the logical relations that hold between statements (Ennis, 1964) - and moving toward an emphasis on belief and action.According to the definition that Robert Ennis (2018) has been advocating for the last few decades, critical thinking is ...

  5. Improving analytical reasoning and argument understanding: a quasi

    The ability to analyze arguments is critical for higher-level reasoning, yet previous research suggests that standard university education provides only modest improvements in students ...

  6. 19613 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on LOGICAL THINKING. Find methods information, sources, references or conduct a literature review on ...

  7. Thinking & Reasoning

    Thinking & Reasoning is dedicated to the understanding of human thought processes, with particular emphasis on studies on reasoning, decision-making, and problem-solving. Whilst the primary focus is on psychological studies of thinking, contributions are welcome from philosophers, artificial intelligence researchers and other cognitive scientists whose work bears upon the central concerns of ...

  8. Logical Reasoning and Learning

    Definition. Logical reasoning is a form of thinking in which premises and relations between premises are used in a rigorous manner to infer conclusions that are entailed (or implied) by the premises and the relations. Different forms of logical reasoning are recognized in philosophy of science and artificial intelligence.

  9. Reasoning in Research

    Before the modern concept of research and experimentation surfaced, the term the old philosophers used to denote 'research' was logical reasoning. Therefore, it is natural that some of the essential characteristics of logic have passed over into present-day research. Thus, the inductive and deductive methods of argument or reasoning became ...

  10. Neural foundations of logical and mathematical cognition

    Contrary to Piaget's theory, which postulated a logical stage of thinking by the age of 14 or 15, new studies on the cognitive psychology of reasoning have shown that adolescents and adults ...

  11. A scale on logical thinking abilities

    The aim of this study is to develop a scale to determine the logical thinking abilities of prospective mathematics teachers. The development process consists of two phases, namely, pre-study and validity-reliability studies. The scale was applied to 132 prospective mathematics teachers. The Cronbach α coefficient of the finalized scale was ...

  12. How our brains reason logically

    The aim of this article is to strengthen links between cognitive brain research and formal logic. The work covers three fundamental sorts of logical inferences: reasoning in the propositional calculus, i.e. inferences with the conditional "if...then", reasoning in the predicate calculus, i.e. inferences based on quantifiers such as "all", "some", "none", and reasoning with n ...

  13. Relationship between Logical Thinking, Metacognitive Skills, and

    This study examines the relationship between logical thinking, metacognitive skills, and problem-solving abilities. To accomplish the research purpose, 100 senior secondary ...

  14. Understanding the Complex Relationship between Critical Thinking and

    To address this research question, we focused on undergraduate thesis writers in biology courses at two institutions, Duke University and the University of Minnesota, and examined the extent to which students' scientific reasoning in writing, assessed in the undergraduate thesis using BioTAP, corresponds to students' critical-thinking ...

  15. The development of the reasoning brain and how to foster logical

    Abstract reasoning is difficult because it requires one to manipulate information without any referent in the real world. Knowledge is of no help. In fact, neuroscience research indicates that abstract and concrete reasoning rely on two different parts of the brain [5] (see Figure 3).

  16. How emotions affect logical reasoning: evidence from experiments with

    Logical reasoning problems. Logical reasoning goes back to the ancient Greek philosopher Aristotle and is today considered essential for success in school and daily life, as well as for all kinds of scientific discovery (Johnson-Laird, 2006). In the psychological lab it is often investigated by means of conditional reasoning tasks.

  17. Scientific Thinking and Reasoning

    Abstract. Scientific thinking refers to both thinking about the content of science and the set of reasoning processes that permeate the field of science: induction, deduction, experimental design, causal reasoning, concept formation, hypothesis testing, and so on. Here we cover both the history of research on scientific thinking and the ...

  18. Critical Thinking: A Model of Intelligence for Solving Real-World

    Critical Thinking as an Applied Model for Intelligence. One definition of intelligence that directly addresses the question about intelligence and real-world problem solving comes from Nickerson (2020, p. 205): "the ability to learn, to reason well, to solve novel problems, and to deal effectively with problems—often unpredictable—that confront one in daily life."

  19. Logical Reasoning in Formal and Everyday Reasoning Tasks

    Logical reasoning is of great societal importance and, as stressed by the twenty-first century skills framework, is also seen as a key aspect of the development of critical thinking. This study aims at exploring secondary school students' logical reasoning strategies in formal reasoning and everyday reasoning tasks. With task-based interviews among four 16- and 17-year-old pre-university students ...

  20. What is Logical thinking? An In-Depth Analysis

    Logical Thinking is the capacity to employ reason and systematic processes to analyse information, establish connections, and reach well-founded conclusions. It entails a structured and rational approach to problem-solving and decision-making. For example, consider a scenario where you're presented with a puzzle.

  21. What is Logical thinking?

    Logical thinking can also be defined as the act of analysing a situation and coming up with a sensible solution. It is similar to critical thinking. Logical thinking uses reasoning skills to objectively study any problem, which helps make a rational conclusion about how to proceed. For example, suppose you are facing a problem in the office; to address ...

  22. The development of statistical reasoning in primary school students

    While formal statistical practices are not generally accessible to students in the primary years of schooling, the principles underpinning statistical thinking and reasoning—such as posing questions, collecting data, comparing groups, and representing and inferring from data—are relevant in primary mathematics (Watson et al., 2018). Recent Australian studies by English (2012, 2013, 2018 ...

  23. Concepts and Reasoning: a Conceptual Review and Analysis of Logical

    A substantial number of social science studies have shown a lack of conceptual clarity, inadequate understanding of the nature of the empirical research approaches, and undue preference for deduction, which have caused much confusion, created paradigmatic incommensurability, and impeded scientific advancement. This study, through conceptual review and analysis of canonical discussions of ...

  25. MultiMath: Bridging Visual and Mathematical Reasoning for Large

    The rapid development of large language models (LLMs) has spurred extensive research into their domain-specific capabilities, particularly mathematical reasoning. However, most open-source LLMs focus solely on mathematical reasoning, neglecting the integration with visual injection, despite the fact that many mathematical tasks rely on visual inputs such as geometric diagrams, charts, and ...