Teaching Patterns of Critical Thinking: The 3CA Model—Concept Maps, Critical Thinking, Collaboration, and Assessment

This is a research report on teaching patterns of critical thinking using the competency-based 3CA model of classroom instruction (an acronym for the educational practices of Concept maps, Critical thinking, Collaboration, and Assessment) to change the grammar of schooling. Critical thinking is defined by the “WH questions” (“what, when, where, how, who, and why”) taken from Aristotle’s Nicomachean Ethics. These questions are threaded through the practices of concept maps, collaboration, and assessment. This conceptualization of patterns of thinking is influenced by Ludwig Wittgenstein’s account of the relations between the language games of practice and the language games of the mind. This study compares individual and collaborative approaches to teaching the critical thinking “WH questions” in a child development class. Students in the individual groups used more “what questions,” whereas students in the collaborative group used more “why and how questions.”


Rationale
Critical thinking is an important educational life skill, and there is widespread agreement about the need for critical thinking to improve achievement and deepen understanding across the disciplines. Teaching critical thinking is necessary in fields such as science education (Fettahlıoğlu & Kaleci, 2018), high school chemistry (Suardana, Redhana, Sudiatmika, & Selamat, 2018), business ethics, undergraduate engineering (Adair & Jaeger, 2016; Ralston & Bays, 2015), nursing/medical education (Papp et al., 2014), and music education (Kokkidou, 2013). Despite the widespread interest and research into the teaching of critical thinking, students do not have practical guidelines for applying critical thinking in different contexts. The solution to this problem is the seven consequences, "what, when, why, where, who, how, and what for," first described by Aristotle in the Nicomachean Ethics (Sloan, 2010). The six critical thinking questions, "what, when, where, how, who, and why," are a scheme that is widely respected in philosophy, used in the law, practiced in journalism, a cornerstone of inquiry in the sciences, and used in everyday conversation to gather information. This study of the Aristotelian critical thinking questions fills a gap in educational practice by providing a simple yet powerful definition of critical thinking and a set of tools easily understood by students.
To the best of our knowledge, this study of patterns of critical thinking is the first experimental study of Aristotelian critical thinking questions.

Educational Practice of Teaching Critical Thinking
The 3CA (Concept Maps, Critical Thinking, Collaboration, and Assessment) model is a competency-centered approach to instruction in which students learn patterns of critical thinking. The model has four components: (a) concept maps are a visual method for displaying information as nodes with connecting links; the nodes are visual representations of the knowledge being learned; (b) the links of the concept maps are the critical thinking questions; the links are an important conceptual innovation and make possible the measurement of patterns of critical thinking; (c) the collaborative phase includes the collaborative construction of a new shared map based on the individual concept maps; and (d) assessment occurs when students collaborate to generate multiple-reasoning items for their assessment. The items investigate the reasoning of students as they ask the questions: "what, when, where, how, who, and why." Each of these practices originates in the activities of the classroom, and through practice and use, the language games of everyday classroom life are transformed into the language games of thinking.

(SAGE Open, research article, DOI: 10.1177/2158244019885142; Zandvakili et al., 2019. Author affiliations: 1 University of Massachusetts Amherst, USA; 2 Columbia University, New York, NY, USA.)
Students first work individually to apply critical thinking questions to concept maps. This first step, or homework phase, of the model is necessary to ensure that all students have a shared base of knowledge. During the collaborative phase, students work together, exchange concept maps, and then create a new shared concept map using the critical thinking questions (Zandvakili, Washington, Gordon, & Wells, 2018). In the final step of the process, students work collaboratively to construct the multiple-reasoning questions that are the basis for the assessment of their performance in the course (Figure 1 depicts the components of the model).
As students construct and interrogate the structures of their concept maps, they are also seeing for the first time a representation of their own thinking. This combination of critical thinking and concept maps affords an opportunity to assess patterns of critical thinking by counting the frequency of use of the critical thinking questions as links between concepts (Figure 2). The collaborative component of the model shifts learning from an individual to a group mode of learning. Different patterns of critical thinking emerge as students exchange concept maps and engage in social and cognitive synthesis. Assessment is threaded throughout the model, and the culminating moment in the model is when students create multiple-reasoning questions to be used in their own evaluation for the course.

Overview
The aim of the study was to contrast individual and collaborative approaches to the construction of concept maps with critical thinking. We hypothesized that applying the critical thinking "WH questions" to a child development textbook would produce different patterns of critical thinking. We also investigated whether there were significant differences in critical thinking patterns that persisted over time periods.

Research Questions
Research Question 1: Are there significant differences between individual and collaborative groups in applying the critical thinking questions?
Research Question 2: Are there significant differences between groups in learning patterns of critical thinking questions over three time periods?
Research Question 3: Are there significant differences in student performance on student-made tests and tests by the textbook publisher?

Concept Maps: The Deconstruction of Text Into Personal Knowledge
Concept maps are node-link diagrams in which each node represents a concept and each link identifies the relationship between two concepts (Schroeder, Nesbit, Anguiano, & Adesope, 2017, p. 431). Concept maps originated at Cornell University in 1984 in the work of William Trochim and a doctoral student, Dorothy Torre; Trochim described concept maps as a form of visual or picture thinking that is fast, automatic, effortless, often unconscious, brings images to mind, spreads neural activation, and enables the individual or group to respond more easily than before (Donnelly, 2017, p. 186). When we understand something, we say that we see it. We arrive at the solution to a problem through "insight." To better communicate our ideas, we aim to make them "clear" (Fan, 2015). Likening visual experiences to cognitive processes suggests a metaphoric connection between how we see the world and how we think (Zandvakili et al., 2018).
Concept maps are examples of what Jonassen (2000) calls "mind tools" that amplify and reorganize cognition, extending the range of the human mind. Nesbit and Adesope (2006) reported that the collaborative construction of concept maps is more beneficial than working individually, which is consistent with the expectation that collaborative learning is more effective than individual learning (see the discussion of collaboration below). As mentioned in Zandvakili et al. (2018), Schroeder et al. (2017) found that using concept maps over time increased the effectiveness of the learning and retention of the concepts. In their meta-analysis of concept maps, they observed that concept maps or knowledge maps are diagrams that represent ideas as node-link assemblies, and that there has been a steady increase in the number of published studies over the past 30 years. All in all, concept maps are considered a good tool to assist the instructor in organizing knowledge and an appropriate tool for students to notice the important concepts in different materials (Novak, 1991; Jonassen, Beissner, & Yacci, 1993).

Critical Thinking
Research on critical thinking. There is now widespread consensus among scholars that critical thinking skills are teachable and learnable. For example, two programs to enhance critical thinking skills were described by Halpern (1998). In the first study, general problem-solving abilities and skills were taught based on the Piagetian theory of cognitive development. In the second study, the students were taught a particular type of critical thinking skill that used visual math presentations more like professionals than beginning students. Kennedy, Fisher, and Ennis (1991) surmised that pedagogical interventions designed to enhance students' critical thinking abilities have generally produced positive outcomes. A meta-analysis by Abrami et al. (2008) of 118 empirical studies found positive effects of teaching critical thinking, with a 0.34 average effect size. Deductive instruction, modeling, collaborative learning, and constructivist skills are among the strategies proposed by different researchers for applying critical thinking. According to many researchers, such as Halpern (1998), Abrami et al. (2008), Facione (1990), and Case (2005), the impact of explicit and direct instruction on improving critical thinking is positive, beneficial, and widespread. They also found that the effect size of the interventions varied as a function of the type of intervention. One of the persisting issues in applying critical thinking is the problem of the transfer of critical thinking from one domain to another. Abrami et al. (2008) documented this problem in their descriptions of the different interventions.
Critical thinking and asking questions. "Asking questions or interrogation is part of the natural history of what it is to be human" (Zandvakili et al., 2018). There is a small literature on individual children learning to ask questions, and that literature will be briefly reviewed. Pinker (2003) describes children as intuitive psychologists who recognize intentions before they copy them. Children have two motivations to be like others: the first is to benefit from the information and knowledge of others, and the second is normative, the desire to follow the norms of a community. Learning to ask questions is a normative experience that children learn and imitate. The desire to acquire knowledge is present in children between 2 and 5 years of age as they learn the language games of question asking. According to Chouinard, Harris, and Maratsos (2007), children ask an average of 107 questions an hour when engaged in conversation with adults. These youngsters are using language and conversation in a purposeful and intentional way to gather information and fill in gaps in their knowledge.
Asking questions is also a signature skill of detectives and scientists. The famous fictional English detective, Sherlock Holmes, continually amazes with his ability to observe the dress and self-presentation of the cast of characters in the novels. From careful observation, he notes the clues necessary to solve the baffling mystery, and he makes clear that asking "important questions" is not an easy task. Scientists, too, ask questions and use the clues from careful observations to solve problems.
Asking critical thinking "WH questions." According to Sloan (2010), the patterns of critical thinking questions "what, when, why, how, who, and where" were first described in Aristotle's Nicomachean Ethics. Aristotle asked how we should identify the dispositions that make for a virtuous person and how to judge whether an action is virtuous. Aristotle identified the seven consequences, "what, when, why, how, where, who, and what for," as a schema to determine whether an act is virtuous (Guest, 2017; Sloan, 2010). Sloan (2010) also mentioned that Cicero adapted the seven consequences as a rhetorical tool that became a staple of discourse in the legal world of the courts. These same seven consequences became the six "WH questions" with the dawn of modernity and are now the standard province of journalists.
The six "WH questions" collectively provide information for understanding a narrative. The answer to each question is a declarative statement that provides different information. The critical thinking "WH questions" are deceptively simple, singular, and important sources of information. These singular language games merge into the language games of thinking critically. The critical thinking games may start with who, when, what, where, how, or why, and novelists and scientists alike begin inquiry from these different perspectives. A focus upon a single "WH question" misses the point that much of information sharing involves the serial application of the different "WH questions" to a problem.
The study of patterns of data emerged with the cognitive science revolution and high-speed computers. Relying upon the cognitive and neurosciences, Mattson (2014) argues that the superior pattern processing of human minds is the foundation of imagination, thinking, and creativity. Superior pattern processing is made more salient through the use of computer technology that permits the multivariate analysis of data. Our study of the patterns of the six critical thinking questions is made possible by the philosophical analyses of Aristotle and by the cognitive and neurosciences that have created methodologies to analyze complex data for patterns. These educational practices are direct translations of philosophical inquiry and cognitive science into educational practice.

Collaboration
"The word collaboration is derived from the Latin collaborare and means to work together" (Zandvakili et al., 2018, p. 51). A half-century of research has confirmed that collaboration results in an improvement in educational achievement. Nevertheless, in schools, the focus remains upon individual learning rather than group or team learning. This tradition is reinforced and maintained by the increasing reliance upon summative assessment of the individual. In contrast to this tradition, Scardamalia and Bereiter (2014) emphasize that in the coming decades, team learning, and not individual learning, will be the focus of attention. They urge researchers to go beyond collaboration to study cognitive responsibility for team learning. They make their case by beginning with the observation that all learning is group learning. Their emphasis on what they call cognitive responsibility can be described under the rubric of team learning to include a shared commitment to achieving a goal or completing a task.
According to Dillenbourg (1999), cooperation is distinguished from collaboration because the latter involves participants working together on the same task, rather than in parallel on separate portions of the task with each person responsible for some portion of it. However, Dillenbourg (1999) also notes that some spontaneous division of labor may occur during collaboration; thus, the distinction between the two is not necessarily clear-cut (as cited in Lai, 2011).

Andrews and Rapp (2015) reviewed the literature on collaboration, citing its distinctive advantages and challenges in enhancing cognitive and psychological development. While the advantages of collaboration for the well-being of individual participants are social, affective, and psychological, the challenges include the passing on and incorporation of incorrect information into existing knowledge structures. It is also worth noting that the collaborative approach to teaching critical thinking is highly recommended by different scholars (Abrami et al., 2008; Bonk & Smith, 1998; Heyman, 2008; Thayer-Bacon, 2000).

Assessment
"Assessment is a process of reasoning from evidence guided by theories, models and data on the nature of knowledge representations and the development of competence and expertise in typical domains of classroom instruction" (Pellegrino, Chudowsky, & Glaser, 2001, p. 59; Pellegrino, DiBello, & Goldman, 2016). Pellegrino and his colleagues shift our attention to a systems approach that includes data, theory, practice, and evaluation instruments. This review of the 3CA model of assessment uses a systems approach to describe the elements of critical thinking maps, collaboration, and student-made multiple-reasoning items. The Pellegrino model of assessment also includes data, and the data supporting the 3CA model will be discussed later.
The first major source of evidence for the 3CA model is the creation of the critical thinking-concept maps, the result of applying critical thinking questions to concept maps, which is a unique, visual form of knowledge representation. Chen, Allen, and Jonassen (2018) found that combining concept maps and linguistic knowledge is a mixed-method approach that results in deeper learning. Lachner, Backfisch, and Nückles (2018) argue that the feedback received by students results in an increase in knowledge and understanding. The current approach to the application of the critical thinking questions asks students to apply a single critical thinking question to connect two concepts. The data for the efficacy of the critical thinking maps are found in the repeated measures of patterns of critical thinking from Weeks 1 through 9, which constitute a kind of formative assessment.
A second distinctive feature of the 3CA approach to assessment is the collaborative construction of critical thinking maps. This stage of instruction is also called social and cognitive synthesis; it is the moment in the model in which students exchange critical thinking maps and construct a shared map. The data for this stage are the collaborative assessment of critical thinking.
The last stage of assessment in the 3CA model is the collaborative construction of test items by students for their own assessment; later, the student-constructed test is compared with a publisher-produced test. These summative test data are the evidence for the efficacy of this stage of assessment. According to Ashtiani and Babaii (2007, p. 213), "cooperative test construction is the last temptation of educational reform." They situate collaborative test construction within the alternative approaches to assessment that emphasize learner-centered education featuring students as active participants in designing the assessment process. They cite the support of the following studies for cooperative test construction: Allwright (1984) argues that putting students in charge of creating and administering tests they have created reduces anxiety and helps make education a joyful experience; Murphey (1994) describes student-made tests as an "effective way to mine students' perceptions which teachers can use to build upon what a group knows as a whole" (p. 12); Rash (1997) proposes that student-constructed tests enable teachers to see where students are and provide students with information about the gaps in their knowledge; Baron-Cohen (2004) argues that student-constructed tests help learners remain accountable for their learning, recognize relevant materials, and promote positive relations between teacher and students; and Brink, Capps, and Sutko (2004) compared student-made tests with a standardized test in a manufacturing-engineering class and found that the creation of student-made tests was more beneficial for above-average students, and that students who prepared good comprehensive questions were significantly higher achievers than those who did not prepare good-quality questions.
The student construction of multiple-reasoning items is a five-part process: (a) indexing the question to the content of the chapter; (b) using the critical thinking questions as stems (e.g., why or how did this event happen?); (c) constructing the alternative answers for the question; (d) posting all the questions from all the students on Moodle, the class website; and (e) giving all the students access to all the items, from which the test that will partly determine their grade for the class is drawn.
The data collected for formative and summative assessment in this study are convincing evidence of the efficacy of the 3CA model of critical thinking. Student learning is the ultimate aim of classroom instruction, and the patterns of critical thinking are the evidence that should be the focus of inquiry.

Research Design
The design of this research study is both exploratory and experimental. It is exploratory because 3CA is a new model, and each one of the components changed and improved as a consequence of trial and error. The design is a comparative study with the random assignment of students to groups. This is a 3 × 3 factorial design with repeated measures: three social conditions (individual, individual-collaborative, and collaborative) and three time periods, each 3 weeks in duration. Formative assessments were made weekly using the critical thinking-concept maps.

Participants
The sample included 64 undergraduate students, ranging in age from 18 to 22 years with an average age of 19 years, enrolled in a general education course at a major university in the northeastern United States and satisfying their requirements for graduation. There were 22 students in the collaborative group and 42 students in the individual group. Eighty-two percent of the students were female, 12% male, and 6% did not self-identify; 60% of the class were White, 14% Asian, and 26% Other.

Setting
The current experiment was conducted over a period of 9 weeks, and each week all students prepared a concept map with critical thinking questions prior to coming to class; over the 9-week period, each student therefore produced nine concept maps with critical thinking. The data were collapsed into three time periods: Time 1 (data averaged over the first three sessions), Time 2 (data averaged over the second three sessions), and Time 3 (data averaged over the last three sessions).
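The collapsing of nine weekly measures into three time-period averages can be sketched as follows. The study's own analysis used SAS, so this Python fragment, with made-up weekly counts, is only an illustrative sketch.

```python
# Sketch of collapsing nine weekly "WH question" counts into three
# time-period averages. The weekly counts below are hypothetical
# illustrative data, not the study's actual values.
def collapse_into_periods(weekly_counts, weeks_per_period=3):
    """Average consecutive blocks of weekly counts into period means."""
    periods = []
    for start in range(0, len(weekly_counts), weeks_per_period):
        block = weekly_counts[start:start + weeks_per_period]
        periods.append(sum(block) / len(block))
    return periods

# e.g., one student's weekly frequency of "why" links across 9 weeks
weekly_why = [2, 3, 4, 5, 5, 6, 7, 8, 9]
print(collapse_into_periods(weekly_why))  # [3.0, 5.333..., 8.0]
```

Each element of the result corresponds to Time 1, Time 2, and Time 3, respectively.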

Procedures
Training. The class was a four-credit child development course that met for 4 hr once a week. The first day of class was devoted to teaching students the procedures for the experiment. The procedures focused on teaching guidelines for the construction of concept maps, implementation of critical thinking questions on the maps, and generation of multiple-reasoning questions.

Individual/homework phase of collaborative group (IC)
Step 1: At home, students generated one digital concept map of an entire chapter, applying the critical thinking strategies "what, when, where, how, who, and why," and brought the map to class.
Step 2: Students prioritized the key concepts in their map, ranking them from most to least important. The purpose of the prioritization list was to identify the key concepts for the construction of the multiple-reasoning questions.
Step 3: Students generated six questions based on the six critical thinking "WH questions."
Step 4: Students posted three documents on Moodle: the concept map, the prioritized list, and the six critical thinking questions.

Collaborative/in-class phase of collaborative group (CC)
Step 1 (social synthesis): Students were randomly paired together and learned to collaborate in the classroom setting.
Step 2 (cognitive synthesis): Each pair of students exchanged the concept maps of the chapter that they had generated individually at home. They gave feedback to each other, criticized, agreed, disagreed, and identified gaps in their knowledge. While they were discussing and giving feedback to each other, they applied the critical thinking strategies (what, how, why, when, where, and who) to a shared concept map. Each pair of students synthesized and combined the individual maps to generate a new collaborative map of the chapter.
Step 3 (collaborative prioritization): Students in pairs prioritized the concepts recognized in the collaborative maps and ranked them from the most important to the least important, to be used later for the generation of questions.
Step 4 (assessment): Each pair of students constructed a total of 12 critical thinking questions. They answered each other's questions, criticized, and revised the items as a group.
Step 5: A member of the team posted all the questions on Moodle so that all items were open to public view and available to all to study for the exams.

Note 1: The semester consisted of three blocks of 3 weeks each. During each block of 3 weeks, the students produced approximately 150 items. In addition to assessing themselves formatively each session during the semester, students were also informed that their questions would be used in the final exam. Grades were determined using the following rubric: 40%: concept maps/application of critical thinking; 35%: collaboration/creation of criterion-referenced questions; 25%: final (student-made exam/old exam).

Note 2: The students' test items were reviewed weekly and, if necessary, modified slightly by the instructor to improve accuracy, ease of understanding, and clarity between the stem and the multiple-reasoning alternatives.
Individual group (IN). The individual group was the control group in which students always worked alone during the homework phase and the in-class phase. During the homework phase of the individual group, students worked alone preparing their concept maps with critical thinking questions. This phase was exactly like the individual phase of the collaborative group. Both groups completed their homework assignments of creating their individual critical thinking-concept maps followed by the construction of multiple-reasoning questions.
The second step for the individual group was different from the collaborative group because the individual group did not work together to create a shared concept map with critical thinking questions. Instead, the individual group listened to a lecture, viewed a video relevant to the chapter, and created an individual concept map. The purpose of this exercise was to help students generalize and think visually and more critically about the issues and problems of the day. After watching the video and listening to a short lecture by the instructor, each student produced a concept map of the lecture/video and then created six critical thinking items and posted them to Moodle.
The grade for the individual group was based on the following factors: 40%: concepts/critical thinking; 35%: participation in class; 25%: final (student-made exam).

Data Analysis
Analysis of the concept maps-critical thinking questions. A breakthrough moment occurred when the researchers found a way to analyze the concept maps using the critical thinking questions. The insight of joining digital concept maps with the application of critical thinking questions resulted in a new tool for analyzing critical thinking skills. The use of digital maps with the critical thinking questions as links provided a database that lends itself to statistical analysis. This innovative methodology has the potential to provide a reliable and valid approach to measuring the processual events in learning and achievement. The data from this research are a demonstration of the efficacy of this new methodology.
The analysis of the concept map-critical thinking methodology takes the following steps: The frequency data were obtained by counting the frequency of each of the "WH questions" in the weekly concept maps and using SAS (Statistical Analysis System, 9.1.3, SAS Institute, Cary, NC, USA) for further statistical analysis. The digitalization of the critical thinking-concept maps creates the possibility of digitizing the complete process from data entry to data analysis.
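As a sketch of the counting step, a critical thinking-concept map can be represented as concept-link-concept triples and the "WH" links tallied. The map below is a hypothetical example in Python (the study itself used digital maps and SAS), not data from the study.

```python
from collections import Counter

# A concept map as (concept, WH-link, concept) triples; the concepts
# and links here are hypothetical illustrations, not study data.
concept_map = [
    ("attachment", "what", "emotional bond"),
    ("attachment", "why", "survival value"),
    ("secure attachment", "how", "sensitive caregiving"),
    ("strange situation", "who", "Ainsworth"),
    ("attachment", "why", "later relationships"),
]

# Count the frequency of each critical thinking question used as a link
wh_frequencies = Counter(link for _, link, _ in concept_map)
print(wh_frequencies)  # Counter({'why': 2, 'what': 1, 'how': 1, 'who': 1})
```

A table of such counts per student per week is exactly the kind of frequency database described above.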
The distributions of the data were tested using the Kolmogorov-Smirnov test, and a transformation was done when necessary to obtain homogeneous variance. The distribution of the data was obtained by PROC UNIVARIATE in SAS. The transformations were based on the Tukey ladder of powers, so that the skewed distributions were reshaped into normal distributions.
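The effect of the Tukey ladder of powers can be illustrated with a short sketch; the skewness function and the right-skewed sample below are hypothetical illustrations, not the study's procedure or data.

```python
import math

def skewness(xs):
    """Moment-based sample skewness g1 = m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def tukey_transform(xs, power):
    """Apply one rung of the Tukey ladder of powers (power=0 means log)."""
    if power == 0:
        return [math.log(x) for x in xs]
    return [x ** power for x in xs]

# Hypothetical right-skewed frequency counts (not the study's data)
raw = [1, 1, 1, 2, 2, 3, 10]
for p in (0.5, 0):  # square root, then log
    print(p, round(skewness(tukey_transform(raw, p)), 3))
# Each step down the ladder reduces the right skew relative to the raw data
```

Moving down the ladder (square root, then log) progressively pulls in the long right tail, which is the behavior the transformations above rely on.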
To answer the research questions, we used repeated-measures analysis of variance (ANOVA) using PROC GLM in SAS. Repeated-measures ANOVA was appropriate because we evaluated the same subjects over three specific time points (Beginner, Intermediate, and Advanced) on the same dependent variables ("WH questions"). These measurements were made under different conditions: the levels of the independent variable were the experimental conditions (CC, IC, IN). Furthermore, Multidimensional Preference Analysis of the dependent variables ("WH questions") was performed using PROC PRINQUAL to discover changes and patterns in the "WH questions" and to look for possible clusters of groupings of critical thinking questions in the 3CA model, so that we could see what attributes the "WH questions" have in common. Points tightly clustered in a region of a plot represent groupings with the same preference patterns; vectors that point in the same (or roughly the same) direction represent "WH questions" with similar preference patterns.
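For readers unfamiliar with the computation, a minimal one-way repeated-measures ANOVA over three time points can be sketched as below. This simplified Python fragment, with hypothetical data and no between-subjects factor, only illustrates the partitioning of variance that PROC GLM performs.

```python
# Minimal one-way repeated-measures ANOVA: subjects are rows, time
# periods are columns. The numbers are hypothetical, not study data.
def repeated_measures_anova(data):
    n_subj, n_time = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n_subj * n_time)
    subj_means = [sum(row) / n_time for row in data]
    time_means = [sum(row[t] for row in data) / n_subj for t in range(n_time)]

    # Partition total variability into time, subject, and error components
    ss_time = n_subj * sum((m - grand) ** 2 for m in time_means)
    ss_subj = n_time * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_time - ss_subj

    df_time = n_time - 1
    df_error = (n_subj - 1) * (n_time - 1)
    f = (ss_time / df_time) / (ss_error / df_error)
    return f, df_time, df_error

# Hypothetical "why question" counts for four students over three periods
data = [[2, 4, 6], [1, 3, 7], [3, 5, 6], [2, 5, 8]]
f_stat, df1, df2 = repeated_measures_anova(data)
print(f"F({df1}, {df2}) = {f_stat:.2f}")
```

A large F relative to the F(df1, df2) distribution would indicate a significant time effect, which is the pattern reported for several "WH questions" below.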
An independent-samples t test using PROC TTEST was performed to compare the mean final exam scores on the student-made test administered at the end of the experimental semester (STMT) and on a previous final exam from 2015 created by the publisher, which we called the old test (OLDT). In preparation for the exam, students studied a pool of 450 student-made items available on Moodle, and from this pool, 50 items were selected for the final examination. Students were assured that the items posted on the class website would be used for their final examination.
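The STMT/OLDT comparison can be illustrated with a pooled-variance independent-samples t statistic; the scores below are hypothetical, and the actual analysis was done with PROC TTEST in SAS.

```python
import math

def independent_t(xs, ys):
    """Pooled-variance independent-samples t statistic (equal variances assumed)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1 / nx + 1 / ny))

# Hypothetical final-exam scores (out of 50 items), not the study's data
stmt = [44, 46, 43, 47, 45]  # student-made test (STMT)
oldt = [40, 41, 39, 42, 40]  # publisher's old test (OLDT)
print(round(independent_t(stmt, oldt), 2))
```

A positive t here would favor the student-made test; the resulting statistic is compared with the t distribution on nx + ny - 2 degrees of freedom.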
A power transformation of the student-made test data was performed, and this transformation produced a normal distribution of the skewed data from the student-made test.
To control the familywise error rate, Fisher's least significant difference (LSD) at alpha = .05 was used for the main effects. With three groups (levels of condition) and three time points, Fisher's LSD was an appropriate method for controlling the familywise error rate.
When the effect of time period was significant, trend analyses using orthogonal polynomial contrasts were conducted to partition the response (dependent variable) into linear and quadratic trends.
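For three equally spaced time points, the orthogonal polynomial contrasts are (-1, 0, 1) for the linear trend and (1, -2, 1) for the quadratic trend; applying each contrast to the period means partitions the time effect. The means below are hypothetical illustrations, not the study's values.

```python
# Orthogonal polynomial contrast coefficients for three equally
# spaced time points (Beginner, Intermediate, Advanced)
LINEAR = (-1, 0, 1)
QUADRATIC = (1, -2, 1)

def contrast(means, coeffs):
    """Apply a contrast: the weighted sum of the period means."""
    return sum(c * m for c, m in zip(coeffs, means))

steady_rise = [2.0, 3.0, 4.0]  # purely linear trend across periods
peak_middle = [2.0, 4.0, 2.0]  # purely quadratic trend across periods
print(contrast(steady_rise, LINEAR), contrast(steady_rise, QUADRATIC))  # 2.0 0.0
print(contrast(peak_middle, LINEAR), contrast(peak_middle, QUADRATIC))  # 0.0 -4.0
```

A nonzero quadratic contrast with a near-zero linear contrast is the signature of the curved (quadratic) trends reported for the "WH questions" below.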
All the graphs were created using Microsoft Excel and/or SAS.

ANOVA With Repeated Measures
The first step in the analysis of the data was applying ANOVA with repeated measures to the six different "WH questions": what, why, how, when, where, and who. The repeated measures were for the three different time periods: Time 1 = beginning, Time 2 = intermediate, and Time 3 = advanced.
The significance of all comparisons of means is presented in Table 1. A general picture of all the trend analyses for repeated measures for each of the critical thinking questions is presented in Figure 3.

What
With 99% confidence, there was a significant difference between student conditions in the generation of "What questions." A comparison of means shows that the individual (IN) and individual-collaborative (IC) groups had significantly higher means and generated more "What questions" than the collaborative group. With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA. A comparison of means across time revealed that students generated more "What questions" in the intermediate and advanced time intervals as compared with the beginner period (Table 1). There was, however, no significant interaction between time intervals and student groupings. A trend analysis revealed that the generation of "What questions" across time followed a quadratic trend, as students across the three conditions tended to generate more "What questions" in the intermediate period than in the beginner period; there was no significant difference between the advanced and intermediate periods (Figure 3).

Why
There were significant differences for both time and group. With 99% confidence, there was a significant difference between student conditions for the generation of "Why questions." A comparison of means indicated that the collaborative (CC) group had a significantly higher mean than the individual (IN) and individual-collaborative (IC) groups. With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA. A comparison of means by time interval revealed that students generated more "Why questions" in the advanced and intermediate time intervals than in the beginning period. However, there was no significant interaction between time intervals and experimental conditions (Table 1). A trend analysis revealed that the generation of "Why questions" across time followed a quadratic trend: students in the collaborative group tended to generate more "Why questions" in the advanced period than in the intermediate and beginning periods (Figure 3).

How
There was no significant difference between student groups for the generation of "How questions." With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA. A comparison of means revealed that students generated more "How questions" in the advanced and intermediate time intervals than in the beginning period (Table 1). There was a significant interaction between time intervals and groups. A trend analysis revealed that the generation of "How questions" across time followed a quadratic trend. The collaborative group generated fewer "How questions" in the beginning time period than the individual (IN) and individual-collaborative (IC) groups. However, the tendency to generate "How questions" shifted in favor of the collaborative group from the intermediate period on and increased over time (Figure 3).

When
There was a significant difference between groups in the generation of "When questions." A comparison of means showed that the collaborative (CC) group generated more "When questions" than the individual (IN) and individual-collaborative (IC) groups. There was no significant difference among time periods, and there was no significant interaction between time periods and groups for the generation of "When questions" (Table 1). A trend analysis revealed that the generation of "When questions" across time followed a quadratic trend. The collaborative group tended to generate more "When questions" in the beginning period, but the frequency decreased toward the advanced period (Figure 3).

Where
There were no significant differences for groups, for time intervals, or for the interaction between time and groups.

Who
There was no significant difference between student groups for the generation of "Who questions." With 99% confidence, there was a significant difference among the time intervals based on the repeated-measures ANOVA. A comparison of means showed that students generated more "Who questions" at the intermediate level than in the beginning period. There was no significant difference between the intermediate and advanced levels, despite the students' tendency to use fewer "Who questions" in the advanced period (Table 1). There was a significant interaction between time intervals and groups. A trend analysis revealed that the generation of "Who questions" across time followed a quadratic trend. All groups tended to generate more "Who questions" across time; however, the generation of "Who questions" leveled off after the intermediate period and then decreased during the advanced period (Figure 3).

Multidimensional Preference Analysis
Multidimensional preference analysis comparing the collaborative (CC) and individual (IN) groups is presented in Figure 4. The biplot shows the clusters of preferences for the two groups. As can be seen from Figure 4, the preferences of the IN group clustered around the questions "what, where, and who," whereas the preferences of the CC group clustered mostly around "why, how, and when." As can be seen from Figure 5, the preferences of the IC group clustered around the questions "what and where," whereas the preferences of the CC group clustered mostly around "why, how, who, and when."
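A biplot of this kind can be sketched as follows. The frequency matrix below is hypothetical, invented only to echo the clustering described in the text; the MDPREF-style coordinates come from a singular value decomposition of the column-centered matrix:

```python
import numpy as np

# Hypothetical per-group mean frequencies of the six WH questions
# (columns: what, why, how, when, where, who); rows: IN, IC, CC.
questions = ["what", "why", "how", "when", "where", "who"]
freqs = np.array([[9.0, 2.0, 3.0, 1.0, 1.0, 3.0],   # individual (IN)
                  [8.0, 3.0, 3.0, 1.0, 1.0, 2.0],   # individual-collaborative (IC)
                  [5.0, 7.0, 6.0, 3.0, 1.0, 2.0]])  # collaborative (CC)

# Column-center the matrix, then keep the first two singular dimensions.
centered = freqs - freqs.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
group_coords = U[:, :2] * s[:2]   # one point per group
question_coords = Vt[:2].T        # one direction per question

for q, (x, y) in zip(questions, question_coords):
    print(f"{q:>6}: ({x:+.2f}, {y:+.2f})")
```

Groups plot as points and questions as direction vectors; questions whose vectors point toward a group's point are the ones that group prefers, which is how the "what/where" versus "why/how" clusters in Figures 4 and 5 would be read.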

The t Test Between STMT and OLDT
There was a statistically significant difference between the two tests. The mean and SD for the student-made test were M = 83.10, SD = 20.03; for the old standardized test they were M = 50.31, SD = 6.45. The results in Table 2 show a significant difference between the student-made tests and the publisher-made tests.
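From the reported means and SDs, the comparison can be reproduced approximately from summary statistics alone. The per-group sample size of 29 is a hypothetical placeholder (n is not stated in this excerpt), and Welch's unequal-variance formula is used here because the two SDs differ markedly; the paper does not say which t test variant was run:

```python
import math

def welch_t_from_stats(m1, sd1, n1, m2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Reported summary statistics: student-made test vs. publisher-made test.
# n = 29 per group is a hypothetical placeholder, not a reported value.
t, df = welch_t_from_stats(83.10, 20.03, 29, 50.31, 6.45, 29)
print(f"t = {t:.2f}, df = {df:.1f}")
```

With any plausible class size, a 33-point mean difference against these SDs yields a t statistic far beyond conventional critical values, consistent with the significant result in Table 2.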

Discussion
This research study produced three outcomes of importance: (1) the methodological innovation of combining concept maps and critical thinking, (2) the finding of patterns of critical thinking, and (3) an instructional process that makes critical thinking transparent and open to public view. Outcomes 2 and 3 would not have been possible without the finding that it is possible to teach and facilitate the learning of patterns of critical thinking.

Methodological Innovation
The first methodological breakthrough, combining concept maps and the scheme of critical thinking, has several educational advantages: (a) computerized concept maps are a reliable way of recording data on concepts and their links (critical thinking questions); (b) the concept maps are a way of, first, transforming text into a visual representation and, second, as can be seen in Figure 2, applying a scheme of critical thinking to the concepts on the map; and (c) the critical thinking maps are a compelling and interesting display of thinking. For many students, this is the first opportunity to see their thinking represented on paper.
The second important accomplishment of this research is the demonstration that it is possible to identify the patterns of the critical thinking questions "what, when, why, where, who, and how" used by students working independently or collaboratively. The different "critical thinking WH questions" drive the search for the specific information necessary to solve problems.
Transforming the text into concepts linked by critical thinking questions is a way of visualizing the thinking of the students. Our data indicate that students favor the "what questions" when applying the scheme of critical thinking; the other critical thinking questions follow as individuals set out to solve a problem or explain a particular set of circumstances. "Why and how" follow "what" when students and others are trying to understand a situation.

Patterns of Critical Thinking Questions
What. "What" is the question most frequently used by students and reflects the fact that students begin their critical thinking with the questions "what is this?" and "what happened?" Asking a "what question" invites a declarative statement as an answer. The preference for the "what question" is a request for information and is also a reflection of the style of teaching in many college courses. G. E. Forman (visual learners' tendency in preschool children, personal communication, September 20, 2018), in a report of his ongoing research, found that "how and why" questions increased if students were shown a video of "what" happened. Without the video, students asked significantly more "what questions." This finding was replicated in this study by the collaborative group, which showed a drop in the frequency of "what questions" and an increase in "how and why questions."

Why. "Why" is the most researched of the six "WH questions." "Why" is famously used by scientists and detectives; in both cases, "why" is only one of the interrogatives applied to understand the situation. "Why" is most often used when there is a mixture of knowledge and ignorance. "Why questions" are used to explain a causal relationship between two events. In this study, "why" was used significantly more often by the collaborative group. "Why" was also used more often in the second and third time periods. This suggests that as the students became more critical thinkers, they tended to use more "why questions."

How. "How" was the third most frequently used interrogative in this study. The interrogative "how" is used in teleological explanations when the speaker is explaining how some event came about. "How" usually refers to a process, whereas "why" emphasizes a cause. "How" is usually an interrogative about a series of events.
As can be seen in Figure 3, the trend for "how" is an increasing pattern for the students in the collaborative group; in other words, they used more "How questions" over the three time intervals. There was a significant interaction between time and group. When students collaborated, they used the critical questions "how and why" significantly more often while using fewer "what questions"; they therefore moved beyond factual information and began to ask "why and how" something happened.
When. When an event occurs is important information in a child development class. The collaborative group used significantly more "when questions" than the individual groups. There were no significant differences with regard to time or the time by group interaction in the usage of "when." This is understandable, because the collaborative group goes beyond asking "what happened" to asking "when did it happen"; these students are focusing more on context. This finding is consistent with their higher frequency of use of "why and how."

Who. The three experimental groups were not significantly different from each other in the frequency of use of the "who question." There was a significant difference due to time but not to group. Students used the question "who" significantly more often in the last two time intervals. The increasing use of "who" is attributable to the students' understanding of the major theories and theorists in child development. Throughout the child development text, there is an emphasis on the major figures in developmental psychology, such as Erikson, Freud, and Vygotsky. As students came to understand the theories in the text, they became more aware of who the major theorists in the field are.

Where. The "where questions" were used infrequently in all groups, and there were no significant differences between the groups. This is reasonable, because the child development text used in the class did not emphasize the "where" aspect of the context; the emphases were upon theory and decontextualizing the events.

All in all, from the ANOVA and multidimensional scaling data, we concluded that the "what question" plays a unique and influential role in critical thinking. What is it about the question "what"? In G. E. Forman's (visual learners' tendency in preschool children, personal communication, September 20, 2018) research on problem solving, he found that when students have a video available, they do not ask "what" questions. The function of "what" is to bring to mind a picture. The video in Forman's research took the place of the images that normally come to mind in the midst of problem solving. If the question "what" is accompanied by an image, then perhaps that explains why students were slow to change their patterns of critical thinking: it is also necessary to change the images that come to mind when the question "what" is asked.

Instructional Process: Making Thinking Visible and Transparent
The 3CA model is a student-centered approach to instruction designed to be visible and transparent to students. Students value visibility and transparency because these give them a sense of fairness in the educational process. There is a shifting of responsibility from the teacher to the student; this new way of teaching is different for students, who are used to being told what to do. The first question from students entering the class was, "What is the catch? This can't all be true. I get to make up the questions for my exam?" The answer is yes. The second question raised by students was, "Do we have to do homework every week?" The answer is yes. Another question on their minds was, "Can you teach thinking? This professor claims that he can teach thinking. I am not so sure about that; I will have to wait and see." The professor's answer is yes: we can teach thinking, and you can learn to think. Learning to think is learning a second language. Learning a second language is something that everyone can do and will do in accord with life in their communities. This course is like the first semester of a course in a second language. You will learn some new skills and some new words and concepts, but you will not yet be fluent in the language of thinking. Most importantly, you will recognize the second language when you hear or see it being used. You will also recognize that some patterns are familiar to you. The instructional pattern, collaboration, is among the oldest in history, dating back millennia. The tools of inquiry, questions and answers, are even older, dating back to the origins of being human. A healthy respect for asking questions is one of the outcomes of this class designed to teach thinking. The structure of the thinking processes is a mystery to many students.
They are often encouraged to think critically without specific instructions as to how to go about thinking clearly.
The 3CA model shows students a pathway to thinking as they practice their critical thinking skills in the classroom. Students acquire an increasing sense of agency as they practice creating critical thinking maps, using critical thinking to construct items for their own assessment, collaborating, creating shared maps, and creating, through critical thinking, the items that determine their grade for the semester. During each class session, students see that critical thinking is a concrete set of experiences requiring practice. As students go about practicing the activities in the classroom, they learn to practice the public language games in their heads. After practice, the language games practiced in public become language games played in the privacy of the mind.
Some students using the 3CA model state that they are visual learners and as a result prefer to use the visual materials in the 3CA model. According to those students who describe themselves as visual learners, the central place of critical thinking maps throughout the instructional process provides helpful support. This study was not designed to explore the fit between self-described visual learners and the 3CA model with its focus upon visual maps. In a recent study, about 40% of students described themselves as visual learners (Clarke, Flaherty, & Yankey, 2006). It will be interesting to see in future research whether the use of concept maps provides visual learners with a learning dividend.
The philosopher Ludwig Wittgenstein (2009) was also interested in the problem of making thinking visible. He worried that separating public language games from private language games suggests that thinking belongs to a different and obscure realm of the ethereal. The idea of the language game was an important contribution of his great book, Philosophical Investigations.
One of the most dangerous ideas for a philosopher is, oddly enough, that we think with our heads or in our heads. The idea of thinking as a process in the head, in a completely enclosed space, gives him something occult.
Wittgenstein is not denying that there are mental events and images; he is commenting on the origins of mental states and images. He argued that mental states begin as a public practice, much as when you learned to count. He commented that before you learned to count in your head, you learned to count on your hands. Similarly, we learn to read out loud before we learn to read silently. He chose the example of counting because he wanted to show that the most abstract of language games have their origins in everyday activities that are open to public view.

Implications of Findings
The major implications of this study are practical and theoretical. On the practical side, our data indicate that teaching thinking is feasible and realistic across disciplines and ability levels. The use of digital concept maps with the critical thinking questions deserves replication; this innovation makes it possible to identify the patterns of critical thinking in a systematic way. Digital concept maps afford researchers the opportunity to measure the changing frequencies of the different critical questions. The focus of the current research using concept maps has been to emphasize a single connection between concepts. It is possible to go beyond single connections and explore multiple critical links between concepts. According to Zandvakili et al. (2018, p. 47), "Understanding is the application of multiple critical thinking questions to a single concept." As students use multiple links to a concept, they deepen their understanding of the concept and become better critical thinkers.
On the theoretical side, there is the serendipitous possibility that the Aristotelian consequences, "what, when, why, where, who, how, and what for," are the corpus for one of the language games of thinking. If that is true, then this particular language game of thinking has a syntax and something resembling Chomsky's concept of "merge" (Berwick & Chomsky, 2016). There is ample evidence in our data that students do use combinations of questions in different systematic and appropriate ways. It is common sense to suppose that we all sequence our question asking in ways that are appropriate to the setting. The experience of merge in thinking occurs when two ideas come together in a recursive fashion and one folds into the other. The recursive function of merge lends to language the fluency that we take for granted.

Limitations
This exploratory experiment combined the components of concept maps, critical thinking, collaboration, assessment, and mastery into a single package. There is a need for a factorial study that systematically explores the inclusion of the different components of the model. It is also important to explore the use of the model in different classroom settings. The beauty of the model is that it can be conducted within any classroom. An individual teacher can implement the model without having to coordinate their efforts with others. This suggests that the model can be tried out in a variety of settings without involving large numbers of educational personnel.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD
Elham Zandvakili https://orcid.org/0000-0002-3474-8821

Note
1. "Moodle is a classroom learning platform designed to provide educators, administrators and learners with a single robust, secure and integrated system to create personalized and collaborative learning environments" (retrieved from https://docs.moodle.org/36/en/About_Moodle).