The Impact of Purposeful Questioning and Student-Selected Strategies
Katie Carlson, Intern, Baylor University
Taylor Anderson, M.S.Ed., Mentor Teacher, Woodgate Intermediate, Midway ISD
Rachelle Rogers, Ed.D., Clinical Associate Professor, Baylor University
Mathematics can be an intimidating and perplexing subject for many school-aged children. In her classes, the researcher noticed students constantly asking for confirmation of the answers they found. Her students showed little independence when the expectation was to work individually, so the researcher began contemplating the root cause and questioned whether this behavior stemmed from low mathematical confidence. While reading about how to address this, she found studies connecting students’ epistemological beliefs to academic achievement. Foster (2016) utilized a confidence-based assessment that asked participants, "How sure are you?" and "Does this make sense?" The researcher also found Rosenzweig, Krawec, and Montague’s (2011) study of metacognitive strategy use during problem solving and reasoned that expanding her students’ metacognitive experience might grow their confidence and independence. Drawing on both studies, the researcher designed a study exploring the use of purposeful questioning and comprehension strategies.
The researcher aspired to explore the relationship between students’ mathematical confidence and metacognitive strategies. She wanted to know whether her students would grow in mathematical confidence and whether that growth would affect how they solved problems. More specifically, the researcher wondered, “How might implementing purposeful questioning and supportive strategies, such as turn and talk or drawing a model, impact students’ confidence while working on independent mathematics problems?”
Two sixth-grade, regular mathematics classes participated in this study: 25 students from the second and seventh periods. The second period served as the control group, while the seventh period was the experimental group. The instruments used to collect data were a pre- and post-Likert scale, a questioning tally, and a self-reflective journal. Schommer-Aikins et al. (2005) utilized an epistemological beliefs scale to which students responded on a Likert scale; the Likert scale in this study was modeled on theirs. The pre-Likert scale was composed of six statements, and students responded on a 5-point scale ranging from strongly agree to strongly disagree. The statements addressed students’ confidence in mathematics, their mindsets while working independently, their awareness of when confusion began, and their desire for more support. The questioning tally was implemented at the beginning and end of the study: during ten minutes of independent work in both the second and seventh periods, the researcher marked one tally per question students asked, then totaled the tallies. The self-reflective journal allowed students to write about their confidence in mathematics when working independently, record content-related circumstances that confused them, and identify methods that might help when they got stuck. To begin the study, the researcher employed the questioning tally for three days and then administered the pre-Likert scale. After reviewing the responses, the researcher began gathering purposeful questions and strategies to present to the class (Humphrey, 2018).
The next day, the researcher presented the collection of purposeful questions and problem-solving strategies to her seventh-period class. Students were given two minutes to select, or come up with, the questions and strategies they would find most helpful. The list of strategies included table talk, re-reading slowly, checking work, using a model, and drawing. The purposeful questions selected included: “Can I use a model?” “What do I need to find out?” “What facts do I know?” “Would a drawing be helpful?” and “Does that make sense?” The researcher then implemented these strategies over the next nine school days, followed by another three days of data collection with the questioning tally. Following the third day of question tallying, the researcher administered the post-Likert scale. Both classes answered the same six questions from the pre-Likert scale, and the seventh period answered seven additional questions about the usefulness of each strategy. On the last day of the study, students wrote a self-reflective journal entry about their mathematical confidence when working
independently, what confused them the most, and what strategies they found most useful. When analyzing the pre- and post-Likert scale data, the researcher employed measures of central tendency to make conjectures about class confidence, calculating the mean and the mode to compare the pre- and post-Likert results. The survey was composed of four negatively worded statements and two positively worded statements, and the average of each connotation was calculated for each student and for each class period. The researcher compared these averages, for classes and individual students, between the pre- and post-Likert scales to determine whether students’ mathematical confidence increased or decreased. The researcher also computed the mean and median of the questioning tallies from the beginning and end of the study and compared the results for class confidence. Lastly, the researcher read all the students’ journal entries and sorted them by the students’ declared mathematical confidence, then triangulated the data to determine students’ growth. After analyzing the data, the researcher found that with purposeful questions and student-selected strategies in place, 64.3% (nine out of fourteen students) experienced growth in their mathematical confidence and their ability to work independently, 28.6% (four out of fourteen) maintained the same level of mathematical confidence, and 7.1% (one out of fourteen) declined in their mathematical confidence.
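The averaging described above, computing the negatively and positively worded Likert items separately for each student and comparing pre to post, amounts to a small calculation. A minimal sketch follows; the response values are hypothetical and only illustrate the arithmetic, not the study's actual data.

```python
from statistics import mean

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree).
# The survey had four negatively and two positively worded statements; the
# values below are illustrative, not the study's actual data.
pre  = {"negative": [4, 3, 4, 2], "positive": [3, 3]}
post = {"negative": [3, 2, 3, 2], "positive": [4, 5]}

# Average each connotation separately, then compare pre vs. post. A falling
# negative average and a rising positive average both suggest growing confidence.
for wording in ("negative", "positive"):
    change = mean(post[wording]) - mean(pre[wording])
    print(wording, round(change, 2))
```

For these hypothetical values, the negative-item average falls by 0.75 and the positive-item average rises by 1.5, which would be read as a gain in confidence for that student.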
Based on the analysis and data gathered, the study was successful. Students completed the unit with more mathematical confidence than the previous unit. They also felt supported by the purposeful questions and comprehension strategies, such as turn and talk, because these gave them another path to take when confusion struck. Students responded positively to the strategies and questions, which put learning in their hands: they had to rely on their own knowledge and the strategies to solve problems, and this built resilience. A strength of this study is that the purposeful questions and most of the strategies can be employed across different content, so the study could be implemented across multiple units or repeated several times throughout an academic year. By having students create the questions or actions to use when they are confused, accountability for using the strategies is placed on the students. A weakness of this study is that there was no set amount of implementation time during class periods, because the agenda of a school day varies with the need to teach new content and move on to different topics. Another factor is that students’ mathematical confidence can vary with new topics, a difficult or unfocused day, or factors outside the classroom. The researcher recommends extending the study beyond three weeks to obtain more reliable results. As for future actions or research, this study will be repeated when the researcher has a classroom of her own, for an extended period and across different mathematical content.
Foster, C. (2016). Confidence and competence with mathematical procedures. Educational Studies in Mathematics, 91(2), 271-288.
Humphrey, W. (2018). Questioning Strategies. Unpublished document.
Rosenzweig, C., Krawec, J., & Montague, M. (2011). Metacognitive strategy use of eighth-grade students with and without learning disabilities during mathematical problem solving: A think-aloud analysis. Journal of Learning Disabilities, 44(6), 508-520.
Schommer-Aikins, M., Duell, O. K., & Hutter, R. (2005). Epistemological beliefs, mathematical problem-solving beliefs, and academic performance of middle school students. The Elementary School Journal, 105(3), 289-304.
The Power of Student-Asked Questions
Leslie Wolff, Intern, Baylor University
Cara Neathery, M.S. Ed., Mentor Teacher, Woodgate Intermediate, Midway ISD
Rachelle Rogers, Ed.D., Clinical Associate Professor, Baylor University
One way for students to push their own understanding is to ask questions on their own. This introduces the concept of academic self-efficacy, which is “students’ judgment of their capabilities to complete their schoolwork successfully” (Ryan, Gheen, & Midgley, 1998, p. 528). Some students have low self-efficacy, believing that asking for help shows a lack of ability and leads to embarrassment; these students shy away from asking questions out of this ingrained fear. In contrast, students with high self-efficacy will seek out help, as they do not associate it with failure. The problem is that students with low self-efficacy are typically the ones who need the most help, and yet they will not voice their confusion.
Student participation and confidence in asking questions are pursued by all educators in the field. Various methods that promote this behavior have been researched, such as moral encouragement and intentional classroom goal structures: encouraging students when they do ask questions and being deliberate about the classroom goal structure both increase academic self-efficacy. Yet these studies focus only on voiced questions, that is, on speaking in class. Some students do not ask questions simply because they prefer not to talk in front of their peers.
How would student performance and motivation to ask questions be impacted by an increase in varied opportunities to ask questions during instruction, at the end of instruction, and for homework?
The students in this research participated in a six-week study examining whether an increase in opportunities to ask questions impacts performance and motivation. The students are sixth graders at Woodgate Intermediate. Two mathematics classes were chosen to participate, resulting in a sample of 29 students. For the purpose of the research, students were kept separated by the period in which they have mathematics, referred to as Period Two and Period Three.
Along with student performance, students’ actions and thoughts also needed to be collected. To capture student actions, a data frequency sheet was created to tally the number of questions each student asked in a given class period. A survey including a Likert scale was given to both mathematics classes to capture students’ thoughts about asking questions: students rated how they feel about asking a question in class from one (very uncomfortable) to five (very comfortable). To analyze performance, weekly homework scores and Unit Test results were tracked and organized in an Excel spreadsheet.
To begin the research and gather a pre-assessment (Block One), students in both Period Two and Period Three were given the survey with the Likert scale described above. Over a span of four days, the number of questions students asked in class was tracked in both periods by tallying next to the name of each student as they asked questions. During this same week, homework scores and test results were noted as well.
The next phase of the research was implementing the varied question methods in Period Two: during instruction, at the end of instruction, and for homework. Period Three served as the control group, so the researcher only implemented moral encouragement for asking questions in class. During instruction, students were given a Question Card on which to write down any questions they had throughout the lesson. At the end of each lesson, students filled out an exit ticket that prompted them to ask any questions they still had; if they had none, students summarized what the lesson was about that day. Lastly, a Question Card was attached to the back of their homework each week. The teacher answered the previous day’s questions for the whole class without specifying which student asked. During the implementation phase, only homework scores were gathered while students became accustomed to the new question methods.
In the final phase of research (Block Two), the number of questions students asked was again tracked over a four-day period in both Period Two and Period Three. In Period Two, the method of each question was also tallied to identify whether students used the in-class Question Card, the homework Question Card, the exit ticket, or raised their hand in the traditional manner. In Period Three, the control group, only questions asked aloud in class were tallied. Homework scores as well as raw test scores were recorded for this post-assessment, and the same survey from the pre-assessment was then given to gather students’ thoughts.
When the number of questions asked was compared between the two blocks, 27% of students in Period Two asked more questions in Block Two than in Block One. In comparison, 7% of students in Period Three, the control group, increased the number of questions they asked. In Period Two, 87% of students increased their homework average; of the students who asked more questions, 100% scored higher in Block Two, while of the students who asked fewer questions, only 80% scored higher. In Period Three, the control group, 78% of students had a higher homework average at the end of Block Two. Looking at the test results holistically, 80% of students in Period Two and 79% of students in the control group earned a higher unit test score in Block Two. However, 13% of students in Period Two received the same score on both unit tests, while 14% of students in the control group actually decreased in their test scores between the two blocks.
To analyze the effect on motivation, the researcher used the survey responses, specifically the Likert scale results. She found that 13% of students in Period Two reported being more comfortable asking questions in class at the end of Block Two than at the beginning of Block One; only 7% of students in Period Three did. The majority of students in both periods showed no change in their comfort level. The researcher also identified the manner in which students asked their questions: in Period Two, 12% of the questions came from the exit ticket, 5% from the homework Question Card, and none from the in-class Question Card, meaning 83% were asked in the traditional manner of raising a hand.
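The block-to-block comparisons above reduce to simple proportions over the tally sheets. A minimal sketch of that calculation follows; the student labels and counts are entirely hypothetical, not the study's data.

```python
# Hypothetical per-student question tallies for Block One and Block Two;
# the student labels and counts are illustrative, not the study's data.
block_one = {"S1": 1, "S2": 0, "S3": 3, "S4": 2}
block_two = {"S1": 2, "S2": 0, "S3": 1, "S4": 4}

# Percentage of students who asked more questions in Block Two than in Block One.
increased = [s for s in block_one if block_two[s] > block_one[s]]
pct_increased = 100 * len(increased) / len(block_one)
print(f"{pct_increased:.0f}% of students asked more questions")  # here, 2 of 4 students
```

The same proportion-over-students calculation applies to the homework-average and test-score comparisons reported above.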
The results showed that the varied opportunities increased student performance, yet they did not have much effect on student motivation to ask questions. The researcher implemented moral encouragement for questions in Period Three and found no increase in the number of questions asked in Block Two. This does not support previous findings that students ask more questions when given moral encouragement (Wodajo, Hailu, & Tasente, 2021). However, this could be due to the content covered in Block Two versus the topic covered in Block One.
One weakness of the study was the stark difference in difficulty between the topics covered in the two blocks. Block One was about proportions and percent proportions, which students had a hard time comprehending, especially the word problems. Block Two, by contrast, covered integers, absolute value, and opposites, a much more straightforward topic that required little reading comprehension, as there were rarely any word problems. This is a major factor to consider when analyzing the data: one reason students did not ask as many questions could be the simplicity of the problems in Block Two. One strength of the study was the similarity of the two periods chosen. The two classes tend to have similar test scores and homework averages, which shows their comparable ability, and they have a similar number of students, which mattered because the research tracked the number of questions asked. Based on these observations, the researcher recommends implementing this study over a longer period, such as a whole semester, ensuring that similar classes are chosen; many more topics would then be covered, giving a more holistic view of the results to interpret.
Ryan, A. M., Gheen, M. H., & Midgley, C. (1998). Why do some students avoid asking for help? An examination of the interplay among students' academic efficacy, teachers' social–emotional role, and the classroom goal structure. Journal of Educational Psychology, 90(3), 528–535. https://doi.org/10.1037/0022-0663.90.3.528
Wodajo, M. R., Hailu, B., & Tasente, T. (2021). How We Can Improve Students Interest in Asking and Answering Questions in the Classroom, The Case of Second Year Social Science Students of BHU in 2020. Technium Social Sciences Journal, 15, 125–130. https://doi.org/10.47577/tssj.v15i1.1674