5 Data

All data are wrong, but some are useful.
—With apologies to statistician George Box

Mark Dowley is the director of staff development and instruction at Brighton Grammar School, a prestigious independent boys’ school in Melbourne, Australia. When Mark and Brighton Grammar’s deputy headmaster, Ray Swann, came to ICG’s Intensive Instructional Coaching Institute, one of the most significant ideas they took away was the importance of gathering data during coaching. “Brighton Grammar has been teaching students from 3-year-olds to 18-year-olds for decades,” Mark told me. In recent years, Ray, Mark, and their colleagues have turned Brighton Grammar into a pioneering school promoting evidence-based practice. They have partnered with John Hattie and his colleagues to promote Visible Learning, and the school has placed a heavy emphasis on student well-being. “At the core of everything,” Mark said, “is the Impact Cycle: Identify, Learn, Improve.”

Data are central to coaching at Brighton Grammar. Mark told me about his coaching partnership with one teacher, whom I’ll call Jane Sheridan. Jane’s class was in trouble. According to Mark, “Students weren’t learning; they weren’t engaged. They were often disrespectful to each other and to the teacher. It was the kind of class where, when you walk past the classroom and look in, you say, ‘Now that’s not how school should be.’” Jane felt overwhelmed and stressed because the deputy headmaster was getting complaints from parents. “As a teacher, this can keep you awake at night,” Mark said.

Mark offered to observe Jane’s class and gather data. He found that time on task—one way of measuring whether students appear to be engaged in class—was at a dismal 30 percent. To improve those numbers, Jane and Mark started by focusing on the first 10 minutes of class, introducing some teaching routines to get things started smoothly. Jane worked really hard, and soon, time on task increased to around 80 percent for those first 10 minutes. These data showed Jane that the issue wasn’t the boys just being disruptive and uninterested. “She could see that the boys really could engage,” said Mark, “and that built her motivation to carry on for the rest of the class.” After more hard work, Jane eventually was able to get overall time on task up to 70 percent. She started to enjoy the class and built much stronger relationships with the students. “Data helped me track growth,” Mark said, “and that’s probably one of the most powerful things you can do because the teachers’ attitude to the class, their students, and their job improves dramatically.”

Data are also central to the continuous improvement of the school. Mark explained that the school uses the Kirkpatrick model (Kirkpatrick Partners, 2009) for evaluating professional development by asking, “Was it relevant?” “Did I learn something new?” “Did I apply what I learned?” and “Did it have an impact on students?” The data from that evaluation have helped them evaluate and justify the use of the Impact Cycle. Compared with the impact of whole-staff presentations and external professional learning activities, results for the Impact Cycle “were through the roof,” according to Mark.

Word is spreading in Melbourne about Brighton Grammar School’s partnership approach to coaching. In fact, as Mark explained to me, the school’s coaching model is attracting excellent teachers. “People see what we’re doing in coaching,” he said, “and they want to be part of a community that trusts their staff, values their staff, and partners with their staff.” When I contacted Mark to fact-check this story, he had more good news.
Brighton Grammar’s median Australian Tertiary Admission Rank (ATAR) score was the highest ever—87.95. The school’s overall rank in the Australian state of Victoria had moved up from 65th to 19th, and it had shown its best scores since coaching was implemented. Additionally, staff well-being scores on an assessment of support, kindness, perseverance, pride in work, and enthusiasm were all between 89 and 93 percent. “Despite the [COVID-19] lockdown and distance/remote learning, our school achieved its best-ever results this year,” Mark wrote me. “I’d like to think a large part of that is due to us focusing on the Partnership Principles and coaching.”

Why Coaches Need to Gather Data

We see the value of data everywhere, every day. On a winter morning, we check our weather app to see the temperature and windchill factor so we can know how to dress. If we go for an early-morning run, we use an app to track how fast and how far we run. If we are trying to get in shape, we might get on the scale to see our weight and then type the number of calories from our breakfast into a weight-loss app. And all this happens before we even really start the day!

Data are just as central to instructional coaches’ work. They help both teachers and coaches see more of what is happening in the classroom, help teachers establish goals and measure progress toward goals, and build teacher efficacy by demonstrating the progress that is being made.

Data Help Us See More

When I studied English literature in graduate school, I came to see how specific kinds of knowledge could help me see more in the poetry and prose that I read. When I read Walter Jackson Bate’s (1963) biography of John Keats, for example, I learned that Keats lived by the sea while writing “The Fall of Hyperion.” That knowledge gave me a whole new insight into the rhythm, meter, and structure of Keats’s incomplete epic poem. You may have had a similar experience when you learned a new word and then heard that word used in the following days, or when you bought a new car and felt like suddenly you saw that model everywhere. Obviously, your new car hasn’t suddenly become wildly popular; what has changed is that a piece of knowledge has shaped your perception so that you see more than you did previously. I refer to this experience as using an interpretive lens. Looking at Keats’s poem through the interpretive lens of the sea, I gained a deeper understanding of the poem’s rhythm.

During instructional coaching, data make for a similarly powerful interpretive lens. Used effectively, they reveal aspects of a learning experience that we would not otherwise see. For example, data about time spent on task helped Jane Sheridan at Brighton Grammar School to better understand how many students were and were not involved in the learning. Similarly, data from formative assessment help teachers better understand how students are learning. Put simply, data make the invisible visible.

Data Help Establish Goals

Data also bring precision to goal setting by providing a clear finish line. If I say that I want to run a 5K in less than 21 minutes (I wish!) or that I want at least 90 percent of my students to be able to write a well-organized paragraph as assessed by a single-point rubric, it’s the precision that makes these goals actionable. When I have a clear view of my destination, I am much more likely to get there.

Data Help Measure Progress

Data also help us measure progress.
A runner who is training to set a personal record will likely time her runs to see if she is getting faster. Similarly, a teacher who wants to improve student learning or well-being can gather data to see if the changes he is making are having an impact. Data also show teachers whether what they are doing is working. Often, the first changes teachers make do not lead directly to students hitting the PEERS goals discussed in Chapter 4. Adaptations almost always have to be made, and data reveal which changes are working.

Data Help Build Teacher Efficacy

Finally, data also build efficacy. As Mark Dowley found at Brighton Grammar School, when data show that students are more successful or more engaged, teachers and coaches see that their efforts are making a difference. Teresa Amabile and Steven Kramer (2011) label this the “progress principle.” They write:

Facilitating progress is the most effective way for managers to influence inner work life. Even when progress happens in small steps, a person’s sense of steady forward movement toward an important goal can make all the difference between a great day and a terrible one. (pp. 76–77)

Six Data Rules

Teachers and coaches need data to establish goals, monitor progress, make adaptations, and increase efficacy. However, data are only helpful when used well. I have identified six rules that will help you use data more effectively.

Data Should Be Chosen by the Teacher

Teachers will be most motivated, and consequently will learn the most, when they choose the data that are gathered during coaching. This doesn’t mean a coach can’t suggest types of data to gather. In fact, in some cases, teachers won’t know what data could be gathered and, therefore, will want and need suggestions from their coach. Effective coaches master the art of suggesting types of data while still positioning the teacher as the decision maker in the conversation.

Data Should Be Objective

You can see the difference between objective and subjective data if you watch the Winter Olympics. During speed skating, where the data are objective, whoever makes it to the finish line in the shortest amount of time goes home with the gold medal. Because the data are objective, assuming everyone is judged to have raced fairly, there are very few controversies about who wins. This is how objective data work. There is very little opinion involved; data just are what they are. But during figure skating, where the data are subjective, the experience is often quite different. Figure skaters, or at least figure skating commentators, often criticize the subjective way in which skaters are scored. Since subjective data, by definition, involve the observer’s opinion, conversations about them can turn away from what happened and toward whether or not a given opinion is accurate. Objective data are not personal—they’re factual. When coaches gather and share reliable, objective data, their opinion shouldn’t guide the conversation; they are just reporting the facts. Objective data keep the focus where it should be—on students and teaching.

Data Should Be Gathered Frequently

A GPS that only tells us when we have arrived at our destination wouldn’t be of much help. The same is true for data gathered in the classroom. Data won’t help teachers and coaches monitor progress if they are only collected once or twice a year. Instead, data need to be gathered at least weekly.
Teachers and coaches need the feedback provided by frequently gathered data because teachers usually need to adjust how strategies are used until those strategies help students move closer to their goals. Data only help us see what is working and what needs to change when they are gathered frequently.

Data Should Be Valid

Valid data measure what they are intended to measure. For example, a valid measurement of whether someone can ride a bicycle would be the act of either riding or failing to ride one; asking the person to complete a multiple-choice test on bicycle riding would be less valid. So, too, in the classroom: teachers and coaches need to make sure that the data they gather actually measure what students are supposed to be learning.

Data Should Be Reliable and Mutually Understood

When several coaches gather the same type of data and get the same results, we say that their results are reliable. As a general rule, researchers strive for a reliability score of higher than 95 percent. In coaching, reliability can have a slightly different meaning. During coaching, it is most important that the coach and teacher agree on (1) what data to gather, (2) how the data are gathered, and (3) why the data are gathered. There should be no surprises when it comes to data gathering. One way to increase mutual understanding is for the coach and teacher to create a T-chart that depicts examples and nonexamples of whatever data are being gathered, such as the one shown in Figure 5.1.
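To make the idea of a reliability score concrete, here is a minimal sketch of how two observers’ marks could be compared as percent agreement. It is my own illustration, not a tool from the book; the function name and the sample tallies are invented.

```python
def percent_agreement(observer_a, observer_b):
    """Percent agreement: the share of observations where two observers
    recorded the same judgment (e.g., on task vs. off task)."""
    if len(observer_a) != len(observer_b) or not observer_a:
        raise ValueError("Both observers must rate the same observations.")
    matches = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * matches / len(observer_a)

# Two coaches each mark the same ten students: "+" on task, "-" off task.
coach_1 = ["+", "+", "-", "+", "+", "+", "-", "+", "+", "+"]
coach_2 = ["+", "+", "-", "+", "-", "+", "-", "+", "+", "+"]
print(f"{percent_agreement(coach_1, coach_2):.0f}% agreement")  # prints 90%
```

By the rule of thumb above, 90 percent agreement would fall short of the 95 percent researchers strive for, a signal that the coach and teacher should revisit their shared definition of on-task behavior before gathering more data.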
Data Should Be Gathered by Teachers When Possible

Coaches have told us that when teachers gather and analyze their own data, they are much more likely to accept the data and change their behavior as needed. The easiest way for teachers to do this is by video-recording their lessons, which also lets observers watch segments of a lesson multiple times to clarify what happened. When the observer is also the teacher, such data can especially lead to powerful learning.

The six data rules should inform how coaches and teachers gather data of all types. What those types actually are constitutes much of the rest of this chapter.

Engagement Data

Students who stay in school do so because they feel they belong, they have hope, they feel safe, and they feel engaged by school. In fact, engagement is the main reason students who stay in school do so (Knight, 2019). If we want students to experience happiness, have healthy relationships, be productive, and graduate, we must do what we can to ensure they are engaged. To do this, we need to first know what we mean by engagement. For that reason, I have broken this discussion down into three categories: behavioral, cognitive, and emotional engagement.

Behavioral Engagement

To better understand the different types of data, let’s imagine an instructional coach—we’ll call her Tamika Rohl—who works in a middle school in a midwestern U.S. city. Tamika is a qualified and effective coach who is well liked and respected by her colleagues. Listening in on Tamika’s typical coaching conversations will help us understand the different ways coaches can gather data.

One of the teachers Tamika is partnering with is Allie Sherman, a third-year 7th grade science teacher. Allie loves science, and her enthusiasm goes a long way toward keeping students with her, but she wants to be better. She asks Tamika to partner with her, and they kick off the Impact Cycle with Tamika video-recording a lesson. Allie and Tamika review the video separately. They both see that, while Allie’s students enjoy being in class, they present some challenging behaviors. Students blurted out joking comments during instruction, and many of them engaged in side conversations while Allie was teaching. At different times, Allie politely corrected students, but the students only changed their behavior temporarily. On four different occasions, she asked two students to stop talking, but at the end of the lesson, the same two students were still conversing.

Measuring behavioral engagement

After watching Allie’s class, Tamika wonders if Allie will want to set a behavioral goal. Tamika frequently uses four different measures she could share with Allie to assess behavioral engagement: (1) time on task, (2) disruptions, (3) student responses, and (4) incivility.

Time on task. Time on task assesses whether students are doing what the teacher wants them to do. It is one of the most frequently gathered measures, but it has its limitations. On the one hand, it is easy to gather reliable data on time spent on task; on the other hand, time on task tells us only whether students appear to be doing what they’re supposed to be doing. They could be totally immersed in their learning experiences—or they could be confused, or thinking about something unrelated to learning. However, in a class like Allie’s, where many students are off task, increasing the number of students who are on task can be a powerful first step forward.

Disruptions. A second measure of behavioral engagement is disruptions—the number of times students say or do things that interrupt the teacher’s teaching or other students’ learning. In a classroom with too many disruptions, students struggle to learn, teachers struggle to teach, and everyone struggles to remain positive and energized. For these reasons, Allie might want to choose reducing disruptions as her goal.

Student responses. In engaged classrooms, most, if not all, students are involved in discussions. Allie might choose to measure how often and how many different students respond to questions.

Incivility. Students find it difficult to learn when they are afraid that they will be insulted, put down, attacked with a sarcastic remark, or made to feel “stupid” or inadequate. There is no room for bullying and verbal abuse in any classroom, anywhere. Tamika or Allie could gather data on incivility to gauge the mood and psychological safety of Allie’s classroom—an important action for any educator.

How to gather behavioral data

When Tamika meets with Allie, she doesn’t come in with a plan for her. Instead, she asks Allie to describe what she notices on the video. If Allie isn’t sure what her goal should be, Tamika is ready to share some options, but Allie knows what she wants to improve. She feels that if students are engaged, they’ll probably be less disruptive, so she sets a goal that at least 90 percent of her students will consistently be on task.

Tamika finds that the easiest and most helpful way to gather behavioral data is to record it on the seating chart for the class being observed. In Allie’s class, she uses the existing seating chart. In classes where teachers don’t use a seating chart, she quickly sketches one for the purpose of gathering data. When students are not sitting at desks, she tallies them. Tamika gathers time-on-task data by looking at each student, noting whether they are on task, and marking it on the seating chart by using a plus mark for on-task behavior and a minus mark for off-task behavior (see Figure 5.2).
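If the marks from each pass through the seating chart are transcribed, turning them into a time-on-task percentage is simple arithmetic. The sketch below is my own illustration of that tally, not the book’s tool; the `sweeps` layout and sample marks are invented.

```python
# Each sweep is one pass through the seating chart:
# "+" means on task, "-" means off task.
sweeps = [
    ["+", "-", "+", "+", "-", "+"],  # minute 5
    ["+", "+", "+", "-", "-", "+"],  # minute 10
    ["+", "+", "+", "+", "+", "+"],  # minute 15
]

marks = [mark for sweep in sweeps for mark in sweep]
on_task = marks.count("+")
print(f"Time on task: {100 * on_task / len(marks):.0f}%")  # prints 78%
```

Against Allie’s goal of 90 percent, a result like 78 percent would show real progress while making clear there is more work to do.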
Tamika usually records data every 5 or 10 minutes, but always makes sure to ask teachers when they would like data to be gathered. (Some teachers want the data to be gathered in the middle of each activity, for example.) Tamika also records other types of data on seating charts, such as disruptions, responses, or incivility. Disruptions and responses are fairly easy to identify and tally, but incivility requires a little more discussion. To measure it, coaches and teachers need to determine what they consider to be an uncivil interaction. Typically, insults, hateful statements, put-downs, and sarcastic comments are considered uncivil. Additionally, the teacher and coach might identify forms of uncivil nonverbal communication.

Cognitive Engagement

Tamika also partners with Jacob Robinson, who has been teaching mathematics in the school for about 10 years. To get started, Jacob agrees to watch a video recording of one of his lessons. When Jacob and Tamika first meet, after separately watching the video, Jacob says that he is most concerned that his students didn’t seem to see the value of what they were learning in his class. “These kids are hearing about stuff that will be really important for them as they move up through high school and hopefully to college,” he says, “but they won’t learn it if they aren’t engaged.”

“What I wonder,” Tamika suggests, “is what kind of engagement most concerns you.” Then, after asking Jacob for his permission, she explains the differences between behavioral, cognitive, and emotional engagement. Tamika and Jacob quickly agree that behavioral engagement isn’t an issue in his class, so Tamika goes on to explain cognitive engagement.

When students are cognitively engaged, they experience what their teacher intends for them to experience during an activity. Cognitive engagement is similar to what Phil Schlechty (2011) calls “authentic engagement,” which he contrasts with “strategic compliance.” According to Schlechty, students who are strategically compliant do activities for strategic reasons—to earn praise, for example, or to get a better grade—rather than because they see the activities as meaningful, relevant, or enjoyable. By contrast, students who are cognitively engaged (or as Schlechty would say, authentically engaged) find meaning and value in learning tasks. They are attentive, committed, and persistent until they complete tasks because they see the value in them. When students are cognitively engaged, they are more motivated, more positive, and most likely learning more than students who are not engaged.

Measuring cognitive engagement

Tamika explains to Jacob that they can assess cognitive engagement in different ways. And since cognitive engagement occurs within students, the best strategy is to ask students to describe how engaged they are.

Interviewing students. One option for gathering data is for Tamika to ask Jacob’s students about their perceptions of the class. Alternatively, she could teach Jacob’s class to free him up to interview students; however, students might be more forthcoming with a coach than with their teacher. When coaches sit down and talk with students, they can learn a lot about how students are experiencing a particular class or even school in general. For interviews, teachers should identify a small sample of students in a class—perhaps one-fifth—choosing a cross section of students who will share the most useful information. The wider the range of students interviewed, the more useful the data will be.
The following are some sample questions for assessing cognitive engagement, but teachers and coaches should think carefully about what they want to learn from students and draft their own.

Sample Cognitive Engagement Questions

Note: Questions should be modified for content and students’ ages.

• What’s the best thing about this class?
• What’s the worst thing about this class?
• What could make this class a better learning experience for you?
• How do you feel when you walk into this classroom?
• What do other students say about this class?
• How would you describe this class?
• How confident are you that you will do great in this class? What could increase your confidence that you will succeed?
• Does this class really matter to you? If so, why? If not, what would have to change to make this class matter to you?

Exit tickets. Coaches and teachers get a lot of useful information when they interview students, but interviews are hard to schedule more than once or twice a semester. Besides, they sample only a fraction of the students in a classroom. Another way for teachers to better understand their students’ cognitive engagement is to ask students to complete exit tickets once a week. An exit ticket is a slip of paper or an index card with one or more questions on it. Some exit tickets include a scale question for students to answer (“On a scale of 1 to 6, how meaningful was the work in class last week?”). For younger students, they could feature emojis rather than numbers on a scale. Exit tickets can also be designed to ask students what their teacher can do to help them be more engaged (“What can I do to make this class more meaningful to you?”). Students complete exit tickets at the end of the period and turn them in to their teacher as they leave the classroom. Exit tickets are powerful, but they aren’t always valid; many students overestimate their level of engagement. Often, the most important piece of data on an exit ticket is what students say about how the class could be changed to be more engaging.

Correct academic responses. Another way to assess cognitive engagement is to identify the percentage of students who give correct answers to the teacher’s questions, often referred to as correct academic responses (CAR). Tamika could gather this type of data in Jacob’s class using a seating chart. Like other kinds of data, CAR data are helpful but imperfect. Learning involves risks, and wrong answers can be better indications of learning than correct ones. Additionally, a 100 percent CAR rate for an entire class, while appearing to be a positive, may suggest that the content is not sufficiently challenging. Teachers and coaches may find it helpful to gather this type of data along with the number of different students responding. If the CAR rate is 93 percent but only 18 percent of students responded to questions, we cannot assume all students understand the content.
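To show how those two numbers work together, here is a minimal sketch that computes both the CAR rate and the share of different students who responded. The record format, names, and class size are my own invention for illustration.

```python
# Each record is (student_name, answered_correctly) for one response.
responses = [
    ("Ana", True), ("Ana", True), ("Ben", True), ("Ana", True),
    ("Ben", True), ("Ana", False), ("Ben", True), ("Ana", True),
]
class_size = 25

car_rate = 100 * sum(correct for _, correct in responses) / len(responses)
responders = 100 * len({name for name, _ in responses}) / class_size

print(f"CAR rate: {car_rate:.0f}%")      # 88% of answers were correct...
print(f"Responders: {responders:.0f}%")  # ...but only 8% of students spoke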
Experience sampling. The idea behind this powerful method is simple: Each student has a copy of a form like the one shown in Figure 5.3, and the teacher or coach sets up a timer to go off every 10 minutes during a lesson. Each time the timer goes off, students rank their current level of engagement on the form, with 1 indicating they are not engaged at all and 6 indicating they are completely engaged. Students can also write about what the teacher could do to make learning more engaging.

Teachers may find it helpful to audio- or video-record their lesson and then replay the recording as they review the forms the students have completed. Teachers can slide the recording to the points where the timer went off (usually every 10 minutes) to see what was happening when students completed their form, gaining insight into their responses.
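Once the forms are collected, the rankings can be averaged by timer interval to show where engagement rose or dipped during the lesson. A minimal sketch, with invented data, assuming the rankings are grouped by interval:

```python
# rankings[i] holds every student's 1-6 ranking at the i-th timer beep.
rankings = [
    [5, 4, 6, 5, 3, 5],  # minute 10
    [3, 2, 4, 3, 2, 3],  # minute 20: engagement dips
    [5, 5, 6, 4, 5, 6],  # minute 30
]

for i, interval in enumerate(rankings, start=1):
    mean = sum(interval) / len(interval)
    print(f"Minute {10 * i}: average engagement {mean:.1f} / 6")
```

Pairing a dip like the one at minute 20 with the recording at that timestamp shows the teacher exactly which activity lost the class.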
One other obvious and powerful way to measure cognitive engagement is to assess whether students have learned the content, which I discuss later in this chapter.

Emotional Engagement

Jacob isn’t sure that cognitive engagement is the most important goal for his students. “What I really want is for my students to feel totally safe speaking out and asking questions,” he says. “I want them to be totally OK with sharing whatever is going on in their heads.” Hearing this, and with Jacob’s permission, Tamika describes another type of data: emotional engagement. She explains that students who are emotionally engaged see their experiences in school as positive and meaningful, feel they belong in their school, feel physically and psychologically safe, have friends, and have hope. In short, emotional engagement measures connectedness, belonging, and physical and psychological safety.

Measuring emotional engagement

Tamika tells Jacob that she can assess emotional engagement using many of the measures she would use for cognitive engagement. Again, the best data often come from the students themselves.

Interviewing students. Tamika notes that, as with cognitive engagement, she can interview students to understand their level of emotional engagement. Since emotional engagement can have many dimensions—safety, relationships, hope, well-being—Jacob would have to carefully consider what questions would be most important to ask before they conducted interviews. Fortunately, many resources are available that Jacob and Tamika can review to help them create the questions that would be most helpful in their situation. For example, the Gallup Student Poll (Gallup, 2020) includes the following five statements that may be adapted and used as questions for interviews related to emotional engagement:

• I have a best friend at school.
• I feel safe in this school.
• My teachers make me feel my schoolwork is important.
• I have the opportunity to do what I do best every day.
• In the last seven days, I have received recognition or praise for schoolwork.

Another option is to use questions stemming from Martin Seligman’s acronym PERMA, which stands for positive emotion, engagement, relationships, meaning, and accomplishment (2011, pp. 16–17):

• Positive Emotion: How happy and satisfied were you last week?
• Engagement: How often were you completely engaged in learning activities in this class last week?
• Relationships: How positive were your interactions with other people last week?
• Meaning: How meaningful were your experiences last week?
• Accomplishment: How proud are you of what you accomplished last week?

Finally, research on hope (Lopez, 2013)—which identifies goals, pathways, and agency as essential components—provides another way to ask students about their engagement.

• Goal: What is your goal for next week in this class?
• Pathways: What are you going to do to hit your goal?
• Agency: How confident are you that you will hit your goal?

Exit tickets. Exit tickets are my favorite way of assessing emotional engagement. Students can complete an exit ticket on the same topic each week. The assessments could be about hope, meaning, positive experiences, happiness, relationships, or some other topic. If teachers include a scale, they can get a quick read on students’ level of emotional engagement (“On a scale of 1 to 6, how safe do you feel speaking up in class?”) while also gathering insight about what they themselves can do to help (“What could I do to make this class an even safer place for you?”). Again, exit tickets for younger students could feature emojis rather than numbers.
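Because the same scale question is asked each week, the results can also be tracked as a trend toward a goal. A short sketch with invented numbers, assuming each week’s responses have already been averaged:

```python
# Average of each week's 1-6 responses to "How safe do you feel
# speaking up in class?"; the (hypothetical) goal is an average of 5.
weekly_averages = {"Week 1": 3.2, "Week 2": 3.9, "Week 3": 4.4, "Week 4": 4.8}
goal = 5.0

for week, avg in weekly_averages.items():
    status = "goal met" if avg >= goal else f"{goal - avg:.1f} to go"
    print(f"{week}: {avg:.1f} / 6 ({status})")
```

Watching the average climb week over week is exactly the kind of progress data that builds efficacy for teacher and students alike.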
Other methods for assessing emotional engagement. Emotional engagement can also be assessed through the use of interactive journals in which students and teachers write back and forth to each other each week. Journals build connections with students, which is important, but students may be less candid in their journals than on anonymous exit tickets.

Many of the educators we meet tell us that they see emotional engagement as a prerequisite for all learning. Students who feel alone, afraid, or hopeless will struggle to learn until those factors are addressed. This is why coaching that focuses entirely on achievement runs the risk of not addressing students’ greatest needs. Let me be clear, though: this does not mean coaches can ignore learning. Just the opposite, in fact! All coaches need to be able to help teachers assess whether students are actually learning.

Achievement Data

Like most instructional coaches, Tamika spends a lot of her time working with teachers who are focused on improving student achievement. One such teacher is Courtney Bloom, a language arts teacher who has been teaching in the school for more than 20 years. Courtney is well known in the community because she ends each school year by choosing a famous poem for each of her students, writing the poem out by hand on a card, and handing each card out as a way to remember the class. Courtney frequently encounters students she taught 10, 15, or 20 years earlier who tell her that they still have their poetry card after all that time.

Courtney meets with Tamika to set a goal. Courtney has watched a video that Tamika recorded of one of her lessons, but her real concern is that her students are not writing effectively and don’t seem interested in getting any better. “These kids need to understand that their ideas have to come out of their heads to make a difference,” she says.

Although educators can gather data on achievement in many different ways, almost all approaches involve (1) unwrapping the standards (clarifying the knowledge, skills, and big ideas students need to learn); (2) describing learning goals (breaking down the knowledge, skills, and big ideas into precise, discrete statements); (3) breaking down the learning (determining how to gather data on whether students have learned what they need to learn); and (4) assessing student learning (adapting teaching and learning when the data show students aren’t learning what they need to be learning). Together, Tamika and Courtney deal with all four elements of gathering data for achievement.

Unwrapping the Standards

Assessing achievement begins with identifying what students need to learn. In the United States, that usually involves taking a deep dive into state or Common Core standards. Larry Ainsworth, author of “Unwrapping” the Common Core (2015) and several other texts describing how to unwrap standards, suggests educators break down standards line by line and circle the nouns, which usually describe knowledge, and the verbs, which usually involve skills. Ainsworth and others (e.g., Erickson & Tomlinson, 2007; Wiggins & McTighe, 2005) also recommend that educators look beyond knowledge and skills as they plan curriculum and identify the big ideas students need to learn in a course, unit, or lesson. “Big ideas” are usually concepts, principles, patterns, or themes.

Courtney wants her students to develop a deep understanding of the writing process, so she and Tamika decide to unpack the following 7th grade Common Core standard related to the writing process:

CCSS.ELA-Literacy.W.7.5. With some guidance and support from peers and adults, develop and strengthen writing as needed by planning, revising, editing, rewriting, or trying a new approach, focusing on how well purpose and audience have been addressed. (Editing for conventions should demonstrate command of Language Standards 1–3 up to and including grade 7 here.)

Going through the standard together, Courtney and Tamika identify planning, revising, editing, rewriting, trying new approaches, purpose, and audience as the knowledge students need to learn, and planning, revising, editing, rewriting, and trying new approaches as the skills students need to learn. Courtney feels that the overlap between knowledge and skills shows students need to both know and be able to do the writing process. “It’s not enough that they can describe how to plan,” Courtney says. “They actually have to do it, too.” Courtney also identifies the big ideas she wants students to learn, including “writing to make a difference,” “writing with purpose in mind,” “writing as a form of self-expression,” and “everyone can be a writer.”

Describing Learning Goals

After Courtney and Tamika have identified the knowledge, skills, and big ideas students need to learn, they create guiding questions (as described in my book High-Impact Instruction [Knight, 2013]) so that students can see what they are going to learn. Together, they create six questions for the writing process based on the standard that describes what students need to learn in the unit:

1. What strategies can be used to plan writing?
2. What difference does writing make?
3. Why is it true that everyone can become a writer?
4. What is the writing process, and how can I edit, revise, rewrite, and try new approaches to improve writing?
5. Why is writing with the audience in mind important? What strategies can writers use to do this?
6. How can writing be used as a form of self-expression? Why is this important?

Breaking Down the Learning

After identifying the guiding questions, Tamika asks Courtney to answer them. Acting as Courtney’s “secretary,” Tamika then writes down several simple sentences that provide partial answers to the guiding questions. I refer to these specific and assessable segments as specific proficiencies. (Ten or more specific proficiencies may be needed to answer one guiding question.) A specific proficiency states in exact terms what students need to know, do, or understand to correctly answer a guiding question. Therefore, the easiest way to craft specific proficiencies is to answer the question “What knowledge do students need to know, what skills do they need to be able to demonstrate, and what concepts or principles do they need to understand to answer this question satisfactorily?” The following are seven specific proficiencies Courtney identified for the first guiding question she and Tamika developed.

Guiding question: What strategies can be used to plan writing?

• Planning involves getting ideas out of your head.
• Planning involves organizing ideas.
• Brainstorming is writing down all the ideas you can think of about a topic.
• Clustering is doodling with bubbles to get ideas out of your head.
• Free writing is writing nonstop for five minutes or more.
• Ideas can be organized by using a planning map, frame, or other tool.
• Planning and organizing make writing more coherent.

Next, Courtney organizes the proficiencies into the sequence in which they will be learned. Larry Ainsworth (2015) refers to this sequencing as a “learning progression.” When formatively assessed, specific proficiencies help teachers pinpoint where student learning has broken down. When teachers precisely understand students’ roadblocks to learning, they know what feedback they need to provide to students, what changes they need to make to students’ learning experiences, what adaptations they need to make to their instruction, and what (if any) content they need to reteach.

Assessing Student Learning

Once the standards are unwrapped, guiding questions are written, and specific proficiencies are identified, Tamika and Courtney discuss how to assess student learning.

Tests

Selected-response tests such as fill-in-the-blank, true-or-false, and multiple-choice or short-answer tests can yield valuable insight into whether students are learning.

Checks for understanding

One of the easiest and most powerful ways to assess student learning is to use checks for understanding, such as bell work, response cards, whiteboards, exit tickets, and so forth. One of the advantages of checks for understanding is that they can be used at any time during a lesson. You can download a list of sample checks for understanding at www.instructionalcoaching.com/bookstore/the-definitive-guide-to-instructional-coaching. Courtney uses a T-chart like the one in Figure 5.4 to identify which checks for understanding she can use as formative assessments for her specific proficiencies.

Rubrics

Tamika knows that tests and checks for understanding are effective ways of assessing students’ knowledge. But as she explains to Courtney, they are much less effective at measuring students’ skills. To do that, she probably needs to use rubrics. In their Introduction to Rubrics (2005), Dannelle Stevens and Antonia Levi define a rubric as follows:

At its most basic, a rubric is a scoring tool that lays out the specific expectations for an assignment. Rubrics divide an assignment into its component parts and provide a detailed description of what constitutes acceptable or unacceptable levels of performance for each of those parts. (p. 3)

Susan Brookhart (2013) distinguishes between two categories of rubrics. Analytic rubrics, she says, “describe work on each criterion separately” (p. 6). Holistic rubrics, by contrast, “describe the work by applying all the criteria at the same time and enabling an overall judgment of the quality of the work” (p. 6). Tamika focuses her attention on different forms of assessment that allow for an analytic assessment of student work or performance. She also uses assessment tools that belong to what Brookhart (2013) refers to as the “family of rubrics.” Each is described below.

Checklists. Checklists, like the one shown in Figure 5.5, are effective tools for assessing something simple or discrete enough that it can be measured by a yes or no answer. Thus, checklists are excellent for assessing whether students have completed some part of an assignment or a process.
If one item on a checklist is “Begin your paragraph with a topic sentence,” that’s not an assessment of the quality of the topic sentence; instead, it simply assesses whether a topic sentence begins the paragraph.

Single-point rubrics (SPRs). Jennifer Gonzalez, who writes the blog Cult of Pedagogy (www.cultofpedagogy.com), has popularized the use of single-point rubrics, or SPRs. This form of assessment includes a single criterion at the center of the rubric, with space on either side for someone (teacher, student, or peer) to add comments related to areas for improvement and evidence of exceeding standards. On her website, Gonzalez (2015) lists three advantages of these simple rubrics:

• Teachers find them easier and faster to create. . . .
• Students find them easier to read when preparing an assignment. . . .
• They allow for higher-quality feedback, because teachers must specify key problem areas and notable areas of excellence for that particular student, rather than choosing from a list of generic descriptions.

A sample SPR is shown in Figure 5.6.

Multi-point rubrics (MPRs). Another option is the multi-point rubric, which involves breaking down different levels of accomplishment or performance. Often, each criterion of a rubric is described at different levels with words such as beginning, developing, accomplished, or exemplary. Effective MPRs communicate what a product or process should look like after it has been learned (see Figure 5.7).

Courtney can use rubrics to grade student work, assigning different grades based on level of performance. Because rubrics are often most helpful for formative assessment, teachers can share them with students to provide feedback on performance. Students can also use rubrics to self-assess their work or to provide peer feedback. Once students understand rubrics, they better understand what they need to learn and how well they are learning.

Rubrics can be difficult to create at first. It is challenging to try to describe success criteria precisely and clearly. However, that is exactly why rubrics are so important. Creating rubrics deepens our knowledge of the content we teach and the outcomes we expect for our students. And when we understand our content better, our teaching and feedback become more effective, meaning that our students learn more.

Teaching Data

In her work as an instructional coach, Tamika also gathers data that show how teachers teach. Unlike engagement and achievement data, teaching data are usually not used to develop a PEERS goal since they don’t meet the student-focused criterion. But they are still important. Coaches gather data about instruction so teachers can see how they are implementing new strategies and whether they need to change their teaching. The teaching data that Tamika gathers include ratio of interaction, teacher talk versus student talk, questions, opportunities to respond, and instructional versus noninstructional time. Each is described in the following sections.

Ratio of Interaction

Ratio of interaction measures how teachers direct their attention to students. As student behavior expert Randy Sprick explains in his presentations and publications, students want to get their teacher’s attention. “Imagine that every time you’re giving students your attention, you’re handing them a five-dollar bill,” he says. “Now think about when you give students that bill. Is it because of appropriate or inappropriate behavior?” Randy’s message is clear: if students are getting their teacher’s attention by acting up, they’ll keep acting up.

Tamika gathers data on ratio of interaction the same way she gathers many other forms of data: by using the class seating chart. When a teacher gives a student attention for appropriate behavior, she simply puts a plus under the student’s name on the chart. When a teacher corrects the student, she puts down a minus. Sometimes she will note disruptions on the chart by using a third symbol for disruptive behavior.
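Tallying those marks gives the ratio itself. A minimal sketch, with invented marks transcribed from a hypothetical seating chart:

```python
from collections import Counter

# Marks from the seating chart: "+" for attention to appropriate
# behavior, "-" for corrections.
chart_marks = ["+", "-", "-", "+", "-", "-", "-", "+", "-", "-", "-", "-"]

counts = Counter(chart_marks)
print(f"Positive: {counts['+']}, corrective: {counts['-']}")
print(f"Ratio of interaction: {counts['+']}:{counts['-']}")  # 3:9, i.e., 1:3
```

A teacher who sees far more minuses than pluses knows at a glance where the five-dollar bills are going.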
Teacher Talk Versus Student Talk

One thing teachers often see when they watch video of their lessons is that they talk too much and students don’t talk enough. This is an important discovery, because whoever is doing the talking is usually doing the learning (Clinton, Cairns, McLaren, & Simpson, 2014). Tamika gathers data on student talk by using her smartphone to keep track of when students talk and then subtracting that amount of time from the total class time.
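The arithmetic behind that subtraction could look something like the following sketch. The intervals are invented, and times are minutes from the start of class; this is an illustration of the method Tamika describes, not her actual tool.

```python
# (start, end) intervals, in minutes, during which students were talking.
student_talk_intervals = [(3, 5), (12, 17), (24, 26), (38, 45)]
class_length = 50  # minutes

student_talk = sum(end - start for start, end in student_talk_intervals)
remainder = class_length - student_talk  # teacher talk plus other time

print(f"Student talk: {student_talk} min "
      f"({100 * student_talk / class_length:.0f}% of class)")
print(f"Teacher talk and other time: {remainder} min")
```

Note that the subtraction lumps teacher talk together with transitions and silence, which is why Tamika pairs this measure with instructional versus noninstructional time, described below.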
Questions

One of the easiest and most powerful steps teachers can take to increase student engagement, and consequently learning, is to change the way they ask questions. Open questions entail a potentially infinite number of responses, and they usually elicit longer answers. Closed questions, by contrast, have a finite number of answers. A closed question might be “Who was the original voice of Darth Vader?” whereas an open question might be “Why do you think Star Wars movies are so popular?”

A second distinction is between right-or-wrong questions and opinion questions. Right-or-wrong questions, as the term implies, have right or wrong answers. Thus, a right-or-wrong question might be “What year was the first Star Wars movie released?” An opinion question, however, is one you can’t get wrong, such as “Who is your favorite Star Wars character?”

Finally, a third distinction relates to the level of the questions. Many educators use Bloom’s taxonomy to distinguish between levels. Other ways of sorting levels of questions include Lorin Anderson and David Krathwohl’s (2001) revised version of Bloom, Robert Marzano’s taxonomy (2001), and Norman Webb’s Depth of Knowledge levels (2002). In our work at ICG, we sort questions into three levels: (1) knowledge, (2) skills, and (3) big ideas.

Tamika finds that what matters most with questioning is asking the right kind of question for the kind of learning that is taking place. She sorts learning into two categories: closed learning, during which students are expected to master content as it is directly taught, and open learning, during which students construct their own understandings. For closed learning, teachers should generally ask a lot of closed, right-or-wrong questions, since a major purpose of questioning during closed learning is to confirm that students can demonstrate their understanding of the content as the teacher intends. For open learning, where students construct their own understandings, open opinion questions are usually more effective, especially for promoting classroom conversation, because learners, whether young or older, usually hesitate to answer closed questions for fear of answering incorrectly in front of their peers. Open questions also usually provoke longer answers.

Opportunities to Respond

Opportunities to respond (OTR) refers to the number of different times students are prompted to react to what they are learning. Teachers create opportunities to respond by asking questions, directing students to turn to their neighbor and compare answers to a question, asking students to hold up response cards, and so forth. Tamika gathers data on opportunities to respond on a seating chart by putting a tally under students’ names when they are prompted to respond or by putting a tally on the side of the page when the class responds together to a prompt (e.g., in the case of choral responses to a question). Opportunities to respond are most useful during direct instruction or closed learning, when frequent interactions increase engagement and learning. During open learning, a smaller number of questions promotes deeper thought and dialogue.

Instructional Versus Noninstructional Time

Noninstructional time refers to all the unproductive activities that occur in class, such as talking after the bell before class begins, lining up to leave the room before the bell rings, moving from one center to the next, handing out assignments, taking attendance, and so forth. Noninstructional time may also be referred to as transition or wasted time. Some of the time students spend in any class is inevitably noninstructional. At the same time, it’s obvious that the more time students spend on productive experiences, the more they are likely to learn. Tamika records instructional and noninstructional time the same way she records teacher and student talk. Another way to measure instructional time is to note how much time is spent on various instructional or learning activities during a lesson. Tamika can display these data using a pie chart like the one shown in Figure 5.8 so teachers can immediately see how they are using their time during teaching.

As an instructional coach, Tamika gathers data almost every day to help teachers set goals, measure progress, make adaptations, or look at how teaching strategies are being implemented. Data serve as a GPS for the learning journey students take in school. But data only measure the impact of changes: teachers and coaches still need teaching strategies so that students can meet goals. That is why Tamika, like all instructional coaches, needs an instructional playbook that describes the high-impact teaching strategies she most frequently shares with teachers. In Chapter 6, you’ll learn what any coach can do to create such a playbook.

When possible, data should be

• Chosen by the teacher;
• Objective;
• Gathered frequently;
• Valid;
• Reliable and mutually understood; and
• Gathered by teachers.

Engagement should be a focus for instructional coaching because it is an essential part of a fulfilling life and the main reason why students stay in school. Coaches and teachers can gather behavioral, cognitive, and emotional engagement data. Of course, achievement should also be central to instructional coaching. To assess achievement, coaches and teachers must clarify what students need to learn by unpacking standards, creating guiding questions, and developing specific proficiencies. Achievement can be measured with tests, checks for understanding, and both single- and multi-point rubrics.