To put together our new LEAP 2025 Algebra I Boot Camp, we dissected the state’s released practice test and assessment guides. Here’s a high-level overview of our findings. The Louisiana Department of Education has done a great job breaking down the test and its contents with its assessment guide, so much of what I provide here is an extract of the most important information from what the state has released.
Three Task Types
Each LEAP 2025 Algebra I item is referred to as a task, and these tasks belong to one of three different types:
- Type I: Major Content and Additional & Supporting Content. Assesses conceptual understanding, fluency, and application. (62% of points)
- Type II: Expressing Mathematical Reasoning. Students provide written arguments, justification, and evidence of math-based reasoning. (16% of points)
- Type III: Modeling & Application. Students solve real-world problems through modeling. (22% of points)
Points per type are specified in state documentation, so these percentage weights are expected to hold true in every test administration.
Three Major Content Subcategories
The most heavily weighted question type, Type I, is split into two categories: Major Content and Additional & Supporting Content. Major Content can be divided into three subcategories:
- Interpreting Functions (approximately 25% of points)
- Solving Algebraically (approximately 40% of points)
- Solving Graphically / Rate of Change (approximately 35% of points)
Nearly all test items can be associated with one of these three Major Content subcategories, even when the items are also categorized as Type I: Additional & Supporting Content, Type II, or Type III. In fact, we frame our boot camp sections around these content categories and include tasks of all types in each section, as this flows better pedagogically.
The percentages provided in this section are only rough estimates based on an analysis of the state’s one released practice test. The state has not given guidelines as to the weight of each content subcategory, so subsequent tests may vary widely from these proportions.
Five Achievement Levels
Student outcomes are now defined by five possible achievement levels, rather than four:
- Advanced: Exceeding college and career readiness expectations.
- Mastery: Meeting college and career readiness expectations.
- Basic: Nearly meeting college and career readiness expectations.
- Approaching Basic: Partially meeting college and career readiness expectations.
- Unsatisfactory: Not meeting college and career readiness expectations.
Technology-Enhanced Student Response
What follows is a list of student response types (e.g., dropdown, multiple choice, constructed response) and their relative weights in the state’s released practice test.
- Multiple Choice (34%)
- Multiple Select (9%)
- Fill in the Blank (15%)
- Constructed Response (21%)
- Technology-Enhanced Coordinate Grid (8%)
- Technology-Enhanced Drag-and-Drop (2%)
- Technology-Enhanced Dropdown (9%)
- Technology-Enhanced Keypad Input (2%)
State documentation also refers to this concept as “task type” (the same language that differentiates Type I, Type II, and Type III tasks), so we use the more precise, if wordier, label “student response type” in our discussions to avoid ambiguity.
The percentages given in this section of the document are not based on state specifications but rather on an analysis of the one released practice test, and so they could vary widely from one test administration to the next.
Sessions and Time Allotments
Below is a summary of the test sessions, their time allotments, and their calculator policies. The new LEAP 2025 Algebra I assessment has strict time limits.
| Test Session | Calculator? | Number of Points | Time Limit |
| --- | --- | --- | --- |
| Session 1a | No | 9 | 25 minutes |
| Session 1b | Yes | 13 | 55 minutes |
| Session 2 | Yes | 23 | 80 minutes |
| Session 3 | Yes | 23 | 80 minutes |
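Because the time limits are strict, it is worth working out how much time the sessions allow per point. The sketch below uses the point and time values from the session table above; the per-point pacing figures are our own derivation, not a state recommendation.

```python
# Session data from the state's published session structure:
# (number of points, time limit in minutes)
sessions = {
    "Session 1a": (9, 25),
    "Session 1b": (13, 55),
    "Session 2": (23, 80),
    "Session 3": (23, 80),
}

total_points = sum(points for points, _ in sessions.values())
total_minutes = sum(minutes for _, minutes in sessions.values())
print(total_points, total_minutes)  # 68 240

# Rough pacing guide: minutes available per point in each session.
for name, (points, minutes) in sessions.items():
    print(f"{name}: {minutes / points:.1f} minutes per point")
```

In other words, the full assessment offers 68 points across four hours of testing, with roughly three to four minutes available per point depending on the session.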
Each task is assigned a point value based on two rules:
- Constructed-response questions: students can earn one point per component evaluated in the rubric.
- All other question types: one point per part.
Students can anticipate the value of a constructed-response question based on how many answer components are involved. For example, if the question asks students to solve an equation, show their work, and justify their answers, that would be three components, and the question would probably be worth three points. The maximum point value assigned to any one question on the state’s released test was four.
For all student response types other than constructed response, counting the parts gives the point total for the task. If no parts are identified (and the task is not constructed response), the task is a one-part question worth one point.
Points for each part or component are assigned independently. A student who misses the first part or the first component does not necessarily miss the rest.
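The two point-assignment rules above can be sketched as a small function. The function name and parameters here are illustrative assumptions for the sake of the sketch, not anything defined in state documentation.

```python
# Hypothetical sketch of the LEAP 2025 point-assignment rules described above.
def task_points(response_type: str, parts: int = 1, rubric_components: int = 0) -> int:
    """Return the point value of a task under the two rules above."""
    if response_type == "constructed response":
        # Rule 1: one point per component evaluated in the rubric.
        return rubric_components
    # Rule 2: one point per part; a task with no labeled parts
    # counts as a single one-point part.
    return max(parts, 1)

# A three-component constructed-response item (solve, show work, justify):
print(task_points("constructed response", rubric_components=3))  # 3
# A multiple-choice item with no labeled parts:
print(task_points("multiple choice"))  # 1
```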
Changes to Our Algebra I Boot Camp
MasteryPrep’s changes to our Algebra I Boot Camp represent the most complete update we have ever given a product, second only to our move from our original ACT Mastery product to our modern, comprehensive ACT Mastery curriculum. LEAP 2025 Algebra I is a new test, and so we have created a new boot camp! Only about 20% of the items students experienced during the old Algebra I EOC Boot Camp are used in the new LEAP 2025 Algebra I Boot Camp, and even those items have in many cases been revised upward in rigor.
- New Section: We have introduced an entirely new section to the boot camp: Solving Graphically / Rate of Change.
- Three Task Types: Students complete and are coached on many Type I (comprehension), Type II (reasoning), and Type III (modeling) tasks throughout the boot camp.
- New Student Response Types: We provide students exposure to all the new student response types, including simulating (on pencil and paper) dropdowns and direct interactions with coordinate grids.
- New Content: We have added a review of function definition and domain (important concepts on the new test) and updates to over 80% of the content strategies students experience in the workshop.
- Detailed Explanations: Included are step-by-step walkthroughs of all the new questions we have introduced.
Time Management and Test-Taking Skills Changes
- Clock Management: With the new test under strict time limits, our boot camp is now a great opportunity for your students to learn the correct pace.
- Advanced Process of Elimination: Very few questions on this new test yield to simple “guess and check,” so we have retooled our test-taking strategies to provide students with more powerful critical thinking techniques that will yield many extra points.
- Scoring: We share with students how the new test is scored (and what to do if they are stumped on one part of a task) as well as provide examples of the scoring rubrics so they can understand what is being asked of them in constructed-response questions.
My holistic opinion of this test, now that I’ve had the opportunity to analyze its released items in depth, is that the LEAP 2025 Algebra I test represents a significant improvement over the previous Algebra I EOC iteration, both in terms of rigor and in the test’s ability to validate that students have learned the substance of Algebra I as defined by the Louisiana State Standards for Mathematics. The test, in general, is less susceptible to test-taking strategies than its predecessor. In some instances, the technology-enhanced items manage to allow the test to more genuinely reflect the type of tasks students are experiencing in the classroom.
On a more technical note, the state’s released tasks were, relatively speaking, tightly aligned to the state standards, and educators who teach to these standards (while of course keeping the assessed standard as defined by LEAP 2025 in mind to aim for the correct level of rigor) will likely be rewarded in the form of higher standardized test scores without having to explicitly “teach to the test.”
That being said, since the test does represent an observable increase in rigor, putting it on a plane above many state assessments in this regard, I will be interested in how the state defines its raw-to-scale score conversion. I would anticipate that percentage-wise, students will score lower in terms of raw score on this assessment compared to students at similar performance levels who took the old Algebra I EOC. Depending on the scaling system, however, we could see little relative year-over-year change in student achievement levels as the state transitions to the new test. It is also possible that the measured student achievement levels could dive this year to form a new baseline reflective of the more rigorous assessment standards and the presence of five achievement levels rather than four. In this regard, only time will tell. From the educator’s point of view, all we can do is work to increase our students’ comprehension of Algebra I in alignment with the state standards, and consequently improve the raw scores of our students on the LEAP 2025 Algebra I test.
In the end, an assessment is valuable in relation to the information it gives and the activity it incentivizes. This new test gives better information about higher Depth of Knowledge levels and encourages instructors to design lessons with the goal of achieving student autonomy on complex mathematical tasks. Students who do not achieve this goal will not be able to reach the Advanced or even Mastery achievement levels.