The story of James “Ikie” Brooks is emblematic of the lives of many young Americans growing up in disadvantaged communities across the United States. Ikie comes from Boone County, West Virginia, an area devastated by the loss of well-paying mining jobs and the onset of the opioid epidemic. His father died when he was 14, and his mother has struggled with addiction. Despite these challenges, Ikie held onto his dream of attending college and is currently a senior at Marshall University earning a degree in political science.
In the face of so many hurdles, how did Ikie do it?
Although a number of circumstances contributed to his success, a federal college access program called Gaining Early Awareness and Readiness for Undergraduate Programs — better known as GEAR UP — played a big role. As a student at Scott High School in West Virginia during the state’s first GEAR UP grant cycle, 2007 to 2014, Ikie benefited from a unique support system. “Having a strong support system to help me navigate high school and beyond made a difference for me,” Ikie said of the experience. “Being a GEAR UP student allowed me to get the support I needed as a first-generation college student. I understood that life after high school would consist of me pursuing my dreams.”
A Closer Look at College Access and Success Programs
GEAR UP is one of many programs that aim to elevate our country’s educational and economic outcomes by emphasizing strategies that improve participation in higher education among low-income and disadvantaged students. Programs like the Collaborative Regional Education program, developed by Jacksonville State University, seek to transform classroom instruction and prepare educators to accelerate student achievement and college readiness in rural communities.
Other programs like College Possible — which provides near-peer mentoring services for disadvantaged students in urban settings across the country — help prepare students for the rigors of a college environment. The Fund for the Improvement of Postsecondary Education’s (FIPSE) First in the World Grant Program pursues ambitious goals like returning America to first place in the world for bachelor’s degree attainment.
Enabling students to hope and plan for futures that offer career achievement and satisfaction is also critical to motivating them to complete high school and pursue postsecondary education. The National Academy Foundation (NAF), a non-profit organization that provides career academies in high schools, inspires students by offering “real world” internship and work-based learning opportunities, which connect them with a variety of career paths and future possibilities that disadvantaged students may not have learned about from family or friends.
When it Comes to Education, the Stakes Are Rising
One reason for the recent proliferation of college access and success programs like those mentioned above is, of course, the increasing importance of education as a prerequisite for gainful employment. According to a recent report from the National Center for Public Policy and Higher Education, “education is key to keeping the U.S. workforce competitive in the evolving knowledge-based economy.”
That’s not just an opinion — more jobs than ever require at least some postsecondary education, a trend that promises to continue. By 2020, 65 percent of all jobs will require postsecondary education and training — compared to 28 percent in 1973. And, despite the increasing importance of postsecondary education, in 2015, fewer than half of all young adults in the U.S. had completed an associate’s degree or higher (46 percent) and only 39 percent had completed a bachelor’s degree or higher (Kena et al., 2015).
A second reason that college access and success programs matter is that they can help mitigate the array of barriers that prevent disadvantaged students from achieving academic success, college enrollment, and college completion goals. Historically, these challenges have included the financial realities of poor and indebted families who do not see any way to pay for postsecondary education. Another barrier is historical in nature — young people in these communities may have few (if any) family members or neighbors who have attended college, meaning they have no one to look to as a model.
Families without firsthand experience of college aren’t likely to have the knowledge to help their children apply for college and financial aid, and may not believe postsecondary education is within their reach. Some aren’t convinced that further education is essential to their child’s economic success. Some parents or guardians have had negative school experiences themselves and hesitate to engage with schools more than absolutely necessary. Others may fear that if their child attends college, they are less likely to return to their community to live.
Schools in such communities often lack the resources, incentives, or orientation to postsecondary education to encourage or even support students who are thinking about further education. These schools may not offer many of the courses, financial aid advice, or training opportunities — like Advanced Placement courses, SAT or ACT support, or FAFSA counseling — that act as stepping stones for other college-bound students. These realities have resulted in college enrollment and completion rates for low-income and disadvantaged students lagging behind those of their more affluent peers.
Outcomes Matter: Evaluating the Success of College Preparedness Programs
The need for these programs is clearly established, but simply implementing them is not enough. We need to develop a firm understanding of how well they’re working or where they’re falling short. Well-conducted evaluations help tease out weaknesses, improve strategies, and track progress over time, from high school into postsecondary education, for the young people who experience these programs. Here’s why:
It helps maintain funding resources for program operators and staff.
On the most practical level, if program staff don’t have credible data to demonstrate the impact of their programs, how will they develop them further or secure additional funding to support or expand services? We need sound evidence that we are moving the needle for these disadvantaged students in order to generate opportunities to serve more people.
It makes programs better over time.
And it isn’t only about securing data to demonstrate the impact of programs. Program staff also need evaluation data to intelligently refine their programs. Formative evaluation, in particular—the process of collecting and analyzing ongoing performance data and using this information to make decisions about program processes—provides critical interim feedback about how programs are actually implemented on the ground.
This information helps program staff systematically understand how programs need to be tweaked to produce the impacts we want. For example, if program staff believe a program requires that at least 100 service hours be provided to students, and we don’t document whether those hours are being provided before the end of the program, we are missing an opportunity to review these data mid-stream and make course corrections to ensure we have our intended impact. Similarly, gathering data from program users and comparing how strategies played out with different populations of students may reveal that some approaches were more successful in some places or with some groups than others, and why. This is critical to taking our programs to the next level.
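The mid-stream check described above can be sketched in a few lines. This is a minimal, illustrative example: the function name, the 100-hour target over a nine-month program, and the student records are all hypothetical, and a real program would pull hours from its own data system.

```python
def on_track(hours_logged, months_elapsed, target_hours=100, program_months=9):
    """Return True if a student's logged service hours keep pace with the
    program's target (here, an assumed 100 hours over 9 months)."""
    expected_so_far = target_hours * (months_elapsed / program_months)
    return hours_logged >= expected_so_far

# Hypothetical mid-year review: hours logged after 4 of 9 months.
roster = {"S001": 50, "S002": 30, "S003": 44}

# Students falling behind pace, flagged for follow-up before year's end.
flagged = [student for student, hours in roster.items() if not on_track(hours, 4)]
print(flagged)  # ['S002', 'S003']
```

A simple pace check like this lets staff intervene while there is still time to make up hours, rather than discovering the shortfall in an end-of-year report.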
The communities we serve deserve it.
Above all, evaluation matters because we have a duty to the students we serve. Without the evidence needed to sustain funding, staff, and program improvements, we shortchange the communities that stand to benefit from these programs the most. If society is implementing programs designed to fix historic inequities, then we have an obligation to the students and families we serve to find out whether these programs are actually doing that. We need to rigorously evaluate these programs to hold ourselves accountable to improving outcomes for low-income and disadvantaged students and their families.
What are common challenges in evaluating college access and success programs?
The good news is that many of the programs outlined above require or strongly encourage program providers to conduct an external assessment, or evaluation, of the extent to which their programs impact college readiness and participation outcomes for low-income and disadvantaged students. But addressing the many challenges that disadvantaged students face is neither easy nor quick. The changes these programs generate take time to emerge, and real impact will require years to manifest. Here are some common issues evaluators face — and ideas about how to resolve them:
Programs tend to be longitudinal in nature—that is, they occur over a long period of time—which causes many evaluation challenges.
As much as we want to make a difference for students the moment they enter a program, counteracting the effects of historic and entrenched inequities will require more than a year or two of program services. Programs like GEAR UP recognize that, and provide up to seven years of continuous service to students. And while that level of support helps students, it complicates evaluation because those students will have to be tracked not only over time but across institutional boundaries (from high school to college, for example, and then perhaps to another college or into employment).
In these instances, program staff should think about capitalizing on the data available in national databases such as the National Student Clearinghouse, a leading provider of educational reporting, data exchange, verification, and research services in which most public and private colleges and universities participate, as well as thousands of high schools. The NSC Research Center provides longitudinal data on some important student outcomes like college enrollment, persistence, and completion.
State longitudinal data systems are another source of information on student outcomes that cross institutional boundaries. Over the course of several federal grant cycles, states developed and invested in these systems to address specific education-related policy and research questions by integrating data from early childhood, public pre-Kindergarten through grade 12, public higher education, and workforce data systems.
Programs may serve small populations of students, leading to other types of evaluation challenges.
While working with fewer students may help program staff provide more individual attention and support, studying smaller groups of students makes it more difficult to show statistically significant effects. This issue especially confronts programs operated in rural schools that might have very few students in each grade or programs that serve very small populations of low-income or disadvantaged students, like American Indians.
Programs of any size, furthermore, are expensive to operate, staff, and expand — imagine a program that purchases technology for every student it serves, or a mentoring program with the capacity to serve only 15 students per school. But even for small programs, it’s possible to improve statistical power without substantially increasing costs.
One useful strategy is to enroll comparatively more comparison-group students or schools. In the mentoring example above, program staff could serve 15 students in each school but collect comparison-group data from 45 students not served by the program. This increases the total sample size to 60 students per school.
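A rough power calculation shows why the larger comparison group helps. This is a minimal sketch using a normal approximation for a two-sample comparison of means; the 0.4 standard-deviation effect size and the two-sided alpha of 0.05 are assumed, illustrative values, not figures from any particular program, and a real evaluation would also pool across schools and account for clustering.

```python
import math

def two_sample_power(n_treat, n_comp, effect_size=0.4, alpha_z=1.96):
    """Approximate power of a two-sample test of means (normal approximation).

    effect_size is a standardized mean difference (Cohen's d); alpha_z is the
    two-sided critical value (1.96 for alpha = 0.05).
    """
    se_factor = math.sqrt(1 / n_treat + 1 / n_comp)
    z = effect_size / se_factor - alpha_z
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Equal allocation: 15 treated vs. 15 comparison students per school.
print(round(two_sample_power(15, 15), 2))  # ~0.19
# Unequal allocation: 15 treated vs. 45 comparison students per school.
print(round(two_sample_power(15, 45), 2))  # ~0.27
```

With these illustrative numbers, tripling the comparison group raises per-school power even though the number of students served stays fixed at 15 — the cost of collecting comparison data is usually far lower than the cost of delivering services.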
Attempting to establish a control condition for these programs raises serious ethical issues.
The most rigorous evaluation studies use randomized controlled trial (RCT) designs that include a random process to assign students to treatment or control conditions. In these studies, treatment students receive the intervention while control students do not. Likewise, quasi-experimental design (QED) studies require the identification of a comparison group of students who do not receive program services. If all or many of the students in a school face serious disadvantages, it would be seriously unethical to withhold the program from half of them so that it could be studied.
For some projects, one option is to delay treatment so that the control group receives the intervention later — but with college access programs, how long can one delay the intervention without seriously reducing the chances of achieving a good outcome for those students?
Another option is to provide a suite of basic program services to all students, but to assign one group (the one being studied) to receive an extra intervention. This design lets program staff study the added activity while the control/comparison group still receives core services. Of course, because the comparison group also benefits from those core services, the measured impact of the added activity may appear weaker than the program’s true overall impact.
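The within-school assignment described above can be done transparently and reproducibly. The sketch below is illustrative — the student IDs are hypothetical and the fixed seed is there so the assignment can be re-run and audited — but it captures the key ethical point: every student receives core services, and only access to the extra activity is randomized.

```python
import random

def assign_within_school(student_ids, seed=2024):
    """Randomly split one school's students into two arms: core services only,
    and core services plus the add-on activity being studied.

    A fixed seed makes the assignment reproducible for auditing.
    """
    rng = random.Random(seed)
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "core_plus_addon": sorted(shuffled[:half]),
        "core_only": sorted(shuffled[half:]),
    }

# Hypothetical roster of 30 students in one school.
groups = assign_within_school([f"S{i:03d}" for i in range(30)])
print(len(groups["core_plus_addon"]), len(groups["core_only"]))  # 15 15
```

Documenting the seed and the roster used for each school gives evaluators a verifiable record that assignment was truly random, which matters when the results are later scrutinized.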
Students Can’t Wait
At the end of the day, the purpose of evaluation in this context is to ensure that the college access and success programs we offer help accelerate achievement and reduce inequities for low-income and disadvantaged students. We owe it to students like Ikie Brooks and thousands of others across the U.S. to study these important programs and find out how we are doing in meeting our goals.
We know that undertaking evaluations of these programs will undoubtedly challenge us in ways we could not have imagined when we set out on this course. That’s why it’s so important to remain flexible and open-minded to innovative solutions if we wish to overcome these challenges. While it can seem overwhelming at times, we must keep in mind that the reward is great — and there’s no time to waste.