Christ, Frank L. “Yesterday’s Words, Tomorrow’s Challenges,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 9-11.
Yesterday’s Words, Tomorrow’s Challenges
Frank L. Christ
Co-Director of the Winter Institute
There is a moment between yesterday and tomorrow. That moment is always now. Some of those “now” moments make a difference as we act, react, or choose not to act or react. Yet for all the uniqueness of our “now” moments, someone like us has also been suspended between a yesterday and a tomorrow like ours and has drawn upon a remembered phrase for insight, strength, or respite. We call those remembered phrases quotations, and today I would like to share with you some remembered phrases or quotations that have guided my personal and professional life, quotations that I began to collect in my prep school days and that I still continue to accumulate today. Quotations that, in the words of Disraeli, perpetuate “…the wisdom of the wise and the experience of the ages.” It is in the spirit of Disraeli’s affirmation of the value of quotations that I speak with you today.
Before I share some of my personal quotations with you, would each of you take a few moments to center yourself and recall one quotation that you have read or heard that has affected or is affecting your life. Then, would you share this remembered quotation with someone at your table. The exact words of your quotation or the name of its author are not that important. What is important is that those words affected you either personally or professionally.
Perhaps, one of your quotations that you have just shared at your table will also be one that I will share with you. These few quotations that I have selected from my yesterday do not come from a book of quotations as an artificial exercise in erudition or postprandial platitudes. They are distilled from a lifetime of reading and listening and reflecting. I share them with you to be reflected upon in your future moments of tranquillity when you need an insight into tomorrow’s problems, strength for tomorrow’s decisions, or respite from tomorrow’s problems and decisions.
My first two quotations come from ancient Greece. The first is attributed either to Thales, one of the Seven Sages, or to the Oracle of Apollo at Delphi. It is “gnothi seauton” – “know thyself.” As I reflect back upon this phrase, I recall the words of Polonius to his son, Laertes, as he sums up his farewell advice to him: “This above all: to thine own self be true…”
To know ourselves and to be true to ourselves is more difficult today than it was in ancient Greece or in medieval Denmark. Vietnam, Watergate and Nixon, pollution of our air, water, and soil, human rights violations both here and abroad, senseless acts of terrorism, nuclear threat, Desert Storm, inflation, the national deficit, the homeless, AIDS, drug abuse, the information explosion, future shock – all have complicated and continue to complicate our knowing who we are. All of these have stressed and continue to stress our will to
maintain our personal and professional integrity. Yet, it is in knowing ourselves and being true to ourselves that we must begin and must end as we spend our lifetime learning about ourselves through practical experience and the study of science, philosophy, music, literature, art, and psychology.
The second of yesterday’s words comes from Socrates as we know him through Plato’s writings. If we live Socrates’s words, we join the ranks of the world’s restless ones – the St. Augustines, the Michelangelos, the Da Vincis, the Martin Luthers, the Galileos, the G. K. Chestertons, the Madame Curies, the Boris Pasternaks, the Anais Nins, the Buckminster Fullers – the world’s malcontents who believed with Socrates that “the unexamined life is not worth living.”
Socrates’s words have contributed to my restlessness as I have moved through several careers: as a construction engineer in the military, as a physical educator and coach at West Point, as an instructor of classical rhetoric at a Jesuit college, as a director of a Great Books program at a small liberal arts college, as a director of student programs for orientation and foreign students, as an executive trainer for business and industry, and as a counselor and administrator of a university learning assistance center.
Socrates’s words have impacted my professional life as I adapted them to my professional workplace. In an article that I wrote for the Journal of Developmental Education, I suggested that for us as learning assistance professionals “the unexamined activity is not worth doing;” “the unexamined routine is not worth perpetuating;” and “the unexamined file is not worth keeping.”
Socrates’s words can inspire all of us to examine the unexaminable and to reexamine the examined so that we can meet tomorrow’s challenges not only in global and national politics, in the conservation and renewal of natural resources, in the distribution of economic resources, but also in our personal and professional lives as husbands, wives, lovers, fathers, mothers, teachers, practitioners, administrators, and lifelong learners.
Reflecting back on this life of knowing, I often turn to a statement by Alfred North Whitehead, mathematician and philosopher, who reminds me that “a merely well-informed [person] is the most useless bore on God’s earth.” Knowledge is for doing. Prospero, in Shakespeare’s The Tempest, finds this out when he is driven into exile because he has paid no attention to affairs of state, insisting that his “library is dukedom large enough.” What is learned in the physical and natural sciences can and must provide efficiency, comfort, convenience, health, and longevity for ourselves and others. What is learned in the social sciences can and must provide caring for and sharing with others. What is learned in the humanities can and must provide beauty, recreation, and relaxation for ourselves and others. We must remember not only to examine life but also to change it to what we believe it can and must be.
The next examples of yesterday’s words concern lifelong learning – learning beyond our degrees and textbook knowledge. Milton’s words remind us that “a good book is the
precious lifeblood of a master spirit embalmed and treasured up on purpose to a life beyond life.”
As we reflect on these words, how would we answer this question that Admiral Rickover put to junior officers as he interviewed them for positions on his staff: “What three good books have you read in the past month?” Not textbooks but rather books of great poets and thinkers, what Jacques Maritain called “the foster fathers of intelligence.” Can we write down titles of ten to twenty good books that we have read in the past year? Books that can help us to echo the words of educator Sylvia Ashton-Warner, who proclaimed passionately: “I am my own universe, I my own professor.” Our education, our life, should prepare us to be our own professors.

There is a quotation by Thomas Carlyle from Heroes and Hero Worship that I have used for years to end my reading and study skills workshops. It sums up one of the goals of higher education – a goal with which we are all intimately involved. Carlyle said, “All that a university or final highest school can do is but what the first school began doing, teach us to read.” To read with understanding. To read with compassion. To read and remember what we need to know. To read between the lines. To read selectively. To read to solve problems. To read to make decisions. To read to ask questions. To read and interpret. To read and extrapolate. To read and apply.
Recently I have added another quotation to Carlyle’s, one more appropriate to our era of an information explosion in which a Niagara of words engulfs us with more than 30,000 new books annually, 500,000 scientific papers annually, and hundreds of thousands of pages in journals, periodicals, and newspapers, not annually but weekly. This quotation is from Gujuroy, a Human Resources Management consultant, who said, “The illiterate [person] is not one who cannot read. It is the [person] who has not learned to learn.” For us professionally, learning to learn and helping others to learn to learn is what we are all about in postsecondary learning assistance programs.
We are aware that decisions are becoming increasingly complex and that information or the lack of it can mean the difference between problem and solution, success or failure. We are also painfully aware that facts are not enough. We must not only be informed but wise, with a wisdom that sees decisions affecting humans, living and unborn. That wisdom lies buried in the tribal lore, the folklore, the literature of our cultures. It is there for us to seek out, to reflect upon, to adapt for tomorrow’s challenges. It lies in books like the Bible, the Koran, the Talmud, the Analects of Confucius, the Popol Vuh of the Mayans. In such books are more of yesterday’s words for you and me to read and reread, so that we can meet tomorrow’s challenges.
With these words from yesterday, you and I can live a different life – a life that is exemplified in my final quotation from William Allen White: “I am not afraid of tomorrow, for I have seen yesterday, and I love today.”
Boylan, Hunter. “Learning Assistance and Developmental Education: The History and the State of the Art,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 12-19.
Learning Assistance and Developmental Education:
The History and the State of the Art
Dr. Hunter R. Boylan
National Center for Developmental Education
The purpose of this manuscript is threefold. First, it provides a brief history of learning assistance. Second, it discusses where the field is today. Finally, it presents some preliminary findings from the National Center for Developmental Education’s study of developmental education and learning assistance programs.
In endeavoring to do these things, it should begin, as the caterpillar told Alice, at the beginning. And the beginning in American higher education was the founding of Harvard College in 1636. Few recall that Harvard College was originally established for the purpose of training ministers for the churches of the New World – at least the Puritan churches. The Puritans valued literacy and classical scholarship – at least among their clergy – and considered these to be requisite for advanced study. So the founding fathers of Harvard College initiated what has since become the bane of postsecondary education – admissions requirements.
Classical scholarship of the time involved, primarily, the reading of Greek and Latin. There were two reasons for this. First, most of the religious writings of the time were in Greek and Latin. Second, there were few books written in English.
The King James version of the Bible was the first book actually written in modern English, and it had been published only 25 years earlier. Academic literacy in those days required a reading knowledge of at least three languages – Greek, Latin, and English. Possession of this knowledge was a requirement for admission to Harvard.
Not only had Latin been a dead language long before the Massachusetts Bay Colony was founded, but there were few Greeks among the early English settlers. Consequently, almost no one in the colony qualified for admission to Harvard.
The only way to qualify for admission was to be tutored by a member of the clergy who was proficient in Greek and Latin. It is ironic that, just as soon as we had our first college, we had our first underprepared students. Furthermore, just as soon as we established our first college, we also had to establish our first tutoring programs.
Those who consider underpreparedness among college students to be a recent problem – or learning assistance to be a “new” phenomenon – are totally in error. Both have been around since long before the institutions most of us work in were founded.
By the middle 1800s the situation had changed little – underpreparedness was still a problem. There were still far more students attending college than students who had the skills to be there. Furthermore, the admissions requirements of the day were not particularly stringent. Martha Maxwell (1985) points out that in order to qualify for
admission to Iowa State University in the middle of the 19th century, students had to be able to read, write, and be at least 14 years old.
While colleges had a lot of people who were over 14, they had fewer who could read and write at the college level. One response to this situation was the establishment of college preparatory departments – places that taught basic courses and offered tutoring to students so that they could improve their skills well enough to take regular college courses.
The first such program was established in 1849 at the University of Wisconsin – 142 years ago (Brier, 1986). For the next 50 years, college preparatory programs were to grow almost as rapidly as colleges and universities.
During this period, a major impetus to the expansion of college preparatory programs was the Morrill Act of 1862. It is well known that this act provided for the sale of land to support the establishment of public universities. What is less well known is that the act was also designed to provide a pool of skilled engineering, agricultural, technical, and managerial personnel for what Congress rightly saw as a burgeoning technological society. Furthermore, Congress specifically intended that public colleges should bring about social and economic mobility for what were then called the “industrial classes” (Boylan & White, 1987, p. 2).
As more and more land grant colleges were established, more and more members of the industrial classes attempted to gain entry to these institutions. And as more students presented themselves for admission, more students were found to be underprepared. Consequently, college preparatory programs, the precursors of modern developmental education and learning assistance programs, were established at practically every land grant institution in the nation. The model was also borrowed by many of the private institutions of the era.
By 1889, the National Education Association reported that 80% of American colleges and universities had established college preparatory programs (Canfield, 1889). This percentage was to remain unchanged for the next hundred years.
In 1915, for instance, the U.S. Commissioner of Education reported that about 80% of American colleges and universities had college preparatory departments to serve underprepared students (Maxwell, 1985). In 1973, Pat Cross surveyed American colleges and universities and found that about 80% of them offered remedial courses, developmental programs, or learning assistance programs (Cross, 1973). Sixteen years later, in 1989, the National Center for Education Statistics also surveyed American colleges and universities and found, again, that about 80% of them offered some sort of special programs for underprepared college students (NCES, 1991).
What surprises most people is not just that we have had the equivalent of developmental and learning assistance programs throughout the history of American higher education, but that we have also had about the same proportion of these programs for the last 140-odd years.
The problem of underpreparedness among college students is not a new problem. It has been with us since the first aspiring Harvard entrant had to take the equivalent of remedial Latin. And, ever since that time, there have been professionals there to respond to the problem. So the first message of this manuscript is that we are not parvenus in postsecondary education. We are not the new kids on the block responding to some new phenomenon. Many of our colleagues do not realize that when they first began teaching sociology and psychology and computer science and business administration, developmental education and learning assistance professionals were there to meet them.
The second area addressed in this manuscript is “Where are we now?” Just what is the state of the art in developmental education and learning assistance?
One aspect of that state of the art is the size of the endeavor. According to the National Center for Education Statistics (1991), more than 30,000 instructors are engaged in the teaching of remedial courses in American higher education. And that refers only to instructional personnel. When you add tutors and counselors and managers, the number of people engaged in our business is closer to 45,000. And that represents about 10% of the American professoriate.
There are three national professional associations serving these personnel – the College Reading and Learning Association, the National Association for Developmental Education, and Commission XVI of the American College Personnel Association. The state and regional organizations of these associations represent every state in the union. In addition, the Educational Opportunity Associations and their ten regional organizations represent the entire United States as well as Puerto Rico.
These professional associations are also asserting leadership in establishing standards for practice in the field. During the 1980s, Commission XVI of the American College Personnel Association developed a set of standards for learning assistance programs that may be used in accreditation as well as for self study. The College Reading and Learning Association has developed standards for tutorial programs and has established a certification system for tutors and programs. That association is also developing other standards and criteria for professional practice in cooperation with the National Association for Developmental Education.
There are five major publications for professionals in the field – the Journal of College Reading and Learning, The Journal of Developmental Education, Review of Research in Developmental Education, Research and Teaching in Developmental Education, and the Journal of the National Council of Educational Opportunity Associations.
Each year, more than 50 state, regional, and national conferences are held for professionals in developmental education and learning assistance. There are graduate programs for developmental education and learning assistance at Appalachian State University, Grambling State University, National Louis University, Northeastern University, Southwest Texas State University, and the University of Texas-El Paso. There are training institutes at the University of Arizona and Appalachian State University as well as one sponsored by the Midwest College Learning Center Association.
Let there be no mistake about it… learning improvement – which Ruth Keimig considers to be the bottom line in our profession (1983) – is a big business in the American higher education enterprise. Scott Miller, formerly the chief program officer for the Exxon Education Foundation, refers to developmental education as the largest growth industry in American higher education. The Chronicle of Higher Education recently cited remedial and developmental courses as the largest “hidden curriculum” in the American postsecondary system.
Twelve states – Alabama, Arkansas, Colorado, Connecticut, Florida, Georgia, Minnesota, New Jersey, Texas, Tennessee, Virginia, and West Virginia – now have in place legislative mandates requiring assessment of incoming students and the provision of remediation for those who need it (Carpenter, 1991). Even more states will join in such mandates before the end of the decade.
As noted earlier, there have always been tutorial programs in American higher education… there have always been underprepared students in American higher education… there have always been people and programs to work with these students. But now, there are some major differences.
Today, whether our colleagues or our administrators or our legislators like it or not, we stand at the forefront of the American higher education enterprise. We have moved from being a collection of people who provide services to unwanted students to being a movement of professionals whose services are needed by our institutions. We should no longer view ourselves as sub-professionals on the fringes of higher education. We are part of a profession – and one that is becoming of ever greater importance to our institutions.
Professionals are characterized by several things. First, they have specialized training. And we have that in our graduate programs, our training institutes, and our professional conferences. Second, they have professional associations that promote high standards of practice. Third, they provide for peer review for purposes of accreditation or certification. Our professional associations are now providing leadership in both these areas. Fourth, they have professional publications and a body of literature to guide practice. The number of journals and books in our field certainly attests to our professionalization in this area. Finally, a profession encourages and is the subject of on-going research to establish basic knowledge and to improve practice.
And that brings us to the last concern of this manuscript – research. In many respects, the Exxon Education Foundation has provided a major impetus for the professionalization of developmental education and learning assistance. The foundation helped to legitimize research in the field. It did so by providing a major grant to study the state of the art in our business – to gather some basic information about the students who receive our services and the institutions that host our programs.
The resulting study involved 159 institutions in all regions of the country. Programs at these institutions served 23,168 students. Of these, a random sample of over 5,000 students who attended college between 1984 and 1990 is currently included in our data base. The data base represents community colleges, technical colleges, private four-year institutions,
public four-year institutions, and research universities. It is the most comprehensive data base ever assembled on students participating in developmental and learning assistance programs.
This study has been going on for over three years and includes millions of bits of data on students, programs, and institutions. Unfortunately, we are still sorting through these millions of bits of data. As a result, our final analysis will not be completed for another six months or so. But we do have some preliminary findings of a general nature.
These findings refute at least two myths about developmental education and learning assistance. Chief among these myths is that most participants in these programs come from minority backgrounds. In fact, the vast majority (62.5%) of participants are white.
A second myth is that these programs are not cost effective because students who participate in them fail to graduate. What we found is that the persistence and graduation rates for students who participate in developmental or learning assistance programs are generally consistent with, and in some cases higher than, national averages.
In 1968, Jencks and Riesman estimated that throughout the 20th century, the graduation rate for four-year institutions was about 35%. Vincent Tinto (1987) estimates it to be slightly higher, at 45%. For two-year institutions, this figure has been variously estimated at 10 to 20%. Tinto (1987) estimates that only 13 of 100 community college entrants will earn a two-year degree.
If you average the persistence and graduation rates of students in our study from the various four-year institutions, it comes out to 38.9% over a 5 1/2 year period. Developmental students enrolled in two-year institutions had an average persistence and graduation rate of 28.7% over a four-year period. Taken as a whole, students who participate in developmental or learning assistance programs have a higher rate of persistence and graduation than the national averages for ALL community college students and a rate comparable to that for ALL four-year college students. Our data, therefore, helps to make a solid case that developmental and learning assistance programs can contribute to student persistence and graduation.
On the other hand, the performance of students at various kinds of institutions is uneven. For instance, 24% of developmental students at community colleges had either graduated or were still enrolled after four years while 33.7% of those in technical colleges had graduated or were still enrolled after four years. At two year schools, the rate of retention and graduation for developmental students is comparatively low. Nevertheless, it is still considerably higher than the national rates for ALL students. Furthermore, it is unfair to judge community colleges against the standard of graduation.
Community colleges are not designed simply to generate associate degrees. They are designed, among other things, to help students develop so that they can attain skills, leave the college, and do something else.
A somewhat disturbing note in our findings was the performance of developmental students at public universities. Only 28.4% of those at public universities had graduated or
were still enrolled after 5 1/2 years while 40.2% of those at private colleges had graduated or were still enrolled after 5 1/2 years. At research universities, 48.3% of the developmental students had graduated or were still enrolled after 5 1/2 years.
It should come as no surprise that developmental students at research universities have the highest persistence and graduation rates. After all, what Berkeley or Harvard considers to be an underprepared student is, nevertheless, a pretty good student. What did come as something of a surprise was that colleges devoted exclusively to vocational and technical training have a higher rate of graduation among developmental students than comprehensive community colleges. Another surprise was that private colleges do a much better job with these students than state universities. The underprepared students at private four-year colleges persisted or graduated at a 40.2% rate while similar students at public four-year colleges persisted or graduated at a rate of only 28.4%.
It is worth noting that when I talk about private colleges, I am not talking about Stanford or Duke. I am talking about the small, private, non-selective, four-year institutions. In fact, the private colleges in our study took in about the same sort of student as the community and technical colleges. When matched for high school GPA and SAT scores, there was no significant difference between the students participating in developmental and learning assistance programs at private four-year colleges and those participating in the same programs at two-year colleges.
This, in itself, is a significant finding. Apparently, private colleges are often competing for the same pool of students as community and technical colleges. However, the private colleges do a substantially better job insofar as graduation and persistence rates are concerned.
It is also interesting to note that, among institutions in our study, students attending public four-year institutions had consistently higher total SAT scores than those attending private four-year colleges. As noted earlier, SAT scores for those attending private colleges were similar to those attending community colleges. Nevertheless, the graduation and persistence rates were considerably higher for these students enrolled at private colleges.
Along the same lines, we found that Black developmental students are vastly more likely to graduate from research universities or private four-year colleges than from public universities or community colleges. The graduation rate for Black developmental students at research universities was 31.9% and 26.9% at private colleges. The graduation rate for Black developmental students at public universities was only 15.5% and only 7.2% at community colleges. Although the percentages differ slightly, a similar pattern is also found for Hispanic students. The highest graduation rates for developmental students of Hispanic background are found at private four-year colleges and at research universities. The lowest graduation rates for these students are found at public universities and community colleges. Unfortunately, we cannot yet explain the reasons for this. We are currently at the primary level of analysis of our data. Our data base has the information that should enable us to answer some of these questions in the future. At present, however, we can only speculate.
Another finding, which confirms that of other researchers, is that the single most significant predictor of graduation for developmental students is their first semester GPA. The higher the grades of a developmental student during his or her first semester in college, the greater that student’s chance of graduation. This is consistent with previous research by Kulik, Kulik, & Schwalb (1983) which suggests that the earlier a developmental program intervenes in a student’s career, the more likely that program is to have success.
Another finding worthy of note has to do with scores on achievement tests. The SAT and ACT scores of white students in developmental programs are consistently higher than those of non-white students. What may come as a surprise, however, is that for developmental students of minority background, SAT and ACT scores do not seem to be closely related to graduation rates.
Obviously, a student with an SAT score of 1,000 is more likely to succeed than one with a score of 400. In general, however, the SAT and ACT scores of non-white developmental students do not correlate well with their graduation rates. Minority developmental students with low SAT scores are just as likely to graduate as white students with higher SAT scores.
Now what does all this suggest about developmental education and learning assistance programs? First, the data indicates that what we do is working. Our students regularly graduate at rates consistent with those of better prepared students.
Second, the data clearly suggests that we can improve our performance by intervening early in the academic careers of our students. The earlier we provide our services, the more successful our students will be.
Third, our results suggest that we need to be cautious in judging students, particularly non-white students, by scores on achievement tests. If we are trying to pick potential winners and screen out losers, SAT and ACT scores are a poor measure – particularly for minority students, and particularly for those minority students who participate in developmental programs.
Finally, there may be some lessons to learn from the private, four-year colleges. For some reason, they are consistently successful in graduating their students – regardless of entry credentials and regardless of race.
It should be remembered that we are at the very beginning of the process of analyzing our data. The few findings noted here were taken right “off the top,” so to speak. In the months to come, we will be looking at what specific services were provided at what institutions, what staffing and organizational patterns tend to be related to success, what student characteristics seem to be correlated with success, and what program characteristics seem to be associated with success.
As more data becomes available, we will be presenting it at NADE and at CRLA. Soon we will have a complete report available on all aspects of our findings. So, as they say on TV – “Stay tuned. There’s more to come.”
[page 19]
References
Boylan, H., & White, W. (1987). The historical roots of developmental education: Educating all the nation’s people. Research in Developmental Education, 5(3), 1-4.
Boylan, H. (1988). The historical roots of developmental education: Part III. Research in Developmental Education, 5(1), 1-5.
Brier, E. (1986). Bridging the academic preparation gap: An historical view. Journal of Developmental Education, 8(1), 2-5.
Canfield, J. (1889). The opportunities of the rural population for higher education. National Council on Education, Nashville, TN.
Carpenter, T. (1991). Mandatory assessment survey results. Lansing, MI: Michigan Developmental Education Association/Lansing Community College.
Cross, K. (1973). Beyond the open door. San Francisco: Jossey-Bass.
Jencks, C., & Riesman, D. (1968). The academic revolution. Garden City, NY: Doubleday.
Keimig, R. (1983). Raising academic standards: A guide to learning improvement (ASHE-ERIC Research Report No. 1). Washington, DC: Association for the Study of Higher Education.
Kulik, J., Kulik, C.-L., & Schwalb, B. (1983). College programs for high-risk and disadvantaged students: A meta-analysis of findings. Review of Educational Research, 53, 397-414.
Maxwell, M. (1985). Improving student learning skills. San Francisco: Jossey-Bass.
National Center for Education Statistics. (1991). College level remedial education in the fall of 1989. Washington, DC: U.S. Department of Education.
Tinto, V. (1987). Leaving college. Chicago: University of Chicago Press.
A Study to Formulate a Learning Assistance Model for the California Community College
Burns, Marie-Elaine. “A Study to Formulate a Learning Assistance Model for the California Community College,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 20-23.
A Study to Formulate a Learning Assistance Model for the California Community College
Dr. Marie-Elaine Burns, Hartnell College
Futurists have suggested that curricula be changed to develop more self-directed learners who will be able to adapt to their own personal futures as well as to the future needs of the world. Small, in her paper “Systemic and Global Learning,” discussed the systemic approach to learning. This approach included the French author de Rosnay’s guidelines for a new systemic education. De Rosnay’s guidelines included learning skills, such as the integration and synthesis of information, that are found in some time management and study-reading programs taught in a comprehensive learning assistance center environment.
Learning-to-learn skills are an international as well as a national concern. These skills are necessary for all human beings adjusting to a changing world, enabling learners to become academically successful. Universities as far away as Japan, Nigeria, Canada, Australia, and Switzerland have sent representatives to California State University, Long Beach for learning assistance management and practitioner training; they are but a few of the more than 300 administrators, counselors, and skills practitioners from 130 postsecondary institutions who have received such training (Christ, 1980).
Forrest promoted the notion that learning-to-learn skills are significant not only to “remedial” learners when he recommended:
Systematic placement into general education courses should apply to all students. Many entering freshmen, even at elite institutions, need to improve their academic survival skills…special remedial courses should be provided to them…(Noel, 1985).
Based on significant demographic changes in California, the open door policy, and charges made to the California community college as recommended by the Commission for the Review of the Master Plan for Higher Education and Assembly Bill 1725, the necessity for a learning assistance program model is apparent. Therefore, the purpose of this study was to formulate a learning assistance program model in a California community college.
The Study
Based on the literature reviewed, key components and characteristics of learning assistance were identified. A feasibility study was conducted to assess the readiness of the California community college for the proposed learning assistance model. The proposed model was implemented at a California community college. Finally, appropriate recommendations for implementation of the model in other community colleges were formulated.
[page 20]
The Proposed Model
The proposed learning assistance model for the community college will be discussed and displayed in figures 1 and 2. Figure 1 describes the proposed learning assistance model and the relationships between its key components and characteristics. Figure 2 describes how the learning assistance program model works.
The proposed model consists of eleven components: (1) Testing (Placement and Diagnostic); (2) Staff Training (Tutor Training, Learning Skills Specialist Training, Faculty Training, and Instructional Assistant Training); (3) Developmental Laboratories (English, math, reading, and international conversation laboratory); (4) Developmental Courses (English, math, reading, and English as a Second Language); (5) Study Skills; (6) Computer-Assisted Instruction; (7) Multi-media Systems for Learning (audio and video technologies); (8) Printed Instructional Programs and Materials; (9) Tutoring; (10) Ongoing Publicity and Public Relations; and (11) Supplemental Instruction.
The proposed model also possesses twelve key characteristics: (1) Individualized and Self-paced Learning; (2) Learner-centered Environment; (3) Centralized Resources; (4) Diagnostic Testing; (5) Prescriptive Recommendations; (6) Administratively Supported; (7) Faculty Supported; (8) Readily Accessible to Learners; (9) Visible to the Campus Community; (10) Effective Interrelationships with other Programs, Departments, and Services; (11) Cybernetic; and (12) Open to All Students.
As seen in figure 1, successful implementation of the learning assistance program model requires simultaneous implementation of all key components. When this is done, the proposed model will reflect all of the key characteristics. The key components represent the integral operational elements of the learning assistance model. The key characteristics of the proposed model, on the other hand, suggest the model’s unique qualities, setting the proposed learning assistance model apart from most existing learning assistance programs.
Figure 2 describes the operation of the proposed learning assistance model (how it works). Implementation of the model begins with the Staff Training component (shown at the top center of the model). For the model to be effective, all staff members must be trained not only with respect to their immediate job responsibilities, but also with respect to how their role affects the operation of the total learning assistance program. All professional staff need to be aware of the “why’s” and “how to’s” of the following: (1) making the learning assistance program learner-centered; (2) maintaining centralized resources; (3) the importance of diagnostic testing and prescriptive recommendations; (4) gaining administrative and faculty support; (5) maintaining easy access to students; (6) developing effective interrelationships with other programs, departments, and services; (7) keeping the learning assistance program cybernetic (self-correcting through evaluation); and finally, (8) emphasizing that the learning assistance program is for all students. Staff Training, displayed in the central portion of the model, also ensures the effectiveness of the key components and characteristics and produces the four key characteristics at the bottom center of the model.
Next, effective ongoing publicity and public relations (left, middle of the model) is emphasized in Staff Training and produces the following six key characteristics:
[page 21]
(1) Administratively Supported; (2) Faculty Supported; (3) Readily Accessible to Learners; (4) Visible to the Campus Community; (5) Effective Interrelationships with Other Programs, Departments, and Services; and (6) Open to All Students. Once awareness has been established, student referrals (to the right of Ongoing Publicity and Public Relations) to the learning assistance program are more readily made. Students then may be referred to any or all of four key components: (1) Developmental Laboratories; (2) Developmental Courses; (3) Study Skills; and (4) Tutoring. Student referrals may also be made between and among the four components.
Once students have been referred, they are tested for placement based on level of ability (exclusive of Study Skills and Tutoring) and diagnosed for skills improvement (exclusive of Tutoring); then, within their respective levels of ability, specific prescriptive recommendations are made for skills improvement. Prescriptive recommendations are then carried out through individualized and self-paced learning. Individualized and Self-paced Learning may be in the form of Computer-Assisted Instruction, any one of the Multi-media Delivery Systems for Learning, or Printed Instructional Programs and Materials.
At this point, the learning assistance program employs its final characteristic, Cybernetic (self-correcting through evaluation), which should be ongoing throughout the year. The outcomes of the learning assistance program are evaluated in the following ways: (1) by providing follow-up to prescriptive recommendations and assessing student progress; (2) by eliciting student, staff, and faculty feedback with regard to the effectiveness of programs and services; (3) by tracking student and faculty usage of courses, programs, equipment, and materials; and (4) by reviewing the learning assistance program’s goals and objectives for accountability each term.
Conclusions and Recommendations
Based on the findings of the study, several conclusions were drawn regarding the formulation of a learning assistance model for the California community college. However, this article will list only those conclusions and recommendations relative to a broader, national audience.
1. There is a need for innovative approaches to learning, such as the learning assistance program model established in this study.
2. The learning assistance model established in this study works for all students, faculty, staff, and administrators and adds to the effectiveness of interrelationships among departments, classes, student organizations, and offices on campus.
3. The current status of learning assistance-type programs on community college campuses will not effectively meet the needs of students, business and industry, colleges or universities, and the community as we move into the twenty-first century.
4. If community college administrators and faculty work cooperatively with the learning assistance program, as established in this study, there should be a reduction in attrition and an increase in retention, transfer, and graduation rates.
[page 22]
5. The philosophy that learning assistance program services are provided solely to the underprepared, remedial, basic skills, or categorical program student will decrease the effectiveness of the model formulated in this study.
6. The Staff Development Office should work with the Learning Assistance Program Director to develop faculty training around the issues of diversity and innovative teaching techniques and strategies.
7. Without institutional commitment the model will not be effective.
The significant recommendations of this study are as follows:
1. The model should be used for revising existing learning assistance-type programs as well as for planning new ones.
2. There should be at least one full-time Learning Skills Specialist in each learning assistance program to provide intake services, diagnosis, prescription, and follow-up with students who are not enrolled in a Developmental Course.
3. Students not enrolled in Developmental Courses should be allowed to walk into a Developmental Laboratory and have access to staff, materials, programs, and Multi-media Delivery Systems for Learning. In other words, Developmental Laboratories should be available to all students.
4. The Board of Trustees and the Chief Executive Officer of the community college should provide full institutional support to the learning assistance program through adequate funding, staffing, and physical space and fully support the characteristic, Open to All Students.
5. A formal field test and evaluation of the established model should be conducted.
The model established in this study is adaptable not only to any community college in this country, but to any four-year institution as well.
References
Christ, F. L. (1980). Learning assistance at a state university: A cybernetic model. In K. Lauridsen (Ed.), New directions for college learning assistance: Examining the scope of learning centers. San Francisco: Jossey-Bass.
Noel, L., & Levitz, R. (1985). Student development: Partners in student success. In J. S. Keyser (Ed.), Toward the future vitality of student development services. Iowa City: ACT.
Small, M. G. Systemic and global learning. Ashland, WI: Northland College. (ERIC ED 222 971)
[page 23]
Tutor Training: An Examination of Community College, College and University Tutor Programs in the United States and Canada
Gier, Tom & Karan Hancock. “Tutor Training: An Examination of Community College, College and University Tutor Programs in the United States and Canada,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 24-42.
Tutor Training: An Examination of Community College, College and University Tutor Programs in the United States and Canada
Dr. Tom Gier and Dr. Karan Hancock
University of Alaska Anchorage
Introduction
The purpose of this paper is to give the reader an overview of the actual tutor training practices in use today in United States and Canadian community/junior colleges, colleges, and universities. The authors, coordinators of the College Reading & Learning Association’s International Tutor Certification Program, surveyed the tutor training documents of the 118 tutor programs that, as of December 1992, had been certified by CRLA. The following is a breakdown, by institution type, of the 118 programs surveyed: 27 community/junior colleges, 39 colleges, and 52 universities. The distribution of these certified programs around the U.S. and Canada was 19 Eastern, 22 Southern, 17 Midwestern, 23 Southwestern, 32 Western, and 3 Canadian.
In Part One of this paper, six tutoring areas/topics from these 118 programs will be examined:
1) tutor program objectives;
2) amount and duration of tutor training;
3) the modes of training;
4) the topics/areas covered in training;
5) the tutor selection process;
6) the tutor evaluation process.
In addition to providing the CRLA guidelines that the certified tutor programs follow for each of the topics, the authors will also randomly select excerpts from community/junior college, college, and university programs that illustrate the application of the guidelines and topics. These selections will help the reader to get a more accurate picture of how these areas/topics are dealt with at different types of institutions around the U.S. and Canada. It should be mentioned that though the documents were randomly selected from the certified programs, they do represent typical tutor training practices.
In Part Two of this paper, five tutor training activities will be presented. These activities have been used successfully by numerous tutor training programs around the U.S. and Canada. They are presented here in order to give the reader some actual, hands-on training devices that she/he can take back and use in her/his own program:
1) communication skills;
2) communication/listening;
[page 24]
3) functional fixedness;
4) role modeling and problem solving; and
5) empathy training.
The reader has permission of the authors to replicate any and all of these activities.
In another article in this monograph, a bibliography, prepared by Marilyn White, Center for Independent Study, Butler County Community College, El Dorado, Kansas, is presented to enable the reader to further her/his own tutor research. It should be noted that this bibliography is an excerpt from the CRLA Tutor Registry and Resource Guide, Second Edition, which will be published in Spring 1993.
For more information concerning any of the programs mentioned, tutor training in general, or the CRLA International Tutor Certification Program, contact:
Dr. Tom Gier
or
Dr. Karan Hancock
English Department
University of Alaska Anchorage
3211 Providence Drive
Anchorage, Alaska 99508
#1: Examination of Tutor Training Topics
Though goals and objectives are not required as part of the documentation for certified programs, and thus no standards or guidelines are set, the following three programs’ goals and objectives do represent the norm for the certified programs. The first two programs, Glendale Community College and the University of Nebraska at Kearney, list goals for their tutor programs in relation to those who receive their services. The third program listed, University of Texas at El Paso, documents goals for the training of its tutors.
Program 1: Glendale Community College, Glendale, California
The Tutoring Center (TC) and The Writing Lab (WL) exist to serve students, faculty, and staff. Our chief goals are the following:
1) to help students improve not only their grades, but also, more importantly, their understanding, expression, and application of course concepts, through tutoring in the Tutoring Center and in The Writing Lab.
2) to advise students, when appropriate, concerning methods and habits of study, and provide them with relevant examples and materials.
3) to sustain a friendly environment and a receptive attitude that encourage students to overcome adversity and to build upon success.
[page 25]
Program 2: University of Nebraska at Kearney, Kearney, Nebraska
The objectives of our tutoring program are as follows:
1) To supplement the instructional resources;
2) To increase the probability of students’ success in passing academic subjects;
3) To individualize the learning-teaching process;
4) To increase students’ self-confidence and motivation.
Program 3: University of Texas at El Paso, El Paso, Texas
The goals of our training program are as follows:
1) To inform tutors of the procedures, policies, and job responsibilities involved in working at Study Skills and Tutorial Services.
2) To define the roles of a tutor and to provide instruction that will help a tutor to acquire the skills that will enable him or her to effectively assume each role.
3) To identify and practice specific techniques for dealing with problem students.
5) To obtain information from the tutors that will enable the supervisory staff to organize the tutoring program so that it efficiently meets the needs of the population it serves.
6) To evaluate the effectiveness of the training program and to assess the knowledge that was gained by each individual tutor.
7) To inform tutors of the variety of services they can help provide through Study Skills and Tutorial Services.
8) To provide tutors with the opportunity to continue their own growth and development in life management skills, study techniques, and interpersonal skills.
9) To help tutors meet tutor certification requirements.
#2: Amount/Duration of Tutor Training
The CRLA International Tutor Certification Program has the following guidelines for amount/duration of tutor training. It should be noted that these guidelines were arrived at after three years of research and interaction with leading experts and practitioners from all parts of the United States and Canada. They represent not only what actually is being done by leading tutor programs but, also, what should be done according to the latest research. In other words, the guidelines covered in topics 2-6 are both field and research based.
One or more of the following needs to be accomplished for each level of tutor certification:
[page 26]
1. Minimum of ten hours of tutor training
2. A quarter/semester tutor training course
3. A quarter/semester of tutor training (non-course work)
The following three, randomly selected programs are representative of the over one hundred certified programs. The amount/duration of training listed concerns only the initial, Level 1 training.
Program 1: Windward Community College, Kaneohe, Hawaii.
Tutor trainees must complete a pre-service seminar consisting of five two-hour sessions, plus two hours of hands-on tours of the Learning Assistance Center and Math Labs, plus four hours of workshops on content or study skills, plus a one-hour “RAP” session, plus a one-hour evaluation session. Total tutor training hours: 18.
Program 2: Schreiner College, Kerrville, Texas.
To become eligible to tutor through the Fasken Learning Center, students who have been accepted into the program must complete a minimum of ten hours of training. Eight of the hours are required before the tutor will be assigned clients; an additional two hours covers time management, reviewing how to conduct a tutor session, and organizing a tutor session. Total tutor training hours: 10+.
Program 3: University of Vermont, Burlington, Vermont.
Students in training must observe other Study Skills tutors for four hours; in addition, they attend bi-weekly one-hour Study Skills tutor sessions; they also prepare workshop presentations in pairs. Total tutor training hours: 14.
#3: Modes of Tutor Training
The CRLA International Tutor Certification Program has the following guidelines for modes of tutor training.
1. Classroom instruction and/or workshop instruction PLUS any combination of the following:
a. Tutor training videotapes
b. Conferences with tutor trainer/supervisor
c. Special tutor projects
d. Other
The following three, randomly selected programs are representative of the over one hundred certified programs.
[page 27]
Program 1: Pima Community College, East Campus, Tucson, Arizona.
The following modes of tutor training are used: 1) workshops; 2) video tapes; 3) written modules; 4) text; 5) conference with instructor/supervisor/peer tutors; and 6) experiential role modeling.
Program 2: Wayne State College, Wayne, Nebraska.
The following modes of tutor training are used: 1) classroom and workshop instruction; 2) conferences with supervisor/division chair/faculty; and 3) Myers-Briggs Personality Type Inventory and audio tapes.
Program 3: State University of New York at Plattsburgh, Plattsburgh, New York.
The following modes of tutor training are used: 1) classroom and workshop instruction; 2) conferences with tutor trainer/supervisor; 3) observation of experienced tutors; 4) tutoring practicum; 5) bi-weekly staff meetings; and 6) special tutor projects (workshops, conferences, publications, presentations).
#4: Areas/Topics to Be Covered in Tutor Training
The CRLA International Tutor Certification Program has the following guidelines for areas/topics to be covered in tutor training. A minimum of eight of the following topics should be covered in Level 1/Regular training.
1. Definition of tutoring and tutor responsibilities
2. Basic tutoring guidelines
3. Techniques for successfully beginning and ending a tutor session
4. Some basic tutoring “Do’s”
5. Some basic tutoring “Don’ts”
6. Role modeling
7. Setting goals/planning
8. Communication skills
9. Active listening and paraphrasing
10. Referral skills
11. Study skills
12. Critical thinking skills
13. Compliance with the ethics and philosophy of the tutor program
14. Modeling problem solving
15. Other
The following areas/topics are covered in Level 2/Advanced and Level 3/Master.
1. Review of previous level topics
2. Use of probing questions
3. Characteristics of adult learners/learning styles
[page 28]
4. Cultural awareness and intercultural communications
5. Identifying and using resources
6. Tutoring in specific skill/subject areas
7. Recordkeeping/documentation
8. Assertiveness training
9. How to tutor/deal with target populations
10. How to administer and interpret a learning style inventory
11. Structuring the learning experience
12. Training and supervising other tutors (supervisory skills)
13. Group management skills (group interaction and group dynamics)
14. Other
The following three, randomly selected programs are representative of the over one hundred certified programs.
Program 1: Oakton Community College, Des Plaines, Illinois.
The following topics are covered in tutor training.
Level 1
1) Ice Breaker
2) Organizational Chart and Brochures
3) What Is/Is Not Expected of Tutors
4) Tutoring Contracts (ASSIST and Content)
5) Tutor Evaluation Form
6) Special Needs Students
A) College Students with Learning Disabilities
B) ASSIST information
7) Tutoring Situations
A) The Kind of Help That Helps
B) Skills Needed
C) Tips for Tutors
D) Piaget Stages of Development
8) Active Listening
9) Probing Questions
10) Whimbey Analytical Skills Inventory
11) Learning and Study Strategies Inventory
12) Mini Study Skills Workshop
A) Learning Style Questionnaire
B) Learning Style Tally Sheet
C) Quiz Sample
D) Mind Loads up Best in Spurts
E) Reading Textbooks
F) Generating Questions from Lecture Notes
13) Myers-Briggs
A) MBTI and Reading
[page 29]
B) MBTI and Writing
C) MBTI Check List
D) Relation Type to Instructional Strategies
14) Tutor Information Forms and Materials
A) Mixed Media Materials Available
B) Procedures for Record Keeping
C) ASSIST Tutoring Information
D) Tutorial Record Form
E) Tutorial Intake Form
F) Tutor Sign-in Sheet
15) Tutor Contracts and Time Sheets
A) Forms
B) Time Sheets
C) Tutor Training Evaluation
Level 2
1) Philosophy
2) Welcome
3) ISS At-A-Glance
4) Probing Skills Videotape
A) Summary of Probing Techniques
B) Annotated Coding Form
5) Learning Strategies Presentation – Major Trends
6) Learning Strategies Presentation – Practical Strategies
A) Previewing a Textbook
B) Note-taking Strategy
C) Proof-reading
D) When Instructors Talk Fast
E) Diagram of an Essay
F) Development of an Essay
G) Exam Strategies Review
7) Learning Strategies Presentation – Assessment
A) Incorporating Learning Strategies Into Tutoring Sessions
B) Learning Styles Inventory – Samples
C) Connections: Learning Strategies
8) Learning Strategies Exercise
A) Task Analysis
B) Sample Task – How to Obtain a Parking Permit
9) Learning Strategies Questions/Demonstration
10) Record Keeping
A) Forms, Forms, Forms
B) ISS Tutor Materials
C) Suggested Readings
11) Tutoring Tips
A) What Is/Is Not Expected of Tutors
B) The Kinds of Help That Helps
[page 30]
C) Tips For Tutors
D) Skills Needed to Work With Other Students
12) “On Your Own”
13) On Going Tutor Evaluations
A) Tutor Evaluation Form – Blank
B) Tutor Evaluation Forms – Samples
14) Evaluation of Session
A) Tutor Training Evaluation – Tally Sheet
B) Tutor Training Evaluation – Sample
Level 3
1) Philosophy
2) Brochures
3) ISS At-A-Glance
4) STEPS Flyer
5) A Tutor’s Guide
6) Tutoring Tips
A) Who Do We Tutor – Review
B) Helpful Hints – Review
7) Organizational Structure
A) OCC Organization – Review
B) ISS Organization – Review
C) Responsibilities of Personnel
8) Record Keeping
9) Learning Styles – Presentation
A) Style Inventory
B) Feeling Thinking
10) Target Populations: The ESL Student
A) Understanding ESL Students
B) E.S.L. (article)
C) Areas of Communication and Cultural Receivers
D) Obstacles to Learning
E) Acceptable English for Non-Native Receiver
F) Word Problems
G) Crack the Book
H) Blood, Sweat, And a Lot Of Tears
11) Handling an Emergency
A) Crisis Procedures
B) How To Get Help
C) Procedures For Emergency Situations
12) Assertiveness Training
A) Assertiveness Inventory
B) Non-Assertive, Assertive and Aggressive Behavior
C) The Basic Tenets of Assertion
13) Structuring the Learning Experience
A) Structuring The Tutoring Experience
[page 31]
B) 8 Step Tutor Training
14) Structuring the Learning Experience – Problem Solving
A) Tutoring Scenarios
B) What I Did On Saturday!
15) Tutor Showcase
A) Bag of Tricks
B) That Ol’ Gang of Mine
C) No Shows
D) Signed Numbers
E) Meigs and Meigs Tutorial Program
F) Fragments, Run-ons and Comma Splices
G) How To Tame the Accounting Monster
H) ESL Text Information Chart
I) Prewriting I & II
J) Taking an Essay Test
16) Evaluations of Sessions
A) Tutor Training – Sample Summary
B) Tutor Training Evaluations – Samples
17) Gemutlich!
Program 2: Ventura College, Ventura, California.
The following topics are covered in tutor training, Level 1.
1) Introduction to Tutoring
A) Definition of tutoring and tutor responsibilities
B) Some basic “do’s” and “don’ts”
2) The Tutoring Cycle
A) Beginning and ending a tutoring session
B) Identifying the task
C) Setting goals and planning
D) Summarizing and confirmation of the tutee’s understanding
E) Pre-planning next session
3) Communication Skills
A) Questioning techniques
B) Listening techniques
4) Working with Special Students
A) ESL students
B) DSPS students
C) Learning styles
D) Left brain/right brain preference
5) Resources and Special Topics Covered
A) Health Center
B) Learning Center
C) Sexual Harassment
[page 32]
Program 3: Rutgers University, New Brunswick, New Jersey.
The following topics are covered in tutor training, Level 1.
1) Introduction to program
A) LRC staff and roles
B) LRC services
C) Tutoring Center paperwork and policies
D) Rutgers employment forms
E) Tutor certification
2) Tutor’s Role & Relationship with Tutee
A) Referrals from LRC to Counseling Center
B) Tutor’s role
C) Tutor/tutee relationship
D) Active listening
3) Diversity and Multiculturalism
4) Tutoring Strategies and Skills
A) Question strategies
B) Group interaction
C) Referring students to Learning Specialists
D) Study strategy workshops
E) The faculty link
#5: Tutor Selection Criteria
The CRLA International Tutor Certification Program has the following guidelines for tutor selection.
1. Written approval of a content/skill instructor AND/OR
2. Endorsement of tutor trainer/supervisor PLUS at least one of the following:
3. Grades of “A” or “B” in subject content being tutored
4. Documented experience equivalent to #3
5. Other
The following three, randomly selected programs are representative of the over one hundred certified programs.
Program 1: Northern Essex Community College, Haverhill, Massachusetts.
1) Recommendation in writing from the pertinent content instructor.
2) Interview with Peer Tutor Coordinator.
3) Completion of the pertinent course with a minimum final grade of “B,” or verifiable skills demonstrated by course work at a level above the pertinent course.
[page 33]
Program 2: College of Charleston, Math Lab, Charleston, South Carolina.
1) Student is nominated by Math Department and approved by Math Lab director.
2) Grades are checked – must have a “B” or better in math courses and high overall average.
Program 3: New Mexico State University, Las Cruces, New Mexico.
All prospective tutors must submit an application and a transcript, interview for the position, and meet the following requirements.
1) Must be a full-time student.
2) Must be a junior, senior, or graduate student.
3) Must have a cumulative GPA of 3.0 or higher.
4) Must have grades of “A” or “B” in subject area being tutored.
5) Must have departmental approval in subject area.
#6: Tutor Evaluation Criteria
The CRLA International Tutor Certification Program has the following guidelines for tutor evaluation.
1. A formal/informal evaluation process in place
2. Formal/informal evaluation occurs on a regular basis
3. The results of the evaluation process are made known to the tutors.
The following three randomly selected programs are representative of the more than one hundred certified programs.
Program 1: Minneapolis Community College, Minneapolis, Minnesota.
Tutors rate themselves on an evaluation form and then meet with the supervisor for a conference. This evaluation occurs quarterly. The tutor and the supervisor discuss both the tutor’s evaluation form and any other concerns or suggestions about tutoring, tutor training, or the Learning Assistance Center in general.
Program 2: Lakeland College, Vermilion, Alberta, Canada.
Forms for evaluating the tutors are sent to every tutee before the end of each term. Random informal evaluation of the tutors is also conducted by the LAC coordinator throughout the year. The results of both of these are shared with the tutors.
At year end, the tutors are given the opportunity to evaluate the program. This is clearly helpful for planning the following year’s program. The LAC coordinator also asks the tutors to be introspective and give a self-evaluation. This gives the tutors insight into their growth in their role in the teaching-learning process.
Program 3: Tulane University, New Orleans, Louisiana.
Tutors are evaluated based on the criteria listed below.
1) Mid- and end-of-semester evaluations by tutees
2) Individual end-of-term evaluation interview with tutor coordinator during which mid-semester evaluations are discussed
3) Informal meetings and/or discussions with tutors as needed (pats-on-the-back, suggestions, problem solving)
Tutor Training Activities: An Overview
That students wishing to be tutors must have excellent content area knowledge and skills is, of course, obvious. Tutors are required to have an “A” or “B” grade in the subject content being tutored; written approval/endorsement of a content/skill instructor; endorsement of the tutor trainer/supervisor; or documented experience equivalent to an “A” or “B” grade.
Tutors, however, sometimes are lacking in the finer points of “instruction”; in other words, “it’s one thing to know your subject and another to be able to get it across to someone else.” This is why tutor training is such a crucial part of a successful tutoring program.
Tutors in programs certified by the CRLA International Tutor Certification Program are trained in the following areas/topics.
Level 1/Regular: A minimum of eight of the following areas are to be covered.
1. Definition of tutoring and tutor responsibilities
2. Basic tutoring guidelines
3. Techniques for successfully beginning and ending a tutor session
4. Some basic tutoring “Do’s”
5. Some basic tutoring “Don’ts”
6. Role modeling
7. Goal setting and goal planning
8. Communication skills
9. Active listening and paraphrasing
10. Referral skills
11. Study skills
12. Critical thinking skills
13. Tutor ethics; compliance with philosophy of the tutor program
14. Modeling problem solving techniques
15. Others (defined by the individual institution/program)
Level 2/Advanced certification: In addition to reviewing the topics covered in Level 1, a minimum of four of the following topics should be covered.
1. Review of all level 1 topics
2. Use of probing questions
3. Characteristics of adult learners and learning styles
4. Cultural awareness and cross-cultural communication
5. Identifying and using resources
6. Tutoring in specific skill/subject areas
7. Record keeping/documentation
8. Other (as defined by individual program/institution)
Level 3/Master certification: In addition to reviewing the topics covered in Levels 1 and 2, a minimum of four of the following topics should be covered.
1. Review of all level 1 and level 2 topics
2. Assertiveness training
3. How to tutor target populations
4. How to administer and interpret a Learning Style Inventory
5. Structuring the learning experience
6. Training and supervising other tutors (supervisory skills)
7. Group management skills (group interaction and group dynamics)
8. Other (as defined by individual program/institution)
The following tutor training activities are just examples of the hundreds of activities that the colleges, universities, and institutions certified by the CRLA International Tutor Certification Program use in their programs.
#1: Communication Skills
#2: Communication/listening
#3: Functional Fixedness
#4: Role-modeling and Problem Solving
#5: Empathy Training
#1: Communication Skills
How you communicate with yourself influences how you communicate with others. During tutor training, tutors are asked to think about and share/discuss the “things” that can influence INTRAPERSONAL communication, defined as “how you communicate with yourself.”
What influences have affected, or still affect, how you communicate with yourself and subsequently with others?
Objectives of this activity are:
to help tutors identify their own communication influences and to ascertain if they are helpful or harmful
to help tutors realize the influence of environment (cultural and geographical) on communication
to help tutors recognize that intrapersonal communication can be influenced in a positive manner
#2: Communication/listening
Sometimes tutors have communication problems when ideas or concepts that they are trying to deal with “get lost.” Tutors use certain words or phrases which mean something entirely different, or mean nothing at all, to their tutees. The following activity is used in tutor training to give tutors the opportunity to discuss this and plan for it when it occurs in a tutoring session.
“To me that means…”
Look at each of the following words. What meaning(s) do you give to each?
spirit ____________ later ____________ down __________
book ______________ alone ____________ up ____________
hot _______________ high _____________ soon __________
nuts ______________ nurse ____________ success _______
Objectives of this activity are:
__ to indicate to tutors that a word’s meaning is not the same to everyone
__ to indicate the relevance of culture, gender, etc. to word meaning
__ to help tutors choose words appropriately
__ to help tutors recognize the need to sometimes rephrase or use a different modality, i.e., draw a picture, diagram, etc.
#3: Functional Fixedness
This activity can be used in a number of ways for a number of reasons. This activity can be used to strengthen problem solving skills, communication skills, and critical and creative thinking skills.
— Divide into groups of four to five members.
— Give each group one of the following items or a 3 x 5 card with the item listed.
— Give each group time so that all members know what the item is.
— Time for two minutes: each group member individually writes down all the things she/he can think of that can or could be done with the item.
— Then each group discusses the item and compiles a group list of things that can or could be done with the item; time can vary, but 10 - 15 minutes is usually sufficient.
— Each group shares what the item was and the list of possibilities arrived at.
Some sample items:
— a paper clip
— a 3 x 5 card
— a length of cord or string
— a coat hanger
— a Bic pen
— a paper cup
— a trash bag
— an old tire
— a rubber band
— a ruler
— After each group has shared its list for one item, have two groups combine to brainstorm what could be done with the two items. Allow for sharing time.
— Have these two groups join a third group, and so on, until all groups have combined into one big group to brainstorm what could/can be done with all the items.
Objectives:
— to indicate that there is always more than one way to look at something and that one way to look at something is just that: one way
— to help tutors look beyond the obvious
— to hone brainstorming ideas and problem solving techniques
— to enhance communication skills, oral and listening
— to strengthen critical and creative thinking skills
#4: Role-modeling and Problem Solving
Ask any instructor of any age/grade level from kindergarten through graduate school and he/she will tell you that the best teacher is EXPERIENCE. Tutors have excellent experience in a specific content area; that is probably why they have chosen to become tutors. They may be lacking in instructional experience, however. To help novice tutors “over the humps,” the following type of activity is used in many tutor training programs. Many times veteran tutors lead the session, providing the scenarios and discussion.
Role playing:
— have a novice tutor assume the role of tutor and a veteran tutor the role of the tutee
— each duo has 5 – 10 minutes to “role play” the scenario with the rest of the participants observing
— follow each role-playing situation with discussion
Sample situations:
1. The tutee wants the tutor to just do the work.
2. The tutee wants to argue and disagree with the tutor.
3. The tutee criticizes and “bad mouths” her/his instructor or other tutors.
4. The tutee has low self-esteem.
5. The tutee has set unrealistic academic goals.
6. The tutee comes to a session unprepared.
7. The tutee just wants to talk and socialize.
8. The tutee becomes too dependent upon the tutor.
9. The tutee is just not trying or working hard enough and has a major case of “yes, but”-itis.
10. The tutee is late or fails to show for appointed sessions.
Objectives:
— to give novice tutors an opportunity to “try out” situations before actual exposure to a similar situation
— to include veteran tutors in the learning process and to share their experiences, what worked and what did not work
— to discuss alternative solutions
— to discuss and become aware of cultural influences
#5: Empathy Training
This is really several activities that can be used in tutor training either individually or as a package. The activities are primarily to remind tutors that while they have expertise and knowledge in specific areas, there are still areas where they may not be so adept; thus, increasing their empathy with the tutees who come to them with difficulties is very important.
A. Yeh…but…
— Make a list of five things you would like to do but don’t because of anxiety or fear. Try to limit this list to the academic realm. For example, speak out more, take a ________ class, etc.
— Identify the type of fear that blocks you from doing it.
— Think of what you could do to help you overcome each fear.
— Answer the question, “Is this fear REALLY justified?”
Example:
Fear: asking questions in class
Type: fear of looking dumb
Overcome: ask questions before or after class, see professor during office hours, ask a classmate, ask the TA
Justified: not really but I remember I was laughed at once in fifth grade when I asked a question
B. The power of “less than positive thinking”
— Make an I CAN’T list of five things that, in your opinion, you can’t do.
— Then really think about each item and decide if you have ABSOLUTELY NO TALENT/ABILITY for the thing, OR simply LESS TALENT than you would like.
— Then list at least one contradiction for each I CAN’T.
— This should demonstrate that sometimes we let negative thinking have more power and control than it should.
Example:
1. I am just hopeless with math; I can’t do it.
2. However, every year I do my own income taxes; I often do my sister’s too. I have NEVER BEEN AUDITED.
3. THEREFORE, I can handle math; I can do math when I have to.
C. “OUCH…we have all failed/goofed at one time or another.”
Goofing/failing does provide food for thought about what to do or try differently next time; it also provides insight into critical and creative thinking and learning, not to mention problem solving. It also provides a means of differentiating between having an external or an internal locus of control; in other words, examining these setbacks lets us realize that we are responsible for our SUCCESSES and our GOOFS.
— Think about your own academic goofs, disasters, or failings. Jot them down.
— What happened? Why did it happen?
— What did you learn from this goof?
Example:
— I flunked French my freshman year.
— I cut a lot of the classes because I thought I knew the vocabulary well enough; after all, I had made straight “A’s” through four years of high school French.
— I learned to go to class — ALL OF THEM (for every subject!). I knew the vocabulary well enough, but I missed out on all the nuances and fun of learning the language from a native speaker from Paris. I also missed out on those random quizzes I did not read about in the course syllabus.
D. Non-verbal communication
For a day keep track of the non-verbal communication YOU use. Do you always get your point across? What happened when you sent a message, but it was misinterpreted? How can you use your non-verbal communication more effectively?
For a day keep track of the non-verbal communication OTHERS use with you. Did you always “get” the correct message? How did you react to the use of non-verbal communication?
Teaching Mathematics Effectively
Hackworth, Robert D. “Teaching Mathematics Effectively,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 43-55.
Teaching Mathematics Effectively
Robert D. Hackworth, Educational Consultant
Unless we produce students who are active, thoughtful learners, we are simply pushing them up a greased pole.
Curtis Miles
One of the procedures I was taught long ago is that a teacher needs to preview a lesson. I believed that. Also, I was told that a presenter needs to set an agenda for the audience. I believed that too. And when they came along, I was thoroughly convinced of the value of behavioral objectives: a teacher is obligated to state specifically his/her purpose of instruction and communicate the objectives to the students. Again, I believed what was claimed about behavioral objectives.
Today, I am less enthralled with behavioral objectives. I agree completely with the idea that each instructor should have prepared well for each lesson, but I am today less certain about the specificity that is needed. I was taught that the only acceptable behavioral objectives were those that could be measured, and today I feel we have trivialized many objectives just so they can be measured.
At this point, you should be wondering where I am going. That’s part of my strategy for this presentation. First, I want you to know that I’m pulling old belief systems out of the education closet and seeing which ones should be discarded. Second, I want you to know that I have planned carefully for this presentation with the full confidence that if you pull me away from my original objectives, then I can effectively deal with that situation.
Before I leave behavioral objectives completely, there is a second problem about them that also violates the latest thinking in mathematics education. To some degree, the practice of informing a student at the beginning of a lesson of its purpose is being questioned. The idea that a student should be encouraged to accept a preordered organization of concepts and skills contradicts some of the current beliefs about the most effective way to learn mathematics.
This is all prelude, of course, to my hope that you will not feel restrained by your own belief that I might be doing something so important here that you should not interrupt with questions or comments. I am, I think, well prepared with my own script, but I shall be delighted if we find some other paths to explore along the way. As you know, a paper copy of what I have planned will be included in the proceedings of the Institute, so the information I have planned to bring you will be available anyway. That includes all of my overheads, so there is really no need to take notes. I encourage you to sit back, listen, think, and ask questions.
The first order of business we need to address here is the tension, maybe outright hostility, that exists between Mathematics Departments and all other departments within our institutions. In truth, there has been very little communication between these warring camps for years. Most attempts to communicate are frustrating exchanges in which different value systems make the best of ideas fly by the other side without ever making contact. Nothing we do here today is going to solve this difficult problem, but I assure you that progress in mathematics education cannot be achieved without some improvement in our communication. There should be no adversarial relationship between mathematicians and non-mathematicians. I am not your academic enemy, even though my credentials may suggest it. You are certainly not an enemy of mathematics, even though some of the problems I need to talk about may touch programs close to your heart.
Consider me a bridge between the mathematics establishment and yourselves. My experiences have put one foot in each camp, and when I speak of the hostility between them, I remember the heat I have taken from both sides.
I began my teaching career in mathematics with a plan for my work life that is not recognizable when compared to my actual experience. I still counsel my offspring and my students to plan ahead, but it is only with the belief that some planning will better prepare one for facing the drastic changes that are, in my experience, unavoidable. Please treat the advice I give you today accordingly. I would probably give different advice if asked tomorrow and undoubtedly would do so next year.
I plunged into developmental education by accident. My first steps toward a lifetime’s work were taken without any intention of commitment. Florida in 1966 was making a serious (that means well-funded) attempt to deal with students unprepared for any college mathematics course, and I volunteered to participate for one-third of my class load. My responsibility was to teach a class of eight students! The mathematical content was beginning and intermediate algebra. Obviously, we believed that small classes with a great deal of personal attention for each student would be effective. It was. It would be today. And, of course, it is no longer considered economically feasible EXCEPT, as some of you realize, with some student tutoring programs.
Returning to my own plunge into developmental mathematics, when the first economic crunches began, our successes with small classes and personal attention seemed too valuable to simply dismiss. When compared to the frequent failures associated with traditional programs of lecture or lecture/discussion, our positive experiences made us doubt the efficacy of even those instructional programs, like calculus, where the failure rate was “acceptable.” But we began with developmental mathematics because that was the point in the curriculum where mathematics professors allowed some, not much, tinkering with instruction.
And this is a good point to consider the meaning of “instruction.” In the late 1960’s, I had a fairly simple concept for instruction which limited it to “delivery systems.” How could I best “deliver” knowledge to my students? Issues of motivation, attitudes, study skills, and goal setting were student strengths/weaknesses which I knew had a serious impact on learning, but at that time they were separate from “instruction.” Consequently, my first attempts to improve mathematics instruction focused on delivering knowledge to students in ways that would make the material easier to grasp and master. Examples of improved delivery systems included the development of better explanations in more logical sequences, more information available in print materials, mastery testing materials, and expanded use of video and computer materials. Results of those efforts were gratifying, and by the end of the 1970’s, I was approaching Bloom’s criteria for excellence: 90% of my students were achieving 90% of our course objectives.
Then the numbers began to drop. Delivery systems, by themselves, were no longer sufficient to guarantee quality learning. The reasons for the drastic fall-off in mathematics learning are complex (social, political, educational), but the effects are familiar to all of us trying to teach, tutor, and/or manage instructional programs. Students are failing mathematics in large numbers. Fear of and anxiety about mathematics are commonplace. Complaints about mathematics instruction abound. Serious questions about the relevance of mathematics are raised even in the face of technological growth that demands greater, not less, understanding.
Meanwhile, the vast majority of mathematics professors are today defining our instructional problems in the same terms as they were described in the 1960’s. Most mathematics teachers continue to look for the WONDER BOOK or WONDER COMPUTER PROGRAM that will miraculously change student achievement levels. However, the leadership of the major mathematics associations in the country has shown a far deeper understanding of the problems and is accepting major responsibility for addressing them. The National Council of Teachers of Mathematics (NCTM) has courageously attacked the teaching practices of its members. In 1991, NCTM published its Professional Standards for Teaching Mathematics, which promotes a different vision of mathematics teaching. The Standards states in positive terms the nature of the subject and methods for teaching it appropriately and effectively. No reader of the Standards can avoid its underlying criticism of current mathematics instruction. There are today more articles in the mathematics journals concerning instruction or learning than I have ever observed in the past. And change is coming. It’s slow, too slow, but improvements in teaching mathematics have begun.
This last week one of the employees at my school said they really liked working when there weren’t any students around. That’s because students are a problem, and student attitudes toward mathematics are a serious obstacle to any improvement in the services we provide. Large numbers of students who seek help with mathematics have a very different idea of “help” than those of us who are to provide it. Conversations with such students rarely are instances of good communication. Excellent explanations are frequently ignored as useless ritual. The student waits for the rote procedure needed to get answers. Mathematics teachers see the explanations as the “true mathematics” and the procedures used to actually solve problems as following logically. Meanwhile, the students see the procedures as “true mathematics” and the explanations as unnecessary fluff. These students try to pass the course by memorization. Often, they make Herculean efforts and still fail because those efforts are so inappropriate for learning mathematics.
It is easy to place the blame on the students, but it is also obvious that they learned to treat mathematics this way. Now, as professionals faced with this challenge, what do we do about it? Some mathematics professors and tutors seem oblivious to other sources for the learning problems and explicitly state that the problem is one for the student to overcome. Other professionals recognize that the difficulties are more complex, but have no strategy for teaching students more effective ways of learning mathematics.
When minds are stuffed with knowledge they don’t understand, their thinking becomes chaotic.
Kamii & DeVries, 1978
Major blame for student failures in mathematics can be properly placed on their education. Teachers, school systems, social myths, and public policies have created these problems, and they will be with us until they are eliminated from elementary/secondary school practices. This indictment of elementary/secondary school practices does not mean that we in higher education are any better (maybe worse), but the problems begin earlier than we see them, and solutions must be implemented earlier in our education systems. For us, we can expect at least ten to twenty years more of accepting new students who have learning deficiencies caused by their past schooling. The extent to which we deal wisely and kindly with these individuals will have a major impact on their intellectual achievements across the curriculum.
Most education “reforms” have been failures. Longer school days, more days, more required courses, and standardized tests have not brought improvements. My newspaper last Friday announced a study showing that American students continue to compare poorly with most others. Although I think the standardized testing craze is part of our problem, I am pleased by results that indicate our reforms have been a failure.
Some current efforts are not only ineffective; they are self-defeating. In some instances, mathematics professors and tutors, under criticism from students and administrators, have attempted to find better ways for their students to learn mathematics by rote. Materials and testing procedures have been altered accordingly. Students in these situations may overcome the short-term hurdles they face, but the long-term results are disastrous. In other words, they pass their mathematics requirements, but are illiterate in the subject. Sadly, it is many of these students who have become elementary teachers, or other decision-makers in education, and have promoted the wider use of practices which greatly enlarge the problem.
As a first step toward improving our teaching of mathematics, I would require all of us with responsibilities in the subject to spend some time and effort acquiring an insight into the nature of the subject. There is a startling disparity of thought between mathematicians and those who have trouble with mathematics. That disparity effectively blocks most communication. When I say, “Mathematics is fun,” many of my students reply, “That’s because you know all the rules and we don’t.” When a student says piously, “Mathematics is very important,” I am reminded of Henry Whitehead’s statement that some avocations deserve a special position because of their intrinsic worth, and then he cited music, mathematics, and the making of good shoes!
Peter Hilton, distinguished professor of mathematics at SUNY Binghamton, describes the nature of his subject in this way:
Mathematics is systematized thought, supported by a beautifully adapted language and notation. It is characterized by the recognition, discovery and creation of pattern, and by the establishing of subtle connections between its apparently very dissimilar parts. Contrary to traditional school practice, it is not a set of distinct subdisciplines, but a unity, drawing on a diverse but interrelated repertoire of concepts and techniques. Again contrary to popular belief, it is not a set of facts; and mathematical understanding is not to be measured by tests of knowledge and memory. Thus, for the student, what matters is that he or she learn to think mathematically and any significant part of mathematics can be used as the vehicle to convey the necessary understanding and thinking ability. Conversely, no part of mathematics, however seemingly appropriate, can prepare the student really to use mathematics intelligently and effectively, if it is taught simply as a set of isolated skills, to be retained by the exercise of undiscriminating memory.
Joe Garofalo, professor of mathematics education at the University of Virginia, describes mathematics in terms of the student outcomes we should expect:
I want students to develop mathematical power, meaningful concepts, healthy beliefs about the nature and value of mathematics, confidence in their ability to learn and use mathematics, and useful problem solving strategies. Traditional teaching methods are not very effective for helping students achieve these goals; actually such methods often work against them.
It is my experience that most students having trouble with mathematics focus on retaining subject matter content without first learning the intellectual skills needed to support that effort. Consequently, a second major thrust toward improving mathematics education must involve a broader view of instruction. I stated earlier that instruction has been viewed as a “delivery system” which ignores many factors which impact on learning. Today’s quality instruction must take responsibility for these other factors. Claire Ellen Weinstein, professor of educational psychology at the University of Texas, states that learning has at least three facets that must be addressed by instruction:
1. Skill. The language, techniques, facts, etc. of a subject.
2. Will. The attitudes and motivations which drive efforts to learn.
3. Management. The ability to make appropriate decisions for engaging in learning activities.
Obviously, most traditional instruction devotes little time or effort to two of these facets. In fact, all planning is focused on content delivery. Course calendars are written in terms of content coverage, and serious criticism is given the teacher who does not “cover the required curriculum.”
When the instruction does attempt to address other issues besides content, there is often a superficial quality about those efforts. We have had few experiences, in school, upon which to model such instruction. In fact, the best models for such instruction are found outside our schools.
Some years ago, Claire Ellen Weinstein outlined four areas of concern for quality instruction. She claimed that they were necessary and sufficient conditions for learning. I have applied them to the teaching of mathematics and have found them to greatly improve my awareness and monitoring of my own instruction. When I am aware, I include all four areas in my instruction. When I evaluate my instruction, I find that using the four areas gives me an excellent model for monitoring it. I believe that instruction of mathematics based on these four areas of concern can meet the views of mathematicians about the subject and also conform with theories of learning. We need to do both.
1. Create Quality Learning Environments
Each of our students lives in two environments: an academic environment and a nonacademic environment. Most students in community colleges spend the greater part of their waking hours in non-academic settings, and many of the difficulties for those students are directly related to influences outside the campus boundaries. Those of us who work in institutions where most or all of the students live on campus are blessed by circumstances where the on-campus problems generally dominate. I’m going to restrict my comments to the academic environment, but that is due, in part, to my helplessness to suggest ways of overcoming problems in the non-academic environments.
Our premise is that what a student learns depends to a great degree on how he or she learned it…. for each individual, mathematical power involves the development of personal self-confidence.
Curriculum and Evaluation Standards for School Mathematics
For some of you, your workspace is on a campus with real academic character. Others here, no doubt, work on campuses which were built on a warehouse model. In my teaching situation, it seemed that the same architect designed all the colleges and used the same design he had used earlier for the prisons.
Students tend to transfer their memories of past mathematics learning environments to new situations. Frequently, we make that easier because most math classrooms do look alike: a blackboard with rows of desks. Make the environment different from what the student has experienced in the past. Encourage students to approach this learning experience differently by changing the physical qualities around them. Quantitative changes, like fewer blackboards and fewer desks, are helpful, but qualitative changes are vital. Look for ways to raise the cultural level.
Changes that might be made include the following:
1. Hang plants in the room.
2. Play background music (preferably classical with slow beat).
3. Hang paintings from the college library.
4. Put flowers on the tables.
5. Carpet seating areas.
The major environmental factor for the student will be the teacher, tutor, and/or manager. Break the stereotypical image of a math person. Students expect mathematicians to be cold and logical. Surprise them; be warm, friendly, and maybe illogical. Students often believe mathematicians have no other interests; share the fact that you have a life outside mathematics:
1. Bring a novel you are reading with you to class.
2. Talk about a concert or play you recently attended.
3. Mention characteristics of parents or friends that made you mathematical.
4. Be personal.
Play down the role of education for making money. That idea is far oversold, and the student already gets plenty of it. Instead, emphasize the historical, cultural, and intellectual role of mathematics. All content makes good sense under that umbrella. Puzzles, ridiculous word problems, and rationalizing denominators in the age of calculators are wonderful topics once we get past the argument of “How am I ever going to use this?”
Anxiety is a major problem for many mathematics students. Recognize their fears and publicly admit to being lucky that our own experiences were unlike theirs. Avoid scare tactics and harsh rules because they will not work with students who are already afraid. Exude patience, confidence, and a belief that hard work under your direction will overcome those fears. Emphasize that anxiety frequently interferes with students’ properly budgeting time, persevering, asking questions, and taking responsibility for their own learning.
Most students in trouble are reactive rather than proactive. They may attend class religiously, take prodigious quantities of notes, and strenuously attempt to decipher those notes. If so, they deserve an “A” for effort and will probably earn a low grade in mathematics.
To become proactive, your students should be taught Benjamin Bloom’s three factors for predicting success/failure. Bloom’s first and major predictive factor is the amount the student knows before a topic is taught: cognitive entry skills. Those who know the most at the beginning almost always know the most at the end. This is often a prediction of failure for weak students. In fact, it is a strong indication of what needs to be done to be successful: Prepare before each session, and learn as much as possible about the topic before instruction begins.
Bloom’s second factor is affective entry skills. Affective domain factors such as motivation, beliefs, values, and attitudes invariably facilitate and/or debilitate thinking and
learning. Occasionally these affective entry skills dominate all others. This is the case with students suffering severe anxiety or with those who are unable to accept a mathematician’s view of the subject. In general, however, affective entry skills are of minor importance compared to prior knowledge. Another important aspect of affective domain factors is the fact that they often appear to be less under the control of the student. In fact, however, good students do improve their probabilities of success by their awareness of strengths or weaknesses in this area.
The last major factor listed by Bloom is teacher behavior, and the research can be most discouraging for teachers. It seems to indicate that we have a minor influence on success/failure. In many instances that is true, but the teacher who uses Bloom’s research can reverse that negative outcome. The teacher who assesses cognitive and affective skills, and then actually does something about them, can use them to positively influence outcomes. For example, since cognitive entry skills are so immensely important, quality instruction must always begin where the student is rather than where the course assumes she is.
Students who are aware of Bloom’s factors can also see themselves as responsible for success/failure, an absolute necessity for any instruction to be effective. They can regulate their study to learn more effectively. They can also voice some needs that must be addressed if they are to succeed.
2. Process Information Correctly
Mathematics information can be divided into two categories:
1. Information that needs to be carefully memorized.
Examples are definitions, symbols, postulates, perhaps some formulas, and occasionally a rule that needs to be practiced before it can be understood.
2. Information that needs to be figured out each time it is encountered. Examples are procedures, rules, and problems.
Good mathematics students use these two categories correctly. Poor mathematics students practice them in exactly the opposite way.
The current rage in mathematics education is Constructivism. The allure of this psychological theory is that it emphasizes prior knowledge and making inferences. Mathematical knowledge under this interpretation is developed like a brick wall. The wall is made of bricks, but the integrity of the structure depends upon the positioning of the bricks and the quality of the mortar that connects them. The knowledge of mathematicians is organized like the bricks in a wall, but the knowledge of students having trouble with mathematics is more like the bricks in a pile, where each brick represents a separate entity, and there is neither organization nor connection.
A word that is used frequently in discussing mathematics achievement is “understanding,” but when most of us are asked to explain what that means, the results are often so
vague that any teacher can claim quite honestly to teach for “understanding.” The constructivists have given us a better way of evaluating our teaching.
Mathematics information is normally hierarchical. Often, there is a logical learning sequence which will make learning easier.
Mathematics information is always interrelated. Regardless of the topic, it has relationships with all other mathematics.
With few exceptions, new topics in mathematics can be described by concrete situations. The famous psychologist Piaget strongly suggests that all new topics be introduced in that manner and later translated to more abstract situations. Some teaching and learning violates this Piagetian principle. For example, many elementary texts teach a method for finding the least common multiple of two numbers that is, for the student, a bit of magic. If that student first listed multiples of pairs of numbers and selected the least common multiple from the lists, the magic would be unclothed and seen as a shortcut for the more understandable process.
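The listing approach described above can be sketched in a few lines. This is an illustrative sketch only; the function name and the cutoff of twenty multiples are my own choices, not drawn from the text:

```python
# Illustrative sketch: find a least common multiple by listing
# multiples, as the text suggests, before introducing any shortcut.
def lcm_by_listing(a, b, limit=20):
    """List the first `limit` multiples of each number and return
    the smallest multiple the two lists share."""
    multiples_a = {a * k for k in range(1, limit + 1)}
    multiples_b = {b * k for k in range(1, limit + 1)}
    return min(multiples_a & multiples_b)

print(lcm_by_listing(4, 6))    # 12
print(lcm_by_listing(21, 35))  # 105
```

Once students see a shortcut reproduce exactly what the lists give, the magic is unclothed.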
Besides using concrete situations to introduce abstract concepts, a second requirement would be to provide time and opportunity for the student to process new topics. The psychologist David Kolb describes the amount of learning as the addition of two vectors: one vector is the amount of instruction, and the other perpendicular vector is the amount of processing. If you double the processing, then you double the learning.
One of the most valuable understandings that needs to be explicitly taught is the existence of three types of knowledge in mathematics.
1. Declarative Knowledge: The Whats of Learning
2. Procedural Knowledge: The Hows of Learning
3. Conditional Knowledge: The Whys and Whens of Learning
Good mathematics students treat the three types of knowledge as equals or, if not, they treat #1 and #3 as the most important. Poor students generally are unaware of the different types and, when they are aware, place great emphasis on #2. This means, of course, that they are making learning more difficult and less enjoyable. Teachers can help their students with these forms by constantly modeling them, asking questions such as:
“What” questions that seek declarative knowledge:
1. What do I know about this?
2. What am I trying to find?
3. What can I do to present the problem another way?
4. What part of this problem can I solve?
5. What steps/strategies will I use?
6. What does this problem mean?
7. What sort of answer might I anticipate?
“Why” questions that seek conditional knowledge:
1. Why is this answer reasonable?
2. Why does this work?
3. Can I explain my strategy to someone else?
4. Is there another way to solve this problem?
5. Why is this the best approach on this problem?
6. Why would someone make errors on this type of problem?
7. Why would anyone want to solve this type of problem?
3. Maintain an Active Mind
Research consistently demonstrates that we use and retain very little of what we are told, what we read, or what we watch. Learners learn because they are engaged in creating, processing, and interpreting experience, both real and simulated.
ISETA Newsletter (exploring teaching alternatives)
Winter 1989
A mind that wanders is normal. Most of us can be reading a very interesting novel and suddenly find we don’t know what has happened in the last few pages. Our reaction is to return to those pages and begin again where we need the review. Students frequently have a similar experience when studying mathematics, but often they try to continue when they have no understanding of the preceding material. In mathematics, this is like building a two-story house and then trying to dig out the basement.
An active mind is a necessity for learning, and students need to be taught to maintain the mind’s activity. “AHA!” experiences are the best learning experiences we can bring our students. We have excellent clues on how to build situations that are nurturing climates for “AHA!” but those situations are difficult to create in schools. The major reasons: Time and Accountability.
Two types of strategies should be applied to a mathematical problem. A cognitive strategy is an identifiable and reproducible thought process directed at a particular type of task (the quadratic formula for solving equations). A heuristic strategy is the use of a smorgasbord of thinking tools when reacting to a situation (solving nonroutine problems). Some students want every problem reduced to a cognitive strategy, but that is neither possible nor desirable. Wherever possible, use heuristics. The best way to do this is through the use of nonroutine problems, that is, problems where no cognitive strategy has been developed or where the cognitive strategies are more difficult to apply than heuristics.
Keeping an active mind may be translated as “THINKING,” but a major problem with the translation is the meaning of the word. Every teacher-tutor aspires to teach “thinking,” but most descriptions of how that is accomplished are vague and difficult to replicate.
Consider mathematical thinking to be a search for meaning and constantly stress that search. Asking questions about meaning is an effective way to teach the process.
What does this mean?
What does 3 + 5 mean?
What does “Find the least common multiple of 21 and 35” mean?
What does 7% of 832 mean?
What does x + 5 = 12 mean?
What does (x – 5)² = 49 mean?
Examples of other questions which search for meaning are:
Write a simpler problem.
What does the answer to this problem look like?
If you had the answer, what would you do to check it?
Find the last problem you could do correctly.
What is different about this problem from others you have done?
If you could change something about this problem to make it easier, what would you select?
Teachers of mathematics need to primarily ask “W” questions (What, Where, Why, When, Who). They need to avoid asking “How” questions because these frequently encourage the replication of some known process.
Teacher questions are rarely as good as student questions. And the very best questions are those that a student asks of him/herself. When a student questions her/himself, it requires:
(1) active processing, (2) thinking about their own thinking processes, and (3) the recall of prior knowledge.
4. Monitor Comprehension
The skilled learner strives to reach two goals:
to understand the meaning of the tasks at hand, and
to regulate his or her own learning
Strategic Teaching and Learning, ASCD
Self-evaluation is a crucial, on-going process. Just as prior knowledge is crucial to beginning successful study, so too is review crucial for integrating and consolidating it. The students who pass one test and immediately forget its content have never learned that constant review and assimilation of knowledge is a necessity in mathematics.
Teach students to treat evaluation and review as important aspects of their study. You might begin this process with some learning theory. Learning theory should be a first course in education, but few students or teachers seem to act consciously upon one. The three step
process shown below is sufficient for students to apply and monitor their own effectiveness in learning.
1. Awareness of the new learning intended
a. Background check (prior knowledge needed)
b. Focus (put attention directly on the new learning)
2. Active response to some question or problem intended to illustrate acquisition of the new learning.
3. Feedback on the degree to which the active response was appropriate or correct.
Students also will benefit by evaluating their instruction. Making a conscious effort to judge their instruction is part of the process for students becoming responsible for their own success/failure. Bloom’s criteria for quality instruction provide a simple four-phase method for evaluation.
1. Does the instruction provide clear cues or directions?
2. Does the instruction include an appropriate learning activity?
3. Does the instruction provide feedback?
4. When difficulties are encountered, does the instruction provide corrective recycling?
At the heart of comprehension monitoring are those skills which are labeled metacognitive. Metacognitive skills are the thoughts (knowledge and skills) used to plan, monitor, and evaluate an individual’s cognition. If cognition were thinking, then metacognition would be thinking about thinking. The monitoring and evaluating functions of metacognition are important when a student has only a vague idea of how well or poorly they are learning. Many students are unaware of what they know and what they do not know. These students are having trouble monitoring and evaluating their comprehension. Successful students learn efficiently by utilizing the feedback they receive from the monitoring and evaluating functions to improve their future performance.
I have found an interesting way to encourage my students to engage in more metacognition. I tell them that all learning of mathematics comprises two components. The first is learning the content of the subject. The second is learning to what degree that content has been learned. The permutations of these two components include the following:
1. The student who knows the subject and knows he/she knows the subject. This is the student who is probably working towards an “A.”
2. The student who knows the subject, but doesn’t know whether he/she knows the subject. This is frequently the student with anxiety, and anxiety reduction techniques may bring a dramatic improvement in performance.
3. The student who doesn’t know the subject, and also knows he/she doesn’t know the subject. This student has the knowledge to improve performance. The question here is whether the student has the will and management skills.
4. The student who doesn’t know the subject, and doesn’t know he/she doesn’t know the subject. This student suffers from “double ignorance.” All is not lost, however, because awareness of the situation may move this student to take advantage of the plethora of materials now available to evaluate the degree of knowledge. If so, this student can become a #3 just with that knowledge. Again, does the student have the will and management?
Students today present new, more difficult challenges for their mathematics teacher-tutors. Explicit teaching of the nature of mathematics, rather than solving routine problems, can best overcome the mindblocks of our students that continually lead to more failure.
Postsecondary Student Assessment and Placement: History, Status, Direction
Kerstiens, Gene. “Postsecondary Student Assessment and Placement: History, Status, Direction,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 56-62.
Postsecondary Student Assessment and Placement:
History, Status, Direction
Dr. Gene Kerstiens, Andragogy Associates
This paper is not an intentionally objective, dutifully documented appraisal of student assessment as it obtains in postsecondary institutions. Rather, it is an observation based upon a review of the professional literature together with experience derived from forty years of serving on and visiting a variety of campuses. Many of these institutions provided occasion for the author’s being tested, evaluated, and advised under the same conditions endured by students, who seldom find them edifying. Without apology, this exposure is given to a mindset that considers assessment chiefly from the student’s point of view. For those interested in an explication of postsecondary student assessment and placement from the perspective of consumer advocacy, read on.
Briefly, assessment and placement as it is most commonly practiced on college campuses today consists principally of basic skills testing – reading, writing, and math skills as measured on a standardized, time-critical, objective, paper-and-pencil test battery. Scores on these tests are used to qualify students for “college level” courses or to direct underprepared students to some intervention calculated to prepare them for the rigors of the post-secondary curricular experience. To be sure, at some institutions other measures and means are included in the assessment package: writing samples, high school grades or class standing, learning skills surveys, experiential credit, and learning styles and self-esteem surveys. But a review of research (Gabriel, 1989), as well as the latest national survey (Boylan, Bliss, and Bonham, 1992), reveals that 96% of colleges use scores earned on basic skills tests as the principal, if not the sole, determinant for student placement.
How we arrived at this condition and predicament has interesting historical roots in Nineteenth Century intelligence testing and some of the conceptual and measurement errors related to that movement (Gould, 1981). But how we have persisted in this tradition in spite of the preponderance of research criticizing it and our own less than exemplary experience while carrying on the practice (Weber 1985; Kerstiens 1993) is not a pretty story. Perhaps understandably it is avoided as a treatable topic in certain graduate schools of education that regard such ironies as sensible and inevitable and logic as a systematic method of arriving at the wrong conclusion with confidence.
By the turn of the century, Binet, Terman, Spearman, and others had established the construct of general intelligence. They came to believe that ability to learn (IQ), especially in the academic arena, could be measured on paper-and-pencil intelligence tests and that IQ scores predicted success in college at least reasonably well. This opinion survives in some quarters today, sometimes transmuted into curious and convenient persuasions (Conner, 1989). But by 1921, Gates was successfully selling the notion that a more specific ability, the ability to read and comprehend texts, best accounted for proficiencies related to success in college. This rationale inspired a flood of reading comprehension tests that often became exclusionary to other instruments of assessment. Finally, by the late ’50’s, writing and math skills also were construed as skills necessary to college learning. When tests measuring these
skills were incorporated with the well entrenched reading skills requirement, the practice of basic skills testing emerged and blossomed. During the ’70’s, basic skills batteries became regarded as the common and acceptable method for measuring college aptitude. With slight modifications, many academics sustain unshakable confidence in this system, relinquishing their hold on speeded, objective, basic skills tests only when we pry them from their cold, dead fingers.
Of course, most sobering is the fact that these practices have flourished through the process of reification, that is, our coming to regard a theory or construct as having real or concrete existence. Which is to say that basic skills objective testing is commonly equated in the public mind and in many academics’ consciousness with assessment itself, so that testing and assessment are now blurred or even indistinguishable constructs. And because standardized basic skills tests are inexpensive, their administration conveniently conforms to institutional time frames, and faculty-authored textbooks teaching test-taking strategies have become embedded in developmental course curricula, we have found it expedient to remain serenely indifferent to critical research that should discourage their employment.
Accordingly, through the years, standardized, paper-and-pencil basic skills testing has collected popular support as a ship collects barnacles. As the tests enjoyed wider use, they were naturally cited more often in studies and reports. Deft references to these tests abound in the professional literature, especially during the last 30 years, this frequency and duration implying respectability. Consequently, the political correctness of this assessment practice has become well established through its popularity rather than any proven validity, accuracy of measurement, or track record of efficient placement. Even normally scrutinous and skeptical professionals have substituted their faith in group preference for their own independent judgment based upon observation, research, or gut feeling. It’s amazing how colleagues’ sentiments are influenced when they take notice of the adoption trends generated by their fellows.
As indicated, the movement toward this assessment system grew, nourished by glittering advertisement claims and practitioner endorsements, but in spite of a steady stream of disfavoring professional literature. During the past seven decades extensive and comprehensive research reviews have repeatedly pointed out not only the limitations of the method but also its debilitating effects on our student constituency. (Gates 1921; Flanagan 1939; Preston & Botel 1951; Rankin 1962; Tillman 1977; Stetson 1982) These writers indict most standardized, basic skills instruments and, in almost unanimous agreement, make nine charges: they sacrifice accuracy of response for speed of response, encourage chance-success responses (guessmanship), discourage analytical reasoning, unnecessarily elevate anxiety, delay feedback of test results, rely on norm-referenced measurement, facilitate or demand mass testing format, provide a scarcity of alternative test forms, and promote inconvenient scheduling of test administration. Tillman (1977) succinctly identified the inconsistency between research findings and our assessment practices: “Ironically, the increasing popularity of certain tests seems to be inversely related to the negative comments of critics” (p. 253).
Meanwhile, back in the political and popular opinion arenas, vigorous unrest concerning assessment/placement practices can be evidenced in abundance. During the past twelve years, no fewer than eight bureaus, commissions, and councils have been appointed by the President or his designee to study the problem, to make recommendations, and to serve as federal assessment regulatory agencies. The results have been unrewarding. The latest in this succession of failures was the National Commission on Education Standards and Testing (Public Law 102-62, 1992) whose recommendations were evaluated in Congressional Testimony as follows:
We believe that the proposed NESAC would not be capable of evaluating the new standards and examinations meaningfully. We see the need for an independent, non-partisan body with sufficient expertise and credibility to evaluate the technical qualities of alternative assessments, examine the evidence about their feasibility and costs, monitor the consequences of their use, and judge the comparability of results. (Institute for Education and Training, 1992, p. 1)
Echoing these concerns are articles in the Chronicle of Higher Education, like George Madaus’s (1990) “Standardized Testing Needs a Consumer Protection Agency,” and the following charges leveled by Rand researchers:
Our testing policies have failed to achieve many of their intended positive effects, while creating some clearly negative consequences. Initially created to facilitate tracking and sorting of students, these instruments were not intended to support or enhance instruction. Because of the way in which the tests are constructed, they place test takers in a passive, reactive role, rather than a role that engages their capacities to structure tasks, produce ideas, and solve problems. The tests thus exclude many kinds of knowledge and types of performance that we expect of students. They are inappropriate tools for many of the purposes that they are expected to serve. (Darling-Hammond and Lieberman, 1992, B-1)
Of course, we have been invited to believe that one reason why these tests fail to fairly evaluate matriculating students is that they are based on norms developed years ago when, presumably, the norming population possessed better skills. Reinforcing this belief are countless alarming reports in both the media and professional press about students’ declining scores. Much as investors pore over stock indexes with frightful eagerness, we have been preoccupied with periodic reports of disappointing fluctuations in student scores, some of us becoming operatic about declining standards and evangelical about reestablishing them.
But such data contribute to a distorted view of the students we serve. Indeed, if we compare today’s average student ACT and SAT scores with those of twenty years ago, the results are surprising.
ACT Mean Composite Score Comparison*

    YEAR    SCORE
    1970    19.9
    1990    20.6

SAT Mean Composite Score Comparison* (Maximum Score = 800)

    YEAR    SCORE
    1970    474
    1990    450
Since ACT and SAT tests are patently heavy hitters in the postsecondary testing industry, these data should provide a credible comparison of the entrance scores of today’s students with those of students 20 years earlier. We can notice that on the ACT, there is a 0.7 point (about 3.5%) increase in scores. On the SAT, there is a 24 point (about 5%) decrease in scores. The plus-and-minus variance between these average scores would appear to cancel each other out, representing overall score levels that are essentially unchanged. Which is to say that students may be different from what they were 20 years ago, but, as they are measured on standardized entrance examinations that enjoy high usage, they are no worse – or, let us say, they are just as bad. Therefore, the declining scores scenario does not account for the tests’ mismeasurement of today’s student population.
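The percentage changes follow directly from the table; a quick arithmetic check (an illustrative sketch only, using the table’s figures):

```python
# Mean entrance scores from the comparison table above.
act_1970, act_1990 = 19.9, 20.6
sat_1970, sat_1990 = 474, 450

# Percent change relative to the 1970 mean.
act_change = (act_1990 - act_1970) / act_1970 * 100
sat_change = (sat_1990 - sat_1970) / sat_1970 * 100

print(f"ACT: {act_change:+.1f}%")  # about +3.5%
print(f"SAT: {sat_change:+.1f}%")  # about -5.1%
```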
Whatever the historical circumstances that have occasioned or sustained the sad condition of postsecondary assessment and placement, there is wide agreement that changes need to be made. Students have voiced their protests and have even initiated organizations designed either to facilitate reform or abolish the entire process. At the federal level, regulatory agencies with a firm grasp of the obvious have registered displeasure and frustration. Few academics are satisfied with their institution’s assessment process. Face-to-face inquiries elicit guardedly discreet responses from professors, counselors, advisors, and administrators, most of whom confess that their institution’s assessment/placement strategies are ineffective and probably unfair.
What can be done to improve the typical assessment process? There are three modest measures, based upon the best available research and experience, not really too threatening, and certainly cost-effective, that most institutions can apply on a given Monday morning to take significant steps toward a solution.
First of all, those institutions employing mandatory placement need to reconsider this policy, especially because research supports discretionary placement. According to the latest national survey (Boylan, Bliss, and Bonham, 1992), 57% of postsecondary institutions stated
that their placement was mandatory as a result of assessment. However, on six success variables including persistence, success in critical classes, and cumulative GPA, students enrolling in colleges with mandatory placement policies were significantly less successful than students attending institutions allowing options. Additionally, Utterback’s (1989) exhaustive review of research, together with his own well designed study, lend credence to the position that insisting on student participation in interventions based on questionable assessment practices is not only unwarranted but untenable.
Next, most schools need to consider augmenting and enriching their assessment packages. While a majority of campuses will probably continue to employ paper-and-pencil objective basic skills testing, they might choose to include promising alternative means and measures for assessing and placing students. One example supporting such consolidation stands out in the research. In their national survey, Boylan, Bliss, and Bonham, (1992) learned that 26% of institutions incorporated learning skills inventories as a component of their assessment system. On seven success variables including mean first-semester GPA, persistence and success in critical classes, and graduation rates, students enrolled in schools utilizing learning skills inventories as part of their assessment system were significantly more successful than students in schools that did not. Additionally, Bliss and Mueller (1987) learned that results on one learning skills inventory predicted first-semester GPAs at an unprecedented .79, a correlation high enough to engage our actuarial and statistical attention and encourage implementation.
Finally, consider adopting a computer-adaptive test to replace the paper-and-pencil basic skills instrument probably now in place on most campuses. Why? First of all, computer-adaptive basic skills testing addresses the nine most common researcher objections mentioned in the seventh paragraph of this article. It manages to avoid most if not all of the negative features cited by objective basic skills testing critics. Next, because its format and presentation are based on item response theory, the instrument presents a student with items of optimum challenge rather than displaying an entire spectrum of item difficulty that either encourages guessmanship or occasions boredom. Finally, since test items measure research-based proficiencies typically required of students engaging in the college experience (College Board, 1983), test results need not be revealed in terms of points or percentiles but can be reported in criterion terms as levels of proficiency and performance. Although only one computer-adaptive basic skills instrument is presently available (College Board), another is being prepared for marketing in the near future (American College Testing).
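The idea of presenting items of optimum challenge can be illustrated with a toy loop. Everything here, the difficulty scale, the halving step, and the simulated examinee, is invented for illustration; real computer-adaptive tests select items by statistical information functions under item response theory, not by this simple rule:

```python
# Toy sketch only: an adaptive-testing loop in the spirit of item
# response theory. Not any actual instrument's algorithm.
def adaptive_test(answers_correctly, n_items=5, start=5.0):
    """Present each item near the current difficulty estimate, moving
    the estimate up after a correct answer and down after a miss."""
    difficulty = start
    step = 2.0
    for _ in range(n_items):
        if answers_correctly(difficulty):
            difficulty += step
        else:
            difficulty -= step
        step /= 2  # narrow in on the examinee's level
    return difficulty

# A simulated examinee whose true proficiency is 6.0: answers
# correctly whenever the item is at or below that level.
print(adaptive_test(lambda d: d <= 6.0))  # converges near 6
```

Because each item sits near the current estimate, the examinee is neither invited to guess on far-too-hard items nor bored by far-too-easy ones.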
Of course, a problem that has been decades in the making is not likely to be remedied with dispatch. Nor are mandated regulations liable to inspire an epiphany of collective insight in an infrastructure colonized by petty bureaucrats more interested in turf concerns than what is right for students, the course of study they face, and the faculty delivering instruction. Oxymoronically speaking, it is difficult to provoke revisionist thinking among those seeking innovation without change.
References
Bliss, L., & Mueller, R. (1987). Assessing study behaviors of college students: Findings of a new instrument. Journal of Developmental Education, 11(2), 14-18.
Boylan, H., Bliss, L., & Bonham, B. (1992). National study of developmental education. Boone, NC: National Center for Developmental Education, Appalachian State University.
College Board. (1983). Academic preparation for college: What students need to know and be able to do. New York: The College Board.
Conner, J. (1989). Renee’s St. George vs Binet’s dragon. Journal of Developmental Education, 13(2), 28-29.
Darling-Hammond, L., & Lieberman, A. (1992). The shortcomings of standardized tests. Chronicle of Higher Education, January 29, 38, B1-B3.
Flanagan, J. (1939). A study of the effect on comprehension of varying speeds of reading. In Research in the foundations of American education (pp. 47-50). Washington, DC: American Educational Research Association.
Gabriel, D. (1989). Assessing assessment. Review of Research in Developmental Education, 6(5), 1-6.
Gates, A. I. (1921). An experimental and statistical study of reading and reading tests. Journal of Educational Psychology, 12, 303-314, 378-391, 445-464.
Gould, S. (1981). The mismeasure of man. New York: W.W. Norton.
Institute for Education and Training (1992). National educational standards and testing: A response to the recommendations of the National Council on Education Standards and Testing. Santa Monica, CA: The RAND Corporation.
Kerstiens, G. (1993). A quarter-century of student assessment in CRLA publications. Journal of College Reading and Learning, 25(2), in press.
Madaus, G. (1990). Standardized testing needs a consumer-protection agency. Chronicle of Higher Education, September 5, A-52.
Preston, R. & Botel, M. (1951). Reading comprehension under timed and untimed conditions. School & Society, 74, 71.
Rankin, E. (1962). The relationship between reading rate and comprehension. In E. Bliesmer & R. Staiger (Eds.), Eleventh yearbook of the national reading conference (pp. 1-5). Boone, NC: The National Reading Conference.
Stetson, E. (1982). Reading tests don’t cheat, do they? Journal of Reading, 25, 634-639.
Tillman, C. (1977). Readability and other factors in college reading tests. In D. Pearson & J. Hank (Eds.), Twenty-sixth yearbook of the national reading conference (pp. 253-258). Rochester, NY: The National Reading Conference.
Utteback, J. (1989). Closing the door: A critical review of forced placement. Journal of College Reading and Learning, 22(1), 14-22.
Weber, J. (1985). Assessment and placement: A review of the research. Community College Review, 13(3), 21-33.
Are the Skills We Are Teaching Obsolete? A Review of Recent Research in Reading and Study Skills
Maxwell, Martha. “Are the Skills We Are Teaching Obsolete? A Review of Recent Research in Reading and Study Skills,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1994. Pp. 63-77.
Are the Skills We Are Teaching Obsolete?
A Review of Recent Research in Reading and Study Skills
Dr. Martha Maxwell
MM Associates
In the past decade there have been many changes in the way educators and psychologists view learning. Research studies based on new theories in cognitive psychology, information processing, linguistics, and neurophysiology have produced results that challenge old ideas about memory, intelligence, and learning skills as well as teaching strategies. More sophisticated statistical designs and methods have made it possible to design studies that tell us more about the interaction between methods, student characteristics and learning outcomes, and experimenters are more cautious about controlling variables like background knowledge and motivation than were their forebears.
Not only have there been changes in theory, research methods, and technology; the demographics of those who attend college have changed as well. Diversity in culture, educational background, age, and ability characterizes today’s college students, and this diversity has had an impact on every phase of college life. Many institutions, faced with budgetary restrictions, have increased the size of classes. Modern technology, including computers and better teaching aids, is increasingly incorporated into classes, and today’s students may participate in a broader range of learning experiences, including collaborative learning. College textbooks too have changed, as have the expectations of instructors. This paper addresses these challenges with two questions: “Do the skills we teach in reading and study skills reflect these changes, and are we preparing today’s students for the real world of college study? Or are our methods locked in the assumptions and traditions of the past?”
To address these questions, we will examine some recent research studies relevant to college reading and study skills. Specifically, we will look at research in time scheduling, note-taking, exam skills, textbook reading, as well as studies on the effects of affective factors such as locus of control on learning. This review is not meant to be exhaustive, but merely represents a sampling of research results that appear to have implications for skills teaching. I’ll leave you to answer the question of whether what you are teaching and how you are teaching it is relevant and timely.
In examining the results of research, it is well to remember that no one study is meaningful unless it’s placed in context with other studies. Fifty years of research history has not changed this basic tenet. For every five studies supporting a position, there may be one that negates the same conclusion. In other words, we still don’t have the final answer and even if we did, it too may change as conditions change.
Recent Research on Time Management
Study skills programs routinely include skills in time management, based on principles that are generally accepted in management training programs like Lakein’s (1973) How to Get Control of Your Time and Your Life. Although there are many books on this topic, most agree that the student should identify needs and wants, rank them in regard to their importance or priority, and then allocate time and resources appropriately. Other time-honored tips include delegate work, handle each piece of paper only once, and continually ask yourself, “What is the best use of my time right now?”
That managing their time is a major adjustment problem for freshmen is attested to by the large numbers of schedules and time management tips requested by students from learning centers – some distribute as many as 5,000 time schedules each semester. Even at exclusive Harvard, some students are reported to have difficulty making the adjustment from high school to college and planning time to study. Light (1992), in his assessment of Harvard students and faculty regarding teaching, learning, and student life, points out that for some students the inability to manage their time will spell failure, and he encourages advisors to work with those students on time management. He states that how freshmen allocate their energies and plan their study time is crucial to success. For example, Light trained 173 Harvard freshmen to track how they actually spent their time, and then debriefed each student with questions like, “How was your time actually spent?” “Are you pleased with the way you spend each day?” “Are there changes you might like to make?” He felt that it was helpful to ask students to divide the day into three parts (morning, afternoon, and evening) and encourage them to choose the extracurricular activities they might like as well as to plan for uninterrupted study time. Furthermore, Light reports that logging time has another advantage: it gives the advisor and student an agenda to discuss, providing a great chance for an advisor to genuinely advise.
Despite the prevalence and pervasiveness of the problem for students, there has been surprisingly little research on time management. Academic skills counselors use the same strategies that were developed for business situations. What research there is has mainly concerned how training in time management can change behavior and, although a number of studies have reported behavior changes, few have shown that time management training reduces stress or improves overall performance. Macan et al. (1990) point out that the research so far has dealt with time management training aimed at changing what is assumed to be a unidimensional construct of good time management. Not only has the concept of a unidimensional time management construct gone untested, but there have been no systematic attempts to develop a test that assesses conventional time management behaviors. Furthermore, they state that little is known about the correlation of naturally occurring time management with personality and with indicators of stress and performance.
In 1990, Macan et al. developed a time management questionnaire, administered it to college students, and then checked it against time management behaviors, attitudes, stress and self-perceptions of performance and grade point average. Of the four factors revealed by the questionnaire, they found that the one most predictive of GPA was Perceived Control of Time. Students who felt they were able to control their own time reported significantly
greater evaluations of their performance, greater satisfaction with work and life, less role ambiguity, less role overload, fewer job induced and somatic tensions, and higher GPAs.
Overall scores on the time management questionnaire were found to be positively correlated with age and sex. Older students were more likely to engage in traditional time management activities while women made significantly higher overall scores on the time management questionnaire, but were significantly lower on one factor – they did not feel that they were in control of their time. In other words, women were better time managers than men but were lower on perceived control of time.
Macan et al. (1990) concluded that time management is multidimensional, comprised of four independent factors: Factor 1, setting short-term goals and priorities; Factor 2, mechanics, scheduling, and planning – the time management behaviors taught in seminars, with items like “I carry an appointment book with me” and “I make a list of things to do each day and check off each task as it is accomplished”; Factor 3, the student’s perception of control of time, with items like “I feel in control of my time” and “I feel overwhelmed by trivial and unimportant tasks”; and Factor 4, preference for disorganization, with items like “I can find the things I need more readily when my workplace is messy” and “I have some of my most creative ideas when I am disorganized.” (Note: As might be expected, those students with a high preference for disorganization felt more ambiguity about their roles as students, had higher somatic tensions, and made lower GPAs.)
Although a limitation of this study is that it was based on self-reported information, the fact that multiple factors of time management were discovered suggests that time management is a more complex activity than was previously thought and that attitudes about one’s ability to manage time are more important than the strategies one uses.
In a more recent study on time management, Britton and Tesser (1991) gave 90 freshmen a time-management questionnaire and compared their responses with their cumulative grade-point averages four years later. Two time management components were found to relate to overall grades: 1) a time attitudes factor – consistent with Bandura’s (1989) concept of self-efficacy and the findings of the Macan et al. (1990) study mentioned above – that is, students with positive time attitudes seem able to control their time, say “No” to people, and stop unprofitable activities or routines (feelings of self-efficacy, according to Bandura (1989), allow and support more efficient cognitive processing, more positive affective responses, and more persevering behavior); and 2) strong short-term planning skills. (Note: this study did not use the same questionnaire used in the Macan et al. study, but both studies agreed on these two factors.)
Interestingly, long-term planning skills were not related to final GPA, and the researchers postulate that, in a college environment, short-term planning may be more important than long term planning due to changes in expectations and demands that are relatively rapid and frequent. “Different parts of the course may unpredictably vary in difficulty; the overlapping of demand from different courses is often unpredictable; instructors may even change their mind about the due date on papers or the date an exam will be scheduled; on occasion, there is no syllabus, and even in courses where there is a syllabus, there are often consequential deviations from it. Perhaps in this type of environment if the
goal is to maximize grades, a short-term planning window is more optimal … long range planning may be more important in a less volatile, more stable environment” (Britton & Tesser, 1991).
The fact that long-range planning was negatively correlated with SAT scores was a bit more difficult to explain. Britton and Tesser suggest that several of the items on the long-range planning scale may reflect an inability to tolerate complexity; e.g., “keeping a clean desk.” To the extent that items like this one indicate a low tolerance for ambiguity, people who score high on this factor may be unable to cope with other kinds of complexity such as the complexity involved in taking the SAT. These are questions for further research.
Implications. Current research suggests that time management is a more complex activity than we had previously considered and may be comprised of several independent factors. The fact that four independent factors were found in one study and two in the other indicates that more research is needed, but also suggests that time management is probably not a unitary trait. Both studies indicate that the most important factor in predicting achievement (whether it is current GPA or GPA at graduation) is whether students feel that they are in control of their own time, not the mechanics nor the activities they engage in. Both studies agreed on a second independent time-management factor – short-term goal setting. The finding that people who are long-term planners are at a disadvantage in college and don’t do as well as those who plan only for the short term is intriguing. Perhaps we should avoid stressing the need to have long-term goals and encourage our students to be more responsive and adaptable to the inevitable changes they face in college.
Note-Taking
Using Notes for Review. Haenggi and Perfetti (1992) examined the roles of basic reading processes and prior knowledge in processing expository text. Average and above-average college readers were instructed to either review their notes, reread their notes, or reread their textbook on human decision-making. Results showed that the three strategies were equally effective in improving comprehension for text-explicit and text-implicit information, and reading ability and prior knowledge were more predictive of comprehension than was the type of reproductive activity. Rereading the text might help average readers compensate for their lower performance in answering text-implicit questions, whereas above-average readers seem better able to combine more text information with their previous knowledge. Working memory played the major role in comprehending text-implicit information, whereas knowledge was relatively more important for explicit and script-implicit information.
These results tend to support Arnold’s (1942) data showing that a rereading strategy improved comprehension on immediate and delayed tests better than note-taking, summarizing, or outlining. In fact, Anderson (1980), in a review of research on study strategies, found that only two studies reported the rereading strategy inferior to either note-taking or underlining, while several studies showed no difference. Subsequent studies tend to show that rereading is superior when processing time is held constant.
Yet how many of us still warn students that rereading is less effective than SQ3R or note-taking? And do we still discourage students from rereading their textbook chapters?
Kiewra et al. (1991) investigated three note-taking conditions: taking notes without reviewing them, taking notes and reviewing them, and skipping the lecture and reviewing somebody else’s notes. (Note: this third condition is similar to what students do when they use note-taking services or borrow notes from a friend.) The results indicated that taking notes and reviewing them was superior to taking notes and not reviewing them and to reviewing borrowed notes on a recall test, and superior to not reviewing notes on a test of synthesis. However, borrowing someone else’s notes was superior to taking notes and not reviewing them on a test of synthesis. They also reported that taking notes in a matrix fashion (i.e., mapping) was superior to linear note-taking.
A study similar to the 1991 Kiewra et al. research described above examined the relationships among information processing, note-taking effectiveness, and academic performance indicators of students with above-average verbal SAT scores (McIntyre, 1992). Subjects were assigned to four note-taking conditions: notes/no review, notes/review, no notes/no review, and no notes/review. The results showed that information-processing ability (as measured by two tests) accounted for only a small percentage (5%) of the variance in note-taking effectiveness. However, as expected, students who reviewed their notes outperformed students who did not on a lecture-specific quiz when verbal ability and information-processing ability were controlled. This result suggests that students who take notes process information better than students who do not.
Finally, McIntyre found that students recalled less than 60% of the information in the lecture and were able to record only about half of its ideas, suggesting that note-taking skills need to be taught and practiced.
Mapping. Boyle and Peregoy (1991) studied the effects of mapping on students’ learning from college texts and concluded that students trained in mapping improved their ability to write, but not their reading comprehension.
Recent Research on Textbook Reading
Reading specialists have long been aware of ways to measure the reading difficulty level of textbooks by applying readability formulae, but the effects of rhetorical structures on comprehension are less generally recognized. For example, some rhetorical structures, such as argument, comparison and contrast, and problem and solution, are common across a number of disciplines; these are familiar to many readers and are taught in developmental reading courses. Other structures are more specialized, used for organizing information in a specific discipline such as legal documents or scientific materials, and these special rhetorical structures make such materials difficult for the novice reader to comprehend. In this section, we will look at studies of how the organization of college textbooks affects reading comprehension.
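Readability formulae themselves are simple arithmetic. As one example, the classic Flesch Reading Ease formula combines average sentence length and average syllables per word; in the sketch below, the crude vowel-group syllable counter is an assumption for illustration (operational formulae rely on validated syllable counts):

```python
# A sketch of the Flesch Reading Ease formula:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher scores mean easier text.
def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels.
    word = word.lower().strip(".,;:!?")
    groups, prev_vowel = 0, False
    for ch in word:
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    return max(groups, 1)

def flesch_reading_ease(text):
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    words = text.split()
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("The cat sat on the mat. It was warm."))  # near 100+: very easy
```

Such formulae capture surface difficulty only; as the studies below show, rhetorical organization affects comprehension in ways no word-and-sentence count can detect.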
How the Organization of Science Textbooks Affects Comprehension. Dee-Lucas and Larkin (1990), in a study aptly subtitled “Consider the particle p…,” compared the rhetorical structure common to many mathematics and science texts (a proof-first structure) with an alternative organization that is more typical of expository writing – i.e., where the principle is stated first. For example, in science texts, the author may start a chapter by
describing an atom and work up to a theory or principle. The researchers found that readers were more likely to recall the gist of the principle after a delay when they had read the principle-first texts. Furthermore, the amount of information recalled was greater, and more readers recalled the core-principle sentence after reading the principle-first than after the proof-first texts. The investigators posited a number of reasons for this including serial position effect, i.e., more new information is presented at the beginning of the text in the proof-first structure, and the habitual perceptions of readers who expect the most important information to be presented first.
In other words, readers had more difficulty determining what was important when reading proof-first texts and reorganized proof-first texts into principle-first texts when summarizing. The proof-first text also decreased the recall of the principle.
Implications. The authors point out that the traditional way of writing science texts (where the proof is given first) may penalize the novice reader, who is uncertain about the importance of the information given in this sequence. Using the principle as a conceptual framework is typical of the thinking in other disciplines, and some support for this can be found in research by Sheila Tobias, who had a group of liberal arts graduate students enroll in a basic freshman chemistry course and describe their experiences (Tobias, 1990). One of the things that frustrated them in reading was that the basic principles were not presented first; unlike the rhetorical organization in their liberal arts courses, the chemistry text required them to infer the principle.
These studies suggest that we must train students to read proof-first materials and to find the principle quickly in science and math textbooks so that they can organize their ideas and process the material more easily (see Dee-Lucas & Larkin, 1988, for a description of strategies for comprehending scientific texts), and/or encourage science textbook authors to organize material in a principle-first manner, especially when writing for a general audience of beginning students.
In another study on the effects of text structure on comprehension, Zabrucky (1990) tested the ability of college students to find sentences that were inconsistent with the theme of texts, in order to determine whether students were able to recognize their own comprehension failures (i.e., unfamiliar words, inconsistencies, violations of background knowledge, etc.). Previous studies suggest that beginning college students often fail to evaluate their understanding while reading text material. The effects of two components of reading proficiency, comprehension ability and reading speed, on students’ ability to evaluate texts for errors were tested. Subjects were directed to look for inconsistencies or contradictions in the text they read. Despite these instructions, students frequently failed to evaluate their understanding of the material. Students who scored higher in reading comprehension and speed detected more errors than slower readers and poorer comprehenders. However, even the better readers were surprisingly low on evaluation skills. In addition, student performance was weaker on texts that were less cohesive, which could be a particularly large problem for weaker readers.
Implications. The results of this study suggest that beginning college students, even though they may score high on reading speed and comprehension tests, are weak in evaluating
inconsistencies in texts and overestimate their ability to comprehend textbooks. Poor readers are much weaker. Although other studies suggest that students develop evaluation skills with experience as they advance through college, it is important to provide training in evaluating one’s own comprehension to beginning students, particularly for those whose reading skills are weak.
The Effects of Headings on Reading Comprehension. Grant and Davey (1991) studied the effects of headings on text processing behaviors during immediate and delayed testing. College students read an expository passage, answered questions about major ideas and supporting details, and then identified the placement of answers to questions using a prototype of the text. Headings did not appear to affect overall comprehension or overall answer location accuracy. However, those subjects in the headings group who answered major ideas correctly were better able to use the prototype to locate the place where answers appeared than were those who read the text with no headings. Since these findings were contrary to expectations that headings would improve comprehension, the researchers speculate that perhaps the text read was too easy, and students did not need headings to process the information. It is clear that more research should be done on this question.
Are College Textbook Authors Writing Texts That Take into Consideration Students’ Problems in Comprehension? Smith and Chase (1991) examined introductory college psychology textbooks to find out 1) how often paragraphs included a topic sentence as an explicitly stated main idea and 2) when these topic sentences occur, with what frequency they are positioned as the first, middle, or last sentence. Their results show that over half (58%) of the paragraphs analyzed contained explicitly stated main ideas – a higher percentage than previous studies on social studies texts had shown. In addition, 66% of the topic sentences were in the beginning position in the paragraph, an aid to comprehension.
Although the majority of paragraphs had explicitly stated main ideas, the researchers noted that students still need to be able to infer and construct main ideas from paragraphs that lack one, and this is a skill that needs to be taught.
Do College Reading Skills Textbooks Prepare Students to Read Difficult Textbooks? Schumm, Haager, and Leavell (1991) content-analyzed 46 college reading textbooks to determine the extent to which postsecondary reading textbooks provide an awareness of, and strategies for, both considerate and inconsiderate text features. Considerate or “friendly” textbooks are defined as those that possess text-based features such as elements of text organization, explication of ideas, control of conceptual density, and incorporation of instructional devices that facilitate information gathering. Inconsiderate texts do not have these features. The results show that substantially more strategies in reading texts concern considerate than inconsiderate text features, and the authors conclude that college reading textbooks provide strategies for reading text that is easy, but not for reading difficult text. In other words, we’re not teaching students how to read difficult textbooks.
Effects of Beliefs about the Nature of Knowledge on Comprehension. Schommer’s (1990) research addressed the questions: “What are students’ beliefs about the nature of knowledge?” and “How do these beliefs affect comprehension?” She found four factors in her
questionnaire about the nature of knowledge reflecting degrees of belief in a) innate ability; b) simple knowledge; c) quick learning; and d) certain knowledge. In a second experiment she asked students to read a passage from either the social sciences or physical sciences in which the concluding paragraph was missing and had them rate their confidence in understanding the passage, write a conclusion, and take a mastery test. She found that those who believed in quick learning predicted oversimplified conclusions, performed poorly on the mastery test, and were overconfident about how well they understood the text. Those who believed in the certainty of knowledge predicted inappropriately absolute conclusions.
Reading Methods
The Best Ways to Teach Developmental Readers. Stahl, Simpson & Hayes (1992) culled the ten best ideas for teaching developmental reading from their years of college teaching of reading and described them in an article that synthesizes research, theory, and experience. These ideas should be particularly helpful for novice college reading instructors. Specifically, the ten ideas include: 1) adopt a cognitive-based philosophy, as opposed to a deficit view in which the goal for students becomes raising their scores on the Nelson-Denny or some similar test; the cognitive viewpoint assumes that students are active participants in control of their own learning, capable of becoming effective independent learners; 2) use a course model that stresses transfer of the skills learned to “real” college courses; 3) use reliable, process-oriented assessment procedures rather than over-relying on standardized tests; 4) broaden students’ conceptual background knowledge, since many developmental students lack reading experience and hold misconceptions about reading and college courses; 5) reconceptualize vocabulary development by helping students realize that “the fundamental avenue to college success is the ability to quickly expand their vocabulary, and that students must immerse themselves totally in the language of the academy”; 6) use learning strategies that have been research-validated and insure that students know how to use them and how to choose among them; 7) systematically train students to employ strategies through self-control training and other validated training approaches, and insure that instruction is direct, informed, and explanatory; 8) promote strategy control and regulation by teaching students to plan, monitor, and evaluate their own learning; 9) teach high-utility strategies to maximize immediate acceptance and reduce the negative attitudes students have about taking developmental courses; and 10) incorporate writing into the curriculum to insure that students become co-creators of the texts they read, create their own understanding of content material, and can develop a way to monitor and revise their understanding.
Implications. This paper should be very valuable in training novice college reading instructors as well as in evaluating college reading courses.
Description of a Successful Course. Stone and Miller (1991) evaluated the success of a developmental community college reading course that includes a three-step reading comprehension cycle (predicting, confirming, and integrating) and teaches students the strategies to use at each step of the cycle. Students were also taught the KWL technique: to ask themselves and record “What I know,” “What I want to know,” and “What I learned.” The instruction is described as following the direct teaching model of demonstration, guided practice, and independent practice, and the reading course is a corequisite with a sociology
course and uses the sociology textbook. Evaluation measures show more students passing the revised reading course, a higher retention than in previous years, significant improvement in reading comprehension, and a high pass rate for the sociology course. Passing students were interviewed and found to be highly confident, self-aware comprehenders who transferred reading strategies to coursework in subsequent terms.
Effects of a Metacognitive Approach. O’Neill and Todaro (1991), after reviewing the literature on the positive gains in reading that developmental readers made following training using a metacognitive approach, designed a study to evaluate the relative effectiveness of metacognitive training in reading and study skills at two different reading levels – basic and upper level remedial. Students in the metacognitive intervention were taught what metacognitive strategies are and why, how, and when to use them. They were taught to monitor their own work, summarize and evaluate it by observing instructors modeling the skills, and then modeling them themselves for the whole class and in small groups. They found that, although students increased their use of metacognitive strategies, there were no significant differences between the comprehension of students who received metacognitive intervention and those who received traditional instruction. Both groups improved their comprehension. They suggest that the best time to introduce metacognitive strategies may be after the student has mastered the most basic reading skills.
Recent Research on Test Taking Skills
Does Planning Answers before Writing Help on Essay Exams? Gillis and Olson (1991) studied the notes students made on exam papers after they had been given instructions to plan their answers before writing by brainstorming or outlining. They found significant differences between the scores of students who did various amounts of planning, with students who did extensive planning scoring the highest.
Differences were significant between all three groups – those who did no planning, those who did some planning, and those who did extensive planning before answering the exam questions. The conclusion was that students who plan before writing earn higher marks on essay tests, but the question remains whether students who are specifically required to plan, as were the students in this study, will do as well as those who use these metacognitive strategies spontaneously.
Differences in the Interpretation of Essay Questions among Test Makers, Test Raters and Test Takers. Tedick, Bernhardt and De Ville (1991) looked at how test makers, test raters, and test takers interpreted essay test questions. In this instance, test raters were those who designed the topics, and test makers were those who wrote the items. Although previous research suggests that students often interpret a question differently than their instructors do, this study was designed to examine not only whether there were differences in interpretation among the three groups, but whether differences in interpretation would influence the scores on the test takers’ written responses. The results indicated that the three groups made highly similar interpretations of two test topics, but highly idiosyncratic interpretations of three topics. One topic that teachers felt was the most straightforward and simple turned out to be the most difficult for
students, but, generally, whether students interpreted items differently from their instructors made no difference in their scores.
The Effects of Locus of Control on Achievement. Educators have long believed that marginal, at-risk students should be exposed to the most effective teaching from the best teachers. But previous research on control theory suggests that unless students perceive that they have some control over and can influence their environment, their capacity to learn from instruction is limited. In other words, high-risk students who feel they lack control over their academic performance are unable to benefit fully from good instruction. The feeling that one has lost control can be induced by many factors in the typical college classroom, such as unannounced tests, excessive content, and poor organization, as well as by such internal factors as believing one cannot learn the subject. As Perry and Penner (1990) point out, “Ironically, the students who are in most need of good instruction are the least likely to benefit from it.” However, research also suggests that when students with low perceived control were given feedback on an aptitude test before a lecture, temporarily altering their perceptions of control, their performance improved – a finding that has been replicated in three separate studies.
Arguing that if perceived control can be increased in at-risk students, then their achievement should improve as a result of both their own effort and the quality of instruction, Perry and Penner (1990) studied the effects of attributional retraining, a therapeutic method for reinstating psychological control, in groups of students with internal and external locus of control. The training consisted of a short, 8-minute videotape shown before a classroom experiment. On the tape, a male college professor described his freshman year at university, recounting an instance in which, despite repeated failure, he persisted only because a friend urged him on, and he later went on to succeed in university and graduate school. He encouraged students to attribute poor performance to lack of effort and good performance to ability and proper effort. He also explained that persistence is a major part of successful effort and that long-term effort enhances ability. Following the training tape, students were given an aptitude test consisting of analogies, in which they learned the correct answer after answering each item.
The investigators found that the experiment improved external, but not internal, students’ performance on a test following the lecture, on a test given a week later, and on homework. Having an effective lecturer also enhanced lecture- and homework-related achievement in externally controlled, but not internally controlled, students. The results suggest that cognitive factors influencing students’ perceived control (e.g., internal or external locus) must be taken into account when remedial interventions for academic achievement are developed and that brief training can influence the learning of students with low perceived control.
Conclusions. Attributional training and feedback on an aptitude test enabled high-risk students to learn more during a lecture and to make better use of study materials than they previously had. This is attributed to the training, which led students to view academic achievement as based on effort rather than ability.
The Effects of Affect on Supplemental Instruction. Visor, Johnson, and Cole (1992) studied the effects of locus of control, self-efficacy, and self-esteem on students’
participation in Supplemental Instruction (SI). Previous studies have confirmed that SI participation leads to higher course grades even when the non-SI group has significantly higher ability scores, and that SI students earn higher GPAs. In this study, students were tested for the affective variables and given the opportunity to attend voluntary SI sessions in a psychology course in which they were enrolled. They were then divided into three groups: regular participants (attended four or more SI sessions), occasional participants (attended one to three SI sessions), and non-participants (attended no SI sessions). Results indicated that regular participants had the most internal orientation for locus of control and the highest mean on self-efficacy, and they were higher than the other groups on self-esteem, although they did not improve significantly on these scores as a result of their participation in SI. The investigators discuss the implications of these findings for the marketing of SI to at-risk students, who are often the target of SI programs. They point out that the study suggests that if we want students to attend regularly and actively participate in SI, the program must be marketed to appeal to students with different affective characteristics. Furthermore, the SI sessions themselves must be designed to demonstrate to students with low self-esteem and external locus of control that they can succeed. Information must be carefully sequenced so as not to frustrate these students, and leaders must find new ways to encourage them to continue to attend. Perhaps the attributional retraining described by Perry and Penner (1990) is one approach that might be helpful in SI classes as well.
Correlates of Help Seeking. Karabenick and Knapp (1991) describe three studies in which they examined factors that correlate with a college student’s seeking help when faced with the prospect of failing. In the first study, seeking help was found to be related to whether the student viewed help as learning the process (instrumentally motivated) rather than gaining the minimum assistance to solve the problem (dependency-motivated), directly related to the student’s global self-esteem, and inversely related to students’ perceptions that seeking help is threatening. In all three studies, students who were willing to seek help in an academic context viewed it as an enhancement-related rather than a dependent behavior.
The Downside of Help. Graham and Barker (1990) examined the possibility that unsolicited help can function as a low-ability cue. Children viewed a videotape of two boys working problems – one boy received assistance from a teacher or peer; the other did not. Children judged the helped student as lower in ability.
“They’d Do It, but I Wouldn’t.” Researchers proposed that when students expect to fail and believe that failure will reflect their incompetence, they intentionally reduce effort so their failure can be attributed to low effort, not low ability. However, when Jagacinski and Nicholls (1990) asked college students how they would behave in a situation where they expected that failure would indicate their incompetence, they rejected the notion that they would not work hard, but said they expected others would reduce their effort. The study concludes, “Thus if students reduce effort when their perceived ability is threatened, it may not represent an intentional strategy designed to maintain perceived ability.”
Study Skills Tests
Studies on the LASSI. Nist et al. (1990) investigated the use of the LASSI (Learning and Study Strategies Inventory) in measuring students’ cognitive and affective growth following a study skills course and, in addition, examined how well it predicted grades in subsequent content area courses. They report that both regularly admitted students and developmental students showed significant improvement in LASSI scores as a result of taking the course. Although the LASSI scores were predictive of grades in regular courses for regularly admitted students, no score or combination of scores on the LASSI was predictive of grades for developmental students.
Implications. Instructors using the LASSI with developmental students should be wary of the results. More research is needed before the test can be used with confidence for developmental students.
Cross Cultural Comparison of Study Habits. Moreno and Di Vesta (1991) describe the use of the Cognitive Skills Inventory (CSI) in a cross-cultural study in which they administered the test to bilingual Puerto Rican students, monolingual Spanish students, and English-speaking American students. They found differences among the three groups on scores reflecting cultural differences, but no differences on the test factors – integration, repetition, monitoring, and coping. This suggests that the factors measured by the test remain constant across cultures, although different norms should be developed for different cultural groups.
Summary
We reviewed studies in a number of areas related to college study skills and reading. The results suggest that in many instances the behaviors we wish to change are more complex and less amenable to change through teaching than we previously believed.
Affective factors such as locus of control, self-esteem, and self-efficacy make a difference in whether and how much students learn. Even in something as apparently simple as time management, the key seems to be whether the student feels able to control her own time – not the activities, the logs, the schedules she keeps, or the priorities she sets.
Similarly, whether we are teaching reading, SI, or study skills, it is apparent that it is not what we teach but the way we teach it that may determine whether students learn skills and can transfer them to their mainstream college courses.
References
Anderson, T. H. (1980). Study strategies and adjunct aids. In R. J. Spiro, B. C. Bruce, & W. F. Brewer (Eds.), Theoretical issues in reading comprehension: Perspectives from cognitive psychology, artificial intelligence, linguistics, and education (pp. 483-502). Hillsdale, NJ: Erlbaum.
Barnett, J. E., Di Vesta, F. J. & Rogozinski, J. T. (1981). What is learned in note-taking? Journal of Educational Psychology, 73, 181-192.
Barnett, J. E. & Seefeldt, R. W. (1989). Read something once, why read it again?: Repetitive reading and recall. Journal of Reading Behavior, 4, 351-360.
Boyle, O. F. & Peregoy, S. F. (Spring 1991). The effects of cognitive mapping on students’ learning from college texts. Journal of College Reading and Learning, XXIII(2), 14-22.
Britton, B. K. & Tesser, A. (1991). Effects of time-management practices on college grades. Journal of Educational Psychology, 83(3), 405-410.
Dee-Lucas, D. & Larkin, J. H. (1990). Organization and comprehensibility in scientific proofs, or “Consider a Particle p…”. Journal of Educational Psychology, 82(4), 701-714.
Dee-Lucas, D. & Larkin, J. H. (1988). Novice strategies for comprehending scientific texts. Journal of Memory & Language, 27, 288-308.
DePalma, A. (Nov. 6, 1991). How undergraduates can succeed: Study together and in small classes. New York Times, 9.
Fishbein, H. D., Eckart, T., Lawer, E., Van Leeuwen, T., & Langemeyer, D. (1990). Learner’s questions and comprehension in a tutoring situation. Journal of Educational Psychology, 82(1), 163-170.
Gigliotti, R. J. & Buchtel, F. S. (1990). Attributional bias and course evaluations. Journal of Educational Psychology, 82(2), 341-351.
Gillis, M. K. & Olson, M. W. (1991). Do College Students Who Plan Before Writing Score Better on Essay Exams? In T. V. Rasinski, N. D. Padak, and J. Logan (Eds.), Reading Is Knowledge, 13th Yearbook of the College Reading Association. Pittsburg, KS: College Reading Association, Pittsburg State University, 7-9.
Graham, S. & Barker, G. P. (1990). The down side of help: An attributional-developmental analysis of helping behavior as a low-ability cue. Journal of Educational Psychology, 82(1), 7-14.
Grant, R. & Davey, B. (1991). How do headings affect text processing? Reading Research and Instruction, 31(1), 12-21.
Haenggi, D. & Perfetti, C. A. (1992). Individual differences in reprocessing of text. Journal of Educational Psychology, 84(2), 182-192.
Jagacinski, C. M. & Nicholls, J. G. (1990). Reducing effort to protect perceived ability: They’d do it, but I wouldn’t. Journal of Educational Psychology, 82(1), 15-21.
Karabenick, S. A. & Knapp, J. R. (1991). Relationship of academic help seeking to the use of learning strategies and other instrumental achievement behavior in college students. Journal of Educational Psychology, 83(2), 221-230.
Kiewra, K. A. (1989). A review of note-taking: The encoding-storage paradigm and beyond. Educational Psychology Review, 2, 147-172.
Kiewra, K. A., DuBois, N. F., Christian, D., McShane, A., Meyerhoffer, M., & Roskelly, D. (1991). Note-taking functions and techniques. Journal of Educational Psychology, 83(2), 240-245.
Kiewra, K. A., Mayer, R. E., Christensen, M., Kim, S. & Risch, N. (1991). Effects of repetition on recall and note-taking strategies for learning from lectures. Journal of Educational Psychology, 83(1), 120-123.
Light, K. W. (1991). Analyzing freshmen time-use to improve freshman advising at Harvard. Unpublished doctoral dissertation, Harvard Graduate School of Education, June 1991.
Light, R. J. (1990). The Harvard assessment seminars: Explorations with students and faculty about teaching, learning, and student life. First Report, 1990. Cambridge, MA: Harvard University.
Light, R. J. (1992). The Harvard Assessment Seminars. Second Report, 1992. Cambridge, MA: Harvard University.
Macan, T. H., Shahani, C., Dipboye, R. L. & Phillips, A. P. (1990). College students’ time management: Correlations with academic performance and stress. Journal of Educational Psychology, 82(4), 760-768.
McIntyre, S. (1992). Lecture notetaking, information-processing, and academic achievement. Journal of College Reading and Learning, XXIV(1), 7-17.
Moreno, V. & Di Vesta, F. J. (1991). Cross-cultural comparisons of study habits. Journal of Educational Psychology, 83(1), 231-239.
Nist, S. L., Mealey, D. L., Simpson, M. L. & Kroc, R. (1990). Measuring the affective and cognitive growth of regularly admitted and developmental studies students using the Learning and Study Strategies Inventory (LASSI). Reading Research and Instruction, 30(1), 44-49.
O’Neill, S. P. & Todaro, J. (1991). The effect of metacognitive training on the reading achievement of urban community college students. Journal of College Reading and Learning, XXIV(1), 16-31.
Perry, R. P. & Penner, K. S. (1990). Enhancing academic achievement in college students through attributional retraining and instruction. Journal of Educational Psychology, 82(2), 262-271.
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498-504.
Schumm, J. S., Haager, D. S., & Leavell, A. G. (1991). Considerate and inconsiderate text instruction in postsecondary developmental reading textbooks: A content analysis. Reading Research and Instruction, 30(4), 42-51.
Sherman, T. M. (Fall 1991). Creating a disposition to learn: Promoting enduring effects from learning improvement programs. Research & Teaching in Developmental Education, 8(1), 37-50.
Smith, B. D. & Chase, N. D. (1991). The frequency and placement of main idea topic sentences in college psychology textbooks. Journal of College Reading and Learning, XXIV(1), 46-53.
Stahl, N., Simpson, M. L., & Hayes, C. G. (1992). Ten recommendations from research for teaching high risk college students. Journal of Developmental Education, 16(1), 2-11.
Steinberg, I., Bohning, G., & Chowning, F. (1992). Comprehension monitoring strategies of non-proficient college readers. Reading Research and Instruction, 30(3), 63-75.
Stone, N. & Miller, K. (1991). Developmental college reading: Secrets of our success. Research & Teaching in Developmental Education, 7(2), 7-42.
Tedick, D. J. & Bernhardt, E. B. (1991). Interpreting essay examination topics used for assessing content knowledge: Differences among test makers, test raters, and test takers. Journal of College Reading and Learning, XXIV(1), 63-78.
Tobias, S. (1990). They’re not dumb: They’re different. Phoenix, AZ: Research Corporation.
Visor, J. N., Johnson, J. J., & Cole, L. N. (1992). The relationship of supplemental instruction to affect. Journal of Developmental Education, 16(2), 12-19.
Zabrucky, K. (1990). Evaluation of understanding in college students: Effects of text structure and reading proficiency. Reading Research and Instruction, 29(4), 46-54.
Supplemental Instruction
Ramirez, Gen. “Supplemental Instruction,” in Mioduski, Sylvia and Gwyn Enright (editors), PROCEEDINGS OF THE 13th and 14th ANNUAL INSTITUTES FOR LEARNING ASSISTANCE PROFESSIONALS: 1992 AND 1993. Tucson, AZ: University Learning Center, University of Arizona, 1997. Pp. 78-91.
Supplemental Instruction
Gen M. Ramirez
California State University, Long Beach
Introduction and rationale
After 10-15 years in which the trend in academic support had been to focus on high-risk students, higher education in the 1980’s began more critically examining the established approaches to working with underpreparation and underachievement. Except for developmental courses, whose duration was defined by the academic calendar, most interventions with struggling students remained sporadic and brief. Students used the services of learning centers (study skills instruction, tutors) when they, their instructors, or their advisors perceived an immediate need, and the services rendered generally were focused accordingly. With the exception of study skills instruction, when it was applied, these resources often got students through the immediate bottleneck and produced success in the course or assignment involved, but they became necessary once again as each subsequent difficulty arose. The goal of learning assistance and of its multiple components, including tutoring, is usually seen by practitioners as equipping students to be independent, successful learners; its perception and use by students, however, is typically so narrow and fragmentary that that goal cannot always be achieved.
As summarized by Martin et al. (1982), the traditional approaches to servicing high risk students became questionable in the light of four arguments:
1. the insufficiency of standardized testing to accurately identify high-risk students (students may do better or more poorly than what test scores would predict)
2. the impossibility of remediating skill deficits quickly enough to avert a failure pattern (attrition begins some six weeks into a semester)
3. the reluctance of high-risk students to acknowledge difficulties and to seek assistance initially or consistently
4. the stigmatization of learning centers as remedial programs because of their aggressive outreach to high-risk students
In the 1980’s, Missouri educators at both the university and community college levels began to disseminate information about their innovative approaches in response to the above and similar factors. Their focus was no longer on high-risk students but instead on high-risk courses, the latter defined as “those traditionally difficult, entry-level courses wherein student D and F rates and withdrawals exceed 30 percent of course registrants” (Blanc et al., 1983). The intervention they designed to reduce attrition and failure rates was designated “Supplemental Instruction.” It included the presentation of review sessions given repeatedly
throughout the week, whose emphasis was content review combined with the “model[ing] of thinking and languaging behavior” appropriate to the field, while student competency in reading, reasoning, and study skills was also increased. Attendance was voluntary. Leaders, presented as being themselves “students of the subject,” prepared by attending course lectures and completing assigned readings; they offered the review sessions three or four times weekly for the convenience of participants.
According to its developers, SI distinguished itself from other intervention modes in two ways: the shift in emphasis from high-risk students to high-risk courses, and the organization of services on an outreach rather than a drop-in basis (i.e., delivery of instruction outside the Student Learning Center) in classrooms and in association with established curriculum.
Perhaps the most significant distinctive feature of the program is its foundation on cognitive development theory. SI is one of the few intervention models which, practiced properly, recognizes the critical need for developing the learning and thinking skills basic to content mastery. In that respect, SI endorses the “new assumption about the purpose of education” articulated by such scholars as Malcolm S. Knowles (1981), that being to produce autonomous lifelong learners. The primary objective of the program is skill development and, only secondarily, the review of a common subject matter content to which these skills are applied.
Blanc et al. (1983) note, very appropriately, that a significant proportion of entering freshmen (their estimate is 50%) lack reasoning skills at the formal (abstract) operational level defined by Piaget and Inhelder. Those still working at the concrete level struggle to process unfamiliar information presented through the abstract media of lecture and textbooks, focusing on detail and surface content rather than on synthesis or application of concepts. Because basic courses sometimes evaluate memorization rather than the assimilation of concepts, students operating at the preformal level might do well in these courses if they identify known “pegs” to which they can anchor information or find other effective memory techniques. However, they will fail wherever they are required to demonstrate an integration or application of their knowledge at introductory or advanced levels.
A. Program Description
The SI program model is designed to give students the opportunity to address what they presume to be their content-centered needs, but in fact provides a strong emphasis on developing higher level cognitive skills in those who lack them. Although course and discipline materials serve as a common vehicle for skill instruction and development in problem or deficient areas, SI leaders give priority attention to reasoning and questioning skills. They’re trained to assess the quality of student questions and responses to identify those whose thinking levels would limit their mastery of new concepts. At the same time, they pursue the development of traditional study skills such as study reading, notetaking, memory, test-taking, and time management.
As described in the model, leaders range from professional staff in the Learning Center to student assistants. Their basic operational guidelines are six:
1. attendance at every class lecture (or emergency replacement by a substitute)
2. presentation of every scheduled SI session (modeling consistency and providing students the security of regularity)
3. a supportive posture toward the professor (helping students cope and adapt, rather than validating criticism)
4. encouragement of all questions, serious attention to them, and modeling of appropriate ways to think about course content
5. provision of feedback about student difficulties to the professor whenever opportunities arise
6. availability to students to discuss course material beyond the SI classroom or meeting time.
B. Evaluation
The University of Missouri-Kansas City, which initiated its SI offerings in 1978, disseminated early years’ results in 1983. That they attempted an impact analysis of the program is itself unusual among such interventions, because the complexity of factors that can affect student outcomes is usually considered an impediment to accurate measurement. However, the attachment of SI sessions to common performance obligations and to measured achievement allows outcomes to be examined in ways not usually possible with the diverse populations served by Learning Centers.
Their findings (Blanc, et al., 1983), for students who attended from 1-25 sessions (with a program average of 6.5 sessions per semester), showed significant improvement attributable to SI. In order to account for motivation as a factor in SI participation, they compared SI students to two control groups (one “motivational” population reporting high interest but unable to participate due exclusively to scheduling conflicts and the other the remaining nonparticipants). As indicated by the following chart, SI students did significantly better with respect to GPA in the target course, semester GPA, and their attrition (“D”, “F”, “W”) rates in the class.
Table 1

                                                      Non-SI Group
Measure                                 SI Group      Motivational Control    Others
                                                      (N = 132)               (N = 353)
High school class rank (percentile)*    72.5          71.4                    80.9
Converted test score (percentile)*      56.2          56.2                    58.7
Course grade**                          2.50          2.12                    1.57
GPA, spring semester 1980**             2.70          2.36                    2.25
Percentage D, F, & W's***               18.4          26.5                    44.0

NOTES: Courses served by Supplemental Instruction (SI) were Biology 109, Chemistry 212 and 222, Economics 201 and 202, and History 1020 and 2020. All were entry-level courses for the particular discipline. Mean course grade was based upon a 4.0 scale (A = 4, B = 3, C = 2, D = 1, and F = 0).
* Level of significance: N.S.
** Level of significance: 0.01 using t-test.
*** Level of significance: 0.05 using chi-square test.
Table 2

Group                       Percentage    Course    Percentage Reenrollment
                            of Group      Grade     during Subsequent Semester
Top Quartile (N = 149)
  SI                        30            3.10*     86%**
  Non-SI                    70            2.30*     78%**
Bottom Quartile (N = 75)
  SI                        31            1.72*     74%**
  Non-SI                    69            0.88*     62%**

NOTE: Top quartile students were those scoring in the 75th-99th percentile range on entrance tests, and bottom quartile students were those scoring in the 0-25th percentile range.
* Statistical test and level of significance: 0.05 using t-test.
** Statistical test and level of significance: 0.10 using chi-square test.
Table 3
Impact of SI on "D", "F", "W" grade rates

                        1977    1978    1979    1980
"D"/"F"/"W" rate        33%     27%     17%     18%
SI utilization          --      13%     32%     45%

(adapted from Blanc et al., 1983)
Program developers cite four advantages of the SI model:

1. a proactive approach where the service precedes difficulties
2. direct attachment to specific courses so that skills instruction has immediate application
3. student perception of the program as enrichment (for all students) rather than remediation (for the underprepared)
4. promotion of high degrees of student interaction and support (that produces peer study groups and facilitates mainstreaming non-traditional students).
Program developers point out, as an additional advantage, that SI is more economical to administer than tutoring, the traditional service it most closely approximates. At UMKC, it was estimated that one leader serviced 106 students during two semesters at a cost of $12 per student – substantially more service at lower cost than one individual could have provided these students in a conventional program. Martin and Blanc (1981) describe the program as “cost-effective, in terms of both staff allocation and budget allocation.”
C. Adaptation of the Model
Both for its philosophy and for the results it has been found to produce, SI is an ideal program model. It is based on sound principles of cognitive development, and when instruction carries this emphasis, the gains not only in the target course but in general student achievement qualify it as among the most economical and most effective known approaches to academic support. Beyond that, it is a democratic program, i.e., one that offers something for virtually any college student, as distinct from the traditional developmental or remedial interventions provided for students whom an institution defines as deficient or disadvantaged; SI applies no such stigma.
SI is unquestionably a model that deserves wider recognition than it has received. Many campuses, without any knowledge of the SI model, have developed adjunct and paired courses that serve a similar purpose through a more formal arrangement. But on many campuses, tutoring and other conventional approaches to academic support, which continue to have their place, are the only instructional support services available.
1. Rationale:
Although staff at hundreds of institutions have been trained to implement it, the number of campuses actively providing SI programs is still a very small proportion of the nation’s colleges. This is due in part to difficulties encountered when local needs or constraints do not seem to be effectively served by the model. If a large, urban state university in California is typical, the problem lies not in the educational model but in the noneducational campus realities that preclude its effective adoption. The discussion that follows is offered as an example of the adaptation process required to tailor the prototype to local objectives or obligations. For community colleges, residential campuses, or other types of institutions, a different constellation of special considerations would shape the adaptation.
In 1985, a campus Trio director attended SI training at UMKC and returned to campus to initiate a local effort parallel to the traditional tutorial offerings. During each semester of 1985-86, five high-risk classes were chosen for SI adjuncts supported by federal (Trio) funds: economics, calculus, physics, government, and biology. These were either general education courses or prerequisites to the campus’s most popular majors at that time (business and engineering). For those who participated, results were very good, equal to those reported in the UMKC data. But after three semesters of introducing the model, those to whom administrative responsibility had been given recognized inherent problems that demanded attention.
Funding was a significant factor. As is typical of many institutions, tutorial funds were not general but categorical. This meant that not only was the primary target population for SI the supporting program’s own students, but their students had to be involved in SI sessions in order for the expenditure of funds to be justified. The way SI advertised its services was through the target high-risk course, so every student in the class was offered access. In practice, attendance at SI sessions was sporadic and inconsistent. But the greatest response was from traditional students, not from the categorical program students, and the trickle of participants swelled dramatically just before each exam. For the faithful few attending regularly, the skill development was aborted by the temporary throng seeking only help with content and refusing anything else. And because the majority were not categorical students at all, many scheduled sessions were paid for questionably; furthermore, attendance patterns raised the issue of whether leaders should be paid for time committed to SI, even if they had no students present.
From a legal perspective, the funding issues had to be resolved. Pragmatically, because of both funding and the campus concern for assisting underprepared and under-achieving students, attendance questions required new strategies. And politically, administrators faced the all-too-common reluctance of advisors to accept and endorse a departure from the well-established tutorial model: despite the move toward SI and drop-in tutoring for general education courses, they continued to advocate a return to scheduled weekly appointments for all of their students requesting help in any course they might be taking.
2. Adapted model:
The result of these deliberations was an adaptation of the SI model to meet local needs. The task, as defined, was to effectively service high risk students enrolled in high risk courses but, like the UMKC model, achieve a mix of that target group with traditional students, including high achievers.
So in Fall 1987, 13 sections of SI 060 were introduced as a one-unit non-baccalaureate adjunct, each section attached to a designated high risk target course. That single element successfully addressed the attendance problem. One reason categorical students hadn’t participated consistently or at all was their failure to make provision for these sessions in their schedules. The SI class was scheduled to meet either the hour immediately following or immediately preceding the target course; students enrolled for it and, thereby, made a time commitment to attend. Although administrators could control the availability of seats in SI offerings, it was critical that students get into the proper section of the target course; those in participating academic assistance programs (EOP, SSS/Trio, minority engineering and business, student athletes) whose advisors were directing them into an SI section received priority enrollment. This provision was a practical necessity, but it also actively involved program advisors in placing students into SI sections during advisement. Listing SI courses in the Schedule of Classes allows any student able to register in the correct target course to enroll simultaneously in SI if he/she desires.
Enrollment in SI courses now runs from 15 to 35 per section; the number of target courses supported each semester has grown from the original five to about 35. With semesterly fluctuations, academic assistance program students average 50-60% of enrollment.
Besides student commitment, there are some other practical benefits to this arrangement. Although non-baccalaureate, this course gives students studyload credit toward financial aid or other full-time enrollment obligations. As a course, it also generates FTE and therefore funding; SI sections are now taught by senior tutors (funded by categorical state and federal allocations) and by teaching assistants (paid through the university’s instructional funds). It is significant that the campus has taken some fiscal responsibility for offering a service originally limited to soft monies for a prescribed population.
To select target courses readily, an attempt was made to isolate some of the characteristics of a high risk course on this particular campus. Though over time many of the same sections became regular target classes (due to known high risk, the supportive posture of instructors and the preference of advisors in special assistance programs), the following elements served initially to aid target course selection:
1. large lectures, minimizing faculty-student interaction
2. courses found difficult because of their technical nature (such as the sciences) or because of the target population’s unfamiliarity with the discipline (economics, advanced math)
Final grades in target classes served as evidence that those chosen were truly high risk. In Fall 1990, for example, 13 of 34 courses analyzed (38%) had average grades for non-SI students below 2.0, and in Spring 1991, 15 of 30 courses (50%) had probationary averages for non-SI enrollees. Using UMKC’s criteria for high risk courses, these target sections definitely exceed the 30% “D”, “F”, & “W” rate.
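UMKC’s 30% “D”, “F”, and “W” criterion is simple to operationalize. The sketch below shows one way a course roster might be screened; the rosters and course names are made up for illustration, not campus data.

```python
# Illustrative screen for "high risk" courses using UMKC's criterion:
# a combined D/F/W rate above 30%. Grade rosters here are hypothetical.

def dfw_rate(grades):
    """Fraction of final grades that are D, F, or W."""
    flagged = sum(1 for g in grades if g in {"D", "F", "W"})
    return flagged / len(grades)

def is_high_risk(grades, threshold=0.30):
    """True if the course exceeds the D/F/W threshold."""
    return dfw_rate(grades) > threshold

# Hypothetical rosters for two sections
econ = ["A", "B", "C", "D", "F", "W", "W", "C", "D", "B"]  # 5/10 D/F/W
hist = ["A", "B", "B", "C", "C", "C", "A", "B", "D", "C"]  # 1/10 D/F/W

print(is_high_risk(econ))  # True: 50% D/F/W
print(is_high_risk(hist))  # False: 10% D/F/W
```

A real screen would of course also weigh the average grade (the sub-2.0 course averages cited above), not the D/F/W rate alone.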
Though the SI courses carry one unit of studyload credit, they actually meet the same number of hours as the target class and on the same days. Regularizing the time devoted to this activity to match actual course time was desirable conceptually: it got students into a pattern of reviewing and processing course content directly before or after each lecture. It was also a necessity for working effectively with numbers far greater than the traditional SI study group of four to eight.
3. Staffing:
The need to use only student leaders and no professional LAC staff has carried its own complications. It was a fiscal necessity, because fully-occupied Learning Skills Professionals were not available, and augmenting their numbers was not an option. The Learning Skills and Writing Center staff have always been available to SI instructors for classroom presentations or consultation, just as they are to any university faculty member, but university students alone have filled the ranks as SI leaders.
Student SI Leaders have been recruited through a variety of networks including the recommendations of target course faculty and the corps of outstanding tutors within the LAC. They are required to hold GPA’s of 3.0 or higher in their discipline and overall, present three letters of recommendation from faculty they’ve worked with, and support the policies and procedures of the SI program. Those selected receive extensive presemester training on study skills and classroom techniques, as well as periodic inservices during the semester and clinical supervision while on the job. Their assignment involves attending all target lectures, preparing for and conducting all scheduled SI class meetings, holding an office hour each week, meeting as scheduled with supervisor(s), reporting student attendance and performance (on target course exams), and administering mid-term (program) and final (university) faculty evaluations to their classes. They also assign grades of Credit/No Credit to SI enrollees.
As a group, these individuals are very competent in their disciplines, but their consciousness of their own or of model study skills varies. Some have a strong sense of how they study and of what students need to know in order to really master the content; their objectives are as broadly based as the SI model’s (cognitive development). Others, despite the study skills training imparted to them, would define their objective as helping students get good grades in the target course. They find it difficult to set aside their discipline priorities long enough to superimpose a study skills approach on the curriculum. Feeling pressured to review and work with every detail of the target professor’s lectures, they focus so heavily on content that students may do well in the course yet remain at the preformal stage of cognitive development despite their participation in SI. So while the involvement of student leaders serves many purposes, it places an additional supervisory demand on the program administration to assure maximum effectiveness.
4. Program evaluation:
The enrollment of the approximately 35 SI sections each semester is about 600 students, with an average of 50-60% of them coming from academic assistance programs. To some extent, this skews the distribution toward the lower end of the UMKC population group studied, where the benefit of SI often carried otherwise probationary students into a satisfactory grade range. Similarly, in some of this university’s courses, the difference between SI participation and non-participation is a clean passing grade of “C” rather than a probationary level grade:
Table 4: Spring 1991 Outcomes

Course     | Target # | SI # | Target Avg. | SI Avg.
Chem 111A  |   104    |  27  |    1.94     |  2.63
Econ 333   |    65    |  18  |    1.92     |  2.17
Hist 131   |    65    |  16  |    1.95     |  2.31
Hist 172   |   137    |  33  |    1.80     |  2.06
Math 115B  |    58    |  14  |    2.79     |  3.29
Psy 100    |   116    |  25  |    2.34     |  3.28
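Read as per-course gains, the Table 4 figures can be tabulated directly. The dictionary below is transcribed from the table; only the subtraction is added.

```python
# Spring 1991 outcomes transcribed from Table 4: (target avg., SI avg.)
table4 = {
    "Chem 111A": (1.94, 2.63),
    "Econ 333":  (1.92, 2.17),
    "Hist 131":  (1.95, 2.31),
    "Hist 172":  (1.80, 2.06),
    "Math 115B": (2.79, 3.29),
    "Psy 100":   (2.34, 3.28),
}

# Per-course SI gain = SI average minus target (non-SI) average
for course, (target, si) in table4.items():
    print(f"{course}: SI gain {si - target:+.2f}")
```

Note how the chemistry and history courses move from probationary (below 2.0) target averages to a clean passing range with SI, exactly the pattern the preceding paragraph describes.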
overall, but 2.33 in SI (+0.39). Juniors, by comparison, earned a GPA of 2.19 overall, with modest improvement over that (2.33) with the benefit of SI (+0.14). It was assumed, in selecting a great majority of 100-level general education courses to receive SI adjuncts, that student need at that level was the greatest and that introducing students to good study skills early in their enrollment had the greatest potential payoff. These data seem to support the first hypothesis. The longitudinal impact of SI on retention and performance is now under review, using a random sample of SI and target-only students from the same academic year.
Similarly, the data from 1990-91 (with program outcomes very close to those measured the previous academic year) indicate that students from academic assistance programs benefit considerably more than traditional students.
The 1990-91 overall annual SI grade impact of +0.29 is a composite of two factors: +0.22 received by traditional students in SI compared with their peers not in SI, and +0.44 earned by SI students from academic assistance programs compared with their non-participating peers (all three comparisons significant at the 0.01 level, using a z-test).
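The significance claim rests on a two-sample z-test of mean GPA differences. A minimal sketch of such a test follows; the GPA lists are invented for illustration and are not the study’s data.

```python
import math

def two_sample_z(x, y):
    """Two-sample z statistic for a difference in means.
    Uses sample variances, which is reasonable for the large n's
    reported in the program data."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical final GPAs for SI participants and non-participants
si_gpas     = [3.0, 2.7, 3.3, 2.3, 3.0, 2.7, 3.3, 3.0]
target_gpas = [2.3, 2.0, 2.7, 1.7, 2.3, 2.0, 2.7, 2.3]

z = two_sample_z(si_gpas, target_gpas)
# Two-tailed critical value at the 0.01 level is 2.576
print(abs(z) > 2.576)
```

With samples this small a t-test would normally be preferred; the z-test mirrors the analysis named in the text, which involved far larger enrollments.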
As these figures indicate, program students benefit twice as much from SI as traditional students: the improvement in their GPA is double.
Figure 1
Graphically, the performance of traditional students in SI over their target peers appears as a fairly normal distribution of grade differences (with the 0.22 mean improvement over the norm established by peers not in SI). The performance of high-risk students from academic assistance programs, however, is both shifted to a higher mean and skewed in distribution, showing that a greater proportion of these students surpassed the overall SI mean of 0.29. (See Figure 1.)
More strikingly, in 45 of 62 target classes (73%) analyzed, program students enrolled in SI did better than their non-SI peers; and in 34 of these 62 courses (55%), as a group they surpassed the GPA earned by all target class students not in SI (84% of these being traditional students). (See Figure 2 and Figure 3.) Level of difficulty in these target classes is best documented by the fact that in half of the 34 courses in which program SI students did better than any other group, their average grade in the target course was still below 2.0! It must also be noted that there were 17 target courses (of 62, or 27%) in which program SI students did not do as well as their peers who weren’t enrolled in SI. There is sufficient variation among these courses that the results cannot be attributed primarily or exclusively to student preparation, effort, or SI leadership.
Figure 2        Figure 3
In general, then, high-risk/underprepared students tend to benefit more from SI than traditional students in the immediate application of their SI instruction. A number of reasons were suggested to explain that pattern. The greater skill or academic background needs of program students may make them more reliant on SI classroom instruction for their performance. Or the style of teaching typically found in many of these classes may be more beneficial to that group than to differently prepared, traditional students.
Because the answer to this question would be a strategic planning tool for future semesters, this matter received prompt attention. The only factor found to be statistically significant in explaining the divergent results was the extent to which critical thinking and study skills were primary elements. Interestingly, in a sampling of course sections where good SI Leaders whose primary emphasis was on content were compared with counterparts in the same disciplines who subordinated content to critical thinking and study skills, both program and traditional students improved measurably in the skills classes over non-SI peers. Here, still, the program students did even better than mainstream students, but both groups benefited. In the content-bound sections, program student performance was mixed compared with their non-SI peers, while traditional students in SI generally matched or did less well than their peers not in SI. It may be that some content-emphasis SI classes are little different from a tutorial, which is less needed by traditional students but known to benefit under-prepared students. A course contributing to cognitive development, on the other hand, benefits all students and has long-term measurable and immeasurable impact.
5. Fiscal considerations:
Certainly the cost of this program, compared to other interventions, is minimal. Based on the average number of semesterly sessions attended by every enrolled student, the per student cost for a whole semester’s instruction is about $45, which on a student assistant salary scale would buy seven to eight hours of tutoring (as compared to the 45 semester hours of SI). That figure includes the training and supervision of student leaders, their 10 hr./wk. salary as either (undergraduate) student assistants or (graduate) teaching assistants, acquisition of textbooks for them, and a portion of the administrative cost for overseeing the SI component. Servicing large numbers of high risk students for that amount seems a very judicious and efficient use of resources.
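The arithmetic behind that comparison is simple. In the sketch below, the student-assistant hourly wage is an assumption back-solved from the text ($45 buying seven to eight tutoring hours implies roughly $6/hour); it is not a figure the program reports.

```python
# Back-of-envelope version of the cost comparison. The wage is a
# hypothetical figure inferred from the text, not program data.

per_student_cost = 45.00   # per student, per semester, all-in
assumed_wage = 6.00        # assumed student-assistant $/hour

tutoring_hours = per_student_cost / assumed_wage        # hours $45 buys
si_contact_hours = 45                                   # one semester of SI
si_cost_per_hour = per_student_cost / si_contact_hours  # effective rate

print(f"{tutoring_hours:.1f} tutoring hours vs. {si_contact_hours} SI hours")
print(f"effective SI cost: ${si_cost_per_hour:.2f} per contact hour")
```

Under these assumptions the same $45 buys about 7.5 hours of one-on-one tutoring but a full 45 contact hours of SI, an effective rate of about $1 per hour.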
Funding for this particular adaptation of the model currently comes from two sources: about half is categorical monies allocated from state and federal educational equity programs to the LAC for tutoring and SI; the other half is the university’s contribution in the form of the Coordinator’s position and 25-30 teaching assistantships from instructional funds.
D. Conclusions:
Both published outcome data and direct experience suggest that Supplemental Instruction is the direction for the future, not at the expense of other learning center programming, but certainly as a significant element within a campus’s planning. There are a number of reasons why this appears to be the case.
Fiscally, as state economic resources continue to shrink and the cost of educating individuals remains on the rise, institutions are forced to seek effective ways of stemming attrition. Every academic failure is costly, whether that comes in the form of repeated courses, replacement of a lost individual with a new recruit/applicant/enrollee, or simply prolongation of one student’s campus career because of academic difficulties. The cost of failure is greater than the cost of SI.
Educationally, if Blanc et al.’s assessment of cognitive development levels is accurate (more than likely, it’s conservative), SI is the only academic support model that effectively addresses the need for critical thinking and cognitive development with sufficient continuity to have substantial impact. If secondary schools are not preparing students adequately in this area, whether they see it as their proper role or not, it falls to higher education to provide resources for this purpose. Preformal cognitive levels may prove adequate for introductory college courses in areas where factual information predominates or where an instructor’s testing mode is very basic. But they will not permit success in those disciplines, courses, or classrooms where higher-level thinking skills are essential. The SI leader’s modeling of those skills, as well as his/her direct efforts to inculcate such skills and approaches in SI students, are valuable ways of addressing that need in order to produce a population of capable, successful college students.
Finally, there are responsibilities implicit in campuses’ admission policies and in their affirmative action efforts. Published data continue to show very poor retention and graduation rates for historically underrepresented student groups, except in those institutions where they are the majority. The fact that many of these students enter institutions only under special action has been cited as a justification for poor outcomes: i.e., given that they would not otherwise have been accepted at the university, the fact that 20% or 40% succeed is a positive outcome. However, it appears to be the moral obligation of institutions which seek out and enroll these students to aggressively enhance their success by making them competitive, not by lowering standards. Dropping expectations or letting them fail reinforces the false assumption (affirmed throughout their education) that it is they, not the instruction given them, who are deficient. Approaches that single out underprepared students for special help and make them dependent on that help perpetuate such stereotypes. If attrition rates remain high despite traditional interventions, the latter are either inadequate or ineffective for the majority of the students they’re designed to serve. In contrast, the SI model is working for these students; furthermore, it avoids segregating and stigmatizing them. As a program, SI offers benefits for any student, for the high achiever as well as for the struggler, giving either one cognitive tools and their refinement for long-term benefit. It replaces the message of traditional academic supports (“you’re incapable” or “you’ll fail on your own”) with a note of confident encouragement: “This program offers college students the secrets of academic success!”
References
Blanc, R. A., et al. (1983). Breaking the attrition cycle: The effects of Supplemental Instruction on undergraduate performance and attrition. Journal of Higher Education, 54, 80-90.
Caswell, R. (1991). Adjunct class sessions: Assisting underprepared students to achieve academic success. Teacher Education and Practice, 6(2), 73-74.
Knowles, M. S. (1981). Preface. In D. Boud (Ed.), Developing student autonomy in learning. London: Kogan Page.
Lundeberg, M. A. (1990). Supplemental instruction in chemistry. Journal of Research in Science Teaching, 27(2), 45-55.
Martin, D. C., & Arendale, D. R. (1990). Supplemental Instruction: Improving student performance, increasing student persistence. ERIC Document 327103, 14 pp.
Martin, D. C., & Blanc, R. (1981). The Learning Center’s role in retention: Integrating student support services with developmental instruction. Journal of Developmental and Remedial Education, 4(3), 2-4, 21-23.
Martin, D., et al. (1982). Supplemental Instruction: A model for increasing student performance and persistence. n.d.
Wolfe, R. F. (1987). The Supplemental Instruction program: Developing learning and thinking skills. Journal of Reading, 31(3), 228-232.