CHAPTER III

METHODOLOGY

Type of Research

This investigation was a field study designed to investigate the effects of training and tutoring experience on adult peer tutors’ responses to presented tutoring situations. The degree to which other factors contributed to a tutor’s selection of an appropriate course of action was also investigated. These other factors included highest degree earned, age, reasons for becoming a tutor, rewards for being a tutor, grade point average, prior coursework in the subject area(s) tutored, and prior work experience in the subject area(s) tutored.

Field studies, as described by Kerlinger (1986), are “nonexperimental scientific inquiries aimed at discovering the relations and interactions among sociological, psychological, and educational variables in real social structures…any scientific studies, large or small, that systematically pursue relations or test hypotheses, that are nonexperimental, and that are done in life situations like communities, schools, factories, organizations, and institutions…” Kerlinger (1986) also identifies some of the weaknesses of field studies. The foremost concern is that the researcher designs the research around existing groups and situations and does not manipulate any independent variables. He also cautions that the field situation presents a plethora of independent variables and variance not easily controlled, complications that would not be encountered in a laboratory setting. Lack of precision is a weakness inherent in field studies, as the effects of identified independent variables may not be recognized, or other independent variables may not be identified at all.

However, Kerlinger (1986) holds that field studies are especially valuable in educational and other settings where randomization is impractical and would lower the realism of the situation. Field studies can be used effectively to investigate differences among existing “intact” groups in realistic or near-real-life settings. In addition to realism, Kerlinger (1986) notes that other strengths of field studies include social and scientific significance, strength of variables, and a heuristic quality. Field studies investigate existing settings by identifying and analyzing the effects of independent variables on dependent variables rather than by manipulating the independent variables.

The purpose of this field study was to investigate the effects of training and of tutoring experience on adult peer tutors in post-secondary institutions. Two instruments were created by the researcher for this purpose. National and local experts in tutoring and/or training participated in evaluating tutor responses on both instruments.

Research Questions

  1. Does training affect a tutor’s ability to identify an appropriate course of action with a student?
  2. Does tutoring experience affect a tutor’s ability to identify an appropriate course of action with a student?
  3. What other factors contribute to a tutor’s ability to identify an appropriate course of action with a student?
  4. What are the relationships between the tutors’ abilities to identify an appropriate course of action and their abilities to construct an appropriate course of action?

Hypotheses

The first research question was expanded into two hypotheses which investigate differences in the tutors’ mean scores on a researcher-created instrument (the TSORA). The TSORA is a multiple-choice assessment composed of questions in six topics: a) Definition of tutoring and tutoring responsibilities, b) Active listening and paraphrasing, c) Setting goals/planning, d) Modeling problem-solving, e) Referral skills, and f) Study skills.

The first hypothesis followed the International Tutor Certification Program (ITCP) guidelines for training: a minimum of 10 hours of training is needed for certification at the first level (see Appendix A). Study participants (tutors) were assigned to training levels based on the college at which they tutored and the amount of training offered by the program director at that college. The first hypothesis investigates differences in the total mean scores of the groups based on the amount of training offered at that college.

H0.1: There are no significant differences in the total mean score on the TSORA among three groups of tutors, those with 1) no training, 2) 0-9.9 hours of training, and 3) 10 or more hours of training, based on the amount of training offered during the study.
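The chapter does not specify at this point how these group differences were tested; a one-way analysis of variance (ANOVA) is the conventional test for differences among three group means. The sketch below shows how such a test of H0.1 might look; all scores are hypothetical.

```python
# Hypothetical sketch of a test for H0.1: one-way ANOVA across the
# three training groups. All TSORA scores below are illustrative only.
from scipy import stats

no_training  = [24.1, 26.3, 22.8, 25.0, 23.5]   # group 1: no training
under_10_hrs = [26.7, 27.2, 25.9, 28.1, 26.0]   # group 2: 0-9.9 hours
ten_plus_hrs = [29.4, 28.8, 30.2, 27.9, 29.1]   # group 3: 10+ hours

# f_oneway returns the F statistic and the p-value for the null
# hypothesis that all group means are equal (i.e., H0.1).
f_stat, p_value = stats.f_oneway(no_training, under_10_hrs, ten_plus_hrs)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # reject H0.1 if p < .05
```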

The second hypothesis examines differences in the sub-test mean scores of the groups based on the amount of training offered in each of the six sub-test topics.

H0.2: There are no significant differences in any one of the six sub-test mean scores on the TSORA among three groups of tutors, those with 1) no training, 2) 0-0.9 hours of training, and 3) 1 or more hours of training, based on the amount of training offered during the study in each of the following six sub-test topics:

a) Definition of tutoring and tutoring responsibilities

b) Active listening and paraphrasing

c) Setting goals/planning

d) Modeling problem-solving

e) Referral skills

f) Study skills

The second research question was also expanded into two hypotheses. The third and fourth hypotheses investigate differences in the tutors’ mean scores on the researcher-created instrument (TSORA) based on experience acquired during the semester of the study.

The International Tutor Certification Program guidelines require a minimum of 25 hours of experience, in addition to the training requirement, to qualify for the first level of certification. However, in a discussion with the Learning Assistance Center Consortium (LACC) of the Maricopa Community Colleges, the group believed that very few, if any, tutors would have acquired fewer than 25 hours after one semester of tutoring. One semester lasts 16 weeks, and even at an average of two hours per week, most tutors would acquire more than 25 hours; many tutors spend 5-18 hours per week tutoring. Therefore, the three groups of experience investigated were:

1) 0-99.9 hours of tutoring, 2) 100-199.9 hours of tutoring, and 3) 200+ hours of tutoring.
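As an illustration of this grouping, a tutor’s semester hours could be binned as in the sketch below; the function is illustrative, not part of the study’s procedures, though the cutoffs restate the groups above.

```python
def experience_group(hours: float) -> int:
    """Assign a tutor to an experience group from semester tutoring hours.

    Cutoffs restate the study's groups: 1) 0-99.9 hours,
    2) 100-199.9 hours, 3) 200 or more hours.
    """
    if hours < 100:
        return 1
    if hours < 200:
        return 2
    return 3

# A 16-week semester at 8 hours per week yields 128 hours -> group 2
print(experience_group(16 * 8))
```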

The third and fourth hypotheses parallel the first two but investigate differences based on tutoring experience rather than on training. The third hypothesis investigates group differences in the total mean scores on the TSORA, based on the amount of experience received during the semester of the study.

H0.3: There are no significant differences in the total mean score on the TSORA among three groups of tutors, those who, during the study, acquired 1) 0-99.9 hours of tutoring experience, 2) 100-199.9 hours of tutoring experience, and 3) 200 or more hours of tutoring experience.

The fourth hypothesis examines group differences in each of the six sub-test mean scores based on the amount of tutoring experience acquired during the semester of the study.

H0.4: There are no significant differences in any one of the six sub-test mean scores on the TSORA among three groups of tutors, those who, during the study, acquired 1) 0-99.9 hours of tutoring experience, 2) 100-199.9 hours of tutoring experience, and 3) 200 or more hours of tutoring experience, for the following six sub-test topics:

a) Definition of tutoring and tutoring responsibilities

b) Active listening and paraphrasing

c) Setting goals/planning

d) Modeling problem-solving

e) Referral skills

f) Study skills

The fifth hypothesis was created in response to the third research question and identifies factors which are believed to contribute to a tutor’s ability to identify an appropriate course of action with a student. The literature suggests that class standing may be a factor; findings indicated that students prefer tutors closer to their own class standing (Maxwell, 1990a). Other factors were identified by the researcher as potential independent variables which could affect the results of the field study if they were not considered. The more obvious factors were the age of the tutor and the tutor’s grade point average. Two other factors considered were any prior coursework or work experience that the tutors had acquired in the area related to the subject or skills they were tutoring. Finally, two last factors were taken into consideration: 1) motivations or reasons for becoming tutors and 2) the tutors’ perceived rewards of tutoring, which might motivate them to continue tutoring. Tutors desiring to help others, or feeling that they are a help to others, might have a different level of motivation than tutors whose motivation is the money or the flexible hours. These differences in motivation might affect the total score on the researcher-created instrument, the TSORA. Thus, the fifth hypothesis expands the third research question to investigate the effects of the above-mentioned factors:

H0.5: None of the following factors contribute to a higher total mean score on the TSORA:

a) Age

b) Highest degree earned

c) Reasons for becoming a tutor

d) Perceived rewards of being a tutor

e) Grade point average

f) Prior coursework completed in subject area tutored

g) Prior work experience related to subject area tutored
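The chapter does not name the analysis used to test H0.5. One common approach to screening such factors is a multiple regression of the total TSORA score on the candidate predictors; a minimal sketch under that assumption follows, with hypothetical column names and data.

```python
# Hypothetical sketch: screening the H0.5 factors with a multiple
# regression of total TSORA score on the factors. The data frame,
# column names, and values are illustrative, not the study's data.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "tsora_total":   [27.5, 30.1, 25.8, 31.2, 28.4, 29.7, 26.3, 32.0],
    "age":           [19, 34, 22, 41, 27, 30, 21, 38],
    "degree_years":  [12, 16, 14, 18, 14, 16, 12, 18],  # degree in years of education
    "gpa":           [3.2, 3.8, 2.9, 3.6, 3.1, 3.5, 3.0, 3.9],
    "prior_courses": [2, 5, 1, 6, 3, 4, 2, 7],
})

# Regress the TSORA total on the candidate factors; the t-test on
# each coefficient addresses that factor's contribution under H0.5.
X = sm.add_constant(df.drop(columns="tsora_total"))
model = sm.OLS(df["tsora_total"], X).fit()
print(model.summary())
```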

Research question four was exploratory; thus, a hypothesis was inappropriate. The researcher will explore and report on potential relationships between the tutors’ abilities to select an appropriate course of action from presented choices and their abilities to construct an appropriate course of action.

Description of Study Variables

Dependent Variables

The mean score on the total TSORA (Tutor Situational Objective Response Assessment, the researcher-created instrument; see Appendix D) or on each of its six sub-tests.

Independent Variables

Tutor training
Ordinal variable; levels of treatment assigned based on the amount of tutor training offered during the study at the tutor’s college
Tutoring experience
Ordinal variable; levels of treatment assigned based on the amount of tutoring experience acquired during the study
Highest degree earned
Ratio variable; categorized by type and level of degree and translated into years of education
Age
Continuous variable (no values under 16 expected)
Reasons for becoming a tutor
Continuous variables; range 0-100
Perceived rewards of being a tutor
Continuous variables; range 0-100
Grade point average
Continuous variable; range 0-4
Prior coursework completed in subject area tutored
Continuous variable
Prior work experience related to subject area tutored
Continuous variable
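As an illustration of the degree-to-years translation mentioned above, a coding scheme might look like the sketch below; the specific year values are assumptions, since the study’s actual mapping is not given in this chapter.

```python
# Hypothetical translation of highest degree earned into years of
# education; the study's actual mapping is not given in this chapter.
DEGREE_YEARS = {
    "high school diploma/GED": 12,
    "associate degree": 14,
    "bachelor's degree": 16,
    "master's degree": 18,
    "doctorate": 20,
}

print(DEGREE_YEARS["associate degree"])  # 14
```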

Population and Sample

The population consisted of adult peer tutors working in post-secondary institutions. The sample for this study was composed of adult peer tutors working for the Maricopa Community College District (MCCD). MCCD consists of ten community colleges and one vocational training center located in Phoenix, Arizona. The study focused on the ten community colleges. The sample was chosen to lessen differences that might exist among various educational systems and levels (e.g., between different community college districts or between community colleges and other post-secondary institutions). Tutor participation for this study was coordinated through the tutoring supervisor at each site, also referred to in this study as the program director.

Setting

All adult peer tutors working at any of the ten campuses in the MCCD were to be invited to participate in this study. The program director at each campus agreed to be the contact through which the researcher would coordinate efforts at that campus. This study does not provide any interventions; instead, it investigates existing differences between groups of tutors working in one community college district. Levels for training were based on requirements adopted for MCCD by the district’s Learning Assistance Center Consortium. These requirements follow guidelines (see Appendix A) established by the College Reading & Learning Association’s (CRLA) International Tutor Certification Program (ITCP) for the first level of certification, and include 10 or more hours of tutor training. These guidelines have been approved as minimum training standards for tutors to be certified in MCCD. Competencies for a district-wide tutor training course are currently being developed by the Learning Assistance Center Consortium (LACC, a district-wide group of program directors and representatives).

The competencies will be proposed for district-wide acceptance in the Fall 1994 semester. These proposed course competencies will also meet CRLA’s International Tutor Certification Program guidelines. Treatment levels for experience were established based on expected levels of experience as discussed with MCCD’s LACC.

Instrumentation

Two instruments were created by the researcher for this study. Both instruments, the TSORA and the TSFRA, are described below. The Tutor Situational Objective Response Assessment (TSORA) is a researcher-created instrument (see Appendix D) developed to score tutor responses to presented tutoring situations. A draft form of the TSORA was tested by novices, trained tutors, and tutoring experts to validate the situations and response choices before it was presented to experts or study participants.

Each of the questions on the TSORA falls into one of the following six sub-test categories (see Table 2). All six topics are supported by the literature and are included within CRLA’s International Tutor Certification Program guidelines (see Appendix A).

 

Table 2
Sub-test Categories and Question Numbers on the TSORA

Sub-test Category                                        Pre-Test      Post-Test
                                                         Questions     Questions
1) Definition of tutoring and tutor responsibilities     5, 8, 10      4, 14, 17
2) Active listening and paraphrasing                     4, 13, 16     1, 10, 16
3) Setting goals/planning                                6, 15, 18     3, 12, 18
4) Modeling problem-solving                              2, 9, 12      6, 8, 15
5) Referral skills                                       3, 11, 17     5, 9, 11
6) Study skills                                          1, 7, 14      2, 7, 13

The TSORA (see Appendix D) was developed as a multiple-choice test which presents three questions pertaining to each of six specific tutoring situations. Each of the eighteen questions (six situations with three questions each) offered five actions and asked respondents to select the “most appropriate” response choice.

A second instrument was created to supplement the TSORA. The TSORA provided tutors with a question regarding a presented situation; tutors were then asked only to select the “most appropriate” response from multiple choices. The concern was raised that tutors might be able to “identify” an appropriate action but not be able to “construct” one. Since this investigation was a field study, opportunities were available both to test hypotheses and to explore existing relationships that might be developed into potential hypotheses to be tested in later studies (Kerlinger, 1986). The second instrument, the Tutor Situational Free Response Assessment (TSFRA, see Appendix C), was designed to elicit tutor reactions to a presented tutoring situation by having tutors construct their own responses describing actions that are “most appropriate” and “most inappropriate.” These responses could be investigated for relationships; if relationships were found, they could be developed into hypotheses for later studies.

One of the cautions identified in the summary of Chapter II was that the researcher be careful in assessing correct responses. The recommendation was that the scoring be free from the researcher’s own biases. To comply with this recommendation, local and national experts, in addition to the researcher, were invited to participate in ranking the responses in this study (see Appendix F: Expert Situational Reaction Packet). The Expert Situational Reaction Packet contained both instruments, the TSORA and the TSFRA. Twenty-one experts, including the researcher, responded. Table 3 lists the name, title, and institution of each of the experts participating in this study. Additional information on each expert’s background, training, and experience can be found in Appendix G.

All twenty-one experts participated in ranking responses on the TSORA. The experts’ rankings formed the basis for scoring the participants’ responses. Experts were to rank each of the 90 response choices on the TSORA (18 questions with five choices each) with a value of “0, 1, or 2” (see Expert Situational Reaction Packet, Appendix F). The experts were asked to use the following guidelines for ranking each response.

Each question has five responses (a-e):

“2” identifies the most appropriate response choice (one per question)

“1”s are in the middle as typical responses that are neither the most appropriate nor the most inappropriate (two per question)

“0”s are the most inappropriate responses (two per question)

The experts’ ranked values were collected and averaged to establish an expert mean value between 0.00 and 2.00 that was assigned to each question response. This mean value became the score the participant would receive for that response.

Some of the experts chose only one response for each value, while others assigned each response the value they felt it deserved, independent of how many times they had assigned that same value to other responses on that question. A few experts chose to leave some responses blank, and one expert assigned two values to a few responses; in both cases the rankings were treated as missing.
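A minimal sketch of this scoring computation follows. The rankings shown are hypothetical (the actual values appear in Appendix I); blanks and double-valued rankings are excluded from the mean, as described above.

```python
# Sketch of the expert-mean scoring described above, with hypothetical
# rankings for one response choice. None marks a blank or double-valued
# ranking, both of which were treated as missing.
rankings = [2, 2, 2, 1, 2, 2, 2, 1, 2, None]

valid = [r for r in rankings if r is not None]
response_value = sum(valid) / len(valid)  # mean over experts who responded

# This mean (between 0.00 and 2.00) is the score a participant receives
# for selecting this response choice.
print(f"{response_value:.2f}")  # 1.78 for the values above
```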

 

Table 3
List of Experts Participating in Study
———————————————————————————–

Boylan, Hunter
Director, National Center for Developmental Education
Appalachian State University, NC
Carpenter, Kathy
Director of Tutoring Program
University of Nebraska at Kearney, NE
Christ, Frank
Retired Director, Learning Assistance Center-Cal. State, Long Beach
Coordinator, Winter Institute for Learning Assistance Professionals, AZ
(formerly Summer Institute for Learning Assistance Professionals, CA)
Fendley, Clara
Writing Center Coordinator & English Faculty
Scottsdale Community College, MCCD, AZ
Field, Betty
Mathematics instructor
Maricopa Center for Learning & Instruction, MCCD, AZ
Gerkin, David
Interim Director, Learning Assistance Center
Learning Technician, Learning Assistance Center
Paradise Valley Community College, MCCD, AZ
Gier, Tom
President, College Reading and Learning Association
Former Coordinator of International Tutor Certification Program Committee
University of Alaska-Anchorage
Hancock, Karan
Coordinator of International Tutor Certification Program Committee
Affiliate professor, English department
University of Alaska-Anchorage
Hartman, Hope
Director, City of New York Tutoring & Training Cooperative Program
City College of City University of New York
Kerstiens, Gene
Adult Learning Specialist
Andragogy Associates, CA
Kubasch, Cheryl
Executive Assistant in charge of Employee Development & Total Quality Management Training and Development
Paradise Valley Community College, MCCD, AZ
Lara, Ernie
Former Learning Assistance Center Director, Glendale Community College, MCCD, AZ
Estrella Mountain Community College Center, MCCD, AZ
Maxwell, Martha
Founder of Learning Services at Berkeley, the American University,
and the University of Maryland, Retired.
McGrath, Jane
Reading/English faculty (23 years)
Former Director of Learning Assistance Center, SMCC
Paradise Valley Community College, MCCD, AZ
Mosher, Donna
Counselor & Learning Assistance Center Counseling Faculty Liaison
Paradise Valley Community College, MCCD, AZ
Olsen, Marie
Lead Teacher/Tutoring Coordinator
Maricopa Skills Center, MCCD, AZ
Rings, Sally
Reading/English Faculty
Learning Assistance Center Faculty Liaison Coordinator
Paradise Valley Community College, MCCD, AZ
Rolinger, Jack
Director, Learning Center/Special Services
Phoenix College, MCCD, AZ
Sheets, Rick
Director, Learning Assistance Center
Paradise Valley Community College, MCCD, AZ
Stern, Craig
Program Coordinator, Learning Assistance Center
Northern Arizona University, AZ
Zeka, Yvonne
Director, Learning Center
GateWay Community College, MCCD, AZ

The frequencies of the values chosen by the experts for each response choice are shown in Appendix H. Each page in the appendix displays the descriptive statistics for all five response choices for one question. The frequency, percent, valid percent (percent of those experts who did not leave the choice blank), and a cumulative total of the valid percent are listed. The mean, standard deviation (SD), minimum value selected, maximum value selected, and the number of experts who assigned a value to that choice (N=) are also listed. Finally, the frequency of the values given for each response is graphically illustrated.

On each of the 18 questions, at least 17 (usually 19 or 20) of the 21 experts agreed on the “most appropriate” response by giving it a value of “2.” There was less agreement among the experts as to what constituted a “1” or “0” value; several of the response choices were evenly split between these two values (see Appendix H). Further investigation into the reasons for these differences could have been conducted, as most of the experts listed reasons for their responses on most questions. However, the concern for this instrument was agreement on the “most appropriate” response, because tutors are asked to select only the “most appropriate” of the multiple choices presented on the TSORA.

Table 4 presents the number of experts who agreed on the “most appropriate” response for each question and the percentage of agreement among the experts responding to that choice. The lowest percentage of agreement on any one question was 81% with most questions having a percentage of agreement of over 90%. The average percentage of agreement among the experts on the “most appropriate” responses on the instrument was 93.6% (see Table 4).

 

Table 4
Agreement Among the Experts on Most Appropriate Response on TSORA

Question     Experts Selecting     Number of Experts     Percent of
Number       This Answer           Who Responded         Agreement
1                20                    20                 100.0%
2                17                    21                  81.0%
3                19                    21                  90.5%
4                20                    21                  95.2%
5                18                    21                  85.7%
6                20                    20                 100.0%
7                19                    21                  90.5%
8                19                    21                  90.5%
9                20                    21                  95.2%
10               19                    21                  90.5%
11               20                    21                  95.2%
12               18                    21                  85.7%
13               20                    21                  95.2%
14               20                    21                  95.2%
15               20                    21                  95.2%
16               21                    21                 100.0%
17               21                    21                 100.0%
18               21                    21                 100.0%

Total           352                   376                  93.6%
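The agreement figures in Table 4 reduce to simple ratios; the sketch below recomputes them from the table’s counts (the only data assumed are the counts shown in Table 4).

```python
# Recomputing Table 4: per-question and overall expert agreement on
# the "most appropriate" response. Counts are taken from the table.
selecting = [20, 17, 19, 20, 18, 20, 19, 19, 20,
             19, 20, 18, 20, 20, 20, 21, 21, 21]
responded = [20, 21, 21, 21, 21, 20, 21, 21, 21,
             21, 21, 21, 21, 21, 21, 21, 21, 21]

for q, (s, n) in enumerate(zip(selecting, responded), start=1):
    print(f"Question {q}: {s}/{n} = {100 * s / n:.1f}%")

total_s, total_n = sum(selecting), sum(responded)
print(f"Total: {total_s}/{total_n} = {100 * total_s / total_n:.1f}%")  # 93.6%
```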

 

On the basis of the values assigned by the experts to each of the five choices on each question, a mean was calculated. This mean (between 0.00 and 2.00) was assigned as the score participants who selected that response choice would receive. Table 5 lists the value assigned for each response choice.

The same TSORA questions were given on the pre-test (see Appendix D) as on the post-test (see Appendix E). The order of the questions and response choices was changed on the post-test to reduce testing effects in using a repeated-measures instrument. Table 5 lists the question number and letter for each pre-test response choice and the corresponding response choice on the post-test. It also lists the Response Value, which is the mean score of the experts’ rankings.
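Given Table 5’s structure, participant scoring amounts to looking up the expert mean value of each selected choice and summing. A minimal sketch follows; the values shown are hypothetical placeholders, since Table 5 itself is not reproduced in this excerpt.

```python
# Sketch of participant scoring using Table 5's lookup structure.
# Keys are (question number, choice letter); values are the experts'
# mean rankings. All values here are hypothetical placeholders.
pretest_values = {
    (1, "a"): 0.10, (1, "b"): 1.95, (1, "c"): 1.00,
    (1, "d"): 0.25, (1, "e"): 0.90,
    # ... entries for the remaining 17 questions
}

def score_participant(answers: dict, values: dict) -> float:
    """Sum the expert mean value of each response choice selected."""
    return sum(values[(q, choice)] for q, choice in answers.items())

# A participant who chose "b" on question 1 earns that choice's mean value
print(score_participant({1: "b"}, pretest_values))  # 1.95
```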

Of the 21 experts participating, only 14 ranked all responses with a single value. Some experts chose to leave some response choices blank, while others gave more than one value as a response; in both situations, the response choice was counted as missing. The actual values assigned by each of the experts are listed in Appendix I.

In Table 5, note that the lowest mean score for the “most appropriate” response on any question was 1.81 (out of a possible 2.00), and it occurred only once. Five of the 18 questions had a mean of 2.00, the highest score possible (all experts who ranked the response agreed that it was the “most appropriate”). The highest score possible from a total of the experts’ mean scores is 34.83 out of a possible 36 (18 questions at “2” each). As shown in Table 4, the experts agreed on the “most appropriate” responses 93.6% of the time.
