• YU Learning Assessment

  • Our Commitment

    Welcome to the Yeshiva University Learning Assessment website. At Yeshiva University we are committed to student learning assessment to ensure that our colleges and schools, programs/majors, and courses are successfully fulfilling their educational missions, goals, and objectives. The purpose of this website is to provide faculty and staff with assessment-related tools and resources to help guide the development and implementation of effective learning assessment plans.

    Learning Assessment Spotlight:

    Wilf Campus Writing Center 

    By Dr. Lauren Fitzgerald, Director 

    Assessing writing center effectiveness is complicated because, unlike what happens in courses, the students we tutor almost never produce work for a curriculum we’ve designed, and we seldom have access to the final versions of what we help them with. However, writing center administrators, student tutors, and student writers across the country are confident that the services we offer are effective, so the field has developed other ways to assess student learning in these contexts.

    In fall 2013, an experienced tutor and I wrote up a set of goals and student learning outcomes (SLOs) that articulated much of the learning we believed the 400 or so student writers and the 20 or so student tutors experienced each year as a result of their work in the Center. Then we mapped these SLOs onto a set of learning experiences typical in writing centers—not only individual tutoring sessions but also what writers do after their sessions as well as the tutor education program that all tutors engage in each semester they are employed.

    Building on assessment programs at other writing centers, Rachel Ebner and I devised two questionnaires—one for tutors and one for students—that mirror each other and focus on tutors’ and students’ perceptions of what students learn during their tutoring sessions. Comparison of the results of these two questionnaires in spring 2014 suggested a shared belief among tutors and students that Writing Center sessions do indeed help improve students’ writing.
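The mirrored-questionnaire comparison described above can be sketched in a few lines. The items and ratings below are invented for illustration only; the actual Writing Center instruments and data are not reproduced here.

```python
# Hypothetical sketch: compare mean student and tutor ratings on mirrored items.
# Items and 1-5 agreement ratings are invented, not the Center's actual data.
from statistics import mean

student_ratings = {"session improved my writing": [4, 5, 4, 4],
                   "I left with a plan for next steps": [3, 4, 3, 4]}
tutor_ratings   = {"session improved my writing": [4, 4, 5, 4],
                   "I left with a plan for next steps": [4, 3, 4, 3]}

# Items where the two groups' mean ratings fall within half a rubric point
# suggest a shared perception that sessions are effective.
agreement = {
    item: abs(mean(student_ratings[item]) - mean(tutor_ratings[item])) <= 0.5
    for item in student_ratings
}
```

Because the two questionnaires mirror each other item by item, each comparison is a direct pairing rather than a loose thematic match.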

    To complete the feedback loop, early in fall 2014, as part of the Wilf Writing Center’s tutor education program, I shared this comparison of the results of the two questionnaires with tutors. Tutors in turn used these data to develop a specific tutoring strategy: setting aside time at the end of each session for writers and tutors to articulate what was accomplished and to develop plans for next steps. Results from our most recent (spring 2015) survey suggest that students were even more confident than they’d been the previous year that tutoring helped them form a specific plan for what to do next with their writing, provided them with useful strategies to improve their writing, and, overall, improved their writing projects.

    Data collection has always been part and parcel of my job as Writing Center Director since, again, unlike courses, information about the numbers of students served and sessions held, for example, isn’t automatically generated by the Registrar’s Office. Now that we are operating within an assessment framework, however, tutors and I are asking much bigger questions and collecting different kinds of data to answer them. Next, we will consider whether we are meeting writers’ expectations for sessions by comparing the kinds of help writers say they want to the kinds of help tutors say they provide and what tutors themselves learn through the tutor education program and tutoring itself. 

    English Department, Stern College for Women

    By Dr. Linda Shires, Department Chair; Dr. Nora Nachumi, Assessment Coordinator (2013, 2014)

    Assessment of the English Department curriculum at Stern College for Women has been constructed to take into account the department's three tracks--Literature, Creative Writing, and Media Studies--and its responsibility for the college-wide course in expository writing, English 1100/1200 H.

    Beginning in 2011, we first articulated goals and student learning objectives essential to the major, regardless of track. In doing so, we recognized that some of these goals and objectives would not be relevant to all of our courses. We had designed the English Major's tracks so that students take a range of courses that, in combination, work to accomplish department-wide goals and learning objectives. In turn, the English Major works to accomplish many of the college-wide goals and learning objectives. At the same time, all of our Majors, regardless of track, take our foundational course, Gateway: Introduction to Critical Methods, as do English Minors. All English majors are also required to take courses that pertain to different periods in literary history and courses that provide the opportunity to focus on a particular topic in depth. Meanwhile, each track has its own required courses and list of electives relevant to that track.

    Given the variety of requirements and electives in our department, we have focused our assessment efforts on four types of courses: those that all of our majors share (e.g. Gateway: Introduction to Critical Methods), those that are required for the individual tracks and electives (e.g. Media Studies, Introduction to Creative Writing), those that may be used to meet the requirements of more than one track (e.g. Survey of British Literature III, Writing Women's Lives), and analytic writing courses: Composition and Rhetoric, required of all students in the college, and Advanced Exposition, required of all English Literature majors.

    The creation and implementation of a formal assessment process in our department has recently led to the widespread use of rubrics to assess the effectiveness of our courses and tracks. For example, faculty members have created rubrics to score assignments in their individual courses, and the department is currently finalizing a uniform rubric to use for grading papers across Composition and Rhetoric. We are also using rubrics to define and assess other modes of student performance, including class participation and group projects. In the future, we will develop assessment rubrics for Exit Projects in each track.

    We have responded individually and as a department to assessment results. Our individual instructors have made changes to the design of their courses and to the nature of their assignments. As a department, we have adjusted course requirements for certain tracks and for the major as a whole: introducing a required introductory course in Media Studies, paralleling the number of other required courses in our Media specialties, eliminating a Junior Seminar for all majors, and creating a new Senior-level course for Literature majors (Advanced Exposition). We have altered material in our Gateway course, so that we now teach reading strategies for different genres--from advertising to poetry to narrative--as well as a variety of theoretical approaches. We have also redesigned the Exit Project in Literature and in Creative Writing. Clearly, assessment is a fluid, ongoing process. We will continue to work on modes of assessment, adjusting curriculum and evaluation procedures as needed.

    Department of Mathematical Science, Yeshiva University


    Dr. Thomas Otway, Professor and Department Chair

    Assessment of the computer science curriculum was facilitated by the way in which the computer science curriculum evolved. Many years ago, Michael Breban, with the help of Arnold Lebow and other department faculty, put together a computer science program at Yeshiva College that conforms to Association for Computing Machinery (ACM) standards. In an ACM-approved curriculum, courses are designed to implement learning goals that practicing computer scientists feel are important for graduates of computer science programs. When Van Kelly joined the department in 2010, he enhanced that curriculum with courses that reflect very recent trends and concerns in industrial computing. The curriculum map that we prepared for the formal assessment turned out to be a natural way to organize the goals that had already been built into the curriculum. The careful tracking of student performance is also standard procedure for our computer science faculty. What is new is that such data are now concentrated in one source available to the entire department, rather than being scattered across directories on individual faculty members' computers, and are assessed using metrics that have been agreed upon by the computer science faculty. The national societies for many fields release standards for specialized education in that field. We found these standards to be quite useful as a general guide for assessing student learning in computer science.

    Concretely, the curriculum map for the computer science program was constructed as follows. First, the topics of the required courses were written down. We asked ourselves how each topic contributes to the education of a computer scientist. Then we compared those results to the general goals for computer science programs in the ACM guidelines that we follow. In this way, the curriculum map was "reverse-engineered" from the required course offerings. The ACM guidelines (including very recent ones which have just been released) were also reviewed, to decide whether all of the ACM guidelines were represented in the curriculum. Except for that last step, constructing the curriculum map inverted the process that originally created the current computer science major at Yeshiva College from the ACM guidelines in the 1990s.
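The coverage check at the heart of this "reverse-engineering" can be sketched as a small script. The course names and goal areas below are illustrative placeholders, not the department's actual map or the ACM's actual goal list.

```python
# Minimal sketch of a curriculum-map coverage check, assuming hypothetical
# course names and goal areas (not the department's real data).

# Each required course is mapped to the goal areas its topics support.
curriculum_map = {
    "Data Structures":       {"algorithms", "programming fundamentals"},
    "Operating Systems":     {"systems", "concurrency"},
    "Theory of Computation": {"algorithms", "theory"},
}

# Goal areas named in the (hypothetical) guidelines being checked against.
guideline_goals = {"algorithms", "programming fundamentals", "systems",
                   "concurrency", "theory", "software engineering"}

# Union of goals covered by at least one required course.
covered = set().union(*curriculum_map.values())

# Goals in the guidelines not yet represented anywhere in the curriculum.
missing = guideline_goals - covered
```

Running the check surfaces any guideline goal with no supporting required course, which is exactly the question the final review step above answers.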

    Assessment of the curriculum in mathematics is more complicated, for several reasons: there are three different tracks (the actuarial science track, the computer track, and the general track); the program is given on two campuses, partly with shared faculty, and with slight differences in curriculum; and three degrees are given, the B.A., M.A., and Ph.D. It was found, for example, that splitting the assessment of advanced courses into the three tracks produced differences among the subgroups which were statistically insignificant; for that reason, achievement of learning goals for each course in the Mathematics program at each campus is assessed for the whole class, rather than for the students in each separate track. (Assessing courses by track has the additional disadvantage that students tend not to formally declare their track until they apply for the degree.) Although we pool data among the tracks, we do not pool data across the campuses. For historical reasons, and because of differences in faculty specialization, the rubrics adopted at the two campuses are similar, but not identical. For example, computer science is integrated into the electives for the major at Beren Campus, whereas it is a separate track of the major at Wilf Campus. Also, there are options for the Advanced Calculus requirement at Beren, so those options needed to be taken into account when determining the rubrics for that campus. We also found that assessment practices for the department’s small doctoral program differed qualitatively from assessment of the undergraduate programs. The main difference is that on the doctoral level, quantifiable progress in student learning may not be evident for several semesters, due to the nature of advanced research in mathematics.
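The pooling decision described above can be illustrated with a short sketch. The track names follow the paragraph, but the rubric scores and the 0.2-point threshold are invented for illustration; the department's actual significance testing is not reproduced here.

```python
# Illustrative sketch of the pooling decision: if per-track mean rubric scores
# differ negligibly, assess the whole class together rather than by track.
# Scores and the 0.2 threshold are invented for illustration.
from statistics import mean

scores_by_track = {
    "actuarial": [3.4, 3.1, 3.6, 3.2],
    "computer":  [3.3, 3.5, 3.0, 3.4],
    "general":   [3.2, 3.6, 3.3, 3.1],
}

track_means = {t: mean(s) for t, s in scores_by_track.items()}
spread = max(track_means.values()) - min(track_means.values())

# With a spread this small, the per-track breakdown adds noise rather than
# information, so the scores are pooled into one class-level assessment.
pooled = None
if spread < 0.2:
    pooled = mean(s for scores in scores_by_track.values() for s in scores)
```

A real analysis would test the subgroup differences for statistical significance rather than apply a fixed threshold, but the logic of the decision is the same.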

    The adoption of rubrics for the Mathematics program required a certain amount of self-discipline. Some faculty doubt that the standard, traditional mathematics curriculum is the best possible for majors. The adoption of such a curriculum in a mathematics department such as ours, having a very small faculty, a large number of majors, and a truly huge number of students in our many service courses, is as much a response to the expectations of other departments, professional programs, and employers as it is an expression of our shared belief about what constitutes the best preparation in mathematics. But we realized that assessment activities have to be directed at the program that we have rather than at the program that many of us would like to have under better conditions. Revisions in the program, which are instituted continually to meet changing conditions but which are subject to the usual external constraints, are only reflected in the rubrics once the revisions have been fully incorporated into program requirements. This policy allowed us to focus on the question of whether current learning objectives are being achieved.

    Physics Department, Stern College for Women


    Dr. Anatoly I. Frenkel, Department Chair

    During the Spring 2014 semester, we designed and applied rubrics to assess eight program-level Student Learning Objectives (SLOs) for our four core programs: General Education courses, Major in Physics, Major in Physical Sciences, and Major in Pre-Engineering.

    For example, in the General Education courses, one of the objectives is for students to know the fundamental physics laws (in their most general formulations) and understand their physical implications. In addition, we want students to know how to adapt these general formulations to concrete applications. As another example, in the Major in Physics program, a key objective is for students to be able to choose relevant theories and research for solving a specific physics problem. Other student learning objectives relate to the knowledge of fundamental physics laws and concepts and their implications, numerical insights in solving problems, and analytical techniques in laboratory settings.

    To determine whether students are attaining program-level objectives in their physics courses, we collected and analyzed various sources of performance data from tests, lab reports, and student presentations using faculty-designed rubrics. More specifically, for each program-level SLO, a Department faculty member, often in consultation with other faculty, designed a rubric to analyze student work in light of the program-level objective. The final version of the rubric was approved by the entire Department. Departmental assistant Rakhi Podder performed statistical analyses of all data collected by faculty in their classes. At the faculty meeting on March 26, 2014, these data were discussed and used to identify problems that students are having with different program components, such as appropriate mathematical background for some advanced courses, and also to reveal students’ particular strengths (e.g., graphical representation of concepts).

    At the end of the Spring semester, the analysis will be complete and the Department’s faculty will meet again to discuss possible changes in the programs and, if needed, in the SLOs. A new set of SLOs will be tested during the Fall 2014 semester.

  • Contact Us

    Please contact us if there is any aspect of this website or student learning assessment that you would like to discuss.

    Rachel J. Ebner, PhD
    Director of Student Learning Assessment
    Belfer Hall 1300A; 215 Lex. Room 606
    212.960.5400, ext. 6138

Yeshiva University
500 West 185th Street
New York, NY 10033
