2.1 Assessment System and Unit Evaluation
How does the unit use its assessment system to improve candidate performance, program quality, and unit operations?
2a. Unit Assessment System
The purpose of the Assessment System is to articulate processes for collecting and reviewing data and information about candidate performance, to monitor and guide candidates, and to improve the effectiveness of programs and Unit operations. The Unit regularly collects and analyzes data on applicant qualifications, candidate performance, and operations; however, because of a void in leadership over the last four years, the analysis has not been consistent across the Unit. This situation has improved greatly in the past two years with the appointments of a new Provost, three new Deans, and a new NCATE Coordinator.
The Assessment System was developed collaboratively with faculty, supervisors, cooperating teachers, and members of our partner districts. Common key assessments were agreed upon, and program-specific assessments were developed. The Assessment System is also used to manage and improve programs: data are used to continuously evaluate and improve Unit operations and programs, as well as to hold candidates accountable for demonstrating a high level of competence in content knowledge, pedagogical skills, and professional dispositions.
How the Assessment System Reflects the Conceptual Framework
The Conceptual Framework provides the foundation for the Assessment System through its alignment with the Unit and University missions, and through goals and outcomes that promote the knowledge base, professional commitments, and dispositions. The Unit mission provides clear direction for the instruction and assessment of candidates for teaching and other school personnel roles in content knowledge, pedagogical content knowledge, professional skills, and dispositions.
Objectives of the Assessment System
- To provide an assessment system that has been developed by the professional community and is open to revision in order to meet the growing demands of the schools we serve;
- To monitor candidates’ progress from admission through exit and into their first year of induction;
- To systematically collect performance data that are aligned to Unit, state, and professional standards;
- To analyze data at specific transition points or phases within the candidates’ programs;
- To use performance data to make decisions about individual candidates and to improve their performance;
- To use data on a regular basis for improving instruction, courses, programs, and the Unit.
In reimplementing a systematic approach, the updated features of the Assessment System include the following:
- Collaboration
There is ongoing collaboration among university professional education faculty across programs and schools, clinical faculty, alumni, school-based practitioners, and other K-12 stakeholders in the cycle of implementation, evaluation, redesign, and refinement of the Assessment System.
- Assessment Management System (Tk20)
A recent evaluation of the electronic assessment management system uncovered inconsistencies in data collection, storage, and analysis. As a result, Tk20 representatives were asked to complete a comprehensive review of the current assessment system and to revise it to make it more versatile, accessible, and user-friendly. In spring 2013, an Assessment & Accreditation Committee consisting of Unit faculty and administrators was formed to oversee the management of the system.
- Rubric Review
Established scoring guides, or rubrics, are used to determine levels of candidate performance and completion of their programs. A review of the key assessment rubrics, completed during the writing of the SPA reports in Summer 2012, along with feedback from the National Recognition Reports, indicated that revisions were needed in several assessments. These revisions are ongoing as new state regulations are pending. Issues of fairness, consistency, accuracy, and avoidance of bias in the development of rubrics were identified and addressed.
- Transition Points
Candidates’ knowledge, skills, and dispositions are evaluated at multiple transition points in their programs, using multiple measures for formative and summative purposes. Where transition points were not indicated, were unclear, or were not monitored, action was taken to rectify the issues.
- Assessments
Multiple assessments are aligned with CCCT 2010 competencies and standards and are administered in a variety of forms. Advanced programs are aligned with appropriate national standards. Faculty in initial and advanced programs support the assessment system by monitoring candidates from admission to exit and play an active, ongoing role in evaluating candidates’ performance, professional commitments, and dispositions.
2b. Data collection, analysis, and evaluation
The Unit has numerous processes in place to ensure that data are regularly collected, compiled, aggregated, summarized, analyzed, and used for continuous improvement. Programmatic assessment is managed through Tk20, a comprehensive assessment management system: faculty, program coordinators, university supervisors, cooperating teachers, and candidates submit key assessment data directly into Tk20. Personnel from Tk20 worked directly with members of the newly formed Assessment and Accreditation Committee (Spring 2013) to redesign the Tk20 assessment matrix with corresponding rubric alignments and to ensure that Tk20 data collection and analysis are maintained consistently across the Unit. A workshop, billed as a Tk20 system reboot, will roll out in Fall 2013. A consultant was also hired to review rubrics for their psychometric qualities.
Faculty are responsible for summarizing data at each transition point for their assigned advisees; when an issue arises, the faculty advisor notifies the program coordinator. At the end of each semester, the program coordinator reviews candidates’ performance. Faculty also help candidates submit key assessments to Tk20 and evaluate those assessments using the Unit or program rubric.
The Educational Review Committee institutionalized the review of program and unit data in 2012. Reviews of data and feedback from the Specialized Professional Association (SPA) reports generated evidence to support course and programmatic changes. For example, based on data analysis, an Economics course must be added to the Interdisciplinary History/Social Studies endorsement program. A consistent assessment of unit design across all disciplines is being developed for Assessment #3. A new dispositions instrument for initial programs has been created and will be implemented in Fall 2013.
The Appeals System
When a candidate does not agree with a program faculty decision, he or she has the right to file an appeal with the Chairperson of the Education Department. The Academic Review Committee is then convened to hear the case. The Chairperson consults with the Assessment Committee and reconsiders how the candidate has met or failed to meet each criterion. The Department Chair notifies the candidate of the Academic Review Committee’s decision. The Office of the Dean of Professional Studies maintains written records of all formal candidate complaints and appeals.
Procedures for these and other types of appeals, such as the Proficiency Appeals Process (2.4.e.3) and the Academic Honesty Policy (2.4.e.4), are located in the University catalogs (I.5.a.2.1; I.5.a.2.2) and in the Student Teaching Handbook (I.5.a.4). Appeal procedures related to academic dishonesty and final course grades, as well as complaints or appeals related to unethical behavior, affirmative action (4.4.g.1), and sexual harassment, are listed in the WCSU Student Handbook (I.5.a.2.3) and the WCSU Undergraduate and Graduate Catalogs (I.5.a.2.1; I.5.a.2.2).
2c. Use of data for program improvement
Programs have consistently used internal and external feedback for ongoing reflection on their work. Reviews of candidate performance assessments are completed relative to program standards and the Conceptual Framework. Section V of the program reports in AIMS provides analyses of assessment data and self-reported recommendations for improvement.
Based on these analyses, changes have been made in course sequences, course assignments, and assessments, and these changes have been formalized and standardized across all programs as of Fall 2012. For example, content competencies were added to our Student Teacher Evaluation Instrument in 2009. The Teacher Work Sample was revised in Fall 2012 to align with the Teacher Performance Assessment (edTPA). We also recognized the need for a common dispositions instrument to be used at the beginning, middle, and end of each program.
Graduate and Employer surveys are used to bring about further program and Unit improvements. Annual reviews and assessments help program chairs and program coordinators make program improvements on a continuous basis. Program data retreats, now held biannually, identify trends and areas for improvement to guide curricular and assessment revisions in specific programs. Using multiple assessments to triangulate data on candidate knowledge, skills, and dispositions allows us to examine results from multiple perspectives for continuous improvement.

