CHAMPAIGN, Ill. - While social media such as Facebook and Twitter have transformed the way people communicate, educational practices haven't kept pace, relying on outdated, limited tools such as standardized tests that don't reflect the profound changes precipitated by the Web. An interdisciplinary team of experts at the University of Illinois is developing software that they believe will transform the practice of writing assessment - and potentially eliminate cumbersome proficiency testing such as that mandated by state and federal agencies as a result of the No Child Left Behind Act.
The software, called "u-author" - the "u" for "ubiquitous," available anywhere, anytime on any Web-enabled device - embeds the practice of writing in a social media environment that promotes complex learning and interaction among peers.
The software provides a writing space in which students' compositions become portals for evaluating their progress in language arts and science. State standards and teachers' assessment rubrics can be integrated into it, furnishing immediate data on users' learning and eliminating the need for end-of-program exams and proficiency tests, according to Bill Cope, a professor of education policy, organization and leadership in the College of Education at Illinois who is leading the project team.
The U.S. Department of Education is funding the project with four grants totaling $4 million. Two of the grants were awarded to the College of Education and two were awarded to Common Ground Publishing.
Common Ground is Cope's startup company, which is building some components of the software and subcontracting others to a team in the college.
The project sprang from a conversation involving team members Mary Kalantzis, dean of the College of Education; Marc Snir, a faculty member in computer science and co-principal investigator on the Blue Waters petascale computing project at Illinois; and Cope. Cope said that he and Kalantzis, who are literacy experts as well as husband and wife, moved to the U.S. from Australia and were troubled to discover the predominance of standardized testing in American classrooms and the consequence of mandated student achievement benchmarks.
"We thought, 'This is disastrous!'" Cope said. "The whole literacy curriculum is driven by tests - and the same thing is replicated in mathematics and science. Testing can only assess limited things, particularly if it's to be mechanized, computerized and cheap - then it drives the whole curriculum. The only way to do anything about it is to do testing better."
In two articles accepted for publication in Computers and Composition, the team wrote that the social Web has profoundly changed writing and learning, and assessment practices need to keep pace. Educators are clinging to an outdated view of writing as a solitary, text-based practice, although the Internet has transformed it into a socially situated activity enabling writers to express meaning in video as well as pictures and text.
Also outdated is the conventional view of learning and assessment, which presumes that "there is an exteriorized body of reified knowledge, the facts and logics of which can be transferred to memory" for an item-based test. In actuality, knowledge is distributed and produced by social interaction. Valid assessment would measure how well students figure out how to access and deploy distributed knowledge resources, the team wrote.
A well-written essay about a lab experiment or a science fair project can assess students' depth of knowledge about scientific concepts and reasoning - as well as their mastery of language arts - more accurately and efficiently than an item-based test, Cope said.
Human-graded writing assessment tests are expensive to conduct, however, and reading assessments are often used as substitutes.
Although media such as wikis and Google apps are commonly used in the classroom, they weren't designed for educational purposes and lack the assessment infrastructure needed to provide reliable and valid assessment of students' learning and writing, Cope said.
New technologies are being used largely to reinforce old practices, neglecting the potential that emerging technologies offer for promoting a broader vision of writing as a learning and assessment tool, the team wrote.
"There is an urgent need to create dedicated educational applications for these Web writing environments which integrally incorporate formative and summative assessment," the team wrote. "Indeed there is enormous potential to develop new modes of assessment in which student activity is continuously assessed without disrupting time spent for learning in the classroom."
In reviewing the various computer-based writing assessment programs now available, the team found that they tend to reinforce conformity to standardized writing practices, marginalizing personal creativity and the social aspects of writing.
"We learned a lot by looking at some of the software, especially that which uses natural language processing to assess a text," said team member Colleen Vojak, who is the program coordinator and an adjunct professor in education. "These programs purport to actually analyze ideas and the way an essay is constructed, but we haven't been very impressed with what's out there right now."
The programs' grading criteria can be dubious as well, the team concluded, after one program gave failing marks to the Gettysburg Address, denouncing it as "too wordy" and repetitious.
The u-author learning environment will capitalize on social connectivity, providing seven writing assessment tools and an array of computer-generated and user-generated feedback based upon assessment rubrics that teachers design for it, Cope said.
"We're trying to build in as many different evaluative perspectives around the work that the students are doing as possible," Cope said. "And each of those is not just an evaluative perspective, it's a different technology."
Users hone their critical thinking skills by participating as authors, collaborators and reviewers. They also can respond to and rate the feedback they receive from their peers. The student compositions and author-approved feedback will be archived so that students can see how well their work comports with other students' writing on similar topics and view suggestions that have benefited other writers.
Instructors will be able to create computer-administered quizzes and surveys to check students' content knowledge and gather data on aspects of learning such as attitudes, perceptions and metacognition. The system will amass students' scores and data on long-range learning trends to provide a complex assessment of learning progress for individual students and the class as a whole without the use of standardized tests.
Students at three area schools are testing the software and providing feedback as it's built. The team has completed about three of the tools and by this summer hopes to have them embedded in a social networking environment that will allow students to create their profile pages and post their compositions for feedback or private use, Vojak said.
Other university-based team members working on the project: Hua-Hua Chang, an education professor and psychometric expert; Jennifer Greene and Joe Robinson, education professors and evaluation experts; Sarah McCarthey, a professor of curriculum and instruction; Dan Roth, a computer science professor and natural language processing expert; and Duane Searsmith, a senior software developer.
Common Ground has a team of developers and educational implementation specialists working on the project, which also employs five graduate assistants.