Tuesday, October 18, 2016

1. TESTING, ASSESSING, TEACHING




A.    WHAT IS A TEST?
A test, in simple terms, is a method of measuring a person's ability, knowledge, or performance in a given domain. Let's look at the components of this definition. A test is first a method. It is an instrument (a set of techniques, procedures, or items) that requires performance on the part of the test-taker. To qualify as a test, the method must be explicit and structured: multiple-choice questions with prescribed correct answers; a writing prompt with a scoring rubric; an oral interview based on a question script and a checklist of expected responses to be filled in by the administrator.
Second, a test measures. Some tests measure general ability, while others focus on very specific competencies or objectives. A multi-skill proficiency test determines a general ability level; a quiz on recognizing correct use of definite articles measures specific knowledge. The way the results of the measurement are communicated may vary. Some tests, such as a classroom-based short-answer essay test, may earn the test-taker a letter grade accompanied by the instructor's marginal comments.
Next, a test measures an individual's ability, knowledge, or performance. Testers need to understand who the test-takers are. What is their previous experience and background? Is the test appropriately matched to their abilities? How should test-takers interpret their scores?
A test measures performance, but the results imply the test-taker's ability or, to use a concept common in the field of linguistics, competence. Most language tests measure one's ability to perform language, that is, to speak, write, read, or listen to a subset of language. On the other hand, it is not uncommon to find tests designed to tap knowledge about language: defining a vocabulary item, reciting a grammatical rule, or identifying a rhetorical feature in written discourse. Performance-based tests sample the test-taker's actual use of language, but from those samples the test administrator infers general competence. A test of reading comprehension, for example, may consist of several short reading passages, each followed by comprehension questions: a small sample of reading behavior. But from the results of that test, the examiner may infer a certain level of general reading ability. Finally, a test measures a given domain. In the case of a proficiency test, even though the actual performance on the test involves only a sampling of skills, that domain is overall proficiency in a language: general competence in all skills of a language.

B.     ASSESSMENT AND TEACHING
Assessment is a popular and sometimes misunderstood term in current educational practice. You might be tempted to think of testing and assessing as synonymous terms, but they are not. Tests are prepared administrative procedures that occur at identifiable times in a curriculum when learners muster all their faculties to offer peak performance, knowing that their responses are being measured and evaluated.
Tests, then, are a subset of assessment; they are certainly not the only form of assessment that a teacher can make. Tests can be useful devices, but they are only one among many procedures and tasks that teachers can ultimately use to assess students.
But now, you might be thinking, if you make an assessment every time you teach something in the classroom, does all teaching involve assessment? Are teachers constantly assessing students, with no interaction that is assessment-free?
The answer depends on your perspective. For optimal learning to take place, students in the classroom must have the freedom to experiment, to try out their own hypotheses about language without feeling that their overall competence is being judged. Teaching sets up the practice games of language learning: the opportunities for learners to listen, think, take risks, set goals, and process feedback from the "coach" and then recycle through the skills that they are trying to master. (A diagram of the relationship among testing, teaching, and assessment is found in Figure 1.1.)
At the same time, during these practice activities, teachers (and tennis coaches) are indeed observing students' performance and making evaluations of each learner: How did the performance compare with previous performance? Which aspects of the performance were better than others in the same learning community? In the ideal classroom, all these observations feed into the way the teacher provides instruction to each student.

  1. Informal and Formal Assessment
One way to begin untangling the lexical conundrum created by distinguishing among tests, assessment, and teaching is to distinguish between informal and formal assessment. Informal assessment can take a number of forms, starting with incidental, unplanned comments and responses, along with coaching and other impromptu feedback to the student. Examples include saying "Nice job!" "Good work!" "Did you say can or cannot?" "I think you mean to say you broke the glass, not you break the glass," or putting a smiley face on some homework. Informal assessment does not stop there. A good deal of a teacher's informal assessment is embedded in classroom tasks designed to elicit performance without recording results and making fixed judgments about a student's competence.
Examples at this end of the continuum are marginal comments on papers, responding to a draft of an essay, advice about how to better pronounce a word, a suggestion for a strategy for compensating for a reading difficulty, and showing how to modify a student's note-taking to better remember the content of a lecture.
On the other hand, formal assessments are exercises or procedures specifically designed to tap into a storehouse of skills and knowledge. They are systematic, planned sampling techniques constructed to give teacher and student an appraisal of student achievement. Is formal assessment the same as a test? We can say that all tests are formal assessments, but not all formal assessment is testing. For example, you might use a student's journal or portfolio of materials as a formal assessment of the attainment of certain course objectives, but it is problematic to call those two procedures "tests." Tests are usually relatively time-constrained (usually spanning a class period or at most several hours) and draw on a limited sample of behavior.

  2. Formative and Summative Assessment
Another useful distinction to bear in mind is the function of an assessment: How is the procedure to be used? Two functions are commonly identified in the literature: formative and summative assessment. Most of our classroom assessment is formative assessment: evaluating students in the process of “forming” their competencies and skills with the goal of helping them to continue that growth process. The key to such formation is the delivery (by the teacher) and internalization (by the student) of appropriate feedback on performance, with an eye toward the future continuation (or formation) of learning.
For all practical purposes, virtually all kinds of informal assessment are (or should be) formative. They have as their primary focus the ongoing development of the learner’s language. So when you give a student a comment or a suggestion, or call attention to an error, that feedback is offered in order to improve the learner’s language ability.
Summative assessment aims to measure, or summarize, what a student has grasped, and typically occurs at the end of a course or unit of instruction. A summation of what a student has learned implies looking back and taking stock of how well that student has accomplished objectives, but does not necessarily point the way to future progress. Final exams in a course and general proficiency exams are examples of summative assessment.
One of the problems with prevailing attitudes toward testing is the view that all tests (quizzes, periodic review tests, midterm exams, etc.) are summative. At various points in your past educational experiences, no doubt you've considered such tests as summative. You may have thought, "Whew! I'm glad that's over. Now I don't have to think about that material anymore!" A challenge to you as a teacher is to change that attitude among your students: Can you instill a more formative quality in what your students might otherwise view as a summative test? Can you offer your students an opportunity to convert tests into "learning experiences"? We will take up that challenge in subsequent chapters of this book.

  3. Norm-Referenced and Criterion-Referenced Tests
Another dichotomy that is important to clarify here, and that aids in sorting out common terminology in assessment, is the distinction between norm-referenced and criterion-referenced testing. In norm-referenced tests, each test-taker's score is interpreted in relation to a mean (average score), median (middle score), standard deviation (extent of variance in scores), and/or percentile rank. The purpose of such tests is to place test-takers along a mathematical continuum in rank order. Scores are usually reported back to the test-taker in the form of a numerical score (for example, 230 out of 300) and a percentile rank (such as 84 percent, which means that the test-taker's score was higher than that of 84 percent of the total number of test-takers, but lower than that of the remaining 16 percent in that administration). Typical of norm-referenced tests are standardized tests like the Scholastic Aptitude Test (SAT) or the Test of English as a Foreign Language (TOEFL), intended to be administered to large audiences, with results efficiently disseminated to test-takers. Such tests must have fixed, predetermined responses in a format that can be scored quickly at minimum expense. Money and efficiency are primary concerns in these tests.
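To make these statistics concrete, here is a minimal sketch in Python (our own illustration with hypothetical scores, not anything from the source) of how the mean, median, standard deviation, and percentile rank of a norm-referenced administration might be computed:

from statistics import mean, median, stdev

def percentile_rank(score, all_scores):
    """Percentage of test-takers in this administration scoring below `score`."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

scores = [230, 210, 255, 190, 240, 220, 265, 205, 235, 215]  # hypothetical totals out of 300

print(f"mean = {mean(scores):.1f}, median = {median(scores)}, sd = {stdev(scores):.1f}")
print(f"percentile rank of 240: {percentile_rank(240, scores):.0f}%")

A score of 240 here outranks 70 percent of this (tiny) group; a real administration would of course involve thousands of scores.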
Criterion-referenced tests, on the other hand, are designed to give test-takers feedback, usually in the form of grades, on specific course or lesson objectives. Classroom tests involving the students in only one class, and connected to a curriculum, are typical of criterion-referenced testing. Here, much time and effort on the part of the teacher (test administrator) are sometimes required in order to deliver useful, appropriate feedback to students, or what Oller (1979, p. 52) called "instructional value." In a criterion-referenced test, the distribution of students' scores across a continuum may be of little concern as long as the instrument assesses appropriate objectives. In Language Assessment, with an audience of classroom language teachers and teachers in training, and with its emphasis on classroom-based assessment (as opposed to standardized, large-scale testing), criterion-referenced testing is of more prominent interest than norm-referenced testing.

C.    APPROACHES TO LANGUAGE TESTING: A BRIEF HISTORY
Now that you have a reasonably clear grasp of some common assessment terms, we turn to one of the primary concerns of this book: the creation and use of tests, particularly classroom tests. A brief history of language testing over the past half-century will serve as a backdrop to an understanding of classroom-based testing.
Historically, language-testing trends and practices have followed the shifting sands of teaching methodology (for a description of these trends, see Brown, Teaching by Principles [hereinafter TBP], Chapter 2). For example, in the 1950s, an era of behaviorism and special attention to contrastive analysis, testing focused on specific language elements such as the phonological, grammatical, and lexical contrasts between two languages. In the 1970s and 1980s, communicative theories of language brought with them a more integrative view of testing in which specialists claimed that "the whole of the communicative event was considerably greater than the sum of its linguistic elements" (Clark, 1983, p. 432). Today, test designers are still challenged in their quest for more authentic, valid instruments that simulate real-world interaction.

a.      Discrete-Point and Integrative Testing
This historical perspective underscores two major approaches to language testing that were debated in the 1970s and early 1980s. These approaches still prevail today, even if in mutated form: the choice between discrete-point and integrative testing methods (Oller, 1979). Discrete-point tests are constructed on the assumption that language can be broken down into its component parts and that those parts can be tested successfully. These components are the skills of listening, speaking, reading, and writing, and various units of language (discrete points) of phonology/graphology, morphology, lexicon, syntax, and discourse. It was claimed that an overall language proficiency test, then, should sample all four skills and as many linguistic discrete points as possible.
Such an approach demanded a decontextualization that often confused the test-taker. So, as the profession emerged into an era emphasizing communication, authenticity, and context, new approaches were sought. Oller (1979) argued that language competence is a unified set of interacting abilities that cannot be tested separately. His claim was that communicative competence is so global and requires such integration (hence the term "integrative" testing) that it cannot be captured in additive tests of grammar, reading, vocabulary, and other discrete points of language. Others (among them Cziko, 1982, and Savignon, 1982) soon followed in their support for integrative testing.
What does an integrative test look like? Two types of tests have historically been claimed to be examples of integrative tests: cloze tests and dictations. A cloze test is a reading passage (perhaps 150 to 300 words) in which roughly every sixth or seventh word has been deleted; the test-taker is required to supply words that fit into those blanks. Oller (1979) claimed that cloze test results are good measures of overall proficiency. According to the theoretical constructs underlying this claim, the ability to supply appropriate words in blanks requires a number of abilities that lie at the heart of competence in a language: knowledge of vocabulary, grammatical structure, discourse structure, reading skills and strategies, and an internalized "expectancy" grammar (enabling one to predict an item that will come next in a sequence). It was argued that successful completion of cloze items taps into all of those abilities, which were said to be the essence of global language proficiency.
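As an illustration, here is a minimal sketch in Python (our own hypothetical example, not from the source) of how a cloze passage with an every-seventh-word deletion could be generated, along with its answer key:

def make_cloze(passage, n=7):
    """Delete every nth word, returning the cloze text and an answer key."""
    words = passage.split()
    key = {}
    for i in range(n - 1, len(words), n):  # every nth word (0-indexed)
        key[len(key) + 1] = words[i]       # blank number -> deleted word
        words[i] = "(____)"
    return " ".join(words), key

text = ("Supplying appropriate words in blanks is said to require knowledge "
        "of vocabulary, grammatical structure, and discourse structure, as "
        "well as reading skills and strategies.")
cloze, answers = make_cloze(text)
print(cloze)
print(answers)

A real cloze test would use a much longer passage and might exempt the first sentence or proper nouns, but the mechanical deletion step is exactly this simple.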
Dictation is a familiar language-teaching technique that evolved into a testing technique. Essentially, learners listen to a passage of 100 to 150 words read aloud by an administrator (or audiotape) and write what they hear, using correct spelling. The listening portion usually has three stages: an oral reading without pauses; an oral reading with long pauses between every phrase (to give the learner time to write down what is heard); and a third reading at normal speed to give test-takers a chance to check what they wrote. (See Chapter 6 for more discussion of dictation as an assessment device.)
Supporters argue that dictation is an integrative test because it taps into grammatical and discourse competencies required for other modes of performance in language. Success on a dictation requires careful listening, reproduction in writing of what is heard, efficient short-term memory, and, to an extent, some expectancy rules to aid the short-term memory. Further, dictation test results tend to correlate strongly with other tests of proficiency. Dictation testing is usually classroom-centered, since large-scale administration of dictations is quite impractical from a scoring standpoint. Reliability of scoring criteria for dictation tests can be improved by designing multiple-choice or exact-word cloze test scoring.
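To show what exact-word scoring might look like in practice, here is a minimal sketch (an assumption on our part, not the book's procedure). Note that a positional comparison like this breaks down when the learner inserts or omits words, so a real scorer would add an alignment step:

import string

def exact_word_score(original, transcription):
    """Proportion of words reproduced exactly (case and punctuation ignored)."""
    norm = lambda s: [w.strip(string.punctuation).lower() for w in s.split()]
    ref, hyp = norm(original), norm(transcription)
    # positional comparison only; an insertion or deletion shifts everything after it
    correct = sum(r == h for r, h in zip(ref, hyp))
    return correct / len(ref)

print(exact_word_score("The students wrote what they heard.",
                       "The student wrote what they heard."))  # prints 0.8333...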
Proponents of integrative test methods soon centered their arguments on what became known as the unitary trait hypothesis, which suggested an "indivisible" view of language proficiency: that vocabulary, grammar, phonology, the "four skills," and other discrete points of language could not be disentangled from each other in language performance. The unitary trait hypothesis contended that there is a general factor of language proficiency such that all the discrete points do not add up to that whole.
Others argued strongly against the unitary trait position. In a study of students in Brazil and the Philippines, Farhady (1982) found significant and widely varying differences in performance on an ESL proficiency test, depending on subjects' native country, major field of study, and graduate versus undergraduate status. For example, Brazilians scored very low in listening comprehension and relatively high in reading comprehension. Filipinos, whose scores on five of the six components of the test were considerably higher than Brazilians' scores, were actually lower than Brazilians in reading comprehension scores. Farhady's contentions were supported in other research that seriously questioned the unitary trait hypothesis. Finally, in the face of the evidence, Oller retreated from his earlier stand and admitted that "the unitary trait hypothesis was wrong" (1983, p. 352).
b.      Communicative Language Testing
By the mid-1980s, the language-testing field had abandoned arguments about the unitary trait hypothesis and had begun to focus on designing communicative language-testing tasks. Bachman and Palmer (1996, p. 9) include among "fundamental" principles of language testing the need for a correspondence between language test performance and language use: "In order for a particular language test to be useful for its intended purposes, test performance must correspond in demonstrable ways to language use in non-test situations." The problem that language assessment experts faced was that tasks tended to be artificial, contrived, and unlikely to mirror language use in real life. As Weir (1990, p. 6) noted, "Integrative tests such as cloze only tell us about a candidate's linguistic competence. They do not tell us anything directly about a student's performance ability."
And so a quest for authenticity was launched, as test designers centered on communicative performance. Following Canale and Swain's (1980) model of communicative competence, Bachman (1990) proposed a model of language competence consisting of organizational and pragmatic competence, respectively subdivided into grammatical and textual components, and into illocutionary and sociolinguistic components. (Further discussion of both Canale and Swain's and Bachman's models can be found in PLLT.) Bachman and Palmer (1996, pp. 70f) also emphasized the importance of strategic competence (the ability to employ communicative strategies to compensate for breakdowns as well as to enhance the rhetorical effect of utterances) in the process of communication. All elements of the model, especially pragmatic and strategic abilities, needed to be included in the constructs of language testing and in the actual performance required of test-takers.
Communicative testing presented challenges to test designers. Test constructors began to identify the kinds of real-world tasks that language learners were called upon to perform. It was clear that the contexts for those tasks were extraordinarily varied and that the sampling of texts for any one assessment procedure needed to be validated by what language users actually do with language. Weir (1990, p. 11) reminded his readers that "to measure language proficiency ... account must now be taken of: where, when, how, with whom, and why language is to be used, and on what topics, and with what effect." And the assessment field became more and more concerned with the authenticity of tasks and the genuineness of texts.

c.       Performance-Based Assessment
In language courses and programs around the world, test designers are now tackling this new and more student-centered agenda (Alderson, 2001, 2002). Instead of just offering paper-and-pencil selective-response tests of a plethora of separate items, performance-based assessment of language typically involves oral production, written production, open-ended responses, integrated performance (across skill areas), group performance, and other interactive tasks. To be sure, such assessment is time-consuming and therefore expensive, but those extra efforts are paying off in the form of more direct testing, because students are assessed as they perform actual or simulated real-world tasks. In technical terms, higher content validity is achieved because learners are measured in the process of performing the targeted linguistic acts.
In an English language teaching context, performance-based assessment means that you may have a difficult time distinguishing between formal and informal assessment. If you rely a little less on formally structured tests and a little more on evaluation while students are performing various tasks, you will be taking some steps toward meeting the goals of performance-based testing.
A characteristic of many (but not all) performance-based language assessments is the presence of interactive tasks. In such cases, the assessments involve learners in actually performing the behavior that we want to measure. In interactive tasks, test-takers are measured in the act of speaking, requesting, responding, or combining listening and speaking, and integrating reading and writing. Paper-and-pencil tests certainly do not elicit such communicative performance. A prime example of an interactive language assessment procedure is an oral interview. The test-taker is required to listen accurately to someone else and to respond appropriately. If care is taken in the test design process, language elicited and volunteered by the student can be personalized and meaningful, and tasks can approach the authenticity of real-life language use.

D.    CURRENT ISSUES IN CLASSROOM TESTING
The design of communicative, performance-based assessment rubrics continues to challenge both assessment experts and classroom teachers. Such efforts to improve various facets of classroom testing are accompanied by some stimulating issues, all of which are helping to shape our current understanding of effective assessment. Let's look at three such issues:  the effect of new theories of intelligence on the testing industry; the advent of what has come to be called "alternative" assessment; and the increasing popularity of computer-based testing.

a.      New Views on Intelligence
Intelligence was once viewed strictly as the ability to perform (a) linguistic and (b) logical-mathematical problem solving. This "IQ" (intelligence quotient) concept of intelligence has permeated the Western world and its way of testing for almost a century. Since "smartness" in general is measured by timed, discrete-point tests consisting of a hierarchy of separate items, why shouldn't every field of study be so measured? For many years, we have lived in a world of standardized, norm-referenced tests that are timed, in a multiple-choice format, consisting of a multiplicity of logic-constrained items, many of which are inauthentic.
However, research on intelligence by psychologists like Howard Gardner, Robert Sternberg, and Daniel Goleman has begun to turn the psychometric world upside down. Gardner (1983, 1999), for example, extended the traditional view of intelligence to seven different components. He accepted the traditional conceptualizations of linguistic intelligence and logical-mathematical intelligence on which standardized IQ tests are based, but he included five other "frames of mind" in his theory of multiple intelligences:
·         Spatial intelligence (the ability to find your way around an environment, to form mental images of reality)
·         Musical intelligence (the ability to perceive and create pitch and rhythmic patterns)
·         Bodily-kinesthetic intelligence (fine motor movement, athletic prowess)
·         Interpersonal intelligence (the ability to understand others and how they feel, and to interact effectively with them)
·         Intrapersonal intelligence (the ability to understand oneself and to develop a sense of self-identity)
Robert Sternberg (1988, 1997) also charted new territory in intelligence research in recognizing creative thinking and manipulative strategies as part of intelligence. All "smart" people aren't necessarily adept at fast, reactive thinking. They may be very innovative in being able to think beyond the normal limits imposed by existing tests, but they may need a good deal of processing time to enact this creativity. Other forms of smartness are found in those who know how to manipulate their environment, namely, other people. Debaters, politicians, successful salespersons, smooth talkers, and con artists are all smart in their manipulative ability to persuade others to think their way, vote for them, make a purchase, or do something they might not otherwise do.
More recently, Daniel Goleman's (1995) concept of "EQ" (emotional quotient) has spurred us to underscore the importance of the emotions in our cognitive processing. Those who manage their emotions, especially emotions that can be detrimental, tend to be more capable of fully intelligent processing. Anger, grief, resentment, self-doubt, and other feelings can easily impair peak performance in everyday tasks as well as higher-order problem solving.
These new conceptualizations of intelligence have not been universally accepted by the academic community (see White, 1998, for example). Nevertheless, their intuitive appeal infused the decade of the 1990s with a sense of both freedom and responsibility in our testing agenda. Coupled with parallel educational reforms at the time (Armstrong, 1994), they helped to free us from relying exclusively on timed, discrete-point, analytical tests in measuring language. We were prodded to cautiously combat the potential tyranny of "objectivity" and its accompanying impersonal approach. But we also assumed the responsibility for tapping into whole language skills, learning processes, and the ability to negotiate meaning. Our challenge was to test interpersonal, creative, communicative, interactive skills, and in doing so to place some trust in our subjectivity and intuition.

b.      Traditional and “Alternative” Assessment
Implied in some of the earlier descriptions of performance-based classroom assessment is a trend to supplement traditional test designs with alternatives that are more authentic in their elicitation of meaningful communication.
Two caveats need to be stated here. First, the concepts in Table 1.1 represent some overgeneralization and should therefore be considered with caution. It is difficult, in fact, to draw a clear line of distinction between what Armstrong (1994) and Bailey (1998) have called traditional and alternative assessment. Many forms of assessment fall in between the two, and some combine the best of both.
Second, it is obvious that the table shows a bias toward alternative assessment, and one should not be misled into thinking that everything on the left-hand side is tainted while the list on the right-hand side offers salvation to the field of language assessment. As Brown and Hudson (1998) aptly pointed out, the assessment traditions available to us should be valued and utilized for the functions that they provide. At the same time, we might all be stimulated to look at the right-hand list and ask ourselves if, among those concepts, there are alternatives to assessment that we can constructively use in our classrooms.
It should be noted here that considerably more time and higher institutional budgets are required to administer and score assessments that presuppose more subjective evaluation, more individualization, and more interaction in the process of offering feedback.

Table 1.1. Traditional and alternative assessment

Traditional Assessment              Alternative Assessment
One-shot, standardized exams        Continuous long-term assessment
Timed, multiple-choice format       Untimed, free-response format
Decontextualized test items         Contextualized communicative tasks
Scores suffice for feedback         Individualized feedback and washback
Norm-referenced scores              Criterion-referenced scores
Focus on the "right" answer         Open-ended, creative answers
Summative                           Formative
Oriented to product                 Oriented to process
Non-interactive performance         Interactive performance
Fosters extrinsic motivation        Fosters intrinsic motivation
The payoff for the latter, however, comes with more useful feedback to students, the potential for intrinsic motivation, and ultimately a more complete description of a student's ability. More and more educators and advocates for educational reform are arguing for a de-emphasis on large-scale standardized tests in favor of building budgets that will offer the kind of contextualized, communicative, performance-based assessment that will better facilitate learning in our schools.

c.       Computer-Based Testing
Recent years have seen a burgeoning of assessment in which the test-taker performs responses on a computer. Some computer-based tests (also known as "computer-assisted" or "web-based" tests) are small-scale, "home-grown" tests available on websites. Others are standardized, large-scale tests in which thousands or even tens of thousands of test-takers are involved. Students receive prompts (or probes, as they are sometimes referred to) in the form of spoken or written stimuli from the computerized test and are required to type (or in some cases, speak) their responses. Almost all computer-based test items have fixed, closed-ended responses; however, tests like the Test of English as a Foreign Language (TOEFL) offer a written essay section that must be scored by humans (as opposed to automatic, electronic, or machine scoring). As this book goes to press, the designers of the TOEFL are on the verge of offering a spoken English section.
A specific type of computer-based test, the computer-adaptive test, has been available for many years but has recently gained momentum. In a computer-adaptive test (CAT), each test-taker receives a set of questions that meet the test specifications and that are generally appropriate for his or her performance level. The CAT starts with questions of moderate difficulty. As test-takers answer each question, the computer scores the question and uses that information, as well as the responses to previous questions, to determine which question will be presented next. As long as examinees respond correctly, the computer typically selects questions of greater or equal difficulty. Incorrect answers, however, typically bring questions of lesser or equal difficulty. The computer is programmed to fulfill the test design as it continuously adjusts to find questions of appropriate difficulty for test-takers at all performance levels. In CATs, the test-taker sees only one question at a time, and the computer scores each question before selecting the next one. As a result, test-takers cannot skip questions, and once they have entered and confirmed their answers, they cannot return to questions or to any earlier part of the test.
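The selection logic just described can be sketched in a few lines. The following toy example (our own, not any operational CAT algorithm) starts at moderate difficulty, moves up one level after a correct answer and down one level after an incorrect one, and never lets the test-taker revisit an item:

import random

def run_cat(bank, respond, n_items=5):
    """bank: difficulty level (1 = easiest .. 5 = hardest) -> unused items.
    respond(item) returns True if the test-taker answers correctly."""
    level, administered = 3, []          # start at moderate difficulty
    for _ in range(n_items):
        item = bank[level].pop()         # one question at a time; no going back
        correct = respond(item)
        administered.append((item, level, correct))
        # correct answers raise difficulty; incorrect ones lower it
        level = min(5, level + 1) if correct else max(1, level - 1)
    return administered

bank = {lvl: [f"level{lvl}-item{i}" for i in range(10)] for lvl in range(1, 6)}
print(run_cat(bank, respond=lambda item: random.random() < 0.6))

Operational CATs replace the simple up/down rule with item response theory, estimating the test-taker's ability after every response, but the one-question-at-a-time, no-return structure is the same.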
Computer-based testing, with or without CAT technology, offers these advantages:
·         Classroom-based testing
·         Self-directed testing on various aspects of a language (vocabulary, grammar, discourse, one or all of the four skills, etc.)
·         Practice for high-stakes standardized tests
·         Some individualization, in the case of CATs
·         Large-scale standardized tests that can be administered easily to thousands of test-takers at many different stations, then scored electronically for rapid reporting of results.
Of course, some disadvantages are present in our current predilection for computerized testing. Among them:
·         Lack of security and the possibility of cheating are inherent in classroom-based, unsupervised computerized tests.
·         Occasional “home-grown” quizzes that appear on unofficial websites may be mistaken for validated assessments.
·         The multiple-choice format preferred for most computer-based tests contains the usual potential for flawed item design (see Chapter 3).
·         Open-ended responses are less likely to appear because of the need for human scorers, with all the attendant issues of cost, reliability, and turnaround time.
·         The human interactive element (especially in oral production) is absent.
More is said about computer-based testing in subsequent chapters, especially Chapter 4, in a discussion of large-scale standardized testing. In addition, the following websites provide further information and examples of computer-based tests:

Educational Testing Service                                                      www.ets.org
Test of English as a Foreign Language                                     www.toefl.org
Test of English for International Communication                     www.toeic.com
International English Language Testing System                       www.ielts.org
Dave’s ESL Café (computer quizzes)                                       www.eslcafe.com

Some argue that computer-based testing, pushed to its ultimate level, might mitigate against recent efforts to return testing to its artful form of being tailored by teachers for their classrooms, of being designed to be performance-based, and of allowing a teacher-student dialogue to form the basis of assessment. This need not be the case. Computer technology can be a boon to communicative language testing. Teachers and test-makers of the future will have access to an ever-increasing range of tools to safeguard against impersonal, stamped-out formulas for assessment. By using technological innovations creatively, testers will be able to enhance authenticity, to increase interactive exchange, and to promote autonomy.
As you read this book, I hope you will do so with an appreciation for the place of testing in assessment, and with a sense of the interconnection of assessment and teaching. Assessment is an integral part of the teaching-learning cycle. In an interactive, communicative curriculum, assessment is almost constant. Tests, which are a subset of assessment, can provide authenticity, motivation, and feedback to the learner. Tests are essential components of a successful curriculum and one of several partners in the learning process. Keep in mind these basic principles:
1.      Periodic assessments, both formal and informal, can increase motivation by serving as milestones of student progress.
2.      Appropriate assessments aid in the reinforcement and retention of information.
3.      Assessment can confirm areas of strength and pinpoint areas needing further work.
4.      Assessment can provide a sense of periodic closure to modules within a curriculum.
5.      Assessments can promote student autonomy by encouraging students’ self-evaluation of their progress.
6.      Assessments can spur learners to set goals for themselves.
7.      Assessments can aid in evaluating teaching effectiveness.


By: 1st Group
Ariani Andespa
Ririn Ariani
Lusy Bebi Hertika
Rivaria Safitri
Tiya Rosalina
Mery Herlina