Friday, 08 August 2014

Tests as Tools


One of the most important things to keep in mind when making or using language tests is that tests and other assessments are tools. We want to use a test or assessment for a particular reason, to do a certain job, not “just because.” We should have in mind what that reason is, and who is likely to be taking the test, before we start planning the test—let alone before we start writing it. Almost without fail, the reason for giving the test will have something to do with making decisions about students or other people (for example, prospective students, prospective employees, or people wanting to have their language ability certified for some purpose). These decisions, naturally, should inform the way that we design our tests (Mislevy 2007).

Keeping in mind that a test is a tool can do a lot to clarify our thinking about how to use it. A particular tool is better for some tasks than for others, as anyone who has ever used pliers to remove a screw can understand. Similarly, a certain test might work quite well for one purpose, but not so well for something else. Some tools are poorly made and are not useful for much of anything; so are some tests, particularly those that are random collections of questions thrown together without any planning. Likewise, some tools are well made but highly specialized; in the same way, a given test might be intended for a particular purpose, such as assessing the English-speaking ability of air traffic controllers, and it might do a wonderful job performing that task, but it might not be a good indicator of a doctor’s ability to converse with nurses and patients. Often there are several options available for a tool, some high-priced and some cheap; one of the cheaper alternatives may do the job well enough, and while the more expensive options might work even better, they may not be better enough to justify the extra expense. Finally, to draw the tool analogy to a close, we should keep in mind that nobody asks whether someone has a good tool they can borrow. If someone needs a hammer, they ask for one, not for a screwdriver or wrench! In spite of this, it is all too common for a teacher to ask colleagues whether they know of any good tests that can be used. Keeping this firmly in mind, we will next consider some of the purposes we use tests for, and some of the ways we look at test results.

Test Purposes and Types


As Brown (1995) points out, language tests are normally used to help make decisions, and there are a number of types of decisions that they can be used for. We generally refer to tests, in fact, by the type of decision they are used to make. I think it is useful to divide these test and decision types into two broad categories: those that are closely related to a teaching or learning curriculum, and those that are not. I use this distinction because curriculum-related tests all have a specific domain—the curriculum—to which we can refer when planning and writing these tests. In contrast, when a test is not based on a particular curriculum, we have the burden or freedom (depending on one’s point of view) of deciding what specifically it should be based on.
These types of tests are summarized in Table 1.1. Brief consideration, of course, will show that many tests are used for more than one purpose; this is not necessarily problematic, and I will refer to several common types of overlap in the following discussion.
Furthermore, as will become evident shortly, the dividing line between one type of test and another is not always as clear and sharp as we might pretend. Nevertheless, there are several clearly identifiable types of decisions that are informed by testing, for which some sort of classification system is useful. Because the actual use of a test may change from what was originally planned, it is important to think in terms of types of decisions more so than types of tests per se; however, it is common in actual usage to refer to types of tests as a convenient shorthand.

Curriculum-Related Tests


The first type of curriculum-related test that a new student might encounter is an admission test, which is used to decide whether a student should be admitted to the program at all; this could of course be viewed as a screening test for a language program (see below), illustrating that, as noted earlier, the lines between categories can often be rather fuzzy. A related type of test is a placement test, which is used to decide at which level in the language program a student should study. The student then gets “placed” into that level—hence the name. In many cases, a single test might be used for both purposes: to decide whether a student’s language ability is adequate for even the lowest level in the program (admission decisions), and if they pass that threshold, to decide which level is most appropriate for them (placement decisions).
Diagnostic tests are used to identify learners’ areas of strength and weakness. Sometimes diagnostic information is obtained from placement (or admissions) tests, but sometimes diagnostic tests are administered separately once students have already been placed into the appropriate levels. Some language programs also use diagnostic tests to confirm that students were placed accurately. This can be a good idea, especially if a program is not highly confident in its placement procedures, but it is debatable whether this is actually a diagnostic purpose per se. Diagnostic information can be used to help teachers plan what points to cover in class, to help them identify what areas a student may need extra help with, or to help students know which areas they need to focus on in their learning.
Once students are placed appropriately, teachers may wish to find out whether or how well their students are learning what is being taught. Progress tests assess how well students are doing in terms of mastering course content and meeting course objectives. This is done from the point of view that the learning is still ongoing—that is, that students are not expected to have mastered the material yet. Many progress decisions in the classroom do not involve testing, however, but are made informally, in the midst of teaching (see, for example, Leung 2004). This is often referred to as monitoring, or “just paying attention,” and is assumed to be a fundamental part of teaching, but this does not make it any less a form of assessment. More formally, we often refer to smaller progress assessments as quizzes. However, to the extent that we are using these assessments—quizzes, tests, or whatever—to grade students, we are assessing something other than progress. Achievement tests are those that are used to identify how well students have met course objectives or mastered course content. To a large extent, the question of whether a particular test or quiz is an achievement or progress test depends upon how it is being used. To the extent that the test is used to make decisions about what or how fast to teach, it is a progress test, and to the extent that it is used to make decisions about how well individual students have learned what they were supposed to, it is an achievement test.
For example, imagine that a test is given in the middle of a course. It is used to assign grades for how well students have learned the material in the first half of the course, but it is also used by the teacher to decide whether any of those points need to be reviewed in class. In such a case, the test is both a progress and an achievement test. As a second example, consider a test given at the very end of a course. This test is used to assign grades to students—to make decisions about how much learning they have achieved in the course—so it is purely an achievement test. In considering whether a test is actually serving as an assessment of progress, achievement, or both—regardless of what it is being called by a teacher or program—the key is to think in terms of the type(s) of decisions being made. This is especially important when the actual use of a test has changed from what was intended when it was originally designed.
Moving beyond the level of an individual course, achievement tests can also be used at the level of the school or language program for decisions about whether to promote students to the next level or tier of levels, or for program exit or graduation decisions. Often, of course, practicality dictates that achievement testing for such purposes be combined with end-of-course achievement testing. Finally, there are two additional types of test-based decisions that closely relate to language curricula and programs, but which do not involve their “own” types of tests. The first involves program evaluation—test results are one source of evidence to use when evaluating a program’s effectiveness. While we may want to consider the results of placement tests—and how good a job of placing students they seem to be doing—we may also want to examine achievement test results. In particular, if achievement tests are used at the end of a course, or for graduation, and if these tests are clearly tied to the goals and objectives (Brown 1995) of the course or program, then student performance on those tests should tell us something about how well the program is working.

Wednesday, 06 August 2014

Norm-Referenced and Criterion-Referenced Testing

One major way in which test results can be interpreted from different perspectives involves the distinction between norm- and criterion-referenced testing, two different frames of reference that we can use to interpret test scores. As Thorndike and Hagen (1969) point out, a test score, especially just the number of questions answered correctly, “taken by itself, has no meaning. It gets meaning only by comparison with some reference” (Thorndike and Hagen 1969: 241). That comparison may be with other students, or it might be with some pre-established standard or criterion, and the difference between norm- and criterion-referenced tests derives from which of these frames of reference is being used.
Norm-referenced tests (NRTs) are tests on which an examinee’s results are interpreted by comparing them to how well others did on the test. NRT scores are often reported in terms of test takers’ percentile scores, that is, the percentage of other examinees who scored below them. (Naturally, percentiles are most commonly used in large-scale testing; otherwise, it does not make much sense to divide test takers into 100 groups!) Those others may be all the other examinees who took the test, or, in the context of large-scale testing, they may be the norming sample—a representative group that took the test before it entered operational use, and whose scores were used for purposes such as estimating item (i.e. test question) difficulty and establishing the correspondence between test scores and percentiles. The norming sample needs to be large enough to ensure that the results are not due to chance—for example, if we administer a test to only 10 people, that is too few for us to make any kind of trustworthy generalizations about test difficulty. In practical terms, this means that most norm-referenced tests have norming samples of several hundred or even several thousand; the number depends in part on how many people are likely to take the test after it becomes operational.
The major drawback of norm-referenced tests is that they tell test users how a particular examinee performed with respect to other examinees, not how well that person did in absolute terms. In other words, we do not know how much ability or knowledge they demonstrated, except that it was more or less than a certain percentage of other test takers. That limitation is why criterion-referenced tests are so important, because we usually want to know more about students than that. “About average,” “a little below average,” and “better than most of the others” by themselves do not tell teachers much about a learner’s ability per se. On the other hand, criterion-referenced tests (CRTs) assess language ability in terms of how much learners know in “absolute” terms, that is, in relation to one or more standards, objectives, or other criteria, and not with respect to how much other learners know. When students take a CRT, we are interested in how much ability or knowledge they are demonstrating with reference to an external standard of performance, rather than with reference to how anyone else performed. CRT scores are generally reported in terms of the percentage correct, not percentile. Thus, it is possible for all of the examinees taking a CRT to pass it; in fact, this is generally desirable in criterion-referenced achievement tests, since most teachers hope that all their students have mastered the course content.
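To make the percentile versus percentage-correct contrast concrete, here is a minimal sketch in Python. All of the numbers are made up for illustration (the norming-sample scores, the 40-item test length, and the raw score of 30 are hypothetical, not data from any real test):

# Hypothetical illustration: contrast a norm-referenced interpretation
# (percentile rank) with a criterion-referenced one (percentage correct).
norming_sample = [12, 15, 18, 22, 25, 27, 30, 33, 35, 38]  # raw scores from earlier examinees
total_items = 40

def percentile_rank(raw_score, sample):
    # NRT view: what percentage of the norming sample scored below this examinee?
    below = sum(1 for s in sample if s < raw_score)
    return 100 * below / len(sample)

def percent_correct(raw_score, n_items):
    # CRT view: what percentage of the items did the examinee answer correctly?
    return 100 * raw_score / n_items

print(percentile_rank(30, norming_sample))  # 60.0 -> scored higher than 60% of the sample
print(percent_correct(30, total_items))     # 75.0 -> answered 75% of the items correctly

The same raw score of 30 thus supports two different statements about the examinee, which is exactly the difference between the two frames of reference described above.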
Note also that besides being reported in terms of percentage correct, scores may also be reported in terms of a scoring rubric or a rating scale, particularly in the case of speaking or writing tests. When this is done with a CRT, however, the score bands are not defined in terms of below or above “average” or “most students,” but rather in terms of how well the student performed—that is, how much ability he or she demonstrated. A rubric that defined score bands in terms of the “average,” “usual,” or “most students,” for example, would be norm-referenced.


Monday, 04 August 2014

The Origin of the Term “Dancuk”

Etymology
According to the Universitas Gadjah Mada online dictionary, the terms “jancuk, jancok, diancuk, diancok, cuk, or cok” are defined as “damn, bastard, jerk (a swear-word expression used to express disappointment, or also to express astonishment at something extraordinary).”

The history of this word is still murky. Its emergence is often explained as a mishearing by earlier generations who misunderstood its meaning: these versions trace it to neighboring countries whose people pronounced words with different intonation but nearly the same sounds. Because speakers from those neighboring countries uttered the words resembling jancok with expressions of anger, irritation, and the like, the Javanese of the time took jancok (as rendered by the Javanese tongue) to be a term of abuse.
There are at least four versions of the origin of the word Jancok.


1. The Arab-arrival version
One version traces “Jancuk” to the word Da’Suk. Da’ means “leave behind,” and assyu’a means “badness”; combined, Da’Suk means “leave badness behind.” Pronounced in the Surabaya dialect, the word became “Jancok.”


2. The Dutch colonial version
According to Edi Samson, a member of a cultural heritage (Cagar Budaya) body in Surabaya, the term Jancok or Dancok comes from the Dutch phrase “yantye ook,” meaning “you too.” The phrase was popular among the Indo-Dutch community around the 1930s. Surabaya youths twisted it to mock Dutch residents and people of Dutch descent, spelling it “yanty ok,” which sounded like “yantcook.” The word has since become “Jancok” or “Dancok.”


3. The Japanese occupation version
The word “Jancok” is said to come from Sudanco, a term from the romusha (forced-labor) era meaning “hurry up.” Out of the resentment felt by Surabaya’s youth at the time, the command was twisted into “Dancok.”

4. The swear-word version
Residents of Kampung Palemahan in Surabaya have an oral history that “Jancok” is an acronym of “Marijan ngencuk” (“Marijan has sexual intercourse”). The Javanese word encuk means “to have sexual intercourse,” especially outside marriage. Another version holds that “Jancuk” derives from the verb “diencuk”; the word eventually became “Dancuk” and finally “Jancuk” or “Jancok.”


The word “Jancok” is generally taboo among the people of Java because of its negative connotations. However, residents of Surabaya and Malang use it as a marker of their community identity, so “Jancok” has undergone amelioration (a shift in meaning toward the positive).
Sujiwo Tedjo has said:

“Jancuk” is like a knife. A knife’s function depends entirely on its user and the user’s psychological state. Used by a criminal, it can be a murder weapon. Used by a wife devoted to her family, it can be a cooking tool. Held by someone consumed by revenge, it can be an instrument for taking a human life. Held by someone filled with love for their family, it can be a utensil for producing something that relieves human hunger. So too with “jancuk”: spoken with insincere intent, full of anger and vengeance, it can wound. But spoken with the desire to be close, to be warm and easygoing in building companionship, “jancuk” is like a knife in the hands of someone cooking. “Jancuk” can turn raw ingredients into a spread that opens conversation and laughter at the dinner table. (Sujiwo Tedjo, 2012, page x)
Jancuk is a symbol of familiarity. A symbol of warmth. A symbol of ease. All the more so amid an increasingly hypocritical public, the familiarity, warmth, and ease of “jancuk” are ever more needed to probe and expose that hypocrisy. (Sujiwo Tedjo, 2012: 397)