Workshops

Date: December 4, 2021
Sessions: Morning 8:30am – 11:50am; Afternoon 1:30pm – 4:50pm
[WORKSHOP] Introduction to Rasch Measurement Model (1-DAY)
Speaker: Zi YAN (in Chinese)
More information:
The Rasch model, one of a family of Item Response Theory models, provides human science researchers (including educators, language researchers, psychologists, social workers, nurses, and others) with a conceptual and analytic tool for developing, using, and monitoring high-quality measures of latent variables such as attitudes, traits, and abilities. Because the Rasch measurement model focuses on items and persons rather than on test scores, it synthesizes the two through the principle of conjoint measurement, bringing quantitative analysis together with qualitative concerns in a way that is rare in social science. Ultimately, the Rasch model can facilitate more efficient, reliable, and valid assessment while making research findings easier for users to interpret.
The workshop will begin with an introduction to the theory and practice of Rasch measurement, including an explanation of the advantages of Rasch analysis over classical approaches to test and questionnaire scores. Subsequent topics will cover how Rasch analysis can be applied to dichotomous data (the basic Rasch model) and to Likert-style questionnaire data (the Rasch rating scale model). The workshop is useful for anyone who wants to understand the role of modern measurement, whether in education, competency assessment, or other areas of measurement research.
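For orientation only (this sketch is not part of the official workshop materials), the two models named above are usually written as follows. The basic dichotomous Rasch model gives the probability that person n succeeds on item i as

\[
P(X_{ni} = 1) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)},
\]

where \(\theta_n\) is the person's ability and \(\delta_i\) is the item's difficulty. The Rasch rating scale model extends this to Likert-style items through a common set of category thresholds \(\tau_k\):

\[
\ln\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = \theta_n - \delta_i - \tau_k .
\]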
Workshop schedule: 
Session 1: 8:30am-10:00am
Break: 10:00am-10:20am
Session 2: 10:20am-11:50am
Lunch: 11:50am-1:30pm
Session 3: 1:30pm-3:00pm
Break: 3:00pm-3:20pm
Session 4: 3:20pm-4:50pm
Date: December 4, 2021
Session: Morning 8:30am – 11:50am
[WORKSHOP] Introduction to R (half day)
Speaker: Cynthia TONG (in Chinese)
More information:
R is an open-source programming language with extensive facilities for problem solving through statistical computing. R is both a language and an environment for working with data of almost any kind: statistical computing, data mining, data analysis, machine learning, predictive modelling, quantitative analysis, optimisation, operations research, and other closely related areas.
This workshop will introduce what R is and how it can be applied to Rasch measurement. Participants will work actively with R and RStudio during the workshop. No previous knowledge of R or RStudio is necessary; the workshop is useful for anyone interested in the topic, regardless of prior experience with R or Rasch measurement.
Workshop schedule: 
Session 1: 8:30am-10:00am
Break: 10:00am-10:20am
Session 2: 10:20am-11:50am
Date: December 4, 2021
Session: Afternoon 1:30pm – 4:50pm
[WORKSHOP] Educational data: Ways to tell stories and find teaching enlightenment (教育数据:说故事及寻找教学启示的方法, half day)
Speaker: Kit Tai HAU (in Chinese)
More information:
The Programme for International Student Assessment (PISA), hosted by the OECD, has attracted broad public attention around the world. In this workshop we will discuss: (1) how the questionnaires used in educational monitoring are selected; and (2) how questionnaire results can be analyzed and written up as an attractive and influential “story”. PISA-type instruments and their accompanying materials will be used as illustrative cases. Participants are invited to propose research topics for group discussion, with the aim of refining them into more compelling stories. The workshop is devoted to training in crafting vivid and interesting “stories”, not to training in complex or advanced statistical techniques. (OECD主持的国际学生评估项目(PISA)已经赢得了世界各地公众的广泛关注。此次工作坊我们将讨论:(1)教育监测使用的问卷是如何选取的;(2)如何利用问卷结果分析撰写成有吸引力、有影响的“故事”。我们将使用PISA类型的工具及其资料来进行阐释并作为案例。参与者提出一些研究课题,我们大家一起讨论,改善这些课题变成更有吸引力的故事。工作坊致力于形成生动有趣“故事”的训练,而非进行复杂高深的统计技术培训。)
Workshop schedule: 
Session 1: 1:30pm-3:00pm
Break: 3:00pm-3:20pm
Session 2: 3:20pm-4:50pm

Keynote speeches

December 5, 2021
The 1st keynote speech: 9:30am-10:20am (40-min presentation and 10-min Q & A)
Speaker: Prof. Trevor BOND, AUSTRALIA
Title: From estimation to consideration: The role of Rasch measurement in promoting understanding and validity
Description: Many come to Rasch measurement when they are put in the position of developing their own instruments for measurement, or when trying to determine whether the instrument they have selected works in the way it was intended. The first step in constructing measures is to have a deep understanding of the variable under investigation. To construct an instrument that measures just one variable at a time, the items generated must be as similar as possible and, at the same time, as different as possible; Rasch measurement helps us untangle that apparent conundrum. A distinctive feature of Rasch analysis is that items and persons are analysed together, so that their performances may be examined independently. This presentation uses empirical evidence from a recent examination of the ability of second-language English learners to write English-language essays. The results show how the Many-Facets Rasch Model can be used to untangle not only the effects of rater and examinee, but also the interactions between topic, grade, and a common marking rubric.
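For readers unfamiliar with the model named above, the Many-Facets Rasch Model extends the basic model by adding facets beyond person and item. A schematic form matching the design described in the abstract (the facet labels here are illustrative, not taken from the study itself) is

\[
\ln\!\left(\frac{P_{nmik}}{P_{nmi(k-1)}}\right) = B_n - C_m - D_i - F_k ,
\]

where \(B_n\) is examinee ability, \(C_m\) is rater severity, \(D_i\) is the difficulty of topic \(i\), and \(F_k\) is the threshold for category \(k\) of the common marking rubric.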
Break: 10:20am-10:40am (20 mins)
The 2nd keynote speech: 10:40am – 11:30am (40-min presentation and 10-min Q & A)
Speaker: Prof. Kit Tai HAU, Hong Kong, CHINA
Title: Large Scale International Educational Assessment: Uses, Limitations and Counter-Intuitive Findings
Description: In the last two decades, policy makers, researchers, and teachers have paid great attention to large-scale international student assessment programs such as PISA. Drawing on results from these programs, we will discuss some counter-intuitive findings, limitations of the research design, and issues concerning the comparability of scales.
December 6, 2021
The 1st keynote speech: 8:30am-9:20am (40-min presentation and 10-min Q & A)
Speaker: Prof. Ricardo PRIMI, BRAZIL
Title: Response styles as Person Differential Functioning: methodological approaches to solve Person DIF
Description: Likert-type self-report scales are frequently used in large-scale educational assessment of social-emotional skills. Such scales rely on the assumption that their items elicit information only about the trait they are supposed to measure. In children especially, the response style of acquiescence is an important source of systematic error. Balanced scales, which include an equal number of positively and negatively keyed items, have been proposed as a way to control for acquiescence, but the reasons why this design feature works have been underexplored from the perspective of modern psychometric models. Three methods for controlling for acquiescence are compared: a classical method that partials out the mean; an item response theory method that measures differential person functioning (DPF); and a multidimensional item response theory (MIRT) model with a random intercept. Comparative analyses are conducted on simulated ratings and on self-ratings provided by 40,649 students (aged 11–18) on a fully balanced 30-item scale assessing conscientious self-management. Acquiescence bias was found to be explained as DPF.
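As a rough sketch of the random-intercept idea mentioned above (the exact specification used in the study may differ), the location of person n on item i can be written as a combination of the substantive trait \(\theta_n\), the item's keying \(k_i \in \{+1, -1\}\), and a person-specific acquiescence intercept \(a_n\):

\[
\ln\!\left(\frac{P_{nic}}{P_{ni(c-1)}}\right) = k_i\,\theta_n + a_n - \beta_{ic}, \qquad a_n \sim N(0, \sigma_a^2),
\]

so that \(a_n\) shifts responses toward agreement on all items regardless of keying, while \(\theta_n\) raises agreement on positively keyed items and lowers it on negatively keyed ones.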
Break: 9:20am-9:30am (10 mins)
The 2nd keynote speech: 9:30am – 10:20am (40-min presentation and 10-min Q & A)
Speaker: Prof. Kelly BRADLEY, AMERICA
Title: (Soon)
Description: (Soon)
Break: 10:20am-10:40am (20 mins)
The 3rd keynote speech: 10:40am – 11:30am (40-min presentation and 10-min Q & A)
Speaker: Prof. Steven STEMLER, AMERICA
Title: Better Measurement and Fewer Parameters! The True Value of Rasch over IRT
Description: Proponents of Item Response Theory models have sometimes described the Rasch model as “the one-parameter IRT model”. In doing so, however, they miss both the point and the power of the Rasch model. Only the Rasch model can guarantee that the scale being constructed has the same meaning for all test takers, and this provides a powerful advantage over two- and three-parameter IRT models. By modeling a second parameter (item discrimination) and allowing item characteristic curves to cross, as those IRT models do, more information is incorporated into person ability and item difficulty estimates, but this comes with an attendant loss in the power to interpret the test scale in a way that means the same thing for all test takers. Thus, any approach to assessment that aims to report what all test takers know and can do at each level of ability, or that hopes to use adaptive testing algorithms to select the appropriate items to administer, must rely on the Rasch model rather than on two- or three-parameter IRT models.
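As background for the comparison in the abstract (standard textbook forms, added here only for orientation), the item characteristic curves of the three models are

\[
\text{Rasch (1PL):}\quad P_i(\theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}, \qquad
\text{2PL:}\quad P_i(\theta) = \frac{\exp\!\big(a_i(\theta - b_i)\big)}{1 + \exp\!\big(a_i(\theta - b_i)\big)}, \qquad
\text{3PL:}\quad P_i(\theta) = c_i + (1 - c_i)\,\frac{\exp\!\big(a_i(\theta - b_i)\big)}{1 + \exp\!\big(a_i(\theta - b_i)\big)}.
\]

When every item shares the same discrimination, as in the Rasch model, item characteristic curves cannot cross, so the ordering of items by difficulty is the same at every ability level; freeing \(a_i\) for each item allows the curves to cross, which is the loss of invariant interpretation the abstract refers to.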

Presentations

Oral presentations

The schedule will be arranged later based on abstract submissions and acceptances.

Symposiums

The schedule will be arranged later based on abstract submissions and acceptances.