1. Inconsistency in data collected in a mixed-methods evaluation
Ke & Hoadley's article gives an overview of the literature on evaluating Online Learning Communities (OLCs) and presents a conceptual framework for OLC evaluation. I found that this article tries to combine many dichotomies in both the discussion and the framework.
At first, I questioned how these approaches could be mixed together. Then I realized that different methods are suited to answering specific problems or questions. So if the evaluation purposes are best served by one method or by a mix of methods, then it is natural to combine them.
Usually, when combining different or diverse methods, evaluators and researchers tend to look for consistency among the data. But what if there is an inconsistency, say, between the qualitative and the quantitative data? What does such a conflict tell us? What would you do when you face this inconsistency (I mean, if you use mixed methods to collect evaluation data but find that, for one question, a survey and an open-ended interview yield feedback at opposite poles)?
2. Context differences and the validity of a survey assessing social ability
While reading "Assessing Social Ability in Online Learning Environments" (Laffey, Lin, and Lin, 2006), I noticed the specific context in which the research was conducted: formal schooling for graduate students and some advanced undergraduates at the college level. My question is: if this survey were applied to other contexts, such as informal learning or different learning levels or groups, would its validity hold? The researchers probably need to address this perspective to strengthen the validity of the instrument.
