Inviting Video Games to the Educational Table

Gee, J. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning, pp. 21–40). Cambridge, MA: The MIT Press.

According to Gee (2008), “good video games recruit good learning,” but it all rests on good design (p. 21). This is because well-designed video games provide the learner with experiences that meet conditions which “recruit learning as a form of pleasure and mastery” (p. 21). These conditions include providing an experience that is goal structured and requires interpretation in working toward those goals, that provides immediate feedback, and that offers the opportunity to apply prior knowledge and experiences (of self and others) toward success in meeting those goals. If done in such a way, Gee (2008) argues, the learner’s experiences are “organized in memory in such a way that they can draw on those experiences as from a data bank” (p. 22). As Gee (2008) presented, these conditions, coupled with the social identity building that good game design incorporates, help “learners understand and make sense of their experience in certain ways. It helps them understand the nature and purpose of the goals, interpretations, practices, explanations, debriefing, and feedback that are integral to learning” (p. 23). These conditions are the key to good game design, as they provide several elements central to learning science. First, they create a “situated learning matrix”: the set of goals and norms which require the player to “master a certain set of skills, facts, principles, and procedures” and to utilize the tools and technologies available within the game, including other players and non-player characters who represent a community of practice in which the learner is self-situating (Gee, 2008, p. 25). This combination of game (in game design) and Game (social setting), as Gee (2008) explained, provides the learner with a foundation for good learning, since “learning is situated in experience but goal driven, identity-focused experience” (p. 26).
In addition, many well-designed games incorporate models and modeling, which “simplify complex phenomena in order to make those phenomena easier to deal with” (Gee, 2008, p. 37). Many good games also enhance learning through an emphasis on distributed intelligence, collaboration, and cross-functional teams, which create “a sense of production and ownership,” situate meanings and terms within motivating experiences at the moment they are needed, and provide an emotional attachment for the player (which aids in memory retention) while keeping frustration levels down to prevent players from pulling away (Gee, 2008, p. 37). As Gee (2008) pointed out, “the language of learning is one important way in which to talk about video games, and video games are one important way in which to talk about learning. Learning theory and game design may, in the future, enhance each other” (p. 37).
In breaking down the connections to learning which can be present within well-designed video games, Gee (2008) has not only outlined the structures on which good educational games should be built but also constructively addresses common arguments against using video games. Recognizing the assets well-designed games can bring to the educational table is important since, more often than not, the skills and content learned in games are learner-centered and content-connected but are “usually not recognized as such unless they fall into a real-world domain” (Gee, 2008, p. 27). This is likely why discussion of the role of video games within education is necessary. As Gee (2008) commented,

“any learning experience has some content, that is, some facts, principles, information, and skills that need to be mastered. So the question immediately arises as to how this content ought to be taught? Should it be the main focus of the learning and taught quite directly? Or should the content be subordinated to something else and taught via that “something else”? Schools usually opt for the former approach, games for the latter. Modern learning theory suggests the game approach is the better one” (p. 24).

Video Games as Digital Literacy

Steinkuehler, C. (2010). Digital literacies: Video games and digital literacies. Journal of Adolescent & Adult Literacy, 54(1), 61–63.

In reflecting on whether educators are selling video games short when it comes to learning, Steinkuehler (2010) offered the anecdotal case of “Julio,” an 8th grade student. Julio spent a significant amount of his free time involved in video game culture, designing and writing about gaming. However, he read three grade levels below where he should have been and was often disinterested in and disengaged from school. Even when presented with game-related readings, he still did not excel. But when given a choice in reading, he selected a 12th grade text that appealed to his interests and managed to succeed despite the obstacles it presented. Steinkuehler (2010) argued that it was the act of giving him the choice to select something that appealed to his interests that increased his self-correction rate and thus gave him the persistence to overcome and meet the challenge. Steinkuehler (2010) opined that “video games are a legitimate medium of expression. They recruit important digital literacy practices” (p. 63) and as such may offer an outlet for students, particularly disengaged males, to engage in learning in ways that may otherwise go unmet through traditional structures.

The efforts the author highlighted Julio engaging in (writing, reading, and researching for gaming) certainly suggest that video games may offer a way to bridge new and traditional literacies, as Gee (2008) suggests. However, this is but a single example and alone offers very little tangible data on which to rest any firm conclusions about the importance of video gaming in education. It does, however, raise the notion of considering how video games present as new literacies which can open doors for the expression of meaning and ideas, particularly for those who may feel marginalized within traditional curriculum plans and by those who consider video games a “waste of time.”

A qualitative approach to investigating how students view and experience the use of gaming in education is especially appealing given this case of Julio. Would he have seen that his outside activities were translatable into educational acumen? Would his teacher or parents? There is too little in this small single case study to say much, but it does give one ideas.

Gee, J. (2008). Learning and games. In K. Salen (Ed.), The ecology of games: Connecting youth, games, and learning (The John D. and Catherine T. MacArthur Foundation Series on Digital Media and Learning, pp. 21–40). Cambridge, MA: The MIT Press.


Digital Games, Design and Learning: A Meta-Analysis

Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: A systematic review and meta-analysis. Review of Educational Research, 86(1), 79–122.

In this article, Clark, Tanner-Smith, and Killingsworth (2016) offer a refined and expanded evaluation of research on digital games and learning. To ground their study, the authors summarize three prior meta-analyses of digital games. From these three studies and their findings, the authors develop two core hypotheses about how digital games impact learning, which they tested in their meta-analysis. These two core hypotheses were further examined for what the authors term moderator conditions, and from this the authors developed subhypotheses for each core hypothesis to test as well. Utilizing databases spanning “Engineering, Computer Science, Medicine, Natural Sciences, and Social Sciences,” the authors sought research published between 2000 and 2012 to identify studies which examined digital games in K-16 settings, which addressed “cognitive, intrapersonal and interpersonal learning outcomes” (p. 82), and which either compared digital games against non-game conditions or utilized a value-added approach (something the prior meta-analyses ignored) to compare standard and enhanced versions of the same game. In addition, they required these studies to meet a set of criteria which included specifics on game design, participant parameters, and pre- and post-testing data which could be used to assess change in outcomes. Overall, they identified 69 studies which met the parameters outlined in their research procedures. From this population they discerned the following significant patterns:

  1. In studies of game versus non-game conditions in media comparisons, students in digital game conditions demonstrated significantly better outcomes overall relative to students in the non-game comparison conditions (p. 94). This was significant for both cognitive and intrapersonal outcomes (p. 95); the number of studies with interpersonal outcomes was too small for statistical significance.
  2. In studies comparing standard and enhanced game versions through value-added comparisons, students playing enhanced games showed “significant positive outcomes” relative to standard versions (p. 98). While overall there were too few studies with specific features for cross-comparisons, the one feature of enhanced scaffolding (personalized, adaptive play) was present in enough studies and showed a significant overall effect (p. 99).
  3. Overall, in examining game conditions, games which allowed the learner multiple play sessions performed better against non-game conditions than games played in a single session. Game duration (time played) seemed to have no effect on overall outcomes (p. 99). These results did not vary even when the visual aspects of the game were taken into account.
  4. Despite what was seen in previous meta-analyses, there was no difference in outcomes between games paired with additional non-game instruction and those without it (p. 99).
  5. There were significant differences among player configurations within games. Overall, single-player games had the most significant impact on learning outcomes relative to group game structures, and these outcomes were higher in single-player games with no formal collaboration or competition (p. 100). However, games with collaborative team competition had significantly larger effects on learning outcomes when compared to single-player competitive games.
  6. Games with greater engagement of the player through in-game actions had greater impact than those with only a small variety of on-screen actions which did not change much over the course of play.
  7. Overall, both simple and more complex game designs showed effectiveness in learning outcomes across visual and narrative perspective qualities, but schematic (symbolic or text-based) games were more effective than cartoon-like or realistic games.

In reflecting on their findings, the authors recognized some limitations arising from both their search parameters and their methodological breakdowns for analysis. They encourage further examination of studies which fell outside their range (for example, simulation games) and closer examination of the subtleties of the individual studies included within their analysis before any larger generalizations can be made as to the specifics of best practices for game design.

Perhaps the most interesting aspect of this study is not the outcomes it presents for future study (even though these are great food for thought about intentional game design for educational purposes) but the proposition it makes that educational technology researchers should “shift emphasis from proof-of-concept studies (“can games support learning?”) and media comparison analyses (“are games better or worse than other media for learning?”) to cognitive-consequences and value-added studies exploring how theoretically driven design decisions can influence situated learning outcomes for the broad diversity of learners within and beyond our classrooms” (p. 116).


Online learning as online participation

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.

In this article, Hrastinski (2009) presents the argument that online participation is a critical and often undervalued aspect of online learning, and that models which relegate it to a solely social aspect of learning ignore its larger contributions to how students connect to materials and to each other in the online environment. In support of his ideas, Hrastinski (2009) offers an overview of the literature on online participation, which highlights that online learning is “best accomplished when learners participate and collaborate” (p. 79), and that this translates into better learning outcomes when measured by “perceived learning, grades, tests and quality of performances and assignments” (p. 79). In order to evaluate online participation, Hrastinski (2009) conceptualizes online participation as more than just counting how often a student takes part in a conversation, viewing it instead as “a process of learning by taking part and maintaining relations with others. It is a complex process comprising doing, communicating, thinking, feeling and belonging which occurs both online and offline” (p. 80). Reflecting on the work of others, Hrastinski (2009) offers the view that participation creates community, which in turn supports collaboration and the construction of knowledge-building communities whose members foster learning in one another and in the group at large. This learning through participation requires physical tools for structuring participation and psychological tools to help the learner engage with the materials, which suggests examining aspects of motivation to learn when designing materials directed toward participation. He argues this means we should look at participation as more than just counting how much someone talks or writes, and should instead develop activities which require engagement with others across a variety of learning modes.

While it is good to see participation treated as a critical component of online learning, and to see reflection on ways in which students may demonstrate online participation through more than just discussion boards, Hrastinski (2009) offers little in terms of concrete examples to demonstrate how he sees this theory of online participation playing out across these different learning modes. While he may have omitted examples to prevent a formulaic approach to considering online participation, the inclusion of either examples or fuller descriptions of how he sees faculty constructing both the physical and psychological tools of online participation would have helped those less familiar with these ideas to visualize the growing range of ways they can approach structuring online engagement.

As I have a deep interest in examining the ways in which community and culture are structured through online classes and the impacts this has on learning, I found this article both interesting and encouraging for research avenues. In particular, the rethinking he proposes of how we see online participation being constructed is encouraging, and I would like to see the ways in which faculty and students may see this idea of “what is participation” similarly or differently, and the connection these perceptions have to how they both approach online learning and how they evaluate it.


Promoting Student Engagement in Videos Through Quizzing

Cummins, S., Beresford, A. R., & Rice, A. (2016). Investigating engagement with in-video quiz questions in a programming course. IEEE Transactions on Learning Technologies, 9(1), 57–66.

The use of videos to supplement or replace lectures that were previously delivered face-to-face is standard in many online courses. However, these videos often encourage passivity on the part of the learner. Other than watching and taking notes, there may be little to challenge the video-watching learner to transform the information into retained knowledge, to self-assess whether or not they understand the content, or to demonstrate their ability to apply what they have learned to novel situations. Since engagement with videos is often the first step toward learning, Cummins, Beresford, and Rice (2016) tested whether students can become actively engaged with video materials through the use of in-video quizzes. They had two research questions: a) “how do students engage with quiz questions embedded within video content” and b) “what impact do in-video quiz questions have on student behavior” (p. 60).

Utilizing an Interactive Lecture Video Platform (ILVP) they developed and open sourced, the researchers were able to collect real-time student interactions with 18 different videos developed as part of a flipped classroom for programmers. Within each video, multiple-choice and text-answer questions were embedded and automatically graded by the system. Video play automatically stopped at each question, and students were required to answer. Correct answers automatically resumed playback, while students had the option of retrying incorrect answers or moving ahead. Correct responses were discussed immediately after each quiz question when playback resumed. The questions were written at the Remember, Understand, Apply, and Analyse levels of Bloom’s revised taxonomy. In addition to the interaction data, the researchers administered anonymous questionnaires to collect student thoughts on the technology and on their own observed behaviors, and also evaluated student engagement based on question complexity. Degree of student engagement was measured by the number of students answering the quiz questions relative to the number of students accessing the video.
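The gating behavior described above (playback pauses at each question, resumes on a correct answer, and offers retry or skip on an incorrect one) together with the engagement measure can be sketched roughly as follows. This is an illustrative sketch only; the class and function names are hypothetical and are not taken from the ILVP codebase.

```python
from dataclasses import dataclass

@dataclass
class QuizQuestion:
    timestamp: float      # seconds into the video where playback pauses
    prompt: str
    correct_answer: str

def grade(question: QuizQuestion, response: str) -> bool:
    """Auto-grade a response (case- and whitespace-insensitive here)."""
    return response.strip().lower() == question.correct_answer.strip().lower()

def next_action(question: QuizQuestion, response: str) -> str:
    """Map the grading outcome to the playback behavior the paper describes:
    correct answers resume playback; incorrect ones may be retried or skipped."""
    return "resume_playback" if grade(question, response) else "offer_retry_or_skip"

def engagement_rate(num_answering: int, num_accessing: int) -> float:
    """Degree of engagement as measured in the study: students answering
    a quiz question relative to students who accessed the video."""
    return num_answering / num_accessing if num_accessing else 0.0
```

For example, if 30 of the 40 students who accessed a video answered a given question, `engagement_rate(30, 40)` yields 0.75 for that question.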

According to Cummins et al. (2016), students were likely to engage with the video through the quiz, but question style, question difficulty, and the overall number of questions in a video impacted the likelihood of engagement. In addition, student behaviors varied in how often and in what ways this engagement took place. Some students viewed videos in their entirety, while others skipped through them to areas they felt were relevant; others employed a combination of these techniques. The authors suggest that, based both on the observed interactions and on questionnaire responses, four patterns of motivation are present during student engagement with the video: completionism (complete everything because it exists), challenge-seeking (engage only with those questions they felt challenged by), feedback (verify understanding of material), and revision (review materials repeatedly). Interestingly, the researchers noted that student recollection of their engagement differed in some cases from the actual recorded behavior, but the authors suggest this may actually show that students are not answering the questions in the context of the quiz but are doing so within other contexts not recorded by the system. Given the evidence of student selectivity in responding to questions based on motivations, the authors suggest a diverse approach to question design within videos will offer something for all learners.

While this study makes no attempt to assess the actual impact on learner performance and retention (due to the type of class and the assessment designs within it relative to the program), it does show that in-video quizzes may offer an effective way to promote student engagement with video-based materials. It is unfortunate the authors did not build an assessment structure into this research design so as to collect some measure of learning. However, the platform they utilized is available to anyone (https://github.com/ucam-cl-dtg/ILVP-prolog), and other systems of integrated video quizzing are available (e.g., TechSmith Relay) which, when combined with keystroke and eye-movement recording technology, could capture similar information. This opens up the ability to further test how in-video quizzing impacts student performance and retention.

In terms of further research, one could envision a series of studies using a similar process to examine in-video quizzing in greater depth, not only for data on how it specifically impacts engagement, learning, and retention but also for how these may be affected by variables such as video purpose, length, context, and the knowledge level of the questions. As Schwartz and Hartman (2007) noted, design variations across video genres may depend on learning outcomes, so assessing whether this engagement exists only for lecture-based videos or may transfer to other genres is intriguing. As Cummins et al. (2016) explain, students “engaged less with the Understand questions in favour of other questions” (p. 62), which would suggest that students were actively selecting what they engaged with based on what they felt was most useful to them. Thus, further investigation of how to design more engaging and learner-centered questions would be useful for knowledge retention. In addition, since the videos were lecture replacements ranging in length from 5 minutes and 59 seconds to 29 minutes and 6 seconds, understanding how length impacts engagement would help to determine whether there is a point at which student motivation, and thus learning, wavers. While the authors do address some specifics as to where drop-offs in engagement occurred relative to specific questions, they do not offer a breakdown of engagement versus the relative length of the video, and they admit that the number of questions varied between videos (three had no questions at all) and that there was no connection between the number of questions and video length. Knowing more about the connections between in-video quizzing and student learning, as well as the variables which impact this process, could help to better assess the overall impact of in-video quizzing and allow us to optimize in-video quizzes to promote student engagement, performance, and retention.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349–366). Mahwah, NJ: Lawrence Erlbaum Associates.

Video Podcasts and Education

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28, 820–831.

While the use of video podcasts in education is growing, the literature supporting their effectiveness for learning is far from conclusive. Kay (2012) offers an overview of the literature on the use of podcasts in education a) to understand the ways in which podcasts have been used, b) to identify the overall benefits of and challenges to using video podcasts, and c) to outline areas of research design which could enhance evaluations of their effectiveness for learning. Utilizing keywords such as “podcasts, vodcasts, video podcasts, video streaming, webcasts, and online videos” (p. 822), Kay searched for articles published in peer-reviewed journals. Through this she identified 53 studies published between 2009 and 2011 to analyze. Since the vast majority of these focused on undergraduates in specific fields, Kay presents this as a review of “the attitudes, behaviors and learning outcomes of undergraduate students studying science, technology, arts and health” (p. 823). Within this context, Kay (2012) shows there is a great deal of diversity in how podcasts are used, how they are structured, and how they are tied into learning. She notes that podcasts generally fall into four categories (lecture-based, enhanced, supplementary, and worked examples), can vary in length and segmentation, can be designed for differing pedagogical approaches (passive viewing, problem solving, and applied production), and can have differing levels of focus (from narrowly targeting specific skills to broadly addressing higher cognitive concepts). Because of the variability in research design, purpose, and analysis methods, Kay (2012) approached this not from a meta-analysis perspective but from a broad comparison perspective with regard to the benefits and challenges of using video podcasts.

In comparing benefits and challenges, Kay (2012) presents that while most studies show great benefits, some are less conclusive. In examining the benefits, Kay finds that students in these studies access podcasts primarily in the evenings and on weekends, primarily on home computers rather than mobile devices (though this varies by the type of video), utilize different styles of viewing, and tie their access to a desire to improve knowledge (often ahead of an exam or class). This suggests that students embrace the flexibility and freedom afforded them through podcasts to learn anywhere and in ways conducive to their learning patterns. Overall, student attitudes toward podcasts are positive in many of the studies. However, some showed a student preference for lectures over podcasts, which limited students’ desire to access them. Many studies noted that students felt podcasts gave them a sense of control over their learning, motivated them to learn through relevance and attention, and helped them improve their understanding and performance. In considering performance, some of the studies showed improvement over traditional approaches with regard to test scores, while others showed no improvement. In addition, while some studies showed that educators and students believed podcasts built specific skills such as team building, technology usage, and teaching skills, the processes by which these develop were not described. Finally, some studies indicate that technical problems with podcasts and a lack of awareness can make podcasts inaccessible to some students, and several studies showed that students who regularly accessed podcasts attended class less often.

In reflecting on these diverse outcomes, Kay suggests that the conflict evident in understanding the benefits and challenges is connected to research design. Kay (2012) argues that issues of podcast description, sample selection and description, and data collection need to be addressed “in order to establish the reliability and validity of results, compare and contrast results from different studies, and address some of the more difficult questions such as under what conditions and with whom are video podcasts most effective” (p. 826). She argues that understanding more about the variation in length, structure, and purpose of podcasts can help to differentiate and better compare study data. Furthermore, Kay asks for more diverse populations (K-12) and better demographic descriptions within studies so as to remove limits on the ability to compare findings across different contexts. Finally, she presents that an overall lack of examination of quantitative data and low-quality descriptions of qualitative data techniques undermine the data being collected: “It is difficult to have confidence in the results reported, if the measures used are not reliable and valid or the process of qualitative data analysis and evaluation is not well articulated” (p. 827). From these three issues, Kay recommends an overall greater depth to the design, descriptions, and data collection of video podcasting research.

While the literature review offers a general overview of the patterns the author witnessed in the studies collected, there are questions about the data collection process, as the author is unclear as to a) why three prior literature reviews were included as part of the analysis and b) whether the patterns she discusses come only from those papers with undergraduate populations (as intimated by her statement quoted above) or from all the samples she collected. The author also used only articles published in peer-reviewed journals and included no conference papers. It is unclear what difference in data would have resulted from including these other sources.

Overall, the most critical information she provides from this study is the fact that there is no unifying research design underlying the studies on video podcasts, which results in a diverse set of studies without complete consensus on the effective use of podcasts in education and with little applicability to how to effectively implement video podcasts. The importance of research design in creating a comparative body of data cannot be overstated and is something which should be considered in all good educational technology research. Unfortunately, while Kay notes the issues present in how the various studies she examined are coded and how their data are collected and analyzed, she does not address the underlying research design issues much when thinking about areas of further research. This is not to lessen the issues she does raise for future research, but the need for better research design is evident, and Kay gives few specifics on it. One would have liked a more specific vision from her on this issue, since greater consideration of the underlying issues of research design with regard to describing and categorizing video podcasts, sampling strategies, and developing methods of both qualitative and quantitative analysis is needed.


Intentional Design for On Screen Reading

Walsh, G. (2016). Screen and paper reading research: A literature review. Australian Academic & Research Libraries, 47(3), 160–173.

As more students move into online courses and more faculty consider incorporating open educational resources (OER) into their courses, the impact of screen reading and learning material design on reading comprehension and overall learning is an essential consideration. Walsh (2016), desiring to help academic librarians gain knowledge of the issues of online reading, examines the research of the past six years on reading comprehension and the screen-versus-paper debate. Overall, Walsh found no consistency in research design among the studies she examined, making cross-comparisons difficult. However, she concludes that “most studies find little differences between the print and screen reading for comprehension” (p. 169). But, she notes, most were not focused on scholarly readings, and those that did focus on them “concluded that participants gain better understanding of the content when reading from paper” (p. 169).

Overall, this article offers a synthesis of recent scholarly literature (2010-2016) located in information management databases. While the article does not specify the exact search parameters used, nor whether parameters were used to eliminate studies from consideration, it does offer a brief glance at some of the literature on this subject from an information management perspective. If the author had opened this research up to databases within learning, education, and educational technology, additional research might have been found. However, despite these limited search parameters, the information within this article, when synthesized, highlights several aspects of screen reading which should be considered within educational technology.

In her article, Walsh (2016) notes that when considering reading and comprehension, neuroscience research suggests that deep reading is necessary for “furthering comprehension, deductive reasoning, critical thought and insight” (p. 162), but that there is variation between the areas of the brain stimulated by print reading and those stimulated by screen reading. This variation may indicate some impingement on the screen reader’s “ability to reflect, absorb and recall information as effectively as information in the paper form” (p. 162) and may encourage more shallow or skim reading. While not specifically addressed by Walsh, when considered further this information suggests that educators who rely on screen-based reading to help students gain material knowledge for their course may need to develop activities which promote deeper reading in students. This is not something students learn early on, due to the predominance of paper-based assigned materials in early education. At the same time, this may not be a skill that can be developed with something as simple as giving students a set of questions to answer after reading. Kuiper et al. (2005) offered that, when examining how students searched the Internet, how the teacher structured the task impacted how the student approached the content. In the case of screen reading, well-structured tasks (to borrow from Kuiper et al.) may support only a seek-and-find strategy and not necessarily support the student’s ability to creatively and critically comprehend and synthesize the materials.

Walsh’s review also shows that the content’s format, intention, and length can affect how much the student learns from screen reading. Walsh (2016) notes that even though students read from screens for entertainment, when it comes to academic documents they prefer to print a document rather than read it on screen. This preference reflects not only the “high level of concentration and text comprehension” required but also the need to interact with the document by annotating, highlighting, and bookmarking passages for reference (p. 163). Walsh’s research suggests that students do not perceive themselves as able to accomplish as much with screen reading of academic documents as with print reading. This perception is critical: even though many students in the studies expressed interest in screen reading, they doubted their own competence with it, and that doubt could undermine their willingness to engage fully with the reading. Thus, while Walsh does not state this explicitly, the article implies that an educator who assigns screen-based academic reading may need to offer more guidance on how students can engage with the reading (through digital annotation, tagging, and bookmarking) and more encouragement to build students’ self-confidence in their abilities. In addition, Walsh (2016) highlights research showing very little difference in performance outcomes between screen readers and print readers for shorter content, but that for longer, more complex materials, learning and information retrieval can suffer when reading from a screen. Furthermore, texts that were less data- and fact-based, less visual, and demanded more cognitive reasoning were easier to read in paper format than on screen.
These two points suggest that a simple transformation of printed text into a digital format for screen reading, a common practice among educators and journals alike, may not be sufficient for materials to be comprehended as easily as the print version. Rather, using technology to optimize the reading experience through visuals, textual divisions, and structured hypertext may benefit comprehension of longer, more complex materials.

Finally, Walsh presents research outlining how platform characteristics regarding design, user interaction, and navigation can affect comprehension. This research suggests that platform structures not only create technical frustrations but may limit the level of engagement the student can have with the reading or increase the distractions they experience. Not all readings are equally optimized for learning, for all students, on all platforms. When selecting digital materials, then, educators should weigh the platform’s tools (for navigating, annotating, and exploring), students’ overall familiarity with the platform and its usability, and the ability of both educator and student to turn hypertext and pop-ups on and off.

Taken together, these points suggest that educators need a more thoughtful approach to incorporating digital reading materials in their courses, and that students may be better served by educators approaching on-screen reading with more intentional design than is currently in use.

Additional References

Kuiper, E., Volman, M., & Terwel, J. (2005). The Web as an information resource in K–12 education: Strategies for supporting students in searching and processing information. Review of Educational Research, 75, 285–328.

 

Using A Learning Ecology Perspective

Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development, 49, 193-224.

Not all learning is done in school. While such a statement may seem obvious, Barron (2006) notes that studies of learning often focus specifically on formal learning settings (schools and labs) and, in doing so, miss the bigger picture of how a learner connects various resources, social networks, activities, and interactions to create a landscape where their learning takes place. Using a learning ecology framework, the author seeks to understand how a learner goes about learning by examining the multiple contexts and resources available to them. A learning ecology is “the set of contexts found in physical and virtual spaces that provide opportunities for learning” (Barron, 2006). By understanding how the learner negotiates the landscapes for learning that surround them, the author believes educators can think more broadly about ways to connect in-class and outside learning. Using qualitative interviews with students and their families as the focal point of her research, Barron (2006) focuses on creating “portraits of learning about technology” (p. 202) to better understand how interest is found and then self-sustained across several contexts. Through this work she demonstrates that there is no single means by which a student may develop interest and maintain learning, but that common themes are prevalent. Among her case studies, Barron (2006) outlines five modes of self-initiated learning: finding text-based resources to gain knowledge, building knowledge networks for mentoring and opportunities, creating interactive activities to promote self-learning, seeking out structured learning through classes and workshops, and exploring media to learn and find examples of interests.
By examining the interplay of these various strategies, Barron (2006) demonstrates how the learner was an active participant in constructing their own learning landscape, such that “learning was distributed across activities and resources” (p. 218). Because of this, Barron (2006) argues that researchers should consider “the interconnections and complex relations between formal learning experiences provided by schools and the informal learning experiences that students encounter in contexts outside of school” (p. 217).

To me, the strengths of Barron’s work come from three areas. First, by grounding the discussion in learning ecology as “a dynamic entity” shaped by a variety of interconnected interactions and interfaces, she centers it on the learner as an active agent who uses interest to seek out new sources and applications for knowledge. Second, by emphasizing that what a student accesses outside of school may be as critical, if not more so, to fostering their own learning, Barron suggests that the science of learning needs to take in and study these other contexts alongside what is done in formal educational settings. Third, by approaching this through interviews, Barron demonstrates how qualitative data enables a deeper understanding of how and why learning occurs. Such a methodology is time- and analysis-intensive and limits what the researcher can accomplish. In Barron’s case, she presents only three case studies for analysis; it would be interesting and beneficial to see how these same five modes of self-initiated learning appear throughout the larger set of in-depth interviews she conducted, and whether specific variations emerge across different population demographics.

For me, this work is extremely interesting for how it connects to what I understand as I enter the field of education from the field of anthropology. In anthropology, the marrying of qualitative and quantitative data has always been considered necessary to better understand human endeavors, including how we learn and what affects that learning. In anthropology, the human is not only a receptor of culture but an active participant in its transformation, so their agency is a given. Finally, the examination of interconnected contexts and the interplay between them mirrors the integrative way in which humans operate in their world. Thus the learning ecology perspective, married to qualitative data collection, seems to hold great potential for deeper exploration of how learning occurs and what impacts technology can have in that process.