The secret life of face-to-face learning in MOOCs (Part 1)

Massive, open, online courses (MOOCs) are generally designed with the intention that learners will learn online. Indeed, the name implies this. And one of the key advantages of MOOCs, from a learning design point of view, is supposed to be that all activity by learners is visible online and therefore available as data for analysis, with the power to help course designers improve the course in the next iteration. Reports on MOOC-based learning usually include quantification of enrolment numbers – often compared with completion numbers – as well as data on average viewing time on any video content, number of clicks indicating resources accessed, and analysis of various kinds of learner activity on the course platform. Data of this nature has been used to explain how learners progress through courses (Perna et al, 2014; Milligan, 2015), and is sometimes claimed to have the capacity to reveal how learning takes place (O’Reilly & Veeramachaneni, 2014).

Some researchers are seeking further insights into the nature of MOOC-based learning by investigating social learning in MOOCs, focusing on interactions taking place within the MOOC discussion forums (for example, Brinton et al, 2014). Other studies have also investigated the nature of social learning in groups set up by MOOC learners outside the MOOC platform, such as on Facebook and other online social networking sites (for example, Alario-Hoyos et al, 2014).

I have a hunch, however, that quite a lot of learning in MOOCs is taking place face-to-face (F2F). I think this is likely to range from spontaneous, informal chats with friends and family members about the MOOC content (I can attest to that happening on the MOOCs that I and my friends have taken), through to regular, planned gatherings of MOOC learners in local meeting places – organised perhaps by a MOOC learner or by a local institution, with varying degrees of formality. I also suspect that, at the latter end of this spectrum, there may be many additional active MOOC participants who have not themselves enrolled on the MOOC and are therefore invisible to the MOOC organisers.

In order to test my hunch, I did a Google Scholar search using the terms “MOOC” and “study group”/“meet-up”, and found a handful of articles confirming that MOOC-related F2F learning is playing an important role in the MOOC experience for some learners in places as diverse as Mexico City, Northern Sweden, Indonesia and Tokyo (Sanchez-Gordon & Luján-Mora, 2016; Norberg et al., 2015; Firmansyah & Timmis, 2016; Oura et al., 2015). However, the literature in this area seems to be embryonic – and it raises more questions than it answers.

A key question is: what are the effects of F2F interaction between MOOC participants on learning? Linked to that question are several others about the why, the how and the what of F2F learning in MOOCs, such as:

  1. Why do MOOC learners choose to participate in F2F gatherings?
  2. How are F2F gatherings organised? (E.g. via Twitter or Facebook, by word of mouth, or by someone sticking a notice on a local library/church hall notice board?)
  3. Who organises the F2F gatherings? (E.g. a learner on the MOOC, a local volunteer teacher, an education institution?)
  4. How often are the F2F gatherings held?
  5. How many people come to the gatherings?
  6. What percentage of the participants at the gatherings are enrolled on the MOOC? (E.g. all, just a few, or just one person who shares content with the rest of the group?)
  7. What happens in these gatherings?
  8. What do participants get out of these gatherings? (E.g. social enjoyment, a meal, drinks, deeper insights or understanding of the MOOC subject matter?)
  9. Do participants have to pay to participate, and if so, where does the money go?

I will soon be summarising my findings from the literature, but in the meantime I would be very grateful to hear from anyone who has experience of F2F learning in a MOOC. I would love to get comments on any or all of the questions above – and any further questions you think I should have asked.

Thank you in advance – I’m looking forward to your comments!


Alario-Hoyos, C., Perez-Sanagustin, M., Delgado-Kloos, C., Parada G., H. A., & Munoz-Organero, M. (2014). Delving into participants’ profiles and use of social tools in MOOCs. IEEE Transactions on Learning Technologies, 7(3), 260–266.

Brinton, C. G., Chiang, M., Jain, S., Lam, H., Liu, Z., & Wong, F. M. F. (2014). Learning about social learning in MOOCs: From statistical analysis to generative model. IEEE Transactions on Learning Technologies, 7(4), 346–359.

Firmansyah, M., & Timmis, S. (2016). Making MOOCs meaningful and locally relevant? Investigating IDCourserians—an independent, collaborative, community hub in Indonesia. Research and Practice in Technology Enhanced Learning, 11(11), 1–23.

Milligan, S. (2015). Crowd-sourced learning in MOOCs. In Proceedings of the Fifth International Conference on Learning Analytics And Knowledge – LAK ’15 (pp. 151–155). New York, New York, USA: ACM Press.

Norberg, A., Händel, Å., & Ödling, P. (2015). Using MOOCs at learning centers in Northern Sweden. International Review of Research in Open and Distance Learning, 16(6), 137–151.

O’Reilly, U.-M., & Veeramachaneni, K. (2014). Technology for mining the big data of MOOCs. Research and Practice in Assessment, 9(Winter), 29–37.

Oura, H., Anzai, Y., Fushikida, W., & Yamauchi, Y. (2015). “What Would Experts Say About This?” An Analysis of Student Interactions outside the MOOC Platform. In O. Lindwall, P. Häkkinen, T. Koschmann, P. Tchounikine, & S. Ludvigsen (Eds.), Exploring the Material Conditions of Learning: The Computer Supported Collaborative Learning (CSCL) Conference 2015, Volume 2 (pp. 711–712). Gothenburg: International Society of the Learning Sciences.

Perna, L. W., Ruby, A., Boruch, R. F., Wang, N., Scull, J., Ahmad, S., & Evans, C. (2014). Moving through MOOCs: Understanding the Progression of Users in Massive Open Online Courses. Educational Researcher, 43(9), 421–432.

Sanchez-Gordon, S., & Luján-Mora, S. (2016). Accessible blended learning for non-native speakers using MOOCs. In Proceedings of 2015 International Conference on Interactive Collaborative and Blended Learning, ICBL 2015 (pp. 19–24). Mexico City.


Three degrees of innovation in designing a new distance programme

The FUTURA report that I co-wrote with Brenda Padilla, Lourdes Guàrdia and Cris Girona was recently published by the Open University of Catalonia’s eLearn Center. It documents current and emerging practices in online teaching in higher education. We analysed over 100 initiatives from a wide range of higher education institutions which were seen as innovative in their respective contexts. A thematic analysis of descriptions of these initiatives indicated five underlying themes running through them, which could be described under the headings Intelligent, Distributed, Engaging, Agile and Situated (or IDEAS for short). The key aspects of each of these themes are outlined in the image below:


Source: Witthaus, Padilla, Guàrdia and Campillo (2016, p.6). Image available at:

One of the purposes of the FUTURA report was to inspire leaders and academics in higher education to consider new possibilities for the ways in which they provide online programmes. Of course, what is a new possibility for one institution or department may be old hat for another – innovation is a relative concept that is meaningless without knowing the context in which it is enacted. With this in mind, I would like to look at how the IDEAS model could be used to inform the design of a new distance learning programme in the specific context of a typical mainstream university that is primarily campus-based, with little or no experience of offering distance learning programmes.

In the tables below, I give indicative examples of three degrees of innovation in the design of the new programme in such an institution, across all five categories of the IDEAS model. The ‘first degree of innovation’ represents existing face-to-face practices being merely tweaked for online delivery (the only innovation being the shift in delivery mode); the ‘second degree’ represents something genuinely new being created to enhance the learning experience for distance learners; and the ‘third degree’ represents a radical departure from mainstream practice at the institution. The degree of innovation in each case is described in relative terms, with traditional campus-based delivery as the norm for the institution I have in mind.

Intelligent pedagogy

First degree of innovation: Using the existing institutional VLE, upload PDFs, provide students with video recordings of f2f lectures, offer online multiple-choice quizzes, and use discussion forums for students to ask questions about the course or the assessment.

Second degree of innovation: Design new online activities specifically for distance learners using the full functionality of the VLE, including discussion forums, blogs, wikis and webinars (e.g. Salmon’s e-tivities as developed at the University of Leicester).

Third degree of innovation: Abandon the VLE altogether and set up a new online learning ecosystem using open source, Web-based, mobile-friendly educational apps (see Merriman et al’s UOC/MIT report on Next Generation Learning Architecture).

Distributed pedagogy

First degree of innovation: Build on any existing partnerships with other higher education institutions, as well as with public and community organisations, that can offer a greater international dimension to the curriculum and can enrich the learning experience for students in some way.

Second degree of innovation: Structure the curriculum in such a way that students are required to choose at least one elective module from elsewhere in your institution, or from another institution altogether. Encourage students to look for suitable, credit-bearing MOOCs as electives to complement the modules provided by your programme team (e.g. Open University Netherlands, as described in Witthaus et al., 2016, p.16).

Third degree of innovation: Collaborate with other HE institutions to recognise non-formal, open learning and award students full credentials (such as degrees) for programmes in which many or most of the modules were studied at other institutions (e.g. the OERu). This may involve disaggregating the services your department or institution provides, i.e. separating out course content provision, teaching, tutorial support, pastoral support, assessment and credentialling activities.

Engaging pedagogy

First degree of innovation: Offer an online version of the f2f induction week for distance learners, with ‘incentives’ (prizes) for participation. In this induction, include a walk-through of the VLE and library resources, a link to the institution’s study skills support portal, and a quiz on key information provided in the programme handbook.

Second degree of innovation: Storyboard each module of the programme during the design process to ensure clear alignment between intended learning outcomes, assessment and learning activities. Within this framework, build online activities (e-tivities) that encourage learners to engage positively at a behavioural, emotional and cognitive level (Trowler, 2010).

Third degree of innovation: Build in social tasks that require learners to solve complex problems collaboratively or to complete projects in cooperation with others. Include elements of gamification in the design of these tasks if appropriate (e.g. TU Delft – see Iosup & Epema, 2013).

Agile pedagogy

First degree of innovation: Provide flexibility for distance learners by having an open-ended timetable for each module, with only one deadline at the end for assignment submission or examination. Provide a personal tutor to support each student via email or Skype, stipulating a maximum number of hours’ support per student per module.

Second degree of innovation: Design collaborative learning activities that can be completed at a flexible pace but with a clear structure and regular milestones/deadlines. Expand the options for recognition of prior learning, for example by giving students Challenge Exams, or by inviting students to submit e-portfolios (e.g. Athabasca University – see Spencer, n.d.).

Third degree of innovation: Allow learners to create personalised pathways by combining modules in different ways and progressing through the curriculum at their own pace, through a series of competency-based assessments (e.g. Capella University’s FlexPath model).

Situated pedagogy

First degree of innovation: Ensure that all module learning outcomes, prescribed readings and assessment tasks include a focus on contextualising the learning in a range of real-world situations.

Second degree of innovation: Seek opportunities for virtual placements for distance learners, mirroring the kinds of placements that f2f students undertake (e.g. the EU VALS project, in which computer programming students created code for businesses).

Third degree of innovation: Set up public, online platforms for students to collaborate with other students outside the institution, employers or industry bodies, and community groups in practical applications of their learning (e.g. SustainabilityConnect at Arizona State University).


While the tables above give only one indicative example for each degree of innovation in each category of the IDEAS model, a great many more examples and scenarios would need to be considered in a real planning exercise. I hope I have shown how the IDEAS model could be used in a workshop or brainstorming process to inspire planning for meaningful innovation when a traditional institution embarks on the development of a new distance learning programme.


Iosup, A. & Epema, D. (2013). An Experience Report on Using Gamification in Technical Higher Education.

Merriman, J., Coppeto, T., Santanach, F., Shaw, C. and Aracil, X. (2016). Next Generation Learning Architecture. Barcelona: Universitat Oberta de Catalunya.

Salmon, G. (2013). E-tivities: The key to active online learning (2nd ed.). London and New York: Routledge.

Spencer, B. (n.d.). Defining prior learning assessment and recognition. Athabasca University.

Trowler, V. (2010). Student Engagement Literature Review. York: Higher Education Academy.

Witthaus, G., Padilla, B.C., Guàrdia, L. and Campillo, C. (2016). Next Generation Pedagogy: IDEAS for Online and Blended Higher Education. Final report of the FUTURA (Future of University Teaching: Update and a Roadmap for Advancement) project. Universitat Oberta de Catalunya.


Lecture capture: what can we learn from the research?

This week I presented at the Chartered Association of Business Schools Learning, Teaching and Student Experience 2016 conference in Birmingham. My talk was on the literature review I had done with Carol Robinson at Loughborough University to find out more about how Lecture Capture is being used by Higher Education institutions around the world, and what impact, if any, it is having on student learning.

My slides are at, and the original paper is at (opens as a Word doc). Because the original paper was published under a Creative Commons (CC-BY-SA) licence, I will include the Executive Summary in this blog post. But first I’ll give a brief summary of some of the responses to my presentation from participants in the audience.

  • Claire Hoy, from Sunderland University, shared some fascinating findings from her PhD research that had included looking at the emotions of students and lecturers when using lecture capture. I’m looking forward to reading publications from her on this topic.
  • At lunch after the presentation, I was delighted to meet one of the contributors to our literature review, Prof. Caroline Elliott, whose investigation into the use of lecture capture at Lancaster University was referred to in our study.
  • One person said that a very popular lecturer at their institution had begun using lecture capture, and there was a dramatic drop in attendance at his lectures. His lecture style had also become more stilted, as he had to stand in one fixed position throughout the lecture. In this instance, it seemed that lecture capture had not added value to the learning experience for students. Others in the audience commented that they had not experienced any significant drop in attendance, and that it was unnecessary for the lecturer to remain within sight of the camera, as long as the audio was clear and the slides were visible.
  • There was one particularly thought-provoking comment from an academic who said that some of his students, particularly international learners, had developed a study style that involved rote-learning from lecture recordings, transcribing the lecture word-for-word and then regurgitating that text back in exams or assignments. There was a brief discussion about the need to include study skills support for these kinds of students.

The executive summary of the full literature review follows. I will be interested in any comments from others who have looked into the use of lecture capture in their own or others’ teaching in Higher Education.

Executive Summary

This report was written for the Centre for Academic Practice at Loughborough University in order to provide a snapshot of how lecture capture (LC) is currently being used in higher education. It draws from literature published internationally between 2012 and mid-2015. The aim was not to provide an exhaustive review of the relevant literature, but rather to provide indicative findings that could inform day-to-day practice in a higher education institution.

It should be noted that, in many of the studies reviewed, data were gathered through student self-reporting, and not all students responded to the surveys, so the findings provide only a partial picture. Also, most of the studies were conducted within specific programmes, mostly in the STEM subjects, so we should be cautious about making generalisations from the findings.

The first finding was that there appeared to be a wide range in the percentage of students who used LC, from as low as 21% to as high as 100% of cohort members. A few studies found that usage increased when the LC recordings were “enriched” with additional online materials, and some found increased usage when LC was made available in formats that lent themselves to mobile access. There were some examples of different usage patterns across different years of study, with first-year students either watching more LCs than students in later years (possibly due to the novelty effect of the technology) or watching fewer (possibly because they had not yet settled into a “serious” study routine). Very little information was found in the literature regarding the advice lecturers give students about the use of LC.

There appears to be a great deal of variety in the manner in which students use LC. LC is most often used for revision and note taking. Almost all respondents claimed to use LC as a supplement to, rather than a replacement for, lectures. Most students use LC selectively, choosing specific sections of videos to watch. A small number of students watch entire LC recordings – these are often (but not always) speakers of English as a second or foreign language or students with learning difficulties such as dyslexia. There is tentative evidence from one study to show that LC is most efficient for learners when the recording contains only the slides and the lecturer’s voiceover (without the video of the lecturer) – this is because the video of the presenter reduces the amount of space available for the slides.

With regard to whether LC has any impact on student learning, the findings here are varied. Some studies found little or no evidence of any impact. Two examples were found of the provision of LC apparently having a negative impact on a minority of students: in these cases, students who used LC as a substitute for attendance at lectures were found to be at a severe disadvantage in terms of their final marks; moreover, those students who attended very few live lectures did not close the gap by watching more LC online. In one study it was found that the quality of student interaction in class dropped when LC was introduced, as students were reluctant to speak up when being recorded. By contrast, in another case it was reported that students’ contributions in class were of a better quality when the class was being recorded.

There were several situations in which a positive relationship was found between the use of LC and student learning outcomes. Students perceive the greatest value in LC for courses that move quickly, rely heavily on lectures, present information that is not readily available from other sources, and emphasise the assimilation of information rather than the development of applied skills (an important distinction in medicine and related subjects, where many of these studies took place). LC was also found to have a positive effect when the teacher used it as a tool to “flip” the classroom and asked students to view the LC before coming to class. In several of the studies, students who were non-native speakers of English emphasised the value of LC to them, and this sentiment was echoed by learners with dyslexia or other learning disabilities.

A positive relationship was also identified between learners who used LC and certain approaches to learning. In one paper (Brooks et al, 2014), learners were categorised according to their usage patterns (i.e. how often they viewed the LCs, and at which points in the semester), and it was found that students categorised as “High Activity” outperformed their peers by up to 16.45%, while students in other clusters obtained more or less the same grades as each other. Other studies concluded that there was a positive effect only for those students who use LC as a supplement to regular lecture attendance, and that LC appeared to be correlated with “deep” learning as opposed to “surface” learning.

The overwhelming majority of students, when asked, say they do not view recorded lectures as a replacement for attending live lectures. This finding was borne out by several studies which included evidence from analytics on lecture attendance and LC views. In one case, increased attendance at live lectures was reported, on the basis that learners felt more confident about their grasp of the subject matter from having viewed the LCs. However, in several studies, lower attendance at live lectures was found to be a direct result of implementing LC. There is some discussion in the literature about contributing factors here, especially around the notion that learners who skip lectures tend to be “surface learners” (as opposed to “deep learners”), and that these learners do not generally compensate for missing lectures by watching the LC.

A few examples were found of lecturers changing their teaching style as a result of the introduction of LC. These generally revolved around the concept of the so-called flipped classroom (teachers providing lecture content for students to read or view before coming to class, and changing their teaching style towards more active, learner-centred learning in the classroom). Other opportunities for innovating in teaching related to the use of LC by lecturers for reflection on their teaching style, and the creation of additional materials to support learners’ independent learning from LC.

A few further points arose out of the literature that are worth highlighting. It is clear from the comments made by students throughout the literature that the provision of LC is perceived as strongly enhancing their learning experience. There is evidence from one study that if LC is mentioned as being an integral part of the learning and teaching approach in marketing brochures or on programme websites, it may influence students’ choice of programme – or even institution to study at. One study also found that LC was particularly useful for students on work placements.

Certain recommendations arose out of the literature – sometimes implicitly. For example, there is a gap in the literature regarding the nature of the advice given by lecturers to students. This might be especially important in the case of first year students who seem to be less consistent in their viewing patterns. Guidance given by lecturers to learners as to how to make effective use of LC may help here. In addition, at-risk students can be identified through a combination of tracking views on the LC system and tracking attendance in class, and automated alerts could be sent to them with advice on recommended behavioural changes, or information about support mechanisms available.

Another important consideration for institutions is the growth in mobile access to LC by students, which suggests that institutional platforms and tools used to deliver LC to learners need to be mobile-friendly.

The paper concludes with responses from the literature to a number of statements from academics in an earlier survey at Loughborough University, where concerns were expressed about the use of LC. It is clear that lecturers need to be supported in the adoption and implementation of LC – not just from a technological point of view but also in terms of their questions about potential copyright infringement, their worries about the potential drop in attendance if LC is introduced, and any other concerns they have about the possible impact of LC on the learning and teaching experience. For LC to have the greatest possible positive impact on learning for students, lecturers, managers and support staff need to jointly create a learning environment that is conducive to effective use of LC and that limits the risks.

For the full paper, see


Validation of Non-formal MOOC-based Learning

The report I co-authored for the EU, “Validation of Non-formal MOOC-based Learning: an Analysis of Assessment and Recognition Practices in Open Education” has at last been published. The study, referred to as OpenCred, began in May 2014, and originally aimed to find examples of recognition of open learning, for example, learning based on open educational resources or massive open online courses (MOOCs). A rework of the original draft was undertaken in late 2015 with substantial input from Anne-Christine Tannhäuser, in response to feedback from reviewers that the word “recognition” needed to be clarified in the report.

Some people use the term “recognition” rather loosely to refer to any form of credentialisation or certification awarded to a learner at any point in their learning journey, while others, including specialists in the field of recognition of prior learning, use the term specifically to refer to the process of validation of credentials by an educational institution or employer. This latter meaning implies a two-stage process – credentialisation, followed by recognition, usually by a different body, at a later stage. To avoid ambiguity in the OpenCred report, we distinguished between credentialisation and recognition as follows:

Credentialisation versus recognition of learning outcomes (Witthaus, Inamorato dos Santos, Childs, Tannhäuser, Conole, Nkuyubwatsi and Punie, 2016, p.6)


One of the main outcomes of the study was a model which describes elements of non-formal, open learning assessment using a “traffic light” metaphor. In this model, a MOOC can be analysed in terms of the extent to which the following six characteristics are present:

  • Suitable, supervised assessment
  • Identity verification of the learner during assessment
  • Partnership and collaboration with other institutions
  • Award of credit points to learners
  • Quality assurance mechanisms
  • Informative certificates acknowledging specific learning achievements
Open Learning Traffic Light Model (Witthaus, Inamorato dos Santos, Childs, Tannhäuser, Conole, Nkuyubwatsi and Punie, 2016, p.6)


The green rim of the hexagon indicates strong presence of each of the elements; the yellow layer indicates some presence, and the red inner core indicates little or no presence. Learners are in a better position to obtain recognition for their MOOC-based learning if the MOOC has all six elements in the green rim. For example, a MOOC learner who travels to a physical location to sit an invigilated exam, where their identity is verified, and who is awarded ECTS credits upon passing the exam, is already in a strong position to have those credits validated at a later stage. If the certificate they receive contains detailed information about the course contents and assessment procedures, all the better. If, in addition, the MOOC provider is known to other organisations in the sector through partnership and collaboration in professional networks, and if the quality assurance procedures used by the MOOC provider are transparent, it would be very difficult for another institution to justify not validating the learning.

It is hoped that the report will be of use to both MOOC providers and institutions/employers that have to recognise prior learning, and will ultimately enable open learners to receive meaningful, life-changing acknowledgement of their learning achievements.

This post was edited on 4 April 2016 with a fuller description of the writing and editing process of the OpenCred report.


Witthaus, G., Inamorato dos Santos, A., Childs, M., Tannhäuser, A., Conole, G., Nkuyubwatsi, B., & Punie, Y. (2016). Validation of Non-formal MOOC-based Learning: An Analysis of Assessment and Recognition Practices in Europe (OpenCred). EUR 27660 EN; doi:10.2791/809371.


My top ten research utilities this week

This is Week 1 of my PhD in Higher Education: Research, Evaluation & Enhancement through Lancaster University, and to get us started, Paul Trowler has asked us to share the top ten research utilities we’ve used in the past week. Paul’s list is great – I didn’t know about Research into Higher Education Abstracts; ABBYY TextGrabber, which apparently takes photos or scans and converts them to editable text (really looking forward to trying that out!); or Fastscanner, which enables you to take photos on your phone and store them as PDFs. So those three go at the top of my list of new tools to try.

Things I’ve used this week include:

Trello – I use this app every day for all my to-do lists, and also for lists of articles to read or videos to view. I can update it on my laptop or my mobile phone, so it enables me to do a lot of work planning while on buses and trains. I find it very comforting having lots of lists… (I’m just waiting for the app that carries out all the actions on those lists!)

Google Docs (and Spreadsheets) – brilliant for collaborating with colleagues on research and writing projects

Google Hangouts for meeting with people in other places (This morning I met with Brenda Padilla who was in Copenhagen, and colleagues from the Open University of Catalonia in Barcelona. While we were online together, we collaboratively edited a number of Google Docs.)

Google Forms – a useful survey tool which lives in Google Drive, along with Google Docs and spreadsheets

Twitter – for finding and sharing links to interesting articles, blog posts and thoughts (I’m @twitthaus – when I got my twitter name I totally entered into the spirit of it…)

Paul mentions another site which I also find hugely useful. In addition, I am on ResearchGate. These have become essential literature search tools for me, along with Google Scholar and Mendeley (an alternative to Zotero, which enables me to store bibliographic data about everything I find, as well as to upload the articles in one place and to highlight and annotate them online – and to share “libraries” with colleagues).

Finally, one more thing I would have used this week if it only existed… is an A4-sized e-book reader for PDFs. It would need to link to Mendeley in order to be really useful. (There are promising signs that Onyx is developing such a device so I’m watching this space…)

That’s it for now. Thanks, Paul, for getting me thinking, and I’m looking forward to hearing from others about their top ten research utilities.

Posted in Research | 5 Comments

Yes, technology can lead pedagogy.

A while back I was issued a #blideo challenge by Terese Bird. (A blideo is described like this by its initiator, Steve Wheeler: “You share a short video clip on your blog and challenge 3 people in your personal learning network to write learning related blog posts about it. When they post their response, they include another short video clip of their choice and challenge 3 other people within their network… and so on.”)

The video clip Terese chose for me was this epic scene from School of Rock:

In it, Jack Black, as the radical, disruptive impostor substitute music teacher in an American school, secretly watches his class playing a classical concert and discovers that they are very capable musicians. Overcome by excitement at the potential he has seen, he bursts into their classroom for the next class and begins by calling on the kids, one by one, to come up to the instrument most closely resembling the one they were playing in the classical concert and follow his directions to play the first few bars of Deep Purple’s Smoke on the Water. The scene is fast-paced and bristling with tension as each child attempts to do something new on an unfamiliar instrument under the urgent and energetic guidance of their teacher, and under the stunned gaze of all their classmates. Of course, within five minutes they end up sounding like they’ve been in a band together playing Deep Purple all their lives – there might be some artistic licence in that, but the kids were very quickly getting the hang of their new instruments. Some of them had to make a relatively small leap from a piano to an electric keyboard, or from an acoustic to an electric guitar, while others had to go from the cello to the bass guitar, or from a single percussion cymbal to a full set of drums. What made this scene work was that Jack Black didn’t start by trying to get the kids to understand, or even appreciate, the musical intentions of Deep Purple – he went straight for the instruments (the technology) and got the kids actually thumping the keyboard, plucking a string on the bass guitar and so on. And that’s what makes it plausible too – we can see that, by actually experiencing the new technology with a real song, they are truly getting the feel of what it means to play rock music.

The parallels to helping teachers learn how to teach with online technologies are obvious – I have sat in workshop sessions where lecturers have started out being avowedly anti-technology, but within an hour have become ardent users of blogs, wikis or other tools in their teaching – simply as a result of being asked to suspend their judgment and try it out in the safe space of a workshop setting. There’s plenty of time for discussion about pedagogic possibilities and rationales after folks have got the feel of what they can do with the technology – and they’re having fun with it.

So now it’s my turn to issue a #blideo challenge. Sticking with the musical theme, here is a clip from the London Symphony Orchestra’s Masterclass in conducting:

Sandra Huskinson, Brenda Padilla and Ale Armellini, it’s over to you… and anyone else who wants to take on this challenge!

Posted in learning design | 1 Comment

Students’ views on independent learning: findings from an HEA-NUS study

Today I participated in a workshop run by the Higher Education Academy and National Union of Students in York in which they shared their findings from a study on independent learning. The work is not yet complete – a report will be submitted to the HEA in July, and hopefully disseminated more widely after that.

Some background: the study was conducted by Liz Thomas Associates, and included obtaining diary data from 120 undergraduate students on a week-by-week basis reflecting on their experience of independent learning. ‘Independent learning’ was defined for the students as ‘any course-related study that you undertake when not being taught by lecturers or other academic staff’. To gather more in-depth data, three of the diary-writers were selected to conduct peer interviews with approximately 18 others.

Preliminary findings show that students tend to see independent learning as being one of two kinds of activities:

  1. ‘Homework’-type activities which are reminiscent of school, such as revision of lecture notes, guided reading, or quiz/task completion (referred to by the researchers as ‘IL1’ for short);
  2. Going beyond what was presented in lectures or prescribed readings to find out more information, solve a problem or generate new insights (referred to as ‘IL2’).

Much discussion was had about these two key findings at the workshop, and some speculation on the implications. In no particular order, here are some of the main points from the discussion:

  • There appeared to be some correlation between students holding the IL1 view and those who wanted more structure and support for their learning (i.e. a more ‘school-like’ environment). These students also tended to believe that the reason why their lecturers asked them to carry out independent study was because there wasn’t enough time in class to ‘cover’ all the content.
  • By contrast, there appeared to be some correlation between students holding the IL2 view and an openness to risk-taking (e.g. putting the time into reading something that might turn out not to be relevant), as well as a belief that independent learning was valuable for its own sake. (Whether these correlations were significant or not was not clarified, but as the analysis is ongoing, I expect these issues will be given further attention by the research team.)
  • Students showed awareness of the fact that programmes are usually designed to focus more on the acquisition of core concepts in first year, with a gradual increase in student responsibility for conducting independent learning through second, third and (where applicable) fourth years. They appreciated this progression, and final-year students generally showed much greater confidence in their ability to undertake independent learning than first-year students did.
  • It was suggested by colleagues at the workshop that a discipline divide might emerge when the data is more finely analysed, in that students in the ‘hard’ sciences would be more closely aligned with IL1 than those in the social sciences. Examples were given from geology, dentistry and computer science to back up the suggestion that, in some disciplines, there is a lot of ‘stuff’ to learn and a generally agreed sequence for learning this stuff before students can be asked to apply their knowledge meaningfully. Requirements from professional bodies were mentioned as a (sometimes unhelpful) contributing factor here.
  • Approximately 25% of the students in the study were international students. National or cultural differences have not yet been correlated with the other findings.
  • Students indicated that they typically form social networks of peers to support their learning – often on Facebook or Whatsapp. The workshop did not discuss whether this social learning was more correlated with IL1 or IL2. (I think this would be an interesting avenue to explore.)
  • Survey and interview results indicate that assessment is a significant driver in motivating students to learn in certain ways. Predictably, multiple-choice-type assessments tend to encourage more rote learning, while authentic problem-solving tasks tend to encourage more independent learning.

This last point – the extent to which assessment drives (or indeed should drive) independent learning – received a lot of attention in the workshop. Opinion was divided: on the one hand, some colleagues argued against a narrow, ‘teaching-to-the-exam’ approach to curriculum design and delivery. On the other hand, the case was made (by me amongst others) for re-evaluating our assessments to ensure that they a) reflect the full richness and depth of the intended learning outcomes for the module; b) are creative, interesting and engaging for students (and markers!); and c) provide choice for learners. A good example of such an assessment is given in a case study in the HEA’s ‘Compendium of Effective Practice in Directed Independent Learning’ by my colleague at Loughborough University School of Business and Economics, Keith Pond. In an assignment for the ‘Corporate Reconstruction and Turnaround’ module, students analyse a real, currently failing business based on court records. They are also given the opportunity to meet the Administrator who is dealing with the case at the time of the module delivery. In this assignment students carry out substantial independent learning – from searching for and selecting relevant information, through to analysing the case and predicting the business’s chances of survival. This kind of learning is far more like the work that students will do as members of a professional community of practice in their future careers than revising their lecture notes for an exam.

Posted in independent learning | 3 Comments

Making massive learning social – the next big challenge for MOOCs?

Yesterday I attended the University of London’s annual RIDE conference. One of the keynote speakers was Mike Sharples, Academic Lead for the OU-owned FutureLearn. He mentioned that the design of the FutureLearn platform was based on principles from Laurillard and Pask’s Conversational Framework. One of the ideas behind the platform is that the interface should seamlessly integrate the content and the conversations around the content, so that learners can interact with one another effortlessly about each piece of content provided.

By way of example, he described the Forensic Science MOOC by the University of Strathclyde, which is based upon a reconstruction of an actual murder case. Each week, learners are given a bit more information about the murder via videos and text, along with another forensic technique to help them solve the mystery. There are no discussion forums; however, next to each video is a rolling comments feed, where learners see the most recent comments from other learners and can add replies or new comments. In this comments feed, learners share their ideas in order to collaboratively solve the mystery. Because of the large number of learners on the course, it would be impossible for anyone to scroll through and read all of the comments (in one case, in a different MOOC, 17,000 comments were recorded next to one video!), so there is a certain degree of serendipity at play as to whether the learner happens to see anything that catches their interest in the moment that they look at the comments. FutureLearn helps learners filter comments by means of three tabs at the top of the screen: “Following” (listing comments from other learners whom they have chosen to follow), “Most popular” (comments with the most “Likes” from other learners) and “My comments” (previous comments made by the learner).
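For readers who like to see ideas made concrete, the tab-based filtering Mike described can be sketched in a few lines of Python. This is purely illustrative – the class and function names are my own, not FutureLearn’s actual code or API:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    likes: int = 0

def filter_comments(comments, tab, me, following):
    """Return the comments a learner would see under a given tab.

    comments  - all comments on this content step, newest first
    tab       - "following", "most popular" or "my comments"
    me        - the current learner's name
    following - the set of learners this learner has chosen to follow
    """
    if tab == "following":
        return [c for c in comments if c.author in following]
    if tab == "most popular":
        return sorted(comments, key=lambda c: c.likes, reverse=True)
    if tab == "my comments":
        return [c for c in comments if c.author == me]
    # Default rolling feed: everything, newest first.
    return list(comments)
```

The point of the sketch is that each tab is just a different view over the same single pool of comments – there is no separate forum structure anywhere, which is exactly what makes persistent small groups hard to support.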

My question to Mike in the Q&A session was whether feedback from learners indicated that there was a desire to be able to learn in small groups, and whether that would be technically possible to set up on FutureLearn. This question was predicated on a hypothesis I have that social learning is more effective in small groups where ties between learners are relatively strong, rather than in a massive global pool of learners where they might never interact with the same person twice. A recent study at Oxford University (described in “What are the limitations of learning at scale? Investigating information diffusion and network vulnerability in MOOCs“) addresses the issue from a networked learning perspective, based on an investigation into learner participation in the discussion forums on two Coursera MOOCs, and concludes that:

[…] when it comes to significant communication between learners, there are simply too many discussion topics and too much heterogeneity […] to result in truly global-scale discussion. Instead, most information exchange, and by extension, any knowledge construction in the discussion forums occurs in small, short-lived groups […]

So, when faced with the opportunity to interact with thousands of other learners, the learners in this study tried to interact in small groups. The fact that these small groups were short-lived might have been because the MOOCs did not provide a convenient way for learners to repeatedly interact with others in the same small groups throughout the course.

Back to the Q&A: Mike replied that the idea of enabling group work on FutureLearn is under active consideration. The barrier seems to be technical. I can see why FutureLearn abandoned threaded discussion forums – traditional forums might not be the best way to enable group interaction at scale. (I have previously commented on Gilly Salmon’s successful use of group-based discussion forums in the Carpe Diem MOOC, but I’m not sure how scalable that would be in a MOOC running into the tens or hundreds of thousands.) So, within the framework of FutureLearn’s approach, I’m wondering whether the solution would be to add another tab at the top of the rolling comments section, which might be called something like “Study Group”. This tab would show comments made by a relatively small group of learners, generated by an algorithm based on information provided by learners in their profiles (a bit like the algorithms used in online dating sites, where members are matched with others who have ticked the same boxes as them) plus a randomly generated code. Codes would only be given to those participants who had taken the trouble to complete their profiles, as this is a sign of commitment to at least starting the MOOC, and each code would be allocated to a maximum of, say, 40 participants, thus effectively creating a group of 40 learners. By clicking on the “Study Group” tab, every learner would then be able to tap into the comments of only those 40 learners with the same code as them. Assuming that 25–50% of those learners who created their profile actually completed the course, we could predict that between 10 and 20 people of the initial 40 in each group would complete the course together. The actual maximum number of learners per group and the predicted number of completers would need to be derived from participation statistics from previous iterations of the course.
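As a thought experiment, here is a minimal Python sketch of the allocation step I’m imagining. Everything in it is an assumption drawn from the paragraph above – the 40-learner cap, the “completed profile” requirement, and the profile matching, which I’ve reduced to simple bucketing by identical ticked boxes rather than anything as sophisticated as a real matching algorithm:

```python
import random

GROUP_SIZE = 40  # assumed maximum number of learners per study group

def assign_study_groups(profiles, group_size=GROUP_SIZE, seed=None):
    """Assign a random group code to each learner with a completed profile.

    profiles - dict mapping learner id -> set of ticked profile boxes
               (learners with an empty profile get no code at all)
    Returns a dict mapping learner id -> group code string.
    """
    rng = random.Random(seed)

    # Bucket learners by their profile answers, so group-mates share
    # interests (a crude stand-in for a dating-site-style matcher).
    buckets = {}
    for learner, interests in profiles.items():
        if not interests:
            continue  # incomplete profile: no commitment signal, no code
        buckets.setdefault(frozenset(interests), []).append(learner)

    codes, used = {}, set()
    for members in buckets.values():
        rng.shuffle(members)
        # Slice each bucket into groups of at most group_size learners.
        for i in range(0, len(members), group_size):
            while True:  # draw a fresh random code until it is unique
                code = f"G{rng.randrange(10**6):06d}"
                if code not in used:
                    used.add(code)
                    break
            for learner in members[i:i + group_size]:
                codes[learner] = code
    return codes
```

The “Study Group” tab would then simply filter the rolling comments feed to authors sharing the viewer’s code – the same mechanism as the existing “Following” tab, but over an assigned rather than self-selected set of people.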

Speaking personally as a learner who dropped out of a FutureLearn course last year because of the lack of a sense of coherent community, this would be a strong motivator for me to complete the next FutureLearn MOOC!

Posted in mooc | 9 Comments

Storyboarding OOC Week 3: learning outcomes and assessment

Week 3 of the Storyboarding OOC just ended. It was an interesting one, with an initial debate around the question as to whether learning outcomes are really necessary. There was general agreement that statements of intended learning outcomes are necessary and important for guiding the course design process, and that they also help in managing learners’ expectations. The discussion then focused on writing clear outcome statements, and describing the assessments that were being planned. Several participants shared the links to their storyboards (in Linoit, Popplet and Google Sheets), showing how the learning outcomes are distributed across the timeline of the course, and how they are aligned with any assessment tasks.

Meanwhile, a number of people who joined the OOC late have been catching up on the earlier activities, introducing themselves to everyone (Week 1) and selecting their storyboarding tools (Week 2).

What I am really enjoying about the OOC is the fact that the storyboard provides a visual point of reference for all the activities, as participants focus on a different element of their storyboard each week, gradually building it up in layers. This represents a more realistic use of storyboards, in my view, than when they are simply inserted into a course on learning design as one component of many. In real life, course designers will spend many hours on a storyboard, often spread over many days or weeks, especially if they are creating a new course from scratch. Also, working through the storyboard one layer at a time in the OOC enables course designers to focus on every aspect of course design, from the high-level design with overarching outcomes, and rough outlines of assessment tasks, learning activities and supporting resources, down to more granular descriptions of each of these elements. Course designers can keep returning to the storyboard and adding more detail, until they get to the point where they feel it has served its purpose. It will be interesting to see whether different participants on the OOC have different criteria for considering their storyboards to be complete.

This week we’re focusing on adding in the learning activities to the storyboard – in the first instance, just titles and purpose statements of the learning activities (using Salmon’s five-stage model to guide sequencing decisions), and later adding more detail, such as description of the task, response to other learners and timing. I’ll be back with the next update in a week’s time.

Posted in learning design, open education | 1 Comment

Storyboarding OOC Week 2: tools and processes for storyboarding learning design

It’s the end of Week 2 of the Storyboarding OOC, so it’s time for another update.

New participants

We now have 137 participants registered (23 more than this time last week). New participants include a handful of students on an Instructional Design course at the University of Mauritius and their teacher, who have said they are participating in the OOC as part of an international “benchmarking” process. We’re happy to have them on the course and will try to get some feedback from them about how the OOC has contributed to their learning when the OOC is over.

Question of the week: “When do you do storyboarding?”

One participant posted in the discussion forum:

One thing I found tricky to understand from the video was that the small flip chart/post-its prevented too much information and deliberation about the substance of the course – it seemed to me like you’d have to have done a lot of pre-planning and preparation to make the storyboarding effective. So one question – when is the best time to do the storyboarding in the course design process? After you got a really clear idea of learning outcomes, aligned activities/assessment, technologies etc. So the storyboard is really just to visualise and sort of organise all the prior work? Or can it be used as part of this deeper thinking about the course?

In my answer, I said that I think the demo videos (by Gilly Salmon and team and also the ones I’ve created about using Linoit and Popplet) are a bit misleading in terms of when to use storyboarding, because we have tried to encapsulate a process that is usually spread out over many days or weeks in a 10-minute video. Storyboarding is useful right from the start of the process, and ideally the storyboard should be built up in layers, with the course team adding more detail in a fairly structured way over time. You need to make sure you can keep adding more layers of detail, so if you’re using a flipchart, this may mean you end up with a series of sticky notes stuck on top of one another in parts of the storyboard. Storyboards often spill over onto several flipchart sheets as they develop and become more detailed.

The video I created on using Popplet for storyboarding shows a fairly early stage in the process. I had probably spent about 2 hours developing the storyboard for the course on ‘Online Academic Identity’ before making the video, so you can see my ideas were still at a very formative stage. I did all my thinking on Popplet and did not make any handwritten or other notes. On the other hand, my Linoit demo is really a summary of a process that Brenda and I have been going through since October 2014. The storyboard on Google Docs reflects this longer-term process the best. You’ll see there is a ‘Brief version’ and a ‘Detailed version’. We worked on the brief version first. The detailed version is still changing as we finalise preparations for activities and resources for the remaining weeks of the course – and you might see this version changing before your eyes if you happen to go in while we are working on it.

Tips for brainstorming when creating a storyboard collaboratively

There were many great tips given by participants who had experience in creating storyboards, mainly using flipcharts and coloured sticky notes or similar paper and pen tools. Here are some of them:

  • visibility: not only the size of the chart and legibility of the writing, but mobility and access to the chart – people can get turned off if they can’t really see or follow the build-up (as an alternative, “bricks” on a wall made of sheets of A4 and blu-tack worked really well once with a group of 15 designing a large departmental programme)
  • trying to get (at least some) sticky notes on display before they go up on the chart can sometimes be helpful: it will get people engaged and get the sharing started more quickly
  • try to be open, and encourage everyone else to be open to ideas that come up; but at the same time ask people to clarify, and check whether people are saying the same things in different ways
  • see it as an organic process of review and re-shaping on the go – let it flow
  • as a group, look for places where ideas merge and overlap, and come to a common agreement about re-shaping if it’s needed
  • ensure you get some sticky notes up on display before the group activity starts, to encourage sharing
  • try and model what you need on the flip notes – it might be better to have “Students practise w BP cuff” than “Blood pressure” (i.e. being clear and descriptive)
  • make sure everyone’s voice is heard; take time to consider each contribution and see how it fits with the whole plan
  • we begin from learning objectives rather than from topics and units of time – although we get there eventually
  • use large post-its and, if you can, have one or two people with neat printing who actually write down what each person wants to contribute
  • start with loose groupings as you don’t want to shut down new ideas – I always have a “sidepen” for ideas that don’t really fit but may fit later
  • take pictures of the wall or whiteboard at significant points in the development of your course

Storyboarding tools

While many people had used flipcharts and sticky notes, most felt it was time to try online brainstorming tools for greater ease of use with distributed teams, and for greater ease of storage and version control. About six people said they liked Linoit, and we had the same number of positive comments on Popplet. Two people said they planned to use Google Sheets for their storyboard. A few are experimenting with Gliffy. There was some interest in Scrumblr, but this seems to have less functionality than the others. There was some frustration from someone who struggled to get access to Linoit (password not recognised and no immediate solution offered by Linoit), and several who noted that Popplet was erratic, often not functioning on a particular browser or being unavailable at a particular time.

There were also some questions about the terms and conditions of the various online tools for storyboarding, as well as whether these tools would work on mobile devices. These are obviously important issues, and I’ll report more on them in a later post.

What’s next

In Week 3 (starting tomorrow) we will focus on developing the learning outcomes and assessment for the courses being storyboarded. Watch this space for further updates :-)

Posted in learning design | 1 Comment