
Thursday, October 2, 2014

Improving learning with Web2.0 tools


The teaching profession is filled with good people, well-intentioned people, people who want the best for the students who sit in front of them. These are people who would often, almost literally, give the shirts off their backs for their students.

Yet all too often these same people fill their daily working lives with practice based on little more than mythology. Challenge it, and the response is that wonderful professional 'put down': 'well of course it works, and you'd need to be pretty stupid to think otherwise.'

That observation leads to more than a touch of nervousness when I start to describe the impact of what we might call 'Web2.0' tools in the classroom. In between the bouts of Senior Management 'stuff' that fill my day, I teach two Level 2 Economics classes, and over the past 5 years I have progressively brought several new tools into the class repertoire. The focus of the work has been on developing students' ability to think critically, and to express that thinking in writing. It is no coincidence that the three external Level 2 Economics standards now count towards UE literacy (writing) in NCEA terms - a commentary on the reasoning and writing expectations of these standards.

I now have a well-established pattern of development through the year. Running in amongst the more traditional 'direct instruction', I begin by developing students' ability to reason and write using online discussion forums. I moderate these carefully, giving every student detailed feedback on their posts (perhaps one of the more powerful differentiation tools?) before setting them to constructing and deconstructing arguments with each other.

As students develop confidence in their ability to argue, and to record those arguments in writing, I then transition them to more formal writing using GoogleDocs. I create a writing structure using the SOLO framework, ensuring that their written work closely matches the expectations of the NCEA standards and assessments. Each piece of work receives detailed feedback from me in writing.

While content tends to be developed through more traditional direct instruction, I often use GoogleForms to elicit responses to content-related questions, gathering the data under 'test' conditions. This provides a rich array of thinking, not all of it correct. Students then take this response data (which I share back with them) and, in groups, co-construct the correct answers to a range of questions that help them develop their understanding of the subject content. Throughout all of this the classes are engaged with Twitter both inside and outside the classroom.
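(For the technically inclined: the 'share back' step is easy to automate once the Forms responses are downloaded as a CSV from the linked results spreadsheet. Below is a minimal Python sketch - the file name is hypothetical, and the only structural assumption is the 'Timestamp' column that Forms adds to its results sheet - that pools every answer under its question, ready for the groups to work through.)

```python
import csv
from collections import defaultdict

# Hypothetical file name: a CSV export of GoogleForms responses,
# downloaded from the linked results spreadsheet.
RESPONSES_CSV = "econ_form_responses.csv"

def pool_responses(path):
    """Group every student answer under its question, ready to be
    shared back for groups to co-construct the correct answers."""
    pooled = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for question, answer in row.items():
                # Skip the automatic Timestamp column and blank answers.
                if question != "Timestamp" and answer.strip():
                    pooled[question].append(answer.strip())
    return pooled

if __name__ == "__main__":
    for question, answers in pool_responses(RESPONSES_CSV).items():
        print(f"\n== {question} ({len(answers)} responses) ==")
        for answer in answers:
            print(f" - {answer}")
```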

I am also teaching specific writing techniques (in particular the Statement, Explanation, eXample/evidence structure with which our English colleagues are enamoured).

Finally, because our external assessment system still requires students to sit examinations in which they write their answers using traditional pen and paper, I then start on the final transition to answering questions on paper.

What have I found? I can hear the voices (quite rightly) calling 'where's your data?' I have no reliable, authentic, replicable data on the outcomes. I do, however, have some observations to make.

Consistent with international research (Goldberg et al.), students in my classes write more and write better than they ever did before. Whereas 10 years ago they might have written a paragraph of 5 lines on paper, they will now write anything between 15 and 30 lines of explanation (mostly - there are ALWAYS exceptions). The reasoning is clearer (in general) and the examination results have improved. Students now more often write responses that I would classify as Relational or Extended Abstract on the SOLO framework. A class taught without these approaches over the same period saw reasonably static results.

This year, for the first time, I ran an NCEA internal assessment using GoogleDocs. The results were dramatically different: the overall pass rate went from 55-75% to 94%. There is almost no evidence of collaboration (the task had to be completed independently). The answers were fuller, and the reasoning clearer and more accurate for most students. I have clear evidence of metacognitive thinking: as I supervised one of the 'in class' sessions of this assessment, I watched over the shoulders of students as they used the comments function to pose questions of themselves on what they had written, ready for review before submitting their work. Interestingly (perhaps??), this was not something I'd 'taught' them to do.

I see great engagement amongst students with both the class work and the subject. The use of Twitter has been enlightening, as students pose questions, offer answers to questions (both in and out of class), and tweet interesting resource material to me and the class. I run a unique hashtag for each class.

Is any of this conclusive? NO!!! Is this any better than the mythologising of which I accused too many colleagues at the start of this post? Probably not.

It is quite possible that the results have nothing whatsoever to do with the technology. What is needed is some quality research. I'm a numbers man; I need quantitative data. Beyond probably meaningless word and line counts (and the NCEA achievement stats for both internals and externals) I have little, so I am not bold enough to suggest cause and effect yet. That's next year's Teaching as Inquiry project.
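Those word and line counts, for what they are worth, take only a few lines of code to produce. A minimal sketch, assuming each student's answer has been saved out of GoogleDocs as a plain-text file in a folder (the folder name is hypothetical):

```python
from pathlib import Path

# Assumes one plain-text file per student, exported from GoogleDocs
# into a folder named "answers/" (hypothetical).
for path in sorted(Path("answers").glob("*.txt")):
    text = path.read_text(encoding="utf-8")
    lines = [line for line in text.splitlines() if line.strip()]
    words = text.split()
    print(f"{path.stem}: {len(words)} words across {len(lines)} lines")
```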

References

1. Goldberg, A., Russell, M., & Cook, A. 'Meta-analysis: Writing with computers 1992-2002'. Technology and Assessment Study Collaborative, Boston College.
2. Hattie, J. Visible Learning for Teachers: Maximizing Impact on Learning. Routledge, 2012.

Thursday, August 14, 2014

More Twitter variation ...

Today's lesson made what I thought was a now-familiar use of Twitter, but the evidence of the outcome was fascinating. Our current topic is the price taker model of trade, a cool variation on the old supply and demand market model. I had taught the model the old-fashioned way in the previous class. Hattie calls it direct instruction, and apparently it works (who knew??? .. <engage sarcasm circuits>).
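For readers who don't teach economics: a small price taking nation trades at a world price it cannot influence, and at that price the gap between domestic supply and domestic demand becomes exports (or imports). Here is a quick matplotlib sketch of the diagram, with made-up linear curves purely for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np

q = np.linspace(0, 100, 200)
demand = 100 - q          # made-up linear demand: P = 100 - Q
supply = 20 + 0.8 * q     # made-up linear supply: P = 20 + 0.8Q
p_world = 80              # world price faced by the price taker

qd = 100 - p_world         # quantity demanded at the world price
qs = (p_world - 20) / 0.8  # quantity supplied at the world price

plt.plot(q, demand, label="Domestic demand")
plt.plot(q, supply, label="Domestic supply")
plt.axhline(p_world, linestyle="--", label="World price Pw")
# The horizontal gap between qd and qs at the world price is exports.
plt.annotate("exports", xy=((qd + qs) / 2, p_world + 2), ha="center")
plt.xlabel("Quantity")
plt.ylabel("Price")
plt.title("Price taker model: exports at the world price")
plt.legend()
plt.show()
```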

The lesson went like this. The boys were handed a small square of blank scrap paper on which they first drew the basic model. I then gave them a scenario and required them to draw its impact on the market, then tweet the outcomes for three variables. The diagrams were reasonably well done (they were, after all, scratch sketches), and about half the boys got the answers correct. We went over the analysis (this wasn't a 'closed question' situation; slightly deeper thinking was required).

I then set the boys a more complex exercise requiring deeper thinking, but following the same format. Now most boys got their analysis correct.

Finally I set them a third problem that included a cunningly set trap. Every boy (in both classes) got the answers correct. How do I know? They were tweeting their answers. Did any of them copy the answers of others? Possibly - that's something I couldn't control for in this setting.

To finish, the boys had to pair and share something new that each had learned, and then tweet their learning.

Here was one tweet:

[Tweet image]

The tweet described the consequences for a small price taking nation of an improvement in its own productivity. This boy had completed an important piece of relational thinking (I think) within the context of the SOLO framework.

On reflection I should have pushed for an additional, deeper question that looked at wider connections and flow-on effects, but given the lack of 'mastery' of the model at the start of the class, I was happy with the progress that we'd made.

The use of Twitter appeared to engage the boys, and to support and affirm their thinking. Of course I can't prove any of that, but the Twitter responses gave me some data on the levels of competence that we had achieved by the end of the lesson.

Friday, August 8, 2014

Twitter, student engagement, and feedback

It started out as a lesson on interest rates and exchange rates. It ended up as a lesson on interest rates and exchange rates, but the path through the lesson wasn't what I'd planned.

I posed two questions to the class: one a closed question, the other a question demanding an explanation. It was a spur of the moment decision, as happens sometimes in teaching when instinct says you should try something, and I asked the boys to open up their Twitter app (either the Twitter web page or Tweetdeck, as it turned out). They answered the closed question with a tweet, a warm-up for Twitter as much as anything else, and then they had to tweet the explanation of what they had just tweeted. The tweets were streamed live onto the whiteboard at the front of the class (yes, I still have a whiteboard at the front of the class, and yes, the boys still sit at desks, in chairs - I'm so old fashioned..).
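(In class the live stream was just the Twitter web page or Tweetdeck on the projector, but the same feed can be scripted. A rough sketch using the tweepy library's 3.x-style streaming API - the credentials and the hashtag are placeholders:)

```python
import tweepy

# Placeholder credentials: substitute your own Twitter app keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class ClassFeed(tweepy.StreamListener):
    def on_status(self, status):
        # Print each tweet on the class hashtag as it arrives.
        print(f"@{status.user.screen_name}: {status.text}")

stream = tweepy.Stream(auth=auth, listener=ClassFeed())
stream.filter(track=["#econ2class"])  # hypothetical class hashtag
```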

Here are a couple of tweets. I haven't shown the whole image, as I wanted to make sure the tweets were anonymised, so I had to crop the screenshots to avoid each boy's name.

[Tweet screenshots]

As the individual tweets appeared on the screen I gave each boy feedback on his response. The feedback took the form of additional questions that might prompt him to edit and improve his tweet, which many did (just a touch of the old Socratic questioning here, based largely on pushing the boys through the SOLO thinking framework). We then repeated the exercise with an additional question.

Normally I'd have run a class discussion. Despite my practised skills in running class discussions, I would not have managed to get an individual explanation or answer from every boy. I asked them to put their hands up if they would have worked to stay under my radar in a class discussion - over half the hands went up.

What happened here? I managed to engage every boy in the class. What's more, I'd managed to give every boy individual feedback on his answer.

Hattie says:
"The aim is to provide feedback that is 'just in time', 'just for me', 'just for where I am in my learning process'."
(Hattie, J., Visible Learning for Teachers: Maximizing Impact on Learning, p. 122)

The feedback I'd given related to exactly where each boy's tweet suggested he was in his understanding of the issue at hand.

This felt like a 'pretty good day at the office'.