Introducing the CASS Guided Reading Project (Part 1)

In collaboration with the Department of Psychology, CASS is investigating the critical features of guided reading that can benefit the language and literacy skills of typically developing children.

What is guided reading?

Guided reading is a technique used by teachers to support literacy development. The teacher works with a small group of children, typically no more than six, who are grouped according to ability and who work together on the same text. This ability-grouping enables the teacher to focus on the specific needs of those children, and to provide opportunities for them to develop, through discussion, both their understanding of what they read and their reading fluency. In this project we are investigating the features of effective guided reading, with a particular emphasis on reading comprehension.

Features of guided reading

Teachers aim to bridge the gap between children’s current and potential ability. Research indicates that this is best achieved by using methods that facilitate interaction, rather than by providing explicit instruction alone (e.g., Pianta et al., 2007).

The strategies that teachers can use to support and develop understanding of the text are best described as lying on a continuum, from low challenge strategies – for example, asking children simple yes/no or closed-answer questions – to high challenge strategies, which might require children to explain a character’s motivation or evaluate the text. Low challenge strategies place tight constraints on possible answers: they may simply require children to repeat back part of the text or provide a one-word response, such as a character’s name. High challenge strategies provide greater opportunity for children to express their own interpretation of the text.

Low challenge questions can be used by the teacher to assess children’s basic level of understanding, and are also a good way to encourage children to participate in the session. High challenge questions assess deeper understanding and more sophisticated comprehension skills. Skilled teachers adapt their questions and the level of challenge to the group and to individual children’s understanding and responsiveness, with the intent of gradually handing over responsibility to the children to take turns in leading the discussion. This technique is used to scaffold the discussion.

Our investigation: How is guided reading effective?

Previous studies observing guided reading highlight substantial variability in what teachers do and, therefore, gaps in our understanding of how guided reading can best be used to foster language and literacy skills. A more fine-grained and detailed examination of teacher input and its relation to children’s responses is needed to determine which teacher strategies are most effective in achieving specific positive outcomes (see Burkins & Croft, 2010; Ford, 2015).

Previous research on this topic has typically taken the form of observational studies, in which researchers have had to laboriously parse and hand-code transcriptions of the teacher-children interactions (a corpus) to identify teacher strategies of interest. Because this is a long and painstaking process, it limits the size of the corpus to one that can be coded within a realistic time window. In this project, we aim to make the most of these naturalistic classroom interactions by using powerful corpus search tools. These enable precise computer searches for a wide range of language features, and are much faster and more reliable than hand-coding. This enables us to create and explore a much larger corpus of guided reading sessions than in previous studies, making a fine-grained analysis possible. For an introduction to corpus search methods, check out this CASS document.
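To give a flavour of what such a search might look like, below is a minimal, purely illustrative Python sketch: a crude pattern-based pass over transcribed teacher turns that separates closed (low challenge) from open (high challenge) question openings. The patterns and the example turns are invented for illustration, and are not the project’s actual coding scheme.

    import re

    # Illustrative only: crude patterns for question openings.
    CLOSED = re.compile(r"^(is|are|was|were|do|does|did|can|could|has|have)\b", re.I)
    OPEN = re.compile(r"^(why|how|what do you think|explain)\b", re.I)

    def classify_question(turn: str) -> str:
        """Label a teacher question as 'open', 'closed', or 'other'."""
        q = turn.strip()
        if OPEN.match(q):
            return "open"
        if CLOSED.match(q):
            return "closed"
        return "other"

    turns = [
        "Did the fox catch the hen?",         # closed: invites yes/no
        "Why do you think the fox gave up?",  # open: invites interpretation
        "What was the dog's name?",           # other: wh- form, closed answer
    ]
    for t in turns:
        print(classify_question(t), "-", t)

A real analysis would work over part-of-speech tagged corpus queries rather than raw regular expressions, but the principle – searching systematically for form-based cues to question type – is the same.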

Future blogs will provide more detail about the specific corpus search measures that CASS is using to identify what makes for effective guided reading. The next blog will explain the motivation for using corpus methods to investigate the effective outcomes of guided reading.

Meet the Author of this blog: Liam Blything

Since July 2016, I have been working as a Senior Research Associate on the CASS guided reading project. My Psychology PhD, awarded by Lancaster University, focused on language acquisition. It is a great privilege to be working on such an exciting project, one that answers psychological questions with advanced corpus linguistics methods. I look forward to providing future updates!


References

Burkins, J., & Croft, M. M. (2010). Preventing misguided reading: New strategies for guided reading teachers. Thousand Oaks, CA: Corwin.

Ford, M. P. (2015). Guided reading: What’s new, and what’s next? North Mankato, MN: Capstone.

Pianta, R. C., Belsky, J., Houts, R., Morrison, F., & the National Institute of Child Health and Human Development Early Child Care Research Network. (2007). Opportunities to learn in America’s elementary classrooms. Science, 315, 1795–1796.


Controlling the scale and pace of immigration: changes in UK press coverage about migration

The issue of immigration featured prominently in debates leading up to the June 2016 EU Referendum. It was argued that too many people were entering the UK, largely from other EU member states. Politicians and the media also talked about ‘taking back control’, notably in the contexts of deciding who can enter Britain and enforcing borders. But, as our new Migration Observatory report ‘A Decade of Immigration in the British Press’ reveals through corpus linguistic methods, such language wasn’t necessarily new: under the coalition government of 2010-2015, the press was already increasingly casting migration in terms of its scale or pace, and the relative importance of ‘limiting’ or ‘controlling’ migration rose over this period too.

Our report aimed to understand how British press coverage of immigration had changed in the decade leading up to the May 2015 General Election. We built upon previous research done at Lancaster University (headed by CASS Deputy Director Paul Baker) into portrayals of migrant groups. Our corpus of 171,401 items comes from all 19 national UK newspapers (including Sunday versions) that continuously published between January 2006 and May 2015. Using the Sketch Engine, we identified the kinds of modifiers (adjectives) and actions (verbs) associated with the terms ‘immigration’ and ‘migration’.
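For readers curious about what this kind of query involves under the hood, here is a minimal sketch of a comparable approach using the spaCy library – an assumption for illustration, since the report itself used the Sketch Engine’s word sketches. It counts adjectival modifiers of ‘immigration’ and ‘migration’ via dependency parsing; the two example sentences stand in for the 171,401-item corpus.

    import spacy
    from collections import Counter

    # Assumes spaCy and its small English model are installed:
    #   pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    texts = [
        "The paper warned about mass immigration.",
        "Critics condemned illegal immigration and uncontrolled migration.",
    ]

    modifiers = Counter()
    for doc in nlp.pipe(texts):
        for tok in doc:
            if tok.lemma_ in ("immigration", "migration"):
                # adjectival modifiers attach to the noun via 'amod'
                modifiers.update(ch.lemma_ for ch in tok.children
                                 if ch.dep_ == "amod")

    print(modifiers.most_common())  # e.g. [('mass', 1), ('illegal', 1), ...]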

The modifiers most frequently associated with either of these terms included ‘mass’ (making up 15.7% of all modifiers appearing with either word), ‘net’ (15.6%), and ‘illegal’ (11.9%). Closer examination of the top 50 modifiers revealed a group of words relating to the scale or pace of migration: in addition to ‘mass’ and ‘net’, these included terms such as ‘uncontrolled’, ‘large-scale’, ‘high’, and ‘unlimited’. Grouping these terms together, and tracking their proportion of all modifiers against those relating to illegality (another prominent way of referring to immigrants), shows that they made up an increasingly large share of modifiers under both the Labour and coalition governments from 2006 onwards. Figure 1 shows how these words made up nearly 40% of all modifiers in 2006, but over 60% in the first five months of 2015. Meanwhile, the share of modifiers referring to legal aspects of immigration (‘illegal’, ‘legal’, ‘unlawful’, or ‘irregular’) declined from 22% in 2006 to less than 10% in January-May 2015.

Figure 1. [Chart: share of ‘scale/pace’ and ‘legality’ modifiers of ‘immigration’/‘migration’ in the national press, 2006-2015.]

Another way of examining this dimension of ‘scale’ or ‘pace’ is to look at the kinds of actions (verbs) done to ‘immigration’ or ‘migration’. For example, in the sentences ‘the government is reducing migration’ and ‘we should encourage more highly-skilled immigration’, the verbs ‘reduce’ and ‘encourage’ signal some kind of action being done to ‘immigration’ and ‘migration’. In a similar way to Figure 1, we looked at the most frequent verbs associated with either term. A category of words expressing efforts to limit or control movement—what we call ‘limit’ verbs in the report—emerged from the top 50 verbs. These included examples such as ‘control’, ‘tackle’, ‘reduce’, and ‘cap’.
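Here is a matching sketch for the verb side of the analysis, again assuming spaCy rather than the Sketch Engine used for the report: verbs whose direct object is ‘immigration’ or ‘migration’ approximate the ‘actions done to’ the terms. The example sentences are invented.

    import spacy
    from collections import Counter

    # Assumes spaCy's en_core_web_sm model, as in the earlier sketch.
    nlp = spacy.load("en_core_web_sm")

    verbs = Counter()
    for doc in nlp.pipe([
        "The government pledged to reduce immigration.",
        "We should encourage more highly-skilled immigration.",
    ]):
        for tok in doc:
            if tok.lemma_ in ("immigration", "migration") and tok.dep_ == "dobj":
                verbs[tok.head.lemma_] += 1  # governing verb: 'reduce', 'encourage'

    print(verbs.most_common())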

Figure 2 shows how the overall frequency of these limit verbs, indicated by the solid line, rose roughly fivefold between 2006 and the high point in 2014, with the sharpest rise from 2013. But, as a share of all verbs expressing some action towards ‘immigration’ or ‘migration’, this category consistently made up 30-40% from 2010 onwards. This suggests that, although these kinds of words weren’t especially frequent in absolute terms until 2014, the press had already started moving towards them from 2010.

Figure 2. [Chart: frequency (solid line) and share of ‘limit’ verbs acting on ‘immigration’/‘migration’, 2006-2015.]

These results show how the kind of language around immigration has changed since 2006. Corpus methods allow us to look at a large amount of text—in this case, over a significant period of time in British politics—in order to put recent rhetoric in its longer context. By doing so, researchers contribute concrete evidence about how the British press has actually talked about migrants and migration. Such evidence opens timely and important debates about the role of the press in public discussion (how does information presented through media impact public opinion?) and the extent to which press outputs should be scrutinised.

About the author: William Allen is a Research Officer with The Migration Observatory and the Centre on Migration, Policy, and Society (COMPAS), both based at the University of Oxford. His research focuses on the ways that media, public opinion, and policymaking on migration interact. He is also interested in the ways that migration statistics and research evidence are used in non-academic settings, especially through data visualisations.

New CASS PhD student!

CASS is delighted to welcome new PhD student Andressa Gomide to the centre, where she will be working on data visualization in corpus linguistics. Continue reading to find out more about Andressa!


I am in the first year of my PhD in Linguistics, which is focused on data visualization for corpus tools. As a research student at CASS, I am looking forward to gaining a better understanding of how different fields of study use corpus tools in their research.


I’ve been involved with corpus linguistics since 2011, when I started my undergraduate research program on learner corpora. Since then, I have developed a strong interest in corpus studies, which led me to devote both my BA and my MA to this theme. I completed both degrees at the Universidade Federal de Minas Gerais in Brazil.

Aside from my interest in linguistics, I also enjoy outdoor activities such as cycling and hiking.

CASS goes to the Wellcome Trust!

Earlier this month I represented CASS in a workshop, hosted by the Wellcome Trust, which was designed to explore the language surrounding patient data. The remit of this workshop was to report back to the Trust on what might be the best ways to communicate to patients about their data, their rights respecting their data, and issues surrounding privacy and anonymity. The workshop comprised nine participants who all communicated with the public as part of their jobs, including journalists, bloggers, a speech writer, a poet, and a linguist (no prizes for guessing who the latter was…). On a personal note, I had prepared for this event from the perspective of a researcher of health communication. However, the backgrounds of the other participants meant that I realised very quickly that my role in this event would not be so specific, so niche, but was instead much broader, as “the linguist” or even “the academic”.

Our remit was to come up with a vocabulary for communication about patient data that would be easier for patients to understand. As it turned out, this wasn’t too difficult, since most of the language surrounding patient data is waffly at best, and overly technical and incomprehensible at worst. One of the most notable recommendations we made concerned the phrase ‘patient data’ itself, which we thought might carry connotations of science and research, and perhaps disengage the public; we therefore suggested that the phrase ‘patient health information’ might sound less technical and more transparent. We undertook a series of tasks which ranged from sticking post-it notes on whiteboards and windows, to role play exercises and editing official documents and newspaper articles. What struck me, and what the diversity of these tasks demonstrated particularly well, was that the suitability of our suggested terms could only really be assessed once we took the words off the post-it notes and inserted them into real-life communicative situations, such as medical consultations, patient information leaflets, newspaper articles, and even talk shows.

The most powerful message I took away from the workshop was that close consideration of linguistic choices in the rhetoric surrounding health is vital if health care providers are to improve the ways they communicate with the public. To this end, as a collection of methods that facilitate the analysis of large amounts of authentic language data in and across a variety of texts and contexts, corpus linguistics has an important role to play in providing such knowledge in the future. Corpus linguistic studies of health-related communication are currently small in number, but continue to grow apace. Although the health-related research being undertaken within CASS, such as Beyond the Checkbox and Metaphor in End of Life Care, goes some way to showcasing the rich fruits that corpus-based studies of health communication can bear, there is still a long way to go. In particular, future projects in this area should strive to engage consumers of health research not only with our findings, but also with the (corpus) methods that we have used to get there.

Corpus Linguistics, and why you might want to use it, despite what (you think) you know about it

As part of the Spatial Humanities project at Lancaster University, and in collaboration with the Centre for Corpus Approaches to Social Sciences, the central aim of my PhD research project is to investigate the potential of corpus linguistics to allow for the exploration of spatial patterns in large amounts of digitised historical texts. Since I come from a Sociology/Linguistics background, my personal aim since the start of my PhD journey has been to try and understand what historical scholarship practices look like, what kinds of questions historians are interested in (whether they are presently being asked or not), how historians may benefit from using corpus linguistics, and also what challenges historians might encounter when trying to take advantage of corpus linguistics’ affordances. I don’t think I can over-stress how helpful coming to the RSVP conferences has been in this respect, and how grateful I am to the welcoming and helpful community of scholars I have encountered there.

I have chosen to write this post as an introduction to corpus linguistics for several reasons. First, RSVP members have often asked me to explain what corpus linguistics consists of; I hope this post begins to answer that question. Second, I have sometimes encountered a reluctance to consider computerised text analysis methods. This reluctance is understandable and should be taken seriously: there are very real challenges to working with computers in the Humanities, and it is worth considering them. Ultimately, I hope to help bring corpus linguistics to the attention of those scholars who may find it useful.

The Humanities unavoidably involve messy data and the messy, fluid, categories which we try to apply to them. Computers on the other hand are all about known quantities and a lack of ambiguity. So why use computers in the Humanities? Computers are bad at what humans are good at: understanding. But they are also good at what humans are bad at: performing large and accurate calculations at remarkable speed. Generating historical insight cannot be done at the press of a button, but computers can assist us in manipulating the large amounts of data which are relevant to the questions we care about.

The most fundamental tool in corpus linguistics – the area of linguistics devoted to developing tools and methods to facilitate the quantitative and qualitative analysis of large amounts of text – is the concordance: a method which has been in use for centuries. A concordance is simply a list of occurrences of a word (or expression) of interest, accompanied by some limited context (see figure 1). Drawing up such a concordance manually is a very lengthy process, which can occupy years of an individual’s life. In contrast, once the data has been prepared in certain ways, specialised computer-based corpus linguistic tools can draw up such a concordance within a few seconds, even for tens of thousands of lines of text drawn from a database containing millions of words. For just this simple feat, computers are invaluable for the historian. But why use corpus linguistics tools? After all, all historical digital collections come with interfaces which offer searchability through queries of some sort.

Figure 1: Search results for ‘police’ presented as a concordance
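For readers who want to try this for themselves, below is a minimal sketch of a KWIC concordance using Python’s NLTK library, assuming NLTK and its ‘punkt’ tokenizer data are installed. The short passage stands in for a digitised historical collection; dedicated corpus tools offer far richer query languages on top of this basic idea.

    import nltk
    from nltk.text import Text

    # Assumes: pip install nltk, then nltk.download('punkt') once.
    # 'raw' stands in for a digitised historical text.
    raw = ("The police arrived at noon. A crowd had gathered before the police "
           "could clear the street, and the police sergeant spoke at length.")

    tokens = nltk.word_tokenize(raw)
    Text(tokens).concordance("police", width=60)  # prints each hit with context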

Read the rest from the RSVP website.

Further Trinity Lancaster Corpus research: Examiner strategies

This month saw a further development in the corpus analyses: the examiners. Let me introduce myself: my name is Cathy Taylor, I’m responsible for examiner training at Trinity, and I was very pleased to be asked to do some corpus research into the strategies the examiners use when communicating with the test takers.

In the GESE exams the examiner and candidate co-construct the interaction throughout the exam. The examiner doesn’t work from a rigid interlocutor framework provided by Trinity but instead has a flexible test plan which allows them to choose from a variety of questioning and elicitation strategies. They can then respond more meaningfully to the candidate and cover the language requirements and communication skills appropriate for the level. The rationale behind this approach is to reflect as closely as possible what happens in conversations in real life. Another benefit of the flexible framework is that the examiner can use a variety of techniques to probe the extent of the candidate’s competence in English and allow them to demonstrate what they can do with the language. If you’re interested, more information can be found in Trinity’s speaking and listening tests: Theoretical background and research.

After some deliberation, and very useful tips from the corpus transcriber, Ruth Avon, I decided to concentrate my research on the opening gambit for the conversation task at Grade 6 (B1 CEFR). There is a standard rubric the examiner says to introduce the subject area: ‘Now we’re going to talk about something different, let’s talk about…learning a foreign language.’ Following this, the examiner uses their test plan to select the most appropriate opening strategy for each candidate. There is a choice of six subject areas for the conversation task listed for each grade in the Exam information booklet.

Before beginning the conversation, examiners have strategies to check that the candidate has understood and to give them thinking time. The approaches below are typical.

  1. E: ‘Let’s talk about learning a foreign language…’
    C: ‘yes’
    E: ‘Do you think English is an easy language?’
  2. E: ‘Let’s talk about learning a foreign language’
    C: ‘It’s an interesting topic’
    E: ‘Yes uhu do you need a teacher?’
  3. It’s very common for the examiner to use pausing strategies which give thinking time:
    E: ‘Let’s talk about learning a foreign language erm why are you learning English?’
    C: ‘Er I’m learning English for work erm I’m a statistician.’

There are a range of opening strategies for the conversation task:

  • Personal questions: ‘Why are you learning English?’ ‘Why is English important to you?’
  • A more general question: ‘How important is it to learn a foreign language these days?’
  • The examiner gives a personal statement to frame the question: ‘I want to learn Chinese (to a Chinese candidate)…what do I have to do to learn Chinese?’
  • The examiner may choose a more discursive statement to start the conversation: ‘Some people say that English is not going to be important in the future and we should learn Chinese (to a Chinese candidate).’
  • The candidate sometimes takes the lead:
    E: ‘Let’s talk about learning a foreign language’
    C: ‘Okay, okay I really want to learn a lo = er learn a lot of = foreign languages’

A salient feature of all the interactions is the amount of back-channelling the examiners do, e.g. ‘erm’, ‘mm’. This indicates that the examiner is actively listening to the candidate and encouraging them to continue. For example:

E: ‘Let’s talk about learning a foreign language, if you want to improve your English what is the best way?’
C: ‘Well I think that when you see programmes in English’
E: ‘mm’
C: ‘without the subtitles’
E: ‘mm’
C: ‘it’s a good way or listening to music in other language’
E: ‘mm’
C: ‘it’s a good way and and this way I have learned too much’
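As a rough illustration of how a feature like this might be counted across the corpus, here is a minimal Python sketch. It assumes transcript turns are stored as (speaker, text) pairs, and the backchannel forms listed are illustrative rather than the corpus’s actual annotation scheme.

    # Count turns that consist only of backchannel tokens, per speaker.
    BACKCHANNELS = {"mm", "erm", "uhu", "mhm", "yeah"}

    turns = [
        ("E", "let's talk about learning a foreign language"),
        ("C", "well I think that when you see programmes in English"),
        ("E", "mm"),
        ("C", "without the subtitles"),
        ("E", "mm"),
    ]

    counts = {}
    for speaker, text in turns:
        toks = text.lower().split()
        if toks and all(t in BACKCHANNELS for t in toks):
            counts[speaker] = counts.get(speaker, 0) + 1

    print(counts)  # {'E': 2}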

When the corpus was initially discussed, it was clear that one of the aims should be to use the findings for our examiner professional development programme. Using this very small dataset, we can develop worksheets which prompt examiners to reflect on their exam techniques using real examples of examiner and candidate interaction.

My research is in its initial stages, and the next step is to analyse different strategies and how these validate the exam construct. I’m also interested in examiner strategies at the same transition point at the higher levels, i.e. Grade 7 and above (B2, C1 and C2 CEFR). Do the strategies change and, if so, how?

It’s been fascinating working with the corpus data and I look forward to doing more in the future.


Textual analysis training for European doctoral researchers in accounting

Professor Steve Young (Lancaster University Management School and PI of the ESRC-funded CASS project Understanding Corporate Communications) was recently invited to the 6th Doctoral Summer Program in Accounting Research (SPAR) to deliver sessions on the textual analysis of financial reporting. The invitation reflects the increasing interest in narrative reporting among accounting researchers.

The summer program was held at WHU – Otto Beisheim School of Management (Vallendar, Germany) 11-14 July, 2016.

Professor Young was joined by Professors Mary Barth (Stanford University) and Wayne Landsman (University of North Carolina, Chapel Hill), whose sessions covered a range of current issues in empirical financial reporting research, including disclosure and the cost of capital, fair value accounting, and comparative international financial reporting. Students also benefitted from presentations by Prof. Dr. Andreas Barckow (President, Accounting Standards Committee of Germany) and Prof. Dr. Sven Hayn (Partner, EY Germany).

The annual SPAR training event was organised jointly by the Ludwig Maximilian University of Munich School of Management and the WHU – Otto Beisheim School of Management. The programme attracts the top PhD students in accounting from across Europe with the aim of introducing them to cutting-edge theoretical, methodological, and practical issues involved in conducting high-quality financial accounting research. This year’s cohort comprised 31 carefully selected students from Europe’s leading business schools.

Professor Young delivered four sessions on textual analysis. Sessions 1 & 2 focused on the methods currently applied in accounting research and the opportunities associated with applying more advanced approaches from computational linguistics and natural language processing. The majority of extant work in mainstream accounting research relies on bag-of-words methods (e.g., dictionaries, readability measures, and basic machine learning applications) to study the properties and usefulness of narrative aspects of financial communications; significant opportunities exist for accounting researchers to apply methods that are now mainstream in computational linguistics, including part-of-speech tagging, semantic analysis, topic models, summarization, text mining, and corpus methods.
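As an illustration of the bag-of-words approach mentioned above, here is a minimal Python sketch of dictionary-based tone scoring: the negative tone of a disclosure is measured as the share of its tokens found in a negative-word list. Published studies typically use purpose-built finance word lists such as Loughran and McDonald’s; the five-word list and sample sentence below are invented for illustration.

    import re

    # Illustrative negative-word list (real studies use finance-specific
    # dictionaries with thousands of entries).
    NEGATIVE = {"loss", "impairment", "decline", "adverse", "litigation"}

    def negative_tone(text: str) -> float:
        """Share of alphabetic tokens that appear in the negative list."""
        tokens = re.findall(r"[a-z]+", text.lower())
        if not tokens:
            return 0.0
        return sum(t in NEGATIVE for t in tokens) / len(tokens)

    mdna = ("The decline in margins reflects an impairment charge and "
            "adverse currency movements.")
    print(f"negative tone: {negative_tone(mdna):.2%}")  # 3 of 12 tokens = 25.00%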

Sessions 3 & 4 reviewed the extant literature on automated textual analysis in accounting and financial communication. Session 3 concentrated on earnings announcements and annual reports. Research reveals that narrative disclosures are incrementally informative beyond quantitative data for stock market investors, particularly in circumstances where traditional accounting data provide an incomplete picture of firm performance and value. Nevertheless, evidence also suggests that management use narrative commentaries opportunistically when the incentives to do so are high. Session 4 reviewed research on other aspects of financial communication, including regulatory information (e.g., surrounding mergers and acquisitions and initial public offerings), conference calls, analysts’ reports, financial media, and social media. Evidence consistently indicates that financial narratives contain information that is not captured by quantitative results.

Slides for all four sessions are available here.

The event was a great success. Students engaged actively in all sessions (including presentations and discussions of published research using textual analysis methods). New research opportunities were explored involving the analysis of new financial reporting corpora and the application of more advanced computational linguistics methods. Students also received detailed feedback from faculty on their research projects, a significant number of which involved application of textual analysis methods. Special thanks go to Professor Martin Glaum and his team at WHU for organizing and running the summer program.

Dealing with Optical Character Recognition errors in Victorian newspapers

CASS PhD student, Amelia Joulain-Jay, has been researching to what extent OCR errors are a problem when researching historical texts, and whether these errors can be corrected. Amelia’s work has recently been featured in a very interesting blog post on the British Library’s website – you can read the full post here.


Tracking terrorists who leave a technological trail

Dr Sheryl Prentice’s work on using technology to aid the detection of terrorists has been gaining a lot of attention in the media this week! Sheryl’s discussion of the different ways in which technology can be used to tackle terrorism, and of how effective these methods are, was originally published in The Conversation and then republished by the ‘i’ newspaper on 23rd June 2016. You can read the original article here.

TLC and innovation in language testing

One of Trinity College London’s objectives in investing in the Trinity Lancaster Spoken Corpus has been to share findings with the language assessment community. The corpus allows us to develop an innovative approach to validating test constructs, and offers a window into the exam room so we can see how test takers utilise their language skills in managing the series of test tasks.

Recent work by the CASS team in Lancaster has thrown up a variety of features that illustrate how test takers voice their identity in the test, how they manage interaction through a range of strategic competences, and how they use epistemic markers to express their point of view and negotiate a relationship with the examiner (for more information see Gablasova et al. 2015). I have spent the last few months disseminating these findings at a range of language testing conferences, and have found audiences fascinated by them.

We have presented findings at BAAL TEASIG in Reading, at EAQUALS in Lisbon and at EALTA in Valencia. Audiences ranged from assessment experts to teacher educators and classroom practitioners, and there was great interest both in how the test takers manage the exam and in the manifestations of L2 language. Each presentation was tailored to the audience and the theme of the conference. In separate presentations, we covered how assessments can inform classroom practice, how the data could inform the type of feedback we give learners, and how the data can be used to help validate aspects of the test construct. The feedback has been very positive, urging us to investigate further. Comments have praised the extent and quality of the corpus, ranging from the observation that the evidence “is something that we have long been waiting for” (Dr Parvaneh Tavakoli, University of Reading) to musings on what some of the data might mean, both for how we assess spoken language and for the classroom. It has certainly opened the door to the importance of strategic and pragmatic competences, as well as validating Trinity’s aim of allowing the test taker to bring themselves into the test. The excitement spilled over into some great tweets. There is general recognition that the data offers something new – sometimes confirming what we suspected and sometimes, as with all corpora, refuting our beliefs!

We have always recognised that the data is constrained by the semi-formal context of the test, but the fact that each test is structured but not scripted, and has tasks which represent language pertinent to communicative events in the wider world, allows the test taker to produce language which is more reflective of naturally occurring speech than many other oral tests. It has been enormously helpful to have feedback from audiences who have fully engaged with the issues raised, highlighted aspects we can investigate in greater depth, and raised features they would like to know more about. These features are precisely those that the research team wishes to explore in order to develop ‘a more fine-grained and comprehensive understanding of spoken pragmatic ability and communicative competence’ (Gablasova et al. 2015: 21).

One of the next steps is to show how this data can be used to develop and support performance descriptors. Trinity is confident that the features of communication which the test takers display are captured in its new Integrated Skills in English exam, validating claims that Trinity assesses real-world communication.