Corpus Linguistics, and why you might want to use it, despite what (you think) you know about it

As part of the Spatial Humanities project at Lancaster University, and in collaboration with the Centre for Corpus Approaches to Social Sciences, the central aim of my PhD research project is to investigate the potential of corpus linguistics to allow for the exploration of spatial patterns in large amounts of digitised historical texts. Since I come from a Sociology/Linguistics background, my personal aim since the start of my PhD journey has been to try to understand what historical scholarship practices look like, what kinds of questions historians are interested in (whether they are presently being asked or not), how historians may benefit from using corpus linguistics, and also what challenges historians might encounter when trying to take advantage of corpus linguistics’ affordances. I don’t think I can over-stress how helpful coming to the RSVP conferences has been in this respect, and how grateful I am to the welcoming and helpful community of scholars I have encountered there.

I have chosen to write this post as an introduction to corpus linguistics for several reasons. First, RSVP members have asked me on many occasions to explain what corpus linguistics consists of; I hope this post begins to answer that question. Second, I have sometimes encountered a reluctance to consider computerised text analysis methods. This reluctance is understandable and should be taken seriously. There are indeed very real challenges to working with computers in the Humanities and it is worth considering them. Ultimately, I hope to help bring corpus linguistics to the attention of those scholars who may find it useful.

The Humanities unavoidably involve messy data and the messy, fluid categories which we try to apply to them. Computers, on the other hand, are all about known quantities and a lack of ambiguity. So why use computers in the Humanities? Computers are bad at what humans are good at: understanding. But they are also good at what humans are bad at: performing large and accurate calculations at remarkable speed. Generating historical insight cannot be done at the press of a button, but computers can assist us in manipulating the large amounts of data which are relevant to the questions we care about.

The most fundamental tool in corpus linguistics – the area of linguistics devoted to developing tools and methods to facilitate the quantitative and qualitative analysis of large amounts of text – is the concordance: a method which has been in use for centuries. A concordance is simply a list of occurrences of a word (or expression) of interest, accompanied by some limited context (see figure 1). Drawing up such a concordance manually is a very lengthy process, which can occupy years of an individual’s life. In contrast, once the data has been prepared in certain ways, specialised computer-based corpus linguistic tools can draw up such a concordance within a few seconds, even for tens of thousands of lines of text drawn from a database containing millions of words. For just this simple feat, computers are invaluable for the historian. But why use corpus linguistics tools? After all, all historical digital collections come with interfaces which offer searchability through queries of some sort.

Figure 1: Search results for ‘police’ presented as a concordance
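To give a flavour of how simple the underlying operation is, here is a minimal keyword-in-context concordancer sketched in Python. This is purely illustrative: dedicated corpus tools such as AntConc or CQPweb handle tokenisation, sorting, and corpora of millions of words, all of which this toy version ignores.

```python
import re

def concordance(text, word, width=30):
    """Print a simple KWIC (keyword in context) concordance: every
    occurrence of `word`, centred, with `width` characters of context."""
    for match in re.finditer(r'\b%s\b' % re.escape(word), text, re.IGNORECASE):
        start, end = match.span()
        left = text[max(0, start - width):start]
        right = text[end:end + width]
        # Right-align the left context so the keyword lines up in a column
        print(f"{left:>{width}} {match.group()} {right:<{width}}")

sample = ("The police arrived at noon. A crowd had gathered before the "
          "police cleared the street, and the police report followed.")
concordance(sample, "police")
```

Even this crude version produces the aligned column of hits shown in figure 1; the point is that the computer does in milliseconds what once took years of manual indexing.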

Read the rest from the RSVP website.

Further Trinity Lancaster Corpus research: Examiner strategies

This month saw a further development in the corpus analyses: the examiners. Let me introduce myself: my name is Cathy Taylor, and I’m responsible for examiner training at Trinity. I was very pleased to be asked to do some corpus research into the strategies the examiners use when communicating with the test takers.

In the GESE exams the examiner and candidate co-construct the interaction throughout the exam. The examiner doesn’t work from a rigid interlocutor framework provided by Trinity but instead has a flexible test plan which allows them to choose from a variety of questioning and elicitation strategies. They can then respond more meaningfully to the candidate and cover the language requirements and communication skills appropriate for the level. The rationale behind this approach is to reflect as closely as possible what happens in conversations in real life. Another benefit of the flexible framework is that the examiner can use a variety of techniques to probe the extent of the candidate’s competence in English and allow them to demonstrate what they can do with the language. If you’re interested, more information can be found in Trinity’s speaking and listening tests: Theoretical background and research.

After some deliberation and very useful tips from the corpus transcriber, Ruth Avon, I decided to concentrate my research on the opening gambit for the conversation task at Grade 6, B1 CEFR. There is a standard rubric the examiner says to introduce the subject area: ‘Now we’re going to talk about something different, let’s talk about…learning a foreign language.’ Following this, the examiner uses their test plan to select the most appropriate opening strategy for each candidate. There’s a choice of six subject areas for the conversation task listed for each grade in the Exam information booklet.

Before beginning the conversation, examiners have strategies to check that the candidate has understood and to give them thinking time. The approaches below are typical.

  1. E: ‘Let’s talk about learning a foreign language…’
     C: ‘yes’
     E: ‘Do you think English is an easy language?’
  2. E: ‘Let ‘s talk about learning a foreign language’
     C: ‘It’s an interesting topic’
     E: ‘Yes uhu do you need a teacher?’
  3. It’s very common for the examiner to use pausing strategies which give thinking time:
     E: ‘Let ‘s talk about learning a foreign language erm why are you learning English?’
     C: ‘Er I ‘m learning English for work erm I ‘m a statistician.’

There is a range of opening strategies for the conversation task:

  • Personal questions: ‘Why are you learning English?’ ‘Why is English important to you?’
  • A more general question: ‘How important is it to learn a foreign language these days?’
  • The examiner gives a personal statement to frame the question: ‘I want to learn Chinese (to a Chinese candidate)…what do I have to do to learn Chinese?’
  • The examiner may choose a more discursive statement to start the conversation: ‘Some people say that English is not going to be important in the future and we should learn Chinese (to a Chinese candidate).’
  • The candidate sometimes takes the lead:
    Examiner: ‘Let’s talk about learning a foreign language’
    Candidate: ‘Okay, okay I really want to learn a lo = er learn a lot of = foreign languages’

A salient feature of all the interactions is the amount of back channelling the examiners do, e.g. ‘erm’, ‘mm’, etc. This indicates that the examiner is actively listening to the candidate and encouraging them to continue. For example:

E: ‘Let’s talk about learning a foreign language, if you want to improve your English what is the best way?’
C: ‘Well I think that when you see programmes in English’
E: ‘mm’
C: ‘without the subtitles’
E: ‘mm’
C: ‘it’s a good way or listening to music in other language’
E: ‘mm’
C: ‘it’s a good way and and this way I have learned too much’

When the corpus was initially discussed, it was clear that one of the aims should be to use the findings for our examiner professional development programme. Using this very small dataset we can develop worksheets which prompt examiners to reflect on their exam techniques using real examples of examiner and candidate interaction.

My research is in its initial stages and the next step is to analyse different strategies and how these validate the exam construct. I’m also interested in examiner strategies at the same transition point at the higher levels, i.e. grade 7 and above, B2, C1 and C2 CEFR. Do the strategies change and if so, how?

It’s been fascinating working with the corpus data and I look forward to doing more in the future.


Textual analysis training for European doctoral researchers in accounting

Professor Steve Young (Lancaster University Management School and PI of the CASS ESRC-funded project Understanding Corporate Communications) was recently invited to the 6th Doctoral Summer Program in Accounting Research (SPAR) to deliver sessions specializing in textual analysis of financial reporting. The invitation reflects the increasing interest in narrative reporting among accounting researchers.

The summer program was held at WHU – Otto Beisheim School of Management (Vallendar, Germany) on 11-14 July 2016.

Professor Young was joined by Professors Mary Barth (Stanford University) and Wayne Landsman (University of North Carolina, Chapel Hill), whose sessions covered a range of current issues in empirical financial reporting research including disclosure and the cost of capital, fair value accounting, and comparative international financial reporting. Students also benefitted from presentations by Prof. Dr. Andreas Barckow (President, Accounting Standards Committee of Germany) and Prof. Dr. Sven Hayn (Partner, EY Germany).

The annual SPAR training event was organised jointly by the Ludwig Maximilian University of Munich School of Management and the WHU – Otto Beisheim School of Management. The programme attracts the top PhD students in accounting from across Europe with the aim of introducing them to cutting-edge theoretical, methodological, and practical issues involved in conducting high-quality financial accounting research. This year’s cohort comprised 31 carefully selected students from Europe’s leading business schools.

Professor Young delivered four sessions on textual analysis. Sessions 1 & 2 focused on the methods currently applied in accounting research and the opportunities associated with applying more advanced approaches from computational linguistics and natural language processing. The majority of extant work in mainstream accounting research relies on bag-of-words methods (e.g., dictionaries, readability, and basic machine learning applications) to study the properties and usefulness of narrative aspects of financial communications; significant opportunities exist for accounting researchers applying more advanced textual analysis methods, including part-of-speech tagging, semantic analysis, topic models, summarization, text mining, and corpus methods.
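For readers unfamiliar with the bag-of-words approach, the following Python sketch shows the basic idea behind dictionary-based tone scoring. The word lists here are invented for illustration; published accounting research typically uses purpose-built dictionaries such as Loughran and McDonald’s financial word lists.

```python
import re
from collections import Counter

# Invented tone word lists, for illustration only; real studies rely on
# validated financial dictionaries rather than ad hoc sets like these.
POSITIVE = {"growth", "improved", "strong", "record"}
NEGATIVE = {"decline", "impairment", "weak", "loss"}

def tone(narrative):
    """Crude bag-of-words tone score: (positive - negative) / total tokens.
    Word order, negation, and context are all ignored; hence 'bag of words'."""
    tokens = re.findall(r"[a-z]+", narrative.lower())
    counts = Counter(tokens)
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    return (pos - neg) / max(len(tokens), 1)

print(tone("Record growth and improved margins despite a small decline in exports."))
```

The limitations are plain: ‘not strong’ would score as positive, which is precisely why the sessions pointed to part-of-speech tagging, semantic analysis, and topic models as richer alternatives.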

Sessions 3 & 4 reviewed the extant literature on automated textual analysis in accounting and financial communication. Session 3 concentrated on earnings announcements and annual reports. Research reveals that narrative disclosures are incrementally informative beyond quantitative data for stock market investors, particularly in circumstances where traditional accounting data provide an incomplete picture of firm performance and value. Nevertheless, evidence also suggests that management use narrative commentaries opportunistically when the incentives to do so are high.  Session 4 reviewed research on other aspects of financial communication including regulatory information [e.g., surrounding mergers and acquisitions (M&A) and initial public offerings (IPOs)], conference calls, analysts’ reports, financial media, and social media. Evidence consistently indicates that financial narratives contain information that is not captured by quantitative results.

Slides for all four sessions are available here.

The event was a great success. Students engaged actively in all sessions (including presentations and discussions of published research using textual analysis methods). New research opportunities were explored involving the analysis of new financial reporting corpora and the application of more advanced computational linguistics methods. Students also received detailed feedback from faculty on their research projects, a significant number of which involved application of textual analysis methods. Special thanks go to Professor Martin Glaum and his team at WHU for organizing and running the summer program.

Dealing with Optical Character Recognition errors in Victorian newspapers

CASS PhD student, Amelia Joulain-Jay, has been researching the extent to which OCR errors are a problem when working with digitised historical texts, and whether these errors can be corrected. Amelia’s work has recently been featured in a very interesting blog post on the British Library’s website – you can read the full post here.
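As a hint of what such correction involves, here is a toy sketch (not Amelia’s actual method) that flags out-of-vocabulary tokens and proposes replacements by string similarity:

```python
import difflib

# Toy word list for illustration; real correction would draw on a large
# historical lexicon and knowledge of typical OCR letter-shape confusions.
VOCAB = {"the", "police", "arrived", "yesterday", "morning", "in", "town"}

def correct(token, vocab=VOCAB, cutoff=0.75):
    """Return the token if it is a known word; otherwise the most similar
    vocabulary entry above `cutoff`, or the token unchanged."""
    if token.lower() in vocab:
        return token
    matches = difflib.get_close_matches(token.lower(), vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else token

# 'potice' and 'yesterclay' mimic common OCR misreadings
print([correct(t) for t in "the potice arrived yesterclay morning".split()])
# ['the', 'police', 'arrived', 'yesterday', 'morning']
```

Real historical text is far harder than this, of course: the correct reading may be a word that has fallen out of use, and some errors produce valid words that no dictionary check can catch.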


Tracking terrorists who leave a technological trail

Dr Sheryl Prentice’s work on using technology to aid in the detection of terrorists has been gaining a lot of attention in the media this week! Sheryl’s discussion of the different ways in which technology can be used to tackle the issue of terrorism and how effective these methods are was originally published in The Conversation, and then republished by the ‘i’ newspaper on 23rd June 2016. You can read the original article here.

TLC and innovation in language testing

One of Trinity College London’s objectives in investing in the Trinity Lancaster Spoken Corpus has been to share findings with the language assessment community. The corpus allows us to develop an innovative approach to validating test constructs and offers a window into the exam room, so we can see how test takers utilise their language skills in managing the series of test tasks.

Recent work by the CASS team in Lancaster has thrown up a variety of features that illustrate how test takers voice their identity in the test, how they manage interaction through a range of strategic competences, and how they use epistemic markers to express their point of view and negotiate a relationship with the examiner (for more information see Gablasova et al. 2015). I have spent the last few months disseminating these findings at a range of language testing conferences and have found audiences fascinated by them.

We have presented findings at BAAL TEASIG in Reading, at EAQUALS in Lisbon and at EALTA in Valencia. Audiences ranged from assessment experts to teacher educators and classroom practitioners, and there was great interest both in how the test takers manage the exam and in the manifestations of L2 language. Each presentation was tailored to the audience and the theme of the conference. In separate presentations, we covered how assessments can inform classroom practice, how the data could inform the type of feedback we give learners and how the data can be used to help validate aspects of the test construct. The feedback has been very positive, urging us to investigate further. Comments have praised the extent and quality of the corpus and range from the fact that the evidence “is something that we have long been waiting for” (Dr Parvaneh Tavakoli, University of Reading) to musings on what some of the data might mean both for how we assess spoken language and the implications for the classroom. It has certainly opened the door on the importance of strategic and pragmatic competences as well as validating Trinity’s aims to allow the test taker to bring themselves into the test. The excitement spilled over into some great tweets. There is general recognition that the data offers something new – sometimes confirming what we suspected and sometimes – as with all corpora – refuting our beliefs!

We have always recognised that the data is constrained by the semi-formal context of the test. However, because each test is structured but not scripted, and its tasks represent language pertinent to communicative events in the wider world, the test taker can produce language which is more reflective of naturally occurring speech than in many other oral tests. It has been enormously helpful to have feedback from the audiences, who have fully engaged with the issues raised and highlighted aspects we can investigate in greater depth, as well as raising features they would like to know more about. These features are precisely those that the research team wishes to explore in order to develop ‘a more fine-grained and comprehensive understanding of spoken pragmatic ability and communicative competence’ (Gablasova et al. 2015: 21).

One of the next steps is to show how this data can be used to develop and support performance descriptors. Trinity is confident that the features of communication which the test takers display are captured in its new Integrated Skills in English exam, validating claims that Trinity assesses real-world communication.

From Corpus to Classroom 2

There is great delight that the Trinity Lancaster Corpus is providing so much interesting data that can be used to enhance communicative competences in the classroom. From Corpus to Classroom 1 described some of these findings. But how exactly do we go about ‘translating’ this for classroom use, so that it can be used by busy teachers with high-pressure curricula to get through? How can we be sure we enhance rather than problematize the communicative feature we want to highlight?

Although the Corpus data comes from a spoken test, we want to use it to illustrate wider pragmatic features of communication. The data fascinates students, who are entranced to see what their fellow learners do, but how does it help their learning? The first step is to send the research outputs to an experienced classroom materials author to see what they suggest.

Here’s how our materials writer, Jeanne Perrett, went about this challenging task:

As soon as I saw the research outputs from TLC, I knew that this was something really special; proper, data-driven learning on how to be a more successful speaker. I could also see that the corpus scripts, as they were, might look very alien and quirky to most teachers and students. Speaking and listening texts in coursebooks don’t usually include sounds of hesitation, people repeating themselves, people self-correcting or even asking ‘rising intonation’ questions. But all of those things are a big part of how we actually communicate so I wanted to use the original scripts as much as possible. I also thought that learners would be encouraged by seeing that you don’t have to speak in perfectly grammatical sentences, that you can hesitate and you can make some mistakes but still be communicating well.

Trinity College London commissioned me to write a series of short worksheets, each one dealing with one of the main research findings from the Corpus, and intended for use in the classroom to help students prepare for GESE and ISE exams at a B1 or B2 level.

I started each time with extracts from the original scripts from the data. Where I thought that the candidates’ mistakes would hinder the learner’s comprehension (unfinished sentences for example), I edited them slightly (e.g. with punctuation). But these scripts were not there for comprehension exercises; they were there to show students something that they might never have been taught before.

For example, sounds of hesitation: we all know how annoying it is to listen to someone (native and non-native speakers alike) continually erm-ing and er-ing in their speech, and the data showed that candidates were hesitating too much. But we rarely, if ever, teach our students that it is in fact okay, and indeed natural, to hesitate while we are thinking of what we want to say and how we want to say it. What they need to know is that, like the more successful candidates in the data, there are other words and phrases that we can use instead of erm and er. So one of the worksheets shows how we can use hedging phrases such as ‘well..’ or ‘like..’ or ‘okay…’ or ‘I mean..’ or ‘you know…’.

The importance of taking responsibility for a conversation was another feature to emerge from the data and again, I felt that these corpus findings were very freeing for students; that taking responsibility doesn’t, of course, mean that you have to speak all the time but that you also have to create opportunities for the other person to speak and that there are specific ways in which you can do that such as making active listening sounds (ah, right, yeah), asking questions, making short comments and suggestions.

Then there is the whole matter of how you ask questions. The corpus findings show that there is far less confusion in a conversation when properly formed questions are used. When someone says ‘You like going to the mountains?’ the question is not as clear as when they say ‘Do you like going to the mountains?’ This might seem obvious, but pointing it out, showing that less checking of what has been asked is needed when questions are direct ones, is, I think, very helpful to students. It might also be a consolation: all those years of grammar exercises really were worth it! ‘Do you know how to ask a direct question?’ ‘Yes, I do!’

These worksheets are intended for EFL exam candidates but the more I work on them, the more I think that the Corpus findings could have a far wider reach. How you make sure you have understood what someone is saying, how you can be a supportive listener, how you can make yourself clear, even if you want to be clear about being uncertain; these are all communication skills which everyone needs in any language.



Syntactic structures in the Trinity Lancaster Corpus

We are proud to announce collaboration with Markus Dickinson and Paul Richards from the Department of Linguistics, Indiana University, on a project that will analyse syntactic structures in the Trinity Lancaster Corpus. The focus of the project is to develop a syntactic annotation scheme for spoken learner language and apply this scheme to the Trinity Lancaster Corpus, which is being compiled at Lancaster University in collaboration with Trinity College London. The aim of the project is to provide an annotation layer for the corpus that will allow sophisticated exploration of the morphosyntactic and syntactic structures in learner speech. The project will have an impact both on the theoretical understanding of spoken language production at different proficiency levels and on the development of practical NLP solutions for annotation of learner speech. More specific goals include:

  • Identification of units of spoken production and their automatic recognition.
  • Annotation and visualization of morphosyntactic and syntactic structures in learner speech.
  • Contribution to the development of syntactic complexity measures for learner speech.
  • Description of the syntactic development of spoken learner production.
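The project’s annotation scheme is still to be developed, but for readers who have not met syntactic annotation before, the sketch below shows the general kind of token-level analysis involved, using the off-the-shelf spaCy parser on an utterance in the style of the corpus examples above. This is only an illustration under stated assumptions: the project’s scheme will be designed specifically for learner speech, which standard parsers trained on written text handle poorly.

```python
import spacy

# Off-the-shelf model for illustration (install with:
#   python -m spacy download en_core_web_sm).
# Disfluencies and repairs in learner speech are precisely what such
# models struggle with, motivating a dedicated annotation scheme.
nlp = spacy.load("en_core_web_sm")

doc = nlp("er I'm learning English for work erm I'm a statistician")

for token in doc:
    # Surface form, coarse part-of-speech tag, dependency label, and head
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} {token.head.text}")
```

Running this on real corpus turns quickly exposes the problem: fillers like ‘er’ and self-repairs receive arbitrary tags and attachments, which is one reason a purpose-built scheme for units of spoken production is needed.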


From Corpus to Classroom 1

The Trinity Lancaster Corpus of Spoken Learner English is providing multiple sets of data that can be used not only for validating the quality of our tests but also – and most importantly – for feeding back important features of language that can be utilised in the classroom. It is essential that some of our research focuses on how Trinity informs and supports teachers in improving communicative competences in their learners, and this forms part of an ongoing project the research team is setting up in order to give teachers access to this information.

Trinity has always been focused on communicative approaches to language teaching, and the heart of the tests is about communicative competences. The research team are especially excited to see that the data is revealing the many ways in which test takers use these communicative competences to manage their interaction in the spoken tests. It is very pleasing to see that not only does the corpus evidence support claims that the Trinity tests of spoken language are highly interactive, but it also establishes some very clear features of effective communication that can be utilised by teachers in the classroom.

The strategies which test takers use to communicate successfully include:

  • Asking more questions

Here the test taker relies less on declarative sentences to move a conversation forward and instead asks clear questions (direct and indirect) that are more immediately accessible to the listener.

  • Demonstrating active listenership through backchannelling

This involves offering more support to the conversational partner by using signals such as okay, yes, uhu, oh, etc. to demonstrate engaged listenership.

  • Taking responsibility for the conversation through their contributions

Successful test takers help move the conversation along by creating opportunities (e.g. questions, comments or suggestions) that their partner can easily react to.

  • Using fewer hesitation markers

Here the speaker makes sure they keep talking and uses fewer markers such as er and erm, which can interrupt fluency.

  • Clarifying what is said to them before they respond

This involves the test taker checking, through questions, that they have understood exactly what has been said to them.

Trinity is hopeful that these types of communicative strategies can be investigated across the tests and across the various levels in order to extract information which can be fed back into the classroom. Teachers – and their learners – are interested to see what actually happens when the learner has the opportunity to put their language into practice in a live performance situation. It makes what goes on in the classroom much more real and gives pointers to how a speaker can cope in these situations.

More details about these points can be found on the Trinity corpus website and classroom teaching materials will be uploaded shortly to support teachers in developing these important strategies in their learners.

Also see CASS briefings for more information on successful communication strategies in L2.

“Fleeing, Sneaking, Flooding” – The importance of language in the EU migrant crisis

With tensions over the current EU migrant crisis increasing, we at CASS thought it would be timely to highlight the importance of the language used in the debate about this humanitarian crisis. In this paper by Paul Baker and Costas Gabrielatos, the authors analyse the construction of refugees and asylum seekers in UK press articles.
For readers who do not have access to Sage, you can find a final draft of the paper here free of charge. Please note that this version of the paper has the tables and figures at the end of the paper.