Have you ever wondered what real spoken English looks like? Have you ever asked whether people from different backgrounds (based on gender, age, social class etc.) use language differently? Have you ever thought it would be interesting to investigate how much English has changed over the last twenty years? All these questions can be answered by looking at language corpora such as the Spoken BNC2014 and analysing them from a sociolinguistic perspective. Corpus Approaches to Contemporary British Speech: Sociolinguistic Studies of the Spoken BNC2014 is a book that offers a series of studies providing unique insights into a number of topics, ranging from Discourse, Pragmatics and Interaction to Morphology and Syntax.
This is, however, only the first step. We are hoping that there will be many more studies to come based on this wonderful dataset. If you want to start exploring the Spoken BNC 2014 corpus, it is just three mouse clicks away:
Get access to the BNC2014 Spoken
- Register for free and log on to CQPweb.
- Sign up for access to the BNC2014 Spoken.
- Select ‘BNC2014’ in the main CQPweb menu.
Also, right now there is a great opportunity to take part in the Written BNC2014 project, the written counterpart to the Spoken BNC2014. If you’d like to contribute to the written BNC2014, please check out the project’s website for more information.
On Saturday 12 May 2018, CASS hosted a small training event at Lancaster University for a group of participants who came from different universities in the UK. We talked about the BNC2014 project and discussed both the theoretical underpinnings and the practicalities of corpus design and compilation. Slides from the event are available as a PDF here.
The participants then tried out in practice what is involved in compiling a large general corpus such as the BNC2014. They selected and scanned samples of books from current British fiction, poetry and a range of non-fiction books (history, popular science, hobbies etc.). Once processed, these samples will become part of the written BNC2014.
Here are some pictures from the event:
Carmen Dayrell and Vaclav Brezina before the event
Elena Semino welcoming participants
In the computer lab: Abi Hawtin helping participants
A box full of books
If you are interested in contributing to the written BNC2014, go to the project website to find out about different ways in which you can participate in this exciting project.
The event was supported by ESRC grant no. EP/P001559/1.
Here’s some good news for the beginning of the term: all Lancaster University staff and students now have access to Sketch Engine, an online tool for the analysis of linguistic data. Sketch Engine is used by major publishers (CUP, OUP, Macmillan, etc.) to produce dictionaries and grammar books. It can also be used for a wide range of research projects involving the analysis of language and discourse. Sketch Engine offers access to a large number of corpora in over 85 different languages. Many of the web-based corpora available through Sketch Engine include billions of words that can be analysed easily via the online interface.
In Sketch Engine, you can, for example:
- Search and analyse corpora via a web browser.
- Create word sketches, which summarise the use of words in different grammatical frames.
- Load and grammatically annotate your own data.
- Use parallel (translation) corpora in many languages.
- Crawl the web and collect texts that include a combination of user-defined keywords.
- Much more.
How to connect to Sketch Engine
1. Go to https://the.sketchengine.co.uk/login/
2. Click on ‘Authenticate using your institution account (Single Sign On)’.
3. Select ‘Lancaster University’ from the drop-down menu and use your Lancaster login details to log on.
That’s all – you can start exploring corpora straightaway!
Other corpus tools
There are also many other tools for the analysis of language and corpora available to Lancaster University staff and students (and others, of course!). The following overview describes some of them.
Desktop (offline) tools
- #LancsBox: runs on all major operating systems (Windows, Linux, Mac). It has a simple, easy-to-use interface and allows searching and comparing corpora (your own data as well as the corpora provided). In addition, #LancsBox provides unique visualisation tools for analysing frequency, dispersion, keywords and collocations.

Web-based (online) tools
- CQPweb: offers a range of pre-loaded corpora for English (current and historical) and other languages including Arabic, Italian, Hindi and Chinese. It includes the BNC2014 Spoken, a brand-new 10-million-word corpus of current informal British speech, and has a number of powerful analytical functionalities. The tool is freely available from https://cqpweb.lancs.ac.uk/
- Wmatrix: allows users to process their own data and add part-of-speech and semantic annotation. Corpora can also be searched and compared with reference wordlists. Wmatrix is available from http://ucrel.lancs.ac.uk/wmatrix/.
Vaclav Brezina and Gabriele Pallotti
Inflectional morphology concerns how words change their form to express grammatical meaning. It plays an important role in a number of languages, where patterns of word change may, for example, indicate number and case on nouns, or past, present and future tense on verbs. To express the past participle in German, for instance, we regularly add the prefix ge- and optionally modify the base: Ich gehe [I go/walk] thus becomes Ich bin gegangen [I have gone/walked]. English also inflects words (e.g. walk – walks – walking – walked; drive – drove – driven), but the range of inflected forms is narrower than in many other languages. The range of morphological forms in a text can be seen as its morphological complexity. Simply put, it is an indicator of the morphological variety of a text, i.e. how many changes to the dictionary forms of the words are manifested in the text.
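To make the idea of morphological variety concrete, here is a toy sketch in Python. It is not the Morphological Complexity Index itself, just an illustrative measure: given hand-annotated (surface form, dictionary form) pairs, it counts how many distinct inflected forms appear per dictionary form. The example data are invented for the illustration.

```python
# Toy illustration of morphological variety (NOT the official MCI):
# the mean number of distinct surface forms observed per dictionary form.
from collections import defaultdict

def morphological_variety(tagged_tokens):
    """Mean number of distinct surface forms per lemma in the text."""
    forms_per_lemma = defaultdict(set)
    for form, lemma in tagged_tokens:
        forms_per_lemma[lemma].add(form.lower())
    if not forms_per_lemma:
        return 0.0
    return sum(len(forms) for forms in forms_per_lemma.values()) / len(forms_per_lemma)

# Hand-annotated example: (surface form, dictionary form)
tokens = [
    ("walk", "walk"), ("walks", "walk"), ("walked", "walk"),
    ("drive", "drive"), ("drove", "drive"), ("driven", "drive"),
    ("car", "car"), ("car", "car"),
]
print(morphological_variety(tokens))  # walk: 3 forms, drive: 3, car: 1 -> 7/3 ≈ 2.33
```

A text that uses each word only in its dictionary form would score 1.0 on this toy measure, while a text exploiting many inflected forms scores higher; the actual MCI is a more sophisticated measure of the same intuition.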
To find out more about morphological complexity, how it can be measured and how L2 speakers acquire it, you can read:
Gabriele Pallotti and I have been working together to investigate this construct and to develop a tool that can analyse the morphological complexity of texts. So far, the tool has been implemented for English, Italian and German verbal morphology. Currently, together with Michael Gauthier from Université Lyon, we are implementing the morphological complexity measure for French verbs.
To analyse a text with the Morphological complexity tool, copy and paste the text into the text box, select the appropriate language and press ‘Analyse text now’ (Fig. 1).
Figure 1. Morphological tool: Interface
The tool outputs the results of a linguistic analysis that highlights all verbs and nouns in the text and identifies morphological changes (exponences). After clicking the ‘Calculate MCI’ button, the tool also automatically calculates the Morphological Complexity Index (MCI) – see Fig. 2.
Figure 2. Morphological tool output: Selected parts
On Friday 29th April 2016, Lancaster University hosted a symposium which brought together researchers and practitioners interested in Chinese linguistics and the corpus method. The symposium was supported by the British Academy (International Mobility and Partnership Scheme IPM 2013) and was hosted by the ESRC Centre for Corpus Approaches to Social Science (CASS). The symposium introduced the Guangwai-Lancaster Chinese Learner Corpus, a 1.2-million-word corpus of spoken and written L2 Chinese produced by learners of Chinese at different proficiency levels; the corpus was built as part of a collaboration between Guangdong University of Foreign Studies (Prof. Hai Xu and his team) and Lancaster University. The project was initiated by Richard Xiao, who also obtained the funding from the British Academy. Richard’s vision to bring corpus linguistics to the analysis of L2 Chinese (both spoken and written) is now coming to fruition with the final stages of the project and the public release of the corpus planned for the end of this year.
The symposium showcased different areas of Chinese linguistics research through presentations by researchers from Lancaster and other UK universities (Coventry, Essex), with the topics ranging from the use of corpora as resources in the foreign language classroom to a cross-cultural comparison of performance evaluation in concert reviews, second language semasiology, and CQPweb as a tool for Chinese corpus data. As part of the symposium, the participants were also given an opportunity to search the Guangwai-Lancaster Chinese Learner Corpus and explore different features of the dataset. At the end of the symposium, we discussed the applications of corpus linguistics in Chinese language learning and teaching and the future of the field.
Thanks are due to the presenters and all participants for joining the symposium and for very engaging presentations and discussions. The following snapshots summarise the presentations – links to the slides are available below the images.
Hai Xu (Guangdong University of Foreign Studies): Guangwai-Lancaster Chinese Learner Corpus: A profile – via video conferencing from Guangzhou
Simon Smith (Coventry University): 语料酷！[‘Corpora are cool!’] Corpora and online resources in the Mandarin classroom
Fong Wa Ha (University of Essex): A cross-cultural comparison of evaluation between concert reviews in Hong Kong and British newspapers
Vittorio Tantucci (Lancaster University): Second language semasiology (SLS): The case of the Mandarin sentence final particle 吧 ba
Andrew Hardie (Lancaster University): Using CQPweb to analyse Chinese corpus data
Vaclav Brezina (Lancaster University): Practical demonstration of the Guangwai-Lancaster Chinese Learner Corpus followed by a general discussion.
Clare Wright: Using Learner Corpora to analyse task effects on L2 oral interlanguage in English-Mandarin bilinguals
We are proud to announce a collaboration with Markus Dickinson and Paul Richards from the Department of Linguistics, Indiana University, on a project that will analyse syntactic structures in the Trinity Lancaster Corpus. The focus of the project is to develop a syntactic annotation scheme for spoken learner language and apply this scheme to the Trinity Lancaster Corpus, which is being compiled at Lancaster University in collaboration with Trinity College London. The aim of the project is to provide an annotation layer for the corpus that will allow sophisticated exploration of morphosyntactic and syntactic structures in learner speech. The project will have an impact both on the theoretical understanding of spoken language production at different proficiency levels and on the development of practical NLP solutions for the annotation of learner speech. More specific goals include:
- Identification of units of spoken production and their automatic recognition.
- Annotation and visualization of morphosyntactic and syntactic structures in learner speech.
- Contribution to the development of syntactic complexity measures for learner speech.
- Description of the syntactic development of spoken learner production.
On Friday 30th January 2015, I gave a talk at the International ESOL Examiner Training Conference 2015 in Stafford. Every year, Trinity College London, CASS’s research partner, organises a large conference for all their examiners, which consists of plenary lectures and individual training sessions. This year, I was invited to speak in front of an audience of over 300 examiners about the latest developments in the learner corpus project. For me, this was a great opportunity not only to share some of the exciting results from the early research based on this unique resource, but also to meet the Trinity examiners; many of them have been involved in collecting the data for the corpus. This talk was therefore also an opportunity to thank everyone for their hard work and wonderful support.
It was very reassuring to see the high level of interest in the corpus project among the examiners, who have a deep insight into the examination process from their everyday professional experience. The corpus, as a body of transcripts from the Trinity spoken tests, in some way reflects this rich experience, offering an overall holistic picture of the exam and, ultimately, of L2 speech in a variety of communicative contexts.
Currently, the Trinity Lancaster Corpus consists of over 2.5 million running words sampling the speech of over 1,200 L2 speakers from eight different L1 and cultural backgrounds. The size itself makes the Trinity Lancaster Corpus the largest corpus of its kind. However, it is not only the size that the corpus has to offer. In cooperation with Trinity (and with great help from the Trinity examiners) we were able to collect detailed background information about each speaker in our 2014 dataset. In addition, the corpus covers a range of proficiency levels (B1–C2 levels of the Common European Framework), which allows us to research spoken language development in a way that has not been previously possible. The Trinity Lancaster Corpus, which is still being developed with an average growth of 40,000 words a week, is an ambitious project: Using this robust dataset, we can now start exploring crucial aspects of L2 speech and communicative competence and thus help language learners, teachers and material developers to make the process of L2 learning more efficient and also (hopefully) more enjoyable. Needless to say, without Trinity as a strong research partner and the support from the Trinity examiners this project wouldn’t be possible.
During our ESRC Festival of Social Science “Language Matters: Communication, Culture, and Society” event, CASS Senior Research Associate Vaclav Brezina tells us about his research into foreign language learning.
On Monday 19 May we came together to celebrate the completion of the first part of the Trinity Lancaster Spoken Learner Corpus project. The transcription of our 2012 dataset is now complete and the corpus comprises 1.5 million running words. The Trinity Lancaster Spoken Learner Corpus represents a balanced sample of learner speech from six different countries (Italy, Spain, Mexico, India, China and Sri Lanka) covering the B1.2–C2 levels of the Common European Framework (CEFR). Below are some pictures from our small celebration.
We are continuing with the corpus development adding more data from our 2014 dataset so there is still a lot of work to be done. However, we are really excited about the possibilities of applied linguistic and language testing research based on this unique dataset.
You can read more about the Trinity Lancaster Spoken Learner Corpus in the AEA-Europe newsletter report.
Corpus linguistics (CL) is a set of incredibly versatile methods of language analysis applicable to a number of different contexts. So, for example, if you are interested in language, culture, history or society, corpus linguistics has something to offer. Today, thanks to the amazing development in computer technology, corpus linguistic tools are literally only a mouse click away – or a touch away, if you are using a tablet or a smartphone. Are you then ready to get your hands dirty with computational analysis of large amounts of language? If the answer is yes, you have probably already registered for the new massive open online course (MOOC) on Corpus Linguistics, created and run by Tony McEnery and other members of the CASS team. (If you haven’t managed to register yet, you can still do so at the FutureLearn website. The course kicks off on 27th January 2014.)
An essential part of the Corpus Linguistics MOOC is its unique feedback system. You will be given a question, a data set and a software tool, and you will be asked to apply what you have learnt in the MOOC lectures to real language analysis. You will explore a topic using corpus techniques which will enable you to uncover interesting patterns in language data. We have a range of topics in store for you. These include English grammar, British and American language and culture, historical discourse of 17th century news books and learner language. But don’t worry, we won’t ask you to write an essay on the topic. Instead, we will give you a number of analyses and descriptions of the corpus data and you will decide which ones use the corpus techniques correctly. After you’ve made your decisions we will provide detailed comments on each of the options. In this way, the CASS Corpus Linguistics MOOC system aims to promote independent learning so that next time you can apply the corpus tools with confidence to answer your own questions.