Covid-19 and the International Baccalaureate

A month passed, but yet our pain hasn’t diminished and justice unserved (#ibscandal, Aug 6)

Three months ago, when I wrote my introductory post for the CASS blog, I had a clear research plan for my SSHRC (Canada’s Social Sciences & Humanities Research Council) postdoctoral fellowship, which involved examining IB discourses in a large corpus of global (English) newspapers to see how these compared to IB discourses in Canada. However, that plan took a completely new and unexpected turn last month when Covid-19 and the IB collided.

The word “unprecedented” has been used a great deal in connection with our current Covid-19 world. While readers of history may raise a sceptical eyebrow about exactly how unprecedented this situation is, the word does apply to the IB organization and the events unfolding globally in relation to the May 2020 final examination results, which were released on July 6. This year has been unlike any other in the organization’s 52-year history because, for the first time, the high-stakes IB diploma program final exams were cancelled due to Covid-19. The announcement was made on March 23 and elaborated by Director General Siva Kumari on March 24, with a follow-up statement on May 13 describing in detail the alternative assessment model to be used.

On July 6, the IB organization published the final results for 174,355 IB Diploma Program (DP) and Career-related Program (CP) students with great fanfare. Of these, 170,343 were DP candidates from 146 countries, whose results would most likely be linked to university admission. Congratulatory messages were splashed on the IB organization website and Twitter feeds, celebrating the triumph of the Class of 2020 for their great achievements in such a difficult year. Messages from the Director General, Siva Kumari, Deputy Director Sally Holloway, Chief Assessment Officer Paula Wilcock and representatives from IB schools around the world joined together in their praise for this cohort, who had been forced to adapt to a new and fluid situation.

But a problem was brewing that was not evident from the IB organization’s celebratory communication. Reports began to emerge from a variety of sources (e.g., Wired, Reuters, TES, Financial Times, Bloomberg) about issues with the IB final results, which turned out to be lower than many had expected and thus put students’ university admission and/or scholarships in jeopardy. Within four days of the release of results, an online petition calling for “Justice for May 2020 IB Graduates”, with the hashtag #ibscandal, had already collected 15,000 signatures. Government bodies such as Ofqual (Office of Qualifications and Examinations Regulation) in the UK and the Data Protection Authority in Norway also became involved in seeking clarification regarding the IB organization’s grading system. Despite such wide coverage, the IB website released only a single statement on July 15 about the results, and the @iborganization Twitter account remained largely inactive, with nine posts on July 6 in connection with the results and the next post on July 20 saying: “In response to the enquiries received, we share further clarity regarding our awarding model for the May 2020 session. We are in direct communication with schools, providing support options including a new process to review extraordinary cases. Learn more:

Over the past month, the #ibscandal hashtag has evolved into a space where not only do students, parents and teachers voice their opinions (e.g., the quote at the start), but key information is also exchanged, such as newly published articles or videos. There are also academics and journalists posting under the hashtag, most recently a professor from New York University looking to talk to students “affected by the 2020 #ibscandal”. And on July 30, there was an announcement that the IB controversy was now on Wikipedia. In sum, there is a stark contrast between the IB organization’s silence on anything to do with results on one side and the mounting anger and frustration on the other.

Meanwhile, in Canada, no coverage of these events can be found anywhere. This is curious since Canada not only has the second highest number of DP candidates in the world (11,962 reported for the May 2020 examination session) but also ranks as the number one “destination for IB transcripts of any university in the world” (Arida, 2016). It would seem reasonable to expect some interest in the events taking place globally. Just to be sure I wasn’t missing anything, I conducted a search for international AND baccalaureate on Canadian Newsstream, a database containing over 280 news sources. Of the 17 results for the month of July, three were not about the IB; two mentioned the IB in passing as part of a person’s qualifications; one reported on a school in Vancouver going ahead with its plans to become an IB school; and 10 reported on complaints against Canada’s Governor General Julie Payette and her assistant, who had been friends “going back to their days in an international baccalaureate program decades ago”. The one remaining article, from July 21 (over two weeks after the IB results were published), was a reprint of the Reuters story Global exam grading algorithm under fire for suspected bias by A. A. Schapiro. In other words, there is no original Canadian news story even though the topic is clearly newsworthy.

So like everyone else who is interested in this topic, I am a regular visitor to the #ibscandal hashtag, observing events unfold in real-time. As a result I’ve noticed some rather interesting developments over the past four weeks which I hope to explore further using corpus tools and methods. As they often say here at CASS, watch this space!

Update: Since this piece was written, the IB organization issued a statement on August 17 explaining changes to their assessment model in light of data and evidence they received from schools. The announcement also appeared on the organization’s Twitter feed.

Time to Celebrate: Trinity Lancaster Corpus

On Wednesday 30 October, the ESRC Centre for Corpus Approaches to Social Science (CASS) organised a small get-together in its new location, Bailrigg House, to celebrate the research that is being carried out at the centre. Specifically, on this occasion, we wanted to highlight the Trinity Lancaster Corpus, a corpus of spoken learner English built in collaboration between Lancaster University and Trinity College London.

Cutting the cake with the Trinity Lancaster Corpus logo

We are really proud of the corpus, which is the largest learner corpus of its kind. It took us over five years to complete this part of the project. Here are a few numbers that describe the Trinity Lancaster Corpus:

  • Over 2,000 transcripts
  • Over 4.2 million words
  • Over 3,500 hours of transcription time
  • Over 10 L1 and cultural backgrounds
  • Up to four speaking tasks

A balanced sample of the corpus is now available for online searching via TLC Hub (password: Lancaster1964). To read more about the corpus and its development, check out this article in the International Journal of Learner Corpus Research:

Gablasova, D., Brezina, V., & McEnery, T. (2019). The Trinity Lancaster Corpus: Development, Description and Application. International Journal of Learner Corpus Research, 5(2), 126–158. [open access]

A new special issue of the journal featuring articles on various aspects of learner language, which use the Trinity Lancaster Corpus as their primary data source, is available from this link.

Table of contents of the special issue of the International Journal of Learner Corpus Research

A cake to celebrate the Trinity Lancaster Corpus

Celebrations at CASS

Celebrations at CASS (posters featuring research on TLC in the background)

CASS is strengthening its links with colleagues at the University of Mosul in Iraq

As reported in the media, in recent months we have been delighted to support staff and students at the University of Mosul in Iraq, who are rebuilding the Department of English after the devastation caused by the so-called Islamic State group. Via the CorpusMOOC and other forms of long-distance support, we have begun to interact with colleagues in Mosul, and to appreciate both the size of the task ahead of them and their determination to succeed. We are now in the process of arranging a month-long visit to Lancaster by two Mosul academics, so that we can strengthen our ties, including by exploring joint projects. Watch this space for updates on the visit and our future joint activities.

Introductory Blog – Gavin Brookes

This is the second time I have been a part of CASS, which means that this is the second time I’ve written one of these introductory blog pieces. I first worked in CASS in 2016, on an eight-month project with Paul Baker where we looked at the feedback that patients gave about the NHS in England. This was a really fun project to work on – I enjoyed being a part of CASS and working with Paul and made some great friends in the Centre with whom I’m still in contact to this day. After leaving CASS in October 2016, I completed my PhD in Applied Linguistics in the School of English at the University of Nottingham, which examined the ways that people with diabetes and eating disorders construct their illnesses and identities in online support groups. Following my PhD, I stayed in the School of English at Nottingham, working as a Postdoctoral Research Fellow in the School’s Professional Communication research and consultancy unit.

As you might have guessed from the topic of my doctoral project and my previous activity with CASS, my main research interests are in the areas of corpus linguistics and health communication. I am therefore very excited to return to the Centre now, with its new focus on the application of corpora to the study of health communication. I’m currently working on a new project within the Centre, Representations of Obesity in the News, which explores the ways that obesity and people affected by obesity are represented in the media, focussing in particular on news articles and readers’ responses. I’m very excited to be working on this important project. Obesity is a growing and seemingly ever-topical public health concern, not just in the UK but globally. However, the media’s treatment of the issue can often be stigmatising, making it quite deserving of scrutiny! Yet, our aim in this project isn’t just to take the media to task, but to eventually work with media outlets to advise them on how to cover obesity in a way that is more balanced and informative and, crucially, less stigmatising for people who are affected by it. In this project, we’re also working with obesity charities and campaign groups, which provides a great opportunity to make sure that the focus of our research is not just fit for academic journals but is relevant to people affected by this issue and so can be applied in the ‘real world’, as it were.

So, to finish on more of a personal note, the things I said about myself the last time I wrote one of these blog posts are still true; I still like walking, I still travel lots, I still read fantasy and science fiction, I still do pub quizzes, my football team are still rubbish and I don’t think I’ve changed that much since the photo used in that piece was taken… Most of all, though, it still excites me to be a part of CASS and I am absolutely delighted to be back.


Learn about the BNC2014, scan a book sample and contribute to the corpus…

On Saturday 12 May 2018, CASS hosted a small training event at Lancaster University for a group of participants from different universities in the UK. We talked about the BNC2014 project and discussed both the theoretical underpinnings and the practicalities of corpus design and compilation. Slides from the event are available as a pdf here.

The participants then tried in practice what is involved in the compilation of a large general corpus such as the BNC2014. They selected and scanned samples of books from current British fiction, poetry and a range of non-fiction books (history, popular science, hobbies etc.). Once processed, these samples will become a part of the written BNC2014.

Here are some pictures from the event:

Carmen Dayrell and Vaclav Brezina before the event

Elena Semino welcoming participants

In the computer lab: Abi Hawtin helping participants

A box full of books

If you are interested in contributing to the written BNC2014, go to the project website to find out about different ways in which you can participate in this exciting project.

The event was supported by ESRC grant no. EP/P001559/1.

40th Anniversary of the Language and Computation Group


Recently I was given the chance to attend the 40th anniversary of the Language and Computation (LAC) group at the University of Essex. As an Essex alumnus, I was invited to present my work with CASS on Financial Narrative Processing (FNP), part of an ESRC-funded project. Slides are available online here.

The event celebrated 40 years of the Language and Computation (LAC) group: an interdisciplinary group created to foster interaction between researchers working on Computational Linguistics within the University of Essex.

There were 16 talks by Essex University alumni and connections, including Yorick Wilks, Patrick Hanks, Stephen Pulman and Anne de Roeck.

The two-day workshop started with Doug Arnold from the Department of Language and Linguistics at Essex. He began by presenting the history of the LAC group, which started with the arrival in the late 70s of Yorick Wilks and others from Language and Linguistics, including Stephen Pulman, Mike Bray, Ray Turner and Anne de Roeck. According to Doug, the introduction of the cognitive studies centre and the Eurotra project in the 80s led to the creation of the Computational Linguistics MA, paving the way towards the emergence of Language and Computation. Something I had always wondered about.

The workshop referred to the beginnings of some of the most influential conferences and associations in computational linguistics, such as CoLing, EACL and ESSLLI. It also showed the influence of world events in that period and the struggles researchers and academics had to go through, especially during the Cold War and the many university crises around the UK during the 80s and 90s. Having finished my PhD in 2012, it had never crossed my mind how difficult it must have been for researchers and academics to make progress under such trying circumstances.

Doug went on to point out how the introduction of the World Wide Web in the mid 90s and the development of technology and computers helped to rapidly advance and reshape the field. This helped close the gap between computation and linguistics, and eased the problem of field identity for computational linguists coming from either a computing or a linguistics background. We now live surrounded by rapid technology and solid network infrastructure, which means communication and data processing are no longer a problem. I was astonished when Stephen Pulman mentioned how they used to wait a few days for the only machine in the department to compile a few lines of LISP code.

The emergence of Big Data processing around 2010, and the rapid need for resourcing, crowd-sourcing and interpreting big data, added more challenges but also interesting opportunities for computational linguists. Something I very much agree with, considering the vast amount of data available online these days.

Doug ended his talk by pointing out that, in general, Computational Linguistics is a difficult field: computational linguists are expected to be experts in many areas, and so training them is a challenging and difficult task. As a computational linguist, this rings true. For example, as someone from a computing background, I find it difficult to understand how part-of-speech taggers work without being versed in the grammar of the language being studied.

Doug’s talk was followed by compelling and very informative talks from Yorick Wilks, Mike Rosner and Patrick Hanks.

Yorick opened with “Linguistics is still an interesting topic”, narrating his experience of moving from Linguistics towards Computing and the challenge posed by the UK system compared to countries such as France, Russia and Italy, where Chomsky had little influence. This reminded me of Peter Norvig’s response to Chomsky’s criticism of empirical theory, where he said, and I quote: “I think Chomsky is wrong to push the needle so far towards theory over facts”.

In his talk, Yorick referred to Lancaster University and the remarkable work by Geoffrey Leech on the development of the CLAWS tagger, which was one of the earliest statistical taggers ever to reach the USA.

“What is meaning?” was the opening of Patrick Hanks’ talk, which went on to discuss word ambiguity: “most words are hopelessly ambiguous!”. Patrick briefly discussed his ‘double helix’ rule system, the Theory of Norms and Exploitations (TNE), which enables creative use of language when speakers and writers make new meanings, while at the same time relying on a core of shared conventions for mutual understanding. His work on patterns and phraseology is of great interest in attempting to answer the question of why a perfectly valid English sentence fits a single pattern.

This was followed by interesting talks from ‘Essexians’ working at different universities and firms across the globe, covering recent work in Computational Linguistics (CL), Natural Language Processing (NLP) and Machine Learning (ML). One of these was collaborative work between Essex University and Signal, a startup company in London.

The event closed with more socialising, drinks and dinner at a Nepalese restaurant in Colchester, courtesy of the LAC group.

In general I found the event very interesting, well organised and rich in historical evidence on the beginnings of Language and Computation. It was also of great interest to hear about current and state-of-the-art work in CL, NLP and ML presented by the event attendees.

I would very much like to thank the Language and Computation group at the University of Essex for the invitation, and for their time and effort in organising this wonderful event.

Mahmoud El-Haj

Senior Research Associate

CASS, Lancaster University


NewsHack 2016 Retrospective

The BBC’s multilingual NewsHACK event was run on the 15th and 16th of March as an opportunity for teams of language technology researchers to work with multilingual data from the BBC’s connected studio.  The theme was ‘multilingual journalism: tools for future news’, and teams were encouraged to bring some existing language technologies to apply to problems in this area. Nine teams attended from various news and research organisations. Lancaster University sent two teams with funding from CASS, CorCenCC, DSI, and UCREL: team ‘1’ consisting of Paul, Scott and Hugo, and team ‘A’ comprising Matt, Mahmoud, Andrew and Steve.


The brief from the newsHACK team suggested two possible directions: to provide a tool for the BBC’s journalist staff, or to create an audience-facing utility. To support us, the BBC provided access to a variety of APIs, but the Lancaster ‘A’ team were most interested to discover that something we’d thought would be available wasn’t — there is no service mapping news stories to their counterparts in other languages. We decided to remedy that.

The BBC is a major content creator, perhaps one of the largest multilingual media organisations in the world. This presents a great opportunity. Certain events are likely to be covered in every language the BBC publishes in, providing ‘translations’ of the news which are not merely literal translations at the word, sentence or paragraph level, but full-fledged contextual translations which identify the culturally appropriate ways to convey the same information. Linking these articles together could help the BBC create a deeply interesting multilingual resource for exploring questions about language, culture and journalism.

Interesting, but how do we make this into a tool for the BBC? Our idea was to take these linked articles directly to the users. Say you have a friend who would prefer to read the news in their native tongue, one different to your own: how would you share a story with them? Existing approaches seem to involve either using an external search engine (but then how do you know the results are what you intend to share, if you don’t speak the target language?) or using machine translation to offer your friend a barely-readable version of the exact article you have read. We came up with an idea that keeps content curation within the BBC and provides readers with easy access to the existing high-quality translations being made by professional writers: a simple drop-down menu for articles which allows a user to ‘Read about this in…’ any of the BBC’s languages.


Implementing this in two days required a bit of creative engineering. We wanted to connect articles based on their content, but we didn’t have tools to extract and compare features in all the BBC’s languages. Instead, we translated small amounts of text (article summaries and a few other pieces of information) into English, which has some of the best NLP tool support (and was the only language all of our team spoke). We could then use powerful existing solutions for named entity recognition and part-of-speech tagging to extract informative features from articles, and compare them using a few tricks from the record linkage literature. Of course, a lack of training data (and time to construct it!) meant that we couldn’t machine-learn our way to perfection for weighting these features, so a ‘human learning’ process was involved: manually tweaking the weights and thresholds until we got some nice-looking links between articles in different languages.
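As a rough sketch of that comparison step, the snippet below links two articles by a weighted combination of per-feature-type overlaps. The feature types, weights, threshold and example features are all invented for illustration (the real values were hand-tuned at the event), and the entity extraction is assumed to have already happened upstream:

```python
# Hypothetical feature weights and cutoff, standing in for the
# hand-tuned values described above.
WEIGHTS = {"persons": 0.5, "places": 0.3, "keywords": 0.2}
THRESHOLD = 0.35

def jaccard(a, b):
    """Set overlap between two feature lists (0.0 when both are empty)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def article_similarity(feats_a, feats_b):
    """Weighted sum of per-feature-type overlaps between two articles."""
    return sum(w * jaccard(feats_a.get(k, ()), feats_b.get(k, ()))
               for k, w in WEIGHTS.items())

# Features extracted from English translations of two article summaries
# (made-up examples, not real BBC data).
english = {"persons": ["merkel"], "places": ["berlin"], "keywords": ["refugees", "quota"]}
arabic = {"persons": ["merkel"], "places": ["berlin"], "keywords": ["refugees", "asylum"]}

score = article_similarity(english, arabic)
linked = score >= THRESHOLD  # propose the two articles as counterparts
```

Tweaking `WEIGHTS` and `THRESHOLD` by hand, as we did, simply means adjusting these constants until the proposed links look sensible.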

Data is only part of the battle, though. We needed a dazzling front-end to impress the judges.  We used a number of off-the-shelf web frameworks to quickly develop a prototype, drawing upon the BBC’s design to create something that could conceivably be worked into a reader’s workflow: users enter a URL at the top and are shown results from all languages in a single dashboard, from which they can read or link to the original articles or their identified translations.

Here we have retrieved a similar article in Arabic, as well as two only-vaguely-similar ones in Portuguese and Spanish (the number indicates a percentage similarity).  The original article text is reproduced, along with a translated English summary.


The judges were impressed — perhaps as much with our pun-filled presentation as our core concept — and our contribution, the spontaneously-titled ‘Super Mega Linkatron 5000’ was joint winner in the category of best audience-facing tool.

The BBC’s commitment to opening up their resources to outsiders seems to have paid off with a crop of high-quality concepts from all the competitors, and we’d like to thank them for the opportunity to attend (as well as the pastries!).

The code and presentation for the team ‘A’ entry is available via github at and images from Lancaster’s visit can be seen at .  Some of the team have also written their own blog posts on the subject: here’s Matt’s and Steve’s.

Team ‘1’ based their work around the BBC Reality Check service. This was part of the BBC News coverage of the 2015 UK general election, publishing news items on Twitter and contributing to TV and radio news as well. For example, in May 2015, when a politician claimed that the number of GPs had declined under the coalition government, BBC Reality Check produced a summary of data obtained from a variety of sources to enable the audience to make up their own minds about the claim. Reality Check is continuing in 2016 with a similar service for the EU referendum, providing, for example, a check on how many new EU regulations there are every year (1,269 rather than the 2,500 claimed by Boris Johnson!). After consulting with the BBC technology producer and journalist attending the newsHACK, team ‘1’ realised that the current Reality Check service could only serve its purpose for English news stories, so they set about making a new ‘BBC Multilingual Reality Check’ service to support journalists in their search for suitable sources. A multilingual system is really important for the EU referendum and other international news topics, since potential sources may be written in languages other than English.

In order to bridge related stories across different languages, we adopted the UCREL Semantic Analysis System (USAS) developed at Lancaster over the last 26 years. The system automatically assigns semantic fields (concepts or coarse-grained senses) to each word or phrase in a given text, and we reasoned that the frequency profile of these concepts would be similar for related stories, even in different languages; for instance, the semantic profile could help distinguish between news stories about finance, education or health. Using the APIs that the BBC newsHACK team provided, we gathered stories in English, Spanish and Chinese (the native languages spoken by team ‘1’). Each story was then processed through the USAS tagger and a frequency profile was generated. Using a cosine distance measure, we ranked related stories across languages. Although we only used BBC multilingual news stories during the newsHACK event, the system could be extended to ingest text from other sources, e.g. the UK Parliamentary Hansard and manifestos, proceedings of the European Parliament, and archives of speeches from politicians (before they are removed from political party websites).
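The profile comparison can be sketched in a few lines of Python. This is a minimal illustration on invented data, not the team’s actual code: the USAS-style tag labels and counts below are made up, and real profiles would come from the USAS tagger’s output.

```python
import math
from collections import Counter

def cosine_similarity(profile_a, profile_b):
    """Cosine similarity between two semantic-tag frequency profiles."""
    dot = sum(profile_a[tag] * profile_b.get(tag, 0) for tag in profile_a)
    norm_a = math.sqrt(sum(v * v for v in profile_a.values()))
    norm_b = math.sqrt(sum(v * v for v in profile_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented tag frequency profiles (USAS-style labels, e.g. I1 ~ money).
story_en = Counter({"I1": 12, "G1.1": 5, "P1": 1})   # English finance story
story_es = Counter({"I1": 9, "G1.1": 6})             # Spanish finance story
story_zh = Counter({"P1": 10, "S2": 4})              # Chinese education story

# Rank candidate stories by similarity to the English one, highest first.
candidates = {"es": story_es, "zh": story_zh}
ranked = sorted(candidates,
                key=lambda k: cosine_similarity(story_en, candidates[k]),
                reverse=True)
```

Ranking by similarity (descending) is equivalent to ranking by cosine distance (ascending), since distance is simply one minus similarity.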

The screenshot below shows our analysis focussed on some main topics of the day: UK and Catalonia referendums, economics, Donald Trump, and refugees. Journalists can click on news stories in the system and show related articles in the other languages, ranked by our distance measure (shown here in red).

Team ‘1’s Multilingual Reality Check system would not only allow fact checking such as the number of refugees and migrants over time entering the EU, but also allow journalists to observe different portrayals of the news about refugees and migrants in different countries.


From Corpus to Classroom 2

There is great delight that the Trinity Lancaster Corpus is providing so much interesting data that can be used to enhance communicative competences in the classroom. From Corpus to Classroom 1 described some of these findings. But how exactly do we go about ‘translating’ this for classroom use, so that it can be used by busy teachers with high-pressure curricula to get through? How can we be sure we enhance, rather than problematize, the communicative feature we want to highlight?

Although the Corpus data comes from a spoken test, we want to use it to illustrate wider pragmatic features of communication. The data fascinates students, who are entranced to see what their fellow learners do, but how does it help their learning? The first step is to send the research outputs to an experienced classroom materials author to see what they suggest.

Here’s how our materials writer, Jeanne Perrett, went about this challenging task:

As soon as I saw the research outputs from TLC, I knew that this was something really special: proper, data-driven learning on how to be a more successful speaker. I could also see that the corpus scripts, as they were, might look very alien and quirky to most teachers and students. Speaking and listening texts in coursebooks don’t usually include sounds of hesitation, people repeating themselves, people self-correcting or even asking ‘rising intonation’ questions. But all of those things are a big part of how we actually communicate, so I wanted to use the original scripts as much as possible. I also thought that learners would be encouraged by seeing that you don’t have to speak in perfectly grammatical sentences, that you can hesitate and you can make some mistakes but still be communicating well.

Trinity College London commissioned me to write a series of short worksheets, each one dealing with one of the main research findings from the Corpus, and intended for use in the classroom to help students prepare for GESE and ISE exams at a B1 or B2 level.

I started each time with extracts from the original scripts from the data. Where I thought that the candidates’ mistakes would hinder the learner’s comprehension (unfinished sentences for example), I edited them slightly (e.g. with punctuation). But these scripts were not there for comprehension exercises; they were there to show students something that they might never have been taught before.

For example, sounds of hesitation: we all know how annoying it is to listen to someone (native and non-native speakers alike) continually erm-ing and er-ing in their speech, and the data showed that candidates were hesitating too much. But we rarely, if ever, teach our students that it is in fact okay, and indeed natural, to hesitate while we are thinking of what we want to say and how we want to say it. What they need to know is that, like the more successful candidates in the data, they can use other words and phrases instead of erm and er. So one of the worksheets shows how we can use hedging phrases such as ‘well…’, ‘like…’, ‘okay…’, ‘I mean…’ or ‘you know…’.

The importance of taking responsibility for a conversation was another feature to emerge from the data and again, I felt that these corpus findings were very freeing for students; that taking responsibility doesn’t, of course, mean that you have to speak all the time but that you also have to create opportunities for the other person to speak and that there are specific ways in which you can do that such as making active listening sounds (ah, right, yeah), asking questions, making short comments and suggestions.

Then there is the whole matter of how you ask questions. The corpus findings show that there is far less confusion in a conversation when properly formed questions are used. When someone says ‘You like going to the mountains?’, the question is not as clear as when they say ‘Do you like going to the mountains?’ This might seem obvious, but pointing it out, showing that less checking of what has been asked is needed when questions are direct ones, is, I think, very helpful to students. It might also be a consolation: all those years of grammar exercises really were worth it! ‘Do you know how to ask a direct question?’ ‘Yes, I do!’

These worksheets are intended for EFL exam candidates but the more I work on them, the more I think that the Corpus findings could have a far wider reach. How you make sure you have understood what someone is saying, how you can be a supportive listener, how you can make yourself clear, even if you want to be clear about being uncertain; these are all communication skills which everyone needs in any language.



CASS receives Queen’s Anniversary Prize for Further and Higher Education

At the end of February, a team of CASS researchers attended the Presentation of the Queen’s Anniversary Prizes for Further and Higher Education, held at Buckingham Palace. The CASS team officially received the award from Their Royal Highnesses, The Prince of Wales and the Duchess of Cornwall, on 25th February 2016.

Back in November, it was announced that CASS had received the esteemed Queen’s Anniversary Prize for its work in “computer analysis of world languages in print, speech, and online.” The Queen’s Anniversary Prizes are awarded every two years to universities and colleges whose submitted work is judged to show excellence, innovation, impact, and benefit both for the institution itself and for people and society in the wider world.

Ten of us were selected to attend the ceremony itself, including the Chancellor, the Vice-Chancellor, our Centre Director Tony McEnery, and three students. Buckingham Palace sent strict instructions about dress code and the possession of electronic devices, and we were well versed in royal etiquette by the time the big day arrived. I think all of us were a little nervous about what the day would have in store, but we met bright-eyed and bushy-tailed at 9:30am and took a taxi to Green Park. We entered through the front gates of Buckingham Palace, and looked back at the crowd of adoring fans on the other side of the railings.

We showed our entry cards, and found ourselves being ushered across the courtyard and into the Palace itself. We dropped off our coats and bags, and then went up the grand staircase into the Ballroom where the ceremony was held. We began to relax as the Equerry told us what would be happening throughout the ceremony, and the Countess of Wessex’s String Orchestra provided excellent music during the event. The score ranged from Handel right through to John Lennon’s ‘Imagine’, and even a James Bond theme.

As the ceremony started, Vice-Chancellor Mark Smith and CASS Centre Director Tony McEnery passed through the guests, along with representatives from other universities and colleges, and then proceeded to form a line to receive the award. Chancellor Alan Milburn was seated at the front of the Ballroom, along with Anne, Princess Royal. Whilst receiving the award on behalf of CASS, The Prince of Wales asked the Vice-Chancellor about our work, and was fascinated to discover what we have undertaken in the past 40 years. After a brief chat about our work, Mark Smith and Tony McEnery were presented with the Queen’s Anniversary Prize medal and certificate that will be displayed in the John Welch Room in University House.

After the ceremony, we filed through into the Picture Gallery for the reception. Over the course of the next 60-90 minutes, guests were free to mingle and network with each other whilst canapés were served. Dignitaries passed through and spoke to the visitors; Anne, Princess Royal, had a keen interest in the impact of our work on dictionary-making, and I must admit that Tony McEnery was excellent at giving a summary of what corpus research entails. He outlined how it is used in modern-day dictionary building, and discussed some of the historical texts that we now have access to.

The Duchess of Cornwall also visited our group over the course of the event, and made a point of speaking to both Gill Smith and Rosie Knight about the practical applications of their research. They discussed extensively why corpus research is such a useful method in the social sciences, and spoke of their personal connection to the research centre.

Having the opportunity to promote and discuss our research with royalty was a true honour, and I think it is fantastic to see the work of CASS recognised in this unique and special way.