Abstract. The research interview constitutes one of the main methods for obtaining information from library users. Typically, interviews are recorded and then analysed later by the interviewer. In this paper we consider the advantages and disadvantages of using discourse and conversation analysis when examining users' responses. We focus upon two concrete examples: a study employing discourse analysis when re-designing an index, and research in which conversation analysis is used as part of the evaluation procedure for interface design. Our suggestion is that discourse analysis is particularly useful when seeking information of a general nature, whereas conversation analysis has the advantage of uncovering implicit models and metaphors employed by people when using library services. Our primary aim in this paper is to highlight some of the costs and benefits of utilising discourse and conversation analytic methods for library staff.

1. Introduction
For a number of years the interview has been one of the primary methods whereby researchers and other interested parties gain information from users of library services. (We might note in passing that the interview continues to constitute a preliminary phase of questionnaire design in Library and Information Studies (LIS) [1].) Increasingly, qualitative methods which centre on the interview (and in this context one can consider the focus group as a kind of group interview) are considered advantageous on both academic and practical criteria [2]. Our aim in this paper is to consider what is involved in using either discourse or conversation analysis when it comes to making sense of what people have said during interviews. Over and above providing a summary description, the intention is to highlight particular advantages and disadvantages of these forms of analysis.
Historically, within the social and behavioural sciences, whenever qualitative methods are advocated discourse rather than conversation analysis is preferred (see Westbrook [3] for an overview relevant to LIS). However, the term `discourse analysis' covers a multitude of methods and numerous disciplines. For example, discourse analysis of pupil-teacher classroom interaction [4] derives from linguistics, of early child language from ethnography [5], of text content from critical theory [6], and of gender differences from sociology [7]. The kind of discourse analysis found in LIS mainly stems from a more general amalgam of approaches from human-computer interaction and behavioural analysis [8-10].
Conversation analysis (CA) has emerged as a sub-discipline within sociology, influenced by researchers in social anthropology and particularly ethnomethodology. Ethnomethodology can be traced to a group of sociologists who, during the 1960s, began to call into question the way mainstream sociology imposed its own analytic categories in the classification of sociological phenomena. Building in part on the work of social anthropology and ethnography, ethnomethodologists insisted on placing centre stage participants' own understandings of their everyday interactions. In other words, the object of enquiry should be `the set of techniques that the members of a society themselves utilise to interpret and act within their own social world' [11]. Ethnomethodology (EM) can be defined as the study of participants' own methods of production and interpretation of all forms of social interaction. The central concern is with the rational analysis of the structures, procedures and strategies that participants themselves employ so as to make sense out of their everyday world.
Another important aspect of CA and EM is the emphasis on accountability. Accountability captures the idea that people design their behaviour with an awareness of appropriate normative conventions. In other words, they orient themselves to whatever rule is relevant to the situation in which they find themselves, and they choose to follow (or not to follow) the rule in the light of what they expect the consequences of that choice to be. All behaviour concerned with communication will be accountable and will follow appropriate conventions for display, signalling, politeness and related phenomena. Some of the conventions of the classroom, for example, have been shown to exhibit orderliness and sensitivity to such extra-curricular conventions [12].
Conversation analysis itself seeks to understand people speaking to each other as a quintessential form of social interaction, and focuses on the relationship between intra-turn and inter-turn organisation in everyday talk [11]. CA is particularly concerned to identify the sequential structures that participants co-produce and orient to in the course of an evolving conversation. These may include patterns of repetition and variation, openings and closings, greetings, question-answer sequences, marking devices (intonation, emphasis), overlaps, and more extended sequences such as telling-a-story.
We will argue that conversation analysis can be very useful within LIS, particularly where the concern is to understand the implicit models, metaphors and presuppositions of explanations elicited from users in interviews. We turn first, however, to describing one general approach within an LIS context employing discourse analysis.

2. Discourse analysis: re-designing an index (the Taxation Service)
The problem which forms the background to the interviews conducted for this analysis centres on the provision of a practical index to a multi-volume tax series (known as the Taxation Service). Responding to the publisher's request, the first author agreed to conduct interviews with relevant users (tax accountants) so as to ascertain reported problems encountered with the index. The multi-volume text is primarily used as the main advice source for accountants providing a service to small businesses. The Taxation Service combines case law examples with expositions of statute and the provision of relevant ready-reckoner types of advice about specific problems.
The publishers feared, and the accountants interviewed broadly agreed, that the index was difficult to use, incomprehensible and unwieldy. In this instance a representative group of accountants was interviewed using a semi-structured interview schedule in which a number of specific questions were asked (Table 1 provides a sample). Designing and constructing the questions to be asked in the interview followed from an analysis of
(a) the concerns of the publisher;
(b) the structure of the taxation service and the index;
(c) detailed answers obtained from the first interview carried out.
The categories which formed the basis for the range of questions asked are reflected in the sections of the results, and were:
(a) user satisfaction and perceptions of the taxation service;
(b) patterns of use of the taxation service;
(c) the structure of the taxation service;
(d) the structure of the index, problems encountered with it, and suggestions for improvement;
(e) typical tasks carried out when using the index.
Following the recorded interviews the tapes were transcribed and the transcriptions analysed. At the risk of oversimplification, we should note that in practice this involves a number of procedures. During the transcription itself, various checks are carried out to ensure that there is no ambiguity over any aspect of the recorded speech. The recording conditions ensured that the playback was of a high quality. In addition, the transcription was carried out no more than two or three days after the original recordings, so that notes taken during the interview could supplement the recordings (this is essential where you wish to check any detail of a complex task which involves a series of actions). The transcripts were then coded in line with the various categories of interest (reflected in the questions above). Part of this protocol analysis involved checking the reliability of the coding. A randomly selected 10% of the recordings were coded and categorised by two independent raters. Adequate levels of inter-observer reliability (around 90% agreement) were obtained, both between the raters and with the principal investigator, and all disagreements were resolved through further playback of the recordings. Table 2 provides a representative selection of the answers given. Coding and analysing the responses in this way makes it possible to provide overview results of a general nature as well as summary recommendations (Table 3). Where desired, the resulting categorisation of responses can be analysed statistically using non-parametric methods (see [10] for further details on this study).
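The inter-observer reliability figure mentioned above can be computed mechanically once two raters have coded the same sample. The following Python sketch shows raw percent agreement alongside Cohen's kappa (a chance-corrected alternative); the category labels and rater data are invented purely for illustration, not taken from the study.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Proportion of items assigned the same category by both raters."""
    assert len(codes_a) == len(codes_b), "raters must code the same items"
    agree = sum(a == b for a, b in zip(codes_a, codes_b))
    return agree / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for the level expected by chance alone."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)          # observed agreement
    ca, cb = Counter(codes_a), Counter(codes_b)
    # expected chance agreement from each rater's marginal frequencies
    p_e = sum(ca[c] * cb[c] for c in set(codes_a) | set(codes_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# hypothetical codings of six interview comments by two raters
rater_1 = ["use", "structure", "use", "index", "satisfaction", "use"]
rater_2 = ["use", "structure", "index", "index", "satisfaction", "use"]
print(percent_agreement(rater_1, rater_2))  # 5 of 6 codes agree, ≈ 0.83
```

Percent agreement is what the figure of "around 90%" in the study most plausibly refers to; kappa is stricter because it discounts agreement that would occur by chance.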
We will consider this form of discourse analysis in more detail below, particularly with respect to the nature of the statistical information which can be produced. For now, and in light of the comparisons we will emphasise, we turn to conversation analysis (CA).

3. Conversation analysis: a method for informing evaluation for interface design
Although there are a number of studies which employ this method in human-computer interaction (HCI) and education, we know of no applications of this methodology in LIS. We have been employing CA as part of an evaluation study of an electronic library project (JournalsOnLine [1]), in order to gauge whether we can gain more insight into the perceptions and expectations of library staff users. So far, we have looked in detail at library staff's explanations or accounts of a typical and everyday task concerning the inter-library loan service.
The extracts described here were recorded as part of an interview conducted by the second author with a library assistant at the University of Kent. The interviewer and respondent were sitting together at a computer terminal where a prototype interface for JournalsOnLine was being tested. The discussion focused on problems encountered by the inter-library loan (ILL) library assistant with (current, paper-based) requests that are not filled in properly. This conversation serves as the background for discussing the design of the interface. The section selected for analysis here is approximately three minutes long and the transcript is given in full in the appendix.
As noted in the introduction, conversation analysis is a fine-grained analysis of the structures, procedures and techniques participants deploy as they make sense of their everyday experience. In the extract given here the respondent is being asked to provide an explanation of the kinds of queries and requests received from users. In this short example, by focusing on the minutiae of the conversation, we can articulate a model of the task context held by this library assistant. Consider, for example, Table 4, where she articulates what she understands by a waste of time (details of the transcription conventions are given in the appendix).
Considering this one extract in detail will help provide a flavour of CA as a methodology. First, the latching (utterances following each other very quickly) around the word `obviously' in line 36 points to some of the taken-for-granted professional practice of the ILL library assistant. Next, the intonation markers in line 42 (a fall and then a rise at the beginning), the stretching of the word `aware' and the latching (lines 44-45) point to the library assistant's prior patterns of use, as well as articulating the nature of the task structure. Further on, in line 51, the emphasis on the phrase `wasting time' and the pause that follows it (0.3 sec.) indicate that the library assistant wants the point to be noticed. Furthermore, the nature of the researcher's reply indicates that he has taken up this point (his stress of `right'). In this tiny sequence of talk the library assistant is:
(i) giving an account of an activity within her routine work
(ii) demonstrating the ways in which she is institutionally accountable in the course of her work to manage mixed concerns, obligations and responsibilities between parties directly and consequentially involved in that activity
(iii) clarifying the conflicts of interest that can arise between the needs of the library user and the provision of a service from both her library and a third-party institution: the British Library. Finally, notice how the library assistant has not only stated the task and the potential conflicts that can arise, but is telling us how she competently manages all those activities as part of her routine work in the library.
In Table 5, the library assistant points out that under some conditions it will be necessary to send off a form which they have not had time to check.
Here, in line 72, she notes the conditions under which they would send an unchecked form to Boston Spa (the supplier). Note the repetition of `very' and the emphasis on `will'. Not only would they have to be very, very busy, but yes, even this professional library assistant would send off an unchecked form. This points to a kind of triple accountability the library assistant sees herself as having: to the service user, to the journal supplier and to the researcher in the context of the interview. Furthermore, lines 59-62 provide an indication of what constitutes competence (the library assistant's) in this instance: she can read the Boston Spa response. She is speaking about what Boston Spa sends back when she sends a form with an error and they query it. This sets up (in lines 72-73) criteria for competence even where the library assistant sometimes knowingly passes on erroneous material.
Overall, using CA on this short section of the interview (which was 30 minutes in all), it has been possible to highlight relations between a professional library assistant's explanations for bad user requests and her own display of competence in cases of error, and to show how this particular task can be understood with reference to other interested parties. The transcript contains a specification of the technical problem of ascertaining the correct ILL details which are missing on the form (lines 13-24); recognition of the accountability of the problem (i.e. who is accountable for a mistake - lines 36-38); actions and explanations structured with reference to the immediate working environment (throughout the transcript); and, arguably, an evolving narrative of what constitutes trouble for other library staff (e.g. lines 17-23; 59-63). These elements are included in the summary of Fig. 1.
A number of points can now be made in evaluation. As the popularity of qualitative methods increases in LIS there is every likelihood that we will see a variety of different analysis methods being used (see other papers in this volume). Correspondingly, researchers and evaluators will need to be able to decide on the particular costs and benefits of different procedures. They will also benefit from having some idea of the kinds of assumptions and background theory which inform different methods. With reference to discourse and conversation analysis we can note specific advantages and disadvantages.
- DA can meet the requirements of traditional social science methodology. Using appropriate coding and categorisation schemes, statistical procedures can be employed in order to fulfil specific methodological criteria. New research findings are readily compared with the results of previous research.
- It is relatively easy to learn how to employ suitable coding and categorisation procedures.
- Useful where general information is required (summary recommendations).
- There is a danger of providing unrepresentative information. The initial categorisation codes may be those of interest to the researcher, but not necessarily of concern to either users or participants.
- The analysis can be over-formalised. Statistical significance (e.g. of particular distributional patterns of categories across a corpus of data) may have little bearing on participants' own understandings of the tasks in hand.
Discourse analysis lends itself to established procedures for quantitative analysis. At the same time there is a failure to recognise that the practice of coding and categorisation is itself interpretatively circular. First, you begin with a fairly lengthy corpus of data and impose certain codes and categories on it. Then, in order to check for validity and reliability, you train an independent group of raters to classify similar kinds of utterances (similar interview comments). Once they are sufficiently trained, you randomly select 10% of the corpus and ask them to rate these comments blind to your own codings. If agreement of around 80-90% is then achieved, the coding is taken to be reliable, i.e. in terms of the distributional coding frequencies achieved, and statistical analysis can proceed. Invariably this is how the methodology proceeds. Summaries and recommendations made following the analysis should therefore be understood with reference to the analytic practices that were employed; this is rarely commented upon in the literature.
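The random-sampling step in this workflow is simple to make explicit and reproducible. A minimal Python sketch follows; the function name, the fixed seed and the segment identifiers are our own illustrative choices, not part of the procedure described in the literature.

```python
import random

def draw_reliability_sample(segment_ids, fraction=0.10, seed=42):
    """Randomly select a fixed fraction of coded segments for blind
    re-coding by independent raters."""
    k = max(1, round(len(segment_ids) * fraction))
    rng = random.Random(seed)  # fixed seed so the same sample can be re-drawn
    return sorted(rng.sample(segment_ids, k))

# 120 hypothetical coded interview segments
segments = [f"segment_{i:03d}" for i in range(1, 121)]
sample = draw_reliability_sample(segments)
print(len(sample))  # 12 segments, i.e. 10% of the corpus
```

Fixing the seed matters for the circularity point made above: anyone re-examining the analysis can reconstruct exactly which 10% of the corpus the reliability claim rests on.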
- Detailed analysis can reveal conceptually complex user-oriented understandings.
- Attention need only focus on a small amount of symptomatic data, selected from a larger body of data to which reference can still be made.
- The methodology ensures that user-perceptions remain the central concern.
- High cost in terms of time when conducting the analysis (fine-grained transcription).
- Relatively difficult to learn to employ CA.
- The methodology of CA may itself cast the `interview context' as problematic. In other words, what is being uncovered are the presuppositions implicit in explanatory accounts (by the respondent) when asked to participate in this particular type of interview. CA provides the framework within which questions of presupposition can be formulated and answered (precisely because it foregrounds issues of what members notice as troubling). It may be an act of faith to assume that this interaction reflects the everyday task context.
To conclude, discourse analysis aids in understanding broad categories, conversation analysis in seeing how these are manifested and made real and effective in interaction with users, colleagues and institutions.
Table 1. Sample of the questions used during the interview

1. What are the most general problems you encounter with the TS (Taxation Service)?
2. Are the problems you have specific to the TS or are they general to all such texts?
3. What very specific problems do you have?
4. Can the problems you have be grouped together in some way?
5. What type of enquiry do you bring to the TS?
6. When you are searching for an item what kinds of things do you try and do?
7. Where else might you consider looking for information that you find in the TS?
8. How would you describe the TS?
9. What kinds of questions would you not bring to the TS?
10. Can you remember the last time you had a problem with it?
11. Is an index a navigational tool of some kind?
12. How do you find the presentation aspects of the index?
13. What would a good index do (be)?
14. Is the TS comprehensive?

Table 2. Typical answers to questions

What are your criteria for a good or bad index?
a: The problem is whether the compiler of the index is using the same buzz words as users. Is he switched onto the same phrases and words which would open up an enquiry into a series of problems...to which you want answers. If they are switched on then this quickly becomes apparent.
b: Also somebody somewhat outside the area can pick words which seem to make sense to them, while missing essential concepts.

What kind of deficiencies do you see in this kind of TS?
a: As an example of the sort of thing I would complain about, there is a fundamental distinction between trading companies and investment companies...but if I was looking for...complaints of overzealousness, I would never look for `non-trading company'...i.e. if it's not a trading company then it is some other type of company such as an investment company...it is a meaningless distinction here.

Well, how would you improve it (the TS)?
a: I would have an index for each volume with cross-references to other volumes. Also in the index you must have a clear indication of what volume the reference is in. A section number is not enough.

How would you improve this TS index then?
a: Well given this type of publication there is really no alternative but to have a pretty fundamental index...I mean the range of customers for TS is considerable and you cannot really assume too much knowledge on their part. It could be improved by better terminology and more frequent and sophisticated cross-referencing relative to the kinds of tasks typically related to the items somebody might look up.
b: I mean if you are engaged in a task you want to follow through all the leads and find information relevant to that task...cross referencing is not only vital but is also a good way of navigating around the index...I can't predict the kinds of things that could be there and I suppose you could link important cross-references relative to some task or other to each other using colour or different styles.

Table 3. Resulting recommendations for re-designing the TS

(a) Detailed chapter headings (full and detailed table of contents for each of the chapters within each volume).
(b) The contents pages should be presented in a clear, ordered fashion and where possible indentation should be used in a structured presentation format so that the relation between sub-sections and sections is very clear.
(c) There should be clear and corresponding page numbering between the chapter heading details and the pages of the section text itself. Where appropriate this should also indicate the amount of material under any given topic.
(d) The index should be re-positioned at the back of the text.
(e) Indicators (section, page numbering and so on) should be restricted to the bottom and top right and left outside parts of the page. The inner-bottom indication of section, date and page could be deleted.
(f) The index should be much shorter, comprising only (i) general definitions and main topic items and (ii) unusual and out-of-the-ordinary items. The main items would lead users into the more detailed chapter headings and then the appropriate pages. The unusual items should indicate volume, section and page numbers.
(g) There should be much more cross-referencing in the text itself and, if possible, footnotes which indicate how cross-references are related to different kinds of tasks and topics.

Table 4. Extract 1

36. LI: so:: erm: but we'd would normally ask them=obviously=to fill in as much detail on
37.     the form as possible
38. R:  right
39. LI: but i won't! send it back straight away if i know it is wrong i will try and by whatever
40.     means make sure that the the details are right
41.     (1.8)
42.     so↓ ↑but=as i say=in the checking we usually are awa:::re if we find on the boston
43.     spa serials ((reference to serials catalogue)) we'll see what the holdings ((refers to
44.     archive of journals)) are=
45.     =if it's very obvious that the the volume number and the date (.) aren't correct
46.     (1.1)=then: ? we'll realise its wro↓ng and we'll try and=
47. R:  umm
48. LI: =before we send it off!
49.     otherwise
50. R:  umm
51. LI: -'a -'i if we don't find that the british library just send it back and it's wasting time
52.     (0.3)
53. R:  right
54. LI: while they're waiting=i mean it'll take three or four days for them to send it back and
55.     say wull (0.5)=all they do? is say is sor- give give us the source of
56.     reference

Table 5.

58.     (0.5)
59. LI: the ↑hiero↓glyphics on the form will usually show (0.3) for the journal title what
60.     what the holdings are=you know=volume one hhhh nineteen-eighty-three then
61.     they'll put the year that you've put an question mark as if to say well (0.2) this
62.     doesn't make any °sense° (0.3) so
63. R:  right
64.     (0.2)
65. LI: we could just send things off with no checking=
66.     (1.0)
67.     =but as i explained to you be↑fore: (0.4) in the long ↓term (0.4) it wastes more time i
68.     think
69.     (0.4)
70. R:  right
71.     (0.6)
72. LI: but (.) min if were very very busy we we will send things like that off without a lot of
73.     checking
74.     (0.2)
75. R:  °right°
76. LI: but because of looking up on the variou↑s databases (1.0)=the checking we do isn't
77.     necessarily only bibliographic checking it's (.) loc↓ation checking (.) where we're
78.     going to send them
Fig. 1. What constitutes a `waste of time'? (diagram)

References
[1] R.E. Landrum & D.M. Muench. Assessing students' library skills and knowledge: the library research strategies questionnaire. Psychological Reports, 75 (3 Pt 2) (1994) 1619-1628.
[2] P. Sturges. Qualitative research in information studies: a Malawian study. Education for Information, 14 (1996) 117-126.
[3] L. Westbrook. Qualitative research methods: a review of major stages, data-analysis techniques, and quality controls. Library & Information Science Research, 16 (1994) 241-254.
[4] M. Stubbs. Language and literacy. London: Routledge (1980).
[5] E. Ochs. Talking to children in Western Samoa. Language in Society, 11 (1982) 77-105.
[6] N. Fairclough. Discourse and social change. Cambridge: Polity Press (1992).
[7] V. Walkerdine. Counting girls out. London: Virago (1989).
[8] C.A. Barry. Critical issues in evaluating the impact of IT on information activity in academic research: developing a qualitative research solution. Library & Information Science Research, 17 (1995) 107-134.
[9] K.M. Drabenstott & M.S. Weller. Failure analysis of subject searches in a test of a new design for subject access to online catalogs. Journal of the American Society for Information Science, 47 (1996) 519-537.
[10] M. Forrester. Hypermedia and indexing: identifying appropriate models from user studies. Paper presented at the 17th International OnLine Information Meeting, London (1993).
[11] S. Levinson. Pragmatics. Cambridge: Cambridge University Press (1983).
[12] G. Payne & D. Hustler. Teaching the class: the practical management of a cohort. British Journal of Educational Sociology, 1 (1979) 49-56.

Appendix: Transcript and coding conventions
R = Researcher (Chris Ramsden)
LA = Library assistant
(Both participants sitting in front of a computer screen running a beta version of JournalsOnLine.)

1.  LA: #SMC ((single mouse click))
2.      (1.0)
3.  LA: so from there if i do that that should get th[em back to the for[m i think
4.  R:  [.hhhh ((rapid nasal noise))
5.      [form
6.      (1.1)
7.  LA: yes ↑it does
8.  R:  right
9.      (2.6)
10. R:  do dwud do you get any other kinds of (.) quer↓ies that are bas:ed even on on
11.     somethink like (.) key words
12.     (0.5)
13. LA: oh y-yes! yes=you do sometimes get people ↑with (.) th'tt-you know they haven't got
14.     an article (.) title at all (1.3)=umm or they↑ll give an article (.) and you realise that the
15.     (1.7) bibliograph- the bibliographic details might be correct but they've given you the
16.     wrong year cause they've copied down half of one (.) citations↓ and half of
17.     another=°it's not uncommon↑°
18. LA: hhhhh ((heavy inhalation))=so they'll have the general title↑ and an author↓
19.     (1.3)
20. LA: =and:: (0.8) but the (.) volume and number of pages (0.4) you know don't coincide
21.     ↑cause w[hen you look up what the journal dates are you realise that that nineteen-
22.     ninety-five isn't volume twenty-three (.)=and maybe it's (.) °you know ((crescendo))
23.     as i say° (.) it doesn't happen all the time
24. R:  [ri↑ght]
25.     [ri↑ght]
26. LI: hhhh=but then if they say that (.) give their source of referenc:e (0.6)=i mean this is
27.     usually when i have been looking up- (.) when i have had to look up bids
28.     it's the usually the only time i look it up=if they have said they
29.     have found it on bids=cause quite a lot of the hhh research is now down on bids
30.     (0.4)
31.     and i will then go into bids and put in the journal title and the author and of course it
32.     will bring me up the correct (.) [bibliographic detail
33. R:  [ri↑ght]
34.     (0.3)
35.     [right]
36. LI: so:: erm: but we'd would normally ask them=obviously=to fill in as much detail on
37.     the form as possible
38. R:  right
39. LI: but i won't! send it back straight away if i know it is wrong i will try and by whatever
40.     means make sure that the the details are right
41.     (1.8)
42.     so↓↑ but=as i say=in the checking we usually are awa:::re if we find on the boston
43.     spa serials ((reference to serials catalogue)) we'll see what the holdings ((refers to
44.     archive of journals)) are=
45.     =if it's very obvious that the the volume number and the date (.) aren't correct
46.     (1.1)=then: we'll realise its wro↑ng and we'll try and=
47. R:  umm
48. LI: =before we send it off!
49.     otherwise
50. R:  umm ((technical interruption by R at prior line))
51. LI: -'a -'i if we don't find that the british library just send it back and it's wasting time
52.     (0.3)
53. R:  right
54. LI: while they're waiting=i mean it'll take three or four days for them to send it back and
55.     say wull (0.5)=all they do is say is sor- give give us the source of
56.     reference
57. R:  ummm
58.     (0.5)
59. LI: the ↑hiero↓glyphics on the form will usually show (0.3) for the journal title what
60.     what the holdings are=you know=volume one hhhh nineteen-eighty-three then
61.     they'll put the year that you've put an question mark as if to say well (0.2) this
62.     doesn't make any °sense° (0.3) so
63. R:  right
64.     (0.2)
65. LI: we could just send things off with no checking=
66.     (1.0)
67.     =but as i explained to you be↑fore: (0.4) in the long ↓term (0.4) it wastes more time i
68.     think
69.     (0.4)
70. R:  right
71.     (0.6)
72. LI: but (.) min if were very very busy we we will send things like that off without a lot of
73.     checking
74.     (0.2)
75. R:  °right°
76. LI: but because of looking up on the variou↑s databases (1.0)=the checking we do isn't
77.     necessarily only bibliographic checking it's (.) loc↑ation checking (.) where we're
78.     going to send them
79.     (0.5)
80. LI: cer[tainly for the articles=
81. R:  [right]

Conversation analysis conventions
Code          Transcription convention employed
↑ or ↓        Marked rise (or fall) in intonation.
Underlining   Used for emphasis (parts of the utterance that are stressed).
UPPER CASE    Indicates increased volume (this can be combined with underlining where appropriate).
:::           Sounds that are stretched or drawn out (the number of colons provides a measure of the length of stretching).
[             Overlaps, cases of simultaneous speech or interruptions. Where appropriate the spacing and placing of the overlap markers indicates the point at which simultaneous speech occurred.
(.)           Small pauses.
°...°         Shown where a passage of talk is noticeably quieter than the surrounding talk.
(1.4)         Silences, with the time given in seconds.
(())          Double parentheses surround the transcriber's additional comments.
h             Exhalation or inhalation, where the duration is marked by the number of hs.
=             Where there is nearly no gap at all between one utterance and the following utterance.
Michael A. Forrester, Christopher Ramsden and David Reason[*]
[*] Department of Psychology and Centre for Psychoanalytic Studies in the Humanities, University of Kent, UK