Experiences of NVivo

When I did my PhD I had the chance to have training in NVivo, a qualitative data analysis software package provided by QSR International. I had the training (in NVivo version 2, I think) quite early on in the PhD – prior to data collection – the theory being that if we got used to the software before collecting data, adding data as we went along would be a piece of cake and we would be thoroughly versed in it, so able to use it to its full potential. I could certainly see the logic of that, and in retrospect, had I started off by putting my literature review and research diary into NVivo, I might well have used it more consistently throughout the PhD.

Of course real research life is not like that, and hindsight is a wonderful thing! My diary keeping prior to fieldwork was sporadic at best and didn’t really amount to much, and the literature review I undertook before fieldwork turned out to be at least partly irrelevant once I had collected some data and realised that entirely new themes were emerging which I hadn’t even considered; hence I had to redo it completely once I got back. Although this was a bit soul-destroying at the time, it was nonetheless quite a useful exercise, because I was reading the literature in light of my data, and reviewing the literature while analysing my data meant that I made far more connections between them than if they had been the entirely linear, separate processes that PhD timetables (and, by extension, the advice of the NVivo trainers) often seem to assume. I ended up not using NVivo for either diary or literature purposes, but once back from the field I started importing my interview transcripts into NVivo (which by now had morphed into NVivo 7, I think) and coding them. I had also collected, over the course of my PhD, more media articles than I knew what to do with, so putting them into NVivo to impose some sort of order was very useful.

Despite the training I fell into the classic ‘coding trap’ of micro-coding every last little detail, and I found that (for my interview transcripts in particular) this was spectacularly unhelpful, as I was left with fragments of text and no sense at all of any overarching narrative (though this was at least partly mitigated by the software’s ability to grab text a specified number of lines above and below the coded segment). I avoided this trap with the media articles by coding each entire article to just one code (rather than micro-coding within it); that was more successful at getting an overall view of what was ‘out there’, but was not without its own problems: because a single code held such large chunks of text, searches would often simply crash my PC. However, by using NVivo to identify the documents containing articles on particular topics and then going to the hard copy rather than running the big search in the software, I managed to avoid the constant crashing, generally found what I was looking for, and overall found NVivo more helpful (from the coding and retrieving point of view) than cutting up paper fragments of text, colouring them differently and sticking them together. I can’t say that I ever used NVivo to its full potential – a lot of the whizzy features were pretty much surplus to requirements – but as a tool for coding and retrieval it was fine for what I needed. I could easily specify the location of my respondents and the type of source (blog, newspaper article, etc.) with a couple of clicks, and that was really helpful when searching and querying my data.
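
To give a flavour of what that widening-out looked like, here is a minimal sketch of the idea in Python – entirely my own illustration with an invented transcript, nothing to do with NVivo’s internals:

    # Illustrative only: retrieve a coded segment plus a specified number
    # of lines of context above and below it. These data structures are
    # mine, not NVivo's.

    def segment_with_context(lines, start, end, context=3):
        """Return lines[start:end] plus `context` lines either side."""
        lo = max(0, start - context)
        hi = min(len(lines), end + context)
        return lines[lo:hi]

    transcript = [
        "I: How did you find getting to the clinic?",
        "R: Honestly, the journey was the hard part.",
        "R: The buses only run twice a day out here.",
        "R: Once I was in, though, the staff were lovely.",
    ]

    # The code covers only line index 2; pulling in one line either side
    # keeps some narrative around the fragment.
    for line in segment_with_context(transcript, 2, 3, context=1):
        print(line)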

Now I am undertaking a new research project for this postdoc position, and still using NVivo. My university is on NVivo 9 (the most recent version is apparently NVivo 10), and a couple of versions down the line there are more bells and whistles than ever. I have learnt my lesson from the micro-coding debacle of my PhD interviews, and this time I am doing a lot of broad-brush coding, which preserves the narrative and lets me get at the rich data that is emerging. One feature that seems to have disappeared is the ability to widen a search to the lines above and below the coded segment, but the broad-brush approach achieves much the same thing. For my baseline data I have been able to run reports on codes and access them fairly straightforwardly.

However, now that I have started the next phase of data collection, I want to add a bit more information to my sources so that I can differentiate them more easily. Keeping it simple has been fine so far, as all I’ve really wanted is “everything that every respondent says about X”. Now, though, I am undertaking a second round of interviews with the same people, and also interviewing a second category of respondents (professionals as well as the patients I have already interviewed), so I want to be able to differentiate between baseline and follow-up interviews, between treatment types, between patient and professional, and also to specify the gender and location of respondents (amongst other things). Having previously found this reasonably simple (as mentioned above, it was pretty easy to specify that my sources were from one country or the other, or that they were blog posts rather than newspaper articles, by in effect ‘tagging’ the source), I wasn’t expecting it to be a big issue.

So I have to say that I am not happy with the amount of time I have spent trying to work out how to do this in NVivo 9. I first tried a few weeks ago, but got so confused by the help topics that I gave up. I tried again today, and spent pretty much the whole afternoon on it, writing down everything I did as I went along (as I won’t remember it otherwise), and I still haven’t got there. It seems that in adding all the bells and whistles, the developers have created something thoroughly counter-intuitive. Being able to specify gender and location for respondents should not be difficult – previous versions seemed to manage it in a few clicks – but in this version it is so spectacularly complicated that I am still struggling. The help topics are no help at all: they tell me what to do up to a point, and describe what I should be seeing on the screen, but they don’t tell me why things are as they are. I appreciate that I am no techie, but I have managed fine with other software packages, so I know I’m not completely thick!

If I want to specify that someone is a woman and is based in Edinburgh, the most logical thing, it seems to me, would be to ‘tag’ the interview transcript with those attributes. In this version, however, I apparently need to create a new node (code) unique to that person (which then sits alongside all my thematic codes), then classify that new node with the classification ‘gender’, and from there I should be able to specify male or female. But following the attributes help page I can only get as far as classifying the node – adding the separate attribute values (male, female) within it is not obvious or intuitive at all. Once I get that far I learn that attribute values are either ‘unassigned’ or ‘not applicable’, but nowhere does it tell me (a) the difference between these, (b) what they signify, or (c) how to add the specific values ‘male’ and ‘female’. I’m amazed I have any hair left, because I am pulling it out at the moment.
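
To show the gap between what I expected and what the software seems to want, here is a rough sketch in Python (again entirely my own illustration – NVivo has no scripting interface that I’m aware of, and every name here is invented):

    # Hypothetical sketch only: NVivo exposes nothing like this.

    # What I expected: tag the source (transcript) directly with attributes.
    source = {
        "file": "follow_up_interview_01.doc",
        "attributes": {"gender": "female", "location": "Edinburgh"},
    }

    # What NVivo 9 appears to want: a node unique to the respondent,
    # sitting alongside all the thematic codes, which must then be
    # classified before any attribute values can be assigned.
    respondent_node = {
        "name": "Respondent 01",     # codes everything this person says
        "classification": "gender",  # as far as the help pages get me
        "values": {},                # 'male'/'female' go... somewhere?
    }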

I am persevering with this (a) because NVivo is the software the university uses and supports, and (b) because once I manage this simple thing I know my coding and retrieval will be relatively smooth and uncomplicated. But I really do want to register my frustration that, in making their software all-singing and all-dancing, QSR seem to me to have thrown out the simple basic functionality that will always be the foundation for any of the more complex searching and analysis. Whilst not the world’s biggest technophile, I am comfortable with and relatively adept at using technology to assist my work, and I shouldn’t have to spend entire afternoons trying to figure out how to tag a respondent with basic demographic data. Although I mainly want to use NVivo for coding and retrieval, I like that once it is set up I can, if I want, be more creative with it and run quite complex queries. To be able to do that, though, I need to get the basics right, and so it seems obvious to me that the basics need to be simple, intuitive and quick.

To give credit where it is due, when I tweeted the NVivo support people they did respond quickly (allowing for different time zones) – but they ultimately directed me back to the NVivo help pages, which, as outlined above, assume an awful lot of prior knowledge and give no illustrative examples. It is still not clear to me why I have to assign demographic classifications to nodes (codes) rather than sources (transcripts), it is still not clear to me what the difference is between ‘unassigned’ and ‘not applicable’ attribute values, and most of all it is still not clear to me why something so simple has to be so frustratingly complicated. I really hope QSR do something about this for the next version, because I have wasted so much time on this today that I am really grumpy and not at all inclined to recommend their software as a research tool. If anybody has any suggestions for alternatives (especially open-source ones), I’d be all ears.

8 responses to “Experiences of NVivo”

  1. Hello, thanks for this great blog post. I tend to rely on YouTube videos to help me get the most out of NVivo – there are some really helpful ones. Good luck with your research!

  2. Hi there, really sorry to hear you are struggling with classifications. This YouTube video may help you: https://www.youtube.com/watch?v=hn1u-r4Q5jo
    Also, the key difference between “Unassigned” and “Not Applicable” is that “Unassigned” is a message to you as a researcher that you haven’t yet set the attribute to anything meaningful for this node or source – it is the default “default”, though if it makes more sense you can choose a different default value for any attribute. “Not Applicable” says that the attribute simply doesn’t make sense for this node or source, and as such should never have a meaningful value.
    Probably the reason the documentation suggests classifying nodes is that when you are dealing with people (as a classification), one person often spans multiple sources – or, vice versa, multiple ‘people’ are referenced within a single source file – so it makes sense to code all references to a person at a node bearing their name, classify that node, and then your later queries will give you the right results. There is nothing stopping you classifying the source instead if that makes sense in your project; it may even make sense to do both, i.e. have a ‘Person’ classification applied to the node that codes all of a person’s material, carrying attributes such as gender, location and profession which don’t change across sources, and an ‘Interview’ classification for each source, identifying things such as interview date, follow-up vs baseline, and so on (there is a rough sketch of this at the end of this comment).
    Hope that helps.
    As Technical Architect of NVivo, I am always interested in user feedback to make our product better, and we do listen. It is always a challenge trying to simplify complex concepts, and make them generic enough for everyone to use, but powerful enough so that they add real value to researchers.
    Thanks for your honest post.
    Good luck with your research.
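
    P.S. If it helps to see the shape of it, the model is roughly this – pseudocode only, since there is no actual scripting API here; these are just the concepts:

        # Conceptual sketch of classifications and attributes; not a real
        # NVivo API, just the relationships described above.

        UNASSIGNED = "Unassigned"          # default: no value set yet
        NOT_APPLICABLE = "Not Applicable"  # explicit: will never apply

        # Node classification: one node per person, coding everything that
        # person says across all sources; stable attributes live here.
        person_node = {
            "classification": "Person",
            "values": {
                "gender": "female",
                "location": "Edinburgh",
                "profession": NOT_APPLICABLE,  # say, a patient, so never set
            },
        }

        # Source classification: describes the document itself;
        # per-interview attributes live here.
        transcript_source = {
            "classification": "Interview",
            "values": {
                "round": "follow-up",          # vs "baseline"
                "respondent_type": "patient",  # vs "professional"
                "date": UNASSIGNED,            # not filled in yet
            },
        }

        # A query can then combine the two, e.g. everything coded at Person
        # nodes where gender == "female", restricted to sources where
        # round == "baseline".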

    • Hi Scott
      Thank you so much for your detailed reply. I found that very YouTube video yesterday and it did indeed help me out. I guess I just feel frustrated that my first port of call (the ‘help’ pages) did not explain things clearly and meant that I ended up going down quite a long garden path! In the words of your final paragraph, I think that the concepts were simplified and genericised (is that a word?!) to such an extent that the *reason* for doing it this way was lost, and it is the reason that, for me in any case, acts as the conceptual stepping stone for *getting* what the software can help and enable me to do. I get that the software *can* do this or that in this or that way, but what was missing for me was the *why*. I wonder if in future versions it would be worth adding a link in the help page to the relevant YouTube video (if there is one) as accessing that video earlier would have saved an awful lot of frustration!
      I also appreciate your explanation of node classification in terms of people appearing across different source material – that makes a lot of sense and I hadn’t thought of it that way before; in fact it will serve as one of those conceptual stepping stones for moving on to the next level of using the software. Thanks to this (painful!) process I have also been thinking a lot about sources and representation and meaning and the like, and may well blog about that soon.
      Many thanks for your response, which is much appreciated.
      Jackie

  3. Hi Jackie, glad you are sorted. I will pass on your suggestions to our technical writers. I look forward to your promised blog post.

    Scott.

  4. I’ve only lightly used NVivo, but would a solution be to have the person as an aggregate node with sub-nodes for each interview? Looking ahead, I think I’ll come up against the same problem 😦

  5. Thanks, Jackie, for replying to my Twitter message – I am having the same trouble. I have even watched that same video a few times. I have just returned to read the post in full, and will now attempt the video one more time. I need to get my pilot study data sorted, and want this under control as I move into the main study.

    Scott, I may be in touch soon 🙂

    I will update once I resolve my situation.

  6. Thank you so much for writing about this and the problem of micro-coding – much of your post resonated with me (e.g. I usually pick up software pretty quickly, and yet I don’t find NVivo intuitive beyond the basics). I feel like there is potential, but it’s really got to move away from the “Microsoft” feel.

    I’ve heard about “Dedoose” – supposedly good for analysis too.

  7. Seems like you haven’t grasped the fundamentals of case nodes, node classifications and attributes. It’s beyond my time and inclination to give you a full explanation here – maybe you should go on a course or get the workbooks from QSR and work through the sample project. It is all quite logical and easily implemented.
