Sunday, November 8, 2015

Paper Review: Notes from "Using thematic analysis in psychology"

Using thematic analysis in psychology

Virginia Braun & Victoria Clarke

Qualitative Research in Psychology


Thematic analysis is a poorly demarcated, rarely acknowledged, yet widely used qualitative analytic method within psychology. In this paper, we argue that it offers an accessible and theoretically flexible approach to analysing qualitative data. We outline what thematic analysis is, locating it in relation to other qualitative analytic methods that search for themes or patterns, and in relation to different epistemological and ontological positions. We then provide clear guidelines to those wanting to start thematic analysis, or conduct it in a more deliberate and rigorous way, and consider potential pitfalls in conducting thematic analysis. Finally, we outline the disadvantages and advantages of thematic analysis. We conclude by advocating thematic analysis as a useful and flexible method for qualitative research in and beyond psychology. 


A useful paper that plugs the gap left by the absence of an introductory treatment of thematic analysis for non-qualitative researchers.  While I have reservations about this technique in contrast to the quantitative methods I have usually used in software engineering, computer graphics or otherwise, it does enable contextualisation of the thoughts of participants in user studies.

Reading starts here:

There seems to be a lack of consensus on the placement of thematic analysis: whether it is or is not a specific method in its own right.  The authors argue it is a specific method.

They aim to define it clearly in this paper, without restricting its flexibility.

Corpus refers to all data, while data set is the data to be used in analysis.  

Key Point - "Thematic analysis is a method for identifying, analysing and reporting patterns (themes) within data." Obvious, but it has to be stated.

They claim that it is a method of analysis, and that other named approaches are essentially still a form of thematic analysis.

They comment that themes should not "emerge"; this is a passive framing that denies the active role of the researcher.  They say that themes reside in our heads, a product of our thinking about the data.  ME: I wonder if a theme is the product of an interaction with the data, which probably puts me into a social constructivist position.  Does the data turn into an entity in my phenomenological experience?  No, that is too silly.  But one could imagine that I am giving the data some agency in some way.  Something to think about...

"What counts as a theme? A theme captures something important about the data in relation to the research question, and represents some level of patterned response or meaning within the data set."

They suggest there is no hard percentage of the data that establishes a theme; ME: can this get any more vague?  This suggests there is no standard computational component to thematic analysis, though they do offer prevalence as a heuristic.  I guess a single instance at least disproves a claim that the phenomenon is absent from the rest of the cohort.

"Part of the flexibility of thematic analysis is that it allows you to determine themes (and prevalence) in a number of ways. What is important is that you are consistent in how you do this within any particular analysis."

ME: I am struggling here.  How do you take a measurement without a clear comparable metric that generalises to other cohorts?  While I get the primacy of consistency, one can be consistently wrong as well.
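To make their point about prevalence concrete for my quantitative brain: the paper does not prescribe a counting rule, only consistency. Here is a minimal, entirely hypothetical sketch (participant names and theme labels are invented) showing the same coded data counted two different but internally consistent ways.

```python
# Hypothetical sketch of the prevalence heuristic: the same coded data
# counted two "consistent" ways, per Braun & Clarke's point that the
# counting rule matters less than applying it uniformly.
from collections import Counter

# Toy data: participant -> list of theme codes applied to their extracts.
coded = {
    "P1": ["usability", "trust", "usability"],
    "P2": ["trust"],
    "P3": ["usability"],
    "P4": [],
}

# Way 1: prevalence = number of participants in whom the theme appears at all.
by_participant = Counter(
    theme for codes in coded.values() for theme in set(codes)
)

# Way 2: prevalence = total number of coded extracts across the data set.
by_extract = Counter(
    theme for codes in coded.values() for theme in codes
)

print(by_participant)  # usability in 2 participants, trust in 2
print(by_extract)      # usability across 3 extracts, trust across 2
```

Note that the two rules can rank themes differently ("usability" leads on extracts but ties on participants), which is exactly why picking one rule and sticking to it matters.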

They then compare and contrast top-down theoretical approaches with bottom-up inductive processes.  Inductive coding is considered richer across the data set, while the top-down approach gives a more detailed analysis of a particular theme.

Semantic or latent themes.  Semantic analysis addresses the themes present in the data prima facie, while latent analysis looks for the underlying ideas behind the themes, and is thus more interpretive.  They also note that latent analysis tends towards the constructionist, but is not necessarily completely constructionist. ME: man, my poor quantitative brain is struggling a little here. :-)  While I get that phenomenological outcomes are going to be inexact and dynamic, I am finding this a little hard going.

"Those approaches which consider specific aspects, latent themes and are constructionist tend to often cluster together, while those that consider meanings across the whole data set, semantic themes, and are realist, often cluster together."

There are six phases (guidelines, not rules, of course! :-)), with some comments or extracts from their descriptions of these phases below.

1. Familiarizing yourself with your data:
2. Generating initial codes:
3. Searching for themes:
4. Reviewing themes:
5. Defining and naming themes:
6. Producing the report:

Phase 1

They recommend repeated reading of the data.  Mark down ideas for codes now, as part of an interpretive process.  Make sure not to clean up transcriptions, as punctuation can matter at this stage.  Interestingly, they suggest transcribing personally (my IS school hires people) in order to develop deep reading skills.  Good point.

Phase 2

"remember that you can code individual extracts of data in as many different ‘themes’ as they fit into – so an extract may be uncoded, coded once, or coded many times, as relevant."  ME: This seems to be a many-to-many mapping between extracts and themes.  I would need an example, but if many themes draw on elements that reappear, that must be problematic for identifying themes as distinct entities.  Just seems too incoherent.
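The many-to-many mapping they describe can be sketched as follows; the extract IDs and theme labels are invented for illustration. Inverting the mapping shows how one extract can legitimately support several themes, which is the overlap I find hard to accept.

```python
# A minimal sketch of the many-to-many extract/theme mapping: an extract
# may carry zero, one, or several theme codes.
extract_codes = {
    "e1": [],                           # uncoded
    "e2": ["workflow"],                 # coded once
    "e3": ["workflow", "frustration"],  # coded many times
}

# Inverting the mapping gives each theme's supporting extracts; the same
# extract ("e3") appears under more than one theme.
theme_extracts = {}
for extract, themes in extract_codes.items():
    for theme in themes:
        theme_extracts.setdefault(theme, []).append(extract)

print(theme_extracts)
# {'workflow': ['e2', 'e3'], 'frustration': ['e3']}
```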

Phase 3

Key point: do not throw out any candidate themes yet; keep them in the basket for later.

Phase 4 

Make sure your themes fit all the data; to me this is almost impossible, but it may work as an inexact process.

Stop refining when further changes make no new contribution to the themes.  ME: Almost an embedded grounded theory exercise.
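One mechanical reading of this review step, sketched with invented extract IDs and themes, is a simple coverage check: which coded extracts does the current candidate theme set fail to account for? When that set stops shrinking between passes, refinement has stopped contributing.

```python
# Hypothetical sketch of a Phase 4-style review: find extracts that the
# current candidate themes do not yet cover. All names are invented.
all_extracts = {"e1", "e2", "e3", "e4"}
theme_extracts = {
    "workflow": {"e1", "e2"},
    "frustration": {"e3"},
}

covered = set().union(*theme_extracts.values())
uncovered = all_extracts - covered
print(uncovered)  # extracts the theme set does not yet account for
```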

Phase 5

The key point to take home is the need for the themes to be an essential description of the data, with a punchy and clear title for communicating the ideas.

Phase 6

The report should provide sufficient evidence of the presence of the themes within the data set analysed.

Needs to go beyond description to make an argument about the data analysed.  ME: Key point here.

Common Problems

Lack of actual analysis - obvious, but I have seen it a lot in tool evaluations.  Maybe because the response to the tool is not expected to be that complex?

"The third is a weak or unconvincing analysis, where the themes do not appear to work, where there is too much overlap between themes, or where the themes are not internally coherent and consistent."


"‘anecdotalism’ in qualitative research – where one or a few instances of a phenomenon are reified into a pattern or theme, when it or they are actually idiosyncratic."

Consider alternative thematic explanations in the analysis, to show deep insight into the data and its context.

"One of the criticisms of qualitative research from those outside the field is the perception that ‘anything goes’." ME: The authors have insight! :-) 

As can be seen from my comments, I have some skepticism about the results of such thematic analysis.  However, I do grant it validity for its ability to contextualise quantitative results.  I aim to apply it with more rigour in my future research, as my work usually involves software tool analysis.

