Psychological Intelligence Foundation CIC


Blog No.5: Research – why bother!

This is my 5th blog based on the monthly workshops and webinars I run. As you will see below, March will be about research. We need to know how to interpret and use research in whichever field of TA we apply, and we also use forms of research, without realising it, whenever we work professionally with TA: we are doing autoethnography when we self-reflect; critical ethnography when we hypothesise, make an intervention, see what happens, and repeat; and action research when we work with clients to help them grow. So it matters that we know enough to do it effectively.

How might research go wrong?

Researching efficacy and/or effectiveness sounds straightforward, but of course research is not that easy: there are several ways in which spurious results may be obtained, whichever of the two we are measuring. These problems include:

  • Spurious causality – we may assume a causal link that does not exist. It may well be that a practitioner asks more questions and the client talks more, but the link may run in the other direction: the more the client says, the more curious the practitioner becomes, or the easier the practitioner finds it to think of questions to ask.

 

  • Extraneous variables – there may be unrelated factors affecting the practitioner and/or the client: the clients may start doing their jobs better because new equipment has been provided, or because a procedure has been simplified. It may even be that the passage of time has an impact – people tend to continue learning and growing in many small ways that we may not be able to define. For example, what bothered us a great deal when we were young may seem a minor problem when we are older, without our being able to identify what happened in our lives to bring about such a change in reaction.

 

  • Wrong variable researched – it may be that the practitioners who ask the most questions are also the most skilled at paraphrasing, at creating a good relationship, or at something else entirely – and it is this other factor that is actually making the difference.

 

  • Poor choice of research subjects – by subject we mean the people who are being researched rather than the topic. We may choose a group that is not representative of the general population, such as researching friendliness based on a sample of people who were willing to stop and talk to us in the shopping centre, or researching employee satisfaction based only on those employees who bothered to return the survey questionnaires. In ‘researcher jargon’ this may be referred to as restriction of range, because it means we have unwittingly chosen subjects who will not show the full range of whatever we are measuring.

 

  • Biased choice of research subjects – this leads to what is known in research jargon as regression towards the mean, and it applies when we choose subjects because they are at the extreme of a variable we are interested in. Often they are at the extreme because of current circumstances, so that if they were measured again at another time they would have moved closer to the average. For instance, Bannon (1976) pointed out that, for a study about the impact of a TA programme, choosing students who were the best or the worst behaved would distort results, because over time both best and worst are likely to become less so (there is a small illustration of this after this list).

 

  • Poor choice of research topic – such as choosing to study mathematical ability amongst engineers who have become managers, only to find that they all score highly because, without good mathematical ability, they would not have been good enough engineers to be considered for promotion (rather than for retraining or dismissal).

 

  • Hawthorne Effect (Roethlisberger & Dickson 1939) – this was named after a factory where ongoing research was being carried out, and where the researchers found that whatever they changed, even when they made conditions worse, the productivity of the group being researched went up. For instance, productivity rose when they made the lighting worse just as it did when they made the lighting better. Eventually they realised that what made the difference was the attention being given to the group by the researchers (which might have been a better study for Berne to have quoted in support of his contention that we all need strokes!).
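As a small aside for readers who like to see things numerically, the sketch below (in Python, and not part of the workshop material) simulates the regression towards the mean described above: pupils are selected because their observed behaviour is extreme at one measurement, and the same group then scores noticeably closer to the group average at a second measurement, even though nothing about them has changed. All of the numbers, names and noise levels are invented purely for illustration.

```python
# A hypothetical illustration (not from the blog) of regression towards the mean.
# Each pupil has a stable "true" level of the behaviour being studied, but any
# single observation of it also reflects luck and circumstance (random noise).
import random

random.seed(1)

N = 1000
true_level = [random.gauss(50, 10) for _ in range(N)]   # stable underlying level

def observe(level):
    """One noisy measurement of a pupil's behaviour."""
    return level + random.gauss(0, 10)

time1 = [observe(t) for t in true_level]   # measurement used to select the group
time2 = [observe(t) for t in true_level]   # follow-up measurement of everyone

# Select the 'worst behaved' 5% purely on their time-1 observation,
# echoing the Bannon (1976) point about picking the most extreme students.
cutoff = sorted(time1)[int(0.05 * N)]
selected = [i for i in range(N) if time1[i] <= cutoff]

mean_t1 = sum(time1[i] for i in selected) / len(selected)
mean_t2 = sum(time2[i] for i in selected) / len(selected)

print(f"Selected group at time 1: {mean_t1:.1f}")
print(f"Same group at time 2:     {mean_t2:.1f} (closer to the overall average of about 50)")
```

Running this typically shows the selected group averaging somewhere in the low twenties at the first measurement and the mid thirties at the second, against an overall average of about 50 – the drift back towards the middle happens simply because the first selection partly captured bad luck rather than stable behaviour.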

There are also problems that arise concerning the use of research results, such as:

  • Dodo Bird Effect (Rosenzweig 1936) – this name comes from Alice’s Adventures in Wonderland (Carroll 1865), in which, after a race amongst the animals, the Dodo declares that everyone has won. This is similar to the situation where research into different therapy approaches often shows that they all work – which may, of course, be due to the ‘common factors’ that Rosenzweig identified as shared across approaches. We can expect that something similar may happen if we try to compare developmental TA with other, non-TA approaches.

 

  • Application assumption error – we may incorrectly assume that the results of a research study can be applied to situations that do not match those of the original research. For example, Guthrie (2000) points out that much psychotherapy research is done under controlled conditions, where the patients are chosen because they have a specific psychiatric diagnosis and the therapists are very experienced practitioners using a manualised approach and receiving regular supervision. She contrasts this with the same therapy delivered by less well trained staff with busy workloads, no supervision, and no screening of clients. In the same way, researching a training programme run by a specific trainer with a specific group of participants does not guarantee that a different trainer with a different group will get the same results, even if they work to the same training manual.

References

Bannon, Vincent (1976) Standards of Experimental Research. Transactional Analysis Journal, 6:3, 318-322

Carroll, Lewis (1865) Alice’s Adventures in Wonderland (nowadays available as a Penguin Popular Classic)

Guthrie, Else (2000) Enhancing the clinical relevance of psychotherapy outcome research. Journal of Mental Health, 9:3, 267-271

Roethlisberger, Fritz Jules & Dickson, William J (1939) Management and the Worker. Cambridge, MA: Harvard University Press

Rosenzweig, Saul (1936) Some implicit common factors in diverse methods of psychotherapy. American Journal of Orthopsychiatry, 6, 412-415

We offer CPD and ‘taster’ opportunities as well as ongoing qualification programmes. We expect participants to try things out before they commit to more.

Click here to see how you can choose topics that interest you now and have your attendances credited later if/when you decide to continue.