New perspectives on how psychometrics and coaching can inform each other

Ian Florance

In the past, testing and coaching have influenced each other less than they might have; that’s unfortunate, since both areas (psychometric testing in some of its applications, coaching at its core) seek to help people develop the self-awareness and understanding they need to improve their lives. Both areas are also evolving, and their respective changes are bringing them closer together. To create a richer interplay between them, practitioners in each area need to understand the other better. For instance, many coaches need to be familiar with the advantages and disadvantages of more models of personality measurement than the type approach if they are to get the most benefit from personality psychology. Equally, test publishers and developers need to create more reports informed by an in-depth understanding of what’s unique about coaching, rather than treating it as psychology-lite. In this chapter I’ve tried to make the reasons for this lack of in-depth conversation clear, as well as to show how developments in testing could aid or, in certain cases, inhibit closer understanding. I’ve observed a few guidelines in writing it.

First, since testing is undergoing a very fast and fundamental transformation, references to specific tests and publishers are liable to go out of date very quickly, especially around tests delivered on-line. I’ve only been specific where it is unavoidable.

Second, testing can be a technical area, often unnecessarily so in my view. This chapter is not a primer of testing statistics. One of the points it makes is that the statistical methods by which you analyse data to gain reliable, meaningful information have changed, and will change, much less than how and where you obtain the data in the first place. The statistics coaches need to understand are covered in good test training. I’ve therefore simplified ideas, tried to make them practically relevant and avoided numbers, while stressing that numbers are one of the types of evidence that testing supplies.

Third, I concentrate on formal psychometric testing tools rather than assessment in general. There are several reasons for this, but the most compelling is that it’s precisely the relationship between formal psychometrics and coaching that needs discussion. Many coaches are used to using informal methods of evaluation: strengths inventories, 360° tools, structured interviews with co-workers and managers, and self-authored surveys, for instance. They have worried less about the difference between formal tests and informal assessments than psychometricians and psychologists have. I use words such as ‘testing’ and ‘assessment’ where they make a sentence readable, rather than to make minute and, in coaching terms, irrelevant discriminations.

Finally, since all the other chapters in this book paint a detailed picture of how and why coaching is changing, I’ve focused here on changes in testing and how these might make it a more useful, acceptable technique within the coaching armoury.

Coaching and assessment

The origins and approaches of testing and coaching were, and in some cases still are, different (see Table 3.1).

However, the view of assessment detailed in this table has begun to look out of date. In certain countries and applications, assessment has moved from being a more academic, research-based area of psychology, applied by experts to ‘subjects’, to being a more practical tool used to underpin positive action in schools, hospitals and clinics, businesses and sports teams. In other words, assessment has become more like coaching.

Despite some statements that assessment is central to coaching (Chapman, Best, and van Casteran 2003, 51), coaches have only dipped their toes into the huge range of formal tests available. In conversation, Professor Adrian Furnham suggested that there are now around 50,000 tests measuring different psychological aspects, yet most coaching primers reference only a very narrow and focused selection.

Table 3.1 Contrasting coaching and assessment models

Coaching: A process done between people. Coaching methods are specifically designed to create relationships in which power is equalised; expertise is less critical than attention and the key work is done by the ‘client’ rather than an ‘expert’.
Assessment: A process done by someone to someone else. This perhaps reflects the roots of testing in academic research but has been reinforced by the emphasis on test users learning high-level skills in order to accurately describe, categorise or diagnose a lay person.

Coaching: Equalises power. ‘Equalisation of power has become a mantra’ (Rogers 2016, 45).
Assessment: Power and expertise on the tester’s side. There has been a constant emphasis on the very specific knowledge required to use formal tests, conferred by initial or professional training (British Psychological Society 2000; Smith and Smith 2005, throughout).

Coaching: Future-focused. Fundamental coaching models, such as GROW, are feedforward systems: they facilitate moving forward to a future vision.
Assessment: Past-focused, in three ways. (1) Testing’s grounding in clinical and educational contexts tends to focus it on the past causes of present behaviour. (2) Evaluating whether a test is measuring what it’s supposed to often involves comparing its results with those of earlier tests, using earlier technologies as a present validating reference point. (3) Future-oriented predictive studies are frequently honoured by their rarity: they’re expensive and need a continuity which is difficult to arrange.

Coaching: Action-oriented. As many other contributors to this book imply, coaching is focused on increasing the likelihood of beneficial action by basing it on personal realisation rather than external orders or instructions.
Assessment: Categorisation and description oriented. The roots of testing are in categorisation. Examples include: Alfred Binet’s original educational tests, which set out ‘to identify which French children he felt were not worth schooling’ (Roseveare 2017, 252); clinical testing, which often categorises people by the DSM system (American Psychiatric Association 2013); and even developmental tests, which, while suggesting action, will implicitly consign individuals to binary categories (for instance ‘warrants training’ vs ‘does not warrant training’).

Coaching: Emphasises possibilities. Coaching is, at its base, a strengths-based practice.
Assessment: Seeks out deficits. This is not always true and has changed under the influence of developments such as positive psychology. But as some of the issues raised indicate, much educational and clinical testing was designed to identify problems that could be solved or, at least, dealt with.

Coaching: Rests on words and gestures. Coaching training emphasises active listening, body language and the unacknowledged implications of words. It is possible to become a coach without touching on very basic statistics.
Assessment: A numbers-based discipline. Testing is underpinned by complex statistics and training emphasises this.

Coaching: Either a-theoretical or drawing on several different disciplines. Jenny Rogers discusses this briefly (Rogers 2016, 6).
Assessment: Based on a developed theory of psychology and measurement. For a good if technical discussion of this see Rust and Golombok (1999).

This selection usually comprises more formal tests of personality and one or two related areas (emotional intelligence and, increasingly, values, resilience and motivation) rather than those of ability, attainment or other aspects of human behaviour. Psychometrics in Coaching: Using Psychological and Psychometric Tools for Development (Passmore 2012) is an exception because its core subject is assessment.

Even when it comes to formal personality measures, coaches tend to use a limited number of titles. A very good introductory text, the one I trained with, has a section on psychometrics which evaluates two tests (the MBTI and FIRO-B) and two strengths inventories, while mentioning representative titles in three other linked areas: dark-side assessment, entrepreneurship and leadership (Scoular 2011, 108-122). It is generally accepted that the type model of personality is the most popular one among coaches, and it has given rise to an excellent recent one-volume introduction to the whole area (Rogers 2017). If coaches know only one test it will usually be the Myers-Briggs Type Indicator. This may be a good entry point, offering a simplified set of reports that are easy to use and discuss (Florance and Moyle 2018), but many coaches go no further in investigating the whole testing area.

External issues, not least the revolution in digital technology, are transforming both assessment and coaching. Carol Braddick’s chapter in this book paints a more detailed picture of how technology might change coaching. I’ll treat it as one of a range of influences affecting both coaching and assessment practice.

Why and where tests are used in coaching now

Whether they like it or not, coaches assess at the start of a coaching relationship: either explicitly and formally via test use, or implicitly and informally. For instance, chemistry sessions (Scoular 2011, 36) are precisely intended for coach and coachee to ‘check each other out’ and decide whether they want to work together.

There’s general agreement about many of the purposes of assessment in a coaching context. The coach may get ‘useful insights’ into the development task and how this might be achieved (Chapman et al. 2003, 52). For the individual being coached, assessment brings increased self-awareness, which underpins change and improvement. In addition, assessment introduces people to psychological insights that will be useful in dealing with others. Testing feedback is a form of developmental training, particularly useful to new people managers.

But I’d add a few more functions assessment fulfils now, whether the participants realise it or not (see Table 3.2).

Why tests are used less than they might be in coaching

The issues outlined in Table 3.1 answer this question in part. There are, I think, other reasons. The Hidden History of Coaching (Wildflower 2013) recounts the non-academic, often counter-cultural, roots of coaching (see also the Introduction to this book).

Table 3.2 Additional reasons for using tests in coaching

Measuring progress

Coaching is about change. Assessments can be used to measure what changes in human characteristics such as personality and values (a minimal sketch of one way such change can be quantified follows this table).

Helping coaches improve

Some tests, whether self-administered or administered by coaches or coaches’ supervisors, give insights into a coach’s style which will help him or her improve the service he or she offers and avoid blind spots and prejudices.

Coaching professionalisation

Training in tests may provide coaches with reassurance that they do have a transferable set of tangible skills to ‘sell’. Fellow students on my initial coaching training course confessed to imposter syndrome (see Watts, Swindin, Al Khalil, and Cavett 2020) or, at the very least, insecurity about not having a definable and explicable expertise or knowledge base related to coaching. Reassurance provided by technical test training might help coaches’ well-being.

Increasing engagement in the coaching process

While ability and knowledge tests often cause anxiety, my experience as a test publisher confirmed that people enjoy tests of personality and other personal characteristics. In this they serve a function akin to active, focused listening, increasing engagement, enjoyment and a sense of being valued, reactions which seem to improve the chance of coaching’s success as detailed in Time to Think: Listening to Ignite the Human Mind (Kline 2002).

Gathering focused information quickly

‘How many times do my seminar delegates need to see me to make a decision about my honesty? The answer is at least 10-15, maybe as many as 25-30, observed over six months.’ This is a quote from the original English manuscript of a fascinating book published in Dutch (Robertson 2012). It takes us much longer to get to know critical aspects of other people, such as values, than we think. Psychometric tests gather very focused, objective information in a very short time.
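To illustrate the ‘measuring progress’ entry above: psychometricians often ask whether a before-and-after difference in scores is larger than the measurement noise of the instrument. Below is a minimal sketch of one standard calculation of this kind, the Jacobson-Truax reliable change index; the scale, score values and reliability figure are invented for illustration and are not drawn from any particular test.

```python
import math

def reliable_change_index(score_before, score_after, sd_baseline, reliability):
    """Jacobson-Truax reliable change index (RCI).

    An |RCI| greater than roughly 1.96 suggests the change is unlikely
    to be explained by measurement error alone (at the 5% level).
    """
    standard_error = sd_baseline * math.sqrt(1 - reliability)   # SE of measurement
    se_difference = math.sqrt(2 * standard_error ** 2)          # SE of a difference score
    return (score_after - score_before) / se_difference

# Hypothetical example: a 'resilience' scale score rises from 22 to 29;
# the norm group's standard deviation is 6 and the test's reliability 0.85.
rci = reliable_change_index(22, 29, sd_baseline=6.0, reliability=0.85)
print(f"RCI = {rci:.2f}")  # about 2.13, i.e. probably a real change rather than noise
```

A coach would, of course, interpret such a number alongside everything else they know about the client; the point is simply that change can be expressed against the test’s own error of measurement rather than asserted.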

This history is one reason why many coaches see their practice as syncretic: dealing with existential experience and ‘what works’ rather than what research recommends as robust. Another reason is that coaches come from many different backgrounds: from schools and clinics; from commercial organisations and not-for-profits; from new age, experiential and professional backgrounds. Many will have had no grounding in research practice; others will have an initial qualification in the arts. This diversity may explain the huge variety of approaches used by coaches: put simply, I’ve only occasionally met a coach who, when asked to describe their practice, initially volunteers that it is a research-based science.

Coaches have thus tended to pay less attention to numbers than other ‘mind sciences’ such as psychology. Hence the paucity of basic research on whether coaching works (Groves and Furnham 2016). Psychometrics, which is highly numbers-oriented, makes claims to scientific stringency and often delivers judgements rather than mutual insight, so may not have appealed to coaches in the past, hence the relative lack of test use. (For further discussion of the nature of coaches see Rogers 2016, 186-205.) But this suggested bias against numbers in coaching is changing, particularly in leadership and other work-related applications. Here, the need to provide a return on investment (ROI) for coaching moves the process away from a purely humanistic/verbal approach to efficiency/economic models and research underpinning. Organisations demand numerical proof that coaching works and is worth the money: anecdotes and case studies no longer suffice. Assessments can provide numbers; the digitisation of assessment further strengthens this trend.

Influences on testing

There are new and vigorous conversations going on within testing. Much of the basic work that underpins widely used personality tests took place between the 1930s and 1960s, and many of the tests that we use are later editions of tests developed then. While some were changed fundamentally during translation to digital and on-line delivery, others are still paper-and-pencil tests transferred onto screen without utilising the capabilities of digital environments to make them more attractive and/or accurate in their predictions. Testing, until the early ’90s at least, remained pretty unchanged from its early to mid-20th-century roots. Meanwhile the world changed dramatically. Table 3.3 suggests some of the obvious recent influences that have created the need for new sorts of tests and new ways of testing. It is no coincidence that they are also behind some of the rethinking of coaching that is going on now.

Present and future developments in testing

Given all of this what are the emerging trends and conversations in testing? As this section heading suggests, some new trends have already started and may continue in the future. Others are more speculative.

Changes in what is measured

Some practices and areas of application in testing are long-established. For instance:

• Business test use has long been focused on leadership, not least for economic reasons, since leadership tests can command a higher price and, often, expensive add-on consultancy services.
• Personality testing tends to be based on one of a small number of theoretical approaches (typically factor-analytic, type, ipsative and, to a lesser extent, projective).
• Certain aspects of educational and clinical testing have been based on a model in which specific sorts of ability are measured as surrogates for, or more focused applications of, the robust but often criticised practice of IQ testing.

Table 3.3 Some top-level influences on testing

Changes in technology: As mentioned earlier, Carol Braddick’s chapter gives a good summary of technological changes that might influence both coaching and testing. A by-product of technological innovation is the generation of huge amounts of manipulable data. This can be analysed in many ways. Thus the need for a tool (a test) to generate psychological data is reduced: the need is for ways of analysing that data, creating valid, reliable and useful psychological information and using that to create action plans.

Life span: The Hundred Year Life (Gratton and Scott 2016) gives an excellent review of this area. It draws out the implications of longer lives in a number of areas, not least the shape of careers. The implications for testing range from the need for more testing (and coaching) in increasing life and work transitions, to the need to measure different sorts of skills and human characteristics, comparing results to different norm groups.

Developments in psychology as a discipline: In particular, it is claimed that the neurosciences (especially the use of high-powered scanning technologies), allied to advances in genetics, are doing away with the need for some sorts of testing. But Hamira Riaz’s chapter in this book suggests some concerns about claims made on behalf of the neurosciences.

General awareness of psychological issues: Bestselling novels, books, films and magazines often take psychological issues as a core theme. It can be argued that we live in a psychologised society. Informal tests and quizzes are often one way of taking part in this, and psychological language is used freely outside traditional settings.

Changes in testing: Over the past 20-30 years, technology has influenced the theory and practice of testing, reintroducing techniques like item banking and item response theory to create a more flexible set of tools which, to some extent, adapt to the individual test taker. One other influence on testing is similar to that on coaching. The question ‘Does testing truly predict future behaviour?’ may be better researched than ‘Does coaching work?’ but, in both cases, users are commenting that, given the amount they spend on these two techniques, they’d like to be more certain that they ‘do what they say on the tin’.

However, new areas to be measured are developing all the time. As an example, Howard Gardner’s book on Multiple Intelligences (Gardner 2006) underlay the publication of many tests of emotional intelligence, a concept which has been accepted enthusiastically in business and educational contexts. This area, and its link to positive psychology, seems more compatible with many coaching approaches, engages coaches and has generated interest in some new sorts of test. Gardner’s book also fuelled interest in other sorts of ‘intelligences’, ranging from the thought-provoking (kinaesthetic and musical) to the bizarre (sexual intelligence, anyone?). This sort of development, coupled with the seemingly annual remodelling of what leadership is, suggests that many of these new areas of assessment may be driven by management fads and media stories rather than genuine psychological insights.

All this said, new areas of assessment are developing. These are often simply more focused areas within personality or ability. Some of them need little explanation. Some are less relevant to coaches: those which aid recruitment in business or diagnosis in clinical practice, or measure attainment in education, for instance. Others, however, do seem to fit into coaching more easily.

It should be obvious that at least some of these newer testing areas (and this is just a selection) would be useful in coaching.

Table 3.4 Some new and developing subjects for assessment which may be of interest to coaches

Situational judgement: Tests of situational judgement measure responses to realistic scenarios rather than a more rarefied, non-contextual decision-making ability.

Service orientation: Became more important when service industries were seen to be taking over from manufacturing. Particularly used in telesales and service centres. A development/adaptation of these assessments might be the creation of care orientation, given the growth of care roles in Western society (Gratton and Scott 2016).

Creativity and values: Seen as a gap in the present range of assessments.

‘Game changing’: The question ‘How do you identify very young people who have no track record but whose natural ability (in areas such as IT) allows them to think “outside the box”?’ is exercising a lot of companies. In a sense this returns assessment to one of its starting points: identifying prodigies.

Integrity: Integrity tests have long been popular in the USA but there has been some resistance to them in Europe due to cultural and legal issues.

Trainability/flexibility: Given huge numbers of work transitions in any life and the constant pace of change, how easily will individuals find it to learn new skills or adapt to new situations? (Gratton and Scott 2016, throughout).

Prejudice/bias: This is a controversial area, as there are huge dangers in labelling someone (rightly or wrongly) as prejudiced. This is also a very difficult area to assess. Harvard’s work in this area has highlighted possibilities and problems (Implicit Project 1998).

Resilience: There is much debate about the definition of this.

Developmental differences: Given longer life spans and the fact that younger people may have ‘game changing’ contributions to make in organisations (see earlier) and older people are active longer, one would expect more assessments to look at differences created by life-long development.

Clinical and quasi-clinical areas: There has been a very gradual breaking down of the barrier between clinical, work and educational assessment. There are still very specialist tests which can only be used by medical and other professionals. But the growing understanding of child mental health issues and reports on the incidence of mental health problems among the general, working population have created tests which begin to cross the boundaries. Good examples are tests of derailment and dark-side behaviour.

Team assessment: Assessment has focused on individual differences, and measurement of team compatibility and effectiveness has tended to add together the qualities of individual team members. A renewed interest in social psychology may lead to more sophisticated assessments of the characteristics of groups of people.

Most of these tests investigate characteristic behaviour or the basic motivations and values that underlie life and work choices.

Changes driven by digital technology

Over the past two decades, the increasing delivery of assessments in digital formats has changed several features of the practice. For instance, rather than giving every test taker the same content to measure something, technology and measurement theory enable us to change the questions we ask while measuring the same thing. Newer developments like automated item generation (a computer generates questions on the fly, based on a psychometric rule base) and item banking (creating a huge number of questions or items whose properties are known, a unique selection of which is used in any particular testing session) underpin these uses. These developments can prevent cheating where there are right or wrong answers. While this benefit will not impinge on coaching to any great extent, another one, shortening the time to take a test, might. This can be achieved by comparing someone’s responses on a few questions to how others have responded, generating a hypothesis of the final result and then asking a smaller number of questions to confirm or deny this hypothesis.
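Adaptive shortening of this kind is usually built on item response theory. The sketch below is a deliberately simplified illustration, not any publisher’s actual engine: it assumes a bank of items whose difficulty and discrimination are already known, scores responses with a two-parameter logistic model, always picks the unanswered item closest to the current ability estimate, and stops early once the estimate settles. All item parameters and thresholds are invented.

```python
import math
import random

# Hypothetical item bank: (difficulty b, discrimination a) for each item.
ITEM_BANK = [(-1.5, 1.2), (-0.8, 0.9), (-0.2, 1.4), (0.0, 1.0),
             (0.4, 1.1), (0.9, 1.3), (1.5, 0.8), (2.0, 1.2)]

def p_correct(theta, b, a):
    """Two-parameter logistic (2PL) probability of a keyed/correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses):
    """Crude maximum-likelihood estimate of ability over a coarse grid."""
    grid = [x / 10 for x in range(-30, 31)]            # theta from -3.0 to +3.0
    def log_lik(theta):
        return sum(math.log(p_correct(theta, b, a)) if correct
                   else math.log(1 - p_correct(theta, b, a))
                   for (b, a), correct in responses)
    return max(grid, key=log_lik)

def adaptive_test(answer_fn, max_items=6, min_items=3, stop_window=0.2):
    """Administer items adaptively; stop early once the estimate settles."""
    theta, responses, unused = 0.0, [], list(ITEM_BANK)
    for n in range(max_items):
        item = min(unused, key=lambda it: abs(it[0] - theta))  # nearest difficulty
        unused.remove(item)
        responses.append((item, answer_fn(item)))
        new_theta = estimate_theta(responses)
        if n + 1 >= min_items and abs(new_theta - theta) < stop_window:
            theta = new_theta
            break
        theta = new_theta
    return theta, len(responses)

# Simulate a test taker whose 'true' ability is 0.7.
random.seed(1)
simulated = lambda item: random.random() < p_correct(0.7, *item)
theta_hat, items_used = adaptive_test(simulated)
print(f"Estimated ability {theta_hat:+.1f} after {items_used} items")
```

Real adaptive engines select items by information functions and stop on a standard-error criterion rather than this crude window, but the basic loop of estimate, select, re-estimate is the same.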

Other benefits of digital delivery which might encourage coaches to use tests include the creation of more involving test items. Generations brought up on DVDs, video games and the internet are not going to be satisfied with verbally based, multiple-choice questions, ink blots or geometrical matrices. So there has been a move from words to pictures (sometimes moving) as the stimulus to which a test subject responds. I deal with the implications of this below, under ‘New ways of delivering and structuring traditional tests’.

Changes in reports

Test reports have changed dramatically. They’ve shortened, placed more emphasis on action rather than description, downplayed statistics and translated technical psychological terms into more natural language. They’ve tended to emphasise maximising potential rather than identifying problems and, as such, have started to directly influence the training and consultancy (such as coaching) that follows on from testing. The underlying algorithms that generate reports from test takers’ answers have grown exponentially more sophisticated. Increasingly, tests, particularly personality titles, generate a wide range of reports for different purposes (for instance coaching, selection, succession planning) and for different users (the test taker, his or her manager, the specialist test user, etc.). The emphasis has moved away from expert interpretation of tests to the test taker’s reaction to how their answers have been analysed; in other words, it has moved in a more coaching direction. But there is still a lot to do, not least explaining the basis on which ‘expert reports’ have been constructed.
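A minimal sketch of what such a report-generating algorithm does is shown below. Real report engines are far more elaborate; here the scale names, score bands and wording are all invented, and the point is only the structure: the same percentile scores are turned into different narrative text for different audiences.

```python
# Hypothetical scale scores, as percentiles (0-100), for one test taker.
scores = {"sociability": 82, "forward_planning": 31, "resilience": 55}

# Invented wording, keyed by audience and score band.
WORDING = {
    "coachee": {"high": "You come across as markedly {scale}.",
                "mid": "You are broadly typical in {scale}.",
                "low": "You report noticeably less {scale} than most people."},
    "coach":   {"high": "High {scale}: consider exploring where this strength overplays.",
                "mid": "Average {scale}: unlikely to be a primary coaching theme.",
                "low": "Low {scale}: a possible development area to raise with the client."},
}

def band(percentile):
    """Map a percentile to a coarse reporting band."""
    if percentile >= 70:
        return "high"
    if percentile <= 30:
        return "low"
    return "mid"

def report(scores, audience):
    """Generate one line of narrative per scale for the chosen audience."""
    lines = []
    for scale, pct in scores.items():
        template = WORDING[audience][band(pct)]
        lines.append(template.format(scale=scale.replace("_", " ")))
    return "\n".join(lines)

print(report(scores, "coachee"))
print("---")
print(report(scores, "coach"))
```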

Changes in where testing is done

Tests are increasingly administered in many different environments. Controlling how and where a test is administered used to be a major element of test user training, but this is no longer true. Certain sorts of tests are now delivered on-line, anywhere from someone’s home to an internet cafe. Trends suggest that mobile phones will be the increasingly dominant way of delivering assessment (as they already are for so many techniques), and this must have an impact on how tests are designed, scored and reported (McHenry 2017).

New ways of delivering and structuring traditional tests

Gamification

Gamification is the application of game-design elements and game principles in non-game contexts. These are early days for this development, though several publishers are both pioneering game-based assessments and, it would seem, theorising about and researching their future development.

Several generations have grown up with increasingly sophisticated digital games as one of their primary means of entertainment. Games require gamesters to make choices, evaluate evidence, show preferences, learn from the past and react to change, as well as exhibit a variety of different reasoning skills. Many implicitly force gamesters to exhibit personal preferences, typical behaviours and even prejudices. Whereas games used for entertainment may offer situations involving killing zombies, building worlds and defeating Orcs, new ‘serious games’ (as they’re sometimes called) can offer environments built round leadership situations, the experience of family dynamics or psychological syndromes, sports matches and games. With a sophisticated statistical underpinning and a focused design, tests can therefore be used to measure psychological qualities in an environment which is visually and narratively rich and which can be made to seem much more relevant to the choices to be made on the basis of the results.

Video reporting

As a logical conclusion to these points, IT capabilities could inform test reports as well as tests themselves. For instance, a film of someone acting in a way that suggests he or she scores very high on a preference for Introversion and very low on Openness to Experience would be more engaging (and, arguably, would be understood better, more often) than a bar chart reporting percentile scores on those two personality scales.

b2c

Increasingly, test publishers and providers talk about the growth of a potential direct-to-consumer (b2c) market for tests. Whereas at one time tests were only used by trained professionals, improved access via smartphones, tablets and other devices means tests can be delivered directly to consumers (McHenry 2017). While the dangers of access to very personal, sometimes technical information by untrained people have been pointed out, the developments in reports outlined earlier reduce the dangers of misinterpretation. Social and work changes suggest that, just as coaching will increasingly be bought by individuals to help them navigate through increasingly complex and numerous life choices, the same will be true of testing (see Gratton and Scott 2016 for a general treatment of these complex life changes).

New ways of accessing data

While these innovations have been affecting and updating traditional tests, designed to generate reliable and valid data on human beings, a more fundamental change has been taking place which is causing a lot of discussion.

We often believed we needed (and sometimes still believe we need) psychometric tests to generate data about people’s psychological states on which to base plans, treatment, job offers, education, life-changing decisions and so on. One prediction is that soon we won’t need such tools: personal data is everywhere. Michal Kosinski (2014) argues that in analysing social media data we are analysing psychology in action: we are observing people making real choices rather than the artificial ones tests create. So the issue of how you get psychological data becomes almost irrelevant next to the techniques that underlay traditional tests: how you draw accurate inferences from the data; how you ensure the resulting information is valid and reliable; and how you accurately translate this information into actions, plans and treatments. So, as I wrote at the beginning of this chapter, ‘the principles of how you analyse data to gain meaningful information have changed much less than the way you get the data in the first place’.
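The classical checks referred to here carry over unchanged to these newer data sources. The sketch below shows two of them on made-up numbers: Cronbach’s alpha as a reliability estimate for a short scale, and a simple correlation between scores inferred from digital-footprint features and scores on an established questionnaire as a crude criterion-validity check. The data, and the idea that the first set of scores is footprint-based, are purely illustrative.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability for a people-by-items score matrix
    (rows = people, columns = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented responses of six people to a four-item scale (1-5 ratings).
scale_items = [[4, 5, 4, 5],
               [2, 2, 3, 2],
               [5, 4, 5, 4],
               [3, 3, 2, 3],
               [1, 2, 1, 2],
               [4, 4, 5, 5]]
print(f"alpha = {cronbach_alpha(scale_items):.2f}")

# Invented footprint-derived scores vs. questionnaire scores for the same people.
footprint_scores = [0.8, -0.5, 1.1, 0.1, -1.2, 0.9]
questionnaire_scores = [0.6, -0.7, 1.3, -0.1, -0.9, 1.0]
validity = np.corrcoef(footprint_scores, questionnaire_scores)[0, 1]
print(f"validity coefficient r = {validity:.2f}")
```

Whatever the source of the data, it is estimates like these, rather than the novelty of the data itself, that justify acting on the resulting information.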

What are some of the sources for this sort of data? It’s worth taking the points Carol Braddick makes in her chapter about the influence of technology on coaching and applying them to testing. For instance, Alexa, Siri and other virtual AI assistants could use more sophisticated natural language analysis to hypothesise enduring psychological traits and temporary psychological states from the language someone is using, the tone of his or her voice, and his or her speed of articulation.

Robust and proven correlations are beginning to emerge between psychological states and physical states as measured by devices such as Fitbits and wearables manufactured by Garmin. This is not a new idea: biodata has long been an entry point for understanding someone else’s (and one’s own) mood. Pulse rate, sweating, and increases or decreases in weight have always been data from which we extract psychological descriptions. The new wearables just make the process quicker, easier and, once more research has been done, more accurate. See McHenry (2017) for an excellent introduction to this.

The most controversial source of data which can be analysed psychometrically is social media. The Cambridge Analytica scandal continues to raise questions about privacy, influence and disclosure in social media. As Kosinski has argued, this method of analysing people’s data has huge advantages but also attendant dangers, and it must be used professionally and ethically (Kosinski 2014).

Some warning voices

As the previous sections show, change brings threats as well as opportunities. Increased access to psychometric analysis, the use of different sorts of data from a variety of sources - all the issues I have mentioned - have potential downsides.

Claudia Filsinger-Mohun’s chapter on diversity implicitly raises several of them. The use of psychometric tests in a non-coaching area such as selection may tend to psychologically ‘clone’ someone who has previously done a job. Rather than seeking to employ someone who will meet the challenges of a changing market or industry, the recruiter uses tests to find someone like the person who did the job before. Tests can be used to find people who are ‘like us’, who fit our culture. This reduces diversity, and Claudia’s chapter argues for the critical importance of diversity in any team or organisation. Coaches are not involved in selection, but these examples highlight a tendency to simplify test results and fit them into pre-conceived patterns. This tendency can range from the Barnum effect, in which someone will agree with a test report whatever it says, to the way some test users give undue weight to extreme scores or certain scales and fit these into a preconceived, often pathologising model. If coaching is about working as an individual, with an individual, the perceived tendency of tests to simplify individuals makes some coaches suspicious of their increasing use.

 