An extensive research literature supports the use and value of simulations as part of the selection system. However, a number of important areas are in need of more research. We highlight some of the emerging research needs below. Because of the rapidly changing nature of this area of work, this list can be expected to grow quickly. It will be important to do all we can to ensure that research keeps pace and that the same rigorous standards that have guided the development and use of other predictors (e.g., cognitive ability and personality) are applied to simulations.
First, additional research is needed that examines the cost and utility of various simulation features in order to guide development and use. For example, it is unclear what level of fidelity is required to achieve the benefits simulations provide, such as improved applicant reactions and validity. To date, no same-sample studies have been conducted that examine the impact of fidelity on important organizational outcomes. The field would also benefit from cost-utility analyses similar to those that have been conducted for simulations used in training. As fidelity increases, so do the costs and resources required for development and administration. This is particularly true as organizations scale for cross-national administration. As organizations continue to seek ways to cut costs in resource-constrained environments, it will be important to understand the minimum level of fidelity required to achieve desired outcomes. In the same vein, additional research is needed to further understand the level of simulation specificity required, especially as it relates to the generalizability of assessment results. For example, how specific does a simulation need to be to achieve desired levels of validity, improve applicant reactions, or serve as an RJP? It may be possible to leverage technology to package generic items with a job-specific skin that improves the user experience and achieves the same level of validity and applicant reactions as more job-specific simulations, at a reduced cost.
Second, issues related to the cross-cultural application of simulations are not well understood. It is not enough simply to explore the frequency of use in different cultures. Rather, research examining the specific features of simulations as they relate to contextual factors that may drive their acceptance across cultures is needed, as is work examining the generalizability of validity findings. Where such research has been undertaken, it has largely focused on cultures with a shared European heritage, thus neglecting many emerging markets such as those in South America. Particular attention will need to be paid to across-country applications and the utility tradeoff between localization and the need to compare assessment results across locations.
Third, as technology is leveraged, psychometric research will be needed on emerging methodologies and new item types. For example, research allowing for stable estimates of validity has yet to be accumulated for computer-based in-baskets that present items in a non-linear fashion or for branching role-plays. It will be important to amass the studies needed from which meta-analytic validity estimates can be made to support the use of new and emerging simulations, in the same way that has been done for more traditional simulations and selection assessments (e.g., cognitive ability). It will also be interesting to understand the incremental validity provided by new simulations relative to more traditional simulation types. Additionally, questions remain about construct validity and the potential for the introduction of construct-irrelevant variance. These are questions that have led to some controversy over the use of simulations in the past (Whetzel et al., 2012). It will be important to explore these issues so that the reasons why simulations are predictive of job performance are well understood. As well, the potential for subgroup mean differences and adverse impact needs to be explored, as the impact of irrelevant constructs on the performance of different subgroups is not known.
Finally, more applicant reaction research will be needed. While it has been shown that simulations result in positive applicant reactions through the lenses of justice models, it will be important to understand the specific features of simulations that drive those reactions. Is it merely an engaging look and feel, or is it the items that showcase the job that drive the positive results observed? What features are most important from a recruiting and branding perspective? This will have important cost and utility implications. For example, it is unclear whether applicants will have similarly positive reactions when simulations are presented as serious games. The literature suggests that perceived job relevance and opportunity to perform are important drivers of applicant reactions, yet it is not known whether applicants in an immersive game will implicitly make those connections. From a branding perspective, it is possible that an applicant who is immersed and engaged but does not see a direct link to the job will become quite upset if not offered a job because of a 'game'. An in-depth understanding of technology-based simulations as they relate to these issues is needed, but the complex interactions will be difficult to tease apart. For example, we are only now beginning to explore the impact of the use of mobile devices on testing. Fursman and Tuzinski (2015) found that applicants have less trust in mobile delivery compared to personal computer use.