The 31st Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) was held April 14-16, 2016 in Anaheim, California. The conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and to share information. Many sessions covered topics of interest to the federal contractor community, including employment law, testing, diversity and inclusion, big data, and regulations for individuals with disabilities. DCI Consulting Group staff members were well represented in a number of high-profile SIOP presentations and also attended a variety of other sessions worth sharing. Notable session summaries and highlights can be found below.
High-stakes employment scenarios with legal ramifications have historically relied on a frequentist statistical approach, which assesses the likelihood of the observed data assuming a certain state of affairs in the population. That, however, is not the question usually of interest, which is the likelihood of a certain state of affairs in the population given the data. This session explored the use of a Bayesian statistical approach, which answers the latter question, across different high-stakes employment scenarios. In each of the presented studies, data were simulated and analyzed, and the results of the Bayesian and frequentist approaches were compared:
In each of the studies, the results suggested the utility of a Bayesian analysis in some specific circumstances. Overall, the presenters agreed that the Bayesian analysis should supplement more traditional frequentist analyses and noted specific issues to consider when designing these analyses. Given the lack of legal precedent and difficulties introducing a new set of statistical interpretations into the courtroom, the takeaway was that the best current value-add for Bayesian approaches is in proactive, non-litigation applications.
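To make the frequentist/Bayesian distinction concrete, here is a minimal sketch of the two questions applied to a simple applicant-flow scenario. The counts and flat priors are illustrative assumptions, not data from the presented studies.

```python
import numpy as np
from scipy import stats

# Hypothetical applicant-flow data (illustrative only)
men_hired, men_applicants = 60, 100
women_hired, women_applicants = 45, 100

# Frequentist question: how likely are data this extreme, assuming equal
# selection rates in the population? (two-sided Fisher's exact test)
table = [[men_hired, men_applicants - men_hired],
         [women_hired, women_applicants - women_hired]]
_, p_value = stats.fisher_exact(table)

# Bayesian question: given the data, how likely is it that the female
# selection rate is lower than the male rate? (Beta(1, 1) priors)
rng = np.random.default_rng(0)
men_rate = rng.beta(1 + men_hired, 1 + men_applicants - men_hired, 100_000)
women_rate = rng.beta(1 + women_hired, 1 + women_applicants - women_hired, 100_000)
prob_women_lower = (women_rate < men_rate).mean()

print(f"Frequentist p-value (data | equal rates): {p_value:.3f}")
print(f"Posterior P(female rate < male rate | data): {prob_women_lower:.3f}")
```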
The opportunity for credentialing or micro-credentialing is ever-increasing, with credentials popping up in many professional fields that previously had none. What it takes to develop and maintain these credentialing exams, however, is something many people know little about. In this session, led by Samantha Holland (DCI), panelists from both private and public sector credentialing programs shared their experiences with issues such as maintaining test security, developing test content, and establishing validation evidence for their exams. Some highlights are noted below:
DCI’s Joanna Colosimo moderated this panel, which featured DCI’s Mike Aamodt, Michelle Duncan of Jackson Lewis, Eyal Grauer of Starbucks, and David Schmidt of DDI, and provided an update on recent regulatory changes, enforcement trends, and other topics related to compliance.
In fiscal year 2015, the OFCCP completed fewer compliance evaluations, but the duration of audits has increased as a result of the revised scheduling letter and more in-depth follow-up requests, particularly related to compensation. The panel also discussed the increase in steering allegations and settlements where whites and/or males were the alleged victims of systemic hiring discrimination.
Dr. Aamodt spoke about two hot topics: the EEOC’s proposed pay data collection tool and the use of criminal background checks in employment decisions. With regard to the EEO-1 pay data collection tool, he highlighted the burden of reporting pay data across 10 EEO-1 categories, 12 pay bands, 7 race/ethnicity categories, and 2 sex categories, as well as some of the limitations of using W-2 data. He also discussed how difficult it would be for the EEOC to use the resulting data to identify pay issues. For employers using criminal background checks, Dr. Aamodt recommended that contractors adopt narrowly tailored policies that consider the nature of the offense, the time elapsed since the offense, and the nature of the job being sought.
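As a rough, back-of-the-envelope illustration of the reporting burden, multiplying the category counts mentioned above gives the number of employee-count cells a single report would contain (simple arithmetic only; the actual form layout may differ):

```python
# Back-of-the-envelope count of cells in the proposed EEO-1 pay data grid
job_categories = 10   # EEO-1 job categories
pay_bands = 12        # proposed pay bands
race_ethnicity = 7    # race/ethnicity categories
sexes = 2             # sex categories

cells = job_categories * pay_bands * race_ethnicity * sexes
print(cells)  # 1680 employee-count cells per report
```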
This session presented research conducted by DCI's Kristen Pryor, Rachel Gabbard, and Joanna Colosimo investigating best practices among federal contractors for complying with the Section 503 and VEVRAA obligation to formally evaluate outreach and recruitment efforts. Representatives from 77 federal contractor organizations provided survey feedback on current methods and prospective strategies for evaluation. Results identified strategies such as tracking resource-specific metrics on qualified referrals and hires, as well as conducting ROI analyses, to evaluate the success of outreach efforts. Results also suggested general frustration among federal contractors due to insufficient and ambiguous regulatory guidance on this requirement. DCI will be conducting follow-up research in the near future to determine whether further progress has been made now that the regulations have been in effect for over two years.
DCI’s Emilee Tison moderated this session, in which panelists discussed their perspectives and experiences related to identifying and evaluating reasonable alternatives. Panelists included Winfred Arthur, Jr. (Texas A&M University), Theodore Hayes (FBI), James Kuthy (Biddle Consulting Group, Inc.), and Ryan O’Leary (PDRI, a CEB Company).
Discussion topics included:
Panelists ended the session with a few parting words, including:
In light of recent developments in case law and updated regulatory guidance, panelists discussed competencies and strategies for expert witness testimony, focusing on three main topics: social framework analysis (SFA), new measures for test validation, and wage and hour concerns related to the revised FLSA regulations on exempt-status employees. Panelists included DCI’s Eric Dunleavy and Arthur Gutman, in addition to Margaret Stockdale of IUPUI, Cristina Banks of Lamorinda Consulting, Caren Goldberg of Bowie State University, and David Ross of Seyfarth Shaw.
The goal of SFA as it relates to expert witness testimony is to educate the court and jury on the processes underlying cognitive bias and other socially constructed concepts, such as gender inequality. Panelists cited the 2011 Supreme Court case Wal-Mart v. Dukes as a prime example of SFA methodology being applied to diagnose discrimination in personnel practices. Although SFA has been met with some criticism, many employment processes involve a degree of subjectivity that has the potential to lead to discrimination. For this reason, experts are encouraged to look at seemingly neutral factors that may have a disproportionate impact on members of a protected group.
Shifting focus to standards regarding test validation, panelists commented on the outdated nature of the Uniform Guidelines on Employee Selection Procedures (UGESP), which have not been updated in nearly 40 years. Although the panel was not aware of any initiatives to update the guidelines, it was noted that several SIOP representatives have met with the Equal Employment Opportunity Commission (EEOC) regarding the guidelines and other topics of mutual interest. Panelists also advised the audience to rely on both the SIOP Principles and APA Standards as supplemental, more contemporary resources regarding test validation standards. Additionally, SIOP will be publishing a white paper on minimum qualifications and adverse impact analyses that addresses data aggregation concerns and other testing considerations.
The final topic focused on wage and hour issues concerning the revised FLSA regulations. The panel discussed the difficulties many employers face in accurately classifying jobs as exempt or non-exempt, and in determining whether independent contractors should be considered employees. It was recommended that job analyses be conducted for individual positions, rather than at a general level, to help determine exempt status and how much time incumbents spend on each type of work. Employers should also be aware of any differences under state law.
The subject of “big data” has become a hot topic as access to increasingly large amounts of data provides employers with new opportunities to make informed decisions related to recruitment, selection, retention, and other personnel decisions. However, “data scientists” often overlook the legal implications of using big data algorithms within an employment context, especially when it comes to employee selection. Panelists discussed several issues emerging from the use of big data algorithms, including the potential for discrimination, Title VII consequences, and strategies for mitigating risk.
As suggested by DCI’s Eric Dunleavy, many of the “big data” models do not really differ from empirically keyed biodata, which is not a new concept. What is new is the ability to collect much larger amounts of data from new sources. Like empirically keyed biodata, big data can be very effective in predicting work-related outcomes. However, if the employer cannot explain how the algorithm works or show that it is job-related, it may be difficult to justify use of the algorithm in the face of a legal challenge.
In addition to traditional adverse impact concerns related to women and minorities, some big data techniques may have the potential to discriminate against other protected groups. For example, one panelist mentioned a computer program that can automatically score an applicant’s body movements and analyze vocal attributes from a video recording of an interview. Several other panelists noted that certain body movements or vocal attributes may be related to protected class status, in particular individuals with disabilities. The main takeaway here is that if an employer is using data algorithms, it is imperative that they not only validate the model, but also understand how it is making decisions.
In this session, speakers highlighted the increasing popularity of big data techniques (e.g., machine learning) within organizations to predict work outcomes, pointing out both benefits and challenges inherent to these approaches.
As one example of a big data “win,” Facebook’s David Morgan described how data collected on the current workforce can be used to identify employees at risk of turnover. More caution is required, however, when using big data to inform selection decisions. Many big data algorithms are essentially “black boxes”: data go in and results come out with little transparency into how or why. Not being able to explain the “why” makes these approaches very difficult to defend in court. Rich Tonowski, representing the EEOC, advised that companies be knowledgeable about and comfortable with the process being used, as the agency will obtain access to the algorithm. Similarly, companies should be able to explain how the information being used is job-related, especially when data have been mined from social media or other Internet sources.
A final caveat was that machine learning tools may use data that are correlated with protected-class status in some way. David Schmidt of DDI suggested that one way to test for this is to determine whether the model can predict the race or sex of applicants. If so, the algorithm may serve as subterfuge for discrimination. The problem may be compounded by the “digital divide,” whereby minorities may be less likely to have regular access to the Internet due to lower socio-economic status.
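One way to operationalize that check, assuming you have the selection model's input features and applicant demographics on hand, is sketched below with hypothetical data; this is an illustration of the idea, not the panelists' actual procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: the same applicant features the selection algorithm consumes
# sex: applicant sex coded 0/1 (hypothetical data for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
sex = rng.integers(0, 2, size=500)

# If a simple model can predict sex from these features well above chance
# (AUC ~0.5), the selection algorithm may be encoding protected-class
# status indirectly and warrants closer review.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, sex,
                      cv=5, scoring="roc_auc").mean()
print(f"Cross-validated AUC for predicting sex from model inputs: {auc:.2f}")
```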
This panel, which included DCI’s Art Gutman, discussed a variety of challenges faced when conducting criterion-related validation studies for client organizations, including study design issues, data collection problems, determinations regarding appropriate analyses, and reporting requirements. Specifically, presenters discussed the criterion problem (obtaining appropriate and accurate measures of job performance), problems with predicting low base rate events, and issues of range restriction and the appropriateness of applying corrections, among others. The panelists hypothesized that upcoming issues in criterion-related validation will include dealing with big data (“messy predictors”), processes for validating non-psychometric assessments, addressing validity equivalence (or lack thereof) in multi-platform or mobile assessments, and the eventuality of court cases evaluating validity generalization.
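As one concrete example of the correction issue the panelists raised, the standard adjustment for direct range restriction on the predictor (Thorndike's Case II) can be sketched as below; the input values are hypothetical and chosen only to illustrate how much a corrected validity estimate can shift.

```python
def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction.

    r_restricted: observed validity in the restricted (incumbent) sample
    sd_unrestricted / sd_restricted: predictor SDs in the applicant vs.
    incumbent groups
    """
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / ((1 + r_restricted ** 2 * (u ** 2 - 1)) ** 0.5)

# Hypothetical values: observed r = .25, applicant SD twice the incumbent SD
print(round(correct_for_range_restriction(0.25, 2.0, 1.0), 2))  # ~0.46
```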
In this session, a panel of experts provided insights on the proposed changes to the FLSA exemption criteria. The panel discussed the salary test for exemption, which would increase from $455 a week to the 40th percentile of weekly earnings for full-time salaried workers (estimated at $970 for 2016), and the potential changes implied for the job duties test. Regarding the salary test, panelists agreed that a change is overdue. However, they argued that a phased approach would be more appropriate and that the threshold should not be set at a fixed dollar value, but instead indexed so that it keeps pace with inflation. The NPRM did not propose a change to the job duties test, but asked for feedback on whether a quantitative threshold, like the 50% “primarily engaged” test in California, should be implemented. The DOL estimated that approximately 20% of exempt employees would be impacted by the salary changes alone. Implications for employers are staggering, especially in light of the potential for a 60-day implementation window. First, employers must assess how comfortable they are with their exempt/non-exempt classifications and reasoning, and plan to re-evaluate where needed. Second, budgeting and cost scenarios for moving exempt positions to non-exempt, realigning duties, or increasing pay should be evaluated. Finally, internal messaging and communication plans should be in place to outline the changes, the reasoning, and any new procedures.
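For context, annualizing the weekly figures cited above shows the size of the jump (simple arithmetic, not a figure from the session):

```python
# Annualized versions of the weekly salary thresholds discussed above
current_annual = 455 * 52    # $23,660 per year under the existing rule
proposed_annual = 970 * 52   # $50,440 per year at the NPRM's 2016 estimate
print(current_annual, proposed_annual)
```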
In this session, four presenters provided insights on diversity training. Three presented information from academic research, and one provided information from an organizational context. A full 67% of organizations provide some form of diversity training, though research on the impact of that training on the job is mixed. One series of studies found that individuals who are high in social dominance orientation (i.e., a strong preference for hierarchy in a social system and dominance over lower-status groups) tend to be more resistant to diversity training, but that this resistance can be mitigated when the training is endorsed by an executive leader. Another series of studies found that men are more likely to place importance on gender issues when those issues are raised by other men, and that this holds in both written and in-person contexts. A Google employee presented on the implicit/unconscious bias training Google has implemented as part of new-hire onboarding. The training focuses first on increasing awareness and understanding of the topic, providing a common language and initial suggestions for mitigation. Follow-up training has focused more on role-playing scenarios to cement the behavior change and mitigation aspects, increasing employees’ comfort with calling out biases when and where they are observed.
Panelists discussed their experiences conducting surveys, times when things went wrong, and recommendations for a successful survey. Anyone can develop and administer a survey, but issues can arise when multiple stakeholders are involved, each with a different opinion. For this reason, it is important to communicate the purpose of the survey and how the results will be used. Branding can help develop awareness, generate interest, and increase participation. Positive changes implemented based on survey results can also lead to increased participation the following year. Additionally, it is important to investigate any null or reversed findings between survey iterations to gain a better understanding of any issues that may be present within the organization.
Panelists also addressed problems they have encountered when implementing results, including trying to do too much with the findings or slicing the data so many ways that the results become less reliable. It was also emphasized that results should be presented in a way that leaves little room for subjective interpretation, to avoid drawing conclusions that are not supported by the data.
Finally, the panel provided a few recommendations for a successful survey:
This debate-style session posed the question of whether or not big data techniques (specifically deep learning or machine learning) could/should be used to eliminate adverse impact during selection. The panel included data scientists and I/O psychologists to present their perspectives. The I/O psychologists opposing this technique – including DCI’s Emilee Tison – presented the following high-level points:
In summary, the panelists came from very different perspectives and foundational knowledge bases; however, the session was the start of what will hopefully become a meaningful cross-discipline dialogue.
By: Kayo Sady, Senior Consultant; Samantha Holland, Consultant; Brittany Dian, Associate Consultant; Dave Sharrer, Consultant; Kristen Pryor, Consultant; Rachel Gabbard, Associate Consultant; Joanna Colosimo, Senior Consultant; Emilee Tison, Senior Consultant; and Bryce Hansell, Associate Consultant at DCI Consulting Group