DCI Staff Attend SIOP's 29th Annual Conference Held in Honolulu

The 29th Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) was held May 15-17, 2014 in Honolulu, HI. This conference brings together members of the I/O community, both practitioners and academics, to discuss areas of research and practice and share information. Many sessions cover topics of interest to the federal contractor community, including employment law, testing, and new regulations for individuals with disabilities. DCI Consulting Group staff members were involved in a number of SIOP presentations and attended a variety of sessions. Session summaries and highlights can be found below.

Fisher v. University of Texas: The Future of Affirmative Action

Making the Most of SMEs: Strategies for Managing SME Interactions

How Big of a Change Will Big Data Bring?

Recruitment of Individuals with Disabilities: Regulatory, Research, and Employer Perspectives

Meta-analysis Methods for Messy, Incomplete, and Complex Data

Predictive Analytics: Evolutionary Journey from Local Validation to Big Data

Using and Interpreting Statistical Corrections in High-Stakes Selection Contexts

Cruising the Validity Transportation Highway: Are We There Yet?

Within-Group Variability: Methodological and Statistical Advancements in the Legal Context

What Goes Unseen: Mental Disabilities in the Workplace

Fraud on Employment Tests Happens: Innovative Approaches to Protecting Tests

Employees with Disabilities Section 503 Changes: Implications and Recommendations

New Developments in Biodata Research and Practice

 _______________________________________________________________________________________

Fisher v. University of Texas: The Future of Affirmative Action

This panel discussion included several experts in the field of high-stakes selection and covered the recent Supreme Court ruling in Fisher v. University of Texas. The case focused on the use of race in college admissions. As discussed in a recent blog, the Supreme Court ruling essentially sent the case back to the lower courts to reevaluate whether the University of Texas at Austin’s inclusion of race in the admission process was narrowly tailored enough to pass the standard set by Grutter v. Bollinger (2003). The Court held that the standard previously applied in granting summary judgment in favor of the university was incorrect.

During the panel, experts covered a number of questions largely focused on the future of affirmative action in academics, but also in employment settings. Much time was spent discussing the intent of universities in using race as a factor and the ultimate goal of broader diversity. Many felt that race is commonly used because it is an accessible factor, whereas other factors like worldliness, team performance, and self-confidence are more difficult to measure systematically. The session chair, Dr. James Outtz, commented that “race is a distraction from figuring out better variables,” and many panelists agreed that better selection measures should be developed in education to obtain the diverse classes many institutions desire. Overall, panelists reached consensus that the current ruling will have no effect on the federal contractor community and likely won’t have much effect even if the lower courts rule differently. The larger effect will be felt by the education system, which may need to search for alternatives or better ways to measure the diversity it seeks in incoming classes.

 _______________________________________________________________________________________

Making the Most of SMEs: Strategies for Managing SME Interactions

Many organizations rely on subject matter experts (SMEs) to gather information for a variety of projects, such as assessment development, training programs, and job analyses. This panel of experts, including Dr. Dunleavy from DCI, discussed best practices and approaches to working with SMEs. Several issues were covered, such as the amount of information communicated to SMEs, getting their buy-in, and understanding time constraints. The panel agreed that organizations should only share information that will help SMEs understand their involvement in the project and the overall objectives and outcomes. Other recommendations from the panel included:

 

  • Setting expectations early so SMEs can bring the true dynamics of the organization
  • Recognizing indicators SMEs do not understand (e.g., instructions, language)
  • Being proactive in identifying and managing disengagement
  • Recognizing rating patterns
  • Providing feedback for performance ratings

 _______________________________________________________________________________________

How Big of a Change Will Big Data Bring?

I/O psychologists discussed many pertinent issues at SIOP this year, but one topic in particular seemed to surface again and again: “big data.” I/O practitioners and researchers alike raised a number of questions related to the growing availability of data and the seemingly limitless potential for analysis. The following questions, among others, were discussed by an esteemed panel of I/O practitioners during a debate on the impact of big data on I/O psychology.

What is big data? Big data has been described using four terms: variety, volume, velocity, and veracity. “Variety” refers to the diversity of data sources to pull from and data types to explore. “Volume” describes the vast quantity of data available for analysis. “Velocity” refers to the speed with which we are capable of finding patterns in the data. Finally, “veracity” is a reference to the general accuracy of the data and, as a result, the outcomes of big data analyses.

What are the implications for psychology and business at large? Big data analyses are commonplace in industries such as insurance and finance. However, where does I/O psychology come into play? Some psychologists argue that while the skills needed to conduct such analyses are evolving, I/O practitioners have been working with big data for some time now (e.g., large-scale validity studies). Others argue that the influx of big data at our fingertips will mean revolutionary changes for the field: new skills, new techniques, and a new set of ethical issues to be wary of. Regardless of stance, I/O psychologists tend to agree that we have something unique to offer: an understanding of human behavior that can shed light on why we are seeing a particular pattern, which goes a step beyond describing what the pattern is.

Is there potential for ethical dilemma? With big data, there comes the potential for big problems. The more data we have to analyze, the more likely we are to find some sort of pattern by chance alone. Dr. Scott Erker advised that, with the increased potential for Type I error (false positives), I/O practitioners must be the “informed skeptics” who raise the point that a finding may not be “as significant as you might think.”

In spite of concerns, big data analyses are becoming more and more prevalent in research across a variety of industries. With a dual role as experts in both data analytics and human behavior, I/O psychologists undoubtedly have a lot to offer.

 _______________________________________________________________________________________

Recruitment of Individuals with Disabilities: Regulatory, Research, and Employer Perspectives

This session focused on a variety of issues regarding the recruitment and selection of individuals with disabilities in the context of new OFCCP regulations. The panel (including Dr. Dunleavy from DCI) included a diverse set of experts from academic, internal HR, test vendor, and HR risk management settings. The panelists covered topics including the new regulations, an update on available research regarding subgroup differences, and strategies for promoting and retaining individuals with disabilities. From a more practical perspective, the panelists also discussed the potential consequences of being under-utilized, whether adverse impact analyses of applicant flow data are worth pursuing, and how to conduct analyses in ways that mirror reality. Practitioners in federal contractor companies shared resources to help employers determine reasonable accommodations and discussed strategies for ensuring that processes are perceived as fair by individuals with disabilities. All of the panelists noted that the new regulations should promote a contemporary research agenda of value because applicant and employee data will now be available that had previously been illegal to collect. It will be interesting to follow this line of research as data become available in the upcoming years.

 _______________________________________________________________________________________

Meta-analysis Methods for Messy, Incomplete, and Complex Data

Meta-analysis is a procedure for combining results across many different studies to obtain more stable research results than might be achieved in any one study. The general idea behind meta-analysis is that all studies suffer from idiosyncratic problems that affect statistical results, but averaging results across studies allows those problems to cancel each other out, such that the average effect provides a realistic picture of the true relationships between variables. In this symposium, methods for improving the accuracy of meta-analytic results were presented.

Meta-analysis is an important methodology in personnel selection research, and the presenters highlighted important considerations for those conducting meta-analyses. Of particular relevance were (1) aggregating comparable statistics, (2) appropriately contending with study outliers that may corrupt meta-analytic estimates, and (3) including the comprehensive set of research on the topic and recognizing the limitations of the meta-analysis if only a subset of research is included.
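The averaging idea described above can be illustrated with a minimal sketch. This example was not part of the session; it assumes a simple "bare-bones" approach in which each study's observed correlation is weighted by its sample size, so that larger (more stable) studies contribute more to the mean effect. The function name and study values are made up for illustration.

```python
# Minimal sketch of a sample-size-weighted mean correlation.
# Each study contributes its observed correlation r, weighted by its sample size n.

def weighted_mean_r(studies):
    """studies: list of (r, n) pairs -- observed correlation and sample size."""
    total_n = sum(n for _, n in studies)
    return sum(r * n for r, n in studies) / total_n

# Three illustrative studies with idiosyncratic results (values are made up):
studies = [(0.20, 50), (0.35, 200), (0.25, 150)]
print(round(weighted_mean_r(studies), 3))  # -> 0.294
```

Note that full meta-analytic methods go well beyond this sketch (e.g., correcting for artifacts and estimating between-study variability), which is precisely where the presenters' cautions about comparable statistics and outliers apply.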

 _______________________________________________________________________________________

Predictive Analytics: Evolutionary Journey from Local Validation to Big Data

A panel of experts discussed the advantages and challenges of two different approaches: local validation and utilizing “big data.” Several of the highlights are summarized below.

Local Validation

  • Local validation done poorly is worse than not doing a study at all
  • Quality of the study will largely depend on how good the construct validity is
  • Often a good criterion measure isn’t available and will need to be supplemented (e.g., use a vendor performance appraisal and (re)survey managers)
  • Limited resources can greatly impact the quality of the study
  • Can increase buy-in (i.e., the study is perceived well locally)
  • Many of the experts will use validity generalization in addition to content validity, or will use local studies to test theories (a more academic application)

Big Data

  • With the increase in technology, many organizations are privy to “big data” which can be utilized to answer important business questions and guide strategic planning if harnessed properly
  • Timeliness is very important – can consider using longer periods of data, but placing more weight on recent data.
  • Challenges mostly center on integrating systems and/or using non-traditional sources. This often requires a lot of manual input if the “smoke stacks don’t talk” (i.e., there is no unique identifier across systems). It is recommended to work with IT to address this, but start small – with a specific organizational issue – and work backwards (a lot of lessons learned will carry over).
  • If you can get big data right/clean, it can prove better than relying on meta-analyses or publications. Published research often suffers from “publication bias,” whereas an internal database will surface all findings, including those that are null.
  • Keep in mind the importance of reliability and validity when conducting analytics – an area where I/O psychologists are a great asset.
  • Also keep in mind the influence of statistical power with big data – statistical significance does not necessarily mean importance (or practical significance)
  • A panelist recommended the website flowingdata.com for ideas using big data

 _______________________________________________________________________________________

Using and Interpreting Statistical Corrections in High-Stakes Selection Contexts

In criterion-related validity studies, validity coefficients provide evidence of the job relatedness of selection procedures, as they represent the degree to which scores on a selection procedure are related to performance on the job. The higher the validity coefficient, the stronger the evidence that the assessment is job related. In practice, statistical corrections are often applied to validity coefficients to account for the fact that observed validity coefficients often underestimate the relationship between the selection procedure and performance. In a well-attended symposium chaired by Dr. Kayo Sady, four presenters highlighted four lines of research exploring issues of applying statistical corrections.
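As one concrete illustration of the kind of correction discussed (not an example from the symposium itself), the classic correction for criterion unreliability divides the observed coefficient by the square root of the criterion's reliability. The function name and example values below are hypothetical.

```python
import math

# Sketch of the classic correction for criterion unreliability:
# an observed validity coefficient underestimates the true relationship
# when the job-performance (criterion) measure is itself unreliable.
def correct_for_criterion_unreliability(r_observed, criterion_reliability):
    """Return the disattenuated validity coefficient."""
    return r_observed / math.sqrt(criterion_reliability)

# e.g., an observed r of .30 with a criterion reliability of .60 (made-up values)
print(round(correct_for_criterion_unreliability(0.30, 0.60), 3))  # -> 0.387
```

The corrected value is only as trustworthy as the reliability estimate plugged into the formula, which echoes the presenters' theme that the assumptions underlying correction procedures must be carefully evaluated.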

Dr. Dana Dunleavy and colleagues highlighted medical school admissions research that illustrated the importance of accurately defining applicant populations and their characteristics in order to derive accurate validity coefficient estimates that have been corrected based on presumed population characteristics. Dr. Jeff Johnson presented cutting-edge research that extends statistical correction formulas to synthetic validity coefficients, thus enhancing the available tools for evaluating validity based on non-traditional validation strategies. Dr. Lorin Mueller presented test construction research that underscored how final test quality is influenced by the particular correction procedure applied to item-level statistics. Finally, Dr. Kayo Sady and David Morgan from DCI Consulting presented a legal risk matrix that introduced a method for determining selection procedure legal risk based on concurrent evaluation of the uncorrected and corrected coefficient characteristics. A common theme across the four presentations was that the assumptions underlying correction procedures should be carefully evaluated to help ensure accurate calculation and interpretation of corrected validity coefficients.

 _______________________________________________________________________________________

Cruising the Validity Transportation Highway: Are We There Yet?

This session of noted experts from academia and practice discussed selection procedure validation strategies that borrow validity evidence from other sources. There are many scenarios where selection tools may provide substantial value to organizations, but there is no opportunity to conduct local validation research. A lack of local research may prevent the organization from demonstrating that the tools predict important work outcomes, leaving the organization exposed to EEO risk. In these scenarios, borrowing validity evidence from other sources may be a worthwhile strategy. There are a variety of such strategies, yet there are few clear standards for judging the appropriateness and persuasiveness of each. Toward that end, this panel evaluated contemporary strategies including validity transportability, synthetic validation methods, and meta-analysis as a validity generalization strategy. One common theme centered on the disconnect between practices accepted by the broad I/O psychology community and those regularly endorsed in EEO enforcement settings, where local research appears to be favored. The session closed with some practical considerations regarding the strengths and weaknesses of each approach.

 _______________________________________________________________________________________

Within-Group Variability: Methodological and Statistical Advancements in the Legal Context

When businesses fall under legal scrutiny, the question that is oftentimes raised is: Was one group of employees or applicants treated differently than another group? But what constitutes a group in these situations? It is critical to determine that the individuals in question share certain characteristics that justify grouping them together for purposes such as pay analysis and class certification. In this forum, I/O psychologists discussed a number of advancements in answering the question: Are these individuals similar enough to be treated as a group?

Drs. Kayo Sady and Mike Aamodt of DCI Consulting presented a research method for determining pay equity analysis groupings. The method aims to balance research rigor with time and effort, such that data might be collected to inform the grouping process without requiring an exorbitant amount of time and effort. That said, the method may require more time and effort than is reasonable for a typical proactive analysis; under audit circumstances in which OFCCP is pursuing aggregations beyond the job title level, however, the method may be used to evaluate the appropriateness of groupings.

In the presented method, the lowest level of meaningful aggregation (e.g., job title) is determined first, followed by two steps to justify aggregation. In Step 1, subject matter experts (SMEs) rate all job title pairs on similarity of duties, skills, qualifications, and level of responsibility. To the extent that groups of jobs are rated as similar on those four characteristics, they are evaluated at Step 2.

Step 2 involves determining whether pay is influenced by the same factors (e.g., merit variables such as education, experience, or time in company), and in the same way, across the groups. Two methods for completing Step 2 were presented. The first involves evaluating the similarity of regression coefficients for a simple set of predictors across the groups; to the extent that the coefficients are similar, there may be justification for aggregation. (Note that this method does not necessarily involve statistical tests of equivalence, such as the Chow test, as parsimony may be valued over analytic rigor in some circumstances.) The second option is to have SMEs rate, using a structured rating scale, the projected influence of the common factors on compensation. Once such data are collected, common similarity indices, such as the squared Euclidean distance between ratings for different jobs, can be used to evaluate the similarity of influence ratings across job titles, thus informing whether a common pay model exists across groups.
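The squared Euclidean distance mentioned above is simple to compute. The sketch below is a hypothetical illustration (job names, rating scale, and values are made up, not from the presentation): smaller distances between SME influence ratings suggest pay is driven by common factors in a similar way across the two jobs.

```python
# Squared Euclidean distance between two vectors of SME influence ratings.
def squared_euclidean(ratings_a, ratings_b):
    """Sum of squared differences between paired ratings."""
    return sum((a - b) ** 2 for a, b in zip(ratings_a, ratings_b))

# Hypothetical ratings of how strongly education, experience, and time in
# company influence pay for two job titles (1-5 scale; values are made up):
job_x = [4, 3, 2]
job_y = [4, 2, 2]
print(squared_euclidean(job_x, job_y))  # -> 1
```

In practice the distances for all job pairs would be compared against some threshold or examined relative to one another to decide which titles plausibly share a common pay model.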

Other presenters discussed new methods for determining the appropriateness of class certification. For instance, Dr. Chester Hanvey explored class certification through time-and-motion observational methodology, which considers whether employees in a group allocate different amounts of time to different tasks, even within the same job. Hanvey proposed that variability in time spent on individual tasks demonstrates that employees are in fact not necessarily doing the same job. Similarly, Dr. David Dubin discussed the use of cluster analysis to show that time spent by employees on tasks can vary, which can serve as evidence against class certification. Finally, Dr. Kevin Murphy discussed a useful statistic for making the degree of group variability easy to understand: the coefficient of variation (CV). For groups with a normal amount of variability, the CV is around .33; it increases as groups become more variable and decreases as they become less variable. In comparison to significance tests, which simply convey whether variability is greater than zero, the CV gives descriptive information about the degree of variability.
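The CV is straightforward to compute: the standard deviation divided by the mean. The sketch below is illustrative only (the function name and data are made up, and it assumes the sample standard deviation; the presentation did not specify a formula variant).

```python
import statistics

# Coefficient of variation: standard deviation expressed relative to the mean,
# giving a scale-free description of how variable a group is.
def coefficient_of_variation(values):
    """Sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical data: hours five employees spend on a given task per week.
hours_on_task = [10, 12, 14, 16, 18]
print(round(coefficient_of_variation(hours_on_task), 2))  # -> 0.23
```

A value near the .33 benchmark mentioned above would indicate typical variability; a group of employees with a much larger CV on time-allocation measures would look less like a single homogeneous class.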

 _______________________________________________________________________________________

What Goes Unseen: Mental Disabilities in the Workplace

With the release of the new regulations under Section 503 of the Rehabilitation Act, discussions of disability in the workplace have become increasingly prevalent. The impact that these regulations will have on individuals with invisible disabilities is of particular concern among researchers and practitioners alike. The new regulations require federal contractors to begin soliciting disability status on a voluntary basis at the pre-offer phase of employment. Due to the oftentimes concealable nature of invisible disability, many are questioning whether applicants will disclose at the risk of potential stigmatization. In a forum dedicated to the topic of mental disability, Dr. Adrienne Colella, who has conducted extensive research on disability in the workplace, advised that I/O psychologists begin learning more about disability. Several researchers explored the topic of invisible disability in the workplace, shedding light on some of the recent progress being made in this area.

Presenters included researchers Anna Hullet and Christine Nittrouer and Dr. Sam Hunter. Specifically, research addressed issues surrounding adult attention deficit hyperactivity disorder (ADHD), autism spectrum disorder (ASD), and severe disability in an employment setting. The researchers discussed potential barriers for employees with disabilities as well as methods for producing positive work outcomes. For example, Nittrouer discussed her research on the use of goal-setting and self-monitoring as techniques for allowing employees with disabilities to better stay on task and complete tasks at work. She discussed the particular success of self-monitoring as a means for improving work performance within her study. Dr. Hunter raised an interesting question regarding selection barriers for applicants with ASD: Because ASD is a social disorder, will selection processes with social aspects (e.g., interviews) disadvantage applicants who are otherwise qualified for a position?

As a whole, the forum called attention to the reality that, from an I/O perspective, there is a lack of understanding of disability in the workplace. It is becoming more and more apparent that our knowledge of this topic needs to extend beyond the clinical setting.

 _______________________________________________________________________________________

Fraud on Employment Tests Happens: Innovative Approaches to Protecting Tests

Representatives from Microsoft, Caveon Test Security, CEB, and Select International participated in a panel discussion about protecting the integrity of tests and testing programs. The experts discussed several questions surrounding the prevalence of cheating and test piracy and approaches to protecting against and/or addressing security breaches. The major take-home message was that practitioners need to be prepared for and aware of the security risks of the specific test(s) they use – these risks are constantly changing and likely to become more prevalent as technology evolves. All felt that cheating wasn’t a “big” issue in the aggregate; however, there is little research in the area to truly know its prevalence and/or effects. It was noted that if cheating would have a large impact on the test (or selection decision), then unproctored testing is not an option.

Several of the key points from the session are summarized below:

  • Security tends to fall on a continuum, from a fixed form with no item bank at one end (less secure) to a computer adaptive test (CAT) with a large item bank at the other (more secure). Certain item types, such as multiple choice questions, are more vulnerable to cheating; a performance-based item is much less vulnerable.
  • As technology changes (e.g., mobile testing), reliance on less-secure item types may increase
  • Tips for unproctored internet testing (and testing in general):
    • Build dynamic tests (not a standard form for each candidate)
    • Use single-use links
    • Have a plan for information monitoring (e.g., patrolling the web)
    • Utilize data forensics to identify anomalies (e.g., utilizing a data warehouse)
  • Cheating is more of a validity issue than a behavior issue:
    • It takes a large amount of score inflation by a large number of test-takers to affect validity
    • Cheating has a greater impact on scoring that uses cut scores rather than rank ordering
    • It is recommended to include internal consistency measures and items to detect misbehavior, and to increase the length of the test
  • If using a CAT, it is recommended to use a fixed-length form (rather than variable length); candidate reactions tend to be more favorable for fixed-length forms
  • Be creative – assume your test or content will be stolen
  • Have an action plan for security breaches (e.g., quickly correcting a compromised item, form, or content; dealing with the candidate)
  • Budget ahead of time for security and prioritize areas of security (e.g., different forms, authenticating candidates via ID checks, test monitors); conducting a risk analysis ahead of time is recommended

 _______________________________________________________________________________________

Employees with Disabilities Section 503 Changes: Implications and Recommendations

This panel focused on the new Section 503 regulations, which became effective on March 24, 2014. The rule prohibits federal contractors and subcontractors from engaging in employment discrimination against individuals with disabilities, and requires various affirmative action practices in the recruitment, hiring, and retention of protected individuals. Panelists discussed the impact of the new rule, ways organizations can help implement the law, and why it is important. Peter Rutigliano, Senior Consultant at Sirota Consulting, discussed ongoing research related to individuals with disabilities and referred to this group as the forgotten diversity segment. His research showed that some of the larger differences between individuals with disabilities (IWD) and individuals without disabilities (IWOD) were found in perceived treatment of the employee by the company, satisfaction with physical working conditions, job support, and job achievement. Smaller differences were found in compensation, management feedback, and local environment (e.g., team). The session presenters discussed the following best practices and recommendations an organization can focus on as it comes into compliance with the regulation.

 

Advice individuals with disabilities give employers:

  • Make sure job description is representative
  • Focus on ability and not disability
  • Consider flexible working conditions
  • Treat candidates equally
  • Allow candidates to prove themselves (trial period)
  • Prepare when interviewing candidates with disabilities
  • Invest in sensitivity and interview training
  • C-level involvement

 

Where to invest:

  • Training
  • Marketing
  • Real estate
  • Structured interviews
  • Job description (not limiting or unnecessary qualifications)
  • Performance management

 _______________________________________________________________________________________

New Developments in Biodata Research and Practice

Several presenters summarized research and practical recommendations surrounding the use of biodata as a selection assessment. Biodata is biographical information, typically collected through a questionnaire that asks about life and work experiences and may also ask about opinions, values, attitudes, etc. The session discussant argued that biodata items should focus on a life event and include a past-tense verb; otherwise, the items may effectively be measuring personality. The presenters all agreed that there is great empirical support for biodata assessments; however, biodata remains an undervalued technique.

Two areas of typical concern for employers using or considering biodata assessments are applicant faking and test security. Presenters recommended maintaining a large item bank and exploring different item types as a method of content generation (which would also increase the pool). One example was to utilize response elaboration, which requires the applicant to provide an open-ended response to a previous multiple-choice question. To further decrease the likelihood of faking, it was recommended that the employer follow up on certain score groups; word will get out that there is follow-up, and the open-ended responses will be taken more seriously.

For employers considering the use of a biodata assessment in selection, it was recommended not to rely on validity generalization, but rather on transportability, as biodata is a method tapping different latent constructs rather than a construct itself. Overall, the presenters encouraged the use of biodata in making employment decisions and hoped to see this field of research grow.

 

By Yesenia Avila, M.P.S., HR Analyst; Eric Dunleavy, Ph.D., Principal Consultant; Rachel Gabbard, M.A., HR Analyst; Kayo Sady, Ph.D., Senior Consultant; and Amanda Shapiro, M.S., Consultant, DCI Consulting Group
