DCI Staff Attend SIOP’s 28th Annual Conference Held In Houston

Written by Eric Dunleavy, Ph.D. | May 7, 2013 1:17:00 PM

The 28th Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) was held April 11-13, 2013, in Houston, TX. Attendees and presenters came from a variety of industries and backgrounds, within and beyond Industrial/Organizational Psychology. As is typically the case for the Annual Conference, many sessions covered current EEO issues, as well as important workplace themes such as assessment, selection, and adverse impact measurement. This year, several sessions also focused on hot topics for the federal contractor community, such as the recruitment and retention of veterans and accommodations for individuals with disabilities. DCI Consulting Group staff members were involved in a number of SIOP presentations. Some conference presentation highlights are found below.

 

SIOP Focus on Veterans and Individuals with Disabilities

The 2013 conference featured several sessions focused on veterans and individuals with disabilities, two groups that are particularly important to the federal contractor community. Beyond their relevance to recent OFCCP regulatory activity, these sessions also fit nicely with current SIOP initiatives, including the Soldier to Civilian program, started this year and led by one of the sessions’ moderators. Relevant sessions are summarized below.

Recruitment and Retention of Veterans

Two independent sessions focused on the recruitment and retention of veterans. Reasons for this focus include the large demobilization of the military and already high unemployment rates for veterans, the difficulty veteran applicants experience in translating their military skills and experience to civilian-sector jobs, and the work companies must do to create a culture that supports that transition. These sessions included representatives from six companies that have found successful ways to address problems that may be driving veterans away from employment: Macy’s, Wal-Mart, GE, AT&T, Sodexo, and Frito-Lay.

There are a variety of reasons for building a robust veteran program, including the obvious one: “it’s the right thing to do.” Panelists also felt that it is a good business decision and that having veteran applicants and employees “raises the bar” for the rest of the workforce. Panelists described several characteristics of veterans that align with their business values, including service orientation, goal attainment, calm under pressure, demonstrated success in training, stress tolerance, and problem-solving.

Panelists provided recommendations to improve veteran recruitment:

  • Massively overhaul basic and preferred qualifications, and provide “military equivalents” for these qualifications
    • Consider developing a competency crosswalk
    • Don’t let groups “waive” basic qualifications
  • Partnerships with outside groups and consulting firms
  • Before designing a veteran recruitment program, conduct a gap analysis

While the recruitment of veterans is important, the retention of those hired is even more important. Each of the companies discussed programs that they have in place to work towards retention goals. Some of these programs include:

  • Formal support for veterans and families during periods of deployment
    • Maintaining previous level of compensation received
    • Organized meetings/groups/events
    • Increased communication, such as cell phones for soldiers deployed abroad
  • Internal education programs for non-veteran incumbents
  • Assignment of an internal champion for their veteran programs at each facility
  • Buddy Programs
  • Employee resource groups

The panelists also provided tips for successful veteran recruitment and retention programs:

  • Focus on the employee “life cycle”
  • Ensure senior leadership support
  • “Be serious about it” or the brand will fail
  • Accept that you will need to modify standard HR processes
  • Balance veteran programs with other initiatives or company programs
  • Don’t overcommit; budget the appropriate time and resources
  • Realize that it may take more than one attempt to get it right
  • Beware of quotas

In addition to covering experiences and best practices in veterans’ programs at six companies, one of the sessions also included research and resource information. Presenters from PDRI highlighted two resources from the Department of Veterans Affairs that veterans can utilize. VA for Vets offers comprehensive employment search and career support, along with numerous resources and coaching tools for veterans. “MyCareer@VA” provides further tools to veterans, including fit, career path, and job mapping.

Accommodating Individuals with Disabilities: Legal and Applied Perspectives

One session was dedicated to the accommodation process for individuals with disabilities, from pre-employment testing through post-employment training. Experts provided tips and best practices for ensuring compliance while maintaining a positive experience for applicants and employees.

For confidentiality, it was recommended that a single person in human resources be responsible for receiving and recording requests. Other than this individual and the medical department, employees should not be involved in the process – especially hiring managers and recruiters. Other recommendations from the panel included:

  • Be consistent in methodology and content even if the mode is different
  • If translators are needed, use company translators. If a personal translator is the only option, it was recommended that the session be recorded and reviewed by the company for approval
  • Be aware that accommodations are more easily addressed due to advancements in technology
  • Develop a well-defined process and procedures and allow for a reasonable amount of time for each step in order to move all qualified candidates through the process
  • Provide candidates upfront with as much information as possible on the process (time, technology used, etc.) and required documentation. This allows candidates to “get in front of the situation”
  • Keep a distinction between accommodations needed to apply or test for a job and the accommodations needed in order to do the job if hired.
  • Store disability information completely separate from applicant or employee files. It should not be connected to the applicant tracking system at all
  • Conduct job analysis to define essential functions of the job
  • When defining “reasonable accommodations”, involve the help of outside groups/organizations
  • Stay on top of regulations and current events
  • Include a process that allows applicants who reapply (and whose accommodations were previously approved) to skip reproducing documentation within a specified time frame
  • Be as flexible as possible
    • Err on the side of caution when approving requests – the cost of accepting a request is typically much less than the cost of a challenge

If a program for individuals with disabilities is not already in place (or the current one needs to be updated), the following recommendations were given:

  • Step one of the process is to assess the knowledge, skills, and abilities required for each job, as well as job functions. Review and update these job elements as needed in job descriptions and postings (this is where a job analysis is crucial).
  • Create a detailed process for when and how follow-ups occur after accommodations are requested
  • Develop a protocol for accepted forms of documentation and a process for tracking them (important if you need to reconstruct the process in the future)
  • Create a process to ensure that confidentiality is maintained
    • Candidates must be comfortable otherwise the program will not work
  • Get all of the “right people” involved (e.g., upper management, legal, HR, medical)
  • Create an internal group to champion the program

SIOP and EEOC: Developing Guidance on Employee Selection

At last year’s SIOP Conference, Jacqueline A. Berrien, the Chair of the Equal Employment Opportunity Commission (EEOC), gave the keynote address and called for an open dialogue between the EEOC and SIOP. This panel was an update on the progress towards that goal.

Moderator Dr. Joan Brannick (Brannick HR Connections) hosted a panel that included representatives from several involved groups. SIOP President Dr. Doug Reynolds (DDI) spoke about how SIOP has created a Task Force on Contemporary Selection Practice Recommendations (CSR) to EEOC, with the goals of relationship building and of promoting and advancing the science and practice of I/O psychology in relevant domains. DCI Consulting Group’s Dr. Eric Dunleavy, Task Force Chair, indicated that over months of discussions, two topics of mutual interest have been identified: the measurement of adverse impact and the transportability of validity evidence. These are the first two topics that the Task Force will explore, but they may not be the last. The Task Force will keep the SIOP community updated on its progress through the publication TIP and other resources.

Dr. Richard Tonowski, EEOC’s Chief Psychologist, indicated that the EEOC was glad to be partnering with SIOP, but wanted to make sure it was understood that EEOC will not be revising or abolishing the 1978 Uniform Guidelines on Employee Selection Procedures (UGESP). Although there have been requests in recent years to review these guidelines, Tonowski suggested that UGESP does not currently need to be changed. He noted that UGESP states that “[t]he provisions of these guidelines relating to validation of selection procedures are intended to be consistent with generally accepted professional standards for evaluating standardized tests and other selection procedures....” As such, UGESP may be “silent on an issue,” but that does not necessarily preclude the use of new methods and techniques. These methods or techniques would be considered acceptable as long as they are consistent with current science and practice. Stay tuned.

Moving the State of Adverse Impact Measurement Forward 

Many organizations measure and monitor adverse impact because of the possible legal responsibility and risk associated with the results. Practitioners often seek advice on the appropriate means to measure adverse impact. This panel of experts was brought together to discuss contemporary adverse impact analysis and to make recommendations on how to improve its measurement. Each panelist gave a short symposium-style presentation, followed by a discussion moderated by Dr. Eric Dunleavy (DCI Consulting Group).

Dr. Art Gutman (Florida Institute of Technology) provided the historical context of adverse impact measurement in case law, including the origin of the “two-standard deviation rule” in Castaneda v. Partida (1977) and the first application to an adverse impact case in Guardians v. Civil Service Commission (1980). Additionally, he mentioned that the “two-standard deviation” test used by the courts is actually referring to what statisticians would call a “two-standard error” test, and suggested adopting the term “statistical significance testing” to help eliminate some of the confusion. Dr. Morris continued the discussion on this topic, explaining that because standard error is sensitive to sample size, “it’s worse to be big than to be bad” when using only statistical significance tests.
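
As a concrete illustration of these points (a minimal sketch of our own with hypothetical counts, not material from the session), the “two standard deviation” test can be expressed as a Z statistic: the difference in selection rates divided by the standard error of that difference. Because the standard error shrinks as samples grow, the same disparity yields a much larger Z in a bigger organization:

```python
# Minimal sketch (hypothetical counts): the "two standard deviation" test
# is a Z test on the difference in selection rates, scaled by the standard
# error of that difference.
import math

def selection_rate_z(hired_min, apps_min, hired_maj, apps_maj):
    """Z statistic for the difference between two selection rates."""
    p_min = hired_min / apps_min
    p_maj = hired_maj / apps_maj
    # Pooled selection rate under the null hypothesis of equal rates
    p_pool = (hired_min + hired_maj) / (apps_min + apps_maj)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / apps_min + 1 / apps_maj))
    return (p_maj - p_min) / se

# The same 20-percentage-point disparity at two sample sizes:
print(selection_rate_z(20, 50, 60, 100))      # ~2.3: just over "two SDs"
print(selection_rate_z(200, 500, 600, 1000))  # ~7.3: same rates, 10x sample
```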

Dr. Scott Morris (Illinois Institute of Technology) also argued that adverse impact should be evaluated using directional statistical tests. Not only are these tests more powerful than non-directional tests, but they are also more appropriate for testing a directional hypothesis (e.g., the minority group will be selected at a lower rate than the nonminority group). Currently, non-directional statistical significance testing is the norm.
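
The practical difference is easy to see with a hypothetical Z value (our illustration, not data from the panel): the same statistic can cross the conventional .05 threshold under a directional test while missing it under a non-directional one.

```python
# Hypothetical example: one-tailed (directional) vs. two-tailed
# (non-directional) p-values for the same Z statistic.
from scipy.stats import norm

z = 1.80  # e.g., minority selection rate below the nonminority rate
p_two_tailed = 2 * norm.sf(abs(z))  # ~0.072: not significant at .05
p_one_tailed = norm.sf(z)           # ~0.036: significant at .05
print(f"two-tailed p = {p_two_tailed:.3f}, one-tailed p = {p_one_tailed:.3f}")
```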

Dr. Fred Oswald (Rice University) presented several ways that selection rates can be compared. In addition to the commonly used impact ratio (the statistic underlying the 4/5ths rule), we can examine its reciprocal, termed “relative risk.” The likelihood of applicants being selected can also be represented as odds, and those odds can be compared using an odds ratio. We can also use a chi-square test to compare the observed results to expected values. All of these measures have different meanings, but each one is important. Dr. Oswald suggested having separate guidelines for determining meaningful effects for each.
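
A short sketch with hypothetical applicant counts (our illustration, not the presenter’s data) shows how these measures describe the same hiring outcome in different ways:

```python
# Hypothetical applicant data: several ways to compare two selection rates.
from scipy.stats import chi2_contingency

hired_min, apps_min = 30, 100  # minority applicants: 30% selected
hired_maj, apps_maj = 50, 100  # nonminority applicants: 50% selected

p_min, p_maj = hired_min / apps_min, hired_maj / apps_maj

impact_ratio = p_min / p_maj   # 0.60 -> below the 4/5ths (0.80) threshold
relative_risk = p_maj / p_min  # reciprocal of the impact ratio: ~1.67
odds_ratio = (p_min / (1 - p_min)) / (p_maj / (1 - p_maj))  # ~0.43

# Chi-square test comparing observed counts to expected values
table = [[hired_min, apps_min - hired_min],
         [hired_maj, apps_maj - hired_maj]]
chi2, p_value, dof, expected = chi2_contingency(table)

print(impact_ratio, relative_risk, odds_ratio, chi2, p_value)
```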

Both Dr. Dunleavy and Dr. Rick Jacobs (EB Jacobs and Penn State) spoke about what the scientist and practitioner communities can do going forward to advance best practices in the measurement of adverse impact. Their advice was to share what we know with relevant government entities, to share literature and comments with EEO regulatory agencies, and to submit amicus briefs to the Supreme Court. Additionally, the panel suggested that the best approach is to use a preponderance of evidence: if you are using statistical significance, you should also include a measure of practical significance. They seemed to agree that the 4/5ths rule should probably be replaced as the standard “rule of thumb” for demonstrating practical significance.

Experts Offer Practical Guidance for Multiple Regression Analysis

A panel of regression experts (including Drs. Mike Aamodt and Kayo Sady from DCI) discussed practical issues in using regression to analyze organizational data. Much of the discussion focused on pay equity analyses. Several issues were covered in this session, such as appropriate sample sizes (e.g., the number of observations per variable needed for interpretable results), handling outliers, dummy coding data, convincing courts of appropriate methods, and model-building strategies and interpreting results. Panel members reached broad consensus in a number of areas. For example, the experts agreed that context matters when determining appropriate sample sizes: factors such as expected effect size, the number of explanatory variables, and desired statistical power should all be considered in a particular situation. Unit weighting of variables and cohort analyses were offered as alternative analytic strategies when sample sizes are smaller than ideal. The panelists also noted the importance of practical significance considerations when dealing with very large sample sizes.

The take-home message was that there is no one-size-fits-all approach to conducting multiple regression analysis in applied settings; however, the panelists agreed that rational model building, paired with scientifically sound regression techniques, is critical for appropriate model specification and results interpretation. Pay equity analyses are not exempt from regression best practices. It is clear that thoroughly understanding an organization’s pay practices, structures, and systems is critical to modeling such processes and drawing appropriate conclusions about the reasons for existing pay differences.
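
To make the dummy coding and model-building points concrete, here is a minimal pay equity sketch on simulated data (our illustration under assumed variable names, not DCI’s actual modeling approach): regress log salary on legitimate pay factors plus a dummy-coded group indicator, then inspect the group coefficient.

```python
# Minimal pay equity regression sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "salary": rng.normal(70000, 10000, n).clip(30000),  # hypothetical pay
    "tenure": rng.integers(0, 20, n),                   # years of service
    "grade": rng.choice(["A", "B", "C"], n),            # pay grade
    "female": rng.integers(0, 2, n),                    # dummy-coded group
})

# Log transform compresses right-skewed pay; C() dummy-codes the grade factor
model = smf.ols("np.log(salary) ~ tenure + C(grade) + female", data=df).fit()
print(model.summary())
# The 'female' coefficient approximates the proportional pay difference
# remaining after controlling for the modeled factors.
```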

MQ/PQ Best Practices: Valid Selection at the First Hurdle

Dr. Lisa Lewen (AON Hewitt) presented on the importance of validating the minimum qualification stage of a selection process during a master tutorial session. Employers rely on minimum qualifications to screen out candidates who do not meet the basic requirements to minimally perform a job. Minimum qualifications ultimately allow an employer to identify applicants who should be considered for the next stage of the selection process, until the applicant pool is narrow enough to make a final hiring decision. Legal defensibility of this first hurdle was a large focus of the presentation: case law was summarized and agency guidance was shared with the audience. The tutorial usefully guided participants through reviewing and critiquing example job postings with minimum and preferred qualifications, and information was shared on best practices to follow when creating minimum qualifications. A take-away message from this session was to base minimum qualifications on a job analysis and to maintain documentation of the validation process in order to support the legal defensibility of this first selection step.

Practical and Legal Considerations for Alternative Validation Strategies

Presenters in this symposium discussed non-traditional validation procedures in organizations. Oftentimes, traditional test validation procedures are challenging to conduct due to small applicant or incumbent sample sizes, unique jobs, insufficient job performance data, etc. Alternative validation strategies from the viewpoints of various organizations were presented, with subsequent insight from the forum’s discussant.

Experiences with synthetic validation and test transportability strategies dominated the symposium. One presenter provided examples of synthetic validation, a technique in which job component validity is inferred from job analysis research, for the complex job of astronaut. Using this technique, a literature review for similarly complex positions is conducted, job components and performance predictors are identified, and component validities are synthesized into one overall validity coefficient. A separate presenter (speaking on behalf of an absent one) noted, however, that the legality of synthetic validation has not been tested by the Supreme Court. Additionally, according to the presenter, the method may involve trade-offs in the form of hampered diversity efforts. Test transportability, a strategy in which criterion-oriented evidence is transported from one job to another similar job, was discussed as another potential alternative; this technique is also endorsed by the Uniform Guidelines.
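
As a simplified numeric illustration of what “synthesizing” component validities into one coefficient can look like (our sketch; the presenter’s actual method may differ), the validity of an equally weighted composite of job component predictors can be computed from the component validities and their average intercorrelation:

```python
# Simplified sketch (not the presenter's method): validity of a
# unit-weighted composite of k job-component predictors.
import math

def composite_validity(component_validities, mean_intercorrelation):
    """Correlation of an equally weighted predictor composite with the criterion."""
    k = len(component_validities)
    numerator = sum(component_validities)  # covariance with the criterion
    denominator = math.sqrt(k + k * (k - 1) * mean_intercorrelation)  # composite SD
    return numerator / denominator

# e.g., three components identified from a job analysis literature review
print(composite_validity([0.30, 0.25, 0.20], mean_intercorrelation=0.35))  # ~0.33
```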

The discussant, Dr. S. Morton McPhail (The Corporate Executive Board), offered his perspective on alternative strategies and provided his expert views to the presenters and the audience. Dr. McPhail generally supported the use of alternative validation strategies, such as synthetic validation or transportability, in certain situations. One recommendation when relying on the synthetic validation strategy, though, was to consider dissimilar jobs that share common job components, as they can be just as useful as similar jobs with common components. He also noted that maintaining a continuously updated job analysis database is prudent. Regardless of whether synthetic validation or transportability research is conducted, Dr. McPhail suggested that it should be done rigorously, using solid science and well-documented procedures. This approach will yield the most benefit in terms of both utility and legal defensibility.

Inconsistencies in Understanding and Application of I/O Science 

Several well-known Industrial/Organizational psychologists spoke on the issue of misuse or misinterpretation of the science in the applied world, more specifically in employment legal matters. Chester Hanvey, Kayo Sady, James Outtz, and Arthur Gutman each spoke about an aspect of I/O science, with Wayne Cascio serving as the discussant.

Interestingly, Dr. Outtz argued that I/O psychologists are not doing their part to correct the misapplication of science in the field. He stated that I/O psychologists often fail to engage in the self-evaluation needed to “sharpen key concepts/theories”. He noted that although cognitive ability tests are strong predictors of performance, it is a gross overstatement to assume that they are always the single best predictor of all outcomes, or that lessening their use would lower the standards of a selection model. Another notable point of the session, made by Dr. Sady, was the common misinterpretation of the “standard deviation” test. Although courts typically describe a group difference of more than two standard deviations as a “gross” disparity, they should often be referring to the standard error. Conflating the two terms confuses a measure of sampling variability (standard error) with one used to determine effect size (standard deviation). The confusion is problematic given recent calls to focus on both effect size and statistical significance when evaluating the importance of scientific findings. Dr. Hanvey emphasized the confusion over the use of statistics in certifying classes in the legal context, while Dr. Gutman focused on misinterpretations of subjective selection procedures and the steps that can be taken to ensure that they are legally defensible. Dr. Cascio shared experiences that highlighted all of these inconsistencies in practice.

Legal Update: Recent Cases, Trends and Implications for I/O Practice

This roundtable session presented by Keith Pyburn, Jr. and John Weiner offered three primary takeaways for DCI clients concerning (a) permissible post-administration adjustments to test scoring protocols, (b) the requirement to search for alternative selection measures, and (c) the generalizability of job analysis results from one location to another.

The Supreme Court’s decision in Ricci v. DeStefano provides the foundation for issues concerning post-administration scoring adjustments. In that case, the court found that, consistent with the Civil Rights Act of 1991, it is an illegal practice to discard test results based on the distribution of pass rates across protected statuses. The Ricci case has already been cited in 52 Federal Courts of Appeals decisions, and although its implications are still evolving, case law points toward the illegality of making post-administration scoring protocol adjustments with the goal of achieving smaller differences between groups within a protected status variable.

The session presenters discussed whether the Ricci decision, and subsequent related case law, muddies the waters around what constitutes a reasonable alternative as specified by UGESP: it may be illegal to substitute a test with small subgroup differences for one with large subgroup differences if the validity coefficient associated with the former test is even slightly smaller than that of the latter.

The third takeaway is based on a recent ruling in M.O.C.H.A. Soc’y v. City of Buffalo. At issue in the case was whether a firefighter promotional exam based on job analysis data collected statewide, in which only a small number of Buffalo firefighters participated, could be used to promote City of Buffalo firefighters. Essentially, the issue is whether the results of a sound job analysis can be transported or generalized to locations in which incumbents provided no job analysis data or in which incumbent data were underrepresented in the larger data set. The court found that, indeed, employer-specific data are not a necessity for the employer to meet its burden at the second stage of an adverse impact case. This ruling has important implications for the defensibility of alternative validation strategies based on validity generalization, and it will be interesting to see how it plays out in later case law.

by Eric Dunleavy, Ph.D., David Morgan, Kayo Sady, Ph.D., Amanda Shapiro, Dave Sharrer, and Keli Wilson, DCI Consulting Group