Some Follow up Thoughts on Recent EEO Posts

by Eric M. Dunleavy Ph.D., Principal Consultant, DCI Consulting

If you follow this blog regularly, you know that we have been busy over the last few months keeping the EEO community updated on various court rulings, settlements, regulatory updates, and other legal and compliance issues. I wanted to provide some additional insights and observations on a few specific topics.

1. On the OFCCP front, there are a number of potential regulatory changes (e.g., VEVRAA changes that would affect EEO/AA requirements for protected veterans, Section 503 changes that would affect EEO/AA requirements for persons with disabilities, rescission of the compensation standards, etc.). Some of these proposed changes are controversial for a variety of reasons related to cost, the accuracy of burden estimates, the likelihood of success, and the data-driven rationale for regulatory change. At the same time, some of the overarching goals of these proposals are important and noble in spirit. Regardless, we don't expect anything to happen before the election, and the fate of many of these initiatives will likely depend on its outcome. Until these regulatory changes are approved, federal contractors may want to become familiar with the proposals and start to think about what implementation would look like, but they should not actually change any processes or systems until the election plays out and OFCCP's regulatory road map becomes clearer.

2. Art wrote about the 10th Circuit ruling in Apsley v. Boeing, which was decided on August 27, 2012 [2012 U.S. App. LEXIS 18161]. http://dciconsult.com/10th-circuit-rules-practical-significance-needed-for-large-sample-comparisons-in-adea-case/. I cannot stress enough how important this case could be, simply because the 10th Circuit articulated a notion that the social scientific community knows to be true: statistical significance tests can be trivial when sample sizes are very large. In this case, disparities in reduction in force (RIF) selection rates were unlikely to be due to chance at a probability of four or more standard errors (or "standard deviations," as the legal community has come to refer to them). However, the actual differences in rates were extremely small, and the differences between the expected number of selections and the actual number of selections (often called the "shortfall") were very small relative to the total number of selections and expected values. The 10th Circuit correctly noted that statistical significance test results and shortfalls need to be interpreted relative to the size of the pool and the total number of selections being made. Trivial differences will still be "likely not due to chance" when applicant pools are very large, and the 10th Circuit ruled that in this case it was NOT reasonable to infer discrimination from statistical significance tests alone. When practical significance measures were considered, the Court ruled that there were no meaningful differences.
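
To make the arithmetic concrete, here is a minimal sketch in Python using hypothetical numbers (not the actual Apsley figures): a selection-rate gap of less than one percentage point that nonetheless sits about four standard errors from chance because the pool is so large, with a shortfall that is trivial next to the expected number of selections.

from math import sqrt

# Hypothetical RIF pool (not the actual Apsley v. Boeing data):
# 100,000 employees considered, 90% retained overall, split evenly by age.
n_older, n_younger = 50_000, 50_000
kept_older, kept_younger = 44_810, 45_190    # retained ("selected") counts

p_older = kept_older / n_older               # 0.8962
p_younger = kept_younger / n_younger         # 0.9038
p_pooled = (kept_older + kept_younger) / (n_older + n_younger)   # 0.90

# Two-sample Z test for the difference in selection rates -- the
# "standard deviations" (standard errors) the courts refer to.
se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_older + 1 / n_younger))
z = (p_younger - p_older) / se               # about 4.0

# The "shortfall": expected selections for older workers at the pooled
# rate, minus the selections they actually received.
expected_older = p_pooled * n_older          # 45,000
shortfall = expected_older - kept_older      # 190

print(f"rate difference: {p_younger - p_older:.4f}")     # 0.0076
print(f"z: {z:.2f} standard errors")                     # ~4.0
print(f"shortfall: {shortfall:.0f} against {expected_older:,.0f} expected")

Under these assumed numbers the disparity easily clears the conventional two-standard-deviation threshold, yet a shortfall of 190 selections against an expectation of 45,000 is exactly the kind of difference the 10th Circuit treated as practically meaningless.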

As this blog has discussed many times, statistical significance tests have been given deference as the preferred measure of adverse impact over the last decade, in spite of some well-known issues associated with these tests. This deference could exist for a variety of reasons, including the notion that these tests produce an implied "yes/no" answer regarding our confidence that a difference in rates is due to chance. Perhaps the issues can be most clearly organized into two categories: (1) limitations of statistical significance testing as a function of sample size, and (2) lack of attention to the magnitude of the difference in selection rates as a measure of practical significance. Criticisms of statistical significance testing as stand-alone evidence are not new (e.g., Cohen, 1992), and neither are endorsements of the importance of practical significance (e.g., Kirk, 1996). However, recent trends in EEO enforcement suggested that these criticisms had not yet changed the direction of practice in evaluating adverse impact.
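
A second minimal sketch, again with hypothetical numbers, illustrates both categories at once: hold a one-percentage-point gap in selection rates fixed and vary only the pool size. The Z statistic grows with the square root of the sample size and eventually crosses any threshold, while the practical significance measures (the absolute difference in rates, and the impact ratio commonly checked against the Uniform Guidelines' four-fifths benchmark) never move.

from math import sqrt

def z_stat(n_a, n_b, rate_a, rate_b):
    """Two-sample Z for a difference in selection rates."""
    p = (rate_a * n_a + rate_b * n_b) / (n_a + n_b)      # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (rate_a - rate_b) / se

rate_a, rate_b = 0.90, 0.89          # a fixed one-point gap in rates
for n in (200, 2_000, 20_000, 200_000):                  # size of each group
    print(f"n per group: {n:>7,}  z = {z_stat(n, n, rate_a, rate_b):5.2f}  "
          f"difference = {rate_a - rate_b:.2f}  "
          f"impact ratio = {rate_b / rate_a:.3f}")

# z climbs from roughly 0.33 to roughly 10.3 as n grows, flipping the
# verdict from "chance" to "not chance," while the difference (0.01) and
# the impact ratio (0.989, far above the 0.80 four-fifths benchmark)
# never change.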

This may have changed with the Boeing ruling, in which the 10th Circuit endorsed an intuitive perspective shared by the social scientific community. For example, in the psychological literature I am most familiar with, the American Psychological Association (APA) and top-tier scholarly journals require evidence of practical significance in addition to statistical significance for research to be published. In other words, statistical significance alone is not enough for a finding to warrant attention. The reason is simple: if samples are very large, any non-zero finding will achieve statistical significance. Statistical significance does not mean that a research finding, or a difference in rates between two groups, is large or important. I found the 10th Circuit ruling to be elegant, focused, and insightful. It is worth a detailed read.

3. Although I was not able to attend the NILG conference in Hawaii in August, it sounds like one hot topic was the National Academy of Sciences (NAS) report for the EEOC that evaluated the possibility and impact of a pay equity survey. This report, entitled "Collecting Compensation Data from Employers," was released in mid-August and developed in part by the Panel on Measuring and Collecting Pay Information from U.S. Employers by Gender, Race, and National Origin, a group of 12 experts in the area. The report is certainly interesting, and based on its conclusions, it sounds like federal agencies will have a difficult time getting a successful pay survey off the ground.

Perhaps this passage from the conclusions and recommendations section puts it best: "Based on the literature we reviewed and the papers and presentations made to us by the staff of EEOC, the Office of Federal Contract Compliance Programs (OFCCP) in the U.S. Department of Labor, and the U.S. Department of Justice, the panel finds that there is no clearly articulated vision of how data on wages would be used in the conduct of the enforcement responsibilities of these agencies." Without a clear vision or purpose, a pay survey could require a substantial amount of work while providing minimal help in understanding, identifying, or eliminating discrimination as the cause of existing wage gaps.

Chapter 4, which describes survey design and methodology, was particularly interesting to me, especially the section on the utility of the data for statistical analyses. While most of the chapter was straightforward, some of the factors considered and methodological examples do not seem entirely consistent with Title VII pay equity research methods, and other sections are difficult to interpret (e.g., models with as many variables as observations, statistical power graphs, coding of protected group status in analyses, etc.). However, these analyses are clearly intended to be exemplary in nature, and as such, the purpose and goals of the survey would obviously influence these types of considerations.

It is important to note that, while the NAS report provides some very useful information about whether a pay equity survey would be feasible and successful for federal agencies to develop, implement, and analyze as part of combating pay discrimination, it has no effect at all on current OFCCP or EEOC pay equity investigation or enforcement. The NAS report is not a statute, regulation, or compliance guidance, and it was not intended to address the state of current pay equity enforcement.

The report does mention some potential data security issues, and while these points are noteworthy, they in no way affect anything the agencies are doing right now on the pay equity front. For example, OFCCP's Compensation Standards remain agency protocol until they are rescinded (and it will be interesting to see if that ever happens), and the NAS report does nothing to change that. Even if the report casts doubt on whether agencies are prepared for a broad and sweeping pay equity survey tool, it is business as usual on the pay equity enforcement front. Federal contractors should prepare accordingly and continue to assess pay disparities using well-established guidance from Title VII case law and from the OFCCP Standards, which are based on that case law.


