On 9/21/16, the OFCCP filed suit with the Office of Administrative Law Judges against Palantir, a Palo Alto technology company. The claim is that Palantir systematically discriminated against Asian job applicants, in comparison to non-Asian applicants, in hiring for three positions: (1) QA Engineer, (2) Software Engineer, and (3) QA Engineer Intern. The statistical findings are depicted in the tables below. All comparisons were statistically significant (z-test results = 3.96, 5.80, and 6.09, respectively, for the three positions). The complaint alleges the following:
(1) For the QA Engineer position, from a pool of more than 730 qualified applicants – approximately 77% of whom were Asian – Palantir hired six non-Asian applicants and only one Asian applicant. The adverse impact calculated by OFCCP exceeds three standard deviations. The likelihood that this result occurred by chance is approximately one in 741.
DCI Replication of Results
| QA Engineer | Select | Reject | Total | Selection Rate | IR | z-test |
| --- | --- | --- | --- | --- | --- | --- |
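As a quick check on the "one in 741" figure: the upper-tail probability of a standard normal distribution at three standard deviations works out to roughly 1/741. A minimal sketch (my illustration, not a calculation from the complaint):

```python
import math

# One-tailed upper-tail probability of a standard normal at z standard
# deviations, computed via the complementary error function.
def upper_tail(z):
    return math.erfc(z / math.sqrt(2)) / 2

p = upper_tail(3.0)   # probability beyond three standard deviations
print(round(1 / p))   # 741, matching the complaint's "one in 741"
```

The same conversion reproduces the odds quoted for the other two positions at five and six standard deviations.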
(2) For the Software Engineer position, from a pool of more than 1,160 qualified applicants – approximately 85% of whom were Asian – Palantir hired 14 non-Asian applicants and only 11 Asian applicants. The adverse impact calculated by OFCCP exceeds five standard deviations. The likelihood that this result occurred by chance is approximately one in 3.4 million.
DCI Replication of Results
| Software Engineer | Select | Reject | Total | Selection Rate | IR | z-test |
| --- | --- | --- | --- | --- | --- | --- |
(3) For the QA Engineer Intern position, from a pool of more than 130 qualified applicants – approximately 73% of whom were Asian – Palantir hired 17 non-Asian applicants and only four Asian applicants. The adverse impact calculated by OFCCP exceeds six standard deviations. The likelihood that this result occurred by chance is approximately one in a billion.
DCI Replication of Results
| QA Engineer Intern | Select | Reject | Total | Selection Rate | IR | z-test |
| --- | --- | --- | --- | --- | --- | --- |
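For the intern position, the reported z of 6.09 can be approximately replicated with a pooled two-sample z-test on the counts implied by the complaint's numbers – roughly 95 Asian and 35 non-Asian applicants out of 130, with 4 and 17 hires respectively. Note these exact counts are my reconstruction from the stated totals and percentages, not figures taken from the DCI tables:

```python
import math

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """Pooled two-sample z-test for a difference in selection rates."""
    rate_a, rate_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (rate_b - rate_a) / se

# Approximate counts implied by the complaint (~73% of 130 applicants Asian;
# 4 Asian hires vs. 17 non-Asian hires). Reconstruction, not the DCI table.
z = two_proportion_z(sel_a=4, n_a=95, sel_b=17, n_b=35)
print(round(z, 2))  # ~6.1, consistent with the reported 6.09
```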
That said, this one falls into the category of "been there, done that": comparison of one group to the aggregation of several other groups (e.g., Blacks to non-Blacks, Whites to non-Whites, and so on). We wrote about this topic in two Alerts relating to OFCCP v. VF Jeanswear Limited. I described the case on 8/7/13, and Jana Garman and David Cohen discussed its implications on 9/4/13.
The Jeanswear ruling was handed down by ALJ Kevin A. Krantz on 8/5/13. Interestingly, a consultant hired by the OFCCP testified that he saw no evidence of disparate treatment of applicants, which requires proof of motive, but testified there was evidence of disparate impact, which requires no proof of motive. The consultant, a labor economist, then took the adverse impact route: comparing 182 Asian applicants to 367 non-Asian applicants, he testified that 87 Asians, as compared to 86 non-Asians, were selected for interview, representing a shortfall of 29.6 non-Asian selections. His conclusions were:
- Applicants referred by employees were given priority in selection for interviews;
- Asian employees were much more likely to use the referral system;
- Asian employees made the most referrals; and
- Asian employees were highly likely to refer Asian applicants when making referrals.
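The 29.6 shortfall figure cited in the testimony can be reproduced from the counts above: at the overall selection-for-interview rate, roughly 29.6 more non-Asians would have been expected. A minimal sketch of that arithmetic:

```python
# Counts from the Jeanswear testimony.
asian_total, asian_selected = 182, 87
non_asian_total, non_asian_selected = 367, 86

# Overall interview-selection rate across all 549 applicants.
overall_rate = (asian_selected + non_asian_selected) / (asian_total + non_asian_total)

# Non-Asian selections expected at that rate, versus the 86 observed.
expected_non_asian = non_asian_total * overall_rate
shortfall = expected_non_asian - non_asian_selected
print(round(shortfall, 1))  # 29.6, matching the testimony
```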
Of course, in this case, it was Asians who were deemed advantaged and non-Asians deemed disadvantaged (the opposite of Palantir).
Also of interest, the OFCCP did not request the consultant to disaggregate the non-Asian groups, leaving, in ALJ Krantz’s words, a combination of:
"one group that was over-represented (Hispanics), one group that was under-represented (Whites) and one group that was closely proportional to the regional percentage (Blacks)”.
Citing both the implementing regulations for Executive Order 11246 and the Uniform Guidelines, Krantz noted that employers are prohibited from using:
"employee selection procedures with a disparate impact on a 'race' or 'ethnic group'," but also, that the non-Asian category "is neither a race nor an ethnic group, either by regulatory definition or as used in common parlance."
In the follow-up, Garman and Cohen first noted that in the prior five years or so, some OFCCP regions had moved away from this “aggregation” analysis, and that the OFCCP has explicitly stated:
"Presenting data for “minorities” in the aggregate is useful for the utilization analyses and goal setting components of the contractor’s affirmative action programs. However, to determine whether the contractor has discriminatory employment practices requires analyzing data by sex, and by separate racial or ethnic groups."
That seems fairly clear: it's OK to aggregate when evaluating AAPs, but disaggregated analyses are needed for discrimination claims.
Garman and Cohen then presented the critical argument against aggregation of groups vs. non-groups: that it capitalizes on chance. Indeed, the Uniform Guidelines suggest use of pairwise (2x2) comparisons of the highest-selected group to each of the other groups. To their point, Garman and Cohen noted that if races are defined as Blacks, Asians, Hispanics, American Indians, Alaskan Natives, Native Hawaiians, Pacific Islanders, and Whites (8 groups in all), the Guidelines' approach gives you 7 pairwise comparisons. With a little math, the number of all possible pairwise comparisons is 8!/(6! x 2!) = 28, rendering at least one chance .05 finding highly likely. That's an extreme case, but it shows the danger of using aggregated data in discrimination claims.
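The capitalization-on-chance point can be made concrete with the family-wise error rate: across many pairwise tests at alpha = .05, a spurious "significant" result becomes highly likely. A sketch, assuming (for illustration only) that the tests are independent:

```python
from math import comb

groups = 8
guidelines_comparisons = groups - 1   # highest-selected group vs. each other group
all_pairs = comb(groups, 2)           # every group vs. every other group

alpha = 0.05
# Probability of at least one false positive across all pairwise tests,
# under the simplifying assumption that the tests are independent.
family_wise_error = 1 - (1 - alpha) ** all_pairs
print(guidelines_comparisons, all_pairs, round(family_wise_error, 2))  # 7 28 0.76
```

Even under the Guidelines' restrained 7-comparison approach, the chance of at least one false positive is about 1 - 0.95^7, or roughly 30%.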
Putting all this together, we (Garman, Cohen, and I) see nothing new in the Palantir case and wonder: why now? Why again?
That said, one should not ignore what the OFCCP failed to do: look more carefully at potential evidence relating to word-of-mouth disparate treatment scenarios. Enough said for now; I will address that issue in a separate Alert.
By Art Gutman, Ph.D., Professor, Florida Institute of Technology