By: Lisa Harpe and Don Lustenberger
In the first installment of our summer blog series contrasting “expert” versus “robot” approaches to conducting a pay equity study, we described how the more traditional, consultant-led approaches (i.e., expert) differ from newer, automated approaches (i.e., robot). In this second installment, we examine how these approaches to pay equity differ in the initial phases of a typical study: project kickoff and data cleaning.
The project kickoff represents a crucial step in a pay equity study. This phase usually involves a discussion with the compensation team (and often legal counsel). Decisions made during this step can affect the quality and accuracy of the analysis and its findings and conclusions. Mistakes or oversights can lead organizations to make tens or hundreds of thousands of dollars in ill-advised pay adjustments. So, it is important that organizations conducting pay equity studies get things right from the outset.
Several topics should be covered during the project kickoff, including the following:

- The organization’s past compensation practices and recent hiring trends
- How the organization uses market data to create salary bands
- The type(s) of pay to analyze
- The appropriate employee groupings
- The factors that influence pay
These topics provide critical context for understanding an organization’s compensation and making decisions related to employee groupings and factors to include in a pay equity study.
An expert brings experience with pay equity projects and can guide discussions with organizations about these topics. The expert can probe as necessary and ask nuanced questions to elicit answers that can make a big difference in project specifications. For example, answers to questions about an organization’s past compensation practices, its recent hiring trends, or how it uses market data to create salary bands can make an expert consultant rethink how to design certain aspects of the pay equity study.
When it comes to the project kickoff phase, the robot approach presents multiple issues. First, if an organization lacks external or internal expertise, it may not know to consider important questions and details, and it may rely on default settings or overly simplistic instructions to conduct analyses later in the project. Second, and relatedly, certain software programs (or robots) have limited capabilities (e.g., they can only run regressions). Important aspects of an organization’s structure may be ignored or downplayed because the software runs analyses in only one way or permits only one way of grouping employees. Where an employer has already spent sufficient time identifying the type(s) of pay to analyze, the appropriate employee groupings, and the factors that influence pay, the robot approach may not be problematic. However, skipping this step could lead to meaningless results and potentially inappropriate actions based on those results. An employer should not make pay adjustments generated from an analysis that uses inappropriate employee groupings or the wrong or insufficient pay factors.
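For context on that limitation: the regressions such tools run are typically ordinary least squares models of pay as a function of legitimate pay factors plus a demographic indicator. Below is a minimal sketch of that kind of model in Python using pandas and statsmodels; the file name and column names (base_pay, job_group, tenure_years, prior_experience_years, gender) are hypothetical, not taken from any particular product.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("employee_pay.csv")

# A typical pay equity regression: model log base pay as a function
# of legitimate pay factors (job group, tenure, prior experience)
# plus a demographic indicator. A statistically significant
# demographic coefficient flags a pay difference that the included
# factors do not explain.
model = smf.ols(
    "np.log(base_pay) ~ C(job_group) + tenure_years"
    " + prior_experience_years + C(gender)",
    data=df,
).fit()

print(model.summary())
```

Note that nothing in the model itself can say whether job_group reflects appropriate employee groupings or whether tenure and prior experience are the right pay factors; those are precisely the kickoff decisions described above.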
During the project kickoff phase, organizations will provide the data (and relevant information pertaining to the data) needed to conduct the pay equity study. Once submitted, some organizations’ data will invariably be messier than others’. Most often, there will be data errors or other issues that need to be resolved before the analysis can proceed. This is typically done during the data cleaning phase of a pay equity project.
Common data issues that arise and need to be addressed during the data cleaning phase include the following:

- Missing data in key fields
- Data entry errors
- Inconsistencies across related data fields
- Unavailable data on important factors that influence pay
If these data issues are not identified and addressed, the analysis results can be misleading or simply wrong. Again, if employers want to use the analysis to make pay adjustments, the integrity of the data and the reliability of the results are crucial. An employer should not make adjustments generated from an analysis of data with errors or gaps.
An expert can conduct organization-specific checks, work with the compensation team to ensure data integrity, and provide feedback on potential proxies (e.g., age as a proxy for prior experience). In addition, an expert can advise the client on the implications of the analyses when data on important factors is unavailable. Experienced experts can also look across data fields and identify anomalies or other inconsistencies that could otherwise go unnoticed but could materially affect the study results.
A robot, on the other hand, may have some standard checks, particularly related to missing data, but may not be able to address organization-specific data issues or evaluate harder-to-identify data inconsistencies (e.g., pay range midpoints that are not adjusted for geographic differentials). Additionally, a robot likely cannot provide insight into how to handle or remedy missing data.
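To make the contrast concrete, here is a hedged sketch, again with hypothetical file and column names, of a standard missing-data check a robot can readily automate, next to the kind of organization-specific consistency check (pay range midpoints across locations) that typically requires human judgment:

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("employee_pay.csv")

# Standard, automatable check: count missing values in key fields.
key_fields = ["base_pay", "job_group", "location", "range_midpoint"]
print(df[key_fields].isna().sum())

# Organization-specific check: compare pay range midpoints for the
# same job group across locations. Identical midpoints where
# geographic differentials are expected, or large unexplained
# variation, are both worth raising with the compensation team.
midpoints = df.groupby(["job_group", "location"])["range_midpoint"].first()
spread = midpoints.groupby(level="job_group").agg(["min", "max"])
print(spread)
```

The first check is purely mechanical; the second produces numbers that only someone who knows whether, and how, the organization applies geographic differentials can interpret.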
So, relying on a robot to assist with data cleaning (or worse, assuming one’s data is already clean and error-free) can be a risky venture. By contrast, experienced experts know what to look for, an important qualification given that almost all organizations have data errors that would otherwise go unnoticed.
In our next installment in the series, we will cover planning and conducting the analysis.