How to promote employees using machine learning without societal bias
Though it may at first read as callous, a managerialist stance on inequality in statistical classification can help untangle some of the rhetoric around this tricky issue.
Consider the example that’s been in the news lately:
Suppose a company begins to use an algorithm to make decisions about which employees to promote. It uses a classifier trained on past data about who has been promoted. Because of societal bias, women are systematically under-promoted; this is reflected in the data set. The algorithm, naively trained on the historical data, reproduces the historical bias.
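To see the mechanism concretely, here is a minimal sketch with entirely fabricated data: a classifier fit to historical promotion decisions that penalized women learns that penalty along with everything else, and reproduces it for new candidates.

    # Synthetic illustration: a classifier trained on biased historical
    # promotion decisions reproduces the bias. All data is fabricated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    gender = rng.integers(0, 2, n)       # 0 = man, 1 = woman
    performance = rng.normal(0, 1, n)    # merit, identically distributed

    # Historical decisions: merit matters, but women incur a penalty.
    logit = 1.5 * performance - 1.0 * gender
    promoted = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([gender, performance])
    model = LogisticRegression().fit(X, promoted)

    # At identical performance, the model favors men: it has faithfully
    # learned the historical bias from the training data.
    print(model.predict_proba([[0, 0.0]])[0, 1])  # man, average performance
    print(model.predict_proba([[1, 0.0]])[0, 1])  # woman, average performance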
This example describes a bad situation. It is bad from a social justice perspective; by assumption, it would be better if men and women had equal opportunity in this workplace.
It is also bad from a managerialist perspective. Why? Because the whole point of using an algorithm is to improve on human decision-making, for instance by correcting for societal biases that introduce irrelevant factors into the promotion decision. An algorithm that merely reproduces the historical bias delivers no such improvement, so it makes no managerial sense to change business practices over to it. This is a poor match of an algorithm to a problem.
Unfortunately, what makes this example compelling is precisely what makes it a bad example of using an algorithm in this context. The only variables discussed in the example are the socially salient ones thick with political implications: gender and promotion. What are more universal concerns than gender relations and socioeconomic status?!
But from a managerialist perspective, promotions should be issued based on a number of factors not mentioned in the example. What factors are these? That’s a great and difficult question. Promotions can reward hard work and loyalty. They can also be issued to those who demonstrate capacity for leadership, which can be a function of how well they get along with other members of the organization. A number of features may predict these desirable qualities, and most will have to do with working conditions within the company rather than with qualities inherent in the employee (such as their past education or their gender).
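To make the distinction concrete, here is one hypothetical feature schema. Every field name is invented for illustration; the point is only the split between work-related signals, contextual working conditions, and deliberately excluded attributes.

    # One hypothetical feature schema for a promotion model. Every field
    # name is invented for illustration; the point is the split between
    # work-related signals, context, and deliberately excluded attributes.
    from dataclasses import dataclass

    @dataclass
    class EmployeeFeatures:
        # Work-related signals plausibly predicting leadership and cooperation
        tenure_years: float
        projects_led: int
        peer_review_score: float        # e.g., structured 360-degree feedback
        cross_team_collaborations: int
        # Working conditions: context for the prediction, not merit signals
        team_size: int
        manager_span_of_control: int
        # Deliberately excluded: gender, age, alma mater, and other attributes
        # that are protected or irrelevant to the promotion decision.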
If one were to use machine learning intelligently to solve this problem, one would go about it in a way entirely unlike the procedure in the problematic example. One would instead draw on soundly sourced domain expertise to develop a model of the relationships among relevant, work-related factors. For many of the key parts of the model, such as the general relationships between personality type, leadership style, and cooperation with colleagues, one would look outside the organization for gold standard data that was sampled responsibly.
Once the organization has this model, then it can apply it to its own employees. For this to work, employees would need to provide significant detail about themselves, and the company would need to provide contextual information about the conditions under which employees work, as these may be confounding factors.
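A minimal sketch of these two steps, assuming a scikit-learn-style workflow, might look like the following; synthetic stand-ins replace what would really be an external gold standard study and an internal employee survey, and every column name is invented.

    # Sketch: fit a general model on external gold standard data, then
    # apply it internally with working-condition covariates kept alongside.
    # All data and column names here are synthetic placeholders.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)
    predictors = ["conscientiousness", "peer_cooperation_score", "projects_led"]

    # 1. Fit the general model on responsibly sampled external data.
    external = pd.DataFrame(rng.normal(size=(500, 3)), columns=predictors)
    external["leadership_effectiveness"] = (
        0.6 * external["peer_cooperation_score"]
        + 0.4 * external["conscientiousness"]
        + rng.normal(scale=0.5, size=500)
    )
    general_model = GradientBoostingRegressor().fit(
        external[predictors], external["leadership_effectiveness"]
    )

    # 2. Apply it to the company's own employees; working conditions are
    #    carried alongside as possible confounders for later evaluation.
    internal = pd.DataFrame(rng.normal(size=(50, 3)), columns=predictors)
    internal["team_size"] = rng.integers(3, 15, 50)  # contextual confounder
    internal["predicted_leadership"] = general_model.predict(internal[predictors])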
Part of the merit of building and fitting such a model would be that, because it is based on new and objective scientific considerations, it would produce novel results in recommending promotions. Again, if the algorithm merely reproduced past results, it would not be worth the investment in building the model.
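One simple audit of this novelty is to measure how often the model’s recommendations agree with the historical decisions; the data frame and column names below are hypothetical.

    # Audit of novelty: agreement between model recommendations and past
    # decisions. The tiny data frame and its columns are hypothetical.
    import pandas as pd

    audit = pd.DataFrame({
        "historically_promoted": [True, False, False, True, False],
        "model_recommends":      [True, True, False, True, True],
    })
    agreement = (audit["model_recommends"] == audit["historically_promoted"]).mean()
    print(f"Agreement with historical decisions: {agreement:.0%}")
    # Agreement near 100% would mean the model merely reproduces the past
    # and is not worth the investment; substantial, explainable divergence
    # is the point of building it.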
When the algorithm is introduced, it should ideally be used in a way that maintains the traditional promotion process in parallel, so that the two kinds of results can be compared. Evaluating the algorithm’s performance relative to traditional methods is a long, arduous process full of potential insights. Using the algorithm as an intervention at first allows the company to develop a causal understanding of its impact. Insights from the evaluation can then be factored back into the algorithm, improving it.
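One hedged sketch of such an evaluation: randomly assign which promotion cases the algorithm informs, then compare downstream outcomes between the arms; the randomization is what licenses a causal reading. The outcome column below is a made-up example.

    # Sketch of a parallel rollout: randomize which cases the algorithm
    # informs so its impact can be estimated causally. Names are illustrative.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    cases = pd.DataFrame({"employee_id": range(200)})
    cases["algorithm_arm"] = rng.random(len(cases)) < 0.5  # random assignment

    # ... decisions are made, time passes, outcomes accrue ...
    cases["retained_12mo"] = rng.random(len(cases)) < 0.8  # placeholder outcome

    # Compare arms; with randomization, the difference estimates the
    # algorithm's causal effect on the outcome.
    print(cases.groupby("algorithm_arm")["retained_12mo"].mean())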
In all these cases, the company must keep its business goals firmly in mind. If it does, then the rest of the logic of its method falls out of data science best practices, which are grounded in the mathematical principles of statistics. While the political implications of poorly managed machine learning are troubling, effective management of machine learning, taking the precautions necessary to develop objectivity, is ultimately a corrective to social bias. This is a case where sound science, managerialist motives, and social justice are aligned.