The first time I took on the role of a lead engineer, a few years ago, I had a really hard time learning how to prioritize and delegate work. For much of my early career, I had simply never needed any planning skills beyond “say yes to everything and work yourself into the ground”. One of the best pieces of professional advice I’ve ever received came during this time, from a mentor who told me to delegate the things I was already good at. If I’m good at something, it means I’m actually equipped to evaluate whether my team is doing a good job. It also means I don’t need the practice as much, so delegating frees me up to improve other skills.

There’s an oft-repeated myth about artificial intelligence: since we all know that humans are prone to being racist and sexist, we should figure out how to create moral machines that will treat human beings more equitably than we could. You’ve seen this myth in action if you’ve ever heard someone claim that using automated systems to make sentencing decisions will lead to more fairness in the criminal legal system. But if we all know that humans are racist and sexist and we need the neutrality of machines to save us (in other words, if we should delegate morality to AI), how will we ever know if the machines are doing the job we need them to do? And how will we humans ever get better?