AI Is Coming for Your Favorite Menial Tasks
An argument in favor of using artificial intelligence to automate menial jobs is that humans will then be free to devote more time to the harder, more challenging problems the world is facing. However, cognitive psychology suggests that in some fields, taking menial jobs away from humans does more harm than good to the morale of the workers involved. The argument is that most humans need easy, quick wins to stay motivated to tackle the harder parts. By taking away the menial chores that delivered those easy wins, AI is actually creating work-related stress for workers who now spend most of their time finding solutions to problems the machine couldn't solve.
“Decision making is very cognitively draining,” the author and former clinical psychologist Alice Boyes told me via email, “so it’s nice to have some tasks that provide a sense of accomplishment but just require getting it done and repeating what you know, rather than everything needing very taxing novel decision making.”
That people “need both experiences of mastery and pleasure for healthy mood,” Boyes said, is a core idea of cognitive-behavioral psychology. It’s important, she said, “to vary the difficulty of the mastery experiences, rather than having everything be super challenging.”
At Kickstarter, our robot picked off the projects that were clearly the easiest to analyze. Left for human reviewers were projects with muddier scores—particularly ones for ideas that tested the limits of our guidelines, such as projects for helmets (Kickstarter prohibits medical devices) or overly optimistic gadgets (Kickstarter also banned hyperrealistic 3-D renderings of technology that didn’t exist yet). Staffers who had been used to reviewing slam-dunk proposals were no longer seeing them. Their jobs weren’t quite as enjoyable as they had been.
This problem is cropping up at other companies, too. For example, the tech journalist Kara Swisher reported last year that YouTube content-moderation staff, once used to reviewing clips featuring cute animals, are now frustrated by the difficult ethics decisions that dominate their work. She wrote that “their jobs used to be about wrangling cat videos and now they had degenerated into a daily hell of ethics debates about the fate of humanity.”
The best-designed machine-learning systems are the ones that can pass on unclear or low-confidence decisions and redirect them to humans. But when humans are looking only at the most difficult, muddiest, most intractable cases, not only will morale suffer, but human workers’ chances of making the “right” decision will drop as well. The frustration will only be compounded when their accuracy, as humans, is compared with that of a robot, which will continue to pick off the easiest work.
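The routing pattern described above can be sketched in a few lines. This is a hypothetical illustration, not the actual logic of Kickstarter's or YouTube's systems; the threshold value and function names are invented for the example:

```python
# A minimal sketch of confidence-based routing: the model decides the
# clear-cut cases itself and hands the muddy middle to a human queue.
# The threshold and names are illustrative assumptions, not any real system.

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff for automatic decisions


def route(approval_score: float) -> str:
    """Decide who handles a case, given a model score in [0, 1]."""
    if approval_score >= CONFIDENCE_THRESHOLD:
        return "auto-approve"          # machine is confident it's fine
    if approval_score <= 1 - CONFIDENCE_THRESHOLD:
        return "auto-reject"           # machine is confident it isn't
    return "human-review"              # low confidence: a person decides
```

The consequence the article describes falls out of this design: everything the machine is sure about never reaches a person, so the human queue is, by construction, only the hard cases.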