In a report published in the July-August 2020 edition of the Harvard Business Review, entitled "A Better Way to Onboard AI," NYU Tandon School of Engineering Associate Professor of Technology Management and Innovation Anne-Laure Fayard sets forth an innovative model that treats AI systems like new human employees, dividing their 'onboarding' into four phases corresponding to the roles of assistant, monitor, coach, and finally teammate.
Co-authored by Boris Babic, Daniel L. Chen, and Theodoros Evgeniou, the report draws on a 2018 Workforce Institute survey of 3,000 managers across eight industrialized nations, as well as on fields such as cognitive anthropology and on commonly used employee onboarding processes.
The first phase of the onboarding, in which an AI system is treated as an assistant, involves the more familiar deployments of AI, such as tasking a system with sorting data; the report notes that "AI is very helpful during high-volume decision-making, when humans may be tired."
The second phase, treating AI as a monitor, involves setting up the AI system to provide real-time feedback that compensates for the "limited and imperfect reasoning capabilities" of humans that, the report asserts, psychology, behavioral economics, and cognitive science have revealed. A noteworthy recommendation here is that employees be involved in the design stage of the rules that will govern the AI system, both to help them retain a sense of control and in light of fears that they might be replaced by such a system.
As to the third phase, in which an AI system is considered a coach, the report contends that AI can provide a more effective feedback experience for employees than a traditional system in which hierarchical superiors deliver performance reviews at fixed times. It offers the example of an AI system generating a "monthly summary analyzing data drawn from … past behavior," capable of helping an employee better understand his or her decision patterns and practices. Consistent with the report's overall theme, allowing employees to "relate [to] and control exchanges with artificial intelligence" is said to help them see AI as a 'safe channel' for feedback.
The fourth phase, under which an AI system is viewed as a teammate, relies on the theory of distributed cognition, under which "[e]xternal tools and instruments can … play a role in cognitive processing and create … a coupled system." Here, a 'coupled network' of humans and machines would be utilized; the report notes that, to the authors' knowledge, no such network has been implemented to date.
The belief is that AI would improve through its interactions with employees, helping to produce a community of experts consisting of both humans and machines; the report dubs these machine systems 'AI coachbots.'