Recent years have seen an unprecedented level of technological uptake and engagement by the mainstream. From deepfakes for memes to recommendation systems for commerce, machine learning (ML) has become a regular fixture in society. This ongoing transition from purely academic confines to the general public is not smooth, as the public lacks the extensive data-science expertise required to fully exploit the capabilities of ML. As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the ‘how’ and ‘why’ of human-computer interaction (HCI) within these frameworks. This understanding is necessary for designing systems well and for leveraging advanced data-processing capabilities to support human decision-making. It is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy.
In this monograph, the authors focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex, open-ended environments, will the fundamental nature of HCI evolve? To address these questions, the authors project existing HCI literature into the space of AutoML, reviewing topics such as user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to gauge the future of HCI rigorously, they consider how AutoML may manifest in effectively open-ended environments. Ultimately, this review identifies key research directions aimed at better facilitating the roles and modes of human interaction with both current and future AutoML systems.