Artificial Intelligence's Ability to Autonomously Adapt to Changing Demands
One key aspect of robust artificial intelligence is the ability to autonomously adapt to changing task demands, or even to learn completely new tasks as the need arises. Most data science and machine learning approaches assume little to no change in task demands over time: typically, distinct statistical or machine learning models are assigned to learn entirely separate tasks, so manual intervention is required whenever task demands shift or new tasks arise.
Past work in psychology and neuroscience has developed theories of how humans and animals overcome such limitations; one key component is contextualized learning supported by working memory. Computational simulations and neuroimaging studies of working memory have helped establish its neural basis in the interactions between two brain regions: the prefrontal cortex and the mesolimbic dopamine system.
Recently, however, much of the complexity associated with the biological details of such models has been stripped away, exposing the core computational mechanisms of task-switching behavior. These core mechanisms can be integrated into deep-learning models, a powerful learning framework that has demonstrated tremendous success across a wide variety of tasks in recent years (https://www.pnas.org/content/117/48/30033).
Work by two students in the Department of Computer Science, David Ludwig and Lucas Remedios, under the direction of Data Science Institute affiliate Dr. Joshua L. Phillips, has helped bridge the gap between these fields by integrating autonomous task-switching mechanisms inspired by human working memory into one of the most popular deep-learning frameworks, TensorFlow/Keras (https://www.tensorflow.org/).
Lucas, an undergraduate at the time, helped develop the initial framework with funding from the MTSU Undergraduate Research and Creative Activity program; David, a graduate student, later completed the framework for his master's thesis and tested it on a range of tasks with differing types of data. The framework proposes two complementary mechanisms that allow deep-learning models to adapt to different tasks over time: a context layer that can swap task-specific context representations in and out (analogous to the prefrontal cortex in the brain), and a method for computing context loss that can be used to decide when to switch or update context representations (analogous to the mesolimbic dopamine system).
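To make those two mechanisms concrete, here is a minimal sketch of what a context layer might look like in TensorFlow/Keras. Note that ContextLayer, switch, maybe_switch, and the fixed loss threshold are hypothetical simplifications for illustration only, not the authors' published API; their actual implementation is in the repository linked below.

```python
import tensorflow as tf

class ContextLayer(tf.keras.layers.Layer):
    """Keeps a bank of learnable context vectors and appends the active
    one to every input (loosely analogous to the prefrontal cortex)."""

    def __init__(self, n_contexts=4, context_dim=8, **kwargs):
        super().__init__(**kwargs)
        self.n_contexts = n_contexts
        self.context_dim = context_dim

    def build(self, input_shape):
        # One learnable representation per context.
        self.contexts = self.add_weight(
            name="contexts",
            shape=(self.n_contexts, self.context_dim),
            initializer="glorot_uniform",
            trainable=True)
        # Index of the currently active context (not learned by gradients).
        self.active = tf.Variable(0, trainable=False, dtype=tf.int32)

    def call(self, inputs):
        # Broadcast the active context vector across the batch and append it.
        ctx = tf.gather(self.contexts, self.active)
        ctx = tf.tile(ctx[tf.newaxis, :], [tf.shape(inputs)[0], 1])
        return tf.concat([inputs, ctx], axis=-1)

    def switch(self, new_context):
        """Swap in a different task-specific context representation."""
        self.active.assign(new_context)

def maybe_switch(layer, loss, threshold=1.0):
    """Crude stand-in for the dopamine-like context-loss signal: if the
    loss under the active context is too high, rotate to the next one."""
    if float(loss) > threshold:
        layer.switch((int(layer.active) + 1) % layer.n_contexts)

# Toy usage: the same input produces different outputs under each context.
layer = ContextLayer(n_contexts=2, context_dim=4)
x = tf.random.normal((3, 5))
out_ctx0 = layer(x)   # context 0 appended
layer.switch(1)
out_ctx1 = layer(x)   # context 1 appended instead
```

In the published framework the switch/update decision is driven by a computed context-loss signal rather than a fixed threshold; the threshold here only makes the control flow concrete.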
A manuscript describing their work was peer-reviewed and recently accepted at the 33rd IEEE International Conference on Tools with Artificial Intelligence (https://ictai.computer.org/), and code for all models and experiments is freely available online (https://github.com/DLii-Research/context-learning).
(The paper was one of 110 accepted as full papers out of 550 submissions to ICTAI 2021, and it won the conference's Bourbakis-Ramamoorthy Best Paper Award on Tuesday, November 2, 2021.)
Ludwig, D., Remedios, L., and Phillips, J. L. (in press). A neurobiologically-inspired deep learning framework for autonomous context learning. In Proceedings of the 33rd IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2021).