Retaining information is a challenge for humans and machines alike. Electrical engineers at Ohio State University have been exploring how "continual learning" affects the performance of artificial agents to shed light on this issue.

"Continual learning" refers to a process in which a computer learns a sequence of tasks over time, drawing on knowledge from previous tasks to learn new ones more efficiently.
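In practice, that usually means a single model whose weights are updated on one task after another, so whatever it learned earlier is the starting point for whatever comes next. The sketch below illustrates that sequential setup with a hypothetical linear classifier and synthetic tasks; it is a simplified illustration, not code from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(feature):
    """Synthetic binary classification task: the label depends on one feature,
    and each task uses a different one (purely illustrative data)."""
    X = rng.normal(size=(200, 5))
    w_true = np.zeros(5)
    w_true[feature % 5] = 1.0
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.1, steps=200):
    """Plain logistic-regression gradient descent, starting from the weights passed in."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Continual learning: one weight vector, updated task after task.
# Whatever the model retains from earlier tasks is carried in w.
tasks = [make_task(t) for t in range(3)]
w = np.zeros(5)
for t, (X, y) in enumerate(tasks):
    w = train(w, X, y)
    print(f"finished task {t}, weights: {np.round(w, 2)}")
```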

However, the field faces a significant obstacle: the artificial intelligence (AI) equivalent of memory loss, termed "catastrophic forgetting." As AI neural networks are trained on one new task after another, they tend to lose the information they acquired from earlier ones. This can pose significant problems as our reliance on AI systems grows, says Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at Ohio State University.
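One common way to make this memory loss concrete, a standard measure in the continual-learning literature rather than necessarily the one used in this study, is to re-evaluate the model on every earlier task after each new one and record how far its accuracy falls from its best value. The numbers below are invented purely to show the calculation.

```python
import numpy as np

# acc[i, j] = accuracy on task j measured right after training on task i.
# These numbers are made up purely to illustrate the metric.
acc = np.array([
    [0.95, 0.50, 0.50],   # after task 0: good on task 0, untrained on the rest
    [0.70, 0.94, 0.50],   # after task 1: task 0 accuracy has dropped
    [0.55, 0.72, 0.93],   # after task 2: earlier tasks keep degrading
])

# "Forgetting" on each earlier task: best accuracy it ever had minus its final accuracy.
final = acc[-1, :-1]
best = acc[:-1, :-1].max(axis=0)
forgetting = best - final
print("per-task forgetting:", forgetting)        # here: [0.40, 0.22]
print("average forgetting:", forgetting.mean())
```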

Shroff asserts, “As we educate robotic systems or automated driving applications, it’s crucial that they retain past lessons for everyone’s safety.” He adds, “Our study investigates the complexities of continual learning in AI neural networks, and we’ve discovered insights that start to align the learning processes of machines and humans.”

Shroff and his team found that AI neural networks, much like humans, struggle to remember contrasting facts about similar situations but can recall entirely different scenarios quite easily. This means that when these networks encounter varied tasks one after the other, they’re able to retain information better than when faced with similar tasks.

The team, which includes Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and professors Yingbin Liang and Ness Shroff, will present its findings this month at the 40th annual International Conference on Machine Learning (ICML) in Honolulu, Hawaii.

Training autonomous systems to demonstrate dynamic, lifelong learning can be difficult. However, having such capabilities would expedite the scaling of machine learning algorithms and make them more adaptable in changing environments and unpredictable situations. Essentially, the objective is for these systems to mimic human learning capabilities.


Traditional machine learning algorithms are trained on all of their data at once, but this research showed that in continual learning, factors such as task similarity, positive and negative correlations between tasks, and even the order in which tasks are taught to an algorithm are critical to how much knowledge an artificial network retains.
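The paper's analysis of these effects is theoretical, but one rough, hypothetical way to picture "task similarity" and positive or negative correlation for simple linear tasks is the cosine similarity between the solutions each task would learn on its own: values near +1 suggest closely related tasks, values near -1 suggest conflicting ones, and values near 0 suggest largely unrelated ones. A sketch:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: +1 means aligned tasks, -1 means opposed, 0 means unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical per-task solutions (e.g. the weights each task would learn in isolation).
w_task_a = np.array([1.0, 0.9, 0.1])
w_task_b = np.array([0.9, 1.0, 0.0])    # nearly the same direction as task A
w_task_c = np.array([-1.0, -0.8, 0.2])  # roughly the opposite of task A
w_task_d = np.array([0.0, 0.1, 1.0])    # mostly unrelated to task A

for name, w in [("B", w_task_b), ("C", w_task_c), ("D", w_task_d)]:
    print(f"similarity(A, {name}) = {cosine(w_task_a, w):+.2f}")
```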

For example, to enhance an algorithm’s memory retention, dissimilar tasks should be introduced early in the continual learning process, suggests Shroff. This approach expands the network’s capacity for new information and boosts its ability to learn similar tasks later.
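Taken literally, that suggestion could be turned into a simple ordering heuristic: greedily schedule next whichever task is, on average, least similar to the tasks already chosen, so the most dissimilar tasks land early in the sequence. The sketch below reuses the cosine-similarity notion from above and is a hypothetical illustration, not the ordering rule derived in the paper.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def order_dissimilar_first(task_vectors):
    """Greedy ordering: repeatedly append the task least similar (on average)
    to the tasks already scheduled, so dissimilar tasks come early."""
    remaining = list(range(len(task_vectors)))
    order = [remaining.pop(0)]            # start arbitrarily with the first task
    while remaining:
        scores = [
            np.mean([cosine(task_vectors[i], task_vectors[j]) for j in order])
            for i in remaining
        ]
        order.append(remaining.pop(int(np.argmin(scores))))
    return order

# Hypothetical task "signatures" (e.g. mean feature vectors or per-task weights).
tasks = [
    np.array([1.0, 0.0, 0.0]),
    np.array([0.9, 0.1, 0.0]),   # very similar to task 0
    np.array([0.0, 0.0, 1.0]),   # unrelated to task 0
    np.array([-1.0, 0.1, 0.0]),  # opposed to task 0
]
# Schedules the opposed and unrelated tasks before the near-duplicate of task 0.
print("suggested order:", order_dissimilar_first(tasks))
```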

Understanding the parallels between machines and the human brain is crucial for a deeper comprehension of AI, says Shroff, and he concludes, “Our research ushers in a new age of intelligent machines that can learn and adapt just like humans.”

This study received support from the National Science Foundation and the Army Research Office.
