Abstract
While deep learning has achieved remarkable results in text classification, incremental learning for text classification remains a challenge. The main problem is that models suffer from catastrophic forgetting: when labelled data arrives sequentially and the model is trained on it in sequence, previously learned knowledge is forgotten. In this study, we propose methods for preventing catastrophic forgetting that handle unbalanced incremental data. As improvements over experience replay, our approaches raise accuracy by about 23.3% on Yahoo using 23% of all training data, and by 9.5% on DBPedia using 12% of all training data.
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.