
Neural Network Conversion of Machine Learning Pipelines

Man-Ling Sung, Jan Silovsky, Man-Hung Siu, Herbert Gish, Chinnu Pittapally
arXiv: 2603.25699v1  Published: 2026-03-26  Updated: 2026-03-26

AI Summary

This paper studies transferring non-neural machine learning pipelines to neural networks via transfer learning, enabling a single unified inference engine.

Key Contributions

  • Proposes transferring non-neural ML pipelines to neural networks via transfer learning
  • Explores mimicking a random forest classifier with a student neural network
  • Experimentally validates that a neural network can mimic a random forest through transfer learning

Methodology

Knowledge from a random forest is transferred to a neural network via transfer learning, and the approach is validated experimentally on OpenML datasets.

Original Abstract

Transfer learning and knowledge distillation have recently gained a lot of attention in the deep learning community. One transfer approach, student-teacher learning, has been shown to successfully create "small" student neural networks that mimic the performance of a much bigger and more complex "teacher" network. In this paper, we investigate an extension to this approach and transfer from a non-neural-based machine learning pipeline as teacher to a neural network (NN) student, which would allow for joint optimization of the various pipeline components and a single unified inference engine for multiple ML tasks. In particular, we explore replacing the random forest classifier by transfer learning to a student NN. We experimented with various NN topologies on 100 OpenML tasks in which random forest has been one of the best solutions. Our results show that for the majority of the tasks, the student NN can indeed mimic the teacher if one can select the right NN hyper-parameters. We also investigated the use of random forest for selecting the right NN hyper-parameters.
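The student-teacher transfer described above can be sketched as follows: a random forest "teacher" is trained on a classification task, and a small neural network "student" is fit to the teacher's soft class probabilities rather than the hard labels. This is a minimal illustration, not the paper's setup; the dataset, topology, and hyper-parameters are placeholder choices.

```python
# Hedged sketch of student-teacher transfer from a random forest (teacher)
# to a neural network (student). Dataset and hyper-parameters are
# illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic binary classification task (stand-in for an OpenML task).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Teacher: the non-neural pipeline component (random forest).
teacher = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Soft targets: the teacher's class-1 probabilities on the training data.
soft_targets = teacher.predict_proba(X_tr)[:, 1]

# Student: a small MLP regressing onto the teacher's soft outputs.
student = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                       random_state=0).fit(X_tr, soft_targets)

# Thresholding the student's output recovers hard class predictions.
student_pred = (student.predict(X_te) > 0.5).astype(int)
teacher_acc = teacher.score(X_te, y_te)
student_acc = (student_pred == y_te).mean()
print(f"teacher acc: {teacher_acc:.3f}  student acc: {student_acc:.3f}")
```

Regressing onto probabilities rather than fitting hard labels is the usual distillation trick: the soft targets carry the teacher's confidence, which gives the student a richer training signal than 0/1 labels alone.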

Tags

Transfer Learning  Knowledge Distillation  Neural Networks  Random Forest

arXiv Categories

cs.LG cs.AI