Learning within Model Trees
We conduct four motivating experiments that illustrate the benefits of learning within Model Trees. In each experiment, we train a linear classifier to predict the classes a ViT model was fine-tuned on. First, we show that learning within Model Trees is significantly simpler (a) by comparing a metanetwork trained on models from a single tree \(T\) with one trained on models drawn from different trees \(F\). Next, we demonstrate positive transfer within the same tree (b) by showing that adding more models from the same tree improves performance. Surprisingly, we observe negative transfer between Model Trees (c): adding samples from other trees degrades performance on a single tree. Finally, we find that expert learning is preferable when learning from multiple trees (d), as a single metanetwork shared across trees performs worse than one expert metanetwork per tree (MoE).
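The linear-probe setup above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the weight vectors are synthetic stand-ins (a shared "pretrained" base plus a class-dependent offset), and the probe is an off-the-shelf logistic regression rather than the metanetwork used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical setup: each "model" is represented by its flattened
# weight vector; the label is the class it was fine-tuned on.
n_models, n_weights, n_classes = 200, 64, 5
labels = rng.integers(0, n_classes, size=n_models)

# Toy stand-in for fine-tuned weights within one Model Tree: a shared
# pretrained base plus a class-dependent offset and small noise.
base = rng.normal(size=n_weights)
offsets = rng.normal(size=(n_classes, n_weights))
weights = base + offsets[labels] + 0.1 * rng.normal(size=(n_models, n_weights))

# Linear probe: predict the fine-tuning class directly from raw weights.
probe = LogisticRegression(max_iter=1000).fit(weights[:150], labels[:150])
acc = probe.score(weights[150:], labels[150:])
```

Because all toy models share the same base weights, the class-dependent offsets are linearly separable, mirroring why probing within a single tree is comparatively easy.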