GreedyNAS: Towards Fast One-Shot NAS With Greedy Supernet

Training a supernet matters for one-shot neural architecture search (NAS) methods since it serves as a basic performance estimator for different architectures (paths). Current methods mainly hold the assumption that a supernet should give a reasonable ranking over all paths. They thus treat all paths equally, and spare much effort …
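To make the "treat all paths equally" assumption concrete, here is a minimal sketch (not the authors' code) of uniform single-path supernet training: at every step one candidate operation per layer is drawn uniformly at random, so every path receives the same expected amount of training. The layer widths, candidate kernel sizes, and random data are illustrative assumptions only.

```python
import random
import torch
import torch.nn as nn

class SupernetLayer(nn.Module):
    def __init__(self, channels, candidates=3):
        super().__init__()
        # Each layer holds several candidate ops; a path selects one per layer.
        self.ops = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=2 * k + 3, padding=k + 1)
            for k in range(candidates)
        )

    def forward(self, x, choice):
        return self.ops[choice](x)

class Supernet(nn.Module):
    def __init__(self, channels=16, depth=4, candidates=3):
        super().__init__()
        self.layers = nn.ModuleList(
            SupernetLayer(channels, candidates) for _ in range(depth)
        )
        self.candidates = candidates

    def forward(self, x, path):
        for layer, choice in zip(self.layers, path):
            x = layer(x, choice)
        return x

    def sample_uniform_path(self):
        # "Treat all paths equally": every candidate op is equally likely.
        return [random.randrange(self.candidates) for _ in self.layers]

# One illustrative training step on random data.
net = Supernet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x, target = torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8)

path = net.sample_uniform_path()   # uniformly sampled architecture (path)
loss = nn.functional.mse_loss(net(x, path), target)
opt.zero_grad()
loss.backward()
opt.step()
```

GreedyNAS departs from this baseline by biasing the sampling toward paths that currently look promising, rather than spreading training effort uniformly over the whole search space.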