POPNAS
Pareto-Optimal Progressive Neural Architecture Search tool
This tool automates the design of deep neural network architectures, assisting developers in building AI models. Given specific Quality of Service (QoS) requirements, the algorithm searches for the network configuration that best satisfies them. When labelled samples are available, POPNAS identifies a set of candidate models lying on the Pareto frontier, which captures the optimal trade-off between training time and accuracy.
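To illustrate the Pareto-frontier idea, the sketch below keeps only the candidate models that are not dominated, i.e. no other model is both faster and at least as accurate. The candidate list and its numbers are hypothetical; this is not POPNAS's internal implementation, just the selection criterion it describes.

```python
def pareto_frontier(candidates):
    """Return the non-dominated (training_time, accuracy) pairs.

    A candidate is dominated if some other candidate is no slower
    and no less accurate, and differs in at least one objective.
    """
    frontier = []
    for t, acc in candidates:
        dominated = any(
            t2 <= t and a2 >= acc and (t2, a2) != (t, acc)
            for t2, a2 in candidates
        )
        if not dominated:
            frontier.append((t, acc))
    return sorted(frontier)

# Hypothetical (training time in seconds, validation accuracy) pairs:
models = [(120, 0.91), (80, 0.89), (200, 0.92), (80, 0.85), (60, 0.80)]
print(pareto_frontier(models))
# -> [(60, 0.8), (80, 0.89), (120, 0.91), (200, 0.92)]
```

Every model on the frontier is a reasonable answer: moving along it trades training time for accuracy, and the QoS requirements decide which point to pick.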
The tool streamlines the development of artificial intelligence models, enabling people with little machine learning expertise to train high-quality models.
A distinguishing feature of this Neural Architecture Search (NAS) algorithm is that it incorporates time as a constraint in the optimization process. It extends Progressive Neural Architecture Search (PNAS), a state-of-the-art technique originally proposed by Google, with an additional time regressor, so that both the training time and the accuracy of each candidate network can be predicted before it is trained. As a result, the refined algorithm achieves competitive performance while requiring a shorter search time than the original PNAS.
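The role of the time regressor can be sketched with a deliberately simple stand-in: fit a model that maps a feature of a candidate architecture to its expected training time, then use it to rank candidates without training them. The single feature (number of blocks), the measurements, and the ordinary least-squares model are all illustrative assumptions, not the regressor POPNAS actually uses.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical measurements: (blocks in a cell, observed training time in s)
blocks = [1, 2, 3, 4]
times = [35.0, 62.0, 91.0, 118.0]
a, b = fit_linear(blocks, times)

# Predict the training time of an unseen 5-block candidate before
# spending any compute on it:
predicted = a * 5 + b
```

Pairing such a time predictor with the accuracy predictor is what lets the search score each proposed network on both objectives and keep only Pareto-optimal candidates for the next expansion step.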