As machine learning algorithms increasingly enter real-world settings, there is rising interest in controlling their CPU cost at test time. In industry, computational resources must be budgeted and costs strictly accounted for. At its core, this problem is a tradeoff between accuracy and test-time computation. Test-time computation consists of two components: (1) the actual running time of the algorithm, and (2) the time required for feature extraction. The latter can vary drastically if the feature set is diverse. In this abstract, we propose a novel algorithm that explicitly considers the feature extraction cost during training. We first state the (non-continuous) global objective, which explicitly trades off feature cost and accuracy, and then relax it into a continuous loss function. Subsequently, we derive an update rule showing that the resulting loss lends itself naturally to greedy optimization with stage-wise regression [4]. The resulting learning algorithm is much simpler than prior work, yet leads to superior test-time performance: its accuracy matches that of the unconstrained baseline (with unlimited resources) while achieving an order-of-magnitude reduction in test-time cost.

Cost-sensitive learning. We use gradient boosting [4] to learn a classifier H(x) = ∑_{t=1}^{T} β_t h_t(x) that minimizes some loss ℓ(H). Here, h_t ∈ H, where H is the set of all possible regression trees [1] of some limited
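To make the stage-wise regression concrete, the sketch below shows plain gradient boosting with depth-1 regression trees (stumps) under squared loss, where each weak learner h_t is fit to the current residual (the negative gradient). This is a minimal illustration of the H(x) = ∑_t β_t h_t(x) ensemble only; it does not include the paper's cost-augmented loss, and the function names, the fixed step size β, and the stump learner are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression tree (stump) to residuals r by
    exhaustive search over split thresholds on a 1-D feature."""
    best = None
    for thr in np.unique(x):
        left, right = r[x <= thr], r[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= thr, left.mean(), right.mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    _, thr, lv, rv = best
    return lambda z: np.where(z <= thr, lv, rv)

def gradient_boost(x, y, T=50, beta=0.1):
    """Stage-wise regression: H(x) = sum_t beta * h_t(x), where each
    h_t is fit to the residual y - H (the negative gradient of
    squared loss)."""
    H = np.zeros_like(y, dtype=float)
    trees = []
    for _ in range(T):
        h = fit_stump(x, y - H)   # residual = -grad of (1/2)(y - H)^2
        H += beta * h(x)
        trees.append(h)
    return lambda z: beta * sum(h(z) for h in trees)

# Toy usage: learn a noisy step function on a single feature.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = (x > 0.5).astype(float) + 0.05 * rng.normal(size=200)
model = gradient_boost(x, y)
```

In the cost-sensitive setting, the only change is to the loss being minimized: the greedy stage-wise structure of the update stays the same.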