Deep models are `black-box', meaning that their decision-making process is often opaque to users. To address this issue, several post hoc methods have been proposed for explaining a model's predictions. However, post hoc explanations are often unreliable and unfaithful to the model. Interpretable-by-design methods, such as Information Pursuit (IP) and its variants, map the input data to a small set of interpretable concepts by asking a sequence of queries and make a prediction based on the query answers. Such models are faithful by design because their predictions are based directly on their explanations, i.e., the sequence of query answers. However, they require either a very complex algorithm for selecting which queries to ask or fully annotated datasets for training a query-answering system. This paper proposes IP-OMP-ConceptQA, an interpretable-by-design method that combines an efficient query-selection method (OMP) with an accurate zero-shot query-answering system (Concept-QA). Experiments on vision datasets show that IP-OMP-ConceptQA outperforms existing methods in accuracy, interpretability, faithfulness, and efficiency in scenarios where very short explanations are desired.