Two different approaches to interpretable Concept-Based Models (CBMs) exist: locally interpretable CBMs, which allow humans to understand the predictions made for individual instances, and globally interpretable CBMs, which provide a broader understanding of the model's overall reasoning. In practice, the former focus on achieving high predictive accuracy, while the latter emphasize robustness and verifiability. To bridge the gap between these two extremes, we propose a hybrid model that integrates the strengths of both approaches. Our model, called Unified Concept Reasoner (UCR), combines the high explainability of globally interpretable CBMs with the high accuracy of locally interpretable CBMs, resulting in a powerful CBM with two heads, either of which can be used for prediction. In a preliminary experimental evaluation, we show that UCR reaches accuracy comparable to that of its competitors, converges to coherent global and local heads, and is more stable with respect to hyperparameters.
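Since the abstract does not specify UCR's architecture, the following is only a minimal sketch of the general two-headed CBM pattern it describes: a shared concept encoder feeding a flexible local head and an instance-independent global head, trained jointly with an assumed agreement term. All module names, layer sizes, and the alignment loss are hypothetical illustrations, not the authors' actual design.

```python
# Minimal sketch of a two-headed concept-based model (hypothetical names;
# the actual UCR architecture is not specified in the abstract).
import torch
import torch.nn as nn


class TwoHeadCBM(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Shared concept encoder: maps raw inputs to concept activations in [0, 1].
        self.concept_encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, n_concepts), nn.Sigmoid(),
        )
        # Local head: a flexible per-instance predictor over concepts,
        # standing in for the locally interpretable component.
        self.local_head = nn.Sequential(
            nn.Linear(n_concepts, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )
        # Global head: a single linear layer whose weights act as one
        # instance-independent, human-readable scoring of concepts per class,
        # standing in for the globally interpretable component.
        self.global_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        c = self.concept_encoder(x)
        return c, self.local_head(c), self.global_head(c)


def two_head_loss(y_local, y_global, y, align_weight=1.0):
    """Train both heads on the task and encourage them to agree.
    The alignment term is an assumption; the abstract only states that
    the two heads converge to coherent predictions."""
    ce = nn.functional.cross_entropy
    align = nn.functional.kl_div(
        y_local.log_softmax(dim=-1),
        y_global.softmax(dim=-1),
        reduction="batchmean",
    )
    return ce(y_local, y) + ce(y_global, y) + align_weight * align
```

Under this sketch, either head can serve as the predictor at inference time, matching the abstract's claim that both heads can be used for prediction: the global head when a verifiable, instance-independent explanation is needed, and the local head when per-instance accuracy matters most.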