In this work, we design an energy management strategy (EMS) for hybrid electric vehicles (HEVs) using a deep reinforcement learning (DRL) algorithm. Specifically, this paper introduces a soft actor-critic (SAC)-based EMS tailored to devising optimal energy distribution for HEVs. The proposed SAC-based approach addresses drawbacks inherent to many DRL methods, including slow convergence, discretization error, and suboptimal solutions. The designed SAC algorithm executes continuous decision-making policies self-adaptively and efficiently, balancing exploration and exploitation through an entropy-based action selection method and an entropy-added reward function. Extensive experiments demonstrate the merits of the adaptive SAC algorithm over the widely adopted Q-learning (QL), deep Q-network (DQN), and deep deterministic policy gradient (DDPG) approaches in terms of fuel economy and battery charge sustainability. An unknown driving cycle is also employed to show the adaptability of the proposed scheme, revealing fuel savings of 6.26%, 3.01%, and 2.03% over the QL-based, DQN-based, and DDPG-based methods, respectively.
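The entropy-added reward function mentioned above is the defining feature of SAC: the agent is rewarded not only for fuel-efficient actions but also for keeping its policy stochastic, which sustains exploration over continuous action spaces. A minimal sketch of this idea follows; the Gaussian policy, the function names, and the temperature value `alpha = 0.2` are illustrative assumptions, not details taken from this paper.

```python
import math

def gaussian_entropy(sigma):
    # Differential entropy of a 1-D Gaussian policy: 0.5 * ln(2*pi*e*sigma^2).
    # A wider policy (larger sigma) has higher entropy.
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def entropy_augmented_reward(reward, sigma, alpha=0.2):
    # SAC-style entropy-added reward: r + alpha * H(pi(.|s)).
    # alpha is the temperature weighting exploration against exploitation;
    # 0.2 here is a commonly used default, assumed for illustration.
    return reward + alpha * gaussian_entropy(sigma)

# Under the same environment reward (e.g., negative fuel consumption),
# a more stochastic policy receives a larger augmented reward,
# which discourages premature collapse to a deterministic policy.
r_explore = entropy_augmented_reward(-1.0, sigma=1.0)   # wide policy
r_exploit = entropy_augmented_reward(-1.0, sigma=0.1)   # narrow policy
```

In a full SAC implementation this entropy term also appears inside the critic's bootstrap target, so the balance between exploration and exploitation is learned rather than hand-tuned.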