Current methods for assessing individual well-being in team collaboration at the workplace often rely on manually collected surveys, which limits continuous real-world data collection and proactive measures to improve team members' workplace satisfaction. We propose a method to automatically derive social signals related to individual well-being in team collaboration from raw audio and video data collected in teamwork contexts. The goal is to develop computational methods and measurements that facilitate mirroring individuals' well-being back to themselves. We focus on how speech behavior is perceived by team members, with the aim of improving their well-being. Our main contribution is an integrated toolchain that performs multi-modal extraction of robust speech features in noisy field settings and identifies which features predict self-reported satisfaction scores. We apply the toolchain to a case study in which we collected videos of 22 teams with 56 participants collaborating on a team project over a four-day period. Our audiovisual speaker diarization extracts individual speech features in a noisy environment. As the dependent variable, team members filled out a daily PERMA (positive emotion, engagement, relationships, meaning, and accomplishment) survey. We then predicted these well-being scores from the extracted speech features using machine learning. The results suggest that the proposed toolchain can automatically predict individual well-being in teams, paving the way for better teamwork and happier team members.
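
To make the three toolchain stages concrete (diarization, per-speaker speech feature extraction, and regression against PERMA scores), the following is a minimal audio-only sketch. It assumes pyannote.audio for diarization, openSMILE (eGeMAPS functionals) for features, and scikit-learn for the regressor; these libraries, the pretrained model name, the file `meeting.wav`, and the placeholder survey scores are illustrative assumptions, not the paper's exact audiovisual implementation.

```python
# Illustrative sketch only: an audio-only stand-in for the paper's
# audiovisual toolchain, using assumed off-the-shelf components.
import numpy as np
import opensmile
import pandas as pd
from pyannote.audio import Pipeline
from sklearn.ensemble import RandomForestRegressor

# 1) Diarization: who spoke when (pretrained pipeline name is an assumption).
diarization = Pipeline.from_pretrained("pyannote/speaker-diarization")("meeting.wav")

# 2) Per-segment acoustic functionals (eGeMAPS), averaged per speaker.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
rows = []
for segment, _, speaker in diarization.itertracks(yield_label=True):
    feats = smile.process_file("meeting.wav", start=segment.start, end=segment.end)
    feats["speaker"] = speaker
    rows.append(feats)
X = pd.concat(rows).groupby("speaker").mean(numeric_only=True)

# 3) Regress daily well-being scores on the per-speaker speech features.
#    In practice y comes from the PERMA survey; random values stand in here.
y = pd.Series(np.random.default_rng(0).uniform(1, 7, len(X)), index=X.index)
model = RandomForestRegressor(random_state=0).fit(X, y)
top = pd.Series(model.feature_importances_, index=X.columns).nlargest(5)
print(top)  # speech features most predictive of the well-being score
```

Inspecting feature importances, as in step 3, is one simple way to identify which speech features act as predictors of the self-reported satisfaction scores.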