Foundation models (FMs) have become the backbone of intelligent systems. Collaborative development of FMs enables multiple teams to fine-tune different aspects of an FM simultaneously. However, conflicts in model updates across teams, particularly when overlapping parameters are modified, pose significant challenges to maintaining model performance. In this paper, we propose Medusa, a novel framework that supports collaborative FM development by managing model branches and introducing a structured system of parameter ownership. Medusa tracks fine-tuning efforts as separate branches, similar to Git, allowing developers to work on different tasks without destabilizing the base model. Instead of passively merging parameters from already fine-tuned models, Medusa proactively controls merging: its parameter ownership assignment algorithm generates merging-aware masks that guide the fine-tuning process, ensuring that only specific branches can modify designated parameters. Medusa approximates the optimal ownership assignment even as model complexity increases, ensuring scalability to large fine-tuned models. We conduct extensive evaluations on five datasets and three large models, comparing Medusa against state-of-the-art post-training model merging approaches. The results show that Medusa substantially and consistently improves the effectiveness of collaborative model development across different models, fine-tuning methods, and datasets. Specifically, with automated parameter ownership assignment and masked fine-tuning, Medusa outperforms state-of-the-art post-training model merging approaches, improving post-merging model performance by 3.19% in absolute terms. Ablation studies further demonstrate the efficacy of Medusa's individual algorithms.
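To illustrate the masked fine-tuning idea at the heart of this approach, below is a minimal sketch of one gradient step that restricts a branch's updates to the parameters it owns. This is not Medusa's actual implementation: it assumes a PyTorch-style training loop, and the function name `masked_fine_tune_step` and the `ownership_masks` structure are hypothetical; the binary masks themselves are taken as given, as they would be produced by a separate ownership assignment step.

```python
import torch

def masked_fine_tune_step(model, batch, loss_fn, optimizer, ownership_masks):
    """One fine-tuning step that only updates parameters this branch owns.

    ownership_masks: dict mapping parameter name -> binary tensor of the
    same shape as the parameter (1 = owned by this branch, 0 = frozen
    here). Assumed to come from an upstream ownership assignment step.
    """
    optimizer.zero_grad()
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Zero out gradients on parameters this branch does not own, so the
    # optimizer step leaves them untouched and later merging sees no
    # conflicting updates on those entries.
    for name, param in model.named_parameters():
        if param.grad is not None and name in ownership_masks:
            param.grad.mul_(ownership_masks[name])
    optimizer.step()
    return loss.item()
```

Masking gradients rather than weights keeps the frozen entries exactly at their base-model values, which is what lets independently fine-tuned branches be merged without overwriting one another.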