This is the perfect scenario for UX A/B testing! Instead of relying on opinions, you can run a controlled experiment where you show each version to a different user segment and measure real behavior — conversion rates, task completion, time on task. What really helped our team was a practical guide that breaks down how to set up proper A/B tests for UX decisions. It covers everything from writing a meaningful hypothesis to determining statistical significance and avoiding common pitfalls like testing too many variables at once. Its framework for interpreting results has been invaluable for making objective design decisions. Here's the resource that changed how we approach these debates:
https://clay.global/blog/ux-guide/ab-testing
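
To make the "statistical significance" part concrete, here's a minimal sketch of the kind of check you'd run once the test finishes: a standard two-proportion z-test comparing conversion rates between the two versions. The function name and the sample numbers are hypothetical, just for illustration — not from the guide itself.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: version A converted 120 of 2400 users,
# version B converted 156 of 2400 users.
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```

The big thing to avoid (which the guide also stresses) is peeking at the p-value mid-test and stopping the moment it dips below 0.05 — decide your sample size up front and evaluate once.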