What do swarm robotics and political redistricting have in common? One answer is Markov chains, which have recently been used in very different ways to address problems in both areas. To get a large swarm to exhibit a desired behavior, one solution is to make each individual in the swarm fairly intelligent; another is to make the individuals simple, but to let the desired behavior emerge from their interactions. My collaborators and I recently used Markov chains and ideas from statistical physics to develop distributed algorithms that follow this second paradigm. We also worked with physicists to create a physical robot system in which no individual can compute anything, yet the system as a whole can still accomplish complex tasks.

For political redistricting, the main mathematical technique developed in the last few years for detecting gerrymandering is to compare a proposed plan to the space of all possible alternative plans; if the proposed plan is an outlier, that is an indicator it might be gerrymandered. However, the space of all possible districting plans is far too large to be studied in its entirety. Instead, Markov chains are used to generate random samples of alternative plans, with the hope that the sampled plans are reasonably representative of all possible plans. This approach has already been used successfully in court cases around the country, though questions remain about what mathematical guarantees we can give for the randomly sampled districting plans.
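The ensemble-comparison idea can be sketched with a toy model. Everything below is a hypothetical illustration, not the method used in any court case: twelve precincts on a line, with made-up vote shares for one party, are split into three contiguous districts by two cut points. A simple Markov chain nudges a cut point one step at a time to generate an ensemble of alternative plans, and the proposed plan's seat count is compared to that ensemble.

```python
import random

# Toy example (all precinct data and plan choices are hypothetical):
# 12 precincts on a line, each with a vote share for party A,
# partitioned into 3 contiguous districts by 2 cut points.
random.seed(0)
VOTES = [0.8, 0.7, 0.6, 0.3, 0.4, 0.2, 0.9, 0.1, 0.35, 0.55, 0.45, 0.65]
N, CUTS = len(VOTES), 2

def seats_won(cuts):
    """Number of districts party A wins under the plan given by the cut points."""
    bounds = [0] + sorted(cuts) + [N]
    return sum(
        sum(VOTES[bounds[i]:bounds[i + 1]]) / (bounds[i + 1] - bounds[i]) > 0.5
        for i in range(len(bounds) - 1)
    )

def random_walk(steps=10_000):
    """Markov chain on plans: nudge one cut point by +-1, rejecting invalid moves."""
    cuts = [4, 8]          # arbitrary starting plan
    samples = []
    for _ in range(steps):
        i = random.randrange(CUTS)
        proposal = cuts[:]
        proposal[i] += random.choice([-1, 1])
        # A valid plan has distinct cut points strictly between 0 and N;
        # invalid proposals are rejected and the chain stays in place.
        if len({0, N, *proposal}) == CUTS + 2 and all(0 < c < N for c in proposal):
            cuts = sorted(proposal)
        samples.append(seats_won(cuts))
    return samples

ensemble = random_walk()
proposed = seats_won([3, 6])   # the hypothetical plan under scrutiny
share = sum(s == proposed for s in ensemble) / len(ensemble)
print(f"proposed plan wins {proposed} seat(s); "
      f"{share:.1%} of sampled plans give the same count")
```

If the proposed plan's seat count rarely occurs in the ensemble, that is the kind of outlier signal the real analyses look for; actual redistricting chains operate on planar graphs of precincts with contiguity, population-balance, and other legal constraints, which this one-dimensional sketch deliberately omits.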