Markov chains are widely used throughout mathematics, statistics, and the sciences, often for modelling purposes or for generating random samples. In this talk I’ll discuss a different, more recent application of Markov chains: developing distributed algorithms for programmable matter systems. Programmable matter is a material or substance that has the ability to change its features in a programmable, distributed way; examples are diverse and include robot swarms and smart materials. We study an abstraction of programmable matter where particles independently move on a lattice according to simple, local algorithms. We want to design these algorithms so that the system has a desired collective behavior, such as compression of the particles into a shape with small perimeter or separation of differently colored particles. In our stochastic approach, we describe a desired collective behavior using an energy function; design a Markov chain that uses local moves and converges to the Gibbs distribution for this energy function; and then turn the Markov chain into an asynchronous distributed algorithm that each particle can execute independently. In several of our algorithms, changing just a single parameter results in a different, but equally desirable, emergent global behavior. To prove our algorithms are correct, we must show this Gibbs distribution has the desired properties with high probability, which we do using proof techniques from probability, statistical physics, and Markov chain analysis. This principled approach has been used to inform the design of real-world robot systems.

Joint work with Marta Andres Arroyo, Enis Aydin, Joshua J. Daymude, Bahnisikha Dutta, Cem Gokmen, Daniel I. Goldman, Shengkai Li, Dana Randall, Andrea Richa, William Savoie, and Ross Warkentin.
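To make the energy-function-to-Markov-chain pipeline concrete, here is a minimal toy sketch of the kind of Metropolis dynamics the abstract describes, under several simplifying assumptions it does not specify: particles live on a square lattice rather than the triangular lattice typically used in this line of work, moves run sequentially rather than asynchronously, and the real algorithms' connectivity and local-structure constraints are omitted. All names (`metropolis_step`, `edge_count`, `lam`) are illustrative, not from the talk. The chain targets a Gibbs distribution weighting a configuration by `lam` raised to its number of adjacent particle pairs, so large `lam` favors compressed (small-perimeter) configurations.

```python
import random

def neighbors(p):
    """The four lattice sites adjacent to p (square lattice for simplicity)."""
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def edge_count(particles):
    """Number of adjacent occupied pairs; each pair is seen twice, so halve."""
    return sum(1 for p in particles for q in neighbors(p) if q in particles) // 2

def metropolis_step(particles, lam, rng):
    """One local move: try shifting a random particle to an adjacent empty site.

    Accepting with probability min(1, lam ** (delta in adjacent pairs)) makes
    the chain reversible with stationary distribution
    pi(config) proportional to lam ** edge_count(config).
    """
    p = rng.choice(sorted(particles))      # sorted for deterministic seeding
    q = rng.choice(neighbors(p))
    if q in particles:
        return                             # target site occupied; reject
    before = sum(1 for n in neighbors(p) if n in particles)
    after = sum(1 for n in neighbors(q) if n in particles and n != p)
    if rng.random() < min(1.0, lam ** (after - before)):
        particles.remove(p)
        particles.add(q)

def run(n_side, lam, steps, seed=0):
    """Start from an n_side x n_side block and run the chain."""
    rng = random.Random(seed)
    particles = {(x, y) for x in range(n_side) for y in range(n_side)}
    for _ in range(steps):
        metropolis_step(particles, lam, rng)
    return particles
```

Note how changing the single parameter `lam` (large versus small) biases the same local dynamics toward qualitatively different global behavior, which is the point the abstract makes about one parameter controlling the emergent outcome; the real results prove this rigorously for the Gibbs distribution, whereas this sketch only simulates the chain.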