In this conversation, Sean shares insights on swarms in nature and multi-robot systems found in industry. He explores why getting robots to work in the real world is so much harder (and more exciting) than it seems.

Fascination with collective behavior. Sean focuses on an uncomfortable truth: we have terrible intuition for how swarms work. This lack of natural understanding is what draws him to the field. When you watch ants, bees, or termites coordinate complex tasks without a central leader, you’re witnessing something fundamentally different from human organization. Sean loves the cross-disciplinary challenge of studying systems that don’t come naturally to us, requiring him to look beyond human behavior patterns and draw insights from biology, mathematics, and control theory. This fascination stems from genuine intellectual humility—recognizing where our intuitions fail us. Most of us can imagine how we’d organize a group of people to move furniture or search an area. But how would you program robots to spontaneously form teams, divide labor, and adapt to failures without anyone giving orders? That’s the puzzle that keeps Sean engaged. It requires bridging multiple fields and accepting that the solution won’t feel intuitive even when you find it. The beauty of swarm robotics lies in its counterintuitive nature, forcing researchers to set aside their human-centric assumptions and learn from nature’s millions of years of evolutionary optimization.

Replicating swarm intelligence in nature. Sean’s PhD journey began with a simple video that changed everything: desert ants collectively transporting a fig. These Novomessor cockerelli ants displayed something miraculous—they spontaneously formed teams, carried heavy objects back to their colony without pheromone trails, and navigated around obstacles, all without any apparent communication or centralized control. What made this particularly impressive is the harsh desert environment where food is scarce. The ants can’t afford to chop up the fig and make multiple trips because competitors would steal it. They need collaborative transport, and they’ve mastered it. Here’s what struck Sean: if he dropped an object and told 100 robots to retrieve it, he doesn’t think even 10 of them would magically figure out how to get there, pick it up without talking to each other, and bring it back without a centralized controller. Even today, this remains a hard problem. We simply don’t have good ways of replicating what these ants do naturally. The challenge lies in understanding both how they accomplish this feat and how to translate that understanding into working robotic systems. The bridge between biology and robotics works because you can study the limitations of biological organisms—their sensors, actuators, and computational capabilities—then form hypotheses about their strategies. You can test these hypotheses in nature a few times, but robots offer something unique: repeatability. Run the same algorithm a million times on robots, and they’ll behave consistently, letting you prove whether your hypothesis is possible. You can say “this could be what they’re doing,” even if you can’t definitively prove it’s what they are doing.
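Sean’s point about repeatability lends itself to a toy experiment. The model below is one hypothetical strategy—each agent pushes the object using only its own noisy estimate of the nest direction—and is not a claim about what Novomessor actually does. But running the same algorithm across a hundred seeded trials yields the kind of statistics that a handful of field observations never could:

```python
import math
import random

def transport_trial(n_agents, steps=1200, seed=None):
    """Toy collective-transport trial. Each agent pushes the object
    toward the nest (the origin) using only its own noisy heading
    estimate -- a hypothetical strategy, not the ants' actual one."""
    rng = random.Random(seed)
    obj = [10.0, 10.0]          # where the fig starts
    step_size = 0.02
    for _ in range(steps):
        fx = fy = 0.0
        for _ in range(n_agents):
            # Each agent senses the nest direction with heading noise;
            # no agent communicates with any other.
            ang = math.atan2(-obj[1], -obj[0]) + rng.gauss(0.0, 0.5)
            fx += math.cos(ang)
            fy += math.sin(ang)
        # The object moves with the team's averaged net push.
        obj[0] += step_size * fx / n_agents
        obj[1] += step_size * fy / n_agents
        if math.hypot(obj[0], obj[1]) < 0.5:   # reached the nest
            return True
    return False

# Repeatability is the point: the same algorithm over many seeded
# runs gives success statistics you could never get in the field.
successes = sum(transport_trial(6, seed=s) for s in range(100))
success_rate = successes / 100
```

Individual estimates are noisy, but the averaged push still points home—one way “this could be what they’re doing” gets tested without claiming it is.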

Decentralization versus centralization. There’s a fundamental tension in collaborative robotics that Sean describes with a simple phrase: “pick your poison”. At one end of the spectrum, you have completely decentralized systems like ant colonies, where every individual makes its own decisions based on local information. These systems are beautifully scalable and robust to failure—remove one ant and the group keeps functioning. At the other end, you have centralized warehouse robots with one big server commanding everything, knowing exactly where each robot is and what it’s doing. The tradeoff is profound. Decentralized systems are robust and scalable, but you can’t make hard guarantees about what they’ll do. Each agent is autonomous, reacting to its environment and neighbors, so you can model the group’s collective behavior but can’t precisely predict individual actions. Centralized systems give you strong guarantees—you know exactly what commands will be issued—but they don’t scale well and create a single point of failure. If the central controller goes down, everything stops. Sean’s work at the Georgia Tech Research Institute focuses on finding the right spot on this spectrum for each specific problem. There’s no universal solution because robots today are what he calls “idiot savants”—really good at specific tasks but terrible at generalizing. The current state of warehouse robotics illustrates this well. Systems are heavily instrumented with tracking cameras and floor markers because navigation is hard with your eyes closed. KUKA robots famously use stickers every two feet for localization—brilliant for warehouses but useless in the real world. The future of collaborative autonomy lies in matching the control architecture to the task requirements.
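The decentralized end of that spectrum is easy to sketch. A minimal consensus protocol (illustrative only, not any specific deployed system) shows both properties Sean describes: each agent acts on local neighbor information alone, and removing an agent mid-run doesn’t stop the rest—there is no central controller to lose:

```python
def consensus_step(positions, neighbors, gain=0.1):
    """One fully decentralized update: each agent nudges itself toward
    the mean of its surviving neighbors using only local information --
    no central server, no global state."""
    new = {}
    for i, x in positions.items():
        nbrs = [positions[j] for j in neighbors[i] if j in positions]
        if nbrs:
            new[i] = x + gain * (sum(nbrs) / len(nbrs) - x)
        else:
            new[i] = x
    return new

# Ten agents on a ring: each knows only its two immediate neighbors.
n = 10
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
positions = {i: float(i) for i in range(n)}

# Kill one agent mid-run; the remaining nine keep converging -- the
# robustness a centralized single point of failure cannot offer.
for step in range(400):
    if step == 50:
        positions.pop(3)
    positions = consensus_step(positions, neighbors)

spread = max(positions.values()) - min(positions.values())
```

The flip side is exactly the guarantee problem Sean names: the group provably converges, but no one can say in advance precisely where any individual agent will be at a given step.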

North Star vision for collaborative autonomy. Sean’s vision for collaborative autonomy centers on what Vijay Kumar calls the “dull, dirty, and dangerous”. Robots should handle boring tasks like stocking shelves, dirty work like cleaning sewers and septic tanks, and dangerous missions in minefields or nuclear reactor meltdowns—places you’d never want to send humans. The key insight is robustness through redundancy. A single Boston Dynamics Atlas robot doing backflips is impressive, but if its leg breaks, it’s done. Send a hundred cheaper robots and ten failures won’t stop the mission. The immediate goal is simply getting multi-robot systems out into the real world. Beyond Waymo cars, Tesla Autopilot, and warehouse robots, we don’t see many truly collective robotic systems deployed. This reveals how young and challenging the field remains. One fundamental problem Sean finds fascinating: how does a single human digest information from hundreds of robots? If each robot is sending data, making decisions, encountering problems, how does one person understand and control the swarm? Researchers are working on it, but it remains an open question that highlights the field’s nascent state. The North Star is sending robot teams into extreme environments to accomplish tasks that individuals cannot. Search and rescue missions where swarms of drones scan forests or oceans. Precision agriculture with specialized robots doing spot weeding, spot harvesting, and targeted watering. Heterogeneous warehouse teams where arm-equipped robots do pick-and-place instead of bringing shelves to humans. These aren’t science fiction—they’re engineering challenges being solved today. The vision is pragmatic: get collaborative robots working reliably in real-world applications where their collective capabilities exceed what any single system could achieve.

Lessons from the racetrack. Long before he was navigating the complexities of robot learning, Jeremiah was managing a 50-person team as the co-founder and captain of a Formula SAE team in his freshman and sophomore years of college. This student engineering competition challenges teams to build a single-seater race car from scratch — an undertaking that consumed nearly all his time before his internship at NASA. This experience, he says, taught him one of the most crucial lessons in engineering: success is often more about project and time management than about creating a super fancy design. He emphasizes that implementation is everything. It’s far better to have a simple, working system than a complex, unfinished one. This philosophy has become a cornerstone of his approach to his PhD research. In the fast-paced world of academic publishing, a brilliant idea is useless if it cannot be implemented, tested, and validated in a timely manner. He notes that it’s easy for researchers to get stuck in their own heads, perpetually brainstorming without ever producing tangible results. His time leading the race car team instilled in him a practical, hands-on mindset focused on execution and delivering a finished product, a skill that has proven invaluable in his academic career. The pressure of building a functional race car from the ground up taught him to prioritize what is essential and to consistently move forward, one implemented component at a time.

Physics and math background. Sean never planned to become a robotics researcher. As a physics and math undergrad, he hit a wall when concepts stopped being intuitive. Quantum mechanics and Einstein notation broke his brain—he could do the math but couldn’t picture what was happening, and that frustrated him. Math felt the same way: just a tool, not a passion, especially when doing theorem proofs that didn’t map to anything tangible in reality. He’d always loved computer science, but his father warned him it would be outsourced and wasn’t a good career path. So he stuck with more “fundamental” subjects. At the end of undergrad, Sean asked himself: what’s at the intersection of physics, math, and computer science? Robotics. He started grad school still working theoretically—dots on screens, simulations—but it didn’t feel right. He felt like he was making things up, and if he couldn’t see a physical robot moving, he didn’t believe in what he was creating. That’s when something clicked. He started building robots and became a middleman between theoreticians and hardware, translating algorithms into working systems. This transformation revealed his true calling: bridging theory and practice. As a fundamentally practical person, Sean needed to see concepts implemented in the real world to believe in them. His physics and math background gave him the analytical foundation to understand complex systems, but his hands-on approach ensured those systems actually worked. This combination—theoretical rigor meeting practical implementation—became his defining trait. He realized there was a genuine need for researchers who could take theoretical work and validate it on actual hardware, even if not yet fully robust. That validation role, that translation work, is where Sean found his spark and has stayed ever since.

The impact of the Robotarium. The Robotarium has become one of Georgia Tech’s most recognizable robotics landmarks. With over 16,500 experiments submitted from every continent except Antarctica (Sean’s still trying to get that research station to submit something), it’s transformed how robotics research gets done. What started as Magnus Egerstedt’s tabletop idea—letting anyone in the world submit experiments and run them remotely—became a fully automated facility that wakes up, runs experiments with nobody in the room, and goes back to sleep. The journey wasn’t easy. Sean joined as a postdoc and had six months to transform that tabletop into a full room system. Early days meant running 10-20 experiments monthly, which was groundbreaking but manageable. Then suddenly they hit 500 experiments a month, and Sean was practically living in the lab with grad students, running submissions constantly. The challenge was diagnostic hell—when something went wrong, was it the user’s code, the robots, the tracking system, or the WiFi? They didn’t know, and neither did the users. This drove Sean to automate everything and design the GRITSBot X (now GTernal) for maximum robustness and easy maintenance. But the Robotarium’s real impact goes beyond research. Magnus understood it needed to be visually inspiring, and he was right. When K-12 students visit and see robots moving in formation, something clicks. Sean believes robotics is a “gateway drug to math and science”—students will learn trigonometry all day if they’re navigating a robot, but drawing triangles doesn’t motivate anyone. That’s the Robotarium’s legacy: democratizing access to robotics research while inspiring the next generation.

Key components of a robotics education. What makes robotics education effective? Sean’s answer, developed through co-teaching CS 7785 with Sonia Chernova, focuses on breadth and implementation. The course deliberately avoids deep dives into any single topic, instead giving students a full-spectrum view of robotic systems: controls, perception, planning, and machine learning all interacting with each other. Most importantly, it shows students why they’re taking other classes. When robotics curriculum gets heavy into math or computer science, students lose sight of the purpose. Why am I doing nonlinear Lyapunov stability proofs? The intro course answers that question. The pedagogy relies on open-ended lab challenges rather than prescriptive assignments. Instead of “use RRT for path planning,” students get: “Here are waypoints. I’m throwing obstacles in front of your robot. Hit the waypoints, don’t bump into anything. Go.” The course still teaches methods—if students don’t know anything, they get tools to solve it—but the emphasis is on making mistakes and figuring out how to just get it done. Once it works, then comes the conversation about optimization and better approaches. Sean’s mantra comes through here: “pick your poison”. There’s no optimal solution, only optimization with respect to a cost function. Some students’ cost function is simply getting an A with minimal effort, and that’s fine. But for those who dig deeper, the course reveals better approaches. The key components? Implementation is crucial—students must put their work on a robot at least once to understand the gap between theory and practice. Seeing the full spectrum matters too, because even if they contribute to a small chunk later, understanding how all components interact is essential for any robotics researcher.
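The “hit the waypoints, don’t bump into anything” challenge has a classic just-get-it-done baseline: artificial potential fields. The sketch below is a deliberately naive version—all gains, radii, and geometry are invented—and is exactly the kind of first solution the course expects students to build, get working, and then critique (potential fields famously get trapped in local minima):

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0,
                   influence=1.5, dt=0.05):
    """One step of a naive attractive/repulsive potential field."""
    # Attractive pull toward the current waypoint.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive push away from any obstacle inside its influence radius.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx
            fy += mag * dy
    # Cap the speed so a repulsive spike can't teleport the robot.
    norm = math.hypot(fx, fy)
    if norm > 1.0:
        fx, fy = fx / norm, fy / norm
    return (pos[0] + dt * fx, pos[1] + dt * fy)

waypoints = [(4.0, 0.0), (4.0, 4.0)]
obstacles = [(2.0, 0.2)]            # one obstacle near the first leg
pos = (0.0, 0.0)
min_clearance = float("inf")
for goal in waypoints:
    for _ in range(3000):
        pos = potential_step(pos, goal, obstacles)
        min_clearance = min(min_clearance,
                            min(math.hypot(pos[0] - ox, pos[1] - oy)
                                for ox, oy in obstacles))
        if math.hypot(pos[0] - goal[0], pos[1] - goal[1]) < 0.1:
            break
```

It hits both waypoints and keeps clearance from the obstacle—then comes the conversation about when this breaks and what RRT or optimization-based planners buy you.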

Challenges in real-world robotics. Why is real-world robotics so brutally hard? For multi-agent systems, Sean points to communication as the persistent killer—radio signals in real environments are just lossy and unreliable, period. But the deeper issue plaguing all robotics is what he calls the “straw that breaks the camel’s back” problem. It’s never one thing. If it were, we’d fix it and move on. Instead, it’s a thousand tiny issues that accumulate: maybe this integer should have been a 64-bit float instead of 32-bit. Maybe this message needs to transmit at a different rate. Maybe perception returned something unexpected. These little things add up catastrophically. One researcher might spend months developing an algorithm that works beautifully in simulation, only to have it fail on real hardware because of timing issues, sensor noise, or communication delays they never anticipated. It’s frustrating, exhausting work hunting down these problems. Yet paradoxically, this is what gives Sean hope for job security—AI isn’t solving these problems anytime soon. Real-world robotics will require very specific solutions for very specific problems for a long time. There’s a silver lining to this difficulty. First, it’s genuinely fun if you have the right mindset. You get better at diagnosing issues, learning to sniff out where problems lurk. Pattern recognition develops over time. Second, because it’s challenging, it’s deeply fulfilling when you finally get it working. That moment when the robot does the thing—when all those tiny pieces finally align and the system operates as intended—makes all the debugging worthwhile. Sean still gets excited seeing robots work, especially watching students struggle and then succeed. The difficulty isn’t a bug; it’s a feature that makes robotics research meaningful.
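The float-width example Sean gives is easy to reproduce. The sketch below (the update rate and velocity are invented for illustration) integrates the same odometry increment in 64-bit and in 32-bit precision; the 32-bit error accumulates silently—invisible in a short simulation, real on a robot that runs for minutes:

```python
import struct

def f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Integrate a small odometry increment at a high rate, the way a pose
# estimator might. The numbers are made up; the effect is generic.
dt = 0.001              # 1 kHz update loop
v = 0.123456789         # m/s, deliberately not exactly representable
steps = 200_000         # ~3.3 minutes of runtime

x64 = 0.0               # pose integrated in double precision
x32 = 0.0               # same integration, rounded to float32 each step
for _ in range(steps):
    x64 += v * dt
    x32 = f32(x32 + f32(v * dt))

truth = v * dt * steps
err64 = abs(x64 - truth)   # stays negligible
err32 = abs(x32 - truth)   # orders of magnitude larger
```

One of the thousand tiny issues: nothing crashes, no exception fires, the pose is just quietly wrong.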

Balancing academia and industry. Sean calls himself “a very strange individual” for walking the gap between academia and industry so constantly. His joint appointment between Georgia Tech and GTRI keeps him sharp in ways that staying on one side never could. Academia offers freedom to explore questions that have no immediate profit motivation—staring at ants could never be justified as GTRI work. But academics can also get lost in problems that are intellectually fascinating yet practically impossible to translate into working systems. That’s where industry perspective matters. Working at GTRI exposes Sean to real practical problems, hard implementation challenges that academics might not even know exist. He brings these concrete issues back to the academic side, where researchers have the freedom and theoretical tools to solve them. Conversely, he scans academic publications for work from five or ten years ago that showed promise and has been built upon, then brings it to GTRI to robustify and generalize for real-world deployment. That bidirectional flow—practical problems to academia, promising research to industry—is what makes the dual role valuable. It is hard, though. Wearing multiple hats as professor, industry researcher, and family man requires thriving in chaos. Sean admits he likes being busy and handling multiple things simultaneously. Over time, he’s gotten more rigorous with scheduling—his young son’s early wake-up means no more sleeping until 2 PM and working until 5 AM. He’s also learned to delegate better, though it was initially difficult. Early in his career, he preferred doing everything himself. Now he’s better at tasking others and helping course-correct when needed—a teaching and mentorship skill that only comes through experience. The balance isn’t easy, but the cross-pollination between academia and industry makes him more effective in both worlds.

Heterogeneity in swarm robotics. One of Sean’s key PhD findings was deceptively simple: groups of ants move at the speed of the slowest ant. This revealed something profound about heterogeneity—not every ant is the same, and natural variation creates interesting dynamics. Ants might have different strength and speed naturally, but heterogeneity can also emerge from temporary factors: maybe you didn’t sleep well last night, so even though you’re normally stronger, today you’re the limiting factor. This natural, sometimes transient heterogeneity shapes how collectives function. In robotics, heterogeneity is simultaneously a strength and weakness—pick your poison again. If everything is homogeneous, any robot can do any task, which simplifies planning and control. Add heterogeneity and complexity explodes. But here’s the thing: even supposedly identical robots have inherent heterogeneity. Tell twelve differential-drive robots to drive straight and they won’t. Manufacturing variations, motor differences, battery states—these create heterogeneity whether you design for it or not. You can also introduce intentional heterogeneity through different behaviors, sensors, or capabilities when compute and memory are limited. Sean sees heterogeneity’s potential but remains pragmatically focused on nearer-term problems. Researchers are already exploring mixed teams: surface vehicles with air vehicles with ground vehicles, or robots with different sensor suites working together. Heterogeneity might come from design or from failure—if a robot’s camera breaks but its microphone still works, how does the team adapt? How do they reallocate tasks based on remaining capabilities? These are fascinating questions, but Sean is clear-eyed about the current state: we’re still struggling to make robots navigate from point A to B without hitting things. True emergent heterogeneity like human cultural diversity is far off, even if designed heterogeneity is already here.
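The slowest-ant finding and the idea of transient heterogeneity fit in a few lines. In this toy model (the speeds and the “condition” distribution are assumptions, not Sean’s data), the team moves at the minimum of its members’ speeds, and a normally fast ant regularly turns out to be the day’s bottleneck:

```python
import random

def todays_bottleneck(base_speeds, rng):
    """The carrying team moves at the pace of its slowest member today.
    'Condition' models transient heterogeneity (a bad night's sleep):
    even a normally fast ant can be the limiting factor."""
    condition = [s * rng.uniform(0.5, 1.0) for s in base_speeds]
    slowest = min(range(len(condition)), key=condition.__getitem__)
    return slowest, min(condition)

rng = random.Random(42)
base = [1.0, 1.0, 1.2, 0.8]         # ant 3 is naturally slowest

speeds, upsets = [], 0
for _ in range(10_000):
    idx, speed = todays_bottleneck(base, rng)
    speeds.append(speed)
    if idx != 3:                    # someone else was today's bottleneck
        upsets += 1
avg_team_speed = sum(speeds) / len(speeds)
```

The group’s average pace sits below even the slowest member’s nominal speed, and the bottleneck shifts from run to run—the kind of natural, transient variation that shapes how collectives function.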

Breakthroughs in multi-robot systems. Looking ahead five to ten years, Sean sees several multi-robot breakthroughs coming. Autonomous vehicles are the obvious one—there’s reportedly an agreement between automakers allowing vehicles to communicate with each other, enabling collaborative, heterogeneous systems on our roads. Warehousing will improve with heterogeneous teams: instead of ground robots just bringing shelves to humans for pick-and-place, arm-equipped robots (humanoid or otherwise) will handle manipulation directly. Search and rescue is reaching maturity—sending swarms of vehicles across forests or oceans to find missing people, ships, or survivors. But Sean is particularly excited about precision agriculture, which has been “on the horizon” for years but is finally arriving. Imagine replacing one massive tractor with 10-20 specialized robots doing spot weeding, spot harvesting, and targeted watering. Instead of ripping up entire fields, robots find ripe berries and pick only those. They identify dry patches and water just those areas, reducing water usage and chemical inputs. Sean loves the laser weeding robots—tiny systems zapping individual weeds as they cruise through fields. These aren’t pipe dreams; the technology is maturing now. What makes these breakthroughs feasible is that they leverage multi-robot systems’ core strengths: coverage, redundancy, and specialization. One expensive robot failing ends the mission. Twenty cheaper robots with ten failures? The mission continues. We’re moving from asking “Can robots do this?” to “Can we make it reliable and cost-effective?” That shift from possibility to practicality marks the transition from research to deployment. The next five years will see collaborative autonomy moving from labs into fields, warehouses, roads, and disaster zones.

Dispelling myths about robotics. Sean encounters two opposite misconceptions about robotics, both equally wrong. Some people over-trust autonomous systems, believing robots are ready to take over the world. He saw this firsthand with emergency braking systems—people stopped braking themselves, assuming the car would handle it. When they see Boston Dynamics robots doing backflips, they don’t realize there are 800 failed attempts for every success. It’s still a really hard problem, and full autonomy isn’t coming soon. The flip side is complete distrust: robots don’t work, they’re too limited, they’ll never be practical. Sean’s job becomes pulling both extremes toward the practical middle, showing real examples of robots working in the wild while honestly representing current limitations. Trust is the critical issue on both sides. Over-trusting leads to complacency and accidents. Under-trusting prevents adoption of genuinely useful systems. Finding the right calibration of trust for each application is essential. This trust problem extends beyond public perception into research and deployment. When Sean explains his work to non-technical people (he uses the test of “could I explain this to my mom, a former third-grade teacher?”), he needs to balance enthusiasm with honesty. Yes, collaborative robot systems can do amazing things. No, they’re not ready for every application. The myth-busting work involves showing both the extraordinary progress we’ve made and the substantial challenges remaining. Robotics needs advocates who are neither hype-merchants nor cynics, but realistic translators of what’s possible today, what’s coming tomorrow, and what remains genuinely hard.

Legibility in human-swarm interaction. When a firefighter enters a building with drones overhead, they need to know what those drones are trying to do—that’s the legibility problem in human-swarm interaction. Sean admits it’s not his expertise (his student works with Julie Adams, a leader in this field), but he recognizes it as fundamentally hard. The core challenge: humans can only track so many individuals. With swarms of hundreds or thousands, you simply cannot monitor everything. The solution likely involves abstraction and trust. Instead of tracking individual robots, you understand at a high level what groups are doing: these quads are searching for survivors, those are scanning for hotspots. You abstract away the details and trust the system is functioning correctly. Sean draws a parallel to human society—when driving, you watch local neighbors but assume everyone else is doing things correctly. In business or any organized group, you abstract away most details and trust people are handling their responsibilities, only intervening when something goes wrong. This abstraction-and-trust approach is probably the best we can do currently. It requires designing swarm systems that surface the right information at the right level of detail. Too much information overwhelms operators. Too little leaves them blind to critical issues. Finding that balance—what information to show, how to aggregate it, when to alert humans—is an active research area. Sean’s glad he doesn’t have to work on it right now, but he acknowledges it’s crucial for deploying large-scale multi-robot systems. You can’t send a hundred robots into a disaster zone if the incident commander can’t understand what they’re doing.
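The abstraction-and-trust idea can be made concrete: collapse per-robot telemetry into a per-group view and surface only the exceptions. Everything below—field names, the battery threshold, the group labels—is hypothetical, just one way such an aggregation might look:

```python
from collections import Counter

def swarm_summary(statuses):
    """Abstract per-robot telemetry into the group-level picture an
    operator can actually track: what each group is doing, its worst
    battery state, and only the robots that need human attention.
    (Field names and thresholds are illustrative, not a real system.)"""
    summary = {}
    for s in statuses:
        g = summary.setdefault(s["group"], {
            "count": 0, "tasks": Counter(), "min_battery": 1.0, "alerts": [],
        })
        g["count"] += 1
        g["tasks"][s["task"]] += 1
        g["min_battery"] = min(g["min_battery"], s["battery"])
        if s.get("fault") or s["battery"] < 0.15:
            g["alerts"].append(s["id"])
    return summary

# Fifty robots collapse to two group lines an incident commander can read.
fleet = [{"id": i, "group": "search", "task": "sweep-grid", "battery": 0.8}
         for i in range(40)]
fleet += [{"id": 100 + i, "group": "thermal-scan", "task": "scan-hotspots",
           "battery": 0.6} for i in range(10)]
fleet[7]["battery"] = 0.05          # exactly one robot needs attention

view = swarm_summary(fleet)
```

The operator sees “40 searching, 10 scanning hotspots, one robot flagged”—abstracting away the details and trusting the rest, the same way a driver watches local neighbors and assumes everyone else is behaving.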

Creating a user-friendly Robotarium. Making the Robotarium accessible to everyone required solving two design problems: interface and reliability. For interface, the initial approach gave users nonlinear system dynamics and expected them to compute control inputs—a researcher’s natural instinct. Feedback was immediate: “I don’t necessarily want to do that. I might want position control or just tell the robot to go that way”. So they added higher-level control schemes to accommodate different expertise levels, from experts building trajectories to novices wanting simple waypoint control. Reliability was the harder challenge. Building robust hardware and software infrastructure is “an extremely daunting task,” even if Sean makes it sound simple. The engineering approach: identify the weakest link and design so you can either address or replace it. For the ground robots running hundreds of hours, cheap motors are the failure point. So they designed robots where motors pop off and on easily—quick swaps mean minimal downtime. Knowing fault points lets you design for graceful degradation and quick repair. Automation required identifying failure modes through their outputs. The system doesn’t need to know every possible failure—just the signals indicating something’s wrong. Robots autonomously charge after experiments. If one isn’t charging, something’s wrong: could be tracking, network, or hardware, but the system knows to alert humans. At the end of the day, some problems need human intervention—a robot stuck on the battery charger can’t flip its own switch. So the system sends email and Slack alerts when it can’t resolve issues itself. This layered approach—robust hardware, smart failure detection, appropriate human escalation—keeps the Robotarium running with minimal manual intervention.
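That detect-and-escalate pattern might look something like the sketch below. It is not the Robotarium’s actual code—the state fields, timeout, and alert text are all invented for illustration—but it captures the design: watch the one signal that something is wrong (a robot that should be charging but isn’t) and hand the rest to a human:

```python
import time

def charging_watchdog(robot_states, timeout_s=300, now=None, notify=print):
    """Post-experiment health check: rather than diagnosing every
    possible failure mode, watch the signal that indicates something
    is wrong and escalate. (Hypothetical sketch; field names and
    thresholds are illustrative.)"""
    now = time.time() if now is None else now
    unresolved = []
    for robot_id, state in robot_states.items():
        docked = state.get("on_charger", False)
        current = state.get("charge_current_a", 0.0)
        if not docked and now - state.get("experiment_end", now) > timeout_s:
            # Never reached the dock: could be tracking, network, or
            # hardware -- the system can't tell which, so a human decides.
            unresolved.append((robot_id, "never reached charger"))
        elif docked and current <= 0.01:
            # Docked but drawing no current, e.g. a stuck charger switch.
            unresolved.append((robot_id, "docked but not drawing current"))
    for robot_id, reason in unresolved:
        # In production this would go to email/Slack; notify stands in.
        notify(f"[robotarium-alert] robot {robot_id}: {reason}")
    return unresolved

t0 = time.time()
states = {
    1: {"on_charger": True, "charge_current_a": 0.8},       # healthy
    2: {"on_charger": True, "charge_current_a": 0.0},       # not charging
    3: {"on_charger": False, "experiment_end": t0 - 600},   # never docked
}
alerts = charging_watchdog(states, now=t0, notify=lambda msg: None)
```

Robots 2 and 3 get escalated; robot 1 never bothers anyone—the layered robust-hardware, smart-detection, human-escalation approach in miniature.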

Future of the Robotarium. The Robotarium’s immediate future has two priorities: quadrotors and education. Sean’s dream is giving people access to ground robots, walking robots, underwater robots, and flying robots—basically every domain. The next step is quads (drones), which present unique challenges. First is interface: experts might want torque control on motors, but novices just want waypoint control. Finding the right abstraction level for different users is tricky. Second is safety: when ground robots fail, they stop. When flying robots fail, they’re rocks potentially heading toward expensive tracking cameras. Making quads robust enough to operate autonomously is really hard. The educational push reflects surprising usage patterns—the Robotarium is now roughly 50/50 research and education, though it was built by researchers for researchers. There’s been strong demand for K-12 accessibility. Sean believes robotics is a “gateway drug to math and science”—students will enthusiastically learn trigonometry to navigate robots, though drawing triangles on paper motivates nobody. Plans include building curriculum, creating block programming interfaces, and developing tools for logical thinking and programming concepts. Beyond theory or simulation, real robots help students build intuition about how small errors accumulate to cause system failures—that “straw that broke the camel’s back” lesson. Further out, Sean envisions remotely accessible research platforms for expensive systems: Atlas robots, large manipulator arms, stuff too consequential for general public access but valuable for the research community. He’d also love an outdoor Robotarium dealing with real-world conditions, though finding outdoor space in Atlanta is challenging. These expansions maintain the core vision: democratizing access to robotics hardware so anyone, anywhere can test ideas without maintaining their own systems.

On the go? An audio-only version of this conversation is also available.

Sean Wilson is a Senior Research Engineer serving as the Collaborative Autonomy Branch Chief for the Aerospace, Transportation & Advanced Systems Laboratory at the Georgia Tech Research Institute (GTRI). Additionally, he serves as the Director of the Robotarium Lab (https://www.robotarium.gatech.edu/) at Georgia Tech, which enables people around the world to deploy robotic algorithms onto robotic hardware free of charge.