In this conversation, Franky discusses the evolution of AI wearables, focusing on their transition from passive recording devices to real-time assistants. He emphasizes the importance of understanding user behavior, the challenges of false positives, and the need for a human in the loop. The discussion also covers the psychological effects of being recorded, privacy concerns, and the technical interplay between edge and cloud computing.

Current state of AI wearables. Most wearables today are what Franky calls “point and shoot devices” — simple tools where you manually start recording, stop recording, and upload footage later. This applies to video devices like GoPros and phones, as well as audio recorders. The model is straightforward: humans decide what to record, for how long, and then retrieve it later if needed. Some devices have slightly fancier capabilities, like detecting faces, but that’s essentially where the technology stands today. While this approach works for certain applications, Franky believes we can do significantly better. The past few years have demonstrated substantial opportunities for improvement. The fundamental limitation is that these devices require constant human agency—you must remember to press record, which disrupts your natural behavior and workflow. This becomes especially problematic in high-stakes situations where your attention needs to be fully on what’s happening in front of you, not on operating a device. The opportunity lies in more proactive, AI-enabled behavior from wearables themselves. Rather than replacing human judgment, these smarter devices could work in conjunction with users, assisting with capturing and analyzing moments without requiring explicit activation. The technology exists to move beyond passive recording toward systems that understand context and can help users document important moments they might otherwise miss. This shift from reactive to proactive wearables represents the next frontier in the field.

Effortless wearable activation. Making wearables effortless to activate requires understanding users deeply—their current behavior, where lapses occur, and what causes those gaps. In fields like private security, hospitals, and retail, honest mistakes happen constantly. People genuinely forget to start recording because it’s not part of their everyday behavior. Then there are situations where events unfold so quickly that even someone with good intentions simply doesn’t have time to activate their device. The solution starts with three key questions. First, what defines the behavior and intended goal? Understanding when users would want to capture something helps identify the right triggers. Second, how do they currently initiate recording? Is it a button press, a trigger phrase, or a specific codeword used in their profession? Third, what interaction patterns and policies determine when recording should begin? Law enforcement provides an interesting case study in sophisticated activation protocols. Modern body cameras often start recording before incidents even begin—as officers exit their vehicles, they activate cameras because protocol dictates early activation, not waiting until confrontations escalate. These systems have advanced auto-activation features: they trigger when sirens turn on, when officers draw tasers or guns, or when nearby officers activate their cameras. This works because law enforcement goes into situations anticipating potential incidents, and they’re legally required to record. Officers are trained to make camera activation part of their pre-engagement routine, so it becomes second nature well before any physical altercation occurs. The challenge in other domains—private security, hospitals, retail—is fundamentally different and harder. Unlike law enforcement responding to calls with clear incident potential, workers in these fields often can’t predict when situations will escalate. A retail employee doesn’t know if a customer interaction will turn hostile. A security guard patrolling doesn’t know which conversation will require documentation. There’s no equivalent to “exiting the vehicle” that signals recording should begin. This unpredictability makes the activation problem considerably more complex. The same applies to everyday contexts: you’re wearing a pendant or ring for recording important conversations, but you forget to press record because there’s no clear trigger moment. Making activation effortless means moving toward more open-ended, hands-free trigger mechanisms. This could involve detecting verbal cues beyond simple keywords—phrases like “please leave the property” or “stop” in private security contexts. But the key is simplicity: the more obvious and intuitive the activation method, the better. This must be balanced with understanding that context matters enormously, and what works for one profession or situation may not transfer to another. Interestingly, observations from Plix reveal that users do eventually develop recording routines. Over time, they rely less on auto-triggering features and more on their own muscle memory as activation becomes second nature—similar to how law enforcement officers are trained. This suggests that while smart activation helps during the learning curve and catches forgotten moments, the ultimate goal isn’t replacing human judgment but supporting users until good habits form. In the end, the aim is devices that understand when you would have wanted to record and step in accordingly: not instead of you, but alongside you.
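
To make the trigger-policy idea concrete, here is a minimal sketch of rule-driven auto-activation in the spirit of the body-camera triggers described above. The event names and the in-memory policy object are hypothetical stand-ins, not any real body-camera API:

```python
# Minimal sketch of policy-driven auto-activation, loosely modeled on the
# triggers described above (siren on, weapon drawn, nearby camera active).
# Event names and the ActivationPolicy object are illustrative assumptions.
from dataclasses import dataclass, field

TRIGGER_EVENTS = {"siren_on", "taser_drawn", "gun_drawn", "nearby_camera_active"}

@dataclass
class ActivationPolicy:
    triggers: set = field(default_factory=lambda: set(TRIGGER_EVENTS))
    recording: bool = False

    def on_event(self, event: str) -> bool:
        """Start recording the first time any configured trigger fires."""
        if not self.recording and event in self.triggers:
            self.recording = True
            print(f"recording started (trigger: {event})")
        return self.recording

policy = ActivationPolicy()
for evt in ["door_open", "siren_on", "gun_drawn"]:
    policy.on_event(evt)  # starts recording at "siren_on", then stays on
```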

Situational awareness across different contexts. Separating signal from noise is highly context-dependent, with no silver bullet solution. Franky illustrates this with private security as a concrete example. Typically, escalations begin with verbal altercations—a security guard might say “please leave the property, you’re trespassing” or ask someone to stop their behavior. These verbal deterrents often signal that something physical could follow, making them natural triggers for recording. But the challenge runs deeper than simple keyword detection. These conversations can last long periods with extensive back and forth, making it unclear when recording should actually begin. Simply triggering on words like “stop” creates massive false positive problems—that word appears constantly in normal contexts through casual conversations, retelling stories, or interactions with passersby. The system needs richer contextual understanding to distinguish between a security guard telling someone to stop trespassing versus someone saying “stop by later for coffee.” Context comes from understanding the user’s typical interactions, professional protocols, and behavioral patterns. For security personnel, this might include listening for professional phrases like “I just want to stop by and make sure you’re good” followed by affirmative responses, which signal normal operations rather than escalations. High-stakes environments like policing differ dramatically from coffee shops in terms of what constitutes a critical moment worth capturing. The technical challenge is building systems that can understand these nuanced differences—not just detecting keywords, but comprehending tone, sequence, professional jargon, and situational cues. Vision can add crucial context that audio alone misses, providing information about body language, physical proximity, and environmental factors. Success requires deep domain knowledge about each use case, extensive training data that captures the full spectrum of interactions, and sophisticated AI that can synthesize multimodal signals into reliable situational awareness.
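
As a toy illustration of why context matters more than keywords, the sketch below scores an utterance by combining a bare keyword hit with invented contextual phrases and hand-tuned weights. A real system would use a trained multimodal model rather than lists like these:

```python
# Illustrative sketch only (not a production classifier): score an utterance
# for escalation by weighing a keyword hit against contextual cues, rather
# than triggering on the bare word "stop". Phrases and weights are made up.
ESCALATION_PHRASES = ["leave the property", "you're trespassing", "back away"]
BENIGN_CONTEXTS = ["stop by", "bus stop", "stop for coffee"]

def escalation_score(utterance: str) -> float:
    text = utterance.lower()
    score = 0.0
    if "stop" in text:
        score += 0.3  # weak signal on its own
    score += 0.5 * sum(p in text for p in ESCALATION_PHRASES)
    score -= 0.4 * sum(p in text for p in BENIGN_CONTEXTS)
    return max(0.0, min(1.0, score))

print(escalation_score("Please leave the property, you're trespassing."))  # 1.0
print(escalation_score("Stop by later for coffee."))                        # 0.0
```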

Potential of false positives. The question of false positives hinges on what downstream actions occur when the system detects something. If the action is simply starting to record or capture what’s happening, false positives are actually acceptable. You can analyze later to confirm whether it was truly a critical incident or discard it if it wasn’t. Making that determination afterward is far better than missing the moment entirely—false negatives in high-stakes situations are catastrophic. Human-in-the-loop review is essential when specific actions need to be taken—alerting supervisors, filing reports, or escalating situations. But real-time human oversight doesn’t scale, especially for wearables. Unlike home security systems where a human can review every camera trigger, having humans constantly monitor wearable feeds from potentially thousands of users is simply impractical. The infrastructure and personnel requirements make it economically and logistically infeasible. The better approach is capturing with reasonable confidence that something important is occurring, even if some false positives slip through, then verifying afterward whether it truly mattered. This philosophy extends beyond security to wellness applications. If your wearable detects irregular heartbeats or voice shakiness that might indicate a medical emergency, you’d rather it flag potential issues that turn out to be nothing than miss a genuine crisis. The same applies to professional contexts—capturing an interaction that seemed tense but resolved peacefully is infinitely preferable to missing evidence of actual misconduct or danger. This creates a fundamental design principle: optimize for recall (catching all the important moments) rather than precision (avoiding false alarms). Storage is relatively cheap; missing critical footage is expensive or dangerous. The human verification step provides the precision—confirming what truly needs attention and discarding what doesn’t. The AI’s job is ensuring nothing important falls through the cracks.
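
One hedged way to express the recall-over-precision principle in code: given detector scores on labeled examples, pick the threshold as the score of the k-th highest-scoring true incident, so a target fraction of real incidents is always captured and human review absorbs the extra false positives. The data below is toy data, not real incident data:

```python
# Sketch of the "optimize for recall" principle: choose the detection
# threshold so that a target fraction of known positives scores above it;
# humans then filter out the false positives afterward. Toy data only.
def threshold_for_recall(scores, labels, target_recall=0.99):
    positives = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    if not positives:
        raise ValueError("no positive examples")
    # The k-th highest positive score is the loosest threshold that still
    # catches at least target_recall of the positives.
    k = max(1, int(round(target_recall * len(positives))))
    return positives[k - 1]

scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0]
print(threshold_for_recall(scores, labels, target_recall=1.0))  # 0.3
```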

Expanding wearables to a wellness context. The principles of proactive wearable activation extend naturally into health and wellness. Imagine a Fitbit detecting heart arrhythmia, or voice analysis picking up shakiness that could indicate distress. Just as security wearables should capture escalating situations without waiting for manual activation, wellness devices should alert to potential medical issues before they become emergencies. This might mean notifying healthcare providers or emergency services when biomarkers indicate serious concern, though stopping short of automatically calling 911 without confirmation. The medical field has actually pioneered this space effectively—health wearables are more mature than their security counterparts. Devices already track heart rate irregularities, sleep patterns, activity levels, and numerous other health metrics. What makes wellness particularly interesting is that often no single signal indicates a problem; it’s the combination and progression over time that matters. A slightly elevated heart rate means nothing in isolation, but combined with irregular sleep, reduced activity, and changes in other biomarkers, it could signal something requiring attention. The actions triggered by wellness wearables differ from security applications in time-criticality. Detecting professionalism violations or safety incidents might be urgent but still allows for human validation. Improved health metrics showing steady fitness gains are genuinely positive but not urgent. Medical emergencies obviously require immediate response. This spectrum of urgency changes how systems should be designed—what gets escalated immediately versus what aggregates for weekly review. Wellness contexts also raise different validation questions. Should an AI definitively tell you something is wrong with your health, or should it present information and recommend professional consultation? Franky believes strongly that AI should aggregate and present information, not make final decisions—especially about health. The technology should empower users with insights and awareness, not replace medical judgment with algorithmic determinations.
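
A minimal sketch of the “combination and progression” idea, under stated assumptions: each biomarker is compared to the wearer’s own recent history via a z-score, and an alert fires only when several metrics drift together. The baselines, weights, and the 2.0 cutoff are all illustrative:

```python
# Hedged sketch of the "no single signal" idea: compare each biomarker to a
# personal baseline (z-score) and flag only when several drift at once.
# Baselines and the cutoff value are illustrative assumptions.
from statistics import mean, stdev

def zscore(value, history):
    return (value - mean(history)) / (stdev(history) or 1.0)

def wellness_alert(today, baselines, cutoff=2.0):
    # today: metric -> latest value; baselines: metric -> recent history
    drift = {m: abs(zscore(v, baselines[m])) for m, v in today.items()}
    combined = mean(drift.values())
    return combined > cutoff, drift

baselines = {"resting_hr": [58, 60, 59, 61], "sleep_hours": [7.5, 7.0, 7.8, 7.2]}
print(wellness_alert({"resting_hr": 75, "sleep_hours": 4.5}, baselines))
```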

Integrating different wearables into one. The vision of unified wearables raises interesting possibilities and challenges. Today’s landscape is fragmented: Garmins and Fitbits for wellness, Meta’s Ray-Ban glasses for visual capture, audio pendants for meeting transcription. Each serves a distinct purpose with its own hardware and software ecosystem. Consolidation seems attractive—fewer devices to manage, more seamless integration—but Franky questions whether a single sensor can truly do everything. Different sensors naturally specialize based on their form factor and placement. A camera worn on your chest can’t effectively capture heart rate data, but a ring or watch with skin contact can easily monitor pulse. Conversely, chest or head-mounted cameras provide superior visual perspective compared to wrist-worn devices. This suggests a networked approach where multiple specialized devices communicate and synthesize their data might be more effective than forcing everything into one form factor. The real potential lies in devices interacting with each other, creating a personal sensor network. This isn’t a new concept for general computing, but it remains relatively unexplored for wearables on a single person. Each device could specialize in what its form factor does best—visual capture, biometric monitoring, audio recording—while sharing data to create holistic situational awareness. Imagine wellness data from a watch combining with visual context from glasses and audio from a pendant to understand not just that your heart rate spiked, but that it coincided with a tense conversation at work. However, asking people to wear multiple devices creates its own problems. Nurses, security guards, and retail workers already carry radios, notepads, phones, and other tools—adding multiple wearables becomes burdensome. The design challenge is balancing specialization’s advantages against convenience and user comfort. Perhaps the answer isn’t complete integration or complete separation, but thoughtful decisions about which capabilities truly need to be combined and which can remain distributed across a minimal, carefully chosen set of devices that users will actually wear consistently.
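
To illustrate the personal sensor network idea, here is a small sketch that merges timestamped event streams from hypothetical devices into one timeline, so a heart-rate spike can be read alongside what the glasses and pendant observed at the same moment. Device names and event payloads are invented:

```python
# Sketch of a personal sensor network: merge timestamped events from several
# specialized wearables into one timeline so cross-device patterns (e.g., a
# heart-rate spike during a tense conversation) become visible.
import heapq

watch   = [(10.0, "watch", "hr_spike"), (40.0, "watch", "hr_normal")]
glasses = [(9.5, "glasses", "person_approaching")]
pendant = [(9.8, "pendant", "raised_voices")]

# heapq.merge lazily merges already-sorted streams by timestamp.
timeline = heapq.merge(watch, glasses, pendant, key=lambda e: e[0])
for ts, device, event in timeline:
    print(f"{ts:6.1f}s  {device:8s}  {event}")
```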

Data sharing hurdles. Different wearable companies maintain their own proprietary data ecosystems and sharing practices, creating significant barriers to integration. Even if multiple specialized devices could theoretically work together to provide richer insights, getting data to flow between them is complicated by business models, privacy policies, and technical architectures that weren’t designed for interoperability. Each company has incentives to keep users within their ecosystem rather than enabling seamless data exchange with competitors. This fragmentation means the networked wearable vision remains largely theoretical despite its technical feasibility. A Garmin watch might collect excellent biometric data while Meta glasses capture visual context, but getting those data streams to meaningfully combine requires overcoming both technical protocols and business reluctance to share. Companies worry about losing competitive advantage, liability for how partner companies use their data, and complex privacy compliance across jurisdictions. The solution likely requires unifying frameworks that can bridge these proprietary systems. These frameworks would need to handle different data formats, privacy policies, and access controls while providing developers with consistent interfaces for building applications that leverage multiple wearable data sources. Open standards could help, but adoption remains slow when companies see data lock-in as a business asset rather than a barrier to overcome. There’s also the question of whether frameworks should be open source or proprietary themselves, and who would develop and maintain them. Industry consortiums sometimes succeed in creating such standards, but they require broad participation and alignment of incentives. Until these data sharing hurdles are addressed, the promise of holistic wearable intelligence from multiple devices will remain largely unrealized, with each device operating in isolation despite their potential to offer far greater value through collaboration.
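
A rough sketch of what such a unifying framework might look like at the code level: per-vendor adapters normalize proprietary records into one shared type that downstream applications can consume. The vendor payload shapes here are invented for illustration; real vendor APIs differ:

```python
# Sketch of a unifying framework: adapters translate each vendor's
# proprietary payload into one shared record format. Payload shapes are
# hypothetical, not actual vendor APIs.
from dataclasses import dataclass

@dataclass
class HeartRateSample:
    timestamp: float
    bpm: int
    source: str

def from_vendor_a(payload: dict) -> HeartRateSample:
    return HeartRateSample(payload["ts"], payload["heart_rate"], "vendor_a")

def from_vendor_b(payload: dict) -> HeartRateSample:
    return HeartRateSample(payload["time"]["epoch"], payload["bpm"], "vendor_b")

samples = [
    from_vendor_a({"ts": 1700000000.0, "heart_rate": 72}),
    from_vendor_b({"time": {"epoch": 1700000030.0}, "bpm": 75}),
]
print(samples)  # one uniform stream regardless of originating device
```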

Design considerations for wearables. Designing wearables is complex enough that world-class product designers like Jony Ive spend years perfecting form factors. Multiple critical factors must be balanced. First and foremost: what needs to be captured? If video is required, a wrist placement doesn’t work—you need front-facing perspective, typically from chest or head height to capture above-waist interactions. This immediately eliminates many potential mounting locations and raises questions about attachment method: clips, magnets, vests, or integrated garments. But there’s no one-size-fits-all solution. Nurses wear scrubs, some security guards wear polos, others wear vests, some wear suits. The variety of work attire means attachment methods must be flexible or profession-specific. Weight is another crucial factor. The tradeoff between battery life, processing power, capabilities, and physical size forces difficult compromises. Most users prefer discreet wearables that don’t dominate their appearance, but smaller devices mean smaller batteries and less capability. Form factor affects social dynamics too. Glasses offer huge advantages because they’re already socially normalized—people wear them constantly, and they don’t dramatically change behavior. Franky shares an interesting insight from a friend: using Meta glasses to record their newborn worked beautifully because the baby never changed behavior around what looked like normal glasses. Contrast this with a chest-mounted camera, which is obvious and can alter how both the wearer and others behave. Sometimes that’s desirable—a visible body camera serves as a deterrent—but for capturing natural interactions, discretion has value. Other form factors like helmets make perfect sense for construction workers but would be absurd in retail. The wearing experience matters too: Is it comfortable for eight-hour shifts? Does it interfere with other required equipment? Can it withstand the physical demands of the job? These considerations extend far beyond aesthetics into ergonomics, durability, social acceptability, and practical usability in actual working conditions.

Psychological impact of recording devices. Visible recording devices absolutely change behavior, and that’s often intentional. Body cameras with red recording lights, like those from Plix, explicitly signal to everyone present that they’re being captured. This parallels grocery store signs that say “smile, you’re being recorded”—the point is deterrence. Research confirms that visible recording influences both the wearer’s behavior and those they interact with. For law enforcement and security contexts, this deterrent effect is actually desirable. It reminds everyone, including the officer or guard, that their actions are documented, encouraging professionalism and de-escalation. The psychological awareness that you’re being recorded tends to moderate behavior on both sides of interactions. But this raises interesting questions: If the goal is capturing genuine professionalism and authentic interactions, does the camera’s presence create a performance for the device rather than revealing true behavior? Are people “psychologically flipping” to act differently specifically because they’re on camera? The answer depends on the intended use case. For accountability and incident documentation, changing behavior toward more professional conduct is precisely the goal. For wellness monitoring or naturalistic observation, behavior change might skew the data you’re trying to collect. That’s why devices include both visual cues like lights and audible ones like beeps—they serve the dual purpose of notifying others they’re being recorded (addressing consent and privacy) and reminding wearers that the device is active. Interestingly, this notification isn’t just for the people being recorded—it’s equally important for the wearer themselves. You want to know if you accidentally started recording, or if the device is unexpectedly capturing your conversations. Haptic feedback, vibration, visual indicators, and audio cues all work together to create intuitive awareness of device status. When designing wearables, these notification mechanisms shouldn’t be afterthoughts but first-class considerations that shape both human behavior and trust in the technology.

Privacy concerns with wearables. Privacy concerns with wearables must be addressed through both legal frameworks and thoughtful design. In the United States, consent laws vary by state—some require only one-party consent (meaning just one person in a conversation must agree to recording), while others require two-party consent (everyone must agree). Private establishments can also set their own recording policies. These legal boundaries exist for good reason and must be respected. Beyond legal requirements, transparency is essential. Recording indicators like lights, beeps, or verbal announcements (“recording started”) serve critical notice functions. This mirrors Zoom meetings that announce “this meeting is being recorded” or customer service calls that warn “this call may be monitored for quality assurance.” People should clearly understand what’s being captured and when. Franky notes this is particularly important in an era where devices are increasingly sleek and discreet—Ray-Ban glasses might not make obvious recording sounds, leaving others unaware they’re being captured. Privacy protections should be designed in from the start, not added as afterthoughts. This includes capabilities like redacting personally identifiable information (PII) from captured footage or audio, obfuscating faces when not necessary for the use case, and providing clear data governance policies. Privacy is sometimes dismissed as an “afterthought of design,” an approach Franky strongly rejects: it must be considered from the beginning, influencing decisions about notification mechanisms, data handling, and user controls. Consent isn’t just about others—it’s about the wearer too. You need to know if you accidentally triggered recording or if the device is capturing when you thought it wasn’t. Visual and audible cues plus haptic feedback ensure wearers maintain awareness and control. The challenges aren’t unique to wearables—Echo devices and Google Home raised similar concerns about always-listening technology. But wearables’ mobility and personal nature amplify these issues. Clear guidelines, transparent operation, robust data governance, and user control mechanisms must all be core design principles rather than compliance checkboxes addressed after the product is built.

Edge vs cloud interplay. Edge devices—whether phones, wearables, or other personal technology—differ fundamentally from cloud infrastructure in their capabilities and constraints. Edge devices are small, physically limited, and most critically, battery-powered. This means their computational capacity is strictly bounded. Cloud infrastructure, by contrast, sits in data centers with effectively unlimited power and can scale compute resources dramatically. The cloud offers processing power, storage capacity, and specialized hardware that no edge device can match. This creates a natural division of labor. Edge devices capture data and perform lightweight local processing, then communicate with the cloud to offload heavy computation, storage, and analysis. The cloud also delivers updates, manages long-term data retention, and enables complex AI models too large to run on battery-powered devices. The two work hand-in-hand: edge devices as distributed sensors and light processors, cloud as the powerful analytical brain. The relationship isn’t optional for most advanced wearable applications—it’s fundamental to the architecture. Running sophisticated computer vision models or large language models directly on a wearable would drain the battery in hours. But uploading raw video continuously to the cloud for processing has its own problems: bandwidth limitations, latency, cost, and privacy concerns. The art lies in deciding what processing happens where. This creates fascinating research questions: What preprocessing should happen on-device before uploading? Can you compress, filter, or extract features locally to reduce data transmission? How do you handle intermittent connectivity when the device temporarily can’t reach the cloud? What falls back to device-only operation versus waiting for cloud availability? The edge-cloud relationship in wearables isn’t just a technical detail—it’s a core architectural question that shapes battery life, latency, capability, privacy, and cost. Getting this balance right determines whether a wearable is practical for real-world deployment or remains a prototype that works only in ideal conditions.
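
As a hedged example of on-device preprocessing before upload, the sketch below uses a cheap frame-difference score, a stand-in for whatever lightweight model a real device would run, to decide which frames are worth transmitting to the cloud:

```python
# Sketch of edge-side filtering: rather than streaming raw frames, the device
# computes a cheap local score and uploads only frames that pass a threshold.
# The motion metric, threshold, and toy "frames" are illustrative.
def frame_difference(prev, curr):
    """Mean absolute pixel difference: a cheap, battery-friendly motion proxy."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def select_for_upload(frames, threshold=10.0):
    uploads = []
    for prev, curr in zip(frames, frames[1:]):
        if frame_difference(prev, curr) > threshold:
            uploads.append(curr)
    return uploads

frames = [[0] * 8, [0] * 8, [90] * 8, [92] * 8]  # toy 8-pixel "frames"
print(f"uploading {len(select_for_upload(frames))} of {len(frames)} frames")
```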

Docking stations for body cameras. Docking stations for body cameras might seem old-fashioned in an era of wireless everything, but they offer significant practical advantages. The hands-free, wireless approach is certainly convenient—upload anywhere without returning to a central location. This matters when officers or security personnel take devices home rather than returning to headquarters, or when they’re constantly mobile without regular access to a docking point. However, docking stations provide crucial benefits that wireless transfer struggles to match. Most importantly, they’re typically wired with direct high-bandwidth network connections. Downloading hundreds of megabytes or even gigabytes of video footage over LTE or even Wi-Fi can be painfully slow and unreliable depending on network conditions. Wired connections through docks transfer data much faster and more reliably. Plus, you’re charging the device anyway—why not leverage that stationary time for data transfer? Docking arrays—multiple charging stations together—create additional advantages. They enable simultaneous data offload from many devices at once, rather than bottlenecking on wireless bandwidth shared across dozens of cameras trying to upload footage. Docks also provide physical device tracking: if a camera is in its designated dock, you know exactly where it is and that the assigned user has checked it in. This inventory management becomes important for organizations managing fleets of expensive devices. The ideal solution isn’t choosing between wired and wireless, but supporting both. Mission-critical situations might require pushing data from the field immediately, so devices need wireless capability for urgent transfers. But bulk uploads benefit enormously from docking stations’ speed and reliability. Different situations demand different approaches, and robust wearable systems should accommodate both rather than forcing users into a one-size-fits-all solution. The dock versus wireless question is really about when each makes sense, not which is universally better.
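
A back-of-the-envelope calculation shows why wired docks matter for bulk uploads. The link speeds below are illustrative assumptions, since real LTE, Wi-Fi, and dock throughput vary widely:

```python
# Rough transfer-time comparison for 2 GB of footage (a plausible shift's
# worth of video). Link speeds are illustrative assumptions, not measurements.
size_bits = 2 * 8 * 10**9  # 2 GB expressed in bits
for name, mbps in [("LTE upload", 20), ("Wi-Fi", 100), ("wired dock", 1000)]:
    seconds = size_bits / (mbps * 10**6)
    print(f"{name:11s}: {seconds / 60:6.1f} minutes")
# LTE upload :   13.3 minutes
# Wi-Fi      :    2.7 minutes
# wired dock :    0.3 minutes
```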

Pros and cons of cloud validation. The cloud offers dramatically more powerful computational resources than any edge device, enabling sophisticated AI models and extensive analysis impossible on battery-powered wearables. But this power comes with significant tradeoffs. Sending data to the cloud, waiting for processing, and receiving results back introduces latency. Even in ideal conditions with fast networks and quick cloud inference, you’re looking at seconds of round-trip time—perhaps a second to upload, a second to process, and a second to receive results. Network quality drastically affects this equation. Poor connectivity doesn’t just add frustration—it can double or triple the time even though cloud processing time remains constant. If network conditions are spotty or you’re in an area with weak signal, even lightweight cloud requests become painfully slow. This creates fundamental tension: the cloud has the power to run much better models and achieve higher confidence in decisions, but getting data there and back can introduce unacceptable delays for time-sensitive applications. The challenge extends beyond pure latency to reliability. Sometimes you simply don’t have good network connectivity at all. In these situations, having no local processing capability means the device becomes useless regardless of how powerful the cloud resources are. This has sparked interesting research into edge-cloud architectures: Where should different processing tasks live? What can run on-device despite battery constraints? What must be offloaded despite latency penalties? Are there intermediate “edge clouds”—local processing points between devices and distant data centers that offer a middle ground? For high-stakes scenarios like security incidents, this dance becomes critical. You might want cloud validation to increase confidence before taking action, but response time matters enormously. The solution often involves tiered processing: lightweight models on-device make initial assessments, trigger recording or alerts, then upload to the cloud for validation and deeper analysis. This acknowledges that neither pure edge nor pure cloud is optimal—real systems need intelligent orchestration across both, understanding when to use each based on urgency, battery state, network quality, and confidence requirements.
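
Here is a minimal sketch of that tiered orchestration, with placeholder scoring functions standing in for real on-device and cloud models: cheap, reversible actions fire on the local score alone, while costly actions wait for cloud validation when the network allows:

```python
# Sketch of tiered edge-cloud processing: a lightweight on-device check acts
# immediately (start recording), and cloud validation gates costlier actions.
# Scoring functions and thresholds are stand-ins, not a real API.
def on_device_score(clip) -> float:
    return 0.7   # placeholder for a small, battery-friendly model

def cloud_score(clip) -> float:
    return 0.95  # placeholder for a large cloud-side model

def handle_clip(clip, network_ok: bool, act_threshold=0.5, alert_threshold=0.9):
    actions = []
    if on_device_score(clip) >= act_threshold:
        actions.append("start_recording")       # cheap, reversible action
        if network_ok and cloud_score(clip) >= alert_threshold:
            actions.append("alert_supervisor")  # costly action needs validation
        elif not network_ok:
            actions.append("queue_for_cloud_review")
    return actions

print(handle_clip("clip-001", network_ok=True))
print(handle_clip("clip-001", network_ok=False))
```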

10-year prediction for AI wearables. Looking ahead, Franky expects continued proliferation of wearables, particularly in health where smartwatches and devices like the Oura Ring have already gained traction. Visual wearables like Meta glasses are starting to catch on despite current limitations in battery life and capabilities. As these constraints ease through better hardware and more efficient AI, adoption will accelerate. The future isn’t just about passive capture but proactive assistance—devices that don’t just record what’s in front of you, but help you act on it. This might not be for external oversight by supervisors, but for personal benefit: tracking conversations where you want to add reminders or organize yourself, almost like self-scheduling with a private, local AI assistant. Imagine wearables with local language models that help you stay organized based on what you naturally say throughout the day, without sending your conversations to the cloud. A major shift will be cultural acceptance. Initially, visible wearables change how people behave around the wearer—they become self-conscious, wondering “what is that thing?” and “should I be concerned about what I say?” But as wearables become more common, this reaction will fade just as people stopped being paranoid about smartphones everywhere. This normalization is crucial for wearables to reach their potential. However, acceptance requires radical transparency about what devices do, how they’re designed, what data they collect, and how that data is governed. Privacy and data handling can’t be afterthoughts—they must be first-class design considerations. Franky envisions AI companions that aggregate information and help with decision-making, but strongly believes AI should not make final decisions for us. Whether it’s determining if an incident was handled correctly, making medical diagnoses, or automatically scheduling appointments, final judgment should remain human. AI’s role is providing information and analysis to enable better human decisions, not replacing human agency with algorithmic authority. That principle, maintained over the next decade, could shape a future where wearables genuinely enhance human capability rather than supplanting human judgment.

Research directions for PhDs in AI Systems. For researchers entering AI wearables and systems, the challenges span the full stack from AI algorithms to hardware substrates. On the AI side, critical work remains in developing models that can run efficiently on resource-constrained edge devices. This includes techniques for distilling knowledge from large models into smaller ones suitable for wearables while preserving accuracy on task-specific applications. Systems research involves fundamental questions about workload orchestration: What processing should happen on-device versus in the cloud? How do you make intelligent decisions about multimodal sensing—when to capture audio versus video versus other sensor data? These multimodal query optimization problems are complex and context-dependent. There’s also the challenge of designing proper compute, memory, and networking substrates. Do current systems-on-chip (SoCs) have the right architecture for AI workloads, or do we need to fundamentally redesign hardware to match AI computational patterns? Perhaps the biggest open problem is one that Franky notes has persisted for years: battery life. How do we make wearables perform sophisticated AI analysis while still lasting a full day on a single charge? This isn’t just an engineering optimization problem—it may require rethinking fundamental tradeoffs between sensing, computation, communication, and power consumption. Every decision about what processing happens where, what models to use, and what data to capture affects battery drain. Solving this requires cross-layer thinking that spans algorithms, systems software, and hardware design. Beyond pure technology, research is needed in understanding user behavior, designing appropriate interfaces, and ensuring privacy and security. What information should devices surface to users, and when? How do you calibrate trust so people neither over-rely on nor under-trust AI assistance? What frameworks enable interoperability across proprietary wearable ecosystems? These sociotechnical questions are just as crucial as the technical ones, requiring researchers who can bridge computer science with human-computer interaction, psychology, and domain expertise in fields like healthcare, security, and education.
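
For the distillation direction mentioned above, a standard starting point is the recipe from Hinton et al.: train the small student model destined for the wearable to match the large teacher model’s temperature-softened outputs while also fitting the true labels. The PyTorch sketch below uses toy tensors and typical hyperparameter values:

```python
# Minimal knowledge-distillation sketch (standard Hinton et al. recipe):
# the student matches the teacher's softened distribution plus hard labels.
# Tensor sizes, temperature, and alpha are toy/typical values.
import torch
import torch.nn.functional as F

temperature, alpha = 4.0, 0.5
teacher_logits = torch.randn(8, 10)  # stand-in for a large model's outputs
student_logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))

soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
log_student = F.log_softmax(student_logits / temperature, dim=-1)
distill_loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2
hard_loss = F.cross_entropy(student_logits, labels)
loss = alpha * distill_loss + (1 - alpha) * hard_loss
loss.backward()  # gradients flow only into the student
print(float(loss))
```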

Desired impact of the LLAMAS lab. Franky’s vision for impact centers on developing wearable systems that perform efficient, intelligent analysis while providing the right abstractions and system design insights. This includes creating software and potentially hardware that enables practical deployment of AI wearables. But impact extends beyond just building technology—it includes educating people about the breadth of possibilities. When Franky taught a recent class, he discovered most students thought video analytics was purely for security cameras. But the field expands far beyond that to sports performance analysis, fitness tracking, wellness monitoring, agriculture, and countless other domains. Once you think beyond the narrow “security camera” framing, entirely new application spaces open up, each with unique challenges and opportunities. This reframing creates new technical problems. If you want to detect something happening to the wearer rather than in front of them—like someone hitting their face—a forward-pointing camera won’t capture it. You need different hardware placement or alternative detection methods. Starting with the application—asking “what different domain could benefit from this technology?”—opens new dimensions of inquiry that might not be obvious from a purely technical starting point.

Georgia Tech provides an ideal environment for this vision. Franky notes the abundance of bright minds at all levels, from first-year undergraduates through graduate students, faculty, and staff. Atlanta as a city and Georgia Tech as an institution offer multidisciplinary opportunities that make the work exciting. The LLAMAS lab aims to produce not just research publications and technology artifacts, but also researchers who understand the full scope of AI wearables—their technical foundations, their application possibilities, their societal implications, and their potential to genuinely enhance human capability when designed thoughtfully with users at the center.

On the go? There’s an audio-only version too.

Francisco Romero is an assistant professor in the Georgia Tech School of Computer Science. He is also the co-founder of Plix AI, an AI body camera company.