Flex AI spent seven years teaching algorithms to see movement like a trainer. The result? Your phone just got eerily good at spotting bad form.
Your deadlift looks fine. You think. Maybe your back’s a little rounded? Hard to tell from that shaky phone video you shot between sets. You post it to Reddit. Three people say it’s great. Two say you’re going to herniate a disk. Nobody actually knows.
This is the problem Amin Niri kept running into at the gym. Not a small problem, either—more than half of gym injuries stem from poor form, compounding over months until something tears or inflames or just stops working right. But good coaching is expensive, and recording yourself only works if you know what you’re looking for.
So Niri and co-founder Amol Gharat built an AI that does. Not a chatbot that spits out generic advice. An actual computer vision system that watches you move in real-time and catches the subtle biomechanical errors that lead to injury. Through your phone camera. No wearables. No cloud processing. Just you, your phone, and seven years of machine learning infrastructure.

Here’s the part that makes this interesting: nobody had built the dataset Flex needed. Medical motion labs capture movement data, sure, but with specialized camera arrays and marker suits in controlled environments. Consumer fitness videos exist by the millions on YouTube, but they’re not annotated for the biomechanical detail an AI needs to actually learn proper form.
Flex had to build their own. From scratch. Over seven years.
Thousands of hours of exercise footage. Every frame analyzed by certified trainers. Every movement tagged with dozens of biomechanical markers: hip angle, knee tracking, spinal position, weight distribution. The result is one of Canada’s largest proprietary exercise datasets, and the foundation of a patented AI system that’s genuinely good at something most computer vision still struggles with: understanding human movement in messy, real-world conditions.
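What does a labeled frame actually look like? Something like the sketch below, with field names invented for illustration rather than taken from Flex’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class FrameAnnotation:
    """One trainer-labeled video frame (illustrative schema, not Flex's format)."""
    video_id: str
    frame_index: int
    exercise: str             # e.g. "barbell_back_squat"
    hip_angle_deg: float      # hip flexion measured in the sagittal plane
    knee_tracking: str        # e.g. "neutral", "valgus", "varus"
    spine_position: str       # e.g. "neutral", "flexed", "hyperextended"
    weight_distribution: str  # e.g. "midfoot", "toes", "heels"
    form_error: bool          # trainer's overall verdict for this frame

# A single labeled frame from a hypothetical squat video:
frame = FrameAnnotation(
    video_id="squat_0142", frame_index=87, exercise="barbell_back_squat",
    hip_angle_deg=72.5, knee_tracking="valgus", spine_position="neutral",
    weight_distribution="midfoot", form_error=True,
)
```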
“Teaching an AI to see form is different from teaching it to recognize objects,” Gharat explains. “We’re not looking for a cat in a photo. We’re tracking kinetic chains across time, understanding how joints should move relative to each other under load. That’s a hard problem.”
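You can get a feel for what “tracking kinetic chains across time” means at its simplest level: take estimated 2D keypoints from each frame and watch a joint angle evolve over a rep. A minimal sketch with made-up coordinates; this is the textbook calculation, not Flex’s pipeline:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical (x, y) keypoints for three frames of a squat descent
# (image coordinates, so y grows downward as the hips drop):
frames = [
    {"hip": (0.50, 0.40), "knee": (0.52, 0.60), "ankle": (0.50, 0.80)},
    {"hip": (0.50, 0.48), "knee": (0.54, 0.62), "ankle": (0.50, 0.80)},
    {"hip": (0.50, 0.56), "knee": (0.56, 0.64), "ankle": (0.50, 0.80)},
]
knee_angles = [joint_angle(f["hip"], f["knee"], f["ankle"]) for f in frames]
print(knee_angles)  # knee angle closes as the lifter sinks into the squat
```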
Gyms are hostile environments for computer vision. Lighting is inconsistent—harsh fluorescents, dark corners, windows letting in variable daylight. People hold their phones at random angles and distances. Equipment gets in the way. Other people walk through the frame.
Standard pose estimation models fall apart under these conditions. Flex’s team spent years optimizing their algorithms to handle chaos. The system processes everything in real time on mobile hardware, entirely on-device for privacy.
When they started in 2018, the tools for this barely existed. They had to push beyond what frameworks like OpenPose and MediaPipe could handle, building custom neural networks optimized for speed without sacrificing accuracy.
“We’re running deep learning models on phones that would normally require server-grade GPUs,” Gharat says. “Every millisecond of latency matters when you’re trying to give feedback mid-rep.”
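For a sense of the basic loop involved, here is what per-frame, on-device pose inference looks like with the open-source MediaPipe Pose model mentioned above. Flex’s networks are custom, so treat this purely as an illustration of the pattern: every frame is processed locally, with no cloud round-trip.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # phone or webcam feed
with mp_pose.Pose(model_complexity=0) as pose:  # lightest model, lowest latency
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks per frame; these feed any downstream form checks.
            knee = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_KNEE]
            print(f"left knee: ({knee.x:.2f}, {knee.y:.2f})")
cap.release()
```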

Beta users report something unsettling: the AI catches things they don’t feel. That slight knee cave during squats. The subtle hip shift at the bottom of a deadlift. The shoulder compensation that happens when you’re fatigued.
Eighty-four percent of beta testers discovered form issues they weren’t aware of. Seventy percent saw performance improvements within two weeks. Injury risk dropped across all major lifts.
The AI isn’t just pattern-matching against an ideal form template. It’s learned the biomechanical principles that make movement safe and efficient. It understands why a rounded spine under load is dangerous, why knee valgus increases ACL stress, why elbow position affects shoulder health during pressing movements.
“The model doesn’t just flag errors,” Gharat explains. “It understands movement mechanics. That’s what seven years of training data gets you.”
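A crude stand-in for one of those checks: knee cave can be approximated by watching how far the knee’s horizontal position drifts off the hip-ankle line. The heuristic and threshold below are invented for illustration; Flex’s patented model is doing something far richer:

```python
def knee_cave_deviation(hip_x, knee_x, ankle_x):
    """Horizontal deviation of the knee from the hip-ankle line, in normalized
    image units (illustrative heuristic, not Flex's actual model)."""
    return knee_x - (hip_x + ankle_x) / 2.0

VALGUS_THRESHOLD = 0.04  # hypothetical threshold, tuned on labeled data

# One leg's x-coordinates across three frames of a hypothetical squat,
# with the knee drifting toward the body's midline as the lifter fatigues:
for hip_x, knee_x, ankle_x in [(0.40, 0.40, 0.40), (0.40, 0.37, 0.40), (0.40, 0.34, 0.40)]:
    dev = knee_cave_deviation(hip_x, knee_x, ankle_x)
    flag = "  <- flag: possible knee cave" if abs(dev) > VALGUS_THRESHOLD else ""
    print(f"deviation {dev:+.2f}{flag}")
```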

The implications stretch further than fitness. Physical therapists could monitor rehabilitation remotely. Athletes could get movement analysis without access to university biomechanics labs. Office workers could get ergonomic feedback in real-time.
Flex’s current roadmap includes adaptive coaching that adjusts programs based on how you actually move, not how you’re supposed to move. The AI tracks your biomechanics, recovery patterns, and daily readiness, then modifies programming accordingly. Personalized training based on your body’s actual constraints and capabilities.
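In spirit, that adaptive logic might look like the toy rule below; the readiness scale, threshold, and accessory choice are all invented for illustration, not taken from Flex’s algorithm:

```python
def adjust_session(planned_load_kg: float, readiness: float, valgus_flags: int) -> dict:
    """Toy adaptive-programming rule (illustrative only): scale load by daily
    readiness and add corrective work when form flags pile up."""
    load = planned_load_kg * (0.9 if readiness < 0.6 else 1.0)
    plan = {"load_kg": round(load, 1), "accessory": None}
    if valgus_flags >= 3:  # repeated knee cave across recent sessions
        plan["accessory"] = "banded lateral walks"  # hypothetical corrective choice
    return plan

print(adjust_session(100.0, readiness=0.5, valgus_flags=4))
# {'load_kg': 90.0, 'accessory': 'banded lateral walks'}
```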
As edge computing gets more powerful and smartphone cameras improve, this kind of real-time movement analysis could be everywhere. Your mirror. Your watch. Your AR glasses. The technology Flex built for gym-goers becomes infrastructure for understanding human movement at scale.
“We’re not replacing trainers,” Niri says. “We’re making expertise accessible. Most people can’t afford $100-per-hour coaching. But they can afford a subscription to an AI that learned from thousands of hours of expert analysis.”
The fitness industry has promised AI-powered personalization for years. Most of it has been smoke and mirrors: recommendation systems calling themselves intelligence, chatbots pretending to coach. Flex actually built the hard parts. The dataset. The algorithms. The infrastructure to make it work on consumer hardware.
Your phone’s been watching you. Now it’s finally smart enough to tell you what it sees.
Try it: flexfitnessapp.com
Follow: @flex.ai
Connect: Amin Niri | Amol Gharat
This article is a paid advertisement provided by our sponsor. The views and claims expressed are those of the sponsor and do not necessarily reflect those of Muscle & Fitness or its editorial staff.