Marcus Brown

Is this not up to your standards?

As we get better, so does your data. Reanalyze all of your data whenever our algorithms improve. Now, you don’t need to leave any historical data behind.


Hyperbole and a hint of sarcasm aside, the issue I hope to shed some light on is potentially the most pervasive problem in clinical biomechanics: standardization.


I once attended a workshop in place of a colleague. It was more or less a meeting of the minds in a particular field of study to determine how we were going to share data across labs. The field was stuck because we just couldn't collect enough data to make a big difference, and we wanted to change that. So, a bunch of people and a few companies gathered. I was likely not entirely qualified to be there, but it was my field of study, so I wasn't totally out of my element. Over four days, we talked about a ton of topics: marker sets, protocols, recruiting, anything and everything involved in collecting biomechanics data. We all got along well, and we seemed to genuinely agree. Unfortunately, it's been almost ten years now, and not a single decision has been made to address our problem. Even surrounded by like-minded individuals, we were unable to determine how we were going to collect data together toward a common goal. The problem was that we could not agree on standards. We all had our own little flavor, and that little flavor was the undoing of the project.



This is not the first or last time this has occurred in biomechanics; the history of standardization is as old as the field itself. And to be honest, that is actually healthy. As advancements are made in technology, marker sets, or protocols, they should be studied and reviewed, and then, if appropriate (and with a little luck), they become standards for everyone to adhere to. But this can be a double-edged sword. What if the standards change? Suppose we had done a big literature review and been accumulating data for the last decade, and then the standards changed. There is now a new, more accurate way of doing the same thing, but if we adopt the new way, we can't compare to our old data, so it's not really that useful. What should we do?


I had been thinking about this problem for years, through both my research and professional days. Talking with my collaborators, it seemed unlikely that we would produce the biggest and most perfect algorithm ever on the first try, because no one is perfect. The more likely case is that we iterate and improve over time, much like any other technology. But then, what about standardization? We wouldn't even be standardized to ourselves!


Fortunately, in a sense, we got lucky. As long as the raw data never changes, it doesn't matter, because you can always reanalyze. So, for us, video quality and calibration are critical, but after that, we can improve to our heart's content without worrying about comparisons to historical data! As we add key features, bring on developers smarter than me who can solve our problems, and build better, more accurate models, we, and all of our customers, can simply reanalyze the data. It's a lot of writing to say that we don't have the same problem, but for us, this is a huge deal. As we get better, the standards get better, and no one is left behind.
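To make that idea concrete, here is a minimal sketch of the pattern in Python. Everything here is illustrative, not our actual product code: the names (RawCapture, AnalysisResult, run_model) are hypothetical stand-ins. The point is the core invariant: the raw capture is frozen, while analysis results are cheap, versioned derivatives you can regenerate whenever the algorithm improves.

```python
from dataclasses import dataclass

@dataclass(frozen=True)      # frozen: the raw capture is immutable
class RawCapture:
    capture_id: str
    video_path: str          # original video, kept byte-for-byte
    calibration: dict        # camera calibration recorded at capture time

@dataclass
class AnalysisResult:
    capture_id: str
    algorithm_version: str   # which model version produced these values
    joint_angles: dict

def run_model(video_path: str, calibration: dict, version: str) -> dict:
    """Stand-in for the real pose/kinematics model (hypothetical)."""
    return {"knee_flexion_deg": 42.0}  # placeholder output

def analyze(raw: RawCapture, version: str) -> AnalysisResult:
    """Derive results from the raw capture; the capture itself never changes."""
    angles = run_model(raw.video_path, raw.calibration, version)
    return AnalysisResult(raw.capture_id, version, angles)

def reanalyze_all(archive: list[RawCapture], new_version: str) -> list[AnalysisResult]:
    # Because every capture kept its raw video and calibration, the entire
    # historical archive can be reprocessed under the improved algorithm,
    # so old and new data stay comparable.
    return [analyze(raw, new_version) for raw in archive]

# Example: reprocess a (toy) archive under a newer algorithm version.
archive = [RawCapture("s01", "s01.mp4", {"focal_length": 1400.0})]
results_v2 = reanalyze_all(archive, new_version="2.0")
```

The design choice doing the work is the frozen dataclass: nothing downstream can mutate the raw data, so every past capture remains a valid input for every future version of the analysis.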

