Assessing my Skulpt Aim data with JMP

In my last post, I mentioned that I have recently acquired several new quantified-self devices, including the Skulpt Aim. These new devices and the data from them have brought me greater opportunities to think more deeply about measurement systems.

When I begin using a new self-tracking device, I have the same kind of basic questions that any scientist or engineer might ask about a novel measurement tool:

  • Why does this tool interest me?
  • What does this device measure?
  • Does this new data confirm what I would expect?
  • Do changes over time represent important trends or random noise?
These questions have been on my mind over the past six weeks as I’ve collected daily data with the Aim, a body fat and muscle quality monitoring tool.

    Why does this tool interest me?

I've blogged in my Fitness and Food series about my past body size fluctuations and how I adopted quantified-self practices such as food logging, activity monitoring and weight tracking with a wireless scale to reach a healthy weight maintenance zone. Tracking my diet, activity and weight over the past six years has helped me better understand how my food intake and exercise habits have affected my short- and long-term weight trends. However, since strength training is my workout of choice, body weight has always felt unsatisfying as a long-term success metric.

[Figure: Weight history]

    What does this device measure?

Unlike other methods I have tried before, the Aim provides two different metrics: % fat (the tried-and-true measure of body fat percentage) and a novel measure called muscle quality (MQ). In short, the device estimates % fat by passing a current through a specific body part and measuring its resistance. It uses the time between discharging the current into the muscle and detecting the corresponding voltage to calculate muscle quality. The basic idea is that larger, fitter muscle fibers retain current longer. (The Skulpt site has more information about how it works.)

    The Aim estimates overall body fat percentage and average muscle quality through a four-site measurement, similar to the multisite approach used by caliper assessments. But the Aim’s real novelty is its ability to assess and report measures for individual muscle areas. This fills a gap in my quantified-self data collection by providing me a frequent and convenient way to quantify muscle maintenance and incremental changes in body areas due to training choices. I had seen the Aim online several months ago, but having the chance to try the device myself at the recent QS15 conference really sealed the deal.

To use the Aim, I spray water on the area I am going to measure and on the back of the device; I then set the device on a specific muscle area, following recommendations for device placement shown in the app’s embedded videos. A few seconds later, the Aim displays % fat and MQ for that area. Fitting a regression line to data points across all body parts in Graph Builder, as shown in the first graph below, reveals an inverse relationship between MQ and % fat. Intuitively, it makes sense that areas with higher muscle quality will tend to have lower fat percentages.

[Figure: Percent fat vs. muscle quality, 8-29-15]
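The inverse relationship described above can be checked outside JMP with an ordinary least squares fit. This is a minimal sketch using invented (MQ, % fat) pairs, not my actual measurements:

```python
from statistics import mean

# Hypothetical (MQ, % fat) pairs illustrating the inverse trend;
# these values are made up for the example, not taken from the Aim.
mq = [60, 75, 90, 105, 120, 135]
fat = [32.0, 28.5, 24.0, 20.5, 16.0, 12.5]

# Ordinary least squares slope and intercept for fat ~ mq
mx, my = mean(mq), mean(fat)
slope = sum((x - mx) * (y - my) for x, y in zip(mq, fat)) / \
        sum((x - mx) ** 2 for x in mq)
intercept = my - slope * mx

# A negative slope confirms the inverse MQ vs. % fat relationship
print(round(slope, 3))  # → -0.264
```

With real data, fitting separate lines within each body part (as in the overlay graph) would show how much the relationship shifts from area to area.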

    However, adding Body part as an overlay variable in the second graph reveals that the MQ and % fat profiles of different muscle areas can vary greatly.

[Figure: Percent fat vs. muscle quality with Body part overlay, 8-29-15]

    Does this new data confirm what I would expect?

To answer this question, I had to start collecting data! So for the past six weeks, I have been performing three to five replicates of the Aim’s Quick Test each day. The Quick Test uses measurements of my right-side bicep, tricep, ab, and quadricep muscles to estimate my overall body fat. Every week or so, I also measure other individual muscle areas. I perform all these tests first thing in the morning before eating or drinking, right after I weigh myself. The graph below summarizes the number of measurements I have taken for different areas over this period.

[Figure: Number of replicate measurements by body area, 8-30-15]

Muscle quality is a new metric to me, so I don’t have any past measurements for comparison. But the patterns in the data I collected indicate that the muscles I train regularly and heavily tend to have the highest MQ scores. As expected, areas that I haven’t trained regularly with weights in recent years (e.g., calves) have lower muscle quality scores. My abs are an interesting exception. I rarely train them directly, but their MQ scores are very high, probably because most weight training exercises require the use of abdominal muscles to stabilize the movement.

    The best body fat data I have comes from a January 2014 DXA scan, which assessed me at 17.5% body fat at a dieted-down weight of 127.5 lbs. My recent quick test measurements with the Aim have been taken at a more typical maintenance weight around 135 lbs and estimate my % fat at 18-19%. Although my weight is not directly comparable to my weight on the day of my DXA, my results are in the ballpark of what I’d expect after adding in a few pounds for extra food and water in my system, a few pounds for extra body fat, and 1.75 years of training time.

    I used my Skulpt data with a custom body map I created earlier this year in JMP to show mean MQ and % fat by body area (averaged over left and right sides). I reversed the color scales so the trends for each measure could be compared more easily. Like the body-part specific regression lines shown above, this graph also reflects the inverse relationship between MQ and % fat.

[Figures: Mean MQ by body area, 8-30-15; Mean % fat by body area, 8-30-15]

    Do changes over time represent important trends or random noise?

    I had some questions I wanted to answer before assessing how my workouts might affect my % fat and MQ measures in the short and long term. While casual Aim users might be satisfied by taking a single measurement daily or weekly, I expected my measurements to vary around the true mean for each body part/side combination due to random and systematic variables.

    Without daily access to a gold standard test like DXA, I could not verify the accuracy of the Aim’s measurements, but that has never been my intent. I am much more interested in establishing a measurement routine that generates precise measurements each day so I can make sense of daily or weekly trends in the context of my weight, eating and workout variables. The Skulpt blog mentioned an expected between-day test-retest variation of 5%. Put another way, an area measured at 20% body fat one day would be expected to measure 20% +/-1% the next day. But I predicted that I would see variables like water weight impact my daily measurements, such that my true values could differ between days, so I was more concerned with assessing replicate measurements taken on the same day. Establishing within-day precision would be key to establishing a baseline for my MQ and % fat values.

    To assess within-day variation, I used Graph Builder to create a graph of the standard deviations of my MQ and % fat measurements for the four sites I measure daily. I used a light blue shaded reference range to indicate the 1% fat and 1 MQ point standard deviation that I hoped to achieve.

[Figure: Within-day measurement variability]
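The within-day standard deviations plotted above come from grouping replicates by day and body part. Here is a minimal sketch of that calculation, assuming a simple list of (date, body part, % fat) readings with invented values:

```python
from statistics import stdev
from collections import defaultdict

# Hypothetical replicate % fat readings; the values are illustrative,
# not actual exports from the Aim app.
readings = [
    ("2015-07-20", "bicep", 18.9), ("2015-07-20", "bicep", 20.4),
    ("2015-07-20", "bicep", 17.6), ("2015-07-21", "bicep", 19.0),
    ("2015-07-21", "bicep", 19.3), ("2015-07-21", "bicep", 19.2),
]

# Group replicates by (date, body part)
groups = defaultdict(list)
for date, part, pct_fat in readings:
    groups[(date, part)].append(pct_fat)

# Within-day sample SD per body part; the target in the text is SD <= 1
within_day_sd = {key: stdev(vals) for key, vals in groups.items()}
for key, sd in sorted(within_day_sd.items()):
    print(key, round(sd, 2))
```

In this made-up example, the first day's SD of about 1.4 would miss the 1% target, while the second day's 0.15 would comfortably meet it.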

    The variability trends I saw in my July measurements caused me to question and adjust my measurement techniques:

    • Early on, my within-day variability for my MQ and body fat scores was relatively high. I soon realized that I wasn’t following the Aim instructions to the letter. I began to spray the back of the unit before each and every rep, ensuring that the metal contacts were consistently soaked for each measurement. You can see this change begin to reduce the variability of my data around July 21.
• Once I made the above improvement, I started to notice a new pattern. My first rep for a muscle group seemed to be different from later reps. I confirmed this suspicion by examining my raw data. I theorized that this might happen because the device was wet before rep 1, but the muscle area itself was dry until after rep 1 was complete. I began spraying each body area before I started measuring, and this further improved my measurement consistency.
    • At the end of July, I noticed another disturbing trend. The standard deviation of my MQ measurement for my right bicep was trending up, not down! This affected the consistency of my four-site average. In puzzling it over, I concluded that since my bicep is a relatively small and narrow muscle, slight position changes probably affected its measurement more than for larger muscle groups.

      I decided to test my theory by experimenting with the position of the device. For five reps (group 1), I made an effort to hold the unit slightly higher on my bicep area, and then moved it to a slightly lower position for five reps (group 2). The figure below illustrates how this affected my results. Although one rep in the higher position group had an MQ score of 125 (marked with a red x), the rest of the MQ scores in the higher position group were several points lower than those in the second group. It seemed clear that I needed to choose one of these positions and stick with it to obtain the most consistent measurements for this problematic muscle area.

[Figure: Device position experiment, 9-2-15]
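The position experiment amounts to comparing group means, with and without the flagged outlier rep. This sketch uses invented MQ values shaped like the pattern described (one high outlier in the higher-position group):

```python
from statistics import mean

# Hypothetical MQ replicates for the two device positions; the 125
# mimics the outlier rep mentioned above. All values are invented.
higher = [108, 110, 109, 125, 107]  # group 1: unit held slightly higher
lower  = [114, 115, 113, 116, 114]  # group 2: unit held slightly lower

print(mean(higher), mean(lower))    # → 111.8 114.4

# Dropping the outlier makes the position effect clearer:
trimmed_higher = [v for v in higher if v != 125]
print(mean(trimmed_higher))         # → 108.5, several points below group 2
```

Even in this toy version, a single outlier rep pulls the higher-position mean toward the other group, which is why consistent placement (and outlier screening) matters for a small muscle like the bicep.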

Over subsequent days, I applied the lessons learned above and chose my bicep measurement area more consistently, reducing the SD(MQ) and SD(% fat) for biceps in my data set. At this point, I’m happy with being able to consistently measure MQ +/- 2 and % fat +/- 1% on most days for almost all areas, and the four-site overall estimate that I take daily has fallen into a predictable range.

      What's next with this data?

      Given my initial adventures in measurement consistency above, I knew I had more work to do with this data set. I was continuing to collect daily data, but wanted to assess it and my measurement technique more systematically. JMP has an MSA (Measurement Systems Analysis) platform designed to help assess sources of variability in a measurement system. I wanted to learn more about the platform and use it to assess my measurements so far. I already knew I had outlier measurements in my data table. What’s the best way to identify and remove them? I needed to explore my data, evaluate my outlier filtering options, apply them, and assess how their removal affected my within-day measurement consistency. I’ll share what I discovered in future posts.
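One common screening rule for replicate outliers like the 125 MQ rep above is to flag values far from the daily median in units of the median absolute deviation (MAD). This is just one option I could evaluate, not JMP's method; the values and the k = 3 threshold are illustrative:

```python
from statistics import median

# Hypothetical replicate MQ readings for one body part on one day
reps = [112.0, 113.5, 111.8, 125.0, 112.6]

med = median(reps)
mad = median(abs(r - med) for r in reps)

# Keep reps within k MADs of the median; guard against mad == 0
k = 3
kept = [r for r in reps if mad == 0 or abs(r - med) <= k * mad]
print(kept)  # the 125.0 rep is flagged and removed
```

A median-based rule is attractive here because, unlike a mean/SD rule, the outlier itself does not inflate the threshold used to detect it.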



      Peter wrote:

      Awesome! How did you export your Skulpt Aim data? I've been wanting to analyze my data too.


      Shannon Conners wrote:

      Unfortunately, the only export route right now seems to be to send a daily body fat percentage to the Apple Health app. If you want the raw data, manual data entry is the current way to get it out. Painful, I know! I have asked for csv export from the app. Please, please send your request along to the Skulpt support site too!