Speaker

Transcript

Nicholas Shelton 
Okay, recording started. I'm going to go on mute, and then, when you're ready, you can begin. All right, go ahead.

Okay. 
Annie Weisbrod 
Hello, everyone. I'm Annie Weisbrod. I'm here with my colleagues, A. Narayanan and Mark Bailey from Procter and Gamble, and we're going to be presenting

Evaluation of a meditation program using the JMP SEM platform. 

So first I'd like to start off by explaining why we're even talking about this.

Regarding P&G's priorities, especially through the COVID time period, we've really focused on people's health.

It's a very stressful time for many reasons due to the pandemic, but also areas of civil unrest, changing supply chains where things aren't available, 

changing our living habits, illnesses, taking care of others. There's so many ways where our lives have changed quite significantly in the last 

20 months, two years or so. And so protecting the health and wellbeing of P&G employees has become an even greater priority during these times. 

So what we've been focusing on is how to help people take care of themselves, physically and mentally, as they are doing their jobs, 

to serve customers, to serve consumers, and also support the communities in which we're living. We have internal and external programs in place that educate people and allow them to practice different aspects of wellbeing, hopefully to help their health.

So, as the pandemic has occurred, we've had to change how we're offering the support to both individuals and to departments. 

Every employee has a different situation, 

whether it's the kind of job that they have for the company, the location in which they live, or personal settings related to their health or the care of other members of their family. 

And so, for several years, what I have been doing is teaching in-person classes at one physical site. People would show up in the room, and I would share information about stress and resiliency and meditation. And we've all been forced to switch to livestreaming.

And my audience has gone global, as quite a few of our audiences are in different time zones and speak different languages, and we're playing around with the livestreaming technique, just as we're doing right now for this presentation.

So, as we made those switches, we were curious: how effective are these wellbeing programs? How do the content and this new format of global livestreaming impact the top mental and emotional health domains for wellbeing?

I teach a course called Meditation Without 

Expectations. It's a livestreaming global course that's eight weeks. We meet once a week for 45 minutes. 

It's highly experiential. It starts off with a brief introduction and then we do practices for most of that class. 

And then the participants are encouraged to practice whatever that approach is throughout the rest of the week. The goal is for people to become more self-sufficient with their own personal mindfulness,

paying attention, focus, resiliency, 

empathy, and also stress reduction, practicing these skills over this longer period of time. 

The practices are selected to help us increase our awareness about what's going on around us, how are we feeling, what are we thinking, 

different aspects related to stress and how we can reduce that, have a different approach to those experiences, and also to improve joy. 

We do this through several different practices related to insight, discipline, and also compassion. So of the eight different practices, we have several that are related to just being here now. 

So an example of that would be a five senses meditation, where we 

just see, just hear, just 

smell, just taste, 

just feel what's on the skin as an example of coming to just here just now. We also have practices related to introspection and gratitude. 

Dealing with distractions, because we always get distracted. The mind just does its thing, and so counting practices, mantra practices are examples of that. We also have practices related to heart. 

So these very old practices, metta 

and tonglen are thousands of years old, coming out of cultures in Asia, to practice loving kindness, caring about others, giving and receiving. 

And then, finally, this last one of choiceless awareness. 

Just being. 

Not doing anything, just being. Very popular among Zen 

practitioners, for example. We've had more than 1,000 people sign up for these courses over the past year. About 50% of the people that sign up actually complete the full eight weeks and we're going to share some statistics. 

demonstrating what these offerings actually deliver.

We survey 

participants' attitudes and behaviors before the course and after the course. We have 45 different questions, based on medical standards. 

The surveys are voluntary and anonymous, and we are looking basically at these four standard questionnaires. The first one is the Five Facet Mindfulness Questionnaire.

We also use the Interpersonal Reactivity Index, which is related to empathy, covering empathic concern and perspective-taking.

The Brief Resilience Scale and the Perceived Stress Scale, which is what we're going to focus on in the demonstration today.

This is the most widely used of the questionnaires on how people feel about stress, the stressfulness of the situations they're in, and the

effectiveness of stress-reducing interventions, like this meditation course. So the standard statements, the questions that are asked, tap into how unpredictable

the person feels their situation is, whether it is controllable or not, whether they feel overloaded, and their perceived level of helplessness.

There's nothing they can do about the situation. 

Or their self-efficacy: how confident they feel that things will change and that they are able to manage the stress. So we're going to focus on two of those subscales, related to feeling overloaded and feeling that the situation is uncontrollable.
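As background on how such scales are typically scored: each item is answered on a 0–4 Likert scale, positively worded items are reverse-coded, and the total is the sum, so higher scores mean more perceived stress. A minimal sketch of that scoring convention (the item indices and responses below are illustrative, not the actual PSS layout):

```python
def pss_score(responses, reversed_items):
    """Sum 0-4 Likert responses, reverse-coding positively worded items."""
    return sum(4 - r if i in reversed_items else r
               for i, r in enumerate(responses))

# Ten answers on a 0-4 scale; items 3 and 7 reverse-coded (illustrative).
print(pss_score([2, 3, 1, 4, 2, 3, 1, 4, 2, 0], reversed_items={3, 7}))  # → 14
```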

We are focusing on stress on purpose, not just for the demonstration but also for these courses, because it is one of the top lifestyle risk factors in the world today.

Many organizations are developing integrated wellbeing programs in response to different reports that are coming out all over. The World Health Organization, for more than the past decade, has identified chronic or unresolved stress as the top lifestyle risk factor, exceeding

obesity and lack of physical activity, in contributing to physical and mental chronic illnesses worldwide.

So 

the good news, a brief summary of results, is that over this eight-week time span, we do see notable increases.

When we first looked at the data coming in, just the simple means of what do we see here, we do see notable increases across mindfulness, 

empathy, resiliency, and also clear reductions in how people perceive stress. So that's great, and we were interested in a deeper investigation, so I reached out to my partner in statistics, who's going to share with you

some of the survey results from the pre and post paired responses. He will take over now and show you some of the JMP functionality he's been working with on these responses from the surveys.
A. Narayanan

Thank you, Annie. Hi, everyone. My name is Narayanan. 

I am part of the advanced consumer modeling and statistics department at P&G. What I want to do today is share the analysis details of the data that 

came from the program that Annie just described. So my agenda is going to consist of three topics. I'm going to talk about factor model optimization, then I'm going to talk about the confirmatory factor analysis of the 

optimized factor model. Then finally I'm going to discuss longitudinal measurement invariance. This is important because we want to measure the change over time of the latent mean scores. So let us jump right into the first topic.

Factor model optimization: why are we even doing this? There are two reasons. First, the 10 questions that Annie mentioned

are really not optimized for the two constructs that we are interested in measuring. 

And also, we have a small sample size and we want to make sure that we do not have too many questions coming from pre and post. 

So those are the two reasons we are doing factor model optimization. And in order to do that, I'm going to draw upon two ideas from classical test theory. 

One is reliability. Reliability is actually concerned with the consistency of measurement. We want to make sure that the items within a construct are measuring the same thing. And there are many different measures of reliability. The one that I'm going to use here is the 

reliability coefficient, or Cronbach's alpha. You can see the formula there. And one reason I'm using this is because it is available in the multivariate platform in JMP using the option item reliability. 
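The formula on the slide isn't captured in the transcript; for reference, Cronbach's alpha for k items is α = k/(k−1) · (1 − Σσ²ᵢₜₑₘ / σ²ₜₒₜₐₗ). A minimal sketch of that computation in Python, not JMP's implementation, with made-up data:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # columns = items
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Four respondents answering three items on a 1-5 scale (toy data).
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2]]
print(round(cronbach_alpha(scores), 3))  # → 0.939
```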

Now one 

drawback to this measure is that it assumes that all variables or items are equally reliable. 

And we know some items do not have the same reliability as others, and in that case we could use the average variance extracted.

Now what is that? In order to explain it, I'm going to actually use this path diagram of a confirmatory factor model. There are three components here that I want to describe.

The one at the top, which is actually enclosed in a circle, that's the latent variable. 

Then the one at the bottom, which is enclosed in a rectangle, that's the indicator. And in this example, there are four indicators, which are the questions that tap into this latent variable.

Then there are four lambda parameters, which are the factor loadings that give the strength of each indicator

tapping into the construct. So the average variance extracted is a function of the lambdas and the

variances of the errors. The deltas at the bottom are the error

components. For simplicity, we will consider them measurement errors, and the variances of those errors are also included in the calculation of the average variance extracted.

If you look at the formula, if there are no error variances at the bottom, the maximum value will be 1 and that's what we want. 

So 

another metric that is useful in measuring reliability is item reliability, which in fact is nothing but the individual R-squared of each regression of an indicator on the latent variable. And we will see that when we get into JMP.
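With standardized loadings, each item's reliability is λ², its error variance is 1 − λ², and the average variance extracted reduces to the mean squared loading. A small sketch under that standardization assumption (the loadings below are illustrative, not from our survey):

```python
def ave(loadings):
    """Average variance extracted from standardized factor loadings.

    AVE = sum(lambda_i^2) / (sum(lambda_i^2) + sum(theta_i)),
    where theta_i = 1 - lambda_i^2 under standardization.
    """
    num = sum(l * l for l in loadings)
    den = num + sum(1 - l * l for l in loadings)
    return num / den

def item_reliability(loadings):
    """R-squared of each indicator regressed on the latent variable."""
    return [round(l * l, 3) for l in loadings]

lams = [0.7, 0.8, 0.6]
print(round(ave(lams), 3))     # → 0.497
print(item_reliability(lams))  # → [0.49, 0.64, 0.36]
```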

The second concept from classical test theory that is important is validity. Validity is concerned with whether the variable measures what it is supposed to measure. And again here, there are many different 

versions of validity. The one I'm going to be focusing on is discriminant validity, which has to do with whether the constructs that you are measuring are unique and distinct.

Okay, with those two concepts, I'm actually going to get into JMP and show some of the details of the modeling that we did. So I'm going to start with an exploratory factor analysis of the 10 questions 

concerned with the stress domain. Now in all my analysis, I am going to 

show the results of the analysis. I'm not going to take you through this step by step, point and click, because it'll take a long time, and they are well documented in JMP. 

Two factors emerge: the big one, which we are calling the

overloaded component, and the smaller one, with three items, which we are calling uncontrollable. So this is an exploratory factor analysis, which seems to support two factors.

And we want to confirm that this structure, in fact, holds in our data and that the data will support it. So I'm going to launch the confirmatory factor analysis of the same 10 questions.

And we can see that the overloaded component is connected to seven indicators and the uncontrollable component is connected to three indicators. 

We want to look at how well this model fits the data. And we can see that the chi-square is 44 at 34 degrees of freedom, and it is not significant, so this model is supported by the data.

And something else we want to see at the bottom is the construct validity matrix that I was mentioning. 

The diagonal of this matrix gives the average variance extracted. And we can see that the variance of the uncontrollable factor is rather high and the variance of the overloaded factor is actually low. 

The off-diagonal element tells you the amount of variance each factor shares with the other factor. And for the overloaded factor, the variance that this

factor shares with uncontrollable is, in fact, higher than the variance it shares with its own indicators, as shown on the diagonal. This is not the optimized factor structure that we want.
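This comparison is the Fornell–Larcker criterion: each factor's AVE (the diagonal) should exceed the variance it shares with any other factor, which is the squared inter-factor correlation (the off-diagonal). A sketch of the check with made-up numbers shaped like the situation just described:

```python
def discriminant_validity(ave_by_factor, factor_corr):
    """Fornell-Larcker check: each factor's AVE must exceed the variance
    the two factors share (the squared inter-factor correlation)."""
    shared = factor_corr ** 2
    return {name: a > shared for name, a in ave_by_factor.items()}

# Illustrative values: overloaded's AVE falls below the shared variance,
# uncontrollable's sits above it -- mirroring the un-optimized solution.
print(discriminant_validity({"overloaded": 0.35, "uncontrollable": 0.60},
                            factor_corr=0.68))
# → {'overloaded': False, 'uncontrollable': True}
```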

And also, if you go and look at the 

path diagram with the estimates, you can, in fact, see 

the individual item reliabilities, which are shown here in the

rectangles. And you can see that

the seven indicators connected to the overloaded factor have different item reliabilities, ranging from .277 to .551.

And we want to choose a few that

have high reliability.

So we are going to use these concepts of item reliability, 

and discriminant validity to, in fact, optimize our factor structure.

And I'm going to go back and look at 

all these metrics put together in a table. 

This table has the Cronbach's alpha in these two columns. It also has the change in Cronbach's alpha when an item is deleted

from the Cronbach's alpha of the entire composite, and it has the item reliability in the last column. So one way to look at this matrix of numbers and optimize your factor is to choose the items with the highest reliability, which are probably the top three.

Now, in this particular situation, I also like to use 

the 

Cronbach's alpha when the item is deleted. And if you look at the third item, 

even though that item has the third highest 

item reliability, when we look at this column, I call it alpha delta, 

which is the change in reliability of the entire composite when the item is deleted. There is, in fact, a big change for the third item, and for that reason I did not choose this item; I chose the one just below it.
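The alpha-delta column can be reproduced by recomputing Cronbach's alpha with each item left out and differencing against the full composite. A self-contained sketch of that arithmetic (toy data, not the survey responses):

```python
from statistics import variance

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))"""
    k = len(rows[0])
    item_var = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

def alpha_if_deleted(rows):
    """Alpha of the composite with each item dropped, plus the change
    (alpha delta) relative to the full composite."""
    full = cronbach_alpha(rows)
    out = {}
    for j in range(len(rows[0])):
        reduced = [[v for i, v in enumerate(r) if i != j] for r in rows]
        a = cronbach_alpha(reduced)
        out[j] = (round(a, 3), round(a - full, 3))  # (alpha, alpha delta)
    return out

scores = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2]]
print(alpha_if_deleted(scores))
```

A large negative alpha delta marks an item the composite relies on; a positive delta flags an item the composite is better off without.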

So for the optimized factor solution, I'm going to choose the top two items from the first factor and the fifth item, and then for the second factor, uncontrollable, I'm going to choose the top two items, because the third item has a very low item reliability. 

So, having 

optimized the factor solution from 10 questions to five, 

I'm going to confirm that optimized factor solution and see how we are doing in terms of all the metrics we have just discussed. 

So here is the optimized factor solution: the overloaded factor has three indicators and the uncontrollable factor two indicators, and again our model fits the data much better, with a chi-square of 2 at 4 degrees of freedom, and the chi-square test is not rejected.

Then we go down 

and look at the discriminant validity. Now, with the optimized factor solution, you can see the variance of overloaded has in fact increased,

and the variance it shares with uncontrollable has decreased. So the off-diagonal value is smaller than the values on the diagonal, and this is exactly what we want to support

discriminant validity. So by optimizing the solution, we have achieved discriminant validity, and we have also gotten a better model than what we had with the original 10 questions.

Okay, so we have optimized our factor solution. 

The next step is to do the measurement model invariance. 

Let us remind ourselves where we are. We have seen a lot of different tables and analysis. We have 

empathy, stress, resiliency, and mindfulness. 

And we are going to concentrate only on stress. The main question we want to answer here is: is there an improvement in mean scores on the latent variables

from the pre to the post survey during the eight-week program?

And in order to establish that, we have to first show longitudinal measurement invariance. 

Now what is that? In fact, I recently learned about this, and let me put in a plug here for an excellent paper written by the JMP developers Laura Castro-Schilo and Paul Russo

in the current issue of Structural Equation Modeling, where they have detailed all the different capabilities of JMP, including longitudinal measurement invariance. 

The idea of longitudinal measurement invariance is about making sure that the meaning of the latent variable has not changed over time. 

Before we can compute the mean change in the latent variables, we want to make sure we are measuring the same thing in the beginning and at the end. So in order to establish that, there are four steps. 

These are actually four different models. First is a configural model, with the fewest restrictions: the same number of factors and the same loading pattern.

The second is a weak invariance model, where we add a few more restrictions. We sort of tighten the screws a little more and ask the question, is this model still supported? 

And then we tighten the screws some more and say, on top of this, what if I impose equality of the item intercepts? Then finally,

the last thread on the screw: what if we add the item error variances and impose equality of those across the two time points?

So the details of how to do this step by step are included in that article I just mentioned. I'm only going to show the results here.
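Each tightening step is judged with a chi-square difference test against the previous, less restricted model: Δχ² on Δdf degrees of freedom, where a non-significant difference means the added equality constraints are supported. A sketch of that arithmetic with hypothetical fit values (these are not our actual results):

```python
# 95th-percentile chi-square critical values for small df.
CHI2_CRIT_95 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Nested-model comparison: returns (delta chi2, delta df, supported?).

    The constraints are supported when fit does not worsen significantly,
    i.e. the chi-square increase stays below the critical value.
    """
    d_chi = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chi, d_df, d_chi < CHI2_CRIT_95[d_df]

# Hypothetical configural vs. weak-invariance models.
print(chisq_diff_test(48.0, 37, 44.0, 34))  # → (4.0, 3, True)
```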


So in this 

path diagram, I'm showing 

the two factors, overloaded and uncontrollable, both at the pre survey stage and the post survey stage. So we have the overloaded pre, uncontrollable pre, and the same thing for the post. 

Overloaded is explained by three items and uncontrollable is connected to two items. Now, the letters here are the different

constraints that have been put on the model. For example, C3

from the overloaded to the item "felt nervous and stressed" for the pre 

is constrained to be equal to the same item in the post. This is part of the constraints that we are putting on the model to make sure that the constraints are supported.

So the letters indicate the different constraints that have been put on the model, and those estimates are made equal. Similarly,

the intercepts of the equivalent items are also made equal, 

which are indicated by 

A. 