
JMP Saving Lives: Work to Eliminate Maternal Death Due to Postpartum Hemorrhages (2022-US-30MP-1125)

Postpartum hemorrhage (PPH) is a major cause of maternal death in low-resource countries, accounting for 661,000 deaths worldwide between 2003 and 2009. To assess this burden, the WHO conducted studies to find methods for the prevention and treatment of PPH. Three large clinical trials conducted over the past two decades collected blood loss volume data (V) for more than 70,000 deliveries. The outcomes were PPH (V > 500 mL) and severe PPH (V > 1000 mL), and the parameters under comparison were the proportions of these events. Comparing such small proportions led to very large trial sizes (20,000 to 30,000).

Using data from these large trials, the Survival platform in JMP Pro showed clearly that the distribution of V is very close to lognormal. This finding improved the efficiency of the estimates of probabilities and relative risks and permitted a substantial reduction in the sample size needed to compare treatments (typically fewer than 4,000), relative to the sizes required by the binomial outcome. Quicker and less expensive trials are very welcome, as they speed up obtaining results and bringing them into common practice.

 

 

Hello.

I am Jose Carvalho, a statistician at Statistic Consulting in Campinas, Brazil.

I thank you for the opportunity to show an application of JMP to clinical trials where a major improvement came from a statistical discovery.

As a result of that discovery, one trial ended with the expected and very desired results.

Subsequent trials on the same syndrome will be much cheaper and faster.

The problem is bleeding after birth, or postpartum hemorrhage, PPH for short.

PPH accounts for 125,000 deaths per year.

Even in developed countries like the United States, it is the cause of 11% of maternal deaths.

Now, PPH is defined, just for classification, as blood loss in excess of 500 mL within 24 hours after delivery.

If the volume exceeds 1,000 mL, then it's severe PPH.

It is interesting to know the main cause of PPH: 90% of cases are due to uterine atony.

That is a failure of the uterus to contract after the delivery. If the uterus fails to contract, the bleeding continues.

We can treat that by giving drugs to contract the uterus or by some physical action.

The remaining causes are trauma, retained placental tissue, and coagulation system failure.

We'll be dealing with uterine atony and its prevention.

PPH can pose a serious threat to a woman's life and health.

Its onset must be quickly diagnosed during the delivery and treated.

Treatments include, as I said, drug treatment with additional uterotonics and, as a last resort, artery ligation or hysterectomy, the removal of the uterus.

New drugs and devices are being developed to prevent PPH.

Every one of them must be tested in clinical trials before it is allowed for use in actual deliveries.

We have data on three very large trials.

The first one, the oldest, was published in 2001. It was the Misoprostol trial — misoprostol is the name of a drug that was compared to the standard treatment — and it enrolled 18,000 women.

The second one, published in 2012, is the Active Management trial: not a drug, but a physical procedure of pulling on the umbilical cord.

Now, misoprostol did not prove to be as effective as the standard drug treatment, which is oxytocin.

Active Management did not show any improvement on PPH either.

We are going to deal with the Carbetocin trial, published in 2018, the largest of all, which enrolled 29,000 women.

In all these trials, the primary outcomes were severe PPH (sPPH) and/or PPH.

Now, to diagnose sPPH and/or PPH, we need to know the blood loss volume.

The observations were volumes, numbers in mL, but only the indicators of sPPH and PPH were considered in the statistical analyses.

That is, binomial variables: zero or one, yes or no — in spite of the fact that we had the information about the blood loss volume.

Before we proceed, just a small explanation about the two drugs that we'll be dealing with.

The standard drug to use in deliveries is oxytocin. It is given routinely at every delivery, in every part of the world.

As soon as the baby is delivered, the woman receives a shot of oxytocin. It's a standard procedure.

Now, oxytocin is very nice: it reduces the severe PPH rate from 3.84% to 2%. It lowers the incidence of sPPH.

But there is a problem: it is a heat-labile substance. It must be kept in a cold chain, at 7 degrees Celsius, all the time.

Now, in countries with low resources, this can be a problem.

If you do not keep it in this cold-chain logistics, the drug loses its efficacy. Sometimes you can end up administering a drug that is not effective at all.

Carbetocin is a new drug that has the same active principle as oxytocin, with just a change in the excipients that makes it heat-stable.

Carbetocin can be kept for six months at 30 degrees Celsius, which is about room temperature in most places in the world.

Now, there were very high hopes that carbetocin would be a good replacement for oxytocin, most of all for use in those low-resource countries.

A clinical trial was devised for PPH; it was conducted by the WHO, and it was a non-inferiority trial.

The parameters for this trial are in the objective.

The investigators said that, to declare carbetocin non-inferior to oxytocin, it should preserve 75% of the benefit.

Now, the benefit is this 3.84% minus 2%, so this gives a non-inferiority margin of 0.46%.

We are talking about very low rates, and a relative risk of 1.23.

Carbetocin would be declared non-inferior to oxytocin if the trial brought evidence that the relative risk is less than 1.23.
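
To spell out the arithmetic behind those numbers (my notation; preserving 75% of the benefit means giving away at most 25% of it):

\[
\text{benefit} = 3.84\% - 2\% = 1.84\%, \qquad
\text{margin} = 0.25 \times 1.84\% \approx 0.46\%, \qquad
\text{RR}_{\text{margin}} = \frac{2\% + 0.46\%}{2\%} \approx 1.23 .
\]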

This results in an amazing computation: a sample size of over 30,000 people. We ended up with a trial of about 29,000.

Those were spread over several countries, as shown in the table before, in many centers.

It was a very expensive trial; the data collection alone took almost two years. It's a very serious undertaking.

Why are the trials so large?

Well, the obvious answer to that question is that the proportions being compared are small, and the effects are necessarily even smaller.

Less obvious, but still true, is that the trial needs to be so large because we are losing a great deal of information by mapping V, the volume, into two categories, like this.

On this histogram we have the actual distribution of the blood loss volume for the 29,000 subjects of the trial, and the cut-off point at 1,000.

Just imagine, looking at the histogram, how much information is lost by collapsing all the detail of the frequencies into zero or one: left of the 1,000 line, right of the 1,000 line.

But that's the way it was done, because for some reason people like to use this dichotomization.

If it's over 1,000 mL, it's severe PPH. If not, it's not.

I don't even know whether that cutoff is well associated with any further consequence for the health of the women.

That's the way it's done; that is the classification.

Now, JMP helped us discover that the distribution of the blood loss volume is lognormal.

There is a story behind it.

We set forth to analyze the experiment as decided by the investigators, using the binomial distribution.

But we saw very easily that the two blood loss volume distributions, for carbetocin and for oxytocin, were pretty much the same.

We were not very happy with this dichotomization to begin with, but we had to do it. That's what the protocol said.

Now, we, the statisticians of the trial, found beyond any doubt that the distribution was lognormal.

When I say the distribution of blood loss volume is lognormal, I mean it really is — not an approximation, a nice fit, things that we statisticians like.

No, we had 29,000 points, and the fit you are going to see was essentially perfect.

Then we went to do some homework, and we found from physics that the distribution of fluid volume flowing in pipes is lognormal, and that has been known since the 19th century. It comes from physics.

Of course, we realized that our pipes are blood vessels, so they are elastic, and the viscosity of the blood changes because of coagulation.

But still, we have sort of a model: we have fluid flowing in pipes, and the data showed that.

We were very excited about that.

We went further to see the consequences of using V for the estimation of the risk, and we got nice results.

Then we had to convince the investigators.

Such a large trial has lots of investigators, big shots. The physicians own the problem, so they have the last word on everything.

They frowned at the idea. Some of them really didn't like it.

They said, "Well, we use no hypothesis, since it's just a binomial variable; it has no model."

It has one, but they think it doesn't. People think it's too simple.

What if the lognormal distribution is not correct? We could get wrong results.

So we did exactly what we're going to do right here.

We did the analysis in front of them, and with JMP that was very compelling — I hope you agree with that.

JMP also helped communicate the discovery to the investigators in a very compelling way.

Just to advance the result: using the lognormal distribution saved the results of the experiment. That's part of the story.

We then went on to publish those results after the publication of the experiment was done, because the experiment had failed — you'll see that it's a nice story.

We published the results with the lognormal distribution as a secondary analysis.

That touched the hearts of the European authorities, namely the EMA.

Right now, carbetocin is very happily being used in low-resource countries, where it is needed. We are very happy with that.

Let me show you how it went.

First of all, the measurement. You see on the left a sort of collector for collecting the blood. It's used in many deliveries.

As I told you, sometimes you have to take very fast action when the woman is bleeding too much.

People can evaluate the blood loss just by seeing the stain on the bed or on the floor, but in many cases people use that collector.

The collector has a scale, which I enlarged on the right.

In the first two trials, the blood loss volume was evaluated with that scale.

Then they changed it, because it was not good enough — not precise enough — for our experiments, the three of them that have been run over about 20 years now.

Let me show you how it goes with JMP.

Let's see. I feel more comfortable with JMP.

Here is a data table with all 71,000 cases from the three trials.

Here they are: Misoprostol, Active Management, and Carbetocin.

Let's see the distribution of the blood loss volume for the three of them, by trial, not by treatment.

The difference by treatment is so small that it won't matter for this short demonstration; I'm not analyzing the experiments yet.

Here is the Misoprostol distribution.

You see that it's a very nice lognormal, isn't it? It could be something else, but it is lognormal.

It looks like a nice distribution, but it has problems. It's hiding the problems, actually — not for fitting a lognormal, but for analyzing the data the way it was analyzed, with the binomial variables.

Let's use the grabber tool and change the bins of the histogram, making them thinner. Okay, there we go.

What we see are spikes in the distribution. At regular intervals you have spikes; you can see them here. Let me adjust a little bit — yes.

Now you might say, well, there's no problem. It's like numerical integration: you lose on one bin, you have an excess on the other bin, they alternate, and you end up with a nice integration.

Well, that's not the case here, because we have a problem: at 1,000 we have a cut-off.

Let me zoom in on the distribution around 1,000, which matters most for us.

See, here is the spike at 1,000. But you see, part of this frequency comes from the left, from the 900s.

Because of the reading of that scale — the scale was rough — people tended to round the numbers. There is a sort of digit preference here.

It's very clear that some cases that were not severe PPH were moved to severe PPH, and that's no trivial quantity compared with the small frequency here.

That means that, in spite of having no model, as my colleagues said, for the binomial variables, we probably have a positive bias in this estimation.

Now, this problem was taken care of by weighing the collector device before the procedure and then weighing it again after the procedure.

That was done only in the carbetocin trial; it started with the carbetocin trial.

If we do the same trick here and change the bins, you now see that we have a nice distribution, with no spikes anymore. Weighing solved that problem.

Now, let me tell you, this collector is not for the experiment; it's for actual clinical use.

The evaluation of the blood loss and its speed during the delivery works perfectly with that scale.

We cannot remove the collector and weigh it just to decide whether you have to perform, say, a hysterectomy or something like that.

It's still in place; it's still used like that. We just changed the measurement for the trial: we weigh at the end.

That's just a curiosity, but something interesting. It also came from the ability we have to do this sort of analysis so easily with JMP. That's more important than we can even imagine.

Now let's go to the real problem. It is also easy with JMP.

I'm going to analyze the results of the carbetocin trial, but, so that I don't get mixed up in front of you, I prepared a data set with a subset for the carbetocin trial only.

Here it is, 29,000 cases only. It's a subset of that other table.

Let me take the opportunity to tell you about the data I have here. Of course, this is not the full data of the trial.

In clinical trials you collect hundreds of columns of [inaudible 00:20:56], for many reasons, for monitoring, and so on and so forth.

Here we have just the center, because the experiment was randomized by center, so I have to keep it.

Then the arm: it's coded one and two here, but since the blinding has been opened and the trial is over, of course, I also have treatment and control labels.

Then the volume; that's all the data we need.

Those two columns here are derived: the sPPH indicator and the PPH indicator, so they are very easy to do. Just an indicator of [inaudible 00:21:48] PPH in this case.
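
Just to make that derivation concrete, here is a minimal sketch of those two indicator columns in Python (pandas, with an assumed column name Volume_mL and made-up volumes; the real columns are JMP formula columns):

import pandas as pd

# Hypothetical blood loss volumes (mL), one row per delivery -- illustration only.
df = pd.DataFrame({"Volume_mL": [250, 480, 650, 1020, 90]})

# The outcome indicators are simple thresholds on the measured volume.
df["PPH"] = (df["Volume_mL"] > 500).astype(int)    # PPH: more than 500 mL
df["sPPH"] = (df["Volume_mL"] > 1000).astype(int)  # severe PPH: more than 1,000 mL

print(df)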

Let's start by analyzing the data the way the protocol says, perhaps in a simplified way, not doing the complete analysis; let's analyze the sPPH response.

Remember... actually, I haven't said it yet: in the actual trial analysis we came to a relative risk of 1.26, and the maximum for non-inferiority, as I told you, was 1.23.

So it was a near-miss situation. We could not declare non-inferiority, and if you go to the publication of the experiment — you can find it in the references on the last slide — we had to publish that we did not prove non-inferiority, much to our regret.

Let's go and do it, as a sort of showcase for JMP.

All we need now is Fit Y by X. It's so simple after all that work.

We have treatment for X, and we use Block for the centers, just to respect the randomization.

There we could explore all the results, but I'm looking just for the relative risk, which is one item on the menu here: Relative Risk.

Well, one is our response, and treatment must be in the numerator. That's our choice.

There we go. We have down here 1.255 — that's the 1.26 we got with those nicer models, with random effects for center and things like that.

So it's 1.25; it's a near-miss situation. We did not prove non-inferiority.
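
For readers without JMP: the relative risk reported by Fit Y by X is just the ratio of the two observed sPPH proportions. A minimal sketch, with invented counts (not the trial data), keeping the treatment arm in the numerator:

# Relative risk of sPPH from a 2x2 summary -- the counts below are invented for illustration.
events_carbetocin, n_carbetocin = 190, 15000   # hypothetical sPPH events / deliveries
events_oxytocin, n_oxytocin = 152, 15000       # hypothetical sPPH events / deliveries

risk_carbetocin = events_carbetocin / n_carbetocin
risk_oxytocin = events_oxytocin / n_oxytocin
relative_risk = risk_carbetocin / risk_oxytocin  # treatment in the numerator
print(f"RR = {relative_risk:.3f}")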

Instead of just weeping over the results, we went on and tried an analysis that was not planned; we published it afterwards as a sort of secondary analysis.

Let's analyze the distribution of V.

To do that, I'm not going to the Distribution platform. Rather, I'm going to use Reliability and Survival, Life Distribution, because it's a much richer platform for studying distributions — except that the variable, the column, must be non-negative. That's the case for volume.

I can use volume in place of time here. I don't need censoring or anything like that; there's no such thing here. It's just a tool for fitting distributions.

Now let's get down to business.

I have the distributions of both treatment and control, that is, carbetocin and oxytocin. Let's separate them.

You can do that with a local data filter on treatment, and I'll choose the treatment arm here, that's carbetocin.

On the right we have the data points, those black dots. There are so many of them — 15,000 treated with carbetocin — that they look like a continuous line, but those are the points.

The ones in blue are nonparametric estimates, nonparametric estimates [inaudible 00:25:41]. They are the same as the binomial estimates pointwise, because there is no censoring.

Then, where is the lognormal? There's no lognormal in the menu of distributions.

That's because there are zeros in the data, so we cannot fit a two-parameter lognormal.

Some women were lucky enough to have [inaudible 00:26:06] zero milliliters of blood loss — probably that was some mistake.

There were women that went almost to 4,000 mL in the control arm, and those were probably in shock.

This whole large span was being collapsed by the binomial variable into just two values.

Okay, let's fit the threshold lognormal — the lognormal with a shift, so that we can accommodate the zeros in the fit.

Now we have three lines here. The red one is the threshold lognormal. All three are there; they are hiding one another.

Then people can say, "Well, okay, the fit is very nice, perfect." It's not always like that: if I fit a normal or a smallest extreme value here, things like that, you can see how they come out; but there's no need for that here.

We can find the risk in several places in this report.

The risk is one minus 0.985. If you don't want to do this sort of subtraction, we can show the survival curve, and the risk is 1.47% for carbetocin.

If I want to see the risk at 1,000 for oxytocin, it's again the same, 1.47%. Wow.

We also have confidence intervals here.
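
If you want to reproduce this kind of fit outside JMP, here is a minimal sketch with scipy. The volumes are simulated (a shifted lognormal with sigma near 0.7), because the trial data are not public; scipy's loc parameter plays the role of JMP's threshold:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated blood loss volumes (mL) -- illustration only, not the trial data.
volumes = 10.0 + rng.lognormal(mean=5.47, sigma=0.7, size=15000)

# Three-parameter (threshold) lognormal fit by maximum likelihood.
sigma, threshold, scale = stats.lognorm.fit(volumes)
mu = np.log(scale)  # location parameter on the log scale

# Risk of severe PPH = P(V > 1000), the upper tail of the fitted distribution.
risk = stats.lognorm.sf(1000, sigma, loc=threshold, scale=scale)
print(f"mu = {mu:.3f}, sigma = {sigma:.3f}, threshold = {threshold:.1f}, P(V > 1000) = {risk:.4f}")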

People will challenge us and say, "Those distributions look the same because of the scale of the graph."

Well, let's take up that challenge. Let's zoom in around 1,000, just because that's what we care about.

Look how close the fit is. It's very close.

Now I can go even further, like this, and we can see even more.

We see that the point estimates — this black dot here, if you want — are almost the same as the red line, which is the lognormal fit.

My fellow investigators could see it; I don't need expressions or a table. A table wouldn't say anything. They could even — I don't know — they could even think the statisticians were cheating.

This is the easy way to show it, but there is more to see here.

If you look at the confidence interval for the lognormal distribution, it's about one third the width of the nonparametric one.

Well, since the precision increases with the square root of the experiment size, we can guess that with a sample one ninth of that size, I would get for the lognormal the same confidence interval that I get here for the nonparametric estimate.

That's interesting: instead of using 30,000 women, essentially I could use 3,000 and get this result.

That would be very good for the investigators when they plan trials. This reduced [inaudible 00:29:55] of the confidence interval came from the lognormal, which was not planned. So there is something else to see here.

Well, okay, you're doing fine for the risk: you're getting the risk from the lognormal, which is the same as the binomial rate, and you have a tighter confidence interval — if the lognormal assumption is okay, and it is.

Now, what about the relative risk?

Well, we could take the logarithm of V; then we have a normal distribution, so we have the standard apparatus to do some regressions and find the relative risk.

But I remember John Sall talking at this same meeting last year. His talk had a nice title, Delicate Brute Force.

Let's use the same thing, delicate brute force. If it's good for John Sall, it's going to be good for us too.

Here are the estimated parameters of the lognormal that we get.

If we draw a bootstrap sample of this, we can compute the risk — the bootstrap risk — so we have a bootstrap sample for the risk. We can do that for carbetocin and for oxytocin, and that's good.

Then you say, "Well, I have to program this. I have to program the bootstrap sampling." It's not difficult, but you have to program it, and then you have to compute 1,000 times, 2,000 times, however many lognormal fits.

But no, JMP is nice twice over: if you right-click on this table, you have Bootstrap on the menu.

The suggestion is to take 2,500 samples; we could take 5,000 or whatever, but it takes a long time. We did it with 1,000 and we were very happy with that.

It takes 10 minutes or so for each treatment, carbetocin or oxytocin. I'm not going to make you wait 10 minutes — I didn't want to wait that long either, right?

We did that beforehand, and here is the bootstrap sample for the control, I mean, for oxytocin.

The outputs are the parameters here. The first row is the actual result of the experiment, and all the rest are the 1,000 bootstrap samples; that's why we have 1,001 rows here.

Now, this column here is computed from the parameters; it's just the risk estimate: one minus the lognormal distribution function evaluated at 1,000 minus the threshold, with the fitted location and scale.

Fine, easy.

Now, here is the same thing for carbetocin.

Now I use a result that I read in the book by [inaudible 00:33:18], the man who knows everything about the bootstrap.

To get a bootstrap sample of the relative risk, all I have to do is take those two bootstrap samples and join the tables row-wise. That's a Mickey Mouse operation for JMP, the kind of thing we do with Tables and so on.

Here are the results. I kept just the risk columns, for carbetocin and for oxytocin.

If you don't want to use that extra first row — I don't know why you wouldn't, but — we can exclude it and use just the bootstrap rows.

We have the relative risk here, just the quotient of those two columns. We're done.

Take the distribution of this bootstrap sample of relative risks, and here we are; we're almost ready to celebrate.

Here's the distribution. You don't see 1.23 here... yes, you do, but now we need a one-sided confidence interval with 95% coverage.

I need the upper 5% point, the 95% quantile, which is not shown here.

Okay, so we kindly ask JMP to compute it: you can use Display Options, Custom Quantiles, and we need the 0.95 quantile, which turns out to be 1.11.

We even have a bonus result, which is a confidence interval for this quantile estimate. If you want to be really safe, you can use the upper confidence limit of that quantile — the limit of the confidence limit, which is too involved to say. Anyway, it's still well below 1.23.

So we have shown, in some sense — we have produced evidence — that carbetocin is non-inferior to oxytocin.
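
For anyone who wants to replay that logic outside JMP, here is a self-contained sketch of the same bootstrap idea on simulated data (lognormal arms with sigma near 0.7; the arm means, sample sizes, and number of resamples are invented for illustration, and the real analysis used JMP's right-click Bootstrap of the fitted parameter table):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated arms (blood loss in mL) -- illustration only, not the trial data.
carbetocin = rng.lognormal(mean=5.48, sigma=0.7, size=3000)
oxytocin = rng.lognormal(mean=5.46, sigma=0.7, size=3000)

def lognormal_risk(volumes, cutoff=1000.0):
    """Fit a two-parameter lognormal by maximum likelihood and return P(V > cutoff)."""
    logs = np.log(volumes)
    mu, sigma = logs.mean(), logs.std()  # MLEs of the log-scale mean and SD
    return stats.norm.sf((np.log(cutoff) - mu) / sigma)

# Bootstrap the risk in each arm and take the quotient, row by row.
B = 1000
rr_boot = np.empty(B)
for b in range(B):
    t = rng.choice(carbetocin, size=carbetocin.size, replace=True)
    c = rng.choice(oxytocin, size=oxytocin.size, replace=True)
    rr_boot[b] = lognormal_risk(t) / lognormal_risk(c)

# Upper limit of a one-sided 95% interval: the 0.95 quantile of the bootstrap RRs.
rr_hat = lognormal_risk(carbetocin) / lognormal_risk(oxytocin)
print(f"RR = {rr_hat:.3f}, 0.95 bootstrap quantile = {np.quantile(rr_boot, 0.95):.3f} (compare with 1.23)")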

That's the result we published.

As I told you, that publication, with some work by the investigators, warmed the hearts of the EMA, the European authority that was overseeing this trial, and carbetocin is now being used in places where no cold chain is assured.

Let me use your time, if I may, just to show the efficiency that we gain. Let me go back to the presentation.

Let's look at the relative efficiency of the binomial versus the lognormal.

Let's take a problem — not non-inferiority, but the simpler problem of testing the superiority of a new drug over oxytocin.

The new drug would be declared superior if its risk of sPPH is less than 1.5%, compared with 2% for oxytocin.

We have here all we need to do a binomial test.

For the lognormal test, we need to convert these proportions to means. Let's do it.

For the [inaudible 00:37:15] you have this; for the lognormal, we just do this: the risk is the probability of the volume being larger than 1,000, so we take logs on both sides, subtract the mean, and standardize, which gives a standard normal variable.

And here, for s, the standard deviation, we use 0.7.

It is remarkable that in every fit we did — every inference we did with those three trials, and then a few more for which we have data available — the standard deviation came out at about 0.7.

We jokingly call it the universal constant of PPH: the standard deviation of the logarithm of the blood loss volume is 0.7.

We replace s by 0.7, compute the quantiles at 2% and at 1.5%, solve the equations for the means, and we have two means to compare with a normal distribution.

The difference is 0.0814.
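
Written out (natural logs, s = 0.7, and z_q the standard normal quantile), the conversion is:

\[
p = \Pr(V > 1000) = \Pr\!\left(Z > \frac{\ln 1000 - \mu}{0.7}\right)
\quad\Longrightarrow\quad
\mu = \ln 1000 - 0.7\, z_{1-p},
\]
\[
\mu_{2\%} = \ln 1000 - 0.7\, z_{0.98} \approx 5.470,
\qquad
\mu_{1.5\%} = \ln 1000 - 0.7\, z_{0.985} \approx 5.389,
\qquad
\delta = \mu_{2\%} - \mu_{1.5\%} \approx 0.0814 .
\]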

Okay, now we go back to JMP.

I almost feel ashamed of showing this, but it's fun, and we did it for our medical team there and it was very compelling, as I told you. Sorry about all the open windows here.

Very simple: you go to DOE, Sample Size Explorers, Power — it's Mickey Mouse stuff. Let's compute the sample size for two independent sample proportions.

It's one-sided; it's a superiority test. The proportion under the null is 2%, and under the alternative it's 1.5% — it doesn't change too much, the 1.5 is hiding here.

Then we want 80% power, and the sample size is 17,000. Okay, 17,000 to go the old-fashioned way.

Let's compute the sample size for the lognormal, using Sample Size Explorers from DOE: Power, and Power for Two Independent Sample Means.

We have a one-sided test. We have to enter the standard deviations, which are 0.7 for both groups, then the difference to detect, which we had computed as 0.0814, and we want 80% power.

Okay, we get the result: the sample size, or the experiment size, is 1,831.

That's about one ninth of the 17,000 that we had computed for the binomial [inaudible 00:40:31].

That's what we anticipated by just inspecting the width of the confidence intervals in the reliability platform.

That's how much more efficient using the lognormal is [inaudible 00:40:48] the binomial.
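
As a rough cross-check of those two JMP numbers, the standard normal-approximation sample size formulas give almost the same answers. This is not necessarily the exact method the Sample Size Explorers use, so small differences are expected:

import numpy as np
from scipy.stats import norm

alpha, power = 0.05, 0.80                      # one-sided test, 80% power
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)

# Binomial design: compare sPPH proportions 2% (oxytocin) vs. 1.5% (new drug).
p1, p2 = 0.02, 0.015
p_bar = (p1 + p2) / 2
n_binomial = (z_a * np.sqrt(2 * p_bar * (1 - p_bar))
              + z_b * np.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))**2 / (p1 - p2)**2

# Lognormal design: compare means of log(volume), sigma = 0.7, difference 0.0814.
sigma, delta = 0.7, 0.0814
n_lognormal = 2 * (sigma * (z_a + z_b) / delta)**2

print(f"binomial:  about {2 * np.ceil(n_binomial):.0f} women in total")   # roughly 17,000
print(f"lognormal: about {2 * np.ceil(n_lognormal):.0f} women in total")  # roughly 1,830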

Just to finish, the wrap-up is this: the lognormal distribution fits the blood loss volume very well, so why not use it?

Using this fact, the estimates of the risks are much more precise.

We even showed that our big trial was saved, in some sense, by showing the non-inferiority of carbetocin using the lognormal.

We are very happy to tell you that a new trial is already underway, now using the lognormal.

This trial would not have come to life otherwise, because we don't have the money for 30,000 people. But since we need fewer than 4,000, that made the trial possible.

It's underway now. It's for treatment, not for prevention like the others.

That's what I had to tell you. Thank you very much.