I agree with your approach: estimate the best linear regression model using the transformed variables, predict the mean, then back-transform the prediction to the original scale. Do the same for the confidence limits. You cannot use the JMP formula for the confidence interval, though; you need a surrogate computation.
Here is an example of what I propose:
Names Default to Here( 1 );
// example of linear regression
dt = Open( "$SAMPLE_DATA/Big Class.jmp" );
// transform variables
dt << New Column( "Log weight", "Numeric", "Continuous", Formula( Log( :weight ) ) )
   << New Column( "Log height", "Numeric", "Continuous", Formula( Log( :height ) ) );
// fit linear regression model on transformed variables
fls = dt << Fit Model(
    Y( :Log weight ),
    Effects( :Log height ),
    Personality( "Standard Least Squares" ),
    Emphasis( "Minimal Report" ),
    Run(
        :Log weight << {Summary of Fit( 1 ), Analysis of Variance( 1 ),
        Parameter Estimates( 1 ), Scaled Estimates( 0 ),
        Plot Actual by Predicted( 0 ), Plot Residual by Predicted( 0 ),
        Plot Studentized Residuals( 0 ), Plot Effect Leverage( 0 ),
        Plot Residual by Normal Quantiles( 0 ), Box Cox Y Transformation( 0 )}
    )
);
// save mean prediction formula (this model has no random effects, so use Prediction Formula)
fls << Prediction Formula;
// save mean confidence limit formulas (returns the lower and upper limit columns)
mci = fls << Mean Confidence Limit Formula( .05 );
// save individual confidence limit formulas
ici = fls << Indiv Confidence Limit Formula( .05 );
// proof of concept - fit a polynomial of the upper mean CI against the regressor X
dt << Bivariate( Y( mci[2] ), X( :Log height ), Fit Polynomial( 2 ) );
// use the back-transformed value of the polynomial for the upper mean limit, and so on...
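To finish that last step, here is a minimal sketch of the back-transformation. It assumes the default names JMP gives the saved formula columns ("Pred Formula Log weight", "Lower 95% Mean Log weight", "Upper 95% Mean Log weight"); verify the names in your data table and adjust if they differ:

// back-transform the mean prediction and its confidence limits to the original scale;
// the source column names below are assumed defaults, so check your table before running
dt << New Column( "Pred weight", "Numeric", "Continuous",
    Formula( Exp( :Name( "Pred Formula Log weight" ) ) ) );
dt << New Column( "Lower Mean weight", "Numeric", "Continuous",
    Formula( Exp( :Name( "Lower 95% Mean Log weight" ) ) ) );
dt << New Column( "Upper Mean weight", "Numeric", "Continuous",
    Formula( Exp( :Name( "Upper 95% Mean Log weight" ) ) ) );

Because Exp() is monotone increasing, the back-transformed limits keep their order, so they serve directly as the lower and upper limits on the original scale.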