I changed it a bit, too. Mostly a matter of personal style.
 
Names Default to Here( 1 );
// use a small example
data = Open( "$SAMPLE_DATA/Big Class.jmp" );
// launch SVM platform
svm = data << Support Vector Machines(
	Y( :age ),
	X( :height, :weight )
);
// iterate over a range of hyper-parameters
For( cost = 0.5, cost <= 5, cost += 0.5,
	For( gamma = 0.1, gamma <= 1, gamma += 0.1, 
		svm << Fit(
			Kernel Function( "Radial Basis Function" ),
			Gamma( gamma ),
			Cost( cost ),
			Validation Method( "None" )
		);
	);
);
// extract the hyper-parameters and performance measures into a new table
results = Report( svm )["Model Comparison"][TableBox(1)] << Make Into Data Table;
// fit a neural network as an interpolator for profiling
results << Neural(
	Y( :Training Misclassification Rate ),
	X( :Cost, :Gamma ),
	Informative Missing( 0 ),
	Validation Method( "Holdback", 0.3333 ),
	Fit(
		NTanH( 3 ),
		Profiler(
			1,
			Confidence Intervals( 1 ),
			Term Value(
				Cost( 2.75, Lock( 0 ), Show( 1 ) ),
				Gamma( 1, Lock( 0 ), Show( 1 ) )
			)
		),
		Plot Actual by Predicted( 1 ),
		Plot Residual by Predicted( 1 )
	)
);
 
But I also added another step to understand how the misclassification rate depends on the hyper-parameters. I saved the table from Model Comparison and fit a simple (but adequate) neural network to it. The profiler can then be used to find good hyper-parameter values and understand their effect; for example, it suggests a high gamma and a moderate cost, with little improvement from increasing the cost further.
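If you just want the single best grid point, you can also read it straight off the comparison table. Here is a rough sketch, assuming the Model Comparison table keeps the column names used above (Training Misclassification Rate, Cost, Gamma):

// rough sketch: pick out the grid point with the lowest training misclassification rate
// (assumes the column names above: Training Misclassification Rate, Cost, Gamma)
best = results << Get Rows Where(
	:Training Misclassification Rate == Col Minimum( :Training Misclassification Rate )
);
Show( results:Cost[best[1]], results:Gamma[best[1]] );

The profiler is still the better tool for understanding the shape of the surface around that point, though.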