AngeloF88
Level II

rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

Hello,

 

I recently started to use JMP Pro, including for artificial neural networks.

I would like to know: how can I verify neural network architectures with JMP?
E.g., using mathematical expressions, symbolic logic, fuzzy logic, or decision trees...
I read that these approaches could resolve the "black box" problem.

 

Thank you so much,
Angelo

 

 

 

17 REPLIES
Marco1
Level IV

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

Hello Dan,

Excellent explanation! Thanks, I'm going to try your solution. Regarding the share prices (attached: table 1, the base, and table 2, the objective), I have 4 columns of 74,547 rows with open, high, low, and close prices. How can I use a formula in JMP to create 400 or more new columns that hold the maximum possible number of rows, with the labels and data created by the formula? For example:
open100, high100, low100, close100, open99, high99, low99, close99, open98, high98, low98, close98 ....

Greetings,

Marco
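
In JMP itself this kind of reshaping would normally be done with column formulas based on the Lag() function, but purely to illustrate the transformation Marco is describing, here is a minimal pandas sketch outside JMP. The file name and the choice of 100 lags are assumptions; the open/high/low/close column names come from the post.

# Minimal sketch (outside JMP) of the lag-column idea: from 4 price columns,
# build open1..open100, high1..high100, etc., where column k holds the value
# from k rows earlier. File name and lag count are hypothetical.
import pandas as pd

prices = pd.read_csv("prices.csv")              # columns: open, high, low, close
n_lags = 100                                    # 100 lags x 4 columns = 400 new columns

lagged = {}
for k in range(1, n_lags + 1):
    for col in ["open", "high", "low", "close"]:
        lagged[f"{col}{k}"] = prices[col].shift(k)   # value from k rows back

wide = pd.concat([prices, pd.DataFrame(lagged)], axis=1)
# The first n_lags rows have incomplete history; dropping them keeps the
# maximum number of rows for which every lag column is populated.
wide = wide.dropna().reset_index(drop=True)
print(wide.shape)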

Marco1
Level IV

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

Hello Dan,

I tried your 200-column solution and I think I'm doing something wrong. I've attached the table with the formula so you can see where I'm going wrong. Thanks!
Cheers,

Marco

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

Thank you, @Dan_Obermiller, for answering Marco's questions. I would also like to add the white paper by Chris Gotwalt, who manages the JMP statistical developers group.

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

I have further information from JMP Development about your inquiry. They say that it is difficult to verify neural networks (NN) in any software product. NN are not like linear regression, where the coefficients and predictions will be the same no matter which product you use; every product will be quite different. The best you can do is use cross-validation to assess the predictive performance of data mining models. If there is a strong need to verify the models that are used, then stay with least squares and logistic regression.

I have included a JMP white paper that covers as much as we can say about NN in JMP.
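
To make the cross-validation suggestion concrete, here is a minimal sketch outside JMP using scikit-learn on synthetic data; the data, network size, and settings are all invented for illustration, and the same idea underlies the validation options in JMP Pro's Neural platform.

# Minimal sketch: assess a neural network's predictive performance with
# k-fold cross-validation (scikit-learn, synthetic data, arbitrary settings).
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)

# Held-out R^2 across folds estimates predictive performance, which is the
# practical way to judge a neural network, since its coefficients are not
# directly comparable across software products.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores.round(3), round(scores.mean(), 3))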

AngeloF88
Level II

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

Thank you so much! I read the paper, and it is interesting. I have also read some publications (including in journals with a high impact factor) concerning the use of ANN in the field of plant physiology. I believe they are a strong predictive instrument, but the black-box problem is hard to explain to researchers who do not work with mathematics all day. However, JMP helps you use an artificial neural network and understand the results. After your suggestions concerning the profiler and estimation, I have a clearer view of artificial neural networks. I read the excellent books available directly from JMP.

I also found, in ANN and other statistical analyses, the Monte Carlo simulation from the profiler. Bingo! This is an important kind of simulation for paleoclimatology and, in general, for people who work with prediction in ecology, and it is very easy to do in JMP. Do you know what the difference is in JMP between a random table and a simulation from the profiler? Are both types of Monte Carlo simulation?

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)


@AngeloF88 wrote:
It is very easy to do this simulation in JMP. Do you know what the difference is in JMP between a random table and a simulation from the profiler? Are both types of Monte Carlo simulation?


Please clarify which tables ("random tables" and "simulation from the profiler") you are referring to so that I can answer your question.

AngeloF88
Level II

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

Hi,

Thank you so much for replying to me!

 

I read the JMP documentation, in the Prediction Profiler Options section, and found that "Output Random Table" is the filtered Monte Carlo technique, while "Simulation" is a Monte Carlo simulation that uses random noise.

I do not know the difference between the two techniques or which method is better; I have to study this difference further. If you have any advice, it would be much appreciated. However, I have noticed that the Monte Carlo simulation takes more time than the filtered Monte Carlo technique.

 

I have to do this in the profiler of the neural network. Therefore, I thought that it could be a hybrid ANN-Monte Carlo simulation.

 

To help you understand, here is what I did:

For the random table:

1. I selected the Neural option.

2. I clicked the red triangle for the model.

3. I clicked the Prediction Profiler option.

4. I clicked the red triangle on the Prediction Profiler.

5. I selected Output Random Table.

For the simulation, I performed the same procedure, but instead of clicking "Output Random Table", I selected the "Simulator" option. I then clicked its red triangle and selected "Simulation Experiment".

I adjusted the noise in the interactive cells and selected the random variable.

 

 

Re: rule-extraction algorithms for verifying the Artificial Neural Network (ANN)

This quote is from the Help section on the Prediction Profiler command that outputs a random table:

 

"Prompts for a number of runs and creates an output table with that many rows, with random factor settings and predicted values over those settings. This is equivalent to (but much simpler than) opening the Simulator, resetting all the factors to a random uniform distribution, then simulating output. This command is similar to Output Grid Table, except it results in a random table rather than a sequenced one.
The prime reason to make uniform random factor tables is to explore the factor space in a multivariate way using graphical queries. This technique is called Filtered Monte Carlo.
Suppose you want to see the locus of all factor settings that produce a given range of desirable response settings. By selecting and hiding the points that do not qualify (using graphical brushing or the Data Filter), you see the possibilities of what is left: the opportunity space yielding the result that you want.
Some rows might appear selected and marked with a red dot. These represent the points on the multivariate desirability Pareto Frontier - the points that are not dominated by other points with respect to the desirability of all the factors."
 

So it is a simple way to obtain a uniform random simulation of the predictors and the model prediction of the response. It has the same purpose as the Output Grid Table command.
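
As a rough illustration of the Filtered Monte Carlo idea in the quoted help, here is a minimal sketch outside JMP: draw uniform random factor settings, run them through a prediction formula, and keep only the settings whose predicted response falls in the desired range. The prediction formula, factor ranges, and response limits below are invented stand-ins.

# Minimal sketch of Filtered Monte Carlo: uniform random factor settings,
# predictions from a fitted model, then filtering to the desirable response
# range to reveal the "opportunity space". Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def predict(x1, x2):
    # Stand-in for a saved prediction formula (e.g. a fitted neural network).
    return 10 + 3 * x1 - 2 * x2 + 1.5 * x1 * x2

n = 10_000
x1 = rng.uniform(0, 1, n)          # uniform random settings over each factor's range
x2 = rng.uniform(0, 1, n)
yhat = predict(x1, x2)

low, high = 11, 13                 # desirable response range (made up)
keep = (yhat >= low) & (yhat <= high)
print(f"{keep.mean():.1%} of the sampled factor space meets the target")
# Plotting x1 against x2 for the kept rows shows the opportunity space the help
# text describes; in JMP you would get there by brushing or filtering the
# output random table.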

The Simulator function gives you much more control over the nature of the variation of the predictors and additional random variation of the response. It is primarily for assessing process capability but obviously has other applications.
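
For contrast, here is an equally rough sketch of what the Simulator does conceptually: specified random variation on the predictors plus added random noise on the response, propagated through the same stand-in prediction formula to estimate how often the result stays within specification. All distributions and spec limits are invented.

# Rough sketch of the Simulator idea: random variation on the factors around
# their set points, plus added noise on the response, used to estimate the
# fraction of output falling inside (hypothetical) spec limits.
import numpy as np

rng = np.random.default_rng(0)

def predict(x1, x2):
    return 10 + 3 * x1 - 2 * x2 + 1.5 * x1 * x2   # stand-in prediction formula

n = 100_000
x1 = rng.normal(loc=0.6, scale=0.05, size=n)   # factor 1 varies around its set point
x2 = rng.normal(loc=0.4, scale=0.10, size=n)   # factor 2 varies around its set point
noise = rng.normal(scale=0.2, size=n)          # extra random variation on the response

y = predict(x1, x2) + noise

lsl, usl = 10.5, 12.5                          # spec limits (made up)
in_spec = (y >= lsl) & (y <= usl)
print(f"estimated fraction in spec: {in_spec.mean():.3f}")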