Here's a funny thing:
You might save a little bit of time using
y * J(M, N, randomNormal()) + x
instead of
J(M, N, randomNormal(x, y)).
The reason is that the J function is really a loop: in the J(M, N, randomNormal(x, y)) version, each pass through the loop draws a standard random normal, multiplies it by y, and adds x to produce a random N(x, y) value.
In the y * J(M, N, randomNormal()) + x version, all of the standard random normals are generated first, and only afterward is the entire matrix multiplied by y and added to a dimensionally similar matrix of x's, so the scaling and shifting happen once as whole-matrix operations rather than once per element.
It's not a huge difference, but for the 1000x1000 matrices I was investigating, the second approach ran in about 3/4 the time of the first.
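The same element-wise-loop versus whole-matrix trade-off can be sketched in NumPy (an assumption on my part; the original J function belongs to a different matrix language, and `rng.standard_normal` here stands in for randomNormal()). Both versions below produce an M x N matrix of N(x, y) draws; the first scales and shifts inside the loop, the second does it once at the end:

```python
import numpy as np

M, N = 200, 200   # smaller than 1000x1000 so the demo runs quickly
x, y = 5.0, 2.0   # desired mean and standard deviation

rng = np.random.default_rng(0)

# Version 1: scale and shift each draw inside the loop,
# analogous to J(M, N, randomNormal(x, y)).
loop_version = np.empty((M, N))
for i in range(M):
    for j in range(N):
        loop_version[i, j] = y * rng.standard_normal() + x

# Version 2: draw all standard normals at once, then apply one
# whole-matrix multiply and one whole-matrix add,
# analogous to y * J(M, N, randomNormal()) + x.
vector_version = y * rng.standard_normal((M, N)) + x
```

In NumPy the vectorized version wins by far more than the 3/4 ratio quoted above, since the per-element loop pays Python interpreter overhead on every draw, but the shape of the argument is the same: hoist the constant arithmetic out of the loop.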