You don't provide the full details on what you are rating, but I believe @jthi is on the right track here. To get Kappa values, the standard needs to include the same set of possible values that the raters can give. You currently have only 1's for the standard, but the raters have both 0 and 1 as possibilities.
Typically an attribute chart is used when you are rating an item on a scale, say 1 to 5. The attribute gage analysis then compares the raters to see whether they are getting the correct "standard" value. In the JMP table you would enter the rating from each of the panelists rather than just whether they are correct or not.
With your data, even though there are only two possibilities (1 and 0), having only 1 as the standard does not give a complete picture. You can see whether your raters can identify the 1 level, but how do they do when the other level (the 0) is what they should see? In other words, do they ever see a part without the feature and incorrectly say the feature is there? That is important to know as well, and it is needed for JMP to calculate Kappa.
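To make that concrete, here is a minimal sketch of Cohen's kappa between one rater and the standard, written in plain Python just for illustration (this is not JMP's Attribute Gauge code, and the numbers are made up, not your data). When the standard only ever shows one level, the chance-agreement term equals the observed agreement, so kappa collapses to 0 or to 0/0 and tells you nothing about the rater.

```python
# Minimal sketch of Cohen's kappa between one rater and the standard.
# Illustrative plain Python, not JMP's Attribute Gauge calculation.
from collections import Counter

def cohen_kappa(standard, rater):
    """kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(standard)
    # Observed agreement: fraction of parts where the rater matches the standard.
    p_o = sum(s == r for s, r in zip(standard, rater)) / n
    # Chance agreement: product of the marginal frequencies, summed over levels.
    std_counts, rater_counts = Counter(standard), Counter(rater)
    p_e = sum(std_counts[lvl] * rater_counts[lvl] for lvl in std_counts) / n ** 2
    if p_e == 1:
        return float("nan")          # 0/0: kappa is undefined
    return (p_o - p_e) / (1 - p_e)

# Standard is all 1's (as in your table): p_o always equals p_e, so kappa is
# 0 no matter how the rater does, or undefined if the rater also says all 1's.
print(cohen_kappa([1, 1, 1, 1], [1, 0, 1, 1]))   # 0.0
print(cohen_kappa([1, 1, 1, 1], [1, 1, 1, 1]))   # nan
```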
As a quick test to see if this truly resolves the problem, enter one row at the bottom with a standard of 0, enter values of 0 for the raters and run the analysis. I expect you will then have a Kappa value.
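Using the same illustrative cohen_kappa sketch from above (again with made-up numbers, not your actual data), that single added row with a standard of 0 is enough to produce a finite, meaningful kappa:

```python
# One extra part with standard = 0 that the rater also calls 0.
standard = [1, 1, 1, 1, 0]
rater    = [1, 0, 1, 1, 0]
print(cohen_kappa(standard, rater))   # about 0.55 now that both levels appear in the standard
```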
Dan Obermiller