
I Know the Jaguar by His Paw: Distinguishing Jaguar and Puma Tracks in JMP

In the neotropics of Central and South America, the sympatric species of jaguar (Panthera onca) and puma (Puma concolor) are threatened due to human activity. Being able to identify the absence or presence of each species within a given region is fundamental in developing effective conservation strategies. While these cryptic and elusive species are rarely observed in the wild, the tracks they leave behind are indicators of their presence and distribution. However, jaguar and puma tracks cannot be easily distinguished visually.

In this talk, we unveil a new, intuitive field tool in JMP, allowing conservation biologists to rapidly differentiate between jaguars and pumas. First, we demonstrate how to extract features with the JMP FIT (Footprint Identification Technology) add-in and apply linear discriminant analysis. Next, we show how to extract the shape of a track with a statistical shape analysis in JMP. Finally, we predict the species using a neural network model of the shape coordinates, visualizing and exploring the output though a novel, interactive shape profiler tool.

 

 

Hello, everybody. Thank you very much for joining us. We're honored to be here and to share some of our work with you. This is a joint presentation between Sky and me from WildTrack, Caleb King from JMP, and Dr Juarez Pezzuti and Dr Dany Felix from the University of Para in Brazil. Unfortunately, they can't be here with us today, but we will be presenting with Caleb.

Going through this, it'll be a presentation in three parts. I'll be setting the scene for the work. Sky will be talking a bit about the initial data analytics, and Caleb will be showcasing some exciting new features and developments in JMP that are going to be helping us with this work. We hope you'll enjoy it. The title of the talk is I Know the Jaguar by His Paw. It's about distinguishing two of the apex predators, the top predators, of the Americas from each other using their footprints.

The work that we do at WildTrack is unique in that we are combining some of the most ancient techniques in conservation biology with some of the most modern. We're integrating traditional ecological knowledge with cutting-edge data analytics to provide non-invasive and community-friendly approaches to saving wildlife. Approaches that don't disturb animals, but at the same time are able to generate rapid data analytics to help protect them, which is an unusual approach in conservation, but it's gaining a lot of ground. We've been honored to win some awards for our work and to be able to work with some amazing people all over the world to try and improve the situation in conservation biology and help protect endangered species.

One of the big issues that we're all aware of is the loss of biodiversity on our planet. This is very much tied up with changes in climate, pollution, deforestation, and human-wildlife conflict. All these things together have really compressed the space and the habitats available for wild mammals by probably 94% of what it originally was. That is to say that wild mammals now make up about 6% of the world's mammal biomass, compared with the other 94%, which is humans and our domestic animals. Now, that's just looking at mammals. Of course, there are lots of other endangered species that are not mammals, but it gives a very interesting perspective, I think, of the frightening extent to which wildlife has been damaged and compromised by human activities.

I think these threats and these problems that affect biodiversity conservation in general can be nicely encapsulated by one of the projects we're working on, which is the THREATS to Puma and Jaguar in the Americas. As I said at the beginning, these are some of our most iconic and endangered species in the Americas. There are only about 170,000 puma left, and there are even fewer, about 50,000 jaguar. They coexist in habitats across the Americas, with the puma extending right up from the north of North America to the tip of South America, and the jaguar being a little bit more centered in the middle, mostly in South America.

But the conflicts, the problems that these animals face are very similar to the problems I've just discussed: human-wildlife conflict, deforestation, and, in the case of the jaguar, poaching. Jaguar are actually poached for their fur, and also for their teeth and their bones for the traditional medicine market. These are really major problems. One of the issues that we have is that we don't really have reliable and frequent data updates on numbers and distribution.

We have these current maps, which are very much broad strokes with the brush. They're not updated frequently enough on a local scale for us to really know what's going on and to be able to get reliable data. Numbers and distribution are key. We have to have those to know where these animals are and how to protect them. Although these maps look great, in fact, they're a very coarse lens. We need much more fine detail so that we can act quickly where these animals are endangered.

What techniques do we have at the moment to be able to differentiate jaguar from puma, and to know where they need to be protected? The answer is not really a lot and not enough. We have camera traps, which are used very commonly in wildlife conservation, and they're put out on trails where animals walk along in the jungle or the forest. As an animal goes past, it'll trigger a light beam, which takes a picture of it.

The problem with camera traps is that they're technically quite demanding to set up to be able to get a statistical survey. They tend to be stolen. Animals will avoid them. They're very expensive to deploy over landscape scale. While they're quite useful for a snapshot, they're not really going to give us landscape-scale information. Other techniques, like looking at DNA, are expensive to process per sample. You need to have all the genome information to be able to type an animal.

Then there are other approaches, like using instrumentation to monitor animals, so putting collars or tags on them, getting GPS data. Those are also expensive. They're also potentially damaging to the animals themselves. A lot of our work has elucidated that in terms of changes in habitat ranging, changes in fertility, changes in litter size, changes in social structure. All these things can happen when you start closely interfering with animals, tagging them, catching them, darting them.

What can we do? If we want numbers and distribution of these animals, what can we do that will deliver high-resolution data, but at a landscape scale? This is really what we're going to be talking about today.

At WildTrack, we've developed a footprint identification technology, which potentially offers a transformative solution. I won't read all the text on this slide because there's quite a lot, but I think the take-home message is that footprints are everywhere. They're a unique signal from nature, and they actually encapsulate a lot of data. Using footprints, we can identify the species of an animal, the sex, the individual, the age class. All we have to do is look down and collect images of those footprints. They are a source of data that is generally greatly underutilized, but fantastically rich in terms of being able to inform us on conservation status.

This is what our footprint identification technology does. One of the side benefits of this, if you like, is that because we're using footprints, they're very accessible to local communities who can track. Using this approach with footprints brings in traditional ecological knowledge and expertise, which is so underrated by science and yet could be so powerful as a conservation tool. Again, this is bringing together the ancient and the modern to make a tool which is going to help us better identify these animals.

As I was saying, really, a footprint is a signal, if you like. It could be any signal, but it's a signal which is regularly refreshed in the environment. The footprints are generally washed away after 24–48 hours, but until then, they're sitting there waiting for us. It's like the world is a blank canvas with footprints on it. If we can start identifying which species left those footprints, which individual, and where they were, we're going to populate that world map with data, which is going to be a fantastic resource for conservation.

Now, I'm going to stop here and hand over to Sky, but I'd just like to say we have lots of publications on our site if anybody's interested in reading a bit more about the conservation angle. It's all up there. These are our publications from the last 2 years. I'll hand over to Sky to talk a little bit more about the analytics, and then Caleb will talk about the exciting new features in JMP.

I hope everyone can see my screen. I'm Sky. What I'm going to do is I'm going to describe very, very quickly the way in which we do feature extraction to derive the variables which we then use for the analytics. Now, first of all, what you need to do with footprints is to get slightly familiar with the layout and the configuration. You need to decide which are the front, which are the hind, which are the right, and which are the left. That's not very difficult. It's like our hands: if you look at the configuration of our digits, left and right, it's not very dissimilar to the configuration you see over here in a puma paw. These over here are the front feet, and similarly over here the hind feet. Once you get used to that, you know which foot you are actually looking at.

The other thing to bear in mind is the fact that when carnivores walk normally in their gait, they sometimes will overstep. It's called registration. This is where the hind foot steps over the front foot. Now, this happens quite often, and as a result, we end up using just the hind feet to make the whole process very, very simple. Not only do we use the hind feet, but we concentrate on the left hind foot. In other words, we're developing a very, very simple way of using the analytics so that it can be used by many, many other researchers.

When you look at a puma and a jaguar left-hind footprint, you can see the similarity there. It's very, very difficult. At times, a very, very good tracker might be able to tell them apart, but more often than not, they're so similar that visually, they're very difficult to tell apart. What we have to bear in mind, of course, is the variation that you're likely to get within an individual, then the variation you get within the species, and then you look at the variation that you're likely to get between the species. This is just a demo to give a very brief description of the variation that you're going to get within footprints from different individuals for both the jaguar and the puma.

Now, what we've done with FIT is to develop a protocol which is standardized for carnivores. We place these what we call landmark points. As you can see over here, they're numbered. This is a puma footprint. They're numbered from 1 to 25. Now, you immediately say, "Well, how do you know where to put these points?" Well, once you get used to looking at a footprint, you can begin to tell that these are the north-south points and these are the east-west points for each digit, for each toe. It's very much like the human fingerprint, as I said. The first toe is missing over here, so you've got the four toes, and then the pad. Once you get used to it, the placing of these points can be done with a very, very high level of consistency and accuracy.

This is just a diagrammatic representation of the way in which we place the points and extract the metrics. We've got 25 landmark points. We then create 15 derived points from those, and that then leads us on to an automatic extraction of distances, angles, and areas. Then we create the FIT algorithms for individual identification, sex discrimination, species discrimination, and even, in some instances, age class discrimination. Here, of course, we're concentrating on species discrimination. What I'd like to do now is to quickly show you the way in which we do the feature extraction in FIT.
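To give readers a feel for the kinds of metrics that get extracted automatically from landmark points, here is an illustrative Python sketch. The FIT add-in itself is written in JSL, and the landmark names and coordinate values below are hypothetical, chosen only to show the three kinds of measurements (distances, angles, areas):

```python
import math

# Hypothetical landmark coordinates (x, y) in cm; real FIT tracks use 25 points.
toe_tip = (2.0, 6.0)
toe_base = (2.0, 4.0)
pad_center = (3.0, 1.0)

def distance(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle_at(vertex, p, q):
    """Angle (degrees) at `vertex` formed by the rays toward p and q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def polygon_area(points):
    """Shoelace formula: area of a polygon given its vertices in order."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

toe_length = distance(toe_tip, toe_base)                    # a distance metric
spread = angle_at(pad_center, toe_tip, toe_base)            # an angle metric
pad_area = polygon_area([(0, 0), (4, 0), (4, 2), (0, 2)])   # an area metric
```

Each track yields a whole vector of such variables (V1, V2, ...), and the FIT algorithms are then built on top of that table.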

Now, this is a JMP window. What we've done is we've actually created a FIT menu. If I click on that, you will see, hopefully you can read those species. We've actually created algorithms and protocols for a large number of species: the elephant, lion, tiger, Bengal tiger, Amur tiger, rhinos, cheetah, cougar, and various other species over here.

How does this work? Well, if I go to Jaguar, it brings up this screen over here. Over here is the main menu, which enables us to do the image feature extraction. It also does pairwise data analysis, and I won't be talking about that today. It also has functionality for validated discriminant analysis, and also mapping. In other words, you capture an image, you put it through the process, and it actually gives you the ID and where you found it. It's a complete picture all done within JMP.

I'm just going to stick to image feature extraction. If I launch that application, it brings up this window here. The left-hand side of the window is just a demo. It's an inert window. Everything happens on the right side over here. This is obviously our main menu over here. These are the fields where we put in the input, the information. Then these are the active buttons over here.

I'm just going to run through this very, very quickly. I'll bring in an image. There's my image. Of course, you can't see it, but we have functionality to resize it. It automatically resizes it. It retains the proportions, and that's very, very important. Over here, we've got all the instructions. You drag and drop an image, you resize, then you rotate with the rotation points.

We've resized the image. These are the rotation points over here. Why are these important? If I say rotate over here, it rotates the image. That's because the landmark points we are placing are placed visually. The top point in a toe will vary depending on the configuration or the placing of the footprint. Then you put the scale in, and the scale is over here. The scale factor is set at 10 centimeters over here. We've even got functionality for substrate depth, which I won't be using. Then these are the driving buttons over here.

I'm going to run this very, very quickly. We've got the scale at 10 centimeters. I've done that, and then it says you put in the landmark points. Now, I'm just going to do that very, very quickly. You can see where I'm placing them. Once you get used to it, this process becomes quite simple. I can do it more accurately by picking up the crosshairs and placing the point very, very accurately. If I just go through the routine of putting all the points… As I said, once you get used to it, it becomes second nature. I've done thousands of these.

I've done the 25, and it tells me over here that I'm done on points. I then click the Derive Points button. You can barely see them, but those are the 15 derived points that are generated by the script.

Then finally, all I do… There are other functionalities over here which I don't need to go into. I say append row, and it basically generates a row of data for that particular footprint. Now, it generates all the XY coordinates, and there are 40 of them, 25 landmark points and 15 derived points. Then, starting at V1, we've got all the variables.

Now, these are all scripted so that we can actually change them, we can add them. We don't know which variables are going to prove to be significant in trying to discriminate the two species. I basically create a protocol where I try to figure out all the distances, angles, and areas that are likely to contribute to the analysis. This is basically what I end up with.

Finally, if I go to the slideshow again, that's basically our data generated. This gives just a summary of what we've got for the jaguar and puma data. We've got captive individuals, 21 jaguar and 18 puma. We've also collected trails from the wild to basically allow for the notion that in the wild, we're going to get slightly different footprint types, different substrates, and so on. We've added those. In total, we got about 500 footprints, 261 for jaguar and 220 for puma. I'm going to finish here and let Caleb take over to tell you about his magic with the data analytics.

Great. Okay. Well, thank you, Sky, for that introduction. I've got a lot to live up to for calling it magic. At this point, I'll talk a little bit about how we're doing the analysis in JMP, but primarily focus on a newer technique that we've been exploring in this project. Sky walked you through how it's been traditionally done in FIT. This is going to be a slightly different approach using something called Statistical Shape Analysis.

To start, we need a bit of context. By that, I mean, if I can get it to click through, there we are. What is a shape? This seems like a very silly question. We all know what a shape is. It's one of the first things we learn as children. The shape here on the screen, that's a triangle. We all agree that's a triangle. Actually, when you try to give a definition of shape, you might start to have a little bit of trouble, because conceptually we get it, but how do we actually put into words what exactly a shape is?

Well, the folks who work on Statistical Shape Analysis, the pioneers of this field, came up with the following definition: shape is all the information that is invariant, meaning it doesn't change, when you do certain operations: translation, scaling, and rotation. What do we mean by that?

Well, this is a triangle, we all agree. If I do this, I move the triangle down. Well, after it's done, it's still a triangle. The shape has not changed. I just moved where it was. I could make it grow bigger, and that's just now a big triangle and a little triangle. The shape is still a triangle. I could even rotate it. It looks like a play button now, but still a triangle. That's what we mean. The shape has remained the same no matter which of these operations we have done to it. That's the definition that we're working with.
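That invariance can be checked numerically. The short Python sketch below (not part of the talk's tooling; purely illustrative) applies a translation, a uniform scaling, and a rotation to a triangle, and shows that its interior angles, one simple piece of pure shape information, come out unchanged:

```python
import math

def transform(points, dx=0.0, dy=0.0, scale=1.0, theta=0.0):
    """Rotate by theta, scale uniformly, then translate a set of (x, y) points."""
    c, s = math.cos(theta), math.sin(theta)
    return [((x * c - y * s) * scale + dx, (x * s + y * c) * scale + dy)
            for x, y in points]

def angles(tri):
    """Interior angles (degrees) of a triangle: a simple shape descriptor."""
    out = []
    for i in range(3):
        a, b, v = tri[(i + 1) % 3], tri[(i + 2) % 3], tri[i]
        v1 = (a[0] - v[0], a[1] - v[1])
        v2 = (b[0] - v[0], b[1] - v[1])
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        out.append(math.degrees(math.acos(max(-1.0, min(1.0, cos)))))
    return out

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
# Move it, triple its size, and rotate it 45 degrees: still the same triangle.
moved = transform(tri, dx=5, dy=-2, scale=3.0, theta=math.pi / 4)
```

The angle lists of `tri` and `moved` agree to numerical precision: nothing about the shape itself has changed.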

Next, we need to ask, okay, we know what a shape is. How do we identify that, specifically in a mathematical framework? How do we do this mathematically? Well, there are a couple of different ways. The one I'll be talking about is related to our work in FIT. We start with the set of landmark points that Sky just told us about.

In fact, I'll bring up one of those images from before. You just pick points that help identify the key parts of the shape. You could think of it as essentially defining some sort of polygon, or in this case, it's actually several polygons because we have the digits and the pad. All of these make up the key features of the shape we're interested in. Now that we have these landmark points, we need to then decide, okay, how do we extract just the shape information from this collection of data points?

To do that, we have to do what's called an alignment procedure. We're going to take away all of the variation in the coordinates that doesn't relate to the shape. How do we do that? Well, Sky showed you one approach that they use in FIT. I'm going to talk about a little more general approach. Again, there are multiple ways to do this, but I'll talk about the more general one. That's something called Procrustes analysis. Quite an interesting name.

I'll give you a bit of background on the naming. It comes from a figure in Greek mythology who had a little residence on the side of a trail. As people would pass by, they're walking along, he'd invite them in to come rest a while. "You've been traveling a lot, here, have some refreshment, rest a while." When they got in this bed that he prepared for them, they never seemed to quite fit. That's okay, Procrustes was very accommodating. If you were too small, he'd take some rope, tie you down, and stretch you out until you fit. If you were too big, well, he just cut you down to size, literally. Yes, this guy was actually a thief, and he would dispatch his victims in this rather creative, macabre way until, eventually, another Greek hero, Theseus, came along and gave Procrustes a taste of his own medicine.

A bit of a controversial name there, but you can see where the name for this procedure came from. Now, what are the main steps? Well, there are basically three steps, and they correspond to each of those changes that we talked about. One is, if we have shapes all over the place, let's just center them at a common point, usually the origin. The way we do that is, for each shape, so each track in this case, we take the mean of the X coordinates and the mean of the Y coordinates, subtract them from every coordinate, and that will center it at the origin.

In fact, as Sky showed, that's actually already done behind the scenes in the FIT software. They already do that bit of alignment there. The next thing is we need to account for scale. We want everybody to have the same scale. We don't want any changes related to scale. One way we do that is with what's called the root mean squared distance. Now, I've got a triangle over here on the side. I've got my three landmark points at the corners. Here's my mean in the center. These lines represent the distance from each point to the center.

The root mean squared distance is essentially the average of those distances (strictly, the square root of the mean of their squared values). We take that distance and divide each coordinate by it, and then the new coordinates will have a root mean squared distance of one. Now we've centered everybody at zero and scaled them all to have scale one. In my case, I actually saved the original scaling information because that can be informative for the predictions.

The final thing that we need to do is take care of the rotational alignment. Now, for these previous two, we've had a nice standard that we can work with. For rotation, you have some options. Sky showed you one where you pick two landmark points and say this serves as my horizontal reference. Another one that I'll talk about here is a more general procedure where you start by taking the mean shape.

By mean shape, I mean: for each landmark point, take the mean of its coordinates across all of your data, across all of your tracks, in this case. Do that for each landmark point, and that will define this thing we call the mean shape. You start with that and say, that's going to be my initial reference. I'm going to rotate everybody else to align with that. Then I'll take the new mean and repeat that. Usually after two or three times, we converge on a nice alignment. That is called the Procrustes, or generalized Procrustes, alignment procedure.
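The three steps just described, centering, scaling to unit root-mean-squared size, and iteratively rotating each shape onto the current mean shape, can be sketched compactly in Python by treating 2-D landmarks as complex numbers. This is only an illustrative sketch under that representation (the actual tool is written in JSL, and these function names are my own):

```python
import cmath
import math

def center_and_scale(shape):
    """Translate the centroid to the origin and scale to unit RMS distance.
    `shape` is a list of complex numbers (x + yi landmark coordinates).
    Returns the normalized shape and the removed scale (worth keeping:
    overall track size can itself be informative)."""
    mean = sum(shape) / len(shape)
    centered = [z - mean for z in shape]
    rms = math.sqrt(sum(abs(z) ** 2 for z in centered) / len(centered))
    return [z / rms for z in centered], rms

def rotate_to(shape, reference):
    """Optimal rotation of `shape` onto `reference`: in complex form, the
    best angle is the phase of sum(ref_i * conj(shape_i))."""
    corr = sum(w * z.conjugate() for w, z in zip(reference, shape))
    theta = cmath.phase(corr)
    return [z * cmath.exp(1j * theta) for z in shape]

def procrustes_align(shapes, iterations=3):
    """Generalized Procrustes: center, scale, then iteratively rotate
    every shape onto the current mean shape."""
    normed = [center_and_scale(s)[0] for s in shapes]
    for _ in range(iterations):
        mean = [sum(col) / len(normed) for col in zip(*normed)]
        normed = [rotate_to(s, mean) for s in normed]
    return normed
```

If you feed it two copies of the same triangle, one translated, scaled, and rotated, the aligned outputs coincide, which is exactly the invariance the talk describes.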

Now that we've done that, we can finally get to some of the good stuff, or at least almost. You see, once we have our aligned data coordinates, we need to think about what space these shapes live on. Now, technically, they live on what's called a manifold. This is just a fancy math term meaning that globally, it looks all weird, wonky, and curvy, nothing is really a straight line, but locally, you might be able to get away with some straight-line distances.

A good example: you're standing on one. It's called the Earth. This is a giant sphere, a spheroid, technically, but it's considered a manifold because distances at large scale aren't straight. They're curved. The reason we have to deal with that is I start off with a shape, which is a set of coordinates. I can transform that into a vector, and I have a nice vector space, but as I start removing translation, scale, and rotation, I start constraining the space on which it lives.

As a simple example, triangles, all variations on triangular shapes, live on a manifold that looks like you take a hemisphere, cut it into quarters, and take one of those quarters: that's the space that triangles live on. That's the simplest example you can come up with, and you can already see it's a little bit hard to visualize or even understand. Because these shapes live on this weird, high-dimensional manifold, we have to deal with things like a tangent space, basically.

We try to get to a space where you have nice straight-line distances, because that makes things a lot easier. We have things called exponential and logarithmic maps to transform between those. Once we get to that space, we can then do our statistical analysis. Some common techniques: principal component analysis, discriminant analysis, neural networks, insert any other statistical analysis you want to do. I've highlighted LDA and neural networks because those are what we'll focus on in just a moment.

One other thing, though, that you could do is to simply ignore everything I just said and pretend that you're in a straight-line space, the space you want. That seems very arbitrary and lazy, if you will, but there is actually some reasoning behind this. As a quick example, I have a washing machine not too far from me here. If I wanted to figure out the distance between me and the washing machine, I can do this in two ways.

One is, I just said I lived on a giant spheroid. The best measurement of distance on a spheroid is arc length. I need to determine the angle between me and the washing machine as done from the center of the Earth. Once I get that information, I can then compute the arc length between me and the washing machine here, and that will be my distance. Or I could say it looks flat here. I'll just compute the straight line distance. Hopefully you can see the reasoning why you might choose this over the more accurate approach.
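The washing-machine comparison can be made concrete with a few lines of Python. The numbers below are illustrative (an approximate Earth radius and an assumed 5-meter distance), but they show just how negligible the curvature correction is at small scales:

```python
import math

R = 6371000.0  # Earth's mean radius in meters (approximate)

def arc_length(central_angle):
    """Great-circle distance along the surface for a central angle (radians)."""
    return R * central_angle

def chord_length(central_angle):
    """Straight-line (chord) distance for the same central angle."""
    return 2 * R * math.sin(central_angle / 2)

# A washing machine 5 meters away subtends a tiny angle at Earth's center:
angle = 5.0 / R
arc, chord = arc_length(angle), chord_length(angle)
# The arc and the chord differ by far less than a nanometer here,
# so treating the floor as flat loses essentially nothing.
```

The same logic justifies treating very similar shapes as living in a flat space: when the points on the manifold are close together, straight-line distances are an excellent approximation.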

Obviously, computing arc length there is just overkill for what I'm dealing with. In our case, because the shapes we're dealing with are very similar to each other, there's not a lot of variation, you can get away with computing distances as you would in a standard analysis. You don't have to worry about the high-dimensional stuff. Only if the shapes are going to be very different, say, for a simple example, isosceles triangles versus right triangles, might you need to consider the higher math; for triangles versus squares, you definitely need to take care of the math there. In our case, the shapes are all so similar that we don't need to worry about that. We can go straight to the analysis that we would like. Okay, with that, let's actually see this thing in action.

Let me go here to my data, and I'll show you first an example of how we can do this alignment. Here's the raw data that was sent to me by Sky. We've got all our metrics here. From there, I can extract or, if it's provided already, take the coordinates. Here are the Y values and the X values. These are the data I want: the raw coordinates. Just to show you what that might look like, here's a plot of those coordinates. There's some alignment, but it's still messy. We should probably do some better alignment with it. With that, I will pull up a tool I call the Procrustes alignment tool. All of this was created in the JMP scripting language. Even though it's being used in this particular application, technically, it can be used for any application where you have X and Y data. I'm not sure I got the data table selected. I'll run it. It's going to open up a simple little window here.

I'm going to pull in my coordinates. I don't need all of them. I just need some. Some of them are used to help generate some of the other coordinates.

Generally, what I have here is a complete outline of the track. I will pull in, let's see, these two as supplementary variables. These are essentially the things that I want to use to predict. I click Go, and it has done the alignment. And there they are, nicely aligned tracks. They look much better than what we saw earlier. That is how simple it is to do this general Procrustes alignment. Again, there are other ways to do the alignment. I just want to show you the general approach. Now, I am going to close all these data tables. I actually have already done some cleanup on the aligned data, so I will simply pull that data up now. Let's go to that. There we are. Here's the final aligned data. I've already done some analysis on this.

To start, we should have a reference for visualizing the data. You should always plot your data. I've done that here. It's a little bit easier to do it with the stacked data rather than spread out like this. Spreading it out makes it easier for analysis. Now, when I mention analysis, I should clarify: Sky mentioned collecting these morphometric variables, areas, angles, distances.

What we're doing here in shape analysis is we're instead using the aligned coordinates themselves as our inputs, because these coordinates represent the shape. The only other thing I include is this scaling factor here. Again, because in analyzing footprints, scale could be informative. There might be cases where just knowing how big the track is could be informative as to what species you're working with. We'll keep this graph here to the side for visual reference. Now, again, as I said, I've already done some analysis. The first one I did is the most basic analysis that we do in FIT, which is discriminant analysis.

I can run that here just to show you what the output looks like. We get a misclassification rate of about 2.5%. That's actually pretty good. One of the benefits of this shape analysis is that it actually helps us peek under the hood and ask, can we give a physical explanation for how the model is working? As an example, here in discriminant analysis, the way it works is that it's essentially taking a linear combination of the inputs to create this axis we call a canonical.

The idea is that if I plot values on that canonical, those in different groups will be as far apart from each other as they can be. That's the whole idea behind the discriminant part of the analysis. Now, of course, this is a very abstract mathematical object. Well, it's one-dimensional, but it's a function of multiple things. It can seem a bit like a black box. What exactly is going on there? Wouldn't it be nice if we could understand what's happening? Well, there are things called canonical loadings. They're essentially the correlations between this axis and the input values.

We could plot that as maybe a bar graph for each input. That could be informative, but we have a shape. We can actually use this to our advantage. In this case, I've created something called a canonical viewer. All I did was plot the mean shape, and then I plotted these canonical loadings as the size of an arrow. The sign and the magnitude are both combined in this arrow. This is actually combined from the X and Y components, because the Xs and Ys are the inputs. I just combine them into a single line. What's interesting is that if we compare this to the data and look at going from blue to red, so that's puma to jaguar, you'll notice that the directions of the arrows actually correspond pretty well to the differences we see in the data, which is very good.
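The canonical loadings described here, correlations between each input and the score on the discriminant axis, can be illustrated with a minimal two-input, two-class Fisher discriminant in pure Python. JMP's discriminant platform handles the general high-dimensional case; this sketch, with made-up helper names, is only for intuition:

```python
import math

def fisher_direction(class_a, class_b):
    """Fisher discriminant direction w = Sw^-1 (mean_a - mean_b) for 2-D inputs."""
    def mean(rows):
        return [sum(c) / len(rows) for c in zip(*rows)]
    ma, mb = mean(class_a), mean(class_b)
    # Pooled within-class scatter matrix (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for x, y in rows:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    # Explicit 2x2 inverse applied to the mean difference
    return [( s[1][1] * d[0] - s[0][1] * d[1]) / det,
            (-s[1][0] * d[0] + s[0][0] * d[1]) / det]

def loadings(rows, w):
    """Canonical loadings: correlation of each input with the canonical score."""
    scores = [w[0] * x + w[1] * y for x, y in rows]
    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
        va = math.sqrt(sum((p - ma) ** 2 for p in a))
        vb = math.sqrt(sum((q - mb) ** 2 for q in b))
        return cov / (va * vb)
    return [corr(list(col), scores) for col in zip(*rows)]
```

When the classes differ only along the first input, that input's loading comes out near plus or minus one and the other near zero, which is exactly the pattern the arrows in the canonical viewer encode.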

We do need confirmation with the data. Another source of confirmation: I showed this to our colleague, Juárez, who was out in Brazil collecting this data. As soon as he saw this, he was like, that's exactly what I see in the differences between tracks. Not only do I have confirmation with the data, I actually have confirmation from people out in the field doing this. That tells you that this model is definitely doing something right, which is always good to hear for us analysts. It's interesting because now we can actually look at what's physically happening. What are these differences in the shapes? This can have biological meaning for people who are analyzing this. Very useful information. Sorry, I had a bit of a mind gap there. Now, I did just stick with discriminant analysis. Of course, this is one approach. There could be many different types of approaches to modeling this type of data.

What I did is I ended up going into the Analyze menu, under Predictive Modeling, and running Model Screening, which fits several different types of models and tries to assess which one does the best. In this case, it happened to be the neural network, which is probably not too surprising. If I run that, we can see here, I usually use misclassification rate as my standard comparison. In this case, it's zero. Our model has perfectly predicted the responses. You can't do better than that. I did not fudge the numbers or anything. This is exactly how it came out.
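The model-screening idea, comparing several model families on the same data by misclassification rate, can be sketched like this. This is a hedged stand-in, not JMP's Model Screening platform: the data, labels, and model list are all hypothetical, and scikit-learn is used in place of JMP's fitters:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))       # stand-in for flattened shape coordinates
y = np.repeat([0, 1], 30)           # puma vs. jaguar (toy labels)
X[y == 1] += 1.0                    # toy separation between the species

models = {
    "Discriminant": LinearDiscriminantAnalysis(),
    "Neural": MLPClassifier(hidden_layer_sizes=(3,), activation="tanh",
                            max_iter=2000, random_state=0),
    "Forest": RandomForestClassifier(random_state=0),
}
rates = {}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # cross-validated accuracy
    rates[name] = 1 - acc                            # misclassification rate
    print(f"{name}: {rates[name]:.2f}")
```

The model with the lowest rate wins the screening, which in the talk's real data was the neural network.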

This is pretty amazing. Again, we can ask questions about, well, what's it looking at? Can we try to understand the model? Now, this is a neural network, which is notorious for being a very black-box method, but we can try some techniques. One thing I did do: this is a single-layer network, and for each node in the layer, I did the same thing as before. I took the linear part of the node, before the NTanH transformation, and computed the correlation between it and the inputs.

Then I could weight that based on the final weight given to that node to come up with a plot that looks something like this. In this case, I'm going through each node, and you can see all the tiny little variations that each node is focusing on. If you ask me why they're so different, unfortunately, I can't explain that part. That's the neural network doing its thing. What is interesting is that if we scroll through a little bit here, we do see some of the same patterns as we saw in the discriminant model. Now, sometimes the sign is reversed. That's just a negative sign; you can think about flipping it in your mind. It's all variations on the same type of pattern, all of which correspond to the data that we're seeing.
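The per-node diagnostic just described can be sketched in a few lines: take each hidden node's linear combination (before the tanh), correlate it with every input, then scale by the node's output-layer weight. All weights here are random stand-ins, not a fitted JMP network:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))        # inputs: flattened shape coordinates (toy)
W1 = rng.normal(size=(6, 3))         # input -> hidden weights (3 tanh nodes)
w2 = rng.normal(size=3)              # hidden -> output weights

Z = X @ W1                           # linear part of each node, before tanh
profiles = []
for node in range(3):
    # correlation between each input and this node's linear combination
    corr = np.array([np.corrcoef(X[:, j], Z[:, node])[0, 1] for j in range(6)])
    profiles.append(w2[node] * corr) # weight by the node's output-layer weight
    print(f"node {node}:", profiles[-1].round(2))
```

Plotting each weighted profile as arrows on the mean shape, one panel per node, gives the node-by-node view described above; a negative output weight flips a whole panel's arrows, which is the sign reversal mentioned in the talk.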

Again, this shows some commonality between the models. At least we're identifying something intrinsic to the nature of the shape of these tracks. You see this little arrow off to the side? That's my attempt at assessing the impact of scale. You'll notice that sometimes the scale has a small impact, other times it has a large impact. In the discriminant analysis, it has a small impact. What's really driving the differences here is the shape that we're seeing, which is very informative. Now, there is one last thing I'd like to show you. To do that, I have to ask a bit of a thought-provoking question.

Wouldn't it be nice? These are static images, but wouldn't it be nice if we could actually touch these points, drag them around, and see how manually changing the shape affects its classification? Wouldn't that be awesome? Wouldn't that be interesting? Well, I'm so glad you asked, because as it turns out, I have created something here. Again, this is just a little table script right now, all of it done in JSL. This is what I call a shape profiler, borrowing some terminology from a common tool here in JMP. Of course, the idea is that this isn't a static plot. I can actually click these points. Here we go. Move them around, so you can see how changing little aspects of the shape can affect the outcome.

There's a lot more information here. Those arrows that you see, I call them sensitivity arrows. The technical term is gradient. Essentially, if I move a point a little bit in the arrow's direction, the size of the arrow shows you how much its classification probability will increase. In this case, this point 23 that I've been playing around with is a very sensitive point, because I can move it just a little bit and you can see how much the classification changes based on this point alone. You get a sense of how sensitive the classification is to certain points on the shape. Of course, I can break it down. Again, the original inputs are the Xs and the Ys, so if you want to see it that way, you can have it. I usually just give a single arrow; it's a little easier to visualize.
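A sensitivity arrow is just the gradient of the predicted probability with respect to one landmark's (x, y) position, which can be estimated by central differences. Here is a minimal sketch; `prob_jaguar` is a hypothetical stand-in classifier with made-up weights, not the talk's fitted neural network:

```python
import numpy as np

def prob_jaguar(shape):
    """Stand-in classifier: a toy one-node tanh network with made-up weights."""
    w = np.linspace(-1, 1, shape.size)   # hypothetical fitted weights
    return 1 / (1 + np.exp(-np.tanh(shape.ravel() @ w)))

shape = np.zeros((5, 2))                 # 5 landmark points, (x, y) each
point, eps = 3, 1e-5                     # probe landmark 3

# Central-difference gradient: the sensitivity arrow for this landmark
arrow = np.zeros(2)
for axis in range(2):
    up, dn = shape.copy(), shape.copy()
    up[point, axis] += eps
    dn[point, axis] -= eps
    arrow[axis] = (prob_jaguar(up) - prob_jaguar(dn)) / (2 * eps)

print("sensitivity arrow:", arrow.round(3),
      "length:", round(float(np.linalg.norm(arrow)), 3))
```

A long arrow means a small drag of that landmark moves the classification a lot, exactly the behavior described for point 23 in the shape profiler.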

Now, there is one final thing that we can do with this, and it will take a little bit of setup. I'm going to hide the classification probability for dramatic effect. I will also hide the sensitivity arrows just to make things a little easier. As I go through setting some of this up, I'll start to close out our talk. First of all, I need to change some settings. I know what they need to be, so I'm going to make those changes here. Just bear with me a moment. Don't worry, I'm not going to leave that space. I'm also going to change the scale here. Then I'm going to come over here and drag in an image. Remember, you saw that earlier with what Sky was doing in FIT. You can also do it here.

I'm going to fill the graph. This is the wrong image. That's okay. The right one is here. It helps to have the right image. I wanted to make sure I had the landmark points, because I am no expert at placing those landmark points, so I could really mess that up. I want to make sure that these tick marks align fairly well with the scale here, because this is an actual scale. What is in FIT is definitely much more advanced than what I'm doing now. It's much better. I'm still doing things a little bit manually here, but that's okay. It should be good. I think we're about ready. Here we are. That fits pretty well. I'm going to lighten up the image a little bit just so you can actually see the points that I'm working with.

There we are. I'm going to lock it because I don't want to accidentally move the image now that I've got it set. As I'm doing this, I'll help close this out. As you saw at the beginning, Zoe presented the importance of their efforts in animal conservation, and how vital it is to understand where these animals are and what their ranges are. I, of course, loved animals as a kid, so I definitely would like to have a lot more of them around. My daughter is already into animals. She likes to look at books about bugs and all these creepy, crawly things. She doesn't get that from me; I'm not much for the creepy, crawly stuff. I'm glad to see she's quite interested in that.

Of course, Sky showed us how they use this somewhat new noninvasive technique. It's actually not new; it's been around for a very long time. We're just catching up to it with our software. They use this noninvasive technique to track the animals, and it's not too difficult to do using the FIT software, which, again, we provide in JMP. Then, of course, here with the shape analysis, this is something new we're exploring. It seems a perfect fit, if you will, for this type of analysis, because a lot of it is about the shape of the tracks. Of course, the applications for shape analysis aren't limited to WildTrack, although, as was pointed out to me earlier, that is the most important application.

There are other areas where this could be used that we are looking into. For example, there are known applications in biomedical industries where shape is an important factor. You might be looking at the shape of a certain organ to determine whether it's diseased or not. Another area where this could have application is additive manufacturing, where maybe you're trying to track variation as you're building something, or track the shape of a product and see what's affecting it. There, instead of shape as an input, you're actually getting a shape as a response.

All of that to say: here at JMP, one of the things we try to do is provide you tools so you can perform at your absolute best, or, for this Puma track here, give it 100% of your effort. With that, I'd like to thank you for watching our joint talk. Please check out some of the other talks here at Discovery, online and in person. If you can make it in person, we'd love to see you there. That's it for me. Thank you, everyone.