
Behind the Scenes: Software Development in JMP Live (2020-US-EPO-553)

Aurora Tiffany-Davis, Senior Software Developer, SAS

 

Get a peek behind the scenes to see how we develop JMP Live software.

Developing software is a lot more than just sitting down at a keyboard and writing code.  See what tools and processes we use before, during and after the "write code" part of our job.

Find out how we:

  • Understand what is wanted.
  • Define what success looks like.
  • Write the code.
  • Find problems (so that our customers don't have to).
  • Maintain the software.

 

 

Auto-generated transcript...

 


Aurora Tiffany-Davis: Hi, I'm Aurora Tiffany-Davis. I'm a software developer on the JMP Live team. And I'd like to talk to you today about how we develop software on the JMP Live team.
First, I'd like you to imagine what creating software looks like. If you're not a developer yourself, then the image you have in your mind is probably colored by TV and movies that you've seen.
You might be imagining somebody who works alone, somebody who's really smart, maybe even a genius and their process is really simple.
They think about a problem, they write some code, and then they make sure that it works. And it probably does, because after all, they're a genius.
If we were all geniuses on the JMP Live team, then this simple process would work for us. But we're not, and we live in the real world. So we need a little bit more than that.
First of all, we don't work alone; we work very collaboratively, and our process has steps in it to try to ensure that we produce not just some software, but quality software.
I'll point out that there's no one correct way to develop software. It might differ across companies or even within companies, but I'm going to walk you through our process
by telling you what I would do if I was going to develop a new feature in JMP Live.
First of all, before I sit down to write code, I have to have some need to do so.
There has to be some real user out there with a real problem: something they've identified in JMP Live that doesn't work the way they think it ought to,
which is a nice way of saying they found a bug, or a feature that they've requested.
We keep track of these requests in a system internally. So the first thing I would do is go into that system and find a high-priority issue that's a good match for my skill set and start working on it.
Next I need to understand the need a little bit more. I'll talk to people and try to figure out what kind of user wants this feature, and how they're going to use it.
And then I'll run JMP Live on my machine and take a look at where this feature might fit into JMP Live as it exists today.
Or I might open up the code and look at where the new code might fit into our existing code base.
Once I think I have a pretty good understanding of what's needed, I'll start working on the design. Again, I'll talk to people. I'll talk to people on the team and say, "Have you worked on a feature similar to this? Have you worked in this part of the code before?"
And I'll talk to user experience experts that we have in JMP.
I'll try not to reinvent the wheel. If there's an aspect of the feature that is very common and not specific to JMP Live, for example, logging some information to a file,
that's a solved problem; a thousand people have had that problem in the past. There might be a good open-source solution for it.
And if so, I might use that after carefully vetting it to make sure that it's safe to use.
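To make that concrete, here is a minimal sketch of what leaning on an open-source logging library might look like in a Node.js service. It assumes the popular winston package purely for illustration; the talk doesn't say which library, if any, JMP Live actually uses.

```typescript
// A minimal sketch of file logging with the open-source "winston" package.
// Illustrative only; not necessarily what JMP Live uses.
import winston from "winston";

const logger = winston.createLogger({
  level: "info",
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  // Write structured log lines to a file instead of hand-rolling file I/O.
  transports: [new winston.transports.File({ filename: "jmp-live.log" })],
});

// Example call site: record an event along with some context.
logger.info("Report published", { postId: 42, userId: 7 });
```

The appeal is exactly what's described above: file handling, formatting, and log levels are solved problems, so a carefully vetted library can handle them while the application code only says what happened.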
That still leaves a lot of ground to cover, a lot of JMP Live-specific code that needs to be written. And so I'll write articles and diagrams and
describe what I propose and make sure that everybody on the team is comfortable with the direction I'm going with the design. Then I'll actually sit down and write code.
For that I'll use an integrated development environment, which is a tool for writing code that has a lot of bells and whistles to help you be more efficient at your job.
Now I've written some code. Before I check it in, I want to find the problems that exist in the code that I just wrote. I'm only human, so the chances that I wrote 100% flawless code on my first try are pretty slim.
I'm going to start by looking for very obvious problems and for that, I use static analysis.
Static analysis looks at my code not while it's running, but as though it was just written down on a page. An analogy would be spellcheck in Microsoft Word.
Spellcheck can't tell you that you've written a compelling novel, but it can tell you if you missed a comma somewhere. Static analysis does that for code.
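As a concrete illustration, here is a hypothetical TypeScript fragment with the kinds of surface-level problems a static analyzer (a linter such as ESLint, or the TypeScript compiler itself) can flag without ever running the code:

```typescript
// Hypothetical fragment; the comments show what static analysis would flag.
function findPostTitle(posts: { id: number; title: string }[], id: number) {
  const unused = posts.length;                // flagged: variable is never used
  const post = posts.find((p) => p.id == id); // flagged: "==" instead of "==="
  return post.title;                          // flagged: "post" may be undefined
}
```

None of these findings say whether the feature is any good, just as spellcheck can't judge a novel, but they catch the missing-comma class of mistakes before anything runs.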
Once I've found and fixed really obvious stuff like that, I'll move on to finding less obvious problems. And for that, we use an automated test suite.
This differs from static analysis because it actually does run the code. It'll run a piece of the code with a certain input and expect a certain output. We've written a broad range of tests for our code.
And I'll sit down and write tests for the feature that I'm working on.
This is really useful because sitting down to write the tests forces me to clarify my thinking about how exactly the code is supposed to work. Also, it offers a safeguard against somebody else accidentally breaking the feature in the future. It's a great way to find problems early.
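For instance, one entry in an automated test suite might look like this sketch, written with the widely used Jest framework. The slugify helper and its expected behavior are hypothetical, invented just to show the known-input, expected-output pattern:

```typescript
// A sketch of automated tests using the open-source Jest framework.
// "slugify" is a hypothetical helper that turns a report title into a URL slug.
import { slugify } from "./slugify";

describe("slugify", () => {
  it("lowercases a title and replaces spaces with hyphens", () => {
    // Run the code with a certain input...
    const actual = slugify("My First Report");
    // ...and expect a certain output.
    expect(actual).toBe("my-first-report");
  });

  it("strips characters that are not URL-safe", () => {
    expect(slugify("Q3 Results (Draft!)")).toBe("q3-results-draft");
  });
});
```

Writing the expectations down like this is what forces the clear thinking mentioned above, and the suite then guards that behavior every time it reruns.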
Now I move on to manual tests. I'll run JMP Live on my machine and exercise the new feature and make sure that it's working the way I think it ought to.
I might even poke around in the database that sits behind JMP Live and keeps track of records such as posts, groups, users, and comments, to make sure that all those records are being written in the way that I think they should be.
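As a rough sketch of that kind of manual check, querying the backing database from Node.js might look like this, using the open-source pg client. The connection string, table, and column names here are hypothetical, not JMP Live's actual schema:

```typescript
// A sketch of manually inspecting a record with the open-source "pg" client.
// Table and column names are hypothetical, not JMP Live's actual schema.
import { Client } from "pg";

async function inspectPost(postId: number): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const { rows } = await client.query(
      "SELECT id, title, author_id, created_at FROM posts WHERE id = $1",
      [postId]
    );
    // Eyeball the record: was everything written the way it should be?
    console.log(rows[0]);
  } finally {
    await client.end();
  }
}

inspectPost(42).catch(console.error);
```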
Now I'm cautiously optimistic that I've written some good code and next step is peer review. I'd like my peers to help me look at the code and find anything that I might have missed.
I might have missed something because I have my blinders on, or because somebody else on my team has knowledge that I lack. This step is often really helpful for the reviewer as well, because they might learn about new techniques.
After it's gone through peer review, we're ready to actually commit the code or check it into a source code repository.
We have a continuous build system on our servers that watches for us to check code in. And when we do, it does a bunch of stuff with that code.
For example, making sure the right files are there in the right place, named the right way, and so on. It will also go back and rerun our static analysis and rerun our entire automated test suite.
This is useful because after all, we're only human. Someone might have forgotten to do this stuff earlier in the process or something might have changed since they last did it.
Once the code makes it through the continuous build, it's now available to people outside of the development group. The first people to pick it up are test professionals within JMP.
They're going to go through and they might add to our automated test suite. They might run some manual tests.
They might look at how the software works on different operating systems, different browsers and so on. They'll think up crazy things the user might do and look at what happens and how the software responds.
They are a really crucial part of our process. They think really creatively about trying to find problems in our product.
Once they've signed off, the software is available to be picked up in our next software release. Let me zoom out now and show you the process as a whole.
And it's a lot of steps. That's a lot of stuff. Do we really do all this for all of our code?
Believe it or not, we do, but this process scales a great deal. So for a really simple, obvious bug, we're going to step through this process pretty darn quickly.
But for a very complex new feature, at every step of this process we're going to slow down, take our time, and take a lot of care to make sure that we really get it right.
Anything that takes time costs money for a company. So why is it that JMP is willing to invest this money in software quality?
One reason is really simple. We have pride in our work and we want to produce a good product.
The second reason is a little bit less idealistic. We know that if our product has problems and we don't find those problems, our customers will, and that's not good for business. So we'd like to minimize that.
I should point out that while JMP does invest time and money in software quality,
we don't invest the kind of money that, for example, an organization like NASA would, so we can't promise perfection. Like any piece of software that's out there in the marketplace,
you might find something in JMP Live that doesn't work the way you think it ought to. If you do,
we'd like to invite you to go to www.jmp.com/support and let us know if there's anything that you think should work differently or any features that you would like in the future.
That kind of real-world feedback from actual users is incredibly valuable to us and we really welcome it. That's all I have for you today, but I really hope you enjoy the rest of Discovery. Thank you.