Behind the Scenes: Software Development in JMP® Live (2021-EU-PO-732)

Level: Beginner

 

Aurora Tiffany-Davis, JMP Senior Software Developer, SAS

 

Get a peek behind the scenes to see how we develop JMP Live software. Developing software is a lot more than just sitting down at a keyboard and writing code. See what tools and processes we use before, during and after the "write code" part of our job.

Find out how we:

  • Understand what is wanted.
  • Define what success looks like.
  • Write the code.
  • Find problems (so that our customers don't have to).
  • Maintain the software.

 

 

Transcript (auto-generated)

Hi I'm Aurora Tiffany-Davis, a software developer on the
JMP Live team and I'd like to talk to you today about
how we develop software in the JMP Live team.
First, I'd like you to imagine what software development looks
like to you. If you're not a developer yourself, the picture
you have in your mind is probably colored by TV and movies
that you've seen.
You might be picturing an oddball genius who works alone,
and for whom the process of developing software is very simple. They
think really hard about a problem. They write some code
and then they make sure that it works. And it probably does,
because, after all, they are a
genius. If we were all geniuses like Hollywood says that we are,
this process might work for us, but actually we're only human,
and so we need a bit more than
that. First of all, we don't work alone. We work very
collaboratively and we have a process that helps us to
produce quality software. I'll point out here that there is no
one correct process for developing software. Your
process is going to differ across companies and often even
within companies. But what I'll do today is walk you through
what I would do if I were developing a new feature in
JMP Live.
First of all, before I ever sit down to write code, I have to
have some actual need to do so. There has to be some real person
out there who has found something in JMP Live that
doesn't quite work the way that they think it ought to, which is
a nice way of saying that they found a bug or they have a
request for a new feature. We keep track of these issues in an
internal system, and so the first thing I would do is go to
that system and find a high-priority issue that's a good
match for my skill set, and start thinking about it, and start
talking about it. The next thing I would do is try to understand
the need by talking to people. What kinds of users are going to
need this feature? How are they
going to use it? Then I would run JMP Live on my own machine,
and start to take a look at where the new feature might fit
into JMP Live as it already
exists today. Then I would bring up the code and start thinking
through how the new code is going to fit into our code base.
Once I think I have a good understanding of the need, I
would move on to the design step. And here again, my first
step is going to be to talk to people. I'll talk to user
experience experts that we have within JMP and I'll talk to my
coworkers and ask, "Have you worked on a feature similar to
this before? Have you worked in this part of the code before?"
Then I'll start to flesh out the design. I might write articles
and draw diagrams and share these around so that I can make
sure that the other members of the team are generally
comfortable with the direction
I'm moving in. Now I'll sit down and start to write some
code. And for this I'll use an integrated development
environment, which is a program with a lot of features
designed to help us do our job better.
Now that I've written some code, my first step is to find the
problems that I just created. I'm only human, so the chances
that I wrote 100% flawless code on my first try are pretty slim.
My first step is going to be to
use static analysis. Static analysis doesn't run the code,
it looks at it as though it's written down on a piece of
paper. An analogy might be spellcheck in Microsoft Word. Spellcheck
can't tell me whether or not I've written a compelling
novel, but it can tell me if I missed a comma somewhere.
Static analysis does that for code.
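
To make that concrete, here is a contrived sketch, written in
TypeScript purely for illustration (it is not actual JMP Live code),
of the kind of slip static analysis can flag without ever running
the program:

    // Static analysis reads this code without executing it and can
    // still report the problems noted in the comments below.
    interface Post {
      id: number;
      title: string;
    }

    function describePost(post: Post): string {
      const label = post.titel;   // type checker: 'titel' does not exist on type 'Post'
      const extra = post.id;      // linter: 'extra' is assigned a value but never used
      return "Post: " + label;
    }
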
Now that I have found and fixed those very obvious problems, my
next step is going to be to look for less obvious problems,
and for that I'll use an automated test suite. We have
written a broad range of tests against our code. Automated
tests do run the code, or rather they run a subset of the
code, providing a certain input and expecting a certain output.
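
As a small illustration, an automated test along these lines gives
one input and checks for one expected output (sketched here in
TypeScript with the Jest test framework; the function and data are
made up for this example and are not JMP Live code):

    // sumLikes.ts -- a deliberately tiny piece of code under test.
    export function sumLikes(posts: { likes: number }[]): number {
      return posts.reduce((total, post) => total + post.likes, 0);
    }

    // sumLikes.test.ts -- the automated test: provide a known input
    // and expect a known output; it fails if the behavior ever changes.
    import { sumLikes } from './sumLikes';

    test('sums the like counts across a list of posts', () => {
      const posts = [{ likes: 2 }, { likes: 5 }, { likes: 0 }];
      expect(sumLikes(posts)).toBe(7);
    });
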
It's a really valuable exercise to sit down and write these
tests, because sometimes when I sit down to write a new test for
any feature, it forces me to really clarify my thinking about
how the software is supposed to work in the first place.
It also provides a really crucial safeguard against
colleagues accidentally breaking this feature in the future. So
it's a great way to find
problems early. Once I've written and run these automated
tests, I'll move on to some manual tests. I might run JMP
Live on my local machine, or on one of several servers we have
in the office and poke around and try stuff and just make sure
that it works the way that I
think it ought to. I may even look into the database that
sits behind JMP Live and keeps track of records on
users, posts, groups, comments, and make sure that records are
written in the way that I expect them to be.
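
That kind of check can be as simple as a quick query against the
table in question, along these lines (the table and column names
here are hypothetical rather than the real JMP Live schema, and the
sketch assumes a PostgreSQL database reached through the
node-postgres library):

    import { Pool } from 'pg';

    // Connect to a local development database (details are illustrative).
    const pool = new Pool({ database: 'jmplive_dev' });

    async function checkRecentPosts(): Promise<void> {
      // Confirm that newly created posts were written with the fields we expect.
      const result = await pool.query(
        'SELECT id, title, created_at FROM posts ORDER BY created_at DESC LIMIT 5'
      );
      console.table(result.rows);
      await pool.end();
    }

    checkRecentPosts();
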
Now I'm cautiously optimistic that I've written some good
code, and I've created a useful
feature. The next step is to ask my peers for their opinion. I'll
put this through peer review. Now in JMP Live, we put 100% of
the code that we write through peer review and the reason is
that we find it so valuable.
There might be something I have missed because I just sort of
have blinders on about something. Or it might be the
case that somebody else on my team simply has knowledge that I
lack. It's often the case in a review that the reviewer also
gets a lot of value out of the exercise because they might
learn about some new techniques.
Now that the team is pretty optimistic that we've created
some good code, I will finally commit my code or check it into
a source code repository.
We have a continuous build system that watches for us to
commit code, and when we do, it
analyzes it. It looks for really basic stuff like, are the files
named the right way and in the right place? It will also rerun
static analysis and it will rerun our entire automated test
suite. It does this in case someone has forgotten to do this
earlier in the process. Or more often, it could be the case that
something small and subtle has changed since the last time the
automated test suite was run.
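
Boiled down, that build step amounts to something like the following
sketch (our real system is considerably more elaborate; the commands
below assume an ESLint, TypeScript, and Jest setup and are meant only
as an illustration):

    import { execSync } from 'child_process';

    // A bare-bones picture of a continuous build step: after every
    // commit, rerun static analysis and the full automated test suite,
    // and fail the build if any step fails.
    const steps = [
      'npx eslint .',        // static analysis
      'npx tsc --noEmit',    // type checking
      'npx jest --ci',       // full automated test suite
    ];

    for (const step of steps) {
      console.log(`Running: ${step}`);
      execSync(step, { stdio: 'inherit' });   // a non-zero exit code throws and fails the build
    }
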
So it does all these analyses and when it makes it through
this step, the code is now available for people outside of
the development team to take a look at. The first people who
pick this up are the testing
experts that we have within JMP Live and within the JMP team
more broadly. These are test professionals. It's what they do
for their whole career, and they're really good at it.
They might rerun the automated test suite. They might add more tests
to our suites. They might run JMP Live manually and poke around
generally. They might run JMP Live locally and pretend to do
things as users that would be absolutely insane. What's the
craziest thing the user can do? And let's see how the software
responds to it. They're really creative with these test
scenarios. They might also run the software on various
different browsers and operating systems to make sure that our
software is flexible and robust.
Once they have signed off on this new feature, it's now
available to be picked up in our next software release.
Let me zoom out now and show you the process as a whole.
That's a lot of steps. Do we actually do all of this for all
of our software changes? Believe it or not, we do. However, this
process scales a great deal, depending on the complexity of
the work. If we have a really simple obvious bug, we'll step
through this process, but we'll do it pretty darn quickly.
On the other hand, if we have a very complex new feature, at
every step of the process, we're going to slow down.
We'll take our time and take a lot of care to make sure that
we get it right.
Now, anything that takes time costs money, so why is it that
JMP is willing to invest this money in software quality?
One reason is really simple. We have pride in our work and we
want to produce a good product.
The next reason is rather less idealistic. We know that
whatever problems exist in our process and in our product,
if we don't find them, our customers will, and that's not
great for business. So we would like to minimize that.
Now no process can deliver
absolute perfection. And so we really invite you to stay a part
of the JMP community. Go to community.jmp.com and let us
know if there's anything about JMP Live that doesn't work the
way you think it ought to, or anything new that you would like
to see in the future.
That kind of real-world feedback is immensely valuable to us, and
we really welcome it.
That's all I have for you today, but I really hope that
you enjoy the rest of your Discovery experience. Thank
you.