There are several approaches, in addition to the @Jed_Campbell idea (which is the easiest if the browser's address bar changes in a predictable way).
0. If the site provides an API, possibly as REST services, that will be the best choice. Maybe you can ask the owner of the system for one.
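If there is such an API, fetching pages usually means looping over a page number in the query string until the server runs out of data. Here's a minimal sketch; the URL, parameter names (`page`, `per_page`), and response shape are hypothetical, so check the real API's documentation:

```python
import urllib.parse

def page_url(base, page, per_page=100):
    """Build the query URL for one page of results.
    The parameter names here are hypothetical -- use the API's real ones."""
    qs = urllib.parse.urlencode({"page": page, "per_page": per_page})
    return f"{base}?{qs}"

# The fetch loop would then look like this (requires the `requests` package):
# import requests
# rows, page = [], 1
# while True:
#     data = requests.get(page_url("https://example.com/api/records", page)).json()
#     if not data:          # empty page -> no more results
#         break
#     rows.extend(data)
#     page += 1
```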
or, 1. Use F12 in your browser (I'm using Firefox, but Chrome and Edge are similar). Pick the Network tab, which shows you what is sent over the network. Click the next-page button on the web site and start studying the requests that are sent. It's hard to say for sure what you are looking for, but it will often be JSON data. You'll see whether it was a POST or GET and what sort of headers were used. If there is a choice, JSON will likely be easier to work with than HTML.
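Once you've found the request in the F12 window, you can replay it and parse the response. The JSON shape below (`items` plus a `hasMore` flag) is a made-up example, as are the URL and headers; yours will differ, so copy what the Network tab actually shows:

```python
import json

# Example of the kind of JSON one page of results might return
# (this shape is invented for illustration -- inspect your real response):
SAMPLE = '{"items": [{"name": "a", "value": 1}, {"name": "b", "value": 2}], "hasMore": false}'

def extract_items(body):
    """Pull the record list and the more-pages flag out of one page of JSON."""
    data = json.loads(body)
    return data["items"], data["hasMore"]

# Replaying the request uses the same method, URL, and headers seen in F12,
# e.g. with the `requests` package (URL and header are hypothetical):
# resp = requests.post("https://example.com/data/list",
#                      json={"page": 2},
#                      headers={"X-Requested-With": "XMLHttpRequest"})
# items, has_more = extract_items(resp.text)
```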
or, 2. This will be significantly harder, but will often work when other choices fail: browser scripting with Python and Selenium.
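A minimal Selenium sketch of that idea follows: drive a real browser, grab the table text on each page, and click the next-page button until it disappears or is disabled. The selectors are placeholders (inspect the real page to find them), and you'll need `pip install selenium` plus a browser driver:

```python
def scrape_pages(url, next_button_css, max_pages=50):
    """Click through a paginated site and collect each page's table text.
    The CSS selector and the <table> assumption are placeholders --
    adapt them to the real site."""
    from selenium import webdriver          # pip install selenium
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()            # or webdriver.Chrome()
    driver.implicitly_wait(10)              # wait up to 10 s for elements
    pages = []
    try:
        driver.get(url)
        for _ in range(max_pages):
            pages.append(driver.find_element(By.TAG_NAME, "table").text)
            buttons = driver.find_elements(By.CSS_SELECTOR, next_button_css)
            if not buttons or not buttons[0].is_enabled():
                break                       # no more pages
            buttons[0].click()
    finally:
        driver.quit()
    return pages
```

From there the collected text (or `get_attribute("outerHTML")` on the table element) can be parsed into rows for import.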
Scraping data from web pages is hard in the best case. In the worst case, some sites actively try to prevent automated bots from working, generally by detecting that the client is not a real browser or not a real human. Sites that don't update the address bar are using some sort of AJAX-like protocol with JavaScript; that traffic will show up in the F12 window, perhaps with an easy-to-decode URL and headers, sometimes with a cookie, encoded parameters, or a password.
JMP has special handling for HTML tables, but that might not work for you unless there is a simple URL that returns each page as HTML with a table. JMP also has JSON, XML, and CSV import wizards that might help.
Craige