Apologies, probably a noob question (again...).
We are finally working towards performing some analyses on the online sensor ("tag") data we've been collecting for years in our manufacturing plant. The standard time resolution for these tags is 1 second, and there are about 1400 tags to be sifted through for interesting information. But to get to that stage, I believe I need to suggest a data format to our Digitalization team. I believe they expect everyone outside their team (which includes me) to work with CSV files, but I realize that this format wouldn't be able to cope with approximately 30 million rows, which roughly corresponds to the number of seconds in one year. Now, while this time resolution may not be required, I think it is still worth approaching this as if it were. Also, even at 1-minute granularity we would have approximately 0.5M rows per year, which still seems inconvenient to handle as CSV.
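For reference, here are my back-of-the-envelope numbers in plain JSL (assuming a 365-day year):

```
// Rows per tag per year at different sampling intervals
secondsPerYear = 365 * 24 * 60 * 60;  // 31,536,000 rows at 1-second resolution
minutesPerYear = 365 * 24 * 60;       // 525,600 rows at 1-minute resolution
Show( secondsPerYear, minutesPerYear );
```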
For handling large tables, i.e., for importing them into JMP, is JSON the preferred approach? Or would it perhaps be better to connect to a database and load the data into JMP directly from there, along the lines of the sketch at the end of this post? Any feedback or comments I can pass back to our Digitalization team regarding this issue would be highly appreciated. Thank you very much in advance, fellow JMPers!
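To make the database idea concrete, here is a minimal JSL sketch of what I have in mind, assuming an ODBC data source has been set up; the DSN, credentials, and table/column/tag names are all placeholders I made up for illustration:

```
// Pull one year of data for a single tag straight into a JMP data table via ODBC.
// "PlantHistorian", the credentials, and the table/column names are placeholders.
dt = Open Database(
	"DSN=PlantHistorian;UID=myuser;PWD=mypassword;",
	"SELECT timestamp, tag_id, value
	 FROM sensor_readings
	 WHERE tag_id = 'TAG-0001'
	   AND timestamp >= '2023-01-01'
	   AND timestamp <  '2024-01-01'",
	"TAG-0001 (2023)"  // name of the resulting JMP data table
);
```

The appeal of this route, as I understand it, is that the filtering (by tag and by time window) happens on the database side, so JMP only ever receives the rows actually needed for a given analysis rather than a year-long CSV dump.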