Although this is conjecture, I think you are correct. It really looks like every true second adds 1000 seconds, and the reset happens somewhere between 23 and 123 seconds. I could imagine the issue is that the datetime format in SQL carries milli-, micro-, and nanoseconds (dd.mm.yyyy hh:mm:ss.mmmmmmn). Continuing the speculation: upon import from the database, JMP may read 1000 milliseconds as 1000 seconds instead of 1 second. That would also explain the reset interval, since a full day would then wrap every 86400/1000 = 86.4 true seconds, which lies between the observed 23 and 123 seconds.
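To make the suspected misread concrete, here is a minimal JSL sketch of the hypothesis (the sample value is made up):

// If 1000 ms were read as 1000 s, every true second would inflate by a
// factor of 1000, and the time of day would wrap every 86.4 true seconds.
trueSeconds = 30;                          // hypothetical time of day, in s
shown = Mod( trueSeconds * 1000, 86400 );  // what JMP would then display
Show( shown );                             // 30000 s, i.e. 08:20:00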
I played with a few formats and styles but couldn't teach JMP how to import the table from the database correctly. Here is my latest code, which ensures that the timestamp format in the database is the same as in JMP. The format is called "smalldatetime" and looks like "dd.mm.yyyy hh:mm:ss".
...
Execute SQL( dbc,
	"
	-- Computed column [timestamp] (bracketed, since 'timestamp' is a
	-- reserved word in T-SQL): split the 1904-epoch seconds into whole
	-- days and leftover seconds, then assemble a smalldatetime from both.
	ALTER TABLE dbo.berstTry01 ADD [timestamp] AS
		DATEADD(ss, CAST(SecondsSince1904 AS BIGINT) % 86400,
			CAST(DATEADD(dd, CAST(SecondsSince1904 AS BIGINT) / 86400,
				'1904-01-01 00:00:00') AS smalldatetime));

	-- Seconds elapsed since midnight of that day.
	ALTER TABLE dbo.berstTry01 ADD SecondsAfterMidnight AS
		(CAST(SecondsSince1904 AS BIGINT) % 86400);
	"
);
...
Unfortunately, this didn't work either, and right now I have no idea what else to try to make it work.
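One untested idea that might still be worth a try (a sketch only; it reuses the dbc handle from above, and the output table name is arbitrary): skip the timestamp conversion on the database side entirely, import the raw integer, and let JMP format it itself:

dt = Execute SQL( dbc,
	"SELECT SecondsSince1904 FROM dbo.berstTry01;",
	"berstTry01 raw"
);
// JMP stores datetimes as seconds since 01Jan1904, so the raw integer
// should already be a valid JMP datetime value; only the format changes.
Column( dt, "SecondsSince1904" ) << Format( "d/m/y h:m:s", 23 );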
The formula applied in the database is actually not interacting with JMP at all. And the calculation is obviously correct, as can be inferred from the first picture. What is happening is that an INTEGER is transferred to the database, and its values happen to be the elapsed seconds since the "JMP epoch" (i.e., seconds since 1904-01-01).
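A quick JSL check makes this concrete (the date is an arbitrary example; 14:14:14 is the time mentioned below):

// JMP datetimes are themselves seconds since 01Jan1904, so the integer
// from the instrument needs no conversion at all, only a date format.
secs = Date DMY( 17, 9, 2015 ) + In Hours( 14 ) + In Minutes( 14 ) + 14;
Show( Format( secs, "d/m/y h:m:s" ) );  // the chosen date plus 14:14:14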
If I run your example, I get the correct date but the wrong time (18:07:17 instead of 14:14:14)...
The more I investigate this issue, the more I get the feeling it is actually a bug. I am running JMP 12.2.0.