lala
Level VIII

For large amounts of data, is it faster to use Python to process JSON asynchronously into structured data?

JSON with the following structure is convenient to handle with JMP's JSL. The format is fixed, and only the 13 columns inside each [] are extracted. (Only two files are listed here.)

JSON1

{"ZJB":4271175,"ZJS":-3443749,"trend":[["09:30",0,-444931,0,-444931,0,1,0,21100,0,0,0,444931],["09:33",2,1433022,1433022,0,2,0,67100,0,0,1433022,0,0],["09:34",3,-316128,0,-316128,0,1,0,14800,0,0,0,316128],["09:45",4,318570,318570,0,1,0,15000,0,0,318570,0,0],["09:52",5,403965,403965,0,1,0,19100,0,0,403965,0,0],["10:03",7,-345725,328755,-674480,1,1,15500,31800,328755,0,0,674480],["10:25",8,419440,419440,0,1,0,19600,0,419440,0,0,0],["10:32",9,-623500,0,-623500,0,1,0,29000,0,0,0,623500],["10:40",10,353925,353925,0,1,0,16500,0,0,353925,0,0],["13:52",11,-1065500,0,-1065500,0,1,0,50000,0,0,0,1065500],["14:17",12,332436,332436,0,1,0,15600,0,332436,0,0,0],["14:25",13,-319214,0,-319214,0,1,0,15000,0,0,319214,0],["14:54",14,681065,681065,0,1,0,31900,0,0,681065,0,0]],"active":1080631,"passive":3190547,"Active":-319214,"Passive":-3124539,"AvgPrice":21.32,"AvgPrice":21.3,"time2":1725535598,"ttag":0.004174999999999929,"errcode":"0"}

JSON2

{"ZJB":1913404,"ZJS":-4366449,"trend":[["09:30",0,-730500,0,-730500,0,1,0,50000,0,0,0,730500],["09:34",1,402408,402408,0,1,0,27600,0,0,402408,0,0],["09:52",2,-442380,0,-442380,0,1,0,30300,0,0,0,442380],["10:51",3,-314545,0,-314545,0,1,0,21500,0,0,0,314545],["11:17",4,-339184,0,-339184,0,1,0,23200,0,0,0,339184],["13:06",5,-438600,0,-438600,0,1,0,30000,0,0,0,438600],["13:27",6,-337491,0,-337491,0,1,0,23100,0,0,337491,0],["13:47",7,-323676,0,-323676,0,1,0,22200,0,0,0,323676],["13:49",8,-447299,0,-447299,0,1,0,30700,0,0,0,447299],["14:00",9,-630448,0,-630448,0,1,0,43300,0,0,630448,0],["14:27",11,344796,707124,-362328,1,1,48400,24800,707124,0,0,362328],["14:31",12,320426,320426,0,1,0,21902,0,320426,0,0,0],["14:32",13,483449,483449,0,1,0,33000,0,0,483449,0,0]],"active":1027550,"passive":885857,"Active":-967939,"Passive":-3398512,"AvgPrice":14.62,"AvgPrice":14.6,"time2":1725535597,"ttag":0.0024069999999999925,"errcode":"0"}

But is Python's asynchronous processing faster when these files are large? I don't know how Python would handle this JSON; I asked ChatGPT for an answer, and now I'm asking the community experts as well. Thank you very much!


Assume the files are in the C:\8 directory; I want to call Python from JSL and concatenate all of the files into one JMP table.


This also requires that each file's rows carry an additional column holding the source file name.
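
For illustration, here is a minimal asyncio sketch of what that could look like in Python. The directory C:\8 is from the post above, but the output file name and the generic column names (time, c2 ... c13) are assumptions:

import asyncio
import csv
import json
from pathlib import Path

SRC_DIR = Path(r"C:\8")            # directory from the post
OUT_CSV = SRC_DIR / "trend.csv"    # assumed output name; JMP can open a CSV directly

async def parse_one(path: Path) -> list:
    # Parse one JSON file off the event loop via a worker thread.
    def work():
        data = json.loads(path.read_text(encoding="utf-8"))
        # Prepend the file name so every row stays traceable to its source.
        return [[path.name, *row] for row in data["trend"]]
    return await asyncio.to_thread(work)

async def main():
    files = sorted(SRC_DIR.glob("*.json"))
    results = await asyncio.gather(*(parse_one(p) for p in files))
    with OUT_CSV.open("w", newline="", encoding="utf-8") as f:
        w = csv.writer(f)
        # "time" is evident from the data; c2..c13 are placeholder names.
        w.writerow(["file", "time"] + [f"c{i}" for i in range(2, 14)])
        for rows in results:
            w.writerows(rows)

if __name__ == "__main__":
    asyncio.run(main())

Note that asyncio mainly pays off while the files are being read; json.loads itself is CPU-bound, so for very large files a concurrent.futures process pool may gain more than async I/O.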

11 REPLIES
lala
Level VIII

Re: For large amounts of data, is it faster to use Python to process JSON asynchronously into structured data?

Well, comparing the time consumed at each step, downloading the data each time takes less time.

However, assembling and sorting the many pieces of data each time takes more time.

 

A question for the experts: which database has a speed advantage for this kind of work (concatenating, sorting)? The key point is that this is only intermediate data; it is not stored.
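
For illustration only (my assumption, not something tested in this thread): SQLite's in-memory mode keeps the whole table in RAM and discards it when the connection closes, which matches the "intermediate data, not stored" requirement. A minimal sketch with a simplified column layout:

import sqlite3

con = sqlite3.connect(":memory:")   # lives only in RAM; nothing is written to disk
con.execute("CREATE TABLE trend (file TEXT, t TEXT, v REAL)")
# executemany does the splicing (bulk insert) in one call per file
con.executemany("INSERT INTO trend VALUES (?, ?, ?)",
                [("json1", "09:30", -444931), ("json1", "09:33", 1433022)])
# one ORDER BY at the end replaces repeated re-sorting of intermediate tables
print(con.execute("SELECT * FROM trend ORDER BY t").fetchall())
con.close()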

I found that JMP 18's concatenation is noticeably slower than JMP 14's.

 

Thanks, experts!

lala
Level VIII

Re: For large amounts of data, is it faster to use Python to process JSON asynchronously into structured data?

I only know a little JSL, so I'm not familiar with how JSL and Python can each handle this data better and faster in memory.
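
For what it's worth, the usual in-memory route on the Python side is pandas: parse every file, build one DataFrame, and sort once at the end. A sketch under the same assumptions as above (files in C:\8, placeholder column names):

import json
from pathlib import Path
import pandas as pd

rows = []
for path in sorted(Path(r"C:\8").glob("*.json")):   # directory from the post
    data = json.loads(path.read_text(encoding="utf-8"))
    rows += [[path.name, *r] for r in data["trend"]]

cols = ["file", "time"] + [f"c{i}" for i in range(2, 14)]  # placeholder names
df = pd.DataFrame(rows, columns=cols)
df = df.sort_values(["file", "time"])               # single sort at the end
print(df.head())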

 

Thanks!