
JMP Wish List

We want to hear your ideas for improving JMP. Share them here.

chatty mode() for AI debugging

☑ cool new feature
☑ could help many users!
☑ removes something that feels like a "bug"
☐ nice to have
☐ nobody needs it
 
 
 
What inspired this wish list request?
AI‑generated JSL can produce very creative but incorrect message chains, wrong parameter types, or invented platform calls. Because of JSL's silent‑ignore design for unrecognized messages, these errors frequently go unnoticed, and the user ends up debugging in the completely wrong place. The current log output reports only the symptom, not the cause, which makes AI‑assisted scripting unexpectedly difficult.
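A minimal sketch of the failure mode (the message name below is invented, exactly the kind an AI assistant produces):

```jsl
dt = Open( "$SAMPLE_DATA/Big Class.jmp" );
dist = dt << Distribution( Column( :height ) );

// "Set Histogram Hue" is not a real Distribution message.
// Today JSL ignores it silently: no error, no log entry,
// and the script keeps running as if nothing happened.
dist << Set Histogram Hue( "Blue" );
```

The user then hunts for the problem in the data or the platform options, when the real cause was a message that never reached anything.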
 
 
What is the improvement you would like to see?
Introduce a command‑controlled AI Debug Mode, e.g.
chatty mode(1,0,1);
This mode deliberately produces "noisy, but incredibly helpful" diagnostic output. It makes objects more talkative at key points: when they receive invalid messages, when a platform‑like function is misused, when argument types do not match, or when message chaining silently returns the wrong object.
The mode should allow multiple flags (e.g., display objects, platforms, general scriptable objects) so users can activate precisely the level of verbosity they need.
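A sketch of how the proposed mode might look in practice. Note that `chatty mode()` and the log output shown in the comments are hypothetical, not existing JMP functionality:

```jsl
// Hypothetical: enable chatty diagnostics for display objects and
// general scriptable objects, but leave platforms quiet.
chatty mode( 1, 0, 1 );

dt = Open( "$SAMPLE_DATA/Big Class.jmp" );
biv = dt << Bivariate( Y( :weight ), X( :height ) );

// Typo: "Fit Lien" instead of "Fit Line". Today this is silently
// ignored; with chatty mode on, the log could report something like:
//   Chatty: Bivariate[] ignored unrecognized message "Fit Lien"
//           (closest known message: "Fit Line")
biv << Fit Lien();
```

Pointing the user to the exact object and message, ideally with a nearest‑match suggestion, is what turns hours of misdirected debugging into a one‑line fix.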
 
 
Why is this idea important?
This feature would dramatically reduce debugging time, especially when AI‑generated scripts behave strangely. Instead of spending hours searching in the wrong place, users could enable this explicit chatty mode and immediately see where the logic began to derail. It increases transparency, supports learning, helps prevent subtle logic errors, and makes JMP significantly more resilient in real‑world AI‑assisted workflows.
 
 
3 Comments
Status changed to: Acknowledged

Thank you for taking the time to submit this request! 

hogi
Level XIII