Connecting to Databricks with the Simba Spark ODBC driver succeeds, but only tables in the default catalog are listed, per the Terraform config for Databricks shown below. Although the setting is named "default_catalog_name" (see https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/metastore_assign...), it appears to really act as the default "schema" in Databricks terms, or the default "database" in ODBC terms. Excel handles this without issue, but JMP does not. Furthermore, when the ODBC "Database" setting is set to, say, "bronze", JMP appends it to the server's default catalog name, so the attempted schema becomes gold.bronze, which of course fails on the Databricks side because no such schema exists.
resource "databricks_metastore_assignment" "this" {
  metastore_id         = databricks_metastore.acompany_metastore.id
  workspace_id         = local.workspace_id
  # System default is "hive_metastore" which is just awful
  default_catalog_name = "gold"
}
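One possible workaround, sketched below, is to bypass JMP's "Database" field entirely and pin both the catalog and the schema in the ODBC DSN or connection string. This assumes the Databricks (Simba Spark) ODBC driver's "Catalog" and "Schema" connection properties; the host, HTTP path, and token values are placeholders:

Driver=Simba Spark ODBC Driver;
Host=<workspace-hostname>;
Port=443;
SSL=1;
ThriftTransport=2;
HTTPPath=<http-path>;
AuthMech=3;
UID=token;
PWD=<personal-access-token>;
Catalog=gold;
Schema=bronze;

With Catalog and Schema set explicitly in the connection string, the driver should resolve tables as gold.bronze.<table> without relying on the client appending anything, though whether JMP honors these properties over its own "Database" setting would need to be verified.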