JMP ODBC Databricks and default catalog

  Connecting to Databricks with the Simba Spark ODBC driver succeeds. However, JMP only lists tables that belong to the default catalog, which is set in the Terraform config for Databricks shown below. Although the setting is named "default_catalog_name" (see https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/metastore_assign...), it really behaves as the default "schema" in Databricks terms, or the default "database" in ODBC terms. Excel does not have this problem, but JMP does. Also, when the ODBC "database" setting is set to, say, "bronze", JMP appends it to the server's default catalog name, which results in gold.bronze as the attempted schema; that of course fails on the Databricks side because no such schema exists.

 

resource "databricks_metastore_assignment" "this" {
metastore_id = databricks_metastore.acompany_metastore.id
workspace_id = local.workspace_id
# System default is "hive_metastore" which is just awful
default_catalog_name = "gold"
}
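
To illustrate, a minimal sketch of the failing case in JSL (all values are placeholders; the schema is passed in the connection string here, which for this driver is the Schema key):

// Sketch with placeholder values: Schema=bronze but no Catalog= entry,
// so JMP resolves the table list against the server default catalog ("gold")
// and ends up asking Databricks for gold.bronze, which does not exist.
Open Database(
  "Driver=Simba Spark ODBC Driver;Host=adb-<workspace-id>.<n>.azuredatabricks.net;Port=443;HTTPPath=<http-path>;SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>;Schema=bronze"
);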

AlbertLesire1
Level II

Re: JMP ODBC Databricks and default catalog

I have a similar issue: I can connect, but whether or not I specify the database name, no tables are shown and the connection prompt pops up again.

Re: JMP ODBC Databricks and default catalog

This is most likely due to the connection string missing:

Catalog=<catalog-name>

where <catalog-name> is the catalog that holds your data. Many drivers do not persist the catalog value, so JMP doesn't know there is one.

For instance, in Open Database, you'd modify the connection string like this (adding the Catalog=<catalog-name> entry):

Open Database(
  "Driver=Simba Spark ODBC Driver;Host=adb-***********.**.azuredatabricks.net;Port=443;HTTPPath=sql/protocolv1/o/*********/****-*******-**********;SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=*****;Catalog=<catalog-name>;Schema=<schema-name>;ProxyHost=***************;ProxyPort=****"
);

For Data Connectors (introduced in JMP 18), you'd add the Catalog=<catalog-name> entry to the Connection value, like this:

[Screenshot: Data Connector configuration with Catalog=<catalog-name> added to the Connection value]
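
In case the screenshot doesn't render: the idea is that the Data Connector's Connection value holds the same key=value ODBC string with the Catalog entry added. A minimal sketch, with placeholder values:

Driver=Simba Spark ODBC Driver;Host=adb-<workspace-id>.<n>.azuredatabricks.net;Port=443;HTTPPath=<http-path>;SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>;Catalog=<catalog-name>;Schema=<schema-name>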


The other values are probably OK. With a JMP 18 Data Connector, ODBC configurations are obtained from the ODBC Manager and converted to a Data Connector.

 

AlbertLesire1
Level II

Re: JMP ODBC Databricks and default catalog

Thanks, Bryan.
FYI, it got resolved when I installed the Microsoft Spark ODBC driver instead of the Simba ODBC driver.
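
For anyone scripting this, a minimal sketch of the same connection with the driver swapped; "Microsoft Spark ODBC Driver" is an assumption for the registered driver name (check the exact name in the ODBC Data Source Administrator), and the remaining values are placeholders:

// Same connection string as above with only the Driver value changed.
// The driver name is an assumption; verify it in the ODBC Data Source Administrator.
Open Database(
  "Driver=Microsoft Spark ODBC Driver;Host=adb-<workspace-id>.<n>.azuredatabricks.net;Port=443;HTTPPath=<http-path>;SSL=1;ThriftTransport=2;AuthMech=3;UID=token;PWD=<personal-access-token>;Catalog=<catalog-name>;Schema=<schema-name>"
);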
