Does RAW always require Hadoop?

The most common deployment requires Hadoop, as most data is assumed to already reside in a Hadoop-based data lake. Nonetheless, RAW can be deployed locally without any Hadoop dependencies. Contact us for additional information.

Is there an ODBC connector to RAW?

Yes, there is an ODBC connector for RAW. Alternatively, data can be saved as Hive tables and accessed through an existing Hive ODBC connector. Contact us for more information.
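As a rough illustration, an ODBC connection from Python might look like the sketch below. The DSN name `RAW_DSN` and the credentials are assumptions for illustration only; the actual driver name, DSN configuration, and supported SQL dialect depend on your RAW (or Hive) ODBC setup.

```python
# Hedged sketch: querying RAW over ODBC from Python with pyodbc.
# "RAW_DSN" is a hypothetical data source name; it must be configured
# in your ODBC driver manager to point at the RAW or Hive ODBC driver.
conn_str = "DSN=RAW_DSN;UID=analyst;PWD=secret"

try:
    import pyodbc  # third-party package: pip install pyodbc

    with pyodbc.connect(conn_str) as conn:
        cursor = conn.cursor()
        cursor.execute("SELECT 1")  # any query the connector supports
        print(cursor.fetchone())
except Exception as exc:  # driver or DSN not available in this sketch
    print("ODBC connection unavailable:", exc)
```

The same pattern applies to any ODBC-capable client; only the DSN and query text change.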

Is it possible to use RAW without passing through the REST client/server API?

Yes. Contact us for information on how to run RAW in an embedded scenario.

How can I obtain an RDD when using Hadoop?

Contact us for information on how to obtain RDDs from RAW when running on Hadoop.

Is RAW integrated with the Hive metastore?

Yes. RAW can read and write data files and register them in the Hive metastore, assuming the data is compatible with Hive’s semantics. See the language guide for more information.

Is RAW integrated with a visualisation tool like Tableau?

Yes. See Using the Tableau Connector.

How does RAW compare to Apache Drill?

RAW has a richer data model and a more powerful query language than Apache Drill. In particular, RAW's support for hierarchical data goes beyond Drill's, so queries over hierarchical data are simpler to express and more powerful in RAW. In addition, RAW includes data management functionality, such as autonomous decisions on which data to cache and how to cache it.
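To illustrate what "hierarchical data" means here, the generic Python sketch below (not RAW syntax; the JSON document and field names are made up) computes an aggregate directly over nested collections. Engines with a flat, relational-only model must first flatten such structures before aggregating.

```python
# Illustration only: aggregating over nested (hierarchical) records,
# the kind of query a hierarchical data model makes natural to express.
import json

doc = json.loads("""
{"departments": [
  {"name": "sales", "employees": [{"name": "ana", "salary": 70},
                                  {"name": "bo",  "salary": 60}]},
  {"name": "eng",   "employees": [{"name": "cy",  "salary": 90}]}
]}
""")

# Per-department average salary, computed directly on the nested lists
# without flattening the document into relational tables first.
averages = {
    d["name"]: sum(e["salary"] for e in d["employees"]) / len(d["employees"])
    for d in doc["departments"]
}
print(averages)  # {'sales': 65.0, 'eng': 90.0}
```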