Powerful, Codeless JSON-Based ETL for Hadoop
Dynamically Integrate with SQL, NoSQL, Elasticsearch or Flat Files
JsonEDI's Declarative ETL Simplifies & Automates Data Integration
SQL/NoSQL Integration with Hive or Hadoop Java Libraries
FREE Webinar on Declarative ETL and Hadoop
JsonEDI follows a declarative rapid-development methodology. Declarative means the developer specifies only "what" should be done, not "how" it should be done. The "what" is the business-oriented requirements (i.e., the data model, expressed in a data-dictionary format); the "how" is the technical and logistical ETL work, which is automated or preconfigured.
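To make the "what" vs. "how" split concrete, here is a minimal sketch of the idea. The spec format, field names, and `run` function below are invented for illustration and are not JsonEDI's actual dictionary format or API:

```python
# A hypothetical declarative spec: the "what" (the target model, in a
# data-dictionary style). All names here are illustrative assumptions.
spec = {
    "source": {"type": "jdbc", "url": "jdbc:hive2://host:10000/forms"},
    "target": {"type": "hive", "table": "patient_intake"},
    "columns": {"patient_id": "int", "admitted": "date", "dept": "string"},
}

# The "how" is a generic, preconfigured engine. A trivial stand-in:
def run(spec, fetch_rows):
    """Apply a declarative spec: keep only the declared columns from each
    source row. The developer writes no per-pipeline transformation code."""
    cols = spec["columns"]
    return [{c: row.get(c) for c in cols} for row in fetch_rows(spec["source"])]
```

The point is that changing the data model means editing the spec, not rewriting pipeline code; the same engine interprets every spec.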
JsonEDI's endpoints can be any JDBC source, REST API, flat file, native database library or bulk-loading tool (anything accessible from Java), or an ESB.
Learn More at Our Website
Explore Use Cases with Hadoop
- In fully automated mode, JsonEDI can take a semi-structured source database containing document/form data and pivot it into tabular SQL tables (Hive) or flat files, one per form. New forms created in the source are automatically created as new tables or files, and new JSON elements are automatically added to those tables/files as columns. JSON subarrays are similarly split out into sub-tables (i.e., normalization): each node in the JSON hierarchy is either pivoted or split based on normalization rules. All of this is maintained and tracked in the data dictionary without writing or managing code, making it ideal for maintaining a reporting database, ODS, data mart, or staging area for a data warehouse.
- Hive: We provide dimensional-modeling transforms for JSON and surrogate-key management. JsonEDI's declarative methodology enables rapid data integration for integrated data warehouses.
- SQL/NoSQL to Hadoop: JsonEDI can load through any native Hadoop Java library or via JDBC to Hive. Unlike Sqoop, it provides data transformation and data enrichment without manual Java coding.
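The pivot-or-split behavior described above can be sketched in a few lines. This is an illustrative toy, not JsonEDI's engine; the function name, the `_id`/`_parent_id` column names, and the example form are all assumptions:

```python
from itertools import count

def pivot(doc, table, tables, ids, parent_id=None):
    """Pivot one JSON document into rows of flat tables (a sketch).
    Scalars become columns; nested objects are pivoted into prefixed
    columns; arrays of objects are split into child tables (normalization)
    keyed back to the parent row."""
    row_id = next(ids)
    row = {"_id": row_id, "_parent_id": parent_id}
    for field, value in doc.items():
        if isinstance(value, list) and all(isinstance(v, dict) for v in value):
            # normalization rule: subarray -> sub-table with a foreign key
            for item in value:
                pivot(item, f"{table}_{field}", tables, ids, row_id)
        elif isinstance(value, dict):
            # nested object -> pivoted into prefixed columns
            for key, val in value.items():
                row[f"{field}_{key}"] = val
        else:
            row[field] = value
    tables.setdefault(table, []).append(row)
    return tables

# A hypothetical intake form with one nested object and one subarray:
form = {
    "patient": "Ada",
    "visit": {"date": "2024-01-02", "dept": "ER"},
    "meds": [{"name": "aspirin", "dose": 81}, {"name": "statin", "dose": 10}],
}
tables = pivot(form, "intake", {}, count(1))
# tables["intake"] holds one flat row (patient, visit_date, visit_dept);
# tables["intake_meds"] holds two rows, each with _parent_id linking back.
```

A new field in a future form would simply appear as a new column in the next emitted row, which mirrors how new JSON elements become new table columns automatically.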
WEBINAR TIME AND DATE
11 AM EST