ETL MongoDB to Oracle, SQL Server, MySQL & Postgres
METL / Metadata ETL
Simplify Management of Your Complex Data Integration
Dynamic ETL Integration between SQL, NoSQL, Cloud/Microservices and Flat Files
Automate your ETL using our Data Rules Engine, Metadata Management and Json Processing
Java / Spring
METL's primary value to ETL is similar to low-code programming tools: most work can be done without manual coding. METL is a universal ETL engine designed for all major use cases across microservices, SQL, NoSQL and flat files. It can equally handle streaming, batch or low-latency micro-batch workloads. METL's multiple rules engine workflows can address the many complex data architecture paradigms, from a data warehouse, data lake, data marts, OLTP, ODS and search engines to application integration. We use Apache Kafka as our persisted message queue so we can support both push and pull delivery to destination data sources.
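As a rough illustration of the pull side of such a persisted queue, the sketch below consumes staged Json rows from a Kafka topic using the standard Java client. The broker address, group id, topic name and console output are placeholders; METL's actual consumer configuration and destination writers are not documented in this article.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class DestinationPull {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");               // assumed broker
        props.put("group.id", "metl-destination-loader");               // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("staged-rows"));                  // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a Json row; a destination writer would map it to the
                    // target table or collection using the data dictionary metadata.
                    System.out.println(record.value());
                }
            }
        }
    }
}
```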
We make metadata management an art form. It is a design pattern implemented at every point of our platform. Our metadata is a multitiered system that separates general ETL tasks from data architecture considerations. It is extensible, customizable and easy to understand. Most importantly, it correlates to the schema management of any database, API or flat file, and that correlation is what allows us to implement powerful data integration solutions. You can use our design wizards to create and manage the data dictionary style metadata, as well as apply real-time schema changes using rule-based processing.
METL is prebuilt to implement ETL and data architecture best practices. We make the job of DBAs, data analysts and architects as easy as possible. Our features and advantages are difficult to overstate.
The declarative programming paradigm is a popular methodology for simplifying and automating all types of technologies. Its main goal is to separate the business requirements of an application (what needs to be done) from its technical implementation. METL separates its metadata into two categories: a data dictionary to support your business requirements, and environment metadata plus rules engines to implement your data architecture.
Our design wizards guide users by extracting schema information from nearly any data source, including databases, APIs, ORMs or files, to populate a data dictionary style repository (including hierarchical data). The data dictionary defines mappings and is extended with an innovative tagging system to implement all types of ETL functionality such as the data model, data cleansing, field validation, PII handling, data transformations, etc. METL's prebuilt but extensible intelligent rules engine implements the data ingestion, transformation and persistence from that metadata.
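To make the idea concrete, here is a minimal sketch of what a tagged data dictionary entry could look like. Every name and tag below is illustrative; METL's real metadata schema is richer and is not reproduced in this article.

```java
import java.util.Set;

// Hypothetical shape of a tagged data dictionary entry (illustrative names only).
public record DictionaryField(
        String sourcePath,     // location of the field in the source (e.g. a Json path)
        String targetColumn,   // destination column or field name
        String dataType,       // logical type used for mapping and validation
        Set<String> tags) {    // behavior tags, e.g. PII, MASK, NOT_NULL

    public static void main(String[] args) {
        DictionaryField ssn = new DictionaryField(
                "customer.ssn", "CUSTOMER_SSN", "STRING", Set.of("PII", "MASK"));
        DictionaryField total = new DictionaryField(
                "order.total", "ORDER_TOTAL", "DECIMAL", Set.of("NOT_NULL"));
        System.out.println(ssn);
        System.out.println(total);
    }
}
```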
Dynamic, Real Time Schema Changes
You can choose between design-time ETL designer wizards or our rules engine with dynamic runtime schema change detection and propagation. These concepts work well together: start with your initial design, and as new schema changes occur at the source they can be automatically added to the data dictionary. Schema changes can also be applied to the destination.
METL uses Json for all internal processing, which allows for easy integration and extensibility. Json processing also allows loose coupling to the schema, which means schema changes do not create fatal errors. METL's rules engine can detect and propagate real-time schema changes while updating the data dictionary and logs. This rare functionality has several enterprise use cases.
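The sketch below shows the general idea of loosely coupled Json processing with runtime schema detection, using the Jackson library. It compares an incoming row's fields against a known field set and flags anything new; the dictionary update, destination propagation and logging a real rules engine would perform are only noted in comments.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class SchemaDriftCheck {
    public static void main(String[] args) throws Exception {
        // Fields the data dictionary currently knows about (illustrative only).
        Set<String> knownFields = new HashSet<>(Set.of("id", "name", "amount"));

        // An incoming Json row where the source has added a new field.
        JsonNode row = new ObjectMapper().readTree(
                "{\"id\": 7, \"name\": \"Acme\", \"amount\": 12.5, \"region\": \"EMEA\"}");

        for (Iterator<String> it = row.fieldNames(); it.hasNext(); ) {
            String field = it.next();
            if (knownFields.add(field)) {
                // A real rules engine would also update the data dictionary, propagate
                // the change to the destination and write an audit log entry.
                System.out.println("New field detected: " + field);
            }
        }
    }
}
```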
Centralized Metadata Management
The concept of metadata ETL is not new, and every ETL tool has metadata. Unfortunately, most ETL tools intermix their metadata with their programming code. METL gains a significant advantage by separating our metadata from our code: the metadata is centralized in a database and the code is implemented in our data rules engine. We have an extensible metadata tag system that is easy to read and correlates to the specific rules engine being used. The metadata can be populated with design wizards, ad hoc SQL or at run time using our dynamic schema rules engine.
METL's metadata design pattern is very effective at encapsulating functionality. Encapsulation is a critical design goal that reduces code duplication and the need for "boilerplate" programming. Legacy ETL tools tend to fail at this, with ETL programmers duplicating their work and changes requiring manual edits to many packages.
METL also separates our ETL platform from the rules engines that sit on top of it. The platform provides an extensive set of ETL resources, leaving the rules engines free to focus on implementing logic.
Managing data is inherently more cost effective than managing custom programming code. A METL implementation merges data rules defined by a data architect with schema metadata to create the data dictionary. METL functionality is tightly encapsulated and normally invoked by metadata tags, but we also support user-defined functions and expressions. Many architecture design changes can be made with a simple metadata change, and the metadata is user expandable. Nearly all METL functionality is database vendor independent.
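Here is a toy example of tag-driven encapsulation: each metadata tag keys one reusable function, so changing a field's behavior means editing its tag list in the metadata rather than touching code. The tag names and functions are invented for illustration and are not METL's actual tag set.

```java
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

public class TagDispatch {
    // Hypothetical registry: each metadata tag keys one reusable, encapsulated function.
    static final Map<String, UnaryOperator<String>> TAG_FUNCTIONS = Map.of(
            "TRIM", String::trim,
            "UPPER", String::toUpperCase,
            "MASK", value -> value.replaceAll(".", "*"));

    static String applyTags(String value, List<String> tags) {
        for (String tag : tags) {
            value = TAG_FUNCTIONS.getOrDefault(tag, UnaryOperator.identity()).apply(value);
        }
        return value;
    }

    public static void main(String[] args) {
        // Changing a field's behavior means editing its tag list in metadata, not code.
        System.out.println(applyTags("  123-45-6789 ", List.of("TRIM", "MASK")));
    }
}
```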
Java / Spring Application
Most customers can use METL "out of the box" with the preconfigured rules engines, but to be a universal ETL platform we need to be easily extensible and adaptable. Our foundation is a Java/Spring platform with rules engine classes and a workflow to implement ETL. METL's platform supplies a multi-threaded ETL engine plus management of metadata, job/batch, schema, errors, REST endpoints, file management, cache, Kafka, Zookeeper and many other ETL resources. All data flowing through our system is Json. We have a unique design pattern of metadata, Json and rules processing that is very efficient to code. Our Java technology can leverage nearly any Java library to connect to almost any data source.
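For a feel of how a rules engine class might plug into a Spring-based platform, here is a skeletal sketch. The interface and class names are hypothetical and the body is a pass-through; METL's real engine classes, workflow and resource management are not published in this article.

```java
import com.fasterxml.jackson.databind.JsonNode;
import org.springframework.stereotype.Component;

// Hypothetical names; METL's actual rules engine classes and workflow are not shown here.
interface RulesEngine {
    JsonNode process(JsonNode row);
}

@Component
class PassThroughRulesEngine implements RulesEngine {
    @Override
    public JsonNode process(JsonNode row) {
        // A real engine would look up the row's metadata tags in the data dictionary
        // and apply cleansing, validation and transformation rules before persisting.
        return row;
    }
}
```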
Talk to a Data Architect
Our POC Challenge:
We Can Implement Major Aspects of Your ETL Within Days in a Proof of Concept.