METL / Metadata ETL
Simplify Your Complex Data Integration
Our Technology: Data Rules Engine, Metadata Management and Json Processing
Dynamic ETL Integration between SQL, NoSQL, Cloud/Microservice and Flat Files
Java / Spring
We Have Many Innovations
METL's primary value is similar to that of low-code programming tools: most work can be done without manual coding. METL is a universal ETL engine designed for all major use cases across microservices, SQL, NoSQL and flat files. METL's multiple rules engine workflows can address the many complex data architecture paradigms, from data warehouses, data lakes, data marts and ODSs to search engines and application integration. METL handles streaming, batch and files equally well. We use Apache Kafka as our persisted message queue, which lets clients pull data through REST services.
METL uses Json for all internal processing, which allows for easy integration and extensibility. Json processing also allows loose coupling to the schema, meaning schema changes do not cause fatal errors. METL's rules engine can detect and propagate real-time schema changes while updating the data dictionary and logs. This rare functionality has several enterprise use cases.
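As a minimal sketch of what loose schema coupling means in practice (this is an illustration only, not METL's actual API; the class and field names are hypothetical), a transform can read only the fields it knows about from a Json-like record, so extra or missing source fields never cause a fatal error:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a mapping rule that tolerates schema drift.
// The record is represented as a Map standing in for a parsed Json document.
public class LooseCouplingDemo {
    // Map the source "loan_amt" and "borrower_name" fields to target fields,
    // supplying defaults when a source field is absent.
    public static Map<String, Object> transform(Map<String, Object> record) {
        Map<String, Object> out = new HashMap<>();
        out.put("loanAmount", record.getOrDefault("loan_amt", 0));
        out.put("borrower", record.getOrDefault("borrower_name", "UNKNOWN"));
        // Source fields the rule does not know about are simply ignored.
        return out;
    }
}
```

Because the rule never assumes a fixed field list, an upstream team adding a column does not break the pipeline; the new field is ignored until a mapping is defined for it.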
The declarative programming paradigm is a popular methodology for simplifying and automating all types of technologies. Its main goal is to separate the business requirements of an application (what needs to be done) from its technical implementation. METL separates its metadata into two categories: a data dictionary to support your business requirements, and environment metadata and rules engines to implement your data architecture.
Our design wizards guide users by extracting schema information from nearly any data source, including databases, APIs, ORMs and files, to populate a data-dictionary-style repository (including hierarchical data). The data dictionary defines mappings and is extended with an innovative tagging system to implement all types of ETL functionality, such as the data model, data cleansing, field validation, PII handling and data transformations. METL's prebuilt but extensible intelligent rules engine implements the data ingestion, transformation and persistence from the metadata.
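For illustration only (the field names and tag values below are hypothetical, not METL's actual repository schema), a tagged data dictionary entry might look like:

```json
{
  "source": "loans.loan_master",
  "field": "borrower_ssn",
  "target": "borrowerSsn",
  "type": "string",
  "tags": ["PII", "MASK", "NOT_NULL"]
}
```

The tags are what drive behavior: the rules engine sees "PII" and "MASK" on this field and applies the corresponding cleansing and masking rules without any custom code for this column.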
Dynamic, Real Time Schema Changes
You can choose between design-time ETL designer wizards or our rules engine with dynamic runtime schema change detection and propagation. These concepts work well together: start with your initial design, then as new schema changes occur at the source they are automatically added to the data dictionary. Schema changes can also be applied to the destination.
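The detection step above can be sketched as follows (a hypothetical illustration, not METL's internals): compare each incoming Json-like record's fields against the fields already registered in the dictionary, and surface anything new so it can be added and propagated.

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: runtime schema-drift detection against a known field set.
public class SchemaDriftDetector {
    private final Set<String> knownFields = new LinkedHashSet<>();

    public SchemaDriftDetector(Set<String> initialFields) {
        knownFields.addAll(initialFields);
    }

    // Returns fields seen for the first time and registers them as known,
    // so a real system could then update the data dictionary and destination.
    public Set<String> detectNewFields(Map<String, Object> record) {
        Set<String> added = new LinkedHashSet<>();
        for (String field : record.keySet()) {
            if (knownFields.add(field)) { // add() returns false if already known
                added.add(field);
            }
        }
        return added;
    }
}
```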
Managing data is inherently more cost effective than managing custom programming code. A METL implementation combines data rules defined by a data architect with schema metadata to create the data dictionary. METL functionality is tightly encapsulated and normally invoked by metadata tags, but we also support user-defined functions and expressions. Many architecture design changes can be made with a simple metadata change. The metadata is user expandable, and nearly all METL functionality is database vendor independent.
Centralized Metadata Management
The concept of metadata ETL is not new, and every ETL tool has metadata. Unfortunately, most ETL tools intermix their metadata with their programming code. METL gains a significant advantage by separating our metadata from our code: the metadata is centralized in a database, and the code is implemented in our data rules engine. We have an extensible metadata tag system that is easy to read and correlates to the specific rules engine being used. The metadata can be populated with design wizards, ad hoc SQL or at runtime using our dynamic schema rules engine.
METL's metadata design pattern is very effective at encapsulating functionality. Encapsulation is a critical design goal that reduces code duplication and the need for "boilerplate" programming. Legacy ETL tools tend to fail here: ETL programmers duplicate their work, and changes require manually touching many packages.
METL also separates our ETL platform from the rules engines that sit on top of it. The platform provides an extensive set of ETL resources, freeing the rules engines to focus on implementing logic.
Java / Spring Application
Most customers can use METL "out of the box" with the preconfigured rules engines. But to be a universal ETL platform, we need to be easily extensible and adaptable. Our foundation is a Java/Spring platform with rules engine classes and a workflow to implement ETL. METL's platform supplies a multi-threaded ETL engine plus management of metadata, jobs/batches, schemas, errors, REST endpoints, file management, cache, Kafka, Zookeeper and many other ETL resources. All data flowing through our system is Json. We have a unique design pattern of metadata, Json and rules processing that is very efficient to code. Our Java technology can leverage nearly any Java library to connect to almost any data source.
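To make the metadata-plus-rules pattern concrete (a hypothetical sketch in plain Java, not METL's actual rules engine classes), encapsulated rules can be keyed by metadata tag and applied to a field value, so each rule is written once and reused wherever its tag appears:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: a tiny tag-driven rules engine for field values.
public class TagRuleEngine {
    private final Map<String, Function<String, String>> rules = new HashMap<>();

    public TagRuleEngine() {
        // Each rule is defined once; metadata tags decide where it runs.
        rules.put("TRIM", String::trim);
        rules.put("UPPER", String::toUpperCase);
        rules.put("MASK", v -> v.replaceAll(".", "*"));
    }

    // Apply, in order, every rule whose tag is present on the field's metadata.
    public String apply(String value, List<String> tags) {
        for (String tag : tags) {
            Function<String, String> rule = rules.get(tag);
            if (rule != null) {
                value = rule.apply(value);
            }
        }
        return value;
    }
}
```

Adding a new behavior means registering one rule and tagging the relevant fields in the dictionary; no per-field procedural code is duplicated.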
90% of ETL Programming is Boilerplate Coding. There's Got to be a Better Way . . .