Data Integration Steps Out of Extract, Transform, and Load (ETL) Borders
We have all grown used to speaking of data integration mostly in terms of extract, transform, and load (ETL). And that is fair enough: gathering data from one location, transforming it, and loading it into another has always been, and still is, the major task of data integration.
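The three ETL steps can be sketched in a few lines. This is a toy illustration, not any particular tool's pipeline; the sample CSV data, table name, and column names are all made up for the example.

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (here, an in-memory sample
# standing in for a real file or feed).
raw = io.StringIO("id,name,amount\n1, Alice ,10.5\n2,Bob,\n3,Carol,7\n")
rows = list(csv.DictReader(raw))

# Transform: trim whitespace, drop rows missing an amount, cast types.
clean = [
    (int(r["id"]), r["name"].strip(), float(r["amount"]))
    for r in rows
    if r["amount"].strip()
]

# Load: write the cleaned rows into the target database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, name TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

Even in this tiny sketch, most of the code is the transform step, which hints at why dedicated tooling around it grew so large.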
However, according to Rick Sherman, a data management expert, new trends are making data integration step beyond mere ETL borders, as evolving technologies and processes help data integration tools turn data into “comprehensive, consistent, clean and current information.” Many tools now support processes such as data migration, data profiling and quality, application consolidation, and more. The days when IT departments had to build those processes into data integration themselves have passed; tools have appeared with all of the above functions pre-built.
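As a flavor of what "data profiling" means in practice, here is a toy pass that counts nulls and distinct values per column, the kind of check such tools pre-build. The sample records are invented for illustration.

```python
# Hypothetical sample data; a real profiler would scan a source table.
records = [
    {"country": "US", "email": "a@example.com"},
    {"country": "US", "email": None},
    {"country": "DE", "email": "c@example.com"},
]

# Profile each column: how many values are missing, how many are distinct.
profile = {}
for col in records[0]:
    values = [r[col] for r in records]
    profile[col] = {
        "nulls": sum(v is None for v in values),
        "distinct": len({v for v in values if v is not None}),
    }
```

A report like this flags columns that need cleanup before data is moved, which is exactly the step hand-built integration jobs used to omit.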
Thus, enterprise data integration initiatives that were once extremely time-consuming are increasingly becoming real-time, as business demands ever more current information.
One more thing Sherman dwells on is hand-coding as an out-of-date practice. Frankly, why keep using error-prone hand-coding when there is a wide range of ETL tools available? There is enough choice in configuration and price to find a tool that fits one’s needs. Taking into account the open source ETL tools available almost for nothing, a devotion to hand-coding seems pretty strange, or at least unwise.