Nowadays, most companies’ existence, competitiveness, and profitability depend on data flow. However, the variety of data sources, types, volumes, and velocities has grown somewhat out of control. The rapid growth of database sizes, semi-structured data, and SaaS applications has slowed down the process of finding data.
It’s all about consolidating, storing, and accessing information effectively. Uncategorized data is scattered across different platforms and systems. As a consequence, finding the required data is challenging – users need to know which data is administered, where the data is located (and whether they have proper access), and finally, how to retrieve it.
When it comes to making decisions based on data, the biggest challenge is finding and accessing the data itself. Transporting and consolidating data from disparate sources simply takes too long.
Not only do users suffer; IT departments also have to struggle with data in different formats, of varying quality, and from multiple sources. Slow, extremely time-consuming (or sometimes skipped) data transformation makes it impossible to deliver the requested information on time. The resulting gap between when data is provided and when it is needed harms the IT department’s image.
To achieve better results, companies are investing in cloud-based data pipelines and modern ETL solutions. Until recently, most of the world’s ETL tools were on-prem and based on batch processing. Historically, most organizations used to utilize their free compute and database resources to perform nightly batches of ETL jobs and data consolidation during off-hours. With IT moving to the cloud, more and more cloud-based ETL services started to emerge. Some of them keep the same basic batch model of the legacy platforms, while others start to offer real-time support, intelligent schema detection, and more.
Doing your ETL in batches makes sense only if you do not need your data in real-time. It might be good for salary reporting or tax calculations. However, most modern applications require real-time access to data from different sources.
- Extract: data brought in from external sources
- Transform: data reshaped to fit the required standard
- Load: converted data written into a destination data warehouse
The three-stage ETL process and the ETL tools implementing the concept are a potential solution for the needs described above.
ETL stands for 'Extract, Transform, and Load' – these words describe the idea of the system. ETL tools were created to improve and facilitate data warehousing.
The ETL process consists of the following steps:
- Build reference data
- Extract data from sources
- Load into staging tables
- Audit reports
- Clean up
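As a rough sketch, the steps above might map to code like the following. This is a minimal in-memory Python example; all function and variable names (such as `extract_rows` and `REFERENCE_COUNTRIES`) are illustrative, not taken from any particular tool:

```python
# Step 1: build reference data that valid rows must conform to.
REFERENCE_COUNTRIES = {"PL": "Poland", "DE": "Germany"}

def extract_rows():
    # Step 2: pull raw rows from a source system (hard-coded here).
    return [{"id": 1, "country": "PL", "amount": "10.5"},
            {"id": 2, "country": "XX", "amount": "3.0"}]

def load_into_staging(rows):
    # Step 3: land raw rows in a staging area before transformation.
    return list(rows)

def audit(staged):
    # Step 4: report rows that fail validation against reference data.
    return [r for r in staged if r["country"] not in REFERENCE_COUNTRIES]

def clean_up(staged, rejected):
    # Step 5: keep only valid rows for the final load.
    return [r for r in staged if r not in rejected]

staged = load_into_staging(extract_rows())
rejected = audit(staged)
warehouse = clean_up(staged, rejected)
print(len(warehouse), len(rejected))  # 1 valid row, 1 rejected row
```

A real pipeline would, of course, read from live sources and write to actual staging tables, but the control flow follows the same five steps.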
Sometimes these steps are supervised and performed manually, which is very time-consuming and can prove inaccurate. Using ETL tools is more efficient and makes the entire process more reliable.
The need to become more data-driven has forced many companies to invest in complicated data warehousing systems. Differences and incompatibilities between systems led to uncontrolled growth in the cost and time needed to coordinate the processes. ETL tools were created to simplify data management and simultaneously reduce the effort involved.
Several types of ETL tools exist, according to different customer needs:
- Tools that perform and supervise only selected stages of the ETL process, such as data migration tools (EtL Tools or “small t tools”) or data transformation tools (eTl Tools, “capital T tools”).
- Tools that offer complete solutions (ETL Tools) and include many functions intended for processing large amounts of data or more complicated ETL projects.
- Code-based tools are a family of programming tools that support many operating systems and programming languages.
- GUI-based tools remove the coding layer and allow users to work with little or no knowledge of programming languages.
Some tools, such as server engine tools, execute many simultaneous ETL steps run by more than one developer, while others, such as client engine tools, are simpler and execute ETL routines on the same machine where they are developed.
How ETL Tools Work
The first task is data extraction from internal or external sources. After queries are sent to the source system, data may go directly to the destination database; usually, however, there is a need to monitor it or gather more information, so it first goes to a staging area. Some tools automatically extract only new or changed information, so users don’t need to refresh everything themselves.
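The "only new or changed information" pattern is commonly implemented with a watermark remembered from the previous run. A hedged sketch, using an in-memory SQLite table with hypothetical table and column names:

```python
import sqlite3

# Stand-in source system: an orders table with a last-modified column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-02-01"), (3, "2024-03-01")])

last_watermark = "2024-01-15"  # stored from the previous ETL run

# Extract only rows changed since the last run, then advance the watermark.
changed = conn.execute(
    "SELECT id, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
    (last_watermark,)).fetchall()
new_watermark = changed[-1][1] if changed else last_watermark

print(changed)        # [(2, '2024-02-01'), (3, '2024-03-01')]
print(new_watermark)  # 2024-03-01
```

Production tools often use change data capture (reading the database log) instead, but a timestamp watermark is the simplest version of the idea.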
The second task includes transformation, which is a broad category:
- Transforming data into the structure required for further processing (extracted data usually retains a structure typical of the source)
- Sorting data
- Joining or splitting data
- Checking quality
The third task includes loading the data into a data warehouse.
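The load step can be sketched like this, with an in-memory SQLite database standing in for the warehouse (the table and column names are hypothetical):

```python
import sqlite3

# Stand-in data warehouse with a single fact table.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE fact_sales (customer_id INTEGER, amount REAL)")

transformed = [(1, 9.5), (2, 15.0)]  # output of the transform step

# Bulk-insert inside one transaction so a failed load leaves no partial data.
with dw:
    dw.executemany("INSERT INTO fact_sales VALUES (?, ?)", transformed)

total = dw.execute("SELECT SUM(amount) FROM fact_sales").fetchone()[0]
print(total)  # 24.5
```

Loading in a single transaction (or into a staging table that is swapped in atomically) is a common design choice, since it keeps the warehouse consistent if the load fails partway through.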
ETL tools include many additional capabilities beyond the main three – extraction, transformation, and loading – for example sorting, filtering, data profiling, quality control, cleansing, monitoring, synchronization, and consolidation.
Providers of ETL Tools
Following is a list of the most popular commercial and freeware (open-source) ETL tools.
Commercial ETL tools:
- Ab Initio
- Adeptia ETL
- Business Objects Data Services
- Business Objects Data Integrator (BODI)
- Informatica PowerCenter
- Information Server
- Integration Services
- iWay Software
- Oracle Data Integrator (ODI)
- Oracle Warehouse Builder (OWB)
- Pervasive ETL
- SAS ETL
- Sybase ETL
Freeware, open source ETL tools:
- Apache NiFi
- Clover ETL
- Pentaho Data Integration (Kettle)
- Talend Integrator Suite
As illustrated above, there are many types of ETL tools, and the next step is choosing the appropriate one. Some tools are relatively expensive, and some might be too complex if you don’t need to transform large amounts of data, don’t use many sources, or don’t need sophisticated features.
Begin with defining the business requirements, then consider the technical aspects, and finally, choose the right ETL tool. We created a new ETL wiki to help you become more informed when choosing a technology or provider.