Founded in 2006, SnapLogic provides platform-agnostic integration of data, applications, and APIs. Its enterprise integration cloud delivers a unified integration platform as a service (iPaaS), which the company calls the Elastic Integration Platform.
How it Works
SnapLogic uses pre-built, intelligent connectors, called Snaps, for more than 400 applications and data stores. These connectors simplify the effort of moving data from one system to another. Hybrid batch and streaming support provides flexibility in how data is moved.
The SnapLogic architecture is illustrated in the following diagram:
Image source: https://www.snaplogic.com/why-snaplogic/how-it-works
There are several types of Snaplex:
- Cloudplex, which is hosted in the cloud.
- Groundplex, which is behind the customer’s firewall.
- Hadooplex, which uses Yet Another Resource Negotiator (YARN) to execute pipelines. SnapLogic can run natively on a Hadoop cluster. Users can create Hadoop-based pipelines without coding.
- Sparkplex, a data processing platform with a collection of processing nodes (containers) that take data pipelines, convert them to the Spark framework, and then execute them on a cluster. Users can create Spark-based pipelines without coding.
The platform provides three core tools for big data integration:
- Designer: An HTML5-based user interface for specifying and building “pipelines” (integration workflows) from Snaps via drag and drop. These pipelines can be streaming or accumulating. The accumulating type collects all data from the input source before emitting it from the pipeline, and is used for more complex data manipulations.
- Manager: An application that controls and monitors data integration, and administers data and process flow lifecycles. This interface also administers users, projects, security, single sign-on (SSO) and password encryption.
- Dashboard: Used for viewing data integrations, including performance, utilization and health. The interface includes drill-down capabilities and provides for triggered event notifications.
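The streaming-versus-accumulating distinction the Designer exposes can be illustrated in plain Python. This is a conceptual sketch, not SnapLogic code: the record format, the `uppercase_name` transform, and the sort step are all invented for illustration.

```python
from typing import Iterable, Iterator

# Hypothetical per-record transform used by both pipeline styles.
def uppercase_name(record: dict) -> dict:
    return {**record, "name": record["name"].upper()}

def streaming_pipeline(records: Iterable[dict]) -> Iterator[dict]:
    """Streaming: each record is transformed and emitted as it arrives."""
    for record in records:
        yield uppercase_name(record)

def accumulating_pipeline(records: Iterable[dict]) -> list:
    """Accumulating: all input is collected before anything is emitted,
    enabling whole-set operations such as sorting."""
    collected = [uppercase_name(r) for r in records]
    return sorted(collected, key=lambda r: r["name"])

source = [{"name": "zoe"}, {"name": "amy"}]
print(list(streaming_pipeline(source)))   # emitted in arrival order
print(accumulating_pipeline(source))      # emitted sorted, after collection
```

The streaming version never holds more than one record in flight; the accumulating version must buffer the whole input, which is why accumulating pipelines suit complex, whole-dataset manipulations.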
How is SnapLogic Different?
Traditional ETL tools are built for the point-to-point, source-to-target column mapping ETL method. As data stores grow, as users need real-time information, and as sheer data volumes increase, these tools are usually unable to keep up with demand. SnapLogic’s approach lets companies simplify the data integration process: users can create pipelines themselves, which shortens the time to develop a new data integration pipeline, simplifies maintenance and lowers cost.
- Data Snaps for all major SQL databases and data sources: MySQL, SQL Server, Oracle, Teradata, Cassandra, Trillium, Amazon DynamoDB, Confluent, Amazon Redshift.
- Analytics Snaps for a variety of systems: HDFS Read/Write, Anaplan, Google Analytics, Amazon DynamoDB, Cassandra, Amazon Redshift, Birst.
- Core Snaps used for data analysis on common systems and file types: CSV Read/Write, REST, Filter, Spreadsheet Reader, Unique, ERP, Fixed Width Reader/Writer, Field Cryptography, Email Snap, Sequence, Sort, XML Read/Write, JSON Read/Write, Transform.
- An SDK and APIs are available to build custom Snaps or embed them and integration flows into other applications and platforms.
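One common way to embed an integration flow in another application is to expose a pipeline as an HTTPS-triggered task and invoke its URL. The sketch below, using only the Python standard library, builds such a request; the task URL, token, and payload are placeholders, not real endpoints or credentials.

```python
import json
import urllib.request

def build_trigger_request(task_url: str, token: str,
                          payload: dict) -> urllib.request.Request:
    """Build an HTTPS POST that would invoke a pipeline exposed as a
    triggered task. URL and token here are hypothetical placeholders."""
    return urllib.request.Request(
        task_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical task URL -- a real one is obtained from the Manager tool.
req = build_trigger_request(
    "https://elastic.snaplogic.com/api/1/rest/slsched/feed/example/task",
    "EXAMPLE_TOKEN",
    {"run_date": "2024-01-01"},
)
# urllib.request.urlopen(req) would submit the request; omitted here.
```

Because the invocation is plain HTTPS, any language or platform with an HTTP client can trigger the same flow.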
To get started and see how SnapLogic works, see: