Srikanth Madapati
Sr. Data Engineer
srikanthmadapati0@gmail.com | +1 469 268 8630

Basic Python Certified, Certified Snowflake Developer

SUMMARY OF EXPERIENCE
Senior Data Engineer with 10 years of experience building data-intensive applications, data analytics, business intelligence, and data integration and migration solutions using SAS, Python, Oracle, Snowflake, ADF, Databricks, Synapse Analytics, Kafka, and DBT.
 Expertise in building and migrating data warehouses on the Snowflake cloud platform.
 Experience in creating Snowflake warehouses and moving data from traditional databases to Snowflake.
 Experience in building data pipelines in Azure Data Factory.
 Implemented techniques to mitigate data skew across partitions.
 Implemented data transformations in Databricks notebooks and provided stage-specific configuration in the notebook files.
 Good exposure to Snowflake cloud architecture and Snowpipe for continuous data ingestion.
 Proficient in understanding business processes and requirements from user stories and translating them into technical requirements.
 Experience in creating dedicated SQL pools and Spark notebooks in Synapse Analytics.
 Good understanding of data storage in Synapse SQL pools.
 Extensive ETL experience covering data sourcing, mapping, transformation, and conversion, along with data modelling.
 Advanced SQL skills including complex joins, Snowflake stored procedures, cloning, views, and materialized views.
 Experience with the Snowflake data warehouse and a deep understanding of Snowflake architecture and processing.
 Created clone objects in Snowflake using zero-copy cloning.
 Handled large and complex data sets, such as JSON and CSV files, from sources including ADF and AWS S3.
 Experience in writing complex SQL scripts using statistical aggregate and analytical functions to support ETL in the Snowflake cloud data warehouse.
 Used COPY/INSERT, PUT, and GET commands for loading data into Snowflake tables from internal and external stages (see the sketch after this list).
 Good understanding of Kafka topics, producers, and consumers.
 Experience in integrating DBT with Snowflake.
 Created SQL models in DBT for data movement within Snowflake.
 Experience in creating Azure Event Hubs, Azure Key Vault, and Stream Analytics.
 Experience developing under software development methodologies such as Agile and Waterfall.
 Worked on bulk loading data into Snowflake tables.
 Expertise in SAS/Base and SAS/Macros programming.
 Excellent understanding of SAS ETL, SAS BI.
 Rich hands-on experience in SAS CI Studio, SAS/Data Integration Studio, SAS BI Tools and SAS Enterprise Guide.
 Expertise in SAS 9.4 and SAS 9.3 administration activities.
 Able to work independently or as part of a team, with a sense of responsibility, dedication, commitment, and an eagerness to learn new technologies.
 Excellent client interaction skills and proven experience working independently as well as in a team.
 Exposure to Power BI concepts and some hands-on experience creating dashboards.
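
For illustration, a minimal sketch of the staged-load pattern referenced above (PUT a local file to an internal stage, then COPY INTO the target table) using the Snowflake Python connector; the connection parameters, file path, and the ORDERS table name are placeholders, not project specifics:

# Minimal sketch: load a local CSV file into a Snowflake table via its internal stage.
# Connection parameters, file path, and table/stage names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
try:
    cur = conn.cursor()
    # Upload the local file to the table's internal stage.
    cur.execute("PUT file:///tmp/orders.csv @%ORDERS AUTO_COMPRESS=TRUE")
    # Load the staged file into the target table.
    cur.execute("""
        COPY INTO ORDERS
        FROM @%ORDERS
        FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    conn.close()
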
TECHNICAL EXPERTISE
Cloud Data Warehouse : Snowflake
Cloud ETL & Analytics : Azure Data Factory, Databricks, Synapse Analytics
Big Data : HDFS, Hive, Pig, Spark, Airflow
Streaming Tools : Kafka, Azure Stream Analytics
Cloud Environment : Azure, AWS
Programming Language : Python
Database : Oracle, PGSQL, MySQL
Operating Systems : Linux, Windows
Analytical Tools : SAS 9.4 and SAS 9.1.3, SAS Base 9.4, SAS Macros, SAS Management Console 9.4 and 9.1, SAS Data Integration Studio 3.4 & 7.1, SAS OLAP Cube Studio 9.1, SAS Information Map Studio 3.1, SAS Web Report Studio, SAS Information Delivery Portal, SAS Customer Intelligence
CI/CD Tools : Jenkins, Git, Azure DevOps

EDUCATION
Bachelor of Technology in Bioinformatics, Sathyabama University, Chennai, India, 2011

PROFESSIONAL EXPERIENCE
United Health Group (Optum), MN, USA April 2023 – Present
Cloud Data Engineer
Project: Healthcare Economics
Roles and Responsibilities:
 Migrated data from SQL Server to the Snowflake cloud data warehouse.
 Migrated Hive queries to Databricks.
 Developed and executed Spark jobs to perform data cleaning and business transformations.
 Developed PySpark notebooks in Azure Databricks and established the connection between Databricks and Azure Data Factory.
 Implemented transformation logic on Delta Lake tables and created audit logs (see the sketch after this list).
 Developed ADF pipelines per business requirements and wrote the Databricks notebooks consumed by those pipelines.
 Developed Snowflake stored procedures and materialized views as per business needs.
 Loaded streaming data into the Databricks Delta Lake layer through Kafka.
 Good understanding of Kafka topics, producers, and consumers.
 Integrated Confluent Kafka with Snowflake to read streaming data and generate business reports.
 Integrated DBT with Snowflake and built SQL models in DBT to execute Snowflake queries.
 Used DBT to debug complex chains of queries by splitting them into multiple models that can be tested separately.
 Worked on pipeline creation activities using Airflow.
 Loaded data into Azure Synapse Analytics from Azure Data Lake Storage.
 Created external tables and materialized views in Synapse Analytics.
 Experience in creating serverless SQL and dedicated SQL pools.
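
A minimal sketch of the Delta Lake transformation and audit-logging step referenced above, as it might appear in a Databricks notebook cell; the table names, columns, and the audit table are illustrative placeholders, and spark is the session provided by the Databricks runtime:

# Illustrative notebook cell: clean a raw Delta table into a curated Delta table
# and append a simple audit record. Names and columns are placeholders.
from pyspark.sql import functions as F

raw_df = spark.read.table("raw.claims")

curated_df = (
    raw_df
    .dropDuplicates(["claim_id"])                                  # basic data cleaning
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
    .withColumn("load_ts", F.current_timestamp())                  # timestamp for auditing
)

curated_df.write.format("delta").mode("overwrite").saveAsTable("curated.claims")

# Append a one-row audit log entry for this run.
audit_df = spark.createDataFrame(
    [("curated.claims", curated_df.count())], ["table_name", "row_count"]
).withColumn("run_ts", F.current_timestamp())
audit_df.write.format("delta").mode("append").saveAsTable("audit.load_log")
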
United Health Group (Optum)
Cloud Data Engineer Feb 2023 – Apr 2023
Project: SMART Modernization
Roles and Responsibilities:
 Converted legacy Oracle procedure-based ETL processes to Snowflake SQL procedures.
 Designed ADF ETL pipelines to orchestrate the Snowflake procedures.
 Implemented an email notification mechanism in ADF using the Microsoft Graph API.
 Integrated Kafka with Snowflake to consume streaming data.
 Created Airflow DAGs to schedule ingestion, ETL jobs, and various business reports (see the sketch after this list).
 Redesigned and optimized existing Snowflake procedures for performance.
 Extracted, transformed, and loaded data from source systems into Azure data storage using Azure Data Factory and Spark SQL, and processed the data in Databricks.
 Analyzed and solved business problems at their root, stepping back to understand the broader context.
 Developed a data validation framework that improved data quality.
 Worked on Snowflake streams to process incremental records.
 Addressed data issues and provided permanent fixes.
 Created materialized views to speed up query processing for rarely updated large tables.
 Used temporary and transient objects on different datasets.
 Used the COPY command to bulk load data into Snowflake from various sources.
 Created and managed Snowpipe for continuous data loading.
 Used zero-copy cloning to clone databases for Dev and QA environments.
 Estimated requirements and committed to deadlines with the business.
 Converted design documents into technical specifications.
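
As an illustration of the Airflow scheduling mentioned above, a minimal DAG sketch that calls a Snowflake procedure once a day; the DAG id, schedule, connection values, and the LOAD_DAILY_CLAIMS procedure name are assumptions for the example, and credentials would normally come from an Airflow connection or secrets backend:

# Illustrative Airflow 2.x DAG that schedules a daily Snowflake ETL call.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
import snowflake.connector


def run_daily_etl():
    # Call a Snowflake stored procedure (placeholder name and credentials).
    conn = snowflake.connector.connect(
        account="<account_identifier>", user="<user>", password="<password>",
        warehouse="<warehouse>", database="<database>", schema="<schema>",
    )
    try:
        conn.cursor().execute("CALL LOAD_DAILY_CLAIMS()")
    finally:
        conn.close()


with DAG(
    dag_id="snowflake_daily_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_daily_etl", python_callable=run_daily_etl)
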

Nestle Jul 2021 – Jan 2023


Cloud Data Engineer
Roles and Responsibilities:
 Responsible for all activities related to the development, implementation, administration, and support of ETL processes for the cloud data warehouse.
 Ingested data through ADF pipelines.
 Loaded data from SAP BW and C4C into Snowflake staging via ADF pipelines.
 Loaded data into Snowflake static tables from the internal stage and from the local machine.
 Wrote complex SnowSQL scripts and stored procedures in the Snowflake cloud data warehouse per business and reporting requirements.
 Involved in data analysis and handling ad-hoc requests by interacting with the BI team and clients.
 Used performance techniques such as clustering keys and autoscaling to speed up data loading and query execution.
 Performed data reconciliation for source and target data using the audit framework built in the project.
 Created Snowflake objects such as tables, streams, tasks, and procedures, and deployed them to higher environments using Azure DevOps CI/CD pipelines (see the sketch after this list).
 Created views on top of tables with join conditions for reporting purposes.
 Created pipelines and implemented looping using Until and ForEach activities in ADF.
 Implemented row-level security (RLS) policies as per requirements.
 Implemented ADF and SharePoint integration to analyze business data.
 Cloned production data for code modification and testing.
 Performed troubleshooting, analysis, and resolution of critical issues.
 Created clone objects using zero-copy cloning.
 Created linked services in ADF to establish connections to SAP and Snowflake.
 Monitored scheduled ADF pipelines.
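
A minimal sketch of the stream and task pattern behind the incremental Snowflake objects mentioned above, issued through the Python connector; the object names (STG_ORDERS, DIM_ORDERS, COMPUTE_WH) and the merge logic are placeholders for illustration:

# Sketch: capture changes with a stream and merge them on a schedule with a task.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Stream records inserts/updates on the staging table since the last consumption.
cur.execute("CREATE OR REPLACE STREAM STG_ORDERS_STREAM ON TABLE STG_ORDERS")

# Task periodically merges captured changes into the target table.
cur.execute("""
CREATE OR REPLACE TASK MERGE_ORDERS_TASK
  WAREHOUSE = COMPUTE_WH
  SCHEDULE = '60 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('STG_ORDERS_STREAM')
AS
  MERGE INTO DIM_ORDERS t
  USING STG_ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
  WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS
  WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS) VALUES (s.ORDER_ID, s.STATUS)
""")

cur.execute("ALTER TASK MERGE_ORDERS_TASK RESUME")
conn.close()
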

Environment: Snowflake, SQL, Azure Data Factory, Azure Databricks, JIRA, Postman, SAP BW, Azure Blob.

Ericsson May 2016 – July 2021


Data Engineer
Roles and Responsibilities:
 Responsible for all activities related to the development, implementation, administration, and support of ETL
processes for on-premises and cloud data warehouse.
 Migrated data from on-premises systems to the Snowflake data warehouse.
 Bulk loaded data from an external stage (AWS S3) into Snowflake using the COPY command.
 Worked on Snowflake streams to process incremental records.
 Loaded data into Snowflake tables from the internal stage and from the local machine.
 Used COPY, LIST, PUT, and GET commands for validating internal and external stage files (see the sketch after this list).
 Imported and exported data between the internal stage (Snowflake) and the external stage (S3 bucket).
 Wrote complex SnowSQL scripts in the Snowflake cloud data warehouse for business analysis and reporting.
 Performed troubleshooting, analysis, and resolution of critical issues.
 Involved in data analysis and handling ad-hoc requests by interacting with business analysts and clients, and resolving issues as part of production support.
 Used performance techniques such as clustering keys and autoscaling to speed up data loading and query execution.
 Performed data validations for source and target using load history and copy history.
 Developed Snowflake queries using various joins such as self, left, right, and full outer joins.
 Created views, materialized views, and Snowflake procedures.
 Assigned roles to various users across development, testing, and other teams.
 Designed update and insert strategies for merging changes into existing dimension and fact tables.
 Maintained SAS DI jobs to create and populate tables in the data warehouse for daily reporting across departments.
 Created DI jobs for ETL processes and reporting.
 Created SAS FM templates and published them in SAS FM Studio.
 Addressed data issues and provided permanent fixes.
 Estimated requirements and committed to deadlines with the business.
 Converted design documents into technical specifications.
 Mentored the team to improve their technical skills.
 Installed and configured SAS 9.4 Enterprise Business Intelligence servers in a Solaris environment.
 Set up individual project repositories through SMC.
 Troubleshot SAS server-related issues and coordinated with the SAS vendor on support tracks for solutions.
 Applied licenses on SAS servers.
 Created libraries for various data sources such as Oracle, flat files, and SAS SPDS.
 Monitored server resources and performance, including mount points, CPU usage, and logs.
 Performed SAS server maintenance activities, including taking backups and restarting SAS services.
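
To illustrate the stage validation and bulk-load flow referenced above, a short sketch using the Snowflake Python connector against an external S3 stage; the stage name EXT_S3_STAGE, the ORDERS table, and the connection values are placeholders:

# Sketch: inspect files on an external S3 stage, validate them, then bulk load.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# List the staged files (external stage backed by an S3 bucket).
for name, size, md5, last_modified in cur.execute("LIST @EXT_S3_STAGE/orders/"):
    print(name, size)

# Dry-run the load to surface parsing errors without committing any rows.
cur.execute("""
    COPY INTO ORDERS
    FROM @EXT_S3_STAGE/orders/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    VALIDATION_MODE = 'RETURN_ERRORS'
""")
print(cur.fetchall())

# Actual bulk load.
cur.execute("""
    COPY INTO ORDERS
    FROM @EXT_S3_STAGE/orders/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    ON_ERROR = 'CONTINUE'
""")
conn.close()
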

Environment: Snowflake, AWS S3, Oracle, SQL, MySQL, PGSQL, SAS DI, SAS SMC, SAS LSF, SAS FM, SAS OLAP

Reliance Tech Services Jun 2014 – May 2016


SAS Developer
Roles and Responsibilities:
 Used PROC SQL and SQL pass-through to write ETL queries, join tables, and perform ad hoc analysis, testing, and validation of datasets.
 Wrote SAS macros and generalized code to implement incremental jobs for ad hoc and daily/monthly ETL processing.
 Used PROC SPDO to manage data as clusters (dynamic tables) on the SPDS server for efficient operation of the ETL process.
 Created and managed metadata objects that define sources, targets, and jobs for various transformations, and consolidated user transformations into process flows via Process Designer in DI Studio.
 Deployed jobs, created dependencies in DI Studio, and scheduled them via LSF.
 Administered SAS Enterprise Business Intelligence applications on SAS production servers.
 Responsible for installing SAS client tools and opening SAS server ports.
 Responsible for setting up the individual project repositories through SMC.
 Applied licenses on SAS Servers, SAS DIS Server and SAS SPD Server.
 Responsible for taking backups of Metadata.
 Performing maintenance activity of SAS servers which includes taking backups and restarting SAS services.

Environment: Oracle, SQL, MySQL, PGSQL, SAS DI, SAS EG, SAS SMC, SAS LSF, SAS FM, SAS OLAP, Linux
