
SANDEEP HIPPARAGI

Data Analyst
Location: Bangalore/Bengaluru, Contact: +91 8660 256 286, Email: [email protected]

Summary: Highly skilled data analyst with 4+ years of experience in data analysis, visualization, and reporting. Proven
track record of providing actionable insights to drive business decisions. Proficient in data manipulation, SQL
programming, and data visualization tools. Excellent communication and collaboration skills to work effectively with
stakeholders across all levels of the organization.

Professional Experience:
Data Analyst, Tata Consultancy Services, Bangalore December 2022 - Present
Key Responsibilities:
• Analyse large, complex datasets to identify patterns, trends, and insights that drive business decisions.
• Work with Big Data technologies such as Hive and Spark, with hands-on experience in AWS.
• Develop and maintain reports and dashboards using Power BI.
• Create and maintain data models, ETL processes, and data pipelines.
• Write SQL queries and stored procedures to extract, transform, and load data from multiple data sources.
• Collaborate with cross-functional teams to understand business requirements and provide data-driven insights.
• Perform data quality checks and recommend data cleaning and normalization steps.
• Automate reporting and data processing workflows using Python (a minimal sketch follows this list).
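
As an illustration of that Python automation, here is a minimal sketch; sqlite3 stands in for the production database connection, and the orders table, columns, and output file are hypothetical, not an actual client schema.

import sqlite3
import pandas as pd

# In-memory stand-in for the real warehouse connection; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL, order_date TEXT);
    INSERT INTO orders VALUES
        ('South', 1200.0, '2023-01-05'),
        ('South',  800.0, '2023-01-12'),
        ('North',  950.0, '2023-01-09');
""")

# Extract and aggregate with SQL, then hand the result to pandas.
report = pd.read_sql_query(
    "SELECT region, SUM(amount) AS total_sales, COUNT(*) AS order_count "
    "FROM orders GROUP BY region",
    conn,
)

# Export the refreshed report; a scheduler such as cron would run this daily.
report.to_csv("daily_sales_report.csv", index=False)
print(report)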

ETL Developer, Maveric Systems, Bangalore June 2019 - November 2022
Key Responsibilities:
• Conducted data analysis on various datasets to identify insights, trends, and patterns.
• Created reports and dashboards using Power BI to visualize data and communicate insights to stakeholders.
• Conducted ad-hoc analysis and data queries to support business decisions.
• Developed and maintained ETL processes using Ab Initio, DataStage, SQL and Python.
• Worked closely with cross-functional teams to develop metrics for reporting.
• Conducted data quality checks and provided recommendations for data cleaning and normalization.
• Collaborated with IT teams to implement data governance and data management best practices.

Education:
Bachelor of Engineering from Visvesvaraya Technological University, Belagavi.

Skills:
• ETL development
• Data analysis & visualization
• SQL / PL/SQL
• Python
• Unix
• Spark/Hive
• AWS
• Tableau
• Power BI
• Performance tuning
• Ab Initio (GDE, EME, Co>Operating System)
• IBM DataStage
• Excellent communication and interpersonal skills
• Strong analytical and problem-solving skills

Certifications:
• Google Data Analytics – Issued by Coursera.
• Data Visualization Using Python – Issued by IBM.
• Databases and SQL for Data Science with Python – Issued with Honors by Coursera.
• Tableau and Power BI – Issued by DataCamp.
• SQL Basic, Intermediate, and Advanced Skill Certificate – Issued by HackerRank.

References:
Sandeep Hipparagi | LinkedIn
Sandeep Hipparagi | Credly Badges
Coursera | Accomplishments
HackerRank Profile

Project Name 1: Financial Datamart (FDM) Retrofit


Project Overview: A leading logistics company, United Parcel Service (UPS), wanted to upgrade its existing financial Datamart to improve data processing speed and enhance data accuracy. The objective of this project was to retrofit the existing financial Datamart to make it scalable, modular, and better optimized.
Project Duration: Ongoing (December 2022 - Present)
Roles and Responsibilities:
• Collaborated with business stakeholders to understand data requirements and data processing bottlenecks.
• Analysed the existing financial Datamart architecture to identify performance issues and areas of
improvement.
• Worked with the technical team to design a new data model and ETL processes to retrofit the financial
Datamart.
• Conducted data profiling and data validation to ensure data quality and accuracy (a reconciliation sketch follows the technologies list below).
• Developed complex SQL queries to extract data from multiple source systems, perform data transformation,
and load data into the Datamart.
• Created ETL jobs using IBM DataStage to automate the data integration process.
• Optimized database indexes and partitioning to improve data processing speed.
• Developed dashboards and reports using Business Objects to monitor data processing status and identify
potential issues.
• Collaborated with the technical team to conduct testing and performance tuning of the ETL processes.
• Provided end-user support and training on using the new financial Datamart.
Technologies Used:
• IBM DataStage for ETL processing
• Oracle Database for data storage and management
• SQL for data querying and optimization
• Business Objects for reporting and data visualization
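
A minimal sketch of the data validation step called out above. sqlite3 stands in for the Oracle connection, and src_gl/fdm_gl are hypothetical source and Datamart tables; the real checks ran against the production schema.

import sqlite3

# In-memory stand-in for the source system and the financial Datamart.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_gl (acct TEXT, amount REAL);
    CREATE TABLE fdm_gl (acct TEXT, amount REAL);
    INSERT INTO src_gl VALUES ('1001', 250.0), ('1002', 125.5);
    INSERT INTO fdm_gl VALUES ('1001', 250.0), ('1002', 125.5);
""")

# Reconcile row counts and amount totals between source and target.
checks = {
    "row_count": "SELECT (SELECT COUNT(*) FROM src_gl) - (SELECT COUNT(*) FROM fdm_gl)",
    "amount_sum": "SELECT (SELECT SUM(amount) FROM src_gl) - (SELECT SUM(amount) FROM fdm_gl)",
}
for name, sql in checks.items():
    diff = conn.execute(sql).fetchone()[0]
    print(name, "OK" if diff == 0 else f"MISMATCH ({diff})")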
Project Name 2: Sales Performance Analysis
Project Overview: NetApp Inc., a technology company, wanted to analyse its sales performance to identify opportunities for improvement and increase revenue. The goal of this project was to develop a sales performance analysis model to help identify key performance indicators and improve sales strategies.
Project Duration: 5 months (July 2022 - November 2022)
Roles and Responsibilities:

• Gathered and analysed sales data from various sources, including CRM systems and financial reports.
• Developed a data model and ETL processes to clean, transform, and load the data into a data warehouse.
• Conducted data exploration and analysis to identify key performance indicators for sales, such as conversion
rates, customer acquisition costs, and revenue per customer.
• Developed a sales performance analysis model using regression analysis and data visualization techniques to identify trends and patterns in the data (a sketch follows the technologies list below).
• Collaborated with sales teams to develop sales strategies based on the insights gained from the analysis.
• Developed dashboards and reports using Tableau to monitor sales performance and evaluate the
effectiveness of the model.
• Provided insights and recommendations based on the analysis to senior management.
Technologies Used:

• ETL processing
• SQL Server for data storage and management
• Python for data cleaning and transformation, and regression analysis
• Power BI for data visualization and reporting
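
A minimal sketch of the regression approach, assuming scikit-learn and synthetic monthly figures; the drivers (marketing spend, deals closed) and all numbers are illustrative, not NetApp data.

import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [marketing_spend, deals_closed]; target: monthly revenue.
X = np.array([[10, 42], [12, 48], [9, 39], [15, 60], [11, 45], [14, 55]])
y = np.array([310, 355, 290, 430, 335, 405])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)   # estimated contribution of each driver
print("R^2:", model.score(X, y))      # fit quality on the sample

# Score a hypothetical upcoming month to support sales planning.
print("forecast:", model.predict([[13, 52]]))
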
Project Name 3: Claims Data Analysis for AIG Insurance

Project Overview: AIG Insurance wanted to improve its claims processing and reduce the time taken to settle claims.
The goal of this project was to analyse claims data and identify trends to help optimize the claims process and reduce
processing time.

Project Duration: 6 months (January 2022 - June 2022)

Roles and Responsibilities:

• Gathered and analysed claims data from various sources, including policy information, medical records, and
adjuster notes.
• Developed a data model and ETL processes to clean, transform, and load the data into a data warehouse.
• Conducted data exploration and analysis to identify trends and patterns in claims data, including frequency,
severity, and claim duration.
• Developed predictive models using machine learning algorithms to identify potentially fraudulent claims and prioritize claims for investigation (a sketch follows the technologies list below).
• Collaborated with claims adjusters and other stakeholders to implement process improvements based on the
analysis and recommendations.
• Developed dashboards and reports using Power BI to monitor claims performance and evaluate the
effectiveness of the models.
• Provided insights and recommendations based on the analysis to senior management.

Technologies Used:

• ETL processing
• SQL Server for data storage and management
• Python for data cleaning and transformation, and machine learning algorithms
• Power BI for data visualization and reporting
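
A minimal sketch of the fraud-flagging idea, assuming scikit-learn; the claim features and labels below are synthetic stand-ins, not AIG data.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [claim_amount, days_to_report, prior_claims]; label 1 = flagged.
X = np.array([
    [1200,  2, 0], [45000, 30, 4], [800,  1, 0],
    [52000, 45, 5], [1500,  3, 1], [60000, 60, 6],
])
y = np.array([0, 1, 0, 1, 0, 1])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Rank incoming claims by predicted fraud probability so adjusters
# investigate the riskiest ones first.
new_claims = np.array([[48000, 40, 3], [900, 2, 0]])
print(clf.predict_proba(new_claims)[:, 1])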

Project Name 4: Population Health Management Strategies


Project Overview: Cigna, a healthcare company, wanted to develop population health management strategies to improve patient outcomes and reduce costs. The goal of this project was to analyse claims and clinical data to identify high-risk patients and develop interventions to improve their health.
Project Duration: 8 months (May 2021 - December 2021)
Roles and Responsibilities:

• Collected and analysed claims and clinical data from multiple sources to identify high-risk patients.
• Developed and implemented a data model to integrate the various data sources into a centralized database.
• Conducted statistical analysis and data mining to identify patterns and trends in the data.
• Developed predictive models to identify patients at risk of hospitalization or readmission (a risk-scoring sketch follows the technologies list below).
• Collaborated with clinicians and care managers to develop interventions to improve the health of high-risk
patients.
• Developed dashboards and reports to monitor the performance of the interventions and evaluate the
effectiveness of the model.
• Provided insights and recommendations based on the analysis to senior management.
Technologies Used:

• Ab Initio for ETL processing
• SQL Server for data storage and management
• Python for data cleaning and statistical analysis
• Tableau for data visualization and reporting
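
One way to express the high-risk patient identification is a simple weighted risk score; the sketch below uses pandas with synthetic features, and the weights and tier cut-offs are assumptions for illustration, not clinically validated values.

import pandas as pd

# Synthetic patient-level features aggregated from claims and clinical feeds.
patients = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "er_visits_12m": [0, 3, 1, 5],
    "chronic_conditions": [1, 4, 2, 5],
    "rx_count": [2, 9, 4, 12],
})

# Weighted composite score; the weights are illustrative only.
patients["risk_score"] = (
    2.0 * patients["er_visits_12m"]
    + 1.5 * patients["chronic_conditions"]
    + 0.5 * patients["rx_count"]
)
patients["tier"] = pd.cut(
    patients["risk_score"],
    bins=[-1, 5, 12, float("inf")],
    labels=["low", "medium", "high"],
)
print(patients[["patient_id", "risk_score", "tier"]])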

Project Name 5: Credit Risk Analysis for Small Business Loans
Project Overview:
JPMorgan Chase aimed to improve the accuracy of credit risk assessments for small business loans. The goal of this
project was to develop a credit risk model that incorporates both financial and non-financial data to better predict
the probability of default for small business loans.
Project Duration: 16 months (January 2020 - April 2021)
Roles and Responsibilities:

• Gathered and analysed small business loan data from various sources, including transaction history, financial
statements, and business plans.
• Developed a data model and ETL processes to clean, transform, and load the data into a data warehouse.
• Conducted data exploration and analysis to identify key factors contributing to credit risk for business loans.
• Developed a credit risk model using machine learning algorithms, including logistic regression and decision trees (a sketch follows the technologies list below).
• Collaborated with loan officers to integrate the credit risk model into their loan approval process.
• Provided insights and recommendations based on the analysis to senior management.
Technologies Used:

• ETL processing
• SQL Server for data storage and management
• Python for data cleaning and transformation, and machine learning algorithms
• Excel for data visualization and reporting
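
A minimal sketch of a probability-of-default model combining financial and non-financial signals, assuming scikit-learn; the loan records below are synthetic, not JPMorgan Chase data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: [annual_revenue_k, debt_to_income, years_in_business]; 1 = defaulted.
X = np.array([
    [500, 0.2, 8], [120, 0.9, 1], [300, 0.4, 5],
    [ 80, 1.1, 2], [650, 0.3, 10], [150, 0.8, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])

# Standardize features, then fit logistic regression for probability of default.
pd_model = make_pipeline(StandardScaler(), LogisticRegression())
pd_model.fit(X, y)

# Predicted probability of default drives approve / refer / decline decisions.
applicant = np.array([[200, 0.6, 3]])
print("PD:", pd_model.predict_proba(applicant)[0, 1])
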
Skills in Summary:

• Data analysis
• Data visualization
• Statistical analysis
• Programming skills: SQL, Python, R, and Excel
• Data cleaning and preparation
• Communication skills
• Business acumen
• Problem-solving skills
• Attention to detail
• Time management
Experience in summary:
• Experience with data analysis tools and libraries (Pandas, NumPy, Scikit-Learn)
• Experience with data visualization tools (Matplotlib, Seaborn)
• Ability to work with large datasets and databases
• Familiarity with machine learning algorithms and techniques
• Understanding of data modelling and database design
• Knowledge of data warehousing and ETL processes (IBM DataStage and Ab Initio)
• Experience with data mining and predictive modelling
• Familiarity with cloud computing and distributed computing systems (AWS, Hadoop, Spark)
• Experience with data reporting and dashboards
• Knowledge of software development processes and best practices.
