
CURRICULUM VITAE

VINIT KUMAR SHARMA
CELL: +91-9456292035, 8728011178
E-MAIL: [email protected], [email protected]

PROFESSIONAL EXPERIENCE (IN BRIEF)

Total Experience: 8+ years

•  Working as a Technology Analyst at Infosys Ltd., Chandigarh, since September 2014.
•  Worked as a Software Engineer at EXTREME BUSINESS SOLUTIONS, Gurgaon, from May 2011 to August 2014.
•  8+ years of professional IT experience, including 4+ years with the Big Data ecosystem: ingestion, storage, querying, processing, and analysis of big data.
•  In-depth understanding of Hadoop architecture and its components, such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, and the MapReduce model.
•  Familiar with the storage layer (Hadoop Distributed File System, HDFS), the computation layer (MapReduce), the Spark framework, and the Hadoop ecosystem: YARN, Hive, HBase, Pig.
•  Hands-on experience writing Spark programs using RDDs, Spark SQL, and streaming.
•  Familiar with advanced Spark features: GraphX and machine learning.
•  Basic knowledge of HBase, Kafka, and Flume.
•  5 years of development experience with Informatica PowerCenter.
•  Experience and knowledge in data warehousing using data extraction, data transformation, and data loading (ETL).
•  Good experience with Informatica Designer components: Source Analyzer, Target Designer, and Mapping Designer.
•  Worked with various transformations such as Source Qualifier, Expression, Joiner, Router, Filter, Aggregator, Lookup, Update Strategy, Sequence Generator, and Transaction Control.
•  Also worked on SQL Server 2008 and Oracle 10g.
•  Sound knowledge of Unix shell scripting.
•  Worked with the AutoSys scheduling tool and Perforce (version control).
•  Ability to rapidly grasp new technologies and concepts.
•  Strong analytical and problem-solving skills.

EDUCATION

•  M.Tech in Digital Communication from U.T.U.
•  B.Tech in Electronics & Communication from F.E.T. Agra College, Agra, affiliated to U.P. Technical University, Lucknow.
•  Intermediate from U.P. Board.
•  High School from U.P. Board.

SOFTWARE PROFILE

Big Data               Spark, HDFS, HBase, Sqoop, Hadoop
IDE Tools              Eclipse
Operating Systems      UNIX, Linux, Ubuntu, Windows
Programming Languages  SQL, PL/SQL, Spark SQL, Scala
Tools                  Informatica PowerCenter 9.5.1, Toad, PuTTY, WinSCP,
                       Tortoise SVN, FileZilla, Rapid SQL, PL/SQL Developer, JIRA
Databases              Oracle 10g/11g
Scripting Languages    Unix shell, PowerShell
Scheduling Tools       AutoSys
Versioning Tools       Perforce, SLM

PROFESSIONAL EXPERIENCE (IN DETAIL)

1. Project Name: CER and Transparency
Client: Investment client (USA) (Nov 2014 - present)

Description:
CER and Transparency are two projects for CGC. We migrated all CGC data from an Oracle server to Hadoop and a Microsoft PDW server (Azure cloud), re-implementing the Informatica logic and Oracle procedures used in the old architecture. Data for the fundamental tables was exported from the Oracle server as files, ingested into Hadoop, and then exposed through external tables created over the ingested data. Using those tables, we rebuilt the same ETL process that the Oracle-side procedures implemented and loaded all the data to the Azure cloud. Spark was used for analytics.
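A minimal Scala/Spark sketch of the external-table step described above. The table name, columns, and HDFS location are hypothetical placeholders, not the actual CGC schema:

    import org.apache.spark.sql.SparkSession

    object FileIngestionSketch {
      def main(args: Array[String]): Unit = {
        // Hive support lets Spark create and query external tables.
        val spark = SparkSession.builder()
          .appName("FileIngestionSketch")
          .enableHiveSupport()
          .getOrCreate()

        // External table over files already ingested into HDFS; the table
        // name, columns, and location below are placeholders only.
        spark.sql("""
          CREATE EXTERNAL TABLE IF NOT EXISTS cgc_fundamentals (
            instrument_id STRING,
            price         DOUBLE,
            as_of_date    STRING
          )
          ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
          LOCATION 'hdfs:///data/cgc/fundamentals'
        """)

        // Downstream ETL then reads the external table like any other table.
        spark.sql(
          "SELECT instrument_id, MAX(as_of_date) AS latest FROM cgc_fundamentals GROUP BY instrument_id"
        ).show()

        spark.stop()
      }
    }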
Responsibilities:
•  Interacting with business partners and the client for status updates on tasks and for requirements gathering on new enhancements.
•  Gathering and analyzing requirements and proposing a better solution for implementation.
•  Preparing high-level design documents on the basis of requirements.
•  Developing ETL flows and database solutions, testing and maintenance, and improving data quality.
•  Using Spark for data analytics.
•  Developing stored procedures to process data/files.
•  Managing the deployment of code in the ITE and UAT environments.
•  Value-add to the client: automation scripts to reduce manual work and save effort.
•  Ensuring quality standards and improving the overall quality of deliverables.
•  Providing technical assistance to team members.
•  Reviewing mappings, sessions, and workflows for ETL standards.
•  Identifying bugs in the existing flow by analyzing the data flow.
•  Writing unit test cases and performing unit testing.
•  Processing data with Spark SQL, using basic Spark SQL features such as DataFrames.

POCs: Completed the following three proofs of concept in Spark.

2. Project Name: Titanic Data Analysis


•  This application performs analytics on the Titanic passenger data.
•  The data is stored in HDFS, distributed over a cluster of nodes, which addresses scalability, high availability, and fault tolerance.
•  The data is processed using Spark SQL, exercising basic Spark SQL features such as DataFrames.
•  The dataset contains details of all passengers: name, age, sex, room, ticket, etc.
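A minimal sketch of this kind of DataFrame analysis, assuming the column names of the common public Titanic dataset (e.g. Survived, Sex) and a hypothetical HDFS path; the actual files used may differ:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.avg

    object TitanicAnalysis {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("TitanicAnalysis").getOrCreate()

        // Read the passenger file from HDFS into a DataFrame (hypothetical path).
        val passengers = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/titanic/passengers.csv")

        // A typical DataFrame aggregation: survival rate by sex.
        passengers
          .groupBy("Sex")
          .agg(avg("Survived").alias("survival_rate"))
          .show()

        spark.stop()
      }
    }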

3. Project Name: Olympic Data Analysis


•  This application performs analytics on the Olympic data.
•  The data is stored in HDFS, distributed over a cluster of nodes, which addresses scalability, high availability, and fault tolerance.
•  The data is processed using Spark SQL, exercising basic Spark SQL features such as DataFrames.
•  The dataset contains details of all athletes: name, age, sex, medal type, game name, etc.
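A minimal sketch in the same spirit, with a hypothetical athlete schema and path; here the DataFrame is registered as a temporary view and queried with a SQL string, another basic Spark SQL feature:

    import org.apache.spark.sql.SparkSession

    object OlympicAnalysis {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("OlympicAnalysis").getOrCreate()

        // Hypothetical columns: name, age, sex, country, medal, game.
        val athletes = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/olympics/athletes.csv")

        // Registering a temp view lets the analysis be written as plain SQL.
        athletes.createOrReplaceTempView("athletes")

        // Gold-medal count per country.
        spark.sql("""
          SELECT country, COUNT(*) AS golds
          FROM athletes
          WHERE medal = 'Gold'
          GROUP BY country
          ORDER BY golds DESC
        """).show()

        spark.stop()
      }
    }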

4. Project Name: World Bank Data Analytics


•  This application performs analytics on the World Bank logs data.
•  The data is stored in HDFS, distributed over a cluster of nodes, which addresses scalability, high availability, and fault tolerance.
•  The World Bank data (WBD) logs are processed using Spark, exercising basic RDD features such as transformations and actions.
•  The dataset contains indicators such as population, health, internet usage, and GDP.
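A minimal RDD-level sketch of the transformation/action pattern named above, assuming a hypothetical comma-separated line layout of country,indicator,value and a hypothetical HDFS path:

    import org.apache.spark.sql.SparkSession

    object WorldBankAnalysis {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("WorldBankAnalysis").getOrCreate()
        val sc = spark.sparkContext

        // Load the raw logs as an RDD of lines (hypothetical path and layout).
        val lines = sc.textFile("hdfs:///data/worldbank/logs")

        // Transformations are lazy: parse each line, keep GDP records,
        // key them by country, and sum the values.
        val gdpByCountry = lines
          .map(_.split(","))
          .filter(f => f.length == 3 && f(1) == "GDP")
          .map(f => (f(0), f(2).toDouble))
          .reduceByKey(_ + _)

        // collect() is the action that actually triggers the job.
        gdpByCountry.collect().foreach { case (country, gdp) =>
          println(s"$country -> $gdp")
        }

        spark.stop()
      }
    }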

5. Project Name: IDW (Investment Data Warehouse)
Client: Investment client (USA) (Nov 2014 - present)

Description:
The project involves the development of an Investment Data Warehouse (IDW) for the client, Capital Group of Companies. It integrates data from disparate source systems into a centralized data warehouse that supports better investment decisions and better management of various funds, such as mutual funds and equity funds.

The project involves data transformations through Extract, Transform, and Load (ETL) techniques implemented with Informatica and various database objects. It is an end-to-end application development project covering requirements elaboration, design, build, testing, implementation, and support. The system is developed in multiple releases using a staged waterfall methodology, and a prototype was developed as part of this effort to prove out the proposed architecture.

Responsibilities:
Development Responsibilities:

•  Interacting with business partners and the client for status updates on tasks and for requirements gathering on new enhancements.
•  Gathering and analyzing requirements and proposing a better solution for implementation.
•  Preparing high-level design documents on the basis of requirements.
•  Developing ETL interfaces, Informatica mappings and workflows, UNIX scripts, and database solutions; testing and maintenance; improving data quality; supporting very high volumes of trades, accounts, and groups of accounts.
•  Developing stored procedures and Unix shell scripts to process data/files.
•  Scheduling jobs using the AutoSys scheduler.
•  Managing code versioning through Perforce.
•  Managing the deployment of code in the ITE and UAT environments.
•  Value-add to the client: automation scripts/macros to reduce manual work and save effort.
•  Ensuring quality standards and improving the overall quality of deliverables.
•  Providing technical assistance to team members.
•  Reviewing mappings, sessions, and workflows for ETL standards.
•  Identifying bugs in existing mappings by analyzing the data flow.
•  Writing unit test cases and performing unit testing.

Production Support Responsibilities:

•  Provided L3 support for batches; implemented fixes in production in case of job failures.
•  Extracted incident data from Remedy for the Weekly Status Report (WSR).

Role: Application Developer
Team Size: 15

6. Project Name: Paramount
Client: Paramount Electronics (Jul 2012 - Mar 2014)

Description: Paramount Electronics is a U.K.-based organization headquartered in Buckingham that develops high-quality electrical equipment. It is a major supplier of electrical equipment to the U.K. and to parts of Mexico and Canada. They also use the AMAPS (Advanced Manufacturing Production Systems) system to store all their production-related data.
Roles and Responsibilities:
•  Interacting with business partners and the client for status updates on tasks and for requirements gathering on new enhancements.
•  Gathering and analyzing requirements and proposing a better solution for implementation.
•  Preparing high-level design documents on the basis of requirements.
•  Developing ETL interfaces, Informatica mappings and workflows, UNIX scripts, and database solutions; testing and maintenance; improving data quality; supporting very high volumes of trades, accounts, and groups of accounts.
•  Developing stored procedures and Unix shell scripts to process data/files.
•  Scheduling jobs using the AutoSys scheduler.
•  Managing code versioning through Perforce.
•  Managing the deployment of code in the ITE and UAT environments.
•  Value-add to the client: automation scripts/macros to reduce manual work and save effort.
•  Ensuring quality standards and improving the overall quality of deliverables.
•  Providing technical assistance to team members.
•  Reviewing mappings, sessions, and workflows for ETL standards.
•  Identifying bugs in existing mappings by analyzing the data flow.
•  Writing unit test cases and performing unit testing.

7. Project Name: ADM (Arihant Data Mart) (8 months)
Client: ARIHANT ELECTRICALS Pvt. Ltd. (Nov 2010 - Jun 2012)

Company profile: Arihant Group offers active and passive components, electromechanical products, protection components, industrial automation products and solutions, and other specialized electrical and electronic products.
Roles and Responsibilities:
•  Interacting with business partners and the client for status updates on tasks and for requirements gathering on new enhancements.
•  Gathering and analyzing requirements and proposing a better solution for implementation.
•  Preparing high-level design documents on the basis of requirements.
•  Developing ETL interfaces, Informatica mappings and workflows, UNIX scripts, and database solutions; testing and maintenance; improving data quality; supporting very high volumes of trades, accounts, and groups of accounts.
•  Developing stored procedures and Unix shell scripts to process data/files.

CERTIFICATIONS/ACHIEVEMENTS
Big Data University Certifications:
•  Certificate on “Hadoop Fundamentals I – Version 3”
•  Certificate on “Accessing Hadoop Data Using Hive V2”
•  Certificate on “Developing Distributed Applications Using ZooKeeper”
•  Certificate on “Moving Data into Hadoop”
•  Received the INSTA Award in FY 2014-15 Q4.
•  Received the Dream Team Award for performance in the project team in FY 2015-16 Q3.
•  Received the INSTA Award in FY 2015-16 Q1.
•  Actively participated in coordinating various team-building activities within the team as well as at the account level.

PERSONAL DETAILS

Date of Birth       July 26, 1985
Languages Known     Hindi, English
Father's Name       Mr. S. K. Sharma
Current Address     Dayanand Nagar, Shamli, Distt. Muzaffarnagar
Permanent Address   Dayanand Nagar, Shamli, Distt. Muzaffarnagar
Passport No.        M9467334
PAN No.             CCAPS8565C

DECLARATION

I hereby declare that the information furnished above is true to the best of my knowledge and belief.

PLACE:
DATE:
                                                            Vinit Sharma
