About the exam

Dear Participant,

Greetings!
You have completed the "Final Exam".
At this juncture, it is important for you to understand your strengths and focus on them to achieve the best results.
We present here a snapshot of your performance in the "Final Exam": the marks you scored in each section, your question-wise response pattern, and a difficulty-wise analysis of your performance.

This Report consists of the following sections that can be accessed using the left navigation
panel:

Overall Performance: This part of the report shows a summary of the marks you scored across all sections of the exam and compares your performance across sections.

Section-wise Performance: You can click on a section name in the left navigation panel to check your performance in that section. Section-wise performance includes the details of your response at each question level and a difficulty-wise analysis of your performance for that section.

NOTE : For Short Answer, Subjective, Typing and Programming type questions, participants will not be able to view the Bar Chart Report in the Performance Analysis.

Subject   Questions Attempted   Correct   Score
Final     40/99                 31        31

[Pie chart: Marks Obtained Subject Wise; Final: 100%]


NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all subjects contain 0 marks.

Final
The Final section comprises a total of 99 questions with the following difficulty-level distribution:

Difficulty Level   No. of questions
Easy               0
Moderate           99
Hard               0

Question-wise details

Please click on a question to view its detailed analysis.


Question Details

Q1.Key/Value is considered the Hadoop data format.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q2.What kind of servers are used for creating a Hadoop cluster?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : Server grade machines.
Option 2 : Commodity hardware.
Option 3 : Only supercomputers
Option 4 : None of the above.

Q3.Hadoop was developed by:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : Doug Cutting
Option 2 : Lars George
Option 3 : Tom White
Option 4 : Eric Sammer

Q4.One of the features of Hadoop is that you can achieve parallelism.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : False
Option 2 : True

Q5.Hadoop can only work with structured data.

Difficulty Level : Moderate


Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : False
Option 2 : True

Q6.Hadoop cluster can scale out:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : By upgrading existing servers
Option 2 : By increasing the area of the cluster.
Option 3 : By downgrading existing servers
Option 4 : By adding more hardware

Q7.Hadoop can only solve use cases involving data from social media.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q8.Hadoop can be utilized for demographic analysis.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q9.Hadoop is inspired by which file system?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : AFS
Option 2 : GFS
Option 3 : MPP
Option 4 : None of the above.

Q10.For Apache Hadoop, one needs licensing before leveraging it.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q11.HDFS runs in the same namespace as that of the local filesystem.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : False
Option 2 : True

Q12.HDFS follows a master-slave architecture.


Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : False
Option 2 : True

Q13.Namenode only responds to:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : FTP calls
Option 2 : SFTP calls.
Option 3 : RPC calls
Option 4 : MPP calls

Q14.Perfect balancing can be achieved in a Hadoop cluster.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : False
Option 2 : True

Q15.What does the Namenode periodically expect from Datanodes?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : EditLogs
Option 2 : Block report and Status
Option 3 : FSImages
Option 4 : None of the above

Q16.After a client requests the JobTracker to run an application, whom does the JobTracker contact?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : DataNodes
Option 2 : Tasktracker
Option 3 : Namenode
Option 4 : None of the above.

Q17.Interaction with HDFS is done through which script?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Fsadmin
Option 2 : Hive
Option 3 : Mapreduce
Option 4 : Hadoop

Q18.What is the usage of the put command in HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : It deletes files from one file system to another.
Option 2 : It copies files from one file system to another
Option 3 : It puts configuration parameters in configuration files
Option 4 : None of the above.
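
For context, the put command copies a file from the local file system into HDFS. A minimal, illustrative Java sketch of the equivalent FileSystem call (the paths are assumptions, not part of the exam material):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsPutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // reads core-site.xml from the classpath
            FileSystem fs = FileSystem.get(conf);       // handle to the configured file system
            // Programmatic equivalent of: hadoop fs -put /tmp/data.txt /user/demo/data.txt
            fs.copyFromLocalFile(new Path("/tmp/data.txt"), new Path("/user/demo/data.txt"));
            fs.close();
        }
    }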

Q19.Each directory or file has three kinds of permissions:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : read,write,execute
Option 2 : read,write,run
Option 3 : read,write,append
Option 4 : read,write,update
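
The three permission kinds mirror the POSIX read/write/execute bits. As an illustrative sketch (the path and permission bits are assumptions), they can be set through the Java API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class HdfsChmodExample {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // rwxr-x--- : owner read/write/execute, group read/execute, others none
            FsPermission perm = new FsPermission(FsAction.ALL, FsAction.READ_EXECUTE, FsAction.NONE);
            fs.setPermission(new Path("/user/demo/data.txt"), perm);  // like: hadoop fs -chmod 750
            fs.close();
        }
    }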

Q20.Mapper output is written to HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : False
Option 2 : True

Q21.A Reducer writes its output in what format?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Key/Value
Option 2 : Text files
Option 3 : Sequence files
Option 4 : None of the above

Q22.Which of the following is a prerequisite for Hadoop cluster installation?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Gather Hardware requirement
Option 2 : Gather network requirement
Option 3 : Both
Option 4 : None of the above

Q23.Nagios and Ganglia are tools provided by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : None of the above

Q24.Which of the following are Cloudera management services?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Activity Monitor
Option 2 : Host Monitor
Option 3 : Both
Option 4 : None of the above

Q25.Which of the following is used to collect information about activities running in a Hadoop cluster?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Report Manager
Option 2 : Cloudera Navigator
Option 3 : Activity Monitor
Option 4 : All of the above

Q26.Which of the following aggregates events and makes them available for
alerting and searching?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Event Server
Option 2 : Host Monitor
Option 3 : Activity Monitor
Option 4 : None of the above

Q27.Which tab in Cloudera Manager is used to add a service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Hosts
Option 2 : Activities
Option 3 : Services
Option 4 : None of the above

Q28.Which of the following provides HTTP access to HDFS?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : HttpsFS
Option 2 : Name Node
Option 3 : Data Node
Option 4 : All of the above

Q29.Which of the following is used to balance the load in case of the addition of a new node and in case of a failure?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : None of the above

Q30.Which of the following is used to designate a host for a particular service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : All of the above

Q31.Which of the following are the configuration files?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : core-site.xml
Option 2 : hdfs-site.xml
Option 3 : Both
Option 4 : None of the above
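
For context, core-site.xml and hdfs-site.xml are plain XML property files that Hadoop clients load automatically. A minimal sketch of reading a property from Java (the printed property is only an example):

    import org.apache.hadoop.conf.Configuration;

    public class ConfigExample {
        public static void main(String[] args) {
            // new Configuration() loads core-default.xml and core-site.xml from the classpath
            Configuration conf = new Configuration();
            // fs.defaultFS (the NameNode URI) is typically set in core-site.xml
            System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS", "file:///"));
        }
    }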

Q32.Which are the commercial leading Hadoop distributors in the market?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera , Intel, MapR
Option 2 : MapR, Cloudera, Teradata
Option 3 : Hortonworks, IBM, Cloudera
Option 4 : MapR, Hortonworks, Cloudera

Q33.What are the core components enclosed in the Apache Hadoop bundle?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : HDFS, Map-reduce, YARN, Hadoop Commons
Option 2 : HDFS, NFS, Combiners, Utility Package
Option 3 : HDFS, Map-reduce, Hadoop core
Option 4 : MapR-FS, Map-reduce, YARN, Hadoop Commons

Q34.Apart from its basic components, Apache Hadoop also provides:


Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Apache Hive
Option 2 : Apache Pig
Option 3 : Apache Zookeeper
Option 4 : All the above

Q35.Rolling upgrades are not possible in which of the following?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Possible in all of the above

Q36.In which of the following is HBase latency low with respect to the others?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : IBM BigInsights

Q37.Metadata replication is possible in:

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Teradata

Q38.Disaster recovery management is not handled by:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : Hortonworks
Option 2 : MapR
Option 3 : Cloudera
Option 4 : Amazon Web Services EMR

Q39.The mirroring concept is possible in Cloudera.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q40.Does MapR support only streaming data ingestion?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q41.HCatalog is an open-source metadata framework developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : MapR
Option 3 : Hortonworks
Option 4 : Amazon EMR

Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the media and telecommunications industry.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q43.What is the correct sequence of Big Data Analytics stages?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Big Data Production > Big Data Consumption > Big Data Management
Option 2 : Big Data Management > Big Data Production > Big Data Consumption
Option 3 : Big Data Production > Big Data Management > Big Data Consumption
Option 4 : None of these

Q44.Big Data Consumption involves:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Mining
Option 2 : Analytic
Option 3 : Search and Enrichment
Option 4 : All of the above

Q45.Big Data Integration and Data Mining are the phases of Big Data
Management.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a
big data environment.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 4
Option 1 : XML/JSON type of data
Option 2 : RDBMS
Option 3 : Semi-structured data
Option 4 : None of the above

Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Map-reduce
Option 2 : Hive
Option 3 : Impala
Option 4 : None of the above
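
To make the framework concrete, here is a minimal, illustrative word-count pair written against the org.apache.hadoop.mapreduce API (the class names are assumptions for this sketch):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Mapper: emits (word, 1) for every word in an input line.
    class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String tok : value.toString().split("\\s+")) {
                if (!tok.isEmpty()) { word.set(tok); ctx.write(word, ONE); }
            }
        }
    }

    // Reducer: sums the counts collected from all mappers for each word.
    class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }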

Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q50.Which of the following are the features of Map-reduce?

Difficulty Level : Moderate


Status : Correct

Marks Obtained : 1

Response : 4
Option 1 : Automatic parallelization and distribution
Option 2 : Fault-Tolerance
Option 3 : Platform independent
Option 4 : All of the above

Q51.Where does the intermediate output of the mapper get written?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 4
Option 1 : Local disk of node where it is executed.
Option 2 : HDFS of node where it is executed.
Option 3 : On a remote server outside the cluster.
Option 4 : Mapper output gets written to the local disk of Name node machine.

Q52.A Reducer is required in a map-reduce job because:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : It combines all the intermediate data collected from mappers.
Option 2 : It reduces the amount of data by half of what is supplied to it.
Option 3 : Both a and b
Option 4 : None of the above

Q53.The output of every map is passed to which component?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1
Response : 2
Option 1 : Partitioner
Option 2 : Combiner
Option 3 : Mapper
Option 4 : None of the above

Q54.Data Locality concept is used for:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Localizing data
Option 2 : Avoiding network traffic in hadoop system
Option 3 : Both A and B
Option 4 : None of the above

Q55.The number of files in the output of a map-reduce job depends on:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : Number of reducers used for the process
Option 2 : Size of the data
Option 3 : Both A and B
Option 4 : None of the above

Q56.Input format of the map-reduce job is specified in which class?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Combiner class
Option 2 : Reducer class
Option 3 : Mapper class
Option 4 : Any of the above

Q57.The intermediate keys, and their value lists, are passed to the Reducer in
sorted key order.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q58.In which stage of a map-reduce job is data transferred between the mapper and the reducer?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Transfer
Option 2 : Combiner
Option 3 : Distributed Cache
Option 4 : Shuffle and Sort

Q59.A maximum of three reducers can run at any time in a MapReduce job.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q60.The functionality of the JobTracker is to:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : Coordinate the job run
Option 2 : Sorting the output
Option 3 : Both A and B
Option 4 : None of the above

Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____ on it.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : jobSubmitInternal()
Option 2 : internalJobSubmit()
Option 3 : submitJobInternal()
Option 4 : None of these

Q62.Which method polls the job's progress and after how many seconds?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : waitForCompletion() and after each second
Option 2 : waitForCompletion() after every 15 seconds
Option 3 : Not possible to poll
Option 4 : None of the above
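
For orientation across Q56, Q61 and Q62: in the mapreduce API the input format is set on the Job object, submit() goes through an internal JobSubmitter, and waitForCompletion(true) submits the job and then polls and prints its progress. A hedged driver sketch reusing the illustrative classes from the earlier example (the paths come from the command line):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(TokenMapper.class);           // sketch classes from the earlier example
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            job.setInputFormatClass(TextInputFormat.class);  // TextInputFormat is also the default
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // waitForCompletion(true) submits (via the internal JobSubmitter) and polls progress.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }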

Q63.Job Submitter tells the task tracker that the job is ready for execution.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on hadoop
cluster.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q65.Map and Reduce tasks are created in job initialization phase.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q66.Heartbeats received after how many seconds help the JobTracker decide on the health of a TaskTracker?

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : After every 3 seconds
Option 2 : After every 1 second
Option 3 : After every 60 seconds
Option 4 : None of the above

Q67.The TaskTracker has a fixed number of slots assigned for map and reduce tasks.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q68.To improve the performance of the map-reduce task, the jar that contains the map-reduce code is pushed to each slave node over HTTP.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q69.Map-reduce can take which type of format as input?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Text
Option 2 : CSV
Option 3 : Arbitrary
Option 4 : None of these

Q70.Input files for map-reduce can be located in HDFS or the local file system.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : True
Option 2 : False

Q71.Is there any default InputFormat for input files in the map-reduce process?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : KeyValueInputFormat
Option 2 : TextInputFormat.
Option 3 : A and B
Option 4 : None of these

Q72.An InputFormat is a class that provides the following functionality:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Selects the files or other objects that should be used for input
Option 2 : Defines the InputSplits that break a file into tasks
Option 3 : Provides a factory for RecordReader objects that read the file
Option 4 : All of the above

Q73.An InputSplit describes a unit of work that comprises a ____ map task in a
MapReduce program.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : One
Option 2 : Two
Option 3 : Three
Option 4 : None of these

Q74.The FileInputFormat and its descendants break a file up into ____ MB chunks.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : 128
Option 2 : 64
Option 3 : 32
Option 4 : 256

Q75.What allows several map tasks to operate on a single file in parallel?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Processing of a file in chunks
Option 2 : Configuration file properties
Option 3 : Both A and B
Option 4 : None of the above

Q76.The Record Reader is invoked ________ on the input until the entire
InputSplit has been consumed.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Once
Option 2 : Twice
Option 3 : Repeatedly
Option 4 : None of these

Q77.Which of the following is KeyValueTextInputFormat?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : Key is separated from the value by Tab
Option 2 : Data is specified in binary sequence
Option 3 : Both A and B
Option 4 : None of the above
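
As a hedged illustration of this format: each line is split into (key, value) at the first separator, a tab by default, which is configurable through a property:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

    public class KeyValueInputExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Tab is the default separator; shown here only to make the setting explicit.
            conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");
            Job job = Job.getInstance(conf, "kv input example");
            job.setInputFormatClass(KeyValueTextInputFormat.class);  // line -> (key, value) pairs
        }
    }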

Q78.In the map-reduce programming model, mappers can communicate with each other.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q79.A user can define their own Partitioner class.


Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False
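
This is true in practice: a user-defined Partitioner decides which reducer receives each key. A minimal, illustrative sketch (the routing rule is an assumption for the example):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Routes keys by their first character instead of the default hash partitioning.
    public class FirstCharPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            if (key.getLength() == 0) return 0;
            return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
        }
    }
    // Registered in the driver with: job.setPartitionerClass(FirstCharPartitioner.class);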

Q80.The OutputFormat class is a factory for RecordWriter objects; these are used to write the individual records to the files as directed by the OutputFormat.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q81.Which of the following are part of the Hadoop ecosystem?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Talend, MapR, NFS
Option 2 : MySQL, Shell
Option 3 : Pig, Hive, HBase
Option 4 : None of the above

Q82.The default metastore location for Hive is:

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : Mysql
Option 2 : Derby
Option 3 : PostgreSQL
Option 4 : None of the above

Q83.Which of the following classes is extended to write a User Defined Function in Hive?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : HiveMapper
Option 2 : Eval
Option 3 : UDF
Option 4 : None of the above
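
For illustration, a minimal Hive user-defined function extending the classic org.apache.hadoop.hive.ql.exec.UDF class (the function name and logic are assumptions for this sketch):

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Used from Hive after ADD JAR and CREATE TEMPORARY FUNCTION lower_trim AS '...';
    public class LowerTrim extends UDF {
        public Text evaluate(Text input) {
            if (input == null) return null;
            return new Text(input.toString().trim().toLowerCase());
        }
    }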

Q84.Which component of the Hadoop ecosystem supports updates?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Zookeeper
Option 2 : Hive
Option 3 : Pig
Option 4 : Hbase
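
HBase is the ecosystem component that supports in-place updates: writing to an existing row key overwrites the cell. An illustrative sketch with the HBase client API (the table, family and qualifier names are assumptions):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseUpdateExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("users"))) {
                Put put = new Put(Bytes.toBytes("user-42"));                 // row key
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("city"), // family, qualifier
                              Bytes.toBytes("Pune"));                       // new value
                table.put(put);  // re-running with a different value updates the same cell
            }
        }
    }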

Q85.Which Hadoop component should be used if a join of datasets is required?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Hbase
Option 2 : Hive
Option 3 : Zookeeper
Option 4 : None of the above

Q86.Which Hadoop component can be used for ETL?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Pig
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : None of the above

Q87.Which Hadoop component is best suited for pulling data from the web?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Hive
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : Flume

Q88.Which Hadoop component can be used to transfer data from a relational DB to HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Zookeeper
Option 2 : Pig
Option 3 : Sqoop
Option 4 : None of the above

Q89.In an application, more than one Hadoop component cannot be used on top of HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q90.HBase supports joins.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q91.Pig can work only with data present in HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q92.Which tool out of the following can be used for an OLTP application?
Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Pentaho
Option 2 : Hive
Option 3 : Hbase
Option 4 : None of the above

Q93.Which tool is best suited for real time writes?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Pig
Option 2 : Hive
Option 3 : Hbase
Option 4 : Cassandra

Q94.Which of the following Hadoop components is called the ETL of Hadoop?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Pig
Option 2 : Hbase
Option 3 : Talend
Option 4 : None of the above

Q95.Hadoop can completely replace traditional DBs.

Difficulty Level : Moderate

Status : Correct
Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q96.Zookeeper can also be used for data transfer.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : False
Option 2 : True

Q97.Map-reduce cannot be tested on data/files present in local file system.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q98.Hive was developed by:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 4
Option 1 : Tom White
Option 2 : Cloudera
Option 3 : Doug Cutting
Option 4 : Facebook

Q99.MRv1 programs cannot be run on top of clusters configured for MRv2.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False


About the exam

Dear Participant,

Greetings!
You have completed the "Final Exam".
At this juncture, it is important for you to understand your strengths and focus on them to achieve the best results.
We present here a snapshot of your performance in the "Final Exam": the marks you scored in each section, your question-wise response pattern, and a difficulty-wise analysis of your performance.

This Report consists of the following sections that can be accessed using the left navigation
panel:

Overall Performance: This part of the report shows a summary of the marks you scored across all sections of the exam and compares your performance across sections.

Section-wise Performance: You can click on a section name in the left navigation panel to check your performance in that section. Section-wise performance includes the details of your response at each question level and a difficulty-wise analysis of your performance for that section.

NOTE : For Short Answer, Subjective, Typing and Programming type questions, participants will not be able to view the Bar Chart Report in the Performance Analysis.

Subject   Questions Attempted   Correct   Score
Final     40/99                 17        17

[Pie chart: Marks Obtained Subject Wise; Final: 100%]


NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all subjects contain 0 marks.

Final
The Final section comprises a total of 99 questions with the following difficulty-level distribution:

Difficulty Level   No. of questions
Easy               0
Moderate           99
Hard               0

Question-wise details

Please click on a question to view its detailed analysis.


Question Details

Q1.Key/Value is considered the Hadoop data format.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q2.What kind of servers are used for creating a Hadoop cluster?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Server grade machines.
Option 2 : Commodity hardware.
Option 3 : Only supercomputers
Option 4 : None of the above.

Q3.Hadoop was developed by:


Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : Doug Cutting
Option 2 : Lars George
Option 3 : Tom White
Option 4 : Eric Sammer

Q4.One of the features of Hadoop is that you can achieve parallelism.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : False
Option 2 : True

Q5.Hadoop can only work with structured data.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : False
Option 2 : True

Q6.Hadoop cluster can scale out:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : By upgrading existing servers
Option 2 : By increasing the area of the cluster.
Option 3 : By downgrading existing servers
Option 4 : By adding more hardware

Q7.Hadoop can only solve use cases involving data from social media.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q8.Hadoop can be utilized for demographic analysis.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q9.Hadoop is inspired by which file system?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : AFS
Option 2 : GFS
Option 3 : MPP
Option 4 : None of the above.

Q10.For Apache Hadoop, one needs licensing before leveraging it.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q11.HDFS runs in the same namespace as that of the local filesystem.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : False
Option 2 : True

Q12.HDFS follows a master-slave architecture.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : False
Option 2 : True

Q13.Namenode only responds to:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 4
Option 1 : FTP calls
Option 2 : SFTP calls.
Option 3 : RPC calls
Option 4 : MPP calls

Q14.Perfect balancing can be achieved in a Hadoop cluster.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : False
Option 2 : True

Q15.What does the Namenode periodically expect from Datanodes?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : EditLogs
Option 2 : Block report and Status
Option 3 : FSImages
Option 4 : None of the above

Q16.After a client requests the JobTracker to run an application, whom does the JobTracker contact?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : DataNodes
Option 2 : Tasktracker
Option 3 : Namenode
Option 4 : None of the above.

Q17.Interaction with HDFS is done through which script?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Fsadmin
Option 2 : Hive
Option 3 : Mapreduce
Option 4 : Hadoop

Q18.What is the usage of the put command in HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : It deletes files from one file system to another.
Option 2 : It copies files from one file system to another
Option 3 : It puts configuration parameters in configuration files
Option 4 : None of the above.

Q19.Each directory or file has three kinds of permissions:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : read,write,execute
Option 2 : read,write,run
Option 3 : read,write,append
Option 4 : read,write,update

Q20.Mapper output is written to HDFS.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : False
Option 2 : True

Q21.A Reducer writes its output in what format?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Key/Value
Option 2 : Text files
Option 3 : Sequence files
Option 4 : None of the above

Q22.Which of the following is a prerequisite for Hadoop cluster installation?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 4
Option 1 : Gather Hardware requirement
Option 2 : Gather network requirement
Option 3 : Both
Option 4 : None of the above

Q23.Nagios and Ganglia are tools provided by:

Difficulty Level : Moderate


Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : None of the above

Q24.Which of the following are Cloudera management services?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Activity Monitor
Option 2 : Host Monitor
Option 3 : Both
Option 4 : None of the above

Q25.Which of the following is used to collect information about activities running in a Hadoop cluster?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : Report Manager
Option 2 : Cloudera Navigator
Option 3 : Activity Monitor
Option 4 : All of the above

Q26.Which of the following aggregates events and makes them available for
alerting and searching?

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : Event Server
Option 2 : Host Monitor
Option 3 : Activity Monitor
Option 4 : None of the above

Q27.Which tab in Cloudera Manager is used to add a service?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Hosts
Option 2 : Activities
Option 3 : Services
Option 4 : None of the above

Q28.Which of the following provides HTTP access to HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : HttpsFS
Option 2 : Name Node
Option 3 : Data Node
Option 4 : All of the above

Q29.Which of the following is used to balance the load in case of the addition of a new node and in case of a failure?

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : None of the above

Q30.Which of the following is used to designate a host for a particular service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : All of the above

Q31.Which of the following are the configuration files?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : core-site.xml
Option 2 : hdfs-site.xml
Option 3 : Both
Option 4 : None of the above

Q32.Which are the commercial leading Hadoop distributors in the market?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Cloudera , Intel, MapR
Option 2 : MapR, Cloudera, Teradata
Option 3 : Hortonworks, IBM, Cloudera
Option 4 : MapR, Hortonworks, Cloudera

Q33.What are the core components enclosed in the Apache Hadoop bundle?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : HDFS, Map-reduce, YARN, Hadoop Commons
Option 2 : HDFS, NFS, Combiners, Utility Package
Option 3 : HDFS, Map-reduce, Hadoop core
Option 4 : MapR-FS, Map-reduce, YARN, Hadoop Commons

Q34.Apart from its basic components, Apache Hadoop also provides:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 4
Option 1 : Apache Hive
Option 2 : Apache Pig
Option 3 : Apache Zookeeper
Option 4 : All the above

Q35.Rolling upgrades are not possible in which of the following?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Possible in all of the above

Q36.In which of the following is HBase latency low with respect to the others?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : IBM BigInsights

Q37.Metadata replication is possible in:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Teradata

Q38.Disaster recovery management is not handled by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Hortonworks
Option 2 : MapR
Option 3 : Cloudera
Option 4 : Amazon Web Services EMR

Q39.The mirroring concept is possible in Cloudera.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q40.Does MapR support only streaming data ingestion?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q41.HCatalog is an open-source metadata framework developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Cloudera
Option 2 : MapR
Option 3 : Hortonworks
Option 4 : Amazon EMR

Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the media and telecommunications industry.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q43.What is the correct sequence of Big Data Analytics stages?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Big Data Production > Big Data Consumption > Big Data Management
Option 2 : Big Data Management > Big Data Production > Big Data Consumption
Option 3 : Big Data Production > Big Data Management > Big Data Consumption
Option 4 : None of these

Q44.Big Data Consumption involves:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Mining
Option 2 : Analytic
Option 3 : Search and Enrichment
Option 4 : All of the above

Q45.Big Data Integration and Data Mining are the phases of Big Data
Management.

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a
big data environment.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : XML/JSON type of data
Option 2 : RDBMS
Option 3 : Semi-structured data
Option 4 : None of the above

Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Map-reduce
Option 2 : Hive
Option 3 : Impala
Option 4 : None of the above

Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q50.Which of the following are the features of Map-reduce?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Automatic parallelization and distribution
Option 2 : Fault-Tolerance
Option 3 : Platform independent
Option 4 : All of the above

Q51.Where does the intermediate output of the mapper get written?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Local disk of node where it is executed.
Option 2 : HDFS of node where it is executed.
Option 3 : On a remote server outside the cluster.
Option 4 : Mapper output gets written to the local disk of Name node machine.

Q52.A Reducer is required in a map-reduce job because:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : It combines all the intermediate data collected from mappers.
Option 2 : It reduces the amount of data by half of what is supplied to it.
Option 3 : Both a and b
Option 4 : None of the above

Q53.The output of every map is passed to which component?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Partitioner
Option 2 : Combiner
Option 3 : Mapper
Option 4 : None of the above

Q54.Data Locality concept is used for:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Localizing data
Option 2 : Avoiding network traffic in hadoop system
Option 3 : Both A and B
Option 4 : None of the above

Q55.The number of files in the output of a map-reduce job depends on:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Number of reducers used for the process
Option 2 : Size of the data
Option 3 : Both A and B
Option 4 : None of the above

Q56.Input format of the map-reduce job is specified in which class?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Combiner class
Option 2 : Reducer class
Option 3 : Mapper class
Option 4 : Any of the above

Q57.The intermediate keys, and their value lists, are passed to the Reducer in
sorted key order.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q58.In which stage of a map-reduce job is data transferred between the mapper and the reducer?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Transfer
Option 2 : Combiner
Option 3 : Distributed Cache
Option 4 : Shuffle and Sort

Q59.A maximum of three reducers can run at any time in a MapReduce job.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q60.The functionality of the JobTracker is to:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Coordinate the job run
Option 2 : Sorting the output
Option 3 : Both A and B
Option 4 : None of the above

Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____ on it.

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : jobSubmitInternal()
Option 2 : internalJobSubmit()
Option 3 : submitJobInternal()
Option 4 : None of these

Q62.Which method polls the job's progress and after how many seconds?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : waitForCompletion() and after each second
Option 2 : waitForCompletion() after every 15 seconds
Option 3 : Not possible to poll
Option 4 : None of the above

Q63.Job Submitter tells the task tracker that the job is ready for execution.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on hadoop
cluster.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : True
Option 2 : False

Q65.Map and Reduce tasks are created in job initialization phase.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : True
Option 2 : False

Q66.Heartbeats received after how many seconds help the JobTracker decide on the health of a TaskTracker?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : After every 3 seconds
Option 2 : After every 1 second
Option 3 : After every 60 seconds
Option 4 : None of the above

Q67.The TaskTracker has a fixed number of slots assigned for map and reduce tasks.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q68.To improve the performance of the map-reduce task, the jar that contains the map-reduce code is pushed to each slave node over HTTP.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q69.Map-reduce can take which type of format as input?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Text
Option 2 : CSV
Option 3 : Arbitrary
Option 4 : None of these

Q70.Input files for map-reduce can be located in HDFS or the local file system.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q71.Is there any default InputFormat for input files in the map-reduce process?

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : KeyValueInputFormat
Option 2 : TextInputFormat.
Option 3 : A and B
Option 4 : None of these

Q72.An InputFormat is a class that provides the following functionality:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Selects the files or other objects that should be used for input
Option 2 : Defines the InputSplits that break a file into tasks
Option 3 : Provides a factory for RecordReader objects that read the file
Option 4 : All of the above

Q73.An InputSplit describes a unit of work that comprises a ____ map task in a
MapReduce program.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : One
Option 2 : Two
Option 3 : Three
Option 4 : None of these

Q74.The FileInputFormat and its descendants break a file up into ____ MB chunks.

Difficulty Level : Moderate

Status : Unanswered
Marks Obtained : 0

Response :
Option 1 : 128
Option 2 : 64
Option 3 : 32
Option 4 : 256

Q75.What allows several map tasks to operate on a single file in parallel?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Processing of a file in chunks
Option 2 : Configuration file properties
Option 3 : Both A and B
Option 4 : None of the above

Q76.The Record Reader is invoked ________ on the input until the entire
InputSplit has been consumed.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Once
Option 2 : Twice
Option 3 : Repeatedly
Option 4 : None of these

Q77.Which of the following is KeyValueTextInputFormat?

Difficulty Level : Moderate

Status : Incorrect
Marks Obtained : 0

Response : 3
Option 1 : Key is separated from the value by Tab
Option 2 : Data is specified in binary sequence
Option 3 : Both A and B
Option 4 : None of the above

Q78.In the map-reduce programming model, mappers can communicate with each other.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q79.A user can define their own Partitioner class.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : True
Option 2 : False

Q80.The OutputFormat class is a factory for RecordWriter objects; these are used to write the individual records to the files as directed by the OutputFormat.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q81.Which of the following are part of the Hadoop ecosystem?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3
Option 1 : Talend, MapR, NFS
Option 2 : MySQL, Shell
Option 3 : Pig, Hive, HBase
Option 4 : None of the above

Q82.The default metastore location for Hive is:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Mysql
Option 2 : Derby
Option 3 : PostgreSQL
Option 4 : None of the above

Q83.Which of the following classes is extended to write a User Defined Function in Hive?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : HiveMapper
Option 2 : Eval
Option 3 : UDF
Option 4 : None of the above

Q84.Which component of the Hadoop ecosystem supports updates?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Zookeeper
Option 2 : Hive
Option 3 : Pig
Option 4 : Hbase

Q85.Which Hadoop component should be used if a join of datasets is required?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Hbase
Option 2 : Hive
Option 3 : Zookeeper
Option 4 : None of the above

Q86.Which Hadoop component can be used for ETL?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Pig
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : None of the above

Q87.Which Hadoop component is best suited for pulling data from the web?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 4
Option 1 : Hive
Option 2 : Zookeeper
Option 3 : Hbase
Option 4 : Flume

Q88.Which Hadoop component can be used to transfer data from a relational DB to HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Zookeeper
Option 2 : Pig
Option 3 : Sqoop
Option 4 : None of the above

Q89.In an application, more than one Hadoop component cannot be used on top of HDFS.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2
Option 1 : True
Option 2 : False

Q90.HBase supports joins.


Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1
Option 1 : True
Option 2 : False

Q91.Pig can work only with data present in HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q92.Which tool out of the following can be used for an OLTP application?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2
Option 1 : Pentaho
Option 2 : Hive
Option 3 : Hbase
Option 4 : None of the above

Q93.Which tool is best suited for real time writes?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1
Option 1 : Pig
Option 2 : Hive
Option 3 : Hbase
Option 4 : Cassandra

Q94.Which of the following Hadoop components is called the ETL of Hadoop?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3
Option 1 : Pig
Option 2 : Hbase
Option 3 : Talend
Option 4 : None of the above

Q95.Hadoop can completely replace traditional DBs.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q96.Zookeeper can also be used for data transfer.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : False
Option 2 : True

Q97.Map-reduce cannot be tested on data/files present in local file system.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False

Q98.Hive was developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : Tom White
Option 2 : Cloudera
Option 3 : Doug Cutting
Option 4 : Facebook

Q99.MRv1 programs cannot be run on top of clusters configured for MRv2.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :
Option 1 : True
Option 2 : False
