Seminar Report: Hadoop


SEMINAR REPORT on

HADOOP

TABLE OF CONTENTS
INTRODUCTION
    Need for large data processing
    Challenges in distributed computing --- meeting Hadoop
COMPARISON WITH OTHER SYSTEMS
    Comparison with RDBMS
ORIGIN OF HADOOP
SUBPROJECTS
    Core
    Avro
    Mapreduce
    HDFS
    Pig
THE HADOOP APPROACH
    Data distribution
    MapReduce: Isolated Processes
INTRODUCTION TO MAPREDUCE
    Programming model
    Types
HADOOP MAPREDUCE
    Combiner Functions
HADOOP STREAMING
HADOOP PIPES
HADOOP DISTRIBUTED FILESYSTEM (HDFS)
    ASSUMPTIONS AND GOALS
        Hardware Failure
        Streaming Data Access
        Large Data Sets
        Simple Coherency Model
        "Moving Computation is Cheaper than Moving Data"
        Portability Across Heterogeneous Hardware and Software Platforms
    DESIGN
    HDFS Concepts
        Blocks
        Namenodes and Datanodes
        The File System Namespace
        Data Replication
        Replica Placement
        Replica Selection
        Safemode
        The Persistence of File System Metadata
    The Communication Protocols
    Robustness
        Data Disk Failure, Heartbeats and Re-Replication
    Cluster Rebalancing
    Data Integrity
    Metadata Disk Failure
    Snapshots
    Data Organization
        Data Blocks
        Staging
        Replication Pipelining
    Accessibility
    Space Reclamation
        File Deletes and Undeletes
        Decrease Replication Factor
    Hadoop Filesystems
    Hadoop Archives
        Using Hadoop Archives
ANATOMY OF A MAPREDUCE JOB RUN
Hadoop is now a part of:-
INTRODUCTION

3|"age

Computing in its purest form has changed hands multiple times. First, from near the beginning, mainframes were predicted to be the future of computing. Indeed mainframes and large scale machines were built and used, and in some circumstances are used similarly today. The trend, however, turned from bigger and more expensive to smaller and more affordable commodity PCs and servers. Today most of our data is stored on local networks, on servers that may be clustered and share storage. This approach has had time to develop into a stable architecture, and provides decent redundancy when deployed correctly. A newer emerging technology, cloud computing, has arrived demanding attention and is quickly changing the direction of the technology landscape. Whether it is Google's unique and scalable Google File System or Amazon's robust Amazon S3 cloud storage model, it is clear that cloud computing has arrived with much to be learned from it.

Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.

Need for large data processing

We live in the data age. It is not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the "digital universe" at 0.18 zettabytes in 2006, and forecast a tenfold growth by 2011 to 1.8 zettabytes. Some of the areas that need large-scale data processing include:

• The New York Stock Exchange generates about one terabyte of new trade data per day.

: "ace#oo( hosts appro'imate % 43 #i ion photos, ta(ing up one peta#%te of storage!

4|"age

• Ancestry.com, the genealogy site, stores around 2.5 petabytes of data.

• The Internet Archive stores around 2 petabytes of data, and is growing at a rate of 20 terabytes per month.

• The Large Hadron Collider near Geneva, Switzerland, will produce about 15 petabytes of data per year.

The problem is that while the storage capacities of hard drives have increased massively over the years, access speeds (the rate at which data can be read from drives) have not kept up. One typical drive from 1990 could store 1,370 MB of data and had a transfer speed of 4.4 MB/s, so all the data on a full drive could be read in around five minutes. Almost 20 years later, one-terabyte drives are the norm, but the transfer speed is around 100 MB/s, so it takes more than two and a half hours to read all the data off the disk. This is a long time to read all the data on a single drive, and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. Imagine if we had 100 drives, each holding one hundredth of the data. Working in parallel, we could read the data in under two minutes. This shows the significance of distributed computing.
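A quick back-of-the-envelope check of these figures, written as a small Python calculation (a sketch only; the drive capacities and transfer rates are the approximate values quoted above, not measurements):

    MB = 10**6                                # bytes in a megabyte (decimal)
    TB = 10**12                               # bytes in a terabyte

    # A typical 1990 drive: 1,370 MB read at 4.4 MB/s.
    print(1370 * MB / (4.4 * MB) / 60)        # about 5.2 minutes

    # A modern 1 TB drive read at 100 MB/s.
    print(1 * TB / (100 * MB) / 3600)         # about 2.8 hours

    # The same terabyte split across 100 drives read in parallel.
    print(1 * TB / 100 / (100 * MB) / 60)     # about 1.7 minutes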

Challenges in distributed computing --- meeting Hadoop

Various challenges are faced while developing a distributed application. The first problem to solve is hardware failure: as soon as we start using many pieces of hardware, the chance that one will fail is fairly high. A common way of avoiding data loss is through replication: redundant copies of the data are kept by the system so that in the event of failure, another copy is available. This is how RAID works, for instance, although Hadoop's filesystem, the Hadoop Distributed Filesystem (HDFS), takes a slightly different approach. The second problem is that most analysis tasks need to be able to combine the data in some way; data read from one disk may need to be combined with the data from any of the other 99 disks. Various distributed systems allow data to be combined from multiple sources, but doing this correctly is notoriously challenging. MapReduce provides a programming model that abstracts the problem away from disk reads and writes, transforming it into a computation over sets of keys and values. This, in a nutshell, is what Hadoop provides: a reliable shared storage and analysis system. The storage is provided by HDFS, and analysis by MapReduce. There are other parts to Hadoop, but these capabilities are its kernel.

|"age

Hadoop is the popular open source implementation of MapReduce, a powerful tool designed for deep analysis and transformation of very large data sets. Hadoop enables you to explore complex data, using custom analyses tailored to your information and questions. Hadoop is the system that allows unstructured data to be distributed across hundreds or thousands of machines forming shared-nothing clusters, and the execution of Map/Reduce routines to run on the data in that cluster. Hadoop has its own filesystem which replicates data to multiple nodes to ensure that if one node holding data goes down, there are at least two other nodes from which to retrieve that piece of information. This protects data availability from node failure, something which is critical when there are many nodes in a cluster (akin to RAID at a server level).

CO&()RI%ON *ITH OTH+R %,%T+&% Comparison with RD-&%

Unless you are dealing with very large volumes of unstructured data (hundreds of gigabytes, terabytes or petabytes) and have large numbers of machines available, you will likely find the performance of Hadoop running a Map/Reduce query much slower than a comparable SQL query on a relational database. Hadoop uses a brute-force access method whereas RDBMSs have optimization methods for accessing data, such as indexes and read-ahead.

)|"age

The benefits really only come into play when the positive of massive parallelism is achieved, or when the data is unstructured to the point where no RDBMS optimizations can be applied to help the performance of queries. But with all benchmarks, everything has to be taken into consideration. For example, if the data starts life in a text file in the file system (e.g. a log file), the cost associated with extracting that data from the text file, structuring it into a standard schema and loading it into the RDBMS has to be considered. And if you have to do that for 1,000 or 10,000 log files, that may take minutes, hours or days (with Hadoop you still have to copy the files to its file system). It may also be practically impossible to load such data into an RDBMS in some environments, as data could be generated in such a volume that a load process into an RDBMS cannot keep up. So while using Hadoop your query time may be slower (speed improves with more nodes in the cluster), your access time to the data may potentially be improved. Also, as there aren't any mainstream RDBMSs that scale to thousands of nodes, at some point the sheer mass of brute-force processing power will outperform the optimized, but restricted in scale, relational access method.

In our current RDBMS-dependent web stacks, scalability problems tend to hit hardest at the database level. For applications with just a handful of common use cases that access a lot of the same data, distributed in-memory caches such as memcached provide some relief. However, for interactive applications that hope to reliably scale and support vast amounts of IO, the traditional RDBMS setup isn't going to cut it. Unlike small applications that can fit their most active data into memory, applications that sit on top of massive stores of shared content require a distributed solution if they hope to survive the long-tail usage pattern commonly found on content-rich sites.

We can't use databases with lots of disks to do large-scale batch analysis. This is because seek time is improving more slowly than transfer rate. Seeking is the process of moving the disk's head to a particular place on the disk to read or write data. It characterizes the latency of a disk operation, whereas the transfer rate corresponds to a disk's bandwidth. If the data access pattern is dominated by seeks, it will take longer to read or write large portions of the dataset than streaming through it, which operates at the transfer rate. On the other hand, for updating a small proportion of records in a database, a traditional B-Tree (the data structure used in relational databases, which is limited by the rate it can perform seeks) works well. For updating the majority of a database, a B-Tree is less efficient than MapReduce, which uses Sort/Merge to rebuild the database.

Another difference between MapReduce and an RDBMS is the amount of structure in the datasets that they operate on. Structured data is data that is organized into entities that have a defined format, such as XML documents or database tables that conform to a particular predefined schema. This is the realm of the RDBMS. Semi-structured data, on the other hand, is looser, and though there may be a schema, it is often ignored, so it may be used only as a guide to the structure of the data: for example, a spreadsheet, in which the structure is the grid of cells, although the cells themselves may hold any form of data.
Unstructured data does not have any particular internal structure: for example, plain text or image data. MapReduce works well on unstructured or semi-structured data, since it is designed to interpret the data at processing time. In other words, the input keys and values for MapReduce are not an intrinsic property of the data, but are chosen by the person analyzing the data.

6|"age

Relational data is often normalized to retain its integrity and remove redundancy. Normalization poses problems for MapReduce, since it makes reading a record a non-local operation, and one of the central assumptions that MapReduce makes is that it is possible to perform (high-speed) streaming reads and writes.

                Traditional RDBMS              MapReduce
Data size       Gigabytes                      Petabytes
Access          Interactive and batch          Batch
Updates         Read and write many times      Write once, read many times
Structure       Static schema                  Dynamic schema
Integrity       High                           Low
Scaling         Nonlinear                      Linear

But Hadoop is not yet in widespread use. MySQL and other RDBMSs have stratospherically more market share than Hadoop, but like any investment, it is the future you should be considering. The industry is trending towards distributed systems, and Hadoop is a major player.

ORIGIN OF HADOOP

Hadoop was created by Doug Cutting, the creator of Apache Lucene, the widely used text search library. Hadoop has its origins in Apache Nutch, an open source web search engine, itself a part of the Lucene project. Building a web search engine from scratch was an ambitious goal, for not only is the software required to crawl and index websites complex to write, but it is also a challenge to run without a dedicated operations team, since there are so many moving parts. It's expensive too: Mike Cafarella and Doug Cutting estimated a system supporting a 1-billion-page index would cost around half a million dollars in hardware, with a monthly running cost of $30,000. Nevertheless, they believed it was a worthy goal, as it would open up and ultimately democratize search engine algorithms. Nutch was started in 2002, and a working crawler and search system quickly emerged. However, they realized that their architecture wouldn't scale to the billions of pages on the Web. Help was at hand with the publication of a paper in 2003 that described the architecture of Google's distributed filesystem, called GFS, which was being used in production at Google. GFS, or something like it, would solve their storage needs for the very large files generated as a part of the web crawl and indexing process. In particular, GFS would free up time being spent on administrative tasks such as managing storage nodes.

.|"age

In 2004, they set about writing an open source implementation, the Nutch Distributed Filesystem (NDFS). In 2004, Google published the paper that introduced MapReduce to the world. Early in 2005, the Nutch developers had a working MapReduce implementation in Nutch, and by the middle of that year all the major Nutch algorithms had been ported to run using MapReduce and NDFS. NDFS and the MapReduce implementation in Nutch were applicable beyond the realm of search, and in February 2006 they moved out of Nutch to form an independent subproject of Lucene called Hadoop. At around the same time, Doug Cutting joined Yahoo!, which provided a dedicated team and the resources to turn Hadoop into a system that ran at web scale. This was demonstrated in February 2008 when Yahoo! announced that its production search index was being generated by a 10,000-core Hadoop cluster. In April 2008, Hadoop broke a world record to become the fastest system to sort a terabyte of data: running on a 910-node cluster, Hadoop sorted one terabyte in 209 seconds (just under 3.5 minutes), beating the previous year's winning time of 297 seconds. In November of the same year, Google reported that its MapReduce implementation sorted one terabyte in 68 seconds. In May 2009, it was announced that a team at Yahoo! had used Hadoop to sort one terabyte in 62 seconds.

SUBPROJECTS

Although Hadoop is best known for MapReduce and its distributed filesystem (HDFS, renamed from NDFS), the other subprojects provide complementary services, or build on the core to add higher-level abstractions. The various subprojects of Hadoop include:

Core
A set of components and interfaces for distributed filesystems and general I/O (serialization, Java RPC, persistent data structures).

Avro
A data serialization system for efficient, cross-language RPC, and persistent data storage. (At the time of this writing, Avro had been created only as a new subproject, and no other Hadoop subprojects were using it yet.)

Mapreduce
A distributed data processing model and execution environment that runs on large clusters of commodity machines.

HDFS
A distributed filesystem that runs on large clusters of commodity machines.

0|"age

Pig
A data flow language and execution environment for exploring very large datasets. Pig runs on HDFS and MapReduce clusters.

HBase
A distributed, column-oriented database. HBase uses HDFS for its underlying storage, and supports both batch-style computations using MapReduce and point queries (random reads).

ZooKeeper
A distributed, highly available coordination service. ZooKeeper provides primitives such as distributed locks that can be used for building distributed applications.

Hive
A distributed data warehouse. Hive manages data stored in HDFS and provides a query language based on SQL (which is translated by the runtime engine to MapReduce jobs) for querying the data.

Chukwa
A distributed data collection and analysis system. Chukwa runs collectors that store data in HDFS, and it uses MapReduce to produce reports. (At the time of this writing, Chukwa had only recently graduated from a "contrib" module in Core to its own subproject.)

THE HADOOP APPROACH

Hadoop is designed to efficiently process large volumes of information by connecting many commodity computers together to work in parallel. A theoretical 1,000-CPU machine would cost a very large amount of money, far more than 1,000 single-CPU or 250 quad-core machines. Hadoop ties these smaller and more reasonably priced machines together into a single cost-effective compute cluster. Performing computation on large volumes of data has been done before, usually in a distributed setting. What makes Hadoop unique is its simplified programming model, which allows the user to quickly write and test distributed systems, and its efficient, automatic distribution of data and work across machines, which in turn utilizes the underlying parallelism of the CPU cores.

Data distribution

In a Hadoop cluster, data is distributed to all the nodes of the cluster as it is being loaded in. The Hadoop Distributed File System (HDFS) splits large data files into chunks which are managed by different nodes in the cluster. In addition, each chunk is replicated across several machines, so that a single machine failure does not result in any data being unavailable.


An active monitoring system then re-replicates the data in response to system failures that could result in partial storage. Even though the file chunks are replicated and distributed across several machines, they form a single namespace, so their contents are universally accessible. Data is conceptually record-oriented in the Hadoop programming framework. Individual input files are broken into lines or into other formats specific to the application logic. Each process running on a node in the cluster then processes a subset of these records. The Hadoop framework schedules these processes in proximity to the location of data/records, using knowledge from the distributed file system. Since files are spread across the distributed file system as chunks, each compute process running on a node operates on a subset of the data. Which data a node operates on is chosen based on its locality to the node: most data is read from the local disk straight into the CPU, alleviating strain on network bandwidth and preventing unnecessary network transfers. This strategy of moving computation to the data, instead of moving the data to the computation, allows Hadoop to achieve high data locality, which in turn results in high performance.

MapReduce: Isolated Processes

Hadoop limits the amount of communication which can be performed by the processes, as each individual record is processed by a task in isolation from one another. While this sounds like a major limitation at first, it makes the whole framework much more reliable. Hadoop will not run just any program and distribute it across a cluster. Programs must be written to conform to a particular programming model, named "MapReduce."


In MapReduce, records are processed in isolation by tasks called Mappers.

The output from the Mappers is then brought together into a second set of tasks called Reducers, where results from different mappers can be merged together. Separate nodes in a Hadoop cluster still communicate with one another. However, in contrast to more conventional distributed systems, where application developers explicitly marshal byte streams from node to node over sockets or through MPI buffers, communication in Hadoop is performed implicitly. Pieces of data can be tagged with key names which inform Hadoop how to send related bits of information to a common destination node. Hadoop internally manages all of the data transfer and cluster topology issues. By restricting the communication between nodes, Hadoop makes the distributed system much more reliable. Individual node failures can be worked around by restarting tasks on other machines. Since user-level tasks do not communicate explicitly with one another, no messages need to be exchanged by user programs, nor do nodes need to roll back to pre-arranged checkpoints to partially restart the computation. The other workers continue to operate as though nothing went wrong, leaving the challenging aspects of partially restarting the program to the underlying Hadoop layer.

INTRODUCTION TO MAPREDUCE

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model. This abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages. Google realized that most of its computations involved applying a map operation to each logical "record" in the input in order to compute a set of intermediate key/value pairs, and then applying a reduce operation to all the values that shared the same key, in order to combine the derived data appropriately.


This use of a functional model with user-specified map and reduce operations allows large computations to be parallelized easily and allows re-execution to be used as the primary mechanism for fault tolerance.

Programming model

The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The user of the MapReduce library expresses the computation as two functions: Map and Reduce. Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function. The Reduce function, also written by the user, accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's reduce function via an iterator. This allows the library to handle lists of values that are too large to fit in memory.

MAP
map (in_key, in_value) -> (out_key, intermediate_value) list

Example: Upper-case Mapper

let map(k, v) = emit(k.toUpper(), v.toUpper())

("foo", "bar")   --> ("FOO", "BAR")
("Foo", "other") --> ("FOO", "OTHER")


("key2", "data") --> ("KEY2", "DATA")

REDUCE
reduce (out_key, intermediate_value list) -> out_value list

Example: Sum Reducer

let reduce(k, vals) =
    sum = 0
    foreach int v in vals:
        sum += v
    emit(k, sum)

("A", [42, 100, 312]) --> ("A", 454)


("A", [12, 6, -2]) --> ("A", 16)
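The same two toy examples can be written as ordinary Python functions and exercised locally (a minimal sketch; the function names are illustrative and nothing here uses the Hadoop API):

    # Upper-case mapper: emits the key and value converted to upper case.
    def map_upper(key, value):
        yield key.upper(), value.upper()

    # Sum reducer: adds up all the values seen for one key.
    def reduce_sum(key, values):
        yield key, sum(values)

    print(list(map_upper("foo", "bar")))            # [('FOO', 'BAR')]
    print(list(reduce_sum("A", [42, 100, 312])))    # [('A', 454)]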

Example: Counting the number of occurrences of each word in a large collection of documents. The user would write code similar to the following pseudo-code:

    map(String key, String value):
        // key: document name
        // value: document contents
        for each word w in value:
            EmitIntermediate(w, "1");

    reduce(String key, Iterator values):
        // key: a word
        // values: a list of counts
        int result = 0;
        for each v in values:
            result += ParseInt(v);
        Emit(AsString(result));

The map function emits each word plus an associated count of occurrences (just '1' in this simple example). The reduce function sums together all counts emitted for a particular word. In addition, the user writes code to fill in a mapreduce specification object with the names of the input and output files, and optional tuning parameters. The user then invokes the MapReduce function, passing it the specification object. The user's code is linked together with the MapReduce library (implemented in C++). Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.
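The same word count can be sketched in Python and run on a single machine (an illustration only; the few lines of driver code below stand in for what the MapReduce runtime does across a whole cluster):

    from collections import defaultdict

    # Map phase: emit (word, 1) for every word in a document.
    def map_wordcount(doc_name, contents):
        for word in contents.split():
            yield word, 1

    # Reduce phase: sum the counts emitted for each word.
    def reduce_wordcount(word, counts):
        yield word, sum(counts)

    # Tiny stand-in for the framework: run the maps, group by key, run the reduces.
    documents = {"doc1": "the quick brown fox", "doc2": "the lazy dog jumps the fence"}
    groups = defaultdict(list)
    for name, text in documents.items():
        for word, count in map_wordcount(name, text):
            groups[word].append(count)

    totals = dict(pair for word, counts in groups.items()
                  for pair in reduce_wordcount(word, counts))
    print(totals)    # {'the': 3, 'quick': 1, 'brown': 1, ...}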


The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code. As a reaction to this complexity, Google designed a new abstraction that allows the simple computations to be expressed while hiding the messy details of parallelization, fault tolerance, data distribution and load balancing in a library.

T"pes

E&en though the pre&ious pseudo9code is $ritten in terms of string inputs and outputs, conceptua % the map and reduce functions supp ied #% the user ha&e associated t%pes8 map F(4,&4G O istF(6,&6G reduce F(6, istF&6GG O istF&6G I!e!, the input (e%s and &a ues are dra$n from a different domain than the output (e%s and &a ues! "urthermore, the intermediate (e%s and &a ues are from the same domain as the output (e%s and &a ues! Our CWW imp ementation passes strings to and from the user9de[ned functions and ea&es it to the user code to con&ert #et$een strings and appropriate t%pes!

Inverted Index: The map function parses each document, and emits a sequence of ⟨word, document ID⟩ pairs. The reduce function accepts all pairs for a given word, sorts the corresponding document IDs and emits a ⟨word, list(document ID)⟩ pair. The set of all output pairs forms a simple inverted index. It is easy to augment this computation to keep track of word positions.
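A sketch of the inverted index in the same local Python style (the grouping step again stands in for the framework's shuffle; the data and names are made up for the example):

    from collections import defaultdict

    # Map: emit (word, doc_id) for every word in a document.
    def map_index(doc_id, contents):
        for word in contents.split():
            yield word, doc_id

    # Reduce: sort the document IDs collected for each word.
    def reduce_index(word, doc_ids):
        yield word, sorted(doc_ids)

    docs = {"d1": "hadoop stores data", "d2": "hadoop processes data"}
    postings = defaultdict(list)
    for doc_id, text in docs.items():
        for word, d in map_index(doc_id, text):
            postings[word].append(d)

    index = dict(pair for word, ids in postings.items()
                 for pair in reduce_index(word, ids))
    print(index)    # {'hadoop': ['d1', 'd2'], 'stores': ['d1'], ...}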


Distributed Sort: The map function extracts the key from each record, and emits a ⟨key, record⟩ pair. The reduce function emits all pairs unchanged.

H)DOO( &)(R+DUC+

Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. A MapReduce job usually splits the input data set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a filesystem. The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks. Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the distributed filesystem run on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.

A MapReduce job is a unit of work that the client wants to be performed: it consists of the input data, the MapReduce program, and configuration information. Hadoop runs the job by dividing it into tasks, of which there are two types: map tasks and reduce tasks. There are two types of nodes that control the job execution process: a jobtracker and a number of tasktrackers. The jobtracker coordinates all the jobs run on the system by scheduling tasks to run on tasktrackers. Tasktrackers run tasks and send progress reports to the jobtracker, which keeps a record of the overall progress of each job. If a task fails, the jobtracker can reschedule it on a different tasktracker. Hadoop divides the input to a MapReduce job into fixed-size pieces called input splits, or just splits. Hadoop creates one map task for each split, which runs the user-defined map function for each record in the split.


Ha&ing man% sp its means the time ta(en to process each sp it is sma compared to the time to process the $ho e input! So if $e are processing the sp its in para e , the processing is #etter oad9 #a anced if the sp its are sma , since a faster machine $i #e a# e to process proportiona % more sp its o&er the course of the Jo# than a s o$er machine! E&en if the machines are identica , fai ed processes or other Jo#s running concurrent % ma(e oad #a ancing desira# e, and the )ua it% of the oad #a ancing increases as the sp its #ecome more fine9grained! On the other hand, if sp its are too sma , then the o&erhead of managing the sp its and of map tas( creation #egins to dominate the tota Jo# e'ecution time! "or most Jo#s, a good sp it si-e tends to #e the si-e of a HD"S # oc(, 7B MA #% defau t, a though this can #e changed for the c uster Ffor a ne$ % created fi esG, or specified $hen each fi e is created! Hadoop does its #est to run the map tas( on a node $here the input data resides in HD"S! This is ca ed the data oca it% optimi-ation! It shou d no$ #e c ear $h% the optima sp it si-e is the same as the # oc( si-e8 it is the argest si-e of input that can #e guaranteed to #e stored on a sing e node! If the sp it spanned t$o # oc(s, it $ou d #e un i(e % that an% HD"S node stored #oth # oc(s, so some of the sp it $ou d ha&e to #e transferred across the net$or( to the node running the map tas(, $hich is c ear % ess efficient than running the $ho e map tas( using oca data! Map tas(s $rite their output to oca dis(, not to HD"S! Map output is intermediate output8 it,s processed #% reduce tas(s to produce the fina output, and once the Jo# is comp ete the map output can #e thro$n a$a%! So storing it in HD"S, $ith rep ication, $ou d #e o&er(i ! If the node running the map tas( fai s #efore the map output has #een consumed #% the reduce tas(, then Hadoop $i automatica % rerun the map tas( on another node to recreate the map output! Reduce tas(s don,t ha&e the ad&antage of data oca it%>the input to a sing e reduce tas( is norma % the output from a mappers! In the present e'amp e, $e ha&e a sing e reduce tas( that is fed #% a of the map tas(s! Therefore the sorted map outputs ha&e to #e transferred across the net$or( to the node $here the reduce tas( is running, $here the% are merged and then passed to the user9defined reduce function! The output of the reduce is norma % stored in HD"S for re ia#i it%! "or each HD"S # oc( of the reduce output, the first rep ica is stored on the oca node, $ith other rep icas #eing stored on off9rac( nodes! Thus, $riting the reduce output does consume net$or( #and$idth, #ut on % as much as a norma HD"S $rite pipe ine


The dotted boxes in the figure below indicate nodes, the light arrows show data transfers on a node, and the heavy arrows show data transfers between nodes. The number of reduce tasks is not governed by the size of the input, but is specified independently.

MapReduce data flow with a single reduce task

When there are multiple reducers, the map tasks partition their output, each creating one partition for each reduce task. There can be many keys (and their associated values) in each partition, but the records for every key are all in a single partition. The partitioning can be controlled by a user-defined partitioning function, but normally the default partitioner, which buckets keys using a hash function, works very well. This diagram makes it clear why the data flow between map and reduce tasks is colloquially known as "the shuffle," as each reduce task is fed by many map tasks. The shuffle is more complicated than this diagram suggests, and tuning it can have a big impact on job execution time. Finally, it is also possible to have zero reduce tasks. This can be appropriate when you don't need the shuffle, since the processing can be carried out entirely in parallel.
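The default partitioning idea can be pictured as hashing the key modulo the number of reducers (a Python sketch of the concept; Hadoop's actual default is a Java hash partitioner that applies the same idea to the key's hash code):

    # Assign each intermediate key to one of num_reducers partitions.
    # Note: Python randomizes string hashes between runs, so this is
    # illustrative only; a real partitioner must be deterministic.
    def default_partition(key, num_reducers):
        return hash(key) % num_reducers

    # Every record with a given key lands in the same partition,
    # so a single reduce task sees all the values for that key.
    for key in ["apple", "banana", "apple", "cherry"]:
        print(key, "-> reducer", default_partition(key, 3))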


MapReduce data flow with multiple reduce tasks

MapReduce data flow with no reduce tasks


Combiner Functions

Many MapReduce jobs are limited by the bandwidth available on the cluster, so it pays to minimize the data transferred between map and reduce tasks. Hadoop allows the user to specify a combiner function to be run on the map output; the combiner function's output forms the input to the reduce function. Since the combiner function is an optimization, Hadoop does not provide a guarantee of how many times it will call it for a particular map output record, if at all. In other words, calling the combiner function zero, one, or many times should produce the same output from the reducer.
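For a word count, the reducer itself (which sums counts) can also serve as the combiner, because addition is associative and commutative, so pre-aggregating on the map side cannot change the final answer. A small Python sketch of that property (not Hadoop API code):

    # Counts emitted for the word "the" by one map task.
    map_output = [1, 1, 1, 1, 1]

    def sum_counts(counts):        # used as both combiner and reducer
        return sum(counts)

    # Without a combiner the reducer sees all five records.
    without_combiner = sum_counts(map_output)                              # 5

    # With a combiner the map side emits partial sums and the reducer
    # adds up the partial sums; the result is unchanged.
    partials = [sum_counts(map_output[:3]), sum_counts(map_output[3:])]
    with_combiner = sum_counts(partials)                                   # 5

    assert without_combiner == with_combiner

Not every reduce function can be reused this way: a function computing a mean, for example, cannot simply be applied to partial results.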

H)DOO( %TR+)&IN0

Hadoop pro&ides an API to MapReduce that a o$s %ou to $rite %our map and reduce functions in anguages other than Qa&a! Hadoop Streaming uses /ni' standard streams as the interface #et$een Hadoop and %our program, so %ou can use an% anguage that can read standard input and $rite to standard output to $rite %our MapReduce program! Streaming is natura % suited for te't processing Fa though as of &ersion 3!64!3 it can hand e #inar% streams, tooG, and $hen used in te't mode, it has a ine9oriented &ie$ of data! Map input data is passed o&er standard input to %our map function, $hich processes it ine #% ine and $rites ines to standard output! A map output (e%9&a ue pair is $ritten as a sing e ta#9de imited ine! Input to the reduce function is in the same format>a ta#9separated (e%9 &a ue pair>passed o&er standard input! The reduce function reads ines from standard input, $hich the frame$or( guarantees are sorted #% (e%, and $rites its resu ts to standard output!

HADOOP PIPES

Hadoop Pipes is the name of the C++ interface to Hadoop MapReduce. Unlike Streaming, which uses standard input and output to communicate with the map and reduce code, Pipes uses sockets as the channel over which the tasktracker communicates with the process running the C++ map or reduce function. JNI is not used.

HADOOP DISTRIBUTED FILESYSTEM (HDFS)


"i es%stems that manage the storage across a net$or( of machines are ca ed distri#uted fi es%stems! Since the% are net$or(9#ased, a the comp ications of net$or( programming (ic( in, thus ma(ing distri#uted fi es%stems more comp e' than regu ar dis( fi es%stems! "or e'amp e, one of the #iggest cha enges is ma(ing the fi es%stem to erate node fai ure $ithout suffering data oss! Hadoop comes $ith a distri#uted fi es%stem ca ed HD"S, $hich stands for Hadoop Distri#uted "i es%stem! HD$%, the Hadoop Distributed $ile %"stem, is a distributed file s"stem designed to hold er" large amounts of data 3terab"tes or e en petab"tes5, and pro ide high-throughput access to this information# "i es are stored in a redundant fashion across mu tip e machines to ensure their dura#i it% to fai ure and high a&ai a#i it% to &er% para e app ications! )%%U&(TION% )ND 0O)9% Hardware $ailure &ard*are failure is the norm rather than the e@ception. #n &D-$ instance ma5 consist of hundreds or thousands of ser1er machines> each storing part of the file s5stemAs data. The fact that there are a huge number of components and that each component has a non-tri1ial probabilit5 of failure means that some component of &D-$ is al*a5s non-functional. Therefore> detection of faults and Buic=> automatic reco1er5 from them is a core architectural goal of &D-$. Streaming Data Access #pplications that run on &D-$ need streaming access to their data sets. The5 are not general purpose applications that t5picall5 run on general purpose file s5stems. &D-$ is designed more for batch processing rather than interacti1e use b5 users. The emphasis is on high throughput of data access rather than lo* latenc5 of data access. "O$IC imposes man5 hard reBuirements that are not needed for applications that are targeted for &D-$. "O$IC semantics in a fe* =e5 areas has been traded to increase data throughput rates. Large Data Sets #pplications that run on &D-$ ha1e large data sets. # t5pical file in &D-$ is gigab5tes to terab5tes in si?e. Thus> &D-$ is tuned to support large files. It should pro1ide high aggregate data band*idth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

Simple Coherency Model
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model. There is a plan to support appending writes to files in the future.


"Mo#ing Computation is C eaper t an Mo#ing Data$ # computation reBuested b5 an application is much more efficient if it is e@ecuted near the data it operates on. This is especiall5 true *hen the si?e of the data set is huge. This minimi?es net*or= congestion and increases the o1erall throughput of the s5stem. The assumption is that it is often better to migrate the computation closer to *here the data is located rather than mo1ing the data to *here the application is running. &D-$ pro1ides interfaces for applications to mo1e themsel1es closer to *here the data is located. Portabilit! Across Heterogeneous Hard%are and So&t%are Plat&orms &D-$ has been designed to be easil5 portable from one platform to another. This facilitates *idespread adoption of &D-$ as a platform of choice for a large set of applications.

DESIGN

HD"S is a fi es%stem designed for storing &er% arge fi es $ith streaming data access patterns, running on c usters on commodit% hard$are! =et,s e'amine this statement in more detai 8 :er" large files 1Eer% arge2 in this conte't means fi es that are hundreds of mega#%tes, giga#%tes, or tera#%tes in si-e! There are Hadoop c usters running toda% that store peta#%tes of data!N %treaming data access HD"S is #ui t around the idea that the most efficient data processing pattern is a $rite9once, read9 man%9times pattern! A dataset is t%pica % generated or copied from source, then &arious ana %ses are performed on that dataset o&er time! Each ana %sis $i in&o &e a arge proportion, if not a , of the dataset, so the time to read the $ho e dataset is more important than the atenc% in reading the first record! Commodit" hardware Hadoop doesn,t re)uire e'pensi&e, high % re ia# e hard$are to run on! It,s designed to run on c usters of commodit% hard$are Fcommon % a&ai a# e hard$are a&ai a# e from mu tip e &endors\G for $hich the chance of node fai ure across the c uster is high, at east for arge c usters! HD"S is designed to carr% on $or(ing $ithout a noticea# e interruption to the user in the face of such fai ure! It is a so $orth e'amining the app ications for $hich using HD"S does not $or( so $e ! *hi e this ma% change in the future, these are areas $here HD"S is not a good fit toda%8 9ow-latenc" data access App ications that re)uire o$9 atenc% access to data, in the tens of mi iseconds range, $i not $or( $e $ith HD"S! Remem#er HD"S is optimi-ed for de i&ering a high throughput of data, and this ma% #e at the e'pense of atenc%! HAase FChapter 46G is current % a #etter choice for o$9 atenc% access!


Lots of small files
Since the namenode holds filesystem metadata in memory, the limit to the number of files in a filesystem is governed by the amount of memory on the namenode. As a rule of thumb, each file, directory, and block takes about 150 bytes. So, for example, if you had one million files, each taking one block, you would need at least 300 MB of memory (one million files plus one million blocks, at roughly 150 bytes each). While storing millions of files is feasible, billions is beyond the capability of current hardware.

Multiple writers, arbitrary file modifications
Files in HDFS may be written to by a single writer. Writes are always made at the end of the file. There is no support for multiple writers, or for modifications at arbitrary offsets in the file. (These might be supported in the future, but they are likely to be relatively inefficient.)

HDFS Concepts

Blocks
A disk has a block size, which is the minimum amount of data that it can read or write. Filesystems for a single disk build on this by dealing with data in blocks, which are an integral multiple of the disk block size. Filesystem blocks are typically a few kilobytes in size, while disk blocks are normally 512 bytes. This is generally transparent to the filesystem user, who is simply reading or writing a file of whatever length. However, there are tools for filesystem maintenance, such as df and fsck, that operate on the filesystem block level. HDFS too has the concept of a block, but it is a much larger unit: 64 MB by default. Like in a filesystem for a single disk, files in HDFS are broken into block-sized chunks, which are stored as independent units. Unlike a filesystem for a single disk, a file in HDFS that is smaller than a single block does not occupy a full block's worth of underlying storage. When unqualified, the term "block" in this report refers to a block in HDFS.

HDFS blocks are large compared to disk blocks, and the reason is to minimize the cost of seeks. By making a block large enough, the time to transfer the data from the disk can be made significantly larger than the time to seek to the start of the block. Thus the time to transfer a large file made of multiple blocks operates at the disk transfer rate. A quick calculation shows that if the seek time is around 10 ms and the transfer rate is 100 MB/s, then to make the seek time 1% of the transfer time, we need to make the block size around 100 MB. The default is actually 64 MB, although many HDFS installations use 128 MB blocks. This figure will continue to be revised upward as transfer speeds grow with new generations of disk drives. This argument shouldn't be taken too far, however. Map tasks in MapReduce normally operate on one block at a time, so if you have too few tasks (fewer than there are nodes in the cluster), your jobs will run slower than they could otherwise.
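The block-size arithmetic above can be checked directly (a small Python sketch; the 10 ms seek time and 100 MB/s transfer rate are the illustrative figures quoted in the text, not measured values):

    seek_time = 0.010       # seconds per seek
    transfer_rate = 100e6   # bytes per second (100 MB/s)
    seek_fraction = 0.01    # target: seeking costs only 1% of transfer time

    # The transfer time must be seek_time / seek_fraction = 1 second,
    # so the block has to hold transfer_rate * 1 s of data.
    block_size = transfer_rate * (seek_time / seek_fraction)
    print(block_size / 1e6, "MB")   # 100.0 MB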


Ha&ing a # oc( a#straction for a distri#uted fi es%stem #rings se&era #enefits! The first #enefit is the most o#&ious8 a fi e can #e arger than an% sing e dis( in the net$or(! There,s nothing that re)uires the # oc(s from a fi e to #e stored on the same dis(, so the% can ta(e ad&antage of an% of the dis(s in the c uster! In fact, it $ou d #e possi# e, if unusua , to store a sing e fi e on an HD"S c uster $hose # oc(s fi ed a the dis(s in the c uster! Second, ma(ing the unit of a#straction a # oc( rather than a fi e simp ifies the storage su#s%stem! Simp icit% is something to stri&e for a in a s%stems, #ut is important for a distri#uted s%stem in $hich the fai ure modes are so &aried! The storage su#s%stem dea s $ith # oc(s, simp if%ing storage management Fsince # oc(s are a fi'ed si-e, it is eas% to ca cu ate ho$ man% can #e stored on a gi&en dis(G, and e iminating metadata concerns F# oc(s are Just a chun( of data to #e stored>fi e metadata such as permissions information does not need to #e stored $ith the # oc(s, so another s%stem can hand e metadata orthogona %G! "urthermore, # oc(s fit $e $ith rep ication for pro&iding fau t to erance and a&ai a#i it%! To insure against corrupted # oc(s and dis( and machine fai ure, each # oc( is rep icated to a sma num#er of ph%sica % separate machines Ft%pica % threeG! If a # oc( #ecomes una&ai a# e, a cop% can #e read from another ocation in a $a% that is transparent to the c ient! A # oc( that is no onger a&ai a# e due to corruption or machine fai ure can #e rep icated from their a ternati&e ocations to other i&e machines to #ring the rep ication factor #ac( to the norma e&e ! FSee 1Data Integrit%2 on page @< for more on guarding against corrupt data!G Simi ar %, some app ications ma% choose to set a high rep ication factor for the # oc(s in a popu ar fi e to spread the read oad on the c uster! =i(e its dis( fi es%stem cousin, HD"S,s fsc( command understands # oc(s! "or e'amp e, running8 ] hadoop fsc' -files -bloc's $i ist the # oc(s that ma(e up each fi e in the fi es%stem! 'amenodes and Datanodes A HD"S c uster has t$o t%pes of node operating in a master9$or(er pattern8 a namenode Fthe masterG and a num#er of datanodes F$or(ersG! The namenode manages the fi es%stem namespace! It maintains the fi es%stem tree and the metadata for a the fi es and directories in the tree! This information is stored persistent % on the oca dis( in the form of t$o fi es8 the namespace image and the edit og! The namenode a so (no$s the datanodes on $hich a the # oc(s for a gi&en fi e are ocated, ho$e&er, it does not store # oc( ocations persistent %, since this information is reconstructed from datanodes $hen the s%stem starts! A c ient accesses the fi es%stem on #eha f of the user #% communicating $ith the namenode and datanodes!


The client presents a POSIX-like filesystem interface, so the user code does not need to know about the namenode and datanodes to function. Datanodes are the workhorses of the filesystem. They store and retrieve blocks when they are told to (by clients or the namenode), and they report back to the namenode periodically with lists of blocks that they are storing. Without the namenode, the filesystem cannot be used. In fact, if the machine running the namenode were obliterated, all the files on the filesystem would be lost, since there would be no way of knowing how to reconstruct the files from the blocks on the datanodes. For this reason, it is important to make the namenode resilient to failure, and Hadoop provides two mechanisms for this.


The first way is to back up the files that make up the persistent state of the filesystem metadata. Hadoop can be configured so that the namenode writes its persistent state to multiple filesystems. These writes are synchronous and atomic. The usual configuration choice is to write to local disk as well as a remote NFS mount. It is also possible to run a secondary namenode, which despite its name does not act as a namenode. Its main role is to periodically merge the namespace image with the edit log to prevent the edit log from becoming too large. The secondary namenode usually runs on a separate physical machine, since it requires plenty of CPU and as much memory as the namenode to perform the merge. It keeps a copy of the merged namespace image, which can be used in the event of the namenode failing. However, the state of the secondary namenode lags that of the primary, so in the event of total failure of the primary, data loss is almost guaranteed. The usual course of action in this case is to copy the namenode's metadata files that are on NFS to the secondary and run it as the new primary.

The File System Namespace

HD"S supports a traditiona hierarchica fi e organi-ation! A user or an app ication can create directories and store fi es inside these directories! The fi e s%stem namespace hierarch% is simi ar to most other e'isting fi e s%stemsH one can create and remo&e fi es, mo&e a fi e from one director% to another, or rename a fi e! HD"S does not %et imp ement user )uotas or access permissions! HD"S does not support hard in(s or soft in(s! Ho$e&er, the HD"S architecture does not prec ude imp ementing these features! The NameNode maintains the fi e s%stem namespace! An% change to the fi e s%stem namespace or its properties is recorded #% the NameNode! An app ication can specif% the num#er of rep icas of a fi e


An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.

Data Replication

HD"S is designed to re ia# % store &er% arge fi es across machines in a arge c uster! It stores each fi e as a se)uence of # oc(sH a # oc(s in a fi e e'cept the ast # oc( are the same si-e! The # oc(s of a fi e are rep icated for fau t to erance! The # oc( si-e and rep ication factor are configura# e per fi e! An app ication can specif% the num#er of rep icas of a fi e! The rep ication factor can #e specified at fi e creation time and can #e changed ater! "i es in HD"S are $rite9once and ha&e strict % one $riter at an% time! The NameNode ma(es a decisions regarding rep ication of # oc(s! It periodica % recei&es a Heart#eat and a A oc(report from each of the DataNodes in the c uster! Receipt of a Heart#eat imp ies that the DataNode is functioning proper %! A A oc(report contains a ist of a # oc(s on a DataNode!

Replica Placement


The placement of replicas is critical to HDFS reliability and performance. Optimizing replica placement distinguishes HDFS from most other distributed file systems. This is a feature that needs lots of tuning and experience. The purpose of a rack-aware replica placement policy is to improve data reliability, availability, and network bandwidth utilization. The current implementation of the replica placement policy is a first effort in this direction. The short-term goals of implementing this policy are to validate it on production systems, learn more about its behavior, and build a foundation to test and research more sophisticated policies.

Large HDFS instances run on a cluster of computers that commonly spread across many racks. Communication between two nodes in different racks has to go through switches. In most cases, network bandwidth between machines in the same rack is greater than network bandwidth between machines in different racks. The NameNode determines the rack id each DataNode belongs to via the process outlined in Rack Awareness. A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster, which makes it easy to balance load on component failure. However, this policy increases the cost of writes because a write needs to transfer blocks to multiple racks.

For the common case, when the replication factor is three, HDFS's placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic, which generally improves write performance. The chance of rack failure is far less than that of node failure, so this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data, since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of the replicas are on one node, two thirds of the replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance. The current, default replica placement policy described here is a work in progress.
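The default policy for a replication factor of three can be pictured as a small placement function (an illustrative Python sketch only; the real logic lives inside the NameNode and also weighs factors such as free space and node load):

    import random

    # Choose (rack, node) targets for the three replicas of one block.
    # racks: dict mapping rack id -> list of node names.
    # Policy sketched from the text: first replica on a node in the writer's
    # rack, second on a different node in the same rack, third on a node in
    # a different rack.
    def place_three_replicas(writer_rack, racks):
        local_nodes = racks[writer_rack]
        first = random.choice(local_nodes)
        second = random.choice([n for n in local_nodes if n != first])
        other_rack = random.choice([r for r in racks if r != writer_rack])
        third = random.choice(racks[other_rack])
        return [(writer_rack, first), (writer_rack, second), (other_rack, third)]

    racks = {"rack1": ["node1", "node2", "node3"], "rack2": ["node4", "node5"]}
    print(place_three_replicas("rack1", racks))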
Replica Selection
To minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If an HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.

Safemode

33 | " a g e

Safemode

On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting. Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas, and replicates these blocks to other DataNodes.
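The exit condition described above can be sketched as a simple check; this is an illustrative helper, not NameNode source code, and the property name dfs.safemode.threshold.pct mentioned in the comment is assumed for Hadoop releases of this vintage.

// Illustrative sketch of the Safemode exit test described above.
class SafemodeCheck {
  // threshold is the configurable fraction of blocks that must be minimally replicated
  // (assumed to correspond to the dfs.safemode.threshold.pct property).
  static boolean mayLeaveSafemode(long safelyReplicatedBlocks, long totalBlocks,
                                  double threshold, long millisSinceThresholdReached) {
    boolean enoughBlocks = totalBlocks == 0
        || (double) safelyReplicatedBlocks / totalBlocks >= threshold;
    boolean extensionElapsed = millisSinceThresholdReached >= 30_000; // the extra 30 seconds
    return enoughBlocks && extensionElapsed;
  }
}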

The Persistence of File System Metadata

The HDFS namespace is stored by the NameNode. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata. For example, creating a new file in HDFS causes the NameNode to insert a record into the EditLog indicating this. Similarly, changing the replication factor of a file causes a new record to be inserted into the EditLog. The NameNode uses a file in its local host OS file system to store the EditLog. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode's local file system too. The NameNode keeps an image of the entire file system namespace and file Blockmap in memory. This key metadata item is designed to be compact, such that a NameNode with 4 GB of RAM is plenty to support a huge number of files and directories. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint. In the current implementation, a checkpoint only occurs when the NameNode starts up. Work is in progress to support periodic checkpointing in the near future.

The DataNode stores HDFS data in files in its local file system. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separate file in its local file system. The DataNode does not create all files in the same directory. Instead, it uses a heuristic to determine the optimal number of files per directory and creates subdirectories appropriately. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory. When a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to each of these local files and sends this report to the NameNode: this is the Blockreport.

The Communication Protocols

All HDFS communication protocols are layered on top of the TCP/IP protocol. A client establishes a connection to a configurable TCP port on the NameNode machine.

32 | " a g e

It talks the ClientProtocol with the NameNode. The DataNodes talk to the NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both the Client Protocol and the DataNode Protocol. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.

Robustness

The primary objective of HDFS is to store data reliably even in the presence of failures. The three common types of failures are NameNode failures, DataNode failures and network partitions.

Data Disk Failure, Heartbeats and Re-Replication

Each DataNode sends a Heartbeat message to the NameNode periodically. A network partition can cause a subset of DataNodes to lose connectivity with the NameNode. The NameNode detects this condition by the absence of a Heartbeat message. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new IO requests to them. Any data that was registered to a dead DataNode is not available to HDFS any more. DataNode death may cause the replication factor of some blocks to fall below their specified value. The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise due to many reasons: a DataNode may become unavailable, a replica may become corrupted, a hard disk on a DataNode may fail, or the replication factor of a file may be increased.
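The bookkeeping described above can be sketched roughly as follows; BlockInfo and the surrounding types are hypothetical placeholders, not NameNode internals.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: after dead DataNodes are dropped, any block with fewer live
// replicas than its target replication factor is queued for re-replication.
class BlockInfo {
  String blockId;
  int targetReplication;
  List<String> replicaNodes;   // DataNodes currently holding this block
}

class ReplicationMonitor {
  static List<BlockInfo> findUnderReplicated(Map<String, BlockInfo> blocks,
                                             Set<String> liveDataNodes) {
    List<BlockInfo> needed = new ArrayList<>();
    for (BlockInfo b : blocks.values()) {
      long live = b.replicaNodes.stream().filter(liveDataNodes::contains).count();
      if (live < b.targetReplication) {
        needed.add(b);         // schedule extra copies onto healthy DataNodes
      }
    }
    return needed;
  }
}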

Cluster Rebalancing

The HDFS architecture is compatible with data rebalancing schemes. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster. These types of data rebalancing schemes are not yet implemented.

Data Integrity

It is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software. The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents, it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, the client can opt to retrieve that block from another DataNode that has a replica of that block.
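The verify-on-read idea can be sketched with a standard CRC-32, which is the kind of checksum HDFS of this vintage keeps per fixed-size chunk of data; the helper below is illustrative and not the real client code.

import java.util.zip.CRC32;

// Illustrative sketch of client-side checksum verification as described above.
class ChecksumCheck {
  static long checksum(byte[] data, int off, int len) {
    CRC32 crc = new CRC32();
    crc.update(data, off, len);
    return crc.getValue();
  }

  // Returns true if a chunk read from a DataNode matches the stored checksum; if it does
  // not, the client would fall back to another DataNode holding a replica of the block.
  static boolean verifyChunk(byte[] chunk, long storedChecksum) {
    return checksum(chunk, 0, chunk.length) == storedChecksum;
  }
}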

37 | " a g e

Metadata Disk Failure

The FsImage and the EditLog are central data structures of HDFS. A corruption of these files can cause the HDFS instance to be non-functional. For this reason, the NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog. Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously. This synchronous updating of multiple copies of the FsImage and EditLog may degrade the rate of namespace transactions per second that a NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use. The NameNode machine is a single point of failure for an HDFS cluster. If the NameNode machine fails, manual intervention is necessary. Currently, automatic restart and failover of the NameNode software to another machine is not supported.
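A minimal sketch of this configuration is shown below, assuming the dfs.name.dir property used by Hadoop releases of this vintage; the directory paths are examples. In practice the property would typically be set in the site configuration file (hdfs-site.xml, or hadoop-site.xml in older releases) rather than in code.

import org.apache.hadoop.conf.Configuration;

// Minimal sketch: point the NameNode at more than one metadata directory (for example a
// local disk plus an NFS mount) so redundant copies of the FsImage and EditLog are kept.
public class NameDirConfig {
  public static Configuration redundantMetadataDirs() {
    Configuration conf = new Configuration();
    conf.set("dfs.name.dir", "/disk1/hdfs/name,/remote/nfs/hdfs/name"); // example paths
    return conf;
  }
}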

Snapshots

Snapshots support storing a copy of data at a particular instant of time. One usage of the snapshot feature may be to roll back a corrupted HDFS instance to a previously known good point in time. HDFS does not currently support snapshots but will in a future release.

Data Organization

Data Blocks

HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 64 MB. Thus, an HDFS file is chopped up into 64 MB chunks, and if possible, each chunk will reside on a different DataNode.

Staging

A client request to create a file does not reach the NameNode immediately. In fact, initially the HDFS client caches the file data into a temporary local file. Application writes are transparently redirected to this temporary local file. When the local file accumulates data worth over one HDFS block size, the client contacts the NameNode. The NameNode inserts the file name into the file system hierarchy and allocates a data block for it. The NameNode responds to the client request with the identity of the DataNode and the destination data block. Then the client flushes the block of data from the local temporary file to the specified DataNode. When a file is closed, the remaining un-flushed data in the temporary local file is transferred to the DataNode. The client then tells the NameNode that the file is closed. At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost.
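The staging behaviour can be sketched as a small buffering loop; NameNodeStub and DataNodeStub below are placeholders, not real Hadoop classes, and a 64 MB block size is assumed.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Illustrative sketch of client-side staging: writes accumulate in a temporary local buffer,
// and only once a full block is buffered does the client ask the NameNode for a block
// allocation and flush the data to the chosen DataNode.
class StagingClient {
  static final int BLOCK_SIZE = 64 * 1024 * 1024;
  private final ByteArrayOutputStream localBuffer = new ByteArrayOutputStream();

  void write(byte[] data, NameNodeStub nameNode, DataNodeStub dataNode) throws IOException {
    localBuffer.write(data);
    if (localBuffer.size() >= BLOCK_SIZE) {
      String blockId = nameNode.allocateBlock();            // NameNode picks the destination block
      dataNode.writeBlock(blockId, localBuffer.toByteArray());
      localBuffer.reset();                                   // start staging the next block locally
    }
  }

  interface NameNodeStub { String allocateBlock(); }
  interface DataNodeStub { void writeBlock(String blockId, byte[] data); }
}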

33 | " a g e

The above approach has been adopted after careful consideration of the target applications that run on HDFS. These applications need streaming writes to files. If a client writes to a remote file directly without any client-side buffering, the network speed and the congestion in the network impact throughput considerably. This approach is not without precedent. Earlier distributed file systems, e.g. AFS, have used client-side caching to improve performance. A POSIX requirement has been relaxed to achieve higher performance of data uploads.

Replication Pipelining

When a client is writing data to an HDFS file, its data is first written to a local file as explained in the previous section. Suppose the HDFS file has a replication factor of three. When the local file accumulates a full block of user data, the client retrieves a list of DataNodes from the NameNode. This list contains the DataNodes that will host a replica of that block. The client then flushes the data block to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB), writes each portion to its local repository and transfers that portion to the second DataNode in the list. The second DataNode, in turn, starts receiving each portion of the data block, writes that portion to its repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. Thus, a DataNode can be receiving data from the previous one in the pipeline and at the same time forwarding data to the next one in the pipeline; the data is pipelined from one DataNode to the next.
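A rough sketch of the pipeline idea follows; DataNodeStage is a placeholder interface, not part of Hadoop, and the sequential loop only approximates the hand-off between nodes described above.

import java.util.List;

// Illustrative sketch of replication pipelining: each 4 KB portion of a block is stored by
// a DataNode and then passed on to the next DataNode in the list, so the data streams
// through the chain rather than being sent three separate times by the client.
class ReplicationPipeline {
  interface DataNodeStage { void store(byte[] portion); }

  static void pipeline(byte[] block, List<DataNodeStage> targets) {
    final int PORTION = 4 * 1024;
    for (int off = 0; off < block.length; off += PORTION) {
      int len = Math.min(PORTION, block.length - off);
      byte[] portion = new byte[len];
      System.arraycopy(block, off, portion, 0, len);
      for (DataNodeStage node : targets) {   // store locally, then "forward" to the next node
        node.store(portion);
      }
    }
  }
}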

Accessibility

HDFS can be accessed from applications in many different ways. Natively, HDFS provides a Java API for applications to use. A C language wrapper for this Java API is also available. In addition, an HTTP browser can be used to browse the files of an HDFS instance. Work is in progress to expose HDFS through the WebDAV protocol.
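As a small illustration of the native Java API mentioned above, the sketch below opens a file in HDFS and streams it to standard output; the namenode host, port and path are placeholders.

import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsCat {
  public static void main(String[] args) throws IOException {
    String uri = "hdfs://namenode:8020/user/demo/data.txt"; // example URI
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    FSDataInputStream in = null;
    try {
      in = fs.open(new Path(uri));
      IOUtils.copyBytes(in, System.out, 4096, false);       // stream the file to stdout
    } finally {
      IOUtils.closeStream(in);
    }
  }
}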

34 | " a g e

Space Reclamation

File Deletes and Undeletes

When a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in /trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS. A user can undelete a file after deleting it as long as it remains in the /trash directory. If a user wants to undelete a file that he or she has deleted, he or she can navigate the /trash directory and retrieve the file.

The /trash directory contains only the latest copy of the file that was deleted. The /trash directory is just like any other directory, with one special feature: HDFS applies specified policies to automatically delete files from this directory. The current default policy is to delete files from /trash that are more than 6 hours old. In the future, this policy will be configurable through a well-defined interface.

Decrease Replication Factor

When the replication factor of a file is reduced, the NameNode selects excess replicas that can be deleted. The next Heartbeat transfers this information to the DataNode. The DataNode then removes the corresponding blocks and the corresponding free space appears in the cluster. Once again, there might be a time delay between the completion of the setReplication API call and the appearance of free space in the cluster.
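A minimal sketch of the setReplication call mentioned above is shown here; the path is a placeholder. As noted, excess replicas are removed lazily, so free space appears only after later Heartbeats.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DecreaseReplication {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Lower the replication factor of an existing file to 2.
    fs.setReplication(new Path("/user/demo/big-file.dat"), (short) 2);
    fs.close();
  }
}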

Hadoop $iles"stems Hadoop has an a#stract notion of fi es%stem, of $hich HD"S is Just one imp ementation! The Qa&a a#stract c ass org!apache!hadoop!fs!"i eS%stem represents a fi es%stem in Hadoop, and there are se&era concrete imp ementations, $hich are descri#ed in fo o$ing ta# e!

) 9ocal file fs#9ocal$ile%"stem

files"stem

for

locall"

connected dis' with client-side chec'sums# Use Raw9ocal$ile%"s tem for a local files"stem with no chec'sums# Hadoop,s distri#uted fi es%stem! HD"S is designed to $or( efficient % HD$% hdfs hdfs!Distri#uted"i eS%stem in conJunction $ith Map9 Reduce! A fi es%stem pro&iding read9on % access to HD"S o&er HTTP! FDespite H$T( hftp hdfs!Hftp"i eS%stem its name, H"TP has no connection $ith "TP!G Often used $ith distcp

3 |"age

F1Para e Cop%ing $ith A fi es%stem pro&iding read9on % access to HD"S o&er HTTPS! FAgain, H%$T( hsftp Hdfs!Hsftp"i eS%stem this has no connection $ith "TP!G A fi es%stem a%ered on another fi es%stem for archi&ing fi es! Hadoop H)R har "s!Har"i eS%stem Archi&es are t%pica % used for archi&ing fi es in HD"S to reduce the namenode,s memor% usage! C oudStore Fformer % Sosmos fi es%stemG ;$%3Cl oud %tore5 $T( %<3Nati e5 A fi es%stem #ac(ed #% Ama-on S., $hich stores fi es in # oc(s %<3-loc ' -ased5 S. fs!s.!S."i eS%stem A Fmuch i(e HD"SG to o&ercome S.,s < +A fi e si-e imit! ftp s.n fs!ftp!"tp"i eS%stem fs!s.nati&e!Nati&eS."i eS%stem Sfs fs!(fs!Sosmos"i eS%stem is a distri#uted fi es%stem i(e HD"S or +oog e,s +"S, $ritten in CWW! A fi es%stem #ac(ed #% an "TP ser&er! A fi es%stem #ac(ed #% Ama-on S.!
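As a brief usage sketch, the scheme of the URI passed to FileSystem.get selects the concrete implementation from the table above; the host name below is a placeholder.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SchemeDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // file:// resolves to the local filesystem, hdfs:// to the distributed filesystem.
    FileSystem local = FileSystem.get(URI.create("file:///tmp"), conf);
    FileSystem hdfs  = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
    System.out.println(local.getClass().getName());
    System.out.println(hdfs.getClass().getName());
  }
}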

Hadoop )rchi es HD"S stores sma fi es inefficient %, since each fi e is stored in a # oc(, and # oc( metadata is he d in memor% #% the namenode! Thus, a arge num#er of sma fi es can eat up a ot of memor% on the namenode! FNote, ho$e&er, that sma fi es do not ta(e up an% more dis( space than is re)uired to store the ra$ contents of the fi e! "or e'amp e, a 4 MA fi e stored $ith a # oc( si-e of 465 MA uses 4 MA of dis( space, not 465 MA!G Hadoop Archi&es, or HAR fi es, are a fi e archi&ing faci it% that pac(s fi es into HD"S # oc(s more efficient %, there#% reducing namenode memor% usage $hi e sti a o$ing transparent access to fi es! In particu ar, Hadoop Archi&es can #e used as input to MapReduce!

Using Hadoop Archives

A Hadoop Archive is created from a collection of files using the archive tool. The tool runs a MapReduce job to process the input files in parallel, so you need a running MapReduce cluster in order to use it.
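As a rough illustration, and assuming an archive /my/files.har has already been built from /my/files with the archive tool, the files it contains can be read back through the har URI scheme; the archive name, paths and URI layout below are assumptions for this sketch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Rough sketch: list the contents of an existing Hadoop Archive through the har filesystem.
public class ListHar {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path inHar = new Path("har:///my/files.har/my/files"); // assumed archive layout
    FileSystem harFs = inHar.getFileSystem(conf);          // resolves to the HAR filesystem
    for (FileStatus status : harFs.listStatus(inHar)) {
      System.out.println(status.getPath());
    }
  }
}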

3) | " a g e

Limitations

There are a few limitations to be aware of with HAR files. Creating an archive creates a copy of the original files, so you need as much disk space as the files you are archiving in order to create the archive (although you can delete the originals once you have created the archive). There is currently no support for archive compression, although the files that go into the archive can be compressed (HAR files are like tar files in this respect). Archives are immutable once they have been created. To add or remove files, you must recreate the archive. In practice, this is not a problem for files that don't change after being written, since they can be archived in batches on a regular basis, such as daily or weekly. As noted earlier, HAR files can be used as input to MapReduce. However, there is no archive-aware InputFormat that can pack multiple files into a single MapReduce split, so processing lots of small files, even in a HAR file, can still be inefficient.

36 | " a g e

ANATOMY OF A MAPREDUCE JOB RUN

Four independent entities are involved when a MapReduce job is run:

• The client, which submits the MapReduce job.
• The jobtracker, which coordinates the job run. The jobtracker is a Java application whose main class is JobTracker.
• The tasktrackers, which run the tasks that the job has been split into. Tasktrackers are Java applications whose main class is TaskTracker.
• The distributed filesystem, which is used for sharing job files between the other entities.
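A minimal sketch of the client's role is shown below using the classic org.apache.hadoop.mapred API: the client packages the job configuration and hands it to the jobtracker through JobClient. The job name, input and output paths are placeholders, and the mapper and reducer classes are left out (they would be set on the JobConf in a real job).

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SubmitJob.class);
    conf.setJobName("example-job");                          // placeholder job name
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    // conf.setMapperClass(MyMapper.class);                  // hypothetical mapper
    // conf.setReducerClass(MyReducer.class);                // hypothetical reducer
    FileInputFormat.setInputPaths(conf, new Path("/user/demo/input"));
    FileOutputFormat.setOutputPath(conf, new Path("/user/demo/output"));
    JobClient.runJob(conf);  // submits the job to the jobtracker and waits for completion
  }
}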

3. | " a g e

Hadoop is now a part of:

Amazon S3

Amazon S3 (Simple Storage Service) is a data storage service. You are billed monthly for storage and data transfer. Transfer between S3 and Amazon EC2 is free. This makes use of S3 attractive for Hadoop users who run clusters on EC2. Hadoop provides two filesystems that use S3.

S3 Native FileSystem (URI scheme: s3n)

A native filesystem for reading and writing regular files on S3. The advantage of this filesystem is that you can access files on S3 that were written with other tools. Conversely, other tools can access files written using Hadoop. The disadvantage is the 5 GB limit on file size imposed by S3. For this reason it is not suitable as a replacement for HDFS (which has support for very large files).

S. A oc( "i eS%stem F/RI scheme8 s.G

A block-based filesystem backed by S3. Files are stored as blocks, just as they are in HDFS. This permits efficient implementation of renames. This filesystem requires you to dedicate a bucket to the filesystem; you should not use an existing bucket containing files, or write other files to the same bucket. The files stored by this filesystem can be larger than 5 GB, but they are not interoperable with other S3 tools.

There are two ways that S3 can be used with Hadoop's Map/Reduce: either as a replacement for HDFS using the S3 block filesystem (i.e. using it as a reliable distributed filesystem with support for very large files), or as a convenient repository for data input to and output from MapReduce, using either S3 filesystem. In the second case HDFS is still used for the Map/Reduce phase. Note also that by using S3 as an input to MapReduce you lose the data locality optimization, which may be significant.
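A minimal sketch of accessing the S3 native filesystem from Hadoop follows; the bucket name and credentials are placeholders, and the property names are those used by Hadoop releases of this vintage (they could also be set in the site configuration rather than in code).

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3List {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");       // placeholder credentials
    conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");
    FileSystem s3 = FileSystem.get(URI.create("s3n://example-bucket/"), conf);
    for (FileStatus status : s3.listStatus(new Path("s3n://example-bucket/logs/"))) {
      System.out.println(status.getPath());                     // list objects under the prefix
    }
  }
}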

FACEBOOK

30 | " a g e

"ace#oo(,s engineering team has posted some detai s on the too s it,s using to ana %-e the huge data sets it co ects! One of the main too s it uses is Hadoop that ma(es it easier to ana %-e &ast amounts of data! Some interesting tid#its from the post8

Some of these early projects have matured into publicly released features (like the Facebook Lexicon) or are being used in the background to improve the user experience on Facebook (by improving the relevance of search results, for example).

"ace#oo( has mu tip e Hadoop c usters dep o%ed no$ 9 $ith the #iggest ha&ing a#out 6<33 cpu cores and 4 PetaA%te of dis( space! The% are oading o&er 6<3 giga#%tes of compressed data Fo&er 6 tera#%tes uncompressedG into the Hadoop fi e s%stem e&er% da% and ha&e hundreds of Jo#s running each da% against these data sets! The ist of proJects that are using this infrastructure has pro iferated 9 from those generating mundane statistics a#out site usage, to others #eing used to fight spam and determine app ication )ua it%!

Over time, we have added classic data warehouse features like partitioning, sampling and indexing to this environment. This in-house data warehousing layer over Hadoop is called Hive.

YAHOO!

Yahoo! recently launched the world's largest Apache Hadoop production application. The Yahoo! Search Webmap is a Hadoop application that runs on a more than 10,000-core Linux cluster and produces data that is now used in every Yahoo! Web search query. The Webmap build starts with every Web page crawled by Yahoo! and produces a database of all known Web pages and sites on the internet, and a vast array of data about every page and site. This derived data feeds the Machine Learned Ranking algorithms at the heart of Yahoo! Search. Some Webmap size data:

• Number of links between pages in the index: roughly 1 trillion links
• Size of output: over 300 TB, compressed
• Number of cores used to run a single Map-Reduce job: over 10,000
• Raw disk used in the production cluster: over 5 petabytes

This process is not new. What is new is the use of Hadoop. Hadoop has allowed us to run the identical processing we ran pre-Hadoop on the same cluster in 66% of the time our previous system took. It does that while simplifying administration.

43 | " a g e

REFERENCES

• Tom White, Hadoop: The Definitive Guide, O'Reilly.
• http://www.cloudera.com/hadoop-training-thinking-at-scale
• http://developer.yahoo.com/hadoop/tutorial/module1.html
• http://hadoop.apache.org/core/docs/current/api/
• http://hadoop.apache.org/core/version_control.html

42 | " a g e
