Papers by Sebastian Kaune
This paper proposes a new delivery-centric abstraction. A delivery-centric abstraction allows applications to generate content requests agnostic to location or protocol, with the additional ability to stipulate high-level requirements regarding such things as performance, security, resource consumption, and monetary cost. A delivery-centric system therefore constantly adapts to fulfil these requirements, given the constraints of the environment. This abstraction has been realised through a delivery-centric middleware called Juno, which uses a reconfigurable software architecture to (i) discover multiple sources of an item of content, (ii) model each source's ability to provide the content, and (iii) adapt to interact with the source(s) that can best fulfil the application's requirements. Juno therefore utilises existing providers in a backwards-compatible way, supporting immediate deployment. This paper evaluates Juno using Emulab to validate its ability to adapt to its environment.
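The source-selection step described above (model each source, then pick the one best fulfilling the application's requirements) can be sketched as follows. This is an illustrative sketch, not Juno's actual API: the names `Source`, `select_source`, and the requirement fields are all assumptions.

```python
# Illustrative sketch of delivery-centric source selection (not Juno's
# real interface): model each candidate source, then choose the one
# that best satisfies the application's high-level requirements.

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    throughput_mbps: float   # modelled delivery performance
    cost_per_gb: float       # modelled monetary cost

def select_source(sources, min_throughput, max_cost):
    """Return the cheapest source meeting the requirements, or None
    if no source can fulfil them."""
    feasible = [s for s in sources
                if s.throughput_mbps >= min_throughput
                and s.cost_per_gb <= max_cost]
    return min(feasible, key=lambda s: s.cost_per_gb, default=None)

sources = [Source("http-mirror", 8.0, 0.02),
           Source("bittorrent-swarm", 20.0, 0.0),
           Source("cdn", 50.0, 0.05)]
best = select_source(sources, min_throughput=10.0, max_cost=0.03)
print(best.name)  # bittorrent-swarm
```

A real delivery-centric system would re-run this selection as the environment changes, swapping delivery plug-ins at runtime.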
Decentralized monitoring mechanisms enable obtaining a global view of the attributes and state of peer-to-peer systems. Such mechanisms are therefore essential for managing and optimizing peer-to-peer systems. Nonetheless, when deciding on an appropriate mechanism, system designers face a major challenge: comparing existing monitoring mechanisms is complex because their evaluation methodologies differ widely. To overcome this challenge and to achieve a fair evaluation and comparison, we present a set of dedicated benchmarks for monitoring mechanisms. These benchmarks evaluate relevant functional and non-functional requirements of monitoring mechanisms using appropriate workloads and metrics. We demonstrate the feasibility and expressiveness of our benchmarks by evaluating and comparing three different monitoring mechanisms and highlighting their performance and overhead.
A recommender system can be used to suggest potentially interesting content to users based on their previous consumption behavior. Such services have already become common in centralized systems such as Amazon, and approaches exist for decentralized recommender systems. However, common P2P recommender systems expose the user's preferences to the whole system, which is not desirable if privacy is required. Realizing a recommender system in a private P2P environment is not a trivial task, since we can neither gather the user data at central servers nor simply spread them across the community. In this work we propose a private file-sharing application based on social contacts. Instead of gathering all information about users in one place, users exchange information only with their social contacts. We show how a personalized recommender system can be built in such an environment.
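The core idea of recommending from social contacts only can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's algorithm: the function name `recommend` and the popularity-count heuristic are assumptions.

```python
# Illustrative sketch (not the paper's actual method): rank items
# using only the consumption histories shared by direct social
# contacts, never a global profile store, preserving privacy.

from collections import Counter

def recommend(own_items, contact_histories, top_k=2):
    """Suggest the items most popular among social contacts that the
    user has not consumed yet."""
    counts = Counter(item
                     for history in contact_histories
                     for item in history
                     if item not in own_items)
    return [item for item, _ in counts.most_common(top_k)]

own = {"song1", "song2"}
contacts = [{"song2", "song3"}, {"song3", "song4"}, {"song4", "song3"}]
print(recommend(own, contacts))  # ['song3', 'song4']
```

Because each peer sees only its contacts' shared histories, no single node ever holds the full community's preference data.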
This paper presents SwarmTella, a novel architecture for content distribution in community scenarios. SwarmTella is a general framework allowing the distribution of content of different types (file sharing, VoD distribution, and live streaming). For this purpose, it uses delivery techniques based on swarming, while content search is based on multi-attribute semantic queries. Both multi-attribute semantic search and swarming delivery are exploited by a novel Ranking Algorithm, which is used to create the structure of communities. Furthermore, SwarmTella introduces the Secure Permanent Peer-ID, which allows establishing a long-term community structure and drastically reduces the effects of churn. Finally, we have evaluated SwarmTella through simulation and compared it with Gnutella (the most widely used fully distributed, unstructured P2P system). Our results show that SwarmTella clearly outperforms Gnutella: it increases the probability of search success by 10% while reducing the bandwidth used by the search procedure by up to 60%. The simulations also demonstrate that the Ranking Algorithm and the Secure Permanent Peer-ID are suitable tools for forming communities with a stable structure over time.
Computing Research Repository, 2009
BitTorrent suffers from one fundamental problem: the long-term availability of content. This occurs on a massive scale, with 38% of torrents becoming unavailable within the first month. In this paper we explore this problem by performing two large-scale measurement studies covering 46K torrents and 29M users. The studies go significantly beyond any previous work by combining per-node, per-torrent, and system-wide observations to ascertain the causes, characteristics, and repercussions of file unavailability. The study confirms the conclusion from previous work that seeders have a significant impact on both performance and availability. However, we also present some crucial new findings: (i) the presence of seeders is not the sole factor involved in file availability, (ii) 23.5% of nodes that operate in seedless torrents can finish their downloads, and (iii) BitTorrent availability is discontinuous, operating in cycles of temporary unavailability. In light of these new findings, we consider it important to revisit the solution space; to this end, we perform large-scale trace-based simulations to explore the potential of two abstract approaches.
The evaluation of peer-to-peer (P2P) systems is crucial for understanding their performance and therefore their feasibility in the real world. Different techniques, such as testbeds, analytical modelling, and simulations, can be used to evaluate system performance. For peer-to-peer systems, simulations are often the most reasonable approach, simply because P2P systems are both inherently large-scale and complex.
Information Technology, 2007
Abstract: Peer-to-peer is a fundamental design principle and represents a paradigm shift for communication in computer networks. This article first defines the characteristics that constitute peer-to-peer systems. The research group "QuaP2P" has structured its working areas around four quality attributes. Following this structure, the article then summarizes the current state of the art and the challenges in peer-to-peer research. This gives the reader a structured overview for engaging with the essential work in this highly topical research field.
Energy consumption is responsible for a large fraction of the costs in today's content distribution networks. In upcoming decentralized architectures based on set-top boxes (STBs) acting as tiny servers, idle times can dominate distribution costs, since no cooling costs occur and Internet access is often paid at a flat rate. The often-assumed always-on property of STBs provides high availability but may also waste up to 93% of the baseline energy. In this paper we consider suitable standby policies that reduce energy consumption while still significantly offloading content servers. We devise optimal and heuristic standby policies and evaluate them in a realistic scenario, showing that near-optimal behavior can be reached by utilizing the specific features of STBs.
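A heuristic standby policy of the kind described above can be sketched very simply. This is an illustrative sketch, not one of the paper's actual policies: the idle threshold, the peak-hours window, and the function name `should_standby` are all assumptions.

```python
# Illustrative sketch of a heuristic STB standby policy (not the
# paper's policy): sleep when the box has been idle for a while and
# demand is unlikely, i.e. outside peak viewing hours.

def should_standby(idle_minutes, hour_of_day,
                   idle_threshold=30, peak_hours=range(18, 23)):
    """Return True if the STB should enter standby: idle longer than
    the threshold and outside peak hours, when requests are unlikely."""
    return idle_minutes >= idle_threshold and hour_of_day not in peak_hours

print(should_standby(45, 3))    # True: long idle, middle of the night
print(should_standby(45, 20))   # False: peak evening hours
print(should_standby(5, 3))     # False: recently active
```

The trade-off such a policy navigates is exactly the one the abstract names: each sleep period saves baseline energy but temporarily reduces the content the STB can offload from servers.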
Peer-to-peer and mobile networks have gained significant attention from both the research community and industry. Applying the peer-to-peer paradigm in mobile networks leads to several problems regarding the bandwidth demand of peer-to-peer networks: time-critical messages are delayed and delivered unacceptably slowly, and scarce bandwidth is wasted on messages of lower priority. This paper therefore focuses on bandwidth management at the overlay layer and how these problems can be solved. We present HiPNOS.KOM, a priority-based scheduling and active queue management system that guarantees better QoS for higher-priority messages in the upper network layers of peer-to-peer systems. Evaluation using the peer-to-peer simulator PeerfactSim.KOM shows that HiPNOS.KOM brings significant improvement in Kademlia compared to FIFO and drop-tail, the strategies currently used on each peer: user-initiated lookups in Kademlia complete with a 24% shorter operation duration when using HiPNOS.KOM.
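The combination of priority scheduling and active queue management described above can be sketched as follows. This is an illustrative sketch, not HiPNOS.KOM's implementation: the class name, capacity-based drop policy, and priority encoding are all assumptions.

```python
# Illustrative sketch (not HiPNOS.KOM itself): a bounded priority
# queue that dequeues high-priority messages first and, when full,
# drops the lowest-priority message instead of the newest arrival.

import heapq

class PriorityScheduler:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []          # min-heap of (priority, seq, message)
        self.seq = 0             # FIFO tie-breaker within a priority

    def enqueue(self, priority, message):
        """Lower numbers mean higher priority. If the queue overflows,
        evict the entry with the worst (largest) priority value."""
        heapq.heappush(self.queue, (priority, self.seq, message))
        self.seq += 1
        if len(self.queue) > self.capacity:
            worst = max(range(len(self.queue)),
                        key=lambda i: self.queue[i][:2])
            self.queue.pop(worst)
            heapq.heapify(self.queue)

    def dequeue(self):
        return heapq.heappop(self.queue)[2] if self.queue else None

s = PriorityScheduler(capacity=2)
s.enqueue(2, "maintenance ping")   # low priority
s.enqueue(0, "user lookup")        # high priority (time-critical)
s.enqueue(1, "replication")        # overflow: "maintenance ping" dropped
print(s.dequeue())  # user lookup
```

This captures the abstract's point: time-critical messages (e.g. user lookups) are never stuck behind, or dropped in favour of, low-priority maintenance traffic.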
As peer-to-peer systems evolve from simplistic application-specific overlays to middleware platforms hosting a range of potential applications, it has become evident that increasingly configurable approaches are required to ensure appropriate overlay support for divergent applications. This is exacerbated by the increasing heterogeneity of the networked devices expected to host the overlay. Traditional adaptation approaches rely on simplistic, design-time, isolated fine-tuning of overlay operations; this, however, cannot fully support the level of configurability required by next-generation peer-to-peer systems. To remedy this, a middleware overlay framework is designed that promotes the use of architectural reconfiguration for adaptive purposes. Underpinning this is a generic, reusable component pattern that utilises software reflection to enable rich and extensible adaptation of overlays beneath divergent applications operating in heterogeneous environments. The framework is evaluated through a number of case-study experiments showing how overlays developed with it have been adapted to address a range of application and environmental variations.
The heterogeneous, large-scale, and decentralised nature of peer-to-peer systems creates significant issues when deploying new functionality and adapting peer behaviour. The ability to autonomously deploy new adaptive functionality is therefore highly beneficial. This paper investigates middleware support for evolving and adapting peers in divergent systems through reflective component-based design. This approach allows self-contained functionality to exist in the network as a primary entity. This functionality is autonomously propagated to suitable peers, allowing nodes to be evolved and adapted to their individual constraints and the specific requirements of their environment. As a result, effective functionality flourishes whilst suboptimal functionality dies out, creating a self-managed infrastructure that supports the deployment of functionality following the evolutionary theory of natural selection. This approach is evaluated through simulations to highlight the potential of using natural selection for the deployment and management of software evolution.
Multimedia Systems, 2011
Peer-to-peer (P2P) techniques for multimedia streaming have been shown to be a good enhancement to traditional client/server methods for reducing costs and increasing robustness. Because P2P systems are highly dynamic, the main challenge that has to be addressed remains supporting the general resilience of the system. Various challenges arise when building a resilient P2P streaming system, such as network failures and system dynamics. In this paper, we first classify the different challenges facing P2P streaming and then present and analyze the possible countermeasures. We classify resilience mechanisms as either core mechanisms, which are part of the system, or cross-layer mechanisms, which use information from different communication layers and might incur additional costs. We analyze and present resilience mechanisms from an engineering point of view, such that a system engineer can use our analysis as a guide to build a resilient P2P streaming system with different mechanisms and for various application scenarios.
The peer-to-peer (P2P) paradigm has greatly influenced the design of today's Internet applications. It has gained both user popularity and significant attention from the research community, which aims to address the various issues arising from the decentralized, autonomous, and self-organizing nature of P2P systems [379]. In this regard, quantitative and qualitative analysis at large scale is a crucial part of that research. When evaluating widely deployed peer-to-peer systems, however, an analytical approach becomes ineffective due to the large number of simplifications required. Therefore, conclusions about the real-world performance of P2P systems can only be drawn either by launching an Internet-based prototype or by creating a simulation environment that accurately captures the major characteristics of the heterogeneous Internet, e.g. round-trip times, packet loss, and jitter.
Existing approaches for modelling the Internet delay space predict end-to-end delays between two arbitrary hosts as static values. Further, they do not capture the characteristics caused by geographical constraints. Peer-to-peer (P2P) systems, however, are often very sensitive to the underlying delay characteristics of the Internet, since these directly influence system performance.
While the performance of peer-to-peer (p2p) systems largely depends on the cooperation of the member nodes, there is an inherent conflict between individual self-interest and communal social welfare. In this regard, many interesting parallels can be drawn between p2p systems and cooperation in human societies. On the one hand, human societies are organized around a certain level of altruistic behavior; on the other hand, individuals tend to overuse public goods if they are free to do so. This paper proposes a new incentive scheme that extracts and modifies sociological incentive patterns, based on the Tragedy of the Commons analogy, to work efficiently in a p2p environment. It is shown through simulations that this scheme encourages honest peers whilst successfully blocking non-contributors.
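A contribution-based incentive check of the general kind discussed above can be sketched in a few lines. This is an illustrative sketch only, not the paper's scheme: the ratio threshold, the newcomer grace volume, and the name `may_download` are all assumptions.

```python
# Illustrative sketch (not the paper's incentive scheme): serve a
# peer's request only if its upload/download ratio stays above a
# threshold, blocking free-riders while tolerating newcomers.

def may_download(uploaded_mb, downloaded_mb,
                 min_ratio=0.5, newcomer_grace_mb=100):
    """Newcomers get a grace volume; afterwards they must keep
    contributing at least min_ratio of what they consume."""
    if downloaded_mb <= newcomer_grace_mb:
        return True
    return uploaded_mb / downloaded_mb >= min_ratio

print(may_download(0, 50))      # True: still within newcomer grace
print(may_download(300, 500))   # True: ratio 0.6 >= 0.5
print(may_download(10, 500))    # False: free-rider, ratio 0.02
```

The grace volume mirrors the sociological observation in the abstract: a community tolerates some altruistic over-provisioning but sanctions sustained overuse of the public good.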
BitTorrent has become the de facto standard for peer-to-peer content delivery; however, it suffers from one fundamental problem: the long-term availability of content. Previous work has attributed this to the so-called seeder promotion problem, in which peers refuse to continue serving content after their own download has completed. As yet, no deployed solution to this problem exists. In this paper, we objectively investigate the solution space for dealing with the seeder promotion problem. Specifically, both single-torrent and cross-torrent approaches are investigated to ascertain which is superior based on three key metrics: availability, performance, and fairness. To achieve this, we performed two large-scale BitTorrent measurement studies covering 46K torrents and 29M users. Through these, we first quantify the seriousness of the seeder promotion problem before exploiting the data logs to execute accurate trace-based simulations of the different solutions considered. Using the results, we ascertain and describe the trade-offs between four general solutions: extending seeding times, cross-torrent bartering, local persistent histories, and global shared histories. We find that single-torrent solutions are profoundly impractical given the user behaviour observed in our studies. In contrast, we discover that the different cross-torrent approaches can offer a far more effective solution for satisfying (to varying degrees) the need for high availability, good performance, and fairness between users.
Networked virtual environments (NVEs) are an evolving trend, combining millions of users in an interactive community. A distributed NVE platform promises lower administration costs and can benefit from research done in the peer-to-peer (p2p) domain. To reuse existing, mature p2p overlays for NVEs, a comparative evaluation has to be done in the same environment (e.g. peer resources, peer behavior, churn), using appropriate test cases (scenarios) and observing relevant performance metrics. In this paper we present a benchmarking approach for p2p overlays in the context of NVEs. We define the related quality attributes, scenarios, and metrics and use them to evaluate Chord and Kademlia, the most popular p2p overlays, and assess their suitability for NVEs.
Multimedia streaming of mostly user-generated content is an ongoing trend, not only since the advent of Last.fm and YouTube. A distributed, decentralized multimedia streaming architecture can spread the (traffic) costs across the user nodes, but it must provide load balancing and consider the heterogeneity of the participating nodes. We propose a DHT-based information gathering and analysis architecture that controls the assignment of streaming requests in the system, and we thoroughly evaluate it against a distributed stateless strategy. We evaluated the impact of the key parameters of the allocation function, which considers the capabilities of the nodes and their contribution to the system. Identifying the quality-bandwidth tradeoffs of the information gathering system, we show that our proposed system achieves 53% better load balancing and significantly improves the efficiency of the system.
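An allocation function that weighs node capability against past contribution, as described above, can be sketched as follows. This is an illustrative sketch, not the paper's actual formula: the linear blend, the weight `alpha`, and the field names are all assumptions.

```python
# Illustrative sketch (not the paper's allocation function): score
# each candidate node by blending its spare capacity with its past
# contribution, then assign the streaming request to the top scorer.

def allocation_score(free_bandwidth_kbps, contributed_mb, alpha=0.7):
    """Blend spare capacity and past contribution; alpha tunes the
    quality-vs-fairness tradeoff of the assignment."""
    return alpha * free_bandwidth_kbps + (1 - alpha) * contributed_mb

# node -> (spare bandwidth in kbps, total contribution in MB)
nodes = {"a": (500, 2000), "b": (900, 100), "c": (200, 5000)}
best = max(nodes, key=lambda n: allocation_score(*nodes[n]))
print(best)  # c
```

Raising `alpha` favours immediate streaming quality (spare bandwidth); lowering it rewards long-term contributors, which is the tradeoff the abstract's evaluation explores.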