<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="program.xsl"?>
<program>
  <session>
    <code>MS1</code>
    <sessiontitle>Mini-conference Session 1</sessiontitle>
    <sessionsubtitle>Fault Management</sessionsubtitle>
    <sessionchair>Mauro Tortonesi, University of Ferrara, Italy</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Monday, 9 November, 2015</date>
    <range>09:00-10:00</range>
    <starttime>2015-11-09T09:00:00-05:00</starttime>
    <endtime>2015-11-09T10:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>09:00</starttime>
        <endtime>09:20</endtime>
        <paperid>1570161213</paperid>
        <sessionid>MS1.1</sessionid>
        <papertitle>LogCluster - a data clustering and pattern mining algorithm for event logs</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Modern IT systems often produce large volumes of event logs, and event pattern discovery is an important log management task. For this purpose, data mining methods have been suggested in many previous works. In this paper, we present the LogCluster algorithm which implements data clustering and line pattern mining for textual event logs. The paper also describes an open source implementation of LogCluster.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Risto</givenname>
              <surname>Vaarandi</surname>
            </name>
            <id>92021</id>
            <affiliation>Tallinn University of Technology</affiliation>
            <country>Estonia</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Mauno</givenname>
              <surname>Pihelgas</surname>
            </name>
            <id>1113567</id>
            <affiliation>Tallinn University of Technology</affiliation>
            <country>Estonia</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>09:20</starttime>
        <endtime>09:40</endtime>
        <paperid>1570163689</paperid>
        <sessionid>MS1.2</sessionid>
        <papertitle>Proactive Failure Detection Learning Generation Patterns of Large-scale Network Logs</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>With the growth of services in IP networks (e.g. IPTV, VoIP) that demand higher quality and reliability than in previous decades, network operators are required to perform proactive operations that quickly detect the signs of critical failures and prevent future problems. Network log data, such as router syslog, are a rich source for such operations. However, log data consist of a large number of text messages written in an unstructured format, and they contain various types of network events ranging from critical hardware failures to normal operator login events. Thus, it has become impossible for network operators to find the genuinely important logs that lead to serious network problems and to work out new alarm rules. We propose a log analysis system for the proactive detection of failures before they occur. It automatically detects abnormal patterns of log messages from a massive amount of data without previous knowledge of them. Our key observation is that the abnormality of logs depends not just on the keywords in the messages, but on generation patterns such as burstiness. Based on this observation, our system consists of three functions: (i) extracting log templates automatically and quickly from a massive amount of unstructured log data; (ii) constructing log feature vectors that characterize the generation patterns of log messages, such as frequency, periodicity, and burstiness; and (iii) using a supervised machine learning approach to associate critical failures with the log data that appeared before them.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Tatsuaki</givenname>
              <surname>Kimura</surname>
            </name>
            <id>686951</id>
            <affiliation>NTT</affiliation>
            <country>Japan</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Akio</givenname>
              <surname>Watanabe</surname>
            </name>
            <id>956969</id>
            <affiliation>NTT Corporation &amp; NTT Network Technology Laboratories</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Tsuyoshi</givenname>
              <surname>Toyono</surname>
            </name>
            <id>956977</id>
            <affiliation>NTT Corporation</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Keisuke</givenname>
              <surname>Ishibashi</surname>
            </name>
            <id>326227</id>
            <affiliation>NTT</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>09:40</starttime>
        <endtime>10:00</endtime>
        <paperid>1570164703</paperid>
        <sessionid>MS1.3</sessionid>
        <papertitle>Recommending Ticket Resolution using Feature Adaptation</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>In recent years, IT Service Providers have been rapidly introducing automation into their service delivery model. Driven by market pressure to reduce cost and maintain quality of services, they are looking for technologies that allow rapid progress towards truly automated service delivery; that is, the ability to deliver the same service automatically, using the same process, with the same quality.

Software monitoring systems are designed to actively collect and signal event occurrences and, when necessary, automatically generate incident tickets. Repeating events generate similar tickets, whose resolutions are likely to be found among those of earlier tickets.

In our work we develop techniques to recommend an appropriate resolution for incoming events by exploiting similarities between the events and the historical resolutions of similar events. The traditional KNN (K-Nearest Neighbor) algorithm was first applied to recommend resolutions for incoming tickets. Massive heterogeneous applications, as well as various monitoring software, run on clients' servers to accomplish required tasks and to monitor system health through different metrics. This leads to the generation of correlated tickets that have different symptom descriptions but similar resolutions. Furthermore, changes in a server's environment can produce a similar situation, in which ticket descriptions differ before and after the change but may have similar resolutions. These correlated tickets cause performance degradation in ticket resolution recommendation. Therefore, we propose using SCL (structural correspondence learning) based consecutive feature adaptation to uncover feature mappings across different time intervals. Moreover, to gain more insight into the periodic regularities present in our ticket datasets, we apply our algorithm to tickets grouped by different time interval granularities. Extensive empirical evaluations on real-world ticket data sets demonstrate the effectiveness and efficiency of the proposed methods.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Wubai</givenname>
              <surname>Zhou</surname>
            </name>
            <id>1295303</id>
            <affiliation>Florida International University</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Tao</givenname>
              <surname>Li</surname>
            </name>
            <id>133584</id>
            <affiliation>Florida International University</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Larisa</givenname>
              <surname>Shwartz</surname>
            </name>
            <id>134381</id>
            <affiliation>IBM Research</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Genady</givenname>
              <surname>Grabarnik</surname>
            </name>
            <id>861547</id>
            <affiliation>St. John's University</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>MS2</code>
    <sessiontitle>Mini-conference Session 2</sessiontitle>
    <sessionsubtitle>Cloud Management</sessionsubtitle>
    <sessionchair>Mauro Tortonesi, University of Ferrara, Italy</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Monday, 9 November, 2015</date>
    <range>10:30-11:50</range>
    <starttime>2015-11-09T10:30:00-05:00</starttime>
    <endtime>2015-11-09T11:50:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>10:30</starttime>
        <endtime>10:50</endtime>
        <paperid>1570163151</paperid>
        <sessionid>MS2.1</sessionid>
        <papertitle>Design of a Hierarchical Software-Defined Storage System for Data-Intensive Multi-Tenant Cloud Applications</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Software-Defined Storage (SDS) is an evolving concept in which the management and provisioning of data storage is decoupled from the physical storage hardware, and virtualization is often used to provide the required storage resources. Data-intensive multi-tenant SaaS applications running on the public cloud could benefit from the concepts introduced by SDS by managing the allocation of tenant data from the tenant's perspective, taking custom tenant policies and preferences into account.

In this paper, we propose the design of a scalable multi-tenant SDS system. In our approach, tenants are hierarchically clustered based on multiple scenario-specific characteristics. The SDS system consists of two main components, a multi-tenant data storage module running on the application servers, responsible for the communication between the application servers and the storage pool, and a storage elasticity component responsible for the dynamic (re-)allocation of tenant data over the available storage resources. We introduce the Hierarchical Bin Packing algorithm, used by the storage elasticity component for determining an optimized distribution of tenant data based on the hierarchical tenant tree.

We evaluate our system by means of two case studies based on real-life data sets. Experiments confirm that the Hierarchical Bin Packing algorithm achieves good performance, with execution times below 100 milliseconds to calculate the allocation for 1000 tenants in a worst-case scenario. Furthermore, our system achieves an average utilization of the storage resources close to the configured allocation factor, with reallocation of tenant data balanced over time.
</abstract>
        <authors>
          <author>
            <name>
              <givenname>Pieter-Jan</givenname>
              <surname>Maenhaut</surname>
            </name>
            <id>1088065</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Hendrik</givenname>
              <surname>Moens</surname>
            </name>
            <id>912293</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bruno</givenname>
              <surname>Volckaert</surname>
            </name>
            <id>117109</id>
            <affiliation>University of Ghent &amp; IBBT</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Veerle</givenname>
              <surname>Ongenae</surname>
            </name>
            <id>1089727</id>
            <affiliation>Ghent University</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Filip</givenname>
              <surname>De Turck</surname>
            </name>
            <id>97039</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>10:50</starttime>
        <endtime>11:10</endtime>
        <paperid>1570163857</paperid>
        <sessionid>MS2.2</sessionid>
        <papertitle>Forecasting Methods for Cloud Hosted Resources, a comparison</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Cloud management systems, specifically with the adoption of elastic resource services, enable dynamic adjustment of cloud-hosted resources and provisioning. In order to provision effectively for the dynamic workloads presented on cloud platforms, an accurate forecast of the load on the cloud resources is required. In this paper, we investigate various forecasting methods presented in recent research and identify a set of metrics used to evaluate and compare forecasting methods on prediction performance. We investigate the improvement in accuracy gained by combining three of the best-performing models into one model, using a straight average and a combination neural network. From our evaluations on Google's Cluster dataset, we find that our auto-regression model and feed-forward neural network methods perform best as measured by our time-series and provisioning-specific metrics. We also show an improvement in accuracy when combining these models into an ensemble model.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Manrich</givenname>
              <surname>van Greunen</surname>
            </name>
            <id>1276307</id>
            <affiliation>Stellenbosch University &amp; MIH Media Lab</affiliation>
            <country>South Africa</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Herman</givenname>
              <surname>Engelbrecht</surname>
            </name>
            <id>818053</id>
            <affiliation>Stellenbosch University &amp; MIH Medialab</affiliation>
            <country>South Africa</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:10</starttime>
        <endtime>11:30</endtime>
        <paperid>1570165185</paperid>
        <sessionid>MS2.3</sessionid>
        <papertitle>ICC: An Incentive-Compatible Inter-Cloud Communication Traffic Management Mechanism</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>In this paper we introduce the Inter-Cloud Communication Traffic Management mechanism (ICC). ICC performs rate control over the ISP transit link(s), aiming to attain a target reduction of transit cost. ICC reduces the ISP's transit charge by shaping a portion of the inter-domain traffic (e.g. delay-tolerant inter-cloud traffic) that has been marked as time-shiftable by the traffic source, i.e. by the business customer of the ISP, such as a cloud/data center. In particular, ICC reduces the transmission rate of marked traffic at peak 5-min billable intervals and increases it at off-peak intervals, acting at even shorter timescales according to a novel rate-adaptation algorithm. We evaluate ICC numerically by employing real traffic traces. The results reveal that ICC can significantly reduce the ISP's transit charge, and thus can be a promising and practically applicable solution for ISPs, while also being beneficial for their customers, whom the ISP should incentivize by sharing its savings.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Manos</givenname>
              <surname>Dramitinos</surname>
            </name>
            <id>2251</id>
            <affiliation>Athens University of Economics and Business</affiliation>
            <country>Greece</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>George</givenname>
              <surname>Stamoulis</surname>
            </name>
            <id>8078</id>
            <affiliation>Athens University of Economics and Business</affiliation>
            <country>Greece</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:30</starttime>
        <endtime>11:50</endtime>
        <paperid>1570164825</paperid>
        <sessionid>MS2.4</sessionid>
        <papertitle>A Resource Allocation Mechanism for Video Mixing as a Cloud Computing Service in Multimedia Conferencing Applications</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Multimedia conferencing is the conversational exchange of multimedia content between multiple parties. It has a wide range of applications (e.g. Massively Multiplayer Online Games (MMOGs) and distance learning). Many multimedia conferencing applications use video extensively, so video mixing in conferencing settings is of critical importance. Cloud computing is a technology that can solve the scalability issue in multimedia conferencing, while bringing other benefits such as elasticity, efficient use of resources, rapid development, and the introduction of new applications. However, the cloud-based multimedia conferencing approaches proposed so far have several deficiencies when it comes to efficient resource usage while meeting Quality of Service (QoS) requirements. This paper proposes a solution to optimize resource allocation for a cloud-based video mixing service in multimedia conferencing applications, which can support scalability in terms of the number of users while guaranteeing QoS. We formulate the resource allocation problem mathematically as an Integer Linear Programming (ILP) problem and design a heuristic for it. Simulation results show that our resource allocation model can support more participants than the state of the art, while honoring QoS with respect to end-to-end delay.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Abbas</givenname>
              <surname>Soltanian</surname>
            </name>
            <id>1215887</id>
            <affiliation>Concordia University</affiliation>
            <country>Canada</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Mohammad</givenname>
              <surname>Salahuddin</surname>
            </name>
            <id>420195</id>
            <affiliation>University of Quebec at Montreal &amp; Concordia University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Halima</givenname>
              <surname>Elbiaze</surname>
            </name>
            <id>88448</id>
            <affiliation>University of Quebec at Montreal</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Roch</givenname>
              <surname>Glitho</surname>
            </name>
            <id>1096325</id>
            <affiliation>Concordia University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>MS3</code>
    <sessiontitle>Mini-conference Session 3</sessiontitle>
    <sessionsubtitle>Network Function Virtualization and Security</sessionsubtitle>
    <sessionchair>Jürgen Schönwälder, Jacobs University, Germany</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Monday, 9 November, 2015</date>
    <range>13:30-14:50</range>
    <starttime>2015-11-09T13:30:00-05:00</starttime>
    <endtime>2015-11-09T14:50:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>13:30</starttime>
        <endtime>13:50</endtime>
        <paperid>1570160515</paperid>
        <sessionid>MS3.1</sessionid>
        <papertitle>On Orchestrating Virtual Network Functions</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Middleboxes or network appliances like firewalls, proxies, and WAN optimizers have become an integral part of today's ISP and enterprise networks. Middlebox functionalities are usually deployed on expensive and proprietary hardware that require trained personnel for deployment and maintenance. Middleboxes contribute significantly to a network's capital and operational costs. In addition, organizations often require their traffic to pass through a specific sequence of middleboxes for compliance with security and performance policies. This makes the middlebox deployment and maintenance tasks even more complicated. Network Function Virtualization (NFV) is an emerging and promising technology that is envisioned to overcome these challenges. It proposes to move packet processing from dedicated hardware middleboxes to software running on commodity servers. In NFV terminology, software middleboxes are referred to as Virtual Network Functions (VNFs). It is a challenging problem to determine the required number and placement of VNFs that optimize network operational costs and utilization, without violating service level agreements. We call this the VNF Orchestration Problem (VNF-OP) and provide an Integer Linear Programming (ILP) formulation with implementation in CPLEX. We also provide a dynamic programming based heuristic to solve larger instances of VNF-OP. Trace driven simulations on real-world network topologies demonstrate that the heuristic can provide solutions that are within 1.3 times of the optimal solution. Our experiments suggest that a VNF based approach can provide more than 4x reduction in the operational cost of a network.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Md. Faizul</givenname>
              <surname>Bari</surname>
            </name>
            <id>610615</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Shihabur Rahman</givenname>
              <surname>Chowdhury</surname>
            </name>
            <id>341839</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Reaz</givenname>
              <surname>Ahmed</surname>
            </name>
            <id>147715</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Raouf</givenname>
              <surname>Boutaba</surname>
            </name>
            <id>5035</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>13:50</starttime>
        <endtime>14:10</endtime>
        <paperid>1570160519</paperid>
        <sessionid>MS3.2</sessionid>
        <papertitle>Behavioral and Dynamic Security Functions Chaining For Android Devices</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>We present an approach for dynamically outsourcing and composing security functions for mobile devices, according to the network behavior of their running applications. Applications are characterized from a network point of view using data mining and clustering techniques, with the aim of selecting appropriate security functions for them. Software-defined networking mechanisms are employed to chain the selected functions and to redirect mobile app traffic through the resulting security compositions, which can be fully outsourced or split between in-cloud and on-device. Both a prototype and extensive simulations demonstrate the feasibility of the approach and assess its benefits.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Gaëtan</givenname>
              <surname>Hurel</surname>
            </name>
            <id>815421</id>
            <affiliation>INRIA</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Remi</givenname>
              <surname>Badonnel</surname>
            </name>
            <id>313781</id>
            <affiliation>TELECOM Nancy - LORIA/INRIA</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Abdelkader</givenname>
              <surname>Lahmadi</surname>
            </name>
            <id>313542</id>
            <affiliation>LORIA</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Olivier</givenname>
              <surname>Festor</surname>
            </name>
            <id>95496</id>
            <affiliation>INRIA Nancy - Grand Est</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:10</starttime>
        <endtime>14:30</endtime>
        <paperid>1570165241</paperid>
        <sessionid>MS3.3</sessionid>
        <papertitle>VGuard: A Distributed Denial of Service Attack Mitigation Method using Network Function Virtualization</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Distributed denial of service (DDoS) attacks have caused tremendous damage to ISPs and online services. They can be divided into attacks using spoofed IPs and attacks using real IPs (botnets). Among them, the attacks from real IPs are much harder to mitigate, since the attack traffic can be fabricated to resemble legitimate traffic. The DDoS defence strategies proposed in the past few years have not proven highly effective, due to the limitations of the participating devices. However, the emergence of next-generation networking technologies such as network function virtualization (NFV) provides a new opportunity for researchers to design DDoS mitigation solutions.

In this paper we propose VGuard, a dynamic traffic engineering solution based on prioritization, implemented as a DDoS virtual network function (VNF). Flows from the external zone are directed to different tunnels based on their priority levels. This way, trusted legitimate flows are served with guaranteed quality of service, while attack flows and suspicious flows compete with each other for resources. We propose two methods for flow direction: a static method and a dynamic method. We evaluate the performance of both methods through simulation. Our results show that both methods can effectively provide satisfactory service to trusted flows under DDoS attacks, and that each method has its pros and cons in different situations.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Carol</givenname>
              <surname>Fung</surname>
            </name>
            <id>150276</id>
            <affiliation>Virginia Commonwealth University</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bill</givenname>
              <surname>McCormick</surname>
            </name>
            <id>1295083</id>
            <affiliation>Huawei Canada Research Center</affiliation>
            <country>Canada</country>
            <presenter>1</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:30</starttime>
        <endtime>14:50</endtime>
        <paperid>1570172623</paperid>
        <sessionid>MS3.4</sessionid>
        <papertitle>A Research Process that Ensures Reproducible Network Security Research</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Access to ground-truth data is limited in network security research, especially at large scale. Even when data is available, sharing is typically not possible due to privacy concerns and contractual requirements. Hence, reproducibility of research and comparability of results are difficult. For a prevailingly empirical domain of research, the resulting lack of transparency is a methodological problem which especially affects network security management in practice. To address this problem, in this paper we propose a research process that ensures reproducibility by embodying both synthetic and real-world data. Our motivation is to combine the best of both worlds: synthetic data is used to establish ground truth, and real-world data to assure the validity of results. To the best of our knowledge, no such process has been formulated to date.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Sebastian</givenname>
              <surname>Abt</surname>
            </name>
            <id>972873</id>
            <affiliation>Hochschule Darmstadt / CASED</affiliation>
            <country>Germany</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Harald</givenname>
              <surname>Baier</surname>
            </name>
            <id>953587</id>
            <affiliation>Hochschule Darmstadt / CASED</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>MS4</code>
    <sessiontitle>Mini-conference Session 4</sessiontitle>
    <sessionsubtitle>Autonomic, QoS/QoE, and Mobile Management</sessionsubtitle>
    <sessionchair>Jürgen Schönwälder, Jacobs University, Germany</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Monday, 9 November, 2015</date>
    <range>15:30-16:30</range>
    <starttime>2015-11-09T15:30:00-05:00</starttime>
    <endtime>2015-11-09T16:30:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:50</endtime>
        <paperid>1570163791</paperid>
        <sessionid>MS4.1</sessionid>
        <papertitle>A Framework for Autonomic, Ontology-based IT Management</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>The growing complexity and heterogeneity of modern IT systems call for intelligent management tools capable of horizontally integrating technologies of different vendors and domains and vertically relating them to business processes and high-level requirements. Due to their often hard-coded and unextendable management models, existing tools are not able to meet those requirements well. Recent advances in semantic web technologies have let ontologies experience a revival for the modeling of domain knowledge, and new standards such as the Web Ontology Language (OWL) have been established by the W3C. The ability to semantically model, map and link independent domains makes them appear very suitable for application in IT management. Nevertheless, approaches for the implementation of ontology-based IT management have shown that scalability for large domain models is insufficient and that important features such as the representation of temporal knowledge, relations between values, aggregations and mappings from event streams are missing. In this paper, dedicated solutions for each of those problems are presented and combined into a comprehensive, ontology-based management framework. The proposed approach has been experimentally validated in a case study of a medium-sized management problem for an air traffic management system.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Fabian</givenname>
              <surname>Meyer</surname>
            </name>
            <id>961649</id>
            <affiliation>RheinMain University of Applied Sciences</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Reinhold</givenname>
              <surname>Kroeger</surname>
            </name>
            <id>522243</id>
            <affiliation>Hochschule RheinMain - University of Applied Sciences</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:50</starttime>
        <endtime>16:10</endtime>
        <paperid>1570165195</paperid>
        <sessionid>MS4.2</sessionid>
        <papertitle>Modeling the Impact of QoS Pricing on ISP Integrated Services and OTT Services</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>We are concerned with whether a vertically integrated broadband and content provider can unreasonably advantage itself over competing content providers, either by selling quality-of-service (QoS) to content providers at unreasonably high prices, or by refusing to provide access to QoS to competing content. We address this question by modeling the competition between one such vertically integrated provider and one over-the-top (OTT) content provider. We analytically determine when the broadband provider will sell QoS and when the OTT content provider or users will purchase QoS. We characterize the optimal QoS and video service prices. The ISP's market share increases with the difference in the value of the two video services and decreases with the difference in the corresponding costs. Numerical results illustrate the effect of QoS price on content price, and the variation of market share and profit with QoS price. The ISP may sell QoS to users at a lower price than when QoS is sold to the OTT provider.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Wei</givenname>
              <surname>Dai</surname>
            </name>
            <id>772675</id>
            <affiliation>University of California, Irvine</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Ji Won</givenname>
              <surname>Baek</surname>
            </name>
            <id>1295759</id>
            <affiliation>University of California, Irvine</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Scott</givenname>
              <surname>Jordan</surname>
            </name>
            <id>2058</id>
            <affiliation>University of California, Irvine</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>16:10</starttime>
        <endtime>16:30</endtime>
        <paperid>1570158951</paperid>
        <sessionid>MS4.4</sessionid>
        <papertitle>On the Limits of PCI Auto Configuration and Reuse in 4G/5G Ultra Dense Networks</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Mini-conference papers</trackname>
        <abstract>Increased demand for higher user throughput has led to the deployment of multi-layer networks, commonly called heterogeneous networks (HetNets). Therein, small cells are deployed alongside traditional macro cells, in many cases on the same spectrum. Such scenarios complicate the configuration of network parameters such as the Physical Cell Identity (PCI). A number of approaches have thus been proposed to automate the allocation of PCIs in such scenarios. These approaches struggle to address the two conflicting objectives for PCI assignment in a HetNet scenario: 1) the need for optimal performance by avoiding conflicts, against 2) the requirement to separate the different layers and avoid any need to share knowledge among them. However, as the density of small cells increases, evolving HetNets into what are called Ultra Dense Networks (UDNs), these approaches reach their limits. In this paper, we study the performance of current PCI allocation strategies in such UDN scenarios and evaluate their breakdown points. Our results show that these strategies do not adequately address PCI allocation for the UDN scenario. Specifically, we observe that PCI assignment in one layer requires knowledge of the assignments in the other layer; otherwise the consequence is a very high count of PCI confusions.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Stephen</givenname>
              <surname>Mwanje</surname>
            </name>
            <id>762343</id>
            <affiliation>Nokia Networks</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Janne</givenname>
              <surname>Ali-Tolppa</surname>
            </name>
            <id>1217249</id>
            <affiliation>Nokia Networks</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Henning</givenname>
              <surname>Sanneck</surname>
            </name>
            <id>341860</id>
            <affiliation>Nokia Networks</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>KS1</code>
    <sessiontitle>Keynote Session 1</sessiontitle>
    <sessionsubtitle>Beyond TCP: The evolution of Internet transport protocols</sessionsubtitle>
    <sessionspeaker>Olivier Bonaventure (Université catholique de Louvain, Belgium)</sessionspeaker>
    <sessionspeakerurl>https://inl.info.ucl.ac.be/obo</sessionspeakerurl>
    <sessionchair>Jürgen Schönwälder, Jacobs University, Germany</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessiondetails>The transport layer is one of the key layers of the Internet protocol stack. It enriches the network layer service to make it suitable for applications. Almost 40 years after its initial design, TCP remains the most widely used transport protocol. In the early 2000s, SCTP was proposed as an alternative to TCP. Despite a clean and extensible design and many useful features, it did not reach wide deployment. This failure is mainly caused by middleboxes. We'll describe their operation and explain why Multipath TCP, a backward-compatible evolution of TCP, has better chances of being deployed. We'll explain the main principles behind Multipath TCP and the lessons that can be drawn from its design. We'll then analyse why Internet giants like Google and Microsoft now consider application-layer solutions like QUIC to replace standard protocols like TCP.</sessiondetails>
    <date>Tuesday, 10 November, 2015</date>
    <range>09:00-10:00</range>
    <starttime>2015-11-10T09:00:00-05:00</starttime>
    <endtime>2015-11-10T10:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers/>
  </session>
  <session>
    <code>TS1</code>
    <sessiontitle>Technical Session 1</sessiontitle>
    <sessionsubtitle>Software-Defined Networks and Network Function Virtualization</sessionsubtitle>
    <sessionchair>Luciano Paschoal Gaspary, UFRGS, Brazil</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Tuesday, 10 November, 2015</date>
    <range>10:30-12:00</range>
    <starttime>2015-11-10T10:30:00-05:00</starttime>
    <endtime>2015-11-10T12:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>10:30</starttime>
        <endtime>11:00</endtime>
        <paperid>1570165505</paperid>
        <sessionid>TS1.1</sessionid>
        <papertitle>Abstract Model of SDN Architectures Enabling Comprehensive Performance Comparisons</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Software-defined Networking (SDN) is a new network architecture that decouples the control plane from the data plane. Scalability of the control plane with respect to network size and update frequency is an important problem that has been addressed by previous studies from a variety of viewpoints. However, the solutions found in these studies may be only locally optimized solutions. To find a globally optimized solution, a broader viewpoint is required: one in which various SDN architectures can be evaluated and compared. In this paper, we propose an abstract model of SDN architectures, which enables multiple SDN architectures to be compared under a unified evaluation condition, and discuss the modeling of an SDN architecture and its variations to find the optimal design from a global viewpoint. We first propose a generic model of SDN architectures and derive variations in terms of composition unit (single or multiple), processing principle (sequential or parallel), or location (intra- or inter-node). We then show that existing SDN architectures can be represented as one of the variations of our abstract model with fitted parameters. Finally we discuss how variation of components affects performance and show, using message-driven simulations, that our model enables comprehensive performance comparisons of different SDN designs represented as parameterized models.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Tatsuya</givenname>
              <surname>Sato</surname>
            </name>
            <id>1292841</id>
            <affiliation>Graduate School of Engineering, Osaka City University</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Yasuhiro</givenname>
              <surname>Sato</surname>
            </name>
            <id>164764</id>
            <affiliation>Japan Coast Guard Academy</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Shingo</givenname>
              <surname>Ata</surname>
            </name>
            <id>14463</id>
            <affiliation>Osaka City University</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Ikuo</givenname>
              <surname>Oka</surname>
            </name>
            <id>14474</id>
            <affiliation>Osaka City University</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:00</starttime>
        <endtime>11:30</endtime>
        <paperid>1570161747</paperid>
        <sessionid>TS1.2</sessionid>
        <papertitle>Virtual Network Functions Orchestration in Wireless Networks</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated middleboxes to general-purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV also needs to encompass the radio access network, allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such a requirement raises several challenges in terms of performance isolation and resource provisioning. In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and propose a VNF placement heuristic. Moreover, we present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging general computing platforms.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Roberto</givenname>
              <surname>Riggio</surname>
            </name>
            <id>146926</id>
            <affiliation>Create-Net</affiliation>
            <country>Italy</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Abbas</givenname>
              <surname>Bradai</surname>
            </name>
            <id>526222</id>
            <affiliation>XLIM Institute, University of Poitiers</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Tinku</givenname>
              <surname>Rasheed</surname>
            </name>
            <id>128058</id>
            <affiliation>Create-Net Research</affiliation>
            <country>Italy</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Julius</givenname>
              <surname>Schulz-Zander</surname>
            </name>
            <id>857433</id>
            <affiliation>Telekom Innovation Laboratories / TU-Berlin</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Slawomir</givenname>
              <surname>Kuklinski</surname>
            </name>
            <id>979071</id>
            <affiliation>Orange</affiliation>
            <country>Poland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Toufik</givenname>
              <surname>Ahmed</surname>
            </name>
            <id>12447</id>
            <affiliation>University of Bordeaux-1 / CNRS-LaBRI</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:30</starttime>
        <endtime>12:00</endtime>
        <paperid>1570172683</paperid>
        <sessionid>TS1.3</sessionid>
        <papertitle>DynSDM: Dynamic and Flexible Software-Defined Multicast for ISP Environments</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>A number of today's over-the-top (OTT) services could greatly benefit from scalable and efficient network-layer multicast support on the Internet. IP multicast has been shown not to meet these requirements and, thus, is not available for this purpose. Content Delivery Networks emerged as a global alternative but usually end at the border of ISP networks. Software-Defined Multicast (SDM) was proposed in previous work by the authors, enabling ISP-internal network-layer multicast delivery of OTT traffic. While that work establishes the fundamental concepts, it does not detail the ISP-internal traffic and service management and leaves important questions unanswered. To this end, DynSDM is proposed in this paper, detailing the multicast planning and management, proposing a novel network-layer multi-tree mechanism to distribute traffic on links inside the ISP network, and introducing mechanisms to handle group and network dynamics. DynSDM was evaluated in a prototype, showing its high traffic efficiency, good scalability, and superior traffic distribution characteristics.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Julius</givenname>
              <surname>Rückert</surname>
            </name>
            <id>397803</id>
            <affiliation>Technische Universität Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Jeremias</givenname>
              <surname>Blendin</surname>
            </name>
            <id>792361</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Rhaban</givenname>
              <surname>Hark</surname>
            </name>
            <id>1303973</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>David</givenname>
              <surname>Hausheer</surname>
            </name>
            <id>99597</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>TS2</code>
    <sessiontitle>Technical Session 2</sessiontitle>
    <sessionsubtitle>Management of Clouds</sessionsubtitle>
    <sessionchair>Brendan Jennings, TSSG, Ireland</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Tuesday, 10 November, 2015</date>
    <range>13:30-15:00</range>
    <starttime>2015-11-10T13:30:00-05:00</starttime>
    <endtime>2015-11-10T15:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>13:30</starttime>
        <endtime>14:00</endtime>
        <paperid>1570151553</paperid>
        <sessionid>TS2.1</sessionid>
        <papertitle>PRACTISE: Robust Prediction of Data Center Time Series</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>We analyze workload traces from production data centers, focusing on their VM usage patterns of CPU, memory, disk, and network bandwidth. Burstiness is a clear characteristic of many of these time series: peak loads occur within clear periodic patterns, but also within patterns that lack clear periodicity. We present PRACTISE, a neural-network-based framework that can efficiently and accurately predict future loads, peak loads, and their timing. Extensive experimentation using traces from IBM data centers illustrates PRACTISE's superiority over ARIMA and baseline neural network models, with significantly smaller average prediction errors. Its robustness is also illustrated with respect to the prediction window, which can be short-term (i.e., hours) or long-term (i.e., a week).</abstract>
        <authors>
          <author>
            <name>
              <givenname>Ji</givenname>
              <surname>Xue</surname>
            </name>
            <id>1236925</id>
            <affiliation>College of William and Mary</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Feng</givenname>
              <surname>Yan</surname>
            </name>
            <id>601811</id>
            <affiliation>College of William and Mary</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Robert</givenname>
              <surname>Birke</surname>
            </name>
            <id>130026</id>
            <affiliation>IBM Zurich Research Laboratory</affiliation>
            <country>Switzerland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Lydia</givenname>
              <surname>Chen</surname>
            </name>
            <id>311615</id>
            <affiliation>IBM Zurich Research Laboratory</affiliation>
            <country>Switzerland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Thomas</givenname>
              <surname>Scherer</surname>
            </name>
            <id>1177079</id>
            <affiliation>IBM Research</affiliation>
            <country>Switzerland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Evgenia</givenname>
              <surname>Smirni</surname>
            </name>
            <id>120686</id>
            <affiliation>College of William and Mary</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:00</starttime>
        <endtime>14:30</endtime>
        <paperid>1570164031</paperid>
        <sessionid>TS2.2</sessionid>
        <papertitle>Predicting service metrics for cluster-based services using real-time analytics</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Predicting the performance of cloud services is intrinsically hard. In this work, we pursue an approach based upon statistical learning, whereby the behaviour of a system is learned from observations. Specifically, our testbed implementation collects device statistics from a server cluster and uses a regression method that accurately predicts, in real-time, client-side metrics for a video streaming service running on the cluster. The method is service-agnostic in the sense that it takes as input operating-system statistics instead of service-level metrics. We show that feature set reduction significantly improves prediction accuracy in our case, while simultaneously reducing model computation time. We also discuss the design and implementation of a real-time analytics engine, which processes streams of device statistics and service metrics from testbed sensors and produces model predictions through online learning.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Rerngvit</givenname>
              <surname>Yanggratoke</surname>
            </name>
            <id>764923</id>
            <affiliation>KTH - Royal Institute of Technology</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Jawwad</givenname>
              <surname>Ahmed</surname>
            </name>
            <id>381523</id>
            <affiliation>Ericsson Research</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>John</givenname>
              <surname>Ardelius</surname>
            </name>
            <id>480851</id>
            <affiliation>SICS</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Christofer</givenname>
              <surname>Flinta</surname>
            </name>
            <id>147832</id>
            <affiliation>Ericsson Research</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Andreas</givenname>
              <surname>Johnsson</surname>
            </name>
            <id>124224</id>
            <affiliation>Ericsson Research</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Daniel</givenname>
              <surname>Gillblad</surname>
            </name>
            <id>284172</id>
            <affiliation>Swedish Institute of Computer Science</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Rolf</givenname>
              <surname>Stadler</surname>
            </name>
            <id>149700</id>
            <affiliation>KTH Royal Institute of Technology</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:30</starttime>
        <endtime>15:00</endtime>
        <paperid>1570165423</paperid>
        <sessionid>TS2.3</sessionid>
        <papertitle>Computing Resource Transformation, Consolidation and Decomposition in Hybrid Clouds</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>With the promise of providing flexible/elastic computing resources on demand, cloud computing has been attracting enterprises and individuals to migrate workloads from legacy environments to public/private/hybrid clouds. Customers also want to migrate from one cloud provider to another, with different requirements such as cost, performance, and manageability. However, workload migration is often interpreted as an image migration or re-installation/data copying as an exact snapshot of the source machine. Also, the various cloud platforms and service models are rarely taken into consideration during the migration analytics. Therefore, although expectations have risen with more varied requirements on the target cloud platforms and environments, the migration techniques do not provide enough options to accommodate these requirements. In this paper we propose a model to tackle the migration challenges that transforms one resource into the same or another resource in hybrid clouds. We formulate the problem as a constraint satisfaction problem, and iteratively decompose the server components and consolidate the servers. The ultimate goal is to recommend the optimal target cloud platform and environment. Through the evaluation of the proposed model using a real enterprise dataset (up to 2012 machines), we show that the proposed model satisfies this goal. We show that when migrating into virtualized environments, thorough resource planning can reduce current resources by 16%, about 5%-10% of servers can be consolidated, and more than 60% of servers are possible candidates for server decomposition.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Jinho</givenname>
              <surname>Hwang</surname>
            </name>
            <id>734457</id>
            <affiliation>IBM Research</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>PS1</code>
    <sessiontitle>Poster Session 1</sessiontitle>
    <sessionsubtitle/>
    <sessionchair>Juan-Luis Gorricho, UPC, Spain</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Tuesday, 10 November, 2015</date>
    <range>15:30-17:00</range>
    <starttime>2015-11-10T15:30:00-05:00</starttime>
    <endtime>2015-11-10T17:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570148463</paperid>
        <sessionid>PS1.1</sessionid>
        <papertitle>Towards HSS as a Virtualized Service for 5G Networks</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Home Subscriber Server (HSS) is the main database of the current generation's cellular communications systems. It contains subscriber-related information, such as the authentication information and the list of services to which each user is subscribed. The anticipated tremendous increase in the number of subscribers, services and devices (M2M) in 5G networks brings new challenges with regard to HSS provisioning. It calls for more scalability and elasticity regarding information storage, access and management. The current method of increasing the number of HSSs deployed is certainly not the most cost-efficient solution. On the other hand, advanced virtualization techniques can aid in tackling the challenges while enabling a smooth migration to 5G. This paper proposes a new architecture for a scalable and elastic HSS using virtualization. The new architecture enables easy and rapid deployment of new HSS instances at a minimal cost, while increasing the efficiency of resource use. The paper presents the architecture and demonstrates how it can be used through a case scenario, in which three QoS-enabled video telephony service providers share the same virtualized HSS. The paper also describes the implemented proof-of-concept prototype and evaluates the performance results.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Hanieh</givenname>
              <surname>Alipour</surname>
            </name>
            <id>1277557</id>
            <affiliation>Concordia University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Fatna</givenname>
              <surname>Belqasmi</surname>
            </name>
            <id>147660</id>
            <affiliation>Zayed University</affiliation>
            <country>UAE</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Mohammad</givenname>
              <surname>Abu-Lebdeh</surname>
            </name>
            <id>756555</id>
            <affiliation>Concordia University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Roch</givenname>
              <surname>Glitho</surname>
            </name>
            <id>1096325</id>
            <affiliation>Concordia University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570154833</paperid>
        <sessionid>PS1.2</sessionid>
        <papertitle>State-of-the-Art Multihoming Solutions for Android: a Quantitative Evaluation and Experience Report</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>The technical challenges associated with multihoming management in mobile systems and applications have attracted considerable research activity, as demonstrated by the extensive literature of recent years. However, only very recently have some multihoming solutions and techniques started to be applied in industrially relevant platforms and cases, often in a limited and very controlled way. This paper has the specific and focused objective of reporting a fresh state-of-the-art overview of the maturity of multihoming solutions for Android, and of describing our practical experience of multihoming configuration and evaluation on off-the-shelf Android devices. In particular, we report the experience gained with the relevant Locator/Identifier Separation Protocol (LISP) and especially the LISPmob support solution, by i) showing how to efficiently configure LISPmob on non-rooted Android devices; and ii) thoroughly analyzing its supported features towards the abstraction of seamless mobility. In addition, the paper includes a qualitative and quantitative comparison of different multihoming support approaches, as well as original experimental results on the performance of LISPmob on Android terminals. We claim that these results can be a valuable contribution to the research community in the field, shedding new light on the implementation status of Android multihoming support as well as on the research directions to be investigated in the near future for more comprehensive and efficient support of seamless mobility.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Paolo</givenname>
              <surname>Bellavista</surname>
            </name>
            <id>431136</id>
            <affiliation>University of Bologna</affiliation>
            <country>Italy</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Luca</givenname>
              <surname>Stornaiuolo</surname>
            </name>
            <id>1284273</id>
            <affiliation>University of Bologna</affiliation>
            <country>Italy</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570158711</paperid>
        <sessionid>PS1.3</sessionid>
        <papertitle>Impact of Revenue-Driven CDN on the Competition among Network Operators</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>We investigate the impact of decisions made by a CDN aiming to maximize its revenue through the management of cache servers. Based on a model with two network providers, we highlight that revenue-oriented management policies can affect the user-perceived quality of experience, tilting the competition among network access providers in favor of the largest one. Since this contradicts the principle underpinning network neutrality--although not the technical net neutrality rules--we discuss the necessity of regulating CDN activity.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Patrick</givenname>
              <surname>Maillé</surname>
            </name>
            <id>94224</id>
            <affiliation>Institut Mines-Telecom / Telecom Bretagne</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Gwendal</givenname>
              <surname>Simon</surname>
            </name>
            <id>205964</id>
            <affiliation>Institut Mines Telecom - Telecom Bretagne</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bruno</givenname>
              <surname>Tuffin</surname>
            </name>
            <id>9844</id>
            <affiliation>Inria Rennes - Bretagne Atlantique</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570160459</paperid>
        <sessionid>PS1.4</sessionid>
        <papertitle>Certifying Spoofing-Protection of Firewalls</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>We present an algorithm to certify IP spoofing protection of firewall rulesets. The algorithm is machine-verifiably proven sound and its use is demonstrated in real-world scenarios.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Cornelius</givenname>
              <surname>Diekmann</surname>
            </name>
            <id>862125</id>
            <affiliation>Technische Universität München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Lukas</givenname>
              <surname>Schwaighofer</surname>
            </name>
            <id>1055081</id>
            <affiliation>Technische Universität München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Georg</givenname>
              <surname>Carle</surname>
            </name>
            <id>93235</id>
            <affiliation>Technische Universität München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570163515</paperid>
        <sessionid>PS1.7</sessionid>
        <papertitle>Modelling of IP Geolocation by use of Latency Measurements</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>IP Geolocation is a key enabler for many application areas, such as Content Delivery Networks, targeted advertisement and law enforcement; increased accuracy is therefore needed to improve service quality. Although IP Geolocation has been an active field of research for over a decade, it remains a challenging task, and good results are achieved only through active latency measurements. This paper presents a novel approach to finding optimized Landmark positions, which are used for active probing, and introduces an improved location estimation. Since a reasonable Landmark selection is important for a highly accurate localization service, the goal is to find Landmarks close to the target with respect to infrastructure and hop count. Current techniques provide little guidance on solving this problem and rely on imprecise models. We demonstrate the usability of our approach in a real-world environment. The combination of optimized Landmark selection and advanced modelling results in an improved accuracy of IP Geolocation.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Peter</givenname>
              <surname>Hillmann</surname>
            </name>
            <id>855611</id>
            <affiliation>Universität der Bundeswehr München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Lars</givenname>
              <surname>Stiemert</surname>
            </name>
            <id>1279083</id>
            <affiliation>Universität der Bundeswehr München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Gabi</givenname>
              <surname>Dreo Rodosek</surname>
            </name>
            <id>275129</id>
            <affiliation>Universität der Bundeswehr München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Oliver</givenname>
              <surname>Rose</surname>
            </name>
            <id>1294041</id>
            <affiliation>Universität der Bundeswehr München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570163781</paperid>
        <sessionid>PS1.8</sessionid>
        <papertitle>Traffic Flow Analysis of Tor Pluggable Transports</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Tor provides users the ability to use the Internet anonymously. On the Tor network, users connect through three relays run by volunteers. The addresses of these relays are publicly available, and some organizations prevent access to Tor by blocking them. To mitigate this, Tor has introduced the concepts of bridges and Pluggable Transports. Bridges are relays whose addresses are not publicly available, in order to evade blocking; Pluggable Transports are used to obfuscate the connections to these bridges. In this paper, we investigate the robustness of these Pluggable Transports in evading flow-based traffic analysis and blocking systems.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Khalid</givenname>
              <surname>Shahbar</surname>
            </name>
            <id>1294359</id>
            <affiliation>Dalhousie University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Nur</givenname>
              <surname>Zincir-Heywood</surname>
            </name>
            <id>102310</id>
            <affiliation>Dalhousie University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570163815</paperid>
        <sessionid>PS1.9</sessionid>
        <papertitle>Dynamically Adaptive Policies for Dynamically Adaptive Telecommunications Networks</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>New technologies are changing the world of communication networks and even more so their management. Cloud computing and predictive analytics have removed the need for specialized compute hardware and created products that continuously search for and find insights in management data. Virtualization of networks and network functions, SDN and NFV, is beginning to be mature enough for production networks, resulting in much more flexible and dynamic networks. IoT and M2M traffic and new customer demands are driving new thinking and demands for 5G networks. Almost every aspect of the control and management of networks has seen new dimensions of flexibility and dynamicity, with the notable exception of the policies that drive them. This paper discusses the need to add adaptiveness to classic policies, describes a novel approach for adaptive policies, shows how adaptive policies will form part of future network frameworks and architectures, and finally discusses early use cases developed for mobile operators.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Sven</givenname>
              <surname>van der Meer</surname>
            </name>
            <id>117181</id>
            <affiliation>Ericsson LM</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>John</givenname>
              <surname>Keeney</surname>
            </name>
            <id>174143</id>
            <affiliation>Ericsson</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Liam</givenname>
              <surname>Fallon</surname>
            </name>
            <id>150427</id>
            <affiliation>Ericsson</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570172833</paperid>
        <sessionid>PS1.10</sessionid>
        <papertitle>On Selective Compression of Primary Data</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>With the advent of social media, the Internet of Things (IoT), widespread use of richer media formats such as video, and generally increased use of mobile devices, the volume of online data has seen a rapid increase in recent years. To cope with this data explosion, businesses and cloud providers are scrambling to lower the cost of storing data without sacrificing the quality of their service, using space-reduction techniques such as compression and deduplication. Capacity savings, however, are achieved at the cost of performance and additional resource overheads. One drawback of compression techniques is the additional computation required to store and fetch data, which may significantly increase response time, i.e., I/O latency. Worse yet, inefficient compression algorithms that fail to compress data satisfactorily suffer the latency penalty with only marginal capacity savings, e.g., when deciding to compress data that is encrypted or already compressed. Therefore, from a data center administrator's perspective, we should pick the set of volumes that will yield the most compression space savings with the least latency for a given amount of computation capacity, without exhaustively inspecting the data content of volumes. To fill this void, this paper proposes an approach to manage compression for a very large set of volumes. It maximizes capacity savings and minimizes latency impact without scanning the actual data content (to avoid security concerns). Our pilot deployments show significant capacity savings and performance improvements compared to benchmark compression strategies.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Gabriel</givenname>
              <surname>Alatorre</surname>
            </name>
            <id>1304155</id>
            <affiliation>IBM Research</affiliation>
            <country>USA</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Nagapramod</givenname>
              <surname>Mandagere</surname>
            </name>
            <id>1075655</id>
            <affiliation>IBM Research Center, Almaden</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Yang</givenname>
              <surname>Song</surname>
            </name>
            <id>215795</id>
            <affiliation>IBM Research</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Heiko</givenname>
              <surname>Ludwig</surname>
            </name>
            <id>188346</id>
            <affiliation>IBM Research - Almaden</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>KS2</code>
    <sessiontitle>Keynote Session 2</sessiontitle>
    <sessionsubtitle>Is Multipath Routing Really a Panacea?</sessionsubtitle>
    <sessionspeaker>Deep Medhi (University of Missouri - Kansas City, USA)</sessionspeaker>
    <sessionspeakerurl>https://sce2.umkc.edu/csee/dmedhi/</sessionspeakerurl>
    <sessionchair>Joan Serrat, UPC, Spain</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessiondetails>It is often believed that multipath routing is always beneficial. For network management, network operators focus on traffic engineering; centralized approaches such as SDN are preferred for better control and management. Furthermore, network operators are interested in how they can optimize their networks through multipath routing, but at the same time they like single-path routing, as it is helpful in network troubleshooting. I will discuss results showing that when all node pairs (demands) in a network have traffic, multipath routing has very little benefit over single-path routing for traffic engineering goals, especially as the network becomes large. Not only that, under certain traffic conditions, single-path routing is found to be optimal. In this keynote, I will give insights into the issues and discuss results and impact i) on ISP networks, ii) on networks with most traffic going to multiple cloud data center locations, and iii) on intra-data center networks. Through this work, I will also touch on what this means for the science and engineering aspects of network management.</sessiondetails>
    <date>Wednesday, 11 November, 2015</date>
    <range>09:00-10:00</range>
    <starttime>2015-11-11T09:00:00-05:00</starttime>
    <endtime>2015-11-11T10:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers/>
  </session>
  <session>
    <code>TS3</code>
    <sessiontitle>Technical Session 3</sessiontitle>
    <sessionsubtitle>Management of Clouds and Network Virtualization</sessionsubtitle>
    <sessionchair>Jorge Lobo, UPF, Spain</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Wednesday, 11 November, 2015</date>
    <range>10:30-12:00</range>
    <starttime>2015-11-11T10:30:00-05:00</starttime>
    <endtime>2015-11-11T12:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>10:30</starttime>
        <endtime>11:00</endtime>
        <paperid>1570163017</paperid>
        <sessionid>TS3.1</sessionid>
        <papertitle>Fault-tolerant application placement in heterogeneous cloud environments</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>The Internet of Things (IoT) has inspired a myriad of real-time applications, such as robotics and human-machine interaction (HMI). Many IoT applications have significant computational requirements, while at the same time demanding very low latencies. The cloud can provide the needed resources on demand, but it often fails to meet these timing requirements. Low latencies can only be realized by having computational infrastructure in close vicinity. Therefore, we investigate to what extent the cloud can be extended into the direct wireless surroundings of the IoT devices. This environment is highly heterogeneous, as it comprises a wide variety of devices (e.g. sensor nodes, smartphones, laptops and desktop PCs) connected using a plethora of technologies (both wired and wireless). A direct implication is that, compared to traditional cloud infrastructure, many of those nodes and links are likely to fail. This paper focuses on how intelligent application placement can overcome failure-related challenges. We demonstrate that availability-awareness can easily increase tenfold the number of applications that can be hosted simultaneously. Furthermore, we find that an additional increase of more than 50% can be realized through redundant provisioning of resources.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Bart</givenname>
              <surname>Spinnewyn</surname>
            </name>
            <id>1293425</id>
            <affiliation>University of Antwerp &amp; iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bart</givenname>
              <surname>Braem</surname>
            </name>
            <id>185857</id>
            <affiliation>iMinds - University of Antwerp</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Steven</givenname>
              <surname>Latré</surname>
            </name>
            <id>1059359</id>
            <affiliation>University of Antwerp - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:00</starttime>
        <endtime>11:30</endtime>
        <paperid>1570165269</paperid>
        <sessionid>TS3.2</sessionid>
        <papertitle>Experiments or Simulation? A Characterization of Evaluation Methods for In-Memory Databases</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>The recent growth of interest in in-memory databases poses the question of whether established prediction methods such as response surfaces and simulation are effective for describing the performance of these systems. In particular, the limited dependence of in-memory technologies on the disk makes methods such as simulation more appealing than in the past, since disks are difficult to simulate. To answer this question, we study an in-memory commercial solution, SAP HANA, deployed on a high-end server with 120 physical cores. First, we apply experimental design methods to generate response surfaces that describe database performance as a function of workload and hardware parameters. Next, we develop a class-switching queueing network model to predict in-memory database performance under similar scenarios. By comparing the applicability of the two approaches to modeling multi-tenancy, we find that both queueing and response surface models yield mean prediction errors in the range 5%-22% with respect to mean memory occupancy and response times, but the accuracy of the latter deteriorates in response surfaces as the number of experiments is reduced, whereas simulation is effective in all cases. This suggests that simulation can be very effective in performance prediction for in-memory database management.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Karsten</givenname>
              <surname>Molka</surname>
            </name>
            <id>1293125</id>
            <affiliation>Imperial College London &amp; SAP (UK) Ltd.</affiliation>
            <country>United Kingdom</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Giuliano</givenname>
              <surname>Casale</surname>
            </name>
            <id>1036471</id>
            <affiliation>Imperial College London</affiliation>
            <country>United Kingdom</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:30</starttime>
        <endtime>12:00</endtime>
        <paperid>1570164177</paperid>
        <sessionid>TS3.3</sessionid>
        <papertitle>SiMPLE: Survivability in Multi-Path Link Embedding</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Internet applications are deployed on the same network infrastructure, yet they have diverse performance and functional requirements. The Internet was not originally designed to support the diversity of current applications. Network Virtualization can enable heterogeneous applications and network architectures to coexist without interference on the same infrastructure. Embedding a Virtual Network (VN) into a physical network is a fundamental problem in Network Virtualization. A VN Embedding that aims to survive physical (e.g., link) failures is known as the Survivable Virtual Network Embedding (SVNE). A key challenge in the SVNE problem is to ensure VN survivability with minimal resource redundancy. To address this challenge, we propose SiMPLE. By exploiting path diversity in the physical network, SiMPLE provides guaranteed VN survivability against single link failures. In addition, SiMPLE produces highly survivable VN embeddings in the presence of multiple link failures while incurring very low resource redundancy. We provide an ILP formulation for this problem and implement it using GLPK. We also propose a greedy algorithm to solve larger instances of the problem. Simulation results show that our solution outperforms full backup and shared backup schemes for SVNE, and produces near-optimal results.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Md Mashrur</givenname>
              <surname>Alam Khan</surname>
            </name>
            <id>1197999</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Nashid</givenname>
              <surname>Shahriar</surname>
            </name>
            <id>325615</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Reaz</givenname>
              <surname>Ahmed</surname>
            </name>
            <id>147715</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Raouf</givenname>
              <surname>Boutaba</surname>
            </name>
            <id>5035</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>TS4</code>
    <sessiontitle>Technical Session 4</sessiontitle>
    <sessionsubtitle>Flow and QoS/QOE Measurements</sessionsubtitle>
    <sessionchair>Aiko Pras, University of Twente, The Netherlands</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Wednesday, 11 November, 2015</date>
    <range>13:30-15:00</range>
    <starttime>2015-11-11T13:30:00-05:00</starttime>
    <endtime>2015-11-11T15:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>13:30</starttime>
        <endtime>14:00</endtime>
        <paperid>1570163599</paperid>
        <sessionid>TS4.1</sessionid>
        <papertitle>Dictyogram: A statistical approach for the definition and visualization of network flow categories</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Network managers have to deal with vast amounts of measurement data provided by monitoring systems. Such data is difficult both to process and to translate into concrete management actions. In an attempt to make managerial work easier, we propose a novel statistical approach that summarizes the behavior of network flow characteristics —e.g., flow sizes or durations. Bearing in mind that losses in the summarized information can lead to restricted or even erroneous conclusions, our approach addresses this by exploiting the probability integral transform theorem. This theorem allows the definition of a set of intervals, mapped into concrete categories, such that the number of flows according to a given characteristic is uniformly distributed among categories. This eases the use of both statistical tests and simple visual inspection to detect changes in the behavior of the characteristic under analysis, as abrupt changes are typically understood as signs of intrusion, malfunction or other types of anomalies. This proposal gave rise to the visualization and analytical framework Dictyogram, which has been applied to monitor the Spanish Academic Network —more than one million users. Its results are shown as a case study assessing the usefulness of our proposal.</abstract>
        <authors>
          <author>
            <name>
              <givenname>David</givenname>
              <surname>Muelas</surname>
            </name>
            <id>1294161</id>
            <affiliation>Universidad Autónoma de Madrid</affiliation>
            <country>Spain</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Miguel</givenname>
              <surname>Gordo</surname>
            </name>
            <id>1294163</id>
            <affiliation>Universidad Autónoma de Madrid</affiliation>
            <country>Spain</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>José Luis</givenname>
              <surname>García-Dorado</surname>
            </name>
            <id>730533</id>
            <affiliation>Universidad Autónoma de Madrid</affiliation>
            <country>Spain</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Jorge</givenname>
              <surname>López de Vergara</surname>
            </name>
            <id>130998</id>
            <affiliation>Universidad Autónoma de Madrid (UAM)</affiliation>
            <country>Spain</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:00</starttime>
        <endtime>14:30</endtime>
        <paperid>1570163137</paperid>
        <sessionid>TS4.2</sessionid>
        <papertitle>Behind the NAT - A Measurement Based Evaluation of Cellular Service Quality</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Mobile applications such as VoIP, (live) gaming, or video streaming have diverse QoS requirements ranging from low delay to high throughput. Optimizing the network quality experienced by end-users requires detailed knowledge of the expected network performance. Yet the achieved service quality is affected by a number of factors, including the network operator and the available technologies. However, most studies focusing on measuring cellular networks do not consider the performance implications of network configuration and management. To this end, this paper reports on an extensive data set of cellular network measurements, focused on analyzing the root causes of mobile network performance variability. Measurements conducted in a 4G cellular network in Germany show that management and configuration decisions can indeed have a substantial impact on performance. Specifically, we observed that the association of mobile devices to a Point of Presence (PoP) within the operator's network can influence the end-to-end RTT to a large extent. Given the collected data, a model predicting the PoP assignment and the resulting RTT is developed, leveraging Markov chain and machine learning approaches. Overheads of 58% to 73% compared to the optimum performance were observed in more than 57% of the measurements.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Fabian</givenname>
              <surname>Kaup</surname>
            </name>
            <id>912877</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Foivos</givenname>
              <surname>Michelinakis</surname>
            </name>
            <id>862157</id>
            <affiliation>IMDEA Networks Institute</affiliation>
            <country>Spain</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Nicola</givenname>
              <surname>Bui</surname>
            </name>
            <id>142269</id>
            <affiliation>IMDEA Networks Institute</affiliation>
            <country>Spain</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Joerg</givenname>
              <surname>Widmer</surname>
            </name>
            <id>4351</id>
            <affiliation>IMDEA Networks Institute</affiliation>
            <country>Spain</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Katarzyna</givenname>
              <surname>Wac</surname>
            </name>
            <id>128813</id>
            <affiliation>University of Geneva &amp; Quality of Life group</affiliation>
            <country>Switzerland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>David</givenname>
              <surname>Hausheer</surname>
            </name>
            <id>99597</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:30</starttime>
        <endtime>15:00</endtime>
        <paperid>1570172433</paperid>
        <sessionid>TS4.3</sessionid>
        <papertitle>Taming QoE in Cellular Networks: from Subjective Lab Studies to Measurements in the Field</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>A quarter of the world population will be using smartphones to access the Internet in the near future. In this context, understanding the Quality of Experience (QoE) of popular apps on such devices becomes paramount to cellular network operators, who need to offer high quality levels to reduce the risk of customers churning due to quality dissatisfaction. In this paper, we address the problem of QoE provisioning in smartphones from a double perspective, combining the results obtained from subjective lab tests with end-device passive measurements and crowd-sourced QoE feedback obtained in operational cellular networks. The study addresses the impact of the downlink bandwidth on the QoE of three popular smartphone apps: YouTube, Facebook and Google Maps. As a main contribution, we show that the results obtained in the lab are highly applicable to the live scenario, as the obtained mappings track the QoE reported by users in real networks. We additionally provide hints and bandwidth thresholds for good QoE levels in such apps, as well as a discussion of end-device passive measurements and analysis. The results presented in this paper provide a sound basis for better understanding the QoE requirements of popular mobile apps, as well as for monitoring the underlying provisioning network. To the best of our knowledge, this is the first paper providing such a comprehensive analysis of QoE on mobile devices, combining network measurements with users' QoE feedback in lab tests and operational networks.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Pedro</givenname>
              <surname>Casas</surname>
            </name>
            <id>244435</id>
            <affiliation>Telecommunications Research Center Vienna (FTW)</affiliation>
            <country>Austria</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bruno</givenname>
              <surname>Gardlo</surname>
            </name>
            <id>370128</id>
            <affiliation>Telecommunications Research Center Vienna (FTW)</affiliation>
            <country>Austria</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Michael</givenname>
              <surname>Seufert</surname>
            </name>
            <id>857079</id>
            <affiliation>University of Würzburg</affiliation>
            <country>Germany</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Florian</givenname>
              <surname>Wamser</surname>
            </name>
            <id>720551</id>
            <affiliation>University of Wuerzburg</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Raimund</givenname>
              <surname>Schatz</surname>
            </name>
            <id>415907</id>
            <affiliation>Telecommunications Research Center Vienna (FTW)</affiliation>
            <country>Austria</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>PS2</code>
    <sessiontitle>Poster Session 2</sessiontitle>
    <sessionsubtitle/>
    <sessionspeaker/>
    <sessiondetails/>
	<sessionchair>Juan-Luis Gorricho, UPC, Spain</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <date>Wednesday, 11 November, 2015</date>
    <range>15:30-17:00</range>
    <starttime>2015-11-11T15:30:00-05:00</starttime>
    <endtime>2015-11-11T17:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570163819</paperid>
        <sessionid>PS2.1</sessionid>
        <papertitle>Towards Composite Semantic Reasoning for Realtime Network Management Data Enrichment</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Monitoring the massive volume of data streaming from managed nodes in telecommunication networks, and reacting to it in a timely manner, is increasingly critical for modern Telecommunications Operations Support Systems (OSS). Given the large number and variety of nodes in a telecom network, the streaming monitoring data is naturally diverse, and its volume often reaches multiple millions of data points per second. These data are well modelled using formal syntaxes (e.g., Management Information Bases), making formal semantics and automated reasoning a viable solution for telecom data modeling and correlation. This paper proposes an approach that leverages recent developments in semantic reasoning and Big Data. The paper introduces how we propose to use RDF stream reasoning methods for real-time event correlation, combined with MapReduce technologies, in order to decentralize the large number of reasoning and correlation tasks that need to be undertaken in real time. The proposed approach is currently being implemented and will be evaluated using the diverse data types and volumes that are expected.</abstract>
        <authors>
          <author>
            <name>
              <givenname>John</givenname>
              <surname>Keeney</surname>
            </name>
            <id>174143</id>
            <affiliation>Ericsson</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Wei</givenname>
              <surname>Tai</surname>
            </name>
            <id>386065</id>
            <affiliation>SAP</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Liam</givenname>
              <surname>Fallon</surname>
            </name>
            <id>150427</id>
            <affiliation>Ericsson</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Declan</givenname>
              <surname>O'Sullivan</surname>
            </name>
            <id>124280</id>
            <affiliation>Trinity College Dublin</affiliation>
            <country>Ireland</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570163963</paperid>
        <sessionid>PS2.2</sessionid>
        <papertitle>A Fast Algorithm for Detecting Anomalous Changes in Network Traffic</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Anomalies in communication network traffic caused by malware or denial-of-service attacks manifest themselves in structural changes in the covariance matrix of traffic features. Real-time detection of anomalies in high-dimensional data demands a very efficient algorithm to identify these changes in a compact low-dimensional representation. This paper presents an efficient algorithm for the rapid detection of structural differences between two covariance matrices, as measured by the maximum possible angle between the subspaces specified by subsets of the two sets of principal components of the matrices. We show that our algorithm achieves a significantly lower computational complexity compared to a naive approach. Finally, we apply our results to real traffic traces from Internet backbone links and show that our approach offers a substantial reduction in the computational overhead of anomaly detection.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Tingshan</givenname>
              <surname>Huang</surname>
            </name>
            <id>1294623</id>
            <affiliation>Drexel University</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Harish</givenname>
              <surname>Sethu</surname>
            </name>
            <id>7234</id>
            <affiliation>Drexel University</affiliation>
            <country>USA</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Nagarajan</givenname>
              <surname>Kandasamy</surname>
            </name>
            <id>92843</id>
            <affiliation>Drexel University</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570164061</paperid>
        <sessionid>PS2.3</sessionid>
        <papertitle>Impact of Intermediate Layer on Quality of Experience of HTTP Adaptive Streaming</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>HTTP Adaptive Streaming (HAS) adapts the video quality to the current network condition by switching between different quality layers. As HAS was shown to perform better than classical video streaming, it is becoming increasingly popular. Recent research showed that quality switch amplitude and time on layer have an impact on the Quality of Experience (QoE) of HAS. However, those studies have so far focused only on adaptation between two layers. This work extends these findings by taking adaptation between three layers into account. In particular, the impact of an intermediate layer on user-perceived quality is investigated. Crowdsourcing experiments were conducted in order to collect subjective ratings for adaptation between three layers. The results indicate that the quality of each layer and the time on each layer are important QoE parameters. This encourages the usage of temporal pooling approaches for QoE prediction and QoE-aware traffic management. Therefore, mean pooling of per-frame metrics will be applied and its performance will be validated against the subjective crowdsourcing results.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Michael</givenname>
              <surname>Seufert</surname>
            </name>
            <id>857079</id>
            <affiliation>University of Würzburg</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Tobias</givenname>
              <surname>Hoßfeld</surname>
            </name>
            <id>118152</id>
            <affiliation>University of Duisburg-Essen</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Christian</givenname>
              <surname>Sieber</surname>
            </name>
            <id>1066709</id>
            <affiliation>Technische Universität München</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570164477</paperid>
        <sessionid>PS2.4</sessionid>
        <papertitle>Just-in-time server procurement to private cloud for mobile thin-client service</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Mobile thin-client services are gaining considerable attention from companies that are concerned about security yet recognize the benefits of mobile computing for their businesses. The service is based on a private cloud system hosting virtual machines that execute mobile OS instances. The owner of the private cloud needs to prepare sufficient server resources to host those virtual machines. In this paper, we propose a framework to guide server procurement decisions in a private cloud for mobile thin-client service, which aims to minimize the cost of unused servers while avoiding service level violations due to a lack of resources. To make timely server procurement decisions, the framework combines techniques for workload estimation of individual VMs, demand estimation of newly created VMs, and repetitive simulation of a VM replacement algorithm. The experimental results show that the proposed framework can reduce the cost of unused servers by up to 60% while satisfying a service level, compared with a time-based heuristic decision method.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Fumio</givenname>
              <surname>Machida</surname>
            </name>
            <id>1124455</id>
            <affiliation>NEC Corporation</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Shunsuke</givenname>
              <surname>Kohno</surname>
            </name>
            <id>1295125</id>
            <affiliation>NEC Corporation</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Kosuke</givenname>
              <surname>Maebara</surname>
            </name>
            <id>1295127</id>
            <affiliation>NEC Corporation</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Masayuki</givenname>
              <surname>Nakagawa</surname>
            </name>
            <id>1295129</id>
            <affiliation>NEC Communication Systems</affiliation>
            <country>Japan</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570164757</paperid>
        <sessionid>PS2.5</sessionid>
        <papertitle>Modeling Service Variability in Complex Service Delivery Operations</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>One of the key promises of IT strategic outsourcing is to deliver better IT service management through higher quality and lower cost. However, this raises a critical question: how to model highly variable services for diverse customers with heterogeneous infrastructure and service demands. In this paper we propose the use of statistical learning approaches for modeling service operation variability. Specifically, we use partial least squares regression, which projects service attributes to explain service volume variability, and a decision tree approach to model service effort based on categorical customer and service properties. We demonstrate the applicability of the proposed methodology using data from a large IT service delivery environment.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Yixin</givenname>
              <surname>Diao</surname>
            </name>
            <id>117319</id>
            <affiliation>IBM TJ Watson Research Center</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Larisa</givenname>
              <surname>Shwartz</surname>
            </name>
            <id>134381</id>
            <affiliation>IBM Research</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570165123</paperid>
        <sessionid>PS2.6</sessionid>
        <papertitle>Deterministic Flow Marking for IPv6 Traceback (DFM6)</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Although some security threats were taken into consideration in the design of IPv6, DDoS attacks still exist in IPv6 networks. The main difficulty in countering DDoS attacks is tracing their source, as attackers often use spoofed source IP addresses to hide their identity. This makes IP traceback schemes highly relevant to the security of IPv6 networks. Given that most current IP traceback approaches are based on IPv4, they are not suitable for direct application to IPv6 networks. In this research, a modified version of the Deterministic Flow Marking (DFM) approach for IPv6 networks, called DFM6, is presented. DFM6 embeds a fingerprint in only one packet of each flow to identify the origin of the IPv6 traffic traversing the network. DFM6 requires only a small number of marked packets to complete the traceback process, with a high traceback rate and no false positives.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Vahid</givenname>
              <surname>Aghaei Foroushani</surname>
            </name>
            <id>827819</id>
            <affiliation>Dalhousie University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Nur</givenname>
              <surname>Zincir-Heywood</surname>
            </name>
            <id>102310</id>
            <affiliation>Dalhousie University</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570165213</paperid>
        <sessionid>PS2.7</sessionid>
        <papertitle>Ant Colony Optimization for QoE-Centric Flow Routing in Software-Defined Networks</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>We present the design, implementation, and evaluation of an ant colony optimization (ACO) approach to flow routing in software-defined networking (SDN) environments. While exploiting the global network view and configuration flexibility provided by SDN, the approach also utilizes quality of experience (QoE) estimation models and seeks to maximize the user QoE for multimedia services. As network metrics (e.g., packet loss) influence QoE for such services differently, depending on the service type and its integral media flows, the goal of our ACO-based heuristic algorithm is to calculate QoE-aware paths that conform to traffic demands and network limitations. A Java implementation of the algorithm is integrated into the SDN controller OpenDaylight so as to program the path selections. The evaluation results indicate promising QoE improvements of our approach over shortest path routing, as well as low running time.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Ognjen</givenname>
              <surname>Dobrijevic</surname>
            </name>
            <id>164565</id>
            <affiliation>University of Zagreb, Faculty of Electrical Engineering and Computing</affiliation>
            <country>Croatia</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Matija</givenname>
              <surname>Santl</surname>
            </name>
            <id>1295677</id>
            <affiliation>University of Zagreb &amp; Faculty of Electrical Engineering and Computing</affiliation>
            <country>Croatia</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Maja</givenname>
              <surname>Matijasevic</surname>
            </name>
            <id>8477</id>
            <affiliation>University of Zagreb</affiliation>
            <country>Croatia</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>15:30</starttime>
        <endtime>15:30</endtime>
        <paperid>1570171149</paperid>
        <sessionid>PS2.8</sessionid>
        <papertitle>A Scalable Source Multipath Routing Architecture for Datacenter Networks</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Short Papers</trackname>
        <abstract>Traffic on modern Data Center Networks (DCNs) continues to grow, and the applications generating it can be classified as either delay- or throughput-sensitive. In order to provide quality delivery to such applications, it is important that multipath routing and/or forwarding be supported in DCNs. However, some inherent problems of multipath forwarding have not been fully solved by existing solutions. Aiming to balance all these requirements for multipath DCNs, we propose a novel solution, the Code-Oriented eXplicit MultiPath (COXMP) scheme, which evenly distributes traffic along multiple paths, resulting in better network throughput, delay, jitter, and resilience. The simulation results show that COXMP outperforms existing solutions and achieves a reasonable tradeoff between scalability and overhead.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Wen-Kang</givenname>
              <surname>Jia</surname>
            </name>
            <id>282338</id>
            <affiliation>Institute for Information Industry (III)</affiliation>
            <country>Taiwan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Gen-Hen</givenname>
              <surname>Liu</surname>
            </name>
            <id>1302403</id>
            <affiliation>NCTU</affiliation>
            <country>Taiwan</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Yaw-Chung</givenname>
              <surname>Chen</surname>
            </name>
            <id>152206</id>
            <affiliation>National Chiao Tung University</affiliation>
            <country>Taiwan</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>KS3</code>
    <sessiontitle>Keynote Session 3</sessiontitle>
    <sessionsubtitle>Software repair and Software antifragility</sessionsubtitle>
    <sessionspeaker>Martin Monperrus (University of Lille, France)</sessionspeaker>
    <sessionspeakerurl>https://www.monperrus.net/martin/</sessionspeakerurl>
	<sessionchair>Mauro Tortonesi, University of Ferrara, Italy</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessiondetails>Automatic software repair is the process of fixing software bugs automatically. This is a recent and active research area in the software engineering, programming language, operating systems and security research communities (https://www.monperrus.net/martin/survey-automatic-repair.pdf). This talk first presents an overview of this fascinating research field. However, repair is reactive: one waits for bugs to occur before finding a fix. The goal of antifragile software engineering is to devise proactive techniques, where bugs are triggered in a controlled manner to better understand, anticipate and handle field errors.</sessiondetails>
    <date>Thursday, 12 November, 2015</date>
    <range>09:00-10:00</range>
    <starttime>2015-11-12T09:00:00-05:00</starttime>
    <endtime>2015-11-12T10:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers/>
  </session>
  <session>
    <code>TS5</code>
    <sessiontitle>Technical Session 5</sessiontitle>
    <sessionsubtitle>Management of Wireless and Mobile Networks</sessionsubtitle>
	<sessionchair>Yoshiaki Kiriha, NEC, Japan</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Thursday, 12 November, 2015</date>
    <range>10:30-12:00</range>
    <starttime>2015-11-12T10:30:00-05:00</starttime>
    <endtime>2015-11-12T12:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>10:30</starttime>
        <endtime>11:00</endtime>
        <paperid>1570153299</paperid>
        <sessionid>TS5.1</sessionid>
        <papertitle>Real-Time Data Reduction at the Network Edge of Internet-of-Things Systems</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>The expected huge increase in the number of IoT data sources (sensors, embedded systems, personal devices, etc.) will give rise to network-edge computing, i.e., data pre-processing, local storage, and filtering close to the data sources. Specifically, data reduction at the network edge (e.g., on an IoT gateway device or a mini-server deployed locally at an IoT area network) can prevent I/O bottlenecks, as well as dramatically reduce storage, bandwidth, and energy costs. However, current solutions face two main obstacles towards achieving these benefits of network-edge computing. Firstly, the most efficient algorithms for data reduction of time series (one of the prevailing kinds of data in IoT) are developed to work a posteriori upon big datasets and cannot take decisions per incoming data item. Secondly, the state of the art lacks systems that can apply any of many different possible data reduction methods without adding significant delays or heavyweight re-configurations. This paper presents a solution that automates the switching between different data handling algorithms at the network edge, including an analysis of adjusted data reduction methods, as well as three flavors of a new algorithm that is capable of performing real-time reduction of incoming time series items based on the concept of Perceptually Important Points. The potential benefits are evaluated upon real datasets from street, household, and robot sensors, showing that our solution achieves accuracies between 76.1% and 93.8% despite forwarding only 1/3 of the data items, without adding significant forwarding delays.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Apostolos</givenname>
              <surname>Papageorgiou</surname>
            </name>
            <id>544981</id>
            <affiliation>NEC Laboratories Europe</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bin</givenname>
              <surname>Cheng</surname>
            </name>
            <id>910609</id>
            <affiliation>NEC Laboratories Europe</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Ernö</givenname>
              <surname>Kovacs</surname>
            </name>
            <id>289165</id>
            <affiliation>NEC Europe Ltd.</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:00</starttime>
        <endtime>11:30</endtime>
        <paperid>1570163993</paperid>
        <sessionid>TS5.2</sessionid>
        <papertitle>Exploiting Short-Range Cooperation for Energy Efficient Vertical Handover Operations</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>The availability of multiple collocated wireless networks of various technologies and the multi-access support of contemporary mobile devices have enabled wireless connectivity optimization through vertical handover (VHO) operations. However, this comes at the cost of high energy consumption on the mobile device, due to the inherently expensive nature of some of the involved operations. This work proposes exploiting short-range cooperation among collocated mobile devices to improve the energy efficiency of VHO operations. The proactive exchange of handover-related information through low-energy short-range communication technologies, like Bluetooth, can help eliminate expensive signaling steps when the need for a VHO arises. A model is developed that captures the mean energy expenditure of such an optimized VHO scheme in terms of relevant factors by means of closed-form expressions. The model is validated through simulations, and the results demonstrate that the proposed scheme has superior performance in many realistic usage scenarios, considering important relevant factors including network availability, the local density of mobile devices, and the range of the cooperation technology.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Xenofon</givenname>
              <surname>Foukas</surname>
            </name>
            <id>1008697</id>
            <affiliation>The University of Edinburgh</affiliation>
            <country>United Kingdom</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Kimon</givenname>
              <surname>Kontovasilis</surname>
            </name>
            <id>11833</id>
            <affiliation>NCSR Demokritos</affiliation>
            <country>Greece</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Mahesh</givenname>
              <surname>Marina</surname>
            </name>
            <id>128084</id>
            <affiliation>The University of Edinburgh</affiliation>
            <country>United Kingdom</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>11:30</starttime>
        <endtime>12:00</endtime>
        <paperid>1570165141</paperid>
        <sessionid>TS5.3</sessionid>
        <papertitle>Evaluating Device-to-Device Content Delivery Potential on a Mobile ISP's Dataset</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Device-to-Device (D2D) content delivery is an emerging approach in which end-user devices exchange content with other end-user devices in communication range, instead of retrieving it from an operator's infrastructure. This way, the operator network can be offloaded from congestion caused by the transmission of popular content, and the content consumer's quality of experience may increase. However, D2D content delivery is only effective in situations where a device in proximity has the requested content available, which is more likely to happen with popular content in crowded areas. The availability of content in communication range of a consumer constitutes an upper bound on the success of a D2D content delivery mechanism, referred to as the potential of D2D delivery. This paper provides a quantitative answer to the question of this potential and identifies the most important properties a D2D mechanism must provide. An evaluation model is proposed and developed that can be applied to real-world mobile user traces to determine the fraction of content requests that could be served via D2D content delivery. The model is applied to a dataset of a major European Internet service provider, and the evaluation results are discussed. The paper concludes that there is potential to deliver up to 60% of requests for popular content via D2D, if a reliable mechanism to predict a user's content consumption is available.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Leonhard</givenname>
              <surname>Nobach</surname>
            </name>
            <id>769097</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Yannick</givenname>
              <surname>Lelouedec</surname>
            </name>
            <id>707989</id>
            <affiliation>Orange Labs FT</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>David</givenname>
              <surname>Hausheer</surname>
            </name>
            <id>99597</id>
            <affiliation>TU Darmstadt</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>TS6</code>
    <sessiontitle>Technical Session 6</sessiontitle>
    <sessionsubtitle>Management of Multimedia Applications and Network Search</sessionsubtitle>
	<sessionchair>Marinos Charalambides, UCL, United Kingdom</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Thursday, 12 November, 2015</date>
    <range>13:30-15:00</range>
    <starttime>2015-11-12T13:30:00-05:00</starttime>
    <endtime>2015-11-12T15:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>13:30</starttime>
        <endtime>14:00</endtime>
        <paperid>1570163749</paperid>
        <sessionid>TS6.1</sessionid>
        <papertitle>An Announcement-based Caching Approach for Video-on-Demand Streaming</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>The growing popularity of over-the-top (OTT) video streaming services has led to a strong increase in bandwidth capacity requirements in the network. By deploying intermediary caches closer to the end-users, popular content can be served faster and without increasing backbone traffic. Designing an appropriate replacement strategy for such caching networks is of utmost importance to achieve high caching efficiency and reduce the network load. Typically, a video stream is temporally segmented into smaller chunks that can be accessed and decoded independently. This temporal segmentation leads to a strong relationship between consecutive segments of the same video. Therefore, caching strategies have been developed that take into account the temporal structure of the video. In this paper, we propose a novel caching strategy that takes advantage of clients announcing which videos will be watched in the near future, e.g., based on predicted requests for subsequent episodes of the same TV show. Based on a Video-on-Demand (VoD) production request trace, the presented algorithm is evaluated for a wide range of user behavior and request announcement models. In a realistic scenario, a performance increase of 11% can be achieved in terms of hit ratio, compared to the state of the art.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Maxim</givenname>
              <surname>Claeys</surname>
            </name>
            <id>912449</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>1</presenter>
          </author>
          <author>
            <name>
              <givenname>Niels</givenname>
              <surname>Bouten</surname>
            </name>
            <id>893391</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Danny</givenname>
              <surname>De Vleeschauwer</surname>
            </name>
            <id>459703</id>
            <affiliation>Alcatel-Lucent Bell Labs</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Werner</givenname>
              <surname>Van Leekwijck</surname>
            </name>
            <id>524006</id>
            <affiliation>Alcatel-Lucent Bell Labs</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Steven</givenname>
              <surname>Latré</surname>
            </name>
            <id>1059359</id>
            <affiliation>University of Antwerp - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Filip</givenname>
              <surname>De Turck</surname>
            </name>
            <id>97039</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:00</starttime>
        <endtime>14:30</endtime>
        <paperid>1570172025</paperid>
        <sessionid>TS6.2</sessionid>
        <papertitle>Design and Evaluation of Elastic Media Resource Allocation Algorithms using CloudSim Extensions</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>With the maturity of cloud computing comes research into converting a range of traditionally best-effort programs into cloud-enabled services. One such service, currently under investigation in the Elastic Media Distribution (EMD) project, aims to enable high-quality, reliable, and scalable real-time media collaboration using proven cloud technology. While existing best-effort solutions provide plenty of features, they do not provide the quality guarantees and reliability required for critical services in globally distributed corporations. On the other hand, some pricey dedicated solutions do offer these low-delay, reliable cooperation services, but without the benefits that clouds can bring in terms of scalability. In this paper we describe results attained in the EMD project on novel resource provisioning algorithms for a mixture of end-to-end audio/video streams with file-based transfers, allowing for configurable trade-offs between service response time and cost. We extended the CloudSim simulator with models that allow us to simulate collaborative interactive sessions (more specifically, educational real-time collaboration), and evaluated the performance of our proposed provisioning heuristics. The results show that the proposed dynamic algorithm allows for an automated cost-performance trade-off, reducing average total Virtual Machine (VM) cost by up to 58% compared to more naive approaches, while keeping the average time for clients to join a meeting comparable.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Rafael</givenname>
              <surname>Xavier</surname>
            </name>
            <id>876437</id>
            <affiliation>iMinds - Ghent University</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Hendrik</givenname>
              <surname>Moens</surname>
            </name>
            <id>912293</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Bruno</givenname>
              <surname>Volckaert</surname>
            </name>
            <id>117109</id>
            <affiliation>University of Ghent &amp; IBBT</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Filip</givenname>
              <surname>De Turck</surname>
            </name>
            <id>97039</id>
            <affiliation>Ghent University - iMinds</affiliation>
            <country>Belgium</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
      <paper>
        <starttime>14:30</starttime>
        <endtime>15:00</endtime>
        <paperid>1570164893</paperid>
        <sessionid>TS6.3</sessionid>
        <papertitle>Spatial Search in Networked Systems</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - Full papers</trackname>
        <abstract>Information in networked systems often has spatial semantics: routers, sensors, or virtual machines have coordinates in a geographical or virtual space, for instance. In this paper, we propose a peer-to-peer design for a spatial search system that processes queries, such as range or nearest-neighbor queries, on spatial information cached on nodes inside a networked system. Key to our design is a protocol that creates a distributed index of object locations and adapts it to object and node churn. The index is built around the Minimum Bounding Rectangle (MBR) concept to efficiently encode locations. We present a search protocol, based on an echo protocol, that prunes the search space and performs query routing. Simulations show the efficiency of the protocol in pruning the search space, thereby reducing the protocol overhead. For many queries, the protocol efficiency increases with the network size and approaches that of an optimal protocol for large systems. The protocol overhead depends on the network topology and is lower if neighboring nodes are spatially close. In contrast to recent works in spatial databases, our design is bottom-up, which makes query routing network-aware and thus efficient in networked systems.</abstract>
        <authors>
          <author>
            <name>
              <givenname>Misbah</givenname>
              <surname>Uddin</surname>
            </name>
            <id>611721</id>
            <affiliation>KTH Royal Institute of Technology</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Rolf</givenname>
              <surname>Stadler</surname>
            </name>
            <id>149700</id>
            <affiliation>KTH Royal Institute of Technology</affiliation>
            <country>Sweden</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Alexander</givenname>
              <surname>Clemm</surname>
            </name>
            <id>86835</id>
            <affiliation>Cisco Systems, Inc.</affiliation>
            <country>USA</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
  <session>
    <code>DEP</code>
    <sessiontitle>Distinguished Experts Panel</sessiontitle>
    <sessionchair>Filip De Turck, iMinds, Ghent University, Belgium</sessionchair>
    <sessionroom>Aula Màster</sessionroom>
    <sessionsubtitle/>
    <sessionspeaker/>
    <sessiondetails/>
    <date>Thursday, 12 November, 2015</date>
    <range>15:30-17:00</range>
    <starttime>2015-11-12T15:30:00-05:00</starttime>
    <endtime>2015-11-12T17:00:00-05:00</endtime>
    <room/>
    <chairs/>
    <papers>
      <paper>
        <starttime>15:30</starttime>
        <endtime>17:00</endtime>
        <paperid/>
        <sessionid>DEP</sessionid>
        <papertitle>Service Quality in Virtualized Environments: Improvement or Deterioration?</papertitle>
        <trackname>11th International Conference on Network and Service Management 2015 - DEP</trackname>
        <abstract/>
        <authors>
          <author>
            <name>
              <givenname>Raouf</givenname>
              <surname>Boutaba</surname>
            </name>
            <id>611721</id>
            <affiliation>University of Waterloo</affiliation>
            <country>Canada</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Prosper</givenname>
              <surname>Chemouil</surname>
            </name>
            <id>149700</id>
            <affiliation>Orange Labs</affiliation>
            <country>France</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>Axel</givenname>
              <surname>Clauberg</surname>
            </name>
            <id>86835</id>
            <affiliation>Deutsche Telekom AG</affiliation>
            <country>Germany</country>
            <presenter>0</presenter>
          </author>
          <author>
            <name>
              <givenname>George</givenname>
              <surname>Pavlou</surname>
            </name>
            <id>86835</id>
            <affiliation>University College London</affiliation>
            <country>United Kingdom</country>
            <presenter>0</presenter>
          </author>
        </authors>
      </paper>
    </papers>
  </session>
</program>