IM 2007 Tutorials

IM 2007 features state-of-the-art tutorials from leading experts in their fields on the days before (May 21, 2007) and after (May 25, 2007) the technical program. Topics span all levels, from introductory to highly advanced, and all tutorials address important and relevant subject areas of systems, network, and service management.

Burkhard Stiller, University of Zürich and ETH Zürich, Switzerland

 

 




Tutorial 1: Autonomic Networking - Theory and Practice

The increasing complexity of computing systems is beginning to overwhelm the ability of software developers and system administrators to design, evaluate, integrate, and manage these systems. Autonomic computing is a collection of technologies and mechanisms that enable systems and components to govern their behavior in accordance with policies. This enables business needs to drive the services and resources available from the network. This tutorial aims to give participants a reasonably deep understanding of the motivation for autonomic computing, concentrating on the semantic and behavioral aspects of network management. After defining autonomic computing and networking, the tutorial will first describe relevant technologies used in building autonomic systems, components, and networks, and then elaborate on different architectural styles of autonomics. A novel autonomic networking architecture will be examined in detail. This theory will be reinforced with use cases and practical examples, including a demonstration of ongoing research work in Motorola Labs.
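
As a concrete illustration of policy-governed behavior, the TypeScript sketch below shows a minimal event-condition-action rule that lets a component adapt its own behavior. All names, events, and thresholds are illustrative; this is not the tutorial's or Motorola's actual framework.

```typescript
// Minimal event-condition-action (ECA) policy sketch; all types and
// names are illustrative, not a real autonomic framework.
interface Event { name: string; value: number; }

interface PolicyRule {
  event: string;                        // which event triggers the rule
  condition: (e: Event) => boolean;     // guard evaluated on the event
  action: (e: Event) => void;           // behavior change to apply
}

const rules: PolicyRule[] = [
  {
    event: "linkUtilization",
    condition: (e) => e.value > 0.9,    // link more than 90% loaded
    action: () => console.log("shift low-priority traffic to backup path"),
  },
];

// An autonomic component evaluates matching rules against incoming events,
// so policy (the rules) drives behavior without re-coding component logic.
function onEvent(e: Event): void {
  for (const r of rules) {
    if (r.event === e.name && r.condition(e)) r.action(e);
  }
}

onEvent({ name: "linkUtilization", value: 0.95 });
```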


Biography of the Instructor:

John Strassner is a Motorola Fellow and Director of Autonomic Computing at Motorola Research Labs, where he is responsible for directing Motorola's efforts in autonomic computing, policy management, and knowledge engineering. He is active both in forging partnerships (especially with academia) and in international standards, where he is the Chair of the Autonomic Communications Forum and the Vice-Chair of WG6 (Reconfigurability and Autonomics) of the WWRF. Previously, John was the Chief Strategy Officer for Intelliden and, before that, a Cisco Fellow. John invented DEN (Directory Enabled Networks) and DEN-ng as a new paradigm for managing and provisioning networks and networked applications. He is also the past chair of the TMF's NGOSS SID, meta-model, and policy working groups, and has been active in the ITU, OMG, and OASIS. He has authored two books (Directory Enabled Networks and Policy Based Network Management) and written chapters for three other books. Finally, John is the recipient of the Daniel A. Stokesbury memorial award for excellence in network management, a TMF Fellow, and the author of over 145 refereed papers and publications.



Tutorial 2: NetFlow, IPFIX, and Beyond: Integrated Routing, Traffic Analysis, and Modeling for Highly Accurate Network Engineering

Network management has traditionally been carried out using SNMP polling, in some cases augmented by codebook-based correlation. More recently, flow record-based analysis has been used to provide further insight into the application and traffic dynamics of IP networks. However, periodic polling falls far short of capturing the complex and dynamic layer-3 operations of IP networks, and flow record-based analysis is typically viewed on a link-by-link basis. These limited viewpoints force network engineers to do the "hard" work of trying to figure out the global state of the network in the present and, most perplexingly, in the past, in order to surmise root causes from symptoms and to plan changes effectively. In particular, the routing dynamics of IP networks often lead to unpredictable and intermittent behaviors that leave network managers unable to explain what happened or why. This half-day tutorial looks at the use of flow record-based analysis such as NetFlow and the upcoming IPFIX standard, its uses and limitations, and how an emerging technology called route analytics can be merged with traffic flow analysis to provide network-wide understanding of network and traffic phenomena for better troubleshooting and planning. The tutorial will demonstrate how "route-flow fusion" can be used in practice to increase the reliability and predictability of IP networks for ever more sensitive and demanding converged applications.
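
To make the flow-record side concrete, here is a minimal TypeScript sketch that aggregates flow records into per-source "top talkers", the kind of per-link view that route analytics then places into network-wide routing context. The record shape is a simplified assumption, not the NetFlow or IPFIX wire format.

```typescript
// Sketch: aggregating NetFlow/IPFIX-style flow records into "top talkers".
// The record shape is a simplified assumption, not a real export format.
interface FlowRecord {
  srcAddr: string;
  dstAddr: string;
  bytes: number;
}

function topTalkers(records: FlowRecord[], n: number): [string, number][] {
  const bytesBySrc = new Map<string, number>();
  for (const r of records) {
    bytesBySrc.set(r.srcAddr, (bytesBySrc.get(r.srcAddr) ?? 0) + r.bytes);
  }
  // Sort sources by total bytes sent, descending, and keep the top n.
  return [...bytesBySrc.entries()].sort((a, b) => b[1] - a[1]).slice(0, n);
}

const sample: FlowRecord[] = [
  { srcAddr: "10.0.0.1", dstAddr: "10.0.1.9", bytes: 1_200_000 },
  { srcAddr: "10.0.0.2", dstAddr: "10.0.1.9", bytes: 300_000 },
  { srcAddr: "10.0.0.1", dstAddr: "10.0.2.7", bytes: 800_000 },
];
console.log(topTalkers(sample, 2)); // [["10.0.0.1", 2000000], ["10.0.0.2", 300000]]
```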


Biography of the Instructor:

 
Cengiz Alaettinoglu is a fellow at Packet Design, Inc. Currently he is working on scaling and convergence properties of both inter-domain and intra-domain routing protocols. He was previously at the USC Information Sciences Institute, where he worked on the Routing Arbiter project. He co-chaired the IETF Routing Policy System Working Group to define the Routing Policy Specification Language and the protocols to enable a distributed, secure routing policy system. Alaettinoglu received a B.S. degree in computer engineering in 1988 from the Middle East Technical University, Ankara, Turkey; and M.S. and Ph.D. degrees in computer science in 1991 and 1994 from the University of Maryland at College Park. He was a Research Assistant Professor at the University of Southern California, where he taught graduate and undergraduate classes on operating systems and networking from 1994 to 2000. He has given numerous talks at NANOG, IETF, RIPE and APNIC meetings, as well as at ACM and IEEE conferences and workshops.



Tutorial 3: Peer-to-Peer Networking - State of the Art and Research Challenges

The past few years have witnessed the emergence of Peer-to-Peer (P2P) systems as a means to further facilitate the formation of communities of interest over the Internet in all areas of human life including technical/research, cultural, political, social, and entertainment. P2P technologies involve data storage, discovery and retrieval, overlay networks and application-level routing, security and reputation, measurements and management. This tutorial will give an appreciation of the issues and state of the art in Peer-to-Peer Networking. It will introduce the underlying concepts, present existing architectures, highlight the design requirements, discuss the research issues, compare existing approaches, and illustrate the concepts through case studies. The ultimate objective is to provide the tutorial attendees with an in-depth understanding of the issues inherent to the design, deployment and operation of large-scale P2P systems.
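
As a taste of how structured P2P overlays route at the application level, the toy TypeScript sketch below shows the Chord-style idea of hashing node names and keys into one circular ID space and storing each key on its successor node. It is a sketch only: a real DHT finds the successor via finger tables in O(log N) hops instead of the global node list assumed here.

```typescript
// Toy Chord-style lookup: keys and node IDs share one circular ID space,
// and a key lives on the first node whose ID follows it clockwise.
import { createHash } from "node:crypto";

const RING_BITS = 16;
const RING_SIZE = 1 << RING_BITS;

function idOf(name: string): number {
  // Hash a name into the ring using the first 2 bytes of its SHA-1 digest.
  return createHash("sha1").update(name).digest().readUInt16BE(0) % RING_SIZE;
}

function successor(nodeIds: number[], key: number): number {
  const sorted = [...nodeIds].sort((a, b) => a - b);
  // First node at or after the key; wrap around the ring otherwise.
  return sorted.find((n) => n >= key) ?? sorted[0];
}

const nodes = ["peerA", "peerB", "peerC", "peerD"].map(idOf);
const key = idOf("some-file.mp3");
console.log(`key ${key} is stored on node ${successor(nodes, key)}`);
```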


Biography of the Instructor:

Dr. Raouf Boutaba is an Associate Professor in the School of Computer Science of the University of Waterloo. Before that he was with the Department of Electrical and Computer Engineering of the University of Toronto. Before joining academia, he founded and directed the telecommunications and distributed systems division of the Computer Science Research Institute of Montreal (CRIM). Dr. Boutaba conducts research in network and distributed systems management and in resource management in wired and wireless multimedia networks. He has published more than 200 papers in refereed journals and conference proceedings. He is the recipient of the Premier's Research Excellence Award, two Nortel Networks research excellence awards, and several best paper awards. He is a Distinguished Lecturer of the IEEE Communications Society. Dr. Boutaba is the Chairman of the IFIP Working Group on Networks and Distributed Systems, the Chair of the IEEE Communications Society Technical Committee on Information Infrastructure, and the Director of the IEEE ComSoc Related Societies Board. He is the founder and acting Editor-in-Chief of the IEEE Transactions on Network and Service Management, and serves on the advisory editorial board of the Journal of Network and Systems Management and on the editorial boards of the KICS/IEEE Journal of Communications and Networks and the Journal of Computer Networks. He has acted as general and program committee co-chair for several IFIP and IEEE conferences, including NOMS, MMNS, ICC, and Globecom.



Tutorial 4: Efficient Network and Traffic Monitoring

Offering reliable novel services in modern heterogeneous networks is a key challenge and the main prospective income source for many network operators and providers. Providing reliable future services in a cost-effective, scalable manner requires efficient use of networking and computation resources. This can be done by making the network more self-enabled, i.e., capable of making distributed local decisions regarding the utilization of the available resources. However, such decisions must be correlated in order to achieve a global overall goal (maximum utilization or maximum profit, for example). A key building block for all such systems is the ability to monitor the network parameters and the relevant traffic, and to infer from these measurements the relevant information needed at each of the local decision points. Due to the heterogeneous nature of modern networks and the very high volume of traffic, even monitoring a single location introduces significant difficulties. It is much more challenging to decide what type of traffic or network information should be collected at each network segment in order to acquire the needed global information without investing too much effort in the monitoring process or its management. In fact, efficient network and traffic monitoring may become a very significant ingredient in the ability to provide modern network services in a cost-effective way. This tutorial deals with practical and efficient techniques to retrieve information from modern network devices. We start by examining the SNMP suite and the various methods to collect information from possibly large MIB tables. Then we develop a framework for quantifying resource (bandwidth and CPU) utilization in distributed network management. To demonstrate both the theoretical and practical impact of this framework, advanced techniques for efficient reactive traffic monitoring and efficient QoS parameter monitoring are presented and analyzed, together with empirical results indicating the actual overhead reduction.
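
A simple form of the reactive-monitoring idea can be sketched in a few lines: push a threshold to the agent and report only on crossings, instead of having the manager poll every sample. The TypeScript sketch below is illustrative only (names and numbers are invented) and does not reproduce the tutorial's actual techniques.

```typescript
// Sketch of reactive monitoring: each agent checks a local threshold and
// reports only when the watched variable crosses it, cutting manager traffic.
interface Report { agent: string; value: number; }

class ReactiveAgent {
  private above = false;

  constructor(
    private name: string,
    private threshold: number,
    private notify: (r: Report) => void,
  ) {}

  // Called on every local sample; a message is sent to the manager
  // only on threshold crossings, not on every sample.
  sample(value: number): void {
    const nowAbove = value > this.threshold;
    if (nowAbove !== this.above) {
      this.above = nowAbove;
      this.notify({ agent: this.name, value });
    }
  }
}

const manager = (r: Report) =>
  console.log(`crossing reported by ${r.agent}: ${r.value}`);
const agent = new ReactiveAgent("router1.ifLoad", 0.8, manager);

// Six local samples, but only two messages reach the manager.
[0.2, 0.5, 0.85, 0.9, 0.7, 0.6].forEach((v) => agent.sample(v));
```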


Biography of the Instructor:

Prof. Raz received his doctoral degree from the Weizmann Institute of Science, Israel, in 1996. From September 1995 until September 1997 he was a postdoctoral fellow at the International Computer Science Institute (ICSI), Berkeley, CA, and a visiting lecturer at the University of California, Berkeley. From October 1997 until October 2001 he was with the Networking Research Laboratory at Bell Labs, Lucent Technologies. In October 2000 Danny Raz joined the faculty of the Computer Science Department at the Technion in Israel. His primary research interest is the theory and application of management-related problems in IP networks. Prof. Raz has been engaged in network management research for the last seven years; his main contributions are in the field of efficient network management and the use of active and programmable networks in network management. Prof. Raz has given talks and tutorials on this subject at many international conferences. He was the general chair of OpenArch 2000 and a program committee member for many of the leading conferences in the general field of networking (INFOCOM 2002, 2003), network management (IM and NOMS 2001-2007, DSOM 2003-2006), and active and programmable networks (IWAN, OpenArch). He is an editor of the IEEE/ACM Transactions on Networking (ToN) and has edited a special issue of JSAC.



Tutorial 5: Modern Web Applications with Ajax and Web 2.0

Web 2.0 is an umbrella term for a set of interesting, trend-setting advancements of the World Wide Web. With the programming paradigm of Asynchronous JavaScript and XML (Ajax), Web 2.0 sites are characterized by a high degree of interactivity and user friendliness that was previously reached only by classical desktop applications. Ajax breaks the rigid request-response interaction pattern between browser and Web server and allows an application to react immediately to user input and to adapt the contents of the Web page dynamically. Ajax loads data in the background while the application remains usable in the foreground. Furthermore, Ajax creates new possibilities for structuring Web applications by replacing the unclear mixture of client- and server-side scripts and program fragments. A further advantage is the ability to integrate external data sources by calling programmable interfaces accessed via SOAP or REST. In this case, Web 2.0 applications serve as the graphical interface for service-oriented architectures (SOA). SOA is considered to be the solution to data and application integration in the backend; however, Web 2.0 portals are increasingly taking over this role in the frontend. So-called mashup sites offer comfortable access to several data sources and make a mix of applications appear as one integrated overall experience. Finally, Web 2.0 prompts a wave of individualization and democratization of the Internet. It promotes a strong commitment of the individual participant, supported by so-called "social" software such as wikis and blogs. Social software opens the door to another broad spectrum of applications: wikis, for example, are an excellent tool for knowledge management, and globally available data sources such as Wikipedia and services such as Google Maps can be linked with data that is internal to a corporation. In order to take advantage of these opportunities, one must master this new programming paradigm, the appropriate standards, and the available tools, assess the security implications, and understand the Web 2.0 culture.
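
The asynchronous interaction pattern itself is small. The browser-side TypeScript sketch below (the URL and element id are placeholders, not part of the tutorial's material) fetches data in the background with XMLHttpRequest and updates part of the page while the rest stays usable:

```typescript
// Minimal Ajax sketch (browser code): fetch data in the background and
// update part of the page without a full reload. URL and id are placeholders.
function loadNews(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/news", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    // readyState 4 = response fully received; the page stayed usable meanwhile.
    if (xhr.readyState === 4 && xhr.status === 200) {
      const items: string[] = JSON.parse(xhr.responseText);
      const target = document.getElementById("news");
      if (target) target.innerHTML = items.map((i) => `<li>${i}</li>`).join("");
    }
  };
  xhr.send();
}

loadNews(); // the rest of the page keeps responding while this request runs
```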


Biography of the Instructor:

Dr. Andreas Eberhart is a software architect with HP Germany where, among other projects, he leads the UI development of the Virtual Machine Management and Server Migration products. Before joining HP, he worked for Informix Software in Portland, Oregon; at the International University in Germany in Bruchsal; and at the AIFB at the University of Karlsruhe, where he led a number of research projects in the area of the Semantic Web. He has also co-authored books on enterprise applications, Web services, and Java programming.



Tutorial 6: Assessing and Hacking Network Security

The objective of this tutorial is to give hands-on experience of network security assessment. Looking at your own network through the eyes of an enemy might surprise you in many cases: necessary insights about your vulnerabilities and poor security practices might suddenly become visible, thus identifying high security risks. This tutorial presents a comprehensive overview of the technical procedures and techniques that drive such a process. There is a fine line between a full penetration study, in which vulnerabilities are detected and exploited by a red team hired for this purpose (or a malicious hacker), and an assessment procedure, where an overall picture of the potential vulnerabilities and weaknesses is drawn. Some of the tools are common to both activities, including network scanners (tools to detect the network topology and the services available on a network), enumeration tools, and automatic vulnerability scanners for network services or Web applications (helpful in identifying whether the target system is exposed to a series of known vulnerabilities). Security assessment, however, is less invasive (no fine-tuning of exploit code is done to prove the effective exploitation of a vulnerability) and focuses more on establishing the overall security level of a network and its available services. This tutorial will provide an introduction to this topic, covering both the operational procedures and the required technical skills and tools.
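
The most basic building block of the network scanners mentioned above is easy to sketch. The TypeScript (Node.js) example below performs a plain TCP connect scan against a handful of ports; it is illustrative only, is not the tutorial's toolset, and should only ever be pointed at hosts you are authorized to assess.

```typescript
// Sketch of a TCP "connect" scan: check which ports accept connections.
// For authorized assessments only.
import { Socket } from "node:net";

function probe(host: string, port: number, timeoutMs = 1000): Promise<boolean> {
  return new Promise((resolve) => {
    const sock = new Socket();
    const done = (open: boolean) => { sock.destroy(); resolve(open); };
    sock.setTimeout(timeoutMs);
    sock.once("connect", () => done(true));   // something is listening
    sock.once("timeout", () => done(false));  // filtered, or host down
    sock.once("error", () => done(false));    // closed (e.g. connection refused)
    sock.connect(port, host);
  });
}

async function scan(host: string, ports: number[]): Promise<void> {
  for (const port of ports) {
    if (await probe(host, port)) console.log(`${host}:${port} open`);
  }
}

scan("127.0.0.1", [22, 80, 443, 8080]);
```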


Biography of the Instructor:

Radu State holds a Ph.D. from INRIA and a Master of Science in Engineering from the Johns Hopkins University (USA). He is a researcher in network security and network management with more than 60 papers published in international conferences and journals. He is a member of the technical program committees of IEEE/IFIP Integrated Network Management, IEEE/IFIP Network Operations and Management, and IEEE/IFIP DSOM. He lectures at major conferences on topics related to security and network management and control. His activities range from network security assessment and software security to VoIP intrusion detection and assessment.



Tutorial 7: IT Service Management in a Service-oriented Environment: Best Practices, Challenges, and Shared Experiences

In this tutorial we will describe the transformation that is taking place within IT organizations to provide IT service management rather than just IT systems management. In this new service-oriented paradigm, the traditional focus on technology to provide systems management is augmented with process management to provide a service management focus. The tutorial starts with an introduction to IT organizations and what IT service management entails. It provides a brief overview of the IT Infrastructure Library (ITIL) and the IT Service Management (ITSM) framework based on it, including the service support domain of Configuration Management, Change Management, Incident Management, Problem Management, and Release Management, and the service delivery domain encompassing Service Level Management, Financial Management, Capacity Management, Availability Management, and Service Continuity Management. The tutorial then illustrates how service-oriented architecture provides the ideal enabler for process-based IT service management, by following the lifecycle of a representative IT service request. The tutorial concludes with a discussion of adoption challenges, including handling process variances, deployment planning, process monitoring, and education. At the conclusion of the course, participants are expected to have an introductory understanding of IT service management, including its four major elements: organization, process, technology, and information. Participants will be conversant in ITIL, understand the basics of the service support and service delivery domains, and understand how Service Oriented Architecture (SOA)-enabled ITSM can be deployed for IT service management, along with its major benefits and challenges.
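
To give a flavor of process-based service management, the TypeScript sketch below models a change request as a small state machine whose transitions mirror an ITIL-style change lifecycle. The states and transitions are a simplified illustration for this page, not the ITIL specification or the presenters' product.

```typescript
// Sketch of one ITSM process element: a change request as a state machine,
// so that only ITIL-style transitions are possible. Simplified illustration.
type ChangeState =
  | "recorded" | "assessed" | "approved" | "implemented" | "reviewed" | "closed";

const allowed: Record<ChangeState, ChangeState[]> = {
  recorded: ["assessed"],
  assessed: ["approved"],        // e.g. via a Change Advisory Board
  approved: ["implemented"],
  implemented: ["reviewed"],
  reviewed: ["closed"],          // post-implementation review done
  closed: [],
};

class ChangeRequest {
  constructor(public id: string, public state: ChangeState = "recorded") {}

  transition(next: ChangeState): void {
    if (!allowed[this.state].includes(next)) {
      throw new Error(`illegal transition ${this.state} -> ${next} for ${this.id}`);
    }
    console.log(`${this.id}: ${this.state} -> ${next}`);
    this.state = next;
  }
}

const rfc = new ChangeRequest("CHG-0042");
rfc.transition("assessed");
rfc.transition("approved");
// rfc.transition("closed"); // would throw: review steps cannot be skipped
```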


Biographies of the Instructors (two presenters):

Claudio Bartolini, HP Laboratories Palo Alto, 1501 Page Mill Road, Palo Alto, CA 94304 (claudio.bartolini@hp.com). Claudio is a senior researcher at HP Laboratories in Palo Alto, USA. His background is in the architecture and design of software systems and frameworks; his current research interest is in methodologies for business and IT alignment. He holds an M.Sc. degree in electronic engineering and computer science from the University of Bologna, Italy. He has published over twenty papers in international journals, conferences, and workshops, and has contributed book chapters. He is a co-author of the W3C WSCL specification and holds a number of patents in various countries. He is a frequent speaker at conferences and has chaired a number of conferences and workshops.


Christopher Ward, IBM Research Division, T.J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 (cw1@us.ibm.com). Dr. Ward is a Senior Research Staff Member and manager in the IT Systems and Services Management Group in the Service Delivery Department at the T.J. Watson Research Center. He joined IBM in 2000 and is most recently responsible for the architecture and development of configuration management process elements for a major IT Service Management product. Prior to his work in IT service management, he was responsible for a data management model representing the complex relationships required for proactive SLA management. Since joining IBM he has received various achievement awards, chaired selected standards committees, and published many technical papers. Dr. Ward has published extensively on a variety of computer science problems, is author or co-author of numerous patents, and is a Senior Member of the IEEE. He received a Ph.D. degree in Computer Science from the University of Florida in 1988.



Tutorial 8: Traffic Engineering and Quality of Service Management for IP-based Next Generation Networks

Next Generation IP-based networks will offer Quality of Service (QoS) guarantees by deploying technologies such as Differentiated Services (DiffServ) and Multi-Protocol Label Switching (MPLS) for traffic engineering and network-wide resource management. Despite the progress already made, a number of issues still exist regarding edge-to-edge intra-domain and inter-domain QoS provisioning and management. This tutorial will start by providing background on technologies such as DiffServ and MPLS and their potential combination for QoS support. It will subsequently introduce trends in Service Level Agreements (SLAs) and Service Level Specifications (SLSs) for the subscription to QoS-based services. It will then move on to examine architectures and frameworks for the management and control of QoS-enabled networks, including the following aspects: approaches and algorithms for off-line traffic engineering and provisioning through explicit MPLS paths or through hop-by-hop IP routing; approaches for dynamic resource management to deal with traffic fluctuations outside the predicted envelope; a service management framework supporting a "resource provisioning cycle"; the derivation of expected traffic demand from subscribed SLSs and approaches for SLS invocation admission control; a monitoring architecture for scalable information collection supporting traffic engineering and service management; and realization issues given the current state of the art of management protocols and monitoring support. The tutorial will also cover emerging work towards inter-domain QoS provisioning, including an inter-domain business model; customer and peer provider SLSs; an architecture for the management and control of inter-domain services; inter-domain off-line traffic engineering; and QoS extensions to BGP for dynamic traffic engineering. Relevant industrial activities such as IPsphere will also be covered. In all these areas, recent research work will be presented, with pointers to the bibliography and a specially tailored Web page with additional resources.
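
One of the listed aspects, SLS invocation admission control, can be sketched in a few lines. The TypeScript example below (traffic classes, capacities, and rates are invented for illustration) admits a new invocation only while its requested rate still fits the bandwidth provisioned for its traffic class:

```typescript
// Sketch of SLS invocation admission control on one link: admit a new
// service invocation only if its requested rate fits within the bandwidth
// provisioned for its traffic class. Illustrative numbers and names only.
interface ClassBudget { capacityMbps: number; allocatedMbps: number; }

class AdmissionController {
  private classes = new Map<string, ClassBudget>();

  provision(cls: string, capacityMbps: number): void {
    this.classes.set(cls, { capacityMbps, allocatedMbps: 0 });
  }

  // Returns true and reserves the bandwidth if the invocation fits.
  admit(cls: string, requestedMbps: number): boolean {
    const b = this.classes.get(cls);
    if (!b || b.allocatedMbps + requestedMbps > b.capacityMbps) return false;
    b.allocatedMbps += requestedMbps;
    return true;
  }
}

const ac = new AdmissionController();
ac.provision("EF", 100);          // e.g. a DiffServ expedited-forwarding class
console.log(ac.admit("EF", 60));  // true: 60 of 100 Mb/s now in use
console.log(ac.admit("EF", 60));  // false: would exceed the class budget
```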


Biography of the Instructor:


Prof. George Pavlou holds the Chair of Communication and Information Systems at the Centre for Communication Systems Research, Department of Electronic Engineering, University of Surrey, UK, where he leads the activities of the Networks Research Group (http://www.ee.surrey.ac.uk/CCSR/Networks/). He received a Diploma in Engineering from the National Technical University of Athens, Greece, and MSc and PhD degrees in Computer Science from University College London, UK. His research interests encompass network and service management, network planning and dimensioning, traffic engineering, quality of service, mobile ad hoc networks, service engineering, multimedia service control and management, code mobility, programmable networks, and communications middleware. He is the author or co-author of over 150 papers in fully refereed international conferences and journals and has contributed to four books. He has also contributed to standardization activities in ISO, ITU-T, TMF, and IETF. He was the technical program co-chair of IEEE/IFIP Integrated Network Management 2001 and is co-editor of the biannual IEEE Communications Network and Service Management series. See http://www.ee.surrey.ac.uk/Personal/G.Pavlou/ for additional information and his publications in PDF.
