Today’s Internet is increasingly expected to support interactive applications – the emerging wave of virtual, augmented, and mixed reality applications is putting unprecedented demands on the network. Unfortunately, today’s Internet offers a best-effort service, which often falls short of meeting the goals of these applications.
This research will revisit the quality of service (QoS) problem in the context of the emerging cloud infrastructure: the global footprint of data centers (DCs) hosted by major cloud providers. These DCs have good network connectivity (both between themselves and to end users) but are costly to use. These factors lead to two important questions: (i) what QoS can be achieved if these DCs are used as an assistive overlay for wide-area communication, and (ii) can we use DCs in a judicious manner to enhance the best-effort nature of the Internet, so as to get their benefits without incurring excessive cost? This project will address these two questions. First, using extensive network measurements, it will quantify the potential benefits of using the cloud as an assistive overlay for wide-area communication. Second, it will investigate how best-effort Internet paths can be judiciously combined with cloud overlays to provide bandwidth and latency guarantees for wide-area communication, using appropriate and economically efficient pricing mechanisms.
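The combination of best-effort paths and cloud overlays can be pictured as a simple path-selection problem. The sketch below is purely illustrative, not the project's actual design: the names (`PathOption`, `choose_path`) and the latency/cost numbers are assumptions. It picks the cheapest path that meets a latency target, falling back to the direct best-effort path whenever the overlay is unnecessary.

```python
# Hypothetical sketch: choose between the direct best-effort path and a
# cloud-overlay path, given measured latencies and per-GB transfer cost.
from dataclasses import dataclass

@dataclass
class PathOption:
    name: str
    latency_ms: float   # measured path latency
    cost_per_gb: float  # transfer cost in dollars

def choose_path(options, latency_target_ms):
    """Prefer the cheapest option meeting the target; else the fastest."""
    feasible = [p for p in options if p.latency_ms <= latency_target_ms]
    if feasible:
        return min(feasible, key=lambda p: p.cost_per_gb)
    return min(options, key=lambda p: p.latency_ms)

paths = [
    PathOption("direct (best effort)", latency_ms=180, cost_per_gb=0.00),
    PathOption("cloud overlay",        latency_ms=90,  cost_per_gb=0.08),
]

print(choose_path(paths, latency_target_ms=100).name)  # cloud overlay
print(choose_path(paths, latency_target_ms=200).name)  # direct (best effort)
```

In practice the decision would also weigh bandwidth guarantees and the provider's pricing model, but the cost/performance trade-off has this basic shape.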
This research project will leverage redundancy, which is naturally present in typical cloud environments, to improve application performance. The key idea is to make resources aware of replicas, and to design mechanisms that proactively use duplicate requests to avoid stragglers – an approach we call duplicate-aware scheduling (DAS). DAS leverages spare capacity in the system to mitigate stragglers with negligible overhead. We realize DAS on different bottleneck resources (e.g., disk and network) using the D-Stage abstraction, which decouples the duplication policy from the duplication mechanism and simplifies DAS support in legacy layers.
This project designs a learning-based scheduling policy (2D) that is robust to changes in workloads (e.g., job size distributions). 2D uses principles from existing scheduling policies, together with learning, to meet its objective of being tail-optimal in the face of changing workloads. In particular, 2D combines fundamental scheduling policies such as First-In-First-Out (FIFO) and Processor Sharing (PS) in a principled way to achieve its objectives.
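To make the FIFO/PS trade-off concrete, here is a deliberately simplified sketch (not the actual 2D policy): FIFO tends to be tail-friendly when job sizes are predictable, while PS hedges against unknown large jobs when size variability is high. The coefficient-of-variation test and threshold below are illustrative assumptions standing in for what 2D would learn.

```python
# Illustrative threshold policy: serve FIFO when workload variability is
# low, fall back to processor sharing (PS) when it is high.
import statistics

def pick_policy(job_sizes, cv_threshold=1.0):
    """Return "FIFO" for low-variability workloads, "PS" otherwise."""
    mean = statistics.mean(job_sizes)
    cv = statistics.pstdev(job_sizes) / mean  # coefficient of variation
    return "FIFO" if cv <= cv_threshold else "PS"

print(pick_policy([10, 11, 9, 10]))  # FIFO  (near-deterministic sizes)
print(pick_policy([1, 1, 1, 100]))   # PS    (heavy-tailed sizes)
```

The point of a learning-based policy like 2D is precisely to replace such a hand-tuned threshold with a decision rule that adapts as the job size distribution drifts.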
Network impairments (e.g., jitter and outages) can have a negative impact on user experience, especially for interactive, multi-user applications such as collaborative augmented reality (AR). For example, network outages during rapid model manipulation (re-sizing, re-positioning) in AR applications could lead to confusion or disorientation. Moreover, in applications that use content-rich 3D models (distance learning, at-home therapy, etc.), outages could cause users to miss instructions and demonstrations, thereby interfering with task performance. This work will explore the impact of network impairments on users and how their negative effects can be reduced.
We are moving towards an Internet where most of the packets may be consumed by machines – set-top boxes or smartphone apps prefetching content, Internet of Things (IoT) devices uploading their data to the cloud, or data centers doing geo-distributed replication. We observe that such machine-centric communication can afford to have slack built into it: every packet can be marked with when it will be consumed in the future. Slack could be anywhere from seconds to hours or even days. In this project, we will make a case for slack-aware networking by illustrating slack opportunities that arise for a wide range of applications as they interact with the cloud and its pricing models (e.g., spot pricing). We will also sketch the design of SlackStack, a network stack with explicit support for slack at multiple levels of the stack, from a slack-based interface to slack-aware optimizations at the transport and network layers.
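A slack-annotated send interface of the kind SlackStack might expose can be sketched as follows. This is a hypothetical illustration, not the SlackStack API: the application tags each transfer with its slack, and the stack is then free to reorder or delay transfers up to their consumption deadlines (e.g., to exploit cheap off-peak or spot capacity). The class and method names are assumptions.

```python
# Sketch of a slack-based interface: pending transfers are ordered by
# consumption deadline, so the stack services the most urgent data first
# and may defer everything else.
import heapq
import time

class SlackQueue:
    """Orders pending transfers by consumption deadline (earliest first)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so heapq never compares payloads

    def send(self, payload, slack_s):
        """Enqueue a transfer that must complete within slack_s seconds."""
        deadline = time.time() + slack_s
        heapq.heappush(self._heap, (deadline, self._seq, payload))
        self._seq += 1

    def next_transfer(self):
        """Pop the transfer whose deadline is soonest."""
        _, _, payload = heapq.heappop(self._heap)
        return payload

q = SlackQueue()
q.send("nightly backup", slack_s=3600)   # an hour of slack
q.send("prefetched video", slack_s=30)   # thirty seconds of slack
print(q.next_transfer())  # prefetched video
```

In a full design, the transport and network layers would consume these deadlines too, e.g., pacing low-urgency flows so they never compete with interactive traffic.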