A pinboard by this curator

Master's Student, Aalto


We aim to reduce the bandwidth needed to stream video for cloud gaming.

Imagine being able to play a graphics-intensive game on an average laptop or smartphone without draining your battery. Currently, even high-end smartphones and laptops with average hardware cannot play the latest graphics-intensive games at the same quality as high-end desktop computers. They simply lack the computing power, and the energy demands of such games are so high that portable play is effectively impossible. Cloud gaming aims to solve this while reaping the benefits of cloud computing: efficient use of hardware resources, economies of scale, and platform independence.

Cloud gaming offloads the resource-intensive part of gaming to the cloud. The game is executed in the cloud, and the gameplay video is rendered there as well. The video is captured, encoded, streamed to a client, and played for the player. The player's inputs, for example mouse clicks and keystrokes, are captured by the client and sent to the server, which feeds them to the game as if the player were sitting at the server. The client can be any device capable of playing video: in effect, any smartphone, tablet, or computer, no matter how weak its hardware.

But there are two intertwined challenges. The latency between a user's input and the corresponding video must be very low, and high-end graphics produce large video streams, which take longer to transmit.

My research tries to reduce the size of the video that needs to be streamed, without reducing the quality of experience, by leveraging a characteristic of the human eye. The eye does not perceive a video scene with uniform visual acuity. The retina has a region called the fovea at the back, roughly opposite the pupil. Regions of the scene projected onto the fovea are perceived at a higher resolution than regions at an angular distance from it. This property is called foveation.
We leverage this property by streaming video with a quality profile that matches the eye's acuity profile, tracking the player's gaze in real time at the client. In addition to the player's control inputs, the client sends only the gaze location on the screen to the server, which encodes the video with quality centered on that location. Another application of this approach is HD video streaming over non-optimal (e.g., weak wireless) network links.
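As a concrete sketch of this idea, the gaze location could be turned into a per-macroblock quantization parameter (QP) map, with quality highest at the gaze and falling off with distance. Everything below is illustrative: the function name, the linear falloff, and the QP range are assumptions, not our actual encoder integration, which would convert on-screen distance to visual angle using the viewing distance and apply a proper acuity model.

```python
import math

def foveated_qp_map(width_mb, height_mb, gaze_mb, base_qp=22,
                    max_qp_offset=12, falloff_mb=8.0):
    """Assign a quantization parameter (QP) to each 16x16 macroblock:
    lowest QP (best quality) at the gaze point, rising with distance in
    a crude imitation of the eye's acuity falloff away from the fovea."""
    gx, gy = gaze_mb
    qp = [[0] * width_mb for _ in range(height_mb)]
    for y in range(height_mb):
        for x in range(width_mb):
            dist = math.hypot(x - gx, y - gy)  # distance in macroblocks
            # Linear falloff capped at max_qp_offset; a real system would
            # convert this distance to visual angle first.
            offset = min(max_qp_offset, dist / falloff_mb * max_qp_offset)
            qp[y][x] = base_qp + round(offset)
    return qp

# A 1920x1088 frame is 120x68 macroblocks; gaze at the centre.
qp_map = foveated_qp_map(120, 68, gaze_mb=(60, 34))
```

An encoder that accepts per-macroblock QP offsets (e.g., through a region-of-interest API) could then consume such a map for every frame, refreshed as new gaze samples arrive from the client.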


A hybrid edge-cloud architecture for reducing on-demand gaming latency

Abstract: The cloud was originally designed to provide general-purpose computing using commodity hardware and its focus was on increasing resource consolidation as a means to lower cost. Hence, it was not particularly adapted to the requirements of multimedia applications that are highly latency sensitive and require specialized hardware, such as graphical processing units. Existing cloud infrastructure is dimensioned to serve general-purpose workloads and to meet end-user requirements by providing high throughput. In this paper, we investigate the effectiveness of using this general-purpose infrastructure for serving latency-sensitive multimedia applications. In particular, we examine on-demand gaming, also known as cloud gaming, which has the potential to change the video game industry. We demonstrate through a large-scale measurement study that the existing cloud infrastructure is unable to meet the strict latency requirements necessary for acceptable on-demand game play. Furthermore, we investigate the effectiveness of incorporating edge servers, which are servers located near end-users (e.g., CDN servers), to improve end-user coverage. Specifically, we examine an edge-server-only infrastructure and a hybrid infrastructure that consists of using edge servers in addition to the cloud. We find that a hybrid infrastructure significantly improves the number of end-users served. However, the number of satisfied end-users in a hybrid deployment largely depends on the various deployment parameters. Therefore, we evaluate various strategies that determine two such parameters, namely, the location of on-demand gaming servers and the games that are placed on these servers. We find that, through both a careful selection of on-demand gaming servers and the games to place on these servers, we significantly increase the number of end-users served over the basic random selection and placement strategies.

Pub.: 11 Apr '14, Pinned: 16 Aug '17
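The server-selection problem studied in the abstract above is at heart a coverage problem: choose deployment sites so that as many end-users as possible fall within the latency bound. Below is a minimal sketch of one such strategy, greedy maximum coverage; the site names and reachability sets are invented for illustration, and the paper itself evaluates several selection and placement strategies rather than this exact one.

```python
def greedy_select(coverage, k):
    """Greedy maximum coverage: repeatedly pick the candidate site that
    serves the most users not yet covered.

    `coverage` maps a candidate site to the set of user ids it can reach
    with acceptable latency; `k` is the number of sites to deploy.
    """
    chosen, covered = [], set()
    for _ in range(k):
        # Site adding the largest number of still-unserved users.
        best = max(coverage, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break  # no remaining site adds anyone new
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical candidate sites and the users each can serve in time.
sites = {"edge-A": {1, 2, 3}, "edge-B": {3, 4}, "cloud-dc": {4, 5, 6, 7}}
picked, served = greedy_select(sites, k=2)
```

With a budget of two sites, the sketch picks the cloud data centre first (it alone serves four users) and then edge-A, leaving edge-B out because its users are already covered.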

A game attention model for efficient bit rate allocation in cloud gaming

Abstract: The widespread availability of broadband internet access and the growth in server-based processing have provided an opportunity to run games away from the player into the cloud and offer a new promising service known as cloud gaming. The concept of cloud gaming is to render a game in the cloud and stream the resulting game scenes to the player as a video sequence over a broadband connection. To meet the stringent network bandwidth requirements of cloud gaming and support more players, efficient bit rate reduction techniques are needed. In this paper, we introduce the concept of game attention model (GAM), which is basically a game context-based visual attention model, as a means for reducing the bit rate of the streaming video more efficiently. GAM estimates the importance of each macro-block in a game frame from the player’s perspective and allows encoding the less important macro-blocks with lower bit rate. We have evaluated nine game video sequences, covering a wide range of game genre and a spectrum of scene content in terms of details, motion and brightness. Our subjective assessment shows that by integrating this model into the cloud gaming framework, it is possible to decrease the required bit rate by nearly 25 % on average, while maintaining a relatively high user quality of experience. This clearly enables players with limited communication resources to benefit from cloud gaming with acceptable quality.

Pub.: 27 Apr '14, Pinned: 16 Aug '17
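Given per-macroblock importance weights such as those a game attention model would produce, the bit-rate reduction step amounts to skewing the frame's bit budget toward the important blocks. The sketch below shows only that allocation step; estimating macro-block importance from game context is the paper's actual contribution and is not reproduced here, and all names and numbers are illustrative.

```python
def allocate_bits(importance, frame_budget_bits, min_bits=200):
    """Split a frame's bit budget across macroblocks in proportion to
    their attention weight, with a floor so that even unimportant
    blocks remain decodable."""
    total = sum(importance)
    return [max(min_bits, round(frame_budget_bits * w / total))
            for w in importance]

# Toy frame of four macroblocks; the third (say, the crosshair region)
# is judged most important by the attention model.
weights = [0.1, 0.2, 0.6, 0.1]
bits = allocate_bits(weights, frame_budget_bits=40_000)
```

The encoder would then translate each block's bit share into a quantization setting, spending most of the budget where the player is actually likely to look.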

Layered Coding for Mobile Cloud Gaming Using Scalable Blinn-Phong Lighting.

Abstract: In mobile cloud gaming, high-quality, high-frame-rate game images of immense data size need to be delivered to the clients over wireless networks under stringent delay requirements. For a good gaming experience, reducing the transmission bit rate of the game images is necessary. Most existing cloud gaming platforms simply employ standard, off-the-shelf video codecs for game image compression. In this paper, we propose a layered coding scheme to reduce transmission bandwidth and latency. We leverage the rendering computation of modern mobile devices to render a low-quality local game image, or the base layer (BL). Instead of sending a high-quality game image, cloud servers can send enhancement layer information, which clients can utilize to improve the quality of the BL. Central to the layered coding scheme is the design of a complexity-scalable BL rendering pipeline that can be executed on a range of power-constrained mobile devices. In this paper, we focus on the lighting stage in modern graphics rendering and propose a method to scale the popular Blinn-Phong lighting for use in BL rendering. We derive an information-theoretic model of Blinn-Phong lighting to estimate the rendered image entropy. The analytic model informs the optimal BL rendering design that can lead to maximum bandwidth saving subject to the constraint on the computation capability of the client. We show that the information rate of the enhancement layer can be much less than that of the high-quality game image, while the BL can be generated with only a very small amount of computation. Experimental results suggest that our analytic model is accurate in estimating the rendered image entropy. With the layered coding scheme, up to 84% reduction in bandwidth usage can be achieved by sending the enhancement layer information instead of the original high-quality game images compressed by H.264/AVC.

Pub.: 24 Jan '17, Pinned: 16 Aug '17
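The core of the layered scheme can be illustrated with the residual arithmetic alone: the enhancement layer is the difference between the server's high-quality render and the base layer the client renders locally with scaled-down Blinn-Phong lighting. The toy pixel values below are invented for illustration; the point is that because the two renders differ only slightly, the residual has small magnitude and low entropy, so it compresses far better than the full image.

```python
def encode_enhancement(high_quality, base_layer):
    # Server side: residual between the full-quality render and the
    # low-quality base layer the client can render itself.
    return [h - b for h, b in zip(high_quality, base_layer)]

def decode(base_layer, enhancement):
    # Client side: add the residual back onto the local base layer
    # to recover the high-quality image exactly.
    return [b + e for b, e in zip(base_layer, enhancement)]

base = [90, 100, 110, 120]   # client's scaled-down Blinn-Phong render
full = [95, 108, 115, 121]   # server's high-quality render
residual = encode_enhancement(full, base)  # small values, low entropy
```

In a real pipeline the residual would itself be entropy-coded before transmission; the 84% figure in the abstract reflects how much cheaper that is than shipping the H.264/AVC-compressed full frames.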