Prof. Nishanth Sastry is Joint Head of the Distributed and Networked Systems Group at the Department of Computer Science, University of Surrey. He is also a Visiting Researcher at the Alan Turing Institute, where he co-leads the Social Data Science Special Interest Group.
Prof. Sastry holds a Bachelor’s degree (with distinction) from R.V. College of Engineering, Bangalore University, a Master’s degree from the University of Texas at Austin, and a PhD from the University of Cambridge, all in Computer Science. Previously, he spent over six years in industry (Cisco Systems, India and IBM Software Group, USA) and in industrial research labs (IBM T. J. Watson Research Center). He has also spent time at the Massachusetts Institute of Technology Computer Science and AI Laboratory.
His honours include a Best Paper Award at SIGCOMM Mobile Edge Computing in 2017, a Best Paper Honorable Mention at WWW 2018, a Best Student Paper Award at the Computer Society of India Annual Convention, a Yunus Innovation Challenge Award at the Massachusetts Institute of Technology IDEAS Competition, a Benefactor’s Scholarship from St. John’s College, Cambridge, a Best Undergraduate Project Award from R.V. College of Engineering, a Cisco Achievement Program Award, and several awards from IBM. He has been granted nine patents in the USA for work done at IBM.
Nishanth has been a keynote speaker, and has received media coverage in print outlets such as The Times (UK), The New York Times, New Scientist and Nature, as well as on television channels such as the BBC, Al Jazeera and Sky News. He is a member of the ACM and a Senior Member of the IEEE.
PhD in Computer Science
University of Cambridge
MA in Computer Science
University of Texas at Austin
BE in Computer Science and Engineering
R.V. College of Engineering, Bangalore University
A full list of past and current projects is available in PURE.
Online debates typically attract a large number of argumentative comments. Readers who want to know which comments are winning arguments often read only part of the debate. Many platforms that host such debates allow comments to be sorted, say from earliest to latest. How can argumentation theory be used to evaluate how effective such sorting policies are at displaying the actually winning arguments to a reader who has not read the whole debate? We devise a pipeline that captures an online debate tree as a bipolar argumentation framework (BAF), which is sorted according to the policy, giving a sequence of induced sub-BAFs representing how, and how much of, the debate has been read. Each sub-BAF has its own set of winning arguments, which can be quantitatively compared to the set of winning arguments of the whole BAF. We apply this pipeline to evaluate sorting policies on Kialo debates, and show that reading comments from most to least liked displays, on average, more winners than reading comments earliest first. In Kialo, therefore, reading comments from most to least liked is on average more effective than reading from the earliest to the most recent.
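The core of this pipeline can be sketched in a few lines of Python. The sketch below is a simplification under stated assumptions: it uses an attack-only framework under grounded semantics rather than the full bipolar (attack and support) semantics of the paper, and all comment IDs, timestamps, like counts and attack edges are invented for illustration.

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of the characteristic function: an argument is
    accepted once each of its attackers is attacked by an accepted one."""
    attackers = {a: set() for a in arguments}
    for src, tgt in attacks:
        attackers[tgt].add(src)
    accepted = set()
    while True:
        defended = {a for a in arguments
                    if all(attackers[b] & accepted for b in attackers[a])}
        if defended == accepted:
            return accepted
        accepted = defended

def visible_winners(comments, attacks, sort_key, k):
    """Winners in the sub-framework induced by the first k comments
    read under a given sorting policy."""
    read = {c["id"] for c in sorted(comments, key=sort_key)[:k]}
    sub = [(s, t) for s, t in attacks if s in read and t in read]
    return grounded_extension(read, sub)

# Hypothetical debate: ids, posting times and like counts are made up.
comments = [{"id": "a", "time": 1, "likes": 2},
            {"id": "b", "time": 2, "likes": 9},
            {"id": "c", "time": 3, "likes": 5}]
attacks = [("b", "a"), ("c", "b")]
true_winners = grounded_extension({c["id"] for c in comments}, attacks)

# Compare two policies after reading only k = 2 comments.
earliest = visible_winners(comments, attacks, lambda c: c["time"], 2)
most_liked = visible_winners(comments, attacks, lambda c: -c["likes"], 2)
recall = lambda w: len(w & true_winners) / len(true_winners)
```

On this toy debate the true winners are a and c; after two comments the most-liked policy reveals one of them (recall 0.5), while the earliest-first policy reveals none.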
IoT-driven smart societies are modern service-networked ecosystems whose proper functioning depends heavily on the success of supply chain relationships. Robust security remains a major challenge in such ecosystems, catalyzed primarily by naive cyber-security practices (e.g., retaining default IoT device passwords) on the part of ecosystem managers, i.e., users and organizations. This has recently led to catastrophic malware-driven DDoS and ransomware attacks (e.g., the Mirai and WannaCry attacks). Consequently, markets for commercial third-party cyber-risk management services (e.g., cyber-insurance) are steadily but sluggishly gaining traction with the rapid increase of IoT deployment in society, and provide a channel for ecosystem managers to transfer residual cyber-risk after attack events. Recent empirical studies have shown that the residual cyber-risks affecting smart societies are often heavy-tailed in nature and exhibit tail dependencies. This is both a major concern for a profit-minded cyber-risk management firm, which might normally need to cover multiple such dependent cyber-risks from different sectors (e.g., manufacturing, energy) in a service-networked ecosystem, and a plausible explanation for the sluggish market growth of cyber-risk management products. In this paper, we provide (i) a rigorous general theory eliciting conditions on (tail-dependent) heavy-tailed cyber-risk distributions under which a risk management firm might find it (non)sustainable to provide aggregate cyber-risk coverage services for smart societies, and (ii) a real-data-driven numerical study that validates the claims made in theory, assuming boundedly rational cyber-risk managers, alongside ideas to boost markets that aggregate dependent cyber-risks with heavy tails. To the best of our knowledge, this is the first complete general theory to date on the feasibility of aggregate cyber-risk management.
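As rough intuition for why aggregating heavy-tailed risks can be unsustainable, consider the following Monte Carlo sketch. This is not the paper's model: it uses independent Pareto losses with parameters we invented, whereas the paper treats tail-dependent risks and boundedly rational managers. For tail indices below 1 (infinite mean), value-at-risk is typically superadditive, so pooling risks can raise the per-risk capital a coverage firm must hold.

```python
import random

def empirical_var(samples, q):
    """Empirical value-at-risk: the q-th quantile of observed losses."""
    ordered = sorted(samples)
    return ordered[int(q * len(ordered))]

def pareto_var(q, alpha):
    """Analytic VaR of a Pareto(alpha) loss with scale x_min = 1."""
    return (1 - q) ** (-1 / alpha)

# Hypothetical experiment: pool n independent Pareto risks with tail
# index alpha < 1 and compare the pool's capital requirement against
# n times the single-risk requirement.
random.seed(0)
alpha, n, trials, q = 0.8, 20, 20_000, 0.99
single = [random.paretovariate(alpha) for _ in range(trials)]
pooled = [sum(random.paretovariate(alpha) for _ in range(n))
          for _ in range(trials)]
print("pooled VaR:", empirical_var(pooled, q))
print("n x single VaR:", n * empirical_var(single, q))
```

With such heavy tails, the pooled VaR tends to exceed n times the single-risk VaR: diversification fails, which is one intuition for a sluggish coverage market.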
New applications such as remote surgery and connected cars, which are being touted as use cases for 5G and beyond, are mission-critical. As such, communications infrastructure needs to support and enforce stringent, guaranteed levels of service before such applications can take off. However, from an operator’s perspective, it can be difficult to provide uniformly high levels of service over long durations or large regions. As network conditions change over time, or when a mobile end point moves into regions with poor coverage, the operator may be unable to honour previously agreed service levels that are too stringent. Second, from a consumer’s perspective, purchasing a stringent service level agreement (SLA) from an operator can be expensive. Finally, failures in mission-critical applications can lead to disasters, so the infrastructure should support the assignment of liabilities when a guaranteed service level is reneged upon; this is a difficult problem because both the operator and the customer have an incentive to blame each other to avoid liability for poor service. To address these problems, we propose AJIT, an architecture for creating fine-grained short-term contracts between operator and consumer. AJIT uses smart contracts to allow dynamically changing service levels, so that more expensive and stringent levels of service need only be requested by a customer for the short durations when the application needs them, and the operator agrees to the SLA only when the infrastructure is able to support the demand. Second, AJIT uses trusted enclaves for the accounting of packet deliveries, such that neither the customer requesting guaranteed service levels for mission-critical applications nor the operator providing the infrastructure support can cheat.
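The contract lifecycle AJIT envisages can be illustrated with a minimal sketch. Everything here is hypothetical: the class name, fields and settlement rule are ours, not the paper's, and the trusted-enclave accounting is reduced to ordinary counters with a comment marking where the enclave would sit.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    PROPOSED = auto()
    ACTIVE = auto()
    SETTLED = auto()

@dataclass
class ShortTermSLA:
    """Toy model of a fine-grained, short-term SLA (illustrative only)."""
    required_delivery_ratio: float  # e.g. 0.99 for a mission-critical burst
    premium: int                    # price paid by the customer
    penalty: int                    # operator liability if the SLA is missed
    state: State = State.PROPOSED
    sent: int = 0
    delivered: int = 0

    def accept(self):
        # The operator accepts only while it believes it can meet the demand.
        self.state = State.ACTIVE

    def record(self, sent, delivered):
        # In AJIT this accounting would run inside a trusted enclave,
        # so neither party can misreport packet deliveries.
        assert self.state is State.ACTIVE
        self.sent += sent
        self.delivered += delivered

    def settle(self):
        """Assign liability from the tamper-proof counters."""
        self.state = State.SETTLED
        ratio = self.delivered / self.sent if self.sent else 1.0
        return self.premium if ratio >= self.required_delivery_ratio \
            else -self.penalty
```

A positive settlement means the operator earns the premium; a negative one means it owes the penalty, so neither side can shift blame once the counters are trusted.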
This paper aims to understand how third-party ecosystems have developed in four different countries: the UK, China, Australia and the US. We are interested in how wide a view a given third-party player may have of an individual user’s browsing history over a period of time, and of the collective browsing histories of a cohort of users in each of these countries. We study this using two complementary approaches: the first uses lists of the most popular websites per country, as determined by Alexa.com; the second is based on the real browsing histories of a cohort of users in these countries. Our larger continuous user data collection spans over a year. Some universal patterns emerge, such as more third parties on more popular websites, and a specialization among trackers, with some trackers present in certain categories of websites but not others. However, our study also reveals several unexpected country-specific patterns: China has a home-grown ecosystem of third-party operators, in contrast with the UK, whose trackers are dominated by players hosted in the US. UK trackers are more location-sensitive than Chinese trackers. One important consequence is that users in China are tracked less than users in the UK. Our unique access to the browsing patterns of a panel of users provides realistic insight into third-party exposure, and suggests that studies relying solely on Alexa top-ranked websites may be overestimating the power of third parties, since real users also access several niche-interest sites with fewer third parties of many kinds, especially advertisers.
When users browse to a so-called first-party website, other third parties are able to place cookies on the users’ browsers. Although this practice can enable some important use cases, in practice these third-party cookies also allow trackers to identify that a user has visited two or more first parties that both embed the same third party. This simple feature has been used to bootstrap an extensive tracking ecosystem that can severely compromise user privacy. In this paper, we develop a metric called the tangle factor, which measures how a set of first-party websites may be interconnected or tangled with each other based on the common third parties used. Our insight is that this interconnectedness can be calculated as the chromatic number of a graph in which the first-party sites are the nodes, and edges are induced based on shared third parties. We use this technique to measure the interconnectedness of the browsing patterns of over 100 users in 25 different countries, through a Chrome browser plugin which we have deployed. The users of our plugin consist of a small, carefully selected set of 15 test users in the UK and China, and 1000+ in-the-wild users, of whom 124 have shared data with us. We show that different countries have different levels of interconnectedness; for example, China has a lower tangle factor than the UK. We also show that when the same sets of websites are visited from China, the tangle factor is smaller, due to the blocking of major operators like Google and Facebook. We show that selectively removing the largest trackers is a very effective way of decreasing the interconnectedness of third-party websites. We then consider blocking practices employed by privacy-conscious users (such as ad blockers), as well as those enabled by default by Chrome and Firefox, and compare their effectiveness using the tangle factor metric we have defined.
Our results quantify for the first time the extent to which one ad blocker is more effective than another, and show that Firefox defaults also greatly decrease third-party tracking compared to Chrome.
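The tangle factor computation can be sketched as follows; this is our illustration, not the authors' code. Since computing an exact chromatic number is NP-hard, the sketch uses a greedy, Welsh-Powell-style colouring that yields an upper bound, and all site and tracker names are invented.

```python
from itertools import combinations

def tangle_graph(third_parties_by_site):
    """Build the tangle graph: first-party sites are nodes; an edge joins
    two sites whenever they embed at least one common third party."""
    edges = {site: set() for site in third_parties_by_site}
    for a, b in combinations(third_parties_by_site, 2):
        if third_parties_by_site[a] & third_parties_by_site[b]:
            edges[a].add(b)
            edges[b].add(a)
    return edges

def tangle_factor(third_parties_by_site):
    """Greedy colouring, highest-degree sites first: an upper bound on
    the chromatic number of the tangle graph."""
    edges = tangle_graph(third_parties_by_site)
    order = sorted(edges, key=lambda s: len(edges[s]), reverse=True)
    colour = {}
    for site in order:
        used = {colour[n] for n in edges[site] if n in colour}
        colour[site] = next(c for c in range(len(order)) if c not in used)
    return max(colour.values()) + 1 if colour else 0

# Invented example: three sites all embedding the same tracker are
# fully tangled, so each needs its own colour.
tangled = {"news.example": {"trackerX"},
           "shop.example": {"trackerX"},
           "blog.example": {"trackerX"}}
```

Removing a dominant tracker deletes its induced edges, which is why the paper finds that blocking the largest trackers sharply lowers the tangle factor.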