Examples of the Type of Research We Do:

Social networks and complex networks analysis

  • (Social Simulation and Emergence) Social Structure Emergence: A Multi-agent Reinforcement Learning Framework for Relationship Building

Social structures naturally arise from social networks, yet no existing model explains the emergence of structural properties along a unified dimension. Here, we unify explanations for the emergence of network structures by revealing the pivotal role of social capital, i.e., the benefits that a society grants to individuals, in network formation. We propose a game-based framework, social capital games, which mathematically conceptualizes social capital. Through this framework, individuals are regarded as independent learning agents that aim to gain social capital by building interpersonal ties. We adopt multi-agent reinforcement learning (MARL) to train the agents. By varying configurations of the game, we observe the emergence of the classical community, small-world, and core-periphery structures.
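
As a toy illustration of the idea, the sketch below replaces MARL training with simple best-response dynamics: each agent greedily adds the tie that most improves an invented social-capital payoff (triangle closure plus two-hop reach) net of a fixed tie cost. The payoff function, cost parameter and dynamics are all illustrative stand-ins, not the paper's actual games.

```python
import itertools, random

def social_capital(adj, i):
    """Toy social-capital payoff: bonding (closed triangles among i's ties)
    plus bridging (distinct two-hop contacts). A stand-in for the paper's
    configurable game payoffs."""
    nbrs = adj[i]
    bonding = sum(1 for u, v in itertools.combinations(nbrs, 2) if v in adj[u])
    two_hop = (set().union(*(adj[u] for u in nbrs)) - nbrs - {i}) if nbrs else set()
    return bonding + len(two_hop)

def best_response_round(adj, n, cost=1.5):
    """Each agent adds the single tie with the largest payoff gain, net of a
    fixed tie-maintenance cost (best-response stand-in for MARL updates)."""
    for i in range(n):
        base = social_capital(adj, i)
        best, gain = None, 0.0
        for j in range(n):
            if j == i or j in adj[i]:
                continue
            adj[i].add(j); adj[j].add(i)
            g = social_capital(adj, i) - base - cost
            adj[i].remove(j); adj[j].remove(i)
            if g > gain:
                best, gain = j, g
        if best is not None:
            adj[i].add(best); adj[best].add(i)

n = 12
adj = {i: set() for i in range(n)}
random.seed(0)
for _ in range(8):                      # random seed ties
    a, b = random.sample(range(n), 2)
    adj[a].add(b); adj[b].add(a)
for _ in range(5):
    best_response_round(adj, n)
print(sum(len(s) for s in adj.values()) // 2, "ties after convergence")
```

Varying the payoff (bonding-heavy vs. bridging-heavy) and the cost plays the role of "varying configurations of the game" in the abstract.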

  • (SNA Meets Machine Learning) On the Maximization of Influence Over an Unknown Social Network

Influence maximization is a well-investigated problem that asks for the key individuals who exert significant influence in a given social network. This paper addresses the problem when the social network structure is hidden. We adopt the framework of learning influence from samples and build a neural network model to represent the information diffusion process. Based on this model, we propose two new algorithms, NeuGreedy and NeuMax. NeuGreedy simulates the traditional greedy algorithm, whilst NeuMax utilizes the weights of connections between neurons. We test the algorithms on both synthetic and real-world datasets. The results verify the effectiveness of the proposed methods compared to existing algorithms, with or without knowledge of the network structure.
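
The greedy skeleton behind NeuGreedy can be sketched as follows. Here the learned diffusion model is replaced by a Monte Carlo estimate of spread under the independent cascade model (our stand-in); in NeuGreedy itself the estimator is the trained neural network. The graph, activation probability and trial count are illustrative.

```python
import random

def influence_estimate(seeds, edges, n, p=0.1, trials=200, rng=random):
    """Stand-in for the learned diffusion model: Monte Carlo estimate of
    expected spread under the independent cascade model. NeuGreedy would
    instead query the trained neural network here."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in edges.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v); nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def neu_greedy(edges, n, k):
    """Greedy seed selection against the (surrogate) influence estimator."""
    seeds = []
    for _ in range(k):
        best = max((v for v in range(n) if v not in seeds),
                   key=lambda v: influence_estimate(seeds + [v], edges, n))
        seeds.append(best)
    return seeds

random.seed(1)
n = 30
edges = {u: [v for v in range(n) if v != u and random.random() < 0.1]
         for u in range(n)}
print(neu_greedy(edges, n, k=2))
```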

  • (Dynamic Networks) From the Periphery to the Core: Information Brokerage in an Evolving Network

Interpersonal ties are pivotal to individual efficacy, status and performance in an agent society. This paper explores three important and interrelated themes in social network theory: the center/periphery partition of the network, network dynamics, and the social integration of newcomers. We tackle the question: how would a newcomer harness information brokerage to integrate into a dynamic network, moving from the periphery to the center? We model integration as the interplay between the newcomer and the dynamic network, and capture information brokerage using a process of relationship building. We analyze theoretical guarantees for the newcomer to reach the center through tactics, proving that a winning tactic always exists for certain types of network dynamics. We then propose three tactics and show their superior performance over alternative methods on four real-world datasets and four network models.

Multi-agent systems, markets and economic issues

  • (Mechanism Design Meets Data Privacy) Selling Data at an Auction under Privacy Constraints

Private data query combines mechanism design with privacy protection to produce aggregated statistics from privately-owned data records. The problem arises in a data marketplace where data owners have personalised privacy requirements and private data valuations. We focus on the case where the data owners are single-minded, i.e., they are willing to release their data only if the data broker guarantees to meet their announced privacy requirements. For a data broker who wants to purchase data from such data owners, we propose the SingleMindedQuery (SMQ) mechanism, which uses a reverse auction to select data owners and determine compensations. SMQ satisfies interim incentive compatibility, individual rationality, and budget feasibility. Moreover, it uses purchased privacy expectation maximisation as a principle to produce accurate outputs for commonly-used queries such as counting, median and linear predictor. The effectiveness of our method is empirically validated by a series of experiments.
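
To make the reverse-auction ingredient concrete, here is the classic proportional-share rule for budget-feasible procurement. This is a generic textbook sketch, not the SMQ mechanism itself; the bids, budget and uniform threshold payment are all illustrative.

```python
def reverse_auction(bids, budget):
    """Generic budget-feasible reverse auction (proportional-share rule,
    not SMQ): sort owners by announced bid, buy greedily while bid_k <=
    budget/k, then pay every winner the uniform threshold budget/k. Each
    winner's bid is below the payment (individual rationality) and total
    payments never exceed the budget (budget feasibility).
    bids: list of (owner_id, bid)."""
    order = sorted(bids, key=lambda x: x[1])
    winners = []
    for k, (owner, bid) in enumerate(order, start=1):
        if bid <= budget / k:        # proportional-share stopping rule
            winners = order[:k]
        else:
            break
    k = len(winners)
    payment = budget / k if k else 0.0
    return [w[0] for w in winners], payment

sellers = [("a", 1.0), ("b", 2.0), ("c", 2.5), ("d", 9.0)]
w, pay = reverse_auction(sellers, budget=8.0)
print(w, pay)
```

SMQ additionally handles the owners' privacy requirements and the accuracy of the resulting query answer, which this sketch omits.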

  • (Game Theory) Network, Popularity and Social Cohesion: A Game-Theoretic Approach

In studies of social dynamics, cohesion refers to a group's tendency to remain united, which, as argued in sociometry, arises from the network topology of interpersonal ties. We follow this idea and propose a game-based model of cohesion that not only relies on the social network but also reflects individuals' social needs. In particular, our model is a type of cooperative game in which players may gain popularity by strategically forming groups. A group is socially cohesive if the grand coalition is core stable. We study social cohesion in some special types of graphs and draw a link between social cohesion and the classical notion of structural cohesion by White and Harary. We then focus on the problem of deciding whether a given social network is socially cohesive and show that this problem is coNP-complete. Nevertheless, we give two efficient heuristics for computing coalition structures where players enjoy high popularity, and experimentally evaluate their performance.
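
A brute-force version of the core-stability check reads as follows. The popularity payoff used here (fraction of the group that are one's friends) is our illustrative stand-in for the paper's payoff, but the blocking-coalition test is the standard core condition.

```python
from itertools import combinations

def popularity(i, group, adj):
    """Toy popularity: fraction of the other group members that are
    i's friends. A stand-in for the paper's game-theoretic payoff."""
    others = group - {i}
    if not others:
        return 0.0
    return sum(1 for j in others if j in adj[i]) / len(others)

def is_socially_cohesive(adj):
    """The grand coalition is core stable iff no coalition S makes every
    member strictly more popular. Exhaustive enumeration is exponential,
    in line with the coNP-completeness of the decision problem."""
    players = set(adj)
    grand = {i: popularity(i, players, adj) for i in players}
    for r in range(1, len(players)):
        for S in combinations(players, r):
            S = set(S)
            if all(popularity(i, S, adj) > grand[i] for i in S):
                return False                 # S blocks the grand coalition
    return True

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
                 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(is_socially_cohesive(triangle), is_socially_cohesive(two_triangles))
```

Under this payoff a triangle is cohesive, while two triangles joined by a bridge are not: either triangle blocks, since its members are all mutual friends once the outsiders are dropped.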

  • (Agent-based SNA) Balancing the Pain and Gain of Hobnobbing: Utility-Based Network Building over Attributed Social Networks

The establishment of interpersonal ties is a pivotal problem in the structural analysis of social networks. In particular, the link recommendation problem asks for valuable future links that an individual should establish. Existing methods for this problem rely on link prediction, which evaluates the likelihood of successful tie creation between two individuals. Such methods consider neither the social capital gained by agents nor the cost this process requires. In light of this limitation, we propose a utility-based network building problem, which aims to strike a balance between the gained social capital, in the form of closeness centrality, and the cost of establishing ties. We propose algorithms to solve this problem over networks whose nodes may or may not be labelled with attributes, and test their performance on a range of synthetic and real-world social networks. By having multiple agents adopt utility-based network building strategies, we obtain a suite of network formation models and demonstrate empirically that they capture important structural properties. In particular, we investigate the emergence of a core/periphery structure as a joint result of preferential attachment and network building strategies.
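
The utility form (closeness gain minus tie cost) comes straight from the abstract; the greedy rule below, and the path-graph example, are our illustrative assumptions rather than the paper's algorithms.

```python
from collections import deque

def closeness(adj, s):
    """Closeness centrality of s via BFS (assumes a connected graph)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def build_ties(adj, agent, cost, rounds=3):
    """Greedy utility-based network building sketch: in each round the agent
    adds the tie with the largest closeness gain net of a fixed tie cost,
    stopping when no tie is profitable."""
    for _ in range(rounds):
        base = closeness(adj, agent)
        best, best_u = None, 0.0
        for v in adj:
            if v == agent or v in adj[agent]:
                continue
            adj[agent].add(v); adj[v].add(agent)
            u = closeness(adj, agent) - base - cost
            adj[agent].remove(v); adj[v].remove(agent)
            if u > best_u:
                best, best_u = v, u
        if best is None:
            break
        adj[agent].add(best); adj[best].add(agent)
    return adj

# Path graph 0-1-2-3-4; agent 0 buys shortcuts while they pay off.
path = {i: set() for i in range(5)}
for i in range(4):
    path[i].add(i + 1); path[i + 1].add(i)
build_ties(path, agent=0, cost=0.02)
print(sorted(path[0]))
```

With the tiny cost above, agent 0 ends up tied to everyone; raising the cost makes it stop early, which is exactly the pain/gain trade-off in the title.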

Natural language processing and AI in general

  • (Integrating Decision Theory with NLP) What's in a Gist? Towards an Unsupervised Gist Representation for Few-Shot Large Document Classification

The gist can be viewed as an abstract concept that captures only the quintessential meaning derived from one or multiple sources of information. We live in an age where vast quantities of information are widely available and easily accessible. Identifying the gist contextualises information, which facilitates the fast disambiguation and prediction of related concepts and brings about a set of natural relationships between information sources. In this paper, we introduce a novel unsupervised gist extraction and quantification framework that represents a computational form of the gist based on notions from fuzzy-trace theory. To evaluate our proposed framework, we apply the gist to the task of semantic similarity, specifically to few-shot large document classification, where documents on average contain a large number of words. The results show that our proposed gist representation can effectively capture the essential information of a text document while dramatically reducing the number of features used.

  • (Healthcare AI) MANDY: Towards a Smart Primary Care Chatbot Application

Mandy is a primary care chatbot system created to assist healthcare staff by automating the patient intake process. The chatbot interacts with a patient by carrying out an interview, understanding their chief complaints in natural language, and submitting reports to the doctors for further analysis. The system provides a mobile-app front end for patients, a diagnostic unit, and a doctor's interface for accessing patient records. The diagnostic unit consists of three main modules: an analysis engine for understanding patients' symptom descriptions, a symptom-to-cause mapper for reasoning about potential causes, and a question generator for deriving further interview questions. The system combines data-driven natural language processing capability with knowledge-driven diagnostic capability. We evaluate our proof-of-concept on benchmark case studies and compare the system with existing medical chatbots.
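
The three-module pipeline can be sketched as a toy skeleton. The symptom vocabulary, cause table and keyword matching below are illustrative placeholders, not Mandy's actual NLP models or medical knowledge base.

```python
# Illustrative knowledge base (not Mandy's): symptom keywords and
# cause -> required-symptom table.
SYMPTOM_KEYWORDS = {"cough": ["cough", "coughing"],
                    "fever": ["fever", "temperature"],
                    "headache": ["headache", "head hurts"]}
CAUSES = {"flu": {"cough", "fever"}, "migraine": {"headache"}}

def analysis_engine(utterance):
    """Extract symptoms from a free-text complaint (keyword matching here;
    the real system uses data-driven NLP)."""
    text = utterance.lower()
    return {s for s, kws in SYMPTOM_KEYWORDS.items()
            if any(k in text for k in kws)}

def symptom_to_cause(symptoms):
    """Rank candidate causes by how many observed symptoms they explain."""
    matches = [c for c, req in CAUSES.items() if symptoms & req]
    return sorted(matches, key=lambda c: -len(CAUSES[c] & symptoms))

def question_generator(symptoms, causes):
    """Ask about a symptom that would confirm the top candidate cause."""
    for cause in causes:
        missing = CAUSES[cause] - symptoms
        if missing:
            return f"Do you also have a {missing.pop()}?"
    return "Thank you, your report has been sent to the doctor."

complaint = analysis_engine("I have a bad cough and my head hurts")
print(question_generator(complaint, symptom_to_cause(complaint)))
```

The interview loop would feed the patient's answer back through the analysis engine until the question generator runs out of confirming questions.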

  • (Religious Text-mining) Finding Answers from the Word of God: Domain Adaptation for Neural Networks in Biblical Question Answering

Question answering (QA) has significantly benefitted from deep learning techniques in recent years. However, domain-specific QA remains a challenge due to the significant amount of data required to train a neural network. This paper studies the answer sentence selection task in the Bible domain, answering questions by selecting relevant verses from the Bible. For this purpose, we create a new dataset, BibleQA, based on Bible trivia questions, and propose three neural network models for our task. We pre-train our models on a large-scale QA dataset, SQuAD, and investigate the effect of transferring weights on model accuracy. Furthermore, we measure model accuracy with different answer context lengths and different Bible translations. We affirm that transfer learning brings a noticeable improvement in model accuracy. We achieve relatively good results with shorter context lengths, whereas longer context lengths decrease model accuracy. We also find that using a more modern Bible translation in the dataset has a positive effect on the task.

  • (Machine Learning) Periodic Neural Networks for Multivariate Time Series Analysis and Forecasting

Designing systems that make accurate forecasts based on time-dependent data is always a challenging and significant task. In this regard, a number of statistical and neural network-based models have been proposed for analyzing and forecasting time series datasets. In this paper, we propose a novel machine learning model for handling and predicting multivariate time series data. Our model targets a supervised learning setting in which (1) some features of the time series dataset exhibit periodic behaviour and (2) time t is treated as an input feature. Owing to the periodic nature of such datasets, our model is a simple neural network whose inputs to the single output unit take the form A sin(Bt + C)x, as opposed to the standard affine form Ax + B. We train our proposed model on various datasets and compare its performance with standard, well-known models used in forecasting multivariate time series. Our results show that our proposed model often outperforms existing models in terms of prediction accuracy. Moreover, the proposed model can handle time series data with missing values and input values that are non-equidistant in time. We hope that the proposed model will be useful in fostering future research on designing accurate forecasting algorithms.
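
A single unit of the form y = A sin(Bt + C)x can be fitted as follows. Instead of the paper's gradient-based network training, this sketch uses the identity A sin(Bt + C) = a sin(Bt) + b cos(Bt) to grid-search B and solve a, b in closed form by least squares; the one-feature data and grid are illustrative.

```python
import math

def fit_periodic_unit(data):
    """Fit y ~ A*sin(B*t + C)*x for one feature x. Using
    A*sin(Bt+C) = a*sin(Bt) + b*cos(Bt), grid-search B and solve the
    2x2 normal equations for a, b; recover A = hypot(a, b), C = atan2(b, a).
    data: list of (t, x, y) triples."""
    best = None
    for k in range(1, 200):
        B = k * 0.01
        S = [[0.0, 0.0], [0.0, 0.0]]     # normal-equation matrix
        r = [0.0, 0.0]
        for t, x, y in data:
            f = [math.sin(B * t) * x, math.cos(B * t) * x]
            for i in range(2):
                r[i] += f[i] * y
                for j in range(2):
                    S[i][j] += f[i] * f[j]
        det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
        if abs(det) < 1e-9:
            continue
        a = (r[0] * S[1][1] - r[1] * S[0][1]) / det
        b = (r[1] * S[0][0] - r[0] * S[1][0]) / det
        sse = sum((a * math.sin(B * t) * x + b * math.cos(B * t) * x - y) ** 2
                  for t, x, y in data)
        if best is None or sse < best[0]:
            best = (sse, B, a, b)
    _, B, a, b = best
    return math.hypot(a, b), B, math.atan2(b, a)

# Noiseless series generated with A=2, B=0.5, C=1 and constant x.
data = [(t, 1.0, 2 * math.sin(0.5 * t + 1)) for t in range(40)]
A, B, C = fit_periodic_unit(data)
print(round(A, 2), round(B, 2), round(C, 2))
```

Because the unit is evaluated at arbitrary t, nothing requires the observations to be equidistant, which is the property the abstract highlights.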

Data security and privacy

  • (Structural Information Theory) REM: From Structural Entropy to Community Structure Deception

By exploiting the community affiliations of user accounts, an attacker may infer sensitive user attributes. This raises the problem of community structure deception (CSD), which asks for ways to minimally modify the network so that a given community structure maximally hides itself from community detection algorithms. We investigate CSD through an information-theoretic lens. To this end, we propose a community-based structural entropy to express the amount of information revealed by a community structure. This notion allows us to devise residual entropy minimization (REM) as an efficient procedure to solve CSD. Experimental results over 9 real-world networks and 6 community detection algorithms show that REM is very effective in obfuscating the community structure as compared to other benchmark methods.
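
The entropy notion the abstract builds on can be computed directly. Below is our reading of the two-level structural entropy of a community partition (following Li and Pan's formulation); REM then perturbs the network so that this quantity no longer singles out the true partition. The example graph is illustrative.

```python
import math

def structural_entropy(edges, communities):
    """Two-level structural entropy of a partition: for each community,
    a cross-edge term -(cut/2m)*log2(vol/2m) plus node terms
    -(d_i/2m)*log2(d_i/vol). Lower entropy means the partition compresses
    the structure better, i.e. reveals more of it."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    two_m = sum(deg.values())
    H = 0.0
    for comm in communities:
        vol = sum(deg[i] for i in comm)
        cut = sum(1 for u, v in edges if (u in comm) != (v in comm))
        H -= (cut / two_m) * math.log2(vol / two_m)
        for i in comm:
            H -= (deg[i] / two_m) * math.log2(deg[i] / vol)
    return H

# Two triangles joined by one edge: the "true" split has lower entropy
# than lumping everything together, which is what an attacker would hide.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
split = [{0, 1, 2}, {3, 4, 5}]
merged = [{0, 1, 2, 3, 4, 5}]
print(structural_entropy(edges, split), structural_entropy(edges, merged))
```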

  • (Transportation) Zero-to-Stable Driver Identification: A Non-Intrusive and Scalable Driver Identification Scheme

Driver identification faces various challenges in real-time applications, including high-dimensional input data, moderate accuracy, scalability issues and the need for custom-built in-vehicle sensors. Conventional biometric solutions are either inaccurate, unscalable, or too costly for practical application; in those solutions, high accuracy comes at the cost of privacy, while scalability and cost-effectiveness are inversely related. This paper proposes a driver identification scheme to address these challenges. The scheme learns an individual's driving pattern from global positioning system (GPS) data; GPS receivers are commonly deployed in in-car navigation systems and can also be found in general-purpose hand-held devices on the prevailing market. The proposed scheme is innovative in providing a single solution to the existing challenges: it is practical and scalable to large numbers of drivers while remaining cost-effective and accurate, making it applicable in real-time applications with minimal overhead costs. The consequent analysis and empirical results show that the scheme can identify drivers with significant accuracy given only GPS data.

  • (Wireless Communication) WiPOS: A POS Terminal Password Inference System Based on Wireless Signals

WiFi access points are sources of considerable security risk, as wireless signals have the potential to leak important private information such as passwords. This paper examines the security issues posed by point-of-sale (POS) terminals, which are widely used in WiFi-covered environments such as restaurants, banks and libraries. In particular, we envisage an attack model on passwords entered on POS terminals and put forward WiPOS, a password inference system based on wireless signals. Specifically, WiPOS is a device-free system that uses two commercial off-the-shelf (COTS) devices to collect WiFi signals. By implementing a new keystroke segmentation algorithm and adopting support vector machine (SVM) classifiers with the global alignment kernel (GAK), WiPOS improves both keystroke recognition and password prediction. The experimental results show that WiPOS achieves more than 73% accuracy for 6-digit passwords within the top 100 candidates. This paper calls on the community to take a closer look at the risks posed by today's ubiquitous WiFi devices.
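
The global alignment kernel compares two variably-paced sequences by summing the similarity of all monotone alignments via dynamic programming, which is why it suits keystroke waveforms of differing speeds. Below is a minimal 1-D sketch of that recursion; the Gaussian local kernel, bandwidth and toy sequences are illustrative choices, not WiPOS's feature pipeline.

```python
import math

def global_alignment_kernel(x, y, sigma=1.0):
    """Global alignment kernel for two 1-D sequences: M[i][j] accumulates
    the kernel mass of all alignments ending at (i, j) via the recursion
    M[i][j] = (M[i-1][j] + M[i-1][j-1] + M[i][j-1]) * k(x_i, y_j),
    with a Gaussian local kernel k."""
    n, m = len(x), len(y)
    M = [[0.0] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            k = math.exp(-(x[i - 1] - y[j - 1]) ** 2 / (2 * sigma ** 2))
            M[i][j] = (M[i - 1][j] + M[i - 1][j - 1] + M[i][j - 1]) * k
    return M[n][m]

a = [0.0, 1.0, 2.0, 1.0]
b = [0.0, 1.1, 1.9, 1.0]       # same keystroke shape, slight timing jitter
c = [2.0, 0.0, 0.5, 2.0]       # a different pattern
print(global_alignment_kernel(a, b) > global_alignment_kernel(a, c))
```

In an SVM this function is plugged in as a precomputed kernel over segmented keystroke signals, so jittered repetitions of the same digit score higher than different digits.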