Date of Award


Degree Name

Doctor of Philosophy


Computer Science

First Advisor

Dr. Shameek Bhattacharjee

Second Advisor

Dr. Ajay Gupta

Third Advisor

Dr. Alvis Fong

Fourth Advisor

Dr. Lee Wells


Keywords

Internet of Things (IoT), trust, security, smart home, machine learning, fog middleware


Abstract

Trust in smart home technology security is a primary concern for consumers and can prevent them from adopting smart home services. Such concerns arise for the following reasons: (i) IoT devices, owing to their limited computational and resource capabilities, cannot support traditional on-device security controls; (ii) a successful cyber-attack has an immediate impact on the smart homeowner, compared to traditional cyber-attacks; and (iii) the large variety of applications and services under the smart home umbrella makes an overarching security framework fundamentally challenging for providers to offer and for owners to manage.

This dissertation offers a unified approach to establishing trust scores as indicators of the security status of IoT devices in a smart home. The approach is unified because it is independent of attack type, device manufacturer, and protocol, unlike existing solutions that treat each of these aspects in silos. The proposed framework can be viewed as a series of sequential modules invoked one after the other, with three main phases: evidence collection, trust scoring, and trust management and updates.

Specifically, for evidence collection, we propose a pre-processing step that involves the design of an access control mechanism taking a service-level view rather than a device-level view, establishing baseline rules of authorized communication flows. Our evidence collection then identifies a unified set of factors that are significantly affected when a smart home IoT device is under attack. This body of evidence, carefully collected over certain temporal granularities, serves as input to the proposed trust scoring module.
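The service-level baseline idea above can be sketched as a small allow-list check; the rule structure, service names, and flow tuples here are illustrative assumptions, not the dissertation's actual design:

```python
# Hypothetical sketch: baseline rules of authorized communication flows,
# keyed by the service a device participates in rather than by individual
# device identity. All names and rules below are illustrative.
ALLOWED_FLOWS = {
    "camera-streaming": {("camera", "cloud-video-endpoint", 443)},
    "thermostat-control": {("thermostat", "hub", 8883)},
}

def is_authorized(service, src, dst, port):
    """Return True if the flow matches a baseline rule for the service."""
    return (src, dst, port) in ALLOWED_FLOWS.get(service, set())

# Flows that violate the baseline become evidence for the scoring module.
evidence = []
for flow in [("camera", "cloud-video-endpoint", 443),
             ("camera", "unknown-host", 23)]:
    if not is_authorized("camera-streaming", *flow):
        evidence.append(flow)

print(evidence)  # the Telnet flow to unknown-host is flagged
```

A service-level view keeps the rule set small: one rule covers every device participating in a service, instead of one rule per device.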

The trust scoring module maps the device-specific evidence (observations) into a trust score, producing lower trust scores when devices are under attack. Specifically, we propose a Bayesian belief based model augmented with novel non-linear weighting and activation functions designed specifically for our problem. The weighting functions are designed such that, depending on the severity of the attack surface, probabilistic discounting of evidence changes caused by benign behavior is appropriately embedded in the scoring module, which explains its success under attacks.
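As a rough intuition for how Bayesian belief, severity weighting, and a non-linear activation can compose, here is a minimal sketch; the beta-style belief, the severity and steepness parameters, and the sigmoid activation are assumptions for illustration and differ from the dissertation's actual model:

```python
import math

def trust_score(good, bad, severity=1.0, steepness=6.0):
    """Map counts of benign/anomalous observations to a score in (0, 1).

    severity > 1 weights anomalous evidence more heavily (discounting
    benign evidence); steepness controls the activation's sharpness.
    Both parameters are illustrative, not the dissertation's.
    """
    # Beta-style Bayesian expectation with severity-weighted anomalous evidence.
    belief = (good + 1) / (good + severity * bad + 2)
    # Non-linear activation centered at 0.5 pushes ambiguous beliefs
    # toward clearly low or clearly high trust.
    return 1 / (1 + math.exp(-steepness * (belief - 0.5)))

print(round(trust_score(good=95, bad=5), 3))   # mostly benign -> high trust
print(round(trust_score(good=40, bad=60), 3))  # under attack -> low trust
```

The activation step matters: without it, a device under a moderate attack could retain a middling score; the non-linearity makes the score drop decisively once anomalous evidence dominates.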

Finally, the scores are fed to a trust management and update module that counters the real-time temporal evolution of cyber-attacks. Specifically, we propose a four-factor asymmetric trust update scheme that defends against advanced attack strategies such as on-off and incremental ramp attacks.
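The asymmetry can be illustrated with a deliberately simplified two-parameter update (the actual scheme is four-factor; the gain and penalty constants here are invented for the sketch): trust rises slowly on good behavior and drops sharply on bad behavior, so an on-off attacker cannot rebuild trust between attack bursts.

```python
def update_trust(trust, score, gain=0.05, penalty=0.5):
    """Move trust toward the new score asymmetrically.

    Simplified illustration only; gain/penalty values are assumptions.
    """
    if score >= trust:
        # Improvement: slow, incremental gain.
        return trust + gain * (score - trust)
    # Degradation: fast drop.
    return trust - penalty * (trust - score)

trust = 0.9
# An on-off attacker alternates attack rounds (low scores) with
# benign rounds (high scores).
for score in [0.2, 0.95, 0.2, 0.95]:
    trust = update_trust(trust, score)
print(round(trust, 3))  # trust stays low despite intermittent good behavior
```

With a symmetric update, the benign rounds would largely undo each attack round; the asymmetry is what makes on-off behavior unprofitable.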

To evaluate the framework, we use three real datasets containing a variety of actual cyber-attacks as well as benign data. Our evaluation investigates the generality of the framework across multiple datasets, devices, and cyber-attacks.

Access Setting

Dissertation-Campus Only

Restricted to Campus until