Game Theoretic Framework for Future Generation Networks Modelling and Optimization [projectsNS2]

An Efficient Distributed Link Selection Scheme for AF-Based Cognitive Selection Relaying Networks [NS2 2014]

COGNITIVE radio (CR) with spectrum sharing is regarded as a promising solution to enhance spectral efficiency and alleviate the issue of spectrum scarcity. Cooperative relaying, on the other hand, can provide remarkable performance advantages in improving the reliability and throughput of wireless systems. In recent years, incorporating relays into cognitive networks has received tremendous attention owing to these merits. Previous related works are briefly introduced next. In earlier work, the authors proposed a novel design paradigm to maximize the cognitive user's (CU's) rate for MIMO CR under a minimum primary user (PU) rate constraint. Other studies investigated the performance of amplify-and-forward-based (AF-based) cognitive relay networks (CRNs) over Nakagami-m fading. Considering both the high and the low-to-medium signal-to-noise ratio (SNR) regimes, closed-form expressions were derived for the outage probability of an interference-limited AF-based CRN with only one PU, and follow-up work investigated the scenario with multiple PUs. However, since the direct link between the source and destination was ignored in both cases, the resulting systems may suffer from potential diversity losses. Later, by incorporating the direct path into an AF-based CRN, Duong et al. investigated the outage performance of a cognitive system where a centralized link selection protocol was employed. Although the centralized selection criterion can improve outage behavior, its feedback overhead is considerable.
Recently, inspired by the idea of distributed decision feedback, several novel link/antenna selection schemes were established for AF-based relaying systems, which significantly reduce the feedback overhead of link/antenna selection. Since acquiring the instantaneous channel state information (CSI) of the interference links may incur additional cost, related work examined cognitive cooperative systems where only imperfect CSI of the interference links can be obtained at the secondary transmitters. Motivated by the foregoing observations, in this paper we advocate a distributed link selection mechanism for a cognitive AF selection relaying network. Specifically, by adopting an approximation to the received SNR at the destination, tractable closed-form expressions for the outage lower bound and the average feedback overhead of the secondary system are derived, where the cognitive transmitters can acquire either the instantaneous or the statistical CSI pertaining to the interference links. In particular, regardless of whether the CSI of the interference links at the secondary nodes is perfect, the proposed scheme alleviates the CSI feedback overhead while maintaining almost the same outage performance as the centralized scheme, rendering it an attractive solution in practice.
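The selection between the direct link and the relayed link can be illustrated with a small Monte Carlo sketch using the common min-SNR upper bound on the end-to-end AF SNR. All parameters below are hypothetical, Rayleigh fading is assumed, and the primary-interference constraint of the paper is not modeled:

```python
import random

def simulate_outage(n_trials=20000, snr_th=1.0, mean_direct=2.0,
                    mean_hop1=4.0, mean_hop2=4.0, seed=7):
    """Monte Carlo outage estimate for selecting between the direct link
    and an AF relayed link.  The end-to-end AF SNR is approximated by
    min(SNR_hop1, SNR_hop2); Rayleigh fading makes each link SNR
    exponentially distributed."""
    rng = random.Random(seed)
    out_direct = out_select = 0
    for _ in range(n_trials):
        snr_d = rng.expovariate(1.0 / mean_direct)
        snr_r = min(rng.expovariate(1.0 / mean_hop1),
                    rng.expovariate(1.0 / mean_hop2))
        if snr_d < snr_th:
            out_direct += 1
        if max(snr_d, snr_r) < snr_th:  # link selection picks the better path
            out_select += 1
    return out_direct / n_trials, out_select / n_trials
```

Because selecting the better of two links can only help, the selection outage never exceeds the direct-only outage, which is the diversity benefit the direct path provides.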

Tradeoff Between Reliability and Security in Multiple Access Relay Networks Under

Falsified Data Injection Attack [NS2 2014]

COOPERATIVE relaying is gaining significant attention because multiple intermediate relay nodes can collaborate with each other to enhance overall network efficiency. It exploits the physical-layer broadcast property of the wireless medium: transmitted signals can be received and processed by any node in the neighborhood of a transmitter. The cooperative relaying approach has great potential to provide substantial benefits in terms of reliability (diversity gain) and rate (bandwidth or spectral efficiency). These benefits can extend coverage, reduce network energy consumption, and promote uniform energy drainage by exploiting neighbors' resources. They can be of great value in many applications, including ad-hoc networks, mesh networks, and next-generation wireless local area networks and cellular networks. In multiple access relay networks, relay nodes may combine the symbols received from different sources to generate parity symbols (packets) and send them to the destination. The destination may then use the network-generated parity symbols (packets) to enhance the reliability of decoding. While this technology is promising for improving communication quality, it also presents a new challenge at the physical layer due to the dependency introduced by cooperation. That is, reliance on an implicit trust relationship among participating nodes makes the network more vulnerable to falsified data injection. Although this might also occur in a traditional system without cooperative communication, its effect is far more serious with cooperative communication.
If a false packet is injected into the buffer of a node, the output of the node becomes polluted, and the pollution may soon propagate to the entire network. The problem of detecting malicious relay nodes in single-source, multi-relay networks has been studied in the literature for different relaying strategies: in some works the relay nodes apply network coding, while in others they follow the decode-and-forward protocol. One line of work considers a peer-to-peer (P2P) network in which peers receive and forward linear combinations of the exogenous data packets; to check the integrity of the received packets, a signature vector is generated at the source node and broadcast to all nodes. Other works propose several information-theoretic algorithms for mitigating the effects of falsified data injection. The network model used in these works consists of a single source and multiple intermediate nodes that apply network coding.
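The signature-vector idea can be sketched with a toy null-space check: the source publishes a vector orthogonal (mod a prime) to all of its packets, so any honest linear combination of packets passes the test while an injected packet almost surely fails. Real schemes use higher dimensions and homomorphic signatures; the 3-element packets, modulus, and values below are purely illustrative:

```python
P = 1_000_003  # prime modulus for the toy field (illustrative)

def cross_mod(a, b):
    """3-D cross product mod P: orthogonal to both a and b."""
    return [(a[1] * b[2] - a[2] * b[1]) % P,
            (a[2] * b[0] - a[0] * b[2]) % P,
            (a[0] * b[1] - a[1] * b[0]) % P]

def dot_mod(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def is_valid(packet, signature):
    """A packet is accepted iff it is orthogonal to the signature vector."""
    return dot_mod(packet, signature) == 0

# Source packets and the signature vector the source would broadcast.
p1, p2 = [3, 1, 4], [1, 5, 9]
sig = cross_mod(p1, p2)

# An honest relay forwards a linear combination; an attacker injects garbage.
honest = [(5 * x + 7 * y) % P for x, y in zip(p1, p2)]
forged = [(x + 1) % P for x in honest]
```

Orthogonality is preserved by linear combination, which is exactly why the check composes with network coding at intermediate nodes.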

Integrated Security Analysis on Cascading Failure in Complex Networks [NS2 2014]

THE MODERN complex network systems, including communication networks, social networks, and the smart grid, have become a key focus of security analysis. With the increasing interconnection of local networks, growing communication traffic and user demand, as well as diversifying services and emerging new technologies, these complex systems are becoming increasingly sophisticated to operate in a coordinated manner. One of the many threats posed to complex network systems by large-scale inter-connectivity is the cascading failure: a small contingency or failure can trigger a series of chain effects across the entire system, causing a massive impact on network operation and services. A notable case of cascading failure threats has been witnessed in the power infrastructure, where blackouts, or large-scale outages due to failure propagation, have affected millions of people after disastrous cascading failures. As a practical complex network system with unique physical properties affecting cascading failures, the electrical power infrastructure is chosen as the study case for the integrated security analysis of cascading failure against malicious attacks in this paper. Notably, the threat of cascading failures, which in the past was usually caused by extreme random events, can be intensified by the growing integration and utilization of computer-based control and communication networks. While the development of automation and intelligence brings significant benefits to complex systems and networks, e.g. the Internet, social and business networks, and power grids alike, it is inevitable that this upgrade also comes with growing risks and complexity of cyber-security issues.
Take the future intelligent power infrastructure, i.e. the Smart Grid, as an example: studies have shown that intelligence will bring new security challenges to the power grid, a gigantic system that already exhibits inherent structural vulnerability to cascading failures due to its physical nature. For instance, malicious attackers can take advantage of the potential open access from smart meters in the Advanced Metering Infrastructure (AMI) to plan an attack using intelligence collected from their penetration, so as to maximize the impact of the attack. Therefore, the question of how to secure a complex network system like the power grid against cascading failures has motivated the development of models and methodologies to simulate potential selective attacks that result in cascading failures in a power system, with consideration of specific network physical properties. These studies contribute to both defensive strategies and decision support for protecting the critical components of complex systems.
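A toy load-redistribution model conveys how a single attacked component can trigger a chain of overloads. The topology, loads, and tolerance parameter alpha below are hypothetical; real power-grid studies use physical power-flow models rather than this even-split heuristic:

```python
def cascade(neighbors, load, alpha, attacked):
    """Simulate a load-redistribution cascade: each node tolerates up to
    (1 + alpha) times its initial load; when a node fails, its load is
    split evenly among its surviving neighbors.  Returns the failed set."""
    cap = {n: (1 + alpha) * load[n] for n in load}
    load = dict(load)
    failed, frontier = set(), [attacked]
    while frontier:
        for n in frontier:
            failed.add(n)
            live = [m for m in neighbors[n] if m not in failed]
            for m in live:
                load[m] += load[n] / len(live)
        frontier = [m for m in load
                    if m not in failed and load[m] > cap[m]]
    return failed

# A 4-node ring; every node carries unit load.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
unit = {n: 1.0 for n in ring}
```

With a small tolerance the whole ring collapses after one attack, while a larger tolerance confines the damage to the attacked node, which is the kind of threshold behavior such models are used to study.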

Video-Aware Scheduling and Caching in the Radio Access Network [NS2 2014]

WITH the worldwide growth in the adoption of smartphones and tablets, access to Internet video and video applications from mobile devices is projected to grow very significantly. When Internet video is accessed by a mobile device, the video must be fetched from the servers of a Content Delivery Network (CDN). CDNs help reduce Internet bandwidth consumption and the associated delay/jitter, but the video must additionally travel through the wireless carrier Core Network (CN) and Radio Access Network (RAN) before reaching the mobile device. Besides adding to video latency, bringing each requested video from the Internet CDNs can put significant strain on the carrier's CN and RAN backhaul, leading to congestion, significant delay, and constraints on the network's capacity to serve a large number of concurrent video requests. This problem will be further exacerbated by recent advances in radio technologies and architectures like LTE, LTE-Advanced, small cells, and HetNets, which will increase radio access capacities very significantly, shifting the capacity challenge and congestion problem to the RAN backhaul. According to Juniper Research, operators will need to spend almost $840 billion globally over the next five years to address serious bottlenecks in their RAN backhaul networks. According to a report released by Strategy Analytics, "as global mobile data traffic grows by another 5 to 6 times over the next five years operators will face a new mobile capacity crunch by 2017 unless they increase traditional backhaul investment levels to match the anticipated growth in Radio Access Networks (RAN) capacity and user traffic." According to the report, there will potentially be a 16-PB shortfall in backhaul capacity by 2017.
To facilitate the tremendous growth of mobile video consumption without the associated problems of congestion, delay, and lack of capacity, in this paper we introduce caching of videos at (e)NodeBs at the edge of the RAN, shown in Fig. 1, so that most video requests can be served from the RAN caches instead of having to be fetched from the Internet CDNs and travel through the RAN backhaul. To address the end-to-end video capacity of the network, we also propose a video-aware wireless channel scheduler that maximizes the number of videos that can be delivered through the wireless channel, conscious of the channel conditions and the QoE needs of the videos.
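The RAN-edge caching idea can be sketched as a per-(e)NodeB LRU cache: hits are served locally, misses go over the backhaul and populate the cache. The eviction policy and capacity below are illustrative assumptions, not the paper's caching scheme:

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU video cache at an (e)NodeB: a hit is served locally, a miss
    is fetched over the RAN backhaul and inserted, evicting the least
    recently used video when the cache is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def request(self, video_id):
        if video_id in self.store:
            self.store.move_to_end(video_id)
            self.hits += 1
            return "hit"
        self.misses += 1
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the LRU entry
        self.store[video_id] = True
        return "miss"
```

Every hit is a request that never touches the backhaul, so the cache hit ratio translates directly into backhaul capacity relief.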

Supporting Highly Mobile Users in Cost-Effective Decentralized Mobile Operator Networks [NS2 2014]

As mobile networking and services enter a new communication era, offering users smart phones with higher capabilities and more diverse applications, emerging service requirements create new challenges for the current mobile network architecture. Such new requirements partly reflect the popularity of several new services and the emerging content-rich and bandwidth-intensive mobile applications. In addition, they capture the operator's desire to offer flat-rate tariffs to attract more users, encouraging the adoption of new services. Such a business paradigm may work well in an early phase, assisting the success of new technologies, e.g. Long Term Evolution (LTE), but at a later stage it may create a rebound effect with serious revenue problems for operators. Indeed, mobile operators face the challenging task of accommodating huge traffic volumes, far beyond the original network capacity. Effectively, these new requirements challenge the current mobile network architecture, which is highly centralized and not optimized for high-volume data applications. The main problem relates to the fact that central gateways handle all mobile traffic, acting as a data and mobility anchor for several radio access points without any complementary caching or data offload support at the network edge. A straightforward solution for mobile operators is to invest in upgrading their network infrastructure in terms of backhaul speed and core network resources, with the objective of always being capable of accommodating peak-hour traffic demands.
Whilst these solutions are technically feasible, financially they are challenging, particularly due to the modest Average Revenue per User (ARPU), given, in turn, the trend towards flat-rate business models. Operators are thus interested in cost-effective methods of accommodating the ever-increasing mobile network traffic while ensuring minimal investment in the current infrastructure. Network decentralization is a key enabler that allows operators to be equipped with economically competitive solutions against increased traffic demands and flat-rate charges. The basis for realizing network decentralization is to place small-scale network nodes with mobility and IP access functionalities, similar to those provided by the currently centralized gateways, towards the network edge. Such local data anchor gateways allow operators to employ solutions that can selectively offload traffic as close to the Radio Access Network (RAN) as possible.

Optimal Probabilistic Encryption for Secure Detection in Wireless Sensor Networks [NS2 2014]

WIRELESS sensor networks (WSNs) are vulnerable to several types of attacks, including passive eavesdropping, jamming, compromising of sensor nodes, and insertion of malicious nodes into the network. Widespread adoption of WSNs, particularly for mission-critical tasks, hinges on the development of strong protection mechanisms against such attacks. Due to the scarcity of resources, traditional wireless network security solutions are not viable for WSNs. The life span of a sensor node is usually determined by its energy supply, which is mostly expended on data processing and communication. Moreover, the size and cost constraints of the nodes limit their memory size and processing power. Therefore, security solutions that demand excessive processing, storage, or communication overhead are not practical. In particular, due to their high computational complexity, public key ciphers are not suitable for WSNs. An important application of WSNs, which has been extensively studied in recent years, involves decentralized detection, whereby the sensors send their (quantized) measurements to an ally fusion center (AFC), which attempts to detect the state of nature using the data received from all the sensors. Due to the broadcast nature of the wireless medium, the sensors' data are prone to interception by unauthorized parties. In this paper we are concerned with data confidentiality in the presence of passive eavesdropping. In particular, we assume that the transmissions of the nodes are over insecure channels. An eavesdropping fusion center (EFC) attempts to intercept the sensors' messages and detect the state of nature. Since the sensors' data are used for hypothesis testing, security can be provided by degrading the detection performance of the EFC. The communication between the sensors and the AFC (or EFC) is assumed to be over a parallel access channel where each sensor is connected to the AFC (or EFC) by a dedicated channel. The dedicated channels are assumed to be independent and identical and are modeled as (noisy) discrete memoryless channels.
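The degradation of the EFC's channel can be illustrated analytically: if each sensor flips its quantized bit with probability p using a key stream shared only with the AFC, the AFC can undo the flips while the EFC sees an effective crossover probability pushed toward the useless value of 0.5. The formula below is just the standard composition of two binary symmetric channels; choosing the flip probabilities optimally is what the paper addresses:

```python
def efc_crossover(channel_eps, flip_p):
    """Effective bit-flip probability seen by the eavesdropping fusion
    center when each sensor flips its quantized bit with probability
    flip_p (key stream shared only with the ally fusion center, which can
    undo the flips).  Composition of two binary symmetric channels."""
    return channel_eps * (1 - flip_p) + (1 - channel_eps) * flip_p
```

At flip_p = 0.5 the EFC's observations carry no information about the transmitted bit, while the AFC's channel is unaffected for any flip_p.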

Multi-Layer Capacity Planning for IP-Optical Networks [NS2 2014]

Multi-layer IP-optical networking promises significant cost reductions for the same availability as today's networks. However, to realize these savings, it is necessary to change the planning process so that it is aware of the behavior of both layers. This article explains today's non-integrated planning process and its deficiencies. It then suggests a multi-layer router bypass optimization process. Next, it explains how different multi-layer restoration schemes work, and the changes to the planning process required to efficiently design the network in an optimized way for such schemes. The process is demonstrated on a small four-node example, and the resulting savings are compared to savings achieved for real-world networks that approximate the Deutsche Telekom and Telefonica backbone networks. The resulting IP layer links drive the demands of the optical layer. The optical layer design phase ensures that each of the links is feasible from a transmission perspective. Based on the output of this phase, it is possible to acquire transponders and regenerators and implement the lightpaths defined. At the same time, the required additional IP ports can be acquired and connected to the transponders. Once the IP layer is connected over these lightpaths, the topology is extended to include the new links. The network then enters an operations phase, where traffic data and IP and optical performance are collected. This data is used to drive the next planning phase. Obviously, this is an idealized view of the process. In reality, different phases happen in parallel; for example, the network continues to operate during the next planning phase.
The steps are sometimes not as distinct; for example, the IP planning team may interact with the optical planning team to ensure that the optical paths provided for the IP layer are sufficiently diverse. But overall, the interaction is manual and error-prone. One of the basic capabilities needed in a multi-layer tool is the ability to optimize the IP layer given knowledge of the optical layer topology. This process typically starts with a basic IP topology, in which traffic has to traverse many IP hops to reach its destination. The links are well utilized since the IP layer can re-groom traffic at every hop. To save router ports and transponders, thereby reducing network cost, the algorithm considers the traffic demands in the IP layer and identifies intermediate routers that can be bypassed. Only links that contribute to a reduction of the overall IP+optical network cost are selected, to ensure that expensive optical resources are not wasted. We call this process multi-layer bypass optimization, or MLBO.
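A toy cost test conveys the bypass decision: a direct lightpath frees IP router ports at the bypassed transit router but consumes new optical resources. The two-ports-per-link accounting and all costs below are hypothetical simplifications, not the article's cost model:

```python
import math

def worth_bypassing(demand_gbps, link_gbps, port_cost, optical_cost):
    """Toy MLBO test for one transit router on a two-hop path: a direct
    bypass lightpath needs ceil(demand/link_rate) new optical links, but
    frees two router ports (one per traversed hop) for each of them."""
    n_links = math.ceil(demand_gbps / link_gbps)
    saved_ip_cost = 2 * n_links * port_cost      # ports freed at the bypassed router
    added_optical_cost = n_links * optical_cost  # new transponders/regenerators
    return saved_ip_cost > added_optical_cost
```

The point of the comparison is the one made in the text: a bypass link is only added when it reduces the combined IP-plus-optical cost, so cheap router ports or expensive transponders can tip the decision either way.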

Planning and Operating Flexible Optical Networks: Algorithmic Issues and Tools [NS2 2014]

The continuous growth of consumers' IP traffic, fed by the generalization of broadband access (through digital subscriber line and fiber to the home) and emerging rich-content, high-rate, and bursty applications such as video on demand, HDTV, and cloud computing, can only be met with the abundant capacity provided by optical transport networks. For the future, it is expected that traffic will not only increase in volume (on average, 34 percent per year) but will also exhibit high burstiness, resulting in large variations over time and direction. Recent research efforts on optical networks have focused on architectures that support variable-spectrum connections as a way to increase spectral efficiency and reduce costs. Flexible, or elastic, optical networks appear to be a promising technology for meeting the requirements of next-generation networks that will span both the core and metro segments, and potentially also the access, all the way to the end user. A flexible network is based on flex-grid technology, which migrates from the fixed 100 or 50 GHz grid that traditional wavelength-division multiplexing ((D)WDM) networks utilize. Flex-grid has a granularity of 12.5 GHz, standardized by the International Telecommunication Union, and can combine the spectrum units, referred to as slots, to create wider channels on an as-needed basis. Flexible networks are built using bandwidth-variable optical switches that are configured to create optical paths of sufficient spectrum slots. We refer to such a connection as a flexpath, a variation of the word lightpath used in standard WDM networks.
Bandwidth-variable switches operate in a transparent manner for transit traffic, which is switched while remaining in the optical domain. In addition to flex-grid switches, flexible networks assume the use of bandwidth-variable transponders (BVTs). Various BVT implementations exist, employing single- or multicarrier transmission schemes and usually having some sort of digital signal processing (DSP) capability at the receiver, but also at the transmitter side. Several transmission parameters can be controlled in a BVT, including the baud rate, the modulation format (number of bits encoded per symbol), the forward error correction (FEC) used, the spectrum slots employed, and the useful bit rate. Since the transmission parameters are controllable, the term software-defined optics has also recently been used, implying that optical networks, which currently rely on the slowly changing circuit-switching paradigm, become more dynamic. Deciding the transmission parameters is quite complicated, since physical layer impairments (PLIs) such as noise, dispersion, interference, and nonlinear effects accumulate and deteriorate the quality of transmission (QoT) of the flexpaths. In particular, the QoT of a flexpath depends on its BVT transmission parameters, the guardband separating it from its spectrum-adjacent flexpaths, and their transmission parameters.
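Slot allocation on the 12.5 GHz flex-grid can be sketched as a first-fit search for a contiguous run of free slots. Real routing-and-spectrum-assignment also enforces spectrum continuity along the route and guardbands between adjacent flexpaths; this fragment handles a single link only:

```python
import math

SLOT_GHZ = 12.5  # flex-grid slot granularity standardized by the ITU

def first_fit(spectrum, needed_ghz):
    """Assign a flexpath a contiguous run of free slots on one link,
    first-fit: marks the slots used and returns (start, end) indices,
    or None if no contiguous run is wide enough."""
    need = math.ceil(needed_ghz / SLOT_GHZ)
    run = 0
    for i, used in enumerate(spectrum):
        run = 0 if used else run + 1
        if run == need:
            start = i - need + 1
            for j in range(start, i + 1):
                spectrum[j] = True
            return (start, i)
    return None
```

The contiguity requirement is what distinguishes flex-grid allocation from ordinary channel assignment: a demand can be blocked even when enough total slots are free, if they are fragmented.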

Mobile Ad Hoc Networking: Milestones, Challenges, and New Research Directions [NS2 2014]

The multihop (mobile) ad hoc networking paradigm emerged in the civilian field in the 1990s with the availability of off-the-shelf wireless technologies able to provide direct network connections among users' devices: Bluetooth for personal area networks, and the 802.11 standards family for high-speed wireless LANs. Specifically, these wireless standards allow direct communications among network devices within the transmission range of their wireless interfaces, thus making the single-hop ad hoc network a reality, that is, an infrastructureless WLAN/WPAN where devices communicate without the need for any network infrastructure (Fig. 1). The multihop paradigm was then conceived to extend the possibility of communication to any pair of network nodes, without the need to deploy a ubiquitous network infrastructure. In the '90s, we witnessed the use of the multihop paradigm in mobile ad hoc networks (MANETs), where nearby users communicate directly (by exploiting the wireless network interfaces of their devices in ad hoc mode) not only to exchange their own data but also to relay the traffic of other network nodes that cannot communicate directly, thus operating as routers do in the legacy Internet. For this reason, in a MANET the users' devices cooperatively provide the Internet services usually provided by the network infrastructure. At its birth, the MANET was seen as one of the most innovative and challenging wireless networking paradigms, and it promised to become one of the major technologies, increasingly present in everyone's everyday life.
The potential of this networking paradigm made ad hoc networking an attractive option for building fourth-generation (4G) wireless networks, and hence the MANET immediately gained momentum, producing tremendous research efforts in the mobile network community. The Internet model was central to the MANET Internet Engineering Task Force (IETF) working group, which, inheriting the TCP/IP protocol stack layering, assumed an IP-centric view of a MANET; see "Mobile Ad Hoc Networks" by J. P. Macker and M. Scott Corson. The MANET research community focused on what we call pure general-purpose MANETs, where pure indicates that no infrastructure is assumed to implement the network functions and no authority is in charge of managing and controlling the network. General-purpose denotes that these networks are not designed with any specific application in mind, but rather to support any legacy TCP/IP application.

Scheduling Multi-Channel and Multi-Timeslot in Time Constrained Wireless Sensor Networks via Simulated Annealing and Particle Swarm Optimization [NS2 2014]

Wireless sensor networking (WSN) is a continuously evolving technology for various applications, such as environment monitoring, patient monitoring, and many industrial applications. Wireless sensors can potentially be deployed over a large geographical area via multihop communications. Unlike delay-tolerant applications, patient monitoring, disaster warning, intruder detection, and many industrial applications require timely responses. However, it is challenging to provide timely and reliable communication in WSNs, mainly because conventional WSNs operate on a single channel: sensor nodes must compete with other nodes to access a single-channel medium of limited bandwidth. If a transceiver operates on multiple channels, multiple simultaneous transmissions and receptions are feasible on the wireless medium without interfering with each other, and the bandwidth limitation can be relieved. Therefore, using multiple channels and time slots facilitates timely communication. In IEEE Std 802.15.4 for WSNs, a superframe structure consists of a contention access period (CAP) and guaranteed time slots (GTSs). Our proposal utilizes this superframe structure, but each time slot is extended to accommodate multiple channels, as in IEEE Std 802.15.4e, to guarantee end-to-end delay. The channels and time slots available to a node vary because each node's selection of channels and time slots imposes a set of constraints on the channels and time slots available to its neighbors.
Our proposal affords each node the freedom to choose the optimal time slot and channel in establishing communication links to its neighbors, resulting in high throughput and low delay. Scheduling is a critical process for virtually all resource-allocation problems, especially when quality of service (QoS) requirements must be met. Scheduling channels and time slots for all nodes constituting an end-to-end (e2e) path to meet certain delay bounds is challenging because each node has a different remaining path length to the destination and encounters dissimilar channel environments. Assuming the channels and time slots are integer-numbered from 1 to some arbitrary number, a simple approach would be to schedule them in a sequenced and staggered fashion from the source to the destination; that is, each node chooses the smallest number out of the available time slots and channels, and this channel-time slot combination becomes unavailable to its children, parent, and their neighbors.
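The sequenced-and-staggered baseline described above can be sketched as follows, approximating the parent/children/neighbor exclusion as a two-hop interference window (a simplifying assumption; the paper's simulated annealing and particle swarm methods search over much richer schedules):

```python
def staggered_schedule(path_len, n_channels, n_slots):
    """Sequenced/staggered assignment along an e2e path: each hop picks
    the smallest-numbered free (time slot, channel) pair, where 'free'
    excludes the pairs used by the two preceding hops (a crude stand-in
    for the parent/children/neighbor constraint)."""
    pairs = [(s, c) for s in range(1, n_slots + 1)
                    for c in range(1, n_channels + 1)]
    schedule = []
    for hop in range(path_len):
        blocked = set(schedule[max(0, hop - 2):hop])
        schedule.append(next(p for p in pairs if p not in blocked))
    return schedule
```

Note how the first (slot, channel) pair becomes reusable three hops downstream: spatial reuse is what lets a long path fit into a small number of channels and slots.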

Management Driven Hybrid Multicast Framework for Content Aware Networks [NS2 2014]

The recent, and strong, orientation of the Internet towards services has led to a closer coupling between the transport/network and service/application layers, aiming to increase overall efficiency through cross-layer optimization. This can be achieved by making networks more aware of the transported content, i.e. content aware networks (CANs), or by making applications more aware of network conditions, i.e. network aware applications (NAA). In parallel, recent developments in multimedia and content-oriented services (e.g., IPTV, video streaming, video on demand, and Internet TV) have reinforced the interest in multicast technologies. IP multicast has not been globally deployed due to problems related to group management, router capabilities, inter-domain transport, and lack of quality of service (QoS) support. Overlay multicast, despite its lower efficiency, has emerged as an alternative. In a complex scenario, a hybrid multicast, combining IP multicast with overlay multicast, can be attractive in terms of scalability, efficiency, and flexibility. Another trend, aiming to overcome the current Internet ossification by creating customized flexible networks, is to use network virtualization. New business entities (Fig. 1), named virtual network providers, can offer customized virtual networks. In particular, service providers (SPs) can deploy their services on top of hired virtual networks without the burden of performing connectivity control.
Such a virtualized transport service can be deployed by network providers (NPs), either enhanced to become virtual NPs or cooperating with separate new entities that offer network virtualization. However, each NP still manages its own infrastructure. While full network virtualization is challenging in terms of seamless deployment, "lighter" solutions can be attractive, deployed as parallel data planes, logically separated but under the coordination of a single management and control plane. In [9], several research challenges related to the management and control planes are identified, and the proposed solution addresses some of them: in particular, guaranteeing service availability in accordance with a pre-established service level agreement (SLA), guaranteeing QoS, supporting large-scale service provisioning and deployment, enabling tighter integration between services and networks, and accepting newly activated on-demand services.

The Impact of Application Signaling Traffic on Public Land Mobile Networks [NS2 2014]

The widespread use of mobile devices on third-generation (3G) and Long-Term Evolution (LTE) networks has led to the development of various applications that take advantage of the always-on Internet connectivity provided by these networks. Instant messenger (IM) and social network services (SNSs) like Facebook and Twitter are examples of this class of new mobile applications. Traditional Internet applications, such as web surfing and file transfer, are characterized by a usage pattern that has distinct active and inactive phases. An active phase is a period in which several bursts of packets are transmitted, while an inactive phase is characterized by no data transmission during a sustained time period. The traffic pattern of recent and emerging applications that rely on always-on connectivity is quite different. Since the emerging mobile applications support real-time communications services, they often run constantly in background mode to receive status updates or messages from other parties. Thus, the applications continuously generate short signaling messages such as keep-alive and ping requests to maintain the always-on connectivity. Although the traffic volume of keep-alive messages is not large, frequent short messages can incur a large amount of related signaling traffic in the mobile network. In 3G or LTE networks, the user equipment (UE) and radio access network maintain the radio resource control (RRC) states. The UE stays in RRC Connected mode when it transmits or receives data during active periods and stays in RRC Idle mode during inactive periods. To send even a small data packet, the UE must change to the RRC Connected mode prior to transmission.
This RRC state change generates many signaling messages, resulting in a rapid increase in traffic load. The amount of signaling traffic leads to two major problems: rapid drainage of the mobile device's battery and a signaling traffic surge in the mobile network. In , the authors focused on the issues of the energy impact on the mobile device. In this article, we focus on the signaling impact of these applications on public land mobile networks (PLMNs). The signaling traffic surge, or so-called signaling storm, due to the rapid growth in use of these applications is having a serious impact on mobile network performance. The frequent RRC state change leads to increased signaling overhead over the air interface and through the core elements of a mobile network. The effect of signaling traffic load gets more severe for the core network as the number of UE devices connected to the core network elements increases. Several mobile network operators (MNOs) have experienced severe service outages or degraded network performance due to the increase of application signaling traffic. Furthermore, the stability of the network can also be impacted by signaling traffic when there is an application server failure or outage.
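As a rough illustration of how small keep-alive messages translate into network-side signaling load, the following back-of-the-envelope sketch assumes every keep-alive sent from RRC Idle forces an Idle-to-Connected promotion, each costing a fixed budget of signaling messages. All numbers are hypothetical, not figures from the article.

```python
# Toy estimate: signaling messages per hour generated by keep-alives.
# Assumption: each keep-alive triggers one RRC setup + release cycle.

def signaling_load(keepalive_interval_s, msgs_per_rrc_cycle, num_devices,
                   period_s=3600):
    transitions_per_device = period_s / keepalive_interval_s
    return transitions_per_device * msgs_per_rrc_cycle * num_devices

# e.g. a 60 s keep-alive, ~30 messages per RRC cycle, one million devices:
print(signaling_load(60, 30, 1_000_000))   # messages per hour: 1.8e9
```

Even with modest per-cycle costs, a million always-on devices produce billions of signaling messages per hour, which is the "signaling storm" effect described above.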

Distributed Sampled-Data Filtering for Sensor Networks With Nonuniform Sampling Periods [NS2 2014]

A SENSOR network consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions. The purpose of a sensor network is to provide users with the information of interest from data gathered by spatially distributed sensors. Thus, it is not surprising that signal estimation has been one of the most fundamental collaborative information processing problems in sensor networks and has found wide applications in military and civilian fields, such as target tracking and localization, air traffic control, guidance, and navigation. Such signal estimation in a sensor network could be done under the end-to-end information flow paradigm by communicating all the relevant data to a central collector node, e.g., a sink node. This, however, is a highly inefficient solution in sensor networks, because it may cause long packet delays, it creates a potential critical failure point at the central collector node, and, most of all, sensor networks are usually severely constrained in energy and bandwidth. To avoid these problems, an alternative solution is for the estimation to be performed in-network: every sensor with both sensing and computation capabilities acts not only as a sensor but also as an estimator, collecting measurements from its neighbors to generate estimates. This is known as distributed estimation and has attracted increasing attention during the past few years. In sensor networks, measurements are sampled and transmitted to estimators via unreliable communication networks.
Although frequent measurement sampling and transmission may improve estimation performance, it consumes much energy and is thus not desirable in sensor networks with constrained energy. In other words, estimation should be performed in an energy-efficient way in sensor networks, and one straightforward yet efficient way is to increase measurement sampling periods. However, this may in turn degrade estimation performance. Thus, one has to trade off between estimation performance and energy consumption in sensor-network-based estimation, and the tradeoff can be intuitively realized by adopting a nonuniform sampling strategy. Such a strategy brings much design flexibility; e.g., one may increase the sampling period to save energy during some periods while decreasing it to improve estimation performance during other time intervals when necessary.
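A minimal sketch of one way such a nonuniform strategy could be realized, assuming the sensor thresholds the measurement innovation (prediction error). This is an illustration of the tradeoff, not the filter designed in the paper; the bounds and scaling factor are made-up parameters.

```python
# Illustrative adaptive sampling: lengthen the period while the innovation
# stays small (saving energy), shorten it when the signal starts moving.

def adapt_period(period, innovation, threshold,
                 p_min=0.1, p_max=5.0, factor=1.5):
    if abs(innovation) < threshold:
        return min(period * factor, p_max)   # quiet signal: sample less often
    return max(period / factor, p_min)       # active signal: sample more often

p = 1.0
for innov in [0.01, 0.02, 0.9, 0.05]:
    p = adapt_period(p, innov, threshold=0.1)
    print(round(p, 3))   # 1.5, 2.25, 1.5, 2.25
```

The period stretches while the estimate tracks well and contracts as soon as a large innovation signals that the process is changing, which is exactly the flexibility the nonuniform strategy provides.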

Wireless Body Area Networks: A Survey [NS2 2014]

WORLD population growth is facing three major challenges: the demographic peak of baby boomers, an increase in life expectancy leading to an aging population, and a rise in health care costs. In Australia, life expectancy has increased significantly from 70.8 years in 1960 to 81.7 years in 2010, and in the United States from 69.8 years in 1960 to 78.2 years in 2010, an average increase of 13.5%. Given the U.S. age pyramid shown in Fig. 1, the number of adults ranging from 60 to 80 years old in 2050 is expected to be double that of the year 2000 (from 33 million to 81 million people) due to the retirement of baby boomers. It is expected that this increase will overload health care systems, significantly affecting the quality of life. Further, total health care expenditure is expected to reach 20% of the Gross Domestic Product (GDP) in 2022, which is a big threat to the US economy. Moreover, overall health care expenditures in the U.S. increased significantly from 250 billion in 1980 to 1.85 trillion in 2004, even though 45 million Americans were uninsured. These statistics necessitate a dramatic shift in current health care systems towards more affordable and scalable solutions. On the other hand, millions of people die from cancer, cardiovascular disease, Parkinson's, asthma, obesity, diabetes, and many more chronic or fatal diseases every year. The common problem with all current fatal diseases is that many people experience the symptoms and have the disease diagnosed when it is too late. Research has shown that most diseases can be prevented if they are detected in their early stages. Therefore, future health care systems should provide proactive wellness management and concentrate on early detection and prevention of diseases.
One key solution towards more affordable and proactive health care systems is wearable monitoring systems capable of early detection of abnormal conditions, resulting in major improvements in the quality of life. In this case, even monitoring vital signals such as the heart rate allows patients to engage in their normal activities instead of staying at home or close to a specialized medical service.

Dynamic p-Cycle Protection in Spectrum-Sliced Elastic Optical Networks [NS2 2014]

NOWADAYS, spectrum-sliced elastic optical networking based on optical orthogonal frequency-division multiplexing (O-OFDM) technology has attracted intensive research interest, as it can significantly improve the spectral efficiency of the optical layer with flexible bandwidth allocation. Unlike wavelength-division multiplexing (WDM) networks, which operate on discrete wavelength channels with a bandwidth of 50 or 100 GHz, O-OFDM networks groom the capacities of a few narrow-band subcarrier channels (frequency slots) that are spectrally contiguous and achieve high-speed data transmission over them. Hence, by adjusting the number of frequency slots (FSs) assigned to each lightpath, O-OFDM networks can allocate optical spectrum with a finer granularity, and agile bandwidth management can be achieved for different network applications. To this end, optical networks based on the O-OFDM technology are referred to as elastic optical networks (EONs). Previously, researchers intensively investigated the routing and spectrum assignment (RSA) of lightpaths for realizing efficient service planning and provisioning in EONs. However, most of these studies did not consider how to set up lightpath connections with protection or restorability. It is known that in optical networks the amount of service disruption and data loss caused by a network-related outage can be huge, because a single optical fiber can carry over 20 Tb/s of transmission capacity. Meanwhile, natural disasters and other factors can trigger unpredictable failures of network elements and make network survivability a serious issue.
In EONs, a link failure may lead to even more severe service disruption due to the higher data rate provided by the super-channels. Therefore, it is not only important but also necessary to study protection schemes for EONs, and network operators need to implement them to ensure certain service availability for the lightpath connections. The authors of proposed an SPP scheme for EONs, called elastic separate-protection-at-connection, which could realize spectrum sharing by using first-fit to assign working traffic and last-fit to assign backup traffic.
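The first-fit/last-fit spectrum-sharing idea can be sketched on a single link's slot map. This is a toy model only: a real RSA algorithm must also enforce spectrum continuity and contiguity along the whole route, which is omitted here.

```python
# Working traffic is assigned first-fit (lowest contiguous free slots),
# backup traffic last-fit (highest), so the two classes grow toward each
# other on the fiber and leave the middle of the spectrum shareable.

def assign(slots_free, demand, last_fit=False):
    """slots_free: list of booleans; returns start index of the block or None."""
    n = len(slots_free)
    starts = range(n - demand, -1, -1) if last_fit else range(n - demand + 1)
    for start in starts:
        if all(slots_free[start:start + demand]):
            for i in range(start, start + demand):
                slots_free[i] = False
            return start
    return None   # blocked: no contiguous block of `demand` free slots

link = [True] * 10
print(assign(link, 3))                  # working path, first-fit -> 0
print(assign(link, 2, last_fit=True))   # backup path,  last-fit  -> 8
```

On a 10-slot link the working lightpath lands at slot 0 and the backup at slot 8, illustrating how the two allocation directions keep working and backup spectrum segregated.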

A Spectrum-aware Clustering for Efficient Multimedia Routing in Cognitive Radio Sensor Networks [NS2 2014]

A cognitive radio network (CRN) is formed by advanced radio devices that observe the radio environment for a suitable band, employ an intelligent agent for decision-making, and use a frequency-agile radio that can be tuned to a wide range of frequency bands to eventually operate on an intelligently selected band. Motivated by the spectrum utilization and regulation issue of exclusive use by the licensed or primary users, this new paradigm brings a revolutionary change by introducing a new class of unlicensed or secondary users who can share the spectrum opportunistically without interfering with the primary users. This new paradigm has also been investigated for wireless sensor networks (WSNs) to enjoy the potential benefits of cognitive radios, thus forming cognitive radio sensor networks (CRSNs). CRSNs can be utilized in many different application scenarios, for instance, intelligent transportation systems, industrial monitoring, surveillance, smart grids, etc. Dynamic spectrum access plays a key role in mitigating noisy spectrum bands and eases the reconfiguration of spectrum usage. Wireless multimedia sensors have been deployed for monitoring and intelligent transportation in public transport vehicles and trains. However, the spectrum utilization issues for multimedia delivery in vehicular networks have not been addressed adequately. The lack of established infrastructure, network dynamics, constrained spectrum access privileges along with unpredictable band opportunity, and the nature of the wireless medium offer an unprecedented set of challenges in supporting demanding applications over CRSNs.
Thus, supporting multimedia applications of traditional wireless multimedia sensor networks (WMSNs) over CRSNs presents many key issues that are not dealt with in the WMSN counterpart. The varying capacity of wireless links in CRSNs deteriorates the performance of a routing protocol in achieving an end-to-end delay bound. The strict delay constraint is usually compensated by setting a suitable playout deadline that takes into account the underlying network bottlenecks. Thus, by setting an appropriate deadline in conjunction with the playout time, the multimedia routing protocol should address the significant variation in delay and jitter to ensure persistent quality of service for multimedia applications.

Outage Probability in Arbitrarily-Shaped Finite Wireless Networks [NS2 2014]

OUTAGE probability is an important performance metric for wireless networks operating over fading channels. It is commonly defined as the probability that the signal-to-interference-plus-noise ratio (SINR) drops below a given threshold. The analysis of outage probability and interference in wireless networks has received much attention recently. For the sake of analytical convenience and tractability, all the aforementioned studies and many references therein assumed infinitely large wireless networks and often used a homogeneous Poisson point process (PPP) as the underlying model for the spatial node distribution. A homogeneous PPP is stationary, i.e., the node distribution is invariant under translation. This gives rise to location-independent performance: statistically, network characteristics such as the mean aggregate interference and the average outage probability as seen from a node's perspective are the same for all nodes. Mathematical tools from stochastic geometry have been applied to obtain analytical expressions for the outage probability in infinitely large wireless networks. The outage analysis in infinite wireless networks has also been extended to wireless networks with the Poisson cluster process as well as to coexisting networks sharing the same frequency spectrum. In practice, many real-world wireless networks comprise a finite number of nodes distributed at random inside a given finite region. The boundary effect of finite networks gives rise to non-stationary, location-dependent performance, i.e., nodes located close to the physical boundaries of the wireless network experience different network characteristics compared to nodes located near the center of the network.
As a result, the modeling and performance analysis of finite wireless networks requires different approaches than infinite wireless networks. For example, when a finite number of nodes are independently and uniformly distributed (i.u.d.) inside a finite network, a Binomial point process (BPP), rather than a PPP, provides an accurate model for the spatial node distribution. Unlike infinite wireless networks, deriving general results on the outage probability in finite wireless networks is a very difficult task, because the outage performance depends strongly on the shape of the network region as well as the location of the reference receiver. In this work, we investigate whether there exist general frameworks that provide easy-to-follow procedures to derive the outage probability at an arbitrary location in an arbitrarily-shaped finite wireless network.
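A small Monte-Carlo sketch of this setup: a reference receiver at the centre of a disk, N i.u.d. interferers (a BPP), Rayleigh fading, and a path-loss exponent alpha. All parameter values are illustrative assumptions, and outage is counted as SINR falling below the threshold.

```python
# Monte-Carlo estimate of outage probability for a BPP in a finite disk.
import math, random

def outage_prob(n_interferers=10, radius=10.0, alpha=4.0, noise=1e-3,
                signal_dist=1.0, threshold=1.0, trials=20000, seed=1):
    random.seed(seed)
    outages = 0
    for _ in range(trials):
        s = random.expovariate(1.0) * signal_dist ** (-alpha)  # Rayleigh power
        interference = 0.0
        for _ in range(n_interferers):
            r = radius * math.sqrt(random.random())   # uniform in the disk
            interference += random.expovariate(1.0) * max(r, 0.1) ** (-alpha)
        if s / (noise + interference) < threshold:
            outages += 1
    return outages / trials

print(outage_prob())
```

Moving the reference receiver off-centre (or changing the region's shape) changes the interferer distance distribution and hence the result, which is precisely the boundary effect the paper targets.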

Efficient and Truthful Bandwidth Allocation in Wireless Mesh Community Networks [NS2 2014]

WIRELESS mesh networks (WMNs) have emerged in recent years as a promising communication paradigm toward the cost-effective deployment of all-wireless network infrastructures [1]. Several operators have started using WMNs as a valuable technology to provide broadband Internet access in urban and rural areas, where the low return on investment cannot cover all the costs of deploying more expensive wired solutions. With the aim of further reducing the overall maintenance costs and maximizing profit, WMN operators have been fostering the deployment of wireless mesh community networks (WMCNs). In WMCNs, a group of independent mesh routers owned by different individuals forms or extends a WMN to enhance broadband connectivity, whose availability can be shared with other users not directly involved in the management of the community network. In this context, we envision a marketplace scenario where an operator may lease the bandwidth of its wireless access network to a subset of customers in order to increase the network coverage of its WMN and provide access to other residential users through the customers' mesh client devices. The customers who manage these mesh clients pay the network operator to exploit the access bandwidth, while they are rewarded directly by the residential users they serve. Note that both the operator and the customers gain from this agreement, since the former can lease the bandwidth of its WMN, saving management and maintenance costs, while the latter can earn money by subleasing the purchased bandwidth to other residential users.
Finally, the residential users that would not have been covered by the WMN operator (because of low payoffs) obtain a better Internet service. The proposed marketplace would therefore contribute to overcoming the Digital Divide problem, improving the economic efficiency of public-private wireless partnerships like those analyzed in . In order to be an attractive solution, the aforementioned bandwidth market managed by the WMN operator needs convincing allocation and payment mechanisms that act as incentives for customers to participate and subscribe to the service.
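As one example of the kind of truthful mechanism alluded to (an assumption for illustration, not the scheme proposed in the paper), the classic uniform (k+1)-th price auction for k identical bandwidth units makes truthful bidding a dominant strategy: the top k bidders win and each pays the highest losing bid.

```python
# (k+1)-th price auction sketch: truthful allocation of k identical units.

def kth_price_auction(bids, k):
    """bids: dict customer -> bid. Returns (winners, uniform_price)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:k]
    price = bids[ranked[k]] if len(ranked) > k else 0.0
    return winners, price

bids = {"c1": 9.0, "c2": 4.0, "c3": 7.0, "c4": 2.0}
print(kth_price_auction(bids, k=2))    # (['c1', 'c3'], 4.0)
```

Because the price a winner pays does not depend on its own bid, no customer can gain by misreporting its valuation, which is the incentive property such a marketplace needs.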

Reliable Multicast with Pipelined Network Coding using Opportunistic Feeding and Routing [NS2 2014]

WITH the advance of wireless communication techniques, wireless networks have become universal, and novel applications have proliferated in various fields such as mobile auctions, military command and control, distance education, and intelligent transportation systems. In these applications, multicast is a key mechanism developed to disseminate information from a single source to multiple destinations. It has attracted significant efforts to improve its performance in wireless environments with different metrics, including throughput, delay, energy efficiency, etc. Traditionally, in order to facilitate routing protocol design, an ideal wireless network model is used with the assumption that the wireless transmission links are loss-free. In reality, transmission failures happen because the quality of wireless links is affected or even jeopardized by many factors like collisions, fading, or environmental noise. Therefore, a new model for wireless networks with lossy links should be considered in multicast protocol design, especially for applications in adverse environments such as wireless sensor networks in the wild. Recently, cooperation between nodes has been proposed to improve multicast performance in lossy wireless networks. When a node fails to receive a packet from its direct upstream node, other neighboring nodes that have successfully received it can cooperatively feed the packet to this node. Such an opportunistic routing (OR) strategy at each receiver, like ExOR, is referred to as forwarder-cooperation in this paper. Later, MORE was proposed to simplify the coordination by combining forwarder-cooperation and intra-session network coding.
It has shown great advantages in increasing network throughput and simplifying protocol design by eliminating the coordination between nodes. Unfortunately, the multicast in MORE is not efficient, since excessive forwarders may be involved in data dissemination, which would incur serious MAC contention and degrade multicast performance. Moreover, the batch-by-batch policy makes the protocol susceptible to the "crying baby" problem, which is pointed out in and solved by round-robin batch scheduling. However, the algorithm Pacifier proposed in is not energy-efficient, since a substantial number of useful packets may be flushed away during frequent batch scheduling over the whole network. Furthermore, its preference for disseminating data to the destinations with good connections to the source would lead to serious unfairness in throughput.

Distortion-Fair Cross-Layer Resource Allocation for Scalable Video Transmission in OFDMA Wireless Networks [NS2 2014]

THE design and optimization of video communications over wireless networks is attracting a lot of attention from both academia and industry. The main challenge is to enhance the quality of service (QoS) support in terms of packet loss rate, end-to-end delay, and minimum guaranteed bit rate, while providing fairness where needed. The cross-layer approach, i.e., the exchange of information among different layers of the system, is one of the key concepts to be exploited to achieve this goal. In beyond-3G and 4G wireless systems, orthogonal frequency division multiple access (OFDMA) has been selected as a key physical (PHY) layer technology to support very flexible access with high spectral efficiency. In order to exploit the available temporal, frequency, and multi-user diversity, and to provide a given level of QoS, suitable adaptive resource allocation and scheduling strategies have to be implemented. Opportunistic schedulers, for instance proportional fair (PF) and maximum signal-to-noise ratio (SNR) schedulers, take advantage of the knowledge of the channel state information (CSI) in order to maximize the spectral efficiency. However, with these schedulers, the final share of throughput is often unfair, especially for cell-edge users, which suffer from data-rate limitations due to high path loss and inter-cell interference.
In real-time streaming, the mismatch between the allocated PHY layer rate and the rate required by the delay-constrained application may cause the loss of important parts of the streams, which significantly degrades the end-user quality of experience (QoE). The provision of acceptable QoE to every user is enabled by the use of a scheduler at the medium access control (MAC) layer that delivers a fair throughput, according to specific utilities and constraints defined by the application. Moreover, the presence of an optimized source rate adaptation technique at the application (APP) layer becomes crucial to improve stability, prevent buffer overflow, and maintain video play-back continuity. Rate adaptation is enabled by the use of video encoders that support multiple layers which can be sequentially dropped, thereby providing graceful degradation. One of the most promising tools is the H.264 Advanced Video Coding (AVC) standard with scalable extension, also known as Scalable Video Coding (SVC).
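The proportional fair rule mentioned above can be sketched in a few lines: on each slot, schedule the user maximizing instantaneous rate divided by average throughput, then update the averages with an exponential filter. The rates and the filter constant below are illustrative assumptions, not values from the paper.

```python
# Proportional fair (PF) scheduling sketch for one resource block per slot.

def pf_schedule(inst_rates, avg_thr, beta=0.1):
    """Pick one user for this slot and update the throughput averages."""
    user = max(inst_rates, key=lambda u: inst_rates[u] / avg_thr[u])
    for u in avg_thr:
        served = inst_rates[u] if u == user else 0.0
        avg_thr[u] = (1 - beta) * avg_thr[u] + beta * served
    return user

avg = {"cell_edge": 1.0, "cell_centre": 1.0}
picked = [pf_schedule({"cell_edge": 2.0, "cell_centre": 10.0}, avg)
          for _ in range(5)]
print(picked)   # the cell-edge user is eventually served despite its low rate
```

The high-rate cell-centre user dominates at first, but as its average throughput grows (and the starved edge user's average decays) the PF metric flips, so the cell-edge user gets slot 5. This is the fairness/efficiency compromise the text describes.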

A Complex Network Approach to Topology Control Problem in Underwater Acoustic Sensor Networks [NS2 2014]

UNDERWATER acoustic sensor networks (UASNs) are the technology that enables various underwater applications, and interest in UASNs is growing. UASNs consist of underwater sensors (anchored nodes) and surface sinks that perform collaborative monitoring tasks over a three-dimensional deployment space. Anchored nodes are equipped with floating buoys inflated by pumps, and the depth of an anchored node is regulated by adjusting the length of its wire; the buoyant force from the buoys is far greater than the gravity acting on the nodes. As shown in , measurements of environmental events are locally monitored by the anchored nodes and transferred to a surface sink over multiple hops. Neither electromagnetic waves nor laser waves are suitable for underwater transmission, and acoustic communication is the typical physical layer technology in UASNs. Therefore, a distinguishing feature of UASNs is propagation delay, because acoustic waves are much slower than electromagnetic waves: the speed of acoustic waves is approximately 1500 m/s. Consequently, propagation delay in UASNs cannot be neglected. Another inevitable issue regarding UASNs is signal irregularity: the signal is not uniform in all directions, which is caused by various factors, such as antenna directions and gains, transmitting power, battery status, the signal-to-noise ratio threshold, and obstacles.
In particular, various obstacles are distributed in underwater environments; thus, signals are more easily reflected, diffracted, or scattered during propagation, so probabilistic coverage and connectivity models are more appropriate for acoustic detection applications. Signal irregularity directly or indirectly affects the performance of network protocols, such as MAC, routing, localization, and topology control. Therefore, signal irregularity is a non-negligible issue, especially in UASNs. Moreover, the battery power of nodes is limited: batteries usually cannot be easily replaced underwater, and solar energy can rarely be exploited either.
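To put the 1500 m/s figure in perspective, a quick comparison of per-hop propagation delay against radio, using textbook constants and an assumed 1 km hop:

```python
# Acoustic vs. radio propagation delay over a single 1 km underwater hop.

SOUND_SPEED = 1500.0        # m/s in sea water (approximate)
LIGHT_SPEED = 3.0e8         # m/s for electromagnetic waves

hop = 1000.0                # metres (assumed hop length)
print(hop / SOUND_SPEED)    # acoustic delay: roughly 0.67 s
print(hop / LIGHT_SPEED)    # radio delay: a few microseconds
```

Five orders of magnitude separate the two, which is why propagation delay drives MAC and topology-control design in UASNs in a way it never does in terrestrial radio networks.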

Efficient Virtual Backbone Construction without a Common Control Channel in Cognitive Radio Networks [NS2 2014]

Cognitive radio networks (CRNs) are a promising solution to today's channel (spectrum) congestion problem. Primary users (PUs) in CRNs are privileged users, for whom there should be no interference. Each secondary user (SU), or node, in a CRN is capable of sensing the available channels and can make opportunistic use of them without causing interference with primary users. When a PU begins to occupy a channel, SUs on that channel need to quit immediately. Hence, the dynamics of channel availability make it difficult to carry out end-to-end data transport in CRNs. For example, in Fig. 1, there are two PUs, Tx and Rx, and a data transmission route consisting of three SUs, S1, S2, and S3. When the link between PU Tx and Rx is active, the links between the three SUs may be broken if they use the same channel as the two PUs. Therefore, the end-to-end data transmission from S1 to S3 is unstable. A practical scenario is that the two PUs in Fig. 1 are TV towers, and the SUs are wireless devices using IEEE 802.22. If a node in a CRN wants to reach another node that is multiple hops away, two problems arise. First, the node needs to calculate the route to the destination node. However, the high dynamics of channel availability make it costly to collect information from other nodes and construct a routing path. Second, even if the route is built, the links on the route are unstable. When the dynamic channels on a link of the route become unavailable, the route is broken. To solve the problem of broken routes caused by unstable links, we can make use of a virtual backbone structure.
A virtual backbone consists of a connected subset of nodes in the network such that every node is either in the subset or is a neighbor of a node in the subset. We use "area" to refer to a backbone node and the nodes attached to it. If a virtual backbone is constructed for a CRN, the backbone nodes can calculate area routes for end-to-end communications. An area route is the set of areas that must be traversed in order to reach the destination. For example, in Fig. 2, each node is either a backbone node or is attached to a backbone node. A1 denotes an area, which includes a backbone node and its attached nodes. Nodes on the borders are called gateway nodes. The source node S wants to reach the destination node D, which is located in another area, so the backbone node that S is attached to calculates an area route for S. Moreover, the virtual backbone can solve the unstable link problem, because with an area route a packet can be sent to any node in the next-hop area. This is much more robust than a route consisting of individual nodes, where a packet must be sent to one specific next-hop node. Therefore, the influence of unpredictable channel availability is reduced.
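A greedy construction in the spirit of the description above: grow a connected dominating set (the virtual backbone) by repeatedly adding, from the neighbourhood of the current backbone, the node that covers the most uncovered nodes. This is an illustrative sketch, not the paper's algorithm; the toy topology is made up and the graph is assumed connected.

```python
# Greedy connected dominating set (virtual backbone) sketch.

def greedy_backbone(adj, start):
    """adj: dict node -> set of neighbours. Returns a backbone (CDS) set."""
    backbone = {start}
    covered = {start} | adj[start]
    while covered != set(adj):
        # candidates keep the backbone connected: neighbours of backbone nodes
        candidates = {v for b in backbone for v in adj[b]} - backbone
        best = max(candidates, key=lambda v: len(adj[v] - covered))
        backbone.add(best)
        covered |= adj[best] | {best}
    return backbone

adj = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3}}
print(sorted(greedy_backbone(adj, start=2)))   # [2, 3]
```

In the toy graph, nodes 2 and 3 form the backbone and every other node attaches to one of them, so each backbone node and its attached nodes form an "area" as defined above.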

Sustainability Analysis and Resource Management for Wireless Mesh Networks with

Renewable Energy Supplies [NS2 2014]

THE EXPLOSIVELY growing demand for ubiquitous broadband wireless access has led to a significant increase in the energy consumed by wireless communication networks. To counter this increase, future generations of wireless networks are expected to use renewable energy sources, e.g., wind, solar, and tides, to meet the ever-increasing user demand while reducing the detrimental effects of conventional energy production. However, unlike traditional energy supplied from the electricity grid, renewable energy sources are intrinsically dynamic, with unstable availability and time-varying capacity. For example, a wind turbine usually provides intermittent power that depends on how windy the weather is. Although solar panels can provide a relatively continuous power supply, the energy varies with the time of day and the season of the year, and is influenced by atmospheric conditions and geography. As a result, when renewable energy is deployed to power wireless communication networks, its dynamic and unreliable nature affects the availability and efficiency of communications, making energy-sustainable network design a necessity. Improving energy efficiency has long been a fundamental research issue in wireless communications, mainly because of the limited battery power of mobile terminals and/or the increasing cost of energy from the electricity grid. In traditional battery-powered systems, energy is a limited resource but remains stable during the battery lifetime.
The electricity grid generally provides continuous power on demand with no stringent usage limit; however, this power is primarily generated from limited and non-sustainable resources such as coal, natural gas, and petroleum. In contrast, renewable energy sources are sustainable in the long term but are unstable and intermittently available in the short term. As a result, in a network powered by renewable energy, the fundamental design criterion and the main performance metric shift from energy efficiency to energy sustainability. While many existing works focus on energy efficiency, energy sustainability has not been well explored and deserves further investigation. Thus motivated, we first develop a mathematical model to study the "energy sustainability" performance of wireless devices theoretically and, based on this analysis, we further dimension the resource management and admission control strategies to improve the sustainable network performance under an energy sustainability constraint.
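To make the notion of energy sustainability tangible, here is a toy discrete-time battery simulation (an illustrative sketch under assumed Bernoulli energy harvests, not the paper's model): energy arrives intermittently, a fixed demand is drawn each slot, and the fraction of slots in which demand cannot be met serves as a crude sustainability metric.

```python
import random

def sustainability_outage(arrival_rate, harvest, demand, capacity, slots, seed=0):
    """Toy battery-queue model. Each slot, a harvest of `harvest` joules
    arrives with probability `arrival_rate`; the device then tries to
    draw `demand` joules. Returns the fraction of slots in outage."""
    rng = random.Random(seed)
    battery, outages = capacity / 2, 0     # start half-charged
    for _ in range(slots):
        if rng.random() < arrival_rate:    # intermittent renewable harvest
            battery = min(capacity, battery + harvest)
        if battery >= demand:
            battery -= demand              # demand served this slot
        else:
            outages += 1                   # energy depleted: outage slot
    return outages / slots
```

With a steady over-provisioned harvest the outage fraction is zero; cutting the arrival rate drives it up, which is exactly the short-term instability the text describes.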

Distributed Energy Efficient Clouds Over Core Networks [NS2 2014]

CLOUD computing exploits powerful resource management techniques to let users share a large pool of computational, network, and storage resources over the Internet. The concept is inherited from research-oriented grid computing and further expanded into a business model where consumers are charged for the diverse services offered. Cloud computing is expected to be the main factor shaping the future Internet service model by offering network-based rather than desktop-based user applications. Virtualization lies at the heart of cloud computing: the requested resources are created, managed, and removed flexibly over the existing physical machines such as servers, storage, and networks. This opens the door to resource consolidation that cuts costs for the cloud provider and, eventually, cloud consumers. However, the elastic management and economic advantages of cloud computing come at the cost of increased concerns about privacy, availability, and power consumption. Cloud computing has benefited from the work done on datacenter energy efficiency. However, the success of the cloud relies heavily on the network that connects the clouds to their users. The expected popularity of cloud services therefore has implications for network traffic, and hence network power consumption, especially considering the total path that information traverses from cloud storage through servers, internal LANs, and the core, aggregation, and access networks up to the users' devices. For instance, the authors in have shown that transporting data in public, and sometimes private, clouds might be less energy efficient than serving the computational demands on a traditional desktop.
Designing future energy-efficient clouds therefore requires the co-optimization of both the external network and the internal cloud resources. A lack of understanding of the interplay between these two resource domains can cause an eventual loss of power. For instance, a cloud provider might decide to migrate virtual machines (VMs) or content from one cloud location to another because of lower cost or the availability of green renewable energy; however, the power consumed by the network through which users' data traverse to/from the new cloud location might outweigh the gain of the migration.
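The migration trade-off just described can be sketched as a simple power comparison (an illustrative back-of-the-envelope check, not the paper's optimization; the per-hop energy-per-bit figure is an assumed placeholder):

```python
def should_migrate(server_power_saving_w, extra_hops, traffic_gbps,
                   router_energy_per_bit_j=2e-8):
    """Migrate a VM to a greener/cheaper site only if the server-side
    power saving exceeds the extra network power spent carrying user
    traffic over the additional hops to the new location.
    router_energy_per_bit_j (J/bit per hop) is an assumed figure."""
    network_power_w = extra_hops * traffic_gbps * 1e9 * router_energy_per_bit_j
    return server_power_saving_w > network_power_w
```

For example, saving 500 W at the destination site justifies three extra hops of 5 Gb/s user traffic at the assumed router efficiency, whereas saving only 100 W does not; a co-optimized design makes this comparison jointly rather than deciding on server cost alone.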

Vehicles as Information Hubs During Disasters: Glueing Wi-Fi to TV White Space to Cellular Networks [NS2 2014]

Society's reliance on being connected anytime and anywhere, via many kinds of devices over an enormously complicated telecommunications infrastructure, exposes its vulnerability during disasters. The resiliency of the communication infrastructure during and after earthquakes, hurricanes, floods, and other natural or man-made disasters has become one of the foremost issues both for governments and for private telecommunications carriers. The flow of emergency aid to areas affected by a disaster hinges on timely information coming from those areas. The infrastructure, including cellular operations, might get disrupted either locally or over very wide areas for a myriad of reasons, ranging from base-station power outages to equipment failures, from collapsed antennas to operator-level call prioritization policies. Moreover, Wi-Fi hotspot and access-point connectivity might be lost for similar causes. This, in turn, instantly renders expensive and multi-functional gadgets such as smartphones, tablets, personal computers, and countless other communication devices useless. This hypothetical-sounding scenario is exactly what happened during and after the Great East Japan Earthquake in March 2011, leaving scores of people hopelessly trying to reach their families, relatives, and friends over a non-functioning or partially functioning network. The following is a brief description of the system and the flow of events during the demonstration, which was presented at the 20th ITS World Congress Tokyo 2013.
We showed that, during disasters, vehicles can convey information from an area where the telecommunications network is disrupted to an area where the telecommunications infrastructure is intact. The demonstration combined different technologies, including Wi-Fi, TV white space, cellular networks, and the movement of the vehicles themselves. We applied and expanded the Internet's cornerstone concept of store-and-forward packet switching in a different context, where the unit of "packet" was replaced with a piece of information belonging to a person, place, or thing. The TV white space used for V2V communications in this demonstration was the first such trial carried out in any metropolitan area in the world. The demonstration starts with several users in the "disaster affected" area inputting text and voice on a tablet and transmitting it to a nearby vehicle equipped with a Wi-Fi access point. Each user's tablet screen was given a different background color so that the users could see when and how their messages moved hop-by-hop between vehicles to eventually appear in the cloud.

Improving Spectrum Efficiency via In-Network Computations in Cognitive Radio Sensor Networks [NS2 2014]

WIRELESS sensor networks (WSNs) have attracted tremendous attention for their mission-driven development and deployment. For a large-scale WSN comprising many sensors, efficient spectrum sharing with existing wireless networks is surely a trend. Facing the increasing spectrum demand of wireless services and devices, cognitive radio technology is widely employed to enhance spectrum utilization. Specifically, when exploiting WSNs for smart grid applications, the spectrum-aware technique is recognized as a promising solution to enable reliable and low-cost remote monitoring of smart grids. To fully exploit this technology, especially for large WSNs, more concurrent transmission opportunities within a given spectrum are desired to realize spatial reuse of the spectrum. In addition, maintaining reliable data transportation on top of numerous opportunistic links in cognitive (radio) multi-hop sensor networks becomes an essential requirement to bring spectrum efficiency into reality. However, as indicated by , there exists a significant end-to-end delay for larger network diameters in large cognitive machine networks, which prevents practical applications. Thus, it becomes a great challenge to support an effective end-to-end quality-of-service (QoS) guarantee with regard to reliable communications in cognitive radio sensor networks (CSNs), while such a technology is applicable to machine-to-machine communications, cyber-physical systems, and spectrum-sharing WSNs.
Efficient spectrum management for cognitive radios is often achieved by formulating allocation optimization problems, such as spectrum or resource-block allocation, user-to-station assignment, and so on. For multi-channel cognitive radio networks, time-spectrum blocks are allocated by constructing a subset of the good assignments, thereby obtaining a suboptimal solution from the given assignments. A CSMA-based multi-channel MAC protocol proposed in optimizes the throughput performance of coexisting multiple systems. A distributed multi-channel MAC protocol is further proposed in for energy-efficient communication in multi-hop cognitive radio networks. The above efforts focus only on the efficient allocation of the primary systems' (PSs') spectrum holes.
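A minimal sketch of the kind of allocation problem mentioned above (a generic greedy colouring heuristic, not any of the cited protocols; all names are illustrative): assign each cognitive link the first spectrum hole not already used by an interfering link, which realizes spatial reuse because non-conflicting links may share a channel.

```python
def greedy_channel_assignment(links, channels, conflicts):
    """Greedy colouring-style sketch: for each link, pick the first
    channel not taken by any conflicting (interfering) link.
    `conflicts` maps a link to the links it interferes with."""
    assignment = {}
    for link in links:
        used = {assignment[other] for other in conflicts.get(link, ())
                if other in assignment}           # channels of assigned neighbours
        for ch in channels:
            if ch not in used:
                assignment[link] = ch             # first conflict-free channel
                break
    return assignment
```

With three mutually chained links and two channels, the outer links safely reuse the same channel while the middle one takes the other, illustrating spatial reuse within a given spectrum.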

Femtocell Access Strategies in Heterogeneous Networks using a Game Theoretical Framework

[NS2 2014]

LONG-TERM evolution-advanced (LTE-A) techniques are proposed by the 3rd Generation Partnership Project (3GPP) to provide higher spectrum efficiency and data rates. According to the technical report from 3GPP, the downlink and uplink peak data rates are respectively required to reach Gbps and Mbps levels in order to fulfill the quality-of-service (QoS) requirements of the user equipment (UE). To achieve these objectives, introducing additional low-power base stations (BSs) into the existing networks naturally becomes a feasible way to increase spectrum efficiency and data rates. On the other hand, according to the statistical data in , nearly 90% of data services and 60% of phone calls are expected to take place in indoor environments. Hence, femtocell BSs (fBSs), with their short-range, low-power, low-cost, and plug-and-play properties, are designed to connect to the end user's broadband line in order to provide high throughput and QoS for the UEs. Moreover, installed fBSs can share the traffic load of coexisting macrocell BSs (mBSs). For macrocell/femtocell heterogeneous networks (HetNets), it has been shown in that co-channel deployment of the frequency spectrum achieves higher system throughput than independent channel deployment because of spectrum reuse. However, a critical challenge associated with femtocell technology is co-channel interference when the fBSs use the same frequency spectrum as the overlaid mBSs, especially when the fBSs operate in closed access mode. Note that the closed and open access modes are two different access methods for the femtocell.
The closed access mode only allows specific UEs that possess proper authorization, i.e., subscribers, to access the corresponding fBS. In general, subscribers are the UEs who purchase a closed-access fBS in order to improve their own throughput, while nonsubscribers are prohibited from accessing the closed-access fBS. The open access mode, on the other hand, permits all UEs to connect to and access the fBS. One severe problem for this type of HetNet is that the fBS will produce strong interference to those UEs that are situated close to the fBS but are not connected to it. This problem clearly tends to occur in closed access mode, since the nonsubscribers close to the fBS are not allowed to access it.
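The two access modes reduce to a simple admission rule, sketched below (an illustrative toy, not 3GPP admission control; the function and parameter names are hypothetical):

```python
def can_access_fbs(ue_id, mode, subscribers):
    """Femtocell admission sketch: open access admits every UE,
    closed access admits only authorized subscribers."""
    if mode == "open":
        return True
    if mode == "closed":
        return ue_id in subscribers
    raise ValueError("mode must be 'open' or 'closed'")
```

The interference problem in the text follows directly from this rule: a nearby nonsubscriber is denied by a closed-access fBS, so it stays attached to the distant mBS while receiving strong downlink interference from the fBS it cannot join.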

Joint Optimization of Clustering and Cooperative Beamforming in Green Cognitive Wireless Networks [NS2 2014]

CELLULAR network operators face hurdles in supporting the escalating growth of wireless data traffic due to spectrum scarcity. To tackle this challenge, cognitive radio has been proposed to improve spectrum efficiency by allowing a secondary system to opportunistically use a spectrum band licensed to a primary system, provided that the former respects the interference limits imposed by the latter. However, spectrum sharing and interference mitigation are great challenges when the BSs perform their cognitive functions individually. In this work, we therefore consider cooperation between cognitive BSs. In particular, we focus on the cooperative beamforming technique, also known as coordinated multipoint transmission (CoMP), which was first proposed to improve the performance of cell-edge users. When cognitive BSs cooperate, not only can they reap larger capacity and diversity gains, but they can also mitigate interference to primary users more effectively. However, these benefits of CoMP come with significant costs. First, the cooperating BSs must be connected by a backhaul through which they exchange channel knowledge and user data. Second, cooperation largely increases energy consumption due to the extra signal processing. Meanwhile, energy efficiency has become a central issue for operators as they seek to decrease their carbon footprint and operating costs. Although the theoretical benefits and practical issues of CoMP have been studied in many works, only a few have looked into its energy efficiency.
An energy consumption model for BS cooperation has only recently been developed in . With that model, the authors in analyzed the energy efficiency of an idealized CoMP system and concluded that the cooperative processing power must be kept low for CoMP to provide an energy efficiency gain. This raises the question of when and how BSs should form clusters and cooperate. When the service requirements are high, cooperation may help the BSs better serve their users and protect primary users from interference; otherwise, they should use a simpler coordination strategy to save energy.
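The cooperate-or-not decision can be illustrated with a bits-per-joule comparison (a toy sketch, not the cited energy consumption model; all rates and powers are assumed inputs):

```python
def comp_gains_energy_efficiency(rate_single_bps, rate_comp_bps,
                                 tx_power_w, cooperation_power_w):
    """Compare energy efficiency (bits per joule) without and with CoMP.
    Cooperation adds signal-processing/backhaul power, so it only wins
    when its rate gain is large enough to cover that overhead."""
    ee_single = rate_single_bps / tx_power_w
    ee_comp = rate_comp_bps / (tx_power_w + cooperation_power_w)
    return ee_comp > ee_single
```

Tripling the rate at a 50% power overhead favours CoMP, while a 20% rate gain at the same overhead does not, which matches the cited conclusion that the cooperative processing power must stay low for CoMP to pay off.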

Confederation Based RRM with Proportional Fairness for Soft Frequency Reuse LTE Networks [NS2 2014]

SOFT frequency re-use (SFR) maximizes spectrum utilization in Long Term Evolution (LTE) networks by allowing all macrocell base stations (MBSs) to transmit over the entire available spectrum. However, considering that LTE also employs micro-, pico-, and femtocell base stations (BSs) as small-cell BSs (SBSs) within each macrocell, when all subcarriers are occupied, SFR leads to more interference at the SBSs' user equipments (UEs). Furthermore, the presence of femtocells, as a low-cost alternative to picocells, results in additional interference, since they are installed and controlled by the end user. Therefore, in order to implement the SFR approach effectively in LTE heterogeneous cellular networks (HetNets), all BSs must have an adaptive interference-avoidance capability. In 4G HetNets, which employ orthogonal frequency division multiple access (OFDMA), downlink interference is practically reduced using radio resource management (RRM). This includes frequency spectrum allocation and power control, where, in the case of interfering BSs, spectrum allocation minimizes interference by allocating different subsets of subcarriers to those BSs. This, however, reduces the ability of the interfering BSs to fully exploit multiuser diversity and consequently reduces the achievable throughput. Thus, it is important to evaluate the combined performance of RRM and scheduling together.
The most popular scheduling algorithms in OFDMA systems include maximum sum rate (MSR), maximum fairness (MF), proportional rate constraints (PRC), proportional fairness (PF), and the cumulative distribution function based scheduling policy, which retains characteristics similar to the PF scheduler in that it maximizes both multiuser diversity and user fairness. For this reason, PF-based schedulers are commonly applied in cellular environments. Although the fairness of a system can be assessed by the proportion of resources assigned to a user with some normalization factor [10], this paper is interested in assessing fairness in terms of quality-of-service improvement. In general, OFDMA RRM can be classified into three categories: distributed, centralized, and self-organizing network (SON). Distributed RRM lets each SBS allocate its UEs' subcarriers based on measurements of the received interference, while centralized RRM uses a central node to compute the subcarrier allocation for all UEs. SON RRM, on the other hand, utilizes a number of functions to manage the resources; often, it combines the distributed and centralized approaches to reduce interference.
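The PF scheduler mentioned above has a compact standard form: serve the user with the largest ratio of instantaneous rate to long-term average rate, then update the averages exponentially. A minimal Python sketch (the time constant and the toy rates are illustrative assumptions):

```python
def pf_schedule(instant_rates, average_rates):
    """Proportional fairness: pick the user maximizing the ratio of
    instantaneous achievable rate to its long-term average rate,
    which trades multiuser diversity against fairness."""
    return max(instant_rates, key=lambda u: instant_rates[u] / average_rates[u])

def update_average(avg, served_user, instant_rates, t_c=100.0):
    """Exponential moving-average update with time constant t_c:
    the served user's average rises, everyone else's decays."""
    return {u: (1 - 1 / t_c) * avg[u]
               + (1 / t_c) * (instant_rates[u] if u == served_user else 0.0)
            for u in avg}
```

Note that a user with a poor channel but a very low average rate can still win the slot, which is exactly why PF preserves fairness while MSR would starve that user.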

Lightweight Robust Device-Free Localization in Wireless Networks [NS2 2014]

DUE TO its potential and promising commercial and military applications, wireless localization has drawn extensive attention in recent years. Most traditional wireless localization techniques, such as sensor network localization, RFID localization, and robot and pedestrian localization [4], equip the target with a wireless device that emits signals detectable by anchor nodes whose locations are known a priori; localization is then realized cooperatively by utilizing the wireless measurements between the target and the anchor nodes. However, in some applications, such as battlefield surveillance, security safeguarding, and emergency rescue, the target is uncooperative, and it is thus impractical to equip the target with a wireless device. How to achieve device-free localization (DFL), without the need to equip the target with a wireless device, becomes a challenging problem in such scenarios. Within the deployment area of a wireless network (WN), communications between pairs of nodes create many wireless links that traverse the space. When a target moves into the area, it may shadow some of the wireless links and absorb, diffract, reflect, or scatter some of the transmitted power. The set of shadowed links differs depending on the target's location, which makes it possible to realize DFL based on link measurements. The DFL technique was originally proposed independently by Youssef et al. and Zhang et al. Youssef et al. modeled the problem as a machine learning problem and realized DFL with a fingerprint matching method. Zhang et al. presented a signal dynamic model and adopted a geometric method as well as a dynamic cluster based probabilistic cover algorithm to solve the DFL problem.
These works provide valuable exploration of the DFL problem and prove the feasibility of using the shadowing effect of wireless links to realize DFL. However, the machine learning method requires an off-line training process that is laborious and time-consuming, while the geometric method is sensitive to noise, since it uses only the current observation to estimate the location. More recently, Savazzi et al. evaluated the DFL technique with extensive experiments, and further work has been reported by Wilson et al. and Zhao et al.
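The core geometric intuition (a shadowed link constrains the target to lie near that link's line segment) can be sketched with a simple voting estimator. This is an illustrative toy, not any of the cited methods; the grid, link width, and coordinates are assumptions.

```python
def point_to_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def locate_target(grid_points, shadowed_links, width=0.5):
    """Vote-based DFL sketch: each shadowed link votes for candidate
    locations within `width` of its segment; the candidate with the
    most votes is the location estimate."""
    def votes(p):
        return sum(point_to_segment_dist(p, a, b) <= width
                   for a, b in shadowed_links)
    return max(grid_points, key=votes)
```

Two crossing shadowed links localize the target near their intersection; with noisy single-snapshot measurements this estimator inherits exactly the sensitivity the text attributes to purely geometric methods.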

Complex-Valued B-Spline Neural Networks for Modeling and Inverting Hammerstein Systems [NS2 2014]

COMPLEX-VALUED (CV) artificial neural networks have attracted considerable attention from both theoretical research and practical application communities. In particular, the communication signal processing community has long been interested in neural network representations of CV nonlinear systems, as well as in inverting CV nonlinear systems. It is well known that most artificial neural networks cannot be automatically extended from the real-valued (RV) domain to the CV domain, because the resulting model would in general violate the Cauchy-Riemann conditions, which means the training algorithms become unusable. A number of analytic functions have been introduced for fully CV multilayer perceptrons. A fully CV radial basis function network was introduced in for regression and classification applications. Alternatively, the problem can be avoided by using two RV artificial neural networks, one processing the real part and the other the imaginary part of the CV signal/system. A more challenging problem is the inversion of a CV nonlinear system, which typically arises in communication signal processing applications. This is a much under-researched area, and the few existing methods, such as the algorithm proposed in , are not very effective at tackling practical CV signal processing problems. The RV signal processing field offers motivation and inspiration for developing efficient techniques for modeling and inverting CV nonlinear systems. A popular approach to nonlinear system modeling in the RV domain is to use block-oriented nonlinear models, which combine linear dynamic models with static, or memoryless, nonlinear functions.
In particular, the two types of RV block-oriented nonlinear models that have found a wide range of applications are the Wiener model, which comprises a linear dynamical model followed by a nonlinear static transformation, and the Hammerstein model, which consists of a nonlinear static transformation followed by a linear dynamical model. An efficient B-spline neural network approach for modeling CV Wiener systems was derived in . Owing to its best conditioning property, the RV B-spline curve has been widely used in computer graphics and computer-aided geometric design.
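The Hammerstein structure itself is easy to simulate: a static nonlinearity applied sample-by-sample, followed by a linear FIR filter. The sketch below (plain Python, not the paper's B-spline estimator; the toy nonlinearity and tap values are assumptions) generates data from such a system for a complex-valued input.

```python
def hammerstein(x, nonlinearity, fir_taps):
    """Hammerstein model: static (memoryless) nonlinearity followed by
    a linear FIR dynamical model, for a complex-valued sequence x."""
    w = [nonlinearity(v) for v in x]           # static nonlinear stage
    y = []
    for n in range(len(w)):
        acc = 0j
        for k, h in enumerate(fir_taps):       # linear dynamic stage (convolution)
            if n - k >= 0:
                acc += h * w[n - k]
        y.append(acc)
    return y

# Example toy nonlinearity: a smooth magnitude saturation, as might
# crudely mimic a high-power amplifier (illustrative only).
saturate = lambda v: v / (1 + abs(v) ** 2) ** 0.5
```

Swapping the order of the two stages (filter first, nonlinearity second) would give the Wiener model described in the same sentence.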

Max-Min SNR Signal Energy Based Spectrum Sensing Algorithms for Cognitive Radio Networks with Noise Variance Uncertainty [NS2 2014]

THE current wireless communication networks adopt a fixed spectrum access strategy. The Federal Communications Commission has found that this fixed spectrum access strategy utilizes the available frequency bands inefficiently. A promising approach to addressing this problem is to deploy a cognitive radio (CR) network. One of the key characteristics of a CR network is its ability to discern the nature of the surrounding radio environment, which is performed by the spectrum sensing (signal detection) part of the CR network. The most common spectrum sensing algorithms for CR networks are matched filter, energy, and cyclostationarity based algorithms. If the characteristics of the primary user, such as the modulation scheme, pulse shaping filter, and packet format, are known perfectly, the matched filter is the optimal signal detection algorithm, as it maximizes the received signal-to-noise ratio (SNR). This algorithm has two major drawbacks: first, it needs a dedicated receiver to detect each signal characteristic of a primary user; second, it requires perfect synchronization between the transmitter and receiver, which is practically impossible to achieve, since the primary and secondary networks are in general administered by different operators. The energy detector does not need any information about the primary user and is simple to implement. However, the energy detector is very sensitive to noise variance uncertainty, and there is an SNR wall below which this detector cannot guarantee a given detection performance.
Cyclostationarity based detection is robust against noise variance uncertainty and can reject the effect of adjacent channel interference. However, its computational complexity is high, and a large number of samples is required to exploit the cyclostationary behavior of the received signal. Moreover, this algorithm is not robust against the cyclic frequency offset that can occur due to clock and timing mismatch between the transmitter and receiver. In , an eigenvalue decomposition (EVD) based spectrum sensing algorithm has been proposed. This algorithm is robust against noise variance uncertainty, but its computational complexity is high. Furthermore, for a single-antenna receiver, this algorithm is sensitive to adjacent channel interference signals, and for a multi-antenna receiver, it requires a channel covariance matrix different from a scaled identity.
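The classical energy detector discussed above fits in a few lines; the sketch below is a generic textbook form (not the paper's max-min algorithm, and the threshold factor is an assumed input rather than one derived from a target false-alarm probability).

```python
def energy_detect(samples, noise_var, threshold_factor):
    """Energy detector sketch: declare the band occupied when the
    average sample energy exceeds a threshold set as a multiple of
    the (assumed known) noise variance. When noise_var is only known
    within some uncertainty factor, the threshold is miscalibrated,
    which is the origin of the SNR wall."""
    test_statistic = sum(abs(s) ** 2 for s in samples) / len(samples)
    return test_statistic > threshold_factor * noise_var
```

The comment makes the vulnerability concrete: if the true noise variance can be anywhere in an interval around the assumed value, a weak enough primary signal is indistinguishable from that calibration error no matter how many samples are taken.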

BASA: Building Mobile Ad-hoc Social Networks on Top of Android [NS2 2014]

The proliferation of consumer-oriented communication technologies across the Internet and mobile communications has fostered growing attention to social network platforms (SNs) and expedited a large number of SN services on the Internet. These SN services help connect people from different geographic locations, facilitating the smooth functioning of their work and different modes of communication and socialization. Despite the widespread success of social networks, they have certain limitations that can be addressed by the proposed new design. First, people often demand a kind of ad-hoc social network to strengthen local communication based on proximal contact and closeness of location. In scenarios such as conferences and expositions, participants might wish to exchange information and share documents with new partners. However, current social networks offer no direct way to facilitate such local social communication, so participants might give up exploiting interpersonal affinities for personal benefit. Face-to-face communication is one option, but it is of little use for file sharing and group discussions, where social media is the main goal of a social network. Second, existing SN services implicitly assume that the Internet or cellular network infrastructures are always available. This assumption, however, may not hold at all times, owing to blind network spots, device heterogeneity, and security considerations. Third, it is time-consuming to build and manage local SNs on Android platforms without general development schemes.
Each service provider accomplishes local SN functions with individual schemes, incurring much repetition of work and heavy human resource costs [Katsaros]. Additionally, Google Android provides developers with common API libraries and the development tools necessary to build, test, and debug applications; however, it does not provide support for local social community orchestration. Recently, mobile ad-hoc social networks (MASNs) have become prevalent with the ubiquitous use of laptops, smartphones, and touch pads. MASNs are a self-configuring and self-organizing social networking paradigm that sets up local social communication via mobile devices without utilizing the underlying infrastructure. They bring both convenience and challenges to SNs. On one side of the spectrum, MASNs relax the requirement that communication infrastructures are indispensable: using short-range communication techniques such as Bluetooth and ZigBee, MASNs establish local communities. On the other side, they impose new challenges on SN services due to local socialization and user mobility. The related studies of MASNs cover a series of areas, mainly comprising community detection, evolution, and data transmission. Some of these schemes are close to our work.

An Enhanced Public Key Infrastructure to Secure Smart Grid Wireless Communication Networks [NS2 2014]

When the legacy power infrastructure is augmented by a communication infrastructure, it becomes a smart grid. This additional communication infrastructure facilitates the exchange of state and control information among different components of the power infrastructure. As a result, the power grid can operate more reliably and efficiently. Although deploying the smart grid brings enormous social, environmental, and technical benefits, the incorporation of information and communication technologies into the power infrastructure will introduce many security challenges. For example, it is estimated that the data to be collected by the smart grid will be an order of magnitude more than that of existing electrical power systems. This increase in data collection can introduce security and privacy risks. Moreover, the smart grid will be collecting new types of information that were not recorded in the past, and this can lead to further privacy issues. As shown in Fig. 1, an essential part of the smart grid will be its communication networks. This is a three-tier network which connects the different components of the smart grid together and allows two-way information flow. The first tier connects the transmission system located at the power plant and the control centers of Neighborhood Area Networks (NANs). Each NAN comprises a number of Building Area Networks (BANs) and provides them with interfaces to the utility's wide-area network. Here, BANs are customer networks and belong to the second tier of the shown system. Each BAN consists of a number of third-tier networks, Home Area Networks (HANs). The HAN is a customer premises network which manages the on-demand power requirements of end users.
Note that there is no standard definition of these networks yet. The structures described above feature a practical configuration that can be found in established smart grids. While networking the different components of the power infrastructure together to exchange information, as illustrated in Fig. 1, there is a potential increase in the security risk of the system. For example, networking will increase the complexity of the electrical power grid, which in turn can introduce new security vulnerabilities. Also, the number of entry points that can be used to gain access to the electrical power system will increase when all of the components are networked together. In the remainder of this article, we mainly focus on the security of the wireless communication subnetworks of the smart grid. Security over wired links can be achieved by existing techniques such as firewalls, virtual private networks, Secure Shell, or other higher-layer security mechanisms.

Smart Grid Neighborhood Area Networks: A Survey [NS2 2014]

Smart grid is an intelligent power network that combines various technologies in power, communication, and control, which can monitor and optimize the operations of all functional units from electricity generation to end-customers. It is featured by its two-way flows of electricity and information, based on which an optimized energy delivery network can be constructed. By introducing distributed control and pervasive communications into the grid, real-time information can be delivered and exchanged amongst all domains. Customers can optimize their electricity usage to minimize utility costs, and the control centers can make real-time power pricing and many other decisions according to energy demands. Thus, a balance of power generation and demand in the entire grid can be achieved to significantly improve power quality and efficiency. Needless to say, many intelligent electronic devices (IEDs), such as intelligent sensors and smart meters, can be used to support various network functions throughout the power generation, storage, transmission, and distribution domains. A smart grid NAN is deployed within the distribution domain of the grid, i.e., it forms the communication facility of a power distribution system. The distribution domain dispatches power to households in the customer domain through the electrical and communication architectures between the transmission and customer domains. Smart grid NANs offer the distribution domain the capability of monitoring and controlling electricity delivery to each household according to user demands and energy availability. NANs directly connect all the end users in regional areas, forming the most important segment of the power grid, one that can determine the efficiency of the whole grid.
As one of the core technologies, an efficient, reliable, and secure communication network plays an important role in realizing all the goals of smart grid NANs. A communication network is required to connect IEDs and other power devices over large distributed areas, and it forms a framework for real-time bidirectional information transmission and exchange in smart grid NANs. The communication networks in current power grids were built regionally for control and monitoring, and they cannot meet the requirements of next-generation power grids. In order to upgrade the communication networks in the current power grid, many researchers have sought ways to apply advanced communication and networking technologies to power systems. These technologies include up-to-date wired and wireless communication network technologies, such as broadband power line communications (BPLC), wireless sensor networks (WSNs), wireless local area networks (WLANs), and wireless mesh networks (WMNs). Different communication and networking technologies are complementary in nature, and the communication scenarios and characteristics of NANs in a power grid should be investigated.

On the ASEP of Decode-and-Forward Dual-Hop Networks with Pilot-Symbol Assisted M-PSK [NS2 2014]

COOPERATIVE communication networks promise high quality of service for contemporary and next-generation communication systems. Their end-to-end (e2e) performance can be further improved by employing multiple-antenna relays and using efficient combining schemes. A reasonable choice is the maximal-ratio combining (MRC) scheme, which maximizes the instantaneous output signal-to-noise ratio (SNR) and, consequently, offers the best error rate performance. Coherent detection based on this scheme requires channel estimates. In practice, these estimates are imperfect; specifically, noise is added to them by the channel estimation technique used, resulting in performance degradation. It is, therefore, important to study the effect of such imperfections on the e2e performance and to optimize critical parameters, such as the pilot symbol power, the number of pilots per channel block, the number of relays and antennas, etc., to compensate for the degradation. In the open technical literature there are many important works on point-to-point communications considering pilot-symbol assisted modulation (PSAM) techniques. Motivated by these early works, various papers on cooperative communications with imperfect channel state information (CSI) have been published. For example, in , the impact of imperfect transmitter CSI on the diversity gain has been investigated assuming dynamic decode-and-forward (DF) relaying channels, while an analysis of the diversity-multiplexing tradeoff has also been presented.
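The MRC behavior described above can be sketched numerically. The following toy example (our own illustration, not the paper's analysis; the 0.3 estimation-noise scale is an arbitrary assumption) shows that with perfect CSI the MRC output SNR equals the sum of the branch SNRs, while mismatched weights built from noisy estimates can only lose SNR, by the Cauchy-Schwarz inequality.

```python
import random

random.seed(0)

def gauss_c():
    """A complex Gaussian sample (Rayleigh-fading branch gain)."""
    return complex(random.gauss(0, 1), random.gauss(0, 1))

def effective_snr(weights, h, noise_var):
    """Output SNR of a linear combiner with the given weights over true gains h."""
    num = abs(sum(w.conjugate() * g for w, g in zip(weights, h))) ** 2
    den = sum(abs(w) ** 2 for w in weights) * noise_var
    return num / den

h = [gauss_c() for _ in range(4)]            # true branch gains
h_hat = [g + 0.3 * gauss_c() for g in h]     # pilot-based estimates with noise

perfect = effective_snr(h, h, 1.0)           # MRC: equals the sum of branch SNRs
mismatched = effective_snr(h_hat, h, 1.0)    # weights from imperfect CSI
print(mismatched <= perfect)                 # True: imperfect CSI costs SNR
```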
The effect of outdated channel estimates on DF relay selection, when operating over Nakagami-m fading channels, has been studied in , where closed-form expressions for the e2e outage probability have been derived. The performance of a multihop wireless communication system with an arbitrary number of intermediate relays has been analyzed in , assuming Rayleigh fading and the DF relaying protocol. Recently, in , the quadrature phase-shift keying (QPSK) average bit error probability (ABEP) of a cooperative network with adaptive DF relaying and PSAM over time-selective and frequency-flat Rayleigh fading has been studied. Although the aforementioned works cover important aspects of DF cooperative systems with imperfect CSI, specific assumptions are made that limit their generality, e.g., a specific channel model, a single-channel scenario, uncorrelated channels, or a specific modulation order.

Collusion-Resistant Repeated Double Auctions for Relay Assignment in Cooperative Networks [NS2 2014]

WIRELESS channels often suffer from time-varying fading caused by multi-path propagation and Doppler shifts, resulting in significant performance degradation. Recently, one important technique that exploits the spatial diversity achieved by employing multiple transceiver antennas has been shown to be very effective in coping with channel fading. However, in reality, equipping each wireless node with multiple antennas may not be feasible, as the footprint of multiple antennas may not fit in the wireless node. To enhance spatial diversity among wireless nodes, cooperative communications, in which each node is equipped with a single antenna and exploits spatial diversity via the antennas of relay nodes, have exhibited great potential for improving both data rates and quality. Various types of networks, particularly mobile cellular networks, are hungry for high-rate and quality-guaranteed communication techniques to cater to the ever-increasing demand for multimedia data services. However, deploying more communication infrastructure (base stations) in existing 3G/4G wireless networks has been shown to be very costly and thus is not applicable to small cell phone carriers. In contrast, cooperative communication technology does not require adding any extra infrastructure to existing networks but offers great flexibility. With cooperative communication, cell phone carriers can economically enhance their network coverage and data rates by leasing infrastructure from other carriers.
These carriers are independent entities operating marketing and billing services on their own behalf to maximize their own benefits. The mechanisms mentioned above thus may not always be applicable to scenarios where the auctioneer is rational. This raises an important question: how to give each selfish entity an incentive to participate in the trading while accounting for the revenue of the auctioneer.
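To make the double-auction setting concrete, here is a minimal sketch of a McAfee-style truthful double auction (a standard textbook mechanism, not the paper's collusion-resistant design; the bid/ask values are invented): bids are sorted descending and asks ascending, the breakeven index k is found, and the first k-1 pairs trade at uniform prices taken from the excluded k-th pair, with the spread kept by the auctioneer.

```python
def mcafee(bids, asks):
    """Trade the first k-1 buyer/seller pairs; the k-th pair sets the prices,
    so no trading participant can gain by misreporting. Returns
    (number of trades, uniform buyer price, uniform seller price)."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                              # pairs that could profitably trade
    if k < 2:
        return 0, None, None                # not enough overlap to price a trade
    return k - 1, bids[k - 1], asks[k - 1]  # buyers pay b_k, sellers get a_k

trades, pay, receive = mcafee([9, 7, 5, 2], [1, 3, 6, 8])
print(trades, pay, receive)                 # 1 trade; buyer pays 7, seller gets 3
```

The gap between the buyer price and the seller price (here 7 - 3 = 4) is the auctioneer's revenue, which speaks to the question above of keeping a rational auctioneer engaged.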

Dynamic Packet Length Control in Wireless Sensor Networks [NS2 2014]

A fundamental challenge in wireless networks is that radio links are subject to transmission power limits, fading, and interference, which degrade the data delivery performance. This challenge is exacerbated in wireless sensor networks (WSNs), where severe energy and resource constraints preclude the use of many sophisticated techniques that may be found in other wireless systems. In this paper, we consider a simple, cost-effective solution based on the technique of dynamic packet length control to improve performance under these varying conditions. A tradeoff exists between the desire to reduce header overhead by making packets large, and the need to reduce the packet error rate (PER) over a noisy channel by using small packets. Although there have been several studies on packet length optimization in the literature, existing approaches usually require a set of parameters to be carefully tuned so that it matches the level of dynamics seen by a particular data trace. However, any fixed set of parameters will not adapt to changing conditions, since one parameter set does not fit all conditions. Furthermore, the update process would require user intervention, further data collection, and reprogramming of the parameters. This is precisely what we want to avoid, and avoiding it is one of the strengths of a dynamic packet length optimization scheme. We design and implement DPLC based on TinyOS 2.1. The current implementation of DPLC on TelosB motes is lightweight. We evaluate DPLC in a testbed consisting of 20 TelosB nodes running the CTP protocol, and compare its performance with a simple aggregation scheme and AIDA. Results show that DPLC achieves the best performance. The rest of this paper is structured as follows.
Section II discusses related work. Section III describes the experimental observations that motivate our design. Section IV presents the design of DPLC. Section V presents an analysis of the energy consumption and the convergence rate of DPLC. Section VI introduces the implementation details. Section VII shows the simulation results. Section VIII shows the evaluation results. Finally, Section IX concludes this paper.
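The header-overhead versus packet-error-rate tradeoff described above can be sketched with the standard textbook model (this is our own illustration, not the DPLC algorithm; the 11-byte header and 114-byte payload cap are assumptions roughly matching an 802.15.4 frame): with header H bytes and per-bit error rate ber, throughput efficiency is eta(L) = L/(L+H) * (1-ber)^(8(L+H)), so small payloads waste capacity on headers while large payloads raise the packet error rate.

```python
def efficiency(payload_bytes, header_bytes, ber):
    """Fraction of channel capacity delivering payload, under independent bit errors."""
    total_bits = 8 * (payload_bytes + header_bytes)
    return payload_bytes / (payload_bytes + header_bytes) * (1 - ber) ** total_bits

def best_payload(header_bytes, ber, max_payload=114):
    """Payload length maximizing efficiency for the given link quality."""
    return max(range(1, max_payload + 1),
               key=lambda L: efficiency(L, header_bytes, ber))

print(best_payload(11, 1e-3))   # noisy link: a shorter payload is optimal
print(best_payload(11, 1e-4))   # cleaner link: a longer payload is optimal
```

A dynamic scheme in the spirit of the paper would re-estimate ber online and move the payload length accordingly, instead of fixing it at deployment time.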

Game Theoretic Framework for Future Generation Networks Modelling and Optimization [NS2 2014]

WIRELESS network design involves modelling numerous factors that define the performance of the network. It is therefore necessary to comprehend and assess the impact of each parameter on the network performance in order to determine the most precise optimization model that guarantees maximum network efficiency. However, the complexity of the model grows rapidly with the network size and the traffic complexity. As a consequence, a minor adjustment of a single parameter may cause a significant impact on the entire network performance. Future generation wireless networks (FGWNs) offer users heterogeneous traffic which demands different levels of data rate, quality of service (QoS), and bandwidth. This diversity in traffic demands implies highly complex management to maintain network cost efficiency. In order to provide an economical solution that balances network cost and efficiency, network designers have to develop new deployment methods that can balance QoS and network cost whilst accounting for a list of constraints and influential factors that govern the network performance. As a solution to this challenge, orthogonal frequency-division multiple access (OFDMA) is adopted in FGWNs as a promising candidate, not only due to its high immunity against multipath but also because it enables simultaneous multi-user transmission along with exploiting both multi-user and multi-path diversities. As a result of power and bandwidth constraints, the capacity at a base station (BS) is bounded by the available resources and the channel coefficients between the BS and the surrounding traffic density distribution.
The most dominant factors governing the channel coefficients are the propagation loss and the interference between the co-channels assigned to the cells/sectors across the network. Additionally, the traffic distribution and density shape the probability density function of the channel coefficients between a BS and the surrounding traffic. Hence, optimizing the number and distribution of BSs results in optimizing the network parameters and maximizing capacity whilst maintaining both QoS and network cost. Traditional and advanced planning methods treat these conditions either separately or insufficiently, which often leads to inaccurate network design. Therefore, all the aforementioned factors must be treated simultaneously to achieve maximum network performance.

Scheduling in Single-Hop Multiple Access Wireless Networks with Successive Interference Cancellation [NS2 2014]

INTERFERENCE avoidance has been commonly used to deal with wireless channel interference in the design of network protocols. With interference avoidance, a receiver can only decode one transmission at a time, treating all other transmissions as interference. The arrival of multiple transmissions at a receiver results in a collision and thus reception failure. On the other hand, interference cancellation allows detecting multiple transmissions at a time by decomposing all the signals in a composite signal. Among the many interference cancellation techniques, SIC appears to be the most promising due to its simplicity, overall system robustness, and existing prototypes. SIC is based on decoding the signals from multiple transmitters successively: each time a signal is decoded, it is subtracted from the composite signal to improve the Signal-to-Interference-plus-Noise Ratio (SINR) of the remaining signals. Time Division Multiple Access (TDMA), where only one transmission is scheduled at a time, has often been used for conflict-free scheduling in single-hop multiple-access wireless networks with interference avoidance. Although TDMA is simple and easy to implement, it leads to suboptimal channel usage since it does not exploit the capability of multiple transmission detection. On the other hand, SIC-based single-hop multiple-access wireless networks still require an efficient scheduling algorithm, since in practice a receiver node may only decode a certain number of transmissions at a time.
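The SIC decoding step described above can be sketched in a few lines (an illustrative toy with made-up powers, not the paper's scheduler): signals are decoded strongest first, and each decoded signal is subtracted from the composite, so every later signal sees less interference.

```python
def sic_sinrs(powers, noise):
    """SINR seen by each transmission when decoded in descending-power order."""
    order = sorted(range(len(powers)), key=lambda i: powers[i], reverse=True)
    remaining = sum(powers)                 # total received power
    sinrs = {}
    for i in order:
        remaining -= powers[i]              # i is decoded and subtracted
        sinrs[i] = powers[i] / (remaining + noise)
    return sinrs

print(sic_sinrs([4.0, 2.0, 1.0], noise=0.5))
# The first-decoded (strongest) signal faces all others as interference;
# the last-decoded signal faces only noise.
```

In practice a transmission is decodable only if its SINR under this ordering clears a threshold, which is exactly the constraint that makes the scheduling problem in this paper hard to formulate.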
The goal of this paper is to study the joint optimization of the scheduling algorithm and rate allocation to minimize the completion time required to satisfy the given traffic demands of the links, defined as the schedule length, in SIC-based single-hop multiple-access wireless networks. The resource allocation problem has received a lot of attention for interference-avoidance-based wireless networks in the past. However, the optimization problems formulated for these networks cannot be adapted to SIC-based networks due to the difficulty of including the SINR requirement, which depends on the ordering of the SIC decoding, as a constraint in the optimization problem. In SIC-based networks, the SINR of a link depends on the decoding order of the simultaneous transmissions, eliminating the previously decoded ones and treating the later decoded ones as interference. Including all possible SIC decoding orderings is only possible by assigning a variable to every possible subset of the links sorted according to their decoding order for SIC at the receiver. Since such a formulation requires a number of variables exponential in the decoding capability of the receiver, a greedy heuristic algorithm has been proposed in .

Bounding the Advantage of Multicast Network Coding in General Network Models [NS2 2014]

NETWORK coding encourages information flows to be encoded within a data network, rather than merely being forwarded and replicated. Such a departure from the classic store-and-forward principle has proven effective in increasing network capacity. Higher end-to-end throughput, particularly for multicast data transmission, is witnessed in a number of network scenarios. Multicast represents an increasingly important class of applications on the Internet, encompassing traditional and emerging one-to-many data dissemination applications, such as software patch distribution, live media streaming, and video conferencing. A fundamental problem in network coding is to quantify the benefit of network coding over routing, known as the coding advantage, measured as the ratio of the achievable throughput with network coding to that with routing. Without network coding, a multicast routing solution is based on a multicast tree, or on packing a set of multicast trees. In the directed network setting, where each link has a predefined direction, there exists a combination network pattern in which the coding advantage is unbounded as the network size grows. However, in the undirected network setting, where the capacity at each link can be shared flexibly between the two directions, a contrasting result was proved: the coding advantage is upper-bounded by a constant. Directed and undirected graphs are classic subjects of study in theoretical computer science. While simple and easy to apply, they do not faithfully depict the wireline or wireless network topologies found in practice.
For example, large coding advantages in the directed setting are observed in contrived, extremely asymmetric topologies that favor network coding over tree packing, with links existing in one direction only between neighboring nodes. This is apparently different from the picture of the Internet, where pair-wise router interconnections are mostly bidirectional, i.e., if a router A can transmit to a neighboring router B, then B can transmit to A. This work studies the coding advantage in two types of parameterized networks with richer modeling power. The first is the bidirected network model, parameterized by α, the highest ratio of opposite link capacities between neighboring nodes.
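The coding advantage can be computed concretely on the classic directed butterfly network (a textbook example, not this paper's parameterized models): with unit link capacities, network coding achieves multicast rate 2 (the min cut to each receiver), while the best fractional packing of the three Steiner trees below achieves only 1.5, for an advantage of 2/1.5 = 4/3.

```python
from itertools import product

# Source s, receivers t1 and t2; the three Steiner trees of the butterfly,
# each listed by the unit-capacity edges it uses.
trees = [
    {"s-a", "a-t1", "s-b", "b-t2"},
    {"s-a", "a-c", "c-d", "d-t1", "d-t2"},
    {"s-b", "b-c", "c-d", "d-t1", "d-t2"},
]
edges = set().union(*trees)

def feasible(weights):
    """Total tree weight crossing each edge must respect its unit capacity."""
    return all(sum(w for w, t in zip(weights, trees) if e in t) <= 1.0 + 1e-9
               for e in edges)

grid = [i / 10 for i in range(11)]       # brute-force fractional weights
best = max(sum(w) for w in product(grid, repeat=3) if feasible(w))
print(best)                               # 1.5: best fractional tree packing
print(2.0 / best)                         # coding advantage 4/3 on this topology
```

The binding constraints (the shared edges s-a, s-b, and c-d) sum to 2(w1+w2+w3) <= 3, which is why no packing can beat 1.5 here while coding the two flows through c-d reaches 2.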

Robust Beamforming for Cognitive Multi-Antenna Relay Networks with Bounded Channel Uncertainties [NS2 2014]

COGNITIVE radio (CR) is a promising technology to alleviate the spectrum shortage problem and to improve spectrum utilization. In CR networks, the secondary user (SU) is allowed to access the same spectrum owned by the primary user (PU), subject to the interference constraint that the interference power from the SU to the PU is below a threshold. Thus, CR networks can achieve higher spectrum utilization. To satisfy the interference constraint, the SU transmitter (SU-Tx) should know the perfect channel state information (CSI) from the SU-Tx to the PU. In practice, however, the CSI from the SU-Tx to the PU is seldom perfectly known. In general, the channel uncertainty is characterized by two different models: the stochastic and the deterministic (or worst-case) models. In the stochastic model, the channel uncertainties are modeled as Gaussian random variables and the system design is then based on optimizing the average or outage performance. Alternatively, the worst-case model assumes that the channel uncertainties, though not exactly known, are bounded by possible values. In this case, the system is optimized to achieve a given quality of service (QoS) for every possible channel uncertainty if the problem is feasible, thereby achieving absolute robustness. It was also shown in that a bounded worst-case model is able to cope with quantization errors in CSI. We consider a non-regenerative CRN in which an SU transmitter (SU-Tx), an SU receiver (SU-Rx), and a cognitive relay are allowed to share the same spectrum with M PUs. Each of the SU-Tx, SU-Rx, and PUs is equipped with a single antenna, and the cognitive relay is equipped with N antennas.
We assume that there is no direct link between the SU-Tx and the SU-Rx, so the reliable communication link is established by the relay. This scenario is typical for device-to-device communications, where two mobile phones in an underlay cellular system communicate directly with the help of a femtocell or a laptop. It is noted that in a practical CR system, there are usually multiple secondary users trying to access the network. Generalizing the point-to-point CRN communication to multi-user CRN communication is an interesting direction for future work.
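The worst-case uncertainty model described above admits a simple numerical sketch (our own illustration with arbitrary dimensions and uncertainty radius, not the paper's optimization): if the true channel is h = h_hat + d with ||d|| <= eps, the Cauchy-Schwarz inequality bounds the worst-case interference power at the PU by (|h_hat^H w| + eps ||w||)^2, so a beamformer w enforcing this bound protects the PU for every admissible channel error.

```python
import math
import random

random.seed(1)

def gauss_c():
    return complex(random.gauss(0, 1), random.gauss(0, 1))

def inner(a, b):
    return sum(x.conjugate() * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(sum(abs(x) ** 2 for x in a))

N, eps = 4, 0.1
h_hat = [gauss_c() for _ in range(N)]      # estimated relay-to-PU channel
w = [gauss_c() for _ in range(N)]          # some candidate beamforming vector

bound = (abs(inner(h_hat, w)) + eps * norm(w)) ** 2

worst_sampled = 0.0
for _ in range(2000):                      # random admissible channel errors
    d = [gauss_c() for _ in range(N)]
    scale = eps * random.random() / norm(d)
    h = [hh + scale * di for hh, di in zip(h_hat, d)]
    worst_sampled = max(worst_sampled, abs(inner(h, w)) ** 2)

print(worst_sampled <= bound)              # True: no sampled error exceeds the bound
```

The bound is in fact tight (attained by an error aligned with w), which is why worst-case designs of this type are conservative but never violated.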

A Graph-Theoretic Approach to Scheduling in Cognitive Radio Networks [NS2 2014]

WIRELESS networks are currently characterized by a fixed spectrum assignment policy. Due to the proliferation of wireless technologies and services, the demand for radio spectrum continuously increases. This increasing demand, together with the fixed spectrum assignment policy, creates a shortage of spectrum. However, this shortage is artificial, because studies show that a very small portion of the assigned spectrum is actually utilized. This situation calls for techniques that utilize the radio spectrum more efficiently. To overcome the inefficiency in spectrum usage, the dynamic spectrum access (DSA) concept has been introduced by researchers in the wireless networking community. DSA hinges upon the idea of having an intelligent device that opportunistically utilizes temporarily unused parts of the spectrum and vacates them as soon as the licensed owner of that spectrum band resumes its operation. These intelligent devices are called cognitive radios. The licensed owners of the spectrum are called primary users (PUs), and the cognitive radio devices are called secondary users (SUs). PUs are unaware of the SUs, and SUs are obliged not to disturb the PUs. In a centralized cognitive radio network (CRN), a cognitive base station (CBS) is the central entity that has cognitive capabilities; in other words, a CBS is aware of the DSA concept. The CBS controls and guides the SUs in its service area by ensuring that the PUs in the region are not disturbed by the data communication of the SUs with the CBS. The opportunistic scheduling concept is based on exploiting the time-varying channel conditions in wireless networks to increase the overall performance of the system.
All schedulers make frequency, time-slot, and data rate allocations to the SUs. Furthermore, all of them ensure that the PUs in the service area of the CBS are not disturbed, that no collisions occur among the SUs, that reliable communication of the SUs with the CBS is maintained, that each SU is assigned at least one time-slot whenever possible, and that the number of frequencies assigned to an SU in a particular time-slot is not more than the number of its transceivers (antennas) for data transmission.
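The scheduling constraints listed above can be illustrated with a toy greedy allocator (our own sketch, not the paper's graph-theoretic formulation; the SU names, frequency indices, and PU occupancy are invented): in each time-slot, the CBS hands out frequencies not occupied by PUs, never double-books a frequency among SUs, and gives each SU at most as many frequencies as it has transceivers.

```python
def schedule(sus, frequencies, pu_busy, slots):
    """sus: {name: n_transceivers}; pu_busy: {slot: set of occupied frequencies}.
    Returns {slot: {su: [assigned frequencies]}}."""
    plan = {t: {} for t in range(slots)}
    for t in range(slots):
        # PU protection: only frequencies the PUs are not using in this slot.
        free = [f for f in frequencies if f not in pu_busy.get(t, set())]
        for su, antennas in sus.items():
            take, free = free[:antennas], free[antennas:]  # no SU collisions
            if take:
                plan[t][su] = take                         # transceiver limit held
    return plan

plan = schedule({"su1": 2, "su2": 1}, [0, 1, 2, 3], {0: {0}}, slots=2)
print(plan)
# slot 0: su1 -> [1, 2], su2 -> [3] (frequency 0 left to the PU)
# slot 1: su1 -> [0, 1], su2 -> [2]
```

A real scheduler would additionally weigh channel quality per SU-frequency pair, which is where the graph-theoretic matching formulation of the paper comes in.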

Distributed Maximum Likelihood Sensor Network Localization [NS2 2014]

NOWADAYS, wireless sensor networks are developed to provide fast, cheap, reliable, and scalable hardware solutions to a large number of industrial applications, ranging from surveillance and tracking to exploration monitoring, robotics, and other sensing tasks. From the software perspective, an increasing effort is spent on designing distributed algorithms that can be embedded in these sensor networks, providing high reliability with limited computation and communication requirements for the sensor nodes. Estimating the locations of the nodes based on pair-wise distance measurements is regarded as a key enabling technology in many of the aforementioned scenarios, where GPS is often not employable. From a strictly mathematical standpoint, this sensor network localization problem can be formulated as determining the node positions, or ensuring their consistency, with respect to the given inter-sensor distance measurements and the locations of known anchors. As is well known, such a fixed-dimensional problem, often phrased as a polynomial optimization, is NP-hard in general. Consequently, there have been significant research efforts in developing algorithms and heuristics that can accurately and efficiently localize the nodes in a given dimension. Besides heuristic geometric schemes, such as multi-lateration, typical methods encompass multi-dimensional scaling, belief propagation techniques, and standard non-linear filtering. A very powerful approach to the sensor network localization problem is to use convex relaxation techniques to massage the non-convex problem into a more tractable, yet approximate, formulation.
First adopted in , this modus operandi has since been extensively developed in the literature; see for a comprehensive survey in the field of signal processing. Semidefinite programming (SDP) relaxations for the localization problem have been proposed in . Theoretical properties of these methods have been discussed in , while their efficient implementation has been presented in . Further convex relaxations, namely second-order cone programming (SOCP) relaxations, have been proposed in to alleviate the computational load of standard SDP relaxations, at the price of some performance degradation. Highly accurate but highly computationally demanding sum-of-squares (SOS) convex relaxations have instead been employed in .
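The maximum-likelihood objective underlying these methods can be sketched directly (a centralized toy with plain gradient descent and invented coordinates, not the paper's distributed algorithm or the convex relaxations above): under Gaussian range noise, ML localization minimizes f(x) = sum_i (||x - a_i|| - d_i)^2 over the unknown position x, given anchor positions a_i and measured ranges d_i.

```python
import math

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 4.0)
d = [math.hypot(true_pos[0] - ax, true_pos[1] - ay)
     for ax, ay in anchors]              # noiseless ranges for this sketch

x, y = 5.0, 5.0                          # initial guess
for _ in range(500):
    gx = gy = 0.0
    for (ax, ay), di in zip(anchors, d):
        r = math.hypot(x - ax, y - ay)
        c = 2.0 * (r - di) / r           # chain rule: d/dx of (r - d_i)^2
        gx += c * (x - ax)
        gy += c * (y - ay)
    x -= 0.05 * gx                       # fixed-step gradient descent
    y -= 0.05 * gy

print((round(x, 3), round(y, 3)))        # converges near the true (3, 4)
```

Because f is non-convex, plain descent can stall in local minima for poor initializations, which is precisely the motivation for the convex (SDP/SOCP/SOS) relaxations surveyed above.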

Dynamic Survivable Multipath Routing and Spectrum Allocation in OFDM-Based Flexible Optical Networks [NS2 2014]

In conventional wavelength division multiplexing (WDM) optical networks, a connection is supported by a wavelength channel occupying a 50 GHz spectrum. This rigid and coarse granularity leads to wasted spectrum when the traffic between the end nodes is less than the capacity of a wavelength channel. To address this issue, optical networks capable of flexible bandwidth allocation with fine granularity are needed. Orthogonal frequency division multiplexing (OFDM) is a promising modulation technology for optical communications because of its good spectral efficiency, flexibility, and tolerance to impairments. In optical OFDM, a data stream is split into multiple lower-rate data streams, each modulated onto a separate subcarrier. By allocating an appropriate number of subcarriers, optical OFDM can use just enough bandwidth to serve a connection request. A novel OFDM-based optical transport network architecture called the spectrum-sliced elastic optical path network (SLICE) is proposed in . The SLICE network can efficiently accommodate subwavelength and superwavelength traffic by allocating just enough spectral resources to an end-to-end optical path according to the user demand. The performance superiority of OFDM-based flexible optical networks over conventional WDM optical networks has been demonstrated in . An important problem in the design and operation of OFDM-based flexible optical networks is the routing and spectrum allocation (RSA) problem.
The RSA problem for static demands is studied in ; dynamic RSA algorithms have been proposed to efficiently accommodate connection requests as they arrive at the network. In , the authors propose a split-spectrum approach that splits a bulky demand into multiple spectrum channels, all of which are routed over the same path. This approach relaxes the constraint of transmission impairment over long distances and also makes more efficient use of discontinuous spectrum fragments. A similar approach, called lightpath fragmentation, is proposed in . A dynamic multipath provisioning (MPP) algorithm with differential delay constraints for OFDM-based elastic optical networks is proposed in ; here a demand is split over multiple routing paths. In , the authors propose several dynamic routing, modulation, and spectrum assignment algorithms for elastic optical networks with hybrid single-/multipath routing. These algorithms achieve a lower bandwidth blocking probability (BBP) than both conventional single-path routing and the split-spectrum approaches.
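A core building block shared by these dynamic RSA schemes is checking the spectrum-contiguity and spectrum-continuity constraints along a route. The first-fit policy and the per-link bitmap model below are a common textbook formulation used here for illustration; they are not the specific algorithm of any one cited paper.

```python
# Hedged sketch of dynamic single-path RSA: find the first block of k
# contiguous slots that is free on every link of the route (first-fit),
# then mark it occupied. A request with no such block is blocked.
NUM_SLOTS = 16

def first_fit(link_usage, route, k):
    """Return the starting slot index, or None if the request is blocked."""
    for start in range(NUM_SLOTS - k + 1):
        if all(not link_usage[l][s]
               for l in route
               for s in range(start, start + k)):
            for l in route:                       # allocate on every link
                for s in range(start, start + k):
                    link_usage[l][s] = True
            return start
    return None

usage = {"A-B": [False] * NUM_SLOTS, "B-C": [False] * NUM_SLOTS}
print(first_fit(usage, ["A-B", "B-C"], 4))  # 0: slots 0-3 on both links
print(first_fit(usage, ["A-B"], 3))         # 4: next contiguous gap on A-B
```

The multipath and split-spectrum schemes surveyed above generalize exactly this step: instead of one contiguous block on one route, a demand may be served by several smaller blocks, over one path or several.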

WIRELESS network design involves modelling numerous factors that define the performance of the network. It is therefore necessary to comprehend and assess the impact of each parameter on the network performance in order to determine the most precise optimization model that guarantees maximum network efficiency. However, the complexity of the model grows rapidly with the network size and the traffic complexity; as a consequence, a minor adjustment of a single parameter may have a significant impact on the entire network performance. Future generation wireless networks (FGWNs) offer users heterogeneous traffic that demands different levels of data rate, quality of service (QoS), and bandwidth. This diversity in traffic demands implies highly complex management to maintain network cost efficiency. In order to provide an economical solution that balances network cost and efficiency, network designers have to develop new deployment methods that balance QoS and network cost whilst accounting for a list of constraints and influential factors that govern the network performance. As a solution to this challenge, orthogonal frequency-division multiple access (OFDMA) is adopted in FGWNs as a promising candidate, not only because of its high immunity against multipath but also because it enables simultaneous multi-user transmission along with exploiting both multi-user and multi-path diversities . As a result of power and bandwidth constraints, the capacity at a base station (BS) is bounded by the available resources and by the channel coefficients between the BS and the surrounding traffic density distribution.
The most dominant factors governing the channel coefficients are the propagation loss and the interference between the co-channels assigned to the cells/sectors across the network. Additionally, the traffic distribution and density shape the probability density function of the channel coefficients between a BS and the surrounding traffic. Hence, optimizing the number and distribution of BSs optimizes the network parameters and maximizes capacity whilst maintaining both QoS and network cost. Traditional and advanced planning methods treat these conditions either separately or insufficiently, which often leads to inaccurate network design. Therefore, all the aforementioned factors must be treated simultaneously to achieve maximum network performance.
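How propagation loss and co-channel interference jointly bound per-subchannel capacity can be sketched with the Shannon formula under a simple power-law path-loss model. All numeric values (subchannel bandwidth, path-loss exponent, powers) are assumptions chosen for illustration, not parameters from the framework described above.

```python
# Illustrative link between the factors discussed above: per-subchannel
# Shannon capacity as a function of distance (propagation loss) and
# co-channel interference from neighbouring cells/sectors.
import math

def subchannel_capacity(bw_hz, tx_power_w, distance_m, interference_w,
                        noise_w=1e-13, pathloss_exp=3.5):
    rx_power = tx_power_w * distance_m ** (-pathloss_exp)  # propagation loss
    sinr = rx_power / (noise_w + interference_w)           # co-channel SINR
    return bw_hz * math.log2(1 + sinr)                     # Shannon bound, b/s

# Capacity degrades with distance and with co-channel interference,
# which is why BS number/placement shapes the achievable network capacity:
near = subchannel_capacity(180e3, 1.0, 200.0, 0.0)
far = subchannel_capacity(180e3, 1.0, 800.0, 0.0)
print(near > far)  # True: closer users see a higher per-subchannel rate
```

Summing such terms over a BS's OFDMA subchannels and users, subject to power and bandwidth budgets, gives the kind of capacity bound the planning problem above optimizes over.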