Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
0901.2094
2107734287
This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications including detection and classification of targets in a geographical area using a network of seismic sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define the “sensing capacity” and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that for a fixed sensor configuration, codewords are dependent and nonidentically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory, and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
We review the main theoretical results presented in this paper. In Section we introduce a simple but useful sensor network model that can be used to model sensing applications such as chemical sensing and computer network monitoring. For this model, we define and bound the sensing capacity. The sensing capacity bound differs significantly from standard channel capacity results and requires novel arguments to account for the constrained encoding of a sensor network. This is an important observation given the use of mutual information as a sensor selection heuristic @cite_13 . Our result shows that this is not the correct metric for large-scale detection applications. Extensions are presented to account for non-binary target vectors, target sparsity, and heterogeneous sensors. Plotting the sensing capacity bound, we demonstrate interesting sensing tradeoffs. For example, perhaps counter-intuitively, sensors of shorter range can achieve a desired detection accuracy with fewer measurements than sensors of longer range. Finally, we compare our sensing capacity bound to simulated sensor network performance.
{ "cite_N": [ "@cite_13" ], "mid": [ "2159352259" ], "abstract": [ "We bound the number of sensors required to achieve a desired level of sensing accuracy in a discrete sensor network application (e.g. distributed detection). We model the state of nature being sensed as a discrete vector, and the sensor network as an encoder. Our model assumes that each sensor observes only a subset of the state of nature, that sensor observations are localized and dependent, and that sensor network output across different states of nature is neither identical nor independently distributed. Using a random coding argument we prove a lower bound on the 'sensing capacity' of a sensor network, which characterizes the ability of a sensor network to distinguish among all states of nature. We compute this lower bound for sensors of varying range, noise models, and sensing functions. We compare this lower bound to the empirical performance of a belief propagation based sensor network decoder for a simple seismic sensor network scenario. The key contribution of this paper is to introduce the idea of a sharp cut-off function in the number of required sensors, to the sensor network community." ] }
0901.2094
2107734287
In Section we introduce a sensor network model that accounts for contiguity in a sensor's field of view. Contiguity is an essential aspect of many classes of sensors. For example, cameras observe localized regions and seismic sensors sense vibrations from nearby targets. We demonstrate sensing capacity bounds that account for such sensors by extending results about Markov types @cite_20 , and use convex optimization to compute these bounds. The first result in Section assumes the state of the environment is modeled as a one-dimensional vector. In Section we extend this result to the case where the state of the environment is modeled as a two-dimensional grid. While a one-dimensional vector can model sensor network applications such as border security and traffic monitoring, the two-dimensional results significantly broaden the range of applications described by our models.
{ "cite_N": [ "@cite_20" ], "mid": [ "2159352259" ], "abstract": [ "We bound the number of sensors required to achieve a desired level of sensing accuracy in a discrete sensor network application (e.g. distributed detection). We model the state of nature being sensed as a discrete vector, and the sensor network as an encoder. Our model assumes that each sensor observes only a subset of the state of nature, that sensor observations are localized and dependent, and that sensor network output across different states of nature is neither identical nor independently distributed. Using a random coding argument we prove a lower bound on the 'sensing capacity' of a sensor network, which characterizes the ability of a sensor network to distinguish among all states of nature. We compute this lower bound for sensors of varying range, noise models, and sensing functions. We compare this lower bound to the empirical performance of a belief propagation based sensor network decoder for a simple seismic sensor network scenario. The key contribution of this paper is to introduce the idea of a sharp cut-off function in the number of required sensors, to the sensor network community." ] }
0901.2094
2107734287
The performance of sensor networks is limited by both sensing resources and non-sensing resources such as communications, computation, and power. One set of results has been obtained by considering the limitations that communications requirements impose on a sensor network. @cite_1 extends the results in @cite_6 to account for the different traffic models that arise in a sensor network. @cite_34 studies network transport capacity for the case of regular sensor networks. @cite_32 studies the impact of computational constraints and power on the communication efficiency of sensor networks. @cite_36 has considered the interaction between transmission rates and power constraints. Another set of results has been obtained by extending results from compression to sensor networks. Distributed source coding @cite_11 , @cite_19 provides limits on the compression of separately encoded correlated sources. @cite_10 applies these results to sensor networks. @cite_40 provides an overview of this area of research.
{ "cite_N": [ "@cite_36", "@cite_1", "@cite_32", "@cite_6", "@cite_19", "@cite_40", "@cite_34", "@cite_10", "@cite_11" ], "mid": [ "1960247298", "2144261701", "1978999584", "2145706050", "2962919833", "2167140162", "2112953516", "2109004672", "2076865372" ], "abstract": [ "This paper addresses the coverage breach problem in wireless sensor networks with limited bandwidths. In wireless sensor networks, sensor nodes are powered by batteries. To make efficient use of battery energy is critical to sensor network lifetimes. When targets are redundantly covered by multiple sensors, especially in stochastically deployed sensor networks, it is possible to save battery energy by organizing sensors into mutually exclusive subsets and alternatively activating only one subset at any time. Active nodes are responsible for sensing, computing and communicating. While the coverage of each subset is an important metric for sensor organization, the size of each subset also plays an important role in sensor network performance because when active sensors periodically send data to base stations, contention for channel access must be considered. The number of available channels imposes a limit on the cardinality of each subset. Coverage breach happens when a subset of sensors cannot completely cover all the targets. To make efficient use of both energy and bandwidth with a minimum coverage breach is the goal of sensor network design. This paper presents the minimum breach problem using a mathematical model, studies the computational complexity of the problem, and provides two approximate heuristics. Effects of increasing the number of channels and increasing the number of sensors on sensor network coverage are studied through numerical simulations. Overall, the simulation results reveal that when the number of sensors increases, network lifetimes can be improved without loss of network coverage if there is no bandwidth constraint; with bandwidth constraints, network lifetimes may be improved further at the cost of coverage breach.", "Distributed processing through ad hoc and sensor networks is having a major impact on scale and applications of computing. The creation of new cyber-physical services based on wireless sensor devices relies heavily on how well communication protocols can be adapted and optimized to meet quality constraints under limited energy resources. The IEEE 802.15.4 medium access control protocol for wireless sensor networks can support energy efficient, reliable, and timely packet transmission by a parallel and distributed tuning of the medium access control parameters. Such a tuning is difficult, because simple and accurate models of the influence of these parameters on the probability of successful packet transmission, packet delay, and energy consumption are not available. Moreover, it is not clear how to adapt the parameters to the changes of the network and traffic regimes by algorithms that can run on resource-constrained devices. In this paper, a Markov chain is proposed to model these relations by simple expressions without giving up the accuracy. In contrast to previous work, the presence of limited number of retransmissions, acknowledgments, unsaturated traffic, packet size, and packet copying delay due to hardware limitations is accounted for. The model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while guaranteeing a given successful packet reception probability and delay constraints in the packet transmission. 
The algorithm does not require any modification of the IEEE 802.15.4 medium access control and can be easily implemented on network devices. The algorithm has been experimentally implemented and evaluated on a testbed with off-the-shelf wireless sensor devices. Experimental results show that the analysis is accurate, that the proposed algorithm satisfies reliability and delay constraints, and that the approach reduces the energy consumption of the network under both stationary and transient conditions. Specifically, even if the number of devices and traffic configuration change sharply, the proposed parallel and distributed algorithm allows the system to operate close to its optimal state by estimating the busy channel and channel access probabilities. Furthermore, results indicate that the protocol reacts promptly to errors in the estimation of the number of devices and in the traffic load that can appear due to device mobility. It is also shown that the effect of imperfect channel and carrier sensing on system performance heavily depends on the traffic load and limited range of the protocol parameters.", "Wireless sensor networks have attracted attention from a diverse set of researchers, due to the unique combination of distributed, resource and data processing constraints. However, until now, the lack of real sensor network deployments have resulted in ad-hoc assumptions on a wide range of issues including topology characteristics and data distribution. As deployments of sensor networks become more widespread [1, 2], many of these assumptions need to be revisited.This paper deals with the fundamental issue of spatio-temporal irregularity in sensor networks We make the case for the existence of such irregular spatio-temporal sampling, and show that it impacts many performance issues in sensor networks. For instance, data aggregation schemes provide inaccurate results, compression efficiency is dramatically reduced, data storage skews storage load among nodes and incurs significantly greater routing overhead. To mitigate the impact of irregularity, we outline a spectrum of solutions. For data aggregation and compression, we propose the use of spatial interpolation of data (first suggested by in [3] and temporal signal segmentation followed by alignment. To reduce the cost of data-centric storage and routing, we propose the use of virtualization, and boundary detection.", "In a sensor network, in practice, the communication among sensors is subject to: 1) errors that can cause failures of links among sensors at random times; 2) costs; and 3) constraints, such as power, data rate, or communication, since sensors and networks operate under scarce resources. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. We model the network as a Bernoulli random topology and establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail. In particular, a necessary and sufficient condition is for the algebraic connectivity of the mean graph topology to be strictly positive. 
With these results, we show that the topology design with random link failures, link communication costs, and a communication cost constraint is a constrained convex optimization problem that can be efficiently solved for large networks by semidefinite programming techniques. Simulations demonstrate that the optimal design improves significantly the convergence speed of the consensus algorithm and can achieve the performance of a non-random network at a fraction of the communication cost.", "In passive monitoring using sensor networks, low energy supplies drastically constrain sensors in terms of calculation and communication abilities. Designing processing algorithms at the sensor level that take into account these constraints is an important problem in this context. Here we study the estimation of correlation functions between sensors using compressed acquisition and one-bit-quantization. The estimation is achieved directly using compressed samples, without considering any reconstruction of the signals. We show that if the signals of interest are far from white noise, estimation of the correlation using @math compressed samples out of @math can be more advantageous than estimation of the correlation using @math consecutive samples. The analysis consists of studying the asymptotic performance of the estimators at a fixed compression rate. We provide the analysis when the compression is realized by a random projection matrix composed of independent and identically distributed entries. The framework includes widely used random projection matrices, such as Gaussian and Bernoulli matrices, and it also includes very sparse matrices. However, it does not include subsampling without replacement, for which a separate analysis is provided. When considering one-bit-quantization as well, the theoretical analysis is not tractable. However, empirical evidence allows the conclusion that in practical situations, compressed and quantized estimators behave sufficiently correctly to be useful in, for example, time-delay estimation and model estimation.", "We analyze various critical transmitting sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions. For a given property of the network, there is a critical threshold, corresponding to the minimum amount of the communication effort or power expenditure by individual nodes, above (respectively, below) which the property exists with high (respectively, a low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors, and their transmitting sensing ranges. More specifically, we consider the following problems: assume that n nodes, each capable of sensing events within a radius of r, are randomly and uniformly distributed in a 3-dimensional region R of volume V, how large must the sensing range R sub SENSE be to ensure a given degree of coverage of the region to monitor? For a given transmission range R sub TRANS , what is the minimum (respectively, maximum) degree of the network? What is then the typical hop diameter of the underlying network? 
Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks.", "Due to their low cost and small form factors, a large number of sensor nodes can be deployed in redundant fashion in dense sensor networks. The availability of redundant nodes increases network lifetime as well as network fault tolerance. It is, however, undesirable to keep all the sensor nodes active at all times for sensing and communication. An excessive number of active nodes lead to higher energy consumption and it places more demand on the limited network bandwidth. We present an efficient technique for the selection of active sensor nodes in dense sensor networks. The active node selection procedure is aimed at providing the highest possible coverage of the sensor field, i.e., the surveillance area. It also assures network connectivity for routing and information dissemination. We first show that the coverage-centric active nodes selection problem is NP-complete. We then present a distributed approach based on the concept of a connected dominating set (CDS). We prove that the set of active nodes selected by our approach provides full coverage and connectivity. We also describe an optimal coverage-centric centralized approach based on integer linear programming. We present simulation results obtained using an ns2 implementation of the proposed technique.", "Motivated by limited computational resources in sensor nodes, the impact of complexity constraints on the communication efficiency of sensor networks is studied. A single-parameter characterization of processing limitation of nodes in sensor networks is invoked. Specifically, the relaying nodes are assumed to \"donate\" only a small part of their total processor time to relay other nodes information. The amount of donated processor time is modelled by the node's ability to decode a channel code reliably at given rate R. Focusing on a four node network, with two relays, prior work for a complexity constrained single relay network is built upon. In the proposed coding scheme, the transmitter sends a broadcast code such that the relays decode only the \"coarse\" information, and assist the receiver in removing ambiguity only in that information. Via numerical examples, the impact of different power constraints in the system, ranging from per node power bound to network wide power constraint is explored. As the complexity bound R increases, the proposed scheme becomes identical to the recently proposed achievable rate by Gupta & Kumar (2003). Both discrete memoryless and Gaussian channels are considered.", "Wireless sensor networks have attracted a lot of attention recently. Such environments may consist of many inexpensive nodes, each capable of collecting, storing, and processing environmental information, and communicating with neighboring nodes through wireless links. For a sensor network to operate successfully, sensors must maintain both sensing coverage and network connectivity. This issue has been studied in [2003] and Zhang and Hou [2004a], both of which reach a similar conclusion that coverage can imply connectivity as long as sensors' communication ranges are no less than twice their sensing ranges. In this article, without relying on this strong assumption, we investigate the issue from a different angle and develop several necessary and sufficient conditions for ensuring coverage and connectivity of a sensor network. 
Hence, the results significantly generalize the results in [2003] and Zhang and Hou [2004a]. This work is also a significant extension of our earlier work [Huang and Tseng 2003; 2004], which addresses how to determine the level of coverage of a given sensor network but does not consider the network connectivity issue. Our work is the first work allowing an arbitrary relationship between sensing ranges and communication distances of sensor nodes. We develop decentralized solutions for determining, or even adjusting, the levels of coverage and connectivity of a given network. Adjusting levels of coverage and connectivity is necessary when sensors are overly deployed, and we approach this problem by putting sensors to sleep mode and tuning their transmission powers. This results in prolonged network lifetime." ] }
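For reference, the distributed source coding limit mentioned above (Slepian-Wolf) states that separately encoded correlated sources X1, X2 can be recovered losslessly if and only if R1 >= H(X1|X2), R2 >= H(X2|X1), and R1 + R2 >= H(X1, X2). The Python snippet below just evaluates these thresholds for a toy doubly symmetric binary source; the crossover value 0.1 is an arbitrary illustrative choice.

import math

def h2(p):                         # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

rho = 0.1                          # P(X2 != X1) with X1 ~ Bernoulli(1/2)
H_cond = h2(rho)                   # H(X1|X2) = H(X2|X1) by symmetry
H_joint = 1 + h2(rho)              # H(X1, X2) = H(X1) + H(X2|X1)
print(f"R1 >= {H_cond:.3f}, R2 >= {H_cond:.3f}, R1 + R2 >= {H_joint:.3f} bits")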
0901.2094
2107734287
The problem of estimating a continuous field using a sensor network is an active area of research. @cite_2 considers the relationship between transport capacity and the rate distortion function of continuous random processes. @cite_15 proves limits on the estimation of inhomogeneous random fields using sensors that collect noisy point samples. Other work on the problem of estimating a continuous random field includes @cite_12 , @cite_5 , @cite_21 , @cite_26 . @cite_33 considers the estimation of continuous parameters of a set of underlying random processes through a noisy communications channel. The results presented in this paper consider the detection of a discrete state of an environment. We do not consider extensions to environments with a continuous state.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_21", "@cite_2", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2962919833", "2111224874", "2167484620", "2109914635", "2119102657", "2953338282", "2146217114" ], "abstract": [ "In passive monitoring using sensor networks, low energy supplies drastically constrain sensors in terms of calculation and communication abilities. Designing processing algorithms at the sensor level that take into account these constraints is an important problem in this context. Here we study the estimation of correlation functions between sensors using compressed acquisition and one-bit-quantization. The estimation is achieved directly using compressed samples, without considering any reconstruction of the signals. We show that if the signals of interest are far from white noise, estimation of the correlation using @math compressed samples out of @math can be more advantageous than estimation of the correlation using @math consecutive samples. The analysis consists of studying the asymptotic performance of the estimators at a fixed compression rate. We provide the analysis when the compression is realized by a random projection matrix composed of independent and identically distributed entries. The framework includes widely used random projection matrices, such as Gaussian and Bernoulli matrices, and it also includes very sparse matrices. However, it does not include subsampling without replacement, for which a separate analysis is provided. When considering one-bit-quantization as well, the theoretical analysis is not tractable. However, empirical evidence allows the conclusion that in practical situations, compressed and quantized estimators behave sufficiently correctly to be useful in, for example, time-delay estimation and model estimation.", "We consider a problem of broadcast communication in sensor networks, in which samples of a random field are collected at each node, and the goal is for all nodes to obtain an estimate of the entire field within a prescribed distortion value. The main idea we explore in this paper is that of jointly compressing the data generated by different nodes as this information travels over multiple hops, to eliminate correlations in the representation of the sampled field. Our main contributions are: (a) we obtain, using simple network flow concepts, conditions on the rate distortion function of the random field, so as to guarantee that any node can obtain the measurements collected at every other node in the network, quantized to within any prescribed distortion value; and (b) we construct a large class of physically-motivated stochastic models for sensor data, for which we are able to prove that the joint rate distortion function of all the data generated by the whole network grows slower than the bounds found in (a). A truly novel aspect of our work is the tight coupling between routing and source coding, explicitly formulated in a simple and analytically tractable model - to the best of our knowledge, this connection had not been studied before.", "Sensing, processing and communication must be jointly optimized for efficient operation of resource-limited wireless sensor networks. We propose a novel source-channel matching approach for distributed field estimation that naturally integrates these basic operations and facilitates a unified analysis of the impact of key parameters (number of nodes, power, field complexity) on estimation accuracy. 
At the heart of our approach is a distributed source-channel communication architecture that matches the spatial scale of field coherence with the spatial scale of node synchronization for phase-coherent communication: the sensor field is uniformly partitioned into multiple cells and the nodes in each cell coherently communicate simple statistics of their measurements to the destination via a dedicated noisy multiple access channel (MAC). Essentially, the optimal field estimate in each cell is implicitly computed at the destination via the coherent spatial averaging inherent in the MAC, resulting in optimal power-distortion scaling with the number of nodes. In general, smoother fields demand lower per-node power but require node synchronization over larger scales for optimal estimation. In particular, optimal mean-square distortion scaling can be achieved with sub-linear power scaling. Our results also reveal a remarkable power-density tradeoff inherent in our approach: increasing the sensor density reduces the total power required to achieve a desired distortion. A direct consequence is that consistent field estimation is possible, in principle, even with vanishing total power in the limit of high sensor density.", "We consider the problem of obtaining a high quality estimates of band-limited sensor fields when sensor measurements are noisy and the nodes are irregularly deployed and subject to random motion. We consider the mean square error (MSE) of the estimate and we analytically derive the performance of several reconstruction estimation techniques based on linear filtering. For each technique, we obtain the mean value of the MSE, as well as its asymptotic expression in the case where the field bandwidth and the number of sensors grow to infinity, while their ratio is kept constant. Our results provide useful guidelines for the design of sensor networks when many system parameters have to be traded off.", "Sensor networks have emerged as a fundamentally new tool for monitoring spatial phenomena. This paper describes a theory and methodology for estimating inhomogeneous, two-dimensional fields using wireless sensor networks. Inhomogeneous fields are composed of two or more homogeneous (smoothly varying) regions separated by boundaries. The boundaries, which correspond to abrupt spatial changes in the field, are nonparametric one-dimensional curves. The sensors make noisy measurements of the field, and the goal is to obtain an accurate estimate of the field at some desired destination (typically remote from the sensor network). The presence of boundaries makes this problem especially challenging. There are two key questions: 1) Given n sensors, how accurately can the field be estimated? 2) How much energy will be consumed by the communications required to obtain an accurate estimate at the destination? Theoretical upper and lower bounds on the estimation error and energy consumption are given. A practical strategy for estimation and communication is presented. The strategy, based on a hierarchical data-handling and communication architecture, provides a near-optimal balance of accuracy and energy consumption.", "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. 
As demonstrated heuristically and numerically by KrzakalaEtAl , message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of non-zero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate @math exceeds the (upper) R 'enyi information dimension of the signal, @math . More precisely, for a sequence of signals of diverging dimension @math whose empirical distribution converges to @math , reconstruction is with high probability successful from @math measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension @math and @math non-zero entries, this implies reconstruction from @math measurements. For discrete' signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from @math measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal @math .", "The problems of sensor configuration and activation for the detection of correlated random fields using large sensor arrays are considered. Using results that characterize the large-array performance of sensor networks in this application, the detection capabilities of different sensor configurations are analyzed and compared. The dependence of the optimal choice of configuration on parameters such as sensor signal-to-noise ratio (SNR), field correlation, etc., is examined, yielding insights into the most effective choices for sensor selection and activation in various operating regimes." ] }
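As a purely illustrative rendering of the field-estimation setting surveyed above, the short Python sketch below reconstructs a smooth one-dimensional field from noisy point samples with a box-kernel local average and reports how the mean-square error behaves as the number of sensors grows. The field, noise level, and bandwidth are assumptions for illustration, not taken from any cited work.

import numpy as np

rng = np.random.default_rng(1)

def field(x):
    return np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)

def recon_mse(n_sensors, noise=0.3, bandwidth=0.05):
    xs = np.sort(rng.random(n_sensors))                       # sensor locations
    ys = field(xs) + noise * rng.standard_normal(n_sensors)   # noisy point samples
    grid = np.linspace(0, 1, 200)
    # Box-kernel local average (Nadaraya-Watson style) at each grid point.
    est = np.array([ys[np.abs(xs - g) < bandwidth].mean()
                    if np.any(np.abs(xs - g) < bandwidth) else 0.0
                    for g in grid])
    return float(np.mean((est - field(grid)) ** 2))

for n in (50, 200, 800):
    print(n, "sensors -> MSE", round(recon_mse(n), 4))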
0901.1703
2951868322
This paper considers a multi-cell multiple antenna system with precoding used at the base stations for downlink transmission. For precoding, channel state information (CSI) is essential at the base stations. A popular technique for obtaining this CSI in time division duplex (TDD) systems is uplink training by utilizing the reciprocity of the wireless medium. This paper mathematically characterizes the impact that uplink training has on the performance of such multi-cell multiple antenna systems. When non-orthogonal training sequences are used for uplink training, the paper shows that the precoding matrix used by the base station in one cell becomes corrupted by the channel between that base station and the users in other cells in an undesirable manner. This paper analyzes this fundamental problem of pilot contamination in multi-cell systems. Furthermore, it develops a new multi-cell MMSE-based precoding method that mitigates this problem. In addition to being a linear precoding method, this precoding method has a simple closed-form expression that results from an intuitive optimization problem formulation. Numerical results show significant performance gains compared to certain popular single-cell precoding methods.
Over the past decade, a variety of aspects of downlink and uplink transmission problems in a single cell setting have been studied. In information theoretic literature, these problems are studied as the broadcast channel (BC) and the multiple access channel (MAC) respectively. For Gaussian BC and general MAC, the problems have been studied for both single and multiple antenna cases. The sum capacity of the multi-antenna Gaussian BC has been shown to be achieved by dirty paper coding (DPC) in @cite_8 @cite_14 @cite_6 @cite_24 . It was shown in @cite_1 that DPC characterizes the full capacity region of the multi-antenna Gaussian BC. These results assume perfect CSI at the base station and the users. In addition, the DPC technique is computationally challenging to implement in practice. There has been significant research focus on reducing the computational complexity at the base station and the users. In this regard, different precoding schemes with low complexity have been proposed. This body of work @cite_0 @cite_25 @cite_16 @cite_30 @cite_29 demonstrates that sum rates close to sum capacity can be achieved with much lower computational complexity. However, these results assume perfect CSI at the base station and the users.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_8", "@cite_29", "@cite_1", "@cite_6", "@cite_24", "@cite_0", "@cite_16", "@cite_25" ], "mid": [ "2514483739", "2083962008", "2161410889", "2015301486", "2032760019", "2963217749", "2767375015", "1970767514", "2030546921", "2135096483" ], "abstract": [ "This paper investigates the compress-and-forward scheme for an uplink cloud radio access network (C-RAN) model, where multi-antenna base stations (BSs) are connected to a cloud-computing-based central processor (CP) via capacity-limited fronthaul links. The BSs compress the received signals with Wyner-Ziv coding and send the representation bits to the CP; the CP performs the decoding of all the users’ messages. Under this setup, this paper makes progress toward the optimal structure of the fronthaul compression and CP decoding strategies for the compress-and-forward scheme in the C-RAN. On the CP decoding strategy design, this paper shows that under a sum fronthaul capacity constraint, a generalized successive decoding strategy of the quantization and user message codewords that allows arbitrary interleaved order at the CP achieves the same rate region as the optimal joint decoding. Furthermore, it is shown that a practical strategy of successively decoding the quantization codewords first, then the user messages, achieves the same maximum sum rate as joint decoding under individual fronthaul constraints. On the joint optimization of user transmission and BS quantization strategies, this paper shows that if the input distributions are assumed to be Gaussian, then under joint decoding, the optimal quantization scheme for maximizing the achievable rate region is Gaussian. Moreover, Gaussian input and Gaussian quantization with joint decoding achieve to within a constant gap of the capacity region of the Gaussian multiple-input multiple-output (MIMO) uplink C-RAN model. Finally, this paper addresses the computational aspect of optimizing uplink MIMO C-RAN by showing that under fixed Gaussian input, the sum rate maximization problem over the Gaussian quantization noise covariance matrices can be formulated as convex optimization problems, thereby facilitating its efficient solution.", "In multiple-antenna broadcast channels, unlike point-to-point multiple-antenna channels, the multiuser capacity depends heavily on whether the transmitter knows the channel coefficients to each user. For instance, in a Gaussian broadcast channel with M transmit antennas and n single-antenna users, the sum rate capacity scales like Mloglogn for large n if perfect channel state information (CSI) is available at the transmitter, yet only logarithmically with M if it is not. In systems with large n, obtaining full CSI from all users may not be feasible. Since lack of CSI does not lead to multiuser gains, it is therefore of interest to investigate transmission schemes that employ only partial CSI. We propose a scheme that constructs M random beams and that transmits information to the users with the highest signal-to-noise-plus-interference ratios (SINRs), which can be made available to the transmitter with very little feedback. For fixed M and n increasing, the throughput of our scheme scales as MloglognN, where N is the number of receive antennas of each user. This is precisely the same scaling obtained with perfect CSI using dirty paper coding. We furthermore show that a linear increase in throughput with M can be obtained provided that M does not not grow faster than logn. 
We also study the fairness of our scheduling in a heterogeneous network and show that, when M is large enough, the system becomes interference dominated and the probability of transmitting to any user converges to 1 n, irrespective of its path loss. In fact, using M= spl alpha logn transmit antennas emerges as a desirable operating point, both in terms of providing linear scaling of the throughput with M as well as in guaranteeing fairness.", "We consider a multiuser multiple-input multiple- output (MIMO) Gaussian broadcast channel (BC), where the transmitter and receivers have multiple antennas. Since the MIMO BC is in general a nondegraded BC, its capacity region remains an unsolved problem. We establish a duality between what is termed the \"dirty paper\" achievable region (the Caire-Shamai (see Proc. IEEE Int. Symp. Information Theory, Washington, DC, June 2001, p.322) achievable region) for the MIMO BC and the capacity region of the MIMO multiple-access channel (MAC), which is easy to compute. Using this duality, we greatly reduce the computational complexity required for obtaining the dirty paper achievable region for the MIMO BC. We also show that the dirty paper achievable region achieves the sum-rate capacity of the MIMO BC by establishing that the maximum sum rate of this region equals an upper bound on the sum rate of the MIMO BC.", "We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional “effective” channel matrix. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is approached in the large number of antennas limit. For this case, we use Szego's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) number of antennas and users.", "We consider the downlink transmission of a wireless communication system where M antennas transmit independent information to a subset of K users, each equipped with a single antenna. 
The Shannon capacity of this MIMO broadcast channel (MIMO-BC) can be achieved using a non-linear preceding technique known as dirty paper coding (DPC) which is difficult to implement in practice. Motivated to study simpler transmission techniques, we focus on a linear precoding technique based on the zero-forcing (ZF) algorithm. In contrast to the typical sum power constraint (SPC), we consider a per-antenna power constraint (PAPC) motivated both by current antenna array designs where each antenna is powered by a separate amplifier and by future wireless networks where spatially separated antennas transmit cooperatively to users. We show that the problem of power allocation for maximizing the weighted sum rate under ZF with PAPC is a constrained convex optimization problem that can be solved using conventional numerical optimization techniques. For the special case of two users, we find an analytic solution based on waterfilling techniques. For the case where the number of users increases without bound, we show that ZF with PAPC is asymptotically optimal in the sense that the ratio of the expected sum-rate capacities between ZF with PAPC and DPC with SPC approaches one. We also show how the results can be generalized for multiple frequency bands and for a hybrid power constraint. Finally, we provide numerical results that show ZF with PAPC achieves a significant fraction of the optimum DPC sum-rate capacity in practical cases where K is bounded", "We study a distributed antenna system where L antenna terminals (ATs) are connected to a central processor (CP) via digital error-free links of finite capacity R0, and serve K user terminals (UTs). This model has been widely investigated both for the uplink (UTs to CP) and for the downlink (CP to UTs), which are instances of the general multiple-access relay and broadcast relay networks. We contribute to the subject in the following ways: 1) For the uplink, we consider the recently proposed “compute and forward” (CoF) approach and examine the corresponding system optimization at finite SNR. 2) For the downlink, we propose a novel precoding scheme nicknamed “reverse compute and forward” (RCoF). 3) In both cases, we present low-complexity versions of CoF and RCoF based on standard scalar quantization at the receivers, that lead to discrete-input discrete-output symmetric memoryless channel models for which near-optimal performance can be achieved by standard single-user linear coding. 4) We provide extensive numerical results and finite SNR comparison with other “state of the art” information theoretic techniques, in scenarios including fading and shadowing. The proposed uplink and downlink system optimization focuses specifically on the ATs and UTs selection problem. In both cases, for a given set of transmitters, the goal consists of selecting a subset of the receivers such that the corresponding system matrix has full rank and the sum rate is maximized. We present low-complexity ATs and UTs selection schemes and demonstrate through Monte Carlo simulation that the proposed schemes essentially eliminate the problem of rank deficiency of the system matrix and greatly mitigate the noninteger penalty affecting CoF RCoF at high SNR. Comparison with other state-of-the art information theoretic schemes, show competitive performance of the proposed approaches with significantly lower complexity.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. 
Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low SNR regime, unlike the existing schemes based on zero forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several base-line schemes including, the joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.", "This paper studies the uplink of a cloud radio access network (C-RAN) where the cell sites are connected to a cloud-computing-based central processor (CP) with noiseless backhaul links with finite capacities. We employ a simple compress-and-forward scheme in which the base stations (BSs) quantize the received signals and send the quantized signals to the CP using either distributed Wyner-Ziv coding or single-user compression. The CP first decodes the quantization codewords and then decodes the user messages as if the remote users and the cloud center form a virtual multiple-access channel (VMAC). This paper formulates the problem of optimizing the quantization noise levels for weighted sum rate maximization under a sum backhaul capacity constraint. We propose an alternating convex optimization approach to find a local optimum solution to the problem efficiently, and more importantly, to establish that setting the quantization noise levels to be proportional to the background noise levels is near optimal for sum-rate maximization when the signal-to-quantization-noise-ratio (SQNR) is high. In addition, with Wyner-Ziv coding, the approximate quantization noise level is shown to achieve the sum-capacity of the uplink C-RAN model to within a constant gap. With single-user compression, a similar constant-gap result is obtained under a diagonal dominant channel condition. These results lead to an efficient algorithm for allocating the backhaul capacities in C-RAN. The performance of the proposed scheme is evaluated for practical multicell and heterogeneous networks. It is shown that multicell processing with optimized quantization noise levels across the BSs can significantly improve the performance of wireless cellular networks.", "The Gaussian multiple-input multiple-output (MIMO) broadcast channel (BC) is considered. The dirty-paper coding (DPC) rate region is shown to coincide with the capacity region. To that end, a new notion of an enhanced broadcast channel is introduced and is used jointly with the entropy power inequality, to show that a superposition of Gaussian codes is optimal for the degraded vector broadcast channel and that DPC is optimal for the nondegraded case. 
Furthermore, the capacity region is characterized under a wide range of input constraints, accounting, as special cases, for the total power and the per-antenna power constraints", "For a multiple-input single-output (MISO) down- link channel with M transmit antennas, it has been recently proved that zero-forcing beamforming (ZFBF) to a subset of (at most) M \"semi-orthogonal\" users is optimal in terms of the sum-rate, asymptotically with the number of users. However, determining the subset of users for transmission is a complex optimization problem. Adopting the ZFBF scheme in a cooper- ative multi-cell scenario renders the selection process even more difficult since more users are involved. In this paper, we consider a multi-cell cooperative ZFBF scheme combined with a simple sub-optimal users selection procedure for the Wyner downlink channel setup. According to this sub-optimal procedure, the user with the \"best\" local channel is selected for transmission in each cell. It is shown that under an overall power constraint, a distributed multi-cell ZFBF to this sub-optimal subset of users achieves the same sum-rate growth rate as an optimal scheme deploying joint multi-cell dirty-paper coding (DPC) techniques, asymptotically with the number of users per cell. Moreover, the overall power constraint is shown to ensure in probability, equal per-cell power constraints when the number of users per-cell increases." ] }
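To ground the low-complexity linear precoding discussion above, here is a minimal zero-forcing (ZF) precoding sketch in Python for a single-cell downlink with perfect CSI. This is only the generic ZF construction, not the multi-cell MMSE precoder developed in the paper; the antenna/user dimensions and the total power normalization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
M, K = 4, 4                                      # transmit antennas, single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # right pseudo-inverse of the channel
W /= np.linalg.norm(W)                           # scale to meet a unit total power budget

# With perfect CSI the effective channel H @ W is diagonal: no inter-user interference.
print(np.round(np.abs(H @ W), 3))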
0901.1782
1494256758
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
Our study is related to the problem of optimal cache placement in wireless networks. Several works have addressed this issue by exploiting its similarity to the facility location and the @math -median problems. Both problems are NP-hard, and a number of constant-factor approximation algorithms have been proposed for each of them @cite_5 @cite_12 @cite_13 ; these algorithms, however, are not amenable to an efficient distributed implementation.
{ "cite_N": [ "@cite_5", "@cite_13", "@cite_12" ], "mid": [ "2098653858", "2550555189", "1481341908" ], "abstract": [ "In this paper, we address the problem of efficient cache placement in multi-hop wireless networks. We consider a network comprising a server with an interface to the wired network, and other nodes requiring access to the information stored at the server. In order to reduce access latency in such a communication environment, an effective strategy is caching the server information at some of the nodes distributed across the network. Caching, however, can imply a considerable overhead cost; for instance, disseminating information incurs additional energy as well as bandwidth burden. Since wireless systems are plagued by scarcity of available energy and bandwidth, we need to design caching strategies that optimally trade-off between overhead cost and access latency. We pose our problem as an integer linear program. We show that this problem is the same as a special case of the connected facility location problem, which is known to be NP-hard. We devise a polynomial time algorithm which provides a suboptimal solution. The proposed algorithm applies to any arbitrary network topology and can be implemented in a distributed and asynchronous manner. In the case of a tree topology, our algorithm gives the optimal solution. In the case of an arbitrary topology, it finds a feasible solution with an objective function value within a factor of 6 of the optimal value. This performance is very close to the best approximate solution known today, which is obtained in a centralized manner. We compare the performance of our algorithm against three candidate cache placement schemes, and show via extensive simulation that our algorithm consistently outperforms these alternative schemes.", "We consider the problem of delivering content cached in a wireless network of n nodes randomly located on a square of area n. The network performance is described by the 2n × n-dimensional caching capacity region of the wireless network. We provide an inner bound on this caching capacity region, and, in the high path-loss regime, a matching (in the scaling sense) outer bound. For large path-loss exponent, this provides an information-theoretic scaling characterization of the entire caching capacity region. The proposed communication scheme achieving the inner bound shows that the problems of cache selection and channel coding can be solved separately without loss of order-optimality. On the other hand, our results show that the common architecture of nearest-neighbor cache selection can be arbitrarily bad, implying that cache selection and load balancing need to be performed jointly.", "In this work we consider the problem of an optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes. Specifically, for the typical user requesting some particular content and whose popularity follows a given law (e.g. Zipf), we calculate the probability of finding the content cached in one of the base stations. Wireless coverage follows the usual signal-to-interference-and noise ratio (SINR) model, or some variants of it. We formulate and solve the problem of an optimal randomized content placement policy, to maximize the user's hit probability. The result dictates that it is not always optimal to follow the standard policy “cache the most popular content, everywhere”. 
In fact, our numerical results regarding three different coverage scenarios, show that the optimal policy significantly increases the chances of hit under high-coverage regime, i.e., when the probabilities of coverage by more than just one station are high enough." ] }
0901.1782
1494256758
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
Distributed algorithms for the allocation of information replicas have been proposed, among others, in @cite_19 @cite_6 @cite_4 @cite_9 . These solutions typically involve significant communication overhead, especially when applied to mobile environments, and focus on minimizing the information access cost or the query delay. In our work, instead, we consider a cooperative environment and aim at a uniform distribution of the information copies, while evenly distributing the load among the nodes acting as providers.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_4", "@cite_6" ], "mid": [ "2963531496", "1972442117", "1920382817", "2289230595" ], "abstract": [ "This paper studies the resource allocation algorithm design for secure information and renewable green energy transfer to mobile receivers in distributed antenna communication systems. In particular, distributed remote radio heads (RRHs antennas) are connected to a central processor (CP) via capacity-limited backhaul links to facilitate joint transmission. The RRHs and the CP are equipped with renewable energy harvesters and share their energies via a lossy micropower grid for improving the efficiency in conveying information and green energy to mobile receivers via radio frequency signals. The considered resource allocation algorithm design is formulated as a mixed nonconvex and combinatorial optimization problem taking into account the limited backhaul capacity and the quality-of-service requirements for simultaneous wireless information and power transfer (SWIPT). We aim at minimizing the total network transmit power when only imperfect channel state information of the wireless energy harvesting receivers, which have to be powered by the wireless network, is available at the CP. In light of the intractability of the problem, we reformulate it as an optimization problem with binary selection, which facilitates the design of an iterative resource allocation algorithm to solve the problem optimally using the generalized Bender's decomposition (GBD). Furthermore, a suboptimal algorithm is proposed to strike a balance between computational complexity and system performance. Simulation results illustrate that the proposed GBD-based algorithm obtains the global optimal solution and the suboptimal algorithm achieves a close-to-optimal performance. In addition, the distributed antenna network for SWIPT with renewable energy sharing is shown to require a lower transmit power compared with a traditional system with multiple colocated antennas.", "We deal with the competitive analysis of algorithms for managing data in a distributed environment. We deal with the file allocation problem, where copies of a file may be be stored in the local storage of some subsets of processors. Copies may be replicated and discarded over time so as to optimize communication costs, but multiple copies must be kept consistent and at least one copy must be stored somewhere in the network at all times. We deal with competitive algorithms for minimizing communication costs, over arbitrary sequences of reads and writes, and arbitrary network topologies. We define the constrained file allocation problem to be the solution of many individual file allocation problems simultaneously, subject to the constraints of local memory size. We give competitive algorithms for this problem on the uniform network topology. We then introduce distributed competitive algorithms for on-line data tracking (a generalization of mobile user tracking) to transform our competitive data management algorithms into distributed algorithms themselves.", "Many resource allocation problems can be formulated as an optimization problem whose constraints contain sensitive information about participating users. This paper concerns a class of resource allocation problems whose objective function depends on the aggregate allocation (i.e., the sum of individual allocations); in particular, we investigate distributed algorithmic solutions that preserve the privacy of participating users. 
Without privacy considerations, existing distributed algorithms normally consist of a central entity computing and broadcasting certain public coordination signals to participating users. However, the coordination signals often depend on user information, so that an adversary who has access to the coordination signals can potentially decode information on individual users and put user privacy at risk. We present a distributed optimization algorithm that preserves differential privacy, which is a strong notion that guarantees user privacy regardless of any auxiliary information an adversary may have. The algorithm achieves privacy by perturbing the public signals with additive noise, whose magnitude is determined by the sensitivity of the projection operation onto user-specified constraints. By viewing the differentially private algorithm as an implementation of stochastic gradient descent, we are able to derive a bound for the suboptimality of the algorithm. We illustrate the implementation of our algorithm via a case study of electric vehicle charging. Specifically, we derive the sensitivity and present numerical simulations for the algorithm. Through numerical simulations, we are able to investigate various aspects of the algorithm when being used in practice, including the choice of step size, number of iterations, and the trade-off between privacy level and suboptimality.", "The next generation of cellular networks deploying wireless distributed femtocaching infrastructure proposed by is studied. By taking advantage of multihop communications in each cell, the number of required femtocaching helpers is significantly reduced. This reduction is achieved by using underutilized storage and communication capabilities in user terminals, which results in reducing the deployment costs of distributed femtocaches. A multihop index coding technique is proposed to code the cached contents in helpers to achieve order-optimal capacity gains. As an example, we consider a wireless cellular system in which contents have a popularity distribution and demonstrate that our approach can replace many unicast communications with multicast communication. We will prove that simple heuristic linear index code algorithms based on graph coloring can achieve order-optimal capacity under Zipfian content popularity distribution." ] }
0901.1782
1494256758
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
Relevant to our study is also the work in @cite_14 , which computes the (near) optimal number of replicas of video clips in wireless networks, based on the bandwidth required for clip display and on their access statistics. However, the strategy proposed in @cite_14 requires a centralized implementation and applies only to string or grid topologies. In the context of sensor networks, the study in @cite_11 analytically derives the minimum number of sensors that ensures full coverage of an area of interest, under the assumption of a uniform sensor deployment.
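The cited abstract summarizes the replication rule of @cite_14 as proportional to the square root of the product of a clip's display bandwidth and its access frequency. The display below is only a restatement of that rule: the symbols n_i, b_i, f_i and the normalized form are ours, added for illustration, and are not claimed to match the exact formula in @cite_14 .

```latex
% Square-root replication rule attributed to @cite_14 (symbols n_i, b_i, f_i are ours):
% the number of replicas n_i of clip i grows with the square root of the product of
% its display bandwidth b_i and its access frequency f_i.
\[
  n_i \;\propto\; \sqrt{b_i \, f_i},
  \qquad\text{e.g.}\qquad
  n_i \;=\; N \,\frac{\sqrt{b_i f_i}}{\sum_j \sqrt{b_j f_j}}
  \quad\text{for a total storage budget of } N \text{ replicas.}
\]
```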
{ "cite_N": [ "@cite_14", "@cite_11" ], "mid": [ "193898447", "1960247298" ], "abstract": [ "This study investigates replication of data in a novel streaming architecture consisting of ad-hoc networks of wireless devices. One application of these devices is home-to-home (H2O) entertainment systems where a device collaborates with others to provide each household with on-demand access to a large selection of audio and video clips. These devices are configured with a substantial amount of storage and may cache several clips for future use. A contribution of this study is a technique to compute the number of replicas for a clip based on the square-root of the product of bandwidth required to display clips and their frequency of access , i.e., where . We provide a proof to show this strategy is near optimal when the objective is to maximize the number of simultaneous displays in the system with string and grid (both symmetric and asymmetric) topologies. We say “near optimal” because values of less than 0.5 may be more optimum. In addition, we use analytical and simulation studies to demonstrate its superiority when compared with other alternatives. A second contribution is an analytical model to estimate the theoretical upper bound on the number of simultaneous displays supported by an arbitrary grid topology of H2O devices. This analytical model is useful during capacity planning because it estimates the capabilities of a H2O configuration by considering: the size of an underlying repository, the number of nodes in a H2O cloud, the representative grid topology for this cloud, and the expected available network bandwidth and storage capacity of each device. It shows that one may control the ratio of repository size to the storage capacity of participating nodes in order to enhance system performance. We validate this analytical model with a simulation study and quantify its tradeoffs.", "This paper addresses the coverage breach problem in wireless sensor networks with limited bandwidths. In wireless sensor networks, sensor nodes are powered by batteries. To make efficient use of battery energy is critical to sensor network lifetimes. When targets are redundantly covered by multiple sensors, especially in stochastically deployed sensor networks, it is possible to save battery energy by organizing sensors into mutually exclusive subsets and alternatively activating only one subset at any time. Active nodes are responsible for sensing, computing and communicating. While the coverage of each subset is an important metric for sensor organization, the size of each subset also plays an important role in sensor network performance because when active sensors periodically send data to base stations, contention for channel access must be considered. The number of available channels imposes a limit on the cardinality of each subset. Coverage breach happens when a subset of sensors cannot completely cover all the targets. To make efficient use of both energy and bandwidth with a minimum coverage breach is the goal of sensor network design. This paper presents the minimum breach problem using a mathematical model, studies the computational complexity of the problem, and provides two approximate heuristics. Effects of increasing the number of channels and increasing the number of sensors on sensor network coverage are studied through numerical simulations. 
Overall, the simulation results reveal that when the number of sensors increases, network lifetimes can be improved without loss of network coverage if there is no bandwidth constraint; with bandwidth constraints, network lifetimes may be improved further at the cost of coverage breach." ] }
0901.1782
1494256758
We investigate the problem of spreading information contents in a wireless ad hoc network with mechanisms embracing the peer-to-peer paradigm. In our vision, information dissemination should satisfy the following requirements: (i) it conforms to a predefined distribution and (ii) it is evenly and fairly carried by all nodes in their turn. In this paper, we observe the dissemination effects when the information moves across nodes according to two well-known mobility models, namely random walk and random direction. Our approach is fully distributed and comes at a very low cost in terms of protocol overhead; in addition, simulation results show that the proposed solution can achieve the aforementioned goals under different network scenarios, provided that a sufficient number of information replicas are injected into the network. This observation calls for a further step: in the realistic case where the user content demand varies over time, we need a content replication drop strategy to adapt the number of information replicas to the changes in the information query rate. We therefore devise a distributed, lightweight scheme that performs efficiently in a variety of scenarios.
Again in the context of sensor networks, approaches based on active queries following a trajectory through the network, or on agents propagating information about local events, have been proposed in @cite_15 and @cite_3 , respectively. Note that both of these works focus on forwarding such messages through the network, whereas our goal is to make the desired information available by letting it move through the nodes' caches.
{ "cite_N": [ "@cite_15", "@cite_3" ], "mid": [ "2060660772", "2047977378" ], "abstract": [ "Flooding based querying and broadcasting schemes have low hop-delays of Theta(1 R(n)) to reach any node that is a unit distance away, where R(n) is the transmission range of any sensor node. However, in sensor networks with large radio ranges, flooding based broadcasting schemes cause many redundant transmissions leading to a broadcast storm problem. In this paper, we study the role of geographic information and state information (i.e., memory of previous messages or transmissions) in reducing the redundant transmissions in the network. We consider three broadcasting schemes with varying levels of local information where nodes have: (i) no geographic or state information, (ii) coarse geographic information about the origin of the broadcast, and (Hi) no geographic information, but remember previously received messages. For each of these network models, we demonstrate localized forwarding algorithms for broadcast (based on geography or state information) that achieve significant reductions in the transmission overheads while maintaining hop-delays comparable to flooding based schemes. We also consider the related problem of broadcasting to a set of \"spatially uniform\" points in the network (lattice points) in the regime where all nodes have only a local sense of direction and demonstrate an efficient \"sparse broadcast\" scheme based on a branching random walk that has a low number of packet transmissions. Thus, our results show that even with very little local information, it is possible to make broadcast schemes significantly more efficient.", "While sensor networks are going to be deployed in diverse application specific contexts, one unifying view is to treat them essentially as distributed databases. The simplest mechanism to obtain information from this kind of a database is to flood queries for named data within the network and obtain the relevant responses from sources. However, if the queries are (a) complex, (b) one-shot, and (c) for replicated data, this simple approach can be highly inefficient. In the context of energy-starved sensor networks, alternative strategies need to be examined for such queries. We propose a novel and efficient mechanism for obtaining information in sensor networks which we refer to as ACtive QUery forwarding In sensoR nEtworks (ACQUIRE). The basic principle behind ACQUIRE is to consider the query as an active entity that is forwarded through the network (either randomly or in some directed manner) in search of the solution. ACQUIRE also incorporates a look-ahead parameter d in the following manner: intermediate nodes that handle the active query use information from all nodes within d hops in order to partially resolve the query. When the active query is fully resolved, a completed response is sent directly back to the querying node. We take a mathematical modelling approach in this paper to calculate the energy costs associated with ACQUIRE. The models permit us to characterize analytically the impact of critical parameters, and compare the performance of ACQUIRE with respect to other schemes such as flooding-based querying (FBQ) and expanding ring search (ERS), in terms of energy usage, response latency and storage requirements. 
We show that with optimal parameter settings, depending on the update frequency, ACQUIRE obtains order of magnitude reduction over FBQ and potentially over 60–75 reduction over ERS (in highly dynamic environments and high query rates) in consumed energy. We show that these energy savings are provided in trade for increased response latency. The mathematical analysis is validated through extensive simulations. � 2003 Elsevier B.V. All rights reserved." ] }
0901.1853
2951410353
In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min 1-H(p),(1-4p)^+ . Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.
To the best of our knowledge, communication in the presence of a causal adversary has not been explicitly addressed in the literature (other than our prior work on causal adversaries over large- @math channels). Nevertheless, we note that the model of causal channels, being a natural one, has been ``on the table'' for several decades, and the analysis of the online causal channel model appears as an open question in the book of Csiszár and Körner @cite_3 (in the section addressing Arbitrarily Varying Channels @cite_11 ). Several variants of causal adversaries have been addressed in the past, for instance @cite_11 @cite_10 @cite_16 @cite_4 @cite_6 -- however, the models considered therein differ significantly from ours.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_6", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2027174613", "2799031185", "2145385126", "1970816336", "1960178078", "2026910567" ], "abstract": [ "Channels with adversarial errors have been widely considered in recent years. In this paper we propose a new type of adversarial channel that is defined by two parameters ρr and ρw, specifying the read and write power of the adversary: for a codeword of length n, adversary can read ρrn components and add an error vector of weight up to ρwn to the codeword. We give our motivations, define performance criteria for codes that provide reliable communication over these channels, and describe two constructions, one deterministic and one probabilistic, for these codes. We discuss our results and outline our direction for future research.", "In recent years, defending adversarial perturbations to natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field in the conjunction of deep learning and security. In particular, MagNet consisting of an adversary detector and a data reformer is by far one of the strongest defenses in the black-box oblivious attack setting, where the attacker aims to craft transferable adversarial examples from an undefended DNN model to bypass an unknown defense module deployed on the same DNN model. Under this setting, MagNet can successfully defend a variety of attacks in DNNs, including the high-confidence adversarial examples generated by the Carlini and Wagner's attack based on the @math distortion metric. However, in this paper, under the same attack setting we show that adversarial examples crafted based on the @math distortion metric can easily bypass MagNet and mislead the target DNN image classifiers on MNIST and CIFAR-10. We also provide explanations on why the considered approach can yield adversarial examples with superior attack performance and conduct extensive experiments on variants of MagNet to verify its lack of robustness to @math distortion based attacks. Notably, our results substantially weaken the assumption of effective threat models on MagNet that require knowing the deployed defense technique when attacking DNNs (i.e., the gray-box attack setting).", "Consider two parties who wish to communicate in order to execute some interactive protocol π. However, the communication channel between them is noisy: An adversary sees everything that is transmitted over the channel and can change a constant fraction of the bits arbitrarily, thus interrupting the execution of π (which was designed for an error-free channel). If π only contains a single long message, then a good error correcting code would overcome the noise with only a constant overhead in communication. However, this solution is not applicable to interactive protocols consisting of many short messages. Schulman [1992, 1993] introduced the notion of interactive coding: A simulator that, given any protocol π, is able to simulate it (i.e., produce its intended transcript) even in the presence of constant rate adversarial channel errors, and with only constant (multiplicative) communication overhead. However, the running time of Schulman's simulator, and of all simulators that followed, has been exponential (or subexponential) in the communication complexity of π (which we denote by N). 
In this work, we present three efficient simulators, all of which are randomized and have a certain failure probability (over the choice of coins). The first runs in time poly(N), has failure probability roughly 2-N, and is resilient to 1 32-fraction of adversarial error. The second runs in time O(N log N), has failure probability roughly 2-N, and is resilient to some constant fraction of adversarial error. The third runs in time O(N), has failure probability 1 poly(N), and is resilient to some constant fraction of adversarial error. (Computational complexity is measured in the RAM model.) The first two simulators can be made deterministic if they are a priori given a random string (which may be known to the adversary ahead of time). In particular, the simulators can be made to be nonuniform and deterministic (with equivalent performance).", "We generalize the Gel'fand-Pinsker model to encompass the setup of a memoryless multiple-access channel (MAC). According to this setup, only one of the encoders knows the state of the channel (noncausally), which is also unknown to the receiver. Two independent messages are transmitted: a common message and a message transmitted by the informed encoder. We find explicit characterizations of the capacity region with both noncausal and causal state information. Further, we study the noise-free binary case, and we also apply the general formula to the Gaussian case with noncausal channel state information, under an individual power constraint as well as a sum power constraint. In this case, the capacity region is achievable by a generalized writing-on-dirty-paper scheme.", "We study oblivious deterministic gossip algorithms for multi-channel radio networks with a malicious adversary. In a multi-channel network, each of the n processes in the system must choose, in each round, one of the c channels of the system on which to participate. Assuming the adversary can disrupt one channel per round, preventing communication on that channel, we establish a tight bound of max (Θ-((1-e)n c-1 + log cn), Θ (n(1-e) ec2)) on the number of rounds needed to solve the e-gossip problem, a parameterized generalization of the all-to-all gossip problem that requires (1-e)n of the \"rumors\" to be successfully disseminated. Underlying our lower bound proof lies an interesting connection between e-gossip and extremal graph theory. Specifically, we make use of Turan's theorem, a seminal result in extremal combinatorics, to reason about an adversary's optimal strategy for disrupting an algorithm of a given duration. We then show how to generalize our upper bound to cope with an adversary that can simultaneously disrupt t < c channels. Our generalization makes use of selectors: a combinatorial tool that guarantees that any subset of processes will be \"selected\" by some set in the selector. We prove this generalized algorithm optimal if a maximum number of values is to be gossiped. We conclude by extending our algorithm to tolerate traditional Byzantine corruption faults.", "In this paper, we address the problem of characterizing the instances of the multiterminal source model of Csiszar and Narayan in which communication from all terminals is needed for establishing a secret key of maximum rate. We give an information-theoretic sufficient condition for identifying such instances. We believe that our sufficient condition is in fact an exact characterization, but we are only able to prove this in the case of the three-terminal source model." ] }
0901.1853
2951410353
In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x=(x_1,...,x_n) bit-by-bit over a communication channel. The adversarial jammer can view the transmitted bits x_i one at a time, and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. Namely, for each bit x_i the jammer's decision on whether to corrupt it or not (and on how to change it) must depend only on x_j for j <= i. This is in contrast to the "classical" adversarial jammer which may base its decisions on its complete knowledge of x. We present a non-trivial upper bound on the amount of information that can be communicated. We show that the achievable rate can be asymptotically no greater than min 1-H(p),(1-4p)^+ . Here H(.) is the binary entropy function, and (1-4p)^+ equals 1-4p for p < 0.25, and 0 otherwise.
We note that under a very weak notion of capacity, in which one only requires the success probability to be bounded away from zero (instead of approaching @math ), the capacity of the omniscient channel, and thus of the binary causal-adversary channel, approaches @math . This follows from the fact that for @math sufficiently large and @math there exist @math codes that are @math list decodable with @math @cite_9 . Communicating using an @math list decodable code allows Bob to decode a list of size @math that includes the message transmitted by Alice. Choosing a message uniformly at random from this list, Bob decodes correctly with probability at least @math .
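For concreteness, the display below sketches the probability argument just described, with our own symbols (n, p, L, and the list S) standing in for the elided @math placeholders; it restates the standard list-decoding argument and makes no claim about the exact parameters used in @cite_9 .

```latex
% Assume a length-n code C that is (p, L) list decodable: every Hamming ball of
% radius pn contains at most L codewords of C. After the causal adversary flips at
% most a p-fraction of the transmitted bits, Bob's list decoder returns a set S of
% at most L candidate messages guaranteed to contain Alice's message, so picking
% one uniformly at random from S succeeds with probability
\[
  \Pr[\text{Bob decodes correctly}] \;\ge\; \frac{1}{|S|} \;\ge\; \frac{1}{L},
\]
% which is bounded away from zero (but does not approach 1), matching the weak
% notion of capacity discussed above.
```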
{ "cite_N": [ "@cite_9" ], "mid": [ "2211831180" ], "abstract": [ "Suppose that Alice wishes to send messages to Bob through a communication channel C1, but her transmissions also reach an eavesdropper Eve through another channel C2. This is the wiretap channel model introduced by Wyner in 1975. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of the mutual information between the message and Eve's observations. Wyner showed that the situation is characterized by a single constant Cs, called the secrecy capacity, which has the following meaning: for all e >; 0, there exist coding schemes of rate R ≥ Cs-e that asymptotically achieve the reliability and security objectives. However, his proof of this result is based upon a random-coding argument. To date, despite consider able research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. In this paper, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C1 and C2 are symmetric and binary-input, and C2 is degraded with respect to C1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will also be satisfied unless the main channel C1 is noiseless, although we believe it can be always satisfied in practice." ] }
0901.2730
2951327449
In this paper, we present a novel and general framework called Maximum Entropy Discrimination Markov Networks (MaxEnDNet), which integrates the max-margin structured learning and Bayesian-style estimation and combines and extends their merits. Major innovations of this model include: 1) It generalizes the extant Markov network prediction rule based on a point estimator of weights to a Bayesian-style estimator that integrates over a learned distribution of the weights. 2) It extends the conventional max-entropy discrimination learning of classification rule to a new structural max-entropy discrimination paradigm of learning the distribution of Markov networks. 3) It subsumes the well-known and powerful Maximum Margin Markov network (M @math N) as a special case, and leads to a model similar to an @math -regularized M @math N that is simultaneously primal and dual sparse, or other types of Markov network by plugging in different prior distributions of the weights. 4) It offers a simple inference algorithm that combines existing variational inference and convex-optimization based M @math N solvers as subroutines. 5) It offers a PAC-Bayesian style generalization bound. This work represents the first successful attempt to combine Bayesian-style learning (based on generative models) with structured maximum margin learning (based on a discriminative model), and outperforms a wide array of competing methods for structured input output learning on both synthetic and real data sets.
Although the parameter distribution @math in Theorem has a form similar to that of Bayesian Conditional Random Fields (BCRFs), MaxEnDNet is fundamentally different from BCRFs, as we have stated. The authors of @cite_2 present an interesting confidence-weighted linear classification method, which automatically estimates the mean and variance of the model parameters in online learning. Their procedure is similar to (but distinct from) our variational Bayesian method for the Laplace MaxEnDNet.
{ "cite_N": [ "@cite_2" ], "mid": [ "2098387242" ], "abstract": [ "Graphical models, such as Bayesian networks and Markov random fields (MRFs), represent statistical dependencies of variables by a graph. The max-product \"belief propagation\" algorithm is a local-message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable values of the unobserved variables given the observed ones. Good empirical performance has been obtained by running the max-product algorithm (or the equivalent min-sum algorithm) on graphs with loops, for applications including the decoding of \"turbo\" codes. Except for two simple graphs (cycle codes and single-loop graphs) there has been little theoretical understanding of the max-product algorithm on graphs with loops. Here we prove a result on the fixed points of max-product on a graph with arbitrary topology and with arbitrary probability distributions (discrete- or continuous-valued nodes). We show that the assignment based on a fixed point is a \"neighborhood maximum\" of the posterior probability: the posterior probability of the max-product assignment is guaranteed to be greater than all other assignments in a particular large region around that assignment. The region includes all assignments that differ from the max-product assignment in any subset of nodes that form no more than a single loop in the graph. In some graphs, this neighborhood is exponentially large. We illustrate the analysis with examples." ] }
0901.0339
1946152203
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
In general, semantic access to relational databases is not a new concept. Some of the work on this topic is limited to semantic access to, or semantic interpretation of, relational data in terms of Description Logic-based ontologies or RDF (see, e.g., @cite_9 @cite_27 @cite_28 ), or of non-logical semantic schemas (see @cite_11 ). There is also a large number of projects and publications on the use of RDBs for storing and querying large RDF and OWL datasets: see, e.g., @cite_19 @cite_0 @cite_29 @cite_18 @cite_12 , to mention just a few. The format of this research note does not allow us to give a comprehensive overview of such work, so we will concentrate on research that tries to go beyond the expressivity of DL and, at the same time, is applicable to legacy relational databases.
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_9", "@cite_29", "@cite_0", "@cite_19", "@cite_27", "@cite_12", "@cite_11" ], "mid": [ "1983077505", "1990391007", "1758759808", "2108223890", "2166306255", "2521441617", "2112671715", "56080075", "1875099574" ], "abstract": [ "We prove that entailment for RDF Schema (RDFS) is decidable, NP-complete, and in P if the target graph does not contain blank nodes. We show that the standard set of entailment rules for RDFS is incomplete and that this can be corrected by allowing blank nodes in predicate position. We define semantic extensions of RDFS that involve datatypes and a subset of the OWL vocabulary that includes the property-related vocabulary (e.g. FunctionalProperty), the comparisons (e.g. sameAs and differentFrom) and the value restrictions (e.g. allValuesFrom). These semantic extensions are in line with the 'if-semantics' of RDFS and weaker than the 'iff-semantics' of D-entailment and OWL (DL or Full). For these semantic extensions we present entailment rules, prove completeness results, prove that consistency is in P and that, just as for RDFS, entailment is NP-complete, and in P if the target graph does not contain blank nodes. There are no restrictions on use to obtain decidability: classes can be used as instances.", "ABSTRACT This paper concerns the semantics of Codd's relational model of data. Formulated are precise conditions that should be satisfied in a semantically meaningful extension of the usual relational operators, such as projection, selection, union, and join, from operators on relations to operators on tables with “null values” of various kinds allowed. These conditions require that the system be safe in the sense that no incorrect conclusion is derivable by using a specified subset Ω of the relational operators; and that it be complete in the sense that all valid conclusions expressible by relational expressions using operators in Ω are in fact derivable in this system. Two such systems of practical interest are shown. The first, based on the usual Codd's null values, supports projection and selection. The second, based on many different (“marked”) null values or variables allowed to appear in a table, is shown to correctly support projection, positive selection (with no negation occurring in the selection condition), union, and renaming of attributes, which allows for processing arbitrary conjunctive queries. A very desirable property enjoyed by this system is that all relational operators on tables are performed in exactly the same way as in the case of the usual relations. A third system, mainly of theoretical interest, supporting projection, selection, union, join, and renaming, is also discussed. Under a so-called closed world assumption, it can also handle the operator of difference. It is based on a device called a conditional table and is crucial to the proof of the correctness of the second system. All systems considered allow for relational expressions containing arbitrarily many different relation symbols, and no form of the universal relation assumption is required. Categories and Subject Descriptors: H.2.3 [Database Management]: Languages— query languages; H.2.4 [Database Management]: Systems— query processing General Terms: Theory", "While the realization of the SemanticWeb as once envisioned by Tim Berners-Lee remains in a distant future, the Web of Data has already become a reality. 
Billions of RDF statements on the Internet, facts about a variety of different domains, are ready to be used by semantic applications. Some of these applications, however, crucially hinge on the availability of expressive schemas suitable for logical inference that yields non-trivial conclusions. In this paper, we present a statistical approach to the induction of expressive schemas from large RDF repositories. We describe in detail the implementation of this approach and report on an evaluation that we conducted using several data sets including DBpedia.", "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google's general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own \"schema\" of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WEBTABLES system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on co-occurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links.", "Relational databases are widely used today as a mechanism for providing access to structured data. They, however, are not suitable for typical information finding tasks of end users. There is often a semantic gap between the queries users want to express and the queries that can be answered by the database. In this paper, we propose a system that bridges this semantic gap using domain knowledge contained in ontologies. Our system extends relational databases with the ability to answer semantic queries that are represented in SPARQL, an emerging Semantic Web query language. Users express their queries in SPARQL, based on a semantic model of the data, and they get back semantically relevant results. We define different categories of results that are semantically relevant to the users' query and show how our system retrieves these results. We evaluate the performance of our system on sample relational databases, using a combination of standard and custom ontologies.", "With the success of Open Data a huge amount of tabular data sources became available that could potentially be mapped and linked into the Web of (Linked) Data. 
Most existing approaches to “semantically label” such tabular data rely on mappings of textual information to classes, properties, or instances in RDF knowledge bases in order to link – and eventually transform – tabular data into RDF. However, as we will illustrate, Open Data tables typically contain a large portion of numerical columns and or non-textual headers; therefore solutions that solely focus on textual “cues” are only partially applicable for mapping such data sources. We propose an approach to find and rank candidates of semantic labels and context descriptions for a given bag of numerical values. To this end, we apply a hierarchical clustering over information taken from DBpedia to build a background knowledge graph of possible “semantic contexts” for bags of numerical values, over which we perform a nearest neighbour search to rank the most likely candidates. Our evaluation shows that our approach can assign fine-grained semantic labels, when there is enough supporting evidence in the background knowledge graph. In other cases, our approach can nevertheless assign high level contexts to the data, which could potentially be used in combination with other approaches to narrow down the search space of possible labels.", "Traditional text mining techniques transform free text into flat bags of words representation, which does not preserve sufficient semantics for the purpose of knowledge discovery. In this paper, we present a two-step procedure to mine generalized associations of semantic relations conveyed by the textual content of Web documents. First, RDF (resource description framework) metadata representing semantic relations are extracted from raw text using a myriad of natural language processing techniques. The relation extraction process also creates a term taxonomy in the form of a sense hierarchy inferred from WordNet. Then, a novel generalized association pattern mining algorithm (GP-Close) is applied to discover the underlying relation association patterns on RDF metadata. For pruning the large number of redundant overgeneralized patterns in relation pattern search space, the GP-Close algorithm adopts the notion of generalization closure for systematic overgeneralization reduction. The efficacy of our approach is demonstrated through empirical experiments conducted on an online database of terrorist activities", "The proliferation of semantic data on the Web requires RDF database systems to constantly improve their scalability and transactional efficiency. At the same time, users are increasingly interested in investigating or visualizing large collections of online data by performing complex analytic queries. This paper introduces a novel database system for RDF data management called dipLODocus[RDF], which supports both transactional and analytical queries efficiently. dipLODocus[RDF] takes advantage of a new hybrid storage model for RDF data based on recurring graph patterns. In this paper, we describe the general architecture of our system and compare its performance to state-of-the-art solutions for both transactional and analytic workloads.", "Significant efforts have focused in the past years on bringing large amounts of metadata online and the success of these efforts can be seen by the impressive number of web sites exposing data in RDFa or RDF XML. However, little is known about the extent to which this data fits the needs of ordinary web users with everyday information needs. 
In this paper we study what we perceive as the semantic gap between the supply of data on the Semantic Web and the needs of web users as expressed in the queries submitted to a major Web search engine. We perform our analysis on both the level of instances and ontologies. First, we first look at how much data is actually relevant to Web queries and what kind of data is it. Second, we provide a generic method to extract the attributes that Web users are searching for regarding particular classes of entities. This method allows to contrast class definitions found in Semantic Web vocabularies with the attributes of objects that users are interested in. Our findings are crucial to measuring the potential of semantic search, but also speak to the state of the Semantic Web in general." ] }
0901.0339
1946152203
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
The work presented here was originally inspired by the XSTONE project @cite_2 . In XSTONE, a resolution-based theorem prover (a reimplementation of Gandalf that is, in particular, optimised for taxonomic reasoning) is integrated with an RDBMS by loading rows from the database into the reasoner as ground facts and using them to answer queries with resolution. The system is highly scalable in terms of expressiveness: it accepts full FOL with some useful extensions, and it also has parsers for RDF, RDFS and OWL. We believe that our approach has better data scalability and can cope with very large databases that are beyond the reach of XSTONE, mostly because we obtain answers in bulk, and also due to the way we use a highly optimised RDBMS.
{ "cite_N": [ "@cite_2" ], "mid": [ "1870380066" ], "abstract": [ "Triple stores implementing the RL profile of OWL 2 are becoming increasingly popular. In contrast to unrestricted OWL 2, the RL profile is known to enjoy favourable computational properties for query answering, and state-of-the-art RL reasoners such as OWLim and Oracle's native inference engine of Oracle Spatial and Graph have proved extremely successful in industry-scale applications. The expressive restrictions imposed by OWL 2 RL may, however, be problematical for some applications. In this paper, we propose novel techniques that allow us (in many cases) to compute exact query answers using an off-the-shelf RL reasoner, even when the ontology is outside the RL profile. Furthermore, in the cases where exact query answers cannot be computed, we can still compute both lower and upper bounds on the exact answers. These bounds allow us to estimate the degree of incompleteness of the RL reasoner on the given query, and to optimise the computation of exact answers using a fully-fledged OWL 2 reasoner. A preliminary evaluation using the RDF Semantic Graph feature in Oracle Database has shown very promising results with respect to both scalability and tightness of the bounds." ] }
0901.0339
1946152203
We address the problem of semantic querying of relational databases (RDB) modulo knowledge bases using very expressive knowledge representation formalisms, such as full first-order logic or its various fragments. We propose to use a first-order logic (FOL) reasoner for computing schematic answers to deductive queries, with the subsequent instantiation of these schematic answers using a conventional relational DBMS. In this research note, we outline the main idea of this technique -- using abstractions of databases and constrained clauses for deriving schematic answers. The proposed method can be directly used with regular RDB, including legacy databases. Moreover, we propose it as a potential basis for an efficient Web-scale semantic search technology.
On the more theoretical side, it is necessary to mention two other connections. The idea of using constraints to represent schematic answers is borrowed from Constraint Logic Programming @cite_20 and Constrained Resolution @cite_17 . Also, the general idea of using reasoning to preprocess expressive queries into a database-related formalism was borrowed from @cite_10 , where a resolution- and paramodulation-based calculus is used to translate expressive DL ontologies into Disjunctive Datalog. That work also shares a starting point with ours -- the observation that reasoning methods that treat individuals (data values) separately cannot scale up sufficiently.
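To make the borrowed notion concrete, here is a purely illustrative example of a schematic answer represented as a constrained clause; the predicate and table names are hypothetical and are not taken from @cite_20 , @cite_17 or @cite_10 .

```latex
% Hypothetical schematic answer for a deductive query ?- buys(x, y) posed over a
% knowledge base combined with an abstraction of a relational database:
\[
  \mathit{buys}(x, y)
  \;\mid\;
  \langle x, z\rangle \in \textsf{Orders} \;\wedge\; \langle z, y\rangle \in \textsf{Items}
\]
% The clause head carries the answer schema, while the constraint stays symbolic
% during reasoning and is only later instantiated by the RDBMS, e.g. as an SQL join
% of the Orders and Items tables, yielding all concrete answers in bulk.
```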
{ "cite_N": [ "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "1598593580", "2396306223", "2108930340" ], "abstract": [ "Recently, extensions of constrained logic programming and constrained resolution for theorem proving have been introduced, that consider constraints, which are interpreted under an open world assumption. We discuss relationships between applications of these approaches for query answering in knowledge base systems on the one hand and abduction-based hypothetical reasoning on the other hand. We show both that constrained resolution can be used as an operationalization of (some limited form of) abduction and that abduction is the logical status of an answer generation process through constrained resolution, ie., it is an abductive but not a deductive form of reasoning.", "We present several algorithms for reasoning with description logics closely related to SHIQ. Firstly, we present an algorithm for deciding satisfiability of SHIQ knowledge bases. Then, to enable representing concrete data such as strings or integers, we devise a general approach for reasoning with concrete domains in the framework of resolution, and apply it to obtain a procedure for deciding SHIQ(D). For unary coding of numbers, this procedure is worst-case optimal, i.e. it runs in exponential time. Motivated by the prospects of reusing optimization techniques from deductive databases, such as magic sets, we devise an algorithm for reducing SHIQ(D) knowledge bases to disjunctive datalog programs. Furthermore, we show that so-called DL-safe rules can be combined with disjunctive programs obtained by our transformation to increase the expressivity of the logic, without affecting decidability. We show that our algorithms can easily be extended to handle answering conjunctive queries over SHIQ(D) knowledge bases. Finally, we extend our algorithms to support metamodeling. Since SHIQ(D) is closely related to OWL-DL, our algorithms provide alternative mechanisms for reasoning in the Semantic Web.", "The authors address the issue of reasoning with two classes of commonly used semantic integrity constraints in database and knowledge-base systems: implication constraints and referential constraints. They first consider a central problem in this respect, the IRC-refuting problem, which is to decide whether a conjunctive query always produces an empty relation on (finite) database instances satisfying a given set of implication and referential constraints. Since the general problem is undecidable, they only consider acyclic referential constraints. Under this assumption, they prove that the IRC-refuting problem is decidable, and give a novel necessary and sufficient condition for it. Under the same assumption, they also study several other problems encountered in semantic query optimization, such as the semantics-based query containment problem, redundant join problem, and redundant selection-condition problem, and show that they are polynomially equivalent or reducible to the IRC-refuting problem. Moreover, they give results on reducing the complexity for some special cases of the IRC-refuting problem." ] }
0812.4983
1485891830
To establish secure (point-to-point and or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (socalled) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
The problem of secure sensor node initialization has been considered only recently. Prior to the MiB method of @cite_15 (which we reviewed in the previous section), the following schemes were proposed. The ``Shake-them-up'' scheme @cite_5 suggests a simple manual technique for pairing two sensor nodes that involves shaking and twirling them in very close proximity to each other, in order to prevent eavesdropping. While being shaken, the two sensor nodes exchange packets and agree on a key one bit at a time, relying on the adversary's inability to determine the sending node. However, it turns out that the sender can be identified using radio fingerprinting @cite_14 , so the security of this scheme is uncertain.
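To illustrate the per-bit agreement idea described above, the sketch below gives a toy model rather than the actual protocol of @cite_5 : we abstract ``the adversary cannot determine the sending node'' as source-anonymous packets, and the choice of which node transmits in a round (decided physically by channel contention while the devices are shaken) as a fair coin flip.

```python
import secrets

def run_round():
    """One round of the toy model: exactly one of the two devices, A or B,
    transmits a packet that carries no source address."""
    # Physically, which device transmits is decided by radio contention while the
    # nodes are shaken together; here we abstract that as a fair coin flip.
    a_is_sender = bool(secrets.randbits(1))
    a_view = {"i_sent": a_is_sender}          # A only knows whether it transmitted
    b_view = {"i_sent": not a_is_sender}      # likewise for B
    eavesdropper_view = {"anonymous_packet_seen": True}  # no sender identity
    return a_view, b_view, eavesdropper_view

def agree_key(n_bits: int = 128):
    """Both devices derive the same key bit per round from their local views:
    by convention, the round's bit is 1 iff A was the sender."""
    key_a, key_b = [], []
    for _ in range(n_bits):
        a_view, b_view, _ = run_round()
        key_a.append(1 if a_view["i_sent"] else 0)   # A asks: "did I send?"
        key_b.append(0 if b_view["i_sent"] else 1)   # B asks: "did I NOT send?"
    assert key_a == key_b                            # the two views always agree
    return key_a

if __name__ == "__main__":
    print(agree_key(16))
```

In this abstraction each round yields one bit that both legitimate nodes compute from their local view, while an observer who cannot attribute the packet learns nothing; radio fingerprinting breaks exactly the anonymity assumption on which the secrecy of the bits rests.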
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_14" ], "mid": [ "2157123706", "2136447324", "2005259871" ], "abstract": [ "Sensor nodes have limited sensing range and are not very reliable. To obtain accurate sensing data, many sensor nodes should he deployed and then the collaboration among them becomes an important issue. In W. Zhang and G. Cao, a tree-based approach has been proposed to facilitate sensor nodes collaborating in detecting and tracking a mobile target. As the target moves, many nodes in the tree may become faraway from the root of the tree, and hence a large amount of energy may be wasted for them to send their sensing data to the root. We address the tree reconfiguration problem. We formalize it as finding a min-cost convoy tree sequence, and solve it by proposing an optimized complete reconfiguration scheme and an optimized interception-based reconfiguration scheme. Analysis and simulation are conducted to compare the proposed schemes with each other and with other reconfiguration schemes. The results show that the proposed schemes are more energy efficient than others.", "A challenge in facilitating spontaneous mobile interactions is to provide pairing methods that are both intuitive and secure. Simultaneous shaking is proposed as a novel and easy-to-use mechanism for pairing of small mobile devices. The underlying principle is to use common movement as a secret that the involved devices share for mutual authentication. We present two concrete methods, ShaVe and ShaCK, in which sensing and analysis of shaking movement is combined with cryptographic protocols for secure authentication. ShaVe is based on initial key exchange followed by exchange and comparison of sensor data for verification of key authenticity. ShaCK, in contrast, is based on matching features extracted from the sensor data to construct a cryptographic key. The classification algorithms used in our approach are shown to robustly separate simultaneous shaking of two devices from other concurrent movement of a pair of devices, with a false negative rate of under 12 percent. A user study confirms that the method is intuitive and easy to use, as users can shake devices in an arbitrary pattern.", "We consider the problem of secure detection in wireless sensor networks operating over insecure links. It is assumed that an eavesdropping fusion center (EFC) attempts to intercept the transmissions of the sensors and to detect the state of nature. The sensor nodes quantize their observations using a multilevel quantizer. Before transmission to the ally fusion center (AFC), the senor nodes encrypt their data using a probabilistic encryption scheme, which randomly maps the sensor's data to another quantizer output level using a stochastic cipher matrix (key). The communication between the sensors and each fusion center is assumed to be over a parallel access channel with identical and independent branches, and with each branch being a discrete memoryless channel. We employ J-divergence as the performance criterion for both the AFC and EFC. The optimal solution for the cipher matrices is obtained in order to maximize J-divergence for AFC, whereas ensuring that it is zero for the EFC. With the proposed method, as long as the EFC is not aware of the specific cipher matrix employed by each sensor, its detection performance will be very poor. The cost of this method is a small degradation in the detection performance of the AFC. 
The proposed scheme has no communication overhead and minimal processing requirements making it suitable for sensors with limited resources. Numerical results showing the detection performance of the AFC and EFC verify the efficacy of the proposed method." ] }
0812.4983
1485891830
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, and is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
The initialization method that we propose in this paper is similar to device pairing schemes that use an OOB channel. Thus, we also review the most relevant device pairing methods and discuss whether they can be extended to the application of sensor node initialization. In their seminal work, Stajano and Anderson @cite_24 proposed to establish a shared secret between two devices over a link created through physical contact (such as an electric cable). As pointed out previously, this approach requires interfaces that are not available on most sensor motes. Moreover, the approach does not scale.
{ "cite_N": [ "@cite_24" ], "mid": [ "2005259871" ], "abstract": [ "We consider the problem of secure detection in wireless sensor networks operating over insecure links. It is assumed that an eavesdropping fusion center (EFC) attempts to intercept the transmissions of the sensors and to detect the state of nature. The sensor nodes quantize their observations using a multilevel quantizer. Before transmission to the ally fusion center (AFC), the senor nodes encrypt their data using a probabilistic encryption scheme, which randomly maps the sensor's data to another quantizer output level using a stochastic cipher matrix (key). The communication between the sensors and each fusion center is assumed to be over a parallel access channel with identical and independent branches, and with each branch being a discrete memoryless channel. We employ J-divergence as the performance criterion for both the AFC and EFC. The optimal solution for the cipher matrices is obtained in order to maximize J-divergence for AFC, whereas ensuring that it is zero for the EFC. With the proposed method, as long as the EFC is not aware of the specific cipher matrix employed by each sensor, its detection performance will be very poor. The cost of this method is a small degradation in the detection performance of the AFC. The proposed scheme has no communication overhead and minimal processing requirements making it suitable for sensors with limited resources. Numerical results showing the detection performance of the AFC and EFC verify the efficacy of the proposed method." ] }
0812.4983
1485891830
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, and is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
Balfanz et al. @cite_23 extended the above approach through the use of infrared as an OOB channel -- the devices exchange their public keys over the wireless channel and then exchange (at least @math -bit long) hashes of their respective public keys over infrared. However, most sensor motes do not possess infrared transmitters, and infrared is not easily perceptible by humans. Based on the protocol of @cite_23 , the ``Seeing-is-Believing'' (SiB) scheme was proposed in @cite_20 . SiB establishes two unidirectional visual OOB channels -- one device encodes the data into a two-dimensional barcode and the other device reads it using a photo camera. To apply SiB to sensor node initialization, one would need to affix a static barcode (during the manufacturing phase) to each sensor node, which could then be captured by a camera on the sink node. However, this would only provide unidirectional authentication, since the sensor nodes cannot each afford a camera. Note that it is also not possible to manually input the hash of the sink's public key on each sensor node, since most sensor nodes do not possess keypads, and even if they did, this would not scale.
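As a rough illustration of the SiB idea, the snippet below derives a short checksum of a device's public key that could be rendered as a two-dimensional barcode and compared by the verifying device. The 48-bit truncation, the use of SHA-256, and the helper name are illustrative assumptions of ours, not the exact construction of @cite_20 .

    import hashlib

    def short_visual_checksum(public_key_bytes: bytes, out_bits: int = 48) -> str:
        """Hash the public key and truncate it to a short string suitable for
        encoding in a barcode and verifying over a visual OOB channel."""
        digest = hashlib.sha256(public_key_bytes).digest()
        truncated = int.from_bytes(digest, "big") >> (256 - out_bits)
        return format(truncated, f"0{out_bits // 4}x")

    # Example: the sink would display/verify this value for a node's key.
    example_key = b"-----BEGIN PUBLIC KEY----- example bytes -----END PUBLIC KEY-----"
    print(short_visual_checksum(example_key))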
{ "cite_N": [ "@cite_20", "@cite_23" ], "mid": [ "2006853333", "1973225261" ], "abstract": [ "In the wireless sensor networks (WSNs), sensor nodes may be deployed in the hostile areas. The eavesdropper can intercept the messages in the public channel and the communication between the nodes is easily monitored. Furthermore, any malicious intermediate node can act as a legal receiver to alter the passing messages. Hence, message protection and sensor node identification become important issues in WSN. In this paper, we propose a novel scheme providing unconditional secure communication based on the quantum characteristics, including no-cloning and teleportation. We present a random EPR-pair allocation scheme that is designed to overcome the vulnerability caused by possible compromised nodes. EPR pairs are pre-assigned to sensor nodes randomly and the entangled qubits are used by the nodes with the quantum teleportation scheme to form a secure link. We also show a scheme on how to resist the man-in-the-middle attack. In the framework, the qubits are allocated to each node before deployment and the adversary is unable to create the duplicated nodes. Even if the malicious nodes are added to the network to falsify the messages transmitting in the public channel, the legal nodes can easily detect the fake nodes that have no entangled qubits and verify the counterfeit messages. In addition, we prove that one node sharing EPR pairs with a certain amount of neighbor nodes can teleport information to any node in the sensor network if there are sufficient EPR pairs in the qubits pool. The proposal shows that the distributed quantum wireless sensor network gains better security than classical wireless sensor network and centralized quantum wireless network.", "To achieve security in wireless sensor networks, it is important to be able to encrypt and authenticate messages sent between sensor nodes. Before doing so, keys for performing encryption and authentication must be agreed upon by the communicating parties. Due to resource constraints, however, achieving key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and other public-key based schemes, are not suitable for wireless sensor networks due to the limited computational abilities of the sensor nodes. Predistribution of secret keys for all pairs of nodes is not viable due to the large amount of memory this requires when the network size is large.In this paper, we provide a framework in which to study the security of key predistribution schemes, propose a new key predistribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an in-depth analysis of our scheme in terms of network resilience and associated overhead. Our scheme exhibits a nice threshold property: when the number of compromised nodes is less than the threshold, the probability that communications between any additional nodes are compromised is close to zero. This desirable property lowers the initial payoff of smaller-scale network breaches to an adversary, and makes it necessary for the adversary to attack a large fraction of the network before it can achieve any significant gain." ] }
0812.4983
1485891830
To establish secure (point-to-point and/or broadcast) communication channels among the nodes of a wireless sensor network is a fundamental task. To this end, a plethora of (so-called) key pre-distribution schemes have been proposed in the past. All these schemes, however, rely on shared secret(s), which are assumed to be somehow pre-loaded onto the sensor nodes. In this paper, we propose a novel method for secure initialization of sensor nodes based on a visual out-of-band channel. Using the proposed method, the administrator of a sensor network can distribute keys onto the sensor nodes, necessary to bootstrap key pre-distribution. Our secure initialization method requires only a little extra cost, and is efficient and scalable with respect to the number of sensor nodes. Moreover, based on a usability study that we conducted, the method turns out to be quite user-friendly and easy to use by naive human users.
@cite_16 proposed a new scheme based on a visual OOB channel. The scheme uses one of the protocols based on Short Authenticated Strings (SAS) @cite_6 , @cite_4 , and is aimed at pairing two devices (such as a cell phone and an access point), only one of which has a relevant receiver (such as a camera). The protocol is depicted in Figure and, as we will see in the next section, it is the protocol that we utilize in our proposal. In this paper, we extend the above scheme to a ``many-to-one'' setting applicable to key distribution in sensor networks. Basically, the novel OOB channel that we build consists of multiple devices blinking their SAS data simultaneously, which is captured by a camera connected to the sink.
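The following sketch illustrates, under simplified assumptions, how a sink-side camera could decode several nodes blinking their SAS bits simultaneously: each node is reduced to a per-frame on/off brightness sample at a known image location, and every node's bit string is recovered independently. Frame synchronization, region tracking, and error handling are abstracted away, and the function and variable names are our own.

    from typing import Dict, List

    def decode_blinking_sas(frames: List[Dict[str, int]], bits_per_sas: int) -> Dict[str, str]:
        """Recover each node's SAS bit string from per-frame LED samples.

        `frames` is a list of dictionaries mapping a node identifier (e.g. its
        position in the camera image) to a brightness sample, one dictionary
        per video frame, with one frame per transmitted bit.
        """
        sas_bits: Dict[str, List[str]] = {}
        for frame in frames[:bits_per_sas]:
            for node_id, brightness in frame.items():
                sas_bits.setdefault(node_id, []).append("1" if brightness > 128 else "0")
        return {node_id: "".join(bits) for node_id, bits in sas_bits.items()}

    # Two nodes blinking a 4-bit SAS, sampled over four frames.
    frames = [{"node1": 255, "node2": 0},
              {"node1": 0,   "node2": 255},
              {"node1": 255, "node2": 255},
              {"node1": 0,   "node2": 0}]
    print(decode_blinking_sas(frames, bits_per_sas=4))  # {'node1': '1010', 'node2': '0110'}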
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_6" ], "mid": [ "2144454907", "2963353797", "2073753174" ], "abstract": [ "Initiating and bootstrapping secure, yet low-cost, ad-hoc transactions is an important challenge that needs to be overcome if the promise of mobile and pervasive computing is to be fulfilled. For example, mobile payment applications would benefit from the ability to pair devices securely without resorting to conventional mechanisms such as shared secrets, a Public Key Infrastructure (PKI), or trusted third parties. A number of methods have been proposed for doing this based on the use of a secondary out-of-band (OOB) channel that either authenticates information passed over the normal communication channel or otherwise establishes an authenticated shared secret which can be used for subsequent secure communication. A key element of the success of these methods is dependent on the performance and effectiveness of the OOB channel, which usually depends on people performing certain critical tasks correctly. In this paper, we present the results of a comparative usability study on methods that propose using humans to implement the OOB channel and argue that most of these proposals fail to take into account factors that may seriously harm the security and usability of a protocol. Our work builds on previous research in the usability of pairing methods and the accompanying recommendations for designing user interfaces that minimise human mistakes. Our findings show that the traditional methods of comparing and typing short strings into mobile devices are still preferable despite claims that new methods are more usable and secure, and that user interface design alone is not sufficient in mitigating human mistakes in OOB channels.", "Due to the publicly-known deterministic character- istic of pilot tones, pilot-aware attack, by jamming, nulling and spoofing pilot tones, can significantly paralyze the uplink channel training in large-scale MISO-OFDM systems. To solve this, we in this paper develop an independence-checking coding based (ICCB) uplink training architecture for one-ring scattering scenarios allowing for uniform linear arrays (ULA) deployment. Here, we not only insert randomized pilots on subcarriers for channel impulse response (CIR) estimation, but also diversify and encode subcarrier activation patterns (SAPs) to convey those pilots simultaneously. The coded SAPs, though interfered by arbitrary unknown SAPs in wireless environment, are qualified to be reliably identified and decoded into the original pilots by checking the hidden channel independence existing in sub- carriers. Specifically, an independence-checking coding (ICC) theory is formulated to support the encoding decoding process in this architecture. The optimal ICC code is further devel- oped for guaranteeing a well-imposed estimation of CIR while maximizing the code rate. Based on this code, the identification error probability (IEP) is characterized to evaluate the reliability of this architecture. Interestingly, we discover the principle of IEP reduction by exploiting the array spatial correlation, and prove that zero- IEP, i.e., perfect reliability, can be guaranteed under continuously-distributed mean angle of arrival (AoA). Besides this, a novel closed form of IEP expression is derived in discretely-distributed case. 
Simulation results finally verify the effectiveness of the proposed architecture.", "Abstract In this paper, a novel error control scheme using Fountain codes is proposed in on–off keying (OOK) based visible light communications (VLC) systems. By using Fountain codes, feedback information is needed to be sent back to the transmitter only when transmitted messages are successfully recovered. Therefore improved transmission efficiency, reduced protocol complexity and relative little wireless link-layer delay are gained. By employing scrambling techniques and complementing symbols, the least complemented symbols are needed to support arbitrary dimming target values, and the value of entropy of encoded message are increased." ] }
0812.5064
2106011017
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of the network, some strategies spread through the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
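To illustrate the edge-removing-and-rewiring (ERR) step described in the abstract, the sketch below removes a node's link to its lowest-payoff neighbor and rewires it to the highest-payoff non-neighbor. The use of networkx, the payoff values, and the single-edge-per-step policy are simplifying assumptions of ours, not necessarily the exact rule used in the paper.

    import networkx as nx

    def err_step(graph: nx.Graph, node, payoff: dict) -> None:
        """One edge-removing-and-rewiring move for `node`: drop the edge to the
        worst-paying neighbor and create an edge to the best-paying non-neighbor."""
        neighbors = list(graph.neighbors(node))
        if not neighbors:
            return
        worst = min(neighbors, key=lambda n: payoff[n])
        candidates = [n for n in graph.nodes if n != node and n not in neighbors]
        if not candidates:
            return
        best = max(candidates, key=lambda n: payoff[n])
        if payoff[best] > payoff[worst]:
            graph.remove_edge(node, worst)
            graph.add_edge(node, best)

    # Tiny example: node 0 rewires away from its low-payoff neighbor.
    g = nx.path_graph(4)                      # edges: 0-1, 1-2, 2-3
    payoff = {0: 1.0, 1: 0.2, 2: 0.9, 3: 1.5}
    err_step(g, 0, payoff)
    print(sorted(g.edges()))                  # edge 0-1 removed, edge 0-3 added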
Evolutionary game theory, which combines traditional game theory with the idea of evolution, is based on the assumption of bounded rationality. In contrast, players in classical game theory are assumed to be perfectly rational or even hyper-rational, and to always choose optimal strategies in complex environments. Finite information and cognitive limitations, however, often make fully rational decisions unattainable. Moreover, perfect rationality leads to the so-called backward induction paradox @cite_12 in finitely repeated games. Bounded rationality, as a relaxation of the perfect rationality of classical game theory, requires players to be only partly rational @cite_17 , which explains why in many cases people respond or play instinctively according to heuristic rules and social norms rather than adopting the strategies prescribed by rational game theory @cite_7 . Accordingly, various dynamic rules can be defined to characterize the boundedly rational behavior of players in evolutionary game theory.
{ "cite_N": [ "@cite_7", "@cite_12", "@cite_17" ], "mid": [ "2085728653", "1567092208", "2102794165" ], "abstract": [ "Every form of behavior is shaped by trial and error. Such stepwise adaptation can occur through individual learning or through natural selection, the basis of evolution. Since the work of Maynard Smith and others, it has been realized how game theory can model this process. Evolutionary game theory replaces the static solutions of classical game theory by a dynamical approach centered not on the concept of rational players but on the population dynamics of behavioral programs. In this book the authors investigate the nonlinear dynamics of the self-regulation of social and economic behavior, and of the closely related interactions among species in ecological communities. Replicator equations describe how successful strategies spread and thereby create new conditions that can alter the basis of their success, i.e., to enable us to understand the strategic and genetic foundations of the endless chronicle of invasions and extinctions that punctuate evolution. In short, evolutionary game theory describes when to escalate a conflict, how to elicit cooperation, why to expect a balance of the sexes, and how to understand natural selection in mathematical terms. Comprehensive treatment of ecological and game theoretic dynamics Invasion dynamics and permanence as key concepts Explanation in terms of games of things like competition between species", "This text offers a systematic, rigorous, and unified presentation of evolutionary game theory, covering the core developments of the theory from its inception in biology in the 1970s through recent advances. Evolutionary game theory, which studies the behavior of large populations of strategically interacting agents, is used by economists to make predictions in settings where traditional assumptions about agents' rationality and knowledge may not be justified. Recently, computer scientists, transportation scientists, engineers, and control theorists have also turned to evolutionary game theory, seeking tools for modeling dynamics in multiagent systems. Population Games and Evolutionary Dynamics provides a point of entry into the field for researchers and students in all of these disciplines. The text first considers population games, which provide a simple, powerful model for studying strategic interactions among large numbers of anonymous agents. It then studies the dynamics of behavior in these games. By introducing a general model of myopic strategy revision by individual agents, the text provides foundations for two distinct approaches to aggregate behavior dynamics: the deterministic approach, based on differential equations, and the stochastic approach, based on Markov processes. Key results on local stability, global convergence, stochastic stability, and nonconvergence are developed in detail. Ten substantial appendixes present the mathematical tools needed to work in evolutionary game theory, offering a practical introduction to the methods of dynamic modeling. Accompanying the text are more than 200 color illustrations of the mathematics and theoretical results; many were created using the Dynamo software suite, which is freely available on the author's Web site. Readers are encouraged to use Dynamo to run quick numerical experiments and to create publishable figures for their own research.", "Part I of this paper has described a new theory for the analysis of games with incomplete information. 
It has been shown that, if the various players' subjective probability distributions satisfy a certain mutual-consistency requirement, then any given game with incomplete information will be equivalent to a certain game with complete information, called the “Bayes-equivalent” of the original game, or briefly a “Bayesian game.” Part II of the paper will now show that any Nash equilibrium point of this Bayesian game yields a “Bayesian equilibrium point” for the original game and conversely. This result will then be illustrated by numerical examples, representing two-person zero-sum games with incomplete information. We shall also show how our theory enables us to analyze the problem of exploiting the opponent's erroneous beliefs. However, apart from its indubitable usefulness in locating Bayesian equilibrium points, we shall show it on a numerical example the Bayes-equivalent of a two-person cooperative game that the normal form of a Bayesian game is in many cases a highly unsatisfactory representation of the game situation and has to be replaced by other representations e.g., by the semi-normal form. We shall argue that this rather unexpected result is due to the fact that Bayesian games must be interpreted as games with “delayed commitment” whereas the normal-form representation always envisages a game with “immediate commitment.”" ] }
0812.5064
2106011017
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of the network, some strategies spread through the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
Evolutionary stability is a central concept in evolutionary game theory. In biological settings, evolutionary stability provides a robust criterion for strategies under natural selection. It also means that any small group of individuals who try some alternative strategy obtain lower payoffs than those who stick to the original strategy @cite_21 . Suppose that individuals in an infinite, homogeneous population are randomly matched with equal probability to play a symmetric game, and that all of them employ the same strategy @math . If a small group of mutants with population share @math playing some other strategy appears in the population, they will receive lower payoffs. Accordingly, the strategy @math is said to be evolutionarily stable if and only if, for any mutant strategy @math , the inequality @math holds, where the function @math denotes the payoff for playing strategy @math against strategy @math @cite_11 .
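As a concrete, hedged illustration of the ESS condition stated above, the following sketch numerically checks whether a candidate mixed strategy x resists every mutant y in a 2x2 symmetric game, using the standard mixed-strategy payoff u(p, q) = p^T A q. The Hawk-Dove payoff matrix, the mutant grid, and the tolerance values are our own illustrative choices.

    import numpy as np

    def payoff(p, q, A):
        """Expected payoff u(p, q) of mixed strategy p against mixed strategy q."""
        return float(p @ A @ q)

    def is_ess(x, A, eps=1e-3, grid=200, tol=1e-9):
        """Check u(x, (1-eps)x + eps*y) > u(y, (1-eps)x + eps*y) for all mutants y != x."""
        for t in np.linspace(0.0, 1.0, grid + 1):
            y = np.array([t, 1.0 - t])
            if np.allclose(y, x):
                continue
            mixed = (1.0 - eps) * x + eps * y
            if payoff(x, mixed, A) <= payoff(y, mixed, A) + tol:
                return False
        return True

    # Hawk-Dove game with value V=2 and cost C=4: the mixed strategy V/C = 1/2 is the ESS.
    A = np.array([[(2 - 4) / 2, 2.0],
                  [0.0,         1.0]])
    print(is_ess(np.array([0.5, 0.5]), A))  # True: the mixed strategy is evolutionarily stable
    print(is_ess(np.array([1.0, 0.0]), A))  # False: pure Hawk can be invaded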
{ "cite_N": [ "@cite_21", "@cite_11" ], "mid": [ "2079460424", "220156613" ], "abstract": [ "Evolutionary dynamics have been traditionally studied in the context of homogeneous or spatially extended populations1,2,3,4. Here we generalize population structure by arranging individuals on a graph. Each vertex represents an individual. The weighted edges denote reproductive rates which govern how often individuals place offspring into adjacent vertices. The homogeneous population, described by the Moran process3, is the special case of a fully connected graph with evenly weighted edges. Spatial structures are described by graphs where vertices are connected with their nearest neighbours. We also explore evolution on random and scale-free networks5,6,7. We determine the fixation probability of mutants, and characterize those graphs for which fixation behaviour is identical to that of a homogeneous population7. Furthermore, some graphs act as suppressors and others as amplifiers of selection. It is even possible to find graphs that guarantee the fixation of any advantageous mutant. We also study frequency-dependent selection and show that the outcome of evolutionary games can depend entirely on the structure of the underlying graph. Evolutionary graph theory has many fascinating applications ranging from ecology to multi-cellular organization and economics.", "We model evolution according to an asymmetric game as occurring in multiple finite populations, one for each role in the game, and study the effect of subjecting individuals to stochastic strategy mutations. We show that, when these mutations occur sufficiently infrequently, the dynamics over all population states simplify to an ergodic Markov chain over just the pure population states (where each population is monomorphic). This makes calculation of the stationary distribution computationally feasible. The transition probabilities of this embedded Markov chain involve fixation probabilities of mutants in single populations. The asymmetry of the underlying game leads to fixation probabilities that are derived from frequency-independent selection, in contrast to the analogous single-population symmetric-game case (Fudenberg and Imhof, 2006). This frequency independence is useful in that it allows us to employ results from the population genetics literature to calculate the stationary distribution of the evolutionary process, giving sharper, and sometimes even analytic, results. We demonstrate the utility of this approach by applying it to a battle-of-the-sexes game, a Crawford–Sobel signalling game, and the beer-quiche game of Cho and Kreps (1987)." ] }
0812.5064
2106011017
This paper introduces a model based upon games on an evolving network, and develops three clustering algorithms according to it. In the clustering algorithms, data points for clustering are regarded as players who can make decisions in games. On the network describing relationships among data points, an edge-removing-and-rewiring (ERR) function is employed to explore in a neighborhood of a data point, which removes edges connecting to neighbors with small payoffs, and creates new edges to neighbors with larger payoffs. As such, the connections among data points vary over time. During the evolution of the network, some strategies spread through the network. As a consequence, clusters are formed automatically, in which data points with the same evolutionarily stable strategy are collected as a cluster, so the number of evolutionarily stable strategies indicates the number of clusters. Moreover, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
In addition, the mechanisms of cooperation and the spatio-temporal dynamics related to them have long been investigated within the framework of evolutionary game theory, based on the prisoner's dilemma (PD) game or the snowdrift game, both of which model interactions between a pair of players. Early work focused on the iterated PD game, in which a player interacts with all other players. Through such round-robin interactions, the strategies in the population evolve according to their payoffs. As a result, the strategy of unconditional defection is always evolutionarily stable @cite_14 , while pure cooperators cannot survive. Nevertheless, the Tit-for-Tat strategy, which promotes cooperation based on reciprocity, is evolutionarily stable as well @cite_22 .
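A minimal simulation sketch of the iterated PD dynamics discussed above follows: it plays round-robin matches between Tit-for-Tat and unconditional defection and reports average payoffs, illustrating why reciprocity can sustain cooperation once TFT players meet each other. The payoff values (T=5, R=3, P=1, S=0) and the round count are customary illustrative choices, not taken from the cited works.

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(history_self, history_other):
        return "C" if not history_other else history_other[-1]

    def always_defect(history_self, history_other):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        """Return the average per-round payoff of each strategy in an iterated PD."""
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a / rounds, score_b / rounds

    print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))      # mutual cooperation: (3.0, 3.0)
    print("TFT  vs ALLD:", play(tit_for_tat, always_defect))    # TFT is exploited only in the first round
    print("ALLD vs ALLD:", play(always_defect, always_defect))  # mutual defection: (1.0, 1.0)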
{ "cite_N": [ "@cite_14", "@cite_22" ], "mid": [ "2771187719", "2062663664" ], "abstract": [ "Cooperation is a difficult proposition in the face of Darwinian selection. Those that defect have an evolutionary advantage over cooperators who should therefore die out. However, spatial structure enables cooperators to survive through the formation of homogeneous clusters, which is the hallmark of network reciprocity. Here we go beyond this traditional setup and study the spatiotemporal dynamics of cooperation in a population of populations. We use the prisoner's dilemma game as the mathematical model and show that considering several populations simultaneously gives rise to fascinating spatiotemporal dynamics and pattern formation. Even the simplest assumption that strategies between different populations are payoff-neutral with one another results in the spontaneous emergence of cyclic dominance, where defectors of one population become prey of cooperators in the other population, and vice versa. Moreover, if social interactions within different populations are characterized by significantly different temptations to defect, we observe that defectors in the population with the largest temptation counterintuitively vanish the fastest, while cooperators that hang on eventually take over the whole available space. Our results reveal that considering the simultaneous presence of different populations significantly expands the complexity of evolutionary dynamics in structured populations, and it allows us to understand the stability of cooperation under adverse conditions that could never be bridged by network reciprocity alone.", "Cooperation in organisms, whether bacteria or primates, has been a difficulty for evolutionary theory since Darwin. On the assumption that interactions between pairs of individuals occur on a probabilistic basis, a model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game. Deductions from the model, and the results of a computer tournament show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established. Potential applications include specific aspects of territoriality, mating, and disease." ] }
0812.3120
2061159444
Imperfect channel state information degrades the performance of multiple-input multiple-output (MIMO) communications; its effects on single-user (SU) and multiuser (MU) MIMO transmissions are quite different. In particular, MU-MIMO suffers from residual interuser interference due to imperfect channel state information while SU-MIMO only suffers from a power loss. This paper compares the throughput loss of both SU and MU-MIMO in the broadcast channel due to delay and channel quantization. Accurate closed-form approximations are derived for achievable rates for both SU and MU-MIMO. It is shown that SU-MIMO is relatively robust to delayed and quantized channel information, while MU-MIMO with zero-forcing precoding loses its spatial multiplexing gain with a fixed delay or fixed codebook size. Based on derived achievable rates, a mode switching algorithm is proposed, which switches between SU and MU-MIMO modes to improve the spectral efficiency based on average signal-to-noise ratio (SNR), normalized Doppler frequency, and the channel quantization codebook size. The operating regions for SU and MU modes with different delays and codebook sizes are determined, and they can be used to select the preferred mode. It is shown that the MU mode is active only when the normalized Doppler frequency is very small, and the codebook size is large.
For the MIMO downlink, CSIT is required to separate the spatial channels of different users. It was shown in @cite_38 @cite_14 that, to obtain the full spatial multiplexing gain for a MU-MIMO system employing zero-forcing (ZF) or block-diagonalization (BD) precoding, the quantization codebook size for limited feedback needs to increase linearly with the SNR (in dB) and with the number of transmit antennas. Zero-forcing dirty-paper coding and channel inversion systems with limited feedback were investigated in @cite_9 , where a sum rate ceiling due to a fixed codebook size was derived for both schemes. In @cite_11 , it was shown that to exploit multiuser diversity for ZF, both the channel direction and information about the signal-to-interference-plus-noise ratio (SINR) must be fed back. More recently, a comprehensive study of the MIMO downlink with ZF precoding was carried out in @cite_6 , which considered downlink training and explicit channel feedback and concluded that significant downlink throughput is achievable with efficient CSI feedback. For a compound MIMO broadcast channel, the information-theoretic analysis in @cite_35 showed that scaling the CSIT quality such that the CSIT error is dominated by the inverse of the SNR is both necessary and sufficient to achieve the full spatial multiplexing gain.
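To give a feel for the codebook-size scaling mentioned above, the sketch below evaluates the commonly cited rule of thumb that, for ZF with Nt transmit antennas, roughly B = (Nt - 1) * log2(SNR), i.e. about (Nt - 1) * SNR_dB / 3, feedback bits per user are needed to keep the quantization-induced rate loss bounded. Treat the exact constant and the bounded-loss target as illustrative assumptions in the spirit of @cite_38 rather than a quoted formula.

    import math

    def feedback_bits(num_tx_antennas: int, snr_db: float) -> int:
        """Rule-of-thumb feedback load for ZF precoding with quantized CSIT:
        B grows linearly in the SNR (in dB) and in the number of transmit antennas."""
        snr_linear = 10 ** (snr_db / 10.0)
        return math.ceil((num_tx_antennas - 1) * math.log2(snr_linear))

    for snr_db in (0, 10, 20, 30):
        print(f"Nt=4, SNR={snr_db:2d} dB -> about {feedback_bits(4, snr_db)} bits per user")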
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_14", "@cite_9", "@cite_6", "@cite_11" ], "mid": [ "2015301486", "2131262740", "347411582", "2047468088", "2106872716", "2725094597" ], "abstract": [ "We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional “effective” channel matrix. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is approached in the large number of antennas limit. For this case, we use Szego's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) number of antennas and users.", "We consider a MIMO broadcast channel where both the transmitter and receivers are equipped with multiple antennas. Channel state information at the transmitter (CSIT) is obtained through limited (i.e., finite-bandwidth) feedback from the receivers that index a set of precoding vectors contained in a predefined codebook. We propose a novel transceiver architecture based on zero-forcing beamforming and linear receiver combining. The receiver combining and quantization for CSIT feedback are jointly designed in order to maximize the expected SINR for each user. We provide an analytic characterization of the achievable throughput in the case of many users and show how additional receive antennas or higher multiuser diversity can reduce the required feedback rate to achieve a target throughput.We also propose a design methodology for generating codebooks tailored for arbitrary spatial correlation statistics. The resulting codebooks have a tree structure that can be utilized in time-correlated MIMO channels to significantly reduce feedback overhead. 
Simulation results show the effectiveness of the overall transceiver design strategy and codebook design methodology compared to prior techniques in a variety of correlation environments.", "We consider a MIMO fading broadcast channel and compute achievable ergodic rates when channel state information is acquired at the receivers via downlink training and explicit channel feedback is performed to provide transmitter channel state information (CSIT). Both “analog” and quantized (digital) channel feedback are analyzed, and digital feedback is shown to be potentially superior when the feedback channel uses per channel coefficient is larger than 1. Also, we show that by proper design of the digital feedback link, errors in the feedback have a relatively minor effect even if simple uncoded modulation is used on the feedback channel. We extend our analysis to the case of fading MIMO Multiaccess Channel (MIMO-MAC) in the feedback link, as well as to the case of a time-varying channel and feedback delay. We show that by exploiting the MIMO-MAC nature of the uplink channel, a fully scalable system with both downlink multiplexing gain and feedback redundancy proportional to the number of base station antennas can be achieved. Furthermore, the feedback strategy is optimized by a non-trivial combination of time-division and space-division multiple-access. For the case of delayed feedback, we show that in the realistic case where the fading process has (normalized) maximum Doppler frequency shift 0 F < 1=2, a fraction 1 2F of the optimal multiplexing gain is achievable. The general conclusion of this work is that very significant downlink throughput is achievable with simple and efficient channel state feedback, provided that the feedback link is properly designed.", "Large multiple-input multiple-output (MIMO) networks promise high energy efficiency, i.e., much less power is required to achieve the same capacity compared to the conventional MIMO networks if perfect channel state information (CSI) is available at the transmitter. However, in such networks, huge overhead is required to obtain full CSI especially for Frequency-Division Duplex (FDD) systems. To reduce overhead, we propose a downlink antenna selection scheme, which selects S antennas from M > S transmit antennas based on the large scale fading to serve K ≤ S users in large distributed MIMO networks employing regularized zero-forcing (RZF) precoding. In particular, we study the joint optimization of antenna selection, regularization factor, and power allocation to maximize the average weighted sum-rate. This is a mixed combinatorial and non-convex problem whose objective and constraints have no closed-form expressions. We apply random matrix theory to derive asymptotically accurate expressions for the objective and constraints. As such, the joint optimization problem is decomposed into subproblems, each of which is solved by an efficient algorithm. In addition, we derive structural solutions for some special cases and show that the capacity of very large distributed MIMO networks scales as O(KlogM) when M→∞ with K, S fixed. 
Simulations show that the proposed scheme achieves significant performance gain over various baselines.", "In this paper, we consider two different models of partial channel state information at the base station transmitter (CSIT) for multiple antenna broadcast channels: 1) the shape feedback model where the normalized channel vector of each user is available at the base station and 2) the limited feedback model where each user quantizes its channel vector according to a rotated codebook that is optimal in the sense of mean squared error and feeds back the codeword index. This paper is focused on characterizing the sum rate performance of both zero-forcing dirty paper coding (ZFDPC) systems and channel inversion (CI) systems under the given two partial CSIT models. Intuitively speaking, a system with shape feedback loses the sum rate gain of adaptive power allocation. However, shape feedback still provides enough channel knowledge for ZFDPC and CI to approach their own optimal throughput in the high signal-to-noise ratio (SNR) regime. As for limited feedback, we derive sum rate bounds for both signaling schemes and link their throughput performance to some basic properties of the quantization codebook. Interestingly, we find that limited feedback employing a fixed codebook leads to a sum rate ceiling for both schemes for asymptotically high SNR.", "The Interfering Broadcast Channel (IBC) applies to the downlink of (cellular and or heterogeneous) multi-cell networks, which are limited by multi-user (MU) interference. The interference alignment (IA) concept has shown that interference does not need to be inevitable. In particular spatial IA in the MIMO IBC allows for low latency transmission. However, IA requires perfect and typically global Channel State Information at the Transmitter(s) (CSIT), whose acquisition does not scale well with network size. Also, the design of transmitters (Txs) and receivers (Rxs) is coupled and hence needs to be centralized (cloud) or duplicated (distributed approach). CSIT, which is crucial in MU systems, is always imperfect in practice. We consider the joint optimal exploitation of mean (channel estimates) and covariance Gaussian partial CSIT. Indeed, in a Massive MIMO (MaMIMO) setting (esp. when combined with mmWave) the channel covariances may exhibit low rank and zero-forcing might be possible by just exploiting the covariance subspaces. But the question is the optimization of beamformers for the expected weighted sum rate (EWSR) at finite SNR. We propose explicit beamforming solutions and indicate that existing large system analysis can be extended to handle optimized beamformers with the more general partial CSIT considered here." ] }
0812.3120
2061159444
Imperfect channel state information degrades the performance of multiple-input multiple-output (MIMO) communications; its effects on single-user (SU) and multiuser (MU) MIMO transmissions are quite different. In particular, MU-MIMO suffers from residual interuser interference due to imperfect channel state information while SU-MIMO only suffers from a power loss. This paper compares the throughput loss of both SU and MU-MIMO in the broadcast channel due to delay and channel quantization. Accurate closed-form approximations are derived for achievable rates for both SU and MU-MIMO. It is shown that SU-MIMO is relatively robust to delayed and quantized channel information, while MU-MIMO with zero-forcing precoding loses its spatial multiplexing gain with a fixed delay or fixed codebook size. Based on derived achievable rates, a mode switching algorithm is proposed, which switches between SU and MU-MIMO modes to improve the spectral efficiency based on average signal-to-noise ratio (SNR), normalized Doppler frequency, and the channel quantization codebook size. The operating regions for SU and MU modes with different delays and codebook sizes are determined, and they can be used to select the preferred mode. It is shown that the MU mode is active only when the normalized Doppler frequency is very small, and the codebook size is large.
Although previous studies show that the spatial multiplexing gain of MU-MIMO can be achieved with limited feedback, this requires the codebook size to increase with the SNR and the number of transmit antennas. Even when such a requirement is satisfied, there is an inevitable rate loss due to quantization error, in addition to other CSIT imperfections such as estimation error and delay. Moreover, most prior work focused on the achievable spatial multiplexing gain, relying mainly on an analysis of the rate loss due to imperfect CSIT, which is usually a loose bound @cite_38 @cite_14 @cite_35 . Such analysis cannot accurately characterize the throughput loss, and no comparison with SU-MIMO has been made. In this paper, we derive accurate approximations of the achievable throughput for both SU and MU-MIMO systems with fixed channel information accuracy, i.e., with a fixed delay and a fixed quantization codebook size. We are interested in the following question: for a given delay and codebook size, which mode -- SU or MU-MIMO -- provides the higher throughput? Based on the answer, we can select the mode with the higher throughput as the transmission technique.
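The mode-switching idea can be sketched as follows: given approximate per-mode throughput estimates as functions of the average SNR, the CSIT error (driven by delay and codebook size), and the number of users, the transmitter simply picks the mode with the larger estimate. The closed-form expressions below (an interference-free log term for SU and a ZF term whose residual interference grows with the SNR-error product) are deliberately simplified stand-ins for the approximations derived in the paper, included only to show the selection logic.

    import math

    def su_rate_estimate(snr: float, csit_error: float) -> float:
        """SU-MIMO mainly suffers a power loss from imperfect CSIT (simplified model)."""
        return math.log2(1.0 + snr * (1.0 - csit_error))

    def mu_zf_rate_estimate(snr: float, csit_error: float, num_users: int) -> float:
        """MU-MIMO with ZF suffers residual interference proportional to SNR * error."""
        sinr = (snr / num_users) / (1.0 + snr * csit_error)
        return num_users * math.log2(1.0 + sinr)

    def select_mode(snr: float, csit_error: float, num_users: int = 4) -> str:
        mu = mu_zf_rate_estimate(snr, csit_error, num_users)
        su = su_rate_estimate(snr, csit_error)
        return "MU" if mu > su else "SU"

    # Larger CSIT error (longer delay or smaller codebook) pushes the choice toward SU.
    for err in (0.01, 0.05, 0.2):
        print(f"SNR=20 dB (100x), CSIT error={err}: prefer {select_mode(100.0, err)} mode")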
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_35" ], "mid": [ "2144703403", "2045567371", "347411582" ], "abstract": [ "Multiple-input-multiple-output (MIMO) communication systems can provide large capacity gains over traditional single-input-single-output (SISO) systems and are expected to be a core technology of next generation wireless systems. Often, these capacity gains are achievable only with some form of adaptive transmission. In this paper, we study the capacity loss (defined as the rate loss in bits s Hz) of the MIMO wireless system when the covariance matrix of the transmitted signal vector is designed using a low rate feedback channel. For the MIMO channel, we find a bound on the ergodic capacity loss when random codebooks, generated from the uniform distribution on the complex unit sphere, are used to convey the second order statistics of the transmitted signal from the receiver to the transmitter. In this case, we find a closed-form expression for the ergodic capacity loss as a function of the number of bits fed back at each channel realization. These results show that the capacity loss decreases at least as O(2 sup -B (2MMt-2) ) where B is the number of feedback bits, M sub t is the number of transmit antennas, and M=min M sub r ,M sub t where M sub r is the number of receive antennas. In the high SNR regime, we present a new bound on the capacity loss that is tighter than the previously derived bound and show that the capacity loss decreases exponentially as a function of the number of feedback bits.", "The enormous success of advanced wireless devices is pushing the demand for higher wireless data rates. Denser spectrum reuse through the deployment of more access points (APs) per square mile has the potential to successfully meet such demand. In principle, distributed multiuser multiple-input-multiple-output (MU-MIMO) provides the best approach to infrastructure density increase since several access points are connected to a central server and operate as a large distributed multiantenna access point. This ensures that all transmitted signal power serves the purpose of data transmission, rather than creating interference. In practice, however, a number of implementation difficulties must be addressed, the most significant of which is aligning the phases of all jointly coordinated APs. In this paper, we propose AirSync, a novel scheme that provides timing and phase synchronization accurate enough to enable distributed MU-MIMO. AirSync detects the slot boundary such that all APs are time-synchronous within a cyclic prefix (CP) of the orthogonal frequency-division multiplexing (OFDM) modulation and predicts the instantaneous carrier phase correction along the transmit slot such that all transmitters maintain their coherence, which is necessary for multiuser beamforming. We have implemented AirSync as a digital circuit in the field programmable gate array (FPGA) of the WARP radio platform. Our experimental testbed, comprising four APs and four clients, shows that AirSync is able to achieve timing synchronization within the OFDM CP and carrier phase coherence within a few degrees. For the purpose of demonstration, we have implemented two MU-MIMO precoding schemes, Zero-Forcing Beamforming (ZFBF) and Tomlinson-Harashima Precoding (THP). In both cases, our system approaches the theoretical optimal multiplexing gains. We also discuss aspects related to the MAC and multiuser scheduling design, in relation to the distributed MU-MIMO architecture. 
To the best of our knowledge, AirSync offers the first realization of the full distributed MU-MIMO multiplexing gain, namely the ability to increase the number of active wireless clients per time-frequency slot linearly with the number of jointly coordinated APs, without reducing the per client rate.", "We consider a MIMO fading broadcast channel and compute achievable ergodic rates when channel state information is acquired at the receivers via downlink training and explicit channel feedback is performed to provide transmitter channel state information (CSIT). Both “analog” and quantized (digital) channel feedback are analyzed, and digital feedback is shown to be potentially superior when the feedback channel uses per channel coefficient is larger than 1. Also, we show that by proper design of the digital feedback link, errors in the feedback have a relatively minor effect even if simple uncoded modulation is used on the feedback channel. We extend our analysis to the case of fading MIMO Multiaccess Channel (MIMO-MAC) in the feedback link, as well as to the case of a time-varying channel and feedback delay. We show that by exploiting the MIMO-MAC nature of the uplink channel, a fully scalable system with both downlink multiplexing gain and feedback redundancy proportional to the number of base station antennas can be achieved. Furthermore, the feedback strategy is optimized by a non-trivial combination of time-division and space-division multiple-access. For the case of delayed feedback, we show that in the realistic case where the fading process has (normalized) maximum Doppler frequency shift 0 F < 1=2, a fraction 1 2F of the optimal multiplexing gain is achievable. The general conclusion of this work is that very significant downlink throughput is achievable with simple and efficient channel state feedback, provided that the feedback link is properly designed." ] }
0812.3478
2952425088
The need for domain ontologies in mission-critical applications such as risk management and hazard identification is becoming more and more pressing. Most research on ontology learning conducted in academia remains unrealistic for real-world applications. One of the main problems is the dependence on non-incremental, rare knowledge and textual resources, and manually-crafted patterns and rules. This paper reports work in progress aiming to address such undesirable dependencies during ontology construction. Initial experiments using a working prototype of the system revealed promising potential in automatically constructing high-quality domain ontologies using real-world texts.
Besides manual efforts, several ontology construction systems aimed at generating domain ontologies have been developed in recent years. For example, @cite_18 employs standard natural language processing (NLP) tools and corpus analysis to extract and recognise domain terms, while @cite_25 and @cite_8 are utilised to extract semantic relations between the terms. Similarly, the system of @cite_36 makes use of non-incremental resources such as , together with manually-crafted lexico-syntactic patterns, to construct ontologies; to identify more complex relations, it employs association rule learning. More recent work by @cite_9 extracts terms and semantic relations through dependency structure analysis. The terms are mapped onto to obtain bags of senses, and these senses are then clustered using cosine similarity. Semantic relations that consist of similar terms can be generalised using association rule mining algorithms in order to deduce statistically significant patterns. @cite_6 conducted a study on clustering and the associated tasks of feature extraction, feature selection, and similarity measurement for constructing ontologies; in their study, the contexts, appearing as the sentences in which the terms occur, are used as features. @cite_29 utilise dependency structure analysis to extract terms and relationships with the help of a controlled vocabulary called the and domain knowledge in the form of the .
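As a simplified illustration of the context-clustering step described above (terms represented by the sentences they occur in and grouped by cosine similarity), the sketch below builds bag-of-words context vectors for a handful of candidate terms and clusters them. The use of scikit-learn (version 1.2 or later for the `metric` parameter) and the tiny toy corpus are our own choices, not the tooling used by the cited systems.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics.pairwise import cosine_distances

    # Toy "contexts": the sentences in which each candidate domain term occurs.
    term_contexts = {
        "benzene":  "flammable liquid stored in pressurised tank near ignition source",
        "methane":  "flammable gas stored in pressurised tank leak risk",
        "firewall": "network device filtering packets between trusted and untrusted zones",
        "router":   "network device forwarding packets between zones",
    }

    terms = list(term_contexts)
    vectors = CountVectorizer().fit_transform(term_contexts.values())
    distance = cosine_distances(vectors)

    labels = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average"
    ).fit_predict(distance)

    for term, label in zip(terms, labels):
        print(f"cluster {label}: {term}")  # the two clusters separate hazard terms from network terms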
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_36", "@cite_9", "@cite_29", "@cite_6", "@cite_25" ], "mid": [ "2109718074", "2949465329", "14559458", "2112671715", "2053238041", "2107229268", "2212742528" ], "abstract": [ "Traditional approaches to Relation Extraction from text require manually defining the relations to be extracted. We propose here an approach to automatically discovering relevant relations, given a large text corpus plus an initial ontology defining hundreds of noun categories (e.g., Athlete, Musician, Instrument). Our approach discovers frequently stated relations between pairs of these categories, using a two step process. For each pair of categories (e.g., Musician and Instrument) it first co-clusters the text contexts that connect known instances of the two categories, generating a candidate relation for each resulting cluster. It then applies a trained classifier to determine which of these candidate relations is semantically valid. Our experiments apply this to a text corpus containing approximately 200 million web pages and an ontology containing 122 categories from the NELL system [, 2010b], producing a set of 781 proposed candidate relations, approximately half of which are semantically valid. We conclude this is a useful approach to semi-automatic extension of the ontology for large-scale information extraction systems such as NELL.", "We present three systems for surface natural language generation that are trainable from annotated corpora. The first two systems, called NLG1 and NLG2, require a corpus marked only with domain-specific semantic attributes, while the last system, called NLG3, requires a corpus marked with both semantic attributes and syntactic dependency information. All systems attempt to produce a grammatical natural language phrase from a domain-specific semantic representation. NLG1 serves a baseline system and uses phrase frequencies to generate a whole phrase in one step, while NLG2 and NLG3 use maximum entropy probability models to individually generate each word in the phrase. The systems NLG2 and NLG3 learn to determine both the word choice and the word order of the phrase. We present experiments in which we generate phrases to describe flights in the air travel domain.", "The approach towards Semantic Web Information Extraction (IE) presented here is implemented in KIM – a platform for semantic indexing, annotation, and retrieval. It combines IE based on the mature text engineering platform (GATE1) with Semantic Web-compliant knowledge representation and management. The cornerstone is automatic generation of named-entity (NE) annotations with class and instance references to a semantic repository. Simplistic upper-level ontology, providing detailed coverage of the most popular entity types (Person, Organization, Location, etc.; more than 250 classes) is designed and used. A knowledge base (KB) with de-facto exhaustive coverage of real-world entities of general importance is maintained, used, and constantly enriched. Extensions of the ontology and KB take care of handling all the lexical resources used for IE, most notable, instead of gazetteer lists, aliases of specific entities are kept together with them in the KB. A Semantic Gazetteer uses the KB to generate lookup annotations. Ontologyaware pattern-matching grammars allow precise class information to be handled via rules at the optimal level of generality. The grammars are used to recognize NE, with class and instance information referring to the KIM ontology and KB. 
Recognition of identity relations between the entities is used to unify their references to the KB. Based on the recognized NE, template relation construction is performed via grammar rules. As a result of the latter, the KB is being enriched with the recognized relations between entities. At the final phase of the IE process, previously unknown aliases and entities are being added to the KB with their specific types.", "Traditional text mining techniques transform free text into flat bags of words representation, which does not preserve sufficient semantics for the purpose of knowledge discovery. In this paper, we present a two-step procedure to mine generalized associations of semantic relations conveyed by the textual content of Web documents. First, RDF (resource description framework) metadata representing semantic relations are extracted from raw text using a myriad of natural language processing techniques. The relation extraction process also creates a term taxonomy in the form of a sense hierarchy inferred from WordNet. Then, a novel generalized association pattern mining algorithm (GP-Close) is applied to discover the underlying relation association patterns on RDF metadata. For pruning the large number of redundant overgeneralized patterns in relation pattern search space, the GP-Close algorithm adopts the notion of generalization closure for systematic overgeneralization reduction. The efficacy of our approach is demonstrated through empirical experiments conducted on an online database of terrorist activities", "Extracting semantic relationships between entities is challenging. This paper investigates the incorporation of diverse lexical, syntactic and semantic knowledge in feature-based relation extraction using SVM. Our study illustrates that the base phrase chunking information is very effective for relation extraction and contributes to most of the performance improvement from syntactic aspect while additional information from full parsing gives limited further enhancement. This suggests that most of useful information in full parse trees for relation extraction is shallow and can be captured by chunking. We also demonstrate how semantic information such as WordNet and Name List, can be used in feature-based relation extraction to further improve the performance. Evaluation on the ACE corpus shows that effective incorporation of diverse features enables our system outperform previously best-reported systems on the 24 ACE relation subtypes and significantly outperforms tree kernel-based systems by over 20 in F-measure on the 5 ACE relation types.", "Lexical-semantic resources, including thesauri and WORDNET, have been successfully incorporated into a wide range of applications in Natural Language Processing. However they are very difficult and expensive to create and maintain, and their usefulness has been severely hampered by their limited coverage, bias and inconsistency. Automated and semi-automated methods for developing such resources are therefore crucial for further resource development and improved application performance. Systems that extract thesauri often identify similar words using the distributional hypothesis that similar words appear in similar contexts. This approach involves using corpora to examine the contexts each word appears in and then calculating the similarity between context distributions. Different definitions of context can be used, and I begin by examining how different types of extracted context influence similarity. 
To be of most benefit these systems must be capable of finding synonyms for rare words. Reliable context counts for rare events can only be extracted from vast collections of text. In this dissertation I describe how to extract contexts from a corpus of over 2 billion words. I describe techniques for processing text on this scale and examine the trade-off between context accuracy, information content and quantity of text analysed. Distributional similarity is at best an approximation to semantic similarity. I develop improved approximations motivated by the intuition that some events in the context distribution are more indicative of meaning than others. For instance, the object-of-verb context wear is far more indicative of a clothing noun than get. However, existing distributional techniques do not effectively utilise this information. The new context-weighted similarity metric I propose in this dissertation significantly outperforms every distributional similarity metric described in the literature. Nearest-neighbour similarity algorithms scale poorly with vocabulary and context vector size. To overcome this problem I introduce a new context-weighted approximation algorithm with bounded complexity in context vector size that significantly reduces the system runtime with only a minor performance penalty. I also describe a parallelized version of the system that runs on a Beowulf cluster for the 2 billion word experiments. To evaluate the context-weighted similarity measure I compare ranked similarity lists against gold-standard resources using precision and recall-based measures from Information Retrieval,", "The logic-based machine-understandable framework of the Semantic Web often challenges naive users when they try to query ontology-based knowledge bases. Existing research efforts have approached this problem by introducing Natural Language (NL) interfaces to ontologies. These NL interfaces have the ability to construct SPARQL queries based on NL user queries. However, most efforts were restricted to queries expressed in English, and they often benefited from the advancement of English NLP tools. However, little research has been done to support querying the Arabic content on the Semantic Web by using NL queries. This paper presents a domain-independent approach to translate Arabic NL queries to SPARQL by leveraging linguistic analysis. Based on a special consideration on Noun Phrases (NPs), our approach uses a language parser to extract NPs and the relations from Arabic parse trees and match them to the underlying ontology. It then utilizes knowledge in the ontology to group NPs into triple-based representations. A SPARQL query is finally generated by extracting targets and modifiers, and interpreting them into SPARQL. The interpretation of advanced semantic features including negation, conjunctive and disjunctive modifiers is also supported. The approach was evaluated by using two datasets consisting of OWL test data and queries, and the obtained results have confirmed its feasibility to translate Arabic NL queries to SPARQL." ] }
0812.4171
2950782526
We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.
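As a rough illustration of the partition function just described, the following Python sketch evaluates it by brute force for a tiny weighted Boolean constraint satisfaction instance. The instance, the helper names, and the particular weight functions are invented for the example (they are not taken from the paper); each weight function is supplied in the decomposed form (-1)^s * g discussed in the abstract.

from itertools import product

def constraint_value(s_poly, g_func, args):
    # Value of one weight function, given in the form (-1)^s * g, on the
    # restriction of the assignment to the function's scope.
    return (-1) ** s_poly(*args) * g_func(*args)

def partition_function(num_vars, constraints):
    # Sum of the weights of all 2**num_vars Boolean assignments (brute force).
    total = 0
    for assignment in product((0, 1), repeat=num_vars):
        weight = 1
        for s_poly, g_func, scope in constraints:
            weight *= constraint_value(s_poly, g_func,
                                       tuple(assignment[v] for v in scope))
        total += weight
    return total

# Hypothetical instance on 3 Boolean variables:
#  - a binary function of weight (-1)^(x*y): quadratic sign polynomial, magnitude 1;
#  - a non-negative binary function that prefers equal values on the last two variables.
constraints = [
    (lambda x, y: x * y, lambda x, y: 1, (0, 1)),
    (lambda y, z: 0, lambda y, z: 2 if y == z else 1, (1, 2)),
]
print(partition_function(3, constraints))  # prints 6 for this toy instance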
The contribution of this paper (Theorem below) extends Theorem to constraint languages @math containing arbitrary rational-valued functions. This is an interesting extension since functions with negative values can cause cancellations and may make the partition function easier to compute. In a related context, recall the sharp distinction in complexity between computing the permanent and the determinant of a matrix. Independently, Cai, Lu and Xia have recently found a wider generalization, giving a dichotomy for the case where @math can be any set of complex-valued functions @cite_17 .
{ "cite_N": [ "@cite_17" ], "mid": [ "2097738137" ], "abstract": [ "This paper gives a dichotomy theorem for the complexity of computing the partition function of an instance of a weighted Boolean constraint satisfaction problem. The problem is parameterized by a finite set @math of nonnegative functions that may be used to assign weights to the configurations (feasible solutions) of a problem instance. Classical constraint satisfaction problems correspond to the special case of 0,1-valued functions. We show that computing the partition function, i.e., the sum of the weights of all configurations, is @math -complete unless either (1) every function in @math is of “product type,” or (2) every function in @math is “pure affine.” In the remaining cases, computing the partition function is in P." ] }
0812.4171
2950782526
We give a complexity dichotomy for the problem of computing the partition function of a weighted Boolean constraint satisfaction problem. Such a problem is parameterized by a set of rational-valued functions, which generalize constraints. Each function assigns a weight to every assignment to a set of Boolean variables. Our dichotomy extends previous work in which the weight functions were restricted to being non-negative. We represent a weight function as a product of the form (-1)^s g, where the polynomial s determines the sign of the weight and the non-negative function g determines its magnitude. We show that the problem of computing the partition function (the sum of the weights of all possible variable assignments) is in polynomial time if either every weight function can be defined by a "pure affine" magnitude with a quadratic sign polynomial or every function can be defined by a magnitude of "product type" with a linear sign polynomial. In all other cases, computing the partition function is FP^#P-complete.
The case of mixed signs has been considered previously by Goldberg, Grohe, Jerrum and Thurley @cite_12 , for a single symmetric binary function on an arbitrary finite domain. Their theorem generalizes that of Bulatov and Grohe @cite_21 for the non-negative case. The authors of @cite_12 give two examples which can also be expressed as Boolean weighted @math and which fall within the scope of this paper; the first appeared as an open problem in @cite_21 . The complexity of these problems can be deduced both from @cite_12 and from the results of this paper.
{ "cite_N": [ "@cite_21", "@cite_12" ], "mid": [ "2282669851", "2000931246" ], "abstract": [ "We prove a complexity dichotomy theorem for counting weighted Boolean CSP modulo k for any positive integer k > 1. This generalizes a theorem by Faben for the unweighted setting. In the weighted setting, there are new interesting tractable problems. We first prove a dichotomy theorem for the finite field case where k is a prime. It turns out that the dichotomy theorem for the finite field is very similar to the one for the complex weighted Boolean #CSP, found by [Cai, Lu and Xia, STOC 2009]. Then we further extend the result to an arbitrary integer k.", "Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k , then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that kF5.191. This upper bound was improved by to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601q . In general, by letting the U This work was performed while the first author was visiting the School of Computer Science, Carleton Ž University, and was partially supported by NSERC Natural Sciences and Engineering Research Council . of Canada , and by a grant from the University of Patras for sabbatical leaves. The second and third Ž authors were supported in part by grants from NSERC Natural Sciences and Engineering Research . Council of Canada . During the last stages of this research, the first and last authors were also partially Ž . supported by EU ESPRIT Long-Term Research Project ALCOM-IT Project No. 20244 . †An extended abstract of this paper was published in the Proceedings of the Fourth Annual European Ž Symposium on Algorithms, ESA’96, September 25]27, 1996, Barcelona, Spain Springer-Verlag, LNCS, . pp. 27]38 . That extended abstract was coauthored by the first three authors of the present paper. Correspondence to: L. M. Kirousis Q 1998 John Wiley & Sons, Inc. CCC 1042-9832r98r030253-17 253" ] }
0812.2049
2950274370
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer), which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g., rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, Top-k ranking queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called the and/xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
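As a small, self-contained illustration of the consensus answer defined above, the Python sketch below finds, by brute force, the deterministic answer that minimizes the expected symmetric-difference distance to a handful of possible answers. The possible worlds, their probabilities, and the tuple names are invented for the example, and the symmetric-difference metric is only one of the distance metrics to which the notion applies.

from itertools import chain, combinations

# Toy "possible answers": each possible world yields an answer set with a probability.
possible_answers = [
    (frozenset({"t1", "t2"}), 0.5),
    (frozenset({"t1"}),       0.3),
    (frozenset({"t2", "t3"}), 0.2),
]

def expected_distance(candidate, answers):
    # Expected symmetric-difference distance of a candidate answer to the possible answers.
    return sum(p * len(candidate ^ ans) for ans, p in answers)

def consensus_answer(answers):
    # Brute force over all subsets of the seen tuples; feasible only for tiny examples.
    universe = frozenset().union(*(ans for ans, _ in answers))
    subsets = chain.from_iterable(combinations(universe, r)
                                  for r in range(len(universe) + 1))
    return min((frozenset(s) for s in subsets),
               key=lambda cand: expected_distance(cand, answers))

# For the symmetric-difference metric this coincides with keeping exactly the tuples
# whose marginal probability exceeds 1/2 (here t1 with 0.8 and t2 with 0.7).
print(consensus_answer(possible_answers))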
There has been much work on managing probabilistic, uncertain, incomplete, and/or fuzzy data in database systems, and this area has received renewed attention in the last few years (see e.g. @cite_32 @cite_18 @cite_6 @cite_15 @cite_34 @cite_10 @cite_17 @cite_40 @cite_19 @cite_26 ). This work has spanned a range of issues, from the theoretical development of data models and data languages to practical implementation issues such as indexing techniques. In terms of representation power, most of this work has either assumed independence between the tuples @cite_34 @cite_40 , or has restricted the correlations that can be modeled @cite_18 @cite_6 @cite_12 @cite_21 . Several approaches for modeling complex correlations in probabilistic databases have also been proposed @cite_42 @cite_9 @cite_4 @cite_7 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_7", "@cite_9", "@cite_21", "@cite_42", "@cite_32", "@cite_6", "@cite_19", "@cite_40", "@cite_15", "@cite_34", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2114157818", "1963853643", "2120825705", "2125791539", "1511986666", "2093149131", "2074622901", "1551374365", "2013333366", "2114242687", "1522055873", "2022501110", "1992609556", "1564629734", "2169600045", "2952367005" ], "abstract": [ "Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensory readings for a variety of reasons, including motion prediction and human behavior modeling. Such probabilistic data analyses require sophisticated machine-learning tools that can effectively model the complex spatio temporal correlation patterns present in uncertain sensory data. Unfortunately, to date, most existing approaches to probabilistic database systems have relied on somewhat simplistic models of uncertainty that can be easily mapped onto existing relational architectures: Probabilistic information is typically associated with individual data tuples, with only limited or no support for effectively capturing and reasoning about complex data correlations. In this paper, we introduce BayesStore, a novel probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools as first-class citizens of the database system. Adopting a machine-learning view, BAYESSTORE employs concise statistical relational models to effectively encode the correlation patterns between uncertain data, and promotes probabilistic inference and statistical model manipulation as part of the standard DBMS operator repertoire to support efficient and sound query processing. We present BAYESSTORE's uncertainty model based on a novel, first-order statistical model, and we redefine traditional query processing operators, to manipulate the data and the probabilistic models of the database in an efficient manner. Finally, we validate our approach, by demonstrating the value of exploiting data correlations during query processing, and by evaluating a number of optimizations which significantly accelerate query processing.", "Due to numerous applications producing noisy data, e.g., sensor data, experimental data, data from uncurated sources, information extraction, etc., there has been a surge of interest in the development of probabilistic databases. Most probabilistic database models proposed to date, however, fail to meet the challenges of real-world applications on two counts: (1) they often restrict the kinds of uncertainty that the user can represent; and (2) the query processing algorithms often cannot scale up to the needs of the application. In this work, we define a probabilistic database model, PrDB, that uses graphical models, a state-of-the-art probabilistic modeling technique developed within the statistics and machine learning community, to model uncertain data. We show how this results in a rich, complex yet compact probabilistic database model, which can capture the commonly occurring uncertainty models (tuple uncertainty, attribute uncertainty), more complex models (correlated tuples and attributes) and allows compact representation (shared and schema-level correlations). 
In addition, we show how query evaluation in PrDB translates into inference in an appropriately augmented graphical model. This allows us to easily use any of a myriad of exact and approximate inference algorithms developed within the graphical modeling community. While probabilistic inference provides a generic approach to solving queries, we show how the use of shared correlations, together with a novel inference algorithm that we developed based on bisimulation, can speed query processing significantly. We present a comprehensive experimental evaluation of the proposed techniques and show that even with a few shared correlations, significant speedups are possible.", "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.", "It is often desirable to represent in a database, entities whose properties cannot be deterministically classified. The authors develop a data model that includes probabilities associated with the values of the attributes. The notion of missing probabilities is introduced for partially specified probability distributions. This model offers a richer descriptive language allowing the database to more accurately reflect the uncertain real world. Probabilistic analogs to the basic relational operators are defined and their correctness is studied. A set of operators that have no counterpart in conventional relational systems is presented. >", "Most tasks require a person or an automated system to reason -- to reach conclusions based on available information. The framework of probabilistic graphical models, presented in this book, provides a general approach for this task. The approach is model-based, allowing interpretable models to be constructed and then manipulated by reasoning algorithms. These models can also be learned automatically from data, allowing the approach to be used in cases where manually constructing a model is difficult or even impossible. Because uncertainty is an inescapable aspect of most real-world applications, the book focuses on probabilistic models, which make the uncertainty explicit and provide models that are more faithful to reality. 
Probabilistic Graphical Models discusses a variety of models, spanning Bayesian networks, undirected Markov networks, discrete and continuous models, and extensions to deal with dynamical systems and relational data. For each class of models, the text describes the three fundamental cornerstones: representation, inference, and learning, presenting both basic concepts and advanced techniques. Finally, the book considers the use of the proposed framework for causal reasoning and decision making under uncertainty. The main text in each chapter provides the detailed technical development of the key ideas. Most chapters also include boxes with additional material: skill boxes, which describe techniques; case study boxes, which discuss empirical cases related to the approach described in the text, including applications in computer vision, robotics, natural language understanding, and computational biology; and concept boxes, which present significant concepts drawn from the material in the chapter. Instructors (and readers) can group chapters in various combinations, from core topics to more technically advanced material, to suit their particular needs.", "A wide range of applications have recently emerged that need to manage large, imprecise data sets. The reasons for imprecision in data are as diverse as the applications themselves: in sensor and RFID data, imprecision is due to measurement errors [15, 34]; in information extraction, imprecision comes from the inherent ambiguity in natural-language text [20, 26]; and in business intelligence, imprecision is tolerated because of the high cost of data cleaning [5]. In some applications, such as privacy, it is a requirement that the data be less precise. For example, imprecision is purposely inserted to hide sensitive attributes of individuals so that the data may be published [30]. Imprecise data has no place in traditional, precise database applications like payroll and inventory, and so, current database management systems are not prepared to deal with it. In contrast, the newly emerging applications offer value precisely because they query, search, and aggregate large volumes of imprecise data to find the “diamonds in the dirt”. This wide-variety of new applications points to the need for generic tools to manage imprecise data. In this paper, we survey the state of the art of techniques that handle imprecise data, by modeling it as probabilistic data [2–4,7,12,15,23,27,36]. A probabilistic database management system, or ProbDMS, is a system that stores large volumes of probabilistic data and supports complex queries. A ProbDMS may also need to perform some additional tasks, such as updates or recovery, but these do not differ from those in conventional database management systems and will not be discussed here. The major challenge in a ProbDMS is that it needs both to scale to large data volumes, a core competence of database management systems, and to do probabilistic inference, which is a problem studied in AI. While many scalable data management systems exists, probabilistic inference is a hard problem [35], and current systems do not scale to the same extent as data management systems do. To address this challenge, researchers have focused on the specific", "Abstract This paper deals with relational databases which are extended in the sense that fuzzily known values are allowed for attributes. 
Precise as well as partial (imprecise, uncertain) knowledge concerning the value of the attributes are represented by means of [0,1]-valued possibility distributions in Zadeh's sense. Thus, we have to manipulate ordinary relations on Cartesian products of sets of fuzzy subsets rather than fuzzy relations. Besides, vague queries whose contents are also represented by possibility distributions can be taken into account. The basic operations of relational algebra, union, intersection, Cartesian product, projection, and selection are extended in order to deal with partial information and vague queries. Approximate equalities and inequalities modeled by fuzzy relations can also be taken into account in the selection operation. Then, the main features of a query language based on the extended relational algebra are presented. An illustrative example is provided. This approach, which enables a very general treatment of relational databases with fuzzy attribute values, makes an extensive use of dual possibility and necessity measures.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts. For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore ofthe answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our ‘treatment extends naturally to partiallyincorrect databases. Permission to copy without fee all or part of this material is granted provided that the copies aTe not made OT distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, OT to republish, requirea a fee and or special permission from the Endowment. Proceedings of the 22nd VLDB Conference Mumbai(Bombay), India, 1996", "In applications like location-based services, sensor monitoring and biological databases, the values of the database items are inherently uncertain in nature. An important query for uncertain objects is the probabilistic nearest-neighbor query (PNN), which computes the probability of each object for being the nearest neighbor of a query point. Evaluating this query is computationally expensive, since it needs to consider the relationship among uncertain objects, and requires the use of numerical integration or Monte-Carlo methods. Sometimes, a query user may not be concerned about the exact probability values. For example, he may only need answers that have sufficiently high confidence. 
We thus propose the constrained nearest-neighbor query (C-PNN), which returns the IDs of objects whose probabilities are higher than some threshold, with a given error bound in the answers. The C-PNN can be answered efficiently with probabilistic verifiers. These are methods that derive the lower and upper bounds of answer probabilities, so that an object can be quickly decided on whether it should be included in the answer. We have developed three probabilistic verifiers, which can be used on uncertain data with arbitrary probability density functions. Extensive experiments were performed to examine the effectiveness of these approaches.", "There has been a recent surge in work in probabilistic databases, propelled in large part by the huge increase in noisy data sources --- from sensor data, experimental data, data from uncurated sources, and many others. There is a growing need for database management systems that can efficiently represent and query such data. In this work, we show how data characteristics can be leveraged to make the query evaluation process more efficient. In particular, we exploit what we refer to as shared correlations where the same uncertainties and correlations occur repeatedly in the data. Shared correlations occur mainly due to two reasons: (1) Uncertainty and correlations usually come from general statistics and rarely vary on a tuple-to-tuple basis; (2) The query evaluation procedure itself tends to re-introduce the same correlations. Prior work has shown that the query evaluation problem on probabilistic databases is equivalent to a probabilistic inference problem on an appropriately constructed probabilistic graphical model (PGM). We leverage this by introducing a new data structure, called the random variable elimination graph (rv-elim graph) that can be built from the PGM obtained from query evaluation. We develop techniques based on bisimulation that can be used to compress the rv-elim graph exploiting the presence of shared correlations in the PGM, the compressed rv-elim graph can then be used to run inference. We validate our methods by evaluating them empirically and show that even with a few shared correlations significant speed-ups are possible.", "To speed up multidimensional data analysis, database systems frequently precompute aggregates on some subsets of dimensions and their corresponding hierarchies. This improves query response time. However, the decision of what and how much to precompute is a difficult one. It is further complicated by the fact that precomputation in the presence of hierarchies can result in an unintuitively large increase in the amount of storage required by the database. Hence, it is interesting and useful to estimate the storage blowup that will result from a proposed set of precomputations without actually computing them. We propose three strategies for this problem: one based on sampling, one based on mathematical approximation, and one based on probabilistic counting. We investigate the accuracy of these algorithms in estimating the blowup for different data distributions and database schemas. The algorithm based upon probabilistic counting is particularly attractive, since it estimates the storage blowup to within provable error bounds while performing only a single scan of the data. *Work supported by an IBM CAS Fellowship, NSF grant IRI9157357, and a grant from IBM under the University Partnership Program. 
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying 1:s by permission of the Very Large Data Base Endowm.ent. To copy otherwise, 01‘ to republish, requires a fee and or special permission j orn the En.do?ument. Proceedings of the 22nd VLDB Conference Mumbai(Bombay), India, 1996", "In emerging applications such as location-based services, sensor monitoring and biological management systems, the values of the database items are naturally imprecise. For these uncertain databases, an important query is the Probabilistic k-Nearest-Neighbor Query (k-PNN), which computes the probabilities of sets of k objects for being the closest to a given query point. The evaluation of this query can be both computationally- and I O-expensive, since there is an exponentially large number of k object-sets, and numerical integration is required. Often a user may not be concerned about the exact probability values. For example, he may only need answers that have sufficiently high confidence. We thus propose the Probabilistic Threshold k-Nearest-Neighbor Query (T-k-PNN), which returns sets of k objects that satisfy the query with probabilities higher than some threshold T. Three steps are proposed to handle this query efficiently. In the first stage, objects that cannot constitute an answer are filtered with the aid of a spatial index. The second step, called probabilistic candidate selection, significantly prunes a number of candidate sets to be examined. The remaining sets are sent for verification, which derives the lower and upper bounds of answer probabilities, so that a candidate set can be quickly decided on whether it should be included in the answer. We also examine spatially-efficient data structures that support these methods. Our solution can be applied to uncertain data with arbitrary probability density functions. We have also performed extensive experiments to examine the effectiveness of our methods.", "We present a probabilistic relational algebra (PRA) which is a generalization of standard relational algebra. In PRA, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Based on intensional semantics, the tuple weights of the result of a PRA expression always conform to the underlying probabilistic model. We also show for which expressions extensional semantics yields the same results. Furthermore, we discuss complexity issues and indicate possibilities for optimization. With regard to databases, the approach allows for representing imprecise attribute values, whereas for information retrieval, probabilistic document indexing and probabilistic search term weighting can be modeled. We introduce the concept of vague predicates which yield probabilistic weights instead of Boolean values, thus allowing for queries with vague selection conditions. With these features, PRA implements uncertainty and vagueness in combination with the relational model.", "We investigate the class of computable probability distributions and explore the fundamental limitations of using this class to describe and compute conditional distributions. 
In addition to proving the existence of noncomputable conditional distributions, and thus ruling out the possibility of generic probabilistic inference algorithms (even inefficient ones), we highlight some positive results showing that posterior inference is possible in the presence of additional structure like exchangeability and noise, both of which are common in Bayesian hierarchical modeling. This theoretical work bears on the development of probabilistic programming languages (which enable the specification of complex probabilistic models) and their implementations (which can be used to perform Bayesian reasoning). The probabilistic programming approach is particularly well suited for defining infinite-dimensional, recursively-defined stochastic processes of the sort used in nonparametric Bayesian statistics. We present a new construction of the Mondrian process as a partition-valued Markov process in continuous time, which can be viewed as placing a distribution on an infinite kd-tree data structure. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)", "Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techniques from constraint satisfaction, such as the Davis-Putnam algorithm. We complement this with a thorough experimental evaluation of the algorithms proposed. Our experiments show that our exact algorithms scale well to realistic database sizes and can in some scenarios compete with the most efficient previous approximation algorithms.", "Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases however often involve the conditioning of a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. 
We study several problem decomposition methods and heuristics that are based on the most successful search techniques from constraint satisfaction, such as the Davis-Putnam algorithm. We complement this with a thorough experimental evaluation of the algorithms proposed. Our experiments show that our exact algorithms scale well to realistic database sizes and can in some scenarios compete with the most efficient previous approximation algorithms." ] }
0812.2049
2950274370
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer), which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g., rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, Top-k ranking queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called the and/xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
In recent years, there has also been much work on efficiently answering different types of queries over probabilistic databases. @cite_29 first considered the problem of ranking over probabilistic databases and proposed two ranking functions that combine the tuple scores and probabilities. @cite_35 presented improved algorithms for the same ranking functions. Zhang and Chomicki @cite_36 presented desiderata for ranking functions and proposed Global queries. Ming @cite_5 @cite_3 recently presented a different ranking function called Probabilistic threshold queries. Finally, @cite_37 also presented a semantics for ranking functions and a new ranking function called expected rank. In recent work, we proposed parameterized ranking functions and presented general algorithms for evaluating them @cite_31 . Other types of queries have also been considered recently over probabilistic databases (e.g., clustering @cite_16 , nearest neighbors @cite_41 ).
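To make one of these semantics concrete, the sketch below enumerates the possible worlds of a tiny tuple-level uncertain relation under the independent-tuples assumption and computes, for every tuple, the probability that it appears among the k highest-scoring tuples of a world. This is only one of the several ranking semantics discussed above, and the relation, the scores, and the probabilities are invented for illustration.

from itertools import product

# Toy uncertain relation under the independent-tuples model:
# each entry is (tuple id, score, existence probability).
relation = [("a", 90, 0.6), ("b", 80, 0.9), ("c", 70, 0.5)]

def top_k_membership_probabilities(relation, k):
    # Enumerate all 2**n possible worlds and accumulate, for every tuple, the
    # probability mass of the worlds in which it ranks among the top k by score.
    probs = {tid: 0.0 for tid, _, _ in relation}
    for pattern in product((False, True), repeat=len(relation)):
        world_prob = 1.0
        world = []
        for present, (tid, score, p) in zip(pattern, relation):
            world_prob *= p if present else (1.0 - p)
            if present:
                world.append((score, tid))
        world.sort(reverse=True)
        for _, tid in world[:k]:
            probs[tid] += world_prob
    return probs

print(top_k_membership_probabilities(relation, k=2))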
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_36", "@cite_41", "@cite_29", "@cite_3", "@cite_5", "@cite_31", "@cite_16" ], "mid": [ "2120342618", "2140237757", "2022501110", "2107105629", "2128230033", "2159545104", "2013333366", "2118291925", "2116440837" ], "abstract": [ "The dramatic growth in the number of application domains that naturally generate probabilistic, uncertain data has resulted in a need for efficiently supporting complex querying and decision-making over such data. In this paper, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criteria optimization problem, and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRFω and PRFe, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating functions-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRFe, at approximating other ranking functions and the scalability of our proposed algorithms for exact or approximate ranking.", "When dealing with massive quantities of data, top-k queries are a powerful technique for returning only the k most relevant tuples for inspection, based on a scoring function. The problem of efficiently answering such ranking queries has been studied and analyzed extensively within traditional database settings. The importance of the top-k is perhaps even greater in probabilistic databases, where a relation can encode exponentially many possible worlds. There have been several recent attempts to propose definitions and algorithms for ranking queries over probabilistic data. However, these all lack many of the intuitive properties of a top-k over deterministic data. Specifically, we define a number of fundamental properties, including exact-k, containment, unique-rank, value-invariance, and stability, which are all satisfied by ranking queries on certain data. We argue that all these conditions should also be fulfilled by any reasonable definition for ranking uncertain data. Unfortunately, none of the existing definitions is able to achieve this. To remedy this shortcoming, this work proposes an intuitive new approach of expected rank. This uses the well-founded notion of the expected rank of each tuple across all possible worlds as the basis of the ranking. We are able to prove that, in contrast to all existing approaches, the expected rank satisfies all the required properties for a ranking query. We provide efficient solutions to compute this ranking across the major models of uncertain data, such as attribute-level and tuple-level uncertainty. For an uncertain relation of N tuples, the processing cost is O(N logN)—no worse than simply sorting the relation. 
In settings where there is a high cost for generating each tuple in turn, we provide pruning techniques based on probabilistic tail bounds that can terminate the search early and guarantee that the top-k has been found. Finally, a comprehensive experimental study confirms the effectiveness of our approach.", "In emerging applications such as location-based services, sensor monitoring and biological management systems, the values of the database items are naturally imprecise. For these uncertain databases, an important query is the Probabilistic k-Nearest-Neighbor Query (k-PNN), which computes the probabilities of sets of k objects for being the closest to a given query point. The evaluation of this query can be both computationally- and I O-expensive, since there is an exponentially large number of k object-sets, and numerical integration is required. Often a user may not be concerned about the exact probability values. For example, he may only need answers that have sufficiently high confidence. We thus propose the Probabilistic Threshold k-Nearest-Neighbor Query (T-k-PNN), which returns sets of k objects that satisfy the query with probabilities higher than some threshold T. Three steps are proposed to handle this query efficiently. In the first stage, objects that cannot constitute an answer are filtered with the aid of a spatial index. The second step, called probabilistic candidate selection, significantly prunes a number of candidate sets to be examined. The remaining sets are sent for verification, which derives the lower and upper bounds of answer probabilities, so that a candidate set can be quickly decided on whether it should be included in the answer. We also examine spatially-efficient data structures that support these methods. Our solution can be applied to uncertain data with arbitrary probability density functions. We have also performed extensive experiments to examine the effectiveness of our methods.", "Various approaches for keyword proximity search have been implemented in relational databases, XML and the Web. Yet, in all of them, an answer is a Q-fragment, namely, a subtree T of the given data graph G, such that T contains all the keywords of the query Q and has no proper subtree with this property. The rank of an answer is inversely proportional to its weight. Three problems are of interest: finding an optimal (i.e., top-ranked) answer, computing the top-k answers and enumerating all the answers in ranked order. It is shown that, under data complexity, an efficient algorithm for solving the first problem is sufficient for solving the other two problems with polynomial delay. Similarly, an efficient algorithm for finding a θ-approximation of the optimal answer suffices for carrying out the following two tasks with polynomial delay, under query-and-data complexity. First, enumerating in a (θ+1)-approximate order. Second, computing a (θ+1)-approximation of the top-k answers. As a corollary, this paper gives the first efficient algorithms, under data complexity, for enumerating all the answers in ranked order and for computing the top-k answers. It also gives the first efficient algorithms, under query-and-data complexity, for enumerating in a provably approximate order and for computing an approximation of the top-k answers.", "We address the problem of finding a \"best\" deterministic query answer to a query over a probabilistic database. 
For this purpose, we propose the notion of a consensus world (or a consensus answer) which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent information aggregation problems (e.g. rank aggregation) to probabilistic databases. We consider this problem for various types of queries including SPJ queries, Top-k ranking queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called and xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.", "Ranking a set of retrieved documents according to their relevance to a query is a popular problem in information retrieval. Methods that learn ranking functions are difficult to optimize, as ranking performance is typically judged by metrics that are not smooth. In this paper we propose a new listwise approach to learning to rank. Our method creates a conditional probability distribution over rankings assigned to documents for a given query, which permits gradient ascent optimization of the expected value of some performance measure. The rank probabilities take the form of a Boltzmann distribution, based on an energy function that depends on a scoring function composed of individual and pairwise potentials. Including pairwise potentials is a novel contribution, allowing the model to encode regularities in the relative scores of documents; existing models assign scores at test time based only on individual documents, with no pairwise constraints between documents. Experimental results on the LETOR3.0 data set show that our method out-performs existing learning approaches to ranking.", "In applications like location-based services, sensor monitoring and biological databases, the values of the database items are inherently uncertain in nature. An important query for uncertain objects is the probabilistic nearest-neighbor query (PNN), which computes the probability of each object for being the nearest neighbor of a query point. Evaluating this query is computationally expensive, since it needs to consider the relationship among uncertain objects, and requires the use of numerical integration or Monte-Carlo methods. Sometimes, a query user may not be concerned about the exact probability values. For example, he may only need answers that have sufficiently high confidence. We thus propose the constrained nearest-neighbor query (C-PNN), which returns the IDs of objects whose probabilities are higher than some threshold, with a given error bound in the answers. The C-PNN can be answered efficiently with probabilistic verifiers. These are methods that derive the lower and upper bounds of answer probabilities, so that an object can be quickly decided on whether it should be included in the answer. We have developed three probabilistic verifiers, which can be used on uncertain data with arbitrary probability density functions. Extensive experiments were performed to examine the effectiveness of these approaches.", "We study the problem of answering spatial queries in databases where objects exist with some uncertainty and they are associated with an existential probability. 
The goal of a thresholding probabilistic spatial query is to retrieve the objects that qualify the spatial predicates with probability that exceeds a threshold. Accordingly, a ranking probabilistic spatial query selects the objects with the highest probabilities to qualify the spatial predicates. We propose adaptations of spatial access methods and search algorithms for probabilistic versions of range queries, nearest neighbors, spatial skylines, and reverse nearest neighbors and conduct an extensive experimental study, which evaluates the effectiveness of proposed solutions.", "Probabilistic data have recently become popular in applications such as scientific and geospatial databases. For images and other spatial datasets, probabilistic values can capture the uncertainty in extent and class of the objects in the images. Relating one such dataset to another by spatial joins is an important operation for data management systems. We consider probabilistic spatial join (PSJ) queries, which rank the results according to a score that incorporates both the uncertainties associated with the objects and the distances between them. We present algorithms for two kinds of PSJ queries: Threshold PSJ queries, which return all pairs that score above a given threshold, and top-k PSJ queries, which return the k top-scoring pairs. For threshold PSJ queries, we propose a plane sweep algorithm that, because it exploits the special structure of the problem, runs in 0(n (log n + k)) time, where n is the number of points and k is the number of results. We extend the algorithms to 2-D data and to top-k PSJ queries. To further speed up top-k PSJ queries, we develop a scheduling technique that estimates the scores at the level of blocks, then hands the blocks to the plane sweep algorithm. By finding high-scoring pairs early, the scheduling allows a large portion of the datasets to be pruned. Experiments demonstrate speed-ups of two orders of magnitude." ] }
0812.0790
2952665523
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justifications of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
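The following Python fragment gives a deliberately simplified, graph-flavoured illustration of the kind of explanation described above: for each atom that is true in a given answer set, it records a rule that supports the atom together with the body literals that make that rule applicable. It is not the paper's formal definition of an off-line justification; the toy program, the answer set, and the edge labels are assumptions made only for this example.

# Rules are triples (head, positive_body, negative_body); the program and
# answer set below are invented toy data.
program = [
    ("p", ("q",), ("r",)),   # p :- q, not r.
    ("q", (),     ()),       # q.
    ("s", ("r",), ()),       # s :- r.
]
answer_set = {"p", "q"}

def support_edges(program, answer_set):
    # For every true atom, emit labelled edges to the body literals of a rule
    # whose body is satisfied by the answer set ('+' for positive, '-' for negated).
    edges = []
    for head, pos, neg in program:
        applicable = (all(a in answer_set for a in pos) and
                      all(a not in answer_set for a in neg))
        if head in answer_set and applicable:
            edges += [(head, a, "+") for a in pos]
            edges += [(head, a, "-") for a in neg]
    return edges

for head, lit, sign in support_edges(program, answer_set):
    print(f"{head} -[{sign}]-> {lit}")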
[1.] Program instrumentation and execution: assertion-based debugging (e.g., @cite_21 ) and algorithmic debugging @cite_25 are examples of approaches focused on this first phase.
{ "cite_N": [ "@cite_21", "@cite_25" ], "mid": [ "1514468887", "2044672898" ], "abstract": [ "The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize.", "Algorithmic Debugging is a theory of debugging that uses queries on the compositional semantics of a program in order to localize bugs. It uses the following principle: if a computation of a program's component gives an incorrect result, while all the subcomputations it invokes compute correct results, then the code of this component is erroneous. Algorithmic Debugging is applied, in this work, to reactive systems, in particular to programs written in Flat Concurrent Prolog (FCP). Debugging reactive systems is known to be more difficult than the debugging of functional systems. A functional system is fully described by the relation between its initial input and final output; this context-freedom is used in debugging. A reactive system continuously responds to external inputs, thus its debugging cannot make use of context-free input output relations. Given a compositional semantic model for a concurrent programming language, we demonstrate how one can directly apply the ideas of Algorithmic Debugging to obtain a theory of program debugging for the considered language. The conflict between the context-freedom of input output relations and the reactive nature of concurrent systems is resolved by using semantic objects which record the reactive nature of the system's components. In functional algorithmic debugging the queries relate to input output relations; in concurrent algorithmic debugging the queries refer to semantic objects called processes which capture the reactive nature of FCP computations. A diagnosis algorithm for incorrect FCP programs is proposed. The algorithm gets an erroneous computation and using queries isolates an erroneous clause or an incomplete procedure. An FCP implementation of the diagnosis algorithm demonstrates the usefulness as well as the difficulties of Algorithmic Debugging of FCP programs." ] }
0812.0790
2952665523
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justifications of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
[2.] Data Collection: focuses on collecting from the execution the data necessary to understand it, as in event-based debugging @cite_55 , tracing, and explanation-based debugging @cite_39 @cite_50 .
{ "cite_N": [ "@cite_55", "@cite_50", "@cite_39" ], "mid": [ "2108743083", "2129879174", "2130714105" ], "abstract": [ "Event extraction is a particularly challenging type of information extraction (IE). Most current event extraction systems rely on local information at the phrase or sentence level. However, this local context may be insufficient to resolve ambiguities in identifying particular types of events; information from a wider scope can serve to resolve some of these ambiguities. In this paper, we use document level information to improve the performance of ACE event extraction. In contrast to previous work, we do not limit ourselves to information about events of the same type, but rather use information about other types of events to make predictions or resolve ambiguities regarding a given event. We learn such relationships from the training corpus and use them to help predict the occurrence of events and event arguments in a text. Experiments show that we can get 9.0 (absolute) gain in trigger (event) classification, and more than 8 gain for argument (role) classification in ACE event extraction.", "In this paper, we present a semi-automatic approach for summarizing the content of large execution traces. Similar to text summarization, where abstracts can be extracted from large documents, the aim of trace summarization is to take an execution trace as input and return a summary of its main content as output. The resulting summary can then be converted into a UML sequence diagram and used by software engineers to understand the main behavioural aspects of the system. Our approach to trace summarization is based on the removal of implementation details such as utilities from execution traces. To achieve our goal, we have developed a metric based on fan-in and fan-out to rank the system components according to whether they implement key system concepts or they are mere implementation details. We applied our approach to a trace generated from an object-oriented system called Weka that initially contains 97413 method calls. We succeeded to extract a summary from this trace that contains 453 calls. According to the developers of the Weka system, the resulting summary is an adequate high-level representation of the main interactions of the traced scenario.", "Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data. The state-of-the-art research on the task is transductive inference (e.g. cross-event inference). In this paper, we propose a new method of event extraction by well using cross-entity inference. In contrast to previous inference methods, we regard entity-type consistency as key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we can get 8.6 gain in trigger (event) identification, and more than 11.8 gain for argument (role) classification in ACE event extraction." ] }
0812.0790
2952665523
The paper introduces the notion of off-line justification for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justification of atoms during the computation of an answer set (on-line justification), and presents an integration of on-line justifications within the computation model of Smodels. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in ASP-PROLOG. (To appear in Theory and Practice of Logic Programming (TPLP))
The use of graphs proposed in this paper is complementary to the view proposed by other authors, who use graph structures as a means to describe answer set programs, to make structural properties explicit, and to support the execution of the program. In @cite_45 @cite_27 , graph representations of answer set programs are employed to model the computation of answer sets as special forms of graph coloring. A comprehensive survey of alternative graph representations of answer set programs, and of their properties with respect to the problem of answer set characterization, has been presented in @cite_7 @cite_10 . In particular, the authors provide characterizations of desirable graph representations, relating the existence of answer sets to the presence of cycles, and using colorings to characterize properties of programs (e.g., consistency). We conjecture that the outcome of a successful coloring of an EDG @cite_7 representing one answer set can be projected, modulo non-obvious transformations, to an off-line justification graph, and vice versa. On the other hand, the notion of on-line justification does not seem to have a direct relation to the graph representations presented in the cited works.
{ "cite_N": [ "@cite_27", "@cite_45", "@cite_10", "@cite_7" ], "mid": [ "2166174694", "95993446", "2070951159", "2115831572" ], "abstract": [ "We investigate the usage of rule dependency graphs and their colorings for characterizing and computing answer sets of logic programs. This approach provides us with insights into the interplay between rules when inducing answer sets. We start with different characterizations of answer sets in terms of totally colored dependency graphs that differ in graph-theoretical aspects. We then develop a series of operational characterizations of answer sets in terms of operators on partial colorings. In analogy to the notion of a derivation in proof theory, our operational characterizations are expressed as (non-deterministically formed) sequences of colorings, turning an uncolored graph into a totally colored one. In this way, we obtain an operational framework in which different combinations of operators result in different formal properties. Among others, we identify the basic strategy employed by the noMoRe system and justify its algorithmic approach. Furthermore, we distinguish operations corresponding to Fitting's operator as well as to well-founded semantics.", "characterized in terms of properties of Rule Graphs. We show that, unfortunately, also the RG is ambiguous with respect to the answer set semantics, while the EDG is isomorphic to the program it represents. We argue that the reason of this drawback of the RG as a software engineering tool relies in the absence of a distinction between the different kinds of connections between cycles. Finally, we suggest that properties of a program might be characterized(andchecked)intermsofadmissiblecolorings of the EDG.", "Logic programs under Answer Sets semantics can be studied, and actual computation can be carried out, by means of representing them by directed graphs. Several reductions of logic programs to directed graphs are now available. We compare our proposed representation, called Extended Dependency Graph, to the Block Graph representation recently defined by Linke [Proc. IJCAI-2001, 2001, pp. 641-648]. On the relevant fragment of well-founded irreducible programs, extended dependency and block graph turns out to be isomorphic. So, we argue that graph representation of general logic programs should be abandoned in favor of graph representation of well-founded irreducible programs, which are more concise, more uniform in structure while being equally expressive.", "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 - epsiv)chi colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of 1,...,k n, as edges are added. 
We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ges 2chi, but like an error-correcting code for k les (2 - epsiv)chi. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs." ] }
0812.0147
2950083903
In this paper we analyze the performance of Warning Propagation, a popular message passing algorithm. We show that for 3CNF formulas drawn from a certain distribution over random satisfiable 3CNF formulas, commonly referred to as the planted-assignment distribution, running Warning Propagation in the standard way (run message passing until convergence, simplify the formula according to the resulting assignment, and satisfy the remaining subformula, if necessary, using a simple "off the shelf" heuristic) results in a satisfying assignment when the clause-variable ratio is a sufficiently large constant.
As for relevant results in random graph theory, the seminal work of Alon and Kahale @cite_28 paved the way toward dealing with large-constant-degree planted distributions. @cite_28 present an algorithm that @math @math -colors planted @math -colorable graphs (the distribution of graphs generated by partitioning the @math vertices into @math equally-sized color classes, and including every edge connecting two different color classes with probability @math ; commonly denoted @math ) with a sufficiently large constant expected degree. Building upon the techniques introduced in @cite_28 , Chen and Frieze @cite_14 present an algorithm that 2-colors large-constant-degree planted 3-uniform bipartite hypergraphs, and Flaxman @cite_7 presents an algorithm for satisfying planted 3SAT instances with a large constant clause-variable ratio.
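For concreteness, the two planted distributions mentioned above can be sampled as follows. The Python sketch below is our own illustration of the generative processes just described (it is not code from @cite_28 or @cite_7 ; all function names and parameters are ours): a planted k-colorable graph is obtained by splitting the vertices into k equally-sized color classes and keeping each cross-class edge with probability p, and one common variant of planted 3SAT fixes a hidden assignment and samples clauses uniformly among those it satisfies.

import random

def planted_k_colorable(n, k, p, seed=None):
    # Split n vertices into k (nearly) equal color classes and include
    # every edge joining two different classes independently with prob. p.
    rng = random.Random(seed)
    color = [i % k for i in range(n)]          # color class of each vertex
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if color[u] != color[v] and rng.random() < p]
    return color, edges

def planted_3sat(n, m, seed=None):
    # Fix a hidden (planted) assignment, then sample m clauses uniformly
    # at random among the 3-clauses satisfied by that assignment.
    rng = random.Random(seed)
    planted = [rng.choice([True, False]) for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vars3 = rng.sample(range(n), 3)
        clause = [(v, rng.choice([+1, -1])) for v in vars3]   # (variable, sign)
        if any((s == +1) == planted[v] for v, s in clause):   # keep satisfied clauses only
            clauses.append(clause)
    return planted, clauses

# Example: the expected degree of the planted graph is roughly p * n * (k-1) / k,
# and the clause-variable ratio of the planted formula is m / n.
colors, graph_edges = planted_k_colorable(300, 3, 0.1, seed=0)
assignment, formula = planted_3sat(200, 1200, seed=0)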
{ "cite_N": [ "@cite_28", "@cite_14", "@cite_7" ], "mid": [ "2115831572", "2534944111", "2065826935" ], "abstract": [ "For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 - epsiv)chi colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of 1,...,k n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ges 2chi, but like an error-correcting code for k les (2 - epsiv)chi. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs.", "We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: • We present a poly log n round deterministic algorithm for (2Δ−1)·(1+o(1))-edge-coloring, where Δ denotes the maximum degree. Modulo the 1 + o(1) factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). Indeed, a weaker requirement of (2Δ − 1) · poly log Δ-edge-coloring in poly log n rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. • We show that sinkless orientation---i.e., orienting edges such that each node has at least one out-going edge---on Δ-regular graphs can be solved in O(logΔ log n) rounds randomized and in O(logΔ n) rounds deterministically. These prove the corresponding lower bounds by [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of for Δ-coloring Δ-regular trees. • We present a randomized O(log4 n) round algorithm for orienting a-arboricity graphs with maximum out-degree a(1 + e). This can be also turned into a decomposition into a(1 + e) forests when a = Ω(log n) and into a(1 + e) pseduo-forests when a = o(log n). 
Obtaining an efficient distributed decomposition into less than 2a forests was stated as the 10th open problem in the book by Barenboim and Elkin.", "Consider an n-vertex graph G = (V, E) of maximum degree Δ, and suppose that each vertex v ∈ V hosts a processor. The processors are allowed to communicate only with their neighbors in G. The communication is synchronous, that is, it proceeds in discrete rounds. In the distributed vertex coloring problem, the objective is to color G with Δ + 1, or slightly more than Δ + 1, colors using as few rounds of communication as possible. (The number of rounds of communication will be henceforth referred to as running time.) Efficient randomized algorithms for this problem are known for more than twenty years [ 1986; Luby 1986]. Specifically, these algorithms produce a (Δ + 1)-coloring within O(log n) time, with high probability. On the other hand, the best known deterministic algorithm that requires polylogarithmic time employs O(Δ2) colors. This algorithm was devised in a seminal FOCS’87 paper by Linial [1987]. Its running time is O(log* n). In the same article, Linial asked whether one can color with significantly less than Δ2 colors in deterministic polylogarithmic time. By now, this question of Linial became one of the most central long-standing open questions in this area. In this article, we answer this question in the affirmative, and devise a deterministic algorithm that employs Δ1+o(1) colors, and runs in polylogarithmic time. Specifically, the running time of our algorithm is O(f(Δ)log Δ log n), for an arbitrarily slow-growing function f(Δ) = ω(1). We can also produce an O(Δ1+η)-coloring in O(log Δ log n)-time, for an arbitrarily small constant η > 0, and an O(Δ)-coloring in O(Δe log n) time, for an arbitrarily small constant e > 0. Our results are, in fact, far more general than this. In particular, for a graph of arboricity a, our algorithm produces an O(a1+η)-coloring, for an arbitrarily small constant η > 0, in time O(log a log n)." ] }
0812.0147
2950083903
In this paper we analyze the performance of Warning Propagation, a popular message passing algorithm. We show that for 3CNF formulas drawn from a certain distribution over random satisfiable 3CNF formulas, commonly referred to as the planted-assignment distribution, running Warning Propagation in the standard way (run message passing until convergence, simplify the formula according to the resulting assignment, and satisfy the remaining subformula, if necessary, using a simple "off the shelf" heuristic) results in a satisfying assignment when the clause-variable ratio is a sufficiently large constant.
Another difference between our work and that of @cite_28 @cite_14 @cite_7 is that, unlike the algorithms analyzed in those papers, WP is a randomized algorithm, which makes its analysis more difficult. We could have simplified our analysis had we changed WP to be deterministic (for example, by initializing all clause-variable messages to 1 in step 2 of the algorithm), but there are good reasons why WP is randomized. For example, it can be shown that the randomized version of WP converges with probability 1 on 2CNF formulas that form one cycle of implications, but might not converge if step 4 does not introduce fresh randomness in every iteration of the algorithm (details omitted).
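Since the paragraph above turns on exactly where randomness enters WP, the following Python sketch may help fix ideas. It is our own rendering of a standard Warning Propagation formulation for CNF formulas (clause-to-variable warnings in {0,1}, signed cavity fields), not the paper's pseudocode, and it does not reproduce the paper's step numbering; the two randomized ingredients discussed above appear as the random 0/1 initialization of the messages and the freshly shuffled update order in every sweep.

import random

def warning_propagation(clauses, n_vars, max_iters=100, seed=None):
    # clauses: list of clauses, each a list of (var, sign) pairs with
    # var in range(n_vars) and sign = +1 (positive) or -1 (negated literal).
    # Returns {var: True/False} for the variables fixed by warnings
    # (unwarned variables stay unassigned), or None if WP did not converge.
    rng = random.Random(seed)
    edges = [(a, i) for a, clause in enumerate(clauses) for (i, _) in clause]
    sign = {(a, i): s for a, clause in enumerate(clauses) for (i, s) in clause}
    occ = {}                                  # variable -> clauses containing it
    for (a, i) in edges:
        occ.setdefault(i, []).append(a)
    u = {e: rng.randint(0, 1) for e in edges}  # random 0/1 warning initialization

    def cavity(i, a):
        # Net push on variable i from all clauses other than a:
        # > 0 means "set i to True", < 0 means "set i to False".
        return sum(sign[(b, i)] * u[(b, i)] for b in occ[i] if b != a)

    for _ in range(max_iters):
        rng.shuffle(edges)                    # fresh random update order each sweep
        changed = False
        for (a, i) in edges:
            # Clause a warns i iff every other variable in a is pushed, by its
            # other clauses, toward the value that violates a.
            new = int(all(sign[(a, j)] * cavity(j, a) < 0
                          for (j, _) in clauses[a] if j != i))
            if new != u[(a, i)]:
                u[(a, i)] = new
                changed = True
        if not changed:
            break
    else:
        return None                           # no convergence within max_iters

    assignment = {}
    for i in range(n_vars):
        field = sum(sign[(a, i)] * u[(a, i)] for a in occ.get(i, []))
        if field != 0:
            assignment[i] = field > 0         # follow the net warning direction
    return assignment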
{ "cite_N": [ "@cite_28", "@cite_14", "@cite_7" ], "mid": [ "2036804629", "2030160771", "2591036807" ], "abstract": [ "When faced with a complex task, is it better to be systematic or to proceed by making random adjustments? We study aspects of this problem in the context of generating random elements of a finite group. For example, suppose we want to fill n empty spaces with zeros and ones such that the probability of configuration x = (x1, . . . , xn) is θ n−|x|(1− θ)|x|, with |x| the number of ones in x. A systematic scan approach works left to right, filling each successive place with a θ coin toss. A random scan approach picks places at random, and a given site may be hit many times before all sites are hit. The systematic approach takes order n steps and the random approach takes order 1 4n log n steps. Realistic versions of this toy problem arise in image analysis and Ising-like simulations, where one must generate a random array by a Monte Carlo Markov chain. Systematic updating and random updating are competing algorithms that are discussed in detail in Section 2. There are some successful analyses for random scan algorithms, but the intuitively appealing systematic scan algorithms have resisted analysis. Our main results show that the binary problem just described is exceptional; for the examples analyzed in this paper, systematic and random scans converge in about the same number of steps. Let W be a finite Coxeter group generated by simple reflections s1, s2, . . . , sn, where s2 i = id. For example, W may be the permutation group Sn+1 with si = (i, i + 1). The length function (w) is the smallest k such that w = si1si2 · · · sik . Fix 0 < θ ≤ 1 and define a probability distribution on W by π(w) = θ − (w) PW (θ−1) , where PW(θ −1) = ∑", "This paper considers the computational power of anonymous message passing algorithms (henceforth, anonymous algorithms), i.e., distributed algorithms operating in a network of unidentified nodes. We prove that every problem that can be solved (and verified) by a randomized anonymous algorithm can also be solved by a deterministic anonymous algorithm provided that the latter is equipped with a 2-hop coloring of the input graph. Since the problem of 2-hop coloring a given graph (i.e., ensuring that two nodes with distance at most 2 have different colors) can by itself be solved by a randomized anonymous algorithm, it follows that with the exception of a few mock cases, the execution of every randomized anonymous algorithm can be decoupled into a generic preprocessing randomized stage that computes a 2-hop coloring, followed by a problem-specific deterministic stage. The main ingredient of our proof is a novel simulation method that relies on some surprising connections between 2-hop colorings and an extensively used graph lifting technique.", "We consider distributed plurality consensus in a complete graph of size @math with @math initial opinions. We design an efficient and simple protocol in the asynchronous communication model that ensures that all nodes eventually agree on the initially most frequent opinion. In this model, each node is equipped with a random Poisson clock with parameter @math . Whenever a node's clock ticks, it samples some neighbors, uniformly at random and with replacement, and adjusts its opinion according to the sample. 
A prominent example is the so-called two-choices algorithm in the synchronous model, where in each round, every node chooses two neighbors uniformly at random, and if the two sampled opinions coincide, then that opinion is adopted. This protocol is very efficient and well-studied when @math . If @math for some small @math , we show that it converges to the initial plurality opinion within @math rounds, w.h.p., as long as the initial difference between the largest and second largest opinion is @math . On the other side, we show that there are cases in which @math rounds are needed, w.h.p. One can beat this lower bound in the synchronous model by combining the two-choices protocol with randomized broadcasting. Our main contribution is a non-trivial adaptation of this approach to the asynchronous model. If the support of the most frequent opinion is at least @math times that of the second-most frequent one and @math , then our protocol achieves the best possible run time of @math , w.h.p. We relax full synchronicity by allowing @math nodes to be poorly synchronized, and the well synchronized nodes are only required to be within a certain time difference from one another. We enforce this synchronicity by introducing a novel gadget into the protocol." ] }
0812.0423
2951084471
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
@cite_7 solve a subproblem of this type for the case of nonlinear programming, where @math is the sum of the objective function @math and the indicator function of the equalities and inequalities that define the feasible region. The resulting step can be enhanced by solving an equality-constrained quadratic program (EQP).
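To fix notation for this and the next paragraph: under our own naming, and assuming the standard prox-linear setup, the regularized linearized subproblem at an iterate x, with smooth inner map c (with Jacobian \nabla c), outer function h, and parameter \mu > 0, presumably reads

\[
  \min_{d} \; h\bigl(c(x) + \nabla c(x)\, d\bigr) \;+\; \frac{\mu}{2}\,\lVert d\rVert^{2}.
\]

In the nonlinear-programming specialization just described, where h adds the objective f to the indicator of the constraints c_E(x) = 0 and c_I(x) \le 0, this reduces to

\[
  \min_{d} \; f(x) + \nabla f(x)^{T} d + \frac{\mu}{2}\,\lVert d\rVert^{2}
  \quad\text{subject to}\quad
  c_{E}(x) + \nabla c_{E}(x)^{T} d = 0, \qquad
  c_{I}(x) + \nabla c_{I}(x)^{T} d \le 0,
\]

and the EQP phase mentioned above then refines this step by solving an equality-constrained quadratic program on the estimated active set.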
{ "cite_N": [ "@cite_7" ], "mid": [ "2022144657" ], "abstract": [ "Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples)." ] }
0812.0423
2951084471
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
Mifflin and Sagastizábal @cite_17 describe an algorithm in which an approximate solution of the proximal subproblem is obtained, again for the case of a convex objective, by making use of a piecewise-linear underapproximation of their objective @math . The approach is most suitable for a bundle method in which the piecewise-linear approximation is constructed from subgradients gathered at previous iterations. Approximations to the manifold of smoothness for @math are constructed from the solution of this approximate proximal point calculation, and a Newton-like step for the Lagrangian is taken along this manifold, as envisioned in earlier methods. Daniilidis, Hare, and Malick @cite_34 use the terminology "predictor-corrector" to describe algorithms of this type. Their "predictor" step is the step along the manifold of smoothness for @math , while the "corrector" step eventually returns the iterates to the correct active manifold (see Theorem 28 of @cite_34 ). Miller and Malick @cite_4 show how algorithms of this type are related to Newton-like methods that have been proposed earlier in various contexts.
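For reference, the exact proximal-point subproblem that the bundle subroutine of @cite_17 approximates can be written, in our notation, for a convex objective (denoted f here) and a prox parameter \mu > 0 (conventions on where \mu appears vary across papers), as

\[
  p_{\mu}(x) \;=\; \operatorname*{arg\,min}_{y}\; f(y) \;+\; \frac{1}{2\mu}\,\lVert y - x\rVert^{2},
\]

where the bundle subroutine replaces f by a piecewise-linear model assembled from previously collected subgradients, so that each approximate proximal point is obtained by solving a quadratic program.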
{ "cite_N": [ "@cite_34", "@cite_4", "@cite_17" ], "mid": [ "2094993225", "2570286083", "2129732816" ], "abstract": [ "For convex minimization we introduce an algorithm based on **-space decomposition. The method uses a bundle subroutine to generate a sequence of approximate proximal points. When a primal-dual track leading to a solution and zero subgradient pair exists, these points approximate the primal track points and give the algorithm's **, or corrector, steps. The subroutine also approximates dual track points that are **-gradients needed for the method's **-Newton predictor steps. With the inclusion of a simple line search the resulting algorithm is proved to be globally convergent. The convergence is superlinear if the primal-dual track points and the objective's **-Hessian are approximated well enough.", "We consider global efficiency of algorithms for minimizing a sum of a convex function and a composition of a Lipschitz convex function with a smooth map. The basic algorithm we rely on is the prox-linear method, which in each iteration solves a regularized subproblem formed by linearizing the smooth map. When the subproblems are solved exactly, the method has efficiency @math , akin to gradient descent for smooth minimization. We show that when the subproblems can only be solved by first-order methods, a simple combination of smoothing, the prox-linear method, and a fast-gradient scheme yields an algorithm with complexity @math . The technique readily extends to minimizing an average of @math composite functions, with complexity @math in expectation. We round off the paper with an inertial prox-linear method that automatically accelerates in presence of convexity.", "We study the convergence properties of an alternating proximal minimization algorithm for nonconvex structured functions of the type: L(x,y)=f(x)+Q(x,y)+g(y), where f and g are proper lower semicontinuous functions, defined on Euclidean spaces, and Q is a smooth function that couples the variables x and y. The algorithm can be viewed as a proximal regularization of the usual Gauss-Seidel method to minimize L. We work in a nonconvex setting, just assuming that the function L satisfies the Kurdyka-Łojasiewicz inequality. An entire section illustrates the relevancy of such an assumption by giving examples ranging from semialgebraic geometry to “metrically regular” problems. Our main result can be stated as follows: If L has the Kurdyka-Łojasiewicz property, then each bounded sequence generated by the algorithm converges to a critical point of L. This result is completed by the study of the convergence rate of the algorithm, which depends on the geometrical properties of the function L around its critical points. When specialized to @math and to f, g indicator functions, the algorithm is an alternating projection mehod (a variant of von Neumann's) that converges for a wide class of sets including semialgebraic and tame sets, transverse smooth manifolds or sets with “regular” intersection. To illustrate our results with concrete problems, we provide a convergent proximal reweighted l1 algorithm for compressive sensing and an application to rank reduction problems." ] }
0812.0893
2762701907
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
In the algorithms community, there has been considerable prior work on shortest path algorithms for Euclidean graphs (e.g., see @cite_35 @cite_50 @cite_48 @cite_1 @cite_8 @cite_20 ), which are geometric graphs whose edges are weighted by the lengths of the corresponding line segments. This prior work takes a decidedly different approach from ours, however, in that it exploits special properties of the edge weights that do not hold in the comparison model, whereas we study road networks as geometric graphs with a sublinear number of edge crossings and seek linear-time algorithms that work in the comparison model.
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_48", "@cite_1", "@cite_50", "@cite_20" ], "mid": [ "1989723858", "2085630237", "2056148792", "2259676875", "2621618541", "2019003829" ], "abstract": [ "The problem of determining shortest paths through a weighted planar polygonal subdivision with n vertices is considered. Distances are measured according to a weighted Euclidean metric: The length of a path is defined to be the weighted sum of (Euclidean) lengths of the subpaths within each region. An algorithm that constructs a (restricted) “shortest path map” with respect to a given source point is presented. The output is a partitioning of each edge of the subdivion into intervals of e-optimality, allowing an e-optimal path to be traced from the source to any query point along any edge. The algorithm runs in worst-case time O ( ES ) and requires O ( E ) space, where E is the number of “events” in our algorithm and S is the time it takes to run a numerical search procedure. In the worst case, E is bounded above by O ( n 4 ) (and we give an O( n 4 ) lower bound), but it is likeky that E will be much smaller in practice. We also show that S is bounded by O ( n 4 L ), where L is the precision of the problem instance (including the number of bits in the user-specified tolerance e). Again, the value of S should be smaller in practice. The algorithm applies the “continuous Dijkstra” paradigm and exploits the fact that shortest paths obey Snell's Law of Refraction at region boundaries, a local optimaly property of shortest paths that is well known from the analogous optics model. The algorithm generalizes to the multi-source case to compute Voronoi diagrams.", "Shortest paths computations constitute one of the most fundamental network problems. Nonetheless, known parallel shortest-paths algorithms are generally inefficient: they perform significantly more work (product of time and processors) than their sequential counterparts. This gap, known in the literature as the “transitive closure bottleneck,” poses a long-standing open problem. Our main result is an O mn e 0 +s m+n1+ e 0 work polylog-time randomized algorithm that computes paths within (1 + O (1 polylog n ) of shortest from s source nodes to all other nodes in weighted undirected networks with n nodes and m edges (for any fixed e 0 >0). This work bound nearly matches the O d sm sequential time. In contrast, previous polylog-time algorithms required nearly min O d n3 , O d m2 work (even when s =1), and previous near-linear work algorithms required near- O ( n ) time. We also present faster sequential algorithms that provide good approximate distances only between “distant” vertices: We obtain an O m+sn n e 0 time algorithm that computes paths of weight (1+ O (1 polylog n ) dist + O ( w max polylog n ), where dist is the corresponding distance and w max is the maximum edge weight. Our chief instrument, which is of independent interest, are efficient constructions of sparse hop sets . A ( d ,e)-hop set of a network G =( V,E ) is a set E * of new weighted edges such that mimimum-weight d -edge paths in V,E∪E* have weight within (1+e) of the respective distances in G . We construct hop sets of size O n1+ e 0 where e= O (1 polylog n ) and d = O (polylog n ).", "We11A preliminary version appeared in the “Proceedings of the 2nd Israeli Symposium on the Theory of Computing and Systems, 1993.”consider parallel shortest-paths computations in weighted undirected graphsG=(V,E), wheren=|V| andm=|E|. 
The standardO(n3) work path-doubling (Floyd-Warshall) algorithm consists ofO(logn) phases, where in each phase, for every triplet of vertices (u1,u2,u3)?V3, the distance betweenu1andu3is updated to be no more than the sum of the previous-phase distances between u1,u2 and u2,u3 . We introduce a new NC algorithm that for ?=o(n), considers onlyO(n?2) triplets. Our algorithm performsO(n?2) work and augmentsEwithO(n?) new weighted edges such that between every pair of vertices, there exists a minimum weight path of size (number of edges)O(n ?) (whereO(f)?O(fpolylogn)). To compute shortest-paths, we apply to the augmented graph algorithms that are efficient for small-size shortest paths. We obtain anO(t) timeO(|S|n2+n3 t2) work deterministic PRAM algorithm for computing shortest-paths from |S| sources to all other vertices, wheret?nis a parameter. When the ratio of the largest edge weight and the smallest edge weight isnO(polylogn), the algorithm computes shortest paths. When weights are arbitrary, it computes paths within a factor of 1+n??(polylogn)of shortest.", "A prominent tool in many problems involving metric spaces is a notion of randomized low-diameter decomposition. Loosely speaking, ( )-decomposition refers to a probability distribution over partitions of the metric into sets of low diameter, such that nearby points (parameterized by ( >0 )) are likely to be “clustered” together. Applying this notion to the shortest-path metric in edge-weighted graphs, it is known that n-vertex graphs admit an (O( n) )-padded decomposition (Bartal, 37th annual symposium on foundations of computer science. IEEE, pp 184–193, 1996), and that excluded-minor graphs admit O(1)-padded decomposition (, 25th annual ACM symposium on theory of computing, pp 682–690, 1993; Fakcharoenphol and Talwar, J Comput Syst Sci 69(3), 485–497, 2004; , Proceedings of the 46th annual ACM symposium on theory of computing. STOC ’14, pp 79–88. ACM, New York, NY, USA, 2014). We design decompositions to the family of p-path-separable graphs, which was defined by Abraham and Gavoille (Proceedings of the twenty-fifth annual acm symposium on principles of distributed computing, PODC ’06, pp 188–197, 2006) and refers to graphs that admit vertex-separators consisting of at most p shortest paths in the graph. Our main result is that every p-path-separable n-vertex graph admits an (O( (p n)) )-decomposition, which refines the (O( n) ) bound for general graphs, and provides new bounds for families like bounded-treewidth graphs. Technically, our clustering process differs from previous ones by working in (the shortest-path metric of) carefully chosen subgraphs.", "Let G = (V(G), E(G)) be a weighted directed graph and let P be a shortest path from s to t in G. In the replacement paths problem we are required to compute for every edge e in P, the length of a shortest path from s to t that avoids e. The fastest known algorithm for solving the problem in weighted directed graphs is the trivial one: each edge in P is removed from the graph in its turn and the distance from s to t in the modified graph is computed. The running time of this algorithm is O (mn + n2 log n), where n = |V(G)| and m = |E(G)|. The replacement paths problem is strongly motivated by two different applications. First, the fastest algorithm to compute the k simple shortest paths from s to t in directed graphs [21, 13] repeatedly computes the replacement paths from s to t. Its running time is O(kn(m + n log n)). 
Second, the computation of Vickrey pricing of edges in distributed networks can be reduced to the replacement paths problem. An open question raised by Nisan and Ronen [16] asks whether it is possible to compute the Vickrey pricing faster than the trivial algorithm described in the previous paragraph. In this paper we present a near-linear time algorithm for computing replacement paths in weighted planar directed graphs. In particular, the algorithm computes the lengths of the replacement paths in O(n log3 n) time. This result immediately improves the running time of the two applications mentioned above by almost a linear factor. Our algorithm is obtained by combining several new ideas with a data structure of Klein [12] that supports multi-source shortest paths queries in planar directed graphs in logarithmic time. Our algorithm can be adapted to address the variant of the problem in which one is interested in the replacement path itself (rather than the length of the path). In that case the algorithm is executed in a preprocessing stage constructing a data structure that supports replacement path queries in time O(h), where h is the number of hops in the replacement path. In addition, we can handle the variant in which vertices should be avoided instead of edges.", "We consider the point-to-point (approximate) shortest-path query problem, which is the following generalization of the classical single-source (SSSP) and all-pairs shortest-path (APSP) problems: we are first presented with a network (graph). A so-called preprocessing algorithm may compute certain information (a data structure or index) to prepare for the next phase. After this preprocessing step, applications may ask shortest-path or distance queries, which should be answered as fast as possible. Due to its many applications in areas such as transportation, networking, and social science, this problem has been considered by researchers from various communities (sometimes under different names): algorithm engineers construct fast route planning methods; database and information systems researchers investigate materialization tradeoffs, query processing on spatial networks, and reachability queries; and theoretical computer scientists analyze distance oracles and sparse spanners. Related problems are considered for compact routing and distance labeling schemes in networking and distributed computing and for metric embeddings in geometry as well. In this survey, we review selected approaches, algorithms, and results on shortest-path queries from these fields, with the main focus lying on the tradeoff between the index size and the query time. We survey methods for general graphs as well as specialized methods for restricted graph classes, in particular for those classes with arguable practical significance such as planar graphs and complex networks." ] }
0812.0893
2762701907
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
The specific problems for which we provide linear-time algorithms are well known in the general algorithms and computational geometry literatures. For general graphs with @math vertices and @math edges, there is excellent work on efficient algorithms in the comparison model, including single-source shortest paths @cite_23 @cite_44 @cite_26 , which can be computed in @math time @cite_22 , and Voronoi diagrams @cite_5 @cite_29 , whose graph-theoretic version can be constructed in @math time @cite_6 @cite_24 . None of these algorithms runs in linear time, even for planar graphs. Linear-time algorithms for planar graphs are known for single-source shortest paths @cite_27 , but these unfortunately do not immediately translate into linear-time algorithms for non-planar geometric graphs. In addition, there are a number of efficient shortest-path algorithms that make assumptions about the edge weights @cite_46 @cite_35 @cite_51 @cite_47 ; hence, they are not applicable in the comparison model.
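As a point of reference for the comparison-model discussion above, the sketch below is our own minimal binary-heap Dijkstra implementation, not any of the cited algorithms; it runs in O((n + m) log n) time rather than the @math bound quoted above, but it illustrates what "comparison model" means here: edge weights enter the computation only through additions and comparisons, with no assumptions on their distribution or bit complexity.

import heapq

def dijkstra(n, adj, source):
    # Single-source shortest paths in the comparison model.
    # adj[u] is a list of (v, w) pairs with w >= 0; returns the distance
    # list, with float('inf') for vertices unreachable from source.
    dist = [float('inf')] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue              # stale queue entry, skip it
        for v, w in adj[u]:
            nd = d + w            # weights are only ever added ...
            if nd < dist[v]:      # ... and compared
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist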
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_26", "@cite_22", "@cite_29", "@cite_6", "@cite_44", "@cite_24", "@cite_27", "@cite_23", "@cite_5", "@cite_46", "@cite_51" ], "mid": [ "2064493067", "2621618541", "2268320119", "2762935521", "1984406695", "816673839", "2410099853", "2962968573", "2787728049", "2055245094", "2963972775", "2950552904", "2473549844" ], "abstract": [ "We present a deterministic near-linear time algorithm that computes the edge-connectivity and finds a minimum cut for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. In near-linear time we can also construct the classic cactus representation of all minimum cuts. The previous fastest deterministic algorithm by Gabow from STOC'91 took O(m+λ2 n), where λ is the edge connectivity, but λ could be Ω(n). At STOC'96 Karger presented a randomized near linear time Monte Carlo algorithm for the minimum cut problem. As he points out, there is no better way of certifying the minimality of the returned cut than to use Gabow's slower deterministic algorithm and compare sizes. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree δ, producing a multigraph G with O(m δ) edges which preserves all minimum cuts of G with at least two vertices on each side. In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersson, Chung, and Lang at FOCS'06. Normally such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.", "Let G = (V(G), E(G)) be a weighted directed graph and let P be a shortest path from s to t in G. In the replacement paths problem we are required to compute for every edge e in P, the length of a shortest path from s to t that avoids e. The fastest known algorithm for solving the problem in weighted directed graphs is the trivial one: each edge in P is removed from the graph in its turn and the distance from s to t in the modified graph is computed. The running time of this algorithm is O (mn + n2 log n), where n = |V(G)| and m = |E(G)|. The replacement paths problem is strongly motivated by two different applications. First, the fastest algorithm to compute the k simple shortest paths from s to t in directed graphs [21, 13] repeatedly computes the replacement paths from s to t. Its running time is O(kn(m + n log n)). Second, the computation of Vickrey pricing of edges in distributed networks can be reduced to the replacement paths problem. An open question raised by Nisan and Ronen [16] asks whether it is possible to compute the Vickrey pricing faster than the trivial algorithm described in the previous paragraph. In this paper we present a near-linear time algorithm for computing replacement paths in weighted planar directed graphs. In particular, the algorithm computes the lengths of the replacement paths in O(n log3 n) time. This result immediately improves the running time of the two applications mentioned above by almost a linear factor. Our algorithm is obtained by combining several new ideas with a data structure of Klein [12] that supports multi-source shortest paths queries in planar directed graphs in logarithmic time. 
Our algorithm can be adapted to address the variant of the problem in which one is interested in the replacement path itself (rather than the length of the path). In that case the algorithm is executed in a preprocessing stage constructing a data structure that supports replacement path queries in time O(h), where h is the number of hops in the replacement path. In addition, we can handle the variant in which vertices should be avoided instead of edges.", "Problems related to computing optimal paths have been abundant in computer science since its emergence as a field. Yet for a large number of such problems we still do not know whether the state-of-the-art algorithms are the best possible. A notable example of this phenomenon is the all pairs shortest paths problem in a directed graph with real edge weights. The best algorithm (modulo small polylogarithmic improvements) for this problem runs in cubic time, a running time known since the 1960s (by Floyd and Warshall). Our grasp of many such fundamental algorithmic questions is far from optimal, and the major goal of this thesis is to bring some new insights into efficiently solving path problems in graphs. We focus on several path problems optimizing different measures: shortest paths, maximum bottleneck paths, minimum nondecreasing paths, and various extensions. For the all-pairs versions of these path problems we use an algebraic approach. We obtain improved algorithms using reductions to fast matrix multiplication. For maximum bottleneck paths and minimum nondecreasing paths we are the first to break the cubic barrier, obtaining truly subcubic strongly polynomial algorithms. We also consider a nonalgebraic, combinatorial approach, which is considered more efficient in practice compared to methods based on fast matrix multiplication. We present a combinatorial data structure that maintains a matrix so that products with given sparse vectors can be computed efficiently. This allows us to obtain good running times for path problems in unweighted sparse graphs. This thesis also gives algorithms for some single source path problems. We obtain the first linear time algorithm for the single source minimum nondecreasing paths problem. We give some extensions to this, including an algorithm to find cheapest minimum nondecreasing paths. Besides finding optimal paths, we consider the related problem of finding optimal cycles. In particular, we focus on the problem of finding in a weighted graph a triangle of maximum weight sum. We obtain the first truly subcubic algorithm for finding a maximum weight triangle in a node-weighted graph. We also present algorithms for the edge-weighted case. These algorithms immediately imply good algorithms for finding maximum weight k-cliques, or arbitrary maximum weight pattern subgraphs of fixed size.", "We give a linear-time algorithm for single-source shortest paths in planar graphs with nonnegative edge-lengths. Our algorithm also yields a linear-time algorithm for maximum flow in a planar graph with the source and sink on the same face. For the case where negative edge-lengths are allowed, we give an algorithm requiringO(n4 3log(nL)) time, whereLis the absolute value of the most negative length. This algorithm can be used to obtain similar bounds for computing a feasible flow in a planar network, for finding a perfect matching in a planar bipartite graph, and for finding a maximum flow in a planar graph when the source and sink are not on the same face. 
We also give parallel and dynamic versions of these algorithms.", "The problem of construction of planar Voronoi diagrams arises in many areas, one of the most important of which is in nearest neighbor problems. This includes clustering [ 141, contour maps [6] and (Euclidean) minimum spanning trees [23]. Shamos [22] gives several more applications. An JZ(N log N) time worst case lower bound can be shown for this problem by reducing it to sorting [2 11. The challenge is to construct an O(N log N) time algorithm. Shamos [213 and Shamos anti Hoey [23] describe an O(N log N) time divide-and-conquer algorithm for construction of the planar Euclidean Voronoi diagram. Lee and Wong [ 161 describe an O(N log N) time algorithm for the L1 and L, metrics in the plane, and Drysdale pnd Lee [8] present an O(N@g N)l *) t’ rme algorithm for the Voronoi diagram of N line segments (which they have since improved to O(N(log N)*) time). Shamos [2 11, Lee and Preparata [ 151, and Lipton and Tarjan [ 171 have produced fast algorithms for searching a Voronoi diagram (or any other straight-line planar graph). In this paper we describe an O(N log N) time algorithm for constructing a planar Euclidean Voronoi diagram which extends straightforwardly to higher dimensions. The fundamental result is that a K-dimensional Euclidean Voronoi diagram of N points can be constructed by transforming the points to K + I-space,", "We present a deterministic (1+o(1))-approximation O(n1 2+o(1)+D1+o(1))-time algorithm for solving the single-source shortest paths problem on distributed weighted networks (the CONGEST model); here n is the number of nodes in the network and D is its (hop) diameter. This is the first non-trivial deterministic algorithm for this problem. It also improves (i) the running time of the randomized (1+o(1))-approximation O(n1 2D1 4+D)-time algorithm of Nanongkai [STOC 2014] by a factor of as large as n1 8, and (ii) the O(є−1logє−1)-approximation factor of Lenzen and Patt-Shamir’s O(n1 2+є+D)-time algorithm [STOC 2013] within the same running time. Our running time matches the known time lower bound of Ω(n1 2 logn + D) [Das STOC 2011] modulo some lower-order terms, thus essentially settling the status of this problem which was raised at least a decade ago [Elkin SIGACT News 2004]. It also implies a (2+o(1))-approximation O(n1 2+o(1)+D1+o(1))-time algorithm for approximating a network’s weighted diameter which almost matches the lower bound by [PODC 2012]. In achieving this result, we develop two techniques which might be of independent interest and useful in other settings: (i) a deterministic process that replaces the “hitting set argument” commonly used for shortest paths computation in various settings, and (ii) a simple, deterministic, construction of an (no(1), o(1))-hop set of size O(n1+o(1)). We combine these techniques with many distributed algorithmic techniques, some of which from problems that are not directly related to shortest paths, e.g. ruling sets [ STOC 1987], source detection [Lenzen, Peleg PODC 2013], and partial distance estimation [Lenzen, Patt-Shamir PODC 2015]. Our hop set construction also leads to single-source shortest paths algorithms in two other settings: (i) a (1+o(1))-approximation O(no(1))-time algorithm on congested cliques, and (ii) a (1+o(1))-approximation O(no(1)logW)-pass O(n1+o(1)logW)-space streaming algorithm, when edge weights are in 1, 2, …, W . The first result answers an open problem in [Nanongkai, STOC 2014]. 
The second result partially answers an open problem raised by McGregor in 2006 [ sublinear.info , Problem 14].", "In this paper we provide faster algorithms for solving the geometric median problem: given n points in d compute a point that minimizes the sum of Euclidean distances to the points. This is one of the oldest non-trivial problems in computational geometry yet despite a long history of research the previous fastest running times for computing a (1+є)-approximate geometric median were O(d· n4 3є−8 3) by Chin et. al, O(dexpє−4logє−1) by Badoiu et. al, O(nd+poly(d,є−1)) by Feldman and Langberg, and the polynomial running time of O((nd)O(1)log1 є) by Parrilo and Sturmfels and Xue and Ye. In this paper we show how to compute such an approximate geometric median in time O(ndlog3n є) and O(dє−2). While our O(dє−2) is a fairly straightforward application of stochastic subgradient descent, our O(ndlog3n є) time algorithm is a novel long step interior point method. We start with a simple O((nd)O(1)log1 є) time interior point method and show how to improve it, ultimately building an algorithm that is quite non-standard from the perspective of interior point literature. Our result is one of few cases of outperforming standard interior point theory. Furthermore, it is the only case we know of where interior point methods yield a nearly linear time algorithm for a canonical optimization problem that traditionally requires superlinear time.", "We investigate the complexity of several fundamental polynomial-time solvable problems on graphs and on matrices, when the given instance has low treewidth; in the case of matrices, we consider the treewidth of the graph formed by non-zero entries. In each of the considered cases, the best known algorithms working on general graphs run in polynomial, but far from linear, time. Thus, our goal is to construct algorithms with running time of the form poly(k)·n or poly(k) · n log n, where k is the width of the tree decomposition given on the input. Such procedures would outperform the best known algorithms for the considered problems already for moderate values of the treewidth, like O(n1 c) for some small constant c. Our results include: -- an algorithm for computing the determinant and the rank of an n × n matrix using O(k3 · n) time and arithmetic operations; -- an algorithm for solving a system of linear equations using O(k3 · n) time and arithmetic operations; -- an O(k3 O n log n)-time randomized algorithm for finding the cardinality of a maximum matching in a graph; -- an O(k4 · n log2 n)-time randomized algorithm for constructing a maximum matching in a graph; -- an O(k2 · n log n)-time algorithm for finding a maximum vertex flow in a directed graph. Moreover, we provide an approximation algorithm for treewidth with time complexity suited to the running times as above. Namely, the algorithm, when given a graph G and integer k, runs in time O(k7 · n log n) and either correctly reports that the treewidth of G is larger than k, or constructs a tree decomposition of G of width O(k2). The above results stand in contrast with the recent work of [SODA 2016], which shows that the existence of algorithms with similar running times is unlikely for the problems of finding the diameter and the radius of a graph of low treewidth.", "The well-known @math -disjoint path problem ( @math -DPP) asks for pairwise vertex-disjoint paths between @math specified pairs of vertices @math in a given graph, if they exist. 
The decision version of the shortest @math -DPP asks for the length of the shortest (in terms of total length) such paths. Similarly the search and counting versions ask for one such and the number of such shortest set of paths, respectively. We restrict attention to the shortest @math -DPP instances on undirected planar graphs where all sources and sinks lie on a single face or on a pair of faces. We provide efficient sequential and parallel algorithms for the search versions of the problem answering one of the main open questions raised by Colin de Verdiere and Schrijver for the general one-face problem. We do so by providing a randomised @math algorithm along with an @math time randomised sequential algorithm. We also obtain deterministic algorithms with similar resource bounds for the counting and search versions. In contrast, previously, only the sequential complexity of decision and search versions of the \"well-ordered\" case has been studied. For the one-face case, sequential versions of our routines have better running times for constantly many terminals. In addition, the earlier best known sequential algorithms (e.g. ) were randomised while ours are also deterministic. The algorithms are based on a bijection between a shortest @math -tuple of disjoint paths in the given graph and cycle covers in a related digraph. This allows us to non-trivially modify established techniques relating counting cycle covers to the determinant. We further need to do a controlled inclusion-exclusion to produce a polynomial sum of determinants such that all \"bad\" cycle covers cancel out in the sum allowing us to count \"good\" cycle covers.", "In this paper we introduce a new simple strategy into edge-searching of a graph, which is useful to the various subgraph listing problems. Applying the strategy, we obtain the following four algorithms. The first one lists all the triangles in a graph G in @math time, where m is the number of edges of G and @math the arboricity of G. The second finds all the quadrangles in @math time. Since @math is at most three for a planar graph G, both run in linear time for a planar graph. The third lists all the complete subgraphs @math of order l in @math time. The fourth lists all the cliques in @math time per clique. All the algorithms require linear space. We also establish an upper bound on @math for a graph @math , where n is the number of vertices in G.", "We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time. The previous fastest deterministic algorithm by Gabow from STOC '91 took O(m+λ2 n), where λ is the edge connectivity, but λ can be as big as n−1. Karger presented a randomized near-linear time Monte Carlo algorithm for the minimum cut problem at STOC’96, but the returned cut is only minimum with high probability. Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree Δ, producing a multigraph Ḡ with O(m Δ) edges, which preserves all minimum cuts of G with at least two vertices on each side. 
In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersson, Chung, and Lang at FOCS’06. Normally, such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.", "We present new and improved data structures that answer exact node-to-node distance queries in planar graphs. Such data structures are also known as distance oracles. For any directed planar graph on n nodes with non-negative lengths we obtain the following: * Given a desired space allocation @math , we show how to construct in @math time a data structure of size @math that answers distance queries in @math time per query. As a consequence, we obtain an improvement over the fastest algorithm for k-many distances in planar graphs whenever @math . * We provide a linear-space exact distance oracle for planar graphs with query time @math for any constant eps>0. This is the first such data structure with provable sublinear query time. * For edge lengths at least one, we provide an exact distance oracle of space @math such that for any pair of nodes at distance D the query time is @math . Comparable query performance had been observed experimentally but has never been explained theoretically. Our data structures are based on the following new tool: given a non-self-crossing cycle C with @math nodes, we can preprocess G in @math time to produce a data structure of size @math that can answer the following queries in @math time: for a query node u, output the distance from u to all the nodes of C. This data structure builds on and extends a related data structure of Klein (SODA'05), which reports distances to the boundary of a face, rather than a cycle. The best distance oracles for planar graphs until the current work are due to Cabello (SODA'06), Djidjev (WG'96), and Fakcharoenphol and Rao (FOCS'01). For @math and space @math , we essentially improve the query time from @math to @math .", "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math ." ] }
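As a purely illustrative aside to the geometric median abstract quoted above (which notes that an O(d·ε^{-2}) guarantee follows from stochastic subgradient descent), here is a minimal Python sketch of that generic subgradient approach. It is not the interior point method that is that paper's main result; the step-size schedule, iteration count, and names are our own assumptions.

```python
# Illustrative sketch only: stochastic subgradient descent for the geometric median.
import numpy as np

def geometric_median_sgd(points, iters=20000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)                     # start from the centroid
    avg = x.copy()
    for t in range(1, iters + 1):
        a = pts[rng.integers(len(pts))]      # sample one data point
        g = x - a
        norm = np.linalg.norm(g)
        if norm > 0:
            g /= norm                        # subgradient of ||x - a||
        x = x - g / np.sqrt(t)               # diminishing step size
        avg += (x - avg) / (t + 1)           # averaged iterate
    return avg

# usage on synthetic points
pts = np.random.default_rng(1).normal(size=(500, 3)) + np.array([1.0, -2.0, 0.5])
print(geometric_median_sgd(pts))
```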
0812.0893
2762701907
We provide linear-time algorithms for geometric graphs with sublinearly many edge crossings. That is, we provide algorithms running in @math time on connected geometric graphs having @math vertices and @math pairwise crossings, where @math is smaller than @math by an iterated logarithmic factor. Specific problems that we study include Voronoi diagrams and single-source shortest paths. Our algorithms all run in linear time in the standard comparison-based computational model; hence, we make no assumptions about the distribution or bit complexities of edge weights, nor do we utilize unusual bit-level operations on memory words. Instead, our algorithms are based on a planarization method that “zeros in” on edge crossings, together with methods for applying planar separator decompositions to geometric graphs with sublinearly many crossings. Incidentally, our planarization algorithm also solves an open computational geometry problem of Chazelle for triangulating a self-intersecting polygonal chain having @math segments and @math crossings in linear time, for the case when @math is sublinear in @math by an iterated logarithmic factor.
Chazelle @cite_18 shows that any simple polygon can be triangulated in @math time and that this algorithm can be extended to determine in @math time, for any polygonal chain @math , whether or not @math contains a self-intersection. In addition, Chazelle posed as an open problem whether or not one can compute the arrangement of a non-simple polygon in @math time, where @math is the number of pairwise edge crossings. Clarkson, Cole, and Tarjan @cite_7 @cite_31 answer this question in the affirmative for polygons with a super-linear number of crossings, as they give a randomized algorithm that solves this problem in @math expected time. To our knowledge, however, no previous algorithm solves Chazelle's open problem for non-simple polygons with a sublinear number of edge crossings.
{ "cite_N": [ "@cite_18", "@cite_31", "@cite_7" ], "mid": [ "2113370909", "2109556124", "2100595965" ], "abstract": [ "In this paper we study two link distance problems for rectilinear paths inside a simple rectilinear polygon P.First, we present a data structure using O(n log n) storage such that a shortest path between two query points can be computed efficiently. If both query points are vertices of P, the query time is O(1 + l), where l is the number of links. If the query points are arbitrary points inside P, then the query time becomes O(log n + l). The resulting path is not only optimal in the rectilinear link metric, but it is optimal in the L1-metric as well. Secondly, it is shown that the rectilinear link diameter of P can be computed in time O(n log n). We also give an approximation algorithm that runs in linear time. This algorithm produces a solution that differs by at most three links from the exact diameter.The solutions are based on a rectilinear version of Chazelle's polygon cutting theorem. This new theorem states that any simple rectilinear polygon can be cut into two rectilinear subpolygons of size at most 34 times the original size, and that such a cut segment can be found in linear time.", "This thesis covers work on two topics: unfolding polyhedra into the plane and reconstructing polyhedra from partial information. For each topic, we describe previous work in the area and present an array of new research and results. Our work on unfolding is motivated by the problem of characterizing precisely when overlaps will occur when a polyhedron is cut along edges and unfolded. By contrast to previous work, we begin by classifying overlaps according to a notion of locality. This classification enables us to focus upon particular types of overlaps, and use the results to construct examples of polyhedra with interesting unfolding properties. The research on unfolding is split into convex and non-convex cases. In the non-convex case, we construct a polyhedron for which every edge unfolding has an overlap, with fewer faces than all previously known examples. We also construct a non-convex polyhedron for which every edge unfolding has a particularly trivial type of overlap. In the convex case, we construct a series of example polyhedra for which every unfolding of various types has an overlap. These examples disprove some existing conjectures regarding algorithms to unfold convex polyhedra without overlaps. The work on reconstruction is centered around analyzing the computational complexity of a number of reconstruction questions. We consider two classes of reconstruction problems. The first problem is as follows: given a collection of edges in space, determine whether they can be rearranged by translation only to form a polygon or polyhedron. We consider variants of this problem by introducing restrictions like convexity, orthogonality, and non-degeneracy. All of these problems are NP-complete, though some are proved to be only weakly NP-complete. We then consider a second, more classical problem: given a collection of edges in space, determine whether they can be rearranged by translation and or rotation to form a polygon or polyhedron. This problem is NP-complete for orthogonal polygons, but polynomial algorithms exist for nonorthogonal polygons. 
For polyhedra, it is shown that if degeneracies are allowed then the problem is NP-hard, but the complexity is still unknown for non-degenerate polyhedra.", "Assume that an isomorphism between two n-vertex simple polygons, P,Q (with k,l reflex vertices, respectively) is given. We present two algorithms for constructing isomorphic (i.e. adjacency preserving) triangulations of P and Q, respectively. The first algorithm computes isomorphic triangulations of P and Q by introducing at most O((k+l)^2) Steiner points and has running time O(n+(k+l)^2). The second algorithm computes isomorphic triangulations of P and Q by introducing at most O(kl) Steiner points and has running time O(n+kl log n). The number of Steiner points introduced by the second algorithm is also worst-case optimal. Unlike the O(n^2) algorithm of Aronov, Seidel and Souvaine, our algorithms are sensitive to the number of reflex vertices of the polygons. In particular, our algorithms have linear running time when (k+l)^2 ≤ n for the first algorithm, and kl ≤ n/log n for the second algorithm." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
In a seminal paper on the tracking problem, Chernoff and Zacks present a Bayesian estimator for the current mean of a process with abrupt changes @cite_1 . Like us, they start with an estimator that assumes there is at most one change, and then use it to generate an approximate estimator in the general case. Their algorithm makes the additional assumption that the change size is distributed normally; our algorithm does not require this assumption. Also, our algorithm generates a predictive distribution for the next value in the series, rather than an estimate of the current mean.
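To make the at-most-one-change setting concrete, below is a minimal sketch of a Bayesian posterior over the location of a single change in the mean of a Gaussian sequence. All modelling choices here (known noise variance, known pre-change mean, a Gaussian prior on the size of the change, a uniform prior over locations, exactly one change) are illustrative assumptions and not the construction used either in the present paper or by Chernoff and Zacks.

```python
# Illustrative sketch only: posterior over the location of a single mean change.
import numpy as np
from scipy.stats import norm, multivariate_normal

def change_location_posterior(y, mu0=0.0, sigma2=1.0, tau2=4.0):
    """Return P(change occurs right after index i | y) for i = 1..n-1."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    log_post = np.empty(n - 1)
    for i in range(1, n):
        pre, post = y[:i], y[i:]
        # pre-change segment: i.i.d. N(mu0, sigma2)
        lp_pre = norm.logpdf(pre, loc=mu0, scale=np.sqrt(sigma2)).sum()
        # post-change segment: common mean mu0 + theta with theta ~ N(0, tau2), integrated out
        m = len(post)
        cov = sigma2 * np.eye(m) + tau2 * np.ones((m, m))
        lp_post = multivariate_normal.logpdf(post, mean=np.full(m, mu0), cov=cov)
        log_post[i - 1] = lp_pre + lp_post   # uniform prior over i cancels
    log_post -= log_post.max()
    p = np.exp(log_post)
    return p / p.sum()

# usage: the mean jumps from 0 to 1.5 after 30 observations
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.5, 1, 20)])
print(np.argmax(change_location_posterior(y)) + 1)   # most probable change location
```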
{ "cite_N": [ "@cite_1" ], "mid": [ "2095283167" ], "abstract": [ "Abstract : A tracking problem is considered. Observations are taken on the successive positions of an object traveling on a path, and it is desired to estimate its current position. The objective is to arrive at a simple formula which implicitly accounts for possible changes in direction and discounts observations taken before the latest change. To develop a reasonable procedure, a simpler problem is studied. Successive observations are taken on n independently and normally distributed random variables X sub 1, X sub 2, ..., X sub n with means mu sub 1, mu sub 2, ..., mu sub n and variance 1. Each mean mu sub i is equal to the preceding mean mu sub (i-1) except when an occasional change takes place. The object is to estimate the current mean mu sub n. This problem is studied from a Bayesian point of view. An 'ad hoc' estimator is described, which applies a combination of the A.M.O.C. Bayes estimator and a sequence of tests designed to locate the last time point of change. The various estimators are then compared by a Monte Carlo study of samples of size 9. This Bayesian approach seems to be more appropriate for the related problem of testing whether a change in mean has occurred. This test procedure is simpler than that used by Page. The power functions of the two procedures are compared. (Author)" ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
The algorithm we propose can be extended to detect changes in the variance as well as the mean of a process. This kind of changepoint has received relatively little attention; one exception is recent work by Jandhyala, Fotopoulos and Hawkins @cite_10 .
{ "cite_N": [ "@cite_10" ], "mid": [ "2058148593" ], "abstract": [ "In the past few years there has been increased interest in using data-mining techniques to extract interesting patterns from time series data generated by sensors monitoring temporally varying phenomenon. Most work has assumed that raw data is somehow processed to generate a sequence of events, which is then mined for interesting episodes. In some cases the rule for determining when a sensor reading should generate an event is well known. However, if the phenomenon is ill-understood, stating such a rule is difficult. Detection of events in such an environment is the focus of this paper. Consider a dynamic phenomenon whose behavior changes enough over time to be considered a qualitatively significant change. The problem we investigate is of identifying the time points at which the behavior change occurs. In the statistics literature this has been called the change-point detection problem. The standard approach has been to (a) upriori determine the number of change-points that are to be discovered, and (b) decide the function that will be used for curve fitting in the interval between successive change-points. In this paper we generalize along both these dimensions. We propose an iterative algorithm that fits a model to a time segment, and uses a likelihood criterion to determine if the segment should be partitioned further, i.e. if it contains a new changepoint. In this paper we present algorithms for both the batch and incremental versions of the problem, and evaluate their behavior with synthetic and real data. Finally, we present initial results comparing the change-points detected by the batch algorithm with those detected by people using visual inspection." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
Most recently, Vellekoop and Clark propose a nonlinear filtering approach to the changepoint detection problem (but not estimation or tracking) @cite_0 .
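The cited filter builds on the classical Shiryaev recursion for the posterior probability that a change has already occurred. A minimal discrete-time sketch of that recursion follows; it assumes known pre- and post-change densities and a geometric prior on the change time, which is a simpler setting than the unknown-change-size problem treated by Vellekoop and Clark.

```python
# Illustrative sketch only: the classical discrete-time Shiryaev recursion.
import numpy as np
from scipy.stats import norm

def shiryaev_posterior(xs, f0, f1, rho=0.01):
    """Yield p_t = P(a change has occurred by time t | x_1..x_t)."""
    p = 0.0
    for x in xs:
        prior = p + (1.0 - p) * rho              # the change may occur at this step
        lr = f1(x) / f0(x)                       # likelihood ratio of the new observation
        p = prior * lr / (prior * lr + (1.0 - prior))
        yield p

# usage: the mean shifts from 0 to 1 at t = 100; alarm when the posterior passes 0.95
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
posterior = list(shiryaev_posterior(data, norm(0, 1).pdf, norm(1, 1).pdf))
alarm = next((t for t, p in enumerate(posterior) if p > 0.95), None)
print(alarm)
```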
{ "cite_N": [ "@cite_0" ], "mid": [ "2078450032" ], "abstract": [ "A benchmark change detection problem is considered which involves the detection of a change of unknown size at an unknown time. Both unknown quantities are modeled by stochastic variables, which allows the problem to be formulated within a Bayesian framework. It turns out that the resulting nonlinear filtering problem is much harder than the well-known detection problem for known sizes of the change, and in particular that it can no longer be solved in a recursive manner. An approximating recursive filter is therefore proposed, which is designed using differential-geometric methods in a suitably chosen space of unnormalized probability densities. The new nonlinear filter can be interpreted as an adaptive version of the celebrated Shiryayev--Wonham equation for the detection of a priori known changes, combined with a modified Kalman filter structure to generate estimates of the unknown size of the change. This intuitively appealing interpretation of the nonlinear filter and its excellent performance in simulation studies indicates that it may be of practical use in realistic change detection problems." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
We are aware of a few examples where these techniques have been applied to network measurements. Blažek et al. explore the use of change-point algorithms to detect denial of service attacks @cite_3 . Similarly, Deshpande, Thottan and Sikdar use non-parametric CUSUM to detect BGP instabilities @cite_4 .
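For reference, the core of a one-sided CUSUM detector for an upward mean shift is only a few lines. The target mean mu0, reference value k, and threshold h below are placeholder parameters rather than the settings used in the cited papers.

```python
# Illustrative sketch only: one-sided CUSUM for an upward shift in the mean.
def cusum_alarm(xs, mu0=0.0, k=0.5, h=5.0):
    """Return the index at which the CUSUM statistic first exceeds h, or None."""
    s = 0.0
    for t, x in enumerate(xs):
        s = max(0.0, s + (x - mu0 - k))   # accumulate evidence of an upward shift
        if s > h:
            return t
    return None
```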
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2123002543", "2146697465" ], "abstract": [ "This paper develops two parametric methods to detect low-rate denial-of-service attacks and other similar near-periodic traffic, without the need for flow separation. The first method, the periodic attack detector, is based on a previous approach that exploits the near-periodic nature of attack traffic in aggregate traffic by modeling the peak frequency in the traffic spectrum. The new method adopts simple statistical models for attack and background traffic in the time-domain. Both approaches use sequential probability ratio tests (SPRTs), allowing control over false alarm rate while examining the trade-off between detection time and attack strength. We evaluate these methods with real and synthetic traces, observing that the new Poisson- based scheme uniformly detects attacks more rapidly, often in less than 200 ms, and with lower complexity than the periodic attack detector. Current entropy-based detection methods provide an equivalent time to detection but require flow-separation since they utilize source destination IP addresses. We evaluate sensitivity to attack strength (compared to the rate of background traffic) with synthetic traces, finding that the new approach can detect attacks that represent only 10 of the total traffic bitrate in fractions of a second.", "The increasing incidence of worm attacks in the Internet and the resulting instabilities in the global routing properties of the border gateway protocol (BGP) routers pose a serious threat to the connectivity and the ability of the Internet to deliver data correctly. In this paper we propose a mechanism to detect predict the onset of such instabilities which can then enable the timely execution of preventive strategies in order to minimize the damage caused by the worm. Our technique is based on online statistical methods relying on sequential change-point and persistence filter based detection algorithms. Our technique is validated using a year's worth of real traces collected from BGP routers in the Internet that we use to detect predict the global routing instabilities corresponding to the Code Red II, Nimda and SQL Slammer worms." ] }
0812.1237
1643135477
We propose an algorithm for simultaneously detecting and locating changepoints in a time series, and a framework for predicting the distribution of the next point in the series. The kernel of the algorithm is a system of equations that computes, for each index i, the probability that the last (most recent) change point occurred at i. We evaluate this algorithm by applying it to the change point detection problem and comparing it to the generalized likelihood ratio (GLR) algorithm. We find that our algorithm is as good as GLR, or better, over a wide range of scenarios, and that the advantage increases as the signal-to-noise ratio decreases.
In the context of large databases, Kifer, Ben-David and Gehrke propose an algorithm for detecting changes in data streams @cite_14 . It is based on a two-window paradigm, in which the distribution of values in the current window is compared to the distribution of values in a past reference window. This approach is appropriate when the number of points between changepoints is large and alarm delay is not a critical metric.
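A minimal sketch of the two-window idea follows: keep a reference window, slide a current window, and raise an alarm when the two empirical distributions drift apart. The Kolmogorov-Smirnov-style statistic, window size, and threshold are our own illustrative choices, not the calibrated test of the cited work.

```python
# Illustrative sketch only: two-window change detection on a data stream.
import numpy as np

def ks_statistic(a, b):
    """Maximum absolute difference between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def detect_change(stream, window=200, threshold=0.2):
    stream = np.asarray(stream, dtype=float)
    reference = stream[:window]
    for start in range(window, len(stream) - window + 1):
        current = stream[start:start + window]
        if ks_statistic(reference, current) > threshold:
            return start              # first window whose distribution drifted too far
    return None
```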
{ "cite_N": [ "@cite_14" ], "mid": [ "1500307578" ], "abstract": [ "In this paper we study the problem of constructing histograms from high-speed time-changing data streams. Learning in this context requires the ability to process examples once at the rate they arrive, maintaining a histogram consistent with the most recent data, and forgetting out-date data whenever a change in the distribution is detected. To construct histogram from high-speed data streams we use the two layer structure used in the Partition Incremental Discretization (PiD) algorithm. Our contribution is a new method to detect whenever a change in the distribution generating examples occurs. The base idea consists of monitoring distributions from two different time windows: the reference time window, that reflects the distribution observed in the past; and the current time window reflecting the distribution observed in the most recent data. We compare both distributions and signal a change whenever they are greater than a threshold value, using three different methods: the Entropy Absolute Difference, the Kullback-Leibler divergence and the Cosine Distance. The experimental results suggest that Kullback-Leibler divergence exhibit high probability in change detection, faster detection rates, with few false positives alarms." ] }
0811.4139
2949367021
Algebraic codes that achieve list decoding capacity were recently constructed by a careful "folding" of the Reed-Solomon code. The "low-degree" nature of this folding operation was crucial to the list decoding algorithm. We show how such folding schemes conducive to list decoding arise out of the Artin-Frobenius automorphism at primes in Galois extensions. Using this approach, we construct new folded algebraic-geometric codes for list decoding based on cyclotomic function fields with a cyclic Galois group. Such function fields are obtained by adjoining torsion points of the Carlitz action of an irreducible @math . The Reed-Solomon case corresponds to the simplest such extension (corresponding to the case @math ). In the general case, we need to descend to the fixed field of a suitable Galois subgroup in order to ensure the existence of many degree one places that can be used for encoding. Our methods shed new light on algebraic codes and their list decoding, and lead to new codes achieving list decoding capacity. Quantitatively, these codes provide list decoding (and list recovery/soft decoding) guarantees similar to folded Reed-Solomon codes but with an alphabet size that is only polylogarithmic in the block length. In comparison, for folded RS codes, the alphabet size is a large polynomial in the block length. This has applications to fully explicit (with no brute-force search) binary concatenated codes for list decoding up to the Zyablov radius.
Independent of our work, Huang and Narayanan @cite_20 also consider AG codes constructed from Galois extensions, and observe how automorphisms of large order can be used for folding such codes. To our knowledge, the only instantiation of this approach that improves on folded RS codes is the one based on cyclotomic function fields from our work. As an alternate approach, they also propose a decoding method that works with folding via automorphisms of small order. This involves computing several coefficients of the power series expansion of the message function at a low-degree place. Unfortunately, piecing together these coefficients into a function could lead to an exponential list size bound. The authors suggest a heuristic assumption under which they can show that for a random received word, the expected list size and running time are polynomially bounded.
{ "cite_N": [ "@cite_20" ], "mid": [ "1637999745" ], "abstract": [ "We describe a new class of list decodable codes based on Galois extensions of function fields and present a list decoding algorithm. These codes are obtained as a result of folding the set of rational places of a function field using certain elements (automorphisms) from the Galois group of the extension. This work is an extension of Folded Reed Solomon codes to the setting of Algebraic Geometric codes. We describe two constructions based on this framework depending on if the order of the automorphism used to fold the code is large or small compared to the block length. When the automorphism is of large order, the codes have polynomially bounded list size in the worst case. This construction gives codes of rate @math over an alphabet of size independent of block length that can correct a fraction of @math errors subject to the existence of asymptotically good towers of function fields with large automorphisms. The second construction addresses the case when the order of the element used to fold is small compared to the block length. In this case a heuristic analysis shows that for a random received word, the expected list size and the running time of the decoding algorithm are bounded by a polynomial in the block length. When applied to the Garcia-Stichtenoth tower, this yields codes of rate @math over an alphabet of size @math , that can correct a fraction of @math errors." ] }
0811.4413
2950265833
Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations---it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications.
The second idea is that we can represent the probability of sequences as products of matrix operators, as in the literature on multiplicity automata (see for discussion of this relationship). This idea was re-used in both the Observable Operator Model of @cite_3 and the Predictive State Representations of @cite_0 , both of which are closely related and both of which can model HMMs. In fact, the former work by @cite_3 provides a non-iterative algorithm for learning HMMs, with an asymptotic analysis. However, this algorithm assumed knowledge of a set of 'characteristic events', which is a rather strong assumption that effectively reveals some relationship between the hidden states and observations. In our algorithm, this problem is avoided through the first idea.
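The "products of matrix operators" representation can be made concrete in a few lines: with a column-stochastic transition matrix T, emission matrix O, and initial distribution pi, the probability of an observation sequence is 1^T A_{x_t} ... A_{x_1} pi, where A_x = T diag(O[x, :]). The tiny two-state HMM below is made up purely for illustration; the computation is algebraically the same as the standard forward algorithm.

```python
# Illustrative sketch only: sequence probability as a product of observable operators.
import numpy as np

T = np.array([[0.9, 0.2],        # T[i, j] = P(next state i | current state j)
              [0.1, 0.8]])
O = np.array([[0.7, 0.1],        # O[x, h] = P(observation x | state h)
              [0.3, 0.9]])
pi = np.array([0.6, 0.4])

def sequence_probability(obs):
    """Pr[x_1..x_t] = 1^T A_{x_t} ... A_{x_1} pi, with A_x = T diag(O[x, :])."""
    v = pi
    for x in obs:
        v = T @ (O[x, :] * v)    # apply the observable operator A_x
    return v.sum()               # since 1^T T = 1^T, this equals the forward-algorithm value

print(sequence_probability([0, 1, 1, 0]))
```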
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "1528056001", "1934019294" ], "abstract": [ "Hidden Markov models (HMMs) have proven to be one of the most widely used tools for learning probabilistic models of time series data. In an HMM, information about the past is conveyed through a single discrete variable—the hidden state. We discuss a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. We describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations, and relate it to the forward–backward algorithm for HMMs and to algorithms for more general graphical models. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or variational methods. Within the variational framework, we present a structured approximation in which the the state variables are decoupled, yielding a tractable algorithm for learning the parameters of the model. Empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach‘s chorales and show that factorial HMMs can capture statistical structure in this data set which an unconstrained HMM cannot.", "Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQ’s." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
In @cite_7 , the authors use PLA to approximate a time series, but they place an unnecessary constraint on the algorithm: the segment endpoints must come from the original data points. Overall, their algorithm runs in @math time and requires @math space.
{ "cite_N": [ "@cite_7" ], "mid": [ "1552695456" ], "abstract": [ "Similarity-based search over time-series databases has been a hot research topic for a long history, which is widely used in many applications, including multimedia retrieval, data mining, web search and retrieval, and so on. However, due to high dimensionality (i.e. length) of the time series, the similarity search over directly indexed time series usually encounters a serious problem, known as the \"dimensionality curse\". Thus, many dimensionality reduction techniques are proposed to break such curse by reducing the dimensionality of time series. Among all the proposed methods, only Piecewise Linear Approximation (PLA) does not have indexing mechanisms to support similarity queries, which prevents it from efficiently searching over very large time-series databases. Our initial studies on the effectiveness of different reduction methods, however, show that PLA performs no worse than others. Motivated by this, in this paper, we re-investigate PLA for approximating and indexing time series. Specifically, we propose a novel distance function in the reduced PLA-space, and prove that this function indeed results in a lower bound of the Euclidean distance between the original time series, which can lead to no false dismissals during the similarity search. As a second step, we develop an effective approach to index these lower bounds to improve the search efficiency. Our extensive experiments over a wide spectrum of real and synthetic data sets have demonstrated the efficiency and effectiveness of PLA together with the newly proposed lower bound distance, in terms of both pruning power and wall clock time, compared with two state-of-the-art reduction methods, Adaptive Piecewise Constant Approximation (APCA) and Chebyshev Polynomials (CP)." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
The authors of @cite_17 give a comprehensive review of existing techniques for segmenting time series. They categorize the solutions into three groups: sliding-window methods, top-down methods, and bottom-up methods. They then combine the strengths of the sliding-window and bottom-up approaches into the Sliding-Window-And-Bottom-up (SWAB) algorithm, which uses a moving window to bound the time period under consideration.
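For concreteness, here is a minimal sketch of the greedy sliding-window segmentation step that underlies this family of methods: grow a segment while its least-squares line stays within a maximum (L-infinity) error, then start a new segment. This is neither SWAB itself nor the optimal online algorithm of the paper whose abstract appears above; the error parameter and names are illustrative.

```python
# Illustrative sketch only: greedy sliding-window PLA with an L-infinity error bound.
import numpy as np

def sliding_window_pla(y, max_err):
    """Return (start, end, line-coefficients) segments covering y."""
    y = np.asarray(y, dtype=float)
    n, segments, start = len(y), [], 0
    while start < n:
        end = start + 1                         # current segment is y[start..end-1]
        coeff = np.array([0.0, y[start]])       # a single point is fit exactly
        while end < n:
            xs = np.arange(start, end + 1)
            trial = np.polyfit(xs, y[start:end + 1], 1)
            if np.abs(np.polyval(trial, xs) - y[start:end + 1]).max() > max_err:
                break                           # extending would violate the bound
            coeff, end = trial, end + 1
        segments.append((start, end - 1, coeff))
        start = end
    return segments

# usage on a noisy piecewise-linear signal
rng = np.random.default_rng(0)
y = np.concatenate([np.linspace(0, 5, 60), np.linspace(5, 1, 40)]) + rng.normal(0, 0.05, 100)
print(len(sliding_window_pla(y, max_err=0.3)))
```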
{ "cite_N": [ "@cite_17" ], "mid": [ "2049376434" ], "abstract": [ "In time series mining, one of the interesting tasks that attract many researchers is time series clustering which is classified into two main categories. Whole time series clustering considers how to cluster multiple time series, and the other one is Subsequence Time Series (STS) clustering, a clustering of subparts or subsequences within a single time series. Deplorably, STS clustering is not preferable even though it had widely been used as a subroutine in various mining tasks, e.g., rule discovery, anomaly detection, or classification, due to the recent finding a decade ago that STS clustering problem can produce meaningless results. There have been numerous attempts to resolve this problem but seemed to be unsuccessful. Until the two most recent attempts, they seem to accomplish in producing meaningful results; however, their approaches do need some predefined constraint values, such as the width of the subsequences that are in fact quite subjective and sensitive. Thus, we propose a novel parameter-free clustering technique to eliminate this problem by utilizing a motif discovery algorithm and some statistical principles to properly determine these parameters. Our experimental results from well-known datasets demonstrate the effectiveness of the proposed algorithm in selecting the proper subsequence width, and in turn leading to meaningful and highly accurate results." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
In @cite_0 , an amnesic function is introduced to assign weights to different points in the time series. The PLA-SegmentBound problem is discussed there in the context of the Unrestricted Window with Absolute Amnesic (UAA) problem, but no complete solution to this problem is provided in @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2008348094" ], "abstract": [ "Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penalty-based DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems." ] }
0811.4672
2953306525
In many emerging applications, data streams are monitored in a network environment. Due to limited communication bandwidth and other resource constraints, a critical and practical demand is to online compress data streams continuously with quality guarantee. Although many data compression and digital signal processing methods have been developed to reduce data volume, their super-linear time and more-than-constant space complexity prevents them from being applied directly on data streams, particularly over resource-constrained sensor networks. In this paper, we tackle the problem of online quality guaranteed compression of data streams using fast linear approximation (i.e., using line segments to approximate a time series). Technically, we address two versions of the problem which explore quality guarantees in different forms. We develop online algorithms with linear time complexity and constant cost in space. Our algorithms are optimal in the sense they generate the minimum number of segments that approximate a time series with the required quality guarantee. To meet the resource constraints in sensor networks, we also develop a fast algorithm which creates connecting segments with very simple computation. The low cost nature of our methods leads to a unique edge on the applications of massive and fast streaming environment, low bandwidth networks, and heavily constrained nodes in computational power. We implement and evaluate our methods in the application of an acoustic wireless sensor network.
The PLA-PointBound problem is addressed in @cite_10 with a different definition of the point error bound. The algorithm is claimed to be optimal, but its time complexity is @math , where @math is the number of points in the time series. Moreover, no performance evaluation of the solution is presented in the paper.
{ "cite_N": [ "@cite_10" ], "mid": [ "2018738327" ], "abstract": [ "When solving the general smooth nonlinear and possibly nonconvex optimization problem involving equality and or inequality constraints, an approximate first-order critical point of accuracy @math can be obtained by a second-order method using cubic regularization in at most @math evaluations of problem functions, the same order bound as in the unconstrained case. This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. As a consequence, the presence of (possibly nonconvex) equality inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ( @math ) evaluation-complexity bound for solving general nonconvexly constrained optimization problems." ] }
0811.2841
2951448804
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
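For a count query of sensitivity 1, the geometric mechanism described in the abstract above amounts to adding two-sided geometric noise with P(Z = z) proportional to alpha^|z|, where alpha = exp(-epsilon). A minimal sketch follows; sampling the noise as the difference of two i.i.d. geometric variables is a standard trick, and the function name is ours.

```python
# Illustrative sketch only: the two-sided geometric mechanism for a sensitivity-1 count.
import numpy as np

def geometric_mechanism(true_count, eps, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    alpha = np.exp(-eps)
    # rng.geometric(p) counts trials until the first success (support 1, 2, ...), so the
    # difference of two independent draws is two-sided geometric with ratio alpha.
    noise = int(rng.geometric(1.0 - alpha)) - int(rng.geometric(1.0 - alpha))
    return true_count + noise

print(geometric_mechanism(42, eps=0.5))
```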
Dinur and Nissim @cite_24 showed that for a database with @math rows, answering @math randomly chosen subset count queries with @math error allows an adversary to reconstruct most of the rows of the database (a blatant privacy breach); see @cite_20 for a more robust impossibility result of the same type. Most of the differential privacy literature circumvents these impossibility results by focusing on interactive models where a mechanism supplies answers to only a sub-linear (in @math ) number of queries. Count queries (e.g. @cite_24 @cite_17 ) and more general queries (e.g. @cite_8 @cite_12 ) have been studied from this perspective.
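To illustrate the reconstruction phenomenon on a toy scale, the snippet below answers random subset-count queries with small bounded error and recovers most of a random 0/1 database with a plain least-squares decoder. The cited works use a linear-programming decoder and prove precise error thresholds; the sizes and noise level here are arbitrary illustrative choices.

```python
# Illustrative toy only: reconstruction from noisy random subset-count queries.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 1200                                  # database size, number of queries
x = rng.integers(0, 2, size=n)                    # secret 0/1 database
A = rng.integers(0, 2, size=(k, n)).astype(float) # random subset-count queries
answers = A @ x + rng.uniform(-2, 2, size=k)      # each answer is off by at most 2

x_hat, *_ = np.linalg.lstsq(A, answers, rcond=None)
x_hat = (x_hat > 0.5).astype(int)                 # round back to a 0/1 database
print("fraction of rows recovered:", (x_hat == x).mean())
```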
{ "cite_N": [ "@cite_8", "@cite_24", "@cite_20", "@cite_12", "@cite_17" ], "mid": [ "2621688363", "2120806354", "2017509273", "44899178", "1551374365" ], "abstract": [ "We consider the problem of single-round private information retrieval (PIR) from @math replicated databases. We consider the case when @math databases are outdated (unsynchronized), or even worse, adversarial (Byzantine), and therefore, can return incorrect answers. In the PIR problem with Byzantine databases (BPIR), a user wishes to retrieve a specific message from a set of @math messages with zero-error, irrespective of the actions performed by the Byzantine databases. We consider the @math -privacy constraint in this paper, where any @math databases can collude, and exchange the queries submitted by the user. We derive the information-theoretic capacity of this problem, which is the maximum number of that can be retrieved privately (under the @math -privacy constraint) for every symbol of the downloaded data. We determine the exact BPIR capacity to be @math , if @math . This capacity expression shows that the effect of Byzantine databases on the retrieval rate is equivalent to removing @math databases from the system, with a penalty factor of @math , which signifies that even though the number of databases needed for PIR is effectively @math , the user still needs to access the entire @math databases. The result shows that for the unsynchronized PIR problem, if the user does not have any knowledge about the fraction of the messages that are mis-synchronized, the single-round capacity is the same as the BPIR capacity. Our achievable scheme extends the optimal achievable scheme for the robust PIR (RPIR) problem to correct the introduced by the Byzantine databases as opposed to in the RPIR problem. Our converse proof uses the idea of the cut-set bound in the network coding problem against adversarial nodes.", "This work is at theintersection of two lines of research. One line, initiated by Dinurand Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [4,7,13,11])and explicitly connected to error-correcting codes by Candes and Tao ([4]; see also [5,3]), is in the use of linearprogramming for error correction. Our principal result is the discovery of a sharp threshhold ρ*∠ 0.239, so that if ρ In the context of privacy-preserving datamining our results say thatany privacy mechanism, interactive or non-interactive, providingreasonably accurate answers to a 0.761 fraction of randomly generated weighted subset sum queries, and arbitrary answers on the remaining 0.239 fraction, is blatantly non-private.", "Existing differential privacy (DP) studies mainly consider aggregation on data sets where each entry corresponds to a particular participant to be protected. In many situations, a user may pose a relational algebra query on a database with sensitive data, and desire differentially private aggregation on the result of the query. However, no existing work is able to release such aggregation when the query contains unrestricted join operations. This severely limits the applications of existing DP techniques because many data analysis tasks require unrestricted joins. One example is subgraph counting on a graph. Furthermore, existing methods for differentially private subgraph counting support only edge DP and are subject to very simple subgraphs. 
Until recently, whether any nontrivial graph statistics can be released with reasonable accuracy for arbitrary kinds of input graphs under node DP was still an open problem. In this paper, we propose a novel differentially private mechanism that supports unrestricted joins, to release an approximation of a linear statistic of the result of some positive relational algebra calculation over a sensitive database. The error bound of the approximate answer is roughly proportional to the empirical sensitivity of the query --- a new notion that measures the maximum possible change to the query answer when a participant withdraws its data from the sensitive database. For subgraph counting, our mechanism provides a solution to achieve node DP, for any kind of subgraphs.", "In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts. For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore of the answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our treatment extends naturally to partially incorrect databases." ] }
0811.2841
2951448804
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
The authors of @cite_1 take a different approach, restricting attention to count queries that lie in a fixed class; they obtain non-interactive mechanisms that provide simultaneously good accuracy (in terms of worst-case error) for all count queries from a class with polynomial VC dimension. @cite_11 give further results for privately learning hypotheses from a given class.
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "2067905901", "2773012335" ], "abstract": [ "Previous teaching models in the learning theory community have been batch models. That is, in these models the teacher has generated a single set of helpful examples to present to the learner. In this paper we present an interactive model in which the learner has the ability to ask queries as in the query learning model of Angluin. We show that this model is at least as powerful as previous teaching models. We also show that anything learnable with queries, even by a randomized learner, is teachable in our model. In all previous teaching models, all classes shown to be teachable are known to be efficiently learnable. An important concept class that is not known to be learnable is DNF formulas. We demonstrate the power of our approach by providing a deterministic teacher and learner for the class of DNF formulas. The learner makes only equivalence queries and all hypotheses are also DNF formulas.", "In this paper, we focus on improving the proposal classification stage in the object detection task and present implicit negative sub-categorization and sink diversion to lift the performance by strengthening loss function in this stage. First, based on the observation that the “background” class is generally very diverse and thus challenging to be handled as a single indiscriminative class in existing state-of-the-art methods, we propose to divide the background category into multiple implicit sub-categories to explicitly differentiate diverse patterns within it. Second, since the ground truth class inevitably has low-value probability scores for certain images, we propose to add a “sink” class and divert the probabilities of wrong classes to this class when necessary, such that the ground truth label will still have a higher probability than other wrong classes even though it has low probability output. Additionally, we propose to use dilated convolution, which is widely used in the semantic segmentation task, for efficient and valuable context information extraction. Extensive experiments on PASCAL VOC 2007 and 2012 data sets show that our proposed methods based on faster R-CNN implementation can achieve state-of-the-art mAPs, i.e., 84.1 , 82.6 , respectively, and obtain 2.5 improvement on ILSVRC DET compared with that of ResNet." ] }
0811.2841
2951448804
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .
The use of ``abstract utility functions'' in McSherry and Talwar @cite_18 has a similar flavor to our use of loss functions, though the motivations and goals of their work and ours are unrelated. Motivated by pricing problems, McSherry and Talwar @cite_18 design differentially private mechanisms for queries that can have very different values on neighboring databases (unlike count queries); they do not consider users with side information (i.e., priors) and do not formulate a notion of mechanism optimality (simultaneous or otherwise).
{ "cite_N": [ "@cite_18" ], "mid": [ "2090593019" ], "abstract": [ "A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether a given database row is included. The goal of this paper is to formulate and provide strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a symmetric and monotone loss function). Our main result is the following: for each fixed count query and differential privacy level, there is a geometric mechanism @math ---a discrete variant of the simple and well-studi..." ] }
0811.3301
1974339580
The dynamic time warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.
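As a concrete illustration of the quantities mentioned above, the following Python sketch (ours, not the authors' code) implements window-constrained DTW with an absolute-difference cost together with an LB_Keogh-style envelope lower bound for equal-length series; the function names and the choice of cost are assumptions.

    import numpy as np

    def dtw(x, y, window=None):
        # Dynamic programming DTW with an optional Sakoe-Chiba window.
        n, m = len(x), len(y)
        w = max(window if window is not None else max(n, m), abs(n - m))
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - w), min(m, i + w) + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def lb_keogh(query, candidate, window):
        # Envelope-based lower bound on the window-constrained DTW distance.
        q, c = np.asarray(query, float), np.asarray(candidate, float)
        lb = 0.0
        for i, qi in enumerate(q):
            lo, hi = max(0, i - window), min(len(c), i + window + 1)
            u, l = c[lo:hi].max(), c[lo:hi].min()
            lb += (qi - u) if qi > u else (l - qi) if qi < l else 0.0
        return lb

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        a, b = rng.standard_normal(64).cumsum(), rng.standard_normal(64).cumsum()
        assert lb_keogh(a, b, 8) <= dtw(a, b, window=8)

In a nearest-neighbor search one would compute the cheap bound first and call dtw only when the bound does not already exceed the best distance found so far.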
Besides DTW, several similarity metrics have been proposed, including the directed and general Hausdorff distance, Pearson's correlation, nonlinear elastic matching distance @cite_17 , Edit distance with Real Penalty (ERP) @cite_53 , Needleman-Wunsch similarity @cite_30 , Smith-Waterman similarity @cite_19 , and SimilB @cite_27 .
{ "cite_N": [ "@cite_30", "@cite_53", "@cite_19", "@cite_27", "@cite_17" ], "mid": [ "2008348094", "2182136398", "1970013651", "2086784973", "1969086082" ], "abstract": [ "Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penalty-based DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems.", "Similarity measure between time series is a key issue in data mining of time series database. Euclidean distance measure is typically used init. However, the measure is an extremely brittle distance measure. Dynamic Time Warping (DTW) is proposed to deal with this case, but its expensive computation limits its application in massive datasets. In this paper, we present a new distance measure algorithm, called local segmented dynamic time warping (LSDTW), which is based on viewing the local DTW measure at the segment level. The DTW measure between the two segments is the product of the square of the distance between their mean times the number of points of the longer segment. Experiments about cluster analysis on the basis of this algorithm were implemented on a synthetic and a real world dataset comparing with Euclidean and classical DTW measure. The experiment results show that the new algorithm gives better computational performance in comparison to classical DTW with no loss of accuracy.", "While most time series data mining research has concentrated on providing solutions for a single distance function, in this work we motivate the need for an index structure that can support multiple distance measures. Our specific area of interest is the efficient retrieval and analysis of similar trajectories. 
Trajectory datasets are very common in environmental applications, mobility experiments, and video surveillance and are especially important for the discovery of certain biological patterns. Our primary similarity measure is based on the longest common subsequence (LCSS) model that offers enhanced robustness, particularly for noisy data, which are encountered very often in real-world applications. However, our index is able to accommodate other distance measures as well, including the ubiquitous Euclidean distance and the increasingly popular dynamic time warping (DTW). While other researchers have advocated one or other of these similarity measures, a major contribution of our work is the ability to support all these measures without the need to restructure the index. Our framework guarantees no false dismissals and can also be tailored to provide much faster response time at the expense of slightly reduced precision recall. The experimental results demonstrate that our index can help speed up the computation of expensive similarity measures such as the LCSS and the DTW.", "Similarity search is a core module of many data analysis tasks, including search by example, classification, and clustering. For time series data, Dynamic Time Warping (DTW) has been proven a very effective similarity measure, since it minimizes the effects of shifting and distortion in time. However, the quadratic cost of DTW computation to the length of the matched sequences makes its direct application on databases of long time series very expensive. We propose a technique that decomposes the sequences into a number of segments and uses cheap approximations thereof to compute fast lower bounds for their warping distances. We present several, progressively tighter bounds, relying on the existence or not of warping constraints. Finally, we develop an index and a multi-step technique that uses the proposed bounds and performs two levels of filtering to efficiently process similarity queries. A thorough experimental study suggests that our method consistently outperforms state-of-the-art methods for DTW similarity search.", "In this paper we explore the notion of mobile users' similarity as a key enabler of innovative applications hinging on opportunistic mobile encounters. In particular, we analyze the performance of known similarity metrics, applicable to our problem domain, as well as propose a novel temporal-based metric, in an attempt to quantify the inherently qualitative notion of similarity. Towards this objective, we first introduce generalized profile structures, beyond mere location, that aim to capture users interests and prior experiences, in the form of a probability distribution. Afterwards, we analyze known and proposed similarity metrics for the proposed profile structures using publicly available data. Apart from the classic Cosine similarity, we identify a distance metric from probability theory, namely Hellinger distance, as a strong candidate for quantifying similarity due to the probability distribution structure of the proposed profiles. In addition, we introduce a novel temporal similarity metric, based on matrix vectorization, to capitalize on the richness in the temporal dimension and maintain low complexity. Finally, the numerical results unveil a number of key insights. First, the temporal metrics yield, on the average, lower similarity indices, compared to the non-temporal ones, due to incorporating the dynamics in the temporal dimension. 
Second, the Hellinger distance holds great promise for quantifying similarity between probability distribution profiles. Third, vectorized metrics constitute a low-complexity approach towards temporal similarity on resource-limited mobile devices." ] }
0811.3301
1974339580
The dynamic time warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster. As an example, our approach is 2-3 times faster over random-walk and shape time series.
@cite_33 have shown that retrieval under the DTW can be made faster by mixing progressively finer resolutions and by applying early abandoning @cite_31 to the dynamic programming computation.
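The early-abandoning idea can be sketched as follows (an illustrative reconstruction, not the cited implementation): the row-by-row dynamic programming is stopped as soon as every entry of the current row already exceeds the best distance found so far.

    import numpy as np

    def dtw_early_abandon(x, y, best_so_far=np.inf):
        # Row-wise DTW that abandons once no alignment can beat best_so_far.
        n, m = len(x), len(y)
        prev = np.full(m + 1, np.inf)
        prev[0] = 0.0
        for i in range(1, n + 1):
            curr = np.full(m + 1, np.inf)
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                curr[j] = cost + min(prev[j], curr[j - 1], prev[j - 1])
            if curr[1:].min() > best_so_far:
                return np.inf  # provably worse than the best candidate so far
            prev = curr
        return prev[m]

    def nearest_neighbour(query, candidates):
        # 1-NN search that feeds the running best distance into the pruning test.
        best, best_idx = np.inf, -1
        for k, cand in enumerate(candidates):
            d = dtw_early_abandon(query, cand, best)
            if d < best:
                best, best_idx = d, k
        return best_idx, best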
{ "cite_N": [ "@cite_31", "@cite_33" ], "mid": [ "1968010112", "2144994235" ], "abstract": [ "Time-series data naturally arise in countless domains, such as meteorology, astrophysics, geology, multimedia, and economics. Similarity search is very popular, and DTW (Dynamic Time Warping) is one of the two prevailing distance measures. Although DTW incurs a heavy computation cost, it provides scaling along the time axis. In this paper, we propose FTW (Fast search method for dynamic Time Warping), which guarantees no false dismissals in similarity query processing. FTW efficiently prunes a significant number of the search cost. Experiments on real and synthetic sequence data sets reveals that FTW is significantly faster than the best existing method, up to 222 times.", "Dynamic Time Warping (DTW) has a quadratic time and space complexity that limits its use to small time series. In this paper we introduce FastDTW, an approximation of DTW that has a linear time and space complexity. FastDTW uses a multilevel approach that recursively projects a solution from a coarser resolution and refines the projected solution. We prove the linear time and space complexity of FastDTW both theoretically and empirically. We also analyze the accuracy of FastDTW by comparing it to two other types of existing approximate DTW algorithms: constraints (such as Sakoe-Chiba Bands) and abstraction. Our results show a large improvement in accuracy over existing methods." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
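Under one simple reading of the definition above -- every individual's item set must coincide with the item sets of at least k-1 other individuals -- the anonymity condition can be checked as in the short Python sketch below (our own illustration; the exact variant formalized in the paper may differ).

    from collections import Counter

    def is_k_anonymous(records, k):
        # records: iterable of item sets (e.g. market baskets or query sets).
        counts = Counter(frozenset(r) for r in records)
        return all(c >= k for c in counts.values())

    if __name__ == "__main__":
        baskets = [{"milk", "bread"}, {"milk", "bread"}, {"beer"}]
        print(is_k_anonymous(baskets, 2))  # False: the {"beer"} basket is unique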
In @cite_14 the authors study the problem of anonymizing market-basket data. They propose a notion of anonymity similar to @math -anonymity, where a limit is placed on the number of private items of any individual that could be known to an attacker beforehand. The authors provide generalization algorithms to achieve the anonymity requirements. For example, an item ``milk'' in a user's basket may be generalized to ``dairy product'' in order to protect it. In contrast, the techniques we propose consider additions and deletions to the dataset instead of generalizations. Further, we demonstrate the applicability of our algorithms to search engine query log data as well, where there is no obvious underlying hierarchy that can be used to generalize queries.
{ "cite_N": [ "@cite_14" ], "mid": [ "2025483242" ], "abstract": [ "This article examines a new problem of k-anonymity with respect to a reference dataset in privacy-aware location data publishing: given a user dataset and a sensitive event dataset, we want to generalize the user dataset such that by joining it with the event dataset through location, each event is covered by at least k users. Existing k-anonymity algorithms generalize every k user locations to the same vague value, regardless of the events. Therefore, they tend to overprotect against the privacy compromise and make the published data less useful. In this article, we propose a new generalization paradigm called local enlargement, as opposed to conventional hierarchy- or partition-based generalization. Local enlargement guarantees that user locations are enlarged just enough to cover all events k times, and thus maximize the usefulness of the published data. We develop an O(Hn)-approximate algorithm under the local enlargement paradigm, where n is the maximum number of events a user could possibly cover and Hn is the Harmonic number of n. With strong pruning techniques and mathematical analysis, we show that it runs efficiently and that the generalized user locations are up to several orders of magnitude smaller than those by the existing algorithms. In addition, it is robust enough to protect against various privacy attacks." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
Our @math -approximation algorithm is derived by reducing the anonymization problem to a clustering problem. Clustering techniques for achieving anonymity have also been studied in @cite_19 ; however, there the authors seek to minimize the maximum radius of the clustering, whereas we wish to minimize the sum of the Hamming distances of points to their cluster centers.
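The distinction between the two clustering objectives can be made concrete with a small illustrative sketch (not from the paper): given binary records, cluster centers, and an assignment, one objective is the total Hamming distance to the assigned centers, the other is the largest such distance.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def sum_of_distances(points, centers, assignment):
        # Sum-of-Hamming-distances objective.
        return sum(hamming(p, centers[assignment[i]]) for i, p in enumerate(points))

    def max_radius(points, centers, assignment):
        # Maximum-radius objective of radius-based clustering formulations.
        return max(hamming(p, centers[assignment[i]]) for i, p in enumerate(points))

    if __name__ == "__main__":
        pts = [(1, 0, 0), (1, 1, 0), (0, 0, 1)]
        centers, assign = [(1, 0, 0), (0, 0, 1)], [0, 0, 1]
        print(sum_of_distances(pts, centers, assign), max_radius(pts, centers, assign))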
{ "cite_N": [ "@cite_19" ], "mid": [ "1973264045" ], "abstract": [ "The problem of clustering a set of points so as to minimize the maximum intercluster distance is studied. An O(kn) approximation algorithm, where n is the number of points and k is the number of clusters, that guarantees solutions with an objective function value within two times the optimal solution value is presented. This approximation algorithm succeeds as long as the set of points satisfies the triangular inequality. We also show that our approximation algorithm is best possible, with respect to the approximation bound, if PZ NP." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
In @cite_6 the authors propose the notion of @math -coherence for anonymizing transactional data. Here, once again, there is a division of items into public and private items. The goal of the anonymization is to ensure that for any set of @math public items, either no transaction contains this set, or at least @math transactions contain it, and no more than @math percent of these transactions contain a common private item. The authors consider the minimal number of suppressions required to achieve these anonymity goals; however, no theoretical guarantees are given.
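The requirement can be spelled out in code as below; since the concrete parameter values are masked in the text above, we use placeholder names h (maximum size of a public item set assumed known to the attacker), k, and p, and the sketch is only a literal, brute-force reading of the sentence above, not the authors' algorithm.

    from itertools import combinations

    def is_coherent(transactions, public_items, h, k, p):
        # transactions: list of item sets; public_items: the set of public items.
        public = set(public_items)
        universe = sorted(set().union(*transactions) & public)
        for size in range(1, h + 1):
            for combo in combinations(universe, size):
                support = [t for t in transactions if set(combo) <= t]
                if not support:
                    continue                     # no transaction contains this set
                if len(support) < k:
                    return False                 # fewer than k supporting transactions
                private = set().union(*support) - public
                if any(100.0 * sum(item in t for t in support) / len(support) > p
                       for item in private):
                    return False                 # some private item is too revealing
        return True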
{ "cite_N": [ "@cite_6" ], "mid": [ "1992286709" ], "abstract": [ "It is not uncommon in the data anonymization literature to oppose the \"old\" @math k -anonymity model to the \"new\" differential privacy model, which offers more robust privacy guarantees. Yet, it is often disregarded that the utility of the anonymized results provided by differential privacy is quite limited, due to the amount of noise that needs to be added to the output, or because utility can only be guaranteed for a restricted type of queries. This is in contrast with @math k -anonymity mechanisms, which make no assumptions on the uses of anonymized data while focusing on preserving data utility from a general perspective. In this paper, we show that a synergy between differential privacy and @math k -anonymity can be found: @math k -anonymity can help improving the utility of differentially private responses to arbitrary queries. We devote special attention to the utility improvement of differentially private published data sets. Specifically, we show that the amount of noise required to fulfill @math ? -differential privacy can be reduced if noise is added to a @math k -anonymous version of the data set, where @math k -anonymity is reached through a specially designed microaggregation of all attributes. As a result of noise reduction, the general analytical utility of the anonymized output is increased. The theoretical benefits of our proposal are illustrated in a practical setting with an empirical evaluation on three data sets." ] }
0810.5582
1933583770
In this paper we consider the problem of anonymizing datasets in which each individual is associated with a set of items that constitute private information about the individual. Illustrative datasets include market-basket datasets and search engine query logs. We formalize the notion of k-anonymity for set-valued data as a variant of the k-anonymity model for traditional relational datasets. We define an optimization problem that arises from this definition of anonymity and provide O(klogk) and O(1)-approximation algorithms for the same. We demonstrate applicability of our algorithms to the America Online query log dataset.
With regard to search engine query logs, there has been work on identifying privacy attacks both on users @cite_15 and on companies whose websites appear in query results and get clicked on @cite_3 . We do not consider the latter kind of privacy attack in this paper. @cite_15 considers an anonymization procedure wherein keywords in queries are replaced by secure hashes. The authors show that such a procedure is susceptible to statistical attacks on the hashed keywords, leading to privacy breaches. There has also been work on defending against privacy attacks on users in @cite_17 . This line of work considers heuristics such as the removal of infrequent queries and develops methods to apply such techniques on the fly as new queries are posed. In contrast, we consider a static scenario wherein a search engine would like to publicly release an existing set of query logs.
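The two anonymization heuristics mentioned above -- replacing query keywords by secure hashes and dropping infrequent queries -- can be illustrated with the following sketch (ours; the salt, threshold, and function names are arbitrary choices, and, as noted above, token hashing alone does not guarantee privacy).

    import hashlib
    from collections import Counter

    def hash_tokens(query, salt=b"some-secret-salt"):
        # Replace each keyword by a truncated salted SHA-256 digest.
        return [hashlib.sha256(salt + tok.encode("utf-8")).hexdigest()[:16]
                for tok in query.lower().split()]

    def drop_infrequent(queries, min_count=2):
        # Keep only queries that were issued at least min_count times.
        counts = Counter(queries)
        return [q for q in queries if counts[q] >= min_count]

    if __name__ == "__main__":
        log = ["cheap flights", "cheap flights", "rare disease clinic springfield"]
        print(drop_infrequent(log))
        print(hash_tokens("cheap flights"))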
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_17" ], "mid": [ "1607430673", "2170258874", "2170540710" ], "abstract": [ "In this paper we study privacy preservation for the publication of search engine query logs. We introduce a new privacy concern, website privacy as a special case of business privacy.We define the possible adversaries who could be interested in disclosing website information and the vulnerabilities in the query log, which they could exploit. We elaborate on anonymization techniques to protect website information, discuss different types of attacks that an adversary could use and propose an anonymization strategy for one of these attacks. We then present a graph-based heuristic to validate the effectiveness of our anonymization method and perform an experimental evaluation of this approach. Our experimental results show that the query log can be appropriately anonymized against the specific attack, while retaining a significant volume of useful data.", "In this paper we study the privacy preservation properties of aspecific technique for query log anonymization: token-based hashing. In this approach, each query is tokenized, and then a secure hash function is applied to each token. We show that statistical techniques may be applied to partially compromise the anonymization. We then analyze the specific risks that arise from these partial compromises, focused on revelation of identity from unambiguous names, addresses, and so forth, and the revelation of facts associated with an identity that are deemed to be highly sensitive. Our goal in this work is two fold: to show that token-based hashing is unsuitable for anonymization, and to present a concrete analysis of specific techniques that may be effective in breaching privacy, against which other anonymization schemes should be measured.", "The question of how to publish an anonymized search log was brought to the forefront by a well-intentioned, but privacy-unaware AOL search log release. Since then a series of ad-hoc techniques have been proposed in the literature, though none are known to be provably private. In this paper, we take a major step towards a solution: we show how queries, clicks and their associated perturbed counts can be published in a manner that rigorously preserves privacy. Our algorithm is decidedly simple to state, but non-trivial to analyze. On the opposite side of privacy is the question of whether the data we can safely publish is of any use. Our findings offer a glimmer of hope: we demonstrate that a non-negligible fraction of queries and clicks can indeed be safely published via a collection of experiments on a real search log. In addition, we select an application, keyword generation, and show that the keyword suggestions generated from the perturbed data resemble those generated from the original data." ] }
0810.5325
2951937062
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
3D face recognition has attracted a lot of research effort in the past few decades due to the advent of new sensing technologies and the high potential of 3D methods for building robust systems that are invariant to head pose and illumination variations. We review in this section the most relevant work in 3D face recognition, which can be categorized into methods based on point cloud representations, depth images, facial surface features, or spherical representations. Surveys of the state of the art in 3D face recognition are further provided in @cite_12 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_12" ], "mid": [ "315826137", "2534083259" ], "abstract": [ "Face recognition using standard 2D images struggles to cope with changes in illumination and pose. 3D face recognition algorithms have been more successful in dealing with these challenges. 3D face shape data is used as an independent cue for face recognition and has also been combined with texture to facilitate multimodal face recognition. Additionally, 3D face models have been used for pose correction and calculation of the facial albedo map, which is invariant to illumination. Finally, 3D face recognition has also achieved significant success towards expression invariance by modeling non-rigid surface deformations, removing facial expressions or by using parts-based face recognition. This chapter gives an overview of 3D face recognition and details both well-established and more recent state-of-the-art 3D face recognition techniques in terms of their implementation and expected performance on benchmark datasets.", "Even if, most of 2D face recognition approaches reached recognition rate more than 90 in controlled environment, current days face recognition systems degrade their performance in case of uncontrolled environment which includes pose variations, illumination variations, expression variations and ageing effect etc. Inclusion of 3D face analysis gives an age over 2D face recognition as they give vital informations such as 3D shape, texture and depth which improve discrimination power of an algorithm. In this paper, we have investigated different 3D face recognition approaches that are robust to changes in facial expressions and illumination variations. 2D-PCA and 2D-LDA approaches have been extended to 3D face recognition because they can directly work on 2D depth image matrices rather than 1D vectors without need for transformations before feature extraction. In turn, this reduces storage space and time required for computations. 2D depth image is extracted from 3D face model and nose region from depth mapped image has been detected as a reference point for cropping stage to convert model into a standard size. Two Dimensional Principal Component Analysis (2D-PCA) and Two Dimensional Linear Discriminant analysis (2D-LDA) are employed to obtain feature vectors globally compared to feature vectors obtained locally using PCA or LDA. Finally, euclidean distance classifier is applied for comparison of extracted features. A set of experiments on GavabDB 3D face database, which has 61 individuals in total, demonstrated that 3D face recognition using 2D-LDA method has achieved recognition accuracy of 93.3 and EER of 8.96 over database, which is higher compared to 2D-PCA. So, more optimized performance has been achieved using 2D-LDA for 3D face recognition analysis." ] }
0810.5325
2951937062
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Many recognition systems use depth or range images, which permit formulating 3D face recognition as a problem of dimensionality reduction for planar images, where each pixel value represents the distance from the sensor to the facial surface. Principal Component Analysis (PCA) and ``Eigenfaces'' can be used for dimensionality reduction @cite_15 , where the basis vectors are, however, typically holistic and of global support. PCA can be combined with Linear Discriminant Analysis (LDA) to form ``Fisherfaces'' with enhanced class separability properties @cite_26 . Alternatively, dimensionality reduction can be performed via variants of non-negative matrix factorization (NMF) algorithms @cite_17 @cite_4 @cite_14 that produce part-based decompositions of the depth images. Part-based decompositions based on non-negative sparse coding @cite_19 have recently been shown to provide better recognition performance than NMF methods in face recognition @cite_27 . Recent methods have proposed to concentrate dimensionality reduction around facial landmarks like the nose tip @cite_8 or in multiple carefully chosen regions @cite_29 , or to compute geodesic distances among selected fiducial points @cite_25 . However, they require a selection of the fiducial points or areas of interest that is often performed manually, which prevents the implementation of fully automatic systems.
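A generic sketch of the holistic dimensionality-reduction step described above (PCA on vectorized depth images followed by nearest-neighbor matching in the reduced space) is given below; it is our own illustration on synthetic data, not any of the cited pipelines.

    import numpy as np

    def pca_basis(depth_images, n_components):
        # Learn a holistic PCA ("eigenface"-style) basis from vectorized depth images.
        X = np.asarray(depth_images, dtype=float).reshape(len(depth_images), -1)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]

    def project(depth_image, mean, components):
        # Project a depth image onto the reduced feature space.
        return components @ (np.asarray(depth_image, dtype=float).ravel() - mean)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        gallery = rng.random((20, 32, 32))          # stand-ins for 32x32 depth images
        mean, comps = pca_basis(gallery, n_components=5)
        feats = np.array([project(g, mean, comps) for g in gallery])
        probe = project(gallery[3], mean, comps)
        print(int(np.argmin(np.linalg.norm(feats - probe, axis=1))))   # 3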
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_4", "@cite_8", "@cite_29", "@cite_19", "@cite_27", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2534083259", "2142621404", "1213294141", "2166335841", "2117553576", "2104300442", "1970030635", "2119077463", "2059697919", "2092179111" ], "abstract": [ "Even if, most of 2D face recognition approaches reached recognition rate more than 90 in controlled environment, current days face recognition systems degrade their performance in case of uncontrolled environment which includes pose variations, illumination variations, expression variations and ageing effect etc. Inclusion of 3D face analysis gives an age over 2D face recognition as they give vital informations such as 3D shape, texture and depth which improve discrimination power of an algorithm. In this paper, we have investigated different 3D face recognition approaches that are robust to changes in facial expressions and illumination variations. 2D-PCA and 2D-LDA approaches have been extended to 3D face recognition because they can directly work on 2D depth image matrices rather than 1D vectors without need for transformations before feature extraction. In turn, this reduces storage space and time required for computations. 2D depth image is extracted from 3D face model and nose region from depth mapped image has been detected as a reference point for cropping stage to convert model into a standard size. Two Dimensional Principal Component Analysis (2D-PCA) and Two Dimensional Linear Discriminant analysis (2D-LDA) are employed to obtain feature vectors globally compared to feature vectors obtained locally using PCA or LDA. Finally, euclidean distance classifier is applied for comparison of extracted features. A set of experiments on GavabDB 3D face database, which has 61 individuals in total, demonstrated that 3D face recognition using 2D-LDA method has achieved recognition accuracy of 93.3 and EER of 8.96 over database, which is higher compared to 2D-PCA. So, more optimized performance has been achieved using 2D-LDA for 3D face recognition analysis.", "In this paper, two supervised methods for enhancing the classification accuracy of the Nonnegative Matrix Factorization (NMF) algorithm are presented. The idea is to extend the NMF algorithm in order to extract features that enforce not only the spatial locality, but also the separability between classes in a discriminant manner. The first method employs discriminant analysis in the features derived from NMF. In this way, a two-phase discriminant feature extraction procedure is implemented, namely NMF plus Linear Discriminant Analysis (LDA). The second method incorporates the discriminant constraints inside the NMF decomposition. Thus, a decomposition of a face to its discriminant parts is obtained and new update rules for both the weights and the basis images are derived. The introduced methods have been applied to the problem of frontal face verification using the well-known XM2VTS database. Both methods greatly enhance the performance of NMF for frontal face verification", "Extending recognition to uncontrolled situations is a key challenge for practical face recognition systems. Finding efficient and discriminative facial appearance descriptors is crucial for this. Most existing approaches use features of just one type. Here we argue that robust recognition requires several different kinds of appearance information to be taken into account, suggesting the use of heterogeneous feature sets. 
We show that combining two of the most successful local face representations, Gabor wavelets and Local Binary Patterns (LBP), gives considerably better performance than either alone: they are complimentary in the sense that LBP captures small appearance details while Gabor features encode facial shape over a broader range of scales. Both feature sets are high dimensional so it is beneficial to use PCA to reduce the dimensionality prior to normalization and integration. The Kernel Discriminative Common Vector method is then applied to the combined feature vector to extract discriminant nonlinear features for recognition. The method is evaluated on several challenging face datasets including FRGC 1.0.4, FRGC 2.0.4 and FERET, with promising results.", "We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D spherical face representation (SFR) is used in conjunction with the scale-invariant feature transform (SIFT) descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach, which is robust to facial expressions. This approach automatically segments the eyes- forehead and the nose regions, which are relatively less sensitive to expressions and matches them separately using a modified iterative closest point (ICP) algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms that used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74 percent and 98.31 percent verification rates at a 0.001 false acceptance rate (FAR) and identification rates of 99.02 percent and 95.37 percent for probes with a neutral and a nonneutral expression, respectively.", "We propose an appearance-based face recognition method called the Laplacianface approach. By using locality preserving projections (LPP), the face images are mapped into a face subspace for analysis. Different from principal component analysis (PCA) and linear discriminant analysis (LDA) which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information, and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. 
Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.", "Holistic face recognition algorithms are sensitive to expressions, illumination, pose, occlusions and makeup. On the other hand, feature-based algorithms are robust to such variations. In this paper, we present a feature-based algorithm for the recognition of textured 3D faces. A novel keypoint detection technique is proposed which can repeatably identify keypoints at locations where shape variation is high in 3D faces. Moreover, a unique 3D coordinate basis can be defined locally at each keypoint facilitating the extraction of highly descriptive pose invariant features. A 3D feature is extracted by fitting a surface to the neighborhood of a keypoint and sampling it on a uniform grid. Features from a probe and gallery face are projected to the PCA subspace and matched. The set of matching features are used to construct two graphs. The similarity between two faces is measured as the similarity between their graphs. In the 2D domain, we employed the SIFT features and performed fusion of the 2D and 3D features at the feature and score-level. The proposed algorithm achieved 96.1 identification rate and 98.6 verification rate on the complete FRGC v2 data set.", "We present an algorithm that uses a low resolution 3D sensor for robust face recognition under challenging conditions. A preprocessing algorithm is proposed which exploits the facial symmetry at the 3D point cloud level to obtain a canonical frontal view, shape and texture, of the faces irrespective of their initial pose. This algorithm also fills holes and smooths the noisy depth data produced by the low resolution sensor. The canonical depth map and texture of a query face are then sparse approximated from separate dictionaries learned from training data. The texture is transformed from the RGB to Discriminant Color Space before sparse coding and the reconstruction errors from the two sparse coding steps are added for individual identities in the dictionary. The query face is assigned the identity with the smallest reconstruction error. Experiments are performed using a publicly available database containing over 5000 facial images (RGB-D) with varying poses, expressions, illumination and disguise, acquired using the Kinect sensor. Recognition rates are 96.7 for the RGB-D data and 88.7 for the noisy depth data alone. Our results justify the feasibility of low resolution 3D sensors for robust face recognition.", "In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. 
We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.", "In this work, we propose and experiment an original solution to 3D face recognition that supports face matching also in the case of probe scans with missing parts. In the proposed approach, distinguishing traits of the face are captured by first extracting 3D keypoints of the scan and then measuring how the face surface changes in the keypoints neighborhood using local shape descriptors. In particular: 3D keypoints detection relies on the adaptation to the case of 3D faces of the meshDOG algorithm that has been demonstrated to be effective for 3D keypoints extraction from generic objects; as 3D local descriptors we used the HOG descriptor and also proposed two alternative solutions that develop, respectively, on the histogram of orientations and the geometric histogram descriptors. Face similarity is evaluated by comparing local shape descriptors across inlier pairs of matching keypoints between probe and gallery scans. The face recognition accuracy of the approach has been first experimented on the difficult probes included in the new 2D 3D Florence face dataset that has been recently collected and released at the University of Firenze, and on the Binghamton University 3D facial expression dataset. Then, a comprehensive comparative evaluation has been performed on the Bosphorus, Gavab and UND FRGC v2.0 databases, where competitive results with respect to existing solutions for 3D face biometrics have been obtained. Graphical abstractDisplay Omitted Highlights3D face recognition approach deployable in real non-cooperative contexts of use.Fully-3D approach, based on keypoints detection, description and matching.MeshDOG keypoints detector combined with the multi-ring GH descriptor.RANSAC algorithm included for outlier removal from matching keypoints.State of the art accuracy for recognizing 3D scans with missing parts.", "As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients by using the face spherical spaces properties. First, an illumination training database is generated by computing the properties of the spherical spaces out of face albedo and normal values estimated from 2D training images. The training database is then discriminately divided into two directions in terms of the illumination quality and light direction of each image. Based on the generated multi-level illumination discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. When designing the framework, practical real-time processing speed and small image size were considered. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for the training process. Furthermore, the proposed approach uses one single face image to estimate the face albedo and face spherical spaces. In this work, we also provide the results of a series of experiments performed on publicly available databases to show the significant improvements in the face recognition rates." ] }
0810.5325
2951937062
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits to represent each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can be further activated for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Finally, spherical representations have recently been used for modelling illumination variations @cite_20 @cite_18 or both illumination and pose variations in face images @cite_7 . Spherical representations permit efficient representation of facial surfaces and overcome the limitations of other methods with respect to occlusions and partial views @cite_24 . To the best of our knowledge, however, the representation of 3D face point clouds as spherical signals for face recognition has not been investigated yet. We therefore propose to take advantage of the robustness of spherical representations and of spherical signal processing tools to build an effective and automatic 3D face recognition system. We perform dimensionality reduction directly on the sphere, so that the geometry of 3D faces is preserved. The reduced feature space is extracted by sparse approximations with a dictionary of localized geometric features on the sphere, which effectively captures the salient and spatially localized 3D face features that are advantageous in the recognition process.
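The sparse approximation step can be illustrated with a generic matching pursuit over an arbitrary unit-norm dictionary, as in the sketch below; this is a flat-domain stand-in for the spherical dictionary and the simultaneous approximations used in the paper, and all names are our own.

    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        # Greedily pick the atom most correlated with the residual and subtract it.
        residual = np.asarray(signal, dtype=float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            corr = dictionary.T @ residual
            k = int(np.argmax(np.abs(corr)))
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[:, k]
        return coeffs, residual

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        D = rng.standard_normal((128, 256))
        D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
        x = 2.0 * D[:, 10] - 1.5 * D[:, 99]         # a signal that is sparse in D
        coeffs, res = matching_pursuit(x, D, n_atoms=10)
        print(np.linalg.norm(res) < np.linalg.norm(x))   # the residual shrinks: True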
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_7", "@cite_20" ], "mid": [ "2092179111", "2053955554", "2108428911", "1964475161" ], "abstract": [ "As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face spherical illumination coefficients by using the face spherical spaces properties. First, an illumination training database is generated by computing the properties of the spherical spaces out of face albedo and normal values estimated from 2D training images. The training database is then discriminately divided into two directions in terms of the illumination quality and light direction of each image. Based on the generated multi-level illumination discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. When designing the framework, practical real-time processing speed and small image size were considered. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for the training process. Furthermore, the proposed approach uses one single face image to estimate the face albedo and face spherical spaces. In this work, we also provide the results of a series of experiments performed on publicly available databases to show the significant improvements in the face recognition rates.", "Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose to address one of the most challenging scenarios in face recognition. That is, to identify a subject from a test image that is acquired under dierent pose and illumination condition from only one training sample (also known as a gallery image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting sources while the corresponding training image is frontal under a single lighting source. Under the assumption of Lambertian reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact that 2D harmonic basis images at different poses are related by close-form linear transformations, and give a more convenient transformation matrix to be directly used for basis images. An immediate application is that we can easily synthesize a different view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for solving the above-mentioned challenging scenario. Thus, we directly project a nonfrontal view test image onto the space of frontal view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix; for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view. 
Very good recognition results are obtained using this method for both synthetic and challenging real images.", "In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions.", "Introduces a new surface representation for recognizing curved objects. The authors approach begins by representing an object by a discrete mesh of points built from range data or from a geometric model of the object. The mesh is computed from the data by deforming a standard shaped mesh, for example, an ellipsoid, until it fits the surface of the object. The authors define local regularity constraints that the mesh must satisfy. The authors then define a canonical mapping between the mesh describing the object and a standard spherical mesh. A surface curvature index that is pose-invariant is stored at every node of the mesh. The authors use this object representation for recognition by comparing the spherical model of a reference object with the model extracted from a new observed scene. The authors show how the similarity between reference model and observed data can be evaluated and they show how the pose of the reference object in the observed scene can be easily computed using this representation. The authors present results on real range images which show that this approach to modelling and recognizing 3D objects has three main advantages: (1) it is applicable to complex curved surfaces that cannot be handled by conventional techniques; (2) it reduces the recognition problem to the computation of similarity between spherical distributions; in particular, the recognition algorithm does not require any combinatorial search; and (3) even though it is based on a spherical mapping, the approach can handle occlusions and partial views. >" ] }
0810.5428
2950195527
We argue that relationships between Web pages are functions of the user's intent. We identify a class of Web tasks - information-gathering - that can be facilitated by a search engine that provides links to pages which are related to the page the user is currently viewing. We define three kinds of intentional relationships that correspond to whether the user is a) seeking sources of information, b) reading pages which provide information, or c) surfing through pages as part of an extended information-gathering process. We show that these three relationships can be productively mined using a combination of textual and link information and provide three scoring mechanisms that correspond to them: SeekRel , FactRel and SurfRel . These scoring mechanisms incorporate both textual and link information. We build a set of capacitated subnetworks - each corresponding to a particular keyword - that mirror the interconnection structure of the World Wide Web. The scores are computed by computing flows on these subnetworks. The capacities of the links are derived from the hub and authority values of the nodes they connect, following the work of Kleinberg (1998) on assigning authority to pages in hyperlinked environments. We evaluated our scoring mechanism by running experiments on four data sets taken from the Web. We present user evaluations of the relevance of the top results returned by our scoring mechanisms and compare those to the top results returned by Google's Similar Pages feature, and the Companion algorithm proposed by Dean and Henzinger (1999).
In a different use of link structure related to our own, Lu et al. @cite_30 @cite_31 considered two pages to be similar if flow could be routed from one of them to the other. However, unlike in our work, their capacity assignments were not based on any notion of authority. To the best of our knowledge, this is the only other mention of using flow to score similarity in the literature.
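To illustrate how flow can score relatedness, the sketch below builds a small directed graph, derives edge capacities from HITS hub and authority scores, and uses a maximum-flow computation between two pages; the particular capacity assignment is one plausible illustration of the idea, not the exact assignment used in our scoring mechanisms or in the cited work.

    import networkx as nx

    def flow_similarity(web_graph, page_u, page_v):
        # Edge capacities from HITS scores; relatedness = max flow between the pages.
        hubs, auths = nx.hits(web_graph, max_iter=1000)
        G = nx.DiGraph()
        for a, b in web_graph.edges():
            # One plausible assignment: a link carries more capacity when it leaves
            # a good hub and points to a good authority.
            G.add_edge(a, b, capacity=hubs[a] * auths[b])
        value, _ = nx.maximum_flow(G, page_u, page_v)
        return value

    if __name__ == "__main__":
        W = nx.DiGraph([("p1", "p2"), ("p2", "p3"), ("p1", "p3"), ("p3", "p4")])
        print(flow_similarity(W, "p1", "p4"))
        print(flow_similarity(W, "p1", "p2"))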
{ "cite_N": [ "@cite_30", "@cite_31" ], "mid": [ "2165636119", "2891096760" ], "abstract": [ "In this work, we address the problem of joint modeling of text and citations in the topic modeling framework. We present two different models called the Pairwise-Link-LDA and the Link-PLSA-LDA models. The Pairwise-Link-LDA model combines the ideas of LDA [4] and Mixed Membership Block Stochastic Models [1] and allows modeling arbitrary link structure. However, the model is computationally expensive, since it involves modeling the presence or absence of a citation (link) between every pair of documents. The second model solves this problem by assuming that the link structure is a bipartite graph. As the name indicates, Link-PLSA-LDA model combines the LDA and PLSA models into a single graphical model. Our experiments on a subset of Citeseer data show that both these models are able to predict unseen data better than the baseline model of Erosheva and Lafferty [8], by capturing the notion of topical similarity between the contents of the cited and citing documents. Our experiments on two different data sets on the link prediction task show that the Link-PLSA-LDA model performs the best on the citation prediction task, while also remaining highly scalable. In addition, we also present some interesting visualizations generated by each of the models.", "In this paper, we consider the problem of approximately aligning matching two graphs. Given two graphs (G_ 1 =(V_ 1 ,E_ 1 ) ) and (G_ 2 =(V_ 2 ,E_ 2 ) ), the objective is to map nodes (u, v G_1 ) to nodes (u',v' G_2 ) such that when u, v have an edge in (G_1 ), very likely their corresponding nodes (u', v' ) in (G_2 ) are connected as well. This problem with subgraph isomorphism as a special case has extra challenges when we consider matching complex networks exhibiting the small world phenomena. In this work, we propose to use ‘Ricci flow metric’, to define the distance between two nodes in a network. This is then used to define similarity of a pair of nodes in two networks respectively, which is the crucial step of network alignment. Specifically, the Ricci curvature of an edge describes intuitively how well the local neighborhood is connected. The graph Ricci flow uniformizes discrete Ricci curvature and induces a Ricci flow metric that is insensitive to node edge insertions and deletions. With the new metric, we can map a node in (G_1 ) to a node in (G_2 ) whose distance vector to only a few preselected landmarks is the most similar. The robustness of the graph metric makes it outperform other methods when tested on various complex graph models and real world network data sets (Emails, Internet, and protein interaction networks) (The source code of computing Ricci curvature and Ricci flow metric are available: https: github.com saibalmars GraphRicciCurvature)." ] }
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
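The following toy sketch, written only for illustration, shows the two ingredients the abstract names: a skewed preference for a "community" area and time periods that modulate behaviour with periodic re-appearances. The area, the period, and the probabilities (COMMUNITY, DAY_PERIOD, p_in_community) are invented for this example and are not the TVC model's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

COMMUNITY = ((0.4, 0.6), (0.4, 0.6))     # (x-range, y-range) of the preferred area
DAY_PERIOD = range(8, 20)                # hours during which the node roams more

def next_position(hour_of_day):
    # During the day the node leaves its community more often; at night it
    # re-appears inside the community with high probability.
    p_in_community = 0.5 if hour_of_day in DAY_PERIOD else 0.9
    if rng.random() < p_in_community:
        (x0, x1), (y0, y1) = COMMUNITY
        return np.array([rng.uniform(x0, x1), rng.uniform(y0, y1)])
    return rng.random(2)                 # uniform over the whole unit square

trace = [next_position(h % 24) for h in range(48)]   # two simulated days
print(len(trace), "positions generated")
```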
Mobility models have long been recognized as one of the fundamental components that impact the performance of wireless ad hoc networks. A wide variety of mobility models is available in the research community (see @cite_5 for a good survey). Among them, the popularity of random mobility models (e.g., random walk, random direction, and random waypoint) is rooted in their simplicity and mathematical tractability. A number of important properties of these models have been studied, such as the stationary nodal distribution @cite_43 , the hitting and meeting times @cite_31 , and the meeting duration @cite_35 . These quantities in turn enable routing protocol analysis to produce performance bounds @cite_34 @cite_3 . However, random mobility models are based on over-simplified assumptions, and, as has been shown recently and as we will also show in this paper, the resulting mobility characteristics are very different from those found in real-life scenarios. Hence, it is debatable whether findings obtained under these models translate directly into the performance of real-world MANET deployments.
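As a concrete illustration of one of the quantities mentioned above, the sketch below estimates by Monte Carlo the hitting time of a single random-waypoint node to a fixed target location on the unit square. It is not taken from the cited derivations; the speed, the communication range, and the omission of pause times are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwp_hitting_time(target, comm_range=0.05, speed=0.01, max_steps=200_000):
    """Steps until a random-waypoint node comes within comm_range of target."""
    pos = rng.random(2)            # start uniformly in the unit square
    waypoint = rng.random(2)
    for step in range(max_steps):
        if np.linalg.norm(pos - target) <= comm_range:
            return step            # hitting time, in movement steps
        direction = waypoint - pos
        dist = np.linalg.norm(direction)
        if dist < speed:           # waypoint reached: draw the next one
            pos, waypoint = waypoint, rng.random(2)
        else:
            pos = pos + speed * direction / dist
    return max_steps               # censored sample

samples = [rwp_hitting_time(np.array([0.7, 0.3])) for _ in range(100)]
print("mean hitting time (steps):", np.mean(samples))
```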
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_43", "@cite_5", "@cite_31", "@cite_34" ], "mid": [ "2002169759", "200596125", "2145450252", "1966415263", "2167627514", "1968560005" ], "abstract": [ "In this paper, we analyze the mobility patterns of users of wireless hand-held PDAs in a campus wireless network using an eleven week trace of wireless network activity. Our study has two goals. First, we characterize the high-level mobility and access patterns of hand-held PDA users and compare these characteristics to previous workload mobility studies focused on laptop users. Second, we develop two wireless network topology models for use in wireless mobility studies: an evolutionary topology model based on user proximity and a campus waypoint model that serves as a trace-based complement to the random waypoint model. We use our evolutionary topology model as a case study for preliminary evaluation of three ad hoc routing algorithms on the network topologies created by the access and mobility patterns of users of modern wireless PDAs. Based upon the mobility characteristics of our trace-based campus waypoint model, we find that commonly parameterized synthetic mobility models have overly aggressive mobility characteristics for scenarios where user movement is limited to walking. Mobility characteristics based on realistic models can have significant implications for evaluating systems designed for mobility. When evaluated using our evolutionary topology model, for example, popular ad hoc routing protocols were very successful at adapting to user mobility, and user mobility was not a key factor in their performance.", "Mobile ad hoc networks (MANETs) are multihop networks that are capable of establishing communication in the absence of any pre-existing infrastructure. Due to frequent node mobility and unreliable wireless links, the network is characterized by unpredictable topological changes. For more robust and reliable communications, it is important that a mobile node anticipates address changes and predicts its future routes in the network. This Chapter describes prediction-based mobility management schemes for mobile ad hoc networks. We propose a Markov model-based mobility management scheme that provides an adaptive location prediction mechanism. We used simulation method to evaluate the prediction accuracy as well as the probability of making the correct predictions. The simulation results indicated that higher order Markov models have slightly greater prediction accuracy than lower order Markov models. However, the prediction accuracy decreases with increase in the probability of random movement both for network sizes and number of hops.", "This paper presents an analysis of the behavior of mobile ad hoc networks when group mobility is involved. We propose four different group mobility models and present a mobility pattern generator, called grcmob that we designed to be used with the ns-2 simulator. Using 2k factorial analysis we determine the most representative factors for protocol performance. We then evaluate the performance of a dynamic source routing (DSR) based MANET, using both TCP and UDP data traffic. The results are compared with the classical random waypoint mobility model. It is shown that the number of groups parameter is more important than the number of nodes one and that the impact of the area size is almost negligible. We make also evident that the mix of inter- and intra-group communication has the strongest impact on the performance. 
Finally, it is evidenced that the presence of groups forces the network topology to be sparser and therefore the probability of network partitions and node disconnections grows.", "The random waypoint model is a commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is, in general, nonuniform. However, a closed-form expression of this distribution and an in-depth investigation is still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, we present a detailed analytical study of the spatial node distribution generated by random waypoint mobility. More specifically, we consider a generalization of the model in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time. We show that the structure of the resulting distribution is the weighted sum of three independent components: the static, pause, and mobility component. This division enables us to understand how the model's parameters influence the distribution. We derive an exact equation of the asymptotically stationary distribution for movement on a line segment and an accurate approximation for a square area. The good quality of this approximation is validated through simulations using various settings of the mobility parameters. In summary, this article gives a fundamental understanding of the behavior of the random waypoint model.", "One of the most important methods for evaluating the characteristics of ad hoc networking protocols is through the use of simulation. Simulation provides researchers with a number of significant benefits, including repeatable scenarios, isolation of parameters, and exploration of a variety of metrics. The topology and movement of the nodes in the simulation are key factors in the performance of the network protocol under study. Once the nodes have been initially distributed, the mobility model dictates the movement of the nodes within the network. Because the mobility of the nodes directly impacts the performance of the protocols, simulation results obtained with unrealistic movement models may not correctly reflect the true performance of the protocols. The majority of existing mobility models for ad hoc networks do not provide realistic movement scenarios; they are limited to random walk models without any obstacles. In this paper, we propose to create more realistic movement models through the incorporation of obstacles. These obstacles are utilized to both restrict node movement as well as wireless transmissions. In addition to the inclusion of obstacles, we construct movement paths using the Voronoi diagram of obstacle vertices. Nodes can then be randomly distributed across the paths, and can use shortest path route computations to destinations at randomly chosen obstacles. Simulation results show that the use of obstacles and pathways has a significant impact on the performance of ad hoc network protocols.", "The random waypoint model is a commonly used mobility model for simulations of wireless communication networks. 
By giving a formal description of this model in terms of a discrete-time stochastic process, we investigate some of its fundamental stochastic properties with respect to: (a) the transition length and time of a mobile node between two waypoints, (b) the spatial distribution of nodes, (c) the direction angle at the beginning of a movement transition, and (d) the cell change rate if the model is used in a cellular-structured system area. The results of this paper are of practical value for performance analysis of mobile networks and give a deeper understanding of the behavior of this mobility model. Such understanding is necessary to avoid misinterpretation of simulation results. The movement duration and the cell change rate enable us to make a statement about the \"degree of mobility\" of a certain simulation scenario. Knowledge of the spatial node distribution is essential for all investigations in which the relative location of the mobile nodes is important. Finally, the direction distribution explains in an analytical manner the effect that nodes tend to move back to the middle of the system area." ] }
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
More recently, an array of synthetic mobility models has been proposed to improve the realism of the simple random mobility models. More complex rules are introduced to make the nodes follow a popularity distribution when selecting the next destination @cite_38 , stay on designated paths for movement @cite_50 , or move as a group @cite_22 . These rules enrich the scenarios covered by synthetic mobility models, but at the same time make the theoretical treatment of these models difficult. In addition, most synthetic mobility models are still i.i.d. models, in which mobility decisions are independent of the nodes' current locations and of the simulation time.
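For instance, a popularity-driven destination rule of the kind cited above can be sketched as follows. The Zipf-like weighting and the list of hotspots are assumptions of this example, not necessarily the distribution used in @cite_38 .

```python
import numpy as np

rng = np.random.default_rng(1)

def next_destination(locations, exponent=1.0):
    """Pick the next destination with probability proportional to 1/rank^exponent."""
    ranks = np.arange(1, len(locations) + 1)
    weights = 1.0 / ranks**exponent          # Zipf-like popularity weights
    probs = weights / weights.sum()
    return locations[rng.choice(len(locations), p=probs)]

hotspots = ["cafeteria", "library", "lab", "dorm", "gym"]   # ordered by popularity
print([next_destination(hotspots) for _ in range(5)])
```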
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_50" ], "mid": [ "1988651441", "2172211955", "1972606191" ], "abstract": [ "Validation of mobile ad hoc network protocols relies almost exclusively on simulation. The value of the validation is, therefore, highly dependent on how realistic the movement models used in the simulations are. Since there is a very limited number of available real traces in the public domain, synthetic models for movement pattern generation must be used. However, most widely used models are currently very simplistic, their focus being ease of implementation rather than soundness of foundation. As a consequence, simulation results of protocols are often based on randomly generated movement patterns and, therefore, may differ considerably from those that can be obtained by deploying the system in real scenarios. Movement is strongly affected by the needs of humans to socialise or cooperate, in one form or another. Fortunately, humans are known to associate in particular ways that can be mathematically modelled and that have been studied in social sciences for years.In this paper we propose a new mobility model founded on social network theory. The model allows collections of hosts to be grouped together in a way that is based on social relationships among the individuals. This grouping is then mapped to a topographical space, with movements influenced by the strength of social ties that may also change in time. We have validated our model with real traces by showing that the synthetic mobility traces are a very good approximation of human movement patterns.", "Models of human mobility have broad applicability in fields such as mobile computing, urban planning, and ecology. This paper proposes and evaluates WHERE, a novel approach to modeling how large populations move within different metropolitan areas. WHERE takes as input spatial and temporal probability distributions drawn from empirical data, such as Call Detail Records (CDRs) from a cellular telephone network, and produces synthetic CDRs for a synthetic population. We have validated WHERE against billions of anonymous location samples for hundreds of thousands of phones in the New York and Los Angeles metropolitan areas. We found that WHERE offers significantly higher fidelity than other modeling approaches. For example, daily range of travel statistics fall within one mile of their true values, an improvement of more than 14 times over a Weighted Random Waypoint model. Our modeling techniques and synthetic CDRs can be applied to a wide range of problems while avoiding many of the privacy concerns surrounding real CDRs.", "This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis provides characteristics influencing network performance in public safety communication networks like heterogeneous area-based movement, obstacles, and joining leaving of nodes. As these characteristics cannot be modeled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modeling, we compare it to existing ones (modeling the same scenario) using different pure movement and link-based metrics. The new model shows specific characteristics like heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. 
The simulations show that the new model discloses new information and has a significant impact on performance analysis." ] }
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
A different approach to mobility modeling is empirical mobility trace collection. Along this line, researchers have exploited existing wireless network infrastructure, such as wireless LANs (e.g., @cite_19 @cite_28 @cite_24 ) or cellular phone networks (e.g., @cite_4 ), to track user mobility by monitoring the users' locations. Such traces can be replayed as input mobility patterns for simulations of network protocols @cite_27 . More recently, DTN-specific testbeds @cite_7 @cite_23 @cite_20 aim at collecting encounter events between mobile nodes instead of their mobility patterns. Some initial efforts to mathematically analyze these traces can be found in @cite_7 @cite_25 . Yet, the size of the traces and the environments in which the experiments are performed cannot be adjusted at will by the researchers. To improve the flexibility of traces, trace-based mobility models have also been proposed @cite_26 @cite_6 @cite_45 . These models discover the underlying mobility rules that lead to the properties observed in the traces (such as the duration of stay at locations, the arrival patterns, etc.). Statistical analysis is then used to determine the proper parameters of the model so that it matches a particular trace.
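A minimal sketch of the parameter-fitting step described above: given per-visit durations of stay extracted from a trace, fit a simple parametric distribution to them. The log-normal form and the toy durations are assumptions for illustration, not values from any of the cited traces.

```python
import numpy as np

def fit_lognormal_durations(durations_seconds):
    """Fit log-normal parameters (mu, sigma) to observed durations of stay."""
    logs = np.log(np.asarray(durations_seconds, dtype=float))
    return logs.mean(), logs.std(ddof=1)

# Toy durations (seconds) standing in for values parsed from a real trace.
toy_trace = [120, 300, 45, 3600, 900, 60, 1800, 240]
mu, sigma = fit_lognormal_durations(toy_trace)
print(f"fitted log-normal: mu={mu:.2f}, sigma={sigma:.2f}")
```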
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_28", "@cite_6", "@cite_24", "@cite_19", "@cite_27", "@cite_45", "@cite_23", "@cite_25", "@cite_20" ], "mid": [ "2160005388", "2002169759", "1988651441", "2804572498", "2137688035", "2133286823", "2092003045", "1571751884", "2138198492", "2145517691", "2143859410", "2115240023" ], "abstract": [ "In this paper we present a trace-driven framework capable of building realistic mobility models for the simulation studies of mobile systems. With the goal of realism, this framework combines coarse-grained wireless traces, i.e., association data between WiFi users and access points, with an actual map of the space over which the traces were collected. Through a sequence of data processing steps, including filtering the data trace and converting the map to a graph representation, this framework generates a probabilistic mobility model that produces user movement patterns that are representative of real movement. This is done by adopting a set of heuristics that help us infer the paths users take between access points. We describe our experience applying this approach to a college campus, and study a number of properties of the trace data using our framework.", "In this paper, we analyze the mobility patterns of users of wireless hand-held PDAs in a campus wireless network using an eleven week trace of wireless network activity. Our study has two goals. First, we characterize the high-level mobility and access patterns of hand-held PDA users and compare these characteristics to previous workload mobility studies focused on laptop users. Second, we develop two wireless network topology models for use in wireless mobility studies: an evolutionary topology model based on user proximity and a campus waypoint model that serves as a trace-based complement to the random waypoint model. We use our evolutionary topology model as a case study for preliminary evaluation of three ad hoc routing algorithms on the network topologies created by the access and mobility patterns of users of modern wireless PDAs. Based upon the mobility characteristics of our trace-based campus waypoint model, we find that commonly parameterized synthetic mobility models have overly aggressive mobility characteristics for scenarios where user movement is limited to walking. Mobility characteristics based on realistic models can have significant implications for evaluating systems designed for mobility. When evaluated using our evolutionary topology model, for example, popular ad hoc routing protocols were very successful at adapting to user mobility, and user mobility was not a key factor in their performance.", "Validation of mobile ad hoc network protocols relies almost exclusively on simulation. The value of the validation is, therefore, highly dependent on how realistic the movement models used in the simulations are. Since there is a very limited number of available real traces in the public domain, synthetic models for movement pattern generation must be used. However, most widely used models are currently very simplistic, their focus being ease of implementation rather than soundness of foundation. As a consequence, simulation results of protocols are often based on randomly generated movement patterns and, therefore, may differ considerably from those that can be obtained by deploying the system in real scenarios. Movement is strongly affected by the needs of humans to socialise or cooperate, in one form or another. 
Fortunately, humans are known to associate in particular ways that can be mathematically modelled and that have been studied in social sciences for years.In this paper we propose a new mobility model founded on social network theory. The model allows collections of hosts to be grouped together in a way that is based on social relationships among the individuals. This grouping is then mapped to a topographical space, with movements influenced by the strength of social ties that may also change in time. We have validated our model with real traces by showing that the synthetic mobility traces are a very good approximation of human movement patterns.", "In wireless networking R&D we typically depend on experimentation to further evaluate a solution, as simulation is inherently a simplification of the real-world. However, experimentation is limited in aspects where simulation excels, such as repeatability and reproducibility. Real wireless experiments are hardly repeatable. Given the same input they can produce very different output results, since wireless communications are influenced by external random phenomena such as noise, interference, and multipath. Real experiments are also difficult to reproduce due to testbed operational constraints and availability. We have previously proposed the Trace-based Simulation (TS) approach, which uses the TraceBasedPropagationLossModel to successfully reproduce past experiments. Yet, in its current version, the TraceBasedPropagationLossModel only supports point-to-point scenarios. In this paper, we introduce a new version of the model that supports Multiple Access wireless scenarios. To validate the new version of the model, the network throughput was measured in a laboratory testbed. The experimental results were then compared to the network throughput achieved using the ns-3 trace-based simulation and a pure ns-3 simulation, confirming the TS approach is valid for multiple access scenarios too.", "Understanding user mobility is critical for simula- tions of mobile devices in a wireless network, but current mobility models often do not reflect real user movements. In this paper, we provide a foundation for such work by exploring mobility characteristics in traces of mobile users. We present a method to estimate the physical location of users from a large trace of mobile devices associating with access points in a wireless network. Using this method, we extracted tracks of always-on Wi-Fi devices from a 13-month trace. We discovered that the speed and pause time each follow a log-normal distribution and that the direction of movements closely reflects the direction of roads and walkways. Based on the extracted mobility characteristics, we developed a mobility model, focusing on movements among popular regions. Our validation shows that synthetic tracks match real tracks with a median relative error of 17 .", "We report that human walk patterns contain statistically similar features observed in Levy walks. These features include heavy-tail flight and pause-time distributions and the super-diffusive nature of mobility. Human walks are not random walks, but it is surprising that the patterns of human walks and Levy walks contain some statistical similarity. Our study is based on 226 daily GPS traces collected from 101 volunteers in five different outdoor sites. 
The heavy-tail flight distribution of human mobility induces the super-diffusivity of travel, but up to 30 min to 1 h due to the boundary effect of people's daily movement, which is caused by the tendency of people to move within a predefined (also confined) area of daily activities. These tendencies are not captured in common mobility models such as random way point (RWP). To evaluate the impact of these tendencies on the performance of mobile networks, we construct a simple truncated Levy walk mobility (TLW) model that emulates the statistical features observed in our analysis and under which we measure the performance of routing protocols in delay-tolerant networks (DTNs) and mobile ad hoc networks (MANETs). The results indicate the following. Higher diffusivity induces shorter intercontact times in DTN and shorter path durations with higher success probability in MANET. The diffusivity of TLW is in between those of RWP and Brownian motion (BM). Therefore, the routing performance under RWP as commonly used in mobile network studies and tends to be overestimated for DTNs and underestimated for MANETs compared to the performance under TLW.", "We examine the fundamental properties that determine the basic performance metrics for opportunistic communications. We first consider the distribution of intercontact times between mobile devices. Using a diverse set of measured mobility traces, we find as an invariant property that there is a characteristic time, order of half a day, beyond which the distribution decays exponentially. Up to this value, the distribution in many cases follows a power law, as shown in recent work. This power law finding was previously used to support the hypothesis that intercontact time has a power law tail, and that common mobility models are not adequate. However, we observe that the timescale of interest for opportunistic forwarding may be of the same order as the characteristic time, and thus, the exponential tail is important. We further show that already simple models such as random walk and random waypoint can exhibit the same dichotomy in the distribution of intercontact time as in empirical traces. Finally, we perform an extensive analysis of several properties of human mobility patterns across several dimensions, and we present empirical evidence that the return time of a mobile device to its favorite location site may already explain the observed dichotomy. Our findings suggest that existing results on the performance of forwarding schemes based on power law tails might be overly pessimistic.", "As technology to connect people across the world is advancing, there should be corresponding advancement in taking advantage of data that is generated out of such connection. To that end, next place prediction is an important problem for mobility data. In this paper we propose several models using dynamic Bayesian network (DBN). Idea behind development of these models come from typical daily mobility patterns a user have. Three features (location, day of the week (DoW), and time of the day (ToD)) and their combinations are used to develop these models. Knowing that not all models work well for all situations, we developed three combined models using least entropy, highest probability and ensemble. Extensive performance study is conducted to compare these models over two different mobility data sets: a CDR data and Nokia mobile data which is based on GPS. 
Results show that least entropy and highest probability DBNs perform the best.", "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract energy-efficient transportation patterns (green knowledge), which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors. However, extracting green knowledge from location traces is not a trivial task. Conventional data analysis tools are usually not customized for handling the massive quantity, complex, dynamic, and distributed nature of location traces. To that end, in this paper, we provide a focused study of extracting energy-efficient transportation patterns from location traces. Specifically, we have the initial focus on a sequence of mobile recommendations. As a case study, we develop a mobile recommender system which has the ability in recommending a sequence of pick-up points for taxi drivers or a sequence of potential parking positions. The goal of this mobile recommendation system is to maximize the probability of business success. Along this line, we provide a Potential Travel Distance (PTD) function for evaluating each candidate sequence. This PTD function possesses a monotone property which can be used to effectively prune the search space. Based on this PTD function, we develop two algorithms, LCP and SkyRoute, for finding the recommended routes. Finally, experimental results show that the proposed system can provide effective mobile sequential recommendation and the knowledge extracted from location traces can be used for coaching drivers and leading to the efficient use of energy.", "We examine the fundamental properties that determine the basic performance metrics for opportunistic communications. We first consider the distribution of inter-contact times between mobile devices. Using a diverse set of measured mobility traces, we find as an invariant property that there is a characteristic time, order of half a day, beyond which the distribution decays exponentially. Up to this value, the distribution in many cases follows a power law, as shown in recent work. This powerlaw finding was previously used to support the hypothesis that inter-contact time has a power law tail, and that common mobility models are not adequate. However, we observe that the time scale of interest for opportunistic forwarding may be of the same order as the characteristic time, and thus the exponential tail is important. We further show that already simple models such as random walk and random way point can exhibit the same dichotomy in the distribution of inter-contact time ascin empirical traces. Finally, we perform an extensive analysis of several properties of human mobility patterns across several dimensions, and we present empirical evidence that the return time of a mobile device to its favorite location site may already explain the observed dichotomy. Our findings suggest that existing results on the performance of forwarding schemes basedon power-law tails might be overly pessimistic.", "We present a method for using real world mobility traces to identify tractable theoretical models for the study of distributed algorithms in mobile networks. We validate the method by deriving a vehicular ad hoc network model from a large corpus of position data generated by Boston-area taxicabs. 
Unlike previous work, our model does not assume global connectivity or eventual stability; it instead assumes only that some subset of processes are connected through transient paths (e.g., paths that exist over time). We use this model to study the problem of prioritized gossip, in which processes attempt to disseminate messages of different priority. Specifically, we present CabChat, a distributed prioritized gossip algorithm that leverages an interesting connection to the classic Tower of Hanoi problem to schedule the broadcast of packets of different priorities. Whereas previous studies of gossip leverage strong connectivity or stabilization assumptions to prove the time complexity of global termination, in our model, with its weak assumptions, we instead analyze CabChat with respect to its ability to deliver a high proportion of high priority messages over the transient paths that happen to exist in a given execution.", "We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95 of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1 10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals." ] }
0810.3935
2949423092
Realistic mobility models are fundamental to evaluate the performance of protocols in mobile ad hoc networks. Unfortunately, there are no mobility models that capture the non-homogeneous behaviors in both space and time commonly found in reality, while at the same time being easy to use and analyze. Motivated by this, we propose a time-variant community mobility model, referred to as the TVC model, which realistically captures spatial and temporal correlations. We devise the communities that lead to skewed location visiting preferences, and time periods that allow us to model time dependent behaviors and periodic re-appearances of nodes at specific locations. To demonstrate the power and flexibility of the TVC model, we use it to generate synthetic traces that match the characteristics of a number of qualitatively different mobility traces, including wireless LAN traces, vehicular mobility traces, and human encounter traces. More importantly, we show that, despite the high level of realism achieved, our TVC model is still theoretically tractable. To establish this, we derive a number of important quantities related to protocol performance, such as the average node degree, the hitting time, and the meeting time, and provide examples of how to utilize this theory to guide design decisions in routing protocols.
As a final note, in @cite_13 the authors assume that the attraction of a community (i.e., a geographical area) for a mobile node is derived from the number of friends of that node currently residing in the community. In our paper we assume that nodes make movement decisions independently of one another (nonetheless, nodes sharing the same community exhibit correlated mobility, capturing the social feature indirectly). Mobility models with inter-node dependency require a solid understanding of the social network structure, which is an important area still under development. We plan to work further in this direction in the future.
{ "cite_N": [ "@cite_13" ], "mid": [ "2095702309" ], "abstract": [ "Understanding the spatial networks formed by the trajectories of mobile users can be beneficial to applications ranging from epidemiology to local search. Despite the potential for impact in a number of fields, several aspects of human mobility networks remain largely unexplored due to the lack of large-scale data at a fine spatiotemporal resolution. Using a longitudinal dataset from the location-based service Foursquare, we perform an empirical analysis of the topological properties of place networks and note their resemblance to online social networks in terms of heavy-tailed degree distributions, triadic closure mechanisms and the small world property. Unlike social networks however, place networks present a mixture of connectivity trends in terms of assortativity that are surprisingly similar to those of the web graph. We take advantage of additional semantic information to interpret how nodes that take on functional roles such as 'travel hub', or 'food spot' behave in these networks. Finally, motivated by the large volume of new links appearing in place networks over time, we formulate the classic link prediction problem in this new domain. We propose a novel variant of gravity models that brings together three essential elements of inter-place connectivity in urban environments: network-level interactions, human mobility dynamics, and geographic distance. We evaluate this model and find it outperforms a number of baseline predictors and supervised learning algorithms on a task of predicting new links in a sample of one hundred popular cities." ] }
0810.1554
1967173005
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
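The threshold quoted in the abstract can be checked numerically. The sketch below is my own, with an assumed normalization (off-diagonal variance 1/2, diagonal variance 1, semicircle edge near sqrt(2N)): every entry of a GOE matrix is shifted by s/(2N)^{1/2}, and the largest eigenvalue is reported relative to the edge. A ratio noticeably above 1 indicates separation, which should appear for s > 1, up to finite-size fluctuations.

```python
import numpy as np

rng = np.random.default_rng(2)

def shifted_goe_top_eigenvalue(N, s):
    A = rng.normal(size=(N, N))
    H = (A + A.T) / 2.0           # symmetric; off-diagonal variance 1/2, diagonal 1
    H = H + s / np.sqrt(2 * N)    # shift every entry by s/(2N)^{1/2}
    return np.linalg.eigvalsh(H)[-1]

N = 1000
edge = np.sqrt(2 * N)             # approximate semicircle edge for this normalization
for s in (0.5, 1.0, 1.5, 2.0):
    lam = shifted_goe_top_eigenvalue(N, s)
    print(f"s={s:3.1f}: lambda_max / edge = {lam / edge:.3f}")
```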
In the mathematics literature the problem of the statistical properties of the largest eigenvalue of ) in the case that all entries of @math are equal to @math was first studied by Füredi and Komlós @cite_40 . In the more general setting of real Wigner matrices (independent entries i.i.d. with mean @math and variance @math ) the distribution of the largest eigenvalue was identified as a Gaussian, so generalizing the result of Lang. Only in recent years did the associated phase transition problem, already known to @cite_17 , receive attention in the mathematical literature. In the case of the GUE, this was due to Péché @cite_12 , while a rigorous study of the GOE case can be found in the work of Maïda @cite_18 .
{ "cite_N": [ "@cite_40", "@cite_18", "@cite_12", "@cite_17" ], "mid": [ "2031603583", "2963919656", "2079985400", "2097968528" ], "abstract": [ "We establish a large deviation principle for the largest eigenvalue of a rank one deformation of a matrix from the GUE or GOE. As a corollary, we get another proof of the phenomenon, well-known in learning theory and finance, that the largest eigenvalue separates from the bulk when the perturbation is large enough. A large part of the paper is devoted to an auxiliary result on the continuity of spherical integrals in the case when one of the matrix is of rank one, as studied in one of our previous works.", "The largest eigenvalue of a matrix is always larger or equal than its largest diagonal entry. We show that for a class of random Laplacian matrices with independent off-diagonal entries, this bound is essentially tight: the largest eigenvalue is, up to lower order terms, often the size of the largest diagonal. entry. Besides being a simple tool to obtain precise estimates on the largest eigenvalue of a class of random Laplacian matrices, our main result settles a number of open problems related to the tightness of certain convex relaxation-based algorithms. It easily implies the optimality of the semidefinite relaxation approaches to problems such as ( Z _2 ) Synchronization and stochastic block model recovery. Interestingly, this result readily implies the connectivity threshold for Erdős–Renyi graphs and suggests that these three phenomena are manifestations of the same underlying principle. The main tool is a recent estimate on the spectral norm of matrices with independent entries by van Handel and the author.", "We consider the asymptotic fluctuation behavior of the largest eigenvalue of certain sample covariance matrices in the asymptotic regime where both dimensions of the corresponding data matrix go to infinity. More precisely, let X be an n x p matrix, and let its rows be i.i.d. complex normal vectors with mean 0 and covariance Σ p . We show that for a large class of covariance matrices £ p, the largest eigenvalue of X*X is asymptotically distributed (after recentering and rescaling) as the Tracy-Widom distribution that appears in the study of the Gaussian unitary ensemble. We give explicit formulas for the centering and scaling sequences that are easy to implement and involve only the spectral distribution of the population covariance, n and p. The main theorem applies to a number of covariance models found in applications. For example, well-behaved Toeplitz matrices as well as covariance matrices whose spectral distribution is a sum of atoms (under some conditions on the mass of the atoms) are among the models the theorem can handle. Generalizations of the theorem to certain spiked versions of our models and a.s. results about the largest eigenvalue are given. We also discuss a simple corollary that does not require normality of the entries of the data matrix and some consequences for applications in multivariate statistics.", "We continue the study of the Hermitian random matrix ensemble with external source Open image in new window where A has two distinct eigenvalues ±a of equal multiplicity. 
This model exhibits a phase transition for the value a=1, since the eigenvalues of M accumulate on two intervals for a>1, and on one interval for 0 1 was treated in Part I, where it was proved that local eigenvalue correlations have the universal limiting behavior which is known for unitarily invariant random matrices, that is, limiting eigenvalue correlations are expressed in terms of the sine kernel in the bulk of the spectrum, and in terms of the Airy kernel at the edge. In this paper we establish the same results for the case 0<a<1. As in Part I we apply the Deift Zhou steepest descent analysis to a 3×3-matrix Riemann-Hilbert problem. Due to the different structure of an underlying Riemann surface, the analysis includes an additional step involving a global opening of lenses, which is a new phenomenon in the steepest descent analysis of Riemann-Hilbert problems." ] }
0810.1554
1967173005
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
The paper @cite_19 by Ben Arous, Baik and Péché proved the phase transition property relating to ) in the complex case with @math given by @math () corresponds to @math ). Subsequent studies by Baik and Silverstein @cite_31 , Paul @cite_7 , and Bai and Yao @cite_15 considered the real case. Significant for the present study is the result of @cite_15 , giving that for ) with @math given as above, the separated eigenvalues have the law of the @math GUE. The case @math (not considered here), corresponding to self-dual quaternion real matrices, is studied in the recent work of Wang @cite_39 .
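The finite-rank transition discussed here is easy to observe numerically. The sketch below is a hedged illustration rather than a reproduction of any cited computation: a GUE matrix normalized to have spectrum on [-2, 2] is perturbed by a rank-one term theta * v v^T, and beyond theta = 1 the largest eigenvalue should sit near theta + 1/theta rather than at the edge. The normalization and the choice of spike direction are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(3)

def spiked_gue_top(N, theta):
    X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (X + X.conj().T) / (2.0 * np.sqrt(N))   # GUE scaled so the edge sits near 2
    v = np.zeros(N)
    v[0] = 1.0                                  # fixed unit spike direction (assumption)
    return np.linalg.eigvalsh(H + theta * np.outer(v, v))[-1]

N = 1000
for theta in (0.5, 1.0, 1.5, 2.0):
    top = spiked_gue_top(N, theta)
    predicted = max(2.0, theta + 1.0 / theta)   # BBP-type prediction above threshold
    print(f"theta={theta:3.1f}: top eigenvalue={top:.3f}, predicted={predicted:.3f}")
```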
{ "cite_N": [ "@cite_7", "@cite_15", "@cite_39", "@cite_19", "@cite_31" ], "mid": [ "2032049628", "2170929819", "2015321063", "2052952091", "2138359578" ], "abstract": [ "Let (a>0, b>0, ab<1; ) and let (g L^2( R ). ) In this paper we investigate the relation between the frame operator (S:f L^2( R ) n,m ,(f,g_ na,mb ) ,g_ na,mb ) and the matrix (H ) whose entries (H_ k,l ,; ,k',l' ) are given by ((g_ k' b,l' a ,g_ k b,l a ) ) for (k,l,k',l' Z . ) Here (f_ x,y (t)= exp (2 iyt) ,f(t-x), ) (t R ), for any (f L^2( R ). ) We show that (S ) is bounded as a mapping of (L^2( R ) ) into (L^2( R ) ) if and only if (H ) is bounded as a mapping of (l^2( Z ^2) ) into (l^2( Z ^2). ) Also we show that (AI S BI ) if and only if (AI 1 ab ,H BI, ) where (I ) denotes the identity operator of (L^2( R ) ) and (l^2( Z ^2), ) respectively, and (A 0, ) (B< . ) Next, when (g ) generates a frame, we have that ((g_ k b,l a )_ k,l ) has an upper frame bound, and the minimal dual function (^ ) can be computed as (ab , k,l ,(H^ -1 )_ k,l ,; ,o,o ,g_ k b,l a . ) The results of this paper extend, generalize, and rigourize results of Wexler and Raz and of Qian, D. Chen, K. Chen, and Li on the computation of dual functions for finite, discrete-time Gabor expansions to the infinite, continuous-time case. Furthermore, we present a framework in which one can show that certain smoothness and decay properties of a (g ) generating a frame are inherited by (^ . ) In particular, we show that (^ S ) when (g S ) generates a frame (( S ) Schwartz space). The proofs of the main results of this paper rely heavily on a technique introduced by Tolimieri and Orr for relating frame bound questions on complementary lattices by means of the Poisson summation formula.", "Abstract Let A be a d × n matrix and T = Tn -1 be the standard simplex in R n. Suppose that d and n are both large and comparable: d ≈ δn, δ ∈ (0, 1). We count the faces of the projected simplex AT when the projector A is chosen uniformly at random from the Grassmann manifold of d-dimensional orthoprojectors of R n. We derive ρ N(δ) > 0 with the property that, for any ρ 0 at which phase transition occurs in k d. We compute and display ρ VS and compare with ρ N. Corollaries are as follows. (1) The convex hull of n Gaussian samples in Rd , with n large and proportional to d, has the same k-skeleton as the (n - 1) simplex, for k < ρ N (d n)d(1 + oP (1)). (2) There is a “phase transition” in the ability of linear programming to find the sparsest nonnegative solution to systems of underdetermined linear equations. For most systems having a solution with fewer than ρ VS (d n)d(1 + o(1)) nonzeros, linear programming will find that solution. neighborly polytopes convex hull of Gaussian sample underdetermined systems of linear equations uniformly distributed random projections phase transitions", "Let A be a complex matrix with arbitrary Jordan structure and @math an eigenvalue of A whose largest Jordan block has size n. We review previous results due to Lidskii [U.S.S. R. Comput. Math. and Math. Phys., 1 (1965), pp. 73--85], showing that the splitting of @math under a small perturbation of A of order @math is, generically, of order @math . Explicit formulas for the leading coefficients are obtained, involving the perturbation matrix and the eigenvectors of A. We also present an alternative proof of Lidskii's main theorem, based on the use of the Newton diagram. 
This approach clarifies certain difficulties which arise in the nongeneric case and leads, in some situations, to the extension of Lidskii's results. These results suggest a new notion of Holder condition number for multiple eigenvalues, depending only on the associated left and right eigenvectors, appropriately normalized, not on the Jordan vectors.", "Over the past decade, physicists have developed deep but non-rigorous techniques for studying phase transitions in discrete structures. Recently, their ideas have been harnessed to obtain improved rigorous results on the phase transitions in binary problems such as random @math -SAT or @math -NAESAT (e.g., Coja-Oghlan and Panagiotou: STOC 2013). However, these rigorous arguments, typically centered around the second moment method, do not extend easily to problems where there are more than two possible values per variable. The single most intensely studied example of such a problem is random graph @math -coloring. Here we develop a novel approach to the second moment method in this problem. This new method, inspired by physics conjectures on the geometry of the set of @math -colorings, allows us to establish a substantially improved lower bound on the @math -colorability threshold. The new lower bound is within an additive @math of a simple first-moment upper bound and within @math of the physics conjecture. By comparison, the best previous lower bound left a gap of about @math , unbounded in terms of the number of colors [Achlioptas, Naor: STOC 2004].", "The “classical” random graph models, in particular G(n,p), are “homogeneous,” in the sense that the degrees (for example) tend to be concentrated around a typical value. Many graphs arising in the real world do not have this property, having, for example, power-law degree distributions. Thus there has been a lot of recent interest in defining and studying “inhomogeneous” random graph models. One of the most studied properties of these new models is their “robustness”, or, equivalently, the “phase transition” as an edge density parameter is varied. For G(n,p), p = c n, the phase transition at c = 1 has been a central topic in the study of random graphs for well over 40 years. Many of the new inhomogeneous models are rather complicated; although there are exceptions, in most cases precise questions such as determining exactly the critical point of the phase transition are approachable only when there is independence between the edges. Fortunately, some models studied have this property already, and others can be approximated by models with independence. Here we introduce a very general model of an inhomogeneous random graph with (conditional) independence between the edges, which scales so that the number of edges is linear in the number of vertices. This scaling corresponds to the p = c n scaling for G(n,p) used to study the phase transition; also, it seems to be a property of many large real-world graphs. Our model includes as special cases many models previously studied. We show that, under one very weak assumption (that the expected number of edges is “what it should be”), many properties of the model can be determined, in particular the critical point of the phase transition, and the size of the giant component above the transition. We do this by relating our random graphs to branching processes, which are much easier to analyze. 
We also consider other properties of the model, showing, for example, that when there is a giant component, it is “stable”: for a typical random graph, no matter how we add or delete o(n) edges, the size of the giant component does not change by more than o(n). © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 31, 3–122, 2007" ] }
0810.1554
1967173005
The eigenvalue density for members of the Gaussian orthogonal and unitary ensembles follows the Wigner semicircle law. If the Gaussian entries are all shifted by a constant amount s/(2N)^{1/2}, where N is the size of the matrix, in the large N limit a single eigenvalue will separate from the support of the Wigner semicircle provided s>1. In this study, using an asymptotic analysis of the secular equation for the eigenvalue condition, we compare this effect to analogous effects occurring in general variance Wishart matrices and matrices from the shifted mean chiral ensemble. We undertake an analogous comparative study of eigenvalue separation properties when the sizes of the matrices are fixed and s→∞, and higher rank analogs of this setting. This is done using exact expressions for eigenvalue probability densities in terms of generalized hypergeometric functions and using the interpretation of the latter as a Green function in the Dyson Brownian motion model. For the shifted mean Gaussian unitary ensemble an...
The eigenvalue probability density function ) is closely related to the Dyson Brownian motion model @cite_1 in random matrix theory (see Section 3 below). It is also referred to as a Gaussian ensemble with a source. In this context the case of @math having finite rank has been studied by a number of authors @cite_26 @cite_21 @cite_10 @cite_35 . However, our use of this differs in that we will keep @math fixed and exhibit phase separation as a function of the perturbing parameter.
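To illustrate the "Gaussian ensemble with a source" interpretation in the simplest possible way, the sketch below samples H0 + sqrt(t) * G, with G a symmetric Gaussian matrix and H0 a fixed source whose eigenvalues are +/- a with equal multiplicity, and reports the gap at the centre of the spectrum. With the scaling chosen here the Gaussian part has support close to [-2, 2], and the support should split once a exceeds roughly 1; the scaling conventions and the value of t are assumptions of this example, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_with_source(H0, t):
    N = H0.shape[0]
    A = rng.normal(size=(N, N))
    G = (A + A.T) / np.sqrt(2.0)            # symmetric Gaussian, unit off-diagonal variance
    return H0 + np.sqrt(t) * G

N = 1000
for a in (0.5, 2.0):
    H0 = np.diag(np.concatenate([np.full(N // 2, -a), np.full(N // 2, a)]))
    eigs = np.linalg.eigvalsh(sample_with_source(H0, t=1.0 / N))
    gap = eigs[N // 2] - eigs[N // 2 - 1]   # gap at the centre of the spectrum
    print(f"a={a}: central gap = {gap:.3f}")
```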
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_21", "@cite_1", "@cite_10" ], "mid": [ "2951358417", "2005820649", "2097968528", "1988076732", "2144100131" ], "abstract": [ "This text is about spiked models of non Hermitian random matrices. More specifically, we consider matrices of the type @math , where the rank of @math stays bounded as the dimension goes to infinity and where the matrix @math is a non Hermitian random matrix, satisfying an isotropy hypothesis: its distribution is invariant under the left and right actions of the unitary group. The macroscopic eigenvalue distribution of such matrices is governed by the so called Single Ring Theorem, due to Guionnet, Krishnapur and Zeitouni. We first prove that if @math has some eigenvalues out of the maximal circle of the single ring, then @math has some eigenvalues (called outliers) in the neighborhood of those of @math , which is not the case for the eigenvalues of @math in the inner cycle of the single ring. Then, we study the fluctuations of the outliers of @math around the eigenvalues of @math and prove that they are distributed as the eigenvalues of some finite dimensional random matrices. Such facts had already been noticed for Hermitian models. More surprising facts are that outliers can here have very various rates of convergence to their limits (depending on the Jordan Canonical Form of @math ) and that some correlations can appear between outliers at a macroscopic distance from each other (a fact already noticed by Knowles and Yin in the Hermitian case, but only in the case of non Gaussian models, whereas spiked Gaussian matrices belong to our model and can have such correlated outliers). Our first result generalizes a previous result by Tao for matrices with i.i.d. entries, whereas the second one (about the fluctuations) is new.", "We study various statistics related to the eigenvalues and eigenfunctions of random Hamiltonians in the localized regime. Consider a random Hamiltonian at an energy @math in the localized phase. Assume the density of states function is not too flat near @math . Restrict it to some large cube @math . Consider now @math , a small energy interval centered at @math that asymptotically contains infintely many eigenvalues when the volume of the cube @math grows to infinity. We prove that, with probability one in the large volume limit, the eigenvalues of the random Hamiltonian restricted to the cube inside the interval are given by independent identically distributed random variables, up to an error of size an arbitrary power of the volume of the cube. As a consequence, we derive * uniform Poisson behavior of the locally unfolded eigenvalues, * a.s. Poisson behavior of the joint distibutions of the unfolded energies and unfolded localization centers in a large range of scales. * the distribution of the unfolded level spacings, locally and globally, * the distribution of the unfolded localization centers, locally and globally.", "We continue the study of the Hermitian random matrix ensemble with external source Open image in new window where A has two distinct eigenvalues ±a of equal multiplicity. 
This model exhibits a phase transition for the value a=1, since the eigenvalues of M accumulate on two intervals for a>1, and on one interval for 0<a<1. The case a>1 was treated in Part I, where it was proved that local eigenvalue correlations have the universal limiting behavior which is known for unitarily invariant random matrices, that is, limiting eigenvalue correlations are expressed in terms of the sine kernel in the bulk of the spectrum, and in terms of the Airy kernel at the edge. In this paper we establish the same results for the case 0<a<1. As in Part I we apply the Deift Zhou steepest descent analysis to a 3×3-matrix Riemann-Hilbert problem. Due to the different structure of an underlying Riemann surface, the analysis includes an additional step involving a global opening of lenses, which is a new phenomenon in the steepest descent analysis of Riemann-Hilbert problems.", "A new type of Coulomb gas is defined, consisting of n point charges executing Brownian motions under the influence of their mutual electrostatic repulsions. It is proved that this gas gives an exact mathematical description of the behavior of the eigenvalues of an (n × n) Hermitian matrix, when the elements of the matrix execute independent Brownian motions without mutual interaction. By a suitable choice of initial conditions, the Brownian motion leads to an ensemble of random matrices which is a good statistical model for the Hamiltonian of a complex system possessing approximate conservation laws. The development with time of the Coulomb gas represents the statistical behavior of the eigenvalues of a complex system as the strength of conservation‐destroying interactions is gradually increased. A "virial theorem" is proved for the Brownian‐motion gas, and various properties of the stationary Coulomb gas are deduced as corollaries.", "This paper characterizes the eigenvalue distributions of full-rank Hermitian matrices generated from a set of independent (non)zero-mean proper complex Gaussian random vectors with a scaled-identity covariance matrix. More specifically, the joint and marginal cumulative distribution function (CDF) of any subset of unordered eigenvalues of the so-called complex (non)central Wishart matrices, as well as new simple and tractable expressions for their joint probability density function (PDF), are derived in terms of a finite sum of determinants. As corollaries to these new results, explicit expressions for the statistics of the smallest and largest eigenvalues, of (non)central Wishart matrices, can be easily obtained. Moreover, capitalizing on the foregoing distributions, it becomes possible to evaluate exactly the mean, variance, and other higher order statistics such as the skewness and kurtosis of the random channel capacity, in the case of uncorrelated multiple-input multiple-output (MIMO) Ricean and Rayleigh fading channels. Doing so bridges the gap between Telatar's initial approach for evaluating the average MIMO channel capacity (Telatar, 1999), and the subsequently widely adopted moment generating function (MGF) approach, thereby setting the basis for a PDF-based framework for characterizing the capacity statistics of MIMO Ricean and Rayleigh fading channels." ] }
0810.0139
2952484550
Most research related to unithood was conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work was mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our comparative study using 1,825 test cases against an existing empirically-derived function revealed an improvement in terms of precision, recall and accuracy.
@cite_3 proposed a measure known as for extracting complex terms. The measure is based upon the claim that a substring of a term candidate is a candidate itself given that it demonstrates adequate independence from the longer version it appears in. For example, , and are acceptable as valid complex term candidates. However, is not. Therefore, some measures are required to gauge the strength of word combinations to decide whether two word sequences should be merged or not. Given a word sequence @math to be examined for unithood, the is defined as: where @math is the number of words in @math , @math is the set of longer term candidates that contain @math , @math is the longest n-gram considered, @math is the frequency of occurrence of @math , and @math . While certain researchers @cite_0 consider as a termhood measure, others @cite_5 accept it as a measure for unithood. One can observe that longer candidates tend to gain higher weights due to the inclusion of @math in Equation . In addition, the weights computed using Equation are purely dependent on the frequency of @math .
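Since the defining equation of the measure is elided in the paragraph above, the following LaTeX reconstruction of the standard C-value formula of Frantzi and Ananiadou is offered purely as a reference point; it is an assumption that this form (rather than one of its variants) is the exact equation intended by @cite_3 . Here |a| is the number of words in the candidate string a, f(a) is its frequency of occurrence, and T_a is the set of longer candidate terms that contain a.

```latex
% Standard C-value formula, reproduced as a reference reconstruction only;
% the variant actually used by @cite_3 may differ in its details.
\[
\mathrm{C\mbox{-}value}(a) =
\begin{cases}
\log_2 |a|\, f(a), & \text{if } a \text{ is not nested in any longer candidate},\\[6pt]
\log_2 |a| \left( f(a) - \dfrac{1}{|T_a|} \sum_{b \in T_a} f(b) \right), & \text{otherwise.}
\end{cases}
\]
```

This reconstruction is at least consistent with the two remarks above: the logarithmic length factor favours longer candidates, and the resulting weight otherwise depends only on frequencies.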
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_3" ], "mid": [ "2620416450", "114321176", "2082173153" ], "abstract": [ "Indexing highly repetitive texts --- such as genomic databases, software repositories and versioned text collections --- has become an important problem since the turn of the millennium. A relevant compressibility measure for repetitive texts is @math , the number of runs in their Burrows-Wheeler Transform (BWT). One of the earliest indexes for repetitive collections, the Run-Length FM-index, used @math space and was able to efficiently count the number of occurrences of a pattern of length @math in the text (in loglogarithmic time per pattern symbol, with current techniques). However, it was unable to locate the positions of those occurrences efficiently within a space bounded in terms of @math . Since then, a number of other indexes with space bounded by other measures of repetitiveness --- the number of phrases in the Lempel-Ziv parse, the size of the smallest grammar generating the text, the size of the smallest automaton recognizing the text factors --- have been proposed for efficiently locating, but not directly counting, the occurrences of a pattern. In this paper we close this long-standing problem, showing how to extend the Run-Length FM-index so that it can locate the @math occurrences efficiently within @math space (in loglogarithmic time each), and reaching optimal time @math within @math space, on a RAM machine of @math bits. Within @math space, our index can also count in optimal time @math . Raising the space to @math , we support count and locate in @math and @math time, which is optimal in the packed setting and had not been obtained before in compressed space. We also describe a structure using @math space that replaces the text and extracts any text substring of length @math in almost-optimal time @math . (...continues...)", "An unsupervised iterative approach for extracting a new lexicon (or unknown words) from a Chinese text corpus is proposed in this paper. Instead of using a non-iterative segmentation-merging-filtering-and-disambiguation approach, the proposed method iteratively integrates the contextual constraints (among word candidates) and a joint character association metric to progressively improve the segmentation results of the input corpus (and thus the new word list.) An augmented dictionary, which includes potential unknown words (in addition to known words), is used to segment the input corpus, unlike traditional approaches which use only known words for segmentation. In the segmentation process, the augmented dictionary is used to impose contextual constraints over known words and potential unknown words within input sentences; an unsupervised Viterbi Training process is then applied to ensure that the selected potential unknown words (and known words) maximize the likelihood of the input corpus. On the other hand, the joint character association metric (which reflects the global character association characteristics across the corpus) is derived by integrating several commonly used word association metrics, such as mutual information and entropy, with a joint Gaussian mixture density function; such integration allows the filter to use multiple features simultaneously to evaluate character association, unlike traditional filters which apply multiple features independently. 
The proposed method then allows the contextual constraints and the joint character association metric to enhance each other; this is achieved by iteratively applying the joint association metric to truncate unlikely unknown words in the augmented dictionary and using the segmentation result to improve the estimation of the joint association metric. The refined augmented dictionary and improved estimation are then used in the next iteration to acquire better segmentation and carry out more reliable filtering. Experiments show that both the precision and recall rates are improved almost monotonically, in contrast to non-iterative segmentation-merging-filtering-and-disambiguation approaches, which often sacrifice precision for recall or vice versa. With a corpus of 311,591 sentences, the performance is 76 (bigram), 54 (trigram), and 70 (quadragram) in F-measure, which is significantly better than using the non-iterative approach with F-measures of 74 (bigram), 46 (trigram), and 58 (quadragram).", "In this paper we study the problem of finding maximally sized subsets of binary strings (codes) of equal length that are immune to a given number @math of repetitions, in the sense that no two strings in the code can give rise to the same string after @math repetitions. We propose explicit number theoretic constructions of such subsets. In the case of @math repetition, the proposed construction is asymptotically optimal. For @math , the proposed construction is within a constant factor of the best known upper bound on the cardinality of a set of strings immune to @math repetitions. Inspired by these constructions, we then develop a prefixing method for correcting any prescribed number @math of repetition errors in an arbitrary binary linear block code. The proposed method constructs for each string in the given code a carefully chosen prefix such that the resulting strings are all of the same length and such that despite up to any @math repetitions in the concatenation of the prefix and the codeword, the original codeword can be recovered. In this construction, the prefix length is made to scale logarithmically with the length of strings in the original code. As a result, the guaranteed immunity to repetition errors is achieved while the added redundancy is asymptotically negligible." ] }
0809.5008
2102473388
The benefit of multi-antenna receivers is investigated in wireless ad hoc networks, and the main finding is that network throughput can be made to scale linearly with the number of receive antennas N_r even if each transmitting node uses only a single antenna. This is in contrast to a large body of prior work in single-user, multiuser, and ad hoc wireless networks that have shown linear scaling is achievable when multiple receive and transmit antennas (i.e., MIMO transmission) are employed, but that throughput increases logarithmically or sublinearly with N_r when only a single transmit antenna (i.e., SIMO transmission) is used. The linear gain is achieved by using the receive degrees of freedom to simultaneously suppress interference and increase the power of the desired signal, and exploiting the subsequent performance benefit to increase the density of simultaneous transmissions instead of the transmission rate. This result is proven in the transmission capacity framework, which presumes single-hop transmissions in the presence of randomly located interferers, but it is also illustrated that the result holds under several relaxations of the model, including imperfect channel knowledge, multihop transmission, and regular networks (i.e., interferers are deterministically located on a grid).
Early work on characterizing the throughput gains from MIMO in ad hoc networks includes @cite_11 @cite_15 @cite_19 @cite_12 , although these works generally employed simulations, while more recently @cite_9 @cite_7 @cite_18 used tools similar to those used in this paper and developed by the present authors. However, none of these works have characterized the maximum throughput gains achievable with receiver processing only.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_9", "@cite_19", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2012396676", "2129533376", "2047468088", "2139771702", "2289552691", "1509281150", "2165943096" ], "abstract": [ "We develop a general framework for the analysis of a broad class of point-to-point linear multiple-input multiple-output (MIMO) transmission schemes in decentralized wireless ad hoc networks. New general closed-form expressions are derived for the outage probability, throughput and transmission capacity. For the throughput, we investigate the optimal number of data streams in various asymptotic regimes, which is shown to be dependent on different network parameters. For the transmission capacity, we prove that it scales linearly with the number of antennas, provided that the number of data streams also scales linearly with the number of antennas, in addition to meeting some mild technical conditions. We also characterize the optimal number of data streams for maximizing the transmission capacity. To make our discussion concrete, we apply our general framework to investigate three popular MIMO schemes, each requiring different levels of feedback. In particular, we consider eigenmode selection with MIMO singular value decomposition, multiple transmit antenna selection, and open-loop spatial multiplexing. Our analysis of these schemes reveals that significant performance gains are achieved by utilizing feedback under a range of network conditions.", "Recently, the capacity region of a multiple-input multiple-output (MIMO) Gaussian broadcast channel, with Gaussian codebooks and known-interference cancellation through dirty paper coding, was shown to equal the union of the capacity regions of a collection of MIMO multiple-access channels. We use this duality result to evaluate the system capacity achievable in a cellular wireless network with multiple antennas at the base station and multiple antennas at each terminal. Some fundamental properties of the rate region are exhibited and algorithms for determining the optimal weighted rate sum and the optimal covariance matrices for achieving a given rate vector on the boundary of the rate region are presented. These algorithms are then used in a simulation study to determine potential capacity enhancements to a cellular system through known-interference cancellation. We study both the circuit data scenario in which each user requires a constant data rate in every frame and the packet data scenario in which users can be assigned a variable rate in each frame so as to maximize the long-term average throughput. In the case of circuit data, the outage probability as a function of the number of active users served at a given rate is determined through simulations. For the packet data case, long-term average throughputs that can be achieved using the proportionally fair scheduling algorithm are determined. We generalize the zero-forcing beamforming technique to the multiple receive antennas case and use this as the baseline for the packet data throughput evaluation.", "Large multiple-input multiple-output (MIMO) networks promise high energy efficiency, i.e., much less power is required to achieve the same capacity compared to the conventional MIMO networks if perfect channel state information (CSI) is available at the transmitter. However, in such networks, huge overhead is required to obtain full CSI especially for Frequency-Division Duplex (FDD) systems. 
To reduce overhead, we propose a downlink antenna selection scheme, which selects S antennas from M > S transmit antennas based on the large scale fading to serve K ≤ S users in large distributed MIMO networks employing regularized zero-forcing (RZF) precoding. In particular, we study the joint optimization of antenna selection, regularization factor, and power allocation to maximize the average weighted sum-rate. This is a mixed combinatorial and non-convex problem whose objective and constraints have no closed-form expressions. We apply random matrix theory to derive asymptotically accurate expressions for the objective and constraints. As such, the joint optimization problem is decomposed into subproblems, each of which is solved by an efficient algorithm. In addition, we derive structural solutions for some special cases and show that the capacity of very large distributed MIMO networks scales as O(KlogM) when M→∞ with K, S fixed. Simulations show that the proposed scheme achieves significant performance gain over various baselines.", "Recently, the remarkable capacity potential of multiple-input multiple-output (MIMO) wireless communication systems was unveiled. The predicted enormous capacity gain of MIMO is nonetheless significantly limited by cochannel interference (CCI) in realistic cellular environments. The previously proposed advanced receiver technique improves the system performance at the cost of increased receiver complexity, and the achieved system capacity is still significantly away from the interference-free capacity upper bound, especially in environments with strong CCI. In this paper, base station cooperative processing is explored to address the CCI mitigation problem in downlink multicell multiuser MIMO networks, and is shown to dramatically increase the capacity with strong CCI. Both information-theoretic dirty paper coding approach and several more practical joint transmission schemes are studied with pooled and practical per-base power constraints, respectively. Besides the CCI mitigation potential, other advantages of cooperative processing including the power gain, channel rank conditioning advantage, and macrodiversity protection are also addressed. The potential of our proposed joint transmission schemes is verified with both heuristic and realistic cellular MIMO settings.", "As a promising technique to meet the drastically growing demand for both high throughput and uniform coverage in the fifth generation (5G) wireless networks, massive multiple-input multiple-output (MIMO) systems have attracted significant attention in recent years. However, in massive MIMO systems, as the density of mobile users (MUs) increases, conventional uplink training methods will incur prohibitively high training overhead, which is proportional to the number of MUs. In this paper, we propose a selective uplink training method for massive MIMO systems, where in each channel block only part of the MUs will send uplink pilots for channel training, and the channel states of the remaining MUs are predicted from the estimates in previous blocks, taking advantage of the channels' temporal correlation. We propose an efficient algorithm to dynamically select the MUs to be trained within each block and determine the optimal uplink training length. Simulation results show that the proposed training method provides significant throughput gains compared to the existing methods, while much lower estimation complexity is achieved. 
It is observed that the throughput gain becomes higher as the MU density increases.", "We study the throughput limits of a MIMO (multiple-input multiple output) ad hoc network with K simultaneous communicating transceiver pairs. Assume that each transmitter is equipped with t antennas and the receivers with r antennas, we show that in the absence of channel state information (CSI) at the transmitters, the asymptotic network throughput is limited by r nats s Hz as K spl rarr spl infin . With CSI corresponding to the desired receiver available at the transmitter, we demonstrate that an asymptotic throughput of t+r+2 spl radic tr nats s Hz can be achieved using a simple beamforming approach. Further, we show that the asymptotically optimal transmission scheme with CSI amounts to a single-user waterfilling for a properly scaled channel.", "We provide an overview of the extensive results on the Shannon capacity of single-user and multiuser multiple-input multiple-output (MIMO) channels. Although enormous capacity gains have been predicted for such channels, these predictions are based on somewhat unrealistic assumptions about the underlying time-varying channel model and how well it can be tracked at the receiver, as well as at the transmitter. More realistic assumptions can dramatically impact the potential capacity gains of MIMO techniques. For time-varying MIMO channels there are multiple Shannon theoretic capacity definitions and, for each definition, different correlation models and channel information assumptions that we consider. We first provide a comprehensive summary of ergodic and capacity versus outage results for single-user MIMO channels. These results indicate that the capacity gain obtained from multiple antennas heavily depends on the available channel information at either the receiver or transmitter, the channel signal-to-noise ratio, and the correlation between the channel gains on each antenna element. We then focus attention on the capacity region of the multiple-access channels (MACs) and the largest known achievable rate region for the broadcast channel. In contrast to single-user MIMO channels, capacity results for these multiuser MIMO channels are quite difficult to obtain, even for constant channels. We summarize results for the MIMO broadcast and MAC for channels that are either constant or fading with perfect instantaneous knowledge of the antenna gains at both transmitter(s) and receiver(s). We show that the capacity region of the MIMO multiple access and the largest known achievable rate region (called the dirty-paper region) for the MIMO broadcast channel are intimately related via a duality transformation. This transformation facilitates finding the transmission strategies that achieve a point on the boundary of the MIMO MAC capacity region in terms of the transmission strategies of the MIMO broadcast dirty-paper region and vice-versa. Finally, we discuss capacity results for multicell MIMO channels with base station cooperation. The base stations then act as a spatially diverse antenna array and transmission strategies that exploit this structure exhibit significant capacity gains. This section also provides a brief discussion of system level issues associated with MIMO cellular. Open problems in this field abound and are discussed throughout the paper." ] }
0809.1552
2078543558
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
There is some potential to increase efficiency by using a "non-uniform" definition of uniform continuity. That is to say, using a definition of uniform continuity that allows different segments of the domain to have local moduli associated with them. Ulrich Berger uses such a definition of uniform continuity to define integration @cite_12 . Simpson also defines an integration algorithm that uses a local modulus for a function that is computed directly from the definition of the function @cite_10 . However, implementing his algorithm directly in Coq is not possible because it relies on bar induction, which is not available in Coq unless one adds an axiom such as bar induction to it or one treats the real numbers as a formal space @cite_36 @cite_16 .
{ "cite_N": [ "@cite_36", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "1982359839", "2134842679", "2953267151", "1540764732" ], "abstract": [ "We propose and analyze a discontinuous Galerkin approximation for the Stokes problem. The finite element triangulation employed is not required to be conforming and we use discontinuous pressures and velocities. No additional unknown fields need to be introduced, but only suitable bilinear forms defined on the interfaces between the elements, involving the jumps of the velocity and the average of the pressure. We consider hp approximations using ℚk′–ℚk velocity-pressure pairs with k′ = k + 2, k + 1, k. Our methods show better stability properties than the corresponding conforming ones. We prove that our first two choices of velocity spaces ensure uniform divergence stability with respect to the mesh size h. Numerical results show that they are uniformly stable with respect to the local polynomial degree k, a property that has no analog in the conforming case. An explicit bound in k which is not sharp is also proven. Numerical results show that if equal order approximation is chosen for the velocity and pressure, no spurious pressure modes are present but the method is not uniformly stable either with respect to h or k. We derive a priori error estimates generalizing the abstract theory of mixed methods. Optimal error estimates in h are proven. As for discontinuous Galerkin methods for scalar diffusive problems, half of the power of k is lost for p and hp pproximations independently of the divergence stability.", "Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).", "Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. 
We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).", "We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x1, . . . , x N ) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either f is pseudoconvex in every pair of coordinate blocks from among N-1 coordinate blocks or f has at most one minimum in each of N-2 coordinate blocks. If f is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of f and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation." ] }
0809.1552
2078543558
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
The constructive real numbers have already been used to provide a semi-decision procedure for inequalities of real numbers, not only for the constructive real numbers but also for the non-computational real numbers in the Coq standard library @cite_37 . The same technique can be applied here.
{ "cite_N": [ "@cite_37" ], "mid": [ "1646312512" ], "abstract": [ "There are two incompatible Coq libraries that have a theory of the real numbers; the Coq standard library gives an axiomatic treatment of classical real numbers, while the CoRN library from Nijmegen defines constructively valid real numbers. Unfortunately, this means results about one structure cannot easily be used in the other structure. We present a way interfacing these two libraries by showing that their real number structures are isomorphic assuming the classical axioms already present in the standard library reals. This allows us to use O'Connor's decision procedure for solving ground inequalities present in CoRN to solve inequalities about the reals from the Coq standard library, and it allows theorems from the Coq standard library to apply to problem about the CoRN reals." ] }
0809.1552
2078543558
We provide a computer-verified exact monadic functional implementation of the Riemann integral in type theory. Together with previous work by O'Connor, this may be seen as the beginning of the realization of Bishop's vision to use constructive mathematics as a programming language for exact analysis.
Previously, the CoRN project @cite_21 showed that the formalization of constructive analysis in a type theory is feasible. However, the extraction of programs from such developments is difficult @cite_19 . By contrast, in the present article we have shown that if one takes an algorithmic attitude from the start, it is possible to obtain feasible programs.
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "1498091397", "1646312512" ], "abstract": [ "We present C-CoRN, the Constructive Coq Repository at Nijmegen. It consists of a mathematical library of constructive algebra and analysis formalized in the theorem prover Coq. We explain the structure and the contents of the library and we discuss the motivation and some (possible) applications of such a library.", "There are two incompatible Coq libraries that have a theory of the real numbers; the Coq standard library gives an axiomatic treatment of classical real numbers, while the CoRN library from Nijmegen defines constructively valid real numbers. Unfortunately, this means results about one structure cannot easily be used in the other structure. We present a way interfacing these two libraries by showing that their real number structures are isomorphic assuming the classical axioms already present in the standard library reals. This allows us to use O'Connor's decision procedure for solving ground inequalities present in CoRN to solve inequalities about the reals from the Coq standard library, and it allows theorems from the Coq standard library to apply to problem about the CoRN reals." ] }
0809.1802
1939790513
Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segregate overlapping shapes that correspond to different data points. We demonstrate performance of individual algorithms, using a combination of generated and real-life images.
The image categorization portion of our work bears a similarity to image understanding; however, we focus on deciding whether a given image contains a 2-D plot. Li et al. @cite_2 developed wavelet-transform, context-sensitive algorithms to perform texture-based analysis of an image, separating camera-taken pictures from non-pictures. Building on this framework, Lu et al. @cite_7 developed an automatic image categorization system for digital library documents which categorizes images within the non-picture class into multiple classes, e.g., diagrams, 2-D figures, 3-D figures, and others. We find significant improvements in detecting 2-D figures by substituting certain features used in @cite_7 . @cite_1 presents image-processing-based techniques to extract the data represented by lines in 2-D plots. However, @cite_1 does not extract the data represented by data points and treats the data point shapes as noise while processing the image. Our work is complementary in that we address the question of how to extract data represented by various shapes.
{ "cite_N": [ "@cite_1", "@cite_7", "@cite_2" ], "mid": [ "2805903690", "2067912884", "1969366022" ], "abstract": [ "Abstract Studies show that refining real-world categories into semantic subcategories contributes to better image modeling and classification. Previous image sub-categorization work relying on labeled images and WordNet’s hierarchy is labor-intensive. To tackle this problem, in this work, we extract textual and visual features to automatically select and subsequently classify web images into semantic rich categories. The following two major challenges are well studied: (1) noise in the labels of subcategories derived from the general corpus; (2) noise in the labels of images retrieved from the web. Specifically, we first obtain the semantic refinement subcategories from the text perspective and remove the noise by using the relevance-based approach. To suppress the search error induced noisy images, we then formulate image selection and classifier learning as a multi-instance learning problem and propose to solve the employed problem by the cutting-plane algorithm. The experiments show significant performance gains by using the generated data of our approach on image categorization tasks. The proposed approach also consistently outperforms existing weakly supervised and web-supervised approaches.", "We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies super pixels into the 40 dominant object categories in NYUD2. We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art.", "In this paper, we address the problems of contour detection, bottom-up grouping, object detection and semantic segmentation on RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset (, ECCV, 2012). We propose algorithms for object boundary detection and hierarchical segmentation that generalize the @math gPb-ucm approach of (TPAMI, 2011) by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We train RGB-D object detectors by analyzing and computing histogram of oriented gradients on the depth image and using them with deformable part models (, TPAMI, 2010). We observe that this simple strategy for training object detectors significantly outperforms more complicated models in the literature. 
We then turn to the problem of semantic segmentation for which we propose an approach that classifies superpixels into the dominant object categories in the NYUD2 dataset. We design generic and class-specific features to encode the appearance and geometry of objects. We also show that additional features computed from RGB-D object detectors and scene classifiers further improves semantic segmentation accuracy. In all of these tasks, we report significant improvements over the state-of-the-art." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
In @cite_26 , Weinberger considers difference equations of the form [ u(t+1, ) = Q(u(t, )). ] Here the map @math acts on functions @math which themselves map a spatial domain to the positive reals. The spatial domain is @math -dimensional and either continuous or discrete. If condition holds, time- @math maps for fit into the framework of @cite_26 ; in this case, the existence of decreasing fronts for when @math is a consequence of results in @cite_26 . The main focus in @cite_26 is on the so-called asymptotic spreading speed of initial data supported on a compact set; the uniqueness of monotone fronts, and the existence and behavior of non-monotone fronts, are not directly addressed.
{ "cite_N": [ "@cite_26" ], "mid": [ "2261902652" ], "abstract": [ "In this article we consider large energy wave maps in dimension 2+1, as in the resolution of the threshold conjecture by Sterbenz and Tataru (Commun. Math. Phys. 298(1):139–230, 2010; Commun. Math. Phys. 298(1):231–264, 2010), but more specifically into the unit Euclidean sphere ( S ^ n-1 R ^ n ) with ( n ), and study further the dynamics of the sequence of wave maps that are obtained in Sterbenz and Tataru (Commun. Math. Phys. 298(1):231–264, 2010) at the final rescaling for a first, finite or infinite, time singularity. We prove that, on a suitably chosen sequence of time slices at this scaling, there is a decomposition of the map, up to an error with asymptotically vanishing energy, into a decoupled sum of rescaled solitons concentrating in the interior of the light cone and a term having asymptotically vanishing energy dispersion norm, concentrating on the null boundary and converging to a constant locally in the interior of the cone, in the energy space. Similar and stronger results have been recently obtained in the equivariant setting by several authors (Cote, Commun. Pure Appl. Math. 68(11):1946–2004, 2015; Cote, Commun. Pure Appl. Math. 69(4):609–612, 2016; Cote, Am. J. Math. 137(1):139–207, 2015; , Am. J. Math. 137(1):209–250, 2015; Krieger, Commun. Math. Phys. 250(3):507–580, 2004), where better control on the dispersive term concentrating on the null boundary of the cone is provided, and in some cases the asymptotic decomposition is shown to hold for all time. Here, however, we do not impose any symmetry condition on the map itself and our strategy follows the one from bubbling analysis of harmonic maps into spheres in the supercritical regime due to Lin and Riviere (Ann. Math. 149(2):785–829, 1999; Duke Math. J. 111:177–193, 2002), which we make work here in the hyperbolic context of Sterbenz and Tataru (Commun. Math. Phys. 298(1), 231–264, 2010)." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
The existence results in @cite_26 use a monotone iteration technique. The same is true of @cite_19 (described further below), where a monotonicity condition on @math is also imposed. Recently such techniques have been extended, in the setting of lattice integro-difference equations, to the case where the nonlinearity is not necessarily monotone but satisfies conditions which are similar to our (G1). In particular @cite_17 and @cite_8 both obtain the existence of (not necessarily monotone) traveling waves as well as a variational characterization of the minimum wave speed guaranteeing monotone fronts. Although our setting and techniques differ, our results here can be regarded as complementing these latter works.
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_8", "@cite_17" ], "mid": [ "2071564573", "1243232056", "2171798031", "2407211045" ], "abstract": [ "Abstract This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained.", "Monotonicity in concurrent systems stipulates that, in any global state, extant system actions remain executable when new processes are added to the state. This concept is not only natural and common in multi-threaded software, but also useful: if every thread’s memory is finite, monotonicity often guarantees the decidability of safety property verification even when the number of running threads is unknown. In this paper, we show that the act of obtaining finite-data thread abstractions for model checking can be at odds with monotonicity: Predicate-abstracting certain widely used monotone software results in non-monotone multi-threaded Boolean programs — the monotonicity is lost in the abstraction. As a result, well-established sound and complete safety checking algorithms become inapplicable; in fact, safety checking turns out to be undecidable for the obtained class of unbounded-thread Boolean programs. We demonstrate how the abstract programs can be modified into monotone ones, without affecting safety properties of the non-monotone abstraction. This significantly improves earlier approaches of enforcing monotonicity via overapproximations.", "In some applications, it is reasonable to assume that geodesics (rays) have a consistent orientation so that the Helmholtz equation may be viewed as an evolution equation in one of the spatial directions. With such applications in mind, we propose a new Eulerian computational geometrical-optics method, dubbed the fast Huygens sweeping method, for computing Green functions of Helmholtz equations in inhomogeneous media in the high-frequency regime and in the presence of caustics. The first novelty of the new method is that the Huygens–Kirchhoff secondary source principle is used to integrate many locally valid asymptotic solutions to yield a globally valid asymptotic solution so that caustics associated with the usual geometrical-optics ansatz can be treated automatically. The second novelty is that a butterfly algorithm is adapted to carry out the matrix–vector products induced by the Huygens–Kirchhoff integration in O(NlogN) operations, where N is the total number of mesh points, and the proportionality constant depends on the desired accuracy and is independent of the frequency parameter. To reduce the storage of the resulting traveltime and amplitude tables, we compress each table into a linear combination of tensor-product based multivariate Chebyshev polynomials so that the information of each table is encoded into a small number of Chebyshev coefficients. 
The new method enjoys the following desired features: (1) it precomputes a set of local traveltime and amplitude tables; (2) it automatically takes care of caustics; (3) it constructs Green functions of the Helmholtz equation for arbitrary frequencies and for many point sources; (4) for a specified number of points per wavelength it constructs each Green function in nearly optimal complexity in terms of the total number of mesh points, where the prefactor of the complexity only depends on the specified accuracy and is independent of the frequency parameter. Both two-dimensional (2-D) and three-dimensional (3-D) numerical experiments are presented to demonstrate the performance and accuracy of the new method.", "In this paper we propose a satisfiability-based approach for enumerating all frequent, closed and maximal patterns with wildcards in a given sequence. In this context, since frequency is the most used criterion, we introduce a new polynomial inductive formulation of the cardinality constraint as a Boolean formula. A nogood-based formulation of the anti-monotonicity property is proposed and dynamically used for pruning. This declarative framework allows us to exploit the efficiency of modern SAT solvers and particularly their clause learning component. The experimental evaluation on real world data shows the feasibility of our proposed approach in practice." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
In @cite_19 Hsu and Lin consider lattice equations of the form (two-dimensional equations, and equations where the coupling is not unidirectional, are also considered in @cite_19 ). The chief motivation for @cite_19 lies in the case that @math is piecewise linear; in this case the lattice equation becomes a so-called cellular neural network (CNN). Cellular neural networks were first introduced in @cite_31 to model the behavior of a large array of coupled electronic components.
{ "cite_N": [ "@cite_19", "@cite_31" ], "mid": [ "2071564573", "2104636679" ], "abstract": [ "Abstract This work proves the existence and multiplicity results of monotonic traveling wave solutions for some lattice differential equations by using the monotone iteration method. Our results include the model of cellular neural networks (CNN). In addition to the monotonic traveling wave solutions, non-monotonic and oscillating traveling wave solutions in the delay type of CNN are also obtained.", "This paper aims to accelerate the test-time computation of convolutional neural networks (CNNs), especially very deep CNNs [1] that have substantially impacted the computer vision community. Unlike previous methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We develop an effective solution to the resulting nonlinear optimization problem without the need of stochastic gradient descent (SGD). More importantly, while previous methods mainly focus on optimizing one or two layers, our nonlinear method enables an asymmetric reconstruction that reduces the rapidly accumulated error when multiple (e.g., @math 10) layers are approximated. For the widely used very deep VGG-16 model [1] , our method achieves a whole-model speedup of 4 @math with merely a 0.3 percent increase of top-5 error in ImageNet classification. Our 4 @math accelerated VGG-16 model also shows a graceful accuracy degradation for object detection when plugged into the Fast R-CNN detector [2] ." ] }
0809.2059
2155584001
We prove the existence and uniqueness, for wave speeds sufficiently large, of monotone traveling wave solutions connecting stable to unstable spatial equilibria for a class of @math -dimensional lattice differential equations with unidirectional coupling. This class of lattice equations includes some spatial discretizations for hyperbolic conservation laws with a source term as well as a subclass of monotone systems. We obtain a variational characterization of the critical wave speed above which monotone traveling wave solutions are guaranteed to exist. We also discuss non-monotone waves, and the coexistence of monotone and non-monotone waves.
The existence problem in @cite_19 is formulated in terms of increasing traveling waves of negative speed; a change of variables is required to convert to our setting. In terms of , the main hypotheses in @cite_19 are @math ; @math . @math for @math and @math . The first and third conditions above are analogous to our (G1.2) and (G1.3). The second condition above is stronger; it should be thought of as analogous to and allows for the application of monotone iteration techniques. Under these conditions, there is some @math such that a decreasing front exists for all @math (Theorem 1.1 in @cite_19 , reformulated for our setting). In @cite_19 and the companion paper @cite_28 the authors also consider with a piecewise linear @math for which the monotonicity condition above fails, but which is simple enough to admit detailed analysis. In results analogous to ours, the authors describe conditions under which, as @math drops below a critical level, monotone fronts give way to non-monotone fronts that "overshoot" and oscillate about their limit at @math .
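For readers who want to see such a front numerically, the short simulation below is an illustration only: the specific lattice equation u_j'(t) = (u_{j-1} - u_j)/h + u_j(1 - u_j) is our own stand-in for the unidirectionally coupled class discussed here (an upwind-type discretization of a scalar equation with a logistic source) and is not taken from @cite_19 or @cite_28 . Starting from step initial data it develops a monotone front connecting the stable state u = 1 behind to the unstable state u = 0 ahead; the lattice size, time step, and front test are arbitrary choices.

```python
# Illustration only: a generic unidirectionally coupled lattice equation
#   u_j'(t) = (u_{j-1} - u_j)/h + u_j (1 - u_j),
# assumed here as a stand-in for the class discussed above (not the exact
# equations of the cited works).  Forward Euler time stepping.
import numpy as np

def simulate_front(n=400, h=1.0, dt=0.02, steps=3000):
    u = np.where(np.arange(n) < 20, 1.0, 0.0)  # step data: stable 1 behind, unstable 0 ahead
    for _ in range(steps):
        left = np.empty_like(u)
        left[0] = 1.0          # feed the stable equilibrium in from the left boundary
        left[1:] = u[:-1]
        u = u + dt * ((left - u) / h + u * (1.0 - u))
    return u

if __name__ == "__main__":
    u = simulate_front()
    front = int(np.argmax(u < 0.5))              # crude front location
    monotone = bool(np.all(np.diff(u) <= 1e-9))  # is the profile decreasing in j?
    print(f"front near lattice site {front}; profile monotone: {monotone}")
```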
{ "cite_N": [ "@cite_28", "@cite_19" ], "mid": [ "2031810620", "2150477501" ], "abstract": [ "Abstract At the heart of this article will be the study of a branching Brownian motion (BBM) with killing , where individual particles move as Brownian motions with drift − ρ , perform dyadic branching at rate β and are killed on hitting the origin. Firstly, by considering properties of the right-most particle and the extinction probability, we will provide a probabilistic proof of the classical result that the ‘one-sided’ FKPP travelling-wave equation of speed − ρ with solutions f : [ 0 , ∞ ) → [ 0 , 1 ] satisfying f ( 0 ) = 1 and f ( ∞ ) = 0 has a unique solution with a particular asymptotic when ρ 2 β , and no solutions otherwise. Our analysis is in the spirit of the standard BBM studies of [S.C. Harris, Travelling-waves for the FKPP equation via probabilistic arguments, Proc. Roy. Soc. Edinburgh Sect. A 129 (3) (1999) 503–517] and [A.E. Kyprianou, Travelling wave solutions to the K-P-P equation: alternatives to Simon Harris' probabilistic analysis, Ann. Inst. H. Poincare Probab. Statist. 40 (1) (2004) 53–72] and includes an intuitive application of a change of measure inducing a spine decomposition that, as a by product, gives the new result that the asymptotic speed of the right-most particle in the killed BBM is 2 β − ρ on the survival set. Secondly, we introduce and discuss the convergence of an additive martingale for the killed BBM, W λ , that appears of fundamental importance as well as facilitating some new results on the almost-sure exponential growth rate of the number of particles of speed λ ∈ ( 0 , 2 β − ρ ) . Finally, we prove a new result for the asymptotic behaviour of the probability of finding the right-most particle with speed λ > 2 β − ρ . This result combined with Chauvin and Rouault's [B. Chauvin, A. Rouault, KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees, Probab. Theory Related Fields 80 (2) (1988) 299–314] arguments for standard BBM readily yields an analogous Yaglom-type conditional limit theorem for the killed BBM and reveals W λ as the limiting Radon–Nikodým derivative when conditioning the right-most particle to travel at speed λ into the distant future.", "These lecture notes, based on a course given at the Zurich Clay Summer School (June 23-July 18, 2008), review our current mathematical understanding of the global behaviour of waves on black hole exterior backgrounds. Interest in this problem stems from its relationship to the non-linear stability of the black hole spacetimes themselves as solutions to the Einstein equations, one of the central open problems of general relativity. After an introductory discussion of the Schwarzschild geometry and the black hole concept, the classical theorem of Kay and Wald on the boundedness of scalar waves on the exterior region of Schwarzschild is reviewed. The original proof is presented, followed by a new more robust proof of a stronger boundedness statement. The problem of decay of scalar waves on Schwarzschild is then addressed, and a theorem proving quantitative decay is stated and its proof sketched. This decay statement is carefully contrasted with the type of statements derived heuristically in the physics literature for the asymptotic tails of individual spherical harmonics. 
Following this, our recent proof of the boundedness of solutions to the wave equation on axisymmetric stationary backgrounds (including slowly-rotating Kerr and Kerr-Newman) is reviewed and a new decay result for slowly-rotating Kerr spacetimes is stated and proved. This last result was announced at the summer school and appears in print here for the first time. A discussion of the analogue of these problems for spacetimes with a positive cosmological constant follows. Finally, a general framework is given for capturing the red-shift effect for non-extremal black holes. This unifies and extends some of the analysis of the previous sections. The notes end with a collection of open problems." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can be used in other settings where the arguments have been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ), and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
Facility location problems have a long history; here we mention only some of the known results. We focus on metric instances of these problems: the non-metric cases are usually much harder @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "113414296" ], "abstract": [ "We design a new approximation algorithm for the metric uncapacitated facility location problem. This algorithm is of LP rounding type and is based on a rounding technique developed in [5,6,7]." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever ``coupling'' argument to show that local optima had cost at most a constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was a 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can also be applied in other settings where the coupling argument has been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ) and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
The @math -median problem seeks to find facilities @math with @math to minimize @math . The first constant-factor approximation for the @math -median problem was given by @cite_4 , which was subsequently improved by @cite_20 and @cite_2 to the current best factor of @math . It is known that the natural LP relaxation for the problem has an integrality gap of @math , but the currently known algorithm that achieves this does not run in polynomial time @cite_26 . The extension of @math -median to the case where one can open at most @math facilities but must also pay their facility opening costs was studied by @cite_12 , who gave a @math -approximation.
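To make the swap move concrete, here is a minimal sketch of single-swap local search for the @math -median objective on an explicit finite metric. It is a generic illustration, not code from any of the cited papers: the function names, the first-improvement rule, and the toy instance are assumptions of the sketch, and the polynomial-time analyses cited above additionally require each accepted swap to improve the cost by a noticeable (roughly 1 + epsilon/k) factor, which is omitted here.

```python
# Minimal sketch (illustrative only): single-swap local search for k-median
# on a finite metric given as a nested distance dict.
import itertools


def kmedian_cost(clients, centers, dist):
    """Sum over clients of the distance to the closest open center."""
    return sum(min(dist[j][i] for i in centers) for j in clients)


def local_search_kmedian(clients, facilities, k, dist):
    """Start from an arbitrary set of k centers and repeatedly apply an
    improving swap (close one open center, open a closed one) until no
    swap improves the cost; the result is a locally optimal solution."""
    centers = set(facilities[:k])
    improved = True
    while improved:
        improved = False
        current = kmedian_cost(clients, centers, dist)
        for f_out, f_in in itertools.product(sorted(centers), facilities):
            if f_in in centers:
                continue
            candidate = (centers - {f_out}) | {f_in}
            if kmedian_cost(clients, candidate, dist) < current:
                centers, improved = candidate, True
                break  # first improvement found; rescan from the new solution
    return centers


if __name__ == "__main__":
    # Toy instance: five points on a line; the metric is |x - y|.
    pts = {"a": 0.0, "b": 1.0, "c": 5.0, "d": 6.0, "e": 7.0}
    dist = {u: {v: abs(x - y) for v, y in pts.items()} for u, x in pts.items()}
    names = list(pts)
    print(local_search_kmedian(names, names, k=2, dist=dist))
```

The analyses discussed above bound the cost of any such locally optimal solution against the global optimum; the code itself only finds the local optimum.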
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_2", "@cite_20", "@cite_12" ], "mid": [ "169553655", "2767218854", "2033687872", "2085751730", "1512148653" ], "abstract": [ "In this paper, we revisit the classical k-median problem. Using the standard LP relaxation for k-median, we give an efficient algorithm to construct a probability distribution on sets of k centers that matches the marginals specified by the optimal LP solution. Analyzing the approximation ratio of our algorithm presents significant technical difficulties: we are able to show an upper bound of 3.25. While this is worse than the current best known 3+e guarantee of [2], because: (1) it leads to 3.25 approximation algorithms for some generalizations of the k-median problem, including the k-facility location problem introduced in [10], (2) our algorithm runs in @math time to achieve 3.25(1+δ)-approximation compared to the O(n8) time required by the local search algorithm of [2] to guarantee a 3.25 approximation, and (3) our approach has the potential to beat the decade old bound of 3+e for k-median. We also give a 34-approximation for the knapsack median problem, which greatly improves the approximation constant in [13]. Using the same technique, we also give a 9-approximation for matroid median problem introduced in [11], improving on their 16-approximation.", "In this paper, we present a new iterative rounding framework for many clustering problems. Using this, we obtain an (α1 + є ≤ 7.081 + є)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen. For k-means with outliers, we give an (α2+є ≤ 53.002 + є)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α1- and (α1 + є)-approximation algorithms for matroid and knapsack median problems respectively, improving upon the previous best approximations ratios of 8 due to Swamy and 17.46 due to The natural LP relaxation for the k-median k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution losing only a constant-factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any є > 0.", "In this paper, we study approximation algorithms for several NP-hard facility location problems. We prove that a simple local search heuristic yields polynomial-time constant-factor approximation bounds for the metric versions of the uncapacitated k-median problem and the uncapacitated facility location problem. (For the k-median problem, our algorithms require a constant-factor blowup in the parameter k.) This local search heuristic was first proposed several decades ago, and has been shown to exhibit good practical performance in empirical studies. 
We also extend the above results to obtain constant-factor approximation bounds for the metric versions of capacitated k-median and facility location problems.", "We present the first constant-factor approximation algorithm for the metric k-median problem. The k-median problem is one of the most well-studied clustering problems, i.e., those problems in which the aim is to partition a given set of points into clusters so that the points within a cluster are relatively close with respect to some measure. For the metric k-median problem, we are given n points in a metric space. We select k of these to be cluster centers and then assign each point to its closest selected center. If point j is assigned to a center i, the cost incurred is proportional to the distance between i and j. The goal is to select the k centers that minimize the sum of the assignment costs. We give a 6 2/3-approximation algorithm for this problem. This improves upon the best previously known result of O(log k log log k), which was obtained by refining and derandomizing a randomized O(log n log log n)-approximation algorithm of Bartal.", "In this paper we demonstrate a general method of designing constant-factor approximation algorithms for some discrete optimization problems with cardinality constraints. The core of the method is a simple deterministic (\"pipage\") procedure of rounding of linear relaxations. By using the method we design a (1-(1-1/k)^k)-approximation algorithm for the maximum coverage problem where k is the maximum size of the subsets that are covered, and a 1/2-approximation algorithm for the maximum cut problem with given sizes of parts in the vertex set bipartition. The performance guarantee of the former improves on that of the well-known (1 - e^{-1})-greedy algorithm due to Cornuejols, Fisher and Nemhauser in each case of bounded k. The latter is, to the best of our knowledge, the first constant-factor algorithm for that version of the maximum cut problem." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever ``coupling'' argument to show that local optima had cost at most a constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was a 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can also be applied in other settings where the coupling argument has been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ) and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
The @math -means problem minimizes @math , and is widely used for clustering in machine learning, especially when the point set lies in Euclidean space. For Euclidean instances, one can obtain @math -approximations in linear time if @math and @math are treated as constants: see @cite_7 and the references therein. The most commonly used algorithm in practice is Lloyd's algorithm, which is a local-search procedure different from ours, and which is a special case of the EM algorithm @cite_27 . While there is no explicit mention of an approximation algorithm with provable guarantees for @math -means (to the best of our knowledge), many of the constant-factor approximations for @math -median can be extended to the @math -means problem as well. The paper of @cite_8 is closely related to ours: it analyzes the same local search algorithm we consider, and uses properties of @math -means in Euclidean spaces to obtain a @math -approximation. Our results for @math -means hold for general metrics, and can essentially be viewed as extensions of their results.
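Since Lloyd's algorithm is named above as the method most commonly used in practice, a minimal sketch may help fix ideas: it alternates an assignment step (each point joins its nearest center) with an update step (each center moves to the centroid of its cluster). This is the standard textbook procedure, not the swap-based local search analyzed in the paper; the random initialization and iteration cap below are arbitrary choices of the sketch.

```python
# Minimal sketch of Lloyd's algorithm for k-means in Euclidean space.
# This is the assignment/recenter heuristic mentioned in the text, not the
# swap-based local search; by itself it carries no approximation guarantee.
import random


def lloyd(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # arbitrary initialization
    for _ in range(iters):
        # Assignment step: attach each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the centroid of its cluster.
        new_centers = []
        for i, cl in enumerate(clusters):
            if not cl:                        # keep an empty cluster's center in place
                new_centers.append(centers[i])
                continue
            dim = len(cl[0])
            new_centers.append(tuple(sum(p[d] for p in cl) / len(cl)
                                     for d in range(dim)))
        if new_centers == centers:            # converged to a local optimum
            break
        centers = new_centers
    return centers


if __name__ == "__main__":
    data = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
    print(lloyd(data, k=2))
```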
{ "cite_N": [ "@cite_27", "@cite_7", "@cite_8" ], "mid": [ "2018165630", "2947175317", "1956536100" ], "abstract": [ "We consider @math -median clustering in finite metric spaces and @math -means clustering in Euclidean spaces, in the setting where @math is part of the input (not a constant). For the @math -means problem, show that if the optimal @math -means clustering of the input is more expensive than the optimal @math -means clustering by a factor of @math , then one can achieve a @math -approximation to the @math -means optimal in time polynomial in @math and @math by using a variant of Lloyd's algorithm. In this work we substantially improve this approximation guarantee. We show that given only the condition that the @math -means optimal is more expensive than the @math -means optimal by a factor @math for some constant @math , we can obtain a PTAS. In particular, under this assumption, for any @math we achieve a @math -approximation to the @math -means optimal in time polynomial in @math and @math , and exponential in @math and @math . We thus decouple the strength of the assumption from the quality of the approximation ratio. We also give a PTAS for the @math -median problem in finite metrics under the analogous assumption as well. For @math -means, we in addition give a randomized algorithm with improved running time of @math . Our technique also obtains a PTAS under the assumption of that all @math approximations are @math -close to a desired target clustering, in the case that all target clusters have size greater than @math and @math is constant. Note that the motivation of is that for many clustering problems, the objective function is only a proxy for the true goal of getting close to the target. From this perspective, our improvement is that for @math -means in Euclidean spaces we reduce the distance of the clustering found to the target from @math to @math when all target clusters are large, and for @math -median we improve the largeness'' condition needed in the work of to get exactly @math -close from @math to @math . Our results are based on a new notion of clustering stability.", "Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. We complement several recent papers in this line of research by introducing a general method to reduce bias in the data through random projections in a fair'' subspace. We apply this method to densest subgraph and @math -means. For densest subgraph, our approach based on fair projections allows to recover both theoretically and empirically an almost optimal, fair, dense subgraph hidden in the input data. We also show that, under the small set expansion hypothesis, approximating this problem beyond a factor of @math is NP-hard and we show a polynomial time algorithm with a matching approximation bound. We further apply our method to @math -means. In a previous paper, [NIPS 2017] showed that problems such as @math -means can be approximated up to a constant factor while ensuring that none of two protected class (e.g., gender, ethnicity) is disparately impacted. We show that fair projections generalize the concept of fairlet introduced by to any number of protected attributes and improve empirically the quality of the resulting clustering. 
We also present the first constant-factor approximation for an arbitrary number of protected attributes thus settling an open problem recently addressed in several works.", "The Euclidean @math -means problem is a classical problem that has been extensively studied in the theoretical computer science, machine learning and the computational geometry communities. In this problem, we are given a set of @math points in Euclidean space @math , and the goal is to choose @math centers in @math so that the sum of squared distances of each point to its nearest center is minimized. The best approximation algorithms for this problem include a polynomial time constant factor approximation for general @math and a @math -approximation which runs in time @math . At the other extreme, the only known computational complexity result for this problem is NP-hardness [ADHP'09]. The main difficulty in obtaining hardness results stems from the Euclidean nature of the problem, and the fact that any point in @math can be a potential center. This gap in understanding left open the intriguing possibility that the problem might admit a PTAS for all @math . In this paper we provide the first hardness of approximation for the Euclidean @math -means problem. Concretely, we show that there exists a constant @math such that it is NP-hard to approximate the @math -means objective to within a factor of @math . We show this via an efficient reduction from the vertex cover problem on triangle-free graphs: given a triangle-free graph, the goal is to choose the fewest number of vertices which are incident on all the edges. Additionally, we give a proof that the current best hardness results for vertex cover can be carried over to triangle-free graphs. To show this we transform @math , a known hard vertex cover instance, by taking a graph product with a suitably chosen graph @math , and showing that the size of the (normalized) maximum independent set is almost exactly preserved in the product graph using a spectral analysis, which might be of independent interest." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever ``coupling'' argument to show that local optima had cost at most a constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was a 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can also be applied in other settings where the coupling argument has been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ) and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
Tight bounds for the @math -center problem are known: there is a @math -approximation algorithm due to @cite_18 @cite_10 , and this is tight unless @math .
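For context, one classical route to a constant-factor guarantee for @math -center is Gonzalez's farthest-point heuristic, which is known to give a 2-approximation: repeatedly add the point farthest from the centers chosen so far. The sketch below is that textbook procedure on an explicit metric; the cited works may use different techniques, and the helper names and toy data are assumptions of the sketch.

```python
# Minimal sketch of the farthest-point greedy heuristic for k-center
# (Gonzalez's classical 2-approximation); the metric is an explicit dict.
def greedy_k_center(points, k, dist):
    """Pick an arbitrary first center, then repeatedly add the point
    farthest from the current set of centers."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points,
                       key=lambda p: min(dist[p][c] for c in centers))
        centers.append(farthest)
    # Objective value: maximum distance of any point to its nearest center.
    radius = max(min(dist[p][c] for c in centers) for p in points)
    return centers, radius


if __name__ == "__main__":
    pts = {"a": 0.0, "b": 1.0, "c": 10.0, "d": 11.0}
    dist = {u: {v: abs(x - y) for v, y in pts.items()} for u, x in pts.items()}
    print(greedy_k_center(list(pts), k=2, dist=dist))
```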
{ "cite_N": [ "@cite_18", "@cite_10" ], "mid": [ "2785893262", "2293564405" ], "abstract": [ "The @math -center problem is a classical combinatorial optimization problem which asks to find @math centers such that the maximum distance of any input point in a set @math to its assigned center is minimized. The problem allows for elegant @math -approximations. However, the situation becomes significantly more difficult when constraints are added to the problem. We raise the question whether general methods can be derived to turn an approximation algorithm for a clustering problem with some constraints into an approximation algorithm that respects one constraint more. Our constraint of choice is privacy: Here, we are asked to only open a center when at least @math clients will be assigned to it. We show how to combine privacy with several other constraints.", "We consider the k-Center problem and some generalizations. For k-Center a set of kcenter vertices needs to be found in a graph G with edge lengths, such that the distance from any vertex ofi¾?G to its nearest center is minimized. This problem naturally occurs in transportation networks, and therefore we model the inputs as graphs with bounded highway dimension, as proposed by [ICALP 2011]. We show both approximation and fixed-parameter hardness results, and how to overcome them using fixed-parameter approximations. In particular, we prove that for any @math computing a @math -approximation is W[2]-hard for parameter k, and NP-hard for graphs with highway dimension @math . The latter does not rule out fixed-parameter @math -approximations for the highway dimension parameteri¾?h, but implies that such an algorithm must have at least doubly exponential running time in h if it exists, unless the ETH fails. On the positive side, we show how to get below the approximation factor ofi¾?2 by combining the parameters k andi¾?h: we develop a fixed-parameter 3 2-approximation with running time @math . We also provide similar fixed-parameter approximations for the weightedk-Center and @math -Partition problems, which generalize k-Center." ] }
0809.2554
1529111168
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the @math -median, @math -center and @math -means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in @math -medians, we are allowed only swap moves. The local-search algorithm for @math -median was analyzed by (SIAM J. Comput. 33(3):544-562, 2004), who used a clever ``coupling'' argument to show that local optima had cost at most a constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was a 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the @math -median result which avoids this coupling argument. These arguments can also be applied in other settings where the coupling argument has been used. We also show that for the problem of opening @math facilities @math to minimize the objective function @math , the natural swap-based local-search algorithm is a @math -approximation. This implies constant-factor approximations for @math -medians (when @math ) and @math -means (when @math ), and an @math -approximation algorithm for the @math -center problem (which is essentially @math ).
For the uncapacitated metric facility location (UFL) problem, the first constant-factor approximation was given by @cite_21 ; subsequent approximation algorithms and hardness results have been given by @cite_3 @cite_17 @cite_13 @cite_0 @cite_11 @cite_20 @cite_19 @cite_5 @cite_9 @cite_16 @cite_1 @cite_2 @cite_6 . It remains a tantalizing problem to close the gap between the best known approximation factor of @math @cite_22 and the hardness result of @math @cite_14 .
{ "cite_N": [ "@cite_11", "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2177090493", "2949839236", "1973529814", "2052494364", "2033687872", "113414296", "2144926522", "2101622070", "2139841919", "2035994564", "2950999367", "2033647570", "1559577696", "2147670675", "1854155592", "1516429886" ], "abstract": [ "We present a 1.488-approximation algorithm for the metric uncapacitated facility location (UFL) problem. Previously, the best algorithm was due to Byrka (2007). Byrka proposed an algorithm parametrized by @c and used it with @c 1.6774. By either running his algorithm or the algorithm proposed by Jain, Mahdian and Saberi ([email protected]?02), Byrka obtained an algorithm that gives expected approximation ratio 1.5. We show that if @c is randomly selected, the approximation ratio can be improved to 1.488. Our algorithm cuts the gap with the 1.463 approximability lower bound by almost 1 3.", "We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location problem (UFL), which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang. Note, that the approximability lower bound by Guha and Khuller is 1.463. An algorithm is a ( @math , @math )-approximation algorithm if the solution it produces has total cost at most @math , where @math and @math are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the @math -approximation algorithm of Chudak and Shmoys, is a (1.6774,1.3738)-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve @math established by Jain, Mahdian and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a (1.11,1.7764)-approximation algorithm proposed by , and later analyzed by , we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL.", "In this paper we present a 1.52-approximation algorithm for the metric uncapacitated facility location problem, and a 2-approximation algorithm for the metric capacitated facility location problem with soft capacities. Both these algorithms improve the best previously known approximation factor for the corresponding problem, and our soft-capacitated facility location algorithm achieves the integrality gap of the standard linear programming relaxation of the problem. Furthermore, we will show, using a result of Thorup, that our algorithms can be implemented in quasi-linear time.", "In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.", "In this paper, we study approximation algorithms for several NP-hard facility location problems. 
We prove that a simple local search heuristic yields polynomial-time constant-factor approximation bounds for the metric versions of the uncapacitated k-median problem and the uncapacitated facility location problem. (For the k-median problem, our algorithms require a constant-factor blowup in the parameter k.) This local search heuristic was first proposed several decades ago, and has been shown to exhibit good practical performance in empirical studies. We also extend the above results to obtain constant-factor approximation bounds for the metric versions of capacitated k-median and facility location problems.", "We design a new approximation algorithm for the metric uncapacitated facility location problem. This algorithm is of LP rounding type and is based on a rounding technique developed in [5,6,7].", "The authors give the first constant factor approximation algorithm for the facility location problem with nonuniform, hard capacities. Facility location problems have received a great deal of attention in recent years. Approximation algorithms have been developed for many variants. Most of these algorithms are based on linear programming, but the LP techniques developed thus far have been unsuccessful in dealing with hard capacities. A local-search based approximation algorithm (M. , 1998; F.A. Chudak and D.P. Williamson, 1999) is known for the special case of hard but uniform capacities. We present a local-search heuristic that yields an approximation guarantee of 9 + ε for the case of nonuniform hard capacities. To obtain this result, we introduce new operations that are natural in this context. Our proof is based on network flow techniques.", "A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the commodities. We assume that the transportation costs form a metric. This problem is commonly referred to as the uncapacitated facility location problem. Applications to bank account location and clustering, as well as many related pieces of work, are discussed by Cornuejols, Nemhauser, and Wolsey. Recently, the first constant factor approximation algorithm for this problem was obtained by Shmoys, Tardos, and Aardal. We show that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos, and Aardal, can be used to obtain an approximation guarantee of 2.408. We discuss a few variants of the problem, demonstrating better approximation factors for restricted versions of the problem. We also show that the problem is max SNP-hard. However, the inapproximability constants derived from the max SNP hardness are very close to one. By relating this problem to Set Cover, we prove a lower bound of 1.463 on the best possible approximation ratio, assuming NP ⊄ DTIME[n^O(log log n)].", "We present approximation algorithms for the metric uncapacitated facility location problem and the metric k-median problem achieving guarantees of 3 and 6 respectively. The distinguishing feature of our algorithms is their low running time: O(m log m) and O(m log m(L + log n)) respectively, where n and m are the total number of vertices and edges in the underlying complete bipartite graph on cities and facilities. 
The main algorithmic ideas are a new extension of the primal-dual schema and the use of Lagrangian relaxation to derive approximation algorithms.", "In this paper, we present a randomized constant factor approximation algorithm for the metric minimum facility location problem with uniform costs and demands in a distributed setting, in which every point can open a facility. In particular, our distributed algorithm uses three communication rounds with message sizes bounded to O(log n) bits where n is the number of points. We also extend our algorithm to constant powers of metric spaces, where we also obtain a randomized constant factor approximation algorithm.", "This paper presents a distributed O(1)-approximation algorithm, with expected- @math running time, in the @math model for the metric facility location problem on a size- @math clique network. Though metric facility location has been considered by a number of researchers in low-diameter settings, this is the first sub-logarithmic-round algorithm for the problem that yields an O(1)-approximation in the setting of non-uniform facility opening costs. In order to obtain this result, our paper makes three main technical contributions. First, we show a new lower bound for metric facility location, extending the lower bound of B a (ICALP 2005) that applies only to the special case of uniform facility opening costs. Next, we demonstrate a reduction of the distributed metric facility location problem to the problem of computing an O(1)-ruling set of an appropriate spanning subgraph. Finally, we present a sub-logarithmic-round (in expectation) algorithm for computing a 2-ruling set in a spanning subgraph of a clique. Our algorithm accomplishes this by using a combination of randomized and deterministic sparsification.", "We present improved combinatorial approximation algorithms for the uncapacitated facility location problem. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of @math in @math time. This also yields a bicriteria approximation tradeoff of @math for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to Jain and Vazirani, we get an approximation ratio of @math in @math time. This is very close to the approximation guarantee of the best known algorithm which is linear programming (LP)-based. Further, combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving @math . We also consider a variant of the capacitated facility location problem and present improved approximation algorithms for this.", "In the Universal Facility Location problem we are given a set of demand points and a set of facilities. The goal is to assign the demands to facilities in such a way that the sum of service and facility costs is minimized. The service cost is proportional to the distance each unit of demand has to travel to its assigned facility, whereas the facility cost of each facility i depends on the amount of demand assigned to that facility and is given by a cost function f i (·). We present a (7.88 + e)-approximation algorithm for the Universal Facility Location problem based on local search, under the assumption that the cost functions f i are nondecreasing. 
The algorithm chooses local improvement steps by solving a knapsack-like subproblem using dynamic programming. This is the first constant-factor approximation algorithm for this problem. Our algorithm also slightly improves the best known approximation ratio for the capacitated facility location problem with non-uniform hard capacities.", "One of the most flourishing areas of research in the design and analysis of approximation algorithms has been for facility location problems. In particular, for the metric case of two simple models, the uncapacitated facility location and the k-median problems, there are now a variety of techniques that yield constant performance guarantees. These methods include LP rounding, primal-dual algorithms, and local search techniques. Furthermore, the salient ideas in these algorithms and their analyses are simple to explain and reflect a surprising degree of commonality. This note is intended as a companion to our lecture at CONF 2000, mainly to give pointers to the appropriate references.", "We present improved combinatorial approximation algorithms for the uncapacitated facility location and k-median problems. Two central ideas in most of our results are cost scaling and greedy improvement. We present a simple greedy local search algorithm which achieves an approximation ratio of 2.414+ε in Õ(n^2/ε) time. This also yields a bicriteria approximation tradeoff of (1+γ, 1+2γ) for facility cost versus service cost which is better than previously known tradeoffs and close to the best possible. Combining greedy improvement and cost scaling with a recent primal-dual algorithm for facility location due to K. Jain and V. Vazirani (1999), we get an approximation ratio of 1.853 in Õ(n^3) time. This is already very close to the approximation guarantee of the best known algorithm which is LP-based. Further combined with the best known LP-based algorithm for facility location, we get a very slight improvement in the approximation factor for facility location, achieving 1.728. We present improved approximation algorithms for capacitated facility location and a variant. We also present a 4-approximation for the k-median problem, using similar ideas, building on the 6-approximation of Jain and Vazirani. The algorithm runs in Õ(n^3) time.", "This work gives new insight into two well-known approximation algorithms for the uncapacitated facility location problem: the primal-dual algorithm of Jain & Vazirani, and an algorithm of Mettu & Plaxton. Our main result answers positively a question posed by Jain & Vazirani of whether their algorithm can be modified to attain a desired “continuity” property. This yields an upper bound of 3 on the integrality gap of the natural LP relaxation of the k-median problem, but our approach does not yield a polynomial time algorithm with this guarantee. We also give a new simple proof of the performance guarantee of the Mettu-Plaxton algorithm using LP duality, which suggests a minor modification of the algorithm that makes it Lagrangian-multiplier preserving." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
``Does `Authority' mean Quality?'' is the question @cite_7 asked when they evaluated the potential of link- and content-based algorithms to identify high-quality web pages. Human experts rated web documents from the Yahoo directory related to five popular topics by their quality. They found a high correlation between the rankings of the human experts, leading to the conclusion that there is a common notion of quality. By computing link-based metrics as well as analyzing the link neighborhood of the web pages from their dataset, they were able to evaluate the performance of machine ranking methods. Here too they found a high correlation between in-degree, Kleinberg's authority score @cite_5 and PageRank. They isolated the documents that the human experts rated as good quality and evaluated the performance of the algorithms on that list in terms of precision at @math and at @math . In-degree, e.g., has a precision at @math of @math , which means that on average almost @math of the first @math documents it returns would be rated good by the experts. In general they find that in-degree, authority score and PageRank are all highly correlated with rankings provided by experts. Thus, web document quality can be estimated with hyperlink-based metrics.
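For reference, the precision-at-@math numbers quoted above are computed directly from a ranked list and the set of documents the experts rated good; the sketch below uses made-up document IDs and judgments, not the data from the cited study.

```python
# Minimal sketch of precision@k, the evaluation measure quoted above.
# The ranking and the set of expert-approved documents are made-up examples.
def precision_at_k(ranked, good, k):
    """Fraction of the top-k ranked items that the experts rated good."""
    top = ranked[:k]
    return sum(1 for doc in top if doc in good) / len(top)


if __name__ == "__main__":
    ranking_by_indegree = ["d3", "d7", "d1", "d9", "d2", "d5"]
    expert_good = {"d3", "d1", "d2", "d8"}
    print(precision_at_k(ranking_by_indegree, expert_good, k=5))  # prints 0.6
```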
{ "cite_N": [ "@cite_5", "@cite_7" ], "mid": [ "2615207113", "2166227910" ], "abstract": [ "In the field of objective image quality assessment (IQA), the Spearman's @math and Kendall's @math are two most popular rank correlation indicators, which straightforwardly assign uniform weight to all quality levels and assume each pair of images are sortable. They are successful for measuring the average accuracy of an IQA metric in ranking multiple processed images. However, two important perceptual properties are ignored by them as well. Firstly, the sorting accuracy (SA) of high quality images are usually more important than the poor quality ones in many real world applications, where only the top-ranked images would be pushed to the users. Secondly, due to the subjective uncertainty in making judgement, two perceptually similar images are usually hardly sortable, whose ranks do not contribute to the evaluation of an IQA metric. To more accurately compare different IQA algorithms, we explore a perceptually weighted rank correlation indicator in this paper, which rewards the capability of correctly ranking high quality images, and suppresses the attention towards insensitive rank mistakes. More specifically, we focus on activating valid' pairwise comparison towards image quality, whose difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned an unique weight, which is determined by both the quality level and rank deviation. By modifying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers a new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated in recommending robust IQA metrics for both the degraded and enhanced image data.", "For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items. Precision at 5 is about 0.75, and precision at 10 is about 0.55; this is in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well; ranking documents by the total number of pages on their containing site." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Upstill, Craswell and Hawking @cite_8 studied the PageRank and in-degree of URLs for Fortune 500 and Fortune Most Admired companies. They found that companies on those lists averaged 1 point more PageRank (via the Google toolbar's self-reported 0-10 scale) than a large selection of other companies. They also found that IT companies typically had higher PageRank than non-IT companies. Similar to @cite_7 , they found in-degree highly correlated with PageRank.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "135168798", "2121672615" ], "abstract": [ "Measures based on the Link Recommendation Assumption are hypothesised to help modern Web search engines rank ‘important, high quality’ pages ahead of relevant but less valuable pages and to reject ‘spam’. We tested these hypotheses using inlink counts and PageRank scores readily obtainable from search engines Google and Fast. We found that the average Google-reported PageRank of websites operated by Fortune 500 companies was approximately one point higher than the average for a large selection of companies. The same was true for Fortune Most Admired companies. A substantially bigger difference was observed in favour of companies with famous brands. Investigating less desirable biases, we found a one point bias toward technology companies, and a two point bias in favour of IT companies listed in the Wired 40. We found negligible bias in favour of US companies. Log of indegree was highly correlated with Google-reported PageRank scores, and just as effective when predicting desirable company attributes. Further, we found that PageRank scores for sites within a known spam network were no lower than would be expected on the basis of their indegree. We encounter no compelling evidence to support the use of PageRank over indegree.", "Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3 (vs. 56.7 for PageRank or 50 for random)." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Bharat and Mihaila @cite_15 propose a ranking scheme based on authority, where the most authoritative pages get the highest ranking. Their algorithm is based on a special set of ``expert documents'', which are defined as web pages about a certain topic with many links to non-affiliated web pages on that topic. Non-affiliated pages are pages from different domains and with sufficiently different IP addresses. These expert documents are not chosen manually but are picked automatically as long as they meet certain requirements (sufficient out-degree, etc.). In response to a user query, the most relevant expert documents are isolated. The proposed scheme locates relevant links within the expert documents and follows them to identify target pages. These pages are finally ranked according to the number and relevance of expert documents pointing to them and presented to the end user. Bharat and Mihaila evaluated their algorithm against three commercial search engines and found that it performs just as well as, or in some cases even better than, the top search engine when it comes to locating the home page of a specific topic. The same is true for discovering pages relevant to a topic (where many good pages exist).
{ "cite_N": [ "@cite_15" ], "mid": [ "2126497299" ], "abstract": [ "With a suitable algorithm for ranking the expertise of a user in a collaborative tagging system, we will be able to identify experts and discover useful and relevant resources through them. We propose that the level of expertise of a user with respect to a particular topic is mainly determined by two factors. Firstly, an expert should possess a high quality collection of resources, while the quality of a Web resource depends on the expertise of the users who have assigned tags to it. Secondly, an expert should be one who tends to identify interesting or useful resources before other users do. We propose a graph-based algorithm, SPEAR (SPamming-resistant Expertise Analysis and Ranking), which implements these ideas for ranking users in a folksonomy. We evaluate our method with experiments on data sets collected from Delicious.com comprising over 71,000 Web documents, 0.5 million users and 2 million shared bookmarks. We also show that the algorithm is more resistant to spammers than other methods such as the original HITS algorithm and simple statistical measures." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
@cite_12 observe a ``rich-get-richer'' phenomenon where popular pages tend to get even more popular, since search engines repeatedly return popular pages first. As other studies by Cho @cite_10 @cite_21 and Baeza-Yates @cite_19 have shown, PageRank is significantly biased against new (and thus unpopular) pages, which makes it problematic for these pages to draw the user's attention even if they are potentially of high quality. That means the popularity of a page can be much lower than its actual quality. They propose page quality as an alternative ranking method. By defining the quality of a web page as the probability that a user likes the page when seeing it for the first time, the authors claim to be able to alleviate the drawbacks of PageRank. With the intuition from PageRank that a user who likes a page will link to it, the algorithm is able to identify new, high-quality pages much faster than PageRank and thus shortens the time it takes for them to get noticed.
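Since the rich-get-richer argument is phrased in terms of PageRank, a compact reminder of the standard computation may be useful: PageRank is the stationary score of a random surfer who follows out-links with probability given by a damping factor and otherwise jumps to a uniformly random page. The sketch below is the usual power iteration on a made-up toy graph (the damping value 0.85 is the conventional choice); it is not the page-quality estimator proposed in the cited work.

```python
# Minimal sketch of PageRank by power iteration on a toy link graph.
# This is the standard popularity-based score the text argues is biased
# against new pages; it is NOT the page-quality estimator of the cited work.
# Assumes every linked page also appears as a key in `links`.
def pagerank(links, damping=0.85, iters=100, tol=1e-10):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:                                   # distribute rank over out-links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                                      # dangling page: spread uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        if sum(abs(new[p] - rank[p]) for p in pages) < tol:
            rank = new
            break
        rank = new
    return rank


if __name__ == "__main__":
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(pagerank(toy_web))
```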
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_10", "@cite_12" ], "mid": [ "2134308336", "2732691751", "2107577105", "2096041903" ], "abstract": [ "In a number of recent studies [4, 8] researchers have found that because search engines repeatedly return currently popular pages at the top of search results, popular pages tend to get even more popular, while unpopular pages get ignored by an average user. This \"rich-get-richer\" phenomenon is particularly problematic for new and high-quality pages because they may never get a chance to get users' attention, decreasing the overall quality of search results in the long run. In this paper, we propose a new ranking function, called page quality that can alleviate the problem of popularity-based ranking. We first present a formal framework to study the search engine bias by discussing what is an \"ideal\" way to measure the intrinsic quality of a page. We then compare how PageRank, the current ranking metric used by major search engines, differs from this ideal quality metric. This framework will help us investigate the search engine bias in more concrete terms and provide clear understanding why PageRank is effective in many cases and exactly when it is problematic. We then propose a practical way to estimate the intrinsic page quality to avoid the inherent bias of PageRank. We derive our proposed quality estimator through a careful analysis of a reasonable web user model, and we present experimental results that show the potential of our proposed estimator. We believe that our quality estimator has the potential to alleviate the rich-get-richer phenomenon and help new and high-quality pages get the attention that they deserve.", "Algorithms that favor popular items are used to help us select among many choices, from top-ranked search engine results to highly-cited scientific papers. The goal of these algorithms is to identify high-quality items such as reliable news, credible information sources, and important discoveries–in short, high-quality content should rank at the top. Prior work has shown that choosing what is popular may amplify random fluctuations and lead to sub-optimal rankings. Nonetheless, it is often assumed that recommending what is popular will help high-quality content “bubble up” in practice. Here we identify the conditions in which popularity may be a viable proxy for quality content by studying a simple model of a cultural market endowed with an intrinsic notion of quality. A parameter representing the cognitive cost of exploration controls the trade-off between quality and popularity. Below and above a critical exploration cost, popularity bias is more likely to hinder quality. But we find a narrow intermediate regime of user attention where an optimal balance exists: choosing what is popular can help promote high-quality items to the top. These findings clarify the effects of algorithmic popularity bias on quality outcomes, and may inform the design of more principled mechanisms for techno-social cultural markets.", "PageRank is one of the principle criteria according to which Google ranks Web pages. PageRank can be interpreted as a frequency of visiting a Web page by a random surfer, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires about one week of intensive computations. In the present work we propose and analyze Monte Carlo-type methods for the PageRank computation. 
There are several advantages of the probabilistic Monte Carlo methods over the deterministic power iteration method: Monte Carlo methods already provide good estimation of the PageRank for relatively important pages after one iteration; Monte Carlo methods have natural parallel implementation; and finally, Monte Carlo methods allow one to perform continuous update of the PageRank as the structure of the Web changes.", "The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human-rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
Lim et al. @cite_9 introduce two models to measure the quality of articles from an online community like Wikipedia without interpreting their content. In the basic model, quality is derived from the authority of the article's contributors and the amount each of them contributes (in number of words). The peer review model extends the basic model with a review aspect of the article content: it gives higher quality to words that ``survive'' reviews.
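A simplified reading of the basic model (our illustrative interpretation of the idea, not the authors' exact formulation) is a mutual-reinforcement computation on the bipartite contributor-article graph: an article's quality is the contribution-weighted sum of its contributors' authorities, and a contributor's authority is the contribution-weighted sum of the qualities of the articles they worked on. The dictionary layout, normalization, and toy numbers below are assumptions of the sketch.

```python
# Illustrative mutual-reinforcement sketch for contributor authority and
# article quality (a simplified interpretation, not the cited paper's exact
# model).  contrib[c][a] is the number of words contributor c added to
# article a; the toy numbers are made up.
def quality_authority(contrib, iters=50):
    articles = {a for words in contrib.values() for a in words}
    authority = {c: 1.0 for c in contrib}
    quality = {a: 1.0 for a in articles}
    for _ in range(iters):
        # Article quality: contribution-weighted sum of contributor authority.
        quality = {a: sum(authority[c] * contrib[c].get(a, 0.0) for c in contrib)
                   for a in articles}
        # Contributor authority: contribution-weighted sum of article quality.
        authority = {c: sum(quality[a] * w for a, w in contrib[c].items())
                     for c in contrib}
        # Normalize both score vectors so the iteration stays bounded.
        qn = sum(quality.values()) or 1.0
        an = sum(authority.values()) or 1.0
        quality = {a: q / qn for a, q in quality.items()}
        authority = {c: v / an for c, v in authority.items()}
    return quality, authority


if __name__ == "__main__":
    contrib = {"alice": {"art1": 400, "art2": 50},
               "bob": {"art2": 300},
               "carol": {"art1": 20, "art3": 10}}
    q, auth = quality_authority(contrib)
    print(sorted(q.items(), key=lambda kv: -kv[1]))
```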
{ "cite_N": [ "@cite_9" ], "mid": [ "2098930119" ], "abstract": [ "Using open source Web editing software (e.g., wiki), online community users can now easily edit, review and publish articles collaboratively. While much useful knowledge can be derived from these articles, content users and critics are often concerned about their qualities. In this paper, we develop two models, namely basic model and peer review model, for measuring the qualities of these articles and the authorities of their contributors. We represent collaboratively edited articles and their contributors in a bipartite graph. While the basic model measures an article?s quality using both the authorities of contributors and the amount of contribution from each contributor, the peer review model extends the former by considering the review aspect of article content. We present results of experiments conducted on some Wikipedia pages and their contributors. Our result show that the two models can effectively determine the articles? qualities and contributors? authorities using the collaborative nature of online communities." ] }
0809.2851
1680755002
In previous research it has been shown that link-based web page metrics can be used to predict experts' assessment of quality. We are interested in a related question: do expert rankings of real-world entities correlate with search engine rankings of corresponding web resources? For example, each year US News & World Report publishes a list of (among others) top 50 graduate business schools. Does their expert ranking correlate with the search engine ranking of the URLs of those business schools? To answer this question we conducted 9 experiments using 8 expert rankings on a range of academic, athletic, financial and popular culture topics. We compared the expert rankings with the rankings in Google, Live Search (formerly MSN) and Yahoo (with list lengths of 10, 25, and 50). In 57 search engine vs. expert comparisons, only 1 strong and 4 moderate correlations were statistically significant. In 42 inter-search engine comparisons, only 2 strong and 4 moderate correlations were statistically significant. The correlations appeared to decrease with the size of the lists: the 3 strong correlations were for lists of 10, the 8 moderate correlations were for lists of 25, and no correlations were found for lists of 50.
An approach to automatically predicting information quality is given by @cite_13 . Analyzing news documents, they observe an association between users' quality scores and the occurrence and prevalence of certain textual features such as readability and grammar.
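A minimal sketch of the kind of feature-based predictor implied here (not the authors' actual system): compute a few surface proxies for readability and fit a least-squares model against human quality ratings. The specific features and the linear model are assumptions made purely for illustration.

```python
import re
import numpy as np

def surface_features(text):
    """Very rough readability proxies for a news document."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    long_word_ratio = sum(len(w) > 6 for w in words) / max(len(words), 1)
    return [avg_sentence_len, avg_word_len, long_word_ratio]

def fit_quality_model(documents, ratings):
    """Least-squares fit of user quality ratings on surface features."""
    X = np.array([surface_features(d) + [1.0] for d in documents])  # bias term
    y = np.array(ratings, dtype=float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_quality(coef, text):
    return float(np.dot(surface_features(text) + [1.0], coef))

if __name__ == "__main__":
    docs = ["Short choppy text. Bad grammar here. Very bad.",
            "This article presents a considerably more elaborate and readable "
            "discussion of the topic, with longer, well-formed sentences."]
    coef = fit_quality_model(docs, ratings=[2.0, 4.5])
    print(predict_quality(coef, "Another reasonably well written document."))
```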
{ "cite_N": [ "@cite_13" ], "mid": [ "2048933106" ], "abstract": [ "We report here empirical results of a series of studies aimed at automatically predicting information quality in news documents. Multiple research methods and data analysis techniques enabled a good level of machine prediction of information quality. Procedures regarding user experiments and statistical analysis are described." ] }
0809.0124
2951317363
Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.
One of the tasks in SemEval 2007 was the classification of semantic relations between nominals @cite_21 . The problem is to classify semantic relations between nouns and noun compounds in the context of a sentence. The task attracted 14 teams who created 15 systems, all of which used supervised machine learning with features that were lexicon-based, corpus-based, or both.
{ "cite_N": [ "@cite_21" ], "mid": [ "2152358231" ], "abstract": [ "The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems." ] }
0809.0257
2951806375
The 3- problem is also called the problem on 3-uniform hypergraphs. In this paper, we address kernelizations of the problem on 3-uniform hypergraphs. We show that this problem admits a linear kernel in three classes of 3-uniform hypergraphs. We also obtain lower and upper bounds on the kernel size for them by the parametric duality.
Buss @cite_7 has given a kernelization with kernel size @math for the problem on graphs by putting "high-degree" elements into the cover. Similar to Buss' reduction, Niedermeier and Rossmanith @cite_9 have proposed a cubic-size kernelization for the problem on 3-uniform hypergraphs.
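To make the high-degree rule concrete, the sketch below applies a Buss-style kernelization to vertex cover on ordinary graphs, the classical setting of Buss' rule (treating the elided problem name above as vertex cover is our assumption); the cited hypergraph variants follow the same pattern with degree thresholds adapted to 3-uniform hypergraphs. The code is our own illustration, not taken from the cited papers.

```python
def buss_kernelize(edges, k):
    """Buss-style kernelization for vertex cover with parameter k.

    Rule 1: a vertex of degree > k must be in any cover of size <= k; take it
            and decrease the budget.
    Rule 2: after Rule 1, a yes-instance has at most k*k edges left.
    Returns (reduced_edges, remaining_budget, forced_vertices), or None if the
    instance is a provable no-instance.
    """
    edges = {frozenset(e) for e in edges if len(set(e)) == 2}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if high:
            v = high[0]
            forced.add(v)
            edges = {e for e in edges if v not in e}
            k -= 1
            changed = True
    if k < 0 or len(edges) > k * k:
        return None  # no vertex cover of the requested size exists
    return edges, k, forced

if __name__ == "__main__":
    triangle_plus_star = [(0, 1), (1, 2), (0, 2)] + [(3, v) for v in range(4, 10)]
    print(buss_kernelize(triangle_plus_star, k=3))
```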
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2280799770", "2107410045" ], "abstract": [ "We introduce a novel kernel that upgrades the Weisfeiler-Lehman and other graph kernels to effectively exploit high-dimensional and continuous vertex attributes. Graphs are first decomposed into subgraphs. Vertices of the subgraphs are then compared by a kernel that combines the similarity of their labels and the similarity of their structural role, using a suitable vertex invariant. By changing this invariant we obtain a family of graph kernels which includes generalizations of Weisfeiler-Lehman, NSPDK, and propagation kernels. We demonstrate empirically that these kernels obtain state-of-the-art results on relational data sets.", "In this article, we propose fast subtree kernels on graphs. On graphs with n nodes and m edges and maximum degree d, these kernels comparing subtrees of height h can be computed in O(mh), whereas the classic subtree kernel by Ramon & Gartner scales as O(n24dh). Key to this efficiency is the observation that the Weisfeiler-Lehman test of isomorphism from graph theory elegantly computes a subtree kernel as a byproduct. Our fast subtree kernels can deal with labeled graphs, scale up easily to large graphs and outperform state-of-the-art graph kernels on several classification benchmark datasets in terms of accuracy and runtime." ] }
0809.0257
2951806375
The 3- problem is also called the problem on 3-uniform hypergraphs. In this paper, we address kernelizations of the problem on 3-uniform hypergraphs. We show that this problem admits a linear kernel in three classes of 3-uniform hypergraphs. We also obtain lower and upper bounds on the kernel size for them by the parametric duality.
Fellows et al. @cite_0 @cite_19 @cite_10 have introduced the crown reduction and obtained a @math -kernelization for the problem on graphs. Recently, Abu-Khzam @cite_6 has further reduced the kernel of the problem on 3-uniform hypergraphs to quadratic size by employing the crown reduction.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_10", "@cite_6" ], "mid": [ "1836501071", "2141946535", "2295840581", "2098322943" ], "abstract": [ "A kernelization algorithm for the 3-Hitting-Set problem is presented along with a general kernelization for d-Hitting-Set problems. For 3-Hitting-Set, a quadratic kernel is obtained by exploring properties of yes instances and employing what is known as crown reduction. Any 3-Hitting-Set instance is reduced into an equivalent instance that contains at most 5k2 + k elements (or vertices). This kernelization is an improvement over previously known methods that guarantee cubic-size kernels. Our method is used also to obtain a quadratic kernel for the Triangle Vertex Deletion problem. For a constant d ≥ 3, a kernelization of d-Hitting-Set is achieved by a generalization of the 3-Hitting-Set method, and guarantees a kernel whose order does not exceed (2d - 1)kd-1 + k.", "The two objectives of this paper are: (1) to articulate three new general techniques for designing FPT algorithms, and (2) to apply these to obtain new FPT algorithms for Set Splitting and Vertex Cover. In the case of Set Splitting, we improve the best previous ( O ^*(72^k) ) FPT algorithm due to Dehne, Fellows and Rosamond [DFR03], to ( O ^*(8^k) ) by an approach based on greedy localization in conjunction with modeled crown reduction. In the case of Vertex Cover, we describe a new approach to 2k kernelization based on iterative compression and crown reduction, providing a potentially useful alternative to the Nemhauser-Trotter 2k kernelization.", "We show an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth three circuits. In particular we show the following: 1. There exists an explicit n-variate polynomial computable by linear sized multilinear depth three circuits (with only two product gates) such that every ROABP computing it requires 2^ Omega(n) size. 2. Any multilinear depth three circuit computing IMM_ n,d (the iterated matrix multiplication polynomial formed by multiplying d, n * n symbolic matrices) has n^ Omega(d) size. IMM_ n,d can be easily computed by a poly(n,d) sized ROABP. 3. Further, the proof of 2 yields an exponential separation between multilinear depth four and multilinear depth three circuits: There is an explicit n-variate, degree d polynomial computable by a poly(n,d) sized multilinear depth four circuit such that any multilinear depth three circuit computing it has size n^ Omega(d) . This improves upon the quasi-polynomial separation result by Raz and Yehudayoff [2009] between these two models. The hard polynomial in 1 is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure used previously in Nisan [1991], Raz [2006,2009], Raz and Yehudayoff [2009], and Forbes and Shpilka [2013], while 2 is proved via a new adaptation of the dimension of the partial derivatives measure used by Nisan and Wigderson [1997]. Our lower bounds hold over any field.", "We prove that any graph excluding Kr as a minor has can be partitioned into clusters of diameter at most Δ while removing at most O(r Δ) fraction of the edges. This improves over the results of Fakcharoenphol and Talwar, who building on the work of Klein, Plotkin and Rao gave a partitioning that required to remove O(r2 Δ) fraction of the edges. 
Our result is obtained by a new approach that relates the topological properties (excluding a minor) of a graph to its geometric properties (the induced shortest path metric). Specifically, we show that techniques used by Andreae in his investigation of the cops and robbers game on graphs excluding a fixed minor, can be used to construct padded decompositions of the metrics induced by such graphs. In particular, we get probabilistic partitions with padding parameter O(r) and strong-diameter partitions with padding parameter O(r2) for Kr-free graphs, O(k) for treewidth-k graphs, and O(log g) for graphs with genus g." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
Although FIOs play an important role in the analysis and computation of linear hyperbolic problems, the literature on fast computations of FIOs is surprisingly limited. The only work addressing the computation of FIOs in this general form is the article @cite_6 by the authors of the current paper. The operative feature in @cite_6 is an angular partitioning of the frequency domain into @math wedges, each with an opening angle equal to @math . When restricting the input to such a wedge, one can then factor the operator into a product of two simpler operators. The first operator is provably approximately low-rank (and lends itself to efficient computations), whereas the second one is a nonuniform Fourier transform which can be computed rapidly using the nonuniform fast Fourier transform (NFFT) @cite_31 @cite_29 @cite_35 . The resulting algorithm has an @math complexity.
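As a small illustration of the first step of that approach, the sketch below bins 2D frequency samples into roughly 2*pi*sqrt(n) angular wedges, each with opening angle about 1/sqrt(n); the per-wedge low-rank factorization and the NFFT evaluation are not shown. The exact constants and the grid size are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

def partition_into_wedges(freqs, n):
    """Assign 2D frequency points to about sqrt(n) angular wedges.

    freqs: iterable of (k1, k2) frequency samples (excluding the origin).
    n:     linear grid size; each wedge has opening angle
           2*pi / ceil(2*pi*sqrt(n)), i.e. roughly 1/sqrt(n).
    Returns a dict {wedge_index: [points...]}.
    """
    num_wedges = max(1, math.ceil(2 * math.pi * math.sqrt(n)))
    wedges = defaultdict(list)
    for k1, k2 in freqs:
        angle = math.atan2(k2, k1) % (2 * math.pi)
        idx = min(int(angle / (2 * math.pi) * num_wedges), num_wedges - 1)
        wedges[idx].append((k1, k2))
    return wedges

if __name__ == "__main__":
    n = 64
    freqs = [(k1, k2) for k1 in range(-n // 2, n // 2)
                      for k2 in range(-n // 2, n // 2) if (k1, k2) != (0, 0)]
    wedges = partition_into_wedges(freqs, n)
    print(len(wedges), "wedges; largest holds", max(map(len, wedges.values())), "samples")
```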
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_29", "@cite_6" ], "mid": [ "2063399239", "2010122118", "2012300893", "2099944953" ], "abstract": [ "We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation, general hyperbolic equations, and curvilinear tomography. The problem is to numerically evaluate a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude, and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory. Because a FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations and as low as @math in storage space (the constants in front of these estimates are small). It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: (1) a diffeomorphism which is handled by means of a nonuniform FFT and (2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is the fact that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the numerical accuracy and low computational complexity of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology.", "A group of algorithms is presented generalizing the fast Fourier transform to the case of noninteger frequencies and nonequispaced nodes on the interval @math . The schemes of this paper are based on a combination of certain analytical considerations with the classical fast Fourier transform and generalize both the forward and backward FFTs. Each of the algorithms requires @math arithmetic operations, where @math is the precision of computations and N is the number of nodes. The efficiency of the approach is illustrated by several numerical examples.", "The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N log N ) operations rather than O(N 2 ) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid (A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368-1383). In this paper, we observe that one of the standard interpolation or \"gridding\" schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. 
This is of particular value in two- and three- dimensional settings, saving either 10 d N in storage in d dimensions or a factor of about 5-10 in CPUtime (independent of dimension).", "The fast Fourier transform (FFT) is used widely in signal processing for efficient computation of the FT of finite-length signals over a set of uniformly spaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e., a nonuniform FT. Several papers have described fast approximations for the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm. The proposed method easily generalizes to multidimensional signals. Numerical results show that the min-max approach provides substantially lower approximation errors than conventional interpolation methods. The min-max criterion is also useful for optimizing the parameters of interpolation kernels such as the Kaiser-Bessel function." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
In a different direction, there has been a great amount of research on other types of oscillatory integral transforms. An important example is the discrete @math -body problem where one wants to evaluate sums of the form \[ \sum_{1 \le j \le n} q_j\, K(|x - x_j|), \qquad K(r) = \frac{e^{\mathrm{i}\omega r}}{r}, \] in the high-frequency regime ( @math is large). Such problems appear naturally when solving the Helmholtz equation by means of a boundary integral formulation @cite_1 @cite_17 . A popular approach seeks to compress the oscillatory integral operator by representing it in an appropriate basis such as a local Fourier basis, or a basis extracted from the wavelet packet dictionary @cite_36 @cite_12 @cite_13 @cite_25 . This representation sparsifies the operator, thus allowing fast matrix-vector products. In spite of having good theoretical estimates, this approach has thus far been practically limited to 1D boundaries. One particular issue with this approach is that the evaluation of the remaining nonnegligible coefficients sometimes requires assembling the entire matrix, which can be computationally rather expensive.
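For reference, the direct evaluation of the sum above costs O(n^2) operations for n sources and n targets; it is exactly this brute-force computation that the fast methods discussed in this section are designed to avoid. The sketch below is a generic illustration (the frequency, point sets, and charges are arbitrary example inputs, not data from the cited works).

```python
import numpy as np

def direct_helmholtz_sum(targets, sources, charges, omega):
    """Brute-force u(x) = sum_j q_j * exp(i*omega*|x - x_j|) / |x - x_j|.

    targets: (m, d) array of evaluation points x.
    sources: (n, d) array of source points x_j (assumed distinct from targets).
    charges: (n,) complex array of charges q_j.
    omega:   frequency; the sum is highly oscillatory when omega is large.
    """
    u = np.zeros(len(targets), dtype=complex)
    for i, x in enumerate(targets):
        r = np.linalg.norm(sources - x, axis=1)      # distances |x - x_j|
        u[i] = np.sum(charges * np.exp(1j * omega * r) / r)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.random((200, 2))
    tgt = rng.random((50, 2)) + 2.0   # keep targets away from the sources
    q = rng.standard_normal(200) + 1j * rng.standard_normal(200)
    print(direct_helmholtz_sum(tgt, src, q, omega=100.0)[:3])
```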
{ "cite_N": [ "@cite_36", "@cite_1", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2038870672", "2083690159", "2113502771", "2063399239", "2127600768", "2164119610" ], "abstract": [ "In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.", "We present a novel boundary integral formulation of the Helmholtz transmission problem for bounded composite scatterers (that is, piecewise constant material parameters in \"subdomains\") that directly lends itself to operator preconditioning via Calderon projectors. The method relies on local traces on subdomains and weak enforcement of transmission conditions. The variational formulation is set in Cartesian products of standard Dirichlet and special Neumann trace spaces for which restriction and extension by zero are well defined. In particular, the Neumann trace spaces over each subdomain boundary are built as piecewise @math -distributions over each associated interface. Through the use of interior Calderon projectors, the problem is cast in variational Galerkin form with an operator matrix whose diagonal is composed of block boundary integral operators associated with the subdomains. We show existence and uniqueness of solutions based on an extension of Lions' projection lemma for non-closed subspaces. We also investigate asymptotic quasi-optimality of conforming boundary element Galerkin discretization. Numerical experiments in 2-D confirm the efficacy of the method and a performance matching that of another widely used boundary element discretization. They also demonstrate its amenability to different types of preconditioning.", "SUMMARY In this paper, we address the solution of three-dimensional heterogeneous Helmholtz problems discretized with second-order finite difference methods with application to acoustic waveform inversion in geophysics. In this setting, the numerical simulation of wave propagation phenomena requires the approximate solution of possibly very large indefinite linear systems of equations. For that purpose, we propose and analyse an iterative two-grid method acting on the Helmholtz operator where the coarse grid problem is solved inaccurately. A cycle of a multigrid method applied to a complex shifted Laplacian operator is used as a preconditioner for the approximate solution of this coarse problem. 
A single cycle of the new method is then used as a variable preconditioner of a flexible Krylov subspace method. We analyse the properties of the resulting preconditioned operator by Fourier analysis. Numerical results demonstrate the effectiveness of the algorithm on three-dimensional applications. The proposed numerical method allows us to solve three-dimensional wave propagation problems even at high frequencies on a reasonable number of cores of a distributed memory computer. Copyright © 2012 John Wiley & Sons, Ltd.", "We introduce a general purpose algorithm for rapidly computing certain types of oscillatory integrals which frequently arise in problems connected to wave propagation, general hyperbolic equations, and curvilinear tomography. The problem is to numerically evaluate a so-called Fourier integral operator (FIO) of the form @math at points given on a Cartesian grid. Here, @math is a frequency variable, @math is the Fourier transform of the input @math , @math is an amplitude, and @math is a phase function, which is typically as large as @math ; hence the integral is highly oscillatory. Because a FIO is a dense matrix, a naive matrix vector product with an input given on a Cartesian grid of size @math by @math would require @math operations. This paper develops a new numerical algorithm which requires @math operations and as low as @math in storage space (the constants in front of these estimates are small). It operates by localizing the integral over polar wedges with small angular aperture in the frequency plane. On each wedge, the algorithm factorizes the kernel @math into two components: (1) a diffeomorphism which is handled by means of a nonuniform FFT and (2) a residual factor which is handled by numerical separation of the spatial and frequency variables. The key to the complexity and accuracy estimates is the fact that the separation rank of the residual kernel is provably independent of the problem size. Several numerical examples demonstrate the numerical accuracy and low computational complexity of the proposed methodology. We also discuss the potential of our ideas for various applications such as reflection seismology.", "We are concerned with a finite element approximation for time-harmonic wave propagation governed by the Helmholtz equation. The usually oscillatory behavior of solutions, along with numerical dispersion, render standard finite element methods grossly inefficient already in medium-frequency regimes. As an alternative, methods that incorporate information about the solution in the form of plane waves have been proposed. We focus on a class of Trefftz-type discontinuous Galerkin methods that employs trial and test spaces spanned by local plane waves. In this paper we give a priori convergence estimates for the h -version of these plane wave discontinuous Galerkin methods in two dimensions. To that end, we develop new inverse and approximation estimates for plane waves and use these in the context of duality techniques. Asymptotic optimality of the method in a mesh dependent norm can be established. However, the estimates require a minimal resolution of the mesh beyond what it takes to resolve the wavelength. We give numerical evidence that this requirement cannot be dispensed with. It reflects the presence of numerical dispersion.", "The Helmholtz equation governing wave propagation and scattering phenomena is difficult to solve numerically. 
Its discretization with piecewise linear finite elements results in typically large linear systems of equations. The inherently parallel domain decomposition methods constitute hence a promising class of preconditioners. An essential element of these methods is a good coarse space. Here, the Helmholtz equation presents a particular challenge, as even slight deviations from the optimal choice can be devastating. In this paper, we present a coarse space that is based on local eigenproblems involving the Dirichlet-to-Neumann operator. Our construction is completely automatic, ensuring good convergence rates without the need for parameter tuning. Moreover, it naturally respects local variations in the wave number and is hence suited also for heterogeneous Helmholtz problems. The resulting method is parallel by design and its efficiency is demonstrated on 2D homogeneous and heterogeneous numerical examples." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
To the best of our knowledge, the most successful method for the Helmholtz kernel @math -body problem in both 2D and 3D is the high-frequency fast multipole method (HF-FMM) proposed by Rokhlin and his collaborators in a series of papers @cite_4 @cite_18 @cite_28 . This approach combines the analytic properties of the Helmholtz kernel with an FFT-type fast algorithm to speed up the computation of the interactions between well-separated regions. If @math is the number of input and output points as before, the resulting algorithm has an @math computational complexity. Other algorithms using similar techniques can be found in @cite_21 @cite_34 @cite_38 @cite_0 .
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_28", "@cite_21", "@cite_0", "@cite_34" ], "mid": [ "1621423264", "1963750172", "1968359830", "2117762537", "2084385130", "2117926105", "2129152507" ], "abstract": [ "This paper introduces a directional multiscale algorithm for the two dimensional @math -body problem of the Helmholtz kernel with applications to high frequency scattering. The algorithm follows the approach in [Engquist and Ying, SIAM Journal on Scientific Computing, 29 (4), 2007] where the three dimensional case was studied. The main observation is that, for two regions that follow a directional parabolic geometric configuration, the interaction between the points in these two regions through the Helmholtz kernel is approximately low rank. We propose an improved randomized procedure for generating the low rank representations. Based on these representations, we organize the computation of the far field interaction in a multidirectional and multiscale way to achieve maximum efficiency. The proposed algorithm is accurate and has the optimal @math complexity for problems from two dimensional scattering applications. We present numerical results for several test examples to illustrate the algorithm and its application to two dimensional high frequency scattering problems.", "We describe a wideband version of the Fast Multipole Method for the Helmholtz equation in three dimensions. It unifies previously existing versions of the FMM for high and low frequencies into an algorithm which is accurate and efficient for any frequency, having a CPU time of O(N) if low-frequency computations dominate, or O(NlogN) if high-frequency computations dominate. The performance of the algorithm is illustrated with numerical examples.", "The solution of Helmholtz and Maxwell equations by integral formulations (kernel in exp( i kr) r ) leads to large dense linear systems. Using direct solvers requires large computational costs in O(N 3 ) . Using iterative solvers, the computational cost is reduced to large matrix–vector products. The fast multipole method provides a fast numerical way to compute convolution integrals. Its application to Maxwell and Helmholtz equations was initiated by Rokhlin, based on a multipole expansion of the interaction kernel. A second version, proposed by Chew, is based on a plane–wave expansion of the kernel. We propose a third approach, the stable–plane–wave expansion, which has a lower computational expense than the multipole expansion and does not have the accuracy and stability problems of the plane–wave expansion. The computational complexity is N log N as with the other methods.", "This paper introduces a new directional multilevel algorithm for solving @math -body or @math -point problems with highly oscillatory kernels. These systems often result from the boundary integral formulations of scattering problems and are difficult due to the oscillatory nature of the kernel and the non-uniformity of the particle distribution. We address the problem by first proving that the interaction between a ball of radius @math and a well-separated region has an approximate low rank representation, as long as the well-separated region belongs to a cone with a spanning angle of @math and is at a distance which is at least @math away from from the ball. We then propose an efficient and accurate procedure which utilizes random sampling to generate such a separated, low rank representation. 
Based on the resulting representations, our new algorithm organizes the high frequency far field computation by a multidirectional and multiscale strategy to achieve maximum efficiency. The algorithm performs well on a large group of highly oscillatory kernels. Our algorithm is proved to have @math computational complexity for any given accuracy when the points are sampled from a two dimensional surface. We also provide numerical results to demonstrate these properties.", "The fast multipole method (FMM) has been implemented to speed up the matrix-vector multiply when an iterative method is used to solve the combined field integral equation (CFIE). FMM reduces the complexity from O(N2) to O(N1.5). With a multilevel fast multipole algorithm (MLFMA), it is further reduced to O(N log N). A 110, 592-unknown problem can be solved within 24 h on a SUN Sparc 10. © 1995 John Wiley & Sons, Inc.", "We present a new fast multipole method for particle simulations. The main feature of our algorithm is that it does not require the implementation of multipole expansions of the underlying kernel, and it is based only on kernel evaluations. Instead of using analytic expansions to represent the potential generated by sources inside a box of the hierarchical FMM tree, we use a continuous distribution of an equivalent density on a surface enclosing the box. To find this equivalent density, we match its potential to the potential of the original sources at a surface, in the far field, by solving local Dirichlet-type boundary value problems. The far-field evaluations are sparsified with singular value decomposition in 2D or fast Fourier transforms in 3D. We have tested the new method on the single and double layer operators for the Laplacian, the modified Laplacian, the Stokes, the modified Stokes, the Navier, and the modified Navier operators in two and three dimensions. Our numerical results indicate that our method compares very well with the best known implementations of the analytic FMM method for both the Laplacian and modified Laplacian kernels. Its advantage is the (relative) simplicity of the implementation and its immediate extension to more general kernels.", "Abstract This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop s on gpu hardware using single precision. The simulations use a vortex particle method to solve the Navier–Stokes equations, with a highly parallel fast multipole method ( fmm ) as numerical engine, and match the current record in mesh size for this application, a cube of 4096 3 computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the fft algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the fmm -based vortex method achieving 74 parallel efficiency on 4096 processes (one gpu per mpi process, 3 gpu s per node of the tsubame -2.0 system). The fft -based spectral method is able to achieve just 14 parallel efficiency on the same number of mpi processes (using only cpu cores), due to the all-to-all communication pattern of the fft algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. 
Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date." ] }
0809.0719
2953075422
This paper is concerned with the fast computation of Fourier integral operators of the general form @math , where @math is a frequency variable, @math is a phase function obeying a standard homogeneity condition, and @math is a given input. This is of interest for such fundamental computations are connected with the problem of finding numerical solutions to wave equations, and also frequently arise in many applications including reflection seismology, curvilinear tomography and others. In two dimensions, when the input and output are sampled on @math Cartesian grids, a direct evaluation requires @math operations, which is often times prohibitively expensive. This paper introduces a novel algorithm running in @math time, i. e. with near-optimal computational complexity, and whose overall structure follows that of the butterfly algorithm [Michielssen and Boag, IEEE Trans Antennas Propagat 44 (1996), 1086-1093]. Underlying this algorithm is a mathematical insight concerning the restriction of the kernel @math to subsets of the time and frequency domains. Whenever these subsets obey a simple geometric condition, the restricted kernel has approximately low-rank; we propose constructing such low-rank approximations using a special interpolation scheme, which prefactors the oscillatory component, interpolates the remaining nonoscillatory part and, lastly, remodulates the outcome. A byproduct of this scheme is that the whole algorithm is highly efficient in terms of memory requirement. Numerical results demonstrate the performance and illustrate the empirical properties of this algorithm.
Finally, the idea of butterfly computations has been applied to the @math -body problem in several ways. The original paper of Michielssen and Boag @cite_19 used this technique to accelerate the computation of the oscillatory interactions between well-separated regions. More recently, Engquist and Ying @cite_7 @cite_23 proposed a multidirectional solution to this problem, where part of the algorithm can be viewed as a butterfly computation between specially selected spatial subdomains.
{ "cite_N": [ "@cite_19", "@cite_23", "@cite_7" ], "mid": [ "2117762537", "2052952091", "1621423264" ], "abstract": [ "This paper introduces a new directional multilevel algorithm for solving @math -body or @math -point problems with highly oscillatory kernels. These systems often result from the boundary integral formulations of scattering problems and are difficult due to the oscillatory nature of the kernel and the non-uniformity of the particle distribution. We address the problem by first proving that the interaction between a ball of radius @math and a well-separated region has an approximate low rank representation, as long as the well-separated region belongs to a cone with a spanning angle of @math and is at a distance which is at least @math away from from the ball. We then propose an efficient and accurate procedure which utilizes random sampling to generate such a separated, low rank representation. Based on the resulting representations, our new algorithm organizes the high frequency far field computation by a multidirectional and multiscale strategy to achieve maximum efficiency. The algorithm performs well on a large group of highly oscillatory kernels. Our algorithm is proved to have @math computational complexity for any given accuracy when the points are sampled from a two dimensional surface. We also provide numerical results to demonstrate these properties.", "Over the past decade, physicists have developed deep but non-rigorous techniques for studying phase transitions in discrete structures. Recently, their ideas have been harnessed to obtain improved rigorous results on the phase transitions in binary problems such as random @math -SAT or @math -NAESAT (e.g., Coja-Oghlan and Panagiotou: STOC 2013). However, these rigorous arguments, typically centered around the second moment method, do not extend easily to problems where there are more than two possible values per variable. The single most intensely studied example of such a problem is random graph @math -coloring. Here we develop a novel approach to the second moment method in this problem. This new method, inspired by physics conjectures on the geometry of the set of @math -colorings, allows us to establish a substantially improved lower bound on the @math -colorability threshold. The new lower bound is within an additive @math of a simple first-moment upper bound and within @math of the physics conjecture. By comparison, the best previous lower bound left a gap of about @math , unbounded in terms of the number of colors [Achlioptas, Naor: STOC 2004].", "This paper introduces a directional multiscale algorithm for the two dimensional @math -body problem of the Helmholtz kernel with applications to high frequency scattering. The algorithm follows the approach in [Engquist and Ying, SIAM Journal on Scientific Computing, 29 (4), 2007] where the three dimensional case was studied. The main observation is that, for two regions that follow a directional parabolic geometric configuration, the interaction between the points in these two regions through the Helmholtz kernel is approximately low rank. We propose an improved randomized procedure for generating the low rank representations. Based on these representations, we organize the computation of the far field interaction in a multidirectional and multiscale way to achieve maximum efficiency. The proposed algorithm is accurate and has the optimal @math complexity for problems from two dimensional scattering applications. 
We present numerical results for several test examples to illustrate the algorithm and its application to two dimensional high frequency scattering problems." ] }
0808.3971
2949251435
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
Intercell scheduling, where neighboring BTSs cooperatively schedule their transmissions, is a practical strategy to reduce interference, as in each time slot only one BTS in each cluster is transmitting and it only requires message exchange comparable to that for handoff. In @cite_21 , it was shown that one major advantage of intercell scheduling compared with conventional frequency reuse is the expanded multiuser diversity gain. The interference reduction comes at the expense of a transmission duty cycle, however, and it does not make full use of the available spatial degrees of freedom.
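A toy sketch of the intercell scheduling idea (illustrative only, not the scheme analyzed in @cite_21 ): BTSs within each cluster take turns in round-robin fashion, so intra-cluster interference is avoided at the cost of a per-BTS duty cycle of 1/C for a cluster of size C.

```python
def round_robin_intercell_schedule(clusters, num_slots):
    """Return, for each time slot, the set of BTSs allowed to transmit.

    clusters: list of lists, each inner list holding the BTS ids of one cluster.
    Only one BTS per cluster is active in any slot, so the per-BTS duty cycle
    is 1/len(cluster): interference inside a cluster is avoided at the cost of
    this duty cycle.
    """
    schedule = []
    for t in range(num_slots):
        active = {cluster[t % len(cluster)] for cluster in clusters}
        schedule.append(active)
    return schedule

if __name__ == "__main__":
    clusters = [[0, 1, 2], [3, 4, 5, 6]]
    for t, active in enumerate(round_robin_intercell_schedule(clusters, 6)):
        print(f"slot {t}: active BTSs {sorted(active)}")
```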
{ "cite_N": [ "@cite_21" ], "mid": [ "2108662631" ], "abstract": [ "A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination-to enhance the sum rate-and limited inter-cluster coordination-to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks." ] }
0808.3971
2949251435
A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination to enhance the sum rate and limited inter-cluster coordination to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.
Recently, BTS coordination has been proposed as an effective technique to mitigate interference in the downlink of multi-cell networks @cite_15 . By sharing information across BTSs and designing downlink signals cooperatively, signals from other cells may be used to assist the transmission instead of acting as interference, and the available degrees of freedom are fully utilized. In @cite_28 , BTS coordination with DPC was first proposed with single-antenna transmitters and receivers in each cell. BTS coordination in a downlink multi-cell MIMO network was studied in @cite_9 , with a per-BTS power constraint and various joint transmission schemes. The maximum achievable common rate in a coordinated network, with zero-forcing (ZF) and DPC, was studied in @cite_11 @cite_31 , which demonstrated a significant gain over conventional single-BTS transmission. With simplified network models, analytical results were derived for multi-cell ZF beamforming in @cite_34 and for various coordination strategies with grouped cell-interior and cell-edge users in @cite_41 . Studies considering practical issues such as limited-capacity backhaul and asynchronous interference can be found in @cite_42 @cite_43 @cite_4 @cite_32 .
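To give a flavor of coordinated linear precoding, the following minimal sketch stacks the antennas of several coordinated BTSs into one aggregate transmitter and applies a zero-forcing precoder (channel pseudo-inverse) under a total-power normalization. Per-BTS power constraints and block diagonalization, which the paper itself addresses, are deliberately omitted; everything here is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

def zero_forcing_precoder(H, total_power):
    """Zero-forcing precoder for an aggregate coordinated transmitter.

    H: (K, M) channel matrix for K single-antenna users and M total antennas
       across the coordinated BTSs (requires K <= M and full row rank).
    Returns W (M, K) such that H @ W is diagonal, scaled to meet a *sum*
    power constraint; per-BTS power constraints would need extra work.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # right pseudo-inverse
    scale = np.sqrt(total_power / np.trace(W @ W.conj().T).real)
    return scale * W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K, M = 4, 8   # e.g. 2 coordinated BTSs with 4 antennas each, 4 users
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
    W = zero_forcing_precoder(H, total_power=1.0)
    print(np.round(np.abs(H @ W), 3))   # ~diagonal: inter-user interference nulled
```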
{ "cite_N": [ "@cite_31", "@cite_4", "@cite_28", "@cite_41", "@cite_9", "@cite_42", "@cite_32", "@cite_43", "@cite_15", "@cite_34", "@cite_11" ], "mid": [ "2108662631", "2135096483", "2094481065", "2767375015", "2061634004", "2131822606", "2023633390", "2015301486", "1519695362", "2079587612", "2047468088" ], "abstract": [ "A clustered base transceiver station (BTS) coordination strategy is proposed for a large cellular MIMO network, which includes full intra-cluster coordination-to enhance the sum rate-and limited inter-cluster coordination-to reduce interference for the cluster edge users. Multi-cell block diagonalization is used to coordinate the transmissions across multiple BTSs in the same cluster. To satisfy per-BTS power constraints, three combined precoder and power allocation algorithms are proposed with different performance and complexity tradeoffs. For inter-cluster coordination, the coordination area is chosen to balance fairness for edge users and the achievable sum rate. It is shown that a small cluster size (about 7 cells) is sufficient to obtain most of the sum rate benefits from clustered coordination while greatly relieving channel feedback requirement. Simulations show that the proposed coordination strategy efficiently reduces interference and provides a considerable sum rate gain for cellular MIMO networks.", "For a multiple-input single-output (MISO) down- link channel with M transmit antennas, it has been recently proved that zero-forcing beamforming (ZFBF) to a subset of (at most) M \"semi-orthogonal\" users is optimal in terms of the sum-rate, asymptotically with the number of users. However, determining the subset of users for transmission is a complex optimization problem. Adopting the ZFBF scheme in a cooper- ative multi-cell scenario renders the selection process even more difficult since more users are involved. In this paper, we consider a multi-cell cooperative ZFBF scheme combined with a simple sub-optimal users selection procedure for the Wyner downlink channel setup. According to this sub-optimal procedure, the user with the \"best\" local channel is selected for transmission in each cell. It is shown that under an overall power constraint, a distributed multi-cell ZFBF to this sub-optimal subset of users achieves the same sum-rate growth rate as an optimal scheme deploying joint multi-cell dirty-paper coding (DPC) techniques, asymptotically with the number of users per cell. Moreover, the overall power constraint is shown to ensure in probability, equal per-cell power constraints when the number of users per-cell increases.", "We investigate the downlink throughput of cellular systems where groups of M antennas - either co-located or spatially distributed - transmit to a subset of a total population of K > M users in a coherent, coordinated fashion in order to mitigate intercell interference. We consider two types of coordination: the capacity-achieving technique based on dirty paper coding (DPC), and a simpler technique based on zero-forcing (ZF) beamforming with per-antenna power constraints. During a given frame, a scheduler chooses the subset of the K users in order to maximize the weighted sum rate, where the weights are based on the proportional-fair scheduling algorithm. We consider the weighted average sum throughput among K users per cell in a multi-cell network where coordination is limited to a neighborhood of M antennas. 
Consequently, the performance of both systems is limited by interference from antennas that are outside of the M coordinated antennas. Compared to a 12-sector baseline which uses the same number of antennas per cell site, the throughput of ZF and DPC achieve respective gains of 1.5 and 1.75.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low SNR regime, unlike the existing schemes based on zero forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several base-line schemes including, the joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.", "In small cell networks (SCNs) co-channel interference is an important issue, and necessitates the use of interference mitigation strategies that allocate resources efficiently. This work discusses a distributed utility-based algorithm for downlink resource allocation (i.e., power and scheduling weights per carrier) in multicarrier SCNs. The proposed distributed downlink resource allocation (DDRA) algorithm aims to maximize the sum utility of the whole system. To achieve this goal, each base station (BS) selects the resource allocation strategy to maximize a surplus function comprising both, own cell utility and interference prices (that reflect the interference that is caused to neighboring cells). Two different utility functions are considered: max-rate and proportional fair-rate. For performance evaluation, a SCN deployed in a single story WINNER office building is considered. Simulation results show that the proposed algorithm is effective in enhancing not only the sum data rate of a SCN, but also the degree of fairness in resource sharing among users.", "In a cooperative multiple-antenna downlink cellular network, maximization of a concave function of user rates is considered. A new linear precoding technique called soft interference nulling (SIN) is proposed, which performs at least as well as zero-forcing (ZF) beamforming. All base stations share channel state information, but each user's message is only routed to those that participate in the user's coordination cluster. SIN precoding is particularly useful when clusters of limited sizes overlap in the network, in which case traditional techniques such as dirty paper coding or ZF do not directly apply. The SIN precoder is computed by solving a sequence of convex optimization problems. SIN under partial network coordination can outperform ZF under full network coordination at moderate SNRs. 
Under overlapping coordination clusters, SIN precoding achieves considerably higher throughput compared to myopic ZF, especially when the clusters are large.", "We consider a heterogeneous cellular network (HetNet) where a macrocell tier with a large antenna array base station (BS) is overlaid with a dense tier of small cells (SCs). We investigate the potential benefits of incorporating a massive MIMO BS in a TDD-based HetNet and we provide analytical expressions for the coverage probability and the area spectral efficiency using stochastic geometry. The duplexing mode in which SCs should operate during uplink macrocell transmissions is optimized. Furthermore, we consider a reverse TDD scheme, in which the massive MIMO BS can estimate the SC interference covariance matrix. Our results suggest that significant throughput improvement can be achieved by exploiting interference nulling and implicit coordination across the tiers due to flexible and asymmetric TDD operation.", "We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional “effective” channel matrix. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is approached in the large number of antennas limit. For this case, we use Szego's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) number of antennas and users.", "We develop an interference alignment (IA) technique for a downlink cellular system. In the uplink, IA schemes need channel-state-information exchange across base-stations of different cells, but our downlink IA technique requires feedback only within a cell. As a result, the proposed scheme can be implemented with a few changes to an existing cellular system where the feedback mechanism (within a cell) is already being considered for supporting multi-user MIMO. 
Not only is our proposed scheme implementable with little effort, it can in fact provide substantial gain especially when interference from a dominant interferer is significantly stronger than the remaining interference: it is shown that in the two-isolated cell layout, our scheme provides four-fold gain in throughput performance over a standard multi-user MIMO technique. We also show through simulations that our technique provides respectable gain under a more realistic scenario: it gives approximately 28 gain for a 19 hexagonal wrap-around-cell layout. Furthermore, we show that our scheme has the potential to provide substantial gain for macro-pico cellular networks where pico-users can be significantly interfered with by the nearby macro-BS.", "We consider the interference management problem in a multicell MIMO heterogeneous network. Within each cell there is a large number of distributed micro pico base stations (BSs) that can be potentially coordinated for joint transmission. To reduce coordination overhead, we consider user-centric BS clustering so that each user is served by only a small number of (potentially overlapping) BSs. Thus, given the channel state information, our objective is to jointly design the BS clustering and the linear beamformers for all BSs in the network. In this paper, we formulate this problem from a sparse optimization perspective, and propose an efficient algorithm that is based on iteratively solving a sequence of group LASSO problems. A novel feature of the proposed algorithm is that it performs BS clustering and beamformer design jointly rather than separately as is done in the existing approaches for partial coordinated transmission. Moreover, the cluster size can be controlled by adjusting a single penalty parameter in the nonsmooth regularized utility function. The convergence of the proposed algorithm (to a stationary solution) is guaranteed, and its effectiveness is demonstrated via extensive simulation.", "Large multiple-input multiple-output (MIMO) networks promise high energy efficiency, i.e., much less power is required to achieve the same capacity compared to the conventional MIMO networks if perfect channel state information (CSI) is available at the transmitter. However, in such networks, huge overhead is required to obtain full CSI especially for Frequency-Division Duplex (FDD) systems. To reduce overhead, we propose a downlink antenna selection scheme, which selects S antennas from M > S transmit antennas based on the large scale fading to serve K ≤ S users in large distributed MIMO networks employing regularized zero-forcing (RZF) precoding. In particular, we study the joint optimization of antenna selection, regularization factor, and power allocation to maximize the average weighted sum-rate. This is a mixed combinatorial and non-convex problem whose objective and constraints have no closed-form expressions. We apply random matrix theory to derive asymptotically accurate expressions for the objective and constraints. As such, the joint optimization problem is decomposed into subproblems, each of which is solved by an efficient algorithm. In addition, we derive structural solutions for some special cases and show that the capacity of very large distributed MIMO networks scales as O(KlogM) when M→∞ with K, S fixed. Simulations show that the proposed scheme achieves significant performance gain over various baselines." ] }