aid: string (9–15 chars)
mid: string (7–10 chars)
abstract: string (78–2.56k chars)
related_work: string (92–1.77k chars)
ref_abstract: dict
0804.1696
1667425162
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms for composing aspects allow invasiveness as a means to integrate concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such languages are interesting because they allow complex operations to be performed and functionalities to be introduced more effectively. In this report we present a classification of invasive patterns in AOP. This classification characterizes the invasive behavior of aspects and allows developers to reason abstractly about the impact of aspects on the programs they crosscut.
In @cite_5 , categories of direct and indirect interactions between aspects and methods are identified. A direct interaction occurs when an advice interferes with the execution of a method, whereas an indirect interaction occurs when advices and methods may read or write the same fields. This classification is similar to ours; however, it addresses a different dimension: we identify invasiveness patterns rather than direct or indirect interactions. Katz @cite_3 recognizes that aspects can be harmful to the base code and argues for the need for specifications in aspect-oriented applications. Our approach agrees with his ideas, and we likewise propose a means to write such specifications. Furthermore, he describes three groups of advices according to their properties: those that do not influence the underlying computation, those that change the control flow but do not affect existing fields, and those that affect existing fields. This classification is similar to ours, but our characterization is finer grained: the first two groups correspond to our behavioral classification and the last to our data access classification.
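To make these behavioral groups concrete, here is a minimal, purely illustrative Python sketch that uses decorators as a stand-in for advice (the surveyed systems target AspectJ-like languages, and all names below are hypothetical): advice that only observes, advice that can divert control flow, and advice that writes existing fields.

import functools

# Toy stand-ins for the three groups of advices discussed above (illustration
# only; real AOP systems weave advice at the language level).
def observing_advice(log):
    """Advice that only observes: no effect on control flow or fields."""
    def deco(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            log.append((method.__name__, args))      # observe only
            return method(self, *args, **kwargs)     # always proceed unchanged
        return wrapper
    return deco

def control_flow_advice(guard):
    """Advice that may change control flow (skip the method) but writes no fields."""
    def deco(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            if not guard(self):
                return None                          # divert control flow
            return method(self, *args, **kwargs)
        return wrapper
    return deco

def field_writing_advice(field, value):
    """Advice that writes an existing field of the advised object."""
    def deco(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            setattr(self, field, value)              # mutate base-program state
            return method(self, *args, **kwargs)
        return wrapper
    return deco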
{ "cite_N": [ "@cite_5", "@cite_3" ], "mid": [ "2149612550", "2099640060" ], "abstract": [ "We present a new classification system for aspect-oriented programs. This system characterizes the interactions between aspects and methods and identifies classes of interactions that enable modular reasoning about the crosscut program. We argue that this system can help developers structure their understanding of aspect-oriented programs and promotes their ability to reason productively about the consequences of crosscutting a program with a given aspect. We have designed and implemented a program analysis system that automatically classifies interactions between aspects and methods and have applied this analysis to a set of benchmark programs. We found that our analysis is able to 1) identify interactions with desirable properties (such as lack of interference), 2) identify potentially problematic interactions (such as interference caused by the aspect and the method both writing the same field), and 3) direct the developer's attention to the causes of such interactions.", "Taking an interaction network oriented perspective in informatics raises the challenge to describe deterministic finite systems which take part in networks of nondeterministic interactions. The traditional approach to describe processes as stepwise executable activities which are not based on the ordinarily nondeterministic interaction shows strong centralization tendencies. As suggested in this article, viewing processes and their interactions as complementary can circumvent these centralization tendencies. The description of both, processes and their interactions is based on the same building blocks, namely finite input output automata (or transducers). Processes are viewed as finite systems that take part in multiple, ordinarily nondeterministic interactions. The interactions between processes are described as protocols. The effects of communication between processes as well as the necessary coordination of different interactions within a processes are both based on the restriction of the transition relation of product automata. The channel based outer coupling represents the causal relation between the output and the input of different systems. The coordination condition based inner coupling represents the causal relation between the input and output of a single system. All steps are illustrated with the example of a network of resource administration processes which is supposed to provide requesting user processes exclusive access to a single resource." ] }
0804.1696
1667425162
Aspect-Oriented Programming (AOP) improves modularity by encapsulating crosscutting concerns into aspects. Some mechanisms for composing aspects allow invasiveness as a means to integrate concerns. Invasiveness means that AOP languages have unrestricted access to program properties. Such languages are interesting because they allow complex operations to be performed and functionalities to be introduced more effectively. In this report we present a classification of invasive patterns in AOP. This classification characterizes the invasive behavior of aspects and allows developers to reason abstractly about the impact of aspects on the programs they crosscut.
Clifton and Leavens propose spectators and assistants @cite_4 . Spectators are advices that affect neither the control flow of the advised method nor existing fields, whereas assistants can change the control flow of the advised method and affect existing fields. Spectators are similar to those of our classification categories that do not interfere with the mainline computation or write fields; all our other classification categories are equivalent to assistants. Nevertheless, our classification achieves a finer level of granularity.
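As a rough dynamic analogue of the spectator/assistant distinction, the following hedged Python sketch checks whether an advised call behaves like a spectator, i.e. leaves both the result and the object's fields exactly as the plain method would. It is only an illustration, not Clifton and Leavens' mechanism, and the function names are hypothetical.

import copy

def behaves_like_spectator(obj, plain_method, advised_method, *args, **kwargs):
    """Illustrative runtime check: the advised call is spectator-like if it
    yields the same result and the same field state as the un-advised method.
    (Real aspect systems establish this statically rather than by testing.)"""
    reference = copy.deepcopy(obj)
    expected = plain_method(reference, *args, **kwargs)
    actual = advised_method(obj, *args, **kwargs)
    return actual == expected and vars(obj) == vars(reference)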
{ "cite_N": [ "@cite_4" ], "mid": [ "2773012335" ], "abstract": [ "In this paper, we focus on improving the proposal classification stage in the object detection task and present implicit negative sub-categorization and sink diversion to lift the performance by strengthening loss function in this stage. First, based on the observation that the “background” class is generally very diverse and thus challenging to be handled as a single indiscriminative class in existing state-of-the-art methods, we propose to divide the background category into multiple implicit sub-categories to explicitly differentiate diverse patterns within it. Second, since the ground truth class inevitably has low-value probability scores for certain images, we propose to add a “sink” class and divert the probabilities of wrong classes to this class when necessary, such that the ground truth label will still have a higher probability than other wrong classes even though it has low probability output. Additionally, we propose to use dilated convolution, which is widely used in the semantic segmentation task, for efficient and valuable context information extraction. Extensive experiments on PASCAL VOC 2007 and 2012 data sets show that our proposed methods based on faster R-CNN implementation can achieve state-of-the-art mAPs, i.e., 84.1 , 82.6 , respectively, and obtain 2.5 improvement on ILSVRC DET compared with that of ResNet." ] }
0803.3395
1890461267
In the first part of the paper we generalize a descent technique due to Harish-Chandra to the case of a reductive group acting on a smooth affine variety, both defined over an arbitrary local field F of characteristic zero. Our main tool is the Luna slice theorem. In the second part of the paper we apply this technique to symmetric pairs. In particular we prove that the pair (GL(n,C),GL(n,R)) is a Gelfand pair. We also prove that any conjugation-invariant distribution on GL(n,F) is invariant with respect to transposition. For non-archimedean F the latter is a classical theorem of Gelfand and Kazhdan. We use the techniques developed here in our subsequent work [AG3], where we prove an archimedean analog of the theorem on uniqueness of linear periods by H. Jacquet and S. Rallis.
Another generalization of Harish-Chandra descent using the Luna slice theorem was carried out in the non-archimedean case in @cite_17 . In that paper, Rader and Rallis investigated spherical characters of @math -distinguished representations of @math for symmetric pairs @math and checked the validity of what they call the "density principle" for rank one symmetric pairs. They found that it usually holds, but also exhibited counterexamples.
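For orientation, the standard notions behind this discussion can be stated as follows (a sketch in the usual notation; conventions may differ slightly from the cited papers).

% Standard definitions, included only for orientation, for a symmetric pair $(G,H)$
% over a local field $F$.
An irreducible admissible representation $\pi$ of $G$ is \emph{$H$-distinguished} if
\[
  \operatorname{Hom}_{H}(\pi,\mathbb{C}) \neq 0 ,
\]
and $(G,H)$ is a \emph{Gelfand pair} if for every irreducible admissible representation $\pi$ of $G$
\[
  \dim \operatorname{Hom}_{H}(\pi,\mathbb{C}) \le 1 .
\]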
{ "cite_N": [ "@cite_17" ], "mid": [ "2045361520" ], "abstract": [ "Abstract Elementary proofs are given for theorems of Bapat and Raghavan on the scaling of nonnegative multidimensional matrices. Theorems of Sinkhorn and of Brualdi, Parter, and Schneider are derived as corollaries. For positive two-dimensional matrices, Hilbert's projective metric and a theorem of G. Birkhoff are used to prove that Sinkhorn's original iterative procedure converges geometrically; the ratio of convergence is estimated from the given data." ] }
0803.3448
1506948380
In-network data aggregation is an essential technique in mission-critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols provide maximum data secrecy but inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols provide maximum data integrity with efficient data aggregation but more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which both attack modes are present. Our proposed protocol blends the flexible data aggregation of hop-by-hop protocols with the optimal data confidentiality of end-to-end protocols. It introduces an efficient O(1) heuristic for checking data integrity, along with a cost-effective heuristic-based divide-and-conquer attestation process, which is O(ln n) on average (O(n) in the worst case), for further verification of aggregated results.
In end-to-end encryption schemes @cite_8 @cite_0 @cite_15 @cite_5 , intermediate aggregators apply aggregation functions to encrypted data that they cannot decrypt, because they do not have access to the keys shared only between the data originators (usually leaf sensor nodes) and the BS. In CDA @cite_0 , sensor nodes share a common symmetric key with the BS that is kept hidden from intermediate aggregators. In @cite_8 , each leaf sensor shares a distinct long-term key with the BS, derived from a master secret known only to the BS. These protocols show that aggregation of end-to-end encrypted data is possible by using additive Privacy Homomorphism (PH) as the underlying encryption scheme. Although these protocols are intended to provide maximum data secrecy along the paths between leaf sensor nodes and their sink, the overall secrecy resilience of a WSN is endangered if an adversary gains access to the master key in @cite_8 , or compromises even a single leaf sensor node in CDA to acquire the common symmetric key shared by all leaf nodes.
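The following short Python sketch illustrates the additive privacy homomorphism idea underlying these schemes: ciphertexts are added modulo M by aggregators that never see the keys. It is a toy model only, with hypothetical key handling, and is not the exact CDA or @cite_8 construction.

import secrets

# Toy additive privacy homomorphism: c_i = (m_i + k_i) mod M. Intermediate
# aggregators add ciphertexts without decrypting; the BS removes the keys.
M = 2**32                                   # modulus large enough for the aggregate

def encrypt(m, k):
    return (m + k) % M

def aggregate(ciphertexts):                 # run by intermediate aggregators
    return sum(ciphertexts) % M

def decrypt_aggregate(c_agg, keys):         # run by the base station
    return (c_agg - sum(keys)) % M

readings = [17, 42, 5]                      # leaf sensor measurements
keys = [secrets.randbelow(M) for _ in readings]
ciphertexts = [encrypt(m, k) for m, k in zip(readings, keys)]
assert decrypt_aggregate(aggregate(ciphertexts), keys) == sum(readings)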
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15", "@cite_8" ], "mid": [ "2137996019", "2148306323", "2006853333", "2219395474" ], "abstract": [ "Wireless ad hoc and sensor networks (WSNs) often require a connected dominating set (CDS) as the underlying virtual backbone for efficient routing. Nodes in a CDS have extra computation and communication load for their role as dominator, subjecting them to an early exhaustion of their battery. A simple mechanism to address this problem is to switch from one CDS to another fresh CDS, rotating the active CDS through a disjoint set of CDSs. This gives rise to the connected domatic partition (CDP) problem, which essentially involves partitioning the nodes V(G) of a graph G into node disjoint CDSs. We have developed a distributed algorithm for constructing the CDP using our maximal independent set (MlS)-based proximity heuristics, which depends only on connectivity information and does not rely on geographic or geometric information. We show that the size of a CDP that is identified by our algorithm is at least [delta+1 beta(c+1)] - f, where delta is the minimum node degree of G, beta les 2, c les 11 is a constant for a unit disk graph (UDG), and the expected value of f is epsidelta|V|, where epsi Lt 1 is a positive constant, and delta ges 48. Results of varied testing of our algorithm are positive even for a network of a large number of sensor nodes. Our scheme also performs better than other related techniques such as the ID-based scheme.", "We consider decentralized estimation of a noise-corrupted deterministic signal in a bandwidth-constrained sensor network communicating through an insecure medium. Each sensor collects a noise-corrupted version, performs a local quantization, and transmits a 1-bit message to an ally fusion center through a wireless medium where the sensor outputs are vulnerable to unauthorized observation from enemy third-party fusion centers. In this paper, we introduce an encrypted wireless sensor network (eWSN) concept where stochastic enciphers operating on binary sensor outputs are introduced to disguise the sensor outputs, creating an eWSN scheme. Noting that the plaintext (original) and ciphertext (disguised) messages are constrained to a single bit due to bandwidth constraints, we consider a binary channel-like scheme to probabilistically encipher (i.e., flip) the sensor outputs. We first consider a symmetric key encryption case where the \"0\" and \"1\" enciphering probabilities are equal. The key is represented by the bit enciphering probability. Specifically, we derive the optimal estimator of the deterministic signal approached from a maximum-likelihood perspective and the Cramer-Rao lower bound for the estimation problem utilizing the key. Furthermore, we analyze the effect of the considered cryptosystem on enemy fusion centers that are unaware of the fact that the WSN is encrypted (i.e., we derive the bias, variance, and mean square error (MSE) of the enemy fusion center). We then extend the cryptosystem to admit unequal enciphering schemes for \"0\" and \"1\", and analyze the estimation problem from both the prospectives of ally (that has access to the enciphering keys) and (third-party) enemy fusion centers. 
The results show that when designed properly, a significant amount of bias and MSE can be introduced to an enemy fusion center with the cost to the ally fusion center being a marginal increase [factor of (1-Omega1-Omega0 )-2, where 1-Omegaj, j=0, 1 is the \"j\" enciphering probability in the estimation variance (compared to the variance of a fusion center estimate operating in a vulnerable WSN).", "In the wireless sensor networks (WSNs), sensor nodes may be deployed in the hostile areas. The eavesdropper can intercept the messages in the public channel and the communication between the nodes is easily monitored. Furthermore, any malicious intermediate node can act as a legal receiver to alter the passing messages. Hence, message protection and sensor node identification become important issues in WSN. In this paper, we propose a novel scheme providing unconditional secure communication based on the quantum characteristics, including no-cloning and teleportation. We present a random EPR-pair allocation scheme that is designed to overcome the vulnerability caused by possible compromised nodes. EPR pairs are pre-assigned to sensor nodes randomly and the entangled qubits are used by the nodes with the quantum teleportation scheme to form a secure link. We also show a scheme on how to resist the man-in-the-middle attack. In the framework, the qubits are allocated to each node before deployment and the adversary is unable to create the duplicated nodes. Even if the malicious nodes are added to the network to falsify the messages transmitting in the public channel, the legal nodes can easily detect the fake nodes that have no entangled qubits and verify the counterfeit messages. In addition, we prove that one node sharing EPR pairs with a certain amount of neighbor nodes can teleport information to any node in the sensor network if there are sufficient EPR pairs in the qubits pool. The proposal shows that the distributed quantum wireless sensor network gains better security than classical wireless sensor network and centralized quantum wireless network.", "Wireless sensor network (WSN) brings a new paradigm of real-time embedded systems with limited computation, communication, memory, and energy resources that are being used for huge range of applications where the traditional infrastructure-based network is mostly infeasible. The sensor nodes are densely deployed in a hostile environment to monitor, detect, and analyze the physical phenomenon and consume considerable amount of energy while transmitting the information. It is impractical and sometimes impossible to replace the battery and to maintain longer network life time. So, there is a limitation on the lifetime of the battery power and energy conservation is a challenging issue. Appropriate cluster head (CH) election is one such issue, which can reduce the energy consumption dramatically. Low energy adaptive clustering hierarchy (LEACH) is the most famous hierarchical routing protocol, where the CH is elected in rotation basis based on a probabilistic threshold value and only CHs are allowed to send the information to the base station (BS). But in this approach, a super-CH (SCH) is elected among the CHs who can only send the information to the mobile BS by choosing suitable fuzzy descriptors, such as remaining battery power, mobility of BS, and centrality of the clusters. Fuzzy inference engine (Mamdani’s rule) is used to elect the chance to be the SCH. 
The results have been derived from NS-2 simulator and show that the proposed protocol performs better than the LEACH protocol in terms of the first node dies, half node alive, better stability, and better lifetime." ] }
0803.3448
1506948380
In-network data aggregation is an essential technique in mission-critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols provide maximum data secrecy but inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols provide maximum data integrity with efficient data aggregation but more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which both attack modes are present. Our proposed protocol blends the flexible data aggregation of hop-by-hop protocols with the optimal data confidentiality of end-to-end protocols. It introduces an efficient O(1) heuristic for checking data integrity, along with a cost-effective heuristic-based divide-and-conquer attestation process, which is O(ln n) on average (O(n) in the worst case), for further verification of aggregated results.
In @cite_15 @cite_5 , public-key encryption based on elliptic curves is used to conceal data in transit from leaf sensors to the BS. These schemes enhance the secrecy resilience of WSNs against attacks on individual sensors, since compromising a single sensor node, or a set of them, does not reveal the decryption key known only to the BS. An attractive feature of @cite_15 is the introduction of data integrity into end-to-end encrypted WSNs through Merkle hash trees of Message Authentication Codes (MACs). However, both schemes raise power consumption concerns, since the computational requirements of public-key encryption are still considered high for WSNs @cite_13 .
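As an illustration of the integrity mechanism mentioned above, here is a simplified Python sketch of a Merkle hash tree built over per-sensor MACs; the tree shape, key handling and values are hypothetical, and the cited scheme's actual construction differs in detail.

import hashlib, hmac

def mac(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

def merkle_root(leaves):
    """Hash the leaf MACs pairwise up to a single root digest."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node if odd
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Each leaf sensor MACs its reading with a key it shares only with the BS.
keys = [b"key-node-%d" % i for i in range(4)]    # hypothetical per-node keys
readings = [b"21.5", b"21.7", b"22.0", b"21.9"]
leaves = [mac(k, r) for k, r in zip(keys, readings)]
root = merkle_root(leaves)                       # the BS recomputes and compares this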
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "2011539652", "2143536181", "2219395474" ], "abstract": [ "A major issue in many applications of Wireless Sensor Networks (WSNs) is ensuring security. Particularly, in military applications, sensors are usually deployed in hostile areas where they can be easily captured and operated by an adversary. Most of security attacks in WSNs are due to the lack of security guaranties in terms of authentication, integrity, and confidentiality. These services are often provided using cryptographic primitives where sensor nodes need to agree on a set of secret keys. Current key distribution schemes are not fully adapted to the tiny, low-cost, and fragile nature of sensors that are equipped with limited computation capability, reduced memory size, and battery-based power supply. This paper investigates the design of an efficient key distribution and management scheme for wireless sensor networks. The proposed scheme can ensure the generation and distribution of different encryption keys intended to secure individual and group communications. This is performed based on elliptic curve public key encryption using Diffie-Hellman like key exchange that is applied at different levels of the network topology. In addition, a re-keying procedure is performed using secret sharing techniques. This scheme is more efficient and less complex than existing approaches, due to the reduced number of messages and the less processing overhead required to accomplish key exchange. Furthermore, few number of encryption keys with reduced sizes are managed in sensor nodes, which optimizes memory usage and enhances scalability to large size networks.", "To achieve security in wireless sensor networks, it is important to he able to encrypt messages sent among sensor nodes. Keys for encryption purposes must he agreed upon by communicating nodes. Due to resource constraints, achieving such key agreement in wireless sensor networks is nontrivial. Many key agreement schemes used in general networks, such as Diffie-Hellman and public-key based schemes, are not suitable for wireless sensor networks. Pre-distribution of secret keys for all pairs of nodes is not viable due to the large amount of memory used when the network size is large. Recently, a random key pre-distribution scheme and its improvements have been proposed. A common assumption made by these random key pre-distribution schemes is that no deployment knowledge is available. Noticing that in many practical scenarios, certain deployment knowledge may be available a priori, we propose a novel random key pre-distribution scheme that exploits deployment knowledge and avoids unnecessary key assignments. We show that the performance (including connectivity, memory usage, and network resilience against node capture) of sensor networks can he substantially improved with the use of our proposed scheme. The scheme and its detailed performance evaluation are presented in this paper.", "Wireless sensor network (WSN) brings a new paradigm of real-time embedded systems with limited computation, communication, memory, and energy resources that are being used for huge range of applications where the traditional infrastructure-based network is mostly infeasible. The sensor nodes are densely deployed in a hostile environment to monitor, detect, and analyze the physical phenomenon and consume considerable amount of energy while transmitting the information. 
It is impractical and sometimes impossible to replace the battery and to maintain longer network life time. So, there is a limitation on the lifetime of the battery power and energy conservation is a challenging issue. Appropriate cluster head (CH) election is one such issue, which can reduce the energy consumption dramatically. Low energy adaptive clustering hierarchy (LEACH) is the most famous hierarchical routing protocol, where the CH is elected in rotation basis based on a probabilistic threshold value and only CHs are allowed to send the information to the base station (BS). But in this approach, a super-CH (SCH) is elected among the CHs who can only send the information to the mobile BS by choosing suitable fuzzy descriptors, such as remaining battery power, mobility of BS, and centrality of the clusters. Fuzzy inference engine (Mamdani’s rule) is used to elect the chance to be the SCH. The results have been derived from NS-2 simulator and show that the proposed protocol performs better than the LEACH protocol in terms of the first node dies, half node alive, better stability, and better lifetime." ] }
0803.3448
1506948380
In-network data aggregation is an essential technique in mission-critical wireless sensor networks (WSNs) for achieving effective transmission and hence better power conservation. Common security protocols for aggregated WSNs are either hop-by-hop or end-to-end, each of which has its own encryption schemes considering different security primitives. End-to-end encrypted data aggregation protocols provide maximum data secrecy but inefficient data aggregation and more vulnerability to active attacks, while hop-by-hop data aggregation protocols provide maximum data integrity with efficient data aggregation but more vulnerability to passive attacks. In this paper, we propose a secure aggregation protocol for aggregated WSNs deployed in hostile environments in which both attack modes are present. Our proposed protocol blends the flexible data aggregation of hop-by-hop protocols with the optimal data confidentiality of end-to-end protocols. It introduces an efficient O(1) heuristic for checking data integrity, along with a cost-effective heuristic-based divide-and-conquer attestation process, which is O(ln n) on average (O(n) in the worst case), for further verification of aggregated results.
Many hop-by-hop aggregation protocols for WSNs, such as @cite_16 @cite_2 @cite_12 @cite_14 @cite_9 , provide more efficient aggregation operations and place strong emphasis on data integrity. However, since sensed data passed to non-leaf aggregators must be revealed for the sake of in-network aggregation, hop-by-hop aggregation protocols offer a weaker data confidentiality model than end-to-end aggregation protocols: the data secrecy of a partition is lost if a passive adversary obtains the key of that partition's root aggregator.
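The confidentiality weakness can be seen structurally in the following toy Python sketch of a single hop-by-hop aggregation step: the aggregator must decrypt its children's values before adding them, so its keys expose that subtree's plaintext. The XOR-with-hash "cipher" is only a placeholder and is not secure.

import hashlib

def keystream(key, n):
    return hashlib.sha256(key).digest()[:n]

def enc(key, value):                         # value: small non-negative integer
    data = value.to_bytes(4, "big")
    return bytes(a ^ b for a, b in zip(data, keystream(key, 4)))

def dec(key, blob):
    return int.from_bytes(bytes(a ^ b for a, b in zip(blob, keystream(key, 4))), "big")

def aggregate_hop(child_blobs, child_keys, parent_key):
    # Plaintext readings are visible at this node; this is the weakness.
    total = sum(dec(k, b) for b, k in zip(child_blobs, child_keys))
    return enc(parent_key, total)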
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_2", "@cite_16", "@cite_12" ], "mid": [ "2110889959", "2786403027", "2092347390", "2102832611", "1978885205" ], "abstract": [ "Hop-by-hop data aggregation is a very important technique for reducing the communication overhead and energy expenditure of sensor nodes during the process of data collection in a sensor network. However, because individual sensor readings are lost in the per-hop aggregation process, compromised nodes in the network may forge false values as the aggregation results of other nodes, tricking the base station into accepting spurious aggregation results. Here a fundamental challenge is: how can the base station obtain a good approximation of the fusion result when a fraction of sensor nodes are compromised.To answer this challenge, we propose SDAP, a Secure Hop-by-hop Data Aggregation Protocol for sensor networks. The design of SDAP is based on the principles of divide-and-conquer and commit-and-attest. First, SDAP uses a novel probabilistic grouping technique to dynamically partition the nodes in a tree topology into multiple logical groups (subtrees) of similar sizes. A commitment-based hop-by-hop aggregation is performed in each group to generate a group aggregate. The base station then identifies the suspicious groups based on the set of group aggregates. Finally, each group under suspect participates in an attestation process to prove the correctness of its group aggregate. Our analysis and simulations show that SDAP can achieve the level of efficiency close to an ordinary hop-by-hop aggregation protocol while providing certain assurance on the trustworthiness of the aggregation result. Moreover, SDAP is a general-purpose secure aggregation protocol applicable to multiple aggregation functions.", "Established approaches to data aggregation in wireless sensor networks (WSNs) do not cover the variety of new use cases developing with the advent of the Internet of Things (IoT). In particular, the current push toward fog computing, in which control, computation, and storage are moved to nodes close to the network edge, induces a need to collect data at multiple sinks, rather than the single sink typically considered in WSN aggregation algorithms. Moreover, for machine-to-machine communication scenarios, actuators subscribing to sensor measurements may also be present, in which case data should be not only aggregated and processed in-network but also disseminated to actuator nodes. In this paper, we present mixed-integer programming formulations and algorithms for the problem of energy-optimal routing and multiple-sink aggregation, as well as joint aggregation and dissemination, of sensor measurement data in IoT edge networks. We consider optimization of the network for both minimal total energy usage, and min-max per-node energy usage. We also provide a formulation and algorithm for throughput-optimal scheduling of transmissions under the physical interference model in the pure aggregation case. We have conducted a numerical study to compare the energy required for the two use cases, as well as the time to solve them, in generated network scenarios with varying topologies and between 10 and 40 nodes. Although aggregation only accounts for less than 15 of total energy usage in all cases tested, it provides substantial energy savings. 
Our results show more than 13 times greater energy usage for 40-node networks using direct, shortest-path flows from sensors to actuators, compared with our aggregation and dissemination solutions.", "Wireless sensor network (WSN) is a rapidly evolving technological platform with tremendous and novel applications. Recent advances in WSN have led to many new protocols specifically designed for them where energy awareness (i.e. long lived wireless network) is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. As routing approach with hierarchical structure is realized to successfully provide energy efficient solution, various heuristic clustering algorithms have been proposed. As an attractive WSN routing protocol, LEACH has been widely accepted for its energy efficiency and simplicity. Also, the discipline of meta-heuristics Evolutionary Algorithms (EAs) has been utilized by several researchers to tackle cluster-based routing problem in WSN. These biologically inspired routing mechanisms, e.g., HCR, have proved beneficial in prolonging the WSN lifetime, but unfortunately at the expense of decreasing the stability period of WSN. This is most probably due to the abstract modeling of the EA's clustering fitness function. The aim of this paper is to alleviate the undesirable behavior of the EA when dealing with clustered routing problem in WSN by formulating a new fitness function that incorporates two clustering aspects, viz. cohesion and separation error. Simulation over 20 random heterogeneous WSNs shows that our evolutionary based clustered routing protocol (ERP) always prolongs the network lifetime, preserves more energy as compared to the results obtained using the current heuristics such as LEACH, SEP, and HCR protocols. Additionally, we found that ERP outperforms LEACH and HCR in prolonging the stability period, comparable to SEP performance for heterogeneous networks with 10 extra heterogeneity but requires further heterogeneous-aware modification in the presence of 20 of node heterogeneity.", "Wireless sensor networks (WSNs) are ad-hoc networks composed of tiny devices with limited computation and energy capacities. For such devices, data transmission is a very energy-consuming operation. It thus becomes essential to the lifetime of a WSN to minimize the number of bits sent by each device. One well-known approach is to aggregate sensor data (e.g., by adding) along the path from sensors to the sink. Aggregation becomes especially challenging if end-to-end privacy between sensors and the sink is required. In this paper, we propose a simple and provably secure additively homomorphic stream cipher that allows efficient aggregation of encrypted data. The new cipher only uses modular additions (with very small moduli) and is therefore very well suited for CPU-constrained devices. We show that aggregation based on this cipher can be used to efficiently compute statistical values such as mean, variance and standard deviation of sensed data, while achieving significant bandwidth gain.", "Data aggregation is a widely used technique in wireless sensor networks. The security issues, data confidentiality and integrity, in data aggregation become vital when the sensor network is deployed in a hostile environment. There has been many related work proposed to address these security issues. 
In this paper we survey these work and classify them into two cases: hop-by-hop encrypted data aggregation and end-to-end encrypted data aggregation. We also propose two general frameworks for the two cases respectively. The framework for end-to-end encrypted data aggregation has higher computation cost on the sensor nodes, but achieves stronger security, in comparison with the framework for hop-by-hop encrypted data aggregation." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
A standard centralized approach to tracking ( @cite_4 ) is "sensor specific", in the sense that it uses smart, powerful sensors with high processing abilities. In particular, this algorithm assumes that each node is aware of its absolute location (e.g. via GPS) or of a relative location, and that the sensors can estimate the distance to the target from their readings. The process of tracking a target has three distinct steps: detecting the presence of the target, determining its direction of motion, and alerting appropriate nodes in the network. Thus, in their approach a very large part of the network is actively involved in the tracking process, which may lead to increased energy dissipation. Also, in contrast to our method, which can simultaneously handle multiple targets, their protocol can only track one target in the network at any time. Overall, their method has several strengths (reasonable estimation error, precise location of the tracked source, real-time target tracking), but there are weaknesses as well (intensive computations and intensive radio transmissions).
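A hedged sketch of the "determine direction and alert nodes" step described above: from two successive position estimates, compute a heading and pick downstream sensors to wake up. All thresholds and names are hypothetical simplifications of the cited protocol.

import math

def heading(p_prev, p_curr):
    """Direction of motion estimated from two successive position estimates."""
    return math.atan2(p_curr[1] - p_prev[1], p_curr[0] - p_prev[0])

def nodes_to_alert(p_curr, direction, sensors, max_angle=math.pi / 6, max_dist=50.0):
    """Alert sensors lying roughly along the target's heading (toy criterion)."""
    alerted = []
    for node_id, (x, y) in sensors.items():
        dist = math.hypot(x - p_curr[0], y - p_curr[1])
        ang = abs(math.atan2(y - p_curr[1], x - p_curr[0]) - direction)
        ang = min(ang, 2 * math.pi - ang)
        if 0 < dist <= max_dist and ang <= max_angle:
            alerted.append(node_id)
    return alerted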
{ "cite_N": [ "@cite_4" ], "mid": [ "2115335002" ], "abstract": [ "We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a @Q(logn) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Our method is entirely different from the network architecture design approach of centralized placement and distributed tracking (see e.g. the book @cite_0 for a nice overview). In that approach, optimal (or as efficient as possible) sensor deployment strategies are proposed to ensure maximum sensing coverage with a minimal number of sensors, as well as power conservation in sensor networks. One of the centralized methods ( @cite_3 ), which focuses on deployment optimization, performs a grid discretization of the space. Their method tries to find the grid point closest to the target rather than the exact coordinates of the target; in such a setting, an optimized placement of sensors guarantees that every grid point in the area is covered by a unique subset of sensors.
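The grid-based idea can be sketched in a few lines of Python: precompute, for every grid point, the set of sensors covering it; if these sets are unique, the set of sensors currently detecting the target identifies its closest grid point. This is an illustration under a simple disc sensing model, not the cited method itself.

import math

def covering_sets(grid_points, sensors, sensing_range):
    """Map each grid point (a tuple) to the set of sensor ids covering it."""
    return {gp: frozenset(sid for sid, pos in sensors.items()
                          if math.dist(gp, pos) <= sensing_range)
            for gp in grid_points}

def locate(detecting_ids, cover):
    """Return the grid point whose covering set matches the detecting sensors."""
    observed = frozenset(detecting_ids)
    matches = [gp for gp, s in cover.items() if s == observed]
    return matches[0] if len(matches) == 1 else None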
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "2102099586", "2112953516" ], "abstract": [ "One of the research issues in wireless sensor networks (WSNs) is how to efficiently deploy sensors to cover an area. In this paper, we solve the k-coverage sensor deployment problem to achieve multi-level coverage of an area I. We consider two sub-problems: k-coverage placement and distributed dispatch problems. The placement problem asks how to determine the minimum number of sensors required and their locations in I to guarantee that I is k-covered and the network is connected; the dispatch problem asks how to schedule mobile sensors to move to the designated locations according to the result computed by the placement strategy such that the energy consumption due to movement is minimized. Our solutions to the placement problem consider both the binary and probabilistic sensing models, and allow an arbitrary relationship between the communication distance and sensing distance of sensors. For the dispatch problem, we propose a competition-based and a pattern-based schemes. The former allows mobile sensors to bid for their closest locations, while the latter allows sensors to derive the target locations on their own. Our proposed schemes are efficient in terms of the number of sensors required and are distributed in nature. Simulation results are presented to verify their effectiveness.", "Due to their low cost and small form factors, a large number of sensor nodes can be deployed in redundant fashion in dense sensor networks. The availability of redundant nodes increases network lifetime as well as network fault tolerance. It is, however, undesirable to keep all the sensor nodes active at all times for sensing and communication. An excessive number of active nodes lead to higher energy consumption and it places more demand on the limited network bandwidth. We present an efficient technique for the selection of active sensor nodes in dense sensor networks. The active node selection procedure is aimed at providing the highest possible coverage of the sensor field, i.e., the surveillance area. It also assures network connectivity for routing and information dissemination. We first show that the coverage-centric active nodes selection problem is NP-complete. We then present a distributed approach based on the concept of a connected dominating set (CDS). We prove that the set of active nodes selected by our approach provides full coverage and connectivity. We also describe an optimal coverage-centric centralized approach based on integer linear programming. We present simulation results obtained using an ns2 implementation of the proposed technique." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Another network design approach for tracking is provided in @cite_6 , which tries to avoid an expensive massive deployment of sensors by taking advantage of possible coverage overlaps over space and time, introducing a novel combinatorial model (based on set covers) that captures such overlaps. The authors then use this model to design and analyze an efficient approximate method for sensor placement and operation that, with high probability and in polynomial expected time, achieves a @math approximation ratio to the optimal solution.
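For background on the covering formulation, the classical greedy set cover heuristic below achieves the well-known O(log n) approximation ratio; the cited paper's own method is randomized and differs in detail, so this is only a reference sketch.

def greedy_set_cover(universe, sets):
    """universe: set of elements; sets: dict mapping a set name to its elements.
    Repeatedly pick the set covering the most uncovered elements (H_n-approximate)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        gained = sets[best] & uncovered
        if not gained:
            raise ValueError("the given sets do not cover the universe")
        chosen.append(best)
        uncovered -= gained
    return chosen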
{ "cite_N": [ "@cite_6" ], "mid": [ "2115335002" ], "abstract": [ "We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a @Q(logn) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
As opposed to centralized processing, in a distributed model the sensor network distributes the computation among its nodes. Each sensor unit acquires local, partial, and relatively coarse information from its environment; the network then collaboratively determines a fairly precise estimate based on its coverage and multiplicity of sensing modalities. Several such distributed approaches have been proposed. In @cite_7 , a cluster-based distributed tracking scheme is provided. The sensor network is logically partitioned into local collaborative groups, each responsible for providing information on a target and tracking it. Sensors that can jointly provide the most accurate information on a target (in this case, those nearest to the target) form a group. As the target moves, the local region must move with it; hence groups are dynamic, with nodes dropping out and others joining in. Clearly, time synchronization is a major prerequisite for this approach to work. Furthermore, the algorithm works well for merging multiple tracks corresponding to the same target; however, if two targets come very close to each other, the mechanism described will be unable to distinguish between them.
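A minimal sketch of the dynamic group idea (membership recomputed around the current target estimate, readings fused by the group); the radius and the weighting rule are hypothetical simplifications of the cited scheme.

import math

def update_group(target_estimate, sensors, radius):
    """Sensors within `radius` of the current estimate form the collaborative group."""
    return {sid for sid, pos in sensors.items()
            if math.dist(pos, target_estimate) <= radius}

def fuse(readings, sensors, group):
    """Weight each member's position by its (signal-strength) reading."""
    total = sum(readings[sid] for sid in group)
    x = sum(sensors[sid][0] * readings[sid] for sid in group) / total
    y = sum(sensors[sid][1] * readings[sid] for sid in group) / total
    return (x, y)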
{ "cite_N": [ "@cite_7" ], "mid": [ "1581531227" ], "abstract": [ "The tradeoff between performance and scalability is a fundamental issue in distributed sensor networks. In this paper, we propose a novel scheme to efficiently organize and utilize network resources for target localization. Motivated by the essential role of geographic proximity in sensing, sensors are organized into geographically local collaborative groups. In a target tracking context, we present a dynamic group management method to initiate and maintain multiple tracks in a distributed manner. Collaborative groups are formed, each responsible for tracking a single target. The sensor nodes within a group coordinate their behavior using geographically-limited message passing. Mechanisms such as these for managing local collaborations are essential building blocks for scalable sensor network applications." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Another nice distributed approach is the dynamic convoy tree-based collaboration (DCTC) framework proposed in @cite_8 . The convoy tree includes the sensor nodes around the detected target, and the tree progressively adapts itself, adding some nodes and pruning others as the target moves. In particular, as the target moves, some nodes lying upstream of the moving path drift farther away from the target and are pruned from the convoy tree, while some free nodes lying on the projected moving path soon need to join the collaborative tracking. As the tree keeps adapting to the movement of the target, the root may end up too far from the target, which introduces the need to select a new root and reconfigure the convoy tree accordingly. If the moving target's trail is known a priori and each node has knowledge of the global network topology, the tracking nodes can agree on an optimal convoy tree structure; these assumptions are at the same time the main weaknesses of the protocol, since in many real scenarios they are unrealistic.
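The adaptation steps just described (prune drifting nodes, add free nodes near the projected path, relocate the root when it lags) can be sketched as follows; all distance thresholds are invented for illustration, and the real DCTC reconfiguration schemes are more elaborate.

import math

def adapt_convoy_tree(tree_nodes, root, free_nodes, target, positions,
                      prune_dist=40.0, join_dist=25.0, root_dist=30.0):
    """One adaptation step of a toy convoy tree."""
    tree_nodes = {n for n in tree_nodes
                  if math.dist(positions[n], target) <= prune_dist}      # prune
    tree_nodes |= {n for n in free_nodes
                   if math.dist(positions[n], target) <= join_dist}      # grow
    if tree_nodes and (root not in tree_nodes
                       or math.dist(positions[root], target) > root_dist):
        root = min(tree_nodes, key=lambda n: math.dist(positions[n], target))
    return tree_nodes, root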
{ "cite_N": [ "@cite_8" ], "mid": [ "2157123706" ], "abstract": [ "Sensor nodes have limited sensing range and are not very reliable. To obtain accurate sensing data, many sensor nodes should he deployed and then the collaboration among them becomes an important issue. In W. Zhang and G. Cao, a tree-based approach has been proposed to facilitate sensor nodes collaborating in detecting and tracking a mobile target. As the target moves, many nodes in the tree may become faraway from the root of the tree, and hence a large amount of energy may be wasted for them to send their sensing data to the root. We address the tree reconfiguration problem. We formalize it as finding a min-cost convoy tree sequence, and solve it by proposing an optimized complete reconfiguration scheme and an optimized interception-based reconfiguration scheme. Analysis and simulation are conducted to compare the proposed schemes with each other and with other reconfiguration schemes. The results show that the proposed schemes are more energy efficient than others." ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
Finally, a "mobile agent" approach is followed in @cite_1 : a master agent travels through the network, and two slave agents are assigned the task of participating in the trilateration. As opposed to our method, their approach is quite complicated, including several sub-protocols (e.g. election protocols, trilateration, fusion and delivery of tracking results, and maintenance of a tracking history). Although the use of mobile agents can greatly reduce the sensing, computing and communication overheads, their approach does not scale in randomly scattered networks, nor in well-connected irregular networks, since a large amount of offline computation is needed. Finally, the base station that receives the tracking results is assumed to be fixed, which can be a problem in a tracking application.
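For reference, the geometric core of such schemes, trilateration from anchor sensors and measured distances, can be written as a small least-squares problem; this is an illustration only and omits the election, fusion and history sub-protocols of the cited work.

import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2D target position from anchor coordinates and measured distances.
    Linearizes the circle equations against the first anchor and solves least squares."""
    (x1, y1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return tuple(sol)

# e.g. trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 8.06, 6.71]) is close to (3, 4)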
{ "cite_N": [ "@cite_1" ], "mid": [ "1981643246" ], "abstract": [ "Mobile agent computing is being used in fields as diverse as artificial intelligence, computational economics and robotics. Agents' ability to adapt dynamically and execute asynchronously and autonomously brings potential advantages in terms of fault-tolerance, flexibility and simplicity. This monograph focuses on studying mobile agents as modelled in distributed systems research and in particular within the framework of research performed in the distributed algorithms community. It studies the fundamental question of how to achieve rendezvous , the gathering of two or more agents at the same node of a network. Like leader election, such an operation is a useful subroutine in more general computations that may require the agents to synchronize, share information, divide up chores, etc. The work provides an introduction to the algorithmic issues raised by the rendezvous problem in the distributed computing setting. For the most part our investigation concentrates on the simplest case of two agents attempting to rendezvous on a ring network. Other situations including multiple agents, faulty nodes and other topologies are also examined. An extensive bibliography provides many pointers to related work not covered in the text. The presentation has a distinctly algorithmic, rigorous, distributed computing flavor and most results should be easily accessible to advanced undergraduate and graduate students in computer science and mathematics departments. Table of Contents: Models for Mobile Agent Computing Deterministic Rendezvous in a Ring Multiple Agent Rendezvous in a Ring Randomized Rendezvous in a Ring Other Models Other Topologies" ] }
0803.2219
1655844585
We study the important problem of tracking moving targets in wireless sensor networks. We try to overcome the limitations of standard state of the art tracking methods based on continuous location tracking, i.e. the high energy dissipation and communication overhead imposed by the active participation of sensors in the tracking process and the low scalability, especially in sparse networks. Instead, our approach uses sensors in a passive way: they just record and judiciously spread information about observed target presence in their vicinity; this information is then used by the (powerful) tracking agent to locate the target by just following the traces left at sensors. Our protocol is greedy, local, distributed, energy efficient and very successful, in the sense that (as shown by extensive simulations) the tracking agent manages to quickly locate and follow the target; also, we achieve good trade-offs between the energy dissipation and latency.
The interested reader is referred to @cite_2 , the nice book by F. Zhao and L. Guibas, which even presents the tracking problem as a "canonical" problem for wireless sensor networks. Several tracking approaches are also presented in @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2115335002", "2102099586" ], "abstract": [ "We study the problem of localizing and tracking multiple moving targets in wireless sensor networks, from a network design perspective i.e. towards estimating the least possible number of sensors to be deployed, their positions and operation characteristics needed to perform the tracking task. To avoid an expensive massive deployment, we try to take advantage of possible coverage overlaps over space and time, by introducing a novel combinatorial model that captures such overlaps. Under this model, we abstract the tracking network design problem by a combinatorial problem of covering a universe of elements by at least three sets (to ensure that each point in the network area is covered at any time by at least three sensors, and thus being localized). We then design and analyze an efficient approximate method for sensor placement and operation, that with high probability and in polynomial expected time achieves a @Q(logn) approximation ratio to the optimal solution. Our network design solution can be combined with alternative collaborative processing methods, to suitably fit different tracking scenarios.", "One of the research issues in wireless sensor networks (WSNs) is how to efficiently deploy sensors to cover an area. In this paper, we solve the k-coverage sensor deployment problem to achieve multi-level coverage of an area I. We consider two sub-problems: k-coverage placement and distributed dispatch problems. The placement problem asks how to determine the minimum number of sensors required and their locations in I to guarantee that I is k-covered and the network is connected; the dispatch problem asks how to schedule mobile sensors to move to the designated locations according to the result computed by the placement strategy such that the energy consumption due to movement is minimized. Our solutions to the placement problem consider both the binary and probabilistic sensing models, and allow an arbitrary relationship between the communication distance and sensing distance of sensors. For the dispatch problem, we propose a competition-based and a pattern-based schemes. The former allows mobile sensors to bid for their closest locations, while the latter allows sensors to derive the target locations on their own. Our proposed schemes are efficient in terms of the number of sensors required and are distributed in nature. Simulation results are presented to verify their effectiveness." ] }
0803.2331
2062563843
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
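For context, the classical relations between the gradient and Hessian of a local height function z = f(x, y) and the surface normal and shape operator are recalled below; these are standard textbook formulas and are not necessarily stated in the exact form derived in the paper.

```latex
% z = f(x,y): local height function, \nabla f its gradient, H its Hessian.
\[
  \mathbf{n} \;=\; \frac{(-\nabla f,\; 1)^{\mathsf T}}{\sqrt{1+\|\nabla f\|^{2}}},
  \qquad
  \mathrm{I} \;=\; I_{2} + \nabla f\,\nabla f^{\mathsf T},
  \qquad
  \mathrm{II} \;=\; \frac{H}{\sqrt{1+\|\nabla f\|^{2}}},
\]
\[
  W \;=\; \mathrm{I}^{-1}\,\mathrm{II}
    \;=\; \bigl(I_{2}+\nabla f\,\nabla f^{\mathsf T}\bigr)^{-1}
          \frac{H}{\sqrt{1+\|\nabla f\|^{2}}} .
\]
% The eigenvalues of the shape operator W are the principal curvatures; at a point
% where \nabla f = 0 these relations reduce to n = (0,0,1)^T and W = H.
```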
Many methods have been proposed for computing or estimating the first- and second-order differential quantities of a surface. In recent years, there has been significant interest in the convergence and consistency of these methods. We do not attempt to give a comprehensive review of these methods but consider only a few of them that are most relevant to our proposed approach; readers are referred to @cite_2 and @cite_1 for comprehensive surveys. Many of the existing methods estimate the different quantities separately from each other. For the estimation of normals, a common practice is to compute vertex normals as a weighted average of face normals, using, for example, area weighting or angle weighting. These methods are in general only first-order accurate, although they are the most efficient.
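As a concrete instance of the averaging-style, first-order estimators mentioned above, the sketch below computes area-weighted vertex normals; since the cross product of two triangle edges has magnitude twice the triangle area, accumulating the raw cross products per face implements area weighting automatically. This is a generic illustration, not code from any cited work.

```python
import numpy as np

def area_weighted_vertex_normals(vertices, faces):
    """Estimate unit vertex normals by area-weighted averaging of face normals.

    vertices: (n, 3) float array; faces: (m, 3) int array of triangle indices.
    The cross product of two triangle edges has length 2 * area, so accumulating
    the raw cross products onto each corner vertex implements area weighting.
    """
    normals = np.zeros_like(vertices, dtype=float)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    face_normals = np.cross(v1 - v0, v2 - v0)          # length = 2 * triangle area
    for i in range(3):
        np.add.at(normals, faces[:, i], face_normals)  # accumulate onto corner vertices
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-12)

# Tiny usage example: one quad split into two triangles in the z = 0 plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(area_weighted_vertex_normals(verts, tris))       # all normals ~ (0, 0, 1)
```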
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2096244937", "2092785747" ], "abstract": [ "This paper presents a novel method for 3D surface reconstruction that uses polarization and shading information from two views. The method relies on the polarization data acquired using a standard digital camera and a linear polarizer. Fresnel theory is used to process the raw images and to obtain initial estimates of surface normals, assuming that the reflection type is diffuse. Based on this idea, the paper presents two novel contributions to the problem of surface reconstruction. The first is a technique to enhance the surface normal estimates by incorporating shading information into the method. This is done using robust statistics to estimate how the measured pixel brightnesses depend on the surface orientation. This gives an estimate of the object material reflectance function, which is used to refine the estimates of the surface normals. The second contribution is to use the refined estimates to establish correspondence between two views of an object. To do this, surface patches are extracted from each view, which are then aligned by minimising an energy functional based on the surface normal estimates and local topographic properties. The optimum alignment parameters for different patch pairs are then used to establish stereo correspondence. This process results in an unambiguous field of surface normals, which can be integrated to recover the surface depth. Our technique is most suited to smooth nonmet allic surfaces. It complements existing stereo algorithms since it does not require salient surface features to obtain correspondences. An extensive set of experiments, yielding reconstructed objects and reflectance functions, are presented and compared to ground truth.", "Most mesh denoising techniques utilize only either the facet normal field or the vertex normal field of a mesh surface. The two normal fields, though contain some redundant geometry information of the same model, can provide additional information that the other field lacks. Thus, considering only one normal field is likely to overlook some geometric features. In this paper, we take advantage of the piecewise consistent property of the two normal fields and propose an effective framework in which they are filtered and integrated using a novel method to guide the denoising process. Our key observation is that, decomposing the inconsistent field at challenging regions into multiple piecewise consistent fields makes the two fields complementary to each other and produces better results. Our approach consists of three steps: vertex classification , bi-normal filtering , and vertex position update . The classification step allows us to filter the two fields on a piecewise smooth surface rather than a surface that is smooth everywhere. Based on the piecewise consistence of the two normal fields, we filtered them using a piecewise smooth region clustering strategy. To benefit from the bi-normal filtering, we design a quadratic optimization algorithm for vertex position update. Experimental results on synthetic and real data show that our algorithm achieves higher quality results than current approaches on surfaces with multifarious geometric features and irregular surface sampling." ] }
0803.2331
2062563843
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
Vertex-based quadratic or higher-order polynomial fittings can produce convergent normal and curvature estimations. Meek and Walton studied the convergence properties of a number of estimators for normals and curvatures, and this analysis was further generalized to higher-degree polynomial fittings by Cazals and Pouget. These methods are most closely related to our approach. It is well known that such methods may encounter numerical difficulties at low-valence vertices or special arrangements of vertices @cite_17 , which we address in this paper. Razdan and Bae proposed a scheme to estimate curvatures using biquadratic Bézier patches. Some methods have also been proposed to improve the robustness of curvature estimation under noise, for example by fitting the surface implicitly with multi-level meshes, or, more recently, by adapting the neighborhood sizes. These methods in general only provide curvature estimations that are meaningful in some average sense but do not necessarily guarantee convergence of pointwise estimates.
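The vertex-based fitting approach referred to above is straightforward to prototype. The sketch below fits a quadratic height function to a vertex's neighbors, expressed in a local tangent frame, by (optionally weighted) least squares, and converts the fitted gradient and Hessian into principal curvatures via the standard Weingarten-map relation; it is a generic illustration of the approach, not the fitting scheme of any particular cited method.

```python
import numpy as np

def principal_curvatures_from_fit(local_pts, weights=None):
    """Fit h ~ a*u^2 + b*u*v + c*v^2 + d*u + e*v to neighbor offsets (u, v, h) given in a
    local frame centered at the vertex, then convert the fitted gradient and Hessian
    into principal curvatures via the standard Weingarten-map relation."""
    u, v, h = local_pts[:, 0], local_pts[:, 1], local_pts[:, 2]
    A = np.column_stack([u * u, u * v, v * v, u, v])
    if weights is not None:                          # weighted least squares
        sw = np.sqrt(np.asarray(weights))
        A, h = A * sw[:, None], h * sw
    a, b, c, d, e = np.linalg.lstsq(A, h, rcond=None)[0]
    grad = np.array([d, e])                          # gradient of the fitted height function
    H = np.array([[2 * a, b], [b, 2 * c]])           # Hessian of the fitted height function
    I1 = np.eye(2) + np.outer(grad, grad)            # first fundamental form
    W = np.linalg.solve(I1, H) / np.sqrt(1.0 + grad @ grad)   # shape operator
    return np.sort(np.linalg.eigvals(W).real)        # (kappa_1, kappa_2)

# Usage: neighbors sampled from the paraboloid h = (u^2 + v^2) / 2 at radius 0.1;
# both principal curvatures at the origin should come out close to 1.
ang = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
u, v = 0.1 * np.cos(ang), 0.1 * np.sin(ang)
print(principal_curvatures_from_fit(np.column_stack([u, v, 0.5 * (u**2 + v**2)])))
```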
{ "cite_N": [ "@cite_17" ], "mid": [ "2112696958" ], "abstract": [ "In this paper, we introduce a feature-preserving denoising algorithm. It is built on the premise that the underlying surface of a noisy mesh is piecewise smooth, and a sharp feature lies on the intersection of multiple smooth surface regions. A vertex close to a sharp feature is likely to have a neighborhood that includes distinct smooth segments. By defining the consistent subneighborhood as the segment whose geometry and normal orientation most consistent with those of the vertex, we can completely remove the influence from neighbors lying on other segments during denoising. Our method identifies piecewise smooth subneighborhoods using a robust density-based clustering algorithm based on shared nearest neighbors. In our method, we obtain an initial estimate of vertex normals and curvature tensors by robustly fitting a local quadric model. An anisotropic filter based on optimal estimation theory is further applied to smooth the normal field and the curvature tensor field. This is followed by second-order bilateral filtering, which better preserves curvature details and alleviates volume shrinkage during denoising. The support of these filters is defined by the consistent subneighborhood of a vertex. We have applied this algorithm to both generic and CAD models, and sharp features, such as edges and corners, are very well preserved." ] }
0803.2331
2062563843
Differential quantities, including normals, curvatures, principal directions, and associated matrices, play a fundamental role in geometric processing and physics-based modeling. Computing these differential quantities consistently on surface meshes is important and challenging, and some existing methods often produce inconsistent results and require ad hoc fixes. In this paper, we show that the computation of the gradient and Hessian of a height function provides the foundation for consistently computing the differential quantities. We derive simple, explicit formulas for the transformations between the first- and second-order differential quantities (i.e., normal vector and curvature matrix) of a smooth surface and the first- and second-order derivatives (i.e., gradient and Hessian) of its corresponding height function. We then investigate a general, flexible numerical framework to estimate the derivatives of the height function based on local polynomial fittings formulated as weighted least squares approximations. We also propose an iterative fitting scheme to improve accuracy. This framework generalizes polynomial fitting and addresses some of its accuracy and stability issues, as demonstrated by our theoretical analysis as well as experimental results.
Some of the methods estimate the second-order differential quantities from the surface normals. Goldfeather and Interrante proposed a cubic-order formula that fits the positions and normals of the surface simultaneously to estimate curvatures and principal directions. A face-based approach for computing shape operators using linear interpolation of normals has also been proposed, and Rusinkiewicz proposed a similar face-based curvature estimator from vertex normals. These methods rely on good normal estimations for reliable results. Zorin and coworkers @cite_6 @cite_10 proposed to compute a shape operator using mid-edge normals, which resembles and "corrects" the formula of @cite_7 . Good results were obtained in practice, but there was no theoretical guarantee of its order of convergence.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_6" ], "mid": [ "101526648", "1981660604", "1994974865" ], "abstract": [ "3D face models accurately capture facial surfaces, making it possible for precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). Normal cycle theory based curvature estimation method is employed on 3D face models along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms the state of the art results on the subset of the BU-3DFE database with the same experimental setting.", "We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both-empirical and theoretical-perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.", "An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi: 10.1007 s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most @math iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy @math , and @math iterations, to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians." ] }
0803.2559
2952534738
We study the problem of deciding satisfiability of first order logic queries over views, our aim being to delimit the boundary between the decidable and the undecidable fragments of this language. Views currently occupy a central place in database research, due to their role in applications such as information integration and data warehousing. Our main result is the identification of a decidable class of first order queries over unary conjunctive views that generalises the decidability of the classical class of first order sentences over unary relations, known as the Lowenheim class. We then demonstrate how various extensions of this class lead to undecidability and also provide some expressivity results. Besides its theoretical interest, our new decidable class is potentially interesting for use in applications such as deciding implication of complex dependencies, analysis of a restricted class of active database rules, and ontology reasoning.
As observed earlier, description logics are important logics for expressing constraints on desired models. In @cite_22 , the query containment problem is studied in the context of the description logic @math . There are certain similarities between this and the first order (unary) view languages we have studied in this paper. The key difference appears to be that although @math can be used to define view constraints, these constraints cannot express unary conjunctive views (since assertions do not allow arbitrary projection). Furthermore, @math can express functional dependencies on a single attribute, a feature which would make the UCV language undecidable (see proof of theorem ). There is a result in @cite_22 , however, showing undecidability for a fragment of @math with inequality, which could be adapted to give an alternative proof of theorem (although inequality is used there in a slightly more powerful way).
{ "cite_N": [ "@cite_22" ], "mid": [ "2013409229" ], "abstract": [ "Query containment under constraints is the problem of checking whether for every database satisfying a given set of constraints, the result of one query is a subset of the result of another query. Recent research points out that this is a central problem in several database applications, and we address it within a setting where constraints are specified in the form of special inclusion dependencies over complex expressions, built by using intersection and difference of relations, special forms of quantification, regular expressions over binary relations, and cardinality constraints. These types of constraints capture a great variety of data models, including the relational, the entity-relational, and the object-oriented model. We study the problem of checking whether q is contained in q′ with respect to the constraints specified in a schema S, where q and q′ are nonrecursive Datalog programs whose atoms are complex expressions. We present the following results on query containment. For the case where q does not contain regular expressions, we provide a method for deciding query containment, and analyze its computational complexity. We do the same for the case where neither S nor q, q′ contain number restrictions. To the best of our knowledge, this yields the first decidability result on containment of conjunctive queries with regular expressions. Finally, we prove that the problem is undecidable for the case where we admit inequalities in q′." ] }
0803.2824
2949727990
A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.
A first LWO algorithm for a given intradomain traffic matrix was proposed in @cite_4 . It is based on a tabu-search metaheuristic and finds a nearly-optimal set of link weights that minimizes a particular objective function, namely the sum over all links of a convex function of the link loads and/or utilizations. This problem was later generalized to take several traffic matrices @cite_21 and some link failures @cite_11 into account. A heuristic that takes possible link failure scenarios into account when choosing weights is also proposed in @cite_20 . In our LWO we reuse the heuristic detailed in @cite_4 , but we have adapted this algorithm to consider the effect of hot-potato routing. All the later improvements to this algorithm (i.e., multiple traffic matrices, link failures) could be integrated into our new LWO in a similar way.
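For concreteness, the convex per-link objective used in this line of work is typically a piecewise-linear, increasing penalty of the link utilization. The sketch below uses the breakpoints and slopes commonly quoted for the cost function of Fortz and Thorup; the exact constants are given here only as an illustration and should be checked against @cite_4 before reuse.

```python
# Piecewise-linear convex penalty of link utilization, in the style of Fortz and Thorup.
# The breakpoints/slopes below are the commonly quoted ones and are meant as an illustration.
SEGMENTS = [          # (utilization threshold, slope applied above the previous threshold)
    (1 / 3, 1),
    (2 / 3, 3),
    (9 / 10, 10),
    (1.0, 70),
    (11 / 10, 500),
    (float("inf"), 5000),
]

def link_cost(load, capacity):
    """Penalty Phi(load) for one link; the network objective is the sum over all links."""
    cost, prev, u = 0.0, 0.0, load / capacity
    for threshold, slope in SEGMENTS:
        seg = min(u, threshold) - prev
        if seg <= 0:
            break
        cost += slope * seg * capacity      # convert the utilization slice back to load units
        prev = threshold
    return cost

def network_objective(links):
    """links: iterable of (load, capacity) pairs."""
    return sum(link_cost(l, c) for l, c in links)

# Overloaded links are penalized far more heavily than lightly loaded ones.
print(network_objective([(40, 100), (95, 100), (130, 100)]))
```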
{ "cite_N": [ "@cite_21", "@cite_4", "@cite_20", "@cite_11" ], "mid": [ "1739694831", "1582342575", "2160995598", "1764504968" ], "abstract": [ "Link weight optimization is shown to be a key issue in engineering of IGPs using shortest path first routing. The IGP weight optimization problem seeks a weight array resulting an optimal load distribution in the network based on the topology information and a traffic demand matrix. Several solution methods for various kinds of this problem have been proposed in the literature. However, the interaction of IGP with BGP is generally neglected in these studies. In reality, the optimized weights may not perform as well as expected, since updated link weights can cause shifts in the traffic demand matrix by hot-potato routing in the decision process of BGP. Hot-potato routing occurs when BGP decides the egress router for a destination prefix according to the IGP lengths. This paper mainly investigates the possible degradation of an IGP weight optimization tool due to hot-potato routing under a worst-case example and some experiments which are carried out by using an open source traffic engineering toolbox. Furthermore, it proposes an approach based on robust optimization to overcome the negative effect of hot-potato routing and analyzes its performance", "Intra-domain routing in IP backbone networks relies on link-state protocols such as IS-IS or OSPF. These protocols associate a weight (or cost) with each network link, and compute traffic routes based on these weight. However, proposed methods for selecting link weights largely ignore the issue of failures which arise as part of everyday network operations (maintenance, accidental, etc.). Changing link weights during a short-lived failure is impractical. However such failures are frequent enough to impact network performance. We propose a Tabu-search heuristic for choosing link weights which allow a network to function almost optimally during short link failures. The heuristic takes into account possible link failure scearios when choosing weights, thereby mitigating the effect of such failures. We find that the weights chosen by the heuristic can reduce link overload during transient link failures by as much as 40 at the cost of a small performance degradation in the absence of failures (10 ).", "As the operation of our fiber-optic backbone networks migrates from interconnected SONET rings to arbitrary mesh topology, traffic grooming on wavelength-division multiplexing (WDM) mesh networks becomes an extremely important research problem. To address this problem, we propose a new generic graph model for traffic grooming in heterogeneous WDM mesh networks. The novelty of our model is that, by only manipulating the edges of the auxiliary graph created by our model and the weights of these edges, our model can achieve various objectives using different grooming policies, while taking into account various constraints such as transceivers, wavelengths, wavelength-conversion capabilities, and grooming capabilities. Based on the auxiliary graph, we develop an integrated traffic-grooming algorithm (IGABAG) and an integrated grooming procedure (INGPROC) which jointly solve several traffic-grooming subproblems by simply applying the shortest-path computation method. Different grooming policies can be represented by different weight-assignment functions, and the performance of these grooming policies are compared under both nonblocking scenario and blocking scenario. 
The IGABAG can be applied to both static and dynamic traffic grooming. In static grooming, the traffic-selection scheme is key to achieving good network performance. We propose several traffic-selection schemes based on this model and we evaluate their performance for different network topologies.", "In this paper, we adapt the heuristic of Fortz and Thorup for optimizing the weights of Shortest Path First protocols suchas Open Shortest Path First (OSPF) or Intermediate System-Intermediate System (IS-IS), in order to take into account failurescenarios.More precisely, we want to find a set of weights that is robust to all single link failures. A direct application of the originalheuristic, evaluating all the link failures, is too time consuming for realistic networks, so we developed a method based on acritical set of scenarios aimed to be representative of the whole set of scenarios. This allows us to make the problem manageableand achieve very robust solutions." ] }
0803.2824
2949727990
A well-known approach to intradomain traffic engineering consists in finding the set of link weights that minimizes a network-wide objective function for a given intradomain traffic matrix. This approach is inadequate because it ignores a potential impact on interdomain routing. Indeed, the resulting set of link weights may trigger BGP to change the BGP next hop for some destination prefixes, to enforce hot-potato routing policies. In turn, this results in changes in the intradomain traffic matrix that have not been anticipated by the link weights optimizer, possibly leading to degraded network performance. We propose a BGP-aware link weights optimization method that takes these effects into account, and even turns them into an advantage. This method uses the interdomain traffic matrix and other available BGP data, to extend the intradomain topology with external virtual nodes and links, on which all the well-tuned heuristics of a classical link weights optimizer can be applied. A key innovative asset of our method is its ability to also optimize the traffic on the interdomain peering links. We show, using an operational network as a case study, that our approach does so efficiently at almost no extra computational cost.
Cerav- have already shown in @cite_14 that the link weights found by an LWO may change the intradomain TM considered as input. In that paper they also show that applying the LWO recursively to the resulting intradomain TM may not converge. They propose a method that keeps track of the series of resulting TMs and, at each iteration, optimizes the weights for all the previously obtained intradomain TMs simultaneously. However, they do not consider the general problem with multiple exit points for each destination prefix, let alone take advantage of it.
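Hypothetically, the interaction described above can be written as a fixed-point loop: optimize the weights for the current TM, re-derive the TM that hot-potato routing actually induces under those weights, and repeat, with the cited method optimizing over the whole history of TMs rather than only the latest one. The function names in the sketch below are placeholders, not an existing API.

```python
# Illustrative fixed-point loop between link-weight optimization and hot-potato routing.
# optimize_weights() and hot_potato_tm() are hypothetical placeholders for a real LWO
# heuristic and a BGP/IGP routing simulator; they are NOT library functions.

def bgp_aware_lwo(initial_tm, optimize_weights, hot_potato_tm, max_iters=10):
    history = [initial_tm]                  # keep every TM seen so far, as in @cite_14
    weights = optimize_weights(history)     # optimize for all recorded TMs simultaneously
    for _ in range(max_iters):
        new_tm = hot_potato_tm(weights)     # TM induced by hot-potato next-hop changes
        if new_tm in history:               # reached a fixed point (or a known cycle)
            return weights
        history.append(new_tm)
        weights = optimize_weights(history)
    return weights                          # may not have converged; see @cite_14

# Toy usage with dummy callables, purely to show the control flow:
dummy_opt = lambda tms: ("weights-for", len(tms))
dummy_tm = lambda w: {"tm": min(w[1], 3)}
print(bgp_aware_lwo({"tm": 0}, dummy_opt, dummy_tm))
```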
{ "cite_N": [ "@cite_14" ], "mid": [ "1891300059" ], "abstract": [ "Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is ‘yes’. I derive an initial gradientbased sequence learning algorithm for a ‘self-referential’ recurrent network that can ‘speak’ about its own weight matrix in terms of activations. It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first ‘introspective’ neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step which is independent of the sequence length and equals O(nconnlognconn), where riconn is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient ‘introspective’ or ‘self-referential’ weight change algorithm, but to show that such algorithms are possible at all." ] }
0803.3699
1621497470
We investigate Quantum Key Distribution (QKD) relaying models. Firstly, we propose a novel quasi-trusted QKD relaying model. The quasi-trusted relays are defined as follows: (i) they are honest enough to correctly follow a given multi-party finite-time communication protocol; (ii) however, they are under the monitoring of eavesdroppers. We develop a simple 3-party quasi-trusted model, called the Quantum Quasi-Trusted Bridge (QQTB) model, to show that we could securely extend the limited range of single-photon based QKD schemes by up to a factor of two. We also develop the Quantum Quasi-Trusted Relay (QQTR) model to show that we could securely distribute QKD keys over arbitrarily long distances. The QQTR model requires EPR pair sources, but does not use entanglement swapping or entanglement purification schemes. Secondly, we show that our quasi-trusted models could be improved to become untrusted models in which the security is not compromised even though attackers have full control over some relaying nodes. We call our two improved models the Quantum Untrusted Bridge (QUB) and Quantum Untrusted Relay (QUR) models. The QUB model works on single photons and allows securely extending the limited QKD range by up to a factor of two. The QUR model works on entangled photons but does not use entanglement swapping or entanglement purification operations. This model allows securely transmitting shared keys over arbitrarily long distances without dramatically decreasing the key rate of the original QKD schemes.
Since the range of QKD is limited, QKD relaying methods are necessary. This becomes indispensable when one aims at building QKD networks, as has been done in recent years. All QKD relaying methods proposed so far introduce some undesirable drawbacks. The most practical method is based on the trusted model. This method has been applied in two famous QKD networks, DARPA and SECOQC @cite_20 @cite_14 @cite_15 @cite_10 . In this method, all the relaying nodes must be assumed to be perfectly secure. Such an assumption is critical since passive attacks or eavesdropping on intermediate nodes are very difficult to detect. Even a small number of intermediate nodes could lead to a great vulnerability in practice. Consequently, one wants to limit the number of trusted nodes in QKD networks.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_10", "@cite_20" ], "mid": [ "1580663165", "2006155898", "2097480027", "2138203492" ], "abstract": [ "QKD networks are of much interest due to their capacity of providing extremely high security keys to network participants. Most QKD network studies so far focus on trusted models where all the network nodes are assumed to be perfectly secured. This restricts QKD networks to be small. In this paper, we first develop a novel model dedicated to large-scale QKD networks, some of whose nodes could be eavesdropped secretely. Then, we investigate the key transmission problem in the new model by an approach based on percolation theory and stochastic routing. Analyses show that under computable conditions large-scale QKD networks could protect secret keys with an extremely high probability. Simulations validate our results.", "We show how quantum key distribution (QKD) techniques can be employed within realistic, highly secure communications systems, using the internet architecture for a specific example. We also discuss how certain drawbacks in existing QKD point-to-point links can be mitigated by building QKD networks, where such networks can be composed of trusted relays or untrusted photonic switches.", "We consider the communication scenario where a source-destination pair wishes to keep the information secret from a relay node despite wanting to enlist its help. For this scenario, an interesting question is whether the relay node should be deployed at all. That is, whether cooperation with an untrusted relay node can ever be beneficial. We first provide an achievable secrecy rate for the general untrusted relay channel, and proceed to investigate this question for two types of relay networks with orthogonal components. For the first model, there is an orthogonal link from the source to the relay. For the second model, there is an orthogonal link from the relay to the destination. For the first model, we find the equivocation capacity region and show that answer is negative. In contrast, for the second model, we find that the answer is positive. Specifically, we show, by means of the achievable secrecy rate based on compress-and-forward, that by asking the untrusted relay node to relay information, we can achieve a higher secrecy rate than just treating the relay as an eavesdropper. For a special class of the second model, where the relay is not interfering itself, we derive an upper bound for the secrecy rate using an argument whose net effect is to separate the eavesdropper from the relay. The merit of the new upper bound is demonstrated on two channels that belong to this special class. The Gaussian case of the second model mentioned above benefits from this approach in that the new upper bound improves the previously known bounds. For the Cover-Kim deterministic relay channel, the new upper bound finds the secrecy capacity when the source-destination link is not worse than the source-relay link, by matching with achievable rate we present.", "The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. 
Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding does not only lead to suboptimal performance-it leads to an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes." ] }
0803.0929
2952822878
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph @math and a parameter @math , we produce a weighted subgraph @math of @math such that @math and for all vectors @math @math This improves upon the sparsifiers constructed by Spielman and Teng, which had @math edges for some large constant @math , and upon those of Benczúr and Karger, which only satisfied (*) for @math . A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in @math time.
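A dense, exact-arithmetic reference version of resistance-based edge sampling (in the spirit of the construction described above, but using a Laplacian pseudoinverse instead of the nearly-linear-time data structure) can be sketched as follows; it is only meant to make the sampling rule concrete, not to reproduce the paper's algorithm.

```python
import numpy as np

def sparsify_by_effective_resistance(n, edges, eps=0.5, seed=0):
    """edges: list of (u, v, weight). Returns {(u, v): new_weight} for the kept edges.

    Dense O(n^3) reference illustration: sample edges with probability proportional to
    weight * effective resistance and reweight each sampled copy by 1 / (q * p_e)."""
    rng = np.random.default_rng(seed)
    L = np.zeros((n, n))
    for u, v, w in edges:                            # weighted graph Laplacian
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)                           # Moore-Penrose pseudoinverse

    def reff(u, v):                                  # effective resistance between u and v
        return Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v]

    scores = np.array([w * reff(u, v) for u, v, w in edges])
    p = scores / scores.sum()                        # the scores sum to n - #components
    q = int(np.ceil(n * np.log(n) / eps**2))         # illustrative count, ~O(n log n / eps^2)
    counts = rng.multinomial(q, p)                   # q independent samples with replacement
    return {(u, v): c * w / (q * pe)
            for (u, v, w), c, pe in zip(edges, counts, p) if c > 0}

# Usage on a random graph: the output typically keeps far fewer edges when n is large.
rng = np.random.default_rng(1)
n = 200
edges = [(i, j, 1.0) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.5]
H = sparsify_by_effective_resistance(n, edges)
print(f"{len(edges)} edges -> {len(H)} edges in the sparsifier")
```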
In addition to the graph sparsifiers of @cite_6 @cite_9 @cite_20 , there is a large body of work on sparse @cite_3 @cite_1 and low-rank @cite_4 @cite_1 @cite_0 @cite_23 @cite_16 approximations for general matrices. The algorithms in this literature provide guarantees of the form @math , where @math is the original matrix and @math is obtained by entrywise or columnwise sampling of @math . This is analogous to satisfying (*) only for vectors @math in the span of the dominant eigenvectors of @math ; thus, if we were to use these sparsifiers on graphs, they would only preserve the large cuts. Interestingly, our proof uses some of the same machinery as the low-rank approximation result of Rudelson and Vershynin @cite_0 --- the sampling of edges in our algorithm corresponds to picking @math columns at random from a certain rank @math matrix of dimension @math (this is the matrix @math introduced in Section 3).
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "2089135543", "2949962875", "1554201860", "1998269045", "2103318769", "2062570725", "2103972604", "2899347127", "2950958145" ], "abstract": [ "We give efficient algorithms for volume sampling, i.e., for picking @math -subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). In other words, we can efficiently sample @math -subsets of @math with probabilities proportional to the corresponding @math by @math principal minors of any given @math by @math positive semi definite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section @math of KV , also implicit in BDM, DRVW ). Our first algorithm for volume sampling @math -subsets of rows from an @math -by- @math matrix runs in @math arithmetic operations (where @math is the exponent of matrix multiplication) and a second variant of it for @math -approximate volume sampling runs in @math arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small @math . Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given @math , in @math arithmetic operations we can find @math of its rows such that projecting onto their span gives a @math -approximation to the matrix of rank @math closest to @math under the Frobenius norm. This improves the @math -approximation of Boutsidis, Drineas and Mahoney BDM and matches the lower bound shown in DRVW . The method of conditional expectations gives a algorithm with the same complexity. The running time can be improved to @math at the cost of losing an extra @math in the approximation factor. (2) The same rows and projection as in the previous point give a @math -approximation to the matrix of rank @math closest to @math under the spectral norm. In this paper, we show an almost matching lower bound of @math , even for @math .", "In sparse principal component analysis we are given noisy observations of a low-rank matrix of dimension @math and seek to reconstruct it under additional sparsity assumptions. In particular, we assume here each of the principal components @math has at most @math non-zero entries. We are particularly interested in the high dimensional regime wherein @math is comparable to, or even much larger than @math . In an influential paper, johnstone2004sparse introduced a simple algorithm that estimates the support of the principal vectors @math by the largest entries in the diagonal of the empirical covariance. This method can be shown to identify the correct support with high probability if @math , and to fail with high probability if @math for two constants @math . Despite a considerable amount of work over the last ten years, no practical algorithm exists with provably better support recovery guarantees. Here we analyze a covariance thresholding algorithm that was recently proposed by KrauthgamerSPCA . On the basis of numerical simulations (for the rank-one case), these authors conjectured that covariance thresholding correctly recover the support with high probability for @math (assuming @math of the same order as @math ). 
We prove this conjecture, and in fact establish a more general guarantee including higher-rank as well as @math much smaller than @math . Recent lower bounds berthet2013computational, ma2015sum suggest that no polynomial time algorithm can do significantly better. The key technical component of our analysis develops new bounds on the norm of kernel random matrices, in regimes that were not considered before.", "Given a matrix A e ℝ m ×n of rank r, and an integer k < r, the top k singular vectors provide the best rank-k approximation to A. When the columns of A have specific meaning, it is desirable to find (provably) \"good\" approximations to A k which use only a small number of columns in A. Proposed solutions to this problem have thus far focused on randomized algorithms. Our main result is a simple greedy deterministic algorithm with guarantees on the performance and the number of columns chosen. Specifically, our greedy algorithm chooses c columns from A with @math such that @math where C gr is the matrix composed of the c columns, @math is the pseudo-inverse of C gr ( @math is the best reconstruction of A from C gr), and ¼(A) is a measure of the coherence in the normalized columns of A. The running time of the algorithm is O(SVD(A k) + mnc) where SVD(A k) is the running time complexity of computing the first k singular vectors of A. To the best of our knowledge, this is the first deterministic algorithm with performance guarantees on the number of columns and a (1 + e) approximation ratio in Frobenius norm. The algorithm is quite simple and intuitive and is obtained by combining a generalization of the well known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation. Tightening the analysis along either of these two dimensions would yield improved results.", "Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of “components.” Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an @math matrix @math and a rank parameter @math . In our first algorithm, @math is chosen, and we let @math , where @math is the Moore-Penrose generalized inverse of @math . In our second algorithm @math , @math , @math are chosen, and we let @math . ( @math and @math are matrices that consist of actual columns and rows, respectively, of @math , and @math is a generalized inverse of their intersection.) For each algorithm, we show that with probability at least @math , @math , where @math is the “best” rank- @math approximation provided by truncating the SVD of @math , and where @math is the Frobenius norm of the matrix @math . The number of columns of @math and rows of @math is a low-degree polynomial in @math , @math , and @math . Both the Numerical Linear Algebra community and the Theoretical Computer Science community have studied variants of these matrix decompositions over the last ten years. 
However, our two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist. Both of our algorithms are simple and they take time of the order needed to approximately compute the top @math singular vectors of @math . The technical crux of our analysis is a novel, intuitive sampling method we introduce in this paper called “subspace sampling.” In subspace sampling, the sampling probabilities depend on the Euclidean norms of the rows of the top singular vectors. This allows us to obtain provable relative-error guarantees by deconvoluting “subspace” information and “size-of- @math ” information in the input matrix. This technique is likely to be useful for other matrix approximation and data analysis problems.", "[17] proved that a small sample of rows of a given matrix A contains a low-rank approximation D that minimizes ||A - D||F to within small additive error, and the sampling can be done efficiently using just two passes over the matrix [12]. In this paper, we generalize this result in two ways. First, we prove that the additive error drops exponentially by iterating the sampling in an adaptive manner. Using this result, we give a pass-efficient algorithm for computing low-rank approximation with reduced additive error. Our second result is that using a natural distribution on subsets of rows (called volume sampling), there exists a subset of k rows whose span contains a factor (k + 1) relative approximation and a subset of k + k(k + 1) e rows whose span contains a 1+e relative approximation. The existence of such a small certificate for multiplicative low-rank approximation leads to a PTAS for the following projective clustering problem: Given a set of points P in Rd, and integers k, j, find a set of j subspaces F 1 , . . ., F j , each of dimension at most k, that minimize Σ p∈P min i d(p, F i )2.", "Given a real matrix A∈Rm×n of rank r, and an integer k<r, the sum of the outer products of top k singular vectors scaled by the corresponding singular values provide the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approximations to Ak which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A with c=O(klogkϵ2η2(A)) such that ‖A−ΠCA‖F≤(1+ϵ)‖A−Ak‖F, where C is the matrix composed of the c columns, ΠC is the matrix projecting the columns of A onto the space spanned by C and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well known sparse approximation problem and an existence result on the possibility of sparse approximation. We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm. The results indicate that in practice, the performance of the algorithm can be significantly better than the bounds suggest.", "This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. 
This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices @math , and at each step mainly performs a soft-thresholding operation on the singular values of the matrix @math . There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates @math is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which @math matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4 of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for @math minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.", "Given an m ×n matrix A and an integer k less than the rank of A, the “best” rank k approximation to A that minimizes the error with respect to the Frobenius norm is Ak, which is obtained by projecting A on the top k left singular vectors of A. While Ak is routinely used in data analysis, it is difficult to interpret and understand it in terms of the original data, namely the columns and rows of A. For example, these columns and rows often come from some application domain, whereas the singular vectors are linear combinations of (up to all) the columns or rows of A. We address the problem of obtaining low-rank approximations that are directly interpretable in terms of the original columns or rows of A. Our main results are two polynomial time randomized algorithms that take as input a matrix A and return as output a matrix C, consisting of a “small” (i.e., a low-degree polynomial in k, 1 e, and log(1 δ)) number of actual columns of A such that ||A–CC+A||F ≤(1+e) ||A–Ak||F with probability at least 1–δ. Our algorithms are simple, and they take time of the order of the time needed to compute the top k right singular vectors of A. In addition, they sample the columns of A via the method of “subspace sampling,” so-named since the sampling probabilities depend on the lengths of the rows of the top singular vectors and since they ensure that we capture entirely a certain subspace of interest.", "We consider the problem of selecting the best subset of exactly @math columns from an @math matrix @math . We present and analyze a novel two-stage algorithm that runs in @math time and returns as output an @math matrix @math consisting of exactly @math columns of @math . 
In the first (randomized) stage, the algorithm randomly selects @math columns according to a judiciously-chosen probability distribution that depends on information in the top- @math right singular subspace of @math . In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly @math columns from the set of columns selected in the first stage. Let @math be the @math matrix containing those @math columns, let @math denote the projection matrix onto the span of those columns, and let @math denote the best rank- @math approximation to the matrix @math . Then, we prove that, with probability at least 0.8, @math This Frobenius norm bound is only a factor of @math worse than the best previously existing existential result and is roughly @math better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, @math This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on @math , whereas previous results depend on @math ; if these two quantities are comparable, then our bound is asymptotically worse by a @math factor." ] }
0803.0929
2952822878
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph @math and a parameter @math , we produce a weighted subgraph @math of @math such that @math and for all vectors @math @math This improves upon the sparsifiers constructed by Spielman and Teng, which had @math edges for some large constant @math , and upon those of Benczúr and Karger, which only satisfied (*) for @math . A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in @math time.
The use of effective resistance as a distance in graphs has recently gained attention, as it is often more useful than the ordinary geodesic distance in a graph. For example, in small-world graphs all vertices will be close to one another, but those with a smaller effective resistance distance are connected by more short paths. See, for instance, @cite_19 @cite_21 , which use effective resistance (commute time) as a distance measure in social network graphs.
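For reference, the effective resistance distance mentioned here can be read directly off the Laplacian pseudoinverse, and for an unweighted graph with m edges it is proportional to the random-walk commute time between the two vertices; these are standard identities, stated only for context.

```latex
% Effective resistance from the Laplacian pseudoinverse L^{+} (e_u = u-th standard basis vector):
\[
  R_{\mathrm{eff}}(u,v) \;=\; (e_u - e_v)^{\mathsf T} L^{+} (e_u - e_v)
                       \;=\; L^{+}_{uu} + L^{+}_{vv} - 2\,L^{+}_{uv} .
\]
% For an unweighted graph with m edges, the expected commute time of a simple random walk
% between u and v is proportional to the effective resistance:
\[
  C(u,v) \;=\; 2m \, R_{\mathrm{eff}}(u,v).
\]
```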
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2156286761", "2030435519" ], "abstract": [ "This paper studies an interesting graph measure that we call the effective graph resistance. The notion of effective graph resistance is derived from the field of electric circuit analysis where it is defined as the accumulated effective resistance between all pairs of vertices. The objective of the paper is twofold. First, we survey known formulae of the effective graph resistance and derive other representations as well. The derivation of new expressions is based on the analysis of the associated random walk on the graph and applies tools from Markov chain theory. This approach results in a new method to approximate the effective graph resistance. A second objective of this paper concerns the optimisation of the effective graph resistance for graphs with given number of vertices and diameter, and for optimal edge addition. A set of analytical results is described, as well as results obtained by exhaustive search. One of the foremost applications of the effective graph resistance we have in mind, is the analysis of robustness-related problems. However, with our discussion of this informative graph measure we hope to open up a wealth of possibilities of applying the effective graph resistance to all kinds of networks problems. © 2011 Elsevier Inc. All rights reserved.", "The walk distances in graphs are defined as the result of appropriate transformations of the @?\"k\"=\"0^ (tA)^k proximity measures, where A is the weighted adjacency matrix of a graph and t is a sufficiently small positive parameter. The walk distances are graph-geodetic; moreover, they converge to the shortest path distance and to the so-called long walk distance as the parameter t approaches its limiting values. We also show that the logarithmic forest distances which are known to generalize the resistance distance and the shortest path distance are a specific subclass of walk distances. On the other hand, the long walk distance is equal to the resistance distance in a transformed graph." ] }
0803.1520
2949171548
Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective, and for many applications their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving the replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strengths and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.
In recent years, significant progress has been made towards building practical Byzantine fault tolerant systems, as shown in a series of seminal papers such as @cite_19 @cite_0 @cite_3 @cite_9 . This makes it possible to address the problem of reconciling the requirement of strong replica consistency with the preservation of each replica's randomness for real-world applications that require both high availability and a high degree of security. We believe the work presented in this paper is an important step towards solving this challenging problem.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_9", "@cite_3" ], "mid": [ "1487382749", "2121510533", "2126087831", "2126789306" ], "abstract": [ "Byzantine fault-tolerant storage systems can provide high availability in hazardous environments, but the redundant servers they require increase software development and hardware costs. In order to minimize the number of servers required to implement fault-tolerant storage services, we develop a new algorithm that uses a \"Listeners\" pattern of network communication to detect and resolve ordering ambiguities created by concurrent accesses to the system. Our protocol requires 3f + 1 servers to tolerate up to f Byzantine faults--f fewer than the 4f + 1 required by existing protocols for non-self-verifying data. In addition, SBQ-L provides atomic consistency semantics, which is stronger than the regular or pseudo-atomic semantics provided by these existing protocols. We show that this protocol is optimal in the number of servers-- any protocol that provides safe semantics or stronger requires at least 3f + 1 servers to tolerate f Byzantine faults in an asynchronous system. Finally, we examine a non-confirmable writes variation of the SBQ-L protocol where a client cannot determine when its writes complete. We show that SBQ-L with non-confirmable writes provides regular semantics with 2f + 1 servers and that this number of servers is minimal.", "Researchers have made great strides in improving the fault tolerance of both centralized and replicated systems against arbitrary (Byzantine) faults. However, there are hard limits to how much can be done with entirely untrusted components; for example, replicated state machines cannot tolerate more than a third of their replica population being Byzantine. In this paper, we investigate how minimal trusted abstractions can push through these hard limits in practical ways. We propose Attested Append-Only Memory (A2M), a trusted system facility that is small, easy to implement and easy to verify formally. A2M provides the programming abstraction of a trusted log, which leads to protocol designs immune to equivocation -- the ability of a faulty host to lie in different ways to different clients or servers -- which is a common source of Byzantine headaches. Using A2M, we improve upon the state of the art in Byzantine-fault tolerant replicated state machines, producing A2M-enabled protocols (variants of Castro and Liskov's PBFT) that remain correct (linearizable) and keep making progress (live) even when half the replicas are faulty, in contrast to the previous upper bound. We also present an A2M-enabled single-server shared storage protocol that guarantees linearizability despite server faults. We implement A2M and our protocols, evaluate them experimentally through micro- and macro-benchmarks, and argue that the improved fault tolerance is cost-effective for a broad range of uses, opening up new avenues for practical, more reliable services.", "This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantinefault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. 
Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3 slower than a standard unreplicated NFS.", "We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems." ] }
0803.1520
2949171548
Strong replica consistency is often achieved by writing deterministic applications, or by using a variety of mechanisms to render replicas deterministic. There exists a large body of work on how to render replicas deterministic under the benign fault model. However, when replicas can be subject to malicious faults, most of the previous work is no longer effective. Furthermore, the determinism of the replicas is often considered harmful from the security perspective, and for many applications, their integrity strongly depends on the randomness of some of their internal operations. This calls for new approaches towards achieving replica consistency while preserving replica randomness. In this paper, we present two such approaches. One is based on Byzantine agreement and the other on threshold coin-tossing. Each approach has its strengths and weaknesses. We compare the performance of the two approaches and outline their respective best use scenarios.
The CT-algorithm is inspired by the work of Cachin, Kursawe and Shoup @cite_13 , in particular, the idea of exploiting threshold signature techniques for agreement. However, we adapt this idea to solve a completely different problem: reaching integrity-preserving strong replica consistency. Furthermore, we carefully study what to sign for each request so that the final random number obtained is not vulnerable to attacks.
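To make the mechanism concrete, the following is a minimal Python sketch of how a shared random value could be derived from a combined threshold signature over a request. The `toy_partial_sign` and `toy_combine` helpers are hypothetical stand-ins for a real (t, n) threshold signature scheme; they only simulate its key property that every honest combiner with enough shares obtains the same unpredictable value.

```python
import hashlib
import hmac

# Hypothetical stand-ins for a (t, n) threshold signature scheme: in a real
# system each replica holds a key share, and the combined signature is unique
# and unpredictable unless t valid shares are gathered.
GROUP_KEY = b"shared-group-secret"   # assumption: models the group's signing key
THRESHOLD = 3

def toy_partial_sign(replica_id: int, request: bytes) -> bytes:
    """A replica's signature share over the request (simulated with HMAC)."""
    return hmac.new(GROUP_KEY + bytes([replica_id]), request, hashlib.sha256).digest()

def toy_combine(shares: dict, request: bytes) -> bytes:
    """Combine at least THRESHOLD shares into one 'signature'. The simulation
    hashes the request under the group key, so every honest combiner that
    collects enough shares obtains exactly the same value."""
    if len(shares) < THRESHOLD:
        raise ValueError("not enough signature shares")
    return hmac.new(GROUP_KEY, request, hashlib.sha256).digest()

def shared_random(request: bytes, shares: dict) -> int:
    """All replicas hash the combined signature; the result is identical at
    every replica yet unpredictable to (and unbiasable by) any single one."""
    return int.from_bytes(hashlib.sha256(toy_combine(shares, request)).digest(), "big")

if __name__ == "__main__":
    req = b"client-42:op=generate_session_key:seq=7"
    shares = {i: toy_partial_sign(i, req) for i in range(THRESHOLD)}
    print(shared_random(req, shares))
```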
{ "cite_N": [ "@cite_13" ], "mid": [ "2795777276" ], "abstract": [ "Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using wavelet residual network which synergistically combines the expressive power of deep learning and the performance guarantee from the framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of the deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detail texture of the original images." ] }
0803.1521
2951002888
In this paper, we describe a novel proactive recovery scheme based on service migration for long-running Byzantine fault tolerant systems. Proactive recovery is an essential method for ensuring the long-term reliability of fault tolerant systems that are under continuous threat from malicious adversaries. The primary benefit of our proactive recovery scheme is a reduced vulnerability window. This is achieved by removing the time-consuming reboot step from the critical path of proactive recovery. Our migration-based proactive recovery is coordinated among the replicas; therefore, it can automatically adjust to different system loads and avoid the problem of excessive concurrent proactive recoveries that may occur in previous work with fixed watchdog timeouts. Moreover, the fast proactive recovery also significantly improves the system availability in the presence of faults.
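A toy sketch of the coordination idea is given below, under illustrative assumptions (one recovery per round and a pool of pre-initialised standby nodes); it is not the paper's exact protocol, only a way to visualise how migration keeps recovery off the critical path.

```python
from collections import deque

def migration_recovery_schedule(active, standby, rounds):
    """Toy round-robin schedule for migration-based proactive recovery.

    Each round, one active replica hands its role over to a pre-initialised
    standby node and is then recovered (refreshed) off the critical path;
    at most one replica recovers at a time, so the active quorum is never
    reduced. The single-recovery-per-round policy and the node names are
    illustrative assumptions."""
    active, standby = deque(active), deque(standby)
    schedule = []
    for r in range(rounds):
        victim = active.popleft()        # replica whose turn it is to be recovered
        replacement = standby.popleft()  # standby node takes over immediately
        active.append(replacement)
        standby.append(victim)           # victim recovers in the background, then waits as standby
        schedule.append((r, victim, replacement))
    return schedule

# Example: four active replicas (tolerating f = 1) and two standby nodes.
for step in migration_recovery_schedule(["A", "B", "C", "D"], ["E", "F"], rounds=6):
    print(step)
```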
Finally, the reliance on extra nodes beyond the @math active nodes in our scheme may seem somewhat related to the use of @math additional witness replicas in the fast Byzantine consensus algorithm @cite_13 . However, the extra nodes are needed for completely different purposes. In our scheme, they are required for proactive recovery of long-running Byzantine fault tolerant systems. In @cite_13 , however, they are needed to reach Byzantine consensus in fewer message delays.
{ "cite_N": [ "@cite_13" ], "mid": [ "2147056869" ], "abstract": [ "Much of the past work on asynchronous approximate Byzantine consensus has assumed scalar inputs at the nodes [4, 8]. Recent work has yielded approximate Byzantine consensus algorithms for the case when the input at each node is a d-dimensional vector, and the nodes must reach consensus on a vector in the convex hull of the input vectors at the fault-free nodes [9, 13]. The d-dimensional vectors can be equivalently viewed as points in the d-dimensional Euclidean space. Thus, the algorithms in [9, 13] require the fault-free nodes to decide on a point in the d-dimensional space. In our recent work [12], we proposed a generalization of the consensus problem, namely Byzantine convex consensus (BCC), which allows the decision to be a convex polytope in the d-dimensional space, such that the decided polytope is within the convex hull of the input vectors at the fault-free nodes. We also presented an asynchronous approximate BCC algorithm. In this paper, we propose a new BCC algorithm with optimal fault-tolerance that also agrees on a convex polytope that is as large as possible under adversarial conditions. Our prior work [12] does not guarantee the optimality of the output polytope." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Traditional client/server solutions for security monitoring and protection of large-scale networks rely on the deployment of multiple sensors. These sensors locally collect audit data and forward it to a central server, where it is further analyzed. Early intrusion detection systems such as DIDS @cite_15 and STAT @cite_6 use this architecture and process the monitoring data in one central node. DIDS (Distributed Intrusion Detection System), for instance, is one of the first systems in the literature to use such a monitoring architecture @cite_15 . The main components of DIDS are a central analyzer component called the DIDS director, a set of host-based sensors installed on each monitored host within the protected network, and a set of network-based sensors installed on each broadcasting segment of the target system. The communication channels between the central analyzer and the distributed sensors are bidirectional. This way, the sensors can push their reports asynchronously to the central analyzer while the director is still able to actively request more details from the sensors.
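A minimal Python sketch of this push/pull interaction between sensors and a central director follows; the class names and the trivial correlation rule are illustrative assumptions, not DIDS's actual components.

```python
import queue

class Director:
    """Central analyzer: collects pushed reports and pulls details on demand."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.sensors = {}

    def register(self, sensor):
        self.sensors[sensor.name] = sensor

    def report(self, sensor_name, summary):
        self.inbox.put((sensor_name, summary))            # asynchronous push from a sensor

    def analyse(self):
        while not self.inbox.empty():
            name, summary = self.inbox.get()
            if summary == "failed_login":                  # trivial correlation rule (illustrative)
                print(name, "recent events:", self.sensors[name].details())

class Sensor:
    """Host- or network-based sensor: keeps a local audit log, pushes summaries."""
    def __init__(self, name, director):
        self.name, self.log = name, []
        self.director = director
        director.register(self)

    def observe(self, event):
        self.log.append(event)
        self.director.report(self.name, event["type"])     # forward only a summary

    def details(self, n=5):
        return self.log[-n:]                               # returned when the director asks

director = Director()
host_a = Sensor("host-A", director)
host_a.observe({"type": "failed_login", "user": "root"})
director.analyse()
```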
{ "cite_N": [ "@cite_15", "@cite_6" ], "mid": [ "2891273344", "1541939527" ], "abstract": [ "Today's Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of the CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we proposed a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We used LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor's and actuator's time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account of potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deployed the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks with high detection rate and low false positive rate as compared to existing methods.", "Intrusion detection is the problem of identifying unauthorized use, misuse, and abuse of computer systems by both system insiders and external penetrators. The proliferation of heterogeneous computer networks provides additional implications for the intrusion detection problem. Namely, the increased connectivity of computer systems gives greater access to outsiders, and makes it easier for intruders to avoid detection. IDS’s are based on the belief that an intruder’s behavior will be noticeably different from that of a legitimate user. We are designing and implementing a prototype Distributed Intrusion Detection System (DIDS) that combines distributed monitoring and data reduction (through individual host and LAN monitors) with centralized data analysis (through the DIDS director) to monitor a heterogeneous network of computers. This approach is unique among current IDS’s. A main problem considered in this paper is the Network-user Identification problem, which is concerned with tracking a user moving across the network, possibly with a new user-id on each computer. Initial system prototypes have provided quite favorable results on this problem and the detection of attacks on a network. This paper provides an overview of the motivation behind DIDS, the system architecture and capabilities, and a discussion of the early prototype." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
The issue of sensor distribution is the focus of NetSTAT @cite_18 , an application of STAT (State Transition Analysis Technique) @cite_6 to network-based detection. It is based on NSTAT @cite_19 and comprises several extensions. Based on the attack scenarios and the network facts modeled as a hypergraph, NetSTAT automatically chooses the places at which to probe network activities and applies an analysis of state transitions. This way, it is able to decide what information needs to be collected within the protected network. Although NetSTAT collects network events in a distributed way, it analyzes them in a centralized fashion, similarly to DIDS.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_6" ], "mid": [ "1603713701", "2107409339", "2112953516" ], "abstract": [ "The Reliable Software Group at UCSB has developed a new approach to representing computer penetrations. This approach models penetrations as a series of state transitions described in terms of signature actions and state assertions. State transition representations are written to correspond to the states of an actual computer system, and they form the basis of a rule-based expert system for detecting penetrations. The system is called the State Transition Analysis Tool (STAT). On a network filesystem where the files are distributed on many hosts and where each host mounts directories from the others, actions on each host computer need to be audited. A natural extension of the STAT effort is to run the system on audit data collected by multiple hosts. This means an audit mechanism needs to be run on each host. However, running an implementation of STAT on each host would result in inefficient use of computer resources. In addition, the possibility of having cooperative attacks on different hosts would make detection difficult. Therefore, for the distributed version of STAT, called NSTAT, there is a single STAT process with a single, chronological audit trail. We are currently designing a client server approach to the problem. The client side has two threads: a producer that reads and filters the audit trail and a consumer that sends it to the server. The server side merges the filtered information from the various clients and performs the analysis.", "The paper presents a new approach to representing and detecting computer penetrations in real time. The approach, called state transition analysis, models penetrations as a series of state changes that lead from an initial secure state to a target compromised state. State transition diagrams, the graphical representation of penetrations, identify precisely the requirements for and the compromise of a penetration and present only the critical events that must occur for the successful completion of the penetration. State transition diagrams are written to correspond to the states of an actual computer system, and these diagrams form the basis of a rule based expert system for detecting penetrations, called the state transition analysis tool (STAT). The design and implementation of a Unix specific prototype of this expert system, called USTAT, is also presented. This prototype provides a further illustration of the overall design and functionality of this intrusion detection approach. Lastly, STAT is compared to the functionality of comparable intrusion detection tools. >", "Due to their low cost and small form factors, a large number of sensor nodes can be deployed in redundant fashion in dense sensor networks. The availability of redundant nodes increases network lifetime as well as network fault tolerance. It is, however, undesirable to keep all the sensor nodes active at all times for sensing and communication. An excessive number of active nodes lead to higher energy consumption and it places more demand on the limited network bandwidth. We present an efficient technique for the selection of active sensor nodes in dense sensor networks. The active node selection procedure is aimed at providing the highest possible coverage of the sensor field, i.e., the surveillance area. It also assures network connectivity for routing and information dissemination. 
We first show that the coverage-centric active nodes selection problem is NP-complete. We then present a distributed approach based on the concept of a connected dominating set (CDS). We prove that the set of active nodes selected by our approach provides full coverage and connectivity. We also describe an optimal coverage-centric centralized approach based on integer linear programming. We present simulation results obtained using an ns2 implementation of the proposed technique." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Later approaches try to overcome these disadvantages. GrIDS @cite_8 , EMERALD @cite_2 , and AAfID @cite_9 , for example, propose the use of layered structures, where data is locally pre-processed and filtered, and then further analyzed by intermediate components in a hierarchical fashion. The computational and network load is distributed over multiple analyzers and managers as well as over the different domains to be analyzed. The analyzers and managers of each domain perform their detection for just a small part of the whole network. They forward the processed information to the entity at the top of the hierarchy, i.e., a master node that finally analyzes all reported incidents of the system.
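The following Python sketch illustrates this hierarchical aggregation pattern: leaf analyzers filter locally and forward only aggregated counts upward, and the root (master) node decides globally. The class name, the counting scheme, and the threshold are illustrative assumptions rather than the design of any of the cited systems.

```python
class Analyzer:
    """Node in a GrIDS/EMERALD-style hierarchy: filters local events and
    forwards only aggregated counts to its parent."""
    def __init__(self, name, parent=None, threshold=10):
        self.name, self.parent, self.threshold, self.count = name, parent, threshold, 0

    def local_event(self, suspicious: bool):
        if suspicious:                      # local pre-processing / filtering
            self.propagate(1)

    def propagate(self, n):
        self.count += n
        if self.parent is not None:
            self.parent.propagate(n)        # push the aggregate upward
        elif self.count >= self.threshold:  # root (master) node decides globally
            print(f"{self.name}: possible distributed attack ({self.count} reports)")

root = Analyzer("master", threshold=3)
domain_a = Analyzer("domain-A", parent=root)
domain_b = Analyzer("domain-B", parent=root)
for analyzer, flag in [(domain_a, True), (domain_b, True), (domain_a, False), (domain_b, True)]:
    analyzer.local_event(flag)
```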
{ "cite_N": [ "@cite_9", "@cite_2", "@cite_8" ], "mid": [ "1975618234", "2474152637", "2526145926" ], "abstract": [ "We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.", "Wepropose using relaxed deep supervision (RDS) within convolutional neural networks for edge detection. The conventional deep supervision utilizes the general groundtruth to guide intermediate predictions. Instead, we build hierarchical supervisory signals with additional relaxed labels to consider the diversities in deep neural networks. We begin by capturing the relaxed labels from simple detectors (e.g. Canny). Then we merge them with the general groundtruth to generate the RDS. Finally we employ the RDS to supervise the edge network following a coarse-to-fine paradigm. These relaxed labels can be seen as some false positives that are difficult to be classified. Weconsider these false positives in the supervision, and are able to achieve high performance for better edge detection. Wecompensate for the lack of training images by capturing coarse edge annotations from a large dataset of image segmentations to pretrain the model. 
Extensive experiments demonstrate that our approach achieves state-of-the-art performance on the well-known BSDS500 dataset (ODS F-score of .792) and obtains superior cross-dataset generalization results on NYUD dataset.", "Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Similar to GrIDS, EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) extends the work of IDES (Intrusion Detection Expert System) @cite_25 and NIDES (Next-Generation Intrusion Detection Expert System) @cite_24 by implementing a recursive framework in which generic building blocks can be deployed in a hierarchical fashion @cite_2 . It combines host- and network-based sensors as well as anomaly- and misuse-based analyzers. EMERALD focuses on the protection of large-scale enterprise networks that are divided into independent domains, each with its own security policy. The authors claim to rely on a very efficient communication infrastructure for the exchange of information between the system components. Unfortunately, they provide only a few details regarding their implementation, so a general statement regarding the performance of their infrastructure cannot be made.
{ "cite_N": [ "@cite_24", "@cite_25", "@cite_2" ], "mid": [ "2288766236", "2134269391", "2100537916" ], "abstract": [ "The EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) environment is a distributed scalable tool suite for tracking malicious activity through and across large networks. EMERALD introduces a highly distributed, building-block approach to network surveillance, attack isolation, and automated response. It combines models from research in distributed high-volume event-correlation methodologies with over a decade of intrusion detection research and engineering experience. The approach is novel in its use of highly distributed, independently tunable, surveillance and response monitors that are deployable polymorphically at various abstract layers in a large network. These monitors contribute to a streamlined event-analysis system that combines signature analysis with statistical profiling to provide localized real-time protection of the most widely used network services on the Internet. Equally important, EMERALD introduces a recursive framework for coordinating the dissemination of analyses from the distributed monitors to provide a global detection and response capability that can counter attacks occurring across an entire network enterprise. Further, EMERALD introduces a versatile application programmers' interface that enhances its ability to integrate with heterogeneous target hosts and provides a high degree of interoperability with third-party tool suites.", "Intrusion detection systems (IDSs) fall into two high-level categories: network-based systems (NIDS) that monitor network behaviors, and host-based systems (HIDS) that monitor system calls. In this work, we present a general technique for both systems. We use anomaly detection, which identifies patterns not conforming to a historic norm. In both types of systems, the rates of change vary dramatically over time (due to burstiness) and over components (due to service difference). To efficiently model such systems, we use continuous time Bayesian networks (CTBNs) and avoid specifying a fixed update interval common to discrete-time models. We build generative models from the normal training data, and abnormal behaviors are flagged based on their likelihood under this norm. For NIDS, we construct a hierarchical CTBN model for the network packet traces and use Rao-Blackwellized particle filtering to learn the parameters. We illustrate the power of our method through experiments on detecting real worms and identifying hosts on two publicly available network traces, the MAWI dataset and the LBNL dataset. For HIDS, we develop a novel learning method to deal with the finite resolution of system log file time stamps, without losing the benefits of our continuous time model. We demonstrate the method by detecting intrusions in the DARPA 1998 BSM dataset.", "Prevention of security breaches completely using the existing security technologies is unrealistic. As a result, intrusion detection is an important component in network security. However, many current intrusion detection systems (IDSs) are rule-based systems, which have limitations to detect novel intrusions. Moreover, encoding rules is time-consuming and highly depends on the knowledge of known intrusions. Therefore, we propose new systematic frameworks that apply a data mining algorithm called random forests in misuse, anomaly, and hybrid-network-based IDSs. 
In misuse detection, patterns of intrusions are built automatically by the random forests algorithm over training data. After that, intrusions are detected by matching network activities against the patterns. In anomaly detection, novel intrusions are detected by the outlier detection mechanism of the random forests algorithm. After building the patterns of network services by the random forests algorithm, outliers related to the patterns are determined by the outlier detection algorithm. The hybrid detection system improves the detection performance by combining the advantages of the misuse and anomaly detection. We evaluate our approaches over the knowledge discovery and data mining 1999 (KDDpsila99) dataset. The experimental results demonstrate that the performance provided by the proposed misuse approach is better than the best KDDpsila99 result; compared to other reported unsupervised anomaly detection approaches, our anomaly detection approach achieves higher detection rate when the false positive rate is low; and the presented hybrid system can improve the overall performance of the aforementioned IDSs." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
The AAfID (Architecture for Intrusion Detection using Autonomous Agents) also presents a hierarchical approach to remove the limitations of centralized approaches and, in particular, to provide better resistance to denial-of-service attacks @cite_9 . It consists of four main components called agents, filters, transceivers, and monitors, organized in a tree structure where child and parent components communicate with each other. The communication subsystem of AAfID exhibits a very simplistic design and does not seem to be resistant to denial-of-service attacks as intended. Although the set of agents may communicate with each other to agree upon a common suspicion level regarding every host, all relevant data is simply forwarded to monitors via transceivers, and detecting distributed intrusions still requires human interaction.
{ "cite_N": [ "@cite_9" ], "mid": [ "2124365372" ], "abstract": [ "AAFID is a distributed intrusion detection architecture and system, developed in CERIAS at Purdue University. AAFID was the first architecture that proposed the use of autonomous agents for doing intrusion detection. With its prototype implementation, it constitutes a useful framework for the research and testing of intrusion detection algorithms and mechanisms. We describe the AAFID architecture and the existing prototype, as well as some design and implementation experiences and future research issues. ” 2000 Elsevier Science B.V. All rights reserved." ] }
0802.3718
1592836390
Attacks on information systems followed by intrusions may cause large revenue losses. The prevention of both is not always possible by just considering information from isolated sources of the network. A global view of the whole system is necessary to recognize and react to the different actions of such an attack. The design and deployment of a decentralized system targeted at detecting as well as reacting to information system attacks might benefit from the loose coupling realized by publish subscribe middleware. In this paper, we present the advantages and convenience in using this communication paradigm for a general decentralized attack prevention framework. Furthermore, we present the design and implementation of our approach based on existing publish subscribe middleware and evaluate our approach for GNU Linux systems.
Most of these limitations can be solved efficiently by using a distributed publish/subscribe middleware. The advantage of publish/subscribe communication for our problem domain over other communication paradigms is that it keeps the producers of messages decoupled from the consumers and that the communication is information-driven. This way, it is possible to avoid the scalability and management problems inherent to other designs by means of a network of publishers, brokers, and subscribers. A publisher in a publish/subscribe system does not need to have any knowledge about the entities that consume the published information, since the communication is anonymous. Likewise, the subscribers do not need to know anything about the publishers. New services can simply be added without any impact on, or interruption of, the service to other users. In @cite_3 @cite_13 , we presented an infrastructure inspired by the decentralized architectures discussed above, with the focus on removing the discussed limitations. In the following sections, we present further details on our work.
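A minimal sketch of the decoupling provided by topic-based publish/subscribe is shown below; the broker interface and topic names are illustrative assumptions, not the API of the middleware used in the paper.

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish/subscribe broker: publishers and
    subscribers only know topic names, never each other."""
    def __init__(self):
        self.subscriptions = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, message):
        for deliver in self.subscriptions[topic]:   # anonymous, information-driven delivery
            deliver(message)

broker = Broker()
# A correlation engine subscribes to alerts without knowing which sensors exist.
broker.subscribe("alerts/ssh", lambda m: print("correlator got:", m))
# A sensor publishes without knowing who (if anyone) consumes the alert.
broker.publish("alerts/ssh", {"host": "10.0.0.5", "event": "brute-force attempt"})
```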
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2103856529", "2005903673" ], "abstract": [ "The publish subscribe (pub sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extending a pub sub system in wireless networks has become a promising topic. However, most existing works focus on pub sub systems in infrastructured wireless networks. To adapt pub sub systems to mobile ad hoc networks, we propose DRIP, a dynamic Voronoi region-based pub sub protocol. In our design, the network is dynamically divided into several Voronoi regions after choosing proper nodes as broker nodes. Each broker node is used to collect subscriptions and detected events, as well as efficiently notify subscribers with matched events in its Voronoi region. Other nodes join their nearest broker nodes to submit subscriptions, publish events, and wait for notifications of their requested events. Broker nodes cooperate with each other for sharing subscriptions and useful events. Our proposal includes two major components: a Voronoi regions construction protocol, and a delivery mechanism that implements the pub sub paradigm. The effectiveness of DRIP is demonstrated through comprehensive simulation studies.", "The ability to seamlessly scale on demand has made Content-Based Publish-Subscribe (CBPS) systems the choice of distributing messages documents produced by Content Publishers to many Subscribers through Content Brokers. Most of the current systems assume that Content Brokers are trusted for the confidentiality of the data published by Content Publishers and the privacy of the subscriptions, which specify their interests, made by Subscribers. However, with the increased use of technologies, such as service oriented architectures and cloud computing, essentially outsourcing the broker functionality to third-party providers, one can no longer assume the trust relationship to hold. The problem of providing privacy confidentiality in CBPS systems is challenging, since the solution to the problem should allow Content Brokers to make routing decisions based on the content without revealing the content to them. The previous work attempted to solve this problem was not fully successful. The problem may appear unsolvable since it involves conflicting goals, but in this paper, we propose a novel approach to preserve the privacy of the subscriptions made by Subscribers and confidentiality of the data published by Content Publishers using cryptographic techniques when third-party Content Brokers are utilized to make routing decisions based on the content. Our protocols are expressive to support any type of subscriptions and designed to work efficiently. We distribute the work such that the load on Content Brokers, where the bottleneck is in a CBPS system, is minimized. We extend a popular CBPS system using our protocols to implement a privacy preserving CBPS system." ] }
0801.2069
2950286557
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
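As a rough illustration of the two modifications (uniform state sampling and a projection that cannot increase max-norm), consider the following Python sketch for evaluating a fixed policy on a small flat MDP. The rescaling step is a simplified stand-in for the paper's normalised least-squares operator, and the flat transition matrix ignores the factored structure; the sketch only shows the shape of the algorithm, not the actual FVI implementation.

```python
import numpy as np

def sampled_avi(P, R, H, gamma=0.95, n_samples=50, n_iters=200, seed=0):
    """Approximate value iteration for a fixed policy with sampled backups.

    P: (S, S) transition matrix, R: (S,) rewards, H: (S, K) basis functions.
    Each iteration performs a Bellman backup only on a uniform sample of
    states, fits the weights by least squares, and then shrinks the result
    so the projected values never exceed the max-norm of the backup targets
    (keeping the combined operator a contraction)."""
    rng = np.random.default_rng(seed)
    S, K = H.shape
    w = np.zeros(K)
    for _ in range(n_iters):
        idx = rng.choice(S, size=min(n_samples, S), replace=False)
        targets = R[idx] + gamma * (P[idx] @ (H @ w))          # sampled Bellman backup
        w_new, *_ = np.linalg.lstsq(H[idx], targets, rcond=None)
        scale = np.max(np.abs(targets)) / max(np.max(np.abs(H @ w_new)), 1e-12)
        w = w_new * min(1.0, scale)                            # max-norm guard
    return H @ w

# Tiny smoke test: random 20-state chain with 5 piecewise-constant features.
S, K = 20, 5
P = np.random.dirichlet(np.ones(S), size=S)
R = np.random.rand(S)
H = np.kron(np.eye(K), np.ones((S // K, 1)))
print(sampled_avi(P, R, H)[:5])
```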
The exact solution of factored MDPs is infeasible. The idea of representing a large MDP using a factored model was first proposed by Koller & Parr @cite_3 , but similar ideas already appear in the works of Boutilier, Dearden, & Goldszmidt @cite_2 @cite_8 . More recently, the framework (and some of the algorithms) was extended to fMDPs with hybrid continuous-discrete variables @cite_9 and factored partially observable MDPs @cite_1 . Furthermore, the framework has also been applied to structured MDPs with alternative representations, e.g., relational MDPs @cite_14 and first-order MDPs @cite_13 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_2", "@cite_13" ], "mid": [ "2170400507", "1588316674", "2165304603", "1530444831", "2169997910", "1557798492", "2963986064" ], "abstract": [ "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 1040 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.", "Many large Markov decision processes (MDPs) can be represented compactly using a structured representation such as a dynamic Bayesian network. Unfortunately, the compact representation does not help standard MDP algorithms, because the value function for the MDP does not retain the structure of the process description. We argue that in many such MDPs, structure is approximately retained. That is, the value functions are nearly additive: closely approximated by a linear function over factors associated with small subsets of problem features. Based on this idea, we present a convergent, approximate value determination algorithm for structured MDPs. The algorithm maintains an additive value function, alternating dynamic programming steps with steps that project the result back into the restricted space of additive functions. We show that both the dynamic programming and the projection steps can be computed efficiently, despite the fact that the number of states is exponential in the number of state variables.", "Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solutions. 
The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and optimize its weights by linear programming. We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.", "Recently, structured methods for solving factored Markov decisions processes (MDPs) with large state spaces have been proposed recently to allow dynamic programming to be applied without the need for complete state enumeration. We propose and examine a new value iteration algorithm for MDPs that uses algebraic decision diagrams (ADDs) to represent value functions and policies, assuming an ADD input representation of the MDP. Dynamic programming is implemented via ADD manipulation. We demonstrate our method on a class of large MDPs (up to 63 million states) and show that significant gains can be had when compared to tree-structured representations (with up to a thirty-fold reduction in the number of nodes required to represent optimal value functions).", "Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods cannot adequately address these problems. We present the first framework that can exploit problem structure for modeling and solving hybrid problems efficiently. We formulate these problems as hybrid Markov decision processes (MDPs with continuous and discrete state and action variables), which we assume can be represented in a factored way using a hybrid dynamic Bayesian network (hybrid DBN). This formulation also allows us to apply our methods to collaborative multiagent settings. We present a new linear program approximation method that exploits the structure of the hybrid MDP and lets us compute approximate value functions more efficiently. In particular, we describe a new factored discretization of continuous variables that avoids the exponential blow-up of traditional approaches. We provide theoretical bounds on the quality of such an approximation and on its scale-up potential. We support our theoretical arguments with experiments on a set of control problems with up to 28-dimensional continuous state space and 22-dimensional action space.", "This paper presents two new approaches to decomposing and solving large Markov decision problems (MDPs), a partial decoupling method and a complete decoupling method. In these approaches, a large, stochastic decision problem is divided into smaller pieces. The first approach builds a cache of policies for each part of the problem independently, and then combines the pieces in a separate, light-weight step. A second approach also divides the problem into smaller pieces, but information is communicated between the different problem pieces, allowing intelligent decisions to be made about which piece requires the most attention. Both approaches can be used to find optimal policies or approximately optimal policies with provable bounds. These algorithms also provide a framework for the efficient transfer of knowledge across problems that share similar structure.", "We consider positive covering integer programs, which generalize set cover and which have attracted a long line of research developing (randomized) approximation algorithms. 
Srinivasan (2006) gave a rounding algorithm based on the FKG inequality for systems which are \"column-sparse.\" This algorithm may return an integer solution in which the variables get assigned large (integral) values; Kolliopoulos & Young (2005) modified this algorithm to limit the solution size, at the cost of a worse approximation ratio. We develop a new rounding scheme based on the Partial Resampling variant of the Lovasz Local Lemma developed by Harris & Srinivasan (2013). This achieves an approximation ratio of 1 + ln([EQUATION]), where amin is the minimum covering constraint and Δ1 is the maximum e1-norm of any column of the covering matrix (whose entries are scaled to lie in [0, 1]); we also show nearly-matching inapproximability and integrality-gap lower bounds. Our approach improves asymptotically, in several different ways, over known results. First, it replaces Δ0, the maximum number of nonzeroes in any column (from the result of Srinivasan) by Δ1 which is always - and can be much - smaller than Δ0; this is the first such result in this context. Second, our algorithm automatically handles multi-criteria programs; we achieve improved approximation ratios compared to the algorithm of Srinivasan, and give, for the first time when the number of objective functions is large, polynomial-time algorithms with good multi-criteria approximations. We also significantly improve upon the upper-bounds of Kolliopoulos & Young when the integer variables are required to be within (1 + e) of some given upper-bounds, and show nearly-matching inapproximability." ] }
0801.2069
2950286557
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
Markov decision processes were first formulated as LP tasks by Schweitzer and Seidmann. The approximate LP form is due to de Farias and van Roy. It has been shown that the maximum of local-scope functions can be computed by rephrasing the task as a non-serial dynamic programming task and eliminating variables one by one. Therefore, the LP can be transformed into an equivalent, more compact linear program. The gain may be exponential, but this is not necessarily so in all cases: according to @cite_10 , ``as shown by Dechter, [the cost of the transformation] is exponential in the induced width of the cost network, the undirected graph defined over the variables @math , with an edge between @math and @math if they appear together in one of the original functions @math . The complexity of this algorithm is, of course, dependent on the variable elimination order and the problem structure. Computing the optimal elimination order is an NP-hard problem and elimination orders yielding low induced tree width do not exist for some problems.'' Furthermore, for the approximate LP task, the solution is no longer independent of @math and the optimal choice of the @math values is not known.
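To illustrate the variable elimination step, here is a small self-contained Python sketch that maximises a sum of local-scope functions over binary variables by eliminating one variable at a time. The representation (scope tuples and explicit tables) is an illustrative choice; the cost is exponential only in the induced width of the chosen elimination order.

```python
from itertools import product

def eliminate_max(functions, order):
    """Maximise the sum of local-scope functions over binary variables.

    Each function is a pair (scope, table): scope is a tuple of variable
    names and table maps assignments (0/1 tuples aligned with scope) to
    floats. Variables are maxed out one by one following `order`; the
    intermediate tables grow only with the induced width, not with the
    total number of variables."""
    funcs = list(functions)
    for x in order:
        touching = [f for f in funcs if x in f[0]]
        rest = [f for f in funcs if x not in f[0]]
        new_scope = tuple(sorted({v for s, _ in touching for v in s if v != x}))
        new_table = {}
        for assign in product((0, 1), repeat=len(new_scope)):
            ctx = dict(zip(new_scope, assign))
            best = float("-inf")
            for xv in (0, 1):                      # max out the eliminated variable
                ctx[x] = xv
                best = max(best, sum(t[tuple(ctx[v] for v in s)] for s, t in touching))
            new_table[assign] = best
        funcs = rest + [(new_scope, new_table)]
    return sum(t[()] for _, t in funcs)            # all scopes are empty at this point

# max over x1, x2, x3 of f1(x1, x2) + f2(x2, x3)
f1 = (("x1", "x2"), {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 2.0, (1, 1): 0.5})
f2 = (("x2", "x3"), {(0, 0): 0.0, (0, 1): 3.0, (1, 0): 1.0, (1, 1): 1.0})
print(eliminate_max([f1, f2], order=["x1", "x3", "x2"]))   # -> 5.0 (x1=1, x2=0, x3=1)
```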
{ "cite_N": [ "@cite_10" ], "mid": [ "2170400507" ], "abstract": [ "This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomial-sized one. One algorithm uses approximate linear programming, and the second approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 1040 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time." ] }
0801.2069
2950286557
In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
The approximate policy iteration algorithm also uses an approximate LP reformulation, but it is based on the policy-evaluation Bellman equation. Policy-evaluation equations are, however, linear and do not contain the maximum operator, so there is no need for the second, costly transformation step. On the other hand, the algorithm needs an explicit decision tree representation of the policy. Liberatore @cite_6 has shown that the size of the decision tree representation can grow exponentially.
{ "cite_N": [ "@cite_6" ], "mid": [ "2158738729" ], "abstract": [ "In this paper we consider approximate policy-iteration-based reinforcement learning algorithms. In order to implement a flexible function approximation scheme we propose the use of non-parametric methods with regularization, providing a convenient way to control the complexity of the function approximator. We propose two novel regularized policy iteration algorithms by adding L2-regularization to two widely-used policy evaluation methods: Bellman residual minimization (BRM) and least-squares temporal difference learning (LSTD). We derive efficient implementation for our algorithms when the approximate value-functions belong to a reproducing kernel Hilbert space. We also provide finite-sample performance bounds for our algorithms and show that they are able to achieve optimal rates of convergence under the studied conditions." ] }
0801.2575
1818109327
This paper reviews the fully complete hypergames model of system @math , presented a decade ago in the author's thesis. Instantiating type variables is modelled by allowing ``games as moves''. The uniformity of a quantified type variable @math is modelled by copycat expansion: @math represents an unknown game, a kind of black box, so all the player can do is copy moves between a positive occurrence and a negative occurrence of @math . This presentation is based on slides for a talk entitled ``Hypergame semantics: ten years later'' given at `Games for Logic and Programming Languages', Seattle, August 2006.
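As a toy illustration of copycat expansion, the following Python generator plays the only uniform strategy available on a black-box game: it echoes every opponent move from one occurrence of the unknown game to the other. The two-sided move encoding is an illustrative assumption, not the formal hypergame machinery.

```python
def copycat(opponent_moves):
    """Copycat strategy across two occurrences of an unknown game X:
    whatever the opponent plays in one occurrence is replayed in the other.
    Moves are encoded as (side, move) pairs, where side is 'left' or 'right'."""
    for side, move in opponent_moves:
        other = "left" if side == "right" else "right"
        yield (other, move)        # the only uniform response to a black-box X

# The opponent opens in the right occurrence; the player copies each move across.
print(list(copycat([("right", "q"), ("left", "a")])))   # [('left', 'q'), ('right', 'a')]
```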
Affine linear polymorphism was modelled in @cite_20 (Samson Abramsky's course at this summer school, during the summer before my D.Phil., is in part what inspired my choice of thesis topic) with PER-like ``intersections'' of first-order games of the form @cite_4 @cite_13 . Abramsky and Lenisa have explored systematic ways of modelling quantifiers so that, in the limited case in which all quantifiers are outermost (so in particular positive), models are fully complete @cite_5 . (See subsection for a simple example of a type at which full completeness fails.)
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_4", "@cite_20" ], "mid": [ "2124501788", "2104715607", "2949560198", "2950700385" ], "abstract": [ "We present a linear realizability technique for building Partial Equivalence Relations (PER) categories over Linear Combinatory Algebras. These PER categories turn out to be linear categories and to form an adjoint model with their co-Kleisli categories. We show that a special linear combinatory algebra of partial involutions, arising from Geometry of Interaction constructions, gives rise to a fully and faithfully complete modelfor ML polymorphic types of system F.", "We consider linear problems in fields, ordered fields, discretely valued fields (with finite residue field or residue field of characteristic zero) and fields with finitely many independent orderings and discrete valuations. Most of the fields considered will be of characteristic zero. Formally, linear statements about these structures (with parameters) are given by formulas of the respective first-order language, in which all bound variables occur only linearly. We study symbolic algorithms (linear elimination procedures) that reduce linear formulas to linear formulas of a very simple form, i.e. quantifier-free linear formulas, and algorithms (linear decision procedures) that decide whether a given linear sentence holds in all structures of the given class. For all classes of fields considered, we find linear elimination procedures that run in double exponential space and time. As a consequence, we can show that for fields (with one or several discrete valuations), linear statements can be transferred from characteristic zero to prime characteristic p, provided p is double exponential in the length of the statement. (For similar bounds in the non-linear case, see Brown, 1978.) We find corresponding linear decision procedures in the Berman complexity classes @[email protected]?NSTA(*,2^c^n,dn) for d = 1, 2. In particular, all hese procedures run in exponential space. The technique employed is quantifier elimination via Skolem terms based on Ferrante & Rackoff (1975). Using ideas of Fischer & Rabin (1974), Berman (1977), Furer (1982), we establish lower bounds for these problems showing that our upper bounds are essentially tight. For linear formulas with a bounded number of quantifiers all our algorithms run in polynomial time. For linear formulas of bounded quantifier alternation most of the algorithms run in time 2^O^(^n^^^k^) for fixed k.", "We initiate the probabilistic analysis of linear programming (LP) decoding of low-density parity-check (LDPC) codes. Specifically, we show that for a random LDPC code ensemble, the linear programming decoder of Feldman succeeds in correcting a constant fraction of errors with high probability. The fraction of correctable errors guaranteed by our analysis surpasses previous nonasymptotic results for LDPC codes, and in particular, exceeds the best previous finite-length result on LP decoding by a factor greater than ten. This improvement stems in part from our analysis of probabilistic bit-flipping channels, as opposed to adversarial channels. At the core of our analysis is a novel combinatorial characterization of LP decoding success, based on the notion of a flow on the Tanner graph of the code. 
An interesting by-product of our analysis is to establish the existence of \"probabilistic expansion\" in random bipartite graphs, in which one requires only that almost every (as opposed to every) set of a certain size expands, for sets much larger than in the classical worst case setting.", "Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents are drawn from admixtures of distributions over words, known as topics. The inference problem of recovering topics from admixtures, is NP-hard. Assuming separability, a strong assumption, [4] gave the first provable algorithm for inference. For LDA model, [6] gave a provable algorithm using tensor-methods. But [4,6] do not learn topic vectors with bounded @math error (a natural measure for probability vectors). Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded @math error. A topic in LDA and other models is essentially characterized by a group of co-occurring words. Motivated by this, we introduce topic specific Catchwords, group of words which occur with strictly greater frequency in a topic than any other topic individually and are required to have high frequency together rather than individually. A major contribution of the paper is to show that under this more realistic assumption, which is empirically verified on real corpora, a singular value decomposition (SVD) based algorithm with a crucial pre-processing step of thresholding, can provably recover the topics from a collection of documents drawn from Dominant admixtures. Dominant admixtures are convex combination of distributions in which one distribution has a significantly higher contribution than others. Apart from the simplicity of the algorithm, the sample complexity has near optimal dependence on @math , the lowest probability that a topic is dominant, and is better than [4]. Empirical evidence shows that on several real world corpora, both Catchwords and Dominant admixture assumptions hold and the proposed algorithm substantially outperforms the state of the art [5]." ] }
0801.3372
2057867606
This paper studies the effect of discretizing the parametrization of a dictionary used for matching pursuit (MP) decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
A similar approach to our geometric analysis of the MP atom selection rule has been proposed in @cite_10 . In that paper, a dictionary of ( @math -normalized) wavelets is seen as a manifold associated with a Riemannian metric. However, the authors restrict their work to wavelet parametrizations inherited from Lie groups (such as the affine group). They also work only with the @math (dictionary) distance between dictionary atoms and do not introduce an intrinsic geodesic distance. They define a discretization of the parametrization @math such that, in our notations, @math , with @math the local width of the cell localized on @math . There is, however, no analysis of the effect of this discretization on the MP rate of convergence.
{ "cite_N": [ "@cite_10" ], "mid": [ "2069912449" ], "abstract": [ "This paper introduces new tight frames of curvelets to address the problem of finding optimally sparse representations of objects with discontinuities along piecewise C 2 edges. Conceptually, the curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. These elements have many useful geometric multiscale features that set them apart from classical multiscale representations such as wavelets. For instance, curvelets obey a parabolic scaling relation which says that at scale 2 -j , each element has an envelope that is aligned along a ridge of length 2 -j 2 and width 2 -j . We prove that curvelets provide an essentially optimal representation of typical objects f that are C 2 except for discontinuities along piecewise C 2 curves. Such representations are nearly as sparse as if f were not singular and turn out to be far more sparse than the wavelet decomposition of the object. For instance, the n-term partial reconstruction f C n obtained by selecting the n largest terms in the curvelet series obeys ∥f - f C n ∥ 2 L2 ≤ C . n -2 . (log n) 3 , n → ∞. This rate of convergence holds uniformly over a class of functions that are C 2 except for discontinuities along piecewise C 2 curves and is essentially optimal. In comparison, the squared error of n-term wavelet approximations only converges as n -1 as n → ∞, which is considerably worse than the optimal behavior." ] }
0801.3372
2057867606
This paper studies the effect of discretizing the parametrization of a dictionary used for matching pursuit (MP) decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
In @cite_33 , the author uses a 4-dimensional Gaussian chirp dictionary to analyze 1-D signals with the MP algorithm. He develops a fast procedure to find the best atom of this dictionary in the representation of the current MP residual by applying a two-step search. First, by setting the chirp rate parameter to zero, the best common Gabor atom is found with a full search procedure taking advantage of the FFT algorithm. Next, a ridge theorem proves that, starting from this Gabor atom, the best Gaussian chirp atom can be approximated with a controlled error. The whole method is similar to the development of our optimized matching pursuit since we also start from a discrete parametrization to find a better atom in the continuous one. However, our approach is more general since we are not restricted to a specific dictionary. We use the intrinsic geometry of any smooth dictionary manifold to perform an optimization driven by a geometric gradient ascent.
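The two-step idea described above, a full search over a discrete grid followed by a local optimization of the continuous atom parameter, can be illustrated with a toy one-dimensional example. The Python sketch below uses a Gaussian atom parametrized only by its center and refines the grid pick by gradient ascent on the correlation with the residual; the atom family, the step size, and the numerical gradient are illustrative simplifications, not the chirp dictionary or the ridge estimate of the cited work.

import numpy as np

N = 256
t = np.arange(N)

def atom(center, scale=8.0):
    # Unit-norm Gaussian atom centered at a (possibly non-integer) position.
    g = np.exp(-0.5 * ((t - center) / scale) ** 2)
    return g / np.linalg.norm(g)

rng = np.random.default_rng(2)
signal = 2.0 * atom(71.3) + 0.05 * rng.standard_normal(N)
residual = signal.copy()

# Step 1: full search over a coarse discrete grid of centers.
grid = np.arange(0.0, N, 4.0)
corr = lambda c: abs(float(residual @ atom(c)))
best = max(grid, key=corr)

# Step 2: local refinement of the continuous center by gradient ascent
# on the correlation, using a centered numerical derivative.
c, step, eps = float(best), 5.0, 1e-3
for _ in range(50):
    grad = (corr(c + eps) - corr(c - eps)) / (2 * eps)
    c += step * grad

print("grid pick:", best, "refined center:", round(c, 2))  # refined value moves toward 71.3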
{ "cite_N": [ "@cite_33" ], "mid": [ "2141660238" ], "abstract": [ "We introduce a modified matching pursuit algorithm, called fast ridge pursuit, to approximate N-dimensional signals with M Gaussian chirps at a computational cost O(MN) instead of the expected O(MN sup 2 logN). At each iteration of the pursuit, the best Gabor atom is first selected, and then, its scale and chirp rate are locally optimized so as to get a \"good\" chirp atom, i.e., one for which the correlation with the residual is locally maximized. A ridge theorem of the Gaussian chirp dictionary is proved, from which an estimate of the locally optimal scale and chirp is built. The procedure is restricted to a sub-dictionary of local maxima of the Gaussian Gabor dictionary to accelerate the pursuit further. The efficiency and speed of the method is demonstrated on a sound signal." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
The Voronoi diagram @cite_18 of a finite set of points @math is a partition of the plane into @math regions such that each region contains exactly those points having the same nearest neighbor in @math . The points in @math are also called sites. It is well known that the Voronoi diagram of a set of points is the face dual of the Delaunay graph of that set of points @cite_18 , i.e., two points have adjacent Voronoi regions if and only if they share an edge in the Delaunay graph (see Figure ).
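To make the two definitions above concrete, the following Python sketch (the point coordinates and the query point are arbitrary illustrative choices) locates the Voronoi region of a query point by brute-force nearest-neighbor search over the sites, and extracts the dual Delaunay graph with scipy.spatial.Delaunay; for points in general position, two sites share a Delaunay edge exactly when their Voronoi regions are adjacent.

import numpy as np
from scipy.spatial import Delaunay

# The sites S; any non-degenerate planar point set works here.
sites = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9], [0.2, 0.7]])

def voronoi_region(q, sites):
    # Index of the site whose Voronoi region contains the query point q:
    # simply the nearest neighbor of q among the sites.
    return int(np.argmin(np.linalg.norm(sites - q, axis=1)))

# Delaunay graph: two sites are adjacent iff they appear in a common triangle.
tri = Delaunay(sites)
delaunay_edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        delaunay_edges.add((a, b))

print(voronoi_region(np.array([0.6, 0.6]), sites))  # which region contains (0.6, 0.6)
print(sorted(delaunay_edges))                       # pairs of sites with adjacent regions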
{ "cite_N": [ "@cite_18" ], "mid": [ "2000879295" ], "abstract": [ "The Voronoi diagram is a famous structure of computational geometry. We show that there is a straightforward equivalent in graph theory which can be efficiently computed. In particular, we give two algorithms for the computation of graph Voronoi diagrams, prove a lower bound on the problem, and identify cases where the algorithms presented are optimal. The space requirement of a graph Voronoi diagram is modest, since it needs no more space than does the graph itself. The investigation of graph Voronoi diagrams is motivated by many applications and problems on networks that can be easily solved with their help. This includes the computation of nearest facilities, all nearest neighbors and closest pairs, some kind of collision free moving, and anticenters and closest points. © 2000 John Wiley & Sons, Inc." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Let @math be a real number. Two sets of points @math and @math in @math are well-separated if there exist two @math -dimensional balls @math and @math of the same radius @math , respectively containing the bounding boxes of @math and @math , such that the distance between @math and @math is greater than or equal to @math . The distance between @math and @math is defined as the distance between their centers minus @math . A well-separated pair decomposition (WSPD) @cite_3 @cite_29 is a set of unordered pairs @math of subsets of @math that are well-separated with respect to @math , with the additional property that for every two points @math there is exactly one pair @math such that @math and @math . It has been shown that for @math , every point set admits a WSPD with separation ratio @math of @math size that can be computed in @math time. Choosing one edge per pair allows one to construct a @math -spanner that has @math size with @math .
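The separation condition described above can be checked directly from its definition. The Python sketch below tests whether two planar point sets are well-separated with respect to a ratio s; the choice of enclosing ball (the ball circumscribing each bounding box, inflated to a common radius) and the sample point sets are illustrative assumptions, and a full WSPD construction via a split tree is omitted.

import numpy as np

def enclosing_ball(P):
    # Center and radius of a ball containing the bounding box of P
    # (the ball circumscribing the box).
    lo, hi = P.min(axis=0), P.max(axis=0)
    return (lo + hi) / 2.0, np.linalg.norm(hi - lo) / 2.0

def well_separated(A, B, s):
    ca, ra = enclosing_ball(A)
    cb, rb = enclosing_ball(B)
    r = max(ra, rb)                        # both sets get a ball of the same radius r
    gap = np.linalg.norm(ca - cb) - 2 * r  # distance between the two balls
    return gap >= s * r

A = np.array([[0.0, 0.0], [0.1, 0.1]])
B = np.array([[5.0, 5.0], [5.2, 4.9]])
print(well_separated(A, B, s=2.0))  # True: the gap is large relative to r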
{ "cite_N": [ "@cite_29", "@cite_3" ], "mid": [ "2082352769", "2037489203" ], "abstract": [ "For an unweighted graph @math , @math is a subgraph if @math , and @math is a Steiner graph if @math , and for any pair of vertices @math , the distance between them in @math (denoted @math ) is at least the distance between them in @math (denoted @math ). In this paper we introduce the notion of distance preserver. A subgraph (resp., Steiner graph) @math of a graph @math is a subgraph (resp., Steiner) @math -preserver of @math if for every pair of vertices @math with @math , @math . We show that any graph (resp., digraph) has a subgraph @math -preserver with at most @math edges (resp., arcs), and there are graphs and digraphs for which any undirected Steiner @math -preserver contains @math edges. However, we show that if one allows a directed Steiner (diSteiner) @math -preserver, then these bounds can be improved. Specifically, we show that for any graph or digraph there exists a diSteiner @math -preserver with @math arcs, and that this result is tight up to a constant factor. We also study @math -preserving distance labeling schemes, that are labeling schemes that guarantee precise calculation of distances between pairs of vertices that are at a distance of at least @math one from another. We show that there exists a @math -preserving labeling scheme with labels of size @math , and that labels of size @math are required for any @math -preserving labeling scheme.", "Let @math be a weighted undirected graph having nonnegative edge weights. An estimate @math of the actual distance @math between @math is said to be of stretch @math if and only if @math . Computing all-pairs small stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs @math -stretch distances for a whole range of stretch @math , and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24]." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Unit disk graphs @cite_17 @cite_8 have received a lot of attention from the wireless community. A unit disk graph is a graph whose nodes are points in the plane and whose edges join two points whose distance is at most one unit. It is well known that intersecting a unit disk graph with the Delaunay or the Yao graph of the points provides a @math -spanner of the unit disk graph @cite_12 , where the constant @math is the same as that of the original graph. However, this simple strategy does not work with all spanners; in particular, it does not work with the @math -graph @cite_25 . Unit disk graphs can be seen as intersection graphs of disks of the same radius in the plane. The general problem of computing spanners for geometric intersection graphs has also been studied.
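As a small illustration of the strategy mentioned above (intersecting the unit disk graph with the Delaunay graph of the same points), the Python sketch below builds both edge sets and keeps only the Delaunay edges of length at most one unit; the random point set and the scale are illustrative choices, and no spanning-ratio claim is verified here.

import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

pts = np.random.default_rng(0).random((20, 2)) * 3.0  # points in a 3x3 square

def dist(i, j):
    return float(np.linalg.norm(pts[i] - pts[j]))

# Unit disk graph: an edge joins two points whose distance is at most one unit.
udg = {(i, j) for i, j in combinations(range(len(pts)), 2) if dist(i, j) <= 1.0}

# Delaunay edges of the same point set.
delaunay = set()
for simplex in Delaunay(pts).simplices:
    for a, b in combinations(sorted(int(v) for v in simplex), 2):
        delaunay.add((a, b))

# Intersection: Delaunay edges that are also unit disk edges.
restricted = udg & delaunay
print(len(udg), len(delaunay), len(restricted))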
{ "cite_N": [ "@cite_8", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2063572899", "1982378203", "2154745646", "1766136887" ], "abstract": [ "Unit disk graphs are the intersection graphs of equal sized circles in the plane: they provide a graph-theoretic model for broadcast networks (cellular networks) and for some problems in computational geometry. We show that many standard graph theoretic problems remain NP-complete on unit disk graphs, including coloring, independent set, domination, independent domination, and connected domination; NP-completeness for the domination problem is shown to hold even for grid graphs, a subclass of unit disk graphs. In contrast, we give a polynomial time algorithm for finding cliques when the geometric representation (circles in the plane) is provided.", "Unit disk graphs are the intersection graphs of unit diameter closed disks in the plane. This paper reduces SATISFIABILITY to the problem of recognizing unit disk graphs. Equivalently, it shows that determining if a graph has sphericity 2 or less, even if the graph is planar or is known to have sphericity at most 3, is NP-hard. We show how this reduction can be extended to 3 dimensions, thereby showing that unit sphere graph recognition, or determining if a graph has sphericity 3 or less, is also NP-hard. We conjecture that K-sphericity is NP-hard for all fixed K greater than 1.", "A unit disk graph is the intersection graph of unit disks in the euclidean plane. We present a polynomial-time approximation scheme for the maximum weight independent set problem in unit disk graphs. In contrast to previously known approximation schemes, our approach does not require a geometric representation (specifying the coordinates of the disk centers). The approximation algorithm presented is robust in the sense that it accepts any graph as input and either returns a (1+e)-approximate independent set or a certificate showing that the input graph is no unit disk graph. The algorithm can easily be extended to other families of intersection graphs of geometric objects.", "The simplest model of a wireless network graph is the Unit Disk Graph (UDG): an edge exists in UDG if the Euclidean distance between its endpoints is ≤ 1. The problem of constructing planar spanners of Unit Disk Graphs with respect to the Euclidean distance has received considerable attention from researchers in computational geometry and ad-hoc wireless networks. In this paper, we present an algorithm that, given a set X of terminals in the plane, constructs a planar hop spanner with constant stretch factor for the Unit Disk Graph defined by X. Our algorithm improves on previous constructions in the sense that (i) it ensures the planarity of the whole spanner while previous algorithms ensure only the planarity of a backbone subgraph; (ii) the hop stretch factor of our spanner is significantly smaller." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
Another graph that has been looked at is the complete @math -partite geometric graph. In that case, points are assigned a unique color (which may be thought of as a positive integer) between 1 and @math , and there is an edge between two points if and only if they are assigned different colors. Bose et al. @cite_21 showed that the WSPD can be adapted to compute a @math -spanner of that graph that has @math edges for arbitrary values of @math strictly greater than 5.
{ "cite_N": [ "@cite_21" ], "mid": [ "2045611333" ], "abstract": [ "Bounds are given on the number of colors required to color the edges of a graph (multigraph) such that each color appears at each vertex v at most m(ν) times. The known results and proofs generalize in natural ways. Certain new edge-coloring problems, which have no counterparts when m(ν) = 1 for all ν ϵ V, are studied." ] }
0801.4013
2952269279
We study the problem of computing geometric spanners for (additively) weighted point sets. A weighted point set is a set of pairs @math where @math is a point in the plane and @math is a real number. The distance between two points @math and @math is defined as @math . We show that in the case where all @math are positive numbers and @math for all @math (in which case the points can be seen as non-intersecting disks in the plane), a variant of the Yao graph is a @math -spanner that has a linear number of edges. We also show that the Additively Weighted Delaunay graph (the face-dual of the Additively Weighted Voronoi diagram) has constant spanning ratio. The straight line embedding of the Additively Weighted Delaunay graph may not be a plane graph. We show how to compute a plane embedding that also has a constant spanning ratio.
For spanners of arbitrary geometric graphs, much less is known. Althöfer et al. @cite_26 have shown that for any @math , every weighted graph @math with @math vertices contains a subgraph with @math edges, which is a @math -spanner of @math . Observe that this result holds for any weighted graph; in particular, it is valid for any geometric graph. For geometric graphs, a lower bound was given by Gudmundsson and Smid @cite_6 : they proved that for every real number @math with @math , there exists a geometric graph @math with @math vertices, such that every @math -spanner of @math contains @math edges. Thus, if we are looking for spanners with @math edges of arbitrary geometric graphs, then the best spanning ratio we can obtain is @math .
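The subgraph guarantee quoted above is classically obtained with the greedy spanner algorithm: scan the edges by nondecreasing weight and keep an edge only if the spanner built so far does not already offer a path at most t times longer. The Python sketch below implements that standard construction for the complete Euclidean graph on a random point set; it illustrates the technique rather than the exact constructions or proofs of the cited papers.

import heapq
import numpy as np
from itertools import combinations

def greedy_spanner(pts, t):
    n = len(pts)
    w = lambda i, j: float(np.linalg.norm(pts[i] - pts[j]))
    adj = [dict() for _ in range(n)]  # adjacency map of the partial spanner

    def spanner_dist(src, dst):
        # Dijkstra restricted to the edges kept so far.
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, float("inf")):
                continue
            for v, duv in adj[u].items():
                nd = d + duv
                if nd < best.get(v, float("inf")):
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf")

    kept = []
    for i, j in sorted(combinations(range(n), 2), key=lambda e: w(*e)):
        if spanner_dist(i, j) > t * w(i, j):  # no short path yet: keep the edge
            adj[i][j] = adj[j][i] = w(i, j)
            kept.append((i, j))
    return kept

pts = np.random.default_rng(1).random((30, 2))
edges = greedy_spanner(pts, t=1.5)
print(len(edges), "edges kept out of", 30 * 29 // 2)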
{ "cite_N": [ "@cite_26", "@cite_6" ], "mid": [ "1496182974", "2951859740" ], "abstract": [ "Given a connected geometric graph G, we consider the problem of constructing a t-spanner of G having the minimum number of edges. We prove that for every t with @math , there exists a connected geometric graph G with n vertices, such that every t-spanner of G contains Ω( n1+1 t ) edges. This bound almost matches the known upper bound, which states that every connected weighted graph with n vertices contains a t-spanner with O(tn1+2 (t+1)) edges. We also prove that the problem of deciding whether a given geometric graph contains a t-spanner with at most K edges is NP-hard. Previously, this NP-hardness result was only known for non-geometric graphs", "Given an undirected @math -node unweighted graph @math , a spanner with stretch function @math is a subgraph @math such that, if two nodes are at distance @math in @math , then they are at distance at most @math in @math . Spanners are very well studied in the literature. The typical goal is to construct the sparsest possible spanner for a given stretch function. In this paper we study pairwise spanners, where we require to approximate the @math - @math distance only for pairs @math in a given set @math . Such @math -spanners were studied before [Coppersmith,Elkin'05] only in the special case that @math is the identity function, i.e. distances between relevant pairs must be preserved exactly (a.k.a. pairwise preservers). Here we present pairwise spanners which are at the same time sparser than the best known preservers (on the same @math ) and of the best known spanners (with the same @math ). In more detail, for arbitrary @math , we show that there exists a @math -spanner of size @math with @math . Alternatively, for any @math , there exists a @math -spanner of size @math with @math . We also consider the relevant special case that there is a critical set of nodes @math , and we wish to approximate either the distances within nodes in @math or from nodes in @math to any other node. We show that there exists an @math -spanner of size @math with @math , and an @math -spanner of size @math with @math . All the mentioned pairwise spanners can be constructed in polynomial time." ] }
0801.0523
1663280704
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purpose). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
As a summary, proofs written for versions of the project up to version 0.8 @cite_24 are typically composed of several pages of paper proof and several pages of supporting Maple for a few lines of code. This provides excellent documentation and helps maintain the code, but experience has consistently shown that such proofs are extremely error-prone. Implementing the error computation in Maple was a first step towards the automation of this process, but while it helps avoid computation mistakes, it does not prevent methodological mistakes. Gappa was designed, among other objectives, to fill this void.
{ "cite_N": [ "@cite_24" ], "mid": [ "1970278864" ], "abstract": [ "In this work we look back into the proof of the PCP (probabilistically checkable proofs) theorem, with the goal of finding new proofs that are “more combinatorial” and arguably simpler. For that we introduce the notion of an assignment tester, which is a strengthening of the standard PCP verifier, in the following sense. Given a statement and an alleged proof for it, while the PCP verifier checks correctness of the statement, the assignment tester checks correctness of the statement and the proof. This notion enables composition that is truly modular; i.e., one can compose two assignment testers without any assumptions on how they are constructed. A related notion called PCPs of proximity was independently introduced in [E. Ben-, Proceedings of the 36th Annual ACM Symposium on Theory of Computing, Chicago, IL, 2004, ACM, New York, 2004, pp. 1-10]. We provide a toolkit of (nontrivial) generic transformations on assignment testers. These transformations may be interesting in their own right, and allow us to present the following two main results: 1. A new proof of the PCP theorem. This proof relies on a rather weak assignment tester given as a “black box.” From this, we construct combinatorially the full PCP. An important component of this proof is a new combinatorial aggregation technique (i.e., a new transformation that allows the verifier to read fewer, though possibly longer, “pieces” of the proof). An implementation of the black-box tester can be obtained from the algebraic proof techniques that already appear in [L. , Proceedings of the 23rd ACM Symposium on Theory of Computing, New Orleans, LA, 1991, ACM, New York, 1991, pp. 21-31; U. , J. ACM, 43 (1996), pp. 268-292]. 2. Our second construction is a “standalone” combinatorial construction showing @math . This implies, for example, that approximating max-SAT is quasi-NP-hard. This construction relies on a transformation that makes an assignment tester “oblivious,” so that the proof locations read are independent of the statement that is being proven. This eliminates, in a rather surprising manner, the need for aggregation in a crucial point in the proof." ] }
0801.0523
1663280704
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. Such work may require several lines of proof for each line of code, and will usually be broken by the smallest change to the code (e.g. for maintenance or optimization purpose). Certifying these programs by hand is therefore very tedious and error-prone. This article discusses the use of the Gappa proof assistant in this context. Gappa has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic. Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is accessible to a wide community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lower-level proof assistant like Coq, hence providing an even higher confidence in the certification of the numerical code. The article demonstrates the use of this tool on a real-size example, an elementary function with correctly rounded output.
There have been other attempts at assisted proofs of elementary functions or similar floating-point code. The pure formal proof approach of Harrison @cite_18 @cite_19 @cite_7 goes deeper than the Gappa approach, as it accounts for approximation errors. However, it is accessible only to experts in formal proofs, and it is fragile in case of a change to the code. The approach of Krämer @cite_30 @cite_12 relies on operator overloading and does not provide a formal proof.
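To illustrate the operator-overloading style mentioned for Krämer's approach (this is a generic Python sketch of the idea, not his actual library, and it ignores the directed rounding of interval endpoints that a serious tool must handle), here is a tiny interval type whose arithmetic operators propagate enclosures through an expression.

class Interval:
    # A closed interval [lo, hi]; arithmetic returns enclosures of the result.
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.99, 1.01)      # a value known only up to +/- 0.01
y = Interval(2.0)             # an exact value
print(x * y - Interval(0.5))  # enclosure of x*y - 0.5, approximately [1.48, 1.52]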
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_7", "@cite_19", "@cite_12" ], "mid": [ "1988579293", "2021217869", "1672719503", "2008724760", "1499638282" ], "abstract": [ "Floating-point arithmetic is known to be tricky: roundings, formats, exceptional values. The IEEE-754 standard was a push towards straightening the field and made formal reasoning about floating-point computations easier and flourishing. Unfortunately, this is not sufficient to guarantee the final result of a program, as several other actors are involved: programming language, compiler, and architecture. The CompCert formally-verified compiler provides a solution to this problem: this compiler comes with a mathematical specification of the semantics of its source language (a large subset of ISO C99) and target platforms (ARM, PowerPC, x86-SSE2), and with a proof that compilation preserves semantics. In this paper, we report on our recent success in formally specifying and proving correct CompCert's compilation of floating-point arithmetic. Since CompCert is verified using the Coq proof assistant, this effort required a suitable Coq formalization of the IEEE-754 standard; we extended the Flocq library for this purpose. As a result, we obtain the first formally verified compiler that provably preserves the semantics of floating-point programs.", "A well known but incorrect piece of functional programming folklore is that ML expressions can be efficiently typed in polynomial time. In probing the truth of that folklore, various researchers, including Wand, Buneman, Kanellakis, and Mitchell, constructed simple counterexamples consisting of typable ML programs having length n , with principal types having O(2 cn ) distinct type variables and length O(2 2cn ). When the types associated with these ML constructions were represented as directed acyclic graphs, their sizes grew as O(2 cn ). The folklore was even more strongly contradicted by the recent result of Kanellakis and Mitchell that simply deciding whether or not an ML expression is typable is PSPACE-hard. We improve the latter result, showing that deciding ML typability is DEXPTIME-hard. As Kanellakis and Mitchell have shown containment in DEXPTIME, the problem is DEXPTIME-complete. The proof of DEXPTIME-hardness is carried out via a generic reduction: it consists of a very straightforward simulation of any deterministic one-tape Turing machine M with input k running in O ( c |k| ) time by a polynomial-sized ML formula P M,k , such that M accepts k iff P M,k is typable. The simulation of the transition function δ of the Turing Machine is realized uniquely through terms in the lambda calculus without the use of the polymorphic let construct. We use let for two purposes only: to generate an exponential amount of blank tape for the Turing Machine simulation to begin, and to compose an exponential number of applications of the ML formula simulating state transition. It is purely the expressive power of ML polymorphism to succinctly express function composition which results in a proof of DEXPTIME-hardness. We conjecture that lower bounds on deciding typability for extensions to the typed lambda calculus can be regarded precisely in terms of this expressive capacity for succinct function composition. To further understand this lower bound, we relate it to the problem of proving equality of type variables in a system of type equations generated from an ML expression with let-polymorphism. 
We show that given an oracle for solving this problem, deciding typability would be in PSPACE, as would be the actual computation of the principal type of the expression, were it indeed typable.", "Since they often embody compact but mathematically sophisticated algorithms, operations for computing the common transcendental functions in floating point arithmetic seem good targets for formal verification using a mechanical theorem prover. We discuss some of the general issues that arise in verifications of this class, and then present a machine-checked verification of an algorithm for computing the exponential function in IEEE-754 standard binary floating point arithmetic. We confirm (indeed strengthen) the main result of a previously published error analysis, though we uncover a minor error in the hand proof and are forced to confront several subtle issues that might easily be overlooked informally.", "In this paper we study a fundamental open problem in the area of probabilistic checkable proofs: What is the smallest s such that NP ⊆ naPCP1,s[O(log n),3]? In the language of hardness of approximation, this problem is equivalent to determining the smallest s such that getting an s-approximation for satisfiable 3-bit constraint satisfaction problems (\"3-CSPs\") is NP-hard. The previous best upper bound and lower bound for s are 20 27+µ by Khot and Saket [KS06], and 5 8 (assuming NP subseteq BPP) by Zwick [Zwi98]. In this paper we close the gap assuming Khot's d-to-1 Conjecture. Formally, we prove that if Khot's d-to-1 Conjecture holds for any finite constant integer d, then NP naPCP1,5 8+ µ[O(log n),3] for any constant µ > 0. Our conditional result also solves Hastad's open question [Has01] on determining the inapproximability of satisfiable Max-NTW (\"Not Two\") instances and confirms Zwick's conjecture [Zwi98] that the 5 8-approximation algorithm for satisfiable 3-CSPs is optimal.", "In this paper operational equivalence of simple functional programs is defined, and certain basic theorems proved thereupon. These basic theorems include congruence, least fixed-point, an analogue to continuity, and fixed-point induction. We then show how any ordering on programs for which these theorems hold can be easily extended to give a fully abstract cpo for the language, giving evidence that any operational semantics with these basic theorems proven is complete with respect to a denotational semantics. Furthermore, the mathematical tools used in the paper are minimal, the techniques should be applicable to a wide class of languages, and all proofs are constructive." ] }
0801.0882
2953374554
A fully-automated algorithm is developed able to show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The "size-change principle" from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
The paper by Jones @cite_3 was an early study of control-flow analysis of the untyped @math -calculus. Shivers' thesis and subsequent work @cite_12 @cite_19 on CFA (control flow analysis) developed this approach considerably further and applied it to the Scheme programming language. This line of work is closely related to the approximate semantics (static control graph) of Section @cite_3 .
{ "cite_N": [ "@cite_19", "@cite_12", "@cite_3" ], "mid": [ "2160248455", "2131135493", "2127637733" ], "abstract": [ "While the reconstruction of the control-flow graph of a binary has received wide attention, the challenge of categorizing code into defect-free and possibly incorrect remains a challenge for current static analyses. We present the intermediate language RREIL and a corresponding analysis framework that is able to infer precise numeric information on variables without resorting to an expensive analysis at the bit-level. Specifically, we propose a hierarchy of three interfaces to abstract domains, namely for inferring memory layout, bit-level information and numeric information. Our framework can be easily enriched with new abstract domains at each level. We demonstrate the extensibility of our framework by detailing a novel acceleration technique (a so-called widening) as an abstract domain that helps to find precise fix points of loops.", "We present an interprocedural flow-insensitive points-to analysis based on type inference methods with an almost linear time cost complexity To our knowledge, this is the asymptotically fastest non-trivial interprocedural points-to analysis algorithm yet described The algorithm is based on a non-standard type system. The type inferred for any variable represents a set of locations and includes a type which in turn represents a set of locations possibly pointed to by the variable. The type inferred for a function variable represents a set of functions It may point to and includes a type signature for these functions The results are equivalent to those of a flow-insensitive alias analysis (and control flow analysis) that assumes alias relations are reflexive and transitive.This work makes three contributions. The first is a type system for describing a universally valid storage shape graph for a program in linear space. The second is a constraint system which often leads to better results than the \"obvious\" constraint system for the given type system The third is an almost linear time algorithm for points-to analysis by solving a constraint system.", "Any static, global analysis of the expression and data relationships in a program requires a knowledge of the control flow of the program. Since one of the primary reasons for doing such a global analysis in a compiler is to produce optimized programs, control flow analysis has been embedded in many compilers and has been described in several papers. An early paper by Prosser [5] described the use of Boolean matrices (or, more particularly, connectivity matrices) in flow analysis. The use of “dominance” relationships in flow analysis was first introduced by Prosser and much expanded by Lowry and Medlock [6]. References [6,8,9] describe compilers which use various forms of control flow analysis for optimization. Some recent developments in the area are reported in [4] and in [7]. The underlying motivation in all the different types of control flow analysis is the need to codify the flow relationships in the program. The codification may be in connectivity matrices, in predecessor-successor tables, in dominance lists, etc. Whatever the form, the purpose is to facilitate determining what the flow relationships are; in other words to facilitate answering such questions as: is this an inner loop?, if an expression is removed from the loop where can it be correctly and profitably placed?, which variable definitions can affect this use? 
In this paper the basic control flow relationships are expressed in a directed graph. Various graph constructs are then found and shown to codify interesting global relationships." ] }
0801.0882
2953374554
A fully-automated algorithm is developed able to show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The "size-change principle" from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
We had anticipated from the start that our framework could naturally be extended to higher-order functional programs, e.g., functional subsets of Scheme or ML. This has since been confirmed by Sereni and Jones, first reported in @cite_25 . Sereni's Ph.D. thesis @cite_0 develops this direction in considerably more detail with full proofs, and also investigates problems with lazy (call-by-name) languages. Independently and a bit later, Giesl and coauthors have addressed the analysis of the lazy functional language Haskell @cite_7 .
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_7" ], "mid": [ "2006990447", "2105045857", "2094413750" ], "abstract": [ "In recent years much interest has been shown in a class of functional languages including HASKELL, lazy ML, SASL KRC MIRANDA, ALFL, ORWELL, and PONDER. It has been seen that their expressive power is great, programs are compact, and program manipulation and transformation is much easier than with imperative languages or more traditional applicative ones. Common characteristics: they are purely applicative, manipulate trees as data objects, use pattern matching both to determine control flow and to decompose compound data structures, and use a ''lazy'' evaluation strategy. In this paper we describe a technique for data flow analysis of programs in this class by safely approximating the behavior of a certain class of term rewriting systems. In particular we obtain ''safe'' descriptions of program inputs, outputs and intermediate results by regular sets of trees. Potential applications include optimization, strictness analysis and partial evaluation. The technique improves earlier work because of its applicability to programs with higher-order functions, and with either eager or lazy evaluation. The technique addresses the call-by-name aspect of laziness, but not memoization.", "When a C programmer needs an efficient data structure for a particular problem, he or she can often simply look one up in any of a number of good textbooks or handbooks. Unfortunately, programmers in functional languages such as Standard ML or Haskell do not have this luxury. Although some data structures designed for imperative languages such as C can be quite easily adapted to a functional setting, most cannot, usually because they depend in crucial ways on assignments, which are disallowed, or at least discouraged, in functional languages. To address this imbalance, we describe several techniques for designing functional data structures, and numerous original data structures based on these techniques, including multiple variations of lists, queues, double-ended queues, and heaps, many supporting more exotic features such as random access or efficient catenation. In addition, we expose the fundamental role of lazy evaluation in amortized functional data structures. Traditional methods of amortization break down when old versions of a data structure, not just the most recent, are available for further processing. This property is known as persistence, and is taken for granted in functional languages. On the surface, persistence and amortization appear to be incompatible, but we show how lazy evaluation can be used to resolve this conflict, yielding amortized data structures that are efficient even when used persistently. Turning this relationship between lazy evaluation and amortization around, the notion of amortization also provides the first practical techniques for analyzing the time requirements of non-trivial lazy programs. Finally, our data structures offer numerous hints to programming language designers, illustrating the utility of combining strict and lazy evaluation in a single language, and providing non-trivial examples using polymorphic recursion and higher-order, recursive modules.", "Papers on functional language implementations frequently set the goal of achieving performance \"comparable to C\", and sometimes report results comparing benchmark results to concrete C implementations of the same problem. 
A key pair of questions for such comparisons is: what C program to compare to, and what C compiler to compare with? In a 2012 paper, [9] compare naive serial C implementations of a range of throughput-oriented benchmarks to best-optimized implementations parallelized on a six-core machine and demonstrate an average 23X (up to 53X) speedup. Even accounting for thread parallel speedup, these results demonstrate a substantial performance gap between naive and tuned C code. In this current paper, we choose a subset of the benchmarks studied by to port to Haskell. We measure performance of these Haskell benchmarks compiled with the standard Glasgow Haskell Compiler and with our experimental Intel Labs Haskell Research Compiler and report results as compared to our best reconstructions of the algorithms used by Results are reported as measured both on an Intel Xeon E5-4650 32-core machine, and on an Intel Xeon Phi co-processor. We hope that this study provides valuable data on the concrete performance of Haskell relative to C." ] }
0801.0882
2953374554
A fully-automated algorithm is developed able to show that evaluation of a given untyped lambda-expression will terminate under CBV (call-by-value). The "size-change principle" from first-order programs is extended to arbitrary untyped lambda-expressions in two steps. The first step suffices to show CBV termination of a single, stand-alone lambda-expression. The second suffices to show CBV termination of any member of a regular set of lambda-expressions, defined by a tree grammar. (A simple example is a minimum function, when applied to arbitrary Church numerals.) The algorithm is sound and proven so in this paper. The Halting Problem's undecidability implies that any sound algorithm is necessarily incomplete: some lambda-expressions may in fact terminate under CBV evaluation, but not be recognised as terminating. The intensional power of the termination algorithm is reasonably high. It certifies as terminating many interesting and useful general recursive algorithms including programs with mutual recursion and parameter exchanges, and Colson's "minimum" algorithm. Further, our type-free approach allows use of the Y combinator, and so can identify as terminating a substantial subset of PCF.
Term rewriting systems: The popular "dependency pair" method was developed by Arts and Giesl @cite_24 for first-order programs in TRS form. This community has begun to study termination of higher-order term rewriting systems, including research by Giesl et al. @cite_4 @cite_7 , Toyama @cite_26 , and others.
{ "cite_N": [ "@cite_24", "@cite_26", "@cite_4", "@cite_7" ], "mid": [ "2964098951", "1505645573", "2119852735", "1508261706" ], "abstract": [ "Arts and Giesl proved that the termination of a first-order rewrite system can be reduced to the study of its dependency pairs''. We extend these results to rewrite systems on simply typed lambda-terms by using Tait's computability technique.", "This paper explores how to extend the dependency pair technique for proving termination of higher-order rewrite systems. We show that the termination property of higher-order rewrite systems can be checked by the non-existence of an infinite R-chain, which is an extension of Arts’ and Giesl’s result for the first-order case. It is clarified that the subterm property of the quasi-ordering, used for proving termination automatically, is indispensable.", "Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.", "The dependency pair technique is a powerful modular method for automated termination proofs of term rewrite systems (TRSs). We present two important extensions of this technique: First, we show how to prove termination of higher-order functions using dependency pairs. To this end, the dependency pair technique is extended to handle (untyped) applicative TRSs. Second, we introduce a method to prove non-termination with dependency pairs, while up to now dependency pairs were only used to verify termination. Our results lead to a framework for combining termination and non-termination techniques for first- and higher-order functions in a very flexible way. We implemented and evaluated our results in the automated termination prover AProVE." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Recently there has been a tremendous amount of work on summarizing sentiment @cite_9 , in particular by extracting and aggregating sentiment over ratable aspects. Many methods have been proposed, ranging from unsupervised to fully supervised systems.
{ "cite_N": [ "@cite_9" ], "mid": [ "2113786470" ], "abstract": [ "With the increase in popularity of online review sites comes a corresponding need for tools capable of extracting the information most important to the user from the plain text data. Due to the diversity in products and services being reviewed, supervised methods are often not practical. We present an unsuper-vised system for extracting aspects and determining sentiment in review text. The method is simple and flexible with regard to domain and language, and takes into account the influence of aspect on sentiment polarity, an issue largely ignored in previous literature. We demonstrate its effectiveness on both component tasks, where it achieves similar results to more complex semi-supervised methods that are restricted by their reliance on manual annotation and extensive knowledge sources." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
In terms of unsupervised aspect extraction, in which this work can be categorized, the system of Hu and Liu @cite_32 @cite_23 was one of the earliest endeavors. In that study, association mining is used to extract product aspects that can be rated. Hu and Liu defined an aspect as simply a string, and there was no attempt to cluster or infer aspects that are mentioned implicitly, e.g., "The amount of stains in the room was overwhelming" is about the aspect cleanliness for hotels. A similar work by Popescu and Etzioni @cite_0 also extracts explicit aspect mentions without describing how implicit mentions are extracted and clustered, though they imply that this is done somewhere in their system. Clustering can be of particular importance for domains in which aspects are described with a large vocabulary, such as restaurants or hotels. Both implicit mentions and clustering arise naturally out of the topic model formulation, requiring no additional augmentations.
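Since the contrast drawn above is between extracting aspect strings and letting a topic model cluster them, the following Python sketch shows the topic-model route on toy data: fit a standard LDA model over short review sentences and read off aspect-like topics. The sentences, the number of topics, and the vectorizer settings are illustrative assumptions (and on data this small the topics will be noisy); this is a plain LDA baseline, not the multi-grain models discussed here.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sentences = [
    "the waitress was friendly and the bartender was quick",
    "our waiter forgot the order but the staff apologized",
    "the pasta was overcooked and the dessert was too sweet",
    "great wine list and delicious appetizers",
    "the room had stains on the carpet and dusty shelves",
    "spotless bathroom and freshly cleaned linens",
]

# Bag-of-words counts over the sentences, then a plain LDA topic model.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(sentences)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Print the top words of each topic; related aspect terms tend to co-occur.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))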
{ "cite_N": [ "@cite_0", "@cite_32", "@cite_23" ], "mid": [ "2096110600", "114321176", "2019207508" ], "abstract": [ "In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews [18, 19, 7, 12, 27, 36, 21]. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.", "An unsupervised iterative approach for extracting a new lexicon (or unknown words) from a Chinese text corpus is proposed in this paper. Instead of using a non-iterative segmentation-merging-filtering-and-disambiguation approach, the proposed method iteratively integrates the contextual constraints (among word candidates) and a joint character association metric to progressively improve the segmentation results of the input corpus (and thus the new word list.) An augmented dictionary, which includes potential unknown words (in addition to known words), is used to segment the input corpus, unlike traditional approaches which use only known words for segmentation. In the segmentation process, the augmented dictionary is used to impose contextual constraints over known words and potential unknown words within input sentences; an unsupervised Viterbi Training process is then applied to ensure that the selected potential unknown words (and known words) maximize the likelihood of the input corpus. On the other hand, the joint character association metric (which reflects the global character association characteristics across the corpus) is derived by integrating several commonly used word association metrics, such as mutual information and entropy, with a joint Gaussian mixture density function; such integration allows the filter to use multiple features simultaneously to evaluate character association, unlike traditional filters which apply multiple features independently. The proposed method then allows the contextual constraints and the joint character association metric to enhance each other; this is achieved by iteratively applying the joint association metric to truncate unlikely unknown words in the augmented dictionary and using the segmentation result to improve the estimation of the joint association metric. The refined augmented dictionary and improved estimation are then used in the next iteration to acquire better segmentation and carry out more reliable filtering. Experiments show that both the precision and recall rates are improved almost monotonically, in contrast to non-iterative segmentation-merging-filtering-and-disambiguation approaches, which often sacrifice precision for recall or vice versa. 
With a corpus of 311,591 sentences, the performance is 76 (bigram), 54 (trigram), and 70 (quadragram) in F-measure, which is significantly better than using the non-iterative approach with F-measures of 74 (bigram), 46 (trigram), and 58 (quadragram).", "Mining detailed opinions buried in the vast amount of review text data is an important, yet quite challenging task with widespread applications in multiple domains. Latent Aspect Rating Analysis (LARA) refers to the task of inferring both opinion ratings on topical aspects (e.g., location, service of a hotel) and the relative weights reviewers have placed on each aspect based on review content and the associated overall ratings. A major limitation of previous work on LARA is the assumption of pre-specified aspects by keywords. However, the aspect information is not always available, and it may be difficult to pre-define appropriate aspects without a good knowledge about what aspects are actually commented on in the reviews. In this paper, we propose a unified generative model for LARA, which does not need pre-specified aspect keywords and simultaneously mines 1) latent topical aspects, 2) ratings on each identified aspect, and 3) weights placed on different aspects by a reviewer. Experiment results on two different review data sets demonstrate that the proposed model can effectively perform the Latent Aspect Rating Analysis task without the supervision of aspect keywords. Because of its generality, the proposed model can be applied to explore all kinds of opinionated text data containing overall sentiment judgments and support a wide range of interesting application tasks, such as aspect-based opinion summarization, personalized entity ranking and recommendation, and reviewer behavior analysis." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
@cite_11 present an unsupervised system that does incorporate clustering; however, their method clusters sentences rather than individual aspects to produce a sentence-based summary. Sentence clusters are labeled with the most frequent non-stop-word stem in the cluster. @cite_27 present a weakly supervised model that uses the algorithms of Hu and Liu @cite_32 @cite_23 to extract explicit aspect mentions from reviews. The method is extended through a user-supplied aspect hierarchy of a product class. Extracted aspects are clustered by placing them into the hierarchy using various string and semantic similarity metrics. This method is then used to compare extractive versus abstractive summarization for sentiment @cite_6 .
{ "cite_N": [ "@cite_32", "@cite_6", "@cite_27", "@cite_23", "@cite_11" ], "mid": [ "179757531", "2110693578", "2060515712", "2109154616", "2010163591" ], "abstract": [ "Many methods, including supervised and unsupervised algorithms, have been developed for extractive document summarization. Most supervised methods consider the summarization task as a two-class classification problem and classify each sentence individually without leveraging the relationship among sentences. The unsupervised methods use heuristic rules to select the most informative sentences into a summary directly, which are hard to generalize. In this paper, we present a Conditional Random Fields (CRF) based framework to keep the merits of the above two kinds of approaches while avoiding their disadvantages. What is more, the proposed framework can take the outcomes of previous methods as features and seamlessly integrate them. The key idea of our approach is to treat the summarization task as a sequence labeling problem. In this view, each document is a sequence of sentences and the summarization procedure labels the sentences by 1 and 0. The label of a sentence depends on the assignment of labels of others. We compared our proposed approach with eight existing methods on an open benchmark data set. The results show that our approach can improve the performance by more than 7.1 and 12.1 over the best supervised baseline and unsupervised baseline respectively in terms of two popular metrics F1 and ROUGE-2. Detailed analysis of the improvement is presented as well.", "We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.", "The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. 
Pattern clustering algorithm is based on Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) Protein–protein interactions extraction, and (2) Gene–suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods. The three supervised methods are rule based, SVM based, and Kernel based separately. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation on gene–suicide association extraction on a smaller dataset from Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than co-occurrence based method.", "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer.", "We describe and evaluate a new method of automatic seed word selection for un-supervised sentiment classification of product reviews in Chinese. The whole method is unsupervised and does not require any annotated training data; it only requires information about commonly occurring negations and adverbials. Unsupervised techniques are promising for this task since they avoid problems of domain-dependency typically associated with supervised methods. The results obtained are close to those of supervised classifiers and sometimes better, up to an F1 of 92 ." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
There have also been some studies of supervised aspect extraction methods. For example, @cite_20 work on sentiment summarization for movie reviews. In that work, aspects are extracted and clustered, but this is done manually through the examination of a labeled data set. The shortcoming of such an approach is that it requires a labeled corpus for every domain of interest.
{ "cite_N": [ "@cite_20" ], "mid": [ "1977593555" ], "abstract": [ "With the rapid growth of user-generated content on the internet, automatic sentiment analysis of online customer reviews has become a hot research topic recently, but due to variety and wide range of products and services being reviewed on the internet, the supervised and domain-specific models are often not practical. As the number of reviews expands, it is essential to develop an efficient sentiment analysis model that is capable of extracting product aspects and determining the sentiments for these aspects. In this paper, we propose a novel unsupervised and domain-independent model for detecting explicit and implicit aspects in reviews for sentiment analysis. In the model, first a generalized method is proposed to learn multi-word aspects and then a set of heuristic rules is employed to take into account the influence of an opinion word on detecting the aspect. Second a new metric based on mutual information and aspect frequency is proposed to score aspects with a new bootstrapping iterative algorithm. The presented bootstrapping algorithm works with an unsupervised seed set. Third, two pruning methods based on the relations between aspects in reviews are presented to remove incorrect aspects. Finally the model employs an approach which uses explicit aspects and opinion words to identify implicit aspects. Utilizing extracted polarity lexicon, the approach maps each opinion word in the lexicon to the set of pre-extracted explicit aspects with a co-occurrence metric. The proposed model was evaluated on a collection of English product review datasets. The model does not require any labeled training data and it can be easily applied to other languages or other domains such as movie reviews. Experimental results show considerable improvements of our model over conventional techniques including unsupervised and supervised approaches." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
A key point of note is that our topic model approach is orthogonal to most of the methods mentioned above. For example, the topic model can be used to help cluster explicit aspects extracted by @cite_32 @cite_23 @cite_0 , or used to improve the recall of knowledge-driven approaches that require domain-specific ontologies @cite_27 or labeled data @cite_20 .
{ "cite_N": [ "@cite_32", "@cite_0", "@cite_27", "@cite_23", "@cite_20" ], "mid": [ "2106035193", "1995866178", "107389347", "2087382273", "2109154616" ], "abstract": [ "Topic modeling has been commonly used to discover topics from document collections. However, unsupervised models can generate many incoherent topics. To address this problem, several knowledge-based topic models have been proposed to incorporate prior domain knowledge from the user. This work advances this research much further and shows that without any user input, we can mine the prior knowledge automatically and dynamically from topics already found from a large number of domains. This paper first proposes a novel method to mine such prior knowledge dynamically in the modeling process, and then a new topic model to use the knowledge to guide the model inference. What is also interesting is that this approach offers a novel lifelong learning algorithm for topic discovery, which exploits the big (past) data and knowledge gained from such data for subsequent modeling. Our experimental results using product reviews from 50 domains demonstrate the effectiveness of the proposed approach.", "Topic modeling has been widely used to mine topics from documents. However, a key weakness of topic modeling is that it needs a large amount of data (e.g., thousands of documents) to provide reliable statistics to generate coherent topics. However, in practice, many document collections do not have so many documents. Given a small number of documents, the classic topic model LDA generates very poor topics. Even with a large volume of data, unsupervised learning of topic models can still produce unsatisfactory results. In recently years, knowledge-based topic models have been proposed, which ask human users to provide some prior domain knowledge to guide the model to produce better topics. Our research takes a radically different approach. We propose to learn as humans do, i.e., retaining the results learned in the past and using them to help future learning. When faced with a new task, we first mine some reliable (prior) knowledge from the past learning modeling results and then use it to guide the model inference to generate more coherent topics. This approach is possible because of the big data readily available on the Web. The proposed algorithm mines two forms of knowledge: must-link (meaning that two words should be in the same topic) and cannot-link (meaning that two words should not be in the same topic). It also deals with two problems of the automatically mined knowledge, i.e., wrong knowledge and knowledge transitivity. Experimental results using review documents from 100 product domains show that the proposed approach makes dramatic improvements over state-of-the-art baselines.", "The use of topic models to analyze domain-specific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expert-provided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.", "Topic modeling has been widely used for analyzing text document collections. 
Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis VAST paper data set and product review data sets.", "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer." ] }
0801.1063
2951035133
In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., waitress' and bartender' are part of the same topic staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models.
Several models have been proposed to overcome the bag-of-words assumption by explicitly modeling topic transitions @cite_17 @cite_25 @cite_30 @cite_10 @cite_26 @cite_24 . In our MG-LDA model we instead proposed sliding windows to model local topics, as this is computationally less expensive and leads to good results. The model of Blei and Moreno @cite_17 also uses windows, but their windows are not overlapping and, therefore, it is known a priori from which window a word is going to be sampled. They perform explicit modeling of topic transitions between these windows. In our case, the distribution of sentences over overlapping windows @math is responsible for modeling transitions. However, it is possible to construct a multi-grain model which uses an n-gram topic model for local topics and a distribution fixed per document for global topics.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_24", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2251734881", "2950700385", "1972622791", "1498269992", "2128925311", "2165636119" ], "abstract": [ "Topic Model such as Latent Dirichlet Allocation(LDA) makes assumption that topic assignment of different words are conditionally independent. In this paper, we propose a new model Extended Global Topic Random Field (EGTRF) to model non-linear dependencies between words. Specifically, we parse sentences into dependency trees and represent them as a graph, and assume the topic assignment of a word is influenced by its adjacent words and distance-2 words. Word similarity information learned from large corpus is incorporated to enhance word topic assignment. Parameters are estimated efficiently by variational inference and experimental results on two datasets show EGTRF achieves lower perplexity and higher log predictive probability.", "Topic models, such as Latent Dirichlet Allocation (LDA), posit that documents are drawn from admixtures of distributions over words, known as topics. The inference problem of recovering topics from admixtures, is NP-hard. Assuming separability, a strong assumption, [4] gave the first provable algorithm for inference. For LDA model, [6] gave a provable algorithm using tensor-methods. But [4,6] do not learn topic vectors with bounded @math error (a natural measure for probability vectors). Our aim is to develop a model which makes intuitive and empirically supported assumptions and to design an algorithm with natural, simple components such as SVD, which provably solves the inference problem for the model with bounded @math error. A topic in LDA and other models is essentially characterized by a group of co-occurring words. Motivated by this, we introduce topic specific Catchwords, group of words which occur with strictly greater frequency in a topic than any other topic individually and are required to have high frequency together rather than individually. A major contribution of the paper is to show that under this more realistic assumption, which is empirically verified on real corpora, a singular value decomposition (SVD) based algorithm with a crucial pre-processing step of thresholding, can provably recover the topics from a collection of documents drawn from Dominant admixtures. Dominant admixtures are convex combination of distributions in which one distribution has a significantly higher contribution than others. Apart from the simplicity of the algorithm, the sample complexity has near optimal dependence on @math , the lowest probability that a topic is dominant, and is better than [4]. Empirical evidence shows that on several real world corpora, both Catchwords and Dominant admixture assumptions hold and the proposed algorithm substantially outperforms the state of the art [5].", "In topic modelling, various alternative priors have been developed, for instance asymmetric and symmetric priors for the document-topic and topic-word matrices respectively, the hierarchical Dirichlet process prior for the document-topic matrix and the hierarchical Pitman-Yor process prior for the topic-word matrix. For information retrieval, language models exhibiting word burstiness are important. Indeed, this burstiness effect has been show to help topic models as well, and this requires additional word probability vectors for each document. 
Here we show how to combine these ideas to develop high-performing non-parametric topic models exhibiting burstiness based on standard Gibbs sampling. Experiments are done to explore the behavior of the models under different conditions and to compare the algorithms with previously published. The full non-parametric topic models with burstiness are only a small factor slower than standard Gibbs sampling for LDA and require double the memory, making them very competitive. We look at the comparative behaviour of different models and present some experimental insights.", "Algorithms such as Latent Dirichlet Allocation (LDA) have achieved significant progress in modeling word document relationships. These algorithms assume each word in the document was generated by a hidden topic and explicitly model the word distribution of each topic as well as the prior distribution over topics in the document. Given these parameters, the topics of all words in the same document are assumed to be independent. In this paper, we propose modeling the topics of words in the document as a Markov chain. Specifically, we assume that all words in the same sentence have the same topic, and successive sentences are more likely to have the same topics. Since the topics are hidden, this leads to using the well-known tools of Hidden Markov Models for learning and inference. We show that incorporating this dependency allows us to learn better topics and to disambiguate words that can belong to different topics. Quantitatively, we show that we obtain better perplexity in modeling documents with only a modest increase in learning and inference complexity.", "Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than X-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [J. Roy. Statist. Soc. Ser. B 44 (1982) 139--177]. We derive a fast variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. We apply the CTM to the articles from Science published from 1990--1999, a data set that comprises 57M words. The CTM gives a better fit of the data than LDA, and we demonstrate its use as an exploratory tool of large document collections.", "In this work, we address the problem of joint modeling of text and citations in the topic modeling framework. We present two different models called the Pairwise-Link-LDA and the Link-PLSA-LDA models. The Pairwise-Link-LDA model combines the ideas of LDA [4] and Mixed Membership Block Stochastic Models [1] and allows modeling arbitrary link structure. However, the model is computationally expensive, since it involves modeling the presence or absence of a citation (link) between every pair of documents. The second model solves this problem by assuming that the link structure is a bipartite graph. 
As the name indicates, Link-PLSA-LDA model combines the LDA and PLSA models into a single graphical model. Our experiments on a subset of Citeseer data show that both these models are able to predict unseen data better than the baseline model of Erosheva and Lafferty [8], by capturing the notion of topical similarity between the contents of the cited and citing documents. Our experiments on two different data sets on the link prediction task show that the Link-PLSA-LDA model performs the best on the citation prediction task, while also remaining highly scalable. In addition, we also present some interesting visualizations generated by each of the models." ] }
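To make the sliding-window mechanism discussed in the related-work passage above more concrete, the following is a minimal generative sketch in the spirit of a multi-grain topic model. It is not the cited authors' implementation: the variable names, the symmetric Dirichlet and Beta priors, and the window indexing are assumptions made only for illustration. Each sentence spreads its words over the T overlapping windows that cover it; a window-level mixture generates local (aspect-like) topics, while a document-level mixture generates global topics.

import numpy as np

def generate_review(sentence_lengths, vocab, K_gl=10, K_loc=5, T=3,
                    alpha=0.1, beta=0.01, rng=None):
    # Illustrative generative process; hyperparameter values are arbitrary choices.
    rng = rng or np.random.default_rng(0)
    V = len(vocab)
    phi_gl = rng.dirichlet([beta] * V, K_gl)           # global topic-word distributions
    phi_loc = rng.dirichlet([beta] * V, K_loc)         # local topic-word distributions
    theta_gl = rng.dirichlet([alpha] * K_gl)           # document-level global topic mixture
    n_win = len(sentence_lengths) + T - 1              # overlapping windows of T sentences
    theta_loc = rng.dirichlet([alpha] * K_loc, n_win)  # one local topic mixture per window
    pi = rng.beta(1.0, 1.0, n_win)                     # per-window probability of "local"
    words = []
    for s, length in enumerate(sentence_lengths):
        covering = range(s, s + T)                     # windows that cover sentence s
        psi = rng.dirichlet([alpha] * T)               # sentence's distribution over them
        for _ in range(length):
            v = covering[rng.choice(T, p=psi)]         # choose a covering window
            if rng.random() < pi[v]:                   # local topic (ratable aspect)
                z = rng.choice(K_loc, p=theta_loc[v])
                words.append(("loc", z, vocab[rng.choice(V, p=phi_loc[z])]))
            else:                                      # global topic (e.g., brand-level)
                z = rng.choice(K_gl, p=theta_gl)
                words.append(("gl", z, vocab[rng.choice(V, p=phi_gl[z])]))
    return words

The sketch only shows the forward model; actual inference (e.g., collapsed Gibbs sampling or a variational method) would be needed to recover the aspect-like local topics from review text.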
0712.3936
1949066771
Lagrangian relaxation has been used extensively in the design of approximation algorithms. This paper studies its strengths and limitations when applied to Partial Cover.
Much work has been done on covering problems because of both their simple, elegant formulation and their pervasiveness in different application areas. In its most general form the problem, also known as Set Cover, cannot be approximated within @math unless @math @cite_2 . Due to this hardness, easier special cases have been studied.
{ "cite_N": [ "@cite_2" ], "mid": [ "1969423858" ], "abstract": [ "Abstract A set-covering problem is called regular if a cover always remains a cover when any column in it is replaced by an earlier column. From the input of the problem - the coefficient matrix of the set-covering inequalities - it is possible to check in polynomial time whether the problem is regular or can be made regular by permuting the columns. If it is, then all the minimal covers are generated in polynomial time, and one of them is an optimal solution. The algorithm also yields an explicit bound for the number of minimal covers. These results can be used to check in polynomial time whether a given set-covering problem is equivalent to some knapsack problem without additional variables, or equivalently to recognize positive threshold functions in polynomial time. However, the problem of recognizing when an arbitrary Boolean function is threshold is NP-complete. It is also shown that the list of maximal non-covers is essentially the most compact input possible, even if it is known in advance that the problem is regular." ] }
0712.4279
2950218338
We show that disjointness requires randomized communication Omega(n^{1/(k+1)} / 2^{2^k}) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound for k >= 3 was log(n)/(k-1). Our results give a separation between nondeterministic and randomized multiparty number-on-the-forehead communication complexity for up to k=log log n - O(log log log n) many players. Also by a reduction of Beame, Pitassi, and Segerlind, these results imply subexponential lower bounds on the size of proofs needed to refute certain unsatisfiable CNFs in a broad class of proof systems, including tree-like Lovasz-Schrijver proofs.
Beame, Pitassi, Segerlind, and Wigderson @cite_26 devised a method based on a direct product theorem to show a @math bound on the complexity of three-party disjointness in a model stronger than one-way, where the first player speaks once and then the two remaining players interact arbitrarily.
{ "cite_N": [ "@cite_26" ], "mid": [ "1831717676" ], "abstract": [ "A strong direct product theorem states that, in order to solve k instances of a problem, if we provide less than k times the resource required to compute one instance, then the probability of overall success is exponentially small in k. In this paper, we consider the model of two-way public-coin communication complexity and show a strong direct product theorem for all relations in terms of the smooth rectangle bound, introduced by Jain and Klauck as a generic lower bound method in this model. Our result therefore uniformly implies a strong direct product theorem for all relations for which an (asymptotically) optimal lower bound can be provided using the smooth rectangle bound, for example Inner Product, Greater-Than, Set-Disjointness, Gap-Hamming Distance etc. Our result also implies near optimal direct product results for several important functions and relations used to show exponential separations between classical and quantum communication complexity, for which near optimal lower bounds are provided using the rectangle bound, for example by Raz [1999], Gavinsky [2008] and Klartag and Regev [2011]. In fact we are not aware of any relation for which it is known that the smooth rectangle bound does not provide an optimal lower bound. This lower bound subsumes many of the other lower bound methods, for example the rectangle bound (a.k.a the corruption bound), the smooth discrepancy bound (a.k.a the bound) which in turn subsumes the discrepancy bound, the subdistribution bound and the conditional min-entropy bound. We show our result using information theoretic arguments. A key tool we use is a sampling protocol due to Braverman [2012], in fact a modification of it used by Kerenidis, Laplante, Lerays, Roland and Xiao [2012]." ] }
0712.4279
2950218338
We show that disjointness requires randomized communication Omega(n^{1/(k+1)} / 2^{2^k}) in the general k-party number-on-the-forehead model of complexity. The previous best lower bound for k >= 3 was log(n)/(k-1). Our results give a separation between nondeterministic and randomized multiparty number-on-the-forehead communication complexity for up to k=log log n - O(log log log n) many players. Also by a reduction of Beame, Pitassi, and Segerlind, these results imply subexponential lower bounds on the size of proofs needed to refute certain unsatisfiable CNFs in a broad class of proof systems, including tree-like Lovasz-Schrijver proofs.
Following up on our work, David, Pitassi, and Viola @cite_22 gave an explicit function which separates nondeterministic and randomized communication complexity for up to @math players. They are also able, for any constant @math , to give a function computable in @math which separates them for up to @math players. Note that disjointness can be computed in @math , but that our bounds are already trivial for @math players. Even more recently, Beame and Huynh-Ngoc @cite_7 have shown a bound of @math on the @math -party complexity of disjointness. This bound remains non-trivial for up to @math many players, but is not as strong as our bound for few players.
{ "cite_N": [ "@cite_22", "@cite_7" ], "mid": [ "2015924804", "2951592515" ], "abstract": [ "We prove n (1) lower bounds on the multiparty communication complexity of AC 0 functions in the number-on-forehead (NOF) model for up to �(logn) players. These are the first lower bounds for any AC 0 function for !(loglogn) players. In particular we show that there are families of depth 3 read-once AC 0 formulas having k-player randomized multiparty NOF communication complexity n (1) 2 O(k) . We show similar lower bounds for depth 4 read-once AC 0 formulas that have nondeterministic communication complexity O(log 2 n), yielding exponential separations between k-party nondeterministic and randomized communication complexity for AC 0 functions. As a consequence of the latter bound, we obtain an n (1 k) 2 O(k) lower bound on the k-party NOF communication complexity of set disjointness. This is non-trivial for up to �( p logn) players which is significantly larger than the up to �(loglogn) players allowed in the best previous lower bounds for multiparty set disjointness given by Lee and Shraibman [LS08] and Chattopadhyay and Ada [CA08] (though our complexity bounds themselves are not as strong as those in [LS08, CA08] for o(loglogn) players). We derive these results by extending the k-party generalization in [CA08, LS08] of the pattern matrix method of Sherstov [She07, She08]. Using this technique, we derive a new sufficient criterion for strong communication complexity lower bounds based on functions having many diverse subfunctions that do not have good low-degree polynomial approximations. This criterion guarantees that such functions have orthogonalizing distributions that are “max-smooth” as opposed to the “min-smooth” orthogonalizing distributions used by Razborov and Sherstov [RS08] to analyze the sign-rank of AC 0 .", "This paper provides the first general technique for proving information lower bounds on two-party unbounded-rounds communication problems. We show that the discrepancy lower bound, which applies to randomized communication complexity, also applies to information complexity. More precisely, if the discrepancy of a two-party function @math with respect to a distribution @math is @math , then any two party randomized protocol computing @math must reveal at least @math bits of information to the participants. As a corollary, we obtain that any two-party protocol for computing a random function on @math must reveal @math bits of information to the participants. In addition, we prove that the discrepancy of the Greater-Than function is @math , which provides an alternative proof to the recent proof of Viola Viola11 of the @math lower bound on the communication complexity of this well-studied function and, combined with our main result, proves the tight @math lower bound on its information complexity. The proof of our main result develops a new simulation procedure that may be of an independent interest. In a very recent breakthrough work of kerenidis2012lower , this simulation procedure was the main building block for proving that almost all known lower bound techniques for communication complexity (and not just discrepancy) apply to information complexity." ] }
0712.2682
2152668278
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+sqrt(2) under the L1-norm for 0-1 valued matrices, and of 2 under the L2-norm for real-valued matrices.
This basic algorithmic problem and several variations were initially presented in @cite_9 under the name of direct clustering. The same problem and its variations have also been referred to as two-way clustering, co-clustering or subspace clustering. In practice, finding highly homogeneous biclusters has important applications in biological data analysis (see @cite_7 for a review and references), where a bicluster may, for example, correspond to an activation pattern common to a group of genes only under specific experimental conditions.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2144544802", "2145799156" ], "abstract": [ "A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of experimental conditions where the activity of genes is uncorrelated. A similar limitation exists when clustering of conditions is performed. For this reason, a number of algorithms that perform simultaneous clustering on the row and column dimensions of the data matrix has been proposed. The goal is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this paper, we refer to this class of algorithms as biclustering. Biclustering is also referred in the literature as coclustering and direct clustering, among others names, and has also been used in fields such as information retrieval and data mining. In this comprehensive survey, we analyze a large number of existing approaches to biclustering, and classify them in accordance with the type of biclusters they can find, the patterns of biclusters that are discovered, the methods used to perform the search, the approaches used to evaluate the solution, and the target applications.", "We consider the problem of identifying a sparse set of relevant columns and rows in a large data matrix with highly corrupted entries. This problem of identifying groups from a collection of bipartite variables such as proteins and drugs, biological species and gene sequences, malware and signatures, etc is commonly referred to as biclustering or co-clustering. Despite its great practical relevance, and although several ad-hoc methods are available for biclustering, theoretical analysis of the problem is largely non-existent. The problem we consider is also closely related to structured multiple hypothesis testing, an area of statistics that has recently witnessed a flurry of activity. We make the following contributions 1. We prove lower bounds on the minimum signal strength needed for successful recovery of a bicluster as a function of the noise variance, size of the matrix and bicluster of interest. 2. We show that a combinatorial procedure based on the scan statistic achieves this optimal limit. 3. We characterize the SNR required by several computationally tractable procedures for biclustering including element-wise thresholding, column row average thresholding and a convex relaxation approach to sparse singular vector decomposition." ] }
0712.2682
2152668278
The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+2 under L"1-norm for 0-1 valued matrices, and of 2 under L"2-norm for real valued matrices.
An alternative definition of the basic biclustering problem described in the introduction consists in finding the maximal bicluster in a given matrix. A well-known connection of this alternative formulation is its reduction to the problem of finding a biclique in a bipartite graph @cite_4 . Algorithms for detecting bicliques enumerate them in the graph by using the monotonicity property that a subset of a biclique is also a biclique @cite_3 @cite_6 . These algorithms usually have a high order of complexity.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_3" ], "mid": [ "2145799156", "1894726941", "2051006653" ], "abstract": [ "We consider the problem of identifying a sparse set of relevant columns and rows in a large data matrix with highly corrupted entries. This problem of identifying groups from a collection of bipartite variables such as proteins and drugs, biological species and gene sequences, malware and signatures, etc is commonly referred to as biclustering or co-clustering. Despite its great practical relevance, and although several ad-hoc methods are available for biclustering, theoretical analysis of the problem is largely non-existent. The problem we consider is also closely related to structured multiple hypothesis testing, an area of statistics that has recently witnessed a flurry of activity. We make the following contributions 1. We prove lower bounds on the minimum signal strength needed for successful recovery of a bicluster as a function of the noise variance, size of the matrix and bicluster of interest. 2. We show that a combinatorial procedure based on the scan statistic achieves this optimal limit. 3. We characterize the SNR required by several computationally tractable procedures for biclustering including element-wise thresholding, column row average thresholding and a convex relaxation approach to sparse singular vector decomposition.", "Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.", "We present here 2-approximation algorithms for several node deletion and edge deletion biclique problems and for an edge deletion clique problem. The biclique problem is to find a node induced subgraph that is bipartite and complete. The objective is to minimize the total weight of nodes or edges deleted so that the remaining subgraph is bipartite complete. Several variants of the biclique problem are studied here, where the problem is defined on bipartite graph or on general graphs with or without the requirement that each side of the bipartition forms an independent set. The maximum clique problem is formulated as maximizing the number (or weight) of edges in the complete subgraph. 
A 2-approximation algorithm is given for the minimum edge deletion version of this problem. The approximation algorithms given here are derived as a special case of an approximation technique devised for a class of formulations introduced by Hochbaum. All approximation algorithms described (and the polynomial algorithms for two versions of the node biclique problem) involve calls to a minimum cut algorithm. One conclusion of our analysis of the NP-hard problems here is that all of these problems are MAX SNP-hard and at least as difficult to approximate as the vertex cover problem. Another conclusion is that the problem of finding the minimum node cut-set, the removal of which leaves two cliques in the graph, is NP-hard and 2-approximable." ] }
0712.3113
1662211127
The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When executing a query of the high level model, a query is transformed to a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula to executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.
The compiler of Mercury, a purely declarative Prolog variant, does predicate reordering according to the I/O modes of the predicates, as described in @cite_7 . The mode system of Mercury is much more expressive than the mode system of SINTAGMA's Query Optimizer; our in and out modes are easily handled by the Mercury compiler. On the other hand, it does not offer optimizations similar to those of our optimizer; it only reorders the predicates according to their I/O modes.
{ "cite_N": [ "@cite_7" ], "mid": [ "1574950491" ], "abstract": [ "For any LP system, tabling can be quite handy in a variety of tasks, especially if it is efficiently implemented and fully integrated in the language. Implementing tabling in Mercury poses special challenges for several reasons. First, Mercury is both semantically and culturally quite different from Prolog. While decreeing that tabled predicates must not include cuts is acceptable in a Prolog system, it is not acceptable in Mercury, since if-then-elses and existential quantification have sound semantics for stratified programs and are used very frequently both by programmers and by the compiler. The Mercury implementation thus has no option but to handle interactions of tabling with Mercury’s language features safely. Second, the Mercury implementation is vastly different from the WAM, and many of the differences (e.g. the absence of a trail) have significant impact on the implementation of tabling. In this paper, we describe how we adapted the copying approach to tabling to implement tabling in Mercury." ] }
0712.3113
1662211127
The SINTAGMA information integration system is an infrastructure for accessing several different information sources together. Besides providing a uniform interface to the information sources (databases, web services, web sites, RDF resources, XML files), semantic integration is also needed. Semantic integration is carried out by providing a high-level model and the mappings to the models of the sources. When executing a query of the high level model, a query is transformed to a low-level query plan, which is a piece of Prolog code that answers the high-level query. This transformation is done in two phases. First, the Query Planner produces a plan as a logic formula expressing the low-level query. Next, the Query Optimizer transforms this formula to executable Prolog code and optimizes it according to structural and statistical information about the information sources. This article discusses the main ideas of the optimization algorithm and its implementation.
The SIMS and the Infomaster information integration systems have a query optimizer component, as described in @cite_4 and @cite_6 ; however, these optimizers have a different task than ours. In those systems, the query optimizer takes advantage of semantic knowledge about the information sources to choose, among the plans which answer the user query, the query plan that needs the fewest information source accesses. In the Mediator of SINTAGMA, this is the task of the Query Planner, and the Query Optimizer optimizes only the query execution plan.
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2106082581", "1710121815" ], "abstract": [ "New applications of information systems need to integrate a large number of heterogeneous databases over computer networks. Answering a query in these applications usually involves selecting relevant information sources and generating a query plan to combine the data automatically. As significant progress has been made in source selection and plan generation, the critical issue has been shifting to query optimization. This paper presents a semantic query optimization (SQO) approach to optimizing query plans of heterogeneous multidatabase systems. This approach provides global optimization for query plans as well as local optimization for subqueries that retrieve data from individual database sources. An important feature of our local optimization algorithm is that we prove necessary and sufficient conditions to eliminate an unnecessary join in a conjunctive query of arbitrary join topology. This feature allows our optimizer to utilize more expressive relational rules to provide a wider range of possible optimizations than previous work in SQO. The local optimization algorithm also features a new data structure called AND-OR implication graphs to facilitate the search for optimal queries. These features allow the global optimization to effectively use semantic knowledge to reduce the data transmission cost. We have implemented this approach in the PESTO (Plan Enhancement by SemanTic Optimization) query plan optimizer as a part of the SIMS information mediator. Experimental results demonstrate that PESTO can provide significant savings in query execution cost over query plan execution without optimization.", "Information integration systems, also knows as mediators, information brokers, or information gathering agents, provide uniform user interfaces to varieties of different information sources. With corporate databases getting connected by intranets, and vast amounts of information becoming available over the Internet, the need for information integration systems is increasing steadily. Our work focuses on query planning in such systems. Query planning is the task of transforming a user query, represented in the user's interface language and vocabulary, into queries that can be executed by the information sources. Every information source might require a different query language and might use different vocabularies. The resulting answers of the information sources need to be translated and combined before the final answer can be reported to the user. We show that query plans with a fixed number of database operations are insufficient to extract all information from the sources, if functional dependencies or limitations on binding patterns are present. Dependencies complicate query planning because they allow query plans that would otherwise be invalid. We present an algorithm that constructs query plans that are guaranteed to extract all available information in these more general cases. This algorithm is also able to handle datalog user queries. We examine further extensions of the languages allowed for user queries and for describing information sources: disjunction, recursion and negation in source descriptions, negation and inequality in user queries. For these more expressive cases, we determine the data complexity required of languages able to represent \"best possible\" query plans." ] }
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
Alon and Kahale @cite_7 were the first to employ spectral techniques for 3-coloring sparse random graphs. They present a spectral heuristic and show that this heuristic finds a 3-coloring in the so-called "planted solution model". This model is somewhat more difficult to deal with algorithmically than the @math model that we study in the present work. For while in the @math -model each vertex @math has @math neighbors in each of the other color classes @math , in the planted solution model of Alon and Kahale the number of neighbors of @math in @math has a Poisson distribution with mean @math . In effect, the spectral algorithm in @cite_7 is more sophisticated than the spectral heuristic from . In particular, the Alon-Kahale algorithm succeeds on @math -regular graphs (and hence on @math w.h.p.).
{ "cite_N": [ "@cite_7" ], "mid": [ "2105546154" ], "abstract": [ "We consider the problem of coloring the vertices of a large sparse random graph with a given number of colors so that no adjacent vertices have the same color. Using the cavity method, we present a detailed and systematic analytical study of the space of proper colorings (solutions). We show that for a fixed number of colors and as the average vertex degree (number of constraints) increases, the set of solutions undergoes several phase transitions similar to those observed in the mean field theory of glasses. First, at the clustering transition, the entropically dominant part of the phase space decomposes into an exponential number of pure states so that beyond this transition a uniform sampling of solutions becomes hard. Afterward, the space of solutions condenses over a finite number of the largest states and consequently the total entropy of solutions becomes smaller than the annealed one. Another transition takes place when in all the entropically dominant states a finite fraction of nodes freezes so that each of these nodes is allowed a single color in all the solutions inside the state. Eventually, above the coloring threshold, no more solutions are available." ] }
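The spectral approach referred to in the related-work paragraph of this record can be illustrated with a small, hedged sketch in Python: on a graph with a planted 3-coloring, the eigenvectors belonging to the two most negative adjacency eigenvalues roughly separate the three color classes, so clustering the vertices in that two-dimensional embedding recovers most of the planted coloring. This is only a toy version of the first step of such heuristics, not the Alon-Kahale algorithm (which adds several correction phases); the graph model parameters below are assumptions made for the example.

```python
import numpy as np

def planted_3colorable_graph(n=300, d=20, seed=0):
    """n vertices split into three planted color classes; each cross-class pair
    becomes an edge with a probability chosen so the expected degree is about d."""
    rng = np.random.default_rng(seed)
    colors = np.repeat([0, 1, 2], n // 3)
    n = len(colors)
    p = d / (2 * (n / 3))                      # ~d/2 expected neighbors in each other class
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if colors[i] != colors[j] and rng.random() < p:
                A[i, j] = A[j, i] = 1.0
    return A, colors

def spectral_coloring(A, k=3, seed=1):
    """Embed vertices via the eigenvectors of the two most negative adjacency
    eigenvalues and cluster the embedding with a few Lloyd (k-means) steps."""
    _vals, vecs = np.linalg.eigh(A)            # eigenvalues in ascending order
    coords = vecs[:, :2]
    rng = np.random.default_rng(seed)
    centres = coords[rng.choice(len(coords), k, replace=False)]
    for _ in range(20):
        labels = np.argmin(((coords[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([coords[labels == c].mean(axis=0) if np.any(labels == c)
                            else centres[c] for c in range(k)])
    return labels

A, planted = planted_3colorable_graph()
guess = spectral_coloring(A)
mono = sum(A[i, j] == 1.0 and guess[i] == guess[j]
           for i in range(len(guess)) for j in range(i + 1, len(guess)))
print("monochromatic edges under the spectral coloring:", int(mono))
```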
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
There are numerous papers on the performance of message passing algorithms for constraint satisfaction problems (e.g., Belief Propagation and Survey Propagation) by authors from the statistical physics community (cf. @cite_6 @cite_4 @cite_9 and the references therein). While these papers provide rather plausible (and insightful) explanations for the success of message passing algorithms on problem instances such as random graphs @math or random @math -SAT formulae, the arguments (e.g., the replica or the cavity method) are mathematically non-rigorous. To the best of our knowledge, no connection between spectral methods and BP has been established in the physics literature.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_6" ], "mid": [ "1823667034", "2139919528", "2166670884" ], "abstract": [ "Message-passing algorithms can solve a wide variety of optimization, inference, and constraint satisfaction problems. The algorithms operate on factor graphs that visually represent and specify the structure of the problems. After describing some of their applications, I survey the family of belief propagation (BP) algorithms, beginning with a detailed description of the min-sum algorithm and its exactness on tree factor graphs, and then turning to a variety of more sophisticated BP algorithms, including free-energy based BP algorithms, “splitting” BP algorithms that generalize “tree-reweighted” BP, and the various BP algorithms that have been proposed to deal with problems with continuous variables.", "Experimental results show that certain message passing algorithms, namely, survey propagation, are very effective in finding satisfying assignments in random satisfiable 3CNF formulas. In this paper we make a modest step towards providing rigorous analysis that proves the effectiveness of message passing algorithms for random 3SAT. We analyze the performance of Warning Propagation, a popular message passing algorithm that is simpler than survey propagation. We show that for 3CNF formulas generated under the planted assignment distribution, running warning propagation in the standard way works when the clause-to-variable ratio is a sufficiently large constant. We are not aware of previous rigorous analysis of message passing algorithms for satisfiability instances, though such analysis was performed for decoding of Low Density Parity Check (LDPC) Codes. We discuss some of the differences between results for the LDPC setting and our results.", "We consider the estimation of a random vector observed through a linear transform followed by a componentwise probabilistic measurement channel. Although such linear mixing estimation problems are generally highly non-convex, Gaussian approximations of belief propagation (BP) have proven to be computationally attractive and highly effective in a range of applications. Recently, Bayati and Montanari have provided a rigorous and extremely general analysis of a large class of approximate message passing (AMP) algorithms that includes many Gaussian approximate BP methods. This paper extends their analysis to a larger class of algorithms to include what we call generalized AMP (G-AMP). G-AMP incorporates general (possibly non-AWGN) measurement channels. Similar to the AWGN output channel case, we show that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations. The general SE equations recover and extend several earlier results, including SE equations for approximate BP on general output channels by Guo and Wang." ] }
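To make the message-passing scheme discussed in this record concrete, the following hedged sketch implements belief propagation for k-coloring in its usual schematic form: the message from v to w estimates the color distribution of v with the edge {v, w} removed, and each update multiplies, color by color, the probabilities that every other neighbor avoids that color. The plain-Python data structures, the random initialization, and the small example graph are illustrative assumptions, not the exact formulation of the cited works.

```python
import random

def bp_coloring(adj, k=3, iters=100, seed=0):
    """Belief propagation for k-coloring on an undirected graph given as an
    adjacency dict {v: set_of_neighbors}.  mu[(v, w)][c] is (an estimate of)
    the probability that v takes color c when the edge {v, w} is removed."""
    random.seed(seed)
    mu = {}
    for v in adj:
        for w in adj[v]:
            m = [random.random() for _ in range(k)]
            s = sum(m)
            mu[(v, w)] = [x / s for x in m]              # random normalized start
    for _ in range(iters):
        new = {}
        for (v, w) in mu:
            msg = []
            for c in range(k):
                p = 1.0
                for u in adj[v]:
                    if u != w:
                        p *= 1.0 - mu[(u, v)][c]         # neighbor u must avoid color c
                msg.append(p)
            s = sum(msg) or 1.0
            new[(v, w)] = [x / s for x in msg]
        mu = new
    beliefs = {}                                         # approximate marginals per vertex
    for v in adj:
        b = []
        for c in range(k):
            p = 1.0
            for u in adj[v]:
                p *= 1.0 - mu[(u, v)][c]
            b.append(p)
        s = sum(b) or 1.0
        beliefs[v] = [x / s for x in b]
    return beliefs

# a 5-cycle is 3-colorable and small enough to inspect the resulting beliefs
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
for v, b in bp_coloring(cycle).items():
    print(v, [round(x, 3) for x in b])
```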
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
Feige, Mossel, and Vilenchik @cite_15 showed that the Warning Propagation (WP) algorithm for 3-SAT converges in polynomial time to a satisfying assignment on a model of random 3-SAT instances with a planted solution. Since the messages in WP are additive in nature, and not multiplicative as in BP, the WP algorithm is conceptually much simpler. Moreover, on the model studied in @cite_15 a fairly simple combinatorial algorithm (based on the "majority vote" algorithm) is known to succeed. By contrast, no purely combinatorial algorithm (that does not rely on spectral methods or semi-definite programming) is known to 3-color @math or even arbitrary @math -regular instances.
{ "cite_N": [ "@cite_15" ], "mid": [ "2139919528" ], "abstract": [ "Experimental results show that certain message passing algorithms, namely, survey propagation, are very effective in finding satisfying assignments in random satisfiable 3CNF formulas. In this paper we make a modest step towards providing rigorous analysis that proves the effectiveness of message passing algorithms for random 3SAT. We analyze the performance of Warning Propagation, a popular message passing algorithm that is simpler than survey propagation. We show that for 3CNF formulas generated under the planted assignment distribution, running warning propagation in the standard way works when the clause-to-variable ratio is a sufficiently large constant. We are not aware of previous rigorous analysis of message passing algorithms for satisfiability instances, though such analysis was performed for decoding of Low Density Parity Check (LDPC) Codes. We discuss some of the differences between results for the LDPC setting and our results." ] }
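The "majority vote" idea mentioned in this record is easy to sketch: in a planted 3-SAT instance, set each variable to the polarity with which it occurs most often, which correlates strongly with the planted assignment. The planted distribution below (clauses drawn uniformly among those satisfied by an all-True assignment) is a simplification assumed for the example, and the clean-up phase a full algorithm would need is omitted.

```python
import random

def planted_3sat(n=200, ratio=20.0, seed=0):
    """Random 3-CNF with a planted satisfying assignment (all variables True):
    clauses are drawn uniformly among those the planted assignment satisfies."""
    random.seed(seed)
    clauses = []
    while len(clauses) < int(ratio * n):
        variables = random.sample(range(n), 3)
        lits = [(v + 1) * random.choice([1, -1]) for v in variables]
        if any(l > 0 for l in lits):          # satisfied by the all-True assignment
            clauses.append(lits)
    return n, clauses

def majority_vote(n, clauses):
    """Set each variable to the polarity it shows in the majority of its occurrences."""
    score = [0] * n
    for clause in clauses:
        for lit in clause:
            score[abs(lit) - 1] += 1 if lit > 0 else -1
    return [s >= 0 for s in score]            # True = variable assigned True

n, clauses = planted_3sat()
assignment = majority_vote(n, clauses)
agreement = sum(assignment) / n               # fraction agreeing with the planted assignment
unsatisfied = sum(not any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                  for clause in clauses)
print(f"agreement with planted assignment: {agreement:.2f}, unsatisfied clauses: {unsatisfied}")
```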
0712.0171
1603803693
Belief propagation (BP) is a message-passing algorithm that computes the exact marginal distributions at every vertex of a graphical model without cycles. While BP is designed to work correctly on trees, it is routinely applied to general graphical models that may contain cycles, in which case neither convergence, nor correctness in the case of convergence is guaranteed. Nonetheless, BP has gained popularity as it seems to remain effective in many cases of interest, even when the underlying graph is ‘far’ from being a tree. However, the theoretical understanding of BP (and its new relative survey propagation) when applied to CSPs is poor. Contributing to the rigorous understanding of BP, in this paper we relate the convergence of BP to spectral properties of the graph. This encompasses a result for random graphs with a ‘planted’ solution; thus, we obtain the first rigorous result on BP for graph colouring in the case of a complex graphical structure (as opposed to trees). In particular, the analysis shows how belief propagation breaks the symmetry between the 3! possible permutations of the colour classes.
A very recent paper by Yamamoto and Watanabe @cite_10 deals with a spectral approach to analyzing BP for the Minimum Bisection problem. Their work is similar to ours in that they point out that a BP-related algorithm pseudo-bp emulates spectral methods. However, a significant difference is that pseudo-bp is a simplified version of BP that is easier to analyze, whereas in the present work we make a point of analyzing the BP algorithm for coloring as it is stated in @cite_6 (cf. Remark for more detailed comments). Nonetheless, an interesting aspect of @cite_10 certainly is that this paper shows that BP can be applied to an actual optimization problem, rather than to the problem of just finding any feasible solution (e.g., a @math -coloring).
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2163985581", "2078204800" ], "abstract": [ "We address the problem of estimating spectral lines from irregularly sampled data within the framework of sparse representations. Spectral analysis is formulated as a linear inverse problem, which is solved by minimizing an l1-norm penalized cost function. This approach can be viewed as a basis pursuit de-noising (BPDN) problem using a dictionary of cisoids with high frequency resolution. In the studied case, however, usual BPDN characterizations of uniqueness and sparsity do not apply. This paper deals with the l1-norm penalization of complex-valued variables, that brings satisfactory prior modeling for the estimation of spectral lines. An analytical characterization of the minimizer of the criterion is given and geometrical properties are derived about the uniqueness and the sparsity of the solution. An efficient optimization strategy is proposed. Convergence properties of the iterative coordinate descent (ICD) and iterative reweighted least-squares (IRLS) algorithms are first examined. Then, both strategies are merged in a convergent procedure, that takes advantage of the specificities of ICD and IRLS, considerably improving the convergence speed. The computation of the resulting spectrum estimator can be implemented efficiently for any sampling scheme. Algorithm performance and estimation quality are illustrated throughout the paper using an artificial data set, typical of some astrophysical problems, where sampling irregularities are caused by day night alternation. We show that accurate frequency location is achieved with high resolution. In particular, compared with sequential Matching Pursuit methods, the proposed approach is shown to achieve more robustness regarding sampling artifacts.", "The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries---stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis pursuit (BP) is a principle for decomposing a signal into an \"optimal\"' superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear and quadratic programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver." ] }
0711.4562
2951599038
The discovery of Autonomous Systems (ASes) interconnections and the inference of their commercial Type-of-Relationships (ToR) has been extensively studied during the last few years. The main motivation is to accurately calculate AS-level paths and to provide better topological view of the Internet. An inherent problem in current algorithms is their extensive use of heuristics. Such heuristics incur unbounded errors which are spread over all inferred relationships. We propose a near-deterministic algorithm for solving the ToR inference problem. Our algorithm uses as input the Internet core, which is a dense sub-graph of top-level ASes. We test several methods for creating such a core and demonstrate the robustness of the algorithm to the core's size and density, the inference period, and errors in the core. We evaluate our algorithm using AS-level paths collected from RouteViews BGP paths and DIMES traceroute measurements. Our proposed algorithm deterministically infers over 95 of the approximately 58,000 AS topology links. The inference becomes stable when using a week worth of data and as little as 20 ASes in the core. The algorithm infers 2-3 times more peer-to-peer relationships in edges discovered only by DIMES than in RouteViews edges, validating the DIMES promise to discover periphery AS edges.
Battista et al. @cite_16 showed that the decision version of the ToR problem (ToR-D) is an NP-complete problem in the general case. Motivated by the hardness of the general problem, they proposed approximation algorithms and reduced the ToR-D problem to a 2SAT formula by mapping any two adjacent edges in all input AS-level routing paths into a clause with two literals, while adding heuristics-based inference.
{ "cite_N": [ "@cite_16" ], "mid": [ "2008724760" ], "abstract": [ "In this paper we study a fundamental open problem in the area of probabilistic checkable proofs: What is the smallest s such that NP ⊆ naPCP1,s[O(log n),3]? In the language of hardness of approximation, this problem is equivalent to determining the smallest s such that getting an s-approximation for satisfiable 3-bit constraint satisfaction problems (\"3-CSPs\") is NP-hard. The previous best upper bound and lower bound for s are 20 27+µ by Khot and Saket [KS06], and 5 8 (assuming NP subseteq BPP) by Zwick [Zwi98]. In this paper we close the gap assuming Khot's d-to-1 Conjecture. Formally, we prove that if Khot's d-to-1 Conjecture holds for any finite constant integer d, then NP naPCP1,5 8+ µ[O(log n),3] for any constant µ > 0. Our conditional result also solves Hastad's open question [Has01] on determining the inapproximability of satisfiable Max-NTW (\"Not Two\") instances and confirms Zwick's conjecture [Zwi98] that the 5 8-approximation algorithm for satisfiable 3-CSPs is optimal." ] }
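A hedged sketch of the kind of 2SAT encoding this record describes, restricted to customer-provider edges (p2p and s2s links are ignored): each undirected AS edge gets one boolean variable whose literal, read along the traversal direction, means "this hop goes from customer to provider", and every pair of adjacent edges on an observed path contributes a two-literal clause forbidding a "down then up" valley. The naming and the tiny example paths are our own; the cited construction and its heuristics are more involved.

```python
def tor_2sat_clauses(paths):
    """Valley-free constraints from AS paths as 2-SAT clauses.
    For an edge traversed from u to v, lit(u, v) is the literal that is True
    when u is a customer of v (the hop goes 'up').  For consecutive hops
    (a, b), (b, c) the forbidden pattern is down-then-up, encoded as the
    clause ( lit(a, b) OR NOT lit(b, c) )."""
    var = {}                                   # undirected edge -> 2-SAT variable index

    def lit(u, v):
        e = tuple(sorted((u, v)))
        var.setdefault(e, len(var) + 1)
        # traversing against the stored orientation flips the literal
        # (valid here because only customer-provider edges are modelled)
        return var[e] if (u, v) == e else -var[e]

    clauses = []
    for path in paths:
        hops = list(zip(path, path[1:]))
        for (a, b), (_, c) in zip(hops, hops[1:]):
            clauses.append((lit(a, b), -lit(b, c)))
    return var, clauses

# two observed AS-level paths (hypothetical AS names)
paths = [["AS1", "AS2", "AS3"], ["AS4", "AS2", "AS5"]]
variables, clauses = tor_2sat_clauses(paths)
print(variables)   # edge -> variable
print(clauses)     # feed these to any 2-SAT solver to orient the edges
```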
0711.4562
2951599038
The discovery of Autonomous Systems (ASes) interconnections and the inference of their commercial Type-of-Relationships (ToR) has been extensively studied during the last few years. The main motivation is to accurately calculate AS-level paths and to provide better topological view of the Internet. An inherent problem in current algorithms is their extensive use of heuristics. Such heuristics incur unbounded errors which are spread over all inferred relationships. We propose a near-deterministic algorithm for solving the ToR inference problem. Our algorithm uses as input the Internet core, which is a dense sub-graph of top-level ASes. We test several methods for creating such a core and demonstrate the robustness of the algorithm to the core's size and density, the inference period, and errors in the core. We evaluate our algorithm using AS-level paths collected from RouteViews BGP paths and DIMES traceroute measurements. Our proposed algorithm deterministically infers over 95 of the approximately 58,000 AS topology links. The inference becomes stable when using a week worth of data and as little as 20 ASes in the core. The algorithm infers 2-3 times more peer-to-peer relationships in edges discovered only by DIMES than in RouteViews edges, validating the DIMES promise to discover periphery AS edges.
Dimitropoulos et al. @cite_1 addressed a problem in current ToR algorithms. They showed that although ToR algorithms produce a directed Internet graph with a very small number of invalid paths, the resulting AS relationships are far from reality. This led them to the conclusion that simply trying to maximize the number of valid paths (namely improving the result of the ToR algorithms) does not produce realistic results. Later, in @cite_17 , they showed that ToR has no means to deterministically select the most realistic solution when facing multiple possible solutions. In order to solve this problem, the authors suggested a new objective function by adding a notion of "AS importance", which is the AS degree "gradient" in the original undirected Internet graph. The modified ToR algorithm directs the edges from the lower-importance AS to the higher-importance one. The authors report a high success rate in p2c inference (96.5%) and in s2s inference (90.3%), and a lower one in p2p inference (82.8%); they also mention that for some of them, the BGP tables, which are the source for AS-level routing paths for most works in this research field, miss up to 86.2% of the ASes, most of which are of p2p type.
{ "cite_N": [ "@cite_1", "@cite_17" ], "mid": [ "2123649205", "1655958391" ], "abstract": [ "The topology of the Internet at the autonomous system (AS) level is not yet fully discovered despite significant research activity. The community still does not know how many links are missing, where these links are and finally, whether the missing links will change our conceptual model of the Internet topology. An accurate and complete model of the topology would be important for protocol design, performance evaluation and analyses. The goal of our work is to develop methodologies and tools to identify and validate such missing links between ASes. In this work, we develop several methods and identify a significant number of missing links, particularly of the peer-to-peer type. Interestingly, most of the missing AS links that we find exist as peer-to-peer links at the Internet exchange points (IXPs). First, in more detail, we provide a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet routing registries, and traceroute data, while we extract significant new information from the less-studied Internet exchange points (IXPs). We identify 40 more edges and approximately 300 more peer-to-peer edges compared to commonly used data sets. All of these edges have been verified by either BGP tables or traceroute. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50 of their paths stop going through their ISPs assuming policy-aware routing. A surprising observation is that the degree of an AS may be a poor indicator of which ASes it will peer with.", "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + eps.
Ehrgott @cite_28 considered a variant of , where all objectives are encoded into a single objective by using some norm. He proved approximation ratios between @math and @math for this problem, where the ratio depends on the norm used.
{ "cite_N": [ "@cite_28" ], "mid": [ "2039191483" ], "abstract": [ "One way of solving multiple objective mathematical programming problems is finding discrete representations of the efficient set. A modified goal of finding good discrete representations of the efficient set would contribute to the practicality of vector maximization algorithms. We define coverage, uniformity and cardinality as the three attributes of quality of discrete representations and introduce a framework that includes these attributes in which discrete representations can be evaluated, compared to each other, and judged satisfactory or unsatisfactory by a Decision Maker. We provide simple mathematical programming formulations that can be used to compute the coverage error of a given discrete representation. Our formulations are practically implementable when the problem under study is a multiobjective linear programming problem. We believe that the interactive algorithms along with the vector maximization methods can make use of our framework and its tools." ] }
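The norm-based encoding mentioned in this record can be sketched as follows: every edge carries a vector of objective values, a tour's objective vector is the componentwise sum along the tour, and a single scalar objective is obtained by applying a p-norm to that vector. The brute-force search and the toy instance are purely illustrative assumptions, not Ehrgott's construction or analysis.

```python
import itertools

def tour_objective_vector(tour, w):
    """Sum the k-dimensional edge-weight vectors along a closed tour."""
    k = len(next(iter(w.values())))
    total = [0.0] * k
    for a, b in zip(tour, tour[1:] + tour[:1]):
        for i in range(k):
            total[i] += w[frozenset((a, b))][i]
    return total

def best_tour_under_norm(cities, w, p=2):
    """Brute-force the tour minimising the p-norm of its objective vector
    (only sensible for very small instances; purely for illustration)."""
    best, best_val = None, float("inf")
    first = cities[0]
    for rest in itertools.permutations(cities[1:]):
        tour = [first, *rest]
        vec = tour_objective_vector(tour, w)
        val = sum(x ** p for x in vec) ** (1.0 / p)
        if val < best_val:
            best, best_val = tour, val
    return best, best_val

# 4 cities, 2 objectives per edge (e.g. cost and travel time)
cities = ["A", "B", "C", "D"]
w = {
    frozenset(("A", "B")): (1, 4), frozenset(("A", "C")): (2, 2),
    frozenset(("A", "D")): (3, 1), frozenset(("B", "C")): (1, 3),
    frozenset(("B", "D")): (2, 2), frozenset(("C", "D")): (1, 5),
}
print(best_tour_under_norm(cities, w, p=1))
print(best_tour_under_norm(cities, w, p=2))   # different norms may select different tours
```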
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + eps.
Manthey and Ram @cite_6 designed a @math approximation algorithm for and an approximation algorithm for , which achieves a constant ratio but works only for @math . They left open the existence of approximation algorithms for , , and .
{ "cite_N": [ "@cite_6" ], "mid": [ "1989921461" ], "abstract": [ "Given @math elements with non-negative integer weights @math and an integer capacity @math , we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most @math . We give the first deterministic, fully polynomial-time approximation scheme (FPTAS) for estimating the number of solutions to any knapsack constraint (our estimate has relative error @math ). Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes (FPRAS) were known first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. In addition, we present a new method for deterministic approximate counting using read-once branching programs. Our approach yields an FPTAS for several other counting problems, including counting solutions for the multidimensional knapsack problem with a constant number of constraints, the general integer knapsack problem, and the contingency tables problem with a constant number of rows." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + eps.
Bläser et al. @cite_19 devised the first randomized approximation algorithms for and . Their algorithms achieve ratios of @math for k and @math for k. They argue that with their approach, only approximation ratios of @math can be achieved. Nevertheless, they conjectured that approximation ratios of @math are possible.
{ "cite_N": [ "@cite_19" ], "mid": [ "2952926320" ], "abstract": [ "We present randomized approximation algorithms for multi-criteria Max-TSP. For Max-STSP with k > 1 objective functions, we obtain an approximation ratio of @math for arbitrarily small @math . For Max-ATSP with k objective functions, we obtain an approximation ratio of @math ." ] }
0711.2157
2953214908
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + eps.
For an overview of the literature about multi-criteria optimization, including multi-criteria TSP, we refer to Ehrgott and Gandibleux @cite_22 .
{ "cite_N": [ "@cite_22" ], "mid": [ "2529345806" ], "abstract": [ "Dynamic Multi-objective Optimization is a challenging research topic since the objective functions, constraints, and problem parameters may change over time. Although dynamic optimization and multi-objective optimization have separately obtained a great interest among many researchers, there are only few studies that have been developed to solve Dynamic Multi-objective Optimisation Problems (DMOPs). Moreover, applying Evolutionary Algorithms (EAs) to solve this category of problems is not yet highly explored although this kind of problems is of significant importance in practice. This paper is devoted to briefly survey EAs that were proposed in the literature to handle DMOPs. In addition, an overview of the most commonly used test functions, performance measures and statistical tests is presented. Actual challenges and future research directions are also discussed." ] }
0711.2949
1777853180
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
As relative deviation is essentially equivalent to absolute deviation on a logarithmically scaled objective space, this choice should not affect the convergence results obtained; which notion to use rather depends on the actual application problem at hand. The nice property of relative deviation is that it allows one to prove that, under very mild assumptions, there is always an @math -Pareto set whose size is polynomial in the input length @cite_5 @cite_11 . Further approximation results for particular combinatorial multiobjective optimization problems are given in @cite_10 , where the question was how well a single solution can approximate the whole Pareto set, which is a special case of our question restricted to @math and with a focus on deterministic algorithms.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2001663593", "2123485784", "1993855803" ], "abstract": [ "par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 .", "The problem of finding high-dimensional approximate nearest neighbors is considered when the data is generated by some known probabilistic model. A large natural class of algorithms (bucketing codes) is investigated, Bucketing information is defined, and is proven to bound the performance of all bucketing codes. The bucketing information bound is asymptotically attained by some randomly constructed bucketing codes. The example of n Bernoulli(1 2) very long (length d → ∞) sequences of bits is singled out. It is assumed that n - 2m sequences are completely independent, while the remaining 2m sequences are composed of m dependent pairs. The interdependence within each pair is that their bits agree with probability 1 2 0. A specific 2-D inequality (proven in another paper) implies that the exponent 1 p cannot be lowered. Moreover, if one sequence out of each pair belongs to a known set of n(2p-1)2 sequences, pairing can be done using order n1+∈ comparisons!", "In this paper, we attempt to approximate and index a d- dimensional (d ≥ 1) spatio-temporal trajectory with a low order continuous polynomial. There are many possible ways to choose the polynomial, including (continuous)Fourier transforms, splines, non-linear regressino, etc. Some of these possiblities have indeed been studied beofre. 
We hypothesize that one of the best possibilities is the polynomial that minimizes the maximum deviation from the true value, which is called the minimax polynomial. Minimax approximation is particularly meaningful for indexing because in a branch-and-bound search (i.e., for finding nearest neighbours), the smaller the maximum deviation, the more pruning opportunities there exist. However, in general, among all the polynomials of the same degree, the optimal minimax polynomial is very hard to compute. However, it has been shown thta the Chebyshev approximation is almost identical to the optimal minimax polynomial, and is easy to compute [16]. Thus, in this paper, we explore how to use the Chebyshev polynomials as a basis for approximating and indexing d-dimenstional trajectories.The key analytic result of this paper is the Lower Bounding Lemma. that is, we show that the Euclidean distance between two d-dimensional trajectories is lower bounded by the weighted Euclidean distance between the two vectors of Chebyshev coefficients. this lemma is not trivial to show, and it ensures that indexing with Chebyshev cofficients aedmits no false negatives. To complement that analystic result, we conducted comprehensive experimental evaluation with real and generated 1-dimensional to 4-dimensional data sets. We compared the proposed schem with the Adaptive Piecewise Constant Approximation (APCA) scheme. Our preliminary results indicate that in all situations we tested, Chebyshev indexing dominates APCA in pruning power, I O and CPU costs." ] }
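To make the ε-approximation notion used throughout this record concrete, here is a hedged sketch (for minimization with strictly positive objective values) of a relative ε-dominance test, of the log-scale "box index" that turns relative deviation into an absolute grid, and of a greedy construction of a small ε-Pareto set from a finite list of points. This is a static illustration of the concept, not the on-line archiving algorithm analyzed in the paper; all function names are our own.

```python
import math

def eps_dominates(a, b, eps):
    """a (relatively) eps-dominates b for minimisation if a_i <= (1+eps)*b_i in
    every objective, i.e. a is at most a factor (1+eps) worse than b anywhere."""
    return all(ai <= (1 + eps) * bi for ai, bi in zip(a, b))

def box_index(a, eps):
    """Relative deviation = absolute deviation on a log scale: flooring
    log base (1+eps) of each objective maps a point to its grid box."""
    return tuple(math.floor(math.log(ai) / math.log(1 + eps)) for ai in a)

def eps_pareto_subset(points, eps):
    """Greedy static construction of an eps-Pareto set: keep one representative
    per box, then drop representatives dominated by other representatives."""
    chosen = {}
    for p in points:
        b = box_index(p, eps)
        if b not in chosen or all(x <= y for x, y in zip(p, chosen[b])):
            chosen[b] = p
    reps = list(chosen.values())
    return [p for p in reps
            if not any(q != p and eps_dominates(q, p, 0.0) for q in reps)]

pts = [(1.0, 9.0), (1.05, 8.7), (2.0, 4.0), (2.1, 3.9), (6.0, 1.5), (9.0, 1.0)]
approx = eps_pareto_subset(pts, eps=0.1)
print(approx)
# sanity check: every input point is eps-dominated by some kept point
print(all(any(eps_dominates(a, p, 0.1) for a in approx) for p in pts))
```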
0711.2949
1777853180
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
Despite the existence of suitable approximation concepts, investigations on the convergence of particular algorithms towards such approximation sets, that is, their ability to obtain a suitable Pareto set approximation in the limit, have remained rare. In @cite_15 @cite_1 the stochastic search procedure proposed earlier by @cite_7 was analyzed and proved to converge to an @math -Pareto set with @math in the case of a finite search space. Obviously, the solution set maintained by this algorithm might in the worst case grow as large as the Pareto set @math itself. Thus, a different version with bounded memory of at most @math elements was proposed and shown to converge to some subset of @math of size at most @math , but no guarantee about the approximation quality could be given. Similar results were obtained by @cite_8 for continuous search spaces.
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_7", "@cite_8" ], "mid": [ "2077178480", "2808191361", "2165626989", "2001663593" ], "abstract": [ "We consider the problem of nearest-neighbor searching among a set of stochastic sites, where a stochastic site is a tuple ((s_i, _i) ) consisting of a point (s_i ) in a (d )-dimensional space and a probability ( _i ) determining its existence. The problem is interesting and non-trivial even in (1 )-dimension, where the Most Likely Voronoi Diagram (LVD) is shown to have worst-case complexity ( (n^2) ). We then show that under more natural and less adversarial conditions, the size of the (1 )-dimensional LVD is significantly smaller: (1) ( (k n) ) if the input has only (k ) distinct probability values, (2) (O(n n) ) on average, and (3) (O(n n ) ) under smoothed analysis. We also present an alternative approach to the most likely nearest neighbor (LNN) search using Pareto sets, which gives a linear-space data structure and sub-linear query time in 1D for average and smoothed analysis models, as well as worst-case with a bounded number of distinct probabilities. Using the Pareto-set approach, we can also reduce the multi-dimensional LNN search to a sequence of nearest neighbor and spherical range queries.", "We consider the over-time version of the Max-Min Fair Allocation problem. Given a time horizon @math , with at each time t a set of demands and a set of available resources that may change over the time defining instance @math , we seek a sequence of solutions @math that (1) are near-optimal at each time t , and (2) as stable as possible (inducing small modification costs). We focus on the impact of the knowledge of the future on the quality and the stability of the returned solutions by distinguishing three settings: the off-line setting where the whole set of instances through the time horizon is known in advance, the on-line setting where no future instance is known, and the k -lookahead setting where at time t , the instances at times @math are known. We first consider the case without restrictions where the set of resources and the set of agents are the same for all instances and where every resource can be allocated to any agent. For the off-line setting, we show that the over-time version of the problem is much harder than the static one, since it becomes @math -hard even for families of instances for which the static problem is trivial. Then, we provide a @math -approximation algorithm for the off-line setting using as subroutine a ρ-approximation algorithm for the static version. We also give a @math -competitive algorithm for the online setting using also as subroutine a ρ-approximation algorithm for the static version. Furthermore, for the case with restrictions, we show that in the off-line setting it is possible to get a polynomial-time algorithm with the same approximation ratio as in the case without restrictions. For the online setting, we prove that it is not possible to find an online algorithm with bounded competitive ratio. For the 1-lookahead setting however, we give a @math -approximation algorithm using as subroutine a ρ-approximation algorithm for the static version.", "Search algorithms for Pareto optimization are designed to obtain multiple solutions, each offering a different trade-off of the problem objectives. To make the different solutions available at the end of an algorithm run, procedures are needed for storing them, one by one, as they are found. 
In a simple case, this may be achieved by placing each point that is found into an \"archive\" which maintains only nondominated points and discards all others. However, even a set of mutually nondominated points is potentially very large, necessitating a bound on the archive's capacity. But with such a bound in place, it is no longer obvious which points should be maintained and which discarded; we would like the archive to maintain a representative and well-distributed subset of the points generated by the search algorithm, and also that this set converges. To achieve these objectives, we propose an adaptive archiving algorithm, suitable for use with any Pareto optimization algorithm, which has various useful properties as follows. It maintains an archive of bounded size, encourages an even distribution of points across the Pareto front, is computationally efficient, and we are able to prove a form of convergence. The method proposed here maintains evenness, efficiency, and cardinality, and provably converges under certain conditions but not all. Finally, the notions underlying our convergence proofs support a new way to rigorously define what is meant by \"good spread of points\" across a Pareto front, in the context of grid-based archiving schemes. This leads to proofs and conjectures applicable to archive sizing and grid sizing in any Pareto optimization algorithm maintaining a grid-based archive.", "par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 ." ] }
0711.2949
1777853180
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on the usage of a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
One option to control the approximation quality under size restrictions is to define a quality indicator which maps each possible solution set to a real value that can then be used to decide on the inclusion of a new search point. Several algorithms have been proposed that implement this concept @cite_20 @cite_16 . If such a quality indicator fulfils certain monotonicity conditions, it can be used as a potential function in the convergence analysis. As shown in @cite_9 @cite_17 , this entails convergence to a subset of the Pareto set as a local optimum of the quality indicator, but it remained open how such a local optimum relates to a guarantee on the approximation quality @math . @cite_17 also analyzed an adaptive grid archiving method proposed in @cite_19 and proved that after finite time, even though the solution set itself might permanently oscillate, it will always represent an @math -approximation whose approximation quality depends on the granularity of the adaptive grid and on the number of allowed solutions. The results depend on the additional assumption that the grid boundaries converge after finite time, which is fulfilled in certain special cases.
{ "cite_N": [ "@cite_9", "@cite_19", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "2973707709", "2808191361", "2163637155", "2263717697", "2089729462" ], "abstract": [ "In sparse approximation theory, the fundamental problem is to reconstruct a signal A∈ℝn from linear measurements 〈Aψi〉 with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing [9] where the reconstruction can be done with very few—O(k logn)—linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k logn) ×n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying Mathematics and because of its potential applications In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, logn, 1 e, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + e and improves the reconstruction time from poly(n) to poly(k logn) Our second result is a randomized construction of O(kpolylog (n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements known from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement", "We consider the over-time version of the Max-Min Fair Allocation problem. Given a time horizon @math , with at each time t a set of demands and a set of available resources that may change over the time defining instance @math , we seek a sequence of solutions @math that (1) are near-optimal at each time t , and (2) as stable as possible (inducing small modification costs). We focus on the impact of the knowledge of the future on the quality and the stability of the returned solutions by distinguishing three settings: the off-line setting where the whole set of instances through the time horizon is known in advance, the on-line setting where no future instance is known, and the k -lookahead setting where at time t , the instances at times @math are known. We first consider the case without restrictions where the set of resources and the set of agents are the same for all instances and where every resource can be allocated to any agent. For the off-line setting, we show that the over-time version of the problem is much harder than the static one, since it becomes @math -hard even for families of instances for which the static problem is trivial. 
Then, we provide a @math -approximation algorithm for the off-line setting using as subroutine a ρ-approximation algorithm for the static version. We also give a @math -competitive algorithm for the online setting using also as subroutine a ρ-approximation algorithm for the static version. Furthermore, for the case with restrictions, we show that in the off-line setting it is possible to get a polynomial-time algorithm with the same approximation ratio as in the case without restrictions. For the online setting, we prove that it is not possible to find an online algorithm with bounded competitive ratio. For the 1-lookahead setting however, we give a @math -approximation algorithm using as subroutine a ρ-approximation algorithm for the static version.", "Given any scheme in conservation form and an appropriate uniform grid for the numerical solution of the initial value problem for one-dimensional hyperbolic conservation laws we describe a multiresolution algorithm that approximates this numerical solution to a prescribed tolerance in an efficient manner. To do so we consider the grid-averages of the numerical solution for a hierarchy of nested diadic grids in which the given grid is the finest, and introduce an equivalent multiresolution representation. The multiresolution representation of the numerical solution consists of its grid-averages for the coarsest grid and the set of errors in predicting the grid-averages of each level of resolution in this hierarchy from those of the next coarser one. Once the numerical solution is resolved to our satisfaction in a certain locality of some grid, then the prediction errors there are small for this particular grid and all finer ones; this enables us to compress data by setting to zero small components of the representation which fall below a prescribed tolerance. Therefore instead of computing the time-evolution of the numerical solution on the given grid we compute the time-evolution of its compressed multiresolution representation. Algorithmically this amounts to computing the numerical fluxes of the given scheme at the points of the given grid by a hierarchical algorithm which starts with the computation of these numerical fluxes at the points of the coarsest grid and then proceeds through diadic refinements to the given grid. At each step of refinement we add the values of the numerical flux at the center of the coarser cells. The information in the multiresolution representation of the numerical solution is used to determine whether the solution is locally well-resolved. When this is the case we replace the costly exact value of the numerical flux with an accurate enough approximate value which is obtained by an inexpensive interpolation from the coarser grid. The computational efficiency of this multiresolution algorithm is proportional to the rate of data compression (for a prescribed level of tolerance) that can be achieved for the numerical solution of the given scheme.", "We consider the problem of maximizing a (non-monotone) submodular function subject to a cardinality constraint. In addition to capturing well-known combinatorial optimization problems, e.g., Max-k-Coverage and Max-Bisection, this problem has applications in other more practical settings such as natural language processing, information retrieval, and machine learning. In this work we present improved approximations for two variants of the cardinality constraint for non-monotone functions. 
When at most k elements can be chosen, we improve the current best 1 e -- o(1) approximation to a factor that is in the range [1 e + 0.004, 1 2], achieving a tight approximation of 1 2 -- o(1) for k = n 2 and breaking the 1 e barrier for all values of k. When exactly k elements must be chosen, our algorithms improve the current best 1 4 -- o(1) approximation to a factor that is in the range [0.356, 1 2], again achieving a tight approximation of 1 2 -- o(1) for k = n 2. Additionally, some of the algorithms we provide are very fast with time complexities of O(nk), as opposed to previous known algorithms which are continuous in nature, and thus, too slow for applications in the practical settings mentioned above. Our algorithms are based on two new techniques. First, we present a simple randomized greedy approach where in each step a random element is chosen from a set of \"reasonably good\" elements. This approach might be considered a natural substitute for the greedy algorithm of Nemhauser, Wolsey and Fisher [45], as it retains the same tight guarantee of 1--1 e for monotone objectives and the same time complexity of O(nk), while giving an approximation of 1 e for general non-monotone objectives (while the greedy algorithm of Nemhauser et. al. fails to provide any constant guarantee). Second, we extend the double greedy technique, which achieves a tight 1 2 approximation for unconstrained submodular maximization, to the continuous setting. This allows us to manipulate the natural rates by which elements change, thus bounding the total number of elements chosen.", "A number of recent results on optimization problems involving submodular functions have made use of the \"multilinear relaxation\" of the problem. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of \"symmetry gap\". Our main result is that for any fixed instance that exhibits a certain \"symmetry gap\" in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, e.g. the optimality of (1-1 e)-approximation for monotone submodular maximization under a cardinality constraint, and the impossibility of (1 2+epsilon)-approximation for unconstrained (non-monotone) submodular maximization. It follows from our result that (1 2+epsilon)-approximation is also impossible for non-monotone submodular maximization subject to a (non-trivial) matroid constraint. On the algorithmic side, we present a 0.309-approximation for this problem, improving the previously known factor of 1 4-o(1).As another application, we consider the problem of maximizing a non-monotone submodular function over the bases of a matroid. A (1 6-o(1))-approximation has been developed for this problem, assuming that the matroid contains two disjoint bases. We show that the best approximation one can achieve is indeed related to packings of bases in the matroid. Specifically, for any k≫=2, there is a class of matroids of fractional base packing number nu = k (k-1), such that any algorithm achieving a better than (1-1 nu)-approximation for this class would require exponentially many value queries. On the positive side, we present a 1 2 (1-1 nu-o(1))-approximation algorithm for the same problem. Our hardness results hold in fact for very special symmetric instances. 
For such symmetric instances, we show that the approximation factors of 1/2 (for submodular maximization subject to a matroid constraint) and 1-1/nu (for a matroid base constraint) can be achieved algorithmically and hence are optimal." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
A wrapper is a tool that extracts information (entities or values) from a document, or a set of documents, with the purpose of reusing that information in another system. A lot of research has been carried out in this field by the database community, mostly in relation to querying heterogeneous databases @cite_24 @cite_21 @cite_15 @cite_5 . More recently, wrappers have also been built to extract information from web pages with different applications in mind, such as product comparison, reuse of information in virtual documents, or building experimental data sets. Most web wrappers are either based on scripting languages @cite_24 @cite_21 that are very close to current XML query languages, or use wrapper induction @cite_15 @cite_5 to learn rules for extracting information (a toy rule-based wrapper is sketched after this record).
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_15", "@cite_21" ], "mid": [ "2169262681", "2115770258", "1995746869", "2124157324" ], "abstract": [ "Extracting data from Web pages using wrappers is a fundamental problem arising in a large variety of applications of vast practical interests. There are two main issues relevant to Web-data extraction, namely wrapper generation and wrapper maintenance. In this paper, we propose a novel schema-guided approach to the problem of automatic wrapper maintenance. It is based on the observation that despite various page changes, many important features of the pages are preserved, such as syntactic patterns, annotations, and hyperlinks of the extracted data items. Our approach uses these preserved features to identify the locations of the desired values in the changed pages, and repair wrappers correspondingly by inducing semantic blocks from the HTML tree. Our intensive experiments on real Web sites show that the proposed approach can effectively maintain wrappers to extract desired data with high accuracies.", "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.", "We study the problem of automatic repairing of wrappers for Web information providers. Majority of Web wrappers use \"hooks'' or \"landmarks'' to find and extract relevant information from Web pages and such wrappers often become inoperable when the page structure is changed. The solution we propose in this paper extends conventional forward wrappers with alternative classifiers built using content features of extracted information and wrappers processing pages backward. We report some preliminary results of the information extraction recovery and wrapper repairing for a set of real Web provider changes.", "Information distributed through the Web keeps growing faster day by day, and for this reason, several techniques for extracting Web data have been suggested during last years. Often, extraction tasks are performed through so called wrappers, procedures extracting information from Web pages, e.g. implementing logic-based techniques. 
Many fields of application today require a strong degree of robustness of wrappers, in order not to compromise assets of information or reliability of data extracted." ] }
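To make the wrapper idea concrete, here is a minimal, hand-written wrapper in the scripting style, sketched in Python. It is purely illustrative: the HTML layout, the CSS class names, and the (name, price) target fields are assumptions, and it is not one of the cited systems. A wrapper of this kind breaks as soon as the page layout changes, which is exactly the maintenance problem taken up in the following records.

import re

# Hand-written extraction rule (scripting-style wrapper): pull (name, price)
# pairs out of one assumed HTML table layout. Layout and class names are
# illustrative assumptions, not taken from any cited system.
ROW_RULE = re.compile(
    r'<td class="name">(.*?)</td>\s*<td class="price">\$([0-9.]+)</td>',
    re.DOTALL)

def extract_products(html):
    """Return a list of (name, price) tuples found by the single rule."""
    return [(name.strip(), float(price)) for name, price in ROW_RULE.findall(html)]

# Toy usage on a fragment of an assumed product-comparison page.
page = '<tr><td class="name">Widget</td><td class="price">$9.99</td></tr>'
print(extract_products(page))   # [('Widget', 9.99)]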
0711.3128
2949905168
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
To prevent wrappers from breaking over time without notice when pages change, @cite_9 propose using machine learning for wrapper verification and re-induction. Rather than repairing a wrapper after changes in the web data, Callan and Mitamura @cite_7 propose generating the wrapper dynamically --- that is, at the time of wrapping, using data previously extracted and stored in a database. The extraction rules are based on heuristics around a few pre-defined lexico-syntactic HTML patterns such as lists, tables, and links. The patterns are weighted according to the number of examples they recognise; the best patterns are used to dynamically extract new data (an illustrative sketch of this weighting follows this record).
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2169262681", "1606262545" ], "abstract": [ "Extracting data from Web pages using wrappers is a fundamental problem arising in a large variety of applications of vast practical interests. There are two main issues relevant to Web-data extraction, namely wrapper generation and wrapper maintenance. In this paper, we propose a novel schema-guided approach to the problem of automatic wrapper maintenance. It is based on the observation that despite various page changes, many important features of the pages are preserved, such as syntactic patterns, annotations, and hyperlinks of the extracted data items. Our approach uses these preserved features to identify the locations of the desired values in the changed pages, and repair wrappers correspondingly by inducing semantic blocks from the HTML tree. Our intensive experiments on real Web sites show that the proposed approach can effectively maintain wrappers to extract desired data with high accuracies.", "We develop a probabilistic framework for adapting information extraction wrappers with new attribute discovery. Wrapper adaptation aims at automatically adapting a previously learned wrapper from the source Web site to a new unseen site for information extraction. One unique characteristic of our framework is that it can discover new or previously unseen attributes as well as headers from the new site. It is based on a generative model for the generation of text fragments related to attribute items and formatting data in a Web page. To solve the wrapper adaptation problem, we consider two kinds of information from the source Web site. The first kind of information is the extraction knowledge contained in the previously learned wrapper from the source Web site. The second kind of information is the previously extracted or collected items. We employ a Bayesian learning approach to automatically select a set of training examples for adapting a wrapper for the new unseen site. To solve the new attribute discovery problem, we develop a model which analyzes the surrounding text fragments of the attributes in the new unseen site. A Bayesian learning method is developed to discover the new attributes and their headers. EM technique is employed in both Bayesian learning models. We conducted extensive experiments from a number of real-world Web sites to demonstrate the effectiveness of our framework." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
Other approaches for entity extraction are based on the use of external resources, such as an ontology or a dictionary. @cite_18 use a populated ontology for entity extraction, while Cohen and Sarawagi @cite_19 exploit a dictionary for named entity extraction. @cite_2 use an ontology for automatic semantic annotation of web pages. Their system first identifies the syntactic structure that characterises an entity in a page, and then uses subsumption to identify the most specific concept to be associated with this entity (a toy subsumption step is sketched after this record).
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_2" ], "mid": [ "2109718074", "14559458", "2587809655" ], "abstract": [ "Traditional approaches to Relation Extraction from text require manually defining the relations to be extracted. We propose here an approach to automatically discovering relevant relations, given a large text corpus plus an initial ontology defining hundreds of noun categories (e.g., Athlete, Musician, Instrument). Our approach discovers frequently stated relations between pairs of these categories, using a two step process. For each pair of categories (e.g., Musician and Instrument) it first co-clusters the text contexts that connect known instances of the two categories, generating a candidate relation for each resulting cluster. It then applies a trained classifier to determine which of these candidate relations is semantically valid. Our experiments apply this to a text corpus containing approximately 200 million web pages and an ontology containing 122 categories from the NELL system [, 2010b], producing a set of 781 proposed candidate relations, approximately half of which are semantically valid. We conclude this is a useful approach to semi-automatic extension of the ontology for large-scale information extraction systems such as NELL.", "The approach towards Semantic Web Information Extraction (IE) presented here is implemented in KIM – a platform for semantic indexing, annotation, and retrieval. It combines IE based on the mature text engineering platform (GATE1) with Semantic Web-compliant knowledge representation and management. The cornerstone is automatic generation of named-entity (NE) annotations with class and instance references to a semantic repository. Simplistic upper-level ontology, providing detailed coverage of the most popular entity types (Person, Organization, Location, etc.; more than 250 classes) is designed and used. A knowledge base (KB) with de-facto exhaustive coverage of real-world entities of general importance is maintained, used, and constantly enriched. Extensions of the ontology and KB take care of handling all the lexical resources used for IE, most notable, instead of gazetteer lists, aliases of specific entities are kept together with them in the KB. A Semantic Gazetteer uses the KB to generate lookup annotations. Ontologyaware pattern-matching grammars allow precise class information to be handled via rules at the optimal level of generality. The grammars are used to recognize NE, with class and instance information referring to the KIM ontology and KB. Recognition of identity relations between the entities is used to unify their references to the KB. Based on the recognized NE, template relation construction is performed via grammar rules. As a result of the latter, the KB is being enriched with the recognized relations between entities. At the final phase of the IE process, previously unknown aliases and entities are being added to the KB with their specific types.", "Entity and relation extraction is a task that combines detecting entity mentions and recognizing entities' semantic relationships from unstructured text. We propose a hybrid neural network model to extract entities and their relationships without any handcrafted features. The hybrid neural network contains a novel bidirectional encoder-decoder LSTM module (BiLSTM-ED) for entity extraction and a CNN module for relation classification. The contextual information of entities obtained in BiLSTM-ED further pass though to CNN module to improve the relation classification. 
We conduct experiments on the public dataset ACE05 (Automatic Content Extraction program) to verify the effectiveness of our method. The method we proposed achieves the state-of-the-art results on entity and relation extraction task. (C) 2017 Elsevier B.V. All rights reserved." ] }
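The subsumption step mentioned above can be illustrated with a toy is-a hierarchy: among the candidate concepts matched for an entity occurrence, keep the most specific one. The hierarchy, the concept names, and the way candidates were matched are all assumptions for illustration; this is not the cited system.

# Toy is-a hierarchy: child concept -> parent concept (assumed for illustration).
PARENT = {
    "Scientist": "Person", "Person": "Entity",
    "City": "Location", "Location": "Entity",
}

def is_subsumed_by(concept, ancestor):
    """True if `concept` is-a (transitively) `ancestor`."""
    while concept in PARENT:
        concept = PARENT[concept]
        if concept == ancestor:
            return True
    return False

def most_specific(candidates):
    """Keep the most specific concept among the matched candidates."""
    best = candidates[0]
    for c in candidates[1:]:
        if is_subsumed_by(c, best):
            best = c
    return best

print(most_specific(["Person", "Scientist"]))   # Scientist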
0711.3128
2949905168
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
@cite_0 use a populated ontology to assist in the disambiguation of entities, such as names of authors, using their published papers or domain of interest. They use text proximity between entities to disambiguate names (e.g. an organisation name would appear close to an author's name). They also use text co-occurrence, for example for topics relevant to an author. Their algorithm is thus tuned to their particular ontology, while our algorithm relies more on the categories and the structural properties of Wikipedia (a toy version of the proximity heuristic follows this record).
{ "cite_N": [ "@cite_0" ], "mid": [ "2067292826" ], "abstract": [ "Disambiguating entity references by annotating them with unique ids from a catalog is a critical step in the enrichment of unstructured content. In this paper, we show that topic models, such as Latent Dirichlet Allocation (LDA) and its hierarchical variants, form a natural class of models for learning accurate entity disambiguation models from crowd-sourced knowledge bases such as Wikipedia. Our main contribution is a semi-supervised hierarchical model called Wikipedia-based Pachinko Allocation Model (WPAM) that exploits: (1) All words in the Wikipedia corpus to learn word-entity associations (unlike existing approaches that only use words in a small fixed window around annotated entity references in Wikipedia pages), (2) Wikipedia annotations to appropriately bias the assignment of entity labels to annotated (and co-occurring unannotated) words during model learning, and (3) Wikipedia's category hierarchy to capture co-occurrence patterns among entities. We also propose a scheme for pruning spurious nodes from Wikipedia's crowd-sourced category hierarchy. In our experiments with multiple real-life datasets, we show that WPAM outperforms state-of-the-art baselines by as much as 16 in terms of disambiguation accuracy." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
Cucerzan @cite_12 uses Wikipedia data for named entity disambiguation. He first pre-processed a version of the Wikipedia collection (September 2006), and extracted more than 1.4 million entities with an average of 2.4 surface forms per entity. He also extracted more than one million (entity, category) pairs that were further filtered down to 540 thousand pairs. Lexico-syntactic patterns, such as titles, links, paragraphs and lists, are used to build co-references of entities in limited contexts. The knowledge extracted from Wikipedia is then used to improve entity disambiguation in the context of web and news search.
{ "cite_N": [ "@cite_12" ], "mid": [ "2177768736" ], "abstract": [ "Entity disambiguation with Wikipedia relies on structured information from redirect pages, article text, inter-article links, and categories. We explore whether web links can replace a curated encyclopaedia, obtaining entity prior, name, context, and coherence models from a corpus of web pages with links to Wikipedia. Experiments compare web link models to Wikipedia models on well-known CoNLL and TAC data sets. Results show that using 34 million web links approaches Wikipedia performance. Combining web link and Wikipedia models produces the best-known disambiguation accuracy of 88.7 on standard newswire test data." ] }
0711.3128
2949905168
The traditional entity extraction problem lies in the ability of extracting named entities from plain text using natural language processing techniques and intensive training from large document collections. Examples of named entities include organisations, people, locations, or dates. There are many research activities involving named entities; we are interested in entity ranking in the field of information retrieval. In this paper, we describe our approach to identifying and ranking entities from the INEX Wikipedia document collection. Wikipedia offers a number of interesting features for entity identification and ranking that we first introduce. We then describe the principles and the architecture of our entity ranking system, and introduce our methodology for evaluation. Our preliminary results show that the use of categories and the link structure of Wikipedia, together with entity examples, can significantly improve retrieval effectiveness.
PageRank, proposed by Brin and Page @cite_3 , is a link analysis algorithm that assigns a numerical weight to each page of a hyperlinked set of web pages. The idea of PageRank is that a web page is a good page if it is popular, that is, if many other (preferably also popular) web pages refer to it (a minimal power-iteration sketch follows this record).
{ "cite_N": [ "@cite_3" ], "mid": [ "2096041903" ], "abstract": [ "The PageRank algorithm, used in the Google search engine, greatly improves the results of Web search by taking into account the link structure of the Web. PageRank assigns to a page a score proportional to the number of times a random surfer would visit that page, if it surfed indefinitely from page to page, following all outlinks from a page with equal probability. We propose to improve PageRank by using a more intelligent surfer, one that is guided by a probabilistic model of the relevance of a page to a query. Efficient execution of our algorithm at query time is made possible by precomputing at crawl time (and thus once for all queries) the necessary terms. Experiments on two large subsets of the Web indicate that our algorithm significantly outperforms PageRank in the (human-rated) quality of the pages returned, while remaining efficient enough to be used in today's large search engines." ] }
0711.3242
1583411863
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
In section , we show that the two sided Bregman centroids @math and @math with respect to Bregman divergence @math are unique and easily obtained as generalized means for the identity and @math functions, respectively. We extend Sibson's notion of information radius @cite_28 to these sided centroids, and show that they are both equal to the @math -Jensen difference, a generalized Jensen-Shannon divergence @cite_7 also known as a Burbea-Rao divergence @cite_19 (a worked example for the Kullback-Leibler case follows this record).
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_7" ], "mid": [ "2096765209", "2079630523", "2112939491" ], "abstract": [ "A wide variety of distortion functions, such as squared Euclidean distance, Mahalanobis distance, Itakura-Saito distance and relative entropy, have been used for clustering. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences. The proposed algorithms unify centroid-based parametric clustering approaches, such as classical kmeans , the Linde-Buzo-Gray (LBG) algorithm and information-theoretic clustering, which arise by special choices of the Bregman divergence. The algorithms maintain the simplicity and scalability of the classical kmeans algorithm, while generalizing the method to a large class of clustering loss functions. This is achieved by first posing the hard clustering problem in terms of minimizing the loss in Bregman information, a quantity motivated by rate distortion theory, and then deriving an iterative algorithm that monotonically decreases this loss. In addition, we show that there is a bijection between regular exponential families and a large class of Bregman divergences, that we call regular Bregman divergences. This result enables the development of an alternative interpretation of an efficient EM scheme for learning mixtures of exponential family distributions, and leads to a simple soft clustering algorithm for regular Bregman divergences. Finally, we discuss the connection between rate distortion theory and Bregman clustering and present an information theoretic analysis of Bregman clustering algorithms in terms of a trade-off between compression and loss in Bregman information.", "Let be a complete Riemannian manifold and a probability measure on . Assume . We derive a new bound (in terms of , the injectivity radius of and an upper bound on the sectional curvatures of ) on the radius of a ball containing the support of which ensures existence and uniqueness of the global Riemannian center of mass with respect to . A significant consequence of our result is that under the best available existence and uniqueness conditions for the so-called local'' center of mass, the global and local centers coincide. In our derivation we also give an alternative proof for a uniqueness result by W. S. Kendall. As another contribution, we show that for a discrete probability measure on , under the existence and uniqueness conditions, the (global) center of mass belongs to the closure of the convex hull of the masses. We also give a refined result when is of constant curvature.", "Let D and V denote respectively Information Divergence and Total Variation Distance. Pinsker's and Vajda's inequalities are respectively D ≥ [ 1 2] V2 and D ≥ log[( 2+V) ( 2-V)] - [( 2V) ( 2+V)]. In this paper, several generalizations and improvements of these inequalities are established for wide classes of f -divergences. First, conditions on f are determined under which an f-divergence Df will satisfy Df ≥ cf V2 or Df ≥ c2,f V2 + c4,f V4, where the constants cf, c2,f and c4,f are best possible. As a consequence, lower bounds in terms of V are obtained for many well known distance and divergence measures, including the χ2 and Hellinger's discrimination and the families of Tsallis' and Renyi's divergences. 
For instance, if D^(α)(P||Q) = [α(α-1)]^(-1) [∫ p^α q^(1-α) dμ - 1] and ℑ_α(P||Q) = (α-1)^(-1) log[∫ p^α q^(1-α) dμ] are respectively the relative information of type α and Renyi's information gain of order α, it is shown that D^(α) ≥ (1/2) V^2 + (1/72)(α+1)(2-α) V^4 whenever -1 ≤ α ≤ 2, α ≠ 0,1 and that ℑ_α ≥ (α/2) V^2 + (1/36) α(1 + 5α - 5α^2) V^4 for 0 < α < 1. In a somewhat different direction, and motivated by the fact that these Pinsker-type lower bounds are accurate only for small variation (V close to zero), lower bounds for D_f which are accurate for both small and large variation (V close to two) are also obtained. In the special case of the information divergence they imply that D ≥ log[2/(2-V)] - [(2-V)/2] log[(2+V)/2], which uniformly improves Vajda's inequality." ] }
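As a concrete instance of the sided centroids being generalized means, the following sketch computes them for the Kullback-Leibler divergence over discrete distributions, under the convention D_F(x, y) = F(x) - F(y) - <x - y, grad F(y)> with F the negative Shannon entropy: the right-sided centroid is then the arithmetic mean and the left-sided centroid a geometric mean. The renormalisation onto the simplex and the toy data are assumptions for illustration.

import numpy as np

def kl_sided_centroids(P):
    """Rows of P are discrete distributions; returns (left, right) KL centroids."""
    right = P.mean(axis=0)                      # generalized mean for the identity
    geo = np.exp(np.log(P).mean(axis=0))        # generalized mean for grad F (log space)
    left = geo / geo.sum()                      # renormalise onto the simplex (assumption)
    return left, right

P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3]])
left, right = kl_sided_centroids(P)
print(left, right)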
0711.3242
1583411863
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
The symmetrized Kullback-Leibler divergence ( @math -divergence) and the symmetrized Itakura-Saito divergence (COSH distance) are often used in sound and image applications, where our fast geodesic dichotomic walk algorithm converging to the unique symmetrized Bregman centroid comes in handy over former complex ad-hoc methods @cite_9 @cite_15 @cite_27 @cite_18 @cite_29 . Section considers applications of the generic geodesic-walk algorithm to two cases: the symmetrized Kullback-Leibler divergence for probability mass functions represented as @math -dimensional points lying in the @math -dimensional simplex @math . These discrete distributions are handled as multinomials of the exponential family @cite_23 with @math degrees of freedom. We instantiate the generic geodesic-walk algorithm for that setting, show how it compares favorably with the prior convex optimization work of Veldhuis @cite_14 @cite_18 , and formally validate experimental remarks of Veldhuis (a simplified sketch of the dichotomic walk follows this record).
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_9", "@cite_29", "@cite_27", "@cite_23", "@cite_15" ], "mid": [ "2096765209", "1982831910", "2150887479", "2075660001", "2069278600", "2133027741", "2129455341" ], "abstract": [ "A wide variety of distortion functions, such as squared Euclidean distance, Mahalanobis distance, Itakura-Saito distance and relative entropy, have been used for clustering. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences. The proposed algorithms unify centroid-based parametric clustering approaches, such as classical kmeans , the Linde-Buzo-Gray (LBG) algorithm and information-theoretic clustering, which arise by special choices of the Bregman divergence. The algorithms maintain the simplicity and scalability of the classical kmeans algorithm, while generalizing the method to a large class of clustering loss functions. This is achieved by first posing the hard clustering problem in terms of minimizing the loss in Bregman information, a quantity motivated by rate distortion theory, and then deriving an iterative algorithm that monotonically decreases this loss. In addition, we show that there is a bijection between regular exponential families and a large class of Bregman divergences, that we call regular Bregman divergences. This result enables the development of an alternative interpretation of an efficient EM scheme for learning mixtures of exponential family distributions, and leads to a simple soft clustering algorithm for regular Bregman divergences. Finally, we discuss the connection between rate distortion theory and Bregman clustering and present an information theoretic analysis of Bregman clustering algorithms in terms of a trade-off between compression and loss in Bregman information.", "In this paper, we consider conic programming problems whose constraints consist of linear equalities, linear inequalities, a nonpolyhedral cone, and a polyhedral cone. A convenient way for solving this class of problems is to apply the directly extended alternating direction method of multipliers (ADMM) to its dual problem, which has been observed to perform well in numerical computations but may diverge in theory. Ideally, one should find a convergent variant which is at least as efficient as the directly extended ADMM in practice. We achieve this goal by designing a convergent semiproximal ADMM (called sPADMM3c for convenience) for convex programming problems having three separable blocks in the objective function with the third part being linear. At each iteration, the proposed sPADMM3c takes one special block coordinate descent (BCD) cycle with the order @math , instead of the usual @math Gauss--Seidel BCD cycle used in the nonconvergent directly extended 3-block ADMM, for updating the variable blocks. Our numerical experiments demonstrate that the convergent method is at least 20 faster than the directly extended ADMM with unit step-length for the vast majority of about 550 large-scale doubly nonnegative semidefinite programming problems with linear equality and or inequality constraints. 
This confirms that at least for conic convex programming, one can design a convergent and efficient ADMM with a special BCD cycle of updating the variable blocks.", "We study the pioneer points of the simple random walk on the uniform infinite planar quadrangulation (UIPQ) using an adaptation of the peeling procedure of Angel (Geom Funct Anal 13:935–974, 2003) to the quadrangulation case. Our main result is that, up to polylogarithmic factors, n^3 pioneer points have been discovered before the walk exits the ball of radius n in the UIPQ. As a result we verify the KPZ relation (Modern Phys Lett A 3:819–826, 1988) in the particular case of the pioneer exponent and prove that the walk is subdiffusive with exponent less than 1/3. Along the way, new geometric controls on the UIPQ are established.", "In this paper we analyze the randomized block-coordinate descent (RBCD) methods proposed in Nesterov (SIAM J Optim 22(2):341-362, 2012), Richtarik and Takáč (Math Program 144(1-2):1-38, 2014) for minimizing the sum of a smooth convex function and a block-separable convex function, and derive improved bounds on their convergence rates. In particular, we extend Nesterov's technique developed in Nesterov (SIAM J Optim 22(2):341-362, 2012) for analyzing the RBCD method for minimizing a smooth convex function over a block-separable closed convex set to the aforementioned more general problem and obtain a sharper expected-value type of convergence rate than the one implied in Richtarik and Takáč (Math Program 144(1-2):1-38, 2014). As a result, we also obtain a better high-probability type of iteration complexity. In addition, for unconstrained smooth convex minimization, we develop a new technique called randomized estimate sequence to analyze the accelerated RBCD method proposed by Nesterov (SIAM J Optim 22(2):341-362, 2012) and establish a sharper expected-value type of convergence rate than the one given in Nesterov (SIAM J Optim 22(2):341-362, 2012).", "We present a new efficient algorithm for the search version of the approximate Closest Vector Problem with Preprocessing (CVPP). Our algorithm achieves an approximation factor of O(n sqrt(log n)), improving on the previous best of O(n^1.5) due to Lagarias, Lenstra, and Schnorr. We also show, somewhat surprisingly, that only O(n) vectors of preprocessing advice are sufficient to solve the problem (with the slightly worse approximation factor of O(n)). We remark that this still leaves a large gap with respect to the decisional version of CVPP, where the best known approximation factor is O(sqrt(n/log n)) due to Aharonov and Regev. To achieve these results, we show a reduction to the same problem restricted to target points that are close to the lattice and a more efficient reduction to a harder problem, Bounded Distance Decoding with preprocessing (BDDP). Combining either reduction with the previous best-known algorithm for BDDP by Liu, Lyubashevsky, and Micciancio gives our main result. In the setting of CVP without preprocessing, we also give a reduction from (1+eps)gamma-approximate CVP to gamma-approximate CVP where the target is at distance at most 1 + 1/eps times the minimum distance (the length of the shortest non-zero vector) which relies on the lattice sparsification techniques of Dadush and Kun.
As our final and most technical contribution, we present a substantially more efficient variant of the LLM algorithm (both in terms of run-time and amount of preprocessing advice), and via an improved analysis, show that it can decode up to a distance proportional to the reciprocal of the smoothing parameter of the dual lattice. We show that this is never smaller than the LLM decoding radius, and that it can be up to an Omega-tilde(sqrt(n)) factor larger.", "These lecture notes highlight the mathematical and computational structure relating to the formulation of, and development of algorithms for, the Bayesian approach to inverse problems in differential equations. This approach is fundamental in the quantification of uncertainty within applications involving the blending of mathematical models with data. The finite dimensional situation is described first, along with some motivational examples. Then the development of probability measures on separable Banach space is undertaken, using a random series over an infinite set of functions to construct draws; these probability measures are used as priors in the Bayesian approach to inverse problems. Regularity of draws from the priors is studied in the natural Sobolev or Besov spaces implied by the choice of functions in the random series construction, and the Kolmogorov continuity theorem is used to extend regularity considerations to the space of Hölder continuous functions. Bayes’ theorem is derived in this prior setting, and here interpreted as finding conditions under which the posterior is absolutely continuous with respect to the prior, and determining a formula for the Radon-Nikodym derivative in terms of the likelihood of the data. Having established the form of the posterior, we then describe various properties common to it in the infinite dimensional setting. These properties include well-posedness, approximation theory, and the existence of maximum a posteriori estimators. We then describe measure-preserving dynamics, again on the infinite dimensional space, including Markov chain Monte Carlo and sequential Monte Carlo methods, and measure-preserving reversible stochastic differential equations. By formulating the theory and algorithms on the underlying infinite dimensional space, we obtain a framework suitable for rigorous analysis of the accuracy of reconstructions, of computational complexity, as well as naturally constructing algorithms which perform well under mesh refinement, since they are inherently well-defined in infinite dimensions.", "In this paper we study Nonnegative Tensor Factorization (NTF) based on the Kullback-Leibler (KL) divergence as an alternative Csiszar-Tusnady procedure. We propose new update rules for the aforementioned divergence that are based on multiplicative update rules. The proposed algorithms are built on solid theoretical foundations that guarantee that the limit point of the iterative algorithm corresponds to a stationary solution of the optimization procedure. Moreover, we study the convergence properties of the optimization procedure and we present generalized Pythagorean rules. Furthermore, we provide clear probabilistic interpretations of these algorithms. Finally, we discuss the connections between generalized Probabilistic Tensor Latent Variable Models (PTLVM) and NTF, proposing in that way algorithms for PTLVM for arbitrary multivariate probabilistic mass functions." ] }
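A simplified sketch of the dichotomic walk for the symmetrized Kullback-Leibler centroid of discrete distributions, as discussed in the record above. It assumes the geodesic linking the two sided centroids is parameterised in the gradient (log) space, renormalises back onto the simplex, and simply searches for the minimiser of the symmetrized divergence along that geodesic by a bisection-style (ternary) search; the paper's exact geometric stopping criterion is not reproduced here, so this approximates the method rather than restating it.

import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def symmetrized_kl_centroid(P, iters=60):
    right = P.mean(axis=0)                        # right-sided KL centroid
    geo = np.exp(np.log(P).mean(axis=0))
    left = geo / geo.sum()                        # left-sided KL centroid (renormalised)

    def point(lam):
        # Geodesic point: log-space interpolation between the sided centroids (assumption).
        c = np.exp((1.0 - lam) * np.log(right) + lam * np.log(left))
        return c / c.sum()

    def cost(lam):
        # Total symmetrized KL divergence from the candidate centroid to all inputs.
        c = point(lam)
        return sum(0.5 * (kl(p, c) + kl(c, p)) for p in P)

    lo, hi = 0.0, 1.0
    for _ in range(iters):                        # dichotomic walk along the geodesic
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return point(0.5 * (lo + hi))

P = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3]])
print(symmetrized_kl_centroid(P))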
0711.3242
1583411863
In this paper, we generalize the notions of centroids and barycenters to the broad class of information-theoretic distortion measures called Bregman divergences. Bregman divergences are versatile, and unify quadratic geometric distances with various statistical entropic measures. Because Bregman divergences are typically asymmetric, we consider both the left-sided and right-sided centroids and the symmetrized centroids, and prove that all three are unique. We give closed-form solutions for the sided centroids that are generalized means, and design a provably fast and efficient approximation algorithm for the symmetrized centroid based on its exact geometric characterization that requires solely to walk on the geodesic linking the two sided centroids. We report on our generic implementation for computing entropic centers of image clusters and entropic centers of multivariate normals, and compare our results with former ad-hoc methods.
The symmetrized Kullback-Leibler divergence of multivariate normal distributions. We describe the geodesic walk for this particular mixed-type exponential family of multivariate normals, and explain the Legendre mixed-type vector/matrix dual convex conjugates defining the corresponding Bregman divergences. This yields a simple, fast and elegant geometric method compared to the former, overly complex method of Myrvoll and Soong @cite_9 , which relies on solving Riccati matrix equations (the closed-form divergence being symmetrized is sketched after this record).
{ "cite_N": [ "@cite_9" ], "mid": [ "2337671281" ], "abstract": [ "Abstract This paper mainly focuses on dimensional reduction of fused dataset of holistic and geometrical face features vectors by solving singularity problem of linear discriminant analysis and maximizing the Fisher ratio in nonlinear subspace region with the preservation of local discriminative features. The combinational feature vector space is projected into low dimensional subspace using proposed Kernel Locality Preserving Symmetrical Weighted Fisher Discriminant Analysis (KLSWFDA) method. Matching score level fusion technique has been applied on projected subspace and combinational entire Gabor subspace is framed. Euclidean distance metric (L2) and support vector machine (SVM) classifier has been implemented to recognize and classify the expressions. Performance of proposed approach is evaluated and compared with state of art approaches. Experimental results on JAFFE, YALE and FD expression database demonstrate the effectiveness of the proposed approach." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Competitive algorithms for advanced reservation networks are the focus of @cite_14 . That work discusses the lazy FTP problem, where channels are reserved for the transfer of different files, and presents a 4-competitive algorithm for the makespan (the total completion time). However, @cite_14 focuses on the case of fixed routes. When routing is also to be considered, the time complexity of the algorithm presented there may be exponential in the network size.
{ "cite_N": [ "@cite_14" ], "mid": [ "2763269482" ], "abstract": [ "Modern planetary-scale online services have massive data to transfer over the wide area network (WAN). Due to the tremendous cost of building WANs and the stringent timing requirement of distributed applications, it is critical for network operators to make efficient use of network resources to optimize data transfers. By leveraging software-defined networking (SDN) and reconfigurable optical devices, recent solutions design centralized systems to jointly control the network layer and the optical layer. While these solutions show it is promising to significantly reduce data transfer times by centralized cross-layer control, they do not have any theoretical guarantees on the proposed algorithms. This paper presents approximation algorithms and theoretical analysis for the online transfer scheduling problem over optical WANs. The goal of the scheduling problem is to minimize the makespan (the time to finish all transfers) or the total sum of completion times. We design and analyze various greedy, online scheduling algorithms that can achieve 3-competitive ratio for makespan, 2-competitive ratio for minimum sum completion time for jobs of unit size, and 3α-competitive ratio for jobs of arbitrary transfer size and each node having degree constraint d, where α = 1 when d = 1 and α = 1.86 when d ≥ 2. We also evaluated the performance of these algorithms and compared the performance with prior heuristics." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Another recent work @cite_28 , focusing on routing in packet-switched networks in an adversarial setting, discusses choosing routes for fixed-size packets injected by an adversary. It enforces regularity limitations on the adversary that are stronger than those required here, and achieves the network capacity with a guarantee on the maximum queue size. It does not discuss the case of advance reservation with different job sizes or bandwidth requirements. It is based upon approximating an integer program, which may not be extensible to a case where path reservation, rather than packet-based routing, is involved.
{ "cite_N": [ "@cite_28" ], "mid": [ "2133049312" ], "abstract": [ "We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms.When the paths are known (either given by the adversary or computed as above), our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this article, we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet.Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Most of the work on competitive approaches to routing has focused mainly on call admission, without the ability to make advanced reservations. For some results in this field see, e.g., @cite_20 @cite_3 . Some results involving advanced reservation are presented in @cite_6 . However, the path selection there is based on several alternatives supplied by the user in the request, rather than on an automated mechanism that attempts to optimize performance, as discussed here. In @cite_27 a combination of call admission and circuit switching is used to obtain a routing scheme with a logarithmic competitive ratio on the total revenue received. A competitive routing scheme in terms of the number of failed routes in the setting of ad-hoc networks is presented in @cite_15 . A survey of on-line routing results is presented in @cite_22 . A competitive algorithm for admission and routing in a multicasting setting is presented in @cite_18 . Most of the other existing work in this area consists of heuristic approaches whose main emphasis is on algorithm correctness and computational complexity, without throughput guarantees.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_3", "@cite_6", "@cite_27", "@cite_15", "@cite_20" ], "mid": [ "2096909875", "2114641109", "2139386764", "2149933233", "2136316858", "2141661023", "2154125468" ], "abstract": [ "Classical routing and admission control strategies achieve provably good performance by relying on an assumption that the virtual circuits arrival pattern can be described by some a priori known probabilistic model. A new on-line routing framework, based on the notion of competitive analysis, was proposed. This framework is geared toward design of strategies that have provably good performance even in the case where there are no statistical assumptions on the arrival pattern and parameters of the virtual circuits. The on-line strategies motivated by this framework are quite different from the min-hop and reservation-based strategies. This paper surveys the on-line routing framework, the proposed routing and admission control strategies, and discusses some of the implementation issues. >", "We present the first polylog-competitive online algorithm for the general multicast admission control and routing problem in the throughput model. The ratio of the number of requests accepted by the optimum offline algorithm to the expected number of requests accepted by our algorithm is O((logn + loglogM)(logn + logM)logn), where M is the number of multicast groups and n is the number of nodes in the graph. We show that this is close to optimum by presenting an Ω(lognlogM) lower bound on this ratio for any randomized online algorithm against an oblivious adversary, when M is much larger than the link capacities. Our lower bound applies even in the restricted case where the link capacities are much larger than bandwidth requested by a single multicast. We also present a simple proof showing that it is impossible to be competitive against an adaptive online adversary.As in the previous online routing algorithms, our algorithm uses edge-costs when deciding on which is the best path to use. In contrast to the previous competitive algorithms in the throughput model, our cost is not a direct function of the edge load. The new cost definition allows us to decouple the effects of routing and admission decisions of different multicast groups.", "This paper builds upon the scalable admission control schemes for CDMA networks developed in F. (2003, December 2004). These schemes are based on an exact representation of the geometry of both the downlink and the uplink channels and ensure that the associated power allocation problems have solutions under constraints on the maximal power of each station user. These schemes are decentralized in that they can be implemented in such a way that each base station only has to consider the load brought by its own users to decide on admission. By load we mean here some function of the configuration of the users and of their bit rates that is described in the paper. When implemented in each base station, such schemes ensure the global feasibility of the power allocation even in a very large (infinite number of cells) network. The estimation of the capacity of large CDMA networks controlled by such schemes was made in these references. In certain cases, for example for a Poisson pattern of mobiles in an hexagonal network of base stations, this approach gives explicit formulas for the infeasibility probability, defined as the fraction of cells where the population of users cannot be entirely admitted by the base station. 
In the present paper we show that the notion of infeasibility probability is closely related to the notion of blocking probability, defined as the fraction of users that are rejected by the admission control policy in the long run, a notion of central practical importance within this setting. The relation between these two notions is not bound to our particular admission control schemes, but is of more general nature, and in a simplified scenario it can be identified with the well-known Erlang loss formula. We prove this relation using a general spatial birth-and-death process, where customer locations are represented by a spatial point process that evolves over time as users arrive or depart. This allows our model to include the exact representation of the geometry of inter-cell and intra-cell interferences, which play an essential role in the load indicators used in these cellular network admission control schemes.", "This paper considers max-min fair rate allocation and routing in energy harvesting networks where fairness is required among both the nodes and the time slots. Unlike most previous work on fairness, we focus on multihop topologies and consider different routing methods. We assume a predictable energy profile and focus on the design of efficient and optimal algorithms that can serve as benchmarks for distributed and approximate algorithms. We first develop an algorithm that obtains a max-min fair rate assignment for any given (time-variable or time-invariable) unsplittable routing or a routing tree. For time-invariable unsplittable routing, we also develop an algorithm that finds routes that maximize the minimum rate assigned to any node in any slot. For fractional routing, we study the joint routing and rate assignment problem. We develop an algorithm for the time-invariable case with constant rates. We show that the time-variable case is at least as hard as the 2-commodity feasible flow problem and design an FPTAS to combat the high running time. Finally, we show that finding a max-min fair unsplittable routing or a routing tree is NP-hard, even for a time horizon of a single slot. Our analysis provides insights into the problem structure and can be applied to other related fairness problems.", "This paper is meant to be an illustration of the use of stochastic geometry for analyzing the performance of routing in large wireless ad hoc (mobile or mesh) networks. In classical routing strategies used in such networks, packets are transmitted on a pre-defined route that is usually obtained by a shortest-path routing protocol. In this paper we review some recent ideas concerning a new routing technique which is opportunistic in the sense that each packet at each hop on its (specific) route from an origin to a destination takes advantage of the actual pattern of nodes that captured its recent (re)transmission in order to choose the next relay. The paper focuses both on the distributed algorithms allowing such a routing technique to work and on the evaluation of the gain in performance it brings compared to classical mechanisms. On the algorithmic side, we show that it is possible to implement this opportunistic technique in such a way that the current transmitter of a given packet does not need to know its next relay a priori, but the nodes that capture this transmission (if any) perform a self-selection procedure to choose the packet relay node and acknowledge the transmitter. 
We also show that this routing technique works well with various medium access protocols (such as Aloha, CSMA, TDMA). Finally, we show that the above relay self-selection procedure can be optimized in the sense that it is the node that optimizes some given utility criterion (e.g. minimize the remaining distance to the final destination), which is chosen as the relay. The performance evaluation part is based on stochastic geometry and combines simulation as analytical models. The main result is that such opportunistic schemes very significantly outperform classical routing schemes when properly optimized and provided at least a small number of nodes in the network know their geographical positions exactly.", "Existing position-based unicast routing algorithms which forward packets in the geographic direction of the destination require that the forwarding node knows the positions of all neighbors in its transmission range. This information on direct neighbors is gained by observing beacon messages each node sends out periodically. Due to mobility, the information that a node receives about its neighbors becomes outdated, leading either to a significant decrease in the packet delivery rate or to a steep increase in load on the wireless channel as node mobility increases. In this paper, we propose a mechanism to perform position-based unicast forwarding without the help of beacons. In our contention-based forwarding scheme (CBF) the next hop is selected through a distributed contention process based on the actual positions of all current neighbors. For the contention process, CBF makes use of biased timers. To avoid packet duplication, the first node that is selected suppresses the selection of further nodes. We propose three suppression strategies which vary with respect to forwarding efficiency and suppression characteristics. We analyze the behavior of CBF with all three suppression strategies and compare it to an existing greedy position-based routing approach by means of simulation with ns-2. Our results show that CBF significantly reduces the load on the wireless channel required to achieve a specific delivery rate compared to the load a beacon-based greedy forwarding strategy generates.", "In the interference scheduling problem, one is given a set of n communication requests described by source-destination pairs of nodes from a metric space. The nodes correspond to devices in a wireless network. Each pair must be assigned a power level and a color such that the pairs in each color class can communicate simultaneously at the specified power levels. The feasibility of simultaneous communication within a color class is defined in terms of the Signal to Interference plus Noise Ratio (SINR) that compares the strength of a signal at a receiver to the sum of the strengths of other signals. The objective is to minimize the number of colors as this corresponds to the time needed to schedule all requests. We introduce an instance-based measure of interference, denoted by I, that enables us to improve on previous results for the interference scheduling problem. We prove the upper and lower bounds in terms of I on the number of steps needed for scheduling a set of requests. For general power assignments, we prove a lower bound of @W(I ([email protected])) steps, where @D denotes the aspect ratio of the metric. When restricting to the two-dimensional Euclidean space (as in the previous work) the bound improves to @W(I [email protected]). 
Alternatively, when restricting to linear power assignments, the lower bound improves even to @W(I). The lower bounds are complemented by an efficient algorithm computing a schedule for linear power assignments using only O(Ilogn) steps. A more sophisticated algorithm computes a schedule using even only O(I+log^2n) steps. For dense instances in the two-dimensional Euclidean space, this gives a constant factor approximation for scheduling under linear power assignments, which shows that the price for using linear (and, hence, energy-efficient) power assignments is bounded by a factor of O([email protected]). In addition, we extend these results for single-hop scheduling to multi-hop scheduling and combined scheduling and routing problems, where our analysis generalizes the previous results towards general metrics and improves on the previous approximation factors." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
In @cite_1, a rate-achieving scheme for packet switching at the switch level is presented. Their scheme is based on convergence to the optimal multicommodity flow using delayed decisions for queued packets. Their results somewhat resemble our @math algorithm. However, their scheme depends on the existence of an average packet size, whereas our scheme addresses the full routing-scheduling question for any size distribution and any (adversarial) arrival schedule. In @cite_11, a queueing analysis of several optical transport network architectures is conducted, and it is shown that, under some conditions on the arrival process, some of the schemes can achieve the maximum network rate. Like the previous work, this paper does not address the full routing-scheduling question discussed here and does not handle unbounded job sizes. Another difference is that our paper provides an algorithm, @math, discussed below, that guarantees the completion time of a job at the time of arrival; as far as we know, it is the first such algorithm.
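The switch-scheduling schemes referenced here compute a maximum-weight (or maximum vertex-weighted) matching between input and output ports in each time slot, as described in the cited abstracts below. As a rough, hypothetical illustration only (not the cited authors' exact algorithms), the following Python sketch performs one such scheduling step with made-up queue lengths as weights.

# Illustrative toy (not the cited authors' exact scheme): one scheduling step
# of a maximum-weight matching policy for an N x N input-queued switch, with
# hypothetical virtual-output-queue lengths used as edge weights.
import numpy as np
from scipy.optimize import linear_sum_assignment

def mwm_schedule(queue_lengths):
    """queue_lengths[i][j] = packets waiting at input i for output j.
    Returns a dict mapping each input port to the output port it serves."""
    weights = np.asarray(queue_lengths, dtype=float)
    # linear_sum_assignment minimizes cost, so negate to maximize total weight
    rows, cols = linear_sum_assignment(-weights)
    return dict(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    Q = [[3, 0, 1],
         [0, 2, 5],
         [4, 1, 0]]                     # made-up queue occupancies
    print(mwm_schedule(Q))              # one max-weight matching, e.g. {0: 0, 1: 2, 2: 1}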
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "1984382275", "2125953414" ], "abstract": [ "This paper proposes a new class of online policies for scheduling in input-buffered crossbar switches. Given an initial configuration of packets at the input buffers, these policies drain all packets in the system in the minimal amount of time provided that there are no further arrivals. These policies are also throughput optimal for a large class of arrival processes which satisfy strong-law of large numbers. We show that it is possible for policies in our class to be throughput optimal even if they are not constrained to be maximal in every time slot. Most algorithms for switch scheduling take an edge based approach; in contrast, we focus on scheduling (a large enough set of) the most congested ports. This alternate approach allows for lower-complexity algorithms, and also requires a non-standard technique to prove throughput-optimality. One algorithm in our class, Maximum Vertex-weighted Matching (MVM) has worst-case complexity similar to Max-size Matching, and in simulations shows slightly better delay performance than Max-(edge)weighted-Matching (MWM).", "Input Queued (IQ) switches have been very well studied in the recent past. The main problem in the IQ switches concerns scheduling. The main focus of the research has been the fixed length packet-known as cells-case. The scheduling decision becomes relatively easier for cells compared to the variable length packet case as scheduling needs to be done at a regular interval of fixed cell time. In real traffic dividing the variable packets into cells at the input side of the switch and then reassembling these cells into packets on the output side achieve it. The disadvantages of this cell-based approach are the following: (a) bandwidth is lost as division of a packet may generate incomplete cells, and (b) additional overhead of segmentation and reassembling cells into packets. This motivates the packet scheduling: scheduling is done in units of arriving packet sizes and in nonpreemptive fashion. In M.A. (2001) the problem of packet scheduling was first considered. They show that under any admissible Bernoulli i.i.d. arrival traffic a simple modification of maximum weight matching (MWM) algorithm is stable, similar to cell-based MWM. In this paper, we study the stability properties of packet based scheduling algorithm for general admissible arrival traffic pattern. We first show that the result of extends to general regenerative traffic model instead of just admissible traffic, that is, packet based MWM is stable. Next we show that there exists an admissible traffic pattern under which any work-conserving (that is maximal type) scheduling algorithm will be unstable. This suggests that the packet based MWM will be unstable too. To overcome this difficulty we propose a new class of \"waiting\" algorithms. We show that \"waiting\"-MWM algorithm is stable for any admissible traffic using fluid limit technique." ] }
0711.0301
1812196998
Advanced channel reservation is emerging as an important feature of ultra high-speed networks requiring the transfer of large files. Applications include scientific data transfers and database backup. In this paper, we present two new, on-line algorithms for advanced reservation, called BatchAll and BatchLim, that are guaranteed to achieve optimal throughput performance, based on multi-commodity flow arguments. Both algorithms are shown to have polynomial-time complexity and provable bounds on the maximum delay for 1+epsilon bandwidth augmented networks. The BatchLim algorithm returns the completion time of a connection immediately as a request is placed, but at the expense of a slightly looser competitive ratio than that of BatchAll. We also present a simple approach that limits the number of parallel paths used by the algorithms while provably bounding the maximum reduction factor in the transmission throughput. We show that, although the number of different paths can be exponentially large, the actual number of paths needed to approximate the flow is quite small and proportional to the number of edges in the network. Simulations for a number of topologies show that, in practice, 3 to 5 parallel paths are sufficient to achieve close to optimal performance. The performance of the competitive algorithms are also compared to a greedy benchmark, both through analysis and simulation.
Many papers have discussed the issue of path dispersion and attempted to achieve good throughput with limited dispersion; a survey of some results in this field is given in @cite_9. In @cite_23 @cite_21, heuristic methods for controlling multipath routing, together with some quantitative measures, are presented. As far as we know, our work gives the first formal treatment that allows a flow to be approximated by a limited number of paths to any desired accuracy.
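The claim that the number of paths can be kept proportional to the number of edges is reminiscent of the classical flow-decomposition argument: any acyclic s-t flow can be peeled into at most one path per edge. The following Python sketch implements only that standard textbook procedure (with a hypothetical example flow), not the approximation scheme of the paper itself.

# Minimal sketch of classical flow decomposition (assumes flow conservation
# and an acyclic flow; not the paper's approximation scheme).
from collections import defaultdict

def decompose_flow(flow, s, t):
    """flow: dict {(u, v): amount > 0} describing an acyclic s-t flow.
    Returns a list of (path, amount) pairs whose sum reproduces the flow;
    at most one path per edge, since each pass zeroes out some edge."""
    flow = dict(flow)
    out = defaultdict(list)
    for (u, v) in flow:
        out[u].append(v)
    paths = []
    while any(flow.get((s, v), 0) > 1e-12 for v in out[s]):
        path, u = [s], s
        while u != t:                    # follow positive-flow edges to t
            v = next(w for w in out[u] if flow.get((u, w), 0) > 1e-12)
            path.append(v)
            u = v
        amount = min(flow[(path[i], path[i + 1])] for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= amount
        paths.append((path, amount))
    return paths

if __name__ == "__main__":
    # hypothetical 4-node example: 3 units from s to t over two routes
    f = {("s", "a"): 2.0, ("a", "t"): 2.0, ("s", "b"): 1.0, ("b", "t"): 1.0}
    print(decompose_flow(f, "s", "t"))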
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_23" ], "mid": [ "1481933765", "1970554468", "2108261513" ], "abstract": [ "Internet service provider faces a daunting challenge in provisioning network efficiently. We introduce a proactive multipath routing scheme that tries to route traffic according to its built-in properties. Based on mathematical analysis, our approach disperses incoming traffic flows onto multiple paths according to path qualities. Long-lived flows are detected and migrated to the shortest path if their QoS could be guaranteed there. Suggesting nondisjoint path set, four types of dispersion policies are analyzed, and flow classification policy which relates flow trigger with link state update period is investigated. Simulation experiments show that our approach outperforms traditional single path routing significantly.", "The throughput of wireless networks can be significantly improved by multi-channel communications compared with single-channel communications since the use of multiple channels can reduce interference influence. In this paper, we study interference-aware topology control and QoS routing in IEEE 802.11-based multi-channel wireless mesh networks with dynamic traffic. Channel assignment and routing are two basic issues in such networks. Different channel assignments can lead to different network topologies. We present a novel definition of co-channel interference. Based on this concept, we formally define and present an effective heuristic for the minimum INterference Survivable Topology Control (INSTC) problem which seeks a channel assignment for the given network such that the induced network topology is interference-minimum among all K-connected topologies. We then formulate the Bandwidth-Aware Routing (BAR) problem for a given network topology, which seeks routes for QoS connection requests with bandwidth requirements. We present a polynomial time optimal algorithm to solve the BAR problem under the assumption that traffic demands are splittable. For the non-splittable case, we present a maximum bottleneck capacity path routing heuristic. Simulation results show that compared with the simple common channel assignment and shortest path routing approach, our scheme improves the system performance by 57 on average in terms of connection blocking ratio.", "This paper analyzes the asymptotic behavior of packet-train probing over a multi-hop network path P carrying arbitrarily routed bursty cross-traffic flows. We examine the statistical mean of the packet-train output dispersions and its relationship to the input dispersion. We call this relationship the response curve of path P. We show that the real response curve Z is tightly lower-bounded by its multi-hop fluid counterpart F, obtained when every cross-traffic flow on P is hypothetically replaced with a constant-rate fluid flow of the same average intensity and routing pattern. The real curve Z asymptotically approaches its fluid counterpart F as probing packet size or packet train length increases. Most existing measurement techniques are based upon the single-hop fluid curve S associated with the bottleneck link in P. We note that the curve S coincides with F in a certain large-dispersion input range, but falls below F in the remaining small-dispersion input ranges. As an implication of these findings, we show that bursty cross-traffic in multi-hop paths causes negative bias (asymptotic underestimation) to most existing techniques. 
This bias can be mitigated by reducing the deviation of Z from S using large packet size or long packet-trains. However, the bias is not completely removable for the techniques that use the portion of S that falls below F." ] }
0711.1242
1843364394
We study a class of games in which a finite number of agents each controls a quantity of flow to be routed through a network, and are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish leader'' agent allocates his flow, and then commits to that choice so that other agents are compelled to minimise their own cost based on the first agent's choice. We find that even in simple networks, the leader can often improve his own cost at the expense of increased social cost. Focusing on the 2-player case, we give upper and lower bounds on the worst-case additional cost incurred.
A large body of recent work (initiated mainly by Roughgarden and Tardos @cite_2 @cite_19 ) has studied, from a game-theoretic perspective, how selfishness can degrade the overall performance of a system that has multiple (selfish) users. Much of this work has focused on situations where users have access to shared resources, and the cost of using a resource increases as the resource attracts more usage. Our focus here is on the ``parallel links'' network topology, also referred to as scheduling jobs on a set of load-dependent machines, which is one of the most commonly studied models (e.g. @cite_10 @cite_11 @cite_16 @cite_21 @cite_6 @cite_20 ). Papers such as @cite_1 @cite_4 @cite_16 have studied the price of anarchy for these games in the ``unsplittable flow'' setting, where each user may only use a single resource. In contrast, we study the ``splittable flow'' setting of @cite_15 . This version (finitely many players, splittable flow) was shown in @cite_15 @cite_8 to possess unique pure Nash equilibria (see Definition ). @cite_14 study the cost of selfish behaviour in this model, and compare it with the cost of selfish behaviour in the Wardrop model (i.e. infinitely many infinitesimal users).
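As a concrete but purely hypothetical illustration of the splittable-flow setting on parallel links (it is not taken from any of the cited papers), the following sketch lets two players, each routing half a unit over a link with latency l1(x) = x and a link with constant latency l2(x) = 1, reach the unique Nash equilibrium by best-response dynamics and compares its social cost with the social optimum.

# Hypothetical instance (not from the cited papers): two parallel links with
# latencies l1(x) = x and l2(x) = 1; two players each route DEMAND = 0.5 units
# and may split them.  Best-response dynamics find the Nash equilibrium.
from scipy.optimize import minimize_scalar

DEMAND = 0.5
l1 = lambda x: x          # load-dependent link
l2 = lambda x: 1.0        # constant-latency link

def player_cost(own_on_1, other_on_1):
    total_1 = own_on_1 + other_on_1
    total_2 = 2 * DEMAND - total_1
    return own_on_1 * l1(total_1) + (DEMAND - own_on_1) * l2(total_2)

def best_response(other_on_1):
    res = minimize_scalar(lambda x: player_cost(x, other_on_1),
                          bounds=(0.0, DEMAND), method="bounded")
    return res.x

x1 = x2 = DEMAND / 2
for _ in range(200):                                  # best-response dynamics
    x1, x2 = best_response(x2), best_response(x1)

total_1 = x1 + x2
nash_cost = total_1 * l1(total_1) + (1 - total_1) * l2(1 - total_1)
opt = minimize_scalar(lambda y: y * l1(y) + (1 - y) * l2(1 - y),
                      bounds=(0.0, 1.0), method="bounded")
print("Nash flow on link 1:", round(total_1, 3),      # ~0.667 (each player 1/3)
      "social cost:", round(nash_cost, 3),            # ~0.778
      "optimal social cost:", round(opt.fun, 3))      # ~0.750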
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_21", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_15", "@cite_16", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2145512846", "1554620983", "2072113937", "2107983838", "2158770193", "2017227260", "1548210998", "2039786295", "1521705705", "2015061583", "2070394877", "1991014770", "2346929692" ], "abstract": [ "It is well known that in a network with arbitrary (convex) latency functions that are a function of edge traffic, the worst-case ratio, over all inputs, of the system delay caused due to selfish behavior versus the system delay of the optimal centralized solution may be unbounded even if the system consists of only two parallel links. This ratio is called the price of anarchy (PoA). In this paper, we investigate ways by which one can reduce the performance degradation due to selfish behavior. We investigate two primary methods (a) Stackelberg routing strategies, where a central authority, e.g., network manager, controls a fixed fraction of the flow, and can route this flow in any desired way so as to influence the flow of selfish users; and (b) network tolls, where tolls are imposed on the edges to modify the latencies of the edges, and thereby influence the induced Nash equilibrium. We obtain results demonstrating the effectiveness of both Stackelberg strategies and tolls in controlling the price of anarchy. For Stackelberg strategies, we obtain the first results for nonatomic routing in graphs more general than parallel-link graphs, and strengthen existing results for parallel-link graphs, (i) In series-parallel graphs, we show that Stackelberg routing reduces the PoA to a constant (depending on the fraction of flow controlled). (ii) For general graphs, we obtain latency-class specific bounds on the PoA with Stackelberg routing, which give a continuous trade-off between the fraction of flow controlled and the price of anarchy, (iii) In parallel-link graphs, we show that for any given class L of latency functions, Stackelberg routing reduces the PoA to at most α + (1 - α) · ρ(L), where α is the fraction of flow controlled and ρ(L) is the PoA of class L (when α = 0). For network tolls, motivated by the known strong results for nonatomic games, we consider the more general setting of atomic splittable routing games. We show that tolls inducing an optimal flow always exist, even for general asymmetric games with heterogeneous users, and can be computed efficiently by solving a convex program. Furthermore, we give a complete characterization of flows that can be induced via tolls. These are the first results on the effectiveness of tolls for atomic splittable games.", "We present a short geometric proof for the price of anarchy results that have recently been established in a series of papers on selfish routing in multicommodity flow networks. This novel proof also facilitates two new types of results: On the one hand, we give pseudo-approximation results that depend on the class of allowable cost functions. On the other hand, we derive improved bounds on the inefficiency of Nash equilibria for situations in which the equilibrium travel times are within reasonable limits of the free-flow travel times. These tighter bounds help to explain empirical observations in vehicular traffic networks. 
Our analysis holds in the more general context of congestion games, which provides the framework in which we describe this work.", "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria.", "We consider routing games where the performance of each user is dictated by the worst (bottleneck) element it employs. We are given a network, finitely many (selfish) users, each associated with a positive flow demand, and a load-dependent performance function for each network element; the social (i.e., system) objective is to optimize the performance of the worst element in the network (i.e., the network bottleneck). Although we show that such \"bottleneck\" routing games appear in a variety of practical scenarios, they have not been considered yet. Accordingly, we study their properties, considering two routing scenarios, namely when a user can split its traffic over more than one path (splittable bottleneck game) and when it cannot (unsplittable bottleneck game). First, we prove that, for both splittable and unsplittable bottleneck games, there is a (not necessarily unique) Nash equilibrium. Then, we consider the rate of convergence to a Nash equilibrium in each game. Finally, we investigate the efficiency of the Nash equilibria in both games with respect to the social optimum; specifically, while for both games we show that the price of anarchy is unbounded, we identify for each game conditions under which Nash equilibria are socially optimal.", "We study the impact of collusion in network games with splittable flow and focus on the well established price of anarchy as a measure of this impact. 
We first investigate symmetric load balancing games and show that the price of anarchy is at most m, where m denotes the number of coalitions. For general networks, we present an instance showing that the price of anarchy is unbounded, even in the case of two coalitions. If latencies are restricted to polynomials with nonnegative coefficients and bounded degree, we prove upper bounds on the price of anarchy for general networks, which improve upon the current best ones except for affine latencies. In light of the negative results even for two coalitions, we analyze the effectiveness of Stackelberg strategies as a means to improve the quality of Nash equilibria. In this setting, an α fraction of the entire demand is first routed centrally by a Stackelberg leader according to a predefined Stackelberg strategy and the remaining demand is then routed selfishly by the coalitions (followers). For a single coalitional follower and parallel arcs, we develop an efficiently computable Stackelberg strategy that reduces the price of anarchy to one. For general networks and a single coalitional follower, we show that a simple strategy, called SCALE, reduces the price of anarchy to 1+α. Finally, we investigate SCALE for multiple coalitional followers, general networks, and affine latencies. We present the first known upper bound on the price of anarchy in this case. Our bound smoothly varies between 1.5 for α=0 and full efficiency for α=1.", "The essence of the routing problem in real networks is that the traffic demand from a source to destination must be satisfied by choosing a single path between source and destination. The splittable version of this problem is when demand can be satisfied by many paths, namely a flow from source to destination. The unsplittable, or discrete version of the problem is more realistic yet is more complex from the algorithmic point of view; in some settings optimizing such unsplittable traffic flow is computationally intractable.In this paper, we assume this more realistic unsplittable model, and investigate the \"price of anarchy\", or deterioration of network performance measured in total traffic latency under the selfish user behavior. We show that for linear edge latency functions the price of anarchy is exactly @math 2.5 for unweighted demand. These results are easily extended to (weighted or unweighted) atomic \"congestion games\", where paths are replaced by general subsets. We also show that for polynomials of degree d edge latency functions the price of anarchy is dδ(d). Our results hold also for mixed strategies.Previous results of Roughgarden and Tardos showed that for linear edge latency functions the price of anarchy is exactly 4 3 under the assumption that each user controls only a negligible fraction of the overall traffic (this result also holds for the splittable case). Note that under the assumption of negligible traffic pure and mixed strategies are equivalent and also splittable and unsplittable models are equivalent.", "We study bottleneck congestion games where the social cost is determined by the worst congestion on any resource. In the literature, bottleneck games assume player utility costs determined by the worst congested resource in their strategy. However, the Nash equilibria of such games are inefficient since the price of anarchy can be very high and proportional to the number of resources. 
In order to obtain smaller price of anarchy we introduce exponential bottleneck games, where the utility costs of the players are exponential functions of their congestions. In particular, the delay function for any resource r is MCr, where Cr denotes the number of players that use r, and M is an integer constant. We find that exponential bottleneck games are very efficient and give the following bound on the price of anarchy: O(log |R|), where R is the set of resources. This price of anarchy is tight, since we demonstrate a game with price of anarchy Ω(log |R|). We obtain our tight bounds by using two novel proof techniques: transformation, which we use to convert arbitrary games to simpler games, and expansion, which we use to bound the price of anarchy in a simpler game.", "A natural generalization of the selfish routing setting arises when some of the users obey a central coordinating authority, while the rest act selfishly. Such behavior can be modeled by dividing the users into an α fraction that are routed according to the central coordinator’s routing strategy (Stackelberg strategy), and the remaining 1−α that determine their strategy selfishly, given the routing of the coordinated users. One seeks to quantify the resulting price of anarchy, i.e., the ratio of the cost of the worst traffic equilibrium to the system optimum, as a function of α. It is well-known that for α=0 and linear latency functions the price of anarchy is at most 4 3 (J. ACM 49, 236–259, 2002). If α tends to 1, the price of anarchy should also tend to 1 for any reasonable algorithm used by the coordinator. We analyze two such algorithms for Stackelberg routing, LLF and SCALE. For general topology networks, multicommodity users, and linear latency functions, we show a price of anarchy bound for SCALE which decreases from 4 3 to 1 as α increases from 0 to 1, and depends only on α. Up to this work, such a tradeoff was known only for the case of two nodes connected with parallel links (SIAM J. Comput. 33, 332–350, 2004), while for general networks it was not clear whether such a result could be achieved, even in the single-commodity case. We show a weaker bound for LLF and also some extensions to general latency functions. The existence of a central coordinator is a rather strong requirement for a network. We show that we can do away with such a coordinator, as long as we are allowed to impose taxes (tolls) on the edges in order to steer the selfish users towards an improved system cost. As long as there is at least a fraction α of users that pay their taxes, we show the existence of taxes that lead to the simulation of SCALE by the tax-payers. The extension of the results mentioned above quantifies the improvement on the system cost as the number of tax-evaders decreases.", "We examine how the selfish behavior of heterogeneous users in a network can be regulated through economic disincentives, i.e., through the introduction of appropriate taxation. One wants to impose taxes on the edges so that any traffic equilibrium reached by the selfish users who are conscious of both the travel latencies and the taxes will minimize the social cost, i.e., will minimize the total latency. We generalize previous results of Cole, Dodis and Roughgarden that held for a single origin-destination pair to the multicommodity setting. 
Our approach, which could be of independent interest, is based on the formulation of traffic equilibria as a nonlinear complementarity problem by Aashtiani and Magnanti (1981), We extend this formulation so that each of its solutions will give us a set of taxes that forces the network users to conform, at equilibrium, to a certain prescribed routing. We use the special nature of the prescribed minimum-latency flow in order to reduce the difficult nonlinear complementarity formulation to a pair of primal-dual linear programs. LP duality is then enough to derive our results.", "There has been substantial work developing simple, efficient no-regret algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially-changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a no-regret strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure.In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (formally, on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players' regret bounds and the maximum slope of any latency function. We also show that price-of-anarchy results may be applied to these approximate equilibria, and also consider the finite-size (non-infinitesimal) load-balancing model of Azar [2].", "We resolve the worst-case price of anarchy (POA) of atomic splittable congestion games. Prior to this work, no tight bounds on the POA in such games were known, even for the simplest non-trivial special case of affine cost functions. We make two distinct contributions. On the upper-bound side, we define the framework of \"local smoothness\", which refines the standard smoothness framework for games with convex strategy sets. While standard smoothness arguments cannot establish tight bounds on the POA in atomic splittable congestion games, we prove that local smoothness arguments can. Further, we prove that every POA bound derived via local smoothness applies automatically to every correlated equilibrium of the game. Unlike standard smoothness arguments, bounds proved using local smoothness do not always apply to the coarse correlated equilibria of the game. Our second contribution is a very general lower bound: for every set L that satisfies mild technical conditions, the worst-case POA of pure Nash equilibria in atomic splittable congestion games with cost functions in L is exactly the smallest upper bound provable using local smoothness arguments. In particular, the worst-case POA of pure Nash equilibria, mixed Nash equilibria, and correlated equilibria coincide in such games.", "In this paper, we present a combined study of price competition and traffic control in a congested network. We study a model in which service providers own the routes in a network and set prices to maximize their profits, while users choose the amount of flow to send and the routing of the flow according to Wardrop's principle. 
When utility functions of users are concave and have concave first derivatives, we characterize a tight bound of 2-3 on efficiency in pure strategy equilibria of the price competition game. We obtain the same bound under the assumption that there is no fixed latency cost, i.e., the latency of a link at zero flow is equal to zero. These bounds are tight even when the numbers of routes and service providers are arbitrarily large. © 2008 Wiley Periodicals, Inc. NETWORKS, 2008", "We introduce a unifying model to study the impact of worst-case latency deviations in non-atomic selfish routing games. In our model, latencies are subject to (bounded) deviations which are taken into account by the players. The quality deterioration caused by such deviations is assessed by the Deviation Ratio, i.e., the worst case ratio of the cost of a Nash flow with respect to deviated latencies and the cost of a Nash flow with respect to the unaltered latencies. This notion is inspired by the Price of Risk Aversion recently studied by Nikolova and Stier-Moses [9]. Here we generalize their model and results. In particular, we derive tight bounds on the Deviation Ratio for multi-commodity instances with a common source and arbitrary non-negative and non-decreasing latency functions. These bounds exhibit a linear dependency on the size of the network (besides other parameters). In contrast, we show that for general multi-commodity networks an exponential dependency is inevitable. We also improve recent smoothness results to bound the Price of Risk Aversion." ] }
0711.1242
1843364394
We study a class of games in which a finite number of agents each controls a quantity of flow to be routed through a network, and are able to split their own flow between multiple paths through the network. Recent work on this model has contrasted the social cost of Nash equilibria with the best possible social cost. Here we show that additional costs are incurred in situations where a selfish leader'' agent allocates his flow, and then commits to that choice so that other agents are compelled to minimise their own cost based on the first agent's choice. We find that even in simple networks, the leader can often improve his own cost at the expense of increased social cost. Focusing on the 2-player case, we give upper and lower bounds on the worst-case additional cost incurred.
Stackelberg leadership refers to a game-theoretic situation where one player (the ``leader'') selects his action first and commits to it. The other player(s) then choose their own actions based on the choice made by the leader. Recent work on Stackelberg scheduling in the context of network flow (e.g. @cite_7 @cite_0 @cite_17 ) has studied it as a tool to mitigate the performance degradation due to selfish users. The flow that is controlled by the leader is routed so as to minimise social cost in the presence of followers who minimise their own costs. In contrast, here we consider what happens when the leading flow is controlled by another selfish agent. We show that the price of decentralised behaviour goes up even further in the presence of such a Stackelberg leader.
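Continuing the same hypothetical two-link instance used in the earlier sketch, the following sketch makes one of the two selfish players a Stackelberg leader: the leader commits to a split, the follower best-responds, and the leader chooses its split selfishly. In this toy instance the leader's own cost drops below its simultaneous-Nash cost while the social cost rises, which is the kind of effect studied here; the numbers apply to this example only.

# Same hypothetical instance as before: the leader commits to `a` units on
# link 1, the follower best-responds, and the leader picks `a` selfishly.
from scipy.optimize import minimize_scalar

DEMAND = 0.5

def follower_response(leader_on_1):
    cost = lambda b: b * (leader_on_1 + b) + (DEMAND - b) * 1.0
    return minimize_scalar(cost, bounds=(0.0, DEMAND), method="bounded").x

def leader_cost(a):
    b = follower_response(a)
    return a * (a + b) + (DEMAND - a) * 1.0

lead = minimize_scalar(leader_cost, bounds=(0.0, DEMAND), method="bounded")
a = lead.x
b = follower_response(a)
total_1 = a + b
social = total_1 ** 2 + (1.0 - total_1)
print("leader cost:", round(lead.fun, 3),    # ~0.375, below its Nash cost 7/18 ~ 0.389
      "social cost:", round(social, 3))      # ~0.812, above the Nash social cost 7/9 ~ 0.778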
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_17" ], "mid": [ "1764576420", "2158770193", "2012552037" ], "abstract": [ "We study a multi-player one-round game termed Stackelberg Network Pricing Game, in which a leader can set prices for a subset of m priceable edges in a graph. The other edges have a fixed cost. Based on the leader’s decision one or more followers optimize a polynomial-time solvable combinatorial minimization problem and choose a minimum cost solution satisfying their requirements based on the fixed costs and the leader’s prices. The leader receives as revenue the total amount of prices paid by the followers for priceable edges in their solutions. Our model extends several known pricing problems, including single-minded and unit-demand pricing, as well as Stackelberg pricing for certain follower problems like shortest path or minimum spanning tree. Our first main result is a tight analysis of a single-price algorithm for the single follower game, which provides a (1+e)log m-approximation. This can be extended to provide a (1+e)(log k+log m)-approximation for the general problem and k followers. The problem is also shown to be hard to approximate within @math for some e>0. If followers have demands, the single-price algorithm provides an @math -approximation, and the problem is hard to approximate within @math for some e>0. Our second main result is a polynomial time algorithm for revenue maximization in the special case of Stackelberg bipartite vertex-cover, which is based on non-trivial max-flow and LP-duality techniques. This approach can be extended to provide constant-factor approximations for any constant number of followers.", "We study the impact of collusion in network games with splittable flow and focus on the well established price of anarchy as a measure of this impact. We first investigate symmetric load balancing games and show that the price of anarchy is at most m, where m denotes the number of coalitions. For general networks, we present an instance showing that the price of anarchy is unbounded, even in the case of two coalitions. If latencies are restricted to polynomials with nonnegative coefficients and bounded degree, we prove upper bounds on the price of anarchy for general networks, which improve upon the current best ones except for affine latencies. In light of the negative results even for two coalitions, we analyze the effectiveness of Stackelberg strategies as a means to improve the quality of Nash equilibria. In this setting, an α fraction of the entire demand is first routed centrally by a Stackelberg leader according to a predefined Stackelberg strategy and the remaining demand is then routed selfishly by the coalitions (followers). For a single coalitional follower and parallel arcs, we develop an efficiently computable Stackelberg strategy that reduces the price of anarchy to one. For general networks and a single coalitional follower, we show that a simple strategy, called SCALE, reduces the price of anarchy to 1+α. Finally, we investigate SCALE for multiple coalitional followers, general networks, and affine latencies. We present the first known upper bound on the price of anarchy in this case. Our bound smoothly varies between 1.5 for α=0 and full efficiency for α=1.", "We study the problem of optimizing the performance of a system shared by selfish, noncooperative users. 
We consider the concrete setting of scheduling jobs on a set of shared machines with load-dependent latency functions specifying the length of time necessary to complete a job; we measure system performance by the total latency of the system. Assigning jobs according to the selfish interests of individual users (who wish to minimize only the latency that their own jobs experience) typically results in suboptimal system performance. However, in many systems of this type there is a mixture of “selfishly controlled” and “centrally controlled” jobs; as the assignment of centrally controlled jobs will influence the subsequent actions by selfish users, we aspire to contain the degradation in system performance due to selfish behavior by scheduling the centrally controlled jobs in the best possible way. We formulate this goal as an optimization problem via Stackelberg games , games in which one player acts a leader (here, the centralized authority interested in optimizing system performance) and the rest as followers (the selfish users). The problem is then to compute a strategy for the leader (a em Stackelberg strategy ) that induces the followers to react in a way that (at least approximately) minimizes the total latency in the system. In this paper, we prove that it is NP-hard to compute the optimal Stackelberg strategy and present simple strategies with provable performance guarantees. More precisely, we give a simple algorithm that computes a strategy inducing a job assignment with total latency no more than a constant times that of the optimal assignment of all of the jobs; in the absence of centrally controlled jobs and a Stackelberg strategy, no result of this type is possible. We also prove stronger performance guarantees in the" ] }
0711.1612
2951396542
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
Gorodnitsky and Rao @cite_3 propose FOCUSS as an iterative method for finding sparse solutions to underdetermined systems. At each iteration, FOCUSS solves a reweighted @math minimization with weights for @math . For nonzero signal coefficients, it is shown that each step of FOCUSS is equivalent to a step of the modified Newton's method for minimizing the function subject to @math . As the iterations proceed, it is suggested to identify those coefficients apparently converging to zero, remove them from subsequent iterations, and constrain them instead to be identically zero.
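A minimal numerical sketch of a FOCUSS-style iteration may make the reweighting concrete. It is only an illustration: the problem sizes are made up, the reweighting uses the squared magnitudes of the previous iterate, and the pruning of vanishing coefficients mentioned above is omitted.

# FOCUSS-style sketch: each step solves the weighted minimum-norm problem
#   x = W A^T (A W A^T)^{-1} y   with   W = diag(|x_prev|^2 + eps),
# i.e. a reweighted l2 step; eps keeps the linear system well-posed.
import numpy as np

def focuss(A, y, iters=30, eps=1e-10):
    x = np.linalg.pinv(A) @ y                    # minimum-l2-norm initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x) ** 2 + eps)
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 30))            # made-up 10 x 30 system
    x_true = np.zeros(30)
    x_true[[3, 17, 25]] = [1.0, -2.0, 0.5]
    x_hat = focuss(A, A @ x_true)
    print(np.round(x_hat, 2))                    # typically concentrates on indices 3, 17, 25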
{ "cite_N": [ "@cite_3" ], "mid": [ "2122315118" ], "abstract": [ "We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging." ] }
0711.1612
2951396542
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
Harikumar and Bresler @cite_4 propose an iterative algorithm that can be viewed as a generalization of FOCUSS. At each stage, the algorithm solves a convex optimization problem with a reweighted @math cost function that encourages sparse solutions. The algorithm allows for different reweighting rules; for a given choice of reweighting rule, the algorithm converges to a local minimum of some concave objective function (analogous to the log-sum penalty function). These methods build upon @math minimization rather than @math minimization.
{ "cite_N": [ "@cite_4" ], "mid": [ "2081956522" ], "abstract": [ "An algorithm for solving the problem: minimize @math (a convex function) subject to @math , @math , each @math a concave function, is presented. Specifically, the function [ P [ x,t,r_k ] f( x ) + r_k^ - 1 [ g_i ( x ) - t_i ] ^2 ] is minimized over all x, nonnegative t, for a strictly decreasing null sequence @math . This extends the work of T. Pietrzykowski [5]. It is proved that for every @math , there exists a finite point @math which minimizes P, and which solves the convex programming problem as @math . This algorithm is similar to the Sequential Unconstrained Minimization Technique (SUMT) [1] in that it solves the (Wolfe) dual programming problem [6]. It differs from SUMT in that (1) it approaches the optimum from the region of infeasibility (i.e., it is a relaxation technique), (2) it does not require a nonempty interior to the nonlinearly constrained region, (3) no separate feasibilit..." ] }
0711.1612
2951396542
It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing.
Delaney and Bresler @cite_41 also propose a general algorithm for minimizing functionals having concave regularization penalties, again by solving a sequence of reweighted convex optimization problems (though not necessarily @math problems) with weights that decrease as a function of the prior estimate. With the particular choice of a log-sum regularization penalty, the algorithm resembles the noise-aware reweighted @math minimization discussed in the present paper.
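For comparison, the basic (noiseless) reweighted l1 iteration with a log-sum-style weight update can be sketched in a few lines. The dimensions and data below are made up, and the sketch is not any cited author's exact algorithm; each pass solves a weighted l1 problem posed as a linear program.

# Toy reweighted-l1 sketch: weights w_i = 1 / (|x_i| + eps) from the previous
# pass (a log-sum-style update); each weighted l1 problem is posed as an LP
# over variables [x; t] with |x_i| <= t_i and Ax = y (noiseless case).
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])         #  x - t <= 0  and  -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

def reweighted_l1(A, y, iters=5, eps=0.1):
    w = np.ones(A.shape[1])                      # first pass is plain l1
    for _ in range(iters):
        x = weighted_l1(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((18, 40))            # made-up 18 x 40 system
    x_true = np.zeros(40)
    x_true[[2, 9, 21]] = [1.5, -1.0, 0.8]
    x_hat = reweighted_l1(A, A @ x_true)
    print(np.round(x_hat[[2, 9, 21]], 2))        # the nonzeros are usually recovered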
{ "cite_N": [ "@cite_41" ], "mid": [ "2075547019" ], "abstract": [ "As surrogate functions of 0-norm, many nonconvex penalty functions have been proposed to enhance the sparse vector recovery. It is easy to extend these nonconvex penalty functions on singular values of a matrix to enhance low-rank matrix recovery. However, different from convex optimization, solving the nonconvex low-rank minimization problem is much more challenging than the nonconvex sparse minimization problem. We observe that all the existing nonconvex penalty functions are concave and monotonically increasing on [0, ∞). Thus their gradients are decreasing functions. Based on this property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the weight vector as the gradient of the concave penalty function, the WSVT problem has a closed form solution. In theory, we prove that IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances the low-rank matrix recovery compared with state-of-the-art convex algorithms." ] }
0710.2505
2135346163
Trace semantics has been defined for various kinds of state-based systems, notably with different forms of branching such as non-determinism vs. probability. In this paper we claim to identify one underlying mathematical structure behind these "trace semantics," namely coinduction in a Kleisli category. This claim is based on our technical result that, under a suitably order-enriched setting, a final coalgebra in a Kleisli category is given by an initial algebra in the category Sets. Formerly the theory of coalgebras has been employed mostly in Sets where coinduction yields a finer process semantics of bisimilarity. Therefore this paper extends the application field of coalgebras, providing a new instance of the principle "process semantics via coinduction."
In a different context, that of functional programming, the work @cite_42 also studies initial algebras and final coalgebras in a Kleisli category. The motivation there is to combine datatypes and computational effects. More specifically, an initial algebra and a final coalgebra support the fold and the unfold operators, respectively, used in recursive programs over datatypes. A computational effect is presented as a monad, and its Kleisli category is the category of effectful computations. The difference between @cite_42 and the current work is as follows. In @cite_42 , the original category of pure functions is already algebraically compact; the paper studies the conditions for the algebraic compactness to be carried over to Kleisli categories. In contrast, in the current work, it is a monad---with a suitable order structure, embodying the essence of ``branching''---which yields the initial algebra-final coalgebra coincidence on a Kleisli category; the coincidence is not present in the original category @math .
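As a loose, informal rendering of these two operators (and not of the categorical constructions themselves), the following Python sketch shows fold over lists, the carrier of the initial list algebra, and a depth-bounded approximation of the coinductive trace map obtained by unfolding a coalgebra in the Kleisli category of the finite-powerset monad, in the spirit of the trace semantics described in the abstract above. The transition system at the end is hypothetical.

# fold: the unique algebra map out of the initial list algebra
def fold(nil, cons, xs):
    acc = nil
    for x in reversed(xs):
        acc = cons(x, acc)
    return acc

# depth-bounded unfolding of a coalgebra in the Kleisli category of the
# finite-powerset monad: coalg(x) is a finite set of outcomes, each either
# "STOP" or (label, next_state); the result approximates the set of finite
# traces, keeping those of length at most `depth`.
def traces(coalg, state, depth):
    out = set()
    if depth < 0:
        return out
    for step in coalg(state):
        if step == "STOP":
            out.add(())
        else:
            label, nxt = step
            out |= {(label,) + w for w in traces(coalg, nxt, depth - 1)}
    return out

if __name__ == "__main__":
    print(fold(0, lambda x, acc: x + acc, [1, 2, 3]))     # 6
    system = {"x": {("a", "x"), "STOP"}}                  # hypothetical one-state system
    print(sorted(traces(lambda s: system[s], "x", 3)))
    # [(), ('a',), ('a', 'a'), ('a', 'a', 'a')]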
{ "cite_N": [ "@cite_42" ], "mid": [ "2088189323" ], "abstract": [ "In the semantics of programming, finite data types such as finite lists, have traditionally been modelled by initial algebras. Later final coalgebras were used in order to deal with infinite data types. Coalgebras, which are the dual of algebras, turned out to be suited, moreover, as models for certain types of automata and more generally, for (transition and dynamical) systems. An important property of initial algebras is that they satisfy the familiar principle of induction. Such a principle was missing for coalgebras until the work of Aczel (1988) on a theory of non-wellfounded sets, in which he introduced a proof principle nowadays called coinduction. It was formulated in terms of bisimulation, a notion originally stemming from the world of concurrent programming languages (Milner, 1980; Park, 1981). Using the notion of coalgebra homomorphism, the definition of bisimulation on coalgebras can be shown to be formally dual to that of congruence on algebras (Aczel and Mendler, 1989). Thus the three basic notions of universal algebra: algebra, homomorphism of algebras, and congruence, turn out to correspond to: coalgebra, homomorphism of coalgebras, and bisimulation, respectively. In this paper, the latter are taken as the basic ingredients of a theory called universal coalgebra. Some standard results from universal algebra are reformulated (using the afore mentioned correspondence) and proved for a large class of coalgebras, leading to a series of results on, e.g., the lattices of subcoalgebras and bisimulations, simple coalgebras and coinduction, and a covariety theorem for coalgebras similar to Birkhoff''s variety theorem." ] }
0710.3392
2950145130
We introduce a new formalism of differential operators for a general associative algebra A. It replaces Grothendieck's notion of differential operator on a commutative algebra in such a way that derivations of the commutative algebra are replaced by DDer(A), the bimodule of double derivations. Our differential operators act not on the algebra A itself but rather on F(A), a certain Fock space' associated to any noncommutative algebra A in a functorial way. The corresponding algebra D(F(A)), of differential operators, is filtered and gr D(F(A)), the associated graded algebra, is commutative in some twisted' sense. The resulting double Poisson structure on gr D(F(A)) is closely related to the one introduced by Van den Bergh. Specifically, we prove that gr D(F(A))=F(T_A(DDer(A)), provided A is smooth. It is crucial for our construction that the Fock space F(A) carries an extra-structure of a wheelgebra, a new notion closely related to the notion of a wheeled PROP. There are also notions of Lie wheelgebras, and so on. In that language, D(F(A)) becomes the universal enveloping wheelgebra of a Lie wheelgebroid of double derivations. In the second part of the paper we show, extending a classical construction of Koszul to the noncommutative setting, that any Ricci-flat, torsion-free bimodule connection on DDer(A) gives rise to a second order (wheeled) differential operator, a noncommutative analogue of the BV-operator.
Finally, in @cite_4 , Barannikov constructs from any modular operad a BV-style master equation (equation (5.5) of @cite_4 ) whose solutions are equivalent to algebras over the Feynman transform of that operad. When one sets the modular operad to be the operad denoted by @math in ( 9), one obtains a slight modification of the BV algebra defined above for the case of a quiver with one vertex; the modification consists in adding a parameter @math and keeping track of a genus grading.
{ "cite_N": [ "@cite_4" ], "mid": [ "2043706520" ], "abstract": [ "I describe the noncommutative Batalin-Vilkovisky geometry as- sociated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feyn- man transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affineP manifolds. As an application I give a construction of character- istic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affineS(t) manifolds, where S(t) is the twisted modular Det operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras." ] }
0710.3392
2950145130
We introduce a new formalism of differential operators for a general associative algebra A. It replaces Grothendieck's notion of differential operator on a commutative algebra in such a way that derivations of the commutative algebra are replaced by DDer(A), the bimodule of double derivations. Our differential operators act not on the algebra A itself but rather on F(A), a certain 'Fock space' associated to any noncommutative algebra A in a functorial way. The corresponding algebra D(F(A)), of differential operators, is filtered and gr D(F(A)), the associated graded algebra, is commutative in some 'twisted' sense. The resulting double Poisson structure on gr D(F(A)) is closely related to the one introduced by Van den Bergh. Specifically, we prove that gr D(F(A))=F(T_A(DDer(A))), provided A is smooth. It is crucial for our construction that the Fock space F(A) carries an extra-structure of a wheelgebra, a new notion closely related to the notion of a wheeled PROP. There are also notions of Lie wheelgebras, and so on. In that language, D(F(A)) becomes the universal enveloping wheelgebra of a Lie wheelgebroid of double derivations. In the second part of the paper we show, extending a classical construction of Koszul to the noncommutative setting, that any Ricci-flat, torsion-free bimodule connection on DDer(A) gives rise to a second order (wheeled) differential operator, a noncommutative analogue of the BV-operator.
One may also form a directed analogue of the construction of @cite_4 , in which undirected graphs are replaced by directed graphs and modular operads by wheeled PROPs (which include commutative wheelgebras). Here, one can additionally keep track of a genus grading at vertices.
{ "cite_N": [ "@cite_4" ], "mid": [ "2043706520" ], "abstract": [ "I describe the noncommutative Batalin-Vilkovisky geometry as- sociated naturally with arbitrary modular operad. The classical limit of this geometry is the noncommutative symplectic geometry of the corresponding tree-level cyclic operad. I show, in particular, that the algebras over the Feyn- man transform of a twisted modular operad P are in one-to-one correspondence with solutions to quantum master equation of Batalin-Vilkovisky geometry on the affineP manifolds. As an application I give a construction of character- istic classes with values in the homology of the quotient of Deligne-Mumford moduli spaces. These classes are associated naturally with solutions to the quantum master equation on affineS(t) manifolds, where S(t) is the twisted modular Det operad constructed from symmetric groups, which generalizes the cyclic operad of associative algebras." ] }
0710.3777
1802395297
We present a deterministic channel model which captures several key features of multiuser wireless communication. We consider a model for a wireless network with nodes connected by such deterministic channels, and present an exact characterization of the end-to-end capacity when there is a single source and a single destination and an arbitrary number of relay nodes. This result is a natural generalization of the max-flow min-cut theorem for wireline networks. Finally, to demonstrate the connections between the deterministic model and the Gaussian model, we look at two examples: the single-relay channel and the diamond network. We show that in each of these two examples, the capacity-achieving scheme in the corresponding deterministic model naturally suggests a scheme in the Gaussian model that is within 1 bit and 2 bits, respectively, of the cut-set upper bound, for all values of the channel gains. This is the first part of a two-part paper; the sequel [1] will focus on the proof of the max-flow min-cut theorem of a class of deterministic networks of which our model is a special case.
Finite field addition makes the model much more tractable, and neglecting the 1-bit carryover from one level to the next introduces a small error when the SNR is high. Other works @cite_7 have also exploited the simplicity of finite-field addition over real addition. Aref @cite_1 is one of the earliest works to use deterministic models for relay networks, and it proved a capacity result for the single-source-single-destination case. However, his model only captures the broadcast aspect but not the superposition aspect. This work was later extended to the multicast setting by Ratnakar and Kramer @cite_8 . Aref and El Gamal @cite_0 also computed the capacity of the semi-deterministic relay channel, but only with a single relay. @cite_4 also uses finite-field deterministic addition to model the superposition property, but they have no notion of signal scale, nor of the channel dropping some signal scales below the noise level; instead, they model noise by random erasures.
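For illustration only (a minimal toy sketch, not code from the cited papers): the linear finite-field deterministic model discussed above can be written in a few lines, assuming the usual binary-expansion picture in which a link of gain n delivers only the top n bits of the transmitted symbol and superposition at a receiver is levelwise XOR; the helper names and the choice q = 5 are illustrative assumptions.

q = 5  # number of signal levels (bits) per symbol; an illustrative choice

def to_bits(x):
    """Binary expansion of x as a list of q bits, most significant bit first."""
    return [(x >> (q - 1 - k)) & 1 for k in range(q)]

def shift_down(bits, n):
    """A link of gain n delivers only the top n bits; they arrive at the
    bottom n levels and everything below the noise floor is lost."""
    return [0] * (q - n) + bits[:n]

def deterministic_mac(inputs, gains):
    """Received signal of a deterministic multiple-access link: levelwise
    XOR (finite-field addition) of the shifted inputs -- no carryover."""
    received = [0] * q
    for x, n in zip(inputs, gains):
        received = [r ^ b for r, b in zip(received, shift_down(to_bits(x), n))]
    return received

# Two transmitters seen with gains 4 and 2: the weaker user's bits land under
# the stronger user's low-order bits and add bit by bit without carry.
print(deterministic_mac([0b10110, 0b11000], [4, 2]))  # -> [0, 1, 0, 0, 0]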
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_0" ], "mid": [ "1971976913", "2152137554", "2070085078", "2153829573", "2336775896" ], "abstract": [ "We present a framework to study linear deterministic interference networks over finite fields. Unlike the popular linear deterministic models introduced to study Gaussian networks, we consider networks where the channel coefficients are general scalars over some extension field Fpm (scalar m-th extensionfield models), m×m diagonal matrices over Fp (m-symbol extension ground-field models), and m×m general non-singular matrices (MIMO ground field models). We use the companion matrix representation of the extension field to convert m-th extension scalar models into MIMO ground-field models where the channel matrices have special algebraic structure. For such models, we consider the 2× 2× 2 topology (two-hops two-flow) and the 3-user interference network topology. We derive achievability results and feasibility conditions for certain schemes based on the Precoding-Based Network Alignment (PBNA) approach, where intermediate nodes use random linear network coding (i.e., propagate random linear combinations of their incoming messages) and non-trivial precoding decoding is performed only at the network edges, at the sources and destinations. Furthermore, we apply this approach to the scalar 2×2×2 complex Gaussian IC with fixed channel coefficients, and show two competitive schemes outperforming other known approaches at any SNR, where we combine finite-field linear precoding decoding with lattice coding and the Compute and Forward approach at the signal level. As a side result, we also show significant advantages of vector linear network coding both in terms of feasibility probability (with random coding coefficients) and in terms of coding latency, with respect to standard scalar linear network coding, in PBNA schemes.", "In this paper, we consider a discrete memoryless state-dependent relay channel with non-causal Channel State Information (CSI). We investigate three different cases in which perfect channel states can be known non-causally: i) only to the source, ii) only to the relay or iii) both to the source and to the relay node. For these three cases we establish lower bounds on the channel capacity (achievable rates) based on using Gel'fand-Pinsker coding at the nodes where the CSI is available and using Compress-and-Forward (CF) strategy at the relay. Furthermore, for the general Gaussian relay channel with additive independent and identically distributed (i.i.d) states and noise, we obtain lower bounds on the capacity for the cases in which CSI is available at the source or at the relay. We also compare our derived bounds with the previously obtained results which were based on Decode-and-Forward (DF) strategy, and we show the cases in which our derived lower bounds outperform DF based bounds, and can achieve the rates close to the upper bound.", "We consider a state-dependent three-terminal full-duplex relay channel with the channel states noncausally available at only the source, that is, neither at the relay nor at the destination. This model has application to cooperation over certain wireless channels with asymmetric cognition capabilities and cognitive interference relay channels. We establish lower bounds on the channel capacity for both discrete memoryless (DM) and Gaussian cases. 
For the DM case, the coding scheme for the lower bound uses techniques of rate-splitting at the source, decode-and-forward (DF) relaying, and a Gel'fand-Pinsker-like binning scheme. In this coding scheme, the relay decodes only partially the information sent by the source. Due to the rate-splitting, this lower bound is better than the one obtained by assuming that the relay decodes all the information from the source, that is, full-DF. For the Gaussian case, we consider channel models in which each of the relay node and the destination node experiences on its link an additive Gaussian outside interference. We first focus on the case in which the links to the relay and to the destination are corrupted by the same interference; and then we focus on the case of independent interferences. We also discuss a model with correlated interferences. For each of the first two models, we establish a lower bound on the channel capacity. The coding schemes for the lower bounds use techniques of dirty paper coding or carbon copying onto dirty paper, interference reduction at the source and decode-and-forward relaying. The results reveal that, by opposition to carbon copying onto dirty paper and its root Costa's initial dirty paper coding (DPC), it may be beneficial in our setup that the informed source uses a part of its power to partially cancel the effect of the interference so that the uninformed relay benefits from this cancellation, and so the source benefits in turn.", "The paper investigates the effect of link delays on the capacity of relay networks. The relay-with-delay is defined as a relay channel with relay encoding delay d isin Z of units, or equivalently, a delay of units on the link from the sender to the relay, zero delay on the links from the transmitter to the receiver and from the relay to the receiver, and zero relay encoding delay. Two special cases are studied. The first is the relay-with-unlimited look-ahead, where each relay transmission can depend on its entire received sequence, and the second is the relay-without-delay, where the relay transmission can depend only on current and past received symbols, i.e., d=0. Upper and lower bounds on capacity for these two channels that are tight in some cases are presented. It is shown that the cut-set bound for the classical relay channel, corresponding to the case where d=1, does not hold for the relay-without-delay. Further, it is shown that instantaneous relaying can be optimal and can achieve higher rates than the classical cut-set bound. Capacity for the classes of degraded and semi-deterministic relay-with-unlimited-look-ahead and relay-without-delay are established. These results are then extended to the additive white Gaussian noise (AWGN) relay-with-delay case, where it is shown that for any dles0, capacity is achieved using amplify-and-forward when the channel from the sender to the relay is sufficiently weaker than the other two channels. In addition, it is shown that a superposition of amplify-and-forward and decode-and-forward can achieve higher rates than the classical cut-set bound. The relay-with-delay model is then extended to feedforward relay networks. It is shown that capacity is determined only by the relative delays of paths from the sender to the receiver and not by their absolute delays. 
A new cut-set upper bound that generalizes both the classical cut-set bound for the classical relay and the upper bound for the relay-without-delay on capacity is established.", "In this paper, we investigate the potential benefits of deploying relays in outdoor millimeter-wave (mmWave) networks. We study the coverage probability from sources to a destination for such systems aided by relays. The sources and the relays are modeled as independent homogeneous Poisson point processes (PPPs). We present a relay modeling technique for mmWave networks considering blockages and compute the density of active relays that aid the transmission. Two relay selection techniques are discussed, namely best path selection and best relay selection. For the first technique, we provide a closed form expression for end-to-end signal-to-noise ratio (SNR) and compute the best random relay path in a mmWave network using order statistics. Moreover, the maximum end-to-end SNR of random relay paths is investigated asymptotically by using extreme value theory. For the second technique, we provide a closed form expression for the best relay node having the maximum path gain. Finally, we analyze the coverage probability and transmission capacity of the network and validate them with simulation results. Our results show that deploying relays in mmWave networks can increase the coverage probability and transmission capacity of such systems." ] }
0710.3824
2953239217
Properly locating sensor nodes is an important building block for a large subset of wireless sensor networks (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does not rely on a subset of nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSN. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most @math faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most @math misbehaving sensors. On the negative side, we prove that no deterministic protocol can identify faking sensors if their number is @math . Thus our scheme is almost optimal with respect to the number of faking sensors. We discuss application of our technique in the trusted sensor model. More precisely, our results can be used to minimize the number of trusted sensors that are needed to defeat faking ones.
Relaxing the assumption of trusted nodes makes the problem more challenging, and to our knowledge, has only been investigated very recently @cite_19 . We call this model where no trusted node preexists the (or ) model. The approach of @cite_19 is randomized and consists of two phases: distance measurement and filtering. In the distance measurement phase, sensors measure their distances to their neighbors, faking sensors being allowed to corrupt the distance measurement technique. In the filtering phase each correct sensor randomly picks @math so-called pivot sensors. Next each sensor @math uses trilateration with respect to the chosen pivot sensors to compute the location of its neighbor @math . If there is a match between the announced location and the computed location, the @math link is added to the network, otherwise it is discarded. Of course, the chosen pivot sensors could be faking and lying, so the protocol may only give a probabilistic guarantee.
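For illustration only (a minimal sketch of the filtering phase summarized above, not code from @cite_19 or from this paper): the use of exactly three pivots, the tolerance, and the helper names are illustrative assumptions; since the randomly chosen pivots may themselves be faking, the check can be fooled, which is why the guarantee is only probabilistic.

import random

def trilaterate(p0, p1, p2, d0, d1, d2):
    """Estimate a position from three pivot positions (x, y) and the distances
    measured from each pivot, by subtracting the first circle equation from the
    other two and solving the resulting 2x2 linear system (Cramer's rule)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0 ** 2 - d1 ** 2 + x1 ** 2 - x0 ** 2 + y1 ** 2 - y0 ** 2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0 ** 2 - d2 ** 2 + x2 ** 2 - x0 ** 2 + y2 ** 2 - y0 ** 2
    det = a1 * b2 - a2 * b1  # zero iff the three pivots are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

def accept_link(announced, pivots, dist_to_neighbor, tolerance=1.0):
    """Filtering phase: pick three random pivots, recompute the neighbour's
    position from the measured distances, and keep the link only if the
    estimate matches the announced position within the tolerance."""
    i, j, k = random.sample(range(len(pivots)), 3)
    ex, ey = trilaterate(pivots[i], pivots[j], pivots[k],
                         dist_to_neighbor[i], dist_to_neighbor[j], dist_to_neighbor[k])
    return (ex - announced[0]) ** 2 + (ey - announced[1]) ** 2 <= tolerance ** 2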
{ "cite_N": [ "@cite_19" ], "mid": [ "2131007986" ], "abstract": [ "Wireless sensor networks are deployed for the purpose of monitoring an area of interest. Even when the sensors are properly calibrated at the time of deployment, they develop drift in their readings leading to erroneous network inferences. Based on the assumption that neighbouring sensors have correlated measurements and that the instantiations of drifts in sensors are uncorrelated, the authors present a novel algorithm for detecting and correcting sensor measurement errors. The authors use statistical modelling rather than physical relations to model the spatio-temporal cross-correlations among sensors. This in principle makes the framework presented applicable to most sensing problems. Each sensor in the network trains a support vector regression algorithm on its neighbours' corrected readings to obtain a predicted value for its future measurements. This phase is referred to here as the training phase. In the running phase, the predicted measurements are used by each node, in a recursive decentralised fashion, to self-assess its measurement and to detect and correct its drift and random error using an unscented Kalman filter. No assumptions regarding the linearity of drift or the density (closeness) of sensor deployment are made. The authors also demonstrate using real data obtained from the Intel Berkeley Research Laboratory that the proposed algorithm successfully suppresses drifts developed in sensors and thereby prolongs the effective lifetime of the network." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Operational transformation (OT) @cite_11 considers collaborative editing based on non-commutative single-character operations. To this end, OT transforms the arguments of remote operations to take into account the effects of concurrent executions. OT requires two correctness conditions @cite_11 : the transformation should enable concurrent operations to execute in either order, and furthermore, the transformation functions themselves must commute. The former condition is relatively easy to satisfy; the latter is more complex, and @cite_15 prove that all existing transformation functions violate it.
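For illustration only (a hedged sketch, not the transformation functions analyzed in the cited works): a textbook character-wise insert/insert transform, showing the first, easier condition -- both execution orders produce the same document -- whereas the second condition, that the transformation functions themselves commute, is the one the paragraph above reports existing functions violate.

from dataclasses import dataclass

@dataclass
class Insert:
    pos: int   # index in the shared string
    ch: str    # single character being inserted
    site: int  # site identifier, used only to break ties deterministically

def transform(op1: Insert, op2: Insert) -> Insert:
    """Inclusion transform of op1 against a concurrent op2: shift op1 one
    position to the right if op2 inserted at or before op1's position."""
    if op2.pos < op1.pos or (op2.pos == op1.pos and op2.site < op1.site):
        return Insert(op1.pos + 1, op1.ch, op1.site)
    return Insert(op1.pos, op1.ch, op1.site)

def apply_op(text: str, op: Insert) -> str:
    return text[:op.pos] + op.ch + text[op.pos:]

# Both execution orders converge to the same text (the easier condition);
# the harder condition additionally requires the transforms to commute.
a, b = Insert(1, "x", site=1), Insert(1, "y", site=2)
assert apply_op(apply_op("ab", a), transform(b, a)) == \
       apply_op(apply_op("ab", b), transform(a, b))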
{ "cite_N": [ "@cite_15", "@cite_11" ], "mid": [ "335169515", "2151943351" ], "abstract": [ "Operational transformation (OT) is an approach which allows to build real-time groupware tools. This approach requires correct transformation functions regarding two conditions called TP1 and TP2. Proving correctness of these transformation functions is very complex and error prone. In this paper, we show how a theorem prover can address this serious bottleneck. To validate our approach, we verifed correctness of state-of-art transformation functions de ned on strings of characters with surprising results. Counter-examples provided by the theorem prover helped us to design the tombstone transformation functions. These functions verify TP1 and TP2, preserve intentions and ensure multi-effect relationships.", "Real-time cooperative editing systems allow multiple users to view and edit the same text graphic image multimedia document at the same time for multiple sites connected by communication networks. Consistency maintenance is one of the most significant challenges in designing and implementing real-time cooperative editing systems. In this article, a consistency model, with properties of convergence, causality preservation, and intention preservation, is proposed as a framework for consistency maintenance in real-time cooperative editing systems. Moreover, an integrated set of schemes and algorithms, which support the proposed consistency model, are devised and discussed in detail. In particular, we have contributed (1) a novel generic operation transformation control algorithm for achieving intention preservation in combination with schemes for achieving convergence and causality preservation and (2) a pair of reversible inclusion and exclusion transformation algorithms for stringwise operations for text editing. An Internet-based prototype system has been built to test the feasibility of the proposed schemes and algorithms" ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
A number of papers study the advantages of commutativity for concurrency and consistency control (for instance, syn:alg:1466 and syn:1470). Systems such as Psync @cite_10 , Generalized Paxos @cite_12 , Generic Broadcast @cite_7 and IceCube @cite_2 make use of commutativity information to relax consistency or scheduling requirements. However, these works do not address the issue of achieving commutativity.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_12", "@cite_2" ], "mid": [ "2025073323", "2038609352", "2080920242", "2115806861" ], "abstract": [ "Modern cloud systems are geo-replicated to improve application latency and availability. Transactional consistency is essential for application developers; however, the corresponding concurrency control and commitment protocols are costly in a geo-replicated setting. To minimize this cost, we identify the following essential scalability properties: (i) only replicas updated by a transaction T make steps to execute T; (ii) a read-only transaction never waits for concurrent transactions and always commits; (iii) a transaction may read object versions committed after it started; and (iv) two transactions synchronize with each other only if their writes conflict. We present Non-Monotonic Snapshot Isolation (NMSI), the first strong consistency criterion to allow implementations with all four properties. We also present a practical implementation of NMSI called Jessy, which we compare experimentally against a number of well-known criteria. Our measurements show that the latency and throughput of NMSI are comparable to the weakest criterion, read-committed, and between two to fourteen times faster than well-known strong consistencies.", "Cooperative vehicle safety (CVS) systems operate based on broadcast of vehicle position and safety information to neighboring cars. The communication medium of CVS is a vehicular ad-hoc network. One of the main challenges in large scale deployment of CVS systems is the issue of scalability. To address the scalability problem, several congestion control methods have been proposed and are currently under field study. These algorithms adapt transmission rate and power based on network measures such as channel busy ratio. We examine two such algorithms and study their dynamic behavior in time and space to evaluate stability (in time) and fairness (in space) properties of these algorithms. We present stability conditions and evaluate stability and fairness of the algorithms through simulation experiments. Results show that there is a trade-off between fast convergence, temporal stability and spatial fairness. The proper ranges of parameters for achieving stability are presented for the discussed algorithms. Stability is verified for all typical highway density cases for static traffic as well as real scenarios. Fairness is shown to be naturally achieved for some algorithms and its analysis is under study in another work of us. Under the same conditions other algorithms may have problem to maintain fairness in space. We have shown that this can be resolved by a distributed measurement of CBR and is verified.", "As collaboration over the Internet becomes an everyday affair, it is increasingly important to provide high quality of interactivity. Distributed applications can replicate collaborative objects at every site for the purpose of achieving high interactivity. Replication, however, has a fatal weakness that it is difficult to maintain consistency among replicas. This paper introduces operation commutativity as a key principle in designing operations in order to manage distributed replicas consistent. In addition, we suggest effective schemes that make operations commutative using the relations of objects and operations. 
Finally, we apply our approaches to some simple replicated abstract data types, and achieve their consistency without serialization and locking.", "The tradeoffs between consistency, performance, and availability are well understood. Traditionally, however, designers of replicated systems have been forced to choose from either strong consistency guarantees or none at all. This paper explores the semantic space between traditional strong and optimistic consistency models for replicated services. We argue that an important class of applications can tolerate relaxed consistency, but benefit from bounding the maximum rate of inconsistent access in an application-specific manner. Thus, we develop a conit-based continuous consistency model to capture the consistency spectrum using three application-independent metrics, numerical error, order error, and staleness. We then present the design and implementation of TACT, a middleware layer that enforces arbitrary consistency bounds among replicas using these metrics. We argue that the TACT consistency model can simultaneously achieve the often conflicting goals of generality and practicality by describing how a broad range of applications can express their consistency semantics using TACT and by demonstrating that application-independent algorithms can efficiently enforce target consistency levels. Finally, we show that three replicated applications running across the Internet demonstrate significant semantic and performance benefits from using our framework." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
Weihl @cite_1 distinguishes between forward and backward commutativity. They differ only when operations fail their pre-condition. In this work, we consider only operations that succeed at the submission site, and ensure by design that they won't fail at replay sites.
{ "cite_N": [ "@cite_1" ], "mid": [ "2080634500" ], "abstract": [ "In this paper, we study the performance characteristics of simple load sharing algorithms for heterogeneous distributed systems. We assume that nonnegligible delays are encountered in transferring jobs from one node to another. We analyze the effects of these delays on the performance of two threshold-based algorithms called Forward and Reverse. We formulate queuing theoretic models for each of the algorithms operating in heterogeneous systems under the assumption that the job arrival process at each node in Poisson and the service times and job transfer times are exponentially distributed. The models are solved using the Matrix-Geometric solution technique. These models are used to study the effects of different parameters and algorithm variations on the mean job response time: e.g., the effects of varying the thresholds, the impact of changing the probe limit, the impact of biasing the probing, and the optimal response times over a large range of loads and delays. Wherever relevant, the results of the models are compared with the M M 1 model, representing no load balancing (hereafter referred to as NLB), and the M M K model, which is an achievable lower bound (hereafter referred to as LB)." ] }