Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
@cite_4 were the first to suggest the CRDT approach. They give the example of an array with a slot assignment operation. To make concurrent assignments commute, they propose a deterministic procedure (based on vector clocks) whereby one takes precedence over the other.
{ "cite_N": [ "@cite_4" ], "mid": [ "2083604420" ], "abstract": [ "Following the introduction of contention resolution diversity slotted ALOHA (CRDSA), a number of variants of the scheme have been proposed in literature. A major drawback of these slotted random access (RA) schemes is related to the need to keep slot synchronization among all transmitters. The volume of signaling generated to maintain transmitters' slot synchronization is impractical for large networks. In this paper, we describe in detail asynchronous contention resolution diversity ALOHA (ACRDA), which represents the evolution of the CRDSA RA scheme. ACRDA provides better throughput performance with reduced demodulator complexity and lower transmission latency than its predecessor while allowing truly asynchronous access to the shared medium. The performance of the ACRDA protocol is evaluated via mathematical analysis and computer simulations and is compared with that of CRDSA." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
This is similar to the well-known Last-Writer Wins algorithm, used in shared file systems. Each file replica is timestamped with the time it was last written. Timestamps are consistent with happens-before @cite_13 . When comparing two versions of the file, the one with the highest timestamp takes precedence. This is correct with respect to successive writes related by happens-before, and constitutes a simple precedence rule for concurrent writes.
{ "cite_N": [ "@cite_13" ], "mid": [ "2100265791" ], "abstract": [ "We study the problem of implementing a replicated data store with atomic semantics for non self-verifying data in a system of n servers that are subject to Byzantine failures. We present a solution that significantly improves over previously proposed solutions. Timestamps used by our solution cannot be forced to grow arbitrarily large by faulty servers as is the case for other solutions. Instead, timestamps grow no faster than logarithmically in the number of operations. We achieve this saving by defining and providing an implementation for non-skipping timestamps, which are guaranteed not to skip any value. Non-skipping timestamps allow us to reduce the space requirements for readers to O(max|Q|), Where |Q| ≤ n. This is a significant improvement over the best previously known solution which requires O(fn) space, where f is the maximum number of faulty servers in the system. The solution we present has a low write-load if f is small compared to n, whereas the previously proposed solution always has a high constant write-load." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
In Lamport's replicated state machine approach @cite_13 , every replica executes the same operations in the same order. This total order is computed either by a consensus algorithm such as Paxos @cite_18 or, equivalently, by using an atomic broadcast mechanism @cite_0 . Such algorithms can tolerate faults. However, they are complex and scale poorly; consensus lies on the critical execution path, adding latency to every operation.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_13" ], "mid": [ "2067740651", "2129467152", "1814543774" ], "abstract": [ "This paper describes the design and implementation of Egalitarian Paxos (EPaxos), a new distributed consensus algorithm based on Paxos. EPaxos achieves three goals: (1) optimal commit latency in the wide-area when tolerating one and two failures, under realistic conditions; (2) uniform load balancing across all replicas (thus achieving high throughput); and (3) graceful performance degradation when replicas are slow or crash. Egalitarian Paxos is to our knowledge the first protocol to achieve the previously stated goals efficiently---that is, requiring only a simple majority of replicas to be non-faulty, using a number of messages linear in the number of replicas to choose a command, and committing commands after just one communication round (one round trip) in the common case or after at most two rounds in any case. We prove Egalitarian Paxos's properties theoretically and demonstrate its advantages empirically through an implementation running on Amazon EC2.", "There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is un-necessary when there is no contention, and Q U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises. Furthermore, HQ uses only 3f + 1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q U analytically and experimentally. Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases. Our results show that both HQ and our new implementation of BFT scale as f increases; additionally our hybrid approach of using BFT to handle contention works well.", "We present a protocol for general state machine replication - a method that provides strong consistency - that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements." ] }
0710.1784
2161119415
Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic neglected previously. We show that the replicas of any data type for which concurrent operations commute converges to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions with very low cost. We identify a number of approaches and techniques to ensure commutativity. We re-use some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
In the treedoc design, common edit operations execute optimistically, with no latency; it uses consensus in the background only. Previously, Golding relied on background consensus for garbage collection @cite_21 . We are not aware of previous instances of background consensus for structural operations, nor of aborting consensus when it conflicts with essential operations.
{ "cite_N": [ "@cite_21" ], "mid": [ "2589531210" ], "abstract": [ "We study consensus processes on the complete graph of n nodes. Initially, each node supports one up to n different opinions. Nodes randomly and in parallel sample the opinions of constantly many nodes. Based on these samples, they use an update rule to change their own opinion. The goal is to reach consensus, a configuration where all nodes support the same opinion. We compare two well-known update rules: 2-Choices and 3-Majority. In the former, each node samples two nodes and adopts their opinion if they agree. In the latter, each node samples three nodes: If an opinion is supported by at least two samples the node adopts it, otherwise it randomly adopts one of the sampled opinions. Known results for these update rules focus on initial configurations with a limited number of colors (say n1 3), or typically assume a bias, where one opinion has a much larger support than any other. For such biased configurations, the time to reach consensus is roughly the same for 2-Choices and 3-Majority. Interestingly, we prove that this is no longer true for configurations with a large number of initial colors. In particular, we show that 3-Majority reaches consensus with high probability in O(n3 4 · log7 8 n) rounds, while 2-Choices can need Ω(n log n) rounds. We thus get the first unconditional sublinear bound for 3-Majority and the first result separating the consensus time of these processes. Along the way, we develop a framework that allows a fine-grained comparison between consensus processes from a specific class. We believe that this framework might help to classify the performance of more consensus processes." ] }
0710.0528
2035639486
In the analysis of logic programs, abstract domains for detecting sharing and linearity information are widely used. Devising abstract unification algorithms for such domains has proved to be rather hard. At the moment, the available algorithms are correct but not optimal, i.e., they cannot fully exploit the information conveyed by the abstract domains. In this paper, we define a new (infinite) domain ShLin^ω which can be thought of as a general framework from which other domains can be easily derived by abstraction. ShLin^ω makes the interaction between sharing and linearity explicit. We provide a constructive characterization of the optimal abstract unification operator on ShLin^ω and we lift it to two well-known abstractions of ShLin^ω. Namely, to the classical Sharing × Lin abstract domain and to the more precise ShLin^2 abstract domain by Andy King. In the case of
In most of the work combining sharing and linearity, freeness information is included in the abstract domain. In fact, freeness may improve the precision of the aliasing component and it is also interesting by itself, for example in the parallelization of logic programs @cite_18 . In this comparison, we do not consider the freeness component.
{ "cite_N": [ "@cite_18" ], "mid": [ "1970730703" ], "abstract": [ "Abstract Sharing information is useful in specialising, optimising and parallelising logic programs and thus sharing analysis is an important topic of both abstract interpretation and logic programming. Sharing analyses infer which pairs of program variables can never be bound to terms that contain a common variable. We generalise a classic pair-sharing analysis from Herbrand unification to trace sharing over rational tree constraints. This is useful for reasoning about programs written in SICStus and Prolog-III because these languages use rational tree unification as the default equation solver." ] }
0710.0528
2035639486
In the analysis of logic programs, abstract domains for detecting sharing and linearity information are widely used. Devising abstract unification algorithms for such domains has proved to be rather hard. At the moment, the available algorithms are correct but not optimal, i.e., they cannot fully exploit the information conveyed by the abstract domains. In this paper, we define a new (infinite) domain ShLin^ω which can be thought of as a general framework from which other domains can be easily derived by abstraction. ShLin^ω makes the interaction between sharing and linearity explicit. We provide a constructive characterization of the optimal abstract unification operator on ShLin^ω and we lift it to two well-known abstractions of ShLin^ω. Namely, to the classical Sharing × Lin abstract domain and to the more precise ShLin^2 abstract domain by Andy King. In the case of
The following is a counterexample to the optimality of the abstract unification in @cite_12 , in the case of finite trees, when pair sharing is equipped with @math or @math .
{ "cite_N": [ "@cite_12" ], "mid": [ "1970730703" ], "abstract": [ "Abstract Sharing information is useful in specialising, optimising and parallelising logic programs and thus sharing analysis is an important topic of both abstract interpretation and logic programming. Sharing analyses infer which pairs of program variables can never be bound to terms that contain a common variable. We generalise a classic pair-sharing analysis from Herbrand unification to trace sharing over rational tree constraints. This is useful for reasoning about programs written in SICStus and Prolog-III because these languages use rational tree unification as the default equation solver." ] }
0710.1499
2104084528
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v \alpha_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, and the support sets $V_i = \{v : \alpha_{iv} > 0\}$, $V_k = \{v : c_{kv} > 0\}$, $I_v = \{i : \alpha_{iv} > 0\}$ and $K_v = \{k : c_{kv} > 0\}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $H$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $H$.
Kuhn et al. @cite_14 present a distributed approximation scheme for packing and covering LPs. The algorithm provides a local approximation scheme for some families of packing and covering LPs. For example, let @math for all @math . Then for each @math , @math and @math , there is a local algorithm with some constant horizon @math which achieves an @math -approximation. Our work shows that such local approximation schemes do not exist for max-min LPs.
{ "cite_N": [ "@cite_14" ], "mid": [ "2890196309" ], "abstract": [ "Several algorithms with an approximation guarantee of @math are known for the Set Cover problem, where @math is the number of elements. We study a generalization of the Set Cover problem, called the Partition Set Cover problem. Here, the elements are partitioned into @math , and we are required to cover at least @math elements from each color class @math , using the minimum number of sets. We give a randomized LP-rounding algorithm that is an @math approximation for the Partition Set Cover problem. Here @math denotes the approximation guarantee for a related Set Cover instance obtained by rounding the standard LP. As a corollary, we obtain improved approximation guarantees for various set systems for which @math is known to be sublogarithmic in @math . We also extend the LP rounding algorithm to obtain @math approximations for similar generalizations of the Facility Location type problems. Finally, we show that many of these results are essentially tight, by showing that it is NP-hard to obtain an @math -approximation for any of these problems." ] }
0710.1499
2104084528
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v \alpha_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, and the support sets $V_i = \{v : \alpha_{iv} > 0\}$, $V_k = \{v : c_{kv} > 0\}$, $I_v = \{i : \alpha_{iv} > 0\}$ and $K_v = \{k : c_{kv} > 0\}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $H$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $H$.
Another distributed approximation scheme by Kuhn et al. @cite_14 forms several decompositions of @math into subgraphs, solves the optimisation problem optimally in each subgraph, and combines the solutions. However, the algorithm is not a local approximation algorithm in the strict sense used here: to obtain any constant approximation ratio, the local horizon must grow (logarithmically) with the number of variables. Bartal et al. @cite_2 likewise present a distributed but not local approximation scheme for the packing LP.
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "1967595345", "1572977974" ], "abstract": [ "We present two new approximation algorithms for the problem of finding a k-node connected spanning subgraph (directed or undirected) of minimum cost. The best known approximation guarantees for this problem were @math for both directed and undirected graphs, and @math for undirected graphs with @math , where @math is the number of nodes in the input graph. Our first algorithm has approximation ratio @math , which is @math except for very large values of @math , namely, @math . This algorithm is based on a new result on @math -connected @math -critical graphs, which is of independent interest in the context of graph theory. Our second algorithm uses the primal-dual method and has approximation ratio @math for all values of @math . Combining these two gives an algorithm with approximation ratio @math , which asymptotically improves the best known approximation guarantee for directed graphs for all values of @math , and for undirected graphs for @math . Moreover, this is the first algorithm that has an approximation guarantee better than @math for all values of @math . Our approximation ratio also provides an upper bound on the integrality gap of the standard LP-relaxation.", "A local-ratio theorem for approximating the weighted vertex cover problem is presented. It consists of reducing the weights of vertices in certain subgraphs and has the effect of local-approximation. Putting together the Nemhauser-Trotter local optimization algorithm and the local-ratio theorem yields several new approximation techniques which improve known results from time complexity, simplicity and performance-ratio point of view. The main approximation algorithm guarantees a ratio of where K is the smallest integer s.t. † This is an improvement over the currently known ratios, especially for a “practical” number of vertices (e.g. for graphs which have less than 2400, 60000, 10 12 vertices the ratio is bounded by 1.75, 1.8, 1.9 respectively)." ] }
0710.1499
2104084528
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v \alpha_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, and the support sets $V_i = \{v : \alpha_{iv} > 0\}$, $V_k = \{v : c_{kv} > 0\}$, $I_v = \{i : \alpha_{iv} > 0\}$ and $K_v = \{k : c_{kv} > 0\}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $H$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $H$.
Kuhn and Wattenhofer @cite_9 present a family of local, constant-factor approximation algorithms for the covering LP obtained as the LP relaxation of the minimum dominating set problem. Kuhn et al. @cite_3 present a local, constant-factor approximation of packing and covering LPs in unit-disk graphs.
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "2127622665", "2144926522" ], "abstract": [ "Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ2 k log Δ|DSOPT|) in O(k2) rounds where each node has to send O(k2Δ) messages of size O(logΔ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.", "The authors give the first constant factor approximation algorithm for the facility location problem with nonuniform, hard capacities. Facility location problems have received a great deal of attention in recent years. Approximation algorithms have been developed for many variants. Most of these algorithms are based on linear programming, but the LP techniques developed thus far have been unsuccessful in dealing with hard capacities. A local-search based approximation algorithm (M. , 1998; F.A. Chudak and D.P. Williamson, 1999) is known for the special case of hard but uniform capacities. We present a local-search heuristic that yields an approximation guarantee of 9 + spl epsi for the case of nonuniform hard capacities. To obtain this result, we introduce new operations that are natural in this context. Our proof is based on network flow techniques." ] }
0710.1499
2104084528
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v \alpha_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, and the support sets $V_i = \{v : \alpha_{iv} > 0\}$, $V_k = \{v : c_{kv} > 0\}$, $I_v = \{i : \alpha_{iv} > 0\}$ and $K_v = \{k : c_{kv} > 0\}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $H$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $H$.
There are few examples of local algorithms which approximate linear problems beyond packing and covering LPs. Kuhn et al. @cite_11 study an LP relaxation of the @math -fold dominating set problem and obtain a local constant-factor approximation for bounded-degree graphs.
{ "cite_N": [ "@cite_11" ], "mid": [ "2127622665" ], "abstract": [ "Finding a small dominating set is one of the most fundamental problems of traditional graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary parameter k and maximum degree Δ, our algorithm computes a dominating set of expected size O(kΔ2 k log Δ|DSOPT|) in O(k2) rounds where each node has to send O(k2Δ) messages of size O(logΔ). This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds." ] }
0710.1499
2104084528
A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v \alpha_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \ge 0$, and the support sets $V_i = \{v : \alpha_{iv} > 0\}$, $V_k = \{v : c_{kv} > 0\}$, $I_v = \{i : \alpha_{iv} > 0\}$ and $K_v = \{k : c_{kv} > 0\}$ have bounded size. In the distributed setting, each agent $v$ is responsible for choosing the value of $x_v$, and the communication network is a hypergraph $H$ where the sets $V_k$ and $V_i$ constitute the hyperedges. We present inapproximability results for a wide range of structural assumptions; for example, even if $|V_i|$ and $|V_k|$ are bounded by some constants larger than 2, there is no local approximation scheme. To contrast the negative results, we present a local approximation algorithm which achieves good approximation ratios if we can bound the relative growth of the vertex neighbourhoods in $H$.
For combinatorial problems, there are both negative @cite_10 @cite_0 and positive @cite_7 @cite_11 @cite_9 @cite_16 @cite_1 results on the applicability of local algorithms.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_0", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "1494455882", "1810595280", "2011058337", "2017345786", "2000217912", "2084803275", "2125889434" ], "abstract": [ "In this chapter we review the main results known on local search algorithms with worst case guarantees. We consider classical combinatorial optimization problems: satisfiability problems, traveling salesman and quadratic assignment problems, set packing and set covering problems, maximum independent set, maximum cut, several facility location related problems and finally several scheduling problems. A replica placement problem in a distributed file systems is also considered as an example of the use of a local search algorithm in a distributed environment. For each problem we have provided the neighborhoods used along with approximation results. Proofs when too technical are omitted, but often sketch of proofs are provided.", "The algorithm for Lovasz Local Lemma by Moser and Tardos gives a constructive way to prove the existence of combinatorial objects that satisfy a system of constraints. We present an alternative probabilistic analysis of the algorithm that does not involve reconstructing the history of the algorithm from the witness tree. We apply our technique to improve the best known upper bound to acyclic chromatic index. Specifically we show that a graph with maximum degree Δ has an acyclic proper edge coloring with at most ⌈3.74(Δ − 1)⌉ + 1 colors, whereas the previously known best bound was 4(Δ − 1). The same technique is also applied to improve corresponding bounds for graphs with bounded girth. An interesting aspect of this application is that the probability of the \"undesirable\" events do not have a uniform upper bound, i.e. it constitutes a case of the asymmetric Lovasz Local Lemma.", "We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal nonincremental universal search to build an incremental universal learner that is able to improve itself through experience. The initial bias is embodied by a task-dependent probability distribution on possible program prefixes. Prefixes are self-delimiting and executed in online fashion while being generated. They compute the probabilities of their own possible continuations. Let p^n denote a found prefix solving the first n tasks. It may exploit previously stored solutions p^i, i >n, by calling them as subprograms, or by copying them and editing the copies before applying them. We provide equal resources for two searches that run in parallel until p^ n+1 is discovered and stored. The first search is exhaustive; it systematically tests all possible prefixes on all tasks up to n+1. The second search is much more focused; it only searches for prefixes that start with p^n, and only tests them on task n+1, which is safe, because we already know that such prefixes solve all tasks up to n. 
Both searches are depth-first and bias-optimal: the branches of the search trees are program prefixes, and backtracking is triggered once the sum of the runtimes of the current prefix on all current tasks exceeds the prefix probability multiplied by the total search time so far. In illustrative experiments, our self-improver becomes the first general system that learns to solve all n disk Towers of Hanoi tasks (solution size 2^n-1) for n up to 30, profiting from previously solved, simpler tasks involving samples of a simple context free language.", "The purpose of this paper is a study of computation that can be done locally in a distributed network, where \"locally\" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following: There are nontrivial LCL problems that have local algorithms. There is a variant of the dining philosophers problem that can be solved locally. Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm. It is undecidable, in general, whether a given LCL has a local algorithm. However, it is decidable whether a given LCL has an algorithm that operates in a given time @math . Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).", "For certain subclasses of NP, @math P, or #P characterized by local constraints, it is known that if there exist any problems within that subclass that are not polynomial time computable, then all the problems in the subclass are NP-complete, @math P-complete, or #P-complete. Such dichotomy results have been proved for characterizations such as constraint satisfaction problems and directed and undirected graph homomorphism problems, often with additional restrictions. Here we give a dichotomy result for the more expressive framework of Holant problems. For example, these additionally allow for the expression of matching problems, which have had pivotal roles in the development of complexity theory. As our main result we prove the dichotomy theorem that, for the class @math P, every set of symmetric Holant signatures of any arities that is not polynomial time computable is @math P-complete. The result exploits some special properties of the class @math P and characterizes four distinct tractable ...", "We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly sub cubic if it runs in O(n^ 3- ( M)) time for some > 0. We define a notion of sub cubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under sub cubic reductions. Namely, the following weighted problems either all have truly sub cubic algorithms, or none of them do: - The all-pairs shortest paths problem (APSP). - Detecting if a weighted graph has a triangle of negative total edge weight. - Listing up to n^ 2.99 negative triangles in an edge-weighted graph. - Finding a minimum weight cycle in a graph of non-negative edge weights. - The replacement paths problem in an edge-weighted digraph. - Finding the second shortest simple path between two nodes in an edge-weighted digraph. - Checking whether a given matrix defines a metric. - Verifying the correctness of a matrix product over the ( , +)-semiring. 
Therefore, if APSP cannot be solved in n^ 3- time for any > 0, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on sub cubic algorithms for all-pairs path problems, since it now suffices to give appropriate sub cubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.", "The results of this paper can be stated in three equivalent ways—in terms of the sparse recovery problem, the error-correction problem, and the problem of existence of certain extremal (neighborly) polytopes. Error-correcting codes are used in modern technology to protect information from errors. Information is formed by finite words over some alphabet F. The encoder transforms an n-letter word x into an m-letter word y with m>n . The decoder must be able to recover x correctly when up to r letters of y are corrupted in any way. Such an encoder-decoder pair is called an (n, m, r)-error-correcting code. Development of algorithmically efficient error correcting codes has attracted attention of engineers, computer scientists, and applied mathematicians for the past five decades. Known constructions involve deep algebraic and combinatorial methods, see [34, 35, 36]. This paper develops an approach to error-correcting codes from the viewpoint of geometric functional analysis (asymptotic convex geometry). It thus belongs to a common ground of coding theory, signal processing, combinatorial geometry, and geometric functional analysis. Our argument, outlined in Section 3, may be of independent interest in geometric functional analysis. Our main focus will be on words over the alphabet F = R or C. In applications, these words may be formed of the coefficients of some signal (such as image or audio)" ] }
0710.2296
2124886183
We say that a graph $G=(V,E)$ on $n$ vertices is a $\beta$-expander for some constant $\beta>0$ if every $U \subseteq V$ of cardinality $|U| \le n/2$ satisfies $|N_G(U)| \ge \beta|U|$, where $N_G(U)$ denotes the neighborhood of $U$. In this work we explore the process of deleting vertices of a $\beta$-expander independently at random with probability $n^{-\alpha}$ for some constant $\alpha>0$, and study the properties of the resulting graph. Our main result states that as $n$ tends to infinity, the deletion process performed on a $\beta$-expander graph of bounded degree will result with high probability in a graph composed of a giant component containing $n-o(n)$ vertices that is in itself an expander graph, and constant size components. We proceed by applying the main result to expander graphs with a positive spectral gap. In the particular case of $(n,d,\lambda)$-graphs, that are such expanders, we compute the values of $\alpha$, under additional constraints on the graph, for which with high probability the resulting graph will stay connected, or will be composed of a giant component and isolated vertices. As a graph sampled from the uniform probability space of $d$-regular graphs with high probability is an expander and meets the additional constraints, this result strengthens a recent result due to Greenhill, Holt and Wormald about vertex percolation on random $d$-regular graphs. We conclude by showing that performing the above described deletion process on graphs that expand sub-linear sets by an unbounded expansion ratio, with high probability results in a connected expander graph.
The process of random deletion of vertices of a graph has received rather limited attention, mainly in the context of faulty storage (see e.g. @cite_0 ), communication networks, and distributed computing. For instance, the main motivation of @cite_12 is the SWAN peer-to-peer network @cite_7 , whose topology possesses some properties of @math -regular graphs and may contain faulty nodes. Other works are mainly interested in connectivity and routing in the resulting graph after performing (possibly adversarial) vertex deletions on some prescribed graph topologies.
{ "cite_N": [ "@cite_0", "@cite_7", "@cite_12" ], "mid": [ "2952961426", "2568950526", "2058466367" ], "abstract": [ "Motivated by low energy consumption in geographic routing in wireless networks, there has been recent interest in determining bounds on the length of edges in the Delaunay graph of randomly distributed points. Asymptotic results are known for random networks in planar domains. In this paper, we obtain upper and lower bounds that hold with parametric probability in any dimension, for points distributed uniformly at random in domains with and without boundary. The results obtained are asymptotically tight for all relevant values of such probability and constant number of dimensions, and show that the overhead produced by boundary nodes in the plane holds also for higher dimensions. To our knowledge, this is the first comprehensive study on the lengths of long edges in Delaunay graphs", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "We investigate the following vertex percolation process. Starting with a random regular graph of constant degree, delete each vertex independently with probability p, where p=n^-^@a and @[email protected](n) is bounded away from 0. We show that a.a.s. the resulting graph has a connected component of size n-o(n) which is an expander, and all other components are trees of bounded size. Sharper results are obtained with extra conditions on @a. These results have an application to the cost of repairing a certain peer-to-peer network after random failures of nodes." ] }
0709.2252
2953139907
We propose Otiy, a node-centric location service that limits the impact of location updates generate by mobile nodes in IEEE802.11-based wireless mesh networks. Existing location services use node identifiers to determine the locator (aka anchor) that is responsible for keeping track of a node's location. Such a strategy can be inefficient because: (i) identifiers give no clue on the node's mobility and (ii) locators can be far from the source destination shortest path, which increases both location delays and bandwidth consumption. To solve these issues, Otiy introduces a new strategy that identifies nodes to play the role of locators based on the likelihood of a destination to be close to these nodes- i.e., locators are identified depending on the mobility pattern of nodes. Otiy relies on the cyclic mobility patterns of nodes and creates a slotted agenda composed of a set of predicted locations, defined according to the past and present patterns of mobility. Correspondent nodes fetch this agenda only once and use it as a reference for identifying which locators are responsible for the node at different points in time. Over a period of about one year, the weekly proportion of nodes having at least 50 of exact location predictions is in average about 75 . This proportion increases by 10 when nodes also consider their closeness to the locator from only what they know about the network.
Tabbane was one of the first to introduce node (mobility) profiling to improve location management. In @cite_14 , the profiling is performed by the network and shared with the node's subscriber identity module. Thanks to this profile, for any period of time @math the system can derive a list of areas where the node could be, ordered by decreasing probability of the node being in each area. Each probability is given by a function with several parameters such as the time, the mobility pattern, the last known location, the weather, etc. As long as the node remains in one of these areas, it does not update its location; when the system needs to locate it, it pages the areas of the list sequentially. Two notions are shared with our approach: (i) node profiling, although in Otiy it is the nodes themselves that build their profiles, and (ii) the relation to time. However, in Otiy the time periods are predefined and only one area (anchor) is assigned to each time slot.
{ "cite_N": [ "@cite_14" ], "mid": [ "2166617227" ], "abstract": [ "A distributed mobility management scheme using a class of uniform quorum systems (UQS) is proposed for ad hoc networks. In the proposed scheme, location databases are stored in the network nodes themselves, which form a self-organizing virtual backbone within the flat network structure. The databases are dynamically organized into quorums, every two of which intersect at a constant number of databases. Upon location update or call arrival, a mobile's location information is written to or read from all the databases of a quorum, chosen in a nondeterministic manner. Compared with a conventional scheme [such as the use of home location register (HLR)] with fixed associations, this scheme is more suitable for ad hoc networks, where the connectivity of the nodes with the rest of the network can be intermittent and sporadic and the databases are relatively unstable. We introduce UQS, where the size of the quorum intersection is a design parameter that can be tuned to adapt to the traffic and mobility patterns of the network nodes. We propose the construction of UQS through the balanced incomplete block designs. The average cost, due to call loss and location updates using such systems, is analyzed in the presence of database disconnections. Based on the average cost, we investigate the tradeoff between the system reliability and the cost of location updates in the UQS scheme. The problem of optimizing the quorum size under different network traffic and mobility patterns is treated numerically. A dynamic and distributed HLR scheme, as a limiting case of the UQS, is also analyzed and shown to be suboptimal in general. It is also shown that partitioning of the network is sometimes necessary to reduce the cost of mobility management." ] }
0709.2252
2953139907
We propose Otiy, a node-centric location service that limits the impact of location updates generate by mobile nodes in IEEE802.11-based wireless mesh networks. Existing location services use node identifiers to determine the locator (aka anchor) that is responsible for keeping track of a node's location. Such a strategy can be inefficient because: (i) identifiers give no clue on the node's mobility and (ii) locators can be far from the source destination shortest path, which increases both location delays and bandwidth consumption. To solve these issues, Otiy introduces a new strategy that identifies nodes to play the role of locators based on the likelihood of a destination to be close to these nodes- i.e., locators are identified depending on the mobility pattern of nodes. Otiy relies on the cyclic mobility patterns of nodes and creates a slotted agenda composed of a set of predicted locations, defined according to the past and present patterns of mobility. Correspondent nodes fetch this agenda only once and use it as a reference for identifying which locators are responsible for the node at different points in time. Over a period of about one year, the weekly proportion of nodes having at least 50 of exact location predictions is in average about 75 . This proportion increases by 10 when nodes also consider their closeness to the locator from only what they know about the network.
In @cite_1 , Wu et al. mine each node's mobility behaviour (the mining is performed by the nodes themselves) from its long-term mobility history. From this information they evaluate the time-varying probability of the node-defined regions, and the prevalence of a region over time is determined through a cost model. Finally, they obtain a mobility vector @math which defines the region to be paged as a function of time. The location-update and paging schemes are approximately the same as in the proposals presented above. Here, the prevalence of a particular area over time is more flexible and accurate than in our scheme, but the algorithm is more complex. Furthermore, the mobility vector can be much longer than a division into time slots of equal duration.
{ "cite_N": [ "@cite_1" ], "mid": [ "1533191377" ], "abstract": [ "We propose a new location tracking strategy called behavior-based strategy (BBS) based on each mobile's moving behavior. With the help of data mining technologies the moving behavior of each mobile could be mined from long-term collection of the mobile's moving logs. From the moving behavior of each mobile, we first estimate the time-varying probability of the mobile and then the optimal paging area of each time region is derived. To reduce unnecessary computation, we consider the location tracking and computational cost and then derive a cost model. A heuristics is proposed to minimize the cost model through finding the appropriate moving period checkpoints of each mobile. The experimental results show our strategy outperforms fixed paging area strategy currently used in the GSM system and time-based strategy for highly regular moving mobiles." ] }
0709.0170
2919403758
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ G of G with @math and an n-vertex outerplanar graph H and a drawing δ H of H with @math . Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
Untangling was first investigated for the @math , following the question by Watanabe @cite_1 of whether @math . The answer turned out to be negative: Pach and Tardos @cite_10 showed, by a probabilistic argument, that @math . They also showed that @math by applying the Erdős--Szekeres theorem to the sequence of the indices of the vertices of @math in clockwise order around some specific point. Cibulka @cite_3 recently improved that lower bound to @math by applying the Erdős--Szekeres theorem not once but @math times.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_3" ], "mid": [ "2014466831", "2951036212", "2889042315" ], "abstract": [ "Untangling is a process in which some vertices in a drawing of a planar graph are moved to obtain a straight-line plane drawing. The aim is to move as few vertices as possible. We present an algorithm that untangles the cycle graph C n while keeping Ω(n 2 3) vertices fixed. For any connected graph G, we also present an upper bound on the number of fixed vertices in the worst case. The bound is a function of the number of vertices, maximum degree, and diameter of G. One consequence is that every 3-connected planar graph has a drawing δ such that at most O((nlog n)2 3) vertices are fixed in every untangling of δ.", "Indyk and Sidiropoulos (2007) proved that any orientable graph of genus @math can be probabilistically embedded into a graph of genus @math with constant distortion. Viewing a graph of genus @math as embedded on the surface of a sphere with @math handles attached, Indyk and Sidiropoulos' method gives an embedding into a distribution over planar graphs with distortion @math , by iteratively removing the handles. By removing all @math handles at once, we present a probabilistic embedding with distortion @math for both orientable and non-orientable graphs. Our result is obtained by showing that the nimum-cut graph of Erickson and Har Peled (2004) has low dilation, and then randomly cutting this graph out of the surface using the Peeling Lemma of Lee and Sidiropoulos (2009).", "Oliveira conjectured that the order of the mixing time of the exclusion process with @math -particles on an arbitrary @math -vertex graph is at most that of the mixing-time of @math independent particles. We verify this up to a constant factor for @math -regular graphs when each edge rings at rate @math in various cases: (1) when @math or (2) when @math the spectral-gap of a single walk is @math and @math or (3) when @math for some constant @math . In these cases our analysis yields a probabilistic proof of a weaker version of Aldous' famous spectral-gap conjecture (resolved by ). We also prove a general bound which is at worst @math , which is within a @math factor from Oliveira's conjecture when @math . As applications we get new mixing bounds: (a) @math for expanders, (b) order @math for the hypercube @math and (c) order @math for vertex-transitive graphs of moderate growth and for supercritical percolation on a fixed dimensional torus." ] }
0709.0170
2919403758
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ G of G with @math and an n-vertex outerplanar graph H and a drawing δ H of H with @math . Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
Verbitsky @cite_22 investigated planar graphs of higher connectivity. He proved linear upper bounds on @math for three- and four-connected planar graphs. Cibulka @cite_3 gave, for any planar graph @math , an upper bound on @math that is a function of the number of vertices, the maximum degree, and the diameter of @math . This latter bound implies, in particular, that @math for any three-connected planar graph @math and that any graph @math such that @math for some @math must have a vertex of degree @math .
{ "cite_N": [ "@cite_22", "@cite_3" ], "mid": [ "2161190897", "2034501275" ], "abstract": [ "We present algorithmic and hardness results for network design problems with degree or order constraints. The first problem we consider is the Survivable Network Design problem with degree constraints on vertices. The objective is to find a minimum cost subgraph which satisfies connectivity requirements between vertices and also degree upper bounds @math on the vertices. This includes the well-studied Minimum Bounded Degree Spanning Tree problem as a special case. Our main result is a @math -approximation algorithm for the edge-connectivity Survivable Network Design problem with degree constraints, where the cost of the returned solution is at most twice the cost of an optimum solution (satisfying the degree bounds) and the degree of each vertex @math is at most @math . This implies the first constant factor (bicriteria) approximation algorithms for many degree constrained network design problems, including the Minimum Bounded Degree Steiner Forest problem. Our results also extend to directed graphs and provide the first constant factor (bicriteria) approximation algorithms for the Minimum Bounded Degree Arborescence problem and the Minimum Bounded Degree Strongly @math -Edge-Connected Subgraph problem. In contrast, we show that the vertex-connectivity Survivable Network Design problem with degree constraints is hard to approximate, even when the cost of every edge is zero. A striking aspect of our algorithmic result is its simplicity. It is based on the iterative relaxation method, which is an extension of Jain's iterative rounding method. This provides an elegant and unifying algorithmic framework for a broad range of network design problems. We also study the problem of finding a minimum cost @math -edge-connected subgraph with at least @math vertices, which we call the @math -subgraph problem. This generalizes some well-studied classical problems such as the @math -MST and the minimum cost @math -edge-connected subgraph problems. We give a polylogarithmic approximation for the @math -subgraph problem. However, by relating it to the Densest @math -Subgraph problem, we provide evidence that the @math -subgraph problem might be hard to approximate for arbitrary @math .", "We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log2 n). For planar graphs, we show an upper bound of O(√nlogn) and a lower bound of Ω(n1 3). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with a r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. 
We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs." ] }
0709.0170
2919403758
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ G of G with @math and an n-vertex outerplanar graph H and a drawing δ H of H with @math . Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
The hardness of computing @math given @math and @math was obtained independently by Verbitsky @cite_22 by a reduction from independent set in line-segment intersection graphs. While our proof is more complicated than his, it is stronger as it also yields hardness of approximation and extends to the problem with given vertex--point correspondence.
{ "cite_N": [ "@cite_22" ], "mid": [ "2166369652" ], "abstract": [ "We study the inapproximability of Vertex Cover and Independent Set on degree @math graphs. We prove that: Vertex Cover is Unique Games-hard to approximate to within a factor @math . This exactly matches the algorithmic result of Halperin halperin02improved up to the @math term. Independent Set is Unique Games-hard to approximate to within a factor @math . This improves the @math Unique Games hardness result of Samorodnitsky and Trevisan samorodnitsky06gowers . Additionally, our result does not rely on the construction of a query efficient PCP as in samorodnitsky06gowers ." ] }
0709.0170
2919403758
A straight-line drawing δ of a planar graph G need not be plane but can be made so by untangling it, that is, by moving some of the vertices of G. Let shift(G,δ) denote the minimum number of vertices that need to be moved to untangle δ. We show that shift(G,δ) is NP-hard to compute and to approximate. Our hardness results extend to a version of 1BendPointSetEmbeddability, a well-known graph-drawing problem. Further we define fix(G,δ)=n−shift(G,δ) to be the maximum number of vertices of a planar n-vertex graph G that can be fixed when untangling δ. We give an algorithm that fixes at least @math vertices when untangling a drawing of an n-vertex graph G. If G is outerplanar, the same algorithm fixes at least @math vertices. On the other hand, we construct, for arbitrarily large n, an n-vertex planar graph G and a drawing δ G of G with @math and an n-vertex outerplanar graph H and a drawing δ H of H with @math . Thus our algorithm is asymptotically worst-case optimal for outerplanar graphs.
Finally, a somewhat related problem is that of morphing, or isotopy, between two plane drawings @math and @math of the same graph @math , that is, to define for each vertex @math of @math a movement from @math to @math such that at any time during the move the drawing defined by the current vertex positions is plane. We refer the interested reader to the survey by @cite_25 .
{ "cite_N": [ "@cite_25" ], "mid": [ "1483285511" ], "abstract": [ "We consider the following problem known as simultaneous geometric graph embedding (SGE). Given a set of planar graphs on a shared vertex set, decide whether the vertices can be placed in the plane in such a way that for each graph the straight-line drawing is planar. We partially settle an open problem of Erten and Kobourov [5] by showing that even for two graphs the problem is NP-hard. We also show that the problem of computing the rectilinear crossing number of a graph can be reduced to a simultaneous geometric graph embedding problem; this implies that placing SGE in NP will be hard, since the corresponding question for rectilinear crossing number is a long-standing open problem. However, rather like rectilinear crossing number, SGE can be decided in PSPACE." ] }
0709.1909
1718410566
Given a sequence of complex square matrices, @math , consider the sequence of their partial products, defined by @math . What can be said about the asymptotics as @math of the sequence @math , where @math is a continuous function? A special case of our most general result addresses this question under the assumption that the matrices @math are an @math perturbation of a sequence of matrices with bounded partial products. We apply our theory to investigate the asymptotics of the approximants of continued fractions. In particular, when a continued fraction is @math limit 1-periodic of elliptic or loxodromic type, we show that its sequence of approximants tends to a circle in @math , or to a finite set of points lying on a circle. Our main theorem on such continued fractions unifies the treatment of the loxodromic and elliptic cases, which are convergent and divergent, respectively. When an approximating sequence tends to a circle, we obtain statistical information about the limiting distribution of the approximants. When the circle is the real line, the points are shown to have a Cauchy distribution with parameters given in terms of modifications of the original continued fraction. As an example of the general theory, a detailed study of a @math -continued fraction in five complex variables is provided. The most general theorem in the paper holds in the context of Banach algebras. The theory is also applied to @math -matrix continued fractions and recurrence sequences of Poincar 'e type and compared with closely related literature.
We are aware of four other places where work related to the results of this section was given previously. Two of these were motivated by the identity of Ramanujan. The first paper is @cite_46 , which gave the first proof of . The proof in @cite_46 is particular to the continued fraction . However, section 3 of @cite_46 studied the recurrence @math and showed that when @math , the sequence @math has six limit points and, moreover, that a continued fraction whose convergents satisfy this recurrence under the @math assumption tends to three limit points (Theorem 3.3 of @cite_46 ). The paper does not consider other numbers of limits, however. Moreover, the role of the sixth roots of unity in the recurrence is not revealed. In Section 6 of the present paper, we treat the general case in which recurrences can have a finite or uncountable number of limits. Previously, in @cite_20 , we treated such recurrences with a finite number @math of limits as well as the associated continued fractions.
{ "cite_N": [ "@cite_46", "@cite_20" ], "mid": [ "2036434814", "2000931246" ], "abstract": [ "Abstract On page 45 in his lost notebook, Ramanujan asserts that a certain q -continued fraction has three limit points. More precisely, if A n B n denotes its n th partial quotient, and n tends to ∞ in each of three residue classes modulo 3, then each of the three limits of A n B n exists and is explicitly given by Ramanujan. Ramanujan's assertion is proved in this paper. Moreover, general classes of continued fractions with three limit points are established.", "Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k , then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that kF5.191. This upper bound was improved by to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601q . In general, by letting the U This work was performed while the first author was visiting the School of Computer Science, Carleton Ž University, and was partially supported by NSERC Natural Sciences and Engineering Research Council . of Canada , and by a grant from the University of Patras for sabbatical leaves. The second and third Ž authors were supported in part by grants from NSERC Natural Sciences and Engineering Research . Council of Canada . During the last stages of this research, the first and last authors were also partially Ž . supported by EU ESPRIT Long-Term Research Project ALCOM-IT Project No. 20244 . †An extended abstract of this paper was published in the Proceedings of the Fourth Annual European Ž Symposium on Algorithms, ESA’96, September 25]27, 1996, Barcelona, Spain Springer-Verlag, LNCS, . pp. 27]38 . That extended abstract was coauthored by the first three authors of the present paper. Correspondence to: L. M. Kirousis Q 1998 John Wiley & Sons, Inc. CCC 1042-9832r98r030253-17 253" ] }
0708.3834
1482001270
We study an evolutionary game of chance in which the probabilities for different outcomes (e.g., heads or tails) depend on the amount wagered on those outcomes. The game is perhaps the simplest possible probabilistic game in which perception affects reality. By varying the reality map', which relates the amount wagered to the probability of the outcome, it is possible to move continuously from a purely objective game in which probabilities have no dependence on wagers, to a purely subjective game in which probabilities equal the amount wagered. The reality map can reflect self-reinforcing strategies or self-defeating strategies. In self-reinforcing games, rational players can achieve increasing returns and manipulate the outcome probabilities to their advantage; consequently, an early lead in the game, whether acquired by chance or by strategy, typically gives a persistent advantage. We investigate the game both in and out of equilibrium and with and without rational players. We introduce a method of measuring the inefficiency of the game and show that in the large time limit the inefficiency decreases slowly in its approach to equilibrium as a power law with an exponent between zero and one, depending on the subjectivity of the game.
There has been considerable past work on situations where subjective factors influence objective outcomes. Some examples include Hommes's studies of cobweb models @cite_8 @cite_3 , studies of increasing returns @cite_1 , Arthur's El Farol model and its close relative the minority game @cite_2 @cite_9 , Blume and Easley's model of the influence of capital markets on natural selection in an economy @cite_5 @cite_12 , and Akiyama and Kaneko's example of a game that changes due to the players' behaviors and states @cite_11 . The model we introduce here has the advantage of being very general yet very simple, providing a tunable way to study this phenomenon under varying levels of feedback.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_2", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2556732087", "1948149164", "2127508398", "1530305922", "2788306682", "2570384471", "1967445478", "2072113937" ], "abstract": [ "We study a setting where a set of players simultaneously invest in a shared resource. The resource has a probability of failure and a return on investment, both of which are functions of the total investment by all players. We use a simple reference dependent preference model to capture players with heterogeneous risk attitudes (risk seeking, risk neutral and risk averse). We show the existence and uniqueness of a pure strategy Nash equilibrium in this setting and examine the effect of different risk attitudes on players' strategies in the presence of uncertainty. In particular, we show that at the equilibrium, risk averse players are pushed out of the resource by risk seeking players. We compare the failure probabilities in the decentralized (game-theoretic) and centralized settings, and show that our proposed game belongs to the class of best response potential games, for which there are simple dynamics that allow all players to converge to the equilibrium.", "Studies in experimental economics have consistently demonstrated that Nash equilibrium is a poor description of human players' behavior in unrepeated normal-form games. Behavioral game theory offers alternative models that more accurately describe human behavior in these settings. These models typically depend upon the values of exogenous parameters, which are estimated based on experimental data. We describe methods for deriving and analyzing the posterior distributions over the parameters of such models, and apply these techniques to study two popular models (Poisson-CH and QLk), the latter of which we previously showed to be the best-performing existing model in a comparison of four widely-studied behavioral models [22]. Drawing on a large set of publicly available experimental data, we derive concrete recommendations for the parameters that should be used with Poisson-CH, contradicting previous recommendations in the literature. We also uncover anomalies in QLk that lead us to develop a new, simpler, and better-performing family of models.", "We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty we show that even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are Ω(1)-fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up Ω(1)-fairness). 
We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.", "Many natural games have both high and low cost Nash equilibria: their Price of Anarchy is high and yet their Price of Stability is low. In such cases, one could hope to move behavior from a high cost equilibrium to a low cost one by a \"public service advertising campaign\" encouraging players to follow the low-cost equilibrium, and if every player follows the advice then we are done. However, the assumption that everyone follows instructions is unrealistic. A more natural assumption is that some players will follow them, while other players will not. In this paper we consider the question of to what extent can such an advertising campaign cause behavior to switch from a bad equilibrium to a good one even if only a fraction of people actually follow the given advice, and do so only temporarily. Unlike the \"value of altruism\" model, we assume everyone will ultimately act in their own interest. We analyze this question for several important and widely studied classes of games including network design with fair cost sharing, scheduling with unrelated machines, and party affiliation games (which include consensus and cut games). We show that for some of these games (such as fair cost sharing), a random α fraction of the population following the given advice is sufficient to get a guarantee within an O(1 α) factor of the price of stability for any α > 0. For other games (such as party affiliation games), there is a strict threshold (in this case, α 1 2 is enough to reach near-optimal behavior). Finally, for some games, such as scheduling, no value α < 1 is sufficient. We also consider a \"viral marketing\" model in which certain players are specifically targeted, and analyze the ability of such targeting to influence behavior using a much smaller number of targeted players.", "We study a simple variant of the von Neumann model of an expanding economy, in which multiple producers produce goods according to their production function. The players trade their goods at the market and then use the bundles acquired as inputs for the production in the next round. We show that a simple decentralized dynamic, where players update their bids proportionally to how useful the investments were in the past round, leads to growth of the economy in the long term (whenever growth is possible) but also creates unbounded inequality, i.e. very rich and very poor players emerge. We analyze several other phenomena, such as how the relation of a player with others influences its development and the Gini index of the system.", "Evolutionary arguments are often used to justify the fundamental behavioral postulates of competive equilibrium. Economists such as Milton Friedman have argued that natural selection favors profit maximizing firms over firms engaging in other behaviors. Consequently, producer efficiency, and therefore Pareto efficiency, are justified on evolutionary grounds. We examine these claims in an evolutionary general equilibrium model. If the economic environment were held constant, profitable firms would grow and unprofitable firms would shrink. 
In the general equilibrium model, prices change as factor demands and output supply evolves. Without capital markets, when firms can grow only through retained earnings, our model verifies Friedman's claim that natural selection favors profit maximization. But we show through examples that this does not imply that equilibrium allocations converge over time to efficient allocations. Consequently, Koopmans critique of Friedman is correct. When capital markets are added, and firms grow by attracting investment, Friedman's claim may fail. In either model the long-run outcomes of evolutionary market models are not well described by conventional General Equilibrium analysis with profit maximizing firms. Submitted to Journal of Economic Theory. (This abstract was borrowed from another version of this item.)", "In this paper, we want to introduce experimental economics to the field of data mining and vice versa. It continues related work on mining deterministic behavior rules of human subjects in data gathered from experiments. Game-theoretic predictions partially fail to work with this data. Equilibria also known as game-theoretic predictions solely succeed with experienced subjects in specific games - conditions, which are rarely given. Contemporary experimental economics offers a number of alternative models apart from game theory. In relevant literature, these models are always biased by philosophical plausibility considerations and are claimed to fit the data. An agnostic data mining approach to the problem is introduced in this paper - the philosophical plausibility considerations follow after the correlations are found. No other biases are regarded apart from determinism. The dataset of the paper Social Learning in Networks\" by 2012 is taken for evaluation. As a result, we come up with new findings. As future work, the design of a new infrastructure is discussed.", "In the traffic assignment problem, first proposed by Wardrop in 1952, commuters select the shortest available path to travel from their origins to their destinations. We study a generalization of this problem in which competitors, who may control a nonnegligible fraction of the total flow, ship goods across a network. This type of games, usually referred to as atomic games, readily applies to situations in which the competing freight companies have market power. Other applications include intelligent transportation systems, competition among telecommunication network service providers, and scheduling with flexible machines. Our goal is to determine to what extent these systems can benefit from some form of coordination or regulation. We measure the quality of the outcome of the game without centralized control by computing the worst-case inefficiency of Nash equilibria. The main conclusion is that although self-interested competitors will not achieve a fully efficient solution from the system's point of view, the loss is not too severe. We show how to compute several bounds for the worst-case inefficiency that depend on the characteristics of cost functions and on the market structure in the game. In addition, building upon the work of Catoni and Pallotino, we show examples in which market aggregation (or collusion) adversely impacts the aggregated competitors, even though their market power increases. For example, Nash equilibria of atomic network games may be less efficient than the corresponding Wardrop equilibria. 
When competitors are completely symmetric, we provide a characterization of the Nash equilibrium using a potential function, and prove that this counterintuitive phenomenon does not arise. Finally, we study a pricing mechanism that elicits more coordination from the players by reducing the worst-case inefficiency of Nash equilibria." ] }
0708.1211
2949126942
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) @math of length @math . More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial @math time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) CMDetCS3,CMDetCS1,CMDetCS2 in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
Compressed Sensing (CS) methods @cite_2 @cite_20 @cite_22 @cite_15 @cite_27 provide a robust framework for reducing the number of measurements required to estimate a sparse signal. For this reason, CS methods are useful in areas such as MR imaging @cite_8 @cite_6 and analog-to-digital conversion @cite_10 @cite_12 , where measurement costs are high. The general CS setup is as follows: let the signal be an @math -length vector with complex-valued entries, and let @math be a full-rank @math change of basis matrix. Furthermore, suppose that @math is sparse (i.e., only @math entries of @math are significantly large in magnitude). CS methods deal with generating a @math measurement matrix, @math , with the smallest number of rows possible (i.e., @math minimized) so that the @math significant entries of @math can be well approximated by the @math -element vector result of Equation . Note that CS is inherently algorithmic, since a procedure for recovering @math 's largest @math entries from the result of Equation must be specified.
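For concreteness, the following is a minimal numerical sketch of the CS setup just described: a sparse signal, a measurement matrix with far fewer rows than columns, and a recovery procedure. It uses a random Gaussian measurement matrix and a basic orthogonal matching pursuit loop purely for illustration; these are standard generic choices, not the deterministic constructions of the cited papers, and all variable names and parameters are ours.

```python
# Minimal compressed-sensing sketch (illustrative only): a length-N signal that is
# B-sparse in the canonical basis is measured with an m x N random Gaussian matrix
# (m << N), and its support and coefficients are recovered with a basic orthogonal
# matching pursuit (OMP) loop.  The deterministic measurement constructions discussed
# in the surrounding text are NOT reproduced here.
import numpy as np

rng = np.random.default_rng(0)
N, B, m = 1024, 5, 128          # signal length, sparsity, number of measurements

# B-sparse signal x
x = np.zeros(N)
support = rng.choice(N, size=B, replace=False)
x[support] = rng.normal(size=B)

# Measurement matrix and measurements y = M x
M = rng.normal(size=(m, N)) / np.sqrt(m)
y = M @ x

def omp(M, y, sparsity):
    """Greedy OMP: repeatedly pick the column most correlated with the residual."""
    residual, chosen = y.copy(), []
    for _ in range(sparsity):
        chosen.append(int(np.argmax(np.abs(M.T @ residual))))
        cols = M[:, chosen]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)   # least-squares fit on chosen columns
        residual = y - cols @ coef
    x_hat = np.zeros(M.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(M, y, B)
print("true support:     ", sorted(support.tolist()))
print("recovered support:", sorted(np.flatnonzero(np.abs(x_hat) > 1e-8).tolist()))
print("max coefficient error:", float(np.max(np.abs(x_hat - x))))
```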
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_6", "@cite_27", "@cite_2", "@cite_15", "@cite_10", "@cite_12", "@cite_20" ], "mid": [ "2032618720", "1676074242", "2147361252", "2949903712", "2339789461", "2121194215", "2018429487", "2134033146", "2949327578" ], "abstract": [ "Compressed sensing (CS) decoding algorithms can efficiently recover an N -dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(klogN k) measurements y = Phix. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a CS estimate x using m measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error ||x - x ||lN2 can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements m is preimposed. One can reserve 10 log p of these m measurements and compute a sequence of possible estimates ( xj)j=1p to x from the m -10logp remaining measurements; the errors ||x - xj ||lN2 for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. This observation has applications outside CS as well.", "Compressed sensing (CS) demonstrates that sparse signals can be recovered from underdetermined linear measurements. We focus on the joint sparse recovery problem where multiple signals share the same common sparse support sets, and they are measured through the same sensing matrix. Leveraging a recent information theoretic characterization of single signal CS, we formulate the optimal minimum mean square error (MMSE) estimation problem, and derive a belief propagation algorithm, its relaxed version, for the joint sparse recovery problem and an approximate message passing algorithm. In addition, using density evolution, we provide a sufficient condition for exact recovery.", "We relate compressed sensing (CS) with Bayesian experimental design and provide a novel efficient approximate method for the latter, based on expectation propagation. In a large comparative study about linearly measuring natural images, we show that the simple standard heuristic of measuring wavelet coefficients top-down systematically outperforms CS methods using random measurements; the sequential projection optimisation approach of (Ji & Carin, 2007) performs even worse. We also show that our own approximate Bayesian method is able to learn measurement filters on full images efficiently which outperform the wavelet heuristic. To our knowledge, ours is the first successful attempt at \"learning compressed sensing\" for images of realistic size. In contrast to common CS methods, our framework is not restricted to sparse signals, but can readily be applied to other notions of signal complexity or noise models. We give concrete ideas how our method can be scaled up to large signal representations.", "The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. 
In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices describe measurement mappings achieving, with overwhelming probability, nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the Binary @math -Stable Embedding (B @math SE) property, which characterizes the robustness measurement process to sign changes. We show the same class of matrices that provide almost optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance.", "Compressed sensing (CS) demonstrates that sparse signals can be estimated from underdetermined linear systems. Distributed CS (DCS) further reduces the number of measurements by considering joint sparsity within signal ensembles. DCS with jointly sparse signals has applications in multisensor acoustic sensing, magnetic resonance imaging with multiple coils, remote sensing, and array signal processing. Multimeasurement vector (MMV) problems consider the estimation of jointly sparse signals under the DCS framework. Two related MMV settings are studied. In the first setting, each signal vector is measured by a different independent and identically distributed (i.i.d.) measurement matrix, while in the second setting, all signal vectors are measured by the same i.i.d. matrix. Replica analysis is performed for these two MMV settings, and the minimum mean squared error (MMSE), which turns out to be identical for both settings, is obtained as a function of the noise variance and number of measurements. To showcase the application of MMV models, the MMSE's of complex CS problems with both real and complex measurement matrices are also analyzed. Multiple performance regions for MMV are identified where the MMSE behaves differently as a function of the noise variance and the number of measurements. Belief propagation (BP) is a CS signal estimation framework that often achieves the MMSE asymptotically. A phase transition for BP is identified. This phase transition, verified by numerical results, separates the regions where BP achieves the MMSE and where it is suboptimal. Numerical results also illustrate that more signal vectors in the jointly sparse signal ensemble lead to a better phase transition.", "Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional sparse signals from relatively few linear observations in the form of projections onto a collection of test vectors. Existing results show that if the entries of the test vectors are independent realizations of certain zero-mean random variables, then with high probability the unknown signals can be recovered by solving a tractable convex optimization. This work extends CS theory to settings where the entries of the test vectors exhibit structured statistical dependencies. 
It follows that CS can be effectively utilized in linear, time-invariant system identification problems provided the impulse response of the system is (approximately or exactly) sparse. An immediate application is in wireless multipath channel estimation. It is shown here that time-domain probing of a multipath channel with a random binary sequence, along with utilization of CS reconstruction techniques, can provide significant improvements in estimation accuracy compared to traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results are also discussed, where the theory of equitable graph coloring is employed to establish the utility of CS in settings where the test vectors exhibit more general statistical dependencies.", "Compressed sensing (CS) seeks to recover an unknown vector with @math entries by making far fewer than @math measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply @math . CS combines directly the important task of compression with the measurement task. Since its introduction in 2004 there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS—exact reconstruction from seemingly undersampled measurements—it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry.", "Compressed sensing (CS) offers a joint compression and sensing processes, based on the existence of a sparse representation of the treated signal and a set of projected measurements. Work on CS thus far typically assumes that the projections are drawn at random. In this paper, we consider the optimization of these projections. Since such a direct optimization is prohibitive, we target an average measure of the mutual coherence of the effective dictionary, and demonstrate that this leads to better CS reconstruction performance. Both the basis pursuit (BP) and the orthogonal matching pursuit (OMP) are shown to benefit from the newly designed projections, with a reduction of the error rate by a factor of 10 and beyond.", "Compressed Sensing (CS) is an appealing framework for applications such as Magnetic Resonance Imaging (MRI). However, up-to-date, the sensing schemes suggested by CS theories are made of random isolated measurements, which are usually incompatible with the physics of acquisition. To reflect the physical constraints of the imaging device, we introduce the notion of blocks of measurements: the sensing scheme is not a set of isolated measurements anymore, but a set of groups of measurements which may represent any arbitrary shape (parallel or radial lines for instance). 
Structured acquisition with blocks of measurements are easy to implement, and provide good reconstruction results in practice. However, very few results exist on the theoretical guarantees of CS reconstructions in this setting. In this paper, we derive new CS results for structured acquisitions and signals satisfying a prior structured sparsity. The obtained results provide a recovery probability of sparse vectors that explicitly depends on their support. Our results are thus support-dependent and offer the possibility for flexible assumptions on the sparsity structure. Moreover, the results are drawing-dependent, since we highlight an explicit dependency between the probability of reconstructing a sparse vector and the way of choosing the blocks of measurements. Numerical simulations show that the proposed theory is faithful to experimental observations." ] }
0708.1211
2949126942
We study the problem of estimating the best B term Fourier representation for a given frequency-sparse signal (i.e., vector) @math of length @math . More explicitly, we investigate how to deterministically identify B of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial @math time. Randomized sub-linear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) CMDetCS3,CMDetCS1,CMDetCS2 in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.
For the remainder of this paper we will consider the special CS case where @math is the @math Discrete Fourier Transform matrix. Hence, we have . Our problem of interest is to find, and estimate the coefficients of, the @math significant entries (i.e., frequencies) of @math given a frequency-sparse (i.e., smooth) signal . In this case the deterministic Fourier CS measurement matrices, @math , produced by @cite_20 @cite_22 @cite_15 @cite_27 require super-linear @math -time to multiply by in Equation . Similarly, the energetic frequency recovery procedure of @cite_2 @cite_13 requires super-linear time in @math . Hence, none of @cite_2 @cite_20 @cite_13 @cite_22 @cite_15 @cite_27 achieves both sub-linear measurement and reconstruction time.
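As a point of reference for the Fourier special case, the sketch below computes a best B-term Fourier representation the trivial way, via a full FFT. It is only meant to make the recovery target concrete; it deliberately does not attempt the sub-linear-time measurement or reconstruction discussed above, and the test signal and parameters are illustrative.

```python
# Reference (non-sub-linear) computation of a best B-term Fourier representation:
# run the full FFT and keep the B largest-magnitude frequencies.  This touches all
# N samples and costs Theta(N log N) time; the sparse Fourier algorithms discussed
# above aim to recover (approximately) the same B frequencies and coefficients in
# sub-linear time, which this sketch does not attempt.
import numpy as np

rng = np.random.default_rng(1)
N, B = 4096, 4

# Frequency-sparse test signal: B active frequencies with random complex coefficients.
freqs = rng.choice(N, size=B, replace=False)
coefs = rng.normal(size=B) + 1j * rng.normal(size=B)
t = np.arange(N)
signal = sum(c * np.exp(2j * np.pi * f * t / N) for c, f in zip(coefs, freqs))

spectrum = np.fft.fft(signal) / N                 # with this normalization, spectrum[f] ~= c_f
top_B = np.argsort(np.abs(spectrum))[-B:]         # indices of the B largest-magnitude frequencies
estimate = {int(f): spectrum[f] for f in top_B}   # frequency -> estimated coefficient

print("true frequencies:     ", sorted(int(f) for f in freqs))
print("recovered frequencies:", sorted(estimate))
```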
{ "cite_N": [ "@cite_22", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_20" ], "mid": [ "2114789426", "2031425559", "1977252496", "2064771273", "2099641086", "2949903712" ], "abstract": [ "We consider the problem of robustly recovering a @math -sparse coefficient vector from the Fourier series that it generates, restricted to the interval @math . The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of @math ) by the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm @math , we show upper and lower bounds on the minimax error rate that both scale like @math , providing a partial answer to a question posed by Donoho in 1992. The scaling arises from comparing the noise level to a restricted isometry constant at sparsity @math , or equivalently from comparing @math to the so-called @math -spark of the Fourier system. The proof involves new bounds on the singular values of restricted Fourier matrices, obtained in part from old techniques in complex analysis.", "We study the problem of estimating the best k term Fourier representation for a given frequency sparse signal (i.e., vector) A of length N≫k. More explicitly, we investigate how to deterministically identify k of the largest magnitude frequencies of @math , and estimate their coefficients, in polynomial(k,log N) time. Randomized sublinear-time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem ( in ACM STOC, pp. 152–161, 2002; Proceedings of SPIE Wavelets XI, 2005). In this paper we develop the first known deterministic sublinear-time sparse Fourier Transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime sampling bounds to the current best randomized Fourier method ( in Proceedings of SPIE Wavelets XI, 2005). Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in (Iwen in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA’08), 2008).", "We study the notion of compressed sensing (CS) as put forward by Donoho, Candes, Tao and others. The notion proposes a signal or image, unknown but supposed to be compressible by a known transform, (e.g. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of data points, and yet be accurately reconstructed. The samples are nonadaptive and measure 'random' linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible l1 norm.We present initial 'proof-of-concept' examples in the favorable case where the vast majority of the transform coefficients are zero. We continue with a series of numerical experiments, for the setting of lp-sparsity, in which the object has all coefficients nonzero, but the coefficients obey an lp bound, for some p ∈ (0, 1]. 
The reconstruction errors obey the inequalities paralleling the theory, seemingly with well-behaved constants.We report that several workable families of 'random' linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard.We next consider how these ideas can be used to model problems in spectroscopy and image processing, and in synthetic examples see that the reconstructions from CS are often visually \"noisy\". To suppress this noise we postprocess using translation-invariant denoising, and find the visual appearance considerably improved.We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology.These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.", "This paper explores the problem of spectral compressed sensing, which aims to recover a spectrally sparse signal from a small random subset of its n time domain samples. The signal of interest is assumed to be a superposition of r multidimensional complex sinusoids, while the underlying frequencies can assume any continuous values in the normalized frequency domain. Conventional compressed sensing paradigms suffer from the basis mismatch issue when imposing a discrete dictionary on the Fourier representation. To address this issue, we develop a novel algorithm, called enhanced matrix completion (EMaC), based on structured matrix completion that does not require prior knowledge of the model order. The algorithm starts by arranging the data into a low-rank enhanced form exhibiting multifold Hankel structure, and then attempts recovery via nuclear norm minimization. Under mild incoherence conditions, EMaC allows perfect recovery as soon as the number of samples exceeds the order of r log4 n, and is stable against bounded noise. Even if a constant portion of samples are corrupted with arbitrary magnitude, EMaC still allows exact recovery, provided that the sample complexity exceeds the order of r2 log3 n. Along the way, our results demonstrate the power of convex relaxation in completing a low-rank multifold Hankel or Toeplitz matrix from minimal observed entries. The performance of our algorithm and its applicability to super resolution are further validated by numerical experiments.", "Suppose a discrete-time signal S(t), 0 spl les t<N, is a superposition of atoms taken from a combined time-frequency dictionary made of spike sequences 1 sub t= spl tau and sinusoids exp 2 spl pi iwt N spl radic N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every discrete-time signal can be represented as a superposition of spikes alone, or as a superposition of sinusoids alone, there is no unique way of writing S as a sum of spikes and sinusoids in general. We prove that if S is representable as a highly sparse superposition of atoms from this time-frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the l sup 1 norm of the coefficients among all decompositions. 
Here \"highly sparse\" means that N sub t +N sub w < spl radic N 2 where N sub t is the number of time atoms, N sub w is the number of frequency atoms, and N is the length of the discrete-time signal. Underlying this result is a general l sup 1 uncertainty principle which says that if two bases are mutually incoherent, no nonzero signal can have a sparse representation in both bases simultaneously. For the above setting, the bases are sinusoids and spikes, and mutual incoherence is measured in terms of the largest inner product between different basis elements. The uncertainty principle holds for a variety of interesting basis pairs, not just sinusoids and spikes. The results have idealized applications to band-limited approximation with gross errors, to error-correcting encryption, and to separation of uncoordinated sources. Related phenomena hold for functions of a real variable, with basis pairs such as sinusoids and wavelets, and for functions of two variables, with basis pairs such as wavelets and ridgelets. In these settings, if a function f is representable by a sufficiently sparse superposition of terms taken from both bases, then there is only one such sparse representation; it may be obtained by minimum l sup 1 norm atomic decomposition. The condition \"sufficiently sparse\" becomes a multiscale condition; for example, that the number of wavelets at level j plus the number of sinusoids in the jth dyadic frequency band are together less than a constant times 2 sup j 2 .", "The Compressive Sensing (CS) framework aims to ease the burden on analog-to-digital converters (ADCs) by reducing the sampling rate required to acquire and stably recover sparse signals. Practical ADCs not only sample but also quantize each measurement to a finite number of bits; moreover, there is an inverse relationship between the achievable sampling rate and the bit depth. In this paper, we investigate an alternative CS approach that shifts the emphasis from the sampling rate to the number of bits per measurement. In particular, we explore the extreme case of 1-bit CS measurements, which capture just their sign. Our results come in two flavors. First, we consider ideal reconstruction from noiseless 1-bit measurements and provide a lower bound on the best achievable reconstruction error. We also demonstrate that i.i.d. random Gaussian matrices describe measurement mappings achieving, with overwhelming probability, nearly optimal error decay. Next, we consider reconstruction robustness to measurement errors and noise and introduce the Binary @math -Stable Embedding (B @math SE) property, which characterizes the robustness measurement process to sign changes. We show the same class of matrices that provide almost optimal noiseless performance also enable such a robust mapping. On the practical side, we introduce the Binary Iterative Hard Thresholding (BIHT) algorithm for signal reconstruction from 1-bit measurements that offers state-of-the-art performance." ] }
0708.0961
2950164889
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
Most isosurface extraction methods operate only on structured data, usually a structured or unstructured grid @cite_10 . Livnat @cite_10 and Sutton et al. @cite_22 provide overviews of popular isosurface extraction techniques.
{ "cite_N": [ "@cite_10", "@cite_22" ], "mid": [ "2167302705", "2111734174" ], "abstract": [ "Isosurface extraction is a standard visualization method for scalar volume data and has been subject to research for decades. Nevertheless, to our knowledge, no isosurface extraction method exists that directly extracts surfaces from scattered volume data without 3D mesh generation or reconstruction over a structured grid. We propose a method based on spatial domain partitioning using a kd-tree and an indexing scheme for efficient neighbor search. Our approach consists of a geometry extraction and a rendering step. The geometry extraction step computes points on the isosurface by linearly interpolating between neighboring pairs of samples. The neighbor information is retrieved by partitioning the 3D domain into cells using a kd-tree. The cells are merely described by their index and bitwise index operations allow for a fast determination of potential neighbors. We use an angle criterion to select appropriate neighbors from the small set of candidates. The output of the geometry step is a point cloud representation of the isosurface. The final rendering step uses point-based rendering techniques to visualize the point cloud. Our direct isosurface extraction algorithm for scattered volume data produces results of quality close to the results from standard isosurface extraction algorithms for gridded volume data (like marching cubes). In comparison to 3D mesh generation algorithms (like Delaunay tetrahedrization), our algorithm is about one order of magnitude faster for the examples used in this paper.", "Presents the \"Near Optimal IsoSurface Extraction\" (NOISE) algorithm for rapidly extracting isosurfaces from structured and unstructured grids. Using the span space, a new representation of the underlying domain, we develop an isosurface extraction algorithm with a worst case complexity of o( spl radic n+k) for the search phase, where n is the size of the data set and k is the number of cells intersected by the isosurface. The memory requirement is kept at O(n) while the preprocessing step is O(n log n). We utilize the span space representation as a tool for comparing isosurface extraction methods on structured and unstructured grids. We also present a fast triangulation scheme for generating and displaying unstructured tetrahedral grids." ] }
0708.0961
2950164889
We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping. We import the resulting regular grid representation into ParaView, with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade.
Value-space decomposition techniques, such as NOISE @cite_13 and interval trees @cite_15 @cite_39 , can extract isosurfaces from datasets that lack structure, as can the various techniques of Co et al. @cite_11 @cite_36 and Rosenthal et al. @cite_27 . Unfortunately, implementations of these techniques are usually not freely available.
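The common idea behind these value-space methods can be illustrated with a short sketch: each cell is summarized by the minimum and maximum of its scalar values, and only cells whose range contains the isovalue can intersect the isosurface. The brute-force filter below is exactly the query that span-space (NOISE) and interval-tree indexing accelerate; the cell data here are synthetic, and the code is not an implementation of either published data structure.

```python
# Core idea behind value-space isosurface queries (NOISE / interval trees):
# summarize every cell by the (min, max) of its scalar values; a cell can intersect
# the isosurface for isovalue v only if min <= v <= max.  The brute-force scan below
# is the query that span-space or interval-tree indexing speeds up; it is a sketch,
# not an implementation of either published data structure.
import numpy as np

rng = np.random.default_rng(2)

# Illustrative "unstructured" data: tetrahedral cells given as 4 scalar values each.
num_cells = 10_000
cell_values = rng.normal(size=(num_cells, 4))

# Per-cell value ranges ("span space" coordinates: x = cell min, y = cell max).
cell_min = cell_values.min(axis=1)
cell_max = cell_values.max(axis=1)

def active_cells(isovalue):
    """Indices of cells whose value range straddles the isovalue."""
    return np.flatnonzero((cell_min <= isovalue) & (isovalue <= cell_max))

cells = active_cells(0.8)
print(f"{cells.size} of {num_cells} cells can intersect the isovalue-0.8 surface")
# NOISE organizes the (min, max) points in a search structure over span space, and
# interval trees index the [min, max] intervals directly, so this query costs roughly
# O(sqrt(n) + k) or O(log n + k) instead of the O(n) scan above.
```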
{ "cite_N": [ "@cite_36", "@cite_39", "@cite_27", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2151005058", "2111734174", "2125039738", "2015418199", "2167302705", "2147880780" ], "abstract": [ "A method is proposed which supports the extraction of isosurfaces from irregular volume data, represented by tetrahedral decomposition, in optimal time. The method is based on a data structure called interval tree, which encodes a set of intervals on the real line, and supports efficient retrieval of all intervals containing a given value. Each cell in the volume data is associated with an interval bounded by the extreme values of the field in the cell. All cells intersected by a given isosurface are extracted in O(m+log h) time, with m the output size and h the number of different extreme values (min or max). The implementation of the method is simple. Tests have shown that its practical performance reflects the theoretical optimality.", "Presents the \"Near Optimal IsoSurface Extraction\" (NOISE) algorithm for rapidly extracting isosurfaces from structured and unstructured grids. Using the span space, a new representation of the underlying domain, we develop an isosurface extraction algorithm with a worst case complexity of o( spl radic n+k) for the search phase, where n is the size of the data set and k is the number of cells intersected by the isosurface. The memory requirement is kept at O(n) while the preprocessing step is O(n log n). We utilize the span space representation as a tool for comparing isosurface extraction methods on structured and unstructured grids. We also present a fast triangulation scheme for generating and displaying unstructured tetrahedral grids.", "The interval tree is an optimally efficient search structure proposed by Edelsbrunner (1980) to retrieve intervals on the real line that contain a given query value. We propose the application of such a data structure to the fast location of cells intersected by an isosurface in a volume dataset. The resulting search method can be applied to both structured and unstructured volume datasets, and it can be applied incrementally to exploit coherence between isosurfaces. We also address issues of storage requirements, and operations other than the location of cells, whose impact is relevant in the whole isosurface extraction task. In the case of unstructured grids, the overhead, due to the search structure, is compatible with the storage cost of the dataset, and local coherence in the computation of isosurface patches is exploited through a hash table. In the case of a structured dataset, a new conceptual organization is adopted, called the chess-board approach, which exploits the regular structure of the dataset to reduce memory usage and to exploit local coherence. In both cases, efficiency in the computation of surface normals on the isosurface is obtained by a precomputation of the gradients at the vertices of the mesh. Experiments on different kinds of input show that the practical performance of the method reflects its theoretical optimality.", "Abstract It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique known as “compressed sensing” or “compressive sampling” relies on properties of the sensing matrix such as the restricted isometry property . 
In this Note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant. To cite this article: E.J. Candes, C. R. Acad. Sci. Paris, Ser. I 346 (2008).", "Isosurface extraction is a standard visualization method for scalar volume data and has been subject to research for decades. Nevertheless, to our knowledge, no isosurface extraction method exists that directly extracts surfaces from scattered volume data without 3D mesh generation or reconstruction over a structured grid. We propose a method based on spatial domain partitioning using a kd-tree and an indexing scheme for efficient neighbor search. Our approach consists of a geometry extraction and a rendering step. The geometry extraction step computes points on the isosurface by linearly interpolating between neighboring pairs of samples. The neighbor information is retrieved by partitioning the 3D domain into cells using a kd-tree. The cells are merely described by their index and bitwise index operations allow for a fast determination of potential neighbors. We use an angle criterion to select appropriate neighbors from the small set of candidates. The output of the geometry step is a point cloud representation of the isosurface. The final rendering step uses point-based rendering techniques to visualize the point cloud. Our direct isosurface extraction algorithm for scattered volume data produces results of quality close to the results from standard isosurface extraction algorithms for gridded volume data (like marching cubes). In comparison to 3D mesh generation algorithms (like Delaunay tetrahedrization), our algorithm is about one order of magnitude faster for the examples used in this paper.", "We investigate techniques for analysis and retrieval of object trajectories in two or three dimensional space. Such data usually contain a large amount of noise, that has made previously used metrics fail. Therefore, we formalize non-metric similarity functions based on the longest common subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translation of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and time warping distance functions (for real and synthetic data) and show the superiority of our approach, especially in the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Several scholarly ontologies are available in the DAML Ontology Library (available at: http://www.daml.org/ontologies). While they focus on bibliographic constructs, they do not model usage events. The same is true of the Semantic Community Web Portal ontology @cite_9 , which, in addition, maintains many detailed classes whose instantiation is unrealistic given what is recorded by modern scholarly information systems.
{ "cite_N": [ "@cite_9" ], "mid": [ "1563413502" ], "abstract": [ "This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty in semantic web. In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies are treated as evidential reasoning between the two translated BNs. Probabilities needed for constructing conditional probability tables (CPT) during translation and for measuring semantic similarity during mapping are learned using text classification techniques where each concept in an ontology is associated with a set of semantically relevant text documents, which are obtained by ontology guided web mining. The basic ideas of this approach are validated by positive results from computer experiments on two small real-world ontologies." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
The ScholOnto ontology was developed as part of an effort aimed at enabling researchers to describe and debate, via a semantic network, the contributions of a document, and its relationship to the literature @cite_4 . While this ontology supports the concept of a scholarly document and a scholarly agent, it focuses on formally summarizing and interactively debating claims made in documents, not on expressing the actual use of documents. Moreover, support for bibliographic data is minimal whereas support for discourse constructs, not required for MESUR, is very detailed.
{ "cite_N": [ "@cite_4" ], "mid": [ "2130307546" ], "abstract": [ "The internet is rapidly becoming the first place for researchers to publish documents, but at present they receive little support in searching, tracking, analysing or debating concepts in a literature from scholarly perspectives. This paper describes the design rationale and implementation of ScholOnto, an ontology-based digital library server to support scholarly interpretation and discourse. It enables researchers to describe and debate via a semantic network the contributions a document makes, and its relationship to the literature. The paper discusses the computational services that an ontology-based server supports, alternative user interfaces to support interaction with a large semantic network, usability issues associated with knowledge formalisation, new work practices that could emerge, and related work." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
The ABC ontology @cite_25 was primarily engineered as a common conceptual model for the interoperability of a variety of metadata ontologies from different domains. Although the ABC ontology is able to represent bibliographic and usage concepts by means of constructs such as artifact (e.g. article), agent (e.g. author), and action (e.g. use), it is designed at a level of generality that does not directly support the granularity required by the MESUR project.
{ "cite_N": [ "@cite_25" ], "mid": [ "1552027408" ], "abstract": [ "This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow." ] }
0708.1150
2140504256
The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.
Finally, in the realm of usage data representation, no ontology-based efforts were found. Nevertheless, the following existing schema-driven approaches were explored and served as inspiration: the OpenURL ContextObject approach to facilitate OAI-PMH-based harvesting of scholarly usage events @cite_2 , the XML Log standard to represent digital library logs @cite_8 , and the COUNTER schema to express journal level usage statistics @cite_16 .
{ "cite_N": [ "@cite_8", "@cite_16", "@cite_2" ], "mid": [ "2084177123", "1966541562", "2127539404" ], "abstract": [ "Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. This paper presents a technical, standards-based architecture for sharing usage information, which we have designed and implemented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service. This paper also discusses issues that were encountered when implementing the proposed approach, and it presents preliminary results obtained from analyzing a usage data set containing about 3,500,000 requests aggregated by a federation of linking servers at the California State University system over a 20 month period.", "Representing the semantics of unstructured scientific publications will certainly facilitate access and search and hopefully lead to new discoveries. However, current digital libraries are usually limited to classic flat structured metadata even for scientific publications that potentially contain rich semantic metadata. In addition, how to search the scientific literature of linked semantic metadata is an open problem. We have developed a semantic digital library oreChem ChemxSeer that models chemistry papers with semantic metadata. It stores and indexes extracted metadata from a chemistry paper repository Chemx Seer using \"compound objects\". We use the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) (http: www.openarchives.org ore standard to define a compound object that aggregates metadata fields related to a digital object. Aggregated metadata can be managed and retrieved easily as one unit resulting in improved ease-of-use and has the potential to improve the semantic interpretation of shared data. We show how metadata can be extracted from documents and aggregated using OAI-ORE. ORE objects are created on demand; thus, we are able to search for a set of linked metadata with one query. We were also able to model new types of metadata easily. For example, chemists are especially interested in finding information related to experiments in documents. We show how paragraphs containing experiment information in chemistry papers can be extracted and tagged based on a chemistry ontology with 470 classes, and then represented in ORE along with other document-related metadata. Our algorithm uses a classifier with features that are words that are typically only used to describe experiments, such as \"apparatus\", \"prepare\", etc. Using a dataset comprised of documents from the Royal Society of Chemistry digital library, we show that the our proposed methodperforms well in extracting experiment-related paragraphs from chemistry documents.", "Capturing the context of a user's query from the previous queries and clicks in the same session may help understand the user's information need. 
A context-aware approach to document re-ranking, query suggestion, and URL recommendation may improve users' search experience substantially. In this paper, we propose a general approach to context-aware search. To capture contexts of queries, we learn a variable length Hidden Markov Model (vlHMM) from search sessions extracted from log data. Although the mathematical model is intuitive, how to learn a large vlHMM with millions of states from hundreds of millions of search sessions poses a grand challenge. We develop a strategy for parameter initialization in vlHMM learning which can greatly reduce the number of parameters to be estimated in practice. We also devise a method for distributed vlHMM learning under the map-reduce model. We test our approach on a real data set consisting of 1.8 billion queries, 2.6 billion clicks, and 840 million search sessions, and evaluate the effectiveness of the vlHMM learned from the real data on three search applications: document re-ranking, query suggestion, and URL recommendation. The experimental results show that our approach is both effective and efficient." ] }
0708.1337
2164790530
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersely-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
There has also been work on Quantum Markov networks within the quantum probability literature @cite_35 @cite_5 @cite_52 , although Belief Propagation has not been investigated in this literature. This is closer to the spirit of the present work, in the sense that it is based on the generalization of classical probability to a noncommutative, operator-valued probability theory. These works are primarily concerned with defining the Markov condition in such a way that it can be applied to systems with an infinite number of degrees of freedom, and hence an operator algebraic formalism is used. This is important for applications to statistical physics because the thermodynamic limit can be formally defined as the limit of an infinite number of systems, but it is not so important for numerical simulations, since these necessarily operate with a finite number of discretized degrees of freedom. Also, conditional independence is defined in a different way, via quantum conditional expectations rather than via the conditional mutual information and conditional density operators used in the present work. Nevertheless, it seems likely that there are connections to our approach that should be investigated in future work.
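For reference, the conditional-independence criterion alluded to here is the standard quantum conditional mutual information; stating it explicitly (this is the textbook definition, not a quotation from the cited works):

\[
  I(A;C|B) \;=\; S(\rho_{AB}) + S(\rho_{BC}) - S(\rho_{B}) - S(\rho_{ABC}),
  \qquad S(\rho) = -\mathrm{Tr}\left[\rho \log \rho\right],
\]

and the quantum Markov condition for the chain A - B - C is I(A;C|B) = 0.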
{ "cite_N": [ "@cite_35", "@cite_5", "@cite_52" ], "mid": [ "1914859158", "2963271519", "2071849960" ], "abstract": [ "We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. By taking a geometric view of the non-signaling distributions, we introduce a simple new technique based on affine combinations of lower-complexity distributions, and we give the first general technique to apply to all these settings, with elementary proofs and very intuitive interpretations. Specifically, we introduce two complexity measures, one which gives lower bounds on classical communication, and one for quantum communication. These measures can be expressed as convex optimization problems. We show that the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are closely related to the winning probability of XOR games. Despite their apparent simplicity, these lower bounds subsume many known communication complexity lower bound methods, most notably the recent lower bounds of Linial and Shraibman for the special case of Boolean functions. We show that as in the case of Boolean functions, the gap between the quantum and classical lower bounds is at most linear in the size of the support of the distribution, and does not depend on the size of the inputs. This translates into a bound on the gap between maximal Bell and Tsirelson inequality violations, which was previously known only for the case of distributions with Boolean outcomes and uniform marginals. It also allows us to show that for some distributions, information theoretic methods are necessary to prove strong lower bounds. Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.", "AbstractIn this paper we propose a continuous-time, dissipative Markov dynamics that asymp-totically drives a network of n-dimensional quantum systems to the set of states that areinvariant under the action of the subsystem permutation group. The Lindblad-type gen-erator of the dynamics is built with two-body subsystem swap operators, thus satisfyinglocality constraints, and preserve symmetric observables. The potential use of the pro-posed generator in combination with local control and measurement actions is illustratedwith two applications: the generation of a global pure state and the estimation of thenetwork size. 1 INTRODUCTION Classical consensus algorithms and the related distributed control problems have recentlygenerated an impressive body of literature, motivated by applications in distributed com-putation and multi-agent coordination, see e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9]. 
We have recentlyrecast the problem, and a class of algorithms for its solution, as a dynamical symmetriza-tion problem in a group-theoretic framework [10]. This allows to extend the use of simpleand robust algorithms, e.g. of gossip type [11], to new settings and applications. Amongthese, we have studied a quantum version of consensus problems and its applications [12],which can be seen as symmetrization with respect to the subsystem-permutation group, aswell as control methods like quantum dynamical decoupling [13], in which we generally donot have a multipartite structure and symmetrization is attained with respect to other finitegroups. The emerging dynamics are intrinsically in discrete time, and suitable for sequentialimplementation in dissipative quantum simulators [14].", "We give an explicit characterisation of the quantum states which saturate the strong subadditivity inequality for the von Neumann entropy. By combining a result of Petz characterising the equality case for the monotonicity of relative entropy with a recent theorem by Koashi and Imoto, we show that such states will have the form of a so–called short quantum Markov chain, which in turn implies that two of the systems are independent conditioned on the third, in a physically meaningful sense. This characterisation simultaneously generalises known necessary and sufficient entropic conditions for quantum error correction as well as the conditions for the achievability of the Holevo bound on accessible information." ] }
0708.1337
2164790530
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersely-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
Lastly, during the final stage of preparation of this manuscript, two related papers have appeared on the physics archive. An article by Laumann, Scardicchio and Sondhi @cite_15 used a QBP-like algorithm to solve quantum models on sparse graphs. Hastings @cite_20 proposed a QBP algorithm for the simulation of quantum many-body systems based on ideas similar to the ones presented here. The connection between the two approaches, and in particular the application of the Lieb-Robinson bound @cite_7 to conditional mutual information, is worthy of further investigation.
{ "cite_N": [ "@cite_15", "@cite_7", "@cite_20" ], "mid": [ "2611161794", "2566350286", "2952413488" ], "abstract": [ "Although a quantum state requires exponentially many classical bits to describe, the laws of quantum mechanics impose severe restrictions on how that state can be accessed. This paper shows in three settings that quantum messages have only limited advantages over classical ones. First, we show that BQP qpoly is contained in PP poly, where BQP qpoly is the class of problems solvable in quantum polynomial time, given a polynomial-size \"quantum advice state\" that depends only on the input length. This resolves a question of Buhrman, and means that we should not hope for an unrelativized separation between quantum and classical advice. Underlying our complexity result is a general new relation between deterministic and quantum one-way communication complexities, which applies to partial as well as total functions. Second, we construct an oracle relative to which NP is not contained in BQP qpoly. To do so, we use the polynomial method to give the first correct proof of a direct product theorem for quantum search. This theorem has other applications; for example, it can be used to fix a flawed result of Klauck about quantum time-space tradeoffs for sorting. Third, we introduce a new trace distance method for proving lower bounds on quantum one-way communication complexity. Using this method, we obtain optimal quantum lower bounds for two problems of Ambainis, for which no nontrivial lower bounds were previously known even for classical randomized protocols.", "In the near future, there will likely be special-purpose quantum computers with 40--50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate \"quantum supremacy\": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible. First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work -- for example, on BosonSampling and IQP -- the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled. Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and depth d in polynomial space and dO(n) time. We then discuss why this and other known algorithms fail to refute our assumption. Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem -- of the form \"if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses\"-- must be non-relativizing. This sharply contrasts with the situation for exact sampling. Fourth, refuting a conjecture by Aaronson and Ambainis, we show that there is a sampling task, namely Fourier Sampling, with a 1 versus linear separation between its quantum and classical query complexities. Fifth, in search of a \"happy medium\" between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P poly. 
Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP = SampBQP and NP ⊆ BPP, then quantum supremacy is impossible relative to oracles with small circuits.", "The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked: If predicting Quantum Mechanical systems requires exponential resources, is QM a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To answer these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard Interactive Proofs the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine, with access to few qubits. Our main theorem can be roughly stated as: \"Any language in BQP has a QPIP, and moreover, a fault tolerant one\". We provide two proofs. The simpler one uses a new (possibly of independent interest) quantum authentication scheme (QAS) based on random Clifford elements. This QPIP however, is not fault tolerant. Our second protocol uses polynomial codes QAS due to BCGHS, combined with quantum fault tolerance and multiparty quantum computation techniques. A slight modification of our constructions makes the protocol \"blind\": the quantum computation and input are unknown to the prover. After we have derived the results, we have learned that Broadbent at al. have independently derived \"universal blind quantum computation\" using completely different methods. Their construction implicitly implies similar implications." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
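As a purely classical reference point for the mechanism described above, the sketch below (a brute-force software analogue, not the optical device, and exponential in the worst case just as the device's number of subrays is) propagates labelled "rays" along the arcs, lets each ray remember the nodes it has already visited, and accepts at the destination only those rays that have passed exactly once through every node:

# Classical sketch of the ray-marking idea: enumerate the labelled rays
# (simple paths) and accept those that visited every node exactly once.
def hamiltonian_paths(graph, start, dest):
    # graph: dict node -> list of successor nodes (directed arcs)
    n = len(graph)
    found = []

    def propagate(node, visited, path):
        if node == dest and len(visited) == n:
            found.append(path)            # a ray that passed once through each node
            return
        for succ in graph.get(node, []):
            if succ not in visited:       # the node marking forbids revisits
                propagate(succ, visited | {succ}, path + [succ])

    propagate(start, {start}, [start])
    return found

if __name__ == "__main__":
    g = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}        # toy directed graph
    print(hamiltonian_paths(g, start=0, dest=3))     # [[0, 1, 2, 3]]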
Another idea is to use light instead of electrical power. It is hoped that optical computing could advance computer architecture and improve the speed of data input and output by several orders of magnitude @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "2139842859" ], "abstract": [ "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14]." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Many theoretical and practical light-based devices have been proposed for dealing with various problems. Optical computation has some advantages, one of them being that it can perform certain operations faster than conventional devices. An example is the @math -point discrete Fourier transform, which can be computed in only unit time @cite_1 @cite_8 .
{ "cite_N": [ "@cite_1", "@cite_8" ], "mid": [ "2139842859", "2027561983" ], "abstract": [ "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14].", "We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
A recent paper @cite_25 introduces the idea of sorting by using some properties of light. The method, called Rainbow Sort, is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light traversing a prism is sorted by wavelength (see Figure ). To implement Rainbow Sort one needs to perform the following steps:
{ "cite_N": [ "@cite_25" ], "mid": [ "2045864543" ], "abstract": [ "Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this \"rainbow effect\" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in ? (n) with a space complexity of ? (n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of ? (n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
Naughton et al. proposed and investigated @cite_14 @cite_2 a model called the continuous space machine, which operates in discrete time-steps over a number of two-dimensional complex-valued images of constant size and arbitrary spatial resolution. The (constant-time) operations on images include Fourier transformation, multiplication, addition, thresholding, copying and scaling.
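The constant-time primitives listed above are easy to mimic on a discretized grid with numpy; the fragment below is only a toy illustration of those operations on a sampled complex-valued image (it says nothing about the continuous space machine's complexity claims, and the 64x64 resolution and the 0.5 threshold are arbitrary choices made for the example):

# Toy numpy illustration of the image operations named above, acting on a
# discretized two-dimensional complex-valued image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64)) * np.exp(2j * np.pi * rng.random((64, 64)))

ft       = np.fft.fft2(img)                        # Fourier transformation
product  = img * np.conj(img)                      # point-wise multiplication
added    = img + product                           # addition
thresh   = np.where(np.abs(img) > 0.5, img, 0)     # amplitude thresholding
copied   = img.copy()                              # copying
rescaled = np.kron(img, np.ones((2, 2)))           # doubling the spatial resolution

print(ft.shape, rescaled.shape)                    # (64, 64) (128, 128)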
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "2101615041", "2041855012" ], "abstract": [ "We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2n) unordered search algorithm with our model.", "We introduce a volumetric space-time technique for the reconstruction of moving and deforming objects from point data. The output of our method is a four-dimensional space-time solid, made up of spatial slices, each of which is a three-dimensional solid bounded by a watertight manifold. The motion of the object is described as an incompressible flow of material through time. We optimize the flow so that the distance material moves from one time frame to the next is bounded, the density of material remains constant, and the object remains compact. This formulation overcomes deficiencies in the acquired data, such as persistent occlusions, errors, and missing frames. We demonstrate the performance of our flow-based technique by reconstructing coherent sequences of watertight models from incomplete scanner data." ] }
0708.1512
2081798814
In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.
There are also other devices which take into account the quantum properties of light. This idea has been used for solving the Traveling Salesman Problem @cite_26 @cite_3 using a special-purpose device.
{ "cite_N": [ "@cite_26", "@cite_3" ], "mid": [ "2169259398", "1532824777" ], "abstract": [ "We introduce an optical method based on white light interferometry in order to solve the well-known NP–complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non–polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of available photons for solving the problem. It will turn out that this number of photons is proportional to NN for a traveling salesman problem with N cities and that for large numbers of cities the method in practice therefore is limited by the signal–to–noise ratio. The proposed method is meant purely as a gedankenexperiment.", "In this paper we suggest the use of light for performing useful computations. Namely, we propose a special device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
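A classical way to see how a single arrival time can certify an exact cover is to pick the "special integer numbers" as powers of a base larger than the number of candidate subsets: the total delay of a selection then encodes, digit by digit, how many times each item is covered, so the target moment is reached exactly when every item is covered once. The sketch below simulates this by brute force; the concrete delay assignment (m+1)^i is an assumption made for the illustration and need not coincide with the paper's, and the enumeration is exponential, just as the number of generated subrays is:

# Brute-force simulation of the delay-encoding idea for Exact Cover.
# Illustrative assumption: item i gets delay (m+1)**i, m = number of subsets,
# so the total delay's base-(m+1) digits are the items' coverage counts.
from itertools import combinations

def exact_cover_exists(universe, subsets):
    m = len(subsets)
    delay = {item: (m + 1) ** i for i, item in enumerate(sorted(universe))}
    target = sum(delay.values())             # arrival time of an exact cover

    for r in range(1, m + 1):
        for choice in combinations(subsets, r):
            arrival = sum(delay[item] for s in choice for item in s)
            if arrival == target:            # every item covered exactly once
                return True, choice
    return False, None

if __name__ == "__main__":
    U = {1, 2, 3, 4}
    S = [frozenset({1, 2}), frozenset({3}), frozenset({2, 3, 4}), frozenset({4})]
    print(exact_cover_exists(U, S))          # True, with the cover {1,2}, {3}, {4}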
Using light instead of electric power for performing computations is not a new idea. Optical Character Recognition (OCR) machines @cite_4 were among the first modern devices that used light to solve a difficult problem. Later, various researchers have shown how light can solve some problems faster than modern computers. An example is the @math -point discrete Fourier transform, which can be computed in only unit time @cite_1 @cite_8 .
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_8" ], "mid": [ "2129824148", "2158030311", "1521064364" ], "abstract": [ "We exploit the gap in ability between human and machine vision systems to craft a family of automatic challenges that tell human and machine users apart via graphical interfaces including Internet browsers. Turing proposed (1950) a method whereby human judges might validate \"artificial intelligence\" by failing to distinguish between human and machine interlocutors. Stimulated by the \"chat room problem\", and influenced by the CAPTCHA project of (2000), we propose a variant of the Turing test using pessimal print: that is, low-quality images of machine-printed text synthesized pseudo-randomly over certain ranges of words, typefaces, and image degradations. We show experimentally that judicious choice of these ranges can ensure that the images are legible to human readers but illegible to several of the best present-day optical character recognition (OCR) machines. Our approach is motivated by a decade of research on performance evaluation of OCR machines and on quantitative stochastic models of document image quality. The slow pace of evolution of OCR and other species of machine vision over many decades suggests that pessimal print will defy automated attack for many years. Applications include 'bot' barriers and database rationing.", "Optical Character Recognition (OCR) is a classical research field and has become one of most thriving applications in the field of pattern recognition. Feature extraction is a key step in the process of OCR, which in fact is a deciding factor of the accuracy of the system. This paper proposes a novel and robust technique for feature extraction using Gabor Filters, to be employed in the OCR. The use of 2D Gabor filters is investigated and features are extracted using these filters. The technique generally extracts fifty features based on global texture analysis and can be further extended to increase the number of features if necessary. The algorithm is well explained and is found that the proposed method demonstrated better performance in efficiency. In addition, experimental results show that the method gains high recognition rate and cost reasonable average running time.", "We present a method for spotting words in the wild, i.e., in real images taken in unconstrained environments. Text found in the wild has a surprising range of difficulty. At one end of the spectrum, Optical Character Recognition (OCR) applied to scanned pages of well formatted printed text is one of the most successful applications of computer vision to date. At the other extreme lie visual CAPTCHAs - text that is constructed explicitly to fool computer vision algorithms. Both tasks involve recognizing text, yet one is nearly solved while the other remains extremely challenging. In this work, we argue that the appearance of words in the wild spans this range of difficulties and propose a new word recognition approach based on state-of-the-art methods from generic object recognition, in which we consider object categories to be the words themselves. We compare performance of leading OCR engines - one open source and one proprietary - with our new approach on the ICDAR Robust Reading data set and a new word spotting data set we introduce in this paper: the Street View Text data set. We show improvements of up to 16 on the data sets, demonstrating the feasibility of a new approach to a seemingly old problem." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
In @cite_19 , a new, principally non-dissipative digital logic architecture was presented, which involves a distributed and parallel input scheme where logical functions are evaluated at the speed of light. The system is based on digital logic vectors rather than the Boolean scalars of electronic logic. This new logic paradigm was specially developed with optical implementation in mind.
{ "cite_N": [ "@cite_19" ], "mid": [ "2027217447" ], "abstract": [ "Conventional architectures for the implementation of Boolean logic are based on a network of bistable elements assembled to realize cascades of simple Boolean logic gates. Since each such gate has two input signals and only one output signal, such architectures are fundamentally dissipative in information and energy. Their serial nature also induces a latency in the processing time. In this paper we present a new, principally non-dissipative digital logic architecture which mitigates the above impediments. Unlike traditional computing architectures, the proposed architecture involves a distributed and parallel input scheme where logical functions are evaluated at the speed of light. The system is based on digital logic vectors rather than the Boolean scalars of electronic logic. The architecture employs a novel conception of cascading which utilizes the strengths of both optics and electronics while avoiding their weaknesses. It is inherently non-dissipative, respects the linear nature of interactions in pure optics, and harnesses the control advantages of electrons without reducing the speed advantages of optics. This new logic paradigm was specially developed with optical implementation in mind. However, it is suitable for other implementations as well, including conventional electronic devices." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
In @cite_29 the idea of sorting by using some properties of light is introduced. The method, called Rainbow Sort, is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light traversing a prism is sorted by wavelength. To implement Rainbow Sort one needs to perform the following steps:
{ "cite_N": [ "@cite_29" ], "mid": [ "2045864543" ], "abstract": [ "Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this \"rainbow effect\" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in ? (n) with a space complexity of ? (n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of ? (n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
An optical solution for solving the traveling salesman problem (TSP) was proposed in @cite_9 . The power of optics in this method was realized by using a fast matrix-vector multiplication between a binary matrix, representing all feasible TSP tours, and a gray-scale vector, representing the weights among the TSP cities. The multiplication was performed optically by using an optical correlator. To synthesize the initial binary matrix representing all feasible tours, an efficient algorithm was provided. However, since the number of all tours is exponential, the method is difficult to implement even for small instances.
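The matrix-vector formulation above is easy to reproduce in software for tiny instances. The sketch below is a classical re-implementation of the idea (not of the optical correlator), with an exponential number of matrix rows exactly as in the original method: the rows are the edge-incidence vectors of all feasible tours, the columns are city pairs, and a single matrix-vector product yields every tour length at once:

# Classical sketch of the binary tour matrix times weight vector idea for TSP.
import numpy as np
from itertools import permutations

def tsp_by_matrix_vector(weights):
    # weights: symmetric n x n list of inter-city distances
    n = len(weights)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    col = {e: k for k, e in enumerate(edges)}

    tours = [(0,) + p for p in permutations(range(1, n))]    # fix city 0
    M = np.zeros((len(tours), len(edges)), dtype=np.uint8)   # binary tour matrix
    for r, t in enumerate(tours):
        for a, b in zip(t, t[1:] + (t[0],)):                 # close the cycle
            M[r, col[(min(a, b), max(a, b))]] = 1

    w = np.array([weights[i][j] for i, j in edges], dtype=float)
    lengths = M @ w                                          # one matrix-vector product
    best = int(np.argmin(lengths))
    return tours[best], float(lengths[best])

if __name__ == "__main__":
    W = [[0, 2, 9, 10],
         [2, 0, 6, 4],
         [9, 6, 0, 3],
         [10, 4, 3, 0]]
    print(tsp_by_matrix_vector(W))    # ((0, 1, 3, 2), 18.0)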
{ "cite_N": [ "@cite_9" ], "mid": [ "2061959054" ], "abstract": [ "We present a new optical method for solving bounded (input-length-restricted) NP-complete combinatorial problems. We have chosen to demonstrate the method with an NP-complete problem called the traveling salesman problem (TSP). The power of optics in this method is realized by using a fast matrix-vector multiplication between a binary matrix, representing all feasible TSP tours, and a gray-scale vector, representing the weights among the TSP cities. The multiplication is performed optically by using an optical correlator. To synthesize the initial binary matrix representing all feasible tours, an efficient algorithm is provided. Simulations and experimental results prove the validity of the new method." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
An optical system which finds solutions to the 6-city TSP using a Kohonen-type network was proposed in @cite_7 . The system shows robustness with regard to light intensity fluctuations and weight discretization, which have been simulated. Using such heuristic methods, a relatively large number of TSP cities can be handled.
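For readers unfamiliar with the Kohonen approach itself, the following is a minimal, purely software elastic-ring self-organising map for TSP. It is a generic textbook-style heuristic, not the optical implementation of @cite_7 , and the learning-rate and neighbourhood schedules are arbitrary choices made for the example:

# Minimal Kohonen (SOM) heuristic for TSP: an elastic ring of nodes is pulled
# towards the cities; the order of the cities' best-matching nodes gives a tour.
import numpy as np

def som_tsp(cities, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    n_nodes = 3 * len(cities)
    ring = rng.random((n_nodes, 2))                      # node positions in the plane
    lr, radius = 0.8, n_nodes / 2.0

    for _ in range(n_iter):
        city = cities[rng.integers(len(cities))]
        winner = np.argmin(np.linalg.norm(ring - city, axis=1))
        dist = np.abs(np.arange(n_nodes) - winner)
        dist = np.minimum(dist, n_nodes - dist)          # circular distance on the ring
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))     # neighbourhood kernel
        ring += lr * h[:, None] * (city - ring)          # pull nodes towards the city
        lr *= 0.99977                                    # decay schedules (arbitrary)
        radius = max(radius * 0.9995, 1.0)

    order = np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities])
    return order                                          # visiting order of the cities

if __name__ == "__main__":
    pts = np.array([[0, 0], [0, 1], [1, 1], [1, 0], [0.5, 1.5], [1.5, 0.5]])
    print(som_tsp(pts))    # a permutation of the 6 city indices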
{ "cite_N": [ "@cite_7" ], "mid": [ "1552943972" ], "abstract": [ "A systems is described which finds solutions to the 6-city TSP using a Kohonen-type network. The system shows robustness with regard to the light intensity fluctuations and weight discretization which have been simulated. Scalability to larger size problems appears straightforward." ] }
0708.1962
1654960974
We suggest a new optical solution for solving the YES NO version of the Exact Cover problem by using the massive parallelism of light. The idea is to build an optical device which can generate all possible solutions of the problem and then to pick the correct one. In our case the device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible covers (exact or not) of the given set. For selecting the correct solution we assign to each item, from the set to be covered, a special integer number. These numbers will actually represent delays induced to light when it passes through arcs. The solution is represented as a subray arriving at a certain moment in the destination node. This will tell us if an exact cover does exist or not.
A similar idea was used in @cite_24 for solving the TSP. The device uses white light interferometry to find the shortest TSP path.
{ "cite_N": [ "@cite_24" ], "mid": [ "2142607374" ], "abstract": [ "We consider the special case of the traveling salesman problem (TSP) in which the distance metric is the shortest-path metric of a planar unweighted graph. We present a polynomial-time approximation scheme (PTAS) for this problem." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us to generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light is passing through an arc it is delayed by the amount of time indicated by the number placed in that arc. At the destination node we will check if there is a ray whose total delay is equal to the target value of the subset sum problem (plus some constants). The proposed optical solution solves a NP-complete problem in time proportional with the target sum, but requires an exponential amount of energy.
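To make the delay mechanism concrete, the following classical simulation tracks the arrival time of every subray: a taken element contributes its value as a delay, and a skipped element contributes a fixed constant delay c (standing in for the "predefined constant" above). Choosing c larger than the sum of all elements is an assumption made here so that an arrival time decodes unambiguously into a subset sum; the device's actual constants may differ, and the enumeration is of course exponential, mirroring the exponential number of rays:

# Classical simulation of the delay-based subset-sum device: every subray's
# arrival time is the sum of the chosen values plus one constant c per skipped
# element.  Assumption for this sketch: c > sum(values), so arrival % c
# recovers the subset sum exactly (values and target assumed positive).
from itertools import product

def subset_sum_by_delays(values, target):
    n = len(values)
    c = sum(values) + 1                        # skip-arc delay (assumed here)
    for choice in product([0, 1], repeat=n):   # one subray per subset
        arrival = (sum(v for v, b in zip(values, choice) if b)
                   + c * (n - sum(choice)))
        if 0 < target == arrival % c:          # destination checks the arrival moment
            return [v for v, b in zip(values, choice) if b]
    return None

if __name__ == "__main__":
    print(subset_sum_by_delays([3, 5, 7, 11], 15))   # [3, 5, 7]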
Another idea is to use light instead of electrical power. It is hoped that optical computing could advance computer architecture and improve the speed of data input and output by several orders of magnitude @cite_12.
{ "cite_N": [ "@cite_12" ], "mid": [ "2139842859" ], "abstract": [ "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14]." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check if there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
Many theoretical and practical light-based devices have been proposed for dealing with various problems. Optical computation has some advantages, one of them being that it can perform some operations faster than conventional devices. An example is the @math -point discrete Fourier transform, which can be computed optically in only unit time @cite_1 @cite_5. Based on that, a solution to the subset-sum problem can be obtained by discrete convolution. The idea is that the convolution of two functions is the same as the pointwise product of their frequency-domain (Fourier) representations @cite_9.
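The convolution remark can be made concrete with a short sketch (a conventional simulation, not an optical implementation): the generating polynomial prod_i (1 + x^a_i) is built by repeated discrete convolutions, and a subset summing to the target exists exactly when the corresponding coefficient is nonzero.

```python
import numpy as np

def subset_sum_by_convolution(numbers, target):
    """Subset sum via repeated discrete convolution (polynomial products).

    The coefficient of x**t in prod_i (1 + x**a_i) counts the subsets summing
    to t.  Optically, each convolution could be realized as a pointwise
    product in the Fourier domain, which is the remark made above.
    """
    poly = np.array([1.0])                 # the polynomial "1"
    for a in numbers:
        factor = np.zeros(a + 1)
        factor[0] = factor[a] = 1.0        # the polynomial 1 + x**a
        poly = np.convolve(poly, factor)   # one discrete convolution per element
    return target < len(poly) and poly[target] > 0.5

if __name__ == "__main__":
    print(subset_sum_by_convolution([3, 5, 8, 13], 16))  # True
    print(subset_sum_by_convolution([3, 5, 8, 13], 7))   # False
```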
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_1" ], "mid": [ "2139842859", "2027561983", "1532824777" ], "abstract": [ "Optical-computing technology offers new challenges to algorithm designers since it can perform an n-point discrete Fourier transform (DFT) computation in only unit time. Note that the DFT is a nontrivial computation in the parallel random-access machine model, a model of computing commonly used by parallel-algorithm designers. We develop two new models, the DFT–VLSIO (very-large-scale integrated optics) and the DFT–circuit, to capture this characteristic of optical computing. We also provide two paradigms for developing parallel algorithms in these models. Efficient parallel algorithms for many problems, including polynomial and matrix computations, sorting, and string matching, are presented. The sorting and string-matching algorithms are particularly noteworthy. Almost all these algorithms are within a polylog factor of the optical-computing (VLSIO) lower bounds derived by Barakat Reif [Appl. Opt.26, 1015 (1987) and by Tyagi Reif [Proceedings of the Second IEEE Symposium on Parallel and Distributed Processing (Institute of Electrical and Electronics Engineers, New York, 1990) p. 14].", "We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. Through simulation of a well known model of computation from computer theory we investigate the general-purpose capabilities of analog optical processors.", "In this paper we suggest the use of light for performing useful computations. Namely, we propose a special device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check if there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
A recent paper @cite_28 introduces the idea of sorting by using some properties of light. The method, called Rainbow Sort, is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength (see Figure (b)). Implementing Rainbow Sort requires performing only a short sequence of steps.
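A software analogue of the wavelength-based separation, assuming integer keys between 0 and m as in the cited abstract: the "prism" is replaced by sending each key to the detector slot indexed by its value, which reproduces the O(n+m) behaviour mentioned there (the optical details and physical limits discussed in the paper are not modelled).

```python
def rainbow_sort(keys, max_key):
    """Software analogue of Rainbow Sort for integer keys in 0..max_key.

    The prism is modelled by sending each key to the detector position indexed
    by its value; reading the detectors in order yields the sorted sequence.
    """
    detectors = [0] * (max_key + 1)              # one detector per wavelength/position
    for k in keys:                               # O(n): disperse the beam
        detectors[k] += 1
    out = []
    for value, hits in enumerate(detectors):     # O(m): read detectors in order
        out.extend([value] * hits)
    return out

if __name__ == "__main__":
    print(rainbow_sort([5, 1, 4, 1, 9, 0], max_key=9))  # [0, 1, 1, 4, 5, 9]
```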
{ "cite_N": [ "@cite_28" ], "mid": [ "2045864543" ], "abstract": [ "Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this \"rainbow effect\" that appears in nature has nothing to do with a computation in the classical sense, still it can be used to design a sorting method that has the potential of running in ? (n) with a space complexity of ? (n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented and a lower bound for Rainbow Sort of ? (n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n+m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check if there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
Naughton et al. @cite_14 @cite_2 proposed and investigated a model called the continuous space machine, which operates in discrete time-steps over a number of two-dimensional complex-valued images of constant size and arbitrary spatial resolution. The (constant-time) operations on images include Fourier transformation, multiplication, addition, thresholding, copying and scaling.
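For illustration only, the listed primitive operations can be mimicked on ordinary 2-D complex arrays with numpy; in the optical model each primitive is charged a single time-step, which this plain simulation of course does not reproduce.

```python
import numpy as np

# Each "image" is a 2-D complex-valued array.  The functions below mimic the
# primitive operations listed for the continuous space machine.

def fourier(img):            return np.fft.fft2(img)
def multiply(a, b):          return a * b                      # pointwise product
def add(a, b):               return a + b
def threshold(img, lo, hi):  return np.clip(img.real, lo, hi)  # clamp amplitudes
def copy(img):               return img.copy()
def scale(img, shape):       # resample to another spatial resolution (nearest neighbour)
    rows = np.linspace(0, img.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

if __name__ == "__main__":
    img = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
    spectrum = fourier(copy(img))
    print(threshold(multiply(spectrum, spectrum), 0.0, 1.0).shape)   # (8, 8)
    print(scale(img, (4, 4)).shape)                                  # (4, 4)
```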
{ "cite_N": [ "@cite_14", "@cite_2" ], "mid": [ "2101615041", "2041855012" ], "abstract": [ "We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log2n) unordered search algorithm with our model.", "We introduce a volumetric space-time technique for the reconstruction of moving and deforming objects from point data. The output of our method is a four-dimensional space-time solid, made up of spatial slices, each of which is a three-dimensional solid bounded by a watertight manifold. The motion of the object is described as an incompressible flow of material through time. We optimize the flow so that the distance material moves from one time frame to the next is bounded, the density of material remains constant, and the object remains compact. This formulation overcomes deficiencies in the acquired data, such as persistent occlusions, errors, and missing frames. We demonstrate the performance of our flow-based technique by reconstructing coherent sequences of watertight models from incomplete scanner data." ] }
0708.1964
2079532011
We propose an optical computational device which uses light rays for solving the subset-sum problem. The device has a graph-like representation and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way which lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When the light passes through an arc it is delayed by the amount of time indicated by the number placed on that arc. At the destination node we check if there is a ray whose total delay is equal to the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
A system which solves the Hamiltonian path problem (HPP) @cite_4 by using light and its properties has been proposed in @cite_21 @cite_27. The device has the same structure as the graph in which the solution is to be found. The light is delayed within nodes, whereas the delays introduced by arcs are constant. Because the problem asks that each node be visited exactly once, a special delaying system was designed. At the destination node we search for a ray which has visited each node exactly once. This is easy to detect thanks to the special properties of the delaying system.
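A rough software rendering of the delay idea, assuming node delays of 2**i (one convenient choice of "special delaying system"; the cited papers construct their own) and ignoring the constant arc delays: after exactly n-1 arcs, a ray whose accumulated delay is 2**n - 1 must have visited every node exactly once, since n powers of two can reach that sum only when they are all distinct.

```python
def hamiltonian_path_exists(adj, start, end):
    """Software analogue of the delay-based HPP device (illustrative delays).

    Node i delays every passing ray by 2**i; the common constant arc delays
    are ignored.  A ray arriving after exactly n - 1 arcs with total delay
    2**n - 1 has visited every node exactly once.
    """
    n = len(adj)
    target_delay = 2 ** n - 1
    rays = {(start, 2 ** start)}                 # (current node, accumulated delay)
    for _ in range(n - 1):                       # a Hamiltonian path uses n - 1 arcs
        rays = {(w, delay + 2 ** w) for (v, delay) in rays for w in adj[v]}
    # exponentially many rays in the worst case, mirroring the physical device
    return any(v == end and delay == target_delay for (v, delay) in rays)

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [2], 2: [3], 3: []}
    print(hamiltonian_path_exists(adj, 0, 3))    # True: 0 -> 1 -> 2 -> 3
    print(hamiltonian_path_exists(adj, 1, 3))    # False: node 0 is never visited
```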
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4" ], "mid": [ "1532824777", "2081798814", "2955172121" ], "abstract": [ "In this paper we suggest the use of light for performing useful computations. Namely, we propose a special device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.", "In this paper we propose a special computational device which uses light rays for solving the Hamiltonian path problem on a directed graph. The device has a graph-like representation and the light is traversing it by following the routes given by the connections between nodes. In each node the rays are uniquely marked so that they can be easily identified. At the destination node we will search only for particular rays that have passed only once through each node. We show that the proposed device can solve small and medium instances of the problem in reasonable time.", "Over the last decade, several techniques have been developed for looking around the corner by exploiting the round-trip travel time of photons. Typically, these techniques necessitate the collection of a large number of measurements with varying virtual source and virtual detector locations. This data is then processed by a reconstruction algorithm to estimate the hidden scene. As a consequence, even when the region of interest in the hidden volume is small and limited, the acquisition time needed is large as the entire dataset has to be acquired and then processed.In this paper, we present the first example of scanning based non-line-of-sight imaging technique. The key idea is that if the virtual sources (pulsed sources) on the wall are delayed using a quadratic delay profile (much like the quadratic phase of a focusing lens), then these pulses arrive at the same instant at a single point in the hidden volume – the point being scanned. On the imaging side, applying quadratic delays to the virtual detectors before integration on a single gated detector allows us to ‘focus’ and scan each point in the hidden volume. By changing the quadratic delay profiles, we can focus light at different points in the hidden volume. This provides the first example of scanning based non-line-of-sight imaging, allowing us to focus our measurements only in the region of interest. We derive the theoretical underpinnings of ‘temporal focusing’, show compelling simulations of performance analysis, build a hardware prototype system and demonstrate real results." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Paper @cite_12 was the first of a series of papers offering justification for the thesis for particular classes of algorithms. They all follow the same general pattern: describe axiomatically a class A of algorithms; define behavioral equivalence of A-algorithms; define a class M of abstract state machines; and prove the following characterization theorem for A: @math, and every @math is behaviorally equivalent to some @math. The characterization provides a theoretical programming language for A and opens the way for more practical languages for A. The justification of the ASM Thesis thus obtained is speculative in two ways: the claim that A captures the intuitive class of intended algorithms is open to criticism, and so is the chosen definition of behavioral equivalence. But the characterization of A by M is precise, and in this sense the procedure establishes the ASM thesis for the class of algorithms A modulo the chosen behavioral equivalence.
{ "cite_N": [ "@cite_12" ], "mid": [ "2142611800" ], "abstract": [ "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In this subsection we briefly overview the realization of this program for isolated small-step algorithms in @cite_12 and ordinary interactive small-step algorithms in @cite_27 @cite_21 @cite_4 .
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4", "@cite_12" ], "mid": [ "2142611800", "2134723009", "197061278", "1600348603" ], "abstract": [ "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs.", "In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.", "In earlier work, the Abstract State Machine Thesis — that arbitrary algorithms are behaviorally equivalent to abstract state machines — was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and the proof to cover interactive smallstep algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment’s replies but also the order in which the replies were received. In order to prove the thesis for algorithms of this generality, we extend the definition of abstract state machines to incorporate explicit attention to the relative timing of replies and to the possible absence of replies. ∗Partially supported by NSF grant DMS–0070723 and by a grant from Microsoft Research. Address: Mathematics Department, University of Michigan, Ann Arbor, MI 48109–1043, U.S.A., ablass@umich.edu. Much of this paper was written during a visit to Microsoft Research. †Microsoft Research, One Microsoft Way, Redmond, WA 98052, U.S.A. gurevich@microsoft.com ‡Microsoft Research; and University of Zagreb, FSB, I. 
Lucica 5, 10000 Zagreb, Croatia, dean@math.hr §Microsoft Research; current address: Computer Science Dept., M.I.T., Cambridge, MA 02139, U.S.A., brossman@mit.edu", "References.- Technical Foundations.- 1 Introduction.- 2 Graphs and Their Representation.- 3 Graph Planarity and Embeddings.- 4 Graph Drawing Methods.- References.- WilmaScope - A 3D Graph Visualization System.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Pajek - Analysis and Visualization of Large Networks.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Tulip - A Huge Graph Visualization Framework.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Graphviz and Dynagraph - Static and Dynamic Graph Drawing Tools.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- AGD - A Library of Algorithms for Graph Drawing.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- yFiles - Visualization and Automatic Layout of Graphs.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- GDS - A Graph Drawing Server on the Internet.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- BioPath - Exploration and Visualization of Biochemical Pathways.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- DBdraw - Automatic Layout of Relational Database Schemas.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- GoVisual - A Diagramming Software for UML Class Diagrams.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- CrocoCosmos - 3D Visualization of Large Object-oriented Programs.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- ViSta - Visualizing Statecharts.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- visone - Analysis and Visualization of Social Networks.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References.- Polyphemus and Hermes - Exploration and Visualization of Computer Networks.- 1 Introduction.- 2 Applications.- 3 Algorithms.- 4 Implementation.- 5 Examples.- 6 Software.- References." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The algorithms of @cite_12 are executed by a single sequential agent and are isolated in the following sense: there is no interaction with the environment during the execution of a step. The environment can intervene between the algorithm's steps. But we concentrate on step-for-step simulation, and so inter-step interaction with the environment can be ignored. This class of algorithms is axiomatized by three simple postulates.
{ "cite_N": [ "@cite_12" ], "mid": [ "2950462959" ], "abstract": [ "Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step of the agent-environment interactions. In this paper, we propose a novel framework, Fine Grained Action Repetition (FiGAR), which enables the agent to decide the action as well as the time scale of repeating it. FiGAR can be used for improving any Deep Reinforcement Learning algorithm which maintains an explicit policy estimate by enabling temporal abstractions in the action space. We empirically demonstrate the efficacy of our framework by showing performance improvements on top of three policy search algorithms in different domains: Asynchronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy Optimization in Mujoco domain and Deep Deterministic Policy Gradients in the TORCS car racing domain." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Sequential Time Postulate says that an algorithm defines a deterministic transition system, a (not necessarily finite-state) automaton. More explicitly, the algorithm determines a nonempty collection of states, a nonempty subcollection of initial states, and a state-transition function. The algorithm is presumed to be deterministic; nondeterministic choices involve interaction with the environment, see @cite_15 @cite_12 @cite_27 for discussion. The term state is used in a comprehensive way. For example, in the case of a Turing machine, a state would include not only the control state but also the head position and the tape contents.
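Read operationally, the postulate just asks for the data below; the toy rendering is ours and not part of the cited formalism.

```python
from dataclasses import dataclass
from typing import Any, Callable, Set

@dataclass
class TransitionSystem:
    """The Sequential Time Postulate rendered as data: a nonempty collection
    of states, a nonempty sub-collection of initial states, and a
    deterministic one-step transition function.  'State' is whatever the
    algorithm needs (for a Turing machine: control state + head + tape)."""
    states: Set[Any]
    initial: Set[Any]
    step: Callable[[Any], Any]

    def run(self, state: Any, n: int) -> Any:
        assert state in self.initial, "runs start in an initial state"
        for _ in range(n):
            state = self.step(state)        # one step of the algorithm
        return state

if __name__ == "__main__":
    # toy algorithm: doubling modulo 13 (illustrative only)
    ts = TransitionSystem(states=set(range(13)), initial={1},
                          step=lambda s: (2 * s) % 13)
    print(ts.run(1, 5))   # 1 -> 2 -> 4 -> 8 -> 3 -> 6
```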
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_12" ], "mid": [ "1963619740", "2142611800", "2260314326" ], "abstract": [ "This paper deals with probabilistic and nondeterministic processes represented by a variant of labeled transition systems where any outgoing transition of a state s is augmented with probabilities for the possible successor states. Our main contributions are algorithms for computing this bisimulation equivalence classes as introduced by Larsen and Skou (1996, Inform. and Comput.99, 1?28), and the simulation preorder a la Segala and Lynch (1995, Nordic J. Comput.2, 250?273). The algorithm for deciding bisimilarity is based on a variant of the traditional partitioning technique and runs in time O(mn(logm+logn)) where m is the number of transitions and n the number of states. The main idea for computing the simulation preorder is the reduction to maximum flow problems in suitable networks. Using the method of Cheriyan, Hagerup, and Mehlhorn, (1990, Lecture Notes in Computer Science, Vol. 443, pp. 235?248, Springer-Verlag, Berlin) for computing the maximum flow, the algorithm runs in time O((mn6+m2n3) logn). Moreover, we show that the network-based technique is also applicable to compute the simulation-like relation of Jonsson and Larsen (1991, “Proc. LICS'91” pp. 266?277) in fully probabilistic systems (a variant of ordinary labeled transition systems where the nondeterminism is totally resolved by probabilistic choices).", "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs.", "Deterministic finite automata DFA have long served as a fundamental computational model in the study of theoretical computer science, and the problem of learning a DFA from given input data is a classic topic in computational learning theory. In this paper we study the learnability of a random DFA and propose a computationally efficient algorithm for learning and recovering a random DFA from uniform input strings and state information in the statistical query model. A random DFA is uniformly generated: for each state-symbol pair @math , we choose a state @math with replacement uniformly and independently at random and let @math , where Q is the state space, @math is the alphabet and @math is the transition function. 
The given data are string-state pairs x,i¾?q where x is a string drawn uniformly at random and q is the state of the DFA reached on input x starting from the start state @math . A theoretical guarantee on the maximum absolute error of the algorithm in the statistical query model is presented. Extensive experiments demonstrate the efficiency and accuracy of the algorithm." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Abstract State Postulate says that all states are first-order structures of a fixed vocabulary, that the transition function does not change the base set of a state, and that isomorphism of structures preserves everything, which here means states, initial states and the transition function. It reflects the vast experience of mathematics and mathematical logic according to which every static mathematical situation can be adequately represented as a first-order structure. The idea behind the second requirement is that, even when the base set seems to increase with the creation of new objects, those objects can be regarded as having been already present in a "reserve" part of the state. What looks like creation is then regarded as taking an element out of the reserve and into the active part of the state. (The nondeterministic choice of the element is made by the environment.) See @cite_15 @cite_12 @cite_27 @cite_21 and the next section for discussion. The idea behind the third requirement is that all relevant state information is reflected in the vocabulary: if your algorithm can distinguish red integers from green integers, then it is not just about integers.
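A toy rendering of such a state as a structure with an explicit reserve (the vocabulary, the interpretations and the import mechanism shown here are illustrative simplifications, not the cited definitions):

```python
class Structure:
    """Toy first-order structure over a fixed vocabulary of function names.

    The base set is split into an active part and a reserve; 'importing'
    moves an element from the reserve into the active part, which is how
    apparent creation of new objects is modelled (which reserve element is
    taken is the environment's choice, simulated here by pop())."""

    def __init__(self, active, reserve, interpretations):
        self.active = set(active)
        self.reserve = set(reserve)
        self.interp = dict(interpretations)   # name -> {argument tuple: value}

    def value(self, fname, args):
        return self.interp[fname][tuple(args)]

    def update(self, fname, args, new_value):  # one location update
        self.interp[fname][tuple(args)] = new_value

    def import_element(self):
        x = self.reserve.pop()                 # environment's nondeterministic choice
        self.active.add(x)
        return x

if __name__ == "__main__":
    S = Structure(active={0, 1}, reserve={"r1", "r2"},
                  interpretations={"succ": {(0,): 1, (1,): 0}})
    fresh = S.import_element()
    S.update("succ", (1,), fresh)              # what looks like creating a new object
    print(fresh in S.active, S.value("succ", (1,)) == fresh)   # True True
```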
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_21", "@cite_12" ], "mid": [ "2031764672", "2117161218", "2007053458", "1815155750" ], "abstract": [ "Abstract Define a (∀1 unary)-sentence to be a prenex first-order sentence of unary type (i.e., a type which only contains unary relation and function symbols and constant symbols) with only one (universal) quantifier. A successor structure is a structure 〈 B , S 〉 such that S is a function which is a permutation of the basis B with only one cycle. We exhibit a (∀1, unary)-sentence φ of type S , U 1 , …, U p such that if B is finite then 〈 B , S 〉 is a successor structure if 〈 B , S 〉 satisfies ∃ U 1 , …, ∃ U p ϕ . It implies that ⋃ NRAM( cn )=SPECTRA(∀1, unary), c ⩾1 where NRAM(cn) denotes the class of sets of positive integers accepted by a nondeterministic random access machine in time cn (where n is the input integer) and SPECTRA(∀1, unary) is the class of finite spectra of (∀1, unary)-sentences. Another consequence is that some graph properties (hamiltonicity, connectedness) can be characterised by sentences with unary function symbols and constant symbols and only one variable. This contrasts with the result (by Fagin and De Rougemont) that these two graph properties are not definable by monadic generalized spectra (without function symbols) even in the presence of an underlying successor relation.", "In this paper, we present a new visual way of exploring state sequences in large observational time-series. A key advantage of our method is that it can directly visualize higher-order state transitions. A standard first order state transition is a sequence of two states that are linked by a transition. A higher-order state transition is a sequence of three or more states where the sequence of participating states are linked together by consecutive first order state transitions. Our method extends the current state-graph exploration methods by employing a two dimensional graph, in which higher-order state transitions are visualized as curved lines. All transitions are bundled into thick splines, so that the thickness of an edge represents the frequency of instances. The bundling between two states takes into account the state transitions before and after the transition. This is done in such a way that it forms a continuous representation in which any subsequence of the timeseries is represented by a continuous smooth line. The edge bundles in these graphs can be explored interactively through our incremental selection algorithm. We demonstrate our method with an application in exploring labeled time-series data from a biological survey, where a clustering has assigned a single label to the data at each time-point. In these sequences, a large number of cyclic patterns occur, which in turn are linked to specific activities. We demonstrate how our method helps to find these cycles, and how the interactive selection process helps to find and investigate activities.", "The spectrum of a first-order sentence is the set of cardinalities of its finite models. We refine the well-known equality between the class of spectra and the class of sets (of positive integers) accepted by nondeterministic Turing machines in polynomial time. Let @math denote the class of spectra of sentences with d universal quantifiers. For any integer @math and each set of positive integers, A, we obtain: [ A NTIME (n^d ) A Sp (d ) A NTIME (n^d ( n)^2 ). ] Further the first implication holds even if we use multidimensional nondeterministic Turing machines. 
These results hold similarly for generalized spectra. As a consequence, we obtain a simplified proof of a hierarchy result of P. Pudlak about (generalized) spectra. We also prove that the set of primes is the spectrum of a certain sentence with only one variable.", "We present a novel approach for generalizing the IC3 algorithm for invariant checking from finite-state to infinite-state transition systems, expressed over some background theories. The procedure is based on a tight integration of IC3 with Implicit (predicate) Abstraction, a technique that expresses abstract transitions without computing explicitly the abstract system and is incremental with respect to the addition of predicates. In this scenario, IC3 operates only at the Boolean level of the abstract state space, discovering inductive clauses over the abstraction predicates. Theory reasoning is confined within the underlying SMT solver, and applied transparently when performing satisfiability checks. When the current abstraction allows for a spurious counterexample, it is refined by discovering and adding a sufficient set of new predicates. Importantly, this can be done in a completely incremental manner, without discarding the clauses found in the previous search." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Bounded Exploration Postulate expresses the idea that a sequential algorithm (in the traditional meaning of the term) computes in "steps of bounded complexity" @cite_7. More explicitly, it asserts that the values of a finite set @math of terms (also called expressions), which depends only on the algorithm and not on the input or state, determine the state change (more exactly, the set of location updates) for every step; see @cite_15 @cite_12 or the next section for precise definitions of locations and updates.
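The postulate can be illustrated with a toy algorithm (ours, not from the cited papers): the update set computed in a state depends only on the values of a fixed tuple of critical terms, so two states agreeing on those terms get the same updates.

```python
# Toy illustration: an algorithm that swaps a and b when a > b.  Its fixed set
# of critical terms is T = ("a", "b"); the update set produced in a state
# depends only on the values of these terms.
CRITICAL_TERMS = ("a", "b")

def update_set(state):
    """Return the set of location updates ((name, args), new_value) for one step."""
    a, b = (state[t] for t in CRITICAL_TERMS)
    if a > b:
        return {(("a", ()), b), (("b", ()), a)}
    return set()

if __name__ == "__main__":
    s1 = {"a": 7, "b": 3, "colour": "red"}
    s2 = {"a": 7, "b": 3, "colour": "green"}   # agrees with s1 on the critical terms
    print(update_set(s1) == update_set(s2))    # True: same values of T, same update set
```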
{ "cite_N": [ "@cite_15", "@cite_12", "@cite_7" ], "mid": [ "1995437102", "2952554977", "2151690061" ], "abstract": [ "There is growing interest in algorithms for processing and querying continuous data streams (i.e., data seen only once in a fixed order) with limited memory resources. In its most general form, a data stream is actually an update stream, i.e., comprising data-item deletions as well as insertions. Such massive update streams arise naturally in several application domains (e.g., monitoring of large IP network installations or processing of retail-chain transactions). Estimating the cardinality of set expressions defined over several (possibly distributed) update streams is perhaps one of the most fundamental query classes of interest; as an example, such a query may ask “what is the number of distinct IP source addresses seen in passing packets from both router R1 and R 2 but not router R3?”. Earlier work only addressed very restricted forms of this problem, focusing solely on the special case of insert-only streams and specific operators (e.g., union). In this paper, we propose the first space-efficient algorithmic solution for estimating the cardinality of full-fledged set expressions over general update streams. Our estimation algorithms are probabilistic in nature and rely on a novel, hash-based synopsis data structure, termed ”2-level hash sketch”. We demonstrate how our 2-level hash sketch synopses can be used to provide low-error, high-confidence estimates for the cardinality of set expressions (including operators such as set union, intersection, and difference) over continuous update streams, using only space that is significantly sublinear in the sizes of the streaming input (multi-)sets. Furthermore, our estimators never require rescanning or resampling of past stream items, regardless of the number of deletions in the stream. We also present lower bounds for the problem, demonstrating that the space usage of our estimation algorithms is within small factors of the optimal. Finally, we propose an optimized, time-efficient stream synopsis (based on 2-level hash sketches) that provides similar, strong accuracy-space guarantees while requiring only guaranteed logarithmic maintenance time per update, thus making our methods applicable for truly rapid-rate data streams. Our results from an empirical study of our synopsis and estimation techniques verify the effectiveness of our approach.", "In this paper we consider a fragment of the first-order theory of the real numbers that includes systems of equations of continuous functions in bounded domains, and for which all functions are computable in the sense that it is possible to compute arbitrarily close piece-wise interval approximations. Even though this fragment is undecidable, we prove that there is a (possibly non-terminating) algorithm for checking satisfiability such that (1) whenever it terminates, it computes a correct answer, and (2) it always terminates when the input is robust. A formula is robust, if its satisfiability does not change under small perturbations. As a basic tool for our algorithm we use the notion of degree from the field of (differential) topology.", "The greedy sequential algorithm for maximal independent set (MIS) loops over the vertices in an arbitrary order adding a vertex to the resulting set if and only if no previous neighboring vertex has been added. In this loop, as in many sequential loops, each iterate will only depend on a subset of the previous iterates (i.e. 
knowing that any one of a vertex's previous neighbors is in the MIS, or knowing that it has no previous neighbors, is sufficient to decide its fate one way or the other). This leads to a dependence structure among the iterates. If this structure is shallow then running the iterates in parallel while respecting the dependencies can lead to an efficient parallel implementation mimicking the sequential algorithm. In this paper, we show that for any graph, and for a random ordering of the vertices, the dependence length of the sequential greedy MIS algorithm is polylogarithmic (O(log^2 n) with high probability). Our results extend previous results that show polylogarithmic bounds only for random graphs. We show similar results for greedy maximal matching (MM). For both problems we describe simple linear-work parallel algorithms based on the approach. The algorithms allow for a smooth tradeoff between more parallelism and reduced work, but always return the same result as the sequential greedy algorithms. We present experimental results that demonstrate efficiency and the tradeoff between work and parallelism." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The characterization theorem of @cite_12 establishes the ASM thesis for the class A of algorithms defined by the Sequential Time, Abstract State, and Bounded Exploration postulates, and for the class M of machines defined by the basic ASM language of update rules, parallel rules and conditional rules @cite_3 @cite_15 @cite_12.
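A minimal interpreter for these three rule forms, written as an illustrative sketch rather than the official semantics (in particular, clashing updates are not detected here):

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

# States are dicts mapping locations (function name, argument tuple) to values.
# A rule, evaluated in a state, yields a set of location updates.

@dataclass
class Update:                      # f(t1,...,tn) := t0
    location: Callable[[dict], Tuple[str, tuple]]
    value: Callable[[dict], Any]

@dataclass
class Par:                         # do-in-parallel block
    rules: List[Any]

@dataclass
class If:                          # if guard then rule
    guard: Callable[[dict], bool]
    rule: Any

def updates(rule: Any, state: dict) -> set:
    if isinstance(rule, Update):
        return {(rule.location(state), rule.value(state))}
    if isinstance(rule, Par):
        return set().union(*[updates(r, state) for r in rule.rules])
    if isinstance(rule, If):
        return updates(rule.rule, state) if rule.guard(state) else set()
    raise TypeError(f"unknown rule {rule!r}")

def step(rule: Any, state: dict) -> dict:
    new_state = dict(state)
    new_state.update(dict(updates(rule, state)))    # fire all updates at once
    return new_state

if __name__ == "__main__":
    # swap x and y when x > y, written as a conditional over a parallel block
    swap = If(lambda s: s[("x", ())] > s[("y", ())],
              Par([Update(lambda s: ("x", ()), lambda s: s[("y", ())]),
                   Update(lambda s: ("y", ()), lambda s: s[("x", ())])]))
    print(step(swap, {("x", ()): 9, ("y", ()): 2}))   # x becomes 2, y becomes 9
```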
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_12" ], "mid": [ "2139801570", "158271966", "1998333922" ], "abstract": [ "We consider parallel algorithms working in sequential global time, for example, circuits or parallel random access machines (PRAMs). Parallel abstract state machines (parallel ASMs) are such parallel algorithms, and the parallel ASM thesis asserts that every parallel algorithm is behaviorally equivalent to a parallel ASM. In an earlier article, we axiomatized parallel algorithms, proved the ASM thesis, and proved that every parallel ASM satisfies the axioms. It turned out that we were too timid in formulating the axioms; they did not allow a parallel algorithm to create components on the fly. This restriction did not hinder us from proving that the usual parallel models, like circuits or PRAMs or even alternating Turing machines, satisfy the postulates. But it resulted in an error in our attempt to prove that parallel ASMs always satisfy the postulates. To correct the error, we liberalize our axioms and allow on-the-fly creation of new parallel components. We believe that the improved axioms accurately express what parallel algorithms ought to be. We prove the parallel thesis for the new, corrected notion of parallel algorithms, and we check that parallel ASMs satisfy the new axioms.", "We present an idea how to simplify Gurevich's parallel ASM thesis. The key idea is to modify only the bounded exploration postulate from the sequential ASM thesis by allowing also non-ground comprehension terms. The idea arises from comparison with work on ASM foundations of database transformations.", "We examine sequential algorithms and formulate a sequential-time postulate, an abstract-state postulate, and a bounded-exploration postulate . Analysis of the postulates leads us to the notion of sequential abstract-state machine and to the theorem in the title. First we treat sequential algorithms that are deterministic and noninteractive. Then we consider sequential algorithms that may be nondeterministic and that may interact with their environments." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
While the intent in @cite_12 is to capture algorithms executing steps in isolation from the environment, a degree of intra-step interaction is accommodated in the ASM literature since @cite_15: (i) using the import command to create new elements, and (ii) marking certain functions as external and allowing the environment to provide the values of external functions. One pretends that the interaction is inter-step. This requires the environment to anticipate some actions of the algorithm. Also, in @cite_15, nesting of external functions was prohibited; the first study of ASMs with nested external functions was @cite_21. The notion of determines whether interaction is allowed; see @cite_15 @cite_12 for precise definitions and discussion.
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_12" ], "mid": [ "2142611800", "1973673849", "2139801570" ], "abstract": [ "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs.", "In the field of infrastructures’ surveillance and protection, it is important to make decisions based on activities occurring in the environment and its local context and conditions. In this paper we use an active rule based event processing architecture in order to make sense of situations from the combination of different signals received by the rule engine. However obtaining some high level information automatically is not without risks, especially in sensitive environments, and detection mistakes can happen for various reasons: the signal’s source can be defective, whether it is human—miss-interpretation of the signal—or computed—material malfunction; the aggregation rules can be wrong syntaxically, for example when a rule will never be triggered or a situation never detected; the interpretation given to the combination of signals does not correspond to the reality on the field—because the knowledge of the rule designer is subjective or because the environment evolves over-time—the rules are therefore incorrect semantically. In this paper, a new approach is proposed to avoid the third kind of error sources. We present a hybrid machine learning technique adapted to the complexity of the rules’ representation, in order to create a system more conform to reality. The proposed approach uses a combination of an Association Rule Mining algorithm and Inductive Logic Programming for rule induction. Empirical studies on simulated datasets demonstrate how our method can contribute to sensible systems such as the security of a public or semi-public place.", "We consider parallel algorithms working in sequential global time, for example, circuits or parallel random access machines (PRAMs). Parallel abstract state machines (parallel ASMs) are such parallel algorithms, and the parallel ASM thesis asserts that every parallel algorithm is behaviorally equivalent to a parallel ASM. In an earlier article, we axiomatized parallel algorithms, proved the ASM thesis, and proved that every parallel ASM satisfies the axioms. 
It turned out that we were too timid in formulating the axioms; they did not allow a parallel algorithm to create components on the fly. This restriction did not hinder us from proving that the usual parallel models, like circuits or PRAMs or even alternating Turing machines, satisfy the postulates. But it resulted in an error in our attempt to prove that parallel ASMs always satisfy the postulates. To correct the error, we liberalize our axioms and allow on-the-fly creation of new parallel components. We believe that the improved axioms accurately express what parallel algorithms ought to be. We prove the parallel thesis for the new, corrected notion of parallel algorithms, and we check that parallel ASMs satisfy the new axioms." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
In @cite_27 it is argued at length why the inter-step form of interaction cannot suffice for all modeling needs. As a small example take the computation of @math where @math is an external call (a query) with argument 7, whose result @math is used as the argument for a new query @math . An attempt to model this as inter-step interaction would force splitting the computation into substeps. But at some level of abstraction we may want to evaluate @math within a single step. Limiting interaction to the inter-step mode would necessarily lower the abstraction level.
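The point can be phrased as a two-line sketch (the environment object and the names f and g are stand-ins, not part of the cited example beyond what is stated above): the reply to the first query is consumed within the same step to form the second query.

```python
class Environment:
    """Stand-in environment answering queries; f and g are illustrative names
    matching only the shape of the example in the text."""
    def answer(self, name, *args):
        table = {"f": lambda x: x + 1, "g": lambda x: 10 * x}
        return table[name](*args)

def one_step(env):
    y = env.answer("f", 7)       # first query, with argument 7
    return env.answer("g", y)    # second query, whose argument is the first reply

if __name__ == "__main__":
    print(one_step(Environment()))   # intra-step interaction within a single step: 80
```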
{ "cite_N": [ "@cite_27" ], "mid": [ "2151463894" ], "abstract": [ "The success of model checking for large programs depends crucially on the ability to efficiently construct parsimonious abstractions. A predicate abstraction is parsimonious if at each control location, it specifies only relationships between current values of variables, and only those which are required for proving correctness. Previous methods for automatically refining predicate abstractions until sufficient precision is obtained do not systematically construct parsimonious abstractions: predicates usually contain symbolic variables, and are added heuristically and often uniformly to many or all control locations at once. We use Craig interpolation to efficiently construct, from a given abstract error trace which cannot be concretized, a parsominous abstraction that removes the trace. At each location of the trace, we infer the relevant predicates as an interpolant between the two formulas that define the past and the future segment of the trace. Each interpolant is a relationship between current values of program variables, and is relevant only at that particular program location. It can be found by a linear scan of the proof of infeasibility of the trace.We develop our method for programs with arithmetic and pointer expressions, and call-by-value function calls. For function calls, Craig interpolation offers a systematic way of generating relevant predicates that contain only the local variables of the function and the values of the formal parameters when the function was called. We have extended our model checker Blast with predicate discovery by Craig interpolation, and applied it successfully to C programs with more than 130,000 lines of code, which was not possible with approaches that build less parsimonious abstractions." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Thus @cite_27 sets modeling interaction as its goal. Different forms of interaction, such as message-passing, database queries, remote procedure calls, inputs, outputs, and signals, all reduce to a single universal form: a single-reply, zero-or-more-arguments, not-necessarily-blocking query. All arguments and the reply (if any) should be elements of the state if they are to make sense to the algorithm. For a formal definition of queries see @cite_27 ; a reminder is given in the next section. For a detailed discussion of, and arguments for, the universality of the query-reply approach see @cite_27 .
{ "cite_N": [ "@cite_27" ], "mid": [ "1990391007" ], "abstract": [ "ABSTRACT This paper concerns the semantics of Codd's relational model of data. Formulated are precise conditions that should be satisfied in a semantically meaningful extension of the usual relational operators, such as projection, selection, union, and join, from operators on relations to operators on tables with “null values” of various kinds allowed. These conditions require that the system be safe in the sense that no incorrect conclusion is derivable by using a specified subset Ω of the relational operators; and that it be complete in the sense that all valid conclusions expressible by relational expressions using operators in Ω are in fact derivable in this system. Two such systems of practical interest are shown. The first, based on the usual Codd's null values, supports projection and selection. The second, based on many different (“marked”) null values or variables allowed to appear in a table, is shown to correctly support projection, positive selection (with no negation occurring in the selection condition), union, and renaming of attributes, which allows for processing arbitrary conjunctive queries. A very desirable property enjoyed by this system is that all relational operators on tables are performed in exactly the same way as in the case of the usual relations. A third system, mainly of theoretical interest, supporting projection, selection, union, join, and renaming, is also discussed. Under a so-called closed world assumption, it can also handle the operator of difference. It is based on a device called a conditional table and is crucial to the proof of the correctness of the second system. All systems considered allow for relational expressions containing arbitrarily many different relation symbols, and no form of the universal relation assumption is required. Categories and Subject Descriptors: H.2.3 [Database Management]: Languages— query languages; H.2.4 [Database Management]: Systems— query processing General Terms: Theory" ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Articles @cite_27 @cite_21 @cite_4 limit themselves to interactive algorithms which are ordinary in the sense that they obey the following two restrictions: the actions of the algorithm depend only on the state and the replies to queries, and not on other aspects, such as the relative timing of replies; and the algorithm cannot complete its step unless it has received replies to all queries issued. The first restriction means that an algorithm can be seen as operating on pairs of the form @math , where @math is a state and @math an answer function over @math : a partial function mapping queries over @math to their replies. The second restriction means that all queries issued are blocking; the algorithm cannot complete its step without a reply. (Some uses of non-blocking, asynchronous queries can still be modeled, by assuming that some forms of queries always obtain a default answer. But this is an assumption on environment behavior.) The present paper lifts both restrictions, and thus extends the theory to interactive algorithms.
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4" ], "mid": [ "2142611800", "2395495808", "2592257482" ], "abstract": [ "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs.", "We consider the following natural generalization of Binary Search: in a given undirected, positively weighted graph, one vertex is a target. The algorithm’s task is to identify the target by adaptively querying vertices. In response to querying a node q, the algorithm learns either that q is the target, or is given an edge out of q that lies on a shortest path from q to the target. We study this problem in a general noisy model in which each query independently receives a correct answer with probability p > 1 2 (a known constant), and an (adversarial) incorrect one with probability 1 − p. Our main positive result is that when p = 1 (i.e., all answers are correct), log2 n queries are always sufficient. For general p, we give an (almost information-theoretically optimal) algorithm that uses, in expectation, no more than (1 − δ) logn 1 − H(p) + o(logn) + O(log2 (1 δ)) queries, and identifies the target correctly with probability at leas 1 − δ. Here, H(p) = −(p logp + (1 − p) log(1 − p)) denotes the entropy. The first bound is achieved by the algorithm that iteratively queries a 1-median of the nodes not ruled out yet; the second bound by careful repeated invocations of a multiplicative weights algorithm. Even for p = 1, we show several hardness results for the problem of determining whether a target can be found using K queries. Our upper bound of log2 n implies a quasipolynomial-time algorithm for undirected connected graphs; we show that this is best-possible under the Strong Exponential Time Hypothesis (SETH). Furthermore, for directed graphs, or for undirected graphs with non-uniform node querying costs, the problem is PSPACE-complete. For a semi-adaptive version, in which one may query r nodes each in k rounds, we show membership in Σ2k−1 in the polynomial hierarchy, and hardness for Σ2k−5.", "We consider the task of enumerating and counting answers to k-ary conjunctive queries against relational databases that may be updated by inserting or deleting tuples. 
We exhibit a new notion of q-hierarchical conjunctive queries and show that these can be maintained efficiently in the following sense. During a linear time pre-processing phase, we can build a data structure that enables constant delay enumeration of the query results; and when the database is updated, we can update the data structure and restart the enumeration phase within constant time. For the special case of self-join free conjunctive queries we obtain a dichotomy: if a query is not q-hierarchical, then query enumeration with sublinear *) delay and sublinear update time (and arbitrary preprocessing time) is impossible. For answering Boolean conjunctive queries and for the more general problem of counting the number of solutions of k-ary queries we obtain complete dichotomies: if the query's homomorphic core is q-hierarchical, then size of the the query result can be computed in linear time and maintained with constant update time. Otherwise, the size of the query result cannot be maintained with sublinear update time. All our lower bounds rely on the OMv-conjecture, a conjecture on the hardness of online matrix-vector multiplication that has recently emerged in the field of fine-grained complexity to characterise the hardness of dynamic problems. The lower bound for the counting problem additionally relies on the orthogonal vectors conjecture, which in turn is implied by the strong exponential time hypothesis.*) By sublinear we mean O(n(1-e) for some e > 0, where n is the size of the active domain of the current database." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Several of the postulates of @cite_27 cover the same ground as the postulates of @cite_12 , but of course taking answer functions into account. The most important new postulate is the Interaction Postulate, saying that the algorithm, for each state @math , determines a causality relation @math between finite answer functions and queries. The intuition behind @math is this: if, over state @math , the environment behaves according to answer function @math , then the algorithm issues @math . The causality relation is an abstract representation of the potential interaction of the algorithm with the environment.
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "2542071751", "2949121808" ], "abstract": [ "Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are @math observed contexts and @math arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality @math ( @math ). The arm choice and the latent confounder causally determines the reward while the observed context is correlated with the confounder. Under this model, the @math mean reward matrix @math (for each context in @math and each arm in @math ) factorizes into non-negative factors @math ( @math ) and @math ( @math ). This insight enables us to propose an @math -greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of @math at time @math , as compared to @math for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of @math . These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. We further compare the performance of our algorithm with the state of the art, on synthetic and real world data-sets.", "Let @math be a @math -ary predicate over a finite alphabet. Consider a random CSP @math instance @math over @math variables with @math constraints. When @math the instance @math will be unsatisfiable with high probability, and we want to find a refutation - i.e., a certificate of unsatisfiability. When @math is the @math -ary OR predicate, this is the well studied problem of refuting random @math -SAT formulas, and an efficient algorithm is known only when @math . Understanding the density required for refutation of other predicates is important in cryptography, proof complexity, and learning theory. Previously, it was known that for a @math -ary predicate, having @math constraints suffices for refutation. We give a criterion for predicates that often yields efficient refutation algorithms at much lower densities. Specifically, if @math fails to support a @math -wise uniform distribution, then there is an efficient algorithm that refutes random CSP @math instances @math whp when @math . Indeed, our algorithm will \"somewhat strongly\" refute @math , certifying @math , if @math then we get the strongest possible refutation, certifying @math . This last result is new even in the context of random @math -SAT. Regarding the optimality of our @math requirement, prior work on SDP hierarchies has given some evidence that efficient refutation of random CSP @math may be impossible when @math . Thus there is an indication our algorithm's dependence on @math is optimal for every @math , at least in the context of SDP hierarchies. Along these lines, we show that our refutation algorithm can be carried out by the @math -round SOS SDP hierarchy. 
Finally, as an application of our result, we falsify assumptions used to show hardness-of-learning results in recent work of Daniely, Linial, and Shalev-Shwartz." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
This refines the transition relation of the Sequential Time Postulate of @cite_12 . The possibility of explicit failure is new here; the algorithm may obtain replies that are absurd or inconsistent from its point of view, and it can fail in such a case. The next state, if there is one, is defined by an update set, which can also contain trivial updates: "updating" a location to its old value. Trivial updates do not contribute to the next state, but in composition with other algorithms they can contribute to a clash; see @cite_27 and also the next section for discussion.
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "2592257482", "2161407849" ], "abstract": [ "We consider the task of enumerating and counting answers to k-ary conjunctive queries against relational databases that may be updated by inserting or deleting tuples. We exhibit a new notion of q-hierarchical conjunctive queries and show that these can be maintained efficiently in the following sense. During a linear time pre-processing phase, we can build a data structure that enables constant delay enumeration of the query results; and when the database is updated, we can update the data structure and restart the enumeration phase within constant time. For the special case of self-join free conjunctive queries we obtain a dichotomy: if a query is not q-hierarchical, then query enumeration with sublinear *) delay and sublinear update time (and arbitrary preprocessing time) is impossible. For answering Boolean conjunctive queries and for the more general problem of counting the number of solutions of k-ary queries we obtain complete dichotomies: if the query's homomorphic core is q-hierarchical, then size of the the query result can be computed in linear time and maintained with constant update time. Otherwise, the size of the query result cannot be maintained with sublinear update time. All our lower bounds rely on the OMv-conjecture, a conjecture on the hardness of online matrix-vector multiplication that has recently emerged in the field of fine-grained complexity to characterise the hardness of dynamic problems. The lower bound for the counting problem additionally relies on the orthogonal vectors conjecture, which in turn is implied by the strong exponential time hypothesis.*) By sublinear we mean O(n(1-e) for some e > 0, where n is the size of the active domain of the current database.", "Reliable systems have always been built out of unreliable components [1]. Early on, the reliable components were small such as mirrored disks or ECC (Error Correcting Codes) in core memory. These systems were designed such that failures of these small components were transparent to the application. Later, the size of the unreliable components grew larger and semantic challenges crept into the application when failures occurred. Fault tolerant algorithms comprise a set of idempotent subalgorithms. Between these idempotent sub-algorithms, state is sent across the failure boundaries of the unreliable components. The failure of an unreliable component can then be tolerated as a takeover by a backup, which uses the last known state and drives forward with a retry of the idempotent sub-algorithm. Classically, this has been done in a linear fashion (i.e. one step at a time). As the granularity of the unreliable component grows (from a mirrored disk to a system to a data center), the latency to communicate with a backup becomes unpalatable. This leads to a more relaxed model for fault tolerance. The primary system will acknowledge the work request and its actions without waiting to ensure that the backup is notified of the work. This improves the responsiveness of the system because the user is not delayed behind a slow interaction with the backup. There are two implications of asynchronous state capture: 1) Everything promised by the primary is probabilistic. There is always a chance that an untimely failure shortly after the promise results in a backup proceeding without knowledge of the commitment. Hence, nothing is guaranteed! 2) Applications must ensure eventual consistency [20]. 
Since work may be stuck in the primary after a failure and reappear later, the processing order for work cannot be guaranteed. Platform designers are struggling to make this easier for their applications. Emerging patterns of eventual consistency and probabilistic execution may soon yield a way for applications to express requirements for a “looser” form of consistency while providing availability in the face of ever larger failures. As we will also point out in this paper, the patterns of probabilistic execution and eventual consistency are applicable to intermittently connected application patterns. This paper recounts portions of the evolution of these trends, attempts to show the patterns that span these changes, and talks about future directions as we continue to “build on quicksand”." ] }
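The remark above about trivial updates contributing to clashes can be illustrated with a small generic sketch of update sets, using the standard ASM notion of a clash (two updates of the same location with different values). The locations and values below are invented for the example and are not taken from the cited papers.

```python
# Update sets as sets of (location, value) pairs.  Composing two components
# means taking the union of their update sets; a clash occurs when some
# location is assigned two different values.  A trivial update (re-writing
# the current value) can still clash with another component's non-trivial
# update of the same location.

def clashing_locations(update_set):
    """Return the locations that are assigned two different values."""
    seen, bad = {}, set()
    for loc, val in update_set:
        if loc in seen and seen[loc] != val:
            bad.add(loc)
        seen[loc] = val
    return bad

if __name__ == "__main__":
    state = {"x": 0, "y": 1}
    component_a = {("x", state["x"])}     # trivial update: old value of x
    component_b = {("x", 5), ("y", 2)}    # non-trivial update of x
    print(clashing_locations(component_a | component_b))   # {'x'}
```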
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The inductive character of the context definition is unwound and analyzed in detail in @cite_27 . The answer functions which can occur as stages in the inductive construction of contexts are called well-founded. This captures the intuition of answer functions which can actually arise as records of interaction between an algorithm and its environment. Two causality relations (over the same state) are equivalent if they make the same answer functions well-founded. Equivalent causality relations have the same contexts, but the converse is not in general true: intermediate intra-step behavior matters.
{ "cite_N": [ "@cite_27" ], "mid": [ "1549166962" ], "abstract": [ "A new form of SAT-based symbolic model checking is described. Instead of unrolling the transition relation, it incrementally generates clauses that are inductive relative to (and augment) stepwise approximate reachability information. In this way, the algorithm gradually refines the property, eventually producing either an inductive strengthening of the property or a counterexample trace. Our experimental studies show that induction is a powerful tool for generalizing the unreachability of given error states: it can refine away many states at once, and it is effective at focusing the proof search on aspects of the transition system relevant to the property. Furthermore, the incremental structure of the algorithm lends itself to a parallel implementation." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The Bounded Work Postulate of @cite_27 extends the Bounded Exploration Postulate of @cite_12 to queries. As a consequence, every well-founded answer function is finite. Furthermore, there is a uniform bound on the size of well-founded answer functions.
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "48624319", "1840129892" ], "abstract": [ "Motivated by the problem of querying and communicating bidders' valuations in combinatorial auctions, we study how well different classes of set functions can be sketched. More formally, let f be a function mapping subsets of some ground set [n] to the non-negative real numbers. We say that f' is an α-sketch of f if for every set S, the value f'(S) lies between f(S) α and f(S), and f' can be specified by poly(n) bits. We show that for every subadditive function f there exists an α-sketch where α = n1 2. O(polylog(n)). Furthermore, we provide an algorithm that finds these sketches with a polynomial number of demand queries. This is essentially the best we can hope for since: 1. We show that there exist subadditive functions (in fact, XOS functions) that do not admit an o(n1 2) sketch. (Balcan and Harvey [3] previously showed that there exist functions belonging to the class of substitutes valuations that do not admit an O(n1 3) sketch.) 2. We prove that every deterministic algorithm that accesses the function via value queries only cannot guarantee a sketching ratio better than n1−e. We also show that coverage functions, an interesting subclass of submodular functions, admit arbitrarily good sketches. Finally, we show an interesting connection between sketching and learning. We show that for every class of valuations, if the class admits an α-sketch, then it can be α-approximately learned in the PMAC model of Balcan and Harvey. The bounds we prove are only information-theoretic and do not imply the existence of computationally efficient learning algorithms in general.", "Bounded increase is a termination technique where it is tried to find an argument x of a recursive function that is increased repeatedly until it reaches a bound b, which might be ensured by a condition x<b. Since the predicates like < may be arbitrary user-defined recursive functions, an induction calculus is utilized to prove conditional constraints. In this paper, we present a full formalization of bounded increase in the theorem prover Isabelle HOL. It fills one large gap in the pen-and-paper proof, and it includes generalized inference rules for the induction calculus as well as variants of the Babylonian algorithm to compute square roots. These algorithms were required to write executable functions which can certify untrusted termination proofs from termination tools that make use of bounded increase. And indeed, the resulting certifier was already useful: it detected an implementation error that remained undetected since 2007." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
The characterization theorem for the class A of algorithms defined in @cite_27 and the class M of machines defined in @cite_21 is proved in @cite_4 .
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_4" ], "mid": [ "2086631032", "2165155277", "1714704734" ], "abstract": [ "A general theorem on the complexity of a class of recognition problems is proved. As a particular case the following result is given: There is no algorithm which, for any 2-coloration of the infinite complete graph, can produce a monochromatic subgraph of k vertices within 2k2 steps (at each step the color of an arbitrary edge is questioned).", "A new large margin classifier, named Maxi-Min Margin Machine (M4) is proposed in this paper. This new classifier is constructed based on both a \"local: and a \"global\" view of data, while the most popular large margin classifier, Support Vector Machine (SVM) and the recently-proposed important model, Minimax Probability Machine (MPM) consider data only either locally or globally. This new model is theoretically important in the sense that SVM and MPM can both be considered as its special case. Furthermore, the optimization of M4 can be cast as a sequential conic programming problem, which can be solved efficiently. We describe the M4 model definition, provide a clear geometrical interpretation, present theoretical justifications, propose efficient solving methods, and perform a series of evaluations on both synthetic data sets and real world benchmark data sets. Its comparison with SVM and MPM also demonstrates the advantages of our new model.", "In this paper we study a paradigm to generalize online classification algorithms for binary classification problems to multiclass problems. The particular hypotheses we investigate maintain one prototype vector per class. Given an input instance, a multiclass hypothesis computes a similarity-score between each prototype and the input instance and sets the predicted label to be the index of the prototype achieving the highest similarity. To design and analyze the learning algorithms in this paper we introduce the notion of ultraconservativeness. Ultraconservative algorithms are algorithms that update only the prototypes attaining similarity-scores which are higher than the score of the correct label's prototype. We start by describing a family of additive ultraconservative algorithms where each algorithm in the family updates its prototypes by finding a feasible solution for a set of linear constraints that depend on the instantaneous similarity-scores. We then discuss a specific online algorithm that seeks a set of prototypes which have a small norm. The resulting algorithm, which we term MIRA (for Margin Infused Relaxed Algorithm) is ultraconservative as well. We derive mistake bounds for all the algorithms and provide further analysis of MIRA using a generalized notion of the margin for multiclass problems. We discuss the form the algorithms take in the binary case and show that all the algorithms from the first family reduce to the Perceptron algorithm while MIRA provides a new Perceptron-like algorithm with a margin-dependent learning rate. We then return to multiclass problems and describe an analogous multiplicative family of algorithms with corresponding mistake bounds. We end the formal part by deriving and analyzing a multiclass version of Li and Long's ROMMA algorithm. We conclude with a discussion of experimental results that demonstrate the merits of our algorithms." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Andrei N. Kolmogorov @cite_7 has given another mathematical description of computation, presumably motivated by the physics of computation rather than by an analysis of the actions of a human computer. For a detailed presentation of Kolmogorov's approach, see @cite_10 . Also see @cite_17 and the references there for information about research on pointer machines. Like Turing's model, these computation models also lower the abstraction level of algorithms.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_17" ], "mid": [ "2104359186", "1999580286", "2029308360" ], "abstract": [ "In 1974, Kolmogorov proposed a nonprobabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The \"structure function\" of the given data expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best fitting model in the class irrespective of whether the \"true\" model is in the model class considered or not. In this setting, this happens with certainty, rather than with high probability as is in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data. We show that-within the obvious constraints-every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the \"algorithmic minimal sufficient statistic.\".", "We extend algorithmic information theory to quantum mechanics, taking a universal semicomputable density matrix ( universal probability') as a starting point, and define complexity (an operator) as its negative logarithm. A number of properties of Kolmogorov complexity extend naturally to the new domain. Approximately, a quantum state is simple if it is within a small distance from a low-dimensional subspace of low Kolmogorov complexity. The von Neumann entropy of a computable density matrix is within an additive constant from the average complexity. Some of the theory of randomness translates to the new domain. We explore the relations of the new quantity to the quantum Kolmogorov complexity defined by Vitanyi (we show that the latter is sometimes as large as 2n − 2 log n) and the qubit complexity defined by Berthiaume, Dam and Laplante. The cloning' properties of our complexity measure are similar to those of qubit complexity.", "In?1964 Kolmogorov introduced the concept of the complexity of a?finite object (for instance, the words in a certain alphabet). He defined complexity as the minimum number of binary signs containing all the information about a?given object that are sufficient for its recovery (decoding). This definition depends essentially on the method of decoding. However, by means of the general theory of algorithms, Kolmogorov was able to give an invariant (universal) definition of complexity. Related concepts were investigated by Solomonoff (U.S.A.) and Markov. Using the concept of complexity, Kolmogorov gave definitions of the quantity of information in finite objects and of the concept of a?random sequence (which was then defined more precisely by Martin-L?f). Afterwards, this circle of questions developed rapidly. In particular, an interesting development took place of the ideas of Markov on the application of the concept of complexity to the study of quantitative questions in the theory of algorithms. The present article is a survey of the fundamental results connected with the brief remarks above." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
Yiannis Moschovakis @cite_23 proposed that the informal notion of algorithm be identified with the formal notion of recursor. A recursor is a monotone operator over partial functions whose least fixed point includes (as one component) the function that the algorithm computes. The approach does not seem to scale to algorithms interacting with an unknown environment. See Section 4.3 of [abs] for a critique of Moschovakis's computation model.
{ "cite_N": [ "@cite_23" ], "mid": [ "2345322282" ], "abstract": [ "Maximal monotone operators are set-valued mappings which extend (but are not limited to) the notion of subdifferential of a convex function. The proximal point algorithm is a method for finding a zero of a maximal monotone operator. The algorithm consists in fixed point iterations of a mapping called the resolvent which depends on the maximal monotone operator of interest. The paper investigates a stochastic version of the algorithm where the resolvent used at iteration k is associated to one realization of a random maximal monotone operator. We establish the almost sure ergodic convergence of the iterates to a zero of the expectation (in the Aumann sense) of the latter random operator. Application to constrained stochastic optimization is considered." ] }
0707.3782
2134723009
In earlier work, the Abstract State Machine Thesis -- that arbitrary algorithms are behaviorally equivalent to abstract state machines -- was established for several classes of algorithms, including ordinary, interactive, small-step algorithms. This was accomplished on the basis of axiomatizations of these classes of algorithms. Here we extend the axiomatization and, in a companion paper, the proof, to cover interactive small-step algorithms that are not necessarily ordinary. This means that the algorithms (1) can complete a step without necessarily waiting for replies to all queries from that step and (2) can use not only the environment's replies but also the order in which the replies were received.
An approach to interactive computing was pioneered by Peter Wegner and developed in particular in @cite_0 . The approach is based on special interactive variants of Turing machines called persistent Turing machines, in short PTMs. Interactive ASMs can step-for-step simulate PTMs. Goldin and Wegner assert that "any sequential interactive computation can be performed by a persistent Turing machine" @cite_19 . But this is not so if one intends to preserve the abstraction level of the given interactive algorithm. In particular, PTMs cannot step-for-step simulate interactive ASMs @cite_25 .
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_25" ], "mid": [ "2048671682", "2142611800", "2407991844" ], "abstract": [ "This paper presents persistent Turing machines (PTMs), a new way of interpreting Turing-machine computation, based on dynamic stream semantics. A PTM is a Turing machine that performs an infinite sequence of ''normal'' Turing machine computations, where each such computation starts when the PTM reads an input from its input tape and ends when the PTM produces an output on its output tape. The PTM has an additional worktape, which retains its content from one computation to the next; this is what we mean by persistence. A number of results are presented for this model, including a proof that the class of PTMs is isomorphic to a general class of effective transition systems called interactive transition systems; and a proof that PTMs without persistence (amnesic PTMs) are less expressive than PTMs. As an analogue of the Church-Turing hypothesis which relates Turing machines to algorithmic computation, it is hypothesized that PTMs capture the intuitive notion of sequential interactive computation.", "This is the second in a series of three articles extending the proof of the Abstract State Machine Thesis---that arbitrary algorithms are behaviorally equivalent to abstract state machines---to algorithms that can interact with their environments during a step, rather than only between steps. As in the first article of the series, we are concerned here with ordinary, small-step, interactive algorithms. This means that the algorithms: (1) proceed in discrete, global steps, (2) perform only a bounded amount of work in each step, (3) use only such information from the environment as can be regarded as answers to queries, and (4) never complete a step until all queries from that step have been answered. After reviewing the previous article's formal description of such algorithms and the definition of behavioral equivalence, we define ordinary, interactive, small-step abstract state machines (ASMs). Except for very minor modifications, these are the machines commonly used in the ASM literature. We define their semantics in the framework of ordinary algorithms and show that they satisfy the postulates for these algorithms. This material lays the groundwork for the final article in the series, in which we shall prove the Abstract State Machine thesis for ordinary, intractive, small-step algorithms: All such algorithms are equivalent to ASMs.", "Consider two parties who wish to communicate in order to execute some interactive protocol π. However, the communication channel between them is noisy: An adversary sees everything that is transmitted over the channel and can change a constant fraction of the bits as he pleases, thus interrupting the execution of π (which was designed for an errorless channel). If π only contained one message, then a good error correcting code would have overcame the noise with only a constant overhead in communication, but this solution is not applicable to interactive protocols with many short messages. Schulman (FOCS 92, STOC 93) presented the notion of interactive coding: A simulator that, given any protocol π, is able to simulate it (i.e. produce its intended transcript) even with constant rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or sub-exponential) in the communication complexity of π (denoted N in this work). 
Brakerski and Kalai (FOCS 12) recently presented a simulator that runs in time poly (N). Their simulator is randomized (each party flips private coins) and has failure probability roughly 2−N. In this work, we improve the computational complexity of interactive coding. While at least N computational steps are required (even just to output the transcript of π), the BK simulator runs in time" ] }
0707.4333
2082245279
We prove a nearly optimal bound on the number of stable homotopy types occurring in a k-parameter semi-algebraic family of sets in @math , each defined in terms of m quadratic inequalities. Our bound is exponential in k and m, but polynomial in @math . More precisely, we prove the following. Let @math be a real closed field and let \[ \mathcal{P} = \{P_1,\ldots,P_m\} \subset \mathrm{R}[Y_1,\ldots,Y_\ell,X_1,\ldots,X_k], \] with @math . Let @math be a semi-algebraic set, defined by a Boolean formula without negations, whose atoms are of the form @math . Let @math be the projection on the last k co-ordinates. Then, the number of stable homotopy types amongst the fibers @math is bounded by \[ (2^m k d)^{O(mk)}. \]
In another direction Agrachev @cite_5 studied the topology of semi-algebraic sets defined by quadratic inequalities, and he defined a certain spectral sequence converging to the homology groups of such sets. A parametrized version of Agrachev's construction is in fact a starting point of our proof of the main theorem in this paper.
{ "cite_N": [ "@cite_5" ], "mid": [ "1822242587" ], "abstract": [ "Prologue Point-set topology, change functors, and proper actions: Introduction to Part I The point-set topology of parametrized spaces Change functors and compatibility relations Proper actions, equivariant bundles and fibrations Model categories and parametrized spaces: Introduction to Part II Topologically bicomplete model categories Well-grounded topological model categories The @math -model structure on @math Equivariant @math -type model structures Ex-fibrations and ex-quasifibrations The equivalence between Ho @math and @math Parametrized equivariant stable homotopy theory: Introduction to Part III Enriched categories and @math -categories The category of orthogonal @math -spectra over @math Model structures for parametrized @math -spectra Adjunctions and compatibility relations Module categories, change of universe, and change of groups Parametrized duality theory: Introduction to Part IV Fiberwise duality and transfer maps Closed symmetric bicategories The closed symmetric bicategory of parametrized spectra Costenoble-Waner duality Fiberwise Costenoble-Waner duality Homology and cohomology, Thom spectra, and addenda: Introduction to Part V Parametrized homology and cohomology theories Equivariant parametrized homology and cohomology Twisted theories and spectral sequences Parametrized FSP's and generalized Thom spectra Epilogue: Cellular philosophy and alternative approaches Bibliography Index Index of notation." ] }
0705.1364
1773390463
A path from s to t on a polyhedral terrain is descending if the height of a point p never increases while we move p along the path from s to t. No efficient algorithm is known to find a shortest descending path (SDP) from s to t in a polyhedral terrain. We give a simple approximation algorithm that solves the SDP problem on general terrains. Our algorithm discretizes the terrain with O(n^2 X / ε) Steiner points so that after an O((n^2 X / ε) log(nX
One generalization of the Weighted Region Problem is finding a shortest anisotropic path @cite_18 , where the weight assigned to a region depends on the direction of travel. The weights in this problem capture, for example, the effect of gravity and friction on a vehicle moving on a slope. @cite_12 , Sun and Reif @cite_16 and Sun and Bu @cite_0 solved this problem by placing Steiner points along the edges.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_16", "@cite_12" ], "mid": [ "1480698831", "2050710547", "2147998016", "1989723858" ], "abstract": [ "We discuss the problem of computing shortest anisotropic paths on terrains. Anisotropic path costs take into account the length of the path traveled, possibly weighted, and the direction of travel along the faces of the terrain. Considering faces to be weighted has added realism to the study of (pure) Euclidean shortest paths. Parameters such as the varied nature of the terrain, friction, or slope of each face, can be captured via face weights. Anisotropic paths add further realism by taking into consideration the direction of travel on each face thereby e.g., eliminating paths that are too steep for vehicles to travel and preventing the vehicles from turning over. Prior to this work an O(nn) time algorithm had been presented for computing anisotropic paths. Here we present the first polynomial time approximation algorithm for computing shortest anisotropic paths. Our algorithm is simple to implement and allows for the computation of shortest anisotropic paths within a desired accuracy. Our result addresses the corresponding problem posed in [12].", "The authors address anisotropic friction and gravity effects as well as ranges of impermissible-traversal headings due to overturn danger or power limitations. The method does not require imposition of a uniform grid, nor does it average effects in different directions, but reasons about a polyhedral approximation of terrain. It reduces the problem to a finite but provably optimal set of possibilities and then uses A* search to find the cost-optimal path. However, the possibilities are not physical locations but path subspaces. The method also exploits the insight that there are only four ways to optimally traverse an anisotropic homogeneous region: (1) straight across without braking, which is the standard isotropic-weighted-region traversal; (2) straight across without braking but as close as possible to a desired impermissible heading; (3) making impermissibility-avoiding switchbacks on the path across a region; and (4) straight across with braking. The authors prove specific optimality criteria for transitions on the boundaries of regions for each combination of traversal types. >", "We consider a rectilinear shortest path problem among weighted obstacles. Instead of restricting a path to totally avoid obstacles we allow a path to pass through them at extra costs. The extra costs are represented by the weights of the obstacles. We aim to find a shortest rectilinear path between two distinguished points among a set of weighted obstacles. The unweighted case is a special case of this problem when the weight of each obstacle is +∞. By using a graph-theoretical approach, we obtain two algorithms which run in O(n log2 n) time and O(n log n) space and in O(n log3 2 n) time and space, respectively, where n is the number of the vertices of the obstacles.", "The problem of determining shortest paths through a weighted planar polygonal subdivision with n vertices is considered. Distances are measured according to a weighted Euclidean metric: The length of a path is defined to be the weighted sum of (Euclidean) lengths of the subpaths within each region. An algorithm that constructs a (restricted) “shortest path map” with respect to a given source point is presented. 
The output is a partitioning of each edge of the subdivion into intervals of e-optimality, allowing an e-optimal path to be traced from the source to any query point along any edge. The algorithm runs in worst-case time O ( ES ) and requires O ( E ) space, where E is the number of “events” in our algorithm and S is the time it takes to run a numerical search procedure. In the worst case, E is bounded above by O ( n 4 ) (and we give an O( n 4 ) lower bound), but it is likeky that E will be much smaller in practice. We also show that S is bounded by O ( n 4 L ), where L is the precision of the problem instance (including the number of bits in the user-specified tolerance e). Again, the value of S should be smaller in practice. The algorithm applies the “continuous Dijkstra” paradigm and exploits the fact that shortest paths obey Snell's Law of Refraction at region boundaries, a local optimaly property of shortest paths that is well known from the analogous optics model. The algorithm generalizes to the multi-source case to compute Voronoi diagrams." ] }
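The Steiner-point technique mentioned in the related-work paragraph of this record (placing extra points along edges and searching the resulting discretization) can be sketched on a toy weighted-region instance. The two-region geometry, the weights, and the point counts below are invented; this is not the algorithm of any cited paper, only the flavor of edge discretization.

```python
# Toy sketch of edge discretization with Steiner points for a weighted-region
# shortest path: two regions of different per-unit-length cost share a
# vertical boundary edge; we sample Steiner points on that edge and keep the
# cheapest two-segment path through one of them.  All numbers are made up.
import math

def cheapest_crossing(s, t, boundary_x, y_range, w_left, w_right, k):
    """Approximate the cheapest s->t path crossing the boundary x = boundary_x,
    paying w_left per unit length left of the boundary and w_right to its
    right, using k evenly spaced Steiner points on the boundary segment."""
    y_min, y_max = y_range
    best = (math.inf, None)
    for i in range(k):
        y = y_min + (y_max - y_min) * i / (k - 1)
        p = (boundary_x, y)
        cost = w_left * math.dist(s, p) + w_right * math.dist(p, t)
        best = min(best, (cost, p))
    return best

if __name__ == "__main__":
    # More Steiner points give a finer discretization and a better path.
    for k in (3, 11, 101):
        cost, p = cheapest_crossing(s=(0.0, 0.0), t=(4.0, 3.0), boundary_x=2.0,
                                    y_range=(0.0, 3.0), w_left=1.0,
                                    w_right=3.0, k=k)
        print(f"k={k:4d}  crossing at y={p[1]:.3f}  cost={cost:.4f}")
```

In the cited algorithms the same idea is applied to every terrain edge and the resulting graph is searched with Dijkstra-style propagation; the toy above keeps a single boundary edge, so the search degenerates to a minimum over the Steiner points.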
0705.2065
1614404949
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
The most closely related work is that of Yao, Leonard et al. @cite_6 . They model heterogeneous user churn and the local resilience of unstructured P2P networks. They also concede early on that balancing model complexity against fidelity is required to make advances in this area. They examine both the Poisson and the Pareto distribution for user churn and provide a deep analysis on this front. Their work focuses on how churn affects connectivity in the network; we have separated this aspect from our work and concentrated on message throughput.
{ "cite_N": [ "@cite_6" ], "mid": [ "2119971988" ], "abstract": [ "Previous analytical results on the resilience of un-structured P2P systems have not explicitly modeled heterogeneity of user churn (i.e., difference in online behavior) or the impact of in-degree on system resilience. To overcome these limitations, we introduce a generic model of heterogeneous user churn, derive the distribution of the various metrics observed in prior experimental studies (e.g., lifetime distribution of joining users, joint distribution of session time of alive peers, and residual lifetime of a randomly selected user), derive several closed-form results on the transient behavior of in-degree, and eventually obtain the joint in out degree isolation probability as a simple extension of the out-degree model in [13]." ] }
0705.2065
1614404949
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
Other closely related work concerns mobile and ad hoc networks, and sensor networks, because these applications require robust communication techniques and tend to have limited buffer space at each node. The recent work of Lindemann and Waldhorst @cite_9 considers the use of epidemiology in mobile devices with finite buffers, and they follow the seven degrees of separation system @cite_3 . In particular, they use models for "power conservation", where each mobile device is ON with probability @math and OFF with probability @math . Their analytical model gives predictions that are very close to their simulation results. In our work we describe these states using an arrival rate, @math , and a departure rate, @math , which allows us to naturally relate them to a rate of message arrivals, @math . We focus solely on these parameters so that we can show precisely how they affect the message coverage rate.
{ "cite_N": [ "@cite_9", "@cite_3" ], "mid": [ "1976011215", "2140659572" ], "abstract": [ "During and immediately after their deployment, ad hoc and sensor networks lack an efficient communication scheme rendering even the most basic network coordination problems difficult. Before any reasonable communication can take place, nodes must come up with an initial structure that can serve as a foundation for more sophisticated algorithms. In this paper, we consider the problem of obtaining a vertex coloring as such an initial structure. We propose an algorithm that works in the unstructured radio network model. This model captures the characteristics of newly deployed ad hoc and sensor networks, i.e. asynchronous wake-up, no collision-detection, and scarce knowledge about the network topology. When modeling the network as a graph with bounded independence, our algorithm produces a correct coloring with O(Δ) colors in time O(Δ log n) with high probability, where n and Δ are the number of nodes in the network and the maximum degree, respectively. Also, the number of locally used colors depends only on the local node density. Graphs with bounded independence generalize unit disk graphs as well as many other well-known models for wireless multi-hop networks. They allow us to capture aspects such as obstacles, fading, or irregular signal-propagation.", "This thesis presents models for the performance analysis of a recent communication paradigm: mobile ad hoc networking. The objective of mobile ad hoc networking is to provide wireless connectivity between stations in a highly dynamic environment. These dynamics are driven by the mobility of stations and by breakdowns of stations, and may lead to temporary disconnectivity of parts of the network. Applications of this novel paradigm can be found in telecommunication services, but also in manufacturing systems, road-traffic control, animal monitoring and emergency networking. The performance of mobile ad hoc networks in terms of buffer occupancy and delay is quantified in this thesis by employing specific queueing models, viz., time-limited polling models. These polling models capture the uncontrollable characteristic of link availability in mobile ad hoc networks. Particularly, a novel, so-called pure exponential time-limited, service discipline is introduced in the context of polling systems. The highlighted performance characteristics for these polling systems include the stability, the queue lengths and the sojourn times of the customers. Stability conditions prescribe limits on the amount of tra±c that can be sustained by the system, so that the establishment of these conditions is a fundamental keystone in the analysis of polling models. Moreover, both exact and approximate analysis is presented for the queue length and sojourn time in time-limited polling systems with a single server. These exact analytical techniques are extended to multi-server polling systems operating under the pure time-limited service discipline. Such polling systems with multiple servers effectively may reflect large communication networks with multiple simultaneously active links, while the systems with a single server represent performance models for small networks in which a single communication link can be active at a time." ] }
0705.2065
1614404949
The churn rate of a peer-to-peer system places direct limitations on the rate at which messages can be effectively communicated to a group of peers. These limitations are independent of the topology and message transmission latency. In this paper we consider a peer-to-peer network, based on the Engset model, where peers arrive and depart independently at random. We show how the arrival and departure rates directly limit the capacity for message streams to be broadcast to all other peers, by deriving mean field models that accurately describe the system behavior. Our models cover the unit and more general k buffer cases, i.e. where a peer can buffer at most k messages at any one time, and we give results for both single and multi-source message streams. We define coverage rate as peer-messages per unit time, i.e. the rate at which a number of peers receive messages, and show that the coverage rate is limited by the churn rate and buffer size. Our theory introduces an Instantaneous Message Exchange (IME) model and provides a template for further analysis of more complicated systems. Using the IME model, and assuming random processes, we have obtained very accurate equations of the system dynamics in a variety of interesting cases, that allow us to tune a peer-to-peer system. It remains to be seen if we can maintain this accuracy for general processes and when applying a non-instantaneous model.
Other closely related work, such as @cite_4 , looks at the rate of file transmission in a file sharing system that is based on epidemics. The use of epidemics for large scale communication is also reviewed in @cite_0 . The probabilistic multicast technique in @cite_2 attempts to increase the probability that peers receive messages in which they are interested and to decrease the probability that peers receive messages in which they are not interested. Hence it introduces a notion of membership that is not very different from being online or offline. Autonomous Gossiping, presented in @cite_5 , provides further examples of using epidemics for selective information dissemination.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4", "@cite_2" ], "mid": [ "2152948161", "2144588366", "2249121382", "2059722914" ], "abstract": [ "Peer-to-peer applications have become highly popular in today's pervasive environments due to the spread of different file sharing platforms. In such a multiclient environment, if users have mobility characteristics, asymmetry in communication causes a degradation of reliability. This work proposes an approach based on the advantages of epidemic selective resource placement through mobile Infostations. Epidemic placement policy combines the strengths of both proactive multicast group establishment and hybrid Infostation concept. With epidemic selective placement we face the flooding problem locally (in geographic region landscape) and enable end to end reliability by forwarding requested packets to epidemically 'selected' mobile users in the network on a recursive basis. The selection of users is performed based on their remaining capacity, weakness of their signal and other explained mobility limitations. Examination through simulation is performed for the response and reliability offered by epidemic placement policy which reveals the robustness and reliability in file sharing among mobile peers.", "Epidemic algorithms have recently been proposed as an effective solution for disseminating information in large-scale peer-to-peer (P2P) systems and in mobile ad hoc networks (MANET). In this paper, we present a modeling approach for steady-state analysis of epidemic dissemination of information in MANET. As major contribution, the introduced approach explicitly represents the spread of multiple data items, finite buffer capacity at mobile devices and a least recently used buffer replacement scheme. Using the introduced modeling approach, we analyze seven degrees of separation (7DS) as one well-known approach for implementing P2P data sharing in a MANET using epidemic dissemination of information. A validation of results derived from the analytical model against simulation shows excellent agreement. Quantitative performance curves derived from the analytical model yield several insights for optimizing the system design of 7DS.", "We introduce autonomous gossiping (A G), a new genre epidemic algorithm for selective dissemination of information in contrast to previous usage of epidemic algorithms which flood the whole network. A G is a paradigm which suits well in a mobile ad-hoc networking (MANET) environment because it does not require any infrastructure or middleware like multicast tree and (un)subscription maintenance for publish subscribe, but uses ecological and economic principles in a self-organizing manner in order to achieve any arbitrary selectivity (flexible casting). The trade-off of using a stateless self-organizing mechanism like A G is that it does not guarantee completeness deterministically as is one of the original objectives of alternate selective dissemination schemes like publish subscribe. We argue that such incompleteness is not a problem in many non-critical real-life civilian application scenarios and realistic node mobility patterns, where the overhead of infrastructure maintenance may outweigh the benefits of completeness, more over, at present there exists no mechanism to realize publish subscribe or other paradigms for selective dissemination in MANET environments.", "For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. 
This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7 to 2.6 and improving viral marketing with 9.7 incremental customers." ] }
0705.1309
2953319296
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
The work by Gordon and Bentley @cite_5 differs from previous approaches by considering only communication and differentiation in the substrata. The grid starts with a cell at every available grid point, and cells communicate by diffusing chemicals to neighboring cells only. Each cell then receives as input a single chemical concentration, computed as the average of the concentrations of all neighboring cells: hence, no orientation information is available. In the Cellular Automata context, such a system is called a totalistic automaton. One drawback of this approach is that it requires some cells to have different chemical concentrations at start-up. Furthermore, it biases the whole model toward symmetrical patterns ("four-fold dihedral symmetry"). The controller is a set of 20 rules that produce one of the four chemicals and send it to the neighboring cells. The set of rules is represented by a bit vector and is evolved using a classical bitstring GA. The paper ends with some comparisons with previous works, namely @cite_8 @cite_2 , demonstrating comparable and sometimes better results. However, a possible explanation for that success could be the above-mentioned bias of the method toward symmetrical patterns.
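To make the "totalistic" restriction concrete, here is a minimal sketch (our own simplification, not the actual system of @cite_5 ; the local rule and the grid size are placeholders) of a single synchronous update in which each cell only sees the average concentration of its neighbors, so no orientation information survives.

```python
def totalistic_step(grid):
    """One synchronous update where each cell's input is the average of its
    four neighbors' concentrations (no orientation information)."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neighbors = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    neighbors.append(grid[ni][nj])
            avg = sum(neighbors) / len(neighbors)
            # Hypothetical local rule: the cell moves toward the neighborhood
            # average (a stand-in for the evolved rule table).
            new_grid[i][j] = 0.5 * (grid[i][j] + avg)
    return new_grid

if __name__ == "__main__":
    # 4x4 grid with one seeded cell, mimicking different start-up concentrations.
    grid = [[0.0] * 4 for _ in range(4)]
    grid[1][1] = 1.0
    for _ in range(3):
        grid = totalistic_step(grid)
    for row in grid:
        print(["%.3f" % v for v in row])
```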
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_8" ], "mid": [ "2166918926", "2029310694", "2066381067" ], "abstract": [ "Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. We show algebraically that the steady state concentration of these species correspond to the marginal distributions of the random variables in the graph and validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs, and approximate solutions for loopy graphs.", "Linear or one-dimensional reversible second-order cellular automata, exemplified by three cases named as RCA1–3, are introduced. Displays of their evolution in discrete time steps, , from their simplest initial states and on the basis of updating rules in modulo 2 arithmetic, are presented. In these, shaded and unshaded squares denote cells whose cell variables are equal to one and zero respectively. This paper is devoted to finding general formulas for, and explicit numerical evaluations of, the weights N(t) of the states or configurations of RCA1–3, i.e. the total number of shaded cells in tth line of their displays. This is achieved by means of the replacement of RCA1–3 by the equivalent linear first-order matrix automata MCA1–3, for which the cell variables are matrices, instead of just numbers () as for RCA1–3. MCA1–3 are tractable because it has been possible to generalize to them the heavy duty methods already well-developed for ordinary first-order cellular automata like those of Wolfram's Rules 90 and 150. While the automata MCA1–3 are thought to be of genuine interest in their own right, with untapped further mathematical potential, their treatment has been applied here to expediting derivation of a large body of general and explicit results for N(t) for RCA1–3. Amongst explicit results obtained are formulas also for each of RCA1–3 for the total weight of the configurations of the first times, .", "Abstract This article describes computer simulation of the dynamics of a distributed model of the olfactory system that is aimed at understanding the role of chaos in biological pattern recognition. The model is governed by coupled nonlinear differential equations with many variables and parameters, which allow multiple high-dimensional chaotic states. 
An appropriate set of the parameters is identified by computer experiments with the guidance of biological measurements, through which this model of the olfactory system maintains a low dimensional global chaotic attractor with multiple “wings.” The central part of the attractor is its basal chaotic activity, which simulates the electroencephalographic (EEG) activity of the olfactory system under zero signal input (exhalation). It provides the system with a ready state so that it is unnecessary for the system to “wake up” from or return to a “dormant” equilibrium state every time that an input is given (by inhalation). Each of the wings may be either a near-limit cycle (a narrow band chaos) or a broad band chaos. The reproducible spatial pattern of each near-limit cycle is determined by a template made in the system. A novel input with no template activates the system to either a nonreproducible near-limit cycle wing or a broad band chaotic wing. Pattern recognition in the system may be considered as the transition from one wing to another, as demonstrated by the computer simulation. The time series of the manifestations of the attractor are EEG-like waveforms with fractal dimensions that reflect which wing the system is placed in by input or lack of input. The computer simulation also shows that the adaptive behavior of the system is scaling invariant, and it is independent of the initial conditions at the transition from one wing to another. These properties enable the system to classify an uninterrupted sequence of stimuli." ] }
0705.1309
2953319296
This paper introduces a continuous model for Multi-cellular Developmental Design. The cells are fixed on a 2D grid and exchange "chemicals" with their neighbors during the growth process. The quantity of chemicals that a cell produces, as well as the differentiation value of the cell in the phenotype, are controlled by a Neural Network (the genotype) that takes as inputs the chemicals produced by the neighboring cells at the previous time step. In the proposed model, the number of iterations of the growth process is not pre-determined, but emerges during evolution: only organisms for which the growth process stabilizes give a phenotype (the stable state), others are declared nonviable. The optimization of the controller is done using the NEAT algorithm, that optimizes both the topology and the weights of the Neural Networks. Though each cell only receives local information from its neighbors, the experimental results of the proposed approach on the 'flags' problems (the phenotype must match a given 2D pattern) are almost as good as those of a direct regression approach using the same model with global information. Moreover, the resulting multi-cellular organisms exhibit almost perfect self-healing characteristics.
However, there are even greater similarities between the present work and that of @cite_5 . In both works, the grid is filled with cells at iteration 0 of the growth process (i.e. no replication is allowed) and chemicals are propagated only in a cell-cell fashion, without the diffusion mechanisms used in @cite_8 @cite_2 . Indeed, pure cell-cell communication is theoretically sufficient for modelling any kind of temporal diffusion function, since diffusion in the substrata is the result of successive transformations by non-linear functions (such as the ones implemented by sigmoidal neural networks with hidden neurons). However, this means that the optimization algorithm must tune both the diffusion reaction and the differentiation of the cells. On the other hand, whereas @cite_5 only consider the average of the chemical concentrations of the neighboring cells (i.e. it is totalistic in Cellular Automata terminology), our approach does take into account the topology of the organism at the controller level, thereby benefiting from orientation information. This results in a more general approach, though one that is probably less efficient at reaching symmetrical targets. Here again, further experiments must be run to give a solid answer.
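The following tiny sketch (our own illustration; the neighborhood order N, S, W, E is an arbitrary convention) contrasts the two input encodings: a totalistic controller receives one averaged value and therefore cannot distinguish where a chemical came from, while an orientation-aware controller receives one input per neighbor in a fixed order.

```python
def totalistic_input(north, south, west, east):
    """Orientation-free encoding: a single averaged concentration."""
    return [(north + south + west + east) / 4.0]

def oriented_input(north, south, west, east):
    """Orientation-aware encoding: one input per neighbor, in a fixed order,
    so the controller can tell which direction each chemical came from."""
    return [north, south, west, east]

if __name__ == "__main__":
    # Two neighborhoods with the same average but different layouts.
    a = (1.0, 0.0, 0.0, 0.0)   # chemical arrives from the north
    b = (0.0, 0.0, 0.0, 1.0)   # chemical arrives from the east
    print(totalistic_input(*a) == totalistic_input(*b))  # True: indistinguishable
    print(oriented_input(*a) == oriented_input(*b))      # False: distinguishable
```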
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_8" ], "mid": [ "1622908138", "2006456190", "2022178479" ], "abstract": [ "We consider a channel model based on the diffusion of particles in the medium which is motivated by the natural communication mechanisms between biological cells based on exchange of molecules. In this model, the transmitter secretes particles into the medium via a particle dissemination rate. The concentration of particles at any point in the medium is a function of its distance from the transmitter and the particle dissemination rate. The reception process is a doubly stochastic Poisson process whose rate is a function of the concentration of the particles in the vicinity of the receiver. We derive a closed-form for the mutual information between the input and output processes in this communication scenario and establish useful properties about the mutual information. We also provide a signaling strategy using which we derive a lower bound on the capacity of the diffusion channel with Poisson reception process under average and peak power constraints. Furthermore, it is shown that the capacity of discretized diffusion channel can be a negligible factor of the capacity of continuous time diffusion channel. Finally, the application of the considered model to the molecular communication systems is discussed.", "Abstract Simulation-based and information theoretic models for a diffusion-based short-range molecular communication channel between a nano-transmitter and a nano-receiver are constructed to analyze information rates between channel inputs and outputs when the inputs are independent and identically distributed (i.i.d.). The total number of molecules available for information transfer is assumed to be limited. It is also assumed that there is a maximum tolerable delay bound for the overall information transfer. Information rates are computed via simulation-based methods for different time slot lengths and transmitter–receiver distances. The rates obtained from simulations are then compared to those computed using information theoretic channel models which provide upper bounds for information rates. The results indicate that a 4-input–2-output discrete channel model provides a very good approximation to the nano-communication channel, particularly when the time slot lengths are large and the distance between the transmitter and the receiver is small. It is shown through an extensive set of simulations that the information theoretic channel capacity with i.i.d. inputs can be achieved when an encoder adjusts the relative frequency of binary zeros to be higher (between 50 and 70 for the scenarios considered) than binary ones, where a ‘zero’ corresponds to not releasing and a ‘one’ corresponds to releasing a molecule from the transmitter.", "Quantitative analysis of biochemical networks often requires consideration of both spatial and stochastic aspects of chemical processes. Despite significant progress in the field, it is still computationally prohibitive to simulate systems involving many reactants or complex geometries using a microscopic framework that includes the finest length and time scales of diffusion-limited molecular interactions. For this reason, spatially or temporally discretized simulations schemes are commonly used when modeling intracellular reaction networks. 
The challenge in defining such coarse-grained models is to calculate the correct probabilities of reaction given the microscopic parameters and the uncertainty in the molecular positions introduced by the spatial or temporal discretization. In this paper we have solved this problem for the spatially discretized Reaction-Diffusion Master Equation; this enables a seamless and physically consistent transition from the microscopic to the macroscopic frameworks of reaction-diffusion kinetics. We exemplify the use of the methods by showing that a phosphorylation-dephosphorylation motif, commonly observed in eukaryotic signaling pathways, is predicted to display fluctuations that depend on the geometry of the system." ] }
0705.1999
1634556542
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
Javier Pinto has extended the situation calculus in order to integrate time @cite_3 . He preserves the framework of the situation calculus and introduces a notion of time. Intuitively, every situation @math has a starting time and an ending time, where @math , meaning that situation @math ends when the succeeding situation @math is reached. The end of the situation @math is the same time point as the beginning of the next situation resulting from the occurrence of action @math in @math . The obvious asymmetry of the @math and @math functions is due to the fact that the situation space has the form of a tree whose root is the initial state @math . Thus, every state has a unique preceding state but possibly more than one succeeding state.
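As a hedged reconstruction of the intended relation (the function names start, end, and do are our assumption, since the original symbols are elided above as @math ), the constraint linking the end of a situation to the start of its successor can be written as:

```latex
% Assumed notation: start(s) = starting time of situation s,
%                   end(s,a) = time at which s ends when action a occurs in s,
%                   do(a,s)  = successor situation reached by performing a in s.
\[
  \mathit{end}(s, a) = \mathit{start}(\mathit{do}(a, s))
\]
```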
{ "cite_N": [ "@cite_3" ], "mid": [ "1992170615" ], "abstract": [ "The Situation Calculus is a logic of time and change in which there is a distinguished initial situation and all other situations arise from the different sequences of actions that might be performed starting in the initial one. Within this framework, it is difficult to incorporate the notion of an occurrence, since all situations after the initial one are hypothetical. These occurrences are important, for instance, when one wants to represent narratives. There have been proposals to incorporate the notion of an action occurrence in the language of the Situation Calculus, namely Miller and Shanahan’s work on narratives [22] and Pinto and Reiter’s work on actual lines of situations [27, 29]. Both approaches have in common the idea of incorporating a linear sequence of situations into the tree described by theories written in the Situation Calculus language. Unfortunately, several advantages of the Situation Calculus are lost when reasoning with a narrative line or with an actual line of occurrences. In this paper we propose a different approach to dealing with action occurrences and narratives, which can be seen as a generalization of narrative lines to narrative trees. In this approach we exploit the fact that, in the discrete Situation Calculus [13], each situation has a unique history. Then, occurrences are interpreted as constraints on valid histories. We argue that this new approach subsumes the linear approaches of Miller and Shanahan’s, and Pinto and Reiter’s. In this framework, we are able to represent various kinds of occurrences; namely, conditional, preventable and non-preventable occurrences. Other types of occurrences, not discussed in this article, can also be accommodated." ] }
0705.1999
1634556542
We present a multi-modal action logic with first-order modalities, which contain terms which can be unified with the terms inside the subsequent formulas and which can be quantified. This makes it possible to handle simultaneously time and states. We discuss applications of this language to action theory where it is possible to express many temporal aspects of actions, as for example, beginning, end, time points, delayed preconditions and results, duration and many others. We present tableaux rules for a decidable fragment of this logic.
Paolo Terenziani proposes in @cite_2 a system that can handle both temporal constraints between classes of events and temporal constraints between instances of those events.
{ "cite_N": [ "@cite_2" ], "mid": [ "2101006942" ], "abstract": [ "Representing and reasoning with both temporal constraints between classes of events (e.g., between the types of actions needed to achieve a goal) and temporal constraints between instances of events (e.g., between the specific actions being executed) is a ubiquitous task in many areas of computer science, such as planning, workflow, guidelines and protocol management. The temporal constraints between the classes of events must be inherited by the instances, and the consistency of both types of constraints must be checked. We propose a general-purpose domain-independent knowledge server dealing with these issues. In particular, we propose a formalism to represent temporal constraints, we show two algorithms to deal with inheritance and to perform temporal consistency checking, and we study the properties of the algorithms." ] }
0704.2803
1489677531
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
How often do people create blog posts and links? Extensive work has been published on patterns relating to human behavior, which often generates bursty traffic. Disk accesses, network traffic, and web-server traffic all exhibit burstiness. The authors of @cite_15 provide fast algorithms for modeling such burstiness. Burstiness is often related to self-similarity, which was studied in the context of World Wide Web traffic @cite_17 . The authors of @cite_22 demonstrate bursty behavior in web page visits and the corresponding response times.
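One commonly described way to generate such bursty, self-similar traces is a recursive bias split (a "b-model"-style construction). The sketch below is our own illustration and may differ in its details from the algorithm of @cite_15 ; the bias value and trace length are placeholders.

```python
import random

def bursty_trace(total, length, b=0.7, rng=None):
    """Recursively split `total` volume over `length` slots: one half of each
    interval receives fraction b, the other 1-b (sides chosen at random).
    Values of b close to 0.5 give smooth traces; values close to 1 give bursty ones."""
    rng = rng or random.Random(42)
    trace = [0.0] * length

    def split(lo, hi, volume):
        if hi - lo == 1:
            trace[lo] += volume
            return
        mid = (lo + hi) // 2
        left = volume * (b if rng.random() < 0.5 else 1.0 - b)
        split(lo, mid, left)
        split(mid, hi, volume - left)

    split(0, length, float(total))
    return trace

if __name__ == "__main__":
    trace = bursty_trace(total=10000, length=16, b=0.8)
    print([round(v, 1) for v in trace])
```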
{ "cite_N": [ "@cite_15", "@cite_22", "@cite_17" ], "mid": [ "2146603609", "1606050636", "2105818147" ], "abstract": [ "Network, Web, and disk I O traffic are usually bursty and self-similar and therefore cannot be modeled adequately with Poisson arrivals. However, we wish to model these types of traffic and generate realistic traces, because of obvious applications for disk scheduling, network management, and Web server design. Previous models (like fractional Brownian motion and FARIMA, etc.) tried to capture the 'burstiness'. However, the proposed models either require too many parameters to fit and or require prohibitively large (quadratic) time to generate large traces. We propose a simple, parsimonious method, the b-model, which solves both problems: it requires just one parameter, and can easily generate large traces. In addition, it has many more attractive properties: (a) with our proposed estimation algorithm, it requires just a single pass over the actual trace to estimate b. For example, a one-day-long disk trace in milliseconds contains about 86 Mb data points and requires about 3 minutes for model fitting and 5 minutes for generation. (b) The resulting synthetic traces are very realistic: our experiments on real disk and Web traces show that our synthetic traces match the real ones very well in terms of queuing behavior.", "How do blogs produce posts? What local, underlying mechanisms lead to the bursty temporal behaviors observed in blog networks? Earlier work analyzed network patterns of blogs and found that blog behavior is bursty and often follows power laws in both topological and temporal characteristics. However, no intuitive and realistic model has yet been introduced, that can lead to such patterns. This is exactly the focus of this work. We propose a generative model that uses simple and intuitive principles for each individual blog, and yet it is able to produce the temporal characteristics of the blogosphere together with global topological network patterns, like power-laws for degree distributions, for inter-posting times, and several more. Our model ZC uses a novel ‘zero-crossing’ approach based on a random walk, combined with other powerful ideas like exploration and exploitation. This makes it the first model to simultaneously model the topology and temporal dynamics of the blogosphere. We validate our model with experiments on a large collection of 45,000 blogs and 2.2 million posts.", "Demonstrates that Ethernet LAN traffic is statistically self-similar, that none of the commonly used traffic models is able to capture this fractal-like behavior, that such behavior has serious implications for the design, control, and analysis of high-speed, cell-based networks, and that aggregating streams of such traffic typically intensifies the self-similarity (\"burstiness\") instead of smoothing it. These conclusions are supported by a rigorous statistical analysis of hundreds of millions of high quality Ethernet traffic measurements collected between 1989 and 1992, coupled with a discussion of the underlying mathematical and statistical properties of self-similarity and their relationship with actual network behavior. The authors also present traffic models based on self-similar stochastic processes that provide simple, accurate, and realistic descriptions of traffic scenarios expected during B-ISDN deployment. >" ] }
0704.2803
1489677531
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
Most work on modeling link behavior in large-scale on-line data has been done in the blog domain @cite_21 @cite_12 @cite_11 . The authors note that, while information propagates between blogs, examples of genuine cascading behavior appeared to be relatively rare. This may, however, be due in part to the Web-crawling and text analysis techniques used to infer relationships among posts @cite_12 @cite_1 . Our work differs in that we concentrate solely on the propagation of links and do not infer additional links from the text of the posts, which gives us more accurate information.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_12", "@cite_11" ], "mid": [ "1914856875", "2728059831", "4503277", "2120514843" ], "abstract": [ "While the study of the connection between discourse patterns and personal identification is decades old, the study of these patterns using language technologies is relatively recent. In that more recent tradition we frame author age prediction from text as a regression problem. We explore the same task using three very different genres of data simultaneously: blogs, telephone conversations, and online forum posts. We employ a technique from domain adaptation that allows us to train a joint model involving all three corpora together as well as separately and analyze differences in predictive features across joint and corpus-specific aspects of the model. Effective features include both stylistic ones (such as POS patterns) as well as content oriented ones. Using a linear regression model based on shallow text features, we obtain correlations up to 0.74 and mean absolute errors between 4.1 and 6.8 years.", "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models -- which potentially limits performance. In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree -- which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set -- however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets -- deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.", "With an increasing number of people that read, write and comment on blogs, the blogosphere has established itself as an essential medium of communication. A fundamental characteristic of the blogging activity is that bloggers often link to each other. The succession of linking behavior determines the way in which information propagates in the blogosphere, forming cascades. Analyzing cascades can be useful in various applications, such as providing insight of public opinion on various topics and developing better cascade models. This paper presents the results of an excessive study on cascading behavior in the blogosphere. Our objective is to present trends on the degree of engagement and reaction of bloggers in stories that become available in blogs under various parameters and constraints. 
To this end, we analyze cascades that are attributed to different population groups constrained by factors of gender, age, and continent. We also analyze how cascades differentiate depending on their subject. Our analysis is performed on one of the largest available datasets, including 30M active blogs and 700M posts. The study reveals large variations in the properties of cascades.", "Research on performance, robustness, and evolution of the global Internet is fundamentally handicapped without accurate and thorough knowledge of the nature and structure of the contractual relationships between Autonomous Systems (ASs). In this work we introduce novel heuristics for inferring AS relationships. Our heuristics improve upon previous works in several technical aspects, which we outline in detail and demonstrate with several examples. Seeking to increase the value and reliability of our inference results, we then focus on validation of inferred AS relationships. We perform a survey with ASs' network administrators to collect information on the actual connectivity and policies of the surveyed ASs. Based on the survey results, we find that our new AS relationship inference techniques achieve high levels of accuracy: we correctly infer 96.5 customer to provider (c2p), 82.8 peer to peer (p2p), and 90.3 sibling to sibling (s2s) relationships. We then cross-compare the reported AS connectivity with the AS connectivity data contained in BGP tables. We find that BGP tables miss up to 86.2 of the true adjacencies of the surveyed ASs. The majority of the missing links are of the p2p type, which highlights the limitations of present measuring techniques to capture links of this type. Finally, to make our results easily accessible and practically useful for the community, we open an AS relationship repository where we archive, on a weekly basis, and make publicly available the complete Internet AS-level topology annotated with AS relationship information for every pair of AS neighbors." ] }
0704.2803
1489677531
How do blogs cite and influence each other? How do such links evolve? Does the popularity of old blog posts drop exponentially with time? These are some of the questions that we address in this work. Our goal is to build a model that generates realistic cascades, so that it can help us with link prediction and outlier detection. Blogs (weblogs) have become an important medium of information because of their timely publication, ease of use, and wide availability. In fact, they often make headlines, by discussing and discovering evidence about political events and facts. Often blogs link to one another, creating a publicly available record of how information and influence spreads through an underlying social network. Aggregating links from several blog posts creates a directed graph which we analyze to discover the patterns of information propagation in blogspace, and thereby understand the underlying social network. Not only are blogs interesting on their own merit, but our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. Here we report some surprising findings of the blog linking and information propagation structure, after we analyzed one of the largest available datasets, with 45,000 blogs and 2.2 million blog-postings. Our analysis also sheds light on how rumors, viruses, and ideas propagate over social and computer networks. We also present a simple model that mimics the spread of information on the blogosphere, and produces information cascades very similar to those found in real life.
Information cascades are phenomena in which an action or idea becomes widely adopted due to the influence of others, typically neighbors in some network @cite_0 @cite_2 @cite_8 . Cascades on random graphs using a threshold model have been analyzed theoretically @cite_5 . Empirical analyses of the topological patterns of cascades in the context of a large product recommendation network are presented in @cite_19 and @cite_14 .
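A minimal sketch of the threshold dynamics analyzed in @cite_5 (our own simplified implementation; the random graph, the uniform thresholds, and the single seed are illustrative choices): a node becomes active once the fraction of its active neighbors reaches its threshold, and the process is iterated to a fixed point.

```python
import random

def threshold_cascade(adj, thresholds, seeds):
    """Iteratively activate a node once the fraction of its active
    neighbors reaches its threshold; return the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbors in adj.items():
            if node in active or not neighbors:
                continue
            frac = sum(1 for n in neighbors if n in active) / len(neighbors)
            if frac >= thresholds[node]:
                active.add(node)
                changed = True
    return active

if __name__ == "__main__":
    rng = random.Random(1)
    n, p = 50, 0.08                      # small Erdos-Renyi-style random graph
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    thresholds = {i: 0.2 for i in range(n)}   # uniform thresholds, for illustration
    final = threshold_cascade(adj, thresholds, seeds={0})
    print("cascade size:", len(final), "of", n)
```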
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_0", "@cite_19", "@cite_2", "@cite_5" ], "mid": [ "2114696370", "2398526644", "2257559988", "2964269387", "2091087160", "1497522841" ], "abstract": [ "The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades—herein called global cascades—that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.", "Cascades are a popular construct to observe and study information propagation (or diffusion) in social media such as Twitter. and are defined using notions of influence, activity, or discourse commonality (e.g., hashtags). While these notions of cascades lead to different perspectives, primarily cascades are modeled as trees. We argue in this paper an alternative viewpoint of cascades as forests (of trees) which yields a richer vocabulary of features to understand information propagation. We develop a framework to extract forests and analyze their growth by studying their evolution at the tree-level and at the node-level. Moreover, we demonstrate how the structural features of forests, properties of the underlying network, and temporal features of the cascades provide significant predictive value in forecasting the future trajectory of both size and shape of forests. We observe that the forecasting performance increases with observations, that the temporal features are highly indicative of cascade size, and that the features extracted from the underlying connected graph best forecast the shape of the cascade.", "Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. 
We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade's initial burst, we demonstrate strong performance in predicting whether it will recur in the future.", "Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade's initial burst, we demonstrate strong performance in predicting whether it will recur in the future.", "An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow the behavior of the preceding individual without regard to his own information. We argue that localized conformity of behavior and the fragility of mass behaviors can be explained by informational cascades.", "Cascades are ubiquitous in various network environments. How to predict these cascades is highly nontrivial in several vital applications, such as viral marketing, epidemic prevention and traffic management. Most previous works mainly focus on predicting the final cascade sizes. As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or predict the time when a cascade will reach a certain size (e.g. an threshold for outbreak). In this paper, we unify all these tasks into a fundamental problem: cascading process prediction. That is, given the early stage of a cascade, how to predict its cumulative cascade size of any later time? For such a challenging problem, how to understand the micro mechanism that drives and generates the macro phenomena (i.e. cascading process) is essential. Here we introduce behavioral dynamics as the micro mechanism to describe the dynamic process of a node's neighbors getting infected by a cascade after this node getting infected (i.e. one-hop subcascades). 
Through data-driven analysis, we find out the common principles and patterns lying in behavioral dynamics and propose a novel Networked Weibull Regression model for behavioral dynamics modeling. After that we propose a novel method for predicting cascading processes by effectively aggregating behavioral dynamics, and present a scalable solution to approximate the cascading process with a theoretical guarantee. We extensively evaluate the proposed method on a large scale social network dataset. The results demonstrate that the proposed method can significantly outperform other state-of-the-art baselines in multiple tasks including cascade size prediction, outbreak time prediction and cascading process prediction." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erd o s-R 'enyi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
Much work has focused on the problem of understanding the mixing time of the Ising model in various contexts. In a series of results @cite_11 @cite_16 @cite_3 culminating in @cite_21 , it was shown that the Gibbs sampler on the integer lattice mixes rapidly when the model has the strong spatial mixing property. In @math , strong spatial mixing, and therefore rapid mixing, holds in the entire uniqueness regime (see e.g. @cite_7 ). On the regular tree, the mixing time is always polynomial, but it is only @math up to the threshold for extremality @cite_18 . For completely general graphs, the best known results are given by the Dobrushin condition, which establishes rapid mixing when @math , where @math is the maximum degree.
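For concreteness, here is a minimal sketch of the single-site Gibbs (heat-bath) update for the Ising model on an arbitrary graph (our own illustration; the toy graph, the value of beta, and the absence of an external field are simplifying assumptions):

```python
import math
import random

def gibbs_sweep(spins, adj, beta, rng):
    """One sweep of single-site heat-bath (Gibbs) updates for the Ising model
    with interaction strength beta and no external field."""
    for v in spins:
        field = sum(spins[u] for u in adj[v])          # local field from the neighbors
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[v] = 1 if rng.random() < p_plus else -1
    return spins

if __name__ == "__main__":
    rng = random.Random(0)
    # Toy graph: a cycle on 10 vertices.
    n = 10
    adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    spins = {i: rng.choice([-1, 1]) for i in range(n)}
    for _ in range(1000):
        gibbs_sweep(spins, adj, beta=0.4, rng=rng)
    print("magnetization:", sum(spins.values()) / n)
```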
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_21", "@cite_3", "@cite_16", "@cite_11" ], "mid": [ "2090216341", "2092260192", "2021912525", "1965879843", "2950224979", "1982328614" ], "abstract": [ "We establish tight results for rapid mixing of Gibbs samplers for the Ferromagnetic Ising model on general graphs. We show that if (d−1) tanh β < 1, then there exists a constant C such that the discrete time mixing time of Gibbs samplers for the ferromagnetic Ising model on any graph of n vertices and maximal degree d, where all interactions are bounded by β, and arbitrary external fields are bounded by Cnlogn. Moreover, the spectral gap is uniformly bounded away from 0 for all such graphs, as well as for infinite graphs of maximal degree d. We further show that when dtanh β < 1, with high probability over the Erdős–Renyi random graph G(n,d n), it holds that the mixing time of Gibbs samplers is n1+Θ(1 loglogn). Both results are tight, as it is known that the mixing time for random regular and Erdős–Renyi random graphs is, with high probability, exponential in n when (d−1) tanh β> 1, and dtanh β>1, respectively. To our knowledge our results give the first tight sufficient conditions for rapid mixing of spin systems on general graphs. Moreover, our results are the first rigorous results establishing exact thresholds for dynamics on random graphs in terms of spatial thresholds on trees.", "We prove that for finite range discrete spin systems on the two dimensional latticeZ2, the (weak) mixing condition which follows, for instance, from the Dobrushin-Shlosman uniqueness condition for the Gibbs state implies a stronger mixing property of the Gibbs state, similar to the Dobrushin-Shlosman complete analyticity condition, but restricted to all squares in the lattice, or, more generally, to all sets multiple of a large enough square. The key observation leading to the proof is that a change in the boundary conditions cannot propagate either in the bulk, because of the weak mixing condition, or along the boundary because it is one dimensional. As a consequence we obtain for ferromagnetic Ising-type systems proofs that several nice properties hold arbitrarily close to the critical temperature; these properties include the existence of a convergent cluster expansion and uniform boundedness of the logarithmic Sobolev constant and rapid convergence to equilibrium of the associated Glauber dynamics on nice subsets ofZ2, including the full lattice.", "Given a finite graph @math , a vertex of the lamplighter graph @math consists of a zero-one labeling of the vertices of @math , and a marked vertex of @math . For transitive @math we show that, up to constants, the relaxation time for simple random walk in @math is the maximal hitting time for simple random walk in @math , while the mixing time in total variation on @math is the expected cover time on @math . The mixing time in the uniform metric on @math admits a sharp threshold, and equals @math multiplied by the relaxation time on @math , up to a factor of @math . For @math , the lamplighter group over the discrete two dimensional torus, the relaxation time is of order @math , the total variation mixing time is of order @math , and the uniform mixing time is of order @math . For @math when @math , the relaxation time is of order @math , the total variation mixing time is of order @math , and the uniform mixing time is of order @math . 
In particular, these three quantities are of different orders of magnitude.", "We give the first comprehensive analysis of the effect of boundary conditions on the mixing time of the Glauber dynamics in the so-called Bethe approximation. Specifically, we show that the spectral gap and the log-Sobolev constant of the Glauber dynamics for the Ising model on an n-vertex regular tree with (+)-boundary are bounded below by a constant independent of n at all temperatures and all external fields. This implies that the mixing time is O(logn) (in contrast to the free boundary case, where it is not bounded by any fixed polynomial at low temperatures). In addition, our methods yield simpler proofs and stronger results for the spectral gap and log-Sobolev constant in the regime where the mixing time is insensitive to the boundary condition. Our techniques also apply to a much wider class of models, including those with hard-core constraints like the antiferromagnetic Potts model at zero temperature (proper colorings) and the hard–core lattice gas (independent sets).", "In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erd o s-R 'enyi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .", "For lattice models on ℤ d , weak mixing is the property that the influence of the boundary condition on a finite decays exponentially with distance from that region. For a wide class of models on ℤ2, including all finite range models, we show that weak mixing is a consequence of Gibbs uniqueness, exponential decay of an appropriate form of connectivity, and a natural coupling property. In particular, on ℤ2, the Fortuin-Kasteleyn random cluster model is weak mixing whenever uniqueness holds and the connectivity decays exponentially, and the q-state Potts model above the critical temperature is weak mixing whenever correlations decay exponentially, a hypothesis satisfied if q is sufficiently large. Ratio weak mixing is the property that uniformly over events A and B occurring on subsets Λ and Γ, respectively, of the lattice, |P(A∩B) P(A)P(B)−1| decreases exponentially in the distance between Λ and Γ. We show that under mild hypotheses, for example finite range, weak mixing implies ratio weak mixing." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
Previous attempts at studying this problem (bounded average degree, but with some large degrees), in the context of sampling uniform colorings, yielded weaker results. In @cite_10 it is shown that Gibbs sampling rapidly mixes on @math if @math , where @math , and that a variant of the algorithm rapidly mixes if @math . Indeed, the main open problem of @cite_10 is to determine whether one can take @math to be a function of @math only. Our results here provide a positive answer to the analogous question for the Ising model. We further note that other results in which the conditions on the degree are relaxed @cite_13 do not apply in our setting. (The single-site Gibbs dynamics in question is sketched below.)
{ "cite_N": [ "@cite_13", "@cite_10" ], "mid": [ "2048759201", "2950224979" ], "abstract": [ "We consider local Markov chain Monte–Carlo algorithms for sampling from the weighted distribution of independent sets with activity λ, where the weight of an independent set I is λ|I|. A recent result has established that Gibbs sampling is rapidly mixing in sampling the distribution for graphs of maximum degree d and λ λ c it is NP-hard to approximate the above weighted sum over independent sets to within a factor polynomial in the size of the graph.", "In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erd o s-R 'enyi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math ." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
It is natural to conjecture that properties of the Ising model on the branching process with @math offspring distribution determine the mixing time of the dynamics on @math . In particular, it is natural to conjecture that the critical point for uniqueness of Gibbs measures plays a fundamental role @cite_19 @cite_12 , as results of a similar flavor were recently obtained for the hard-core model on random bipartite @math -regular graphs @cite_14 .
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_12" ], "mid": [ "2950224979", "2090216341", "2092260192" ], "abstract": [ "In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erd o s-R 'enyi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .", "We establish tight results for rapid mixing of Gibbs samplers for the Ferromagnetic Ising model on general graphs. We show that if (d−1) tanh β < 1, then there exists a constant C such that the discrete time mixing time of Gibbs samplers for the ferromagnetic Ising model on any graph of n vertices and maximal degree d, where all interactions are bounded by β, and arbitrary external fields are bounded by Cnlogn. Moreover, the spectral gap is uniformly bounded away from 0 for all such graphs, as well as for infinite graphs of maximal degree d. We further show that when dtanh β < 1, with high probability over the Erdős–Renyi random graph G(n,d n), it holds that the mixing time of Gibbs samplers is n1+Θ(1 loglogn). Both results are tight, as it is known that the mixing time for random regular and Erdős–Renyi random graphs is, with high probability, exponential in n when (d−1) tanh β> 1, and dtanh β>1, respectively. To our knowledge our results give the first tight sufficient conditions for rapid mixing of spin systems on general graphs. Moreover, our results are the first rigorous results establishing exact thresholds for dynamics on random graphs in terms of spatial thresholds on trees.", "We prove that for finite range discrete spin systems on the two dimensional latticeZ2, the (weak) mixing condition which follows, for instance, from the Dobrushin-Shlosman uniqueness condition for the Gibbs state implies a stronger mixing property of the Gibbs state, similar to the Dobrushin-Shlosman complete analyticity condition, but restricted to all squares in the lattice, or, more generally, to all sets multiple of a large enough square. The key observation leading to the proof is that a change in the boundary conditions cannot propagate either in the bulk, because of the weak mixing condition, or along the boundary because it is one dimensional. 
As a consequence we obtain for ferromagnetic Ising-type systems proofs that several nice properties hold arbitrarily close to the critical temperature; these properties include the existence of a convergent cluster expansion and uniform boundedness of the logarithmic Sobolev constant and rapid convergence to equilibrium of the associated Glauber dynamics on nice subsets ofZ2, including the full lattice." ] }
0704.3603
2950224979
In this work we show that for every @math , such that for all @math where the parameters of the model do not depend on @math . They also provide a rare example where one can prove a polynomial time mixing of Gibbs sampler in a situation where the actual mixing time is slower than @math . Our proof exploits in novel ways the local treelike structure of Erdős–Rényi random graphs, comparison and block dynamics arguments and a recent result of Weitz. Our results extend to much more general families of graphs which are sparse in some average sense and to much more general interactions. In particular, they apply to any graph for which every vertex @math of the graph has a neighborhood @math of radius @math in which the induced sub-graph is a tree union at most @math edges and where for each simple path in @math the sum of the vertex degrees along the path is @math . Moreover, our result apply also in the case of arbitrary external fields and provide the first FPRAS for sampling the Ising distribution in this case. We finally present a non Markov Chain algorithm for sampling the distribution which is effective for a wider range of parameters. In particular, for @math it applies for all external fields and @math , where @math is the critical point for decay of correlation for the Ising model on @math .
After proposing the conjecture, we learned that Antoine Gerschenfeld and Andrea Montanari have found an elegant proof for estimating the partition function (that is, the normalizing constant @math ) of the Ising model on random @math -regular graphs @cite_8 . Their result, together with a standard conductance argument, shows exponentially slow mixing above the uniqueness threshold, which in the context of random regular graphs is @math .
{ "cite_N": [ "@cite_8" ], "mid": [ "1489398757" ], "abstract": [ "We determine the asymptotics of the independence number of the random @math -regular graph for all @math . It is highly concentrated, with constant-order fluctuations around @math for explicit constants @math and @math . Our proof rigorously confirms the one-step replica symmetry breaking heuristics for this problem, and we believe the techniques will be more broadly applicable to the study of other combinatorial properties of random graphs." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we expose graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
After Fischler and Susskind exposed the problematic application of the holographic principle to spatially closed models @cite_23 , and R. Easther and D. Lowe confirmed these difficulties @cite_28 , several authors proposed feasible solutions. Kalyana Rama @cite_26 proposed a two-fluid cosmological model and found that, when one fluid was of quintessence type, the FS prescription would be verified under some additional conditions. N. Cruz and S. Lepe @cite_22 studied cosmological models with spatial dimension @math and also found that models with negative pressure could verify the FS prescription. There are alternative approaches, such as @cite_8 , which are worth mentioning. All these authors analyzed mathematically the functional behavior of the relation @math ; our work, however, aims to support the mathematical analysis with a simple picture: ever-expanding spatially closed cosmological models could verify the FS holographic prescription, since, due to the cosmological acceleration, future light cones cannot reconverge into focal points and so the particle horizon area never shrinks to zero.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_8", "@cite_28", "@cite_23" ], "mid": [ "2061462698", "1977747956", "2162227770", "1631663583", "2004482079" ], "abstract": [ "Abstract We examine in details Friedmann–Robertson–Walker models in 2+1 dimensions in order to investigate the cosmic holographic principle suggested by Fischler and Susskind. Our results are rigorously derived differing from the previous one found by Wang and Abdalla. We discuss the erroneous assumptions done in this work. The matter content of the models is composed of a perfect fluid, with a γ -law equation of state. We found that closed universes satisfy the holographic principle only for exotic matter with a negative pressure. We also analyze the case of a collapsing flat universe.", "Abstract A closed universe containing pressureless dust, a more generally perfect fluid matter with pressure-to-density ratio w in the range ( 1 3 ,− 1 3 ) , violates the holographic principle applied according to the Fischler–Susskind proposal. We show, first for a class of two-fluid solutions and then for the general multifluid case, that the closed universe will obey the holographic principle if it also contains matter with w 1 3 , and if the present value of its total density is sufficiently close to the critical density. It is possible that such matter can be realised by some form of quintessence', much studied recently.", "It is shown that for small, spherically symmetric perturbations of asymptotically flat two-ended Reissner–Nordstrom data for the Einstein–Maxwell-real scalar field system, the boundary of the dynamic spacetime which evolves is globally represented by a bifurcate null hypersurface across which the metric extends continuously. Under additional assumptions, it is shown that the Hawking mass blows up identically along this bifurcate null hypersurface, and thus the metric cannot be extended twice differentiably; in fact, it cannot be extended in a weaker sense characterized at the level of the Christoffel symbols. The proof combines estimates obtained in previous work with an elementary Cauchy stability argument. There are no restrictions on the size of the support of the scalar field, and the result applies to both the future and past boundary of spacetime. In particular, it follows that for an open set in the moduli space of solutions around Reissner–Nordstrom, there is no spacelike component of either the future or the past singularity.", "The holographic bound states that the entropy in a region cannot exceed one quarter of the area (in Planck units) of the bounding surface. A version of the holographic principle that can be applied to cosmological spacetimes has recently been given by Fischler and Susskind. This version can be shown to fail in closed spacetimes and they concluded that the holographic bound may rule out such universes. In this paper I give a modified definition of the holographic bound that holds in a large class of closed universes. Fischler and Susskind also showed that the dominant energy condition follows from the holographic principle applied to cosmological spacetimes with @math . Here I show that the dominant energy condition can be violated by cosmologies satisfying the holographic principle with more general scale factors.", "We consider a spherically symmetric, double characteristic initial value problem for the (real) Einstein-Maxwell-scalar field equations. 
On the initial outgoing characteristic, the data is assumed to satisfy the Price law decay widely believed to hold on an event horizon arising from the collapse of an asymptotically flat Cauchy surface. We establish that the heuristic mass inflation scenario put forth by Israel and Poisson is mathematically correct in the context of this initial value problem. In particular, the maximal future development has a future boundary over which the space-time is extendible as a C0 metric but along which the Hawking mass blows up identically; thus, the space-time is inextendible as a C1 metric. In view of recent results of the author in collaboration with I. Rodnianski, which rigorously establish the validity of Price's law as an upper bound for the decay of scalar field hair, the C0 extendibility result applies to the collapse of complete, asymptotically flat, spacelike initial data where the scalar field is compactly supported. This shows that under Christodoulou's C0 formulation, the strong cosmic censorship conjecture is false for this system. © 2005 Wiley Periodicals, Inc." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we expose graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
As one can imagine, by virtue of the previous argument there are many spatially closed cosmological models which fulfill the FS holographic prescription; ensuring a sufficiently accelerated final era is enough. Examples other than quintessence concern spatially closed models with conventional matter and a positive cosmological constant, the so-called @cite_30 . In fact, the late evolution of this family of models is dominated by the cosmological constant which is compatible with @math , and this value verifies ). Roughly speaking, an asymptotically exponential expansion will provide acceleration enough to avoid the reconvergence of future light cones.
{ "cite_N": [ "@cite_30" ], "mid": [ "2004482079" ], "abstract": [ "We consider a spherically symmetric, double characteristic initial value problem for the (real) Einstein-Maxwell-scalar field equations. On the initial outgoing characteristic, the data is assumed to satisfy the Price law decay widely believed to hold on an event horizon arising from the collapse of an asymptotically flat Cauchy surface. We establish that the heuristic mass inflation scenario put forth by Israel and Poisson is mathematically correct in the context of this initial value problem. In particular, the maximal future development has a future boundary over which the space-time is extendible as a C0 metric but along which the Hawking mass blows up identically; thus, the space-time is inextendible as a C1 metric. In view of recent results of the author in collaboration with I. Rodnianski, which rigorously establish the validity of Price's law as an upper bound for the decay of scalar field hair, the C0 extendibility result applies to the collapse of complete, asymptotically flat, spacelike initial data where the scalar field is compactly supported. This shows that under Christodoulou's C0 formulation, the strong cosmic censorship conjecture is false for this system. © 2005 Wiley Periodicals, Inc." ] }
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we expose graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
One more remark about observational results comes to support the study of quintessence models. If the fundamental character of the Holographic Principle as a primary principle guiding the behavior of our universe is assumed, it looks reasonable to suppose the saturation of the holographic limit. This is one of the arguments used by T. Banks and W. Fischler @cite_4 @cite_16 to propose a holographic cosmology based on an early universe, spatially flat, dominated by a fluid with @math (Banks and Fischler propose a scenario where black holes of the maximum possible size, the size of the particle horizon, coalesce, saturating the holographic limit; this "fluid" evolves according to @math ). According to ) this value saturates the FS prescription for spatially flat FRW models, but it seems fairly incompatible with observational results. However, for spatially closed FRW cosmological models, it has been found that the saturation of the Holographic Principle is related to the value @math , which is compatible with current observations (according to @cite_12 , @math at the 95% confidence level). Taking @math gives @math , in agreement with the measured value @cite_32 . (A worked flat-FRW version of this saturation computation is sketched below.)
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_32", "@cite_12" ], "mid": [ "2061462698", "2088673235", "1631663583", "2139440433" ], "abstract": [ "Abstract We examine in details Friedmann–Robertson–Walker models in 2+1 dimensions in order to investigate the cosmic holographic principle suggested by Fischler and Susskind. Our results are rigorously derived differing from the previous one found by Wang and Abdalla. We discuss the erroneous assumptions done in this work. The matter content of the models is composed of a perfect fluid, with a γ -law equation of state. We found that closed universes satisfy the holographic principle only for exotic matter with a negative pressure. We also analyze the case of a collapsing flat universe.", "We present a new version of holographic cosmology, which is compatible with present observations. A primordial p = ? phase of the universe is followed by a brief matter dominated era and a brief period of inflation, whose termination heats the universe. The flatness and horizon problems are solved by the p = ? dynamics. The model is characterized by two parameters, which should be calculated in a more fundamental approach to the theory. For a large range in the phenomenologically allowed parameter space, the observed fluctuations in the cosmic microwave background were generated during the p = ? era, and are exactly scale invariant. The scale invariant spectrum cuts off sharply at both upper and lower ends, and this may have observational consequences. We argue that the amplitude of fluctuations is small but cannot yet calculate it precisely.", "The holographic bound states that the entropy in a region cannot exceed one quarter of the area (in Planck units) of the bounding surface. A version of the holographic principle that can be applied to cosmological spacetimes has recently been given by Fischler and Susskind. This version can be shown to fail in closed spacetimes and they concluded that the holographic bound may rule out such universes. In this paper I give a modified definition of the holographic bound that holds in a large class of closed universes. Fischler and Susskind also showed that the dominant energy condition follows from the holographic principle applied to cosmological spacetimes with @math . Here I show that the dominant energy condition can be violated by cosmologies satisfying the holographic principle with more general scale factors.", "We have discovered 16 Type Ia supernovae (SNe Ia) with the Hubble Space Telescope (HST) and have used them to provide the first conclusive evidence for cosmic deceleration that preceded the current epoch of cosmic acceleration. These objects, discovered during the course of the GOODS ACS Treasury program, include 6 of the 7 highest redshift SNe Ia known, all at z > 1.25, and populate the Hubble diagram in unexplored territory. The luminosity distances to these objects and to 170 previously reported SNe Ia have been determined using empirical relations between light-curve shape and luminosity. A purely kinematic interpretation of the SN Ia sample provides evidence at the greater than 99 confidence level for a transition from deceleration to acceleration or, similarly, strong evidence for a cosmic jerk. Using a simple model of the expansion history, the transition between the two epochs is constrained to be at z = 0.46 ± 0.13. The data are consistent with the cosmic concordance model of ΩM ≈ 0.3, ΩΛ ≈ 0.7 (χ = 1.06) and are inconsistent with a simple model of evolution or dust as an alternative to dark energy. 
For a flat universe with a cosmological constant, we measure ΩM = 0.29 ± (equivalently, ΩΛ = 0.71). When combined with external flat-universe constraints, including the cosmic microwave background and large-scale structure, we find w = -1.02 ± (and w < -0.76 at the 95 confidence level) for an assumed static equation of state of dark energy, P = wρc2. Joint constraints on both the recent equation of state of dark energy, w0, and its time evolution, dw dz, are a factor of 8 more precise than the first estimates and twice as precise as those without the SNe Ia discovered with HST. Our constraints are consistent with the static nature of and value of w expected for a cosmological constant (i.e., w0 = -1.0, dw dz = 0) and are inconsistent with very rapid evolution of dark energy. We address consequences of evolving dark energy for the fate of the universe." ] }
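For orientation, the following LaTeX fragment records a standard textbook-style computation behind statements of the form "this value of w saturates the FS prescription for spatially flat FRW models". The conventions used here (constant comoving entropy density sigma, Planck units, and w > -1/3 so that the horizon integral converges) are assumptions of this sketch and are not taken from the cited works.

```latex
% Spatially flat FRW sketch of the Fischler--Susskind ratio.
% Assumptions: constant comoving entropy density \sigma, Planck units,
% and w > -1/3 so that q = 2/(3(1+w)) < 1 and the horizon integral converges.
\[
a(t) \propto t^{q}, \qquad
\chi_H(t) = \int_0^t \frac{dt'}{a(t')} \propto t^{\,1-q}, \qquad
\frac{S}{A} \propto \frac{\sigma\,\chi_H^{3}}{\bigl(a\,\chi_H\bigr)^{2}}
           = \frac{\sigma\,\chi_H}{a^{2}} \propto t^{\,1-3q}.
\]
% The ratio is non-increasing iff 1 - 3q \le 0, i.e. w \le 1, and it is
% constant (the bound is saturated) exactly at w = 1.
```

In the spatially closed case discussed in these records, the proper horizon area instead involves the sine of the comoving horizon, which can shrink as the horizon approaches the antipode; that recontraction is what sufficiently fast late-time acceleration prevents.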
0704.1637
2049396157
When Fischler and Susskind proposed a holographic prescription based on the particle horizon, they found that spatially closed cosmological models do not verify it due to the apparently unavoidable recontraction of the particle horizon area. In this paper, after a short review of their original work, we expose graphically and analytically that spatially closed cosmological models can avoid this problem if they expand fast enough. It has also been shown that the holographic principle is saturated for a codimension one-brane dominated universe. The Fischler-Susskind prescription is used to obtain the maximum number of degrees of freedom per Planck volume at the Planck era compatible with the holographic principle.
Finally, two recent conjectures concerning holography in spatially closed universes deserve some comments. W. Zimdahl and D. Pavon @cite_17 claim that dynamics of the holographic dark energy in a spatially closed universe could solve the coincidence problem; however the cosmological scale necessary for the definition of the holographic dark energy seems to be incompatible with the particle horizon @cite_24 @cite_2 @cite_9 . In a more recent paper F. Simpson @cite_31 proposed an imaginative mechanism in which the non-monotonic evolution of the particle horizon over a spatially closed universe controls the equation of state of the dark energy. The abundant work in that line is still inconclusive but it seems to be a fairly promising line of work.
{ "cite_N": [ "@cite_9", "@cite_24", "@cite_2", "@cite_31", "@cite_17" ], "mid": [ "1981945064", "2157971336", "2046990005", "1631663583", "2088673235" ], "abstract": [ "A model for holographic dark energy is proposed, following the idea that the short distance cut-off is related to the infrared cut-off. We assume that the infrared cut-off relevant to the dark energy is the size of the event horizon. With the input Omega(Lambda) = 0.73, we predict the equation of state of the dark energy at the present time be characterized by w = -0.90. The cosmic coincidence problem can be resolved by inflation in our scenario, provided we assume the minimal number of e-foldings. (C) 2004 Elsevier B.V. All rights reserved.", "Here we consider a scenario in which dark energy is associated with the apparent area of a surface in the early universe. In order to resemble the cosmological constant at late times, this hypothetical reference scale should maintain an approximately constant physical size during an asymptotically de Sitter expansion. This is found to arise when the particle horizon—anticipated to be significantly greater than the Hubble length—is approaching the antipode of a closed universe. Depending on the constant of proportionality, either the ensuing inflationary period prevents the particle horizon from vanishing, or it may lead to a sequence of 'big rips'.", "We employ the holographic model of interacting dark energy to obtain the equation of state for the holographic energy density in non-flat (closed) universe enclosed by the event horizon measured from the sphere of horizon named L.", "The holographic bound states that the entropy in a region cannot exceed one quarter of the area (in Planck units) of the bounding surface. A version of the holographic principle that can be applied to cosmological spacetimes has recently been given by Fischler and Susskind. This version can be shown to fail in closed spacetimes and they concluded that the holographic bound may rule out such universes. In this paper I give a modified definition of the holographic bound that holds in a large class of closed universes. Fischler and Susskind also showed that the dominant energy condition follows from the holographic principle applied to cosmological spacetimes with @math . Here I show that the dominant energy condition can be violated by cosmologies satisfying the holographic principle with more general scale factors.", "We present a new version of holographic cosmology, which is compatible with present observations. A primordial p = ? phase of the universe is followed by a brief matter dominated era and a brief period of inflation, whose termination heats the universe. The flatness and horizon problems are solved by the p = ? dynamics. The model is characterized by two parameters, which should be calculated in a more fundamental approach to the theory. For a large range in the phenomenologically allowed parameter space, the observed fluctuations in the cosmic microwave background were generated during the p = ? era, and are exactly scale invariant. The scale invariant spectrum cuts off sharply at both upper and lower ends, and this may have observational consequences. We argue that the amplitude of fluctuations is small but cannot yet calculate it precisely." ] }
math0703675
2950427145
Let @math be a product of @math independent, identically distributed random matrices @math , with the properties that @math is bounded in @math , and that @math has a deterministic (constant) invariant vector. Assuming that the probability of @math having only the simple eigenvalue 1 on the unit circle does not vanish, we show that @math is the sum of a fluctuating and a decaying process. The latter converges to zero almost surely, exponentially fast as @math . The fluctuating part converges in Cesaro mean to a limit that is characterized explicitly by the deterministic invariant vector and the spectral data of @math associated to 1. No additional assumptions are made on the matrices @math ; they may have complex entries and not be invertible. We apply our general results to two classes of dynamical systems: inhomogeneous Markov chains with random transition matrices (stochastic matrices), and random repeated interaction quantum systems. In both cases, we prove ergodic theorems for the dynamics, and we obtain the form of the limit states.
Getting information on the fluctuations of the process around its limiting value is certainly an interesting and important issue. It amounts to getting information about the law of the vector-valued random variable @math of Theorem , which is quite difficult in general. There are recent partial results about aspects of the law of such random vectors in the case where they are obtained by means of matrices belonging to certain subgroups of @math satisfying irreducibility conditions; see e.g. @cite_20 . However, these results do not apply to our situation. (The random matrix products in question are illustrated numerically below.)
{ "cite_N": [ "@cite_20" ], "mid": [ "2082401067" ], "abstract": [ "Let B i be deterministic real symmetric m × m matrices, and ξ i be independent random scalars with zero mean and “of order of one” (e.g., @math ). We are interested to know under what conditions “typical norm” of the random matrix @math is of order of 1. An evident necessary condition is @math , which, essentially, translates to @math ; a natural conjecture is that the latter condition is sufficient as well. In the paper, we prove a relaxed version of this conjecture, specifically, that under the above condition the typical norm of S N is @math : @math for all Ω > 0 We outline some applications of this result, primarily in investigating the quality of semidefinite relaxations of a general quadratic optimization problem with orthogonality constraints @math , where F is quadratic in X = (X 1,... ,X k ). We show that when F is convex in every one of X j , a natural semidefinite relaxation of the problem is tight within a factor slowly growing with the size m of the matrices @math ." ] }
cs0703042
1981868151
Users of online dating sites are facing information overload that requires them to manually construct queries and browse huge amount of matching user profiles. This becomes even more problematic for multimedia profiles. Although matchmaking is frequently cited as a typical application for recommender systems, there is a surprising lack of work published in this area. In this paper we describe a recommender system we implemented and perform a quantitative comparison of two collaborative filtering (CF) and two global algorithms. Results show that collaborative filtering recommenders significantly outperform global algorithms that are currently used by dating sites. A blind experiment with real users also confirmed that users prefer CF based recommendations to global popularity recommendations. Recommender systems show a great potential for online dating where they could improve the value of the service to users and improve monetization of the service.
Recommender systems @cite_8 are a popular and successful way of tackling information overload. They have been popularized by applications such as the Amazon @cite_17 and Netflix recommenders (http://amazon.com, http://netflix.com). The most widely used recommender systems are based on collaborative filtering algorithms. One of the first collaborative filtering systems was Tapestry @cite_1 ; other notable CF systems include Jester @cite_18 , Ringo @cite_14 , MovieLens, and Launch.com. (A minimal user-based collaborative filtering sketch is given below.)
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_1", "@cite_17" ], "mid": [ "1511814458", "1994389483", "2090641502", "2026962484", "2047109571" ], "abstract": [ "Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering(CF) based on historical records of items that the users have viewed, purchased, or rated. Two major problems that most CF approaches have to contend with are scalability and sparseness of the user profiles. To tackle these issues, in this paper, we describe a CF algorithm alternating-least-squares with weighted-?-regularization(ALS-WR), which is implemented on a parallel Matlab platform. We show empirically that the performance of ALS-WR (in terms of root mean squared error(RMSE)) monotonically improves with both the number of features and the number of ALS iterations. We applied the ALS-WR algorithm on a large-scale CF problem, the Netflix Challenge, with 1000 hidden features and obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. In addition, combining with the parallel version of other known methods, we achieved a performance improvement of 5.91 over Netflix's own CineMatch recommendation system. Our method is simple and scales well to very large datasets.", "Recommender systems provide users with personalized suggestions for products or services. These systems often rely on Collaborating Filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The two more successful approaches to CF are latent factor models, which directly profile both users and products, and neighborhood models, which analyze similarities between products or users. In this work we introduce some innovations to both approaches. The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model. Further accuracy improvements are achieved by extending the models to exploit both explicit and implicit feedback by the users. The methods are tested on the Netflix data. Results are better than those previously published on that dataset. In addition, we suggest a new evaluation metric, which highlights the differences among methods, based on their performance at a top-K recommendation task.", "Recommender systems are widely used in online e-commerce applications to improve user engagement and then to increase revenue. A key challenge for recommender systems is providing high quality recommendation to users in cold-start\" situations. We consider three types of cold-start problems: 1) recommendation on existing items for new users; 2) recommendation on new items for existing users; 3) recommendation on new items for new users. We propose predictive feature-based regression models that leverage all available information of users and items, such as user demographic information and item content features, to tackle cold-start problems. The resulting algorithms scale efficiently as a linear function of the number of observations. We verify the usefulness of our approach in three cold-start settings on the MovieLens and EachMovie datasets, by comparing with five alternatives including random, most popular, segmented most popular, and two variations of Vibes affinity algorithm widely used at Yahoo! for recommendation.", "The rapid development of Internet technologies in recent decades has imposed a heavy information burden on users. 
This has led to the popularity of recommender systems, which provide advice to users about items they may like to examine. Collaborative Filtering (CF) is the most promising technique in recommender systems, providing personalized recommendations to users based on their previously expressed preferences and those of other similar users. This paper introduces a CF framework based on Fuzzy Association Rules and Multiple-level Similarity (FARAMS). FARAMS extended existing techniques by using fuzzy association rule mining, and takes advantage of product similarities in taxonomies to address data sparseness and nontransitive associations. Experimental results show that FARAMS improves prediction quality, as compared to similar approaches.", "Recommender systems provide users with personalized suggestions for products or services. These systems often rely on collaborating filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The most common approach to CF is based on neighborhood models, which originate from similarities between products or users. In this work we introduce a new neighborhood model with an improved prediction accuracy. Unlike previous approaches that are based on heuristic similarities, we model neighborhood relations by minimizing a global cost function. Further accuracy improvements are achieved by extending the model to exploit both explicit and implicit feedback by the users. Past models were limited by the need to compute all pairwise similarities between items or users, which grow quadratically with input size. In particular, this limitation vastly complicates adopting user similarity models, due to the typical large number of users. Our new model solves these limitations by factoring the neighborhood model, thus making both item-item and user-user implementations scale linearly with the size of the data. The methods are tested on the Netflix data, with encouraging results." ] }
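As a reminder of what a basic collaborative filtering recommender looks like, here is a minimal user-based CF sketch: a toy ratings matrix, cosine similarity restricted to co-rated items, and a similarity-weighted prediction. It is an illustrative baseline only, not the algorithm of any system cited above, and all names and numbers in it are invented for the example.

```python
import numpy as np

# Toy user x item rating matrix; 0 means "not rated".
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity computed over co-rated items only."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask]
                 / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]) + 1e-12))

def predict(R, user, item, k=2):
    """Predict R[user, item] as a similarity-weighted mean over the k most
    similar users who did rate the item; fall back to the item's mean rating."""
    sims = [(cosine_sim(R[user], R[v]), v) for v in range(len(R))
            if v != user and R[v, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    if not top or sum(s for s, _ in top) == 0:
        return R[R[:, item] > 0, item].mean()
    return sum(s * R[v, item] for s, v in top) / sum(s for s, _ in top)

print(predict(R, user=0, item=2))   # pulled toward user 1's low rating of item 2
```

Item-based CF and matrix-factorisation recommenders follow the same pattern, but compute similarities between items or learn latent factors instead of comparing users directly.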
cs0703074
2949893529
We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astrée static analyzer that checks for the absence of run-time-errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain limited to well-typed, union-free, pointer-cast free data-structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.
Instead of relying on the structure of C types, we chose to represent the memory as flat sequences of bytes. This allows shifting to a representation of pointers as pairs: a symbolic base and a numeric offset. It is a common practice---it is used, for instance, by Wilson and Lam in @cite_8 . This also suggests combining the pointer and value analyses into a single one---offsets being treated as integer variables. There is experimental evidence @cite_25 that this is more precise than a pointer analysis followed by a value analysis. Some authors rely on non-relational abstractions of offsets--- e.g. , a reduced product of intervals and congruences @cite_21 , or intervals together with byte-size factors @cite_24 . Others, such as @cite_18 @cite_12 or ourselves, permit more precise, relational offset abstractions. (A toy illustration of the base-plus-offset representation is given below.)
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_24", "@cite_25", "@cite_12" ], "mid": [ "1840408880", "2963105378", "2017381974", "1516842532", "2131143419", "1991504773" ], "abstract": [ "In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.", "For statistical learning, categorical variables in a table are usually considered as discrete entities and encoded separately to feature vectors, e.g., with one-hot encoding. “Dirty” non-curated data give rise to categorical variables with a very high cardinality but redundancy: several categories reflect the same entity. In databases, this issue is typically solved with a deduplication step. We show that a simple approach that exposes the redundancy to the learning algorithm brings significant gains. We study a generalization of one-hot encoding, similarity encoding, that builds feature vectors from similarities across categories. We perform a thorough empirical validation on non-curated tables, a problem seldom studied in machine learning. Results on seven real-world datasets show that similarity encoding brings significant gains in predictive performance in comparison with known encoding methods for categories or strings, notably one-hot encoding and bag of character n-grams. We draw practical recommendations for encoding dirty categories: 3-gram similarity appears to be a good choice to capture morphological resemblance. For very high-cardinalities, dimensionality reduction significantly reduces the computational cost with little loss in performance: random projections or choosing a subset of prototype categories still outperform classic encoding approaches.", "This paper proposes a new method of encoding numbers by variable-length byte-strings. The primary property of the encoding is that the lexicographic comparison of the encoded numbers corresponds correctly to the order of the real numbers. The encoding is space-efficient. Further, unlike the fixed-length representations of numbers (fixed-point, floating-point, etc.,) the encoded numbers are not limited in their magnitude or the number of their significant digits. The paper also elaborates the application of the encoding method to the storage of numeric data in databases. The proposed application for databases is a uniform format for all the numbers, regardless of their types and attributes (fields). All the numbers are represented in a form of lexicographically-comparable byte-strings. This form simplifies the data management software (only one format to deal with at the physical database level) and hardware (when associative memory and storage devices etc. 
are used); makes the applications more flexible (by removing limitations on the sizes of numbers); and is space-efficient for all numbers while being especially concise for those numbers that are used more frequently in databases.", "We present a new technique for speeding up static analysis of (shared memory) concurrent programs. We focus on analyses that compute thread correlations : such analyses infer invariants that capture correlations between the local states of different threads (as well as the global state). Such invariants are required for verifying many natural properties of concurrent programs. Tracking correlations between different thread states, however, is very expensive. A significant factor that makes such analysis expensive is the cost of applying abstract transformers. In this paper, we introduce a technique that exploits the notion of footprints and memoization to compute individual abstract transformers more efficiently. We have implemented this technique in our concurrent shape analysis framework. We have used this implementation to prove properties of fine-grained concurrent programs with a shared, mutable, heap in the presence of an unbounded number of objects and threads. The properties we verified include memory safety, data structure invariants, partial correctness, and linearizability. Our empirical evaluation shows that our new technique reduces the analysis time significantly (e.g., by a factor of 35 in one case).", "Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the single-instruction, multiple data SIMD instructions available in common processors to boost the speed of integer compression schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded 32-bit integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decompression speed can be wasted. To show that it does not have to be so, we 1 vectorize and optimize the intersection of posting lists; 2 introduce the SIMD GALLOPING algorithm. We exploit the fact that one SIMD instruction can compare four pairs of 32-bit integers at once. We experiment with two Text REtrieval Conference TREC text collections, GOV2 and ClueWeb09 category B, using logs from the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach. Copyright © 2015 John Wiley & Sons, Ltd.", "We consider integer arithmetic modulo a power of 2 as provided by mainstream programming languages like Java or standard implementations of C. The difficulty here is that, for w > 1, the ring Zm of integers modulo m = 2w has zero divisors and thus cannot be embedded into a field. Notwithstanding that, we present intra- and interprocedural algorithms for inferring for every program point u affine relations between program variables valid at u. If conditional branching is replaced with nondeterministic branching, our algorithms are not only sound but also complete in that they detect all valid affine relations in a natural class of programs. Moreover, they run in time linear in the program size and polynomial in the number of program variables and can be implemented by using the same modular integer arithmetic as the target language to be analyzed. We also indicate how our analysis can be extended to deal with equality guards, even in an interprocedural setting." ] }
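The base-plus-offset view of pointers described in this record can be pictured with the toy abstract domain below: a pointer value is a symbolic base together with an interval of byte offsets, pointer arithmetic only moves the interval, and a may-alias check compares offset ranges under a common base. This is an illustrative sketch only; it is not the representation used by Astrée or by any of the cited analyses, and all names in it are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: int
    hi: int
    def add(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def join(self, other: "Interval") -> "Interval":
        # Abstract-domain join (least upper bound) of two offset ranges.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

@dataclass(frozen=True)
class AbsPointer:
    """Abstract pointer value: a symbolic base (a variable or allocation site)
    plus a numeric byte offset abstracted by an interval."""
    base: str
    offset: Interval

def shift(p: AbsPointer, delta: Interval) -> AbsPointer:
    # p + i with i ranging over `delta`: pointer arithmetic only moves the offset.
    return AbsPointer(p.base, p.offset.add(delta))

def may_alias(p: AbsPointer, q: AbsPointer, size: int) -> bool:
    # Two size-byte accesses may overlap iff they share a base and their
    # offset ranges come within `size` bytes of each other.
    return (p.base == q.base
            and p.offset.lo <= q.offset.hi + size - 1
            and q.offset.lo <= p.offset.hi + size - 1)

p = AbsPointer("buf", Interval(0, 0))
q = shift(p, Interval(0, 12))        # e.g. buf + 4*i with i in [0, 3]
print(may_alias(p, q, size=4))       # True: the 4-byte accesses can overlap
```

A relational offset abstraction, as in some of the works cited above, would replace the interval by a domain able to record relations between offsets and other integer variables (for instance, offset = 4*i).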
cs0703074
2949893529
We propose a memory abstraction able to lift existing numerical static analyses to C programs containing union types, pointer casts, and arbitrary pointer arithmetics. Our framework is that of a combined points-to and data-value analysis. We abstract the contents of compound variables in a field-sensitive way, whether these fields contain numeric or pointer values, and use stock numerical abstract domains to find an overapproximation of all possible memory states--with the ability to discover relationships between variables. A main novelty of our approach is the dynamic mapping scheme we use to associate a flat collection of abstract cells of scalar type to the set of accessed memory locations, while taking care of byte-level aliases - i.e., C variables with incompatible types allocated in overlapping memory locations. We do not rely on static type information which can be misleading in C programs as it does not account for all the uses a memory zone may be put to. Our work was incorporated within the Astrée static analyzer that checks for the absence of run-time-errors in embedded, safety-critical, numerical-intensive software. It replaces the former memory domain limited to well-typed, union-free, pointer-cast free data-structures. Early results demonstrate that this abstraction allows analyzing a larger class of C programs, without much cost overhead.
Finally, note that most articles--- @cite_18 being a notable exception---directly leap from a memory model informally described in English to the formal description of a static analysis. Following the Abstract Interpretation framework, we give a full mathematical description of the memory model before presenting computable abstractions proved correct with respect to the model.
{ "cite_N": [ "@cite_18" ], "mid": [ "2160248455" ], "abstract": [ "While the reconstruction of the control-flow graph of a binary has received wide attention, the challenge of categorizing code into defect-free and possibly incorrect remains a challenge for current static analyses. We present the intermediate language RREIL and a corresponding analysis framework that is able to infer precise numeric information on variables without resorting to an expensive analysis at the bit-level. Specifically, we propose a hierarchy of three interfaces to abstract domains, namely for inferring memory layout, bit-level information and numeric information. Our framework can be easily enriched with new abstract domains at each level. We demonstrate the extensibility of our framework by detailing a novel acceleration technique (a so-called widening) as an abstract domain that helps to find precise fix points of loops." ] }
cs0703083
1561850421
Search engines provide cached copies of indexed content so users will have something to "click on" if the remote resource is temporarily or permanently unavailable. Depending on their proprietary caching strategies, search engines will purge their indexes and caches of resources that exceed a threshold of unavailability. Although search engine caches are provided only as an aid to the interactive user, we are interested in building reliable preservation services from the aggregate of these limited caching services. But first, we must understand the contents of search engine caches. In this paper, we have examined the cached contents of Ask, Google, MSN and Yahoo to profile such things as overlap between index and cache, size, MIME type and "staleness" of the cached resources. We also examined the overlap of the various caches with the holdings of the Internet Archive.
Besides a study by @cite_13 , which examined the freshness of 38 German web pages in SE caches, we are unaware of any research that has characterized SE caches or attempted to find the overlap of SE caches with the IA.
{ "cite_N": [ "@cite_13" ], "mid": [ "2131399212" ], "abstract": [ "We present the Potsdam systems that participated in the semantic dependency parsing shared task of SemEval 2014. They are based on linguistically motivated bidirectional transformations between graphs and trees and on utilization of syntactic dependency parsing. They were entered in both the closed track and the open track of the challenge, recording a peak average labeled F1 score of 78.60." ] }
cs0703133
2952084784
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy , which was proposed by kls as a way to represent all Nash equilibria of a graphical game. In egg , it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
Our approximation scheme (Theorem and Theorem ) shows a contrast between the games that we study and two-player @math -action games, for which the corresponding problems are usually intractable. For two-player @math -action games, the problem of finding Nash equilibria with special properties is typically NP-hard. In particular, this is the case for Nash equilibria that maximize the social welfare @cite_15 @cite_5 . Moreover, it is likely to be intractable even to approximate such equilibria: Chen, Deng and Teng @cite_9 show that there exists some @math , inverse polynomial in @math , for which computing an @math -Nash equilibrium in 2-player games with @math actions per player is PPAD-complete. (The notion of an @math -Nash equilibrium is illustrated by the sketch below.)
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_9" ], "mid": [ "2149624304", "2002373723", "2096651633" ], "abstract": [ "We investigate the complexity of finding Nash equilibria in which the strategy of each player is uniform on its support set. We show that, even for a restricted class of win-lose bimatrix games, deciding the existence of such uniform equilibria is an NP-complete problem. Our proof is graph-theoretical. Motivated by this result, we also give NP-completeness results for the problems of finding regular induced subgraphs of large size or regularity, which can be of independent interest.", "In 1951, John F. Nash proved that every game has a Nash equilibrium [Ann. of Math. (2), 54 (1951), pp. 286-295]. His proof is nonconstructive, relying on Brouwer's fixed point theorem, thus leaving open the questions, Is there a polynomial-time algorithm for computing Nash equilibria? And is this reliance on Brouwer inherent? Many algorithms have since been proposed for finding Nash equilibria, but none known to run in polynomial time. In 1991 the complexity class PPAD (polynomial parity arguments on directed graphs), for which Brouwer's problem is complete, was introduced [C. Papadimitriou, J. Comput. System Sci., 48 (1994), pp. 489-532], motivated largely by the classification problem for Nash equilibria; but whether the Nash problem is complete for this class remained open. In this paper we resolve these questions: We show that finding a Nash equilibrium in three-player games is indeed PPAD-complete; and we do so by a reduction from Brouwer's problem, thus establishing that the two problems are computationally equivalent. Our reduction simulates a (stylized) Brouwer function by a graphical game [M. Kearns, M. Littman, and S. Singh, Graphical model for game theory, in 17th Conference in Uncertainty in Artificial Intelligence (UAI), 2001], relying on “gadgets,” graphical games performing various arithmetic and logical operations. We then show how to simulate this graphical game by a three-player game, where each of the three players is essentially a color class in a coloring of the underlying graph. Subsequent work [X. Chen and X. Deng, Setting the complexity of 2-player Nash-equilibrium, in 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006] established, by improving our construction, that even two-player games are PPAD-complete; here we show that this result follows easily from our proof.", "A recent sequence of results established that computing Nash equilibria in normal form games is a PPAD-complete problem even in the case of two players [11,6,4]. By extending these techniques we prove a general theorem, showing that, for a far more general class of families of succinctly representable multiplayer games, the Nash equilibrium problem can also be reduced to the two-player case. In view of empirically successful algorithms available for this problem, this is in essence a positive result — even though, due to the complexity of the reductions, it is of no immediate practical significance. We further extend this conclusion to extensive form games and network congestion games, two classes which do not fall into the same succinct representation framework, and for which no positive algorithmic result had been known." ] }
cs0703133
2952084784
This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy , which was proposed by kls as a way to represent all Nash equilibria of a graphical game. In egg , it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.
Lipton and Markakis @cite_14 study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to compute them. Note that these algorithms are not polynomial-time in general. The games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, whereas an optimal Nash equilibrium may unavoidably involve mixed strategies of high algebraic degree.
{ "cite_N": [ "@cite_14" ], "mid": [ "2740520762" ], "abstract": [ "Abstract We study the computation of Nash equilibria of anonymous games, via algorithms that use adaptive queries to a game's payoff function. We show that exact equilibria cannot be found via query-efficient algorithms, and exhibit a two-strategy, 3-player anonymous game whose exact equilibria require irrational numbers. We obtain positive results for known sub-classes of anonymous games. Our main result is a new randomized query-efficient algorithm for approximate equilibria of two-strategy anonymous games that improves on the running time of previous algorithms. It is the first to obtain an inverse polynomial approximation in poly-time, and yields an efficient polynomial-time approximation scheme." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Dynamic scheduling of parallel applications has been an active area of research for several years. Much of the early work targets shared memory architectures although several recent efforts focus on grid environments. @cite_1 propose an algorithm for dynamic scheduling on parallel machines under a PRAM programming model. @cite_7 propose a dynamic processor allocation policy for shared memory multiprocessors and study space-sharing vs. time-sharing in this context. @cite_3 present a scheduling policy for shared memory systems that allocates processors based on the performance of the application.
{ "cite_N": [ "@cite_3", "@cite_1", "@cite_7" ], "mid": [ "2094587335", "2142444503", "1983235612" ], "abstract": [ "We propose and evaluate empirically the performance of a dynamic processor-scheduling policy for multiprogrammed shared-memory multiprocessors. The policy is dynamic in that it reallocates processors from one parallel job to another based on the currently realized parallelism of those jobs. The policy is suitable for implementation in production systems in that: —It interacts well with very efficient user-level thread packages, leaving to them many low-level thread operations that do not require kernel intervention. —It deals with thread blocking due to user I O and page faults. —It ensures fairness in delivering resources to jobs. —Its performance, measured in terms of average job response time, is superior to that of previously proposed schedulers, including those implemented in existing systems. It provides good performance to very short, sequential (e.g., interactive) requests. We have evaluated our scheduler and compared it to alternatives using a set of prototype implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel applications with distinct qualitative behaviors, we have both evaluated the policies according to the major criterion of overall performance and examined a number of more general policy issues, including the advantage of “space sharing” over “time sharing” the processors of a multiprocessor, and the importance of cooperation between the kernel and the application in reallocating processors between jobs. We have also compared the policies according to other criteia important in real implementations, in particular, fairness and respone time to short, sequential requests. We conclude that a combination of performance and implementation considerations makes a compelling case for our dynamic scheduling policy.", "Emerging GPGPU architectures, along with programming models like CUDA and OpenCL, offer a cost-effective platform for many applications by providing high thread level parallelism at lower energy budgets. Unfortunately, for many general-purpose applications, available hardware resources of a GPGPU are not efficiently utilized, leading to lost opportunity in improving performance. A major cause of this is the inefficiency of current warp scheduling policies in tolerating long memory latencies. In this paper, we identify that the scheduling decisions made by such policies are agnostic to thread-block, or cooperative thread array (CTA), behavior, and as a result inefficient. We present a coordinated CTA-aware scheduling policy that utilizes four schemes to minimize the impact of long memory latencies. The first two schemes, CTA-aware two-level warp scheduling and locality aware warp scheduling, enhance per-core performance by effectively reducing cache contention and improving latency hiding capability. The third scheme, bank-level parallelism aware warp scheduling, improves overall GPGPU performance by enhancing DRAM bank-level parallelism. The fourth scheme employs opportunistic memory-side prefetching to further enhance performance by taking advantage of open DRAM rows. 
Evaluations on a 28-core GPGPU platform with highly memory-intensive applications indicate that our proposed mechanism can provide 33 average performance improvement compared to the commonly-employed round-robin warp scheduling policy.", "In this paper, we present techniques that coordinate the thread scheduling and prefetching decisions in a General Purpose Graphics Processing Unit (GPGPU) architecture to better tolerate long memory latencies. We demonstrate that existing warp scheduling policies in GPGPU architectures are unable to effectively incorporate data prefetching. The main reason is that they schedule consecutive warps, which are likely to access nearby cache blocks and thus prefetch accurately for one another, back-to-back in consecutive cycles. This either 1) causes prefetches to be generated by a warp too close to the time their corresponding addresses are actually demanded by another warp, or 2) requires sophisticated prefetcher designs to correctly predict the addresses required by a future \"far-ahead\" warp while executing the current warp. We propose a new prefetch-aware warp scheduling policy that overcomes these problems. The key idea is to separate in time the scheduling of consecutive warps such that they are not executed back-to-back. We show that this policy not only enables a simple prefetcher to be effective in tolerating memory latencies but also improves memory bank parallelism, even when prefetching is not employed. Experimental evaluations across a diverse set of applications on a 30-core simulated GPGPU platform demonstrate that the prefetch-aware warp scheduler provides 25 and 7 average performance improvement over baselines that employ prefetching in conjunction with, respectively, the commonly-employed round-robin scheduler or the recently-proposed two-level warp scheduler. Moreover, when prefetching is not employed, the prefetch-aware warp scheduler provides higher performance than both of these baseline schedulers as it better exploits memory bank parallelism." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Moreira and Naik @cite_17 propose a technique for dynamic resource management on distributed systems using a checkpointing framework called the Distributed Resource Management System (DRMS). The framework supports jobs that can change their active number of tasks during program execution, map the new set of tasks to execution units, and redistribute data among the new set of tasks. DRMS does not, however, make reconfiguration decisions based on application performance, and it uses file-based checkpointing for data redistribution. More recent work by Kale @cite_2 achieves reconfiguration of MPI-based message-passing programs. However, the reconfiguration is achieved using Adaptive MPI (AMPI), which in turn relies on Charm++ @cite_5 for the processor virtualization layer, and requires that the application be run with many more threads than processors.
{ "cite_N": [ "@cite_5", "@cite_2", "@cite_17" ], "mid": [ "2014594876", "2089818961", "2545968212" ], "abstract": [ "The ability to produce malleable parallel applications that can be stopped and reconfigured during the execution can offer attractive benefits for both the system and the applications. The reconfiguration can be in terms of varying the parallelism for the applications, changing the data distributions during the executions or dynamically changing the software components involved in the application execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites which do not share common file systems provides flexibility for scheduling and resource management in such distributed environments. The present reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that the parallel applications, with instrumentation to SRS library, were able to achieve reconfigurability incurring about 15-35 overhead.", "Efficient management of distributed resources, under conditions of unpredictable and varying workload, requires enforcement of dynamic resource management policies. Execution of such policies requires a relatively fine-grain control over the resources allocated to jobs in the system. Although this is a difficult task using conventional job management and program execution models, reconfigurable applications can be used to make it viable. With reconfigurable applications, it is possible to dynamically change, during the course of program execution, the number of concurrently executing tasks of an application as well as the resources allocated. Thus, reconfigurable applications can adapt to internal changes in resource requirements and to external changes affecting available resources. In this paper, we discuss dynamic management of resources on distributed systems with the help of reconfigurable applications. We first characterize reconfigurable parallel applications. We then present a new programming model for reconfigurable applications and the Distributed Resource Management System (DRMS), an integrated environment for the design, development, execution, and resource scheduling of reconfigurable applications. Experiments were conducted to verify the functionality and performance of application reconfiguration under DRMS. A detailed breakdown of the costs in reconfiguration is presented with respect to several different applications. Our results indicate that application reconfiguration is effective under DRMS and can be beneficial in improving individual application performance as well as overall system performance. We observe a significant reduction in average job response time and an improvement in overall system utilization.", "Most high-performance, scientific libraries have adopted hybrid parallelization schemes - such as the popular MPI+OpenMP hybridization - to benefit from the capacities of modern distributed-memory machines. 
While these approaches have shown to achieve high performance, they require a lot of effort to design and maintain sophisticated synchronization communication strategies. On the other hand, task-based programming paradigms aim at delegating this burden to a runtime system for maximizing productivity. In this article, we assess the potential of task-based fast multipole methods (FMM) on clusters of multicore processors. We propose both a hybrid MPI+task FMM parallelization and a pure task-based parallelization where the MPI communications are implicitly handled by the runtime system. The latter approach yields a very compact code following a sequential task-based programming model. We show that task-based approaches can compete with a hybrid MPI+OpenMP highly optimized code and that furthermore the compact task-based scheme fully matches the performance of the sophisticated, hybrid MPI+task version, ensuring performance while maximizing productivity. We illustrate our discussion with the ScalFMM FMM library and the StarPU runtime system." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
@cite_11 describe an application-aware job scheduler that dynamically controls resource allocation among concurrently executing jobs. The scheduler implements policies for adding or removing resources from jobs based on performance predictions from the Prophet system @cite_18 . All processors send data to the root node for data redistribution. The authors present simulated results based on supercomputer workload traces. Cirne and Berman @cite_9 use the term moldable to describe jobs which can adapt to different processor sizes. In their work, the application scheduler AppLeS selects the job with the least estimated turn-around time out of a set of moldable jobs, based on the current state of the parallel computer. Possible processor configurations are specified by the user, and the number of processors assigned to a job does not change after job-initiation time.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_11" ], "mid": [ "2000300079", "2048210274", "2094587335" ], "abstract": [ "Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job. In this dissertation, we show that the request that submits a moldable job can be automatically selected in a way that often reduces the job's turn-around time. The turn-around time of a job is the time elapsed between the job's submission and its completion. More precisely, we will introduce and evaluate SA, an application scheduler that chooses which request to use to submit a moldable job on behalf of the user. The user provides SA with a set of possible requests that can be used to submit a given moldable job. SA estimates the turn-around time of each request based on the current state of the supercomputer, and then forwards to the supercomputer the request with the smallest expected turn-around time. Users are thus relieved by SA of a task unrelated with their final goals, namely that of selecting which request to use. Moreover and more importantly, SA often improves the turn-around time of the job under a variety of conditions. The conditions under which SA was studied cover variations on the characteristics of the job, the state of the supercomputer, and the information available to SA. The emergent behavior generated by having most jobs using SA to craft their requests was also investigated.", "We consider the online scheduling of malleable jobs on parallel systems, such as clusters, symmetric multiprocessing computers, and multi-core processor computers. Malleable jobs is a model of parallel processing in which jobs adapt to the number of processors assigned to them. This model permits the scheduler and resource manager to make more efficient use of the available resources. Each malleable job is characterized by arrival time, deadline, and value. If the job completes by its deadline, the user earns the payoff indicated by the value; otherwise, she earns a payoff of zero. The scheduling objective is to maximize the sum of the values of the jobs that complete by their associated deadlines. Complicating the matter is that users in the real world are rational and they will attempt to manipulate the scheduler by misreporting their jobs' parameters if it benefits them to do so. To mitigate this behavior, we design an incentive compatible online scheduling mechanism. Incentive compatibility assures us that the users will obtain the maximum payoff only if they truthfully report their jobs' parameters to the scheduler. Finally, we simulate and study the mechanism to show the effects of misreports on the cheaters and on the system.", "We propose and evaluate empirically the performance of a dynamic processor-scheduling policy for multiprogrammed shared-memory multiprocessors. The policy is dynamic in that it reallocates processors from one parallel job to another based on the currently realized parallelism of those jobs. 
The policy is suitable for implementation in production systems in that: —It interacts well with very efficient user-level thread packages, leaving to them many low-level thread operations that do not require kernel intervention. —It deals with thread blocking due to user I O and page faults. —It ensures fairness in delivering resources to jobs. —Its performance, measured in terms of average job response time, is superior to that of previously proposed schedulers, including those implemented in existing systems. It provides good performance to very short, sequential (e.g., interactive) requests. We have evaluated our scheduler and compared it to alternatives using a set of prototype implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel applications with distinct qualitative behaviors, we have both evaluated the policies according to the major criterion of overall performance and examined a number of more general policy issues, including the advantage of “space sharing” over “time sharing” the processors of a multiprocessor, and the importance of cooperation between the kernel and the application in reallocating processors between jobs. We have also compared the policies according to other criteia important in real implementations, in particular, fairness and respone time to short, sequential requests. We conclude that a combination of performance and implementation considerations makes a compelling case for our dynamic scheduling policy." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
Vadhiyar and Dongarra @cite_13 @cite_8 describe a user-level checkpointing framework called Stop Restart Software (SRS) for developing malleable and migratable applications for distributed and Grid computing systems. The framework implements a rescheduler which monitors application progress and can migrate the application to a better resource. Data redistribution is done via user-level file-based checkpointing.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2014594876", "2115367411" ], "abstract": [ "The ability to produce malleable parallel applications that can be stopped and reconfigured during the execution can offer attractive benefits for both the system and the applications. The reconfiguration can be in terms of varying the parallelism for the applications, changing the data distributions during the executions or dynamically changing the software components involved in the application execution. In distributed and Grid computing systems, migration and reconfiguration of such malleable applications across distributed heterogeneous sites which do not share common file systems provides flexibility for scheduling and resource management in such distributed environments. The present reconfiguration systems do not support migration of parallel applications to distributed locations. In this paper, we discuss a framework for developing malleable and migratable MPI message-passing parallel applications for distributed systems. The framework includes a user-level checkpointing library called SRS and a runtime support system that manages the checkpointed data for distribution to distributed locations. Our experiments and results indicate that the parallel applications, with instrumentation to SRS library, were able to achieve reconfigurability incurring about 15-35 overhead.", "We present a new distributed checkpoint-restart mechanism, Cruz, that works without requiring application, library, or base kernel modifications. This mechanism provides comprehensive support for checkpointing and restoring application state, both at user level and within the OS. Our implementation builds on Zap, a process migration mechanism, implemented as a Linux kernel module, which operates by interposing a thin layer between applications and the OS. In particular, we enable support for networked applications by adding migratable IP and MAC addresses, and checkpoint-restart of socket buffer state, socket options, and TCP state. We leverage this capability to devise a novel method for coordinated checkpoint-restart that is simpler than prior approaches. For instance, it eliminates the need to flush communication channels by exploiting the packet re-transmission behavior of TCP and existing OS support for packet filtering. Our experiments show that the overhead of coordinating checkpoint-restart is negligible, demonstrating the scalability of this approach." ] }
cs0703137
2951377381
Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput.
The framework described in this paper has several aspects that differentiate it from the above work. ReSHAPE is designed for applications running on distributed-memory clusters. Like @cite_9 @cite_11 , applications must be moldable in order to take advantage of ReSHAPE; but in our case the user is not required to specify the legal partition sizes ahead of time. Instead, ReSHAPE can dynamically calculate partition sizes based on the run-time performance of the application. Our framework uses neither file-based checkpointing nor a single node for redistribution. Instead, we use an efficient data redistribution algorithm which remaps data on-the-fly using message-passing over the high-performance cluster interconnect. Finally, we evaluate our system using experimental data from a real cluster, allowing us to investigate potential benefits both for individual job turn-around time and overall system utilization and throughput.
{ "cite_N": [ "@cite_9", "@cite_11" ], "mid": [ "2225938006", "2155204206" ], "abstract": [ "In order to cope with the ever-increasing data volume, distributed stream processing systems have been proposed. To ensure scalability most distributed systems partition the data and distribute the workload among multiple machines. This approach does, however, raise the question how the data and the workload should be partitioned and distributed. A uniform scheduling strategy--a uniform distribution of computation load among available machines--typically used by stream processing systems, disregards network-load as one of the major bottlenecks for throughput resulting in an immense load in terms of intermachine communication. In this paper we propose a graph-partitioning based approach for workload scheduling within stream processing systems. We implemented a distributed triple-stream processing engine on top of the Storm realtime computation framework and evaluate its communication behavior using two real-world datasets. We show that the application of graph partitioning algorithms can decrease inter-machine communication substantially (by 40 to 99 ) whilst maintaining an even workload distribution, even using very limited data statistics. We also find that processing RDF data as single triples at a time rather than graph fragments (containing multiple triples), may decrease throughput indicating the usefulness of semantics.", "As high performance clusters continue to grow in size, the mean time between failures shrinks. Thus, the issues of fault tolerance and reliability are becoming one of the challenging factors for application scalability. The traditional disk-based method of dealing with faults is to checkpoint the state of the entire application periodically to reliable storage and restart from the recent checkpoint. The recovery of the application from faults involves (often manually) restarting applications on all processors and having it read the data from disks on all processors. The restart can therefore take minutes after it has been initiated. Such a strategy requires that the failed processor can be replaced so that the number of processors at checkpoint-time and recovery-time are the same. We present FTC-Charms ++, a fault-tolerant runtime based on a scheme for fast and scalable in-memory checkpoint and restart. At restart, when there is no extra processor, the program can continue to run on the remaining processors while minimizing the performance penalty due to losing processors. The method is useful for applications whose memory footprint is small at the checkpoint state, while a variation of this scheme - in-disk checkpoint restart can be applied to applications with large memory footprint. The scheme does not require any individual component to be fault-free. We have implemented this scheme for Charms++ and AMPI (an adaptive version of MPl). This work describes the scheme and shows performance data on a cluster using 128 processors." ] }
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Marbach, Mihatsch and Tsitsiklis @cite_20 have applied an actor-critic (value-search) algorithm to address resource allocation within communication networks by tackling both routing and call admission control. They adopt a decompositional approach, representing the network as a collection of link processes, each with its own differential reward. Unfortunately, the empirical results, even on small networks of @math and @math nodes, show little advantage over heuristic techniques.
{ "cite_N": [ "@cite_20" ], "mid": [ "2778821583" ], "abstract": [ "We introduce an Actor-Critic Ensemble(ACE) method for improving the performance of Deep Deterministic Policy Gradient(DDPG) algorithm. At inference time, our method uses a critic ensemble to select the best action from proposals of multiple actors running in parallel. By having a larger candidate set, our method can avoid actions that have fatal consequences, while staying deterministic. Using ACE, we have won the 2nd place in NIPS'17 Learning to Run competition, under the name of \"Megvii-hzwer\"." ] }
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Carlström @cite_5 introduces another decomposition-based RL strategy called predictive gain scheduling. The admission control problem is decomposed into time-series prediction of near-future call arrival rates and precomputation of control policies for Poisson call arrival processes. This approach results in faster learning without performance loss: the online convergence rate increases by a factor of 50 on a simulated link with a capacity of @math units per second.
{ "cite_N": [ "@cite_5" ], "mid": [ "2125132331" ], "abstract": [ "In integrated service communication networks, an important problem is to exercise call admission control and routing so as to optimally use the network resources. This problem is naturally formulated as a dynamic programming problem, which, however, is too complex to be solved exactly. We use methods of reinforcement learning (RL), together with a decomposition approach, to find call admission control and routing policies. The performance of our policy for a network with approximately 1045 different feature configurations is compared with a commonly used heuristic policy." ] }
cs0703138
2949715023
Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.
Generally speaking, value-search algorithms have been investigated more extensively than policy-search ones in the communications domain, and value-search (Q-learning) approaches have produced promising results. Boyan and Littman's @cite_9 Q-routing algorithm proves superior to non-adaptive techniques based on shortest paths, and robust with respect to dynamic variations, in simulations on a variety of network topologies, including an irregular @math grid and a 116-node LATA phone network. It regulates the trade-off between the number of nodes a packet has to traverse and the possibility of congestion.
{ "cite_N": [ "@cite_9" ], "mid": [ "2156666755" ], "abstract": [ "This paper describes the Q-routing algorithm for packet routing, in which a reinforcement learning module is embedded into each node of a switching network. Only local communication is used by each node to keep accurate statistics on which routing decisions lead to minimal delivery times. In simple experiments involving a 36-node, irregularly connected network, Q-routing proves superior to a nonadaptive algorithm based on precomputed shortest paths and is able to route efficiently even when critical aspects of the simulation, such as the network load, are allowed to vary dynamically. The paper concludes with a discussion of the tradeoff between discovering shortcuts and maintaining stable policies." ] }