cs0411010
2952058472
We propose a new simple logic that can be used to specify local security properties, i.e. security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties and integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well-studied protocol TMN.
The approach presented in this paper belongs to the spectrum of intensional specifications, and is related to @cite_18 @cite_6 . In @cite_6 , a requirement specification language is proposed. This language is useful for specifying sets of requirements for classes of protocols; the requirements can be mapped onto a particular protocol instance, which can then be verified using their tool, the NRL Protocol Analyzer. This approach has subsequently been used to specify the GDOI secure multicast protocol @cite_10 .
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_6" ], "mid": [ "2078142047", "1997596059", "1926128235" ], "abstract": [ "In this paper we present a formal language for specifying and reasoning about cryptographic protocol requirements. We give sets of requirements for key distribution protocols and for key agreement protocols in that language. We look at a key agreement protocol due to Aziz and Diffie that might meet those requirements and show how to specify it in the language of the NRL Protocol Analyzer. We also show how to map our formal requirements to the language of the NRL Protocol Analyzer and use the Analyzer to show that the protocol meets those requirements. In other words, we use the Analyzer to assess the validity of the formulae that make up the requirements in models of the protocol. Our analysis reveals an implicit assumption about implementations of the protocol and reveals subtleties in the kinds of requirements one might specify for similar protocols.", "Although there is a substantial amount of work on formal requirements for two and three-party key distribution protocols, very little has been done on requirements for group protocols. However, since the latter have security requirements that can differ in important but subtle ways, we believe that a rigorous expression of these requirements can be useful in determining whether a given protocol can satisfy an application's needs. In this paper we make a first step in providing a formal understanding of security requirements for group key distribution by using the NPATRL language, a temporal requirement specification language for use with the NRL Protocol Analyzer. 
We specify the requirements for GDOI, a protocol being proposed as an IETF standard, which we are formally specifying and verifying in cooperation with the MSec working group.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication." ] }
cs0411010
2952058472
In @cite_20 , Cremers, Mauw and de Vink present another logic for specifying local security properties. As in our work, the authors of @cite_20 define the message authenticity property by referring to the variables occurring in the protocol role. In addition, @cite_20 defines a new kind of authentication, called synchronization, which is then compared with Lowe's intensional specification. The logic presented in this paper cannot handle the specification of synchronization authentication. In fact, we cannot handle even the weaker notion of injective authentication, since we cannot match corresponding events in a trace. However, we believe our logic can be extended to support these properties. Briefly, this could be achieved by decorating the different runs with label identifiers and adding a primitive for reasoning about events that happened before others in a trace.
{ "cite_N": [ "@cite_20" ], "mid": [ "146967524" ], "abstract": [ "In this paper we define a general trace model for security protocols which allows to reason about various formal definitions of authentication. In the model, we define a strong form of authentication which we call synchronization. We present both an injective and a noninjective version. We relate synchronization to a formulation of agreement in our trace model and contribute to the discussion on intensional vs. extensional specifications." ] }
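The event-matching idea mentioned above — injective authentication requires each commit event to be matched one-to-one with a distinct earlier corresponding event — can be illustrated with a small trace checker. The event encoding (`kind`, `run_id`, `data` fields) is a hypothetical sketch for illustration, not the syntax of the cited logic.

```python
# Hypothetical sketch of checking injective agreement on a protocol trace
# by matching corresponding events; field names are illustrative.

def injective_agreement(trace):
    """trace: chronologically ordered list of event dicts with keys
    'kind' ('running' or 'commit'), 'run_id', and 'data'.
    Every 'commit' must be matched by a distinct earlier 'running'
    event carrying the same data (the matching must be one-to-one,
    which is what rules out replayed commits)."""
    used = set()  # run_ids of 'running' events already consumed
    for i, ev in enumerate(trace):
        if ev['kind'] != 'commit':
            continue
        match = next((j for j, prev in enumerate(trace[:i])
                      if prev['kind'] == 'running'
                      and prev['data'] == ev['data']
                      and prev['run_id'] not in used), None)
        if match is None:
            return False
        used.add(trace[match]['run_id'])
    return True
```

A replayed commit fails the check precisely because the single matching `running` event is already consumed, which is the distinction between injective and non-injective agreement.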
cs0411046
1622839875
We present a novel framework, called balanced overlay networks (BON), that provides scalable, decentralized load balancing for distributed computing using large-scale pools of heterogeneous computers. Fundamentally, BON encodes the information about each node's available computational resources in the structure of the links connecting the nodes in the network. This distributed encoding is self-organized, with each node managing its in-degree and local connectivity via random-walk sampling. Assignment of incoming jobs to nodes with the most free resources is also accomplished by sampling the nodes via short random walks. Extensive simulations show that the resulting highly dynamic and self-organized graph structure can efficiently balance computational load throughout large-scale networks. These simulations cover a wide spectrum of cases, including significant heterogeneity in available computing resources and high burstiness in incoming load. We provide analytical results that prove BON's scalability for truly large-scale networks: in particular we show that under certain ideal conditions, the network structure converges to Erdős–Rényi (ER) random graphs; our simulation results, however, show that the algorithm does much better, and the structures seem to approach the ideal case of d-regular random graphs. We also make a connection between highly-loaded BONs and the well-known balls-into-bins randomized load balancing framework.
The authors have previously considered topologically-based load balancing with a simpler model than BON that is amenable to analytical study @cite_17 . In that work, each node's resources were proportional to its in-degree, and load was distributed by performing a short random walk and migrating load to the last node of the walk; this method produces Erdős–Rényi (ER) random graphs and exhibits good load-balancing performance. As we demonstrate in the current work, performing more complex functions on the random walk can significantly improve performance.
{ "cite_N": [ "@cite_17" ], "mid": [ "2121441899" ], "abstract": [ "By spreading the workload across a sensor network, load balancing reduces hot spots in the sensor network and increases the energy lifetime of the sensor network. In this paper, we design a node-centric algorithm that constructs a load-balanced tree in sensor networks of asymmetric architecture. We utilize a Chebyshev Sum metric to evaluate via simulation the balance of the routing trees produced by our algorithm. We find that our algorithm achieves routing trees that are more effectively balanced than the routing based on breadth-first search (BFS) and shortest-path obtained by Dijkstra's algorithm." ] }
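The random-walk migration scheme described above (resources reflected in in-degree, a job assigned to the last node of a short walk) can be sketched as a small simulation. The graph shape, walk length, and function names here are illustrative assumptions, not the authors' implementation.

```python
import random

# Minimal sketch of random-walk load assignment: nodes with higher
# in-degree are visited more often by the walk, so in expectation they
# absorb proportionally more jobs. Parameters are illustrative.

def random_walk_assign(adj, start, steps, rng):
    """Walk `steps` hops from `start` on adjacency dict `adj` and
    return the final node; the job migrates there."""
    node = start
    for _ in range(steps):
        node = rng.choice(adj[node])
    return node

def simulate(adj, n_jobs, steps=3, seed=0):
    """Assign n_jobs jobs, each originating at a random node, and
    return the resulting per-node load counts."""
    rng = random.Random(seed)
    load = {v: 0 for v in adj}
    nodes = list(adj)
    for _ in range(n_jobs):
        origin = rng.choice(nodes)
        load[random_walk_assign(adj, origin, steps, rng)] += 1
    return load
```

For instance, on a small graph in which one well-provisioned node has every other node pointing to it, that node ends up with the largest share of the jobs, mirroring the in-degree-proportional balancing described above.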
cs0411046
1622839875
The majority of distributed computing research has focused on central-server methods, DHT architectures, agent-based systems, randomized algorithms and local diffusive techniques @cite_22 @cite_21 @cite_13 @cite_10 @cite_3 @cite_18 @cite_12 . Some of the most successful systems to date @cite_14 @cite_5 have used a centralized approach. This can be explained by the relatively small scale of the networked systems or by special properties of the workload experienced by these systems. However, since a central server must have @math bandwidth capacity and CPU power, systems that depend on central architectures are not scalable @cite_9 @cite_23 . Reliability is also a concern, since a central server is a single point of failure. BON addresses both of these issues with @math maximum communications scaling and no single points of failure. Furthermore, since the networks created by the BON algorithm are random graphs, they are highly robust to random failures.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_3", "@cite_23", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2484626270", "1558940048", "2151682391", "2031684765", "2102879646", "2110100895", "2030982625", "2144499729", "2081385143", "1969490101", "2134659242" ], "abstract": [ "The field of structured P2P systems has seen fast growth upon the introduction of Distributed Hash Tables (DHTs) in the early 2000s. The first proposals, including Chord, Pastry, Tapestry, were gradually improved to cope with scalability, locality and security issues. By utilizing the processing and bandwidth resources of end users, the P2P approach enables high performance of data distribution which is hard to achieve with traditional client-server architectures. The P2P computing community is also being actively utilized for software updates to the Internet, P2PSIP VoIP, video-on-demand, and distributed backups. The recent introduction of the identifier-locator split proposal for future Internet architectures poses another important application for DHTs, namely mapping between host permanent identity and changing IP address. The growing complexity and scale of modern P2P systems requires the introduction of hierarchy and intelligence in routing of requests. Structured Peer-to-Peer Systems covers fundamental issues in organization, optimization, and tradeoffs of present large-scale structured P2P systems, as well as, provides principles, analytical models, and simulation methods applicable in designing future systems. Part I presents the state-of-the-art of structured P2P systems, popular DHT topologies and protocols, and the design challenges for efficient P2P network topology organization, routing, scalability, and security. Part II shows that local strategies with limited knowledge per peer provide the highest scalability level subject to reasonable performance and security constraints. 
Although the strategies are local, their efficiency is due to elements of hierarchical organization, which appear in many DHT designs that traditionally are considered as flat ones. Part III describes methods to gradually enhance the local view limit when a peer is capable to operate with larger knowledge, still partial, about the entire system. These methods were formed in the evolution of hierarchical organization from flat DHT networks to hierarchical DHT architectures, look-ahead routing, and topology-aware ranking. Part IV highlights some known P2P-based experimental systems and commercial applications in the modern Internet. The discussion clarifies the importance of P2P technology for building present and future Internet systems.", "In this paper, we address the problem of designing a scalable, accurate query processor for peer-to-peer filesharing and similar distributed keyword search systems. Using a globally-distributed monitoring infrastructure, we perform an extensive study of the Gnutella filesharing network, characterizing its topology, data and query workloads. We observe that Gnutella's query processing approach performs well for popular content, but quite poorly for rare items with few replicas. We then consider an alternate approach based on Distributed Hash Tables (DHTs). We describe our implementation of PIERSearch, a DHT-based system, and propose a hybrid system where Gnutella is used to locate popular items, and PIERSearch for handling rare items. We develop an analytical model of the two approaches, and use it in concert with our Gnutella traces to study the trade-off between query recall and system overhead of the hybrid system. We evaluate a variety of localized schemes for identifying items that are rare and worth handling via the DHT. 
Lastly, we show in a live deployment on fifty nodes on two continents that it nicely complements Gnutella in its ability to handle rare items.", "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not to maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several ten thousands of nodes.This paper addresses the problem of churn in mobile hash tables. Similar to Internet based peer-to-peer systems a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery technique used in the internet domain can be deployed in the mobile hash table. Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks.", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log sub 2 n), or 2) a routing table of size d and network diameter of O(n sup 1 d ), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. 
However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log sub 2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4 without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins leaves.", "We revisit the classic problem of spreading a piece of information in a group of @math n fully connected processors. By suitably adding a small dose of randomness to the protocol of Gasieniec and Pelc (Parallel Comput 22:903---912, 1996), we derive for the first time protocols that (i) use a linear number of messages, (ii) are correct even when an arbitrary number of adversarially chosen processors does not participate in the process, and (iii) with high probability have the asymptotically optimal runtime of @math O(logn) when at least an arbitrarily small constant fraction of the processors are working. 
In addition, our protocols do not require that the system is synchronized nor that all processors are simultaneously woken up at time zero, they are fully based on push-operations, and they do not need an a priori estimate on the number of failed nodes. Our protocols thus overcome the typical disadvantages of the two known approaches, algorithms based on random gossip (typically needing a large number of messages due to their unorganized nature) and algorithms based on fair workload splitting (which are either not time-efficient or require intricate preprocessing steps plus synchronization).", "Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination. In this paper, we study the problem of computing aggregates with gossip-style protocols. Our first contribution is an analysis of simple gossip-based protocols for the computation of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip. Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. 
The latter expose interesting connections to random walks on graphs.", "This paper studies the problem of constructing a minimum-weight spanning tree (MST) in a distributed network. This is one of the most important problems in the area of distributed computing. There is a long line of gradually improving protocols for this problem, and the state of the art today is a protocol with running time O(Λ(G)+n⋅log∗n) due to Kutten and Peleg [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27], where Λ(G) denotes the diameter of the graph G. Peleg and Rubinovich [D. Peleg, V. Rubinovich, A near-tight lower bound on the time complexity of distributed MST construction, in: Proc. 40th IEEE Symp. on Foundations of Computer Science, 1999, pp. 253–261] have shown that Ω˜(n) time is required for constructing MST even on graphs of small diameter, and claimed that their result “establishes the asymptotic near-optimality” of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. In this paper we refine this claim, and devise a protocol that constructs the MST in Ω˜(μ(G,ω)+n) rounds, where μ(G,ω) is the MST-radius of the graph. The ratio between the diameter and the MST-radius may be as large as Θ(n), and, consequently, on some inputs our protocol is faster than the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27] by a factor of Ω˜(n). 
Also, on every input, the running time of our protocol is never greater than twice the running time of the protocol of [S. Kutten, D. Peleg, Fast distributed construction of k-dominating sets and applications, J. Algorithms 28 (1998) 40–66; preliminary version appeared in: Proc. of 14th ACM Symp. on Principles of Distributed Computing, Ottawa, Canada, August 1995, pp. 20–27]. As part of our protocol for constructing an MST, we develop a protocol for constructing neighborhood covers with a drastically improved running time. The latter result may be of independent interest.", "We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information. To serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures. We design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication. 
Through extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.", "We investigate a new approach to the design of distributed, shared-nothing RDF engines. Our engine, coined \"TriAD\", combines join-ahead pruning via a novel form of RDF graph summarization with a locality-based, horizontal partitioning of RDF triples into a grid-like, distributed index structure. The multi-threaded and distributed execution of joins in TriAD is facilitated by an asynchronous Message Passing protocol which allows us to run multiple join operators along a query plan in a fully parallel, asynchronous fashion. We believe that our architecture provides a so far unique approach to join-ahead pruning in a distributed environment, as the more classical form of sideways information passing would not permit for executing distributed joins in an asynchronous way. Our experiments over the LUBM, BTC and WSDTS benchmarks demonstrate that TriAD consistently outperforms centralized RDF engines by up to two orders of magnitude, while gaining a factor of more than three compared to the currently fastest, distributed engines. To our knowledge, we are thus able to report the so far fastest query response times for the above benchmarks using a mid-range server and regular Ethernet setup.", "It is becoming increasingly common to construct network services using redundant resources geographically distributed across the Internet. Content Distribution Networks are a prime example. Such systems distribute client requests to an appropriate server based on a variety of factors---e.g., server load, network proximity, cache locality---in an effort to reduce response time and increase the system capacity under load. 
This paper explores the design space of strategies employed to redirect requests, and defines a class of new algorithms that carefully balance load, locality, and proximity. We use large-scale detailed simulations to evaluate the various strategies. These simulations clearly demonstrate the effectiveness of our new algorithms, which yield a 60--91 improvement in system capacity when compared with the best published CDN technology, yet user-perceived response latency remains low and the system scales well with the number of servers.", "We consider optimal load balancing in a distributed computing environment consisting of homogeneous unreliable processors. Each processor receives its own sequence of tasks from outside users, some of which can be redirected to the other processors. Processing times are independent and identically distributed with an arbitrary distribution. The arrival sequence of outside tasks to each processor may be arbitrary as long as it is independent of the state of the system. Processors may fail, with arbitrary failure and repair processes that are also independent of the state of the system. The only information available to a processor is the history of its decisions for routing work to other processors, and the arrival times of its own arrival sequence. We prove the optimality of the round-robin policy, in which each processor sends all the tasks that can be redirected to each of the other processors in turn. We show that, among all policies that balance workload, round robin stochastically minimizes the nth task completion time for all n, and minimizes response times and queue lengths in a separable increasing convex sense for the entire system. We also show that if there is a single centralized controller, round-robin is the optimal policy, and a single controller using round-robin routing is better than the optimal distributed system in which each processor routes its own arrivals. 
Again \"optimal\" and \"better\" are in the sense of stochastically minimizing task completion times, and minimizing response time and queue lengths in the separable increasing convex sense." ] }
cs0411046
1622839875
BON is designed to be deployed on extremely large ensembles of nodes. This is a major point of similarity with BOINC @cite_14 , the latest infrastructure for creating public-resource computing projects. The Einstein@home project, which processes gravitational-wave data, and Predictor@home, which studies protein-related disease, are both based on BOINC. Such projects are single-purpose and are designed to handle massive, embarrassingly parallel problems with tens or hundreds of thousands of nodes. BON should scale to networks of this size and beyond while providing a dynamic, multi-user environment instead of the special-purpose environment provided by BOINC.
{ "cite_N": [ "@cite_14" ], "mid": [ "2142863519" ], "abstract": [ "BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication requirements. PC owners can participate in multiple BOINC projects, and can specify how their resources are allocated among these projects. We describe the goals of BOINC, the design issues that we confronted, and our solutions to these problems." ] }
cs0410066
2949608853
Data-intensive applications on clusters often require that requests be sent quickly to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the node responsible for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the Internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup: one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster, assuming that the structure fits inside the aggregate CPU cache of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree.
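A minimal sketch of the lookup scheme described in this abstract, with the network hop simulated as a method call: the sorted key set is partitioned across nodes so each slice could fit in a single node's CPU cache, and a lookup makes one routing decision followed by one remote local search. The `Router`/`Node` classes are illustrative assumptions, not the paper's implementation.

```python
import bisect

# Illustrative sketch: partition a large sorted structure across
# cluster nodes, then resolve a lookup with one simulated "network hop"
# per partition instead of a cache miss per tree level.

class Node:
    def __init__(self, keys):
        self.keys = keys  # sorted slice, assumed cache-resident
    def lookup(self, key):
        i = bisect.bisect_left(self.keys, key)
        return i < len(self.keys) and self.keys[i] == key

class Router:
    def __init__(self, sorted_keys, n_nodes):
        chunk = -(-len(sorted_keys) // n_nodes)  # ceiling division
        self.nodes = [Node(sorted_keys[i:i + chunk])
                      for i in range(0, len(sorted_keys), chunk)]
        # first key of each slice acts as the routing splitter
        self.splitters = [n.keys[0] for n in self.nodes]
    def lookup(self, key):
        # one routing decision, then one remote (here: local) search
        i = bisect.bisect_right(self.splitters, key) - 1
        return self.nodes[max(i, 0)].lookup(key)
```

In the real system the call into `Node.lookup` would be a word sent over the LAN, which the abstract argues can beat the multiple local cache misses of walking a large tree.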
The concept of the memory wall was popularized by Wulf @cite_5 . Many researchers have worked on improving cache efficiency to overcome the memory wall problem. The pioneering work @cite_11 both theoretically and experimentally studied the blocking technique and described the factors that affect cache performance. However, there is no easy way to apply the blocking technique to the tree traversal problem or to the index structure lookup problem to improve cache efficiency.
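The blocking (tiling) technique referred to above can be illustrated with a toy cache simulation; the fully associative LRU model, the sizes, and the access patterns below are illustrative assumptions, not the cited experiments:

```python
from collections import OrderedDict

class LRUCache:
    """Fully associative LRU cache model that counts line misses."""
    def __init__(self, num_lines, line_size):
        self.num_lines, self.line_size = num_lines, line_size
        self.tags = OrderedDict()
        self.misses = 0
    def access(self, addr):
        tag = addr // self.line_size
        if tag in self.tags:
            self.tags.move_to_end(tag)         # refresh LRU position
        else:
            self.misses += 1
            self.tags[tag] = True
            if len(self.tags) > self.num_lines:
                self.tags.popitem(last=False)  # evict least recently used

N, LINE, LINES = 64, 8, 16   # 64x64 array, 8-element lines, tiny 16-line cache

# Naive column-major sweep of a row-major array: each line is evicted
# before it is touched again, so every access misses.
naive = LRUCache(LINES, LINE)
for j in range(N):
    for i in range(N):
        naive.access(i * N + j)

# Blocked sweep: each 16-row x 8-column tile touches only 16 lines,
# which all fit in the cache and are reused across the tile.
blocked = LRUCache(LINES, LINE)
for jb in range(0, N, LINE):
    for ib in range(0, N, LINES):
        for j in range(jb, jb + LINE):
            for i in range(ib, ib + LINES):
                blocked.access(i * N + j)
```

With these parameters the naive sweep misses on all 4096 accesses, while the blocked sweep misses only 16 times per tile (512 total), an 8x reduction. The difficulty noted in the text is that a root-to-leaf tree lookup has no analogous reusable tile.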
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2128746953", "2295173494" ], "abstract": [ "In this report we propose a parallel cache oblivious spatial and temporal blocking algorithm for the lattice Boltzmann method in three spatial dimensions. The algorithm has originally been proposed by (1999) and divides the space-time domain of stencil-based methods in an optimal way, independently of any external parameters, e.g., cache size. In view of the increasing gap between processor speed and memory performance this approach offers a promising path to increase cache utilisation. We find that even a straightforward cache oblivious implementation can reduce memory traffic at least by a factor of two if compared to a highly optimised standard kernel and improves scalability for shared memory parallelisation. Due to the recursive structure of the algorithm we use an unconventional parallelisation scheme based on task queuing.", "A program can benefit from improved cache block utilization when contemporaneously accessed data elements are placed in the same memory block. This can reduce the program's memory block working set and thereby, reduce the capacity miss rate. We formally define the problem of data packing for arbitrary number of blocks in the cache and packing factor (the number of data objects fitting in a cache block) and study how well the optimal solution can be approximated for two dual problems. On the one hand, we show that the cache hit maximization problem is approximable within a constant factor, for every fixed number of blocks in the cache. On the other hand, we show that unless P=NP, the cache miss minimization problem cannot be efficiently approximated." ] }
cs0410066
2949608853
Data intensive applications on clusters often require requests quickly be sent to the node managing the desired data. In many applications, one must look through a sorted tree structure to determine the responsible node for accessing or storing the data. Examples include object tracking in sensor networks, packet routing over the internet, request processing in publish-subscribe middleware, and query processing in database systems. When the tree structure is larger than the CPU cache, the standard implementation potentially incurs many cache misses for each lookup; one cache miss at each successive level of the tree. As the CPU-RAM gap grows, this performance degradation will only become worse in the future. We propose a solution that takes advantage of the growing speed of local area networks for clusters. We split the sorted tree structure among the nodes of the cluster. We assume that the structure will fit inside the aggregation of the CPU caches of the entire cluster. We then send a word over the network (as part of a larger packet containing other words) in order to examine the tree structure in another node's CPU cache. We show that this is often faster than the standard solution, which locally incurs multiple cache misses while accessing each successive level of the tree.
In the area of theory and experimental algorithms, @cite_0 proposed an analytical model to predict cache performance. In their model, they assume all nodes in a tree are accessed uniformly. This model is not accurate for the tree lookup problem: because the number of nodes per level grows exponentially from the root to the leaves, a node's access rate decreases exponentially with its level in the tree. Hankins and Patel @cite_7 proposed a model in which a node's access rate in a B+-tree falls off exponentially with the level at which the node is positioned. However, they only considered compulsory cache misses, not capacity cache misses, and they assume that the tree fits in the cache. So, for tree structures that cannot fit in the cache, the model in @cite_7 is not applicable.
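The exponentially decreasing access rate described above follows directly from the path structure of a lookup. A minimal sketch, assuming a perfect tree of fixed fanout:

```python
def per_node_access_rate(fanout, level):
    """Probability that a fixed node at `level` (root = level 0) lies on
    the root-to-leaf path of a uniformly random lookup in a perfect tree.
    A lookup touches exactly one node per level, and level l holds
    fanout**l nodes, so the per-node rate falls off as fanout**-l."""
    return fanout ** (-level)

# Fanout-4 tree: the root is touched by every lookup, while a node
# three levels down is touched by only 1/64 of lookups -- an
# exponential, not uniform, access profile.
rates = [per_node_access_rate(4, l) for l in range(6)]
```

This is why the uniform-access model of @cite_0 mispredicts tree lookups: the rate drops by a factor of the fanout at every level.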
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "1549860141", "2949272603" ], "abstract": [ "Many researchers have been working on the performance analysis of caching in Information-Centric Networks (ICNs) under various replacement policies like Least Recently Used (LRU), FIFO or Random (RND). However, no exact results are provided, and many approximate models do not scale even for the simple network of two caches connected in tandem. In this paper, we introduce a Time-To-Live based policy (TTL), that assigns a timer to each content stored in the cache and redraws the timer each time the content is requested (at each hit miss). We show that our TTL policy is more general than LRU, FIFO or RND, since it is able to mimic their behavior under an appropriate choice of its parameters. Moreover, the analysis of networks of TTL-based caches appears simpler not only under the Independent Reference Model (IRM, on which many existing results rely) but also with the Renewal Model for requests. In particular, we determine exact formulas for the performance metrics of interest for a linear network and a tree network with one root cache and N leaf caches. For more general networks, we propose an approximate solution with the relative errors smaller than 10−3 and 10−2 for exponentially distributed and constant TTLs respectively.", "We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. 
We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1-1 e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1 of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50 reduction in average delay over solutions based on LRU content caching." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The enhancon system was the first setup in which a supergravity dual of pure @math SYM theory with no hypermultiplets was studied @cite_9 . It was constructed by wrapping BPS D-branes on a K3 manifold and studying the resulting geometry. From the supergravity point of view, the system exhibited a novel singularity-resolution mechanism. Naively, there appeared to be a naked timelike singularity in the space transverse to the branes, dubbed the repulson because a massive particle would feel a repulsive potential whose magnitude diverges at a finite radius from the naive position of the branes. Probing the background with a wrapped D-brane, however, showed that the @math source D-branes do not, in fact, sit at the origin. Rather, they expand to form a shell of branes, inside of which the geometry does not, after all, become singular.
{ "cite_N": [ "@cite_9" ], "mid": [ "2040482607" ], "abstract": [ "We study brane configurations that give rise to large-N gauge theories with eight supersymmetries and no hypermultiplets. These configurations include a variety of wrapped, fractional, and stretched branes or strings. The corresponding spacetime geometries which we study have a distinct kind of singularity known as a repulson. We find that this singularity is removed by a distinctive mechanism, leaving a smooth geometry with a core having an enhanced gauge symmetry. The spacetime geometry can be related to large-N Seiberg-Witten theory." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
A natural generalisation was to study geometries in which the system gains energy above the BPS bound. An unusual two-branch structure was found @cite_9 @cite_0 . One class of possible solutions had the appearance of a black hole (or black brane) and was dubbed the horizon branch, while the other appeared to have an enhancon-like shell surrounding an inner event horizon and was dubbed the shell branch. Only the shell branch correctly matches onto the BPS solution in the limit of zero energy above extremality, but, for sufficiently high extra energy, both solutions were seen to be consistent with the asymptotic charges. The presence of the horizon branch far from extremality was expected, since there, where the energy strongly dominates the charge, the system should look like an uncharged black hole. Additionally, for the shell branch, fixing the asymptotic charges did not specify exactly how the extra energy distributes itself between the inner horizon and the shell.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2471626564", "2028523941" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhancons. There are two branches of solutions: a 'shell branch' connected to the extremal solution, and a 'horizon branch' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhancon.", "The stability of the inner Reissner-Nordstroem geometry is studied with test massless integer-spin fields. In contrast to previous mathematical treatments we present physical arguments for the processes involved and show that ray tracing and simple first-order scattering suffice to elucidate most of the results. Monochromatic waves which are of small amplitude and ingoing near the outer horizon develop infinite energy densities near the inner Cauchy horizon (as measured by a freely falling observer). Previous work has shown that certain derivatives of the field in a general (nonmonochromatic) disturbance must fall off exponentially near the inner (Cauchy) horizon (r = r_-) if energy densities are to remain finite. Thus the solution is unstable to physically reasonable perturbations which arise outside the black hole because such perturbations, if localized near past null infinity (I^-), cannot be localized near r_+, the outer horizon. The mass-energy of an infalling disturbance would generate multipole moments on the black hole. 
Price, Sibgatullin, and Alekseev have shown that such moments are radiated away as 'tails' which travel outward and are rescattered inward, yielding a wave field with a time dependence t^-p, p > 0. This decay in time is sufficiently slow that the tails yield infinite energy densities on the Cauchy horizon. (The amplification of the low-frequency tails upon interacting with the time-dependent potential between the horizons is an important feature guaranteeing the infinite energy density.) The interior structure of the analytically extended solution is thus disrupted by finite external disturbances. It has further been shown that even perturbations which are localized as they cross the outer horizon produce singularities at the inner horizon. It is shown that this singularity arises when the incoming radiation is first scattered just inside the outer horizon." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Dimitriadis and Ross made a preliminary search @cite_6 for a classical instability that would provide evidence that the two branches are connected. Such an instability, which is fundamentally different in nature from the Gregory-Laflamme instability, could be interpreted as signalling a phase transition in the dual gauge theory. No such instability was found. Also presented was an entropic argument that, at high mass, the horizon branch should dominate over the shell branch in a canonical ensemble. In later work @cite_7 , a numerical study of perturbations of the non-BPS shell branch was completed, but still no instability was found. An analytic proof of the non-existence of such instabilities could not be found either, owing to the non-linearity of the coupled equations. Furthermore, @cite_7 investigated whether the shell branch might violate a standard gravitational energy condition. Indeed, they found that the shell branch violates the weak energy condition (WEC). This matter will be important for us in a later section, and so we review it here.
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2471626564", "2031810620" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhancons. There are two branches of solutions: a 'shell branch' connected to the extremal solution, and a 'horizon branch' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhancon.", "Abstract At the heart of this article will be the study of a branching Brownian motion (BBM) with killing, where individual particles move as Brownian motions with drift − ρ , perform dyadic branching at rate β and are killed on hitting the origin. Firstly, by considering properties of the right-most particle and the extinction probability, we will provide a probabilistic proof of the classical result that the ‘one-sided’ FKPP travelling-wave equation of speed − ρ with solutions f : [ 0 , ∞ ) → [ 0 , 1 ] satisfying f ( 0 ) = 1 and f ( ∞ ) = 0 has a unique solution with a particular asymptotic when ρ 2 β , and no solutions otherwise. Our analysis is in the spirit of the standard BBM studies of [S.C. Harris, Travelling-waves for the FKPP equation via probabilistic arguments, Proc. Roy. Soc. Edinburgh Sect. A 129 (3) (1999) 503–517] and [A.E. Kyprianou, Travelling wave solutions to the K-P-P equation: alternatives to Simon Harris' probabilistic analysis, Ann. Inst. H. Poincare Probab. Statist. 
40 (1) (2004) 53–72] and includes an intuitive application of a change of measure inducing a spine decomposition that, as a by product, gives the new result that the asymptotic speed of the right-most particle in the killed BBM is 2 β − ρ on the survival set. Secondly, we introduce and discuss the convergence of an additive martingale for the killed BBM, W λ , that appears of fundamental importance as well as facilitating some new results on the almost-sure exponential growth rate of the number of particles of speed λ ∈ ( 0 , 2 β − ρ ) . Finally, we prove a new result for the asymptotic behaviour of the probability of finding the right-most particle with speed λ > 2 β − ρ . This result combined with Chauvin and Rouault's [B. Chauvin, A. Rouault, KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees, Probab. Theory Related Fields 80 (2) (1988) 299–314] arguments for standard BBM readily yields an analogous Yaglom-type conditional limit theorem for the killed BBM and reveals W λ as the limiting Radon–Nikodým derivative when conditioning the right-most particle to travel at speed λ into the distant future." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Surprisingly, when the system is near extremality and the asymptotic volume of the K3 is large, the first two terms combine into a dominant negative contribution. Thus the shell branch violates the WEC. It was argued @cite_7 that the shell branch should therefore be regarded as unphysical. Accordingly, the horizon branch should be considered the dominant, valid, supergravity solution for non-BPS enhancons, for the range of parameters admitting it. For the region of parameter space in which no horizon branch exists, other solutions, more general than those yet considered, might be valid @cite_7 .
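For reference, the weak energy condition invoked in this argument is the standard one:

```latex
% Weak energy condition: for every timelike vector u^\mu,
T_{\mu\nu}\, u^{\mu} u^{\nu} \;\geq\; 0 .
% For a diagonal stress tensor with energy density \rho and
% principal pressures p_i, this is equivalent to
\rho \geq 0 , \qquad \rho + p_i \geq 0 \quad \text{for each } i .
```

A dominant negative contribution to the energy density measured by some timelike observer is thus a direct violation.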
{ "cite_N": [ "@cite_7" ], "mid": [ "2471626564" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhancons. There are two branches of solutions: a 'shell branch' connected to the extremal solution, and a 'horizon branch' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhancon." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In subsequent work on non-BPS enhancons, involving two of the current authors, we used simple supergravity techniques to find the most general solutions with the correct symmetries and asymptotic charges of the hot enhancon system @cite_3 . We showed that the only non-BPS solution with a well-behaved event horizon is the horizon branch.
{ "cite_N": [ "@cite_3" ], "mid": [ "2471626564" ], "abstract": [ "We study the supergravity solutions describing nonextremal enhancons. There are two branches of solutions: a 'shell branch' connected to the extremal solution, and a 'horizon branch' which connects to the Schwarzschild black hole at large mass. We show that the shell branch solutions violate the weak energy condition, and are hence unphysical. We investigate linearized perturbations of the horizon branch and the extremal solution numerically, completing an investigation initiated in a previous paper. We show that these solutions are stable against the perturbations we consider. This provides further evidence that these latter supergravity solutions are capturing some of the true physics of the enhancon." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
Here, the story is particularly simple. We find that, at some radius greater than @math , the volume of the K3 always shrinks to zero, indicating that somewhere outside this radius, the K3 has reached its stringy volume. Note that the old ( @math ) shell solution @cite_0 falls into this category.
{ "cite_N": [ "@cite_0" ], "mid": [ "2064943293" ], "abstract": [ "Abstract The problem of finding the maximum diameter of n equal mutually disjoint circles inside a unit square is addressed in this paper. Exact solutions exist for only n = 1, …, 9,10,16,25,36 while for other n only conjectural solutions have been reported. In this work a max-min optimization approach is introduced which matches the best reported solutions in the literature for all n ⩽ 30, yields a better configuration for n = 15, and provides new results for n = 28 and 29." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
It is straightforward to find an expression for the radius of the @math -shell solutions: We could also rewrite this in terms of the parameters @math , @math , @math , in order to express the solution exactly in the language of previous studies @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2218835040" ], "abstract": [ "We speed up previous (1 + e)-factor approximation algorithms for a number of geometric optimization problems in fixed dimensions: diameter, width, minimum-radius enclosing cylinder, minimum-width enclosing annulus, minimum-width enclosing cylindrical shell, etc. Linear time bounds were known before; we further improve the dependence of the \"constants\" in terms of e.We next consider the data-stream model and present new (1 + e)-factor approximation algorithms that need only constant space for all of the above problems in any fixed dimension. Previously, such a result was known only for diameter.Both sets of results are obtained using the core-set framework recently proposed by Agarwal, Har-Peled, and Varadarajan. Published by Elsevier B.V." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In a related context, the geometry of fractional D @math -branes was studied @cite_5 . Fractional branes can be described as regular D @math -branes wrapped on a vanishing two-cycle inside the @math orbifold limit of K3. The dual gauge theory is again @math SYM with no hypermultiplets. Attempting to take the decoupling limit once again fails to yield a clean strong/weak coupling duality. This happens in a way directly analogous to the original enhancon case.
{ "cite_N": [ "@cite_5" ], "mid": [ "2152342374" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The authors of @cite_5 found supergravity solutions for fractional branes in six dimensions using two different methods. First, they used boundary-state technology to produce a consistent truncation of Type II supergravity coupled to fractional brane sources; second, they related their consistent truncation to the heterotic theory via a chain of dualities. The BPS solutions they found exhibit repulson-like behaviour, and an analogous enhancon phenomenon occurs.
{ "cite_N": [ "@cite_5" ], "mid": [ "1992572456" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The natural extension of this work was, again, to consider these systems when energy is added to take them above the BPS bound. In @cite_2 , a consistent six-dimensional truncation ansatz for fractional Dp-branes in orbifold backgrounds was provided, for general @math . Solutions corresponding to the geometry of non-BPS fractional branes were found, in analogy to the non-BPS work @cite_0 . After imposing positivity of the ADM mass, half of the solutions were ruled out. One of the remaining solutions was discarded because it did not have a BPS limit.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2152342374", "2018505968" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "We study the wall-crossing phenomena of D4-D2-D0 bound states with two units of D4-branechargeontheresolvedconifold. We identify the walls of marginal stability and evaluate the discrete changes of the BPS indices by using the Kontsevich-Soibelman wall-crossing formula. In particular, we find that the field theories on D4-branes in two large radius limits are properly connected by the wall-crossings involving the flop transition of the conifold. We also find that in one of the large radius limits there are stable bound states of two D4-D2-D0 fragments." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
The construction of fractional brane geometries that exhibit the enhancon mechanism is expected to be dual (through T-duality of type IIA on K3) to the original enhancon geometries @cite_9 @cite_5 @cite_2 . However, in view of the work reviewed in the previous subsection, the conclusion that horizons never form in the non-BPS fractional brane geometries is puzzling.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_2" ], "mid": [ "2152342374", "2019049541", "1992572456" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "Abstract By looking at fractional D p -branes of type IIA on T 4 Z 2 as wrapped branes and by using boundary state techniques we construct the effective low-energy action for the fields generated by fractional branes, build their worldvolume action and find the corresponding classical geometry. The explicit form of the classical background is consistent only outside an enhancon sphere of radius r e , which encloses a naked singularity of repulson-type. The perturbative running of the gauge coupling constant, dictated by the NS–NS twisted field that keeps its one-loop expression at any distance, also fails at r e .", "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. 
After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
We will show that this apparent discord is in fact an artifact. The hot fractional brane system exhibits behavior exactly dual to that of the hot enhancon. In particular, we will show that the solutions of @cite_2 are related by duality to the hot enhancon solutions of @cite_0 . By continuously varying the K3 moduli away from the orbifold point, we can reach solutions in which the shell branch once again violates the weak energy condition (WEC). In the following sections we pin down the precise map between the two setups and resurrect the horizon branch on the fractional brane side. We will also exhibit the fractional brane equivalent of the @math -shell solutions.
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2152342374", "2902333126" ], "abstract": [ "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system.", "We study B-branes in two-dimensional N=(2,2) anomalous models, and their behaviour as we vary bulk parameters in the quantum Kahler moduli space. We focus on the case of (2,2) theories defined by abelian gauged linear sigma models (GLSM). We use the hemisphere partition function as a guide to find how B-branes split in the IR into components supported on Higgs, mixed and Coulomb branches: this generalizes the band restriction rule of Herbst-Hori-Page to anomalous models. As a central example, we work out in detail the case of GLSMs for Hirzebruch-Jung resolutions of cyclic surface singularities. In these non-compact models we explain how to compute and regularize the hemisphere partition function for a brane with compact support, and check that its Higgs branch component explicitly matches with the geometric central charge of an object in the derived category." ] }
hep-th0409280
2070302448
We review what has been learnt and what remains unknown about the physics of hot enhancons following studies in supergravity. We recall a rather general family of static, spherically symmetric, non-extremal enhancon solutions describing D4 branes wrapped on K3 and discuss physical aspects of the solutions. We embed these solutions in the six dimensional supergravity describing type IIA strings on K3 and generalize them to have arbitrary charge vector. This allows us to demonstrate the equivalence with a known family of hot fractional D0 brane solutions, to widen the class of solutions of this second type and to carry much of the discussion across from the D4 brane analysis. In particular we argue for the existence of a horizon branch for these branes.
In order to embed the non-extremal D4 brane solutions of @cite_3 in the six-dimensional supergravity, we display a simple two-charge truncation which describes the solutions studied in @cite_3 . These solutions can then be lifted directly into the larger supergravity theory. In deriving the truncation it is convenient to switch to heterotic variables, using the well-known duality between type IIA on K3 and heterotic strings on @math . This is also convenient for comparison with the fractional brane solutions of @cite_2 , since that paper presents its solutions in the heterotic frame. We stress, however, that we are performing T-dualities between different IIA solutions, and in principle we could have worked in IIA variables throughout.
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1992572456", "2152342374" ], "abstract": [ "We construct new supersymmetric solutions of D = 11 supergravity describing n orthogonally “overlapping” membranes and fivebranes for n = 2,…,8. Overlapping branes arise after separating intersecting branes in a direction transverse to all of the branes. The solutions, which generalize known intersecting brane solutions, preserve at least 2−n of the supersymmetry. Each pairwise overlap involves a membrane overlapping a membrane in a 0-brane, a fivebrane overlapping a fivebrane in a 3-brane or a membrane overlapping a fivebrane in a string. After reducing n overlapping membranes to obtain n overlapping D-2-branes in D = 10, T-duality generates new overlapping D-brane solutions in type IIA and type IIB string theory. Uplifting certain type IIA solutions leads to the D = 11 solutions. Some of the new solutions reduce to dilaton black holes in D = 4. Additionally, we present a D = 10 solution that describes two D-5-branes overlapping in a string. T-duality then generates further D = 10 solutions and uplifting one of the type IIA solutions gives a new D = 11 solution describing two fivebranes overlapping in a string.", "Abstract We construct non-extremal fractional D-brane solutions of type-II string theory at the Z 2 orbifold point of K3. These solutions generalize known extremal fractional-brane solutions and provide further insights into N =2 supersymmetric gauge theories and dual descriptions thereof. In particular, we find that for these solutions the horizon radius cannot exceed the non-extremal enhancon radius. As a consequence, we conclude that a system of non-extremal fractional branes cannot develop into a black brane. This conclusion is in agreement with known dual descriptions of the system." ] }
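As a point of reference for the non-extremal solutions discussed above, the standard (unwrapped) non-extremal D4-brane of type IIA can be sketched in string frame. This is the generic textbook form only; the wrapped-K3 solutions of the text carry a second harmonic function tied to the K3 volume, which is omitted here:

```latex
ds^2 = H^{-1/2}\left(-f\,dt^2 + dx_i\,dx^i\right)
     + H^{1/2}\left(f^{-1}\,dr^2 + r^2\,d\Omega_4^2\right),
\qquad e^{2\Phi} = H^{-1/2},

f = 1 - \frac{r_0^3}{r^3},
\qquad
H = 1 + \frac{r_0^3 \sinh^2\!\beta}{r^3}.
```

Here the worldvolume index runs over the four spatial brane directions, r_0 sets the non-extremality scale and beta is the charge boost; the BPS solution is recovered in the limit r_0 -> 0, beta -> infinity with r_0^3 sinh^2(beta) held fixed.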
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis (which assumes access to the gradient) while only being able to evaluate the function at a single point.
For direct offline optimization, i.e. optimization given only an oracle that evaluates the function, in theory one can use the ellipsoid algorithm @cite_6 or more recent random-walk based approaches @cite_2 . In black-box optimization, practitioners often use simulated annealing @cite_12 or finite-difference and simultaneous perturbation stochastic approximation methods (see, for example, @cite_16 ). When the functions may change dramatically over time, a single-point approximation to the gradient may be necessary; Granichin and Spall propose a different single-point estimate of the gradient @cite_5 @cite_11 .
{ "cite_N": [ "@cite_6", "@cite_2", "@cite_5", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2154682027", "2277643282", "2192240806", "2500690480", "2655474054", "2769394111" ], "abstract": [ "We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return a O(1 T)-approximate solution after T iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size. his rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.", "We consider parallel global optimization of derivative-free expensive-to-evaluate functions, and propose an efficient method based on stochastic approximation for implementing a conceptual Bayesian optimization algorithm proposed by (2007). To accomplish this, we use infinitessimal perturbation analysis (IPA) to construct a stochastic gradient estimator and show that this estimator is unbiased. We also show that the stochastic gradient ascent algorithm using the constructed gradient estimator converges to a stationary point of the q-EI surface, and therefore, as the number of multiple starts of the gradient ascent algorithm and the number of steps for each start grow large, the one-step Bayes optimal set of points is recovered. 
We show in numerical experiments that our method for maximizing the q-EI is faster than methods based on closed-form evaluation using high-dimensional integration, when considering many parallel function evaluations, and is comparable in speed when considering few. We also show that the resulting one-step Bayes optimal algorithm for parallel global optimization finds high quality solutions with fewer evaluations that a heuristic based on approximately maximizing the q-EI. A high quality open source implementation of this algorithm is available in the open source Metrics Optimization Engine (MOE).", "This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random walks based methods.", "We consider stochastic strongly convex optimization with a complex inequality constraint. 
This complex inequality constraint may lead to computationally expensive projections in algorithmic iterations of the stochastic gradient descent (SGD) methods. To reduce the computation costs pertaining to the projections, we propose an Epoch-Projection Stochastic Gradient Descent (Epro-SGD) method. The proposed Epro-SGD method consists of a sequence of epochs; it applies SGD to an augmented objective function at each iteration within the epoch, and then performs a projection at the end of each epoch. Given a strongly convex optimization and for a total number of @math iterations, Epro-SGD requires only @math projections, and meanwhile attains an optimal convergence rate of @math , both in expectation and with a high probability. To exploit the structure of the optimization problem, we propose a proximal variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual averaging method. We apply the proposed methods on real-world applications; the empirical results demonstrate the effectiveness of our methods.", "We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first order methods (FOM), and was recently circumvent for convex composite optimization by Bauschke, Bolte and Teboulle, through a simple and elegant framework which captures, all at once, the geometry of the function and of the feasible set. Building on this work, we tackle genuine nonconvex problems. We first complement and extend their approach to derive a full extended descent lemma by introducing the notion of smooth adaptable functions. We then consider a Bregman-based proximal gradient methods for the nonconvex composite model with smooth adaptable functions, which is proven to globally converge to a critical point under natural assumptions on the problem's data. 
To illustrate the power and potential of our general framework and results, we consider a broad class of quadratic inverse problems with sparsity constraints which arises in many fundamental applications, and we apply our approach to derive new globally convergent schemes for this class.", "Nesterov's accelerated gradient descent (AGD), an instance of the general family of \"momentum methods\", provably achieves faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stationary point in @math iterations, faster than the @math iterations required by GD. To the best of our knowledge, this is the first Hessian-free algorithm to find a second-order stationary point faster than GD, and also the first single-loop algorithm with a faster rate than GD even in the setting of finding a first-order stationary point. Our analysis is based on two key ideas: (1) the use of a simple Hamiltonian function, inspired by a continuous-time perspective, which AGD monotonically decreases per step even for nonconvex functions, and (2) a novel framework called improve or localize, which is useful for tracking the long-term behavior of gradient-based optimization algorithms. We believe that these techniques may deepen our understanding of both acceleration algorithms and nonconvex optimization." ] }
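The single-(random-)point gradient approximation described above can be sketched as follows. This is a toy illustration under our own choice of names and constants (`one_point_gradient`, the test function, the smoothing radius `delta`), not the paper's exact algorithm or tuned rates:

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    """One-point gradient estimate: (d/delta) * f(x + delta*u) * u with u
    uniform on the unit sphere.  Its expectation is the gradient of a
    delta-smoothed version of f (exactly grad f when f is quadratic)."""
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)              # uniform random direction
    return (d / delta) * f(x + delta * u) * u

rng = np.random.default_rng(0)
f = lambda z: float(z @ z)              # f(z) = |z|^2, so grad f(z) = 2z
x0 = np.array([1.0, -2.0, 0.5])
# Average many single-query estimates; the mean approaches the true gradient.
est = np.mean([one_point_gradient(f, x0, 0.25, rng) for _ in range(100_000)],
              axis=0)
print(est)  # close to 2*x0 = [2, -4, 1]
```

Each call queries `f` at only one point, which is what makes the estimator usable in the bandit setting; the price is high variance, controlled here by averaging and in the online setting by small step sizes.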
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis (which assumes access to the gradient) while only being able to evaluate the function at a single point.
In addition to the appeal of an online model of convex optimization, Zinkevich's gradient descent analysis applies to several other online problems for which gradient descent and other special-purpose algorithms have been carefully analyzed, such as Universal Portfolios @cite_0 @cite_18 @cite_19 , online linear regression @cite_13 , and online shortest paths @cite_3 (which one convexifies to obtain an online shortest-flow problem).
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_0", "@cite_19", "@cite_13" ], "mid": [ "2952840318", "2129160848", "2004001705", "2153749284", "1697075315" ], "abstract": [ "We consider a the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a signle point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if the each function is revealed after the choice is made, then one can achieve vanishingly small regret relative the best single decision chosen in hindsight. We extend this to the bandit setting where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis, with access to the gradient (only being able to evaluate the function at a single point).", "In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover's Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret @math , for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. 
In this paper, we give algorithms that achieve regret O(log?(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1---19, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover's algorithm and gradient descent.", "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. 
The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.", "We propose a new method for unconstrained optimization of a smooth and strongly convex function, which attains the optimal rate of convergence of Nesterov’s accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. We provide some numerical evidence that the new method can be superior to Nesterov’s accelerated gradient descent.", "We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify strict saddle property for non-convex problem that allows for efficient optimization. Using this property we show that stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergence guarantee." ] }
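Zinkevich-style projected online gradient descent, referenced throughout the passages above, can be sketched as follows (a toy instance with quadratic costs and a ball-shaped feasible set of our choosing, not the paper's setting verbatim):

```python
import numpy as np

def project_ball(z, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(z)
    return z if n <= radius else z * (radius / n)

# Online gradient descent: play x_t, see convex cost c_t, step along
# -grad c_t(x_t) with step size ~ 1/sqrt(t), project back onto the feasible set.
rng = np.random.default_rng(1)
T, d = 2000, 2
targets = np.array([0.3, -0.4]) + 0.1 * rng.normal(size=(T, d))  # c_t minimizers
x = np.zeros(d)
total = 0.0
for t in range(1, T + 1):
    a = targets[t - 1]
    total += 0.5 * np.sum((x - a) ** 2)          # cost c_t(x), revealed after play
    x = project_ball(x - (x - a) / np.sqrt(t))   # grad c_t at x is (x - a)

best = targets.mean(axis=0)                      # best fixed point in hindsight
best_total = 0.5 * np.sum((targets - best) ** 2)
regret = total - best_total
print(regret / T)  # average regret shrinks like O(1/sqrt(T))
```

The same skeleton is what the bandit version reuses: only the exact gradient is replaced by the single-point estimate.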
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis (which assumes access to the gradient) while only being able to evaluate the function at a single point.
A similar line of research has developed for the problem of online linear optimization @cite_1 @cite_10 @cite_9 . Here one wants to solve the related but incomparable problem of optimizing a sequence of linear functions over a possibly non-convex feasible set, modeling problems such as online shortest paths and online binary search trees (which are difficult to convexify). Kalai and Vempala @cite_1 show that, for such linear optimization problems in general, if the offline optimization problem can be solved efficiently, then an efficient online algorithm can guarantee @math regret in the full-information model. Awerbuch and Kleinberg @cite_10 generalize this to the bandit setting against an oblivious adversary (as we consider here). Blum and McMahan @cite_9 give a simpler algorithm that applies to adaptive adversaries, which may choose their functions @math depending on the previously played points.
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_1" ], "mid": [ "2473549844", "2120745256", "2004001705" ], "abstract": [ "We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .", "In the online linear optimization problem, a learner must choose, in each round, a decision from a set D ⊂ ℝn in order to minimize an (unknown and changing) linear cost function. We present sharp rates of convergence (with respect to additive regret) for both the full information setting (where the cost function is revealed at the end of each round) and the bandit setting (where only the scalar cost incurred is revealed). In particular, this paper is concerned with the price of bandit information, by which we mean the ratio of the best achievable regret in the bandit setting to that in the full-information setting. 
For the full information case, the upper bound on the regret is O*( √nT), where n is the ambient dimension and T is the time horizon. For the bandit case, we present an algorithm which achieves O*(n3 2 √T) regret — all previous (nontrivial) bounds here were O(poly(n)T2 3) or worse. It is striking that the convergence rate for the bandit setting is only a factor of n worse than in the full information case — in stark contrast to the K-arm bandit setting, where the gap in the dependence on K is exponential (√TK vs. √T log K). We also present lower bounds showing that this gap is at least √n, which we conjecture to be the correct order. The bandit algorithm we present can be implemented efficiently in special cases of particular interest, such as path planning and Markov Decision Problems.", "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. 
The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period." ] }
cs0408007
2952840318
We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function on our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but rather just their value at our chosen point. We show how to get vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's gradient descent online analysis even without access to the gradient (only being able to evaluate the function at a single point).
A few comparisons with the online linear optimization problem are interesting to make. First of all, for the bandit versions of the linear problem, there was a distinction between exploration phases and exploitation phases. During exploration phases, one action from a barycentric spanner basis @cite_10 of @math actions was chosen, for the sole purpose of estimating the linear objective function. In contrast, our algorithm does a little bit of exploration in every period. Secondly, Blum and McMahan @cite_9 were able to compete against an adaptive adversary, using a careful martingale analysis. It is not clear whether that can be done in our setting.
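The single-point gradient approximation described above can be sketched as follows. This is a minimal illustration, not the paper's full algorithm: the objective, dimension, and smoothing radius are hypothetical choices, and a quadratic is used because the estimator is then exactly unbiased.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """Single-evaluation gradient estimate: (n/delta) * f(x + delta*u) * u,
    where u is uniform on the unit sphere. Its expectation equals the
    gradient of a smoothed version of f, which is all the analysis needs."""
    n = x.shape[0]
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)          # uniform direction on the unit sphere
    return (n / delta) * f(x + delta * u) * u

rng = np.random.default_rng(0)
f = lambda z: float(z @ z)          # f(x) = ||x||^2, true gradient 2x
x = np.array([1.0, -2.0, 0.5])
# In the bandit algorithm delta is taken small; a moderate delta is used
# here only to tame the variance of the demonstration.
est = np.mean([one_point_gradient_estimate(f, x, 1.0, rng)
               for _ in range(200000)], axis=0)
print(np.round(est, 1))             # close to the true gradient 2*x
```

Averaging many independent estimates recovers the gradient, even though each estimate uses only a single function evaluation.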
{ "cite_N": [ "@cite_9", "@cite_10" ], "mid": [ "2004001705", "1618543586" ], "abstract": [ "We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.", "In sequential decision problems in an unknown environment, the decision maker often faces a dilemma over whether to explore to discover more about the environment, or to exploit current knowledge. We address the exploration-exploitation dilemma in a general setting encompassing both standard and contextualised bandit problems. 
The contextual bandit problem has recently resurfaced in attempts to maximise click-through rates in web based applications, a task with significant commercial interest. In this article we consider an approach of Thompson (1933) which makes use of samples from the posterior distributions for the instantaneous value of each action. We extend the approach by introducing a new algorithm, Optimistic Bayesian Sampling (OBS), in which the probability of playing an action increases with the uncertainty in the estimate of the action value. This results in better directed exploratory behaviour. We prove that, under unrestrictive assumptions, both approaches result in optimal behaviour with respect to the average reward criterion of Yang and Zhu (2002). We implement OBS and measure its performance in simulated Bernoulli bandit and linear regression domains, and also when tested with the task of personalised news article recommendation on a Yahoo! Front Page Today Module data set. We find that OBS performs competitively when compared to recently proposed benchmark algorithms and outperforms Thompson's method throughout." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Regular model checking @cite_17 @cite_28 uses regular languages to represent parameterized systems and computes the closure of the regular transition relation to construct the reachable state space. In general, the method is not guaranteed to be complete and requires various acceleration techniques (sometimes guided by the user) to ensure termination. Moreover, approaches based on regular languages are not well suited for representing data in the system. Several examples that we consider in this work cannot be modeled in this framework; the out-of-order processor, which contains data operations, and Peterson's mutual exclusion algorithm are two such examples. Even though the Bakery algorithm can be verified in this framework, it requires considerable user ingenuity to encode the protocol in a regular language.
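To make the word-based state representation concrete, the sketch below performs explicit-state reachability for one fixed instance size of a token-passing ring encoded as strings; regular model checking would instead represent all instance sizes at once with automata and compute the closure symbolically. The encoding and the rewriting rule are hypothetical.

```python
from collections import deque

def reachable(init, rewrite, limit=10000):
    """Explicit-state reachability for a system whose states are words over
    a finite alphabet, with a length-preserving rewriting relation.
    (Regular model checking handles all word lengths at once; this sketch
    fixes a single instance size.)"""
    seen, queue = {init}, deque([init])
    while queue:
        w = queue.popleft()
        for w2 in rewrite(w):
            if w2 not in seen:
                if len(seen) >= limit:
                    raise RuntimeError("state space too large")
                seen.add(w2)
                queue.append(w2)
    return seen

# Token ring (hypothetical encoding): 'T' holds the token, 'N' does not.
# The rule TN -> NT passes the token to the right neighbour, wrapping around.
def pass_token(w):
    n = len(w)
    for i in range(n):
        j = (i + 1) % n
        if w[i] == 'T' and w[j] == 'N':
            lst = list(w)
            lst[i], lst[j] = 'N', 'T'
            yield ''.join(lst)

states = reachable('TNNNN', pass_token)
print(sorted(states))
# Safety property: exactly one token in every reachable configuration.
assert all(w.count('T') == 1 for w in states)
```

For the five-process instance, the reachable set consists precisely of the five rotations of the initial word.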
{ "cite_N": [ "@cite_28", "@cite_17" ], "mid": [ "1861590051", "2055083505" ], "abstract": [ "We present regular model checking, a framework for algorithmic verification of infinite-state systems with, e.g., queues, stacks, integers, or a parameterized linear topology. States are represented by strings over a finite alphabet and the transition relation by a regular length-preserving relation on strings. Major problems in the verification of parameterized and infinite-state systems are to compute the set of states that are reachable from some set of initial states, and to compute the transitive closure of the transition relation. We present two complementary techniques for these problems. One is a direct automata-theoretic construction, and the other is based on widening. Both techniques are incomplete in general, but we give sufficient conditions under which they work. We also present a method for verifying ω-regular properties of parameterized systems, by computation of the transitive closure of a transition relation.", "Regular model checking is a form of symbolic model checking for parameterized and infinite-state systems whose states can be represented as words of arbitrary length over a finite alphabet, in which regular sets of words are used to represent sets of states. We present LTL(MSO), a combination of the logics monadic second-order logic (MSO) and LTL as a natural logic for expressing the temporal properties to be verified in regular model checking. In other words, LTL(MSO) is a natural specification language for both the system and the property under consideration. LTL(MSO) is a two-dimensional modal logic, where MSO is used for specifying properties of system states and transitions, and LTL is used for specifying temporal properties. In addition, the first-order quantification in MSO can be used to express properties parameterized on a position or process. 
We give a technique for model checking LTL(MSO), which is adapted from the automata-theoretic approach: a formula is translated to a buchi regular transition system with a regular set of accepting states, and regular model checking techniques are used to search for models. We have implemented the technique, and show its application to a number of parameterized algorithms from the literature." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Several researchers have investigated restrictions on the system description that make the parameterized verification problem decidable. Notable among them is the early work by German and Sistla @cite_0 on verifying single-indexed properties for synchronously communicating systems. For restricted systems, "finite cut-off" based approaches @cite_16 @cite_12 @cite_27 reduce the problem to verifying networks of some fixed finite size. Such bounds have been established for restricted classes of ring networks and cache coherence protocols. Emerson and Kahlon @cite_27 have verified the version of German's cache coherence protocol with single-entry channels by manually reducing it to a snoopy protocol, for which a finite cut-off exists. However, the reduction is performed manually, exploits details of the protocol's operation, and thus requires user ingenuity. It cannot easily be extended to other unbounded systems, including the Bakery algorithm or out-of-order processors.
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_16", "@cite_12" ], "mid": [ "2963688871", "2504057811", "2088665600", "2130123502" ], "abstract": [ "We study verification problems for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. In particular, we focus in this paper on the model proposed by Suzuki and Yamashita of anonymous robots evolving in a discrete space with a finite number of locations (here, a ring). A large number of algorithms have been proposed working for rings whose size is not a priori fixed and can be hence considered as a parameter. Handmade correctness proofs of these algorithms have been shown to be error-prone, and recent attention had been given to the application of formal methods to automatically prove those. Our work is the first to study the verification problem of such algorithms in the parameterized case. We show that safety and reachability problems are undecidable for robots evolving asynchronously. On the positive side, we show that safety properties are decidable in the synchronous case, as well as in the asynchronous case for a particular class of algorithms. Several properties on the protocol can be decided as well. Decision procedures rely on an encoding in Presburger arithmetics formulae that can be verified by an SMT-solver. Feasibility of our approach is demonstrated by the encoding of several case studies.", "We propose a new method for the verification of parameterized cache coherence protocols. Cache coherence protocols are used to maintain data consistency in multiprocessor systems equipped with local fast caches. In our approach we use arithmetic constraints to model possibly infinite sets of global states of a multiprocessor system with many identical caches. 
In preliminary experiments using symbolic model checkers for infinite-state systems based on real arithmetics (HyTech [HHW97] and DMC [DP99]) we have automatically verified safety properties for parameterized versions of widely implemented write-invalidate and write-update cache coherence policies like the Mesi, Berkeley, Illinois, Firefly and Dragon protocols [Han93]. With this application, we show that symbolic model checking tools originally designed for hybrid and concurrent systems can be applied successfully to a new class of infinite-state systems of practical interest.", "In the unconditionally reliable message transmission (URMT) problem, two non-faulty players, the sender S and the receiver R are part of a synchronous network modeled as a directed graph. S has a message that he wishes to send to R; the challenge is to design a protocol such that after exchanging messages as per the protocol, the receiver R should correctly obtain S's message with arbitrarily small error probability Δ, in spite of the influence of a Byzantine adversary that may actively corrupt up to t nodes in the network (we denote such a URMT protocol as (t, (1 - Δ))-reliable). While it is known that (2t + 1) vertex disjoint directed paths from S to R are necessary and sufficient for (t, 1)-reliable URMT (that is with zero error probability), we prove that a strictly weaker condition, which we define and denote as (2t, t)-special-connectivity, together with just (t+1) vertex disjoint directed paths from S to R, is necessary and sufficient for (t, (1' - Δ))-reliable URMT with arbitrarily small (but non-zero) error probability, Δ. Thus, we demonstrate the power of randomization in the context of reliable message transmission. In fact, for any positive integer k > 0, we show that there always exists a digraph Gk such that (k, 1)-reliable URMT is impossible over Gk whereas there exists a (2k, (1 - Δ))-reliable URMT protocol, Δ > 0 in Gk. 
In a digraph G on which (t, (1 - Δ))-reliable URMT is possible, an edge is called critical if the deletion of that edge renders (t, (1 - Δ))-reliable URMT impossible. We give an example of a digraph G on n vertices such that G has Ω(n2) critical edges. This is quite baffling since no such graph exists for the case of perfect reliable message transmission (or equivalently (t, 1)-reliable URMT) or when the underlying graph is undirected. Such is the anomalous behavior of URMT protocols (when \"randomness meet directedness\") that it makes it extremely hard to design efficient protocols over arbitrary digraphs. However, if URMT is possible between every pair of vertices in the network, then we present efficient protocols for the same.", "We present decidability results for the verification of cryptographic protocols in the presence of equational theories corresponding to xor and Abelian groups. Since the perfect cryptography assumption is unrealistic for cryptographic primitives with visible algebraic properties such as xor, we extend the conventional Dolev-Yao model by permitting the intruder to exploit these properties. We show that the ground reachability problem in NP for the extended intruder theories in the cases of xor and Abelian groups. This result follows from a normal proof theorem. Then, we show how to lift this result in the xor case: we consider a symbolic constraint system expressing the reachability (e.g., secrecy) problem for a finite number of sessions. We prove that such a constraint system is decidable, relying in particular on an extension of combination algorithms for unification procedures. As a corollary, this enables automatic symbolic verification of cryptographic protocols employing xor for a fixed number of sessions." ] }
cs0407006
2951800949
Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol.
Flanagan and Qadeer @cite_2 use indexed predicates to synthesize loop invariants for sequential software programs that involve unbounded arrays. They also provide heuristics to extract some of the predicates from the program text automatically. These heuristics are specific to loops in sequential software and are not suited for verifying the more general unbounded systems that we handle in this paper. In this work, we explore formal properties of this formulation and apply it to verifying distributed systems. In recent work @cite_22 , we provide a syntactic heuristic, based on the weakest precondition transformer @cite_25 , for discovering most of the predicates for many of the systems that we consider in this paper.
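The core abstraction step underlying predicate abstraction can be illustrated by brute-force enumeration on a finite toy system. The counter system and predicates below are hypothetical, and the sketch replaces the decision-procedure-based symbolic computation described in the abstract with explicit enumeration over a small concrete state space.

```python
def abstract_reachable(init, step, preds, concrete_states):
    """Predicate abstraction by enumeration: a concrete state maps to the
    bitvector of its predicate truth values (its abstract state), and we
    compute the set of reachable abstract states. This set over-approximates
    the abstraction of the concrete reachable set."""
    alpha = lambda s: tuple(p(s) for p in preds)
    reach = {alpha(s) for s in init}
    frontier = set(reach)
    while frontier:
        new = set()
        # Existential abstract post: a -> a' if some concrete state whose
        # abstraction is a steps to a state whose abstraction is a'.
        for s in concrete_states:
            if alpha(s) in frontier:
                for t in step(s):
                    a = alpha(t)
                    if a not in reach:
                        reach.add(a)
                        new.add(a)
        frontier = new
    return reach

# Toy system (hypothetical): counter x in 0..9, step x -> x+2 mod 10, start at 0.
states = range(10)
step = lambda x: [(x + 2) % 10]
preds = [lambda x: x % 2 == 0, lambda x: x >= 5]   # "x is even", "x >= 5"
reach = abstract_reachable([0], step, preds, states)
print(sorted(reach))   # every reachable abstract state has "x is even" true
```

The computed invariant (the disjunction of the reachable abstract states) implies that x stays even, which is exactly what the predicate set can express about this system.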
{ "cite_N": [ "@cite_25", "@cite_22", "@cite_2" ], "mid": [ "1503677488", "39834257", "171295454" ], "abstract": [ "We present a new method for automatic generation of loop invariants for programs containing arrays. Unlike all previously known methods, our method allows one to generate first-order invariants containing alternations of quantifiers. The method is based on the automatic analysis of the so-called update predicates of loops. An update predicate for an array A expresses updates made to A . We observe that many properties of update predicates can be extracted automatically from the loop description and loop properties obtained by other methods such as a simple analysis of counters occurring in the loop, recurrence solving and quantifier elimination over loop variables. We run the theorem prover Vampire on some examples and show that non-trivial loop invariants can be generated.", "Geometric heuristics for the quantifier elimination approach presented by Kapur (2004) are investigated to automatically derive loop invariants expressing weakly relational numerical properties (such as l≤x≤h or l≤±x ±y≤h) for imperative programs. Such properties have been successfully used to analyze commercial software consisting of hundreds of thousands of lines of code (using for example, the Astree tool based on abstract interpretation framework proposed by Cousot and his group). The main attraction of the proposed approach is its much lower complexity in contrast to the abstract interpretation approach (O(n2) in contrast to O(n4), where n is the number of variables) with the ability to still generate invariants of comparable strength. 
This approach has been generalized to consider disjunctive invariants of the similar form, expressed using maximum function (such as max (x+a,y+b,z+c,d)≤max (x+e,y+f,z+g,h)), thus enabling automatic generation of a subclass of disjunctive invariants for imperative programs as well.", "We present a technique for using infeasible program paths to automatically infer Range Predicates that describe properties of unbounded array segments. First, we build proofs showing the infeasibility of the paths, using axioms that precisely encode the high-level (but informal) rules with which programmers reason about arrays. Next, we mine the proofs for Craig Interpolants which correspond to predicates that refute the particular counterexample path. By embedding the predicate inference technique within a Counterexample-Guided Abstraction-Refinement (CEGAR) loop, we obtain a method for verifying data-sensitive safety properties whose precision is tailored in a program- and property-sensitive manner. Though the axioms used are simple, we show that the method suffices to prove a variety of array-manipulating programs that were previously beyond automatic model checkers." ] }
math0407092
2952242105
In this paper we study a random graph with @math nodes, where node @math has degree @math and @math are i.i.d. with @math . We assume that @math for some @math and some constant @math . This graph model is a variant of the so-called configuration model, and includes heavy tail degrees with finite variance. The minimal number of edges between two arbitrary connected nodes, also known as the graph distance or the hopcount, is investigated when @math . We prove that the graph distance grows like @math , when the base of the logarithm equals @math . This confirms the heuristic argument of Newman, Strogatz and Watts NSW00 . In addition, the random fluctuations around this asymptotic mean @math are characterized and shown to be uniformly bounded. In particular, we show convergence in distribution of the centered graph distance along exponentially growing subsequences.
A second related model can be found in @cite_15 and @cite_42 , where edges between nodes @math and @math are present with probability equal to @math for some 'expected degree vector' @math . Chung and Lu @cite_15 show that when @math is proportional to @math the average distance between pairs of nodes is @math when @math , and @math when @math . The difference between this model and ours is that the nodes are not exchangeable in @cite_15 , but the observed phenomena are similar. This result can be understood heuristically as follows. Firstly, the actual degree vector in @cite_15 should be close to the expected degree vector. Secondly, for the expected degree vector, we can compute that the number of nodes whose degree is less than or equal to @math equals @math Thus, one expects the number of nodes with degree at most @math to decrease as @math , just as in our model. In @cite_42 , Chung and Lu study the sizes of the connected components in the above model. The advantage of this model is that the edges are present independently, which makes the resulting graph closer to a traditional random graph.
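A minimal sketch of sampling from this expected-degree model, assuming edge probability w_i * w_j / sum(w) (capped at one) and an illustrative power-law choice of weights; the parameters are hypothetical:

```python
import numpy as np

def expected_degree_graph(w, rng):
    """Sample a graph where edge {i,j} is present independently with
    probability w_i * w_j / sum(w), capped at 1, so that node i's expected
    degree is close to w_i. Returns a boolean adjacency matrix."""
    w = np.asarray(w, dtype=float)
    p = np.minimum(np.outer(w, w) / w.sum(), 1.0)
    np.fill_diagonal(p, 0.0)                      # no self-loops
    upper = np.triu(rng.random(p.shape) < p, k=1) # sample each pair once
    return upper | upper.T

rng = np.random.default_rng(1)
# Power-law-style expected degrees (hypothetical parameters).
n, tau = 2000, 3.5
w = 10.0 * (np.arange(1, n + 1) ** (-1.0 / (tau - 1)))
A = expected_degree_graph(w, rng)
deg = A.sum(axis=1)
print(deg[:5], np.round(w[:5], 2))  # realized degrees fluctuate around expected ones
```

Because each pair is sampled independently, the construction is simple; the price, as noted above, is that nodes are not exchangeable.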
{ "cite_N": [ "@cite_15", "@cite_42" ], "mid": [ "2950469527", "2964292545" ], "abstract": [ "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erd o s-R 'enyi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. 
In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math .", "We consider the problem of clustering a graph @math into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables @math , where @math is the incidence matrix of a graph @math , @math is the vector of unknown vertex variables (with a uniform prior), and @math is a noise vector with Bernoulli @math i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery (up to global flip) of @math is possible if and only the graph @math is connected, with a sharp threshold at the edge probability @math for Erdős-Renyi random graphs. The first goal of this paper is to determine how the edge probability @math needs to scale to allow exact recovery in the presence of noise. Defining the degree rate of the graph by @math , it is shown that exact recovery is possible if and only if @math . In other words, @math is the information theoretic threshold for exact recovery at low-SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. For a deterministic graph @math , defining the degree rate as @math , where @math is the minimum degree of the graph, it is shown that the proposed method achieves the rate @math , where @math is the spectral gap of the graph @math ." ] }
math0407092
2952242105
In this paper we study a random graph with @math nodes, where node @math has degree @math and @math are i.i.d. with @math . We assume that @math for some @math and some constant @math . This graph model is a variant of the so-called configuration model, and includes heavy tail degrees with finite variance. The minimal number of edges between two arbitrary connected nodes, also known as the graph distance or the hopcount, is investigated when @math . We prove that the graph distance grows like @math , when the base of the logarithm equals @math . This confirms the heuristic argument of Newman, Strogatz and Watts NSW00 . In addition, the random fluctuations around this asymptotic mean @math are characterized and shown to be uniformly bounded. In particular, we show convergence in distribution of the centered graph distance along exponentially growing subsequences.
The reason why we study the random graph at a given time instant is that we are interested in the topology of the random graph. In @cite_36 , inspired by the power-law degree sequence observed in @cite_12 , the configuration model with i.i.d. degrees is proposed as a model for the AS graph of the Internet, and it is argued on a qualitative basis that this simple model serves as a better model for the Internet topology than currently used topology generators. Our results can be seen as a step towards a quantitative understanding of whether the hopcount in the Internet is well described by the average graph distance in the configuration model.
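The configuration-model construction (pairing half-edges uniformly at random) and the hopcount can be sketched as follows; the degree distribution below is a hypothetical heavy-tailed choice, not one fitted to Internet data:

```python
import random
from collections import deque, defaultdict

def configuration_model(degrees, rng):
    """Pair half-edges uniformly at random (the configuration model).
    Self-loops may occur; parallel edges collapse in the adjacency sets."""
    stubs = [i for i, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:                      # need an even number of half-edges
        stubs.append(rng.randrange(len(degrees)))
    rng.shuffle(stubs)
    adj = defaultdict(set)
    for a, b in zip(stubs[::2], stubs[1::2]):
        adj[a].add(b)
        adj[b].add(a)
    return adj

def graph_distance(adj, s, t):
    """BFS hopcount between s and t; None if they are not connected."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            return dist[v]
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return None

rng = random.Random(0)
# i.i.d. heavy-tailed degrees (hypothetical choice), shifted so every degree >= 3.
n = 1000
degrees = [2 + int(rng.paretovariate(2.0)) for _ in range(n)]
adj = configuration_model(degrees, rng)
d = graph_distance(adj, 0, n - 1)
print(d)   # typically of order log(n), as the asymptotics above suggest
```

With minimum degree three the graph is connected with high probability, and the hopcount between two fixed nodes is logarithmically small in n.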
{ "cite_N": [ "@cite_36", "@cite_12" ], "mid": [ "2107648668", "1978479505" ], "abstract": [ "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "Random graphs with a given degree sequence are a useful model capturing several features absent in the classical Erd˝os-Renyi model, such as dependent edges and non-binomial degrees. In this paper, we use a characterization due to Erd˝os and Gallai to develop a sequential algorithm for generating a random labeled graph with a given degree sequence. The algorithm is easy to implement and allows surprisingly ecient sequential importance sampling. Applications are given, in- cluding simulating a biological network and estimating the number of graphs with a given degree sequence. 1. Introduction. 
Random graphs with given vertex degrees have recently attracted great interest as a model for many real-world complex networks, including the World Wide Web, peer-to-peer networks, social networks, and biological networks. Newman (58) contains an excellent survey of these networks, with extensive references. A common approach to simulating these systems is to study (empirically or theoretically) the degrees of the vertices in instances of the network, and then to generate a random graph with the appropriate degrees. Graphs with prescribed degrees also appear in random matrix theory and string theory, which can call for large simulations based on random k-regular graphs. Throughout, we are concerned with generating simple graphs, i.e., no loops or multiple edges are allowed (the problem becomes considerably easier if loops and multiple edges are allowed). The main result of this paper is a new sequential importance sampling algorithm for generating random graphs with a given degree sequence. The idea is to build up the graph sequentially, at each stage choosing an edge from a list of candidates with probability proportional to the degrees. Most previously studied algorithms for this problem sometimes either get stuck or produce loops or multiple edges in the output, which is handled by starting over and trying again. Often for such algorithms, the probability of a restart being needed on a trial rapidly approaches 1 as the degree parameters grow, resulting in an enormous number of trials being needed on average to obtain a simple graph. A major advantage of our algorithm is that it never gets stuck. This is achieved using the Erd˝os- Gallai characterization, which is explained in Section 2, and a carefully chosen order of edge selection." ] }
cs0406019
2950243200
We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding packet drops indiscriminate of flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity and is scalable to very high port speeds.
In recent years, these potential scalability concerns have been addressed by implementing a very small number of independent service guarantees. Under the Differentiated Services framework @cite_12 , flows are aggregated into @math classes, and service guarantees are offered per class. The downside is that the realized QoS per flow has a lower level of assurance (a higher probability of violating the desired service level) than the QoS per aggregate @cite_9 , @cite_1 . Moreover, recently proposed VPN and VLAN services @cite_11 , @cite_17 require per-VPN or per-VLAN QoS guarantees. All of the above are arguments in favor of implementing a number of independent service guarantees per port that is much larger than six.
{ "cite_N": [ "@cite_11", "@cite_9", "@cite_1", "@cite_12", "@cite_17" ], "mid": [ "2046531425", "1985441584", "2136136555", "1979708789", "2160997001" ], "abstract": [ "The paper addresses the issue of providing Quality of Service (QoS) guarantees in the Internet. After a brief discussion of Internet traffic characteristics, we consider the possibility of performing multiplexing with predictable performance for stream and elastic traffic using open–loop and closed–loop control, respectively. QoS depends essentially on providing sufficient capacity to handle expected demand. We argue that flow awareness is additionally necessary to ensure that traffic is directed over routes with available capacity and to avoid congestion collapse in case of overload. Proposed flow–aware controls allow simple volume–based charging and the development of an economic model similar to that of the telephone network.", "Providing differentiated Quality of Service (QoS) over unreliable wireless channels is an important challenge for supporting several future applications. We analyze a model that has been proposed to describe the QoS requirements by four criteria: traffic pattern, channel reliability, delay bound, and throughput bound. We study this mathematical model and extend it to handle variable bit rate applications. We then obtain a sharp characterization of schedulability vis-a-vis latencies and timely throughput. Our results extend the results so that they are general enough to be applied on a wide range of wireless applications, including MPEG Variable-Bit-Rate (VBR) video streaming, VoIP with differentiated quality, and wireless sensor networks (WSN). Two major issues concerning QoS over wireless are admission control and scheduling. Based on the model incorporating the QoS criteria, we analytically derive a necessary and sufficient condition for a set of variable bit-rate clients to be feasible. Admission control is reduced to evaluating the necessary and sufficient condition. 
We further analyze two scheduling policies that have been proposed, and show that they are both optimal in the sense that they can fulfill every set of clients that is feasible by some scheduling algorithms. The policies are easily implemented on the IEEE 802.11 standard. Simulation results under various settings support the theoretical study.", "We propose a novel approach to QoS for real-time traffic over wireless mesh networks, in which application layer characteristics are exploited or shaped in the design of medium access control. Specifically, we consider the problem of efficiently supporting a mix of Voice over IP (VoIP) and delay-insensitive traffic, assuming a narrowband physical layer with CSMA CA capabilities. The VoIP call carrying capacity of wireless mesh networks based on classical CSMA CA (e.g., the IEEE 802.11 standard) is low compared to the raw available bandwidth, due to lack of bandwidth and delay guarantees. Time Division Multiplexing (TDM) could potentially provide such guarantees, but it requires fine-grained network-wide synchronization and scheduling, which are difficult to implement. In this paper, we introduce Sticky CSMA CA, a new medium access mechanism that provides TDM-like performance to real-time flows without requiring explicit synchronization. We exploit the natural periodicity of VoIP flows to obtain implicit synchronization and multiplexing gains. Nodes monitor the medium using the standard CSMA CA mechanism, except that they remember the recent history of activity in the medium. A newly arriving VoIP flow uses this information to grab the medium at the first available opportunity, and then sticks to a periodic schedule, providing delay and bandwidth guarantees. Delay-insensitive traffic fills the gaps left by the real-time flows using novel contention mechanisms to ensure efficient use of the leftover bandwidth. 
Large gains over IEEE 802.11 networks are demonstrated in terms of increased voice call carrying capacity (more than 100 in some cases). We briefly discuss extensions of these ideas to a broader class of real-time applications, in which artificially imposing periodicity (or some other form of regularity) at the application layer can lead to significant enhancements of QoS due to improved medium access.", "Abstract This paper studies the quality of service (QoS) provision problem in noncooperative networks where applications or users are selfish and routers implement generalized processor sharing based packet scheduling. We formulate a model of QoS provision in noncooperative networks where users are given the freedom to choose both the service classes and traffic volume allocated, and heterogenous QoS preferences are captured by a user's utility function. We present a comprehensive analysis of the noncooperative multi-class QoS provision game, giving a complete characterization of Nash equilibria and their existence criteria, and show under what conditions they are Pareto- and system-optimal. We show that, in general, Nash equilibria need not exist, and when they do exist, they need not be Pareto- nor system-optimal. For certain “resource-plentiful” systems, however, we show that the world indeed can be nice with Nash equilibria, Pareto optima, and system optima collapsing into a single class. We study the problem of facilitating effective QoS in systems with multi-dimensional QoS vectors containing both mean- and burstiness-related QoS measures. We extend the game-theoretic analysis to multi-dimensional QoS vector games and show under what conditions the aforementioned results carry over.", "In wireless ATM-based networks, admission control is required to reserve resources in advance for calls requiring guaranteed services. 
In the case of a multimedia call, each of its substreams (i.e., video, audio, and data) has its own distinct quality of service (QoS) requirements (e.g., cell loss rate, delay, jitter, etc.). The network attempts to deliver the required QoS by allocating an appropriate amount of resources (e.g., bandwidth, buffers). The negotiated QoS requirements constitute a certain QoS level that remains fixed during the call (static allocation approach). Accordingly, the corresponding allocated resources also remain unchanged. We present and analyze an adaptive allocation of resources algorithm based on genetic algorithms. In contrast to the static approach, each substream declares a preset range of acceptable QoS levels (e.g., high, medium, low) instead of just a single one. As the availability of resources in the wireless network varies, the algorithm selects the best possible QoS level that each substream can obtain. In case of congestion, the algorithm attempts to free up some resources by degrading the QoS levels of the existing calls to lesser ones. This is done, however, under the constraint of achieving maximum utilization of the resources while simultaneously distributing them fairly among the calls. The degradation is limited to a minimum value predefined in a user-defined profile (UDP). Genetic algorithms have been used to solve the optimization problem. From the user perspective, the perception of the QoS degradation is very graceful and happens only during overload periods. The network services, on the other hand, are greatly enhanced due to the fact that the call blocking probability is significantly decreased. Simulation results demonstrate that the proposed algorithm performs well in terms of increasing the number of admitted calls while utilizing the available bandwidth fairly and effectively." ] }
cs0406019
2950243200
We consider the problem of providing service guarantees in a high-speed packet switch. As basic requirements, the switch should be scalable to high speeds per port, a large number of ports, and a large number of traffic flows with independent guarantees. Existing scalable solutions are based on Virtual Output Queuing, which is computationally complex when required to provide service guarantees for a large number of flows. We present a novel architecture for packet switching that provides support for such service guarantees. A cost-effective fabric with small external speedup is combined with a feedback mechanism that enables the fabric to be virtually lossless, thus avoiding indiscriminate packet drops across flows. Through analysis and simulation, we show that this architecture provides accurate support for service guarantees, has low computational complexity, and is scalable to very high port speeds.
More recent proposals @cite_16 decrease the time interval between two runs of the matching algorithm, but at the cost of increased burstiness and additional scheduling algorithms needed to mitigate unbounded delays. Moreover, the service presented in @cite_16 is of type Premium 1-to-1; it cannot provide Assured N-to-1 service.
{ "cite_N": [ "@cite_16" ], "mid": [ "1984382275" ], "abstract": [ "This paper proposes a new class of online policies for scheduling in input-buffered crossbar switches. Given an initial configuration of packets at the input buffers, these policies drain all packets in the system in the minimal amount of time provided that there are no further arrivals. These policies are also throughput optimal for a large class of arrival processes which satisfy strong-law of large numbers. We show that it is possible for policies in our class to be throughput optimal even if they are not constrained to be maximal in every time slot. Most algorithms for switch scheduling take an edge based approach; in contrast, we focus on scheduling (a large enough set of) the most congested ports. This alternate approach allows for lower-complexity algorithms, and also requires a non-standard technique to prove throughput-optimality. One algorithm in our class, Maximum Vertex-weighted Matching (MVM) has worst-case complexity similar to Max-size Matching, and in simulations shows slightly better delay performance than Max-(edge)weighted-Matching (MWM)." ] }
cond-mat0406404
1540064387
Mapping the Internet generally consists of sampling the network from a limited set of sources by using "traceroute"-like probes. This methodology, akin to merging different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest-path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks across a range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at steps toward more efficient mapping strategies.
Work by @cite_21 has shown that power-law-like distributions can be obtained for subgraphs of Erdős-Rényi random graphs when the subgraph is the result of a traceroute exploration with relatively few sources and destinations. They discuss the origin of these biases and the effect of the distance between source and target in the mapping process.
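The traceroute-like exploration discussed above can be modeled, very crudely, as the union of one shortest path per (source, target) pair. A minimal sketch using BFS (unit edge weights; real probes follow actual routing policies, and the function names are illustrative):

```python
from collections import deque

def bfs_parents(adj, src):
    """Shortest-path (BFS) tree rooted at src; adj maps node -> neighbor list."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def traceroute_sample(adj, sources, targets):
    """Union of one shortest path per (source, target) pair -- a crude
    model of merging traceroute probes into a sampled graph."""
    edges = set()
    for s in sources:
        parent = bfs_parents(adj, s)
        for t in targets:
            v = t
            while parent.get(v) is not None:  # walk back up the BFS tree
                u = parent[v]
                edges.add((min(u, v), max(u, v)))
                v = u
    return edges
```

Even in this toy model the bias is visible: probing a 6-cycle from a single source to all nodes discovers only the 5 edges of one BFS tree and misses the edge on the far side of the ring.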
{ "cite_N": [ "@cite_21" ], "mid": [ "2107648668" ], "abstract": [ "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias." ] }
cond-mat0406404
1540064387
Mapping the Internet generally consists of sampling the network from a limited set of sources by using "traceroute"-like probes. This methodology, akin to merging different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest-path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks across a range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at steps toward more efficient mapping strategies.
In Ref. @cite_11 , Petermann and De Los Rios have studied a traceroute-like procedure on various examples of scale-free graphs, showing that, in the case of a single source, power-law distributions with underestimated exponents are obtained. Analytical estimates of the measured exponents as a function of the true ones were also derived. Finally, in a recent preprint that appeared during the completion of our work, Guillaume and Latapy @cite_28 report on shortest-path explorations of synthetic graphs, comparing properties of the resulting sampled graph with those of the original network. The exploration is analyzed using level plots of the proportion of discovered nodes and edges as a function of the number of sources and targets, also giving hints for the optimal placement of sources and targets. All these pieces of work make clear the relevance of determining to what extent the topological properties observed in sampled graphs are representative of those of the real networks.
{ "cite_N": [ "@cite_28", "@cite_11" ], "mid": [ "1540064387", "2107648668" ], "abstract": [ "Mapping the Internet generally consists in sampling the network from a limited set of sources by using \"traceroute\"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability is depending on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a throughout numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint the steps toward more efficient mapping strategies.", "Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. 
In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias." ] }
cs0405044
2950057546
Most previous work on the recently developed language-modeling approach to information retrieval focuses on document-specific characteristics, and therefore does not take into account the structure of the surrounding corpus. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in precision and recall, and our new interpolation algorithm posts statistically significant improvements for both metrics over all three corpora tested.
Document clustering has a long history in information retrieval @cite_11 @cite_12 ; in particular, approximating topics via clusters is a recurring theme @cite_18 . Arguably the work most related to ours, by dint of employing both clustering and language modeling in the context of ad hoc retrieval (see, e.g., @cite_16 , @cite_3 , and @cite_5 for applications of clustering in related areas), is that on latent-variable models, e.g., @cite_1 @cite_13 @cite_15 @cite_10 , of which the classic aspect model is one instantiation. Such work takes a strictly probabilistic approach to the problems we have discussed with standard language modeling, as opposed to our algorithmic viewpoint. Also, a focus in the latent-variable work has been on sophisticated cluster induction, whereas we find that a very simple clustering scheme works rather well in practice. Interestingly, Hofmann @cite_13 linearly interpolated his probabilistic model's score, which is based on (soft) clusters, with the usual cosine metric; this is quite close in spirit to what our algorithm does.
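The interpolation idea described above can be sketched as a query likelihood in which the document language model is linearly mixed with the language model of the document's cluster. This is a simplified illustration, not the paper's exact algorithm: the weights `lam` and `mu` and the maximum-likelihood estimates with simple corpus smoothing are assumptions made for the sketch.

```python
import math
from collections import Counter

def unigram(tokens):
    """Maximum-likelihood unigram language model over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def interp_score(query, doc, cluster, corpus, lam=0.7, mu=0.2):
    """log p(query | doc), with the document LM linearly interpolated with
    the LM of the document's cluster; both are smoothed with the corpus LM.
    (lam and mu are illustrative parameters, not values from the paper.)"""
    p_d, p_c, p_bg = unigram(doc), unigram(cluster), unigram(corpus)
    score = 0.0
    for w in query:
        pd = (1 - mu) * p_d.get(w, 0.0) + mu * p_bg.get(w, 0.0)
        pc = (1 - mu) * p_c.get(w, 0.0) + mu * p_bg.get(w, 0.0)
        p = lam * pd + (1 - lam) * pc
        score += math.log(p) if p > 0 else float("-inf")
    return score
```

The cluster term lets a document score above pure corpus smoothing even for query words it lacks, provided its cluster contains them, which is the intuition behind drawing information from clusters of similar documents.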
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_1", "@cite_3", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2103986443", "2042980227", "1965667542", "2060314721", "2590145195", "2158201212", "2082729696", "1981081578", "316065036", "2128859735" ], "abstract": [ "Document clustering is useful in many information retrieval tasks: document browsing, organization and viewing of retrieval results, generation of Yahoo-like hierarchies of documents, etc. The general goal of clustering is to group data elements such that the intra-group similarities are high and the inter-group similarities are low. We present a clustering algorithm called CBC (Clustering By Committee) that is shown to produce higher quality clusters in document clustering tasks as compared to several well known clustering algorithms. It initially discovers a set of tight clusters (high intra-group similarity), called committees, that are well scattered in the similarity space (low inter-group similarity). The union of the committees is but a subset of all elements. The algorithm proceeds by assigning elements to their most similar committee. Evaluating cluster quality has always been a difficult task. We present a new evaluation methodology that is based on the editing distance between output clusters and manually constructed classes (the answer key). This evaluation measure is more intuitive and easier to interpret than previous evaluation measures.", "Search algorithms incorporating some form of topic model have a long history in information retrieval. For example, cluster-based retrieval has been studied since the 60s and has recently produced good results in the language model framework. An approach to building topic models based on a formal generative model of documents, Latent Dirichlet Allocation (LDA), is heavily cited in the machine learning literature, but its feasibility and effectiveness in information retrieval is mostly unknown. 
In this paper, we study how to efficiently use LDA to improve ad-hoc retrieval. We propose an LDA-based document model within the language modeling framework, and evaluate it on several TREC collections. Gibbs sampling is employed to conduct approximate inference in LDA and the computational complexity is analyzed. We show that improvements over retrieval using cluster-based models can be obtained with reasonable efficiency.", "A novel probabilistic retrieval model is presented. It forms a basis to interpret the TF-IDF term weights as making relevance decisions. It simulates the local relevance decision-making for every location of a document, and combines all of these “local” relevance decisions as the “document-wide” relevance decision for the document. The significance of interpreting TF-IDF in this way is the potential to: (1) establish a unifying perspective about information retrieval as relevance decision-making; and (2) develop advanced TF-IDF-related term weights for future elaborate retrieval models. Our novel retrieval model is simplified to a basic ranking formula that directly corresponds to the TF-IDF term weights. In general, we show that the term-frequency factor of the ranking formula can be rendered into different term-frequency factors of existing retrieval systems. In the basic ranking formula, the remaining quantity - log p(rvt ∈ d) is interpreted as the probability of randomly picking a nonrelevant usage (denoted by r) of term t. Mathematically, we show that this quantity can be approximated by the inverse document-frequency (IDF). Empirically, we show that this quantity is related to IDF, using four reference TREC ad hoc retrieval data collections.", "We present a novel implementation of the recently introduced information bottleneck method for unsupervised document clustering. 
Given a joint empirical distribution of words and documents, p(x, y), we first cluster the words, Y, so that the obtained word clusters, Ytilde;, maximally preserve the information on the documents. The resulting joint distribution. p(X, Ytilde;), contains most of the original information about the documents, I(X; Ytilde;) a I(X; Y), but it is much less sparse and noisy. Using the same procedure we then cluster the documents, X, so that the information about the word-clusters is preserved. Thus, we first find word-clusters that capture most of the mutual information about to set of documents, and then find document clusters, that preserve the information about the word clusters. We tested this procedure over several document collections based on subsets taken from the standard 20Newsgroups corpus. The results were assessed by calculating the correlation between the document clusters and the correct labels for these documents. Finding from our experiments show that this double clustering procedure, which uses the information bottleneck method, yields significantly superior performance compared to other common document distributional clustering algorithms. Moreover, the double clustering procedure improves all the distributional clustering methods examined here.", "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogazici University Printhouse. 
http: www.issi2015.org files downloads all-papers 1042.pdf, 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "The cluster hypothesis states: closely related documents tend to be relevant to the same request. We exploit this hypothesis directly by adjusting ad hoc retrieval scores from an initial retrieval so that topically related documents receive similar scores. We refer to this process as score regularization. Score regularization can be presented as an optimization problem, allowing the use of results from semi-supervised learning. We demonstrate that regularized scores consistently and significantly rank documents better than unregularized scores, given a variety of initial retrieval algorithms. We evaluate our method on two large corpora across a substantial number of topics.", "This paper reports on a novel technique for literature indexing and searching in a mechanized library system. The notion of relevance is taken as the key concept in the theory of information retrieval and a comparative concept of relevance is explicated in terms of the theory of probability. 
The resulting technique called “Probabilistic Indexing,” allows a computing machine, given a request for information, to make a statistical inference and derive a number (called the “relevance number”) for each document, which is a measure of the probability that the document will satisfy the given request. The result of a search is an ordered list of those documents which satisfy the request ranked according to their probable relevance. The paper goes on to show that whereas in a conventional library system the cross-referencing (“see” and “see also”) is based solely on the “semantical closeness” between index terms, statistical measures of closeness between index terms can be defined and computed. Thus, given an arbitrary request consisting of one (or many) index term(s), a machine can elaborate on it to increase the probability of selecting relevant documents that would not otherwise have been selected. Finally, the paper suggests an interpretation of the whole library problem as one where the request is considered as a clue on the basis of which the library system makes a concatenated statistical inference in order to provide as an output an ordered list of those documents which most probably satisfy the information needs of the user.", "This paper presents a detailed empirical study of 12 generative approaches to text clustering, obtained by applying four types of document-to-cluster assignment strategies (hard, stochastic, soft and deterministic annealing (DA) based assignments) to each of three base models, namely mixtures of multivariate Bernoulli, multinomial, and von Mises-Fisher (vMF) distributions. 
A large variety of text collections, both with and without feature selection, are used for the study, which yields several insights, including (a) showing situations wherein the vMF-centric approaches, which are based on directional statistics, fare better than multinomial model-based methods, and (b) quantifying the trade-off between increased performance of the soft and DA assignments and their increased computational demands. We also compare all the model-based algorithms with two state-of-the-art discriminative approaches to document clustering based, respectively, on graph partitioning (CLUTO) and a spectral coclustering method. Overall, DA and CLUTO perform the best but are also the most computationally expensive. The vMF models provide good performance at low cost while the spectral coclustering algorithm fares worse than vMF-based methods for a majority of the datasets.", "Automated unsupervised learning of topic-based clusters is used in various text data mining applications, e.g., document organization in content management, information retrieval and filtering in news aggregation services. Typically batch models are used for this purpose, which perform clustering on the document collection in aggregate. In this paper, we first analyze three batch topic models that have been recently proposed in the machine learning and data mining community – Latent Dirichlet Allocation (LDA), Dirichlet Compound Multinomial (DCM) mixtures and von-Mises Fisher (vMF) mixture models. Our discussion uses a common framework based on the particular assumptions made regarding the conditional distributions corresponding to each component and the topic priors. Our experiments on large real-world document collections demonstrate that though LDA is a good model for finding word-level topics, vMF finds better document-level topic clusters more efficiently, which is often important in text mining applications. 
In cases where offline clustering on complete document collections is infeasible due to resource constraints, online unsupervised clustering methods that process incoming data incrementally are necessary. To this end, we propose online variants of vMF, EDCM and LDA. Experiments on real-world streaming text illustrate the speed and performance benefits of online vMF. Finally, we propose a practical heuristic for hybrid topic modeling, which learns online topic models on streaming text data and intermittently runs batch topic models on aggregated documents offline. Such a hybrid model is useful for applications (e.g., dynamic topic-based aggregation of consumer-generated content in social networking sites) that need a good tradeoff between the performance of batch offline algorithms and efficiency of incremental online algorithms.", "We present a new method for clustering based on compression. The method does not use subject-specific features or background knowledge, and works as follows: First, we determine a parameter-free, universal, similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files (singly and in pairwise concatenation). Second, we apply a hierarchical clustering method. The NCD is not restricted to a specific application area, and works across application area boundaries. A theoretical precursor, the normalized information distance, co-developed by one of the authors, is provably optimal. However, the optimality comes at the price of using the noncomputable notion of Kolmogorov complexity. We propose axioms to capture the real-world setting, and show that the NCD approximates optimality. To extract a hierarchy of clusters from the distance matrix, we determine a dendrogram (ternary tree) by a new quartet method and a fast heuristic to implement it. The method is implemented and available as public software, and is robust under choice of different compressors. 
To substantiate our claims of universality and robustness, we report evidence of successful application in areas as diverse as genomics, virology, languages, literature, music, handwritten digits, astronomy, and combinations of objects from completely different domains, using statistical, dictionary, and block sorting compressors. In genomics, we presented new evidence for major questions in Mammalian evolution, based on whole-mitochondrial genomic analysis: the Eutherian orders and the Marsupionta hypothesis against the Theria hypothesis." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
The protocol described here is derived from earlier work @cite_21 in which we covered the background of the LOCKSS system. That protocol used redundancy, rate limitation, effort balancing, bimodal behavior (polls must be won or lost by a landslide) and friend bias (soliciting some percentage of votes from peers on the friends list) to prevent powerful adversaries from modifying the content without detection, or discrediting the intrusion detection system with false alarms. To mitigate its vulnerability to attrition, in this work we reinforce these defenses using admission control, desynchronization, and redundancy, and restructure votes to support a block-based repair mechanism that penalizes free-riding. In this section we list work that describes the nature and types of denial of service attacks, as well as related work that applies defenses similar to ours.
{ "cite_N": [ "@cite_21" ], "mid": [ "2950945875" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Our attrition adversary draws on a wide range of work in detecting @cite_60 , measuring @cite_34 , and combating @cite_8 @cite_38 @cite_27 @cite_49 network-level DDoS attacks capable of stopping traffic to and from our peers. This work observes that current attacks are not simultaneously of high intensity, long duration, and high coverage (many peers) @cite_34 .
{ "cite_N": [ "@cite_38", "@cite_8", "@cite_60", "@cite_27", "@cite_49", "@cite_34" ], "mid": [ "2119227347", "2170810185", "2903785932", "2159160833", "1502916507", "2162133150" ], "abstract": [ "A low-rate distributed denial of service (DDoS) attack has significant ability of concealing its traffic because it is very much like normal traffic. It has the capacity to elude the current anomaly-based detection schemes. An information metric can quantify the differences of network traffic with various probability distributions. In this paper, we innovatively propose using two new information metrics such as the generalized entropy metric and the information distance metric to detect low-rate DDoS attacks by measuring the difference between legitimate traffic and attack traffic. The proposed generalized entropy metric can detect attacks several hops earlier (three hops earlier while the order α = 10 ) than the traditional Shannon metric. The proposed information distance metric outperforms (six hops earlier while the order α = 10) the popular Kullback-Leibler divergence approach as it can clearly enlarge the adjudication distance and then obtain the optimal detection sensitivity. The experimental results show that the proposed information metrics can effectively detect low-rate DDoS attacks and clearly reduce the false positive rate. Furthermore, the proposed IP traceback algorithm can find all attacks as well as attackers from their own local area networks (LANs) and discard attack traffic.", "This paper presents a new distributed approach to detecting DDoS (distributed denial of services) flooding attacks at the traffic-flow level The new defense system is suitable for efficient implementation over the core networks operated by Internet service providers (ISPs). At the early stage of a DDoS attack, some traffic fluctuations are detectable at Internet routers or at the gateways of edge networks. 
We develop a distributed change-point detection (DCD) architecture using change aggregation trees (CAT). The idea is to detect abrupt traffic changes across multiple network domains at the earliest time. Early detection of DDoS attacks minimizes the flooding damages to the victim systems serviced by the provider. The system is built over attack-transit routers, which work together cooperatively. Each ISP domain has a CAT server to aggregate the flooding alerts reported by the routers. CAT domain servers collaborate among themselves to make the final decision. To resolve policy conflicts at different ISP domains, a new secure infrastructure protocol (SIP) is developed to establish mutual trust or consensus. We simulated the DCD system up to 16 network domains on the Cyber Defense Technology Experimental Research (DETER) testbed, a 220-node PC cluster for Internet emulation experiments at the University of Southern California (USC) Information Science Institute. Experimental results show that four network domains are sufficient to yield a 98 percent detection accuracy with only 1 percent false-positive alarms. Based on a 2006 Internet report on autonomous system (AS) domain distribution, we prove that this DDoS defense system can scale well to cover 84 AS domains. This security coverage is wide enough to safeguard most ISP core networks from real-life DDoS flooding attacks.", "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. 
When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9 accuracy, our method achieves 55.7 ; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6 accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6 classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10 . Code is available at this https URL.", "Launching a denial of service (DoS) attack is trivial, but detection and response is a painfully slow and often a manual process. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on header content, and novel techniques such as transient ramp-up behavior and spectral analysis. Although headers are easily forged, we show that characteristics of attack ramp-up and attack spectrum are more difficult to spoof. To evaluate our framework we monitored access links of a regional ISP detecting 80 live attacks. Header analysis identified the number of attackers in 67 attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis. We validate our results through monitoring at a second site, controlled experiments, and simulation. We use experiments and simulation to understand the underlying reasons for the characteristics observed. 
In addition to helping understand attack dynamics, classification mechanisms such as ours are important for the development of realistic models of DoS traffic, can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to estimate the level of DoS activity on the Internet.", "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. 
Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "Denial of service (DoS) attack on the Internet has become a pressing problem. In this paper, we describe and evaluate route-based distributed packet filtering (DPF), a novel approach to distributed DoS (DDoS) attack prevention. We show that DPF achieves proactiveness and scalability, and we show that there is an intimate relationship between the effectiveness of DPF at mitigating DDoS attack and power-law network topology.The salient features of this work are two-fold. First, we show that DPF is able to proactively filter out a significant fraction of spoofed packet flows and prevent attack packets from reaching their targets in the first place. The IP flows that cannot be proactively curtailed are extremely sparse so that their origin can be localized---i.e., IP traceback---to within a small, constant number of candidate sites. We show that the two proactive and reactive performance effects can be achieved by implementing route-based filtering on less than 20 of Internet autonomous system (AS) sites. 
Second, we show that the two complementary performance measures are dependent on the properties of the underlying AS graph. In particular, we show that the power-law structure of Internet AS topology leads to connectivity properties which are crucial in facilitating the observed performance effects." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Related to first-hand reputation is the use of game-theoretic analysis of peer behavior by @cite_7 to show that a reciprocative strategy in admission control policy can motivate cooperation among selfish peers.
{ "cite_N": [ "@cite_7" ], "mid": [ "1506509256" ], "abstract": [ "We commonly use the experience of others when taking decisions. Reputation mechanisms aggregate in a formal way the feedback collected from peers and compute the reputation of products, services, or providers. The success of reputation mechanisms is however conditioned on obtaining true feedback. Side-payments (i.e. agents get paid for submitting feedback) can make honest reporting rational (i.e. Nash equilibrium). Unfortunately, known schemes also have other Nash equilibria that imply lying. In this paper we analyze the equilibria of two incentive-compatible reputation mechanisms and investigate how undesired equilibrium points can be eliminated by using trusted reports." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Admission control has been used to improve the usability of overloaded services. For example, @cite_1 propose admission control strategies that help protect long-running Web service sessions (i.e., related sequences of requests) from abrupt termination. Preserving the responsiveness of Web services in the face of demand spikes is critical, whereas LOCKSS peers need only manage their resources to make progress at the necessary rate in the long term. They can treat demand spikes as hostile behavior. In a P2P context, @cite_14 use admission control (and rate limiting) to mitigate the effects of a query flood attack against superpeers in unstructured file-sharing peer-to-peer networks such as Gnutella.
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2160436229", "2148414233" ], "abstract": [ "We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads.", "An admission control algorithm must coordinate between flows to provide guarantees about how the medium is shared. In wired networks, nodes can monitor the medium to see how much bandwidth is being used. However, in ad hoc networks, communication from one node may consume the bandwidth of neighboring nodes. 
Therefore, the bandwidth consumption of flows and the available resources to a node are not local concepts, but related to the neighboring nodes in carrier-sensing range. Current solutions do not address how to perform admission control in such an environment so that the admitted flows in the network do not exceed network capacity. In this paper, we present a scalable and efficient admission control framework - contention-aware admission control protocol (CACP) - to support QoS in ad hoc networks. We present several options for the design of CACP and compare the performance of these options using both mathematical analysis and simulation results. We also demonstrate the effectiveness of CACP compared to existing approaches through extensive simulations." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Golle and Mironov @cite_2 provide compliance enforcement in the context of distributed computation using a receipt technique similar to ours. Random auditing using challenges and hashing has been proposed @cite_42 @cite_37 as a means of enforcing trading requirements in some distributed storage systems.
{ "cite_N": [ "@cite_37", "@cite_42", "@cite_2" ], "mid": [ "2105526656", "2585130803", "202035697" ], "abstract": [ "We introduce transactors, a fault-tolerant programming model for composing loosely-coupled distributed components running in an unreliable environment such as the internet into systems that reliably maintain globally consistent distributed state. The transactor model incorporates certain elements of traditional transaction processing, but allows these elements to be composed in different ways without the need for central coordination, thus facilitating the study of distributed fault-tolerance from a semantic point of view. We formalize our approach via the τ-calculus, an extended lambda-calculus based on the actor model, and illustrate its usage through a number of examples. The τ-calculus incorporates constructs which distributed processes can use to create globally-consistent checkpoints. We provide an operational semantics for the τ-calculus, and formalize the following safety and liveness properties: first, we show that globally-consistent checkpoints have equivalent execution traces without any node failures or application-level failures, and second, we show that it is possible to reach globally-consistent checkpoints provided that there is some bounded failure-free interval during which checkpointing can occur.", "Increasing transaction volumes have led to a resurgence of interest in distributed transaction processing. In particular, partitioning data across several servers can improve throughput by allowing servers to process transactions in parallel. But executing transactions across servers limits the scalability and performance of these systems. In this paper, we quantify the effects of distribution on concurrency control protocols in a distributed environment. We evaluate six classic and modern protocols in an in-memory distributed database evaluation framework called Deneva, providing an apples-to-apples comparison between each. 
Our results expose severe limitations of distributed transaction processing engines. Moreover, in our analysis, we identify several protocol-specific scalability bottlenecks. We conclude that to achieve truly scalable operation, distributed concurrency control solutions must seek a tighter coupling with either novel network hardware (in the local area) or applications (via data modeling and semantically-aware execution), or both.", "The ubiquity of the Internet has led to increased resource sharing between large numbers of users in widely-disparate administrative domains. Unfortunately, traditional identity-based solutions to the authorization problem do not allow for the dynamic establishment of trust, and thus cannot be used to facilitate interactions between previously-unacquainted parties. Furthermore, the management of identity-based systems becomes burdensome as the number of users in the system increases. To address this gap between the needs of open computing systems and existing authorization infrastructures, researchers have begun to investigate novel attribute-based access control (ABAC) systems based on techniques such as trust negotiation and other forms of distributed proving. To date, research in these areas has been largely theoretical and has produced many important foundational results. However, if these techniques are to be safely deployed in practice, the systems-level barriers hindering their adoption must be overcome. In this thesis, we show that safely and securely adopting decentralized ABAC approaches to authorization is not simply a matter of implementation and deployment, but requires careful consideration of both formal properties and practical issues. To this end, we investigate a progression of important questions regarding the safety analysis, deployment, implementation, and optimization of these types of systems. 
We first show that existing ABAC theory does not properly account for the asynchronous nature of open systems, which allows attackers to subvert these systems by forcing decisions to be made using inconsistent system states. To address this, we develop provably-secure and lightweight consistency enforcement mechanisms suitable for use in trust negotiation and distributed proof systems. We next focus on deployment issues, and investigate how user interactions can be audited in the absence of concrete user identities. We develop the technique of virtual fingerprinting, which can be used to accomplish this task without adversely affecting the scalability of audit systems. Lastly, we present TrustBuilder2, which is the first fully-configurable framework for trust negotiation. Within this framework, we examine availability problems associated with the trust negotiation process and develop a novel approach to policy compliance checking that leverages an efficient pattern-matching approach to outperform existing techniques by orders of magnitude." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
In DHTs, waves of synchronized routing updates triggered by joins or departures cause instability during periods of high churn. Bamboo's @cite_28 desynchronization defense, which uses lazy updates, is effective against this problem.
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
The previous version of the LOCKSS protocol used rate-limiting, inherent intrusion detection through bimodal system behavior, and churning of friends into the reference list to prevent poll samples from being influenced by nominated peers. These techniques are effective in defending against adversaries attempting to modify content without being detected or trying to trigger intrusion detection alarms to discredit the system @cite_21 . The previous version of the protocol, however, did not tolerate attrition attacks well: an attrition adversary with about 50 nodes of computational power was able to bring a system of 1000 peers to a crawl. In this work, we further leverage the rate-limitation defense to provide admission control, compliance enforcement, and desynchronization of poll invitations, raising the computational power an adversary must expend to equal that used by the defenders.
{ "cite_N": [ "@cite_21" ], "mid": [ "2950945875" ], "abstract": [ "The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Rate limits on peers joining a DHT have been suggested @cite_47 @cite_37 as a defense against attempts to control parts of the hash space, for example to control the placement of certain data objects or to misroute requests. Limiting both joins and stores to empirically determined safe rates will likewise be needed to thwart the attrition adversary. At least for file sharing, studies @cite_24 suggest that users' behavior may not be sensitive to latency, so the increased storage latency that rate limits create is probably unimportant.
{ "cite_N": [ "@cite_24", "@cite_47", "@cite_37" ], "mid": [ "2049794981", "2049130980", "2143339817" ], "abstract": [ "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems.In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems.For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord.Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI", "Large-scale distributed systems are hard to deploy, and distributed hash tables (DHTs) are no exception. To lower the barriers facing DHT-based applications, we have created a public DHT service called OpenDHT. Designing a DHT that can be widely shared, both among mutually untrusting clients and among a variety of applications, poses two distinct challenges. First, there must be adequate control over storage allocation so that greedy or malicious clients do not use more than their fair share. Second, the interface to the DHT should make it easy to write simple clients, yet be sufficiently general to meet a broad spectrum of application requirements. In this paper we describe our solutions to these design challenges. We also report our early deployment experience with OpenDHT and describe the variety of applications already using the system." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Admission control appears frequently as a defense against overloading, for example in the context of Web services. For example, @cite_1 propose admission control strategies that help protect long-running sessions (i.e., related sequences of requests) from abrupt termination. However, several of the pertinent assumptions that hold true in a Web environment are inapplicable to LOCKSS: that rejecting a request costs much less than accepting it, and that explicit rejection stems the tide of further requests; neither holds when a denial-of-service attack is under way. @cite_14 use admission control as well as rate limiting to mitigate the effects of a query-flood attack against superpeers in unstructured file-sharing peer-to-peer networks such as Gnutella.
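A minimal sketch of session-based admission control in the spirit of @cite_1: once a session is admitted, its requests are always served, and only new sessions are refused under load. The capacity and threshold values are assumptions for illustration.

```python
class SessionAdmissionControl:
    """Admit new sessions only while load is below a threshold; requests
    belonging to already-admitted sessions are always served, protecting
    long-running sessions from abrupt termination. Parameter values are
    illustrative."""

    def __init__(self, capacity: int = 100, admit_threshold: float = 0.8):
        self.capacity = capacity
        self.admit_threshold = admit_threshold
        self.active = set()

    def handle(self, session_id: int) -> bool:
        if session_id in self.active:
            return True                      # never cut off an admitted session
        if len(self.active) / self.capacity < self.admit_threshold:
            self.active.add(session_id)      # admit the new session
            return True
        return False                         # refuse the whole new session

ac = SessionAdmissionControl()
results = [ac.handle(i) for i in range(90)]  # 90 distinct new sessions arrive
# the first 80 sessions are admitted; later newcomers are refused,
# while an already-admitted session (e.g., session 0) is still served
```

Note the design choice the text highlights: rejection happens at session granularity, so a long session never loses service mid-way.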
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2160436229", "2038154299" ], "abstract": [ "We consider a new, session-based workload for measuring web server performance. We define a session as a sequence of client's individual requests. Using a simulation model, we show that an overloaded web server can experience a severe loss of throughput measured as a number of completed sessions compared against the server throughput measured in requests per second. Moreover, statistical analysis of completed sessions reveals that the overloaded web server discriminates against longer sessions. For e-commerce retail sites, longer sessions are typically the ones that would result in purchases, so they are precisely the ones for which the companies want to guarantee completion. To improve Web QoS for commercial Web servers, we introduce a session-based admission control (SBAC) to prevent a web server from becoming overloaded and to ensure that longer sessions can be completed. We show that a Web server augmented with the admission control mechanism is able to provide a fair guarantee of completion, for any accepted session, independent of a session length. This provides a predictable and controllable platform for web applications and is a critical requirement for any e-business. Additionally, we propose two new adaptive admission control strategies, hybrid and predictive, aiming to optimize the performance of SBAC mechanism. These new adaptive strategies are based on a self-tunable admission control function, which adjusts itself accordingly to variations in traffic loads.", "This paper presents a novel approach for stream-based admission control and job scheduling for video transcoding called SBACS (Stream-Based Admission Control and Scheduling). SBACS uses queue waiting time of transcoding servers to make admission control decisions for incoming video streams. It implements stream-based admission control with per stream admission. To ensure efficient utilization of the transcoding servers, video streams are segmented at the Group of Pictures level. In addition to the traditional rejection policy, SBACS also provides a stream deferment policy, which exploits cloud elasticity to allow temporary deferment of the incoming video streams. In other words, the admission controller can decide to admit, defer, or reject an incoming stream and hence reduce rejection rate. In order to prevent transcoding jitters in the admitted streams, we introduce a job scheduling mechanism, which drops a small proportion of video frames from a video segment to ensure continued delivery of video contents to the user. The approach is demonstrated in a discrete-event simulation with a series of experiments involving different load patterns and stream arrival rates." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Some researchers have proposed storing useless content in exchange for having one's own content stored, as a way to enforce symmetric storage relationships. Compliance is enforced by asking the peer storing the file of interest to hash some portion of the file as proof that it is still storing it @cite_42 @cite_37 .
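The hash-based compliance check can be sketched as a challenge-response. This is a sketch under stated assumptions: the function name and the 64-byte window are illustrative, and the challenger is assumed to hold its own copy of the file, as in symmetric storage schemes.

```python
import hashlib
import os
import secrets

def storage_proof(data: bytes, offset: int, length: int, nonce: bytes) -> str:
    """Hash the challenged byte range together with a fresh nonce, so a
    peer cannot replay an old proof after discarding the file."""
    return hashlib.sha256(nonce + data[offset:offset + length]).hexdigest()

data = os.urandom(4096)                     # the stored file
offset = secrets.randbelow(len(data) - 64)  # random 64-byte challenge window
nonce = secrets.token_bytes(16)             # fresh per challenge

expected = storage_proof(data, offset, 64, nonce)  # challenger, local copy
response = storage_proof(data, offset, 64, nonce)  # storing peer, its copy

# A peer whose copy is corrupted (or discarded) cannot produce the hash:
# flip one byte inside the challenged window and the proof fails.
bad_copy = data[:offset] + bytes([data[offset] ^ 0xFF]) + data[offset + 1:]
bad_response = storage_proof(bad_copy, offset, 64, nonce)
```

Randomizing the offset on every challenge means a cheating peer would have to keep essentially the whole file to answer reliably.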
{ "cite_N": [ "@cite_37", "@cite_42" ], "mid": [ "2148042433", "2019586918" ], "abstract": [ "Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems.Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return---a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure.", "We propose a privacy protection framework for large-scale content-based information retrieval. It offers two layers of protection. First, robust hash values are used as queries to prevent revealing original content or features. Second, the client can choose to omit certain bits in a hash value to further increase the ambiguity for the server. Due to the reduced information, it is computationally difficult for the server to know the client’s interest. The server has to return the hash values of all possible candidates to the client. The client performs a search within the candidate list to find the best match. Since only hash values are exchanged between the client and the server, the privacy of both parties is protected. We introduce the concept of tunable privacy, where the privacy protection level can be adjusted according to a policy. It is realized through hash-based piecewise inverted indexing. The idea is to divide a feature vector into pieces and index each piece with a subhash value. Each subhash value is associated with an inverted index list. The framework has been extensively tested using a large image database. We have evaluated both retrieval performance and privacy-preserving performance for a particular content identification application. Two different constructions of robust hash algorithms are used. One is based on random projections; the other is based on the discrete wavelet transform. Both algorithms exhibit satisfactory performance in comparison with state-of-the-art retrieval schemes. The results show that the privacy enhancement slightly improves the retrieval performance. We consider the majority voting attack for estimating the query category and identification. Experiment results show that this attack is a threat when there are near-duplicates, but the success rate decreases with the number of omitted bits and the number of distinct items." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
Waves of synchronized routing updates caused by joins or departures cause instability during periods of high churn @cite_28 . Breaking this synchrony through lazy updates (e.g., as in Bamboo @cite_28 ) can absorb the brunt of a churn attack.
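The benefit of lazy (periodic) over reactive recovery can be illustrated with a toy simulation; the neighbor count, churn rate, and period below are assumptions for illustration, not Bamboo's actual parameters.

```python
import random

def maintenance_traffic(churn_rate: float, neighbors: int = 20,
                        horizon: int = 1000, period: int = 10,
                        seed: int = 0):
    """Compare per-tick maintenance messages: reactive recovery sends one
    update per observed neighbor failure immediately, so bursts track the
    churn intensity; periodic (lazy) recovery batches pending repairs into
    at most one update per period."""
    rng = random.Random(seed)
    reactive, periodic, pending = [], [], 0
    for t in range(horizon):
        failures = sum(1 for _ in range(neighbors) if rng.random() < churn_rate)
        reactive.append(failures)      # reactive: burst grows with churn
        pending += failures
        if t % period == 0:            # lazy: one batched update, or none
            periodic.append(1 if pending else 0)
            pending = 0
    return max(reactive), max(periodic)

peak_reactive, peak_periodic = maintenance_traffic(0.3)
# periodic recovery caps the per-tick burst regardless of churn intensity
```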
{ "cite_N": [ "@cite_28" ], "mid": [ "2162733677" ], "abstract": [ "This paper addresses the problem of churn--the continuous process of node arrival and departure--in distributed hash tables (DHTs). We argue that DHTs should perform lookups quickly and consistently under churn rates at least as high as those observed in deployed P2P systems such as Kazaa. We then show through experiments on an emulated network that current DHT implementations cannot handle such churn rates. Next, we identify and explore three factors affecting DHT performance under churn: reactive versus periodic failure recovery, message timeout calculation, and proximity neighbor selection. We work in the context of a mature DHT implementation called Bamboo, using the ModelNet network emulator, which models in-network queuing, cross-traffic, and packet loss. These factors are typically missing in earlier simulation-based DHT studies, and we show that careful attention to them in Bamboo's design allows it to function effectively at churn rates at or higher than that observed in P2P file-sharing applications, while using lower maintenance bandwidth than other DHT implementations." ] }
cs0405111
2950383323
In peer-to-peer systems, attrition attacks include both traditional, network-level denial of service attacks as well as application-level attacks in which malign peers conspire to waste loyal peers' resources. We describe several defenses for LOCKSS, a peer-to-peer digital preservation system, that help ensure that application-level attacks even from powerful adversaries are less effective than simple network-level attacks, and that network-level attacks must be intense, wide-spread, and prolonged to impair the system.
As churn (the rate at which the peer population changes) increases, both the latency and the probability of failure of queries to a DHT increase @cite_28 . An attrition attack might consist of adversary peers joining and leaving fast enough to destabilize the routing infrastructure.
{ "cite_N": [ "@cite_28" ], "mid": [ "2049130980" ], "abstract": [ "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI" ] }
cs0405070
2949521043
We propose a model for the World Wide Web graph that couples the topological growth with the traffic's dynamical evolution. The model is based on a simple traffic-driven dynamics and generates weighted directed graphs exhibiting the statistical properties observed in the Web. In particular, the model yields a non-trivial time evolution of vertices and heavy-tail distributions for the topological and traffic properties. The generated graphs exhibit a complex architecture with a hierarchy of cohesiveness levels similar to those observed in the analysis of real data.
A very interesting class of models that considers the main features of WWW growth has been introduced by @cite_11 in order to provide a mechanism that does not assume knowledge of the degrees of existing vertices. Each newly introduced vertex @math selects at random an already existing vertex @math ; for each out-neighbour @math of @math , @math connects to @math with a certain probability @math , and with probability @math it connects instead to another randomly chosen node. This model describes the growth process of the WWW as a copying mechanism in which newly arriving web pages tend to reproduce the hyperlinks of similar web pages, i.e., of the first page to which they connect. Interestingly, this model effectively recovers a preferential attachment mechanism without explicitly introducing it.
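The copying mechanism is easy to simulate. The sketch below uses illustrative parameters (the seed graph, the rewiring probability, and the out-degree are assumptions, not values from the cited model).

```python
import random

def copy_model(n: int, alpha: float = 0.2, m: int = 3, seed: int = 1):
    """Grow a directed graph by copying: each new vertex picks a random
    existing 'prototype' vertex and, for each of its out-links, either
    copies the link (prob. 1 - alpha) or rewires it to a uniformly random
    existing vertex (prob. alpha). Degrees are never inspected."""
    rng = random.Random(seed)
    # seed graph: m + 1 vertices, each linking to all the others
    out = {v: [u for u in range(m + 1) if u != v] for v in range(m + 1)}
    for v in range(m + 1, n):
        prototype = rng.randrange(v)
        out[v] = [rng.randrange(v) if rng.random() < alpha else w
                  for w in out[prototype]]
    return out

g = copy_model(2000)
indeg = {}
for links in g.values():
    for w in links:
        indeg[w] = indeg.get(w, 0) + 1
# early vertices accumulate very high in-degree: frequently-copied pages
# get copied even more often, an effective preferential attachment
```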
{ "cite_N": [ "@cite_11" ], "mid": [ "2099674897" ], "abstract": [ "We present an analysis of the statistical properties and growth of the free on-line encyclopedia Wikipedia. By describing topics by vertices and hyperlinks between them as edges, we can represent this encyclopedia as a directed graph. The topological properties of this graph are in close analogy with those of the World Wide Web, despite the very different growth mechanism. In particular, we measure a scale-invariant distribution of the in and out degree and we are able to reproduce these features by means of a simple statistical model. As a major consequence, Wikipedia growth can be described by local rules such as the preferential attachment mechanism, though users, who are responsible of its evolution, can act globally on the network." ] }
cs0404002
2138964611
We review existing approaches to mathematical modeling and analysis of multi-agent systems in which complex collective behavior arises out of local interactions between many simple agents. Though the behavior of an individual agent can be considered to be stochastic and unpredictable, the collective behavior of such systems can have a simple probabilistic description. We show that a class of mathematical models that describe the dynamics of collective behavior of multi-agent systems can be written down from the details of the individual agent controller. The models are valid for Markov or memoryless agents, in which each agents future state depends only on its present state and not any of the past states. We illustrate the approach by analyzing in detail applications from the robotics domain: collaboration and foraging in groups of robots.
With the exceptions noted below, there has been very little prior work on mathematical analysis of multi-agent systems. The closest in spirit to our paper is the work by Huberman, Hogg and coworkers on computational ecologies @cite_5 @cite_73 . These authors mathematically studied collective behavior in a system of agents, each choosing between two alternative strategies. They derived a rate equation for the average number of agents using each strategy from the underlying probability distributions. Our approach is consistent with theirs --- in fact, we can easily write down the same rate equations from the macroscopic state diagram of the system, without having to derive them from the underlying probability distributions. Computational ecologies can, therefore, be considered an application of the methodology described in this paper. Yet another application of the approach presented here is the author's work on coalition formation in electronic marketplaces @cite_71 .
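A minimal instance of such a rate equation, for memoryless agents switching between two states (or strategies) at constant rates, can be integrated directly; the rates and population size below are illustrative assumptions.

```python
def two_state_rate_equation(N: float = 100.0, w12: float = 0.3,
                            w21: float = 0.7, dt: float = 0.01,
                            steps: int = 5000) -> float:
    """Euler-integrate dn1/dt = w21 * n2 - w12 * n1 for agents switching
    between states 1 and 2 at constant rates w12 and w21, with n2 = N - n1."""
    n1 = N  # all agents start in state 1
    for _ in range(steps):
        n2 = N - n1
        n1 += (w21 * n2 - w12 * n1) * dt
    return n1

n1_final = two_state_rate_equation()
n1_steady = 100.0 * 0.7 / (0.3 + 0.7)  # analytic steady state: N*w21/(w12+w21)
```

The integration relaxes to the analytic steady state, mirroring how such macroscopic averages can be read off a state diagram without deriving them from the underlying probability distributions.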
{ "cite_N": [ "@cite_5", "@cite_73", "@cite_71" ], "mid": [ "2145374086", "2761043131", "2084044206" ], "abstract": [ "In this paper, we formulate and solve a randomized optimal consensus problem for multi-agent systems with stochastically time-varying interconnection topology. The considered multi-agent system with a simple randomized iterating rule achieves an almost sure consensus meanwhile solving the optimization problem min\"z\"@?\"R\"^\"[email protected]?\"i\"=\"1^nf\"i(z), in which the optimal solution set of objective function f\"i can only be observed by agent i itself. At each time step, simply determined by a Bernoulli trial, each agent independently and randomly chooses either taking an average among its neighbor set, or projecting onto the optimal solution set of its own optimization component. Both directed and bidirectional communication graphs are studied. Connectivity conditions are proposed to guarantee an optimal consensus almost surely with proper convexity and intersection assumptions. The convergence analysis is carried out using convex analysis. We compare the randomized algorithm with the deterministic one via a numerical example. The results illustrate that a group of autonomous agents can reach an optimal opinion by each node simply making a randomized trade-off between following its neighbors or sticking to its own opinion at each time step.", "Agent-based modeling of zapping behavior of viewers, television commercial allocation, and advertisement markets by Hiroyuki Kyan and Jun-ichi Inoue.- Agent based modeling of Housing asset bubble: A simple utility function based investigation by Kausik Gangopadhyay and Kousik Guhathakurta.- Urn model-based adaptive Multi-arm clinical trials: A stochastic approximation approach by Sophie Laruelle and Gilles Pages.- Logistic modeling of a Religious Sect features by Marcel Ausloos.- Characterizing financial crisis by means of the three states random field Ising model by Mitsuaki Murota and Jun-ichi Inoue.- Themes and applications of kinetic exchange models: Redux by Asim Ghosh, Anindya S. Chakrabarti, Anjan Kumar Chandra and Anirban Chakraborti.- Kinetic exchange opinion model: solution in the single parameter map limit by Krishanu Roy Chowdhury, Asim Ghosh, Soumyajyoti Biswas and Bikas K. Chakrabarti.- An overview of the new frontiers of Economic Complexity by Matthieu Cristelli, Andrea Tacchella, Luciano Pietronero.- Jan Tinbergen's legacy for economic networks: from the gravity model to quantum statistics by Tiziano Squartini and Diego Garlaschelli.- A macroscopic order of consumer demand due to heterogenous consumer behaviors on Japanese household demand tested by the random matrix theory by Yuji Aruka,Yuichi Kichikawa and Hiroshi Iyetomi.- Uncovering the network structure of the world currency market: Cross-correlations in the fluctuations of daily exchange rates by Sitabhra Sinha and Uday Kovur.- Systemic risk in Japanese credit network by Hideaki Aoyama.- Pricing of goods with Bandwagon properties: The curse of coordination by Mirta B. Gordon, Jean-Pierre Nadal, Denis Phan and Viktoriya Semeshenko.- Evolution of Econophysics by Kishore C. Dash.- Econophysics and sociophysics: Problems and prospects by Asim Ghosh and Anindya S. Chakrabarti.- A discussion on Econophysics by Hideaki Aoyama.", "We study the average consensus problem of multi-agent systems for general network topologies with unidirectional information flow. We propose two linear distributed algorithms, deterministic and gossip, respectively for the cases where the inter-agent communication is synchronous and asynchronous. In both cases, the developed algorithms guarantee state averaging on arbitrary strongly connected digraphs; in particular, this graphical condition does not require that the network be balanced or symmetric, thereby extending previous results in the literature. The key novelty of our approach is to augment an additional variable for each agent, called \"surplus\", whose function is to locally record individual state updates. For convergence analysis, we employ graph-theoretic and nonnegative matrix tools, plus the eigenvalue perturbation theory playing a crucial role." ] }
cs0404002
2138964611
We review existing approaches to mathematical modeling and analysis of multi-agent systems in which complex collective behavior arises out of local interactions between many simple agents. Though the behavior of an individual agent can be considered to be stochastic and unpredictable, the collective behavior of such systems can have a simple probabilistic description. We show that a class of mathematical models that describe the dynamics of collective behavior of multi-agent systems can be written down from the details of the individual agent controller. The models are valid for Markov or memoryless agents, in which each agents future state depends only on its present state and not any of the past states. We illustrate the approach by analyzing in detail applications from the robotics domain: collaboration and foraging in groups of robots.
In the robotics domain, Sugawara and coworkers @cite_62 @cite_26 developed simple state-based analytical models of cooperative foraging in groups of communicating and non-communicating robots and studied them quantitatively. Although these models are similar to ours, they are overly simplified and fail to take crucial interactions among robots into account. In separate papers, we have analyzed collaborative @cite_46 and foraging @cite_29 behavior in groups of robots. The focus of that work is on realistic models and the comparison of the models' predictions to experimental and simulation results. For example, in @cite_46 , we considered the same model of collaborative stick-pulling presented here, but studied it under the same conditions as the experiments. In @cite_29 , we found that we had to include avoiding-while-searching and wall-avoiding states in the model in order to obtain good quantitative agreement between the model and the results of sensor-based simulations. The focus of this paper, on the other hand, is to show that there is a principled way to construct a macroscopic model of the collective dynamics of a MAS and, more importantly, to give a practical "recipe" for creating such a model from the details of the microscopic controller.
{ "cite_N": [ "@cite_29", "@cite_46", "@cite_62", "@cite_26" ], "mid": [ "1578969637", "2137153348", "2140542425", "2139610651" ], "abstract": [ "In multi-robot applications, such as foraging or collection tasks, interference, which results from competition for space between spatially extended robots, can significantly affect the performance of the group. We present a mathematical model of foraging in a homogeneous multi-robot system, with the goal of understanding quantitatively the effects of interference. We examine two foraging scenarios: a simplified collection task where the robots only collect objects, and a foraging task, where they find objects and deliver them to some pre-specified “home” location. In the first case we find that the overall group performance improves as the system size growss however, interference causes this improvement to be sublinear, and as a result, each robot's individual performance decreases as the group size increases. We also examine the full foraging task where robots collect objects and deliver them home. We find an optimal group size that maximizes group performance. For larger group sizes, the group performance declines. However, again due to the effects of interference, the individual robot's performance is a monotonically decreasing function of the group size. We validate both models by comparing their predictions to results of sensor-based simulations in a multi-robot system and find good agreement between theory and simulations data.", "In this article, we present a macroscopic analytical model of collaboration in a group of reactive robots. The model consists of a series of coupled differential equations that describe the dynamics of group behavior. After presenting the general model, we analyze in detail a case study of collaboration, the stick-pulling experiment, studied experimentally and in simulation by (Autonomous Robots, 11, 149-171). The robots' task is to pull sticks out of their holes, and it can be successfully achieved only through the collaboration of two robots. There is no explicit communication or coordination between the robots. Unlike microscopic simulations (sensor-based or using a probabilistic numerical model), in which computational time scales with the robot group size, the macroscopic model is computationally efficient, because its solutions are independent of robot group size. Analysis reproduces several qualitative conclusions of : namely, the different dynamical regimes for different values of the ratio of robots to sticks, the existence of optimal control parameters that maximize system performance as a function of group size, and the transition from superlinear to sublinear performance as the number of robots is increased.", "We consider the problem of navigating a mobile robot through dense human crowds. We begin by exploring a fundamental impediment to classical motion planning algorithms called the “freezing robot problem”: once the environment surpasses a certain level of dynamic complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place or performs unnecessary maneuvers to avoid collisions. We argue that this problem can be avoided if the robot anticipates human cooperation, and accordingly we develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a “multiple goal” extension that models the goal-driven nature of human decision making. We validate this model with an empirical study of robot navigation in dense human crowds 488 runs, specifically testing how cooperation models effect navigation performance. The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 0.8 humans m2, while a state-of-the-art non-cooperative planner exhibits unsafe behavior more than three times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people m2. We also show that our non-cooperative planner or our reactive planner capture the salient characteristics of nearly any dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.", "This paper presents a methodology for finding optimal control parameters as well as optimal system parameters for robot swarm controllers using probabilistic, population dynamic models. With distributed task allocation as a case study, we show how optimal control parameters leading to a desired steady-state task distribution for two fully-distributed algorithms can be found even if the parameters of the system are unknown. First, a reactive algorithm in which robots change states independently from each other and which leads to a linear macroscopic model describing the dynamics of the system is considered. Second, a threshold-based algorithm where robots change states based on the number of other robots in this state and which leads to a non-linear model is investigated. Whereas analytical results can be obtained for the linear system, the optimization of the non-linear controller is performed numerically. Finally, we show using stochastic simulations that whereas the presented methodology and models work best if the swarm size is large, useful results can already be obtained for team-sizes below a hundred robots. The methodology presented can be applied to scenarios involving the control of large numbers of entities with limited computational and communication abilities as well as a tight energy budget, such as swarms of robots from the centimeter to nanometer range or sensor networks." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally-obtained components could be a new source of system failures. This issue can not be completely solved by either model-checking or traditional software testing techniques alone due to several reasons: 1) externally obtained components are usually unspecified partially specified; 2)it is generally difficult to establish an adequacy criteria for testing a component; 3)components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing ) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is to, with respect to some requirement (expressed in CTL or LTL) about the system, use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component, and which can be established by testing the component with test cases generated from the condition on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Recently, Bertolino et al. @cite_17 recognized the importance of testing a software component in its deployment environment. They developed a framework that supports functional testing of a software component with respect to the customer's specification, and that also provides a simple way to package with a component the developer's test suites, which the customer can re-execute. Yet their approach requires the customer to have a complete specification of the component to be incorporated into the system, which is not always available. McCamant and Ernst @cite_19 considered the issue of predicting the safety of a dynamic component upgrade, which is part of the problem we consider. Their approach is quite different, however: they generate abstract operational expectations by observing the system's run-time behavior with the old component, and check whether the new component still meets them.
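A toy stand-in for the operational-expectation idea: infer a range invariant from the old component's outputs as observed in the system, then check whether a candidate upgrade still guarantees it. The components and the min/max "abstraction" below are deliberately simplistic illustrations, not the cited technique's actual invariant inference.

```python
def infer_range(samples):
    """A trivial 'operational abstraction': the observed output range."""
    return min(samples), max(samples)

workload = range(-50, 50)                    # the system's observed inputs

old_component = lambda x: abs(x) % 10        # deployed component
new_component = lambda x: abs(x) % 12        # purportedly compatible upgrade

old_low, old_high = infer_range([old_component(x) for x in workload])
new_low, new_high = infer_range([new_component(x) for x in workload])

# Flag the upgrade if it fails to make all the guarantees the old one did.
compatible = old_low <= new_low and new_high <= old_high
# here compatible is False: the upgrade silently widens the output range
```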
{ "cite_N": [ "@cite_19", "@cite_17" ], "mid": [ "2121376435", "2100161032" ], "abstract": [ "We present a new, automatic technique to assess whether replacing a component of a software system by a purportedly compatible component may change the behavior of the system. The technique operates before integrating the new component into the system or running system tests, permitting quicker and cheaper identification of problems. It takes into account the system's use of the component, because a particular component upgrade may be desirable in one context but undesirable in another. No formal specifications are required, permitting detection of problems due either to errors in the component or to errors in the system. Both external and internal behaviors can be compared, enabling detection of problems that are not immediately reflected in the output. The technique generates an operational abstraction for the old component in the context of the system and generates an operational abstraction for the new component in the context of its test suite; an operational abstraction is a set of program properties that generalizes over observed run-time behavior. If automated logical comparison indicates that the new component does not make all the guarantees that the old one did, then the upgrade may affect system behavior and should not be performed without further scrutiny. In case studies, the technique identified several incompatibilities among software components.", "Component-based development is the emerging paradigm in software production, though several challenges still slow down its full taking up. In particular, the \"component trust problem\" refers to how adequate guarantees and documentation about a component's behaviour can be transferred from the component developer to its potential users. The capability to test a component when deployed within the target application environment can help establish the compliance of a candidate component to the customer's expectations and certainly contributes to \"increase trust\". To this purpose, we propose the CDT framework for Component Deployment Testing. CDT provides the customer with both a technique to early specify a deployment test suite and an environment for running and reusing the specified tests on any component implementation. The framework can also be used to deliver the component developer's test suite and to later re-execute it. The central feature of CDT is the complete decoupling between the specification of the tests and the component implementation." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing ) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component; the condition can in turn be established by testing the component with test cases generated from it on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
In the formal verification area, there is a long history of research on the verification of systems with modular structure (called modular verification @cite_9 ). A key idea @cite_31 @cite_11 in modular verification is the assume-guarantee paradigm: a module should guarantee the desired behavior provided that the environment with which it interacts exhibits the assumed behavior. This idea has been implemented in a variety of ways (see, e.g., @cite_0 ). However, the assume-guarantee idea does not immediately fit our problem setup, since it requires users to have clear assumptions about a module's environment.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_31", "@cite_11" ], "mid": [ "2471765219", "1488659932", "2956034981", "169399732" ], "abstract": [ "In modular verification the specification of a module consists of two parts. One part describes the guaranteed behavior of the module. The other part describes the assumed behavior of the system in which the module is interacting. This is called the assume-guarantee paradigm. In this paper we consider assume-guarantee specifications in which the guarantee is specified by branching temporal formulas. We distinguish between two approaches. In the first approach, the assumption is specified by branching temporal formulas. In the second approach, the assumption is specified by linear temporal logic. We consider guarantees in ∀CTL and ∀CTL * , the universal fragments of CTL and CTL * , and assumptions in LTL, ∀CTL, and ∀CTL * . We describe a reduction of modular model checking to standard model checking. Using the reduction, we show that modular model checking is PSPACE-complete for ∀CTL and is EXPSPACE-complete for ∀CTL * . We then show that the case of LTL assumption is a special case of the case of ∀CTL * assumption, but that the EXPSPACE-hardness result applies already to assumptions in LTL.", "Assume-guarantee reasoning has long been advertised as an important method for decomposing proof obligations in system verification. Refinement mappings (homomorphisms) have long been advertised as an important method for solving the language-inclusion problem in practice. When confronted with large verification problems, we therefore attempted to make use of both techniques. We soon found that rather than offering instant solutions, the success of assume-guarantee reasoning depends critically on the construction of suitable abstraction modules, and the success of refinement checking depends critically on the construction of suitable witness modules. Moreover, as abstractions need to be witnessed, and witnesses abstracted, the process must be iterated. We present here the main lessons we learned from our experiments, in the form of a systematic and structured discipline for the compositional verification of reactive modules. An infrastructure to support this discipline, and automate parts of the verification, has been implemented in the tool Mocha.", "Formal verification of a control system can be performed by checking if a model of its dynamical behavior conforms to temporal requirements. Unfortunately, adoption of formal verification in an industrial setting is a formidable challenge as design requirements are often vague, nonmodular, evolving, or sometimes simply unknown. We propose a framework to mine requirements from a closed-loop model of an industrial-scale control system, such as one specified in Simulink. The input to our algorithm is a requirement template expressed in parametric signal temporal logic: a logical formula in which concrete signal or time values are replaced with parameters. Given a set of simulation traces of the model, our method infers values for the template parameters to obtain the strongest candidate requirement satisfied by the traces. It then tries to falsify the candidate requirement using a falsification tool. If a counterexample is found, it is added to the existing set of traces and these steps are repeated; otherwise, it terminates with the synthesized requirement. Requirement mining has several usage scenarios: mined requirements can be used to formally validate future modifications of the model, they can be used to gain better understanding of legacy models or code, and can also help enhancing the process of bug finding through simulations. We demonstrate the scalability and utility of our technique on three complex case studies in the domain of automotive powertrain systems: a simple automatic transmission controller, an air-fuel controller with a mean-value model of the engine dynamics, and an industrial-size prototype airpath controller for a diesel engine. We include results on a bug found in the prototype controller by our method.", "We propose a typing theory, based on multiparty session types, for modular verification of real-time choreographic interactions. To model real-time implementations, we introduce a simple calculus with delays and a decidable static proof system. The proof system ensures type safety and time-error freedom, namely processes respect the prescribed timing and causalities between interactions. A decidable condition on timed global types guarantees time-progress for validated processes with delays, and gives a sound and complete characterisation of a new class of CTAs with general topologies that enjoys progress and liveness." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing ) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component; the condition can in turn be established by testing the component with test cases generated from it on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
In the past decade, there has also been research on combining model-checking and testing techniques for system verification, which falls into a broader class of techniques called specification-based testing. Most of this work, however, only exploits a model-checker's ability to generate counter-examples from a system's specification in order to produce test cases against an implementation @cite_32 @cite_18 @cite_35 @cite_25 @cite_30 @cite_39 @cite_4 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_32", "@cite_39", "@cite_25" ], "mid": [ "1498432697", "2118645685", "2171520043", "2052495090", "2956034981", "2112561088", "2291637985" ], "abstract": [ "Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.", "Model verification examines the correctness of a model implementation with respect to a model specification. While being described from model specification, implementation prepares to execute or evaluate a simulation model by a computer program. Viewing model verification as a program test, this paper proposes a method for generation of test sequences that completely covers all possible behavior in specification at an I/O level. Timed State Reachability Graph (TSRG) is proposed as a means of model specification. Graph theoretical analysis of TSRG has generated a test set of timed I/O event sequences, which guarantees 100% test coverage of an implementation under test.", "Model Based Development (MBD) using Mathworks tools like Simulink, Stateflow etc. is being pursued in Honeywell for the development of safety critical avionics software. Formal verification techniques are well-known to identify design errors of safety critical systems reducing development cost and time. As of now, formal verification of Simulink design models is being carried out manually resulting in excessive time consumption during the design phase. We present a tool that automatically translates certain Simulink models into input language of a suitable model checker. Formal verification of safety critical avionics components becomes faster and less error prone with this tool. Support is also provided for reverse translation of traces violating requirements (as given by the model checker) into Simulink notation for playback.", "About a decade after the initial proposal to use model checkers for the generation of test cases we take a look at the results in this field of research. Model checkers are formal verification tools, capable of providing counterexamples to violated properties. Normally, these counterexamples are meant to guide an analyst when searching for the root cause of a property violation. They are, however, also very useful as test cases. Many different approaches have been presented, many problems have been solved, yet many issues remain. This survey paper reviews the state of the art in testing with model checkers. Copyright © 2008 John Wiley & Sons, Ltd.", "Formal verification of a control system can be performed by checking if a model of its dynamical behavior conforms to temporal requirements. Unfortunately, adoption of formal verification in an industrial setting is a formidable challenge as design requirements are often vague, nonmodular, evolving, or sometimes simply unknown. We propose a framework to mine requirements from a closed-loop model of an industrial-scale control system, such as one specified in Simulink. The input to our algorithm is a requirement template expressed in parametric signal temporal logic: a logical formula in which concrete signal or time values are replaced with parameters. Given a set of simulation traces of the model, our method infers values for the template parameters to obtain the strongest candidate requirement satisfied by the traces. It then tries to falsify the candidate requirement using a falsification tool. If a counterexample is found, it is added to the existing set of traces and these steps are repeated; otherwise, it terminates with the synthesized requirement. Requirement mining has several usage scenarios: mined requirements can be used to formally validate future modifications of the model, they can be used to gain better understanding of legacy models or code, and can also help enhancing the process of bug finding through simulations. We demonstrate the scalability and utility of our technique on three complex case studies in the domain of automotive powertrain systems: a simple automatic transmission controller, an air-fuel controller with a mean-value model of the engine dynamics, and an industrial-size prototype airpath controller for a diesel engine. We include results on a bug found in the prototype controller by our method.", "In the past two decades, model-checking has emerged as a promising and powerful approach to fully automatic verification of hardware systems. But model checking technology can be usefully applied to other application areas, and this article provides fundamentals that a practitioner can use to translate verification problems into model-checking questions. A taxonomy of the notions of \"model,\" \"property,\" and \"model checking\" are presented, and three standard model-checking approaches are described and applied to examples.", "Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analyzed using techniques such as model checking, and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behavior. In this paper, we extend the applicability of quantitative verification to the common scenario when the probabilities of transition between some or all states of the Markov models analyzed by the technique are unknown, but observations of these transitions are available. To this end, we introduce a theoretical framework, and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. Our experiments show that disregarding the above source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions, and low-quality software systems." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing ) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component; the condition can in turn be established by testing the component with test cases generated from it on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Callahan et al. @cite_32 used the model-checker SPIN @cite_18 to check a program's execution traces generated during white-box testing and to generate new test cases from the counter-examples found by SPIN; in @cite_35 , SPIN was also used to generate test cases from counter-examples found while model-checking system specifications. Gargantini and Heitmeyer @cite_25 used SMV both to generate test cases from operational SCR specifications and as a test oracle. In @cite_30 @cite_39 , Ammann et al. also exploited the counter-example generation of the model-checker SMV @cite_12 , but their approach mutates both specifications and properties so that a large set of test cases can be generated. (A detailed introduction to the use of model-checkers in testing can be found in @cite_4 .)
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_32", "@cite_39", "@cite_25", "@cite_12" ], "mid": [ "1889756448", "2115309705", "1548575501", "2052495090", "1796295358", "1896160926", "1511405608", "2043811931" ], "abstract": [ "We present combined-case k-induction, a novel technique for verifying software programs. This technique draws on the strengths of the classical inductive-invariant method and a recent application of k-induction to program verification. In previous work, correctness of programs was established by separately proving a base case and inductive step. We present a new k-induction rule that takes an unstructured, reducible control flow graph (CFG), a natural loop occurring in the CFG, and a positive integer k, and constructs a single CFG in which the given loop is eliminated via an unwinding proportional to k. Recursively applying the proof rule eventually yields a loop-free CFG, which can be checked using SAT/SMT-based techniques. We state soundness of the rule, and investigate its theoretical properties. We then present two implementations of our technique: K-INDUCTOR, a verifier for C programs built on top of the CBMC model checker, and K-BOOGIE, an extension of the Boogie tool. Our experiments, using a large set of benchmarks, demonstrate that our k-induction technique frequently allows program verification to succeed using significantly weaker loop invariants than are required with the standard inductive invariant approach.", "SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.", "The main limiting factor of the model checker SPIN is currently the amount of available physical memory. This paper explores the possibility of exploiting a distributed-memory execution environment, such as a network of workstations interconnected by a standard LAN, to extend the size of the verification problems that can be successfully handled by SPIN. A distributed version of the algorithm used by SPIN to verify safety properties is presented, and its compatibility with the main memory and complexity reduction mechanisms of SPIN is discussed. Finally, some preliminary experimental results are presented.", "About a decade after the initial proposal to use model checkers for the generation of test cases we take a look at the results in this field of research. Model checkers are formal verification tools, capable of providing counterexamples to violated properties. Normally, these counterexamples are meant to guide an analyst when searching for the root cause of a property violation. They are, however, also very useful as test cases. Many different approaches have been presented, many problems have been solved, yet many issues remain. This survey paper reviews the state of the art in testing with model checkers. Copyright © 2008 John Wiley & Sons, Ltd.", "We are interested in finding algorithms which will allow an agent roaming between different electronic auction institutions to automatically verify the game-theoretic properties of a previously unseen auction protocol. A property may be that the protocol is robust to collusion or deception or that a given strategy is optimal. Model checking provides an automatic way of carrying out such proofs. However it may suffer from state space explosion for large models. To improve the performance of model checking, abstractions were used along with the Spin model checker. We considered two case studies: the Vickrey auction and a tractable combinatorial auction. Numerical results showed the limits of relying solely on Spin. To reduce the state space required by Spin, two property-preserving abstraction methods were applied: the first is the classical program slicing technique, which removes irrelevant variables with respect to the property; the second replaces large data, possibly infinite values of variables with smaller abstract values. This enabled us to model check the strategy-proofness property of the Vickrey auction for unbounded bid range and number of agents.", "We apply a model checker to the problem of test generation using a new application of mutation analysis. We define syntactic operators, each of which produces a slight variation on a given model. The operators define a form of mutation analysis at the level of the model checker specification. A model checker generates counterexamples which distinguish the variations from the original specification. The counterexamples can easily be turned into complete test cases, that is, with inputs and expected results. We define two classes of operators: those that produce test cases from which a correct implementation must differ, and those that produce test cases with which it must agree. There are substantial advantages to combining a model checker with mutation analysis. First, test case generation is automatic; each counterexample is a complete test case. Second, in sharp contrast to program-based mutation analysis, equivalent mutant identification is also automatic. We apply our method to an example specification and evaluate the resulting test sets with coverage metrics on a Java implementation.", "One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and nonfailing runs, and information on the necessary changes in scheduling and environmental actions needed to cause successful runs to fail.", "A major obstacle to finding program errors in a real system is knowing what correctness rules the system must obey. These rules are often undocumented or specified in an ad hoc manner. This paper demonstrates techniques that automatically extract such checking information from the source code itself, rather than the programmer, thereby avoiding the need for a priori knowledge of system rules. The cornerstone of our approach is inferring programmer \"beliefs\" that we then cross-check for contradictions. Beliefs are facts implied by code: a dereference of a pointer, p, implies a belief that p is non-null, a call to \"unlock(1)\" implies that 1 was locked, etc. For beliefs we know the programmer must hold, such as the pointer dereference above, we immediately flag contradictions as errors. For beliefs that the programmer may hold, we can assume these beliefs hold and use a statistical analysis to rank the resulting errors from most to least likely. For example, a call to \"spin_lock\" followed once by a call to \"spin_unlock\" implies that the programmer may have paired these calls by coincidence. If the pairing happens 999 out of 1000 times, though, then it is probably a valid belief and the sole deviation a probable error. The key feature of this approach is that it requires no a priori knowledge of truth: if two beliefs contradict, we know that one is an error without knowing what the correct belief is. Conceptually, our checkers extract beliefs by tailoring rule \"templates\" to a system --- for example, finding all functions that fit the rule template \"a must be paired with b.\" We have developed six checkers that follow this conceptual framework. They find hundreds of bugs in real systems such as Linux and OpenBSD. From our experience, they give a dramatic reduction in the manual effort needed to check a large system. Compared to our previous work [9], these template checkers find ten to one hundred times more rule instances and derive properties we found impractical to specify manually." ] }
cs0404037
1653657989
Component-based software development has posed a serious challenge to system verification since externally obtained components could be a new source of system failures. This issue cannot be completely solved by either model-checking or traditional software testing techniques alone, for several reasons: 1) externally obtained components are usually unspecified or only partially specified; 2) it is generally difficult to establish an adequacy criterion for testing a component; 3) components may be used to dynamically upgrade a system. This paper introduces a new approach (called model-checking driven black-box testing ) that combines model-checking with traditional black-box software testing to tackle the problem in a complete, sound, and automatic way. The idea is, with respect to some requirement (expressed in CTL or LTL) about the system, to use model-checking techniques to derive a condition (expressed in communication graphs) for an unspecified component such that the system satisfies the requirement iff the condition is satisfied by the component; the condition can in turn be established by testing the component with test cases generated from it on-the-fly. In this paper, we present model-checking driven black-box testing algorithms to handle both CTL and LTL requirements. We also illustrate the idea through some examples.
Peled et al. @cite_27 @cite_29 @cite_5 studied the issue of checking a black-box against a temporal property (called black-box checking). Their focus, however, is on how to efficiently establish an abstract model of the black-box through black-box testing, and their approach requires a clearly defined property (an LTL formula) about the black-box, which is not always available in component-based systems. Kupferman and Vardi @cite_28 investigated module checking: the problem of checking an open finite-state system under all possible environments. Module checking differs from the problem (*) mentioned at the beginning of the paper in the sense that a component, understood as an environment in @cite_28 , is a specific one. Fisler et al. @cite_24 @cite_6 proposed the idea of deducing a model-checking condition for extension features from the base feature, which they applied to model-checking feature-oriented software designs. Their approach relies entirely on model-checking techniques; their algorithms admit false negatives and do not handle LTL formulas.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_6", "@cite_24", "@cite_27", "@cite_5" ], "mid": [ "1498432697", "2167672803", "2122213509", "1986424898", "2793566095", "2031397756" ], "abstract": [ "Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.", "We study the problem of model checking software product line (SPL) behaviours against temporal properties. 
This is more difficult than for single systems because an SPL with n features yields up to 2^n individual systems to verify. As each individual verification suffers from state explosion, it is crucial to propose efficient formalisms and heuristics. We recently proposed featured transition systems (FTS), a compact representation for SPL behaviour, and defined algorithms for model checking FTS against linear temporal properties. Although they were shown to outperform individual system verifications, they still face a state explosion problem as they enumerate and visit system states one by one. In this paper, we tackle this latter problem by using symbolic representations of the state space. This led us to consider computation tree logic (CTL), which is supported by the industry-strength symbolic model checker NuSMV. We first lay the foundations for symbolic SPL model checking by defining a feature-oriented version of CTL and its dedicated algorithms. We then describe an implementation that adapts the NuSMV language and tool infrastructure. Finally, we propose theoretical and empirical evaluations of our results. The benchmarks show that for certain properties, our algorithm is over a hundred times faster than model checking each system with the standard algorithm.", "Using ideas from automata theory we design a new efficient (deterministic) identity test for the noncommutative polynomial identity testing problem (first introduced and studied in [RS05, BW05]). More precisely, given as input a noncommutative circuit C(x1, …, xn) computing a polynomial in F⟨x1, …, xn⟩ of degree d with at most t monomials, where the variables xi are noncommuting, we give a deterministic polynomial identity test that checks if C ≡ 0 and runs in time polynomial in d, n, |C|, and t.
The same method works in a black-box setting: given a noncommuting black-box polynomial f ∈ F⟨x1, …, xn⟩ of degree d with t monomials we can, in fact, reconstruct the entire polynomial f in time polynomial in n, d and t. Indeed, we apply this idea to the reconstruction of black-box noncommuting algebraic branching programs (the ABPs considered by Nisan in [N91] and Raz-Shpilka in [RS05]). Assuming that the black-box model allows us to query the ABP for the output at any given gate, we can reconstruct an (equivalent) ABP in deterministic polynomial time. Finally, we turn to commutative identity testing and explore the complexity of the problem when the coefficients of the input polynomial come from an arbitrary finite commutative ring with unity whose elements are uniformly encoded as strings and the ring operations are given by an oracle. We show that several algorithmic results for polynomial identity testing over fields also hold when the coefficients come from such finite rings.", "This paper presents the linear temporal logic of rewriting (LTLR) model checker under localized fairness assumptions for the Maude system. The linear temporal logic of rewriting extends linear temporal logic (LTL) with spatial action patterns that describe patterns of rewriting events. Since LTLR generalizes and extends various state-based and event-based logics, mixed properties involving both state propositions and actions, such as fairness properties, can be naturally expressed in LTLR. However, often the needed fairness assumptions cannot even be expressed as propositional temporal logic formulas because they are parametric, that is, they correspond to universally quantified temporal logic formulas. Such universal quantification is succinctly captured by the notion of localized fairness; for example, fairness is localized to the object name parameter in object fairness conditions.
We summarize the foundations, and present the language design and implementation of the Maude Fair LTLR model checker, developed at the C++ level within the Maude system by extending the existing Maude LTL model checker. Our tool provides not only an efficient LTLR model checking algorithm under parameterized fairness assumptions but also suitable specification languages as part of its user interface. The expressiveness and effectiveness of the Maude Fair LTLR model checker are illustrated by five case studies. This is the first tool we are aware of that can model check temporal logic properties under parameterized fairness assumptions. We develop the LTLR model checker under localized fairness assumptions. The linear temporal logic of rewriting (LTLR) extends LTL with action patterns. Localized fairness specifies parameterized fairness over generic system entities. We present the foundations, the language design, and the implementation of our tool. We illustrate the expressiveness and effectiveness of our tool with case studies.", "Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose Distill-and-Compare, an approach to audit such models without probing the black-box model API or pre-defining features to audit. To gain insight into black-box models, we treat them as teachers, training transparent student models to mimic the risk scores assigned by the black-box models. We compare the mimic model trained with distillation to a second, un-distilled transparent model trained on ground truth outcomes, and use differences between the two models to gain insight into the black-box model. We demonstrate the approach on four data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending Club. We also propose a statistical test to determine if a data set is missing key features used to train the black-box model.
Our test finds that the ProPublica data is likely missing key feature(s) used in COMPAS.", "This paper examines techniques for finding falsifying trajectories of hybrid systems using an approach that we call trajectory splicing. Many formal verification techniques for hybrid systems, including flowpipe construction, can identify plausible abstract counterexamples for property violations. However, there is often a gap between the reported abstract counterexamples and the concrete system trajectories. Our approach starts with a candidate sequence of disconnected trajectory segments, each segment lying inside a discrete mode. However, such disconnected segments do not form concrete violations due to the gaps that exist between the ending state of one segment and the starting state of the subsequent segment. Therefore, trajectory splicing uses local optimization to minimize the gap between these segments, effectively splicing them together to form a concrete trajectory. We demonstrate the use of our approach for falsifying safety properties of hybrid systems using standard optimization techniques. As such, our approach is not restricted to linear systems. We compare our approach with other falsification approaches including uniform random sampling and a robustness guided falsification approach used in the tool S-Taliro. Our preliminary evaluation clearly shows the potential of our approach to search for candidate trajectory segments and use them to find concrete property violations." ] }
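As an aside on the black-box checking record above: establishing a property of a black box purely through testing can be sketched minimally as a bounded search over input words, using only reset-and-run queries. The following Python is illustrative only; the names (`run`, `violates_safety`), the counter box, and its alphabet are hypothetical, not code from any of the cited papers.

```python
# Minimal sketch of checking a safety property of a black box purely by
# testing: we may only reset the box and feed it an input word, observing
# the final output. The box, alphabet, and property are hypothetical.
from itertools import product

def violates_safety(run, alphabet, is_bad, depth):
    """Search all input words up to length `depth` for a bad output.

    `run(word)` executes the black box from its initial state; we have
    no access to its internal state space, only to outputs.
    """
    for n in range(1, depth + 1):
        for word in product(alphabet, repeat=n):
            if is_bad(run(word)):
                return list(word)   # counterexample word
    return None                     # no violation found up to `depth`

# Hypothetical black box: a saturating counter that must never exceed 2.
def run(word):
    count = 0
    for symbol in word:
        count = count + 1 if symbol == "inc" else max(0, count - 1)
    return count

cex = violates_safety(run, ["inc", "dec"], lambda out: out > 2, depth=4)
```

Note the contrast with the record above: black-box checking proper would learn an abstract model from such queries, whereas this sketch only falsifies within a bound.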
cs0402003
2952176458
The notion of preference is becoming more and more ubiquitous in present-day information systems. Preferences are primarily used to filter and personalize the information reaching the users of such systems. In database systems, preferences are usually captured as preference relations that are used to build preference queries. In our approach, preference queries are relational algebra or SQL queries that contain occurrences of the winnow operator ("find the most preferred tuples in a given relation"). We present here a number of semantic optimization techniques applicable to preference queries. The techniques make use of integrity constraints, and make it possible to remove redundant occurrences of the winnow operator and to apply a more efficient algorithm for the computation of winnow. We also study the propagation of integrity constraints in the result of the winnow. We have identified necessary and sufficient conditions for the applicability of our techniques, and formulated those conditions as constraint satisfiability problems.
The basic reference for semantic query optimization is @cite_11 . The most common techniques are: join elimination/introduction, predicate elimination/introduction, and detecting an empty answer set. @cite_15 discusses the implementation of predicate introduction and join elimination in an industrial query optimizer. Semantic query optimization techniques for relational queries are studied in @cite_8 in the context of denial and referential constraints, and in @cite_14 in the context of constraint tuple-generating dependencies (a generalization of CGDs and classical relational dependencies). FDs are used for reasoning about sort orders in @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "2006519067", "1548134621", "2108930340", "2106082581", "2107105629" ], "abstract": [ "The purpose of semantic query optimization is to use semantic knowledge (e.g., integrity constraints) for transforming a query into a form that may be answered more efficiently than the original version. In several previous papers we described and proved the correctness of a method for semantic query optimization in deductive databases couched in first-order logic. This paper consolidates the major results of these papers emphasizing the techniques and their applicability for optimizing relational queries. Additionally, we show how this method subsumes and generalizes earlier work on semantic query optimization. We also indicate how semantic query optimization techniques can be extended to databases that support recursion and integrity constraints that contain disjunction, negation, and recursion.", "We critically evaluate the current state of research in multiple query opGrnization, synthesize the requirements for a modular opCrnizer, and propose an architecture. Our objective is to facilitate future research by providing modular subproblems and a good general-purpose data structure. In rhe context of this archiuzcture. we provide an improved subsumption algorithm. and discuss migration paths from single-query to multiple-query oplimizers. The architecture has three key ingredients. First. each type of work is performed at an appropriate level of abstraction. Segond, a uniform and very compact representation stores all candidate strategies. Finally, search is handled as a discrete optimization problem separable horn the query processing tasks. 1. Problem Definition and Objectives A multiple query optimizer (h4QO) takes several queries as input and seeks to generate a good multi-strategy, an executable operator gaph that simultaneously computes answers to all the queries. 
The idea is to save by evaluating common subexpressions only once. The commonalities to be exploited include identical selections and joins, predicates that subsume other predicates, and also costly physical operators such as relation scans and sorts. The multiple query optimization problem is to find a multi-strategy that minimizes the total cost (with overlap exploited). Figure 1.1 shows a multi-strategy generated exploiting commonalities among queries Q1-Q3 at both the logical and physical level. To be really satisfactory, a multi-query optimization algorithm must offer solution quality, efficiency, and ease of implementation. Solution quality requires that the algorithm identify many kinds of commonalities (e.g., by predicate splitting, sharing relation scans) and search effectively to choose a good combination of 1-strategies. Efficiency requires that the optimization avoid a combinatorial explosion of possibilities, and that within those it considers, redundant work on common subexpressions be minimized. Finally, ease of implementation is crucial: an algorithm will be practically useful only if it is conceptually simple, easy to attach to an optimizer, and requires relatively little additional software.
Since the general problem is undecidable, they only consider acyclic referential constraints. Under this assumption, they prove that the IRC-refuting problem is decidable, and give a novel necessary and sufficient condition for it. Under the same assumption, they also study several other problems encountered in semantic query optimization, such as the semantics-based query containment problem, redundant join problem, and redundant selection-condition problem, and show that they are polynomially equivalent or reducible to the IRC-refuting problem. Moreover, they give results on reducing the complexity for some special cases of the IRC-refuting problem.", "New applications of information systems need to integrate a large number of heterogeneous databases over computer networks. Answering a query in these applications usually involves selecting relevant information sources and generating a query plan to combine the data automatically. As significant progress has been made in source selection and plan generation, the critical issue has been shifting to query optimization. This paper presents a semantic query optimization (SQO) approach to optimizing query plans of heterogeneous multidatabase systems. This approach provides global optimization for query plans as well as local optimization for subqueries that retrieve data from individual database sources. An important feature of our local optimization algorithm is that we prove necessary and sufficient conditions to eliminate an unnecessary join in a conjunctive query of arbitrary join topology. This feature allows our optimizer to utilize more expressive relational rules to provide a wider range of possible optimizations than previous work in SQO. The local optimization algorithm also features a new data structure called AND-OR implication graphs to facilitate the search for optimal queries. These features allow the global optimization to effectively use semantic knowledge to reduce the data transmission cost. 
We have implemented this approach in the PESTO (Plan Enhancement by SemanTic Optimization) query plan optimizer as a part of the SIMS information mediator. Experimental results demonstrate that PESTO can provide significant savings in query execution cost over query plan execution without optimization.", "Various approaches for keyword proximity search have been implemented in relational databases, XML and the Web. Yet, in all of them, an answer is a Q-fragment, namely, a subtree T of the given data graph G, such that T contains all the keywords of the query Q and has no proper subtree with this property. The rank of an answer is inversely proportional to its weight. Three problems are of interest: finding an optimal (i.e., top-ranked) answer, computing the top-k answers and enumerating all the answers in ranked order. It is shown that, under data complexity, an efficient algorithm for solving the first problem is sufficient for solving the other two problems with polynomial delay. Similarly, an efficient algorithm for finding a θ-approximation of the optimal answer suffices for carrying out the following two tasks with polynomial delay, under query-and-data complexity. First, enumerating in a (θ+1)-approximate order. Second, computing a (θ+1)-approximation of the top-k answers. As a corollary, this paper gives the first efficient algorithms, under data complexity, for enumerating all the answers in ranked order and for computing the top-k answers. It also gives the first efficient algorithms, under query-and-data complexity, for enumerating in a provably approximate order and for computing an approximation of the top-k answers." ] }
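The winnow operator quoted in the record above ("find the most preferred tuples in a given relation") admits a direct sketch: keep exactly the tuples not dominated by any other tuple under a strict preference relation. The following Python is illustrative only; the relation, the preference predicate, and all names are hypothetical, not taken from the paper.

```python
# Illustrative sketch of the winnow operator over an in-memory relation.
# `prefers(t2, t)` is a strict preference relation: True iff t2 is
# strictly preferred to t. Data and names are hypothetical.

def winnow(relation, prefers):
    """Return the tuples t not dominated by any tuple t2 in the relation."""
    return [t for t in relation
            if not any(prefers(t2, t) for t2 in relation)]

# Example preference: among cars of the same make, prefer the cheaper one.
cars = [("vw", 18000), ("vw", 16500), ("bmw", 32000)]

def cheaper_same_make(t2, t):
    return t2[0] == t[0] and t2[1] < t[1]

best = winnow(cars, cheaper_same_make)  # the cheapest car of each make
```

This naive nested-loop formulation is quadratic; the semantic optimizations discussed in the record above aim precisely at avoiding such evaluations when integrity constraints make winnow redundant or allow a cheaper algorithm.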
cs0312023
2951755603
This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser so that it also performs termination inference is demonstrated.
This paper draws on results from two areas: termination (checking) analysis and backwards analysis. It shows how to combine components implementing these so as to obtain an analyser for termination inference. Termination checking for logic programs has been studied extensively (see for example the survey @cite_18 ). Backwards reasoning for imperative programs dates back to the early days of static analysis and has been applied extensively in functional programming. Applications of backwards analysis in the context of logic programming are few. For details concerning other applications of backwards analysis, see @cite_14 . The only other work on termination inference that we are aware of is that of Mesnard and coauthors. The implementation of Mesnard's cTI analyser is described in @cite_15 and its formal justification is given in @cite_23 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_23", "@cite_15" ], "mid": [ "1524883003", "2136242294", "2009286786", "2012816689" ], "abstract": [ "We present the implementation of cTI, a system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, requiringu ser annotations. With termination inference such annotations are no longer necessary. Instead, all provably terminatingclasses to all related predicates are inferred at once. The architecture of cTI is described1 and some optimizations are discussed. Runningti mes for classical examples from the termination literature in LP and for some middle-sized logic programs are given.", "We describe a new program termination analysis designed to handle imperative programs whose termination depends on the mutation of the program's heap. We first describe how an abstract interpretation can be used to construct a finite number of relations which, if each is well-founded, implies termination. We then give an abstract interpretation based on separation logic formulaewhich tracks the depths of pieces of heaps. Finally, we combine these two techniques to produce an automatic termination prover. We show that the analysis is able to prove the termination of loops extracted from Windows device drivers that could not be proved terminating before by other means; we also discuss a previously unknown bug found with the analysis.", "Abstract We survey termination analysis techniques for Logic Programs. We give an extensive introduction to the topic. 
We recall several motivations for the work, and point out the intuitions behind a number of LP-specific issues that turn up, such as: the study of different classes of programs and LP languages, of different classes of queries and of different selection rules, the difference between existential and universal termination, and the treatment of backward unification and local variables. Then, we turn to more technical aspects: the structure of the termination proofs, the selection of well-founded orderings, norms and level mappings, the inference of interargument relations, and special treatments proposed for dealing with mutual recursion. For each of these, we briefly sketch the main approaches presented in the literature, using a fixed example as a fil rouge. We conclude with some comments on loop detection and cycle unification and state some open problems.
The trace-based termination collecting semantics is given a fixpoint definition. Its abstraction yields a fixpoint definition of the best variant function. By further abstraction of this best variant function, we derive the Floyd/Turing termination proof method as well as new static analysis methods to effectively compute approximations of this best variant function. For (2), we introduce a generalization of the syntactic notion of structural induction (as found in Hoare logic) into a semantic structural induction based on the new semantic concept of inductive trace cover covering execution traces by segments, a new basis for formulating program properties. Its abstractions allow for generalized recursive proof, verification and static analysis methods by induction on both program structure, control, and data. Examples of particular instances include Floyd's handling of loop cutpoints as well as nested loops, Burstall's intermittent assertion total correctness proof method, and Podelski-Rybalchenko transition invariants." ] }
cs0312023
2951755603
This paper focuses on the inference of modes for which a logic program is guaranteed to terminate. This generalises traditional termination analysis where an analyser tries to verify termination for a specified mode. Our contribution is a methodology in which components of traditional termination analysis are combined with backwards analysis to obtain an analyser for termination inference. We identify a condition on the components of the analyser which guarantees that termination inference will infer all modes which can be checked to terminate. The application of this methodology to enhance a traditional termination analyser so that it also performs termination inference is demonstrated.
Both systems compute the greatest fixed point of a system of recursive equations. In our case the implementation is based on a simple meta-interpreter written in Prolog. In cTI, the implementation is based on a @math -calculus interpreter. In our case this system of equations is set up as an instance of backwards analysis, hence providing a clear motivation and justification @cite_23 .
{ "cite_N": [ "@cite_23" ], "mid": [ "2075296484" ], "abstract": [ "A natural term rewriting framework for the Bellantoni Cook schemata of predicative recursion, which yields a canonical definition of the polynomial time computable functions, is introduced. In terms of an exponential function both, an upper bound and a lower bound are proved for the resulting derivation lengths of the functions in question. It is proved that any natural reduction strategy yields an algorithm which runs in exponential time. We give an example in which this estimate is tight. It is proved that the resulting derivation lengths become polynomially bounded in the lengths of the inputs if the rewrite rules are only applied to terms in which the safe arguments – no restrictions are assumed for the normal arguments – consist of values, i.e. numerals, and not of names, i.e. non numeral terms. It is proved that in the latter situation any inside first reduction strategy and any head reduction strategy yield algorithms, for the function under consideration, for which the running time is bounded by an appropriate polynomial in the lengths of the input. A feasible rewrite system for predicative recursion with predicative parameter substitution is defined. It is proved that the derivation lengths of this rewrite system are polynomially bounded in the lengths of the inputs. As a corollary we reobtain Bellantoni’s result stating that predicative recursion is closed under predicative parameter recursion." ] }
math0312490
2166075559
As a sequel to our proof of the analog of Serre's conjecture for function fields in Part I of this work, we study in this paper the deformation rings of @math -dimensional mod @math representations @math of the arithmetic fundamental group @math where @math is a geometrically irreducible, smooth curve over a finite field @math of characteristic @math ( @math ). We are able to show in many cases that the resulting rings are finite flat over @math . The proof principally uses a lifting result of the authors in Part I of this two-part work, Taylor-Wiles systems and the result of Lafforgue. This implies a conjecture of A.J. de Jong for representations with coefficients in power series rings over finite fields of characteristic @math , that have this mod @math representation as their reduction.
The key qualitative difference between the mentioned works and ours is that we can prove automorphy of residual representations like @math in the theorem, while in the other works this has to be, at the moment, an important assumption that seems extremely difficult to verify in their number field case; further, we are mainly interested in establishing algebraic properties of deformation rings, while in the number field case these are established en route to proving modularity of @math -adic representations (which is known in our context by @cite_22 !). Thus our uses of the methods pioneered by Wiles can be deemed to a certain extent to be warped!
{ "cite_N": [ "@cite_22" ], "mid": [ "230449344" ], "abstract": [ "Let ( O ) k be the ring of integers of a finite extension k of the field ( Q ) p of p-adic numbers. The endomorphisms of a formal group law defined over ( O ) k provide nontrivial examples of commuting formal series with coefficients in ( O ) k . This article deals with the inverse problem formulated by Jonathan Lubin within the context of non-Archimedean dynamical systems. We present a large family of series, with coefficients in ( Z ) p , which satisfy Lubin's conjecture. These series are constructed with the help of Lubin–Tate formal group laws over ( Q ) p . We introduce the notion of minimally ramified series which turn out to be modulo p reductions of some series of this family. The commutant monoids of these minimally ramified series are determined by using the Fontaine–Wintenberger theory of the field of norms which allows an interpretation of them as automorphisms of ( Z ) p -extensions of local fields of characteristic zero. A particularly effective example illustrating the paper is given by a family of series generalizing Cebysev polynomials" ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attack on an argument's premise) and rebuts (attack on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
In @cite_20 , an argumentation semantics for extended logic programs, similar to Prakken and Sartor's, is proposed; it is influenced by WFSX, and distinguishes between sceptical and credulous conclusions of an argument. It also provides a proof theory based on dialogue trees, similar to Prakken and Sartor's.
{ "cite_N": [ "@cite_20" ], "mid": [ "2395246457" ], "abstract": [ "It is well-known, in the area of argumentation theory, that there is a direct relationship between extension-based argumentation semantics and logic programming semantics with negation as failure. One of the main implication of this relationship is that one can explore the implementation of argumentation engines by considering logic programming solvers. Recently, it was proved that the argumentation semantics CF2 can be characterized by the stratified minimal model semantics (MM). The stratified minimal model semantics is also a recently introduced logic programming semantics which is based on a recursive construction and minimal models. In this paper, we introduce a solver based on MINISAT algorithm for inferring the logic programming semantics MM∗. As one of the applications of the MM solver, we will argue that this solver is a suitable tool for computing the argumentation semantics CF2." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attack on an argument's premise) and rebuts (attack on an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
Defeasible Logic Programming @cite_44 @cite_25 @cite_30 is a formalism very similar to Prakken and Sartor's, based on the first-order logic argumentation framework of @cite_1 . It includes logic programming with two kinds of negation, a distinction between strict and defeasible rules, and various criteria for comparing arguments. Its semantics is given operationally, by proof procedures based on dialectical trees @cite_44 @cite_25 . In @cite_19 , the semantics of Defeasible Logic Programming is related to the well-founded semantics, albeit only for the restricted language corresponding to normal logic programs @cite_41 .
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_1", "@cite_44", "@cite_19", "@cite_25" ], "mid": [ "2159569510", "2156092566", "2170232725", "190056634", "176609766", "2152131859" ], "abstract": [ "The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism will be used for deciding between contradictory goals. Queries will be supported by arguments that could be defeated by other arguments. A query @math will succeed when there is an argument @math for @math that is warranted, i.e. the argument @math that supports @math is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP allows to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing agent's knowledge and for providing an argumentation based reasoning mechanism to agents.", "In this dissertation I present a formal approach to defeasible reasoning. This mathematical approach is based on the notion of specificity introduced by Poole and the general theory of warrant as presented by Pollock. General background information on the subject of Nonmonotonic Reasoning is presented and some of the shortcomings of existing systems are analyzed. We believe that the approach presented here represents a definite improvement over past systems. The main contribution of this thesis is a formally precise, elegant, clean, well-defined system which exhibits a correct behavior when applied to the benchmark examples in the literature. Model-theoretic semantical issues have been addressed. 
The investigation on the theoretical issues has aided the study of how this kind of reasoner can be realized on a computer. An interpreter of a restricted language, an extension of Horn clauses with defeasible rules, has been implemented. Finally, the implementation details are discussed.", "This paper relates the Defeasible Logic Programming (DeLP) framework and its semantics SEMDeLP to classical logic programming frameworks. In DeLP, we distinguish between two different sorts of rules: strict and defeasible rules. Negative literals (∼A) in these rules are considered to represent classical negation. In contrast to this, in normal logic programming (NLP), there is only one kind of rules, but the meaning of negative literals (not A) is different: they represent a kind of negation as failure, and thereby introduce defeasibility. Various semantics have been defined for NLP, notably the well-founded semantics (WFS) (van , Proceedings of the Seventh Symposium on Principles of Database Systems, 1988, pp. 221-230; J. ACM 38 (3) (1991) 620) and the stable semantics Stable (Gelfond and Lifschitz, Fifth Conference on Logic Programming, MIT Press, Cambridge, MA, 1988, pp. 1070-1080; Proceedings of the Seventh International Conference on Logical Programming, Jerusalem, MIT Press, Cambridge, MA, 1991, pp. 579-597).In this paper we consider the transformation properties for NLP introduced by Brass and Dix (J. Logic Programming 38(3) (1999) 167) and suitably adjusted for the DeLP framework. We show which transformation properties are satisfied, thereby identifying aspects in which NLP and DeLP differ. We contend that the transformation rules presented in this paper can help to gain a better understanding of the relationship of DeLP semantics with respect to more traditional logic programming approaches. 
As a byproduct, we obtain the result that DeLP is a proper extension of NLP.", "We present here a knowledge representation language, where defeasible and non-defeasible rules can be expressed. The language has two different negations: classical negation, which is represented by the symbol “∼” used for representing contradictory knowledge; and negation as failure, represented by the symbol “not” used for representing incomplete information. Defeasible reasoning is done using a argumentation formalism. Thus, systems for acting in a dynamic domain, that properly handle contradictory and or incomplete information can be developed with this language. An argument is used as a defeasible reason for supporting conclusions. A conclusion q will be considered justified only when the argument that supports it becomes a justification. Building a justification involves the construction of a nondefeated argument A for q. In order to establish that A is a non-defeated argument, the system looks for counterarguments that could be defeaters for A. Since defeaters are arguments, there may exist defeaters for the defeaters, and so on, thus requiring a complete dialectical analysis. The system also detects, avoids, circular argumentation. The language was implemented using an abstract machine defined and developed as an extension of the Warren Abstract Machine (wam).", "Horn clause logic programming can be extended to include abduction with integrity constraints. In the resulting extension of logic programming, negation by failure can be simulated by making negative conditions abducible and by imposing appropriate denials and disjunctions as integrity constraints. This gives an alternative semantics for negation by failure, which generalises the stable model semantics of negation by failure. 
The abductive extension of logic programming extends negation by failure in three ways: (1) computation can be performed in alternative minimal models, (2) positive as well as negative conditions can be made abducible, and (3) other integrity constraints can also be accommodated. * This paper was written while the first author was at Imperial College. Introduction The term \"abduction\" was introduced by the philosopher Charles Peirce [1931] to refer to a particular kind of hypothetical reasoning. In the simplest case, it has the form: From A and A ← B infer B as a possible \"explanation\" of A. Abduction has been given prominence in Charniak and McDermott's [1985] \"Introduction to Artificial Intelligence\", where it has been applied to expert systems and story comprehension. Independently, several authors have developed deductive techniques to drive the generation of abductive hypotheses. Cox and Pietrzykowski [1986] construct hypotheses from the \"dead ends\" of linear resolution proofs. Finger and Genesereth [1985] generate \"deductive solutions to design problems\" using the \"residue\" left behind in resolution proofs. Poole, Goebel and Aleliunas [1987] also use linear resolution to generate hypotheses. All impose the restriction that hypotheses should be consistent with the \"knowledge base\". Abduction is a form of non-monotonic reasoning, because hypotheses which are consistent with one state of a knowledge base may become inconsistent when new knowledge is added. Poole [1988] argues that abduction is preferable to non-monotonic logics for default reasoning. In this view, defaults are hypotheses formulated within classical logic rather than conclusions derived within some form of non-monotonic logic. The similarity between abduction and default reasoning was also pointed out in [Kowalski, 1979]. In this paper we show how abduction can be integrated with logic programming, and we concentrate on the use of abduction to generalise negation by failure. 
Conditional Answers Compared with Abduction In the simplest case, a logic program consists of a set of Horn Clauses, which are used backward to reduce goals to subgoals. The initial goal is solved when there are no subgoals left;", "Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable-free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built-in integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported." ] }
cs0311008
2952174649
Argumentation has proved a useful tool in defining formal semantics for assumption-based reasoning by viewing a proof as a process in which proponents and opponents attack each other's arguments by undercuts (attack to an argument's premise) and rebuts (attack to an argument's conclusion). In this paper, we formulate a variety of notions of attack for extended logic programs from combinations of undercuts and rebuts and define a general hierarchy of argumentation semantics parameterised by the notions of attack chosen by proponent and opponent. We prove the equivalence and subset relationships between the semantics and examine some essential properties concerning consistency and the coherence principle, which relates default negation and explicit negation. Most significantly, we place existing semantics put forward in the literature in our hierarchy and identify a particular argumentation semantics for which we prove equivalence to the paraconsistent well-founded semantics with explicit negation, WFSX @math . Finally, we present a general proof theory, based on dialogue trees, and show that it is sound and complete with respect to the argumentation semantics.
A number of authors @cite_23 @cite_18 @cite_15 @cite_27 @cite_37 @cite_45 @cite_14 @cite_20 work on argumentation for negotiating agents. Of these, the approaches of @cite_37 @cite_45 @cite_14 are based on logic programming. The advantage of the logic programming approach for arguing agents is the availability of goal-directed, top-down proof procedures. This is vital when implementing systems which need to react in real-time and therefore cannot afford to compute all justified arguments, as would be required if a bottom-up argumentation semantics were used.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_14", "@cite_27", "@cite_45", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "2095587681", "1814404551", "2206132849", "2140044962", "3558650", "74587951", "2104126268", "2395246457" ], "abstract": [ "In a multi-agent environment, where self-motivated agents try to pursue their own goals, cooperation cannot be taken for granted. Cooperation must be planned for and achieved through communication and negotiation. We present a logical model of the mental states of the agents based on a representation of their beliefs, desires, intentions, and goals. We present argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. We look at argumentation as a mechanism for achieving cooperation and agreements. Using categories identified from human multi-agent negotiation, we demonstrate how the logic can be used to specify argument formulation and evaluation. We also illustrate how the developed logic can be used to describe different types of agents. Furthermore, we present a general Automated Negotiation Agent which we implemented, based on the logical model. Using this system, a user can analyze and explore different methods to negotiate and argue in a noncooperative environment where no centralized mechanism for coordination exists. The development of negotiating agents in the framework of the Automated Negotiation Agent is illustrated with an example where the agents plan, act, and resolve conflicts via negotiation in a Blocks World environment.", "Agents engage in dialogues having as goals to make some arguments acceptable or unacceptable. To do so they may put forward arguments, adding them to the argumentation framework. 
Argumentation semantics can relate a change in the framework to the resulting extensions but it is not clear, given an argumentation framework and a desired acceptance state for a given set of arguments, which further arguments should be added in order to achieve those justification statuses. Our methodology, called conditional labelling, is based on argument labelling and assigns to each argument three propositional formulae. These formulae describe which arguments should be attacked by the agent in order to get a particular argument in, out, or undecided, respectively. Given a conditional labelling, the agents have a full knowledge about the consequences of the attacks they may raise on the acceptability of each argument without having to recompute the overall labelling of the framework for each possible set of attack they may raise.", "This paper studies an abduction problem in formal argumentation frameworks. Given an argument, an agent verifies whether the argument is justified or not in its argumentation framework. If the argument is not justified, the agent seeks conditions to explain the argument in its argumentation framework. We formulate such abductive reasoning in argumentation semantics and provide its computation in logic programming. Next we apply abduction in argumentation frameworks to reasoning by players in debate games. In debate games, two players have their own argumentation frameworks and each player builds claims to refute the opponent. A player may provide false or inaccurate arguments as a tactic to win the game. We show that abduction is used not only for seeking counter-claims but also for building dishonest claims in debate games.", "The ability to view extended logic programs as argumentation systems opens the way for the use of this language in formalizing communication among reasoning computing agents in a distributed framework. 
In this paper we define an argumentative and cooperative multi-agent framework, introducing credulous and sceptical conclusions. We also present an algorithm for inference and show how the agents can have more credulous or sceptical conclusions.", "This contribution proposes a model for argumentation-based multi-agent planning, with a focus on cooperative scenarios. It consists in a multi-agent extension of DeLP-POP, partial order planning on top of argumentation-based defeasible logic programming. In DeLP-POP, actions and arguments (combinations of rules and facts) may be used to enforce some goal, if their conditions (are known to) apply and arguments are not defeated by other arguments applying. In a cooperative planning problem a team of agents share a set of goals but have diverse abilities and beliefs. In order to plan for these goals, agents start a stepwise dialogue consisting of exchanges of plan proposals, plus arguments against them. Since these dialogues instantiate an A* search algorithm, these agents will find a solution if some solution exists, and moreover, it will be provably optimal (according to their knowledge).", "When several agents are engaged in an argumentation process, they are faced with the problem of deciding how to contribute to the current state of the debate in order to satisfy their own goal, ie. to make an argument under a given semantics accepted or not. In this paper, we study the minimal changes or target sets on the current state of the debate that are required to achieve such a goal, where changes are the addition and or deletion of attacks among arguments. We study some properties of these target sets, and propose a Maude specification of rewriting rules which allow to compute all the target sets for some types of goals.", "The purpose of this paper is to study the fundamental mechanism, humans use in argumentation, and to explore ways to implement this mechanism on computers. 
We do so by first developing a theory for argumentation whose central notion is the acceptability of arguments. Then we argue for the “correctness” or “appropriateness” of our theory with two strong arguments. The first one shows that most of the major approaches to nonmonotonic reasoning in AI and logic programming are special forms of our theory of argumentation. The second argument illustrates how our theory can be used to investigate the logical structure of many practical problems. This argument is based on a result showing that our theory captures naturally the solutions of the theory of n-person games and of the well-known stable marriage problem. By showing that argumentation can be viewed as a special form of logic programming with negation as failure, we introduce a general logic-programming-based method for generating meta-interpreters for argumentation systems, a method very much similar to the compiler-compiler idea in conventional programming. Keyword: Argumentation; Nonmonotonic reasoning; Logic programming; n-person games; The stable marriage problem", "It is well-known, in the area of argumentation theory, that there is a direct relationship between extension-based argumentation semantics and logic programming semantics with negation as failure. One of the main implication of this relationship is that one can explore the implementation of argumentation engines by considering logic programming solvers. Recently, it was proved that the argumentation semantics CF2 can be characterized by the stratified minimal model semantics (MM). The stratified minimal model semantics is also a recently introduced logic programming semantics which is based on a recursive construction and minimal models. In this paper, we introduce a solver based on MINISAT algorithm for inferring the logic programming semantics MM∗. As one of the applications of the MM solver, we will argue that this solver is a suitable tool for computing the argumentation semantics CF2." ] }
cs0310016
1673079227
By recording every state change in the run of a program, it is possible to present the programmer every bit of information that might be desired. Essentially, it becomes possible to debug the program by going backwards in time,'' vastly simplifying the process of debugging. An implementation of this idea, the Omniscient Debugger,'' is used to demonstrate its viability and has been used successfully on a number of large programs. Integration with an event analysis engine for searching and control is presented. Several small-scale user studies provide encouraging results. Finally performance issues and implementation are discussed along with possible optimizations. This paper makes three contributions of interest: the concept and technique of going backwards in time,'' the GUI which presents a global view of the program state and has a formal notion of navigation through time,'' and the integration with an event analyzer.
HERCULE @cite_9 is a tool which can record and replay distributed events, in particular, window events and appearance. It does for windows much of what ODB does for programs, and provides much of the functionality that the ODB lacks.
{ "cite_N": [ "@cite_9" ], "mid": [ "1566707746" ], "abstract": [ "This paper presents HERCULE, an approach to non-invasively tracking end-user application activity in a distributed, component-based system. Such tracking can support the visualisation of user and application activity, system auditing, monitoring of system performance and the provision of feedback. A framework is provided that allows the insertion of proxies, dynamically and transparently, into a component-based system. Proxies are inserted in between the user and the graphical user-interface and between the client application and the rest of the distributed, component-based system. The paper describes: how the code for the proxies is generated by mining component documentation; how they are inserted without affecting pre-existing code; and how information produced by the proxies can be used to model application activity. The viability of this approach is demonstrated by means of a prototype implementation." ] }
cs0310020
2119104528
A simple mathematical definition of the 4-port model for pure Prolog is given. The model combines the intuition of ports with a compact representation of execution state. Forward and backward derivation steps are possible. The model satisfies a modularity claim, making it suitable for formal reasoning.
In contrast to the few specifications of the Byrd box, there are many more general models of pure (or even full) Prolog execution. Due to space limitations we mention here only some models directly relevant to ours; for a more comprehensive discussion see @cite_1 . Comparable to our work are the stack-based approaches. Stärk gives in @cite_3 , as a side issue, a simple operational semantics of pure logic programming. A state of execution is a stack of frame stacks, where each frame consists of a goal (ancestor) and an environment. In comparison, our state of execution consists of exactly one environment and one ancestor stack. The seminal paper of Jones and Mycroft @cite_10 was the first to present a stack-based model of execution, applicable to pure Prolog with cut added. It uses a sequence of frames. In these stack-based approaches (including our previous attempt @cite_1 ), it is not possible to abstract the execution of a subgoal.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_3" ], "mid": [ "2625909586", "844671342", "1909063750" ], "abstract": [ "This article describes the final solution of team monkeytyping, who finished in second place in the YouTube-8M video understanding challenge. The dataset used in this challenge is a large-scale benchmark for multi-label video classification. We extend the work in [1] and propose several improvements for frame sequence modeling. We propose a network structure called Chaining that can better capture the interactions between labels. Also, we report our approaches in dealing with multi-scale information and attention pooling. In addition, We find that using the output of model ensemble as a side target in training can boost single model performance. We report our experiments in bagging, boosting, cascade, and stacking, and propose a stacking algorithm called attention weighted stacking. Our final submission is an ensemble that consists of 74 sub models, all of which are listed in the appendix.", "Symbolic execution extends concrete execution by allowing symbolic input data and then exploring all feasible execution paths. It has been defined and used in the context of many different programming languages and paradigms. A symbolic execution engine is at the heart of many program analysis and transformation techniques, like partial evaluation, test case generation or model checking, to name a few. Despite its relevance, traditional symbolic execution also suffers from several drawbacks. For instance, the search space is usually huge (often infinite) even for the simplest programs. Also, symbolic execution generally computes an overapproximation of the concrete execution space, so that false positives may occur. In this paper, we propose the use of a variant of symbolic execution, called concolic execution, for test case generation in Prolog. Our technique aims at full statement coverage. 
We argue that this technique computes an underapproximation of the concrete execution space (thus avoiding false positives) and scales up better to medium and large Prolog applications.", "In this paper we introduce a model for a wide class of computational systems, whose behaviour can be described by certain rewriting rules. We gathered our inspiration both from the world of term rewriting, in particular from the rewriting logic framework Mes92 , and of concurrency theory: among the others, the structured operational semantics Plo81 , the context systems LX90 and the structured transition systems CM92 approaches. Our model recollects many properties of these sources: first, it provides a compositional way to describe both the states and the sequences of transitions performed by a given system, stressing their distributed nature. Second, a suitable notion of typed proof allows to take into account also those formalisms relying on the notions of synchronization and side-effects to determine the actual behaviour of a system. Finally, an equivalence relation over sequences of transitions is defined, equipping the system under analysis with a concurrent semantics, where each equivalence class denotes a family of computationally equivalent'''' behaviours, intended to correspond to the execution of the same set of (causally unrelated) events. As a further abstraction step, our model is conveniently represented using double-categories: its operational semantics is recovered with a free construction, by means of a suitable adjunction." ] }
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model-based reasoning techniques to locate faults in programs. In particular, model-based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model-based Debugging, Diagnosis, Abstract Interpretation, Program Analysis 1 Introduction Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently model-based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one-to-one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging. 
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
In Program Slicing @cite_15 @cite_4 , statements that cannot influence the value of a variable at a given program point are eliminated by considering the dependencies between the statements. Backward reasoning from output values, as in our approach, is not possible. Similar ideas were successfully utilized in an MBD tool analyzing VHDL programs @cite_29 @cite_8 .
{ "cite_N": [ "@cite_15", "@cite_29", "@cite_4", "@cite_8" ], "mid": [ "2253239179", "2069300761", "2066486177", "2060526835" ], "abstract": [ "We derive a least-squares formulation for MDDMp technique.A novel multi-label feature extraction algorithm is proposed.Our algorithm maximizes both feature variance and feature-label dependence.Experiments show that our algorithm is a competitive candidate. Dimensionality reduction is an important pre-processing procedure for multi-label classification to mitigate the possible effect of dimensionality curse, which is divided into feature extraction and selection. Principal component analysis (PCA) and multi-label dimensionality reduction via dependence maximization (MDDM) represent two mainstream feature extraction techniques for unsupervised and supervised paradigms. They produce many small and a few large positive eigenvalues respectively, which could deteriorate the classification performance due to an improper number of projection directions. It has been proved that PCA proposed primarily via maximizing feature variance is associated with a least-squares formulation. In this paper, we prove that MDDM with orthonormal projection directions also falls into the least-squares framework, which originally maximizes Hilbert-Schmidt independence criterion (HSIC). Then we propose a novel multi-label feature extraction method to integrate two least-squares formulae through a linear combination, which maximizes both feature variance and feature-label dependence simultaneously and thus results in a proper number of positive eigenvalues. 
Experimental results on eight data sets show that our proposed method can achieve a better performance, compared with other seven state-of-the-art multi-label feature extraction algorithms.", "This paper attempts to provide an adequate basis for formal definitions of the meanings of programs in appropriately defined programming languages, in such a way that a rigorous standard is established for proofs about computer programs, including proofs of correctness, equivalence, and termination. The basis of our approach is the notion of an interpretation of a program: that is, an association of a proposition with each connection in the flow of control through a program, where the proposition is asserted to hold whenever that connection is taken. To prevent an interpretation from being chosen arbitrarily, a condition is imposed on each command of the program. This condition guarantees that whenever a command is reached by way of a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. Then by induction on the number of commands executed, one sees that if a program is entered by a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. By this means, we may prove certain properties of programs, particularly properties of the form: ‘If the initial values of the program variables satisfy the relation R l, the final values on completion will satisfy the relation R 2’.", "Probabilistic programs use familiar notation of programming languages to specify probabilistic models. Suppose we are interested in estimating the distribution of the return expression r of a probabilistic program P. 
We are interested in slicing the probabilistic program P and obtaining a simpler program Sli(P) which retains only those parts of P that are relevant to estimating r, and elides those parts of P that are not relevant to estimating r. We desire that the Sli transformation be both correct and efficient. By correct, we mean that P and Sli(P) have identical estimates on r. By efficient, we mean that estimation over Sli(P) be as fast as possible. We show that the usual notion of program slicing, which traverses control and data dependencies backward from the return expression r, is unsatisfactory for probabilistic programs, since it produces incorrect slices on some programs and sub-optimal ones on others. Our key insight is that in addition to the usual notions of control dependence and data dependence that are used to slice non-probabilistic programs, a new kind of dependence called observe dependence arises naturally due to observe statements in probabilistic programs. We propose a new definition of Sli(P) which is both correct and efficient for probabilistic programs, by including observe dependence in addition to control and data dependences for computing slices. We prove correctness mathematically, and we demonstrate efficiency empirically. We show that by applying the Sli transformation as a pre-pass, we can improve the efficiency of probabilistic inference, not only in our own inference tool R2, but also in other systems for performing inference such as Church and Infer.NET.", "While the covering algorithm has been perfected recently by the iterative approaches, such as DAOmap and IMap, its application has been limited to technology mapping. The main factor preventing the covering problem's migration to other logic transformations, such as elimination and resynthesis region identification found in SIS and FBDD, is the exponential number of alternative cuts that have to be evaluated. Traditional methods of cut generation do not scale beyond a cut size of 6. 
In this paper, a symbolic method that can enumerate all cuts is proposed without any pruning, up to a cut size of 10. We show that it can outperform traditional methods by an order of magnitude and, as a result, scales to 100K gate benchmarks. As a practical driver, the covering problem applied to elimination is shown where it can not only produce competitive area, but also provide more than a 6x average reduction of the total runtime in FBDD, a BDD based logic synthesis tool with a reported order of magnitude faster runtime than SIS and commercial tools with negligible impact on area."
cs0309030
1519115557
ABSTRACT This paper introduces an automatic debugging framework that relies on model–based reasoning techniques to locate faults in programs. In particular, model–based diagnosis, together with an abstract interpretation based conflict detection mechanism, is used to derive diagnoses, which correspond to possible faults in programs. Design information and partial specifications are applied to guide a model revision process, which allows for automatic detection and correction of structural faults. KEYWORDS: Model–based Debugging, Diagnosis, Abstract Interpretation, Program Analysis. 1 Introduction. Detecting a faulty behavior within a program, locating the cause of the fault, and fixing the fault by means of changing the program continues to be a crucial and challenging task in software development. Many papers have been published so far in the domain of detecting faults in software, e.g., testing or formal verification [CDH+00], and locating them, e.g., program slicing [Wei84] and automatic program debugging [Llo87]. More recently, model–based diagnosis [Rei87] has been used for locating faults in software [CFD93, MSWW02a]. This paper extends previous research in several directions: Firstly, a parameterized debugging framework is introduced, which integrates dynamic and static properties, as well as design information of programs. The framework is based on results derived in the field of abstract interpretation [CC77], and can therefore be parameterized with different lattices and context selection strategies. Secondly, the one–to–one correspondence between model components and program statements is replaced by a hierarchy of components, which provides means for more efficient reasoning procedures, as well as more flexibility when focusing on interesting parts of a program. This work is organized as follows. In Section 2, we give an introduction to model-based debugging.
Section 3 describes the mapping from source code to model components and the (approximate) computation of program effects in the style of [CC77] and [Bou93]. The next section discusses the modeling of programs and the reasoning framework. In Section 5, we provide an example which puts together the different models and demonstrates the debugging capabilities of our approach.
@cite_41 @cite_22 use probability measurements to guide diagnosis. The program debugging process is divided into two steps. In the first one, program parts that may cause a discrepancy are computed by tracing the incorrect output back to the inputs and collecting the involved statements. In the second one, a belief network is used to identify the statements most likely to cause the fault. Although this approach was successful in debugging a very large program, it requires statistics relating statement types to fault symptoms, which makes it unsuitable for debugging general programs.
{ "cite_N": [ "@cite_41", "@cite_22" ], "mid": [ "2162045655", "2043811931" ], "abstract": [ "One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. This paper presents a new technique that uses visualization to assist with these tasks. The technique uses color to visually map the participation of each program statement in the outcome of the execution of the program with a test suite, consisting of both passed and failed test cases. Based on this visual mapping, a user can inspect the statements in the program, identify statements involved in failures, and locate potentially faulty statements. The paper also describes a prototype tool that implements our technique along with a set of empirical studies that use the tool for evaluation of the technique. The empirical studies show that, for the subject we studied, the technique can be effective in helping a user locate faults in a program.", "A major obstacle to finding program errors in a real system is knowing what correctness rules the system must obey. These rules are often undocumented or specified in an ad hoc manner. This paper demonstrates techniques that automatically extract such checking information from the source code itself, rather than the programmer, thereby avoiding the need for a priori knowledge of system rules. The cornerstone of our approach is inferring programmer \"beliefs\" that we then cross-check for contradictions. Beliefs are facts implied by code: a dereference of a pointer, p, implies a belief that p is non-null, a call to \"unlock(l)\" implies that l was locked, etc. For beliefs we know the programmer must hold, such as the pointer dereference above, we immediately flag contradictions as errors.
For beliefs that the programmer may hold, we can assume these beliefs hold and use a statistical analysis to rank the resulting errors from most to least likely. For example, a call to \"spin_lock\" followed once by a call to \"spin_unlock\" implies that the programmer may have paired these calls by coincidence. If the pairing happens 999 out of 1000 times, though, then it is probably a valid belief and the sole deviation a probable error. The key feature of this approach is that it requires no a priori knowledge of truth: if two beliefs contradict, we know that one is an error without knowing what the correct belief is.Conceptually, our checkers extract beliefs by tailoring rule \"templates\" to a system --- for example, finding all functions that fit the rule template \"a must be paired with b.\" We have developed six checkers that follow this conceptual framework. They find hundreds of bugs in real systems such as Linux and OpenBSD. From our experience, they give a dramatic reduction in the manual effort needed to check a large system. Compared to our previous work [9], these template checkers find ten to one hundred times more rule instances and derive properties we found impractical to specify manually." ] }
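The first abstract above describes coverage-based statement ranking: statements exercised mostly by failing test cases are flagged as suspicious. A minimal sketch of that idea, using the widely known Tarantula suspiciousness score (the exact mapping in the cited tool may differ; all names here are illustrative):

```python
# Sketch of coverage-based statement ranking: a statement covered by a
# large fraction of failing runs and a small fraction of passing runs
# gets a high suspiciousness score.

def suspiciousness(coverage, outcomes):
    """coverage: {test_id: set of covered statement ids}
    outcomes: {test_id: 'pass' or 'fail'}
    Returns {statement: score in [0, 1]}."""
    total_pass = sum(1 for o in outcomes.values() if o == 'pass')
    total_fail = sum(1 for o in outcomes.values() if o == 'fail')
    scores = {}
    for s in set().union(*coverage.values()):
        p = sum(1 for t, cov in coverage.items()
                if s in cov and outcomes[t] == 'pass')
        f = sum(1 for t, cov in coverage.items()
                if s in cov and outcomes[t] == 'fail')
        pass_ratio = p / total_pass if total_pass else 0.0
        fail_ratio = f / total_fail if total_fail else 0.0
        denom = pass_ratio + fail_ratio
        scores[s] = fail_ratio / denom if denom else 0.0
    return scores

# Toy test suite: statement 2 is covered by the failing run t3 but by
# only one of the two passing runs, so it ranks most suspicious.
cov = {'t1': {1, 2, 3}, 't2': {1, 3}, 't3': {1, 2}}
out = {'t1': 'pass', 't2': 'pass', 't3': 'fail'}
print(sorted(suspiciousness(cov, out).items(), key=lambda kv: -kv[1]))
```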
cs0309030
1519115557
Jackson @cite_35 introduces a framework to detect faults that manifest themselves through changed dependencies between the input and the output variables of a program. The approach detects differences between the dependencies computed for a program and the dependencies specified by the user. It is able to detect certain kinds of structural faults, but does not exploit test case information. Whereas Jackson focuses on bug detection, the model--based approach is also capable of locating faults. Further, the information obtained from present and absent dependencies can aid the debugger to focus on certain regions and types of faults, and thus find possible causes more quickly.
{ "cite_N": [ "@cite_35" ], "mid": [ "2093831363" ], "abstract": [ "The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures for our CBR system is also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models have better performance than models based on multiple linear regression." ] }
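The dependency-difference check described in the related-work paragraph above can be sketched as a simple set comparison; this is an illustrative reconstruction, not the cited tool's implementation, and all names are hypothetical:

```python
# Compare the (output <- input) dependencies computed from a program
# against those the user specified; report both absent and unexpected
# dependencies, either of which may indicate a structural fault.

def dependency_diff(computed, specified):
    """Both arguments are sets of (output_var, input_var) pairs."""
    return {
        'missing': specified - computed,     # specified but not in the code
        'unexpected': computed - specified,  # in the code but not specified
    }

# e.g. the program computes y from a and b, but the specification says
# y should depend only on a, and z should depend on b:
computed = {('y', 'a'), ('y', 'b')}
specified = {('y', 'a'), ('z', 'b')}
print(dependency_diff(computed, specified))
```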
cs0309030
1519115557
@cite_16 apply similar ideas to knowledge base maintenance, exploiting hierarchical information to speed up the diagnostic process and to reduce the number of diagnoses.
{ "cite_N": [ "@cite_16" ], "mid": [ "199096196" ], "abstract": [ "Debugging, validation, and maintenance of configurator knowledge bases are important tasks for the successful deployment of product configuration systems, due to frequent changes (e.g., new component types, new regulations) in the configurable products. Model based diagnosis techniques have shown to be a promising approach to support the test engineer in identifying faulty parts in declarative knowledge bases. Given positive (existing configurations) and negative test cases, explanations for the unexpected behavior of the configuration systems can be calculated using a consistency based approach. For the case of large and complex knowledge bases, we show how the usage of hierarchical abstractions can reduce the computation times for the explanations and in addition gives the possibility to iteratively and interactively refine diagnoses from abstract to more detailed levels. Starting from a logical definition of configuration and diagnosis of knowledge bases, we show how a basic diagnostic algorithm can be extended to support hierarchical abstractions in the configuration domain. Finally, experimental results from a prototypical implementation using an industrial constraint based configurator library are presented." ] }
cs0309030
1519115557
The use of abstract interpretation to analyze programs was first introduced by @cite_0 , and later extended by @cite_25 @cite_23 to include assertions for abstract debugging. Their approach aims at analyzing every possible execution of a program, which makes it suitable for detecting errors even when no test cases are available. A common problem of these approaches is that of choosing appropriate abstractions in order to obtain useful results, which hinders their automatic application to many programs. @cite_27 introduces a relaxed form of representation for abstract interpretation, which allows for more complex domains while building the structure of the approximation dynamically. Our framework is strongly inspired by this work, but provides more insight on how to choose approximation operators for debugging, in particular when test information is known. These questions are not addressed in @cite_27 .
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_25", "@cite_23" ], "mid": [ "2043100293", "2165069483", "2158735282", "2170736936" ], "abstract": [ "A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).", "Abstract interpretation is a formal method that enables the static determination (i.e. at compile-time) of the dynamic properties (i.e. at run-time) of programs. We present an abstract interpretation-based method, called abstract debugging , which enables the static and formal debugging of programs, prior to their execution, by finding the origin of potential bugs as well as necessary conditions for these bugs not to occur at run-time. 
We show how invariant assertions and intermittent assertions, such as termination, can be used to formally debug programs. Finally, we show how abstract debugging can be effectively and efficiently applied to higher-order imperative programs with exceptions and jumps to non-local labels, and present the Syntox system that enables the abstract debugging of the Pascal language by the determination of the range of the scalar variables of programs.", "Abstract interpretation provides an elegant formalism for performing program analysis. Unfortunately, designing and implementing a sound, precise, scalable, and extensible abstract interpreter is difficult. In this paper, we describe an approach to creating correct-by-construction abstract interpreters that also attain the fundamental limits on precision that abstract-interpretation theory establishes. Our approach requires the analysis designer to implement only a small number of operations. In particular, we describe a systematic method for implementing an abstract interpreter that solves the following problem: Given program P, and an abstract domain A, find the most-precise inductive A-invariant for P.", "We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization (Sect. 3 and 7), the symbolic manipulation of expressions to improve the precision of abstract transfer functions (Sect. 6.3), the octagon (Sect. 6.2.2), ellipsoid (Sect.
6.2.3), and decision tree (Sect. 6.2.4) abstract domains, all with sound handling of rounding errors in floating point computations, widening strategies (with thresholds: Sect. 7.1.2, delayed: Sect. 7.1.3) and the automatic determination of the parameters (parametrized packing: Sect. 7.2)." ] }
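The first abstract above illustrates abstract interpretation with the rule of signs: -1515 * 17 evaluates to (-) in the abstract, while -1515 + 17 can only be approximated by (±). A minimal sketch of that abstract domain (illustrative code, not from any cited system):

```python
# A tiny abstract interpreter over the rule-of-signs domain: nonzero
# values are abstracted to '+' or '-', and operators are given abstract
# meanings; '±' means the sign is unknown.

def sign(n):
    return '+' if n > 0 else '-'   # abstraction of a nonzero literal

def mul(a, b):
    if '±' in (a, b):
        return '±'
    return '+' if a == b else '-'  # rule of signs for multiplication

def add(a, b):
    if a == b:
        return a                   # (+)+(+) = (+), (-)+(-) = (-)
    return '±'                     # sign of a mixed sum is unknown

# -1515 * 17  ->  (-) * (+)  ->  (-): the abstract run proves the
# product is negative without computing it.
print(mul(sign(-1515), sign(17)))  # '-'
# -1515 + 17  ->  (-) + (+)  ->  (±): sound but imprecise.
print(add(sign(-1515), sign(17)))  # '±'
```

The imprecision of `add` on mixed signs is exactly the "fundamentally incomplete results" the abstract mentions: soundness is kept by answering (±) whenever the domain cannot decide.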
cs0309030
1519115557
Recently, model checking approaches have been extended to attempt fault localization in counterexample traces. @cite_40 extends a model checking algorithm so that it can pinpoint the transitions in a trace that are responsible for a faulty behavior. @cite_39 presents another approach, which explores the neighborhood of counterexamples to determine causes of faulty behavior. These techniques mostly consider deviations in control flow and do not take data dependencies into account. Also, the derivation of the abstract model from the concrete program is usually non--trivial and difficult to automate.
{ "cite_N": [ "@cite_40", "@cite_39" ], "mid": [ "1523041988", "2158870716" ], "abstract": [ "A traditional counterexample to a linear-time safety property shows the values of all signals at all times prior to the error. However, some signals may not be critical to causing the failure. A succinct explanation may help human understanding as well as speed up algorithms that have to analyze many such traces. In Bounded Model Checking (BMC), a counterexample is constructed from a satisfying assignment to a Boolean formula, typically in CNF. Modern SAT solvers usually assign values to all variables when the input formula is satisfiable. Deriving minimal satisfying assignments from such complete assignments does not lead to concise explanations of counterexamples because of how CNF formulae are derived from the models. Hence, we formulate the extraction of a succinct counterexample as the problem of finding a minimal assignment that, together with the Boolean formula describing the model, implies an objective. We present a two-stage algorithm for this problem, such that the result of each stage contributes to identify the “interesting” events that cause the failure. We demonstrate the effectiveness of our approach with an example and with experimental results.", "There is significant room for improving users' experiences with model checking tools. An error trace produced by a model checker can be lengthy and is indicative of a symptom of an error. As a result, users can spend considerable time examining an error trace in order to understand the cause of the error. Moreover, even state-of-the-art model checkers provide an experience akin to that provided by parsers before syntactic error recovery was invented: they report a single error trace per run. 
The user has to fix the error and run the model checker again to find more error traces. We present an algorithm that exploits the existence of correct traces in order to localize the error cause in an error trace, report a single error trace per error cause, and generate multiple error traces having independent causes. We have implemented this algorithm in the context of slam, a software model checker that automatically verifies temporal safety properties of C programs, and report on our experience using it to find and localize errors in device drivers. The algorithm typically narrows the location of a cause down to a few lines, even in traces consisting of hundreds of statements." ] }
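The second abstract above localizes an error cause by contrasting the failing trace with correct traces. A toy version of that comparison, reporting the transitions of the failing run that never occur in any passing run (illustrative names and traces, not the cited tool's algorithm):

```python
# Localize the cause of a failing run to the trace transitions that
# appear in no passing run. Traces are sequences of program locations.

def localize(error_trace, correct_traces):
    ok = set()
    for t in correct_traces:
        ok.update(zip(t, t[1:]))            # transitions seen in passing runs
    bad = zip(error_trace, error_trace[1:])  # transitions of the failing run
    return [step for step in bad if step not in ok]

passing = [['init', 'lock', 'work', 'unlock', 'exit'],
           ['init', 'exit']]
failing = ['init', 'lock', 'work', 'exit']   # exits without unlocking
print(localize(failing, passing))
```

The returned transition points at the step where the failing run first deviates from every correct run, narrowing the cause down to a single edge of the trace rather than the whole counterexample.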
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
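The "position" concept from the abstract above can be sketched as a per-line timestamp: each execution of a source line increments that line's counter, so the pair (line, count) names one point in the execution trace, and under deterministic re-execution the same pair recurs. A minimal illustration (all names hypothetical; the paper's implementation instruments GCC output and Java bytecode rather than using a tracing class):

```python
from collections import Counter

class PositionTracker:
    """Names positions in an execution trace as (line, timestamp) pairs."""

    def __init__(self, stop_at=None):
        self.counts = Counter()
        self.stop_at = stop_at        # position to break on during re-execution
        self.hit = False

    def trace(self, line):
        self.counts[line] += 1        # this line's timestamp ticks
        pos = (line, self.counts[line])
        if pos == self.stop_at:
            self.hit = True           # a real debugger would stop here
        return pos

def program(tracker):
    for _ in range(3):
        tracker.trace(1)              # "line 1" executes three times
    tracker.trace(2)

# First run: observe that the interesting write happened at (1, 2).
program(PositionTracker())
# Deterministic re-execution: break exactly at the second execution of line 1.
t = PositionTracker(stop_at=(1, 2))
program(t)
print(t.hit)
```

Because only the (line, timestamp) pair is stored, moving the control point back costs one re-execution rather than a full memory-history log.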
Boothe @cite_18 built a C debugger with reverse execution capability using a step counter, which counts the number of executed steps, combined with re-execution of the debuggee from the beginning. The same capability could also be implemented with our timestamp counter and re-execution. The difference comes from the purpose of each project: Boothe implemented reverse-execution versions of existing debugger commands such as ``backward step'' and ``backward finish''. Since we try to implement more abstract control of program execution than raw debugger commands, a counter incremented at every step execution is too expensive for our purpose.
{ "cite_N": [ "@cite_18" ], "mid": [ "1969550081" ], "abstract": [ "This paper discusses our research into algorithms for creating anefficient bidirectional debugger in which all traditional forward movement commands can be performed with equal ease in the reverse direction. We expect that adding these backwards movement capabilities to a debugger will greatly increase its efficacy as a programming tool. The efficiency of our methods arises from our use of event countersthat are embedded into the program being debugged. These counters areused to precisely identify the desired target event on the fly as thetarget program executes. This is in contrast to traditional debuggers that may trap back to the debugger many times for some movements. For reverse movements we re-execute the program (possibly using two passes) to identify and stop at the desired earlier point. Our counter based techniques are essential for these reverse movements because they allow us to efficiently execute through the millions of events encountered during re-execution. Two other important components of this debugger are its I O logging and checkpointing. We log and later replay the results of system callsto ensure deterministic re-execution, and we use checkpointing to bound theamount of re-execution used for reverse movements. Short movements generally appear instantaneous, and the time for longer movements is usually bounded within a small constant factor of the temporal distance moved back." ] }
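The counter-based reverse movement described above can be sketched in a few lines: forward execution counts step events, and a "backward step" to event k is realized by re-executing the deterministic program and stopping when the counter reaches k. A toy illustration (names and steps are hypothetical, not Boothe's implementation):

```python
# A deterministic program is modeled as a list of state-mutating steps.
# Reverse movement = re-execution up to an earlier event count.

def run(program_steps, stop_at=None):
    """Execute steps over a fresh state; return (event_count, state)
    at stop_at, or at the end of the program."""
    state = {}
    counter = 0
    for step in program_steps:
        counter += 1                  # embedded event counter
        step(state)
        if counter == stop_at:
            return counter, dict(state)
    return counter, dict(state)

steps = [lambda s: s.update(x=1),
         lambda s: s.update(x=2),
         lambda s: s.update(y=s['x'] * 10)]

end, final = run(steps)               # forward run reaches event 3
# "backward step": from event 3 back to event 2, via re-execution
back, state = run(steps, stop_at=end - 1)
print(back, state)
```

Determinism is what makes this sound: the second run revisits exactly the states of the first, so stopping at count k reproduces the state that held after the k-th event.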
cs0309031
1638678101
@cite_17 , Moher @cite_8 and @cite_15 save the complete memory history of a process to achieve fully random access to program states. Their systems therefore have to deal with a large "log". Our system, however, saves only a pair consisting of a line number and a timestamp value to obtain the same capability, by assuming the determinism of debuggees.
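As a rough illustration of this idea (in Python rather than the paper's GCC and bytecode implementations; `PositionTracer` and `run_traced` are hypothetical names), a position can be modeled as a (line number, visit count) pair, which is a stable coordinate across runs only under deterministic re-execution:

```python
import sys

class PositionTracer:
    """Sketch: a 'position' is a (line number, visit count) pair.
    Assumes deterministic re-execution, as the text describes."""

    def __init__(self, target=None):
        self.counts = {}      # line number -> times that line has executed
        self.target = target  # position to stop at on re-execution
        self.hit = False

    def trace(self, frame, event, arg):
        if event == "line":
            line = frame.f_lineno
            self.counts[line] = self.counts.get(line, 0) + 1
            if self.target == (line, self.counts[line]):
                self.hit = True  # a real debugger would pause execution here
        return self.trace

def run_traced(func, target=None):
    """Execute func under the tracer; return the tracer for inspection."""
    tracer = PositionTracer(target)
    sys.settrace(tracer.trace)
    try:
        func()
    finally:
        sys.settrace(None)
    return tracer
```

Running the program once yields per-line visit counts; re-running with a recorded position lets the tracer recognize exactly the same execution point.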
{ "cite_N": [ "@cite_8", "@cite_15", "@cite_17" ], "mid": [ "2117703621", "2020301027", "2079632486" ], "abstract": [ "The cost of accessing main memory is increasing. Machine designers have tried to mitigate the consequences of the processor and memory technology trends underlying this increasing gap with a variety of techniques to reduce or tolerate memory latency. These techniques, unfortunately, are only occasionally successful for pointer-manipulating programs. Recent research has demonstrated the value of a complementary approach, in which pointer-based data structures are reorganized to improve cache locality.This paper studies a technique for using a generational garbage collector to reorganize data structures to produce a cache-conscious data layout, in which objects with high temporal affinity are placed next to each other, so that they are likely to reside in the same cache block. The paper explains how to collect, with low overhead, real-time profiling information about data access patterns in object-oriented languages, and describes a new copying algorithm that utilizes this information to produce a cache-conscious object layout.Preliminary results show that this technique reduces cache miss rates by 21--42 , and improves program performance by 14--37 over Cheney's algorithm. We also compare our layouts against those produced by the Wilson-Lam-Moher algorithm, which attempts to improve program locality at the page level. Our cache-conscious object layouts reduces cache miss rates by 20--41 and improves program performance by 18--31 over their algorithm, indicating that improving locality at the page level is not necessarily beneficial at the cache level.", "A Euclidean approximate sparse recovery system consists of parameters k,N, an m-by-N measurement matrix, Φ, and a decoding algorithm, D. Given a vector, x, the system approximates x by ^x=D(Φ x), which must satisfy ||x - x||2≤ C ||x - xk||2, where xk denotes the optimal k-term approximation to x. 
(The output ^x may have more than k terms). For each vector x, the system must succeed with probability at least 3 4. Among the goals in designing such systems are minimizing the number m of measurements and the runtime of the decoding algorithm, D. In this paper, we give a system with m=O(k log(N k)) measurements--matching a lower bound, up to a constant factor--and decoding time k log O(1) N, matching a lower bound up to log(N) factors. We also consider the encode time (i.e., the time to multiply Φ by x), the time to update measurements (i.e., the time to multiply Φ by a 1-sparse x), and the robustness and stability of the algorithm (adding noise before and after the measurements). Our encode and update times are optimal up to log(k) factors. The columns of Φ have at most O(log2(k)log(N k)) non-zeros, each of which can be found in constant time. Our full result, an FPRAS, is as follows. If x=xk+ν1, where ν1 and ν2 (below) are arbitrary vectors (regarded as noise), then, setting ^x = D(Φ x + ν2), and for properly normalized ν, we get [||^x - x||22 ≤ (1+e)||ν1||22 + e||ν2||22,] using O((k e)log(N k)) measurements and (k e)logO(1)(N) time for decoding.", "The property of locality in program behavior has been studied and modelled extensively because of its application to memory design, code optimization, multiprogramming etc. We propose a k order Markov chain based scheme to model the sequence of time intervals between successive references to the same address in memory during program execution. Each unique address in a program is modelled separately. To validate our model, which we call the Inter-Reference Gap (IRG) model, we show substantial improvements in three different areas where it is applied. (1) We improve upon the miss ratio for the Least Recently Used (LRU) memory replacement algorithm by up to 37 . (2) We achieve up to 22 space-time product improvement over the Working Set (WS) algorithm for dynamic memory management. 
(3) A new trace compression technique is proposed which compresses up to 2.5 with zero error in WS simulations and up to 3.7 error in the LRU simulations. All these results are obtained experimentally, via trace driven simulations over a wide range of cache traces, page reference traces, object traces and database traces." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
Ducassé @cite_26 allows the programmer to control the execution not by source-statement orientation but by event orientation, such as assignments, function calls, loops, and so on. Users write Prolog-like forms to designate breakpoints that have complex conditions. This mechanism is complementary to our system and is suitable as a front end to it, in order to designate the appropriate positions to which the control point should be moved.
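A toy sketch of such event-oriented breakpoints (a hypothetical Python stand-in for the Prolog-like forms; `make_breakpoint` and the event-dictionary shape are assumptions, not Ducassé's actual interface):

```python
def make_breakpoint(event_type, predicate):
    """Fire on a class of events (e.g. assignments, calls) only when a
    user-supplied condition over the event's attributes also holds."""
    def matches(event):
        return event.get("type") == event_type and predicate(event)
    return matches

# e.g. break on any assignment to variable "x" with a negative value
bp = make_breakpoint("assign", lambda e: e["var"] == "x" and e["value"] < 0)
```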
{ "cite_N": [ "@cite_26" ], "mid": [ "2028053566" ], "abstract": [ "In this paper, we compose six di erent Python and Prolog VMs into 4 pairwise compositions: one using C interpreters; one running on the JVM; one using meta-tracing interpreters; and one using a C interpreter and a meta-tracing interpreter. We show that programs that cross the language barrier frequently execute faster in a meta-tracing composition, and that meta-tracing imposes a significantly lower overhead on composed programs relative to mono-language programs. 1. Overview Programming language composition aims to allow the mixing of programming languages in a fine-grained manner. This vision brings many challenging problems, from the interaction of language semantics to performance. In this paper, we investigate the runtime performance of composed programs in high-level languages. We start from the assumption that execution of such programs is most likely to be through composing language implementations that use interpreters and VMs, rather than traditional compilers. This raises the question: how do di erent styles of composition a ect performance? Clearly, such a question cannot have a single answer but, to the best of our knowledge, this issue has not been explored in the past. This paper's hypothesis is that meta-tracing - a relatively new technique used to produce JIT (Just-In-Time) compilers from interpreters (1) - will lead to faster interpreter composition than traditional approaches. To test this hypothesis, we present a Python and Prolog composition which allows Python programs to embed and call Prolog programs. We then implement the composition in four di erent ways, comparing the absolute times and the relative cross-language costs of each. In addition to the 'traditional' approaches to composing interpreters (in C and upon the JVM), we also investigate the application of meta-tracing to interpreter composition. The experiments we then carry out confirm our initial hypothesis. 
There is a long tradition of composing Prolog with other languages (with e.g. Icon (2), Lisp (3), and Smalltalk (4)) because one can express certain types of programs far easier in Prolog than in other languages. We have two additional reasons for choosing this pairing. First, these languages represent very di erent points in the language design space: Prolog's inherently" ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_1 @cite_10 developed an event-based instrumentation tool, CCI, which inserts instrumentation code into C source code. The converted code is platform-independent. The execution slowdown, however, is 2.09 times in the case of laplace.c and 5.85 times in the case of life.c @cite_10 . To realize our position system, only events concerning control flow need to be generated.
{ "cite_N": [ "@cite_10", "@cite_1" ], "mid": [ "2096537660", "1737373496" ], "abstract": [ "Automatic software instrumentation is usually done at the machine level or is targeted at specific program behavior for use with a particular monitoring application. The paper describes CCI, an automatic software instrumentation tool for ANSI C designed to serve a broad range of program execution monitors. CCI supports high level instrumentation for both application-specific behavior as well as standard libraries and data types. The event generation mechanism is defined by the execution monitor which uses CCI, providing flexibility for different monitors' execution models. Code explosion and the runtime cost of instrumentation are reduced by declarative configuration facilities that allow the monitor to select specific events to be instrumented. Higher level events can be defined by combining lower level events with information obtained from semantic analysis of the instrumented program.", "In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable(s) (potential bar(s)) to be updated in the next iteration. Both serial methods enjoy an O(n k) iteration complexity guarantee, where n is the number of potential bars and k the iteration counter. Our parallel implementations, written in CUDA and running on a graphical processing unit (GPU), are capable of speedups of up to two orders of magnitude when compared to their serial counterparts. Numerical experiments were performed on instances with up to 30 million potential bars." ] }
cs0309031
1638678101
Many programmers have had to deal with an overwritten variable resulting for example from an aliasing problem. The culprit is obviously the last write-access to that memory location before the manifestation of the bug. The usual technique for removing such bugs starts with the debugger by (1) finding the last write and (2) moving the control point of execution back to that time by re-executing the program from the beginning. We wish to automate this. Step (2) is easy if we can somehow mark the last write found in step (1) and control the execution-point to move it back to this time. In this paper we propose a new concept, position, that is, a point in the program execution trace, as needed for step (2) above. The position enables debuggers to automate the control of program execution to support common debugging activities. We have implemented position in C by modifying GCC and in Java with a bytecode transformer. Measurements show that position can be provided with an acceptable amount of overhead.
@cite_3 developed EEL, a library for building tools to analyze and modify an executable program. Using EEL, we could have implemented the insertion of timestamp-maintaining code at the executable-code level. That solution, however, is platform-dependent, so we instead chose the intermediate-code level and modified GCC.
{ "cite_N": [ "@cite_3" ], "mid": [ "2040183246" ], "abstract": [ "EEL (Executable Editing Library) is a library for building tools to analyze and modify an executable (compiled) program. The systems and languages communities have built many tools for error detection, fault isolation, architecture translation, performance measurement, simulation, and optimization using this approach of modifying executables. Currently, however, tools of this sort are difficult and time-consuming to write and are usually closely tied to a particular machine and operating system. EEL supports a machine- and system-independent editing model that enables tool builders to modify an executable without being aware of the details of the underlying architecture or operating system or being concerned with the consequences of deleting instructions or adding foreign code." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
The problem of debugging memory corruption in production was explicitly identified by Patil and Fischer in @cite_2 , in which they describe using idle processors to absorb their technique's substantial performance impact. Unfortunately, this is not practical in a general-purpose system: idle processors cannot be relied upon to be available for extraneous processing. Indeed, in performance-critical systems any performance impact is often unacceptable.
{ "cite_N": [ "@cite_2" ], "mid": [ "2005139304" ], "abstract": [ "By studying the behavior of several programs that crash due to memory errors, we observed that locating the errors can be challenging because significant propagation of corrupt memory values can occur prior to the point of the crash. In this article, we present an automated approach for locating memory errors in the presence of memory corruption propagation. Our approach leverages the information revealed by a program crash: when a crash occurs, this reveals a subset of the memory corruption that exists in the execution. By suppressing (nullifying) the effect of this known corruption during execution, the crash is avoided and any remaining (hidden) corruption may then be exposed by subsequent crashes. The newly exposed corruption can then be suppressed in turn. By iterating this process until no further crashes occur, the first point of memory corruption—and the likely root cause of the program failure—can be identified. However, this iterative approach may terminate prematurely, since programs may not crash even when memory corruption is present during execution. To address this, we show how crashes can be exposed in an execution by manipulating the relative ordering of particular variables within memory. By revealing crashes through this variable re-ordering, the effectiveness and applicability of the execution suppression approach can be improved. We describe a set of experiments illustrating the effectiveness of our approach in consistently and precisely identifying the first points of memory corruption in executions that fail due to memory errors. We also discuss a baseline software implementation of execution suppression that incurs an average overhead of 7.2x, and describe how to reduce this overhead to 1.8x through hardware support." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
Some memory allocators have addressed debugging problems in production by allowing their behavior to be dynamically changed to provide greater debugging support @cite_13 . This allows optimal allocators to be deployed into production, while still allowing their debugging features to be enabled later should problems arise. A common way for these allocators to detect buffer overruns is to optionally place red zones around allocated memory. However, this provides immediate identification of the errant code only if stores to the red zone induce a synchronous fault. Such faults are typically achieved by co-opting the virtual memory system in some way --- either by surrounding a buffer with unmapped regions, or by performing a check on each access. The first has enormous cost in terms of space, and the second in terms of time --- neither can be acceptably enabled at all times. Thus, these approaches are still only useful for reproducible memory corruption problems.
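The red-zone idea can be sketched as follows (a toy Python model for illustration, not a real allocator; `RedZoneAllocator` and its methods are invented names). Note how it reproduces exactly the limitation described above: without a synchronous fault, the overrun is detected only lazily, at free time, not at the errant store:

```python
REDZONE = b"\xde\xad\xbe\xef\xde\xad\xbe\xef"  # arbitrary 8-byte sentinel

class RedZoneAllocator:
    """Toy model: each buffer is bracketed by sentinel bytes that are
    checked on free, so an overrun is caught only after the fact."""

    def __init__(self):
        self._heap = {}   # handle -> (backing bytearray, payload size)
        self._next = 0

    def alloc(self, size):
        backing = bytearray(REDZONE) + bytearray(size) + bytearray(REDZONE)
        self._next += 1
        self._heap[self._next] = (backing, size)
        return self._next

    def write(self, handle, offset, data):
        backing, _ = self._heap[handle]
        start = len(REDZONE) + offset
        backing[start:start + len(data)] = data  # unchecked: may smash red zone

    def free(self, handle):
        backing, size = self._heap.pop(handle)
        if (backing[:len(REDZONE)] != REDZONE
                or backing[len(REDZONE) + size:] != REDZONE):
            raise RuntimeError("red zone smashed: buffer overrun detected")
```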
{ "cite_N": [ "@cite_13" ], "mid": [ "2128274900" ], "abstract": [ "Parallel, multithreaded C and C++ programs such as web servers, database managers, news servers, and scientific applications are becoming increasingly prevalent. For these applications, the memory allocator is often a bottleneck that severely limits program performance and scalability on multiprocessor systems. Previous allocators suffer from problems that include poor performance and scalability, and heap organizations that introduce false sharing. Worse, many allocators exhibit a dramatic increase in memory consumption when confronted with a producer-consumer pattern of object allocation and freeing. This increase in memory consumption can range from a factor of P (the number of processors) to unbounded memory consumption.This paper introduces Hoard, a fast, highly scalable allocator that largely avoids false sharing and is memory efficient. Hoard is the first allocator to simultaneously solve the above problems. Hoard combines one global heap and per-processor heaps with a novel discipline that provably bounds memory consumption and has very low synchronization costs in the common case. Our results on eleven programs demonstrate that Hoard yields low average fragmentation and improves overall program performance over the standard Solaris allocator by up to a factor of 60 on 14 processors, and up to a factor of 18 over the next best allocator we tested." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
If memory corruption cannot be acceptably prevented in production code, then the focus must shift to debugging the corruption postmortem. While the notion of postmortem debugging has existed since the earliest dawn of debugging @cite_5 , there seems to have been very little work on postmortem debugging of memory corruption per se; such work as exists has focused on race-condition detection in parallel and distributed programs. The lack of work on postmortem debugging is surprising given its clear advantages for debugging production systems --- advantages that were elucidated by McGregor and Malone in @cite_10 :
{ "cite_N": [ "@cite_5", "@cite_10" ], "mid": [ "2005139304", "2098809490" ], "abstract": [ "By studying the behavior of several programs that crash due to memory errors, we observed that locating the errors can be challenging because significant propagation of corrupt memory values can occur prior to the point of the crash. In this article, we present an automated approach for locating memory errors in the presence of memory corruption propagation. Our approach leverages the information revealed by a program crash: when a crash occurs, this reveals a subset of the memory corruption that exists in the execution. By suppressing (nullifying) the effect of this known corruption during execution, the crash is avoided and any remaining (hidden) corruption may then be exposed by subsequent crashes. The newly exposed corruption can then be suppressed in turn. By iterating this process until no further crashes occur, the first point of memory corruption—and the likely root cause of the program failure—can be identified. However, this iterative approach may terminate prematurely, since programs may not crash even when memory corruption is present during execution. To address this, we show how crashes can be exposed in an execution by manipulating the relative ordering of particular variables within memory. By revealing crashes through this variable re-ordering, the effectiveness and applicability of the execution suppression approach can be improved. We describe a set of experiments illustrating the effectiveness of our approach in consistently and precisely identifying the first points of memory corruption in executions that fail due to memory errors. We also discuss a baseline software implementation of execution suppression that incurs an average overhead of 7.2x, and describe how to reduce this overhead to 1.8x through hardware support.", "Memory leaks and memory corruption are two major forms of software bugs that severely threaten system availability and security. 
According to the US-CERT vulnerability notes database, 68% of all reported vulnerabilities in 2003 were caused by memory leaks or memory corruption. Dynamic monitoring tools, such as the state-of-the-art Purify, are commonly used to detect memory leaks and memory corruption. However, most of these tools suffer from high overhead, with up to a 20 times slowdown, making them infeasible to be used for production-runs. This paper proposes a tool called SafeMem to detect memory leaks and memory corruption on-the-fly during production-runs. This tool does not rely on any new hardware support. Instead, it makes a novel use of existing ECC memory technology and exploits intelligent dynamic memory usage behavior analysis to detect memory leaks and corruption. We have evaluated SafeMem with seven real-world applications that contain memory leak or memory corruption bugs. SafeMem detects all tested bugs with low overhead (only 1.6%-14.4%), 2-3 orders of magnitude smaller than Purify. Our results also show that ECC-protection is effective in pruning false positives for memory leak detection, and in reducing the amount of memory waste (by a factor of 64-74) used for memory monitoring in memory corruption detection compared to page-protection." ] }
cs0309037
2158179037
This paper presents a novel technique for the automatic type identification of arbitrary memory objects from a memory dump. Our motivating application is debugging memory corruption problems in optimized, production systems — a problem domain largely unserved by extant methodologies. We describe our algorithm as applicable to any typed language, and we discuss it with respect to the formidable obstacles posed by C. We describe the heuristics that we have developed to overcome these difficulties and achieve effective type identification on C-based systems. We further describe the implementation of our heuristics on one C-based system — the Solaris operating system kernel — and describe the extensions that we have added to the Solaris postmortem debugger to allow for postmortem type identification. We show that our implementation yields a sufficiently high rate of type identification to be useful for debugging memory corruption problems. Finally, we discuss some of the novel automated debugging mechanisms that can be layered upon postmortem type identification.
The only nod to postmortem debugging of memory corruption seems to come from memory allocators such as the slab allocator @cite_13 used by the Solaris kernel. This allocator can optionally log information with each allocation and deallocation; in the event of failure, these logs can be used to determine the subsystem that allocated the overrun buffer. While this mechanism has proved to be enormously useful in debugging memory corruption problems in the Solaris kernel, it is still far too space- and time-intensive to be enabled at all times in production environments.
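The attribution step this logging enables can be sketched as follows (a hypothetical Python model of such an allocation log; `AllocLog` and `last_owner` are invented names, not the slab allocator's actual interface):

```python
class AllocLog:
    """Toy model of allocation/deallocation logging: record who owned
    each buffer so a postmortem tool can attribute a corrupted address
    to the subsystem that last allocated it."""

    def __init__(self):
        self.entries = []   # (op, base address, size, subsystem)

    def log_alloc(self, base, size, subsystem):
        self.entries.append(("alloc", base, size, subsystem))

    def log_free(self, base, subsystem):
        self.entries.append(("free", base, 0, subsystem))

def last_owner(log, addr):
    """Walk the log backwards for the most recent allocation covering
    addr; its subsystem is the prime suspect for an overrun found there."""
    for op, base, size, subsystem in reversed(log.entries):
        if op == "alloc" and base <= addr < base + size:
            return subsystem
    return None
```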
{ "cite_N": [ "@cite_13" ], "mid": [ "1746694335" ], "abstract": [ "This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating complex objects by retaining their state between uses. These same primitives prove equally effective for managing stateless memory (e.g. data pages and temporary buffers) because they are space-efficient and fast. The allocator's object caches respond dynamically to global memory pressure, and employ an object-coloring scheme that improves the system's overall cache utilization and bus balance. The allocator also has several statistical and debugging features that can detect a wide range of problems throughout the system." ] }
cs0309055
1495757250
In this paper, we propose a mathematical framework for automated bug localization. This framework can be briefly summarized as follows. A program execution can be represented as a rooted acyclic directed graph. We define an execution snapshot by a cut-set on the graph. A program state can be regarded as a conjunction of labels on edges in a cut-set. Then we argue that a debugging task is a pruning process of the execution graph by using cut-sets. A pruning algorithm, i.e., a debugging task, is also presented.
* Shapiro's algorithmic debugging was invented for Prolog programs @cite_2 . Fig. shows our interpretation of his work. From our viewpoint, it uses a proof tree as an execution graph. (Note: this interpretation differs from a normal proof tree. Our interpretation is based on a line graph (a line graph can be obtained by interchanging the vertices and edges of the original graph) of a normal proof tree.) He used only one edge as a cut-set, since removal of any edge divides a tree into two disconnected subtrees. A state is also simple because only one label, i.e., one unified clause, is enough. In this work, one step of the pruning process is fully automated, and the programmer carries out the other step by answering "yes" or "no" to tell the system the correctness of the label on the edge. GADT @cite_6 and Lichtenstein's system @cite_0 can be interpreted in the same manner because they are straightforward extensions of Shapiro's work.
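The single-edge-cut pruning can be sketched as follows (a minimal Python rendering of Shapiro-style algorithmic debugging, with the programmer's yes/no answers replaced by a ground-truth oracle; `Node` and `locate_bug` are hypothetical names):

```python
class Node:
    """A node in an execution (proof) tree; 'correct' is ground truth
    standing in for the programmer's answer to the oracle's question."""
    def __init__(self, label, correct, children=()):
        self.label = label
        self.correct = correct
        self.children = list(children)

def locate_bug(node, oracle):
    """Starting from a node whose result is wrong, descend into any child
    the oracle also judges wrong; a wrong node all of whose children are
    right is where the bug must lie."""
    for child in node.children:
        if not oracle(child):   # programmer answers "no": prune to subtree
            return locate_bug(child, oracle)
    return node                 # all children correct: the bug is here
```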
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_2" ], "mid": [ "1514468887", "2134080718", "2963702702" ], "abstract": [ "The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize.", "This paper presents a method for semi-automatic bug localization, generalized algorithmic debugging, which has been integrated with the category partition method for functional testing. In this way the efficiency of the algorithmic debugging method for bug localization can be improved by using test specifications and test results. 
The long-range goal of this work is a semi-automatic debugging and testing system which can be used during large-scale program development of nontrivial programs. The method is generally applicable to procedural langua ges and is not dependent on any ad hoc assumptions regarding the subject program. The original form of algorithmic debugging, introduced by Shapiro, was however limited to small Prolog programs without side-effects, but has later been generalized to concurrent logic programming languages. Another drawback of the original method is the large number of interactions with the user during bug localization. To our knowledge, this is the first method which uses category partition testing to improve the bug localization properties of algorithmic debugging. The method can avoid irrelevant questions to the programmer by categorizing input parameters and then match these against test cases in the test database. Additionally, we use program slicing, a data flow analysis technique, to dynamically compute which parts of the program are relevant for the search, thus further improving bug localization. We believe that this is the first generalization of algorithmic debugging for programs with side-effects written in imperative languages such as Pascal. These improvements together makes it more feasible to debug larger programs. However, additional improvements are needed to make it handle pointer-related side-effects and concurrent Pascal programs. A prototype generalized algorithmic debugger for a Pascal subset without pointer side-effects and a test case generator for application programs in Pascal, C, dBase, and LOTUS have been implemented.", "We prove tight upper bounds on the logarithmic derivative of the independence and matching polynomials of d-regular graphs. 
For independent sets, this theorem is a strengthening of Kahn's result that a disjoint union of copies of K_{d,d} maximizes the number of independent sets of a bipartite d-regular graph, Galvin and Tetali's result that the independence polynomial is maximized by the same, and Zhao's extension of both results to all d-regular graphs. For matchings, this shows that the matching polynomial and the total number of matchings of a d-regular graph are maximized by a union of copies of K_{d,d}. Using this we prove the asymptotic upper matching conjecture of Friedland, Krop, Lundow, and Markstrom. In probabilistic language, our main theorems state that for all d-regular graphs and all λ, the occupancy fraction of the hard-core model and the edge occupancy fraction of the monomer-dimer model with fugacity λ are maximized by K_{d,d}. Our method involves constrained optimization problems over distributions of random variables and applies to all d-regular graphs directly, without a reduction to the bipartite case. Using a variant of the method we prove a lower bound on the occupancy fraction of the hard-core model on any d-regular, vertex-transitive, bipartite graph: the occupancy fraction of such a graph is strictly greater than the occupancy fraction of the unique translation-invariant hard-core measure on the infinite d-regular tree" ] }
cs0308007
2950479998
The past years have seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine -- OPTYap -- and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on the environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, it emphasizes our belief that through applying or-parallelism and tabling to logic programs the range of applications for Logic Programming can be increased.
A first proposal on how to exploit implicit parallelism in tabling systems was Freire's @cite_53 . In this model, each tabled subgoal is computed independently in a single computational thread, a generator thread. Each generator thread is associated with a unique tabled subgoal and is responsible for fully exploiting its search tree in order to obtain the complete set of answers. A generator thread that depends on other tabled subgoals asynchronously consumes answers as the corresponding generator threads make them available. Within this model, parallelism results from having several generator threads running concurrently. Parallelism arising from non-tabled subgoals or from execution alternatives to tabled subgoals is not exploited. Moreover, we expect that scheduling and load balancing would be even harder than for traditional parallel systems.
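As a rough illustration of this model (not OPTYap's or Freire's actual machinery), the sketch below spawns one generator thread per tabled subgoal; a dependent generator consumes the producer's answers asynchronously through a shared table. The subgoals, the answer sets, and the `Table` class are all invented for illustration.

```python
import threading
import queue

class Table:
    """Toy answer table for one tabled subgoal."""
    def __init__(self):
        self.answers = []
        self.q = queue.Queue()

    def put(self, ans):
        self.answers.append(ans)
        self.q.put(ans)

    def close(self):
        self.q.put(None)          # sentinel: this subgoal is complete

tables = {"p": Table(), "q": Table()}

def gen_p():
    # generator thread for tabled subgoal p/1: explores its own
    # search tree and publishes each answer as it is found
    for ans in (1, 2, 3):
        tables["p"].put(ans)
    tables["p"].close()

def gen_q():
    # generator thread for q/1, which depends on p/1: consumes p's
    # answers asynchronously and derives q's answers from them
    while True:
        ans = tables["p"].q.get()
        if ans is None:
            break
        tables["q"].put(ans * 10)
    tables["q"].close()

t1 = threading.Thread(target=gen_p)
t2 = threading.Thread(target=gen_q)
t2.start(); t1.start()
t1.join(); t2.join()
print(sorted(tables["q"].answers))   # [10, 20, 30]
```

Parallelism here comes solely from the two generators running concurrently, which mirrors the model's limitation: alternatives inside a single subgoal's tree are not explored in parallel.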
{ "cite_N": [ "@cite_53" ], "mid": [ "2787223504" ], "abstract": [ "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. 
We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators." ] }
cs0308007
2950479998
The past years have seen widening efforts at increasing Prolog's declarativeness and expressiveness. Tabling has proved to be a viable technique to efficiently overcome SLD's susceptibility to infinite loops and redundant subcomputations. Our research demonstrates that implicit or-parallelism is a natural fit for logic programs with tabling. To substantiate this belief, we have designed and implemented an or-parallel tabling engine -- OPTYap -- and we used a shared-memory parallel machine to evaluate its performance. To the best of our knowledge, OPTYap is the first implementation of a parallel tabling engine for logic programming systems. OPTYap builds on Yap's efficient sequential Prolog engine. Its execution model is based on the SLG-WAM for tabling, and on the environment copying for or-parallelism. Preliminary results indicate that the mechanisms proposed to parallelize search in the context of SLD resolution can indeed be effectively and naturally generalized to parallelize tabled computations, and that the resulting systems can achieve good performance on shared-memory parallel machines. More importantly, it emphasizes our belief that through applying or-parallelism and tabling to logic programs the range of applications for Logic Programming can be increased.
There have been other proposals for concurrent tabling, but in a distributed memory context. Hu @cite_46 was the first to formulate a method for distributed tabled evaluation, termed Multi-Processor SLG (SLGMP). This method matches subgoals with processors in a similar way to Freire's approach. Each processor gets a single subgoal and is responsible for fully exploiting its search tree and obtaining the complete set of answers. One of the main contributions of SLGMP is its controlled scheme for propagating subgoal dependencies in order to safely perform distributed completion. An implementation prototype of SLGMP was developed, but as far as we know no results have been reported.
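The completion problem SLGMP addresses can be illustrated with a small sketch: a subgoal may only be completed once every subgoal it depends on is itself complete, and mutually dependent subgoals must be completed together. The dependency graph below is invented; this is only the editor's toy rendering of the idea, not SLGMP's actual numbering scheme.

```python
def completion_order(deps):
    """deps: subgoal -> set of subgoals it depends on.
    Returns an order in which subgoals can be safely completed;
    a cyclic remainder is completed together as one frozenset."""
    order, done = [], set()
    while len(done) < len(deps):
        progress = False
        for sg, ds in deps.items():
            if sg not in done and ds <= done:
                order.append(sg)
                done.add(sg)
                progress = True
        if not progress:
            # cyclic dependencies: complete the mutually dependent
            # remainder in a single step
            rest = set(deps) - done
            order.append(frozenset(rest))
            done |= rest
    return order

deps = {"a": set(), "b": {"a"}, "c": {"b"}, "d": {"c", "a"}}
print(completion_order(deps))     # ['a', 'b', 'c', 'd']
```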
{ "cite_N": [ "@cite_46" ], "mid": [ "2495949634" ], "abstract": [ "SLG resolution, a type of tabled resolution and a technique of logic programming (LP), has polynomial data complexity for ground Datalog queries with negation, making it suitable for deductive database (DDB). It evaluates non-stratified negation according to the three-valued Well-Founded Semantics, making it a suitable starting point for non-monotonic reasoning (NMR). Furthermore, SLG has an efficient partial implementation in the SLG-WAM which, in the XSB logic programming system, has proven an order of magnitude faster than current DDR systems for in-memory queries. Building on SLG resolution, we formulate a method for distributed tabled resolution termed Multi-Processor SLG (SLGMP). Since SLG is modeled as a forest of trees, it then becomes natural to think of these trees as executing at various places over a distributed network in SLGMP. Incremental completion, which is necessary for efficient sequential evaluation, can be modeled through the use of a subgoal dependency graph (SDG), or its approximation. However the subgoal dependency graph is a global property of a forest; in a distributed environment each processor should maintain as small a view of the SDG as possible. The formulation of what and when dependency information must be maintained and propagated in order for distributed completion to be performed safely is the central contribution of SLGMP. Specifically, subgoals in SLGMP are properly numbered such that most of the dependencies among subgoals are represented by the subgoal numbers. Dependency information that is not represented by subgoal numbers is maintained explicitly at each processor and propagated by each processor. SLGMP resolution aims at efficiently evaluating normal logic programs in a distributed environment. SLGMP operations are explicitly defined and soundness and completeness is proven for SLGMP with respect to SLG for programs which terminate for SLG evaluation. 
The resulting framework can serve as a basis for query processing and non-monotonic reasoning within a distributed environment. We also implemented Distributed XSB, a prototype implementation of SLGMP. Distributed XSB, as a distributed tabled evaluation model, is really a distributed problem solving system, where the data to solve the problem is distributed and each participating process cooperates with other participants (perhaps including itself), by sending and receiving data. Distributed XSB proposes a distributed data computing model, where there may be cyclic dependencies among participating processes and the dependencies can be both negative and positive." ] }
cs0308015
2109841503
OpenPGP, an IETF Proposed Standard based on the PGP application, has its own Public Key Infrastructure (PKI) architecture which is different from the one based on X.509, another standard from the ITU. This paper describes the OpenPGP PKI: the historical perspective as well as its current use. The current OpenPGP PKI issues include the capability of a PGP keyserver and its performance. PGP keyservers have been developed and operated by volunteers since the 1990s. The keyservers distribute, merge, and expire the OpenPGP public keys. Major keyserver managers from several countries have built the globally distributed network of PGP keyservers. However, the current PGP Public Keyserver (pksd) has some limitations: it does not fully support the OpenPGP format, so it is neither expandable nor flexible, and it lacks any cluster technology. Finally we introduce the project on the next-generation OpenPGP public keyserver, called OpenPKSD, led by Hironobu Suzuki, one of the authors, and funded by the Japanese Information-technology Promotion Agency (IPA).
A ``web of trust'', as used in PGP, is referred to in several research areas, including peer-to-peer authentication @cite_9 , trust computation @cite_7 @cite_14 , and privacy-enhancing technology @cite_6 . However, there are few descriptions of the PGP keyserver itself. This may be because the PGP keyserver mechanism is simple: it is not a CA but just a pool of public keys. From the users' viewpoint, a PGP keyserver holds a large number of OpenPGP public keys, which provide interesting material for social analysis of the network community. For example, OpenPGP keyserver developer Jonathan McDowell also developed the ``Experimental PGP key path finder'' @cite_20 , which searches for and displays the chain of certification between users.
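At its core, a key path finder of this kind performs a graph search over key signatures. The sketch below is a minimal breadth-first search returning the shortest chain of certification between two keys; the key names and signature edges are fabricated for illustration, and real keyservers of course operate on actual key IDs and signature packets.

```python
from collections import deque

def cert_path(signs, src, dst):
    """signs: key -> iterable of keys it has signed.
    Returns the shortest chain of certification, or None."""
    prev = {src: None}
    frontier = deque([src])
    while frontier:
        k = frontier.popleft()
        if k == dst:
            # reconstruct the chain by walking predecessors back
            path = []
            while k is not None:
                path.append(k)
                k = prev[k]
            return path[::-1]
        for s in signs.get(k, ()):
            if s not in prev:
                prev[s] = k
                frontier.append(s)
    return None                   # no chain of certification exists

signs = {"alice": ["bob"], "bob": ["carol", "dave"], "carol": ["dave"]}
print(cert_path(signs, "alice", "dave"))   # ['alice', 'bob', 'dave']
```

Note that signatures are directed: with the toy data above there is no path from "dave" back to "alice".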
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_6", "@cite_20" ], "mid": [ "1879433355", "1494178662", "2133032376", "2601330228", "2538863639" ], "abstract": [ "PGP is built upon a Distributed Web of Trust in which a user’s trustworthiness is established by others who can vouch through a digital signature for that user’s identity. Preventing its wholesale adoption are a number of inherent weaknesses to include (but not limited to) the following: 1) Trust Relationships are built on a subjective honor system, 2) Only first degree relationships can be fully trusted, 3) Levels of trust are difficult to quantify with actual values, and 4) Issues with the Web of Trust itself (Certification and Endorsement). Although the security that PGP provides is proven to be reliable, it has largely failed to garner large scale adoption. In this paper, we propose several novel contributions to address the aforementioned issues with PGP and associated Web of Trust. To address the subjectivity of the Web of Trust, we provide a new certificate format based on Bitcoin which allows a user to verify a PGP certificate using Bitcoin identity-verification transactions - forming first degree trust relationships that are tied to actual values (i.e., number of Bitcoins transferred during transaction). Secondly, we present the design of a novel Distributed PGP key server that leverages the Bitcoin transaction blockchain to store and retrieve our certificates.", "From the Publisher: Use of the Internet is expanding beyond anyone's expectations. As corporations, government offices, and ordinary citizens begin to rely on the information highway to conduct business, they are realizing how important it is to protect their communications -- both to keep them a secret from prying eyes and to ensure that they are not altered during transmission. Encryption, which until recently was an esoteric field of interest only to spies, the military, and a few academics, provides a mechanism for doing this. 
PGP, which stands for Pretty Good Privacy, is a free and widely available encryption program that lets you protect files and electronic mail. Written by Phil Zimmermann and released in 1991, PGP works on virtually every platform and has become very popular both in the U.S. and abroad. Because it uses state-of-the-art public key cryptography, PGP can be used to authenticate messages, as well as keep them secret. With PGP, you can digitally \"sign\" a message when you send it. By checking the digital signature at the other end, the recipient can be sure that the message was not changed during transmission and that the message actually came from you. PGP offers a popular alternative to U.S. government initiatives like the Clipper Chip because, unlike Clipper, it does not allow the government or any other outside agency access to your secret keys. PGP: Pretty Good Privacy by Simson Garfinkel is both a readable technical user's guide and a fascinating behind-the-scenes look at cryptography and privacy. Part I, \"PGP Overview,\" introduces PGP and the cryptography that underlies it. Part II, \"Cryptography History and Policy,\" describes the history of PGP -- its personalities, legal battles, and other intrigues; it also provides background on the battles over public key cryptography patents and the U.S. government export restrictions, and other aspects of the ongoing public debates about privacy and free speech. Part III, \"Using PGP,\" describes how to use PGP: protecting files and email, creating and using keys, signing messages, certifying and distributing keys, and using key servers. Part IV, \"Appendices,\" describes how to obtain PGP from Internet sites, how to install it on PCs, UNIX systems, and the Macintosh, and other background information. 
The book also contains a glossary, a bibliography, and a handy reference card that summarizes all of the PGP commands, environment variables, and configuration variables.", "Authentication using a path of trusted intermediaries, each able to authenticate the next in the path, is a well-known technique for authenticating channels in a large distributed system. In this paper, we explore the use of multiple paths to redundantly authenticate a channel and focus on two notions of path independence-disjoint paths and connective paths-that seem to increase assurance in the authentication. We give evidence that there are no efficient algorithms for locating maximum sets of paths with these independence properties and propose several approximation algorithms for these problems. We also describe a service we have deployed, called PathServer, that makes use of our algorithms to find such sets of paths to support authentication in PGP applications.", "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts. SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. 
The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.", "The semantics of online authentication in the web are rather straightforward: if Alice has a certificate binding Bob's name to a public key, and if a remote entity can prove knowledge of Bob's private key, then (barring key compromise) that remote entity must be Bob. However, in reality, many websites' and the majority of the most popular ones-are hosted at least in part by third parties such as Content Delivery Networks (CDNs) or web hosting providers. Put simply: administrators of websites who deal with (extremely) sensitive user data are giving their private keys to third parties. Importantly, this sharing of keys is undetectable by most users, and widely unknown even among researchers. In this paper, we perform a large-scale measurement study of key sharing in today's web. We analyze the prevalence with which websites trust third-party hosting providers with their secret keys, as well as the impact that this trust has on responsible key management practices, such as revocation. Our results reveal that key sharing is extremely common, with a small handful of hosting providers having keys from the majority of the most popular websites. We also find that hosting providers often manage their customers' keys, and that they tend to react more slowly yet more thoroughly to compromised or potentially compromised keys." ] }
cs0308015
2109841503
OpenPGP, an IETF Proposed Standard based on the PGP application, has its own Public Key Infrastructure (PKI) architecture which is different from the one based on X.509, another standard from the ITU. This paper describes the OpenPGP PKI: the historical perspective as well as its current use. The current OpenPGP PKI issues include the capability of a PGP keyserver and its performance. PGP keyservers have been developed and operated by volunteers since the 1990s. The keyservers distribute, merge, and expire the OpenPGP public keys. Major keyserver managers from several countries have built the globally distributed network of PGP keyservers. However, the current PGP Public Keyserver (pksd) has some limitations: it does not fully support the OpenPGP format, so it is neither expandable nor flexible, and it lacks any cluster technology. Finally we introduce the project on the next-generation OpenPGP public keyserver, called OpenPKSD, led by Hironobu Suzuki, one of the authors, and funded by the Japanese Information-technology Promotion Agency (IPA).
The OpenPGP PKI itself can be described as a superset of PKI @cite_27 ; however, combining the OpenPGP PKI with other authentication systems is challenging work in both the theoretical and the operational fields. Formal study of the trust relationships of PKI started in the late 1990s @cite_7 @cite_14 , and the GnuPG development version of December 2002 started to support trust calculation with GnuPG's trust signatures.
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_7" ], "mid": [ "2601330228", "2883738719", "2511395838" ], "abstract": [ "The Public Key Infrastructure (PKI) in use today on the Internet to secure communications has several drawbacks arising from its centralised and non-transparent design. In the past there has been instances of certificate authorities publishing rogue certificates for targeted attacks, and this has been difficult to immediately detect as certificate authorities are not transparent about the certificates they issue. Furthermore, the centralised selection of trusted certificate authorities by operating system and browser vendors means that it is not practical to untrust certificate authorities that have issued rogue certificates, as this would disrupt the TLS process for many other hosts. SCPKI is an alternative PKI system based on a decentralised and transparent design using a web-of-trust model and a smart contract on the Ethereum blockchain, to make it easily possible for rogue certificates to be detected when they are published. The web-of-trust model is designed such that an entity or authority in the system can verify (or vouch for) fine-grained attributes of another entity's identity (such as company name or domain name), as an alternative to the centralised certificate authority identity verification model.", "The public key infrastructure (PKI) based authentication protocol provides the basic security services for vehicular ad-hoc networks (VANETs). However, trust and privacy are still open issues due to the unique characteristics of vehicles. It is crucial for VANETs to prevent internal vehicles from broadcasting forged messages while simultaneously protecting the privacy of each vehicle against tracking attacks. In this paper, we propose a blockchain-based anonymous reputation system (BARS) to break the linkability between real identities and public keys to preserve privacy. 
The certificate and revocation transparency is implemented efficiently using two blockchains. We design a trust model to improve the trustworthiness of messages relying on the reputation of the sender based on both direct historical interactions and indirect opinions about the sender. Experiments are conducted to evaluate BARS in terms of security and performance and the results show that BARS is able to establish distributed trust management, while protecting the privacy of vehicles.", "The current Transport Layer Security (TLS) Public-Key Infrastructure (PKI) is based on a weakest-link security model that depends on over a thousand trust roots. The recent history of malicious and compromised Certification Authorities has fueled the desire for alternatives. Creating a new, secure infrastructure is, however, a surprisingly challenging task due to the large number of parties involved and the many ways that they can interact. A principled approach to its design is therefore mandatory, as humans cannot feasibly consider all the cases that can occur due to the multitude of interleavings of actions by legitimate parties and attackers, such as private key compromises (e.g., domain, Certification Authority, log server, other trusted entities), key revocations, key updates, etc. We present ARPKI, a PKI architecture that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI efficiently supports these operations, and gracefully handles catastrophic events such as domain key loss or compromise. Moreover ARPKI is the first PKI architecture that is co-designed with a formal model, and we verify its core security property using the TAMARIN prover. We prove that ARPKI offers extremely strong security guarantees, where compromising even n 1 trusted signing and verifying entities is insufficient to launch a man-in-the-middle attack. 
Moreover, ARPKI’s use deters misbehavior as all operations are publicly visible. Finally, we present a proof-of-concept implementation that provides all the features required for deployment. Our experiments indicate that ARPKI efficiently handles the certification process with low overhead. It does not incur additional latency to TLS, since no additional round trips are required." ] }
cs0308044
2950013766
A new method of hierarchical clustering of graph vertexes is suggested. In the method, the graph partition is determined with an equivalence relation satisfying a recursive definition stating that vertexes are equivalent if the vertexes they point to (or vertexes pointing to them) are equivalent. Iterative application of the partitioning yields a hierarchical clustering of graph vertexes. The method is applied to the citation graph of hep-th. The outcome is a two-level classification scheme for the subject field presented in hep-th, and indexing of the papers from hep-th in this scheme. A number of tests show that the classification obtained is adequate.
In this subsection, we demonstrate that the above equivalence relation @math is a natural development of the recursive algorithms PageRank @cite_5 , HITS @cite_12 , and SimRank @cite_3 , which have lately become quite popular among network miners.
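The recursive flavor of the equivalence ("vertexes are equivalent if the vertexes they point to are equivalent") can be sketched as a color-refinement-style iteration: start with all vertexes in one class and repeatedly split classes until two vertexes share a class only if the classes of their out-neighbors coincide. This is the editor's illustrative reading of the definition, not the paper's exact algorithm, and the graph is a made-up example.

```python
def refine(out_edges):
    """out_edges: vertex -> list of out-neighbors.
    Returns a stable coloring: equal colors = equivalent vertexes."""
    color = {v: 0 for v in out_edges}          # one initial class
    while True:
        # signature: own class plus set of classes pointed to
        sig = {v: (color[v], frozenset(color[w] for w in out_edges[v]))
               for v in out_edges}
        palette = {s: i for i, s in
                   enumerate(sorted(set(sig.values()), key=repr))}
        new = {v: palette[sig[v]] for v in out_edges}
        # signatures include the old class, so 'new' only refines
        # 'color'; an unchanged class count means a fixed point
        if len(set(new.values())) == len(set(color.values())):
            return color
        color = new

g = {"a": ["b"], "b": ["e"], "d": ["e"], "e": []}
classes = refine(g)
print(classes["b"] == classes["d"], classes["a"] == classes["b"])
# True False  -- b and d both point into e's class; a does not
```

Iterating this partitioning, as the abstract describes, would then yield a hierarchical clustering of the vertexes.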
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_12" ], "mid": [ "2951132123", "1545879303", "2117831564" ], "abstract": [ "We address the problem of replicating a Voronoi diagram @math of a planar point set @math by making proximity queries, which are of three possible (in decreasing order of information content): 1. the exact location of the nearest site(s) in @math ; 2. the distance to and label(s) of the nearest site(s) in @math ; 3. a unique label for every nearest site in @math . We provide algorithms showing how queries of Type 1 and Type 2 allow an exact cloning of @math with @math queries and @math processing time. We also prove that queries of Type 3 can never exactly clone @math , but we show that with @math queries we can construct an @math -approximate cloning of @math . In addition to showing the limits of nearest-neighbor database security, our methods also provide one of the first natural algorithmic applications of retroactive data structures.", "Similarity assessment is one of the core tasks in hyperlink analysis. Recently, with the proliferation of applications, e.g., web search and collaborative filtering, SimRank has been a well-studied measure of similarity between two nodes in a graph. It recursively follows the philosophy that \"two nodes are similar if they are referenced (have incoming edges) from similar nodes\", which can be viewed as an aggregation of similarities based on incoming paths. Despite its popularity, SimRank has an undesirable property, i.e., \"zero-similarity\": It only accommodates paths with equal length from a common \"center\" node. Thus, a large portion of other paths are fully ignored. This paper attempts to remedy this issue. (1) We propose and rigorously justify SimRank*, a revised version of SimRank, which resolves such counter-intuitive \"zero-similarity\" issues while inheriting merits of the basic SimRank philosophy. 
(2) We show that the series form of SimRank* can be reduced to a fairly succinct and elegant closed form, which looks even simpler than SimRank, yet enriches semantics without suffering from increased computational cost. This leads to a fixed-point iterative paradigm of SimRank* in O(Knm) time on a graph of n nodes and m edges for K iterations, which is comparable to SimRank. (3) To further optimize SimRank* computation, we leverage a novel clustering strategy via edge concentration. Due to its NP-hardness, we devise an efficient and effective heuristic to speed up SimRank* computation to O(Knm) time, where m is generally much smaller than m. (4) Using real and synthetic data, we empirically verify the rich semantics of SimRank*, and demonstrate its high computation efficiency.", "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
DBXplorer @cite_3 was developed by Microsoft Research and, like BANKS and Mragyati, it uses join trees to compute an SQL statement to access the data. The algorithm used to compute these differs, as does the implementation, which was developed for Microsoft's IIS and SQL Server, the others being implemented in Java. DbSurfer does not require access to the database to discover the trails, only to display the data when a user clicks on a link in a trail.
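The join-discovery idea these systems share can be sketched simply: treat tables as nodes and foreign-key references as edges, then find a connecting path (a trail, or the spine of a join tree) between the tables that contain the query keywords. The schema and keyword hits below are invented, and this breadth-first search is only a simplification of what any of the cited systems actually implements.

```python
from collections import deque

# toy foreign-key graph, treated as undirected for connectivity
fk_edges = {
    "author":     ["authorship"],
    "authorship": ["author", "paper"],
    "paper":      ["authorship", "venue"],
    "venue":      ["paper"],
}

def join_path(start, goal):
    """Shortest table-to-table path through foreign-key edges."""
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        t = frontier.popleft()
        if t == goal:
            path = []
            while t is not None:
                path.append(t)
                t = prev[t]
            return path[::-1]
        for n in fk_edges[t]:
            if n not in prev:
                prev[n] = t
                frontier.append(n)
    return None

# suppose one keyword matched a row in "author", another in "venue":
print(join_path("author", "venue"))
# ['author', 'authorship', 'paper', 'venue']
```

Each edge of the returned path corresponds to one primary-key/foreign-key join in the generated SQL.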
{ "cite_N": [ "@cite_3" ], "mid": [ "2121350579" ], "abstract": [ "Internet search engines have popularized the keyword-based search paradigm. While traditional database management systems offer powerful query languages, they do not allow keyword-based search. In this paper, we discuss DBXplorer, a system that enables keyword-based searches in relational databases. DBXplorer has been implemented using a commercial relational database and Web server and allows users to interact via a browser front-end. We outline the challenges and discuss the implementation of our system, including results of extensive experimental evaluation." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
DISCOVER is the latest offering and shares many similarities with Mragyati, BANKS and DbXplorer, but uses a greedy algorithm to compute a near-optimal execution plan for the candidate networks @cite_17 . It also takes greater advantage of the database's internal keyword search facilities by using Oracle's Context cartridge for the text indexing.
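A hedged sketch of the greedy idea described in the cited abstract: when several candidate networks share join subexpressions, repeatedly materialize the subexpression shared by the most networks, so its cost is paid once. The candidate networks below are fabricated toy join sets, and this is only an illustration of the reuse heuristic, not DISCOVER's actual plan generator.

```python
from collections import Counter

def greedy_shared(networks):
    """networks: list of frozensets of atomic join edges.
    Returns an evaluation order favoring widely shared joins."""
    order = []
    remaining = [set(n) for n in networks]
    while any(remaining):
        # pick the join edge still needed by the most networks
        counts = Counter(e for n in remaining for e in n)
        edge, _ = counts.most_common(1)[0]
        order.append(edge)
        for n in remaining:
            n.discard(edge)       # its result is now reusable
    return order

nets = [frozenset({"A-B", "B-C"}),
        frozenset({"A-B", "B-D"}),
        frozenset({"A-B"})]
plan = greedy_shared(nets)
print(plan[0])    # 'A-B': shared by all three networks, done first
```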
{ "cite_N": [ "@cite_17" ], "mid": [ "2098388305" ], "abstract": [ "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
have also introduced a system for keyword search @cite_40 . Their system works by finding results for queries of the form @math near @math (e.g. find movie near travolta cage). Two sets of entries are found, and the contents of the first set are returned based upon their proximity to members of the second set. In comparison with DbSurfer, there is neither support for navigation of the database (manual or assisted) nor any display of the context of the results.
{ "cite_N": [ "@cite_40" ], "mid": [ "2062180302" ], "abstract": [ "This article deals with the computation of consistent answers to queries on relational databases that violate primary key constraints. A repair of such inconsistent database is obtained by selecting a maximal number of tuples from each relation without ever selecting two distinct tuples that agree on the primary key. We are interested in the following problem: Given a Boolean conjunctive query q, compute a Boolean first-order (FO) query @j such that for every database db, @j evaluates to true on db if and only if q evaluates to true on every repair of db. Such @j is called a consistent FO rewriting of q. We use novel techniques to characterize classes of queries that have a consistent FO rewriting. In this way, we are able to extend previously known classes and discover new ones. Finally, we use an Ehrenfeucht-Fraisse game to show the non-existence of a consistent FO rewriting for @[email protected]?y(R([email protected]?,y)@?R([email protected]?,c)), where c is a constant and the first coordinate of R is the primary key." ] }
cs0307073
1636672680
We present a new application for keyword search within relational databases, which uses a novel algorithm to solve the join discovery problem by finding Memex-like trails through the graph of foreign key dependencies. It differs from previous efforts in the algorithms used, in the presentation mechanism and in the use of primary-key only database queries at query-time to maintain a fast response for users. We present examples using the DBLP data set.
The join discovery problem is related to the problem tackled by the universal relation model @cite_18 @cite_11 . The idea underlying the universal relation model is to allow querying the database solely through its attributes, without explicitly specifying the join paths. The expressive querying power of such a system is essentially that of a union of conjunctive queries (see @cite_19 ). DbSurfer takes this approach further by allowing the user to specify values (keywords) without stating their related attributes, and by providing relevance-based filtering.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_11" ], "mid": [ "2396012111", "2098388305", "2790840297" ], "abstract": [ "The join ordering problem is a fundamental challenge that has to be solved by any query optimizer. Since the high-performance RDF systems are often implemented as triple stores (i.e., they represent RDF data as a single table with three attributes, at least conceptually), the query optimization strategies employed by such systems are often adopted from relational query optimization. In this paper we show that the techniques borrowed from traditional SQL query optimization (such as Dynamic Programming algorithm or greedy heuristics) are not immediately capable of handling large SPARQL queries. We introduce a new join ordering algorithm that performs a SPARQL-tailored query simplification. Furthermore, we present a novel RDF statistical synopsis that accurately estimates cardinalities in large SPARQL queries. Our experiments show that this algorithm is highly superior to the state-of-the-art SPARQL optimization approaches, including the RDF-3X’s original Dynamic Programming strategy.", "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. 
We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.", "Efficient join processing is one of the most fundamental and well-studied tasks in database research. In this work, we examine algorithms for natural join queries over many relations and describe a novel algorithm to process these queries optimally in terms of worst-case data complexity. Our result builds on recent work by Atserias, Grohe, and Marx, who gave bounds on the size of a full conjunctive query in terms of the sizes of the individual relations in the body of the query. These bounds, however, are not constructive: they rely on Shearer's entropy inequality which is information-theoretic. Thus, the previous results leave open the question of whether there exist algorithms whose running time achieve these optimal bounds. An answer to this question may be interesting to database practice, as we show in this paper that any project-join plan is polynomially slower than the optimal bound for some queries. We construct an algorithm whose running time is worst-case optimal for all natural join queries. Our result may be of independent interest, as our algorithm also yields a constructive proof of the general fractional cover bound by Atserias, Grohe, and Marx without using Shearer's inequality. In addition, we show that this bound is equivalent to a geometric inequality by Bollobas and Thomason, one of whose special cases is the famous Loomis-Whitney inequality. Hence, our results algorithmically prove these inequalities as well. Finally, we discuss how our algorithm can be used to compute a relaxed notion of joins." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Ideas stemming from linear logic have been used previously by Abramsky in the study of classical reversible computation @cite_44 .
{ "cite_N": [ "@cite_44" ], "mid": [ "1675985504" ], "abstract": [ "We review some of our recent results (with collaborators) on information processing in an ordered linear spaces framework for probabilistic theories. These include demonstrations that many \"inherently quantum\" phenomena are in reality quite general characteristics of non-classical theories, quantum or otherwise. As an example, a set of states in such a theory is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. As another example, information that can be obtained about a system in this framework without causing disturbance to the system state, must be inherently classical. We also review results on teleportation protocols in the framework, and the fact that any non-classical theory without entanglement allows exponentially secure bit commitment in this framework. Finally, we sketch some ways of formulating our framework in terms of categories, and in this light consider the relation of our work to that of Abramsky, Coecke, Selinger, Baez and others on information processing and other aspects of theories formulated categorically." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
One of the earlier attempts at formulating a language for quantum computation was Greg Baker's Qgol @cite_23 . Its implementation (which remained incomplete) used so-called uniqueness types (similar but not identical to our linear variables) for quantum objects @cite_36 . The language is not universal for quantum computation.
{ "cite_N": [ "@cite_36", "@cite_23" ], "mid": [ "2115164346", "2157601714" ], "abstract": [ "In this paper, we show that all languages in NP have logarithmic-size quantum proofs which can be verified provided that two unentangled copies are given. More formally, we introduce the complexity class QMAlog(2) and show that 3COL E QMAlog(2). To obtain this strong and surprising result we have to relax the usual requirements: the completeness is one but the soundness is 1-1 poly. Since the natural classical equivalent of QMAlog(2) is uninteresting (it would be equal to P), this result, like many others, stresses the fact that quantum information is fundamentally different from classical information. It also contributes to our understanding of entanglement since QMAlog = BQP[7].", "We introduce the language QML, a functional language for quantum computations on finite types. Its design is guided by its categorical semantics: QML programs are interpreted by morphisms in the category FQC of finite quantum computations, which provides a constructive semantics of irreversible quantum computations realisable as quantum gates. QML integrates reversible and irreversible quantum computations in one language, using first order strict linear logic to make weakenings explicit. Strict programs are free from decoherence and hence preserve superpositions and entanglement -which is essential for quantum parallelism." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Another imperative language, based on C++, is the Q language developed by Bettelli, Calarco and Serafini @cite_7 . As in the case of QCL, no formal calculus is provided. A simulator is also available.
{ "cite_N": [ "@cite_7" ], "mid": [ "102856817" ], "abstract": [ "We present an imperative quantum programming language LanQ which was designed to support combination of quantum and classical programming and basic process operations - process creation and interprocess communication. The language can thus be used for implementing both classical and quantum algorithms and protocols. Its syntax is similar to that of C language what makes it easy to learn for existing programmers. In this paper, we present operational semantics of the language and a proof of type soundness of the noncommunicating part of the language. We provide an example run of a quantum random number generator." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A more theoretical approach is taken by Selinger in his description of the functional language QPL @cite_26 . This language has both a graphical and a textual representation. A formal semantics is provided.
{ "cite_N": [ "@cite_26" ], "mid": [ "1847465957" ], "abstract": [ "In order to define models of simply typed functional programming languages being closer to the operational semantics of these languages, the notions of sequentiality, stability and seriality were introduced. These works originated from the definability problem for PCF, posed in [Sco72], and the full abstraction problem for PCF, raised in [Plo77]." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
The imperative language qGCL, developed by Sanders and Zuliani @cite_63 , is based on Dijkstra's guarded command language. It has a formal semantics and proof system.
{ "cite_N": [ "@cite_63" ], "mid": [ "2034373223" ], "abstract": [ "We present a new approach to adding state and state-changing commands to a term language. As a formal semantics it can be seen as a generalization of predicate transformer semantics, but beyond that it brings additional opportunities for specifying and verifying programs. It is based on a construct called a phrase, which is a term of the form C r t, where C stands for a command and t stands for a term of any type. If R is boolean, C r R is closely related to the weakest precondition wp(C,R). The new theory draws together functional and imperative programming in a simple way. In particular, imperative procedures and functions are seen to be governed by the same laws as classical functions. We get new techniques for reasoning about programs, including the ability to dispense with logical variables and their attendant complexities. The theory covers both programming and specification languages, and supports unbounded demonic and angelic nondeterminacy in both commands and terms." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A previous attempt to construct a lambda calculus for quantum computation is described by Maymin in @cite_27 . However, his calculus appears to be strictly stronger than the quantum Turing machine @cite_19 . It seems to go beyond quantum mechanics in that it does not appear to have a unitary and reversible operational model, instead relying on a more general class of transformations. It is an open question whether the calculus is physically realizable.
{ "cite_N": [ "@cite_19", "@cite_27" ], "mid": [ "1668464107", "1676498955" ], "abstract": [ "We show that the lambda-q calculus can efficiently simulate quantum Turing machines by showing how the lambda-q calculus can efficiently simulate a class of quantum cellular automaton that are equivalent to quantum Turing machines. We conclude by noting that the lambda-q calculus may be strictly stronger than quantum computers because NP-complete problems such as satisfiability are efficiently solvable in the lambda-q calculus but there is a widespread doubt that they are efficiently solvable by quantum computers.", "This paper introduces a formal met alanguage called the lambda-q calculus for the specification of quantum programming languages. This met alanguage is an extension of the lambda calculus, which provides a formal setting for the specification of classical programming languages. As an intermediary step, we introduce a formal met alanguage called the lambda-p calculus for the specification of programming languages that allow true random number generation. We demonstrate how selected randomized algorithms can be programmed directly in the lambda-p calculus. We also demonstrate how satisfiability can be solved in the lambda-q calculus." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
A seminar by Wehr @cite_41 suggests that linear logic may be useful in constructing a calculus for quantum computation within the mathematical framework of Chu spaces. However, the author stops short of developing such a calculus.
{ "cite_N": [ "@cite_41" ], "mid": [ "1534027338" ], "abstract": [ "The paper deals with the relationship of committed-choice logic programming languages and their proof-theoretic semantics based on linear logic. Fragments of linear logic are used in order to express various aspects of guarded clause concurrent programming and behavior of the system. The outlined translation comprises structural properties of concurrent computations, providing a sound and complete model wrt. to the interleaving operational semantics based on transformation systems. In the presence of variables, just asynchronous properties are captured without resorting to special proof-generating strategies, so the model is only correct for deadlock-free programs." ] }
quant-ph0307150
2110240249
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of linear logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.
Abramsky and Coecke describe a realization of a model of multiplicative linear logic via the quantum processes of entangling and de-entangling by means of typed projectors. They briefly discuss how these processes can be represented as terms of an affine lambda calculus @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "2807350215" ], "abstract": [ "We show that any language in nondeterministic time @math , where the number of iterated exponentials is an arbitrary function @math , can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness @math and soundness @math , where the number of iterated exponentials is @math and @math is a universal constant. The result was previously known for @math and @math ; we obtain it for any time-constructible function @math . The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result (unpublished) on the uncomputability of the entangled value of multiprover games. Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations." ] }
cs0306044
2952425141
We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of , which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction.
In addition, there is a long history of interest in the optimality of distributed algorithms under certain conditions, such as a particular pattern of failures @cite_23 @cite_26 @cite_11 @cite_35 @cite_27 @cite_33 , or a particular pattern of message delivery @cite_1 @cite_42 @cite_3 . In a sense, work on optimality envisions a fundamentally different role for the adversary, in which it tries to produce bad performance for both the candidate and champion algorithms; in contrast, the adversary used in competitive analysis usually cooperates with the champion.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_33", "@cite_42", "@cite_1", "@cite_3", "@cite_27", "@cite_23", "@cite_11" ], "mid": [ "2134659242", "2139297408", "1726079515", "2963230393", "2138720431", "2040176268", "2468687866", "2552732996", "2045376554" ], "abstract": [ "We consider optimal load balancing in a distributed computing environment consisting of homogeneous unreliable processors. Each processor receives its own sequence of tasks from outside users, some of which can be redirected to the other processors. Processing times are independent and identically distributed with an arbitrary distribution. The arrival sequence of outside tasks to each processor may be arbitrary as long as it is independent of the state of the system. Processors may fail, with arbitrary failure and repair processes that are also independent of the state of the system. The only information available to a processor is the history of its decisions for routing work to other processors, and the arrival times of its own arrival sequence. We prove the optimality of the round-robin policy, in which each processor sends all the tasks that can be redirected to each of the other processors in turn. We show that, among all policies that balance workload, round robin stochastically minimizes the nth task completion time for all n, and minimizes response times and queue lengths in a separable increasing convex sense for the entire system. We also show that if there is a single centralized controller, round-robin is the optimal policy, and a single controller using round-robin routing is better than the optimal distributed system in which each processor routes its own arrivals. 
Again \"optimal\" and \"better\" are in the sense of stochastically minimizing task completion times, and minimizing response time and queue lengths in the separable increasing convex sense.", "In this work, we study the notion of competing campaigns in a social network and address the problem of influence limitation where a \"bad\" campaign starts propagating from a certain node in the network and use the notion of limiting campaigns to counteract the effect of misinformation. The problem can be summarized as identifying a subset of individuals that need to be convinced to adopt the competing (or \"good\") campaign so as to minimize the number of people that adopt the \"bad\" campaign at the end of both propagation processes. We show that this optimization problem is NP-hard and provide approximation guarantees for a greedy solution for various definitions of this problem by proving that they are submodular. We experimentally compare the performance of the greedy method to various heuristics. The experiments reveal that in most cases inexpensive heuristics such as degree centrality compare well with the greedy approach. We also study the influence limitation problem in the presence of missing data where the current states of nodes in the network are only known with a certain probability and show that prediction in this setting is a supermodular problem. We propose a prediction algorithm that is based on generating random spanning trees and evaluate the performance of this approach. 
The experiments reveal that using the prediction algorithm, we are able to tolerate about 90 missing data before the performance of the algorithm starts degrading and even with large amounts of missing data the performance degrades only to 75 of the performance that would be achieved with complete data.", "We prove depth optimality of sorting networks from \"The Art of Computer Programming\".Sorting networks posses symmetry that can be used to generate a few representatives.These representatives can be efficiently encoded using regular expressions.We construct SAT formulas whose unsatisfiability is sufficient to show optimality.Resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ź 16 inputs, quoting optimality for n ź 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ź n ź 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ź n ź 16 inputs. Exploiting symmetry, we construct a small set R n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R n . For each network in R n , we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ź 10 inputs, our algorithm is orders of magnitude faster than prior ones.", "We introduce randomized Limited View (LV) adversary codes that provide protection against an adversary that uses their partial view of the channel to construct an adversarial error vector that is added to the channel. 
For a codeword of length N, the adversary selects a subset of size ρrN of components to “see”, and then “adds” an adversarial error vector of weight ρwN to the codeword. Performance of the code is measured by the probability of the decoder failure in recovering the sent message. An (N, qRN, δ)-limited view adversary is a code of rate R that ensures that the success chance of the adversary in making decoder to fail is bounded by δ. Our main motivation to study these codes is providing protection for wireless communication at the physical layer of networks. We formalize the definition of adversarial error and decoder failure, construct a code with efficient encoding and decoding that allows the adversary to, depending on the code rate, read up to half of the sent codeword and add error on the same coordinates. The code is non-linear, has an efficient decoding algorithm, and is constructed using a message authentication code (MAC) and a Folded Reed-Solomon (FRS) code. The decoding algorithm uses an innovative approach that combines the list decoding algorithm of the FRS codes and the MAC verification algorithm to eliminate the exponential size of the list output from the decoding algorithm. We discuss our results and future work.", "We propose two distributed algorithms to maintain, respectively, a maximal matching and a maximal independent set in a given ad hoc network; our algorithms are fault tolerant (reliable) in the sense that the algorithms can detect occasional link failures and or new link creations in the network (due to mobility of the hosts) and can readjust the global predicates. We provide time complexity analysis of the algorithms in terms of the number of rounds needed for the algorithm to stabilize after a topology change, where a round is defined as a period of time in which each node in the system receives beacon messages from all its neighbors. 
In any ad hoc network, the participating nodes periodically transmit beacon messages for message transmission as well as to maintain the knowledge of the local topology at the node; as a result, the nodes get the information about their neighbor nodes synchronously (at specific time intervals). Thus, the paradigm to analyze the complexity of the self-stabilizing algorithms in the context of ad hoc networks is very different from the traditional concept of an adversary daemon used in proving the convergence and correctness of self-stabilizing distributed algorithms in general.", "In this paper, we develop algorithms for distributed computation of averages of the node data over networks with bandwidth/power constraints or large volumes of data. Distributed averaging algorithms fail to achieve consensus when deterministic uniform quantization is adopted. We propose a distributed algorithm in which the nodes utilize probabilistically quantized information, i.e., dithered quantization, to communicate with each other. The algorithm we develop is a dynamical system that generates sequences achieving a consensus at one of the quantization values almost surely. In addition, we show that the expected value of the consensus is equal to the average of the original sensor data. We derive an upper bound on the mean-square-error performance of the probabilistically quantized distributed averaging (PQDA). Moreover, we show that the convergence of the PQDA is monotonic by studying the evolution of the minimum-length interval containing the node values. We reveal that the length of this interval is a monotonically nonincreasing function with limit zero. We also demonstrate that all the node values, in the worst case, converge to the final two quantization bins at the same rate as standard unquantized consensus.
Finally, we report the results of simulations conducted to evaluate the behavior and the effectiveness of the proposed algorithm in various scenarios.", "Distributed algorithms for multi-robot systems rely on network communications to share information. However, the motion of the robots changes the network topology, which affects the information presented to the algorithm. For an algorithm to produce accurate output, robots need to communicate rapidly enough to keep the network topology correlated to their physical configuration. Infrequent communications will cause most multi-robot distributed algorithms to produce less accurate results, and cause some algorithms to stop working altogether. The central theme of this work is that algorithm accuracy, communications bandwidth, and physical robot speed are related. This thesis has three main contributions: First, I develop a prototypical multi-robot application and computational model, propose a set of complexity metrics to evaluate distributed algorithm performance on multi-robot systems, and introduce the idea of the robot speed ratio, a dimensionless measure of robot speed relative to message speed in networks that rely on multi-hop communication. The robot speed ratio captures key relationships between communications bandwidth, mobility, and algorithm accuracy, and can be used at design time to trade off between them. I use this speed ratio to evaluate the performance of existing distributed algorithms for multi-hop communication and navigation. Second, I present a definition of boundaries in multi-robot systems, and develop new distributed algorithms to detect and characterize them. Finally, I define the problem of dynamic task assignment, and present four distributed algorithms that solve this problem, each representing a different trade-off between accuracy, running time, and communication resources. 
All the algorithms presented in this work are provably correct under ideal conditions and produce verifiable real-world performance. They are self-stabilizing and robust to communications failures, population changes, and other errors. All the algorithms were tested on a swarm of 112 robots.", "Nowadays, in the world of limited attention, the techniques that maximize the spread of social influence are more than welcome. Companies try to maximize their profits on sales by providing customers with free samples, believing in the power of word-of-mouth marketing; governments and non-governmental organizations often want to introduce positive changes in the society by appropriately selecting individuals; and election candidates want to spend the least budget yet still win the election. In this work we propose the use of an evolutionary algorithm as a means for selecting seeds in social networks. By framing the problem as a genetic algorithm challenge we show that it is possible to outperform the well-known greedy algorithm in the problem of influence maximization for the linear threshold model in both quality (up to 16% better) and efficiency (up to 35 times faster). We implemented these two algorithms using a GPGPU approach, showing that the evolutionary algorithm can also benefit from GPU acceleration, making it efficient and scaling better than the greedy algorithm. As the experiments conducted by using three real world datasets reveal, the evolutionary approach proposed in this paper outperforms the greedy algorithm in terms of the outcome and it also scales much better than the greedy algorithm when the network size is increasing. The only drawback in the GPGPU approach so far is the maximum size of the network that can be processed - it is limited by the memory of the GPU card.
We believe that by showing the superiority of the evolutionary approach over the greedy algorithm, we will motivate the scientific community to look for ways to overcome this limitation of the GPU approach - we also suggest one of the possible paths to explore. Since the proposed approach is based only on topological features of the network, not on the attributes of nodes, its applications are broader than dataset-specific ones.", "In this paper, we prove a lower bound on the number of rounds required by a deterministic distributed protocol for broadcasting a message in radio networks whose processors do not know the identities of their neighbors. Such an assumption captures the main characteristic of mobile and wireless environments [3], i.e., the instability of the network topology. For any distributed broadcast protocol Π, for any n and for any D ≤ n/2, we exhibit a network G with n nodes and diameter D such that the number of rounds needed by Π for broadcasting a message in G is Ω(D log n). The result still holds even if the processors in the network use a different program and know n and D. We also consider the version of the broadcast problem in which an arbitrary number of processors issue at the same time an identical message that has to be delivered to the other processors. In such a case we prove that, even assuming that the processors know the network topology, Ω(n) rounds are required for solving the problem on a complete network (D=1) with n processors.
cs0306048
1675778287
Dataset storage, exchange, and access play a critical role in scientific applications. For such purposes netCDF serves as a portable and efficient file format and programming interface, which is popular in numerous scientific application domains. However, the original interface does not provide an efficient mechanism for parallel data storage and access. In this work, we present a new parallel interface for writing and reading netCDF datasets. This interface is derived with minimum changes from the serial netCDF interface but defines semantics for parallel access and is tailored for high performance. The underlying parallel I/O is achieved through MPI-IO, allowing for dramatic performance gains through the use of collective I/O optimizations. We compare the implementation strategies with HDF5 and analyze both. Our tests indicate programming convenience and significant I/O performance improvement with this parallel netCDF interface.
MPI-IO is a parallel I/O interface specified in the MPI-2 standard. It is implemented and used on a wide range of platforms. The most popular implementation, ROMIO @cite_13, is implemented portably on top of an abstract I/O device layer @cite_1 @cite_22 that enables portability to new underlying I/O systems. One of the most important features in ROMIO is collective I/O operations, which adopt a two-phase I/O strategy @cite_11 @cite_20 @cite_18 @cite_2 and improve parallel I/O performance by significantly reducing the number of I/O requests that would otherwise be issued as many small, noncontiguous requests. However, MPI-IO reads and writes data in a raw format without providing any functionality to effectively manage the associated metadata. Nor does it guarantee data portability, thereby making it inconvenient for scientists to organize, transfer, and share their application data.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_1", "@cite_2", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2174300520", "2104486653", "2567936601", "2111925167", "2151436730", "2081612620", "1991732708" ], "abstract": [ "The I/O access patterns of parallel programs often consist of accesses to a large number of small, noncontiguous pieces of data. If an application's I/O needs are met by making many small, distinct I/O requests, however, the I/O performance degrades drastically. To avoid this problem, MPI-IO allows users to access a noncontiguous data set with a single I/O function call. This feature provides MPI-IO implementations an opportunity to optimize data access. We describe how our MPI-IO implementation, ROMIO, delivers high performance in the presence of noncontiguous requests. We explain in detail the two key optimizations ROMIO performs: data sieving for noncontiguous requests from one process and collective I/O for noncontiguous requests from multiple processes. We describe how one can implement these optimizations portably on multiple machines and file systems, control their memory requirements, and also achieve high performance. We demonstrate the performance and portability with performance results for three applications--an astrophysics-application template (DIST3D), the NAS BTIO benchmark, and an unstructured code (UNSTRUC)--on five different parallel machines: HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, and SGI Origin2000.", "We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance. One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, lseek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. 
We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, noncontiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, and file preallocation. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance.", "ROMIO is a high-performance, portable implementation of MPI-IO (the I/O chapter in MPI-2). This document describes how to install and use ROMIO version 1.0.0 on the following machines: IBM SP; Intel Paragon; HP Convex Exemplar; SGI Origin 2000, Challenge, and Power Challenge; and networks of workstations (Sun4, Solaris, IBM, DEC, SGI, HP, FreeBSD, and Linux).", "We propose a strategy for implementing parallel I/O interfaces portably and efficiently. We have defined an abstract device interface for parallel I/O, called ADIO. Any parallel I/O API can be implemented on multiple file systems by implementing the API portably on top of ADIO, and implementing only ADIO on different file systems. 
This approach simplifies the task of implementing an API and yet exploits the specific high performance features of individual file systems. We have used ADIO to implement the Intel PFS interface and subsets of MPI-IO and IBM PIOFS interfaces on PFS, PIOFS, Unix, and NFS file systems. Our performance studies indicate that the overhead of using ADIO as an implementation strategy is very low.", "Client-side file caching has long been recognized as a file system enhancement to reduce the amount of data transfer between application processes and I/O servers. However, caching also introduces cache coherence problems when a file is simultaneously accessed by multiple processes. Existing coherence controls tend to treat the client processes independently and ignore the aggregate I/O access pattern. This causes a serious performance degradation for parallel I/O applications. In this paper we discuss our new implementation and present an extended performance evaluation on GPFS and Lustre parallel file systems. In addition to comparing our methods to traditional approaches, we examine the performance of MPI-IO caching under direct I/O mode to bypass the underlying file system cache. We also investigate the performance impact of two file domain partitioning methods to MPI collective I/O operations: one which creates a balanced workload and the other which aligns accesses to the file system stripe size. In our experiments, alignment results in better performance by reducing file lock contention. When the cache page size is set to a multiple of the stripe size, MPI-IO caching inherits the same advantage and produces significantly improved I/O bandwidth.", "MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. 
In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.", "Message Passing Interface (MPI) collective communication routines are widely used in parallel applications. In order for a collective communication routine to achieve high performance for different applications on different platforms, it must be adaptable to both the system architecture and the application workload. Current MPI implementations do not support such software adaptability and are not able to achieve high performance on many platforms. In this paper, we present STAR-MPI (Self Tuned Adaptive Routines for MPI collective operations), a set of MPI collective communication routines that are capable of adapting to system architecture and application workload. For each operation, STAR-MPI maintains a set of communication algorithms that can potentially be efficient at different situations. As an application executes, a STAR-MPI routine applies the Automatic Empirical Optimization of Software (AEOS) technique at run time to dynamically select the best performing algorithm for the application on the platform. 
We describe the techniques used in STAR-MPI, analyze STAR-MPI overheads, and evaluate the performance of STAR-MPI with applications and benchmarks. The results of our study indicate that STAR-MPI is robust and efficient. It is able to find efficient algorithms with reasonable overheads, and it outperforms traditional MPI implementations to a large degree in many cases." ] }
cs0305010
1668544585
Numerous systems for dissemination, retrieval, and archiving of documents have been developed in the past. Those systems often focus on one of these aspects and are hard to extend and combine. Typically, the transmission protocols, query and filtering languages are fixed as well as the interfaces to other systems. We rather envisage the seamless establishment of networks among the providers, repositories and consumers of information, supporting information retrieval and dissemination while being highly interoperable and extensible. We propose a framework with a single event-based mechanism that unifies document storage, retrieval, and dissemination. This framework offers complete openness with respect to document and metadata formats, transmission protocols, and filtering mechanisms. It specifies a high-level building kit, by which arbitrary processors for document streams can be incorporated to support the retrieval, transformation, aggregation and disaggregation of documents. Using the same kit, interfaces for different transmission protocols can be added easily to enable the communication with various information sources and information consumers.
@cite_12 is a push-model publish/subscribe system for alerting within a wide-area network. It offers scalability by distributing filters over servers within the network and saves bandwidth by filtering close to the event sources and bundling similar subscriptions. Siena is modular and offers sophisticated filtering mechanisms, including dynamic configuration and distribution. It lacks openness, document stream transformation, and scheduling.
{ "cite_N": [ "@cite_12" ], "mid": [ "2103856529" ], "abstract": [ "The publish/subscribe (pub/sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extending a pub/sub system in wireless networks has become a promising topic. However, most existing works focus on pub/sub systems in infrastructured wireless networks. To adapt pub/sub systems to mobile ad hoc networks, we propose DRIP, a dynamic Voronoi region-based pub/sub protocol. In our design, the network is dynamically divided into several Voronoi regions after choosing proper nodes as broker nodes. Each broker node is used to collect subscriptions and detected events, as well as efficiently notify subscribers with matched events in its Voronoi region. Other nodes join their nearest broker nodes to submit subscriptions, publish events, and wait for notifications of their requested events. Broker nodes cooperate with each other for sharing subscriptions and useful events. Our proposal includes two major components: a Voronoi regions construction protocol, and a delivery mechanism that implements the pub/sub paradigm. The effectiveness of DRIP is demonstrated through comprehensive simulation studies." ] }
math0304100
2115080784
The Shub-Smale Tau Conjecture is a hypothesis relating the number of integral roots of a polynomial f in one variable and the Straight-Line Program (SLP) complexity of f. A consequence of the truth of this conjecture is that, for the Blum-Shub-Smale model over the complex numbers, P differs from NP. We prove two weak versions of the Tau Conjecture and in so doing show that the Tau Conjecture follows from an even more plausible hypothesis. Our results follow from a new p-adic analogue of earlier work relating real algebraic geometry to additive complexity. For instance, we can show that a nonzero univariate polynomial of additive complexity s can have no more than 15+s^3(s+1)(7.5)^s s! = O(e^{s log s}) roots in the 2-adic rational numbers Q_2, thus dramatically improving an earlier result of the author. This immediately implies the same bound on the number of ordinary rational roots, whereas the best previous upper bound via earlier techniques from real algebraic geometry was a quantity in Omega((22.6)^{s^2}). This paper presents another step in the author's program of establishing an algorithmic arithmetic version of fewnomial theory.
That the @math -conjecture is still open is a testament to the fact that we know far less about the complexity measures @math and @math than we should. For example, there is still no more elegant method known to compute @math for a fixed polynomial than brute force enumeration. Also, the computability of additive complexity is still an open question, although a more efficient variant (allowing radicals as well) can be computed in triply exponential time @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "2807350215" ], "abstract": [ "We show that any language in nondeterministic time @math , where the number of iterated exponentials is an arbitrary function @math , can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness @math and soundness @math , where the number of iterated exponentials is @math and @math is a universal constant. The result was previously known for @math and @math ; we obtain it for any time-constructible function @math . The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result (unpublished) on the uncomputability of the entangled value of multiprover games. Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations." ] }