IFIG Research Report

Permanent URI for the collection

Until 1996 published under the title: Bericht / Arbeitsgruppe Informatik

URN: urn:nbn:de:hebis:26-opus-12462


Recent publications

Now showing 1 - 20 of 59
  • Item
    A Short Comment on Controlled Context-Free Grammar Derivations
    (2024-07-04) Holzer, Markus
    We prove that the family of languages generated by regularly controlled grammars with control languages accepted by ordered automata is equal to the family of languages generated by matrix grammars. To our knowledge, this equivalence has been overlooked in the literature.
  • Item
    On Pumping Preserving Homomorphisms and the Complexity of the Pumping Problem
    (2024-06-27) Gruber, Hermann; Holzer, Markus; Rauch, Christian
    This paper complements a recent inapproximability result for the minimal pumping constant w.r.t. a fixed regular pumping lemma for nondeterministic finite automata [H. Gruber, M. Holzer, and C. Rauch. The Pumping Lemma for Regular Languages is Hard. CIAA 2023, pp. 128-140] by showing the inapproximability of this problem even for deterministic finite automata, and at the same time proving a stronger lower bound on the attainable approximation ratio, assuming the Exponential Time Hypothesis (ETH). To that end, we describe those homomorphisms that, in a precise sense, preserve the respective pumping arguments used in two different pumping lemmata. We show that, perhaps surprisingly, this concept coincides with the classic notion of star height preserving homomorphisms as studied by McNaughton, and by Hashiguchi and Honda in the 1970s. Also, we gain a complete understanding of the minimal pumping constant for bideterministic finite automata, which may be of independent interest.
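    For reference, one common textbook variant of the pumping lemma (the paper fixes a particular lemma, so its exact formulation may differ) states: for every regular language L there is a constant p >= 1 such that every w in L with |w| >= p can be written as w = xyz with |xy| <= p, |y| >= 1, and xy^i z in L for all i >= 0. The minimal pumping constant of L with respect to such a lemma is the least p for which this conclusion holds; the paper studies how hard this quantity is to compute or approximate from a given finite automaton.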
  • Item
    Comments on Monoids Induced by NFAs
    (2023-03) Holzer, Markus
    We summarize known results on the transformation monoid of nondeterministic finite automata (NFAs) from semigroup theory. In particular, we list what is known from the literature on the size of monoids induced by NFAs and their (minimal) number of generators - a comprehensive list of these generators is given in the Appendix. It is shown that any language accepted by an n-state NFA has a syntactic monoid of size at most 2^{n^2}. This bound is attained using the generators of the semigroup B_n of n x n Boolean matrices under the usual matrix multiplication, except that we assume 1 + 1 = 1. The number of these generators grows exponentially in n. This is a significant difference from the deterministic case, where three generators suffice to generate all elements of T_n. Moreover, we prove a lower bound for the NFA-to-DFA conversion using Lambert's W function.
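    To make the bound concrete: the monoid induced by an NFA is the set of n x n Boolean matrices obtained as products of the transition matrices of its input symbols, and there are at most 2^{n^2} such matrices. A minimal sketch in Python, using a hypothetical 3-state example NFA (not one from the report):

      def bool_mat_mult(a, b):
          """Boolean product of two n x n 0/1 matrices (addition saturates: 1 + 1 = 1)."""
          n = len(a)
          return tuple(
              tuple(int(any(a[i][k] and b[k][j] for k in range(n))) for j in range(n))
              for i in range(n)
          )

      def induced_monoid(generators):
          """Close the transition matrices under Boolean multiplication.

          The result is the transformation monoid induced by the NFA; its size is
          at most 2^(n^2), the total number of n x n Boolean matrices.
          """
          n = len(generators[0])
          identity = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))
          monoid, frontier = {identity}, [identity]
          while frontier:
              m = frontier.pop()
              for g in generators:
                  p = bool_mat_mult(m, g)
                  if p not in monoid:
                      monoid.add(p)
                      frontier.append(p)
          return monoid

      # Hypothetical 3-state NFA over {a, b}; entry (i, j) = 1 iff state j is reachable
      # from state i on the given input symbol.
      A = ((0, 1, 1), (0, 0, 1), (1, 0, 0))
      B = ((1, 0, 0), (1, 1, 0), (0, 1, 0))
      print(len(induced_monoid([A, B])))  # at most 2^(3^2) = 512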
  • Item
    On the complexity of rolling block and Alice mazes
    (2012) Holzer, Markus; Jakobi, Sebastian
    We investigate the computational complexity of two maze problems, namely rolling block and Alice mazes. Simply speaking, in the former game one has to roll blocks through a maze, ending in a particular game situation, and in the latter one has to move tokens of variable speed through a maze following some prescribed directions. It turns out that when the number of blocks or the number of tokens is not restricted (unbounded), the problem of solving such a maze becomes PSPACE-complete. Hardness is shown via a reduction from the nondeterministic constraint logic (NCL) of Demaine and Hearn to the problems in question. In this way we improve a previous PSPACE-completeness result of Buchin and Buchin on rolling block mazes to the best possible. Moreover, we also consider bounded variants of these maze games, i.e., when the number of blocks or tokens is bounded by a constant, and prove close relations to variants of graph reachability problems.
  • Item
    18. Theorietag "Automaten und Formale Sprachen" : Wettenberg-Launsbach bei Gießen 30. September - 2. Oktober 2008
    (2008)
    The Theorietag is the annual meeting of the special interest group Automaten und Formale Sprachen of the Gesellschaft für Informatik. Since 1991 it has been organized by members of the group at changing venues in Germany and Austria. The group's annual business meeting also takes place during the Theorietag. Since 1996 the Theorietag has been accompanied by a one-day workshop with invited talks. Previous venues were Magdeburg (1991), Kiel (1992), Dagstuhl (1993), Herrsching (1994), Schloß Rauischholzhausen (1995), Cunnersdorf (1996), Barnstorf (1997), Riveris (1998), Schauenburg-Elmshagen (1999), Wien (2000), Wendgräben (2001), Wittenberg (2002), Herrsching (2003), Caputh (2004), Lauterbad (2005), Wien (2006), and Leipzig (2007). This year's Theorietag is again hosted, after 13 years, by the Institut für Informatik of the Justus-Liebig-Universität Gießen. Together with the preceding workshop on Selected Topics in Theoretical Computer Science it takes place from September 30 to October 2, 2008, in Wettenberg-Launsbach near Gießen. Participants from Belgium, Germany, England, France, Italy, Austria, the Czech Republic, and Hungary accepted the invitation to central Hesse.
  • Item
    Descriptional complexity of pushdown store languages
    (2012) Malcher, Andreas; Meckel, Katja; Mereghetti, Carlo; Palano, Beatrice
    It is well known that the pushdown store language P(M) of a pushdown automaton (PDA) M, i.e., the language consisting of the words occurring on the pushdown store along accepting computations of M, is a regular language. Here, we design succinct nondeterministic finite automata (NFA) accepting P(M). In detail, an upper bound on the size of an NFA for P(M) is obtained, which is quadratic in the number of states and linear in the number of pushdown symbols of M. Moreover, this upper bound is shown to be asymptotically optimal. Then, several restricted variants of PDA are considered, leading to improved constructions. In all cases, we prove the asymptotic optimality of the size of the resulting NFA. Finally, we apply our results to decidability questions related to PDA, and obtain solutions in deterministic polynomial time.
  • Item
    Tight bounds on the descriptional complexity of regular expressions
    (2009) Gruber, Hermann; Holzer, Markus
    We improve on some recent results on lower bounds for conversion problems for regular expressions. In particular, we consider the conversion of planar deterministic finite automata to regular expressions, the effect of the complementation operation on the descriptional complexity of regular expressions, and the conversion of regular expressions extended by intersection or interleaving to ordinary regular expressions. Almost all obtained lower bounds are optimal, and the presented examples are over a binary alphabet, which is best possible.
  • Item
    Simplifying regular expressions : A quantitative perspective
    (2009) Gruber, Hermann; Gulan, Stefan
    In this work, we consider the efficient simplification of regular expressions. We suggest a quantitative comparison of heuristics for simplifying regular expressions. We propose a new normal form for regular expressions, which outperforms previous heuristics while still being computable in linear time. We apply this normal form to determine an exact bound for the relation between the two most common size measures for regular expressions, namely alphabetic width and reverse polish notation length. Then we proceed to show that every regular expression of alphabetic width n can be converted into a nondeterministic finite automaton with ε-transitions of size at most 4 2/5 n + 1, and that this bound is optimal. This provides an exact resolution of a research problem posed by Ilie and Yu, who had obtained lower and upper bounds of 4n - 1 and 9n - 1/2, respectively [L. Ilie, S. Yu: Follow automata. Inform. Comput. 186, 2003]. For reverse polish notation length as input size measure, an optimal bound was recently determined [S. Gulan, H. Fernau: An optimal construction of finite automata from regular expressions. In: Proc. FST & TCS, 2008]. We prove that, under mild restrictions, their construction is also optimal when taking alphabetic width as input size measure.
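    The two size measures are easy to state precisely: alphabetic width counts only the occurrences of alphabet symbols in an expression, while reverse polish notation length counts every node of the syntax tree (symbols, constants, and operators alike, under one common convention). A minimal Python sketch over a hypothetical expression-tree type:

      from dataclasses import dataclass
      from typing import Optional, Tuple

      @dataclass
      class Regex:
          op: str                            # 'sym', 'eps', 'empty', '+', '.', '*'
          sym: Optional[str] = None          # alphabet symbol if op == 'sym'
          children: Tuple['Regex', ...] = ()

      def alphabetic_width(r: Regex) -> int:
          """Number of occurrences of alphabet symbols in r."""
          if r.op == 'sym':
              return 1
          return sum(alphabetic_width(c) for c in r.children)

      def rpn_length(r: Regex) -> int:
          """Length of the reverse polish (postfix) form: every node counts as one symbol."""
          return 1 + sum(rpn_length(c) for c in r.children)

      # (a + b)* . a  -- alphabetic width 3, reverse polish notation length 6
      a, b = Regex('sym', 'a'), Regex('sym', 'b')
      r = Regex('.', children=(Regex('*', children=(Regex('+', children=(a, b)),)), a))
      print(alphabetic_width(r), rpn_length(r))  # 3 6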
  • Item
    Cellular automata with sparse communication
    (2009) Kutrib, Martin; Malcher, Andreas
    We investigate cellular automata whose internal inter-cell communication is bounded. The communication is quantitatively measured by the number of uses of the links between cells. Bounds on the sum of all communications of a computation as well as bounds on the maximal number of communications that may appear between each two cells are considered. It is shown that even the weakest non-trivial device in question, that is, one-way cellular automata where each two neighboring cells may communicate only constantly often, accept rather complicated languages. We investigate the computational capacity of the devices in question and prove an infinite strict hierarchy depending on the bound on the total number of communications during a computation. Despite the sparse communication, even for the weakest devices the undecidability of several problems is derived by reduction from Hilbert's tenth problem. Finally, the question whether a given real-time one-way cellular automaton belongs to the weakest class is shown to be undecidable. This result can be adapted to answer an open question posed in [Vollmar, R.: Zur Zustandsänderungskomplexität von Zellularautomaten. In: Beiträge zur Theorie der Polyautomaten, zweite Folge, Braunschweig (1982) 139-151 (in German)].
  • Item
    Grid graphs with diagonal edges and the complexity of Xmas mazes
    (2012) Holzer, Markus; Jakobi, Sebastian
    We investigate the computational complexity of some maze problems, namely the reachability problem for (undirected) grid graphs with diagonal edges, and the solvability of Xmas tree mazes. Simply speaking, in the latter game one has to move sticks of a certain length through a maze, ending in a particular game situation. It turns out that when the number of sticks is bounded by some constant, these problems are closely related to the grid graph problems with diagonals. If, on the other hand, an unbounded number of sticks is allowed, then the problem of solving such a maze becomes PSPACE-complete. Hardness is shown via a reduction from the nondeterministic constraint logic (NCL) of Demaine and Hearn to Xmas tree mazes.
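    The grid-graph part of the problem is easy to state concretely: reachability in an undirected grid graph with diagonal edges asks whether two free cells are connected when moves to all eight neighbors are allowed. A minimal breadth-first-search sketch in Python (the grid encoding is a hypothetical choice, not taken from the report):

      from collections import deque

      # 8-neighborhood: the four axis-parallel moves plus the four diagonal moves.
      MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

      def reachable(grid, start, goal):
          """BFS reachability; grid is a list of strings, '.' = free cell, '#' = blocked."""
          rows, cols = len(grid), len(grid[0])
          seen, queue = {start}, deque([start])
          while queue:
              r, c = queue.popleft()
              if (r, c) == goal:
                  return True
              for dr, dc in MOVES:
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '.' and (nr, nc) not in seen:
                      seen.add((nr, nc))
                      queue.append((nr, nc))
          return False

      maze = [".#.",
              "#.#",
              ".#."]
      print(reachable(maze, (0, 0), (2, 2)))  # True, but only because diagonal edges are allowed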
  • Item
    Massively parallel pattern recognition with link failures
    (2000) Löwe, Jan-Thomas; Kutrib, Martin
    The capabilities of reliable computations in linear cellular arrays with communication failures are investigated in terms of pattern recognition. The defective processing elements (cells) that cause the misoperations are assumed to behave as follows. Dependent on the result of a self-diagnosis of their communication links they store their working state locally such that it becomes visible to the neighbors. A defective cell is not able to receive information via one of its two links to adjacent cells. The self-diagnosis is run once before the actual computation. Subsequently no more failures may occur in order to obtain a valid computation. We center our attention on patterns that are recognizable very fast, i.e., in real-time. It is well known that real-time one-way arrays are strictly less powerful than real-time two-way arrays, but only little is known about the range between these two devices. Here it is shown that the sets of patterns reliably recognizable by real-time arrays with link failures lie strictly in between the sets of (intact) one-way and (intact) two-way arrays. Hence, the failures cannot be compensated in general but, on the other hand, do not decrease the computing power to that of one-way arrays. CR Subject Classification (1998): F.1, F.4.3, B.6.1, E.1, B.8.1, C.4
  • Item
    Below linear-time : Dimensions versus time
    (2000) Kutrib, Martin
    Deterministic d-dimensional Turing machines are considered. We investigate the classes of languages acceptable by such devices with time bounds of the form id + r, where r ∈ o(id) is a sublinear function. It is shown that for any dimension d ≥ 1 there exist infinite time hierarchies of separated complexity classes in that range. Moreover, for the corresponding time bounds separated dimension hierarchies are proved. CR Subject Classification (1998): F.1.3, F.1.1, F.4.3
  • Item
    Efficient universal pushdown cellular automata and their application to complexity
    (2000) Kutrib, Martin
    In order to obtain universal classical cellular automata an infinite space is required. Therefore, the number of required processors depends on the length of the input data and, additionally, may increase during the computation. On the other hand, Turing machines are universal devices which have one processor only and, additionally, an infinite storage tape. Here a model that is, in some sense, intermediate between the two is studied. The pushdown cellular automata are a stack-augmented generalization of classical cellular automata. They form a massively parallel universal model where the number of processors is bounded by the length of the input data. Efficient universal pushdown cellular automata and their efficiently verifiable encodings are proposed. They are applied to computational complexity, and tight time and stack-space hierarchies are shown. CR Subject Classification (1998): F.1, F.4.3, B.6.1, E.1
  • Item
    Fault tolerant parallel pattern recognition
    (2000) Kutrib, Martin; Löwe, Jan-Thomas
    The general capabilities of fault tolerant computations in one-way and two-way linear cellular arrays are investigated in terms of pattern recognition. The defective processing elements (cells) that cause the misoperations are assumed to behave as follows. Dependent on the result of a self-diagnosis they store their working state locally such that it becomes visible to the neighbors. A non-working (defective) cell cannot modify information but is able to transmit it unchanged with unit speed. Arrays with static defects run the self-diagnosis once before the actual computation. Subsequently no more defects may occur. In case of dynamic defects, cells may fail during the computation. We center our attention on patterns that are recognizable very fast, i.e., in real-time, but almost all results can be generalized to arbitrary recognition times in a straightforward manner. It is shown that the fault tolerant recognition capabilities of two-way arrays with static defects are characterizable by intact one-way arrays and that one-way arrays are fault tolerant per se. For arrays with dynamic defects it is proved that the failures can be compensated as long as the number of adjacent defective cells is bounded. Arbitrarily large defective regions (and thus fault tolerant computations) lead to a dramatic decrease of computing power. The recognizable patterns are those of a single processing element, the regular ones. CR Subject Classification (1998): F.1, F.4.3, B.6.1, E.1, B.8.1, C.4
  • Item
    Iterative arrays with small time bounds
    (1999) Buchholz, Thomas; Klein, Andreas; Kutrib, Martin
    An iterative array is a line of interconnected interacting finite automata. One distinguished automaton, the communication cell, is connected to the outside world and fetches the input serially, symbol by symbol. In the literature this model is sometimes referred to as a cellular automaton with sequential input mode. We investigate deterministic iterative arrays (IA) with small time bounds between real-time and linear-time. It is shown that there exists an infinite dense hierarchy of strictly included complexity classes in that range. The result closes the last gap in the time hierarchy of IAs.
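    A minimal Python sketch of the sequential input mode described above: a line of identical cells, where only cell 0 (the communication cell) sees the input, one symbol per step, and every cell updates from its own state and those of its neighbors. The transition rules below are a hypothetical toy example, not one of the constructions from the report:

      def simulate_ia(word, cells, delta_comm, delta, quiescent='q'):
          """Run an iterative array for len(word) steps and return the final states.

          delta_comm -- next state of cell 0:   f(input_symbol, own_state, right_state)
          delta      -- next state of cell i>0: f(left_state, own_state, right_state)
          """
          state = [quiescent] * cells
          for sym in word:
              right0 = state[1] if cells > 1 else quiescent
              nxt = [delta_comm(sym, state[0], right0)]
              for i in range(1, cells):
                  right = state[i + 1] if i + 1 < cells else quiescent
                  nxt.append(delta(state[i - 1], state[i], right))
              state = nxt
          return state

      # Toy rules: the communication cell tracks the parity of input symbols 'a' read so
      # far ('0' = even, '1' = odd); all other cells simply copy their left neighbor.
      comm = lambda sym, own, right: {'q': '1', '0': '1', '1': '0'}[own] if sym == 'a' else ('0' if own == 'q' else own)
      shift = lambda left, own, right: left
      print(simulate_ia('abaab', 5, comm, shift))  # cell 0 ends in '1': 'abaab' has an odd number of a's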
  • Item
    Deterministic Turing machines in the range between real-time and linear-time
    (2000) Klein, Andreas; Kutrib, Martin
    Deterministic k-tape and multitape Turing machines with one-way, two-way and without a separated input tape are considered. We investigate the classes of languages acceptable by such devices with time bounds of the form n + r(n), where r ∈ o(n) is a sublinear function. It is shown that there exist infinite time hierarchies of separated complexity classes in that range. For these classes weak closure properties are proved. Finally, it is shown that similar results are valid for several types of acceptors with the same time bounds. CR Subject Classification (1998): F.1.3, F.1.1, F.4.3
  • Item
    Automata arrays and context-free languages
    (1999) Kutrib, Martin
    From a biological point of view, automata arrays were employed by John von Neumann in order to solve the logical problem of nontrivial self-reproduction. From a computer science point of view, they are a model for massively parallel computing systems. Here we are dealing with automata arrays as acceptors for formal languages. Our investigations focus on their capability to accept the classical language families. While there are simple relations to the regular and context-sensitive languages, here we shed some light on the relations to the context-free languages and some of their important subfamilies. CR Subject Classification (1998): F.1, F.4.3, B.6.1, E.1
  • Item
    Decision lists and related Boolean functions
    (1998) Eiter, Thomas; Ibaraki, Toshihide; Makino, Kazuhisa
    We consider Boolean functions represented by decision lists, and study their relationships to other classes of Boolean functions. It turns out that the elementary class of 1-decision lists has interesting relationships to independently defined classes such as disguised Horn functions, read-once functions, nested differences of concepts, threshold functions, and 2-monotonic functions. In particular, 1-decision lists coincide with fragments of the mentioned classes. We further investigate the recognition problem for this class, as well as the extension problem in the context of partially defined Boolean functions (pdBfs). We show that finding an extension of a given pdBf in the class of 1-decision lists is possible in linear time. This improves on previous results. Moreover, we present an algorithm for enumerating all such extensions with polynomial delay.
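    For concreteness, a 1-decision list is a sequence of rules, each testing a single literal, plus a default value; the first rule whose literal is satisfied determines the output. A minimal Python sketch (the example list is hypothetical, not taken from the paper):

      def eval_decision_list(rules, default, x):
          """Evaluate a 1-decision list on a Boolean assignment x (tuple of 0/1 values).

          rules -- sequence of ((variable_index, sign), output) pairs; a rule fires
                   when the literal x[variable_index] == sign is satisfied.
          """
          for (var, sign), out in rules:
              if x[var] == sign:
                  return out
          return default

      # Hypothetical list over x0, x1, x2:  if x1 = 0 then 1, else if x0 = 1 then 0, else 1.
      dl = [((1, 0), 1), ((0, 1), 0)]
      print(eval_decision_list(dl, 1, (1, 1, 0)))  # second rule fires -> 0
      print(eval_decision_list(dl, 1, (0, 0, 1)))  # first rule fires  -> 1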
  • Item
    A first-order representation of stable models
    (1998) Eiter, Thomas; Lu, James; Subrahmanian, V.S.
    Turi (1991) introduced the important notion of a constrained atom: an atom with associated equality and disequality constraints on its arguments. A set of constrained atoms is a constrained interpretation. We investigate how non-ground representations of both the stable model semantics and the well-founded semantics may be obtained through Turi's approach. The practical implication of this is that the well-founded model (or the set of stable models) may be partially pre-computed at compile-time, resulting in the association of each predicate symbol in the program with a constrained atom. Algorithms to create such models are presented, both for the well-founded case and for the case of stable models. Query processing reduces to checking whether each atom in the query is true in a stable model (resp. the well-founded model). This amounts to showing that the atom is an instance of some constrained atom whose associated constraint is solvable. Various related complexity results are explored, and the impacts of these results are discussed from the point of view of implementing systems that incorporate the stable and well-founded semantics.
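    Operationally, for ground (propositional) programs the stable model semantics can be checked directly: a set S of atoms is a stable model iff S equals the least model of the Gelfond-Lifschitz reduct of the program with respect to S. A brute-force Python sketch for small ground programs (the non-ground, constrained-atom machinery of the paper is not reproduced here):

      from itertools import chain, combinations

      # A ground rule is a triple (head, positive_body, negative_body) of an atom name and
      # two sets of atom names, read as:  head <- positive_body, not negative_body.

      def least_model(rules):
          """Least model of a negation-free program (iterate the immediate-consequence operator)."""
          model, changed = set(), True
          while changed:
              changed = False
              for head, pos in rules:
                  if pos <= model and head not in model:
                      model.add(head)
                      changed = True
          return model

      def is_stable(program, candidate):
          """Gelfond-Lifschitz test: candidate equals the least model of the reduct w.r.t. candidate."""
          reduct = [(head, pos) for head, pos, neg in program if not (neg & candidate)]
          return least_model(reduct) == candidate

      def stable_models(program):
          atoms = sorted(set(chain.from_iterable({h} | p | n for h, p, n in program)))
          subsets = chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1))
          return [set(s) for s in subsets if is_stable(program, set(s))]

      # Hypothetical program:  a <- not b.   b <- not a.   c <- a.
      prog = [('a', set(), {'b'}), ('b', set(), {'a'}), ('c', {'a'}, set())]
      print(stable_models(prog))  # two stable models: {'b'} and {'a', 'c'}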
  • Item
    Preferred answer sets for extended logic programs
    (1998) Brewka, Gerd; Eiter, Thomas
    In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add to the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.