Journées Nationales 2018
du GDR Informatique Mathématique

April 3–6, 2018
École polytechnique, Palaiseau, France

Poster abstracts

  • Maxime Audinot (Institut de Recherche en Informatique et Systèmes Aléatoires, CentraleSupélec & CNRS & ENS Rennes & IMT Atlantique & Inria & INSA Rennes & Université de Bretagne Sud & Université de Rennes 1)

    ATSyRA: a tool for designing and analysing attack trees

    This poster will present my work on the development of the ATSyRA tool for the analysis and synthesis of attack trees, and the theoretical problems we identified in order to meet the needs of risk-analysis experts, such as those expressed by our collaborators at the DGA.

  • Julien Baste (LIP6, CNRS & Sorbonne Université), in collaboration with Ignasi Sau and Dimitrios Thilikos

    F-DELETION parameterized by treewidth

    For a fixed collection F of graphs containing at least one edge, the F-DELETION problem consists in, given a graph G and an integer k, deciding whether there exists a set S of at most k vertices of G such that removing the vertices of S from G leaves a graph containing no graph of F as a minor. This problem, known to be NP-hard, generalizes well-known problems such as VERTEX COVER (F={K2}), FEEDBACK VERTEX SET (F={K3}), and VERTEX PLANARIZATION (F={K5, K3,3}). We are interested in the parameterized complexity of F-DELETION when the parameter is the treewidth of the input graph, denoted by tw. Our objective is to determine, for a fixed F, the smallest function f such that F-DELETION can be solved in time f(tw)*poly(n) on n-vertex graphs.
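
    As a small illustration (our sketch, not the authors' algorithm): for F = {K2}, a K2-minor-free graph is simply edgeless, so F-DELETION coincides with VERTEX COVER and can be solved by brute force on tiny instances. The helper name `f_deletion_k2` is ours.

```python
from itertools import combinations

def f_deletion_k2(vertices, edges, k):
    """Brute-force F-DELETION for F = {K2}: is there a set S of at most k
    vertices such that G - S has no edge (i.e., a vertex cover of size <= k)?"""
    for size in range(k + 1):
        for S in combinations(vertices, size):
            S = set(S)
            if all(u in S or v in S for (u, v) in edges):
                return True
    return False

# C4, the 4-cycle: its vertex cover number is 2
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

    Treewidth-parameterized algorithms replace this exponential search over subsets by dynamic programming over a tree decomposition, which is exactly where the function f(tw) of the abstract comes from.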

  • Andreea Beica (Département d'Informatique de l'ENS, CNRS & ENS Paris & Inria), in collaboration with Vincent Danos

    Synchronization and Linearity in Chemical Reaction Networks

    In order to model and analyze biological systems, one can choose from a wealth of models and techniques, ranging from continuous to discrete, from deterministic to stochastic, and from qualitative to quantitative. Each modeling approach abstracts the biological processes under consideration in a certain way, and thus brings along its own set of assumptions and, consequently, its own niche of validity. The overall challenge is to understand cellular processes and to model them at the adequate level of abstraction. In recent works, the coordination of intracellular processes has been tackled from the perspective of scheduling policies, a concept usually applied to man-made systems [1, 2, 3]. More generally, discrete-event abstractions could provide a rich framework for cellular modeling, given that many aspects of biological processes display the characteristics of discrete event systems: a discrete state set whose evolution depends on the occurrence of events. In this sense, the current work aims at tackling the modeling, analysis and control of cellular systems from the perspective of Control Theory, and more specifically with techniques from the Discrete Event Systems domain. Viewing biological systems as Discrete Event Systems means that the main characteristics of the dynamic behavior are synchronization and competition, behaviors that cannot be captured by ODE models or their discrete-time analogues, but are inherent to the Petri net framework. While synchronization is the main focus of the (max,+) setting, competition, or concurrency, is the main object of our study: in biological systems, the issue of concurrency is actually that of resource allocation, which is the main subject of our work.
    By employing Petri nets, and the Event Graph subclass, to model Chemical Reaction Networks, we aim at using and extending results from the theory of linear systems over dioids [4, 5] in order to address issues such as the existence of periodic and stationary regimes, model reductions, growth rate as an emergent property of the model, and biological hierarchy. From a semantic point of view, we aim to substitute for a Chemical Reaction Network, especially a "growth" one (i.e., one for which an exponential stationary phase exists, such as the one given in [6]), a piecewise synchronous approximation of its dynamics: a resource-allocation-centered Petri net with maximal-step execution semantics. In previous work [7], we tackled the case of unimolecular chemical reactions and proved the correctness of our method; we showed that it can be used either as an approximation of the dynamics, as a method of constraining the reaction rate constants (an alternative to flux balance analysis, using an emergent, formally defined notion of "growth rate" as the objective function), or as a technique for refuting models. The current work uses the (max,+) algebra framework to generalize these results to bi- and multi-molecular chemical reactions, as well as to deal with cellular resource allocation in growth models, provide a method for biological model reductions, and possibly infer knowledge about the concept of "hierarchies" in biological models.

    Keywords: chemical reaction networks, discrete event system, (max,+) algebra, approximation, resource allocation, max-parallel execution of Petri Nets, model reduction

    [1] Pugatch, Rami: Greedy scheduling of cellular self-replication leads to optimal doubling times with a log-Frechet distribution. PNAS 112/8, 2611-2616 (2015)
    [2] Brackley, C.A., et al: A max-plus model of ribosome dynamics during mRNA translation. Journal of Theoretical Biology 303(2012), 128-140
    [3] Patel, E.L., Broomhead, D.: A Max-Plus Model of Asynchronous Cellular Automata. Complex Systems, 23
    [4] Baccelli, F., Cohen, G., Olsder, G.J., Quadrat, J.P.: Synchronization and Linearity: An Algebra for Discrete Event Systems
    [5] Cury, J.E.R., Baldissera, F.L.: Some Perspectives and Challenges in the (Discrete) Control of Cellular Systems. WODES'14
    [6] Weiße, A.Y., Oyarzún, D.A., Danos, V., Swain, P.S.: Mechanistic links between cellular trade-offs, gene expression, and growth. PNAS 112/9, E1038-E1047 (2015)
    [7] Beica, A., Danos, V.: Synchronous Balanced Analysis. 5th International Workshop, HSB 2016, Grenoble, France, October 20-21, 2016, Proceedings
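
    The (max,+) linearity invoked above can be made concrete with a toy example (our sketch, not the model of the abstract): in the (max,+) dioid, "event i waits for all its inputs" becomes the linear recurrence x(k+1) = A ⊗ x(k), where (A ⊗ x)_i = max_j (A_ij + x_j).

```python
def maxplus_matvec(A, x):
    """(A ⊗ x)_i = max_j (A[i][j] + x[j]) in the (max,+) dioid:
    x[j] is the earliest start time of event j, A[i][j] a processing delay."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

# Two events that repeatedly synchronize on each other's previous firing.
A = [[2, 5],
     [3, 3]]
x0 = [0, 0]
x1 = maxplus_matvec(A, x0)  # [5, 3]
x2 = maxplus_matvec(A, x1)  # [8, 8]
```

    Iterating this recurrence is how periodic regimes and growth rates (eigenvalues in the (max,+) sense) emerge from synchronization constraints.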

  • Thomas Caissard (Laboratoire d'InfoRmatique en Image et Systèmes d'information, CNRS & INSA Lyon & Université Claude Bernard Lyon 1 & Université Lumière Lyon 2 & École Centrale de Lyon)

    Laplace–Beltrami Operator on Digital Surfaces

    This poster presents a novel discretization of the Laplace–Beltrami operator on digital surfaces. We adapt an existing convolution technique proposed by Belkin et al. (see Discrete Laplace Operator on Meshed Surfaces) from triangular meshes to the topological border of subsets of Z^n. The core of the method relies on a first-order estimation of measures associated with our discrete elements (such as length, area, etc.). We show some applications of our discretization, such as the mean curvature or the so-called manifold harmonics derived from the operator.
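
    The heat-kernel convolution idea of Belkin et al. can be illustrated in the simplest possible setting, a curve rather than a digital surface (our sketch, with an arbitrary time parameter t): convolve f against the heat kernel, weight each sample by its measure (here arc length), and divide by t. On the unit circle, the Laplace–Beltrami operator satisfies Δ sin = −sin.

```python
import math

N, t = 2000, 1e-3
thetas = [2 * math.pi * i / N for i in range(N)]
f = [math.sin(th) for th in thetas]
w = 2 * math.pi / N                   # measure (arc length) of each sample
norm = math.sqrt(4 * math.pi * t)     # 1D heat-kernel normalization

def heat_laplacian(i):
    """(e^{tΔ}f - f)/t ≈ Δf, discretized as a weighted sum over samples."""
    acc = 0.0
    for j in range(N):
        d = abs(thetas[i] - thetas[j])
        d = min(d, 2 * math.pi - d)   # geodesic distance on the circle
        acc += w * math.exp(-d * d / (4 * t)) / norm * (f[j] - f[i])
    return acc / t
```

    The digital-surface version replaces the arc-length weights by the first-order measure estimates mentioned in the abstract.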

  • Paulina Cecchi Bernales (Institut de Recherche en Informatique Fondamentale, CNRS & Paris-Diderot / USACh, Chile), in collaboration with V. Berthé, F. Dolce, F. Durand, J. Leroy, S. Petite and D. Perrin

    Dendric words and dendric subshifts

    Given a finite alphabet A and a bi-infinite word x with symbols in A, the language of x is the set of factors of x, that is, the set of finite words appearing in x. To every factor w of x we can associate a bipartite graph, called the extension graph of w, in which one puts an edge between the left and right copies of letters a, b whenever awb is a factor of x. The word x is said to be dendric if for every factor w of x, the extension graph of w is a tree. Dendric words are therefore defined in terms of combinatorial properties of their language. Dendric subshifts are symbolic dynamical systems defined using dendric words. We study some dynamical properties of dendric subshifts, such as those related to invariant measures, and provide sufficient conditions for two dendric subshifts to be orbit equivalent.
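
    A hands-on way to see the definition (our sketch): build a long finite prefix of the Fibonacci word, a Sturmian and hence dendric word, and check that the extension graph of each short factor is a tree, i.e., connected with |E| = |V| − 1. A finite prefix can in principle miss extensions, so the prefix is taken much longer than the factors inspected.

```python
def fibonacci_word(min_len=1000):
    # fixed point of the substitution 0 -> 01, 1 -> 0
    w = "0"
    while len(w) < min_len:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def extension_graph_is_tree(word, f):
    # edges (a, b) such that a + f + b occurs in word
    edges = {(word[i - 1], word[i + len(f)])
             for i in range(1, len(word) - len(f))
             if word[i:i + len(f)] == f}
    if not edges:
        return False
    verts = {("L", a) for a, _ in edges} | {("R", b) for _, b in edges}
    reached = {next(iter(verts))}        # naive connectivity closure
    while True:
        grow = {v for (a, b) in edges for v in (("L", a), ("R", b))
                if ("L", a) in reached or ("R", b) in reached}
        if grow <= reached:
            break
        reached |= grow
    return reached == verts and len(edges) == len(verts) - 1

w = fibonacci_word()
factors = {w[i:i + n] for n in range(1, 5) for i in range(len(w) - n)}
```

    For Sturmian words each extension graph is in fact a path, the simplest kind of tree.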

  • Célia Châtel (Laboratoire d'Informatique et Systèmes, CNRS & Université d'Aix-Marseille & Université de Toulon), in collaboration with François Brucker and Pascal Préa

    A characterization of binary hypergraphs by a sequence of trees

    We consider finite hypergraphs closed under intersection. A hypergraph is said to be binary if (with respect to the inclusion order of its hyperedges) every hyperedge covers at most two other hyperedges and is covered by at most two. We propose a characterization of these hypergraphs by a sequence of mixed trees (trees with directed and undirected edges), and give an algorithm that generates the sequence of trees corresponding to a binary hypergraph. Binarizable hypergraphs (i.e., hypergraphs which are partial hypergraphs of a binary one) are equivalent to totally balanced hypergraphs, and our construction algorithm can also be used to recognize such hypergraphs and to approximate a totally balanced non-binary hypergraph by a binary one.
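
    The binarity condition is easy to test directly from the inclusion order (our sketch, on a hypothetical toy hypergraph, not the authors' tree construction):

```python
def is_binary(hyperedges):
    """Each hyperedge must cover at most two hyperedges and be covered by at
    most two, where B covers A iff A ⊂ B with nothing strictly between."""
    H = [frozenset(e) for e in hyperedges]
    def covers(A, B):                      # does B cover A?
        return A < B and not any(A < C < B for C in H)
    for E in H:
        if sum(1 for A in H if covers(A, E)) > 2:
            return False
        if sum(1 for B in H if covers(E, B)) > 2:
            return False
    return True

chain_like = [{1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}]   # binary
too_wide = chain_like + [{1, 3}]           # now {1,2,3} covers three hyperedges
```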

  • Jérémy Cochoy (Inria Saclay, Datashape team)

    Multidimensional persistence and some of its properties

    One-dimensional persistence is a tool that allows one to recover the shape of a point cloud. One-dimensional persistence is now well understood and many implementations of the algorithm are available, but it is very sensitive to outliers and noise. One approach to this problem is to use a generalization: multipersistence. Most of the known theorems in the one-dimensional case do not hold in higher dimensions, and many questions remain open. This poster will present the main properties of multidimensional persistence and some new results.

  • Christophe Cordero (Laboratoire d'Informatique Gaspard Monge, CNRS & Université Paris-Est Marne-la-Vallée & ESIEE Paris & École des Ponts ParisTech)

    Enumerative combinatorics of prographs

    We exhibit a bijection between prographs and families of colored three-dimensional paths. Through a classical study of these paths, we obtain recurrence relations and closed formulas for prographs.

  • Aram Dermenjian (Laboratoire d'informatique de l'École polytechnique, CNRS & École polytechnique)

    The facial weak order

    We consider the reflections of a point through a given set of mirrors. The images of this point through these mirrors can be ordered by considering which mirrors are used to reach them. This order is the weak order of a Coxeter group. It can also be seen on the graph of a polytope called the permutahedron. This poster will present a generalization of the weak order to all the faces of the permutahedron of a finite Coxeter group.
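
    In type A the weak order can be computed directly (our sketch, for the symmetric group S3): elements are permutations, the rank function is the number of inversions, and cover relations swap two adjacent positions while increasing the inversion count by one. The Hasse diagram is the graph of the permutahedron.

```python
from itertools import permutations

def inversions(p):
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

n = 3
elems = list(permutations(range(n)))

def swap_adjacent(p, i):          # apply the simple transposition s_i
    q = list(p)
    q[i], q[i + 1] = q[i + 1], q[i]
    return tuple(q)

# cover relations of the (right) weak order: length goes up by exactly 1
covers = {(p, swap_adjacent(p, i))
          for p in elems for i in range(n - 1)
          if inversions(swap_adjacent(p, i)) == inversions(p) + 1}
```

    For S3 this produces the hexagonal Hasse diagram, with the identity at the bottom and the longest element (2, 1, 0) at the top.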

  • Henri Derycke (Laboratoire Bordelais de Recherche en Informatique, CNRS & Université de Bordeaux & Bordeaux INP)

    Relaxations in the sandpile model on the grid Z^2

    The sandpile model is a discrete model of grain diffusion on the vertices of a graph. I present the three threads of my research on this model on the grid Z^2: the study of a percolation model with finite rectangular clusters, related to the stabilization of the sandpile studied notably by Levine and Peres, a study which leads to statistics on permutations; the search for (finite) subgroups, built on periodic configurations, of a hypothetical group on the recurrent configurations of Z^2; and the recognition of the recurrent configurations of the horizontal strip Z × [1,H] by an automaton on the columns, in the same spirit as the works of Gamlin, Jarai and Lyons.

  • Benjamin Dupont (Institut Camille Jordan, CNRS & Université Claude Bernard Lyon 1 & Université Jean Monnet Saint-Étienne & École Centrale de Lyon & Institut national des sciences appliquées de Lyon)

    Higher-dimensional linear rewriting

    Commutative Gröbner basis theory was introduced to solve polynomial systems, to compute in ideals, and to address algebraic geometry problems through elimination theory. A notion of non-commutative Gröbner basis has also been developed to compute linear bases of algebras and homological invariants. In this poster, we present an extension of these theories to higher-dimensional algebras. We develop termination techniques and confluence proofs to compute in these higher-dimensional structures. We illustrate this by giving explicit bases with suitable algebraic properties for some families of algebras arising in representation theory.

  • Justine Falque (Laboratoire de Recherche en Informatique, CNRS & Université Paris-Sud)

    Finite generation of the orbit algebra of permutation groups with a finite profile

    Define, for an infinite permutation group G, the counting function which maps any positive integer n to the number of orbits of n-subsets, for the induced action of G on the finite subsets of elements. Cameron conjectured that this function, called the profile of G, is a quasi-polynomial whenever it is bounded by a polynomial. A stronger conjecture was later made by one of his students, Macpherson. It involves a structure of graded algebra on the orbits of subsets introduced by Cameron, the orbit algebra, and states that if the profile of G is bounded by a polynomial, then its orbit algebra is finitely generated. The poster will present the context of these two conjectures, some examples, and a sketch of the proof of Macpherson's conjecture, the result of joint work of Nicolas Thiéry and Justine Falque (with valuable advice from Peter Cameron himself).

  • Joël Gay (Laboratoire de Recherche en Informatique, CNRS & Université Paris-Sud)

    The 0-rook monoid and its representation theory

    We are interested in a proper degeneracy at q=0 of the q-deformed rook monoid of Solomon. We show that it is the algebra of a monoid Rn0, namely the 0-rook monoid, in the same vein as Norton's 0-Hecke algebra is the algebra of a monoid Hn0, which Rn0 contains. We start from presentations of these monoids and also describe them in terms of actions. This monoid turns out to be J-trivial, so we describe its representation theory combinatorially, together with its projectivity over Hn0. Furthermore, the rook monoid happens to be the type A member of a family of monoids called the Renner monoids, and the same construction works at least in types B and D.

  • Alex Bredariol Grilo (Institut de Recherche en Informatique Fondamentale, CNRS & Université Paris-Diderot)

    Relativistic verifiable delegation of quantum computation

    The importance of being able to verify quantum computation delegated to remote servers increases with the recent development of quantum technologies. In some of the protocols proposed for this task, a client delegates her quantum computation to non-communicating servers. The assumption that the servers do not communicate is not physically justified, yet it is essential for the security proofs of such protocols. To the best of our knowledge, we present in this work the first verifiable delegation scheme where a classical client delegates her quantum computation to two entangled servers that are allowed to communicate, under the plausible assumption that information cannot propagate faster than the speed of light. We achieve this result by proposing the first one-round two-prover game for the Local Hamiltonian problem in which the provers only need polynomial-time quantum computation and access to copies of the ground state of the Hamiltonian.

  • Timothée Goubault de Brugière (Laboratoire de Recherche en Informatique, CNRS & Université Paris-Sud)

    Quantum circuits synthesis with Householder matrices

    Quantum circuits are the quantum analogs of classical Boolean circuits. Compiling a quantum operator into a circuit is crucial for the design of new quantum algorithms or the study of quantum systems. Over the years, methods have essentially aimed at minimizing the size of the final circuit to overcome technological constraints such as the decoherence time of quantum particles. However, for some applications such as the simulation of a quantum computer, it is also necessary to synthesize a unitary operator quickly with a classical computer. With this new constraint, the most efficient methods do not necessarily provide the smallest circuits. In particular, the use of Householder matrices seems to give promising results on the speed of circuit generation. The purpose of this study is to characterize the trade-off between generation speed and circuit size of Householder methods, which we then compare with the current best methods, such as the Quantum Shannon Decomposition.
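
    The classical building block behind Householder-based synthesis can be sketched numerically (our illustration, not the authors' quantum pipeline): a sequence of Householder reflections applied on the left triangularizes a matrix, and each reflection is cheap to apply.

```python
import math

def householder_triangularize(A):
    """Apply Householder reflections H_k so that H_{n-2}...H_0 A = R is upper
    triangular; returns (Q, R) with Q the accumulated product of reflections,
    so that Q A = R and A = Q^T R."""
    n = len(A)
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        x = [R[i][k] for i in range(k, n)]
        alpha = -math.copysign(math.sqrt(sum(t * t for t in x)), x[0] or 1.0)
        v = x[:]
        v[0] -= alpha                       # reflection vector
        nrm2 = sum(t * t for t in v)
        if nrm2 < 1e-30:
            continue
        for M in (R, Q):                    # M <- (I - 2 v v^T / nrm2) M
            for j in range(n):
                s = sum(v[i] * M[k + i][j] for i in range(len(v)))
                for i in range(len(v)):
                    M[k + i][j] -= 2.0 * s * v[i] / nrm2
    return Q, R

A = [[4.0, 1.0, 2.0],
     [3.0, 2.0, 1.0],
     [0.0, 1.0, 5.0]]
Q, R = householder_triangularize(A)
```

    For a unitary input the triangular factor R is diagonal, which is what makes the decomposition into elementary quantum gates possible.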

  • Antoine Grospellier (SECRET team, Inria Paris), in collaboration with Omar Fawzi and Anthony Leverrier

    Efficient decoding of random errors for quantum expander codes

    Quantum error correction is a technique used to prevent the decoherence of quantum states. A quantum error correcting code encodes some logical qubits into a larger number of physical qubits, which allows us to recover the initial state when small errors have been applied to it. We study quantum expander codes [3] and in particular we prove that they admit a threshold. These codes have several nice properties: they are LDPC, they have a strictly positive rate, their minimal distance is good, and there is an efficient decoding algorithm for them. We proved that these codes can almost surely correct random errors (in the so-called locally stochastic noise model) provided that the probability of error is below some strictly positive constant called the "threshold". We use the same kind of tools as [1] and [2], and in particular percolation theory. A standard result of percolation theory states that if we pick vertices randomly and independently in a bounded-degree graph, then the induced subgraph has small connected components. We proved a generalized version of this theorem and applied it to show that quantum expander codes have a threshold (the picked vertices represent errors on qubits). We investigated both the case where the syndrome measurement is noiseless and the case where it is noisy, and proved the existence of a threshold in both cases. An application of our work is efficient fault-tolerant quantum computation with constant overhead, using a previous work of Gottesman [1]. Fault-tolerant quantum computation is an essential tool in the design of a quantum computer, since it allows us to fight the noise in quantum circuits.

    [1] Daniel Gottesman. Fault-tolerant quantum computation with constant overhead. arXiv preprint arXiv:1310.2984, 2013.
    [2] Alexey Kovalev and Leonid Pryadko. Fault tolerance of quantum low-density parity check codes with sublinear distance scaling. Physical Review A, 87(2):020304, 2013.
    [3] Anthony Leverrier, Jean-Pierre Tillich, and Gilles Zémor. Quantum expander codes. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 810-824. IEEE, 2015.
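
    The percolation fact quoted in the abstract is easy to observe experimentally (our sketch, on the degree-4 grid with a fixed random seed, not the generalized theorem of the authors): at a low error probability, the randomly picked sites form only small connected clusters.

```python
import random

random.seed(0)
W, p = 40, 0.1                      # 40x40 grid, each site picked with prob. 0.1
open_sites = {(x, y) for x in range(W) for y in range(W)
              if random.random() < p}

def largest_component(sites):
    """Size of the largest connected cluster (4-neighbour adjacency)."""
    best, seen = 0, set()
    for s in sites:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            x, y = stack.pop()
            size += 1
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in sites and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best
```

    In the decoding analysis, small error clusters are exactly what the local decoder can handle.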

  • Yann Hamdaoui (Institut de Recherche en Informatique Fondamentale, CNRS & Université Paris-Diderot)

    Encoding Effects and Concurrency in Linear Logic

    The translation of the lambda calculus into Linear Logic (LL) gave new insights into the process of computation, and in particular into beta-reduction. The parallel, asynchronous and fine-grained nature of LL's reduction, together with its low-level execution models such as Geometry of Interaction style machines, makes it an interesting potential backend for the compilation of functional languages. In this work, we take a step further in this direction by giving a translation of a language featuring concurrency, references and thus non-determinism into a fragment of differential linear logic. Unavoidably, concurrent programs translate to cyclic objects that do not correspond to valid proofs: we develop specific tools and devices to ensure the correctness of the translation and termination in this wild setting.

  • Théo Karaboghossian (Laboratoire Bordelais de Recherche en Informatique, CNRS & Université de Bordeaux & Bordeaux INP)

    Combinatorial invariants via algebraic methods

    Hopf algebras were first introduced in algebraic geometry, but they now play an important role in combinatorics. Many classical combinatorial objects (graphs, integer partitions, etc.) have been endowed with various Hopf algebra structures, so that they can be studied in a more algebraic way. Hopf monoids were introduced later by Aguiar and Mahajan and are an analogue of Hopf algebras for labelled combinatorial structures. They also defined functors that pass from Hopf monoids to Hopf algebras.
    Aguiar, Bergeron and Sottile gave a fundamental result on the structure of the category of combinatorial Hopf algebras, which makes it possible to define polynomial invariants on combinatorial objects. (Eric Marberg later gave an analogous result for Hopf monoids.) Recently, Aguiar and Ardila published an article in which they introduce, among other things, a new polynomial invariant obtained via Hopf monoids.
    During the first months of my thesis, we studied the links between these various polynomial invariants.

  • Marie Kerjean (Institut de Recherche en Informatique Fondamentale, CNRS & Université Paris-Diderot)

    A Type Theory for Linear Partial Differential Equations

    We introduce a sequent calculus whose rules express the resolution of linear partial differential equations, through a new constructor !D. This corresponds to a generalisation of Differential Linear Logic. We explain how, from any fixed linear partial differential operator with constant coefficients, one constructs a model of this sequent calculus.

  • Guillaume Lagarde (Institut de Recherche en Informatique Fondamentale, CNRS & Université Paris-Diderot)

    Lempel-Ziv: a "one-bit catastrophe" but not a tragedy

    LZ78 is a famous and very simple lossless data compression algorithm published by Abraham Lempel and Jacob Ziv in 1978. It is fair to expect a certain stability from a data compression algorithm under small perturbations of its input. In this direction, Jack Lutz, among others, asked the following question, known as "the one-bit catastrophe": given an infinite word w, can the compression ratio of 1w (w with a single bit prepended) differ from that of w? In this poster, I present some results in this direction; in particular, the fact that a catastrophe can indeed occur in LZ78.
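
    For reference, LZ78 itself fits in a few lines (a standard textbook rendition, our sketch): the text is parsed greedily into phrases "longest already-seen phrase + one fresh character".

```python
def lz78_compress(s):
    """Parse s into LZ78 phrases (prefix_index, next_char); index 0 = empty."""
    table = {"": 0}
    out, cur = [], ""
    for c in s:
        if cur + c in table:
            cur += c
        else:
            out.append((table[cur], c))
            table[cur + c] = len(table)
            cur = ""
    if cur:                          # trailing phrase, already in the table
        out.append((table[cur], None))
    return out

def lz78_decompress(pairs):
    words, out = [""], []
    for idx, c in pairs:
        w = words[idx] + (c if c is not None else "")
        out.append(w)
        if c is not None:
            words.append(w)
    return "".join(out)
```

    Comparing the number of phrases produced for a long prefix of w and of 1w is one way to probe the one-bit question experimentally.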

  • Jérémy Ledent (Laboratoire d'informatique de l'École polytechnique, CNRS & École polytechnique), in collaboration with Éric Goubault and Samuel Mimram

    Cubical paths and chromatic subdivisions: a bridge between two geometric models of concurrency

    Two geometric approaches to concurrency coexist. One (Goubault et al.) views the executions of a program as paths in a directed topological space. The other (Herlihy et al.) proves impossibility results by reducing the solvability of a task to the existence of a certain map between simplicial complexes. The goal of this poster is to present a combinatorial result linking the two key structures of these approaches: directed paths in a cube on the one hand, and the standard chromatic subdivision on the other.

  • Mathieu Lehaut (Laboratoire d'informatique de Paris 6, CNRS & Sorbonne Université)

    Control of parameterized systems

    We consider systems made of several identical processes whose number is not known in advance, for instance because processes can be created during the execution. The system interacts with an uncontrollable environment, and we aim to control the actions of each process so that the global behaviour satisfies a given specification. To this end, we use games on a model of automata over an infinite alphabet [1, 2]; to recover decidability, we restrict attention to behaviours with a bounded number of rounds [3], where in a round each process may perform an unbounded number of consecutive actions but does nothing further once it has handed over to the next process. The goal is to find a winning strategy for the controller in these games, which corresponds to a centralized way of controlling the processes.

    [1] H. Björklund and T. Schwentick. On notions of regularity for data languages. Theoretical Computer Science, 411(4-5):702-715, 2010.
    [2] M. Bojanczyk, C. David, A. Muscholl, T. Schwentick, and L. Segoufin. Two-variable logic on data words. ACM Transactions on Computational Logic, 12(4):27, 2011.
    [3] S. La Torre, P. Madhusudan, and G. Parlato. Model-checking parameterized concurrent programs using linear interfaces. In Proceedings of CAV'10, volume 6174 of LNCS, pages 629-644. Springer, 2010.

  • Mathias Lepoutre (Laboratoire d'informatique de l'École polytechnique, CNRS & École polytechnique)

    A bijective proof of the rationality of maps in higher genus

    Bender and Canfield proved in 1991 that the generating series of maps in higher genus is a rational function of the generating series of planar maps. In this poster, I present the first bijective proof of this result. Our approach starts with the introduction of a canonical orientation that enables us to construct a bijection between 4-valent bicolorable maps and a family of unicellular blossoming maps.

  • Baptiste Louf (Institut de Recherche en Informatique Fondamentale, CNRS & Université Paris-Diderot)

    A new family of bijections for planar maps

    The Goulden-Jackson and Carrell-Chapuy formulas are recurrence relations for maps that arose from algebraic methods via the KP hierarchy. I will present new bijections that explain the planar cases of these formulas. The bijections rely on a certain exploration of the map, followed by a "cut-and-slide" operation.

  • Olivier Mullier (Unité d'Informatique et Ingénierie des Systèmes, ENSTA ParisTech), in collaboration with Alexandre Chapoutot and Julien Alexandre dit Sandretto

    Validated Computation of the Local Truncation Error of Runge-Kutta Methods with Automatic Differentiation

    A novel approach to bound the local truncation error of explicit and implicit Runge-Kutta methods is presented in this poster. This approach is rooted in the modern theory of Runge-Kutta methods, namely the order condition theorem established by John Butcher in the 1960s. More precisely, our work is an instance, for Runge-Kutta methods, of the generic algorithm defined by Ferenc Bartha and Hans Munthe-Kaas in 2014, which computes B-series with automatic differentiation techniques. In particular, this specialized algorithm is combined with a set-membership framework to define validated numerical integration methods based on Runge-Kutta methods.
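
    The quantity being bounded can be seen empirically (our sketch, not the validated method of the poster): for classical RK4, the local truncation error after one step of size h behaves like C·h^5, so halving h should divide it by roughly 2^5 = 32.

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: y                       # y' = y, exact flow is exp
lte = lambda h: abs(math.exp(h) - rk4_step(f, 0.0, 1.0, h))

ratio = lte(0.1) / lte(0.05)             # ≈ 2^5 = 32 for a 4th-order method
```

    The validated approach of the poster replaces this empirical estimate by a rigorous interval enclosure of the error term, via the B-series coefficients.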

  • Anca Nitulescu (Département d'Informatique de l'ENS, CNRS & ENS Paris & Inria)

    Efficient Protocols for Privacy and Integrity for the Cloud

    Cloud computing is a paradigm in which clients outsource tasks and resources to cloud providers in exchange for several benefits. How can clients be assured that the Cloud operates correctly? Cloud computing raises big privacy and integrity concerns, because the service provider can access the data stored in the cloud at any time.
    What can go wrong? Software bugs, hardware errors, attacks from malicious intruders. The integrity of the data or of the computations outsourced by the client can be compromised! The solution is to ask the Cloud for a SNARK: a succinct non-interactive argument of knowledge. SNARKs were introduced to enable verifiable computation and are now the most widely deployed proof systems in the real world. This poster presents their formalization and their security requirements, and exhibits some scenarios where these proofs are useful, as well as the problems that can appear in case of careless use.

  • Federico Olimpieri (Institut de Mathématiques de Marseille, CNRS & Aix-Marseille Université & Centrale Marseille)

    Quantitative semantics and non-determinism in the lambda calculus

    Quantitative semantics was proposed by J.-Y. Girard and consists in interpreting a program as a power series of its linear approximations. Thanks to quantitative semantics, the concept of the Taylor expansion of a program was introduced. The goal of my work is to use these tools to study non-determinism in the lambda calculus.

  • Pacôme Perrotin (Laboratoire d'Informatique et Systèmes, CNRS & Université d'Aix-Marseille & Université de Toulon)

    (De)composition of Boolean Automata Networks

    We propose a formalism for splitting and recomposing Boolean Automata Networks (BANs), in order to understand their computational abilities at a lower cost. One result to be presented is a simulation theorem, which states that any BAN is simulated by a BAN satisfying certain properties (monotonicity, for instance).
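
    A minimal sketch of what a BAN is (our toy example, not the decomposition formalism of the poster): each automaton updates its Boolean state from the states of the others; under the parallel (synchronous) update, the positive 2-cycle has two fixed points while the negative 2-cycle has none.

```python
from itertools import product

def fixed_points(fs):
    """States x with f(x) = x under the parallel (synchronous) update."""
    n = len(fs)
    return [x for x in product((0, 1), repeat=n)
            if tuple(f(x) for f in fs) == x]

pos_cycle = (lambda x: x[1], lambda x: x[0])        # x0' = x1,     x1' = x0
neg_cycle = (lambda x: 1 - x[1], lambda x: x[0])    # x0' = not x1, x1' = x0
```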

  • Pablo Rotondo (Institut de Recherche en Informatique Fondamentale, CNRS & Université Paris-Diderot / UdelaR, Uruguay)

    Probabilistic Analysis of the Continued Logarithm Algorithm

    Introduced by Gosper in 1978, the Continued Logarithm Algorithm computes the gcd of two integers efficiently, performing only shifts and subtractions. Shallit studied its worst-case complexity in 2016, showing it to be linear. Here, we perform the average-case analysis of the algorithm: we study its main parameters (number of iterations, total number of shifts) and obtain precise asymptotics for their mean values, with explicit constants. Our analysis involves the dynamical system underlying the algorithm, which produces continued fraction expansions whose quotients are powers of 2. Even though Chan studied this system in 2005 from an Ergodic Theory perspective, the presence of powers of 2 in the quotients gives it a dyadic flavour which cannot be analysed with Chan's results alone. We therefore introduce a dyadic component and deal with a two-component dynamical system. With this new mixed system at hand, we provide a complete average-case analysis of the algorithm.
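
    The expansion underlying the algorithm can be sketched as follows (our rendition of one common formulation): for a rational x = a/b ≥ 1, extract the largest power 2^k ≤ x using only shifts and subtractions, via x = 2^k (1 + 1/x') with x' = 2^k b / (a − 2^k b).

```python
from fractions import Fraction

def continued_log(a, b):
    """Continued-logarithm expansion of a/b >= 1: the quotients are the
    exponents k, computed with shifts and subtractions only."""
    ks = []
    while True:
        k = 0
        while (b << (k + 1)) <= a:      # largest k with 2^k * b <= a
            k += 1
        ks.append(k)
        r = a - (b << k)
        if r == 0:
            return ks
        a, b = b << k, r                # recurse on x' = 2^k b / (a - 2^k b)

def reconstruct(ks):
    """Invert the expansion: x = 2^k (1 + 1/x'), last term x = 2^k."""
    x = Fraction(2) ** ks[-1]
    for k in reversed(ks[:-1]):
        x = (2 ** k) * (1 + 1 / x)
    return x
```

    The number of loop iterations and the total of the exponents k are exactly the "number of iterations" and "total number of shifts" analysed in the abstract.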

  • Sudarshan Shinde (Ouragan team, Inria Paris & IMJ-PRG)

    Finding ECM-friendly curves: A Galois approach

    The elliptic curve method (ECM) is a factorization algorithm widely used in cryptography. It was proposed in 1985 by Lenstra and improved a couple of months later by Montgomery using well-chosen curves. In order to compare different families of curves, he associated with each elliptic curve E and prime l the mean valuation of l in the cardinality of E modulo random primes. More precisely, we set v_l = E(v_l(#E(F_p))), where the expectation is taken with respect to random primes p.
    Montgomery increased v_l by forcing curves to have l-torsion points over Q. Brier and Clavier further increased v_2 by imposing torsion points over Q(i). In 2012, Barbulescu et al. [1] produced families of elliptic curves with better mean valuation without adding any torsion points over Q(i). Moreover, they showed that it is impossible to change v_l without changing the degree of the l-torsion field, which has a generic value [2], in the same sense in which the Galois group of an irreducible polynomial of degree n is generically S_n. In this talk we search for families of elliptic curves with a larger valuation for l = 2 and l = 3, which boils down to searching for families with non-generic Galois groups. Initially, we considered the method of the Lagrange resolvent, but this is not feasible for our polynomials of interest: the degree of division polynomials is quadratic in l. We then present two algorithms which produce a system of polynomial equations characterising every subfamily of elliptic curves having non-generic v_3 and every subfamily of Montgomery curves having non-generic v_2. We shall also present the recent works of Sutherland and Zywina [3] and of Rouse and Zureick-Brown [4], and our contribution.

    [1] Razvan Barbulescu, Joppe Bos, Cyril Bouvier, Thorsten Kleinjung, and Peter Montgomery. Finding ECM-friendly curves through a study of Galois properties. The Open Book Series, 1(1):63-86, 2013.
    [2] Jean-Pierre Serre. Propriétés galoisiennes des points d'ordre fini des courbes elliptiques. Inventiones Mathematicae, 15(4):259-331, 1971.
    [3] Andrew V. Sutherland and David Zywina. Modular curves of prime-power level with infinitely many rational points. Algebra & Number Theory, 11(5):1199-1229, 2017.
    [4] Jeremy Rouse and David Zureick-Brown. Elliptic curves over Q and 2-adic images of Galois. Research in Number Theory, 1(1):1-34, 2015.

  • Tsinjo Rakotoarimalala (Laboratoire d'Informatique de Paris-Nord, CNRS & Institut Galilée & Université Paris 13)

    The complexity of the Multiple Pattern Matching Problem for random strings

    We generalise a multiple string pattern matching algorithm, proposed by Fredriksson and Grabowski [J. Discr. Alg. 7, 2009], to deal with arbitrary dictionaries over an alphabet of size s. If r_m is the number of words of length m in the dictionary and ϕ(r) = max_m ln(s·m·r_m)/m, the rate of string characters read by this algorithm is at most κ_UB·ϕ(r) for some constant κ_UB. We then generalise the classical lower bound of Yao [SIAM J. Comput. 8, 1979] for the single-pattern problem to arbitrary dictionaries, and show the rate to be at least κ_LB·ϕ(r). This proves the optimality of the algorithm, improving and correcting previous claims.
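    As a small illustration (names and the toy dictionary are ours), the complexity rate ϕ(r) is straightforward to compute for a concrete dictionary:

```python
import math
from collections import Counter

def phi(dictionary, s):
    """Complexity rate phi(r) = max_m ln(s * m * r_m) / m, where r_m is
    the number of dictionary words of length m and s the alphabet size."""
    r = Counter(len(w) for w in dictionary)
    return max(math.log(s * m * rm) / m for m, rm in r.items())
```

    For the dictionary {"ab", "cd", "abc"} over a binary alphabet, the maximum is attained at length m = 2, giving ϕ(r) = ln(8)/2.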

  • Anastasia Volkova (Inria Grenoble - Rhône-Alpes)

    Towards reliable implementation of digital filters. How digital filters and computer arithmetic meet.

    This work deals with improving arithmetic algorithms for the rigorous design of linear digital filters for signal processing. Assuming exact real arithmetic, the theory of such filters has long been solidly established. Many problems arise when facing practice: filter coefficients are represented with a small number of digits, arithmetic operations are not exact, and so on. At the same time, implementations should be as efficient as possible in terms of computation speed, circuit area, power consumption, etc. In practice, designers often find solutions through empirical approaches and validate them with tests that are far from exhaustive. This way of implementing filters is perfectly adequate for some applications, such as telecommunications, but not for critical applications, for instance those related to transport and aeronautics.
    In this work we cover the whole design chain of linear filters: from transfer functions to reliable and finely tuned software/hardware implementations. We provide a new methodology for the error analysis of linear filter algorithms from the computer arithmetic point of view. The proposed error analysis combines techniques such as floating-point error analysis, interval arithmetic, and multiple-precision implementations. We provide basic algorithmic building blocks for efficient filter implementations on FPGAs. Finally, we integrate our approaches into a code generator that enables the automatic and reliable implementation of any linear digital filter algorithm (this tool is open source).
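    One building block of such analyses is the Worst-Case Peak Gain of a filter, the l1 norm of its impulse response, which bounds the output magnitude (||y||_inf ≤ WCPG · ||x||_inf) and hence helps size fixed-point formats. The sketch below is a truncated, non-rigorous illustration only, assuming a transfer function H(z) = B(z)/A(z) with a[0] = 1; the rigorous version requires controlling the truncation and rounding errors.

```python
def impulse_response(b, a, n_terms):
    """First n_terms samples of the impulse response of H(z) = B(z)/A(z),
    with a[0] == 1, by simulating the difference equation."""
    h = []
    x_hist = [0.0] * len(b)       # sliding window of past inputs
    y_hist = [0.0] * (len(a) - 1) # sliding window of past outputs
    for n in range(n_terms):
        x_hist = [1.0 if n == 0 else 0.0] + x_hist[:-1]
        yn = sum(bi * xi for bi, xi in zip(b, x_hist))
        yn -= sum(ai * yi for ai, yi in zip(a[1:], y_hist))
        y_hist = [yn] + y_hist[:-1]
        h.append(yn)
    return h

def wcpg(b, a, n_terms=10000):
    """Truncated Worst-Case Peak Gain: the l1 norm of the impulse response."""
    return sum(abs(hk) for hk in impulse_response(b, a, n_terms))
```

    For the one-pole filter y[n] = 0.5·y[n-1] + x[n], the impulse response is 0.5^n and the peak gain converges to 2.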

  • Alexandre Wallet (Laboratoire de l'Informatique du Parallélisme, CNRS, ENS de Lyon, Inria, Université Claude Bernard Lyon 1)

    On the Ring-LWE and Polynomial-LWE problems

    The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, which consists in distinguishing from uniform a distribution that depends on a secret belonging to the dual OK* of the ring of integers OK of a specified number field K. In primal-RLWE, the secret lies in OK instead. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is defined over polynomial rings of the form Z[x]/f for a monic irreducible f with integer coefficients.
    We show that there exist reductions between all six of these problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual-to-primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be adapted, with a small error-rate growth, to all rings (the resulting reduction is non-uniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal-RLWE and PLWE that work for a family of polynomials f that is exponentially large as a function of deg f (this reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search-to-decision reduction for RLWE over arbitrary number fields. The reductions incur error-rate increases that depend on intrinsic quantities related to K and f.
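    To fix ideas on the PLWE definition above, a toy (entirely insecure) sample over R_q = Z_q[x]/f can be generated as follows; the parameter choices and the rounded-Gaussian error are illustrative assumptions only.

```python
import random

def poly_mulmod(a, s, f, q):
    """Multiply a*s in Z_q[x] and reduce modulo the monic polynomial f.
    Coefficient lists are low-degree first; f has length deg f + 1."""
    n = len(f) - 1
    prod = [0] * (2 * n - 1)
    for i, ai in enumerate(a):
        for j, sj in enumerate(s):
            prod[i + j] = (prod[i + j] + ai * sj) % q
    for d in range(len(prod) - 1, n - 1, -1):  # long division by f
        c = prod[d]
        if c:
            for j in range(n + 1):
                prod[d - n + j] = (prod[d - n + j] - c * f[j]) % q
    return prod[:n]

def plwe_sample(s, f, q, sigma=3.0):
    """One PLWE sample over R_q = Z_q[x]/f: a pair (a, a*s + e) with
    a uniform in R_q and e a small error polynomial."""
    n = len(f) - 1
    a = [random.randrange(q) for _ in range(n)]
    e = [round(random.gauss(0, sigma)) for _ in range(n)]
    b = [(bi + ei) % q for bi, ei in zip(poly_mulmod(a, s, f, q), e)]
    return a, b
```

    The decision variant then asks to distinguish such pairs (a, b) from uniformly random pairs.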
