Teacher: Paolo Ferragina
CFU: 9 (first semester).
Course ID: 531AA.
Degree: Master degree in Computer Science and Master degree in CS&Networking.
Question time: Monday, 15-17, or by appointment.
News about this course will be distributed via a Telegram channel.
Official lectures schedule: The schedule and content of the lectures are available below and in the official register.
In this course we will study, design and analyze advanced algorithms and data structures for the efficient solution of combinatorial problems involving all basic data types, such as integers, strings, (geometric) points, trees and graphs. The design and analysis will involve several models of computation (RAM, 2-level memory, cache-oblivious, streaming) in order to take into account the architectural features and the memory hierarchy of modern PCs, as well as the availability of the Big Data upon which those algorithms work. We will complement this theoretical analysis with several engineering considerations stemming from the implementation of the proposed algorithms and from experiments published in the literature.
Every lecture will follow a problem-driven approach that starts from a real software-design problem, abstracts it in a combinatorial way (suitable for an algorithmic investigation), and then introduces algorithms aimed at minimizing the use of some computational resources, such as time, space, communication, I/O, and energy. Some of these solutions will also be discussed at an experimental level, in order to introduce proper engineering and tuning tools for algorithmic development.
|Monday||9:00 - 11:00||L1|
|Tuesday||11:00 - 13:00||L1|
|Wednesday||11:00 - 13:00||L1|
|date, hours||room||which one, text, solution, results|
Students with a rank >= 16 can take the second midterm exam.
Date, hour, room, and correction of the written exam. The score “30 e lode” is assigned only to students who scored 30 in both exams. The score is lost if the student takes part in one of the next exams (just sitting the exam is enough!). The score can be registered on any of the following exam dates (even in the summer), but PLEASE do not write your name in the ESAMI platform if you only want to register your exam score: just show up on one of those dates.
I strongly suggest refreshing your knowledge of basic Algorithms and Data Structures by looking at the well-known book Introduction to Algorithms by Cormen-Leiserson-Rivest-Stein (third edition). Specifically, I suggest you look at chapters 2, 3, 4, 6, 7, 8, 10, 11 (no perfect hashing), 12 (no randomly built BSTs), 15 (no optimal BST), 18, 22 (no strongly connected components). You could also look at the Video Lectures by Erik Demaine and Charles Leiserson, specifically Lectures 1-7, 9-10 and 15-17.
We'll use both the old-fashioned blackboard and slides. Most of the content of the course is covered by the notes I have written over the years; for some topics, parts of papers/books will be used.
You can download the latest version of my notes from this link.
|16/09/2019||Introduction to the course. Models of computation: RAM, 2-level memory. An example of algorithm analysis: the sum of n numbers, and binary search. The role of the Virtual Memory system.||Chap. 1 of the notes.|
|Finding the maximum-sum subsequence (from the notes!). Random sampling: disk model with known length; the streaming model with m=1.||Sec 2.5 of Chapter 2 (no proofs); Chap. 3 of the notes, Sec. 3.1.|
|Random sampling on the streaming model, known and unknown length. Reservoir sampling. Algorithm and proofs. The List Ranking problem: parallel solution|
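As a concrete illustration of reservoir sampling over a stream of unknown length, here is a minimal Python sketch (the function name and interface are mine, not from the notes): the first m items fill the reservoir, and each later item t replaces a random slot with probability m/t.

```python
import random

def reservoir_sample(stream, m):
    """Keep a uniform random sample of m items from a stream of unknown length."""
    sample = []
    for t, item in enumerate(stream, start=1):
        if t <= m:
            sample.append(item)          # fill the reservoir
        else:
            j = random.randrange(t)      # uniform in [0, t-1]
            if j < m:                    # replace a slot with probability m/t
                sample[j] = item
    return sample
```

By induction, after t items every item seen so far sits in the reservoir with probability m/t, which is exactly the uniformity proof done in class.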
|List Ranking: difficulties on disk, pointer-jumping technique, I/O-efficient simulation. Divide and Conquer for List Ranking. Randomized coin tossing to determine the independent set.||Chap. 4 of the notes.|
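The pointer-jumping technique can be illustrated with a sequential simulation of the parallel rounds (illustrative only; the I/O-efficient simulation discussed in class is more involved). Each round doubles the distance jumped, so O(log n) rounds suffice.

```python
def list_rank(succ):
    """Rank = number of links to the tail; succ[i] is the next node or None."""
    n = len(succ)
    rank = [0 if s is None else 1 for s in succ]
    nxt = list(succ)
    for _ in range(max(1, n).bit_length()):   # ceil(log2 n) rounds suffice
        new_rank, new_nxt = rank[:], nxt[:]   # snapshot = one synchronous round
        for i in range(n):
            if nxt[i] is not None:
                new_rank[i] = rank[i] + rank[nxt[i]]  # accumulate the jumped span
                new_nxt[i] = nxt[nxt[i]]              # pointer jumping
        rank, nxt = new_rank, new_nxt
    return rank
```

The snapshot copies mimic the synchronous parallel model: all processors read the old values before anyone writes.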
|Students are warmly invited to refresh their know-how about: the Divide-and-Conquer technique for algorithm design and the Master Theorem for solving recurrence relations; and Binary Search Trees.||Lectures 2, 9 and 10 of Demaine-Leiserson's course at MIT|
|Sorting atomic items: sorting vs permuting, comments on the time and I/O bounds, binary merge-sort and its bounds. Snow Plow and compression. Multi-way mergesort. Algorithm for Permuting.||Chap. 5 of the notes|
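The merge step of multi-way mergesort can be sketched with a min-heap over the current heads of the K runs; this in-memory Python sketch (names are mine) deliberately ignores the disk buffering that the external version needs.

```python
import heapq

def multiway_merge(runs):
    """K-way merge of sorted runs, as in the merge step of multi-way mergesort."""
    # Heap entries: (value, run index, position in run); ties broken by run index.
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))  # refill from run i
    return out
```

Each of the N output items costs O(log K) heap operations, matching the merge cost analyzed in class.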
|Divide and Conquer Algorithm for List Ranking: Example. Deterministic Coin Tossing.|
|Lower bounds for sorting. The case of D>1 disks: non-optimality of multi-way MergeSort, the disk-striping technique. Quicksort: recap on best-case, worst-case.||Lower bound of Permuting is optional (sect 5.2.2).|
|Quicksort: Average-case with analysis. Selection of k-th ranked item in linear average time (with proof). 3-way partition for better in-memory quicksort. RandSelect.|
|Bounded Quicksort; Multiway Quicksort. Selection of k-1 “good pivots” via oversampling, with a proof of the average time complexity.|
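A hedged sketch of randomized selection (RandSelect) in expected linear time; for simplicity this version builds a 3-way split around a random pivot instead of partitioning in place.

```python
import random

def rand_select(arr, k):
    """Return the k-th smallest item (k=1 is the minimum), expected O(n) time."""
    a = list(arr)
    while True:
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if k <= len(lo):                 # answer lies among the smaller items
            a = lo
        elif k <= len(lo) + len(eq):     # pivot is the k-th smallest
            return pivot
        else:                            # recurse on the larger items
            k -= len(lo) + len(eq)
            a = [x for x in a if x > pivot]
```

The expected linear bound follows from the same argument seen in class: a random pivot shrinks the surviving subarray by a constant factor in expectation.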
|Dual Pivot QuickSort. Recap: BFS and DFS visits, Minimum Spanning Tree problem: Kruskal and Prim algorithms and analysis.||CLR cap.23|
|Algorithms for external and semi-external computation of MST; Sibeyn's algorithm.||Sect 11.5 of the Mehlhorn-Sanders book.|
|Fast set intersection, various solutions: scan, sorted merge, binary search, mutual partition, binary search with exponential jumps.||Chap. 6 of the notes.|
|Fast set intersection: two-level scan, random shuffling. String sorting: comments on the difficulty of the problem on disk, lower bound. LSD-radix sort with proof of time complexity and correctness.||Chap. 7 of the notes.|
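The “binary search with exponential jumps” idea for set intersection can be sketched as follows (a simplified illustration, not the notes' exact pseudocode): for each element of the shorter list we gallop forward in the longer one with doubling steps, then binary search inside the located window.

```python
from bisect import bisect_left

def gallop_search(a, x, lo):
    """Insertion point of x in sorted a[lo:], via exponential jumps + binary search."""
    step = 1
    while lo + step < len(a) and a[lo + step] < x:
        step *= 2                                   # doubling phase
    return bisect_left(a, x, lo, min(lo + step, len(a)))

def intersect(a, b):
    """Intersect two sorted lists; fast when one list is much shorter."""
    if len(a) > len(b):
        a, b = b, a
    out, pos = [], 0
    for x in a:
        pos = gallop_search(b, x, pos)              # resume from the last position
        if pos < len(b) and b[pos] == x:
            out.append(x)
    return out
```

Resuming each search from the previous position is what gives the adaptive bound discussed in class when the shorter list is scattered sparsely inside the longer one.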
|MSD-radix sort and the trie data structure. Multi-key Quicksort. Ternary search tree.|
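A minimal LSD radix sort for equal-length ASCII strings, illustrating the stable column-by-column idea (here per-character buckets stand in for counting sort):

```python
def lsd_radix_sort(strings):
    """LSD radix sort for equal-length strings: one stable pass per column,
    from the last character to the first."""
    if not strings:
        return []
    L = len(strings[0])
    a = list(strings)
    for pos in range(L - 1, -1, -1):
        buckets = [[] for _ in range(256)]          # one bucket per byte value
        for s in a:
            buckets[ord(s[pos])].append(s)          # stable: keeps previous order
        a = [s for b in buckets for s in b]
    return a
```

Correctness rests on stability, exactly as in the proof done in class: after processing column pos, the array is sorted on the suffix starting at pos.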
|Exercises.||Simulation of Sibeyn's algorithm.|
|Prefix search: definition of the problem, solution based on arrays, Front-coding, two-level indexing. Locality Preserving front coding and its use with arrays.||Chap. 9 of the notes: 9.1, 9.3.|
|Interpolation search. Compacted tries. Analysis of space, I/Os and time of the prefix search for all data structures seen in class. More on two-level indexing of strings: Solution based on Patricia trie, with analysis of space, I/Os and time of the prefix search. Locality Preserving front coding and its use with Patricia trie.||Chap. 9 of the notes: 9.4 and 9.5.|
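Front coding can be illustrated in a few lines of Python (function names are mine): each string of the sorted sequence is stored as the length of the prefix shared with its predecessor plus the remaining suffix.

```python
def front_code(sorted_strings):
    """Front coding: each string becomes (shared-prefix length, remaining suffix)."""
    out, prev = [], ""
    for s in sorted_strings:
        l = 0
        while l < min(len(prev), len(s)) and prev[l] == s[l]:
            l += 1                                  # longest common prefix with prev
        out.append((l, s[l:]))
        prev = s
    return out

def front_decode(pairs):
    """Rebuild the strings by patching the previous one with each suffix."""
    out, prev = [], ""
    for l, suffix in pairs:
        prev = prev[:l] + suffix
        out.append(prev)
    return out
```

Note that decoding a single string may require unwinding back to the last fully-stored string, which is exactly the weakness that Locality Preserving front coding addresses.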
|Substring search: definition, properties, reduction to prefix search. The Suffix Array. Binary searching the Suffix Array in O(p log n) time; searching in Suffix Arrays in O(p + log n) time. Suffix Array construction via qsort and its asymptotic analysis. LCP array construction in linear time.||Chap. 10 of the notes: 10.1, 10.2.1, 10.2.2 and 10.2.3 (no “The skew algorithm”, “The Scan-based algorithm”).|
|Suffix Trees: properties, structure, pattern search, space occupancy. Construction of Suffix Trees from Suffix Arrays and LCP arrays, and vice versa. Text mining use of suffix arrays.||Sect 10.3, 10.3.1, 10.3.2, 10.4.3|
|The k-mismatch problem with SA+LCP or ST and RMQ data structure. Auto-completion search. RMQ and LCA queries, equivalence and reductions, their algorithmic solutions and few applications.||Sect 10.4.1|
|Prefix-free codes, notion of entropy, optimal codes. Integer coding: the problem and some considerations. The codes Gamma and Delta, space/time performance and consideration on optimal distributions. Rice, PForDelta.||Chap. 11 of the notes|
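The Gamma code can be sketched as follows (bits are represented as a Python string purely for illustration): a positive integer x is written as |bin(x)|-1 zeros followed by its binary representation, for a total of 2*floor(log2 x)+1 bits.

```python
def gamma_encode(x):
    """Elias Gamma code of x >= 1: (len(bin)-1) zeros, then the binary digits."""
    assert x >= 1
    b = bin(x)[2:]
    return "0" * (len(b) - 1) + b

def gamma_decode(bits):
    """Decode a concatenation of Gamma codes into the list of integers."""
    out, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i] == "0":               # count the leading zeros
            z += 1
            i += 1
        out.append(int(bits[i:i + z + 1], 2))  # next z+1 bits are the value
        i += z + 1
    return out
```

The run of zeros makes the code self-delimiting, which is why Gamma codes can be concatenated without separators.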
|Coders: (s,c)-codes, variable-byte, Interpolative. Elias-Fano. With examples.|
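A sketch of one common variable-byte variant (assumption: the convention where a set most-significant bit marks the *last* byte of a value; other variants flip this, so treat the layout as illustrative):

```python
def vbyte_encode(nums):
    """Variable-byte coding: 7 data bits per byte, MSB=1 on the final byte."""
    out = bytearray()
    for n in nums:
        while n >= 128:
            out.append(n & 0x7F)            # low 7 bits, continuation byte
            n >>= 7
        out.append(n | 0x80)                # last byte carries the stop bit
    return bytes(out)

def vbyte_decode(data):
    """Inverse of vbyte_encode: accumulate 7-bit groups until the stop bit."""
    out, n, shift = [], 0, 0
    for b in data:
        if b & 0x80:
            out.append(n | ((b & 0x7F) << shift))
            n, shift = 0, 0
        else:
            n |= b << shift
            shift += 7
    return out
```

Byte alignment is what makes this coder so fast to decode in practice, at the cost of at least 8 bits per integer.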
|Huffman, with optimality (proof). Canonical Huffman: construction, properties, decompression algorithm.||Chap. 12 of the notes (no sect 12.1.2).|
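Huffman's greedy merging can be sketched in a few lines that compute only the code lengths; this is enough because a canonical Huffman code is assigned from the lengths alone. Tie-breaking here is by insertion order, one of several valid choices, so the lengths below are one optimal solution among possibly many.

```python
import heapq

def huffman_code_lengths(freqs):
    """Code length per symbol via Huffman's greedy merge of the two lightest trees."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    tiebreak = len(heap)                    # fresh indices keep tuples comparable
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            depth[s] += 1                   # merged symbols sink one level deeper
        heapq.heappush(heap, (f1 + f2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return depth
```

The resulting lengths satisfy Kraft's equality with sum 1, the starting point of the optimality proof.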
|Arithmetic coding: properties, algorithm and proofs. Dictionary-based compressors: properties and algorithmic structure. LZ77, LZSS and LZ78.||Chap. 12, sect 12.2 (no PPM and Range coding). Chap. 13, up to (and excluding) sect 13.3.|
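An illustrative LZ78 sketch (the dictionary is a Python dict; index 0 denotes the empty phrase), emitting (phrase-index, next-char) pairs; the handling of a leftover phrase at end-of-input is one of several valid conventions.

```python
def lz78_compress(text):
    """LZ78: grow the current phrase while it is in the dictionary, then emit
    (index of longest known prefix, mismatching character)."""
    dictionary = {"": 0}
    out, phrase = [], ""
    for c in text:
        if phrase + c in dictionary:
            phrase += c
        else:
            out.append((dictionary[phrase], c))
            dictionary[phrase + c] = len(dictionary)   # new phrase, next index
            phrase = ""
    if phrase:  # leftover: every prefix of phrase is in the dictionary
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    """Rebuild the phrase list in the same order the compressor created it."""
    phrases, out = [""], []
    for idx, c in pairs:
        p = phrases[idx] + c
        phrases.append(p)
        out.append(p)
    return "".join(out)
```

Compressor and decompressor assign phrase indices in the same order, so no dictionary ever needs to be transmitted: this is the defining property of the LZ78 family.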
|Students are warmly invited to refresh their know-how about: hash functions and their properties; hashing with chaining.||Lecture 7 of Demaine-Leiserson's course at MIT|
|LZ compression via Suffix Tree. Hashing and the dictionary problem: direct addressing, simple hash functions, hashing with chaining, uniform hashing and its computing/storage cost, universal hashing (definition and properties).||Chap. 10, sect 10.4.2. Chap. 8 of the notes: Theorems 8.3 and 8.5 without proofs (only the statements).|
|Two examples of Universal Hash functions: one with correctness proof, the other without. Perfect hash table (with proof). Minimal ordered perfect hashing: definition, properties, construction, space and time complexity.|
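The classic Carter-Wegman family h(x) = ((a*x + b) mod p) mod m is universal for integer keys smaller than p; here is a minimal sketch (the specific prime is my choice, any prime larger than the key universe works):

```python
import random

P = (1 << 61) - 1   # a large (Mersenne) prime; keys must be < P

def make_universal_hash(m):
    """Draw one function h(x) = ((a*x + b) mod P) mod m from the universal family."""
    a = random.randrange(1, P)      # a != 0
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m
```

For any fixed pair of distinct keys, the probability (over the random choice of a, b) of a collision is at most about 1/m, which is the universality property used in the chaining analysis.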
|Cuckoo hashing (with proof). Bloom Filter: properties, construction, query and insertion operations, error estimation (with proofs).||No proof of lower bound.|
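A compact Bloom filter sketch; deriving the k hash functions from SHA-256 with distinct seeds is a convenient (not the only) choice, and the class name is mine.

```python
import hashlib

class BloomFilter:
    """Bloom filter over m bits with k hash functions: no false negatives,
    false positives with the probability estimated in class."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, item):
        # k pseudo-independent positions, one per seed i
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

A query answers "present" only if all k bits are set, so an inserted key is never missed; a spurious "present" occurs when other insertions happen to set all k bits.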
|Randomized data structures: Skip lists (with proofs and comments on I/Os), and Treaps (with proofs).||Notes by others. Study also Theorems and Lemmas. See Demaine's lecture num. 12 on skip lists.|
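A treap sketch (illustrative, not the exact code of the notes): BST order on the keys, min-heap order on random priorities, with rotations restoring the heap property after a standard BST insert.

```python
import random

class Node:
    """Treap node: BST on keys, min-heap on random priorities."""
    def __init__(self, key):
        self.key = key
        self.prio = random.random()
        self.left = self.right = None

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def insert(root, key):
    """BST insert; a rotation pulls up the child if it violates the heap order."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.prio < root.prio:
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key)
        if root.right.prio < root.prio:
            root = rotate_left(root)
    return root

def inorder(n):
    """In-order visit returns the keys in sorted order (BST property)."""
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)
```

Since priorities are random, the treap is distributed like a BST built from a random insertion order, which yields the expected O(log n) depth proved in class.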