Algorithmic Techniques for Taming Big Data (DS-563/CS-543, Fall 2021)
Growing amounts of available data lead to significant challenges in processing them efficiently. In many cases, it is no longer feasible to design algorithms that can freely access the entire data set. Instead, we often resort to techniques that reduce the amount of data, such as sampling, sketching, dimensionality reduction, and core sets. Beyond these approaches, the course will also explore scenarios in which large data sets are distributed across several machines, or even geographic locations, and the goal is to design efficient communication protocols or MapReduce algorithms.
The course will include a final project and programming assignments in which we will evaluate the performance of these techniques on publicly available data sets. Throughout the course, we will explore various strategies for putting techniques with theoretical guarantees into practice.
- Fluency with programming and basic data structures is required (commensurate with CS-111, EK-125, or equivalent). Familiarity with C++, Java, or Python is recommended.
- Familiarity with basic topics in algorithms and computational complexity (commensurate with CS-330, EC-330, or equivalent) is required. These topics are covered in textbooks such as Cormen, Leiserson, Rivest, and Stein "Introduction to Algorithms."
- Familiarity with basics of linear algebra (commensurate with CS-132, MA-242, or equivalent) and probability (commensurate with MA-115, CS-237, EK-381, or equivalent) is required. These topics are covered in textbooks such as Strang "Introduction to Linear Algebra," Lay, Lay, and McDonald "Linear Algebra and Its Applications," and Pishro-Nik "Introduction to Probability, Statistics, and Random Processes."
Since advanced courses offered through the Faculty of Computing & Data Sciences are meant to be open to students from various disciplines, we provide an online questionnaire to assist students with self-assessment and placement. Please follow this link to access it.
- Data projections
- Lecture 1: Course overview. Frequency estimation (CountMin sketch).
- Lecture 2: Approximate counting. Estimation of data frequency moments.
- Lecture 3: Estimation of data frequency moments.
- Lecture 4: Applications to distributed monitoring.
- Lectures 5 & 6: Compressed graph representations with applications (graph sketches).
- Lecture 7: Data dimensionality reduction (Johnson–Lindenstrauss Lemma).
- Lecture 8: Data dimensionality reduction for clustering.
- Lectures 9 & 10: Finding similar data points (nearest-neighbor search).
- Selection of representative subsets
- Lecture 11: Simple geometric problems. Clustering via core sets.
- Lecture 12: Clustering via core sets.
- Lecture 13: Diversity maximization via core sets.
- Sampling from probability distributions
- Lecture 14: Estimation of distributions and their properties.
- Lecture 15: Verification of a distribution's uniformity.
- Lecture 16: Verification of other properties. Access methods beyond sampling.
- Querying and sampling subsets of data sets
- Lecture 17: Estimation of data parameters and approximate verification of properties.
- Lecture 18: Efficient local sparse graph exploration techniques. Estimating graph parameters.
- Distributed computation
- Lecture 19: MapReduce and the Massively Parallel Computation model. Sample MPC algorithms.
- Lecture 20: Clustering on MPC.
- Lecture 21: Graph algorithms on MPC.
- Lecture 22: Limitations of distributed algorithms.
- Closing lectures
- Lectures 23 & 24: Efficient sparse least-squares regression.
- Lecture 25: Overview of additional topics not covered in detail.
- Lectures 26 & 27: Project presentations and discussions.
- Learning-augmented algorithms
- Adversarially robust streaming algorithms
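To give a flavor of the sketching techniques covered at the start of the schedule (Lecture 1), below is a minimal CountMin sketch in Python. The width and depth values and the salted use of Python's built-in `hash` are illustrative simplifications, not the pairwise-independent hash families analyzed in lecture; a sketch of the idea, not a reference implementation.

```python
import random

class CountMinSketch:
    """Minimal CountMin sketch: approximates item frequencies in a stream
    using `depth` hash rows of `width` counters (small memory, one-sided error)."""

    def __init__(self, width=272, depth=5, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        # Illustrative hashing: one random salt per row; the course uses
        # pairwise-independent hash functions for the formal guarantees.
        self.salts = [rng.getrandbits(64) for _ in range(depth)]

    def _index(self, row, item):
        return hash((self.salts[row], item)) % self.width

    def add(self, item, count=1):
        for r in range(self.depth):
            self.table[r][self._index(r, item)] += count

    def estimate(self, item):
        # Never underestimates: true count <= estimate <= true count + error
        return min(self.table[r][self._index(r, item)]
                   for r in range(self.depth))

sketch = CountMinSketch()
for word in ["a"] * 100 + ["b"] * 10 + ["c"]:
    sketch.add(word)
print(sketch.estimate("a"))  # at least 100, typically exactly 100
```

The key property exercised above is that estimates are one-sided: collisions can only inflate a counter, so the row-wise minimum is always at least the true frequency.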
Apart from active participation, the class will require completing homework problem sets, two experimental programming assignments, and a final project. The overall grade will be based on the following factors:
- class participation: 5%
- homework: 25%
- two programming assignments: 25%
- project proposal: 5%
- final project: 40%
The course will feature two programming assignments in which students will implement algorithms covered in class and apply them to data sets of their choice. Collaboration on these assignments is not allowed (except for discussing high-level ideas), i.e., students are required to implement the algorithms and run the experiments on their own.
Possible final project ideas include, but are not limited to:
- implementing an algorithm not covered in class and testing its practical performance on real–world data,
- creating an open-source implementation of one of the algorithms with easy-to-follow documentation,
- developing a new algorithm with good theoretical or practical guarantees.
The outcome of a project will be a short technical report describing the obtained results and conclusions. Unlike the programming assignments, the final project may be completed in teams of two or three. A list of potential project topics will be provided, but students are encouraged to develop their own ideas. All projects have to be approved by the instructor.