Projects

See the Ed post for project logistics: #515.

Projects:

  • Adversarial Machine Learning: PDF - Code
  • Control of Multiplicative Noise Systems: PDF - Code
  • Optimization Algorithms: Code

Project solutions:

  • Adversarial Machine Learning: PDF - Code
  • Control of Multiplicative Noise Systems: PDF - Code
  • Optimization Algorithms: Code

Logistics

All projects have two components: the actual work and a peer review section. The project itself is due Wednesday, December 7 at 11:00 PM. The peer review component is a mandatory addition and is due Friday, December 9 at 11:00 PM. Lecture on Thursday, November 10 will include an introduction to the three projects.

Project Overview:

Each project can be completed in groups of size 1-4 and will allow you to apply and extend the knowledge you have acquired in this class in a different direction. These projects have open-ended components, and exceptional projects that go above and beyond may receive extra credit at our discretion. As a reminder, this project is optional and can only help your grade, as we will take the maximum of the two grading schemes described on the class website.

You will submit a project report following the specifications given in each project. You will also need to submit a writeup of each group member's contributions (if you work in a group), or a certification that you worked alone. Projects will be peer reviewed as part of the evaluation process, and your grade will be influenced both by the peer reviews others give you and by your participation in peer reviewing (details below).

Project Group Formation

Given that the project is not mandatory for the course, it is important to us that anyone choosing to work with a group makes a sincere commitment to the project. As such, there will be a question on homework 12 in which everyone must commit to one of three options: opting out of the project, completing a project alone, or completing a project with specified partners. To ensure this commitment is honored, we may penalize your course grade for incomplete effort within your group, even if your final course grade does not ultimately include the project. This is to ensure that students who commit to working in groups follow through.

Even though you don’t need to formally commit to your group until homework 12 is due, you should start working on your project as soon as possible! If you would like to complete the project in a group but do not have group mates in mind, you can fill out this form by Friday, November 11, at 11:00 PM, and course staff will use the information provided to assign you project groups.

Project Descriptions:

Adversarial Machine Learning

Although deep neural networks are now more or less the dominant technique in computer vision and NLP, among other fields, it turns out that the decisions made by these networks can be very fragile. Even if a neural net has near-perfect accuracy on real-world data, it is often possible for an adversary to perturb inputs by a small amount, imperceptible to humans, and fool the network into making a mistake. This has huge implications for human safety; e.g., an attacker might cause a car accident by fooling an autonomous vehicle into believing a stop sign is really a speed-limit sign.

In this project, we will use Lagrangian duality to defend against these adversarial attacks by formulating an optimization problem concerning how an adversarial player might fool a classifier by making small perturbations to its input. If the classifier is not fooled in the worst case, then it is certified to be robust to adversarial attacks. We will also see a way to use this certificate to re-train a more robust classifier.
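Concretely, the certification problem can be phrased as a min-max optimization of roughly the following form (a standard robust-optimization sketch; the project PDF gives the exact formulation used there):

$$\min_{\theta} \; \max_{\|\delta\| \le \epsilon} \; \ell\big(f_\theta(x + \delta),\, y\big)$$

Here $f_\theta$ is the classifier with parameters $\theta$, $\ell$ is the loss, and the inner maximization searches over all perturbations $\delta$ of size at most $\epsilon$. Lagrangian duality gives a tractable upper bound on the inner maximum; if even this upper bound does not flip the classifier's decision, robustness is certified.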

This project is based on “Provable defenses against adversarial examples via the convex outer adversarial polytope,” Wong and Kolter (2017). The project walks you through a simpler version of this paper.

Control of Multiplicative Noise Systems

Modeling processes using controlled dynamical systems is ubiquitous in the sciences; a stereotypical example is the trajectory of a car or plane, but the trajectory of weights in gradient descent is another good example. One of the key goals of a controller in a control system is to keep the system stable in the presence of noise, e.g. to ensure that the system state (or error) does not grow without bound. A common model for such systems is a linear time-invariant (LTI) control system, as sketched below. The problem of designing optimal control strategies was solved for such systems, first in the idealized, "noise-free" case, then in the case of "additive noise".
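For concreteness, a discrete-time LTI system with additive noise takes the following standard form (our notation, not necessarily the project's), where $x_t$ is the state, $u_t$ the control input, and $w_t$ the noise:

$$x_{t+1} = A x_t + B u_t + w_t$$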

However, this problem is still unsolved in the case of "multiplicative noise", i.e. when the system model itself may be changing, say, because it is being repeatedly estimated from data. Over the course of the project, we gradually build up from designing optimal controls for easier classes of multiplicative-noise systems to attempting to design controls for harder classes where no optimal control is known. Along the way, we'll learn techniques from stochastic optimization, work with the policy gradient algorithm, and use the techniques we've learned in this class to work on active research problems.
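In the multiplicative-noise setting, by contrast, the randomness enters through the system matrices themselves. A common sketch of the model class (again in our own notation) is

$$x_{t+1} = (A + \delta_t) x_t + (B + \gamma_t) u_t$$

where $\delta_t$ and $\gamma_t$ are random matrices, so the noise scales with the state and input rather than simply adding to them.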

This project is based on

  • “The uncertainty threshold principle: Some fundamental limitations of optimal decision making under dynamic uncertainty,” Athans, Ku and Gershwin (1977)
  • “Control capacity,” Ranade and Sahai (2018)
  • “When multiplicative noise stymies control,” Ding, Peres, Ranade, Zhai (2019)

The project considers some of the simplest cases from each of these papers, ending with an open problem considered in the last of these references, "When multiplicative noise stymies control."

Optimization Algorithms

In recent years, it has become increasingly important to develop efficient optimization algorithms. The most common such algorithms are the so-called “first-order” optimization algorithms, which only use information about the objective function and its first derivative; gradient descent is the prototypical first-order optimization algorithm. In this project, we will explore and implement five different first-order optimization algorithms, starting with gradient descent and ending with Adam, the latter being one of the most widely-used optimization algorithms in the world (especially in machine learning). We will benchmark our algorithms against difficult objective functions, and visualize their performance.
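To give a flavor of the starting point, here is a minimal sketch of fixed-step gradient descent on the Rosenbrock function, a classic difficult benchmark for first-order methods. This is illustrative only, not the project's starter code; the test function, step size, and iteration count are our own assumptions.

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2."""
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    """Analytic gradient of the Rosenbrock function."""
    dfdx = -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2)
    dfdy = 200 * (x[1] - x[0]**2)
    return np.array([dfdx, dfdy])

def gradient_descent(grad, x0, step_size=5e-4, num_iters=20000):
    """Fixed-step gradient descent; returns the full trajectory of iterates."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(num_iters):
        xs.append(xs[-1] - step_size * grad(xs[-1]))
    return np.array(xs)

# Gradient descent crawls slowly along the curved Rosenbrock valley toward
# the minimum at (1, 1); momentum methods and Adam do noticeably better here.
iterates = gradient_descent(rosenbrock_grad, x0=[-1.5, 1.5])
print(f"final point: {iterates[-1]}, final value: {rosenbrock(iterates[-1]):.6f}")
```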

Project OH

The project is largely meant for you to complete independently, and the staff will not be offering much support for it. However, we will hold project-specific office hours starting on November 14. These slots are TBD and will be released in the Week 13 Administrivia.

Given our limited staff, we will not be able to answer project questions on Ed.

Grading

The projects will be graded holistically, through a combination of staff grading and peer review. During peer review, you will review other students' projects on an A, B, C scale. Specific rubrics for each of the projects are included in the project PDFs, and both peer reviews and staff grades will be assigned using these rubrics. There will be a significant deduction to your grade if you fail to complete the peer review component.

To Summarize

So that you have them all in one place, here are the deadlines associated with the project:

  • 11/11 11:00 PM
    • Request for staff-formed group due (optional). If you request a staff group, you’ll receive it by November 14.
  • 11/18 11:00 PM, alongside HW12
    • Project group registration form due
  • 12/7 11:00 PM
    • Project PDF due
    • Project code due
    • Project group evaluations due (you must complete this assignment even if you work alone)
    • Project must be emailed to your assigned peer reviewers. Details on this to come.
  • 12/9 11:00 PM
    • Project peer review due