Welcome

This is the public website for CS588 Spring 2024. Course information will be kept up to date on Canvas (https://canvas.illinois.edu) for enrolled students.

This course will introduce students to the computational principles involved in autonomous vehicles, with practical lab work on an actual vehicle. Sensing topics will include vision, lidar, and sonar, including state-of-the-art methods for detection, classification, and segmentation. Bayesian filtering methods will be covered in the context of both SLAM and visual tracking. Planning and control topics will cover vehicle dynamics models, state-lattice planning, sampling-based kinodynamic planning, optimal control and trajectory optimization, and some reinforcement learning. Evaluation will involve ambitious challenge projects implemented on a physical vehicle.

Time and Location

09:30AM – 10:45AM Mondays and Wednesdays
2233 Everitt Laboratory
Labs are held at the Highbay Lab, 60 Hazelwood Dr, Bay C, Champaign, IL 61820

Course Staff

Office hours: 4:15-5:15PM, Tuesdays
Highbay Lab

TA: Rachel Moan

Center for Autonomy staff software engineer: Hang Cui

Center for Autonomy lab manager: John Hart

Prerequisites

CS374, ECE484, or equivalent.

Readings

Readings will be excerpted from the following online texts:

Course organization

This course will expose students to the process of engineering a large hardware and software system, a radically different environment from what one typically finds in university courses. Students will imagine themselves as engineers at an autonomous vehicle startup trying to build its first self-driving product. The course prepares students with the technical, communication, and organizational skills needed for practical challenges faced in industry. Projects are collaborative, all-class efforts, and students will be exposed to industry practices for unifying the work of dozens of engineers.

The first phase of the course is a “bootcamp” of lectures and group assignments that builds familiarity with the GEM vehicle, the existing software stack, and software integration practices. The second and third phases will involve self-directed design and development of one or more autonomous vehicle behaviors. Lectures on technical topics will be offered approximately once every two to three class sessions.

Throughout the course, students will contribute to a monolithic repository of the software stack (a “monorepo”) that represents the best version of the system to date. Datasets, simulators, and the real vehicle are available for testing. The work will culminate in a final presentation outlining each group’s contributions toward the self-driving product.

Grades are based on one’s ability to contribute meaningfully to the joint effort and to work effectively with fellow classmates on the technical and organizational challenges that arise in a complex integrated system. Some students will take on a managerial role and will be graded on their ability to organize efforts within their group and between groups; others will have engineering roles and will be graded on the quality (e.g., significance, creativity, performance) of their integrated contributions.

Group members are expected to interact frequently with each other and with other groups, in and out of the lab. In class, groups will regularly deliver design documents and presentations on their ideas, goals, plans, and progress. Another important component of working on a large team is feedback: students will offer periodic peer reviews of other students’ performance, both within and outside their group. Students can also directly boost one another’s grades by awarding bonus points (“stars”) to other students, and they can earn bonus points for contributing requested components to the repository (“bounties”).

Software and Hardware

The hardware used in this course includes two vehicles, the GEM e2 and GEM e4, each with drive-by-wire capability and a sensor package. Detailed information on these vehicles’ hardware and driver stacks can be found here. The GEM e4 is preferred for this course, but we may need to fall back to the e2 in case of hardware issues.

We will be using Git and GitHub for all software development. We recommend GitHub Desktop as a Git client and Visual Studio Code as an IDE, but you are free to use whatever development environment you like. Nearly all of the programming in the course will be done in Python 3.7+, and some components of the course will use ROS Noetic. Useful libraries include NumPy, SciPy, PyTorch, Shapely, CasADi, and Klampt.
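As a quick sanity check of the recommended environment, a minimal sketch like the following exercises two of the listed libraries; the obstacle polygon and path points are made-up examples, not values from the course stack:

    # Minimal sketch: a point-in-polygon query of the kind used when
    # checking sampled path points against an obstacle footprint.
    # The polygon and points below are illustrative placeholders.
    import numpy as np
    from shapely.geometry import Point, Polygon

    # A 2 m x 2 m square obstacle footprint centered at the origin.
    obstacle = Polygon([(-1, -1), (1, -1), (1, 1), (-1, 1)])

    # A few sampled (x, y) path points.
    path = np.array([[0.0, 0.0], [2.5, 0.0], [0.5, 0.9]])

    for x, y in path:
        hit = obstacle.contains(Point(x, y))
        print(f"({x:.1f}, {y:.1f}) inside obstacle: {hit}")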

The main GitHub repository is found here. The course branch is s2025, and your group will have its own development branch, s2025_groupX. Your team’s contributions should be tested on your team’s branch. Once you have presented your work in class and obtained approval to contribute to the class system, you may submit a pull request to the s2025 branch. A typical workflow is sketched below.
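As a rough illustration of this workflow (the repository URL placeholder and group number X below stand in for your actual values):

    git clone <repository-url>        # clone the course monorepo
    cd <repository>
    git checkout s2025_groupX         # switch to your group's development branch
    # ... develop and test your changes on this branch ...
    git add <changed-files>
    git commit -m "Brief description of the change"
    git push origin s2025_groupX
    # After presenting in class and receiving approval, open a pull
    # request on GitHub from s2025_groupX into s2025.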

Course staff have several SSDs that can be directly connected to the GEM computer. The base image on these drives contains the drivers and software sufficient to run basic demonstrations on the GEM vehicle. You can ask course staff to wipe a drive back to the base image at any time.

Groups

Students will self-organize into groups of at least 4 and no more than 10 students, working on sub-areas of autonomous vehicle system development. Each group will nominate a Lead who acts as the manager of the team. The Lead will handle scheduling and organizational issues, have the final say in project approval and staffing, and review group members’ contributions to the effort. Non-Leads will in turn review the Lead’s performance in managing the group. Students can change groups with approval from the group Leads involved, and Leads can also change over the semester. The instructors should be notified of any changes in group composition.

Each group will choose a thematic area and establish projects within that area. Projects should be chosen considering:

  • Alignment with product goals
  • Scope that can reasonably be accomplished within the semester
  • Specific, measurable intermediate and final goals
  • Coordination with the efforts of other teams

Students will present their proposed projects through design documents and presentations to receive feedback from the rest of the class. Projects can and will change throughout the semester, and the design documents and presentations will serve as a history to track how the development process evolved through the semester.

Theme areas and potential projects may include, but are not limited to:

  • Hardware / dynamics: sensor calibration; calibrate steering and driving dynamics; calibrate geometry; tune trajectory tracking and actuation limits around curves; surface- and weather-conditioned dynamics models for planning; get the GEM e4 and Pacifica operational with the software stack.
  • Vision: train detectors on KITTI and Waymo datasets; 3D agent and obstacle detection; sign detection; dynamic lane detection; weather and road surface estimation.
  • Estimation: improving state estimation using sensor fusion / filtering; agent motion prediction; roadgraph-conditioned agent motion prediction; determination of lead-vehicle and pass/yield relationships.
  • SW infra: continuous integration (CI) testing; adding linters to code check-in; log management; visualization of internals with Gazebo or Foxglove; map visualization and editing; A/B testing; parallel execution.
  • Simulation: make road graphs and scenes; agent simulation with IDM or learned models; integration with CARLA or Gazebo and sensor simulation.
  • Reasoning: import road graphs from OpenStreetMap; create dynamic routers that accept targets; create driving logic for stop signs, stop lights, and lane changes.
  • Planning: motion primitive selection planner; trajectory optimization; avoid agents and obstacles; perform crosswalk / stop sign / stop light logic; planning for lane changing; free-form parking lot maneuvers.

Tentative Schedule

  • Week 1-2: Introduction to system engineering
    Lecture topics: Course introduction, logistics overview, group formation, hardware and software infrastructure. System integration techniques, ROS. Coordinate transforms.
    Deadlines: Course survey. Group assignment. Scheduling lab time. Phase 1 homework released.
  • Week 3-4: Vehicle dynamics and control
    Lecture topics: Vehicle models, simulation, PID control. Pure pursuit trajectory tracking. Trajectory representations, continuity. Software engineering best practices.
    Deadlines: Safety Driver training. Phase 1 homework due.
  • Week 5-6: 3D vision and knowledge representation
    Lecture topics: Sensor and camera models. 3D scenes and geometric queries. Sensor calibration. Object recognition and segmentation. Working with vision datasets.
    Deadlines: Group realignment, project brainstorming, and Phase 2 Project Pitch.
  • Week 7-8: Motion planning and model predictive control
    Lecture topics: Motion primitive planning, state lattice planning, trajectory optimization, model predictive control, real-time considerations.
    Deadlines: Phase 2 Checkpoint. Design document, design review presentation. Peer reviews.
  • Week 9-10: State estimation and trajectory prediction
    Lecture topics: Probabilistic filtering, Kalman filter and its variants. Monte Carlo methods and particle filtering. System ID and trajectory prediction.
    Deadlines: Integration Checkpoint. Code contribution review. Phase 2 presentation and Phase 3 Pitch. Peer reviews.
  • Week 11-12: Learning in autonomous vehicles
    Lecture topics: TBD
    Deadlines: Phase 3 Checkpoint. Design review presentation. Peer reviews.
  • Week 13-14: Handling uncertainty in autonomous vehicles
    Lecture topics: TBD
    Deadlines: Phase 3 Checkpoint. Design review presentation. Peer reviews.
  • Week 15: Wrap up
    Lecture topics: TBD
    Deadlines: Final presentation. Code contribution review. Peer reviews.