AI agents are computational systems that perceive information, make decisions, and act toward goals. The modern view of AI agents emphasizes rational, goal-directed behavior under uncertainty, often using learning to adapt over time (Russell and Norvig, 2020, ch. 2). This course focuses on engineering practice, so we will treat agents both as tools that support design and analysis and as components embedded in intelligent systems.
AI-assisted coding is now a standard tool in engineering practice. In this course, we will use GitHub Copilot in VS Code as a pair programmer that helps us draft code, explore alternatives, and debug faster. The goal is not to outsource thinking, but to accelerate the engineering loop: define the problem, generate a candidate solution, test it, and iterate.
Coding assistants are just one form of AI agent; you will encounter other kinds of agents in engineering contexts throughout this course.
Even early in this course, you can use agents to accelerate experimentation and interpretation while still keeping human oversight and responsibility for correctness.
When used well, Copilot can speed up drafting code, exploring alternatives, and debugging. But it can also confidently generate incorrect or unsafe code. Treat Copilot as a fast, creative junior collaborator: helpful for drafts and alternatives, not authoritative.
Copilot shows up in three main ways: inline suggestions as you type, a chat interface, and refactor-style prompts applied to existing code. Think of these as different interfaces to the same capability. Inline suggestions are great for completing local patterns, chat is best for higher-level planning, debugging, or explaining code, and refactor-style prompts are ideal when you already have code and want targeted improvements.
Under the hood, Copilot can route requests to different models depending on the task and context. In general, these are large language models (LLMs) trained on mixtures of natural language and code, with some variants tuned specifically for code completion or dialogue. You should expect some variability in responses across sessions or prompts because the system may switch models or choose different inference strategies. In practice, this means you should treat outputs as drafts and rely on tests and review rather than assuming consistent behavior.
AI helps most when you give it structure. A good workflow mirrors the engineering loop: define the problem and its constraints, prompt for a candidate solution, test it against expectations, and iterate.
This keeps you in control while still leveraging speed.
You will get better output with clear constraints and context. Some examples:
- "Write a function `simulate_pendulum(t, dt)` that returns angle and angular velocity arrays. Use RK4 and include basic input validation."
- "Given a CSV with columns `time, pressure, flow`, compute a 3-sample moving average and plot the result. Add axis labels."

When a suggestion is good, accept only what you understand. When it is not, use it as a starting point, but rewrite to match your intent.
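As an illustration, the first prompt might yield something like the sketch below. The pendulum model, the parameter defaults (`theta0`, `omega0`, `g`, `L`), and the return convention are assumptions made here for the example; real Copilot output will vary.

```python
import numpy as np

def simulate_pendulum(t, dt, theta0=0.1, omega0=0.0, g=9.81, L=1.0):
    """Simulate a simple pendulum with RK4. Illustrative sketch only.

    t: total simulation time [s]; dt: time step [s].
    Returns (theta, omega) arrays sampled every dt.
    """
    if dt <= 0 or t <= 0:
        raise ValueError("t and dt must be positive")

    def f(state):
        # State is [theta, omega]; derivative follows theta'' = -(g/L) sin(theta).
        theta, omega = state
        return np.array([omega, -(g / L) * np.sin(theta)])

    n = int(t / dt) + 1
    theta = np.empty(n)
    omega = np.empty(n)
    theta[0], omega[0] = theta0, omega0
    state = np.array([theta0, omega0])
    for i in range(1, n):
        # Classic fourth-order Runge-Kutta step.
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        theta[i], omega[i] = state
    return theta, omega
```

Note how the prompt's constraints (RK4, validation, array outputs) pin down the structure, even though details like the defaults are left to the assistant.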
Adopt these habits when working with AI-generated code: read and understand every line before accepting it, write tests that exercise expected behavior and edge cases, review diffs before committing, and keep responsibility for correctness with you, not the tool.
AI is fast, but engineering is about being right.
Goal: Use Copilot to draft a small, verifiable function. Get comfortable with our development workflow.
1. Create a new file in your project: `src/copilot_demo.py`.
2. Prompt Copilot, either in chat or as a comment at the top of the file: "In `src/copilot_demo.py`, write a function `moving_average(x, window)` that returns a NumPy array of the moving average. Include input validation and a short docstring. Assume `x` is a time-series of sensor data."
3. Add a quick check that calls the function with `[1, 2, 3, 4, 5]` and `window=3`, and run it.
4. Open a terminal in VS Code (Ctrl+`).
5. Stage the new file for commit: `git add src/copilot_demo.py`
6. Commit the changes to your local repository with a descriptive message: `git commit -m "Add moving_average function with test"`
7. Push the commit to your remote (GitHub) repository: `git push`

Reflection: Did the generated code handle edge cases (window size, non-numeric input)? If not, fix it. How would you add an example with a visualization?
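For reference, one possible shape for the result is the minimal sketch below, assuming a convolution-based implementation; your Copilot output will differ, and the validation choices here are illustrative, not the required answer.

```python
import numpy as np

def moving_average(x, window):
    """Return the moving average of a 1D time series as a NumPy array."""
    x = np.asarray(x, dtype=float)  # raises ValueError on non-numeric input
    if x.ndim != 1:
        raise ValueError("x must be one-dimensional")
    if not isinstance(window, int) or window < 1 or window > x.size:
        raise ValueError("window must be an int in [1, len(x)]")
    # Convolve with a uniform kernel; mode="valid" keeps only full windows.
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Quick check from the exercise: [1, 2, 3, 4, 5] with window=3 -> [2., 3., 4.]
print(moving_average([1, 2, 3, 4, 5], 3))
```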
Copilot can help make code easier to read and maintain. A good prompt is:
"Refactor this function to improve clarity and add helpful comments, without changing behavior."
Ask Copilot to introduce helper functions, clarify variable names, or add assertions. Then review each change carefully.
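As a tiny, hypothetical illustration (not from the exercises below), a behavior-preserving refactor might look like this:

```python
# Before: terse and unclear.
def p(v, r):
    return v * v / r

# After: same behavior, with clearer names, an assertion, and documentation.
def resistive_power(voltage, resistance):
    """Power dissipated in a resistor (P = V^2 / R)."""
    assert resistance > 0, "resistance must be positive"
    return voltage * voltage / resistance
```

The diff is easy to audit because the computation itself is untouched; only names, checks, and documentation changed.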
Goal: Improve clarity without changing behavior.
1. Create a new file in your project from the previous exercise: `src/normalize_signal.py`.
2. The following function normalizes a 1D sensor signal by scaling (zero mean, unit variance) and clipping extreme values. It works but is hard to read. Copy and paste it into `normalize_signal.py`.
```python
def normalize_signal(x, clip_value=3.0, eps=1e-8):
    """
    Normalize a 1D sensor signal.
    - "Scaling" means subtract the mean and divide by the standard deviation.
    - "Clipping" means limiting extreme values to +/- clip_value.
    """
    n = len(x)
    if n == 0:
        return x
    s = 0.0
    count = 0
    for v in x:
        if v is None:
            continue
        s += v
        count += 1
    mean = s / count if count > 0 else 0.0
    s = 0.0
    count = 0
    for v in x:
        if v is None:
            continue
        d = v - mean
        s += d * d
        count += 1
    std = (s / count) ** 0.5 if count > 0 else 1.0
    y = []
    for v in x:
        if v is None:
            y.append(0.0)
            continue
        z = (v - mean) / (std + eps)
        if z > clip_value:
            z = clip_value
        if z < -clip_value:
            z = -clip_value
        y.append(z)
    return y
```

3. Use Copilot to create a simple example for `normalize_signal()` at the bottom of the file to see what the output is. (Run the example to get a baseline input/output pair for writing a later test.)
4. Instruct Copilot to refactor `normalize_signal()` for readability, adding docstrings and assertions.
5. Instruct Copilot to use the baseline input/output pair to write a simple test at the bottom of the file, replacing the example.
6. Run the test to ensure the refactored version matches the original output.
7. Review the diff line by line and decide which changes to keep.
8. Stage (`git add`), commit (`git commit -m "..."`), and push (`git push`) your changes.
Reflection: Which changes made the code clearer? Which were unnecessary?
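For reference, one behavior-preserving refactor might look like the sketch below. The helper name `_mean_and_std` and the overall structure are illustrative choices, not the expected answer; what matters is that the baseline test still passes.

```python
def _mean_and_std(values):
    """Mean and population standard deviation, ignoring None entries."""
    valid = [v for v in values if v is not None]
    if not valid:
        # Matches the original's fallbacks: mean 0.0, std 1.0.
        return 0.0, 1.0
    mean = sum(valid) / len(valid)
    variance = sum((v - mean) ** 2 for v in valid) / len(valid)
    return mean, variance ** 0.5

def normalize_signal(x, clip_value=3.0, eps=1e-8):
    """Zero-mean, unit-variance scaling with clipping to +/- clip_value.

    None entries map to 0.0; an empty input is returned unchanged.
    """
    if len(x) == 0:
        return x
    mean, std = _mean_and_std(x)
    normalized = []
    for v in x:
        if v is None:
            normalized.append(0.0)
            continue
        z = (v - mean) / (std + eps)
        # Clip to the symmetric range [-clip_value, clip_value].
        normalized.append(max(-clip_value, min(clip_value, z)))
    return normalized
```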
Copilot can be useful for debugging, but only if you provide good context. A strong debugging prompt includes what you expected to happen, what actually happened, the relevant code, and the failing test or error message.
For example:
"I expected this function to return a scalar, but it returns a vector. Here is the function and the test that fails. What is wrong?"
Copilot will often find the bug or propose hypotheses. You still need to verify the fix.
Goal: Use Copilot to diagnose a bug and validate the fix.
1. Create a new file in your project: `src/split_data.py`.
2. Copy and paste the following buggy function that is supposed to split a dataset into training and testing sets.
```python
def train_test_split(x, y, test_ratio=0.2):
    """
    Split paired data into train/test sets.
    x and y are equal-length 1D lists.
    """
    n = len(x)
    if n == len(y):
        raise ValueError("x and y must be the same length")
    if test_ratio <= 0 or test_ratio >= 1:
        raise ValueError("test_ratio must be between 0 and 1")
    split = int(n * test_ratio)
    x_train = x[:split]
    x_test = x[split:]
    y_train = y[:split]
    y_test = y[split:
    return x_train, x_test, y_train, y_test
```

3. Ask Copilot Chat to diagnose and fix the bugs.
4. Ask Copilot to write a simple test that verifies the lengths of the splits and that no data is lost. (Copilot might choose to write the tests in the same file or in a separate one, usually in the `tests/` directory. This is a common pattern for test organization.)
5. Verify the fix by inspecting and running the test.
Reflection: Did Copilot identify and fix the bugs correctly on the first try? Did the test it wrote help validate the fixes?
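For comparison, a corrected version might look like the sketch below, which assumes (per the parameter name) that `test_ratio` is the fraction of samples reserved for the test set:

```python
def train_test_split(x, y, test_ratio=0.2):
    """Split paired data into train/test sets.

    x and y are equal-length 1D lists; test_ratio is the fraction
    of samples placed in the test set.
    """
    n = len(x)
    if n != len(y):  # bug 1: the original used ==, rejecting valid input
        raise ValueError("x and y must be the same length")
    if test_ratio <= 0 or test_ratio >= 1:
        raise ValueError("test_ratio must be between 0 and 1")
    split = int(n * (1 - test_ratio))  # bug 2: split marked the test size, not train
    x_train, x_test = x[:split], x[split:]
    y_train, y_test = y[:split], y[split:]  # bug 3: missing closing bracket fixed
    return x_train, x_test, y_train, y_test

def test_split_lengths():
    x, y = list(range(10)), list(range(10, 20))
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_ratio=0.2)
    assert len(x_train) == len(y_train) == 8
    assert len(x_test) == len(y_test) == 2
    assert x_train + x_test == x and y_train + y_test == y  # no data lost

test_split_lengths()
```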
Copilot is best used as an accelerator, not an oracle. In this course, the expectation is that you understand every line you commit, test what you accept, and remain responsible for correctness.
We are embracing AI as a tool for learning and engineering. The better you get at guiding it, the more productive (and more thoughtful) you will be.
There are many advanced features in Copilot, including configuration options, keyboard shortcuts, and integration with other tools. Copilot features change quickly as new models and workflows are introduced. For the most current capabilities and usage details, consult the official Copilot documentation (GitHub, 2026).