1.4 Writing Code with AI Agents

AI agents are computational systems that perceive information, make decisions, and act toward goals. The modern view of AI agents emphasizes rational, goal-directed behavior under uncertainty, often using learning to adapt over time (Russell and Norvig, 2020, ch. 2). This course focuses on engineering practice, so we will treat agents as tools that support design and analysis as well as components embedded in intelligent systems.

AI-assisted coding is now a standard tool in engineering practice. In this course, we will use GitHub Copilot in VS Code as a pair programmer that helps us draft code, explore alternatives, and debug faster. The goal is not to outsource thinking, but to accelerate the engineering loop: define the problem, generate a candidate solution, test it, and iterate.

1.4.1 AI Agents Beyond Coding

Coding assistants are just one form of AI agent. Other examples you will encounter in engineering contexts include:

  • Analysis agents that explore datasets, fit models, and summarize findings
  • Simulation agents that probe design spaces by running parameter sweeps
  • Control agents that adapt to sensor feedback in real time (e.g., drones, robots)
  • Information agents that search literature or technical documentation
  • Design agents that propose alternative system architectures or component choices

Even early in this course, you can use agents to accelerate experimentation and interpretation while still keeping human oversight and responsibility for correctness.

1.4.2 Why AI-Assisted Coding Matters

When used well, Copilot can:

  • Reduce boilerplate so you spend more time on the hard parts of the problem
  • Suggest patterns that you might not remember offhand
  • Generate test scaffolds quickly so you can validate ideas early
  • Explain unfamiliar code and help you navigate new libraries

But it can also confidently generate incorrect or unsafe code. Treat Copilot as a fast, creative junior collaborator: helpful for drafts and alternatives, not authoritative.

1.4.3 Copilot in VS Code: A Practical Mental Model

Copilot shows up in three main ways:

  1. Inline suggestions (ghost text while you type)
  2. Copilot Chat (ask questions, request changes, or generate code blocks)
  3. Edits/Refactors (ask Copilot to modify existing code across a file)

Think of these as different interfaces to the same capability. Inline suggestions are great for completing local patterns, while chat is best for higher-level planning, debugging, or explaining code. Refactor-style prompts are ideal when you already have code and want improvements.

Under the hood, Copilot can route requests to different models depending on the task and context. In general, these are large language models (LLMs) trained on mixtures of natural language and code, with some variants tuned specifically for code completion or dialogue. You should expect some variability in responses across sessions or prompts because the system may switch models or choose different inference strategies. In practice, this means you should treat outputs as drafts and rely on tests and review rather than assuming consistent behavior.

1.4.4 A Reliable Workflow

AI helps most when you give it structure. A good workflow looks like:

  1. Specify the task in plain language (what you want, constraints, inputs/outputs)
  2. Sketch a solution outline (data flow, algorithm choice, edge cases)
  3. Ask Copilot for a draft (or for a specific piece of the draft)
  4. Run or test the code immediately
  5. Review and revise based on results and your own reasoning

This keeps you in control while still leveraging speed.

1.4.5 Prompting Patterns That Work

You will get better output with clear constraints and context. Some examples:

  • "Write a Python function simulate_pendulum(t, dt) that returns angle and angular velocity arrays. Use RK4 and include basic input validation."
  • "Given this DataFrame with columns time, pressure, flow, compute a 3-sample moving average and plot the result. Add axis labels."
  • "Refactor this loop to use NumPy vectorization. Keep output identical."

When a suggestion is good, accept only what you understand. When it is not, use it as a starting point, but rewrite to match your intent.
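As a concrete illustration of the third prompt, here is a made-up before/after pair (the function itself is invented for illustration, not taken from any exercise): a plain Python loop and a NumPy version that produces identical output.

```python
import numpy as np

# Loop version: squared differences between consecutive samples.
def loop_version(x):
    out = []
    for i in range(len(x) - 1):
        out.append((x[i + 1] - x[i]) ** 2)
    return out

# Vectorized version: same output, using array slicing instead of a loop.
def vectorized_version(x):
    x = np.asarray(x, dtype=float)
    return ((x[1:] - x[:-1]) ** 2).tolist()
```

The "Keep output identical" constraint matters: without it, Copilot may change return types or edge-case behavior while vectorizing, so always compare the two versions on a known input.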

1.4.6 Guardrails for Engineering Code

Adopt these habits when working with AI-generated code:

  • Run tests early. If there are no tests, create some minimal ones.
  • Check assumptions. Verify input shapes, units, and edge cases.
  • Read for correctness. If you cannot explain a line, you should not keep it.
  • Watch for hidden side effects. File I/O, network calls, or random seeds.
  • Never paste secrets (tokens, passwords) into prompts.

AI is fast, but engineering is about being right.
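The first guardrail can be as lightweight as a few assertions on inputs whose answers you know. A minimal sketch, using a hypothetical clamp function standing in for whatever Copilot just drafted:

```python
# Minimal sanity tests for an AI-drafted function: no test framework needed,
# just assertions on known inputs. clamp() is a made-up example.
def clamp(value, low, high):
    """Limit value to the range [low, high]."""
    return max(low, min(value, high))

assert clamp(5, 0, 10) == 5    # in range: unchanged
assert clamp(-3, 0, 10) == 0   # below range: clipped to low
assert clamp(42, 0, 10) == 10  # above range: clipped to high
```

Three assertions take seconds to write and catch the most common failure mode of generated code: plausible-looking logic that is wrong at the boundaries.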

Hands-On: First Copilot Session

Goal: Use Copilot to draft a small, verifiable function. Get comfortable with our development workflow.

  1. Follow the instructions in Section 1.3.9: Phase 2: Bootstrapping Your Repository to create and configure a new project repo with VS Code and Copilot.
  2. Open your project repository in VS Code.
  3. Create a new file: src/copilot_demo.py.
  4. Ask Copilot Chat something like, "In the file #src/copilot_demo.py, write a function moving_average(x, window) that returns a NumPy array of the moving average. Include input validation and a short docstring. Assume x is a time-series of sensor data."
  5. Accept the draft, then ask Copilot to write a test at the bottom that checks a small array like [1, 2, 3, 4, 5] with window=3.
  6. Save and run the file (i.e., press the ▷ Run button) and confirm the output is as expected.
  7. Use Git to add, commit, and push your changes to your remote (GitHub) repository as follows:
    1. Open a terminal in VS Code (Ctrl+`).

    2. Stage the new file for commit.

      git add src/copilot_demo.py
    3. Commit the changes to your local repository with a descriptive message.

      git commit -m "Add moving_average function with test"
    4. Push the commit to your remote (GitHub) repository.

      git push

Reflection: Did the generated code handle edge cases (window size, non-numeric input)? If not, fix it. How would you add an example with a visualization?
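After you have attempted the exercise, you can compare your result against one possible shape of the function. This sketch makes its own choices (a "valid"-mode convolution and specific validation rules) that your Copilot draft need not match:

```python
import numpy as np

def moving_average(x, window):
    """Return the moving average of a 1D signal as a NumPy array."""
    x = np.asarray(x, dtype=float)
    if x.ndim != 1:
        raise ValueError("x must be 1D")
    if not isinstance(window, int) or window < 1:
        raise ValueError("window must be a positive integer")
    if window > x.size:
        raise ValueError("window is larger than the signal")
    # Convolve with a uniform kernel; mode="valid" keeps only full windows.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")
```

On [1, 2, 3, 4, 5] with window=3 this yields the averages of [1, 2, 3], [2, 3, 4], and [3, 4, 5]. Note the deliberate decisions here, such as rejecting a window larger than the signal rather than returning an empty array; whichever behavior you choose, make your test encode it.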

1.4.7 Copilot for Refactoring and Readability

Copilot can help make code easier to read and maintain. A good prompt is:

"Refactor this function to improve clarity and add helpful comments, without changing behavior."

Ask Copilot to introduce helper functions, clarify variable names, or add assertions. Then review each change carefully.

Hands-On: Refactor With Constraints

Goal: Improve clarity without changing behavior.

  1. Create a new file in your project from the previous exercise: src/normalize_signal.py.

  2. The following function normalizes a 1D sensor signal by scaling (zero mean, unit variance) and clipping extreme values. It works but is hard to read. Copy and paste it into normalize_signal.py.

    def normalize_signal(x, clip_value=3.0, eps=1e-8):
        """
        Normalize a 1D sensor signal.
        - "Scaling" means subtract the mean and divide by the standard deviation.
        - "Clipping" means limiting extreme values to +/- clip_value.
        """
        n = len(x)
        if n == 0:
            return x
        s = 0.0
        count = 0
        for v in x:
            if v is None:
                continue
            s += v
            count += 1
        mean = s / count if count > 0 else 0.0
        s = 0.0
        count = 0
        for v in x:
            if v is None:
                continue
            d = v - mean
            s += d * d
            count += 1
        std = (s / count) ** 0.5 if count > 0 else 1.0
        y = []
        for v in x:
            if v is None:
                y.append(0.0)
                continue
            z = (v - mean) / (std + eps)
            if z > clip_value:
                z = clip_value
            if z < -clip_value:
                z = -clip_value
            y.append(z)
        return y
  3. Use Copilot to create a simple example for normalize_signal() at the bottom of the file to see what the output is. (Run the example to get a baseline input/output pair for writing a later test.)

  4. Instruct Copilot to refactor normalize_signal() for readability, adding docstrings and assertions.

  5. Instruct Copilot to use the baseline input/output pair to write a simple test at the bottom of the file, replacing the example.

  6. Run the test to ensure the refactored version matches the original output.

  7. Review the diff line by line and decide which changes to keep.

  8. Stage (git add), commit (git commit -m "..."), and push (git push) your changes.

Reflection: Which changes made the code clearer? Which were unnecessary?
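For comparison after the exercise, here is one possible refactoring that preserves the original behavior (population standard deviation, None mapped to 0.0, list output) while leaning on NumPy. Your Copilot version may look quite different, and that is fine as long as the baseline test still passes:

```python
import numpy as np

def normalize_signal(x, clip_value=3.0, eps=1e-8):
    """Zero-mean, unit-variance scaling with clipping; None -> 0.0."""
    if len(x) == 0:
        return x
    # Statistics are computed over non-None values only, as in the original.
    values = np.array([v for v in x if v is not None], dtype=float)
    mean = values.mean() if values.size else 0.0
    std = values.std() if values.size else 1.0  # population std, as before
    return [
        0.0 if v is None
        else float(np.clip((v - mean) / (std + eps), -clip_value, clip_value))
        for v in x
    ]
```

The three hand-written loops collapse into one statistics pass and one list comprehension, and the clipping logic becomes a single np.clip call. Whether this counts as "clearer" is exactly the judgment the reflection asks you to make.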

1.4.8 Debugging With Copilot

Copilot can be useful for debugging, but only if you provide good context. A strong debugging prompt includes:

  • The error message
  • The relevant code snippet
  • What you expected to happen

For example:

"I expected this function to return a scalar, but it returns a vector. Here is the function and the test that fails. What is wrong?"

Copilot will often find the bug or propose hypotheses. You still need to verify the fix.

Hands-On: Debugging Practice

Goal: Use Copilot to diagnose a bug and validate the fix.

  1. Create a new file in your project: src/split_data.py.

  2. Copy and paste the following buggy function that is supposed to split a dataset into training and testing sets.

    def train_test_split(x, y, test_ratio=0.2):
        """
        Split paired data into train/test sets.
        x and y are equal-length 1D lists.
        """
        n = len(x)
        if n == len(y):
            raise ValueError("x and y must be the same length")
        if test_ratio <= 0 or test_ratio >= 1:
            raise ValueError("test_ratio must be between 0 and 1")
        split = int(n * test_ratio)
        x_train = x[:split]
        x_test = x[split:]
        y_train = y[:split]
        y_test = y[split:
        return x_train, x_test, y_train, y_test
  3. Ask Copilot Chat to diagnose and fix the bugs.

  4. Ask Copilot to write a simple test that verifies the lengths of the splits and that no data is lost. (Copilot might choose to write the tests in the same file or in a separate one, usually in the tests/ directory. This is a common pattern for test organization.)

  5. Verify the fix by inspecting and running the test.

Reflection: Did Copilot identify and fix the bugs correctly on the first try? Did the test it wrote help validate the fixes?
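After the exercise, you can compare against one possible fix: the length check inverted, the split point computed so that test_ratio actually sizes the test set, and the missing bracket restored. Copilot may also suggest shuffling before splitting, which this sketch deliberately omits:

```python
def train_test_split(x, y, test_ratio=0.2):
    """Split paired data into train/test sets; x and y are equal-length lists."""
    n = len(x)
    if n != len(y):  # bug fix: the original raised when lengths matched
        raise ValueError("x and y must be the same length")
    if test_ratio <= 0 or test_ratio >= 1:
        raise ValueError("test_ratio must be between 0 and 1")
    # bug fix: the original gave the *training* set only test_ratio of the data
    split = n - int(n * test_ratio)
    x_train, x_test = x[:split], x[split:]
    y_train, y_test = y[:split], y[split:]  # bug fix: missing closing bracket
    return x_train, x_test, y_train, y_test
```

A good test checks both that the lengths add up and that concatenating the splits recovers the original data, which is exactly the "no data is lost" property step 4 asks for.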

1.4.9 Using Copilot as an Engineering Partner

Copilot is best used as an accelerator, not an oracle. In this course, the expectation is that you:

  • Understand and can explain every line of code you submit
  • Use AI to augment your reasoning, not replace it
  • Validate results with tests, plots, or sanity checks

We are embracing AI as a tool for learning and engineering. The better you get at guiding it, the more productive (and more thoughtful) you will be.

1.4.10 Exploring the Copilot Documentation

There are many advanced features in Copilot, including configuration options, keyboard shortcuts, and integration with other tools. Copilot features change quickly as new models and workflows are introduced. For the most current capabilities and usage details, consult the official Copilot documentation (GitHub, 2026).

Bibliography

  1. GitHub. "GitHub Copilot documentation." 2026. Accessed January 20, 2026. https://docs.github.com/en/copilot
  2. Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed. Pearson, 2020. http://aima.cs.berkeley.edu/