Issue #1: You’re Not the Construction Worker Anymore 🏗️➡️🧠

Hi everyone!
Welcome to the first issue of what we hope will be a practical, grounded series on what AI actually means for our Supervisory Union. This isn’t hype, fear-mongering, or “10 magic prompts.” It’s real talk about how AI shows up in the work we do every day - teaching, supporting students, communicating with families, and keeping our systems running smoothly (all while keeping in mind our new “Generative AI Best Practices Guide”).
Here’s the single most important mindset shift:
With generative AI, you are no longer the construction worker. You are the foreman.
For most of our careers, work meant doing the labor yourself - writing the email, drafting the lesson plan, building the spreadsheet. Quality was tied directly to your time and effort.
AI changes that, not by removing responsibility, but by moving it up a level.
Your job is no longer to put up the wall. Your job is to decide where the wall goes, why it exists, what it’s made of, and whether it meets code 🧱📐.
AI is fast and tireless. It will happily build you a brick wall in seconds - and just as confidently build the wrong wall, in the wrong place, out of plaster of Paris. That’s why the foreman role matters. Oversight, context, and judgment stay with us.
What Generative AI Is Good At
Think of AI as a very fast, very willing, and very eager-to-please junior assistant. It’s useful for:
- first drafts
- summarizing long content
- rewriting for tone or format
- generating examples or variations
- helping you get past a blank page ✍️
Supervised well, it’s helpful. Treated as an authority, it becomes risky.
Where It Can Go Wrong ⚠️
AI doesn’t understand our SU, our students, or our policies unless we supply that context - and even then, nuance can be lost. It can hallucinate facts, oversimplify, misread tone, reflect bias, and sound 100% confident while being 100% wrong.
The rule of thumb is simple: if you wouldn’t send a human draft without reviewing it, don’t send an AI draft without reviewing it 👀.
Sensitive Information and EDU Systems 🔐
Older guidance said “never put sensitive data into AI.” That was once universally true. The reality is now more nuanced.
Some platforms train on user input. Approved EDU-configured systems - such as Gemini for EDU - do not train on your data. Currently, the only district-approved AI tools are Google-provided AI tools, like Gemini and NotebookLM.
That means that within district-approved, FERPA-aligned environments and existing policy, limited use of sensitive information may be appropriate when care is taken to anonymize the data. You may use approved AI tools with student-related content ONLY if the data is fully anonymized:
- Remove Direct Identifiers: No names, emails, addresses, or ID numbers.
- Remove Indirect Identifiers: No specific details (e.g., "the quarterback of the football team who lives on Main St") that could identify a student when combined.
- Sensitive Data Ban: Never enter IEP, 504, disciplinary, medical, or counseling notes into an AI prompt, even if anonymized.
- The "Public Document" Rule: If you would not feel comfortable posting the text on a public bulletin board, do not paste it into an AI prompt.
The keys are:
- Use SU-approved tools only
- Understand which platforms are EDU-configured
- Follow data-handling policies
- Ask when unsure rather than guessing
We’ll dig deeper into this in a future issue.
Prompting Is Supervision, Not Magic 🧭
Ignore the internet’s “prompt spell” culture.
Good prompting is clear direction, like briefing a colleague:
- Who is this for?
- What’s the goal?
- What tone fits?
- What should be included or avoided?
- What constraints matter?
You’re not casting magic words. You’re supervising work. And the first draft is rarely the final copy.
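To make that concrete, here is one example of a briefing-style prompt. The scenario and details are invented for illustration - swap in your own audience, goal, and constraints:

```
You are helping me draft a message to families.
Audience: parents/guardians of 5th graders.
Goal: explain a schedule change for next week's field trip.
Tone: warm and clear, under 150 words.
Include: the new date and what students should bring.
Avoid: jargon and anything that sounds like legal boilerplate.
```

Notice there’s nothing magical here - it’s the same briefing you’d give a colleague, and it gives the AI the context it can’t guess on its own.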
If you’d like to take a free, self-paced, two-hour course with a PD certificate at the end, please consider this Generative AI for Educators course from Google’s “Grow with Google” program.
How This Series Will Work 💬
This issue sets the foundation, and we have content planned for the next few issues that we want to get out first. After that, we’ll take requests for the AI-adjacent questions that are keeping you up at night.
We’ll do our best to provide answers in a clear, intelligible format, and include short tutorials or links to screen-recorded guides. The first tutorial is quick, simple, and very powerful, but easy to miss:
How to make Gemini think harder before answering.
The Big Picture 🎯
Used well, generative AI saves time and reduces friction. Used poorly, it can amplify errors quickly.
The difference isn’t the model. It’s the supervision.
You’re still the professional in the room. AI doesn’t replace your judgment - it depends on it.
Thank you!
- Carrie & Marvin
