The September 10, 2025, Active Teaching Lab explored how artificial intelligence has quietly moved beyond the familiar dialogue box and is increasingly integrated across the tools and platforms we use to teach. It’s no longer just about opening ChatGPT in a new tab—AI is now embedded in Canvas, Adobe Creative Suite, Microsoft Office, and even Google Chrome itself. As David Delgado emphasized throughout the session, this shift requires us to think differently about our role as AI users: we’re becoming project managers of AI teams rather than simply consumers of AI outputs.
The implications for education are profound. As AI capabilities accelerate, the technology has become increasingly woven into our students’ digital lives. We may (with good reason) remain uneasy about the growing prevalence of AI in society, let alone in our students’ work. Yet we have a very real responsibility to learn and use it. The goal isn’t some vague notion of efficiency in teaching. We must learn how to use AI so we can model thoughtful AI integration in our own work while helping students develop the critical thinking skills to use these tools wisely. AI will continue advancing whether we engage with it or not. Our students will increasingly use it whether we think it wise or not. Our choice, then, is whether we teach students to become skilled and wise directors of AI capabilities or leave them to figure it out alone.
David’s PowerPoint slides are available here.
Lab Takeaways: Three Modes of AI Integration
The September Active Teaching Lab session highlighted that the majority of AI use falls into three main modes:
- Dialogue Mode represents the familiar conversational interface most of us know: you ask questions, get responses, and refine through back-and-forth interaction. This works well for brainstorming, initial drafts, and quick clarification.
- Reasoning Mode goes deeper, with AI systems that show their thinking process. Instead of just providing answers, these tools surface their assumptions and lay out step-by-step approaches to complex problems. This transparency helps users better understand how AI works and makes its reasoning easier to critique.
- Agent Mode represents today’s AI frontier. AI agents act independently of prompts for extended periods (currently measured in minutes). Agents can browse the internet, write code, analyze files, and operate software while working toward user-set goals. Rather than managing each interaction, users direct autonomous work sessions.
AI as a Creation Tool
AI Creates New Content – it doesn’t simply recycle old information. Contemporary AI systems create new material by understanding patterns and relationships in ways that produce novel outputs. For example: when asked to blend Spanish and German into new words, AI doesn’t just mash existing terms together—it draws on the linguistic structures of both languages to create plausible hybrids. Similarly, AI can generate unique, historically grounded counterfactuals, like imagining how Jimmy Carter might have responded to the Nazi invasion of Austria, by understanding both Carter’s documented approaches as president and the historical context of the 1930s.
AI’s creative capabilities extend beyond text. AI can now interpret emotional nuance from written passages and render them as expressive audio, or even a podcast. Entering the text of a Romeo and Juliet soliloquy yields a Shakespearean performance that captures the intended emotional weight, not just a robotic reading of the words.
What’s Available at UWM
There are a variety of tools already available to UWM faculty, staff, and students:
- Microsoft Copilot runs throughout the Office 365 ecosystem with full FERPA compliance when you use your UWM credentials. It can synthesize long email threads, query complex documents, and even access GPT-5.
- Adobe Firefly and Express, both available with a UWM ID, are each designed for user-friendly, AI-enhanced image generation and editing.
- Canvas now includes a host of AI features, though most are not currently available at UWM. There is, however, an AI tool that lets instructors summarize discussions.
- Google ecosystem tools like Gemini in Chrome, Google Lens, and Homework Help allow individuals to highlight text or images directly on the screen and receive contextual explanations or answers—even inside Canvas quizzes. While Google “paused” Homework Help on September 19th, the tool’s functionality remains available to students through Gemini in Chrome and Google Lens.
- Gemini in Chrome | Introduction | How to use
- Google Lens | Introduction | How to use
- Homework Help Information
Academic Integrity
Unfortunately, AI’s increased integration into our digital ecosystem (especially AI built into browsers) exacerbates the ongoing challenge of academic integrity. Students can now get instant answers to highlighted content from any screen, even during supposedly secure assessments. When students combine mobile and desktop devices, they can receive answers to most Canvas quiz questions even when using lockdown browsers or video surveillance. Review this clip from the September session recording for an in-depth demonstration.
The integration of AI into the digital environment underscores the need to evolve assessment approaches. This means emphasizing process over product and requiring students to document their thinking and decision-making alongside their final outputs.
Strategic Responses to AI
- In syllabi, define clear AI-use expectations. State what’s allowed (e.g., brainstorming, grammar checks) and what isn’t (e.g., submitting AI-generated final answers). Explain to your students why you made these decisions, either in the syllabus or during classroom conversations.
- In assignments, shift grading toward process artifacts that make learning visible: drafts, revision notes, rationale statements, and logs of AI interactions. When students must explain their reasoning and document their process, assignments become more resistant, though not immune, to AI misuse. In addition, framing assignments around local, applied, or discipline-specific contexts creates opportunities for authentic engagement that are less prone to AI misconduct.
- Teach AI literacy. Teaching AI literacy is not simply teaching students how to use AI; it means teaching them how to use AI responsibly: to evaluate its accuracy, challenge its results, cite AI-assisted work appropriately, and employ it in ways that enhance, rather than undermine, their learning.
Experiments Worth Trying
- This week: Test Microsoft Copilot for email management and document summarization to experience firsthand how AI can operate as a “team member.”
- Next week: Test the Google AI tools built into Chrome within a Canvas course. How easily and effectively can you complete quizzes, assignments, and discussions using just AI?
- This month: Requiring students to demonstrate how they reached their conclusions can help mitigate overreliance on AI. Examine one current assignment through this lens of process documentation. Where could you add requirements for students to show their thinking without completely redesigning the learning experience?
- This semester: Pilot a low-stakes assignment focused on AI documentation. Ask students to record what they asked AI tools, what responses they received, and how they evaluated and used those results.
- This academic year: We learn best from each other! As you experiment with AI in your teaching, share your experiences with colleagues. Record what you tried and how it worked in this survey.
AI Strategies Survey
Please add your responses to the form below. Share both your current AI practices and what you believe constitutes sound, effective practice. Feel free to submit multiple examples throughout the year, and answer only the questions you wish.