Guide To Teaching and Learning

Pascal Glissmann

Generative AI in Teaching, for Critical Thinking & Making: The Visual Archive

In this class we will explore the relationship between form and content — how is meaning constructed and communicated through visual language? Through observing, collecting, analyzing, writing, and form-making, students will apply design processes involving visual research, concept generation, and craft skills. Driven by research interest, students will use digital and analog means to build visual archives.


The #FalsePredictions and #GeneratedDoubts workshop proved to be a successful and generative experiment for students, not because of the aesthetic outcome of the AI-generated visuals, but because of the critical discourse it sparked throughout the process. What stood out most was how the assignment shifted the emphasis from producing final results to interrogating the foundational choices that shape them—specifically, the construction of visual datasets. Students spent significant time reflecting on what it means to “feed” an AI system, asking: What do these images represent? Whose perspectives do they carry? What narratives are being included or excluded?

This attention to the ethical and environmental stakes of dataset creation became central to the learning process. Students engaged with the unsettling reality that training data is never neutral. Conversations organically emerged around authorship, representation, and the extractive nature of data collection. In many cases, students began to see the act of compiling a dataset not as a preparatory task, but as an experimental and conceptual practice in its own right—one that already responds to their research inquiry before the AI is ever involved.

The most meaningful insight came when students recognized that the “moment of feeding” the system was not merely technical but deeply philosophical and political. The very act of offering a dataset became a moment of confrontation—with one’s assumptions, biases, and blind spots. As a result, the speculative outputs generated by AI were no longer just strange or humorous distortions, but became visual provocations that invited further inquiry.

Ultimately, the success of the workshop lay in how it turned a tool often associated with automation and efficiency into a site of reflection and critical making. The students left not with answers, but with sharper questions—and that, in itself, was the goal.


Workshop for #FalsePredictions & #GeneratedDoubts

We ran a workshop called False Predictions and Generated Doubts. The idea was to question the authority of the archive—since archives often present themselves as sources of truth or official memory. Instead, we asked: what happens when we train an AI model on visual datasets that the students themselves created in response to their research questions? The goal wasn’t to generate polished outcomes, but rather to disrupt familiar ways of seeing. We used AI as a tool to surface bias, to challenge assumptions, and to explore the limitations of our own image sets. The key question became: what do these visual predictions reveal—not just about the images—but about the language and logic we’ve encoded into them? How do our visual datasets shape what the AI “sees”? And how might these speculative outputs lead us toward new directions of inquiry? Through these experiments, students produced a wide range of speculative visualizations. We then mapped and distributed them as part of an open-ended, exploratory framework—treating the results not as conclusions, but as provocations.

Project: WhatTheModelSaw
By “collaborating” with a machine that “learns” from collected images, participants engaged in a form of speculative visual research. The AI does not recognize objects, context, or politics—it only sees patterns. What it generates might be inaccurate, strange, or surprisingly resonant. The results are not the point; rather, we are interested in what these visual predictions reveal about the image set, visual assumptions, and the constraints of the chosen language.

Project: MachinesForgetNothing
This method raises ethical and environmental considerations. Pre-trained models often incorporate material without explicit authorization, and each interaction with AI contributes to environmental impact. Participants were also advised against using highly personal or sensitive material, understanding that anything fed into an AI system may be stored, transformed, misinterpreted, and reused beyond their control. Engaging with these risks was an intentional and critical part of the exercise.
