When Practice Becomes an AI Companion

I now have three public Custom GPTs in ChatGPT.

Seeing them together made me pause. At first, they looked like three separate tools: one for movie recommendations, one for reflective learning, and one for digital accessibility. But the more I looked at them, the more I realized they are not random experiments.

They are three digital extensions of work I have been doing for years.

  • Movie Streaming Tonight comes from my lifelong love of film and the very real human problem of wanting to watch something good without spending half the evening scrolling through streaming platforms.
  • Learning Begins with Dialogue came directly from a recent article I wrote after starting an AI Strategy and Governance course, where the learning experience began not with passive content delivery, but with dialogue.
  • Title II Accessibility Assistant grows out of my digital accessibility work in higher education, especially the practical institutional challenge of helping people review, improve, and remediate digital materials for ADA Title II and WCAG-aligned accessibility.

Three tools. Three parts of my work. Three different expressions of the same larger question:

How can AI help people think, choose, learn, and act more thoughtfully?

That is what interests me most about Custom GPTs. The point is not simply that anyone can make a chatbot. The point is that we can now shape small AI companions around a specific purpose, voice, method, and set of values.

That matters.

A Custom GPT is not just a tool that answers questions. At its best, it is a designed interaction. It has a purpose. It has boundaries. It reflects decisions about what kind of help should be offered, what kind should be avoided, and where human judgment should remain central.

That is why these three GPTs feel meaningful to me.

Movie Streaming Tonight is playful, but not trivial. Choosing a movie is a small act shaped by culture, mood, timing, and taste. Sometimes we want comfort. Sometimes we want surprise. Sometimes we want something critically acclaimed, and sometimes we just want a good popcorn movie. The GPT is designed to reduce friction and make discovery feel easier.

Circular icon showing a screen with a play button, surrounded by a clapperboard, film strip, popcorn, crescent moon, and stars.
Figure 1. Movie Streaming Tonight helps users find five movie recommendations based on mood, genre, and streaming availability.

Learning Begins with Dialogue is more reflective. It helps a learner begin any learning experience by clarifying what they already understand, what they hope to learn, and why it matters. That may sound simple, but it points to something important. Learning does not begin only when content is delivered. Often, learning begins when we are asked to locate ourselves inside a subject.

Circular icon showing an open book with a sprout growing from it and two overlapping speech bubbles above, representing reflective learning through dialogue.
Figure 2. Learning Begins with Dialogue invites learners to begin with reflection before moving deeper into a learning experience.

Title II Accessibility Assistant is the most institutional of the three. It is designed to help organizations review materials, improve accessibility, write better alt text, think through remediation, and plan for Title II work. It does not replace legal counsel, accessibility professionals, or human review. It is not there to certify compliance. It is there to help people move from “we know this matters” to “here is what we can fix next.”

Circular icon showing a checklist labeled “Title II Accessibility Assistant,” with a courthouse symbol, accessibility icon, and checkmarks.
Figure 3. Title II Accessibility Assistant supports practical digital accessibility review, remediation, and planning.

That last part is important because so much institutional work gets stuck between awareness and action.

People may understand that accessibility matters, but not know where to begin. They may know they need to improve a document, but not know what makes it inaccessible. They may know Title II compliance is coming, but not know how to turn that requirement into practical workflows.

A well-designed GPT can help bridge that gap.

Not by replacing expertise, but by making first steps easier.

This is where I think Custom GPTs become interesting as a form of human-AI collaboration. They allow us to translate part of our knowledge, practice, and judgment into an interactive experience that others can use. They are not a full substitute for the person who created them. They are more like small public-facing instruments built from that person’s work.

In that sense, creating a GPT feels different from writing an article or designing a workshop.

  • An article presents an idea.
  • A workshop guides a group through an experience.
  • A Custom GPT creates a reusable dialogue around a purpose.

That is a new kind of authorship.

It requires different questions:

  • What should this tool help people do?
  • What should it refuse to do?
  • What kind of tone should it use?
  • When should it ask questions instead of giving answers?
  • When should it provide a direct recommendation?
  • When should it remind users that human judgment still matters?

Those questions are not just technical. They are pedagogical, ethical, and creative.

They are also deeply human.

The more I work with AI, the more convinced I become that the most meaningful uses are not the ones that erase the person behind the work. They are the ones that make the person’s thinking more available, more usable, and more alive in different contexts.

That is what I see in these three GPTs.

  • One helps people find something worth watching.
  • One helps learners begin with reflection.
  • One helps institutions take practical steps toward digital accessibility.

Together, they form a small ecosystem around things I care about: film, learning, accessibility, and responsible AI.

I do not think of them as finished products. They are living tools. They will need testing, revision, and better instructions over time. Some responses will be great. Some will need adjustment. That is part of the work.

But seeing them together made something clear to me.

Custom GPTs are not just about automation. They are about articulation.

To build one well, you have to articulate your own process. You have to name what matters, define the boundaries, and imagine the kind of help someone might need. You have to turn instinct into instruction.

That is harder than it sounds.

It is also why this work feels creative.

AI often gets framed as a machine that gives answers. But in this case, building with AI required me to ask better questions about my own practice. What do I actually do when I help someone choose a film? What do I actually do when I help someone begin learning? What do I actually do when I help someone understand accessibility?

The GPTs are useful outputs, but the design process itself was also reflective.

That may be one of the overlooked gifts of building with AI. It can help us see the shape of our own work.

Not perfectly. Not completely. But enough to notice patterns.

A body of work is not only what we have already made. It is also the set of questions we keep returning to, the forms of help we keep offering, and the values that keep showing up across different contexts.

For me, these three GPTs make that visible.

They are not the whole story. But they are a beginning.

In the age of AI, we are not only writing, teaching, advising, designing, or creating. We are also beginning to shape companions that carry small pieces of our practice into the world.

The responsibility is to make those companions useful, honest, bounded, and human-centered.

Because the goal is not to create tools that sound like us.

The goal is to create tools that help others think, learn, choose, and act with more care.


Professional headshot of Joni Gutierrez, smiling and wearing a black blazer and black shirt, set against a neutral gray background in a circular frame.

Hi, I’m Joni Gutierrez — an AI strategist, researcher, and Founder of CHAIRES: Center for Human–AI Research, Ethics, and Studies. I explore how emerging technologies can spark creativity, drive innovation, and strengthen human connection. I help people engage AI in ways that are meaningful, responsible, and inspiring through my writing, speaking, and creative projects.