HAIML
A framework for learning with AI while preserving human agency, reflection, and ethical judgment
The Human-Centered AI Metacognitive Learning Model, or HAIML, is a framework I developed to guide how students learn with AI while maintaining human agency, critical thinking, and ethical responsibility. Rather than positioning AI as a replacement for learning, HAIML centers the human learner and asks how AI can support reflection, judgment, and intentional engagement.
HAIML is grounded in the idea that students should not simply use AI to complete tasks. They should also think critically about how AI influences their choices, confidence, revision process, and understanding. In this way, learning with AI becomes both experiential and reflective.
As AI becomes increasingly embedded in education, students need more than access to tools. They need frameworks that help them understand when AI is useful, when it may limit their thinking, and how to remain active decision-makers in the learning process. HAIML provides that structure by connecting use, reflection, and ethics.
This model is especially important in writing-intensive, reasoning-based, and decision-making contexts where students may be tempted to outsource thinking rather than deepen it. HAIML helps shift AI use away from convenience alone and toward metacognitive and ethical growth.
HAIML is organized into three layers: experiential engagement, metacognitive reflection, and ethical decision-making. In the experiential layer, students actively engage with AI tools in authentic learning tasks. This may include brainstorming, organizing ideas, generating feedback, analyzing responses, or comparing AI-generated material with their own thinking. The goal is not passive exposure to AI, but meaningful interaction with it as part of the learning experience.
In the metacognitive layer, students reflect on how AI influences their thinking, confidence, judgment, and revision process. They consider questions such as: What did AI help me see? What did it make easier? What did it make me rely on too quickly? This layer is essential because it keeps students aware of their own thinking rather than allowing AI to disappear into the background of the task.
In the ethical decision-making layer, students evaluate AI use through the lens of responsibility, transparency, authorship, fairness, and human accountability. This layer reinforces that using AI is not simply a technical choice, but also an ethical one. Students learn to ask whether their use of AI supports learning and integrity, and whether it aligns with the expectations of the assignment, course, or discipline.
HAIML works in close alignment with my four-level AI use guidelines. Together, these models help students understand both what kinds of AI use are allowed and how to think about that use more deeply. The guidelines provide structure, while HAIML provides the learning model underneath that structure.
This combination supports a human-centered approach in which students remain active participants in their own learning. They are not simply following rules about AI. They are learning how to think with intention in AI-rich environments.
HAIML can be applied across a wide range of educational settings, including online courses, writing-intensive assignments, discussion activities, reflection tasks, and AI-supported feedback environments. It is especially useful in courses where students are developing critical thinking, self-awareness, and ethical reasoning.
In my own work, HAIML informs course design, structured AI assignments, reflective prompts, AI-supported grading with human oversight, and faculty conversations about responsible AI integration. It is intended to be practical, adaptable, and scalable across disciplines.
HAIML continues to evolve as I study how students experience AI-supported learning and how educators can preserve human judgment in technology-rich environments. Future directions include expanded course applications, faculty resources, student-facing materials, and research on how reflective AI use influences learning, confidence, revision, and ethical awareness.