After six months of diving into AI tools, I’m still figuring out how to work with them without compromising my professional integrity. The old boundaries between my words and borrowed words don’t map cleanly onto AI assistance. When AI helps craft my prose, am I still the author? When my students use GPTs (generative pre-trained transformers built on large language models, or LLMs, and more commonly known as AI chat tools) to generate sophisticated responses, how do I know if they actually understand the material? The underlying tension in both contexts is the same: Where does human skill end and tool dependency begin?
I’ve been wrestling with this question through direct experience, using AI tools in my own writing and as I prepare to teach students about AI applications in technical writing scenarios. The uncertainty isn’t comfortable, but it’s productive. It’s forcing clarity about professional standards that I previously took for granted.
The attribution gap
Traditional models for crediting intellectual work assume clear human sources. Plagiarism involves stealing and passing off ideas or words as one’s own without crediting the source. That creates a property line between what’s “my work” and what isn’t. The property line is fuzzy, but it’s recognized.
Work-for-hire contracts handle the use of another’s writing by transferring ownership. I might write the words, but my employer becomes “the writer” through legal assignment. Editorial assistance operates on a continuum: the greater the influence of dictionaries, grammar checkers, or human editors on the final product, the more they should be credited.
Search engines provide access to enormous amounts of knowledge while making attribution relatively straightforward. That makes it easy to build on others’ genius and cite your sources to avoid plagiarism, while maintaining a clear distinction between your words and those of others.
But what’s a GPT in this context?
- Is it a sophisticated grammar checker? Definitely.
- Is it a mechanical editor? It can be.
- Is it your personal writer-for-hire? It could be.
- Is it a source of original content? Unclear.
- Is it an industrial-strength plagiarism machine? That’s still a topic of heated discussion.
AI doesn’t fit into existing categories while somehow fitting into all of them. The academic press is still debating this. Stances range from “do what makes sense” to “no way, no how” depending on the field and editorial board. Most of my academic papers were guided by ACM and IEEE standards, so the ACM’s more flexible approach, which still requires academic transparency, feels familiar and reasonable.
The competence question
As a writer, the integrity question centers on authorship: “Am I still the writer if AI helps structure my arguments?” As an instructor, it’s about learning: “Do my students understand the material if they’re using AI to generate responses?”
Both questions probe the same concern about where human capability ends and tool dependency begins.
In my experience preparing to teach students about AI applications, GPTs create a dangerous shortcut. They make it easy to offload understanding to the AI while focusing only on task completion. Worse, this can create an illusion of understanding that encourages detrimental practices.
I’m convinced that GPTs can be excellent tools for applying sound knowledge but poor tools for learning fundamental concepts. They work well when you can already identify errors and craft precise prompts. But if you can’t spot when a GPT gets something wrong, you can’t distinguish between learning from facts and learning from AI hallucinations. This makes them risky vehicles for learning concepts from scratch. This is made even more difficult by the GPT’s bias towards providing answers, even when it’s not certain it has one.
As an instructor in this new world, I can no longer rely on final output as the sole measure of understanding. I need to examine the process behind the work product. This works in the professional education courses I teach where I can observe student thinking through reflection and peer review, but it would be challenging in traditional undergraduate classes that are grade-oriented and rely heavily on work product assessment.
The transparency challenge
The Association for Computing Machinery (ACM) takes what I consider to be a “do what makes sense” approach to AI use in academic publishing, requiring appropriate transparency about AI assistance while allowing authors to determine the level of disclosure.
But what constitutes appropriate transparency? How much disclosure serves readers, and how much merely protects professional credibility? The answer likely depends on context and audience expectations.
A working approach to AI collaboration
Rather than waiting for industry consensus, I’ve been experimenting with a workflow that addresses my integrity concerns while maintaining both authorship and learning. Here’s what I’ve developed:
Step 1: Preserve the thinking work. I create the first draft using traditional methods: search engines for research and citation, and my own analysis to establish the article’s general arc. This ensures the core thinking remains mine.
Step 2: AI as developmental editor. I send my draft to Claude, which I’ve trained on my writing style. The response includes the feedback I’d expect from a developmental edit: suggestions for improving flow, correcting voice and tone, and identifying structural issues. (A rough sketch of how this step might be scripted appears after Step 5.)
Step 3: Maintain control of the content. I review the suggestions and question any changes. We negotiate edits as I would with a human editor, discussing idea flow, main points, and transitions. I decide which suggestions to accept based on my understanding of the content and audience, then review the result.
Step 4: Iterative refinement. The AI and I iterate several more times as we approach the final version. Each round preserves my decision-making authority while leveraging AI’s capacity for consistent improvement.
Step 5: Final human review and publishing. I copy content into WordPress to apply the document design elements and review the rendered preview. I run one last spell check in MS Word before publishing, add tags and such, then push the Publish button.
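To make Step 2 concrete, here is a minimal, hypothetical sketch of how that exchange could be scripted with the Anthropic Python SDK. It’s an illustration under stated assumptions, not my actual setup: I may just as easily use the chat interface, and the file names, prompt wording, and model name below are placeholders standing in for the “trained on my writing style” context.

```python
# Hypothetical sketch of Step 2: asking Claude for a developmental edit.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. File names, prompt wording,
# and the model name are illustrative placeholders.
import anthropic

# A short style guide stands in for "trained on my writing style":
# a few paragraphs describing voice, tone, and structural preferences.
with open("style-guide.md") as f:
    style_guide = f.read()

with open("draft.md") as f:
    draft = f.read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you prefer
    max_tokens=2000,
    system=(
        "You are a developmental editor. Match this writing style:\n"
        + style_guide
        + "\nSuggest improvements to flow, voice, tone, and structure. "
          "Do not rewrite the draft; explain each suggestion so the "
          "author can accept or reject it."
    ),
    messages=[{"role": "user", "content": draft}],
)

# Print the feedback, which feeds the negotiation in Step 3.
print(response.content[0].text)
```

The design point is the same whether this step is scripted or done in a chat window: the prompt asks for suggestions rather than a rewrite, so the accept-or-reject decision in Step 3 stays with the human author.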
Principles from practice
This process addresses the core integrity challenges:
- Preserve human analysis: AI improves expression but doesn’t replace thinking. The first draft ensures my ideas and structure drive the piece.
- Maintain editorial control: I negotiate with AI suggestions rather than accepting them wholesale. This preserves my understanding of why changes matter.
- Document the process: Transparency about AI collaboration builds trust with readers while establishing professional standards.
- Focus on capability building: Tools enhance my existing skills rather than replacing them. I’m becoming a better editor through AI collaboration, not a dependent user.
I’ve been using this process to author this and recent blog posts, and it seems to be working, although I haven’t tested it in other contexts yet. I’m still refining the process, but it provides one approach to working with AI tools while maintaining professional integrity.
This human-AI collaboration is something I continue to explore. The goal isn’t to avoid AI assistance but to use it in ways that strengthen rather than replace human capability. This requires ongoing attention to where we draw the lines and why those boundaries matter for our professional development and our audiences’ trust.