This phase of the project was structured as a comparative experiment. We tested three large language models (ChatGPT, Google Gemini, and Microsoft Copilot) to evaluate how each tool handles resume optimization. To ground the comparison in real content, we applied the same prompt to a full resume in all three models and compared how each tool rewrote and structured the experience.
| Category | ChatGPT | Gemini | Copilot |
| --- | --- | --- | --- |
| Clarity | Improved readability and structure | Clear but sometimes overly complex | Clear and structured, slightly formal |
| Technical Keywords | Highlighted existing tools | Added industry-level terminology | Emphasized keywords and system-relevant terms |
| Authenticity | Stayed close to original experience | Sometimes reframed tasks more strategically | Balanced, but occasionally rephrased content |
| Tone | Natural and professional | Corporate and highly technical | Professional with a formal tone |
| Risk | Slightly generic phrasing | Possible exaggeration of responsibilities | Moderate risk of over-structuring language |
While all three models improved clarity and structure, they differed in how they represented experience. ChatGPT maintained a balance between clarity and authenticity, making it easier to understand the original work. Gemini and Copilot introduced more technical and formal language, which increased perceived impact but sometimes moved away from the original phrasing.
A second comparison was run using three descriptive resume projects, applying the same prompt in all three LLMs to see what kinds of language and descriptions the tools provided:
- Vehicle Counting System with Object Detection (Oct. 2023). Focus: Computer vision (OpenCV/YOLOv8), Python, and cloud-based development.
- Event-Based Queueing Simulation for Network Optimization (Nov. 2025). Focus: Low-level C++, OMNeT++, and network protocols (TCP Reno/dynamic routing).
- Campus Accessibility Improvement Project (Oct. 2023). Focus: Strategic framework, user requirements, and stakeholder presentations.
Tool Comparison & Observations
All three models improved the clarity and structure of resume content, but each prioritized different aspects:
- ChatGPT focused on readability and balance. It refined sentence structure, improved flow, and incorporated relevant tools without overcomplicating the content. It was most effective at preserving the original intent while improving clarity.
- Google Gemini demonstrated strong performance in technically complex scenarios. It identified specific technologies and concepts more precisely, particularly in engineering or simulation-based projects. However, it occasionally introduced more abstract or “enterprise-level” language that extended beyond the original scope.
- Microsoft Copilot produced more formal and structured outputs. It emphasized operational language and keyword alignment, making it effective for optimizing content for ATS systems, though less flexible in tone.
Key Considerations and Reflection
A major focus in this phase was ensuring that AI-generated content remained accurate and aligned with the original experience. While AI tools can enhance wording and structure, they can also introduce language that overstates or reframes tasks if not carefully reviewed. This highlights the importance of maintaining accuracy when using AI-assisted tools, particularly in professional contexts where content must reflect actual experience.
As part of this phase of our inquiry, we also met with an HR professional to better understand how resumes are actually evaluated in real hiring contexts. One of the main takeaways from that conversation was that how experience is written is just as important as the experience itself. Hiring managers are not only scanning for technical skills, but also looking for clear communication, relevance to the role, and an understanding of how a candidate contributes value. Overly technical or jargon-heavy descriptions can make it harder to quickly interpret what a candidate actually did, especially when resumes are reviewed in short timeframes.
This insight directly connects to our testing of AI tools. We found that a model like ChatGPT was more effective in this context because it tends to translate technical work into clearer, more descriptive language that highlights skills and impact. In contrast, other models often leaned toward more formal or technical phrasing, which can sound impressive but may reduce clarity for non-technical reviewers. This reinforces the idea that AI is most useful when it helps bridge the gap between technical detail and professional communication, rather than simply increasing the complexity of the language used.
Digital Literacy & Ethics
Our research into the “black box” of hiring algorithms highlighted a significant ethical concern. As candidates use AI to “optimize” for the machine, they risk losing their authentic voice. Research suggests that while AI-aided tools like ResumeFlow can help customize resumes for specific jobs, they can also inadvertently mirror existing biases in the hiring data they were trained on (Wilson et al., 2025).
Our final resource demonstrates that the most effective digital presence comes from using these tools in a hybrid approach. By using LLMs to identify keywords and refine LaTeX structures, we can ensure our skills are “visible” to ATS systems while maintaining the technical integrity that a human hiring manager expects. As LLM-aided tools continue to evolve, the goal remains the same: using technology to enhance, not replace, the human learning process (Zinjad et al., 2024).
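The keyword-visibility step of this hybrid workflow can be illustrated with a minimal sketch. The function below checks what fraction of a job posting's keywords also appear in a resume, which is roughly the kind of overlap an ATS keyword filter measures; the vocabulary, resume text, and posting text are invented for illustration, and real ATS systems use far more sophisticated parsing.

```python
import re

def extract_keywords(text, vocabulary):
    """Return the vocabulary terms that appear in the text (case-insensitive)."""
    found = set()
    lowered = text.lower()
    for term in vocabulary:
        # Lookarounds act as word boundaries that also work for terms ending in
        # symbols like "C++"; they avoid matching "Java" inside "JavaScript".
        pattern = r"(?<!\w)" + re.escape(term.lower()) + r"(?!\w)"
        if re.search(pattern, lowered):
            found.add(term)
    return found

def keyword_coverage(resume_text, job_text, vocabulary):
    """Fraction of the job posting's keywords also present in the resume,
    plus the set of keywords the resume is missing."""
    job_terms = extract_keywords(job_text, vocabulary)
    resume_terms = extract_keywords(resume_text, vocabulary)
    if not job_terms:
        return 1.0, set()
    missing = job_terms - resume_terms
    return len(job_terms & resume_terms) / len(job_terms), missing

# Illustrative data, not drawn from the actual resumes in this study
vocab = ["Python", "OpenCV", "YOLOv8", "C++", "OMNeT++", "TCP"]
resume = "Built a vehicle counting system in Python with OpenCV and YOLOv8."
posting = "Seeking an engineer with Python, OpenCV, and C++ experience."

score, missing = keyword_coverage(resume, posting, vocab)
print(f"coverage: {score:.2f}, missing: {sorted(missing)}")
```

In practice, an LLM would supply the vocabulary (by extracting keywords from the posting), and a check like this would confirm which terms the rewritten resume actually surfaces before submission.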
Outcome
This phase of the inquiry resulted in a structured approach to using AI for resume and portfolio development. Through this part of our inquiry, we came to understand that digital literacy is not just knowing how to use AI, but knowing how to apply it critically and how to detect misinformation and inconsistent standards. Combining multiple tools and workflows improves clarity, aligns content with hiring systems, and supports the presentation of technical work across platforms and resume versions.
References
- Wilson, K., et al. (2025). People mirror AI systems’ hiring biases, study finds. University of Washington News. https://www.washington.edu/news/2025/11/10/people-mirror-ai-systems-hiring-biases-study-finds/
- Zinjad, S. B., et al. (2024). ResumeFlow: An LLM-aided tool to quickly customize job-specific resumes. https://arxiv.org/abs/2402.06221