With the rapid advancement of AI technologies, tools like ChatGPT, Jasper, and others are revolutionizing the way we create written content. From drafting professional emails to preparing detailed reports, AI-generated text is becoming more prevalent. While this brings numerous advantages, it also presents a critical question: how can we distinguish between AI-generated text and human-written content?

For HR professionals, this is especially important when evaluating candidate applications, cover letters, or written assessments, where authenticity, tone, and intent play pivotal roles. This article delves into how HR practitioners can identify AI-generated text, the challenges this poses, and actionable strategies for addressing them.


Why Identifying AI-Generated Text Matters in HR

  1. Authenticity in Recruitment:
    • AI tools enable candidates to craft flawless resumes or responses, but this may mask their true skills and writing abilities.
    • HR professionals need to ensure that candidates genuinely possess the required competencies, especially for roles requiring critical thinking and communication skills.
  2. Ethics and Fairness:
    • The overuse of AI tools may create an uneven playing field where candidates relying solely on human effort are disadvantaged.
    • Recognizing AI-generated content helps maintain fairness in the recruitment process.
  3. Cultural Fit:
    • Human writing often reveals personality, creativity, and values that align with an organization’s culture—qualities AI cannot fully replicate.

Key Characteristics of AI-Generated Text

1. Overly Polished and Generic Tone

  • Observation: AI-generated text often lacks a personal touch and feels too perfect. The tone is consistent but may come across as robotic or impersonal.
  • Example: “I am extremely passionate about leveraging innovative solutions to drive organizational success” – a statement likely generated by AI due to its generic phrasing.

2. Repetitive Phrasing

  • Observation: AI can sometimes overuse specific phrases or synonyms to meet word count requirements.
  • Example: Repeatedly describing a task as “highly effective” or “innovative” without adding unique insights.

3. Lack of Personal Anecdotes

  • Observation: AI-generated text rarely includes personal stories or examples that demonstrate individuality or human experiences.

4. Generic, Untailored Structure

  • Observation: While AI can structure content logically, its formatting and emphasis often lack context-specific relevance, such as tailoring a response to the particular job description at hand.

5. Overemphasis on Buzzwords

  • Observation: AI tools often use industry jargon excessively, aiming to impress without necessarily adding value.
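Two of the characteristics above, repetitive phrasing and buzzword overuse, can be mirrored by simple text statistics. The sketch below is purely illustrative: the buzzword list is an assumption invented for this example, and crude signals like these are far too unreliable to drive real screening decisions on their own.

```python
from collections import Counter
import re

# Hypothetical buzzword list -- an assumption for illustration only.
BUZZWORDS = {"innovative", "leverage", "synergy", "dynamic", "passionate",
             "cutting-edge", "impactful", "strategic"}

def crude_signals(text: str) -> dict:
    """Compute rough, illustrative signals: repeated bigrams and buzzword density.

    These heuristics merely mirror the characteristics described above;
    they are NOT a reliable AI detector.
    """
    words = re.findall(r"[a-z'-]+", text.lower())
    # Count adjacent word pairs; pairs seen more than once hint at repetition.
    bigrams = Counter(zip(words, words[1:]))
    repeated = {" ".join(pair): n for pair, n in bigrams.items() if n > 1}
    # Share of words drawn from the (assumed) buzzword list.
    buzz = sum(1 for w in words if w in BUZZWORDS)
    density = buzz / len(words) if words else 0.0
    return {"repeated_bigrams": repeated, "buzzword_density": round(density, 3)}

sample = ("I am passionate about innovative solutions. "
          "Our innovative solutions drive strategic, dynamic impact.")
print(crude_signals(sample))
```

On the sample text this flags the repeated phrase "innovative solutions" and a high buzzword share; on genuine writing, such scores are best treated as a prompt for closer human reading, never as a verdict.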

Challenges in Identifying AI-Generated Text

  1. Increasing Sophistication of AI:
    • Advanced tools are continually improving, making their outputs more human-like and harder to detect.
  2. Limited Time for Analysis:
    • HR professionals often handle a high volume of applications, leaving little time for in-depth evaluation of each submission.
  3. Bias Risks:
    • Over-scrutinizing text for AI traits could lead to the inadvertent rejection of genuine submissions that happen to exhibit similar characteristics.
  4. Ethical Dilemmas:
    • Striking a balance between permitting AI-assisted applications and maintaining authenticity is a grey area many organizations face.

Practical Strategies for HR Professionals

1. Ask for Specific Examples or Case Studies

  • Why: AI struggles to provide detailed, specific accounts tied to real events and personal experience.
  • How: Incorporate questions like “Describe a challenge you faced and how you overcame it with specific examples.”

2. Use Follow-Up Interviews

  • Why: Verbal communication often reveals discrepancies between a candidate’s actual abilities and AI-crafted written responses.
  • How: Compare the candidate’s spoken explanations to their written submissions for consistency.

3. Leverage AI Detection Tools

  • Why: Tools like GPTZero, Originality.AI, or others can flag potential AI-generated content.
  • How: Use these tools as a supplementary step, not the sole decision-making factor.

4. Test Writing Skills in Real-Time

  • Why: Real-time tasks reveal authentic skills that AI-generated text cannot replicate.
  • How: Conduct live written assessments during the interview process.

5. Evaluate Content Depth

  • Why: AI tends to generate surface-level insights without delving into complex reasoning.
  • How: Look for depth, originality, and nuanced thinking in responses.

Long-Term Solutions for HR

  1. Redefine Recruitment Policies
    • Accept the role of AI in modern applications but set clear guidelines on its permissible use.
    • Example: Allow AI assistance for formatting but not for answering specific competency questions.
  2. Educate Teams
    • Train HR teams on recognizing AI-generated content and understanding its implications.
  3. Encourage Transparency
    • Ask candidates to disclose if AI was used and to what extent.
  4. Promote Authenticity in Culture
    • Build a company culture that values genuine contributions over polished perfection.

A Broader Perspective

AI is not inherently negative—it’s a tool that can amplify capabilities when used ethically. HR professionals must embrace it while setting boundaries to ensure fairness, authenticity, and alignment with organizational values.

By adopting a balanced approach and sharpening their evaluative techniques, HR teams can navigate the evolving landscape of AI-generated content while making informed and ethical hiring decisions.

The key takeaway? Always look beyond the surface. Authenticity isn’t just about what’s written—it’s about how it connects.