By Beeri Amiel, Founder of XFunnel
Here’s the thing about AI in the workplace: while organizations focus on AI’s potential, people are already experiencing how it shapes their day-to-day work.
At XFunnel, we help the world’s biggest brands (such as HiBob) improve how they appear on AI search engines like ChatGPT and Google’s Gemini. I’ve watched this transformation happen in real time. But I wanted to understand what people were actually thinking, beyond the polished survey responses and corporate feedback sessions.
So we did something different. We analyzed over 500,000 real conversations happening on Reddit, where people share their honest thoughts about work without any filter. What we found was a clear pattern in how people experience AI at work—and where organizations are getting it right, or creating friction.
These conversations highlight how people experience AI in real workflows, decisions, and everyday moments.
What we analyzed
Think of Reddit as the modern break room conversation—people openly talk about work there, and they’re often more candid than they are in surveys or internal feedback channels. We analyzed posts and comments across communities like r/antiwork, r/WorkReform, r/jobs, and r/cscareerquestions—places where people freely discuss their work experiences.
Using XFunnel’s social listening technology, we identified the themes that came up most often, measured by how much people engaged with them (upvotes and comments). The result? A clear picture of what’s really on people’s minds about AI at work.
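The ranking approach described above — aggregating engagement per theme and sorting — can be sketched as follows. This is a minimal illustration, not XFunnel's actual pipeline; the theme labels and numbers are hypothetical.

```python
from collections import defaultdict

# Hypothetical posts, each tagged with a theme plus engagement counts.
# Labels and figures are illustrative only.
posts = [
    {"theme": "privacy/monitoring", "upvotes": 41200, "comments": 3100},
    {"theme": "privacy/monitoring", "upvotes": 18954, "comments": 2230},
    {"theme": "economic inequality", "upvotes": 32951, "comments": 3877},
    {"theme": "bias/discrimination", "upvotes": 17245, "comments": 1957},
]

def rank_themes(posts):
    """Sum engagement (upvotes + comments) per theme, highest first."""
    totals = defaultdict(int)
    for p in posts:
        totals[p["theme"]] += p["upvotes"] + p["comments"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for theme, engagement in rank_themes(posts):
    print(f"{theme}: {engagement}")
```

In practice, theme tagging itself would come from a classification step upstream; here it is assumed as given.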
Here are the 10 themes shaping how people experience AI at work and what they signal for HR leaders:
10 things people are actually saying about AI at work
1. “It feels like we’re being watched”: Privacy concerns with AI monitoring
(60,154+ upvotes, 5,330+ comments)
This was the biggest concern by far. People are starting to feel the impact of how closely organizations monitor their work activity—AI tracking keystrokes, analyzing email patterns, even monitoring facial expressions during video calls. They describe feeling like they’re “under a microscope” and worry that they’re being reduced to just data points.
One person put it perfectly: “I feel like I can’t even take a bathroom break without it being tracked somewhere.”
What HR leaders can do: Be transparent about what you monitor, why it matters, and how that data informs decision-making. Create clear policies around AI surveillance and give your people a voice in how you implement these tools. Consider starting with employee input sessions before rolling out monitoring tools.
2. “Will AI make the rich richer?”: Economic inequality concerns
(32,951+ upvotes, 3,877+ comments)
There is a clear concern about being left behind. People see AI benefiting executives and shareholders while wondering if their own jobs will become less secure or valuable. There’s a growing concern that a two-tiered workforce could emerge.
What HR leaders can do: Communicate clearly about how AI investments will benefit everyone, not just leadership. Develop upskilling programs that help people participate in the AI-enhanced workplace. Consider profit-sharing or bonus structures that ensure AI benefits reach all levels of the organization.
3. “Is AI fair to everyone?”: Bias and discrimination worries
(17,245+ upvotes, 1,957+ comments)
People are concerned about AI systems making unfair decisions, especially in hiring, promotions, and performance reviews. Professionals from diverse backgrounds worry about being systematically disadvantaged by algorithms that might not understand different perspectives or working styles.
The “black box” problem is real: People don’t understand why they were passed over for opportunities, and they feel powerless to appeal algorithmic decisions.

What HR leaders can do: Regularly audit AI tools for bias and be transparent about how AI decisions are made. Create clear appeals processes for AI-influenced decisions. Include diverse voices in AI tool selection and implementation to catch potential blind spots early.
4. “Am I training my own replacement?”: Job displacement fears
(2,831+ upvotes, 2,329+ comments)
While this didn’t get the most engagement, the conversations were intense and detailed. People aren’t just afraid of being replaced overnight—they’re watching their responsibilities slowly shift to AI tools. Writers worry about content generation, customer service reps see chatbots taking over, and analysts watch AI handle their data work.
What HR leaders can do: Be honest about which roles might change and how, and invest in retraining programs early. Help your people see how they can work alongside AI rather than be replaced by it. Create new career paths that leverage human creativity and judgment alongside AI capabilities.
5. “Where’s the human touch?”: Creativity and authenticity concerns
(1,272+ upvotes, 683+ comments)
Creative professionals are struggling with feeling like they’re becoming “prompt engineers” instead of true creators. They worry about competing with AI that can generate content instantly and cheaply, and they don’t want to lose the satisfaction that comes from creating something genuinely original.
What HR leaders can do: Emphasize the irreplaceable value of human creativity and perspective. Position AI as a tool that handles routine tasks so humans can focus on higher-level creative work. Celebrate and showcase human-created work alongside AI-assisted projects to show that you still value the human touch.
6. “My skills are becoming obsolete”: Adaptation anxiety
(1,547+ upvotes, 726+ comments)
Mid-career professionals especially worry about their hard-earned expertise becoming irrelevant. They’re exhausted by the pressure to constantly learn new AI tools while maintaining their current workload, and they’re not sure if their employers will support their ongoing education.
What HR leaders can do: Provide dedicated time, support, and resources for AI upskilling so people can build capability alongside their day-to-day work. Create mentorship programs that pair people more familiar with AI tools with experienced team members to help bridge the knowledge gap. Recognize and reward the learning process, not just the end results.
7. “What happens when AI breaks?”: Dependency concerns
(896+ upvotes, 699+ comments)
People worry about becoming too reliant on AI systems. They’ve experienced work grinding to a halt when AI tools go offline, and they fear losing fundamental skills as they become dependent on AI for basic tasks.
What HR leaders can do: Maintain backup processes and ensure people retain core skills even as they adopt AI tools. Create cross-training programs so teams aren’t entirely dependent on any single system. Build redundancy into critical AI-dependent workflows.
8. “Who’s accountable when AI makes mistakes?”: Ethical decision-making
(804+ upvotes, 699+ comments)
People are asking tough questions about AI making decisions that affect human lives and careers. They worry about being asked to implement AI in ways that feel morally questionable, and they want to know who’s responsible when AI makes harmful decisions.
What HR leaders can do: Establish clear ethical guidelines for AI use and create channels for your people to raise concerns. Ensure human oversight remains in place for important or nuanced decisions. Be transparent about who’s accountable when AI-influenced decisions go wrong.
9. “Is this real or AI-generated?”: Authenticity questions
(354+ upvotes, 157+ comments)
People are grappling with distinguishing between human and AI-generated work. They worry about trust, intellectual property, and maintaining professional credibility in a world where it’s getting harder to tell what’s authentically human-created.
What HR leaders can do: Develop clear policies around AI content disclosure and attribution. Train team members on how to identify and properly label AI-assisted work. Create standards for when human-only work is required versus when AI assistance is appropriate.
10. “Can we trust AI to get it right?”: Reliability concerns
(400+ upvotes, 250+ comments)
People worry about AI errors, especially in fields where accuracy is critical. They’re concerned about being held responsible for AI mistakes they didn’t catch, and they fear that AI errors could damage their professional reputation.
What HR leaders can do: Implement strong quality control processes for AI-generated work. Train your people to effectively review and verify AI outputs. Create clear protocols for handling AI errors and ensure your people aren’t unfairly blamed for system failures.
Taken together, these themes point to a broader pattern. As AI scales and becomes more common in day-to-day use, existing gaps in communication, implementation, and support become increasingly visible and impactful.
The opportunity for HR leaders
Here’s what struck me most about these conversations: People aren’t anti-AI. They’re asking for something much more reasonable: clear, consistent, and thoughtful implementation that they understand and trust, and that takes their needs and concerns into account.
This is where HR leaders have an incredible opportunity to play a unique role in how AI shows up across organizations. You can be the voice that ensures AI adoption happens with empathy, transparency, and genuine care for employee wellbeing.
The companies that will see the most value from AI are those that implement it in ways people understand, trust, and use consistently.
Your people are ready for AI. They just want to know you’re ready to listen to them about it.
The technology might be artificial, but the leadership guiding its implementation should always be deeply, authentically human. Clear communication, shared expectations, and investment in people are—and will remain—the foundation for making it all work. And that’s exactly the kind of leadership HR professionals are perfectly positioned to provide.
Key takeaways: AI in the workplace—what people need and what HR can do
- People aren’t resisting AI—they’re responding to how it’s implemented. Clear communication and consistent rollout matter more than the technology itself.
- Trust is the foundation of AI adoption. Transparency around data use, decision-making, and accountability helps people engage with AI confidently.
- Concerns about AI reflect broader system gaps. Issues like unclear policies, weak communication, or lack of support become more visible as AI scales.
- Upskilling is critical for long-term AI success. People need time, support, and structured learning to build confidence and capability alongside their day-to-day work.
- Human oversight remains essential. AI can assist and automate, but people are still responsible for judgment, ethics, and final decisions.
- Consistency drives real impact. AI only delivers value when people understand how to use it and apply it in a coordinated, reliable way across the organization.
- HR plays a central role in AI adoption. From policy and governance to communication and capability building, HR shapes how AI shows up in everyday work.
- The opportunity is organizational, not technical. The companies seeing the most impact from AI are the ones that embed AI into workflows, align it with people strategies, and support it with clear systems.
- Better adoption leads to better outcomes. When people trust and use AI effectively, organizations see stronger decision-making, improved performance, and more consistent results.
From Beeri Amiel
Beeri Amiel is the founder of XFunnel, an AI search optimization platform that bridges traditional SEO and AI search for SEO and marketing teams. When he's not decoding the future of search, you'll catch him exploring new tech or brainstorming ways to help marketing teams unlock growth.