Why Generative AI Can Make Inexperienced Employees Less Effective — and How AI Assistants Can Fix This
One of the key strengths of generative AI lies in its remarkable flexibility: it adapts extremely well to user prompts. However, this very advantage can also produce negative effects — particularly when users lack a clear understanding of what they are trying to accomplish with AI.
Without sufficient context, AI interprets prompts literally, which often amplifies the incompetence of inexperienced users — just as effectively as it enhances the productivity of experts.
The reason is straightforward: AI appears to be a highly intelligent and knowledgeable conversational partner. As a result, users tend to suspend critical thinking and rely excessively on its responses.
This perception is difficult to break even when users notice that AI’s answers in their own domain of expertise often lack depth, precision, or even correctness. This resembles the Gell-Mann amnesia effect, but applied to AI rather than mass media.
What Problems Does This Create for Workplace Performance?
1. AI in the Workplace: Key Issues
- Without sufficient task-related experience, employees cannot obtain high-quality results from AI chatbots, primarily because they lack the ability to critically assess AI-generated outputs.
- Even worse, AI can hinder learning for beginners. Employees who initially lack experience may never fully develop core problem-solving skills because AI shortcuts replace genuine learning.
- If experienced colleagues do not provide proper context and instructions, novice prompts often misguide AI away from what truly matters — leading to task failure. In practice, this means that humans must focus AI on the critical elements of a task upfront, not merely evaluate results afterward. Beginners, however, are typically unable to prioritize effectively.
This is especially problematic for complex objectives rather than narrowly defined tasks — for example, when a junior product manager attempts to analyze product metrics using AI.
All three problems can be summarized as follows:
AI assistance is ineffective — and sometimes harmful — for inexperienced employees performing complex tasks. AI delivers acceptable results only when guided by people who already know how to solve these tasks without AI.
As a result, many employees restrict AI usage to simple tasks such as summarization or text rewriting. Consequently, corporate investments in AI yield far lower returns than expected.
2. Where Organizational Experience Is Usually Stored
Short answer: almost nowhere.
Organizations derive strength from their workforce diversity — juniors, mid-level specialists, and seniors all coexist.
- Highly experienced employees are usually overloaded and cannot participate in every task within their expertise. According to the Peter Principle, they often end up solving problems outside their core competencies.
- Less experienced staff could theoretically solve tasks with AI, but their prompts frequently cause hallucinations and misdirection. Without senior oversight, they cannot recognize these errors.
Traditionally, companies attempt to transfer expertise through knowledge bases. Senior employees are encouraged to document workflows, while juniors are expected to consult these repositories rather than interrupt colleagues.
While AI can automate knowledge-base enrichment and search, these systems mostly capture general knowledge — not practical skills or tacit insights gained through real-world problem solving.
Detailed procedural documentation is rare because it requires significant effort. Moreover, AI itself is poor at generating step-by-step instructions without first-hand task experience.
As a result, practical expertise rarely transfers asynchronously. Real knowledge exchange still relies heavily on meetings and joint work — both costly and inefficient.
3. AI Assistants as a Method of Capturing Expertise
Goal: enable inexperienced employees to achieve results comparable to experts — using AI.
This leads to a clear solution:
Experienced employees should be able to easily create contextual AI instructions for tasks they master. AI then becomes not just a tool, but a guide.
With expert-defined system prompts, AI no longer blindly follows novice instructions. Instead, it operates within carefully structured contexts that focus on what truly matters.
Such context-driven systems are often called AI assistants.
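In practice, the core mechanism is simple: the expert's context is injected as a system prompt that always precedes the novice's request. A minimal sketch, assuming an OpenAI-style chat-completion message format (the prompt text and function name here are illustrative, not from the original article):

```python
# Sketch: an expert-defined system prompt wraps every novice request,
# so the model works inside the expert's context rather than the
# novice's literal wording. The prompt content is a hypothetical example.

EXPERT_SYSTEM_PROMPT = """You are a product-metrics analyst.
Before answering:
1. Ask which metric and time window the user cares about if unspecified.
2. Always separate correlation from causation in your conclusions.
3. Flag any result based on fewer than 100 data points as unreliable."""

def build_messages(user_request: str) -> list[dict]:
    """Compose the message list sent to a chat-completion API:
    the expert context always comes before the novice's prompt."""
    return [
        {"role": "system", "content": EXPERT_SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Why did our signups drop last week?")
```

The point of the design is that the novice never edits the system turn: their vague or misdirected phrasing lands inside guardrails the expert wrote once.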
Every department contains multiple micro-domains of expertise. Marketing alone may involve social media content, long-form writing, event organization, advertising, and more. Product management may involve analytics, research, roadmap planning, and stakeholder communication.
Each domain ideally needs its own AI assistant.
This quickly leads to the need for dozens — or even hundreds — of specialized assistants.
4. Expectations vs Reality: Why Most AI Assistants Fail
Tools like custom GPTs in ChatGPT, Open WebUI, and Dify.ai allow easy creation of AI assistants. However, real-world effectiveness remains limited.
Key Challenges:
- Multi-step tasks: AI cannot reliably follow long procedural chains — especially when novice prompts interrupt logical sequences.
- Prompt engineering complexity: Designing reliable system prompts requires specialized skills that domain experts rarely possess.
- Workflow integration: Many tasks involve interacting with internal systems, forms, and colleagues. Manual copy-pasting between chatbots and enterprise systems introduces friction and errors — destroying much of AI’s potential value.
5. How to Help Employees Build Effective AI Assistants
Step 1: Universal Access
All employees must have access to an AI platform where assistants can be easily created and shared. Restricted access models do not work.
Step 2: Break Complex Tasks into Simple Assistants
Instead of building massive monolithic assistants, companies should create collections of simple, single-purpose assistants that together form workflows.
These assistants should be organized not only by department but also by practical tasks.
Advanced setups may include workflow orchestration platforms, such as Dify.ai, which visually map multi-step AI interactions.
However, no-code workflow editors often prove even more complex for non-technical users.
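Even without a visual orchestration platform, "simple assistants forming a workflow" can be expressed as an ordinary pipeline: each single-purpose assistant has its own system prompt, and each one's output becomes the next one's input. A minimal sketch, where `call_model` is a stand-in for any chat-completion client and the assistant prompts are hypothetical:

```python
# Sketch: a workflow built from single-purpose assistants chained in order.
# call_model is a placeholder; in production it would call an LLM provider.

def call_model(system_prompt: str, user_input: str) -> str:
    # Stub response so the pipeline structure is visible without an API key.
    role = system_prompt.split(":")[0]
    return f"[{role}] processed: {user_input}"

ASSISTANTS = [
    "Extractor: pull the key facts from the raw notes",
    "Drafter: turn the facts into a first draft",
    "Reviewer: check the draft against brand guidelines",
]

def run_workflow(raw_input: str) -> str:
    """Pipe each assistant's output into the next one."""
    result = raw_input
    for system_prompt in ASSISTANTS:
        result = call_model(system_prompt, result)
    return result
```

Because each stage is a plain function of text in, text out, individual assistants can be tested, replaced, or reused in other workflows independently — the same property the article recommends over monolithic assistants.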
Organizational Techniques That Actually Work
1. Templates & Meta-Assistants
- Maintain a prompt library with reusable system instructions.
- Deploy a meta-assistant that converts raw expert knowledge (“brain dumps”) into structured system prompts.
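A meta-assistant of this kind is itself just an assistant whose system prompt demands a fixed output structure. One possible sketch (the section names and prompt wording below are an assumption, not a prescribed format):

```python
# Sketch: a meta-assistant that turns an expert's unstructured notes
# into a structured, reusable system prompt. The required sections
# (ROLE / STEPS / CHECKS) are an illustrative choice.

META_ASSISTANT_PROMPT = """You turn an expert's unstructured notes into
a reusable system prompt for an AI assistant. Output exactly these sections:
ROLE: who the assistant acts as
STEPS: the numbered procedure the assistant must follow
CHECKS: how the assistant validates its result before presenting it"""

def meta_messages(brain_dump: str) -> list[dict]:
    """Build the message list for the meta-assistant: structure
    requirements go in the system turn, the brain dump in the user turn."""
    return [
        {"role": "system", "content": META_ASSISTANT_PROMPT},
        {"role": "user", "content": brain_dump},
    ]
```

This lowers the bar for experts considerably: they dictate what they know in plain language, and the prompt-engineering skill is encoded once in the meta-assistant rather than required of every contributor.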
2. Context Engineering Training
Rather than broad training programs, companies should cultivate AI champions — employees skilled in prompt and context engineering — who can mentor others.
3. Motivation Systems
- Mini-hackathons: Regular 2-hour sessions where each employee builds one assistant, showcases it, and tests others’.
- Assistants should primarily solve the creator’s own daily problems.
- Make authorship visible: e.g., “CyberNata — Marketing Copy Assistant”.
What Comes Next?
These methods establish the foundation for bottom-up AI transformation — far more effective than imposing top-down AI initiatives few employees actually use. However, they still fail to fully solve the complexity challenge. In the next article, we will examine the core reason why building powerful AI assistants remains difficult — and introduce a more advanced technical solution.