Not Everything Is GenAIous: Rethinking Artificial Intelligence
GenAI: A Revolution First and Foremost in Accessibility
Generative models (like ChatGPT or Midjourney) are built on massive architectures, such as Large Language Models for text, capable of imitating our textual, visual, or auditory productions. Their main achievement? Accessibility.
For the first time, an AI tool became usable by everyone through a simple interface: chat-based conversation. This democratization completely reshaped how people perceive AI — it is now tangible, real, and relatable. Yet, GenAI is only a fragment of a much broader field.
Artificial Intelligence Is Much More Than Generated Text
Before ChatGPT, AI already existed in many forms:
- Symbolic AI: based on explicit logical rules — clear but rigid. Commonly used in grammar and spell-checking systems.
- Classical Machine Learning: mathematical algorithms that are simple, fast, and explainable, and still widely used for fraud detection, anomaly detection in industry, or pattern recognition.
- Hybrid AI: a combination of logic-based systems and learning, blending performance with explainability — particularly promising for sensitive domains such as healthcare or justice.
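To make the symbolic approach concrete, here is a minimal sketch of a rule-based checker in Python. Every rule is an explicit, human-readable pattern with a fix attached, which is exactly what makes symbolic systems transparent but rigid: they only catch what someone thought to write down. The rules shown are illustrative toys, not a real grammar checker.

```python
import re

# A toy symbolic (rule-based) checker: each rule is an explicit,
# human-readable pattern paired with its correction.
RULES = [
    # "a" before a word starting with a vowel letter should be "an"
    # (simplified: real grammar checkers use vowel *sounds*)
    (re.compile(r"\ba (?=[aeiou])", re.IGNORECASE), "an "),
    # doubled words such as "the the"
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), r"\1"),
]

def check(text: str) -> str:
    """Apply every rule in order and return the corrected text."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(check("She ate a apple and and left."))
# -> "She ate an apple and left."
```

The trade-off is visible at a glance: adding coverage means adding rules by hand, but every correction can be traced back to the exact rule that produced it.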
The “All-GenAI” Mindset: A Misleading Shortcut
Generative AI isn’t always the best option:
- Performance: for targeted tasks (binary classification, rare event detection, financial forecasting…), classical models are often faster, more precise, and more stable.
- Cost: LLMs are energy-intensive and expensive to run, partly due to the cost of GPUs and the vast amount of data required.
- Ethics and Environment: training data is opaque and often unverified with respect to privacy regulations or content licensing. Many biases persist because of the uncontrollable scale of the datasets. Add to that the massive carbon footprint and water consumption required for each model training.
More sober and explainable approaches exist — no need to use a bulldozer to build a sandcastle.
Selecting the Right AI for the Right Purpose
Artificial intelligence is not a single monolithic entity but an ecosystem of methods. Generative models are useful for writing, summarizing, or translating, but they should not become the default solution.
Today’s challenge is to choose the right tool for each use case and to prioritize relevance over raw performance.
How uh!ive Designs and Uses Its AIs
We combine multiple AI approaches to intelligently interpret the content of phone conversations:
- Symbolic AI for building tags — such as identifying heating types (electric, gas, wood, etc.) — or for masking numeric data (used in ultra-sensitive industries where no numbers should remain in transcripts).
- Classical Machine Learning for named entity recognition (first name, date, address…) ensuring robust and fast anonymization of personal data, and for categorizing calls within a finite list of topics.
- Hybrid AI to detect calls that don’t fit predefined categories and propose new ones. It’s also used to verify quality-monitoring criteria (compliance with brand guidelines, tone, and behavior).
- Generative AI to produce concise call summaries, making information more digestible and actionable.
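The symbolic masking mentioned above can be illustrated with a simple rule-based pass that guarantees no digits survive in a transcript. This is a hedged sketch only: the function name `mask_numbers` and the placeholder character are our own illustrative choices, not uh!ive's actual implementation.

```python
import re

# Illustrative symbolic masking: replace every digit in a transcript
# with a placeholder so that no numbers remain, preserving length.
DIGIT_RUN = re.compile(r"\d+")

def mask_numbers(transcript: str, placeholder: str = "#") -> str:
    """Replace each run of digits with placeholders of the same length."""
    return DIGIT_RUN.sub(lambda m: placeholder * len(m.group()), transcript)

print(mask_numbers("My card number is 4970 1234."))
# -> "My card number is #### ####."
```

Because the rule is exhaustive by construction (it matches any digit, anywhere), this kind of masking is easy to audit, which matters in the ultra-sensitive industries the article mentions.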
In Conclusion
There isn’t one single AI, but as many AIs as there are use cases.
Understanding each context is crucial to applying the most appropriate approach.
True intelligence isn’t measured by model size but by its relevance, accuracy, and efficiency.