About me

  Hi! Thanks for stopping by! 🙏

  I am a hands-on Senior Data & Product Manager with proven experience in Product Management, Data Governance, and Digital Transformation, focused on delivering tangible business value and measurable results.

  I specialize in bridging the gap between complex business challenges 💰 and tailored, data-centric solutions ⚙️. I have worked with large-scale institutions and agile startups across diverse sectors and international environments.

  From defining Data Governance models to launching new products, I thrive on transforming business challenges into innovative, scalable solutions.

  As a tech enthusiast and product crafter, I enjoy diving into end-to-end engineering journeys and tackling new challenges 💪.

  •     ➡️ I built this project 🎙 as an experiment, and I hope you enjoy your visit.

  •   Marhaba 🙏
  •   Let's get in touch! 😉

Working on

  • Product Management


    ➡️ Product Strategy
    Roadmaps, OKRs, Business Insights.


    ➡️ Continuous Discovery
    User Research, Data-centric solutions.


    ➡️ Launching 0-to-1 products
    Design, Integration, APIs.

  • Data & Digital Transformation


    ➡️ Data Governance
    Target Operating Model (TOM), Data Catalog, Quality, Lineage.


    ➡️ Data Analysis
    from Sourcing to Production metrics.


    ➡️ Driving Digital Transformation
    across Data domains.


Experience

Work Experience

  1. Senior Data & Product Manager

    PaaS Business Partner (Freelancer)

    📅 2021 — Present 📍 Remote 💻

    ➡️ Partnered with leading organizations & startups to accelerate their Data & AI transformation and deliver high-impact digital products.


    • Across various sectors: Construction, Real Estate, Industry, AgriTech.

    • Supported the setup of a comprehensive Data Governance & AI Program, leading the operationalization of Data Stewardship, Master Data Management (MDM), and Data Catalog Repository components. Designed the framework for automated client data enrichment.

    • Defined the organizational layer of a Target Operating Model, establishing the Vision, Roles, Roadmap, and OKRs in alignment with the target technical architecture. Delivered actionable data use cases to unify data domains across the enterprise.

    • Guided the implementation of Data Governance frameworks, including Data Product templates, Data Processes, Data Lineage, and Data Quality gates. Introduced innovative cross-initiative use cases within a unified Data Platform vision.

    • Led the deployment strategy and product delivery of B2B applications in a new market, managing 0-to-1 product development from ideation to launch — including market discovery, MVP definition, workflow design, and client relationship management.

      Data Governance · Product Management · Craft   MEVN · AI-LLM-Low Code · APIs · Open Metadata   B2B PaaS · Consulting Services
  2. Lead Data Product Owner

    Crédit Agricole CIB

    📅 2019 — 2021 📍 Paris 🇫🇷

    ➡️ Drove strategic initiatives to enhance risk management, regulatory compliance, and data value creation across banking operations.


    • Led the delivery of an internal risk optimization platform, improving capital exposure efficiency by over 5% through data-driven insights and operational excellence.

    • Directed the design and implementation of end-to-end regulatory data solutions, ensuring compliance, reliability, and traceability across complex data ecosystems.

    • Oversaw the evolution of market risk data and performance metrics, enabling advanced monitoring, decision support, and alignment with emerging regulatory frameworks.

      Product Owner · Data Analysis · Agile @Scale   UI UX design · Delivery · MS Excel   Loans · Market Risks
  3. Head of Fast Product Team

    Société Générale

    📅 2017 — 2019 📍 Paris 🇫🇷

    ➡️ Led a cross-functional Fast Product team within the Group Finance Division, delivering strategic solutions across Liquidity, Rates, and Exchange domains at both Group and Entity levels.


    • Directed tactical product management and delivery, aligning priorities with business and regulatory objectives across international entities.

    • Oversaw the design and enhancement of steering and regulatory tools, strengthening accounting consistency, liquidity monitoring, and compliance with resolution requirements.

    • Fostered a culture of agility and innovation, improving team performance and stakeholder satisfaction across the finance organization.

      Leadership · Data Analysis · Agile @Scale   Oracle SQL · MS Access/Excel VBA   ALM
  4. Financial Data Engineer

    Société Générale

    📅 2014 — 2017 📍 Paris 🇫🇷

    ➡️ Contributed to the Group Finance transformation by developing data products and analytical capabilities supporting central departments and global entities.


    • Delivered steering and regulatory metrics at Group and Entity levels, ensuring accuracy and timeliness of financial insights.

    • Designed and automated reporting templates and analytical datasets, enabling large-scale data production and distribution to over 120 users.

    • Supported strategic and regulatory initiatives, responding to urgent requests from top management and European supervisory bodies.

      Development · Data Analysis   Oracle SQL · MS Access/Excel VBA   ALM

Internships & Projects

  1. Junior Solution Analyst

    Clear2Pay (FIS)

    📅 2014 (5 months) 📍 Brussels 🇧🇪

    ➡️ Contributed to the R&D Open Test Framework team, supporting innovation in payment processing solutions.


    • Conducted analysis on SWIFT to SEPA conformity within an end-to-end transaction testing framework.

    • Supported the validation of payment system processes, ensuring regulatory and operational compliance.

      Data Analysis   XML · Testing   Payment
  2. Junior Tech Engineer

    Amadeus

    📅 2013 (6 months) 📍 Nice 🇫🇷

    ➡️ Joined the R&D Server-Side Extensibility team to explore innovative technologies and enhance next-generation travel IT solutions.


    • Led product discovery, user research, and rapid prototyping to validate new digital product concepts.

    • Conducted a comparative analysis of push communication frameworks, improving system responsiveness and scalability.

    • Developed a two-factor authentication prototype with multi-channel push notifications, pioneering secure and seamless user experiences across platforms.

      Product Discovery · User research · Development   JavaScript · Java · WebSockets   Tech Innovation
  3. Junior Tech Engineer

    DRJSCS

    📅 2011 (4 months) 📍 Nantes 🇫🇷

    ➡️ Contributed to the Data & Technology team by supporting digital solutions for national social services workflows.


    • Performed reverse engineering and web application design to optimize internal processes.

    • Developed tools to streamline national social work examination workflows, improving efficiency and data handling.

      Software Development · Reverse Engineering   SQL · .NET · VBA   Process Digitalization

Education

Academics & Diplomas

  1. Centrale Nantes · Grande École

    📅 2009 — 2013 📍 Nantes 🇫🇷

    - Master of Science in Engineering · Diplôme Grandes Écoles.

    • Specialized in Computer Science and Project Management.

    ➡️ Developed problem-solving, communication, and teamwork skills.

    ➡️ Participated in research projects and gained hands-on experience.

      Engineering · Data Science · Finance
  2. Lycée Joffre · Classes Préparatoires

    📅 2006 — 2009 📍 Montpellier 🇫🇷

    - An intensive program preparing for engineering and science studies at top French Grandes Écoles.

    • MPSI-MP*: A rigorous specialization program in advanced Mathematics and Physics.

    ➡️ Developed the ability to think critically and solve problems creatively.

    ➡️ Learnt the foundations of engineering, algorithmics and research.

      Mathematics · Physics

Programs & Certificates

  1. Product School & Pendo.io

    📅 2024 📍 Remote 💻

    - Product Management, Strategy, Roadmapping, AI, Analytics, Product-Led.

    ➡️ Product School & Pendo.io Certificates.

      Product Management · Product Strategy
  2. HEC Paris · ICCF® · Executive Program

    📅 2022 📍 Paris 🇫🇷

    - Corporate Finance, decision-making from a financial perspective.

    ➡️ ICCF @ HEC Paris Certificate.

      Financial Analysis · Business Valuation
  3. Intel® · AI DevCamp Series · Training

    📅 2018 📍 Paris 🇫🇷

    - Hands-on experience in Machine Learning with Python and OpenVINO™.

    ➡️ Creating an end-to-end Deep Learning project.

      Machine Learning · Python

Languages

  • English
    C1 - Professional
  • French
    C2 - Native
  • Arabic
    C2 - Native
  • Spanish
    B2 - Intermediate
  • German
    B2 - Intermediate

Portfolio

Blog

Blog

The Agentic Harness: Beyond LLM Performance to Context Engineering

April 8, 2026

Executive Summary


The AI inflection point has fundamentally shifted from model capability to Context Engineering. While state-of-the-art models like Opus 4.6 and GPT 5.4 are exceptionally capable, their real-world performance is strictly limited by the "harness"—the surrounding context and operational logic—built around them. This article explores Ras Mic’s minimalist methodology for building high-fidelity AI agents. By replacing bulky configuration files with modular Skills and leveraging a Recursive Refinement loop, builders can achieve 100% reliability in complex, multi-source workflows while maintaining peak model reasoning.


Minimalist Circuitry Architecture Blueprint



1. The "Skills Over Configuration" Mandate: Token-First Architecture


Key Concept: The Token-Reasoning Tradeoff


A common mistake in the current agentic landscape is the over-reliance on massive agent.md or claude.md configuration files. While these files aim to provide guidance, loading them into every turn of a conversation is a "performance killer." Ras Mic highlights a critical technical reality: as the Context Window approaches its limit, the model’s reasoning capabilities begin to degrade—a phenomenon he compares to a student "cramming" at the last minute.


Operational Deep Dive


To combat this, the "Skills Maxi" approach utilizes Progressive Disclosure. Instead of saturating the context with thousands of tokens of instructions, only the Skill name and its brief description are initially loaded. The full instruction set is only "disclosed" when the agent determines it is relevant to the current task.


The quantitative impact is staggering. A 116-line Code Structure Skill that would cost 944 tokens in a persistent configuration file costs a mere 53 tokens when managed as a modular skill. This 94% reduction in token overhead allows builders to respect the 70% Rule—maintaining context saturation below 70% to ensure the model stays "smart," performant, and cost-effective.


💡 Pro-Tip: The 70% Benchmark
Monitor your context window usage rigorously. If you hit 80% saturation, your agent will likely start making "dumb" or generic mistakes. Transition persistent instructions into Skills to free up reasoning space for the task at hand.
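
To make this concrete, here is a minimal sketch of what progressive disclosure could look like inside a harness: only skill names and one-line descriptions live in the persistent context, and a skill's full body is pulled in on demand, guarded by the 70% saturation rule. The SkillRegistry class, the 4-characters-per-token estimate, and the 200k context size are illustrative assumptions, not the API of any specific agent framework.

```python
from dataclasses import dataclass
from pathlib import Path

CONTEXT_WINDOW_TOKENS = 200_000   # assumed model context size for the example
SATURATION_LIMIT = 0.70           # the "70% Rule" described above


def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token; a real harness would use the model's tokenizer.
    return max(1, len(text) // 4)


@dataclass
class Skill:
    name: str
    description: str   # short blurb that stays in context on every turn
    path: Path          # full skill.md, loaded only when relevant

    def summary(self) -> str:
        return f"{self.name}: {self.description}"

    def full_body(self) -> str:
        return self.path.read_text()


class SkillRegistry:
    def __init__(self, skills: list[Skill]):
        self.skills = skills

    def persistent_context(self) -> str:
        # Only names + descriptions are always present (progressive disclosure).
        return "\n".join(s.summary() for s in self.skills)

    def disclose(self, skill_name: str, current_context: str) -> str:
        # Load the full instructions only once the agent decides the skill is relevant,
        # and refuse if doing so would push saturation past the 70% limit.
        skill = next(s for s in self.skills if s.name == skill_name)
        body = skill.full_body()
        used = estimate_tokens(current_context) + estimate_tokens(body)
        if used / CONTEXT_WINDOW_TOKENS > SATURATION_LIMIT:
            raise RuntimeError("Context would exceed 70% saturation; trim before disclosing.")
        return current_context + "\n\n" + body
```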




2. Recursive Skill Building: The Path to 100% Reliability


Key Concept: The "Failure as Data" Feedback Loop


Reliability in autonomous agents isn't a result of "better" initial prompting; it is the product of an iterative, corrective feedback loop. Ras Mic argues that builders must move past the expectation that an agent will work perfectly out of the box. Instead, every failure should be treated as high-value data for the Recursive Skill Building process.


Operational Deep Dive


The methodology begins with a Successful Run. A builder should never formalize a skill until they have manually walked the agent through the workflow, correcting mistakes in real-time. Once the agent executes the task successfully, its own successful context is used to generate the skill.md file.


However, the process doesn't end there. When the agent eventually hits an edge case and fails—perhaps due to an API 5005 error or insufficient credits—the builder must identify the specific error and fix it. The final, critical step is to instruct the agent: "Update the skill file so this never happens again." This recursive refinement turned a complex YouTube Analytics report generator, which pulled from eight distinct data sources, from a "garbage" prototype into a flawlessly executing 10-minute automated workflow.
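
The loop itself is simple enough to sketch. The snippet below assumes two hypothetical helpers, run_workflow and ask_agent_to_update_skill, and shows only the structure of the recursive refinement: run, capture the specific failure, have the agent fold the lesson back into its own skill.md, and persist it so the edge case cannot recur.

```python
from pathlib import Path


def run_workflow(skill_text: str, task: str) -> str:
    """Placeholder: execute the agent against `task` using `skill_text`; raise on failure."""
    raise NotImplementedError


def ask_agent_to_update_skill(skill_text: str, error: Exception) -> str:
    """Placeholder: prompt the agent to revise the skill so this failure never happens again."""
    raise NotImplementedError


def recursive_refinement(skill_file: Path, task: str, max_rounds: int = 5) -> str:
    skill_text = skill_file.read_text()
    for _ in range(max_rounds):
        try:
            return run_workflow(skill_text, task)    # successful run: we're done
        except Exception as failure:                 # edge case hit (API error, missing credits, ...)
            # Treat the failure as data: have the agent rewrite its own skill file.
            skill_text = ask_agent_to_update_skill(skill_text, failure)
            skill_file.write_text(skill_text)        # persist the lesson for every future run
    raise RuntimeError("Workflow still failing after the refinement budget was exhausted.")
```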


Recursive Feedback Loop Abstract Art



3. Scaling for Productivity: The Manager of Agents Mindset


Key Concept: Incremental Complexity vs. "Cool" Slop


There is a significant difference between "Scaling for Cool" and Scaling for Productivity. Many users attempt to jump straight into complex, multi-agent architectures like Paperclip before they have mastered a single-agent workflow. Ras Mic warns that complexity is a liability; if you wouldn't hire ten people for a company without a product, you shouldn't set up ten sub-agents without a proven workflow.


Operational Deep Dive


The most effective mental model is to treat an AI agent like a new, junior employee. Just as the character Jim Halpert struggled in The Office when asked for a "rundown" without any context, an agent will fail if its instructions are ambiguous.


Builders should start with a Single-Agent Foundation, using one agent to handle multiple domains like research, spreadsheets, and email. Only after these core Skills are codified and 100% reliable should specialized Sub-Agents for Marketing, Business, or Personal tasks be introduced. In this paradigm, the human's role shifts from "executor" to a Manager of Agents, where the primary value is the "Arbitrage of Taste"—codifying unique strategies and workflows while letting the model handle general technical knowledge.


Professional Command and Control Center

💡 Pro-Tip: The "Office Rundown" Test
If you cannot explain a task to a human junior employee in three simple sentences, your agent will likely fail. Simplify and standardize the workflow manually before attempting to codify it into a skill.
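
One way to keep "Scaling for Cool" in check is to make promotion to sub-agents an explicit, earned step. The sketch below is an illustrative gate with made-up thresholds: a specialized sub-agent is only created once every skill it would own has a proven, fully reliable track record under the single-agent foundation.

```python
from dataclasses import dataclass, field


@dataclass
class SkillRecord:
    name: str
    runs: int = 0
    failures: int = 0

    @property
    def reliability(self) -> float:
        return 1.0 if self.runs == 0 else 1 - self.failures / self.runs


@dataclass
class Agent:
    name: str
    skills: dict[str, SkillRecord] = field(default_factory=dict)


def promote_to_sub_agent(generalist: Agent, domain: str, skill_names: list[str]) -> Agent:
    # Only spin up a specialized sub-agent once every skill it would own is proven reliable
    # (minimum run count and reliability are illustrative assumptions).
    records = [generalist.skills[n] for n in skill_names]
    if any(r.runs < 20 or r.reliability < 1.0 for r in records):
        raise ValueError(f"Skills for '{domain}' are not yet 100% reliable; keep the single-agent foundation.")
    return Agent(name=f"{domain}-agent", skills={r.name: r for r in records})
```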




Integrated Business Value: The Arbitrage of Excellence


Mastering the agentic harness offers a unique strategic opportunity for modern builders:


  • Token Arbitrage: Achieving 90%+ token savings translates directly to lower operational costs and longer, more complex conversations.
  • Operational Reliability: Transitioning from "vibe-based" results to 100% reliable automated reporting that scales across multiple data sources.
  • Knowledge Empowerment: Closing the "Trust Gap" and enabling non-technical founders to "vibe code" products with billion-dollar valuations by mastering the management of agentic intelligence.



Technical References


  1. OpenClaw – An open-source harness for high-fidelity multi-agent systems.
  2. Claude Code – Anthropic's agentic developer tool that treats the codebase as the primary context.
  3. Progressive Disclosure – A core design principle applied here to maximize token efficiency.
  4. Opus 4.6 & GPT 5.4 – The state-of-the-art models that serve as the foundation for the current agentic era.
  5. Notion & YouTube Analytics – Key data sources used to benchmark the reliability of recursive skill building.



Published on: 2026-04-08


Blog

The AI Inflection: Engineering Agency and Productizing Sovereignty

April 6, 2026

Executive Summary


The AI era is fundamentally shifting from information retrieval to autonomous action and sovereign governance. While the initial wave of Large Language Models (LLMs) focused on generative text and conversational interfaces, the next frontier is defined by systems that don't just talk, but execute. By synthesizing the strategic frameworks of Houda Nait El Barj (OpenAI) and Mehdi Ghissassi (AI71), this article provides a technical roadmap for engineers and product managers. We explore the transition from assistants to Agentic AI and why data sovereignty is the new mandatory "Permission to Play" in the age of AGI.


Modular Server Infrastructure Blueprint



1. The Engineering Mandate: Sovereignty as the "Permission to Play"


Key Concept: Building Beyond the Global Cloud


For B2B and government sectors, the primary bottleneck to AI adoption isn't model performance—it's the "Trust Gap." Mehdi Ghissassi argues that for sensitive verticals like finance, healthcare, and national defense, data is the most precious asset. To unlock true disruption, engineers must architect systems that guarantee data remains off shared global clouds and within localized jurisdictions.


Operational Deep Dive


Engineering for sovereignty requires a shift toward physical data centers located within the client's borders. This prevents "data hallucinations" where one organization's proprietary logic might inadvertently influence a competitor's model outputs. At AI71, this is achieved by building specialized vertical applications—such as GovGPT—on a unified, horizontal engineering core. This architecture ensures that while the underlying model (like Falcon) benefits from global scale, the actual data persistence and inference layers are isolated, satisfying stringent regulations like HIPAA.


💡 Pro-Tip: The Sovereignty Rule
In high-stakes B2B, sovereignty is your moat. Architect your data persistence layers to satisfy local jurisdiction before optimizing for model latency.
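
As a rough illustration of architecting for sovereignty first, the routing sketch below treats jurisdiction and off-shared-cloud persistence as hard constraints and only then optimizes for latency. The Deployment record, region codes, and deployment names are assumptions made for the example, not AI71's actual serving architecture.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Deployment:
    name: str
    jurisdiction: str      # where data persistence and inference physically live
    on_shared_cloud: bool
    latency_ms: float


DEPLOYMENTS = [
    Deployment("sovereign-onprem-ae", "AE", on_shared_cloud=False, latency_ms=120.0),
    Deployment("shared-cloud-eu", "EU", on_shared_cloud=True, latency_ms=60.0),
]


def route_request(client_jurisdiction: str, requires_sovereignty: bool) -> Deployment:
    candidates = [
        d for d in DEPLOYMENTS
        if not requires_sovereignty
        or (d.jurisdiction == client_jurisdiction and not d.on_shared_cloud)
    ]
    if not candidates:
        raise LookupError("No deployment satisfies the client's sovereignty constraints.")
    # Sovereignty is satisfied first; only then do we optimize for latency.
    return min(candidates, key=lambda d: d.latency_ms)
```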




2. The Product Pivot: From Search to Agentic Action


Key Concept: The "Intent-In, Action-Out" Paradigm


The "Chat" interface we see today is merely the onboarding phase for humanity. Houda Nait El Barj highlights a critical transition: the shift from AI as a search interface to Agentic AI. The goal is no longer to provide a list of links or a summary of text, but to execute a multi-step workflow in the virtual world.


Operational Deep Dive


This disruption is best exemplified by Operator, a product Houda worked on at OpenAI. Instead of writing an email about a restaurant reservation, the user simply states their intent, and the agent navigates the web to book the table and manage the follow-up communications. This leads to the concept of the Invisible Interface. For product managers, the definition of success is shifting: we are moving toward a future where technology disappears into the background and becomes a seamless extension of human will.


Autonomous Process Automation Concept

💡 Pro-Tip: The Agency Metric
In the agentic era, "User Engagement Time" is a legacy KPI. Modern AI products should be measured by "Task Completion Velocity"—the less time a user spends looking at your screen, the better your product is performing.
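
If you wanted to operationalize this KPI, one plausible definition (an assumed interpretation of the pro-tip, not an established industry standard) is tasks completed per hour of user attention, as sketched below.

```python
def task_completion_velocity(tasks_completed: int, user_attention_seconds: float) -> float:
    """Completed tasks per hour of attention the user actually spent on the product.

    Higher is better: the agent did more while demanding less of the user's screen time.
    """
    if user_attention_seconds <= 0:
        raise ValueError("Attention time must be positive.")
    return tasks_completed / (user_attention_seconds / 3600)


# Example: 12 completed bookings with only 6 minutes of total user attention -> 120 tasks/hour.
velocity = task_completion_velocity(tasks_completed=12, user_attention_seconds=360)
```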




3. The PM as Matchmaker: Research Store vs. Problem Store


Key Concept: Bridging the "Hammer-Nail" Conflict


A central challenge in the AI era is the disconnect between scientific breakthroughs and real-world market needs. Mehdi Ghissassi describes the role of the high-fidelity PM as a "matchmaker" between two distinct lists: the Research Store (what scientists can build) and the Problem Store (the specific bottlenecks faced by business units).


Operational Deep Dive


To bypass organizational inertia and the "Innovator's Dilemma," Mehdi advocates for high-impact incubation using small, elite teams of just 2 to 3 people. These teams focus on proving technical feasibility through a "MuZero Moment." A landmark example is the use of MuZero—a DeepMind reinforcement learning algorithm—to optimize YouTube bandwidth. By reducing bandwidth usage by even a few percentage points, a tiny team was able to generate hundreds of millions in cost savings, providing the undeniable ROI needed to scale the project across the entire organization.


Research-to-Product Strategic Mapping



4. The Human Layer: From Execution to Discernment


Key Concept: The Premium on Critical Thinking


As AI automates the "doing"—the research, the drafting, and the execution—the value of human labor is shifting from technical throughput to high-fidelity judgment. Houda Nait El Barj emphasizes that we are moving toward a "judging and deciding economy."


Operational Deep Dive


In this new paradigm, the most critical skill is Discernment. While AI can automate a six-day research task into a six-second summary, the human "in the loop" must Own the Decision. This requires a deep understanding of human values, empathy, and social group interaction—qualities that are inherently difficult for a model to replicate. Successful leaders in the AI era will be those who can assume the responsibility for the consequences of an automated output, recognizing that purely rational logic is often insufficient for leading complex, "irrational" human organizations.


High-Stakes Decision Making Abstract



Integrated Business Value: The Arbitrage of Excellence


The convergence of engineering agency and product sovereignty offers a unique strategic opportunity:


  • Operational Efficiency: Proving massive ROI via optimization breakthroughs like the MuZero bandwidth model.
  • Administrative Transformation: Using GovGPT to eliminate the "admin burden" in healthcare and government, allowing professionals to focus on human-centric service.
  • Trust Arbitrage: Capturing high-value B2B markets by offering localized, sovereign AI that global cloud providers cannot match.



Technical References & Further Reading


  1. MuZero – DeepMind’s reinforcement learning framework for mastering tasks without prior knowledge of the rules.
  2. Falcon – The pre-eminent open-source sovereign model developed by the TII in Abu Dhabi.
  3. OpenAI Operator – A pioneering model designed for autonomous virtual world actions and workflow execution.
  4. HIPAA – The U.S. healthcare data privacy and security regulation, widely used as a benchmark for data protection and sovereignty requirements.



Published on: 2026-04-06


Blog

The Friction Audit: Navigating the Operational Realities of the Moroccan Tech Ecosystem

April 1, 2026

Executive Summary


While the Moroccan startup ecosystem is maturing, it faces a set of high-stakes structural and operational bottlenecks. This article provides an audit of these "Friction Points"—ranging from the Density Gap in logistics to the Technical Founder Gap in software. By contrasting the investor’s mandate for EBITDA-positive scaling with the entrepreneur’s need for Mission-Driven Verticality, we identify the critical path for Morocco to transition from a local clone factory to a global knowledge economy.


Executive Summary Visual



Scaling Pains: The "Density over Geography" Fallacy


Key Concept: Profitability as a Function of Concentration


In emerging markets like Morocco, geographic breadth is often a profitability killer. Many startups fall into the "expansion trap," equating more cities with more value. However, high-velocity growth in North Africa requires a fundamental shift in mindset: moving from horizontal coverage to vertical density.


Operational Deep Dive


The most significant operational lesson comes from Glovo's rationalization strategy. As revealed by Hamza Naciri Bennani, the platform reduced its footprint from 60 cities to 35 in late 2022. This wasn't a retreat; it was a strategic move to reach a seuil critique (critical mass). Expansion into 'Tier 3' cities had led to a dilution of resources, where sales and operations teams were over-extended in markets with weak value propositions and high travel costs. By focusing on Tier 1 and Tier 2 cities, Glovo was able to cover 80% of the urban population while dramatically optimizing courier travel times and restaurant density.


💡 Pro-Tip: The Density Mandate
Profitability in on-demand delivery is driven by density, not geography. Focus your resources on saturating high-density hubs until the unit economics allow for "bundling" and sustainable earnings.


Urban Density Abstract



The Informal Sector: Standardizing the "Archaic Black Box"


Key Concept: Operational Excellence as the Primary Innovation


In the Moroccan "Traditional Market," pure software is rarely the solution. The real innovation lies in the standardization of physical operations within an archaic, opaque supply chain.


Operational Deep Dive


Youssef Mamou of Yola Fresh describes the traditional market—managed by Hattara and Khaddar—as an "Archaic Black Box." These retailers have operated without formal invoices or standardized weights for over 30 years. Yola Fresh's disruption is rooted in Operational Excellence: standardizing crates (e.g., 30kg for potatoes) to provide price predictability and introducing formal invoicing for the first time. This moves retailers from a state of precarious daily survival to structured business operations. However, this standardization carries a "Daily Presence Trust Tax"—if founders are not physically seen in the wholesale markets, the trust loop breaks immediately.


💡 Pro-Tip: Operations First
In informal sectors, 'Software is not the solution; Operations is the solution.' Build high-fidelity physical processes first; the tech layer should only serve to scale those proven flows.


Traditional Market Innovation



The Trust Deficit: From Digital Wallets to "Cash-Out" Reflexes


Key Concept: The Infrastructure-Trust Mismatch


The technology for a cashless economy exists in Morocco, but the "Merchant Ecosystem" trust remains the primary hurdle.


Operational Deep Dive


While 40% of the population remains unbanked, the bigger issue is the lack of digital spending utility at the 'Hanout' (neighborhood shop) level. This leads to a persistent "Cash-Out" reflex, where digital wallet balances are withdrawn as cash immediately. Soufiane Marhraoui (Taptap Send) notes that trust acts as the "wedge" for adoption. Glovo has managed to bypass this by achieving a 57% card adoption rate (vs. a 10% market average) by providing a reliable, "2-click" frictionless experience. High-frequency services like Glovo Prime remove the friction of explicit decision-making, turning a digital service into a background habit and building the trust necessary for larger fintech adoption.


💡 Pro-Tip: Friction-Killers
Subscription models are the ultimate friction-killers. They transition users from an 'explicit decision' to spend money to a 'background habit,' which is essential for scaling digital services in cash-heavy markets.


Digital Payment Friction



The Talent Paradox: Scarcity of Technical Founders


Key Concept: Shifting from "Service Mindset" to "Product Architecture"


The Moroccan ecosystem is abundant in business talent but suffers from a critical gap in technical founders who build for global scalability.


Operational Deep Dive


Hamza Naciri Bennani argues that the ecosystem is limited by a lack of Technical Founders. Many Moroccan engineers suffer from a "Service Mindset"—falling in love with the technical complexity (the "How") instead of the product mission (the "Why"). Benchmarking against the Baltic or Eastern European models, Morocco needs more engineer-led startups that build global platforms from Day 1 rather than local clones. Founders like Ahmed El Azzabi exemplify the alternative: the "Manager of One" who uses technological minimalism and permissionless code to build independently, bypassing local infrastructure constraints to target global markets.


💡 Pro-Tip: Global from Day 1
Do not fall in love with your own custom infrastructure. Use the simplest global tools (Cloud, AI, SaaS APIs) to reach Product-Market Fit fast. Authority comes from the 'Why,' not the complexity of the 'How.'


Architectural Engineering



The Investor vs. Entrepreneur Standpoint


Key Concept: Profitability-First Mandates vs. Mission-Driven "Verticality"


The "Fundraising Winter" of 2021-2022 forced a fundamental realignment between investors and founders.


Operational Deep Dive


For investors, the goal has shifted from "Top-Line Burn" to "Profitability as Freedom." Youssef Mamou notes that being EBITDA-positive gives a startup control over its own destiny. However, for entrepreneurs like Hamza Rkha (Sowit), surviving the long cycles of sectors like agritech requires "Verticality"—a deep, spiritual sense of purpose. This "Verticality" allows founders to endure the operational hardships of emerging markets that pure financial metrics cannot justify. The most successful ventures today are those that achieve an "Arbitrage of Excellence"—solving massive structural gaps (like the agricultural credit gap) with industrial-grade operational efficiency.


💡 Pro-Tip: Skin in the Game
Investors today value operational resilience over rapid burn. Demonstrate 'Skin in the Game' by integrating financing or operational layers that solve structural market bottlenecks.


Strategic Balance



Strategic Business Value: The Arbitrage of Excellence


The Moroccan ecosystem offers a unique Arbitrage Opportunity for both founders and investors:


  • Operational Efficiency: Building tech-enabled services in a market that demands high-precision logistics and trust-building creates a highly resilient business model.
  • Talent Scaling: Leveraging the pool of world-class Moroccan engineers and multi-lingual experts to build solutions that are relevant for both the Global North and the high-growth markets of Africa.
  • Institutional Stability: Utilizing the rapid regulatory professionalization (e.g., the Conseil de la Concurrence framework) to build institutional-grade startups that are ready for global M&A or IPOs.



Technical References & Further Reading


  1. The Hard Thing About Hard Things by Ben Horowitz – A playbook for the 'Rationalization' and crisis management phases.
  2. "The Brazilian PIX Model" – A technical and regulatory benchmark for solving the cash-out reflex in emerging markets.
  3. Theory of Constraints – Applying Goldratt’s logic to identify supply chain bottlenecks in traditional retail.
  4. Standard Operating Procedures (SOPs) – The FlowBrave framework for scaling high-fidelity operations globally.



Published on: 2026-04-01


Blog

The Moroccan Playbook: Scaling from Local Nuance to Regional Dominance

April 1, 2026

Executive Summary


Morocco is no longer just a "promising" market; it is becoming a laboratory for high-stakes operational excellence and technical innovation. By synthesizing insights from the leaders of Glovo, DeepMind, Sowit, and FlowBrave, this article deconstructs the unique Moroccan "Playbook." We explore how density-driven profitability, the preservation of linguistic authenticity, and a shift toward technical founder-led ventures are defining the next wave of African tech leadership.


Executive Summary Visual



The GTM Framework: "Beldy" Execution vs. "Romi" Standards


Key Concept: The Strategic Hybridization of Growth


In many emerging markets, the temptation is to clone Western models and hope they scale. However, the Moroccan market demands a unique hybrid approach. Growth here is not just about horizontal expansion—it is about the strategic hybridization of global technical standards with hyper-local operational pragmatism.


Operational Deep Dive


The first lesson in the Moroccan GTM playbook is Density over Geography. Many startups fail by spreading themselves too thin. Hamza Naciri Bennani (GM of Glovo Morocco) demonstrated this by rationalizing operations from 60 cities down to 35. This was not a retreat, but a move to reach a seuil critique (critical mass). By focusing on urban density, Glovo was able to optimize courier travel distances and restaurant partnerships, making the unit economics finally "work" in a way that expansive, low-density coverage never could.


This leads to the Beldy vs. Romi framework, a core philosophy championed by Hamza Rkha of Sowit. In this model, "Romi" represents the global, industrial-grade technical standards of your product (Silicon Valley-grade software, satellite data, etc.). "Beldy," however, represents the "street-level" intelligence required for distribution. Mastering the "Beldy" execution means building local trust, navigating informal networks, and ensuring that your high-tech product can survive the "last mile" of Moroccan reality.


💡 Pro-Tip: The 10% Psychology Rule
In the Moroccan consumer market, the Total Transaction Fee (TTF) is the primary conversion killer. For marketplaces and delivery services, the fee must generally stay below 10% of the Average Unit Value (AUV) to ensure psychological acceptance and long-term user retention.


Moroccan Urban Dynamic



Building Trust: The "Trust Wedge" in Fintech and Marketplaces


Key Concept: Trust as the Infrastructure for Digital Adoption


Financial inclusion and digital adoption are often viewed as technical problems. In reality, they are trust problems. In a market where cash has historically been king, digital services must act as a "Trust Wedge" to break through legacy behaviors.


Operational Deep Dive


How do you move a population from cash to card? You create a Subscription Habit. By introducing Glovo Prime, the platform didn't just lower delivery costs; it removed the "explicit decision" to pay for every order. This reduced friction resulted in frequency doubling, turning an occasional service into a deep-seated habit.


Once the habit is established, trust in the digital layer follows. As noted by Soufiane Marhraoui, this trust-led adoption is what allowed Glovo to achieve card usage rates that are 5x higher than the national market average. The "wedge" is a reliable, high-frequency service that proves to the user, over and over, that their money and data are safe.


💡 Pro-Tip: Fieldwork Empathy
Do not build your trust engine based on Excel data alone. Implement "Fieldwork Empathy"—require every employee, from engineers to marketers, to perform mandatory delivery shifts or field visits. This is where you identify the "hidden bugs" in the customer experience that raw data never reveals.


Trust and Digital Flow



The Investor Perspective: From Asset-Heavy to Knowledge-Led


Key Concept: The Transition to High-Value AI and SaaS


The Moroccan investment landscape is maturing. We are moving away from the era of simple "local clones" of global services toward an era of Knowledge-Led innovation.


Operational Deep Dive


There is a growing Technical Founder Gap that savvy investors are beginning to fill. The most successful future ventures will be those led by engineers who understand global platforms from day one. Figures like Houda Nait El Barj (Research Lead at OpenAI) and Mehdi Ghissassi (former DeepMind, now leader at AI71) represent this new archetype: the Moroccan expert operating at the absolute frontier of global tech.


For these founders, the focus is on Data Sovereignty and the Knowledge Economy. Investors are increasingly looking for startups that provide total control over data in critical vertical sectors like health, finance, and construction. Furthermore, for startups targeting the U.S. market from Morocco—such as Yassine Loqmane’s FlowBrave—the strategy shifts toward high-precision SOPs and validating Product-Market Fit through Silicon Valley-grade networking and "warm intros."


💡 Pro-Tip: Global from Day One
If you are building a SaaS from Morocco, do not optimize for the local market first. Optimize for the market with the highest maturity in your vertical (often the U.S. or Northern Europe) while leveraging the operational cost arbitrage of building your engine in Morocco.


Future Tech Vision



Strategic Business Value: The Arbitrage of Excellence


The Moroccan ecosystem offers a unique Arbitrage Opportunity for both founders and investors:


  • Operational Efficiency: Building tech-enabled services in a market that demands high-precision logistics and trust-building creates a highly resilient business model.
  • Talent Scaling: Leveraging the pool of world-class Moroccan engineers and multi-lingual experts to build solutions that are relevant for both the Global North and the high-growth markets of Africa.
  • Institutional Stability: Utilizing the rapid regulatory professionalization (e.g., the Conseil de la Concurrence framework) to build institutional-grade startups that are ready for global M&A or IPOs.



Technical References & Further Reading


  1. The Lean Startup by Eric Ries – A foundational guide for applying iterative industrial principles to tech innovation.
  2. Theory of Constraints by Eliyahu M. Goldratt – Essential for identifying and elevating the single biggest bottleneck in any supply chain.
  3. "Factfulness" by Hans Rosling – A data-driven framework for understanding the true potential of emerging markets.
  4. Systems Thinking Frameworks – As advocated by Yassine Loqmane for personal and organizational operational excellence.



Published on: 2026-04-01


Blog

What it takes to become data-driven

October 15, 2025

Introduction: From Data-Rich to Data-Driven


"Most people who work on data science, AI, and digital transformation are painfully aware that it is often culture, not technology, that stymies their efforts."


In today's business landscape, companies are facing a paradoxical challenge: they are collecting more data than ever before, yet struggling to translate it into a competitive advantage. Cross-industry studies show that, on average, less than half of an organization’s structured data is actively used in making decisions, and less than 1% of its unstructured data is analyzed or used at all. This leaves a vast reserve of potential value untapped.


The common response is to invest in more technology—new platforms, more sophisticated AI, and larger data lakes. While necessary, these investments often miss the point. Becoming a truly "data-driven" organization is not a technology problem; it is a fundamental shift in strategy, culture, and operations. It requires a new way of thinking and working, where data is not just a byproduct of business processes but the core asset that informs every decision.



This post synthesizes the core principles from the work of leading strategists at Harvard Business Review, distilling their frameworks into an actionable guide for what it truly takes to make this transformation. By moving beyond technology-centric solutions, organizations can build the capabilities required to turn their data from a dormant asset into their most powerful engine for growth and innovation.


The Core Frameworks for a Data-Driven Transformation


1. Master the Data Offense-Defense Playbook


The first step in any data transformation is to establish a clear strategy. A powerful framework for this is the concept of balancing data "defense" and data "offense." Data defense involves activities that minimize downside risk, such as ensuring regulatory compliance, preventing fraud, and guaranteeing data integrity. It is about control and establishing a single source of truth (SSOT) that the organization can rely on. An SSOT is one authoritative, inviolable copy of all crucial data, such as revenue or customer details, that is standardized across the enterprise.


Data offense, in contrast, focuses on activities that support business objectives and create value, such as increasing revenue and improving customer satisfaction. Offensive activities are about flexibility and are enabled by creating multiple versions of the truth (MVOTs) from that reliable data foundation. For example, marketing and finance departments might both report on ad spending. Marketing, interested in campaign effectiveness, reports spending when ads air. Finance, focused on cash flow, reports it when invoices are paid. Both are accurate MVOTs for their specific purpose, derived from the same SSOT.


This strategic balance is a critical starting point because it provides a practical architectural principle for managing data. It forces an organization to clarify the primary purpose of its data and make deliberate trade-offs between control (SSOT) and flexibility (MVOTs). Without this clarity, technology investments can become rudderless and ineffective. The right balance will vary, but defining it is a non-negotiable prerequisite for success.


"Data defense is about minimizing downside risk... Data offense focuses on supporting business objectives such as increasing revenue, profitability, and customer satisfaction."



2. Treat Your Data Like a Product, Not a Byproduct


Many companies struggle with data architecture, falling into one of two traps: the "big bang" approach, building a single, monolithic data lake to solve everything, or the "grassroots" approach, which results in a tangled mess of siloed, single-use data pipelines. A more effective strategy is to treat your data like a product.


A data product is a high-quality, ready-to-use, and reusable set of data that can be easily accessed and applied to different business challenges, much like a standard car chassis can be used as the foundation for multiple car models. This product-oriented approach delivers curated data sets—like a 360-degree view of the customer—that can power dozens of different applications, from credit risk scoring to personalized marketing.


This requires a specific operating model. Data products are managed by dedicated teams—composed of data engineers, architects, and modelers—that are embedded within business units, not siloed in IT. These teams are supported by a central "center of excellence" that sets standards, provides specialized talent, and designs the architectural patterns that enable product reuse. This mindset shift is the key to achieving scale, speed, and efficiency. By creating reusable data assets, companies can dramatically reduce the time it takes to launch new analytics projects and lower the total cost of ownership of their data infrastructure.


"We find that companies are most successful when they treat data like a product."


3. Democratize Your Data to Unleash Innovation


A data-driven transformation cannot be accomplished by a small group of technologists working in isolation. True, scalable innovation requires the democratization of data and digital capabilities. This democratization is only possible when data is treated as a reliable, accessible product (as discussed in the previous point), rather than a chaotic byproduct of operations.


The goal is to empower executives, managers, and frontline workers to rethink how every aspect of the business should operate. When the people closest to the business problems and customers are equipped with accessible, easy-to-use data and tools, they are best positioned to develop solutions that fit their actual needs. This broad-based capability building increases an organization's "tech intensity"—the extent to which employees can put technology to use to drive business outcomes.


Empowering your entire workforce to innovate with data is not just a force multiplier; it is a competitive necessity. Companies that successfully invest in democratizing their data tools and training find that their revenue grows more than twice as fast as that of laggards. Innovation is no longer the sole province of the IT department; it becomes a distributed capability woven into the fabric of the organization.


"...frontline users, who are closest to the use cases and best positioned to develop solutions that fit their needs, must take a central role, joining agile teams that dynamically coalesce and dissolve on the basis of business needs."


4. Build a Culture Where People, Not Just Algorithms, Thrive


The most sophisticated technology and the most brilliant data strategy will ultimately fail without a culture that embraces and supports them. The human element is paramount. The foundation of a data-driven organization is a "digital mindset"—a set of attitudes and behaviors that enable people to see data as a source of new possibilities rather than a threat or a burden.


Building this culture requires getting everyone involved. At Gulf Bank, for example, the transformation began by creating a "data ambassadors" program. To make this program successful and rewarding, the bank invested in world-class training to build new skills, used internal media publicity to highlight the ambassadors' work, and engaged marketing for branding, creating a program logo and branded giveaways. The bank also worked to instill two core concepts: that every employee is a "data customer" who needs data, and a "data creator" who produces data that others rely on. This approach made data relevant and empowering for everyone.


Without a culture that values data, change will not stick. The real work is in shifting behaviors and making data a natural part of everyone's job. This cultural foundation is what allows data-driven practices to move from isolated projects to the default way the organization operates.


"A digital mindset is a set of attitudes and behaviors that enable people and organizations to see how data, algorithms, and AI open up new possibilities and to chart a path for success in a business landscape increasingly dominated by data-intensive and intelligent technologies."


5. Get the Foundation Right: Quality is Non-Negotiable


The most advanced analytics and AI systems are not only useless but actively dangerous when fed bad data. The financial crisis of 2007-2009 serves as a stark reminder. The analytical models used to slice and dice mortgages were "actually quite good," but they failed because the mortgage data they were fed was "not, in fact, high-quality."


Ensuring high-quality data is the bedrock of any data-driven ambition. This requires fulfilling two fundamental criteria. First, you must have the "right data" for the problem at hand. Second, you must ensure the "data is right"—that it is accurate and correct. Crucially, these principles apply equally to the training data used to build a model and the future data fed to it in production. Microsoft’s Tay chatbot illustrates this vividly: it was trained well but began producing toxic output after users fed it bad data, demonstrating that quality is an ongoing operational discipline, not a one-time cleanup task.
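
These two criteria translate naturally into an automated gate that runs identically on training data and on every production batch. The sketch below uses an invented loan-data example; the required columns and validity rules are assumptions chosen only to illustrate "right data" versus "data is right".

```python
import pandas as pd

REQUIRED_COLUMNS = {"loan_amount", "income", "region"}   # the "right data" for this problem
VALID_REGIONS = {"north", "south", "east", "west"}


def check_right_data(df: pd.DataFrame) -> list[str]:
    issues = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing required fields: {sorted(missing)}")
    return issues


def check_data_is_right(df: pd.DataFrame) -> list[str]:
    issues = []
    if (df["loan_amount"] <= 0).any():
        issues.append("non-positive loan amounts found")
    if not set(df["region"].dropna()).issubset(VALID_REGIONS):
        issues.append("unknown region codes found")
    if df.isna().mean().max() > 0.05:
        issues.append("null rate above 5% in at least one column")
    return issues


def gate(df: pd.DataFrame, stage: str) -> pd.DataFrame:
    # Run the same gate on training data and on every production batch:
    # quality is an ongoing operational discipline, not a one-time cleanup.
    issues = check_right_data(df)
    if not issues:                  # only inspect values once the required fields exist
        issues = check_data_is_right(df)
    if issues:
        raise ValueError(f"[{stage}] data quality gate failed: {issues}")
    return df
```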


While focusing on data quality may seem unglamorous, ignoring it is a recipe for failure. Poor data leads to flawed insights, failed projects, and a deep-seated lack of trust in the very systems you are trying to build. Quality is not just a technical detail; it is the essential foundation that makes everything else possible.


"Good data science + bad data = bad business results."


Conclusion: Your Next Step on the Data-Driven Journey


Ultimately, becoming data-driven is a holistic endeavor where each framework enables the next: A clear offense-defense strategy is built on a foundation of data quality; a product mindset provides the reusable assets necessary for democratization; and an empowered culture ensures that these new capabilities are used to create lasting value. It is far more than a technology initiative; it is a fundamental re-architecting of how an organization creates value.



The journey to becoming data-driven is not a one-time project but a continuous process of evolution. The real question isn't whether your organization has data, but whether it has the courage to change. What is the one thing you can do this week to start building a true data culture?


Let's take the Challenge 💪


Blog

And Data happened... The Inevitable.

October 8, 2025

"Today truly is a wide-open frontier. We are all becoming. It is the best time ever in human history to begin."


Introduction: Making Sense of the Inevitable


The pace of technological change is overwhelming. Amidst the daily glitter of new gadgets, apps, and platforms, it’s easy to feel lost in the noise, struggling to see the big picture. We are morphing so fast that our ability to invent new things outpaces the rate at which we can civilize them, leaving us in a constant state of adaptation.


In his work "The Inevitable," Kevin Kelly draws on over three decades of firsthand experience at the birth of the online world to identify the deep, slow-moving currents shaping the next thirty years. He uncovers a dozen powerful technological forces—trends rooted not in social whims, but in the fundamental nature of bits and networks. These forces are trajectories, not destinies, that show us the direction we are headed.



This post distills six of the most surprising, counter-intuitive, and impactful takeaways from this exploration of our digital future. By understanding these underlying forces, we can work with their nature rather than struggle against them, gaining the best of what they have to offer as we navigate the decades to come.



Takeaway 1: We’re Heading to ‘Protopia,’ Not Utopia or Dystopia


When we imagine the future, our minds often jump to dramatic extremes: the perfectly engineered, problem-free world of a utopia, or the chaotic, lawless collapse of a dystopia. These cinematic visions are easy to imagine but are ultimately unsustainable and unlikely destinations.


Kelly proposes an alternative concept: Protopia. This is not a destination but a process—a state of becoming, characterized by incremental improvement and mild progress. In the protopian mode, things are slightly better today than they were yesterday. This slow, steady progress can be hard to notice because it generates almost as many new problems as it does benefits. The technological solutions to today’s problems will inevitably cause the problems of tomorrow, creating a circular expansion that hides a steady accumulation of small net benefits over time. Its benefits never star in movies.


"Protopia is a state of becoming, rather than a destination. It is a process. In the protopian mode, things are better today than they were yesterday, although only a little better. It is incremental improvement or mild progress. The “pro” in protopian stems from the notions of process and progress."


Takeaway 2: Get Used to Being a Newbie, Forever


In this era of relentless change, Kelly asserts that we are all becoming perpetual beginners. Regardless of age or experience, the state of being a clueless new user—a "newbie"—is the new default for everyone.


This state of the "Endless Newbie" is driven by three core realities. First, most of the important technologies that will dominate our lives in 30 years have not been invented yet, so we will naturally be beginners with them. Second, the requirement for endless upgrades for the technology we already use ensures we remain in a constant learning mode, as features shift and menus morph. Third, the cycle of obsolescence is accelerating so rapidly—the average lifespan of a phone app is a mere 30 days—that we won't have time to master anything before it's replaced.


This idea fundamentally shifts our life’s focus. The goal is no longer to achieve mastery of a tool or trade, but to cultivate the humility and adaptability needed to be a good, fast-learning beginner, over and over again.


"All of us—every one of us—will be endless newbies in the future simply trying to keep up... Endless Newbie is the new default for everyone, no matter your age or experience."


Takeaway 3: When Copies Are Worthless, Value Shifts to the Uncopyable


The internet is the world’s largest and most efficient copy machine. Anything that can be copied—a song, a movie, a piece of software—and touches the network will be copied, freely and promiscuously. This superabundant flow of free copies is the foundation of the digital economy, but it also has a powerful, counter-intuitive effect: it renders the copies themselves economically worthless.


When copies are free, true value shifts to things that cannot be copied. Kelly identifies eight "generatives"—qualities that are "better than free" because people will willingly pay for them. These are attributes that must be generated in real-time and cannot be faked, stored, or replicated.


  • Immediacy: Getting a product the moment it is released is a valuable asset people will pay for.
  • Personalization: A generic version may be free, but one tailored specifically for you is worth a premium.
  • Interpretation: A free copy of your DNA sequence is worthless without a guide that interprets what it means.
  • Authenticity: In a world of infinite fakes, a guarantee of the real, verified original creates value.
  • Accessibility: Owning files is a hassle; having an expert service provide convenient access anywhere is a benefit.
  • Embodiment: A digital copy is formless, but a live performance or physical version provides a valuable experience.
  • Patronage: Fans will pay creators simply for the pleasure of supporting those they admire.
  • Discoverability: Amidst millions of options, being found is scarce and valuable; we pay for curation.

This economic inversion fundamentally changes business strategy. The focus moves away from protecting products through scarcity and copy protection, and toward nurturing the relationships, experiences, and qualities that cannot be duplicated with a click.


"When copies are superabundant, they become worthless. Instead, stuff that can’t be copied becomes scarce and valuable."



Takeaway 4: Let the Robots Take Our Jobs


The idea that automation will replace human labor often sparks fear. Kelly argues that before the end of this century, automation will replace 70% of today’s occupations—and that we should welcome it.


He reframes this monumental shift not as a race against the machines, but as a race with them. Our future economic value will not be determined by how we compete with robots, but by how well we collaborate with them. When we let robots take over the repetitive, measurable, and efficiency-driven tasks they excel at, we are freed up to focus on what humans do best: creativity, innovation, and answering the question, "What are humans for?"


Automation doesn't just eliminate old jobs; it creates entirely new ones—occupations we cannot even imagine today, built on technologies that don't yet exist. By 2050 most truck drivers won’t be human. Since truck driving is currently the most common occupation in the U.S., this is a big deal. By letting robots do the work they are suited for, we are empowered to dream up new work that matters and to become, in essence, more human.


Kelly predicts our relationship with automation will follow a recurring seven-stage cycle of denial, acceptance, and collaboration:


  1. >> A robot/computer cannot possibly do the tasks I do.
  2. >> OK, it can do a lot of those tasks, but it can’t do everything I do.
  3. >> OK, it can do everything I do... except it needs me when it breaks down, which is often!
  4. >> OK, it operates flawlessly on routine stuff, but I need to train it for new tasks.
  5. >> OK, OK, it can have my old boring job, because it’s obvious that was not a job that humans were meant to do.
  6. >> Wow, now that robots are doing my old job, my new job is much more interesting and pays more!
  7. >> I am so glad a robot/computer cannot possibly do what I do now. [Repeat.]

"This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots."


Takeaway 5: AI's Greatest Purpose Is to Define Humanity


Discussions about artificial intelligence often revolve around productivity, efficiency, or a dystopian robot takeover. Kelly proposes a far more profound and unexpected benefit: AI's primary purpose is to help us define what it means to be human.


By creating "alien intelligences"—synthetic minds that think differently from our own—we are forced to confront what is truly unique about human consciousness. As we invent more species of AI, we will continually surrender tasks and abilities we once thought were exclusively human, from playing chess to making music. This process will spark a permanent identity crisis, forcing us to constantly re-evaluate what makes us special.


This is the grand irony: the most valuable thing we will get from the rise of artificial intelligence won't be smarter machines, but a deeper understanding of ourselves.


"The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are."


Conclusion: You Are Not Late


If we could climb into a time machine and journey 30 years into the future, we would look back at our present with a sense of wonder at how primitive everything was. Kelly writes that the citizens of 2050, surrounded by their holodecks and AI interfaces, would say, "Oh, you didn't really have the internet... back then." The last 30 years have simply created a marvelous starting platform. In terms of the internet, "nothing has happened yet!"


The message is profoundly optimistic and empowering. The wide-open frontier is not in the past; it is right now. Today, this very minute, is the best time in human history to invent something new, to start an enterprise, to create the future. We are all becoming, and we are not late.


The most incredible inventions of the next 30 years are waiting to be born from today's impossible ideas—what will you begin?


Let's take the Challenge 💪


Blog

5 Counter-Intuitive Truths for Building a Truly Innovative Company

Oct 1st, 2025

"One CEO described the experience of transforming as 'changing from driving on the right side of the road to the left—but gradually.'"


Many companies are pouring massive investments into technology, digital transformation, and Agile processes, yet see frustratingly little in return. They become efficient "feature factories," shipping more than ever but failing to move the needle on business results. In an era of rapid disruption, where new technologies like Generative AI are constantly reshaping the competitive landscape, this isn't just frustrating—it's an existential threat. These companies find themselves in a battle for customers with weapons and strategies that can no longer compete.



This is a common story, but it’s not inevitable. The problem isn’t a lack of effort; what’s missing is a fundamentally different way of working. In his book TRANSFORMED, Marty Cagan details the "Product Operating Model"—a set of principles and competencies that power the world's most innovative companies. This article distills five of the most impactful and often surprising lessons from that work.


Your CEO, Not Your CIO, Must Lead the Charge


One of the most critical lessons for a successful transformation is that the effort must be led, owned, and evangelized by the CEO. This is not a technology initiative that can be delegated to a CIO, Chief Digital Officer, or Chief Transformation Officer.


The reason is simple: moving to a product model is a company-wide culture shift that fundamentally impacts sales, marketing, finance, HR, and legal. When a CEO delegates the transformation, resistant stakeholders in these other departments can stall progress because the person leading the charge lacks the necessary authority over them. These individuals all eventually report up to the CEO, and they will take their cue from whether the CEO treats this change as truly important.


"To be explicit: The CEO needs to be viewed as the chief evangelist for the product model."


Ultimately, "the company cares about what the leader cares about." Without the CEO's active and visible support, the transformation effort is likely to stall out as soon as it encounters the inevitable friction from other parts of the business.


Your 'Product Managers' Probably Aren't Equipped for the Job (And It's Not Their Fault)


Many companies attempting to transform make a critical and often fatal mistake: they simply retitle their existing "product owners" or "business analysts" as "product managers." This cosmetic change, however, does nothing to establish the core competency required for the product model to function.


The old role was often narrowly defined as managing a backlog of features for engineers to build. In the product operating model, a true product manager is explicitly responsible for ensuring a solution has both value (customers will buy or choose to use it) and viability (it works for the constraints of the business, from finance to legal to sales). This requires a deep, evidence-based understanding of the customer, the data, the business, and the industry.


"If you believe you can simply retitle your product owners or business analysts, you are very likely heading for failure."


This is a critical failure point because the entire model is predicated on empowering teams to solve hard problems. Without a competent product manager, stakeholders will not trust the team, engineers cannot be truly empowered, and the system collapses. The standard for this role is exceptionally high: the CEO should believe that each product manager has the potential to be a future leader of the company. To meet that standard, they require direct, unencumbered access to users and customers, to product data, and to business stakeholders. Without this access and competency, the entire model fails.



True Innovation Doesn't Come from a Roadmap—It Comes from Your Engineers


In most companies, engineers are treated like mercenaries: their job is to build what they are told, as specified in a roadmap handed down from "the business." The product operating model requires a fundamental mindset shift in how a company views and utilizes its engineering talent.


The source of consistent innovation is not in a stakeholder's list of desired features. It comes from the people who work with the enabling technology every single day. They are in the unique position to see "what's just now possible" and to imagine solutions that customers (and product managers) could never dream up on their own.


"Of all the principles underlying the product model, the single most important is the realization that innovation absolutely depends on empowered engineers."


This principle moves engineers from being passive implementers to active partners in discovering the right solution. The mechanism for innovation is unlocked when these empowered engineers are partnered with skilled product managers and designers, and are exposed directly to users and customers. It is this collaborative unit that can apply deep technical knowledge to solve customer problems in truly novel ways.


To Improve Quality, You Need to Release More Often, Not Less


It sounds completely counter-intuitive, but one of the clearest lessons from modern product companies is that small, frequent releases result in higher quality and greater stability than large, infrequent "big-bang" releases.


The logic is straightforward. When you release a small number of changes, it is much easier to test them thoroughly and to pinpoint the source of any problem that arises. In contrast, massive quarterly or annual releases bundle hundreds or thousands of changes together, making it a nightmare to find the source of regressions and forcing customers to absorb a disruptive amount of change all at once.


"If you genuinely care about providing a reliable service for your customers, it is much easier to ensure that a small number of changes are working properly and don't introduce any inadvertent problems than it is to group a large number of changes together and try to deliver all at once..."


This practice of continuous integration and continuous deployment (CI/CD) is a key reason why top tech companies can innovate so quickly while maintaining reliability. They don't just move faster; they often release several times per day. This allows them to deliver value sooner, get feedback immediately, and respond to issues in minutes, not months, providing a more stable and consistently improving experience for their customers.


Empowered Teams Need More Leadership, Not Less


A common misconception is that empowering teams means managers should simply "back off" and let teams do whatever they want. In reality, empowered product teams depend on better leadership, not less. The mantra is to lead with "context, not control."


Instead of micromanaging the "how" through roadmaps and feature lists, a modern product leader has two primary responsibilities. The first is actively coaching their people to develop the skills needed to do their jobs well. The second is providing the essential strategic context—the product vision and product strategy—that enables teams to make good, autonomous decisions that align with the company's broader goals.


As legendary CEO Andy Grove explained, there are only two possibilities when good work doesn't get done:


"What gets in the way of good work? There are only two possibilities. The first is that people don't know how to do good work. The second is that they know how, but they aren't motivated."


The product leader's job is to address both of these issues. They coach their people to ensure they know how to do good work, and they provide an inspiring strategic context that ensures they are motivated to do it.


Conclusion: It’s a Culture, Not a Process


These five truths highlight a central theme: transforming into a modern product-driven company is not about adopting a new process like Agile or implementing a new software tool. It is a profound cultural shift in how a company thinks about technology, leadership, accountability, and, most importantly, trust.



It requires moving from a world of top-down control to one of empowered teams, from delivering features to delivering results, and from serving stakeholders to serving customers. Instead of asking whether your teams are hitting their deadlines, perhaps the more powerful question is: Are they empowered to solve the right problems?


Let's take the Challenge 💪


Blog

Data Stewardship and Metadata

Sep 23rd, 2025

"Formal enterprise-wide Data Stewardship, as part of a comprehensive Data Governance effort, is crucial for managing data and enabling organizations to begin treating data as an asset."


Achieving success in treating data as an asset requires robust inventorying and understanding of the data. This necessity places the spotlight directly on metadata—the knowledge about the data—and the imperative to make that metadata accessible and trusted. In essence, the case for open and documented metadata is the backbone of effective Data Stewardship.



The Core Mission: Stewardship and Metadata Quality


Data Stewardship fundamentally deals with knowledge management, so much so that "Data Stewardship" is often also described as "Metadata Stewardship". The ultimate goal is to move data from an ungoverned state (rarely defined, quality unknown, no accountability) to a governed state (trusted, understood, and accountable).


Business Data Stewards are the key authority figures responsible for this transformation. They provide the definition, meaning, and business rules associated with their data. If they are effective, they ensure that robust definitions, derivations, creation/usage rules, and quality rules are documented.


However, this effort is wasted if the results are not easily discoverable. The principle is clear: metadata must be easy to find and of high quality. If metadata is hidden or is of poor quality, the underlying data will be misunderstood and potentially misused. Without assigned stewardship and proper communication, the data can be full of surprises, leading to a lack of trust among analysts.


Achieving Transparency: The "Open Metadata" Imperative


To counteract data surprises and ensure trust, the Data Stewardship effort must be inherently transparent. Transparency requires formalized documentation and communication mechanisms, confirming that decisions made about the data are accessible to all interested parties.


The Data Governance Program Office (DGPO) coordinates this transparency effort, ensuring that everything the overall Data Governance effort achieves is fully documented and made available. The Enterprise Data Steward specifically supports this mission by maintaining a repository of information and decisions.


Key components in driving transparency include:


  • Documented Decisions: Data Stewardship decisions must be clearly documented and formally published to interested parties using sanctioned communication methods.
  • Targeted Communications: Communications must be carefully managed to ensure the information is timely and relevant to the audience, covering updates on data sources, data changes, and metadata changes.
  • The Data Stewardship Portal: It is imperative that all users and participants in the effort have access to a web portal where they can find staffing information, policies, procedures, status reports, and key contacts. This serves as the single reference point for the entire program.


Practical Tools for Publishing Metadata


The transparency mandate is realized through a specific toolset designed to publish and connect metadata across the enterprise:


A Critical Artifact: The Business Glossary


The Business Glossary is the tool that records and helps manage business metadata. It serves as the primary location where business metadata is published and actively used. The core purpose of the Business Glossary is to inventory the organization’s common vocabulary, documenting terms, definitions, and relationships between those terms.


A robust Business Glossary provides several functions vital to openness:


  1. Establishing Decision Rights: It documents the ownership and decision rights for the Business Data Elements (BDEs) and other metadata stored within the glossary.
  2. Ensuring Timely Access: By providing a central store for business metadata, it ensures timely access to information, which enables quick location and utilization of the required knowledge.
  3. Automating Processes: Modern glossaries often automate common governance processes (such as creating and approving BDE definitions) using configurable workflows.

The Metadata Repository (MDR)


While the Business Glossary handles the business view of metadata, the Metadata Repository (MDR) focuses on physical and technical metadata. The MDR stores technical information like database structures, ETL lineage, and business intelligence tool metadata.


The MDR is critical because it provides the logical/physical lineage necessary to link the BDEs defined in the Business Glossary to their numerous physical implementations across the systems.


This linkage is essential for governance and openness, as it enables:


  • Impact Analysis: Assessing the consequences of proposed changes by mapping downstream data (a minimal sketch follows this list).
  • Root Cause Analysis: Tracing data backward from a quality issue to identify where the problem originated.
  • Data Integrity: Proving the integrity of the data for purposes such as regulatory reporting.
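To make this linkage concrete, here is a minimal, hypothetical sketch in TypeScript. The types and the impactOfChange function are illustrative assumptions, not the schema of any particular glossary or MDR product; they simply show how a governed BDE can be mapped to its physical implementations and traced downstream.

```typescript
// Hypothetical, simplified types: real glossaries and MDRs model far more.
interface BusinessDataElement {
  id: string;              // e.g. "BDE-0042"
  term: string;            // business term, e.g. "Customer Exposure"
  definition: string;      // steward-approved definition
  steward: string;         // accountable Business Data Steward
}

interface PhysicalColumn {
  system: string;          // e.g. "risk_dwh"
  table: string;
  column: string;
  bdeId: string;           // link back to the governed BDE
  feeds: string[];         // downstream system names, used for lineage
}

// Impact analysis: given a BDE, find every physical column that implements it
// and every downstream system those columns feed.
function impactOfChange(bdeId: string, mdr: PhysicalColumn[]): string[] {
  const impacted = mdr.filter((col) => col.bdeId === bdeId);
  const downstream = impacted.flatMap((col) => col.feeds);
  return [...new Set(downstream)];
}
```

In a real platform the traversal would be driven by harvested lineage metadata rather than a hand-maintained array, but the governance value is the same: the link from business term to physical column is what makes impact and root-cause analysis possible.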


Through these tools and the mandated transparency driven by Data Stewards, organizations ensure that data is not only managed but also fully understood and trusted throughout its lifecycle.


Let's take the Challenge 💪


Blog

Building AI: 5 Core Concepts That Will Surprise You

Sep 16th, 2025

“All of our experiments suggest that our results can be improved simply by waiting for faster GPUs and bigger datasets to become available.” — AlexNet authors, 2012


In the last few years, powerful AI tools like ChatGPT have emerged with such force that they seem to have appeared out of nowhere. This sudden arrival, however, was not an overnight success. That 2012 prediction was made in a landmark paper by the authors of AlexNet, a breakthrough in computer vision. One of those authors, Ilya Sutskever, went on to co-found OpenAI, where he turned this exact insight into reality with the GPT models.


This "sudden" arrival was the culmination of steady progress, where a small boost in model quality unlocked an explosion of new possibilities. To truly understand modern AI, it helps to look past the hype and grasp the foundational concepts that drive it. This article reveals five of the most surprising, counter-intuitive, and impactful core ideas from the world of AI engineering, based on Chip Huyen's book, AI Engineering.



Understanding these concepts is the key to grasping both the incredible power and the inherent limitations of the AI tools we use today. It marks a transition from the world of traditional, deterministic software to a new engineering discipline that must manage scale, probability, and ambiguity.


It’s Not New Tech, It’s Old Tech at Unprecedented Scale


While tools like ChatGPT feel revolutionary, the underlying language modeling techniques they are built on have been around for a while, with the first papers on the topic dating back to the 1950s. The AI community has known for over a decade that simply scaling up a model—feeding it more data and running it on faster computers—improves its performance.


The recent AI revolution is primarily a story of scale, not a fundamental change in concept. As author Chip Huyen notes, what was surprising was not the model's new capabilities, but the "explosion of new possibilities" that was unlocked by a relatively modest boost in quality that came from this massive scaling.


This shift has given rise to a new discipline called AI Engineering. Unlike traditional Machine Learning engineering, which focused heavily on building and training models from scratch, AI engineering focuses less on modeling and training, and more on model adaptation—the practice of building applications on top of these powerful, readily available foundation models.


The “Last Mile” Is the Hardest: From Demo to Production Is a Massive Leap


It can be surprisingly easy to build an impressive AI demo in a weekend. However, this initial success can be misleading. One of the most important concepts in AI engineering is the "last mile challenge," which describes the massive gap between a simple demo and a production-ready application. This challenge reflects a shift from building something that works sometimes to building something that works reliably and safely.


“the journey from 0 to 60 is easy, whereas progressing from 60 to 100 becomes exceedingly challenging.”


A concrete example of this comes from LinkedIn, which shared its experience building an AI-based YAML parsing solution. It took their team just one month to achieve 80% accuracy, but it took four more months to surpass 95%. Much of that time was spent working out "product kinks and dealing with hallucinations." The team noted that "the slow speed of achieving each subsequent 1% gain was discouraging."


This concept highlights the immense engineering effort required to make an AI application reliable, safe, and truly useful for real-world users. Getting an AI to perform a task correctly most of the time is one thing; ensuring it performs correctly and safely all of the time is an entirely different, and much harder, challenge.


AI Is a Prediction Machine, Not a Knowledge Machine


At its core, a language model is a "completion machine." It works by predicting the next most probable token based on the input it receives. It is not accessing a database of facts; it is generating a sequence of text based on statistical patterns in its training data. A token can be a full word (like "cat"), a part of a word (like the "-ing" in "running"), or even a punctuation mark. This method helps models handle unknown or made-up words; for instance, "chatgpting" can be split into "chatgpt" and "ing," allowing the model to understand its structure.


This probabilistic nature is a double-edged sword. On one hand, it’s the source of AI’s creativity. Its ability to explore different probable outcomes allows it to brainstorm, write poetry, and generate novel ideas. On the other hand, it’s the root cause of its most frustrating flaws: inconsistency (giving different answers to the same question) and hallucinations (making up facts).


A hallucination occurs because the model can’t differentiate between the data it’s given and the data it generates. To illustrate, imagine you give a model the prompt: "Who’s Chip Huyen?" and the first sentence it generates is: "Chip Huyen is an architect." To generate the next token, the model treats the entire sequence—your prompt plus its own output—as the new ground truth. It sees "Who’s Chip Huyen? Chip Huyen is an architect" and continues generating text based on the plausible but false premise that she is an architect. The model treats what it produced with the same authority as the original prompt, leading it down a path of what some researchers call "self-delusion." Much of AI engineering is about managing and mitigating this inherent probabilistic behavior.
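To make the "completion machine" idea tangible, here is a toy TypeScript sketch; it is emphatically not a real language model. The hand-written probability table and the sampleNext/generate helpers are invented for illustration (and this toy only conditions on the last token, whereas a real model conditions on the whole context). The key mechanic to notice is that every generated token is appended to the context and then conditioned on as if it were part of the original prompt.

```typescript
// Toy "language model": maps the last token to a probability distribution
// over possible next tokens. A real model conditions on the entire context
// with billions of parameters; the principle illustrated here is the same.
const nextTokenProbs: Record<string, Record<string, number>> = {
  "Huyen?": { "Chip": 1.0 },
  "Chip":   { "Huyen": 0.9, "is": 0.1 },
  "Huyen":  { "is": 1.0 },
  "is":     { "an": 0.6, "a": 0.4 },
  "an":     { "architect.": 0.5, "engineer.": 0.5 },
  "a":      { "writer.": 1.0 },
};

function sampleNext(token: string): string | undefined {
  const dist = nextTokenProbs[token];
  if (!dist) return undefined;
  let r = Math.random();
  for (const [candidate, p] of Object.entries(dist)) {
    if ((r -= p) <= 0) return candidate; // pick a token proportionally to its probability
  }
  return Object.keys(dist)[0];
}

// Autoregressive generation: the model's own output becomes part of the
// context it conditions on (the new "ground truth").
function generate(prompt: string[], maxTokens = 6): string[] {
  const context = [...prompt];
  for (let i = 0; i < maxTokens; i++) {
    const next = sampleNext(context[context.length - 1]);
    if (!next) break;
    context.push(next); // a generated token is now indistinguishable from the prompt
  }
  return context;
}

console.log(generate(["Who's", "Chip", "Huyen?"]).join(" "));
// e.g. "Who's Chip Huyen? Chip Huyen is an architect."
```

Once "is an architect." has been sampled, nothing in the loop marks it as less trustworthy than the user's question, which is exactly the self-reinforcing behavior described above.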


We’re Running Out of Human Data


One of the most surprising bottlenecks in scaling AI further is not compute power, but the availability of high-quality, human-generated data. Foundation models require such vast amounts of text and images for training that there is a realistic concern we will run out of usable data from the internet in the next few years.


This problem is compounded by a phenomenon known as "model collapse." The internet is now being rapidly populated with AI-generated data. If new AI models are trained on this synthetic data, they risk gradually forgetting the original patterns from human data, which can degrade their performance over time.


"the web is full of ChatGPT outputs."


This data scarcity has a major implication: unique, proprietary human data is becoming a critical competitive advantage in the AI race. Data sources like copyrighted books, translations, contracts, medical records, and genome sequences are now incredibly valuable assets for training the next generation of more capable models.


Evaluating AI Is Often Harder Than Building It


As AI models become more intelligent and their outputs more open-ended, evaluating their performance has become one of the biggest hurdles in the field. Because this evaluation is so difficult, many teams settle for ad-hoc methods like "word of mouth" or simply "eyeballing the results"—a process sometimes called a "vibe check." For some applications, figuring out a reliable evaluation method can take up the majority of the development effort.


This stands in stark contrast to traditional software, where you can write tests with exact, predictable outcomes. If you ask a calculator for 2+2, the answer is always 4. But for a generative AI, evaluating the "correctness" of an essay, a legal summary, or a piece of code is subjective and complex.


This difficulty has led many teams to a seemingly circular solution: using other powerful AI models to act as a "judge." While this can be effective, it introduces its own set of challenges, including the judge model's potential biases and inconsistencies. A significant portion of the work in modern AI engineering is now focused on building reliable and systematic evaluation pipelines—a task that is often more complex than building the initial AI feature itself.
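As a purely illustrative sketch of that contrast, consider the TypeScript below. The Judge type and the callJudgeModel parameter are hypothetical stand-ins for whatever model API a team actually uses; the point is only that open-ended outputs force you to score against a rubric instead of an exact expected value.

```typescript
// Deterministic test: traditional software has an exact expected output.
function testCalculator(add: (a: number, b: number) => number): boolean {
  return add(2, 2) === 4; // always passes or always fails
}

// Open-ended output: there is no single correct string to compare against,
// so we fall back to a rubric scored by another model. `callJudgeModel` is a
// hypothetical placeholder, not a real library call.
type Judge = (instructions: string, output: string) => Promise<number>; // score in [0, 1]

async function evaluateSummaries(
  outputs: string[],
  callJudgeModel: Judge
): Promise<number> {
  const rubric =
    "Score 0 to 1: is this summary faithful to the source, concise, and free of invented facts?";
  const scores = await Promise.all(outputs.map((o) => callJudgeModel(rubric, o)));
  // Aggregate the scores; the judge itself may be biased or inconsistent,
  // so teams typically track its agreement with periodic human spot checks.
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```

Even this tiny harness shows why judge-based pipelines need their own quality checks: the aggregate score is only as trustworthy as the judge producing it.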


Conclusion: Engineering in a Probabilistic World


Building with AI is not about finding a single magic algorithm. Grappling with old tech at new scale, navigating the treacherous 'last mile' from demo to production, managing a prediction machine, confronting a looming data bottleneck, and solving the puzzle of evaluation—these are the new pillars of AI engineering. This new discipline requires grappling with unique challenges of unprecedented scale, probabilistic behavior, and the very nature of data itself.



These ideas show that the path forward requires more than just bigger models. It requires clever engineering, rigorous evaluation, and a deep understanding of the systems we are building. As AI systems increasingly learn from a world saturated with their own output, how will we ensure they remain grounded in human reality?


Let's take the Challenge 💪


Blog

Why Your 0-to-1 Product Will Likely Fail (And the 5 Counter-Intuitive Rules to Ensure It Succeeds)

Jul 25th, 2024

"Most product managers devote less than 20% of their time to product discovery and more than 80% to product delivery. This results in building features and products nobody wants."


Launching a new, "0-to-1" product is one of the most challenging endeavors in business. The landscape is littered with well-funded, well-engineered products that launched to crickets because they failed to mitigate the four big risks: value risk (will they buy it?), usability risk (can they use it?), feasibility risk (can we build it?), and business viability risk (does it work for our business?). The core reason often traces back to the quote above. As author Stefan Richter identifies, this imbalance between discovery and delivery is the primary reason products fail.



The path to building a successful new product isn't about having a bigger budget or a faster engineering team. It’s about adopting a different mindset, one that often runs counter to traditional project management. Success lies in a disciplined process of learning, validating, and adapting before you commit to building at scale.


This article distills five of the most impactful and counter-intuitive takeaways for building successful new products, based on the principles outlined in Stefan Richter's "The Product Manager's Playbook".



Treat Your Roadmap as a Strategic Compass, Not a Project Plan


The most common mistake organizations make is treating a product roadmap like a Gantt chart—a fixed project plan with a list of features and firm due dates. This approach creates a rigid system where teams are measured on their ability to ship pre-defined features on a schedule, regardless of whether those features solve the right problem.


"...many organizations make one fundamental mistake: they treat a product roadmap as a project plan; i.e., a prioritized list of features with due dates on a timeline."


A more effective alternative is the "theme-based roadmap." Instead of listing outputs (features), it focuses on themes, which are the high-level user or business problems you aim to solve. These themes are the strategic outcomes you want to achieve, organized into flexible buckets like "Now, Next, and Later" rather than being tied to specific dates.


This shift is transformational. For a 0-to-1 product, a feature-based timeline is a map to a place that might not exist; a theme-based roadmap is a compass that helps you find the treasure. It forces the conversation to evolve from "What are we building?" to "What problem are we solving?"—the single most important question in product management.


Success Is an Outcome, Not an Output


Closely related to the roadmap is the fundamental distinction between outputs and outcomes. An "output" is a tangible deliverable, like a new login page or a notifications feature. An "outcome" is a measurable change in user behavior that creates value, such as a 10% increase in daily active users or a 15% reduction in customer support tickets.


A team's goal should never be to simply "ship the feature." The goal is to achieve the desired outcome. The feature is merely a hypothesis for how to achieve that outcome.


"By concentrating on outcomes rather than features, you and your company will be better able to concentrate on problems instead of solutions."


This mindset shift empowers the product team. Instead of being handed a solution to build, they are given a problem to solve. This autonomy allows them to find the right solution, which may not be the one originally envisioned. This redefines success. A team that ships a feature on time but fails to change user behavior has failed. A team that invalidates an idea with a simple experiment has succeeded.


Spend More Time Discovering Than Delivering


The imbalance highlighted in the opening quote—less than 20% of time spent on discovery—is the root cause of most product failures. Product work consists of two distinct phases:


  • Product Discovery: Building the RIGHT product.
  • Product Delivery: Building the product RIGHT.

Think of it this way: Delivery is about building the ladder efficiently. Discovery is about making sure the ladder is leaning against the right wall. While both are critical, the highest-leverage activity a product manager can perform is in discovery. Ensuring you are solving a painful, urgent problem for a clear audience before you write a single line of production code is the best way to de-risk a new product.


Successful discovery saves immense time, money, and morale by preventing the team from building a polished product that nobody wants. But this doesn't require months of abstract research. The most effective discovery is done by testing tangible concepts with users, often without writing a single line of code.


Prove Your Idea Without Writing a Line of Code


Committing engineering resources to an unproven idea is a massive gamble. The goal of product discovery is to validate your solution hypotheses in the cheapest, fastest way possible. This often means running experiments that require no code at all.


Several lean validation methods can prove demand before you build:


  • Smoke Test / Fake Door: This is your tool for validating top-of-funnel market demand and desirability. A simple landing page describes the product's value proposition and includes a call to action (e.g., "Sign up for early access"). The conversion rate is a direct measure of interest.
  • Concierge MVP: This method validates the solution's value and UX by having you manually deliver the service to your first users. It’s the ultimate way to learn intimately about their needs and uncover operational complexities before building any automation.
  • Wizard of Oz MVP: This tests the proposed user experience by presenting a seemingly functional front-end, while all back-end tasks are performed manually by a human. It validates the user flow and value proposition before complex back-end logic is built.

The purpose of these techniques is to maximize learning, not to build a product. They focus on answering the most critical question first: is this idea desirable enough that people will take action?


To See the Future, Only Ask About the Past


Effective discovery hinges on gathering accurate user feedback, but most people conduct user interviews incorrectly. The most common mistake is asking users to speculate about their future behavior. Questions like "Would you use this feature?" are unreliable because people are poor predictors of their own actions and often try to please the interviewer.


The tactical rule for effective interviews is to only ask about specific, past experiences.


"...during user interviews, always ask about past/present experiences and never about the future."


Instead of asking, "Would you use a tool to manage your pet's health records?" ask, "Tell me about the last time you had a health problem with your pet." This prompts a story that reveals real pain points, workarounds, and emotions. To get to the root of those pain points, you can use several high-impact techniques:


  • Avoid leading questions: Instead of "Do you have any problems with your boss?" which implies problems exist, ask "Tell me about your relationship with your boss."
  • Use the mirror technique: If a user says, "I ate a chocolate croissant," simply repeat it back: "A chocolate croissant." This encourages them to elaborate without you guiding them.
  • Ask "why" several times: This helps you drill down from a surface-level complaint to the core, underlying problem.

This simple change in questioning uncovers true user needs rooted in actual behavior, not speculation.


Conclusion: Plan to Iterate


The reason most 0-to-1 products fail is not a lack of engineering talent or a flawed launch plan. Failure comes from prioritizing delivery over discovery, outputs over outcomes, and speculation over evidence. The five principles above are the direct antidote to these common traps.


Building a successful new product is not a linear process of planning and execution. It is a rigorous cycle of learning, adapting, and iterating toward value. As product leader Marty Cagan famously wrote, "it typically takes several iterations to get the implementation of this idea to the point where it actually delivers the expected business value." By embracing this iterative, learning-focused process, teams can dramatically increase their odds of creating products that customers not only use, but love.



Which of these principles would create the biggest change in how your team builds new products today?


Let's take the Challenge 💪


Blog

Push Technologies

Jul 30th, 2023

"Comparative study of push technologies either of HTTP "workarounds" or new solutions such as bidirectional WebSocket."


Context


As part of the move towards a PaaS ecosystem, it is essential to develop effective and innovative capabilities such as client push notifications. HTTP solutions based on periodic polling, which repeatedly check whether new data is available on the server, are not well suited to low-latency applications. I conducted a comparative study of push technologies, covering HTTP "workarounds" (inefficient but widely compatible) as well as newer solutions such as WebSocket, which enables efficient, bidirectional, real-time data transmission. WebSocket, however, is not yet widely supported across current browsers and servers.


What happens


In the context of the evolution towards PaaS ecosystems and expected traffic growth, it is critical to develop application capabilities such as notifications (detected issues, server-side events…) and monitoring. Such features require the server to be able to start sending new data to the client at any time.



With the HTTP request/response paradigm, there is no "natural" way for a server to push data to a client. Solutions based on short periodic polling (Short Polling, JSONP Polling), which constantly check for new data, are not well suited to low latency. Long Polling is more efficient, since the server holds the request open until data is available before responding; the client then has to issue a new request. Other solutions based on one-way streaming suffer from compatibility or state-tracking limitations (e.g. Ajax multipart streaming, Forever iframe). The HTML5 specifications introduced Server-Sent Events as a more elaborate streaming solution, though it is still not bidirectional.
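As a rough sketch of that streaming approach (assuming a Node.js server with the express package; the endpoint name and payload are arbitrary), Server-Sent Events can look like this:

```typescript
import express from "express";

const app = express();

// Server-Sent Events: one long-lived HTTP response that the server keeps
// writing to. One-way only (server -> client), unlike WebSocket.
app.get("/events", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  const timer = setInterval(() => {
    // Each message is a "data:" line terminated by a blank line.
    res.write(`data: ${JSON.stringify({ ts: Date.now() })}\n\n`);
  }, 1000);

  req.on("close", () => clearInterval(timer)); // stop pushing when the client disconnects
});

app.listen(3000);

// Browser side: const source = new EventSource("/events");
//                source.onmessage = (e) => console.log(e.data);
```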


The WebSocket protocol is gaining traction and generating enthusiasm in the developer community because it breaks free of HTTP's limitations by taking full advantage of TCP, allowing two-way, real-time data transmission with low latency. Flash Socket (Adobe's proprietary TCP connection solution) was also explored, but it lacks interoperability with the Java environment.
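A minimal WebSocket counterpart, sketched here with Node's widely used ws package (the port and message shapes are arbitrary), shows the bidirectional exchange described above:

```typescript
import { WebSocketServer } from "ws";

// Bidirectional channel: once the HTTP connection is upgraded, either side
// can send a frame at any time with low latency.
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.send(JSON.stringify({ type: "welcome", ts: Date.now() })); // server-initiated push

  socket.on("message", (raw) => {
    // Client-initiated message on the same connection; echo it back.
    socket.send(`echo: ${raw.toString()}`);
  });
});

// Browser side: const ws = new WebSocket("ws://localhost:8080");
//                ws.onopen = () => ws.send("hello");
//                ws.onmessage = (e) => console.log(e.data);
```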


Compatibility


While almost all HTTP-based techniques are fully compatible with both browsers and servers, newer technologies such as Server-Sent Events and WebSocket are not as widely supported. Consequently, one has to trade off performance against compatibility.



Because push technologies are resource-intensive, servers must be designed to handle a large number of concurrent, long-lived connections, either “naturally” (event-driven servers such as the nginx web server) or by implementing suitable mechanisms (threaded servers such as the Apache HTTP server, or the WLS 12c Java EE application server, latest version, implementing Servlet 3.0).

Blog

MEVN Stack

Jul 13th, 2023

"Looking for a powerful and versatile platform for web development ? The MEVN stack is a great option."


What is the MEVN stack?


The MEVN stack is a JavaScript software stack that uses MongoDB, Express.js, Vue.js, and Node.js to build web applications. It is a variation of the MEAN stack, which uses Angular.js in place of Vue.js, and of the MERN stack, which uses React.js.



Why use the MEVN stack?


There are several reasons why developers might choose to use the MEVN stack, including:


  • JavaScript ecosystem: The MEVN stack is built entirely on JavaScript, which means that developers can use the same language for both the frontend and backend of their applications. This can make development more efficient and easier to maintain.
  • Performance: The MEVN stack is known for its performance. MongoDB is a very lightweight database, and Node.js is a very efficient runtime environment. This means that MEVN stack applications can be very fast and responsive.
  • Community: The MEVN stack has a large and active community of developers. This means that there are plenty of resources available to help developers learn and use the stack.


The components of the MEVN stack


The MEVN stack consists of four main components (a minimal wiring sketch follows the list):


  • MongoDB: MongoDB is a document-oriented database that is known for its scalability and flexibility. It is a good choice for storing data for web applications that need to handle a lot of traffic or store complex data structures.
  • Node.js: Node.js is a runtime environment that allows JavaScript to run outside of a web browser. It is used to create web applications, but it can also be used to build other types of applications, such as command-line tools or desktop applications.
  • Express.js: Express.js is a web framework built on top of Node.js. It provides a simple and easy-to-use API for creating web applications.
  • Vue.js: Vue.js is a JavaScript framework for building user interfaces. It is known for its simplicity and for being incrementally adoptable, which makes it easy to integrate into existing projects.
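As a minimal, illustrative wiring of these pieces (assuming Node.js with the express and mongoose packages, a local MongoDB instance, and an arbitrary Item collection), the back end of a MEVN application can be as small as this:

```typescript
// Minimal MEVN back end: an Express route backed by MongoDB via Mongoose.
// The Vue front end would simply fetch("/api/items").
import express from "express";
import mongoose from "mongoose";

const Item = mongoose.model(
  "Item",
  new mongoose.Schema({ name: String, createdAt: { type: Date, default: Date.now } })
);

const app = express();
app.use(express.json());

app.get("/api/items", async (_req, res) => {
  res.json(await Item.find().sort({ createdAt: -1 })); // read from MongoDB
});

app.post("/api/items", async (req, res) => {
  res.status(201).json(await Item.create({ name: req.body.name })); // write to MongoDB
});

async function main() {
  await mongoose.connect("mongodb://localhost:27017/mevn_demo"); // local MongoDB instance
  app.listen(3000, () => console.log("API ready on http://localhost:3000"));
}

main();
```

Express stays a thin routing layer, Mongoose handles MongoDB persistence, and the Vue front end consumes the /api/items endpoint with fetch or axios.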


One can use a component framework such as Element on top of Vue to benefit from pre-built interactive front-end layouts, icons, and themes.


Opinion


The MEVN stack is a powerful and versatile stack that can be used to build a wide variety of web applications. It is a good choice for developers who are looking for a stack that is both performant and scalable. However, it is important to be aware of the learning curve before choosing the MEVN stack for your next project.


I hope this overview of the MEVN stack has been helpful. If you have any remarks, please feel free to reach out.