AI Workflow
Workflow
The Honeydew AI workflow helps data teams keep AI responses stable and improve their quality as user demands evolve.
Data teams using Honeydew to build production-grade AI systems typically follow three key approaches:
- Metadata: Define AI-specific metadata to clarify intent and resolve ambiguities or inconsistencies in data.
- Evaluation: Regularly validate AI outputs by testing them against a curated set of reference questions.
- Monitoring: Continuously track new AI responses to gauge overall quality and identify specific modeling issues.
AI Domain
Honeydew AI operates on a dedicated domain within a workspace and branch.
The aim of the AI domain is to control which metrics and attributes are approved for AI usage, as well as the business context provided to the AI.
You can create multiple AI domains within the same workspace, serving different categories of business questions.
Setting up a new business domain
Setting up a new business domain typically includes:
- Set up Semantic Layer Definitions: Map data sources to a semantic layer; create relevant metrics or entities; add them to an AI domain.
- Create AI Metadata: Descriptions, sample values and other hints.
- Test Business Questions: Curate a set of business questions to evaluate the AI Analyst.
You can use the AI modeling helper at every step:
- Use it to suggest required metrics or attributes given a question.
- Ask it about places where additional metadata can be useful, or where metadata is ambiguous.
- Review how it judges responses to tested business questions.
Maintaining an active business domain
The more frequently a domain is used, the more critical it becomes to actively monitor its quality and refine the semantic layer as necessary.
Typical activities on an active domain include:
- Monitor AI Responses to improve the semantic model when there are gaps.
- Add New Business Questions to the evaluation set to cover new business demands.
- Keep testing Known Business Questions as the model and data change to avoid regressions.
Metadata
Metadata helps AI understand the context of your business. You can add AI-specific metadata to various semantic objects; it gives the AI the additional context it needs to answer user questions accurately.
The most important metadata for an object is its name.
Always prefer descriptive names for objects (entities, attributes, metrics) over cryptic names that rely on explanations of what they do (for example, a name like annual_recurring_revenue needs no explanation). AI metadata should supply the context for a semantic object that is not obvious from its name.
Domain
Domain metadata is the place to teach AI about the context of your business domain as a whole. A typical explanation might include who you are, what you do, and limitations of your data to ground it in your business domain.
For example, “We are ACME corp., an export-import provider in the Northwest. This includes all our sales data for the last seven years for financial analysis. NOTE: Data is up to previous day only.”
Domain metadata can be changed in Honeydew Studio, or within the domain YAML structure.
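As a rough illustration, domain-level context could be expressed as follows. The `ai` block and its key names below are assumptions for the sketch, not the confirmed Honeydew schema:

```yaml
# Hypothetical domain YAML - the `ai` block is illustrative, not the exact Honeydew schema.
type: domain
name: sales_analysis
ai:
  description: >
    We are ACME corp., an export-import provider in the Northwest.
    This includes all our sales data for the last seven years for
    financial analysis. NOTE: Data is up to previous day only.
```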
Attributes and Metrics
You can add AI-specific metadata to attributes and metrics that specifies the AI context for these fields. When an AI-specific description is not provided, the general field description is used as a fallback.
Typical hints can include:
- Synonyms: “also known as GMV”
- Data structure: “US state, abbreviated to two letters as NY”
- Usage rules: “Use case-insensitive ILIKE to filter”
- Other hints for when to prefer a field: “Key indicator for performance”
Field metadata can be changed in Honeydew Studio, or within the metric or attribute YAML structure.
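For instance, a metric carrying the kinds of hints listed above might look roughly like the following; the `ai` block and its keys are assumed for illustration:

```yaml
# Hypothetical metric YAML - the `ai` block is illustrative, not the exact Honeydew schema.
type: metric
name: gross_merchandise_value
description: Total value of merchandise sold before fees and refunds.
ai:
  description: >
    Also known as GMV. Key indicator for sales performance.
    Use case-insensitive ILIKE when filtering related text fields.
```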
Entities
You can add AI-specific metadata to entities that specifies the AI context for these entities as a whole. When an AI-specific description is not provided, the general entity description is used as a fallback.
For example, “Only includes customers with an active subscription” for a customers entity.
Entity metadata can be changed in Honeydew Studio, or within the entity YAML structure.
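In the same spirit, a minimal sketch for an entity (again, the `ai` key is an assumption):

```yaml
# Hypothetical entity YAML - the `ai` block is illustrative, not the exact Honeydew schema.
type: entity
name: customers
ai:
  description: Only includes customers with an active subscription.
```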
Evaluation
An evaluation set is a group of known questions that can be used to evaluate the performance of AI.
Evaluation sets can be of any size or structure, and can be scheduled to run periodically or after schema changes.
Running evaluations
You can use the Honeydew Native Application for Snowflake to automate the AI evaluation workflow.
- Create a table containing the business questions you want to answer using AI. The table should have a column for the question text (see the SQL sketch after this list).
- Run the evaluation flow using the Honeydew Native Application.
- Review the results of the evaluation, including the judge verdict and explanation. The `judge_verdict` column indicates whether the AI response is correct, and the `judge_explanation` column provides an explanation for the verdict.
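As a sketch, the questions table might be created in Snowflake as follows. The table and column names (`AI_EVAL_QUESTIONS`, `QUESTION_TEXT`) are illustrative rather than names required by Honeydew, and the sample questions are placeholders:

```sql
-- Hypothetical questions table for the evaluation flow;
-- table and column names are illustrative, not mandated by Honeydew.
CREATE OR REPLACE TABLE AI_EVAL_QUESTIONS (
    QUESTION_TEXT VARCHAR  -- the business question posed to the AI Analyst
);

INSERT INTO AI_EVAL_QUESTIONS (QUESTION_TEXT) VALUES
    ('What was total GMV by US state last quarter?'),
    ('How many customers with an active subscription churned last month?');
```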
The Honeydew AI Judge is an agent tasked with evaluating answers given by AI to business users for completeness, relevance, and overall correctness.