ARDI™ RESEARCH CENTER
Research Methodologies & Standards
How GOSH AI conducts, documents, and validates research on AI recommendation behavior. Every finding is traceable to specific prompts, models, and dates.
Built on observed AI behavior — real prompts, real models, real outputs.
FOUNDATION
Purpose & Approach
Why This Research Exists
The ARDI™ Research Center documents how AI systems interpret, retrieve, and present information about brands in real-world conditions. This research exists to support applied decision-making — not academic theory.
It helps organizations understand how AI models actually behave, reduce visibility risk, close discovery gaps, and make informed investments in their AI authority.
The approach is observational and empirical. We design prompts that reflect real user queries, collect AI responses, and compare outputs against real-world facts. The focus is on identifying patterns, not isolated anomalies.
This Page Covers
Systems observed & models tested
Prompting & testing controls
Evaluation & interpretation criteria
Scope & limitations of findings
Independence & editorial standards
How to apply this research
Reproducibility: Wherever feasible, tests are documented in enough detail that others can rerun the same prompts and compare outputs, with the caveat that model versions change between testing cycles.
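As an illustration of what a reproducible test record can capture, here is a minimal sketch of an observation schema in Python. The structure and field names (TestRecord, model_version, follow_ups) are our own illustrative assumptions, not a published ARDI™ format.

```python
# A minimal sketch of a reproducible test record. Field names are
# illustrative assumptions, not a published ARDI(TM) schema.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class TestRecord:
    prompt: str                 # the plain-language prompt, verbatim
    system: str                 # e.g. "ChatGPT", "Gemini"
    model_version: str          # noted when relevant
    observed_on: date           # findings are time-bound to this date
    output: str                 # the full response, verbatim
    follow_ups: list[str] = field(default_factory=list)  # preserved prompt chain

record = TestRecord(
    prompt="What are the best project management tools for small teams?",
    system="ChatGPT",
    model_version="example-version",
    observed_on=date(2025, 1, 15),
    output="(full response text)",
)
print(json.dumps(asdict(record), default=str, indent=2))
```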
SYSTEMS OBSERVED
AI Models & Platforms Tested
Research includes observations from one or more of the following systems, depending on availability and relevance at the time of testing. Model versions are noted when relevant.
ChatGPT (OpenAI)
Gemini (Google)
Claude (Anthropic)
Perplexity (AI answer engine)
Grok (xAI)
AI systems evolve rapidly. All findings are time-bound to the date of observation. Model versions, training data updates, and retrieval changes can alter behavior between testing cycles.
TESTING CONTROLS
How We Control for Consistency
To reduce noise and improve reliability, testing follows structured controls at every stage.
Plain Language Prompts
Prompts are written to reflect how real users ask questions — no jargon, no engineered phrasing, no hidden system instructions.
Intentional Location Context
Geographic context is included or excluded deliberately when relevant. Location-sensitive and location-neutral prompts are tested separately.
Documented Follow-Ups
Follow-up prompts are recorded when they materially affect outcomes. The full prompt chain is preserved as part of the research record.
No Hidden Instructions
No proprietary system prompts or model-specific instructions are used. Prompts are treated as controlled variables — transparent and reproducible.
Alternative Formulations
When alternative prompt phrasing influences outcomes, both versions are documented. The goal is to capture real variance, not cherry-pick favorable outputs.
Repeated Testing
Repeated testing is conducted where feasible to confirm consistency. Single outputs are flagged as preliminary; patterns require multiple observations.
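To make the repeated-testing control above concrete, the sketch below runs the same prompt several times and preserves every output, flagging single observations as preliminary. The query_model function is a hypothetical stand-in for whatever client is used against each system; it is not a real API.

```python
# Sketch of the repeated-testing control: run the same prompt several
# times, keep every output, and flag single observations as preliminary.
# query_model() is a hypothetical stand-in, not a real client.
from datetime import datetime, timezone

def query_model(system: str, prompt: str) -> str:
    # Hypothetical: replace with an actual call to the system under test.
    return "(model output)"

def run_repeated_test(system: str, prompt: str, runs: int = 5) -> dict:
    observations = [
        {
            "run": i + 1,
            "observed_at": datetime.now(timezone.utc).isoformat(),
            "output": query_model(system, prompt),
        }
        for i in range(runs)
    ]
    return {
        "system": system,
        "prompt": prompt,
        "observations": observations,
        "status": "preliminary" if runs < 2 else "pattern-eligible",
    }

result = run_repeated_test("ChatGPT", "Best CRM for a 10-person sales team?")
print(result["status"], "with", len(result["observations"]), "observations")
```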
OBSERVATION MODEL
Two Distinct Observation Layers
ARDI™ research captures AI behavior through two separate observation paths. Each produces different data because each reflects a fundamentally different mechanism inside the model.
The Search Path
Measures what AI retrieves in real time — the brands, sources, and citations the model actively searches for and surfaces when responding to a prompt. This captures the retrieval layer: what the model goes out and finds.
Real-time retrieval behavior
Source citations & link-outs
What the model finds when it looks
The Learned Path
Measures what AI already knows — the brand associations, authority signals, and category understanding embedded in the model’s own training. This captures the knowledge layer: what the model believes without searching.
Embedded training knowledge
Brand associations & authority signals
What the model knows without looking
Why both matter: The same brand can appear through the Search Path but be absent from the Learned Path — or vice versa. Each path produces different brand exposure, different competitor dynamics, and different signals. ARDI™ captures both because a complete picture of AI recommendation behavior requires observing both mechanisms independently.
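One way to operationalize the two layers is to query the same system twice, once with real-time retrieval enabled and once relying only on embedded knowledge, then record each response separately. The sketch below assumes a hypothetical allow_search flag; real systems expose retrieval controls differently, and some not at all.

```python
# Sketch of capturing the two observation layers separately. The
# allow_search flag is hypothetical; real systems expose retrieval
# controls differently, or not at all.
def query_model(system: str, prompt: str, allow_search: bool) -> dict:
    # Hypothetical stand-in for a real client call.
    citations = ["(source url)"] if allow_search else []
    return {"text": "(model output)", "citations": citations}

def observe_both_paths(system: str, prompt: str) -> dict:
    search_path = query_model(system, prompt, allow_search=True)    # retrieval layer
    learned_path = query_model(system, prompt, allow_search=False)  # knowledge layer
    return {
        "prompt": prompt,
        "search_path": search_path,    # what the model finds when it looks
        "learned_path": learned_path,  # what the model knows without looking
    }

obs = observe_both_paths("Perplexity", "Which brands make the best trail running shoes?")
print("Search Path citations:", obs["search_path"]["citations"])
```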
RIGOR
Interpretation Standards & Scope
How We Interpret Findings
Repeated or consistent behaviors are prioritized over single outputs (a simple consistency check is sketched after this list)
Discrepancies between AI answers and real-world facts are noted explicitly
Ambiguity in AI responses is treated as a finding, not an error
Absence of visibility is as meaningful as presence
Conclusions are based on observable evidence, not inferred intent
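As a simple illustration of privileging patterns over single outputs, the sketch below scores how often a brand is mentioned across repeated outputs. The substring matching and the 60% threshold are illustrative assumptions, not a fixed ARDI™ metric.

```python
# Illustrative consistency check: how often does a brand appear across
# repeated outputs? Matching and thresholds are assumptions, not a fixed
# ARDI(TM) metric. Absence (rate == 0.0) is itself a finding.
def mention_rate(brand: str, outputs: list[str]) -> float:
    hits = sum(1 for text in outputs if brand.lower() in text.lower())
    return hits / len(outputs) if outputs else 0.0

outputs = [
    "For small teams, consider Asana, Trello, or Notion.",
    "Popular picks include Trello and Monday.com.",
    "Asana and Trello are frequently recommended.",
]

for brand in ("Trello", "Asana", "ClickUp"):
    rate = mention_rate(brand, outputs)
    label = "consistent" if rate >= 0.6 else "inconsistent" if rate > 0 else "absent"
    print(f"{brand}: mentioned in {rate:.0%} of runs ({label})")
```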
Known Limitations
AI systems may produce different responses at different times
Outputs may vary by user context, location, or session history
Not all systems disclose sourcing or citation logic
Observations reflect behavior at a specific point in time
Findings are directional and descriptive, not deterministic
EDITORIAL STANDARDS
Independence, Updates & Revisions
Research Independence
The ARDI™ Research Center operates independently within GOSH AI, documenting real AI behavior to support practice-oriented decisions.
Research is not commissioned by third parties
Not optimized for rankings or promotional outcomes
Not written to promote specific tools or platforms
The intent is to document how AI systems behave so organizations can make informed decisions
Living Documentation
AI behavior changes over time. Research published in the ARDI™ Research Center is treated as living documentation:
Updated to reflect new model behavior
Annotated with temporal context
Superseded by newer observations when warranted
Timestamped to preserve historical accuracy
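As a sketch of what living documentation can mean in practice, the record below is timestamped and carries an optional pointer to the observation that supersedes it. The structure is illustrative, not a published ARDI™ format.

```python
# Sketch of a living-documentation record: timestamped, annotated with
# temporal context, and optionally superseded by a newer observation.
# The structure is illustrative, not a published ARDI(TM) format.
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    finding_id: str
    summary: str
    observed_on: date                  # preserves historical accuracy
    superseded_by: str | None = None   # id of the newer observation, if any

original = Finding("F-001", "Brand absent from Learned Path", date(2024, 9, 1))
update = Finding("F-002", "Brand now surfaces via Learned Path", date(2025, 2, 1))
original.superseded_by = update.finding_id  # annotate rather than overwrite

for f in (original, update):
    status = f"superseded by {f.superseded_by}" if f.superseded_by else "current"
    print(f"{f.finding_id} ({f.observed_on}): {f.summary} [{status}]")
```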
APPLICATION
How to Use This Research
These insights can inform strategy and operational priorities, or be combined with a full ARDI™ engagement for execution.
01
Understand AI Discovery Risk
Identify where your brand is visible to AI models — and where it’s completely absent from the conversation.
02
Close Discovery Gaps
Find the delta between your SEO performance and your AI representation. Ranking on Google doesn’t mean AI recommends you.
03
Inform Content & Entity Strategy
Use findings to guide content authority, entity structuring, and citation amplification — the core disciplines within ARDI™.
04
Ask Better Questions
Develop sharper internal understanding of how AI interprets your brand, your category, and your competitors.
A note on application: This research should not be treated as a checklist, guarantee, or substitute for human judgment. The practical application — including prioritization, implementation, and optimization — is delivered through GOSH AI’s advisory and engagement services.
Questions About Our Methodology?
Reach out to the ARDI™ Research Center team for questions about methodology, interpretation, or how this research applies to your brand.
Contact the Research Team
or email directly at ardi@mygosh.ai
We don’t chase rankings.
We influence AI recommendations.