Dakota has made its documentation AI-friendly through multiple mechanisms designed to help Large Language Models understand and integrate with the platform.

LLM Feed Files

To help LLMs stay current on how Dakota works, we expose two continuously updated files for ingestion:
  • llms.txt - A concise, high-signal list of top-level docs pages, great for smaller models or quick context building.
  • llms-full.txt - A more exhaustive listing that includes nearly all pages, ideal for full-context indexing.
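Both files follow the llms.txt convention: a Markdown index of pages expressed as link lines. As a minimal sketch of how an ingestion pipeline might read one of these feeds, the parser below extracts (title, URL, description) tuples from link lines; the sample content and URLs are illustrative, not Dakota's actual index.

```python
import re

# Illustrative llms.txt snippet; the real file is served from the docs site.
SAMPLE = """\
# Dakota Docs

- [Webhooks](https://docs.example.com/webhooks): Event delivery
- [API Reference](https://docs.example.com/api): Endpoint details
"""

def parse_llms_txt(text):
    """Extract (title, url, description) tuples from llms.txt link lines."""
    pattern = re.compile(r"-\s*\[(.+?)\]\((\S+?)\)(?::\s*(.*))?")
    return [m.groups() for line in text.splitlines()
            if (m := pattern.match(line.strip()))]

for title, url, _desc in parse_llms_txt(SAMPLE):
    print(title, url)
```

Non-link lines (the title heading, blank lines) are skipped, so the same function works on either feed file regardless of length.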
You can ingest these URLs into your custom GPTs or other LLM apps on a regular schedule so that Dakota-specific questions are grounded in accurate, current technical detail. The documentation also provides the contextual features described below.

Export as Markdown

Export any Dakota documentation page as Markdown for:
  • Custom GPT training data
  • Internal knowledge bases
  • Team documentation
  • Offline reference

AI Chat Integration

Launch pre-loaded chat sessions with Claude or ChatGPT for specific documentation pages. This enables:
  • Instant troubleshooting
  • Code generation with proper context
  • Deeper topic exploration
  • Interactive learning
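The docs pages expose these pre-loaded sessions as buttons, but a similar link can be constructed by hand. The sketch below builds a Claude deep link with a prefilled question; the `?q=` query parameter is an assumption about claude.ai's URL scheme, not something Dakota documents, so treat it as illustrative.

```python
from urllib.parse import quote

def chat_link(page_url: str, question: str) -> str:
    """Build a link that opens a chat with a prefilled, page-specific prompt.

    The ?q= parameter is an assumed claude.ai convention; Dakota's own
    "Ask Claude" buttons handle this wiring for you.
    """
    prompt = f"Using {page_url} as context: {question}"
    return "https://claude.ai/new?q=" + quote(prompt)
```

Percent-encoding the prompt keeps spaces and punctuation intact in the URL.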

Use Cases

  • Troubleshooting: Open a docs page about webhooks, click “Ask Claude”, and get immediate help with your specific webhook implementation issue.
  • Code Generation: Load the API reference page, start a chat, and generate production-ready code that follows Dakota’s best practices.
  • Learning: Explore complex topics like transaction flows by chatting with an AI that has full context of Dakota’s documentation.

Best Practices

Regular Ingestion

For custom GPTs or internal tools:
  • Fetch llms.txt or llms-full.txt regularly (daily or weekly)
  • Update your knowledge base with the latest documentation
  • Ensure accurate, current technical information

Context Management

  • Use llms.txt for general queries and overviews
  • Use llms-full.txt when detailed implementation guidance is needed
  • Combine with live API testing for verification
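One way to encode the choice between the two feeds is a small helper keyed on the model's context window and the depth of guidance needed. The threshold below is an illustrative assumption, not Dakota guidance; tune it to the models you actually use.

```python
def choose_feed(context_window_tokens: int, needs_implementation_detail: bool) -> str:
    """Pick llms.txt for quick/general context, llms-full.txt for depth.

    The 100k-token threshold is an assumed cutoff for "can hold the full
    index comfortably"; adjust for your model.
    """
    if needs_implementation_detail and context_window_tokens >= 100_000:
        return "llms-full.txt"
    return "llms.txt"
```

Smaller models or overview questions fall through to the concise file, matching the guidance above.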

Security Considerations

  • Never share API keys with AI assistants
  • Use sandbox credentials when generating code examples
  • Review AI-generated code before production deployment
  • Verify security recommendations against official docs
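A lightweight guard for the first rule is to scrub likely credentials from any text before pasting it into an AI chat. The patterns below are illustrative examples of common key shapes, not an exhaustive filter; adapt them to your own credential formats.

```python
import re

# Illustrative patterns only; extend for your credential formats.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{16,}"),  # common "sk-..." key shape (assumption)
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Mask likely credentials before sharing text with an AI assistant."""
    for pat in SECRET_PATTERNS:
        text = pat.sub(placeholder, text)
    return text
```

Redaction is a backstop, not a substitute for using sandbox credentials in the first place.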

Getting Started

  1. Choose Your Integration Method
    • Quick start: Use llms.txt with your AI assistant
    • Full context: Ingest llms-full.txt into custom GPTs
  2. Test Your Setup
    • Ask basic questions about Dakota concepts
    • Request code examples for common operations
    • Verify responses against official documentation
  3. Build with Confidence
    • Generate boilerplate integration code
    • Get instant answers to API questions
    • Troubleshoot issues with AI assistance
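The steps above can be wired together by wrapping the fetched feed contents in a system prompt for your assistant. The framing text below is illustrative, not Dakota's wording, and the truncation limit is an assumed safeguard for small context windows.

```python
def build_system_prompt(feed_text: str, max_chars: int = 8_000) -> str:
    """Wrap (possibly truncated) llms.txt content as grounding context.

    max_chars is an assumed budget for smaller models; raise it when
    ingesting llms-full.txt into a large-context model.
    """
    context = feed_text[:max_chars]
    return (
        "You are assisting with the Dakota platform. "
        "Ground your answers in the documentation index below; "
        "if a question is not covered, say so.\n\n" + context
    )
```

Feed the result in as the system message of whatever chat API or custom GPT you use, then test with basic Dakota questions as step 2 suggests.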

Resources

Support

Need help integrating Dakota documentation with your AI tools? Contact our support team at dakota.io/talk-to-sales for assistance.