FAQ
Frequently asked questions about the KapitaalBot engine, observability and tiers. Use the search box above to query the FAQ, the documentation and the assistant (multiple languages supported); strategy source code is never included in answers.
We match the on-page FAQ first, then technical docs, then the extended knowledge backend. If your question closely matches one you already asked this session, we reuse that answer. If we still cannot answer, we say so honestly — no guessing.
Follow-up? Use the same search box — earlier questions in this session are recognised.
All frequently asked questions (by category)
Overview & scope
What is KapitaalBot?
KapitaalBot is an autonomous crypto trading system running on 600+ spot markets. The engine is multi-regime (e.g. range, trend, high-volatility, low-liquidity) and multi-strategy (e.g. liquidity-, momentum- and volume-oriented strategies). This site shows only observability data about that runtime; it exposes no live orders or real-time signals.
Why delayed data on Tier 1?
Tier 1 is intended as an observability layer: it provides transparency and explanation without enabling real-time signal following or reverse engineering. You see run status, regime and strategy overviews, symbol counts and aggregated metrics, but no live order feed.
What is the difference between Tier 1, Tier 2 and Tier 3?
Tier 1 is public: status strip, regimes, strategies, trade counts, market/pair summary and demo trades from public_* snapshots. Tier 2 is on request and adds extra observability modules, such as execution and latency dashboards, extended trading statistics and shadow trades. Tier 3 is internal (admin observability) and includes full lifecycle and debug telemetry.
Does KapitaalBot trade autonomously with real capital?
Yes, the live runtime trades with real capital within fixed exposure and risk limits. What you see here are read-model snapshots and aggregated observability only; no live positions, exact balances or strategy parameters are shown.
Is this site investment advice or a signal service?
No. The observability site explains how the engine is built and behaves but does not make recommendations, publish entry/exit signals or offer portfolio advice. The information is for technical and operational transparency.
Can I connect KapitaalBot to my own exchange account?
Not today. The live engine runs on a limited set of internally managed accounts. Future research collaborations are possible, but only under strict agreements on risk, observability and data access.
How often is information on this site refreshed?
Snapshots are periodically exported from the live runtime. Exact frequencies differ per snapshot type (status, regimes, trading, demo trades), but Tier 1 always shows delayed and aggregated data, not the raw real-time feed.
Which exchanges and pairs are supported?
The current live variant runs on selected spot markets at Kraken. Within that universe, only a subset of all possible pairs is actively monitored, and an even smaller subset is used for actual trading, depending on regime, liquidity and risk filters.
Is KapitaalBot intended as a black-box product for end clients?
No. KapitaalBot is primarily an internal research and trading engine with emphasis on explainability and auditability. This site aims to show the system, risk structure and observability, not to market a plug-and-play retail product.
Why is so much documentation publicly visible?
Because architecture, observability and risk design can be judged more fairly when they are transparent. At the same time, strategy implementations, exact thresholds and venue-specific details are intentionally kept abstract.
How does this observability site differ from a classic performance dashboard?
A classic performance dashboard mostly shows P&L curves, Sharpe ratios and backtests. This site focuses on runtime behaviour: dataflows, regimes, safety guards and validation paths. Performance views may be added later as a separate layer but are not the core goal here.
How does KapitaalBot compare to typical retail trading bots?
Many retail bots are parameter-driven scripts around one or two indicators. KapitaalBot is a state-first runtime with multiple data feeds, regimes, strategy families, safety layers and a full observability and validation chain. The emphasis is on control and explainability, not on maximising trade frequency.
Architecture & data flow
How does data flow from exchange to KapitaalBot?
Ingest connects to multiple public WebSockets (ticker, trades, L2, L3) and writes raw data into an ingest DB. From that, a state table (run_symbol_state) is built per run. The route engine and execution read only from that state (state-first); observability snapshots read aggregated information from the same database.
What do you mean by state-first architecture?
Instead of deciding directly from raw market data, a compact state per symbol is built and updated first. Evaluation, regime detection, strategy choice and execution use only that state. That avoids multiple sources of truth, makes safety and freshness easier to control and simplifies auditing.
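As a minimal sketch of this state-first pattern: raw events are folded into a compact per-symbol state, and downstream logic reads only that state. The names `SymbolState` and `update_state` below are illustrative assumptions, not the engine's API (the real state lives in the `run_symbol_state` DB table, not a Python dict).

```python
from dataclasses import dataclass
import time

@dataclass
class SymbolState:
    """Hypothetical compact per-symbol state row."""
    symbol: str
    last_price: float
    spread: float
    updated_at: float  # unix seconds; used later for freshness checks

def update_state(state: dict[str, SymbolState], event: dict) -> None:
    """Fold one raw ticker event into the compact state table."""
    state[event["symbol"]] = SymbolState(
        symbol=event["symbol"],
        last_price=event["price"],
        spread=event["ask"] - event["bid"],
        updated_at=event["ts"],
    )

state: dict[str, SymbolState] = {}
update_state(state, {"symbol": "BTC/EUR", "price": 50_000.0,
                     "bid": 49_999.0, "ask": 50_001.0, "ts": time.time()})
# A route engine would read state["BTC/EUR"], never the raw event stream.
```

The point of the pattern is that every decision input is a row that can be reconstructed and audited, rather than an ephemeral in-memory view of raw feeds.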
What role does the database play in the architecture?
The database is the core of the engine: ingest writes raw events, a state builder maintains compact tables per run, the route engine reads exclusively from those state tables and observability reads aggregated views. Decisions are never taken from ad-hoc in-memory caches whose contents cannot be reconstructed from the DB.
How do you handle multiple WebSocket connections and backpressure?
The implementation uses a small number of long-lived WebSocket connections that multiplex on internal request IDs. Backpressure is handled with bounded queues, prioritisation of feed types and temporary throttling of non-critical channels when latency or load increases.
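The load-shedding side of such a backpressure scheme can be sketched with a bounded queue: when a non-critical feed's queue is full, the oldest message is shed, while a critical feed instead signals the producer to throttle. This is an illustrative sketch under those assumptions, not the engine's actual implementation.

```python
import collections

class BoundedFeedQueue:
    """Toy bounded queue: sheds oldest messages for non-critical feeds."""
    def __init__(self, maxlen: int, critical: bool):
        self.buf = collections.deque()
        self.maxlen = maxlen
        self.critical = critical
        self.dropped = 0

    def push(self, msg) -> bool:
        if len(self.buf) >= self.maxlen:
            if self.critical:
                return False      # tell the producer to back off / throttle
            self.buf.popleft()    # shed oldest non-critical message
            self.dropped += 1
        self.buf.append(msg)
        return True

ticker = BoundedFeedQueue(maxlen=2, critical=False)
for i in range(4):
    ticker.push(i)
# The two oldest ticks were shed; the buffer now holds the newest two.
```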
How is the execution engine separated from the rest of the system?
Execution only sees a restricted, controlled read model (for example selected routes with explicit expiries and risk limits). It has no direct coupling to ingest or raw L2/L3 feeds. This allows the rest of the system to restart or be upgraded without execution running on ghost state.
How does observability influence the architecture design?
Observability is a first-class concern, not an afterthought. Important steps have explicit log markers, metrics and snapshots. Many tables are designed so they can be used both for live decisions and later validation or audit.
How do you deal with schema changes and migrations?
Schema changes are made via versioned migrations and tied to specific engine releases. There are paths for online migrations (e.g. adding columns) and heavier changes during planned maintenance windows.
Does the architecture support multiple runtimes or only one live instance?
The architecture supports live, observe and backfill runs. They share core components but use different configuration profiles and schema names, so experiments and analyses do not contaminate the live state.
How do you handle failures in external dependencies such as exchange outages?
Outages are treated as events: ingest detects timeouts and incomplete feeds, marks symbols or venues as degraded and ensures state and route engine see that explicitly. In extreme cases the engine can move into exit-only or hard-blocked modes.
How do observability snapshots relate to raw logs?
Snapshots are compact, coherent views built from DB and logs. Raw logs remain the deepest source for investigations, but for daily overview and tiered access, snapshots are more efficient and easier to present.
Regimes, strategies & selection
What regimes does KapitaalBot use in broad terms?
The engine classifies markets into a few main regimes, such as RANGE, TREND, HIGH_VOLATILITY, LOW_LIQUIDITY and CHAOS. Each regime describes the character of the market (e.g. mean-reverting vs trending, calm vs highly volatile).
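As a toy illustration of such a classification, a function could map a handful of metrics onto the regime labels above. The metrics and thresholds here are invented for the example; the real engine combines many more signals than this.

```python
def classify_regime(volatility: float, trend_strength: float,
                    liquidity_score: float) -> str:
    """Map toy metrics onto regime labels. Thresholds are illustrative."""
    if liquidity_score < 0.2:
        return "LOW_LIQUIDITY"
    if volatility > 1.5:
        # very volatile AND directionless reads as chaotic
        return "CHAOS" if trend_strength < 0.1 else "HIGH_VOLATILITY"
    if trend_strength > 0.6:
        return "TREND"
    return "RANGE"

# A calm, directionless, liquid market classifies as RANGE:
regime = classify_regime(volatility=0.3, trend_strength=0.1,
                         liquidity_score=0.9)
```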
What does multi-strategy mean in this system?
Per regime, different strategy families are available (e.g. liquidity-, momentum- and volume-oriented approaches). For a symbol, the engine chooses a suitable strategy based on market and regime characteristics, without exposing that choice as a fixed rule set.
How is a small subset of hundreds of markets chosen to trade?
For each symbol, characteristics are measured (liquidity, spreads, volatility, stability of price movements, L3 quality, etc.). The route engine produces one or a few candidate routes per symbol and ranks them by expected net benefit vs risk and time. Per evaluation cycle, at most a small number of symbols are actually selected.
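The ranking step can be sketched as scoring plus a top-k cut. The scoring formula and field names below are assumptions for illustration only, not the engine's actual logic.

```python
import heapq

def score(route: dict) -> float:
    # expected edge discounted by risk and expected holding time (illustrative)
    return route["expected_edge"] / (route["risk"] * route["expected_secs"])

def select_routes(candidates: list[dict], k: int) -> list[dict]:
    """Keep only the k best-scoring candidate routes per evaluation cycle."""
    return heapq.nlargest(k, candidates, key=score)

routes = [
    {"symbol": "A", "expected_edge": 4.0, "risk": 1.0, "expected_secs": 2.0},
    {"symbol": "B", "expected_edge": 9.0, "risk": 3.0, "expected_secs": 1.0},
    {"symbol": "C", "expected_edge": 1.0, "risk": 0.5, "expected_secs": 4.0},
]
best = select_routes(routes, k=2)  # B scores 3.0, A scores 2.0, C scores 0.5
```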
Does KapitaalBot rely on machine learning or fixed rules?
The system is modular: some components are rule-based with explicit thresholds, others use data-driven scoring or filters. Where possible, simple and explainable logic is preferred; more complex models are only used when they add clear value and can be monitored properly.
How do you avoid overfitting to a specific market period?
Research runs use out-of-sample periods, forward walks and scenario tests. Strategy components are only promoted to the live engine when they show stable behaviour across multiple regimes and conditions. Observability and validation help detect regressions quickly.
Can regimes and strategies be disabled dynamically?
Yes. Regimes, strategy families and even whole route types have feature flags and safety guards. In case of doubt a component can be moved to observation or shadow mode where it still produces signals but no longer triggers live orders.
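A minimal sketch of such a flag, assuming three modes and hypothetical strategy names; real flags would carry more metadata. In shadow mode the signal is still recorded for comparison but never reaches live order placement.

```python
from enum import Enum

class Mode(Enum):
    LIVE = "live"
    SHADOW = "shadow"
    OFF = "off"

# Hypothetical flag table; strategy names are invented for the example.
FLAGS = {"momentum": Mode.LIVE, "liquidity": Mode.SHADOW}

def handle_signal(strategy: str, signal: dict, place_order, record_shadow):
    mode = FLAGS.get(strategy, Mode.OFF)   # unknown strategies default to OFF
    if mode is Mode.LIVE:
        place_order(signal)
    elif mode is Mode.SHADOW:
        record_shadow(signal)              # logged for comparison, no live order

orders, shadows = [], []
handle_signal("momentum", {"symbol": "X"}, orders.append, shadows.append)
handle_signal("liquidity", {"symbol": "Y"}, orders.append, shadows.append)
```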
How do you treat symbols with poor microstructure?
Symbols with structurally low liquidity, extremely wide spreads or unstable order books end up in stricter suitability filters. In many cases they are fully excluded or only allowed in non-aggressive regimes.
What if multiple strategies like the same symbol at once?
The route engine aggregates signals into a small set of candidate routes per symbol. These are scored on expected benefit, risk, execution complexity and resource usage. Ultimately only a limited number of routes are accepted per evaluation cycle.
Are positions actively managed or mostly one-off trades?
Most strategies have explicit lifecycle logic: entries, management and exits are seen as one coherent process. Some routes are short-lived microstructure trades, others are slower and more focused on regime shifts.
How is a regime different from a single market indicator?
A regime is a compact classification of multiple signals at once (e.g. volatility, trend strength, liquidity, microstructure noise). It is not a single indicator but a summary of the environment within which strategies make decisions.
Risk & safety
How does KapitaalBot limit risk per trade and per account?
The live engine uses fixed limits for e.g. maximum stake per trade and total available capital. In addition there are safety modes (normal, exit-only, hard-blocked) that temporarily or permanently exclude symbols when data or market conditions are not reliable enough.
How is stale data prevented?
State is refreshed before each evaluation. In addition, per route type there are maximum data ages; when exceeded, a route is blocked and the engine logs this explicitly. A generation gate ensures execution only runs on state that is visible and recent in the decision DB.
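A freshness gate of this kind can be sketched as a per-route-type age check; the route types and age limits below are invented for illustration, not the engine's configured values.

```python
import time

# Illustrative maximum state age per route type, in seconds.
MAX_AGE_SECS = {"microstructure": 2.0, "trend": 30.0}

def route_allowed(route_type: str, state_updated_at: float, now=None) -> bool:
    """Block a route when its backing state is older than the type's limit."""
    now = time.time() if now is None else now
    return (now - state_updated_at) <= MAX_AGE_SECS[route_type]

# A 10-second-old state passes the trend gate but fails the microstructure gate:
trend_ok = route_allowed("trend", state_updated_at=100.0, now=110.0)
micro_ok = route_allowed("microstructure", state_updated_at=100.0, now=110.0)
```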
How do you handle black swan events or extreme volatility?
Besides per-trade and per-symbol limits there are global guardrails. In extreme events the engine can move to stricter modes, reduce exposure or stop new entries. The priority then is to reduce risk in a controlled way instead of chasing new opportunities.
What happens if the connection to the exchange is lost?
When ingest or execution lose connectivity, the engine detects this via heartbeats and freshness metrics. New entries are stopped and, where appropriate, existing positions are unwound via failsafe logic once a reliable window returns.
Does the engine use leverage or margin trading?
No, the current live variant runs spot only without leverage. That reduces the risk of liquidation events and makes risk and margin management more straightforward.
How do you avoid concentrated exposure in a single asset or theme?
There are caps per symbol, per asset cluster and in total. Correlations between assets are taken into account in exposure profiles to avoid implicit concentration in one theme.
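The layered caps can be sketched as a pre-entry check across three levels. The cap values and cluster mapping below are invented; the real engine's limits and correlation handling are richer than a static table.

```python
# Illustrative caps (in quote currency) and an assumed asset-cluster mapping.
CAPS = {"symbol": 1_000.0, "cluster": 2_500.0, "total": 5_000.0}
CLUSTER_OF = {"BTC/EUR": "majors", "ETH/EUR": "majors", "DOGE/EUR": "meme"}

def entry_allowed(positions: dict[str, float], symbol: str, stake: float) -> bool:
    """positions maps symbol -> current exposure; all three caps must hold."""
    new_symbol = positions.get(symbol, 0.0) + stake
    cluster = CLUSTER_OF[symbol]
    new_cluster = sum(v for s, v in positions.items()
                      if CLUSTER_OF[s] == cluster) + stake
    new_total = sum(positions.values()) + stake
    return (new_symbol <= CAPS["symbol"]
            and new_cluster <= CAPS["cluster"]
            and new_total <= CAPS["total"])

positions = {"BTC/EUR": 900.0, "ETH/EUR": 1_500.0}
# 200 more BTC exposure would breach the per-symbol cap of 1_000:
btc_ok = entry_allowed(positions, "BTC/EUR", 200.0)
doge_ok = entry_allowed(positions, "DOGE/EUR", 500.0)
```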
What does monitoring look like in practice?
In addition to this observability site there are internal dashboards for latency, error rates, safety events and exposure. Alerts trigger when thresholds or specific combinations of events are hit, so manual intervention can happen quickly.
How do you prevent software bugs from causing large losses?
Critical paths are coded defensively with sanity checks, limits and invariants that hard-block when violated. There are also test runs, observe modes and validation scripts to exercise new changes in a controlled environment first.
Is there a kill switch or emergency stop?
Yes. On both system level and exchange-account level all new orders can be blocked and existing positions can be unwound or flattened in a controlled way. These paths are tested regularly as part of the runbook.
How do you handle operational risk (hardware, network, human error)?
The system runs in a controlled server environment with monitoring, logging and backups. Procedures for deploys, config changes and incident response are captured in the runbook, and changes go through version control rather than ad-hoc edits on the live system.
Observability & tiers
Which snapshots does the observability website use?
Tier 1 reads public_status_snapshot.json, public_regime_snapshot.json, public_strategy_snapshot.json, public_market_snapshot.json, public_trading_snapshot.json and public_demo_trades.json. In later iterations tier2_* snapshots and admin_observability_snapshot will be added; they contain more detail but are not public.
What do I see extra on Tier 2?
Tier 2 offers extra dashboards around execution, latency, safety and shadow trades. Instead of only aggregated counts per run you can see the distribution of execution outcomes, latency profiles and safety events per module. The exact strategy logic remains abstract.
How do demo trades relate to real fills?
Demo trades in Tier 1 are derived examples based on real fills but simplified and aggregated so that individual orders, exact timestamps and venue details cannot be reconstructed. They are meant to illustrate behaviour, not to publish exact signals.
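One plausible reduction, assuming coarse time and size buckets; the actual export logic is not published, so this only illustrates the idea that identifiers and exact timestamps are stripped before publication.

```python
def to_demo_trade(fill: dict) -> dict:
    """Reduce a real fill to a demo trade: coarse time, bucketed size, no ids."""
    return {
        "symbol": fill["symbol"],
        "side": fill["side"],
        "hour_bucket": fill["ts"] - fill["ts"] % 3600,  # truncate to the hour
        "size_bucket": "small" if fill["size"] < 100 else "large",
    }

fill = {"symbol": "BTC/EUR", "side": "buy", "ts": 1_700_000_123,
        "size": 40.0, "order_id": "abc-123"}
demo = to_demo_trade(fill)
# The order id and exact timestamp are gone; only coarse buckets remain.
```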
Why are some dashboards Tier 2 only?
Some observability layers show sensitive information such as latency profiles, detailed safety events or near-live distributions of execution outcomes. These are relevant for technical and compliance work but not suitable as fully public dashboards.
How can I tell whether the runtime is healthy?
The status strip and main cards show whether ingest, evaluation and execution have been recent and consistent. Unusual safety modes, sharp jumps in trade counts or large gaps between snapshots are signals to dig deeper into observability data and logs.
Do you keep historical snapshots?
Yes. Underlying DBs and exports retain snapshots per run and per day. The public site shows a summary of the most recent state; historical material is primarily used internally for analysis and audits.
How does the FAQ chatbot tie into the documentation?
The chatbot uses a retrieval layer that pulls fragments from the current documentation and then applies a generative model on top. The model is constrained to the docs; when the documentation is updated, the knowledge base moves with it.
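As a toy illustration of the retrieval step only: rank documentation fragments by word overlap with the question, then hand the best ones to the generative model. Production retrieval layers typically use embeddings; this code is not the site's implementation.

```python
def rank_fragments(question: str, fragments: list[str], k: int = 2) -> list[str]:
    """Return the k fragments sharing the most words with the question."""
    q = set(question.lower().split())
    return sorted(fragments,
                  key=lambda f: len(q & set(f.lower().split())),
                  reverse=True)[:k]

docs = ["tier 1 shows delayed aggregated data",
        "execution reads a restricted read model",
        "tier 2 adds latency dashboards"]
top = rank_fragments("what does tier 1 show", docs)
# The Tier 1 fragment overlaps most and ranks first.
```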
Why are some diagrams simplified compared to the real architecture?
For public use, diagrams are intentionally abstracted: fewer internal details, more focus on main components and flows. More detailed views exist in technical documents and internal observability layers.
Are metrics also used to plan future improvements?
Yes. Observability data feeds research questions: which regimes behave robustly, where bottlenecks arise, which safety events are common. That input shapes priorities for new releases.
How does this site fit into a broader compliance and reporting strategy?
The observability site is one window into the runtime. There are also internal reports, run logs and validation scripts. Together they form a trail from which decisions and outcomes can be reconstructed and assessed.
Validation & auditing
How do you prove the engine runs correctly?
The validation model consists of bootstrap, attach, evaluation and lifecycle proofs. These combine log markers (e.g. EXECUTION_ENGINE_START, LIVE_EVALUATION_*, DATA_INTEGRITY_*, ROUTE_FRESHNESS_*) with database tables (orders, fills, state) to show that a run is consistent from start to fill.
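A simplified sketch of one such proof: verify that the required markers appear in order in a run's logs. The ordering below is an assumption based on the list above, and real proofs additionally cross-check the DB tables for orders, fills and state.

```python
# Assumed marker order for one run; real proofs are richer than substring scans.
REQUIRED_ORDER = ["EXECUTION_ENGINE_START", "LIVE_EVALUATION_",
                  "DATA_INTEGRITY_", "ROUTE_FRESHNESS_"]

def markers_in_order(log_lines: list[str]) -> bool:
    """Check each required marker appears, in sequence, somewhere in the log."""
    idx = 0
    for line in log_lines:
        if idx < len(REQUIRED_ORDER) and REQUIRED_ORDER[idx] in line:
            idx += 1
    return idx == len(REQUIRED_ORDER)

log = [
    "2024-01-01T00:00:00 EXECUTION_ENGINE_START run=42",
    "2024-01-01T00:00:01 LIVE_EVALUATION_CYCLE run=42",
    "2024-01-01T00:00:01 DATA_INTEGRITY_OK run=42",
    "2024-01-01T00:00:02 ROUTE_FRESHNESS_PASS run=42",
]
ok = markers_in_order(log)  # a log missing its start marker would fail
```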
Which documentation is authoritative for the engine?
DOC_INDEX.md and the numbered layer guides 01_ARCHITECTURE through 08_OPERATIONS are the technical spine. They are supplemented by 00_MODULE_INVENTORY, specialised policy docs and the observability snapshot contract shared with this site.
What do you mean by bootstrap, attach and lifecycle proofs?
Bootstrap proofs show that a run was started correctly from a known code and DB state. Attach proofs show that observability and reporting attach to that same state. Lifecycle proofs demonstrate that the path from data ingest to fills is consistent and reproducible.
How do you validate data integrity over time?
There are checks for missing or duplicate events, schema compatibility, type mismatches and unexpected outliers. When problems are found, routes or whole symbols are blocked until the root cause is understood and fixed. Validation logs record which checks were run.
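Two of those checks, duplicate events and sequence gaps, can be sketched as follows; the field names `id` and `seq` are assumptions for the example.

```python
def find_duplicates(events: list[dict]) -> set[int]:
    """Return event ids that occur more than once."""
    seen, dups = set(), set()
    for e in events:
        (dups if e["id"] in seen else seen).add(e["id"])
    return dups

def find_gaps(events: list[dict]) -> list[int]:
    """Return missing sequence numbers between the min and max seen."""
    seqs = sorted(e["seq"] for e in events)
    present = set(seqs)
    return [s for s in range(seqs[0], seqs[-1] + 1) if s not in present]

events = [{"id": 1, "seq": 10}, {"id": 2, "seq": 11}, {"id": 2, "seq": 13}]
dups = find_duplicates(events)  # event id 2 appears twice
gaps = find_gaps(events)        # sequence 12 is missing
```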
Do you run test runs without real money?
Yes. Besides live runs there are observe and simulation runs which use the same data and logic but place no real orders. Those runs are used to test new strategy components and risk rules before they reach the live engine.
How do you ensure validation code does not introduce its own errors?
Validation code is treated like the rest of the engine: version control, review where appropriate and clear linkage to documentation. Validation scripts are also run periodically on historical data to see if their behaviour stays stable.
How do runbooks relate to the validation model?
Runbooks describe the operational steps for starting, stopping, upgrading and intervening. The validation model describes which traces those steps leave in DB and logs. Together they ensure actions are repeatable and verifiable after the fact.
Can external reviewers or auditors look along?
In principle yes, via controlled access to documentation, observability snapshots and selected log and DB views. This site is designed to make that dialogue easier without exposing sensitive strategy implementation details.
How do you approach regression testing for new releases?
New releases are tested against existing datasets and scenarios with known expected outcomes. Deviations are investigated before a release goes live. Validation scripts and observability comparisons help detect subtle regressions.
How can an outsider know that documentation here is still current?
The docs on this site are synchronised automatically with the bot’s main repository. Important changes are captured in the changelog. That way there is a direct link between code, docs and what is presented here.
Legal, privacy & use of this site
Is anything on this site investment advice?
No. Text, dashboards and FAQ describe technical architecture, observability and risk principles. It is not a recommendation to trade, not personal financial advice, and not an invitation to participate.
What personal or account data is shown here?
None. Public and Tier 2 snapshots are aggregated and contain no identifiable account data, no full order ids and no reproducible strategy parameters.
May I reuse FAQ or doc text commercially?
The site exists to provide system insight. Reuse is allowed with attribution and without implying partnership or endorsement, unless otherwise agreed in writing.
How does this relate to crypto regulation and disclaimers?
Crypto is highly volatile; the site carries risk warnings per project policy. This is not legal advice; visitors and operators remain responsible for compliance with applicable law.
Why no live order feed or real-time PnL?
Withholding a live feed limits signal leakage, protects operational safety and reduces misuse of aggregated but still sensitive information. Observability is intentionally delayed and summarised.
Who is responsible if the site is technically wrong?
Content follows synchronised engine docs and snapshots; the repository and runtime export are authoritative. The site is provided “as is” for transparency, not as a guarantee of profit or error-free operation.
Donations, building & testing
Can I financially support KapitaalBot?
Yes. Donations help fund continued build-out and testing: engineering time, infrastructure, tooling, and test runs. This is not buying returns or signals — it supports a research and engineering effort.
As a contributor, do I get broader observability access (e.g. Tier 2)?
Possibly, but it is not automatic. Contributors may be eligible for expanded access (such as Tier 2 dashboards), depending on capacity and fit. That is discussed via contact or a Tier 2 request; mention briefly that you want to contribute.
What does a donation actually cover?
Server and runtime costs, test environments, observability export, and focused time to ship features, run regressions, and keep the engine reliable — not mass-market marketing.
Is a donation an investment or equity?
No. You are not buying shares and there is no profit or payback promise. A donation is voluntary support for engineering and transparency; Tier 2 access may follow by arrangement, not as a purchased product.
How do I start that conversation?
Use the contact form on this page or the Tier 2 access form. Say that you want to talk about a donation or ongoing support; expectations around trading outcomes are not part of that discussion.