Developing FineCode¶
This guide is for developers contributing to FineCode itself — the monorepo structure, conventions, and workflows used internally.
Repository structure¶
The repo is a monorepo. Each package has its own pyproject.toml. The root directory is the workspace.
```
finecode/                     # Main package (Workspace Manager)
finecode_extension_api/       # Public API for extension authors
finecode_extension_runner/    # Extension execution engine
finecode_jsonrpc/             # JSON-RPC client/transport layer
finecode_httpclient/          # HTTP client for extensions
finecode_builtin_handlers/    # Built-in action handlers
extensions/                   # Extension packages (ruff, flake8, mypy, ...)
presets/                      # Preset packages (recommended, lint, format)
finecode_dev_common_preset/   # Preset used for developing FineCode itself
tests/                        # Test suite
```
Setting up the development environment¶
```shell
# From the repo root, inside the dev_workspace venv:
python -m finecode prepare-envs

# Re-prepare a single environment (e.g. after changing its dependencies):
python -m finecode prepare-envs --env=dev_no_runtime

# Multiple envs at once:
python -m finecode prepare-envs --env=dev --env=dev_no_runtime

# Prepare only a specific project:
python -m finecode prepare-envs --project=finecode_extension_api

# Combine filters — one env in one project:
python -m finecode prepare-envs --project=finecode_extension_api --env=dev_no_runtime
```
Running checks¶
Logging strategy (development policy)¶
This section defines the logging policy contributors should follow when adding or changing logs in FineCode. The goal is to reduce noise while keeping deep diagnostics available.
Goals¶
- keep logs useful in normal development and CI runs
- allow deep diagnostics only when needed
- make noisy areas controllable per module
- avoid logging sensitive data
Level policy¶
- `ERROR`: operation failed and needs attention; include actionable context
- `WARNING`: recoverable problem, degraded behavior, or skipped step
- `INFO`: lifecycle milestones and key business events (start/stop, action run result)
- `DEBUG`: developer diagnostics for branch decisions and compact internal state
- `TRACE`: high-volume details (payload previews, loop-level details, per-item processing)
Rules:

- the default global level must be `INFO`
- `TRACE` must be disabled by default
- `TRACE` should be opt-in for specific modules or short debugging sessions
- avoid `INFO` in tight loops; use `TRACE`/`DEBUG` instead
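The stdlib `logging` module has no built-in `TRACE` level. Assuming stdlib-based logging (this is a sketch, not FineCode's actual setup; the level number and logger name are illustrative), the rules above can look like:

```python
import logging

TRACE = 5  # numerically below DEBUG (10), so enabling TRACE also enables DEBUG
logging.addLevelName(TRACE, "TRACE")

logging.basicConfig(level=logging.INFO)  # default global level stays INFO

# opt-in TRACE for one noisy module only, per the rules above
log = logging.getLogger("finecode.wm_server.services.run_service")
log.setLevel(TRACE)
```

Other loggers keep inheriting the global `INFO` level, so the opt-in stays surgical.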
Module-level overrides (target contract)¶
Use per-module log levels so diagnostics can be enabled surgically without turning on global trace.
Recommended config shape:
```toml
[tool.finecode.logging]
default_level = "INFO"
format = "json"

[tool.finecode.logging.module_levels]
"finecode.wm_server.services.run_service" = "TRACE"
"finecode.wm_server.runner.runner_manager" = "DEBUG"
```
Recommended env override pattern:
Recommended CLI override pattern:
Notes:

- `--log-level` is supported by all commands: `run`, `prepare-envs`, `dump-config`, `start-lsp`, `start-wm-server`, `start-mcp`
- `prepare-envs --env=<name>` limits environment preparation to the named env(s); the flag may be repeated
- `prepare-envs --project=<name>` limits to the named project(s); the flag may be repeated; can be combined with `--env`
- when a CLI command spawns a dedicated WM server subprocess, the log level is propagated automatically
- module overrides should take precedence over the global level (not yet implemented)
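The intended precedence (module override beats global default) can be sketched with stdlib `logging`; the `config` dict below mirrors the TOML shape and is hypothetical, not FineCode's actual loader:

```python
import logging

# hypothetical application of the [tool.finecode.logging] config:
# module overrides take precedence over the global default level
config = {
    "default_level": "INFO",
    "module_levels": {"finecode.wm_server.runner.runner_manager": "DEBUG"},
}

logging.getLogger().setLevel(config["default_level"])
for name, level in config["module_levels"].items():
    logging.getLogger(name).setLevel(level)

overridden = logging.getLogger("finecode.wm_server.runner.runner_manager")
```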
What to log¶
Log at boundaries where failures or latency matter:
- request start/end with identifiers (`request_id`, `run_id`, `project`, `action`)
- external process and RPC boundaries (spawn, send, receive, timeout, cancel)
- retries, fallbacks, and decision points
- final result summary (status, duration, item counts)
For high-volume objects:
- log previews and metadata instead of full payloads
- include sizes/counts (`len`, keys, return code) rather than full dumps
- use full payload logs only at `TRACE`
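A minimal sketch of such a preview helper (the name and output format are illustrative, not existing FineCode API):

```python
# hypothetical helper: log keys, sizes, and a short preview instead of full payloads
def payload_summary(payload: dict, preview_len: int = 80) -> str:
    text = repr(payload)
    preview = text[:preview_len] + ("..." if len(text) > preview_len else "")
    return f"keys={sorted(payload)} size={len(text)} preview={preview}"

summary = payload_summary({"files": ["a.py", "b.py"], "options": {"fix": True}})
```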
Safety and performance guardrails¶
- never log secrets or tokens (API keys, auth headers, credentials, full env dumps)
- redact known sensitive keys (`token`, `password`, `secret`, `authorization`)
- prefer lazy/cheap log construction on hot paths
- guard expensive `TRACE` formatting with level checks
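A possible redaction helper for the listed keys (hypothetical, not existing FineCode code):

```python
# redact values of known sensitive keys before anything is logged
SENSITIVE_KEYS = {"token", "password", "secret", "authorization"}

def redact(data: dict) -> dict:
    return {
        key: "***" if key.lower() in SENSITIVE_KEYS else value
        for key, value in data.items()
    }

safe = redact({"Authorization": "Bearer abc123", "project": "finecode"})
```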
Incident workflow¶
- keep the production/dev default at `INFO`
- during incident analysis, enable `TRACE` only for affected modules
- ~~prefer time-bounded overrides (TTL) so verbose logging auto-reverts~~
- once resolved, remove temporary overrides and keep only useful `INFO`/`WARNING`
Dependency lock files¶
FineCode uses pylock.toml lock files for reproducible dependency installation.
Why lock files¶
Without lock files, prepare-envs resolves dependency versions from the ranges declared in pyproject.toml at install time. This means two developers (or CI runs) can end up with different versions depending on when they ran the command. Lock files pin exact versions for reproducible environments.
Canonical lock strategy¶
FineCode standardizes on a single canonical lock file as the source of truth. The canonical lock should encode the supported target matrix (environment, platform, interpreter, architecture) using PEP 751 semantics (for example, marker-based package selection), rather than splitting truth across many authoritative files.
The architecture decision is documented in ADR-0023.
Generating lock files¶
Use the lock_dependencies action:
For Python, prefer handlers that can operate on standardized pylock data directly. uv is currently the preferred backend where available.
Installing from lock files¶
There are two lock-file handlers depending on the pipeline you use:
- `PrepareEnvInstallDepsFromLockHandler` — used in the per-environment `prepare_env` pipeline (the default). Reads `pylock.<env_name>.toml` and passes pinned versions to `install_deps_in_env` for that single env.
- `PrepareEnvsInstallDepsFromLockHandler` — legacy multi-env variant that handles all environments in one handler. Use it only if you are running a custom `prepare_envs` pipeline that does not go through `PrepareEnvsDispatchHandler`.
During migration, existing per-env lock handlers can continue to consume derived files such as pylock.<env_name>.toml. Long-term direction is canonical-first consumption with projection only when required for compatibility.
Lock files in CI¶
Lock files should be committed to the repository. CI should install from them, not regenerate them:
To update lock files, run lock_dependencies locally or in a scheduled CI job and commit the result. For multi-platform projects, use a CI matrix to generate lock files on each target platform.
JSON-RPC key naming convention¶
All JSON-RPC channels in FineCode use camelCase for message keys:
| Channel | Convention | Reason |
|---|---|---|
| WM server ↔ any client (internal TCP) | camelCase | Standard for JSON-based protocols; language-agnostic (clients may be written in Go, TypeScript, Rust, etc.) |
| LSP command handlers → IDE | camelCase | Same convention; no conversion needed |
| ER ↔ WM (pygls custom commands) | camelCase | Consistent with WM protocol |
Rule: write keys explicitly, no auto-conversion¶
Handler return dicts must use camelCase keys written explicitly. There is no automatic snake_case → camelCase conversion in the WM server. Auto-conversion is fragile — it was the root cause of the return_code bug in _handle_run_action where only the inner value was wrapped in _NoConvert but the outer keys were still silently converted.
```python
# correct — keys written as camelCase explicitly
return {"returnCode": result.return_code, "resultByFormat": result.result_by_format}

# wrong — snake_case keys in a JSON response
return {"return_code": result.return_code, "result_by_format": result.result_by_format}
```
Python internal data structures (dataclass fields, local variables, function parameters) stay snake_case per Python convention. Only the dict keys that cross a JSON-RPC boundary are camelCase.
What this means per layer¶
WM server handlers (wm_server.py): return dicts with camelCase keys directly. No _NoConvert wrapper, no _convert_to_camel_case call.
wm_client.py: accesses response keys in camelCase.
Python CLI clients (prepare_envs_cmd.py, run_cmd.py): access camelCase keys from responses.
LSP command handlers (lsp_server/endpoints/): pass WM responses through to the IDE as-is — no conversion needed since the WM already produces camelCase.
ER response dicts (finecode_extension_runner): use camelCase keys (returnCode, resultByFormat, status).
Async generator handlers¶
A handler's run() method can be either a regular coroutine (returns a result) or an async generator (yields one or more partial results). The framework detects which one it is at call time using inspect.isasyncgen().
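The distinction can be illustrated with plain `inspect` calls (the handler names below are made up). It is only visible after the call: a coroutine function returns a coroutine object, while an async-generator function returns an async generator:

```python
import inspect

async def coroutine_style():
    return {"status": "ok"}          # regular coroutine: one final result

async def generator_style():
    yield {"status": "partial"}      # async generator: incremental results

coro = coroutine_style()
gen = generator_style()
is_gen_a = inspect.isasyncgen(coro)  # False
is_gen_b = inspect.isasyncgen(gen)   # True
coro.close()  # close the unawaited coroutine to avoid a warning
```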
When to use an async generator¶
Use yield when your handler produces results incrementally — especially when the caller should receive data before the handler finishes:
- processing a collection and sending per-item results (see `LintHandler` in `finecode_builtin_handlers/lint.py`)
- long-running handlers (servers, watchers) that should emit an initial result (address, port, status) before entering a blocking loop
How it works¶
Each yielded value is treated as a partial result. The framework:
1. Sends it to the LSP/MCP client immediately (if a partial_result_token was supplied by the client)
2. Forwards it to a parent handler's run_action_iter() loop (if called as a sub-action)
3. Accumulates all yielded values using the result type's update() method
The final accumulated result becomes the action's return value. If no value is accumulated (generator yields nothing), the result is None.
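A toy model of the accumulation contract, assuming a result type with an `update()` method (the types and loop are illustrative, not the framework's actual code):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class LintResult:
    messages: list[str] = field(default_factory=list)

    def update(self, other: "LintResult") -> None:
        # merge a later partial into the running result
        self.messages.extend(other.messages)

async def handler():
    yield LintResult(["a.py: unused import"])
    yield LintResult(["b.py: line too long"])

async def accumulate(gen):
    result = None
    async for partial in gen:
        if result is None:
            result = partial        # first yield seeds the result
        else:
            result.update(partial)  # later yields are merged in
    return result                   # None if the generator yielded nothing

final = asyncio.run(accumulate(handler()))
```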
Pattern: yield before blocking¶
For handlers that start a server or watcher and then block indefinitely, yield the result as soon as the resource is ready, then enter the blocking loop:
```python
async def run(self, payload, run_context):
    server = _start_server(payload.host, payload.port)
    bound_host, bound_port = server.server_address
    # Yield immediately — callers get address/port without waiting for cancellation
    yield MyRunResult(base_url=f"http://{bound_host}:{bound_port}", ...)
    async with run_context.progress("Serving", cancellable=True) as prog:
        await prog.report(message=f"http://{bound_host}:{bound_port}")
        try:
            while True:
                await asyncio.sleep(1.0)
        except asyncio.CancelledError:
            pass
        # generator exhausts here; cleanup in finally block
```
Without the yield, the caller would only receive the result after the action is cancelled — never during normal operation.
Canonical examples¶
- `ServeWalExplorerFromStoreHandler` (`extensions/fine_wal_explorer/`) — yield-before-blocking pattern
- `LintHandler` (`finecode_builtin_handlers/lint.py`) — iterates a sub-action with `run_action_iter` and re-yields each partial
Partial result internals¶
Understanding how partial results are forwarded is useful when debugging why a caller does (or does not) receive incremental data.
Two forward paths¶
When a handler yields a partial result, the framework forwards it via one or both paths depending on how the action was invoked:
| Path | Set when | Transport |
|---|---|---|
| `partial_result_token` | Client sent a token with the request | `partial_result_sender.schedule_sending()` → WM notification → LSP/MCP client |
| `partial_result_queue` | Parent handler called `run_action_iter()` | `asyncio.Queue.put()` → parent's `async for` loop |
Both checks happen in the same place in execute_action_handler (finecode_extension_runner/_services/run_action.py). A comment there notes the future opportunity to unify them into a single PartialResultForwarder abstraction.
Sub-action partial results¶
Calling run_action(sub_action, ...) discards all intermediate yields — only the final accumulated result is returned. To receive intermediate yields from a sub-action, use run_action_iter(sub_action, ...) instead. The queue path above is what makes this work.
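A toy model of the queue forward path (the real `run_action_iter` machinery differs; this only illustrates why the parent sees each partial as it is produced rather than after the sub-action finishes):

```python
import asyncio

async def sub_handler(queue: asyncio.Queue):
    # the sub-handler's yields are modeled as queue puts
    for item in ("partial-1", "partial-2"):
        await queue.put(item)
    await queue.put(None)  # sentinel: generator exhausted

async def parent():
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(sub_handler(queue))
    received = []
    # models the parent's `async for` loop over run_action_iter()
    while (item := await queue.get()) is not None:
        received.append(item)  # each partial arrives as soon as it is put
    await task
    return received

received = asyncio.run(parent())
```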
MCP real-time streaming¶
The MCP server (src/finecode/mcp_server.py) forwards both partial results and progress notifications as real-time send_log_message calls to the AI client. This means both mechanisms surface to the user immediately — there is no buffering at the MCP layer.
Referencing ADRs in source code¶
When code implements a non-obvious constraint or design choice, add a comment referencing the relevant ADR. This prevents future contributors from accidentally "fixing" something that was intentionally designed that way.
```python
# Single shared IO thread services all active ERs — see docs/adr/0003-*.md
_io_thread = threading.Thread(target=_service_loop, daemon=True)
```
When to add an ADR reference:
- The implementation looks like it could be simplified but cannot be
- There is a temptation to refactor in a way that would violate the decision
- The constraint is not derivable from the code itself
When not to add one:
- The code is self-explanatory
- The ADR covers a broad design area — reference it only at the specific site that enforces the decision, not everywhere related code appears
ADR references differ from user-doc references: user docs explain the API surface for consumers; ADRs explain why a constraint exists for contributors.
Code Style¶
Typing¶
- type the code
    - use complete types: no holes in generics, like `list` instead of `list[int]`
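For example, fully parameterized annotations instead of bare generics:

```python
# complete generic parameters: no bare `list` or `dict`
def collect_ids(rows: list[dict[str, int]]) -> list[int]:
    return [row["id"] for row in rows]

ids = collect_ids([{"id": 1}, {"id": 2}])
```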
Imports¶
- keep imports at the top of the module
- keep imports at the root level of the module
    - there are two exceptions:
        - you need to avoid a circular dependency (usually a sign of a problem in the code structure)
        - you want to avoid loading a module on startup (e.g. don't import all CLI command handlers if only one is needed for the current CLI call)
Exports¶
- explicitly export public module members using `__all__`
    - it may contain only literal strings, no dynamic elements