What is SPARKIT?
SPARKIT is an API-first scientific research agent. You send a question over HTTP; an agent retrieves and synthesizes the relevant literature, performs any analyses the question requires, and returns a Markdown report. When the question warrants it, the report includes inline citations and a structured list of sources.
How is SPARKIT different from ChatGPT, Claude, or Perplexity?
SPARKIT is an API designed to be called from your own agent or application — not a chat interface. It runs multi-step research and returns structured Markdown with citations rather than conversational text. On HLE-Gold (Humanity's Last Exam, gold subset), SPARKIT scores 53.0% vs 34.9% for direct GPT-5.5 and 28.9% for direct Claude Opus 4.7.
How much does SPARKIT cost?
Plus is $35/month for 10 research queries. Pro is $90/month for 30 queries (most popular). Max is $250/month for 90 queries. All three subscription tiers are billed monthly on a 12-month commitment. Try-it is a one-time $10 purchase for 5 queries with no subscription. Enterprise is available for teams that need custom support, custom agent design, or connections to proprietary data; email info@sparkit.science to scope an engagement.
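The tiers differ in effective per-query cost; a quick calculation from the prices above:

```python
# Per-query cost for each option, computed from the listed prices.
# Subscription tiers are priced per month; Try-it is a one-time pack.
tiers = {"Try-it": (10, 5), "Plus": (35, 10), "Pro": (90, 30), "Max": (250, 90)}

per_query = {name: price / queries for name, (price, queries) in tiers.items()}
for name, cost in per_query.items():
    print(f"{name}: ${cost:.2f}/query")
# Try-it: $2.00, Plus: $3.50, Pro: $3.00, Max: $2.78
```

Among subscriptions, the per-query price falls as the tier rises.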
How long does a research query take?
Median end-to-end time is approximately 110 seconds; actual time varies with question complexity and how many sources the agent retrieves. Jobs run asynchronously: you get a job_id back immediately, then either poll GET /v1/research/{job_id} or pass a callback_url to receive a webhook when the report is ready.
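The polling path can be sketched as a small loop. This is a generic helper, not official SDK code: `fetch_job` stands in for the GET /v1/research/{job_id} call, and the status values (`"completed"`, `"failed"`) are assumptions about the job schema, not a confirmed contract.

```python
import time

def wait_for_report(fetch_job, job_id, interval=5.0, timeout=600.0):
    """Poll a job-status callable until the research report is ready.

    `fetch_job(job_id)` should perform GET /v1/research/{job_id} and
    return the job as a dict with a `status` field (assumed schema).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job(job_id)
        if job["status"] == "completed":
            return job  # report is in the completed job payload
        if job["status"] == "failed":
            raise RuntimeError(f"research job {job_id} failed")
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"job {job_id} not ready after {timeout}s")
```

For long-running jobs, the callback_url webhook avoids polling entirely.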
Does SPARKIT cite its sources?
When the question warrants literature evidence, claims in the returned Markdown report are followed by inline citations, and the response payload includes a structured `sources` array with title, URL, DOI, year, and citation count for each source. Not every question requires literature references — for purely computational or definitional questions, citations may not be included.
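A consumer of the `sources` array might render it as a reference list. A minimal sketch, assuming the JSON keys mirror the fields named above (`title`, `url`, `doi`, `year`, `citation_count`); the exact key names are an assumption, not a published schema:

```python
def format_sources(sources):
    """Render a `sources` array as a numbered reference list.

    Assumes each source dict carries title, url, year, and optionally
    doi and citation_count (key names inferred from the FAQ).
    """
    lines = []
    for i, s in enumerate(sources, start=1):
        doi = f" doi:{s['doi']}." if s.get("doi") else ""
        lines.append(
            f"{i}. {s['title']} ({s['year']}).{doi} "
            f"{s['url']} [{s.get('citation_count', 0)} citations]"
        )
    return "\n".join(lines)
```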
Is the output reliable for clinical or high-stakes decisions?
No. SPARKIT is an AI agent and, like any LLM-driven system, can be wrong: citations may be misattributed, sources may be summarized inaccurately, and conclusions can overstate what the literature supports. Always verify the cited sources before relying on outputs for clinical, regulatory, legal, or other high-stakes decisions.
Does SPARKIT screen queries for safety?
Yes. Queries are screened by a safety policy. Requests that solicit dual-use research of concern (e.g., synthesis of dangerous pathogens or chemical weapons), unverified clinical advice, or other prohibited content are refused with a `safety_blocked` error.
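Client code should treat a `safety_blocked` error as terminal rather than transient. A sketch of that distinction; only `safety_blocked` is named in this FAQ, and the other error codes here are illustrative assumptions about what a typical API might return:

```python
def should_retry(error_code):
    """Decide whether resubmitting a failed query could succeed.

    `safety_blocked` is a policy refusal: the same query will be
    refused again, so don't retry it. The transient codes listed
    below are hypothetical examples, not confirmed SPARKIT codes.
    """
    if error_code == "safety_blocked":
        return False  # refused by policy; rephrase or drop the query
    return error_code in {"rate_limited", "server_error"}
```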
How do I install the SDK?
The official Python SDK is on PyPI as `sparkit-science`. Install with `pip install sparkit-science` (Python 3.10+), mint an API key at app.sparkit.science/keys, create a client with that key, and call `client.research('your question')`.
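Since SPARKIT is API-first, you can also skip the SDK and make the HTTP call yourself. A minimal hand-rolled client sketch, with the transport injected so it can be exercised without network access; the endpoint path, header names, and payload shape are assumptions based on this FAQ, not a published API reference:

```python
class MiniSparkitClient:
    """Hand-rolled alternative to the official SDK (sketch only)."""

    BASE_URL = "https://api.sparkit.science"  # assumed host

    def __init__(self, api_key, transport):
        # transport(method, url, headers, json_body) -> dict; pass a
        # requests/httpx wrapper in real use, a stub in tests.
        self._headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }
        self._transport = transport

    def research(self, question, callback_url=None):
        """Submit a research question; returns the job payload."""
        payload = {"question": question}
        if callback_url is not None:
            payload["callback_url"] = callback_url
        return self._transport(
            "POST", f"{self.BASE_URL}/v1/research", self._headers, payload
        )
```

Injecting the transport keeps the call pattern visible while leaving the HTTP library choice to the caller.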
Do you train on my queries?
No. SPARKIT does not use your research queries or returned reports to train any model. Your inputs and outputs are stored only for your account's billing and history, and are never sent to any model provider for training.
Can I cancel anytime?
Not mid-term. Subscriptions carry a 12-month commitment: service and billing continue through the end of the term, then both stop. Try-it is a one-time purchase with no subscription, so there is nothing to cancel.