Help

Getting Started

LogSieve runs entirely in your browser — no data is ever uploaded to a server. Click any section header to expand it; only one section is open at a time. The Results section is always visible so you can see your data while working.

Workflow:

  1. Upload — load your log file
  2. Search Tools — configure filters, extractors, transformations, and columns
  3. Run Pipeline — execute your configured steps in order and review results

Upload

Drag and drop a file onto the upload area, or click "Browse..." to select one. Supported formats:

  • Plain text (.log, .txt) — auto-detects timestamps, log levels, and multi-line events like stack traces
  • JSON (.json) — structured JSON with automatic field mapping
  • NDJSON (.ndjson) — newline-delimited JSON for streaming logs
  • CSV (.csv) — tabular data with header row detection
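As a sketch of how these formats differ, NDJSON is parsed one JSON document per line rather than as a single document. The snippet below is illustrative JavaScript only, not LogSieve's actual parser:

```javascript
// Illustrative sketch only (not LogSieve's actual parser): NDJSON is
// parsed one JSON document per line, skipping blank lines.
function parseNdjson(text) {
  return text
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}

const ndjson = '{"level":"INFO","msg":"started"}\n{"level":"ERROR","msg":"boom"}';
const rows = parseNdjson(ndjson); // → two row objects
```

A plain .json file, by contrast, is one document and would go through a single JSON.parse call.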

Search Tools

Use the tabs inside Search Tools to configure each type of pipeline step. Changes take effect when you click Run Pipeline.

Filters & Sort

Build structured filter rules with the Query Builder, or use the Advanced Query for free-form searches. Both are pipeline steps that run in your configured order.

  • Builder rules — Field + Operator + Value; chain rules with AND / OR
  • Advanced Query — free-form text, field search, regex, boolean logic (see syntax below)
  • Sort — by ID, timestamp, level, or any extracted field
  • Page Size — 25–200 results per page
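As an illustration of how chained Builder rules evaluate, here is a minimal JavaScript sketch. The operator names and the `join` property are assumptions made for the example, not LogSieve's internals:

```javascript
// Illustrative sketch of Builder rules (Field + Operator + Value) chained
// with AND / OR, evaluated left to right. Operator names are assumptions.
function matchRule(row, rule) {
  const v = String(row[rule.field] ?? "");
  switch (rule.op) {
    case "equals":   return v === rule.value;
    case "contains": return v.includes(rule.value);
    default:         return false;
  }
}

function matchChain(row, rules) {
  // rules[i].join ("AND" | "OR") connects rule i to the running result
  return rules.reduce((acc, rule, i) => {
    const hit = matchRule(row, rule);
    if (i === 0) return hit;
    return rule.join === "OR" ? acc || hit : acc && hit;
  }, false);
}
```

Under this model, a chain like level equals ERROR, OR, level equals WARN matches a row when either rule does.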

Advanced Query syntax

  • field:value  ·  level:ERROR
  • AND, OR, NOT  ·  (level:ERROR OR level:WARN) AND app:main
  • Wildcards: user:admin*
  • Regex: msg:/error \d+/   or quoted: msg:"/fatal error/"
  • Existence: has:field, missing:field
  • IN list: level:IN(ERROR, WARN, INFO)
  • Comparisons: latency>500, ts<2025-01-01
  • Bare words search the raw log line: connection refused
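These constructs can be combined in a single query; for example:

```
(level:ERROR OR level:WARN) AND has:user AND latency>500
```

which keeps rows at ERROR or WARN level that carry a user field and report a latency above 500.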

Timestamps & Timezones

  • Timestamps with a timezone (e.g. 2025-11-13T10:30:00Z) are converted to your local timezone for display
  • Naive timestamps (no timezone) are treated as local time
  • Your detected timezone is shown above the results table
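This mirrors how JavaScript's own Date parsing treats the two cases. The snippet is an analogy only, not LogSieve's implementation:

```javascript
// Analogy with JavaScript Date parsing (not LogSieve's implementation).
// A trailing Z (or offset) pins the string to an absolute instant:
const aware = new Date("2025-11-13T10:30:00Z");
console.log(aware.toISOString()); // "2025-11-13T10:30:00.000Z"

// Without a zone, the same string is interpreted in the runtime's local
// time, so the instant it denotes depends on where it is parsed:
const naive = new Date("2025-11-13T10:30:00");
console.log(naive.toLocaleString()); // rendered in your local timezone
```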

Extractor Library

Extract structured fields from unstructured text using named-group regex patterns like (?<user>\w+).

  • Click + New Extractor to define a pattern, then check it to activate it
  • Apply Scope — "All rows" or "Filtered rows"
  • Merge Strategy — when multiple active extractors capture the same field name, this decides which value is kept: Last Wins, First Wins, or Merge
  • Quick Test — try a pattern without saving

Extracted fields appear as sortable columns and export cleanly to CSV / JSON.
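Extractor patterns use the same named-group syntax as JavaScript regular expressions. A quick standalone illustration, with a made-up pattern and log line:

```javascript
// A made-up extractor pattern pulling `user` and `latency` fields out of
// an unstructured line via named capture groups.
const pattern = /user=(?<user>\w+)\s+latency=(?<latency>\d+)ms/;
const line = "2025-11-13 10:30:00 INFO user=alice latency=512ms";
const match = line.match(pattern);
// match.groups.user === "alice", match.groups.latency === "512"
```

Each named group in the pattern becomes a field on the matching row.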

Transformations

Reusable row transformations that run as pipeline steps. Leave the target field blank to overwrite the source.

  • JSON Parse — optional dot-notation path (e.g. user.id)
  • XML Extract Tag — extracts the first matching tag's text
  • URL Decode / Encode — standard URI component behavior
  • JavaScript Expression (Unsafe) — requires opt-in; expressions run in a worker with limits but are still unsafe for untrusted input, so use sparingly

If a transformation fails for a row, the original value is kept and processing continues with the next row.
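The keep-original-on-failure behavior can be sketched for the JSON Parse step like this. The function is illustrative only, and the exact semantics of a missed path are assumed:

```javascript
// Illustrative sketch of JSON Parse with an optional dot-notation path.
// Exact semantics of a missed path are assumed, not documented.
function jsonParseTransform(value, path = "") {
  try {
    let result = JSON.parse(value);
    for (const key of path.split(".").filter(Boolean)) {
      result = result[key];
      if (result === undefined) return value; // path missed: keep original
    }
    return result;
  } catch {
    return value; // not valid JSON: keep original, continue with next row
  }
}

jsonParseTransform('{"user":{"id":7}}', "user.id"); // → 7
jsonParseTransform("plain text", "user.id");        // → "plain text"
```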

Columns

Toggle column visibility and drag the ≡ handle to reorder. Changes are saved in your browser and respected by CSV / JSON exports.

Saved Filters

Save the current Builder rules + Advanced Query + sort order as a named preset. Load it later with "Load & Apply" or "Load into Filters" to review before applying. Expand the Saved Filters panel to rename or delete presets.

Run Pipeline

The authoritative execution plan. Each enabled step runs top-to-bottom. Every run starts from the original parsed rows, so disabling or removing a step cleanly reverts its effect.

  • Step types: Builder rules, Advanced Query, active Extractors, active Transformations
  • Reorder: drag steps — e.g. run extraction before a filter that depends on extracted fields
  • Enable / disable: use step checkboxes to skip steps without deleting them
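Conceptually, each run is a pass over the enabled steps, always starting from the original parsed rows (a sketch of the model, not LogSieve's code):

```javascript
// Conceptual model of Run Pipeline: every run starts from the original
// parsed rows, so disabling or removing a step cleanly reverts its effect.
function runPipeline(originalRows, steps) {
  let rows = originalRows;
  for (const step of steps) {
    if (!step.enabled) continue; // skipped, not deleted
    rows = step.apply(rows);     // filter, extractor, or transformation
  }
  return rows;
}

const steps = [
  { enabled: true,  apply: (rows) => rows.filter((r) => r.level === "ERROR") },
  { enabled: false, apply: (rows) => rows.slice(0, 1) }, // disabled: no effect
];
runPipeline([{ level: "ERROR" }, { level: "INFO" }], steps); // → one ERROR row
```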

Results

Always visible. Above the table: Statistics (row counts by level), File Info, and a Timeline Sparkline. Click "raw" on any row to see the original log line.

Summary Statistics (Beta) — collapse and re-expand the panel to refresh after running the pipeline. Field types, distributions, and basic stats are best-effort.

Troubleshooting

  • Timestamps off? — check whether your log has a trailing Z or timezone offset
  • Builder rule returns nothing — make sure the Value field is filled in
  • Regex not matching — test it in the Quick Test panel first

Import / Export

  • Export Data — save filtered results as JSON or CSV via Tools → Data
  • Export Library — backup extractors, transformations, saved filters, and preferences via Tools → Library → Export
  • Import Library — load from a JSON file via Tools → Library → Import (merge or replace)

Keyboard Shortcuts & Theme

  • ESC — close any open modal dialog

Click the 🌙/☀️ button in the top header to toggle dark / light theme. Your preference is saved automatically.
