Documentation

Find answers to your questions about FormatJSONOnline's JSON tools. Our documentation provides tips, tutorials, and FAQs to help you use our services effectively.

Technical Architecture & Performance

FormatJSONOnline is built on a modern, privacy-first architecture leveraging Web Workers, Service Workers, and client-side processing to deliver fast, secure JSON operations without server dependency.

1. Technology Stack

Frontend

  • Next.js: App Router & Server Components
  • React 19: Function components with hooks
  • TypeScript: Full type safety
  • Tailwind CSS: Responsive utility-first styling
  • Radix UI: Accessible component primitives
  • Monaco Editor: Code highlighting & editing

Backend & Services

  • Node.js: API middleware & utilities
  • Vercel: Hosting & serverless functions
  • Web Workers: Background JSON processing
  • Service Workers: Offline support & caching
  • OpenAI/Anthropic: AI processing APIs

2. System Architecture

┌─────────────────────────────────────────────────────────────┐
│                       User Browser                          │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────────────┐  ┌──────────────────┐                 │
│  │ Main Thread      │  │ Web Workers (x4) │                 │
│  ├──────────────────┤  ├──────────────────┤                 │
│  │ React Components │  │ JSON Processing  │                 │
│  │ State Management │  │ Heavy Computation│                 │
│  │ UI Rendering     │  └──────────────────┘                 │
│  └──────────────────┘                                       │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │         Service Worker (Offline Support)              │  │
│  │  - Caching strategy                                   │  │
│  │  - Request interception                               │  │
│  ├───────────────────────────────────────────────────────┤  │
│  │  LocalStorage: Minimal (settings, user preferences)   │  │
│  └───────────────────────────────────────────────────────┘  │
└──────────────────┬──────────────────────────────────────────┘
                   │ HTTPS
         ┌─────────┴─────────┐
         │                   │
    ┌────▼──────┐       ┌────▼──────────┐
    │ API Layer │       │ AI Endpoints  │
    │ (Vercel)  │       │ (OpenAI, etc) │
    │           │       │               │
    │ - Rate    │       │ - Streaming   │
    │   limits  │       │ - Encryption  │
    │ - Auth    │       │ - Zero log    │
    └───────────┘       └───────────────┘

3. Web Worker Strategy

Worker Pool Architecture

FormatJSONOnline uses a worker pool pattern to manage multiple Web Workers:

interface Task {
  data: unknown;
  operation: string;
}

class WorkerPool {
  private workers: Worker[] = [];
  private taskQueue: Task[] = [];
  private activeWorkers = new Set<Worker>();

  constructor(poolSize = 4) {
    for (let i = 0; i < poolSize; i++) {
      this.workers.push(new Worker('json-worker.ts'));
    }
  }

  async processJSON(data: any, operation: string): Promise<any> {
    // getAvailableWorker() resolves once a worker is free (implementation elided)
    const availableWorker = await this.getAvailableWorker();
    return new Promise((resolve) => {
      availableWorker.onmessage = (e) => {
        this.activeWorkers.delete(availableWorker);
        resolve(e.data);
      };
      availableWorker.postMessage({ data, operation });
      this.activeWorkers.add(availableWorker);
    });
  }
}

Operations Executed in Workers

CPU-Intensive

  • ✓ Large file formatting (100MB+)
  • ✓ Validation of complex structures
  • ✓ Diff/comparison algorithms
  • ✓ Schema generation & inference

Memory-Intensive

  • ✓ Deep tree traversal
  • ✓ Flatten/unflatten operations
  • ✓ Merge multiple large files
  • ✓ Type generation
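
The worker side of this split can be sketched as a dispatcher that maps the operation field of each incoming message to a handler. This is an illustrative sketch, not FormatJSONOnline's actual worker code; the handler names and the set of operations shown are assumptions:

```javascript
// Illustrative worker-side dispatcher: maps an `operation` name from the
// incoming message to a pure handler function. Handler names are assumptions.
const handlers = {
  format: (data) => JSON.stringify(data, null, 2),
  minify: (data) => JSON.stringify(data),
  validate: (data) => {
    try {
      JSON.parse(typeof data === 'string' ? data : JSON.stringify(data));
      return { isValid: true, errors: [] };
    } catch (err) {
      return { isValid: false, errors: [err.message] };
    }
  },
};

function handleMessage({ data, operation }) {
  const handler = handlers[operation];
  if (!handler) throw new Error(`Unknown operation: ${operation}`);
  return handler(data);
}

// Inside a real Web Worker this would be wired up as:
// self.onmessage = (e) => self.postMessage(handleMessage(e.data));
```

Because each handler is a pure function of the message payload, the main thread stays free for UI rendering while workers do the heavy lifting.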

4. Performance Benchmarks

Processing Times (Client-Side)

Operation   1 MB File   10 MB File   100 MB File
Format      ~15ms       ~120ms       ~1.2s
Validate    ~10ms       ~85ms        ~900ms
Minify      ~8ms        ~75ms        ~750ms
Diff        ~25ms       ~200ms       Not recommended

Benchmarks measured on modern hardware (4-core CPU, 8GB+ RAM). Actual times vary by device.
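
To reproduce a rough version of the Format row on your own machine, you can time a client-side format pass directly. benchmarkFormat below is a hypothetical helper, not part of the app; results vary widely by device:

```javascript
// Rough micro-benchmark for the Format operation: build a sample payload,
// time JSON.stringify with 2-space indentation, and report throughput.
function benchmarkFormat(itemCount) {
  const payload = Array.from({ length: itemCount }, (_, i) => ({
    id: i,
    name: `item-${i}`,
    tags: ['a', 'b', 'c'],
  }));
  const start = performance.now();
  const formatted = JSON.stringify(payload, null, 2);
  const durationMs = performance.now() - start;
  const sizeKB = formatted.length / 1024;
  return { durationMs, sizeKB, throughputKBps: sizeKB / (durationMs / 1000) };
}
```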

Memory Usage Profile

Base Application: ~2-3 MB (JavaScript bundle + assets)

Per Worker: ~1 MB (copy of worker code + utilities)

During Processing: ~2x-3x the input file size (for intermediate data structures)

Peak Memory: Input size + intermediate buffers (typically cleaned up after processing)
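
Putting the numbers above together gives a back-of-envelope peak-memory estimate; the helper name and the exact constants here are illustrative, taken from the profile figures rather than from measured internals:

```javascript
// Back-of-envelope peak memory: base bundle (~3 MB) + ~1 MB per worker
// + 2x-3x the input size for intermediate data structures.
function estimatePeakMemoryMB(inputMB, workerCount = 4) {
  const baseMB = 3;                  // ~2-3 MB JavaScript bundle + assets
  const workersMB = workerCount * 1; // ~1 MB per worker
  return {
    low: baseMB + workersMB + inputMB * 2,
    high: baseMB + workersMB + inputMB * 3,
  };
}
```

For a 10 MB file with four workers this suggests roughly 27-37 MB at peak, most of which is reclaimed after processing.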

5. AI Processing Pipeline

Request/Response Flow

1. User submits JSON + prompt
   ↓
2. Client validates input
   ├─ Check file size limits
   ├─ Validate JSON structure
   └─ Warn about sensitive data
   ↓
3. Encrypt data in transit (TLS)
   ↓
4. Send HTTPS POST to /api/ai/{operation}
   ├─ payload: { data, prompt, model }
   └─ headers: { Authorization, Content-Type }
   ↓
5. Server processes (OpenAI/Anthropic API)
   ├─ Real-time streaming
   ├─ No persistence to database
   └─ No model training signal
   ↓
6. Stream response back to client
   ↓
7. Immediate deletion of request data
   ↓
8. Display result with AI disclaimer
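
Step 2 of this flow, client-side validation, can be sketched as a pure function. The 50 MB check matches the payload cap noted under Rate Limiting below; the list of sensitive-looking keys is an illustrative assumption:

```javascript
const MAX_PAYLOAD_BYTES = 50 * 1024 * 1024; // matches the 50 MB payload cap
const SENSITIVE_KEYS = ['password', 'token', 'apiKey', 'secret']; // illustrative

// Pre-flight checks before any data leaves the browser: size limit,
// JSON validity, and a warning if keys look like credentials.
function preflight(jsonText) {
  const errors = [];
  const warnings = [];
  if (new TextEncoder().encode(jsonText).length > MAX_PAYLOAD_BYTES) {
    errors.push('Payload exceeds 50 MB limit');
  }
  try {
    JSON.parse(jsonText);
  } catch (err) {
    errors.push(`Invalid JSON: ${err.message}`);
  }
  for (const key of SENSITIVE_KEYS) {
    if (jsonText.includes(`"${key}"`)) {
      warnings.push(`Payload may contain sensitive field "${key}"`);
    }
  }
  return { ok: errors.length === 0, errors, warnings };
}
```

Warnings do not block the request; they simply give the user a chance to redact data before it is sent.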

API Endpoint Examples

POST /api/ai/generate

Generate mock JSON from schema

// Request
{
  "schema": {
    "type": "object",
    "properties": {
      "name": { "type": "string" },
      "age": { "type": "number" }
    }
  },
  "count": 5,
  "model": "gpt-4"
}

// Response (streaming)
[
  { "name": "Alice Johnson", "age": 28 },
  { "name": "Bob Smith", "age": 35 },
  ...
]

POST /api/ai/fix

Auto-fix invalid JSON

// Request
{
  "malformedJSON": "{name: 'John', age: 30}",
  "model": "claude-3-sonnet"
}

// Response
{
  "fixed": { "name": "John", "age": 30 },
  "explanation": "Added quotes around property names",
  "confidence": 0.95
}

⚠️ Rate Limiting

Per User: 10 requests / minute (authenticated)

Per IP: 5 requests / minute (unauthenticated)

Max Payload: 50 MB

Responses include rate-limit headers such as X-RateLimit-Remaining.
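
A client can read these headers to decide whether to pause before sending the next request. A minimal sketch, assuming a Fetch-style headers.get interface; only X-RateLimit-Remaining is documented above, so no other header names are assumed:

```javascript
// Decide whether the next request should be delayed based on the
// X-RateLimit-Remaining header of the previous response.
function shouldThrottle(headers) {
  const remaining = Number(headers.get('X-RateLimit-Remaining'));
  if (Number.isNaN(remaining)) return false; // header absent: don't guess
  return remaining <= 0;
}
```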

6. Offline Support & Caching

Cache Strategy

Static Assets: Cache-first strategy (app will check cache before network)

HTML/JS Bundles: Stale-while-revalidate (serve from cache, update in background)

API Requests: Network-first (AI features require network)

Service Worker Scope: Root (/) covers entire application
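
The routing decision behind these strategies can be sketched as a pure function. The URL patterns below are illustrative assumptions, not the app's actual service-worker logic:

```javascript
// Pure routing decision mirroring the cache strategies above.
function cacheStrategyFor(pathname) {
  if (pathname.startsWith('/api/')) {
    return 'network-first';            // API requests: AI features need network
  }
  if (pathname.endsWith('.js') || pathname.endsWith('.html')) {
    return 'stale-while-revalidate';   // HTML/JS bundles: serve, update behind
  }
  return 'cache-first';                // static assets: check cache first
}
```

Inside a service worker, the fetch handler would call this function on each intercepted request URL and respond with the matching cache/network combination.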

Offline Capabilities

  • ✓ All client-side JSON tools work offline
  • ✓ UI fully functional without network
  • ✗ AI features require network connection
  • ✗ Analytics disabled in offline mode

7. Code Examples & Usage

Client-Side JSON Processing (TypeScript)

import { JsonWorkerPool } from '@/lib/workers/pool';

// Initialize pool
const pool = new JsonWorkerPool(4);

// Format JSON
const formatted = await pool.format(largeJSON, {
  indent: 2,
  sortKeys: true
});

// Validate
const validation = await pool.validate(jsonData);
if (!validation.isValid) {
  console.error('Errors:', validation.errors);
}

// Convert to TypeScript
const types = await pool.toTypeScript(jsonData, {
  name: 'MyType',
  style: 'interface'
});

// All processing happens in background workers!

Using AI API (JavaScript)

// Generate mock data
async function generateMockJSON() {
  const response = await fetch('/api/ai/generate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_TOKEN'
    },
    body: JSON.stringify({
      schema: {
        type: 'object',
        properties: {
          id: { type: 'string' },
          email: { type: 'string' },
          created: { type: 'string' }
        }
      },
      count: 10
    })
  });

  // Handle streaming response
  const reader = response.body.getReader();
  let result = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    result += new TextDecoder().decode(value);
  }
  return JSON.parse(result);
}

Analyzing Performance

// Measure operation timing
const startTime = performance.now();
const formatted = await pool.format(largeJSON);
const duration = performance.now() - startTime;

// Report size and throughput
const sizeKB = JSON.stringify(largeJSON).length / 1024;
console.log({
  duration: duration.toFixed(2) + 'ms',
  fileSize: sizeKB + 'KB',
  throughput: (sizeKB / (duration / 1000)).toFixed(2) + ' KB/s'
});

Learning Resources