# Connector Registry

## Overview

A regional law enforcement agency deploying the platform already runs Mark43 for records management, a Tyler Technologies CAD system, and a legacy jail management platform from a third vendor. Producing useful intelligence from those systems means consolidating their data in one place without writing custom integration code for each one. The Connector Registry solves that problem: 153 pre-built connectors cover the most common public safety, financial, social media, and government data sources, with schema discovery and field mapping that reduce integration setup from weeks to hours.

The platform features automatic schema discovery, smart field mapping, real-time data quality validation, and normalisation into standard schemas including NIBRS, UCR, NIEM, and GJXDM.

```mermaid
flowchart LR
    A[RMS / CAD Systems] --> B[Connector Registry]
    C[Financial Institutions] --> B
    D[Social Media APIs] --> B
    E[Government Databases] --> B
    F[Commercial Intel Feeds] --> B
    B --> G[Schema Discovery]
    G --> H[ML Field Mapping]
    H --> I[Quality Validation]
    I --> J[Universal Normalisation]
    J --> K[PostgreSQL Primary Store]
    K --> L[Investigation Platform]
    K --> M[Analytics Layer]
    K --> N[Bidirectional Sync]
    N --> A
```

## Key Features

### Pre-Built Connector Library

153 connectors cover public safety and law enforcement systems including 40+ RMS platforms, CAD systems, jail management, and court records, as well as financial data sources, social media platforms, government databases, and commercial intelligence providers. Zero-configuration schema introspection with automatic field detection and type inference gets integrations running quickly.
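
To illustrate what zero-configuration introspection involves, the sketch below infers a field-to-type map from a sample of source records. The heuristics and type names are illustrative assumptions, not the platform's actual inference rules:

```python
from datetime import datetime

def infer_type(values):
    """Infer a column type from sample values (simplified heuristics)."""
    non_null = [v for v in values if v not in (None, "")]
    if not non_null:
        return "unknown"
    if all(isinstance(v, bool) for v in non_null):   # check bool before int
        return "boolean"
    if all(isinstance(v, int) for v in non_null):
        return "integer"
    if all(isinstance(v, (int, float)) for v in non_null):
        return "float"
    try:  # ISO-8601 strings parse as datetimes
        for v in non_null:
            datetime.fromisoformat(v)
        return "datetime"
    except (TypeError, ValueError):
        pass
    return "string"

def discover_schema(sample_rows):
    """Build a field -> type map from a sample of source records."""
    fields = {}
    for row in sample_rows:
        for key in row:
            fields.setdefault(key, []).append(row[key])
    return {name: infer_type(values) for name, values in fields.items()}

rows = [
    {"incident_id": 1001, "reported_at": "2026-01-05T14:30:00", "offense": "burglary"},
    {"incident_id": 1002, "reported_at": "2026-01-06T09:12:00", "offense": "theft"},
]
print(discover_schema(rows))
# {'incident_id': 'integer', 'reported_at': 'datetime', 'offense': 'string'}
```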

### Smart Field Mapping

ML-powered field mapping suggestions based on semantic similarity and historical mapping patterns. Reduce integration setup time from weeks to hours by automatically suggesting how source fields map to destination schemas.
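
As a minimal sketch of similarity-based mapping suggestions, the following uses string similarity as a stand-in for the semantic model; a production system would also draw on embeddings and historical mapping patterns, and the threshold and field names are assumptions:

```python
from difflib import SequenceMatcher

def suggest_mappings(source_fields, target_fields, threshold=0.5):
    """Suggest source -> target field mappings by name similarity.
    Returns {source_field: (best_target, score)} for matches above threshold."""
    suggestions = {}
    for src in source_fields:
        best, best_score = None, threshold
        for tgt in target_fields:
            score = SequenceMatcher(None, src.lower(), tgt.lower()).ratio()
            if score > best_score:
                best, best_score = tgt, score
        if best is not None:
            suggestions[src] = (best, round(best_score, 2))
    return suggestions

source = ["incident_num", "off_desc", "rpt_date"]
target = ["incident_number", "offense_description", "report_date"]
print(suggest_mappings(source, target))
```

An analyst would review these suggestions before committing the mapping; low-scoring or unmatched fields fall back to manual configuration.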

### Real-Time Data Quality

Field-level quality scoring with completeness, accuracy, consistency, and timeliness metrics. Monitor ingested data quality continuously and alert when data sources degrade or produce anomalous records.
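
A sketch of the completeness dimension, assuming a simple required-field check and an illustrative alert threshold (the other dimensions follow the same pattern with their own rules):

```python
def completeness(record, required_fields):
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for f in required_fields if record.get(f) not in (None, ""))
    return filled / len(required_fields)

def batch_quality(records, required_fields, alert_threshold=0.9):
    """Average completeness across a batch, with a degradation alert flag."""
    scores = [completeness(r, required_fields) for r in records]
    avg = sum(scores) / len(scores)
    return {"average_completeness": avg, "alert": avg < alert_threshold}

records = [
    {"incident_id": "1001", "offense": "burglary", "location": "400 Main St"},
    {"incident_id": "1002", "offense": "", "location": None},
]
print(batch_quality(records, ["incident_id", "offense", "location"]))
```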

### Universal Normalisation

Standardise disparate data formats into recognised schemas including NIBRS, UCR, NIEM, and GJXDM. Transform heterogeneous source data into consistent formats for cross-source analysis and reporting.
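
The transformation step can be sketched as a field map plus optional per-field transforms. The mapping below goes from a vendor export to an internal NIBRS-like schema; the field names are illustrative, not the standard's actual element names:

```python
def normalise(record, field_map, transforms=None):
    """Map a source record onto a target schema using a field map
    and optional per-target-field transform functions."""
    transforms = transforms or {}
    out = {}
    for src_field, tgt_field in field_map.items():
        value = record.get(src_field)
        fn = transforms.get(tgt_field)
        out[tgt_field] = fn(value) if fn and value is not None else value
    return out

# Hypothetical mapping and transforms for illustration only.
FIELD_MAP = {"off_desc": "offense_description", "rpt_date": "incident_date"}
TRANSFORMS = {"offense_description": str.upper}

src = {"off_desc": "burglary", "rpt_date": "2026-01-05"}
print(normalise(src, FIELD_MAP, TRANSFORMS))
# {'offense_description': 'BURGLARY', 'incident_date': '2026-01-05'}
```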

### Bidirectional Sync

Real-time bidirectional data synchronisation with conflict resolution and merge strategies. Connected systems stay in sync without manual data entry or batch transfer delays. Multi-tenant source isolation ensures polling configurations remain segregated between organisations.
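
One common conflict-resolution strategy is last-write-wins at the record level, sketched below under the assumption of ISO-8601 `updated_at` timestamps (field-level timestamps or manual review queues are alternative strategies):

```python
def merge(local, remote):
    """Last-write-wins merge: the record with the newer 'updated_at'
    wins on conflicting fields; the older record fills in missing ones."""
    if local["updated_at"] >= remote["updated_at"]:
        winner, loser = local, remote
    else:
        winner, loser = remote, local
    return {**loser, **winner}  # winner's fields override loser's

local = {"id": "c-1", "status": "open", "updated_at": "2026-02-01T10:00:00"}
remote = {"id": "c-1", "status": "closed", "assignee": "det. rivera",
          "updated_at": "2026-02-02T08:00:00"}
merged = merge(local, remote)
print(merged["status"], merged["assignee"])
# closed det. rivera
```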

### Change Data Capture

Incremental synchronisation with log-based change detection for minimal performance impact on source systems. Process only changes rather than full data refreshes, reducing load and latency.
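
The incremental pattern can be sketched as reading only log entries past a stored offset, a simplified stand-in for log-based CDC (real implementations tail a database write-ahead log or vendor change feed):

```python
def poll_changes(change_log, last_offset):
    """Return entries newer than last_offset and the next offset to store."""
    new_entries = [e for e in change_log if e["offset"] > last_offset]
    next_offset = new_entries[-1]["offset"] if new_entries else last_offset
    return new_entries, next_offset

log = [
    {"offset": 1, "op": "insert", "id": "r-1"},
    {"offset": 2, "op": "update", "id": "r-1"},
    {"offset": 3, "op": "insert", "id": "r-2"},
]
changes, offset = poll_changes(log, last_offset=1)
print(len(changes), offset)
# 2 3
```

Persisting `next_offset` after each successful batch gives at-least-once delivery: a crash between processing and persisting replays the batch rather than losing it.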

### Data Lineage and Provenance

Complete lineage tracking from source system to analytics with transformation audit trails. Every data point carries a clear record of origin, transformation history, and last update time.
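
A lineage envelope might look like the sketch below: origin, transformation history, update time, and a content hash for tamper evidence. The envelope shape and key names are assumptions for illustration:

```python
import hashlib
import json
from datetime import datetime, timezone

def with_provenance(record, source_system, transformations):
    """Wrap a record in a lineage envelope recording its origin,
    transformation history, update time, and a content hash."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "data": record,
        "lineage": {
            "source_system": source_system,
            "transformations": transformations,
            "updated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(payload).hexdigest(),
        },
    }

rec = with_provenance(
    {"incident_id": "1001", "offense_description": "BURGLARY"},
    source_system="vendor-rms",
    transformations=["field_mapping:v3", "uppercase:offense_description"],
)
print(rec["lineage"]["source_system"])
# vendor-rms
```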

## Use Cases

- **Multi-System Integration**: Connect disparate public safety systems into a unified data platform for cross-system analysis and reporting without custom integration development.
- **Data Migration**: Migrate historical data from legacy systems with automated schema mapping, quality validation, and transformation into modern data standards.
- **Intelligence Aggregation**: Ingest and normalise data from multiple intelligence sources, commercial databases, and open-source feeds for comprehensive investigative support.
- **Real-Time Data Sharing**: Enable bidirectional sync between systems for inter-agency data sharing, regional information exchanges, and multi-jurisdictional collaboration.

## Integration

Supports connections to RMS, CAD, jail management, court systems, financial institutions, social media platforms, government databases, and commercial data providers. Standard authentication methods include API keys, OAuth, SFTP, and database connections. All ingested data writes go to the PostgreSQL primary data store.
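
A connector definition might be configured along these lines; the key names, connector identifier, and environment variable are hypothetical, shown only to indicate the kind of settings involved:

```python
# Hypothetical connector configuration; all names are illustrative.
connector_config = {
    "connector": "tyler-cad",  # one of the pre-built connectors
    "auth": {
        "method": "api_key",              # or: oauth, sftp, database
        "key_env_var": "TYLER_CAD_API_KEY",  # secret read from environment
    },
    "sync": {
        "mode": "bidirectional",
        "cdc": True,               # incremental change data capture
        "poll_interval_s": 30,
    },
    "destination": {
        "store": "postgresql",     # primary data store per the Integration notes
        "schema": "normalised",
    },
}
print(connector_config["connector"])
# tyler-cad
```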

**Last Reviewed:** 2026-02-05
**Last Updated:** 2026-04-14
