# RxDB Core API Documentation > Reference for the core RxDB API: RxDatabase, RxSchema, RxCollection, RxDocument, RxQuery, and related entities. This file contains all documentation content in a single document following the llmstxt.org standard. ## Installation # Install RxDB ## npm To install the latest release of `rxdb` and its dependencies and save it to your `package.json`, run: ```bash npm i rxdb --save ``` ## peer-dependency You also need to install the peer-dependency `rxjs` if you have not installed it before. ```bash npm i rxjs --save ``` ## polyfills RxDB is coded with ES8 and transpiled to ES5. This means you have to install [polyfills](https://developer.mozilla.org/en-US/docs/Glossary/Polyfill) to support older browsers. For example you can use the babel-polyfills with: ```bash npm i @babel/polyfill --save ``` If you need polyfills, you have to import them in your code. ```typescript import '@babel/polyfill'; ``` ## Polyfill the `global` variable When you use RxDB with [Angular](./articles/angular-database.md) or other **Webpack** based frameworks, you might get the error `Uncaught ReferenceError: global is not defined`. This is because some dependencies of RxDB assume a Node.js-specific `global` variable that is not added to browser runtimes by some bundlers. You have to add them manually, like we do [here](https://github.com/pubkey/rxdb/blob/master/examples/angular/src/polyfills.ts). ```ts (window as any).global = window; (window as any).process = { env: { DEBUG: undefined }, }; ``` ## Project Setup and Configuration In the [examples](https://github.com/pubkey/rxdb/tree/master/examples) folder you can find CI tested projects for different frameworks and use cases, while in the [/config](https://github.com/pubkey/rxdb/tree/master/config) folder base configuration files for Webpack, Rollup, Mocha, Karma, TypeScript are exposed. Consult [package.json](https://github.com/pubkey/rxdb/blob/master/package.json) for the versions of the packages supported. 
## Installing the latest RxDB build If you need the latest development state of RxDB, add it as a git dependency into your `package.json`. ```json "dependencies": { "rxdb": "git+https://git@github.com/pubkey/rxdb.git#commitHash" } ``` Replace `commitHash` with the hash of the latest [build-commit](https://github.com/pubkey/rxdb/search?q=build&type=Commits). ## Import To import `rxdb`, add this to your JavaScript file to import the default bundle that contains the RxDB core: ```typescript import { createRxDatabase, // ./rx-database.md /* ... */ } from 'rxdb'; ``` --- ## RxDB Docs # RxDB Documentation --- ## Quickstart # RxDB Quickstart Welcome to the RxDB Quickstart. Here we'll learn how to create a simple real-time app with the RxDB database that can store and query data persistently in a browser and update the UI in realtime on changes.
### Installation Install the RxDB library and the RxJS dependency: ```bash npm install rxdb rxjs ``` ### Pick a Storage RxDB is able to run in a wide range of JavaScript runtimes like browsers, mobile apps, desktop and servers. Therefore different storage engines exist that ensure the best performance depending on where RxDB is used. #### LocalStorage Use this for the simplest browser setup and very small datasets. It has a tiny bundle size and works anywhere [localStorage](./articles/localstorage.md) is available, but is not optimized for large data or heavy writes. ```ts import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage'; let storage = getRxStorageLocalstorage(); ``` #### IndexedDB πŸ‘‘ The premium [IndexedDB storage](./rx-storage-indexeddb.md) is a high-performance, browser-native storage with a smaller bundle and faster startup compared to Dexie-based IndexedDB. Recommended when you have [πŸ‘‘ premium](/premium/) access and care about performance and bundle size. ```ts import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; let storage = getRxStorageIndexedDB(); ``` #### Dexie.js [Dexie.js](./rx-storage-dexie.md) is a friendly wrapper around IndexedDB and is a great default for browser apps when you don't use premium. It's reliable, works well for medium-sized datasets, and is free to use. ```ts import { getRxStorageDexie } from 'rxdb/plugins/storage-dexie'; let storage = getRxStorageDexie(); ``` #### SQLite [SQLite](./rx-storage-sqlite.md) is ideal for React Native, Capacitor, Electron, Node.js and other hybrid or native environments. It gives you a fast, durable database on disk. Use the πŸ‘‘ premium storage for production; a trial version exists for quick experimentation. **Premium SQLite (Node.js example)** ```ts import { getRxStorageSQLite, getSQLiteBasicsNode } from 'rxdb-premium/plugins/storage-sqlite'; // Provide the sqliteBasics adapter for your runtime, e.g. Node.js, React Native, etc. 
// For example in Node.js you would derive // sqliteBasics from a sqlite3-compatible library: import sqlite3 from 'sqlite3'; const storage = getRxStorageSQLite({ sqliteBasics: getSQLiteBasicsNode(sqlite3) }); ``` **SQLite trial storage (Node.js, free)** ```ts import { getRxStorageSQLiteTrial, getSQLiteBasicsNodeNative } from 'rxdb/plugins/storage-sqlite'; import { DatabaseSync } from 'node:sqlite'; const storage = getRxStorageSQLiteTrial({ sqliteBasics: getSQLiteBasicsNodeNative(DatabaseSync) }); ``` #### Expo Filesystem πŸ‘‘ For React Native and Expo applications, the [Expo Filesystem storage](./rx-storage-filesystem-expo.md) offers superior performance compared to SQLite and Async Storage by utilizing OPFS JSI bindings. ```ts import { getRxStorageExpoAsync } from 'rxdb-premium/plugins/storage-filesystem-expo'; let storage = getRxStorageExpoAsync(); ``` #### And more... There are many more storages such as [MongoDB](./rx-storage-mongodb.md), [DenoKV](./rx-storage-denokv.md), [Filesystem](./rx-storage-filesystem-node.md), [Memory](./rx-storage-memory.md), [Memory-Mapped](./rx-storage-memory-mapped.md), [FoundationDB](./rx-storage-foundationdb.md) and more. [Browse the full list of storages](/rx-storage.html).
Which storage should I use? RxDB provides a wide range of storages depending on your JavaScript runtime and performance needs. In the browser: Use the LocalStorage storage for a simple setup and small build size. For bigger datasets, use either the Dexie.js storage (free) or the IndexedDB RxStorage if you have πŸ‘‘ premium access, which is a bit faster and has a smaller build size. In Electron and React Native: Use the SQLite RxStorage if you have πŸ‘‘ premium access, or the SQLite Trial RxStorage for tryouts. In Capacitor: Use the SQLite RxStorage if you have πŸ‘‘ premium access, otherwise use the LocalStorage storage.
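The guidance above can be condensed into a small helper. This is only a sketch of the decision table, not an RxDB API: `pickStorageModule`, `Runtime` and the runtime labels are made up for illustration, and the function merely returns the plugin path you would import the storage from.

```typescript
// Hypothetical helper that encodes the storage recommendations above.
// It returns the module path you would import; the premium flag
// switches to the πŸ‘‘ storages where they are recommended.
type Runtime = 'browser' | 'electron' | 'react-native' | 'capacitor';

const freeStorageByRuntime: Record<Runtime, string> = {
    'browser': 'rxdb/plugins/storage-dexie',
    'electron': 'rxdb/plugins/storage-sqlite',      // SQLite trial storage
    'react-native': 'rxdb/plugins/storage-sqlite',  // SQLite trial storage
    'capacitor': 'rxdb/plugins/storage-localstorage'
};

function pickStorageModule(runtime: Runtime, hasPremium: boolean): string {
    if (hasPremium) {
        // premium storages are the recommended default where available
        return runtime === 'browser'
            ? 'rxdb-premium/plugins/storage-indexeddb'
            : 'rxdb-premium/plugins/storage-sqlite';
    }
    return freeStorageByRuntime[runtime];
}
```

For very small browser datasets you would bypass this table entirely and take the LocalStorage storage, as described above.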
### Dev-Mode When you use RxDB in development, you should always enable the [dev-mode plugin](./dev-mode.md), which adds helpful checks and validations, and tells you if you do something wrong. ```ts import { addRxPlugin } from 'rxdb/plugins/core'; import { RxDBDevModePlugin } from 'rxdb/plugins/dev-mode'; addRxPlugin(RxDBDevModePlugin); ``` ### Schema Validation [Schema validation](./schema-validation.md) is required when using dev-mode and recommended (but optional) in production. Wrap your storage with the AJV schema validator to ensure all documents match your schema before being saved. ```ts import { wrappedValidateAjvStorage } from 'rxdb/plugins/validate-ajv'; storage = wrappedValidateAjvStorage({ storage }); ``` ### Create a Database A database is the top‑level container in RxDB, responsible for managing collections, coordinating persistence, and providing reactive change streams. ```ts import { createRxDatabase } from 'rxdb/plugins/core'; const myDatabase = await createRxDatabase({ name: 'mydatabase', storage: storage }); ``` ### Add a Collection An RxDatabase contains [RxCollection](./rx-collection.md)s for storing and querying data. A collection is similar to an SQL table, and individual records are stored in the collection as JSON documents. An [RxDatabase](./rx-database.md) can have as many collections as you need. Add a collection with a [schema](./rx-schema.md) to the database: ```ts await myDatabase.addCollections({ // name of the collection todos: { // we use the JSON-schema standard schema: { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have maxLength }, name: { type: 'string' }, done: { type: 'boolean' }, timestamp: { type: 'string', format: 'date-time' } }, required: ['id', 'name', 'done', 'timestamp'] } } }); ``` ### Insert a Document Now that we have an RxCollection we can store some [documents](./rx-document.md) in it. 
```ts const myDocument = await myDatabase.todos.insert({ id: 'todo1', name: 'Learn RxDB', done: false, timestamp: new Date().toISOString() }); ``` ### Run a Query Execute a [query](./rx-query.md) that returns all found documents once: ```ts const foundDocuments = await myDatabase.todos.find({ selector: { done: { $eq: false } } }).exec(); ``` ### Update a Document In the first found document, set `done` to `true`: ```ts const firstDocument = foundDocuments[0]; await firstDocument.patch({ done: true }); ``` ### Delete a Document Delete the document so that it can no longer be found in queries: ```ts await firstDocument.remove(); ``` ### Observe a Query Subscribe to data changes so that your UI is always up-to-date with the data stored on disk. RxDB allows you to subscribe to data changes even when the change happens in another part of your application, another browser tab, or during database [replication/synchronization](./replication.md): ```ts const observable = myDatabase.todos.find({ selector: { done: { $eq: false } } }).$ // get the observable via RxQuery.$; observable.subscribe(notDoneDocs => { console.log('Currently have ' + notDoneDocs.length + ' things to do'); // -> here you would re-render your app to show the updated document list }); ``` ### Observe a Document Value You can also subscribe to the fields of a single RxDocument. Add the `$` sign to the desired field and then subscribe to the returned observable. ```ts myDocument.done$.subscribe(isDone => { console.log('done: ' + isDone); }); ``` ### Sync the Client RxDB has multiple [replication plugins](./replication.md) to replicate database state with a server. 
#### HTTP ```ts import { replicateHTTP, pullQueryBuilderFromRxSchema, } from "rxdb/plugins/replication-http"; replicateHTTP({ collection: db.todos, push: { handler: async (rows) => { return fetch("https://example.com/api/todos/push", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(rows), }).then((res) => res.json()); }, }, pull: { handler: async (lastCheckpoint) => { return fetch( "https://example.com/api/todos/pull?" + new URLSearchParams({ checkpoint: JSON.stringify(lastCheckpoint) }), ).then((res) => res.json()); }, }, }); ``` #### GraphQL ```ts import { replicateGraphQL } from 'rxdb/plugins/replication-graphql'; replicateGraphQL({ collection: db.todos, url: 'https://example.com/graphql', push: { batchSize: 50 }, pull: { batchSize: 50 } }); ``` #### WebRTC (P2P) The easiest way to replicate data between your clients' devices is the [WebRTC replication plugin](./replication-webrtc.md) that replicates data between devices without a centralized server. This makes it easy to try out replication without having to host anything: ```ts import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; replicateWebRTC({ collection: myDatabase.todos, connectionHandlerCreator: getConnectionHandlerSimplePeer({}), topic: '', // <- set any app-specific room id here. secret: 'mysecret', pull: {}, push: {} }) ``` #### CouchDB ```ts import { replicateCouchDB } from 'rxdb/plugins/replication-couchdb'; replicateCouchDB({ collection: db.todos, url: 'http://example.com/todos/', push: {}, pull: {} }); ``` #### And more... Explore all [replication plugins](/replication.html), including advanced conflict handling and custom protocols.
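The pull handler in the HTTP example above has to resolve to a batch of changed documents plus a checkpoint that the client hands back on the next pull. A server-side sketch of that contract follows; the document shape, the checkpoint fields and the `buildPullResponse` name are assumptions for illustration, not part of the RxDB API.

```typescript
// Sketch of a server-side pull endpoint: return all documents changed
// since the client's checkpoint, oldest first, capped at batchSize,
// plus a new checkpoint pointing at the last returned document.
type Checkpoint = { updatedAt: number; id: string };
type TodoDoc = {
    id: string;
    name: string;
    done: boolean;
    updatedAt: number;
    _deleted: boolean; // deletes are replicated as flagged documents
};

function buildPullResponse(
    all: TodoDoc[],
    checkpoint: Checkpoint | null,
    batchSize: number
): { documents: TodoDoc[]; checkpoint: Checkpoint | null } {
    const since = checkpoint ? checkpoint.updatedAt : 0;
    const documents = all
        .filter(d => d.updatedAt > since)
        .sort((a, b) => a.updatedAt - b.updatedAt)
        .slice(0, batchSize);
    const last = documents[documents.length - 1];
    return {
        documents,
        // keep the old checkpoint when nothing changed
        checkpoint: last ? { updatedAt: last.updatedAt, id: last.id } : checkpoint
    };
}
```

The client stores the returned checkpoint and sends it with the next pull request, so the server never has to remember per-client state.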
## Next steps You are now ready to dive deeper into RxDB. - Start reading the full documentation [here](./install.md). - There is a full implementation of the [quickstart guide](https://github.com/pubkey/rxdb-quickstart) so you can clone that repository and play with the code. - For frameworks and runtimes like Angular, React Native and others, check out the list of [example implementations](https://github.com/pubkey/rxdb/tree/master/examples). - Also please continue reading the documentation, join the community on our [Discord chat](/chat/), and star the [GitHub repo](https://github.com/pubkey/rxdb). - If you are using RxDB in a production environment and are able to support its continued development, please take a look at the [πŸ‘‘ Premium package](/premium/) which includes additional plugins and utilities. --- ## Attachments # Attachments Attachments are binary data files that can be attached to an `RxDocument`, like a file that is attached to an email. Using attachments instead of adding the data to the normal document ensures that you keep good **performance** when querying and writing documents, even when large binary data, like an image file, has to be stored. - You can store strings, binary files, images and whatever you want side by side with your documents. - Deleted documents automatically lose all their attachment data. - Not all [replication](./replication.md) plugins support the replication of attachments. - Attachments can be stored [encrypted](./encryption.md). Internally, attachment data is stored as `Blob` objects. Blob is the canonical internal type because it is immutable, carries MIME type metadata via `Blob.type`, provides synchronous size via `Blob.size`, and is [structured-cloneable](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm) (works with Worker/Electron `postMessage` and IndexedDB).
Conversion to `ArrayBuffer` only happens at system boundaries that require it: encryption (Web Crypto), compression (CompressionStream), digest hashing, and WebSocket serialization. ## Add the attachments plugin To enable the attachments, you have to add the `attachments` plugin. ```ts import { addRxPlugin } from 'rxdb'; import { RxDBAttachmentsPlugin } from 'rxdb/plugins/attachments'; addRxPlugin(RxDBAttachmentsPlugin); ``` ## Enable attachments in the schema Before you can use attachments, you have to ensure that the attachments-object is set in the schema of your `RxCollection`. ```javascript const mySchema = { version: 0, type: 'object', properties: { // . // . // . }, attachments: { // if true, the attachment-data will be // encrypted with the db-password encrypted: true } }; const myCollection = await myDatabase.addCollections({ humans: { schema: mySchema } }); ``` ## putAttachment() Adds an attachment to a `RxDocument`. Returns a Promise with the new attachment. ```javascript import { createBlob } from 'rxdb'; const attachment = await myDocument.putAttachment( { id: 'cat.txt', // (string) name of the attachment data: createBlob('meowmeow', 'text/plain'), // (Blob) data of the attachment type: 'text/plain' // (string) type of the attachment // data like 'image/jpeg' } ); ``` :::warning Expo/React-Native does not support the `Blob` API natively. Make sure you use your own polyfill that properly supports `blob.arrayBuffer()` when using RxAttachments or use the `putAttachmentBase64()` and `getDataBase64()` so that you do not have to create blobs. ::: ## putAttachments() Write multiple attachments to a `RxDocument` in a single atomic operation. This is more efficient than calling `putAttachment()` multiple times because it only performs one write to the storage. Returns a Promise with an array of the new attachments. 
```javascript import { createBlob } from 'rxdb'; const attachments = await myDocument.putAttachments([ { id: 'cat.txt', data: createBlob('meowmeow', 'text/plain'), type: 'text/plain' }, { id: 'dog.txt', data: createBlob('woof', 'text/plain'), type: 'text/plain' } ]); ``` ## putAttachmentBase64() Same as `putAttachment()` but accepts a plain base64 string instead of a `Blob`. ```ts const attachment = await doc.putAttachmentBase64({ id: 'cat.txt', length: 4, data: 'bWVvdw==', type: 'text/plain' }); ``` ## getAttachment() Returns an `RxAttachment` by its id. Returns `null` when the attachment does not exist. ```javascript const attachment = myDocument.getAttachment('cat.jpg'); ``` ## allAttachments() Returns an array of all attachments of the `RxDocument`. ```javascript const attachments = myDocument.allAttachments(); ``` ## allAttachments$ Gets an Observable which emits a stream of all attachments from the document. Re-emits each time an attachment gets added or removed from the [RxDocument](./rx-document.md). ```javascript const all = []; myDocument.allAttachments$.subscribe( attachments => all = attachments ); ``` ## RxAttachment The attachments of RxDB are represented by the type `RxAttachment` which has the following attributes/methods. ### doc The `RxDocument` which the attachment is assigned to. ### id The id as `string` of the attachment. ### type The type as `string` of the attachment. ### length The length of the data of the attachment as `number`. ### digest The hash of the attachments data as `string`. :::note The digest is NOT calculated by RxDB, instead it is calculated by the RxStorage. The only guarantee is that the digest will change when the attachments data changes. ::: ### rev The revision-number of the attachment as `number`. ### remove() Removes the attachment. Returns a Promise that resolves when done. 
```javascript const attachment = myDocument.getAttachment('cat.jpg'); await attachment.remove(); ``` ## getData() Returns a Promise which resolves the attachment's data as `Blob`. (async) ```javascript const attachment = myDocument.getAttachment('cat.jpg'); const blob = await attachment.getData(); // Blob ``` ## getDataBase64() Returns a Promise which resolves the attachment's data as **base64** `string`. ```javascript const attachment = myDocument.getAttachment('cat.jpg'); const base64Database = await attachment.getDataBase64(); // 'bWVvdw==' ``` ## getStringData() Returns a Promise which resolves the attachment's data as `string`. ```javascript const attachment = await myDocument.getAttachment('cat.jpg'); const data = await attachment.getStringData(); // 'meow' ``` ## Inline attachments on insert and upsert Instead of inserting a document first and then calling `putAttachment()` separately, you can include attachments directly in the document data when using `insert()`, `bulkInsert()`, `upsert()`, `bulkUpsert()`, or `incrementalUpsert()`. Provide `_attachments` as an array of `{ id, type, data }` objects. ```javascript import { createBlob } from 'rxdb'; // insert with inline attachments const doc = await myCollection.insert({ name: 'foo', _attachments: [ { id: 'photo.jpg', type: 'image/jpeg', data: myJpegBlob }, { id: 'notes.txt', type: 'text/plain', data: createBlob('some notes', 'text/plain') } ] }); const attachment = doc.getAttachment('photo.jpg'); ``` ### Upsert behavior with attachments When upserting a document that already exists, attachments from the new data are **merged** with the document's existing attachments by default. This means existing attachments not mentioned in the upsert data are preserved. 
To replace all existing attachments instead, pass `{ deleteExistingAttachments: true }` as the second argument: ```javascript // Merge (default): keeps existing attachments, adds/updates new ones const doc = await myCollection.upsert(docData); // Replace: only the attachments in docData will exist after the upsert const doc2 = await myCollection.upsert(docData, { deleteExistingAttachments: true }); ``` This option works with `upsert()`, `bulkUpsert()`, and `incrementalUpsert()`. ## Attachment compression {#attachment-compression} Storing many attachments can become a problem when the device runs out of disk space. Therefore it can make sense to compress attachments before storing them in the [RxStorage](./rx-storage.md). With the `attachments-compression` plugin you can compress the attachment data on writes and decompress it on reads. This happens internally and does not change how you use the API. The compression runs with the [Compression Streams API](https://developer.mozilla.org/en-US/docs/Web/API/Compression_Streams_API) which is only supported in [newer browsers](https://caniuse.com/?search=compressionstream). ## MIME-type-aware compression The compression plugin is MIME-type-aware. It only compresses attachment types that benefit from compression (text, JSON, SVG, etc.) and passes through already-compressed formats (JPEG, PNG, MP4, etc.) as-is. This avoids wasting CPU cycles on files that won't shrink. A built-in default list of compressible types is used automatically. You can override it with the `compressibleTypes` option in your schema: ```ts import { wrappedAttachmentsCompressionStorage } from 'rxdb/plugins/attachments-compression'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; // create a wrapped storage with attachment-compression.
const storageWithAttachmentsCompression = wrappedAttachmentsCompressionStorage({ storage: getRxStorageIndexedDB() }); const db = await createRxDatabase({ name: 'mydatabase', storage: storageWithAttachmentsCompression }); // set the compression mode at the schema level const mySchema = { version: 0, type: 'object', properties: { // .. }, attachments: { // Specify the compression mode. // OneOf ['deflate', 'gzip'] compression: 'deflate', // Optional: override which MIME types get compressed. // Supports wildcard prefix matching // (e.g. 'text/*' matches 'text/plain', // 'text/html', etc.). // If omitted, a built-in default list of compressible types is used. compressibleTypes: [ 'text/*', 'application/json', 'application/xml', 'image/svg+xml' // ... add your own patterns ] } }; /* ... create your collections as usual and store attachments in them. */ ``` The built-in default list covers MIME types that benefit from compression, such as text, JSON, XML and SVG formats. Binary formats like `image/jpeg`, `image/png`, `video/*`, and `audio/*` are **not** in the default list and will be stored without re-compression. --- ## Master Data - Create and Manage RxCollections # RxCollection A collection stores documents of the same type. ## Creating a Collection To create one or more collections you need an [RxDatabase](./rx-database.md) object which has the `.addCollections()` method. Every collection needs a collection name and a valid [RxJsonSchema](./rx-schema.md). Other attributes are optional.
```js const myCollections = await myDatabase.addCollections({ // key = collectionName humans: { schema: mySchema, statics: {}, // (optional) ORM-functions // for this collection methods: {}, // (optional) ORM-functions for documents attachments: {}, // (optional) ORM-functions for attachments options: {}, // (optional) Custom parameters // that might be used in plugins migrationStrategies: {}, // (optional) autoMigrate: true, // (optional) [default=true] cacheReplacementPolicy: function(){}, // (optional) custom // cache replacement policy conflictHandler: function(){} // (optional) custom // conflict handler }, // you can create multiple collections at once animals: { // ... } }); ``` ### name The name uniquely identifies the collection and is used to retrieve the collection from the database. Two different collections in the same database can never have the same name. Collection names must match the following regex: `^[a-z][a-z0-9]*$`. ### schema The schema defines how the documents of the collection are structured. RxDB uses a schema format similar to [JSON Schema](https://json-schema.org/). Read more about the RxDB schema format [here](./rx-schema.md). ### ORM-functions With the parameters `statics`, `methods` and `attachments`, you can define ORM functions that are applied to each of these objects that belong to this collection. See [ORM/DRM](./orm.md). ### Migration With the parameters `migrationStrategies` and `autoMigrate` you can specify how migration between different schema-versions should be done. [See Migration](./migration-schema.md).
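To make the `migrationStrategies` parameter concrete, here is a minimal sketch: the keys are the target schema versions and each value is a function that maps a document written with the previous schema version to the new shape. The `HumanV0`/`HumanV1` types and the renamed field are hypothetical examples, not from the RxDB docs.

```typescript
// Hypothetical schema change: version 1 renames `name` to `firstName`.
type HumanV0 = { id: string; name: string };
type HumanV1 = { id: string; firstName: string };

const migrationStrategies = {
    // called once for every stored document that still has version 0
    1: (oldDoc: HumanV0): HumanV1 => ({
        id: oldDoc.id,
        firstName: oldDoc.name
    })
};

// This object would be passed alongside the bumped schema, e.g.:
// humans: { schema: schemaV1, migrationStrategies, autoMigrate: true }
```

With `autoMigrate: true` (the default), the migration runs automatically when the collection is created with the new schema version.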
## Get a collection from the database To get an existing collection from the database, call the collection name directly on the database: ```javascript // newly created collection const collections = await db.addCollections({ heroes: { schema: mySchema } }); const collection2 = db.heroes; console.log(collections.heroes === collection2); //> true ``` ## Functions ### Observe $ Calling this will return an [rxjs-Observable](https://rxjs.dev/guide/observable) which streams every change to data of this collection. ```js myCollection.$.subscribe(changeEvent => console.dir(changeEvent)); // you can also observe single event-types with insert$ update$ remove$ myCollection.insert$.subscribe(changeEvent => console.dir(changeEvent)); myCollection.update$.subscribe(changeEvent => console.dir(changeEvent)); myCollection.remove$.subscribe(changeEvent => console.dir(changeEvent)); ``` ### insert() Use this to insert new documents into the database. The collection will validate the schema and automatically encrypt any encrypted fields. Returns the new RxDocument. ```js const doc = await myCollection.insert({ name: 'foo', lastname: 'bar' }); ``` ### insertIfNotExists() The insertIfNotExists() method attempts to insert a new document into the collection only if a document with the same primary key does not already exist. This is useful for ensuring uniqueness without having to manually check for existing records before inserting or handling [conflicts](./transactions-conflicts-revisions.md). Returns either the newly added [RxDocument](./rx-document.md) or the previously existing document. ```js const doc = await myCollection.insertIfNotExists({ name: 'foo', lastname: 'bar' }); ``` ### bulkInsert() When you have to insert many documents at once, use bulk insert. This is much faster than calling `.insert()` multiple times. Returns an object with `success` and `error` arrays.
```js const result = await myCollection.bulkInsert([{ name: 'foo1', lastname: 'bar1' }, { name: 'foo2', lastname: 'bar2' }]); // > { // success: [RxDocument, RxDocument], // error: [] // } ``` :::note `bulkInsert` will not fail on update conflicts and you cannot expect that on failure the other documents are not inserted. Also, `bulkInsert()` will not throw if a single document fails validation. Instead it will return the error in the `.error` property of the returned object. ::: ### bulkRemove() When you want to remove many documents at once, use bulk remove. Returns an object with `success` and `error` arrays. ```js const result = await myCollection.bulkRemove([ 'primary1', 'primary2' ]); // > { // success: [RxDocument, RxDocument], // error: [] // } ``` Instead of providing the document ids, you can also use the [RxDocument](./rx-document.md) instances. This can have better performance if your code already has the document instances at the moment of removal: ```js const result = await myCollection.bulkRemove([ myRxDocument1, myRxDocument2, /* ... */ ]); ``` ### upsert() Inserts the document if it does not exist within the collection, otherwise it will overwrite it. Returns the new or overwritten RxDocument. When the document already exists, any [inline attachments](./rx-attachment.md#inline-attachments-on-insert-and-upsert) in the upsert data are **merged** with existing attachments by default. Pass `{ deleteExistingAttachments: true }` as the second argument to replace all existing attachments instead. ```js const doc = await myCollection.upsert({ name: 'foo', lastname: 'bar2' }); // with options const doc2 = await myCollection.upsert(docData, { deleteExistingAttachments: true }); ``` ### bulkUpsert() Same as `upsert()` but runs over multiple documents. Improves performance compared to running many `upsert()` calls. Returns `success` and `error` arrays.
Accepts an optional second argument for [upsert options](./rx-attachment.md#upsert-behavior-with-attachments). ```js const docs = await myCollection.bulkUpsert([ { name: 'foo', lastname: 'bar2' }, { name: 'bar', lastname: 'foo2' } ]); /** * { * success: [RxDocument, RxDocument] * error: [], * } */ ``` ### incrementalUpsert() When you run many upsert operations on the same RxDocument in a very short timespan, you might get a `409 Conflict` error. This means that you tried to run a `.upsert()` on the document while the previous upsert operation was still running. To prevent these types of errors, you can run incremental upsert operations. The behavior is similar to [RxDocument.incrementalModify](./rx-document.md#incrementalModify). ```js const docData = { name: 'Bob', // primary lastName: 'Kelso' }; myCollection.upsert(docData); myCollection.upsert(docData); // -> throws because of parallel update to the same document myCollection.incrementalUpsert(docData); myCollection.incrementalUpsert(docData); myCollection.incrementalUpsert(docData); // wait until last upsert finished await myCollection.incrementalUpsert(docData); // -> works ``` ### find() To find documents in your collection, use this method. [See RxQuery.find()](./rx-query.md#find). ```js // find all that are older than 18 const olderDocuments = await myCollection .find() .where('age') .gt(18) .exec(); // execute ``` ### findOne() This basically does what find() does, but returns only a single document. You can pass a primary key value to find a single document more easily. [See RxQuery.findOne()](./rx-query.md#findOne). ```js // get document with name: 'foo' myCollection.findOne({ selector: { name: 'foo' } }).exec().then(doc => console.dir(doc)); // get document by primary, functionally identical to above query myCollection.findOne('foo') .exec().then(doc => console.dir(doc)); ``` ### findByIds() Find many documents by their id (primary value).
This has much better performance than running multiple `findOne()` calls or a `find()` with a big `$or` selector. Returns a `Map` where the primary key of the document is mapped to the document. Documents that do not exist or are deleted will not be inside the returned Map. ```js const ids = [ 'alice', 'bob', /* ... */ ]; const docsMap = await myCollection.findByIds(ids); console.dir(docsMap); // Map(2) ``` :::note The `Map` returned by `findByIds` is not guaranteed to return elements in the same order as the list of ids passed to it. ::: ### exportJSON() Use this function to create a JSON export from every document in the collection. Before `exportJSON()` and `importJSON()` can be used, you have to add the `json-dump` plugin. ```javascript import { addRxPlugin } from 'rxdb'; import { RxDBJsonDumpPlugin } from 'rxdb/plugins/json-dump'; addRxPlugin(RxDBJsonDumpPlugin); ``` ```js myCollection.exportJSON() .then(json => console.dir(json)); ``` ### importJSON() To import the JSON dump into your collection, use this function. ```js // import the dump to the database myCollection.importJSON(json) .then(() => console.log('done')); ``` Note that importing will fire events for each inserted document. ### remove() Removes all known data of the collection and its previous versions. This removes the documents, the schemas, and older schemaVersions. ```js await myCollection.remove(); // collection is now removed and can be re-created ``` ### close() Removes the collection's object instance from the [RxDatabase](./rx-database.md). This is to free up memory and stop all observers and replications. It will not delete the collection's data. When you create the collection again with `database.addCollections()`, the newly added collection will still have all data. ```js await myCollection.close(); ``` ### onClose / onRemove() With these you can add a function that is run when the collection is closed or removed.
This works even across multiple browser tabs so you can detect when another tab removes the collection and your application can behave accordingly. ```js await myCollection.onClose(() => console.log('I am closed')); await myCollection.onRemove(() => console.log('I am removed')); ``` ### isRxCollection Returns true if the given object is an instance of RxCollection. Returns false if not. ```js const is = isRxCollection(myObj); ``` ## FAQ
When I reload the browser window, will my collections still be in the database?

No, the JavaScript instances of the collections are not automatically recreated on page reloads. You have to call the `addCollections()` method each time you create your database. This creates the JavaScript object instance of the RxCollection so that you can use it on the RxDatabase. The persisted data will automatically be available in your RxCollection each time you create it.
How to remove the limit of 13 collections?

In the open-source version of RxDB, the number of RxCollections that can exist in parallel is limited to `13`. To remove this limit, you can purchase the [Premium Plugins](/premium/) and call the `setPremiumFlag()` function before creating a database:

```ts
import { setPremiumFlag } from 'rxdb-premium/plugins/shared';
setPremiumFlag();
```
---

## RxDatabase - The Core of Your Realtime Data

# RxDatabase

An RxDatabase object contains your [collections](./rx-collection.md) and handles the synchronization of change events.

## Creation

The database is created with the asynchronous `.createRxDatabase()` function of the core RxDB module. It has the following parameters:

```javascript
import { createRxDatabase } from 'rxdb/plugins/core';
import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage';

const db = await createRxDatabase({
  name: 'heroesdb',                    // <- name
  storage: getRxStorageLocalstorage(), // <- RxStorage
  /* Optional parameters: */
  password: 'myPassword',              // <- password (optional)
  multiInstance: true,                 // <- multiInstance (optional, default: true)
  eventReduce: true,                   // <- eventReduce (optional, default: false)
  cleanupPolicy: {}                    // <- custom cleanup policy (optional)
});
```

### name

The database name is a string which uniquely identifies the database. When two RxDatabases have the same name and use the same `RxStorage`, their data can be assumed to be equal and they will share events with each other. Depending on the storage or adapter, this can also be used to define the filesystem folder of your data.

### storage

RxDB works on top of an implementation of the [RxStorage](./rx-storage.md) interface. This interface is an abstraction that allows you to use different underlying databases that actually handle the documents. Depending on your use case, you might use a different `storage` with different trade-offs in performance, bundle size or supported runtimes.

There are many `RxStorage` implementations that can be used depending on the JavaScript environment and performance requirements. For example, you can use the [LocalStorage RxStorage](./rx-storage-localstorage.md) in the browser or the [MongoDB RxStorage](./rx-storage-mongodb.md) in Node.js.

- [List of RxStorage implementations](./rx-storage.md)

```javascript
// use the LocalStorage that stores data in the browser.
import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage';
const db = await createRxDatabase({
  name: 'mydatabase',
  storage: getRxStorageLocalstorage()
});

// ...or use the MongoDB RxStorage in Node.js.
import { getRxStorageMongoDB } from 'rxdb/plugins/storage-mongodb';
const dbMongo = await createRxDatabase({
  name: 'mydatabase',
  storage: getRxStorageMongoDB({
    connection: 'mongodb://localhost:27017,localhost:27018,localhost:27019'
  })
});
```

### password `(optional)`

If you want to use encrypted fields in the collections of a database, you have to set a password for it. The password must be a string with at least 12 characters.

[Read more about encryption here](./encryption.md).

### multiInstance `(optional=true)`

When you create more than one instance of the same database in a single JavaScript runtime, you should set `multiInstance` to `true`. This will enable event sharing between the instances. For example, when the user has opened multiple browser windows, events will be shared between them so that all windows react to the same changes. `multiInstance` should be set to `false` when you have a single instance, like a single Node.js process, a React Native app, a Cordova app or a single-window [Electron](./electron-database.md) app. This can decrease the startup time because no instance coordination has to be done.

### eventReduce `(optional=false)`

One big benefit of having a realtime database is that big performance optimizations can be done when the database knows a query is observed and the updated results are needed continuously. RxDB uses the [EventReduce Algorithm](https://github.com/pubkey/event-reduce) to optimize observed or recurring queries. For better performance, you should always set `eventReduce: true`. This will also be the default in the next major RxDB version.
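To get an intuition for what EventReduce does, here is a toy illustration in plain TypeScript. This is a simplified sketch of the idea only, not RxDB's actual implementation: for an observed query like "all docs with age > 18", an insert event can often be applied directly to the cached result set instead of re-running the full query against the storage.

```typescript
// Toy illustration of the event-reduce idea (not RxDB's implementation):
// keep the cached result set of an observed query and update it from
// change events instead of re-querying all stored documents per write.
type Doc = { id: string; age: number };

const matchesQuery = (doc: Doc): boolean => doc.age > 18; // the observed selector

let cachedResults: Doc[] = [];

function onInsertEvent(doc: Doc): void {
  // the event alone tells us how the cached result set changes
  if (matchesQuery(doc)) {
    cachedResults = [...cachedResults, doc];
  }
}

onInsertEvent({ id: 'alice', age: 30 }); // matches -> added to cached results
onInsertEvent({ id: 'bob', age: 10 });   // does not match -> results unchanged
```

In the real algorithm, update and delete events are handled in the same event-driven way, and only events whose effect on the result set cannot be determined cause an actual re-execution of the query.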
### ignoreDuplicate `(optional=false)`

If you create multiple RxDatabase instances with the same name and the same adapter, it is very likely that you have done something wrong. To prevent this common mistake, RxDB will throw an error when you do this. In some rare cases, like unit tests, you want to do this intentionally by setting `ignoreDuplicate` to `true`. Because setting `ignoreDuplicate: true` in production would decrease performance by running multiple instances of the same database, `ignoreDuplicate` is only allowed to be set in [dev-mode](./dev-mode.md).

```js
const db1 = await createRxDatabase({
  name: 'heroesdb',
  storage: getRxStorageLocalstorage(),
  ignoreDuplicate: true
});
const db2 = await createRxDatabase({
  name: 'heroesdb',
  storage: getRxStorageLocalstorage(),
  // this create-call will not throw because
  // you explicitly allow it
  ignoreDuplicate: true
});
```

### closeDuplicates `(optional=false)`

Closes all other RxDatabase instances that have the same storage+name combination.

```js
const db1 = await createRxDatabase({
  name: 'heroesdb',
  storage: getRxStorageLocalstorage(),
  closeDuplicates: true
});
const db2 = await createRxDatabase({
  name: 'heroesdb',
  storage: getRxStorageLocalstorage(),
  closeDuplicates: true // this create-call will close db1
});
// db1 is now closed.
```

### hashFunction

By default, RxDB will use `crypto.subtle.digest('SHA-256', data)` for hashing. If you need a different hash function or the `crypto.subtle` API is not supported in your JavaScript runtime, you can provide your own hash function instead. A hash function gets a `string`, `ArrayBuffer`, or `Blob` as input and returns a `Promise` that resolves to a string. When a `Blob` is received (for attachment digest hashing), convert it to a string or ArrayBuffer before hashing.
```ts // example hash function that runs in plain JavaScript import { sha256 } from 'ohash'; import { blobToBase64String } from 'rxdb'; async function myOwnHashFunction(input: string | ArrayBuffer | Blob) { if (typeof Blob !== 'undefined' && input instanceof Blob) { input = await blobToBase64String(input); } else if (input instanceof ArrayBuffer) { input = new TextDecoder().decode(new Uint8Array(input)); } return sha256(input); } const db = await createRxDatabase({ hashFunction: myOwnHashFunction /* ... */ }); ``` If you get the error message `TypeError: Cannot read properties of undefined (reading 'digest')` this likely means that you are neither running on `localhost` nor on `https` which is why your browser might not allow access to `crypto.subtle.digest`. ## Methods ### Observe with $ Calling this will return an [RxJS Observable](http://reactivex.io/documentation/observable.html) which streams all write events of the `RxDatabase`. ```javascript myDb.$.subscribe(changeEvent => console.dir(changeEvent)); ``` ### exportJSON() Use this function to create a JSON export from every piece of data in every collection of this database. You can pass `true` as a parameter to decrypt the encrypted data fields of your document. Before `exportJSON()` and `importJSON()` can be used, you have to add the `json-dump` plugin. ```javascript import { addRxPlugin } from 'rxdb'; import { RxDBJsonDumpPlugin } from 'rxdb/plugins/json-dump'; addRxPlugin(RxDBJsonDumpPlugin); ``` ```javascript myDatabase.exportJSON() .then(json => console.dir(json)); ``` ### importJSON() To import the JSON dumps into your database, use this function. ```javascript // import the dump to the database emptyDatabase.importJSON(json) .then(() => console.log('done')); ``` ### backup() Writes the current (or ongoing) database state to the filesystem. [Read more](./backup.md) ### waitForLeadership() Returns a Promise which resolves when the RxDatabase becomes [elected leader](./leader-election.md). 
### requestIdlePromise()

Returns a promise which resolves when the database is idle. This works similarly to [requestIdleCallback](https://developer.mozilla.org/de/docs/Web/API/Window/requestIdleCallback) but tracks the idleness of the database instead of the CPU. Use this for semi-important tasks like cleanups which should not affect the speed of important tasks.

```javascript
myDatabase.requestIdlePromise().then(() => {
  // this will run at the moment the database has nothing else to do
  myCollection.customCleanupFunction();
});

// with timeout
myDatabase.requestIdlePromise(1000 /* time in ms */).then(() => {
  // this will run at the moment the database has nothing else to do
  // or the timeout has passed
  myCollection.customCleanupFunction();
});
```

### close()

Closes the database's object instance. This is to free up memory and stop all observers and replications. Returns a `Promise` that resolves when the database is closed. Closing a database will not remove the database's data. When you create the database again with `createRxDatabase()`, all data will still be there.

```javascript
await myDatabase.close();
```

### remove()

Wipes all documents from the storage. Use this to free up disk space.

```javascript
await myDatabase.remove();
// database instance is now gone
```

You can also clear a database without creating its instance first by using `removeRxDatabase()`. This is useful if you want to migrate data or reset the user's state by renaming the database. Then you can remove the previous data with `removeRxDatabase()` without creating a RxDatabase first. Notice that this will only remove the stored data on the storage. It will not clear the cache of any [RxDatabase](./rx-database.md) instances.

```javascript
import { removeRxDatabase } from 'rxdb';
import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage';

removeRxDatabase('mydatabasename', getRxStorageLocalstorage());
```

### isRxDatabase

Returns true if the given object is an instance of RxDatabase. Returns false if not.
```javascript
import { isRxDatabase } from 'rxdb';
const is = isRxDatabase(myObj);
```

### collections$

Emits an event whenever an [RxCollection](./rx-collection.md) is added to or removed from the RxDatabase instance. Notice that this only emits for the JavaScript instance of the RxCollection class; it does not emit events across browser tabs.

```javascript
const sub = myDatabase.collections$.subscribe(event => {
  console.dir(event);
});
await myDatabase.addCollections({
  heroes: {
    schema: mySchema
  }
}); // -> emits the event
sub.unsubscribe();
```

---

## RxDocument

An RxDocument is an object which represents the data of a single JSON document stored in a [collection](./rx-collection.md). It can be compared to a single record in a relational database table. You get an `RxDocument` either as the return value of inserts/updates, or in the result set of [queries](./rx-query.md).

RxDB works on RxDocuments instead of plain JSON data to provide more convenient operations on the documents. Also, documents that are fetched multiple times by different queries or operations are automatically de-duplicated by RxDB in memory.

## insert

To insert a document into a collection, you have to call the collection's `.insert()` function.

```js
await myCollection.insert({
  name: 'foo',
  lastname: 'bar'
});
```

## find

To find documents in a collection, you have to call the collection's `.find()` function. [See RxQuery](./rx-query.md).

```js
const docs = await myCollection.find().exec(); // <- find all documents
```

## Functions

### get()

This will get a single field of the document. If the field is encrypted, it will be automatically decrypted before returning.

```js
const name = myDocument.get('name'); // returns the name
// OR
const name = myDocument.name;
```

### get$()

This function returns an observable of the given path's value. The current value of this path will be emitted each time the document changes.
```js // get the live-updating value of 'name' let isName; myDocument.get$('name') .subscribe(newName => { isName = newName; }); await myDocument.incrementalPatch({name: 'foobar2'}); console.dir(isName); // isName is now 'foobar2' // OR myDocument.name$ .subscribe(newName => { isName = newName; }); ``` ### proxy-get All properties of an `RxDocument` are assigned as getters so you can also directly access values instead of using the get()-function. ```js // Identical to myDocument.get('name'); const name = myDocument.name; // Can also get nested values. const nestedValue = myDocument.whatever.nestedfield; // Also usable with observables: myDocument.firstName$.subscribe(newName => console.log('name is: ' + newName)); // > 'name is: Stefe' await myDocument.incrementalPatch({firstName: 'Steve'}); // > 'name is: Steve' ``` ### update() Updates the document based on the [Mongo update syntax](https://docs.mongodb.com/manual/reference/operator/update-field/), based on the [mingo library](https://github.com/kofrasa/mingo#updating-documents). ```js /** * If not done before, you have to add the update plugin. */ import { addRxPlugin } from 'rxdb'; import { RxDBUpdatePlugin } from 'rxdb/plugins/update'; addRxPlugin(RxDBUpdatePlugin); await myDocument.update({ $inc: { age: 1 // increases age by 1 }, $set: { firstName: 'foobar' // sets firstName to foobar } }); ``` ### modify() Updates a document's data based on a function that mutates the current data and returns the new value. ```js const changeFunction = (oldData) => { oldData.age = oldData.age + 1; oldData.name = 'foooobarNew'; return oldData; } await myDocument.modify(changeFunction); console.log(myDocument.name); // 'foooobarNew' ``` ### patch() Overwrites the given attributes in the document's data. 
```js
await myDocument.patch({
  name: 'Steve',
  age: undefined // setting an attribute to undefined will remove it
});
console.log(myDocument.name); // 'Steve'
```

### Prevent conflicts with the incremental methods {#incrementalModify}

Making a normal change to a non-latest version of an `RxDocument` will lead to a `409 CONFLICT` error because RxDB uses [revision checks](./transactions-conflicts-revisions.md) instead of transactions. To make a change to a document, no matter what its current state is, you can use the `incremental` methods:

```js
// update
await myDocument.incrementalUpdate({
  $inc: {
    age: 1 // increases age by 1
  }
});

// modify
await myDocument.incrementalModify(docData => {
  docData.age = docData.age + 1;
  return docData;
});

// patch
await myDocument.incrementalPatch({
  age: 100
});

// remove
await myDocument.incrementalRemove();
```

### getLatest()

Returns the latest known state of the `RxDocument`.

```js
const myDocument = await myCollection.findOne('foobar').exec();
const docAfterEdit = await myDocument.incrementalPatch({
  age: 10
});
const latestDoc = myDocument.getLatest();
console.log(docAfterEdit === latestDoc); // > true
```

### Observe $ {#observe}

Calling this will return an [RxJS Observable](https://rxjs.dev/guide/observable) which emits the newest state of the RxDocument on every change.

```js
myDocument.$
  .subscribe(currentRxDocument => console.dir(currentRxDocument));
```

### remove()

This removes the document from the collection. Notice that this will not purge the document from the storage but set `_deleted: true`, so that it will no longer be returned by queries. To fully purge a document, use the [cleanup plugin](./cleanup.md).

```js
myDocument.remove();
```

### Remove and update in a single atomic operation

Sometimes you want to change a document's value and also remove it in the same operation.
For example this can be useful when you use [replication](./replication.md) and want to set a `deletedAt` timestamp. Then you might have to ensure that setting this timestamp and deleting the document happens in the same atomic operation. To do this the modifying operations of a document accept setting the `_deleted` field. For example: ```ts // update() and remove() await doc.update({ $set: { deletedAt: new Date().getTime(), _deleted: true } }); // modify() and remove() await doc.modify(data => { data.age = 1; data._deleted = true; return data; }); ``` ### deleted$ Emits a boolean value, depending on whether the RxDocument is deleted or not. ```js let lastState = null; myDocument.deleted$.subscribe(state => lastState = state); console.log(lastState); // false await myDocument.remove(); console.log(lastState); // true ``` ### get deleted A getter to get the current value of `deleted$`. ```js console.log(myDocument.deleted); // false await myDocument.remove(); console.log(myDocument.deleted); // true ``` ### toJSON() Returns the document's data as plain JSON object. This will return an **immutable** object. To get something that can be modified, use `toMutableJSON()` instead. ```js const json = myDocument.toJSON(); console.dir(json); /* { passportId: 'h1rg9ugdd30o', firstName: 'Carolina', lastName: 'Gibson', age: 33 ... */ ``` You can also set `withMetaFields: true` to get additional meta fields like the revision, [attachments](./rx-attachment.md) or the deleted flag. ```js const json = myDocument.toJSON(true); console.dir(json); /* { passportId: 'h1rg9ugdd30o', firstName: 'Carolina', lastName: 'Gibson', _deleted: false, _attachments: { ... }, _rev: '1-aklsdjfhaklsdjhf...' */ ``` ### toMutableJSON() Same as `toJSON()` but returns a deep cloned object that can be mutated afterwards. Remember that deep cloning is expensive and should only be done when necessary. 
```js const json = myDocument.toMutableJSON(); json.firstName = 'Alice'; // The returned document can be mutated ``` :::note All methods of RxDocument are bound to the instance When you get a method from a `RxDocument`, the method is automatically bound to the document's instance. This means you do not have to use things like `myMethod.bind(myDocument)` like you would do in jsx. ::: ### isRxDocument Returns true if the given object is an instance of RxDocument. Returns false if not. ```js const is = isRxDocument(myObj); ``` ## Document Lifetime and Immutability **RxDocument instances are immutable.** Each instance represents a snapshot of the document at the time it was fetched or last written. Modifying a document does not update existing instances of it - it creates a new `RxDocument` instance with the updated data. The old instance retains its original data. ```js const doc = await myCollection.findOne('foobar').exec(); console.log(doc.age); // 10 await doc.incrementalPatch({ age: 20 }); // The original instance still has the old data console.log(doc.age); // 10 // Use getLatest() to get the updated state console.log(doc.getLatest().age); // 20 ``` **RxDB de-duplicates document instances.** When the same document is fetched multiple times without any writes in between, RxDB returns the same instance to save memory. Once a write occurs, subsequent fetches return a new instance reflecting the updated state. **Calling non-incremental write methods on an outdated instance throws a `CONFLICT` error.** If you hold a reference to a document and another operation modifies that document in the meantime, calling `.patch()`, `.update()`, or `.modify()` on the outdated instance will fail with a conflict error. See [Transactions, Conflicts and Revisions](./transactions-conflicts-revisions.md) for details on how RxDB handles conflicts. 
To avoid this, either:

- Use the [incremental methods](#incrementalModify) (`incrementalPatch`, `incrementalModify`, `incrementalUpdate`) which always fetch the latest state before applying changes.
- Call `getLatest()` to get the current state before writing.
- Re-query the collection to get a fresh document.

**How long to keep a reference to an `RxDocument`.** Treat an `RxDocument` like plain JSON data - it is a snapshot valid at the time of retrieval. RxDB manages query result caching internally via [event-reduce](./rx-query.md), so you do not need to cache documents yourself. For components that display document data and need live updates, subscribe to the document's `$` observable instead of holding a static reference.

---

## Master Local Documents in RxDB

# Local Documents

Local documents are a special class of documents which are used to store local metadata. They come in handy when you want to store settings or additional data next to your documents.

- Local documents can exist on a [RxDatabase](./rx-database.md) or [RxCollection](./rx-collection.md).
- Local documents do not have to match the collection's schema.
- Local documents do not get replicated.
- Local documents will not be found by queries.
- Local documents cannot have [attachments](./rx-attachment.md).
- Local documents will not get handled by the [schema migration](./migration-schema.md).
- The id of a local document has a `maxLength` of `128` characters.

:::note
While local documents can be very useful, in many cases the [RxState](./rx-state.md) API is more convenient.
:::

## Add the local documents plugin

To enable local documents, you have to add the `local-documents` plugin.
```ts
import { addRxPlugin } from 'rxdb';
import { RxDBLocalDocumentsPlugin } from 'rxdb/plugins/local-documents';
addRxPlugin(RxDBLocalDocumentsPlugin);
```

## Activate the plugin for a RxDatabase or RxCollection

For better performance, the local documents plugin does not create a storage for every database or collection that is created. Instead, you have to set `localDocuments: true` when you want to store local documents in the instance.

```js
// activate local documents on a RxDatabase
const myDatabase = await createRxDatabase({
  name: 'mydatabase',
  storage: getRxStorageLocalstorage(),
  localDocuments: true // <- activate this to store local documents in the database
});

myDatabase.addCollections({
  messages: {
    schema: messageSchema,
    // activate this to store local documents
    // in the collection
    localDocuments: true
  }
});
```

:::note
If you want to store local documents in a `RxCollection` but **NOT** in the `RxDatabase`, you **MUST NOT** set `localDocuments: true` in the `RxDatabase` because it would only slow down the initial database creation.
:::

## insertLocal()

Creates a local document for the database or collection. Throws if a local document with the same id already exists. Returns a Promise which resolves the new `RxLocalDocument`.

```javascript
const localDoc = await myCollection.insertLocal(
  'foobar', // id
  {         // data
    foo: 'bar'
  }
);

// you can also use local documents on a database
const localDoc2 = await myDatabase.insertLocal(
  'foobar', // id
  {         // data
    foo: 'bar'
  }
);
```

## upsertLocal()

Creates a local document for the database or collection if it does not exist. Overwrites it if it exists. Returns a Promise which resolves the `RxLocalDocument`.

```javascript
const localDoc = await myCollection.upsertLocal(
  'foobar', // id
  {         // data
    foo: 'bar'
  }
);
```

## getLocal()

Finds a `RxLocalDocument` by its id. Returns a Promise which resolves the `RxLocalDocument` or `null` if it does not exist.
```javascript
const localDoc = await myCollection.getLocal('foobar');
```

## getLocal$()

Like `getLocal()` but returns an `Observable` that emits the document, or `null` if it does not exist.

```javascript
const subscription = myCollection.getLocal$('foobar').subscribe(documentOrNull => {
  console.dir(documentOrNull); // > RxLocalDocument or null
});
```

## RxLocalDocument

A `RxLocalDocument` behaves like a normal `RxDocument`.

```javascript
const localDoc = await myCollection.getLocal('foobar');

// access data
const foo = localDoc.get('foo');

// change data
localDoc.set('foo', 'bar2');
await localDoc.save();

// observe data
localDoc.get$('foo').subscribe(value => { /* .. */ });

// remove it
await localDoc.remove();
```

:::note
Because a local document does not have a schema, accessing the document's data fields via the pseudo-proxy getters will not work.
:::

```javascript
const foo = localDoc.foo;        // undefined
const foo = localDoc.get('foo'); // works!

localDoc.foo = 'bar';            // does not work!
localDoc.set('foo', 'bar');      // works
```

When using TypeScript, you can access the typed data of the document via `toJSON()`:

```ts
declare type MyLocalDocumentType = {
  foo: string
}
const localDoc = await myCollection.upsertLocal(
  'foobar', // id
  {         // data
    foo: 'bar'
  }
);

// typescript will know that foo is a string
const foo: string = localDoc.toJSON().foo;
```

---

## RxPipeline - Automate Data Flows in RxDB

# RxPipeline

The RxPipeline plugin enables you to run operations depending on writes to a collection. Whenever a write happens on the source collection of a pipeline, a handler is called to process the writes and run operations on another collection.
You could have a similar behavior by observing the collection stream and process data on emits: ```ts mySourceCollection.$.subscribe(event => {/* ...process...*/}); ``` While this could work in some cases, it causes many problems that are fixed by using the pipeline plugin instead: - In an RxPipeline, only the [Leading Instance](./leader-election.md) runs the operations. For example when you have multiple browser tabs open, only one will run the processing and when that tab is closed, another tab will become elected leader and continue the pipeline processing. - On sudden stops and restarts of the JavaScript process, the processing will continue at the correct checkpoint and not miss out any documents even on unexpected crashes. - Reads/Writes on the destination collection are halted while the pipeline is processing. This ensures your queries only return fully processed documents and no partial results. So when you run a query to the destination collection directly after a write to the source collection, you can be sure your query results are up to date and the pipeline has already been run at the moment the query resolved: ```ts await mySourceCollection.insert({/* ... */}); /** * Because our pipeline blocks reads to the destination, * we know that the result array contains data created * on top of the previously inserted documents. */ const result = await myDestinationCollection.find().exec(); ``` ## Creating a RxPipeline Pipelines are created on top of a source [RxCollection](./rx-collection.md) and have another `RxCollection` as destination. An identifier is used to identify the state of the pipeline so that different pipelines have a different processing checkpoint state. A plain JavaScript function `handler` is used to process the data of the source collection writes. 
```ts
const pipeline = await mySourceCollection.addPipeline({
  identifier: 'my-pipeline',
  destination: myDestinationCollection,
  handler: async (docs) => {
    /**
     * Here you can process the documents and write to
     * the destination collection.
     */
    for (const doc of docs) {
      await myDestinationCollection.insert({
        id: doc.primary,
        category: doc.category
      });
    }
  }
});
```

## Use Cases for RxPipeline

The RxPipeline is a handy building block for different features and plugins. You can use it to aggregate data or restructure local data.

### UseCase: Re-Index data that comes from replication

Sometimes you want to [replicate](./replication.md) atomic documents over the wire, but locally you want to split these documents for better indexing. For example, you replicate email documents that have multiple receivers in a string-array. While string-arrays cannot be indexed, locally you need a way to query for all emails of a given receiver. To handle this case, you can set up a RxPipeline that writes the mapping into a separate collection:

```ts
const pipeline = await emailCollection.addPipeline({
  identifier: 'map-email-receivers',
  destination: emailByReceiverCollection,
  handler: async (docs) => {
    for (const doc of docs) {
      // remove the previous mapping
      await emailByReceiverCollection.find({
        selector: { emailId: doc.primary }
      }).remove();
      // add the new mapping
      if (!doc.deleted) {
        await emailByReceiverCollection.bulkInsert(
          doc.receivers.map(receiver => ({
            emailId: doc.primary,
            receiver: receiver
          }))
        );
      }
    }
  }
});
```

With this you can efficiently query for "all emails that a person received" by running:

```ts
const receiverDocs = await emailByReceiverCollection.find({
  selector: { receiver: 'foobar@example.com' }
}).exec();
```

### UseCase: Fulltext Search

You can utilize the pipeline plugin to index text data for efficient [fulltext search](./fulltext-search.md).
```ts
const pipeline = await emailCollection.addPipeline({
  identifier: 'email-fulltext-search',
  destination: mailByWordCollection,
  handler: async (docs) => {
    for (const doc of docs) {
      // remove the previous mapping
      await mailByWordCollection.find({
        selector: { emailId: doc.primary }
      }).remove();
      // add the new mapping
      if (!doc.deleted) {
        const words = doc.text.split(' ');
        await mailByWordCollection.bulkInsert(
          words.map(word => ({
            emailId: doc.primary,
            word: word
          }))
        );
      }
    }
  }
});
```

With this you can efficiently query for "all emails that contain a given word" by running:

```ts
const wordDocs = await mailByWordCollection.find({
  selector: { word: 'foobar' }
}).exec();
```

### UseCase: Download data based on source documents

When you have to fetch data for each document of a collection from a server, you can use the pipeline to ensure all documents have their data downloaded and that no document is missed.

```ts
const pipeline = await emailCollection.addPipeline({
  identifier: 'download-data',
  destination: serverDataCollection,
  handler: async (docs) => {
    for (const doc of docs) {
      const response = await fetch('https://example.com/doc/' + doc.primary);
      const serverData = await response.json();
      await serverDataCollection.upsert({
        id: doc.primary,
        data: serverData
      });
    }
  }
});
```

## RxPipeline methods

### awaitIdle()

You can await the idleness of a pipeline with `await myRxPipeline.awaitIdle()`. This awaits a promise that resolves when the pipeline has processed all documents and is no longer running.

### close()

`await myRxPipeline.close()` stops the pipeline so that it no longer processes writes. This is automatically called when the RxCollection or [RxDatabase](./rx-database.md) of the pipeline is closed.

### remove()

`await myRxPipeline.remove()` removes the pipeline and all metadata it has stored. Recreating the pipeline afterwards will start processing all source documents from scratch.
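The checkpoint behavior behind `remove()` can be sketched in a few lines of plain TypeScript. This is a toy model only, not the actual RxPipeline implementation: the pipeline persists how far it has processed and resumes from there, so dropping that state makes the next run start over from the beginning.

```typescript
// Toy model of checkpoint-based processing (not the RxPipeline internals):
// the pipeline remembers the position of the last processed write and
// resumes from there; removing the checkpoint reprocesses everything.
type Write = { id: string };

class ToyPipeline {
  private checkpoint = 0;
  public processed: string[] = [];

  run(writes: Write[]): void {
    // only process writes that happened after the stored checkpoint
    for (const write of writes.slice(this.checkpoint)) {
      this.processed.push(write.id);
    }
    this.checkpoint = writes.length;
  }

  remove(): void {
    // like RxPipeline.remove(): drop all metadata, including the checkpoint
    this.checkpoint = 0;
    this.processed = [];
  }
}

const writes: Write[] = [{ id: 'a' }, { id: 'b' }];
const pipeline = new ToyPipeline();
pipeline.run(writes);       // processes 'a' and 'b'
writes.push({ id: 'c' });
pipeline.run(writes);       // resumes at the checkpoint, only processes 'c'
```

Calling `remove()` on this toy model and running it again reprocesses all writes from scratch, which mirrors the documented behavior of recreating a removed pipeline.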
## Using RxPipeline correctly

### Pipeline handlers must be idempotent

Because a JavaScript process can exit at any time, for example when the user closes a browser tab, the pipeline handler function must be idempotent. This means that when it only runs partially and is started again with the same input, it must still end up with the correct result.

### Pipeline handlers must not throw

Pipeline handlers must never throw. If you run operations inside of the handler that might cause errors, you must wrap the handler's code in a `try-catch` yourself and also handle retries. If your handler throws, the pipeline gets stuck and is no longer usable, which should never happen.

### Be careful when doing http requests in the handler

When you run http requests inside of your handler, you no longer have an [offline first](./offline-first.md) application, because reads on the destination collection are blocked until all handlers have finished. When your client is offline, the destination collection will therefore be blocked for reads and writes.

### Pipelines temporarily block external reads and writes

While a pipeline is running, **all reads and writes to its destination collection are blocked**. This guarantees that queries never observe partially processed data, but it also means that pipelines can block each other if they interact incorrectly. Problems occur when multiple pipelines:

- read or write across the same collections, or
- wait for each other using `awaitIdle()` from inside a pipeline handler.

```ts
// Example of a deadlock

// Pipeline A: files → files (reads folders)
const pipelineA = await db.files.addPipeline({
  identifier: 'file-path-sync',
  destination: db.files,
  handler: async (docs) => {
    const folderDocs = await db.folders.find().exec(); // can block
    /* ... */
  }
});

// Pipeline B: files → folders (waits for A)
await db.files.addPipeline({
  identifier: 'file-count',
  destination: db.folders,
  handler: async () => {
    await pipelineA.awaitIdle(); // ❌ may deadlock
    /* ...
*/ } }); ``` To prevent deadlocks, consider: - Never call `awaitIdle()` inside a pipeline handler. - Avoid circular dependencies between pipelines. - Prefer one-directional data flow. --- ## RxQuery To find documents inside of an [RxCollection](./rx-collection.md), RxDB uses the RxQuery interface that handles all query operations: it serves as the main interface for fetching documents, relies on a MongoDB-like [Mango Query Syntax](https://github.com/cloudant/mango), and provides three types of queries: [find()](#find), [findOne()](#findOne) and [count()](#count). By caching and de-duplicating results, RxQuery ensures efficient in-memory handling, and when queries are observed or re-run, the [EventReduce algorithm](https://github.com/pubkey/event-reduce) speeds up updates, giving a fast real-time experience for queries that run more than once. ## find() To create a basic `RxQuery`, call `.find()` on a collection and pass a selector. The result-set of normal queries is an array with documents. ```js // find all that are older than 18 const query = myCollection .find({ selector: { age: { $gt: 18 } } }); ``` ## findOne() {#findOne} A findOne-query has only a single [RxDocument](./rx-document.md) or `null` as result-set. ```js // find alice const query = myCollection .findOne({ selector: { name: 'alice' } }); ``` ```js // find the youngest one const query = myCollection .findOne({ selector: {}, sort: [ {age: 'asc'} ] }); ``` ```js // find one document by the primary key const query = myCollection.findOne('foobar'); ``` ## exec() Returns a `Promise` that resolves with the result-set of the query. ```js const query = myCollection.find(); const results = await query.exec(); console.dir(results); // > [RxDocument,RxDocument,RxDocument..] ``` On `.findOne()` queries, you can call `.exec(true)` to ensure your document exists and to make TypeScript handling easier: ```ts // docOrUndefined can be RxDocument or null // which then has to be handled to be typesafe. 
const docOrUndefined = await myCollection.findOne().exec(); // with .exec(true), it will throw if the document // cannot be found and always return type RxDocument const doc = await myCollection.findOne().exec(true); ``` ## Observe $ {#observe} The query's `$` property is a `BehaviorSubject` ([see](https://medium.com/@luukgruijs/understanding-rxjs-behaviorsubject-replaysubject-and-asyncsubject-8cc061f1cfc0)) that always has the current result-set as its value. This is extremely helpful when used together with UIs that should always show the same state as what is written in the database. ```js const query = myCollection.find(); const querySub = query.$.subscribe(results => { console.log('got results: ' + results.length); }); // > 'got results: 5' // BehaviorSubjects emit on subscription await myCollection.insert({/* ... */}); // insert one // > 'got results: 6' // $.subscribe() was called again with the new results // stop watching this query querySub.unsubscribe() ``` ## update() Runs an [update](./rx-document.md#update) on every RxDocument of the query-result. ```js // to use the update() method, you need to add the update plugin. import { RxDBUpdatePlugin } from 'rxdb/plugins/update'; addRxPlugin(RxDBUpdatePlugin); const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.update({ $inc: { age: 1 // increases age of every found document by 1 } }); ``` ## patch() / incrementalPatch() Runs the [RxDocument.patch()](./rx-document.md#patch) function on every RxDocument of the query result. ```js const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.patch({ age: 12 // set the age of every found document to 12 }); ``` ## modify() / incrementalModify() Runs the [RxDocument.modify()](./rx-document.md#modify) function on every RxDocument of the query result. 
```js const query = myCollection.find({ selector: { age: { $gt: 18 } } }); await query.modify((docData) => { docData.age = docData.age + 1; // increases age of every found document by 1 return docData; }); ``` ## remove() / incrementalRemove() Deletes all found documents. Returns a promise which resolves to the deleted documents. ```javascript // All documents where the age is less than 18 const query = myCollection.find({ selector: { age: { $lt: 18 } } }); // Remove the documents from the collection const removedDocs = await query.remove(); ``` On `.findOne()` queries, `.remove()` returns `null` when no document matches. You can call `.remove(true)` to throw if the document is missing, similar to `.exec(true)`: ```ts // returns null if no document matches const removed = await myCollection.findOne('foobar').remove(); // throws if no document matches, return type is always RxDocument const removed = await myCollection.findOne('foobar').remove(true); ``` ## doesDocumentDataMatch() Returns `true` if the given document data matches the query. ```js const documentData = { id: 'foobar', age: 19 }; myCollection.find({ selector: { age: { $gt: 18 } } }).doesDocumentDataMatch(documentData); // > true myCollection.find({ selector: { age: { $gt: 20 } } }).doesDocumentDataMatch(documentData); // > false ``` ## Query Builder Plugin To use chained query methods, you can also use the `query-builder` plugin. ```ts // add the query builder plugin import { addRxPlugin } from 'rxdb'; import { RxDBQueryBuilderPlugin } from 'rxdb/plugins/query-builder'; addRxPlugin(RxDBQueryBuilderPlugin); // now you can use chained query methods const query = myCollection.find().where('age').gt(18); const result = await query.exec(); ``` ## Query Examples Here are some examples to quickly learn how to write queries without reading the docs. 
- [Pouch-find-docs](https://github.com/pouchdb/pouchdb/blob/master/packages/node_modules/pouchdb-find/README.md) - learn how to use mango-queries - [mquery-docs](https://github.com/aheckmann/mquery/blob/master/README.md) - learn how to use chained-queries ```js // directly pass search-object myCollection.find({ selector: { name: { $eq: 'foo' } } }) .exec().then(documents => console.dir(documents)); /* * find by using sql equivalent '%like%' syntax * This example will e.g. match 'foo' but also 'fifoo' or 'foofa' or 'fifoofa' * Notice that in RxDB queries, a regex is * represented as a $regex string with the * $options parameter for flags. * Using a RegExp instance is not allowed * because they are not JSON.stringify()-able * and also RegExp instances are mutable which * could cause undefined behavior when the * RegExp is mutated * after the query was parsed. */ myCollection.find({ selector: { name: { $regex: '.*foo.*' } } }) .exec().then(documents => console.dir(documents)); // find using a composite statement eg: $or // This example checks where name is either foo // or if name is not existent on the document myCollection.find({ selector: { $or: [ { name: { $eq: 'foo' } }, { name: { $exists: false } }] } }) .exec().then(documents => console.dir(documents)); // do a case insensitive search // This example will match 'foo' or 'FOO' or 'FoO' etc... myCollection.find({ selector: { name: { $regex: '^foo$', $options: 'i' } } }) .exec().then(documents => console.dir(documents)); // chained queries myCollection.find().where('name').eq('foo') .exec().then(documents => console.dir(documents)); ``` :::note RxDB will always append the primary key to the sort parameters For several performance optimizations, like the [EventReduce algorithm](https://github.com/pubkey/event-reduce), RxDB expects all queries to return a deterministic sort order that does not depend on the insert order of the documents. 
To ensure a deterministic ordering, RxDB will always append the primary key as last sort parameter to all queries and to all indexes. This works in contrast to most other databases where a query without sorting would return the documents in the order in which they had been inserted to the database. ::: ## Setting a specific index By default, the query will be sent to the RxStorage, where a query planner will determine which one of the available indexes must be used. But the query planner cannot know everything and sometimes will not pick the most optimal index. To improve query performance, you can specify which index to use when running the query. ```ts const query = myCollection .findOne({ selector: { age: { $gt: 18 }, gender: { $eq: 'm' } }, /** * Because the developer knows that 50% of the documents are 'male', * but only 20% are below age 18, * it makes sense to enforce using the * ['gender', 'age'] index to improve * performance. * This could not be known by the query * planner which might have chosen * ['age', 'gender'] instead. */ index: ['gender', 'age'] }); ``` ## Count When you only need the number of documents that match a query, but you do not need the document data itself, you can use a count query for **better performance**. The performance difference compared to a normal query depends on which [RxStorage](./rx-storage.md) implementation is used. ```ts const query = myCollection.count({ selector: { age: { $gt: 18 } } // 'limit' and 'skip' MUST NOT be set for count queries. }); // get the count result once const matchingAmount = await query.exec(); // > number // observe the result query.$.subscribe(amount => { console.log('Currently has ' + amount + ' documents'); }); ``` :::note Count queries have a better performance than normal queries because they do not have to fetch the full document data out of the storage. 
Therefore it is **not** possible to run a `count()` query with a selector that requires fetching and comparing the document data. So if your query selector **does not** fully match an index of the schema, it is not allowed to run it. These queries would have no performance benefit compared to normal queries but have the tradeoff of not using the fetched document data for caching. ::: ```ts /** * The following will throw an error because * the count operation cannot run on any specific index range * because the $regex operator is used. */ const query = myCollection.count({ selector: { age: { $regex: 'foobar' } } }); /** * The following will throw an error because * the count operation cannot run on any specific index range * because there is no ['age' ,'otherNumber'] index * defined in the schema. */ const query = myCollection.count({ selector: { age: { $gt: 20 }, otherNumber: { $gt: 10 } } }); ``` If you want to count these kinds of queries, you should do a normal query instead and use the length of the result set as the counter. This has the same performance as running a non-fully-indexed count which has to fetch all document data from the database and run a query matcher. ```ts import { map } from 'rxjs'; // get count manually once const resultSet = await myCollection.find({ selector: { age: { $regex: 'foobar' } } }).exec(); const count = resultSet.length; // observe count manually const count$ = myCollection.find({ selector: { age: { $regex: 'foobar' } } }).$.pipe( map(result => result.length) ); /** * To allow non-fully-indexed count queries, * you can also specify that by setting allowSlowCount: true * when creating the database. */ const database = await createRxDatabase({ name: 'mydatabase', allowSlowCount: true, // set this to true [default=false] /* ... */ }); ``` ### `allowSlowCount` To allow non-fully-indexed count queries, you can also specify that by setting `allowSlowCount: true` when creating the database. 
Doing this is mostly not wanted, because it would run the counting on the storage without having the document stored in the RxDB document cache. This is only recommended if the RxStorage is running remotely like in a WebWorker and you do not always want to send the document-data between the worker and the main thread. In this case you might only need the count-result instead to save performance. ## RxQuery instances are immutable Because RxDB is a reactive database, we can do heavy performance-optimisation on query-results which change over time. To be able to do this, RxQueries have to be immutable. This means, when you have a `RxQuery` and run a `.where()` on it, the original RxQuery object is not changed. Instead the where-function returns a new `RxQuery`-Object with the changed where-field. Keep this in mind if you create RxQueries and change them afterwards. Example: ```javascript const queryObject = myCollection.find().where('age').gt(18); // Creates a new RxQuery object, does not modify previous one queryObject.sort('name'); const results = await queryObject.exec(); console.dir(results); // result-documents are not sorted by name const queryObjectSort = queryObject.sort('name'); const sortedResults = await queryObjectSort.exec(); console.dir(sortedResults); // result-documents are now sorted ``` ### isRxQuery Returns true if the given object is an instance of RxQuery. Returns false if not. ```js const is = isRxQuery(myObj); ``` ## Design Decisions Like most other NoSQL databases, RxDB uses the [mango-query-syntax](https://github.com/cloudant/mango) similar to MongoDB and others. - We use the JSON-based Mango Query Syntax because: - Mango Queries work better with TypeScript compared to SQL strings. - Mango Queries are composable and easy to transform by code without joining SQL strings. - Queries can be run very fast and efficient with only a minimal query planner to plan the best indexes and operations. 
- NoSQL queries can be optimized with the [EventReduce](https://github.com/pubkey/event-reduce) algorithm to improve performance of observed and cached queries. ## FAQ
Can I specify which document fields are returned by an RxDB query? No, RxDB does not support partial document retrieval. Because RxDB is a client-side database with limited memory, it caches and de-duplicates entire documents across multiple queries. Even if you only need a few fields, most storages must still fetch the entire JSON data, so subselecting fields would not significantly improve performance. Therefore, RxDB always returns full documents. If you only need certain fields, you can filter them out in your application code or consider storing just the necessary data in a separate collection.
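As the answer suggests, narrowing documents to a few fields happens in application code. A minimal sketch, where `pickFields` is a hypothetical helper and not an RxDB API:

```javascript
// RxDB always returns full documents; pick out just the fields you
// need from a document's plain data in your own code.
function pickFields(docData, fields) {
    const result = {};
    for (const field of fields) {
        if (field in docData) {
            result[field] = docData[field];
        }
    }
    return result;
}
```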
Why doesn't RxDB support aggregations on queries? RxDB runs entirely on the client side. Any "aggregation" or data processing you might do within RxDB would still happen in the same JavaScript environment as your application code. Therefore, there's no real performance advantage or difference between doing the aggregation in RxDB vs. doing it in your own code after fetching the data. As a result, RxDB doesn't provide built-in aggregation methods. Instead, just query the documents you need and perform any calculations directly in your app's code.
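For example, an aggregation such as an average is just ordinary code over a query's result array. A minimal sketch with a hypothetical `averageAge` helper:

```javascript
// Aggregation in application code after fetching the documents:
// the average of an `age` field over a result array.
function averageAge(docs) {
    if (docs.length === 0) {
        return 0;
    }
    const sum = docs.reduce((acc, doc) => acc + doc.age, 0);
    return sum / docs.length;
}
```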
Why does RxDB not support cross-collection queries? RxDB is a client-side database and does not provide built-in cross-collection queries or transactions. Instead, you can execute multiple queries in your JavaScript code and combine their results as needed. Because everything runs in the same environment, this approach offers the same performance you would get if cross-collection queries were built in - without the added complexity.
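A cross-collection "join" is therefore just ordinary JavaScript over two result sets. A minimal sketch with hypothetical hero and item data (the field names `id` and `heroId` are assumptions for illustration):

```javascript
// Combine the results of two separate queries in application code:
// group each item under its hero via the shared `heroId` key.
function joinByKey(heroes, items) {
    const itemsByHero = new Map();
    for (const item of items) {
        const list = itemsByHero.get(item.heroId) || [];
        list.push(item);
        itemsByHero.set(item.heroId, list);
    }
    return heroes.map(hero => ({
        ...hero,
        items: itemsByHero.get(hero.id) || []
    }));
}
```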
Why Doesn't RxDB Support Case-Insensitive Search? RxDB relies on various storage engines as its backend, such as [IndexedDB](./rx-storage-indexeddb.md) or [FoundationDB](./rx-storage-foundationdb.md), and these storage engines generally do not support case-insensitive search natively. This limitation arises from the design of these engines, which prioritize efficiency and flexibility for specific types of queries rather than universal features like case-insensitivity. Although RxDB does not offer built-in support for case-insensitive search, there are two common workarounds: - **Store Data in a Meta-Field for Lowercase Search**: To enable case-insensitive search, you can store an additional field in your documents where the relevant text data is preprocessed and saved in lowercase. ```ts const document = { name: 'John Doe', nameLowercase: 'john doe' // Meta-field }; await myCollection.insert(document); const query = myCollection.find({ selector: { nameLowercase: { $eq: 'john doe' } } }); ``` - **Use a Regex Query**: Regular expressions can perform case-insensitive searches. For example: ```ts const query = myCollection.find({ selector: { name: { $regex: '^john doe$', $options: 'i' } // Case-insensitive regex } }); ``` However, this method has a significant downside: regex queries often cannot leverage indexes efficiently. As a result, they may be slower, especially for large datasets.
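The two workarounds can be demonstrated with plain JavaScript, independent of RxDB:

```javascript
// A precomputed lowercase meta-field matches with plain equality,
// while the regex approach needs the case-insensitive 'i' flag.
const doc = { name: 'John Doe', nameLowercase: 'john doe' };
const searchTerm = 'John DOE';

// meta-field workaround: lowercase both sides and compare
const matchesByMetaField = doc.nameLowercase === searchTerm.toLowerCase();

// regex workaround: same pattern and flag as the $regex/$options example
const matchesByRegex = new RegExp('^john doe$', 'i').test(doc.name);
```

The equality check on the meta-field can use an index; the regex test cannot, which is the performance difference described above.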
--- ## Design Perfect Schemas in RxDB # RxSchema Schemas define the structure of the documents of a collection: which field is used as the primary key, which fields are used as indexes, and what is encrypted. Every collection has its own schema. With RxDB, schemas are defined with the [JSON Schema](https://json-schema.org/blog/posts/rxdb-case-study) standard which you might know from other projects. ## Example In this example-schema we define a hero-collection with the following settings: - the version-number of the schema is 0 - the name-property is the **primaryKey**. This means it's a unique, indexed, required `string` which can be used to uniquely identify a single document. - the color-field is required for every document - the healthpoints-field must be a number between 0 and 100 - the secret-field stores an encrypted value - the birthyear-field is final which means it is required and cannot be changed - the skills-attribute must be an array of objects which contain the name and the damage-attribute. There is a maximum of 5 skills per hero. 
- Allows adding attachments and storing them encrypted ```json { "title": "hero schema", "version": 0, "description": "describes a simple hero", "primaryKey": "name", "type": "object", "properties": { "name": { "type": "string", "maxLength": 100 // <- the primary key must have set maxLength }, "color": { "type": "string" }, "healthpoints": { "type": "number", "minimum": 0, "maximum": 100 }, "secret": { "type": "string" }, "birthyear": { "type": "number", "final": true, "minimum": 1900, "maximum": 2050 }, "skills": { "type": "array", "maxItems": 5, "uniqueItems": true, "items": { "type": "object", "properties": { "name": { "type": "string" }, "damage": { "type": "number" } } } } }, "required": [ "name", "color" ], "encrypted": ["secret"], "attachments": { "encrypted": true } } ``` ## Create a collection with the schema ```javascript await myDatabase.addCollections({ heroes: { schema: myHeroSchema } }); console.dir(myDatabase.heroes.name); // heroes ``` ## version The `version` field is a number, starting with `0`. When the version is greater than 0, you have to provide the `migrationStrategies` to create a collection with this schema. ## primaryKey The `primaryKey` field contains the fieldname of the property that will be used as primary key for the whole collection. The value of the primary key of the document must be a `string`, unique, final and required. ### composite primary key You can define a composite primary key which is composed from multiple properties of the document data. ```javascript const mySchema = { keyCompression: true, // set this to true, to enable the keyCompression version: 0, title: 'human schema with composite primary', primaryKey: { // where should the composed string be stored key: 'id', // fields that will be used to create the composed key fields: [ 'firstName', 'lastName' ], // separator which is used to concat the fields values. 
separator: '|' }, type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' } }, required: [ 'id', 'firstName', 'lastName' ] }; ``` You can then find a document by using the relevant parts to create the composite primaryKey: ```ts // inserting with composite primary await myRxCollection.insert({ // id, <- do not set the id, it will be filled by RxDB firstName: 'foo', lastName: 'bar' }); // find by composite primary const id = myRxCollection.schema.getPrimaryOfDocumentData({ firstName: 'foo', lastName: 'bar' }); const myRxDocument = await myRxCollection.findOne(id).exec(); ``` ## Indexes RxDB supports secondary indexes which are defined at the schema-level of the collection. Indexes are only allowed on field types `string`, `integer` and `number`. Some RxStorages allow to use `boolean` fields as index. Depending on the field type, you must have set some meta attributes like `maxLength` or `minimum`. This is required so that RxDB is able to know the maximum string representation length of a field, which is needed to craft custom indexes in several `RxStorage` implementations. **Performance Note:** Having a large `maxLength` for indexed fields and primary keys can negatively impact performance and storage size on many storages. Therefore, you should only set it as large as strictly needed for your application. :::note RxDB will always append the `primaryKey` to all indexes to ensure a deterministic sort order of query results. You do not have to add the `primaryKey` to any index. ::: ### Index-example ```javascript const schemaWithIndexes = { version: 0, title: 'human schema with indexes', keyCompression: true, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string', // string-fields used as an index, // must have set maxLength. 
maxLength: 100 }, lastName: { type: 'string' }, active: { type: 'boolean' }, familyName: { type: 'string' }, balance: { type: 'number', // number fields used in an index, must set // minimum, maximum and multipleOf minimum: 0, maximum: 100000, multipleOf: 0.01 }, creditCards: { type: 'array', items: { type: 'object', properties: { cvc: { type: 'number' } } } } }, required: [ 'id', 'active' // <- boolean fields that are used in an index must be required. ], indexes: [ 'firstName', // <- this will create a simple index for the `firstName` field // <- compound-index for these two fields ['active', 'firstName'], 'active' ] }; ``` ## internalIndexes When you use RxDB on the server-side, you might want to use internalIndexes to speed up internal queries. [Read more](./rx-server.md#server-only-indexes) ## attachments To use attachments in the collection, you have to add the `attachments`-attribute to the schema. [See RxAttachment](./rx-attachment.md). ## default Default values can only be defined for first-level fields. Whenever you insert a document, unset fields will be filled with default-values. ```javascript const schemaWithDefaultAge = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer', default: 20 // <- default will be used } }, required: ['id'] }; ``` ## final By setting a field to `final`, you make sure it cannot be modified later. Final fields are always required. Final fields cannot be observed because they will not change. Advantages: - With final fields you can ensure that no-one accidentally modifies the data. - When you enable the `eventReduce` algorithm, some performance-improvements are done. 
```javascript const schemaWithFinalAge = { version: 0, primaryKey: 'id', type: 'object', properties: { id: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer', final: true } }, required: ['id'] }; ``` ## Non allowed properties The schema is not only used to validate objects before they are written into the database, but also used to map getters to observe and populate single fieldnames, key compression and other things. Therefore you can not use every schema which would be valid for the spec of [json-schema.org](http://json-schema.org/). For example, fieldnames must match the regex `^[a-zA-Z](?:[[a-zA-Z0-9_]*]?[a-zA-Z0-9])?$` and `additionalProperties` is always set to `false`. But don't worry, RxDB will instantly throw an error when you pass an invalid schema into it. Also the following class properties of `RxDocument` cannot be used as top level fields because they would clash when the RxDocument property is accessed: ```json [ "collection", "_data", "_propertyCache", "isInstanceOfRxDocument", "primaryPath", "primary", "revision", "deleted$", "deleted$$", "deleted", "getLatest", "$", "$$", "get$", "get$$", "populate", "get", "toJSON", "toMutableJSON", "update", "incrementalUpdate", "updateCRDT", "putAttachment", "putAttachmentBase64", "getAttachment", "allAttachments", "allAttachments$", "modify", "incrementalModify", "patch", "incrementalPatch", "_saveData", "remove", "incrementalRemove", "close", "deleted", "synced" ] ``` ## FAQ
How can I store a Date? With RxDB you can only store plain JSON data inside of a document. You cannot store a JavaScript `new Date()` instance directly. This is for performance reasons and because `Date` is a mutable object where changing it at any time might cause strange problems that are hard to debug. To store a date in RxDB, you have to define a string field with a `format` attribute: ```json { "type": "string", "format": "date-time" } ``` When storing the data you first have to transform your `Date` object into a string with `.toISOString()`. Because the `date-time` string format is sortable, you can run any query operations on that field and even use it as an index.
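The sortability claim can be verified with plain JavaScript: ISO 8601 strings sort lexicographically in the same order as the dates they represent.

```javascript
// ISO `date-time` strings sort lexicographically in the same order as
// their dates, which is why a plain string field works for date queries,
// sorting and indexes.
const dates = [
    new Date('2021-06-01T10:00:00.000Z'),
    new Date('2020-01-15T08:30:00.000Z'),
    new Date('2021-01-01T00:00:00.000Z')
];
const isoStrings = dates.map(d => d.toISOString());

// a plain string sort...
const sortedAsStrings = [...isoStrings].sort();

// ...matches a numeric sort on the underlying timestamps
const sortedByTime = [...dates]
    .sort((a, b) => a.getTime() - b.getTime())
    .map(d => d.toISOString());
```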
How do I specify nullable in JSON Schema? In JSON Schema, you make a field nullable by allowing multiple types with an array: ```json { "type": ["string", "null"] } ``` When you use a nullable type like `["string", "null"]`, you should always add that field to the `required` array. If a nullable field is not required, it can end up in three possible states: a string value, `null`, or `undefined` (not set). Having three states instead of two makes your code harder to reason about. In RxDB it is recommended to **not** store `null` values at all. Instead, define the field as non-required and leave it `undefined` (not set) when there is no value. A field that is not listed in the `required` array can be omitted from a document. This approach works better with RxDB's internal handling and keeps your data cleaner: ```ts { "version": 0, "primaryKey": "id", "type": "object", "properties": { "id": { "type": "string", "maxLength": 100 }, "nickname": { "type": "string" } }, "required": ["id"] // "nickname" is not required, so it can be left undefined (not set) } ```
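A plain JavaScript illustration of the three states; note that the `in` operator is needed to tell "set to null" apart from "not set at all":

```javascript
// The three possible states of a non-required nullable field,
// which is why the docs recommend "not set" over storing null.
const withValue = { nickname: 'alice' };
const withNull = { nickname: null };
const notSet = {};

// a check that treats both null and "not set" as "no nickname"
const hasNickname = (doc) => 'nickname' in doc && doc.nickname !== null;
```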
How to store schemaless data? By design, RxDB requires that every collection has a schema. This means you cannot create a truly "schema-less" collection where top-level fields are unknown at schema creation time. RxDB must know about all fields of a document at the top level to perform validation, index creation, and other internal optimizations. However, there is a way to store data of arbitrary structure at sub-fields. To do this, define a property with `type: "object"` in your schema. For example: ```ts { "version": 0, "primaryKey": "id", "type": "object", "properties": { "id": { "type": "string", "maxLength": 100 }, "myDynamicData": { "type": "object" // Here you can store any JSON data // because it's an open object. } }, "required": ["id"] } ```
Why does RxDB automatically set `additionalProperties: false` at the top level RxDB automatically sets `additionalProperties: false` at the top level of a schema to ensure that all top-level fields are known in advance. This design choice offers several benefits: - Prevents collisions with [RxDocument](./rx-document.md) class properties: RxDB documents have built-in class methods (e.g., .toJSON, .save) at the top level. By forbidding unknown top-level properties, we avoid accidental naming collisions with these built-in methods. - Avoids conflicts with user-defined ORM functions: Developers can add custom [ORM methods](./orm.md) to RxDocuments. If top-level properties were unbounded, a property name could accidentally conflict with a method name, leading to unexpected behavior. - Improves TypeScript typings: If RxDB didn't know about all top-level fields, the document type would effectively become `any`. That means a simple typo like `myDocument.toJOSN()` would only be caught at runtime, not at build time. By disallowing unknown properties, TypeScript can provide strict typing and catch errors sooner.
Can't change the schema of a collection When you make changes to the schema of a collection, you sometimes can get an error like `Error: addCollections(): another instance created this collection with a different schema`. This means you have created a collection before and added document-data to it. When you now just change the schema, it is likely that the new schema does not match the saved documents inside of the collection. This would cause strange bugs and would be hard to debug, so RxDB checks if your schema has changed and throws an error. To change the schema in **production**-mode, do the following steps: - Increase the `version` by 1 - Add the appropriate [migrationStrategies](https://pubkey.github.io/rxdb/migration-schema.html) so the saved data will be modified to match the new schema
In **development**-mode, the schema-change can be simplified by **one of these** strategies: - Use the memory-storage so your db resets on restart and your schema is not saved permanently - Call `removeRxDatabase('mydatabasename', RxStorage);` before creating a new [RxDatabase](./rx-database.md)-instance - Add a timestamp as suffix to the database-name to create a new one each run like `name: 'heroesDB' + new Date().getTime()`
Why does the top-level schema complain about a missing `_id` primary key field? If you encounter an error stating that the top-level schema is missing the `_id` primary key field during [replication](./replication.md), remember that RxDB requires every schema to explicitly define the primary key property, while other databases use an implicit `_id` field. If your backend expects it, you must add the `_id` property to your schema manually: declare `_id` as a string type and set it as the `primaryKey` in your schema definition.
--- ## RxState - Reactive Persistent State with RxDB RxState is a flexible state library built on top of the [RxDB Database](https://rxdb.info/). While RxDB stores similar documents inside of collections, RxState can store any complex JSON data without having a predefined schema. The state is automatically persisted through RxDB and state changes are propagated between browser tabs. Even setting up replication is simple by using the RxDB [Replication feature](./replication.md). ## Creating a RxState A `RxState` instance is created on top of a [RxDatabase](./rx-database.md). The state will automatically be persisted with the [storage](./rx-storage.md) that was used when setting up the RxDatabase. To use it you first have to import the `RxDBStatePlugin` and add it to RxDB with `addRxPlugin()`. To create a state call the `addState()` method on the database instance. Calling `addState` multiple times will automatically be de-duplicated and only create a single RxState object. ```javascript import { createRxDatabase, addRxPlugin } from 'rxdb'; import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage'; // first add the RxState plugin to RxDB import { RxDBStatePlugin } from 'rxdb/plugins/state'; addRxPlugin(RxDBStatePlugin); const database = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageLocalstorage(), }); // create a state instance const myState = await database.addState(); // you can also create states with a given namespace const myChildState = await database.addState('myNamespace'); ``` ## Writing data and Persistence Writing data to the state happens via a so-called `modifier`. It is a simple JavaScript function that gets the current value as input and returns the new, modified value. 
For example to increase the value of `myField` by one, you would use a modifier that increases the current value: ```ts // initially set value to zero await myState.set('myField', v => 0); // increase value by one await myState.set('myField', v => v + 1); // update value to be 42 await myState.set('myField', v => 42); ``` The modifier is used instead of a direct assignment to ensure correct behavior when other JavaScript realms write to the state at the same time, like other browser tabs or webworkers. On conflicts, the modifier will just be run again to ensure deterministic and correct behavior. Mutation is therefore `async`; you have to `await` the call to the set function when you care about the moment when the change actually happened. ## Get State Data The state stored inside of a RxState instance can be seen as a big single JSON object that contains all data. You can fetch the whole object or partially get a single property or nested ones. Fetching data can either happen with the `.get()` method or by accessing the field directly like `myRxState.myField`. ```ts // get root state data const val = myState.get(); // get single property const val = myState.get('myField'); const val = myState.myField; // get nested property const val = myState.get('myField.childfield'); const val = myState.myField.childfield; // get nested array property const val = myState.get('myArrayField[0].foobar'); const val = myState.myArrayField[0].foobar; ``` ## Observability Instead of fetching the state once, you can also observe the state with either rxjs observables or [custom reactivity handlers](#rxstate-with-signals-and-hooks) like signals or hooks. Rxjs observables can be created by either using the `.get$()` method or by accessing the top level property suffixed with a dollar sign like `myState.myField$`. 
```ts const observable = myState.get$('myField'); const observable = myState.myField$; // then you can subscribe to that observable observable.subscribe(newValue => { // update the UI }); ``` Subscription works across multiple JavaScript realms like browser tabs or WebWorkers. ## RxState with signals and hooks With the double-dollar sign you can also access [custom reactivity](./reactivity.md) instances like signals or hooks. These are easier to use compared to RxJS, depending on which JavaScript framework you are using. For example, to use signals in Angular, you would first add a reactivity factory to your database and then access the signals of the RxState: ```ts import { RxReactivityFactory, createRxDatabase } from 'rxdb/plugins/core'; import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage'; import { toSignal } from '@angular/core/rxjs-interop'; const reactivityFactory: RxReactivityFactory = { fromObservable(obs, initialValue) { return toSignal(obs, { initialValue }); } }; const database = await createRxDatabase({ name: 'mydb', storage: getRxStorageLocalstorage(), reactivity: reactivityFactory }); const myState = await database.addState(); const mySignal = myState.get$$('myField'); const mySignal = myState.myField$$; ``` ## Cleanup RxState operations For faster writes, changes to the state are only written as a list of operations to disk. After some time you might have too many operations written, which would delay the initial state creation. To automatically merge the state operations into a single operation and clear the old operations, you should add the [Cleanup Plugin](./cleanup.md) before creating the [RxDatabase](./rx-database.md): ```ts import { addRxPlugin } from 'rxdb'; import { RxDBCleanupPlugin } from 'rxdb/plugins/cleanup'; addRxPlugin(RxDBCleanupPlugin); ``` ## Correctness over Performance RxState is optimized for correctness, not for performance. Compared to other state libraries, RxState directly persists data to storage and ensures write conflicts are handled properly.
Other state libraries mainly operate in-memory and lazily persist to disk without caring about conflicts or multiple browser tabs, which can cause problems and hard-to-reproduce bugs. RxState still uses RxDB, which has a range of [well-performing storages](./rx-storage-performance.md), so the write speed is more than sufficient. To further improve write performance, you can use multiple RxState instances (with different namespaces) to split writes across multiple storage instances. Reads happen directly in-memory, which makes RxState read performance comparable to other state libraries. ## RxState Replication Because the state data is stored inside of an internal [RxCollection](./rx-collection.md) you can easily use the [RxDB Replication](./replication.md) to sync data between users or devices of the same user. For example with the [P2P WebRTC replication](./replication-webrtc.md) you can start the replication on the collection and automatically sync the RxState operations between users directly: ```ts import { replicateWebRTC, getConnectionHandlerSimplePeer } from 'rxdb/plugins/replication-webrtc'; const database = await createRxDatabase({ name: 'heroesdb', storage: getRxStorageLocalstorage(), }); const myState = await database.addState(); const replicationPool = await replicateWebRTC( { collection: myState.collection, topic: 'my-state-replication-pool', connectionHandlerCreator: getConnectionHandlerSimplePeer({}), pull: {}, push: {} } ); ``` --- ## RxStorage Layer - Choose the Perfect RxDB Storage for Every Use Case # RxStorage RxDB is not a self-contained database. Instead, the data is stored in an implementation of the [RxStorage interface](https://github.com/pubkey/rxdb/blob/master/src/types/rx-storage.interface.d.ts). This allows you to **switch out** the underlying data layer, depending on the JavaScript environment and performance requirements.
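As a sketch of the namespace-splitting idea above: a small routing helper can deterministically assign each state key to one of several RxState namespaces, so writes spread across multiple storage instances. The helper and its hashing scheme are hypothetical illustrations, not part of the RxDB API:

```typescript
// Hypothetical helper to spread RxState writes across several namespaces.
// Neither the function name nor the hashing scheme is part of RxDB.
function namespaceForKey(key: string, namespaces: string[]): string {
    // simple deterministic string hash so that the same key
    // always maps to the same namespace
    let hash = 0;
    for (let i = 0; i < key.length; i++) {
        hash = (hash * 31 + key.charCodeAt(i)) >>> 0;
    }
    return namespaces[hash % namespaces.length];
}
```

States created via `database.addState('stateA')`, `database.addState('stateB')`, etc. each get their own storage instance, so keys routed to different namespaces do not contend on the same write path.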
For example, you can use the SQLite storage for a Capacitor app, or you can use the LocalStorage RxStorage to store data in localstorage in a browser-based application. There are also storages for other JavaScript runtimes like Node.js, React-Native, NativeScript and more. ## Quick Recommendations - In the Browser: Use the [LocalStorage](./rx-storage-localstorage.md) storage for simple setup and small build size. For bigger datasets, use either the [dexie.js storage](./rx-storage-dexie.md) (free) or the [IndexedDB RxStorage](./rx-storage-indexeddb.md) if you have [πŸ‘‘ premium access](/premium/) which is a bit faster and has a smaller build size. - In [Electron](./electron-database.md) and [ReactNative](./react-native-database.md): Use the [SQLite RxStorage](./rx-storage-sqlite.md) if you have [πŸ‘‘ premium access](/premium/) or the [trial-SQLite RxStorage](./rx-storage-sqlite.md) for tryouts. For ultimate performance in Expo and React Native, use the [Expo Filesystem RxStorage](./rx-storage-filesystem-expo.md). - In Capacitor: Use the [SQLite RxStorage](./rx-storage-sqlite.md) if you have [πŸ‘‘ premium access](/premium/), otherwise use the [localStorage](./rx-storage-localstorage.md) storage. ## Configuration Examples The RxStorage layer of RxDB is very flexible. Here are some examples of how to configure more complex settings: ### Storing much data in a browser securely Let's say you build a browser app that needs to store a large amount of data as securely as possible. Here we can use a combination of the storages (encryption, IndexedDB, compression, schema-checks) that increase security and reduce the stored data size. We use the schema-validation on the top level to ensure schema-errors are clearly readable and do not contain [encrypted](./encryption.md)/[compressed](./key-compression.md) data. The encryption is used inside of the compression because encryption of compressed data is more efficient.
```ts import { wrappedValidateAjvStorage } from 'rxdb/plugins/validate-ajv'; import { wrappedKeyCompressionStorage } from 'rxdb/plugins/key-compression'; import { wrappedKeyEncryptionCryptoJsStorage } from 'rxdb/plugins/encryption-crypto-js'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; const myDatabase = await createRxDatabase({ storage: wrappedValidateAjvStorage({ storage: wrappedKeyCompressionStorage({ storage: wrappedKeyEncryptionCryptoJsStorage({ storage: getRxStorageIndexedDB() }) }) }) }); ``` ### High query Load We can also combine storages to create a database that is optimized to run complex queries on the data really fast. Here we use the sharding storage together with the worker storage. This allows queries to run in parallel across multiple threads instead of in a single JavaScript process. Because the worker initialization can slow down the initial page load, we also use the [localstorage-meta-optimizer](./rx-storage-localstorage-meta-optimizer.md) to improve initialization time. ```ts import { getRxStorageSharding } from 'rxdb-premium/plugins/storage-sharding'; import { getRxStorageWorker } from 'rxdb-premium/plugins/storage-worker'; import { getRxStorageIndexedDB } from 'rxdb-premium/plugins/storage-indexeddb'; import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; const myDatabase = await createRxDatabase({ storage: getLocalstorageMetaOptimizerRxStorage({ storage: getRxStorageSharding({ storage: getRxStorageWorker({ workerInput: 'path/to/worker.js', storage: getRxStorageIndexedDB() }) }) }) }); ``` ### Low Latency on Writes and Simple Reads Here we create a storage configuration that is optimized for low latency on simple reads and writes. It uses the memory-mapped storage to fetch and store data in memory.
For persistence, the OPFS storage is used in the main thread; it has lower latency when fetching big chunks of data at initialization, when the data is loaded from disk into memory. We do not use workers because sending data back and forth between the main thread and workers would increase the latency. ```ts import { getLocalstorageMetaOptimizerRxStorage } from 'rxdb-premium/plugins/storage-localstorage-meta-optimizer'; import { getMemoryMappedRxStorage } from 'rxdb-premium/plugins/storage-memory-mapped'; import { getRxStorageOPFSMainThread } from 'rxdb-premium/plugins/storage-worker'; const myDatabase = await createRxDatabase({ storage: getLocalstorageMetaOptimizerRxStorage({ storage: getMemoryMappedRxStorage({ storage: getRxStorageOPFSMainThread() }) }) }); ``` ## All RxStorage Implementations List ### Memory A storage that stores the data as plain data in the memory of the JavaScript process. It is really fast and can be used in all environments. [Read more](./rx-storage-memory.md) ### LocalStorage The localStorage based storage stores the data inside of a browser's [localStorage API](./articles/localstorage.md). It is the easiest to set up and has a small bundle size. **If you are new to RxDB, you should start with the LocalStorage RxStorage**. [Read more](./rx-storage-localstorage.md) ### πŸ‘‘ IndexedDB The IndexedDB `RxStorage` is based on plain IndexedDB. For most use cases, this has the best performance together with the OPFS storage. [Read more](./rx-storage-indexeddb.md) ### πŸ‘‘ OPFS The OPFS `RxStorage` is based on the File System Access API. This has the best performance of all non-in-memory storages when RxDB is used inside of a browser. [Read more](./rx-storage-opfs.md) ### πŸ‘‘ Filesystem Node The Filesystem Node storage is best suited when you use RxDB in a Node.js process or with [electron.js](./electron.md).
[Read more](./rx-storage-filesystem-node.md) ### Storage Wrapper Plugins #### πŸ‘‘ Worker The worker RxStorage is a wrapper around any other RxStorage which allows running the storage in a WebWorker (in browsers) or a Worker Thread (in Node.js). By doing so, you can take CPU load from the main process and move it into the worker process, which can improve the perceived performance of your application. [Read more](./rx-storage-worker.md) #### πŸ‘‘ SharedWorker The SharedWorker RxStorage is a wrapper around any other RxStorage which allows running the storage in a SharedWorker (only in browsers). By doing so, you can take CPU load from the main process and move it into the worker process, which can improve the perceived performance of your application. [Read more](./rx-storage-shared-worker.md) #### Remote The Remote RxStorage is made to use a remote storage and communicate with it over an asynchronous message channel. The remote part could be on another JavaScript process or even on a different host machine. Mostly used internally in other storages like Worker or Electron-ipc. [Read more](./rx-storage-remote.md) #### πŸ‘‘ Sharding On some `RxStorage` implementations (like IndexedDB), a huge performance improvement can be achieved by sharding the documents into multiple database instances. With the sharding plugin you can wrap any other `RxStorage` into a sharded storage. [Read more](./rx-storage-sharding.md) #### πŸ‘‘ Memory Mapped The memory-mapped [RxStorage](./rx-storage.md) is a wrapper around any other RxStorage. The wrapper creates an in-memory storage that is used for query and write operations. This memory instance stores its data in an underlying storage for persistence. The main reason to use this is to improve query/write performance while still having the data stored on disk. [Read more](./rx-storage-memory-mapped.md) #### πŸ‘‘ Localstorage Meta Optimizer The [RxStorage](./rx-storage.md) Localstorage Meta Optimizer is a wrapper around any other RxStorage.
The wrapper uses the original RxStorage for normal collection documents. But to optimize the initial page load time, it uses [localstorage](./articles/localstorage.md) to store the plain key-value metadata that RxDB needs to create databases and collections. This plugin can only be used in browsers. [Read more](./rx-storage-localstorage-meta-optimizer.md) #### Electron IpcRenderer & IpcMain To use RxDB in [electron](./electron-database.md), it is recommended to run the RxStorage in the main process and the [RxDatabase](./rx-database.md) in the renderer processes. With the RxDB electron plugin you can create a remote RxStorage and consume it from the renderer process. [Read more](./electron.md) ### Third Party based Storages #### πŸ‘‘ Expo Filesystem The Expo Filesystem storage brings blazing-fast OPFS capabilities to React Native and Expo applications, bypassing the bridge via JSI bindings for maximum performance. This is the fastest storage engine for React Native. [Read more](./rx-storage-filesystem-expo.md) #### πŸ‘‘ SQLite The SQLite storage has great performance when RxDB is used on **Node.js**, **Electron**, **React Native**, **Cordova** or **Capacitor**. [Read more](./rx-storage-sqlite.md) #### Dexie.js The Dexie.js storage is built on the Dexie.js IndexedDB wrapper library. [Read more](./rx-storage-dexie.md) #### MongoDB To use RxDB on the server side, the MongoDB RxStorage provides a way of having a secure, scalable and performant storage based on the popular MongoDB NoSQL database. [Read more](./rx-storage-mongodb.md) #### DenoKV To use RxDB in Deno, the DenoKV RxStorage provides a way of having a secure, scalable and performant storage based on the Deno Key Value Store. [Read more](./rx-storage-denokv.md) #### FoundationDB To use RxDB on the server side, the FoundationDB RxStorage provides a way of having a secure, fault-tolerant and performant storage.
[Read more](./rx-storage-foundationdb.md) --- ## TypeScript Setup import {Steps} from '@site/src/components/steps'; # Using RxDB with TypeScript In this tutorial you will learn how to use RxDB with TypeScript. We will create a basic database with one collection and several [ORM](../orm.md)-methods, fully typed! RxDB comes with its own typings and you do not have to install anything else; however, the latest version of RxDB requires TypeScript v3.8 or newer. Our way to go is - First define what the documents look like - Then define what the collections look like - Then define what the database looks like ## Declare the types First you import the types from RxDB. ```typescript import { createRxDatabase, RxDatabase, RxCollection, RxJsonSchema, RxDocument, } from 'rxdb/plugins/core'; import { getRxStorageLocalstorage } from 'rxdb/plugins/storage-localstorage'; ``` ## Create the base document type First we have to define the TypeScript type of the documents of a collection: **Option A**: Create the document type from the schema ```typescript import { toTypedRxJsonSchema, ExtractDocumentTypeFromTypedRxJsonSchema, RxJsonSchema } from 'rxdb'; export const heroSchemaLiteral = { title: 'hero schema', description: 'describes a human being', version: 0, keyCompression: true, primaryKey: 'passportId', type: 'object', properties: { passportId: { type: 'string', maxLength: 100 // <- the primary key must have set maxLength }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer' } }, required: ['firstName', 'lastName', 'passportId'], indexes: ['firstName'] } as const; // <- It is important to set 'as const' to preserve the literal type const schemaTyped = toTypedRxJsonSchema(heroSchemaLiteral); // aggregate the document type from the schema export type HeroDocType = ExtractDocumentTypeFromTypedRxJsonSchema< typeof schemaTyped >; // create the typed RxJsonSchema from the literal typed object.
export const heroSchema: RxJsonSchema<HeroDocType> = heroSchemaLiteral; ``` **Option B**: Manually type the document type ```typescript export type HeroDocType = { passportId: string; firstName: string; lastName: string; age?: number; // optional }; ``` **Option C**: Generate the document type from schema during build time If your schema is in a `.json` file or generated from somewhere else, you might generate the typings with the [json-schema-to-typescript](https://www.npmjs.com/package/json-schema-to-typescript) module. ## Types for the ORM methods We also add some ORM-methods for the document. ```typescript export type HeroDocMethods = { scream: (v: string) => string; }; ``` ## Create [RxDocument](../rx-document.md) Type We can merge these into our HeroDocument. ```typescript export type HeroDocument = RxDocument<HeroDocType, HeroDocMethods>; ``` ## Create [RxCollection](../rx-collection.md) Type Now we can define the type for the collection that contains the documents. ```typescript // we declare one static ORM-method for the collection export type HeroCollectionMethods = { countAllDocuments: () => Promise<number>; } // and then merge all our types export type HeroCollection = RxCollection< HeroDocType, HeroDocMethods, HeroCollectionMethods >; ``` ## Create [RxDatabase](../rx-database.md) Type Before we can define the database, we make a helper-type which contains all collections of it. ```typescript export type MyDatabaseCollections = { heroes: HeroCollection } ``` Now the database. ```typescript export type MyDatabase = RxDatabase<MyDatabaseCollections>; ``` ## Using the types Now that we have declared all our types, we can use them.
```typescript /** * create database and collections */ const myDatabase: MyDatabase = await createRxDatabase<MyDatabaseCollections>({ name: 'mydb', storage: getRxStorageLocalstorage() }); const heroSchema: RxJsonSchema<HeroDocType> = { title: 'human schema', description: 'describes a human being', version: 0, keyCompression: true, primaryKey: 'passportId', type: 'object', properties: { passportId: { type: 'string', maxLength: 100 }, firstName: { type: 'string' }, lastName: { type: 'string' }, age: { type: 'integer' } }, required: ['passportId', 'firstName', 'lastName'] }; const heroDocMethods: HeroDocMethods = { scream: function(this: HeroDocument, what: string) { return this.firstName + ' screams: ' + what.toUpperCase(); } }; const heroCollectionMethods: HeroCollectionMethods = { countAllDocuments: async function(this: HeroCollection) { const allDocs = await this.find().exec(); return allDocs.length; } }; await myDatabase.addCollections({ heroes: { schema: heroSchema, methods: heroDocMethods, statics: heroCollectionMethods } }); // add a postInsert-hook myDatabase.heroes.postInsert( function myPostInsertHook( this: HeroCollection, // own collection is bound to the scope docData: HeroDocType, // documents data doc: HeroDocument // RxDocument ) { console.log('insert to ' + this.name + '-collection: ' + doc.firstName); }, false // not async ); /** * use the database */ // insert a document const hero: HeroDocument = await myDatabase.heroes.insert({ passportId: 'myId', firstName: 'piotr', lastName: 'potter', age: 5 }); // access a property console.log(hero.firstName); // use an ORM method hero.scream('AAH!'); // use a static ORM method from the collection const amount: number = await myDatabase.heroes.countAllDocuments(); console.log(amount); /** * clean up */ myDatabase.close(); ```