Version: 6.4

Configuration

Entity Discovery

You can either provide an array of entity references via entities, or let the ORM look up your entities in selected folders.

MikroORM.init({
  entities: [Author, Book, Publisher, BookTag],
});

We can also use folder-based discovery by providing a list of paths to the entities we want to discover (globs are supported as well). In that case, we also need to specify entitiesTs, where we point the paths to the TS source files instead of the compiled JS files (see more at Metadata Providers).

The entitiesTs option is used when running the app via ts-node, as the ORM needs to discover the TS files. Always specify this option if you use folder/file based discovery.

MikroORM.init({
  entities: ['./dist/modules/users/entities', './dist/modules/projects/entities'],
  entitiesTs: ['./src/modules/users/entities', './src/modules/projects/entities'],
  // optionally you can override the base directory (defaults to `process.cwd()`)
  baseDir: process.cwd(),
});

Be careful when overriding the baseDir with dynamic values like __dirname, as you can end up with valid paths from ts-node, but invalid paths from node. Ideally you should keep the default of process.cwd() there to always have the same base path regardless of how you run the app.

By default, the ReflectMetadataProvider is used, which leverages the reflect-metadata module. You can also use the TsMorphMetadataProvider by installing @mikro-orm/reflection. This provider will analyse your entity source files (or .d.ts type definition files). If you aim to use plain JavaScript instead of TypeScript, use EntitySchema.
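
For plain JavaScript projects, an EntitySchema based definition might look like the following sketch (the Book entity and its properties are only an illustration):

import { EntitySchema, MikroORM } from '@mikro-orm/core';

// illustrative `Book` entity defined without decorators
export const BookSchema = new EntitySchema({
  name: 'Book',
  properties: {
    id: { type: 'number', primary: true },
    title: { type: 'string' },
  },
});

MikroORM.init({
  entities: [BookSchema],
});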

You can also implement your own metadata provider and use it instead. To do so, extend the MetadataProvider class.

import { MikroORM } from '@mikro-orm/core';
import { TsMorphMetadataProvider } from '@mikro-orm/reflection';

MikroORM.init({
metadataProvider: TsMorphMetadataProvider,
});

There are also some additional options for adjusting the discovery process:

MikroORM.init({
  discovery: {
    warnWhenNoEntities: false, // by default, discovery throws when no entity is processed
    requireEntitiesArray: true, // force usage of class references in `entities` instead of paths
    alwaysAnalyseProperties: false, // do not analyse properties when not needed (with ts-morph)
  },
});

If you disable the discovery.alwaysAnalyseProperties option, you will need to explicitly provide the nullable and ref parameters (where applicable), as shown below.
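
A sketch of what that means in practice (the Author and Book entities here are only illustrative):

import { Entity, ManyToOne, PrimaryKey, Property, Ref } from '@mikro-orm/core';

@Entity()
export class Author {

  @PrimaryKey()
  id!: number;

}

@Entity()
export class Book {

  @PrimaryKey()
  id!: number;

  // optional property - `nullable` needs to be set explicitly
  @Property({ nullable: true })
  subtitle?: string;

  // reference wrapper - `ref` needs to be set explicitly
  @ManyToOne(() => Author, { ref: true })
  author!: Ref<Author>;

}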

Read more about this in the Metadata Providers section.

Adjusting default type mapping

Since v5.2 we can alter how the ORM picks the default mapped type representation based on the inferred type of a property. One example is the mapping of foo: string to varchar(255). If we wanted to change this default to the text type in postgres, we can use the discovery.getMappedType callback:

import { MikroORM, Platform, TextType, Type } from '@mikro-orm/core';

const orm = await MikroORM.init({
  discovery: {
    getMappedType(type: string, platform: Platform) {
      // override the mapping for string properties only
      if (type === 'string') {
        return Type.getType(TextType);
      }

      return platform.getDefaultMappedType(type);
    },
  },
});

onMetadata hook

Sometimes you might want to alter some behavior of the ORM on the metadata level. You can use the onMetadata hook to modify the metadata. Let's say you want to use your entities with different drivers, and you want to use some driver-specific feature. Using the onMetadata hook, you can modify the metadata dynamically to fit the driver's requirements.

The hook will be executed before the internal process of filling defaults, so you can think of it as modifying the property options in your entity definitions; they will be respected e.g. when inferring the column type.

The hook can be async, but it will be awaited only if you use the async MikroORM.init() method, not with the MikroORM.initSync().

import { EntityMetadata, MikroORM, Platform } from '@mikro-orm/sqlite';

const orm = await MikroORM.init({
  // ...
  discovery: {
    onMetadata(meta: EntityMetadata, platform: Platform) {
      // sqlite driver does not support schemas
      delete meta.schema;
    },
  },
});

Alternatively, you can also use the afterDiscovered hook, which is fired after the discovery process ends. You can access all the metadata there, and add or remove them as you wish.

import { MetadataStorage, MikroORM } from '@mikro-orm/sqlite';

const orm = await MikroORM.init({
  // ...
  discovery: {
    afterDiscovered(storage: MetadataStorage) {
      // ignore FooBar entity in schema generator
      storage.reset('FooBar');
    },
  },
});

Extensions

Since v5.6, the ORM extensions like SchemaGenerator, Migrator or EntityGenerator can be registered via the extensions config option. This will be the only supported way to have the shortcuts like orm.migrator available in v6, so we no longer need to dynamically require those dependencies or specify them as optional peer dependencies (both of those things cause issues with various bundling tools like Webpack, or those used in Remix or Next.js).

import { defineConfig } from '@mikro-orm/postgresql';
import { Migrator } from '@mikro-orm/migrations';
import { EntityGenerator } from '@mikro-orm/entity-generator';
import { SeedManager } from '@mikro-orm/seeder';

export default defineConfig({
  dbName: 'test',
  extensions: [Migrator, EntityGenerator, SeedManager],
});

The SchemaGenerator (as well as MongoSchemaGenerator) is registered automatically as it does not require any 3rd party dependencies to be installed.

Since v6.3, the extensions are again checked dynamically if not explicitly registered, so it should be enough to have the given package (e.g. @mikro-orm/seeder) installed as in v5.

Driver

To select the driver, you can either use the type option, or provide the driver class reference.

| type | driver name | dependency | note |
| --- | --- | --- | --- |
| mongo | MongoDriver | mongodb | - |
| mysql | MySqlDriver | mysql2 | compatible with MariaDB |
| mariadb | MariaDbDriver | mariadb | compatible with MySQL |
| postgresql | PostgreSqlDriver | pg | compatible with CockroachDB |
| mssql | MsSqlDriver | tedious | - |
| sqlite | SqliteDriver | sqlite3 | - |
| better-sqlite | BetterSqliteDriver | better-sqlite3 | - |
| libsql | LibSqlDriver | libsql | - |

Driver and connection implementations are not directly exported from @mikro-orm/core module. You can import them from the driver packages (e.g. import { PostgreSqlDriver } from '@mikro-orm/postgresql').

You can pass additional options to the underlying driver (e.g. mysql2) via driverOptions. The object will be deeply merged, overriding all internally used options.

import { MySqlDriver } from '@mikro-orm/mysql';

MikroORM.init({
  driver: MySqlDriver,
  driverOptions: { connection: { timezone: '+02:00' } },
});

From v3.5.1 you can also set the timezone directly in the ORM configuration:

MikroORM.init({
  timezone: '+02:00',
});

Connection

Each platform (driver) provides a default connection string. You can override it as a whole through clientUrl, or partially through one of the following options:

export interface DynamicPassword {
  password: string;
  expirationChecker?: () => boolean;
}

export interface ConnectionOptions {
  dbName?: string;
  name?: string; // for logging only (when replicas are used)
  clientUrl?: string;
  host?: string;
  port?: number;
  user?: string;
  password?: string | (() => string | Promise<string> | DynamicPassword | Promise<DynamicPassword>);
  charset?: string;
  multipleStatements?: boolean; // for mysql driver
  pool?: PoolConfig; // provided by `knex`
}
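
To make the two approaches concrete (hostnames and credentials below are purely illustrative):

// either provide the connection string as a whole...
MikroORM.init({
  clientUrl: 'postgresql://postgres:secret@db.example.com:5432/my_db_name',
});

// ...or override the platform defaults only partially
MikroORM.init({
  host: 'db.example.com',
  port: 5432,
  user: 'postgres',
  password: 'secret',
  dbName: 'my_db_name',
});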

The following table shows the default client connection strings:

| type | default connection url |
| --- | --- |
| mongo | mongodb://127.0.0.1:27017 |
| mysql | mysql://root@127.0.0.1:3306 |
| mariadb | mysql://root@127.0.0.1:3306 |
| postgresql | postgresql://postgres@127.0.0.1:5432 |

Read Replicas

To set up read replicas, you can use the replicas option. You only need to provide the parts of the ConnectionOptions interface you want to change; they will be used to override the master connection options.

MikroORM.init({
  dbName: 'my_db_name',
  user: 'write-user',
  host: 'master.db.example.com',
  port: 3306,
  replicas: [
    { user: 'read-user-1', host: 'read-1.db.example.com', port: 3307 },
    { user: 'read-user-2', host: 'read-2.db.example.com', port: 3308 },
    { user: 'read-user-3', host: 'read-3.db.example.com', port: 3309 },
  ],
});

Read more about this in Installation and Read Connections sections.

Using short-lived tokens

Many cloud providers include alternative methods for connecting to database instances using short-lived authentication tokens. MikroORM supports dynamic passwords via a callback function, either synchronous or asynchronous. The callback can resolve either to a plain string, or to a DynamicPassword object (see below).

MikroORM.init({
  dbName: 'my_db_name',
  password: async () => someCallToGetTheToken(),
});

The password callback value will be cached. To invalidate this cache, we can specify an expirationChecker callback:

MikroORM.init({
  dbName: 'my_db_name',
  password: async () => {
    const { token, tokenExpiration } = await someCallToGetTheToken();
    return { password: token, expirationChecker: () => tokenExpiration <= Date.now() };
  },
});

onQuery hook and observability

Sometimes you might want to alter the generated queries. One use case for that might be adding contextual query hints to allow observability. Before a more native approach is added to the ORM, you can use the onQuery hook to modify all the queries by hand. The hook will be fired for every query before its execution.

import { AsyncLocalStorage } from 'node:async_hooks';

const ctx = new AsyncLocalStorage();

// provide the necessary data to the store in some middleware
app.use((req, res, next) => {
  const store = { endpoint: req.url };
  ctx.run(store, next);
});

MikroORM.init({
  onQuery: (sql: string, params: unknown[]) => {
    const store = ctx.getStore();

    if (!store) {
      return sql;
    }

    // your function that generates the necessary query hint
    const hint = createQueryHint(store);

    return sql + hint;
  },
});

Naming Strategy

When mapping your entities to database tables and columns, their names will be defined by the naming strategy. There are 3 basic naming strategies you can choose from:

  • UnderscoreNamingStrategy - default of all SQL drivers
  • MongoNamingStrategy - default of MongoDriver
  • EntityCaseNamingStrategy - uses unchanged entity and property names

You can also define your own custom NamingStrategy implementation.

MikroORM.init({
  namingStrategy: EntityCaseNamingStrategy,
});
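
A custom strategy usually extends one of the built-in ones and overrides only what it needs; a minimal sketch (the app_ table prefix is just an illustration):

import { UnderscoreNamingStrategy } from '@mikro-orm/core';

class PrefixedNamingStrategy extends UnderscoreNamingStrategy {

  // e.g. maps the `BookTag` entity to the `app_book_tag` table
  override classToTableName(entityName: string): string {
    return 'app_' + super.classToTableName(entityName);
  }

}

MikroORM.init({
  namingStrategy: PrefixedNamingStrategy,
});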

Read more about this in Naming Strategy section.

Auto-join of 1:1 owners

By default, the owning side of a 1:1 relation will be auto-joined when you select the inverse side, so we can have a reference to it. You can disable this behaviour via the autoJoinOneToOneOwner configuration toggle.

MikroORM.init({
  autoJoinOneToOneOwner: false,
});

Auto-join of M:1 and 1:1 relations with filters

Since v6, filters are applied to the relations too, as part of JOIN ON condition. If a filter exists on a M:1 or 1:1 relation target, such an entity will be automatically joined, and when the foreign key is defined as NOT NULL, it will result in an INNER JOIN rather than LEFT JOIN. This is especially important for implementing soft deletes via filters, as the foreign key might point to a soft-deleted entity. When this happens, the automatic INNER JOIN will result in such a record not being returned at all. You can disable this behavior via autoJoinRefsForFilters ORM option.

MikroORM.init({
  autoJoinRefsForFilters: false,
});

Forcing UTC Timezone

Use forceUtcTimezone option to force the Dates to be saved in UTC in datetime columns without timezone. It works for MySQL (datetime type) and PostgreSQL (timestamp type). SQLite does this by default.

MikroORM.init({
  forceUtcTimezone: true,
});

Mapping null values to undefined

By default, null values from nullable database columns are hydrated as null. Using forceUndefined, we can tell the ORM to convert those null values to undefined instead.

MikroORM.init({
  forceUndefined: true,
});

Ignoring undefined values in Find Queries

The ORM will treat explicitly defined undefined values in your em.find() queries as nulls. If you want to ignore them instead, use ignoreUndefinedInQuery option:

MikroORM.init({
  ignoreUndefinedInQuery: true,
});

// resolves to `em.find(User, {})`
await em.find(User, { email: undefined, profiles: { foo: undefined } });

Serialization of new entities

After flushing a new entity, all its relations are marked as populated, just as if the entity was loaded from the db. This aligns the serialized output of e.toJSON() of a loaded entity and a just-inserted one.

In v4 this behaviour was disabled by default, so even after the new entity was flushed, its serialized form contained only FKs for its relations. We can opt in to this old behaviour via populateAfterFlush: false.

MikroORM.init({
  populateAfterFlush: false,
});

Population where condition

This applies only to SELECT_IN strategy, as JOINED strategy implies the inference.

In v4, when we used populate hints in em.find() and similar methods, the query for our entity would be analysed, and parts of it extracted and used for the population. The following example would find all authors that have books with the given IDs, and populate their books collection using the same PK condition, so only such books would end up in those collections.

// this would end up with `Author.books` collections having only books of PK 1, 2, 3
const a = await em.find(Author, { books: [1, 2, 3] }, { populate: ['books'] });

Following this example, if we wanted to load all books, we would need a separate em.populate() call:

const a = await em.find(Author, { books: [1, 2, 3] });
await em.populate(a, ['books']);

This behaviour changed and is now configurable both globally and locally, via the populateWhere option. Globally we can specify either PopulateHint.ALL or PopulateHint.INFER, the former being the default in v5, the latter being the default behaviour in v4. Locally (via FindOptions) we can also specify a custom where condition that will be passed to the em.populate() call, as sketched below.

MikroORM.init({
  // defaults to PopulateHint.ALL in v5
  populateWhere: PopulateHint.INFER, // revert to v4 behaviour
});
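
On the FindOptions level, populateWhere accepts either one of the PopulateHint values or a custom where condition; in the sketch below, the publishedAt property is hypothetical:

import { PopulateHint } from '@mikro-orm/core';

// populate all books of the matched authors, regardless of the PK condition
const a1 = await em.find(Author, { books: [1, 2, 3] }, {
  populate: ['books'],
  populateWhere: PopulateHint.ALL,
});

// or provide a custom condition for the population query
const a2 = await em.find(Author, { books: [1, 2, 3] }, {
  populate: ['books'],
  populateWhere: { books: { publishedAt: { $ne: null } } },
});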

Custom Hydrator

Hydrator is responsible for assigning values from the database to entities. You can implement your custom Hydrator (by extending the abstract Hydrator class):

MikroORM.init({
  hydrator: MyCustomHydrator,
});

Custom Repository

You can also register a custom base repository (used for all entities where you do not specify the repository option) globally:

You can still use entity-specific repositories in combination with the global base repository.

MikroORM.init({
  entityRepository: CustomBaseRepository,
});
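
Such a base repository might look like the following sketch; the findRandom() helper is only an example of what you could add:

import { EntityRepository } from '@mikro-orm/core';

export class CustomBaseRepository<T extends object> extends EntityRepository<T> {

  // example helper available on every entity repository
  async findRandom(): Promise<T | null> {
    const items = await this.findAll({ limit: 50 });
    return items[Math.floor(Math.random() * items.length)] ?? null;
  }

}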

Read more about this in Repositories section.

Strict Mode and property validation

Since v4.0.3 the validation needs to be explicitly enabled via validate: true. It has performance implications and usually should not be needed, as long as you don't modify your entities via Object.assign().

MikroORM will validate your properties before the actual persisting happens. It will try to fix wrong data types for you automatically. If automatic conversion fails, it will throw an error. You can enable strict mode to disable this feature and let the ORM throw errors instead. Validation is triggered when persisting the entity.

MikroORM.init({
  validate: true,
  strict: true,
});
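
To illustrate what this catches, consider the following sketch (Author and its age property are hypothetical):

const author = em.create(Author, { name: 'Jon', age: 33 });

// bypasses the typing, `age` now holds a string
Object.assign(author, { age: '33' });

// with `validate: true` and `strict: false`, the value is coerced back to a number on flush;
// with `strict: true`, the flush throws a validation error instead
await em.flush();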

Read more about this in Property Validation section.

Required properties validation

Since v5, new entities are validated at runtime (just before executing insert queries), based on the entity metadata. This means that mongo users now need to use nullable: true on their optional properties too.

This behaviour can be disabled globally via validateRequired: false in the ORM config.

MikroORM.init({
  validateRequired: false,
});

Debugging & Logging

You can enable logging with the debug option. Either set it to true to log everything, or provide an array of 'query' | 'query-params' | 'discovery' | 'info' namespaces.

MikroORM.init({
  logger: (message: string) => myLogger.info(message), // defaults to `console.log()`
  debug: true, // or provide array like `['query', 'query-params']`
  highlight: false, // defaults to true
  highlightTheme: { ... }, // you can also provide custom highlight there
});

Read more about this in Debugging section.

Custom Fail Handler

When no entity is found during em.findOneOrFail() call, a NotFoundError will be thrown. You can customize how the Error instance is created via findOneOrFailHandler (or findExactlyOneOrFailHandler if strict mode is enabled):

MikroORM.init({
  findOneOrFailHandler: (entityName: string, where: Dictionary | IPrimaryKey) => {
    return new NotFoundException(`${entityName} not found!`);
  },
});

Read more about this in Entity Manager docs.

Schema Generator

The following example shows all possible options and their defaults:

MikroORM.init({
  schemaGenerator: {
    disableForeignKeys: true, // try to disable foreign_key_checks (or equivalent)
    createForeignKeyConstraints: true, // set to `false` to skip generating FK constraints
  },
});

Migrations

Under the migrations namespace, you can adjust how the integrated migrations support works. The following example shows all possible options and their defaults:

MikroORM.init({
  migrations: {
    tableName: 'mikro_orm_migrations', // migrations table name
    path: process.cwd() + '/migrations', // path to folder with migration files
    glob: '!(*.d).{js,ts}', // how to match migration files (all .js and .ts files, but not .d.ts)
    transactional: true, // run each migration inside transaction
    disableForeignKeys: true, // try to disable foreign_key_checks (or equivalent)
    allOrNothing: true, // run all migrations in current batch in master transaction
    emit: 'ts', // migration generation mode
  },
});

Read more about this in Migrations section.

Seeder

The following example shows all possible options and their defaults:

MikroORM.init({
  seeder: {
    path: './seeders',
    defaultSeeder: 'DatabaseSeeder',
  },
});

Read more about this in seeding docs.

Caching

By default, metadata discovery results are cached. You can either disable caching, or adjust how it works. The following example shows all possible options and their defaults:

MikroORM.init({
  metadataCache: {
    enabled: true,
    pretty: false, // allows pretty printing of the JSON cache
    adapter: FileCacheAdapter, // you can provide your own implementation here, e.g. with redis
    options: { cacheDir: process.cwd() + '/temp' }, // options will be passed to the constructor of the `adapter` class
  },
});

Read more about this in Metadata Cache section.

Importing database dump files (MySQL and PostgreSQL)

Using the mikro-orm database:import db-file.sql command, you can import a database dump file. This can be useful when kickstarting an application, or in tests to reset the database. Database dumps often have queries spread over multiple lines, so you need the following configuration:

MikroORM.init({
  ...
  multipleStatements: true,
  ...
});

This should be disabled in production environments for added security.

Using native private properties

If we want to use native private properties inside entities, the default approach MikroORM uses to create entity instances via Object.create() is not viable (more about this in the issue). To force usage of entity constructors, we can use the forceEntityConstructor toggle:

MikroORM.init({
  forceEntityConstructor: true, // or specify just some entities via `[Author, 'Book', ...]`
});
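
For example, an entity like the following sketch relies on its constructor being called (the entity and its private field are only an illustration):

import { Entity, PrimaryKey, Property } from '@mikro-orm/core';

@Entity()
export class Author {

  @PrimaryKey()
  id!: number;

  @Property()
  name: string;

  // native private field - only initialized when the constructor runs
  #createdVia = 'constructor';

  constructor(name: string) {
    this.name = name;
  }

  get createdVia() {
    return this.#createdVia;
  }

}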

Persist created entities automatically

When you create a new entity instance via em.create(), it will be automatically marked for future persistence (em.persist() will be called on it before it's returned to you). In case you want to disable this behavior, you can set persistOnCreate: false globally, or override it locally via em.create(Type, data, { persist: false }).

This flag affects only em.create(); entities created manually via the constructor still need an explicit em.persist() call, or they need to be part of the entity graph of some already managed entity.

MikroORM.init({
  persistOnCreate: false, // defaults to true since v5.5
});
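
To illustrate the difference (Book here is a hypothetical entity):

// with `persistOnCreate: false`, even entities from `em.create()` need an explicit persist call
const book1 = em.create(Book, { title: 'b1' });
em.persist(book1);

// entities created via constructor always need it, regardless of this option
const book2 = new Book('b2');
em.persist(book2);

await em.flush();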

Using global Identity Map

In v5, it is no longer possible to use the global identity map. This was a common source of weird bugs, as using the global EM without a request context is almost always wrong; we always need a dedicated context for each request, so they do not interfere with each other.

We can still disable this check via the allowGlobalContext configuration, or the connected environment variable MIKRO_ORM_ALLOW_GLOBAL_CONTEXT - this can be handy especially in unit tests.

MikroORM.init({
  allowGlobalContext: true,
});

Deprecation warnings

By default, doing something that is deprecated will result in a deprecation warning being logged. The default logger will in turn show it on the console.

You can ignore all or only specific deprecation warnings. See Logging's section on deprecation warnings for details.

The full list of deprecation warnings:

| label | message |
| --- | --- |
| D0001 | Path for config file was inferred from the command line arguments. Instead, you should set the MIKRO_ORM_CLI_CONFIG environment variable to specify the path, or if you really must use the command line arguments, import the config manually based on them, and pass it to init. |

Using environment variables

Since v4.5 it is possible to set most of the ORM options via environment variables. By default, the .env file from the root directory is loaded - it is also possible to set the full path to the env file you want to use via the MIKRO_ORM_ENV environment variable.

Only env vars with the MIKRO_ORM_ prefix are loaded this way; all the others will be ignored. If you want to access all the env vars defined in the .env file, call dotenv.config() yourself in your app (or possibly in your ORM config file).

Environment variables always have precedence over the ORM config.

Example .env file:

MIKRO_ORM_TYPE = sqlite
MIKRO_ORM_ENTITIES = ./dist/foo/*.entity.js, ./dist/bar/*.entity.js
MIKRO_ORM_ENTITIES_TS = ./src/foo/*.entity.ts, ./src/bar/*.entity.ts
MIKRO_ORM_DB_NAME = test.db
MIKRO_ORM_MIGRATIONS_PATH = ./dist/migrations
MIKRO_ORM_MIGRATIONS_PATH_TS = ./src/migrations
MIKRO_ORM_POPULATE_AFTER_FLUSH = true
MIKRO_ORM_FORCE_ENTITY_CONSTRUCTOR = true
MIKRO_ORM_FORCE_UNDEFINED = true

Full list of supported options:

| env variable | config key |
| --- | --- |
| MIKRO_ORM_CONTEXT_NAME | contextName |
| MIKRO_ORM_BASE_DIR | baseDir |
| MIKRO_ORM_TYPE | type |
| MIKRO_ORM_ENTITIES | entities |
| MIKRO_ORM_ENTITIES_TS | entitiesTs |
| MIKRO_ORM_CLIENT_URL | clientUrl |
| MIKRO_ORM_HOST | host |
| MIKRO_ORM_PORT | port |
| MIKRO_ORM_USER | user |
| MIKRO_ORM_PASSWORD | password |
| MIKRO_ORM_DB_NAME | dbName |
| MIKRO_ORM_SCHEMA | schema |
| MIKRO_ORM_LOAD_STRATEGY | loadStrategy |
| MIKRO_ORM_BATCH_SIZE | batchSize |
| MIKRO_ORM_USE_BATCH_INSERTS | useBatchInserts |
| MIKRO_ORM_USE_BATCH_UPDATES | useBatchUpdates |
| MIKRO_ORM_STRICT | strict |
| MIKRO_ORM_VALIDATE | validate |
| MIKRO_ORM_AUTO_JOIN_ONE_TO_ONE_OWNER | autoJoinOneToOneOwner |
| MIKRO_ORM_PROPAGATE_TO_ONE_OWNER | propagateToOneOwner |
| MIKRO_ORM_POPULATE_AFTER_FLUSH | populateAfterFlush |
| MIKRO_ORM_FORCE_ENTITY_CONSTRUCTOR | forceEntityConstructor |
| MIKRO_ORM_FORCE_UNDEFINED | forceUndefined |
| MIKRO_ORM_FORCE_UTC_TIMEZONE | forceUtcTimezone |
| MIKRO_ORM_TIMEZONE | timezone |
| MIKRO_ORM_ENSURE_INDEXES | ensureIndexes |
| MIKRO_ORM_IMPLICIT_TRANSACTIONS | implicitTransactions |
| MIKRO_ORM_DEBUG | debug |
| MIKRO_ORM_COLORS | colors |
| MIKRO_ORM_DISCOVERY_WARN_WHEN_NO_ENTITIES | discovery.warnWhenNoEntities |
| MIKRO_ORM_DISCOVERY_REQUIRE_ENTITIES_ARRAY | discovery.requireEntitiesArray |
| MIKRO_ORM_DISCOVERY_ALWAYS_ANALYSE_PROPERTIES | discovery.alwaysAnalyseProperties |
| MIKRO_ORM_DISCOVERY_DISABLE_DYNAMIC_FILE_ACCESS | discovery.disableDynamicFileAccess |
| MIKRO_ORM_MIGRATIONS_TABLE_NAME | migrations.tableName |
| MIKRO_ORM_MIGRATIONS_PATH | migrations.path |
| MIKRO_ORM_MIGRATIONS_PATH_TS | migrations.pathTs |
| MIKRO_ORM_MIGRATIONS_GLOB | migrations.glob |
| MIKRO_ORM_MIGRATIONS_TRANSACTIONAL | migrations.transactional |
| MIKRO_ORM_MIGRATIONS_DISABLE_FOREIGN_KEYS | migrations.disableForeignKeys |
| MIKRO_ORM_MIGRATIONS_ALL_OR_NOTHING | migrations.allOrNothing |
| MIKRO_ORM_MIGRATIONS_DROP_TABLES | migrations.dropTables |
| MIKRO_ORM_MIGRATIONS_SAFE | migrations.safe |
| MIKRO_ORM_MIGRATIONS_EMIT | migrations.emit |
| MIKRO_ORM_SCHEMA_GENERATOR_DISABLE_FOREIGN_KEYS | migrations.disableForeignKeys |
| MIKRO_ORM_SCHEMA_GENERATOR_CREATE_FOREIGN_KEY_CONSTRAINTS | migrations.createForeignKeyConstraints |
| MIKRO_ORM_SEEDER_PATH | seeder.path |
| MIKRO_ORM_SEEDER_PATH_TS | seeder.pathTs |
| MIKRO_ORM_SEEDER_GLOB | seeder.glob |
| MIKRO_ORM_SEEDER_EMIT | seeder.emit |
| MIKRO_ORM_SEEDER_DEFAULT_SEEDER | seeder.defaultSeeder |

Note that setting MIKRO_ORM_CONTEXT_NAME without also setting any other configuration environment variable from the table above has a slightly different effect. When combined with other environment variables, the final configuration object is considered to have this contextName. Without other environment variables, it is the value of contextName to search for within the config file, and the final config object is picked based on this value.

For example, assume no .env file is present (or is present, but sets nothing from the table above) and you run:

$ MIKRO_ORM_CONTEXT_NAME=example1 \
node ./dist/index.js

This will look for a config file in the standard paths, and will expect the config file to be able to provide a config with contextName set to "example1".

If you also set other environment variables, MikroORM will still search for a config file and try to find a config with this contextName, but if it can't find one, it will create a config based on this contextName and the rest of the environment variables.
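
Such a config file could, for instance, provide several configurations distinguished by their contextName; the following is only a sketch of that idea, assuming your setup exports an array of configs:

import { defineConfig } from '@mikro-orm/sqlite';

// sketch of a config file providing multiple named configurations
export default [
  defineConfig({ contextName: 'example1', dbName: 'example1.db' }),
  defineConfig({ contextName: 'example2', dbName: 'example2.db' }),
];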

There are also env vars you can use to control the CLI settings (those you can set in your package.json):

| env variable | config key |
| --- | --- |
| MIKRO_ORM_CLI_CONFIG | (CLI only) |
| MIKRO_ORM_CLI_TS_CONFIG_PATH | (CLI only) |
| MIKRO_ORM_CLI_ALWAYS_ALLOW_TS | (CLI only) |
| MIKRO_ORM_CLI_USE_TS_NODE | (CLI only) |
| MIKRO_ORM_CLI_VERBOSE | (CLI only) |