
· 6 min read
Martin Adámek

After a longer pause, I am pleased to announce the next feature release, MikroORM v5.8, probably the last one before v6. While I don't blog about feature releases very often, I feel this one deserves more attention. Why?

Fine-grained control over upserting

Upserting, i.e. em.upsert and em.upsertMany, is a relatively new addition to the EntityManager toolkit. It allows you to run insert on conflict queries (and works in MongoDB too!) to ensure a record exists in the database, updating its state when necessary.

The default behavior was to always prefer the primary key for the on conflict condition, falling back to the first unique property with a value. While this was a sane default, it quickly became apparent that it doesn't work well for some use cases - the most common one being entities with UUID primary keys. In that case, we cannot use the primary key for conflict resolution - yet we often generate its value via a property initializer, so the default logic would prefer it (and fail to find a match). A way around this was to use the POJO signature of em.upsert and not provide the UUID value in the data - but that leads to other issues with drivers that have no easy way to generate a UUID on the server side (via a database default). And it is not just about drivers like SQLite, which lack a UUID function at the database level - the very same problem occurs with PostgreSQL when you want to generate a UUID v7 key, as those are not yet supported out of the box.

MikroORM v5.8 now offers fine-grained control over how the upserting logic works. The following advanced options are available:

  • onConflictFields?: (keyof T)[] to control the conflict clause
  • onConflictAction?: 'ignore' | 'merge' to control the action - the names mirror the ignore and merge methods on the query builder
  • onConflictMergeFields?: (keyof T)[] to control the merge clause
  • onConflictExcludeFields?: (keyof T)[] to omit fields from the merge clause

Here is a more complex example. Note that it works the same way with both signatures - we can pass in an array of entities too, and leverage the runtime defaults we set via property initializers.

const [author1, author2, author3] = await em.upsertMany(Author, [{ ... }, { ... }, { ... }], {
  onConflictFields: ['email'], // a manual set of fields to pass to the on conflict clause
  onConflictAction: 'merge',
  onConflictExcludeFields: ['id'],
});

This will generate a query similar to the following:

insert into "author" 
("id", "current_age", "email", "foo")
values
(1, 41, 'a1', true),
(2, 42, 'a2', true),
(5, 43, 'a3', true)
on conflict ("email")
do update set
"current_age" = excluded."current_age",
"foo" = excluded."foo"
returning "_id", "current_age", "foo", "bar"

It might not be obvious at first sight, but since we specified onConflictExcludeFields: ['id'] in this example, the primary key value will be reconciled via the returning clause - in the third item, we specified a wrong primary key (5) that doesn't match the database, and as a result, we get a properly hydrated entity with the correct primary key from the database.

There were other hidden improvements to upserting too: qb.onConflict() now maps property names to column names properly, em.upsertMany now runs in batches (respecting the global batchSize option), and the result of em.upsert/em.upsertMany now contains all the values with database defaults, so it truly represents the current database state rather than just the data you provided in the payload.
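The batching behaviour can be pictured with a small sketch - this is an illustration of the assumed chunking semantics (split the payload into batchSize-sized groups, one insert on conflict query per group), not MikroORM's actual implementation:

```typescript
// Split an array of rows into batches of at most `batchSize` items.
function chunk<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// 7 rows with a batchSize of 3 would translate to 3 queries
console.log(chunk([1, 2, 3, 4, 5, 6, 7], 3));
// [ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7 ] ]
```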

Filters with Joined loading strategy

Another important change concerns filters. Previously, filters were applied only when using the select-in strategy, as part of the where condition. Since v5.8, filters are also applied to queries using the joined strategy, via join on conditions. This means the two strategies should finally behave the same. I am currently considering changing the default strategy in v6, as this was the last big feature missing from the joined strategy implementation.

A related change is how the populateWhere option works - those conditions are now also applied as join on conditions when using the joined strategy.

New Collection helpers

Lastly, there are several new Collection methods, bringing its interface more in line with the Array prototype:

  • find() to find the first matching element
  • map() to map elements
  • filter() to filter elements
  • reduce() to reduce the collection to a single value
  • slice() to extract a portion of the elements

And a few more convenience methods:

  • exists() to check if a matching element exists
  • isEmpty() to check if the collection is empty
  • indexBy() to map the items to a dictionary - if several items share the same key, only the first one will be present

indexBy() can be pretty helpful, so let's have a closer look at what it is capable of. If we specify only the first parameter, we get a map of the elements indexed by the given key:

// user.config is `Collection<Option>` where `Option` has a `key` and `value` props
const config = user.config.indexBy('key');
// {
// option1: { id: 5, key: 'option1', value: 'Value 1' },
// option2: { id: 12, key: 'option2', value: 'Value 2' },
// option3: { id: 8, key: 'option3', value: 'Value 3' },
// }

The second parameter allows mapping to a value directly instead of the target entity:

const config = user.config.indexBy('key', 'value');
// {
// option1: 'Value 1',
// option2: 'Value 2',
// option3: 'Value 3',
// }

Smaller improvements worth mentioning

  • readonly transactions (native feature of MySQL and PostgreSQL)
  • comments and hint comments on SQL queries (e.g. for query optimizers and logging purposes)
  • global disableIdentityMap and resultCache options
  • Migration class now offers getEntityManager() helper which accepts the current transaction context

What about v6?

I mentioned at the beginning that this might be the final feature release of the 5.x branch, so where are we with the next major? There are still a few refactors I want to finish before I consider it done, but most of them are already in progress and shouldn't be too hard to resolve.

  • support for propagation with useDefineForClassFields: true
  • rework Date mapping to work via mapped types
  • rework hydration of STI entities to support shadowing of properties with the same type

I was focusing on the 5.x branch recently, so v6 development stalled a bit, but this is going to change now. And there is one significant improvement that was merged into v6 just yesterday - initial dataloader support for loading Collection and Reference properties!

It will still take some time - definitely weeks, maybe a few months - but worst case, we should see the next major released by the end of 2023.

Getting started guide

I have one more thing to share. Maybe you missed it, but the work-in-progress Getting Started guide is now public. It's far from complete, but it should already provide some value, especially to beginners. Be sure to check it out, and as always, feedback is very welcome!

· 16 min read
Martin Adámek

The next major version of MikroORM has just been released. The title says: Stricter, Safer, Smarter - why?

  • Greatly improved type safety (e.g. populate and partial loading hints)
  • Auto-flush mode (so we never lose in-memory changes)
  • Automatic refreshing of loaded entities (say goodbye to refresh: true)
  • Reworked schema diffing with automatic down migrations support
  • and many many more...

This time it took almost a year to get here – initial work on v5 started back in March 2021.

In case you don’t know…

If you've never heard of MikroORM, it's a TypeScript data-mapper ORM with a Unit of Work and Identity Map. It currently supports MongoDB, MySQL, PostgreSQL, and SQLite drivers.

You can read the full introductory article here (but note that many things have changed since that was written) or browse through the docs.

Quick summary of 4.x releases

Before we dive into all things v5, let's quickly recap what happened in the 4.x releases.

But enough of the history lesson, let’s talk about the future!

Improved type safety

Let’s jump right into the most interesting feature – strict typing (almost) everywhere! em.create(), toJSON(), toObject(), populate, partial loading, and order by hints, all of that (and even more!) is now strictly typed.

Let’s check the following example:

const god = em.create(Author, {
  name: 'God', // validates required properties
  email: '',
  books: [{
    title: 'Bible, part 1',
    tags: [{ name: 'old' }, { name: 'bestseller' }],
  }],
}, { persist: true }); // we can enable this globally via `persistOnCreate: true`
await em.flush();

// simulate new request

// `authors` is of type `Loaded<Author, 'books.tags'>[]`
const authors = await em.find(Author, {}, {
  populate: ['books.tags'], // populate hint can be inferred from `fields` if not explicitly provided
  fields: [''], // strict partial loading with dot notation support
  orderBy: { name: 'asc', books: { tags: { name: 'asc' } } }, // strict order by with object nesting
});

// `books` and `tags` will be typed as `LoadedCollection` so we can use the safe `$` accessor
const dto = wrap(authors[0]).toObject();
console.log(dto.books[0].tags[0].name); // DTOs are also strictly typed

First, we use em.create() to build the whole entity graph in a single step. It validates the payload for both types and optionality. Some properties of an entity might have default values provided via hooks or database functions - while we might want to define them as required properties, they should act as optional in the context of em.create(). To deal with this, we can mark such properties as optional via the OptionalProps symbol:

export class Author {

  // only `name` will be considered as required for `em.create()`
  [OptionalProps]?: 'createdAt' | 'updatedAt';

  @PrimaryKey()
  id!: number;

  @Property({ defaultRaw: 'current_timestamp()' })
  createdAt!: Date;

  @Property({ onUpdate: () => new Date(), length: 3, defaultRaw: 'current_timestamp(3)' })
  updatedAt!: Date;

  @Property()
  name!: string;

}


Some property names are always considered optional: id, _id, uuid.

Then we load all Author entities, populating their books and the book tags. All of the FindOptions here are strictly typed, moreover, we could even skip the populate hint as it can be inferred from fields option automatically.

We might still need some type casting for DTOs. The serialized form of an entity can be very unpredictable - there are many variables that define how an entity will be serialized, e.g. loaded relation vs reference, property serializers, lazy properties, custom entity serializer/toJSON method, eager loading, recursion checks, … Therefore, all relations on the EntityDTO type are considered loaded. This is mainly done to allow better DX - if we had all relations typed as Primary<T> | EntityDTO<T> (e.g. number | EntityDTO<Book>), it would be impossible to benefit from intellisense/autosuggestions. Imagine this scenario:

const book = {} as Book;
const dto = wrap(book).toObject(); // EntityDTO<Book>

// this is now possible, but with the PK union type, we would need to type cast all the time
const name =;

Validation improvements

Adding on top of the compile-time validation, we also get a runtime validation right before insert queries are fired, to ensure required properties have their values. This is important mainly in mongo, where we don’t have optionality checks on the schema level.

When we try to use the CLI without installing it locally, we now get a warning. And what if we forget to update some of the ORM packages and end up with a version mismatch and multiple installed core packages? We now validate that too!

Reworked schema diffing

Schema diffing has long been one of the weakest spots. Often, additional queries were produced, or it was even impossible to get to a fully synchronized state.

Schema diffing has been completely reworked to address all currently known issues, adding a bit more on top of that:

  • Diffing foreign key constraints
  • Proper index diffing (before we compared just names)
  • Custom index expressions
  • Comment diffing
  • Column length diffing (e.g. numeric(10,2) or varchar(100))
  • Changing primary key types
  • Schema/namespace diffing (Postgres only)
  • Automatic down migrations (no SQLite support yet)
  • Check constraints support (Postgres only)

Smarter migrations

In the production environment, we might want to use compiled migration files. Since v5, this should work almost out of the box - all we need to do is configure the migrations path accordingly. Executed migrations now ignore the file extension, so we can use both node and ts-node on the same database. This is done in a backward-compatible manner.

import { MikroORM, Utils } from '@mikro-orm/core';

await MikroORM.init({
  migrations: {
    path: 'dist/migrations',
    pathTs: 'src/migrations',
    // or alternatively
    // path: Utils.detectTsNode() ? 'src/migrations' : 'dist/migrations',
  },
  // ...
});

Creating a new migration now automatically saves the target schema snapshot into the migrations folder. This snapshot will then be used when creating further migrations, instead of the current database schema. This means that if we create a new migration before running the pending ones, we still get the right schema diff (and no migration will be created if no additional changes were made).

Snapshots should be versioned just like the regular migration files.
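The snapshot-based diffing can be pictured with a simplified sketch - the diffSchema function and its flat column-map shape are hypothetical, shown only to illustrate why diffing against the snapshot (rather than the live database) keeps pending migrations from producing duplicate changes:

```typescript
type Columns = Record<string, string>; // column name -> column type

// Diff the target schema (from entity metadata) against the stored
// snapshot instead of the live database schema.
function diffSchema(snapshot: Columns, target: Columns) {
  const addColumns = Object.keys(target).filter(c => !(c in snapshot));
  const dropColumns = Object.keys(snapshot).filter(c => !(c in target));
  return { addColumns, dropColumns };
}

const snapshot = { id: 'int', name: 'varchar' };            // saved with the last migration
const target = { id: 'int', name: 'varchar', email: 'varchar' }; // current metadata
console.log(diffSchema(snapshot, target));
// { addColumns: [ 'email' ], dropColumns: [] }
```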

Auto-flush mode

Up until now, flushing was always an explicit action. With v5, we can configure the flushing strategy, similarly to how JPA/Hibernate work. There are 3 flush modes:

  • FlushMode.COMMIT - The EntityManager delays the flush until the current Transaction is committed.
  • FlushMode.AUTO - This is the default mode, and it flushes the EntityManager only if necessary.
  • FlushMode.ALWAYS - Flushes the EntityManager before every query.

FlushMode.AUTO will try to detect changes on the entity we are querying, and flush if there is an overlap:

// querying for author will trigger auto-flush if we have a new author persisted
const a1 = new Author(...);
em.persist(a1);
const r1 = await em.find(Author, {});

// querying author won't trigger auto-flush if we have a new book, but no changes on author
const b4 = new Book(...);
em.persist(b4);
const r2 = await em.find(Author, {});

// but querying for book will trigger auto-flush
const r3 = await em.find(Book, {});

More about flush modes in the docs.

Automatic refreshing of loaded entities

Previously, when an entity was loaded and we needed to reload it, we had to provide an explicit refresh: true in the options. Refreshing an entity also had one problematic side effect - the entity data (used for computing changesets) were always updated based on the newly loaded state, forgetting the previous state (possibly losing updates made to the entity before refreshing).

Now we always merge the newly loaded data with the current state, and when we see an updated property, we keep the changed value. Moreover, for em.findOne() with a primary key condition, we try to detect whether it makes sense to reload the entity at all, by comparing the options with the already loaded property names. Both the fields and populate options are taken into account here, to support partial loading and lazy properties.

// first partially load the author with `id` and `email` only
const a1 = await em.findOneOrFail(Author, 123, { fields: ['id', 'email'] }); = 'lol'; // let's change the email

// reloading with same fields won't fire the query (as before)
const a2 = await em.findOneOrFail(Author, 123, { fields: ['email'] });
console.log(a1 === a2); // true, same entity instance, no query was fired

// reloading with additional fields will work without `refresh: true`
const a3 = await em.findOneOrFail(Author, 123, { fields: ['id', 'age'] });
console.log(a1 === a3); // true, same entity instance, but updated!
console.log(a1.age); // new values are loaded
a1.age = 1000; // let's override them

// reloading full entity will work without `refresh: true`
const a4 = await em.findOneOrFail(Author, 123, { populate: ['books'] });
console.log(a1 === a4); // true, same entity instance, but updated!
console.log(a1.termsAccepted); // new values are loaded

await em.flush(); // updates the author with new email and age

For complex conditions in em.findOne() and for any queries via em.find(), we always execute the query anyway, but instead of ignoring the loaded data when the entity is already present in the identity map, we now merge it in the same manner.

// first partially load author entities
const r1 = await em.find(Author, {}, { fields: ['id'] });
r1[0].email = 'lol'; // let's change one of the emails
console.log(r1[0].name); // undefined, not loaded

// reload full entities - no `refresh: true` needed!
const r2 = await em.find(Author, {});
console.log(r2[0]); // fully loaded author entity, but `email` is changed to 'lol'
console.log(r1[0] === r2[0]); // true, same entity instance, just updated!

// flushing will now fire one update query to change the email of one author
await em.flush();

Seeder package

MikroORM v5 ships a new package for seeding your database with initial or testing data. It allows creating entities via the same EntityManager API as usual, adds support for entity factories, and can generate fake data via faker (the newly released community version).

See the seeder docs for more examples.

Polymorphic embeddables

Polymorphic embeddables allow us to define multiple classes for a single embedded property and the right one will be used based on the discriminator column, similar to how single table inheritance works. While this currently works only for embeddables, support for polymorphic entities will be probably added in one of the 5.x releases.

class Owner {

  @PrimaryKey()
  id!: number;

  @Property()
  name!: string;

  @Embedded(() => [Cat, Dog])
  pet!: Cat | Dog;

}


Check out the documentation for a complete example.

There are many other small improvements in embeddables, and many issues were addressed as well. Two examples:

  • Support for many-to-one relations (storing only primary key and being able to populate the relation same as with regular entities)
  • Support for onCreate and onUpdate property options

Populating lazy scalar properties

Previously, the only way to populate a lazy scalar property was during the initial load of the containing entity. If such an entity was already loaded in the identity map (without this property), we needed to refresh its state - and potentially lose some of it. MikroORM v5 allows populating such properties via em.populate() too. Doing so will never override any in-memory changes we might have made to the entity.

Creating references without EntityManager

When we wanted to create a reference (an entity represented only by its primary key), we always needed access to the current EntityManager instance, as such an entity always had to be managed.

Thanks to the new helper methods on the Reference class, we can now create entity references without access to the EntityManager. This can be handy when creating a reference from inside an entity constructor:

export class Book {

  @ManyToOne(() => Author, { wrappedReference: true })
  author!: IdentifiedReference<Author>;

  constructor(authorId: number) { = Reference.createFromPK(Author, authorId);

}


The Reference wrapper is an optional class to allow more type safety over relationships. Alternatively, we can use Reference.createNakedFromPK().

This creates an unmanaged reference that will be merged into the EntityManager once the owning entity gets flushed. Note that before flushing, methods like Reference.init() or Reference.load() won't be available, as they require the EntityManager instance.

Smarter expr helper

The expr() helper can be used to get around strict typing. It was an identity function, doing nothing more than returning its parameter - all it did was tell TypeScript the value is actually of a different type (a generic string, to be precise).

We can now use the helper in two more ways:

  • With a callback signature to allow dynamic aliasing of the expression
  • With an array argument to allow comparing tuples
import { expr } from '@mikro-orm/core';

const res1 = await em.find(Book, {
  // the type argument is optional, use it to get autocomplete on the entity properties
  [expr<Book>(['price', 'createdAt'])]: { $lte: [100, new Date()] },
});

// will issue a query similar to this:
// select `b0`.* from `book` as `b0` where (`b0`.`price`, `b0`.`created_at`) <= (?, ?)

const res2 = await em.find(Book, {
  // the type argument is optional, use it to get autocomplete on the entity properties
  [expr(as => `lower(${as}.name)`)]: 'jon',
});

// will issue a query similar to this:
// select `b0`.* from `book` as `b0` where lower( = ?

Awaitable QueryBuilder

QueryBuilder is now aware of its type, and the getResult() and execute() methods are typed accordingly. We can also await the QueryBuilder instance directly, which will automatically execute the QB and return the appropriate response. Based on the usage of the select/insert/update/delete/truncate methods, the QB instance is typed as one of:

  • SelectQueryBuilder – awaiting yields array of entities
  • CountQueryBuilder – awaiting yields number
  • InsertQueryBuilder – awaiting yields QueryResult
  • UpdateQueryBuilder – awaiting yields QueryResult
  • DeleteQueryBuilder – awaiting yields QueryResult
  • TruncateQueryBuilder – awaiting yields QueryResult

const res1 = await em.createQueryBuilder(Publisher).insert({
  name: 'p1',
  type: PublisherType.GLOBAL,
});
// res1 is of type `QueryResult<Publisher>`

const res2 = await em.createQueryBuilder(Publisher)
  .where({ name: 'p1' });
// res2 is Publisher[]
console.log( =>;

const res3 = await em.createQueryBuilder(Publisher).count().where({ name: 'p1' });
// res3 is number
console.log(res3 > 0);

const res4 = await em.createQueryBuilder(Publisher)
  .update({ type: PublisherType.LOCAL })
  .where({ name: 'p1' });
// res4 is QueryResult<Publisher>
console.log(res4.affectedRows > 0);

const res5 = await em.createQueryBuilder(Publisher).delete().where({ name: 'p1' });
// res5 is QueryResult<Publisher>
console.log(res5.affectedRows > 0);

Wildcard schema entities

Up until now, we could define entities in a specific schema, or without a schema. Such entities then used the schema from the ORM config or FindOptions. This allowed reading entities from a specific schema, but we were missing the power of the Unit of Work here.

With v5, entity instances now hold their schema name (as part of WrappedEntity). Managed entities get the schema from FindOptions or metadata. Methods that create new entity instances, like em.create() or em.getReference(), now have an options parameter for setting the schema. We can also use wrap(entity).getSchema() and wrap(entity).setSchema().

Entities can now specify a wildcard schema via @Entity({ schema: '*' }). Such entities are ignored by the SchemaGenerator unless the schema option is specified.

  • If we specify schema, the entity only exists in that schema
  • If we define the * schema, the entity can exist in any schema, always controlled by a parameter
  • If we skip schema option, the value will be taken from global ORM config

More about this topic can be found here.

Deep assigning of entities

Another weak spot was assigning new values to existing entities. While wrap().assign() was originally designed to update a single entity and its values, many users wanted to assign a whole entity graph, updating relations in a single step too.

With v5, the way the EntityAssigner detects which entity should be updated has changed. Assigning a deep entity graph is now possible by default, without any additional options. It works by matching entity primary keys, so if you want to issue an update for a relationship instead of creating a new relation, make sure you first load it and pass its primary key down to the assign helper:

const book = await em.findOneOrFail(Book, 1, { populate: ['author'] });

// update the existing book's author's name
wrap(book).assign({
  author: {
    id:,
    name: 'New name...',
  },
});

If we want to always update the entity, even without the entity PK being present in the data, we can use updateByPrimaryKey: false:

const book = await em.findOneOrFail(Book, 1, { populate: ['author'] });

// update the existing book's author's name
wrap(book).assign({
  author: {
    name: 'New name...',
  },
}, { updateByPrimaryKey: false });

More examples on this topic can be found in the docs.

Experimental support for ES modules

While MikroORM v5 is still compiled and published as CommonJS, we added several improvements that should allow using it in ESM projects too. Namely, we use the gen-esm-wrapper package to allow named imports, and we use one nasty trick to keep dynamic imports instead of compiling them to require statements - for that, we need to use the MIKRO_ORM_DYNAMIC_IMPORTS env var. This should allow folder-based discovery with ES modules, which was previously not possible.

Other notable changes

Also be sure to check the upgrading guide.

What’s next?

Here is a list of things I would like to focus on going forward:

  • allow specifying pivot entity for M:N relations (so we can have additional columns there, but still map it as M:N for reading purposes)
  • support for database views (or maybe just entities representing SQL expressions)
  • more drivers - namely better-sqlite3 and CockroachDB sound like low-hanging fruit, given knex now supports them natively

Like MikroORM? ⭐️ Star it on GitHub and share this article with your friends. If you want to support the project financially, you can do so via GitHub Sponsors.

· 13 min read
Martin Adámek

After 4 months of active development, I am thrilled to announce the release of MikroORM 4. When I started to work on v4, the goal was to make it a relatively small release, mainly to drop support for TypeScript 3.6 and Node.js 8, and to split the project into multiple packages so we can have more fine-grained control over the dependencies (mainly because ts-morph has TypeScript as a runtime dependency).

But what kind of major release would it be without a bunch of new features, right?