Chapter 3: Project Setup
So far you have just been toying around with your entities - let's start building something real. As mentioned earlier, you will use Fastify as the web server and Vitest for testing it. Let's set that up, create your first endpoint, and test it.
Fastify
Let's create a new file app.ts inside the src directory and export a bootstrap function from it, where you create the Fastify app instance. Remember how you were forking the EntityManager to get around the global context validation? For web servers, you can leverage middlewares - or in Fastify, hooks - to achieve unique request contexts automatically. MikroORM provides a handy helper called RequestContext which can be used to create the fork for each request. The EntityManager is aware of this class and tries to get the right context from it automatically.
How does the RequestContext helper work?
Internally, all EntityManager methods that work with the Identity Map (e.g. em.find() or em.getReference()) first call em.getContext() to access the contextual fork. This method checks whether the code is running inside a RequestContext handler and prefers the EntityManager fork from it.
// we call em.find() on the global EM instance
const res = await orm.em.find(Book, {});
// but under the hood this resolves to
const res = await orm.em.getContext().find(Book, {});
// which then resolves to
const res = await RequestContext.getEntityManager().find(Book, {});
The RequestContext.getEntityManager() method then checks the AsyncLocalStorage static instance that RequestContext.create() uses for creating the new EM forks.
The AsyncLocalStorage class from Node.js core is the magician here. It allows us to track a context throughout async calls, and thus to decouple the EntityManager fork creation (done in the onRequest hook shown below) from its usage through the global EntityManager instance.
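To make this less abstract, here is a minimal standalone sketch (plain Node.js, no MikroORM involved) of the same pattern - a requestId store stands in for the EntityManager fork:

import { AsyncLocalStorage } from 'node:async_hooks';

const storage = new AsyncLocalStorage<{ requestId: number }>();

async function handler() {
  // even after awaiting, we still see the store of "our" request
  await new Promise(resolve => setTimeout(resolve, 10));
  console.log(storage.getStore()?.requestId);
}

// each run() call creates an isolated context - RequestContext.create()
// does the same, with an EntityManager fork as the store
storage.run({ requestId: 1 }, handler);
storage.run({ requestId: 2 }, handler);

With that in mind, here is the bootstrap function: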
import { MikroORM, RequestContext } from '@mikro-orm/core';
import { fastify } from 'fastify';
import config from './mikro-orm.config.js';
export async function bootstrap(port = 3001) {
const orm = await MikroORM.init(config);
const app = fastify();
// register request context hook
app.addHook('onRequest', (request, reply, done) => {
RequestContext.create(orm.em, done);
});
// shut down the connection when closing the app
app.addHook('onClose', async () => {
await orm.close();
});
// register routes here
// ...
const url = await app.listen({ port });
return { app, url };
}
And use this function in the server.ts file - you can wipe all the code you had so far and replace it with the following:
import { bootstrap } from './app.js';
try {
const { url } = await bootstrap();
console.log(`server started at ${url}`);
} catch (e) {
console.error(e);
}
Now run npm start again - you should see something like this:
[info] MikroORM version: 7.0.0
[discovery] ORM entity discovery started
[discovery] - processing entity User
[discovery] - processing entity Article
[discovery] - processing entity Tag
[discovery] - processing entity BaseEntity
[discovery] - entity discovery finished, found 4 entities, took 5 ms
[info] MikroORM successfully connected to database sqlite.db
server started at http://127.0.0.1:3001
The server is running, good! To stop it, press CTRL + C.
Article listing endpoint
Let's add the first endpoint - GET /article which lists all existing articles. It is a public endpoint that can take limit and offset query parameters and return requested items together with the total count of all available articles.
You could use em.count() to get the number of entities, but since you want to return the count next to the paginated list of entities, there's a better way - em.findAndCount(). This method serves exactly this purpose, returning the paginated list together with the total count of items.
app.get('/article', async request => {
const { limit, offset } = request.query as { limit?: number; offset?: number };
const [items, total] = await orm.em.findAndCount(Article, {}, {
limit, offset,
});
return { items, total };
});
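Once the server is running, you can try the endpoint manually, e.g. with curl - the exact items depend on what you created while toying around earlier:

curl 'http://localhost:3001/article?limit=10&offset=0'
# → { "items": [...], "total": ... }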
Basic Dependency Injection container
Before getting to testing the first endpoint, let's refactor a bit to make the setup more future-proof. Add a new src/db.ts file, which will serve as a simple Dependency Injection (DI) container. It will export an initORM() function that initializes the ORM on first call and caches the result in memory, so the following calls return the same instance. Thanks to top-level await, you could just initialize the ORM and export it right away, but soon you will want to alter some options before doing so (for testing purposes), and having a function like this will help with that.
Note that you are importing all of EntityManager, EntityRepository, MikroORM and Options from the @mikro-orm/sqlite package - those exports are typed to the SqliteDriver.
import { EntityManager, EntityRepository, MikroORM, Options } from '@mikro-orm/sqlite';
// adjust the entity import paths if your module structure differs
import { Article } from './modules/article/article.entity.js';
import { Tag } from './modules/article/tag.entity.js';
import { User } from './modules/user/user.entity.js';
import config from './mikro-orm.config.js';

export interface Services {
  orm: MikroORM;
  em: EntityManager;
  article: EntityRepository<Article>;
  user: EntityRepository<User>;
  tag: EntityRepository<Tag>;
}

let cache: Services;

export async function initORM(options?: Options): Promise<Services> {
  if (cache) {
    return cache;
  }

  const orm = await MikroORM.init({
    ...config,
    ...options,
  });

  // save to cache before returning
  return cache = {
    orm,
    em: orm.em,
    article: orm.em.getRepository(Article),
    user: orm.em.getRepository(User),
    tag: orm.em.getRepository(Tag),
  };
}
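Thanks to the cache, repeated calls are cheap and idempotent - any module can simply await initORM() instead of passing the services around:

const db1 = await initORM();
const db2 = await initORM();
console.log(db1 === db2); // true - the same cached Services instance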
And use it in the app.ts file instead of initializing the ORM directly:
import { RequestContext } from '@mikro-orm/core';
import { fastify } from 'fastify';
import { initORM } from './db.js';
export async function bootstrap(port = 3001) {
const db = await initORM();
const app = fastify();
// register request context hook
app.addHook('onRequest', (request, reply, done) => {
RequestContext.create(db.em, done);
});
// shut down the connection when closing the app
app.addHook('onClose', async () => {
await db.orm.close();
});
// register routes here
app.get('/article', async request => {
const { limit, offset } = request.query as { limit?: number; offset?: number };
const [items, total] = await db.article.findAndCount({}, {
limit, offset,
});
return { items, total };
});
const url = await app.listen({ port });
return { app, url };
}
EntityManager and EntityRepository from the driver package
While the EntityManager and EntityRepository classes are provided by the @mikro-orm/core package, those are only the base - driver agnostic - implementations. One example of what that means is the QueryBuilder - as an SQL concept, it has no place in the @mikro-orm/core package; instead, an extension of the EntityManager called SqlEntityManager is provided by the SQL driver packages (it is defined in the @mikro-orm/knex package and reexported by every SQL driver package that depends on it). This SqlEntityManager class provides the additional SQL-related methods, like em.createQueryBuilder().
For convenience, the SqlEntityManager class is also reexported under the EntityManager alias. This means you can do import { EntityManager } from '@mikro-orm/sqlite' to access it.
Under the hood, MikroORM will always use this driver-specific EntityManager implementation (you can verify that via console.log(orm.em) - it will be an instance of SqlEntityManager), but for TypeScript to understand it, you need to import it from the driver package. The same applies to the EntityRepository and SqlEntityRepository classes.
import { EntityManager, EntityRepository } from '@mikro-orm/sqlite'; // or any other driver package
You can also use MikroORM, defineConfig and Options exported from the driver package - they work similarly, providing the driver type without the need for generics.
What is EntityRepository
Entity repositories are thin layers on top of EntityManager. They act as an extension point, so you can add custom methods, or even alter the existing ones. The default EntityRepository implementation just forwards the calls to the underlying EntityManager instance.
The EntityRepository class carries the entity type, so you do not have to pass it to every find or findOne call.
Note that there is no such thing as "flushing a repository" - it is just a shortcut to em.flush(). In other words, we always flush the whole Unit of Work, not just a single entity that this repository represents.
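To illustrate the extension point, here is a hypothetical ArticleRepository with one custom method - a sketch only; to actually use it, you would register it via the repository option of the @Entity() decorator:

import { EntityRepository } from '@mikro-orm/sqlite';
import { Article } from './article.entity.js';

// hypothetical custom repository - the method just forwards to the
// underlying EntityManager, with the entity type already baked in
export class ArticleRepository extends EntityRepository<Article> {

  listPublic(limit?: number, offset?: number) {
    return this.findAndCount({}, { limit, offset });
  }

}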
Testing the endpoint
The first endpoint is ready, let's test it. You already have Vitest installed and available via npm test, now add a test case. Put it into the test folder and name the file with a .test.ts extension so Vitest knows it is a test file.
So how should you test the endpoint? Fastify offers an easy way to test endpoints via app.inject() - all you need to do is create the Fastify app instance inside the test case (you already have the bootstrap function for that). But that would mean testing against your production database - you don't want that!
Let's create one more utility file before getting to the first test, and put it into the test folder too, but without the .test.ts suffix - let's call it utils.ts. You will define a function called initTestApp that initializes the ORM with overridden options for testing, creates the schema and bootstraps your Fastify app, all in one go. It will take the port number as a parameter, again to allow easy parallel runs when testing - every test case will have its own in-memory database and a Fastify app running on its own port.
import { bootstrap } from '../src/app.js';
import { initORM } from '../src/db.js';

export async function initTestApp(port: number) {
  // this will create all the ORM services and cache them
  // (initORM already loads the main config, so we only override what we need)
  const { orm } = await initORM({
    // no need for debug information, it would only pollute the logs
    debug: false,
    // use an in-memory database, this way tests can easily be parallelized
    dbName: ':memory:',
  });

  // create the schema so the database can be used
  await orm.schema.createSchema();

  const { app } = await bootstrap(port);

  return app;
}
And now the test case, finally. Currently, there is no data as you are using an empty in-memory database, fresh for each test run, so the article listing endpoint will return just an empty array - you will handle that in a moment.
Notice that you are using the beforeAll hook to initialize the app and afterAll to tear it down - the app.close() call will trigger the onClose hook that calls orm.close(). Without that, the process would hang.
import { afterAll, beforeAll, expect, test } from 'vitest';
import { FastifyInstance } from 'fastify';
import { initTestApp } from './utils.js';
let app: FastifyInstance;
beforeAll(async () => {
// use different ports to allow parallel testing
app = await initTestApp(30001);
});
afterAll(async () => {
// we close only the fastify app - it will close the database connection via onClose hook automatically
await app.close();
});
test('list all articles', async () => {
// mimic the http request via `app.inject()`
const res = await app.inject({
method: 'get',
url: '/article',
});
// assert it was successful response
expect(res.statusCode).toBe(200);
// with expected shape
expect(res.json()).toMatchObject({
items: [],
total: 0,
});
});
Now run npm test and you should be good to go:
✓ test/article.test.ts (1)
Test Files 1 passed (1)
Tests 1 passed (1)
Start at 15:56:41
Duration 876ms (transform 264ms, setup 0ms, collect 300ms, tests 147ms)
PASS Waiting for file changes...
press h to show help, press q to quit
Note about unit tests
It might be tempting to skip the MikroORM.init() phase in some of your unit tests that do not require a database connection, but the init method does more than just establish one. The most important part of that method is the metadata discovery, where the ORM checks all the entity definitions and sets up the default values for various metadata options (mainly for the naming strategy and bidirectional relations).
The discovery phase is required for things like propagation to work. If you need the ORM instance without connecting to the database, you can use the constructor directly (or MikroORM.initSync()) - it runs the discovery, but does not connect:
// runs the discovery, does not connect to the database
const orm = new MikroORM({
  // ...
});
Seeding the database
There are many ways to go about seeding your testing database. The obvious one is to do it directly in your test, for example in the beforeAll hook, right after you initialize the ORM.
An alternative to that is using the Seeder, an ORM package (available via @mikro-orm/seeder), which offers utilities to populate your database with (not necessarily fake) data.
You will be using Seeder for populating the test database with fake data, but it is a valid approach to have a seeder that creates initial data for a production database too - you could create the default set of article tags this way, or the initial admin user. You can set up a hierarchy of seeders or call them one by one.
Let's install the seeder package and use the CLI to generate a test seeder:
npm install @mikro-orm/seeder
The next step is to register the SeedManager extension in your ORM config - this will make it available via the orm.seeder property:
import { defineConfig } from '@mikro-orm/sqlite';
import { SeedManager } from '@mikro-orm/seeder';
export default defineConfig({
// ...
extensions: [SeedManager],
});
Other extensions you can use are SchemaGenerator, Migrator and EntityGenerator. The SchemaGenerator (as well as MongoSchemaGenerator) is registered automatically, as it does not require any 3rd party dependencies to be installed.
Now let's try to create a new seeder named test:
npx mikro-orm seeder:create test
This will create the src/seeders directory and a TestSeeder.ts file inside it, with a skeleton of your new seeder:
import type { EntityManager } from '@mikro-orm/core';
import { Seeder } from '@mikro-orm/seeder';
export class TestSeeder extends Seeder {
async run(em: EntityManager): Promise<void> {}
}
You can use the em.create() function described earlier. It effectively calls em.persist(entity) before returning the created entity, so you don't need to do anything else with it - calling em.create() on its own is enough. Time to test it!
import { User } from '../modules/user/user.entity.js';
// ...

export class TestSeeder extends Seeder {
async run(em: EntityManager): Promise<void> {
em.create(User, {
fullName: 'Foo Bar',
email: 'foo@bar.com',
password: 'password123',
articles: [
{
title: 'title 1/3',
description: 'desc 1/3',
text: 'text text text 1/3',
tags: [{ id: 1, name: 'foo1' }, { id: 2, name: 'foo2' }],
},
{
title: 'title 2/3',
description: 'desc 2/3',
text: 'text text text 2/3',
tags: [{ id: 2, name: 'foo2' }],
},
{
title: 'title 3/3',
description: 'desc 3/3',
text: 'text text text 3/3',
tags: [{ id: 2, name: 'foo2' }, { id: 3, name: 'foo3' }],
},
],
});
}
}
Then you need to run the TestSeeder - let's do that in your initTestApp helper, right after calling orm.schema.createSchema(). Don't forget to import the seeder class:
import { TestSeeder } from '../src/seeders/TestSeeder.js';
// ...
await orm.schema.createSchema();
await orm.seeder.seed(TestSeeder);
And adjust the test assertion, as you now get 3 articles in the feed:
expect(res.json()).toMatchObject({
items: [
{ author: 1, slug: 'title-13', title: 'title 1/3' },
{ author: 1, slug: 'title-23', title: 'title 2/3' },
{ author: 1, slug: 'title-33', title: 'title 3/3' },
],
total: 3,
});
Now run npm test to verify things work as expected.
That should be enough for now, but don't worry, you will get back to this topic later on.
SchemaGenerator
Earlier in the guide, when you needed to create the database for testing, you used the SchemaGenerator to create the schema. Let's talk a bit more about this class.
SchemaGenerator is responsible for generating the SQL queries based on your entity metadata. In other words, it translates the entity definition into the Data Definition Language (DDL). Moreover, it can also understand your current database schema and compare it with the metadata, resulting in queries needed to put your schema in sync.
It can be used programmatically:
// to get the queries
const diff = await orm.schema.getUpdateSchemaSQL();
console.log(diff);
// or to run the queries
await orm.schema.updateSchema();
With orm.schema.updateSchema() you could easily set up the same behavior TypeORM provides via synchronize: true - just put that into your app right after the ORM gets initialized (or into some app bootstrap code). Keep in mind this approach can be destructive and is discouraged - you should always verify what queries the SchemaGenerator produced before you run them!
Or via CLI:
To run the queries, replace --dump with --run.
npx mikro-orm schema:create --dump # Dumps create schema SQL
npx mikro-orm schema:update --dump # Dumps update schema SQL
npx mikro-orm schema:drop --dump # Dumps drop schema SQL
Your production database (the one in the sqlite.db file in the root of your project) is probably out of sync, as you were mostly using the in-memory database inside the tests. Let's try to sync it via the CLI. First, run it with the --dump (or -d) flag to see what queries it generates, then run them via --run (or -r):
# first check what gets generated
npx mikro-orm schema:update --dump
# and when it's fine, sync the schema
npx mikro-orm schema:update --run
If this command does not work and produces some invalid queries, you can always recreate the schema from scratch by first calling schema:drop --run.
Working with the SchemaGenerator can be handy when prototyping the initial app, and especially when testing, where you might want many databases with the latest schema, regardless of what your production schema looks like. But beware, it can be very dangerous when used on a real production database. Luckily, there's a solution for that - migrations.
Migrations
To use migrations, you first need to install the @mikro-orm/migrations package for SQL drivers (or @mikro-orm/migrations-mongodb for MongoDB), and register the Migrator extension in your ORM config.
MikroORM has integrated support for migrations via umzug. It allows you to generate migrations with current schema differences, as well as manage their execution. By default, each migration will be executed inside a transaction, and all of them will be wrapped in one master transaction, so if one of them fails, everything will be rolled back.
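Both of those behaviors can be toggled in the migrations section of the ORM config - a sketch with the defaults spelled out explicitly (you normally don't need to set them):

import { defineConfig } from '@mikro-orm/sqlite';

export default defineConfig({
  // ...
  migrations: {
    transactional: true, // wrap each migration in a transaction
    allOrNothing: true, // wrap all executed migrations in one master transaction
  },
});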
Let's install the migrations package and try to create your first migration:
npm install @mikro-orm/migrations
Then register the Migrator extension in your ORM config:
import { defineConfig } from '@mikro-orm/sqlite';
import { SeedManager } from '@mikro-orm/seeder';
import { Migrator } from '@mikro-orm/migrations';
export default defineConfig({
// ...
extensions: [SeedManager, Migrator],
});
And finally try to create your first migration:
npx mikro-orm migration:create
If you followed the guide closely, you should see this message:
No changes required, schema is up-to-date
That is because you just synchronized the schema by calling npx mikro-orm schema:update --run a moment ago. You have two options here: drop the schema first, or a less destructive one - an initial migration.
Initial migration
If you want to start using migrations, and you already have the schema generated, the --initial flag will help with keeping the existing schema, while generating the first migration based only on the entity metadata. It can be used only if the schema is empty or fully up-to-date. The generated migration will be automatically marked as executed if your schema already exists - if not, you will need to execute it manually as any other migration, via npx mikro-orm migration:up.
An initial migration can be created only if there are no previously generated or executed migrations. If you are starting fresh and have no schema yet, you don't need the --initial flag - a regular migration will do the job too.
npx mikro-orm migration:create --initial
This will create the initial migration in the src/migrations directory, containing the queries from the schema:create command. The migration will be automatically marked as executed because your schema was already in sync.
Migration class
Let's take a look at the generated migration. You can see there is a class that extends the Migration abstract class from the @mikro-orm/migrations package:
import { Migration } from '@mikro-orm/migrations';
export class Migration20220913202829 extends Migration {
async up(): Promise<void> {
this.addSql('create table `tag` (`id` integer not null primary key autoincrement, `created_at` datetime not null, `updated_at` datetime not null, `name` text not null);');
// ...
}
}
To support undoing those changes, you can implement the down method, which throws an error by default.
MikroORM will generate the down migrations automatically, with two exceptions: the initial migration (skipped for safety reasons) and the SQLite driver, due to its limited capabilities.
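For illustration, a hand-written down() counterpart to the up() method above might look like this (a sketch - with SQLite you need to provide it yourself):

async down(): Promise<void> {
  // undo the changes from the up() method above
  this.addSql('drop table if exists `tag`;');
}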
You can also execute queries inside the up()/down() methods via this.execute('...'), which will run them in the same transaction as the rest of the migration. The this.addSql('...') method also accepts instances of knex - the Knex instance can be accessed via this.getKnex().
Read more about migrations in the documentation.
One more entity
The migrations are set up, let's test them by adding one more entity - the Comment, again belonging to the article module, so it goes into src/modules/article/comment.entity.ts.
import { Entity, ManyToOne, Property } from '@mikro-orm/core';
import { Article } from './article.entity.js';
import { User } from '../user/user.entity.js';
import { BaseEntity } from '../common/base.entity.js';
@Entity()
export class Comment extends BaseEntity {
@Property({ length: 1000 })
text!: string;
@ManyToOne()
article!: Article;
@ManyToOne()
author!: User;
}
and an inverse OneToMany side in the Article entity:
@OneToMany({ mappedBy: 'article', eager: true, orphanRemoval: true })
comments = new Collection<Comment>(this);
Don't forget to add the repository to your simple DI container too:
export interface Services {
orm: MikroORM;
em: EntityManager;
user: EntityRepository<User>;
article: EntityRepository<Article>;
comment: EntityRepository<Comment>;
tag: EntityRepository<Tag>;
}
export async function initORM(options?: Options): Promise<Services> {
// ...
return cache = {
orm,
em: orm.em,
user: orm.em.getRepository(User),
article: orm.em.getRepository(Article),
comment: orm.em.getRepository(Comment),
tag: orm.em.getRepository(Tag),
};
}
This uses two new options, eager and orphanRemoval:
- eager: true will automatically populate this relation, just as if you used populate: ['comments'] explicitly.
- orphanRemoval: true is a special type of cascading - any entity removed from such a collection will be deleted from the database, as opposed to being just detached from the relationship (by setting the foreign key to null).
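A minimal sketch of what that means in practice, assuming an article with some comments already exists in the database:

const article = await em.findOneOrFail(Article, 1); // comments are populated eagerly
article.comments.removeAll(); // remove the items from the collection
await em.flush(); // orphanRemoval turns each removed comment into a DELETE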
Now create the migration via CLI and run it. And just for the sake of testing, also try the other migration-related commands:
# create new migration based on the schema difference
npx mikro-orm migration:create
# list pending migrations
npx mikro-orm migration:pending
# run the pending migrations
npx mikro-orm migration:up
# list executed migrations
npx mikro-orm migration:list
You should see output similar to this:
npx mikro-orm migration:create
Migration20220913205718.ts successfully created
npx mikro-orm migration:pending
┌─────────────────────────┐
│ Name │
├─────────────────────────┤
│ Migration20220913205718 │
└─────────────────────────┘
npx mikro-orm migration:up
Processing 'Migration20220913205718'
Applied 'Migration20220913205718'
Successfully migrated up to the latest version
npx mikro-orm migration:list
┌─────────────────────────┬──────────────────────────┐
│ Name │ Executed at │
├─────────────────────────┼──────────────────────────┤
│ Migration20220913202829 │ 2022-09-13T18:57:12.000Z │
│ Migration20220913205718 │ 2022-09-13T18:57:27.000Z │
└─────────────────────────┴──────────────────────────┘
Creating a new migration will automatically save the target schema snapshot into the migrations folder. This snapshot will then be used when you create the next migration, instead of the current database schema. This means that if you create a new migration before running the pending ones, you still get the right schema diff.
Snapshots should be versioned just like the regular migration files.
Snapshotting can be disabled via migrations.snapshot: false in the ORM config.
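In the ORM config, that option lives in the migrations section:

import { defineConfig } from '@mikro-orm/sqlite';

export default defineConfig({
  // ...
  migrations: { snapshot: false },
});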
Running migrations automatically
Before calling it a day, let's automate running the migrations a bit - you can use the Migrator programmatically, in a similar way to the SchemaGenerator. You want to run the migrations during your app bootstrap, before it starts accepting connections, so a good place is your bootstrap function, right after you initialize the ORM.
export async function bootstrap(port = 3001, migrate = true) {
const db = await initORM();
if (migrate) {
// sync the schema
await db.orm.migrator.up();
}
// ...
}
You need to do this conditionally, as you want to run the migrations only for the production database, not for your testing ones (as they use the SchemaGenerator directly, together with the Seeder). Don't forget to pass false when calling the bootstrap() function from your test case:
export async function initTestApp(port: number) {
const { orm } = await initORM({ ... });
await orm.schema.createSchema();
await orm.seeder.seed(TestSeeder);
const { app } = await bootstrap(port, false); // <-- here
return app;
}
⛳ Checkpoint 3
You now have 4 entities, a working web app with a single GET endpoint and a basic test case for it. You also set up migrations and seeding.
The tests use an in-memory database, a SQLite feature available via the special database name :memory:.
This is the app.ts file after this chapter, reassembled from the snippets above:
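import { RequestContext } from '@mikro-orm/core';
import { fastify } from 'fastify';
import { initORM } from './db.js';

export async function bootstrap(port = 3001, migrate = true) {
  const db = await initORM();

  if (migrate) {
    // sync the schema via migrations
    await db.orm.migrator.up();
  }

  const app = fastify();

  // register request context hook
  app.addHook('onRequest', (request, reply, done) => {
    RequestContext.create(db.em, done);
  });

  // shut down the connection when closing the app
  app.addHook('onClose', async () => {
    await db.orm.close();
  });

  // register routes here
  app.get('/article', async request => {
    const { limit, offset } = request.query as { limit?: number; offset?: number };
    const [items, total] = await db.article.findAndCount({}, {
      limit, offset,
    });
    return { items, total };
  });

  const url = await app.listen({ port });

  return { app, url };
}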