Setting Up a REPL in a NestJS Project with Mikro-ORM: A Django Shell Equivalent

2024-10-21

After spending considerable time working with the Python framework Django, I've recently ventured into the Node.js world with NestJS. One of the features I deeply missed from Django was the Django Shell. It was an incredibly useful tool that allowed me to interact with my application in a Python shell, test out code snippets, and manipulate data directly using the Django ORM.

In the NestJS ecosystem, I was searching for a similar interactive environment and discovered that what I was looking for is called a REPL (Read-Eval-Print Loop). A REPL provides an interactive shell where you can execute code in real time within the context of your application.

In this post, I'll show you how to set up a REPL in a NestJS project that uses Mikro-ORM, drawing from my experience adapting code from this GitHub repository.

Why Use a REPL?

A REPL is invaluable for:

  • Testing code snippets: Quickly try out code without writing tests or modifying your application files.
  • Database manipulation: Interact with your database through your ORM to query or modify data.
  • Debugging: Experiment with different functions and methods to troubleshoot issues.

Setting Up the REPL in NestJS with Mikro-ORM

Here's how you can set up a REPL in your NestJS project:

Step 1: Create a repl.ts File

In your project's src directory, alongside main.ts, create a file named repl.ts with the following content:

import 'tsconfig-paths/register';

import { repl } from '@nestjs/core';
import { AppModule } from './app.module';
import { MikroORM } from '@mikro-orm/core';
import { commonMikroOrmConfig } from './mikro-orm.config';
import { Post } from './posts/post.entities';

async function bootstrap() {
  const replServer = await repl(AppModule);
  const { context } = replServer;

  const orm = await MikroORM.init({
    ...commonMikroOrmConfig,
    allowGlobalContext: true,
    entitiesTs: ['./**/*.entities.ts'],
    entities: ['./dist/**/*.entities.js'],
    discovery: {
      warnWhenNoEntities: false,
    },
  });

  // Add your entities and ORM to the REPL context for easy access
  context.Post = Post;
  context.orm = orm;
  context.em = orm.em;
}
bootstrap();

Explanation:

  • Import Statements: We import necessary modules, including repl from @nestjs/core and MikroORM from @mikro-orm/core.
  • Bootstrap Function: We initialize the REPL server and MikroORM within an asynchronous bootstrap function.
  • Context Enhancement: We add the ORM instance, the entity manager (em), and any entities (like Post) to the REPL context for easy access.

Step 2: Start the REPL

Run the following command in your terminal:

npm run start -- --entryFile repl

This tells NestJS to use repl.ts as the entry file instead of the default main.ts.
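
If you use the REPL often, you can add a dedicated script to package.json (a small convenience alias, assuming the default Nest CLI setup; not part of the original setup):

"scripts": {
  "repl": "nest start --entryFile repl"
}

After that, npm run repl starts the shell directly.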

Using the REPL

Once the REPL starts, you'll see a prompt like this:

[info] MikroORM successfully connected to database blog_db on postgresql://blog_user:*****@127.0.0.1:5432

>

Now you can interact with your application. Here's an example of querying all Post entities:

> const posts = await em.find(Post, {});
[query] select "p0".* from "post" as "p0" [took 5 ms, 2 results]
> posts
[
  {
    id: 1,
    title: 'First Post',
    content: 'This is the first post.',
    createdAt: 2023-10-21T12:34:56.789Z
  },
  {
    id: 2,
    title: 'Second Post',
    content: 'This is the second post.',
    createdAt: 2023-10-22T08:15:30.123Z
  }
]

Tips for Using the REPL:

  • Access Entities: Use Post, User, or any other entities you've added to the context.
  • Entity Manager: em is available for database operations (see the write example right after this list).
  • Autocomplete: The REPL supports autocomplete for faster coding.
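
You can also write data straight from the REPL. For example (a sketch; it assumes the Post entity from step 1 has title and content fields):

> const post = em.create(Post, { title: 'Draft', content: 'Written from the REPL' })
> await em.persistAndFlush(post)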

Important Considerations

  • Production Use: While a REPL is powerful, using it in a production environment can be risky. Be cautious when manipulating data directly.
  • Security: Ensure that access to the REPL in production environments is secure and restricted.

Conclusion

Setting up a REPL in your NestJS project with Mikro-ORM bridges the gap between Django's interactive shell and the Node.js world. It enhances productivity by allowing real-time interaction with your application's context and database.

Feel free to explore and extend this setup by adding more entities or custom services to the REPL context. Happy coding!
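
For instance, the NestJS REPL exposes a built-in get() helper for fetching any registered provider, so a hypothetical PostsService could be pulled into the session like this (the service and method names are placeholders):

> const postsService = get(PostsService)
> await postsService.findAll()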


References:

  • NestJS Documentation - REPL
  • Mikro-ORM Documentation

Switching from REST to GraphQL in My Blog with Minimal Code Changes

2025-01-05

Over time, I’ve shared a few posts about how my blog evolved into the Next.js project it is today. In this post, I want to dive into a recent architectural improvement and explain how I seamlessly switched my blog’s data source from a REST API to a GraphQL API by modifying just a handful of files.

This shift was possible thanks to the use of data providers in my project. By consistently interacting with an abstraction layer (activeDataProvider), I was able to decouple my data-fetching logic from the actual source of the data.

The beauty of this design lies in its simplicity. To change the data provider, all I had to do was:

  1. Implement a new class that adheres to the IDataProvider interface.
  2. Set an instance of that class to activeDataProvider.

That’s it! No need to rewrite logic across multiple components or refactor complex parts of the application.
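
In practice, the wiring can live in a single module (a sketch; the file name and class name are assumptions, chosen to match how the test mock later in this post resolves './data-providers/active'):

// data-providers/active.ts (hypothetical layout)
import { GraphQLDataProvider } from './graphql';

const activeDataProvider = new GraphQLDataProvider();
export default activeDataProvider;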

The Data Provider Interface

Here’s what the current iteration of the IDataProvider interface looks like:

export interface IDataProvider {
    getAll(options: PostSearchOptions): Promise<PaginatedPosts>;
    getOneBySlug(slug: string): Promise<Post | null>;
    getPostMetadata(slug: string): Promise<Post | null>;
    create?(data: Partial<Post>): Promise<Post>;
    update?(slug: string, data: Partial<Post>): Promise<Post | null>;
    delete?(slug: string): Promise<boolean>;
}

This interface enforces the core methods required for interacting with my blog’s data—fetching posts, retrieving metadata, and optionally creating, updating, or deleting posts.

Abstracting with BaseDataProvider

To ensure consistency across different data providers, I implemented a BaseDataProvider class. It handles all the standard methods from the interface and introduces an extra layer of abstraction by defining abstract methods that subclasses must implement:

export abstract class BaseDataProvider implements IDataProvider {
    create?(data: Partial<Post>): Promise<Post> {
        throw new Error('Method not implemented.');
    }
    update?(slug: string, data: Partial<Post>): Promise<Post | null> {
        throw new Error('Method not implemented.');
    }
    delete?(slug: string): Promise<boolean> {
        throw new Error('Method not implemented.');
    }

    abstract getAllFromStorage(
        options: PostSearchOptions,
    ): Promise<PaginatedPosts>;
    abstract getOneBySlugFromStorage(slug: string): Promise<Post | null>;
    abstract getPostMetadataFromStorage(slug: string): Promise<Post | null>;

    async getAll(options: PostSearchOptions): Promise<PaginatedPosts> {
        return new Promise(async (resolve, reject) => {
            const paginatedPosts = await this.getAllFromStorage(options);

            resolve(paginatedPosts);
        });
    }

    async getOneBySlug(slug: string): Promise<Post | null> {
        return new Promise(async (resolve, reject) => {
            const matchingPost = await this.getOneBySlugFromStorage(slug);
            if (matchingPost) {
                resolve(matchingPost);
            } else {
                resolve(null);
            }
        });
    }

    async getPostMetadata(slug: string): Promise<Post | null> {
        return new Promise(async (resolve, reject) => {
            const matchingPost = await this.getPostMetadataFromStorage(slug);
            if (matchingPost) {
                resolve(matchingPost);
            } else {
                resolve(null);
            }
        });
    }
}

These <Something>FromStorage methods are the real magic—they encapsulate the logic for interacting with the actual data source, whether it’s a REST API, a GraphQL endpoint, or even static files.

On the other hand, the BaseDataProvider provides the generic methods getAll, getOneBySlug and getPostMetadata. These are the methods that our components interact with directly. Internally, they call the appropriate getAllFromStorage, getOneBySlugFromStorage and getPostMetadataFromStorage methods.

This separation ensures that the specific details of the persistence layer are abstracted away from the components, keeping the architecture clean and decoupled.

Why This Matters

In my opinion, decoupling components from external dependencies is a powerful asset that allows us to create more resilient and testable code. While this approach introduces some overhead and requires a shift in mindset, it proves invaluable in fast-paced environments where technologies evolve and change rapidly.

By creating abstraction layers, we can make our code more adaptable, enabling smoother transitions to new tools or data sources without major rewrites. This flexibility ultimately helps future-proof the project and maintain long-term efficiency.

Of course, one could argue that maintaining this level of abstraction is easy in a small project like my blog. But my perspective is this: if I can apply such high coding standards to a personal project that generates no profit and where no one will complain if it breaks tomorrow, why shouldn’t I hold myself to the same (or even higher) standard in my professional work?

In a business environment, where the software directly contributes to revenue and people rely on the services I build, maintaining clean, adaptable, and well-structured code is even more critical.

Bad Example vs Good Example

This is an example of a component tightly coupled to the implementation of the data layer, in this case using Apollo GraphQL:

// Tightly coupled: GraphQL-specific imports leak into the component
// (the import paths for the query and types are illustrative)
import { FC } from 'react';
import { useQuery } from '@apollo/client';
import { GET_POSTS } from './queries';
import { PostsData, PostsVars } from './types';

const PostList: FC = () => {
    const { data } = useQuery<PostsData, PostsVars>(GET_POSTS, {
        variables: { limit: 10, offset: 0 },
    });

    return (
        <ul>
            {data?.posts.map((post) => (
                <li key={post.id}>
                    <h2>{post.title}</h2>
                    <p>{post.text}</p>
                </li>
            ))}
        </ul>
    );
};

And here is one that uses the approach I propose, written as an async server component so it can await the provider directly:

const PostList = async () => {
    // The provider hides whether the data comes from REST, GraphQL, or elsewhere
    const data = await activeDataProvider.getAll(options);

    return (
        <ul>
            {data?.posts.map((post) => (
                <li key={post.id}>
                    <h2>{post.title}</h2>
                    <p>{post.text}</p>
                </li>
            ))}
        </ul>
    );
};

This example does not include any imports related to GraphQL or other external dependencies. It relies solely on the activeDataProvider interface, ensuring the component remains decoupled from the underlying data-fetching implementation.

Writing Mocks

Another benefit of this approach is how easily the data provider can be swapped out when writing unit tests.

Since my app relies on activeDataProvider, I can easily swap the real data provider with an in-memory mock during unit tests.

In my vitest.setup.ts file, I added a mock that replaces activeDataProvider with a lightweight, in-memory provider:

// Replace the active data provider with an in-memory MemoryDataProvider
import { readFileSync } from 'fs';
import { vi } from 'vitest';
import { MemoryDataProvider } from './data-providers/memory'; // path illustrative

vi.mock('./data-providers/active', async () => {
    const jsonData = JSON.parse(
        readFileSync('./tests/test-data.json', 'utf-8'),
    );
    return {
        default: new MemoryDataProvider(jsonData),
    };
});

This mock loads data from a static JSON file during tests, ensuring predictable results without external dependencies.

Testing a component that fetches posts is as simple as calling:

await activeDataProvider.getAll(options);

Since the provider is mocked, the tests run fast, and I can easily simulate various data states (empty results, errors, or populated lists).
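
For reference, a minimal in-memory provider could look roughly like this (a sketch; the import paths, the shape of PaginatedPosts, and the pagination fields on PostSearchOptions are assumptions):

import { BaseDataProvider } from './base';
import { PaginatedPosts, Post, PostSearchOptions } from './types';

export class MemoryDataProvider extends BaseDataProvider {
    constructor(private readonly posts: Post[]) {
        super();
    }

    async getAllFromStorage(options: PostSearchOptions): Promise<PaginatedPosts> {
        // Pagination only; search filtering is omitted for brevity
        const { limit = 10, offset = 0 } = options;
        return {
            posts: this.posts.slice(offset, offset + limit),
            total: this.posts.length,
        } as PaginatedPosts;
    }

    async getOneBySlugFromStorage(slug: string): Promise<Post | null> {
        return this.posts.find((post) => post.slug === slug) ?? null;
    }

    async getPostMetadataFromStorage(slug: string): Promise<Post | null> {
        // In memory there is no cheaper metadata-only path, so reuse the lookup
        return this.getOneBySlugFromStorage(slug);
    }
}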

How It Worked in Practice

During this latest iteration, I transitioned the blog to pull data from a GraphQL API by implementing a new provider class that extends BaseDataProvider. By implementing the abstract methods (getAllFromStorage, etc.) with GraphQL queries, the switch was complete.
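
To give an idea of the shape of such a class, here is a rough sketch (the endpoint, the query structure, the pagination fields, and the use of graphql-request are illustrative assumptions, not the blog's exact code):

import { request, gql } from 'graphql-request';
import { BaseDataProvider } from './base';
import { PaginatedPosts, Post, PostSearchOptions } from './types';

const ENDPOINT = 'https://example.com/graphql'; // hypothetical

export class GraphQLDataProvider extends BaseDataProvider {
    async getAllFromStorage(options: PostSearchOptions): Promise<PaginatedPosts> {
        const query = gql`
            query Posts($limit: Int, $offset: Int) {
                posts(limit: $limit, offset: $offset) {
                    posts { slug title text }
                    total
                }
            }
        `;
        const data = await request<{ posts: PaginatedPosts }>(ENDPOINT, query, {
            limit: options.limit,
            offset: options.offset,
        });
        return data.posts;
    }

    async getOneBySlugFromStorage(slug: string): Promise<Post | null> {
        const query = gql`
            query Post($slug: String!) {
                post(slug: $slug) { slug title text }
            }
        `;
        const data = await request<{ post: Post | null }>(ENDPOINT, query, { slug });
        return data.post;
    }

    async getPostMetadataFromStorage(slug: string): Promise<Post | null> {
        const query = gql`
            query PostMeta($slug: String!) {
                post(slug: $slug) { slug title }
            }
        `;
        const data = await request<{ post: Post | null }>(ENDPOINT, query, { slug });
        return data.post;
    }
}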

Now, for example, whenever a component fetches posts, it calls:

await activeDataProvider.getAll(options);

The underlying provider handles the communication with the persistence layer, ensuring the component is agnostic to whether the data came from GraphQL, REST, or elsewhere.

Check out the complete source code here.

Testing Strategies for a NestJS + Mikro-ORM App with Jest

2024-09-20

When building an application with NestJS and Mikro-ORM in TypeScript, proper testing is essential to maintain code quality and reliability. In this post, I will cover three main testing strategies for database-related operations, each with its pros and cons.

Option 1: In-Memory Database (SQLite as Driver)

In this approach, you set up an in-memory SQLite database during tests to simulate persistence without interacting with a real database.

Pros:

  • Entities persist, allowing you to perform actual database operations and queries.
  • Tests remain relatively fast because no external DB connection is required.

Cons:

  • SQLite might behave differently from your production database (e.g., PostgreSQL). This can result in misleading tests, especially for complex queries or schema-related features.
  • This approach has been discouraged in Mikro-ORM discussions because of such discrepancies, although the Mikro-ORM repository itself still uses it in some tests.

Example: Setting up an In-Memory SQLite Database

import { MikroORM } from '@mikro-orm/core';
import { User } from './user.entity'; // example entity
import { SqliteDriver } from '@mikro-orm/sqlite';

describe('User Service - In-Memory DB', () => {
  let orm: MikroORM;

  beforeAll(async () => {
    orm = await MikroORM.init({
      entities: [User],
      dbName: ':memory:',
      driver: SqliteDriver, // use the imported SQLite driver
    });

    const generator = orm.getSchemaGenerator();
    await generator.createSchema();
  });

  afterAll(async () => {
    await orm.close(true);
  });

  it('should persist and retrieve a user entity', async () => {
    const userRepo = orm.em.getRepository(User);
    const user = userRepo.create({ name: 'John Doe' });
    
    await userRepo.persistAndFlush(user);
    
    const retrievedUser = await userRepo.findOne({ name: 'John Doe' });
    expect(retrievedUser).toBeDefined();
    expect(retrievedUser.name).toBe('John Doe');
  });
});

This setup is relatively straightforward, but keep in mind the compatibility caveat above: the Mikro-ORM creator advises against it, even though the Mikro-ORM repo itself still uses it for some tests.

Option 2: Same Driver, No Database Connection (Mock Queries)

Another option is to initialize Mikro-ORM with the same driver you'd use in production but prevent it from connecting to a real database by setting connect: false. This can be a quick setup, especially when you don't need to run real database queries.

Pros:

  • Simple to set up.
  • No real database connection required, meaning no external dependency.

Cons:

  • Since the database isn’t connected, you can’t make real queries.
  • You’ll likely end up mocking database operations, which can lead to less meaningful tests.

Example: Mocking Queries with No DB Connection

import { MikroORM } from '@mikro-orm/core';
import { PostgreSqlDriver } from '@mikro-orm/postgresql';
import { User } from './user.entity';

describe('User Service - No DB Connection', () => {
  let orm: MikroORM;

  beforeAll(async () => {
    orm = await MikroORM.init({
      entities: [User],
      dbName: 'test-db',
      driver: PostgreSqlDriver, // same driver as production
      connect: false, // prevent a real connection
    });
  });
  });

  it('should mock user creation and retrieval', async () => {
    const mockUser = { id: 1, name: 'Mock User' } as User;

    const userRepo = orm.em.getRepository(User);

    jest.spyOn(userRepo, 'persistAndFlush').mockImplementation(async () => {});
    jest.spyOn(userRepo, 'findOne').mockResolvedValue(mockUser);
    
    await userRepo.persistAndFlush(mockUser);
    const foundUser = await userRepo.findOne({ name: 'Mock User' });

    expect(foundUser).toBeDefined();
    expect(foundUser.name).toBe('Mock User');
  });
});

This approach works well for unit tests where database interaction is mocked. However, the lack of actual persistence may make your tests less reliable.

Option 3: Mocking Everything

Mocking everything is an approach where you mock both the repository methods and any related services to simulate the behavior of the database without involving actual ORM operations. See an example in the nestjs-realworld-example-app here.

Pros:

  • Tests run extremely fast because no real database or ORM is involved.
  • Full control over the behavior of mocked services and repositories.

Cons:

  • Requires significant mocking effort, which can make tests harder to maintain and understand.
  • Mocking too much might lead to tests that are disconnected from reality.

Example: Fully Mocked Service and Repository

import { Test, TestingModule } from '@nestjs/testing';
import { UserService } from './user.service';
import { User } from './user.entity';
import { getRepositoryToken } from '@mikro-orm/nestjs';

describe('User Service - Full Mock', () => {
  let userService: UserService;
  const mockRepository = {
    persistAndFlush: jest.fn(),
    findOne: jest.fn(),
  };

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [
        UserService,
        { provide: getRepositoryToken(User), useValue: mockRepository },
      ],
    }).compile();

    userService = module.get<UserService>(UserService);
  });

  it('should create and return a user', async () => {
    const mockUser = { id: 1, name: 'Mock User' };
    mockRepository.persistAndFlush.mockResolvedValue(mockUser);
    mockRepository.findOne.mockResolvedValue(mockUser);

    const createdUser = await userService.create({ name: 'Mock User' });
    const foundUser = await userService.findOne({ name: 'Mock User' });

    expect(createdUser).toEqual(mockUser);
    expect(foundUser).toEqual(mockUser);
  });
});

This is particularly useful in unit tests where the focus is on testing business logic rather than database interaction.
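
For context, a minimal UserService compatible with this fully mocked setup could look like the following sketch (the method signatures are assumptions; note that since create delegates to the repository's create, the mockRepository above would also need a create: jest.fn().mockReturnValue(mockUser) stub):

import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@mikro-orm/nestjs';
import { EntityRepository } from '@mikro-orm/core';
import { User } from './user.entity';

@Injectable()
export class UserService {
    constructor(
        @InjectRepository(User)
        private readonly userRepo: EntityRepository<User>,
    ) {}

    async create(data: { name: string }): Promise<User> {
        // Build the entity and persist it in one go
        const user = this.userRepo.create(data);
        await this.userRepo.persistAndFlush(user);
        return user;
    }

    async findOne(where: { name: string }): Promise<User | null> {
        return this.userRepo.findOne(where);
    }
}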

Conclusion

Choosing the right testing strategy depends on the scope and type of your tests:

  • In-Memory DB (Option 1) is great for integration tests that closely mimic production behavior, but be cautious of differences between SQLite and your production DB.
  • No DB Connection (Option 2) simplifies the setup but limits real database operations, which may force you to rely on mocking.
  • Mock Everything (Option 3) provides full control and is the fastest, but the tests might lose touch with actual database behavior, which could cause issues later.

Consider mixing and matching these approaches based on the requirements of your project to balance accuracy, speed, and simplicity.
