Switching from REST to GraphQL in My Blog with Minimal Code Changes
2025-01-05
Setting Up a REPL in a NestJS Project with Mikro-ORM: A Django Shell Equivalent
2024-10-21
Over time, I’ve shared a few posts about how my blog evolved into the Next.js project it is today. In this post, I want to dive into a recent architectural improvement and explain more about how I seamlessly switched my blog’s data source from a REST API to a GraphQL API by modifying just a handful of files.
This shift was possible thanks to the use of data providers in my project. By consistently interacting with an abstraction layer (activeDataProvider), I was able to decouple my data-fetching logic from the actual source of the data.
The beauty of this design lies in its simplicity. To change the data provider, all I had to do was:
1. Implement a new provider class that satisfies the IDataProvider interface.
2. Point activeDataProvider at the new implementation.
That's it! No need to rewrite logic across multiple components or refactor complex parts of the application.
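To make the pattern concrete, here is a minimal self-contained sketch of the idea. The provider names and fields below are illustrative stand-ins, not my blog's actual code:

```typescript
// Two interchangeable providers behind one interface. All names here
// (RestDataProvider, GraphqlDataProvider) are illustrative stand-ins.

interface Post {
  slug: string;
  title: string;
}

interface IDataProvider {
  getOneBySlug(slug: string): Promise<Post | null>;
}

class RestDataProvider implements IDataProvider {
  async getOneBySlug(slug: string): Promise<Post | null> {
    // The real implementation would call the REST API here.
    return { slug, title: `rest:${slug}` };
  }
}

class GraphqlDataProvider implements IDataProvider {
  async getOneBySlug(slug: string): Promise<Post | null> {
    // The real implementation would send a GraphQL query here.
    return { slug, title: `graphql:${slug}` };
  }
}

// Swapping the data source means changing this single assignment:
export const activeDataProvider: IDataProvider = new GraphqlDataProvider();
```

Every component imports activeDataProvider, so repointing that one assignment switches the whole app's data source.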
Here’s what the current iteration of the IDataProvider interface looks like:
export interface IDataProvider {
  getAll(options: PostSearchOptions): Promise<PaginatedPosts>;
  getOneBySlug(slug: string): Promise<Post | null>;
  getPostMetadata(slug: string): Promise<Post | null>;
  create?(data: Partial<Post>): Promise<Post>;
  update?(slug: string, data: Partial<Post>): Promise<Post | null>;
  delete?(slug: string): Promise<boolean>;
}
This interface enforces the core methods required for interacting with my blog’s data—fetching posts, retrieving metadata, and optionally creating, updating, or deleting posts.
BaseDataProvider
To ensure consistency across different data providers, I implemented a BaseDataProvider class. It handles the standard methods from the interface and introduces an extra layer of abstraction by defining abstract methods that subclasses must implement:
export abstract class BaseDataProvider implements IDataProvider {
  create?(data: Partial<Post>): Promise<Post> {
    throw new Error('Method not implemented.');
  }

  update?(slug: string, data: Partial<Post>): Promise<Post | null> {
    throw new Error('Method not implemented.');
  }

  delete?(slug: string): Promise<boolean> {
    throw new Error('Method not implemented.');
  }

  abstract getAllFromStorage(
    options: PostSearchOptions,
  ): Promise<PaginatedPosts>;

  abstract getOneBySlugFromStorage(slug: string): Promise<Post | null>;

  abstract getPostMetadataFromStorage(slug: string): Promise<Post | null>;

  async getAll(options: PostSearchOptions): Promise<PaginatedPosts> {
    return this.getAllFromStorage(options);
  }

  async getOneBySlug(slug: string): Promise<Post | null> {
    return (await this.getOneBySlugFromStorage(slug)) ?? null;
  }

  async getPostMetadata(slug: string): Promise<Post | null> {
    return (await this.getPostMetadataFromStorage(slug)) ?? null;
  }
}
These <Something>FromStorage methods are the real magic: they encapsulate the logic for interacting with the actual data source, whether it's a REST API, a GraphQL endpoint, or even static files.
On the other hand, the BaseDataProvider provides the generic methods getAll, getOneBySlug, and getPostMetadata. These are the methods that our components interact with directly. Internally, they call the appropriate getAllFromStorage, getOneBySlugFromStorage, and getPostMetadataFromStorage methods.
This separation ensures that the specific details of the persistence layer are abstracted away from the components, keeping the architecture clean and decoupled.
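To illustrate that separation, here is a hedged, self-contained sketch of a concrete provider. The types are simplified stand-ins for the real ones, and MemoryDataProvider is only an example subclass, not necessarily how my actual providers look:

```typescript
// Simplified stand-ins for the real types.
interface Post { slug: string; title: string; }
interface PostSearchOptions { limit: number; offset: number; }
interface PaginatedPosts { posts: Post[]; total: number; }

abstract class BaseDataProvider {
  // Subclasses only implement the storage-specific methods...
  abstract getAllFromStorage(options: PostSearchOptions): Promise<PaginatedPosts>;
  abstract getOneBySlugFromStorage(slug: string): Promise<Post | null>;

  // ...while components talk to these generic ones.
  async getAll(options: PostSearchOptions): Promise<PaginatedPosts> {
    return this.getAllFromStorage(options);
  }

  async getOneBySlug(slug: string): Promise<Post | null> {
    return this.getOneBySlugFromStorage(slug);
  }
}

// An example subclass backed by a plain array (e.g. for tests or static content).
class MemoryDataProvider extends BaseDataProvider {
  constructor(private posts: Post[]) {
    super();
  }

  async getAllFromStorage(options: PostSearchOptions): Promise<PaginatedPosts> {
    const page = this.posts.slice(options.offset, options.offset + options.limit);
    return { posts: page, total: this.posts.length };
  }

  async getOneBySlugFromStorage(slug: string): Promise<Post | null> {
    return this.posts.find((p) => p.slug === slug) ?? null;
  }
}

export const memoryProvider = new MemoryDataProvider([
  { slug: 'first', title: 'First Post' },
  { slug: 'second', title: 'Second Post' },
]);
```

Components calling getAll never know (or care) that this particular provider is backed by an array.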
In my opinion, decoupling components from external dependencies is a powerful asset that allows us to create more resilient and testable code. While this approach introduces some overhead and requires a shift in mindset, it proves invaluable in fast-paced environments where technologies evolve and change rapidly.
By creating abstraction layers, we can make our code more adaptable, enabling smoother transitions to new tools or data sources without major rewrites. This flexibility ultimately helps future-proof the project and maintain long-term efficiency.
Of course, one could argue that maintaining this level of abstraction is easy in a small project like my blog. But my perspective is this: if I can apply such high coding standards to a personal project that generates no profit and where no one will complain if it breaks tomorrow, why shouldn’t I hold myself to the same (or even higher) standard in my professional work?
In a business environment, where the software directly contributes to revenue and people rely on the services I build, maintaining clean, adaptable, and well-structured code is even more critical.
This is an example of a component tightly coupled to the implementation of the data layer, in this case using Apollo GraphQL:
const PostList: FC = () => {
  const { data } = useQuery<PostsData, PostsVars>(GET_POSTS, {
    variables: { limit: 10, offset: 0 },
  });

  return (
    <ul>
      {data?.posts.map((post) => (
        <li key={post.id}>
          <h2>{post.title}</h2>
          <p>{post.text}</p>
        </li>
      ))}
    </ul>
  );
};
And here is one that uses the approach I propose:
const PostList = async () => {
  const data = await activeDataProvider.getAll(options);

  return (
    <ul>
      {data.posts.map((post) => (
        <li key={post.id}>
          <h2>{post.title}</h2>
          <p>{post.text}</p>
        </li>
      ))}
    </ul>
  );
};
This example does not include any imports related to GraphQL or other external dependencies. It relies solely on the activeDataProvider interface, ensuring the component remains decoupled from the underlying data-fetching implementation.
Another benefit of this approach is how easily the data provider can be swapped out when writing unit tests.
Since my app relies on activeDataProvider, I can easily swap the real data provider with an in-memory mock during unit tests.
In my vitest.setup.ts file, I added a mock that replaces activeDataProvider with a lightweight, in-memory provider:
import { readFileSync } from 'fs';
import { vi } from 'vitest';
// Adjust this import path to wherever your MemoryDataProvider lives
import { MemoryDataProvider } from './data-providers/memory';

// Replace active dataProvider with MemoryDataProvider
vi.mock('./data-providers/active', async () => {
  const jsonData = JSON.parse(readFileSync('./tests/test-data.json', 'utf-8'));
  return {
    default: new MemoryDataProvider(jsonData),
  };
});
This mock loads data from a static JSON file during tests, ensuring predictable results without external dependencies.
Testing a component that fetches posts is as simple as calling:
await activeDataProvider.getAll(options);
Since the provider is mocked, the tests run fast, and I can easily simulate various data states (empty results, errors, or populated lists).
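Simulating those states is mostly a matter of constructing the mock provider with different fixtures, or using one that fails on purpose. A small illustrative sketch (not my actual test code):

```typescript
// Illustrative sketch: providers representing different data states.
interface Post { slug: string; title: string; }

interface IDataProvider {
  getAll(): Promise<{ posts: Post[]; total: number }>;
}

// A provider seeded with a fixed data set.
const makeProvider = (posts: Post[]): IDataProvider => ({
  async getAll() {
    return { posts, total: posts.length };
  },
});

export const emptyProvider = makeProvider([]);
export const populatedProvider = makeProvider([{ slug: 'hello', title: 'Hello' }]);

// A provider that always fails, for testing error handling.
export const failingProvider: IDataProvider = {
  async getAll() {
    throw new Error('network down');
  },
};
```

Each test can then inject whichever provider matches the scenario it wants to exercise.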
During this latest iteration, I transitioned the blog to pull data from a GraphQL API by implementing a new provider class that extends BaseDataProvider. By implementing the abstract methods (getAllFromStorage, etc.) with GraphQL queries, the switch was complete.
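As a sketch of what one of those GraphQL-backed methods might look like (the endpoint URL, query shape, and field names below are my illustrative assumptions, not the blog's actual schema):

```typescript
// Sketch of a GraphQL-backed getAllFromStorage. The endpoint, query, and
// response shape are illustrative assumptions, not the blog's actual schema.

interface PostSearchOptions { limit: number; offset: number; }

// Building the request body is a pure function, which keeps it easy to test.
export function buildPostsRequest(options: PostSearchOptions) {
  return {
    query: `
      query Posts($limit: Int!, $offset: Int!) {
        posts(limit: $limit, offset: $offset) {
          slug
          title
        }
      }
    `,
    variables: { limit: options.limit, offset: options.offset },
  };
}

export async function getAllFromStorage(options: PostSearchOptions) {
  const response = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildPostsRequest(options)),
  });
  const { data } = await response.json();
  return data.posts;
}
```

Keeping the query-building separate from the network call makes the GraphQL layer testable without hitting a server.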
Now, for example, whenever a component fetches posts, it calls:
await activeDataProvider.getAll(options);
The underlying provider handles the communication with the persistence layer, ensuring the component is agnostic to whether the data came from GraphQL, REST, or elsewhere.
Check out the complete source code here.
After spending considerable time working with the Python framework Django, I've recently ventured into the Node.js world with NestJS. One of the features I deeply missed from Django was the Django Shell. It was an incredibly useful tool that allowed me to interact with my application in a Python shell, test out code snippets, and manipulate data directly using the Django ORM.
In the NestJS ecosystem, I was searching for a similar interactive environment and discovered that what I was looking for is called a REPL (Read-Evaluate-Print Loop). A REPL provides an interactive shell where you can execute code in real-time within the context of your application.
In this post, I'll show you how to set up a REPL in a NestJS project that uses Mikro-ORM, drawing from my experience adapting code from this GitHub repository.
A REPL is invaluable for trying out code snippets in real time, inspecting and manipulating data through your ORM, and debugging inside the full context of your running application.
Here's how you can set up a REPL in your NestJS project:
The repl.ts File
In the root of your project, create a file named repl.ts with the following content:
import 'tsconfig-paths/register';
import { repl } from '@nestjs/core';
import { AppModule } from './app.module';
import { MikroORM } from '@mikro-orm/core';
import { commonMikroOrmConfig } from './mikro-orm.config';
import { Post } from './posts/post.entities';

async function bootstrap() {
  const replServer = await repl(AppModule);
  const { context } = replServer;

  const orm = await MikroORM.init({
    ...commonMikroOrmConfig,
    allowGlobalContext: true,
    entitiesTs: ['./**/*.entities.ts'],
    entities: ['./dist/**/*.entities.js'],
    discovery: {
      warnWhenNoEntities: false,
    },
  });

  // Add your entities and ORM to the REPL context for easy access
  context.Post = Post;
  context.orm = orm;
  context.em = orm.em;
}

bootstrap();
A few things to note about this file:
- We import repl from @nestjs/core and MikroORM from @mikro-orm/core.
- The ORM is initialized inside the bootstrap function.
- We attach the ORM, its entity manager (em), and any entities (like Post) to the REPL context for easy access.
Run the following command in your terminal:
npm run start -- --entryFile repl
This tells NestJS to use repl.ts as the entry file instead of the default main.ts.
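If you'd rather not type the flag each time, you could also add a dedicated script to your package.json (the script name repl is my own suggestion, not a NestJS convention):

```json
{
  "scripts": {
    "repl": "npm run start -- --entryFile repl"
  }
}
```

After that, starting the shell is just npm run repl.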
Once the REPL starts, you'll see a prompt like this:
[info] MikroORM successfully connected to database blog_db on postgresql://blog_user:*****@127.0.0.1:5432
>
Now you can interact with your application. Here's an example of querying all Post
entities:
> const posts = await em.find(Post, {});
[query] select "p0".* from "post" as "p0" [took 5 ms, 2 results]
> posts
[
{
id: 1,
title: 'First Post',
content: 'This is the first post.',
createdAt: 2023-10-21T12:34:56.789Z
},
{
id: 2,
title: 'Second Post',
content: 'This is the second post.',
createdAt: 2023-10-22T08:15:30.123Z
}
]
Inside the REPL, Post, User, or any other entities you've added to the context are available directly, and em is available for database operations.
Setting up a REPL in your NestJS project with Mikro-ORM bridges the gap between Django's interactive shell and the Node.js world. It enhances productivity by allowing real-time interaction with your application's context and database.
Feel free to explore and extend this setup by adding more entities or custom services to the REPL context. Happy coding!
When building an application with NestJS and Mikro-ORM in TypeScript, ensuring proper testing is essential to maintain code quality and reliability. In this post, I will cover three main testing strategies for database-related operations, each with its pros and cons.
In this approach, you set up an in-memory SQLite database during tests to simulate persistence without interacting with a real database.
Pros:
- Fast and self-contained: no external database service is needed, so tests can run anywhere.
- Real ORM behavior: entities are actually persisted, flushed, and queried.
Cons:
- SQLite's SQL dialect and feature set differ from production databases like PostgreSQL, so some queries may behave differently than they would in production.
import { MikroORM } from '@mikro-orm/core';
import { SqliteDriver } from '@mikro-orm/sqlite';
import { User } from './user.entity'; // example entity

describe('User Service - In-Memory DB', () => {
  let orm: MikroORM;

  beforeAll(async () => {
    orm = await MikroORM.init({
      entities: [User],
      dbName: ':memory:',
      driver: SqliteDriver,
      // Allow using the global EntityManager directly in tests
      allowGlobalContext: true,
    });
    const generator = orm.getSchemaGenerator();
    await generator.createSchema();
  });

  afterAll(async () => {
    await orm.close(true);
  });

  it('should persist and retrieve a user entity', async () => {
    const userRepo = orm.em.getRepository(User);
    const user = userRepo.create({ name: 'John Doe' });
    await orm.em.persistAndFlush(user);

    const retrievedUser = await userRepo.findOne({ name: 'John Doe' });
    expect(retrievedUser).toBeDefined();
    expect(retrievedUser!.name).toBe('John Doe');
  });
});
This setup is relatively straightforward, but keep in mind the limitations regarding database compatibility. Note also that this approach is not recommended by the Mikro-ORM creator, although it is used anyway for some tests in the Mikro-ORM repo itself.
Another option is to initialize Mikro-ORM with the same driver you'd use in production but prevent it from connecting to a real database by setting connect: false. This can be a quick setup, especially when you don't need to run real database queries.
Pros:
- Quick to set up: no database service and no schema management required.
- Uses the same driver and entity metadata as production.
Cons:
- Nothing is actually persisted, so every database interaction has to be mocked.
import { MikroORM } from '@mikro-orm/core';
import { User } from './user.entity';

describe('User Service - No DB Connection', () => {
  let orm: MikroORM;

  beforeAll(async () => {
    orm = await MikroORM.init({
      entities: [User],
      dbName: 'test-db',
      type: 'postgresql', // same as production
      connect: false, // prevent real connection
      allowGlobalContext: true,
    });
  });

  it('should mock user creation and retrieval', async () => {
    const mockUser = { id: 1, name: 'Mock User' } as User;
    const userRepo = orm.em.getRepository(User);
    // persistAndFlush resolves to void, so the mock returns nothing
    jest.spyOn(userRepo, 'persistAndFlush').mockImplementation(async () => {});
    jest.spyOn(userRepo, 'findOne').mockResolvedValue(mockUser);

    await userRepo.persistAndFlush(mockUser);
    const foundUser = await userRepo.findOne({ name: 'Mock User' });

    expect(foundUser).toBeDefined();
    expect(foundUser!.name).toBe('Mock User');
  });
});
This approach works well for unit tests where database interaction is mocked. However, the lack of actual persistence may make your tests less reliable.
Mocking everything is an approach where you mock both the repository methods and any related services to simulate the behavior of the database without involving actual ORM operations. See an example in the nestjs-realworld-example-app here.
Pros:
- Fastest option: no ORM initialization at all.
- Full control over the returned data, keeping the focus on business logic.
Cons:
- No real ORM or SQL behavior is exercised, so mocks can drift from how the database actually behaves.
import { Test, TestingModule } from '@nestjs/testing';
import { UserService } from './user.service';
import { User } from './user.entity';
import { getRepositoryToken } from '@mikro-orm/nestjs';

describe('User Service - Full Mock', () => {
  let userService: UserService;

  const mockRepository = {
    persistAndFlush: jest.fn(),
    findOne: jest.fn(),
  };

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [
        UserService,
        { provide: getRepositoryToken(User), useValue: mockRepository },
      ],
    }).compile();

    userService = module.get<UserService>(UserService);
  });

  it('should create and return a user', async () => {
    const mockUser = { id: 1, name: 'Mock User' };
    mockRepository.persistAndFlush.mockResolvedValue(mockUser);
    mockRepository.findOne.mockResolvedValue(mockUser);

    const createdUser = await userService.create({ name: 'Mock User' });
    const foundUser = await userService.findOne({ name: 'Mock User' });

    expect(createdUser).toEqual(mockUser);
    expect(foundUser).toEqual(mockUser);
  });
});
This is particularly useful in unit tests where the focus is on testing business logic rather than database interaction.
Choosing the right testing strategy depends on the scope and type of your tests:
- Use an in-memory SQLite database for integration-style tests that exercise real ORM behavior.
- Use connect: false when you want the production driver's configuration without a live database.
- Mock everything for fast unit tests focused purely on business logic.
Consider mixing and matching these approaches based on the requirements of your project to balance accuracy, speed, and simplicity.