Over time, I’ve shared a few posts about how my blog evolved into the Next.js project it is today. In this post, I want to dive into a recent architectural improvement and explain more about how I seamlessly switched my blog’s data source from a REST API to a GraphQL API by modifying just a handful of files.
This shift was possible thanks to the use of data providers in my project. By consistently interacting with an abstraction layer (`activeDataProvider`), I was able to decouple my data-fetching logic from the actual source of the data.
The beauty of this design lies in its simplicity. To change the data provider, all I had to do was:
- Implement a new class that adheres to the `IDataProvider` interface.
- Set an instance of that class as `activeDataProvider`.
That’s it! No need to rewrite logic across multiple components or refactor complex parts of the application.
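To make the swap concrete, here is a minimal, self-contained sketch. The class and method names below are illustrative only, not the blog's real code:

```typescript
// Minimal sketch: two providers behind one shared interface.
// Names and signatures here are illustrative, not the blog's actual classes.
interface IDataProvider {
  getOneBySlug(slug: string): Promise<string | null>;
}

class RestDataProvider implements IDataProvider {
  async getOneBySlug(slug: string): Promise<string | null> {
    return `rest:${slug}`; // a real provider would call the REST API here
  }
}

class GraphQLDataProvider implements IDataProvider {
  async getOneBySlug(slug: string): Promise<string | null> {
    return `graphql:${slug}`; // a real provider would issue a GraphQL query here
  }
}

// Switching the whole app from REST to GraphQL is this one line:
const activeDataProvider: IDataProvider = new GraphQLDataProvider();
```

Because every component talks to `activeDataProvider` and nothing else, the rest of the codebase never notices which concrete class sits behind it.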
The Data Provider Interface
Here’s what the current iteration of the `IDataProvider` interface looks like:
```typescript
export interface IDataProvider {
  getAll(options: PostSearchOptions): Promise<PaginatedPosts>;
  getOneBySlug(slug: string): Promise<Post | null>;
  getPostMetadata(slug: string): Promise<Post | null>;
  create?(data: Partial<Post>): Promise<Post>;
  update?(slug: string, data: Partial<Post>): Promise<Post | null>;
  delete?(slug: string): Promise<boolean>;
}
```
This interface enforces the core methods required for interacting with my blog’s data—fetching posts, retrieving metadata, and optionally creating, updating, or deleting posts.
Abstracting with BaseDataProvider
To ensure consistency across different data providers, I implemented a `BaseDataProvider` class. It handles all the standard methods from the interface and introduces an extra layer of abstraction by defining abstract methods that subclasses must implement:
```typescript
export abstract class BaseDataProvider implements IDataProvider {
  create?(data: Partial<Post>): Promise<Post> {
    throw new Error('Method not implemented.');
  }

  update?(slug: string, data: Partial<Post>): Promise<Post | null> {
    throw new Error('Method not implemented.');
  }

  delete?(slug: string): Promise<boolean> {
    throw new Error('Method not implemented.');
  }

  abstract getAllFromStorage(
    options: PostSearchOptions,
  ): Promise<PaginatedPosts>;

  abstract getOneBySlugFromStorage(slug: string): Promise<Post | null>;

  abstract getPostMetadataFromStorage(slug: string): Promise<Post | null>;

  // The public methods simply delegate to the storage-specific implementations.
  async getAll(options: PostSearchOptions): Promise<PaginatedPosts> {
    return this.getAllFromStorage(options);
  }

  async getOneBySlug(slug: string): Promise<Post | null> {
    return this.getOneBySlugFromStorage(slug);
  }

  async getPostMetadata(slug: string): Promise<Post | null> {
    return this.getPostMetadataFromStorage(slug);
  }
}
```
These `<Something>FromStorage` methods are the real magic—they encapsulate the logic for interacting with the actual data source, whether it’s a REST API, a GraphQL endpoint, or even static files.

The `BaseDataProvider`, on the other hand, provides the generic methods `getAll`, `getOneBySlug`, and `getPostMetadata`. These are the methods that our components interact with directly. Internally, they call the corresponding `getAllFromStorage`, `getOneBySlugFromStorage`, and `getPostMetadataFromStorage` methods.
This separation ensures that the specific details of the persistence layer are abstracted away from the components, keeping the architecture clean and decoupled.
Why This Matters
In my opinion, decoupling components from external dependencies is a powerful asset that allows us to create more resilient and testable code. While this approach introduces some overhead and requires a shift in mindset, it proves invaluable in fast-paced environments where technologies evolve and change rapidly.
By creating abstraction layers, we can make our code more adaptable, enabling smoother transitions to new tools or data sources without major rewrites. This flexibility ultimately helps future-proof the project and maintain long-term efficiency.
Of course, one could argue that maintaining this level of abstraction is easy in a small project like my blog. But my perspective is this: if I can apply such high coding standards to a personal project that generates no profit and where no one will complain if it breaks tomorrow, why shouldn’t I hold myself to the same (or even higher) standard in my professional work?
In a business environment, where the software directly contributes to revenue and people rely on the services I build, maintaining clean, adaptable, and well-structured code is even more critical.
Bad Example vs Good Example
This is an example of a component tightly coupled to the implementation of the data layer, in this case using Apollo GraphQL:

```typescript
import { FC } from 'react';
import { useQuery } from '@apollo/client';
// Assumed location of the query definitions and their types:
import { GET_POSTS, PostsData, PostsVars } from './queries';

const PostList: FC = () => {
  const { data } = useQuery<PostsData, PostsVars>(GET_POSTS, {
    variables: { limit: 10, offset: 0 },
  });

  return (
    <ul>
      {data?.posts.map((post) => (
        <li key={post.id}>
          <h2>{post.title}</h2>
          <p>{post.text}</p>
        </li>
      ))}
    </ul>
  );
};
```
And here is one that uses the approach I propose. Note that the component is now async (a React Server Component), since it awaits the provider directly:

```typescript
import activeDataProvider from './data-providers/active';

const PostList = async () => {
  const { posts } = await activeDataProvider.getAll({ limit: 10, offset: 0 });

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>
          <h2>{post.title}</h2>
          <p>{post.text}</p>
        </li>
      ))}
    </ul>
  );
};
```
This version includes no imports related to GraphQL or any other external dependency. It relies solely on the `activeDataProvider` abstraction, ensuring the component remains decoupled from the underlying data-fetching implementation.
Writing Mocks
Another benefit of this approach is how easily a data provider can be swapped out when writing unit tests.

Since my app relies on `activeDataProvider`, I can easily replace the real data provider with an in-memory mock during unit tests.

In my `vitest.setup.ts` file, I added a mock that replaces `activeDataProvider` with a lightweight, in-memory provider:
```typescript
import { readFileSync } from 'fs';
import { vi } from 'vitest';
// Assumed location of the in-memory provider:
import { MemoryDataProvider } from './data-providers/memory';

vi.mock('./data-providers/active', async () => {
  const jsonData = JSON.parse(readFileSync('./tests/test-data.json', 'utf-8'));

  return {
    default: new MemoryDataProvider(jsonData),
  };
});
```
This mock loads data from a static JSON file during tests, ensuring predictable results without external dependencies.
Testing a component that fetches posts is as simple as calling:

```typescript
await activeDataProvider.getAll(options);
```
Since the provider is mocked, the tests run fast, and I can easily simulate various data states (empty results, errors, or populated lists).
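For reference, the `MemoryDataProvider` behind that mock can stay very small. Here is a hypothetical sketch; the `posts` and `total` fields are assumptions about the real `PaginatedPosts` shape, not the blog's actual code:

```typescript
// Hypothetical in-memory provider backing the vitest mock.
// Field names (posts, total) are assumed, not taken from the real project.
interface Post {
  slug: string;
  title: string;
}

interface PostSearchOptions {
  limit?: number;
  offset?: number;
}

interface PaginatedPosts {
  posts: Post[];
  total: number;
}

class MemoryDataProvider {
  constructor(private posts: Post[]) {}

  // Paginate over the in-memory array instead of hitting a real backend.
  async getAll(options: PostSearchOptions = {}): Promise<PaginatedPosts> {
    const { offset = 0, limit = 10 } = options;
    return {
      posts: this.posts.slice(offset, offset + limit),
      total: this.posts.length,
    };
  }

  async getOneBySlug(slug: string): Promise<Post | null> {
    return this.posts.find((p) => p.slug === slug) ?? null;
  }
}
```

Simulating an empty result set or a fully populated list is then just a matter of constructing the provider with different arrays.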
How It Worked in Practice
During this latest iteration, I transitioned the blog to pull data from a GraphQL API by implementing a new provider class that extends `BaseDataProvider`. Once the abstract methods (`getAllFromStorage`, etc.) were implemented with GraphQL queries, the switch was complete.
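As a rough illustration of what such a provider can look like, here is a self-contained sketch. The query shape, field names, and the injected executor are assumptions for illustration, not the real implementation:

```typescript
// Hypothetical sketch of a GraphQL-backed provider.
// Query shape, field names, and the executor type are assumed, not real.
interface Post {
  slug: string;
  title: string;
}

// The transport is injected so it can be fetch, an Apollo client, or a test stub.
type GraphQLExecutor = (
  query: string,
  variables: Record<string, unknown>,
) => Promise<any>;

abstract class BaseDataProvider {
  abstract getOneBySlugFromStorage(slug: string): Promise<Post | null>;

  async getOneBySlug(slug: string): Promise<Post | null> {
    return this.getOneBySlugFromStorage(slug);
  }
}

class GraphQLDataProvider extends BaseDataProvider {
  constructor(private execute: GraphQLExecutor) {
    super();
  }

  // Only this storage-specific method knows about GraphQL.
  async getOneBySlugFromStorage(slug: string): Promise<Post | null> {
    const query = `
      query PostBySlug($slug: String!) {
        post(slug: $slug) { slug title }
      }
    `;
    const data = await this.execute(query, { slug });
    return data?.post ?? null;
  }
}
```

Injecting the executor keeps the provider itself unit-testable: in tests it can be a stub that returns canned data, while in production it wraps the real HTTP transport.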
Now, for example, whenever a component fetches posts, it calls:

```typescript
await activeDataProvider.getAll(options);
```
The underlying provider handles the communication with the persistence layer, ensuring the component is agnostic to whether the data came from GraphQL, REST, or elsewhere.
Check out the complete source code here.
After spending considerable time working with the Python framework Django, I've recently ventured into the Node.js world with NestJS. One of the features I deeply missed from Django was the Django Shell. It was an incredibly useful tool that allowed me to interact with my application in a Python shell, test out code snippets, and manipulate data directly using the Django ORM.
In the NestJS ecosystem, I was searching for a similar interactive environment and discovered that what I was looking for is called a REPL (Read-Evaluate-Print Loop). A REPL provides an interactive shell where you can execute code in real-time within the context of your application.
In this post, I'll show you how to set up a REPL in a NestJS project that uses Mikro-ORM, drawing from my experience adapting code from this GitHub repository.
Why Use a REPL?
A REPL is invaluable for:
- Testing code snippets: Quickly try out code without writing tests or modifying your application files.
- Database manipulation: Interact with your database through your ORM to query or modify data.
- Debugging: Experiment with different functions and methods to troubleshoot issues.
Setting Up the REPL in NestJS with Mikro-ORM
Here's how you can set up a REPL in your NestJS project:
Step 1: Create a `repl.ts` File

In the root of your project, create a file named `repl.ts` with the following content:
```typescript
import 'tsconfig-paths/register';
import { repl } from '@nestjs/core';
import { AppModule } from './app.module';
import { MikroORM } from '@mikro-orm/core';
import { commonMikroOrmConfig } from './mikro-orm.config';
import { Post } from './posts/post.entities';

async function bootstrap() {
  const replServer = await repl(AppModule);
  const { context } = replServer;

  const orm = await MikroORM.init({
    ...commonMikroOrmConfig,
    allowGlobalContext: true,
    entitiesTs: ['./**/*.entities.ts'],
    entities: ['./dist/**/*.entities.js'],
    discovery: {
      warnWhenNoEntities: false,
    },
  });

  context.Post = Post;
  context.orm = orm;
  context.em = orm.em;
}

bootstrap();
```
Explanation:

- Import Statements: We import the necessary modules, including `repl` from `@nestjs/core` and `MikroORM` from `@mikro-orm/core`.
- Bootstrap Function: We initialize the REPL server and MikroORM within an asynchronous `bootstrap` function.
- Context Enhancement: We add the ORM instance, the entity manager (`em`), and any entities (like `Post`) to the REPL context for easy access.
Step 2: Start the REPL
Run the following command in your terminal:

```shell
npm run start -- --entryFile repl
```
This tells NestJS to use `repl.ts` as the entry file instead of the default `main.ts`.
Using the REPL
Once the REPL starts, you'll see a prompt like this:

```
[info] MikroORM successfully connected to database blog_db on postgresql://blog_user:*****@127.0.0.1:5432
>
```
Now you can interact with your application. Here's an example of querying all `Post` entities:
```
> const posts = await em.find(Post, {});
[query] select "p0".* from "post" as "p0" [took 5 ms, 2 results]
> posts
[
  {
    id: 1,
    title: 'First Post',
    content: 'This is the first post.',
    createdAt: 2023-10-21T12:34:56.789Z
  },
  {
    id: 2,
    title: 'Second Post',
    content: 'This is the second post.',
    createdAt: 2023-10-22T08:15:30.123Z
  }
]
```
Tips for Using the REPL:

- Access Entities: Use `Post`, `User`, or any other entities you've added to the context.
- Entity Manager: `em` is available for database operations.
- Autocomplete: The REPL supports autocomplete for faster coding.
Important Considerations
- Production Use: While a REPL is powerful, using it in a production environment can be risky. Be cautious when manipulating data directly.
- Security: Ensure that access to the REPL in production environments is secure and restricted.
Conclusion
Setting up a REPL in your NestJS project with Mikro-ORM bridges the gap between Django's interactive shell and the Node.js world. It enhances productivity by allowing real-time interaction with your application's context and database.
Feel free to explore and extend this setup by adding more entities or custom services to the REPL context. Happy coding!