Creating a RESTful API has never been easier with FeathersJS. In this tutorial, we will use MongoDB as an example to see how quickly we can set up a state-of-the-art API. We will also go through how to use Postman to call and test the RESTful API.
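To make that concrete, here is a minimal sketch of the core idea using the feathers-mongodb adapter; the actual tutorial scaffolds much more with the Feathers CLI, and the `demo` database, `messages` collection, and port 3030 below are just placeholders.

```js
const { MongoClient } = require('mongodb');
const feathers = require('@feathersjs/feathers');
const express = require('@feathersjs/express');
const service = require('feathers-mongodb');

const app = express(feathers());
app.use(express.json());
app.configure(express.rest());

MongoClient.connect('mongodb://localhost:27017').then((client) => {
  // Expose a MongoDB collection as a full CRUD REST service.
  app.use('/messages', service({ Model: client.db('demo').collection('messages') }));
  app.listen(3030);
});
```

With something like this running, Postman can exercise the API directly, for example GET or POST requests against http://localhost:3030/messages.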
NextFeathers uses JSON Web Tokens (JWT) for authentication when calling the RESTful API implemented by FeathersJS. The token was simply saved in the browser's localStorage and removed when the user logged out. Many people say this is bad practice because an attacker could run JavaScript on your site through a cross-site scripting (XSS) vulnerability and read the token straight out of localStorage. Personally, I was against changing it because such an attack is unlikely, and as far as I know that is how AWS Amplify works by default. But the risk is real, so I decided to fix it.
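For context, this is roughly what the client-side setup looks like and where the token lives; the URL is a placeholder, and the `storage` option is the knob this whole concern revolves around.

```js
const feathers = require('@feathersjs/feathers');
const rest = require('@feathersjs/rest-client');
const auth = require('@feathersjs/authentication-client');

const client = feathers();
client.configure(rest('https://example.com').fetch(window.fetch.bind(window)));
// The storage option decides where the JWT lives between requests; localStorage
// is the common choice, and it is exactly what the XSS concern is about.
client.configure(auth({ storage: window.localStorage }));
```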
In this article, we will learn how to easily convert CSV files into a RESTful API using the FeathersJS command line and MongoDB. We will also learn how to cast data types using a FeathersJS hook.
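A minimal sketch of the casting idea: CSV values arrive as strings, so a before-create hook converts them before they reach MongoDB. The `products` service and the `price` and `publishedAt` fields are made up for illustration.

```js
// Cast string fields from a CSV import into proper types before saving.
const castTypes = async (context) => {
  const records = Array.isArray(context.data) ? context.data : [context.data];
  for (const record of records) {
    record.price = parseFloat(record.price);
    record.publishedAt = new Date(record.publishedAt);
  }
  return context;
};

// app.service('products').hooks({ before: { create: [castTypes] } });
```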
In Feathers, JWT tokens are stateless and carry an expiration date. Once a token expires, the user has to log in again to get a new one. That hurts the UX: if someone is in the middle of writing a long post, an expired token can cost them an unsaved draft. By default, Feathers' JWT strategy does not return a new token, but it can be customized to do so, which is exactly what we need for JWT token auto-renewal.
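One way to sketch that customization: extend the built-in JWT strategy so a successful authentication also returns a freshly signed access token for the client to store. The payload shape (a `sub` taken from a Mongo `_id` on a `user` entity) and when you choose to renew are assumptions, not the article's exact implementation.

```js
const { AuthenticationService, JWTStrategy } = require('@feathersjs/authentication');

class RenewingJWTStrategy extends JWTStrategy {
  async authenticate(authentication, params) {
    // Run the normal JWT verification first.
    const result = await super.authenticate(authentication, params);
    // Then sign a fresh token so the client can replace the expiring one.
    const accessToken = await this.authentication.createAccessToken({
      sub: result.user._id, // assumes a 'user' entity with a Mongo _id
    });
    return { ...result, accessToken };
  }
}

// Registration, e.g. in authentication.js:
// const authService = new AuthenticationService(app);
// authService.register('jwt', new RenewingJWTStrategy());
// app.use('/authentication', authService);
```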
In this article, we will learn how to deploy a FeathersJS application to a Node server behind Apache using a reverse proxy. PM2 is used to easily start and stop the application.
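The shape of the setup, sketched as a PM2 process file: PM2 keeps the Feathers app alive on a local port, and Apache's reverse proxy (for example a ProxyPass to http://localhost:3030) forwards public traffic to it. The paths and port are assumptions for illustration.

```js
// ecosystem.config.js (sketch)
module.exports = {
  apps: [
    {
      name: 'feathers-api',
      script: './src/index.js',
      env: { NODE_ENV: 'production', PORT: 3030 },
    },
  ],
};

// Start/stop with: pm2 start ecosystem.config.js / pm2 stop feathers-api
```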
"File" is a special type of data submitting to the server. It's encoded as multipart/form-data (i.e., binary data.) It does not like simple key/value pairs from text fields, which could be captured in params or data context of FeathersJS, we would need middleware to covert this to either params or data, and then be saved to the server.
In this article, I show you how I implemented the tags feature using remote data. The frontend is built with the Semantic UI Dropdown component, and the backend is implemented with FeathersJS and MongoDB.
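Roughly, the backend lookup that feeds the dropdown as the user types might look like this. The `tags` service and `name` field are assumptions, and for the Mongo adapters `$regex` typically has to be whitelisted/allowed in the service options before such a query is accepted.

```js
// client is a configured Feathers client; searchQuery comes from the
// Dropdown's onSearchChange handler.
async function searchTags(client, searchQuery) {
  const result = await client.service('tags').find({
    query: { name: { $regex: searchQuery, $options: 'i' }, $limit: 10 },
  });
  const tags = result.data || result; // paginated or not
  // Shape the records into the { key, text, value } options Semantic UI expects.
  return tags.map((tag) => ({ key: tag._id, text: tag.name, value: tag.name }));
}
```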
Sometimes a Node.js application does not shut down when you close the IDE, and the next time you try to start it, it fails because the port is already in use. The fix is easy, but you have to remember the command to find the process and kill it. Why not add "kill-port" to the scripts in package.json instead?
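Something along these lines in package.json would do it; npm runs the `pre` script automatically before `dev`, and both the port 3030 and the dev command are placeholders for your own setup.

```json
{
  "scripts": {
    "predev": "npx kill-port 3030",
    "dev": "node src/"
  }
}
```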
While working on a recent Feathers.js project, I encountered a challenge related to modifying service results in an after hook. Specifically, I needed to append a calculated value (in my case, a rank) to each record. However, the way I initially approached the problem led to an unintended recursion issue, where the after hook triggered itself repeatedly.
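A sketch of the pitfall and one way around it (the `scores` service and the ranking logic are made up, not the project's actual code): calling the same service from inside its own after hook re-enters the hook chain, so the inner call needs a way to opt out, for example a custom params flag.

```js
const addRank = async (context) => {
  if (context.params.skipRankHook) {
    return context; // inner call: skip ranking to avoid re-triggering this hook
  }

  const records = context.result.data || context.result;
  records.forEach((record, index) => {
    record.rank = index + 1;
  });

  // If ranking needed another lookup on the same service, flag that call:
  // await context.service.find({ query: { ... }, skipRankHook: true });
  return context;
};

// app.service('scores').hooks({ after: { find: [addRank] } });
```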
Prefetching is one of the most powerful features of Next.js, designed to make navigation between pages incredibly fast. By preloading essential resources in the background, Next.js can provide a seamless user experience, reducing the time it takes to load new pages. But how exactly does prefetching work, and how does it handle dynamic data? Let’s dive into it.
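As a quick illustration (the routes are placeholders): in production builds, `next/link` prefetches the target page's JavaScript bundle, and for statically generated pages its data JSON, once the link scrolls into the viewport, while `router.prefetch()` triggers the same behavior on demand. Data for server-rendered pages is still fetched at navigation time.

```jsx
import Link from 'next/link';
import { useRouter } from 'next/router';

export default function Nav() {
  const router = useRouter();
  return (
    <nav onMouseEnter={() => router.prefetch('/dashboard')}>
      {/* Prefetched automatically when visible in the viewport (production only). */}
      <Link href="/posts/hello-world">Read the post</Link>
    </nav>
  );
}
```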
To reduce high SYN_RECV and TIME_WAIT states on our Next.js + Feathers.js server, we enabled HTTP keep-alive in Axios, monitored socket usage, and adjusted system settings (tcp_syncookies=1, somaxconn=1024). While keep-alive helped reduce overhead, increasing somaxconn didn’t clearly improve results and may need to be reverted. Final tuning may also depend on upstream traffic and reverse proxy behavior.
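The keep-alive part boils down to giving Axios persistent agents so sockets are reused between Next.js and the Feathers API instead of being opened per request; the base URL and `maxSockets` value below are example settings, not the exact production config.

```js
const http = require('http');
const https = require('https');
const axios = require('axios');

// Reuse TCP connections instead of opening a new socket per request,
// which is what was piling up in TIME_WAIT.
const api = axios.create({
  baseURL: 'http://localhost:3030',
  httpAgent: new http.Agent({ keepAlive: true, maxSockets: 50 }),
  httpsAgent: new https.Agent({ keepAlive: true, maxSockets: 50 }),
});
```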
When deploying Node.js applications with PM2, it’s important to understand how cluster mode, instances, and CPU cores interact — especially when you’re combining multiple apps like a Next.js frontend and a Feathers.js backend.
This post breaks down the core concepts and lessons learned from setting up next-dna and feathers-dna with PM2 cluster mode.
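For a rough picture of what such a setup looks like, here is a sketch of a PM2 process file for both apps on a 2-core box; the names, script paths, ports, and instance counts are illustrative rather than the exact configuration. In cluster mode PM2 forks multiple Node processes behind one port, and `instances: 'max'` on both apps would oversubscribe the CPU, which is the trade-off the post digs into.

```js
// ecosystem.config.js (sketch)
module.exports = {
  apps: [
    {
      name: 'next-dna',
      script: 'node_modules/next/dist/bin/next',
      args: 'start -p 3000',
      exec_mode: 'cluster',
      instances: 2,
    },
    {
      name: 'feathers-dna',
      script: './src/index.js',
      exec_mode: 'cluster',
      instances: 1,
    },
  ],
};
```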
On May 28, I reviewed 20 minutes of live traffic logs from my website to evaluate whether my 2-core server could keep up with demand. The data showed over 13,000 requests — roughly 11 per second — with a mix of bot and real user traffic hitting SSR pages and API routes. While the server handled it, signs of strain appeared: spikes in TIME_WAIT, socket hang-up errors, and MongoDB lag. The conclusion? My setup is nearing its safe limits. Upgrading to 4 cores or offloading some processing (e.g., MongoDB or caching) may be necessary to maintain performance and stability as traffic grows.