
Core Node.js Interview Questions

Agar tum Node.js developer ho aur interview ki preparation kar rahe ho, to yeh guide tumhare liye perfect hai. Isme hum cover karenge Event Loop, Streams, Buffers, Promises, async/await aur real-world scenarios — sab kuch Hinglish mein easy explanation ke saath.

27 Questions · Updated 1 week ago
Q1

Node.js single-threaded hote hue bhi concurrent kaise handle karta hai?

Yeh question bahut important hai — aur iska answer samajhne ke liye Event Loop samajhna padega.


Pehle problem samjho

Socho ek waiter hai restaurant mein. Agar wo single-threaded hota — matlab ek customer ka order leke kitchen mein khud khada rehta jab tak khana na bane — toh baaki sab bhookhe marenge. Yahi problem traditional blocking servers mein hoti thi.

Node.js ne isko differently solve kiya.


Node.js ka secret: Event Loop + Non-Blocking I/O

Node.js ka main thread sirf ek kaam karta hai — JavaScript execute karna. Lekin jab bhi koi slow kaam aata hai (DB query, file read, HTTP request), wo usse background mein delegate kar deta hai aur aage badh jaata hai.

Kaise kaam karta hai — step by step

1. Single thread sirf JS chalata hai Node.js ka main thread V8 engine pe chalta hai. Ek waqt mein sirf ek cheez execute hoti hai — yahi "single-threaded" hone ka matlab hai.

2. Slow kaam delegate ho jaata hai Jab code fs.readFile(), http.get(), ya setTimeout() call karta hai, Node us kaam ko Libuv ko de deta hai (file/crypto ke liye thread pool, network ke liye OS-level non-blocking syscalls). Main thread ruka nahi, aage chal pada.

3. Event Loop ghoomta rehta hai Ek infinite loop chal raha hai background mein. Wo continuously check karta hai: "koi async kaam complete hua kya? Koi callback ready hai kya?"

4. Callback Queue mein callback aata hai Jab Libuv ya OS ka kaam complete hota hai, uska callback Callback Queue mein push hota hai.

5. Microtask Queue ki priority zyada hai Promise.then() aur queueMicrotask() ke callbacks Microtask Queue mein jaate hain. Event Loop pehle yahi khali karta hai, tab Callback Queue dekha jaata hai.

6. Call Stack pe wapas aata hai Jab Call Stack empty hoti hai, Event Loop ek callback uthata hai aur usse execute karta hai.


Waiter wali analogy yaad hai?

Node.js ka main thread ek super-efficient waiter ki tarah hai:

  • Customer ka order liya (function call)
  • Kitchen ko diya (Libuv/OS)
  • Waiter doosre tables pe chala gaya (next JS code)
  • Jab kitchen ne bell bajayi (callback ready), tab wapas aaya

Koi thread block nahi hua. Yehi reason hai ki Node.js thousands of concurrent connections handle kar sakta hai ek hi thread pe — kyunki waiting time mein woh doosra kaam karta rehta hai.

Interview mein yeh bolna hai:

Node.js single-threaded hai, lekin concurrency achieve karta hai Event Loop aur non-blocking I/O ke through. Jab bhi koi async operation aata hai — jaise file read ya DB query — usse Libuv ke thread pool ya OS ke non-blocking syscalls ko delegate kar deta hai, aur main thread block hue bina aage chal deta hai. Jab wo operation complete hota hai, uska callback Event Loop ke through Call Stack pe push hota hai aur execute hota hai. Is tarah ek hi thread pe thousands of concurrent requests efficiently handle ho jaate hain.

 

Q2

Event loop kya hota hai? Phases explain karo.

Event Loop — Phases

Event Loop ek infinite loop hai jo continuously check karta hai ki Call Stack empty hai ya nahi, aur agar hai toh koi pending callback execute karna hai kya. Yeh 6 phases mein kaam karta hai, har phase ka apna queue hota hai.

1. Timers: setTimeout / setInterval
Jo callbacks ka time expire ho chuka ho unhe execute karta hai. Exact time ki guarantee nahi — sirf "us time ke baad" ki guarantee hai.
setTimeout(() => console.log('hi'), 100)
// minimum 100ms baad — exact nahi
 
2. Pending Callbacks: I/O errors
Previous iteration mein defer kiye gaye I/O error callbacks yahan chalte hain. Jaise TCP connection errors. Mostly internal — developer ko rarely matter karta hai.
// TCP error callbacks
// Aap directly yahan kuch nahi karoge
 
3. Idle / Prepare: internal only
Completely Node.js internal hai. Poll phase ke liye preparation karta hai. Developer ke liye irrelevant — interview mein bas "internal use" bol do.
// Developer ke liye: ignore karo
 
4. Poll (Sabse important)
Yahi asli kaam hota hai. Naye I/O events fetch karta hai — file reads, DB queries, HTTP requests. Queue empty ho toh nayi events ka wait karta hai ya phir Check phase pe jaata hai.
fs.readFile('file.txt', (err, data) => {
  // yeh callback Poll phase mein chalega
})

5. Check: setImmediate

Sirf setImmediate() callbacks yahan chalte hain. Poll ke turant baad guarantee hai. I/O ke andar setImmediate hamesha setTimeout(fn,0) se pehle chalega.
setImmediate(() => console.log('check'))
setTimeout(() => console.log('timer'), 0)
// I/O ke andar: check pehle, timer baad

6. Close Callbacks: cleanup

socket.destroy() ya abrupt close hone pe 'close' event yahan fire hota hai. Cleanup ka kaam karta hai.

socket.on('close', () => {
  // yeh yahan chalega
})
 
7. Microtask Queue: highest priority
Technically phase nahi hai — lekin har do phases ke beech automatically run hoti hai. Promise.then, async/await, queueMicrotask sab yahan aate hain. Inhe pehle drain kiya jaata hai, tabhi agla phase shuru hota hai.
Promise.resolve().then(() => console.log('microtask'))
setTimeout(() => console.log('timer'), 0)
// Output: microtask PEHLE, timer baad mein

Interview Answer:

Event Loop ek continuously running mechanism hai jo Node.js mein async callbacks ko manage karta hai. Yeh 6 phases mein kaam karta hai — Timers, Pending Callbacks, Idle/Prepare, Poll, Check, aur Close Callbacks. Sabse important Poll phase hai jahan actual I/O callbacks execute hote hain. Har do phases ke beech Microtask Queue — yani Promises aur async/await — sabse pehle drain hoti hai, isliye unki priority sabse zyada hoti hai. Is cyclic mechanism ki wajah se Node.js bina multiple threads ke thousands of concurrent operations handle kar leta hai.

Q3

Microtask queue vs Callback queue difference?

Microtask Queue aur Callback Queue dono mein async callbacks jaate hain, lekin priority alag hai. Microtask Queue — jisme Promises aur async/await aate hain — har phase ke baad aur Call Stack khali hote hi poori drain hoti hai. Callback Queue — jisme setTimeout, setInterval, I/O callbacks aate hain — sirf tab chalta hai jab Microtask Queue bilkul khali ho. Isliye Promises hamesha setTimeout se pehle execute hote hain, chahe dono ek saath schedule kiye gaye hon.
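Yeh priority ek chhote example mein directly dikh jaati hai:

```javascript
// Microtask Queue vs Callback Queue priority demo
const order = [];

setTimeout(() => order.push('timer'), 0);            // Callback (macrotask) Queue
Promise.resolve().then(() => order.push('promise')); // Microtask Queue
order.push('sync');                                  // Call Stack, sabse pehle

// thoda baad mein final order print karo
setTimeout(() => console.log(order.join(' -> ')), 10);
// sync -> promise -> timer
```

Dono 0ms pe schedule hue, phir bhi Promise pehle chala, kyunki Callback Queue tabhi dekhi jaati hai jab Microtask Queue poori drain ho chuki ho.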

Q4

process.nextTick() vs setImmediate()?

process.nextTick() ka callback Microtask Queue mein bhi Promises se pehle chalta hai — matlab current operation complete hote hi, Event Loop ke kisi bhi phase se pehle execute hota hai. setImmediate() Check phase mein chalta hai, yaani Poll phase complete hone ke baad. Toh priority order yeh hai: process.nextTick() → Promise.then() → setImmediate(). Practically, nextTick ko use karo jab koi callback same tick mein guaranteed chahiye ho, aur setImmediate jab I/O ke baad next iteration mein chalana ho.

Q5

Blocking vs Non-blocking code example ke sath?

Blocking code mein main thread operation complete hone tak ruka rehta hai — jaise readFileSync mein poora server usi file ka wait karta hai, koi aur request serve nahi hoti. Non-blocking mein operation ko Libuv ko delegate kar dete hain, main thread aage badhta rehta hai, aur kaam complete hone pe callback execute hota hai. Node.js ka pura performance model non-blocking I/O pe based hai — isliye production mein Sync methods request handlers ke andar kabhi use nahi karne chahiye, sirf startup time pe acceptable hai.

Q6

Streams kya hote hain? Types explain karo.

Streams Node.js mein data ko chunks mein process karne ka mechanism hai — poora data memory mein load karne ki jagah piece by piece handle karta hai. Char types hote hain: Readable (data source, jaise file read karna), Writable (data destination, jaise file write karna), Duplex (dono — jaise TCP socket), aur Transform (data ko beech mein modify kare — jaise compression ya encryption). Practically, 1GB file ko readFileSync se padhoge toh 1GB RAM chahiye, lekin Readable Stream se padhoge toh sirf ek chunk ki memory lagegi. Pipe method se Readable ko Writable se connect kar sakte ho — jaise fs.createReadStream().pipe(res) — jo Node mein most efficient data transfer pattern hai.

Q7

Node.js me Buffer kya hota hai?

Buffer Node.js mein raw binary data store karne ka way hai — jab data aisa ho jo string mein represent nahi ho sakta, jaise images, videos, ya network packets. JavaScript mein string UTF-16 hoti hai, lekin Buffer directly memory mein fixed-size bytes allocate karta hai V8 heap ke bahar. Streams ke saath buffer automatically kaam karta hai — jab data producer consumer se fast ho toh chunks buffer mein hold hote hain. Buffer.from('hello'), Buffer.alloc(1024) jaise methods se create karte hain — aur mostly tum directly Buffer se tab miloge jab file I/O, crypto, ya binary protocols handle kar rahe ho.
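Basic Buffer operations ka ek chhota demo:

```javascript
// Buffer: V8 heap ke bahar raw bytes
const buf = Buffer.from('hello');          // string se buffer
console.log(buf.length);                   // 5 (5 bytes)
console.log(buf[0]);                       // 104 ('h' ka byte value)
console.log(buf.toString('hex'));          // 68656c6c6f

const fixed = Buffer.alloc(4);             // 4 zero-filled bytes (safe allocation)
fixed.write('hi');
console.log(fixed.toString('utf8', 0, 2)); // hi
```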

Q8

Node.js me Global objects kaun kaun se hote hain?

Node.js mein kuch objects globally available hote hain bina require kiye — sabse common hain __filename aur __dirname jo current file ka path dete hain, process object jo environment variables, arguments, aur process lifecycle control karta hai, aur console jo logging ke liye hai. setTimeout, setInterval, setImmediate, aur clearTimeout bhi global hain. global object Node ka window equivalent hai — browser mein window hota hai, Node mein global. Ek important point: __filename aur __dirname ES Modules mein available nahi hote, wahan import.meta.url use karna padta hai.

Q9

__dirname vs process.cwd() me kya difference hai?

__dirname us file ki directory ka absolute path deta hai jisme yeh likha hai — chahe tum us file ko kahin se bhi call karo, path same rahega. process.cwd() current working directory deta hai — yaani jis directory se tumne node command run ki hai. Difference tab matter karta hai jab tum project ke root se kisi nested file ko run karo — __dirname us nested file ka path dega, lekin process.cwd() root directory ka. Isliye file paths ke liye hamesha __dirname use karo, process.cwd() nahi — warna relative paths production mein break ho jaate hain.

Q10

Node.js me module system kaise kaam karta hai?

Node.js mein do module systems hain — CommonJS (CJS) aur ES Modules (ESM). CommonJS mein require() se module load karte hain aur module.exports se export karte hain — yeh synchronous hai aur runtime pe resolve hota hai. ESM mein import/export syntax hai — yeh asynchronous hai aur compile time pe resolve hota hai, isliye tree-shaking possible hai. Jab require('something') call karte ho toh Node pehle core modules check karta hai, phir node_modules folder mein dhundhta hai, aur har module ko pehli baar load hone ke baad cache kar leta hai — isliye same module ko baar baar require karo toh fresh copy nahi milti, cached instance milta hai.

Q11

Callback hell kya hota hai? kaise avoid karte hain?

Callback hell tab hota hai jab multiple async operations ek ke andar ek nested callbacks ke roop mein likhi jaati hain — code pyramid shape le leta hai jo padhna aur debug karna mushkil ho jaata hai. Isse avoid karne ke teen main tarike hain: pehla — Named Functions, yaani anonymous callbacks ki jagah alag named functions likho; doosra — Promises, jo .then().catch() chain se flat structure deta hai; teesra aur best — async/await, jo asynchronous code ko synchronous jaisa dikhata hai, error handling try/catch se hoti hai aur code clean rehta hai. Production mein aaj async/await hi standard hai — callbacks sirf low-level libraries ya legacy code mein dikhte hain.
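Pyramid vs flat flow ka ek sketch. getUser / getOrders / getInvoice yahan hypothetical stubs hain, real app mein yeh DB ya API calls honge:

```javascript
// Hypothetical async steps (demo stubs)
const getUser = async (id) => ({ id, name: 'demo' });
const getOrders = async (user) => [{ id: 1, userId: user.id }];
const getInvoice = async (order) => ({ orderId: order.id, total: 100 });

// Callback version pyramid banata:
// getUser(id, (e, u) => getOrders(u, (e, o) => getInvoice(o[0], (e, inv) => ...)));

// async/await version flat aur readable rehta hai:
async function getInvoiceFor(userId) {
  const user = await getUser(userId);      // step 1
  const orders = await getOrders(user);    // step 2
  return getInvoice(orders[0]);            // step 3
}

let total;
getInvoiceFor(7).then((invoice) => {
  total = invoice.total;
  console.log('invoice total:', total); // invoice total: 100
});
```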

Q12

Promise chaining kaise kaam karti hai?

Promise chaining mein har .then() ek naya Promise return karta hai, isliye unhe chain kar sakte ho — pehle ka output doosre ka input ban jaata hai. Agar kisi .then() ke andar koi value return karo toh wo next .then() ko milti hai, aur agar Promise return karo toh chain us Promise ke resolve hone ka wait karti hai. Error handling ke liye chain ke end mein ek .catch() kaafi hai — woh upar kisi bhi .then() mein throw hua error pakad leta hai. Practically async/await internally same Promise chaining use karta hai — bas syntax sugar hai, isliye dono ka behavior identical hota hai.
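Value return vs Promise return, aur single .catch, sab ek chain mein:

```javascript
let final;

Promise.resolve(2)
  .then((n) => n * 3)                   // value return: next .then ko 6 milta hai
  .then((n) => Promise.resolve(n + 1))  // Promise return: uske resolve ka wait hota hai
  .then((n) => {
    final = n;
    console.log(final);                 // 7
  })
  .catch((err) => {
    // upar kisi bhi .then mein throw hua error yahan pakda jaata hai
    console.error('caught:', err.message);
  });
```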

Q13

Promise.all vs Promise.allSettled?

Promise.all mein agar ek bhi Promise reject ho jaaye toh poora result reject ho jaata hai — baaki Promises ka wait nahi karta. Promise.allSettled mein saari Promises complete hone ka wait karta hai chahe reject hon ya resolve — har ek ka status aur value/reason deta hai. Practically, Promise.all tab use karo jab saari operations ka succeed karna zaroori ho jaise multiple DB writes, aur Promise.allSettled tab jab partial failure acceptable ho jaise multiple API calls jinka result individually check karna ho.
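Dono behaviors ek hi pair of Promises pe:

```javascript
let allError;
let statuses;

const ok = Promise.resolve('ok');
const fail = Promise.reject(new Error('boom'));

// Promise.all: ek reject, toh poora reject
Promise.all([ok, fail]).catch((err) => {
  allError = err.message;
  console.log('Promise.all failed:', allError);
});

// Promise.allSettled: sabka wait, har ek ka status milta hai
Promise.allSettled([ok, fail]).then((results) => {
  statuses = results.map((r) => r.status);
  console.log(statuses); // [ 'fulfilled', 'rejected' ]
});
```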

Q14

async/await internally kaise kaam karta hai Node.js me?

async/await internally Promises ke upar bana syntax hai — engine async function ko ek state machine ki tarah handle karta hai (purane environments ke liye Babel ise generator-based state machine mein compile karta tha; modern V8 ise natively support karta hai). async function hamesha ek Promise return karta hai, aur await us point pe function ko suspend karke control Event Loop ko de deta hai — thread block nahi hota. Jab awaited Promise resolve hoti hai toh function Microtask Queue ke through wahi se resume ho jaata hai. Isliye await sirf async function ke andar kaam karta hai — aur top-level await sirf ES Modules mein allowed hai.
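Dono claims, "async function hamesha Promise return karta hai" aur "await pe suspend/resume hota hai", directly verify ho sakte hain:

```javascript
async function f() {
  return 42; // implicitly Promise.resolve(42) mein wrap hota hai
}

console.log(f() instanceof Promise); // true

let resumedValue;
async function g() {
  const v = await f(); // yahan function suspend, control Event Loop ko
  return v + 1;        // Promise resolve hone par yahin se resume hota hai
}

g().then((v) => {
  resumedValue = v;
  console.log(resumedValue); // 43
});
```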

Q15

Error handling in async functions kaise karte ho?

Async functions mein error handling ke do tarike hain — try/catch block jo sabse clean aur readable hai, aur .catch() jo async function ke return kiye Promise pe laga sakte ho. Agar try/catch nahi lagaya aur error throw hua toh UnhandledPromiseRejection milta hai — Node.js latest versions mein yeh process crash kar deta hai. Best practice yeh hai ki har async function mein try/catch rakho, aur global safety net ke liye process.on('unhandledRejection') bhi handle karo — especially Express jaise frameworks mein async route handlers mein error next(err) ko pass karna zaroori hota hai warna server hang kar jaata hai.
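try/catch plus global safety net ka ek sketch (getUser yahan ek hypothetical function hai jo failure simulate karta hai):

```javascript
// Hypothetical async function jo fail ho sakti hai
async function getUser(id) {
  if (id <= 0) throw new Error('invalid id');
  return { id, name: 'demo' };
}

let caughtMessage;

async function main() {
  try {
    await getUser(-1);
  } catch (err) {
    caughtMessage = err.message;
    console.log('caught:', caughtMessage); // caught: invalid id
  }
}

// Global safety net: jo rejections try/catch se chhoot jaayein
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

main();
```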

Q16

Race condition kya hota hai Node me?

Race condition tab hoti hai jab do ya zyada async operations ek shared resource ko simultaneously access karte hain aur unka execution order unpredictable hota hai — jis wajah se result expected se alag aata hai. Node.js single-threaded hai isliye traditional race conditions kam hoti hain, lekin async operations ke beech ho sakti hain — jaise pehle file exist check karo, phir write karo, lekin beech mein koi aur process us file ko delete kar de. Isko avoid karne ke liye atomic operations use karo, yaani check aur action ko alag mat karo — directly operation karo aur error handle karo. Database level pe race conditions se bachne ke liye transactions aur optimistic locking use karte hain.

Q17

Parallel vs sequential execution ka difference?

Sequential execution mein async operations ek ke baad ek chalti hain — pehli complete hone ka wait karo, phir doosri shuru karo, total time sabka sum hota hai. Parallel execution mein saari operations ek saath start ho jaati hain aur Promise.all se wait karte hain — total time sabse slow operation jitna hota hai. Example: teen DB queries jo 1 second leti hain — sequential mein 3 second lagenge, parallel mein sirf 1 second. Practically, independent operations hamesha parallel chalao Promise.all se, aur sequential tab use karo jab ek operation ka output doosre ka input ho — jaise pehle user fetch karo phir us user ke orders fetch karo.
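Timing difference ko measure karke dekh sakte ho (DB queries ki jagah ek delay helper use kiya hai):

```javascript
// Har "query" ko 100ms ke delay se simulate karte hain
const delay = (ms, val) => new Promise((res) => setTimeout(() => res(val), ms));

let seqMs;
let parMs;

(async () => {
  // Sequential: har operation pichli ke complete hone ka wait karti hai (~300ms)
  let t = Date.now();
  await delay(100, 'user');
  await delay(100, 'orders');
  await delay(100, 'cart');
  seqMs = Date.now() - t;

  // Parallel: teeno saath start, total time sabse slow jitna (~100ms)
  t = Date.now();
  await Promise.all([delay(100, 'user'), delay(100, 'orders'), delay(100, 'cart')]);
  parMs = Date.now() - t;

  console.log(`sequential: ${seqMs}ms, parallel: ${parMs}ms`);
})();
```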

Q18

Retry mechanism kaise implement karoge API call me?

Retry mechanism mein failed API call ko exponential backoff ke saath dobara try karte hain — har retry mein wait time double hota hai taaki server pe load na aaye. Implementation mein ek recursive ya loop-based function banate hain jo maximum retry count tak chalta hai, aur sirf retryable errors pe retry karte hain jaise 429 ya 503 — 404 ya 400 pe retry karna pointless hai. Real production code mein axios-retry ya p-retry jaisi libraries use karte hain. Jitter bhi add karte hain — yaani wait time mein thoda random delay — taaki multiple clients ek saath retry karke server ko dobara overwhelm na kar dein.
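Ek generic retry helper ka sketch, backoff aur jitter ke saath (withRetry yahan hamara apna hypothetical helper hai, koi library function nahi):

```javascript
// Exponential backoff + jitter ke saath generic retry sketch
// (production mein axios-retry / p-retry jaisi library prefer karo)
const sleep = (ms) => new Promise((res) => setTimeout(res, ms));

async function withRetry(fn, { retries = 3, baseMs = 50 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Real code mein yahan status code check hoga (429/503 = retry, 400/404 = nahi)
      if (attempt === retries) throw err;
      const backoff = baseMs * 2 ** attempt; // 50, 100, 200...
      const jitter = Math.random() * baseMs; // thundering herd se bachne ke liye
      await sleep(backoff + jitter);
    }
  }
}

// Usage: hypothetical flaky call jo teesri attempt pe succeed hoti hai
let calls = 0;
let outcome;
withRetry(async () => {
  calls += 1;
  if (calls < 3) throw new Error('503 Service Unavailable');
  return 'success';
}).then((result) => {
  outcome = result;
  console.log(`${outcome} after ${calls} attempts`);
});
```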

Q19

Agar API slow ho rahi hai to kaise debug karoge?

Agar API slow ho rahi hai to main step-by-step debug karta hoon:

  • Logs check karta hoon (response time, errors)
  • Slow queries identify karta hoon (DB profiling, EXPLAIN)
  • Middleware/logic bottleneck check karta hoon
  • External API calls ka time check karta hoon
  • Load testing / monitoring tools (PM2, New Relic) use karta hoon
Q20

High traffic handle kaise karoge?

High traffic handle karne ke liye main:

  • Load balancing use karta hoon (Nginx / AWS ELB)
  • Clustering (PM2) se multiple instances run karta hoon
  • Caching (Redis) use karta hoon taaki DB load kam ho
  • Database optimize karta hoon (indexes, pooling)
  • Async/background jobs (queues like RabbitMQ) use karta hoon
  • Rate limiting apply karta hoon

Short Answer:
High traffic handle karne ke liye load balancing, clustering, caching aur async processing use karke system ko scalable banaya jata hai.

Q21

Agar production me server crash ho raha hai to aap root cause kaise identify karoge?

Sabse pehle main logs check karta hoon — PM2 logs, application logs aur system logs (jaise /var/log) — taaki exact error message aur stack trace mil sake.

Uske baad main crash pattern analyze karta hoon — kya ye specific API hit par ho raha hai, ya high traffic par. Agar memory related issue lagta hai to main memory usage aur CPU spikes monitor karta hoon (top, htop, PM2 metrics) taaki memory leak ya infinite loop identify ho sake.

Phir main recent deployments ya code changes review karta hoon, kyunki zyadatar crashes kisi recent change ki wajah se hote hain. Saath hi main unhandled exceptions aur promise rejections check karta hoon, kyunki Node.js me agar ye handle na ho to process crash ho sakta hai.

Agar DB ya external service involved hai to main slow queries ya timeout issues bhi check karta hoon. Zarurat pade to main replicate karne ki koshish karta hoon staging ya local me, taaki exact scenario samajh aaye.

End me, fix apply karne ke baad main monitoring tools (PM2, New Relic, CloudWatch) lagata hoon taaki future me issue proactively detect ho sake.

Q22

Production me logging aur monitoring ka setup kaise karte ho?

Production me main logging ke liye structured logging use karta hoon (jaise Winston ya Pino), jisme logs ko proper format (JSON) me store karte hain aur alag-alag levels (info, error, debug) maintain karte hain.

Logs ko centralize karne ke liye unhe ELK stack (Elasticsearch, Logstash, Kibana) ya cloud services (AWS CloudWatch) me bhejte hain, jisse easily search aur analyze ho sake.

Monitoring ke liye main PM2, New Relic ya Datadog use karta hoon jisse CPU, memory, response time aur error rate track hota hai, aur alerts setup karte hain taaki issue aate hi notify ho jaye.

Q23

Database connection pooling ko kaise manage karte ho Node/Express app me?

Connection pooling me hum multiple reusable DB connections ka pool create karte hain taaki har request par naya connection create na karna pade, isse performance improve hoti hai.

Express me main ORM ya drivers (jaise Sequelize / MySQL driver) ke through pool size, max/min connections aur timeout configure karta hoon. Har request pool se connection leti hai aur kaam ke baad release kar deti hai.

Saath hi main ensure karta hoon ki connections properly close/release ho, warna connection leak ho sakta hai aur app slow ya crash ho sakta hai.
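Pool ka core idea ek chhote, hypothetical SimplePool class se samajh sakte ho. Real apps mein yeh kaam mysql2 / pg / Sequelize ka built-in pool karta hai; yahan "connection" bas ek plain object hai:

```javascript
// SimplePool: sirf concept dikhane ke liye, production pool nahi
class SimplePool {
  constructor(createConn, max = 5) {
    this.createConn = createConn; // naya "connection" banane ka factory
    this.max = max;               // pool size limit
    this.idle = [];               // free connections
    this.inUse = 0;
  }

  acquire() {
    if (this.idle.length > 0) {
      this.inUse += 1;
      return this.idle.pop();     // reuse: naya connection create nahi hua
    }
    if (this.inUse < this.max) {
      this.inUse += 1;
      return this.createConn();
    }
    // Real pools yahan request ko queue karke free connection ka wait karate hain
    throw new Error('pool exhausted');
  }

  release(conn) {
    this.inUse -= 1;
    this.idle.push(conn);         // close nahi karte, wapas pool mein
  }
}

// Usage: acquire -> kaam -> release, warna connection leak
let connectionsCreated = 0;
const pool = new SimplePool(() => ({ id: ++connectionsCreated }), 2);

const a = pool.acquire();
pool.release(a);
const b = pool.acquire();         // same object reuse hua

console.log(a === b, 'created:', connectionsCreated); // true created: 1
```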

Q24

API me timeout aur retry mechanism kaise implement karte ho?

Timeout implement karne ke liye main HTTP client (jaise Axios) me timeout set karta hoon, taki agar API fixed time me response na de to request fail ho jaye.

Retry ke liye main retry logic lagata hoon — ya to manually (loop/recursive) ya libraries (axios-retry) se — jisme limited attempts aur delay (exponential backoff) set karta hoon, taki system overload na ho.

Production me main ensure karta hoon ki retry sirf safe operations (GET) par ho aur proper logging ho.

Short Answer:
Timeout ke liye request time limit set karte hain aur retry ke liye limited attempts with delay (exponential backoff) use karte hain.

Q25

Express app me large payloads (big data / file uploads) ko kaise handle karte ho?

Large payload handle karne ke liye main streaming approach use karta hoon, taki data chunk-wise process ho aur memory overload na ho.

File uploads ke case me main Multer ya direct streams (S3 upload) use karta hoon, aur unnecessary large body ko block karne ke liye request size limit set karta hoon (limit in body parser).

Saath hi main compression enable karta hoon aur agar possible ho to large data ko pagination ya chunking me break kar deta hoon.

Q26

Express me synchronous aur asynchronous middleware me kya difference hai? Real-world example ke sath explain karo.

Express me middleware do type ke hote hain — synchronous aur asynchronous — aur inka main difference execution aur handling ka hota hai.

Synchronous middleware:
Ye immediately execute hota hai aur blocking nature ka hota hai. Isme koi async operation nahi hota, aur ye direct next() call karke aage badh jata hai.
Example: logging middleware

app.use((req, res, next) => {
  console.log(req.url);
  next();
});

Asynchronous middleware:
Isme async operations hote hain jaise DB call, API call, file read, etc. Ye non-blocking hota hai aur jab async task complete hota hai tab next() call hota hai.

Example: user authentication (DB se check)

app.use(async (req, res, next) => {
  const user = await User.findById(req.headers.id);
  if (!user) return res.status(401).send("Unauthorized");
  next();
});

Real-world example:
Maan lo ek e-commerce app hai:

  • Synchronous middleware: har request ka log print karna
  • Asynchronous middleware: user authentication ya order fetch karna DB se

Key difference:
Synchronous middleware turant execute hota hai, jabki asynchronous middleware me async tasks complete hone ke baad execution aage badhta hai.

 

Q27

Node.js/Express app me memory leak kaise detect aur analyze karte ho? Real-world example ke sath explain karo.

Memory leak tab hota hai jab app memory allocate karta rehta hai lekin properly release nahi karta, jisse time ke saath memory continuously badhti rehti hai aur eventually app crash ho sakta hai.

Detect karne ke liye sabse pehle main memory usage monitor karta hoon — jaise process.memoryUsage(), PM2 metrics, ya tools like New Relic. Agar memory continuously increase ho rahi hai bina drop hue, to ye leak ka sign hai.

Uske baad main heap snapshots aur profiling tools use karta hoon (Chrome DevTools / Node inspector) taaki pata chale kaunse objects memory me stuck hain. Isse exact source identify karna easy ho jata hai.

Phir main code review karta hoon aur common issues check karta hoon:

  • Global variables ya caches jo clear nahi ho rahe
  • Event listeners jo remove nahi ho rahe
  • Unclosed DB connections ya file handles
  • Large objects jo memory me hold ho rahe hain

Real-world Example:
Ek project me maine dekha ki API hit hone par memory dheere-dheere badh rahi thi. Investigation me pata chala ki humne ek in-memory cache banaya tha (object me data store kar rahe the) lekin uska cleanup ya TTL nahi tha. Har request me data add ho raha tha aur kabhi remove nahi ho raha tha.
Fix me maine Redis with TTL use kiya aur unnecessary data cleanup implement kiya, jisse memory stable ho gayi.
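process.memoryUsage() se heap growth kaise dikhti hai, uska ek chhota sketch. leakyCache yahan wahi hypothetical "bina cleanup waala cache" pattern hai:

```javascript
// process.memoryUsage() se heap growth observe karne ka simple demo
const leakyCache = []; // hypothetical in-memory cache, bina TTL/cleanup ke

const before = process.memoryUsage().heapUsed;

for (let i = 0; i < 100000; i++) {
  // Har "request" pe data add ho raha hai, kabhi remove nahi
  leakyCache.push({ id: i, payload: 'x'.repeat(100) });
}

const after = process.memoryUsage().heapUsed;
const growthMB = (after - before) / 1024 / 1024;

console.log(`Heap growth: ${growthMB.toFixed(1)} MB`);
// Production mein: agar heapUsed time ke saath continuously badhe
// aur GC ke baad bhi drop na ho, toh yeh leak ka strong signal hai
```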
