curl is one of the most common tools for making network requests in PHP, especially in large-scale concurrency scenarios: batch-pulling data from APIs, pushing notifications, probing multiple endpoints, and so on. In these situations one question keeps coming up: does calling curl_close() actually free memory and make high-volume requests more efficient?
It sounds like a small detail, but under high concurrency it can decide whether your program runs stably at all.
curl_close($ch) closes a handle created by curl_init() and releases the resources associated with it. Tutorials and documentation routinely stress "remember to close the handle when you're done", which sounds like advice beyond question. (Worth noting: since PHP 8.0, curl_init() returns a CurlHandle object instead of a resource, and curl_close() is effectively a no-op; the handle is freed as soon as nothing references it. The points below still apply whenever handles are kept alive longer than necessary.)
But under large-scale concurrent requests, does diligently closing every handle really translate into better efficiency?
Many developers assume that as soon as curl_close() is called, the memory is handed back to the operating system. That is not the case. PHP's memory manager, especially in long-running scripts or resident-memory environments such as Swoole or FPM workers, does not return freed memory to the OS right away; it marks the blocks as available and reuses them for subsequent allocations.
This means that if your script issues hundreds or thousands of curl requests, the process's memory usage can keep growing even if you call curl_close() every time, particularly when the responses are large.
A simple comparison test illustrates the point:
// Scene 1: curl_close() is never called
for ($i = 0; $i < 1000; $i++) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "https://api.gitbox.net/test-endpoint?id=$i");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    // curl_close() deliberately omitted
}

// Scene 2: curl_close() is called after every request
for ($i = 0; $i < 1000; $i++) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, "https://api.gitbox.net/test-endpoint?id=$i");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);
}
Comparing memory_get_usage() readings for the two versions shows that, in a single short-lived script, the difference in memory usage is negligible.
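If you want to reproduce the comparison yourself, a minimal measurement harness might look like the sketch below. It reuses the placeholder endpoint from above; the helper name runScene() is ours, purely for illustration.

// Hypothetical helper: runs one of the two scenes above and reports memory.
function runScene(bool $close, int $requests = 1000): void
{
    $start = memory_get_usage();
    for ($i = 0; $i < $requests; $i++) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "https://api.gitbox.net/test-endpoint?id=$i");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        if ($close) {
            curl_close($ch);
        }
    }
    printf(
        "close=%s  delta=%d bytes  peak=%d bytes\n",
        $close ? 'yes' : 'no',
        memory_get_usage() - $start,
        memory_get_peak_usage()
    );
}

runScene(false); // Scene 1: never closed
runScene(true);  // Scene 2: closed every time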
However, in a long-running service (an FPM worker handling many requests, or a Swoole coroutine server), not calling curl_close() lets unreleased handles accumulate, and the memory backlog eventually triggers an OOM kill or a performance bottleneck.
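In such resident-memory services, one common pattern is to keep a single handle alive and reset it between requests, instead of paying the init/close cost on every call; curl_reset() restores the handle's options to their defaults while leaving connection caches intact. A minimal sketch, assuming PHP 8.0+ (the class name and endpoint are illustrative, not from the original article):

// Sketch: reuse one handle across many requests in a long-lived worker.
class ReusableCurl
{
    private \CurlHandle $ch;

    public function __construct()
    {
        $this->ch = curl_init();
    }

    public function get(string $url): string|false
    {
        curl_reset($this->ch); // clear options left over from the previous request
        curl_setopt($this->ch, CURLOPT_URL, $url);
        curl_setopt($this->ch, CURLOPT_RETURNTRANSFER, true);
        return curl_exec($this->ch);
    }

    public function __destruct()
    {
        curl_close($this->ch); // a no-op on PHP 8, but harmless and explicit
    }
}

$client = new ReusableCurl();
for ($i = 0; $i < 1000; $i++) {
    $client->get("https://api.gitbox.net/test-endpoint?id=$i");
}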
When the number of concurrent requests is large, curl_multi is the right tool. It lets you register many handles at once, drive them concurrently through a polling loop, and keep explicit control over when each handle is released.
$multiHandle = curl_multi_init();
$curlHandles = [];

for ($i = 0; $i < 100; $i++) {
    $ch = curl_init("https://api.gitbox.net/batch?id=$i");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($multiHandle, $ch);
    $curlHandles[$i] = $ch;
}

// Drive all transfers; block in curl_multi_select() while requests are
// in flight instead of busy-looping.
do {
    $status = curl_multi_exec($multiHandle, $active);
    if ($active) {
        curl_multi_select($multiHandle);
    }
} while ($active && $status === CURLM_OK);

// Collect the results, then detach and close every handle.
foreach ($curlHandles as $ch) {
    $response = curl_multi_getcontent($ch);
    curl_multi_remove_handle($multiHandle, $ch);
    curl_close($ch);
}
curl_multi_close($multiHandle);
The conclusion: curl_close() by itself is not a meaningful memory optimization, least of all in short-lived scripts. But in high-concurrency, resident-memory, long-lifecycle services, skipping curl_close() is a genuine resource leak that will eventually drag down your service's stability.
So it is not a performance-optimization trick; it is a precondition for your program being able to run at all.
Normal batch scripts: plain curl plus curl_close() is recommended.
High-concurrency processing: use curl_multi, or an async/coroutine client such as Guzzle (see the sketch after this list).
Long-lived services: manage resource lifecycles deliberately, including curl_close().
PHP-FPM: watch per-request memory usage, and split or throttle large batches of requests.
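For the async-client option, here is a minimal sketch using Guzzle 7's request pool. It assumes guzzlehttp/guzzle is installed via Composer; the URL is the same placeholder used above, and the concurrency value is arbitrary.

use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;

require 'vendor/autoload.php';

$client = new Client();

// Lazily generate requests so the pool pulls them as slots free up.
$requests = function (int $total) {
    for ($i = 0; $i < $total; $i++) {
        yield new Request('GET', "https://api.gitbox.net/batch?id=$i");
    }
};

$pool = new Pool($client, $requests(100), [
    'concurrency' => 10, // at most 10 requests in flight at once
    'fulfilled' => function ($response, $index) {
        // handle $response->getBody() here
    },
    'rejected' => function ($reason, $index) {
        // handle the failure here
    },
]);

// Run the pool and wait for every request to finish.
$pool->promise()->wait();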
Stable performance is built on details like this. The next time you write concurrent requests, look back and check whether you forgot curl_close().