When multiple processes or threads hit a cache miss at the same time and then try to write to the cache, they may all execute the same slow query or expensive calculation simultaneously, wasting resources. This situation is called a cache race condition.
For example:
if (!apcu_exists('my_cache_key')) {
    $data = get_data_from_db(); // Complex query
    apcu_store('my_cache_key', $data);
}
echo apcu_fetch('my_cache_key');
In a high-concurrency environment, many requests can find the cache empty at the same moment and all hit the database at once, causing a performance problem.
apcu_cas stands for "Compare And Swap". It is an atomic operation that compares the cached value with an expected value and, only if they are equal, replaces it with a new value. This makes it possible to avoid the race that occurs when multiple requests try to modify the same cache entry at the same time.
Function prototype:
bool apcu_cas(string $key, int $old, int $new)
$key : the cache key
$old : the expected current (old) value; must be an integer
$new : the new value to store if the comparison succeeds; must be an integer
Returns true if the swap succeeded; returns false if the stored value did not match $old, in which case nothing is changed. Note that apcu_cas only works with integer values, which is why the lock flag in the example below uses 0 (unlocked) and 1 (locked) rather than booleans.
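As a quick illustration of these semantics, here is a minimal sketch (assuming the APCu extension is enabled; apc.enable_cli=1 is needed when testing from the command line):

apcu_store('counter', 5);            // the stored value is the integer 5
var_dump(apcu_cas('counter', 5, 6)); // bool(true): 5 matched, the value is now 6
var_dump(apcu_cas('counter', 5, 7)); // bool(false): the value is 6, not 5, nothing changes
var_dump(apcu_fetch('counter'));     // int(6)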
We can implement mutually exclusive access to the cache with a "lock" flag. The idea:
Read the cache and return the value directly if it exists.
If the cache does not exist, try to set a "lock" flag to indicate that the cache is being generated.
If setting the lock fails, another process is already generating the cache, so wait and try again.
If setting the lock succeeds, execute the slow query to generate the data.
Write the data to the cache and release the "lock" flag.
Return the data.
function getCacheData() {
    $cacheKey = 'my_cache_key';
    $lockKey  = 'my_cache_key_lock';

    // 1. Try to read the cache first
    $data = apcu_fetch($cacheKey, $success);
    if ($success) {
        return $data;
    }

    // 2. Try to acquire the lock with apcu_cas so that only one request generates the cache.
    //    apcu_cas only works with integers, so 0 means unlocked and 1 means locked.
    apcu_add($lockKey, 0); // create the flag in the unlocked state if it does not exist yet

    // Expect the lock to be 0 (unlocked) and try to switch it to 1 (locked)
    if (!apcu_cas($lockKey, 0, 1)) {
        // Another request already holds the lock and is generating the cache,
        // so wait briefly and try again
        usleep(100000); // wait 100 milliseconds
        return getCacheData(); // retry recursively
    }

    // 3. Lock acquired: execute the slow query
    $data = get_data_from_db();

    // 4. Write the result to the cache
    apcu_store($cacheKey, $data);

    // 5. Release the lock (set it back to 0)
    apcu_store($lockKey, 0);

    return $data;
}
function get_data_from_db() {
    // Simulate a slow query
    sleep(1);
    return ['time' => time(), 'data' => 'sample'];
}
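A minimal usage sketch (assuming APCu is enabled; the key names are the ones used above):

// Every caller goes through getCacheData(), so at most one of them
// pays the cost of the slow query; the rest wait for the cached result.
$result = getCacheData();
echo 'Generated at: ' . $result['time'] . PHP_EOL;
echo 'Payload: ' . $result['data'] . PHP_EOL;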
In the code above, apcu_cas switches the lock flag atomically, which guarantees that only one request executes the slow query at a time and avoids the cache race.
Cache race conditions are a common problem in high-concurrency caching scenarios.
apcu_cas is an atomic compare-and-swap primitive that makes it easy to build an efficient locking mechanism.
With the lock flag, only one request performs the slow query and writes the cache, while the other requests wait and retry.
This approach is suited to APCu in a single-server environment; distributed setups call for a more elaborate locking solution.
Mastering apcu_cas makes a PHP caching layer more robust and efficient and helps avoid the performance bottlenecks caused by cache breakdown.