| Install | `composer require modelslab/octane-coroutine` |
|---|---|
| Latest Version | v0.8.4 |
| PHP | ^8.1.0 |
⚡ High-performance Laravel with true coroutine support for massive concurrency
Requires the latest Swoole with coroutine hooks enabled. Older versions are not supported.
This is an enhanced fork of Laravel Octane that adds true Swoole coroutine support, enabling your Laravel application to handle thousands of concurrent requests efficiently through non-blocking I/O.
Standard Octane uses a "One Worker = One Request" model. When a request performs blocking I/O (database queries, API calls, file operations), the entire worker is blocked:
8 workers × 1 request per worker = 8 concurrent requests max
With 1-second blocking operations, this means only ~8 requests/second throughput.
This fork enables Swoole's coroutine runtime hooks (SWOOLE_HOOK_ALL), which automatically converts PHP's blocking functions into non-blocking, coroutine-safe versions:
32 workers × ~87 concurrent requests per worker = 2,784+ concurrent requests
With the same 1-second blocking operations, this achieves 2,773+ requests/second — a 360× improvement!
- `sleep()` → non-blocking coroutine sleep
- `file_get_contents()` → non-blocking file I/O
- `curl_exec()` → non-blocking HTTP requests

Install via Composer from Packagist:
composer require modelslab/octane-coroutine
Then install Octane with Swoole:
php artisan octane:install swoole
# Install latest stable
composer require modelslab/octane-coroutine:^0.7
# Install development version
composer require modelslab/octane-coroutine:dev-main
# Update to the latest version
composer update modelslab/octane-coroutine
# Clear caches after updating
php artisan config:clear
php artisan cache:clear
php artisan octane:reload
Tip: Pin your production deployments to a known version constraint:
{
"require": {
"modelslab/octane-coroutine": "^0.7.7"
}
}
The package works out-of-the-box with sensible defaults. Coroutines are enabled by default with runtime hooks.
Start with appropriate worker count:
# Development (auto-detect CPU cores)
php artisan octane:start --server=swoole
# Production (explicit worker count)
php artisan octane:start --server=swoole --workers=32
Edit config/octane.php if needed:
'swoole' => [
'options' => [
'enable_coroutine' => true, // Already enabled by default
'worker_num' => 32,
'max_request' => 500,
],
],
Coroutine mode relies on coroutine-safe IO drivers and connection handling. Recommended defaults:
# Redis
REDIS_CLIENT=phpredis
REDIS_PERSISTENT=true
# Database (disable PDO persistent connections in coroutine mode)
DB_PERSISTENT=false
# Pool sizing (keep min low to avoid exhausting MySQL max_connections)
DB_POOL_MIN=1
DB_POOL_MAX=50
Notes:
- phpredis persistent connections use a `persistent_id` per pool worker to avoid socket sharing across coroutines.
- To use Predis instead: `composer require predis/predis`, then set `REDIS_CLIENT=predis` and `REDIS_PERSISTENT=false`.

This section clarifies the key concepts that make this fork different from standard Octane.
Workers are OS-level processes spawned by Swoole; the worker count is set via `--workers=N` or `worker_num` in config.

Standard Octane: 1 Worker = 1 Request at a time (blocking)
The Pool is a collection of pre-initialized Laravel Application instances within each worker. This fork introduces pooling to solve state isolation:
This Fork: 1 Worker = 1 Pool of N Application instances
When a coroutine needs to handle a request, it borrows an Application from the pool, uses it, then returns it. This ensures each concurrent request works with its own isolated Application instance, so state never leaks between requests running in the same worker.
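The borrow/use/return cycle can be sketched with a coroutine-safe channel acting as the pool. This is an illustrative sketch only; `makeApp()` and the variable names are hypothetical, not this fork's actual API, and it assumes the Swoole extension is loaded.

```php
<?php
// Illustrative sketch: a pool backed by a Swoole coroutine channel.
// makeApp() is a hypothetical stand-in for bootstrapping a Laravel
// Application instance; this is not the fork's real implementation.
use Swoole\Coroutine\Channel;

$size = 10;
$pool = new Channel($size);
for ($i = 0; $i < $size; $i++) {
    $pool->push(makeApp()); // pre-initialize the pool on worker start
}

// Inside a request coroutine:
$app = $pool->pop(30.0);   // borrow; suspends only this coroutine, up to 30s
if ($app === false) {
    // pool exhausted within the wait timeout: return an overload
    // response (e.g. HTTP 503) instead of blocking the worker
}
try {
    // handle the request with $app
} finally {
    $pool->push($app);     // always return the instance to the pool
}
```

`Channel::pop()` returns `false` on timeout, which maps naturally onto the overload behavior described later in this README.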
Coroutines are lightweight, cooperative "threads" managed by Swoole at the application level (not OS-level). When a coroutine encounters blocking I/O, it yields control to other coroutines instead of blocking the entire worker.
Traditional: Worker blocks → other requests wait
Coroutines: Worker yields → other requests continue
┌─────────────────────────────────────────────────────────────┐
│ SWOOLE SERVER │
├─────────────────────────────────────────────────────────────┤
│ Worker 0 Worker 1 │
│ ┌─────────────────────┐ ┌─────────────────────────┐ │
│ │ Pool (10 Apps) │ │ Pool (10 Apps) │ │
│ │ ┌───┐┌───┐┌───┐ │ │ ┌───┐┌───┐┌───┐ │ │
│ │ │App││App││App│ ... │ │ │App││App││App│ ... │ │
│ │ └───┘└───┘└───┘ │ │ └───┘└───┘└───┘ │ │
│ │ │ │ │ │
│ │ Coroutines: │ │ Coroutines: │ │
│ │ cid:1 → App[0] │ │ cid:1 → App[0] │ │
│ │ cid:2 → App[1] │ │ cid:2 → App[1] │ │
│ │ cid:3 → App[2] │ │ cid:3 → App[2] │ │
│ │ ... │ │ ... │ │
│ └─────────────────────┘ └─────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Coroutines and the pool are not the same thing! They solve different problems:
| Concept | What It Does | Solves |
|---|---|---|
| Coroutines | Non-blocking I/O, concurrent execution | Performance (throughput) |
| Pool | Pre-initialized Application instances | State isolation (correctness) |
Unit tests require a PHP build with the Swoole extension installed.
php83 vendor/bin/phpunit --testsuite Unit
This fork adds a new pool configuration section to config/octane.php:
'swoole' => [
'options' => [
'worker_num' => 8, // OS processes (CLI: --workers=8)
],
// NEW: Application pool per worker
'pool' => [
'size' => 100, // Applications per worker
'min_size' => 1, // Minimum pool size
'max_size' => 1000, // Maximum pool size
'idle_timeout' => 10, // Seconds before trimming idle apps from the pool
],
],
Note: Standard Octane only has worker_num. The pool configuration is unique to this fork.
You can also configure the pool via .env:
OCTANE_POOL_SIZE=50
OCTANE_POOL_MIN_SIZE=1
OCTANE_POOL_MAX_SIZE=1000
OCTANE_POOL_IDLE_TIMEOUT=10
OCTANE_POOL_WAIT_TIMEOUT=30
OCTANE_POOL_REJECT_ON_FULL=false
OCTANE_POOL_OVERLOAD_STATUS=503
OCTANE_POOL_OVERLOAD_RETRY_AFTER=5
OCTANE_POOL_DB_MAX_CONNECTIONS_BUFFER=10
Notes:
- `OCTANE_POOL_WAIT_TIMEOUT` controls how long a request can wait for a pooled app before an overload response is returned.
- `OCTANE_POOL_REJECT_ON_FULL=true` disables queuing and immediately returns `OCTANE_POOL_OVERLOAD_STATUS`.

Following Hyperf/Swoole best practices, this fork disables tick timers by default to prevent unnecessary CPU usage.
Octane can dispatch "tick" events to task workers every second. However, most applications never listen for them, so this fork disables them by default ('tick' => false in config/octane.php).

In earlier configurations, tick timers with --task-workers=auto would create one task worker per CPU core (e.g., 12 workers on a 12-core system). Even with no traffic:
12 task workers × tick every 1 second = constant CPU overhead
This causes high CPU usage even when the server is idle!
Only enable tick if you have listeners for TickReceived or TickTerminated events that need to run periodically:
// config/octane.php
'swoole' => [
'tick' => true, // Enable tick timers
],
Then start with minimal task workers (not auto):
# Good: Only 1-2 task workers for tick
php artisan octane:start --task-workers=1
# Bad: Creates CPU_COUNT task workers (excessive overhead)
php artisan octane:start --task-workers=auto
| Scenario | Recommended --task-workers |
|---|---|
| Tick disabled (default) | 0 |
| Tick enabled | 1 or 2 |
| Heavy async task dispatch | 2 to 4 |
| Never use | auto (causes CPU overhead) |
Real-world load testing results with wrk:
wrk -t12 -c2000 -d30s http://localhost:8000/test
wrk -t12 -c20000 -d60s http://localhost:8000/test
| Configuration | Req/sec per worker | Concurrent requests per worker |
|---|---|---|
| Standard Octane | ~1 | 1 |
| With Coroutines | ~87 | ~87 |
Each worker can efficiently handle ~87 concurrent requests thanks to coroutines!
Runtime coroutine hooks are enabled automatically on worker start:
// src/Swoole/Handlers/OnWorkerStart.php
\Swoole\Runtime::enableCoroutine(SWOOLE_HOOK_ALL);
This converts all blocking I/O to coroutine-safe operations without any code changes required.
Workers log their initialization for monitoring:
🚀 Worker #0 starting initialization...
✅ Worker #0 (PID: 4958) initialized and ready!
If a worker isn't ready, requests receive 503 responses until initialization completes:
{
"error": "Service Unavailable",
"message": "Worker not initialized yet",
"worker_id": 5
}
Check worker initialization in your logs:
tail -f storage/logs/swoole_http.log | grep "Worker"
Monitor your application:
Memory needed ≈ workers × 100-200MB per worker
Example: 32 workers = 3.2-6.4GB RAM
For high concurrency (10,000+ connections):
# Increase file descriptor limits
ulimit -n 65536
# Add to /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
For extreme load:
// config/octane.php
'swoole' => [
'options' => [
'worker_num' => 64,
'backlog' => 65536,
'socket_buffer_size' => 2097152,
],
],
Enable debug logging to track worker behavior:
// Check worker initialization
tail -f storage/logs/swoole_http.log
// Monitor in real-time
php artisan octane:start --server=swoole --workers=32 | grep "Worker"
Also confirm that your database's max_connections can handle your concurrency.

This section provides specific, tested recommendations for achieving 10,000 requests/second on an 8-core CPU.
Total Concurrent Capacity = Workers × Pool Size × Coroutine Efficiency
For 10K req/sec with 100ms average response time:
- Concurrent requests needed: 10,000 × 0.1 = 1,000 concurrent
- With 8 workers, each needs: 1,000 ÷ 8 = 125 concurrent per worker
- Pool size recommendation: 150-200 (with buffer)
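The arithmetic above is Little's Law (required concurrency = throughput × latency). It can be wrapped in a small helper; `poolSizePerWorker()` is a hypothetical function for illustration, not part of this package.

```php
<?php
// Little's Law: required concurrency = target req/sec × avg latency (seconds).
// poolSizePerWorker() is a hypothetical helper, not part of this fork.
function poolSizePerWorker(int $targetRps, float $avgLatencySec, int $workers, float $headroom = 1.5): int
{
    $concurrent = $targetRps * $avgLatencySec;             // 10,000 × 0.1 = 1,000
    return (int) ceil($concurrent / $workers * $headroom); // 125 × 1.5 = 188
}

echo poolSizePerWorker(10000, 0.1, 8); // 188, inside the 150-200 recommendation
```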
// config/octane.php
'swoole' => [
'options' => [
'worker_num' => 8, // Match CPU cores
'max_request' => 10000, // Restart worker after N requests (memory safety)
'max_request_grace' => 1000, // Grace period for graceful restart
'backlog' => 8192, // Connection queue size
'socket_buffer_size' => 2097152, // 2MB socket buffer
'buffer_output_size' => 2097152, // 2MB output buffer
],
'pool' => [
'size' => 200, // 200 apps per worker = 1,600 total capacity
'min_size' => 10,
'max_size' => 500,
'idle_timeout' => 10,
],
],
php artisan octane:start \
--server=swoole \
--workers=8 \
--task-workers=0 \
--max-requests=10000 \
--port=8000
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 8 cores | 8+ cores |
| RAM | 8GB | 16GB |
| File Descriptors | 65536 | 100000+ |
| Network | 1Gbps | 10Gbps |
Memory per Worker ≈ Base (50MB) + (Pool Size × App Memory)
Memory per App ≈ 10-30MB (depends on your application)
Example with pool size 200:
- Per worker: 50MB + (200 × 15MB) = ~3GB
- 8 workers: 8 × 3GB = ~24GB peak
Note: This is peak memory. Actual usage is lower as apps share memory.
Realistic: 8-12GB for 8 workers with pool size 200
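The peak estimate above as a hypothetical helper (sizes in MB; `peakMemoryGb()` is for illustration only):

```php
<?php
// Hypothetical helper mirroring the estimate above:
// peak ≈ workers × (base + pool_size × per_app_memory), in MB.
function peakMemoryGb(int $workers, int $poolSize, int $baseMb = 50, int $appMb = 15): float
{
    return $workers * ($baseMb + $poolSize * $appMb) / 1024;
}

printf("%.1f GB peak\n", peakMemoryGb(8, 200)); // 8 × 3050 MB ≈ 23.8 GB peak
```

Treat the result as an upper bound; copy-on-write memory sharing keeps real usage well below it, as noted above.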
Critical: With 8 workers × 200 pool size, you could have up to 1,600 concurrent database connections!
// config/database.php
'mysql' => [
'driver' => 'mysql',
// ... other config
'pool' => [
'min_connections' => 1,
'max_connections' => 50, // 50 per worker × 8 workers = 400 max connections
'connect_timeout' => 10.0,
'wait_timeout' => 3.0,
],
],
Or configure MySQL server:
SET GLOBAL max_connections = 500;
SET GLOBAL wait_timeout = 28800;
# /etc/sysctl.conf
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Apply changes
sysctl -p
# /etc/security/limits.conf
* soft nofile 100000
* hard nofile 100000
* soft nproc 65535
* hard nproc 65535
# Apply (requires re-login)
ulimit -n 100000
With the above configuration on 8-core CPU:
| Scenario | Expected req/sec |
|---|---|
| Simple JSON response | 15,000-20,000 |
| Database SELECT (cached) | 8,000-12,000 |
| Database SELECT (no cache) | 3,000-6,000 |
| External API call (100ms) | 8,000-10,000 |
| Complex business logic | 5,000-8,000 |
| Strategy | Config | Capacity | Memory | Best For |
|---|---|---|---|---|
| More Workers | 16 workers × 50 pool | 800 concurrent | ~8GB | CPU-bound work |
| Larger Pool | 8 workers × 200 pool | 1,600 concurrent | ~10GB | I/O-bound work |
| Balanced | 12 workers × 100 pool | 1,200 concurrent | ~9GB | Mixed workloads |
Rule of Thumb: favor more workers for CPU-bound workloads and a larger pool for I/O-bound workloads; start with a balanced split for mixed traffic and tune from real metrics.
Contributions are welcome! Please read the contribution guide.
Please review our security policy to report vulnerabilities.
This fork maintains the original MIT license. See LICENSE.md.
Built with ❤️ by ModelsLab
Original Laravel Octane by Taylor Otwell and the Laravel team